FreeBSD Bugzilla – Attachment 147883 Details for Bug 193706
[new port] devel/apache-spark: high performance distributed computing system
my version of spark port
spark.shar (text/plain), 21.89 KB, created by Dmitry Sivachenko on 2014-10-01 11:56:02 UTC
Flags: patch, obsolete
# This is a shell archive.  Save it in a file, remove anything before
# this line, and then unpack it by entering "sh file".  Note, it may
# create directories; files and directories will be owned by you and
# have default permissions.
#
# This archive contains:
#
#	spark
#	spark/Makefile
#	spark/distinfo
#	spark/pkg-descr
#	spark/pkg-plist
#	spark/files
#	spark/files/patch-core-pom.xml
#	spark/files/patch-assembly-pom.xml
#	spark/files/spark_worker.in
#	spark/files/patch-sbin-spark-daemon.sh
#	spark/files/spark_master.in
#
echo c - spark
mkdir -p spark > /dev/null 2>&1
echo x - spark/Makefile
sed 's/^X//' >spark/Makefile << 'a3d4a3aecd089b8d80746c432208ad00'
X# Created by: Dmitry Sivachenko <demon@FreeBSD.org>
X# $FreeBSD$
X
XPORTNAME=	spark
XPORTVERSION=	1.1.0
XCATEGORIES=	devel java
XMASTER_SITES=	${MASTER_SITE_APACHE} \
X		LOCAL/demon/:maven \
X		http://people.freebsd.org/~demon/:maven
XMASTER_SITE_SUBDIR=	${PORTNAME}/${PORTNAME}-${PORTVERSION}
XDISTFILES=	${PORTNAME}-${PORTVERSION}.tgz FreeBSD-${PORTNAME}-${PORTVERSION}-maven-repository.tar.gz:maven
XDIST_SUBDIR=	hadoop
X
XMAINTAINER=	demon@FreeBSD.org
XCOMMENT=	Fast big data processing engine
X
XLICENSE=	APACHE20
X
XBUILD_DEPENDS=	${LOCALBASE}/share/java/maven3/bin/mvn:${PORTSDIR}/devel/maven3
XRUN_DEPENDS=	bash:${PORTSDIR}/shells/bash \
X		${LOCALBASE}/share/hadoop/common/hadoop-common-2.4.1.jar:${PORTSDIR}/devel/hadoop2
X
XUSES=		python:2
XUSE_JAVA=	yes
XJAVA_VERSION=	1.7+
XMAKE_ENV+=	MAVEN_OPTS="-Xmx2g -XX:MaxPermSize=512M -XX:ReservedCodeCacheSize=512m"
X
XUSERS=		spark
XGROUPS=		spark
X
XUSE_RC_SUBR=	spark_master spark_worker
XPLIST_SUB+=	SPARK_USER=spark SPARK_GROUP=spark
XSUB_LIST+=	SPARK_USER=spark SPARK_GROUP=spark
X
Xdo-build:
X	cd ${WRKSRC} && ${SETENV} ${MAKE_ENV} ${LOCALBASE}/share/java/maven3/bin/mvn -Dmaven.repo.local=${WRKDIR}/m2 clean package -Dhadoop.version=2.4.1 -Pyarn -Phadoop-2.4 -DskipTests
X
Xpost-build:
X	${RM} ${WRKSRC}/bin/*.cmd ${WRKSRC}/sbin/spark-daemon.sh.orig
X
Xdo-install:
X	${MKDIR} ${STAGEDIR}${DATADIR}/lib ${STAGEDIR}${DATADIR}/examples ${STAGEDIR}${DATADIR}/bin ${STAGEDIR}${DATADIR}/sbin ${STAGEDIR}${DATADIR}/conf
X	${ECHO_CMD} "Spark ${PORTVERSION} built for Hadoop 2.4.1" > ${STAGEDIR}${DATADIR}/RELEASE
X	${INSTALL_DATA} ${WRKSRC}/assembly/target/scala*/*assembly*hadoop*.jar ${STAGEDIR}${DATADIR}/lib/
X	${INSTALL_DATA} ${WRKSRC}/examples/target/scala*/spark-examples*.jar ${STAGEDIR}${DATADIR}/lib/
X	cd ${WRKSRC}/examples && ${COPYTREE_SHARE} src ${STAGEDIR}${DATADIR}/examples/
X	cd ${WRKSRC}/bin && ${INSTALL_SCRIPT} * ${STAGEDIR}${DATADIR}/bin/
X	cd ${WRKSRC}/sbin && ${INSTALL_SCRIPT} * ${STAGEDIR}${DATADIR}/sbin/
X	cd ${WRKSRC} && ${COPYTREE_SHARE} python ${STAGEDIR}${DATADIR}/
X	${INSTALL_DATA} ${WRKSRC}/conf/*.template ${STAGEDIR}${DATADIR}/conf/
X	${MKDIR} ${STAGEDIR}/var/run/spark
X	${MKDIR} ${STAGEDIR}/var/log/spark
X
X.include <bsd.port.mk>
a3d4a3aecd089b8d80746c432208ad00
echo x - spark/distinfo
sed 's/^X//' >spark/distinfo << '586e348b9ae23d37d050410e0572ded3'
XSHA256 (hadoop/spark-1.1.0.tgz) = cd3b1405fdfd32e890f4ebc505c92f3b1fd1df2af97abf49f1876044f7642265
XSIZE (hadoop/spark-1.1.0.tgz) = 9556497
XSHA256 (hadoop/FreeBSD-spark-1.1.0-maven-repository.tar.gz) = 597064113ab260ddbe975cced5f0daecee19f553aa02cd6ebcc97321f2d5c45d
XSIZE (hadoop/FreeBSD-spark-1.1.0-maven-repository.tar.gz) = 211453094
586e348b9ae23d37d050410e0572ded3
echo x - spark/pkg-descr
sed 's/^X//' >spark/pkg-descr << 'e5e87b7be7dec59035073c07328664c0'
XApache Spark is a fast and general-purpose cluster computing system. It
Xprovides high-level APIs in Java, Scala and Python, and an optimized engine
Xthat supports general execution graphs. It also supports a rich set of
Xhigher-level tools including Spark SQL for SQL and structured data processing,
XMLlib for machine learning, GraphX for graph processing, and Spark Streaming.
X
XWWW: http://spark.apache.org/
e5e87b7be7dec59035073c07328664c0
echo x - spark/pkg-plist
sed 's/^X//' >spark/pkg-plist << 'ec8cb471685918b9ecc905200bf3f199'
X%%DATADIR%%/conf/fairscheduler.xml.template
X%%DATADIR%%/conf/log4j.properties.template
X%%DATADIR%%/conf/metrics.properties.template
X%%DATADIR%%/conf/spark-defaults.conf.template
X%%DATADIR%%/conf/spark-env.sh.template
X%%DATADIR%%/RELEASE
X%%DATADIR%%/bin/beeline
X%%DATADIR%%/bin/compute-classpath.sh
X%%DATADIR%%/bin/load-spark-env.sh
X%%DATADIR%%/bin/pyspark
X%%DATADIR%%/bin/run-example
X%%DATADIR%%/bin/spark-class
X%%DATADIR%%/bin/spark-shell
X%%DATADIR%%/bin/spark-sql
X%%DATADIR%%/bin/spark-submit
X%%DATADIR%%/bin/utils.sh
X%%DATADIR%%/examples/src/main/java/org/apache/spark/examples/JavaHdfsLR.java
X%%DATADIR%%/examples/src/main/java/org/apache/spark/examples/JavaLogQuery.java
X%%DATADIR%%/examples/src/main/java/org/apache/spark/examples/JavaPageRank.java
X%%DATADIR%%/examples/src/main/java/org/apache/spark/examples/JavaSparkPi.java
X%%DATADIR%%/examples/src/main/java/org/apache/spark/examples/JavaTC.java
X%%DATADIR%%/examples/src/main/java/org/apache/spark/examples/JavaWordCount.java
X%%DATADIR%%/examples/src/main/java/org/apache/spark/examples/mllib/JavaALS.java
X%%DATADIR%%/examples/src/main/java/org/apache/spark/examples/mllib/JavaDecisionTree.java
X%%DATADIR%%/examples/src/main/java/org/apache/spark/examples/mllib/JavaKMeans.java
X%%DATADIR%%/examples/src/main/java/org/apache/spark/examples/mllib/JavaLR.java
X%%DATADIR%%/examples/src/main/java/org/apache/spark/examples/sql/JavaSparkSQL.java
X%%DATADIR%%/examples/src/main/java/org/apache/spark/examples/streaming/JavaCustomReceiver.java
X%%DATADIR%%/examples/src/main/java/org/apache/spark/examples/streaming/JavaFlumeEventCount.java
X%%DATADIR%%/examples/src/main/java/org/apache/spark/examples/streaming/JavaKafkaWordCount.java
X%%DATADIR%%/examples/src/main/java/org/apache/spark/examples/streaming/JavaNetworkWordCount.java
X%%DATADIR%%/examples/src/main/java/org/apache/spark/examples/streaming/JavaQueueStream.java
X%%DATADIR%%/examples/src/main/python/als.py
X%%DATADIR%%/examples/src/main/python/avro_inputformat.py
X%%DATADIR%%/examples/src/main/python/cassandra_inputformat.py
X%%DATADIR%%/examples/src/main/python/cassandra_outputformat.py
X%%DATADIR%%/examples/src/main/python/hbase_inputformat.py
X%%DATADIR%%/examples/src/main/python/hbase_outputformat.py
X%%DATADIR%%/examples/src/main/python/kmeans.py
X%%DATADIR%%/examples/src/main/python/logistic_regression.py
X%%DATADIR%%/examples/src/main/python/mllib/correlations.py
X%%DATADIR%%/examples/src/main/python/mllib/decision_tree_runner.py
X%%DATADIR%%/examples/src/main/python/mllib/kmeans.py
X%%DATADIR%%/examples/src/main/python/mllib/logistic_regression.py
X%%DATADIR%%/examples/src/main/python/mllib/random_rdd_generation.py
X%%DATADIR%%/examples/src/main/python/mllib/sampled_rdds.py
X%%DATADIR%%/examples/src/main/python/pagerank.py
X%%DATADIR%%/examples/src/main/python/pi.py
X%%DATADIR%%/examples/src/main/python/sort.py
X%%DATADIR%%/examples/src/main/python/transitive_closure.py
X%%DATADIR%%/examples/src/main/python/wordcount.py
X%%DATADIR%%/examples/src/main/resources/kv1.txt
X%%DATADIR%%/examples/src/main/resources/people.json
X%%DATADIR%%/examples/src/main/resources/people.txt
X%%DATADIR%%/examples/src/main/resources/user.avsc
X%%DATADIR%%/examples/src/main/resources/users.avro
X%%DATADIR%%/examples/src/main/scala/org/apache/spark/examples/BroadcastTest.scala
X%%DATADIR%%/examples/src/main/scala/org/apache/spark/examples/CassandraCQLTest.scala
X%%DATADIR%%/examples/src/main/scala/org/apache/spark/examples/CassandraTest.scala
X%%DATADIR%%/examples/src/main/scala/org/apache/spark/examples/DriverSubmissionTest.scala
X%%DATADIR%%/examples/src/main/scala/org/apache/spark/examples/ExceptionHandlingTest.scala
X%%DATADIR%%/examples/src/main/scala/org/apache/spark/examples/GroupByTest.scala
X%%DATADIR%%/examples/src/main/scala/org/apache/spark/examples/HBaseTest.scala
X%%DATADIR%%/examples/src/main/scala/org/apache/spark/examples/HdfsTest.scala
X%%DATADIR%%/examples/src/main/scala/org/apache/spark/examples/LocalALS.scala
X%%DATADIR%%/examples/src/main/scala/org/apache/spark/examples/LocalFileLR.scala
X%%DATADIR%%/examples/src/main/scala/org/apache/spark/examples/LocalKMeans.scala
X%%DATADIR%%/examples/src/main/scala/org/apache/spark/examples/LocalLR.scala
X%%DATADIR%%/examples/src/main/scala/org/apache/spark/examples/LocalPi.scala
X%%DATADIR%%/examples/src/main/scala/org/apache/spark/examples/LogQuery.scala
X%%DATADIR%%/examples/src/main/scala/org/apache/spark/examples/MultiBroadcastTest.scala
X%%DATADIR%%/examples/src/main/scala/org/apache/spark/examples/SimpleSkewedGroupByTest.scala
X%%DATADIR%%/examples/src/main/scala/org/apache/spark/examples/SkewedGroupByTest.scala
X%%DATADIR%%/examples/src/main/scala/org/apache/spark/examples/SparkALS.scala
X%%DATADIR%%/examples/src/main/scala/org/apache/spark/examples/SparkHdfsLR.scala
X%%DATADIR%%/examples/src/main/scala/org/apache/spark/examples/SparkKMeans.scala
X%%DATADIR%%/examples/src/main/scala/org/apache/spark/examples/SparkLR.scala
X%%DATADIR%%/examples/src/main/scala/org/apache/spark/examples/SparkPageRank.scala
X%%DATADIR%%/examples/src/main/scala/org/apache/spark/examples/SparkPi.scala
X%%DATADIR%%/examples/src/main/scala/org/apache/spark/examples/SparkTC.scala
X%%DATADIR%%/examples/src/main/scala/org/apache/spark/examples/SparkTachyonHdfsLR.scala
X%%DATADIR%%/examples/src/main/scala/org/apache/spark/examples/SparkTachyonPi.scala
X%%DATADIR%%/examples/src/main/scala/org/apache/spark/examples/bagel/PageRankUtils.scala
X%%DATADIR%%/examples/src/main/scala/org/apache/spark/examples/bagel/WikipediaPageRank.scala
X%%DATADIR%%/examples/src/main/scala/org/apache/spark/examples/bagel/WikipediaPageRankStandalone.scala
X%%DATADIR%%/examples/src/main/scala/org/apache/spark/examples/graphx/LiveJournalPageRank.scala
X%%DATADIR%%/examples/src/main/scala/org/apache/spark/examples/graphx/SynthBenchmark.scala
X%%DATADIR%%/examples/src/main/scala/org/apache/spark/examples/mllib/BinaryClassification.scala
X%%DATADIR%%/examples/src/main/scala/org/apache/spark/examples/mllib/Correlations.scala
X%%DATADIR%%/examples/src/main/scala/org/apache/spark/examples/mllib/DecisionTreeRunner.scala
X%%DATADIR%%/examples/src/main/scala/org/apache/spark/examples/mllib/DenseKMeans.scala
X%%DATADIR%%/examples/src/main/scala/org/apache/spark/examples/mllib/LinearRegression.scala
X%%DATADIR%%/examples/src/main/scala/org/apache/spark/examples/mllib/MovieLensALS.scala
X%%DATADIR%%/examples/src/main/scala/org/apache/spark/examples/mllib/MultivariateSummarizer.scala
X%%DATADIR%%/examples/src/main/scala/org/apache/spark/examples/mllib/RandomRDDGeneration.scala
X%%DATADIR%%/examples/src/main/scala/org/apache/spark/examples/mllib/SampledRDDs.scala
X%%DATADIR%%/examples/src/main/scala/org/apache/spark/examples/mllib/SparseNaiveBayes.scala
X%%DATADIR%%/examples/src/main/scala/org/apache/spark/examples/mllib/StreamingLinearRegression.scala
X%%DATADIR%%/examples/src/main/scala/org/apache/spark/examples/mllib/TallSkinnyPCA.scala
X%%DATADIR%%/examples/src/main/scala/org/apache/spark/examples/mllib/TallSkinnySVD.scala
X%%DATADIR%%/examples/src/main/scala/org/apache/spark/examples/pythonconverters/AvroConverters.scala
X%%DATADIR%%/examples/src/main/scala/org/apache/spark/examples/pythonconverters/CassandraConverters.scala
X%%DATADIR%%/examples/src/main/scala/org/apache/spark/examples/pythonconverters/HBaseConverters.scala
X%%DATADIR%%/examples/src/main/scala/org/apache/spark/examples/sql/RDDRelation.scala
X%%DATADIR%%/examples/src/main/scala/org/apache/spark/examples/sql/hive/HiveFromSpark.scala
X%%DATADIR%%/examples/src/main/scala/org/apache/spark/examples/streaming/ActorWordCount.scala
X%%DATADIR%%/examples/src/main/scala/org/apache/spark/examples/streaming/CustomReceiver.scala
X%%DATADIR%%/examples/src/main/scala/org/apache/spark/examples/streaming/FlumeEventCount.scala
X%%DATADIR%%/examples/src/main/scala/org/apache/spark/examples/streaming/FlumePollingEventCount.scala
X%%DATADIR%%/examples/src/main/scala/org/apache/spark/examples/streaming/HdfsWordCount.scala
X%%DATADIR%%/examples/src/main/scala/org/apache/spark/examples/streaming/KafkaWordCount.scala
X%%DATADIR%%/examples/src/main/scala/org/apache/spark/examples/streaming/MQTTWordCount.scala
X%%DATADIR%%/examples/src/main/scala/org/apache/spark/examples/streaming/NetworkWordCount.scala
X%%DATADIR%%/examples/src/main/scala/org/apache/spark/examples/streaming/QueueStream.scala
X%%DATADIR%%/examples/src/main/scala/org/apache/spark/examples/streaming/RawNetworkGrep.scala
X%%DATADIR%%/examples/src/main/scala/org/apache/spark/examples/streaming/RecoverableNetworkWordCount.scala
X%%DATADIR%%/examples/src/main/scala/org/apache/spark/examples/streaming/StatefulNetworkWordCount.scala
X%%DATADIR%%/examples/src/main/scala/org/apache/spark/examples/streaming/StreamingExamples.scala
X%%DATADIR%%/examples/src/main/scala/org/apache/spark/examples/streaming/TwitterAlgebirdCMS.scala
X%%DATADIR%%/examples/src/main/scala/org/apache/spark/examples/streaming/TwitterAlgebirdHLL.scala
X%%DATADIR%%/examples/src/main/scala/org/apache/spark/examples/streaming/TwitterPopularTags.scala
X%%DATADIR%%/examples/src/main/scala/org/apache/spark/examples/streaming/ZeroMQWordCount.scala
X%%DATADIR%%/examples/src/main/scala/org/apache/spark/examples/streaming/clickstream/PageViewGenerator.scala
X%%DATADIR%%/examples/src/main/scala/org/apache/spark/examples/streaming/clickstream/PageViewStream.scala
X%%DATADIR%%/lib/spark-assembly-1.1.0-hadoop2.4.1.jar
X%%DATADIR%%/lib/spark-examples-1.1.0-hadoop2.4.1.jar
X%%DATADIR%%/python/.gitignore
X%%DATADIR%%/python/build/py4j/__init__.py
X%%DATADIR%%/python/build/py4j/compat.py
X%%DATADIR%%/python/build/py4j/finalizer.py
X%%DATADIR%%/python/build/py4j/java_collections.py
X%%DATADIR%%/python/build/py4j/java_gateway.py
X%%DATADIR%%/python/build/py4j/protocol.py
X%%DATADIR%%/python/build/py4j/tests/__init__.py
X%%DATADIR%%/python/build/py4j/tests/byte_string_test.py
X%%DATADIR%%/python/build/py4j/tests/finalizer_test.py
X%%DATADIR%%/python/build/py4j/tests/java_array_test.py
X%%DATADIR%%/python/build/py4j/tests/java_callback_test.py
X%%DATADIR%%/python/build/py4j/tests/java_gateway_test.py
X%%DATADIR%%/python/build/py4j/tests/java_list_test.py
X%%DATADIR%%/python/build/py4j/tests/java_map_test.py
X%%DATADIR%%/python/build/py4j/tests/java_set_test.py
X%%DATADIR%%/python/build/py4j/tests/multithreadtest.py
X%%DATADIR%%/python/build/py4j/tests/py4j_callback_example.py
X%%DATADIR%%/python/build/py4j/tests/py4j_callback_example2.py
X%%DATADIR%%/python/build/py4j/tests/py4j_example.py
X%%DATADIR%%/python/build/py4j/version.py
X%%DATADIR%%/python/epydoc.conf
X%%DATADIR%%/python/lib/PY4J_LICENSE.txt
X%%DATADIR%%/python/lib/py4j-0.8.2.1-src.zip
X%%DATADIR%%/python/pyspark/__init__.py
X%%DATADIR%%/python/pyspark/accumulators.py
X%%DATADIR%%/python/pyspark/broadcast.py
X%%DATADIR%%/python/pyspark/cloudpickle.py
X%%DATADIR%%/python/pyspark/conf.py
X%%DATADIR%%/python/pyspark/context.py
X%%DATADIR%%/python/pyspark/daemon.py
X%%DATADIR%%/python/pyspark/files.py
X%%DATADIR%%/python/pyspark/java_gateway.py
X%%DATADIR%%/python/pyspark/join.py
X%%DATADIR%%/python/pyspark/mllib/__init__.py
X%%DATADIR%%/python/pyspark/mllib/_common.py
X%%DATADIR%%/python/pyspark/mllib/classification.py
X%%DATADIR%%/python/pyspark/mllib/clustering.py
X%%DATADIR%%/python/pyspark/mllib/linalg.py
X%%DATADIR%%/python/pyspark/mllib/random.py
X%%DATADIR%%/python/pyspark/mllib/recommendation.py
X%%DATADIR%%/python/pyspark/mllib/regression.py
X%%DATADIR%%/python/pyspark/mllib/stat.py
X%%DATADIR%%/python/pyspark/mllib/tests.py
X%%DATADIR%%/python/pyspark/mllib/tree.py
X%%DATADIR%%/python/pyspark/mllib/util.py
X%%DATADIR%%/python/pyspark/rdd.py
X%%DATADIR%%/python/pyspark/rddsampler.py
X%%DATADIR%%/python/pyspark/resultiterable.py
X%%DATADIR%%/python/pyspark/serializers.py
X%%DATADIR%%/python/pyspark/shell.py
X%%DATADIR%%/python/pyspark/shuffle.py
X%%DATADIR%%/python/pyspark/sql.py
X%%DATADIR%%/python/pyspark/statcounter.py
X%%DATADIR%%/python/pyspark/storagelevel.py
X%%DATADIR%%/python/pyspark/tests.py
X%%DATADIR%%/python/pyspark/worker.py
X%%DATADIR%%/python/run-tests
X%%DATADIR%%/python/test_support/hello.txt
X%%DATADIR%%/python/test_support/userlib-0.1-py%%PYTHON_VER%%.egg
X%%DATADIR%%/python/test_support/userlibrary.py
X%%DATADIR%%/sbin/slaves.sh
X%%DATADIR%%/sbin/spark-config.sh
X%%DATADIR%%/sbin/spark-daemon.sh
X%%DATADIR%%/sbin/spark-daemons.sh
X%%DATADIR%%/sbin/spark-executor
X%%DATADIR%%/sbin/start-all.sh
X%%DATADIR%%/sbin/start-history-server.sh
X%%DATADIR%%/sbin/start-master.sh
X%%DATADIR%%/sbin/start-slave.sh
X%%DATADIR%%/sbin/start-slaves.sh
X%%DATADIR%%/sbin/start-thriftserver.sh
X%%DATADIR%%/sbin/stop-all.sh
X%%DATADIR%%/sbin/stop-history-server.sh
X%%DATADIR%%/sbin/stop-master.sh
X%%DATADIR%%/sbin/stop-slaves.sh
X@dir(%%SPARK_USER%%,%%SPARK_GROUP%%,) /var/run/spark
X@dir(%%SPARK_USER%%,%%SPARK_GROUP%%,) /var/log/spark
ec8cb471685918b9ecc905200bf3f199
echo c - spark/files
mkdir -p spark/files > /dev/null 2>&1
echo x - spark/files/patch-core-pom.xml
sed 's/^X//' >spark/files/patch-core-pom.xml << '53646a003c78f710be2e7902b1025bcf'
X--- core/pom.xml.orig	2014-09-03 10:00:33.000000000 +0400
X+++ core/pom.xml	2014-09-23 18:45:21.000000000 +0400
X@@ -300,26 +300,20 @@
X       </plugin>
X       <!-- Unzip py4j so we can include its files in the jar -->
X       <plugin>
X-        <groupId>org.codehaus.mojo</groupId>
X-        <artifactId>exec-maven-plugin</artifactId>
X-        <version>1.2.1</version>
X+        <groupId>org.apache.maven.plugins</groupId>
X+        <artifactId>maven-antrun-plugin</artifactId>
X         <executions>
X           <execution>
X             <phase>generate-resources</phase>
X             <goals>
X-              <goal>exec</goal>
X+              <goal>run</goal>
X             </goals>
X           </execution>
X         </executions>
X         <configuration>
X-          <executable>unzip</executable>
X-          <workingDirectory>../python</workingDirectory>
X-          <arguments>
X-            <argument>-o</argument>
X-            <argument>lib/py4j*.zip</argument>
X-            <argument>-d</argument>
X-            <argument>build</argument>
X-          </arguments>
X+          <tasks>
X+            <unzip src="../python/lib/py4j-0.8.2.1-src.zip" dest="../python/build" />
X+          </tasks>
X         </configuration>
X       </plugin>
X     </plugins>
53646a003c78f710be2e7902b1025bcf
echo x - spark/files/patch-assembly-pom.xml
sed 's/^X//' >spark/files/patch-assembly-pom.xml << '7fc4ddf2ac6469dbc8cfd040449c5477'
X--- assembly/pom.xml.orig	2014-09-03 10:00:33.000000000 +0400
X+++ assembly/pom.xml	2014-09-30 15:30:44.000000000 +0400
X@@ -113,6 +113,12 @@
X               <goal>shade</goal>
X             </goals>
X             <configuration>
X+              <relocations>
X+                <relocation>
X+                  <pattern>com.google.common</pattern>
X+                  <shadedPattern>spark.com.google.common</shadedPattern>
X+                </relocation>
X+              </relocations>
X               <transformers>
X                 <transformer implementation="org.apache.maven.plugins.shade.resource.ServicesResourceTransformer" />
X                 <transformer implementation="org.apache.maven.plugins.shade.resource.AppendingTransformer">
7fc4ddf2ac6469dbc8cfd040449c5477
echo x - spark/files/spark_worker.in
sed 's/^X//' >spark/files/spark_worker.in << '643119cda16e0bf53f02673dbaccddcf'
X#!/bin/sh
X#
X# PROVIDE: spark_worker
X# REQUIRE: LOGIN
X# KEYWORD: shutdown
X#
X
X. /etc/rc.subr
X
Xname=spark_worker
Xrcvar=spark_worker_enable
Xload_rc_config $name
X
X: ${spark_worker_enable:=NO}
X: ${spark_worker_master:="spark://`hostname`:7077"}
X: ${spark_worker_ui_port:=8081}
X: ${spark_worker_wrkdir:="/tmp/spark/worker"}
X
Xexport SPARK_PID_DIR=/var/run/spark
Xexport SPARK_LOG_DIR=/var/log/spark
X
Xpidfile=${SPARK_PID_DIR}/spark-%%SPARK_USER%%-org.apache.spark.deploy.worker.Worker-1.pid
Xstart_cmd=worker_start
Xstop_cmd='su -m spark -c "%%DATADIR%%/sbin/spark-daemon.sh stop org.apache.spark.deploy.worker.Worker 1"'
X
Xexport PATH=$PATH:%%LOCALBASE%%/bin
X
Xworker_start ()
X{
X	cmd="%%DATADIR%%/sbin/start-slave.sh"
X	args="-d ${spark_worker_wrkdir} --webui-port ${spark_worker_ui_port}"
X	if [ "$spark_worker_cores" != "" ]; then
X		args="$args -c $spark_worker_cores"
X	fi
X
X	if [ "$spark_worker_memory" != "" ]; then
X		args="$args -m $spark_worker_memory"
X	fi
X
X	su -m spark -c "$cmd 1 $args ${spark_worker_master}"
X}
X
Xrun_rc_command "$1"
643119cda16e0bf53f02673dbaccddcf
echo x - spark/files/patch-sbin-spark-daemon.sh
sed 's/^X//' >spark/files/patch-sbin-spark-daemon.sh << 'a41e94ddeddfdb4b17effd2a3a4a2da3'
X--- sbin/spark-daemon.sh.orig	2014-09-03 10:00:33.000000000 +0400
X+++ sbin/spark-daemon.sh	2014-09-30 18:30:00.000000000 +0400
X@@ -99,14 +99,6 @@
X if [ "$SPARK_LOG_DIR" = "" ]; then
X   export SPARK_LOG_DIR="$SPARK_HOME/logs"
X fi
X-mkdir -p "$SPARK_LOG_DIR"
X-touch $SPARK_LOG_DIR/.spark_test > /dev/null 2>&1
X-TEST_LOG_DIR=$?
X-if [ "${TEST_LOG_DIR}" = "0" ]; then
X-  rm -f $SPARK_LOG_DIR/.spark_test
X-else
X-  chown $SPARK_IDENT_STRING $SPARK_LOG_DIR
X-fi
X
X if [ "$SPARK_PID_DIR" = "" ]; then
X   SPARK_PID_DIR=/tmp
X@@ -126,8 +118,6 @@
X
X   (start)
X
X-    mkdir -p "$SPARK_PID_DIR"
X-
X    if [ -f $pid ]; then
X      if kill -0 `cat $pid` > /dev/null 2>&1; then
X        echo $command running as process `cat $pid`.  Stop it first.
a41e94ddeddfdb4b17effd2a3a4a2da3
echo x - spark/files/spark_master.in
sed 's/^X//' >spark/files/spark_master.in << '5df3893a509555ea7ab06a402c6cf399'
X#!/bin/sh
X#
X# PROVIDE: spark_master
X# REQUIRE: LOGIN
X# KEYWORD: shutdown
X#
X
X. /etc/rc.subr
X
Xname=spark_master
Xrcvar=spark_master_enable
Xload_rc_config $name
X
X: ${spark_master_enable:=NO}
X: ${spark_master_port:=7077}
X: ${spark_master_ui_port:=8080}
X
Xexport SPARK_PID_DIR=/var/run/spark
Xexport SPARK_LOG_DIR=/var/log/spark
Xexport SPARK_MASTER_PORT=${spark_master_port}
Xexport SPARK_MASTER_WEBUI_PORT=${spark_master_ui_port}
Xexport SPARK_IDENT_STRING=%%SPARK_USER%%
X
Xpidfile=${SPARK_PID_DIR}/spark-%%SPARK_USER%%-org.apache.spark.deploy.master.Master-1.pid
Xstart_cmd="su -m spark -c %%DATADIR%%/sbin/start-master.sh"
Xstop_cmd="su -m spark -c %%DATADIR%%/sbin/stop-master.sh"
X
Xexport PATH=$PATH:%%LOCALBASE%%/bin
X
Xrun_rc_command "$1"
5df3893a509555ea7ab06a402c6cf399
exit
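The attachment above is a classic shar: every payload line is prefixed with `X`, and each file is restored by `sed 's/^X//'` writing into the target path via a quoted here-document (the hex strings are just unique delimiters). A minimal, self-contained sketch of that mechanism follows; the archive name `demo.shar` and the payload file `demo/hello.txt` are hypothetical and not part of the port:

```shell
#!/bin/sh
# Build a tiny shar in a scratch directory, then unpack it the same
# way the header of spark.shar instructs: "sh file".
set -eu
tmp=$(mktemp -d)
cd "$tmp"

# The archive itself: mkdir recreates the directory, sed strips the
# protective leading X from each payload line while writing the file.
cat > demo.shar <<'EOF'
# This is a shell archive.
echo c - demo
mkdir -p demo > /dev/null 2>&1
echo x - demo/hello.txt
sed 's/^X//' >demo/hello.txt << 'MARKER'
Xhello from a shar
MARKER
exit
EOF

# Unpack: the shar is an ordinary shell script.
sh demo.shar
cat demo/hello.txt    # -> hello from a shar
```

Once `sh spark.shar` has recreated the `spark` directory, the port would typically be dropped into the ports tree and built with `make install`; the two rc scripts it installs are then enabled with `spark_master_enable="YES"` and `spark_worker_enable="YES"` in rc.conf, as usual for `USE_RC_SUBR` ports.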