Spark Configuration Guide
Table of Contents
- Spark Properties
- Dynamically Loading Spark Properties
- Viewing Spark Properties
- Available Properties
- Application Properties
- Runtime Environment
- Shuffle Behavior
- Spark UI
- Compression and Serialization
- Execution Behavior
- Networking
- Scheduling
- Security
- Spark Streaming
- Cluster Managers
- Environment Variables
- Configuring Logging
- Overriding the Configuration Directory
Spark provides three locations to configure the system:
- Spark properties control most application parameters and can be set using a SparkConf object, or through Java system properties.
- Environment variables can be used to set per-machine settings, such as the IP address, through the conf/spark-env.sh script on each node.
- Logging can be configured through log4j.properties.
Spark Properties

Spark properties control most application settings and are configured separately for each application. These properties are set on a SparkConf object, which is passed to the SparkContext. SparkConf lets you configure some of the common properties (e.g. the master URL and application name), as well as arbitrary key-value pairs through the set() method. For example, we can initialize an application like this:
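A minimal sketch of that initialization, following the stock example from the Spark documentation (the master URL, application name, and memory setting are illustrative):

```scala
import org.apache.spark.{SparkConf, SparkContext}

// Illustrative values: a local master, an arbitrary app name, 1 GB per executor.
val conf = new SparkConf()
  .setMaster("local")
  .setAppName("CountingSheep")
  .set("spark.executor.memory", "1g")
val sc = new SparkContext(conf)
```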
Dynamically Loading Spark Properties

In some cases you may want to avoid hard-coding certain configurations in a SparkConf. For instance, if you would like to run the same application with different masters or different amounts of memory, Spark allows you to simply create an empty conf:
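A one-line sketch:

```scala
import org.apache.spark.{SparkConf, SparkContext}

val sc = new SparkContext(new SparkConf())
```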
Then, you can supply configuration values at runtime:
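For example (the flag values and myApp.jar are illustrative):

```bash
./bin/spark-submit --name "My app" --master local[4] \
  --conf spark.shuffle.spill=false \
  --conf "spark.executor.extraJavaOptions=-XX:+PrintGCDetails -XX:+PrintGCTimeStamps" \
  myApp.jar
```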
The Spark shell and the spark-submit tool support two ways to load configurations dynamically. The first is command-line options, such as --master, as shown above. spark-submit can accept any Spark property using the --conf flag, but uses special flags for properties that play a part in launching the Spark application. Running ./bin/spark-submit --help will show the entire list of these options.
bin/spark-submit will also read default configuration values from conf/spark-defaults.conf, in which each line consists of a key and a value separated by whitespace. For example:
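A sketch of such a file (the master URL is a placeholder):

```
spark.master            spark://5.6.7.8:7077
spark.executor.memory   512m
spark.eventLog.enabled  true
spark.serializer        org.apache.spark.serializer.KryoSerializer
```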
Any values specified as command-line flags or in the properties file will be passed on to the application and merged with those specified through SparkConf. Properties set directly on the SparkConf take the highest precedence, then flags passed to spark-submit or spark-shell, and finally options in the spark-defaults.conf file.
Viewing Spark Properties
The application web UI at http://<driver>:4040 lists Spark properties in the "Environment" tab. This is a useful page to check that your properties have been set correctly. Note that only properties explicitly set through spark-defaults.conf or SparkConf will appear; all other configuration properties can be assumed to use their default values.
Available Properties

Most of the properties that control internal settings have reasonable default values. Some of the most common options to set are listed here, grouped by category:

Application Properties
Property Name | Default | Meaning |
---|---|---|
spark.app.name | (none) | The name of your application. This will appear in the UI and in log data. |
spark.master | (none) | The cluster manager to connect to. See the list of allowed master URLs. |
spark.executor.memory | 512m | Amount of memory to use per executor process, in the same format as JVM memory strings (e.g. 512m, 2g). |
spark.serializer | org.apache.spark.serializer.JavaSerializer | Class to use for serializing objects that will be sent over the network or need to be cached in serialized form. The default of Java serialization works with any Serializable Java object but is quite slow, so we recommend using org.apache.spark.serializer.KryoSerializer and configuring Kryo serialization when speed is necessary. Can be any subclass of org.apache.spark.Serializer. |
spark.kryo.registrator | (none) | If you use Kryo serialization, set this class to register your custom classes with Kryo. It should be set to a class that extends KryoRegistrator. See the tuning guide for more details. |
spark.local.dir | /tmp | Directory to use for "scratch" space in Spark, including map output files and RDDs that get stored on disk. This should be on a fast, local disk in your system. It can also be a comma-separated list of multiple directories on different disks. NOTE: In Spark 1.0 and later this will be overridden by SPARK_LOCAL_DIRS (Standalone, Mesos) or LOCAL_DIRS (YARN) environment variables set by the cluster manager. |
spark.logConf | false | Logs the effective SparkConf as INFO when a SparkContext is started. |
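As a sketch of how spark.serializer and spark.kryo.registrator fit together (MyClass1, MyClass2, and MyRegistrator are illustrative names, not part of any real API):

```scala
import com.esotericsoftware.kryo.Kryo
import org.apache.spark.SparkConf
import org.apache.spark.serializer.KryoRegistrator

// Illustrative application classes.
case class MyClass1(x: Int)
case class MyClass2(s: String)

// Hypothetical registrator that tells Kryo about them.
class MyRegistrator extends KryoRegistrator {
  override def registerClasses(kryo: Kryo) {
    kryo.register(classOf[MyClass1])
    kryo.register(classOf[MyClass2])
  }
}

val conf = new SparkConf()
  .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
  .set("spark.kryo.registrator", "MyRegistrator") // use the fully qualified name if packaged
```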
In addition to these, the following properties are also available and may be useful in some situations:

Runtime Environment
Property Name | Default | Meaning |
---|---|---|
spark.executor.extraJavaOptions | (none) | A string of extra JVM options to pass to executors. For instance, GC settings or other logging. Note that it is illegal to set Spark properties or heap size settings with this option. Spark properties should be set using a SparkConf object or the spark-defaults.conf file used with the spark-submit script. Heap size settings can be set with spark.executor.memory. |
spark.executor.extraClassPath | (none) | Extra classpath entries to append to the classpath of executors. This exists primarily for backwards-compatibility with older versions of Spark. Users typically should not need to set this option. |
spark.executor.extraLibraryPath | (none) | Set a special library path to use when launching executor JVMs. |
spark.files.userClassPathFirst | false | (Experimental) Whether to give user-added jars precedence over Spark's own jars when loading classes in executors. This feature can be used to mitigate conflicts between Spark's dependencies and user dependencies. It is currently an experimental feature. |
spark.python.worker.memory | 512m | Amount of memory to use per Python worker process during aggregation, in the same format as JVM memory strings (e.g. 512m, 2g). If the memory used during aggregation goes above this amount, it will spill the data into disks. |
spark.executorEnv.[EnvironmentVariableName] | (none) | Add the environment variable specified by EnvironmentVariableName to the executor process. The user can specify multiple of these to set multiple environment variables. |
spark.mesos.executor.home | driver side SPARK_HOME | Set the directory in which Spark is installed on the executors in Mesos. By default, the executors will simply use the driver's Spark home directory, which may not be visible to them. Note that this is only relevant if a Spark binary package is not specified through spark.executor.uri. |
spark.mesos.executor.memoryOverhead | executor memory * 0.07, with minimum of 384 | This value is additive to spark.executor.memory, specified in MiB, and is used to calculate the total Mesos task memory. A value of 384 implies a 384 MiB overhead. Additionally, there is a hard-coded 7% minimum overhead. The final overhead will be the larger of spark.mesos.executor.memoryOverhead or 7% of spark.executor.memory. |
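For instance, a hedged sketch of spark-defaults.conf entries exercising spark.executor.extraJavaOptions and spark.executorEnv.[EnvironmentVariableName] (the GC flags and the JAVA_HOME path are assumptions about your environment):

```
spark.executor.extraJavaOptions   -XX:+PrintGCDetails -XX:+PrintGCTimeStamps
spark.executorEnv.JAVA_HOME       /usr/lib/jvm/java-7-openjdk-amd64
```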
Shuffle Behavior
Property Name | Default | Meaning |
---|---|---|
spark.shuffle.consolidateFiles | false | If set to "true", consolidates intermediate files created during a shuffle. Creating fewer files can improve filesystem performance for shuffles with large numbers of reduce tasks. It is recommended to set this to "true" when using ext4 or xfs filesystems. On ext3, this option might degrade performance on machines with many (>8) cores due to filesystem limitations. |
spark.shuffle.spill | true | If set to "true", limits the amount of memory used during reduces by spilling data out to disk. The spilling threshold is specified by spark.shuffle.memoryFraction. |
spark.shuffle.spill.compress | true | Whether to compress data spilled during shuffles. Compression will use spark.io.compression.codec. |
spark.shuffle.memoryFraction | 0.2 | Fraction of Java heap to use for aggregation and cogroups during shuffles, if spark.shuffle.spill is true. At any given time, the collective size of all in-memory maps used for shuffles is bounded by this limit, beyond which the contents will begin to spill to disk. If spills are frequent, consider increasing this value at the expense of spark.storage.memoryFraction. |
spark.shuffle.compress | true | Whether to compress map output files. Generally a good idea. Compression will use spark.io.compression.codec. |
spark.shuffle.file.buffer.kb | 32 | Size of the in-memory buffer for each shuffle file output stream, in kilobytes. These buffers reduce the number of disk seeks and system calls made in creating intermediate shuffle files. |
spark.reducer.maxMbInFlight | 48 | Maximum size (in megabytes) of map outputs to fetch simultaneously from each reduce task. Since each output requires us to create a buffer to receive it, this represents a fixed memory overhead per reduce task, so keep it small unless you have a large amount of memory. |
spark.shuffle.manager | HASH | Implementation to use for shuffling data. A hash-based shuffle manager is the default, but starting in Spark 1.1 there is an experimental sort-based shuffle manager that is more memory-efficient in environments with small executors, such as YARN. To use it, change this value to SORT. |
spark.shuffle.sort.bypassMergeThreshold | 200 | (Advanced) In the sort-based shuffle manager, avoid merge-sorting data if there is no map-side aggregation and there are at most this many reduce partitions. |
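A sketch of opting in to the sort-based shuffle and file consolidation from application code, using the values named in the table above:

```scala
import org.apache.spark.SparkConf

val conf = new SparkConf()
  .set("spark.shuffle.manager", "SORT")            // experimental sort-based shuffle (Spark 1.1+)
  .set("spark.shuffle.consolidateFiles", "true")   // fewer intermediate files on ext4/xfs
```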
Spark UI
Property Name | Default | Meaning |
---|---|---|
spark.ui.port | 4040 | Port for your application's dashboard, which shows memory and workload data. |
spark.ui.retainedStages | 1000 | How many stages the Spark UI remembers before garbage collecting. |
spark.ui.killEnabled | true | Allows stages and corresponding jobs to be killed from the web UI. |
spark.eventLog.enabled | false | Whether to log Spark events, useful for reconstructing the web UI after the application has finished. |
spark.eventLog.compress | false | Whether to compress logged events, if spark.eventLog.enabled is true. |
spark.eventLog.dir | file:///tmp/spark-events | Base directory in which Spark events are logged, if spark.eventLog.enabled is true. Within this base directory, Spark creates a sub-directory for each application, and logs the events specific to the application in this directory. Users may want to set this to a unified location like an HDFS directory so history files can be read by the history server. |
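For example, a sketch that persists event logs to a shared location for the history server (the HDFS URL is a placeholder for your own namenode):

```scala
import org.apache.spark.SparkConf

val conf = new SparkConf()
  .set("spark.eventLog.enabled", "true")
  .set("spark.eventLog.dir", "hdfs://namenode:8021/spark-events") // placeholder URL
```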
Compression and Serialization
Property Name | Default | Meaning |
---|---|---|
spark.broadcast.compress | true | Whether to compress broadcast variables before sending them. Generally a good idea. |
spark.rdd.compress | false | Whether to compress serialized RDD partitions (e.g. for StorageLevel.MEMORY_ONLY_SER). Can save substantial space at the cost of some extra CPU time. |
spark.io.compression.codec | snappy | The codec used to compress internal data such as RDD partitions and shuffle outputs. By default, Spark provides three codecs: lz4, lzf, and snappy. You can also use fully qualified class names to specify the codec, e.g. org.apache.spark.io.LZ4CompressionCodec, org.apache.spark.io.LZFCompressionCodec, and org.apache.spark.io.SnappyCompressionCodec. |
spark.io.compression.snappy.block.size | 32768 | Block size (in bytes) used in Snappy compression, in the case when the Snappy compression codec is used. Lowering this block size will also lower shuffle memory usage when Snappy is used. |
spark.io.compression.lz4.block.size | 32768 | Block size (in bytes) used in LZ4 compression, in the case when the LZ4 compression codec is used. Lowering this block size will also lower shuffle memory usage when LZ4 is used. |
spark.closure.serializer | org.apache.spark.serializer.JavaSerializer | Serializer class to use for closures. Currently only the Java serializer is supported. |
spark.serializer.objectStreamReset | 100 | When serializing using org.apache.spark.serializer.JavaSerializer, the serializer caches objects to prevent writing redundant data; however, that stops garbage collection of those objects. By calling 'reset' you flush that info from the serializer, and allow old objects to be collected. To turn off this periodic reset set it to -1. By default it will reset the serializer every 100 objects. |
spark.kryo.referenceTracking | true | Whether to track references to the same object when serializing data with Kryo, which is necessary if your object graphs have loops and useful for efficiency if they contain multiple copies of the same object. Can be disabled to improve performance if you know this is not the case. |
spark.kryo.registrationRequired | false | Whether to require registration with Kryo. If set to 'true', Kryo will throw an exception if an unregistered class is serialized. If set to false (the default), Kryo will write unregistered class names along with each object. Writing class names can cause significant performance overhead, so enabling this option can enforce strictly that a user has not omitted classes from registration. |
spark.kryoserializer.buffer.mb | 0.064 | Initial size of Kryo's serialization buffer, in megabytes. Note that there will be one buffer per core on each worker. This buffer will grow up to spark.kryoserializer.buffer.max.mb if needed. |
spark.kryoserializer.buffer.max.mb | 64 | Maximum allowable size of Kryo serialization buffer, in megabytes. This must be larger than any object you attempt to serialize. Increase this if you get a "buffer limit exceeded" exception inside Kryo. |
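A sketch combining a few of these knobs (the values are illustrative trade-offs, not recommendations):

```scala
import org.apache.spark.SparkConf

val conf = new SparkConf()
  .set("spark.rdd.compress", "true")                // trade CPU for space on MEMORY_ONLY_SER caches
  .set("spark.io.compression.codec", "lz4")         // codec used for that compression
  .set("spark.kryoserializer.buffer.max.mb", "128") // headroom for large serialized objects
```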
Execution Behavior
Property Name | Default | Meaning |
---|---|---|
spark.default.parallelism | Local mode: number of cores on the local machine; Mesos fine-grained mode: 8; Others: total number of cores on all executor nodes or 2, whichever is larger | Default number of tasks to use across the cluster for distributed shuffle operations (groupByKey, reduceByKey, etc.) when not set by user. |
spark.broadcast.factory | org.apache.spark.broadcast.TorrentBroadcastFactory | Which broadcast implementation to use. |
spark.broadcast.blockSize | 4096 | Size of each piece of a block in kilobytes for TorrentBroadcastFactory. Too large a value decreases parallelism during broadcast (makes it slower); however, if it is too small, BlockManager might take a performance hit. |
spark.files.overwrite | false | Whether to overwrite files added through SparkContext.addFile() when the target file exists and its contents do not match those of the source. |
spark.files.fetchTimeout | false | Communication timeout to use when fetching files added through SparkContext.addFile() from the driver. |
spark.storage.memoryFraction | 0.6 | Fraction of Java heap to use for Spark's memory cache. This should not be larger than the "old" generation of objects in the JVM, which by default is given 0.6 of the heap, but you can increase it if you configure your own old generation size. |
spark.storage.unrollFraction | 0.2 | Fraction of spark.storage.memoryFraction to use for unrolling blocks in memory. This is dynamically allocated by dropping existing blocks when there is not enough free storage space to unroll the new block in its entirety. |
spark.tachyonStore.baseDir | System.getProperty("java.io.tmpdir") | Directories of the Tachyon File System that store RDDs. The Tachyon file system's URL is set by spark.tachyonStore.url. It can also be a comma-separated list of multiple directories on the Tachyon file system. |
spark.storage.memoryMapThreshold | 8192 | Size of a block, in bytes, above which Spark memory maps when reading a block from disk. This prevents Spark from memory mapping very small blocks. In general, memory mapping has high overhead for blocks close to or below the page size of the operating system. |
spark.tachyonStore.url | tachyon://localhost:19998 | The URL of the underlying Tachyon file system in the TachyonStore. |
spark.cleaner.ttl | (infinite) | Duration (seconds) of how long Spark will remember any metadata (stages generated, tasks generated, etc.). Periodic cleanups will ensure that metadata older than this duration will be forgotten. This is useful for running Spark for many hours / days (for example, running 24/7 in case of Spark Streaming applications). Note that any RDD that persists in memory for more than this duration will be cleared as well. |
spark.hadoop.validateOutputSpecs | true | If set to true, validates the output specification (e.g. checking if the output directory already exists) used in saveAsHadoopFile and other variants. This can be disabled to silence exceptions due to pre-existing output directories. We recommend that users do not disable this except when trying to achieve compatibility with previous versions of Spark. Simply use Hadoop's FileSystem API to delete output directories by hand. |
spark.hadoop.cloneConf | false | If set to true, clones a new Hadoop Configuration object for each task. This option should be enabled to work around Configuration thread-safety issues (see SPARK-2546 for more details). This is disabled by default in order to avoid unexpected performance regressions for jobs that are not affected by these issues. |
spark.executor.heartbeatInterval | 10000 | Interval (milliseconds) between each executor's heartbeats to the driver. Heartbeats let the driver know that the executor is still alive and update it with metrics for in-progress tasks. |
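As a sketch, raising shuffle parallelism and enabling metadata cleanup for a long-running job (the values are illustrative and should be tuned to your cluster):

```scala
import org.apache.spark.SparkConf

val conf = new SparkConf()
  .set("spark.default.parallelism", "200") // illustrative partition count
  .set("spark.cleaner.ttl", "3600")        // forget metadata older than one hour
```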
Networking
Property Name | Default | Meaning |
---|---|---|
spark.driver.host | (local hostname) | Hostname or IP address for the driver to listen on. This is used for communicating with the executors and the standalone Master. |
spark.driver.port | (random) | Port for the driver to listen on. This is used for communicating with the executors and the standalone Master. |
spark.fileserver.port | (random) | Port for the driver's HTTP file server to listen on. |
spark.broadcast.port | (random) | Port for the driver's HTTP broadcast server to listen on. This is not relevant for torrent broadcast. |
spark.replClassServer.port | (random) | Port for the driver's HTTP class server to listen on. This is only relevant for the Spark shell. |
spark.blockManager.port | (random) | Port for all block managers to listen on. These exist on both the driver and the executors. |
spark.executor.port | (random) | Port for the executor to listen on. This is used for communicating with the driver. |
spark.port.maxRetries | 16 | Default maximum number of retries when binding to a port before giving up. |
spark.akka.frameSize | 10 | Maximum message size to allow in "control plane" communication (for serialized tasks and task results), in MB. Increase this if your tasks need to send back large results to the driver (e.g. using collect() on a large dataset). |
spark.akka.threads | 4 | Number of actor threads to use for communication. Can be useful to increase on large clusters when the driver has a lot of CPU cores. |
spark.akka.timeout | 100 | Communication timeout between Spark nodes, in seconds. |
spark.akka.heartbeat.pauses | 600 | This is set to a larger value to disable the failure detector built into Akka. It can be enabled again if you plan to use this feature (not recommended). Acceptable heartbeat pause in seconds for Akka. This can be used to control sensitivity to GC pauses. Tune this in combination with spark.akka.heartbeat.interval and spark.akka.failure-detector.threshold if you need to. |
spark.akka.failure-detector.threshold | 300.0 | This is set to a larger value to disable the failure detector built into Akka. It can be enabled again if you plan to use this feature (not recommended). This maps to Akka's akka.remote.transport-failure-detector.threshold. Tune this in combination with spark.akka.heartbeat.pauses and spark.akka.heartbeat.interval if you need to. |
spark.akka.heartbeat.interval | 1000 | This is set to a larger value to disable the failure detector built into Akka. It can be enabled again if you plan to use this feature (not recommended). A larger interval value in seconds reduces network overhead, while a smaller value (~1 s) might be more informative for Akka's failure detector. Tune this in combination with spark.akka.heartbeat.pauses and spark.akka.failure-detector.threshold if you need to. The only positive use case for the failure detector is that a sensitive detector can help evict rogue executors quickly; however, this is usually not the case, as GC pauses and network lags are expected in a real Spark cluster. Apart from that, enabling it leads to a large number of heartbeat exchanges between nodes, flooding the network. |
Scheduling
Property Name | Default | Meaning |
---|---|---|
spark.task.cpus | 1 | Number of cores to allocate for each task. |
spark.task.maxFailures | 4 | Number of individual task failures before giving up on the job. Should be greater than or equal to 1. Number of allowed retries = this value - 1. |
spark.scheduler.mode | FIFO | The scheduling mode between jobs submitted to the same SparkContext. Can be set to FAIR to use fair sharing instead of queueing jobs one after another. Useful for multi-user services. |
spark.cores.max | (not set) | When running on a standalone deploy cluster or a Mesos cluster in "coarse-grained" sharing mode, the maximum amount of CPU cores to request for the application from across the cluster (not from each machine). If not set, the default will be spark.deploy.defaultCores on Spark's standalone cluster manager, or infinite (all available cores) on Mesos. |
spark.mesos.coarse | false | If set to "true", runs over Mesos clusters in "coarse-grained" sharing mode, where Spark acquires one long-lived Mesos task on each machine instead of one Mesos task per Spark task. This gives lower-latency scheduling for short queries, but leaves resources in use for the whole duration of the Spark job. |
spark.speculation | false | If set to "true", performs speculative execution of tasks. This means if one or more tasks are running slowly in a stage, they will be re-launched. |
spark.speculation.interval | 100 | How often Spark will check for tasks to speculate, in milliseconds. |
spark.speculation.quantile | 0.75 | Percentage of tasks which must be complete before speculation is enabled for a particular stage. |
spark.speculation.multiplier | 1.5 | How many times slower a task is than the median to be considered for speculation. |
spark.locality.wait | 3000 | Number of milliseconds to wait to launch a data-local task before giving up and launching it on a less-local node. The same wait will be used to step through multiple locality levels (process-local, node-local, rack-local and then any). It is also possible to customize the waiting time for each level by setting spark.locality.wait.node, etc. You should increase this setting if your tasks are long and see poor locality, but the default usually works well. |
spark.locality.wait.process | spark.locality.wait | Customize the locality wait for process locality. This affects tasks that attempt to access cached data in a particular executor process. |
spark.locality.wait.node | spark.locality.wait | Customize the locality wait for node locality. For example, you can set this to 0 to skip node locality and search immediately for rack locality (if your cluster has rack information). |
spark.locality.wait.rack | spark.locality.wait | Customize the locality wait for rack locality. |
spark.scheduler.revive.interval | 1000 | The interval length for the scheduler to revive the worker resource offers to run tasks (in milliseconds). |
spark.scheduler.minRegisteredResourcesRatio | 0 | The minimum ratio of registered resources (registered resources / total expected resources; resources are executors in YARN mode, CPU cores in standalone mode) to wait for before scheduling begins. Specified as a double between 0 and 1. Regardless of whether the minimum ratio of resources has been reached, the maximum amount of time it will wait before scheduling begins is controlled by the config spark.scheduler.maxRegisteredResourcesWaitingTime. |
spark.scheduler.maxRegisteredResourcesWaitingTime | 30000 | Maximum amount of time to wait for resources to register before scheduling begins (in milliseconds). |
spark.localExecution.enabled | false | Enables Spark to run certain jobs, such as first() or take() on the driver, without sending tasks to the cluster. This can make certain jobs execute very quickly, but may require shipping a whole partition of data to the driver. |
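A sketch of fair scheduling plus speculative retries for a multi-user service (the quantile is an illustrative choice):

```scala
import org.apache.spark.SparkConf

val conf = new SparkConf()
  .set("spark.scheduler.mode", "FAIR")
  .set("spark.speculation", "true")
  .set("spark.speculation.quantile", "0.90") // wait for 90% of tasks before speculating
```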
Security
Property Name | Default | Meaning |
---|---|---|
spark.authenticate | false | Whether Spark authenticates its internal connections. See spark.authenticate.secret if not running on YARN. |
spark.authenticate.secret | None | Set the secret key used for Spark to authenticate between components. This needs to be set if not running on YARN and authentication is enabled. |
spark.core.connection.auth.wait.timeout | 30 | Number of seconds for the connection to wait for authentication to occur before timing out and giving up. |
spark.core.connection.ack.wait.timeout | 60 | Number of seconds for the connection to wait for an ack to occur before timing out and giving up. To avoid unwanted timeouts caused by long pauses such as GC, you can set a larger value. |
spark.ui.filters | None | Comma-separated list of filter class names to apply to the Spark web UI. The filter should be a standard javax servlet Filter. Parameters to each filter can also be specified by setting a Java system property of spark.<class name of filter>.params='param1=value1,param2=value2'. For example: -Dspark.ui.filters=com.test.filter1 -Dspark.com.test.filter1.params='param1=foo,param2=testing' |
spark.acls.enable | false | Whether Spark acls should be enabled. If enabled, this checks to see if the user has access permissions to view or modify the job. Note this requires the user to be known, so if the user comes across as null no checks are done. Filters can be used with the UI to authenticate and set the user. |
spark.ui.view.acls | Empty | Comma-separated list of users that have view access to the Spark web UI. By default only the user that started the Spark job has view access. |
spark.modify.acls | Empty | Comma-separated list of users that have modify access to the Spark job. By default only the user that started the Spark job has access to modify it (kill it, for example). |
spark.admin.acls | Empty | Comma-separated list of users/administrators that have view and modify access to all Spark jobs. This can be used if you run on a shared cluster and have a set of administrators or devs who help debug when things go wrong. |
Spark Streaming
Property Name | Default | Meaning |
---|---|---|
spark.streaming.blockInterval | 200 | Interval (milliseconds) at which data received by Spark Streaming receivers is coalesced into blocks of data before storing them in Spark. |
spark.streaming.receiver.maxRate | infinite | Maximum rate (per second) at which each receiver will push data into blocks. Effectively, each stream will consume at most this number of records per second. Setting this configuration to 0 or a negative number will put no limit on the rate. |
spark.streaming.unpersist | true | Force RDDs generated and persisted by Spark Streaming to be automatically unpersisted from Spark's memory. The raw input data received by Spark Streaming is also automatically cleared. Setting this to false will allow the raw data and persisted RDDs to be accessible outside the streaming application, as they will not be cleared automatically, but it comes at the cost of higher memory usage in Spark. |
spark.executor.logs.rolling.strategy | (none) | Set the strategy of rolling of executor logs. By default it is disabled. It can be set to "time" (time-based rolling) or "size" (size-based rolling). For "time", use spark.executor.logs.rolling.time.interval to set the rolling interval. For "size", use spark.executor.logs.rolling.size.maxBytes to set the maximum file size for rolling. |
spark.executor.logs.rolling.time.interval | daily | Set the time interval by which the executor logs will be rolled over. Rolling is disabled by default. Valid values are daily, hourly, minutely, or any interval in seconds. See spark.executor.logs.rolling.maxRetainedFiles for automatic cleaning of old logs. |
spark.executor.logs.rolling.size.maxBytes | (none) | Set the max size of the file by which the executor logs will be rolled over. Rolling is disabled by default. Value is set in terms of bytes. See spark.executor.logs.rolling.maxRetainedFiles for automatic cleaning of old logs. |
spark.executor.logs.rolling.maxRetainedFiles | (none) | Sets the number of latest rolling log files that are going to be retained by the system. Older log files will be deleted. Disabled by default. |
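A sketch of throttling receivers and rolling executor logs hourly (the rate and retention values are illustrative):

```scala
import org.apache.spark.SparkConf

val conf = new SparkConf()
  .set("spark.streaming.receiver.maxRate", "10000")           // at most 10k records/s per receiver
  .set("spark.executor.logs.rolling.strategy", "time")
  .set("spark.executor.logs.rolling.time.interval", "hourly")
  .set("spark.executor.logs.rolling.maxRetainedFiles", "3")   // keep only the 3 newest log files
```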
Cluster Managers

Each cluster manager has additional configuration options of its own. The configurations for each manager can be found on its page:
Environment Variables

Certain Spark settings can be configured through environment variables, which are read from the conf/spark-env.sh script in the directory where Spark is installed (or conf/spark-env.cmd on Windows). In Standalone and Mesos modes, this file can give machine-specific information such as hostnames. It is also sourced when running local Spark applications or submission scripts.
Note that conf/spark-env.sh does not exist by default. You can create it by copying conf/spark-env.sh.template; make sure the copy is executable.
The following variables can be set in spark-env.sh:
Environment Variable | Meaning |
---|---|
JAVA_HOME | Location where Java is installed (if it's not on your default PATH). |
PYSPARK_PYTHON | Python binary executable to use for PySpark. |
SPARK_LOCAL_IP | IP address of the machine to bind to. |
SPARK_PUBLIC_DNS | Hostname your Spark program will advertise to other machines. |
In addition to the above, there are also options for setting up the Spark standalone cluster scripts, such as the number of cores to use on each machine and the maximum memory.
Since spark-env.sh is a shell script, some of these can be set programmatically; for example, you might compute SPARK_LOCAL_IP by looking up the IP address of a specific network interface.
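For example, a sketch of such a lookup in conf/spark-env.sh (the eth0 interface name and the Linux-specific ip command are assumptions about your environment):

```bash
# Bind Spark to the IPv4 address of a specific interface (assumed: eth0, Linux).
SPARK_LOCAL_IP=$(ip -4 -o addr show eth0 | awk '{print $4}' | cut -d/ -f1)
export SPARK_LOCAL_IP
```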
Configuring Logging

Spark uses log4j for logging. You can configure it by adding a log4j.properties file in the conf directory; a log4j.properties.template is provided there as a starting point.
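A sketch of a conf/log4j.properties patterned on the bundled template, quieting third-party noise to WARN while keeping Spark's own messages at INFO:

```
log4j.rootCategory=WARN, console
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.target=System.err
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{1}: %m%n
log4j.logger.org.apache.spark=INFO
```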
Overriding the Configuration Directory

To specify a configuration directory other than the default "SPARK_HOME/conf", you can set the SPARK_CONF_DIR environment variable. Spark will read the configuration files (spark-defaults.conf, spark-env.sh, log4j.properties, etc.) from that directory.
Source: http://colobu.com/2014/12/10/spark-configuration/