spark.sql.adaptive.advisoryPartitionSizeInBytes
Defaults to the value of spark.sql.adaptive.shuffle.targetPostShuffleInputSize. The advisory size in bytes of the shuffle partition during adaptive optimization (when spark.sql.adaptive.enabled is true). It takes effect when Spark coalesces small shuffle partitions or splits skewed shuffle partitions.
spark.sql.adaptive.autoBroadcastJoinThreshold
Configures the maximum size in bytes for a table that will be broadcast to all worker nodes when performing a join. By setting this value to -1, broadcasting can be disabled. The default value is the same as spark.sql.autoBroadcastJoinThreshold. Note that this config is used only in the adaptive framework.
spark.sql.adaptive.coalescePartitions.enabled
When true and 'spark.sql.adaptive.enabled' is true, Spark will coalesce contiguous shuffle partitions according to the target size (specified by 'spark.sql.adaptive.advisoryPartitionSizeInBytes'), to avoid too many small tasks.
spark.sql.adaptive.coalescePartitions.initialPartitionNum
The initial number of shuffle partitions before coalescing. If not set, it equals to spark.sql.shuffle.partitions. This configuration only has an effect when 'spark.sql.adaptive.enabled' and 'spark.sql.adaptive.coalescePartitions.enabled' are both true.
spark.sql.adaptive.coalescePartitions.minPartitionSize
The minimum size of shuffle partitions after coalescing. This is useful when the adaptively calculated target size is too small during partition coalescing.
spark.sql.adaptive.coalescePartitions.parallelismFirst
When true, Spark does not respect the target size specified by 'spark.sql.adaptive.advisoryPartitionSizeInBytes' (default 64MB) when coalescing contiguous shuffle partitions, but adaptively calculates the target size according to the default parallelism of the Spark cluster. The calculated size is usually smaller than the configured target size. This is to maximize the parallelism and avoid performance regression when enabling adaptive query execution. It's recommended to set this config to false and respect the configured target size.
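To make the interplay of these coalescing settings concrete, here is a minimal PySpark sketch (the session name, sample aggregation and output path are hypothetical) that enables adaptive coalescing and pins the advisory target size while keeping parallelismFirst off:

    from pyspark.sql import SparkSession, functions as F

    # Hypothetical local session; the keys below are the adaptive coalescing configs described above.
    spark = (
        SparkSession.builder
        .appName("aqe-coalesce-sketch")
        .config("spark.sql.adaptive.enabled", "true")
        .config("spark.sql.adaptive.coalescePartitions.enabled", "true")
        .config("spark.sql.adaptive.advisoryPartitionSizeInBytes", "64MB")
        .config("spark.sql.adaptive.coalescePartitions.parallelismFirst", "false")
        .getOrCreate()
    )

    # A shuffle-producing aggregation: AQE coalesces the post-shuffle partitions
    # toward the 64MB advisory size rather than keeping spark.sql.shuffle.partitions tasks.
    df = spark.range(1_000_000).withColumn("bucket", F.col("id") % 100)
    df.groupBy("bucket").count().write.mode("overwrite").parquet("/tmp/aqe_coalesce_demo")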
spark.sql.adaptive.customCostEvaluatorClass
The custom cost evaluator class to be used for adaptive execution. If not set, Spark will use its own SimpleCostEvaluator by default.
spark.sql.adaptive.enabled
When true, enable adaptive query execution, which re-optimizes the query plan in the middle of query execution, based on accurate runtime statistics.
spark.sql.adaptive.localShuffleReader.enabled
When true and 'spark.sql.adaptive.enabled' is true, Spark tries to use local shuffle reader to read the shuffle data when the shuffle partitioning is not needed, for example, after converting sort-merge join to broadcast-hash join.
spark.sql.adaptive.maxShuffledHashJoinLocalMapThreshold
Configures the maximum size in bytes per partition that can be allowed to build a local hash map. If this value is not smaller than spark.sql.adaptive.advisoryPartitionSizeInBytes and all partition sizes are not larger than this config, join selection prefers shuffled hash join over sort merge join regardless of the value of spark.sql.join.preferSortMergeJoin.
spark.sql.adaptive.optimizeSkewsInRebalancePartitions.enabled
When true and 'spark.sql.adaptive.enabled' is true, Spark will optimize the skewed shuffle partitions in RebalancePartitions and split them to smaller ones according to the target size (specified by 'spark.sql.adaptive.advisoryPartitionSizeInBytes'), to avoid data skew.
spark.sql.adaptive.optimizer.excludedRules
Configures a list of rules to be disabled in the adaptive optimizer, in which the rules are specified by their rule names and separated by commas. The optimizer will log the rules that have indeed been excluded.
spark.sql.adaptive.skewJoin.enabled
When true and 'spark.sql.adaptive.enabled' is true, Spark dynamically handles skew in shuffled join (sort-merge and shuffled hash) by splitting (and replicating if needed) skewed partitions.
spark.sql.adaptive.skewJoin.skewedPartitionFactor
A partition is considered skewed if its size is larger than this factor multiplied by the median partition size and also larger than 'spark.sql.adaptive.skewJoin.skewedPartitionThresholdInBytes'.
spark.sql.adaptive.skewJoin.skewedPartitionThresholdInBytes
A partition is considered skewed if its size in bytes is larger than this threshold and also larger than 'spark.sql.adaptive.skewJoin.skewedPartitionFactor' multiplied by the median partition size. Ideally this config should be set larger than 'spark.sql.adaptive.advisoryPartitionSizeInBytes'.
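As a rough illustration of how the skew-join settings combine (the values below are illustrative, not tuning advice, and an already-running session named spark is assumed):

    # A partition is split when it is BOTH larger than skewedPartitionThresholdInBytes
    # AND larger than skewedPartitionFactor times the median partition size.
    spark.conf.set("spark.sql.adaptive.enabled", "true")
    spark.conf.set("spark.sql.adaptive.skewJoin.enabled", "true")
    spark.conf.set("spark.sql.adaptive.skewJoin.skewedPartitionFactor", "5")
    spark.conf.set("spark.sql.adaptive.skewJoin.skewedPartitionThresholdInBytes", "256MB")
    # Keep the threshold above the advisory partition size, as recommended above.
    spark.conf.set("spark.sql.adaptive.advisoryPartitionSizeInBytes", "64MB")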
spark.sql.ansi.enabled
When true, Spark SQL uses an ANSI compliant dialect instead of being Hive compliant. For example, Spark will throw an exception at runtime instead of returning null results when the inputs to a SQL operator/function are invalid. For full details of this dialect, see the section "ANSI Compliance" of Spark's documentation. Some ANSI dialect features may not come directly from the ANSI SQL standard, but their behaviors align with ANSI SQL's style.
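A small sketch of the behavioral difference (assuming an existing session named spark):

    # With ANSI mode off, an invalid cast yields NULL; with ANSI mode on, it fails at runtime.
    spark.conf.set("spark.sql.ansi.enabled", "false")
    spark.sql("SELECT CAST('abc' AS INT) AS v").show()    # v is NULL
    spark.conf.set("spark.sql.ansi.enabled", "true")
    # spark.sql("SELECT CAST('abc' AS INT) AS v").show()  # would raise a runtime error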
spark.sql.autoBroadcastJoinThreshold
Configures the maximum size in bytes for a table that will be broadcast to all worker nodes when performing a join. By setting this value to -1, broadcasting can be disabled. Note that currently statistics are only supported for Hive Metastore tables where the command ANALYZE TABLE <tableName> COMPUTE STATISTICS noscan has been run, and for file-based data source tables where the statistics are computed directly on the files of data.
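For example (a sketch assuming an existing session named spark), the threshold can be set to -1 to disable automatic broadcast joins, while an explicit broadcast hint still works independently of it:

    from pyspark.sql import functions as F

    spark.conf.set("spark.sql.autoBroadcastJoinThreshold", "-1")  # disable automatic broadcast joins

    # A broadcast hint bypasses the threshold entirely:
    small = spark.range(100).withColumnRenamed("id", "k")
    large = spark.range(1_000_000).withColumnRenamed("id", "k")
    joined = large.join(F.broadcast(small), "k")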
spark.sql.avro.compression.codec
Compression codec used in writing of AVRO files. Supported codecs: uncompressed, deflate, snappy, bzip2, xz and zstandard. Default codec is snappy.
spark.sql.avro.deflate.level
Compression level for the deflate codec used in writing of AVRO files. Valid values must be in the range 1 to 9 inclusive, or -1. The default value is -1, which corresponds to level 6 in the current implementation.
spark.sql.avro.filterPushdown.enabled
When true, enable filter pushdown to Avro datasource.
spark.sql.broadcastTimeout
Timeout in seconds for the broadcast wait time in broadcast joins.
spark.sql.bucketing.coalesceBucketsInJoin.enabled
When true, if two bucketed tables with different numbers of buckets are joined, the side with the bigger number of buckets will be coalesced to have the same number of buckets as the other side. This only applies when the bigger number of buckets is divisible by the smaller number of buckets. Bucket coalescing is applied to sort-merge joins and shuffled hash joins. Note: coalescing bucketed tables can avoid unnecessary shuffling in joins, but it also reduces parallelism and could possibly cause OOM for shuffled hash joins.
spark.sql.bucketing.coalesceBucketsInJoin.maxBucketRatio
The ratio of the numbers of buckets of the two tables being coalesced should be less than or equal to this value for bucket coalescing to be applied. This configuration only has an effect when 'spark.sql.bucketing.coalesceBucketsInJoin.enabled' is set to true.
spark.sql.catalog.spark_catalog
A catalog implementation that will be used as the v2 interface to Spark's built-in v1 catalog: spark_catalog. This catalog shares its identifier namespace with the spark_catalog and must be consistent with it; for example, if a table can be loaded by the spark_catalog, this catalog must also return the table metadata. To delegate operations to the spark_catalog, implementations can extend 'CatalogExtension'.
spark.sql.cbo.enabled
Enables CBO for estimation of plan statistics when set to true.
spark.sql.cbo.joinReorder.dp.star.filter
Applies star-join filter heuristics to cost based join enumeration.
spark.sql.cbo.joinReorder.dp.threshold
The maximum number of joined nodes allowed in the dynamic programming algorithm.
spark.sql.cbo.joinReorder.enabled
Enables join reorder in CBO.
spark.sql.cbo.planStats.enabled
When true, the logical plan will fetch row counts and column statistics from the catalog.
spark.sql.cbo.starSchemaDetection
When true, it enables join reordering based on star schema detection.
spark.sql.cli.print.header
When set to true, spark-sql CLI prints the names of the columns in query output.
spark.sql.columnNameOfCorruptRecord
The name of the internal column for storing raw/un-parsed JSON and CSV records that fail to parse.
spark.sql.csv.filterPushdown.enabled
When true, enable filter pushdown to CSV datasource.
spark.sql.datetime.java8API.enabled
If the configuration property is set to true, java.time.Instant and java.time.LocalDate classes of Java 8 API are used as external types for Catalyst's TimestampType and DateType. If it is set to false, java.sql.Timestamp and java.sql.Date are used for the same purpose.
spark.sql.debug.maxToStringFields
Maximum number of fields of sequence-like entries that can be converted to strings in debug output. Any elements beyond the limit will be dropped and replaced by a "... N more fields" placeholder.
spark.sql.defaultCatalog
Name of the default catalog. This will be the current catalog if users have not explicitly set the current catalog yet.
spark.sql.execution.arrow.enabled
(Deprecated since Spark 3.0, please set 'spark.sql.execution.arrow.pyspark.enabled'.)
spark.sql.execution.arrow.fallback.enabled
(Deprecated since Spark 3.0, please set 'spark.sql.execution.arrow.pyspark.fallback.enabled'.)
spark.sql.execution.arrow.maxRecordsPerBatch
When using Apache Arrow, limit the maximum number of records that can be written to a single ArrowRecordBatch in memory. If set to zero or negative there is no limit.
spark.sql.execution.arrow.pyspark.enabled
Defaults to the value of spark.sql.execution.arrow.enabled. When true, make use of Apache Arrow for columnar data transfers in PySpark. This optimization applies to: 1. pyspark.sql.DataFrame.toPandas 2. pyspark.sql.SparkSession.createDataFrame when its input is a Pandas DataFrame. The following data types are unsupported: ArrayType of TimestampType, and nested StructType.
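A minimal sketch of the Arrow path, assuming pandas and pyarrow are installed and a session named spark exists:

    import pandas as pd

    spark.conf.set("spark.sql.execution.arrow.pyspark.enabled", "true")

    pdf = pd.DataFrame({"id": [1, 2, 3], "v": [0.1, 0.2, 0.3]})
    sdf = spark.createDataFrame(pdf)   # Arrow-accelerated pandas -> Spark conversion
    back = sdf.toPandas()              # Arrow-accelerated Spark -> pandas conversion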
spark.sql.execution.arrow.pyspark.fallback.enabled
Defaults to the value of spark.sql.execution.arrow.fallback.enabled. When true, optimizations enabled by 'spark.sql.execution.arrow.pyspark.enabled' will fall back automatically to non-optimized implementations if an error occurs.
spark.sql.execution.arrow.pyspark.selfDestruct.enabled
(Experimental) When true, make use of Apache Arrow's self-destruct and split-blocks options for columnar data transfers in PySpark, when converting from Arrow to Pandas. This reduces memory usage at the cost of some CPU time. This optimization applies to: pyspark.sql.DataFrame.toPandas when 'spark.sql.execution.arrow.pyspark.enabled' is set.
spark.sql.execution.arrow.sparkr.enabled
When true, make use of Apache Arrow for columnar data transfers in SparkR. This optimization applies to: 1. createDataFrame when its input is an R DataFrame 2. collect 3. dapply 4. gapply. The following data types are unsupported: FloatType, BinaryType, ArrayType, StructType and MapType.
spark.sql.execution.pandas.udf.buffer.size
Defaults to the value of spark.buffer.size. Same as spark.buffer.size but only applies to Pandas UDF executions. If it is not set, the fallback is spark.buffer.size. Note that Pandas execution requires more than 4 bytes. Lowering this value could make small Pandas UDF batches iterated and pipelined; however, it might degrade performance. See SPARK-27870.
spark.sql.execution.pyspark.udf.simplifiedTraceback.enabled
When true, the traceback from Python UDFs is simplified. It hides the Python worker, (de)serialization, etc from PySpark in tracebacks, and only shows the exception messages from UDFs. Note that this works only with CPython 3.7+.
spark.sql.execution.topKSortFallbackThreshold
In SQL queries with a SORT followed by a LIMIT like 'SELECT x FROM t ORDER BY y LIMIT m', if m is under this threshold, do a top-K sort in memory, otherwise do a global sort which spills to disk if necessary.
spark.sql.files.ignoreCorruptFiles
Whether to ignore corrupt files. If true, the Spark jobs will continue to run when encountering corrupted files and the contents that have been read will still be returned. This configuration is effective only when using file-based sources such as Parquet, JSON and ORC.
spark.sql.files.ignoreMissingFiles
Whether to ignore missing files. If true, the Spark jobs will continue to run when encountering missing files and the contents that have been read will still be returned. This configuration is effective only when using file-based sources such as Parquet, JSON and ORC.
spark.sql.files.maxPartitionBytes
The maximum number of bytes to pack into a single partition when reading files. This configuration is effective only when using file-based sources such as Parquet, JSON and ORC.
spark.sql.files.maxRecordsPerFile
Maximum number of records to write out to a single file. If this value is zero or negative, there is no limit.
spark.sql.files.minPartitionNum
The suggested (not guaranteed) minimum number of split file partitions. If not set, the default value is spark.default.parallelism. This configuration is effective only when using file-based sources such as Parquet, JSON and ORC.
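A hedged sketch of tuning file-based input splits (the dataset path is hypothetical; both keys can be set on a running session named spark):

    spark.conf.set("spark.sql.files.maxPartitionBytes", "64MB")  # smaller input splits
    spark.conf.set("spark.sql.files.minPartitionNum", "200")     # suggested lower bound on split count

    df = spark.read.parquet("/data/events")                      # hypothetical dataset
    print(df.rdd.getNumPartitions())                             # reflects the settings above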
spark.sql.function.concatBinaryAsString
When this option is set to false and all inputs are binary, functions.concat returns an output as binary. Otherwise, it returns as a string.
spark.sql.function.eltOutputAsString
When this option is set to false and all inputs are binary, elt returns an output as binary. Otherwise, it returns as a string.
spark.sql.groupByAliases
When true, aliases in a select list can be used in group by clauses. When false, an analysis exception is thrown in the case.
spark.sql.groupByOrdinal
When true, the ordinal numbers in group by clauses are treated as the position in the select list. When false, the ordinal numbers are ignored.
spark.sql.hive.convertInsertingPartitionedTable
When set to true, and spark.sql.hive.convertMetastoreParquet or spark.sql.hive.convertMetastoreOrc is true, the built-in ORC/Parquet writer is used to process inserts into partitioned ORC/Parquet tables created by using the HiveQL syntax.
spark.sql.hive.convertMetastoreCtas
When set to true, Spark will try to use the built-in data source writer instead of Hive serde in CTAS. This flag is effective only if spark.sql.hive.convertMetastoreParquet or spark.sql.hive.convertMetastoreOrc is enabled respectively for Parquet and ORC formats.
spark.sql.hive.convertMetastoreOrc
When set to true, the built-in ORC reader and writer are used to process ORC tables created by using the HiveQL syntax, instead of Hive serde.
spark.sql.hive.convertMetastoreParquet
When set to true, the built-in Parquet reader and writer are used to process parquet tables created by using the HiveQL syntax, instead of Hive serde.
spark.sql.hive.convertMetastoreParquet.mergeSchema
When true, also tries to merge possibly different but compatible Parquet schemas in different Parquet data files. This configuration is only effective when "spark.sql.hive.convertMetastoreParquet" is true.
spark.sql.hive.filesourcePartitionFileCacheSize
When nonzero, enable caching of partition file metadata in memory. All tables share a cache that can use up to the specified number of bytes for file metadata. This conf only has an effect when Hive filesource partition management is enabled.
spark.sql.hive.manageFilesourcePartitions
When true, enable metastore partition management for file source tables as well. This includes both datasource and converted Hive tables. When partition management is enabled, datasource tables store partitions in the Hive metastore, and use the metastore to prune partitions during query planning when spark.sql.hive.metastorePartitionPruning is set to true.
spark.sql.hive.metastorePartitionPruning
When true, some predicates will be pushed down into the Hive metastore so that non-matching partitions can be eliminated earlier.
spark.sql.hive.thriftServer.async
When set to true, Hive Thrift server executes SQL queries in an asynchronous way.
spark.sql.hive.verifyPartitionPath
When true, check all the partition paths under the table's root directory when reading data stored in HDFS. This configuration will be deprecated in the future releases and replaced by spark.files.ignoreMissingFiles.
spark.sql.inMemoryColumnarStorage.batchSize
Controls the size of batches for columnar caching. Larger batch sizes can improve memory utilization and compression, but risk OOMs when caching data.
spark.sql.inMemoryColumnarStorage.compressed
When set to true, Spark SQL will automatically select a compression codec for each column based on statistics of the data.
spark.sql.inMemoryColumnarStorage.enableVectorizedReader
Enables vectorized reader for columnar caching.
spark.sql.json.filterPushdown.enabled
When true, enable filter pushdown to JSON datasource.
spark.sql.jsonGenerator.ignoreNullFields
Whether to ignore null fields when generating JSON objects in JSON data source and JSON functions such as to_json. If false, it generates null for null fields in JSON objects.
spark.sql.leafNodeDefaultParallelism
The default parallelism of Spark SQL leaf nodes that produce data, such as the file scan node, the local data scan node, the range node, etc. The default value of this config is 'SparkContext#defaultParallelism'.
spark.sql.mapKeyDedupPolicy
The policy to deduplicate map keys in the built-in functions CreateMap, MapFromArrays, MapFromEntries, StringToMap, MapConcat and TransformKeys. When EXCEPTION, the query fails if duplicated map keys are detected. When LAST_WIN, the map key that is inserted last takes precedence.
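A small sketch of the policy in action (assuming a session named spark); with LAST_WIN the later value for a duplicated key survives, while EXCEPTION would fail the query instead:

    from pyspark.sql import functions as F

    spark.conf.set("spark.sql.mapKeyDedupPolicy", "LAST_WIN")
    spark.range(1).select(
        F.create_map(F.lit("a"), F.lit(1), F.lit("a"), F.lit(2)).alias("m")
    ).show(truncate=False)   # m = {a -> 2} because the last key wins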
spark.sql.maven.additionalRemoteRepositories
A comma-delimited string config of the optional additional remote Maven mirror repositories. This is only used for downloading Hive jars in IsolatedClientLoader if the default Maven Central repo is unreachable.
spark.sql.maxMetadataStringLength
Maximum number of characters to output for a metadata string, e.g. the file location in DataSourceScanExec. Every value will be abbreviated if it exceeds this length.
spark.sql.maxPlanStringLength
Maximum number of characters to output for a plan string. If the plan is longer, further output will be truncated. The default setting always generates a full plan. Set this to a lower value such as 8k if plan strings are taking up too much memory or are causing OutOfMemory errors in the driver or UI processes.
spark.sql.optimizer.dynamicPartitionPruning.enabled
When true, we will generate predicates for partition columns when they are used as join keys.
spark.sql.optimizer.enableCsvExpressionOptimization
Whether to optimize CSV expressions in SQL optimizer. It includes pruning unnecessary columns from from_csv.
spark.sql.optimizer.enableJsonExpressionOptimization
Whether to optimize JSON expressions in SQL optimizer. It includes pruning unnecessary columns from from_json, simplifying from_json + to_json, to_json + named_struct(from_json.col1, from_json.col2, ....).
spark.sql.optimizer.excludedRules
Configures a list of rules to be disabled in the optimizer, in which the rules are specified by their rule names and separated by commas. It is not guaranteed that all the rules in this configuration will eventually be excluded, as some rules are necessary for correctness. The optimizer will log the rules that have indeed been excluded.
spark.sql.orc.columnarReaderBatchSize
The number of rows to include in an ORC vectorized reader batch. The number should be carefully chosen to minimize overhead and avoid OOMs in reading data.
spark.sql.orc.compression.codec
Sets the compression codec used when writing ORC files. If either compression or orc.compress is specified in the table-specific options/properties, the precedence would be compression, orc.compress, spark.sql.orc.compression.codec. Acceptable values include: none, uncompressed, snappy, zlib, lzo, zstd, lz4.
spark.sql.orc.enableNestedColumnVectorizedReader
Enables vectorized ORC decoding for nested columns.
spark.sql.orc.enableVectorizedReader
Enables vectorized ORC decoding.
spark.sql.orc.filterPushdown
When true, enable filter pushdown for ORC files.
spark.sql.orc.mergeSchema
When true, the ORC data source merges schemas collected from all data files, otherwise the schema is picked from a random data file.
spark.sql.orderByOrdinal
When true, the ordinal numbers are treated as the position in the select list. When false, the ordinal numbers in order/sort by clause are ignored.
spark.sql.parquet.binaryAsString
Some other Parquet-producing systems, in particular Impala and older versions of Spark SQL, do not differentiate between binary data and strings when writing out the Parquet schema. This flag tells Spark SQL to interpret binary data as a string to provide compatibility with these systems.
spark.sql.parquet.columnarReaderBatchSize
The number of rows to include in a Parquet vectorized reader batch. The number should be carefully chosen to minimize overhead and avoid OOMs in reading data.
spark.sql.parquet.compression.codec
Sets the compression codec used when writing Parquet files. If either compression or parquet.compression is specified in the table-specific options/properties, the precedence would be compression, parquet.compression, spark.sql.parquet.compression.codec. Acceptable values include: none, uncompressed, snappy, gzip, lzo, brotli, lz4, zstd.
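A brief sketch of the precedence rule (output path is hypothetical, session named spark assumed): the per-write compression option overrides the session-level codec:

    spark.conf.set("spark.sql.parquet.compression.codec", "snappy")  # session default
    spark.range(1000).write.mode("overwrite") \
        .option("compression", "zstd") \
        .parquet("/tmp/compression_demo")   # this write uses zstd, not snappy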
spark.sql.parquet.enableVectorizedReader
Enables vectorized Parquet decoding.
spark.sql.parquet.filterPushdown
Enables Parquet filter push-down optimization when set to true.
spark.sql.parquet.int96AsTimestamp
Some Parquet-producing systems, in particular Impala, store Timestamp into INT96. Spark would also store Timestamp as INT96 because we need to avoid precision loss of the nanoseconds field. This flag tells Spark SQL to interpret INT96 data as a timestamp to provide compatibility with these systems.
spark.sql.parquet.int96TimestampConversion
This controls whether timestamp adjustments should be applied to INT96 data when converting to timestamps, for data written by Impala. This is necessary because Impala stores INT96 data with a different timezone offset than Hive & Spark.
spark.sql.parquet.mergeSchema
When true, the Parquet data source merges schemas collected from all data files, otherwise the schema is picked from the summary file or a random data file if no summary file is available.
spark.sql.parquet.outputTimestampType
Sets which Parquet timestamp type to use when Spark writes data to Parquet files. INT96 is a non-standard but commonly used timestamp type in Parquet. TIMESTAMP_MICROS is a standard timestamp type in Parquet, which stores number of microseconds from the Unix epoch. TIMESTAMP_MILLIS is also standard, but with millisecond precision, which means Spark has to truncate the microsecond portion of its timestamp value.
spark.sql.parquet.recordLevelFilter.enabled
If true, enables Parquet's native record-level filtering using the pushed down filters. This configuration only has an effect when 'spark.sql.parquet.filterPushdown' is enabled and the vectorized reader is not used. You can ensure the vectorized reader is not used by setting 'spark.sql.parquet.enableVectorizedReader' to false.
spark.sql.parquet.respectSummaryFiles
When true, we assume that all part-files of Parquet are consistent with summary files and we will ignore them when merging schema. Otherwise, if this is false, which is the default, we will merge all part-files. This should be considered an expert-only option, and shouldn't be enabled before knowing exactly what it means.
spark.sql.parquet.writeLegacyFormat
If true, data will be written in the format of Spark 1.4 and earlier. For example, decimal values will be written in Apache Parquet's fixed-length byte array format, which other systems such as Apache Hive and Apache Impala use. If false, the newer format in Parquet will be used. For example, decimals will be written in int-based format. If Parquet output is intended for use with systems that do not support this newer format, set to true.
spark.sql.parser.quotedRegexColumnNames
When true, quoted identifiers (using backticks) in SELECT statements are interpreted as regular expressions.
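A hedged sketch (column names are hypothetical) of regex column selection once the flag is on, assuming a session named spark:

    spark.conf.set("spark.sql.parser.quotedRegexColumnNames", "true")
    spark.range(1).selectExpr("id AS user_id", "id AS user_name").createOrReplaceTempView("t")
    spark.sql("SELECT `user_.*` FROM t").show()   # the backquoted pattern selects user_id and user_name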
spark.sql.pivotMaxValues
When doing a pivot without specifying values for the pivot column, this is the maximum number of (distinct) values that will be collected without error.
spark.sql.pyspark.jvmStacktrace.enabled
When true, it shows the JVM stacktrace in the user-facing PySpark exception together with Python stacktrace. By default, it is disabled and hides JVM stacktrace and shows a Python-friendly exception only.
spark.sql.redaction.options.regex
Regex to decide which keys in a Spark SQL command's options map contain sensitive information. The values of options whose names match this regex will be redacted in the explain output. This redaction is applied on top of the global redaction configuration defined by spark.redaction.regex.
spark.sql.redaction.string.regex
Defaults to the value of spark.redaction.string.regex. Regex to decide which parts of strings produced by Spark contain sensitive information. When this regex matches a string part, that string part is replaced by a dummy value. This is currently used to redact the output of SQL explain commands. When this conf is not set, the value from spark.redaction.string.regex is used.
spark.sql.repl.eagerEval.enabled
Enables eager evaluation. When true, the top K rows of a Dataset will be displayed if and only if the REPL supports eager evaluation. Currently, eager evaluation is supported in PySpark and SparkR. In PySpark, for notebooks like Jupyter, an HTML table (generated by repr_html) is returned. For the plain Python REPL, the returned outputs are formatted like dataframe.show(). In SparkR, the returned outputs are shown similarly to how an R data.frame would be displayed.
spark.sql.repl.eagerEval.maxNumRows
The max number of rows that are returned by eager evaluation. This only takes effect when spark.sql.repl.eagerEval.enabled is set to true. The valid range of this config is from 0 to (Int.MaxValue - 1), so invalid values such as negative numbers or values greater than (Int.MaxValue - 1) will be normalized to 0 and (Int.MaxValue - 1), respectively.
spark.sql.repl.eagerEval.truncate
The max number of characters for each cell that is returned by eager evaluation. This only takes effect when spark.sql.repl.eagerEval.enabled is set to true.
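For example, in a Jupyter notebook (a sketch assuming PySpark with an existing session named spark), the eager-evaluation settings above can be combined like this:

    spark.conf.set("spark.sql.repl.eagerEval.enabled", "true")
    spark.conf.set("spark.sql.repl.eagerEval.maxNumRows", "20")
    spark.conf.set("spark.sql.repl.eagerEval.truncate", "20")
    spark.range(5)   # evaluated eagerly: the notebook renders an HTML table without calling .show()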
spark.sql.session.timeZone
The ID of the session local timezone, in the format of either region-based zone IDs or zone offsets. Region IDs must have the form 'area/city', such as 'America/Los_Angeles'. Zone offsets must be in the format '(+|-)HH', '(+|-)HH:mm' or '(+|-)HH:mm:ss', e.g. '-08', '+01:00' or '-13:33:33'. Also, 'UTC' and 'Z' are supported as aliases of '+00:00'. Using other short names is not recommended because they can be ambiguous.
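A small sketch (assuming a session named spark) showing that the session time zone changes how the same instant is rendered:

    spark.conf.set("spark.sql.session.timeZone", "UTC")
    spark.sql("SELECT from_unixtime(0) AS ts").show()   # 1970-01-01 00:00:00
    spark.conf.set("spark.sql.session.timeZone", "America/Los_Angeles")
    spark.sql("SELECT from_unixtime(0) AS ts").show()   # 1969-12-31 16:00:00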
spark.sql.shuffle.partitions
The default number of partitions to use when shuffling data for joins or aggregations. Note: For structured streaming, this configuration cannot be changed between query restarts from the same checkpoint location.
spark.sql.sources.bucketing.autoBucketedScan.enabled
When true, Spark automatically decides whether to do a bucketed scan on input tables based on the query plan. A bucketed scan is not used if 1. the query does not have operators that utilize bucketing (e.g. join, group-by, etc.), or 2. there is an exchange operator between these operators and the table scan. Note that when 'spark.sql.sources.bucketing.enabled' is set to false, this configuration does not take any effect.
spark.sql.sources.bucketing.enabled
When false, bucketed tables are treated as normal tables.
spark.sql.sources.bucketing.maxBuckets
The maximum number of buckets allowed.
spark.sql.sources.default
The default data source to use in input/output.
spark.sql.sources.parallelPartitionDiscovery.threshold
The maximum number of paths allowed for listing files at driver side. If the number of detected paths exceeds this value during partition discovery, it tries to list the files with another Spark distributed job. This configuration is effective only when using file-based sources such as Parquet, JSON and ORC.
spark.sql.sources.partitionColumnTypeInference.enabled
When true, automatically infer the data types for partitioned columns.
spark.sql.sources.partitionOverwriteMode
When INSERT OVERWRITE a partitioned data source table, we currently support 2 modes: static and dynamic. In static mode, Spark deletes all the partitions that match the partition specification (e.g. PARTITION(a=1,b)) in the INSERT statement before overwriting. In dynamic mode, Spark doesn't delete partitions ahead, and only overwrites those partitions that have data written into them at runtime. By default we use static mode to keep the same behavior as Spark prior to 2.3. Note that this config doesn't affect Hive serde tables, as they are always overwritten with dynamic mode. This can also be set as an output option for a data source using the key partitionOverwriteMode (which takes precedence over this setting), e.g. dataframe.write.option("partitionOverwriteMode", "dynamic").save(path).
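A hedged sketch of a dynamic-mode overwrite (output path and schema are hypothetical, session named spark assumed); the per-write option shown here takes precedence over the session-level setting:

    df = spark.createDataFrame([(1, "2024-01-01"), (2, "2024-01-02")], ["id", "dt"])
    (df.write
       .mode("overwrite")
       .option("partitionOverwriteMode", "dynamic")   # only partitions present in df are replaced
       .partitionBy("dt")
       .parquet("/tmp/events_by_dt"))                 # hypothetical path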
spark.sql.statistics.fallBackToHdfs
When true, it will fall back to HDFS if the table statistics are not available from table metadata. This is useful in determining if a table is small enough to use broadcast joins. This flag is effective only for non-partitioned Hive tables. For non-partitioned data source tables, the size will be automatically recalculated if table statistics are not available. For partitioned data source and partitioned Hive tables, the value of 'spark.sql.defaultSizeInBytes' is used if table statistics are not available.
spark.sql.statistics.histogram.enabled
Generates histograms when computing column statistics if enabled. Histograms can provide better estimation accuracy. Currently, Spark only supports equi-height histograms. Note that collecting histograms incurs extra cost. For example, collecting column statistics usually takes only one table scan, but generating an equi-height histogram will cause an extra table scan.
spark.sql.statistics.size.autoUpdate.enabled
Enables automatic update for table size once table's data is changed. Note that if the total number of files of the table is very large, this can be expensive and slow down data change commands.
spark.sql.storeAssignmentPolicy
When inserting a value into a column with a different data type, Spark will perform type coercion. Currently, we support 3 policies for the type coercion rules: ANSI, legacy and strict. With ANSI policy, Spark performs the type coercion as per ANSI SQL. In practice, the behavior is mostly the same as PostgreSQL. It disallows certain unreasonable type conversions such as converting string to int or double to boolean. With legacy policy, Spark allows the type coercion as long as it is a valid Cast, which is very loose, e.g. converting string to int or double to boolean is allowed. It is also the only behavior in Spark 2.x and it is compatible with Hive. With strict policy, Spark doesn't allow any possible precision loss or data truncation in type coercion, e.g. converting double to int or decimal to double is not allowed.
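A hedged sketch (the table name is made up, and a writable warehouse location is assumed) of how the ANSI policy rejects a string-to-int store assignment that the legacy policy would cast:

    spark.conf.set("spark.sql.storeAssignmentPolicy", "ANSI")
    spark.sql("CREATE TABLE IF NOT EXISTS demo_ints (x INT) USING parquet")
    spark.sql("INSERT INTO demo_ints VALUES (1)")        # fine: int into int
    # spark.sql("INSERT INTO demo_ints SELECT 'abc'")    # rejected under ANSI; LEGACY would allow the cast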
spark.sql.streaming.checkpointLocation
The default location for storing checkpoint data for streaming queries.
spark.sql.streaming.continuous.epochBacklogQueueSize
The max number of entries to be stored in a queue to wait for late epochs. If this parameter is exceeded by the size of the queue, the stream will stop with an error.
spark.sql.streaming.disabledV2Writers
A comma-separated list of fully qualified data source register class names for which StreamWriteSupport is disabled. Writes to these sources will fall back to the V1 Sinks.
spark.sql.streaming.fileSource.cleaner.numThreads
Number of threads used in the file source completed file cleaner.
spark.sql.streaming.forceDeleteTempCheckpointLocation
When true, enable force delete for temporary checkpoint locations.
spark.sql.streaming.metricsEnabled
Whether Dropwizard/Codahale metrics will be reported for active streaming queries.
spark.sql.streaming.multipleWatermarkPolicy
Policy to calculate the global watermark value when there are multiple watermark operators in a streaming query. The default value is 'min', which chooses the minimum watermark reported across multiple operators. The other alternative value is 'max', which chooses the maximum across multiple operators. Note: This configuration cannot be changed between query restarts from the same checkpoint location.
spark.sql.streaming.noDataMicroBatches.enabled
Whether the streaming micro-batch engine will execute batches without data, for eager state management in stateful streaming queries.
spark.sql.streaming.numRecentProgressUpdates
The number of progress updates to retain for a streaming query.
spark.sql.streaming.stateStore.stateSchemaCheck
When true, Spark will validate the state schema against schema on existing state and fail query if it's incompatible.
spark.sql.streaming.stopActiveRunOnRestart
Running multiple runs of the same streaming query concurrently is not supported. If we find a concurrent active run for a streaming query (in the same or different SparkSessions on the same cluster) and this flag is true, we will stop the old streaming query run to start the new one.
spark.sql.streaming.stopTimeout
How long to wait in milliseconds for the streaming execution thread to stop when calling the streaming query's stop() method. 0 or negative values wait indefinitely.
spark.sql.thriftServer.interruptOnCancel
When true, all running tasks will be interrupted if one cancels a query. When false, all running tasks will remain until finished.
spark.sql.thriftServer.queryTimeout
Set a query duration timeout in seconds in Thrift Server. If the timeout is set to a positive value, a running query will be cancelled automatically when the timeout is exceeded, otherwise the query continues to run till completion. If timeout values are set for each statement via java.sql.Statement.setQueryTimeout and they are smaller than this configuration value, they take precedence. If you set this timeout and prefer to cancel the queries right away without waiting for tasks to finish, consider enabling spark.sql.thriftServer.interruptOnCancel together.
spark.sql.thriftserver.scheduler.pool
Set a Fair Scheduler pool for a JDBC client session.
spark.sql.thriftserver.ui.retainedSessions
The number of SQL client sessions kept in the JDBC/ODBC web UI history.
spark.sql.thriftserver.ui.retainedStatements
The number of SQL statements kept in the JDBC/ODBC web UI history.
spark.sql.ui.explainMode
Configures the query explain mode used in the Spark SQL UI. The value can be 'simple', 'extended', 'codegen', 'cost', or 'formatted'. The default value is 'formatted'.
spark.sql.variable.substitute
This enables substitution using syntax like ${var}, ${system:var}, and ${env:var}.
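A hedged sketch (the property name myLimit is made up for illustration, and resolution of a plain ${var} against session properties is assumed):

    spark.conf.set("spark.sql.variable.substitute", "true")
    spark.conf.set("myLimit", "3")                                # hypothetical user-defined property
    spark.sql("SELECT * FROM range(10) LIMIT ${myLimit}").show()  # ${myLimit} is expanded to 3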