datafusion.dataframe¶
DataFrame is one of the core concepts in DataFusion.
See Concepts in the online documentation for more information.
Classes¶
Compression | Enum representing the available compression types for Parquet files.
DataFrame | Two dimensional table representation of data.
DataFrameWriteOptions | Writer options for DataFrame.
InsertOp | Insert operation mode.
ParquetColumnOptions | Parquet options for individual columns.
ParquetWriterOptions | Advanced parquet writer options.
Module Contents¶
- class datafusion.dataframe.Compression(*args, **kwds)¶
Bases: enum.Enum
Enum representing the available compression types for Parquet files.
- classmethod from_str(value: str) Compression¶
Convert a string to a Compression enum value.
- Parameters:
value – The string representation of the compression type.
- Returns:
The matching Compression enum value.
- Raises:
ValueError – If the string does not match any Compression enum value.
- get_default_level() int | None¶
Get the default compression level for the compression type.
- Returns:
The default compression level for the compression type.
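Example (a minimal sketch using both helpers; the accepted strings are the lowercase enum values listed below, and the zstd default level of 4 matches the write_parquet documentation):
from datafusion.dataframe import Compression

codec = Compression.from_str("zstd")   # -> Compression.ZSTD
level = codec.get_default_level()      # default level for the codec (4 for zstd)
# Compression.from_str("lzo") would raise ValueError, since LZO is not supported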
- BROTLI = 'brotli'¶
- GZIP = 'gzip'¶
- LZ4 = 'lz4'¶
- LZ4_RAW = 'lz4_raw'¶
- SNAPPY = 'snappy'¶
- UNCOMPRESSED = 'uncompressed'¶
- ZSTD = 'zstd'¶
- class datafusion.dataframe.DataFrame(df: datafusion._internal.DataFrame)¶
Two dimensional table representation of data.
See Concepts in the online documentation for more information.
This constructor is not to be used by the end user.
See SessionContext for methods to create a DataFrame.
- __arrow_c_stream__(requested_schema: object | None = None) object¶
Export an Arrow PyCapsule Stream.
This will execute and collect the DataFrame. We will attempt to respect the requested schema, but only trivial transformations will be applied, such as returning only the fields listed in the requested schema if their data types match those in the DataFrame.
- Parameters:
requested_schema – Attempt to provide the DataFrame using this schema.
- Returns:
Arrow PyCapsule object.
- __getitem__(key: str | list[str]) DataFrame¶
Return a new DataFrame with the specified column or columns.
- Parameters:
key – Column name or list of column names to select.
- Returns:
DataFrame with the specified column or columns.
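Example (a minimal sketch assuming a DataFrame df with hypothetical columns a and b):
df_a = df["a"]          # select a single column
df_ab = df[["a", "b"]]  # select a list of columns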
- __repr__() str¶
Return a string representation of the DataFrame.
- Returns:
String representation of the DataFrame.
- _repr_html_() str¶
- aggregate(group_by: collections.abc.Sequence[datafusion.expr.Expr | str] | datafusion.expr.Expr | str, aggs: collections.abc.Sequence[datafusion.expr.Expr] | datafusion.expr.Expr) DataFrame¶
Aggregates the rows of the current DataFrame.
- Parameters:
group_by – Sequence of expressions or column names to group by.
aggs – Sequence of expressions to aggregate.
- Returns:
DataFrame after aggregation.
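Example (a minimal sketch assuming hypothetical columns category and amount; uses datafusion.functions.sum):
from datafusion import col, functions as f

df = df.aggregate(
    [col("category")],                      # group-by expressions
    [f.sum(col("amount")).alias("total")],  # aggregate expressions
)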
- cast(mapping: dict[str, pyarrow.DataType[Any]]) DataFrame¶
Cast one or more columns to a different data type.
- Parameters:
mapping – Mapped with column as key and column dtype as value.
- Returns:
DataFrame after casting columns.
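Example (a minimal sketch assuming hypothetical columns id and price):
import pyarrow as pa

df = df.cast({"id": pa.int64(), "price": pa.float32()})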
- collect() list[pyarrow.RecordBatch]¶
Execute this DataFrame and collect results into memory.
Prior to calling collect, modifying a DataFrame simply updates a plan (no actual computation is performed). Calling collect triggers the computation.
- Returns:
List of pyarrow.RecordBatch collected from the DataFrame.
- collect_partitioned() list[list[pyarrow.RecordBatch]]¶
Execute this DataFrame and collect all partitioned results.
This operation returns pyarrow.RecordBatch maintaining the input partitioning.
- Returns:
List of lists of RecordBatch collected from the DataFrame.
- count() int¶
Return the total number of rows in this DataFrame.
Note that this method will actually run a plan to calculate the count, which may be slow for large or complicated DataFrames.
- Returns:
Number of rows in the DataFrame.
- static default_str_repr(batches: list[pyarrow.RecordBatch], schema: pyarrow.Schema, has_more: bool, table_uuid: str | None = None) str¶
Return the default string representation of a DataFrame.
This method is used by the default formatter and implemented in Rust for performance reasons.
- describe() DataFrame¶
Return the statistics for this DataFrame.
Only numeric datatypes are summarized at the moment; nulls are returned for non-numeric datatypes.
The output format is modeled after pandas.
- Returns:
A summary DataFrame containing statistics.
- distinct() DataFrame¶
Return a new DataFrame with all duplicated rows removed.
- Returns:
DataFrame after removing duplicates.
- drop(*columns: str) DataFrame¶
Drop an arbitrary number of columns.
Column names are case-sensitive and, unlike other operations such as select, do not require double quotes. Leading and trailing double quotes are allowed and will be automatically stripped if present.
- Parameters:
columns – Column names to drop from the dataframe. Both column_name and "column_name" are accepted.
- Returns:
DataFrame with those columns removed in the projection.
Example Usage:
df.drop('ID_For_Students')    # Works
df.drop('"ID_For_Students"')  # Also works (quotes stripped)
- except_all(other: DataFrame) DataFrame¶
Calculate the exception of two DataFrame.
The two DataFrame must have exactly the same schema.
- Parameters:
other – DataFrame to calculate exception with.
- Returns:
DataFrame after exception.
- execute_stream() datafusion.record_batch.RecordBatchStream¶
Executes this DataFrame and returns a stream over a single partition.
- Returns:
Record Batch Stream over a single partition.
- execute_stream_partitioned() list[datafusion.record_batch.RecordBatchStream]¶
Executes this DataFrame and returns a stream for each partition.
- Returns:
One record batch stream per partition.
- execution_plan() datafusion.plan.ExecutionPlan¶
Return the execution/physical plan.
- Returns:
Execution plan.
- explain(verbose: bool = False, analyze: bool = False) None¶
Print an explanation of the DataFrame’s plan so far.
If analyze is specified, runs the plan and reports metrics.
- Parameters:
verbose – If True, more details will be included.
analyze – If True, the plan will run and metrics will be reported.
- fill_null(value: Any, subset: list[str] | None = None) DataFrame¶
Fill null values in specified columns with a value.
- Parameters:
value – Value to replace nulls with. Will be cast to match column type.
subset – Optional list of column names to fill. If None, fills all columns.
- Returns:
DataFrame with null values replaced where type casting is possible.
Examples
>>> df = df.fill_null(0)  # Fill all nulls with 0 where possible
>>> # Fill nulls in specific string columns
>>> df = df.fill_null("missing", subset=["name", "category"])
Notes
Only fills nulls in columns where the value can be cast to the column type
For columns where casting fails, the original column is kept unchanged
For columns not in subset, the original column is kept unchanged
- filter(*predicates: datafusion.expr.Expr) DataFrame¶
Return a DataFrame for which predicate evaluates to True.
Rows for which predicate evaluates to False or None are filtered out. If more than one predicate is provided, these predicates will be combined as a logical AND. Each predicate must be an Expr created using helper functions such as datafusion.col() or datafusion.lit(). If more complex logic is required, see the logical operations in functions.
Example:
from datafusion import col, lit
df.filter(col("a") > lit(1))
- Parameters:
predicates – Predicate expression(s) to filter the DataFrame.
- Returns:
DataFrame after filtering.
- head(n: int = 5) DataFrame¶
Return a new DataFrame with a limited number of rows.
- Parameters:
n – Number of rows to take from the head of the DataFrame.
- Returns:
DataFrame after limiting.
- intersect(other: DataFrame) DataFrame¶
Calculate the intersection of two DataFrame.
The two DataFrame must have exactly the same schema.
- Parameters:
other – DataFrame to intersect with.
- Returns:
DataFrame after intersection.
- into_view() datafusion.catalog.Table¶
Convert DataFrame into a Table.
Examples
>>> from datafusion import SessionContext
>>> ctx = SessionContext()
>>> df = ctx.sql("SELECT 1 AS value")
>>> view = df.into_view()
>>> ctx.register_table("values_view", view)
>>> df.collect()  # The DataFrame is still usable
>>> ctx.sql("SELECT value FROM values_view").collect()
- join(right: DataFrame, on: str | collections.abc.Sequence[str], how: Literal['inner', 'left', 'right', 'full', 'semi', 'anti'] = 'inner', *, left_on: None = None, right_on: None = None, join_keys: None = None) DataFrame¶
- join(right: DataFrame, on: None = None, how: Literal['inner', 'left', 'right', 'full', 'semi', 'anti'] = 'inner', *, left_on: str | collections.abc.Sequence[str], right_on: str | collections.abc.Sequence[str], join_keys: tuple[list[str], list[str]] | None = None) DataFrame
- join(right: DataFrame, on: None = None, how: Literal['inner', 'left', 'right', 'full', 'semi', 'anti'] = 'inner', *, join_keys: tuple[list[str], list[str]], left_on: None = None, right_on: None = None) DataFrame
Join this DataFrame with another DataFrame.
Either on has to be provided, or both left_on and right_on in conjunction.
- Parameters:
right – Other DataFrame to join with.
on – Column names to join on in both dataframes.
how – Type of join to perform. Supported types are “inner”, “left”, “right”, “full”, “semi”, “anti”.
left_on – Join column of the left dataframe.
right_on – Join column of the right dataframe.
join_keys – Tuple of two lists of column names to join on. [Deprecated]
- Returns:
DataFrame after join.
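Example (a minimal sketch assuming a second DataFrame right_df and hypothetical key columns):
# Same key name in both DataFrames
df.join(right_df, on="customer_id", how="inner")

# Different key names on each side
df.join(right_df, left_on="id", right_on="customer_id", how="left")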
- join_on(right: DataFrame, *on_exprs: datafusion.expr.Expr, how: Literal['inner', 'left', 'right', 'full', 'semi', 'anti'] = 'inner') DataFrame¶
Join two DataFrame using the specified expressions.
Join predicates must be Expr objects, typically built with datafusion.col(). On expressions are used to support in-equality predicates. Equality predicates are correctly optimized.
Example:
from datafusion import col
df.join_on(other_df, col("id") == col("other_id"))
- Parameters:
right – Other DataFrame to join with.
on_exprs – single or multiple (in)-equality predicates.
how – Type of join to perform. Supported types are “inner”, “left”, “right”, “full”, “semi”, “anti”.
- Returns:
DataFrame after join.
- limit(count: int, offset: int = 0) DataFrame¶
Return a new DataFrame with a limited number of rows.
- Parameters:
count – Number of rows to limit the DataFrame to.
offset – Number of rows to skip.
- Returns:
DataFrame after limiting.
- logical_plan() datafusion.plan.LogicalPlan¶
Return the unoptimized LogicalPlan.
- Returns:
Unoptimized logical plan.
- optimized_logical_plan() datafusion.plan.LogicalPlan¶
Return the optimized LogicalPlan.
- Returns:
Optimized logical plan.
- parse_sql_expr(expr: str) datafusion.expr.Expr¶
Creates a logical expression from SQL query text.
The expression is created and processed against the current schema.
Example:
from datafusion import col, lit
df.parse_sql_expr("a > 1")
# should produce: col("a") > lit(1)
- Parameters:
expr – Expression string to be converted to a DataFusion expression.
- Returns:
Logical expression.
- repartition(num: int) DataFrame¶
Repartition a DataFrame into num partitions.
Batches are allocated to partitions using a round-robin algorithm.
- Parameters:
num – Number of partitions to repartition the DataFrame into.
- Returns:
Repartitioned DataFrame.
- repartition_by_hash(*exprs: datafusion.expr.Expr, num: int) DataFrame¶
Repartition a DataFrame using a hash partitioning scheme.
- Parameters:
exprs – Expressions to evaluate and perform hashing on.
num – Number of partitions to repartition the DataFrame into.
- Returns:
Repartitioned DataFrame.
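Example (a minimal sketch assuming a hypothetical column user_id):
from datafusion import col

df = df.repartition_by_hash(col("user_id"), num=16)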
- schema() pyarrow.Schema¶
Return the pyarrow.Schema of this DataFrame.
The output schema contains information on the name, data type, and nullability for each column.
- Returns:
Schema describing the DataFrame.
- select(*exprs: datafusion.expr.Expr | str) DataFrame¶
Project arbitrary expressions into a new DataFrame.
- Parameters:
exprs – Either column names or Expr to select.
- Returns:
DataFrame after projection. It has one column for each expression.
Example usage:
The following example will return 3 columns from the original dataframe. The first two columns will be the original columns a and b, since the string "a" is assumed to refer to column selection. Also a duplicate of column a will be returned with the column name alternate_a:
df = df.select("a", col("b"), col("a").alias("alternate_a"))
- select_columns(*args: str) DataFrame¶
Filter the DataFrame by columns.
- Returns:
DataFrame only containing the specified columns.
- show(num: int = 20) None¶
Execute the DataFrame and print the result to the console.
- Parameters:
num – Number of lines to show.
- sort(*exprs: datafusion.expr.SortKey) DataFrame¶
Sort the DataFrame by the specified sorting expressions or column names.
Note that any expression can be turned into a sort expression by calling its sort method.
- Parameters:
exprs – Sort expressions or column names, applied in order.
- Returns:
DataFrame after sorting.
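Example (a minimal sketch assuming hypothetical columns a and b):
from datafusion import col

df = df.sort(col("a").sort(ascending=False), "b")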
- tail(n: int = 5) DataFrame¶
Return a new DataFrame with a limited number of rows.
Be aware this could be potentially expensive, since the number of rows in the dataframe needs to be determined. This is done by collecting it.
- Parameters:
n – Number of rows to take from the tail of the DataFrame.
- Returns:
DataFrame after limiting.
- to_arrow_table() pyarrow.Table¶
Execute the DataFrame and convert it into an Arrow Table.
- Returns:
Arrow Table.
- to_pandas() pandas.DataFrame¶
Execute the DataFrame and convert it into a Pandas DataFrame.
- Returns:
Pandas DataFrame.
- to_polars() polars.DataFrame¶
Execute the DataFrame and convert it into a Polars DataFrame.
- Returns:
Polars DataFrame.
- to_pydict() dict[str, list[Any]]¶
Execute the DataFrame and convert it into a dictionary of lists.
- Returns:
Dictionary of lists.
- to_pylist() list[dict[str, Any]]¶
Execute the DataFrame and convert it into a list of dictionaries.
- Returns:
List of dictionaries.
- transform(func: Callable[Ellipsis, DataFrame], *args: Any) DataFrame¶
Apply a function to the current DataFrame which returns another DataFrame.
This is useful for chaining together multiple functions. For example:
def add_3(df: DataFrame) -> DataFrame:
    return df.with_column("modified", lit(3))

def within_limit(df: DataFrame, limit: int) -> DataFrame:
    return df.filter(col("a") < lit(limit)).distinct()

df = df.transform(add_3).transform(within_limit, 4)
- Parameters:
func – A callable function that takes a DataFrame as its first argument.
args – Zero or more arguments to pass to func
- Returns:
After applying func to the original dataframe.
- Return type:
DataFrame
- union(other: DataFrame, distinct: bool = False) DataFrame¶
Calculate the union of two DataFrame.
The two DataFrame must have exactly the same schema.
- Parameters:
other – DataFrame to union with.
distinct – If True, duplicate rows will be removed.
- Returns:
DataFrame after union.
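Example (a minimal sketch assuming a second DataFrame other_df with the same schema):
combined = df.union(other_df)                          # keeps duplicate rows
combined_distinct = df.union(other_df, distinct=True)  # removes duplicate rows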
- union_distinct(other: DataFrame) DataFrame¶
Calculate the distinct union of two DataFrame.
The two DataFrame must have exactly the same schema. Any duplicate rows are discarded.
- Parameters:
other – DataFrame to union with.
- Returns:
DataFrame after union.
- unnest_columns(*columns: str, preserve_nulls: bool = True) DataFrame¶
Expand columns of arrays into a single row per array element.
- Parameters:
columns – Column names to perform unnest operation on.
preserve_nulls – If False, rows with null entries will not be returned.
- Returns:
A DataFrame with the columns expanded.
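Example (a minimal sketch assuming a hypothetical list-typed column tags):
# Produces one output row per element of the "tags" array
df = df.unnest_columns("tags")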
- with_column(name: str, expr: datafusion.expr.Expr) DataFrame¶
Add an additional column to the DataFrame.
The expr must be an Expr constructed with datafusion.col() or datafusion.lit().
Example:
from datafusion import col, lit
df.with_column("b", col("a") + lit(1))
- Parameters:
name – Name of the column to add.
expr – Expression to compute the column.
- Returns:
DataFrame with the new column.
- with_column_renamed(old_name: str, new_name: str) DataFrame¶
Rename one column by applying a new projection.
This is a no-op if the column to be renamed does not exist.
The method supports case-sensitive renaming by wrapping the column name in one of the following symbols: " or ' or `.
- Parameters:
old_name – Old column name.
new_name – New column name.
- Returns:
DataFrame with the column renamed.
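Example (a minimal sketch assuming hypothetical column names):
df = df.with_column_renamed("a", "alpha")
df = df.with_column_renamed('"CaseSensitive"', "case_sensitive")  # quoted for a case-sensitive match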
- with_columns(*exprs: datafusion.expr.Expr | Iterable[datafusion.expr.Expr], **named_exprs: datafusion.expr.Expr) DataFrame¶
Add columns to the DataFrame.
Columns can be passed as expressions, iterables of expressions, or named expressions. All expressions must be Expr objects created via datafusion.col() or datafusion.lit(). To pass named expressions use the form name=Expr.
Example usage: The following will add 4 columns labeled a, b, c, and d:
from datafusion import col, lit

df = df.with_columns(
    col("x").alias("a"),
    [lit(1).alias("b"), col("y").alias("c")],
    d=lit(3),
)
- Parameters:
exprs – Either a single expression or an iterable of expressions to add.
named_exprs – Named expressions in the form of name=expr.
- Returns:
DataFrame with the new columns added.
- write_csv(path: str | pathlib.Path, with_header: bool = False, write_options: DataFrameWriteOptions | None = None) None¶
Execute the DataFrame and write the results to a CSV file.
- Parameters:
path – Path of the CSV file to write.
with_header – If true, output the CSV header row.
write_options – Options that impact how the DataFrame is written.
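Example (a minimal sketch; the output path is hypothetical):
df.write_csv("/tmp/output.csv", with_header=True)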
- write_json(path: str | pathlib.Path, write_options: DataFrameWriteOptions | None = None) None¶
Execute the DataFrame and write the results to a JSON file.
- Parameters:
path – Path of the JSON file to write.
write_options – Options that impact how the DataFrame is written.
- write_parquet(path: str | pathlib.Path, compression: str, compression_level: int | None = None, write_options: DataFrameWriteOptions | None = None) None¶
- write_parquet(path: str | pathlib.Path, compression: Compression = Compression.ZSTD, compression_level: int | None = None, write_options: DataFrameWriteOptions | None = None) None
- write_parquet(path: str | pathlib.Path, compression: ParquetWriterOptions, compression_level: None = None, write_options: DataFrameWriteOptions | None = None) None
Execute the DataFrame and write the results to a Parquet file.
Available compression types are:
“uncompressed”: No compression.
“snappy”: Snappy compression.
“gzip”: Gzip compression.
“brotli”: Brotli compression.
“lz4”: LZ4 compression.
“lz4_raw”: LZ4_RAW compression.
“zstd”: Zstandard compression.
LZO compression is not yet implemented in arrow-rs and is therefore excluded.
- Parameters:
path – Path of the Parquet file to write.
compression – Compression type to use. Default is “ZSTD”.
compression_level – Compression level to use. For ZSTD, the recommended range is 1 to 22, with the default being 4. Higher levels provide better compression but slower speed.
write_options – Options that impact how the DataFrame is written.
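Example (a minimal sketch; the output path is hypothetical):
from datafusion.dataframe import Compression

# Compression given as a string
df.write_parquet("/tmp/output.parquet", compression="snappy")

# Compression given as an enum value with an explicit level
df.write_parquet("/tmp/output.parquet", compression=Compression.ZSTD, compression_level=10)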
- write_parquet_with_options(path: str | pathlib.Path, options: ParquetWriterOptions, write_options: DataFrameWriteOptions | None = None) None¶
Execute the DataFrame and write the results to a Parquet file.
Allows advanced writer options to be set with ParquetWriterOptions.
- Parameters:
path – Path of the Parquet file to write.
options – Sets the writer parquet options (see ParquetWriterOptions).
write_options – Options that impact how the DataFrame is written.
- write_table(table_name: str, write_options: DataFrameWriteOptions | None = None) None¶
Execute the DataFrame and write the results to a table.
The table must be registered with the session to perform this operation. Not all table providers support writing operations. See the individual implementations for details.
- df¶
- class datafusion.dataframe.DataFrameWriteOptions(insert_operation: InsertOp | None = None, single_file_output: bool = False, partition_by: str | collections.abc.Sequence[str] | None = None, sort_by: datafusion.expr.Expr | datafusion.expr.SortExpr | collections.abc.Sequence[datafusion.expr.Expr] | collections.abc.Sequence[datafusion.expr.SortExpr] | None = None)¶
Writer options for DataFrame.
There is no guarantee the table provider supports all writer options. See the individual implementation and documentation for details.
Instantiate writer options for DataFrame.
- _raw_write_options¶
- class datafusion.dataframe.InsertOp(*args, **kwds)¶
Bases: enum.Enum
Insert operation mode.
These modes are used by the table writing feature to define how record batches should be written to a table.
- APPEND¶
Appends new rows to the existing table without modifying any existing rows.
- OVERWRITE¶
Overwrites all existing rows in the table with the new rows.
- REPLACE¶
Replace existing rows that collide with the inserted rows.
Replacement is typically based on a unique key or primary key.
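Example (a minimal sketch combining DataFrameWriteOptions with InsertOp; the table name is hypothetical and assumes the registered table provider supports append writes):
from datafusion.dataframe import DataFrameWriteOptions, InsertOp

options = DataFrameWriteOptions(insert_operation=InsertOp.APPEND)
df.write_table("sales", write_options=options)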
- class datafusion.dataframe.ParquetColumnOptions(encoding: str | None = None, dictionary_enabled: bool | None = None, compression: str | None = None, statistics_enabled: str | None = None, bloom_filter_enabled: bool | None = None, bloom_filter_fpp: float | None = None, bloom_filter_ndv: int | None = None)¶
Parquet options for individual columns.
Contains the available options that can be applied for an individual Parquet column, replacing the global options in ParquetWriterOptions.
Initialize the ParquetColumnOptions.
- Parameters:
encoding – Sets encoding for the column path. Valid values are: plain, plain_dictionary, rle, bit_packed, delta_binary_packed, delta_length_byte_array, delta_byte_array, rle_dictionary, and byte_stream_split. These values are not case-sensitive. If None, uses the default parquet options.
dictionary_enabled – Sets if dictionary encoding is enabled for the column path. If None, uses the default parquet options.
compression – Sets default parquet compression codec for the column path. Valid values are uncompressed, snappy, gzip(level), lzo, brotli(level), lz4, zstd(level), and lz4_raw. These values are not case-sensitive. If None, uses the default parquet options.
statistics_enabled – Sets if statistics are enabled for the column. Valid values are: none, chunk, and page. These values are not case-sensitive. If None, uses the default parquet options.
bloom_filter_enabled – Sets if bloom filter is enabled for the column path. If None, uses the default parquet options.
bloom_filter_fpp – Sets bloom filter false positive probability for the column path. If None, uses the default parquet options.
bloom_filter_ndv – Sets bloom filter number of distinct values. If None, uses the default parquet options.
- bloom_filter_enabled = None¶
- bloom_filter_fpp = None¶
- bloom_filter_ndv = None¶
- compression = None¶
- dictionary_enabled = None¶
- encoding = None¶
- statistics_enabled = None¶
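Example (a minimal sketch; the column name id is hypothetical, and the options take effect when passed through column_specific_options of ParquetWriterOptions below):
from datafusion.dataframe import ParquetColumnOptions

id_options = ParquetColumnOptions(
    compression="zstd(10)",     # per-column codec overriding the file-level default
    bloom_filter_enabled=True,
    statistics_enabled="page",
)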
- class datafusion.dataframe.ParquetWriterOptions(data_pagesize_limit: int = 1024 * 1024, write_batch_size: int = 1024, writer_version: str = '1.0', skip_arrow_metadata: bool = False, compression: str | None = 'zstd(3)', compression_level: int | None = None, dictionary_enabled: bool | None = True, dictionary_page_size_limit: int = 1024 * 1024, statistics_enabled: str | None = 'page', max_row_group_size: int = 1024 * 1024, created_by: str = 'datafusion-python', column_index_truncate_length: int | None = 64, statistics_truncate_length: int | None = None, data_page_row_count_limit: int = 20000, encoding: str | None = None, bloom_filter_on_write: bool = False, bloom_filter_fpp: float | None = None, bloom_filter_ndv: int | None = None, allow_single_file_parallelism: bool = True, maximum_parallel_row_group_writers: int = 1, maximum_buffered_record_batches_per_stream: int = 2, column_specific_options: dict[str, ParquetColumnOptions] | None = None)¶
Advanced parquet writer options.
Allows setting the writer options that apply to the entire file. Some options can also be set on a column-by-column basis, with the field column_specific_options (see ParquetColumnOptions).
Initialize the ParquetWriterOptions.
- Parameters:
data_pagesize_limit – Sets best effort maximum size of data page in bytes.
write_batch_size – Sets write_batch_size in bytes.
writer_version – Sets parquet writer version. Valid values are 1.0 and 2.0.
skip_arrow_metadata – Skip encoding the embedded arrow metadata in the KV_meta.
compression – Compression type to use. Default is zstd(3). Available compression types are:
uncompressed: No compression.
snappy: Snappy compression.
gzip(n): Gzip compression with level n.
brotli(n): Brotli compression with level n.
lz4: LZ4 compression.
lz4_raw: LZ4_RAW compression.
zstd(n): Zstandard compression with level n.
compression_level – Compression level to set.
dictionary_enabled – Sets if dictionary encoding is enabled. If None, uses the default parquet writer setting.
dictionary_page_size_limit – Sets best effort maximum dictionary page size, in bytes.
statistics_enabled – Sets if statistics are enabled for any column. Valid values are none, chunk, and page. If None, uses the default parquet writer setting.
max_row_group_size – Target maximum number of rows in each row group (defaults to 1M rows). Writing larger row groups requires more memory to write, but can get better compression and be faster to read.
created_by – Sets "created by" property.
column_index_truncate_length – Sets column index truncate length.
statistics_truncate_length – Sets statistics truncate length. If None, uses the default parquet writer setting.
data_page_row_count_limit – Sets best effort maximum number of rows in a data page.
encoding – Sets default encoding for any column. Valid values are plain, plain_dictionary, rle, bit_packed, delta_binary_packed, delta_length_byte_array, delta_byte_array, rle_dictionary, and byte_stream_split. If None, uses the default parquet writer setting.
bloom_filter_on_write – Write bloom filters for all columns when creating parquet files.
bloom_filter_fpp – Sets bloom filter false positive probability. If None, uses the default parquet writer setting.
bloom_filter_ndv – Sets bloom filter number of distinct values. If None, uses the default parquet writer setting.
allow_single_file_parallelism – Controls whether DataFusion will attempt to speed up writing parquet files by serializing them in parallel. Each column in each row group in each output file is serialized in parallel, leveraging a maximum possible core count of n_files * n_row_groups * n_columns.
maximum_parallel_row_group_writers – By default the parallel parquet writer is tuned for minimum memory usage in a streaming execution plan. You may see a performance benefit when writing large parquet files by increasing maximum_parallel_row_group_writers and maximum_buffered_record_batches_per_stream if your system has idle cores and can tolerate additional memory usage. Boosting these values is likely worthwhile when writing out already in-memory data, such as from a cached data frame.
maximum_buffered_record_batches_per_stream – See maximum_parallel_row_group_writers.
column_specific_options – Overrides options for specific columns. If a column is not a part of this dictionary, it will use the parameters provided here.
- allow_single_file_parallelism = True¶
- bloom_filter_fpp = None¶
- bloom_filter_ndv = None¶
- bloom_filter_on_write = False¶
- column_index_truncate_length = 64¶
- column_specific_options = None¶
- created_by = 'datafusion-python'¶
- data_page_row_count_limit = 20000¶
- data_pagesize_limit = 1048576¶
- dictionary_enabled = True¶
- dictionary_page_size_limit = 1048576¶
- encoding = None¶
- max_row_group_size = 1048576¶
- maximum_buffered_record_batches_per_stream = 2¶
- maximum_parallel_row_group_writers = 1¶
- skip_arrow_metadata = False¶
- statistics_enabled = 'page'¶
- statistics_truncate_length = None¶
- write_batch_size = 1024¶
- writer_version = '1.0'¶
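Example (a minimal sketch; the output path and column name are hypothetical):
from datafusion.dataframe import ParquetColumnOptions, ParquetWriterOptions

options = ParquetWriterOptions(
    compression="zstd(6)",
    max_row_group_size=256 * 1024,
    column_specific_options={"id": ParquetColumnOptions(bloom_filter_enabled=True)},
)
df.write_parquet_with_options("/tmp/output.parquet", options)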