# Parquet file format

Apache Parquet is a free and open-source column-oriented data storage format in the Apache Hadoop ecosystem. `dlt` is capable of storing data in this format when configured to do so.

To use this format, you need the `pyarrow` package. You can get this package as a `dlt` extra as well:

```sh
pip install "dlt[parquet]"
```
## Supported destinations

Supported by: BigQuery, DuckDB, Snowflake, Filesystem, Athena, Databricks, Synapse
## How to configure

There are several ways of configuring `dlt` to use the `parquet` file format for the normalization step and to store your data at the destination (a complete runnable sketch follows this list):

- You can set the `loader_file_format` argument to `parquet` in the run command:

  ```py
  info = pipeline.run(some_source(), loader_file_format="parquet")
  ```

- You can set the `loader_file_format` in `config.toml` or `secrets.toml`:

  ```toml
  [normalize]
  loader_file_format="parquet"
  ```

- You can set the `loader_file_format` via environment variable:

  ```sh
  export NORMALIZE__LOADER_FILE_FORMAT="parquet"
  ```

- You can set the file format directly in the resource decorator:

  ```py
  @dlt.resource(file_format="parquet")
  def generate_rows(nr):
      for i in range(nr):
          yield {"id": i}
  ```
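For illustration, here is a minimal end-to-end sketch using the run-command option above. The resource body, table name, and the DuckDB destination are assumptions for the example, not requirements:

```py
import dlt

@dlt.resource(table_name="numbers")
def some_source():
    # A tiny demo resource; any source or resource works the same way.
    yield from ({"id": i} for i in range(100))

pipeline = dlt.pipeline(pipeline_name="parquet_demo", destination="duckdb")
# Normalize and load the data as parquet files.
info = pipeline.run(some_source(), loader_file_format="parquet")
print(info)
```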
## Destination AutoConfig

`dlt` uses destination capabilities to configure the parquet writer:

- It uses decimal and wei precision to pick the right decimal type and sets precision and scale.
- It uses timestamp precision to pick the right timestamp type resolution (seconds, micro, or nano).
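As a rough illustration of what that mapping produces, here is a sketch in terms of pyarrow types; the concrete precision and scale values are assumptions for the example, not what any particular destination reports:

```py
import pyarrow as pa

# The writer derives an arrow decimal type from the destination's
# (precision, scale) capability, e.g.:
decimal_type = pa.decimal128(38, 9)
# ...and a timestamp resolution from the timestamp precision capability:
timestamp_type = pa.timestamp("us", tz="UTC")  # microseconds, UTC-adjusted
print(decimal_type, timestamp_type)
```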
## Writer settings

Under the hood, `dlt` uses the pyarrow parquet writer to create the files. The following options can be used to change the behavior of the writer:

- `flavor`: Sanitize schema or set other compatibility options to work with various target systems. Defaults to None, which is the pyarrow default.
- `version`: Determine which Parquet logical types are available for use, whether the reduced set from the Parquet 1.x.x format or the expanded logical types added in later format versions. Defaults to "2.6".
- `data_page_size`: Set a target threshold for the approximate encoded size of data pages within a column chunk (in bytes). Defaults to None, which is the pyarrow default.
- `row_group_size`: Set the number of rows in a row group. See the Row group size section below for how this can optimize parallel processing of queries on your destination over the `pyarrow` default.
- `timestamp_timezone`: A string specifying the timezone; defaults to UTC.
- `coerce_timestamps`: The resolution to which timestamps are coerced; choose from "s", "ms", "us", "ns".
- `allow_truncated_timestamps`: Raise if precision is lost on a truncated timestamp.
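To make the options concrete, here is a hedged sketch of how they correspond to like-named arguments of `pyarrow.parquet.write_table` (`dlt` wires this up internally; the table and values are only for illustration):

```py
import pyarrow as pa
import pyarrow.parquet as pq

table = pa.table({"id": [1, 2, 3]})
# Each dlt writer setting maps to the pyarrow argument of the same name.
pq.write_table(
    table,
    "example.parquet",
    flavor=None,                      # flavor
    version="2.6",                    # version
    data_page_size=1048576,           # data_page_size (bytes)
    row_group_size=100_000,           # row_group_size
    coerce_timestamps="us",           # coerce_timestamps
    allow_truncated_timestamps=True,  # allow_truncated_timestamps
)
```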
The default Parquet version used by `dlt` is 2.4. It coerces timestamps to microseconds and silently truncates nanoseconds. This setting provides the best interoperability with database systems, including loading pandas frames, which have nanosecond resolution by default.
Read the pyarrow parquet docs to learn more about these settings.
Example:

```toml
[normalize.data_writer]
# example writer settings
flavor="spark"
version="2.4"
data_page_size=1048576
timestamp_timezone="Europe/Berlin"
```
Or using environment variables:

```sh
NORMALIZE__DATA_WRITER__FLAVOR
NORMALIZE__DATA_WRITER__VERSION
NORMALIZE__DATA_WRITER__DATA_PAGE_SIZE
NORMALIZE__DATA_WRITER__TIMESTAMP_TIMEZONE
```
## Timestamps and timezones

`dlt` adds a timezone (UTC adjustment) to all timestamps, regardless of precision (from seconds to nanoseconds). `dlt` will also create timezone-aware timestamp columns in the destinations; DuckDB is an exception here.
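You can see this in the arrow schema of a produced file; the file path below is hypothetical:

```py
import pyarrow.parquet as pq

# Timestamp columns in dlt-produced parquet files carry a UTC adjustment,
# e.g. they read back as timestamp[us, tz=UTC].
print(pq.read_schema("load_file.parquet"))
```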
## Disable timezones / UTC adjustment flags

You can generate parquet files without timezone adjustment information in two ways:

- Set the flavor to spark. All timestamps will be generated via the deprecated `int96` physical data type, without the logical one.
- Set the `timestamp_timezone` to an empty string (i.e. `DATA_WRITER__TIMESTAMP_TIMEZONE=""`) to generate a logical type without UTC adjustment.
To the best of our knowledge, arrow will convert your timezone-aware datetimes to UTC and store them in parquet without timezone information.
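A quick way to check the effect, assuming a hypothetical file and column name:

```py
import pyarrow.parquet as pq

# With timestamp_timezone="" the stored logical type has no tz attribute,
# e.g. timestamp[us] instead of timestamp[us, tz=UTC].
print(pq.read_schema("no_tz_file.parquet").field("created_at").type)
```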
## Row group size

The `pyarrow` parquet writer writes each item, i.e. table or record batch, in a separate row group. This may lead to many small row groups, which may not be optimal for certain query engines. For example, `duckdb` parallelizes on a row group. `dlt` allows controlling the size of the row group by buffering and concatenating tables and batches before they are written. The concatenation is done as a zero-copy operation to save memory. You can control the size of the row group by setting the maximum number of rows kept in the buffer:

```toml
[extract.data_writer]
buffer_max_items=10e6
```

Mind that `dlt` holds the tables in memory. Thus, the 10,000,000 rows in the example above may consume a significant amount of RAM.
The `row_group_size` configuration setting has limited utility with the `pyarrow` writer. It may be useful when you write single very large pyarrow tables or when your in-memory buffer is really large.
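To verify how buffering affected the output, you can inspect the row groups of a produced file; the path is again hypothetical:

```py
import pyarrow.parquet as pq

pf = pq.ParquetFile("load_file.parquet")
# Number of row groups and rows per group; fewer, larger groups
# generally parallelize better on engines like duckdb.
print(pf.num_row_groups)
print([pf.metadata.row_group(i).num_rows for i in range(pf.num_row_groups)])
```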