pyspark.sql.DataFrameReader.parquet
DataFrameReader.parquet(*paths, **options)

Loads Parquet files, returning the result as a DataFrame.

Parameters
mergeSchema – sets whether we should merge schemas collected from all Parquet part-files. This will override spark.sql.parquet.mergeSchema; if unset, the value of spark.sql.parquet.mergeSchema is used as the default.
pathGlobFilter – an optional glob pattern to include only files with paths matching the pattern. The syntax follows org.apache.hadoop.fs.GlobFilter. It does not change the behavior of partition discovery.
recursiveFileLookup – recursively scan a directory for files. Using this option disables partition discovery.
Examples

>>> df = spark.read.parquet('python/test_support/sql/parquet_partitioned')
>>> df.dtypes
[('name', 'string'), ('year', 'int'), ('month', 'int'), ('day', 'int')]
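The options above can be passed either as keyword arguments to parquet() or through the generic option() interface. A minimal sketch, assuming an active SparkSession bound to spark (as in the example above) and a hypothetical directory /data/events containing Parquet part-files:

>>> # Merge schemas across part-files and read only *.parquet files,
>>> # skipping e.g. _SUCCESS markers; /data/events is hypothetical.
>>> df = spark.read.parquet(
...     '/data/events',
...     mergeSchema=True,
...     pathGlobFilter='*.parquet',
... )
>>> # Equivalently, via option(); note that enabling
>>> # recursiveFileLookup disables partition discovery.
>>> df = (
...     spark.read
...     .option('mergeSchema', 'true')
...     .option('recursiveFileLookup', 'true')
...     .parquet('/data/events')
... )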
New in version 1.4.