org.apache.spark.sql.execution.datasources.parquet
ParquetFileFormat
Companion object ParquetFileFormat
class ParquetFileFormat extends FileFormat with DataSourceRegister with Logging with Serializable
Inheritance: ParquetFileFormat → Serializable (scala) → Serializable (java.io) → Logging → DataSourceRegister → FileFormat → AnyRef → Any
Instance Constructors
- new ParquetFileFormat()
Value Members
- final def !=(arg0: Any): Boolean
  - Definition Classes: AnyRef → Any
- final def ##(): Int
  - Definition Classes: AnyRef → Any
- final def ==(arg0: Any): Boolean
  - Definition Classes: AnyRef → Any
- final def asInstanceOf[T0]: T0
  - Definition Classes: Any
- def buildReader(sparkSession: SparkSession, dataSchema: StructType, partitionSchema: StructType, requiredSchema: StructType, filters: Seq[Filter], options: Map[String, String], hadoopConf: Configuration): (PartitionedFile) ⇒ Iterator[InternalRow]
  Returns a function that can be used to read a single file in as an Iterator of InternalRow.
  - dataSchema: The global data schema. It can be either specified by the user, or reconciled/merged from all underlying data files. If any partition columns are contained in the files, they are preserved in this schema.
  - partitionSchema: The schema of the partition column row that will be present in each PartitionedFile. These columns should be appended to the rows that are produced by the iterator.
  - requiredSchema: The schema of the data that should be output for each row. This may be a subset of the columns that are present in the file if column pruning has occurred.
  - filters: A set of filters that can optionally be used to reduce the number of rows output.
  - options: A set of string -> string configuration options.
  - Attributes: protected
  - Definition Classes: FileFormat
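  For implementers, a minimal sketch of what an override of this contract can look like. The LineFileFormat below and its one-column line-per-row format are hypothetical, and shipping hadoopConf to executors is elided:

    import org.apache.hadoop.conf.Configuration
    import org.apache.hadoop.fs.{FileStatus, Path}
    import org.apache.hadoop.mapreduce.Job
    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.catalyst.InternalRow
    import org.apache.spark.sql.execution.datasources.{FileFormat, OutputWriterFactory, PartitionedFile}
    import org.apache.spark.sql.sources.Filter
    import org.apache.spark.sql.types.{StringType, StructField, StructType}
    import org.apache.spark.unsafe.types.UTF8String

    // Hypothetical read-only format: one UTF-8 line per row, single "value" column.
    class LineFileFormat extends FileFormat {

      override def inferSchema(
          sparkSession: SparkSession,
          options: Map[String, String],
          files: Seq[FileStatus]): Option[StructType] =
        Some(StructType(StructField("value", StringType) :: Nil))

      override def prepareWrite(
          sparkSession: SparkSession,
          job: Job,
          options: Map[String, String],
          dataSchema: StructType): OutputWriterFactory =
        throw new UnsupportedOperationException("read-only example")

      override protected def buildReader(
          sparkSession: SparkSession,
          dataSchema: StructType,
          partitionSchema: StructType,
          requiredSchema: StructType,
          filters: Seq[Filter],
          options: Map[String, String],
          hadoopConf: Configuration): PartitionedFile => Iterator[InternalRow] = {
        // NOTE: a real implementation must ship hadoopConf to executors in a
        // serializable wrapper (Configuration itself is not serializable).
        (file: PartitionedFile) => {
          val path = new Path(file.filePath.toString)
          val in = path.getFileSystem(hadoopConf).open(path)
          // One single-column InternalRow per line; the partition values in
          // file.partitionValues are appended by buildReaderWithPartitionValues.
          scala.io.Source.fromInputStream(in, "UTF-8").getLines()
            .map(line => InternalRow(UTF8String.fromString(line)))
        }
      }
    }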
- def buildReaderWithPartitionValues(sparkSession: SparkSession, dataSchema: StructType, partitionSchema: StructType, requiredSchema: StructType, filters: Seq[Filter], options: Map[String, String], hadoopConf: Configuration): (PartitionedFile) ⇒ Iterator[InternalRow]
  Builds the reader function. Unlike buildReader, the rows produced by the returned reader already include the partition values appended.
  - Definition Classes: ParquetFileFormat → FileFormat
  - Note: It is required to pass FileFormat.OPTION_RETURNING_BATCH in options, to indicate whether the reader should return row or columnar output. If the caller can handle both, pass FileFormat.OPTION_RETURNING_BATCH -> supportBatch(sparkSession, StructType(requiredSchema.fields ++ partitionSchema.fields)) as the option. It should be set to "true" only if this reader can support it.
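  Following the note above, a sketch of how a caller can supply this option; the surrounding values (sparkSession, the schemas, filters, options, hadoopConf) are assumed to be in scope:

    import org.apache.spark.sql.execution.datasources.FileFormat
    import org.apache.spark.sql.types.StructType

    val fileFormat = new ParquetFileFormat()
    val outputSchema = StructType(requiredSchema.fields ++ partitionSchema.fields)

    // Declare whether this caller can consume columnar batches; "true" is
    // only legal if the caller actually handles batches.
    val readerOptions = options +
      (FileFormat.OPTION_RETURNING_BATCH ->
        fileFormat.supportBatch(sparkSession, outputSchema).toString)

    val readFunc = fileFormat.buildReaderWithPartitionValues(
      sparkSession, dataSchema, partitionSchema, requiredSchema,
      filters, readerOptions, hadoopConf)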
- def clone(): AnyRef
  - Attributes: protected[lang]
  - Definition Classes: AnyRef
  - Annotations: @throws( ... ) @native()
- final def eq(arg0: AnyRef): Boolean
  - Definition Classes: AnyRef
- def equals(other: Any): Boolean
  - Definition Classes: ParquetFileFormat → AnyRef → Any
- def finalize(): Unit
  - Attributes: protected[lang]
  - Definition Classes: AnyRef
  - Annotations: @throws( classOf[java.lang.Throwable] )
- final def getClass(): Class[_]
  - Definition Classes: AnyRef → Any
  - Annotations: @native()
- def hashCode(): Int
  - Definition Classes: ParquetFileFormat → AnyRef → Any
- def inferSchema(sparkSession: SparkSession, parameters: Map[String, String], files: Seq[FileStatus]): Option[StructType]
  When possible, this method should return the schema of the given files. When the format does not support inference, or no valid files are given, it should return None, in which case Spark will require that the user specify the schema manually.
  - Definition Classes: ParquetFileFormat → FileFormat
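  This method is what Spark consults when the user supplies no schema; a brief illustration through the public read API (paths and names are placeholders):

    // Schema supplied by the user: inference is skipped entirely.
    val explicit = spark.read.schema(eventSchema).parquet("/data/events")

    // No schema supplied: Spark calls inferSchema, which for Parquet reads
    // file footers and, when asked, merges the schemas of all files.
    val inferred = spark.read
      .option("mergeSchema", "true")
      .parquet("/data/events")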
- def initializeLogIfNecessary(isInterpreter: Boolean, silent: Boolean): Boolean
  - Attributes: protected
  - Definition Classes: Logging
- def initializeLogIfNecessary(isInterpreter: Boolean): Unit
  - Attributes: protected
  - Definition Classes: Logging
- final def isInstanceOf[T0]: Boolean
  - Definition Classes: Any
- def isSplitable(sparkSession: SparkSession, options: Map[String, String], path: Path): Boolean
  Returns whether a file with the given path can be split or not.
  - Definition Classes: ParquetFileFormat → FileFormat
- def isTraceEnabled(): Boolean
  - Attributes: protected
  - Definition Classes: Logging
- def log: Logger
  - Attributes: protected
  - Definition Classes: Logging
- def logDebug(msg: ⇒ String, throwable: Throwable): Unit
  - Attributes: protected
  - Definition Classes: Logging
- def logDebug(msg: ⇒ String): Unit
  - Attributes: protected
  - Definition Classes: Logging
- def logError(msg: ⇒ String, throwable: Throwable): Unit
  - Attributes: protected
  - Definition Classes: Logging
- def logError(msg: ⇒ String): Unit
  - Attributes: protected
  - Definition Classes: Logging
- def logInfo(msg: ⇒ String, throwable: Throwable): Unit
  - Attributes: protected
  - Definition Classes: Logging
- def logInfo(msg: ⇒ String): Unit
  - Attributes: protected
  - Definition Classes: Logging
- def logName: String
  - Attributes: protected
  - Definition Classes: Logging
- def logTrace(msg: ⇒ String, throwable: Throwable): Unit
  - Attributes: protected
  - Definition Classes: Logging
- def logTrace(msg: ⇒ String): Unit
  - Attributes: protected
  - Definition Classes: Logging
- def logWarning(msg: ⇒ String, throwable: Throwable): Unit
  - Attributes: protected
  - Definition Classes: Logging
- def logWarning(msg: ⇒ String): Unit
  - Attributes: protected
  - Definition Classes: Logging
- final def ne(arg0: AnyRef): Boolean
  - Definition Classes: AnyRef
- final def notify(): Unit
  - Definition Classes: AnyRef
  - Annotations: @native()
- final def notifyAll(): Unit
  - Definition Classes: AnyRef
  - Annotations: @native()
- def prepareWrite(sparkSession: SparkSession, job: Job, options: Map[String, String], dataSchema: StructType): OutputWriterFactory
  Prepares a write job and returns an OutputWriterFactory. Client-side job preparation can be put here. For example, a user-defined output committer can be configured here by setting the output committer class in the conf of spark.sql.sources.outputCommitterClass.
  - Definition Classes: ParquetFileFormat → FileFormat
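  A sketch of the committer configuration mentioned above; the committer class name is hypothetical, and it must be on the classpath (for Parquet, typically a ParquetOutputCommitter subclass):

    // Session-level SQL confs are propagated into the Hadoop conf used by
    // write jobs, so the committer can be configured like this.
    spark.conf.set(
      "spark.sql.sources.outputCommitterClass",
      "com.example.MyParquetOutputCommitter")

    df.write.parquet("/data/events_out")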
- def shortName(): String
  The string that represents the format that this data source provider uses. This is overridden by children to provide a nice alias for the data source. For example:

    override def shortName(): String = "parquet"

  - Definition Classes: ParquetFileFormat → DataSourceRegister
  - Since: 1.5.0
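  This alias is what makes the following two reads equivalent (the path is illustrative):

    // The registered short name selects this provider...
    val viaAlias = spark.read.format("parquet").load("/data/events")

    // ...and also backs the dedicated convenience method.
    val viaMethod = spark.read.parquet("/data/events")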
- def supportBatch(sparkSession: SparkSession, schema: StructType): Boolean
  Returns whether the reader can return the rows as batch or not.
  - Definition Classes: ParquetFileFormat → FileFormat
- def supportDataType(dataType: DataType): Boolean
  Returns whether this format supports the given DataType in the read/write path. By default all data types are supported.
  - Definition Classes: ParquetFileFormat → FileFormat
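  A quick illustration of the check; the results shown are what current Spark releases return, and interval support varies by version:

    import org.apache.spark.sql.types._

    val fmt = new ParquetFileFormat()
    fmt.supportDataType(ArrayType(IntegerType))  // true: nested types are supported
    fmt.supportDataType(CalendarIntervalType)    // false: interval data cannot be saved to Parquet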
- def supportFieldName(name: String): Boolean
  Returns whether this format supports the given field name in the read/write path. By default all field names are supported.
  - Definition Classes: FileFormat
- final def synchronized[T0](arg0: ⇒ T0): T0
  - Definition Classes: AnyRef
- def toString(): String
  - Definition Classes: ParquetFileFormat → AnyRef → Any
- def vectorTypes(requiredSchema: StructType, partitionSchema: StructType, sqlConf: SQLConf): Option[Seq[String]]
  Returns concrete column vector class names for each column to be used in a columnar batch, if this format supports returning columnar batches.
  - Definition Classes: ParquetFileFormat → FileFormat
- final def wait(): Unit
  - Definition Classes: AnyRef
  - Annotations: @throws( ... )
- final def wait(arg0: Long, arg1: Int): Unit
  - Definition Classes: AnyRef
  - Annotations: @throws( ... )
- final def wait(arg0: Long): Unit
  - Definition Classes: AnyRef
  - Annotations: @throws( ... ) @native()