object OrcUtils extends Logging

Linear Supertypes
Logging, AnyRef, Any

Value Members

  1. final def !=(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  2. final def ##(): Int
    Definition Classes
    AnyRef → Any
  3. final def ==(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  4. val CATALYST_TYPE_ATTRIBUTE_NAME: String
  5. def addSparkVersionMetadata(writer: Writer): Unit

    Adds metadata specifying the Spark version.
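
    A minimal usage sketch (the output path and schema are hypothetical, and OrcUtils lives in Spark's internal execution package, so this is illustrative only):

      import org.apache.hadoop.conf.Configuration
      import org.apache.hadoop.fs.Path
      import org.apache.orc.{OrcFile, TypeDescription}
      import org.apache.spark.sql.execution.datasources.orc.OrcUtils

      val conf = new Configuration()
      // Hypothetical output path and schema.
      val writer = OrcFile.createWriter(
        new Path("/tmp/out.orc"),
        OrcFile.writerOptions(conf).setSchema(TypeDescription.fromString("struct<id:bigint>")))

      OrcUtils.addSparkVersionMetadata(writer)  // adds the Spark version to the file's metadata
      writer.close()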

  6. final def asInstanceOf[T0]: T0
    Definition Classes
    Any
  7. def clone(): AnyRef
    Attributes
    protected[lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( ... ) @native()
  8. def createAggInternalRowFromFooter(reader: Reader, filePath: String, dataSchema: StructType, partitionSchema: StructType, aggregation: Aggregation, aggSchema: StructType, partitionValues: InternalRow): InternalRow

    When partial aggregates (Max/Min/Count) are pushed down to ORC, Spark does not need to read the data from ORC and aggregate it at the Spark layer. Instead, the partial aggregate (Max/Min/Count) results are obtained from the statistics in the ORC file footer and assembled into an InternalRow.

    NOTE: if statistics are missing from the ORC file footer, an exception is thrown.

    returns

    Aggregate results in the format of InternalRow
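
    For intuition, the aggregate values come straight from the per-column statistics stored in the ORC file footer. A minimal sketch of inspecting those statistics with the plain ORC reader API (the file path is hypothetical):

      import org.apache.hadoop.conf.Configuration
      import org.apache.hadoop.fs.Path
      import org.apache.orc.{IntegerColumnStatistics, OrcFile}

      val conf = new Configuration()
      val reader = OrcFile.createReader(
        new Path("/tmp/data/part-00000.orc"),   // hypothetical ORC file
        OrcFile.readerOptions(conf))

      // Column 0 is the root struct; column 1 is the first top-level field.
      reader.getStatistics()(1) match {
        case s: IntegerColumnStatistics =>
          println(s"min=${s.getMinimum} max=${s.getMaximum} count=${s.getNumberOfValues}")
        case other =>
          println(s"values=${other.getNumberOfValues}")
      }

    createAggInternalRowFromFooter performs this kind of lookup for each pushed-down Max/Min/Count and packs the values into an InternalRow matching aggSchema.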

  9. final def eq(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef
  10. def equals(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  11. val extensionsForCompressionCodecNames: Map[String, String]
  12. def finalize(): Unit
    Attributes
    protected[lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( classOf[java.lang.Throwable] )
  13. final def getClass(): Class[_]
    Definition Classes
    AnyRef → Any
    Annotations
    @native()
  14. def getOrcSchemaString(dt: DataType): String

    Given a Catalyst data type (typically a StructType), this method converts it to its corresponding ORC string representation.
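
    A minimal sketch (illustrative only, since OrcUtils is an internal Spark object):

      import org.apache.spark.sql.execution.datasources.orc.OrcUtils
      import org.apache.spark.sql.types._

      val schema = StructType(Seq(
        StructField("id", LongType),
        StructField("name", StringType)))

      // Expected to produce something like "struct<id:bigint,name:string>"
      val orcSchemaString: String = OrcUtils.getOrcSchemaString(schema)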

  15. def hashCode(): Int
    Definition Classes
    AnyRef → Any
    Annotations
    @native()
  16. def inferSchema(sparkSession: SparkSession, files: Seq[FileStatus], options: Map[String, String]): Option[StructType]
  17. def initializeLogIfNecessary(isInterpreter: Boolean, silent: Boolean): Boolean
    Attributes
    protected
    Definition Classes
    Logging
  18. def initializeLogIfNecessary(isInterpreter: Boolean): Unit
    Attributes
    protected
    Definition Classes
    Logging
  19. final def isInstanceOf[T0]: Boolean
    Definition Classes
    Any
  20. def isTraceEnabled(): Boolean
    Attributes
    protected
    Definition Classes
    Logging
  21. def listOrcFiles(pathStr: String, conf: Configuration): Seq[Path]
  22. def log: Logger
    Attributes
    protected
    Definition Classes
    Logging
  23. def logDebug(msg: ⇒ String, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    Logging
  24. def logDebug(msg: ⇒ String): Unit
    Attributes
    protected
    Definition Classes
    Logging
  25. def logError(msg: ⇒ String, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    Logging
  26. def logError(msg: ⇒ String): Unit
    Attributes
    protected
    Definition Classes
    Logging
  27. def logInfo(msg: ⇒ String, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    Logging
  28. def logInfo(msg: ⇒ String): Unit
    Attributes
    protected
    Definition Classes
    Logging
  29. def logName: String
    Attributes
    protected
    Definition Classes
    Logging
  30. def logTrace(msg: ⇒ String, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    Logging
  31. def logTrace(msg: ⇒ String): Unit
    Attributes
    protected
    Definition Classes
    Logging
  32. def logWarning(msg: ⇒ String, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    Logging
  33. def logWarning(msg: ⇒ String): Unit
    Attributes
    protected
    Definition Classes
    Logging
  34. final def ne(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef
  35. final def notify(): Unit
    Definition Classes
    AnyRef
    Annotations
    @native()
  36. final def notifyAll(): Unit
    Definition Classes
    AnyRef
    Annotations
    @native()
  37. def orcResultSchemaString(canPruneCols: Boolean, dataSchema: StructType, resultSchema: StructType, partitionSchema: StructType, conf: Configuration): String

    Returns the result schema to read from the ORC file. In addition, it sets the schema string in 'orc.mapred.input.schema' so the ORC reader can use it later.

    canPruneCols

    Flag deciding whether the pruned-columns schema is used as the result schema, or the entire dataSchema is used instead.

    dataSchema

    Schema of the ORC files.

    resultSchema

    Result data schema created after pruning columns.

    partitionSchema

    Schema of partitions.

    conf

    Hadoop Configuration.

    returns

    The result schema as a string.
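
    A hedged sketch of the call and its side effect on the Hadoop configuration (the schemas are hypothetical):

      import org.apache.hadoop.conf.Configuration
      import org.apache.spark.sql.execution.datasources.orc.OrcUtils
      import org.apache.spark.sql.types._

      val conf = new Configuration()
      val dataSchema      = StructType(Seq(StructField("id", LongType), StructField("name", StringType)))
      val resultSchema    = StructType(Seq(StructField("id", LongType)))    // columns left after pruning
      val partitionSchema = StructType(Seq(StructField("dt", StringType)))

      // canPruneCols = true, so the pruned resultSchema is used.
      val schemaString = OrcUtils.orcResultSchemaString(true, dataSchema, resultSchema, partitionSchema, conf)

      // The same string should now be visible to the ORC reader via the Hadoop conf.
      assert(conf.get("orc.mapred.input.schema") == schemaString)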

  38. def orcTypeDescription(dt: DataType): TypeDescription
  39. def readCatalystSchema(file: Path, conf: Configuration, ignoreCorruptFiles: Boolean): Option[StructType]
  40. def readOrcSchemasInParallel(files: Seq[FileStatus], conf: Configuration, ignoreCorruptFiles: Boolean): Seq[StructType]

    Reads ORC file schemas in a multi-threaded manner, using the native version of ORC. This is visible for testing.
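
    A hedged sketch (the directory path is hypothetical) of reading the footer schemas of all ORC files under a directory:

      import org.apache.hadoop.conf.Configuration
      import org.apache.hadoop.fs.Path
      import org.apache.spark.sql.execution.datasources.orc.OrcUtils
      import org.apache.spark.sql.types.StructType

      val conf = new Configuration()
      val dir  = new Path("/tmp/orc-table")   // hypothetical directory of ORC files
      val fs   = dir.getFileSystem(conf)
      val files = fs.listStatus(dir).filter(_.getPath.getName.endsWith(".orc")).toSeq

      val schemas: Seq[StructType] =
        OrcUtils.readOrcSchemasInParallel(files, conf, ignoreCorruptFiles = true)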

  41. def readSchema(sparkSession: SparkSession, files: Seq[FileStatus], options: Map[String, String]): Option[StructType]
  42. def readSchema(file: Path, conf: Configuration, ignoreCorruptFiles: Boolean): Option[TypeDescription]
  43. def requestedColumnIds(isCaseSensitive: Boolean, dataSchema: StructType, requiredSchema: StructType, orcSchema: TypeDescription, conf: Configuration): Option[(Array[Int], Boolean)]

    returns

    The combination of requested column ids from the given ORC file and a boolean flag indicating whether column pruning is allowed. A requested column id can be -1, which means the requested column does not exist in the ORC file. Returns None if the given ORC file is empty.
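
    A hedged sketch of mapping a pruned Catalyst schema onto ORC column ids (the schemas are hypothetical, and the expected result in the comment is inferred from the description above):

      import org.apache.hadoop.conf.Configuration
      import org.apache.orc.TypeDescription
      import org.apache.spark.sql.execution.datasources.orc.OrcUtils
      import org.apache.spark.sql.types._

      val conf = new Configuration()
      val dataSchema     = StructType(Seq(StructField("id", LongType), StructField("name", StringType)))
      val requiredSchema = StructType(Seq(StructField("name", StringType)))   // only `name` is needed
      val orcSchema      = TypeDescription.fromString("struct<id:bigint,name:string>")

      // isCaseSensitive = false. A result such as Some((Array(1), true)) would mean `name`
      // maps to ORC column 1 and column pruning is allowed; None means the file is empty.
      val result: Option[(Array[Int], Boolean)] =
        OrcUtils.requestedColumnIds(false, dataSchema, requiredSchema, orcSchema, conf)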

  44. def supportColumnarReads(dataType: DataType, nestedColumnEnabled: Boolean): Boolean

    Checks if dataType supports columnar reads.

    dataType

    Data type of the ORC files.

    nestedColumnEnabled

    True if columnar reads are enabled for nested column types.

    returns

    True if the data type supports columnar reads.
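
    A hedged sketch; the expected behavior in the comments is inferred from the parameter descriptions above rather than verified:

      import org.apache.spark.sql.execution.datasources.orc.OrcUtils
      import org.apache.spark.sql.types._

      // Atomic column types are expected to support columnar reads regardless of the flag.
      OrcUtils.supportColumnarReads(LongType, nestedColumnEnabled = false)

      // Nested column types (arrays, maps, structs) are expected to require nestedColumnEnabled.
      OrcUtils.supportColumnarReads(ArrayType(LongType), nestedColumnEnabled = false)
      OrcUtils.supportColumnarReads(ArrayType(LongType), nestedColumnEnabled = true)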

  45. final def synchronized[T0](arg0: ⇒ T0): T0
    Definition Classes
    AnyRef
  46. def toCatalystSchema(schema: TypeDescription): StructType
  47. def toString(): String
    Definition Classes
    AnyRef → Any
  48. final def wait(): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  49. final def wait(arg0: Long, arg1: Int): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  50. final def wait(arg0: Long): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws( ... ) @native()
