package logical
Type Members
-
case class
AddColumns(table: LogicalPlan, columnsToAdd: Seq[QualifiedColType]) extends LogicalPlan with AlterTableCommand with Product with Serializable
The logical plan of the ALTER TABLE ... ADD COLUMNS command.
-
case class
AddPartitions(table: LogicalPlan, parts: Seq[PartitionSpec], ifNotExists: Boolean) extends LogicalPlan with V2PartitionCommand with Product with Serializable
The logical plan of the ALTER TABLE ADD PARTITION command.
The syntax of this command is:
ALTER TABLE table ADD [IF NOT EXISTS] PARTITION spec1 [LOCATION 'loc1'][, PARTITION spec2 [LOCATION 'loc2'], ...];
-
case class
Aggregate(groupingExpressions: Seq[Expression], aggregateExpressions: Seq[NamedExpression], child: LogicalPlan) extends LogicalPlan with UnaryNode with Product with Serializable
This is a Group by operator with the aggregate functions and projections.
- groupingExpressions
expressions for grouping keys
- aggregateExpressions
expressions for a project list, which could contain AggregateExpressions. Note: Currently, aggregateExpressions is the project list of this Group by operator. Before separating projection from grouping and aggregate, we should avoid expression-level optimization on aggregateExpressions, which could reference an expression in groupingExpressions. For example, see the rule org.apache.spark.sql.catalyst.optimizer.SimplifyExtractValueOps
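The grouping-then-project behavior described above can be sketched with plain Scala collections (an illustrative model, not Catalyst's implementation; the Emp type and field names are invented for the example):

```scala
// Minimal sketch of Aggregate semantics: group rows by the grouping keys,
// then evaluate the aggregate/project list once per group.
case class Emp(dept: String, salary: Int)

def aggregate(rows: Seq[Emp]): Map[String, (Int, Double)] =
  rows.groupBy(_.dept).map { case (dept, rs) =>
    // the "project list" of this group: total and average salary
    val total = rs.map(_.salary).sum
    (dept, (total, total.toDouble / rs.size))
  }

val stats = aggregate(Seq(Emp("eng", 100), Emp("eng", 200), Emp("ops", 50)))
// stats("eng") == (300, 150.0)
```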
-
case class
AlterColumn(table: LogicalPlan, column: FieldName, dataType: Option[DataType], nullable: Option[Boolean], comment: Option[String], position: Option[FieldPosition], setDefaultExpression: Option[String]) extends LogicalPlan with AlterTableCommand with Product with Serializable
The logical plan of the ALTER TABLE ... ALTER COLUMN command.
-
trait
AlterTableCommand extends LogicalPlan with UnaryCommand
The base trait for commands that need to alter a v2 table with TableChanges.
-
case class
AlterViewAs(child: LogicalPlan, originalText: String, query: LogicalPlan) extends LogicalPlan with BinaryCommand with Product with Serializable
The logical plan of the ALTER VIEW ... AS command.
-
trait
AnalysisHelper extends QueryPlan[LogicalPlan]
AnalysisHelper defines some infrastructure for the query analyzer. In particular, in query analysis we don't want to repeatedly re-analyze sub-plans that have previously been analyzed.
This trait defines a flag, analyzed, that can be set to true once analysis is done on the tree. This also provides a set of resolve methods that do not recurse down to sub-plans that have the analyzed flag set to true. The analyzer rules should use the various resolve methods, in lieu of the various transform methods defined in org.apache.spark.sql.catalyst.trees.TreeNode and QueryPlan.
To prevent accidental use of the transform methods, this trait also overrides the transform methods to throw exceptions in test mode, if they are used in the analyzer.
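The skip-analyzed-subtrees idea can be sketched on a toy tree (illustrative only; Node and resolveOperators here are invented names, not Catalyst's API):

```scala
// Sketch of AnalysisHelper's core trick: a node carries an `analyzed` flag,
// and resolve-style traversals do not recurse into subtrees already marked
// analyzed, so previously analyzed sub-plans are never re-visited.
case class Node(name: String, children: List[Node], analyzed: Boolean = false)

def resolveOperators(n: Node)(rule: Node => Node): Node =
  if (n.analyzed) n // skip: this subtree was already analyzed
  else {
    val newChildren = n.children.map(c => resolveOperators(c)(rule))
    rule(n.copy(children = newChildren))
  }

var visits = 0
val tree = Node("root", List(Node("done", Nil, analyzed = true), Node("todo", Nil)))
val out = resolveOperators(tree) { n => visits += 1; n }
// only "todo" and "root" are visited; the analyzed child is skipped
```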
-
trait
AnalysisOnlyCommand extends LogicalPlan with Command
A logical node that can be used for a command that requires its children to be only analyzed, but not optimized. An example would be "create view": we don't need to optimize the view subtree because we will just store the entire view text as is in the catalog.
The way we do this is by setting the children to empty once the subtree is analyzed. This will prevent the optimizer (or the analyzer from that point on) from traversing into the children.
There's a corresponding rule org.apache.spark.sql.catalyst.analysis.Analyzer.HandleSpecialCommand that marks these commands analyzed.
-
case class
AnalyzeColumn(child: LogicalPlan, columnNames: Option[Seq[String]], allColumns: Boolean) extends LogicalPlan with UnaryCommand with Product with Serializable
The logical plan of the ANALYZE TABLE FOR COLUMNS command.
-
case class
AnalyzeTable(child: LogicalPlan, partitionSpec: Map[String, Option[String]], noScan: Boolean) extends LogicalPlan with UnaryCommand with Product with Serializable
The logical plan of the ANALYZE TABLE command.
-
case class
AnalyzeTables(namespace: LogicalPlan, noScan: Boolean) extends LogicalPlan with UnaryCommand with Product with Serializable
The logical plan of the ANALYZE TABLES command.
-
case class
AppendColumns(func: (Any) ⇒ Any, argumentClass: Class[_], argumentSchema: StructType, deserializer: Expression, serializer: Seq[NamedExpression], child: LogicalPlan) extends LogicalPlan with UnaryNode with Product with Serializable
A relation produced by applying func to each element of the child, concatenating the resulting columns at the end of the input row.
- deserializer
used to extract the input to func from an input row.
- serializer
used to serialize the output of func.
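The deserialize/apply/serialize pipeline can be sketched over plain rows (an illustrative model with invented names, not the Catalyst operator itself):

```scala
// Sketch of AppendColumns: deserialize each input row into an object, apply
// `func`, serialize the result, and concatenate it at the end of the row.
def appendColumns[T, U](rows: Seq[Seq[Any]],
                        deserialize: Seq[Any] => T,
                        func: T => U,
                        serialize: U => Seq[Any]): Seq[Seq[Any]] =
  rows.map(r => r ++ serialize(func(deserialize(r))))

val out = appendColumns[Int, Int](
  Seq(Seq(1), Seq(2)),
  r => r.head.asInstanceOf[Int], // deserializer: row -> object
  x => x * 10,                   // func
  u => Seq(u))                   // serializer: object -> extra columns
// out == Seq(Seq(1, 10), Seq(2, 20))
```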
-
case class
AppendColumnsWithObject(func: (Any) ⇒ Any, childSerializer: Seq[NamedExpression], newColumnsSerializer: Seq[NamedExpression], child: LogicalPlan) extends LogicalPlan with ObjectConsumer with Product with Serializable
An optimized version of AppendColumns that can be executed on deserialized objects directly.
-
case class
AppendData(table: NamedRelation, query: LogicalPlan, writeOptions: Map[String, String], isByName: Boolean, write: Option[Write] = None, analyzedQuery: Option[LogicalPlan] = None) extends LogicalPlan with V2WriteCommand with Product with Serializable
Append data to an existing table.
-
case class
ArrowEvalPython(udfs: Seq[PythonUDF], resultAttrs: Seq[Attribute], child: LogicalPlan, evalType: Int) extends LogicalPlan with BaseEvalPython with Product with Serializable
A logical plan that evaluates a PythonUDF with Apache Arrow.
-
case class
ArrowEvalPythonUDTF(udtf: PythonUDTF, requiredChildOutput: Seq[Attribute], resultAttrs: Seq[Attribute], child: LogicalPlan, evalType: Int) extends LogicalPlan with BaseEvalPythonUDTF with Product with Serializable
A logical plan that evaluates a PythonUDTF using Apache Arrow.
- udtf
the user-defined Python function
- requiredChildOutput
the required output of the child plan. It's used for omitting data generation that will be discarded next by a projection.
- resultAttrs
the output schema of the Python UDTF.
- child
the child plan
-
case class
AsOfJoin(left: LogicalPlan, right: LogicalPlan, asOfCondition: Expression, condition: Option[Expression], joinType: JoinType, orderExpression: Expression, toleranceAssertion: Option[Expression]) extends LogicalPlan with BinaryNode with Product with Serializable
A logical plan for as-of join.
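As-of join semantics can be sketched as a backward nearest-match (an illustrative model, not the Catalyst operator; a real AsOfJoin also supports tolerance and other match directions):

```scala
// Sketch of a backward as-of join: for each left timestamp, pick the right
// row whose time is closest without exceeding it, if any.
def asOfJoin(left: Seq[Long], right: Seq[(Long, String)]): Seq[(Long, Option[String])] =
  left.map { t =>
    val matches = right.filter(_._1 <= t)
    (t, if (matches.isEmpty) None else Some(matches.maxBy(_._1)._2))
  }

val joined = asOfJoin(Seq(5L, 12L), Seq((3L, "a"), (10L, "b")))
// joined == Seq((5L, Some("a")), (12L, Some("b")))
```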
- case class Assignment(key: Expression, value: Expression) extends Expression with Unevaluable with BinaryLike[Expression] with Product with Serializable
-
case class
AttachDistributedSequence(sequenceAttr: Attribute, child: LogicalPlan) extends LogicalPlan with UnaryNode with Product with Serializable
A logical plan that adds a new long column, with the given name, that increases one by one. This is for the 'distributed-sequence' default index in pandas API on Spark.
- trait BaseEvalPython extends LogicalPlan with UnaryNode
- trait BaseEvalPythonUDTF extends LogicalPlan with UnaryNode
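Locally, the AttachDistributedSequence behavior above amounts to numbering rows from zero (a toy sketch; the real node does this across distributed partitions):

```scala
// Sketch: attach a long sequence column that increases one by one per row.
val withSeq = Seq("a", "b", "c").zipWithIndex.map { case (v, i) => (i.toLong, v) }
// withSeq == Seq((0L, "a"), (1L, "b"), (2L, "c"))
```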
-
case class
BatchEvalPython(udfs: Seq[PythonUDF], resultAttrs: Seq[Attribute], child: LogicalPlan) extends LogicalPlan with BaseEvalPython with Product with Serializable
A logical plan that evaluates a PythonUDF.
-
case class
BatchEvalPythonUDTF(udtf: PythonUDTF, requiredChildOutput: Seq[Attribute], resultAttrs: Seq[Attribute], child: LogicalPlan) extends LogicalPlan with BaseEvalPythonUDTF with Product with Serializable
A logical plan that evaluates a PythonUDTF.
- udtf
the user-defined Python function
- requiredChildOutput
the required output of the child plan. It's used for omitting data generation that will be discarded next by a projection.
- resultAttrs
the output schema of the Python UDTF.
- child
the child plan
- trait BinaryCommand extends LogicalPlan with Command with BinaryLike[LogicalPlan]
-
trait
BinaryNode extends LogicalPlan with BinaryLike[LogicalPlan]
A logical plan node with a left and right child.
-
case class
CTERelationDef(child: LogicalPlan, id: Long = CTERelationDef.newId, originalPlanWithPredicates: Option[(LogicalPlan, Seq[Expression])] = None, underSubquery: Boolean = false) extends LogicalPlan with UnaryNode with Product with Serializable
A wrapper for CTE definition plan with a unique ID.
- child
The CTE definition query plan.
- id
The unique ID for this CTE definition.
- originalPlanWithPredicates
The original query plan before predicate pushdown and the predicates that have been pushed down into child. This is a temporary field used by optimization rules for CTE predicate pushdown to help ensure rule idempotency.
- underSubquery
If true, it means we don't need to add a shuffle for this CTE relation as subquery reuse will be applied to reuse CTE relation output.
-
case class
CTERelationRef(cteId: Long, _resolved: Boolean, output: Seq[Attribute], statsOpt: Option[Statistics] = None) extends LogicalPlan with LeafNode with MultiInstanceRelation with Product with Serializable
Represents the relation of a CTE reference.
- cteId
The ID of the corresponding CTE definition.
- _resolved
Whether this reference is resolved.
- output
The output attributes of this CTE reference, which can be different from the output of its corresponding CTE definition after attribute de-duplication.
- statsOpt
The optional statistics inferred from the corresponding CTE definition.
-
case class
CacheTable(table: LogicalPlan, multipartIdentifier: Seq[String], isLazy: Boolean, options: Map[String, String], isAnalyzed: Boolean = false) extends LogicalPlan with AnalysisOnlyCommand with Product with Serializable
The logical plan of the CACHE TABLE command.
-
case class
CacheTableAsSelect(tempViewName: String, plan: LogicalPlan, originalText: String, isLazy: Boolean, options: Map[String, String], isAnalyzed: Boolean = false, referredTempFunctions: Seq[String] = Seq.empty) extends LogicalPlan with AnalysisOnlyCommand with Product with Serializable
The logical plan of the CACHE TABLE ... AS SELECT command.
-
case class
CoGroup(func: (Any, Iterator[Any], Iterator[Any]) ⇒ TraversableOnce[Any], keyDeserializer: Expression, leftDeserializer: Expression, rightDeserializer: Expression, leftGroup: Seq[Attribute], rightGroup: Seq[Attribute], leftAttr: Seq[Attribute], rightAttr: Seq[Attribute], leftOrder: Seq[SortOrder], rightOrder: Seq[SortOrder], outputObjAttr: Attribute, left: LogicalPlan, right: LogicalPlan) extends LogicalPlan with BinaryNode with ObjectProducer with Product with Serializable
A relation produced by applying func to each grouping key and associated values from left and right children.
-
case class
CollectMetrics(name: String, metrics: Seq[NamedExpression], child: LogicalPlan) extends LogicalPlan with UnaryNode with Product with Serializable
Collect arbitrary (named) metrics from a dataset. As soon as the query reaches a completion point (batch query completes or streaming query epoch completes) an event is emitted on the driver which can be observed by attaching a listener to the spark session. The metrics are named so we can collect metrics at multiple places in a single dataset.
This node behaves like a global aggregate. All the metrics collected must be aggregate functions or be literals.
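The named-global-aggregate behavior can be sketched as follows (an illustrative model with invented names; the real node reports metrics through a listener on query completion):

```scala
// Sketch of CollectMetrics: evaluate a set of named aggregate metrics over
// the whole dataset and report them under a single collection-point name.
def collectMetrics[T](name: String, rows: Seq[T])
                     (metrics: (String, Seq[T] => Any)*): (String, Map[String, Any]) =
  (name, metrics.map { case (m, f) => (m, f(rows)) }.toMap)

val (event, observed) = collectMetrics("my_event", Seq(1, 2, 3))(
  ("row_count", rs => rs.size), // aggregate: count
  ("max_val", rs => rs.max))    // aggregate: max
// observed == Map("row_count" -> 3, "max_val" -> 3)
```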
-
case class
ColumnStat(distinctCount: Option[BigInt] = None, min: Option[Any] = None, max: Option[Any] = None, nullCount: Option[BigInt] = None, avgLen: Option[Long] = None, maxLen: Option[Long] = None, histogram: Option[Histogram] = None, version: Int = CatalogColumnStat.VERSION) extends Product with Serializable
Statistics collected for a column.
1. The JVM data type stored in min/max is the internal data type for the corresponding Catalyst data type. For example, the internal type of DateType is Int, and the internal type of TimestampType is Long.
2. There is no guarantee that the statistics collected are accurate. Approximation algorithms (sketches) might have been used, and the data collected can also be stale.
- distinctCount
number of distinct values
- min
minimum value
- max
maximum value
- nullCount
number of nulls
- avgLen
average length of the values. For fixed-length types, this should be a constant.
- maxLen
maximum length of the values. For fixed-length types, this should be a constant.
- histogram
histogram of the values
- version
version of statistics saved to or retrieved from the catalog
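Computing this kind of per-column statistic exactly can be sketched over a plain column (illustrative names; as noted above, real collection may use approximate sketches, so results can be inexact):

```scala
// Sketch: exact distinct count, min/max, and null count for one column.
case class SimpleColumnStat(distinctCount: BigInt, min: Option[Int],
                            max: Option[Int], nullCount: BigInt)

def collectStats(col: Seq[Option[Int]]): SimpleColumnStat = {
  val nonNull = col.flatten // drop nulls before min/max/distinct
  SimpleColumnStat(
    distinctCount = nonNull.distinct.size,
    min = if (nonNull.isEmpty) None else Some(nonNull.min),
    max = if (nonNull.isEmpty) None else Some(nonNull.max),
    nullCount = col.count(_.isEmpty))
}

val s = collectStats(Seq(Some(3), None, Some(7), Some(3)))
// s == SimpleColumnStat(2, Some(3), Some(7), 1)
```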
-
trait
Command extends LogicalPlan
A logical node that represents a non-query command to be executed by the system. For example, commands can be used by parsers to represent DDL operations. Commands, unlike queries, are eagerly executed.
-
case class
CommentOnNamespace(child: LogicalPlan, comment: String) extends LogicalPlan with UnaryCommand with Product with Serializable
The logical plan that defines or changes the comment of a NAMESPACE for v2 catalogs.
COMMENT ON (DATABASE|SCHEMA|NAMESPACE) namespaceIdentifier IS ('text' | NULL)
where the text is the new comment written as a string literal, or NULL to drop the comment.
-
case class
CommentOnTable(table: LogicalPlan, comment: String) extends LogicalPlan with AlterTableCommand with Product with Serializable
The logical plan that defines or changes the comment of a TABLE for v2 catalogs.
COMMENT ON TABLE tableIdentifier IS ('text' | NULL)
where the text is the new comment written as a string literal, or NULL to drop the comment.
- trait ConstraintHelper extends AnyRef
-
case class
CreateFunction(child: LogicalPlan, className: String, resources: Seq[FunctionResource], ifExists: Boolean, replace: Boolean) extends LogicalPlan with UnaryCommand with Product with Serializable
The logical plan of the CREATE FUNCTION command.
-
case class
CreateIndex(table: LogicalPlan, indexName: String, indexType: String, ignoreIfExists: Boolean, columns: Seq[(FieldName, Map[String, String])], properties: Map[String, String]) extends LogicalPlan with UnaryCommand with Product with Serializable
The logical plan of the CREATE INDEX command.
-
case class
CreateNamespace(name: LogicalPlan, ifNotExists: Boolean, properties: Map[String, String]) extends LogicalPlan with UnaryCommand with Product with Serializable
The logical plan of the CREATE NAMESPACE command.
-
case class
CreateTable(name: LogicalPlan, tableSchema: StructType, partitioning: Seq[Transform], tableSpec: TableSpecBase, ignoreIfExists: Boolean) extends LogicalPlan with UnaryCommand with V2CreateTablePlan with Product with Serializable
Create a new table with a v2 catalog.
-
case class
CreateTableAsSelect(name: LogicalPlan, partitioning: Seq[Transform], query: LogicalPlan, tableSpec: TableSpecBase, writeOptions: Map[String, String], ignoreIfExists: Boolean, isAnalyzed: Boolean = false) extends LogicalPlan with V2CreateTableAsSelectPlan with Product with Serializable
Create a new table from a select query with a v2 catalog.
-
case class
CreateView(child: LogicalPlan, userSpecifiedColumns: Seq[(String, Option[String])], comment: Option[String], properties: Map[String, String], originalText: Option[String], query: LogicalPlan, allowExisting: Boolean, replace: Boolean) extends LogicalPlan with BinaryCommand with Product with Serializable
The logical plan of the CREATE VIEW ... command.
-
case class
Deduplicate(keys: Seq[Attribute], child: LogicalPlan) extends LogicalPlan with UnaryNode with Product with Serializable
A logical plan for dropDuplicates.
- case class DeduplicateWithinWatermark(keys: Seq[Attribute], child: LogicalPlan) extends LogicalPlan with UnaryNode with Product with Serializable
- case class DeleteAction(condition: Option[Expression]) extends MergeAction with Product with Serializable
-
case class
DeleteFromTable(table: LogicalPlan, condition: Expression) extends LogicalPlan with UnaryCommand with SupportsSubquery with Product with Serializable
The logical plan of the DELETE FROM command.
-
case class
DeleteFromTableWithFilters(table: LogicalPlan, condition: Seq[Predicate]) extends LogicalPlan with LeafCommand with Product with Serializable
The logical plan of the DELETE FROM command that can be executed using data source filters.
As opposed to DeleteFromTable, this node represents a DELETE operation where the condition was converted into filters and the data source reported that it can handle all of them.
-
case class
DescribeColumn(relation: LogicalPlan, column: Expression, isExtended: Boolean, output: Seq[Attribute] = DescribeColumn.getOutputAttrs) extends LogicalPlan with UnaryCommand with Product with Serializable
The logical plan of the DESCRIBE relation_name col_name command.
-
case class
DescribeFunction(child: LogicalPlan, isExtended: Boolean) extends LogicalPlan with UnaryCommand with Product with Serializable
The logical plan of the DESCRIBE FUNCTION command.
-
case class
DescribeNamespace(namespace: LogicalPlan, extended: Boolean, output: Seq[Attribute] = DescribeNamespace.getOutputAttrs) extends LogicalPlan with UnaryCommand with Product with Serializable
The logical plan of the DESCRIBE NAMESPACE command.
-
case class
DescribeRelation(relation: LogicalPlan, partitionSpec: TablePartitionSpec, isExtended: Boolean, output: Seq[Attribute] = DescribeRelation.getOutputAttrs) extends LogicalPlan with UnaryCommand with Product with Serializable
The logical plan of the DESCRIBE relation_name command.
-
case class
DeserializeToObject(deserializer: Expression, outputObjAttr: Attribute, child: LogicalPlan) extends LogicalPlan with UnaryNode with ObjectProducer with Product with Serializable
Takes the input row from child and turns it into an object using the given deserializer expression.
-
case class
Distinct(child: LogicalPlan) extends LogicalPlan with UnaryNode with Product with Serializable
Returns a new logical plan that dedups input rows.
-
case class
DomainJoin(domainAttrs: Seq[Attribute], child: LogicalPlan, joinType: JoinType = Inner, condition: Option[Expression] = None) extends LogicalPlan with UnaryNode with Product with Serializable
A placeholder for domain join that can be added when decorrelating subqueries. It should be rewritten during the optimization phase.
-
case class
DropColumns(table: LogicalPlan, columnsToDrop: Seq[FieldName], ifExists: Boolean) extends LogicalPlan with AlterTableCommand with Product with Serializable
The logical plan of the ALTER TABLE ... DROP COLUMNS command.
-
case class
DropFunction(child: LogicalPlan, ifExists: Boolean) extends LogicalPlan with UnaryCommand with Product with Serializable
The logical plan of the DROP FUNCTION command.
-
case class
DropIndex(table: LogicalPlan, indexName: String, ignoreIfNotExists: Boolean) extends LogicalPlan with UnaryCommand with Product with Serializable
The logical plan of the DROP INDEX command.
-
case class
DropNamespace(namespace: LogicalPlan, ifExists: Boolean, cascade: Boolean) extends LogicalPlan with UnaryCommand with Product with Serializable
The logical plan of the DROP NAMESPACE command.
-
case class
DropPartitions(table: LogicalPlan, parts: Seq[PartitionSpec], ifExists: Boolean, purge: Boolean) extends LogicalPlan with V2PartitionCommand with Product with Serializable
The logical plan of the ALTER TABLE DROP PARTITION command. This may remove the data and metadata for this partition.
If the PURGE option is set, the table catalog must remove partition data by skipping the trash even when the catalog has configured one. The option is applicable only for managed tables.
The syntax of this command is:
ALTER TABLE table DROP [IF EXISTS] PARTITION spec1[, PARTITION spec2, ...] [PURGE];
-
case class
DropTable(child: LogicalPlan, ifExists: Boolean, purge: Boolean) extends LogicalPlan with UnaryCommand with Product with Serializable
The logical plan of the DROP TABLE command.
If the PURGE option is set, the table catalog must remove table data by skipping the trash even when the catalog has configured one. The option is applicable only for managed tables.
The syntax of this command is:
DROP TABLE [IF EXISTS] table [PURGE];
-
case class
DropView(child: LogicalPlan, ifExists: Boolean) extends LogicalPlan with UnaryCommand with Product with Serializable
The logical plan of the DROP VIEW command.
-
case class
EventTimeWatermark(eventTime: Attribute, delay: CalendarInterval, child: LogicalPlan) extends LogicalPlan with UnaryNode with Product with Serializable
Used to mark a user specified column as holding the event time for a row.
- case class Except(left: LogicalPlan, right: LogicalPlan, isAll: Boolean) extends SetOperation with Product with Serializable
-
case class
Expand(projections: Seq[Seq[Expression]], output: Seq[Attribute], child: LogicalPlan) extends LogicalPlan with UnaryNode with Product with Serializable
Apply a number of projections to every input row, hence we will get multiple output rows for an input row.
- projections
the projections to apply
- output
the output attributes of all projections
- child
the child operator
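The one-row-in, many-rows-out behavior of Expand can be sketched with plain collections (illustrative model; this is the shape used to implement GROUPING SETS, where each projection nulls out a different subset of columns):

```scala
// Sketch of Expand: apply every projection to every input row, emitting one
// output row per (input row, projection) pair.
def expand(projections: Seq[Seq[Int] => Seq[Any]],
           rows: Seq[Seq[Int]]): Seq[Seq[Any]] =
  rows.flatMap(row => projections.map(p => p(row)))

val out = expand(
  Seq(row => Seq(row(0), null),  // keep col0, null out col1
      row => Seq(null, row(1))), // null out col0, keep col1
  Seq(Seq(1, 2)))
// out == Seq(Seq(1, null), Seq(null, 2))
```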
-
trait
ExposesMetadataColumns extends LogicalPlan
A logical plan node that can generate metadata columns
- case class Filter(condition: Expression, child: LogicalPlan) extends LogicalPlan with OrderPreservingUnaryNode with PredicateHelper with Product with Serializable
-
case class
FlatMapCoGroupsInPandas(leftGroupingLen: Int, rightGroupingLen: Int, functionExpr: Expression, output: Seq[Attribute], left: LogicalPlan, right: LogicalPlan) extends LogicalPlan with BinaryNode with Product with Serializable
FlatMap cogroups using a UDF: (pandas.DataFrame, pandas.DataFrame) -> pandas.DataFrame. This is used by DataFrame.groupby().cogroup().apply().
-
case class
FlatMapGroupsInPandas(groupingAttributes: Seq[Attribute], functionExpr: Expression, output: Seq[Attribute], child: LogicalPlan) extends LogicalPlan with UnaryNode with Product with Serializable
FlatMap groups using a UDF: pandas.DataFrame -> pandas.DataFrame. This is used by DataFrame.groupby().apply().
-
case class
FlatMapGroupsInPandasWithState(functionExpr: Expression, groupingAttributes: Seq[Attribute], outputAttrs: Seq[Attribute], stateType: StructType, outputMode: OutputMode, timeout: GroupStateTimeout, child: LogicalPlan) extends LogicalPlan with UnaryNode with Product with Serializable
Similar to FlatMapGroupsWithState. Applies func to each unique group in child, based on the evaluation of groupingAttributes, while using state data. functionExpr is invoked with a pandas DataFrame representation and the grouping key (tuple).
- functionExpr
function called on each group
- groupingAttributes
used to group the data
- outputAttrs
used to define the output rows
- stateType
used to serialize/deserialize state before calling functionExpr
- outputMode
the output mode of func
- timeout
used to timeout groups that have not received data in a while
- child
logical plan of the underlying data
- case class FlatMapGroupsInR(func: Array[Byte], packageNames: Array[Byte], broadcastVars: Array[Broadcast[AnyRef]], inputSchema: StructType, outputSchema: StructType, keyDeserializer: Expression, valueDeserializer: Expression, groupingAttributes: Seq[Attribute], dataAttributes: Seq[Attribute], outputObjAttr: Attribute, child: LogicalPlan) extends LogicalPlan with UnaryNode with ObjectProducer with Product with Serializable
-
case class
FlatMapGroupsInRWithArrow(func: Array[Byte], packageNames: Array[Byte], broadcastVars: Array[Broadcast[AnyRef]], inputSchema: StructType, output: Seq[Attribute], keyDeserializer: Expression, groupingAttributes: Seq[Attribute], child: LogicalPlan) extends LogicalPlan with UnaryNode with Product with Serializable
Similar to FlatMapGroupsInR, but serializes and deserializes input/output in Arrow format. This is also somewhat similar to FlatMapGroupsInPandas.
-
case class
FlatMapGroupsWithState(func: (Any, Iterator[Any], LogicalGroupState[Any]) ⇒ Iterator[Any], keyDeserializer: Expression, valueDeserializer: Expression, groupingAttributes: Seq[Attribute], dataAttributes: Seq[Attribute], outputObjAttr: Attribute, stateEncoder: ExpressionEncoder[Any], outputMode: OutputMode, isMapGroupsWithState: Boolean = false, timeout: GroupStateTimeout, hasInitialState: Boolean = false, initialStateGroupAttrs: Seq[Attribute], initialStateDataAttrs: Seq[Attribute], initialStateDeserializer: Expression, initialState: LogicalPlan, child: LogicalPlan) extends LogicalPlan with BinaryNode with ObjectProducer with Product with Serializable
Applies func to each unique group in child, based on the evaluation of groupingAttributes, while using state data. Func is invoked with an object representation of the grouping key and an iterator containing the object representation of all the rows with that key.
- func
function called on each group
- keyDeserializer
used to extract the key object for each group.
- valueDeserializer
used to extract the items in the iterator from an input row.
- groupingAttributes
used to group the data
- dataAttributes
used to read the data
- outputObjAttr
used to define the output object
- stateEncoder
used to serialize/deserialize state before calling func
- outputMode
the output mode of func
- isMapGroupsWithState
whether it is created by the mapGroupsWithState method
- timeout
used to timeout groups that have not received data in a while
- hasInitialState
Indicates whether initial state needs to be applied or not.
- initialStateGroupAttrs
grouping attributes for the initial state
- initialStateDataAttrs
used to read the initial state
- initialStateDeserializer
used to extract the initial state objects.
- initialState
user defined initial state that is applied in the first batch.
- child
logical plan of the underlying data
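The per-key contract of func — key, this batch's values, previous state in; output rows and new state out — can be sketched over plain collections (an illustrative model with invented names; it ignores timeouts, output modes, and initial state):

```scala
// Sketch of the stateful-group contract: for each key in the batch, invoke
// `func` with the key, the batch's values for that key, and any prior state;
// collect the emitted rows and carry the updated state forward.
def statefulGroups[K, V, S, O](batch: Seq[(K, V)], state: Map[K, S])
    (func: (K, Iterator[V], Option[S]) => (Iterator[O], S)): (Seq[O], Map[K, S]) = {
  val results = batch.groupBy(_._1).toSeq.map { case (k, kvs) =>
    val (out, newState) = func(k, kvs.iterator.map(_._2), state.get(k))
    (k, out.toSeq, newState)
  }
  (results.flatMap(_._2), state ++ results.map(r => (r._1, r._3)))
}

// example func: running count per key, carried across two "batches"
def count(k: String, vs: Iterator[Int], s: Option[Int]) = {
  val c = s.getOrElse(0) + vs.size
  (Iterator((k, c)), c)
}
val (out1, s1) = statefulGroups(Seq("a" -> 1, "a" -> 2), Map.empty[String, Int])(count)
val (out2, s2) = statefulGroups(Seq("a" -> 3), s1)(count)
// out2 == Seq(("a", 3)): two rows in batch 1, one more in batch 2
```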
- case class FormatClasses(input: String, output: String) extends Product with Serializable
-
trait
FunctionBuilderBase[T] extends AnyRef
This is a base trait that is used for implementing builder classes that can be used to construct expressions or logical plans depending on if it is a table-valued or scalar-valued function.
Two classes of builders currently exist for this trait: GeneratorBuilder and ExpressionBuilder. If a new class of functions are to be added, a new trait should also be created which extends this trait.
- T
The type that is expected to be returned by the FunctionBuilderBase.build function
-
case class
FunctionSignature(parameters: Seq[InputParameter]) extends Product with Serializable
Represents a method signature and the list of arguments it receives as input. Currently, overloads are not supported and only one FunctionSignature is allowed per function expression.
- parameters
The list of arguments which the function takes
-
case class
Generate(generator: Generator, unrequiredChildIndex: Seq[Int], outer: Boolean, qualifier: Option[String], generatorOutput: Seq[Attribute], child: LogicalPlan) extends LogicalPlan with UnaryNode with Product with Serializable
Applies a Generator to a stream of input rows, combining the output of each into a new stream of rows. This operation is similar to a flatMap in functional programming, with one important additional feature: it allows the input rows to be joined with their output.
- generator
the generator expression
- unrequiredChildIndex
this parameter starts as Nil and gets filled by the Optimizer. It's used as an optimization for omitting data generation that will be discarded next by a projection. A common use case is when we explode(array(..)) and are interested only in the exploded data, not in the original array. Before this optimization the array got duplicated for each of its elements, causing O(n^2) memory consumption (see [SPARK-21657]).
- outer
when true, each input row will be output at least once, even if the output of the given generator is empty.
- qualifier
Qualifier for the attributes of the generator (UDTF)
- generatorOutput
The output schema of the Generator.
- child
Children logical plan node
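The flatMap-plus-join behavior, including the outer flag described above, can be sketched with plain collections (illustrative model with invented names, not the Catalyst operator):

```scala
// Sketch of Generate: flatMap each row through a generator, keeping the input
// row joined to each generated value; with `outer`, rows whose generator
// output is empty are still emitted once (with no generated value).
def generate[R, G](rows: Seq[R], outer: Boolean)(gen: R => Seq[G]): Seq[(R, Option[G])] =
  rows.flatMap { r =>
    val out = gen(r)
    if (out.isEmpty && outer) Seq((r, None)) // outer: emit the row at least once
    else out.map(g => (r, Some(g)))
  }

// explode-like usage: each array element becomes a row joined with its input
val exploded = generate(Seq(Seq(1, 2), Seq.empty[Int]), outer = true)(xs => xs)
// exploded == Seq((Seq(1, 2), Some(1)), (Seq(1, 2), Some(2)), (Seq(), None))
```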
-
case class
GlobalLimit(limitExpr: Expression, child: LogicalPlan) extends LogicalPlan with UnaryNode with Product with Serializable
A global (coordinated) limit. This operator can emit at most limitExpr rows in total.
See Limit for more information.
Note that we cannot make it inherit OrderPreservingUnaryNode, due to the different strategy of the physical plan: the output ordering of the child will be broken if a shuffle exchange comes in between the child and the global limit, because the shuffle reader fetches blocks in random order.
- trait HasPartitionExpressions extends SQLConfHelper
-
trait
HintErrorHandler extends AnyRef
The callback for implementing customized strategies of handling hint errors.
-
case class
HintInfo(strategy: Option[JoinStrategyHint] = None) extends Product with Serializable
The hint attributes to be applied on a specific node.
- strategy
The preferred join strategy.
-
case class
Histogram(height: Double, bins: Array[HistogramBin]) extends Product with Serializable
This class is an implementation of equi-height histogram. Equi-height histogram represents the distribution of a column's values by a sequence of bins. Each bin has a value range and contains approximately the same number of rows.
- height
number of rows in each bin
- bins
equi-height histogram bins
-
case class
HistogramBin(lo: Double, hi: Double, ndv: Long) extends Product with Serializable
A bin in an equi-height histogram.
A bin in an equi-height histogram. We use double type for lower/higher bound for simplicity.
- lo
lower bound of the value range in this bin
- hi
higher bound of the value range in this bin
- ndv
approximate number of distinct values in this bin
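Building the Histogram / HistogramBin structure described above can be sketched exactly (illustrative; real statistics collection typically uses approximate percentiles rather than a full sort):

```scala
// Sketch of an equi-height histogram: sort the values and cut them into bins
// holding approximately the same number of rows, recording each bin's lower
// bound, upper bound, and distinct-value count (ndv).
case class Bin(lo: Double, hi: Double, ndv: Long)

def equiHeight(values: Seq[Double], numBins: Int): Seq[Bin] = {
  val sorted = values.sorted
  val height = math.ceil(sorted.size.toDouble / numBins).toInt // rows per bin
  sorted.grouped(height).map { bin =>
    Bin(bin.head, bin.last, bin.distinct.size.toLong)
  }.toSeq
}

val bins = equiHeight(Seq(1.0, 2.0, 2.0, 3.0, 8.0, 9.0), 3)
// three bins of two rows each: [1,2] ndv=2, [2,3] ndv=2, [8,9] ndv=2
```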
-
trait
IgnoreCachedData extends LogicalPlan
A LogicalPlan operator that does not use the cached results stored in CacheManager
-
case class
InputParameter(name: String, default: Option[Expression] = None) extends Product with Serializable
Represents a parameter of a function expression.
Represents a parameter of a function expression. Function expressions should use this class to construct the argument lists returned in Builder
- name
The name of the parameter.
- default
The default value of the argument. If the default is None, the argument is required, and an exception is thrown if no argument is provided.
- case class InsertAction(condition: Option[Expression], assignments: Seq[Assignment]) extends MergeAction with Product with Serializable
-
case class
InsertIntoDir(isLocal: Boolean, storage: CatalogStorageFormat, provider: Option[String], child: LogicalPlan, overwrite: Boolean = true) extends LogicalPlan with UnaryNode with Product with Serializable
Insert query result into a directory.
Insert query result into a directory.
- isLocal
Indicates whether the specified directory is a local directory
- storage
Information about the output file format, row format, and serialization format
- provider
Specifies what data source to use; only used for data source file.
- child
The query to be executed
- overwrite
If true, the existing directory will be overwritten. Note that this plan is unresolved and has to be replaced by a concrete implementation during analysis.
-
case class
InsertIntoStatement(table: LogicalPlan, partitionSpec: Map[String, Option[String]], userSpecifiedCols: Seq[String], query: LogicalPlan, overwrite: Boolean, ifPartitionNotExists: Boolean, byName: Boolean = false) extends ParsedStatement with UnaryParsedStatement with Product with Serializable
An INSERT INTO statement, as parsed from SQL.
An INSERT INTO statement, as parsed from SQL.
- table
the logical plan representing the table.
- partitionSpec
a map from the partition key to the partition value (optional). If the value is missing, dynamic partition insert will be performed. As an example, INSERT INTO tbl PARTITION (a=1, b=2) AS ... would have Map('a' -> Some('1'), 'b' -> Some('2')), and INSERT INTO tbl PARTITION (a=1, b) AS ... would have Map('a' -> Some('1'), 'b' -> None).
- userSpecifiedCols
the user specified list of columns that belong to the table.
- query
the logical plan representing data to write to.
- overwrite
overwrite existing table or partitions.
- ifPartitionNotExists
If true, only write if the partition does not exist. Only valid for static partitions.
- byName
If true, reorder the data columns to match the column names of the target table.
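The Map[String, Option[String]] shape of partitionSpec described above splits naturally into static and dynamic parts; a minimal plain-Scala sketch (hypothetical helper, not Spark's resolver):

```scala
// PARTITION (a=1, b) parses to: a is static (Some value), b is dynamic (None).
object PartitionSpecSketch {
  def split(spec: Map[String, Option[String]])
      : (Map[String, String], Seq[String]) = {
    val static  = spec.collect { case (k, Some(v)) => k -> v }
    val dynamic = spec.collect { case (k, None)    => k }.toSeq
    (static, dynamic)
  }
}
```

Static keys pin the target partition up front; dynamic keys are filled per-row from the query output, which is why ifPartitionNotExists is only valid for static partitions.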
- case class InsertStarAction(condition: Option[Expression]) extends MergeAction with Product with Serializable
- case class Intersect(left: LogicalPlan, right: LogicalPlan, isAll: Boolean) extends SetOperation with Product with Serializable
- case class Join(left: LogicalPlan, right: LogicalPlan, joinType: JoinType, condition: Option[Expression], hint: JoinHint) extends LogicalPlan with BinaryNode with PredicateHelper with Product with Serializable
-
case class
JoinHint(leftHint: Option[HintInfo], rightHint: Option[HintInfo]) extends Product with Serializable
Hint that is associated with a Join node, with HintInfo on its left child and on its right child respectively.
- sealed abstract class JoinStrategyHint extends AnyRef
- trait KeepAnalyzedQuery extends LogicalPlan with Command
-
case class
LateralJoin(left: LogicalPlan, right: LateralSubquery, joinType: JoinType, condition: Option[Expression]) extends LogicalPlan with UnaryNode with Product with Serializable
A logical plan for lateral join.
- trait LeafCommand extends LogicalPlan with Command with LeafLike[LogicalPlan]
-
trait
LeafNode extends LogicalPlan with LeafLike[LogicalPlan]
A logical plan node with no children.
- trait LeafParsedStatement extends ParsedStatement with LeafLike[LogicalPlan]
-
case class
LoadData(child: LogicalPlan, path: String, isLocal: Boolean, isOverwrite: Boolean, partition: Option[TablePartitionSpec]) extends LogicalPlan with UnaryCommand with Product with Serializable
The logical plan of the LOAD DATA INTO TABLE command.
-
case class
LocalLimit(limitExpr: Expression, child: LogicalPlan) extends LogicalPlan with OrderPreservingUnaryNode with Product with Serializable
A partition-local (non-coordinated) limit.
A partition-local (non-coordinated) limit. This operator can emit at most limitExpr number of tuples on each physical partition. See Limit for more information.
-
case class
LocalRelation(output: Seq[Attribute], data: Seq[InternalRow] = Nil, isStreaming: Boolean = false) extends LogicalPlan with LeafNode with MultiInstanceRelation with Product with Serializable
Logical plan node for scanning data from a local collection.
Logical plan node for scanning data from a local collection.
- data
The local collection holding the data. It doesn't need to be sent to executors and therefore doesn't need to be serializable.
- abstract class LogicalPlan extends QueryPlan[LogicalPlan] with AnalysisHelper with LogicalPlanStats with LogicalPlanDistinctKeys with QueryPlanConstraints with Logging
-
trait
LogicalPlanDistinctKeys extends AnyRef
A trait to add distinct attributes to LogicalPlan.
A trait to add distinct attributes to LogicalPlan. For example:
SELECT a, b, SUM(c) FROM Tab1 GROUP BY a, b // returns a, b
-
trait
LogicalPlanVisitor[T] extends AnyRef
A visitor pattern for traversing a LogicalPlan tree and computing some properties.
-
case class
MapElements(func: AnyRef, argumentClass: Class[_], argumentSchema: StructType, outputObjAttr: Attribute, child: LogicalPlan) extends LogicalPlan with ObjectConsumer with ObjectProducer with Product with Serializable
A relation produced by applying
func to each element of the child.
-
case class
MapGroups(func: (Any, Iterator[Any]) ⇒ TraversableOnce[Any], keyDeserializer: Expression, valueDeserializer: Expression, groupingAttributes: Seq[Attribute], dataAttributes: Seq[Attribute], dataOrder: Seq[SortOrder], outputObjAttr: Attribute, child: LogicalPlan) extends LogicalPlan with UnaryNode with ObjectProducer with Product with Serializable
Applies func to each unique group in child, based on the evaluation of groupingAttributes.
Applies func to each unique group in child, based on the evaluation of groupingAttributes. Func is invoked with an object representation of the grouping key and an iterator containing the object representation of all the rows with that key. Given an additional dataOrder, data in the iterator will be sorted accordingly. That sorting does not add computational complexity.
- keyDeserializer
used to extract the key object for each group.
- valueDeserializer
used to extract the items in the iterator from an input row.
-
case class
MapInPandas(functionExpr: Expression, output: Seq[Attribute], child: LogicalPlan, isBarrier: Boolean) extends LogicalPlan with UnaryNode with Product with Serializable
Map partitions using a udf: iter(pandas.DataFrame) -> iter(pandas.DataFrame).
Map partitions using a udf: iter(pandas.DataFrame) -> iter(pandas.DataFrame). This is used by DataFrame.mapInPandas()
-
case class
MapPartitions(func: (Iterator[Any]) ⇒ Iterator[Any], outputObjAttr: Attribute, child: LogicalPlan) extends LogicalPlan with ObjectConsumer with ObjectProducer with Product with Serializable
A relation produced by applying
func to each partition of the child.
-
case class
MapPartitionsInR(func: Array[Byte], packageNames: Array[Byte], broadcastVars: Array[Broadcast[AnyRef]], inputSchema: StructType, outputSchema: StructType, outputObjAttr: Attribute, child: LogicalPlan) extends LogicalPlan with ObjectConsumer with ObjectProducer with Product with Serializable
A relation produced by applying a serialized R function
func to each partition of the child.
-
case class
MapPartitionsInRWithArrow(func: Array[Byte], packageNames: Array[Byte], broadcastVars: Array[Broadcast[AnyRef]], inputSchema: StructType, output: Seq[Attribute], child: LogicalPlan) extends LogicalPlan with UnaryNode with Product with Serializable
Similar to MapPartitionsInR but serializes and deserializes input/output in Arrow format.
Similar to MapPartitionsInR but serializes and deserializes input/output in Arrow format. This is somewhat similar to org.apache.spark.sql.execution.python.ArrowEvalPython
- sealed abstract class MergeAction extends Expression with Unevaluable
-
case class
MergeIntoTable(targetTable: LogicalPlan, sourceTable: LogicalPlan, mergeCondition: Expression, matchedActions: Seq[MergeAction], notMatchedActions: Seq[MergeAction], notMatchedBySourceActions: Seq[MergeAction]) extends LogicalPlan with BinaryCommand with SupportsSubquery with Product with Serializable
The logical plan of the MERGE INTO command.
- case class MergeRows(isSourceRowPresent: Expression, isTargetRowPresent: Expression, matchedInstructions: Seq[Instruction], notMatchedInstructions: Seq[Instruction], notMatchedBySourceInstructions: Seq[Instruction], checkCardinality: Boolean, output: Seq[Attribute], child: LogicalPlan) extends LogicalPlan with UnaryNode with Product with Serializable
-
case class
NoopCommand(commandName: String, multipartIdentifier: Seq[String]) extends LogicalPlan with LeafCommand with Product with Serializable
The logical plan for a no-op command handling a non-existing table.
-
trait
ObjectConsumer extends LogicalPlan with UnaryNode with ReferenceAllColumns[LogicalPlan]
A trait for logical operators that consume domain objects as input.
A trait for logical operators that consume domain objects as input. The output of its child must be a single-field row containing the input object.
-
trait
ObjectProducer extends LogicalPlan
A trait for logical operators that produce domain objects as output.
A trait for logical operators that produce domain objects as output. The output of this operator is a single-field safe row containing the produced object.
-
case class
Offset(offsetExpr: Expression, child: LogicalPlan) extends LogicalPlan with OrderPreservingUnaryNode with Product with Serializable
A logical offset, which removes a specified number of rows from the beginning of the output of the child logical plan.
-
case class
OneRowRelation() extends LogicalPlan with LeafNode with Product with Serializable
A relation with one row.
A relation with one row. This is used in "SELECT ..." without a from clause.
-
case class
OptionList(options: Seq[(String, Expression)]) extends Expression with Unevaluable with Product with Serializable
This contains the expressions in an OPTIONS list.
This contains the expressions in an OPTIONS list. We store it alongside the UnresolvedTableSpec wherever the latter lives. We use a separate object so that tree traversals in analyzer rules can descend into the child expressions naturally without extra treatment.
- trait OrderPreservingUnaryNode extends LogicalPlan with UnaryNode with AliasAwareQueryOutputOrdering[LogicalPlan]
-
case class
OverwriteByExpression(table: NamedRelation, deleteExpr: Expression, query: LogicalPlan, writeOptions: Map[String, String], isByName: Boolean, write: Option[Write] = None, analyzedQuery: Option[LogicalPlan] = None) extends LogicalPlan with V2WriteCommand with Product with Serializable
Overwrite data matching a filter in an existing table.
-
case class
OverwritePartitionsDynamic(table: NamedRelation, query: LogicalPlan, writeOptions: Map[String, String], isByName: Boolean, write: Option[Write] = None) extends LogicalPlan with V2WriteCommand with Product with Serializable
Dynamically overwrite partitions in an existing table.
-
abstract
class
ParsedStatement extends LogicalPlan
A logical plan node that contains exactly what was parsed from SQL.
A logical plan node that contains exactly what was parsed from SQL.
This is used to hold information parsed from SQL when there are multiple implementations of a query or command. For example, CREATE TABLE may be implemented by different nodes for v1 and v2. Instead of parsing directly to a v1 CreateTable that keeps metadata in CatalogTable, and then converting that v1 metadata to the v2 equivalent, the sql CreateTableStatement plan is produced by the parser and converted once into both implementations.
Parsed logical plans are not resolved because they must be converted to concrete logical plans.
Parsed logical plans are located in Catalyst so that as much SQL parsing logic as possible is kept in org.apache.spark.sql.catalyst.parser.AbstractSqlParser.
-
case class
Pivot(groupByExprsOpt: Option[Seq[NamedExpression]], pivotColumn: Expression, pivotValues: Seq[Expression], aggregates: Seq[Expression], child: LogicalPlan) extends LogicalPlan with UnaryNode with Product with Serializable
A constructor for creating a pivot, which will later be converted to a Project or an Aggregate during the query analysis.
A constructor for creating a pivot, which will later be converted to a Project or an Aggregate during the query analysis.
- groupByExprsOpt
A sequence of group by expressions. This field should be None if coming from SQL, in which group by expressions are not explicitly specified.
- pivotColumn
The pivot column.
- pivotValues
A sequence of values for the pivot column.
- aggregates
The aggregation expressions, each with or without an alias.
- child
Child operator
- case class Project(projectList: Seq[NamedExpression], child: LogicalPlan) extends LogicalPlan with OrderPreservingUnaryNode with Product with Serializable
-
case class
PythonMapInArrow(functionExpr: Expression, output: Seq[Attribute], child: LogicalPlan, isBarrier: Boolean) extends LogicalPlan with UnaryNode with Product with Serializable
Map partitions using a udf: iter(pyarrow.RecordBatch) -> iter(pyarrow.RecordBatch).
Map partitions using a udf: iter(pyarrow.RecordBatch) -> iter(pyarrow.RecordBatch). This is used by DataFrame.mapInArrow() in PySpark
-
case class
QualifiedColType(path: Option[FieldName], colName: String, dataType: DataType, nullable: Boolean, comment: Option[String], position: Option[FieldPosition], default: Option[String]) extends Product with Serializable
Column data as parsed by ALTER TABLE ...
Column data as parsed by ALTER TABLE ... (ADD|REPLACE) COLUMNS.
- trait QueryPlanConstraints extends ConstraintHelper
-
case class
Range(start: Long, end: Long, step: Long, numSlices: Option[Int], output: Seq[Attribute] = Range.getOutputAttrs, isStreaming: Boolean = false) extends LogicalPlan with LeafNode with MultiInstanceRelation with Product with Serializable
- Annotations
- @ExpressionDescription()
-
case class
RebalancePartitions(partitionExpressions: Seq[Expression], child: LogicalPlan, optNumPartitions: Option[Int] = None, optAdvisoryPartitionSize: Option[Long] = None) extends LogicalPlan with UnaryNode with HasPartitionExpressions with Product with Serializable
This operator is used to rebalance the output partitions of the given child, so that every partition is of a reasonable size (not too small and not too big).
This operator is used to rebalance the output partitions of the given child, so that every partition is of a reasonable size (not too small and not too big). It also tries its best to partition the child output by partitionExpressions. If there are skews, Spark will split the skewed partitions to make them not too big. This operator is useful when you need to write the result of child to a table, to avoid files that are too small or too big. Note that this operator only makes sense when AQE is enabled.
-
case class
RecoverPartitions(child: LogicalPlan) extends LogicalPlan with UnaryCommand with Product with Serializable
The logical plan of the ALTER TABLE ...
The logical plan of the ALTER TABLE ... RECOVER PARTITIONS command.
-
case class
RefreshFunction(child: LogicalPlan) extends LogicalPlan with UnaryCommand with Product with Serializable
The logical plan of the REFRESH FUNCTION command.
-
case class
RefreshTable(child: LogicalPlan) extends LogicalPlan with UnaryCommand with Product with Serializable
The logical plan of the REFRESH TABLE command.
-
case class
RenameColumn(table: LogicalPlan, column: FieldName, newName: String) extends LogicalPlan with AlterTableCommand with Product with Serializable
The logical plan of the ALTER TABLE ...
The logical plan of the ALTER TABLE ... RENAME COLUMN command.
-
case class
RenamePartitions(table: LogicalPlan, from: PartitionSpec, to: PartitionSpec) extends LogicalPlan with V2PartitionCommand with Product with Serializable
The logical plan of the ALTER TABLE ...
The logical plan of the ALTER TABLE ... RENAME TO PARTITION command.
-
case class
RenameTable(child: LogicalPlan, newName: Seq[String], isView: Boolean) extends LogicalPlan with UnaryCommand with Product with Serializable
The logical plan of the ALTER [TABLE|VIEW] ...
The logical plan of the ALTER [TABLE|VIEW] ... RENAME TO command.
-
case class
RepairTable(child: LogicalPlan, enableAddPartitions: Boolean, enableDropPartitions: Boolean) extends LogicalPlan with UnaryCommand with Product with Serializable
The logical plan of the MSCK REPAIR TABLE command.
-
case class
Repartition(numPartitions: Int, shuffle: Boolean, child: LogicalPlan) extends RepartitionOperation with Product with Serializable
Returns a new RDD that has exactly numPartitions partitions.
Returns a new RDD that has exactly numPartitions partitions. Differs from RepartitionByExpression in that this node is created directly by DataFrame operations, because the user asked for coalesce or repartition. RepartitionByExpression is used when the consumer of the output requires some specific ordering or distribution of the data.
-
case class
RepartitionByExpression(partitionExpressions: Seq[Expression], child: LogicalPlan, optNumPartitions: Option[Int], optAdvisoryPartitionSize: Option[Long] = None) extends RepartitionOperation with HasPartitionExpressions with Product with Serializable
This method repartitions data using Expressions into optNumPartitions, and receives information about the number of partitions during execution.
This method repartitions data using Expressions into optNumPartitions, and receives information about the number of partitions during execution. Used when a specific ordering or distribution is expected by the consumer of the query result. Use Repartition for RDD-like coalesce and repartition. If no optNumPartitions is given, by default it partitions data into numShufflePartitions defined in SQLConf, and could be coalesced by AQE.
-
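The expression-based distribution described above can be sketched in plain Scala (a hypothetical helper, not Spark's shuffle machinery): rows with equal partitioning-key values land in the same output partition via hashing.

```scala
// Sketch of hash distribution by a partitioning key: partition id is the
// non-negative hash of the key modulo the partition count.
object RepartitionSketch {
  def repartition[K](rows: Seq[(K, String)], numPartitions: Int)
      : Map[Int, Seq[(K, String)]] =
    rows.groupBy { case (k, _) =>
      java.lang.Math.floorMod(k.hashCode, numPartitions)
    }
}
```

Co-locating equal keys is the property the consumer relies on; the exact partition count can then be tuned (or coalesced by AQE) without breaking it.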
abstract
class
RepartitionOperation extends LogicalPlan with UnaryNode
A base interface for RepartitionByExpression and Repartition
-
case class
ReplaceColumns(table: LogicalPlan, columnsToAdd: Seq[QualifiedColType]) extends LogicalPlan with AlterTableCommand with Product with Serializable
The logical plan of the ALTER TABLE ...
The logical plan of the ALTER TABLE ... REPLACE COLUMNS command.
-
case class
ReplaceData(table: NamedRelation, condition: Expression, query: LogicalPlan, originalTable: NamedRelation, groupFilterCondition: Option[Expression] = None, write: Option[Write] = None) extends LogicalPlan with RowLevelWrite with Product with Serializable
Replace groups of data in an existing table during a row-level operation.
Replace groups of data in an existing table during a row-level operation.
This node is constructed in rules that rewrite DELETE, UPDATE, MERGE operations for data sources that can replace groups of data (e.g. files, partitions).
- table
a plan that references a row-level operation table
- condition
a condition that defines matching groups
- query
a query with records that should replace the records that were read
- originalTable
a plan for the original table for which the row-level command was triggered
- groupFilterCondition
a condition that can be used to filter groups at runtime
- write
a logical write, if already constructed
-
case class
ReplaceTable(name: LogicalPlan, tableSchema: StructType, partitioning: Seq[Transform], tableSpec: TableSpecBase, orCreate: Boolean) extends LogicalPlan with UnaryCommand with V2CreateTablePlan with Product with Serializable
Replace a table with a v2 catalog.
Replace a table with a v2 catalog.
If the table does not exist, and orCreate is true, then it will be created. If the table does not exist, and orCreate is false, then an exception will be thrown.
The persisted table will have no contents as a result of this operation.
-
case class
ReplaceTableAsSelect(name: LogicalPlan, partitioning: Seq[Transform], query: LogicalPlan, tableSpec: TableSpecBase, writeOptions: Map[String, String], orCreate: Boolean, isAnalyzed: Boolean = false) extends LogicalPlan with V2CreateTableAsSelectPlan with Product with Serializable
Replaces a table from a select query with a v2 catalog.
Replaces a table from a select query with a v2 catalog.
If the table does not exist, and orCreate is true, then it will be created. If the table does not exist, and orCreate is false, then an exception will be thrown.
-
case class
ResolvedHint(child: LogicalPlan, hints: HintInfo = HintInfo()) extends LogicalPlan with UnaryNode with Product with Serializable
A resolved hint node.
A resolved hint node. The analyzer should convert all UnresolvedHint into ResolvedHint. This node will be eliminated before optimization starts.
-
case class
ReturnAnswer(child: LogicalPlan) extends LogicalPlan with UnaryNode with Product with Serializable
When planning take() or collect() operations, this special node is inserted at the top of the logical plan before invoking the query planner.
When planning take() or collect() operations, this special node is inserted at the top of the logical plan before invoking the query planner.
Rules can pattern-match on this node in order to apply transformations that only take effect at the top of the logical query plan.
- trait RowLevelWrite extends LogicalPlan with V2WriteCommand with SupportsSubquery
-
case class
Sample(lowerBound: Double, upperBound: Double, withReplacement: Boolean, seed: Long, child: LogicalPlan) extends LogicalPlan with UnaryNode with Product with Serializable
Sample the dataset.
Sample the dataset.
- lowerBound
Lower-bound of the sampling probability (usually 0.0)
- upperBound
Upper-bound of the sampling probability. The expected fraction sampled will be ub - lb.
- withReplacement
Whether to sample with replacement.
- seed
the random seed
- child
the LogicalPlan
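The bound semantics above (expected fraction sampled = ub - lb, without replacement) can be sketched in plain Scala, assuming a simple seeded Bernoulli draw per row (an illustration, not Spark's sampler):

```scala
import scala.util.Random

// Keep a row when a uniform draw falls in [lowerBound, upperBound);
// the same seed reproduces the same sample.
object SampleSketch {
  def sample[A](rows: Seq[A], lb: Double, ub: Double, seed: Long): Seq[A] = {
    val rng = new Random(seed)
    rows.filter { _ =>
      val x = rng.nextDouble()
      x >= lb && x < ub
    }
  }
}
```

The [lb, ub) interval form makes it possible to split a dataset into disjoint samples (e.g. [0.0, 0.8) and [0.8, 1.0)) with a single pass per split, using the same seed.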
-
case class
ScriptInputOutputSchema(inputRowFormat: Seq[(String, String)], outputRowFormat: Seq[(String, String)], inputSerdeClass: Option[String], outputSerdeClass: Option[String], inputSerdeProps: Seq[(String, String)], outputSerdeProps: Seq[(String, String)], recordReaderClass: Option[String], recordWriterClass: Option[String], schemaLess: Boolean) extends Product with Serializable
Input and output properties when passing data to a script.
Input and output properties when passing data to a script. For example, in Hive this would specify which SerDes to use.
-
case class
ScriptTransformation(script: String, output: Seq[Attribute], child: LogicalPlan, ioschema: ScriptInputOutputSchema) extends LogicalPlan with UnaryNode with ReferenceAllColumns[LogicalPlan] with Product with Serializable
Transforms the input by forking and running the specified script.
Transforms the input by forking and running the specified script.
- script
the command that should be executed.
- output
the attributes that are produced by the script.
- ioschema
the input and output schema applied in the execution of the script.
-
case class
SerdeInfo(storedAs: Option[String] = None, formatClasses: Option[FormatClasses] = None, serde: Option[String] = None, serdeProperties: Map[String, String] = Map.empty) extends Product with Serializable
Type to keep track of Hive serde info
-
case class
SerializeFromObject(serializer: Seq[NamedExpression], child: LogicalPlan) extends LogicalPlan with ObjectConsumer with Product with Serializable
Takes the input object from child and turns it into unsafe row using the given serializer expression.
-
case class
SetCatalogAndNamespace(child: LogicalPlan) extends LogicalPlan with UnaryCommand with Product with Serializable
The logical plan of the USE command.
-
case class
SetNamespaceLocation(namespace: LogicalPlan, location: String) extends LogicalPlan with UnaryCommand with Product with Serializable
The logical plan of the ALTER (DATABASE|SCHEMA|NAMESPACE) ...
The logical plan of the ALTER (DATABASE|SCHEMA|NAMESPACE) ... SET LOCATION command.
-
case class
SetNamespaceProperties(namespace: LogicalPlan, properties: Map[String, String]) extends LogicalPlan with UnaryCommand with Product with Serializable
The logical plan of the ALTER (DATABASE|SCHEMA|NAMESPACE) ...
The logical plan of the ALTER (DATABASE|SCHEMA|NAMESPACE) ... SET (DBPROPERTIES|PROPERTIES) command.
- abstract class SetOperation extends LogicalPlan with BinaryNode
-
case class
SetTableLocation(table: LogicalPlan, partitionSpec: Option[TablePartitionSpec], location: String) extends LogicalPlan with AlterTableCommand with Product with Serializable
The logical plan of the ALTER TABLE ...
The logical plan of the ALTER TABLE ... SET LOCATION command.
-
case class
SetTableProperties(table: LogicalPlan, properties: Map[String, String]) extends LogicalPlan with AlterTableCommand with Product with Serializable
The logical plan of the ALTER TABLE ...
The logical plan of the ALTER TABLE ... SET TBLPROPERTIES command.
-
case class
SetTableSerDeProperties(child: LogicalPlan, serdeClassName: Option[String], serdeProperties: Option[Map[String, String]], partitionSpec: Option[TablePartitionSpec]) extends LogicalPlan with UnaryCommand with Product with Serializable
The logical plan of the ALTER TABLE ...
The logical plan of the ALTER TABLE ... SET [SERDE|SERDEPROPERTIES] command.
-
case class
SetViewProperties(child: LogicalPlan, properties: Map[String, String]) extends LogicalPlan with UnaryCommand with Product with Serializable
The logical plan of the ALTER VIEW ...
The logical plan of the ALTER VIEW ... SET TBLPROPERTIES command.
-
case class
ShowColumns(child: LogicalPlan, namespace: Option[Seq[String]], output: Seq[Attribute] = ShowColumns.getOutputAttrs) extends LogicalPlan with UnaryCommand with Product with Serializable
The logical plan of the SHOW COLUMNS command.
-
case class
ShowCreateTable(child: LogicalPlan, asSerde: Boolean = false, output: Seq[Attribute] = ShowCreateTable.getoutputAttrs) extends LogicalPlan with UnaryCommand with Product with Serializable
The logical plan of the SHOW CREATE TABLE command.
-
case class
ShowFunctions(namespace: LogicalPlan, userScope: Boolean, systemScope: Boolean, pattern: Option[String], output: Seq[Attribute] = ShowFunctions.getOutputAttrs) extends LogicalPlan with UnaryCommand with Product with Serializable
The logical plan of the SHOW FUNCTIONS command.
-
case class
ShowNamespaces(namespace: LogicalPlan, pattern: Option[String], output: Seq[Attribute] = ShowNamespaces.getOutputAttrs) extends LogicalPlan with UnaryCommand with Product with Serializable
The logical plan of the SHOW NAMESPACES command.
-
case class
ShowPartitions(table: LogicalPlan, pattern: Option[PartitionSpec], output: Seq[Attribute] = ShowPartitions.getOutputAttrs) extends LogicalPlan with V2PartitionCommand with Product with Serializable
The logical plan of the SHOW PARTITIONS command.
-
case class
ShowTableExtended(namespace: LogicalPlan, pattern: String, partitionSpec: Option[PartitionSpec], output: Seq[Attribute] = ShowTableExtended.getOutputAttrs) extends LogicalPlan with UnaryCommand with Product with Serializable
The logical plan of the SHOW TABLE EXTENDED command.
-
case class
ShowTableProperties(table: LogicalPlan, propertyKey: Option[String], output: Seq[Attribute] = ShowTableProperties.getOutputAttrs) extends LogicalPlan with UnaryCommand with Product with Serializable
The logical plan of the SHOW TBLPROPERTIES command.
-
case class
ShowTables(namespace: LogicalPlan, pattern: Option[String], output: Seq[Attribute] = ShowTables.getOutputAttrs) extends LogicalPlan with UnaryCommand with Product with Serializable
The logical plan of the SHOW TABLES command.
-
case class
ShowViews(namespace: LogicalPlan, pattern: Option[String], output: Seq[Attribute] = ShowViews.getOutputAttrs) extends LogicalPlan with UnaryCommand with Product with Serializable
The logical plan of the SHOW VIEWS command.
The logical plan of the SHOW VIEWS command.
Notes: v2 catalogs do not support the views API yet; the command will fall back to v1 ShowViewsCommand during ResolveSessionCatalog.
-
case class
Sort(order: Seq[SortOrder], global: Boolean, child: LogicalPlan) extends LogicalPlan with UnaryNode with Product with Serializable
- order
The ordering expressions
- global
True means global sorting applies to the entire data set; False means sorting applies only within each partition.
- child
Child logical plan
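The global flag's effect can be sketched in plain Scala (an illustration over in-memory partitions, not Spark's sort execution):

```scala
// global = true yields one totally-ordered result; global = false only
// orders rows within each partition, leaving partitions independent.
object SortSketch {
  def sortPlan(parts: Seq[Seq[Int]], global: Boolean): Seq[Seq[Int]] =
    if (global) Seq(parts.flatten.sorted)
    else parts.map(_.sorted)
}
```

A partition-local sort avoids the shuffle a global sort requires, which is why operators that only need within-partition order (e.g. sorted group iteration) can ask for the cheaper form.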
-
case class
Statistics(sizeInBytes: BigInt, rowCount: Option[BigInt] = None, attributeStats: AttributeMap[ColumnStat] = AttributeMap(Nil), isRuntime: Boolean = false) extends Product with Serializable
Estimates of various statistics.
Estimates of various statistics. The default estimation logic simply lazily multiplies the corresponding statistic produced by the children. To override this behavior, override statistics and assign it an overridden version of Statistics.
NOTE: concrete and/or overridden versions of statistics fields should pay attention to the performance of the implementations. The reason is that estimations might get triggered in performance-critical processes, such as query plan planning.
Note that we are using a BigInt here since it is easy to overflow a 64-bit integer in cardinality estimation (e.g. cartesian joins).
- sizeInBytes
Physical size in bytes. For leaf operators this defaults to 1, otherwise it defaults to the product of children's sizeInBytes.
- rowCount
Estimated number of rows.
- attributeStats
Statistics for Attributes.
- isRuntime
Whether the statistics are inferred from query-stage runtime statistics during adaptive query execution.
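The default multiplication rule for sizeInBytes, and the reason for BigInt, can be sketched in plain Scala (a hypothetical helper, not Spark's LogicalPlanStats):

```scala
// Default size estimation: leaves default to 1; other nodes default to the
// product of their children's sizes. BigInt avoids the 64-bit overflow that
// a cartesian join of two large inputs would cause.
object StatsSketch {
  def defaultSizeInBytes(childSizes: Seq[BigInt]): BigInt =
    if (childSizes.isEmpty) BigInt(1)
    else childSizes.product
}
```

Multiplying children's sizes is a deliberately pessimistic upper bound (exact for a cartesian product); operators with better knowledge override statistics with tighter estimates.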
-
case class
Subquery(child: LogicalPlan, correlated: Boolean) extends LogicalPlan with OrderPreservingUnaryNode with Product with Serializable
This node is inserted at the top of a subquery when it is optimized.
This node is inserted at the top of a subquery when it is optimized. This makes sure we can recognize a subquery as such, and it allows us to write subquery aware transformations.
- correlated
flag that indicates the subquery is correlated, and will be rewritten into a join during analysis.
-
case class
SubqueryAlias(identifier: AliasIdentifier, child: LogicalPlan) extends LogicalPlan with OrderPreservingUnaryNode with Product with Serializable
Aliased subquery.
Aliased subquery.
- identifier
the alias identifier for this subquery.
- child
the logical plan of this subquery.
-
trait
SupportsSubquery extends LogicalPlan
A trait to represent the commands that support subqueries.
A trait to represent the commands that support subqueries. This is used to allow such commands in the subquery-related checks.
- case class TableSpec(properties: Map[String, String], provider: Option[String], options: Map[String, String], location: Option[String], comment: Option[String], serde: Option[SerdeInfo], external: Boolean) extends TableSpecBase with Product with Serializable
- trait TableSpecBase extends AnyRef
-
case class
Tail(limitExpr: Expression, child: LogicalPlan) extends LogicalPlan with OrderPreservingUnaryNode with Product with Serializable
This is similar to Limit except:
This is similar to Limit except:
- It does not have separate plans for global/local because currently there is only a single implementation which initially mimics both global/local tails. See org.apache.spark.sql.execution.CollectTailExec and org.apache.spark.sql.execution.CollectLimitExec
- Currently, this plan can only be a root node.
-
case class
TruncatePartition(table: LogicalPlan, partitionSpec: PartitionSpec) extends LogicalPlan with V2PartitionCommand with Product with Serializable
The logical plan of the TRUNCATE TABLE ...
The logical plan of the TRUNCATE TABLE ... PARTITION command.
-
case class
TruncateTable(table: LogicalPlan) extends LogicalPlan with UnaryCommand with Product with Serializable
The logical plan of the TRUNCATE TABLE command.
-
case class
TypedFilter(func: AnyRef, argumentClass: Class[_], argumentSchema: StructType, deserializer: Expression, child: LogicalPlan) extends LogicalPlan with UnaryNode with Product with Serializable
A relation produced by applying func to each element of the child and filtering them by the resulting boolean value.
This is logically equal to a normal Filter operator whose condition expression decodes the input row to an object and applies the given function to the decoded object. However, we need the encapsulation of TypedFilter to make the concept clearer and to make it easier to write optimizer rules.
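A minimal sketch of this decode-then-filter behavior in plain Scala (no Spark); the row representation and helper names are illustrative assumptions, not the Catalyst encoding:

```scala
// Conceptual sketch: TypedFilter deserializes each input row to a typed
// object and keeps the row when the user function returns true.
case class Person(name: String, age: Int)
type Row = Map[String, Any]

// Stand-in for the deserializer expression.
def deserialize(row: Row): Person =
  Person(row("name").asInstanceOf[String], row("age").asInstanceOf[Int])

def typedFilter(rows: Seq[Row], func: Person => Boolean): Seq[Row] =
  rows.filter(row => func(deserialize(row)))

val input: Seq[Row] = Seq(
  Map("name" -> "a", "age" -> 17),
  Map("name" -> "b", "age" -> 30)
)
println(typedFilter(input, _.age >= 18)) // keeps only the row for "b"
```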
- trait UnaryCommand extends LogicalPlan with Command with UnaryLike[LogicalPlan]
-
trait
UnaryNode extends LogicalPlan with UnaryLike[LogicalPlan]
A logical plan node with single child.
- trait UnaryParsedStatement extends ParsedStatement with UnaryLike[LogicalPlan]
-
case class
UncacheTable(table: LogicalPlan, ifExists: Boolean, isAnalyzed: Boolean = false) extends LogicalPlan with AnalysisOnlyCommand with Product with Serializable
The logical plan of the UNCACHE TABLE command.
-
case class
Union(children: Seq[LogicalPlan], byName: Boolean = false, allowMissingCol: Boolean = false) extends LogicalPlan with Product with Serializable
Logical plan for unioning multiple plans, without a distinct. This is UNION ALL in SQL.
- byName
Whether to resolve columns in the children by column names.
- allowMissingCol
Allows missing columns in children query plans. If true, different sets of column names are allowed between two Datasets. This can be set to true only if byName is true.
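The interplay of the two flags can be sketched with plain Scala collections (no Spark; the Map-based row model and helper name are illustrative assumptions):

```scala
// Conceptual sketch: union-by-name aligns children by column name rather
// than by position; with allowMissingCol, a column absent from one child
// is filled with null.
type Row = Map[String, Any]

def unionByName(left: Seq[Row], right: Seq[Row],
                allowMissingCol: Boolean): Seq[Row] = {
  val leftCols  = left.headOption.map(_.keySet).getOrElse(Set.empty[String])
  val rightCols = right.headOption.map(_.keySet).getOrElse(Set.empty[String])
  val cols =
    if (allowMissingCol) leftCols ++ rightCols
    else {
      require(leftCols == rightCols, "column sets must match when allowMissingCol is false")
      leftCols
    }
  (left ++ right).map(row => cols.map(c => c -> row.getOrElse(c, null)).toMap)
}

println(unionByName(Seq(Map("a" -> 1)), Seq(Map("b" -> 2)), allowMissingCol = true))
```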
-
case class
Unpivot(ids: Option[Seq[NamedExpression]], values: Option[Seq[Seq[NamedExpression]]], aliases: Option[Seq[Option[String]]], variableColumnName: String, valueColumnNames: Seq[String], child: LogicalPlan) extends LogicalPlan with UnresolvedUnaryNode with Product with Serializable
A constructor for creating an Unpivot, which will later be converted to an Expand during the query analysis.
Either ids or values array must be set. The ids array can be empty, the values array must not be empty if not None.
A None ids array will be replaced during analysis with all resolved outputs of child except the values. This expansion makes it easy to select all non-value columns as id columns.
A None values array will be replaced during analysis with all resolved outputs of child except the ids. This expansion makes it easy to unpivot all non-id columns.
- ids
Id columns
- values
Value columns to unpivot
- aliases
Optional aliases for values
- variableColumnName
Name of the variable column
- valueColumnNames
Names of the value columns
- child
Child operator
- See also
org.apache.spark.sql.catalyst.analysis.Analyzer.ResolveUnpivot, org.apache.spark.sql.catalyst.analysis.TypeCoercionBase.UnpivotCoercion
Multiple columns can be unpivoted in one row by providing multiple value column names and the same number of unpivot value expressions:
  // one-dimensional value columns
  Unpivot(
    Some(Seq("id")),
    Some(Seq(
      Seq("val1"),
      Seq("val2")
    )),
    None,
    "var",
    Seq("val")
  )

  // two-dimensional value columns
  Unpivot(
    Some(Seq("id")),
    Some(Seq(
      Seq("val1.1", "val1.2"),
      Seq("val2.1", "val2.2")
    )),
    None,
    "var",
    Seq("val1", "val2")
  )
The variable column will contain the name of the unpivot value while the value columns contain the unpivot values. Multi-dimensional unpivot values can be given aliases:
  // two-dimensional value columns with aliases
  Unpivot(
    Some(Seq("id")),
    Some(Seq(
      Seq("val1.1", "val1.2"),
      Seq("val2.1", "val2.2")
    )),
    Some(Seq(
      Some("val1"),
      Some("val2")
    )),
    "var",
    Seq("val1", "val2")
  )
All "value" columns must share a least common data type. Unless they are the same data type, all "value" columns are cast to the nearest common data type. For instance, types IntegerType and LongType are cast to LongType, while IntegerType and StringType do not have a common data type, and unpivot fails with an AnalysisException.
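The row-level effect of a one-dimensional unpivot (a "melt") can be sketched in plain Scala (no Spark; the Map-based rows and the helper name are illustrative assumptions):

```scala
// Conceptual sketch: each value column becomes one output row that carries
// the column's name in the variable column and its value in the value column.
type Row = Map[String, Any]

def unpivot(rows: Seq[Row], ids: Seq[String], values: Seq[String],
            variableColumnName: String, valueColumnName: String): Seq[Row] =
  for {
    row <- rows
    v   <- values
  } yield ids.map(c => c -> row(c)).toMap +
          (variableColumnName -> v) + (valueColumnName -> row(v))

val in: Seq[Row] = Seq(Map("id" -> 1, "val1" -> 10, "val2" -> 20))
println(unpivot(in, Seq("id"), Seq("val1", "val2"), "var", "val"))
// one input row becomes two output rows, one per value column
```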
-
case class
UnresolvedHint(name: String, parameters: Seq[Any], child: LogicalPlan) extends LogicalPlan with UnaryNode with Product with Serializable
A general hint for the child that is not yet resolved. This node is generated by the parser and will be eliminated post analysis.
- name
the name of the hint
- parameters
the parameters of the hint
- child
the LogicalPlan on which this hint applies
- case class UnresolvedTableSpec(properties: Map[String, String], provider: Option[String], optionExpression: OptionList, location: Option[String], comment: Option[String], serde: Option[SerdeInfo], external: Boolean) extends UnaryExpression with Unevaluable with TableSpecBase with Product with Serializable
-
case class
UnresolvedWith(child: LogicalPlan, cteRelations: Seq[(String, SubqueryAlias)]) extends LogicalPlan with UnaryNode with Product with Serializable
A container for holding named common table expressions (CTEs) and a query plan. This operator will be removed during analysis and the relations will be substituted into child.
- child
The final query of this CTE.
- cteRelations
A sequence of pairs (alias, the CTE definition) that this CTE defined. Each CTE can see the base tables and the previously defined CTEs only.
-
case class
UnsetTableProperties(table: LogicalPlan, propertyKeys: Seq[String], ifExists: Boolean) extends LogicalPlan with AlterTableCommand with Product with Serializable
The logical plan of the ALTER TABLE ... UNSET TBLPROPERTIES command.
-
case class
UnsetViewProperties(child: LogicalPlan, propertyKeys: Seq[String], ifExists: Boolean) extends LogicalPlan with UnaryCommand with Product with Serializable
The logical plan of the ALTER VIEW ... UNSET TBLPROPERTIES command.
- case class UpdateAction(condition: Option[Expression], assignments: Seq[Assignment]) extends MergeAction with Product with Serializable
- case class UpdateStarAction(condition: Option[Expression]) extends MergeAction with Product with Serializable
-
case class
UpdateTable(table: LogicalPlan, assignments: Seq[Assignment], condition: Option[Expression]) extends LogicalPlan with UnaryCommand with SupportsSubquery with Product with Serializable
The logical plan of the UPDATE TABLE command.
- trait V2CreateTableAsSelectPlan extends LogicalPlan with V2CreateTablePlan with AnalysisOnlyCommand
-
trait
V2CreateTablePlan extends LogicalPlan
A trait used for logical plan nodes that create or replace V2 table definitions.
- trait V2PartitionCommand extends LogicalPlan with UnaryCommand
-
trait
V2WriteCommand extends LogicalPlan with UnaryCommand with KeepAnalyzedQuery
Base trait for DataSourceV2 write commands
-
case class
View(desc: CatalogTable, isTempView: Boolean, child: LogicalPlan) extends LogicalPlan with UnaryNode with Product with Serializable
A container for holding the view description (CatalogTable) and whether the view is temporary or not. If it's a SQL (temp) view, the child should be a logical plan parsed from CatalogTable.viewText. Otherwise, the view is a temporary one created from a dataframe; in that case, the view description should contain a VIEW_CREATED_FROM_DATAFRAME property and the child must be already resolved. This operator will be removed at the end of the analysis stage.
- desc
A view description (CatalogTable) that provides the necessary information to resolve the view.
- isTempView
A flag to indicate whether the view is temporary or not.
- child
The logical plan of a view operator. If the view description is available, it should be a logical plan parsed from CatalogTable.viewText.
- case class Window(windowExpressions: Seq[NamedExpression], partitionSpec: Seq[Expression], orderSpec: Seq[SortOrder], child: LogicalPlan) extends LogicalPlan with UnaryNode with Product with Serializable
- case class WindowGroupLimit(partitionSpec: Seq[Expression], orderSpec: Seq[SortOrder], rankLikeFunction: Expression, limit: Int, child: LogicalPlan) extends LogicalPlan with UnaryNode with Product with Serializable
-
case class
WithCTE(plan: LogicalPlan, cteDefs: Seq[CTERelationDef]) extends LogicalPlan with Product with Serializable
The resolved version of UnresolvedWith, with CTE references linked to CTE definitions through unique IDs instead of relation aliases.
- plan
The query plan.
- cteDefs
The CTE definitions.
- case class WithWindowDefinition(windowDefinitions: Map[String, WindowSpecDefinition], child: LogicalPlan) extends LogicalPlan with UnaryNode with Product with Serializable
-
case class
WriteDelta(table: NamedRelation, condition: Expression, query: LogicalPlan, originalTable: NamedRelation, projections: WriteDeltaProjections, write: Option[DeltaWrite] = None) extends LogicalPlan with RowLevelWrite with Product with Serializable
Writes a delta of rows to an existing table during a row-level operation.
This node references a query that translates a logical DELETE, UPDATE, MERGE operation into a set of row-level changes to be encoded in the table. Each row in the query represents either a delete, update or insert and stores the operation type in a special column.
This node is constructed in rules that rewrite DELETE, UPDATE, MERGE operations for data sources that can handle deltas of rows.
- table
a plan that references a row-level operation table
- condition
a condition that defines matching records
- query
a query with a delta of records that should be written
- originalTable
a plan for the original table for which the row-level command was triggered
- projections
projections for row ID, row, metadata attributes
- write
a logical write, if already constructed
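Replaying such a delta against an existing table can be sketched in plain Scala (no Spark; the operation encoding, row-ID keying, and helper names are illustrative assumptions, not the Catalyst representation):

```scala
// Conceptual sketch: each delta row carries an operation type plus a row ID;
// applying the delta replays deletes, updates, and inserts against the table.
sealed trait Op
case object Delete extends Op
case object Update extends Op
case object Insert extends Op

def applyDelta(table: Map[Int, String],
               delta: Seq[(Op, Int, String)]): Map[Int, String] =
  delta.foldLeft(table) {
    case (t, (Delete, id, _)) => t - id
    case (t, (Update, id, v)) => t.updated(id, v)
    case (t, (Insert, id, v)) => t.updated(id, v)
  }

val table = Map(1 -> "a", 2 -> "b")
println(applyDelta(table, Seq((Delete, 1, ""), (Update, 2, "b2"), (Insert, 3, "c"))))
```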
Value Members
- object Aggregate extends Serializable
- object AnalysisHelper
-
object
AppendColumns extends Serializable
Factory for constructing new AppendColumns nodes.
- object AppendData extends Serializable
- object AsOfJoin extends Serializable
-
object
BROADCAST extends JoinStrategyHint with Product with Serializable
The hint for broadcast hash join or broadcast nested loop join, depending on the availability of equi-join keys.
- object CTERelationDef extends Serializable
- object CatalystSerde
-
object
CoGroup extends Serializable
Factory for constructing new CoGroup nodes.
- object DescribeColumn extends Serializable
- object DescribeNamespace extends Serializable
- object DescribeRelation extends Serializable
-
object
DistinctKeyVisitor extends LogicalPlanVisitor[Set[ExpressionSet]]
A visitor pattern for traversing a LogicalPlan tree and propagate the distinct attributes.
- object EventTimeWatermark extends Serializable
- object Expand extends Serializable
-
object
FlatMapGroupsInR extends Serializable
Factory for constructing new FlatMapGroupsInR nodes.
-
object
FlatMapGroupsWithState extends Serializable
Factory for constructing new MapGroupsWithState nodes.
- object FunctionUtils
- object HistogramSerializer
- object JoinHint extends Serializable
-
object
JoinStrategyHint
The enumeration of join strategy hints.
The hinted strategy will be used for the join with which it is associated if doable. In case of contradicting strategy hints specified for each side of the join, hints are prioritized as BROADCAST over SHUFFLE_MERGE over SHUFFLE_HASH over SHUFFLE_REPLICATE_NL.
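That prioritization can be sketched in plain Scala (no Spark; the string-based hint names and the helper are illustrative, not the JoinStrategyHint API):

```scala
// Conceptual sketch: when both sides of a join carry strategy hints, the
// strategy earliest in the priority order wins.
val priority = Seq("BROADCAST", "SHUFFLE_MERGE", "SHUFFLE_HASH",
                   "SHUFFLE_REPLICATE_NL")

def resolve(leftHint: Option[String], rightHint: Option[String]): Option[String] =
  (leftHint.toSeq ++ rightHint.toSeq).sortBy(h => priority.indexOf(h)).headOption

println(resolve(Some("SHUFFLE_HASH"), Some("BROADCAST"))) // Some(BROADCAST)
println(resolve(None, None))                              // None
```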
- object JoinWith
-
object
Limit
A constructor for creating a logical limit, which is split into two separate logical nodes: a LocalLimit, which is a partition local limit, followed by a GlobalLimit.
This muddies the water for a clean logical/physical separation, but is done for better limit pushdown. In distributed query processing, a non-terminal global limit is actually an expensive operation because it requires coordination (in Spark this is done using a shuffle).
In most cases when we want to push down limit, it is often better to only push some partition local limit. Consider the following:
GlobalLimit(Union(A, B))
It is better to do GlobalLimit(Union(LocalLimit(A), LocalLimit(B)))
than Union(GlobalLimit(A), GlobalLimit(B)).
So we introduced LocalLimit and GlobalLimit in the logical plan node for limit pushdown.
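Why the pushed-down form stays correct can be checked with partitions modeled as nested sequences, a minimal sketch in plain Scala (no Spark; the helper names are illustrative):

```scala
// Conceptual sketch: LocalLimit keeps at most n rows per partition;
// GlobalLimit then takes the first n rows overall. Pushing LocalLimit
// below a Union shrinks each child without changing the final result.
def localLimit(partitions: Seq[Seq[Int]], n: Int): Seq[Seq[Int]] =
  partitions.map(_.take(n))

def globalLimit(partitions: Seq[Seq[Int]], n: Int): Seq[Int] =
  partitions.flatten.take(n)

val a = Seq(Seq(1, 2, 3), Seq(4, 5)) // child A, two partitions
val b = Seq(Seq(6, 7, 8))            // child B, one partition

// GlobalLimit(Union(LocalLimit(A), LocalLimit(B)))
val pushed = globalLimit(localLimit(a, 2) ++ localLimit(b, 2), 2)
println(pushed) // List(1, 2), same as limiting the plain union
```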
- object LimitAndOffset
- object LocalRelation extends Serializable
- object LogicalPlan
- object LogicalPlanIntegrity
- object MapElements extends Serializable
-
object
MapGroups extends Serializable
Factory for constructing new MapGroups nodes.
- object MapPartitions extends Serializable
- object MapPartitionsInR extends Serializable
- object MergeRows extends Serializable
-
object
NO_BROADCAST_AND_REPLICATION extends JoinStrategyHint with Product with Serializable
An internal hint to prohibit broadcasting and replicating one side of a join.
An internal hint to prohibit broadcasting and replicating one side of a join. This hint is used by some rules where broadcasting or replicating a particular side of the join is not permitted, such as the cardinality check in MERGE operations.
-
object
NO_BROADCAST_HASH extends JoinStrategyHint with Product with Serializable
An internal hint to discourage broadcast hash join, used by adaptive query execution.
- object NamedParametersSupport
- object OffsetAndLimit
- object OverwriteByExpression extends Serializable
- object OverwritePartitionsDynamic extends Serializable
-
object
PREFER_SHUFFLE_HASH extends JoinStrategyHint with Product with Serializable
An internal hint to encourage shuffle hash join, used by adaptive query execution.
-
object
PlanHelper
PlanHelper contains utility methods that can be used by the Analyzer and Optimizer. It can also be a container for methods that are common across multiple rules in the Analyzer and Optimizer.
- object Project extends Serializable
-
object
Range extends Serializable
Factory for constructing new Range nodes.
- object RepartitionByExpression extends Serializable
-
object
SHUFFLE_HASH extends JoinStrategyHint with Product with Serializable
The hint for shuffle hash join.
-
object
SHUFFLE_MERGE extends JoinStrategyHint with Product with Serializable
The hint for shuffle sort merge join.
-
object
SHUFFLE_REPLICATE_NL extends JoinStrategyHint with Product with Serializable
The hint for shuffle-and-replicate nested loop join, a.k.a. cartesian product join.
- object SerdeInfo extends Serializable
- object SetOperation
- object ShowColumns extends Serializable
- object ShowCreateTable extends Serializable
- object ShowFunctions extends Serializable
- object ShowNamespaces extends Serializable
- object ShowPartitions extends Serializable
- object ShowTableExtended extends Serializable
- object ShowTableProperties extends Serializable
- object ShowTables extends Serializable
- object ShowViews extends Serializable
- object Statistics extends Serializable
- object Subquery extends Serializable
- object SubqueryAlias extends Serializable
- object TypedFilter extends Serializable
-
object
Union extends Serializable
Factory for constructing new Union nodes.
- object View extends Serializable