Create a table and insert the query result into it.
The wrapper class for Hive input and output schema properties.
Command for writing data out to a Hive table.
This class is mostly a mess, for legacy reasons (it evolved in organic ways and had to follow Hive's internal implementations closely, which were themselves a mess too). Please don't blame Reynold for this! He was just moving code around!
In the future we should converge the write path for Hive with the normal data source write path, as defined in org.apache.spark.sql.execution.datasources.FileFormatWriter.
the logical plan representing the table. In the future this should be an org.apache.spark.sql.catalyst.catalog.CatalogTable once we converge Hive tables and data source tables.
a map from each partition key to its partition value (optional). If a partition
value is None, a dynamic partition insert will be performed for that key.
As an example, INSERT INTO tbl PARTITION (a=1, b=2) AS ...
would have
Map('a' -> Some('1'), 'b' -> Some('2'))
and INSERT INTO tbl PARTITION (a=1, b) AS ...
would have
Map('a' -> Some('1'), 'b' -> None).
the logical plan representing the data to write.
whether to overwrite the existing table or partitions.
If true, only write if the table or partition does not exist.
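To make the partition, overwrite, and table-or-partition-exists parameters concrete, here is a minimal sketch of the user-level SQL that drives them. The session setup, the table names tbl and src, and the column names are assumptions for illustration, not part of the original documentation.

{{{
import org.apache.spark.sql.SparkSession

// Illustrative sketch only; table and column names are assumptions.
val spark = SparkSession.builder()
  .appName("hive-insert-sketch")
  .enableHiveSupport() // needed so the INSERT targets a Hive table
  .getOrCreate()

// Static partition insert: every partition value is known up front,
// i.e. partition = Map("a" -> Some("1"), "b" -> Some("2")).
spark.sql("INSERT INTO tbl PARTITION (a=1, b=2) SELECT col FROM src")

// Dynamic partition insert: the value of b comes from the query result,
// i.e. partition = Map("a" -> Some("1"), "b" -> None).
spark.sql("INSERT INTO tbl PARTITION (a=1, b) SELECT col, b FROM src")

// INSERT OVERWRITE corresponds to overwrite = true.
spark.sql("INSERT OVERWRITE TABLE tbl PARTITION (a=1, b=2) SELECT col FROM src")
}}}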
Transforms the input by forking and running the specified script.
the set of expressions that should be passed to the script.
the command that should be executed.
the attributes that are produced by the script.
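As a rough illustration of how these three pieces (input expressions, script command, output attributes) appear in user-level SQL, here is a hedged sketch that reuses the spark session from the sketch above; the script '/bin/cat', the table src, and the column names are assumptions.

{{{
// Illustrative sketch only.
// input:  the expressions passed to the script (key, value)
// script: the command that is forked and run ('/bin/cat' simply echoes its stdin)
// output: the attributes produced by the script (k, v)
spark.sql(
  """SELECT TRANSFORM (key, value)
    |USING '/bin/cat'
    |AS (k, v)
    |FROM src
  """.stripMargin)
}}}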
Create a table and insert the query result into it.
the table description, which may contain the serde, storage handler, etc.
the query whose result will be inserted into the new relation.
if true, continue silently when the table already exists; otherwise raise an exception.
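For context, a minimal sketch of the user-level statement that exercises these parameters; the table names, columns, storage format, and session setup are illustrative assumptions.

{{{
import org.apache.spark.sql.SparkSession

// Illustrative sketch only; names and format are assumptions.
val spark = SparkSession.builder()
  .appName("hive-ctas-sketch")
  .enableHiveSupport()
  .getOrCreate()

// Create the table and insert the query result into it in a single statement.
// IF NOT EXISTS corresponds to continuing silently when the table already exists;
// without it, an existing table causes an exception.
spark.sql(
  """CREATE TABLE IF NOT EXISTS summary
    |STORED AS PARQUET
    |AS SELECT key, count(*) AS cnt FROM src GROUP BY key
  """.stripMargin)
}}}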