Specifies the input data source format.
1.4.0
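A minimal sketch of selecting a format, assuming a SQLContext named sqlContext; the source name and path are illustrative:

```scala
// Sketch: name the data source format explicitly, then load.
// "data/people.json" is an illustrative path, not from the original docs.
val df = sqlContext.read
  .format("json")              // built-in source name; a full class name also works
  .load("data/people.json")
```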
Construct a DataFrame representing the database table accessible via JDBC URL url named table using connection properties.
Construct a DataFrame representing the database table accessible via JDBC URL url named table using connection properties. The predicates parameter gives a list of expressions suitable for inclusion in WHERE clauses; each one defines one partition of the DataFrame.
Don't create too many partitions in parallel on a large cluster; otherwise Spark might crash your external database systems.
JDBC database url of the form jdbc:subprotocol:subname
Name of the table in the external database.
Condition in the where clause for each partition.
JDBC database connection arguments, a list of arbitrary string tag/value. Normally at least a "user" and "password" property should be included.
1.4.0
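A hedged sketch of the predicate-based variant, assuming a SQLContext named sqlContext; the URL, table name, column, and date ranges are all illustrative:

```scala
import java.util.Properties

// Illustrative connection properties; at least "user" and "password"
// are normally required.
val props = new Properties()
props.setProperty("user", "dbuser")
props.setProperty("password", "secret")

// Each expression below becomes the WHERE clause of one partition,
// so the resulting DataFrame has three partitions.
val predicates = Array(
  "created < '2015-01-01'",
  "created >= '2015-01-01' AND created < '2015-07-01'",
  "created >= '2015-07-01'")

val df = sqlContext.read.jdbc(
  "jdbc:postgresql://localhost/testdb", "events", predicates, props)
```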
Construct a DataFrame representing the database table accessible via JDBC URL url named table.
Construct a DataFrame representing the database table accessible via JDBC URL url named table. Partitions of the table will be retrieved in parallel based on the parameters passed to this function.
Don't create too many partitions in parallel on a large cluster; otherwise Spark might crash your external database systems.
JDBC database url of the form jdbc:subprotocol:subname
Name of the table in the external database.
the name of a column of integral type that will be used for partitioning.
the minimum value of columnName used to decide partition stride
the maximum value of columnName used to decide partition stride
the number of partitions. The range minValue-maxValue will be split evenly into this many partitions
JDBC database connection arguments, a list of arbitrary string tag/value. Normally at least a "user" and "password" property should be included.
1.4.0
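A sketch of the column-partitioned variant, again assuming a SQLContext named sqlContext; the URL, table, column name, and bounds are illustrative:

```scala
import java.util.Properties

val props = new Properties()
props.setProperty("user", "dbuser")       // illustrative credentials
props.setProperty("password", "secret")

// Partition on the integral "id" column. The min/max values only decide the
// stride; rows outside the range still appear, in the first or last partition.
val df = sqlContext.read.jdbc(
  "jdbc:postgresql://localhost/testdb",
  "events",
  "id",        // column of integral type used for partitioning
  1L,          // minimum value of the column, used to decide stride
  1000000L,    // maximum value of the column, used to decide stride
  10,          // number of partitions: the range is split evenly into 10
  props)
```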
Construct a DataFrame representing the database table accessible via JDBC URL url named table and connection properties.
1.4.0
Loads an RDD[String] storing JSON objects (one object per record) and returns the result as a DataFrame.
Loads an RDD[String] storing JSON objects (one object per record) and returns the result as a DataFrame. Unless the schema is specified using the schema function, this function goes through the input once to determine the input schema.
input RDD with one JSON object per record
1.4.0
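A sketch of reading JSON from an in-memory RDD, assuming a SparkContext named sc and a SQLContext named sqlContext; the records are illustrative:

```scala
// Sketch: an RDD of JSON strings, one object per record.
val jsonRDD = sc.parallelize(Seq(
  """{"name": "Alice", "age": 29}""",
  """{"name": "Bob", "age": 35}"""))

// Without an explicit schema, one pass over the RDD infers it.
val df = sqlContext.read.json(jsonRDD)
```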
Loads a JavaRDD[String] storing JSON objects (one object per record) and returns the result as a DataFrame.
Loads a JavaRDD[String] storing JSON objects (one object per record) and returns the result as a DataFrame. Unless the schema is specified using the schema function, this function goes through the input once to determine the input schema.
input RDD with one JSON object per record
1.4.0
Loads a JSON file (one object per line) and returns the result as a DataFrame.
This function goes through the input once to determine the input schema. If you know the schema in advance, use the version that specifies the schema to avoid the extra scan.
input path
1.4.0
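A sketch of reading a line-delimited JSON file, assuming a SQLContext named sqlContext; the path is illustrative:

```scala
// Sketch: one JSON object per line in the input file.
val df = sqlContext.read.json("examples/src/main/resources/people.json")
df.printSchema()   // schema was inferred by scanning the input once
```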
Loads input in as a DataFrame, for data sources that don't require a path (e.g. external key-value stores).
1.4.0
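A sketch of a pathless load, configured entirely through options; "org.example.kvstore" is a hypothetical data source package, and the option names are illustrative:

```scala
// Sketch: a pathless source; all configuration comes from options.
val df = sqlContext.read
  .format("org.example.kvstore")   // hypothetical external key-value source
  .option("host", "localhost")
  .option("port", "6379")
  .load()                          // no path argument
```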
Loads input in as a DataFrame, for data sources that require a path (e.g. data backed by a local or distributed file system).
1.4.0
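A sketch of a path-based load with an explicit format; this assumes the third-party spark-csv package is on the classpath, and the path and option are illustrative:

```scala
// Sketch: explicit format plus a file path.
val df = sqlContext.read
  .format("com.databricks.spark.csv")   // assumes spark-csv is available
  .option("header", "true")
  .load("data/users.csv")               // illustrative path
```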
Adds an input option for the underlying data source.
1.4.0
Adds input options for the underlying data source.
1.4.0
(Scala-specific) Adds input options for the underlying data source.
1.4.0
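A sketch showing single and bulk option forms, assuming a SQLContext named sqlContext; "samplingRatio" is an option of the built-in JSON source, and the value and path are illustrative:

```scala
// Sketch: add one input option at a time...
val df = sqlContext.read
  .format("json")
  .option("samplingRatio", "0.5")   // single key/value
  .load("data/events.json")

// ...or in bulk with the (Scala-specific) Map overload.
val df2 = sqlContext.read
  .format("json")
  .options(Map("samplingRatio" -> "0.5"))
  .load("data/events.json")
```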
Loads a Parquet file, returning the result as a DataFrame.
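A sketch of the Parquet shortcut, assuming a SQLContext named sqlContext; the path is illustrative:

```scala
// Sketch: Parquet files are self-describing, so no format or schema
// call is needed.
val df = sqlContext.read.parquet("data/users.parquet")
```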
Specifies the input schema.
Specifies the input schema. Some data sources (e.g. JSON) can infer the input schema automatically from data. By specifying the schema here, the underlying data source can skip the schema inference step, and thus speed up data loading.
1.4.0
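A sketch of supplying the schema up front so the JSON source can skip the inference scan; the field names and path are illustrative:

```scala
import org.apache.spark.sql.types._

// Illustrative schema for the JSON input.
val schema = StructType(Seq(
  StructField("name", StringType, nullable = true),
  StructField("age", LongType, nullable = true)))

// With the schema given, no extra pass over the data is needed.
val df = sqlContext.read.schema(schema).json("data/people.json")
```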
Returns the specified table as a DataFrame.
1.4.0
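A sketch of reading a catalog table; the table name "people" is illustrative:

```scala
// Sketch: assumes "people" was registered earlier,
// e.g. via someDf.registerTempTable("people").
val df = sqlContext.read.table("people")
```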
:: Experimental :: Interface used to load a DataFrame from external storage systems (e.g. file systems, key-value stores, etc.). Use SQLContext.read to access this.
1.4.0