pyspark.sql.DataFrameReader.jdbc
DataFrameReader.jdbc(url: str, table: str, column: Optional[str] = None, lowerBound: Union[str, int, None] = None, upperBound: Union[str, int, None] = None, numPartitions: Optional[int] = None, predicates: Optional[List[str]] = None, properties: Optional[Dict[str, str]] = None) → DataFrame
Construct a DataFrame representing the database table named table, accessible via the JDBC URL url and connection properties.

Partitions of the table will be retrieved in parallel if either column or predicates is specified. lowerBound, upperBound and numPartitions are needed when column is specified.

If both column and predicates are specified, column will be used.

New in version 1.4.0.

Changed in version 3.4.0: Supports Spark Connect.

- Parameters
- table : str
- the name of the table
- column : str, optional
- alias of the partitionColumn option. Refer to partitionColumn in Data Source Option for the version you use.
- predicates : list, optional
- a list of expressions suitable for inclusion in WHERE clauses; each one defines one partition of the DataFrame.
- properties : dict, optional
- a dictionary of JDBC database connection arguments. Normally at least the properties "user" and "password" with their corresponding values. For example: { 'user' : 'SYSTEM', 'password' : 'mypassword' }
 
- Returns
- DataFrame
- Other Parameters
- Extra options
- For the extra options, refer to Data Source Option for the version you use. 
 
- Notes
- Don't create too many partitions in parallel on a large cluster; otherwise Spark might crash your external database systems.