pyspark.pandas.DataFrame.skew

DataFrame.skew(axis: Union[int, str, None] = None, skipna: bool = True, numeric_only: bool = None) → Union[int, float, bool, str, bytes, decimal.Decimal, datetime.date, datetime.datetime, None, Series]
Return unbiased skew normalized by N-1.

Parameters
- axis: {index (0), columns (1)}
  Axis for the function to be applied on.
- skipna: bool, default True
  Exclude NA/null values when computing the result.
  Changed in version 3.4.0: including NA/null values (skipna=False) is now supported.
- numeric_only: bool, default None
  Include only float, int, and boolean columns. False is not supported. This parameter is mainly for pandas compatibility.
 
Returns
- skew: scalar for a Series, and a Series for a DataFrame.
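"Unbiased skew normalized by N-1" refers to the sample-adjusted (Fisher-Pearson) skewness coefficient that pandas documents; the formula below is given as background only, and whether pandas-on-Spark matches it exactly for every input is an assumption here, not something this page states.

$$G_1 = \frac{\sqrt{n(n-1)}}{n-2}\,\frac{m_3}{m_2^{3/2}}, \qquad m_k = \frac{1}{n}\sum_{i=1}^{n}\bigl(x_i-\bar{x}\bigr)^k$$

where n counts the observations used (the non-missing ones when skipna=True) and x̄ is their mean.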
 
Examples

>>> df = ps.DataFrame({'a': [1, 2, 3, np.nan], 'b': [0.1, 0.2, 0.3, np.nan]},
...                   columns=['a', 'b'])

On a DataFrame:

>>> df.skew()
a    0.0
b    0.0
dtype: float64

On a Series:

>>> df['a'].skew()
0.0
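The keyword arguments can be combined on the same data; the sketch below continues from the df defined above and only illustrates how the parameters are passed. The variable names are illustrative, and no result values are shown because the assignments suppress REPL output.

>>> col_skew = df.skew()                       # default: NA/null values are excluded first
>>> col_skew_na = df.skew(skipna=False)        # keep NA/null values (supported since 3.4.0)
>>> numeric_skew = df.skew(numeric_only=True)  # only float, int, and boolean columns
>>> scalar_skew = df['a'].skew()               # a single column (Series) reduces to a scalar

With skipna=False the NA/null values are not excluded, so they take part in the computation; in pandas this propagates to a missing result for any column that contains them.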