public final class Bucketizer extends Model<Bucketizer> implements HasHandleInvalid, HasInputCol, HasOutputCol, HasInputCols, HasOutputCols, DefaultParamsWritable
Bucketizer maps a column of continuous features to a column of feature buckets. Since 2.3.0, Bucketizer can map multiple columns at once by setting the inputCols parameter. Note that when both the inputCol and inputCols parameters are set, an Exception will be thrown. The splits parameter is only used for single-column usage, and splitsArray is for multiple columns.
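For orientation, here is a minimal single-column sketch in Java. It assumes a local SparkSession; the "features" column name, split points, and data values are illustrative and not part of this API page.

```java
import java.util.Arrays;
import java.util.List;

import org.apache.spark.ml.feature.Bucketizer;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.RowFactory;
import org.apache.spark.sql.SparkSession;
import org.apache.spark.sql.types.DataTypes;
import org.apache.spark.sql.types.Metadata;
import org.apache.spark.sql.types.StructField;
import org.apache.spark.sql.types.StructType;

public class BucketizerSingleColumnSketch {
  public static void main(String[] args) {
    SparkSession spark = SparkSession.builder()
        .appName("BucketizerSingleColumnSketch")
        .master("local[*]")          // illustrative local setup
        .getOrCreate();

    // Split points must be strictly increasing; infinities cover the full double range.
    double[] splits = {Double.NEGATIVE_INFINITY, -0.5, 0.0, 0.5, Double.POSITIVE_INFINITY};

    List<Row> data = Arrays.asList(
        RowFactory.create(-999.9),
        RowFactory.create(-0.5),
        RowFactory.create(0.2),
        RowFactory.create(999.9));
    StructType schema = new StructType(new StructField[]{
        new StructField("features", DataTypes.DoubleType, false, Metadata.empty())});
    Dataset<Row> df = spark.createDataFrame(data, schema);

    Bucketizer bucketizer = new Bucketizer()
        .setInputCol("features")
        .setOutputCol("bucketedFeatures")
        .setSplits(splits);

    // Each value in "features" is replaced by the index of the bucket it falls into.
    Dataset<Row> bucketed = bucketizer.transform(df);
    bucketed.show();

    spark.stop();
  }
}
```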
| Constructor and Description |
|---|
| Bucketizer() |
| Bucketizer(String uid) |
| Modifier and Type | Method and Description |
|---|---|
| Bucketizer | copy(ParamMap extra): Creates a copy of this instance with the same UID and some extra params. |
| double[] | getSplits() |
| double[][] | getSplitsArray() |
| Param<String> | handleInvalid(): Param for how to handle invalid entries containing NaN values. |
| Param<String> | inputCol(): Param for input column name. |
| StringArrayParam | inputCols(): Param for input column names. |
| static Bucketizer | load(String path) |
| Param<String> | outputCol(): Param for output column name. |
| StringArrayParam | outputCols(): Param for output column names. |
| static MLReader<T> | read() |
| Bucketizer | setHandleInvalid(String value) |
| Bucketizer | setInputCol(String value) |
| Bucketizer | setInputCols(String[] value) |
| Bucketizer | setOutputCol(String value) |
| Bucketizer | setOutputCols(String[] value) |
| Bucketizer | setSplits(double[] value) |
| Bucketizer | setSplitsArray(double[][] value) |
| DoubleArrayParam | splits(): Parameter for mapping continuous features into buckets. |
| DoubleArrayArrayParam | splitsArray(): Parameter for specifying multiple splits parameters. |
| String | toString() |
| Dataset<Row> | transform(Dataset<?> dataset): Transforms the input dataset. |
| StructType | transformSchema(StructType schema): Check transform validity and derive the output schema from the input schema. |
| String | uid(): An immutable unique ID for the object and its derivatives. |
Methods inherited from class Transformer: transform, transform, transform
Methods inherited from class PipelineStage: params
Methods inherited from interface HasHandleInvalid: getHandleInvalid
Methods inherited from interface HasInputCol: getInputCol
Methods inherited from interface HasOutputCol: getOutputCol
Methods inherited from interface HasInputCols: getInputCols
Methods inherited from interface HasOutputCols: getOutputCols
Methods inherited from interface Params: clear, copyValues, defaultCopy, defaultParamMap, explainParam, explainParams, extractParamMap, extractParamMap, get, getDefault, getOrDefault, getParam, hasDefault, hasParam, isDefined, isSet, paramMap, params, set, set, set, setDefault, setDefault, shouldOwn
Methods inherited from interface DefaultParamsWritable: write
Methods inherited from interface MLWritable: save
Methods inherited from interface Logging: $init$, initializeForcefully, initializeLogIfNecessary, initializeLogIfNecessary, initializeLogIfNecessary$default$2, initLock, isTraceEnabled, log, logDebug, logDebug, logError, logError, logInfo, logInfo, logName, logTrace, logTrace, logWarning, logWarning, org$apache$spark$internal$Logging$$log__$eq, org$apache$spark$internal$Logging$$log_, uninitialize

public static Bucketizer load(String path)
public static MLReader<T> read()
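load(String) and read() pair with the write() method inherited from MLWritable. A brief sketch, building on the bucketizer from the earlier example; the path is illustrative.

```java
// Persist a configured Bucketizer and load it back from the same path.
bucketizer.write().overwrite().save("/tmp/bucketizer-sketch");
Bucketizer restored = Bucketizer.load("/tmp/bucketizer-sketch");
```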
public final StringArrayParam outputCols()
Description copied from interface: HasOutputCols
Param for output column names.
Specified by: outputCols in interface HasOutputCols

public final StringArrayParam inputCols()
Description copied from interface: HasInputCols
Param for input column names.
Specified by: inputCols in interface HasInputCols

public final Param<String> outputCol()
Description copied from interface: HasOutputCol
Param for output column name.
Specified by: outputCol in interface HasOutputCol

public final Param<String> inputCol()
Description copied from interface: HasInputCol
Param for input column name.
Specified by: inputCol in interface HasInputCol

public String uid()
Description copied from interface: Identifiable
An immutable unique ID for the object and its derivatives.
Specified by: uid in interface Identifiable

public DoubleArrayParam splits()
Parameter for mapping continuous features into buckets. See also handleInvalid, which can optionally create an additional bucket for NaN values.
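As an illustration of the splits contract (the values below are invented): n+1 split points define n buckets, and infinities can be used so that every double value falls into some bucket. A fragment assuming the bucketizer from the earlier sketch:

```java
// Four split points define three buckets:
//   bucket 0: [-Infinity, 0.0)
//   bucket 1: [0.0, 10.0)
//   bucket 2: [10.0, +Infinity]
double[] illustrativeSplits = {Double.NEGATIVE_INFINITY, 0.0, 10.0, Double.POSITIVE_INFINITY};
bucketizer.setSplits(illustrativeSplits);
```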
public double[] getSplits()
public Bucketizer setSplits(double[] value)
public Bucketizer setInputCol(String value)
public Bucketizer setOutputCol(String value)
public Param<String> handleInvalid()
Specified by: handleInvalid in interface HasHandleInvalid

public Bucketizer setHandleInvalid(String value)
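A one-line sketch of NaN handling, assuming the shared handleInvalid convention: "keep" routes NaN values to an extra bucket, while "skip" and "error" are the other options.

```java
// Keep rows with NaN values and place them in an additional bucket
// (alternatives under the handleInvalid convention: "skip", "error").
bucketizer.setHandleInvalid("keep");
```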
public DoubleArrayArrayParam splitsArray()
public double[][] getSplitsArray()
public Bucketizer setSplitsArray(double[][] value)
public Bucketizer setInputCols(String[] value)
public Bucketizer setOutputCols(String[] value)
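A multi-column fragment combining setInputCols, setOutputCols, and setSplitsArray. The column names and split points are illustrative, and the three arrays are assumed to be of equal length; df2 stands for a DataFrame that contains both input columns.

```java
// One splits array per input column.
double[][] splitsArray = {
    {Double.NEGATIVE_INFINITY, -0.5, 0.0, 0.5, Double.POSITIVE_INFINITY},
    {Double.NEGATIVE_INFINITY, -0.3, 0.0, 0.3, Double.POSITIVE_INFINITY}
};

Bucketizer multiColBucketizer = new Bucketizer()
    .setInputCols(new String[]{"features1", "features2"})
    .setOutputCols(new String[]{"bucketed1", "bucketed2"})
    .setSplitsArray(splitsArray);

Dataset<Row> multiBucketed = multiColBucketizer.transform(df2);
```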
public Dataset<Row> transform(Dataset<?> dataset)
Description copied from class: Transformer
Transforms the input dataset.
Specified by: transform in class Transformer
Parameters: dataset - (undocumented)

public StructType transformSchema(StructType schema)
Description copied from class: PipelineStage
Check transform validity and derive the output schema from the input schema.

We check validity for interactions between parameters during transformSchema and raise an exception if any parameter value is invalid. Parameter value checks which do not depend on other parameters are handled by Param.validate().

A typical implementation should first conduct verification on schema change and parameter validity, including complex parameter interaction checks.
Specified by: transformSchema in class PipelineStage
Parameters: schema - (undocumented)
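A small fragment showing how transformSchema can be used to validate the configuration against a DataFrame's schema before running transform; the df variable is assumed from the earlier single-column sketch.

```java
// Validates the parameter setup against the schema without touching any data.
// An invalid configuration (for example, both inputCol and inputCols set)
// raises an exception here rather than later during transform.
StructType outputSchema = bucketizer.transformSchema(df.schema());
System.out.println(outputSchema.treeString());
```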
public Bucketizer copy(ParamMap extra)
Description copied from interface: Params
Creates a copy of this instance with the same UID and some extra params. See defaultCopy().
Specified by: copy in interface Params
Specified by: copy in class Model<Bucketizer>
Parameters: extra - (undocumented)

public String toString()
Specified by: toString in interface Identifiable
Overrides: toString in class Object