Flow 708: sklearn.ColumnTransformer
  Full name: TEST90b5efa73esklearn.compose._column_transformer.ColumnTransformer(numeric=sklearn.pipeline.Pipeline(simpleimputer=sklearn.impute._base.SimpleImputer,standardscaler=sklearn.preprocessing._data.StandardScaler),nominal=sklearn.pipeline.Pipeline(customimputer=openml.testing.CustomImputer,onehotencoder=sklearn.preprocessing._encoders.OneHotEncoder))
  Class: sklearn.compose._column_transformer.ColumnTransformer
  Uploader: 1159 | Version: 1 | External version: openml==0.14.1,sklearn==0.24.0
  Uploaded: 2024-01-10T15:53:02 | Language: English
  Dependencies: sklearn==0.24.0, numpy>=1.13.3, scipy>=0.19.1, joblib>=0.11, threadpoolctl>=2.0.0

  Description: Applies transformers to columns of an array or pandas DataFrame. This estimator allows different columns or column subsets of the input to be transformed separately, and the features generated by each transformer are concatenated to form a single feature space. This is useful for heterogeneous or columnar data, to combine several feature extraction mechanisms or transformations into a single transformer.

  Parameters:
    n_jobs (int, default null): Number of jobs to run in parallel. ``None`` means 1 unless in a :obj:`joblib.parallel_backend` context; ``-1`` means using all processors. See :term:`Glossary <n_jobs>` for more details.
    remainder (default "passthrough")
    sparse_threshold (float, default 0.3): If the output of the different transformers contains sparse matrices, these will be stacked as a sparse matrix if the overall density is lower than this value. Use ``sparse_threshold=0`` to always return dense. When the transformed output consists of all dense data, the stacked result will be dense and this keyword will be ignored.
    transformer_weights (dict, default null): Multiplicative weights for features per transformer. The output of the transformer is multiplied by these weights. Keys are transformer names, values the weights.
    transformers (list of tuples): List of (name, transformer, columns) tuples specifying the transformer objects to be applied to subsets of the data. Default: [{"oml-python:serialized_object": "component_reference", "value": {"key": "numeric", "step_name": "numeric", "argument_1": [0, 1, 2, 3, 4, 5, 6, 7]}}, {"oml-python:serialized_object": "component_reference", "value": {"key": "nominal", "step_name": "nominal", "argument_1": []}}]
    verbose (bool, default false): If True, the time elapsed while fitting each transformer will be printed as it is completed.

  Components:
    numeric: Flow 709 (sklearn.Pipeline(SimpleImputer,StandardScaler))
    nominal: Flow 712 (sklearn.Pipeline(CustomImputer,OneHotEncoder))

  Tags: openml-python, python, scikit-learn, sklearn, sklearn_0.24.0
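Taken together, the component references above describe one concrete scikit-learn object. The following is a minimal sketch of how the same composite could be built directly in scikit-learn 0.24, assuming the column indices recorded in the ``transformers`` default (numeric columns 0-7, an empty nominal column list) and substituting a plain SimpleImputer for the test-only openml.testing.CustomImputer::

    from sklearn.compose import ColumnTransformer
    from sklearn.impute import SimpleImputer
    from sklearn.pipeline import Pipeline
    from sklearn.preprocessing import OneHotEncoder, StandardScaler

    # Sub-pipeline for the "numeric" component (Flow 709).
    numeric = Pipeline([
        ("simpleimputer", SimpleImputer(strategy="mean")),
        ("standardscaler", StandardScaler()),
    ])
    # Sub-pipeline for the "nominal" component (Flow 712); SimpleImputer
    # stands in here for openml.testing.CustomImputer, its duplicate alias.
    nominal = Pipeline([
        ("customimputer", SimpleImputer(strategy="most_frequent")),
        ("onehotencoder", OneHotEncoder(handle_unknown="ignore")),
    ])
    # Column lists mirror "argument_1" of each component reference.
    ct = ColumnTransformer(
        transformers=[
            ("numeric", numeric, [0, 1, 2, 3, 4, 5, 6, 7]),
            ("nominal", nominal, []),
        ],
        remainder="passthrough",
        sparse_threshold=0.3,
    )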
Flow 709: sklearn.Pipeline(SimpleImputer,StandardScaler)
  Full name: TEST90b5efa73esklearn.pipeline.Pipeline(simpleimputer=sklearn.impute._base.SimpleImputer,standardscaler=sklearn.preprocessing._data.StandardScaler)
  Class: sklearn.pipeline.Pipeline
  Uploader: 1159 | Version: 1 | External version: openml==0.14.1,sklearn==0.24.0
  Uploaded: 2024-01-10T15:53:02 | Language: English
  Dependencies: sklearn==0.24.0, numpy>=1.13.3, scipy>=0.19.1, joblib>=0.11, threadpoolctl>=2.0.0

  Description: Pipeline of transforms with a final estimator. Sequentially apply a list of transforms and a final estimator. Intermediate steps of the pipeline must be 'transforms', that is, they must implement fit and transform methods. The final estimator only needs to implement fit. The transformers in the pipeline can be cached using the ``memory`` argument. The purpose of the pipeline is to assemble several steps that can be cross-validated together while setting different parameters. To that end, it enables setting parameters of the various steps using their names and the parameter name separated by a '__'. A step's estimator may be replaced entirely by setting the parameter with its name to another estimator, or a transformer may be removed by setting it to 'passthrough' or ``None``.

  Parameters:
    memory (str or object with the joblib.Memory interface, default null): Used to cache the fitted transformers of the pipeline. By default, no caching is performed. If a string is given, it is the path to the caching directory. Enabling caching triggers a clone of the transformers before fitting. Therefore, the transformer instance given to the pipeline cannot be inspected directly. Use the attribute ``named_steps`` or ``steps`` to inspect estimators within the pipeline. Caching the transformers is advantageous when fitting is time consuming.
    steps (list): List of (name, transform) tuples (implementing fit/transform) that are chained in the order given, with the last object an estimator. Default: [{"oml-python:serialized_object": "component_reference", "value": {"key": "simpleimputer", "step_name": "simpleimputer"}}, {"oml-python:serialized_object": "component_reference", "value": {"key": "standardscaler", "step_name": "standardscaler"}}]
    verbose (bool, default false): If True, the time elapsed while fitting each step will be printed as it is completed.

  Components:
    simpleimputer: Flow 710 (sklearn.SimpleImputer)
    standardscaler: Flow 711 (sklearn.StandardScaler)

  Tags: openml-python, python, scikit-learn, sklearn, sklearn_0.24.0

Flow 710: sklearn.SimpleImputer
  Full name: TEST90b5efa73esklearn.impute._base.SimpleImputer
  Class: sklearn.impute._base.SimpleImputer
  Uploader: 1159 | Version: 1 | External version: openml==0.14.1,sklearn==0.24.0
  Uploaded: 2024-01-10T15:53:02 | Language: English
  Dependencies: sklearn==0.24.0, numpy>=1.13.3, scipy>=0.19.1, joblib>=0.11, threadpoolctl>=2.0.0

  Description: Imputation transformer for completing missing values.

  Parameters:
    add_indicator (boolean, default false): If True, a :class:`MissingIndicator` transform will stack onto the output of the imputer's transform. This allows a predictive estimator to account for missingness despite imputation. If a feature has no missing values at fit/train time, the feature won't appear in the missing indicator even if there are missing values at transform/test time.
    copy (boolean, default true): If True, a copy of X will be created. If False, imputation will be done in-place whenever possible. Note that in the following cases a new copy will always be made, even if `copy=False`: if X is not an array of floating values; if X is encoded as a CSR matrix; if add_indicator=True.
    fill_value (string or numerical value, default null): When strategy == "constant", fill_value is used to replace all occurrences of missing_values. If left to the default, fill_value will be 0 when imputing numerical data and "missing_value" for strings or object data types.
    missing_values (int, default NaN): The placeholder for the missing values. All occurrences of `missing_values` will be imputed. For pandas dataframes with nullable integer dtypes and missing values, `missing_values` should be set to `np.nan`, since `pd.NA` will be converted to `np.nan`.
    strategy (string, default "mean"): The imputation strategy.
      - "mean": replace missing values using the mean along each column. Can only be used with numeric data.
      - "median": replace missing values using the median along each column. Can only be used with numeric data.
      - "most_frequent": replace missing values using the most frequent value along each column. Can be used with strings or numeric data. If there is more than one such value, only the smallest is returned.
      - "constant": replace missing values with fill_value. Can be used with strings or numeric data.
      .. versionadded:: 0.20 strategy="constant" for fixed value imputation
    verbose (integer, default 0): Controls the verbosity of the imputer.

  Tags: openml-python, python, scikit-learn, sklearn, sklearn_0.24.0
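As a quick illustration of how the numeric sub-pipeline (Flows 709/710) behaves, here is a hedged sketch on a toy array; the data is invented for illustration and is not part of the flow record. NaNs are first replaced by the column mean, then each column is standardized::

    import numpy as np
    from sklearn.impute import SimpleImputer
    from sklearn.pipeline import Pipeline
    from sklearn.preprocessing import StandardScaler

    X = np.array([[1.0, 2.0],
                  [np.nan, 4.0],
                  [7.0, np.nan]])
    pipe = Pipeline([
        ("simpleimputer", SimpleImputer(strategy="mean")),  # NaN -> column mean
        ("standardscaler", StandardScaler()),               # then z-score per column
    ])
    Xt = pipe.fit_transform(X)
    print(Xt.mean(axis=0))  # ~[0. 0.]: each column is centered after scaling

    # Step parameters can be set with the '__' syntax mentioned in the
    # Pipeline description; this takes effect on the next fit:
    pipe.set_params(simpleimputer__strategy="median")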
Flow 711: sklearn.StandardScaler
  Full name: TEST90b5efa73esklearn.preprocessing._data.StandardScaler
  Class: sklearn.preprocessing._data.StandardScaler
  Uploader: 1159 | Version: 1 | External version: openml==0.14.1,sklearn==0.24.0
  Uploaded: 2024-01-10T15:53:02 | Language: English
  Dependencies: sklearn==0.24.0, numpy>=1.13.3, scipy>=0.19.1, joblib>=0.11, threadpoolctl>=2.0.0

  Description: Standardize features by removing the mean and scaling to unit variance. The standard score of a sample `x` is calculated as z = (x - u) / s, where `u` is the mean of the training samples (or zero if `with_mean=False`) and `s` is the standard deviation of the training samples (or one if `with_std=False`). Centering and scaling happen independently on each feature by computing the relevant statistics on the samples in the training set. Mean and standard deviation are then stored to be used on later data using :meth:`transform`. Standardization of a dataset is a common requirement for many machine learning estimators: they might behave badly if the individual features do not more or less look like standard normally distributed data (e.g. Gaussian with 0 mean and unit variance). For instance, many elements used in the objective function of a learning algorithm (such as the RBF kernel of Support Vector Machines or the L1 and L2 regularizers of linear models) assume that all features are centered around 0 and have variance in the same order [...]

  Parameters:
    copy (bool, default true): If False, try to avoid a copy and do inplace scaling instead. This is not guaranteed to always work in place; e.g. if the data is not a NumPy array or scipy.sparse CSR matrix, a copy may still be returned.
    with_mean (bool, default true): If True, center the data before scaling. This does not work (and will raise an exception) when attempted on sparse matrices, because centering them entails building a dense matrix which in common use cases is likely to be too large to fit in memory.
    with_std (bool, default true): If True, scale the data to unit variance (or equivalently, unit standard deviation).

  Tags: openml-python, python, scikit-learn, sklearn, sklearn_0.24.0

Flow 712: sklearn.Pipeline(CustomImputer,OneHotEncoder)
  Full name: TEST90b5efa73esklearn.pipeline.Pipeline(customimputer=openml.testing.CustomImputer,onehotencoder=sklearn.preprocessing._encoders.OneHotEncoder)
  Class: sklearn.pipeline.Pipeline
  Uploader: 1159 | Version: 1 | External version: openml==0.14.1,sklearn==0.24.0
  Uploaded: 2024-01-10T15:53:02 | Language: English
  Dependencies: sklearn==0.24.0, numpy>=1.13.3, scipy>=0.19.1, joblib>=0.11, threadpoolctl>=2.0.0

  Description: same Pipeline docstring as Flow 709 above.

  Parameters:
    memory (str or object with the joblib.Memory interface, default null): same description as Flow 709's ``memory``.
    steps (list): List of (name, transform) tuples (implementing fit/transform) that are chained in the order given, with the last object an estimator. Default: [{"oml-python:serialized_object": "component_reference", "value": {"key": "customimputer", "step_name": "customimputer"}}, {"oml-python:serialized_object": "component_reference", "value": {"key": "onehotencoder", "step_name": "onehotencoder"}}]
    verbose (bool, default false): If True, the time elapsed while fitting each step will be printed as it is completed.

  Components:
    customimputer: Flow 713 (openml.CustomImputer)
    onehotencoder: Flow 715 (sklearn.OneHotEncoder)

  Tags: openml-python, python, scikit-learn, sklearn, sklearn_0.24.0
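The formula in the StandardScaler description above (Flow 711) can be checked directly. A small sketch with invented data, again not part of the record, comparing :meth:`transform` against a manual z = (x - u) / s::

    import numpy as np
    from sklearn.preprocessing import StandardScaler

    x = np.array([[1.0], [3.0], [5.0]])
    scaler = StandardScaler().fit(x)
    u = x.mean(axis=0)   # per-feature training mean
    s = x.std(axis=0)    # per-feature training std (ddof=0, as the scaler uses)
    assert np.allclose(scaler.transform(x), (x - u) / s)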
Flow 713: openml.CustomImputer
  Full name: TEST90b5efa73eopenml.testing.CustomImputer
  Class: openml.testing.CustomImputer
  Uploader: 1159 | Version: 1 | External version: openml==0.14.1,sklearn==0.24.0
  Uploaded: 2024-01-10T15:53:02 | Language: English
  Dependencies: sklearn==0.24.0, numpy>=1.13.3, scipy>=0.19.1, joblib>=0.11, threadpoolctl>=2.0.0

  Description: Duplicate class alias for sklearn's SimpleImputer. Helps bypass the sklearn extension's duplicate-operation check.

  Parameters (recorded values): add_indicator=false, copy=true, fill_value=null, missing_values=NaN, strategy="most_frequent", verbose=0

  Tags: openml-python, python, scikit-learn, sklearn, sklearn_0.24.0
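The record does not include the class's source, but a "duplicate class alias" of this kind can be as small as an empty subclass. The sketch below is an assumption about its shape, not the actual openml.testing code::

    from sklearn.impute import SimpleImputer

    class CustomImputer(SimpleImputer):
        """Behaves exactly like SimpleImputer; only the class name differs,
        which is enough to sidestep a duplicate-operation check keyed on names."""

    # Matches the parameter values recorded for Flow 713.
    imp = CustomImputer(strategy="most_frequent")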
Flow 715: sklearn.OneHotEncoder
  Full name: TEST90b5efa73esklearn.preprocessing._encoders.OneHotEncoder
  Class: sklearn.preprocessing._encoders.OneHotEncoder
  Uploader: 1159 | Version: 1 | External version: openml==0.14.1,sklearn==0.24.0
  Uploaded: 2024-01-10T15:53:02 | Language: English
  Dependencies: sklearn==0.24.0, numpy>=1.13.3, scipy>=0.19.1, joblib>=0.11, threadpoolctl>=2.0.0

  Description: Encode categorical features as a one-hot numeric array. The input to this transformer should be an array-like of integers or strings, denoting the values taken on by categorical (discrete) features. The features are encoded using a one-hot (aka 'one-of-K' or 'dummy') encoding scheme. This creates a binary column for each category and returns a sparse matrix or dense array (depending on the ``sparse`` parameter). By default, the encoder derives the categories from the unique values in each feature. Alternatively, you can specify the `categories` manually. This encoding is needed for feeding categorical data to many scikit-learn estimators, notably linear models and SVMs with the standard kernels. Note: a one-hot encoding of y labels should use a LabelBinarizer instead.

  Parameters:
    categories ('auto' or a list of array-like, default "auto"): Categories (unique values) per feature: 'auto' determines categories automatically from the training data; with a list, ``categories[i]`` holds the categories expected in the ith column. The passed categories should not mix strings and numeric values within a single feature, and should be sorted in case of numeric values. The used categories can be found in the ``categories_`` attribute. .. versionadded:: 0.20
    drop ({'first', 'if_binary'} or an array-like of shape (n_features,), default null): Specifies a methodology to use to drop one of the categories per feature. This is useful in situations where perfectly collinear features cause problems, such as when feeding the resulting data into a neural network or an unregularized regression. However, dropping one category breaks the symmetry of the original representation and can therefore induce a bias in downstream models, for instance in penalized linear classification or regression models.
    dtype (number type, default {"oml-python:serialized_object": "type", "value": "np.float64"}): Desired dtype of output.
    handle_unknown ({'error', 'ignore'}, default "ignore"): Whether to raise an error or ignore if an unknown categorical feature is present during transform (sklearn's own default, 'error', is to raise). When this parameter is set to 'ignore' and an unknown category is encountered during transform, the resulting one-hot encoded columns for this feature will be all zeros. In the inverse transform, an unknown category will be denoted as None.
    sparse (bool, default true): Will return a sparse matrix if set to True, else will return an array.

  Tags: openml-python, python, scikit-learn, sklearn, sklearn_0.24.0
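To close, a hedged sketch of the recorded OneHotEncoder defaults in action on invented data: with handle_unknown="ignore", a category unseen at fit time encodes as all zeros. Note that ``sparse=True`` is the scikit-learn 0.24 spelling; newer releases rename it to ``sparse_output``::

    import numpy as np
    from sklearn.preprocessing import OneHotEncoder

    enc = OneHotEncoder(handle_unknown="ignore", sparse=True)
    enc.fit(np.array([["red"], ["green"]]))             # learned categories: ['green', 'red']
    out = enc.transform(np.array([["red"], ["blue"]]))  # "blue" was never seen at fit time
    print(out.toarray())
    # [[0. 1.]    <- "red"
    #  [0. 0.]]   <- unknown "blue" ignored: all zeros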