# Column | Jupyter-based Feature Engineering Manual: Data Preprocessing


Author: Yingxiang Chen & Zihan Yang

Editor: Red Stone

The importance of feature engineering in machine learning is self-evident: proper feature engineering can significantly improve the performance of machine learning models. We have compiled a systematic feature engineering tutorial on GitHub for your reference.

https://github.com/YC-Coder-Chen/feature-engineering-handbook

This article covers the data preprocessing section: how to use scikit-learn to process static continuous variables, Category Encoders to process static categorical variables, and Featuretools to process common time series variables.

Contents

Data preprocessing for feature engineering is described in three main sections:

• Static continuous variables

• Static categorical variables

• Time series variables

This article covers data preprocessing for static continuous variables, explained in detail using sklearn in Jupyter.

1.1 Static Continuous Variables

1.1.1 Discretization

Discretizing continuous variables can make the model more robust. For example, when predicting a customer's purchase behavior, a customer who has made 30 purchases may behave very similarly to one who has made 32. Excess precision in a feature can act as noise, which is why LightGBM uses a histogram algorithm to prevent over-fitting. There are two common methods for discretizing continuous variables.
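The intuition can be sketched with plain NumPy (the bin edges below are hand-picked for illustration, not from the handbook): 30 and 32 purchases fall into the same bin, so the model treats them identically.

```
import numpy as np

purchases = np.array([30, 32, 2, 150])
bin_edges = np.array([0, 10, 50, 100])  # hypothetical bin edges

# np.digitize maps each value to the index of the bin it falls into
bin_ids = np.digitize(purchases, bin_edges)
# → array([2, 2, 1, 4]): 30 and 32 share bin 2
```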

1.1.1.1 Binarization

Binarize the numerical feature.

```
# load the sample data
from sklearn.datasets import fetch_california_housing
dataset = fetch_california_housing()
X, y = dataset.data, dataset.target # we will take the first column as the example later
```
```
%matplotlib inline
import seaborn as sns
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
sns.distplot(X[:,0], hist = True, kde=True)
ax.set_title('Histogram', fontsize=12)
ax.set_xlabel('Value', fontsize=12)
ax.set_ylabel('Frequency', fontsize=12); # this feature has long-tail distribution
```

```
from sklearn.preprocessing import Binarizer

sample_columns = X[0:10,0] # select the top 10 samples
# return array([8.3252, 8.3014, 7.2574, 5.6431, 3.8462, 4.0368, 3.6591, 3.12, 2.0804, 3.6912])

model = Binarizer(threshold=6) # set 6 to be the threshold
# if value <= 6, then return 0 else return 1
result = model.fit_transform(sample_columns.reshape(-1,1)).reshape(-1)
# return array([1., 1., 1., 0., 0., 0., 0., 0., 0., 0.])
```

1.1.1.2 Binning

Divide the numerical feature into bins.

Uniform binning:

```
from sklearn.preprocessing import KBinsDiscretizer

# in order to mimic the operation in real-world, we shall fit the KBinsDiscretizer
# on the trainset and transform the testset
# we take the top ten samples in the first column as test set
# take the rest samples in the first column as train set

test_set = X[0:10,0]
# return array([8.3252, 8.3014, 7.2574, 5.6431, 3.8462, 4.0368, 3.6591, 3.12, 2.0804, 3.6912])
train_set = X[10:,0]

model = KBinsDiscretizer(n_bins=5, encode='ordinal', strategy='uniform') # set 5 bins
# return ordinal bin number, set all bins to have identical widths

model.fit(train_set.reshape(-1,1))
result = model.transform(test_set.reshape(-1,1)).reshape(-1)
# return array([2., 2., 2., 1., 1., 1., 1., 0., 0., 1.])
bin_edge = model.bin_edges_
# return array([ 0.4999 ,  3.39994,  6.29998,  9.20002, 12.10006, 15.0001 ]), the bin edges
```
```
# visualize the bin edges
fig, ax = plt.subplots()
sns.distplot(train_set, hist = True, kde=True)

for edge in bin_edge: # uniform bins
    line = plt.axvline(edge, color='b')
ax.legend([line], ['Uniform Bin Edges'], fontsize=10)
ax.set_title('Histogram', fontsize=12)
ax.set_xlabel('Value', fontsize=12)
ax.set_ylabel('Frequency', fontsize=12);
```

Quantile binning:

```
from sklearn.preprocessing import KBinsDiscretizer

# in order to mimic the operation in real-world, we shall fit the KBinsDiscretizer
# on the trainset and transform the testset
# we take the top ten samples in the first column as test set
# take the rest samples in the first column as train set

test_set = X[0:10,0]
# return array([8.3252, 8.3014, 7.2574, 5.6431, 3.8462, 4.0368, 3.6591, 3.12, 2.0804, 3.6912])
train_set = X[10:,0]

model = KBinsDiscretizer(n_bins=5, encode='ordinal', strategy='quantile') # set 5 bins
# return ordinal bin number, set all bins based on quantiles

model.fit(train_set.reshape(-1,1))
result = model.transform(test_set.reshape(-1,1)).reshape(-1)
# return array([4., 4., 4., 4., 2., 3., 2., 1., 0., 2.])
bin_edge = model.bin_edges_
# return array([ 0.4999 ,  2.3523 ,  3.1406 ,  3.9667 ,  5.10824, 15.0001 ]), the bin edges
# 2.3523 is the 20% quantile
# 3.1406 is the 40% quantile, etc..
```
```
# visualize the bin edges
fig, ax = plt.subplots()
sns.distplot(train_set, hist = True, kde=True)

for edge in bin_edge: # quantile based bins
    line = plt.axvline(edge, color='b')
ax.legend([line], ['Quantiles Bin Edges'], fontsize=10)
ax.set_title('Histogram', fontsize=12)
ax.set_xlabel('Value', fontsize=12)
ax.set_ylabel('Frequency', fontsize=12);
```

1.1.2 Scaling

Features on different scales are hard to compare, especially in linear regression and logistic regression models. In k-means clustering or KNN models based on Euclidean distance, feature scaling is required; otherwise the distance measure is meaningless. Scaling also accelerates convergence for any algorithm that uses gradient descent.

Whether scaling is needed depends on the model. Note: skewness affects PCA models, so it is best to eliminate it with a power transformation.
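The effect on a distance-based model can be sketched as follows (synthetic data, not from the handbook): without scaling, the large-variance feature would dominate the Euclidean distance, so a scaler is placed before KMeans in a pipeline.

```
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

rng = np.random.RandomState(0)
# feature 2 has ~1000x the spread of feature 1 and would dominate raw distances
X_demo = np.column_stack([rng.normal(0, 1, 100), rng.normal(0, 1000, 100)])

pipe = make_pipeline(StandardScaler(), KMeans(n_clusters=2, n_init=10, random_state=0))
labels = pipe.fit_predict(X_demo)  # both features now contribute comparably to distance
```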

1.1.2.1 Standard Scaling (Z-score standardization)

Formula: X' = (X − μ) / σ, where X is the feature, μ is the mean of X, and σ is the standard deviation of X. This method is very sensitive to outliers, because outliers affect both μ and σ.

```
from sklearn.preprocessing import StandardScaler

# in order to mimic the operation in real-world, we shall fit the StandardScaler
# on the trainset and transform the testset
# we take the top ten samples in the first column as test set
# take the rest samples in the first column as train set

test_set = X[0:10,0]
# return array([8.3252, 8.3014, 7.2574, 5.6431, 3.8462, 4.0368, 3.6591, 3.12, 2.0804, 3.6912])
train_set = X[10:,0]

model = StandardScaler()

model.fit(train_set.reshape(-1,1)) # fit on the train set and transform the test set
# top ten numbers for simplification
result = model.transform(test_set.reshape(-1,1)).reshape(-1)
# return array([ 2.34539745,  2.33286782,  1.78324852,  0.93339178, -0.0125957 ,
# 0.08774668, -0.11109548, -0.39490751, -0.94221041, -0.09419626])
# result is the same as ((X[0:10,0] - X[10:,0].mean())/X[10:,0].std())
```
```
# visualize the distribution after the scaling
# fit and transform the entire first feature

import seaborn as sns
import matplotlib.pyplot as plt

fig, ax = plt.subplots(2,1, figsize = (13,9))
sns.distplot(X[:,0], hist = True, kde=True, ax=ax[0])
ax[0].set_title('Histogram of the Original Distribution', fontsize=12)
ax[0].set_xlabel('Value', fontsize=12)
ax[0].set_ylabel('Frequency', fontsize=12); # this feature has long-tail distribution

model = StandardScaler()
model.fit(X[:,0].reshape(-1,1))
result = model.transform(X[:,0].reshape(-1,1)).reshape(-1)

# show the distribution of the entire feature
sns.distplot(result, hist = True, kde=True, ax=ax[1])
ax[1].set_title('Histogram of the Transformed Distribution', fontsize=12)
ax[1].set_xlabel('Value', fontsize=12)
ax[1].set_ylabel('Frequency', fontsize=12); # the distribution is the same, but the scale changes
fig.tight_layout()
```

1.1.2.2 MinMaxScaler (scaling by value range)

Suppose we want to scale the range of feature values to (a, b).

Formula: X' = (X − Min) / (Max − Min) × (b − a) + a, where Min is the minimum of X and Max is the maximum of X. This method is also sensitive to outliers, which affect both Min and Max.

```
from sklearn.preprocessing import MinMaxScaler

# in order to mimic the operation in real-world, we shall fit the MinMaxScaler
# on the trainset and transform the testset
# we take the top ten samples in the first column as test set
# take the rest samples in the first column as train set

test_set = X[0:10,0]
# return array([8.3252, 8.3014, 7.2574, 5.6431, 3.8462, 4.0368, 3.6591, 3.12, 2.0804, 3.6912])
train_set = X[10:,0]

model = MinMaxScaler(feature_range=(0,1)) # set the range to be (0,1)

model.fit(train_set.reshape(-1,1)) # fit on the train set and transform the test set
# top ten numbers for simplification
result = model.transform(test_set.reshape(-1,1)).reshape(-1)
# return array([0.53966842, 0.53802706, 0.46602805, 0.35469856, 0.23077613,
# 0.24392077, 0.21787286, 0.18069406, 0.1089985 , 0.22008662])
# result is the same as (X[0:10,0] - X[10:,0].min())/(X[10:,0].max()-X[10:,0].min())
```
```
# visualize the distribution after the scaling
# fit and transform the entire first feature

import seaborn as sns
import matplotlib.pyplot as plt

fig, ax = plt.subplots(2,1, figsize = (13,9))
sns.distplot(X[:,0], hist = True, kde=True, ax=ax[0])
ax[0].set_title('Histogram of the Original Distribution', fontsize=12)
ax[0].set_xlabel('Value', fontsize=12)
ax[0].set_ylabel('Frequency', fontsize=12); # this feature has long-tail distribution

model = MinMaxScaler(feature_range=(0,1))
model.fit(X[:,0].reshape(-1,1))
result = model.transform(X[:,0].reshape(-1,1)).reshape(-1)

# show the distribution of the entire feature
sns.distplot(result, hist = True, kde=True, ax=ax[1])
ax[1].set_title('Histogram of the Transformed Distribution', fontsize=12)
ax[1].set_xlabel('Value', fontsize=12)
ax[1].set_ylabel('Frequency', fontsize=12); # the distribution is the same, but the scale changes
fig.tight_layout() # now the scale changes to [0,1]
```

1.1.2.3 RobustScaler (outlier-resistant scaling)

Scale features using statistics (quantiles) that are robust to outliers. Suppose we scale the feature using the quantile range (a, b).

Formula: X' = (X − median(X)) / (Q_b(X) − Q_a(X)), where Q_a and Q_b are the a-th and b-th percentiles of X. This method is more robust to outliers.

```
import numpy as np
from sklearn.preprocessing import RobustScaler

# in order to mimic the operation in real-world, we shall fit the RobustScaler
# on the trainset and transform the testset
# we take the top ten samples in the first column as test set
# take the rest samples in the first column as train set

test_set = X[0:10,0]
# return array([8.3252, 8.3014, 7.2574, 5.6431, 3.8462, 4.0368, 3.6591, 3.12, 2.0804, 3.6912])
train_set = X[10:,0]

model = RobustScaler(with_centering = True, with_scaling = True,
quantile_range = (25.0, 75.0))
# with_centering = True => recenter the feature by set X' = X - X.median()
# with_scaling = True => rescale the feature by the quantile set by user
# set the quantile to the (25%, 75%)

model.fit(train_set.reshape(-1,1)) # fit on the train set and transform the test set
# top ten numbers for simplification
result = model.transform(test_set.reshape(-1,1)).reshape(-1)
# return array([ 2.19755974,  2.18664281,  1.7077657 ,  0.96729508,  0.14306683,
# 0.23049401,  0.05724508, -0.19003715, -0.66689601,  0.07196918])
# result is the same as (X[0:10,0] - np.quantile(X[10:,0], 0.5))/(np.quantile(X[10:,0],0.75)-np.quantile(X[10:,0], 0.25))
```
```
# visualize the distribution after the scaling
# fit and transform the entire first feature

import seaborn as sns
import matplotlib.pyplot as plt

fig, ax = plt.subplots(2,1, figsize = (13,9))
sns.distplot(X[:,0], hist = True, kde=True, ax=ax[0])
ax[0].set_title('Histogram of the Original Distribution', fontsize=12)
ax[0].set_xlabel('Value', fontsize=12)
ax[0].set_ylabel('Frequency', fontsize=12); # this feature has long-tail distribution

model = RobustScaler(with_centering = True, with_scaling = True,
quantile_range = (25.0, 75.0))
model.fit(X[:,0].reshape(-1,1))
result = model.transform(X[:,0].reshape(-1,1)).reshape(-1)

# show the distribution of the entire feature
sns.distplot(result, hist = True, kde=True, ax=ax[1])
ax[1].set_title('Histogram of the Transformed Distribution', fontsize=12)
ax[1].set_xlabel('Value', fontsize=12)
ax[1].set_ylabel('Frequency', fontsize=12); # the distribution is the same, but the scale changes
fig.tight_layout()
```

1.1.2.4 Power Transformation (non-linear transformation)

All the scaling methods described above preserve the shape of the original distribution. However, normality is an important assumption in many statistical models. We can use a power transformation to convert the original distribution to a normal distribution.

Box-Cox transformation:

The Box-Cox transformation applies only to positive numbers and takes the following form:

X'(λ) = (X^λ − 1) / λ, if λ ≠ 0
X'(λ) = ln(X), if λ = 0

Considering all λ values, the value that best stabilizes variance and minimizes skewness is selected through maximum likelihood estimation.
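The maximum-likelihood fit of λ can be checked directly with scipy (a sketch on synthetic log-normal data; scipy is assumed to be available):

```
import numpy as np
from scipy import stats

rng = np.random.RandomState(0)
data = np.exp(rng.normal(size=1000))  # log-normal: positive and right-skewed

# with lmbda=None, scipy estimates lambda by maximum likelihood
transformed, fitted_lambda = stats.boxcox(data)
# for log-normal data the log transform is optimal, so fitted_lambda is close to 0
```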

```
from sklearn.preprocessing import PowerTransformer

# in order to mimic the operation in real-world, we shall fit the PowerTransformer
# on the trainset and transform the testset
# we take the top ten samples in the first column as test set
# take the rest samples in the first column as train set

test_set = X[0:10,0]
# return array([8.3252, 8.3014, 7.2574, 5.6431, 3.8462, 4.0368, 3.6591, 3.12, 2.0804, 3.6912])
train_set = X[10:,0]

model = PowerTransformer(method='box-cox', standardize=True)
# apply box-cox transformation

model.fit(train_set.reshape(-1,1)) # fit on the train set and transform the test set
# top ten numbers for simplification
result = model.transform(test_set.reshape(-1,1)).reshape(-1)
# return array([ 1.91669292,  1.91009687,  1.60235867,  1.0363095 ,  0.19831579,
# 0.30244247,  0.09143411, -0.24694006, -1.08558469,  0.11011933])
```
```
# visualize the distribution after the scaling
# fit and transform the entire first feature

import seaborn as sns
import matplotlib.pyplot as plt

fig, ax = plt.subplots(2,1, figsize = (13,9))
sns.distplot(X[:,0], hist = True, kde=True, ax=ax[0])
ax[0].set_title('Histogram of the Original Distribution', fontsize=12)
ax[0].set_xlabel('Value', fontsize=12)
ax[0].set_ylabel('Frequency', fontsize=12); # this feature has long-tail distribution

model = PowerTransformer(method='box-cox', standardize=True)
model.fit(X[:,0].reshape(-1,1))
result = model.transform(X[:,0].reshape(-1,1)).reshape(-1)

# show the distribution of the entire feature
sns.distplot(result, hist = True, kde=True, ax=ax[1])
ax[1].set_title('Histogram of the Transformed Distribution', fontsize=12)
ax[1].set_xlabel('Value', fontsize=12)
ax[1].set_ylabel('Frequency', fontsize=12); # the distribution now becomes normal
fig.tight_layout()
```

Yeo-Johnson transformation:

The Yeo-Johnson transformation applies to both positive and negative numbers and takes the following form:

X'(λ) = ((X + 1)^λ − 1) / λ, if λ ≠ 0, X ≥ 0
X'(λ) = ln(X + 1), if λ = 0, X ≥ 0
X'(λ) = −((−X + 1)^(2−λ) − 1) / (2−λ), if λ ≠ 2, X < 0
X'(λ) = −ln(−X + 1), if λ = 2, X < 0

Considering all λ values, the value that best stabilizes variance and minimizes skewness is selected through maximum likelihood estimation.

```
from sklearn.preprocessing import PowerTransformer

# in order to mimic the operation in real-world, we shall fit the PowerTransformer
# on the trainset and transform the testset
# we take the top ten samples in the first column as test set
# take the rest samples in the first column as train set

test_set = X[0:10,0]
# return array([8.3252, 8.3014, 7.2574, 5.6431, 3.8462, 4.0368, 3.6591, 3.12, 2.0804, 3.6912])
train_set = X[10:,0]

model = PowerTransformer(method='yeo-johnson', standardize=True)
# apply yeo-johnson transformation

model.fit(train_set.reshape(-1,1)) # fit on the train set and transform the test set
# top ten numbers for simplification
result = model.transform(test_set.reshape(-1,1)).reshape(-1)
# return array([ 1.90367888,  1.89747091,  1.604735  ,  1.05166306,  0.20617221,
# 0.31245176,  0.09685566, -0.25011726, -1.10512438,  0.11598074])
```
```
# visualize the distribution after the scaling
# fit and transform the entire first feature

import seaborn as sns
import matplotlib.pyplot as plt

fig, ax = plt.subplots(2,1, figsize = (13,9))
sns.distplot(X[:,0], hist = True, kde=True, ax=ax[0])
ax[0].set_title('Histogram of the Original Distribution', fontsize=12)
ax[0].set_xlabel('Value', fontsize=12)
ax[0].set_ylabel('Frequency', fontsize=12); # this feature has long-tail distribution

model = PowerTransformer(method='yeo-johnson', standardize=True)
model.fit(X[:,0].reshape(-1,1))
result = model.transform(X[:,0].reshape(-1,1)).reshape(-1)

# show the distribution of the entire feature
sns.distplot(result, hist = True, kde=True, ax=ax[1])
ax[1].set_title('Histogram of the Transformed Distribution', fontsize=12)
ax[1].set_xlabel('Value', fontsize=12)
ax[1].set_ylabel('Frequency', fontsize=12); # the distribution now becomes normal
fig.tight_layout()
```

1.1.3 Normalization

All the scaling methods above operate column by column. Normalization, by contrast, works on each row: it scales each sample so that it has unit norm. Because it works row by row, it distorts the relationships between features and is not commonly used. However, normalization is very useful in text classification and clustering contexts.

Assume that X[i][j] represents the value of feature j in sample i.

L1 normalization formula: X'[i][j] = X[i][j] / Σ_j |X[i][j]|

L2 normalization formula: X'[i][j] = X[i][j] / √(Σ_j X[i][j]²)

L1 normalization:

```
from sklearn.preprocessing import Normalizer

# Normalizer performs operation on each row independently
# So train set and test set are processed independently

###### for L1 Norm
sample_columns = X[0:2,0:3] # select the first two samples, and the first three features
# return array([[ 8.3252, 41., 6.98412698],
# [ 8.3014 , 21.,  6.23813708]])

model = Normalizer(norm='l1')
# use L1 Norm to normalize each sample

model.fit(sample_columns)

result = model.transform(sample_columns) # test set are processed similarly
# return array([[0.14784762, 0.72812094, 0.12403144],
# [0.23358211, 0.59089121, 0.17552668]])
# result = sample_columns/np.sum(np.abs(sample_columns), axis=1).reshape(-1,1)
```

L2 normalization:

```
###### for L2 Norm
sample_columns = X[0:2,0:3] # select the first two samples, and the first three features
# return array([[ 8.3252, 41., 6.98412698],
# [ 8.3014 , 21.,  6.23813708]])

model = Normalizer(norm='l2')
# use L2 Norm to normalize each sample

model.fit(sample_columns)

result = model.transform(sample_columns)
# return array([[0.19627663, 0.96662445, 0.16465922],
# [0.35435076, 0.89639892, 0.26627902]])
# result = sample_columns/np.sqrt(np.sum(sample_columns**2, axis=1)).reshape(-1,1)
```
```
# visualize the difference in the distribution after Normalization
# compare it with the distribution after RobustScaling
# fit and transform the entire first & second feature

import seaborn as sns
import matplotlib.pyplot as plt

# RobustScaler
fig, ax = plt.subplots(2,1, figsize = (13,9))

model = RobustScaler(with_centering = True, with_scaling = True,
quantile_range = (25.0, 75.0))
model.fit(X[:,0:2])
result = model.transform(X[:,0:2])

sns.scatterplot(x=result[:,0], y=result[:,1], ax=ax[0])
ax[0].set_title('Scatter Plot of RobustScaling result', fontsize=12)
ax[0].set_xlabel('Feature 1', fontsize=12)
ax[0].set_ylabel('Feature 2', fontsize=12);

model = Normalizer(norm='l2')

model.fit(X[:,0:2])
result = model.transform(X[:,0:2])

sns.scatterplot(x=result[:,0], y=result[:,1], ax=ax[1])
ax[1].set_title('Scatter Plot of Normalization result', fontsize=12)
ax[1].set_xlabel('Feature 1', fontsize=12)
ax[1].set_ylabel('Feature 2', fontsize=12);
fig.tight_layout()  # Normalization distorts the original distribution
```

1.1.4 Missing Value Imputation

In practice, values may be missing from the dataset. However, such datasets are incompatible with most scikit-learn models, which assume that all features are numeric and have no missing values. So before applying a scikit-learn model, we need to impute the missing values.

However, some newer models, such as XGBoost, LightGBM, and CatBoost, implemented in other packages, natively support missing values in the dataset. When applying these models, we no longer need to fill in the missing values.

1.1.4.1 Univariate Feature Imputation

Assuming that there are missing values in column i, we can impute them with a constant or with statistics of column i (mean, median, or mode).

```
import numpy as np
from sklearn.impute import SimpleImputer

test_set = X[0:10,0].copy() # no missing values
# return array([8.3252, 8.3014, 7.2574, 5.6431, 3.8462, 4.0368, 3.6591, 3.12, 2.0804, 3.6912])

# manually create some missing values
test_set[3] = np.nan
test_set[6] = np.nan
# now test_set becomes
# array([8.3252, 8.3014, 7.2574,    nan, 3.8462, 4.0368,    nan, 3.12 ,2.0804, 3.6912])

# create the train set
# in real-world settings, we fit the imputer on the train set and transform the test set
train_set = X[10:,0].copy()
train_set[3] = np.nan
train_set[6] = np.nan

imputer = SimpleImputer(missing_values=np.nan, strategy='mean') # use mean
# we can set the strategy to 'mean', 'median', 'most_frequent', 'constant'
imputer.fit(train_set.reshape(-1,1))
result = imputer.transform(test_set.reshape(-1,1)).reshape(-1)
# return array([8.3252    , 8.3014    , 7.2574    , 3.87023658, 3.8462    ,
# 4.0368    , 3.87023658, 3.12      , 2.0804    , 3.6912    ])
# all missing values are imputed with 3.87023658
# 3.87023658 = np.nanmean(train_set)
# which is the mean of the trainset ignoring missing values
```

1.1.4.2 Multivariate Feature Imputation

Multivariate feature imputation uses information from the entire dataset to estimate and impute missing values. In scikit-learn it is implemented iteratively, in a round-robin fashion.

In each step, one feature column is designated as the output y and the other feature columns are treated as inputs X. A regressor is fit on the known (X, y) pairs and then used to predict the missing values of y. This is done iteratively for each feature, repeated for the maximum number of imputation rounds.

Using a linear model (for example, Bayesian Ridge):

```
from sklearn.experimental import enable_iterative_imputer # enables IterativeImputer
from sklearn.impute import IterativeImputer
from sklearn.linear_model import BayesianRidge

test_set = X[0:10,:].copy() # no missing values, select all features
# the first columns is
# array([8.3252, 8.3014, 7.2574, 5.6431, 3.8462, 4.0368, 3.6591, 3.12, 2.0804, 3.6912])

# manually create some missing values
test_set[3,0] = np.nan
test_set[6,0] = np.nan
test_set[3,1] = np.nan
# now the first feature becomes
# array([8.3252, 8.3014, 7.2574,    nan, 3.8462, 4.0368,    nan, 3.12 ,2.0804, 3.6912])

# create the train set
# in real-world settings, we fit the imputer on the train set and transform the test set
train_set = X[10:,:].copy()
train_set[3,0] = np.nan
train_set[6,0] = np.nan
train_set[3,1] = np.nan

impute_estimator = BayesianRidge()
imputer = IterativeImputer(max_iter = 10,
random_state = 0,
estimator = impute_estimator)

imputer.fit(train_set)
result = imputer.transform(test_set)[:,0] # only select the first column to reveal how it works
# return array([8.3252    , 8.3014    , 7.2574    , 4.6237195 , 3.8462    ,
# 4.0368    , 4.00258149, 3.12      , 2.0804    , 3.6912    ])
```

Use a tree-based model (for example, ExtraTrees):

```
from sklearn.experimental import enable_iterative_imputer # enables IterativeImputer
from sklearn.impute import IterativeImputer
from sklearn.ensemble import ExtraTreesRegressor

test_set = X[0:10,:].copy() # no missing values, select all features
# the first columns is
# array([8.3252, 8.3014, 7.2574, 5.6431, 3.8462, 4.0368, 3.6591, 3.12, 2.0804, 3.6912])

# manually create some missing values
test_set[3,0] = np.nan
test_set[6,0] = np.nan
test_set[3,1] = np.nan
# now the first feature becomes
# array([8.3252, 8.3014, 7.2574,    nan, 3.8462, 4.0368,    nan, 3.12 ,2.0804, 3.6912])

# create the train set
# in real-world settings, we fit the imputer on the train set and transform the test set
train_set = X[10:,:].copy()
train_set[3,0] = np.nan
train_set[6,0] = np.nan
train_set[3,1] = np.nan

impute_estimator = ExtraTreesRegressor(n_estimators=10, random_state=0)
# parameters can be tuned via CV through the sklearn pipeline
imputer = IterativeImputer(max_iter = 10,
random_state = 0,
estimator = impute_estimator)

imputer.fit(train_set)
result = imputer.transform(test_set)[:,0] # only select the first column to reveal how it works
# return array([8.3252 , 8.3014 , 7.2574 , 4.63813, 3.8462 , 4.0368 , 3.24721,
# 3.12   , 2.0804 , 3.6912 ])
```

Using K Nearest Neighbor (KNN):

```
from sklearn.experimental import enable_iterative_imputer # enables IterativeImputer
from sklearn.impute import IterativeImputer
from sklearn.neighbors import KNeighborsRegressor

test_set = X[0:10,:].copy() # no missing values, select all features
# the first columns is
# array([8.3252, 8.3014, 7.2574, 5.6431, 3.8462, 4.0368, 3.6591, 3.12, 2.0804, 3.6912])

# manually create some missing values
test_set[3,0] = np.nan
test_set[6,0] = np.nan
test_set[3,1] = np.nan
# now the first feature becomes
# array([8.3252, 8.3014, 7.2574,    nan, 3.8462, 4.0368,    nan, 3.12 ,2.0804, 3.6912])

# create the train set
# in real-world settings, we fit the imputer on the train set and transform the test set
train_set = X[10:,:].copy()
train_set[3,0] = np.nan
train_set[6,0] = np.nan
train_set[3,1] = np.nan

impute_estimator = KNeighborsRegressor(n_neighbors=10,
                                       p = 1)  # set p=1 to use manhattan distance
# use manhattan distance to reduce the effect of outliers

# parameters can be tuned via CV through the sklearn pipeline
imputer = IterativeImputer(max_iter = 10,
random_state = 0,
estimator = impute_estimator)

imputer.fit(train_set)
result = imputer.transform(test_set)[:,0] # only select the first column to reveal how it works
# return array([8.3252, 8.3014, 7.2574, 3.6978, 3.8462, 4.0368, 4.052 , 3.12  ,
# 2.0804, 3.6912])
```

1.1.4.3 Missing Value Indicator

Sometimes the fact that a value is missing may itself be informative. Therefore, scikit-learn also provides MissingIndicator, which converts a dataset with missing values into a binary matrix indicating where missing values occur.

```
import numpy as np
from sklearn.impute import MissingIndicator

# illustrate this function on the train set only
# since the process is independent for the train set and the test set
train_set = X[10:,:].copy() # select all features
train_set[3,0] = np.nan # manually create some missing values
train_set[6,0] = np.nan
train_set[3,1] = np.nan

indicator = MissingIndicator(missing_values=np.nan, features='all')
# show the results on all the features
result = indicator.fit_transform(train_set) # result have the same shape with train_set
# contains only True & False, True corresponds with missing value

result[:,0].sum() # should return 2, the first column has two missing values
result[:,1].sum(); # should return 1, the second column has one missing value
```

1.1.5 Feature Transformation

1.1.5.1 Polynomial Transformation

Sometimes we want to introduce non-linear features to increase the complexity of the model. For simple linear models, this greatly increases expressiveness. However, more complex models, such as tree-based models, already capture non-linear relationships through their non-parametric tree structure, so this feature transformation may not help tree-based models much.

For example, setting the degree to 3 on two features (X1, X2) produces the terms: 1, X1, X2, X1², X1·X2, X2², X1³, X1²·X2, X1·X2², X2³.

```
import numpy as np
from sklearn.preprocessing import PolynomialFeatures

# illustrate this function on one synthesized sample
train_set = np.array([2,3]).reshape(1,-1) # shape (1,2)
# return array([[2, 3]])

poly = PolynomialFeatures(degree = 3, interaction_only = False)
# the highest degree is set to 3, and we want more than just interaction terms

result = poly.fit_transform(train_set) # have shape (1, 10)
# array([[ 1.,  2.,  3.,  4.,  6.,  9.,  8., 12., 18., 27.]])
```

1.1.5.2 Custom Transformations

```
import numpy as np
from sklearn.preprocessing import FunctionTransformer

# illustrate this function on one synthesized sample
train_set = np.array([2,3]).reshape(1,-1) # shape (1,2)
# return array([[2, 3]])

transformer = FunctionTransformer(func = np.log1p, validate=True)
# perform log transformation, X' = log(1 + x)
# func can be any numpy function such as np.exp
result = transformer.transform(train_set)
# return array([[1.09861229, 1.38629436]]), the same as np.log1p(train_set)
```

That concludes the introduction to data preprocessing for static continuous variables. Readers are encouraged to run the code in Jupyter while following along.
