In the previous blog, I showed how to use Keras to perform a multi-class classification task on a given dataset.

The 100% classification accuracy verified both the feasibility of the classification model and the quality of the dataset.

[Keras] Multi-class classification with a one-dimensional convolutional neural network

In this blog, I will use a slightly modified dataset to complete a linear regression task. Compared with conventional linear regression, I find that using a neural network for regression is both simpler and more accurate.

The dataset is still 247 × 900, but the 247th column now holds the real humidity value rather than a class label.

Dataset - used as regression.csv

Unlike the decision surface produced by a classification algorithm, a regression algorithm produces an optimal fitted line that approaches each point in the dataset as closely as possible.

First, the dataset is again imported and split:

```python
# Load data
df = pd.read_csv(r"C:\Users\316CJW\Desktop\Completion code\indoor_10_50_9.csv")
X = np.expand_dims(df.values[:, 0:246].astype(float), axis=2)  # add a channel dimension
Y = df.values[:, 246]

# Split into training and test sets
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.5, random_state=0)
```
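If you don't have the CSV at hand, the expected array shapes can be checked with synthetic data. This is just a sketch: the random array below stands in for the real 900 × 247 dataset (246 feature columns plus one humidity column).

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Stand-in for the real dataset: 900 samples, 246 features + 1 humidity value
values = np.random.rand(900, 247)

X = np.expand_dims(values[:, 0:246].astype(float), axis=2)  # shape (900, 246, 1)
Y = values[:, 246]                                          # shape (900,)

# test_size=0.5 splits the data evenly between training and testing
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.5, random_state=0)
print(X_train.shape, X_test.shape)  # (450, 246, 1) (450, 246, 1)
```

The extra trailing dimension added by `np.expand_dims` is what lets `Conv1D` treat each sample as a length-246 sequence with one channel.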

I won't repeat the details here; readers who want them can refer to the previous blog.

Next, we build the network model:

```python
# Define the neural network
model = Sequential()
model.add(Conv1D(16, 3, input_shape=(246, 1), activation='relu'))
model.add(Conv1D(16, 3, activation='relu'))
model.add(MaxPooling1D(3))
model.add(Conv1D(64, 3, activation='relu'))
model.add(Conv1D(64, 3, activation='relu'))
model.add(MaxPooling1D(3))
model.add(Conv1D(128, 3, activation='relu'))
model.add(Conv1D(128, 3, activation='relu'))
model.add(MaxPooling1D(3))
model.add(Conv1D(64, 3, activation='relu'))
model.add(Conv1D(64, 3, activation='relu'))
model.add(MaxPooling1D(3))
model.add(Flatten())
model.add(Dense(1, activation='linear'))
plot_model(model, to_file='./model_linear.png', show_shapes=True)
print(model.summary())
model.compile(optimizer='adam', loss='mean_squared_error', metrics=[coeff_determination])
```

To perform regression, the output layer of the network is set to a single node, which outputs the predicted humidity value for each sample:

```python
model.add(Dense(1, activation='linear'))
```

We use mean squared error (MSE) as the loss function of the output layer. MSE measures the deviation between the model's predictions and the true values; by continually reducing the loss, the network fits the true humidity values as closely as possible.
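As a quick illustration (plain NumPy, with made-up numbers rather than the blog's data), MSE is simply the average of the squared residuals:

```python
import numpy as np

y_true = np.array([0.10, 0.20, 0.30])  # hypothetical true humidity values
y_pred = np.array([0.12, 0.18, 0.33])  # hypothetical predictions

# Mean squared error: average of squared differences
mse = np.mean((y_true - y_pred) ** 2)
print(mse)  # (0.02^2 + 0.02^2 + 0.03^2) / 3 ≈ 0.000567
```

Squaring penalizes large deviations more heavily than small ones, which is why MSE is a natural fit for regression.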

The schematic diagram of the whole network model is as follows:

After much parameter tuning, we use eight Conv1D layers to extract features, adding a MaxPooling1D layer after every two Conv1D layers to retain the main features and reduce computation. Each convolutional layer uses the rectified linear unit (ReLU) as its activation function. As the MSE loss decreases, the humidity prediction from the final layer approaches the true value.
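To see why exactly four of these conv-conv-pool blocks fit the 246-step input, you can trace how the sequence length shrinks. This is a back-of-the-envelope sketch: each kernel-3 "valid" convolution removes 2 time steps, and each pool of size 3 divides the length by 3 (integer division, since the default stride equals the pool size).

```python
def conv_len(n, kernel=3):
    # output length of a 'valid' Conv1D
    return n - kernel + 1

def pool_len(n, pool=3):
    # output length of MaxPooling1D (stride defaults to the pool size)
    return n // pool

n = 246
for _ in range(4):  # four conv-conv-pool blocks
    n = pool_len(conv_len(conv_len(n)))
print(n)  # 1
```

The length goes 246 → 80 → 25 → 7 → 1, so after the last block `Flatten()` yields just 64 values (one per filter), feeding directly into the single output node.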

To regress the real humidity values more accurately, the network is noticeably deeper than the one used for classification.

To evaluate the model's accuracy during training and testing, we need to define a custom metric function.

The coefficient of determination, R², is often used in linear regression to express the percentage of the variation in the dependent variable that is explained by the regression. If R² = 1, the model predicts the target variable perfectly.

Expression: R² = SSR/SST = 1 − SSE/SST

where SST = SSR + SSE: SST (total sum of squares) measures the total variation, SSR (regression sum of squares) the variation explained by the regression, and SSE (error sum of squares) the residual variation.
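A quick NumPy sanity check of this formula (the values here are made up for illustration):

```python
import numpy as np

y_true = np.array([0.10, 0.20, 0.30, 0.40])  # hypothetical true values
y_pred = np.array([0.11, 0.19, 0.32, 0.38])  # hypothetical predictions

SS_res = np.sum((y_true - y_pred) ** 2)           # SSE: residual sum of squares
SS_tot = np.sum((y_true - np.mean(y_true)) ** 2)  # SST: total sum of squares
r2 = 1 - SS_res / SS_tot
print(r2)  # ≈ 0.98
```

An R² of 0.98 here means 98% of the variation in `y_true` is explained by the predictions.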

```python
# Custom metric: coefficient of determination (R^2)
def coeff_determination(y_true, y_pred):
    SS_res = K.sum(K.square(y_true - y_pred))
    SS_tot = K.sum(K.square(y_true - K.mean(y_true)))
    return 1 - SS_res / (SS_tot + K.epsilon())
```

and pass it to the model's compile step:

```python
model.compile(optimizer='adam', loss='mean_squared_error', metrics=[coeff_determination])
```

Here is the complete code for the whole process:

```python
# -*- coding: utf8 -*-
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from keras.models import Sequential, model_from_json
from keras.utils import np_utils, plot_model
from keras.layers import Dense, Dropout, Flatten, Conv1D, MaxPooling1D
from sklearn.model_selection import cross_val_score, train_test_split
from keras import backend as K

# Load data
df = pd.read_csv(r"C:\Users\Desktop\data set-For regression.csv")
X = np.expand_dims(df.values[:, 0:246].astype(float), axis=2)  # add a channel dimension
Y = df.values[:, 246]

# Split into training and test sets
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.5, random_state=0)

# Custom metric: coefficient of determination (R^2)
def coeff_determination(y_true, y_pred):
    SS_res = K.sum(K.square(y_true - y_pred))
    SS_tot = K.sum(K.square(y_true - K.mean(y_true)))
    return 1 - SS_res / (SS_tot + K.epsilon())

# Define the neural network
model = Sequential()
model.add(Conv1D(16, 3, input_shape=(246, 1), activation='relu'))
model.add(Conv1D(16, 3, activation='relu'))
model.add(MaxPooling1D(3))
model.add(Conv1D(64, 3, activation='relu'))
model.add(Conv1D(64, 3, activation='relu'))
model.add(MaxPooling1D(3))
model.add(Conv1D(128, 3, activation='relu'))
model.add(Conv1D(128, 3, activation='relu'))
model.add(MaxPooling1D(3))
model.add(Conv1D(64, 3, activation='relu'))
model.add(Conv1D(64, 3, activation='relu'))
model.add(MaxPooling1D(3))
model.add(Flatten())
model.add(Dense(1, activation='linear'))
plot_model(model, to_file='./model_linear.png', show_shapes=True)
print(model.summary())
model.compile(optimizer='adam', loss='mean_squared_error', metrics=[coeff_determination])

# Train the model
model.fit(X_train, Y_train, validation_data=(X_test, Y_test), epochs=40, batch_size=10)

# # Save the network structure as JSON (weights are not included)
# model_json = model.to_json()
# with open(r"C:\Users\Desktop\model.json", 'w') as json_file:
#     json_file.write(model_json)

model.save_weights('model.h5')

# # Load the saved model for prediction
# json_file = open(r"C:\Users\316CJW\Desktop\Completion code\model.json", "r")
# loaded_model_json = json_file.read()
# json_file.close()
# loaded_model = model_from_json(loaded_model_json)
# loaded_model.load_weights("model.h5")
# print("loaded model from disk")
# scores = model.evaluate(X_test, Y_test, verbose=0)
# print('%s: %.2f%%' % (model.metrics_names[1], scores[1] * 100))

# Accuracy (R^2) on the test set
scores = model.evaluate(X_test, Y_test, verbose=0)
print('%s: %.2f%%' % (model.metrics_names[1], scores[1] * 100))

# Scatter plot of predicted vs. real values
predicted = model.predict(X_test)
plt.scatter(Y_test, predicted)
x = np.linspace(0, 0.3, 100)
y = x
plt.plot(x, y, color='red', linewidth=1.0, linestyle='--', label='line')
plt.rcParams['font.sans-serif'] = ['SimHei']  # display Chinese labels correctly
plt.rcParams['axes.unicode_minus'] = False    # display the minus sign correctly
plt.legend(["y = x", "Humidity forecast"])
plt.title("Deviation between predicted and real values")
plt.xlabel('True humidity value')
plt.ylabel('Humidity forecast')
plt.savefig('test_xx.png', dpi=200, bbox_inches='tight', transparent=False)
plt.show()

# Mean error: flatten predictions so (n, 1) - (n,) does not broadcast into an (n, n) matrix
result = abs(np.mean(predicted.flatten() - Y_test))
print("The mean error of linear regression:")
print(result)
```

When evaluating the experimental results, I output the coefficient of determination and the mean deviation between the regressed humidity and the real humidity:

It can be seen that about 99% of the variation is captured, that is, nearly every point is accounted for by the regression line.

The average error is 0.0014, which is a good result.
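One caveat worth noting (my own aside, not part of the original analysis): `abs(np.mean(...))` is the absolute value of the *mean* deviation, which lets positive and negative errors cancel, so it can understate the typical error. The mean *absolute* error is usually larger, as this toy example with hypothetical residuals shows:

```python
import numpy as np

errors = np.array([0.05, -0.05, 0.03, -0.03])  # hypothetical residuals

print(abs(np.mean(errors)))     # 0.0  – signed errors cancel out
print(np.mean(np.abs(errors)))  # ≈ 0.04 – mean absolute error
```

If you want a stricter error figure, `np.mean(np.abs(predicted.flatten() - Y_test))` reports the mean absolute error instead.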

For a visual check, I also plot a scatter diagram of the predictions, with the real humidity on the x-axis and the predicted humidity on the y-axis.

As the figure shows, the predicted values lie close to the real humidity values.

In fact, these neural network approaches are all broadly similar: the machine's computation replaces much of the manual reasoning and calculation.

I hope I can communicate with you more and make progress together (● '◡ ●)!