Machine learning study notes [continuously updated] - TensorFlow linear regression

Import necessary libraries
Load the dataset and check the data
Build the model
Tune model hyperparameters

TensorFlow linear regression

Steps for building a linear model with TensorFlow

Import necessary libraries

from __future__ import print_function

import math

from IPython import display
from matplotlib import cm
from matplotlib import gridspec
from matplotlib import pyplot as plt
import numpy as np
import pandas as pd
from sklearn import metrics
import tensorflow as tf
from tensorflow.python.data import Dataset

tf.logging.set_verbosity(tf.logging.ERROR)
pd.options.display.max_rows = 10
pd.options.display.float_format = '{:.1f}'.format

Load the dataset and check the data

california_housing_dataframe = pd.read_csv("https://download.mlcc.google.cn/mledu-datasets/california_housing_train.csv", sep=",")

california_housing_dataframe = california_housing_dataframe.reindex(
    np.random.permutation(california_housing_dataframe.index))
california_housing_dataframe["median_house_value"] /= 1000.0
california_housing_dataframe

california_housing_dataframe.describe()

Build the model

To train the model, we will use the LinearRegressor interface provided by the TensorFlow Estimator API. This API takes care of a great deal of low-level model plumbing and exposes convenient methods for performing model training, evaluation, and inference.

Step 1: define features and configure feature columns

In TensorFlow, we use a structure called "feature column" to represent the data type of a feature. Feature columns only store descriptions of feature data; they do not contain the feature data itself.
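
As a small aside (not part of the original exercise), you can see that a feature column holds only a description by printing one; the output is just metadata such as the key, shape, and dtype, with no data attached:

# Aside: a feature column is only a description of the feature, not the data.
example_column = tf.feature_column.numeric_column("total_rooms")
print(example_column)
# Prints metadata along the lines of:
# _NumericColumn(key='total_rooms', shape=(1,), default_value=None,
#                dtype=tf.float32, normalizer_fn=None)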

To start, we'll use just one numeric input feature, total_rooms. The following code pulls the total_rooms data from california_housing_dataframe and defines the feature column using numeric_column, which specifies that its data is numeric:

# Define the input feature: total_rooms.
my_feature = california_housing_dataframe[["total_rooms"]]

# Configure a numeric feature column for total_rooms.
feature_columns = [tf.feature_column.numeric_column("total_rooms")]

Step 2: define the target

Next, we'll define our target, which is median_house_value. Again, we can extract it from california_housing_dataframe:

# Define the label.
targets = california_housing_dataframe["median_house_value"]

Step 3: configure LinearRegressor

Next, we'll configure a linear regression model using LinearRegressor, and train it with GradientDescentOptimizer, which implements mini-batch stochastic gradient descent (SGD). The learning_rate argument controls the size of the gradient step.
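
For reference, one SGD step updates each weight in the direction opposite the loss gradient, scaled by the learning rate:

$$w \leftarrow w - \eta \, \nabla_w L(w)$$

where $\eta$ is the learning rate and $L(w)$ is the loss; a larger $\eta$ takes bigger steps.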

Note: to be safe, we also apply gradient clipping to our optimizer via clip_gradients_by_norm. Gradient clipping ensures that the magnitude of the gradients does not become too large during training; excessively large gradients can cause gradient descent to fail.

# Use gradient descent as the optimizer for training the model.
my_optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.0000001)
my_optimizer = tf.contrib.estimator.clip_gradients_by_norm(my_optimizer, 5.0)

# Configure the linear regression model with our feature columns and optimizer.
# Set a learning rate of 0.0000001 for Gradient Descent.
linear_regressor = tf.estimator.LinearRegressor(
    feature_columns=feature_columns,
    optimizer=my_optimizer
)

Step 4: define the input function

To import our California housing data into LinearRegressor, we need to define an input function that instructs TensorFlow how to preprocess the data, as well as how to batch, shuffle, and repeat it during model training.

First, we convert the pandas feature data into a dict of NumPy arrays. We can then use the TensorFlow Dataset API to construct a Dataset object from our data, break it into batches of batch_size, and repeat it for the specified number of epochs (num_epochs).

Note: when the default value num_epochs=None is passed to repeat(), the input data will be repeated indefinitely.

Next, if shuffle is set to True, we shuffle the data so that it is passed to the model randomly during training. The buffer_size argument specifies the size of the dataset from which shuffle will randomly sample.

Finally, the input function builds an iterator for the dataset and returns the next batch of data to the LinearRegressor.

def my_input_fn(features, targets, batch_size=1, shuffle=True, num_epochs=None):
    """Trains a linear regression model of one feature.

    Args:
      features: pandas DataFrame of features
      targets: pandas DataFrame of targets
      batch_size: Size of batches to be passed to the model
      shuffle: True or False. Whether to shuffle the data.
      num_epochs: Number of epochs for which data should be repeated.
        None = repeat indefinitely
    Returns:
      Tuple of (features, labels) for next data batch
    """

    # Convert pandas data into a dict of np arrays.
    features = {key: np.array(value) for key, value in dict(features).items()}

    # Construct a dataset, and configure batching/repeating.
    ds = Dataset.from_tensor_slices((features, targets))  # warning: 2GB limit
    ds = ds.batch(batch_size).repeat(num_epochs)

    # Shuffle the data, if specified.
    if shuffle:
        ds = ds.shuffle(buffer_size=10000)

    # Return the next batch of data.
    features, labels = ds.make_one_shot_iterator().get_next()
    return features, labels

Step 5: train the model

We can now train the model by calling train() on linear_regressor. We'll wrap my_input_fn in a lambda so that we can pass in my_feature and targets as arguments (see the TensorFlow input-function tutorial for details). To start, we'll train for 100 steps.

_ = linear_regressor.train(
    input_fn=lambda: my_input_fn(my_feature, targets),
    steps=100
)

Step 6: evaluate the model

Let's make predictions on the training data to see how well our model fit it during training.

Note: training error measures how well your model fits the training data, but it does not measure how well the model generalizes to new data. In a later exercise, you'll explore how to split the data to assess the model's ability to generalize.
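
For a quick numeric check (a sketch using only the pieces defined above), we can build an input function for prediction that makes a single, unshuffled pass over the data, call predict(), and compute the root mean squared error with sklearn:

# Create an input function for predictions: one pass, no shuffling or repeating.
prediction_input_fn = lambda: my_input_fn(my_feature, targets, num_epochs=1, shuffle=False)

# Call predict() on the linear_regressor to make predictions.
predictions = linear_regressor.predict(input_fn=prediction_input_fn)

# Format predictions as a NumPy array, so we can calculate error metrics.
predictions = np.array([item['predictions'][0] for item in predictions])

# Print mean squared error and root mean squared error.
mean_squared_error = metrics.mean_squared_error(predictions, targets)
root_mean_squared_error = math.sqrt(mean_squared_error)
print("Mean Squared Error (on training data): %0.3f" % mean_squared_error)
print("Root Mean Squared Error (on training data): %0.3f" % root_mean_squared_error)

We can also visualize the fit by plotting the learned line over a sample of the data: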

sample = california_housing_dataframe.sample(n=300)

# Get the min and max total_rooms values.
x_0 = sample["total_rooms"].min()
x_1 = sample["total_rooms"].max()

# Retrieve the final weight and bias generated during training.
weight = linear_regressor.get_variable_value('linear/linear_model/total_rooms/weights')[0]
bias = linear_regressor.get_variable_value('linear/linear_model/bias_weights')

# Get the predicted median_house_values for the min and max total_rooms values.
y_0 = weight * x_0 + bias
y_1 = weight * x_1 + bias

# Plot our regression line from (x_0, y_0) to (x_1, y_1).
plt.plot([x_0, x_1], [y_0, y_1], c='r')

# Label the graph axes.
plt.ylabel("median_house_value")
plt.xlabel("total_rooms")

# Plot a scatter plot from our data sample.
plt.scatter(sample["total_rooms"], sample["median_house_value"])

# Display graph.
plt.show()

Tune model hyperparameters

For this exercise, we'll wrap all of the above code in a single function and divide training into 10 evenly spaced periods, so that we can observe the model's improvement in each period.

For each period, we'll compute and plot the training loss. This can help you judge when the model has converged, or whether it needs more iterations.
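
Concretely, the loss reported each period is the root mean squared error over the training set:

$$\mathrm{RMSE} = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(\hat{y}_i - y_i\right)^2}$$

where $\hat{y}_i$ is the model's prediction for example $i$ and $y_i$ is the corresponding target.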

We'll also plot the feature weight and bias values learned by the model over time. This is another way to see the model converge.

def train_model(learning_rate, steps, batch_size, input_feature="total_rooms"):
    """Trains a linear regression model of one feature.

    Args:
      learning_rate: A `float`, the learning rate.
      steps: A non-zero `int`, the total number of training steps. A training step
        consists of a forward and backward pass using a single batch.
      batch_size: A non-zero `int`, the batch size.
      input_feature: A `string` specifying a column from `california_housing_dataframe`
        to use as input feature.
    """
    periods = 10
    steps_per_period = steps / periods

    my_feature = input_feature
    my_feature_data = california_housing_dataframe[[my_feature]]
    my_label = "median_house_value"
    targets = california_housing_dataframe[my_label]

    # Create feature columns.
    feature_columns = [tf.feature_column.numeric_column(my_feature)]

    # Create input functions.
    training_input_fn = lambda: my_input_fn(my_feature_data, targets, batch_size=batch_size)
    prediction_input_fn = lambda: my_input_fn(my_feature_data, targets, num_epochs=1, shuffle=False)

    # Create a linear regressor object.
    my_optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate)
    my_optimizer = tf.contrib.estimator.clip_gradients_by_norm(my_optimizer, 5.0)
    linear_regressor = tf.estimator.LinearRegressor(
        feature_columns=feature_columns,
        optimizer=my_optimizer
    )

    # Set up to plot the state of our model's line each period.
    plt.figure(figsize=(15, 6))
    plt.subplot(1, 2, 1)
    plt.title("Learned Line by Period")
    plt.ylabel(my_label)
    plt.xlabel(my_feature)
    sample = california_housing_dataframe.sample(n=300)
    plt.scatter(sample[my_feature], sample[my_label])
    colors = [cm.coolwarm(x) for x in np.linspace(-1, 1, periods)]

    # Train the model, but do so inside a loop so that we can periodically assess
    # loss metrics.
    print("Training model...")
    print("RMSE (on training data):")
    root_mean_squared_errors = []
    for period in range(0, periods):
        # Train the model, starting from the prior state.
        linear_regressor.train(
            input_fn=training_input_fn,
            steps=steps_per_period
        )
        # Take a break and compute predictions.
        predictions = linear_regressor.predict(input_fn=prediction_input_fn)
        predictions = np.array([item['predictions'][0] for item in predictions])

        # Compute loss.
        root_mean_squared_error = math.sqrt(
            metrics.mean_squared_error(predictions, targets))
        # Occasionally print the current loss.
        print("  period %02d : %0.2f" % (period, root_mean_squared_error))
        # Add the loss metrics from this period to our list.
        root_mean_squared_errors.append(root_mean_squared_error)
        # Finally, track the weights and biases over time.
        # Apply some math to ensure that the data and line are plotted neatly.
        y_extents = np.array([0, sample[my_label].max()])

        weight = linear_regressor.get_variable_value('linear/linear_model/%s/weights' % input_feature)[0]
        bias = linear_regressor.get_variable_value('linear/linear_model/bias_weights')

        x_extents = (y_extents - bias) / weight
        x_extents = np.maximum(np.minimum(x_extents, sample[my_feature].max()),
                               sample[my_feature].min())
        y_extents = weight * x_extents + bias
        plt.plot(x_extents, y_extents, color=colors[period])
    print("Model training finished.")

    # Output a graph of loss metrics over periods.
    plt.subplot(1, 2, 2)
    plt.ylabel('RMSE')
    plt.xlabel('Periods')
    plt.title("Root Mean Squared Error vs. Periods")
    plt.tight_layout()
    plt.plot(root_mean_squared_errors)

    # Output a table with calibration data.
    calibration_data = pd.DataFrame()
    calibration_data["predictions"] = pd.Series(predictions)
    calibration_data["targets"] = pd.Series(targets)
    display.display(calibration_data.describe())

    print("Final RMSE (on training data): %0.2f" % root_mean_squared_error)
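
A call might look like the following; these hyperparameter values are illustrative starting points rather than tuned results, so experiment with them and watch how the RMSE curve responds:

# Illustrative hyperparameters, not tuned values: each call builds and trains
# a fresh LinearRegressor, so different runs can be compared directly.
train_model(
    learning_rate=0.00002,
    steps=500,
    batch_size=5
)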