Training on the cats and dogs dataset (and retraining with data augmentation)

Contents

1, Required environment (installation with link)
2, Dataset preparation
3, Network model
4, Data preprocessing
5, Training
6, Data augmentation

1, Required environment (installation with link)

TensorFlow and Keras; which versions to install depends on your own needs.
The installation links are as follows:
https://blog.csdn.net/qq_41760767/article/details/97441967?utm_medium=distribute.pc_relevant.none-task-blog-BlogCommendFromMachineLearnPai2-1.nonecase&depth_1-utm_source=distribute.pc_relevant.none-task-blog-BlogCommendFromMachineLearnPai2-1.nonecase
Check the keras version after installation
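For example, a quick way to check (the exact version numbers on your machine will differ):

import tensorflow as tf
import keras

# Print the installed versions as a quick sanity check
print('tensorflow:', tf.__version__)
print('keras:', keras.__version__)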

2, Dataset preparation

Download the train image dataset, create a subdirectory named "data" under the home directory, and copy the downloaded train folder into it. The code below then builds the smaller train/validation/test directory structure inside that directory:

import os, shutil
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
from IPython.display import Image

# Current working directory (where the notebook is running)
root_dir = os.getcwd()
# Directory where the datasets are stored
data_path = os.path.join(root_dir, 'data')

# Original dataset directory
original_dataset_dir = os.path.join(data_path, 'train')

# Directory where the smaller dataset will be stored
base_dir = os.path.join(data_path, 'cats_and_dogs_small')
if not os.path.exists(base_dir):
    os.mkdir(base_dir)

# Training image directory
train_dir = os.path.join(base_dir, 'train')
if not os.path.exists(train_dir):
    os.mkdir(train_dir)

# Validation image directory
validation_dir = os.path.join(base_dir, 'validation')
if not os.path.exists(validation_dir):
    os.mkdir(validation_dir)

# Test image directory
test_dir = os.path.join(base_dir, 'test')
if not os.path.exists(test_dir):
    os.mkdir(test_dir)

# Directory with training cat pictures
train_cats_dir = os.path.join(train_dir, 'cats')
if not os.path.exists(train_cats_dir):
    os.mkdir(train_cats_dir)

# Directory with training dog pictures
train_dogs_dir = os.path.join(train_dir, 'dogs')
if not os.path.exists(train_dogs_dir):
    os.mkdir(train_dogs_dir)

# Directory with validation cat pictures
validation_cats_dir = os.path.join(validation_dir, 'cats')
if not os.path.exists(validation_cats_dir):
    os.mkdir(validation_cats_dir)

# Directory with validation dog pictures
validation_dogs_dir = os.path.join(validation_dir, 'dogs')
if not os.path.exists(validation_dogs_dir):
    os.mkdir(validation_dogs_dir)

# Directory with test cat pictures
test_cats_dir = os.path.join(test_dir, 'cats')
if not os.path.exists(test_cats_dir):
    os.mkdir(test_cats_dir)

# Directory with test dog pictures
test_dogs_dir = os.path.join(test_dir, 'dogs')
if not os.path.exists(test_dogs_dir):
    os.mkdir(test_dogs_dir)

# Copy the first 600 cat images to train_cats_dir
fnames = ['cat.{}.jpg'.format(i) for i in range(600)]
for fname in fnames:
    src = os.path.join(original_dataset_dir, fname)
    dst = os.path.join(train_cats_dir, fname)
    if not os.path.exists(dst):
        shutil.copyfile(src, dst)
print('Copy first 600 cat images to train_cats_dir complete!')

# Copy 400 cat images to validation_cats_dir
fnames = ['cat.{}.jpg'.format(i) for i in range(1000, 1400)]
for fname in fnames:
    src = os.path.join(original_dataset_dir, fname)
    dst = os.path.join(validation_cats_dir, fname)
    if not os.path.exists(dst):
        shutil.copyfile(src, dst)
print('Copy next 400 cat images to validation_cats_dir complete!')

# Copy 400 cat images to test_cats_dir
fnames = ['cat.{}.jpg'.format(i) for i in range(1500, 1900)]
for fname in fnames:
    src = os.path.join(original_dataset_dir, fname)
    dst = os.path.join(test_cats_dir, fname)
    if not os.path.exists(dst):
        shutil.copyfile(src, dst)
print('Copy next 400 cat images to test_cats_dir complete!')

# Copy the first 600 dog images to train_dogs_dir
fnames = ['dog.{}.jpg'.format(i) for i in range(600)]
for fname in fnames:
    src = os.path.join(original_dataset_dir, fname)
    dst = os.path.join(train_dogs_dir, fname)
    if not os.path.exists(dst):
        shutil.copyfile(src, dst)
print('Copy first 600 dog images to train_dogs_dir complete!')

# Copy 400 dog images to validation_dogs_dir
fnames = ['dog.{}.jpg'.format(i) for i in range(1000, 1400)]
for fname in fnames:
    src = os.path.join(original_dataset_dir, fname)
    dst = os.path.join(validation_dogs_dir, fname)
    if not os.path.exists(dst):
        shutil.copyfile(src, dst)
print('Copy next 400 dog images to validation_dogs_dir complete!')

# Copy 400 dog images to test_dogs_dir
fnames = ['dog.{}.jpg'.format(i) for i in range(1500, 1900)]
for fname in fnames:
    src = os.path.join(original_dataset_dir, fname)
    dst = os.path.join(test_dogs_dir, fname)
    if not os.path.exists(dst):
        shutil.copyfile(src, dst)
print('Copy next 400 dog images to test_dogs_dir complete!')

As a sanity check, count how many pictures end up in each split (training / validation / test):

print('total training cat images:', len(os.listdir(train_cats_dir)))
print('total training dog images:', len(os.listdir(train_dogs_dir)))
print('total validation cat images:', len(os.listdir(validation_cats_dir)))
print('total validation dog images:', len(os.listdir(validation_dogs_dir)))
print('total test cat images:', len(os.listdir(test_cats_dir)))
print('total test dog images:', len(os.listdir(test_dogs_dir)))


That gives 1,200 training images, 800 validation images, and 800 test images in total.

3, Network model

The convolutional network (convnet) will be a stack of alternating Conv2D (with ReLU activation) and MaxPooling2D layers. Starting from inputs of size 150x150 (a somewhat arbitrary choice), we end up with feature maps of size 7x7 just before the Flatten layer.

Note that the depth of the feature maps progressively increases through the network (from 32 to 128), while their spatial size decreases (from 148x148 to 7x7). This is a pattern you will see in almost all convnet architectures.

Because we are dealing with binary classification, we end the network with a single neuron (a Dense layer of size 1) and a sigmoid activation. This neuron encodes the probability of the image belonging to one class or the other.

from keras import layers
from keras import models
from keras.utils import plot_model

model = models.Sequential()
model.add(layers.Conv2D(32, (3, 3), activation='relu', input_shape=(150, 150, 3)))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(128, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(128, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Flatten())
model.add(layers.Dense(512, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))

Let's take a look at how the dimensions of the feature maps change with every successive layer:

model.summary()

from keras import optimizers

model.compile(loss='binary_crossentropy',
              optimizer=optimizers.RMSprop(lr=1e-4),
              metrics=['acc'])
4, Data preprocessing

As you already know by now, data should be formatted into appropriately pre-processed floating point tensors before being fed into our network. Currently, our data sits on a drive as JPEG files, so the steps for getting it into the network are roughly the following (a rough manual sketch of these steps for a single image follows the list):

Read the picture files.
Decode the JPEG content to RGB grids of pixels.
Convert these into floating point tensors.
Rescale the pixel values (between 0 and 255) to the [0, 1] interval (as you know, neural networks prefer to deal with small input values).
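As a rough, hypothetical sketch, doing these steps by hand for a single image might look like this (using cat.0.jpg, which was copied into train_cats_dir above); it is only meant to illustrate what the generators set up below do for us automatically:

import os
from keras.preprocessing import image

# Illustration only: read, decode, resize, convert, and rescale one image
img_path = os.path.join(train_cats_dir, 'cat.0.jpg')

img = image.load_img(img_path, target_size=(150, 150))  # read + decode + resize
x = image.img_to_array(img)                              # float tensor of shape (150, 150, 3)
x = x / 255.0                                            # rescale pixel values to [0, 1]

print(x.shape, x.dtype, x.min(), x.max())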

It may seem a bit daunting, but thankfully Keras has utilities to take care of these steps automatically. Keras has a module with image processing helper tools, located at keras.preprocessing.image. In particular, it contains the class ImageDataGenerator, which lets you quickly set up Python generators that automatically turn image files on disk into batches of pre-processed tensors. This is what we will use here.

from keras.preprocessing.image import ImageDataGenerator

# All images will be rescaled by 1./255
train_datagen = ImageDataGenerator(rescale=1./255)
test_datagen = ImageDataGenerator(rescale=1./255)

train_generator = train_datagen.flow_from_directory(
        # This is the target directory
        train_dir,
        # All images will be resized to 150x150
        target_size=(150, 150),
        batch_size=20,
        # Since we use binary_crossentropy loss, we need binary labels
        class_mode='binary')

validation_generator = test_datagen.flow_from_directory(
        validation_dir,
        target_size=(150, 150),
        batch_size=20,
        class_mode='binary')


Let's take a look at the output of one of these generators: it yields batches of 150x150 RGB images (shape (20, 150, 150, 3)) and binary labels (shape (20,)). 20 is the number of samples in each batch (the batch size). Note that the generator yields these batches indefinitely: it just loops endlessly over the images present in the target folder. For this reason, we need to break the iteration loop at some point:

for data_batch, labels_batch in train_generator:
    print('data batch shape:', data_batch.shape)
    print('labels batch shape:', labels_batch.shape)
    break

5, Training

Let's fit our model to the data using the generator. We do it with the fit_generator method, the equivalent of fit for data generators like ours. It expects as its first argument a Python generator that will yield batches of inputs and targets indefinitely, like ours does. Because the data is generated endlessly, the process needs to know how many batches to draw from the generator before declaring an epoch over. This is the role of the steps_per_epoch argument: after drawing steps_per_epoch batches from the generator, i.e. after running steps_per_epoch gradient descent steps, the fitting process moves on to the next epoch. In our case batches contain 20 samples, so it takes 100 batches to reach the 2,000 samples drawn per epoch (note that our training set here holds only 1,200 images, so the generator will cycle through some of them more than once within an epoch).

When using fit_generator, you may pass a validation_data argument, much as with the fit method. Importantly, this argument is allowed to be a data generator itself, but it could also be a tuple of Numpy arrays. If you pass a generator as validation_data, then this generator is expected to yield batches of validation data endlessly, and you should therefore also specify the validation_steps argument, which tells the process how many batches to draw from the validation generator for evaluation.
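As a quick hypothetical illustration of the tuple form (not used in the rest of this post), one could collect a few batches from the validation generator into Numpy arrays and pass them directly, in which case validation_steps is not needed:

import numpy as np

# Collect 10 batches of 20 samples each (200 validation samples) into arrays
x_batches, y_batches = [], []
for i, (x_batch, y_batch) in enumerate(validation_generator):
    x_batches.append(x_batch)
    y_batches.append(y_batch)
    if i == 9:
        break
x_val = np.concatenate(x_batches)
y_val = np.concatenate(y_batches)

# history = model.fit_generator(
#     train_generator,
#     steps_per_epoch=100,
#     epochs=30,
#     validation_data=(x_val, y_val))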

history = model.fit_generator(
      train_generator,
      steps_per_epoch=100,
      epochs=30,
      validation_data=validation_generator,
      validation_steps=50)


The training process takes a while; how long depends mainly on your machine's GPU.
Don't forget to save the model once training finishes:

model.save('cats_and_dogs_small_1.h5')
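If you need to reload the saved model later (for evaluation, further training, or visualization), something along these lines should work:

from keras.models import load_model

# Reload the model from the file we just saved
model = load_model('cats_and_dogs_small_1.h5')
model.summary()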

Let's plot the loss and accuracy of the model on the training and validation data over the course of training:

import matplotlib.pyplot as plt

acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']

epochs = range(len(acc))

plt.plot(epochs, acc, label='Training acc')
plt.plot(epochs, val_acc, label='Validation acc')
plt.title('Training and validation accuracy')
plt.legend()

plt.figure()

plt.plot(epochs, loss, label='Training loss')
plt.plot(epochs, val_loss, label='Validation loss')
plt.title('Training and validation loss')
plt.legend()

plt.show()


These plots are characteristic of overfitting. Our training accuracy increases linearly over time, until it reaches nearly 100%, while our validation accuracy stalls at 70-72%. Our validation loss reaches its minimum after only five epochs then stalls, while the training loss keeps decreasing linearly until it reaches nearly 0.

Because we only have relatively few training samples (2000), overfitting is going to be our number one concern. You already know about a number of techniques that can help mitigate overfitting, such as dropout and weight decay (L2 regularization). We are now going to introduce a new one, specific to computer vision, and used almost universally when processing images with deep learning models: data augmentation.
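Dropout is added to the model later in this post; as a quick hypothetical illustration of the other technique mentioned here, weight decay can be attached to a layer via a kernel regularizer (this is not used in the rest of the tutorial):

from keras import layers, regularizers

# Illustration only: a densely-connected layer whose weights carry an L2
# penalty (weight decay). The models in this post use dropout and data
# augmentation instead.
dense_with_l2 = layers.Dense(512,
                             activation='relu',
                             kernel_regularizer=regularizers.l2(1e-4))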

6, Data augmentation

Overfitting is caused by having too few training samples, which makes it impossible to train a model that generalizes to new data.

Given infinite data, our model would be exposed to every possible aspect of the data distribution at hand and would never overfit. Data augmentation takes the approach of generating more training data from the existing training samples, by "augmenting" them with a number of random transformations that yield believable-looking images. The goal is that at training time, our model never sees the exact same picture twice. This helps the model get exposed to more aspects of the data and generalize better.

In Keras, this can be done by configuring a number of random transformations to be applied to the images read by our ImageDataGenerator instance.

datagen = ImageDataGenerator(
      rotation_range=40,
      width_shift_range=0.2,
      height_shift_range=0.2,
      shear_range=0.2,
      zoom_range=0.2,
      horizontal_flip=True,
      fill_mode='nearest')

These are just a few of the available options (for more, see the Keras documentation). Let's quickly go over these parameters:
●rotation_range is a value in degrees (0-180), a range within which pictures are randomly rotated.
●width_shift_range and height_shift_range are ranges (as a fraction of total width or height) within which pictures are randomly translated horizontally or vertically.
●shear_range is for randomly applying shearing transformations.
●zoom_range is for randomly zooming inside pictures.
●horizontal_flip is for randomly flipping half of the images horizontally, which is relevant when there is no assumption of horizontal asymmetry (e.g. real-world pictures).
Let's take a look at some augmented images:

# This is a module with image preprocessing utilities
from keras.preprocessing import image

fnames = [os.path.join(train_cats_dir, fname) for fname in os.listdir(train_cats_dir)]

# We pick one image to "augment"
img_path = fnames[3]

# Read the image and resize it
img = image.load_img(img_path, target_size=(150, 150))

# Convert it to a Numpy array with shape (150, 150, 3)
x = image.img_to_array(img)

# Reshape it to (1, 150, 150, 3)
x = x.reshape((1,) + x.shape)

# The .flow() command below generates batches of randomly transformed images.
# It will loop indefinitely, so we need to `break` the loop at some point!
i = 0
for batch in datagen.flow(x, batch_size=1):
    plt.figure(i)
    imgplot = plt.imshow(image.array_to_img(batch[0]))
    i += 1
    if i % 4 == 0:
        break

plt.show()



If we train a new network using this data-augmentation configuration, the network will never see the same input twice. However, the inputs it sees are still heavily correlated, because they come from a small number of original images: we cannot produce new information, we can only remix existing information. As such, this may not be enough to get rid of overfitting completely. To fight overfitting further, we will also add a Dropout layer right before the densely-connected classifier.

model = models.Sequential()
model.add(layers.Conv2D(32, (3, 3), activation='relu', input_shape=(150, 150, 3)))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(128, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(128, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Flatten())
model.add(layers.Dropout(0.5))
model.add(layers.Dense(512, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))

model.compile(loss='binary_crossentropy',
              optimizer=optimizers.RMSprop(lr=1e-4),
              metrics=['acc'])

Let's train our network using data augmentation and dropout:

train_datagen = ImageDataGenerator(
        rescale=1./255,
        rotation_range=40,
        width_shift_range=0.2,
        height_shift_range=0.2,
        shear_range=0.2,
        zoom_range=0.2,
        horizontal_flip=True)

# Note that the validation data should not be augmented!
test_datagen = ImageDataGenerator(rescale=1./255)

train_generator = train_datagen.flow_from_directory(
        # This is the target directory
        train_dir,
        # All images will be resized to 150x150
        target_size=(150, 150),
        batch_size=32,
        # Since we use binary_crossentropy loss, we need binary labels
        class_mode='binary')

validation_generator = test_datagen.flow_from_directory(
        validation_dir,
        target_size=(150, 150),
        batch_size=32,
        class_mode='binary')

history = model.fit_generator(
      train_generator,
      steps_per_epoch=100,
      epochs=100,
      validation_data=validation_generator,
      validation_steps=50)


Training takes quite a bit longer this time; it is best to leave it running and come back when it is done.

Save our model, which we will use in the convnet visualization section.

model.save('cats_and_dogs_small_2.h5')

Let's take another look at the results:

acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']

epochs = range(len(acc))

plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.legend()

plt.figure()

plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()

plt.show()


Thanks to data augmentation and dropout, we are no longer overfitting: the training curves closely track the validation curves. We are now able to reach an accuracy of about 82%, a 15% relative improvement over the non-regularized model.
By leveraging regularization techniques even further, and by tuning the network's parameters (such as the number of filters per convolution layer or the number of layers in the network), we may be able to get an even better accuracy, likely up to 86-87%.
