PyTorch Deep Learning Practice, Lesson 9: Multi-class Classification with Handwritten Digit Recognition (Training + Testing), in Detail

Video link: PyTorch Deep Learning Practice (complete collection) on bilibili

Idea:

  1. Prepare dataset
  2. Design model class
  3. Construct loss function and optimizer
  4. Training and testing

 

1. Prepare dataset:

Because MNIST is provided by torchvision.datasets and is a subclass of torch.utils.data.Dataset, it can be loaded directly with the DataLoader.

  1. The data in MNIST are PIL Images, so they need to be converted into PyTorch tensors. Image data generally comes in channel-last layout (H, W, C), while the usual PyTorch layout is (C, H, W) (C is the number of channels, H the height, W the width), i.e. (H, W, C) -> (C, H, W). Use the transforms.ToTensor() method, which also scales the pixel values from 0-255 down to 0-1.
  2. To train the model better, we then standardize the values with the MNIST mean (0.1307) and standard deviation (0.3081). Use the transforms.Normalize() method.

Therefore, when loading the dataset, we complete these two steps as a transform pipeline before handing the data to the loader. The code is as follows:

# This is a multi classification problem of handwritten numeral recognition
from torchvision.datasets import MNIST
from torchvision import transforms
from torch.utils.data import DataLoader
import torch
import torch.nn.functional as F

# 1. Prepare dataset
# Processing data
transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize((0.1307,), (0.3081,))
])
batch_size = 64
# Training set
mnist_train = MNIST(root='../dataset/minist', train=True, transform=transform, download=True)
train_loader = DataLoader(dataset=mnist_train, shuffle=True, batch_size=batch_size)
# Test set
mnist_test = MNIST(root='../dataset/minist', train=False, transform=transform, download=True)
test_loader = DataLoader(dataset=mnist_test, shuffle=True, batch_size=batch_size)
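
As a quick sanity check (a minimal sketch, assuming the loaders defined above), we can pull one batch and confirm that ToTensor() and Normalize() produced the expected shape and value range:

# Minimal sanity check on the prepared data (uses train_loader from above)
images, labels = next(iter(train_loader))
print(images.shape)  # torch.Size([64, 1, 28, 28]) -> (N, C, H, W)
print(labels.shape)  # torch.Size([64])
# After Normalize((0.1307,), (0.3081,)) the values are standardized,
# so they lie roughly in [-0.42, 2.82] rather than [0, 1]
print(images.min().item(), images.max().item())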

2. Design model class

  Notes for designing the model class:

  1. We have already converted the dataset into PyTorch's (N, C, H, W) format, but don't forget that the input to this fully connected network must be a two-dimensional matrix, so the data has to be reshaped from (N, C, H, W) to (N, C*H*W), which corresponds to x = x.view(-1, 784) in the code.
  2. Except for the last layer, the activation function used in every layer is relu().
  3. For multi-class classification, the last layer is conceptually followed by Softmax(): the number of output features equals the number of classes, every output value is > 0, and all outputs sum to 1. The loss function is the cross-entropy error (negative log-likelihood). In PyTorch, torch.nn.CrossEntropyLoss() covers the whole pipeline from the Softmax function to the loss computation, so when we use the cross-entropy loss we do not apply an activation function in the last layer of the network; see the sketch after this list.
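
As a small, standalone sketch of note 3 (a hypothetical example, not the model itself), the snippet below shows that feeding raw, unactivated scores into CrossEntropyLoss gives the same result as applying log_softmax followed by the negative log-likelihood loss:

import torch
import torch.nn.functional as F

# Hypothetical logits: 3 samples, 10 classes of raw, unactivated scores
logits = torch.randn(3, 10)
target = torch.tensor([0, 4, 9])

# CrossEntropyLoss applies log-softmax internally ...
loss1 = torch.nn.CrossEntropyLoss()(logits, target)
# ... so it matches log_softmax followed by NLLLoss
loss2 = F.nll_loss(F.log_softmax(logits, dim=1), target)
print(loss1.item(), loss2.item())  # equal up to floating-point precision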

The following is the implementation code of the model class:

# 2. Design model class
class Net(torch.nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        # Define the fully connected (linear) layers
        self.l1 = torch.nn.Linear(784, 512)
        self.l2 = torch.nn.Linear(512, 256)
        self.l3 = torch.nn.Linear(256, 128)
        self.l4 = torch.nn.Linear(128, 64)
        self.l5 = torch.nn.Linear(64, 10)

    def forward(self, x):
        # Note 1
        x = x.view(-1, 784)
        # Note 2
        x = F.relu(self.l1(x))
        x = F.relu(self.l2(x))
        x = F.relu(self.l3(x))
        x = F.relu(self.l4(x))
        # Note 3
        x = self.l5(x)
        return x
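
To double-check notes 1 to 3, a minimal sketch (assuming the Net class and the torch import from above) passes a dummy batch through the model and confirms that each sample produces one row of 10 raw class scores:

# Hypothetical shape check with a fake batch of 8 MNIST-sized images
dummy = torch.randn(8, 1, 28, 28)
net = Net()
out = net(dummy)
print(out.shape)  # torch.Size([8, 10]) -> one unactivated score per class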

3. Construct loss function and optimizer

The loss function used here is the cross-entropy error, and the optimizer is SGD with a learning rate of 0.01 and momentum of 0.5.

model = Net()
# 3. Construct loss function and optimizer
criterion = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.5)

4. Training and testing

Here we encapsulate one training cycle and one test cycle in separate functions, which improves the reusability of the code.

The training code is as follows:

# 4. Training and testing
# Define training method, a training cycle
def train(epoch):
    running_loss = 0.0
    for idx, (inputs, target) in enumerate(train_loader, 0):
        # The code here is no different from before
        # Forward
        y_pred = model(inputs)
        loss = criterion(y_pred, target)
        # Backward
        optimizer.zero_grad()
        loss.backward()
        # Update
        optimizer.step()

        running_loss += loss.item()
        if idx % 300 == 299:  # Print the average loss every 300 batches; compare with 299 rather than 300 because idx starts from 0
            print(f'epoch={epoch + 1},batch_idx={idx + 1},loss={running_loss / 300}')
            running_loss = 0.0

The test code is as follows:

# Define test method, a test cycle
def test():
    # Number of samples with correct predictions
    correct_num = 0
    # Number of all samples
    total = 0
    # No gradients are needed during testing, so disable gradient tracking with torch.no_grad()
    with torch.no_grad():
        for images, labels in test_loader:
            # Get predicted value
            outputs = model(images)
            # Get the index of the maximum value along dim=1, which is the predicted label
            _, predicted = torch.max(outputs.data, dim=1)
            # Accumulate the number of samples in each batch to obtain the number of all samples in a test cycle
            total += labels.size(0)
            # Accumulate the predicted correct samples of each batch to obtain all the predicted correct samples of a test cycle
            correct_num += (predicted == labels).sum().item()
        print(f'Accuracy on test set:{100 * correct_num/total}%')  # Print the accuracy of a test cycle
if __name__ == '__main__':
    # Train for 10 epochs; each epoch runs over the whole training set and is followed by a test pass
    for epoch in range(10):
        train(epoch)
        test()
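
For clarity on how the predicted labels are extracted in test(), here is a small hypothetical example of torch.max along dim=1: it returns both the maximum values and their indices, and the indices are the predicted classes:

# Hypothetical example of extracting predicted labels from model outputs
outputs = torch.tensor([[0.1, 2.5, 0.3],
                        [1.7, 0.2, 0.4]])
values, predicted = torch.max(outputs, dim=1)
print(values)     # tensor([2.5000, 1.7000]) -> largest score in each row
print(predicted)  # tensor([1, 0])           -> predicted class indices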

That completes all of the code. The full program and its output are given below:

# This is a multi classification problem of handwritten numeral recognition
from torchvision.datasets import MNIST
from torchvision import transforms
from torch.utils.data import DataLoader
import torch
import torch.nn.functional as F

# 1. Prepare dataset
# Processing data
transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize((0.1307,), (0.3081,))
])
batch_size = 64
# Training set
mnist_train = MNIST(root='../dataset/minist', train=True, transform=transform, download=True)
train_loader = DataLoader(dataset=mnist_train, shuffle=True, batch_size=batch_size)
# Test set
mnist_test = MNIST(root='../dataset/minist', train=False, transform=transform, download=True)
test_loader = DataLoader(dataset=mnist_test, shuffle=True, batch_size=batch_size)


# 2. Design model class
class Net(torch.nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        # Define the fully connected (linear) layers
        self.l1 = torch.nn.Linear(784, 512)
        self.l2 = torch.nn.Linear(512, 256)
        self.l3 = torch.nn.Linear(256, 128)
        self.l4 = torch.nn.Linear(128, 64)
        self.l5 = torch.nn.Linear(64, 10)

    def forward(self, x):
        # Note 1
        x = x.view(-1, 784)
        # Note 2
        x = F.relu(self.l1(x))
        x = F.relu(self.l2(x))
        x = F.relu(self.l3(x))
        x = F.relu(self.l4(x))
        # Note 3
        x = self.l5(x)
        return x


model = Net()
# 3. Construct loss function and optimizer
criterion = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.5)


# 4. Training and testing
# Define training method, a training cycle
def train(epoch):
    running_loss = 0.0
    for idx, (inputs, target) in enumerate(train_loader, 0):
        # The code here is no different from before
        # Forward
        y_pred = model(inputs)
        loss = criterion(y_pred, target)
        # Backward
        optimizer.zero_grad()
        loss.backward()
        # Update
        optimizer.step()

        running_loss += loss.item()
        if idx % 300 == 299:  # Print the average loss every 300 batches; compare with 299 rather than 300 because idx starts from 0
            print(f'epoch={epoch + 1},batch_idx={idx + 1},loss={running_loss / 300}')
            running_loss = 0.0


# Define test method, a test cycle
def test():
    # Number of samples with correct predictions
    correct_num = 0
    # Number of all samples
    total = 0
    # No gradients are needed during testing, so disable gradient tracking with torch.no_grad()
    with torch.no_grad():
        for images, labels in test_loader:
            # Get predicted value
            outputs = model(images)
            # Get the index of the maximum value along dim=1, which is the predicted label
            _, predicted = torch.max(outputs.data, dim=1)
            # Accumulate the number of samples in each batch to obtain the number of all samples in a test cycle
            total += labels.size(0)
            # Accumulate the predicted correct samples of each batch to obtain all the predicted correct samples of a test cycle
            correct_num += (predicted == labels).sum().item()
        print(f'Accuracy on test set:{100 * correct_num/total}%')  # Print the accuracy of a test cycle


if __name__ == '__main__':
    # Train for 10 epochs; each epoch runs over the whole training set and is followed by a test pass
    for epoch in range(10):
        train(epoch)
        test()

The results are as follows (only part of the output is shown):

epoch=1,batch_idx=300,loss=2.185831303993861
epoch=1,batch_idx=600,loss=0.9028161239624023
epoch=1,batch_idx=900,loss=0.4859987227121989
Accuracy on test set:88.26%
epoch=2,batch_idx=300,loss=0.34666957701245943
epoch=2,batch_idx=600,loss=0.2818286288777987
epoch=2,batch_idx=900,loss=0.23189411964267492
Accuracy on test set:94.17%

........

epoch=7,batch_idx=300,loss=0.055408267891034486
epoch=7,batch_idx=600,loss=0.061728662827517836
epoch=7,batch_idx=900,loss=0.06610782677152505
Accuracy on test set:97.48%
epoch=8,batch_idx=300,loss=0.04807355252560228
epoch=8,batch_idx=600,loss=0.051277296949798865
epoch=8,batch_idx=900,loss=0.047160824784853804
Accuracy on test set:97.43%
epoch=9,batch_idx=300,loss=0.03567605647413681
epoch=9,batch_idx=600,loss=0.04471589110791683
epoch=9,batch_idx=900,loss=0.04066507628730809
Accuracy on test set:97.65%
epoch=10,batch_idx=300,loss=0.02855320817286459
epoch=10,batch_idx=600,loss=0.03323486545394796
epoch=10,batch_idx=900,loss=0.035332622032923006
Accuracy on test set:97.79%

I am still a student. If there are any mistakes, please point them out. Thank you!!

