PyTorch deep learning neural network nn.Module and Conv2d convolution layer

1, Neural network module

Neural Network

1. Import package

import torch.nn

2. Inherit

You need to define a class that inherits from nn.Module and overrides the __init__ and forward methods

3. Specific code

import torch
from torch import nn

# After inheriting nn.Module, the __init__ and forward methods must be implemented
# forward performs the forward propagation
class NNModule(nn.Module):
    def __init__(self):
        super(NNModule, self).__init__()

    def forward(self, input_data):
        output_data = input_data + 1
        return output_data


nn_m = NNModule()  # instantiate the module
x = torch.tensor(1.0)
result = nn_m(x)   # calling the instance dispatches to forward(x)
print(result)
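Running this prints tensor(2.): calling nn_m(x) invokes nn.Module's __call__, which dispatches to forward and adds 1 to the input.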

2, Convolution layer

Convolution Layers: Conv1d is one-dimensional convolution, Conv2d is two-dimensional convolution, and Conv3d is three-dimensional convolution

1, torch.nn.functional.conv2d

(1) Import package: from torch.nn import functional as F

functional.conv2d performs 2D convolution as a plain function

(2) Parameter details:

  • input: the input tensor, of shape (minibatch, in_channels, iH, iW): batch size, input channels, height, width
  • weight: the convolution kernel, of shape (out_channels, in_channels/groups, kH, kW)
  • bias: an optional bias tensor with one value per output channel
  • stride: how many cells the kernel jumps while scanning; a single number or a tuple (sH, sW). Default: 1
  • padding: the amount of zero padding added around the input; a single number or a tuple (padH, padW). Default: 0

(3) Operation sequence: input image (5 × 5) -> convolution kernel (3 × 3) -> output after convolution

At each position, the corresponding elements of the input and the kernel are multiplied and the products are summed to give one value of the convolution output
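For example, with the 5 × 5 input and 3 × 3 kernel used in the code below, aligning the kernel with the top-left corner of the input gives 1×1 + 2×2 + 0×1 + 0×0 + 1×1 + 2×0 + 1×2 + 2×1 + 1×0 = 10, which is the first value of the convolution output.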

(4) torch.nn.functional.conv2d specific code

import torch
from torch.nn import functional as F

# Input image (5 x 5); conv2d needs floating-point tensors
inputData = torch.tensor([[1, 2, 0, 3, 1],
                          [0, 1, 2, 3, 1],
                          [1, 2, 1, 0, 0],
                          [5, 2, 3, 1, 1],
                          [2, 1, 0, 1, 1]], dtype=torch.float32)

# Convolution kernel (3 x 3)
kernelData = torch.tensor([[1, 2, 1],
                           [0, 1, 0],
                           [2, 1, 0]], dtype=torch.float32)

# conv2d expects 4D tensors, so reshape both to (minibatch, in_channels, iH, iW)
input_data = torch.reshape(inputData, (1, 1, 5, 5))  # batch_size, in_channels, iH, iW
kernel = torch.reshape(kernelData, (1, 1, 3, 3))

print(input_data.shape)  # torch.Size([1, 1, 5, 5])
print(kernel.shape)      # torch.Size([1, 1, 3, 3])

# stride=1 (the default): the kernel moves one cell at a time
output = F.conv2d(input_data, kernel, stride=1)
print(output)  # 3 x 3 output

# stride=2: the kernel jumps two cells at a time -> 2 x 2 output
output2 = F.conv2d(input_data, kernel, stride=2)
print(output2)

# padding=1: a one-cell border of zeros around the input -> 5 x 5 output
output3 = F.conv2d(input_data, kernel, stride=1, padding=1)
print(output3)
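Running this, stride=1 produces the 3 × 3 output [[10, 12, 12], [18, 16, 16], [13, 9, 3]], stride=2 samples every other position and gives [[10, 12], [13, 3]], and padding=1 surrounds the input with zeros so the output grows back to 5 × 5.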

2, Conv2d: two-dimensional convolution layer

(1) Import package: from torch.nn import Conv2d

(2) Parameter details:

torch.nn.Conv2d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True, padding_mode='zeros')

  • in_channels(int): the number of channels in the input image
  • out_channels(int): the number of channels produced by the convolution
  • kernel_size(int or tuple): the size of the convolution kernel. If given as 3, the kernel is 3 × 3

Optional parameters:

  • stride(int or tuple, optional): the step size during convolution. Default: 1
  • padding(int or tuple, optional): zero padding added around the original image. Default: 0
  • padding_mode(string, optional): the padding mode, 'zeros', 'reflect', 'replicate' or 'circular'. Default: 'zeros'
  • dilation(int or tuple, optional): the spacing between kernel elements. Default: 1
  • groups(int, optional): the number of blocked connections from input channels to output channels. Default: 1
  • bias(bool, optional): whether to add a learnable bias to the output. Default: True
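To make the parameters concrete, here is a minimal sketch; the weight shape (out_channels, in_channels/groups, kH, kW) and the 30 × 30 output follow directly from the definitions above:

import torch
from torch.nn import Conv2d

# 3 input channels -> 6 output channels, 3 x 3 kernel
conv = Conv2d(in_channels=3, out_channels=6, kernel_size=3, stride=1, padding=0, bias=True)

print(conv.weight.shape)  # torch.Size([6, 3, 3, 3]) = (out_channels, in_channels/groups, kH, kW)
print(conv.bias.shape)    # torch.Size([6]), one bias per output channel

x = torch.randn(1, 3, 32, 32)  # (N, C, H, W)
y = conv(x)
print(y.shape)  # torch.Size([1, 6, 30, 30]): 30 = (32 - 3)/1 + 1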

(3) Convolution formula

Each output value is obtained by multiplying the input and the kernel at corresponding positions and summing the products
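From the PyTorch documentation, the value computed for sample N_i and output channel C_out_j is

out(N_i, C_out_j) = bias(C_out_j) + Σ_k weight(C_out_j, k) ⋆ input(N_i, k)

where ⋆ is the 2D cross-correlation operator and k runs over the in_channels of the input.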

 

Blue is the input image, green is the output image, and the shaded part is the convolution kernel

(4) One kernel: input image (5 × 5), in_channels=1 -> convolution kernel (3 × 3) -> output after convolution, out_channels=1

Two kernels: the same input image (5 × 5), in_channels=1, is convolved with kernel 1 and with kernel 2 separately, and the two results are stacked -> out_channels=2

Number of output channels = number of convolution kernels

  • N is batch_size, the number of samples in a batch
  • C is channel, the number of input or output channels
  • H is the height of the input and output
  • W is the width of the input and output
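These combine into the output-size formula from the PyTorch documentation:

H_out = floor((H_in + 2 × padding[0] − dilation[0] × (kernel_size[0] − 1) − 1) / stride[0] + 1)
W_out = floor((W_in + 2 × padding[1] − dilation[1] × (kernel_size[1] − 1) − 1) / stride[1] + 1)

For the code below, H_in = 32, kernel_size = 3, stride = 1, padding = 0, so H_out = (32 − 2 − 1)/1 + 1 = 30.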

(5) Specific code

 

from torch import nn
from torch.utils.data import DataLoader
import torchvision
from torch.nn import Conv2d
import ssl
from torch.utils.tensorboard import SummaryWriter
import torch

# Work around HTTPS certificate errors when downloading CIFAR10
ssl._create_default_https_context = ssl._create_unverified_context

dataset = torchvision.datasets.CIFAR10(root="./dataset", train=False, transform=torchvision.transforms.ToTensor(),
                                       download=True)
dataloader = DataLoader(dataset, batch_size=64)

class TestImage(nn.Module):
    def __init__(self):
        super(TestImage, self).__init__()

        # Define the convolution layer: 3 input channels, 6 output channels, 3 x 3 kernel, stride 1
        self.conv1 = Conv2d(in_channels=3, out_channels=6, kernel_size=3, stride=1, padding=0)

    def forward(self, x):
        x = self.conv1(x)  # Put x into the convolution layer conv1
        return x


ti = TestImage()
# Write the images to TensorBoard
writer = SummaryWriter("logs")

step = 0
for data in dataloader:
    images, target = data
    output = ti(images)
    print(images.shape)
    print(output.shape)
    # torch.Size([64, 3, 32, 32]): the input size
    writer.add_images("input", images, step)  # add_images accepts a whole batch in NCHW format

    # torch.Size([64, 6, 30, 30]) -> [xxx, 3, 30, 30]: the output size
    # A 6-channel tensor cannot be displayed as an image, so reshape it to 3 channels (reshape infers the batch size)
    output = torch.reshape(output, (-1, 3, 30, 30))
    writer.add_images("output", output, step)

    step = step + 1

writer.close()
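After running the script, launch TensorBoard with tensorboard --logdir=logs to compare the input batches (3 channels, 32 × 32) with the convolved outputs (30 × 30).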

Tags: neural networks, PyTorch, deep learning

Posted on Fri, 15 Oct 2021 22:16:37 -0400 by inkdrop