PyTorch framework learning: mastering tensor operations

1 Tensor operations
1.1 Concatenation
1.2 Splitting
1.3 Indexing
1.4 Transformation
2 Mathematical operations on tensors
2.1 Addition
2.2 Subtraction
2.3 Hadamard product (element-wise multiplication)
2.4 Division
2.5 Special operation: torch.addcdiv
2.6 Special operation: torch.addcmul
2.7 Power functions
2.8 Exponential function
2.9 Logarithmic functions
2.10 Trigonometric functions
3 Linear regression

1 Tensor operations

1.1 Concatenation

torch.cat()
torch.cat(tensors, dim=0, *, out=None) → Tensor

[function]: concatenates a sequence of tensors along dimension dim.

• tensors: a Python sequence (list or tuple) of tensors of the same type. All non-empty tensors must have the same shape in every dimension except the concatenation dimension.
• dim: the dimension along which to concatenate; it must be a valid dimension of the input tensors.

[Code]:

import torch

t = torch.randn(2, 3)
t_0 = torch.cat([t, t, t], dim=0)
t_1 = torch.cat([t, t, t], dim=1)
print("t = {} shape = {}\nt_0 = {} shape = {}\nt_1 = {} shape = {}"
      .format(t, t.shape, t_0, t_0.shape, t_1, t_1.shape))

[result]:

t = tensor([[-0.6014, -1.0122, -0.3023],
        [-1.2277,  0.9198, -0.3485]]) shape = torch.Size([2, 3])
t_0 = tensor([[-0.6014, -1.0122, -0.3023],
        [-1.2277,  0.9198, -0.3485],
        [-0.6014, -1.0122, -0.3023],
        [-1.2277,  0.9198, -0.3485],
        [-0.6014, -1.0122, -0.3023],
        [-1.2277,  0.9198, -0.3485]]) shape = torch.Size([6, 3])
t_1 = tensor([[-0.6014, -1.0122, -0.3023, -0.6014, -1.0122, -0.3023, -0.6014, -1.0122, -0.3023],
        [-1.2277,  0.9198, -0.3485, -1.2277,  0.9198, -0.3485, -1.2277,  0.9198, -0.3485]]) shape = torch.Size([2, 9])

torch.stack()
torch.stack(tensors, dim=0, *, out=None) → Tensor

[function]: concatenates a sequence of tensors along a new dimension. All tensors in the sequence must have the same shape. Intuitively: stacking multiple 2-D tensors produces a 3-D tensor, stacking multiple 3-D tensors produces a 4-D tensor, and so on; a new dimension is created for the stack.

• tensors: the sequence (list or tuple) of tensors to stack; all must have the same shape.
• dim: the index of the new dimension, where dim ∈ [0, len(out)); len(out) is the number of dimensions of the output tensor.

[Code]:

import torch

t1 = torch.tensor([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
t2 = torch.tensor([[10, 20, 30], [40, 50, 60], [70, 80, 90]])

t_stack_0 = torch.stack([t1, t2], dim=0)
print("t_stack_0:{}".format(t_stack_0))
print("t_stack_0.shape:{}\n".format(t_stack_0.shape))

t_stack_1 = torch.stack([t1, t2], dim=1)
print("t_stack_1:{}".format(t_stack_1))
print("t_stack_1.shape:{}\n".format(t_stack_1.shape))

t_stack_2 = torch.stack([t1, t2], dim=2)
print("t_stack_2:{}".format(t_stack_2))
print("t_stack_2.shape:{}\n".format(t_stack_2.shape))

t_stack_3 = torch.stack([t1, t2], dim=3)
print("t_stack_3.shape:{}".format(t_stack_3.shape))

[result]:

t_stack_0:tensor([[[ 1,  2,  3],
         [ 4,  5,  6],
         [ 7,  8,  9]],

        [[10, 20, 30],
         [40, 50, 60],
         [70, 80, 90]]])
t_stack_0.shape:torch.Size([2, 3, 3])

t_stack_1:tensor([[[ 1,  2,  3],
         [10, 20, 30]],

        [[ 4,  5,  6],
         [40, 50, 60]],

        [[ 7,  8,  9],
         [70, 80, 90]]])
t_stack_1.shape:torch.Size([3, 2, 3])

t_stack_2:tensor([[[ 1, 10],
         [ 2, 20],
         [ 3, 30]],

        [[ 4, 40],
         [ 5, 50],
         [ 6, 60]],

        [[ 7, 70],
         [ 8, 80],
         [ 9, 90]]])
t_stack_2.shape:torch.Size([3, 3, 2])

IndexError: Dimension out of range (expected to be in range of [-3, 2], but got 3)

[result description]: when dim=3, dim == len(out) == 3, which is out of the valid range, so an IndexError is raised.

1.2 Splitting

torch.chunk()
torch.chunk(input, chunks, dim=0) → List of Tensors

[function]: splits the tensor into chunks along dimension dim, as evenly as possible. If the size is not evenly divisible, the last chunk is smaller than the others.

• input: the tensor to split.
• chunks: the number of chunks to return.
• dim: the dimension along which to split.

[Code]:

import torch

a = torch.ones((2, 7))
print("a = {}".format(a))
list_of_tensors_1 = torch.chunk(a, chunks=3, dim=1)
for idx, t in enumerate(list_of_tensors_1):
    print("Tensor {}: {}, shape is {}".format(idx + 1, t, t.shape))
print("\n")

b = torch.arange(11)
print("b = {}".format(b))
list_of_tensors_2 = b.chunk(6)
for idx, t in enumerate(list_of_tensors_2):
    print("Tensor {}: {}, shape is {}".format(idx + 1, t, t.shape))
print("\n")

c = torch.arange(12)
print("c = {}".format(c))
list_of_tensors_3 = c.chunk(6)
for idx, t in enumerate(list_of_tensors_3):
    print("Tensor {}: {}, shape is {}".format(idx + 1, t, t.shape))

[result]:

a = tensor([[1., 1., 1., 1., 1., 1., 1.],
        [1., 1., 1., 1., 1., 1., 1.]])
Tensor 1: tensor([[1., 1., 1.],
        [1., 1., 1.]]), shape is torch.Size([2, 3])
Tensor 2: tensor([[1., 1., 1.],
        [1., 1., 1.]]), shape is torch.Size([2, 3])
Tensor 3: tensor([[1.],
        [1.]]), shape is torch.Size([2, 1])

b = tensor([ 0,  1,  2,  3,  4,  5,  6,  7,  8,  9, 10])
Tensor 1: tensor([0, 1]), shape is torch.Size([2])
Tensor 2: tensor([2, 3]), shape is torch.Size([2])
Tensor 3: tensor([4, 5]), shape is torch.Size([2])
Tensor 4: tensor([6, 7]), shape is torch.Size([2])
Tensor 5: tensor([8, 9]), shape is torch.Size([2])
Tensor 6: tensor([10]), shape is torch.Size([1])

c = tensor([ 0,  1,  2,  3,  4,  5,  6,  7,  8,  9, 10, 11])
Tensor 1: tensor([0, 1]), shape is torch.Size([2])
Tensor 2: tensor([2, 3]), shape is torch.Size([2])
Tensor 3: tensor([4, 5]), shape is torch.Size([2])
Tensor 4: tensor([6, 7]), shape is torch.Size([2])
Tensor 5: tensor([8, 9]), shape is torch.Size([2])
Tensor 6: tensor([10, 11]), shape is torch.Size([2])

[result description]: since 7 is not divisible by 3, the chunk width is rounded up to ceil(7/3) = 3. The first two chunks therefore have shape [2, 3], leaving shape [2, 1] for the last chunk.

torch.split()
torch.split(tensor, split_size_or_sections, dim=0)

[function]: splits the tensor along dimension dim. Unlike torch.chunk, the length of each chunk can be specified.

• tensor: the tensor to split.
• split_size_or_sections: if an int, it gives the length of each chunk (if the size is not evenly divisible, the last chunk is smaller than the others); if a list, the tensor is split into chunks whose lengths are the list elements. If the list elements do not sum to the size of dimension dim, an error is raised.
• dim: the dimension along which to split.

[Code]:

import torch

a = torch.ones((2, 5))
print("a = {}".format(a))
list_of_tensors_1 = torch.split(a, [2, 1, 2], dim=1)
for idx, t in enumerate(list_of_tensors_1):
    print("Tensor {}: {}, shape is {}".format(idx + 1, t, t.shape))
print("\n")

b = torch.arange(10).reshape(5, 2)
print("b = {}".format(b))
list_of_tensors_2 = torch.split(b, 2)
for idx, t in enumerate(list_of_tensors_2):
    print("Tensor {}: {}, shape is {}".format(idx + 1, t, t.shape))
print("\n")

c = torch.arange(10).reshape(5, 2)
print("c = {}".format(c))
list_of_tensors_3 = torch.split(c, [1, 4])
for idx, t in enumerate(list_of_tensors_3):
    print("Tensor {}: {}, shape is {}".format(idx + 1, t, t.shape))

[result]:

a = tensor([[1., 1., 1., 1., 1.],
        [1., 1., 1., 1., 1.]])
Tensor 1: tensor([[1., 1.],
        [1., 1.]]), shape is torch.Size([2, 2])
Tensor 2: tensor([[1.],
        [1.]]), shape is torch.Size([2, 1])
Tensor 3: tensor([[1., 1.],
        [1., 1.]]), shape is torch.Size([2, 2])

b = tensor([[0, 1],
        [2, 3],
        [4, 5],
        [6, 7],
        [8, 9]])
Tensor 1: tensor([[0, 1],
        [2, 3]]), shape is torch.Size([2, 2])
Tensor 2: tensor([[4, 5],
        [6, 7]]), shape is torch.Size([2, 2])
Tensor 3: tensor([[8, 9]]), shape is torch.Size([1, 2])

c = tensor([[0, 1],
        [2, 3],
        [4, 5],
        [6, 7],
        [8, 9]])
Tensor 1: tensor([[0, 1]]), shape is torch.Size([1, 2])
Tensor 2: tensor([[2, 3],
        [4, 5],
        [6, 7],
        [8, 9]]), shape is torch.Size([4, 2])

1.3 Indexing

torch.index_select()
torch.index_select(input, dim, index, *, out=None) → Tensor

[function]: along dimension dim, selects the entries specified by index and returns them concatenated as a new tensor.

• input: the tensor to index.
• dim: the dimension along which to index.
• index(IntTensor or LongTensor): a 1-D tensor containing the indices of the entries to select.

[Code]:

import torch

t = torch.randint(0, 9, size=(3, 3))  # random integers in [0, 9)
idx = torch.tensor([0, 2], dtype=torch.long)  # note: the dtype of idx must not be torch.float
print("idx: {}".format(idx))
t_select_1 = torch.index_select(t, dim=0, index=idx)  # take rows 0 and 2
print("t: {}\nt_select_1:\n{}\n".format(t, t_select_1))

x = torch.randn(3, 4)
indices = torch.tensor([0, 2])
t_select_2 = torch.index_select(x, 0, indices)  # take rows 0 and 2
print("x: {}".format(x))
print("t_select_2:\n{}".format(t_select_2))
t_select_3 = torch.index_select(x, 1, indices)  # take columns 0 and 2
print("t_select_3:\n{}".format(t_select_3))

[result]:

idx: tensor([0, 2])
t: tensor([[5, 3, 6],
        [4, 6, 2],
        [8, 7, 4]])
t_select_1:
tensor([[5, 3, 6],
        [8, 7, 4]])

x: tensor([[-0.7894, -0.9907, -1.9858,  0.5365],
        [ 1.7030, -2.0950, -0.9801,  0.2507],
        [-0.1537,  0.9861,  0.0340, -1.5576]])
t_select_2:
tensor([[-0.7894, -0.9907, -1.9858,  0.5365],
        [-0.1537,  0.9861,  0.0340, -1.5576]])
t_select_3:
tensor([[-0.7894, -1.9858],
        [ 1.7030, -0.9801],
        [-0.1537,  0.0340]])

torch.masked_select()
torch.masked_select(input, mask, *, out=None) → Tensor

[function]: selects the elements of input at the positions where mask is True and returns them as a one-dimensional tensor.

• input(Tensor): the tensor to index.
• mask(BoolTensor): a Boolean tensor with the same shape as input (more generally, broadcastable with input).

[Code]:

import torch

t = torch.randint(0, 9, size=(3, 3))
print("t: {}".format(t))
mask = t.le(5)  # True where elements are <= 5
print("mask: {}".format(mask))
t_select = torch.masked_select(t, mask)  # take the elements <= 5
print("t_select: {}\n".format(t_select))

x = torch.randn(3, 4)
print("x: {}".format(x))
mask = x.ge(0.5)  # True where elements are >= 0.5
print("mask: {}".format(mask))
x_select = torch.masked_select(x, mask)  # take the elements >= 0.5
print("x_select: {}\n".format(x_select))

[result]:

t: tensor([[5, 3, 6],
        [4, 6, 2],
        [8, 7, 4]])
mask: tensor([[ True,  True, False],
        [ True, False,  True],
        [False, False,  True]])
t_select: tensor([5, 3, 4, 2, 4])

x: tensor([[-0.7894, -0.9907, -1.9858,  0.5365],
        [ 1.7030, -2.0950, -0.9801,  0.2507],
        [-0.1537,  0.9861,  0.0340, -1.5576]])
mask: tensor([[False, False, False,  True],
        [ True, False, False, False],
        [False,  True, False, False]])
x_select: tensor([0.5365, 1.7030, 0.9861])

[note]: the returned tensor is always one-dimensional, regardless of the shape of input.

1.4 Transformation

torch.reshape()
torch.reshape(input, shape) → Tensor

[function]: changes the shape of a tensor. When the tensor is contiguous in memory, the returned tensor shares its underlying data with the original: changing one also changes the other.

• input(Tensor): the tensor to reshape.
• shape(tuple of python:ints): the shape of the new tensor.

[Code]:

import torch

t = torch.randperm(8)  # random permutation of 0 to 7
print("t: {}".format(t))
t_reshape = torch.reshape(t, (-1, 2, 2))  # -1 infers this dimension
print("t_reshape: {}".format(t_reshape))
print("t_reshape_shape: {}\n".format(t_reshape.shape))

a = torch.arange(4)
print("a: {}".format(a))
a_reshape = torch.reshape(a, (2, 2))
print("a_reshape: {}".format(a_reshape))
print("a_reshape_shape: {}\n".format(a_reshape.shape))

b = torch.tensor([[0, 1], [2, 3]])
print("b: {}".format(b))
b_reshape = torch.reshape(b, (-1,))
print("b_reshape: {}".format(b_reshape))
print("b_reshape_shape: {}\n".format(b_reshape.shape))

# Modifying an element of the original tensor also changes the new tensor
t[0] = 1024
print("t: {}".format(t))
print("t_reshape: {}".format(t_reshape))
print("t.data memory address: {}".format(id(t.data)))
print("t_reshape.data memory address: {}".format(id(t_reshape.data)))

[result]:

t: tensor([1, 0, 2, 5, 4, 7, 3, 6])
t_reshape: tensor([[[1, 0],
         [2, 5]],

        [[4, 7],
         [3, 6]]])
t_reshape_shape: torch.Size([2, 2, 2])

a: tensor([0, 1, 2, 3])
a_reshape: tensor([[0, 1],
        [2, 3]])
a_reshape_shape: torch.Size([2, 2])

b: tensor([[0, 1],
        [2, 3]])
b_reshape: tensor([0, 1, 2, 3])
b_reshape_shape: torch.Size([4])

t: tensor([1024,    0,    2,    5,    4,    7,    3,    6])
t_reshape: tensor([[[1024,    0],
         [   2,    5]],

        [[   4,    7],
         [   3,    6]]])
t.data memory address: 140396336572672
t_reshape.data memory address: 140396336572672
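Conversely, when the input is not contiguous in memory (for example, after a transpose), torch.reshape cannot return a view and falls back to copying the data. A minimal sketch to illustrate this (not part of the original example), using data_ptr() as one way to check whether storage is shared:

import torch

t = torch.arange(6).reshape(2, 3)
t_t = t.t()                           # transpose: a non-contiguous view
print(t_t.is_contiguous())            # False
r = torch.reshape(t_t, (6,))          # reshape must copy here
print(t.data_ptr() == r.data_ptr())   # False: storage is not shared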

torch.transpose()
torch.transpose(input, dim0, dim1) → Tensor

[function]: swaps two dimensions of a tensor. It is often used in image transformations, e.g. as one step in converting a c × h × w tensor to h × w × c.

• input(Tensor): the tensor whose dimensions are to be swapped.
• dim0(int): the first dimension to swap.
• dim1(int): the second dimension to swap.

[Code]:

import torch

t = torch.rand((2, 3, 4))
print("t: {}".format(t))
print("t shape: {}\n".format(t.shape))
t_transpose = torch.transpose(t, dim0=1, dim1=2)  # swap dimensions 1 and 2
print("t_transpose: {}".format(t_transpose))
print("t_transpose shape: {}".format(t_transpose.shape))

[result]:

t: tensor([[[0.4581, 0.4829, 0.3125, 0.6150],
         [0.2139, 0.4118, 0.6938, 0.9693],
         [0.6178, 0.3304, 0.5479, 0.4440]],

        [[0.7041, 0.5573, 0.6959, 0.9849],
         [0.2924, 0.4823, 0.6150, 0.4967],
         [0.4521, 0.0575, 0.0687, 0.0501]]])
t shape: torch.Size([2, 3, 4])

t_transpose: tensor([[[0.4581, 0.2139, 0.6178],
         [0.4829, 0.4118, 0.3304],
         [0.3125, 0.6938, 0.5479],
         [0.6150, 0.9693, 0.4440]],

        [[0.7041, 0.2924, 0.4521],
         [0.5573, 0.4823, 0.0575],
         [0.6959, 0.6150, 0.0687],
         [0.9849, 0.4967, 0.0501]]])
t_transpose shape: torch.Size([2, 4, 3])

torch.t()
torch.t(input) → Tensor

[function]: transposes a 2-D tensor; for a 2-D matrix it is equivalent to torch.transpose(input, 0, 1).

• input(Tensor): the tensor to transpose.

[Code]:

import torch

x1 = torch.randn(())  # 0-D tensor
print("x1: {}".format(x1))
x1_t = torch.t(x1)
print("x1_t: {}\n".format(x1_t))

x2 = torch.randn(3)  # 1-D tensor
print("x2: {}".format(x2))
x2_t = torch.t(x2)
print("x2_t: {}\n".format(x2_t))

x3 = torch.randn(2, 3)  # 2-D tensor
print("x3: {}".format(x3))
x3_t = torch.t(x3)
print("x3_t: {}".format(x3_t))

[result]:

x1: -0.6013928055763245
x1_t: -0.6013928055763245

x2: tensor([-1.0122, -0.3023, -1.2277])
x2_t: tensor([-1.0122, -0.3023, -1.2277])

x3: tensor([[ 0.9198, -0.3485, -0.8692],
        [-0.9582, -1.1920,  1.9050]])
x3_t: tensor([[ 0.9198, -0.9582],
        [-0.3485, -1.1920],
        [-0.8692,  1.9050]])

[result description]: for 0-D and 1-D tensors, the output is the input itself, unchanged.

torch.squeeze()
torch.squeeze(input, dim=None, *, out=None) → Tensor

[function]: removes (squeezes) dimensions of length 1.

• input(Tensor): the tensor to squeeze.
• dim(int, optional): if None, all dimensions of length 1 are removed; if a dimension is specified, it is removed if and only if its length is 1.

[Code]:

import torch

t = torch.rand((1, 2, 3, 1))   # dimensions 0 and 3 have length 1
t_sq = torch.squeeze(t)        # dimensions 0 and 3 are removed
t_0 = torch.squeeze(t, dim=0)  # dimension 0 is removed
t_1 = torch.squeeze(t, dim=1)  # dimension 1 cannot be removed (length != 1)
print("t.shape: {}".format(t.shape))
print("t_sq.shape: {}".format(t_sq.shape))
print("t_0.shape: {}".format(t_0.shape))
print("t_1.shape: {}".format(t_1.shape))

[result]:

t.shape: torch.Size([1, 2, 3, 1])
t_sq.shape: torch.Size([2, 3])
t_0.shape: torch.Size([2, 3, 1])
t_1.shape: torch.Size([1, 2, 3, 1])

torch.unsqueeze()
torch.unsqueeze(input, dim) → Tensor

[function]: inserts a new dimension of length 1 at the position given by dim.

• input(Tensor): the tensor to expand.
• dim(int): the position at which to insert the new dimension.

[Code]:

import torch

t = torch.tensor([1, 2, 3, 4])
t_0 = torch.unsqueeze(t, 0)  # insert a new dimension at position 0
t_1 = torch.unsqueeze(t, 1)  # insert a new dimension at position 1
print("t.shape: {}".format(t.shape))
print("t_0.shape: {}".format(t_0.shape))
print("t_1.shape: {}".format(t_1.shape))

[result]:

t.shape: torch.Size([4])
t_0.shape: torch.Size([1, 4])
t_1.shape: torch.Size([4, 1])

2 Mathematical operations on tensors

2.1 Addition

torch.add()
torch.add(input, other, *, alpha=1, out=None) → Tensor

[function]: adds element-wise; the formula is out_i = input_i + alpha × other_i.

• input(Tensor): the input tensor.
• other(Tensor or Number): the tensor or number to add to input.
• alpha(Number): multiplier for other.

[Code]:

import torch

a = torch.randn(4)
print("a: {}".format(a))
a_add = torch.add(a, 20)
print("a_add: {}\n".format(a_add))

b = torch.randn(4)
print("b: {}".format(b))
c = torch.randn(4, 1)
print("c: {}".format(c))
b_add_c = torch.add(b, c, alpha=10)  # b and c broadcast to shape (4, 4)
print("b_add_c: {}\n".format(b_add_c))
d = b + 10 * c
print("d: {}\n".format(d))

[result]:

a: tensor([-0.6014, -1.0122, -0.3023, -1.2277])
a_add: tensor([19.3986, 18.9878, 19.6977, 18.7723])

b: tensor([ 0.9198, -0.3485, -0.8692, -0.9582])
c: tensor([[-1.1920],
        [ 1.9050],
        [-0.9373],
        [-0.8465]])
b_add_c: tensor([[-11.0006, -12.2689, -12.7896, -12.8786],
        [ 19.9698,  18.7015,  18.1808,  18.0918],
        [ -8.4535,  -9.7218, -10.2425, -10.3315],
        [ -7.5448,  -8.8131,  -9.3339,  -9.4228]])

d: tensor([[-11.0006, -12.2689, -12.7896, -12.8786],
        [ 19.9698,  18.7015,  18.1808,  18.0918],
        [ -8.4535,  -9.7218, -10.2425, -10.3315],
        [ -7.5448,  -8.8131,  -9.3339,  -9.4228]])

[result description]: torch.add(b, c, alpha=10) is equivalent to b + 10 * c. Note that b (shape [4]) and c (shape [4, 1]) are broadcast to shape [4, 4].

2.2 Subtraction

torch.sub()
torch.sub(input, other, *, alpha=1, out=None) → Tensor

[function]: subtracts element-wise; the formula is out_i = input_i − alpha × other_i.

• input(Tensor): the input tensor.
• other(Tensor or Number): the tensor or number to subtract from input.
• alpha(Number): multiplier for other.

[Code]:

import torch

a = torch.tensor((1, 2))
print("a: {}".format(a))
b = torch.tensor((0, 1))
print("b: {}".format(b))
a_sub_b = torch.sub(a, b, alpha=2)
print("a_sub_b: {}\n".format(a_sub_b))
c = a - 2 * b
print("c: {}".format(c))

[result]:

a: tensor([1, 2])
b: tensor([0, 1])
a_sub_b: tensor([1, 0])

c: tensor([1, 0])

[result description]: torch.sub(a, b, alpha=2) is equivalent to a - 2 * b.

2.3 Hadamard product (element-wise multiplication)

torch.mul()
torch.mul(input, other, *, out=None) → Tensor

[function]: multiplies element-wise; the formula is out_i = input_i × other_i.

• input(Tensor): the input tensor.
• other(Tensor or Number): the tensor or number to multiply input by.

[Code]:

import torch

a = torch.randn(3)
print("a: {}".format(a))
a_mul1 = torch.mul(a, 100)
a_mul2 = a * 100
print("a_mul1: {}".format(a_mul1))
print("a_mul2: {}\n".format(a_mul2))

b = torch.randn(4, 1)
c = torch.randn(1, 4)
print("b: {}".format(b))
print("c: {}".format(c))
b_mul_c_1 = torch.mul(b, c)  # b and c broadcast to shape (4, 4)
b_mul_c_2 = b * c
print("b_mul_c_1: {}".format(b_mul_c_1))
print("b_mul_c_2: {}".format(b_mul_c_2))

[result]:

a: tensor([-0.6014, -1.0122, -0.3023])
a_mul1: tensor([ -60.1393, -101.2210,  -30.2269])
a_mul2: tensor([ -60.1393, -101.2210,  -30.2269])

b: tensor([[-1.2277],
        [ 0.9198],
        [-0.3485],
        [-0.8692]])
c: tensor([[-0.9582, -1.1920,  1.9050, -0.9373]])
b_mul_c_1: tensor([[ 1.1763,  1.4635, -2.3387,  1.1508],
        [-0.8814, -1.0965,  1.7523, -0.8622],
        [ 0.3339,  0.4154, -0.6638,  0.3266],
        [ 0.8328,  1.0361, -1.6558,  0.8147]])
b_mul_c_2: tensor([[ 1.1763,  1.4635, -2.3387,  1.1508],
        [-0.8814, -1.0965,  1.7523, -0.8622],
        [ 0.3339,  0.4154, -0.6638,  0.3266],
        [ 0.8328,  1.0361, -1.6558,  0.8147]])

[result description]: torch.mul(b, c) is equivalent to b * c.

2.4 Division

torch.div()
torch.div(input, other, *, rounding_mode=None, out=None) → Tensor

[function]: divides element-wise; the formula is out_i = input_i / other_i.

• input(Tensor): the input tensor (numerator).
• other(Tensor or Number): the tensor or number to divide input by.
• rounding_mode(str, optional): None — no rounding, equivalent to Python's / operator or np.true_divide; "trunc" — rounds the result toward zero, equivalent to C-style integer division; "floor" — rounds the result down, equivalent to Python's // operator or np.floor_divide.

[Code]:

import torch

x = torch.tensor([0.3810, 1.2774, -0.2972, -0.3719, 0.4637])
print("x: {}".format(x))
x_div = torch.div(x, 0.5)
print("x_div: {}".format(x_div))

a = torch.tensor([[-0.3711, -1.9353, -0.4605, -0.2917],
                  [ 0.1815, -1.0111,  0.9805, -1.5923],
                  [ 0.1062,  1.4581,  0.7759, -1.2344],
                  [-0.1830, -0.0313,  1.1908, -1.4757]])
b = torch.tensor([0.8032, 0.2930, -0.8113, -0.2308])
a_div_b1 = torch.div(a, b)
print("a_div_b1: {}".format(a_div_b1))
a_div_b2 = a / b
print("a_div_b2: {}".format(a_div_b2))

[result]:

x: tensor([ 0.3810,  1.2774, -0.2972, -0.3719,  0.4637])
x_div: tensor([ 0.7620,  2.5548, -0.5944, -0.7438,  0.9274])
a_div_b1: tensor([[-0.4620, -6.6051,  0.5676,  1.2639],
        [ 0.2260, -3.4509, -1.2086,  6.8990],
        [ 0.1322,  4.9764, -0.9564,  5.3484],
        [-0.2278, -0.1068, -1.4678,  6.3938]])
a_div_b2: tensor([[-0.4620, -6.6051,  0.5676,  1.2639],
        [ 0.2260, -3.4509, -1.2086,  6.8990],
        [ 0.1322,  4.9764, -0.9564,  5.3484],
        [-0.2278, -0.1068, -1.4678,  6.3938]])

[result description]: torch.div(a, b) is equivalent to a / b; both use the default rounding_mode=None. The sketch below shows the other two rounding modes.
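A minimal sketch of rounding_mode="trunc" and rounding_mode="floor" (the values here are chosen only for illustration, not part of the original example):

import torch

a = torch.tensor([ 7., -7.])
b = torch.tensor([ 2.,  2.])
print(torch.div(a, b))                         # tensor([ 3.5000, -3.5000])
print(torch.div(a, b, rounding_mode="trunc"))  # tensor([ 3., -3.]), rounds toward zero
print(torch.div(a, b, rounding_mode="floor"))  # tensor([ 3., -4.]), rounds down

The two modes differ only for negative results: truncation discards the fractional part, while flooring always rounds toward negative infinity.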

2.5 Special operation: torch.addcdiv

torch.addcdiv()
torch.addcdiv(input, tensor1, tensor2, *, value=1, out=None) → Tensor

[function]: the formula is out_i = input_i + value × (tensor1_i / tensor2_i).

• input(Tensor): the input tensor.
• tensor1(Tensor): the numerator tensor.
• tensor2(Tensor): the denominator tensor.
• value(Number, optional): multiplier for tensor1 / tensor2.

[Code]:

import torch

t = torch.randn(1, 3)
print("t: {}".format(t))
t1 = torch.randn(3, 1)
print("t1: {}".format(t1))
t2 = torch.randn(1, 3)
print("t2: {}".format(t2))
result = torch.addcdiv(t, t1, t2, value=0.5)  # broadcasts to shape (3, 3)
print("result: {}".format(result))

[result]:

t: tensor([[-0.6014, -1.0122, -0.3023]])
t1: tensor([[-1.2277],
        [ 0.9198],
        [-0.3485]])
t2: tensor([[-0.8692, -0.9582, -1.1920]])
result: tensor([[ 0.1048, -0.3716,  0.2127],
        [-1.1305, -1.4922, -0.6881],
        [-0.4009, -0.8304, -0.1561]])
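As a quick sanity check that torch.addcdiv matches the formula above under broadcasting, a minimal sketch (not part of the original example):

import torch

t, t1, t2 = torch.randn(1, 3), torch.randn(3, 1), torch.randn(1, 3)
result = torch.addcdiv(t, t1, t2, value=0.5)
# element-wise check of out = input + value * tensor1 / tensor2
print(torch.allclose(result, t + 0.5 * t1 / t2))  # True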

2.6 Special operation: torch.addcmul

torch.addcmul()
torch.addcmul(input, tensor1, tensor2, *, value=1, out=None) → Tensor

[function]: the formula is out_i = input_i + value × tensor1_i × tensor2_i.

• input(Tensor): the input tensor.
• tensor1(Tensor): the first multiplicand tensor.
• tensor2(Tensor): the second multiplicand tensor.
• value(Number, optional): multiplier for tensor1 × tensor2.

[Code]:

import torch

t = torch.randn(1, 3)
print("t: {}".format(t))
t1 = torch.randn(3, 1)
print("t1: {}".format(t1))
t2 = torch.randn(1, 3)
print("t2: {}".format(t2))
result = torch.addcmul(t, t1, t2, value=0.1)  # broadcasts to shape (3, 3)
print("result: {}".format(result))

[result]:

t: tensor([[-0.6014, -1.0122, -0.3023]])
t1: tensor([[-1.2277],
        [ 0.9198],
        [-0.3485]])
t2: tensor([[-0.8692, -0.9582, -1.1920]])
result: tensor([[-0.4947, -0.8946, -0.1559],
        [-0.6813, -1.1003, -0.4119],
        [-0.5711, -0.9788, -0.2607]])

2.7 Power functions

torch.pow()
torch.pow(input, exponent, *, out=None) → Tensor

[function]: when exponent is a scalar, the formula is out_i = input_i^exponent; when exponent is a tensor, the formula is out_i = input_i^(exponent_i).

• input(Tensor): the input tensor.
• exponent(float or tensor): the exponent value.

[Code]:

import torch

t = torch.tensor([1, 2, 3, 4])
print("t: {}".format(t))
t1 = torch.pow(t, 2)  # scalar exponent
print("t1: {}".format(t1))
exp = torch.arange(1, 5)
print("exp: {}".format(exp))
t2 = torch.pow(t, exp)  # tensor exponent, applied element-wise
print("t2: {}".format(t2))

[result]:

t: tensor([1, 2, 3, 4])
t1: tensor([ 1,  4,  9, 16])
exp: tensor([1, 2, 3, 4])
t2: tensor([  1,   4,  27, 256])

torch.pow()
torch.pow(self, exponent, *, out=None) → Tensor

[function]: the formula is out_i = self^(exponent_i).

• self(float): the scalar base of the power operation.
• exponent(tensor): the exponent tensor.

[Code]:

import torch

exp = torch.arange(1, 5)
print("exp: {}".format(exp))
base = 2  # scalar base
result = torch.pow(base, exp)
print("result: {}".format(result))

[result]:

exp: tensor([1, 2, 3, 4])
result: tensor([ 2,  4,  8, 16])

2.8 Exponential function

torch.exp()
torch.exp(input, *, out=None) → Tensor

[function]: the formula is out_i = e^(input_i).

• input(Tensor): the input tensor.

[Code]:

import torch
import math

t = torch.tensor([0, math.log(2.)])
print("t: {}".format(t))
result = torch.exp(t)
print("result: {}".format(result))

[result]:

t: tensor([0.0000, 0.6931])
result: tensor([1., 2.])

2.9 Logarithmic functions

torch.log()
torch.log(input, *, out=None) → Tensor

[function]: the formula is out_i = log_e(input_i).

• input(Tensor): the input tensor.

[Code]:

import torch
import math

t = torch.tensor([math.e, math.exp(2), math.exp(3)])
print("t: {}".format(t))
result = torch.log(t)
print("result: {}".format(result))

[result]:

t: tensor([ 2.7183,  7.3891, 20.0855])
result: tensor([1., 2., 3.])

torch.log2()
torch.log2(input, *, out=None) → Tensor

[function]: the formula is out_i = log_2(input_i).

• input(Tensor): the input tensor.

[Code]:

import torch

t = torch.tensor([2., 4., 8.])
print("t: {}".format(t))
result = torch.log2(t)
print("result: {}".format(result))

[result]:

t: tensor([2., 4., 8.])
result: tensor([1., 2., 3.])

torch.log10()
torch.log10(input, *, out=None) → Tensor

[function]: the formula is out_i = log_10(input_i).

• input(Tensor): the input tensor.

[Code]:

import torch

t = torch.tensor([10., 100., 1000.])
print("t: {}".format(t))
result = torch.log10(t)
print("result: {}".format(result))

[result]:

t: tensor([  10.,  100., 1000.])
result: tensor([1., 2., 3.])

2.10 Trigonometric functions

torch.sin()
torch.sin(input, *, out=None) → Tensor

[function]: the formula is out_i = sin(input_i).

• input(Tensor): the input tensor.

[Code]:

import torch
import math

t = torch.tensor([0., 1 / 6 * math.pi, 1 / 3 * math.pi, 1 / 2 * math.pi])
print("t: {}".format(t))
result = torch.sin(t)
print("result: {}".format(result))

[result]:

t: tensor([0.0000, 0.5236, 1.0472, 1.5708])
result: tensor([0.0000, 0.5000, 0.8660, 1.0000])

Similarly, the other trigonometric functions are available as follows (a short torch.atan2 sketch follows the list):

torch.cos(input, *, out=None) → Tensor
torch.tan(input, *, out=None) → Tensor
torch.asin(input, *, out=None) → Tensor   # arcsine
torch.acos(input, *, out=None) → Tensor   # arccosine
torch.atan(input, *, out=None) → Tensor   # arctangent
torch.atan2(input, other, *, out=None) → Tensor  # element-wise arctangent of input/other
torch.sinh(input, *, out=None) → Tensor # hyperbolic sine
torch.cosh(input, *, out=None) → Tensor # hyperbolic cosine
torch.tanh(input, *, out=None) → Tensor # hyperbolic tangent
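For instance, torch.atan2 returns the element-wise arctangent of input/other, taking the signs of both arguments into account to select the quadrant. A small illustrative sketch (the values are chosen arbitrarily, not from the original text):

import torch

y = torch.tensor([1., 1., -1.])
x = torch.tensor([1., -1., 1.])
print(torch.atan2(y, x))  # tensor([ 0.7854,  2.3562, -0.7854]), i.e. pi/4, 3*pi/4, -pi/4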

3 Linear regression

Linear regression is a method for analyzing the relationship between one variable (y) and another variable or variables (x). It can generally be written as y = wx + b. The goal of linear regression is to solve for the parameters w and b.

The solution of a linear regression proceeds in three steps:

(1) Determine the model: y = wx + b.

(2) Choose a loss function, typically the mean squared error (MSE): (1/m) Σ_{i=1}^{m} (y_i − ŷ_i)², where ŷ_i is the predicted value and y_i is the ground-truth value.

(3) Use gradient descent to compute the gradients (where α is the learning rate) and update the parameters: w = w − α * w.grad, b = b − α * b.grad.
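As a minimal sketch, these three steps can be written directly in PyTorch with autograd. The synthetic data, learning rate, and iteration count below are illustrative choices, not part of the original text:

import torch

# Synthetic data: y = 2x + 5 plus noise (illustrative)
x = torch.rand(20, 1) * 10
y = 2 * x + 5 + torch.randn(20, 1)

# (1) Determine the model: y = wx + b
w = torch.randn(1, requires_grad=True)
b = torch.zeros(1, requires_grad=True)

lr = 0.01  # learning rate alpha
for _ in range(1000):
    y_pred = w * x + b                    # forward pass
    loss = torch.mean((y - y_pred) ** 2)  # (2) MSE loss

    loss.backward()                       # compute w.grad and b.grad
    with torch.no_grad():                 # (3) gradient descent update
        w -= lr * w.grad
        b -= lr * b.grad
    w.grad.zero_()                        # clear gradients for the next step
    b.grad.zero_()

print("w = {:.3f}, b = {:.3f}, loss = {:.4f}".format(w.item(), b.item(), loss.item()))

After training, w and b should approach the values used to generate the data (here about 2 and 5).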
