[image processing] A Python method for obtaining image mean and variance


In domain adaptation, or when processing data sets, we often need to analyze the mean and variance of images or of an entire data set.

By analyzing the mean and variance, we can efficiently characterize the data distribution, which is especially useful for large data sets.

This post therefore records how to obtain the image mean and variance, and illustrates the algorithm and its results through some simple experiments.

1. Basic background

1) Mean and variance of pictures
A three-channel RGB picture can be represented as a point in the space $\mathbb{R}^{C \times H \times W}$, where:
C is the number of channels; H is the height of the picture; W is the width of the picture
This picture can be represented by a tensor of size I = [C, H, W]
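
For instance, a real image can be loaded into this [C, H, W] layout with torchvision; a minimal sketch, assuming torchvision is installed (the file name is a hypothetical placeholder):

from PIL import Image
from torchvision import transforms

img = Image.open("example.jpg")        # hypothetical image path
t = transforms.ToTensor()(img)         # tensor of shape [C, H, W], scaled to [0, 1]
print(t.size())                        # e.g. torch.Size([3, 224, 224])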

Mean value
Formula: $\mu = \frac{1}{H*W}\sum_{0<i\le H,\,0<j\le W} I_{i,j}$
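
The variance is defined analogously over the same spatial positions (note that PyTorch's var(), used below, defaults to the unbiased estimator, which divides by $H*W-1$ rather than $H*W$):

$\sigma^2 = \frac{1}{H*W}\sum_{0<i\le H,\,0<j\le W}(I_{i,j}-\mu)^2$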

2) Relationship between the mean/variance and the distribution of a data set
Reference article: CSDN > "How to perform data analysis and statistics: performing statistical analysis on an unfamiliar data set"
Normality check: whether the data set follows a normal distribution (see the sketch after this list);
Categorical variables: whether the points in the data set are evenly distributed;
Data association: determine the relationships between variables and remove highly correlated ones;
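
As an example of the normality check above, here is a minimal sketch using scipy.stats.normaltest; scipy and the synthetic per-image means are assumptions for illustration, not part of the original:

import numpy as np
from scipy import stats

# Hypothetical per-image mean values collected over a data set
per_image_means = np.random.normal(loc=0.45, scale=0.05, size=500)

# D'Agostino-Pearson test; null hypothesis: the sample comes from a normal distribution
statistic, p_value = stats.normaltest(per_image_means)
print("p-value: {:.4f}".format(p_value))
if p_value > 0.05:
    print("cannot reject normality at the 5% level")
else:
    print("data set is likely not normally distributed")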

2. Code and experiment

1) Input initialization

import torch
# Initialize a tensor
# a : [N, C, H, W]
a = torch.ones([2, 2, 3, 3])
a[:, :, :, 2] *= 4  # set the last column of every channel to 4

result

tensor([[[[1., 1., 4.],
          [1., 1., 4.],
          [1., 1., 4.]],
        ......
         [[1., 1., 4.],
          [1., 1., 4.],
          [1., 1., 4.]]]])

2) Get variance

x = a ; eps = 1e-5  # eps avoids a zero variance (and a later divide-by-zero)
size = x.size()
assert (len(size) == 4)  # expect a 4-D tensor [N, C, H, W]
N, C = size[:2]
# Flatten H and W into a single dimension: [N, C, H*W]
tmp_var = x.contiguous().view(N, C, -1)
var = x.contiguous().view(N, C, -1).var(dim=2) + eps
# Record intermediate data tmp_var
print("tmp_var : {}, {}".format(tmp_var, tmp_var.size()))
# Record variance var
print("var : {}, {}".format(var, var.size()))

result

tmp_var : tensor([[[1., 1., 4., 1., 1., 4., 1., 1., 4.],
         [1., 1., 4., 1., 1., 4., 1., 1., 4.]],

        [[1., 1., 4., 1., 1., 4., 1., 1., 4.],
         [1., 1., 4., 1., 1., 4., 1., 1., 4.]]]), torch.Size([2, 2, 9])
var : tensor([[2.2500, 2.2500],
        [2.2500, 2.2500]]), torch.Size([2, 2])

The batch size N and the number of channels C remain unchanged; the height H and width W dimensions are flattened into a single one-dimensional vector by view(), and the variance is then computed along that flattened dimension (dim=2).
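
As a cross-check, recent PyTorch versions let var() take a tuple of dimensions, so the same per-channel variance can be computed without the explicit view(); this sketch continues from the snippet above and assumes such a version:

# Same per-channel variance, computed directly over the H and W dimensions
var_direct = a.var(dim=(2, 3)) + eps
print(torch.allclose(var, var_direct))  # True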

3) Obtain standard deviation and mean

# contiguous() : makes x's memory layout contiguous
# view() can only be applied to a tensor with contiguous memory
std = var.sqrt().view(N, C, 1, 1)  # reshape to [N, C, 1, 1] for later broadcasting
mean = x.contiguous().view(N, C, -1).mean(dim=2).view(N, C, 1, 1)
# Record standard deviation std
print("std : {}, {}".format(std, std.size()))
# Record mean
print("mean : {}, {}".format(mean, mean.size()))

result

std : tensor([[[[1.5000]],
         [[1.5000]]],
        [[[1.5000]],
         [[1.5000]]]]), torch.Size([2, 2, 1, 1])
mean : tensor([[[[2.]],
         [[2.]]],
        [[[2.]],
         [[2.]]]]), torch.Size([2, 2, 1, 1])
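
PyTorch also offers torch.std_mean() to compute both statistics in one call; a minimal sketch, assuming a version where it accepts a tuple of dimensions (the values differ slightly from std above, since eps was added to the variance there):

# std and mean in one pass; keepdim=True keeps the [N, C, 1, 1] shape
std2, mean2 = torch.std_mean(a, dim=(2, 3), keepdim=True)
print(std2.size(), mean2.size())  # torch.Size([2, 2, 1, 1]) twice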

4) Complete code

source: amazon-research/crossnorm-selfnorm > models > cnsn.py

def calc_ins_mean_std(x, eps=1e-5):
    """extract feature map statistics"""
    # eps is a small value added to the variance to avoid divide-by-zero.
    size = x.size()
    assert (len(size) == 4)  # expect a 4-D tensor [N, C, H, W]
    N, C = size[:2]
    var = x.contiguous().view(N, C, -1).var(dim=2) + eps
    # contiguous() : makes x's memory layout contiguous
    # view() can only be applied to a tensor with contiguous memory
    std = var.sqrt().view(N, C, 1, 1)
    mean = x.contiguous().view(N, C, -1).mean(dim=2).view(N, C, 1, 1)
    return mean, std
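
A typical use of these statistics is instance-level normalization of a feature map; the normalization step in this sketch is an illustrative assumption, not part of the original function:

x = torch.randn(2, 3, 8, 8)        # dummy feature map [N, C, H, W]
mean, std = calc_ins_mean_std(x)
x_norm = (x - mean) / std          # broadcasts over the H and W dimensions
print(x_norm.mean(dim=(2, 3)))     # approximately 0 per instance and channel
print(x_norm.std(dim=(2, 3)))      # approximately 1 per instance and channel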

3. Summary

The algorithm and code here are very simple, but the data distribution is an important part of image processing and data-set processing. Working with these distribution statistics enables a variety of SOTA applications, such as style transfer and domain adaptation, so they deserve attention!

Tags: neural networks, PyTorch, Deep Learning
