Get the weight or feature of a layer in the middle of Python


Question: given a trained network model, how do you inspect the weights of a particular intermediate layer, or look at the features (activations) that an intermediate layer produces?

1. Obtain the weights of a certain layer and save them to Excel

Take resnet18 for example:

```python
import torch
import pandas as pd
import numpy as np
import torchvision.models as models

resnet18 = models.resnet18(pretrained=True)
parm = {}
for name, parameters in resnet18.named_parameters():
    print(name, ':', parameters.size())
    parm[name] = parameters.detach().numpy()
```

The code above stores the parameters of each module in the `parm` dictionary. `parameters.detach().numpy()` converts the tensor variables into numpy arrays, which is convenient for writing them to a spreadsheet later. The output is:

```
conv1.weight : torch.Size([64, 3, 7, 7])
bn1.weight : torch.Size([64])
bn1.bias : torch.Size([64])
layer1.0.conv1.weight : torch.Size([64, 64, 3, 3])
layer1.0.bn1.weight : torch.Size([64])
layer1.0.bn1.bias : torch.Size([64])
layer1.0.conv2.weight : torch.Size([64, 64, 3, 3])
layer1.0.bn2.weight : torch.Size([64])
layer1.0.bn2.bias : torch.Size([64])
layer1.1.conv1.weight : torch.Size([64, 64, 3, 3])
layer1.1.bn1.weight : torch.Size([64])
layer1.1.bn1.bias : torch.Size([64])
layer1.1.conv2.weight : torch.Size([64, 64, 3, 3])
layer1.1.bn2.weight : torch.Size([64])
layer1.1.bn2.bias : torch.Size([64])
layer2.0.conv1.weight : torch.Size([128, 64, 3, 3])
layer2.0.bn1.weight : torch.Size([128])
layer2.0.bn1.bias : torch.Size([128])
layer2.0.conv2.weight : torch.Size([128, 128, 3, 3])
layer2.0.bn2.weight : torch.Size([128])
layer2.0.bn2.bias : torch.Size([128])
layer2.0.downsample.0.weight : torch.Size([128, 64, 1, 1])
layer2.0.downsample.1.weight : torch.Size([128])
layer2.0.downsample.1.bias : torch.Size([128])
layer2.1.conv1.weight : torch.Size([128, 128, 3, 3])
layer2.1.bn1.weight : torch.Size([128])
layer2.1.bn1.bias : torch.Size([128])
layer2.1.conv2.weight : torch.Size([128, 128, 3, 3])
layer2.1.bn2.weight : torch.Size([128])
layer2.1.bn2.bias : torch.Size([128])
layer3.0.conv1.weight : torch.Size([256, 128, 3, 3])
layer3.0.bn1.weight : torch.Size([256])
layer3.0.bn1.bias : torch.Size([256])
layer3.0.conv2.weight : torch.Size([256, 256, 3, 3])
layer3.0.bn2.weight : torch.Size([256])
layer3.0.bn2.bias : torch.Size([256])
layer3.0.downsample.0.weight : torch.Size([256, 128, 1, 1])
layer3.0.downsample.1.weight : torch.Size([256])
layer3.0.downsample.1.bias : torch.Size([256])
layer3.1.conv1.weight : torch.Size([256, 256, 3, 3])
layer3.1.bn1.weight : torch.Size([256])
layer3.1.bn1.bias : torch.Size([256])
layer3.1.conv2.weight : torch.Size([256, 256, 3, 3])
layer3.1.bn2.weight : torch.Size([256])
layer3.1.bn2.bias : torch.Size([256])
layer4.0.conv1.weight : torch.Size([512, 256, 3, 3])
layer4.0.bn1.weight : torch.Size([512])
layer4.0.bn1.bias : torch.Size([512])
layer4.0.conv2.weight : torch.Size([512, 512, 3, 3])
layer4.0.bn2.weight : torch.Size([512])
layer4.0.bn2.bias : torch.Size([512])
layer4.0.downsample.0.weight : torch.Size([512, 256, 1, 1])
layer4.0.downsample.1.weight : torch.Size([512])
layer4.0.downsample.1.bias : torch.Size([512])
layer4.1.conv1.weight : torch.Size([512, 512, 3, 3])
layer4.1.bn1.weight : torch.Size([512])
layer4.1.bn1.bias : torch.Size([512])
layer4.1.conv2.weight : torch.Size([512, 512, 3, 3])
layer4.1.bn2.weight : torch.Size([512])
layer4.1.bn2.bias : torch.Size([512])
fc.weight : torch.Size([1000, 512])
fc.bias : torch.Size([1000])
```

Individual kernels can then be indexed directly:
```python
parm['layer1.0.conv1.weight'][0, 0, :, :]
```

Output is:

```
array([[ 0.05759342, -0.09511436, -0.02027232],
       [-0.07455588, -0.799308  , -0.21283598],
       [ 0.06557069, -0.09653367, -0.01211061]], dtype=float32)
```

The following function saves all the parameters of a given layer to a spreadsheet, keeping each convolution kernel's shape: a 3 × 3 kernel is still written as a 3 × 3 block.

```python
def parm_to_excel(excel_name, key_name, parm):
    with pd.ExcelWriter(excel_name) as writer:
        # parm[key_name] is a numpy array of shape
        # (output_channels, input_channels, filter_size, filter_size),
        # so use .shape rather than the tensor method .size()
        output_num, input_num, filter_size, _ = parm[key_name].shape
        for i in range(output_num):
            for j in range(input_num):
                data = pd.DataFrame(parm[key_name][i, j, :, :])
                # each kernel block occupies filter_size + 1 rows
                # (one header row plus the kernel values)
                data.to_excel(writer, index=False, header=True,
                              startrow=i * (filter_size + 1),
                              startcol=j * filter_size)
```
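The row/column arithmetic above can be checked without touching Excel at all. A small sketch with a made-up weight tensor (2 output channels, 2 input channels, 3 × 3 kernels, mirroring the layout assumed by `parm_to_excel`):

```python
import numpy as np

# Hypothetical weight tensor in (output_num, input_num, filter_size, filter_size)
# layout, matching what parm_to_excel expects.
w = np.arange(2 * 2 * 3 * 3, dtype=np.float32).reshape(2, 2, 3, 3)

output_num, input_num, filter_size, _ = w.shape
for i in range(output_num):
    for j in range(input_num):
        start_row = i * (filter_size + 1)   # one extra row per block for the header
        start_col = j * filter_size
        print(f"kernel ({i},{j}) -> startrow={start_row}, startcol={start_col}")
```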

Because many values in the weight matrix are very small, you can keep only the values above a fixed threshold and write all remaining weights to Excel:

```python
counter = 1
with pd.ExcelWriter('test1.xlsx') as writer:
    for key in parm.keys():
        data = parm[key].reshape(-1, 1)
        data = data[data > 0.001]        # keep only weights above the threshold
        data = pd.DataFrame(data, columns=[key])
        data.to_excel(writer, index=False, startcol=counter)
        counter += 1
```
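Note that boolean masking such as `data[data > 0.001]` flattens the result to a 1-D array, and that the `> 0.001` comparison is signed, so large *negative* weights are discarded too (use `np.abs(data) > 0.001` if that is not intended). A minimal numpy sketch with made-up values:

```python
import numpy as np

# Made-up weights: one value below the 0.001 threshold, one negative value.
w = np.array([[0.05, 0.0004], [-0.2, 0.3]], dtype=np.float32)

col = w.reshape(-1, 1)           # column vector, shape (4, 1)
kept = col[col > 0.001]          # boolean mask -> 1-D array of surviving values
print(kept)                      # only 0.05 and 0.3 survive; -0.2 is dropped
```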

2. Obtain the features of an intermediate layer

Rewrite the forward pass as a function that returns the output of the layer you need:

```python
import torch.nn.functional as F
import torchvision.models as models

def resnet_cifar(net, input_data):
    x = net.conv1(input_data)
    x = net.bn1(x)
    x = F.relu(x)
    x = net.layer1(x)
    x = net.layer2(x)
    x = net.layer3(x)
    # extract the output of the first conv layer of the first block of layer4
    x = net.layer4[0].conv1(x)
    x = x.view(x.shape[0], -1)
    return x

model = models.resnet18()
x = resnet_cifar(model, input_data)
```
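An alternative that avoids rewriting the forward pass is a forward hook: `register_forward_hook` captures a layer's output during an ordinary forward call, so the model's own logic stays untouched. A minimal sketch on a small stand-in model (the `Sequential` model and the name `'conv2'` here are made up for illustration; on resnet18 you would hook e.g. `model.layer4[0].conv1` instead):

```python
import torch
import torch.nn as nn

# Small stand-in model; with resnet18 you would hook model.layer4[0].conv1.
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv2d(8, 16, kernel_size=3, padding=1),
)

features = {}

def save_output(name):
    def hook(module, inputs, output):
        features[name] = output.detach()   # store the layer's activation
    return hook

handle = model[2].register_forward_hook(save_output('conv2'))

with torch.no_grad():
    _ = model(torch.randn(1, 3, 32, 32))   # normal forward pass fills `features`

handle.remove()                            # detach the hook when done
print(features['conv2'].shape)             # torch.Size([1, 16, 32, 32])
```

Hooks also work on pretrained models without any subclassing, which makes them convenient for quick feature inspection.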

2 December 2019, 20:51 | Views: 9095
