# data set

Because of the epidemic I had no masks and could not go outside to take photos, so for this experiment the dataset was built from full-body images (front, left, right and back views) of heroes in a game: 4 heroes, 16 full-body images in total.

# SIFT descriptor

## 1. principle

When generating SIFT descriptors, only the distribution of pixels in the neighborhood of each feature point is considered (no global information is used). The main computations in this step are determining the orientation of the feature points and generating the feature descriptors.
Descriptor generation steps:
1. Construct the difference-of-Gaussians (DoG) scale-space images.
2. Search for local extrema (maxima and minima).
3. Accurately localize the extreme points.
4. Assign the main orientation of each feature point.
5. Construct the feature point descriptor.
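Steps 1-2 above can be sketched in a few lines of NumPy/SciPy. This is a simplified illustration, not PCV's implementation: real SIFT tests each point against 26 neighbours across adjacent scales, while this sketch only checks the 8 in-plane neighbours.

```python
# Minimal sketch of DoG construction and extrema search (not PCV's code).
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_stack(im, sigma0=1.6, k=2 ** 0.5, levels=4):
    """Blur the image at increasing scales and subtract neighbouring levels."""
    blurred = [gaussian_filter(im.astype(float), sigma0 * k ** i)
               for i in range(levels + 1)]
    return [blurred[i + 1] - blurred[i] for i in range(levels)]

def local_extrema(dog, thresh=0.03):
    """Crude extrema test: keep a pixel that is the max or min of its
    3x3 neighbourhood within its own DoG level and exceeds a contrast threshold."""
    pts = []
    for s, d in enumerate(dog):
        for y in range(1, d.shape[0] - 1):
            for x in range(1, d.shape[1] - 1):
                patch = d[y - 1:y + 2, x - 1:x + 2]
                v = d[y, x]
                if abs(v) > thresh and (v == patch.max() or v == patch.min()):
                    pts.append((x, y, s))
    return pts
```

The contrast threshold here plays the same role as the 0.1 threshold discussed in the analysis below: low-contrast candidates are discarded before the descriptor is ever built.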

## 2. code

```
# -*- coding: utf-8 -*-
from PIL import Image
from pylab import *
from PCV.localdescriptors import sift
from PCV.localdescriptors import harris

from matplotlib.font_manager import FontProperties
font = FontProperties(fname=r'c:\windows\fonts\SimSun.ttc', size=14)

imname = r'C:\Python\Pictrue\wangzhe11.jpg'
im = array(Image.open(imname).convert('L'))
sift.process_image(imname, 'empire.sift')
l1, d1 = sift.read_features_from_file('empire.sift')  # feature locations and descriptors

figure()
gray()
subplot(131)
sift.plot_features(im, l1, circle=False)
title(u'SIFT Features', fontproperties=font)
subplot(132)
sift.plot_features(im, l1, circle=True)
title(u'Circle SIFT Characteristic scale', fontproperties=font)

# Detect Harris corners
harrisim = harris.compute_harris_response(im)

subplot(133)
filtered_coords = harris.get_harris_points(harrisim, 6, 0.1)
imshow(im)
plot([p[1] for p in filtered_coords], [p[0] for p in filtered_coords], '*')
axis('off')
title(u'Feature points', fontproperties=font)

show()
```

## 3. results

Display of the interest points found by the SIFT descriptor:

## 4. analysis

Because the threshold in the code is 0.1, points with low brightness are filtered out directly, and the SIFT descriptor also eliminates points lying on edges. As a result, for heroes with flat modeling, or whose images have low brightness after conversion to grayscale, no feature points can be found, as shown in the figure below.

# Descriptors matching

## 1. principle

Matching in six steps:
1. Generate the difference-of-Gaussians (DoG) pyramid and construct the scale space.
2. Detect the extreme points in scale space (preliminary detection of key points).
3. Accurately localize the stable key points.
4. Assign orientation information to the stable key points.
5. Describe the key points.
6. Match the feature points.
The detailed principle is covered in the blog referenced below and is not repeated here.
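Step 6 can be illustrated with a minimal NumPy version of Lowe's ratio test. This is a hedged sketch, not PCV's code: PCV's `sift.match` compares normalized descriptors via angles between them, while this sketch uses plain Euclidean distances.

```python
# Sketch of descriptor matching with the ratio test (illustrative, not PCV's code).
import numpy as np

def match_ratio(d1, d2, ratio=0.8):
    """For each descriptor (row) in d1, find its two nearest neighbours in d2.
    Keep the match only if the best distance is clearly smaller than the
    second best; otherwise record -1 (ambiguous, rejected)."""
    matches = -np.ones(len(d1), dtype=int)
    for i, desc in enumerate(d1):
        dists = np.linalg.norm(d2 - desc, axis=1)  # distance to every d2 row
        order = np.argsort(dists)
        if dists[order[0]] < ratio * dists[order[1]]:
            matches[i] = order[0]
    return matches
```

The ratio test explains the "few but correct matches" behaviour seen in the results: when two candidate matches are almost equally good, both are thrown away rather than risking a wrong correspondence.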

## 2. code

```
from PIL import Image
from pylab import *
import sys
from PCV.localdescriptors import sift

if len(sys.argv) >= 3:
    im1f, im2f = sys.argv[1], sys.argv[2]
else:
    # im1f = '../data/sf_view1.jpg'
    # im2f = '../data/sf_view2.jpg'
    im1f = r'C:\Python\Pictrue\wangzhe\wangzhe1.jpg'
    im2f = r'C:\Python\Pictrue\wangzhe\wangzhe12.jpg'
    # im1f = '../data/climbing_1_small.jpg'
    # im2f = '../data/climbing_2_small.jpg'
im1 = array(Image.open(im1f))
im2 = array(Image.open(im2f))

sift.process_image(im1f, 'out_sift_1.txt')
l1, d1 = sift.read_features_from_file('out_sift_1.txt')  # locations and descriptors
figure()
gray()
subplot(121)
sift.plot_features(im1, l1, circle=False)

sift.process_image(im2f, 'out_sift_2.txt')
l2, d2 = sift.read_features_from_file('out_sift_2.txt')
subplot(122)
sift.plot_features(im2, l2, circle=False)

# matches = sift.match(d1, d2)
matches = sift.match_twosided(d1, d2)
print('{} matches'.format(len(matches.nonzero()[0])))

figure()
gray()
sift.plot_matches(im1, im2, l1, l2, matches, show_below=True)
show()
```

## 3. results

Successful match 1 (rotation):

Successful match 2 (full body and close-up):

Unsuccessful match:

It is worth mentioning that I also tried feature matching on two skins of the same hero that differ only slightly, but the matching results were unsatisfactory:

## 4. analysis

For the unsuccessful case in this section, heroes from the same series with similar skin art styles were deliberately chosen for feature matching. The experimental results show the following. Rotated images of the same hero match successfully, and the full-body and close-up images of the same hero also match successfully; note that since heroes have animated special effects, their poses may change subtly from moment to moment, so the full-body/close-up pair produces few matches, yet the match still succeeds. Feature matching between different heroes fails. Two similar skins of the same hero are lit differently and also produce few matches, which shows that lighting has a certain impact on SIFT feature matching.
In summary, SIFT matching is invariant to viewing angle, rotation and scale.

# Picture visual matching

## 1. principle

1. SIFT features are extracted from each image and saved to a .sift file with the same name as the image.
2. Connections between images are defined: two images are connected if they share matching local descriptors.
3. If the number of matches between two images is above a threshold (for example, 2), an edge is drawn between the corresponding image nodes.
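The thresholding in step 3 can be shown on a made-up 4x4 `matchscores` matrix. The numbers below are toy data, not real SIFT output.

```python
# Toy illustration of turning pairwise match counts into graph edges.
import numpy as np

def links(matchscores, threshold=2):
    """Return (i, j) image pairs whose match count exceeds the threshold."""
    n = len(matchscores)
    return [(i, j) for i in range(n) for j in range(i + 1, n)
            if matchscores[i][j] > threshold]

# Symmetric matrix of match counts between 4 images (made up for illustration).
scores = np.array([[0, 5, 1, 0],
                   [5, 0, 3, 0],
                   [1, 3, 0, 7],
                   [0, 0, 7, 0]])
print(links(scores))  # [(0, 1), (1, 2), (2, 3)]: three connected pairs
```

With `threshold=2`, images 0-1, 1-2 and 2-3 get edges, while the single accidental match between images 0 and 2 is ignored; raising the threshold prunes weaker links.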

## 2. code

```
# -*- coding: utf-8 -*-
from pylab import *
from PIL import Image
from PCV.localdescriptors import sift
from PCV.tools import imtools
import pydot
import os

os.environ["PATH"] += os.pathsep + 'E:/Graphviz/bin'

""" This is the example graph illustration of matching images from Figure 2-10. """

# path = "/FULLPATH/panoimages/"  # path to save thumbnails (pydot needs the full system path)
path = r'C:\Python\Pictrue\wangzhe'  # path to save thumbnails (pydot needs the full system path)

imlist = imtools.get_imlist(path)
nbr_images = len(imlist)

# extract features
featlist = [imname[:-3] + 'sift' for imname in imlist]
for i, imname in enumerate(imlist):
    sift.process_image(imname, featlist[i])

matchscores = zeros((nbr_images, nbr_images))

for i in range(nbr_images):
    for j in range(i, nbr_images):  # only compute upper triangle
        print('comparing ', imlist[i], imlist[j])
        l1, d1 = sift.read_features_from_file(featlist[i])
        l2, d2 = sift.read_features_from_file(featlist[j])
        matches = sift.match_twosided(d1, d2)
        nbr_matches = sum(matches > 0)
        print('number of matches = ', nbr_matches)
        matchscores[i, j] = nbr_matches
print("The match scores is: \n", matchscores)

# copy values
for i in range(nbr_images):
    for j in range(i + 1, nbr_images):  # no need to copy diagonal
        matchscores[j, i] = matchscores[i, j]

# visualization
threshold = 2  # min number of matches needed to create link

g = pydot.Dot(graph_type='graph')  # don't want the default directed graph

for i in range(nbr_images):
    for j in range(i + 1, nbr_images):
        if matchscores[i, j] > threshold:
            # first image in pair
            im = Image.open(imlist[i])
            im.thumbnail((100, 100))
            filename = os.path.join(path, str(i) + '.png')
            im.save(filename)  # need temporary files of the right size
            g.add_node(pydot.Node(str(i), fontcolor='transparent',
                                  shape='rectangle', image=filename))

            # second image in pair
            im = Image.open(imlist[j])
            im.thumbnail((100, 100))
            filename = os.path.join(path, str(j) + '.png')
            im.save(filename)  # need temporary files of the right size
            g.add_node(pydot.Node(str(j), fontcolor='transparent',
                                  shape='rectangle', image=filename))

            g.add_edge(pydot.Edge(str(i), str(j)))

g.write_png('lyc.png')
```

## 4. analysis

Given an input picture, it can be matched successfully against the dataset, but if the pictures are too large or too numerous, matching takes a very long time.

# Problems encountered and Solutions

## 1. The image sizes differ, so feature matching fails

Because the images were cropped, their sizes differed slightly, but the algorithm can only match images of the same size.
Solution: use Photoshop to resize all pictures in the dataset to the same size.
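Instead of editing each picture by hand, the resizing can also be scripted with Pillow. This is a sketch; the folder path and target size below are placeholders, and a plain `resize` stretches the image rather than preserving its aspect ratio.

```python
# Batch-resize every image in a folder to one common size (overwrites in place).
import os
from PIL import Image

def resize_all(folder, size=(400, 800)):
    """Resize all .jpg/.png files in `folder` to `size` and save them back."""
    for name in os.listdir(folder):
        if name.lower().endswith(('.jpg', '.png')):
            path = os.path.join(folder, name)
            Image.open(path).resize(size).save(path)
```

Run once on the dataset folder before feature extraction so that every pair fed to the matcher has identical dimensions.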

## 2. VLFeat cannot be used

After VLFeat was installed, the following error appeared:

The solution is as follows

Searching online (Baidu) showed that Graphviz should be installed first, and pydot afterwards.
Graphviz usage: