SIFT descriptors and feature matching -- Python implementation


Data set

Because of the epidemic I had no masks and could not go outside, so for this experiment I took the front, left, right and back full-body images of heroes from a game as the data set: 4 heroes, 16 full-body images in total.

SIFT descriptor

1. principle

When a SIFT descriptor is generated, only the distribution of pixels in the neighbourhood of the feature point is considered (no global information is used). The main work in this step is determining the orientation of each feature point and generating the feature descriptor.
Descriptor generation steps (a small sketch of steps 1 and 2 follows the list):
1. Construct the difference-of-Gaussian (DoG) scale-space images.
2. Search for local extrema in the DoG space.
3. Accurately localize the extreme points.
4. Select the dominant orientation of each feature point.
5. Construct the feature point descriptor.
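
To make steps 1 and 2 more concrete, here is a minimal, unoptimised sketch of a DoG construction and extremum search written with NumPy/SciPy. It is only an illustration under simplified assumptions (the sigma values, the contrast threshold and the 8-bit grayscale input are illustrative choices), not the implementation used by PCV/VLFeat.

# -*- coding: utf-8 -*-
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_extrema(im, sigmas=(1.6, 2.26, 3.2, 4.53), thresh=0.03):
    """Toy version of steps 1-2: build a DoG stack and keep 3x3x3 extrema."""
    im = im.astype(float) / 255.0                       # assume an 8-bit grayscale image
    # step 1: blur at increasing scales and subtract neighbouring levels
    blurred = [gaussian_filter(im, s) for s in sigmas]
    dogs = np.stack([b2 - b1 for b1, b2 in zip(blurred[:-1], blurred[1:])])
    # step 2: keep pixels that are the max or min of their 3x3x3 neighbourhood
    points = []                                         # (row, col, scale index)
    for s in range(1, dogs.shape[0] - 1):               # pure-Python loops: slow, illustration only
        for r in range(1, dogs.shape[1] - 1):
            for c in range(1, dogs.shape[2] - 1):
                patch = dogs[s - 1:s + 2, r - 1:r + 2, c - 1:c + 2]
                v = dogs[s, r, c]
                if abs(v) > thresh and (v == patch.max() or v == patch.min()):
                    points.append((r, c, s))
    return points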

2. code

# -*- coding: utf-8 -*-
from PIL import Image
from pylab import *
from PCV.localdescriptors import sift
from PCV.localdescriptors import harris

# Add Chinese font support
from matplotlib.font_manager import FontProperties
font = FontProperties(fname=r'c:\windows\fonts\SimSun.ttc', size=14)

imname = r'C:\Python\Pictrue\wangzhe11.jpg'
im = array(Image.open(imname).convert('L'))
sift.process_image(imname, 'empire.sift')
l1, d1 = sift.read_features_from_file('empire.sift')

figure()
gray()
subplot(131)
sift.plot_features(im, l1, circle=False)
title(u'SIFT Features',fontproperties=font)
subplot(132)
sift.plot_features(im, l1, circle=True)
title(u'Circle SIFT Characteristic scale',fontproperties=font)

# Detect harris corner
harrisim = harris.compute_harris_response(im)

subplot(133)
filtered_coords = harris.get_harris_points(harrisim, 6, 0.1)
imshow(im)
plot([p[1] for p in filtered_coords], [p[0] for p in filtered_coords], '*')
axis('off')
title(u'Feature points',fontproperties=font)

show()

3. results

Interest points detected by the SIFT descriptor:

4. analysis

Because the threshold in the code is 0.1, points with low brightness are filtered out directly, and the SIFT detector also discards points lying on edges. As a result, heroes with flat modelling, or whose images have low brightness after conversion to grayscale, yield hardly any feature points, as shown in the figure below. A small threshold experiment is sketched below.
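
If the 0.1 threshold really is filtering too aggressively, lowering the threshold argument of harris.get_harris_points in the script above keeps weaker corner responses. A small hedged experiment, reusing harrisim from that script; 0.05 and 0.01 are illustrative values:

# reuse `harrisim` from the script above and count points at lower thresholds
for t in (0.1, 0.05, 0.01):
    coords = harris.get_harris_points(harrisim, 6, t)
    print('threshold %.2f -> %d corner points' % (t, len(coords)))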

Descriptor matching

1. principle

Matching involves six steps (a simplified matching sketch follows the link below):
1. Generate the difference-of-Gaussian (DoG) pyramid and construct the scale space.
2. Detect scale-space extreme points (preliminary key point candidates).
3. Accurately localize the stable key points.
4. Assign orientation information to the stable key points.
5. Describe the key points.
6. Match the feature points.
The detailed theory is explained in the blog post below and is not repeated here.

https://www.learnopencv.com/histogram-of-oriented-gradients/
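
As a simplified illustration of step 6 only (not PCV's internal code), the sketch below mimics the idea behind sift.match_twosided: for each descriptor, take the nearest neighbour by angular distance, keep it only if it passes Lowe's ratio test and if the match is symmetric. It assumes d1 and d2 are descriptor arrays as returned by sift.read_features_from_file; unmatched descriptors get index -1.

import numpy as np

def match(d1, d2, ratio=0.6):
    # normalise descriptors and use the angle between them as the distance
    d1 = d1 / np.linalg.norm(d1, axis=1, keepdims=True)
    d2 = d2 / np.linalg.norm(d2, axis=1, keepdims=True)
    dist = np.arccos(np.clip(d1.dot(d2.T), -1, 1))
    matches = -np.ones(len(d1), dtype=int)
    for i in range(len(d1)):
        order = np.argsort(dist[i])
        # Lowe's ratio test: best match must be clearly better than the second best
        if dist[i, order[0]] < ratio * dist[i, order[1]]:
            matches[i] = order[0]
    return matches

def match_twosided(d1, d2, ratio=0.6):
    m12, m21 = match(d1, d2, ratio), match(d2, d1, ratio)
    for i, j in enumerate(m12):
        if j >= 0 and m21[j] != i:      # discard matches that are not symmetric
            m12[i] = -1
    return m12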

2. code

from PIL import Image
from pylab import *
import sys
from PCV.localdescriptors import sift


if len(sys.argv) >= 3:
  im1f, im2f = sys.argv[1], sys.argv[2]
else:
#  im1f = '../data/sf_view1.jpg'
#  im2f = '../data/sf_view2.jpg'
  im1f = r'C:\Python\Pictrue\wangzhe\wangzhe1.jpg'
  im2f = r'C:\Python\Pictrue\wangzhe\wangzhe12.jpg'
#  im1f = '../data/climbing_1_small.jpg'
#  im2f = '../data/climbing_2_small.jpg'
im1 = array(Image.open(im1f))
im2 = array(Image.open(im2f))

sift.process_image(im1f, 'out_sift_1.txt')
l1, d1 = sift.read_features_from_file('out_sift_1.txt')
figure()
gray()
subplot(121)
sift.plot_features(im1, l1, circle=False)

sift.process_image(im2f, 'out_sift_2.txt')
l2, d2 = sift.read_features_from_file('out_sift_2.txt')
subplot(122)
sift.plot_features(im2, l2, circle=False)

#matches = sift.match(d1, d2)
matches = sift.match_twosided(d1, d2)
print('{} matches'.format(len(matches.nonzero()[0])))

figure()
gray()
sift.plot_matches(im1, im2, l1, l2, matches, show_below=True)
show()

3. results

Successful match 1 (rotation):

Successful match 2 (full body vs. close-up):

Unsuccessful match:

It is worth mentioning that I also ran feature matching on two skins of the same hero that differ only slightly, and the matching results were still not satisfactory:

4. analysis

For the unsuccessful case in this section, heroes from the same series with a similar skin art style were deliberately chosen for feature matching. The experimental results show the following. The same hero is matched successfully after rotation. The full-body and close-up images of the same hero are also matched successfully; because heroes have animated special effects, their shape can change subtly from moment to moment, so the full-body/close-up pair produces relatively few matches, yet the match still succeeds. Feature matching between different heroes fails. Two similar skins of the same hero are lit differently and also produce few matches, so lighting clearly has some effect on SIFT feature matching.
To sum up, the matching is invariant to viewing angle and rotation, as well as to scale.

Image match visualization

1. principle

1. Run SIFT feature extraction on each image and save the features in a .sift file with the same name as the image.
2. Define connections between images: two images are connected if they share matching local descriptors.
3. If the number of matches between two images is above a threshold (for example, 2), join the corresponding image nodes with an edge (an alternative sketch of this rule appears after the full script below).

2. code

# -*- coding: utf-8 -*-
from pylab import *
from PIL import Image
from PCV.localdescriptors import sift
from PCV.tools import imtools
import pydot
import os

os.environ["PATH"] += os.pathsep + 'E:/Graphviz/bin'

""" This is the example graph illustration of matching images from Figure 2-10.
To download the images, see ch2_download_panoramio.py."""

#download_path = "panoimages"  # set this to the path where you downloaded the panoramio images
#path = "/FULLPATH/panoimages/"  # path to save thumbnails (pydot needs the full system path)

download_path = r'C:\Python\Pictrue\wangzhe'  # set this to the path where the data set images are stored
path = 'C:/Python/Pictrue/wangzhe/'  # path to save thumbnails (pydot needs the full system path, note the trailing slash)

# list of downloaded filenames
imlist = imtools.get_imlist(download_path)
nbr_images = len(imlist)

# extract features
featlist = [imname[:-3] + 'sift' for imname in imlist]
for i, imname in enumerate(imlist):
    sift.process_image(imname, featlist[i])

matchscores = zeros((nbr_images, nbr_images))

for i in range(nbr_images):
    for j in range(i, nbr_images):  # only compute upper triangle
        print('comparing', imlist[i], imlist[j])
        l1, d1 = sift.read_features_from_file(featlist[i])
        l2, d2 = sift.read_features_from_file(featlist[j])
        matches = sift.match_twosided(d1, d2)
        nbr_matches = sum(matches > 0)
        print('number of matches =', nbr_matches)
        matchscores[i, j] = nbr_matches
print "The match scores is: \n", matchscores

# copy values
for i in range(nbr_images):
    for j in range(i + 1, nbr_images):  # no need to copy diagonal
        matchscores[j, i] = matchscores[i, j]

#visualization

threshold = 2  # min number of matches needed to create link

g = pydot.Dot(graph_type='graph')  # don't want the default directed graph

for i in range(nbr_images):
    for j in range(i + 1, nbr_images):
        if matchscores[i, j] > threshold:
            # first image in pair
            im = Image.open(imlist[i])
            im.thumbnail((100, 100))
            filename = path + str(i) + '.png'
            im.save(filename)  # need temporary files of the right size
            g.add_node(pydot.Node(str(i), fontcolor='transparent', shape='rectangle', image=filename))

            # second image in pair
            im = Image.open(imlist[j])
            im.thumbnail((100, 100))
            filename = path + str(j) + '.png'
            im.save(filename)  # need temporary files of the right size
            g.add_node(pydot.Node(str(j), fontcolor='transparent', shape='rectangle', image=filename))

            g.add_edge(pydot.Edge(str(i), str(j)))
g.write_png('lyc.png')
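
As an alternative sketch of step 3 of the principle above (not part of the original script), the same thresholding rule can be expressed with networkx, which also makes it easy to print the groups of connected images. It reuses matchscores, imlist, nbr_images and threshold from the script above; networkx is an extra dependency not used in the original.

import networkx as nx

G = nx.Graph()
G.add_nodes_from(range(nbr_images))
for i in range(nbr_images):
    for j in range(i + 1, nbr_images):
        if matchscores[i, j] > threshold:   # same rule as the pydot version above
            G.add_edge(i, j)

# each connected component should correspond to one hero
for group in nx.connected_components(G):
    print([imlist[k] for k in group])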

3. results

4. analysis

Given a query picture, a match can be found successfully within the data set, but if the pictures are too large, or there are too many of them, matching takes a very long time; one way to mitigate this is sketched below.
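
One way to shorten the matching time (a hedged sketch, not part of the original experiment) is to downscale every image with PIL before running sift.process_image, so far fewer key points are extracted; the 600x600 bound is an illustrative value, and the originals are overwritten.

import os
from PIL import Image

def shrink_images(folder, max_size=(600, 600)):
    for name in os.listdir(folder):
        if name.lower().endswith('.jpg'):
            p = os.path.join(folder, name)
            im = Image.open(p)
            im.thumbnail(max_size)   # shrinks in place, keeps the aspect ratio
            im.save(p)               # overwrites the original file

shrink_images(r'C:\Python\Pictrue\wangzhe')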

Problems encountered and solutions

1. Images of different sizes could not be matched

Because the images were cropped, their sizes differ slightly, and the matching script only worked when both images had the same size.
Solution: use Photoshop to bring the pictures in the data set to the same size (a Python alternative is sketched below).
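
Instead of Photoshop, the data set can also be brought to a single size with PIL. A minimal sketch, assuming all images sit in one folder; the 400x800 target size is illustrative, not the size actually used in the experiment, and the files are overwritten in place.

import os
from PIL import Image

folder = r'C:\Python\Pictrue\wangzhe'
target = (400, 800)                        # (width, height), illustrative
for name in os.listdir(folder):
    if name.lower().endswith('.jpg'):
        p = os.path.join(folder, name)
        Image.open(p).resize(target).save(p)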

2. VLFeat could not be used

After installing VLFeat, the following error appeared:

The solution is as follows:

3. Error when downloading and installing pydot

After searching on Baidu, I found that Graphviz should be installed first and pydot afterwards.
Graphviz can be downloaded here:

https://graphviz.gitlab.io/_pages/Download/Download_windows.html

I then tried many methods that did not work; in the end the following steps succeeded (a quick verification sketch follows the list):
1. Uninstall everything that was installed before.
2. Change the download source.
3. Download Graphviz in Navigator.
4. Configure the Graphviz environment variables.
5. pip install pydot
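
After step 4, the installation can be verified with a tiny script (a hedged check that reuses the pydot calls from the visualization script above; 'E:/Graphviz/bin' is the path used there):

import os
import pydot

os.environ["PATH"] += os.pathsep + 'E:/Graphviz/bin'   # the Graphviz bin directory
g = pydot.Dot(graph_type='graph')
g.add_edge(pydot.Edge('a', 'b'))
g.write_png('check.png')   # succeeds only if pydot can find the Graphviz 'dot' executable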
