PaddleHub face keypoint detection: generate Crayon Xiaoxin's distant cousin with one click

Find Crayon Xiaoxin's distant cousin with one click

Crayon Xiaoxin (Crayon Shin-chan) is part of many people's childhood memories. Who doesn't love his signature thick eyebrows and round little face? So next, let's use Brother Chen as our model and recreate Crayon Xiaoxin's distant cousin: Crayon Xiaochen!

First, let's take a look at Crayon Xiaochen!

Follow my steps to find Crayon Xiaoxin's distant cousin. Here we go! (Steps 2, 3 and 4 walk through the process; for a one-click run, skip to Part 5 after landing on the Crayon Continent in Step 1.)

1. Landing on the Crayon Continent (environment setup)

!pip install --upgrade pip
!pip install opencv-python==4.5.4.60
!pip install paddlehub==2.1.1

2. Face keypoint detection with PaddleHub

Face keypoint detection is a key step in face recognition and analysis, and a prerequisite for many other face-related tasks such as automatic face recognition, expression analysis, 3D face reconstruction and 3D animation. The model in this PaddleHub Module is converted from https://github.com/lsy17096535/face-landmark and supports detecting multiple faces in the same picture. The goal of this step is to obtain the coordinates of the 68 facial keypoints, as shown in the figure below. Once we have those 68 coordinates, drawing the crayon eyebrows and generating the puffy face becomes much easier.


import cv2
import paddlehub as hub
import matplotlib.pyplot as plt 
import matplotlib.image as mpimg
import numpy as np
import math
from PIL import Image
src_img = cv2.imread('example.jpg')

# Load the model and predict
module = hub.Module(name="face_landmark_localization")
result = module.keypoint_detection(images=[src_img])

tmp_img = src_img.copy()
for index, point in enumerate(result[0]['data'][0]):
	# cv2.putText(img, str(index), (int(point[0]), int(point[1])), cv2.FONT_HERSHEY_COMPLEX, 3, (0,0,255), -1)
	cv2.circle(tmp_img, (int(point[0]), int(point[1])), 2, (0, 0, 255), -1)

res_img_path = 'face_landmark.jpg'
cv2.imwrite(res_img_path, tmp_img)

img = mpimg.imread(res_img_path) 
# Display the prediction results of 68 key points (if the key point visualization results are not displayed, please run this cell again)
plt.figure(figsize=(10,10))
plt.imshow(img) 
plt.axis('off') 
plt.show()
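
For reference, `keypoint_detection()` returns a list with one dict per input image, and `result[i]['data']` holds one 68x2 list of [x, y] coordinates per detected face. The stand-in below uses fabricated coordinates (only the structure matters) to show the same extraction used in the script, without needing the model:

```python
import numpy as np

# Hypothetical stand-in for the structure keypoint_detection() returns:
# a list with one dict per input image, where result[i]['data'] holds
# one 68x2 list of [x, y] coordinates per detected face.
result = [{'data': [[[10.0 + k, 20.0 + k] for k in range(68)]]}]

# Same extraction as in the script above
landmarks = np.array(result[0]['data'][0], dtype='int')
print(landmarks.shape)  # (68, 2)
```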

3. Crayon eyebrows

In the previous step we obtained the coordinates of the 68 facial keypoints. Among them, points 18-22 and 23-27 (1-indexed) are the eyebrows. To get Crayon Xiaoxin's thick eyebrows, we simply connect the eyebrow points with line segments of an appropriate width.
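
Note that the 68-point scheme numbers landmarks from 1 while numpy rows are 0-indexed, so points 18-22 and 23-27 map to array rows 17-21 and 22-26. A tiny stand-in array makes the index arithmetic concrete:

```python
import numpy as np

# Stand-in landmark table: row i holds point number i + 1, so slicing by
# 0-indexed rows should recover the 1-indexed point numbers 18-22 and 23-27.
landmarks = np.arange(1, 69)
brow_a = landmarks[17:22]
brow_b = landmarks[22:27]
print(brow_a)  # [18 19 20 21 22]
print(brow_b)  # [23 24 25 26 27]
```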

This is easy to implement with OpenCV's line() function.

def thick_eyebrows(image, face_landmark, width):
	# 1-indexed points 18-22 and 23-27 are the eyebrows, i.e. array rows 17-21 and 22-26;
	# cv2.line expects plain int point tuples, so convert each landmark row
	for i in range(18 - 1, 22 - 1):
		cv2.line(image, tuple(face_landmark[i]), tuple(face_landmark[i + 1]), (0, 0, 0), width)
	for i in range(23 - 1, 27 - 1):
		cv2.line(image, tuple(face_landmark[i]), tuple(face_landmark[i + 1]), (0, 0, 0), width)
	return image

# Extract the face key point coordinates
face_landmark = np.array(result[0]['data'][0], dtype='int')
# Draw Crayon Xiaoxin's eyebrows
width = 8
src_img = thick_eyebrows(src_img, face_landmark, width)
cv2.imwrite('thick_eyebrows.jpg', src_img)


img = mpimg.imread('thick_eyebrows.jpg') 
# Show crayon eyebrows
plt.figure(figsize=(10,10))
plt.imshow(img) 
plt.axis('off') 
plt.show()

4. Puff up the face

Here we use the image local translation warp algorithm. The idea: a deformation mapping takes each pre-deformation coordinate to its post-deformation coordinate. The mapping is the critical part, since different mappings produce different effects; translation, scaling and rotation each correspond to a different transformation formula. In practice, the inverse transform is used: for each post-deformation coordinate, the inverse formula computes the corresponding pre-deformation coordinate, the RGB value there is obtained by interpolation, and that value becomes the pixel value at the post-deformation coordinate. Only this backward mapping guarantees that the deformed image is continuous and free of holes.
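
As a sketch of the mapping implemented below: for a warp circle of radius r centred at the start point c and pulled toward the end point m, each output pixel x inside the circle samples the source coordinate u (note the squared ratio, matching the code):

```latex
u = x + \left( \frac{r^2 - \lVert x - c \rVert^2}{r^2 - \lVert x - c \rVert^2 + \lVert m - c \rVert^2} \right)^{2} (m - c)
```

Outside the circle the image is untouched, and the displacement falls off smoothly toward the circle's boundary, where the ratio goes to zero.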

# Fatten the face
def fat_face(image, face_landmark):
    # Index 30 (1-indexed point 31) is the nose tip
    end_point = face_landmark[30]

    # Fatten the left cheek; the distance from point 3 to point 5 is used as the warp radius
    dist_left = np.linalg.norm(face_landmark[3] - face_landmark[5])
    image = local_traslation_warp(image, face_landmark[3], end_point, dist_left)

    # Fatten the right cheek; the distance from point 13 to point 15 is used as the warp radius
    dist_right = np.linalg.norm(face_landmark[13] - face_landmark[15])
    image = local_traslation_warp(image, face_landmark[13], end_point, dist_right)
    return image
# Local translation algorithm
def local_traslation_warp(image, start_point, end_point, radius):
	radius_square = math.pow(radius, 2)
	image_cp = image.copy()

	dist_se = math.pow(np.linalg.norm(end_point - start_point), 2)
	height, width, channel = image.shape
	for i in range(width):
		for j in range(height):
			# Cheap rejection first: skip points outside the bounding square
			# of the deformation circle centred on start_point
			if math.fabs(i - start_point[0]) > radius or math.fabs(j - start_point[1]) > radius:
				continue

			distance = (i - start_point[0]) * (i - start_point[0]) + (j - start_point[1]) * (j - start_point[1])

			if distance < radius_square:
				# Back-map (i, j) to its source coordinate: this is the
				# squared-ratio term of the local translation formula
				ratio = (radius_square - distance) / (radius_square - distance + dist_se)
				ratio = ratio * ratio

				# Map original location
				new_x = i + ratio * (end_point[0] - start_point[0])
				new_y = j + ratio * (end_point[1] - start_point[1])

				# Clamp to the image bounds (x runs over width, y over height)
				new_x = new_x if new_x >= 0 else 0
				new_x = new_x if new_x < width - 1 else width - 2
				new_y = new_y if new_y >= 0 else 0
				new_y = new_y if new_y < height - 1 else height - 2

				# The values of new_x and new_y are obtained by bilinear interpolation
				image_cp[j, i] = bilinear_insert(image, new_x, new_y)

	return image_cp


# Bilinear interpolation
def bilinear_insert(image, new_x, new_y):
	h, w, c = image.shape
	if c == 3:
		x1 = int(new_x)
		x2 = x1 + 1
		y1 = int(new_y)
		y2 = y1 + 1

		# Blend the four surrounding pixels, weighting each by the opposite-corner area
		part1 = image[y1, x1].astype(np.float64) * (float(x2) - new_x) * (float(y2) - new_y)
		part2 = image[y1, x2].astype(np.float64) * (new_x - float(x1)) * (float(y2) - new_y)
		part3 = image[y2, x1].astype(np.float64) * (float(x2) - new_x) * (new_y - float(y1))
		part4 = image[y2, x2].astype(np.float64) * (new_x - float(x1)) * (new_y - float(y1))

		insertvalue = part1 + part2 + part3 + part4

		# Pixel values are 0-255, so cast to unsigned 8-bit (np.int8 would overflow)
		return insertvalue.astype(np.uint8)
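
As a sanity check, here is a minimal self-contained sketch of the same bilinear blend on a tiny one-channel float image (pure numpy; the `bilinear` helper name is ours, not part of the script above):

```python
import numpy as np

def bilinear(image, x, y):
    # Sample image at fractional (x, y): blend the four surrounding pixels,
    # each weighted by the area of the rectangle opposite to it
    x1, y1 = int(x), int(y)
    x2, y2 = x1 + 1, y1 + 1
    fx, fy = x - x1, y - y1
    return (image[y1, x1] * (1 - fx) * (1 - fy)
            + image[y1, x2] * fx * (1 - fy)
            + image[y2, x1] * (1 - fx) * fy
            + image[y2, x2] * fx * fy)

# The exact midpoint of a 2x2 gradient averages all four pixels
img = np.array([[[0.0], [4.0]],
                [[8.0], [12.0]]])
print(bilinear(img, 0.5, 0.5))  # [6.]
```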
# Apply the fat-face warp (the loop runs fat_nums - 1 times)
fat_nums = 3
for i in range(1, fat_nums):
	src_img = fat_face(src_img, face_landmark)

cv2.imwrite('res.jpg', src_img)
img = mpimg.imread('res.jpg') 
# Display the crayon eyebrows + puffy face
plt.figure(figsize=(10,10))
plt.imshow(img) 
plt.axis('off') 
plt.show()

5. One-click execution ~ (the steps above were the walkthrough; you can find your distant cousin right here)

run.py takes four parameters:

- img_path: input picture path
- width: eyebrow width
- res_img_path: output picture path
- fat_nums: puffy-face coefficient

Modify the parameters as needed. After the command below runs and prints done, you can find the output picture (default: res.jpg) in the /home/aistudio directory on the left.

!python run.py --img_path example.jpg --width 8 --res_img_path res.jpg --fat_nums 3

Let's see the effect! (New victims are added from time to time ~)

(Mom, mom, I'm in the same frame as the PPDE big shots, hahaha)

Summary

It takes only four simple steps to find Crayon Xiaoxin's distant cousin, and suddenly he has a whole bunch of brothers.

The idea behind this project: first run face keypoint detection on the picture; with the 68 keypoint coordinates in hand, the rest is easy. The thick eyebrows just require OpenCV to draw lines along the eyebrow points, while the puffy-face generator in Step 4 is implemented with the local translation warp algorithm.

Finally, finally: lucky friends will be drawn from time to time, and I'll help you find your own distant cousin (manual dog head)

Personal profile

Author: AP Kai

School: sophomore at Shenyang University of Technology

AI Studio: https://aistudio.baidu.com/aistudio/personalcenter/thirdview/675310

GitHub: https://github.com/AP-Kai/AP-Kai


Tags: Computer Vision Deep Learning paddlepaddle

Posted on Fri, 03 Dec 2021 17:32:18 -0500 by pitstop