Dumbbell-lifting detection and automatic counting based on PaddleHub
Introduction to PaddleHub
With PaddleHub, it is easy to obtain pre-trained models from the PaddlePaddle ecosystem, manage models, and run one-click prediction. With the Fine-tune API, transfer learning on top of a large-scale pre-trained model can be completed quickly, so that the pre-trained model better serves user-specific application scenarios.
This project uses PaddleHub's human_pose_estimation_resnet50_mpii human keypoint detection model.
# Install the model before use
hub install human_pose_estimation_resnet50_mpii==1.1.1
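After installation, the model can be loaded and sanity-checked on a single image. The following is a minimal sketch, where sample.jpg is a placeholder path rather than a file from this project:

import cv2
import paddlehub as hub

# Load the pre-trained MPII pose model and run it on one image
# ('sample.jpg' is a hypothetical placeholder path)
pose_estimation = hub.Module(name="human_pose_estimation_resnet50_mpii")
result = pose_estimation.keypoint_detection(images=[cv2.imread("sample.jpg")])
print(result[0]["data"])  # a dict-like mapping of joint names to [x, y] coordinates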
Overview of models
Human keypoint detection (pose estimation) is one of the fundamental algorithms of computer vision and plays a foundational role in many computer vision tasks, such as action recognition, person tracking, gait recognition, and related fields. Its applications are concentrated in intelligent video surveillance, patient monitoring systems, human-computer interaction, virtual reality, human animation, smart homes, intelligent security, assisted athlete training, and so on. The model's paper, Simple Baselines for Human Pose Estimation and Tracking, was published by MSRA at ECCV 2018, and the model was trained on the MPII dataset.
Project ideas
In this project, six joint points are detected during a dumbbell lift: the left and right wrists, elbows, and shoulders. For each arm, the vector angle at the elbow, formed between the wrist and the shoulder, is calculated. With this angle as the core signal, the project judges whether the motion is in place: one full round trip of the angle (roughly 0° up to 90° and back down) is counted as one dumbbell lift.
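To make the counting rule concrete, here is a minimal sketch of the half-count state machine on a synthetic angle sequence. The angles below are made up for illustration; in the project they come from the pose model, as Step three shows:

# Synthetic elbow angles for two lifts: up to about 90 degrees and back, twice
angles = [5, 30, 60, 88, 60, 30, 5, 35, 65, 90, 55, 25, 5]

count, state, angle_last = 0.0, 0, angles[0]
for angle in angles[1:]:
    diff = angle - angle_last
    angle_last = angle
    if diff >= 0 and state == 0:   # angle rising: first half of a lift
        count += 0.5
        state = 1
    if diff <= 0 and state == 1:   # angle falling again: second half
        count += 0.5
        state = 2
    if count % 1.0 == 0.0:         # a whole lift completed: re-arm the counter
        state = 0

print(count)  # -> 2.0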
Project process
Step one
The human_pose_estimation_resnet50_mpii model is used to detect the key points of the human body and to extract the six joint points needed for the project, namely [right_wrist, right_elbow, right_shoulder, left_shoulder, left_elbow, left_wrist], which are put into a list in that order:
import cv2
import paddlehub as hub

# Load the pre-trained human keypoint detection model
pose_estimation = hub.Module(name="human_pose_estimation_resnet50_mpii")

def aitrainer(picture):
    """Run keypoint detection on an image file and return the six joints in the
    order [right_wrist, right_elbow, right_shoulder, left_shoulder, left_elbow, left_wrist]."""
    consequence = []
    result = pose_estimation.keypoint_detection(images=[cv2.imread(picture)])
    predict = dict(result[0]["data"])
    right_wrist = predict['right_wrist']
    consequence.append(right_wrist)
    right_elbow = predict['right_elbow']
    consequence.append(right_elbow)
    right_shoulder = predict['right_shoulder']
    consequence.append(right_shoulder)
    left_shoulder = predict['left_shoulder']
    consequence.append(left_shoulder)
    left_elbow = predict['left_elbow']
    consequence.append(left_elbow)
    left_wrist = predict['left_wrist']
    consequence.append(left_wrist)
    return consequence
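Later steps rely on the positions in this returned list, so a short usage sketch (using the camera.jpg filename from Step three) makes the index mapping explicit:

point = aitrainer("camera.jpg")
right_wrist, right_elbow, right_shoulder = point[0], point[1], point[2]
left_shoulder, left_elbow, left_wrist = point[3], point[4], point[5]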
Step two
As mentioned earlier, the core of this project is to calculate the angle formed by the wrist, elbow, and shoulder, with the elbow as the vertex, and to track that angle in order to monitor and count dumbbell lifts.
import math

def Cal_Ang(point_1, point_2, point_3):
    """
    Calculate an angle from three point coordinates.
    :param point_1: coordinates of point 1
    :param point_2: coordinates of point 2 (the vertex)
    :param point_3: coordinates of point 3
    :return: angle at point 2, in degrees
    """
    # Side lengths of the triangle formed by the three points
    a = math.sqrt((point_2[0] - point_3[0]) ** 2 + (point_2[1] - point_3[1]) ** 2)
    b = math.sqrt((point_1[0] - point_3[0]) ** 2 + (point_1[1] - point_3[1]) ** 2)
    c = math.sqrt((point_1[0] - point_2[0]) ** 2 + (point_1[1] - point_2[1]) ** 2)
    # Law of cosines: b^2 = a^2 + c^2 - 2ac*cos(B)
    B = math.degrees(math.acos((b * b - a * a - c * c) / (-2 * a * c)))
    return B
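As a quick sanity check of the formula, placing the vertex at the origin and the other two points on the coordinate axes should yield a right angle:

# The angle at (0, 0) between the directions to (1, 0) and (0, 1) is 90 degrees
print(Cal_Ang((1, 0), (0, 0), (0, 1)))  # -> 90.0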
Step three
Use OpenCV to open the local camera, capture frames, and display the project's detection results on screen in real time:
import time
import cv2

# These variables are used inside the loop and must be initialized first
train_content = 'Left arm'  # which arm to track: 'Left arm' or 'Right arm'
count = 0.0                 # completed dumbbell lifts (half per direction change)
det_1 = 0                   # counting state: 0 = ready, 1 = rising, 2 = falling done
angle_last = 0              # elbow angle from the previous frame

cap = cv2.VideoCapture(0)

# Give the user five seconds to get ready before training starts
print("Training will take place in 5 seconds. Please be ready!")
for i in range(5):
    time.sleep(1)
    print(5 - i)

while True:
    # get a frame
    ret, frame = cap.read()
    if not ret:
        break
    frame = cv2.resize(frame, (1280, 720))
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
    cv2.imwrite("camera.jpg", frame)
    point = aitrainer('camera.jpg')

    # Pick the joint points of the arm being trained
    if train_content == 'Left arm':
        wrist, elbow, shoulder = point[5], point[4], point[3]
    else:  # 'Right arm'
        wrist, elbow, shoulder = point[0], point[1], point[2]

    angle = Cal_Ang(wrist, elbow, shoulder)
    diff = angle - angle_last
    angle_last = angle  # remember this frame's angle for the next comparison

    # Half a count on the way up, half on the way down
    if diff >= 0 and det_1 == 0:
        count += 0.5
        det_1 = 1
    if diff <= 0 and det_1 == 1:
        count += 0.5
        det_1 = 2

    # Draw the tracked arm (OpenCV expects integer pixel coordinates)
    cv2.line(frame, tuple(map(int, wrist)), tuple(map(int, elbow)), (0, 0, 255), 3)
    cv2.line(frame, tuple(map(int, elbow)), tuple(map(int, shoulder)), (0, 0, 255), 3)

    # A whole number of counts means a full lift finished: re-arm the counter
    if count % 1.0 == 0.0:
        det_1 = 0
    print("count:%.2f" % count)

    # Overlay the running count on the frame
    cv2.rectangle(frame, (0, 450), (250, 720), (0, 255, 0), cv2.FILLED)
    cv2.putText(frame, str(int(count)), (45, 670), cv2.FONT_HERSHEY_PLAIN, 10, (255, 0, 0), 20)
    cv2.imshow('OpenPose using OpenCV', frame)

cap.release()
cv2.destroyAllWindows()
[Note: the train_content branch is what distinguishes the left arm from the right arm]
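One design note: the loop above writes each frame to camera.jpg and re-reads it from disk only so it can reuse aitrainer. Since keypoint_detection already accepts in-memory images (in Step one it receives the array decoded by cv2.imread), a variant that takes the frame directly would avoid the disk round trip. A minimal sketch, assuming the same pose_estimation module object as in Step one; aitrainer_frame is a hypothetical helper, not part of the original project:

# Hypothetical variant of aitrainer that skips the camera.jpg round trip
def aitrainer_frame(frame):
    result = pose_estimation.keypoint_detection(images=[frame])
    predict = dict(result[0]["data"])
    return [predict[name] for name in ('right_wrist', 'right_elbow', 'right_shoulder',
                                       'left_shoulder', 'left_elbow', 'left_wrist')]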