Reading guide
This article introduces a method for segmenting and counting touching (adhesive) objects based on morphology + connected-component processing, and compares the Halcon and OpenCV implementations.
Background introduction
In real-world vision applications we often need to count objects or parts, and adjacent or touching (adhesive) objects are a common complication. How well such objects are separated directly determines the accuracy of the final count. Common methods for segmenting and counting touching objects include:
[1] Morphology + connected-component processing
[2] Distance transform + watershed segmentation
[3] Other methods
In this article, method [1] is implemented in both Halcon and OpenCV and briefly compared.
Example demonstration and implementation steps
*Example 1: sugar bean segmentation and counting
Test image (image source: Halcon example image 'pellets'):

Implementation steps:
[1] Thresholding: fixed-range threshold or OTSU threshold (a short OTSU sketch follows these steps)

[2] Erosion with a circular structuring element: breaks the touching regions apart

[3] Connected-component processing: count the components and extract their centers to mark the results
[4] Dilation of each connected component: restores the outline size for drawing the edges

[5] Result marking and display

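Step [1] allows either a fixed threshold or OTSU. A minimal OpenCV sketch of the two options (the file name 'pellets.png' is only a placeholder for the test image; 106 is the fixed value used in the Halcon code below):

import cv2

gray = cv2.imread('pellets.png', cv2.IMREAD_GRAYSCALE)

# Option 1: fixed threshold (value chosen per image)
_, bw_fixed = cv2.threshold(gray, 106, 255, cv2.THRESH_BINARY)

# Option 2: let OTSU pick the threshold automatically from the histogram
otsu_val, bw_otsu = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
print('OTSU threshold =', otsu_val)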
Halcon implementation code:
dev_get_window (WindowHandle)
read_image (Image, 'pellets')
threshold (Image, Regions, 106, 255)
connection (Regions, ConnectedRegions)
select_shape (ConnectedRegions, SelectedRegions, 'area', 'and', 500.72, 10000)
erosion_circle (SelectedRegions, RegionErosion, 7.5)
connection (RegionErosion, ConnectedRegions2)
dilation_circle (ConnectedRegions2, RegionDilation, 7.5)
dev_set_draw ('margin')
dev_set_line_width (3)
dev_display (Image)
dev_display (RegionDilation)
area_center (RegionDilation, Area, Row, Column)
gen_cross_contour_xld (Cross, Row, Column, 15, 0.785398)
count_obj (RegionDilation, Number)
dev_set_color ('green')
set_display_font (WindowHandle, 30, 'mono', 'true', 'false')
set_tposition (WindowHandle, 10, 240)
write_string (WindowHandle, 'count=' + Number)
OpenCV implementation code and effect:
# Official account: OpenCV and AI deep learning
# Author: Color Space
import numpy as np
import cv2

img = cv2.imread('B0.jpg')
cv2.imshow('src', img)
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
ret, thres = cv2.threshold(gray, 71, 255, cv2.THRESH_BINARY)
cv2.imshow('thres', thres)

# Circular structuring element of radius 26
k1 = np.zeros((53, 53), np.uint8)
cv2.circle(k1, (26, 26), 26, (1, 1, 1), -1, cv2.LINE_AA)

# Erosion: break the touching regions apart
erode = cv2.morphologyEx(thres, cv2.MORPH_ERODE, k1)
cv2.imshow('erode', erode)

# Connected-component analysis
num_labels, labels, stats, centers = cv2.connectedComponentsWithStats(erode, connectivity=8)
# Number of connected components (label 0 is the background)
print('num_labels = ', num_labels)
# Statistics of each component: x, y, width, height, area
print('stats = ', stats)
# Centroid of each component
print('centroids = ', centers)
# Label image: pixels of the same component share the same label 1, 2, 3, ...
print('labels = ', labels)

# Assign a random color to each connected component
output = np.zeros((img.shape[0], img.shape[1], 3), np.uint8)
for i in range(1, num_labels):
    mask = labels == i
    output[:, :, 0][mask] = np.random.randint(0, 255)
    output[:, :, 1][mask] = np.random.randint(0, 255)
    output[:, :, 2][mask] = np.random.randint(0, 255)
cv2.imshow('output', output)

# Dilation: restore each component to roughly its original size
dilate = cv2.morphologyEx(output, cv2.MORPH_DILATE, k1)
cv2.imshow('dilate', dilate)
#k2 = np.ones((3, 3), np.uint8)
#dilate = cv2.morphologyEx(dilate, cv2.MORPH_GRADIENT, k2)  # optional: gradient for outlines only

# Weighted overlay of the colored components on the source image
result = cv2.addWeighted(img, 0.8, dilate, 0.5, 0)
for i in range(1, len(centers)):
    cv2.drawMarker(result, (int(centers[i][0]), int(centers[i][1])), (0, 0, 255), 1, 20, 2)
cv2.putText(result, "count=%d" % (len(centers) - 1), (20, 30), 0, 1, (0, 255, 0), 2)
cv2.imshow('result', result)
cv2.waitKey(0)
cv2.destroyAllWindows()
OpenCV marking results (no area filter is applied here, so the count is 2 higher than Halcon's):

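To reproduce Halcon's select_shape area filter and drop the two extra small components, the stats array returned above can be filtered by area. A minimal sketch that continues from the OpenCV code above (the area bounds are illustrative values and need tuning per image):

# Continues from the code above: uses num_labels, labels, stats.
# min_area / max_area play the role of Halcon's select_shape(..., 'area', 'and', ...).
min_area, max_area = 500, 10000
kept = [i for i in range(1, num_labels)
        if min_area <= stats[i, cv2.CC_STAT_AREA] <= max_area]
print('filtered count =', len(kept))

# Optional: rebuild a mask that contains only the kept components
filtered = np.isin(labels, kept).astype(np.uint8) * 255
cv2.imshow('filtered', filtered)
cv2.waitKey(0)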
*Example 2: workpiece segmentation and counting
Test image (image source: CSDN user learningways):

The implementation steps are the same as in Example 1; only a few parameters in the code need fine-tuning (threshold value, structuring-element radius, area range). You can try it yourself. The results are as follows:
Halcon implementation results:

OpenCV implementation results:

Comparison and summary
[1] Applicability: the morphology + connected-component method suits cases where the adhesion is mild or the touching area is much smaller than the object itself (for example, the touching part is about 10 pixels wide while the object itself is hundreds of pixels wide, so erosion will not make the object disappear); for heavier adhesion, see the watershed sketch after this list;
[2] Halcon splits a region into separate connected components with the connection operator and then processes each component; OpenCV labels the components with connectedComponentsWithStats, dilates the individual components, and finally overlays the result on the source image with a weighted blend. This is a flexible approach that does not rely on contour extraction.
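For objects that touch more heavily, method [2] (distance transform + watershed) is usually more robust than plain erosion. A minimal sketch, assuming img is the BGR source image and binary is its thresholded foreground mask (objects white on black); the 0.5 seed factor is an illustrative value that usually needs tuning:

import numpy as np
import cv2

# Distance to the background; peaks lie near object centers
dist = cv2.distanceTransform(binary, cv2.DIST_L2, 5)

# Peaks of the distance map are "sure foreground" seeds
_, sure_fg = cv2.threshold(dist, 0.5 * dist.max(), 255, cv2.THRESH_BINARY)
sure_fg = sure_fg.astype(np.uint8)

# Sure background via dilation; the band in between is "unknown"
sure_bg = cv2.dilate(binary, np.ones((3, 3), np.uint8), iterations=3)
unknown = cv2.subtract(sure_bg, sure_fg)

# Label the seeds, reserve 0 for the unknown region, and run watershed
num_seeds, markers = cv2.connectedComponents(sure_fg)
markers = markers + 1
markers[unknown == 255] = 0
markers = cv2.watershed(img, markers)

# Labels > 1 are the separated objects; -1 marks the watershed boundaries
print('count =', markers.max() - 1)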