For example, there is no advanced detection yet: the detected lanes are limited to the cropped region of interest, and only one lane line on each side (left and right) is identified correctly.

Because the line-fitting step uses a least-squares fit, the fitted result can come out slightly skewed.

I haven't thought of a better approach yet; I'll update this when I do.
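The least-squares step itself is just `np.polyfit` with degree 1. A minimal sketch (with made-up points) of fitting y = kx + b and then inverting the equation to recover x for a chosen y, the same arithmetic the lane code uses when drawing:

```python
import numpy as np

# Hypothetical lane-pixel coordinates, made up for illustration
xs = [100, 120, 140, 160]
ys = [300, 260, 220, 180]  # left lane: y falls as x grows (negative slope)

# Degree-1 least-squares fit: y = k*x + b
k, b = np.polyfit(xs, ys, 1)

# Invert the line equation to get x at a chosen y value
y_bottom = 300
x_bottom = int(round((y_bottom - b) / k))
print(k, b, x_bottom)  # k = -2.0, b = 500.0, x_bottom = 100
```

Because these sample points are exactly collinear, the fit recovers them perfectly; on real, noisy Hough segments the fitted line is only a best compromise, which is where the skew mentioned above comes from.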

```python
import cv2
import numpy as np

# Read image and convert to grayscale (imread returns BGR, so use BGR2GRAY)
img = cv2.imread('load.png')
p = np.zeros_like(img, np.uint8)
grap = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Canny edge detection
edges = cv2.Canny(grap, 50, 200, apertureSize=3)
dst = cv2.convertScaleAbs(edges)

# Get picture size
size = img.shape
w = int(size[1])
h = int(size[0])

# Obtain the target area according to the size
pts = np.array([[0.6 * w, 0.6 * h], [0.4 * w, 0.6 * h],
                [0.15 * w, h], [0.8 * w, h]], dtype=np.int32)
cv2.fillConvexPoly(p, pts, (255, 255, 255))
p = cv2.cvtColor(p, cv2.COLOR_BGR2GRAY)
load = cv2.bitwise_and(p, dst)

# Hough transform (minLineLength/maxLineGap must be keyword arguments:
# the 5th positional slot of HoughLinesP is the output `lines` array)
minLineLength = 1
maxLineGap = 0
lines = cv2.HoughLinesP(load, 1, np.pi / 180, 10,
                        minLineLength=minLineLength, maxLineGap=maxLineGap)

# Split the segments into left/right groups by slope
color = (0, 255, 0)
thickness = 2
left_lines_x = []
left_lines_y = []
right_lines_x = []
right_lines_y = []
line_y_max = 0
line_y_min = 999
for line in lines:
    for x1, y1, x2, y2 in line:
        line_y_max = max(line_y_max, y1, y2)
        line_y_min = min(line_y_min, y1, y2)
        if x2 == x1:
            continue  # skip vertical segments to avoid division by zero
        k = (y2 - y1) / (x2 - x1)
        if k < -0.3:
            left_lines_x.extend([x1, x2])
            left_lines_y.extend([y1, y2])
        elif k > 0.3:
            right_lines_x.extend([x1, x2])
            right_lines_y.extend([y1, y2])

# Least-squares line fitting: y = k*x + b
left_line_k, left_line_b = np.polyfit(left_lines_x, left_lines_y, 1)
right_line_k, right_line_b = np.polyfit(right_lines_x, right_lines_y, 1)

# Invert the line equations to get x at the maximum and minimum y values
cv2.line(img,
         (int((line_y_max - left_line_b) / left_line_k), line_y_max),
         (int((line_y_min - left_line_b) / left_line_k), line_y_min),
         color, thickness)
cv2.line(img,
         (int((line_y_max - right_line_b) / right_line_k), line_y_max),
         (int((line_y_min - right_line_b) / right_line_k), line_y_min),
         color, thickness)

cv2.imshow('dst', img)
cv2.waitKey(0)
```

A few of the functions in the middle are tricky.

HoughLinesP() must have the P at the end; it returns the endpoint coordinates (x1, y1), (x2, y2) of each detected line segment.

HoughLines() (without the P) returns each line as a two-element vector (ρ, θ), where ρ is the distance from the coordinate origin (0, 0) (that is, the top-left corner of the image) and θ is the rotation angle of the line in radians.
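To make the (ρ, θ) form concrete, here is a small sketch (pure NumPy, the function name is my own) of the standard conversion from one (ρ, θ) pair into two drawable endpoints, which is what you would do before calling cv2.line():

```python
import numpy as np

def polar_to_segment(rho, theta, length=1000):
    """Convert a HoughLines-style (rho, theta) pair into two endpoints.

    (rho, theta) describes the line by its foot of perpendicular from the
    origin (the image's top-left corner): the foot sits at
    (rho*cos(theta), rho*sin(theta)), and the line runs perpendicular to
    that direction, i.e. along (-sin(theta), cos(theta)).
    """
    a, b = np.cos(theta), np.sin(theta)
    x0, y0 = a * rho, b * rho                    # foot of the perpendicular
    p1 = (int(x0 - length * b), int(y0 + length * a))
    p2 = (int(x0 + length * b), int(y0 - length * a))
    return p1, p2

# A vertical line x = 50 corresponds to (rho=50, theta=0)
p1, p2 = polar_to_segment(50, 0.0)
print(p1, p2)  # both endpoints have x == 50
```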

`HoughLinesP(image, rho, theta, threshold[, lines[, minLineLength[, maxLineGap]]])`

Then there are the parameters. HoughLines() is not easy to use, though it may be fine in some cases.

1. The first parameter of HoughLinesP() is the same as in HoughLines(): image is the input picture, and it must be a binary image (my understanding: ordinary images have three channels, R, G, B or H, S, V, while a binary image has only 0 and 255). To prevent the image type from changing somewhere along the way, it is safest to convert it to grayscale again before use:

```python
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
```

Either way, it must be a binary image.

2. rho and theta are the distance resolution in pixels and the angle resolution in radians, respectively. They are usually set to 1 and π/180 (i.e. one degree); that choice is rarely a problem.

3. threshold is the accumulator-plane threshold: the number of votes a candidate must collect in the accumulator before it is reported as a line. Roughly speaking, it controls how much evidence a segment needs to count as a line; if it is set too large, short segments get filtered out.

4. minLineLength is also a filtering parameter: only segments longer than this value are returned. In effect it resembles threshold.

5. maxLineGap is the maximum gap allowed between points on the same line for them to be linked into a single segment.

Then there is the region selection. At first I cropped using four fixed points, but that breaks for images of a different size, so the target region is now located proportionally:

```python
# Obtain the target area according to the size
pts = np.array([[0.6 * w, 0.6 * h], [0.4 * w, 0.6 * h],
                [0.15 * w, h], [0.8 * w, h]], dtype=np.int32)
cv2.fillConvexPoly(p, pts, (255, 255, 255))
p = cv2.cvtColor(p, cv2.COLOR_BGR2GRAY)
load = cv2.bitwise_and(p, dst)
```

pts holds the coordinates of the four points; they are connected counterclockwise starting from the upper-right corner.

cv2.fillConvexPoly() fills the region of p enclosed by the four points with white (255, 255, 255); cv2.bitwise_and() then ANDs each pixel of p with dst, keeping only the edges inside that region.

The following code reads off the coordinates of any pixel; after running it, just click directly on the image.

```python
import cv2

def mouse(event, x, y, flags, param):
    # On left click, mark the point and draw its coordinates on the image
    if event == cv2.EVENT_LBUTTONDOWN:
        xy = "%d,%d" % (x, y)
        cv2.circle(img, (x, y), 1, (255, 255, 255), thickness=-1)
        cv2.putText(img, xy, (x, y), cv2.FONT_HERSHEY_PLAIN,
                    1.0, (255, 255, 255), thickness=1)
        cv2.imshow("image", img)

img = cv2.imread("load2.png")
cv2.namedWindow("image")
cv2.imshow("image", img)
cv2.setMouseCallback("image", mouse)
cv2.waitKey(0)
cv2.destroyAllWindows()
```