For details about the base control class for screen post-processing effects, see this earlier post:

https://www.cnblogs.com/koshio0219/p/11131619.html

This article builds on that control class to implement another common screen post-processing effect: edge detection.

Concepts and principles:

First, we need to know that convolution is a common operation for processing pixels in graphics.

The essence of a convolution is to blend each pixel with its surrounding pixels; different blends yield different processing effects, such as sharpening the image, blurring it, or detecting edges.

A convolution produces different results depending on how the pixels are weighted and combined, and those weights are determined by the convolution kernel.

A convolution kernel can be viewed as a square matrix of n rows and n columns, where n is usually odd so that the pixel being processed sits at the center of the matrix.

The convolution kernel used for edge detection is also called an edge detection operator. Taking the Sobel operator as an example, its two kernels are:

Gx:
-1  0  1
-2  0  2
-1  0  1

Gy:
-1 -2 -1
 0  0  0
 1  2  1

Note that the Sobel operator here is laid out in screen coordinates, where the origin is at the top-left of the screen and +x, +y point right and down. This is unlike uv coordinates, whose origin is at the bottom-left with +x, +y pointing right and up. Pay special attention to this, otherwise the code later is easy to get wrong.

Here Gx and Gy perform edge detection in the vertical and horizontal directions respectively. You can convince yourself of this by ignoring the zero elements of each matrix, since a zero weight contributes nothing to the result. In other words, Gx computes the horizontal gradient and Gy computes the vertical gradient.

The horizontal gradient detects vertical edge lines, and the vertical gradient detects horizontal edge lines: for example, on a vertical edge the values change from left to right, so it is the horizontal gradient that is large there. This is easy to mix up, so take special care.

Beyond simply blending pixels, what an edge detection operator really computes is the gradient value at each pixel.

A high gradient value means a pixel differs strongly from its surroundings: picture a pixel that is out of step with its neighbors, sitting on a steep cliff of values. Such a pixel can be treated as lying on an edge.

By processing every pixel in the image this way, we obtain the edges of the whole image. That is the essence of edge detection.

Calculation method:

1. For each pixel, get the coordinates of the 8 surrounding pixels to use in the Sobel computation, arranged to match the Sobel operator's coordinate axes:

uv[0] | uv[1] | uv[2]

uv[3] | uv[4] (the pixel itself) | uv[5]

uv[6] | uv[7] | uv[8]

However, since the origin of uv coordinates is at the bottom-left corner, if uv[4] is the pixel itself, the offsets of uv[0] to uv[8] (in texel units) are:

(-1,1) uv[0] | (0,1) uv[1] | (1,1) uv[2]

(-1,0) uv[3] | (0,0) uv[4] | (1,0) uv[5]

(-1,-1) uv[6] | (0,-1) uv[7] | (1,-1) uv[8]
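In a Unity image-effect shader, these nine coordinates are typically computed once in the vertex shader using `_MainTex_TexelSize`, which Unity fills in with the texel size of `_MainTex`. A sketch (the `v2f` layout is an assumption carried through the rest of this post):

```hlsl
// Inside the CGPROGRAM block of the post-effect shader.
sampler2D _MainTex;
half4 _MainTex_TexelSize; // x = 1/width, y = 1/height, set by Unity

struct v2f {
    float4 pos : SV_POSITION;
    half2 uv[9] : TEXCOORD0; // the 3x3 sampling neighborhood
};

v2f vert(appdata_img v) {
    v2f o;
    o.pos = UnityObjectToClipPos(v.vertex);
    half2 uv = v.texcoord;
    // Offsets match the table above: uv[0] is the top-left neighbor,
    // uv[4] is the pixel itself.
    o.uv[0] = uv + _MainTex_TexelSize.xy * half2(-1,  1);
    o.uv[1] = uv + _MainTex_TexelSize.xy * half2( 0,  1);
    o.uv[2] = uv + _MainTex_TexelSize.xy * half2( 1,  1);
    o.uv[3] = uv + _MainTex_TexelSize.xy * half2(-1,  0);
    o.uv[4] = uv + _MainTex_TexelSize.xy * half2( 0,  0);
    o.uv[5] = uv + _MainTex_TexelSize.xy * half2( 1,  0);
    o.uv[6] = uv + _MainTex_TexelSize.xy * half2(-1, -1);
    o.uv[7] = uv + _MainTex_TexelSize.xy * half2( 0, -1);
    o.uv[8] = uv + _MainTex_TexelSize.xy * half2( 1, -1);
    return o;
}
```

Doing this in the vertex shader rather than the fragment shader saves per-pixel work, since the interpolated uv values are still correct for a full-screen quad.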

2. With these offsets, the coordinates of the surrounding pixels can be computed quickly. The sampled values are then combined with the corresponding elements of Gx and Gy to obtain the horizontal and vertical gradient values, i.e. to perform vertical and horizontal edge detection respectively:

Strictly speaking, a convolution first rotates the kernel by 180 degrees to get a new matrix, then multiplies corresponding elements and sums them. Note that this is an element-wise multiply-accumulate, not a matrix multiplication.

For the Sobel operator, however, rotating the kernel by 180 degrees only flips the sign of the result, and since we take absolute values below, the flip can simply be omitted.

To keep the GPU cost down, instead of computing G = sqrt(Gx² + Gy²) we approximate the overall gradient as the sum of absolute values, G = |Gx| + |Gy|.
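Step 2 plus this approximation can be sketched as a fragment-side function. It assumes the nine neighborhood coordinates are available in a `v2f` field `uv[0..8]` ordered top-left to bottom-right; the gradient is computed on luminance (the 0.2125/0.7154/0.0721 weights are the standard Rec. 709 coefficients), and the kernel element order matches the uv table above:

```hlsl
// Grayscale value used for gradient computation (Rec. 709 luminance).
fixed luminance(fixed4 color) {
    return 0.2125 * color.r + 0.7154 * color.g + 0.0721 * color.b;
}

// Returns G = |Gx| + |Gy| for the pixel whose neighborhood is in i.uv[0..8].
half sobel(v2f i) {
    // Element order matches uv[0]..uv[8] (top-left to bottom-right).
    const half Gx[9] = { -1, 0, 1,
                         -2, 0, 2,
                         -1, 0, 1 };
    const half Gy[9] = { -1, -2, -1,
                          0,  0,  0,
                          1,  2,  1 };
    half edgeX = 0;
    half edgeY = 0;
    for (int it = 0; it < 9; it++) {
        half texColor = luminance(tex2D(_MainTex, i.uv[it]));
        edgeX += texColor * Gx[it]; // horizontal gradient
        edgeY += texColor * Gy[it]; // vertical gradient
    }
    return abs(edgeX) + abs(edgeY); // cheap substitute for sqrt(Gx^2 + Gy^2)
}
```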

3. After the gradient value is computed, interpolate between the original sample and the edge color using G to produce the final image.

Program implementation:

First, the parameter control script:

```csharp
using UnityEngine;

// Parameter controller for the edge detection effect.
// ScreenEffectBase (from the earlier post) provides the Material property.
public class EdgeDetectionCtrl : ScreenEffectBase
{
    private const string _EdgeOnly = "_EdgeOnly";
    private const string _EdgeColor = "_EdgeColor";
    private const string _BackgroundColor = "_BackgroundColor";

    [Range(0, 1)]
    public float edgeOnly = 0.0f;

    public Color edgeColor = Color.black;

    public Color backgroundColor = Color.white;

    private void OnRenderImage(RenderTexture source, RenderTexture destination)
    {
        if (Material != null)
        {
            Material.SetFloat(_EdgeOnly, edgeOnly);
            Material.SetColor(_EdgeColor, edgeColor);
            Material.SetColor(_BackgroundColor, backgroundColor);
            Graphics.Blit(source, destination, Material);
        }
        else
        {
            Graphics.Blit(source, destination);
        }
    }
}
```
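The shader itself is not shown above. Below is a sketch of what it could look like, wired to the three properties the script sets (`_EdgeOnly`, `_EdgeColor`, `_BackgroundColor`); the shader path `ScreenEffect/EdgeDetection` and the exact blending convention (here, a larger gradient means more edge color) are assumptions, not the original author's code:

```shaderlab
Shader "ScreenEffect/EdgeDetection" {
    Properties {
        _MainTex ("Base (RGB)", 2D) = "white" {}
        _EdgeOnly ("Edge Only", Float) = 0.0
        _EdgeColor ("Edge Color", Color) = (0, 0, 0, 1)
        _BackgroundColor ("Background Color", Color) = (1, 1, 1, 1)
    }
    SubShader {
        Pass {
            // Standard render state for full-screen post effects.
            ZTest Always Cull Off ZWrite Off

            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            sampler2D _MainTex;
            half4 _MainTex_TexelSize;
            fixed _EdgeOnly;
            fixed4 _EdgeColor;
            fixed4 _BackgroundColor;

            struct v2f {
                float4 pos : SV_POSITION;
                half2 uv[9] : TEXCOORD0;
            };

            v2f vert(appdata_img v) {
                v2f o;
                o.pos = UnityObjectToClipPos(v.vertex);
                half2 uv = v.texcoord;
                // Neighborhood offsets as in the table above:
                // uv[0] = (-1,1) ... uv[8] = (1,-1).
                for (int it = 0; it < 9; it++) {
                    half2 offset = half2(it % 3 - 1, 1 - it / 3);
                    o.uv[it] = uv + _MainTex_TexelSize.xy * offset;
                }
                return o;
            }

            fixed luminance(fixed4 color) {
                return 0.2125 * color.r + 0.7154 * color.g + 0.0721 * color.b;
            }

            half sobel(v2f i) {
                const half Gx[9] = { -1, 0, 1, -2, 0, 2, -1, 0, 1 };
                const half Gy[9] = { -1, -2, -1, 0, 0, 0, 1, 2, 1 };
                half edgeX = 0, edgeY = 0;
                for (int it = 0; it < 9; it++) {
                    half texColor = luminance(tex2D(_MainTex, i.uv[it]));
                    edgeX += texColor * Gx[it];
                    edgeY += texColor * Gy[it];
                }
                return abs(edgeX) + abs(edgeY);
            }

            fixed4 frag(v2f i) : SV_Target {
                half edge = saturate(sobel(i));
                // Blend the edge color over the original image...
                fixed4 withEdge = lerp(tex2D(_MainTex, i.uv[4]), _EdgeColor, edge);
                // ...and over a solid background, then mix the two by _EdgeOnly.
                fixed4 onlyEdge = lerp(_BackgroundColor, _EdgeColor, edge);
                return lerp(withEdge, onlyEdge, _EdgeOnly);
            }
            ENDCG
        }
    }
    Fallback Off
}
```

With `edgeOnly` at 0 the edges are drawn over the scene; at 1 they are drawn over the solid background color, matching the `[Range(0,1)]` slider in the control script.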