OpenCV Foundation (23): Improving the Lighting of Night Images

1. Summary

Taking good pictures in low light can seem magical to non-photographers. Successful low-light photography requires a combination of skill, experience, and appropriate equipment. Images taken in low light lack color and distinct edges. They also suffer from poor visibility and unknown depth. These shortcomings make such images unsuitable for personal use or for image processing and computer vision tasks. In this article, we will learn how to improve the lighting of night images.

For people without photography skills, image processing techniques can be used to enhance such images. Shi et al. proposed one such method in their paper "Nighttime low illumination image enhancement with single image using bright/dark channel prior". That paper is the basis of this article.

Figure: Improving lighting in night images, before and after.

For laypeople, the usual answer to weak light is the flash, but you may have noticed that the flash can cause unwanted effects such as red eyes and glare. As a bonus for our readers, we will also discuss how to correct such lighting problems and try to address the limitations of this technique.

We will use the picture given below throughout the explanation. The image is taken from the paper cited above.

2. Principle

Our goal is to enhance a single low-illumination image using a method based on a two-channel (bright/dark channel) prior.

Image enhancement using a single image is simpler than using multiple images: single-image enhancement requires neither additional auxiliary images nor accurate point-to-point fusion between different images.

This is where the solution based on the dual-channel prior comes into play. In short, a prior is information about the image that you can exploit in your image processing problem. You may wonder why we use both channels rather than just the bright channel, which contains most of the missing information for low-light images. Considering the dark channel as well eliminates the blocking artifacts that appear in some regions, as shown in the figure below.

Figure: Necessity of the two-channel prior.

3. Framework for improving night image lighting

Before we delve into enhancing images, let's understand the steps involved. The following flowchart lists the steps we will follow to obtain a well-lit version of a night image.

First, the bright and dark channel images are obtained; these hold the maximum and minimum pixel values in a local patch of the original image. Next, we compute the global atmospheric light, since it gives us most of the information about the relatively bright parts of the image.

We use the channels and the atmospheric light value to obtain their respective transmission maps, taking a dark-channel weight into account in special cases. We will discuss this in detail below.

Figure: Flowchart of the framework for improving night image brightness.
From step 5 of the flowchart, note that the improved lighting image can be found using the following formula:

$$I(x) = \frac{I_{night}(x) - A}{t(x)} + A$$

where $I(x)$ is the enhanced image, $I_{night}(x)$ is the original low-light image, $A$ is the atmospheric light, and $t(x)$ is the corrected transmission map.

3.1 Step 1: Obtain the bright and dark channel priors

The first step is to estimate the bright and dark channel priors. They represent the maximum and minimum intensities of pixels in a local region, respectively. You can picture this as a sliding window that finds the maximum or minimum value across all channels at each position.

The dark channel prior is estimated as:

$$I^{dark}(x) = \min_{y \in \Omega(x)} \left( \min_{c \in \{r,g,b\}} I^c(y) \right)$$

The bright channel prior is estimated as:

$$I^{bright}(x) = \max_{y \in \Omega(x)} \left( \max_{c \in \{r,g,b\}} I^c(y) \right)$$

where $I^c$ is a color channel of $I$, $\Omega(x)$ is a local patch centered at $x$, and $y$ is a pixel in the local region $\Omega(x)$.

Python

import cv2
import numpy as np

def get_illumination_channel(I, w):
    M, N, _ = I.shape
    # pad the image edges by half the window size so the output keeps the same shape
    padded = np.pad(I, ((int(w/2), int(w/2)), (int(w/2), int(w/2)), (0, 0)), 'edge')
    darkch = np.zeros((M, N))
    brightch = np.zeros((M, N))

    for i, j in np.ndindex(darkch.shape):
        darkch[i, j] = np.min(padded[i:i + w, j:j + w, :]) # dark channel
        brightch[i, j] = np.max(padded[i:i + w, j:j + w, :]) # bright channel

    return darkch, brightch

We first import cv2 and NumPy and write a function to obtain the illumination channels. The image size is stored in the variables M and N. Padding of half the kernel size is applied to the image so that the output keeps the same size. np.min is used to obtain the lowest pixel value in the sliding window, giving the dark channel. Similarly, np.max obtains the highest pixel value in the window, giving the bright channel. We will need the dark and bright channel values in the following steps, so the function returns both. Similar code is written in C++, as shown below.
C++

std::pair<cv::Mat, cv::Mat> get_illumination_channel(cv::Mat I, float w) {
	int N = I.size[0];
	int M = I.size[1];
	cv::Mat darkch = cv::Mat::zeros(cv::Size(M, N), CV_32FC1);
	cv::Mat brightch = cv::Mat::zeros(cv::Size(M, N), CV_32FC1);

	int padding = int(w/2);
        // padding for channels
	cv::Mat padded = cv::Mat::zeros(cv::Size(M + 2*padding, N + 2*padding), CV_32FC3);

	for (int i=padding; i < padding + M; i++) {
		for (int j=padding; j < padding + N; j++) {
			padded.at<cv::Vec3f>(j, i).val[0] = (float)I.at<cv::Vec3b>(j-padding, i-padding).val[0]/255;
			padded.at<cv::Vec3f>(j, i).val[1] = (float)I.at<cv::Vec3b>(j-padding, i-padding).val[1]/255;
			padded.at<cv::Vec3f>(j, i).val[2] = (float)I.at<cv::Vec3b>(j-padding, i-padding).val[2]/255;
		}
	}

	for (int i=0; i < darkch.size[1]; i++) {
		int col_up, row_up;
		
		col_up = int(i+w);

		for (int j=0; j < darkch.size[0]; j++) {
			double minVal, maxVal;

			row_up = int(j+w);

			// cv::minMaxLoc only works on single-channel arrays, so make the
			// 3-channel patch continuous and reshape it to one channel first
			cv::minMaxLoc(padded.colRange(i, col_up).rowRange(j, row_up).clone().reshape(1), &minVal, &maxVal);
     
			darkch.at<float>(j,i) = minVal; //dark channel
			brightch.at<float>(j,i) = maxVal; //bright channel
		}
	}

	return std::make_pair(darkch, brightch);
}

The dark and bright channels are obtained by initializing two matrices with zeros and filling them with values from the image array; CV_32FC1 specifies the depth and number of channels of each element.

Padding of half the kernel size is applied to the image so that its size remains unchanged. We then iterate over the matrix: the lowest pixel value in each block sets the dark channel pixel, and the highest pixel value gives the bright channel pixel. cv::minMaxLoc finds the global minimum and maximum values in an array.
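To see the two channels in action, here is a small sanity check of the Python version on a made-up image (the values and window size below are hypothetical, chosen only for illustration):

Python

# Hypothetical sanity check: one bright pixel in an otherwise black 5x5 image
I = np.zeros((5, 5, 3))
I[2, 2, 1] = 1.0  # a single bright green pixel in the center
dark, bright = get_illumination_channel(I, 3)
print(dark.max())    # 0.0 -- the dark channel ignores the isolated bright pixel
print(bright[2, 2])  # 1.0 -- the bright channel picks it up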


Figure: Left - dark channel prior, right - bright channel prior.

3.2 Step 2: Calculate the global atmospheric light

The next step is to calculate the global atmospheric light. It is computed by averaging the image over the top 10% brightest pixels of the bright channel obtained above. Using 10% ensures that a few outliers do not have a large impact on the estimate.

Figure: Calculation of the global atmospheric light and its contribution to improving night image illumination.
Python

def get_atmosphere(I, brightch, p=0.1):
    M, N = brightch.shape
    flatI = I.reshape(M*N, 3) # reshaping image array
    flatbright = brightch.ravel() #flattening image array

    searchidx = (-flatbright).argsort()[:int(M*N*p)] # sorting and slicing
    A = np.mean(flatI.take(searchidx, axis=0), dtype=np.float64, axis=0)
    return A

To achieve this in code, the image is reshaped into a flat list of pixels and the bright channel is flattened and sorted by intensity. The indices of the top ten percent of pixels are selected, and the mean of the corresponding image pixels is taken.
C++

cv::Mat get_atmosphere(cv::Mat I, cv::Mat brightch, float p=0.1) {
	int N = brightch.size[0];
	int M = brightch.size[1];

        // flattening and reshaping image array
	cv::Mat flatI(cv::Size(1, N*M), CV_8UC3);
	std::vector<std::pair<float, int>> flatBright;

	for (int i=0; i < M; i++) {
		for (int j=0; j < N; j++) {
			int index = i*N + j;
			flatI.at<cv::Vec3b>(index, 0).val[0] = I.at<cv::Vec3b>(j, i).val[0];
			flatI.at<cv::Vec3b>(index, 0).val[1] = I.at<cv::Vec3b>(j, i).val[1];
			flatI.at<cv::Vec3b>(index, 0).val[2] = I.at<cv::Vec3b>(j, i).val[2];

			flatBright.push_back(std::make_pair(-brightch.at<float>(j, i), index));
		}
	}


        // sorting and slicing the array
	sort(flatBright.begin(), flatBright.end());

	cv::Mat A = cv::Mat::zeros(cv::Size(1, 3), CV_32FC1);

	for (int k=0; k < int(M*N*p); k++) {
		int sindex = flatBright[k].second;
		A.at<float>(0, 0) = A.at<float>(0, 0) + (float)flatI.at<cv::Vec3b>(sindex, 0).val[0];
		A.at<float>(1, 0) = A.at<float>(1, 0) + (float)flatI.at<cv::Vec3b>(sindex, 0).val[1];
		A.at<float>(2, 0) = A.at<float>(2, 0) + (float)flatI.at<cv::Vec3b>(sindex, 0).val[2];
	}

	A = A/int(M*N*p);

	return A/255;
}
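A small numeric illustration of the Python version (again with made-up values) shows that only the brightest region contributes to the estimate:

Python

# Hypothetical check: a mostly black 10x10 image with one very bright pixel
I = np.zeros((10, 10, 3))
I[0, 0] = [0.9, 0.8, 0.7]
_, bright = get_illumination_channel(I, 3)
A = get_atmosphere(I, bright, p=0.1)
print(A)  # roughly [0.09, 0.08, 0.07]: of the top-10% brightest locations,
          # only the single bright pixel contributes a nonzero value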

3.3 Step 3: Find the initial transmission map

The transmission map describes the portion of light that is not scattered and reaches the camera. In this algorithm, the initial transmission map is estimated from the bright channel prior using the following equation:

$$\tilde{t}(x) = \frac{I^{bright}(x) - A^{c}}{1 - A^{c}}$$

where $A^c$ is the maximum channel value of the atmospheric light $A$ (A_c in the code below).
Python

def get_initial_transmission(A, brightch):
    A_c = np.max(A)
    init_t = (brightch-A_c)/(1.-A_c) # initial transmission map
    return (init_t - np.min(init_t))/(np.max(init_t) - np.min(init_t)) # normalized to [0, 1]

In the code, this formula computes the initial transmission map, which is then normalized to the range [0, 1].
C++

cv::Mat get_initial_transmission(cv::Mat A, cv::Mat brightch) {
	double A_n, A_x, minVal, maxVal;
	cv::minMaxLoc(A, &A_n, &A_x);
	cv::Mat init_t(brightch.size(), CV_32FC1);
	init_t = brightch.clone();
        // Initial transmission diagram
	init_t = (init_t - A_x)/(1.0 - A_x);
	cv::minMaxLoc(init_t, &minVal, &maxVal);
        // Normalized initial transmission diagram
	init_t = (init_t - minVal)/(maxVal - minVal);

	return init_t;
}
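The formula is easy to verify on hand-picked, hypothetical values: a pixel exactly as bright as the atmospheric light maps to transmission 0, while a fully bright pixel maps to 1.

Python

A = np.array([0.8, 0.7, 0.6])    # hypothetical atmospheric light, so A_c = 0.8
bright = np.array([[0.8, 1.0]])  # two hand-picked bright-channel values
print(get_initial_transmission(A, bright))  # [[0. 1.]]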


Figure: Initial transmission map.

3.4 Step 4: Estimate the corrected transmission map using the dark channel

A transmission map is also computed from the dark channel prior, and the difference between the two priors is calculated. This correction fixes potentially erroneous transmission estimates obtained from the bright channel prior.

Any pixel $x$ whose value in the difference channel $I^{difference}$ is below the threshold alpha (0.4, determined empirically) lies on a dark object, which makes its estimated depth, and therefore its transmission, unreliable. Such unreliable transmission values are corrected by taking the product of the two transmission maps.
Python

def get_corrected_transmission(I, A, darkch, brightch, init_t, alpha, omega, w):
    im = np.empty(I.shape, I.dtype)
    for ind in range(0, 3):
        im[:, :, ind] = I[:, :, ind] / A[ind] #Divide the pixel value by atmospheric light
    dark_c, _ = get_illumination_channel(im, w) # dark channel of the normalized image
    dark_t = 1 - omega*dark_c # dark-channel transmission map
    corrected_t = init_t.copy() # start from the initial map (copy to avoid mutating it)
    diffch = brightch - darkch # difference between the two priors

    for i in range(diffch.shape[0]):
        for j in range(diffch.shape[1]):
            if(diffch[i, j] < alpha):
                corrected_t[i, j] = dark_t[i, j] * init_t[i, j]

    return np.abs(corrected_t)

We use the get_illumination_channel function created in the first code snippet to obtain the dark-channel transmission map. The omega parameter, usually set to 0.75, moderates the dark-channel correction. The corrected transmission map is initialized as a copy of the initial transmission map. Wherever the difference between the bright and dark channels is greater than alpha (0.4), the value stays the same as in the initial transmission map; wherever the difference is less than alpha, we take the product of the two transmission maps mentioned above.
C++

cv::Mat get_corrected_transmission(cv::Mat I, cv::Mat A, cv::Mat darkch, cv::Mat brightch, cv::Mat init_t, float alpha, float omega, int w) {
	cv::Mat im3(I.size(), CV_32FC3);
        //Divide the pixel value by atmospheric light
	for (int i=0; i < I.size[1]; i++) {
		for (int j=0; j < I.size[0]; j++) {
			im3.at<cv::Vec3f>(j, i).val[0] = (float)I.at<cv::Vec3b>(j, i).val[0]/A.at<float>(0, 0);
			im3.at<cv::Vec3f>(j, i).val[1] = (float)I.at<cv::Vec3b>(j, i).val[1]/A.at<float>(1, 0);
			im3.at<cv::Vec3f>(j, i).val[2] = (float)I.at<cv::Vec3b>(j, i).val[2]/A.at<float>(2, 0);
		}
	}

	cv::Mat dark_c, dark_t, diffch;

	std::pair<cv::Mat, cv::Mat> illuminate_channels = get_illumination_channel(im3, w);
        // Dark channel projection
	dark_c = illuminate_channels.first;
        // Modified dark channel transmission diagram
	dark_t = 1 - omega*dark_c;
	cv::Mat corrected_t = init_t.clone(); // copy so the initial map is not modified
	diffch = brightch - darkch; // difference between the two priors

	for (int i=0; i < diffch.size[1]; i++) {
		for (int j=0; j < diffch.size[0]; j++) {
			if (diffch.at<float>(j, i) < alpha) {
                                // Initialize the corrected transmission map with the initial transmission map
				corrected_t.at<float>(j, i) = abs(dark_t.at<float>(j, i)*init_t.at<float>(j, i)); 
			}
		}
	}

	return corrected_t;
}
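One property of the correction worth checking on hypothetical data: since dark_t is at most 1, the correction can only lower the transmission, never raise it.

Python

np.random.seed(0)
I = np.random.rand(8, 8, 3) * 0.3          # hypothetical dim image
A = np.array([0.8, 0.8, 0.8])
dark, bright = get_illumination_channel(I, 3)
init_t = get_initial_transmission(A, bright)
corr_t = get_corrected_transmission(I, A, dark, bright, init_t, 0.4, 0.75, 3)
print((corr_t <= init_t + 1e-9).all())     # True: correction only reduces values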


Figure: Corrected transmission map.

3.5 Step 5: Use the Guided Filter to smooth the transmission map

Let's look at the definition of the Guided Filter: like other filtering operations, it is a neighborhood operation, but when computing an output pixel value it also takes into account the statistics of the corresponding spatial neighborhood in the guide image.

Essentially, it is an edge-preserving smoothing filter. The original article uses an implementation from a GitHub repository. The filter is applied to the corrected transmission map obtained above to produce a refined transmission map.
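Since that external implementation is not reproduced here, below is a minimal single-channel sketch of a guided filter (following He et al.'s formulation) that can stand in for the guided_filter call used in the final pipeline. Collapsing a color guide to one channel is a simplifying assumption of this sketch; the repository version handles color guides directly. Alternatively, cv2.ximgproc.guidedFilter from opencv-contrib-python can be used.

Python

import cv2
import numpy as np

def guided_filter(guide, src, r, eps):
    # Minimal guided filter sketch (He et al.): smooth src while preserving
    # the edges present in guide. Single-channel only; a color guide is
    # collapsed to one channel here as a simplification.
    if guide.ndim == 3:
        guide = guide.mean(axis=2)
    guide = guide.astype(np.float64)
    src = src.astype(np.float64)

    mean_I = cv2.boxFilter(guide, -1, (r, r))         # local mean of the guide
    mean_p = cv2.boxFilter(src, -1, (r, r))           # local mean of the source
    corr_Ip = cv2.boxFilter(guide * src, -1, (r, r))
    corr_II = cv2.boxFilter(guide * guide, -1, (r, r))

    var_I = corr_II - mean_I * mean_I                 # local variance of the guide
    cov_Ip = corr_Ip - mean_I * mean_p                # local covariance guide/src

    a = cov_Ip / (var_I + eps)                        # per-window linear model
    b = mean_p - a * mean_I
    mean_a = cv2.boxFilter(a, -1, (r, r))
    mean_b = cv2.boxFilter(b, -1, (r, r))
    return mean_a * guide + mean_b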

Figure: Transmission map obtained after applying the Guided Filter.

Figure: Comparison between the various transmission maps.

3.6 Step 6: Compute the result image

The transmission map and the atmospheric light value are required to obtain the enhanced image. Now that we have both, we can apply the first equation to obtain the result.
Python

def get_final_image(I, A, refined_t, tmin):
    refined_t_broadcasted = np.broadcast_to(refined_t[:, :, None], (refined_t.shape[0], refined_t.shape[1], 3)) # Copy channels of 2D refined map to 3 channels
    J = (I-A) / (np.where(refined_t_broadcasted < tmin, tmin, refined_t_broadcasted)) + A # Get the final result

    return (J - np.min(J))/(np.max(J) - np.min(J)) # Normalized image

First, the single-channel refined transmission map is broadcast to three channels so that it has the same number of channels as the original image. Next, the output image is computed using the equation, clamping the transmission at tmin to avoid division by very small values. The image is then normalized between its minimum and maximum and returned from the function.
C++

cv::Mat get_final_image(cv::Mat I, cv::Mat A, cv::Mat refined_t, float tmin) {
	cv::Mat J(I.size(), CV_32FC3);

	for (int i=0; i < refined_t.size[1]; i++) {
		for (int j=0; j < refined_t.size[0]; j++) {
			float temp = refined_t.at<float>(j, i);

			if (temp < tmin) {
				temp = tmin;
			}
                        // finding result
			J.at<cv::Vec3f>(j, i).val[0] = (I.at<cv::Vec3f>(j, i).val[0] - A.at<float>(0,0))/temp + A.at<float>(0,0);
			J.at<cv::Vec3f>(j, i).val[1] = (I.at<cv::Vec3f>(j, i).val[1] - A.at<float>(1,0))/temp + A.at<float>(1,0);
			J.at<cv::Vec3f>(j, i).val[2] = (I.at<cv::Vec3f>(j, i).val[2] - A.at<float>(2,0))/temp + A.at<float>(2,0);
		}
	}

	double minVal, maxVal;
	cv::minMaxLoc(J, &minVal, &maxVal);

        // normalized image
	for (int i=0; i < J.size[1]; i++) {
		for (int j=0; j < J.size[0]; j++) {
			J.at<cv::Vec3f>(j, i).val[0] = (J.at<cv::Vec3f>(j, i).val[0] - minVal)/(maxVal - minVal);
			J.at<cv::Vec3f>(j, i).val[1] = (J.at<cv::Vec3f>(j, i).val[1] - minVal)/(maxVal - minVal);
			J.at<cv::Vec3f>(j, i).val[2] = (J.at<cv::Vec3f>(j, i).val[2] - minVal)/(maxVal - minVal);
		}
	}

	return J;
}
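A quick pixel-level check of the recovery equation on hand-picked, hypothetical values:

Python

I = np.array([[[0.2, 0.2, 0.2], [0.4, 0.4, 0.4]]])  # two gray pixels
A = np.array([0.5, 0.5, 0.5])
t = np.array([[0.5, 1.0]])
# pixel 1: (0.2-0.5)/0.5 + 0.5 = -0.1; pixel 2: (0.4-0.5)/1.0 + 0.5 = 0.4
print(get_final_image(I, A, t, 0.1))  # [[[0. 0. 0.] [1. 1. 1.]]] after normalization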


Figure: Final result.

4. Further improvement

Although the image is now full of color, it looks blurry. Sharpening will improve it. We can use cv2.detailEnhance() for this task, but it amplifies noise, so we follow it with cv2.edgePreservingFilter() to limit that. This function still leaves some noise, however, so if the image is noisy from the start, this step is not ideal.

To learn more about these techniques, see this paper.

Figure: Further enhanced image.

Figure: Comparison between the original image and the result.

5. Limitations

If a strong light source, such as a lamp, or a natural one, such as the moon, covers a large part of the image, this method performs poorly. Why is this a problem? Because such a light source inflates the atmospheric light estimate: when we pick the brightest 10% of pixels, these regions dominate, which leads to overexposure there.

This effect is shown more clearly in the following image.

To overcome this problem, let's analyze the initial transmission map produced by the bright channel.

Figure: Initial transmission map.
The task is to reduce the strong white spots that cause overexposure in these areas. This can be done by remapping high values (near 255) down to a much lower maximum.
Python

def reduce_init_t(init_t):
    init_t = (init_t*255).astype(np.uint8) 
    xp = [0, 32, 255]
    fp = [0, 32, 48]
    x = np.arange(256) # Create array [0,..., 255]
    table = np.interp(x, xp, fp).astype('uint8') # Interpolate fp in the x range according to xp
    init_t = cv2.LUT(init_t, table) # Lookup table
    init_t = init_t.astype(np.float64)/255 # normalize the transmission map back to [0, 1]
    return init_t

To achieve this in code, the transmission map is converted to the 0-255 range. A lookup table then interpolates the original values into the new, compressed range, reducing the impact of overexposed regions.
C++

cv::Mat reduce_init_t(cv::Mat init_t) {
	cv::Mat mod_init_t(init_t.size(), CV_8UC1);

	for (int i=0; i < init_t.size[1]; i++) {
		for (int j=0; j < init_t.size[0]; j++) {
			mod_init_t.at<uchar>(j, i) = std::min((int)(init_t.at<float>(j, i)*255), 255);
		}
	}

	int x[3] = {0, 32, 255};
	int f[3] = {0, 32, 48};

        // creating array [0,...,255]
	cv::Mat table(cv::Size(1, 256), CV_8UC1);

	//Linear Interpolation
	int l = 0;
	for (int k = 0; k < 256; k++) {
		if (k > x[l+1]) {
			l = l + 1;
		}

		float m  = (float)(f[l+1] - f[l])/(x[l+1] - x[l]);
		table.at<uchar>(k, 0) = (uchar)(f[l] + m*(k - x[l])); // table is CV_8UC1, so use uchar
	}

	//Lookup table
	cv::LUT(mod_init_t, table, mod_init_t);

	for (int i=0; i < init_t.size[1]; i++) {
		for (int j=0; j < init_t.size[0]; j++) {
                        // normalizing the transmission map
			init_t.at<float>(j, i) = (float)mod_init_t.at<uchar>(j, i)/255;
		}
	}

	return init_t;
}
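The remapping is easy to verify numerically with the Python version (hypothetical input values): a transmission of 32/255 is left alone, while a fully saturated value is squeezed down to 48/255.

Python

t = np.array([[32/255, 1.0]])
print(reduce_init_t(t))  # approximately [[0.1255 0.1882]], i.e. 32/255 and 48/255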

The following figure shows how this adjustment affects the visual representation of the pixels.

Figure: Graphical representation of reducing overexposure.

Figure: Changed transmission map.

Figure: Transmission map comparison.
We can see the difference between the image obtained using the method in the paper and the result obtained with the fix we just discussed.


Figure: Left - original image, center - enhanced image generated earlier, right - further enhanced image.

6. Results

The final step is to create a function that combines all the techniques into a single image-enhancement pipeline.
Python

def dehaze(I, tmin=0.1, w=15, alpha=0.4, omega=0.75, p=0.1, eps=1e-3, reduce=False):
    I = np.asarray(I, dtype=np.float64) # convert the input to a float array
    I = I[:, :, :3] / 255
    m, n, _ = I.shape
    Idark, Ibright = get_illumination_channel(I, w)
    A = get_atmosphere(I, Ibright, p)

    init_t = get_initial_transmission(A, Ibright) 
    if reduce:
        init_t = reduce_init_t(init_t)
    corrected_t = get_corrected_transmission(I, A, Idark, Ibright, init_t, alpha, omega, w)

    normI = (I - I.min()) / (I.max() - I.min())
    refined_t = guided_filter(normI, corrected_t, w, eps) # applying guided filter
    J_refined = get_final_image(I, A, refined_t, tmin)
    
    enhanced = (J_refined*255).astype(np.uint8)
    f_enhanced = cv2.detailEnhance(enhanced, sigma_s=10, sigma_r=0.15)
    f_enhanced = cv2.edgePreservingFilter(f_enhanced, flags=1, sigma_s=64, sigma_r=0.2)
    return f_enhanced
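For completeness, here is a short hypothetical driver for the Python version, mirroring the C++ main below (the input filename follows the C++ example; guided_filter is the helper sketched in step 5):

Python

im = cv2.imread("dark.png")                # low-light input image
f_enhanced = dehaze(im)                    # default parameters
f_enhanced_fixed = dehaze(im, reduce=True) # variant that suppresses bright light sources
cv2.imwrite("enhanced.png", f_enhanced)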

C++

int main() {
	cv::Mat img = cv::imread("dark.png");

	float tmin = 0.1;
	int w = 15;       
	float alpha = 0.4;  
	float omega = 0.75; 
	float p = 0.1;      
	double eps = 1e-3;   
	bool reduce = false;

	std::pair<cv::Mat, cv::Mat> illuminate_channels = get_illumination_channel(img, w);

	cv::Mat Idark = illuminate_channels.first;
	cv::Mat Ibright = illuminate_channels.second;

	cv::Mat A = get_atmosphere(img, Ibright);

	cv::Mat init_t = get_initial_transmission(A, Ibright);

	if (reduce) {
		init_t = reduce_init_t(init_t);
	}

	double minVal, maxVal;
        // Convert the input to a float array
	cv::Mat I(img.size(), CV_32FC3), normI;

	for (int i=0; i < img.size[1]; i++) {
		for (int j=0; j < img.size[0]; j++) {
			I.at<cv::Vec3f>(j, i).val[0] = (float)img.at<cv::Vec3b>(j, i).val[0]/255;
			I.at<cv::Vec3f>(j, i).val[1] = (float)img.at<cv::Vec3b>(j, i).val[1]/255;
			I.at<cv::Vec3f>(j, i).val[2] = (float)img.at<cv::Vec3b>(j, i).val[2]/255;
		}
	}

	cv::minMaxLoc(I, &minVal, &maxVal);

	normI = (I - minVal)/(maxVal - minVal);

	cv::Mat corrected_t = get_corrected_transmission(img, A, Idark, Ibright, init_t, alpha, omega, w); 

	cv::Mat refined_t(normI.size(), CV_32FC1);
        // applying guided filter (guidedFilter here is cv::ximgproc::guidedFilter from opencv_contrib)
	refined_t = guidedFilter(normI, corrected_t, w, eps);

	cv::Mat J_refined = get_final_image(I, A, refined_t, tmin);

	cv::Mat enhanced(img.size(), CV_8UC3);

	for (int i=0; i < img.size[1]; i++) {
		for (int j=0; j < img.size[0]; j++) {
			enhanced.at<cv::Vec3b>(j, i).val[0] = std::min((int)(J_refined.at<cv::Vec3f>(j, i).val[0]*255), 255);
			enhanced.at<cv::Vec3b>(j, i).val[1] = std::min((int)(J_refined.at<cv::Vec3f>(j, i).val[1]*255), 255);
			enhanced.at<cv::Vec3b>(j, i).val[2] = std::min((int)(J_refined.at<cv::Vec3f>(j, i).val[2]*255), 255);
		}
	}

	cv::Mat f_enhanced;

	cv::detailEnhance(enhanced, f_enhanced, 10, 0.15);
	cv::edgePreservingFilter(f_enhanced, f_enhanced, 1, 64, 0.2);

	cv::imshow("im", f_enhanced);
	cv::waitKey(0);
	return 0;
}

Look at the GIF below, which shows some other images enhanced with this algorithm.

Figure: Left - original image, right - enhanced image.

Figure: Left - original image, center - enhanced image generated earlier, right - further enhanced image.

7. Conclusion

In summary, we first examined the problems associated with images taken in low-light conditions. We then walked step by step through the method proposed by Shi et al. to enhance such images, and discussed further improvements to and limitations of the technique introduced in the paper.

The paper presents an excellent technique for improving illumination in low-light images. However, it only works well on images with reasonably uniform illumination. As promised, we also explained a workaround for its main limitation: images containing bright spots such as a full moon or lamps.

As a future improvement, the strength of this reduction could be controlled with a trackbar, which would help the user find and set the best values for enhancing an individual image.

Code download link: https://pan.baidu.com/s/1cMDB-fDu9CXc0zJ97RT8eA (extraction code: 123a)

Reference

https://learnopencv.com/improving-illumination-in-night-time-images/
