OpenCV advanced -- image transformation

Continuing from the previous section:

5, Image perspective transformation

        Perspective transformation re-projects an object onto a new imaging plane according to the laws of projective imaging. The relationship between the image before and after the perspective transformation can be described by a 3×3 transformation matrix, which can be computed from the coordinates of four corresponding points in the two images; for this reason, perspective transformation is also called the "four-point transformation". OpenCV provides the getPerspectiveTransform() function to obtain the transformation matrix from four pairs of corresponding points and the warpPerspective() function to apply the perspective transformation.

getPerspectiveTransform() function prototype:

Mat cv::getPerspectiveTransform(const Point2f src[],
                                const Point2f dst[],
                                int solveMethod = DECOMP_LU
                                )

src[]: coordinates of the 4 points in the source image.

dst[]: coordinates of the corresponding 4 points in the destination image.

solveMethod: flag selecting the method used to compute the perspective transformation matrix. The available values and their meanings are listed in the table below.

solveMethod flags of the getPerspectiveTransform() function
Flag            | Value | Meaning
DECOMP_LU       | 0     | Gaussian elimination with optimal pivot element selection
DECOMP_SVD      | 1     | Singular value decomposition (SVD) method
DECOMP_EIG      | 2     | Eigenvalue decomposition method
DECOMP_CHOLESKY | 3     | Cholesky factorization
DECOMP_QR       | 4     | QR factorization
DECOMP_NORMAL   | 16    | Use normal equations; can be combined with the flags above
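As a quick illustration, the short sketch below (the point coordinates are made up for demonstration) computes the 3×3 matrix from four point pairs and passes one of the flags from the table explicitly; solveMethod is optional and defaults to DECOMP_LU:

#include<opencv2/opencv.hpp>
#include<iostream>

using namespace std;
using namespace cv;

int main()
{
	//Four corresponding point pairs (coordinates invented for demonstration)
	Point2f srcPts[4] = { Point2f(94, 374), Point2f(507, 380), Point2f(1, 623), Point2f(627, 627) };
	Point2f dstPts[4] = { Point2f(0, 0), Point2f(512, 0), Point2f(0, 512), Point2f(512, 512) };

	//Compute the 3x3 perspective matrix using SVD instead of the default DECOMP_LU
	Mat M = getPerspectiveTransform(srcPts, dstPts, DECOMP_SVD);
	cout << "Perspective matrix M = " << endl << M << endl;
	return 0;
}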

Prototype of the warpPerspective() function:

void cv::warpPerspective(InputArray src,
                         OutputArray dst,
                         InputArray M,
                         Size dsize,
                         int flags = INTER_LINEAR,
                         int borderMode = BORDER_CONSTANT,
                         const Scalar& borderValue = Scalar()
                         )

src: input image.

dst: output image after the perspective transformation, with the same data type as src and the size specified by dsize.

M: the 3×3 transformation matrix.

dsize: the size of the output image.

flags: interpolation method flag.

borderMode: the flag of the pixel boundary extrapolation method.

borderValue: the value used to fill the boundary. By default, it is 0.

The parameters of this function have the same meaning as those of the warpAffine() function described above, so they are not repeated here.
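Since this part does not come with its own sample, here is a minimal sketch that ties getPerspectiveTransform() and warpPerspective() together. The image path and the four corner coordinates are assumptions chosen for illustration; in practice the source points are usually the detected corners of the region you want to rectify:

#include<opencv2/opencv.hpp>
#include<iostream>

using namespace std;
using namespace cv;

int main()
{
	//The image path is an assumption; replace it with an existing file
	Mat img = imread("D:\\lena.jpg");
	if (img.empty())
	{
		cout << "Please make sure the file name of the image is correct" << endl;
		return -1;
	}

	//Four corners of a (hypothetical) tilted region in the source image
	Point2f srcPts[4] = { Point2f(50, 80), Point2f(420, 60), Point2f(30, 460), Point2f(450, 440) };
	//Where those corners should land in the output: a 400 x 400 upright rectangle
	Point2f dstPts[4] = { Point2f(0, 0), Point2f(399, 0), Point2f(0, 399), Point2f(399, 399) };

	//Compute the 3x3 matrix and warp the image onto the new plane
	Mat M = getPerspectiveTransform(srcPts, dstPts);
	Mat rectified;
	warpPerspective(img, rectified, M, Size(400, 400));

	imshow("Source image", img);
	imshow("Perspective transformation", rectified);
	waitKey(0);
	return 0;
}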

6, Polar coordinate transformation

        Polar coordinate transformation converts an image between the Cartesian (rectangular) coordinate system and the polar coordinate system; it can unwrap a circular image into a rectangular one, and is commonly used for images such as clock faces, dials and discs. OpenCV provides the warpPolar() function to perform the polar transformation of an image. Its prototype is as follows:

void cv::warpPolar(InputArray src,
                   OutputArray dst,
                   Size dsize,
                   Point2f center,
                   double maxRadius,
                   int flags
                   )

src: source image.

dst: output image after polar coordinate transformation, with the same data type and number of channels as the source image.

dsize: target image size.

center: the center (origin) of the polar coordinate system in the source image.

maxRadius: the radius of the boundary circle during transformation, which also determines the scale parameter during inverse transformation.

flags: a combination of an interpolation method flag and a polar mapping method flag; the two flags are combined with "+" or "|". The polar mapping flags are listed in the table below.

Polar mapping method flags of the warpPolar() function
Flag              | Meaning
WARP_POLAR_LINEAR | Linear polar coordinate transformation
WARP_POLAR_LOG    | Semi-log polar coordinate transformation
WARP_INVERSE_MAP  | Inverse transformation

Code example:

#include<opencv2/opencv.hpp>
#include<iostream>
#include<vector>

using namespace std;
using namespace cv;

int main()
{	
	Mat img = imread("D:\\lena.jpg");
	if (img.empty())
	{
		cout << "Please make sure the file name of the image is correct" << endl;
		return -1;
	}

	Mat img1, img2;
	Point2f center = Point2f(img.cols / 2.0f, img.rows / 2.0f);      //Origin of the polar coordinate system in the image
	//Forward polar transformation
	warpPolar(img, img1, Size(300, 600), center, center.x, INTER_LINEAR + WARP_POLAR_LINEAR);
	//Inverse polar transformation: map the polar image back to the original plane
	warpPolar(img1, img2, Size(img.cols, img.rows), center, center.x, INTER_LINEAR + WARP_POLAR_LINEAR + WARP_INVERSE_MAP);

	imshow("Source image", img);
	imshow("Forward polar transformation", img1);
	imshow("Inverse polar transformation", img2);
	waitKey(0);
	return 0;
}
