GCNv2 + VS2017 Deployment Guide

Deployment environment

  Windows 10

  VS 2017

  libtorch 1.7 + cu101

Step 1: Configure the libtorch library

Version selection

    When using torch::jit::load to load a model in C++, the PyTorch version that generated the .pt model must match the libtorch version in the environment, or at least the libtorch version must be no older than the PyTorch version. Otherwise a c10::Error is thrown when the load runs.

   In my experience deploying GCNv2, the reason for the error is that the .pt file also contains Python code. If you use one version of libtorch to execute Python code generated by a different version, syntax differences are likely to cause errors; in VS2017 these surface as c10::Error.

   For example, the GCNv2 model file was generated with PyTorch 1.0.1, while we load it with libtorch 1.7.0 this time. The Python code inside the model file therefore contains syntax that is incompatible with libtorch 1.7.0, so we need to modify the model code in Step 2.
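The compatibility rule above can be sketched as a simple version comparison. This helper is purely illustrative (it is not part of libtorch or PyTorch); version strings follow pip's wheel naming, e.g. "1.7.0+cu101":

```python
def version_tuple(v):
    """Parse a version like '1.7.0+cu101' into (1, 7, 0), ignoring the build tag."""
    core = v.split("+")[0]
    return tuple(int(part) for part in core.split("."))

def libtorch_can_load(model_pytorch_version, libtorch_version):
    """The libtorch runtime must be at least as new as the PyTorch that
    exported the .pt model, or torch::jit::load may raise c10::Error."""
    return version_tuple(libtorch_version) >= version_tuple(model_pytorch_version)
```

For the versions in this guide, `libtorch_can_load("1.0.1", "1.7.0+cu101")` holds, which is why loading works at all once the embedded Python code is patched.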

    We use libtorch 1.7.0 + cu101 this time because the libtorch 1.0.1 recommended by the author on GitHub is too old and must be built from source, downloaded with the following command:

git clone --recursive -b v1.0.1 https://github.com/pytorch/pytorch

  The method above is slow; the main problem is that the third-party submodules download slowly and incompletely. Similar problems exist when configuring the Pangolin library; I eventually configured Pangolin's third-party dependencies manually. Please refer to another blog post for that.

   Although a colleague provided a network-disk download of this version, building and compiling it from source is troublesome, so the newer version is adopted directly. If you must run version 1.0.1, you can refer to this blog post: https://blog.csdn.net/qq_35590091/article/details/103181008

Library Download

   For libtorch 1.7.0 + cu101 we can simply reuse the library from a Python environment. First, download it from the command line:

pip install torch==1.7.0+cu101 -f  https://download.pytorch.org/whl/torch_stable.html

  Then go to the site-packages directory of the Python installation, find the torch folder, and copy its lib and include folders into a new directory named libtorch for later use.
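The copy step can also be scripted. The following is a sketch using only the standard library; the source and destination paths in the usage example are assumptions and depend on your Python installation:

```python
import shutil
from pathlib import Path

def stage_libtorch(torch_dir, dest_dir):
    """Copy the lib/ and include/ folders of an installed torch package
    into a standalone libtorch directory for use from VS2017."""
    torch_dir = Path(torch_dir)
    dest_dir = Path(dest_dir)
    dest_dir.mkdir(parents=True, exist_ok=True)
    for name in ("lib", "include"):
        # dirs_exist_ok requires Python 3.8+
        shutil.copytree(torch_dir / name, dest_dir / name, dirs_exist_ok=True)
```

A hypothetical invocation would be `stage_libtorch(r"C:\Python38\Lib\site-packages\torch", r"D:\libtorch")`.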

Integrating into VS2017

  I put the libtorch directory into the 3rd folder of the VS2017 solution, and created a new property sheet in the project. The entries to add are as follows:

VC++ Directories / Include Directories:

VC++ Directories / Library Directories: ..\..\external\libjpeg\lib

Linker / Input / Additional Dependencies:

   Note 1: the names of all *.lib files in the lib folder above can be generated with a batch command, which is faster. From the command line, in the lib folder, execute:

dir /b *.lib >libs.txt 

    A libs.txt file will be generated in the directory, containing the names of all the .lib files.
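If you prefer Python, the same list can be produced with a short stdlib-only script (an equivalent sketch of the `dir` command above):

```python
from pathlib import Path

def list_libs(lib_dir):
    """Return the file names of all .lib files in lib_dir, sorted,
    ready to paste into Linker / Input / Additional Dependencies."""
    return sorted(p.name for p in Path(lib_dir).glob("*.lib"))
```

Writing the result to a file with `"\n".join(list_libs(r"D:\libtorch\lib"))` reproduces the libs.txt content (the path is an example, not a requirement).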

  Note 2: the CUDA version on my machine is 10.1, so I downloaded libtorch 1.7.0 + cu101. Although I have not tested otherwise, I believe the CUDA version of the libtorch 1.7.0 + cuXXX build you download should match the one installed on your machine.

   In addition, since CUDA is used, the following parameter needs to be added under Project Properties \ Linker \ Command Line \ Additional Options, otherwise a c10::Error is reported when loading the model:
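The screenshot with the exact option did not survive this copy of the post. Based on PyTorch's Windows documentation for CUDA builds of libtorch from this era, the option is most likely the /INCLUDE switch that forces the CUDA initialization symbol to be linked in; treat this as an assumption and verify it against the libtorch version you actually use:

```
/INCLUDE:?warp_size@cuda@at@@YAHXZ
```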


Step 2: Code modification

   Since the version of libtorch we use differs from the version used by the author of GCNv2, the relevant code must be adapted. We modify two things: the Python code inside the .pt model, and the GCNv2 code itself.

Model code modification

  First, we need to find out which parts of the model code require modification. Start VS Code, use the Python environment with torch 1.7.0 + cu101 installed, and run the following code to load the model:

import torch

model = torch.jit.load('E:\\Program Files (x86)\\Microsoft Visual Studio\\MyProjects\\GCNv2\\models\\back\\gcn2_320x240.pt')

  An error is then reported; the traceback identifies the failing line.

    From the traceback we can locate the specific error: line 67 of the code/gcn.py file inside the model archive. A .pt file is actually a zip archive, so we can open gcn2_320x240.pt with WinRAR and find gcn.py inside.

  Locate line 67, where the error occurs:

  _32 = torch.squeeze(torch.grid_sampler(input, grid, 0, 0))

  Referring to the error message, we can see that the torch.grid_sampler function is missing an align_corners argument. Following another colleague's blog (https://blog.csdn.net/weixin_45650404/article/details/106085719), we add True as the last argument of the function:

  _32 = torch.squeeze(torch.grid_sampler(input, grid, 0, 0, True))

  Load the model again and the error disappears. At this point the model code modification is complete.
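Since a TorchScript .pt file is just a zip archive, the same edit can be scripted instead of using WinRAR. The following is a sketch using only the standard library; the call pattern being replaced is taken from the error above, and you should re-verify that the patched model still loads:

```python
import zipfile

def patch_grid_sampler(pt_in, pt_out):
    """Copy a TorchScript .pt archive, appending the missing align_corners
    argument to every torch.grid_sampler(input, grid, 0, 0) call found in
    the embedded .py members."""
    with zipfile.ZipFile(pt_in) as zin, zipfile.ZipFile(pt_out, "w") as zout:
        for item in zin.infolist():
            data = zin.read(item.filename)
            if item.filename.endswith(".py"):
                text = data.decode("utf-8").replace(
                    "torch.grid_sampler(input, grid, 0, 0)",
                    "torch.grid_sampler(input, grid, 0, 0, True)",
                )
                data = text.encode("utf-8")
            # Non-.py members (weights, pickles) are copied through unchanged.
            zout.writestr(item, data)
```

For example, `patch_grid_sampler("gcn2_320x240.pt", "gcn2_320x240_fixed.pt")` leaves the original file untouched and writes the patched copy alongside it.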

GCNv2 code modification

   The code changes mainly involve GCNextractor.cc and GCNextractor.h, as follows:

  First, in the GCNextractor.h file, modify the protected member module of the GCNextractor class. The original code is:

    std::shared_ptr<torch::jit::script::Module> module;

  amend to read:

	torch::jit::Module module;
	// std::shared_ptr<torch::jit::script::Module> module;

  Second, in the GCNextractor.cc file, modify the model-loading code in the constructor. The original code is:

    const char *net_fn = getenv("GCN_PATH");
    net_fn = (net_fn == nullptr) ? "gcn2.pt" : net_fn;
    module = torch::jit::load(net_fn);

  amend to read:

	const char *net_fn = "D:\\gcn2_320x240.pt";
    module = torch::jit::load(net_fn);

   At the same time, in the overloaded parenthesis operator, modify the line that runs model inference:

    auto output = module->forward(inputs).toTuple();

  amend to read:

    auto output = module.forward(inputs).toTuple();

   There are two more points to note. First, in the latest version of the code, the GCNv2 author added adaptation for newer versions of libtorch. The relevant code in the overloaded parenthesis operator is:

    #if defined(TORCH_NEW_API)
        std::vector<int64_t> dims = {1, img_height, img_width, 1};
        auto img_var = torch::from_blob(img.data, dims, torch::kFloat32).to(device);
        img_var = img_var.permute({0,3,1,2});
    #else
        auto img_tensor = torch::CPU(torch::kFloat32).tensorFromBlob(img.data, {1, img_height, img_width, 1});
        img_tensor = img_tensor.permute({0,3,1,2});
        auto img_var = torch::autograd::make_variable(img_tensor, false).to(device);
    #endif

  It can be seen that we need to add a TORCH_NEW_API definition under Preprocessor Definitions in the project properties.

  Second, pay attention to modifying the code according to the size of the input image. In the same function there is the following code:

    int img_width = 320;
    int img_height = 240;

    int border = 8;
    int dist_thresh = 4;

    if (getenv("FULL_RESOLUTION") != nullptr)
    {
        img_width = 640;
        img_height = 480;

        border = 16;
        dist_thresh = 8;
    }

  As can be seen, the default input image size is 320 × 240, exactly half the image size of the TUM dataset. The size must be adjusted for different models. Because the model we test is gcn2_320x240.pt, the size already matches and nothing changes. But if you want to test the gcn2_640x480.pt model, it must be modified. The modified code can look like this:

    int img_width = 320;
    int img_height = 240;

    int border = 8;
    int dist_thresh = 4;
	bool fullResolution = true;

    if (getenv("FULL_RESOLUTION") != nullptr || fullResolution)
    {
        img_width = 640;
        img_height = 480;

        border = 16;
        dist_thresh = 8;
    }

Step 3: Run test

   Create a new VS2017 project, add GCNextractor.h and GCNextractor.cc, and write the following sample code for testing:

#include <iostream>
#include <opencv2/opencv.hpp>
#include "GCNextractor.h"

using namespace std;
using namespace GCN;

int main()
{
	GCNextractor* mpGCNextractor;

	int nFeatures = 1200;
	float fScaleFactor = 1.2f;
	int nLevels = 8;
	int fIniThFAST = 20;
	int fMinThFAST = 7;

	mpGCNextractor = new GCNextractor(nFeatures, fScaleFactor, nLevels, fIniThFAST, fMinThFAST);

	std::cout << "loaded deep model" << std::endl;

	cv::Mat frame1 = cv::imread("D:\\before.png", cv::IMREAD_GRAYSCALE);
	cv::Mat frame2 = cv::imread("D:\\after.png", cv::IMREAD_GRAYSCALE);

	cv::resize(frame1, frame1, cv::Size(640, 480));
	cv::resize(frame2, frame2, cv::Size(640, 480));

	std::vector<cv::KeyPoint> mvKeys, mvKeys2;
	cv::Mat mDescriptors, mDescriptors2;

#ifdef ORB_USE
	// Baseline: detect and describe with ORB.
	cv::Ptr<cv::ORB> detector = cv::ORB::create();
	detector->detectAndCompute(frame1, cv::Mat(), mvKeys, mDescriptors);
	detector->detectAndCompute(frame2, cv::Mat(), mvKeys2, mDescriptors2);
#else
	// Otherwise: detect and describe with GCNv2.
	(*mpGCNextractor)(frame1, cv::Mat(), mvKeys, mDescriptors);
	(*mpGCNextractor)(frame2, cv::Mat(), mvKeys2, mDescriptors2);
#endif

	cv::Mat outImg;
	cv::drawKeypoints(frame1, mvKeys, outImg, cv::Scalar::all(-1), cv::DrawMatchesFlags::DEFAULT);

	std::vector<cv::DMatch> matches;
	cv::BFMatcher matcher(cv::NORM_HAMMING);
	matcher.match(mDescriptors, mDescriptors2, matches, cv::Mat());

	double min_dist = 10000, max_dist = 0;
	for (int i = 0; i < mDescriptors.rows; i++) {
		double dist = matches[i].distance;
		if (dist < min_dist) min_dist = dist;
		if (dist > max_dist) max_dist = dist;
	}

	std::vector<cv::DMatch> goodMatches;
	for (int i = 0; i < mDescriptors.rows; i++) {
		if (matches[i].distance <= max(2 * min_dist, 30.0))
			goodMatches.push_back(matches[i]);
	}

	cv::Mat matchesImage2;
	cv::drawMatches(frame1, mvKeys, frame2, mvKeys2, goodMatches, matchesImage2);
	cv::imshow("curr-prev", matchesImage2);
	cv::imshow("kp", outImg);
	cv::waitKey(0);

	return 0;
}

  You can use the predefined macro ORB_USE to switch between ORB and GCNv2 and compare them. The results are as follows:



  It can be seen that GCNv2 achieves the better matching result on these two images.

Tags: Computer Vision Deep Learning slam windows10

Posted on Sun, 24 Oct 2021 03:53:37 -0400 by mw-dnb