Training, testing and encapsulation of YOLOv3 under Linux + OpenCV 3.4.0

Over the winter break I needed YOLOv3 for a project. There are plenty of Linux configuration tutorials online, and many bloggers cover the setup in detail, so here I will focus on encapsulating YOLOv3 under Linux. Since we cannot hand our source code to others, everything is packaged into a dynamic link library for them to use. While doing this I found the online tutorials thin on the Linux encapsulation details in particular, so I am writing this supplementary note, hoping to save others some detours and to record the first time I completed this work on my own.
PS: my own computer is underpowered and has no CUDA support. I tried training with the CPU under Windows on my machine; it was painfully slow, so I gave up and trained on my advisor's server with a Titan V instead, which was much faster.

1, Deploying YOLOv3 under Linux (CPU & GPU) to train and test your own data

1. Making datasets
Prepare the images (I used jpg format) and annotate them with the labelImg tool. Just change predefined_classes.txt under labelImg's data folder to your own categories; after each image is annotated, a corresponding xml file is generated.

labelImg is a tool that can be downloaded on the Internet. If it can't be found, I saved it on Baidu's online disk. Link: https://pan.baidu.com/s/17BbDw6RTzaFgsKhcow5lQQ
Extraction code: e6h6

Create your own data folder VOCdevkit in the darknet root directory. Inside it create a folder VOC2007, and inside that create three folders: Annotations (the xml annotation file for each image), JPEGImages (the image files), and ImageSets (which contains a folder Main storing the file names, without extensions, of the training and validation images). [You can name the folders yourself, as long as you use the same names when writing paths later.]
The directory structure looks like this (reconstructed from the description above):
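VOCdevkit
└── VOC2007
    ├── Annotations      # xml annotation files
    ├── ImageSets
    │   └── Main         # txt lists of image names
    └── JPEGImages       # jpg image files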

Put the test.py file in the VOC2007 folder and run:

python test.py               #Run test.py

After that, four txt files are generated in the ImageSets/Main folder. Each contains image names only, one per line, without extensions.
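For example, trainval.txt might look like this (the actual names come from your own image files):

000005
000012
000019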

The test.py file is as follows:

###### test.py
import os
import random

trainval_percent = 0.1   # Fraction of all images held out as trainval (split into test/val below)
train_percent = 0.9      # Fraction of the trainval subset written to test.txt
xmlfilepath = 'Annotations'
txtsavepath = 'ImageSets/Main'
total_xml = os.listdir(xmlfilepath)

num = len(total_xml)
indices = range(num)
tv = int(num * trainval_percent)
tr = int(tv * train_percent)
trainval = random.sample(indices, tv)
train = random.sample(trainval, tr)

ftrainval = open('ImageSets/Main/trainval.txt', 'w')
ftest = open('ImageSets/Main/test.txt', 'w')
ftrain = open('ImageSets/Main/train.txt', 'w')
fval = open('ImageSets/Main/val.txt', 'w')

# Roughly 90% of the images end up in train.txt; the remaining 10%
# go to trainval.txt, which is further split into test.txt and val.txt.
for i in indices:
    name = total_xml[i][:-4] + '\n'   # strip the .xml extension
    if i in trainval:
        ftrainval.write(name)
        if i in train:
            ftest.write(name)
        else:
            fval.write(name)
    else:
        ftrain.write(name)

ftrainval.close()
ftrain.close()
fval.close()
ftest.close()

2. Deploying YOLOv3 under Linux
(1) Download the darknet project, and put the VOCdevkit folder made in the previous step inside it.

git clone https://github.com/pjreddie/darknet   #Download the project

(2) Place the labels.py file in the root directory of darknet and run

####### labels.py
import xml.etree.ElementTree as ET
import os
from os import listdir, getcwd
from os.path import join

sets=[('2007', 'train'), ('2007', 'trainval'), ('2007', 'test'), ('2007', 'val')]
#Category names; modify as needed
classes = ["chest", "upper_body", "whole_body"]

def convert(size, box):
    dw = 1./(size[0])
    dh = 1./(size[1])
    x = (box[0] + box[1])/2.0 - 1
    y = (box[2] + box[3])/2.0 - 1
    w = box[1] - box[0]
    h = box[3] - box[2]
    x = x*dw
    w = w*dw
    y = y*dh
    h = h*dh
    return (x,y,w,h)

def convert_annotation(year, image_id):
    in_file = open('VOCdevkit/VOC%s/Annotations/%s.xml'%(year, image_id))
    out_file = open('VOCdevkit/VOC%s/labels/%s.txt'%(year, image_id), 'w')
    tree=ET.parse(in_file)
    root = tree.getroot()
    size = root.find('size')
    w = int(size.find('width').text)
    h = int(size.find('height').text)

    for obj in root.iter('object'):
        difficult = obj.find('difficult').text
        cls = obj.find('name').text
        if cls not in classes or int(difficult)==1:
            continue
        cls_id = classes.index(cls)
        xmlbox = obj.find('bndbox')
        b = (float(xmlbox.find('xmin').text), float(xmlbox.find('xmax').text), float(xmlbox.find('ymin').text), float(xmlbox.find('ymax').text))
        bb = convert((w,h), b)
        out_file.write(str(cls_id) + " " + " ".join([str(a) for a in bb]) + '\n')

wd = getcwd()

for year, image_set in sets:
    if not os.path.exists('VOCdevkit/VOC%s/labels/'%(year)):
        os.makedirs('VOCdevkit/VOC%s/labels/'%(year))
    image_ids = open('VOCdevkit/VOC%s/ImageSets/Main/%s.txt'%(year, image_set)).read().strip().split()
    list_file = open('%s_%s.txt'%(year, image_set), 'w')
    for image_id in image_ids:
        list_file.write('%s/VOCdevkit/VOC%s/JPEGImages/%s.jpg\n'%(wd, year, image_id))
        convert_annotation(year, image_id)
    list_file.close()

python labels.py     #Run labels.py

After running it, four txt files are generated in the root directory: 2007_train.txt, 2007_trainval.txt, 2007_test.txt and 2007_val.txt. They store the full paths of the images in the training and validation sets, one path per line.
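Each generated file looks something like this (the absolute paths depend on where your darknet directory lives):

/home/user/darknet/VOCdevkit/VOC2007/JPEGImages/000005.jpg
/home/user/darknet/VOCdevkit/VOC2007/JPEGImages/000012.jpg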

Use the following command to merge files:

cat 2007_train.txt 2007_trainval.txt > train.txt

(3) Create obj.names to store all categories
Create obj.names in the data folder of the project, and enter the names of all the categories in your training set, one per line. I have three categories in total, so I write three lines.
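With the three categories used in labels.py above, obj.names is simply:

chest
upper_body
whole_body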

(4) Create obj.data to store the paths of the related files
Create obj.data in the cfg folder of the project. It holds the paths of the training set, the validation set, the category-name file, the weight backup folder, and so on. You can follow the original coco.data and just change the paths. Mine ended up along these lines (a sketch following the coco.data format; adjust the paths and file names to your own setup):
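classes = 3
train = train.txt
valid = 2007_test.txt
names = data/obj.names
backup = backup/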
(5) Create yolov3-obj.cfg
Create yolov3-obj.cfg under the cfg folder of the project; it holds the configuration of the whole network. You can copy the original yolov3.cfg file and modify it. The modified parts are as follows (see the snippet after this list):

  • Modify batch=64, subdivisions=16
  • Modify three places (each [yolo] layer and the [convolutional] layer immediately before it) according to your data:
    set classes=3 in each [yolo] layer (the total number of classes)
    set filters=24 in each preceding [convolutional] layer
    [note] filters = 3 * (5 + classes)
  • At the end of each [yolo] block, set random=1 (multi-scale training)
  • If necessary, you can also modify the width and height of the training images, the learning rate, and the maximum number of iterations.
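For one of the three [convolutional]/[yolo] pairs, the changed lines look roughly like this (a sketch; all other lines stay as in yolov3.cfg, and the same edit is repeated for all three pairs):

[convolutional]
# filters = 3 * (5 + classes) = 3 * (5 + 3) = 24
filters=24

[yolo]
# total number of classes in the dataset
classes=3
# multi-scale training
random=1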

(6) Download the pre-trained model darknet53.conv.74 and start training

wget https://pjreddie.com/media/files/darknet53.conv.74   #Download the pre-trained model
./darknet detector train cfg/obj.data cfg/yolov3-obj.cfg darknet53.conv.74

For GPU training you only need to modify the Makefile; after modifying it, compile with "make", then train with the command above.
The typical changes at the top of the Makefile are (a sketch; enable what your machine supports):
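GPU=1
CUDNN=1
OPENCV=1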

(7) Detection with trained model
After a long wait, once training finishes there is a backup folder in the darknet directory holding the weights files saved during training; these weights files are used for detection.

./darknet detector test cfg/obj.data cfg/yolov3-obj.cfg backup/yolov3-obj_10000.weights data/1.jpg
./darknet detect cfg/yolov3-obj.cfg backup/yolov3-obj_10000.weights data/1.jpg  #(this command reads coco.names by default; replacing the contents of coco.names with your own category names works too)

3. Possible problems
1. Training process
There is another, more complete fork of darknet by AlexeyAB: https://github.com/AlexeyAB/darknet
If you use this version, the Makefile offers more options, for example:
GPU acceleration: GPU=1, CUDNN=1, CUDNN_HALF=1
CPU acceleration: OPENMP=1 (multithreading), AVX=1 (instruction set)
2. Test process
(1) Problem description: error while loading shared libraries: libcudart.so.10.2: cannot open shared object file: No such file or directory
Solution (run as a user with sudo permission):
sudo cp /usr/local/cuda-10.2/lib64/libcudart.so.10.2 /usr/local/lib/libcudart.so.10.2 && sudo ldconfig

2, Encapsulation of YOLOv3 under Linux

1. For this encapsulation I used the Linux part of the complete darknet project. That project already encapsulates YOLOv3: the interface file is yolo_v2_class.hpp, where the author wraps everything into a Detector class. But because my project has to hand a fixed-format header file to someone else, I built a further layer of .h/.cpp encapsulation on top of that class.

The interface file yolo_v2_class.hpp puts the network-loading code into the class constructor, which means the network is reloaded every time a detector is created; in a real project that wastes a lot of time. So I pulled that part out and defined the detector as a global variable, so that when many images are detected one after another, the model is loaded only once. An abridged sketch of the interface (check the header in your own copy of the project for the full declaration):
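//yolo_v2_class.hpp (excerpt)
class Detector {
public:
    //The constructor loads the network from the cfg and weights files
    Detector(std::string cfg_filename, std::string weight_filename, int gpu_id = 0);
    ~Detector();

    //Detect objects in an image file, or in an image already in memory
    std::vector<bbox_t> detect(std::string image_filename, float thresh = 0.2, bool use_mean = false);
    std::vector<bbox_t> detect(image_t img, float thresh = 0.2, bool use_mean = false);
};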

My encapsulation under Linux goes as follows: move the yolo_v2_class.hpp file from the original include folder to the src folder, and add the following files under the src folder: yolo_init.h, yolo_init.cpp, yolo_proc.cpp.
Under the include folder, add the following files: yolo_proc.h, yolo_data.h.
yolo_init.h and yolo_init.cpp load the model;
yolo_proc.h and yolo_proc.cpp implement the actual detection and some post-processing of the results;
yolo_data.h defines the data structures used.

//yolo_data.h
#ifndef _YOLO_DATA_H_
#define _YOLO_DATA_H_

#pragma pack(push)
#pragma pack(1)

//yolo reads coco.names by default; changing that would mean modifying the source code, so I simply replaced the contents of coco.names with my own categories
//Parameter definitions
#define NET_COCO "coco.names"                    //Model category file
#define NET_CFG "yolov3-obj.cfg"                 //Model profile
#define NET_WEIGHTS "yolov3-obj_10000.weights"   //Model weight file

// Definition of categories
enum TYPE
{
    TYPE_CHEST = 0x00,   
    TYPE_UPPER_BODY,    
    TYPE_WHOLE_BODY,       
    TYPE_TOTAL_NUM	
};

// Detection result
typedef struct tagTYPE_RESULT {
    int x1, x2, y1, y2;            
    TYPE type;    
}TYPE_RESULT;

#pragma pack(pop)

#endif
//yolo_init.h
#ifndef _YOLO_INIT_H_
#define _YOLO_INIT_H_

#include "yolo_v2_class.hpp"

Detector initalize_yolo();

#endif
//yolo_init.cpp
#include <iostream>
#include "yolo_init.h"
#include "yolo_data.h"
using namespace std;

Detector initalize_yolo() {
	const char* names_file = NET_COCO;
	const char* cfg_file = NET_CFG;
	const char* weights_file = NET_WEIGHTS;
	Detector detector(cfg_file, weights_file, 0); //Initialize detector
	return detector;
}
//yolo_proc.h
#ifndef _PROC_H_
#define _PROC_H_

#include "yolo_data.h"

#pragma GCC visibility push(default)
//Detection function
int Recognition(const char *filename, TYPE_RESULT *pRes);

#pragma GCC visibility pop

#endif
//yolo_proc.cpp
#include <iostream>
#include <fstream>
#include <unistd.h>
#include <sys/stat.h>
#include <sys/types.h>
#include <algorithm>
#include <string>
#include <opencv2/opencv.hpp>
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/highgui/highgui_c.h"
#include "yolo_init.h"
#include "yolo_data.h"
#include "yolo_proc.h"
using namespace std;

Detector detector = initalize_yolo();      //Load model as global variable

//The next two functions are taken from the yolo project's source code
cv::Scalar obj_id_to_color(int obj_id) {
    int const colors[6][3] = { { 1,0,1 },{ 0,0,1 },{ 0,1,1 },{ 0,1,0 },{ 1,1,0 },{ 1,0,0 } };
    int const offset = obj_id * 123457 % 6;
    int const color_scale = 150 + (obj_id * 123457) % 100;
    cv::Scalar color(colors[offset][0], colors[offset][1], colors[offset][2]);
    color *= color_scale;
    return color;
}

void draw_boxes(cv::Mat mat_img, std::vector<bbox_t> result_vec, std::vector<std::string> obj_names,
    int current_det_fps = -1, int current_cap_fps = -1)
{
    for (auto& i : result_vec) {
        cv::Scalar color = obj_id_to_color(i.obj_id);
        cv::rectangle(mat_img, cv::Rect(i.x, i.y, i.w, i.h), color, 4);

        if (obj_names.size() > i.obj_id) {
            std::string obj_name = obj_names[i.obj_id];
            if (i.track_id > 0) obj_name += " - " + std::to_string(i.track_id);
            cv::Size const text_size = getTextSize(obj_name, cv::FONT_HERSHEY_COMPLEX_SMALL, 1.2, 2, 0);
            int max_width = (text_size.width > i.w + 2) ? text_size.width : (i.w + 2);
            max_width = std::max(max_width, (int)i.w + 2);
            //max_width = std::max(max_width, 283);
            std::string coords_3d;
            if (!std::isnan(i.z_3d)) {
                std::stringstream ss;
                ss << std::fixed << std::setprecision(2) << "x:" << i.x_3d << "m y:" << i.y_3d << "m z:" << i.z_3d << "m ";
                coords_3d = ss.str();
                cv::Size const text_size_3d = getTextSize(ss.str(), cv::FONT_HERSHEY_COMPLEX_SMALL, 0.8, 1, 0);
                int const max_width_3d = (text_size_3d.width > i.w + 2) ? text_size_3d.width : (i.w + 2);
                if (max_width_3d > max_width) max_width = max_width_3d;
            }

            cv::rectangle(mat_img, cv::Point2f(std::max((int)i.x - 1, 0), std::max((int)i.y - 35, 0)),
                cv::Point2f(std::min((int)i.x + max_width, mat_img.cols - 1), std::min((int)i.y, mat_img.rows - 1)),
                color, CV_FILLED, 8, 0);
            putText(mat_img, obj_name, cv::Point2f(i.x, i.y - 16), cv::FONT_HERSHEY_COMPLEX_SMALL, 1.2, cv::Scalar(0, 0, 0), 2);
            if (!coords_3d.empty()) putText(mat_img, coords_3d, cv::Point2f(i.x, i.y - 1), cv::FONT_HERSHEY_COMPLEX_SMALL, 0.8, cv::Scalar(0, 0, 0), 1);
        }
    }
    if (current_det_fps >= 0 && current_cap_fps >= 0) {
        std::string fps_str = "FPS detection: " + std::to_string(current_det_fps) + "   FPS capture: " + std::to_string(current_cap_fps);
        putText(mat_img, fps_str, cv::Point2f(10, 20), cv::FONT_HERSHEY_COMPLEX_SMALL, 1.2, cv::Scalar(50, 255, 0), 2);
    }
}

int Recognition(const char *filename, TYPE_RESULT *pRes)
{
    string names_file = "coco.names";
    vector<std::string> obj_names;
    ifstream ifs(names_file.c_str());
    string line;
    while (getline(ifs, line)) obj_names.push_back(line);
    
    cv::Mat image = cv::imread(filename);
    if (image.empty()) {
        throw std::runtime_error("file not found");
    }
    //-------------------------Start detection----------------------------
    vector<bbox_t> result_vec = detector.detect(filename);
    //-----------------------Process test results---------------------------
    TYPE_RESULT* last_result = (TYPE_RESULT*)malloc(sizeof(TYPE_RESULT));      
    
    if (result_vec.size() == 0) {
        (*last_result).x1 = 0;
        (*last_result).y1 = 0;
        (*last_result).x2 = image.cols;
        (*last_result).y2 = image.rows;
        (*last_result).type = TYPE_TOTAL_NUM;   //No target detected: mark the type with the sentinel value
        cout << "No target detected!" << endl;
    }
    else{
        int max_area = 0;
        cv::Rect max_rect;
        unsigned int max_rect_id = 0;
        float max_rect_prob = 0.0;
        
        for (size_t i = 0; i < result_vec.size(); i++) {
            cout << "x = " << result_vec[i].x << " y = " << result_vec[i].y << " width = " << result_vec[i].w << " height = " << result_vec[i].h << " prob = " << result_vec[i].prob << " obj_id = " << result_vec[i].obj_id << endl;

            // Clip the box so it stays inside the image (the target crop is saved to the ROI-results folder below)
            if ((result_vec[i].x + result_vec[i].w) > image.cols)
                result_vec[i].w = image.cols - result_vec[i].x;
            if ((result_vec[i].y + result_vec[i].h) > image.rows)
                result_vec[i].h = image.rows - result_vec[i].y;

            cv::Rect rect(result_vec[i].x, result_vec[i].y, result_vec[i].w, result_vec[i].h);   //Rectangle where the target is located
            int area = result_vec[i].w * result_vec[i].h;
            cout << "rect_area = " << area << endl;
            //Determine whether it is the largest rectangle
            if (max_area < area) {
                max_area = area;
                max_rect = rect;
                max_rect_id = result_vec[i].obj_id;
                max_rect_prob = result_vec[i].prob;

                //Assign the information value of the maximum bounding box to the structure pointer
                (*last_result).x1 = result_vec[i].x;
                (*last_result).y1 = result_vec[i].y;
                (*last_result).x2 = result_vec[i].x + result_vec[i].w;
                (*last_result).y2 = result_vec[i].y + result_vec[i].h;

                if (result_vec[i].obj_id == 0) {
                    (*last_result).type = TYPE_CHEST;
                }
                if (result_vec[i].obj_id == 1) {
                    (*last_result).type = TYPE_UPPER_BODY;
                }
                if (result_vec[i].obj_id == 2) {
                    (*last_result).type = TYPE_WHOLE_BODY;
                }
            }
        }

        draw_boxes(image, result_vec, obj_names);
        cv::namedWindow("test", CV_WINDOW_NORMAL);
        cv::imshow("test", image);
        cv::waitKey(2000);

        //------------Processing result: extract the target part of the image and save it-----------------
        cv::Mat ROI = image(max_rect);     //Cut out part
        char* buff = new char[256];
        string str = string(filename).substr(string(filename).find_last_of('/') + 1, string(filename).rfind(".") - (string(filename).find_last_of('/') + 1));
        sprintf(buff, "%d-%f-%s", max_rect_id, max_rect_prob, str.c_str());
        
        std::string prefix = "ROI-results/";    //Folder to save detected targets
        if (access(prefix.c_str(), F_OK) == -1) //If the folder does not exist
            mkdir(prefix.c_str(), S_IRWXU);              //Then create

        string strImgSavePath = "ROI-results/" + string(buff) + ".jpg";
        imwrite(strImgSavePath, ROI);
        delete[] buff;

    }

    memcpy(pRes, last_result, sizeof(TYPE_RESULT));   //Copy the result out in both branches (originally this was skipped when nothing was detected)
    free(last_result);                                //Release the temporary result buffer
    return 0;
}

Add the test.cpp file to src for testing, as follows:

//test.cpp
#include <iostream>
#include <opencv2/opencv.hpp>
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/highgui/highgui_c.h"
#include "yolo_proc.h"
#include "yolo_data.h"
using namespace std;

int printResult(TYPE_RESULT* pRes) {
	cout << "\ntype = " << pRes->type << " x1 =  " << pRes->x1 << " y1 = " << pRes->y1 << " x2 = " << pRes->x2 << " y2 = " << pRes->y2 << endl;
	if (pRes->type == TYPE_CHEST) {
		cout << "TYPE_CHEST" << endl;
	}
	if (pRes->type == TYPE_UPPER_BODY) {
		cout << "TYPE_UPPER_BODY" << endl;
	}
	if (pRes->type == TYPE_WHOLE_BODY) {
		cout << "TYPE_WHOLE_BODY" << endl;
	}
	return 0;
}

int main() {
	TYPE_RESULT* pRes = (TYPE_RESULT*)malloc(sizeof(TYPE_RESULT));
	const char *filename = "/home/1.jpg";
	Recognition(filename, pRes);
	printResult(pRes);
	free(pRes);   //Release the result buffer
	return 0;
}

Modify the Makefile file under the project folder, as follows:

  • Set LIBSO=1 to generate a dynamic library
 LIBSO=1
 #....................................
 #Change the name of the dynamic library here. It must start with lib
 LIBNAMESO=libyolo.so
  • Add the following definitions to build a static library
#The name of the static library must also start with lib
ALIB=libyolo.a
AR=ar
ARFLAGS=rcs
  • Modify OBJ, i.e. append the files you added directly after "OBJ=":
OBJ=yolo_proc.o yolo_init.o yolo_v2_class.o ......
  • Modify DEPS, i.e.:
DEPS = $(wildcard src/*.h) $(wildcard src/*.hpp) Makefile include/darknet.h include/yolo_data.h include/yolo_proc.h

src/*.hpp appears in DEPS here because yolo_v2_class.hpp was moved into src.

  • Modify the dependency file of all, that is, add the name of the static library at the end
all: $(OBJDIR) backup results setchmod $(EXEC) $(LIBNAMESO) $(APPNAMESO) $(ALIB)
  • Modify the rule that generates the dynamic link library, i.e. add the new dependency files
$(LIBNAMESO): $(OBJDIR) $(OBJS) include/yolo_data.h include/yolo_proc.h src/yolo_proc.cpp
	$(CPP) -shared -std=c++11 -fvisibility=hidden -DLIB_EXPORTS $(COMMON) $(CFLAGS) $(OBJS) -o $@ $(LDFLAGS)
#test.cpp is the test file we created ourselves
$(APPNAMESO): $(LIBNAMESO) include/yolo_data.h include/yolo_proc.h src/test.cpp
	$(CPP) -std=c++11 $(COMMON) $(CFLAGS) -o $@ src/test.cpp $(LDFLAGS) -L ./ -l:$(LIBNAMESO)
  • Add command to build static library
$(ALIB): $(OBJS)
	$(AR) $(ARFLAGS) $@ $^
  • In the clean rule at the end, add the static library name
clean:
	rm -rf $(OBJS) $(EXEC) $(LIBNAMESO) $(APPNAMESO) $(ALIB)

At this point the Makefile modifications are complete. cd into the darknet directory and run make; libyolo.so and libyolo.a will be generated.

So how do you use the generated .so and .a files? Here is an example:

  1. Create a new project folder SO. Under the SO folder:
    Create a lib folder and copy libyolo.a and libyolo.so into it
    Create a src folder and copy test.cpp into it
    Create an include folder and copy yolo_data.h and yolo_proc.h into it
    Copy yolov3-obj.cfg, coco.names and yolov3-obj_10000.weights into the SO folder
    Create CMakeLists.txt as follows:
#CMakeLists.txt
cmake_minimum_required(VERSION 2.8)   #cmake minimum version

#Project information
project(SO)

SET(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -std=c++11 -pthread -fopenmp")
#add_definitions(-std=c++11 -lpthread)
ADD_DEFINITIONS(-DOPENCV)

# find opencv
find_package(OpenCV 3.4.3 REQUIRED)
find_package(Threads REQUIRED)
message(${OpenCV_INCLUDE_DIRS})

#include path
include_directories(${OpenCV_INCLUDE_DIRS})
include_directories(${PROJECT_SOURCE_DIR}/include)

#Library file directory to link
link_directories(${PROJECT_SOURCE_DIR}/lib)

# Test the static library: path of the static library
find_library(YOLO_STATIC libyolo.a ${PROJECT_SOURCE_DIR}/lib/)

#Test the dynamic library: path of the dynamic library
#find_library(YOLO_SHARED libyolo.so ${PROJECT_SOURCE_DIR}/lib/)

#Collect all source files in the src directory into the DIR_SRC variable
aux_source_directory(./src/ DIR_SRC)

#Set generation target SO
add_executable(SO ${DIR_SRC})

#Set the libraries to link against, testing the static library
target_link_libraries(SO ${OpenCV_LIBS} libyolo.a ${CMAKE_THREAD_LIBS_INIT})

#Set the libraries to link against, testing the dynamic library
#target_link_libraries(SO ${OpenCV_LIBS} libyolo.so ${CMAKE_THREAD_LIBS_INIT})

#message(STATUS "OpenCV_LIBS: ${OpenCV_LIBS}")
  2. cd to the SO folder and generate the build files from CMakeLists.txt with the command:
cmake .

This generates a Makefile; compile with the command:

make

This produces an executable named SO; run it with the command:

./SO

This is the end of encapsulation.

If the dynamic and static libraries are generated without errors in the process above and the .so and .a files come out cleanly, the code itself should be fine; if there is a problem in the code, an error is reported during generation, and it is best to fix it at that point.

During testing, if running the program produces many "undefined" errors, it is usually because CMakeLists.txt is missing the configuration for a third-party library. For example, when I configured OpenCV, the contents and paths were all written correctly, yet every OpenCV function still came up as an undefined reference. After a long search online I learned that the error came from not specifying the OpenCV version: the server had several OpenCV versions installed, which caused a version mismatch. Adding the version to find_package fixed it. I nearly went bald over this!

Here are some of the problems I've had:

1. Problem: /usr/bin/ld: /home/ZT/SO/lib/libNJUST_TARGET_TYPE.a(blas.o): undefined reference to symbol 'pthread_create@@GLIBC_2.2.5'
//lib/x86_64-linux-gnu/libpthread.so.0: error adding symbols: DSO missing from command line

Solution: add "-pthread" in CMakeLists.txt:
SET(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -std=c++11 -pthread")

2. Problem: /usr/bin/ld: /home/ZT/SO/lib/libNJUST_TARGET_TYPE.a(blas.o): undefined reference to symbol 'omp_get_num_threads@@OMP_1.0'
//lib/x86_64-linux-gnu/libgomp.so.1: error adding symbols: DSO missing from command line

Solution: add the OpenMP link flags in the Makefile:
LINKFLAGS += -fopenmp -pthread -fPIC $(COMMON_FLAGS) $(WARNINGS) -std=c++11
and set the same in CMake:
SET(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -std=c++11 -fopenmp")
(If you want to configure a path for OpenMP, it works just like configuring OpenCV in CMakeLists.txt, with "opencv" replaced by "openmp"; in my project simply adding -fopenmp was enough, no path required.)

3. A pile of errors like "xxx.cpp: undefined reference to cv::String", "xxx.cpp: undefined reference to cv::rectangle", "xxx.cpp: undefined reference to cv::namedWindow"
(in short, undefined references to OpenCV symbols)
Solution: besides configuring OpenCV in CMakeLists.txt, you must specify the OpenCV version; different installed versions may conflict.

Some common commands:

1. Add header directories
INCLUDE_DIRECTORIES
 Syntax: include_directories([AFTER|BEFORE] [SYSTEM] dir1 [dir2 ...])
Equivalent to the -I option of g++, and to adding the path to the CPLUS_INCLUDE_PATH environment variable.
include_directories(../../../thirdparty/comm/include)

2. Add library directories to link against
LINK_DIRECTORIES
 Syntax: link_directories(directory1 directory2 ...)
Equivalent to the -L option of g++, and to adding the path to the LD_LIBRARY_PATH environment variable.
link_directories("/home/server/third/lib")

3. Find the directory where a library is located
FIND_LIBRARY
 Syntax: find_library(RUNTIME_LIB rt /usr/lib /usr/local/lib NO_DEFAULT_PATH)
cmake searches the listed directories; if the library is not found in any of them, RUNTIME_LIB is set to RUNTIME_LIB-NOTFOUND.
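A hypothetical example matching the project above (YOLO_LIB is a variable name chosen here):
find_library(YOLO_LIB libyolo.a ${PROJECT_SOURCE_DIR}/lib NO_DEFAULT_PATH)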

4. Add libraries to link
LINK_LIBRARIES
 Syntax: link_libraries(library1 <debug | optimized> library2 ...)
You can link one or more libraries, separated by spaces.
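For example (hypothetical path):
link_libraries("/home/server/third/lib/libyolo.so")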

5. Set the names of the libraries a target links against
TARGET_LINK_LIBRARIES
 Syntax: target_link_libraries(<target> [item1 [item2 [...]]] [[debug|optimized|general] <item>] ...)
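For example, as in the CMakeLists.txt above:
target_link_libraries(SO ${OpenCV_LIBS} libyolo.a)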

6. Generate the project's executable target
add_executable
 Syntax: add_executable(<name> [WIN32] [MACOSX_BUNDLE] [EXCLUDE_FROM_ALL] source1 [source2 ...])
A simple example: add_executable(Demo main.cpp)

7. link_directories() should be placed before add_executable() or add_library().

8. Some OpenCV queries:
View the installed OpenCV libraries: pkg-config opencv --libs
 View the installed OpenCV version: pkg-config opencv --modversion
 Find the OpenCV installation paths under Linux: sudo find / -iname "*opencv*"