OK, here comes another source-code build record.
1, Basic environment
According to the authors' README, Python 2.7 is required. I later installed TensorFlow in a virtual environment created with Anaconda. This record is for OpenCV 3.x; the modifications needed for 3.x are in Section 4, while the other sections apply in general.
2, ORB-SLAM2 compiled successfully
A detailed installation record for ORB-SLAM2 is covered in a separate post.
Some libraries need to be installed in advance: a C++11 (or C++0x) compiler, Pangolin, OpenCV and Eigen3. If you don't know how to install them, look each one up. When DynaSLAM first came out it only supported OpenCV 2.4.11, but in 2019 contributors submitted code that supports OpenCV 3.x; we will talk about that in detail later. I installed OpenCV 3.4.5 myself.
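A quick way to double-check which OpenCV version the build will pick up (just a sanity-check sketch of my own; it assumes your OpenCV install generated a pkg-config file and the Python bindings, which is not always the case):
pkg-config --modversion opencv                    # should print something like 3.4.5
python -c "import cv2; print(cv2.__version__)"    # version of the Python bindings, if installed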
3, Additional libraries to install before compiling DynaSLAM
According to the README of the open-source code:
1. Install the boost library
sudo apt-get install libboost-all-dev
2. Download the DynaSLAM source code and put the h5 model file into it
git clone https://github.com/BertaBescos/DynaSLAM.git
Then download the h5 weights file from https://github.com/matterport/Mask_RCNN/releases and save it to DynaSLAM/src/python/.
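If you prefer the command line, a minimal sketch for fetching the pre-trained COCO weights (I am assuming the mask_rcnn_coco.h5 asset of the v2.0 release here; use whichever release asset the README points to):
cd DynaSLAM/src/python
wget https://github.com/matterport/Mask_RCNN/releases/download/v2.0/mask_rcnn_coco.h5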
3. Python-related environment
First create a new virtual environment in Anaconda and activate it, then install TensorFlow and Keras in that environment:
conda create -n MaskRCNN python=2.7
conda activate MaskRCNN
pip install tensorflow==1.14.0    # or: pip install tensorflow-gpu==1.14.0
pip install keras==2.0.9
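Before moving on, it doesn't hurt to confirm that the pinned versions actually import inside the activated environment (a quick check of my own, not an official step):
python -c "import tensorflow as tf; import keras; print(tf.__version__, keras.__version__)"
# expected output: 1.14.0 2.0.9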
After completing the above steps, the python environment is almost ready. You can test it below
cd DynaSLAM
python src/python/Check.py
If the output is "Mask R-CNN is correctly working", you can move on to the next step. Of course, things rarely go that smoothly, so solve the errors one by one as they come up. I ran into two problems here:
3.1 scikit-image is not installed
sudo pip install scikit-image
3.2 Errors related to pycocotools
Be careful! You must use Python 2.7 here (cocoapi only supports Python 2)! Otherwise, running Check.py will report that _mask cannot be found, because building under Python 3 does not generate the _mask.so file.
git clone https://github.com/waleedka/coco
cd coco/PythonAPI
python setup.py build_ext install
After running the above commands, copy the entire pycocotools folder to DynaSLAM/src/python/.
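For reference, a minimal sketch of that copy step plus a check that the compiled extension is where Check.py can find it (the paths assume the coco repo sits next to DynaSLAM; adjust to your own layout):
cp -r coco/PythonAPI/pycocotools DynaSLAM/src/python/
ls DynaSLAM/src/python/pycocotools/               # _mask.so should show up here
# if _mask.so is missing from the folder, build it in place first:
# cd coco/PythonAPI && python setup.py build_ext --inplace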
4, Modify some DynaSLAM source code
Many thanks to Pushyami_dev, whose modified code this section is based on; if you want to see exactly what was added or deleted in the code, you can look at that repository.
The submitted code mainly adapts the project to OpenCV 3. On top of that code, remove -march=native from CMakeLists.txt (otherwise a segmentation fault will appear), and remember to do the same in Thirdparty/DBoW2/CMakeLists.txt. The modifications are mainly in the following files; you can simply copy the complete files below into the corresponding folders, just remember to adjust the OpenCV 3.x version to the one you installed.
- CMakeLists.txt
- Thirdparty/DBoW2/CMakeLists.txt
- include/Conversion.h
- src/Conversion.cc
1. CMakeLists.txt
cmake_minimum_required(VERSION 2.8)
project(DynaSLAM)

IF(NOT CMAKE_BUILD_TYPE)
  SET(CMAKE_BUILD_TYPE Release)
# SET(CMAKE_BUILD_TYPE Debug)
ENDIF()

MESSAGE("Build type: " ${CMAKE_BUILD_TYPE})

#set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -Wall -O3 -march=native ")
#set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Wall -O3 -march=native")
set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -Wall -O3 ")
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Wall -O3 ")

# This is required if opencv is built from source locally
#SET(OpenCV_DIR "~/opencv/build")

# set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -Wall -O0 -march=native ")
# set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Wall -O0 -march=native")

# Check C++11 or C++0x support
include(CheckCXXCompilerFlag)
CHECK_CXX_COMPILER_FLAG("-std=c++11" COMPILER_SUPPORTS_CXX11)
CHECK_CXX_COMPILER_FLAG("-std=c++0x" COMPILER_SUPPORTS_CXX0X)
if(COMPILER_SUPPORTS_CXX11)
   set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -std=c++11")
   add_definitions(-DCOMPILEDWITHC11)
   message(STATUS "Using flag -std=c++11.")
elseif(COMPILER_SUPPORTS_CXX0X)
   set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -std=c++0x")
   add_definitions(-DCOMPILEDWITHC0X)
   message(STATUS "Using flag -std=c++0x.")
else()
   message(FATAL_ERROR "The compiler ${CMAKE_CXX_COMPILER} has no C++11 support. Please use a different C++ compiler.")
endif()

LIST(APPEND CMAKE_MODULE_PATH ${PROJECT_SOURCE_DIR}/cmake_modules)

set(Python_ADDITIONAL_VERSIONS "2.7") #This is to avoid detecting python 3
find_package(PythonLibs 2.7 EXACT REQUIRED)
if (NOT PythonLibs_FOUND)
    message(FATAL_ERROR "PYTHON LIBS not found.")
else()
    message("PYTHON LIBS were found!")
    message("PYTHON LIBS DIRECTORY: " ${PYTHON_LIBRARY} ${PYTHON_INCLUDE_DIRS})
endif()

message("PROJECT_SOURCE_DIR: " ${OpenCV_DIR})
find_package(OpenCV 3.4 QUIET)
if(NOT OpenCV_FOUND)
    find_package(OpenCV 2.4 QUIET)
    if(NOT OpenCV_FOUND)
        message(FATAL_ERROR "OpenCV > 2.4.x not found.")
    endif()
endif()

find_package(Qt5Widgets REQUIRED)
find_package(Qt5Concurrent REQUIRED)
find_package(Qt5OpenGL REQUIRED)
find_package(Qt5Test REQUIRED)

find_package(Boost REQUIRED COMPONENTS thread)
if(Boost_FOUND)
    message("Boost was found!")
    message("Boost Headers DIRECTORY: " ${Boost_INCLUDE_DIRS})
    message("Boost LIBS DIRECTORY: " ${Boost_LIBRARY_DIRS})
    message("Found Libraries: " ${Boost_LIBRARIES})
endif()

find_package(Eigen3 3.1.0 REQUIRED)
find_package(Pangolin REQUIRED)

set(PYTHON_INCLUDE_DIRS ${PYTHON_INCLUDE_DIRS} /usr/local/lib/python2.7/dist-packages/numpy/core/include/numpy)

include_directories(
${PROJECT_SOURCE_DIR}
${PROJECT_SOURCE_DIR}/include
${EIGEN3_INCLUDE_DIR}
${Pangolin_INCLUDE_DIRS}
${PYTHON_INCLUDE_DIRS}
/usr/include/python2.7/
#/usr/lib/python2.7/dist-packages/numpy/core/include/numpy/
${Boost_INCLUDE_DIRS}
)

message("PROJECT_SOURCE_DIR: " ${PROJECT_SOURCE_DIR})

set(CMAKE_LIBRARY_OUTPUT_DIRECTORY ${PROJECT_SOURCE_DIR}/lib)

add_library(${PROJECT_NAME} SHARED
src/System.cc
src/Tracking.cc
src/LocalMapping.cc
src/LoopClosing.cc
src/ORBextractor.cc
src/ORBmatcher.cc
src/FrameDrawer.cc
src/Converter.cc
src/MapPoint.cc
src/KeyFrame.cc
src/Map.cc
src/MapDrawer.cc
src/Optimizer.cc
src/PnPsolver.cc
src/Frame.cc
src/KeyFrameDatabase.cc
src/Sim3Solver.cc
src/Initializer.cc
src/Viewer.cc
src/Conversion.cc
src/MaskNet.cc
src/Geometry.cc
)

target_link_libraries(${PROJECT_NAME}
${OpenCV_LIBS}
${EIGEN3_LIBS}
${Pangolin_LIBRARIES}
${PROJECT_SOURCE_DIR}/Thirdparty/DBoW2/lib/libDBoW2.so
${PROJECT_SOURCE_DIR}/Thirdparty/g2o/lib/libg2o.so
/usr/lib/x86_64-linux-gnu/libpython2.7.so
${Boost_LIBRARIES}
)

# Build examples

set(CMAKE_RUNTIME_OUTPUT_DIRECTORY ${PROJECT_SOURCE_DIR}/Examples/RGB-D)
add_executable(rgbd_tum Examples/RGB-D/rgbd_tum.cc)
target_link_libraries(rgbd_tum ${PROJECT_NAME})

set(CMAKE_RUNTIME_OUTPUT_DIRECTORY ${PROJECT_SOURCE_DIR}/Examples/Stereo)
add_executable(stereo_kitti Examples/Stereo/stereo_kitti.cc)
target_link_libraries(stereo_kitti ${PROJECT_NAME})

set(CMAKE_RUNTIME_OUTPUT_DIRECTORY ${PROJECT_SOURCE_DIR}/Examples/Monocular)
add_executable(mono_tum Examples/Monocular/mono_tum.cc)
target_link_libraries(mono_tum ${PROJECT_NAME})
2. Thirdparty/DBoW2/CMakeLists.txt
cmake_minimum_required(VERSION 2.8)
project(DBoW2)

#set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -Wall -O3 -march=native ")
#set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Wall -O3 -march=native")
set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -Wall -O3 ")
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Wall -O3 ")

set(HDRS_DBOW2
  DBoW2/BowVector.h
  DBoW2/FORB.h
  DBoW2/FClass.h
  DBoW2/FeatureVector.h
  DBoW2/ScoringObject.h
  DBoW2/TemplatedVocabulary.h)
set(SRCS_DBOW2
  DBoW2/BowVector.cpp
  DBoW2/FORB.cpp
  DBoW2/FeatureVector.cpp
  DBoW2/ScoringObject.cpp)

set(HDRS_DUTILS
  DUtils/Random.h
  DUtils/Timestamp.h)
set(SRCS_DUTILS
  DUtils/Random.cpp
  DUtils/Timestamp.cpp)

find_package(OpenCV 3.4 QUIET)
if(NOT OpenCV_FOUND)
   find_package(OpenCV 2.4.3 QUIET)
   if(NOT OpenCV_FOUND)
      message(FATAL_ERROR "OpenCV > 2.4.3 not found.")
   endif()
endif()

set(LIBRARY_OUTPUT_PATH ${PROJECT_SOURCE_DIR}/lib)

include_directories(${OpenCV_INCLUDE_DIRS})
add_library(DBoW2 SHARED ${SRCS_DBOW2} ${SRCS_DUTILS})
target_link_libraries(DBoW2 ${OpenCV_LIBS})
3. include/Conversion.h
/**
* This file is part of DynaSLAM.
* Copyright (C) 2018 Berta Bescos <bbescos at unizar dot es> (University of Zaragoza)
* For more information see <https://github.com/bertabescos/DynaSLAM>.
*/

#ifndef CONVERSION_H_
#define CONVERSION_H_

#include <Python.h>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/core/core.hpp>
#include "numpy/ndarrayobject.h"
// #include "__multiarray_api.h"

#define NUMPY_IMPORT_ARRAY_RETVAL

namespace DynaSLAM
{

static PyObject* opencv_error = 0;

static int failmsg(const char *fmt, ...);

class PyAllowThreads;

class PyEnsureGIL;

#define ERRWRAP2(expr) \
try \
{ \
    PyAllowThreads allowThreads; \
    expr; \
} \
catch (const cv::Exception &e) \
{ \
    PyErr_SetString(opencv_error, e.what()); \
    return 0; \
}

static PyObject* failmsgp(const char *fmt, ...);

static size_t REFCOUNT_OFFSET = (size_t)&(((PyObject*)0)->ob_refcnt) +
        (0x12345678 != *(const size_t*)"\x78\x56\x34\x12\0\0\0\0\0")*sizeof(int);

static inline PyObject* pyObjectFromRefcount(const int* refcount)
{
    return (PyObject*)((size_t)refcount - REFCOUNT_OFFSET);
}

static inline int* refcountFromPyObject(const PyObject* obj)
{
    return (int*)((size_t)obj + REFCOUNT_OFFSET);
}

class NumpyAllocator;

enum { ARG_NONE = 0, ARG_MAT = 1, ARG_SCALAR = 2 };

class NDArrayConverter
{
private:
    void init();
public:
    NDArrayConverter();
    //cv::Mat toMat(const PyObject* o); //issue bug
    cv::Mat toMat(PyObject* o);
    PyObject* toNDArray(const cv::Mat& mat);
};

}

#endif /* CONVERSION_H_ */
4. src/Conversion.cc
/**
* This file is part of DynaSLAM.
* Copyright (C) 2018 Berta Bescos <bbescos at unizar dot es> (University of Zaragoza)
* For more information see <https://github.com/bertabescos/DynaSLAM>.
*/

#include "Conversion.h"
#include <iostream>

namespace DynaSLAM
{

static void init()
{
    import_array();
}

static int failmsg(const char *fmt, ...)
{
    char str[1000];
    va_list ap;
    va_start(ap, fmt);
    vsnprintf(str, sizeof(str), fmt, ap);
    va_end(ap);
    PyErr_SetString(PyExc_TypeError, str);
    return 0;
}

class PyAllowThreads
{
public:
    PyAllowThreads() : _state(PyEval_SaveThread()) {}
    ~PyAllowThreads()
    {
        PyEval_RestoreThread(_state);
    }
private:
    PyThreadState* _state;
};

class PyEnsureGIL
{
public:
    PyEnsureGIL() : _state(PyGILState_Ensure()) {}
    ~PyEnsureGIL()
    {
        //std::cout << "releasing"<< std::endl;
        PyGILState_Release(_state);
    }
private:
    PyGILState_STATE _state;
};

using namespace cv;

static PyObject* failmsgp(const char *fmt, ...)
{
    char str[1000];
    va_list ap;
    va_start(ap, fmt);
    vsnprintf(str, sizeof(str), fmt, ap);
    va_end(ap);
    PyErr_SetString(PyExc_TypeError, str);
    return 0;
}

class NumpyAllocator : public MatAllocator
{
public:
#if ( CV_MAJOR_VERSION < 3)
    NumpyAllocator() {}
    ~NumpyAllocator() {}

    void allocate(int dims, const int* sizes, int type, int*& refcount,
                  uchar*& datastart, uchar*& data, size_t* step)
    {
        //PyEnsureGIL gil;
        int depth = CV_MAT_DEPTH(type);
        int cn = CV_MAT_CN(type);
        const int f = (int)(sizeof(size_t)/8);
        int typenum = depth == CV_8U ? NPY_UBYTE : depth == CV_8S ? NPY_BYTE :
                      depth == CV_16U ? NPY_USHORT : depth == CV_16S ? NPY_SHORT :
                      depth == CV_32S ? NPY_INT : depth == CV_32F ? NPY_FLOAT :
                      depth == CV_64F ? NPY_DOUBLE : f*NPY_ULONGLONG + (f^1)*NPY_UINT;
        int i;
        npy_intp _sizes[CV_MAX_DIM+1];
        for( i = 0; i < dims; i++ )
        {
            _sizes[i] = sizes[i];
        }

        if( cn > 1 )
        {
            _sizes[dims++] = cn;
        }

        PyObject* o = PyArray_SimpleNew(dims, _sizes, typenum);

        if(!o)
        {
            CV_Error_(CV_StsError, ("The numpy array of typenum=%d, ndims=%d can not be created", typenum, dims));
        }
        refcount = refcountFromPyObject(o);

        npy_intp* _strides = PyArray_STRIDES(o);
        for( i = 0; i < dims - (cn > 1); i++ )
            step[i] = (size_t)_strides[i];
        datastart = data = (uchar*)PyArray_DATA(o);
    }

    void deallocate(int* refcount, uchar*, uchar*)
    {
        //PyEnsureGIL gil;
        if( !refcount )
            return;
        PyObject* o = pyObjectFromRefcount(refcount);
        Py_INCREF(o);
        Py_DECREF(o);
    }
#else

    NumpyAllocator()
    {
        stdAllocator = Mat::getStdAllocator();
    }
    ~NumpyAllocator()
    {
    }

    UMatData* allocate(PyObject* o, int dims, const int* sizes, int type, size_t* step) const
    {
        UMatData* u = new UMatData(this);
        u->data = u->origdata = (uchar*) PyArray_DATA((PyArrayObject*) o);
        npy_intp* _strides = PyArray_STRIDES((PyArrayObject*) o);
        for (int i = 0; i < dims - 1; i++)
            step[i] = (size_t) _strides[i];
        step[dims - 1] = CV_ELEM_SIZE(type);
        u->size = sizes[0] * step[0];
        u->userdata = o;
        return u;
    }

    UMatData* allocate(int dims0, const int* sizes, int type, void* data,
                       size_t* step, int flags, UMatUsageFlags usageFlags) const
    {
        if (data != 0)
        {
            CV_Error(Error::StsAssert, "The data should normally be NULL!");
            // probably this is safe to do in such extreme case
            return stdAllocator->allocate(dims0, sizes, type, data, step, flags, usageFlags);
        }
        PyEnsureGIL gil;

        int depth = CV_MAT_DEPTH(type);
        int cn = CV_MAT_CN(type);
        const int f = (int) (sizeof(size_t) / 8);
        int typenum = depth == CV_8U ? NPY_UBYTE : depth == CV_8S ? NPY_BYTE :
                      depth == CV_16U ? NPY_USHORT : depth == CV_16S ? NPY_SHORT :
                      depth == CV_32S ? NPY_INT : depth == CV_32F ? NPY_FLOAT :
                      depth == CV_64F ? NPY_DOUBLE : f * NPY_ULONGLONG + (f ^ 1) * NPY_UINT;
        int i, dims = dims0;
        cv::AutoBuffer<npy_intp> _sizes(dims + 1);
        for (i = 0; i < dims; i++)
            _sizes[i] = sizes[i];
        if (cn > 1)
            _sizes[dims++] = cn;
        PyObject* o = PyArray_SimpleNew(dims, _sizes, typenum);
        if (!o)
            CV_Error_(Error::StsError, ("The numpy array of typenum=%d, ndims=%d can not be created", typenum, dims));
        return allocate(o, dims0, sizes, type, step);
    }

    bool allocate(UMatData* u, int accessFlags, UMatUsageFlags usageFlags) const
    {
        return stdAllocator->allocate(u, accessFlags, usageFlags);
    }

    void deallocate(UMatData* u) const
    {
        if (u)
        {
            PyEnsureGIL gil;
            PyObject* o = (PyObject*) u->userdata;
            Py_XDECREF(o);
            delete u;
        }
    }

    const MatAllocator* stdAllocator;
#endif
};

NumpyAllocator g_numpyAllocator;

NDArrayConverter::NDArrayConverter() { init(); }

void NDArrayConverter::init()
{
    import_array();
}

cv::Mat NDArrayConverter::toMat( PyObject *o)
{
    cv::Mat m;

    if(!o || o == Py_None)
    {
        if( !m.data )
            m.allocator = &g_numpyAllocator;
    }

    if( !PyArray_Check(o) )
    {
        failmsg("toMat: Object is not a numpy array");
    }

    int typenum = PyArray_TYPE(o);
    int type = typenum == NPY_UBYTE ? CV_8U : typenum == NPY_BYTE ? CV_8S :
               typenum == NPY_USHORT ? CV_16U : typenum == NPY_SHORT ? CV_16S :
               typenum == NPY_INT || typenum == NPY_LONG ? CV_32S :
               typenum == NPY_FLOAT ? CV_32F :
               typenum == NPY_DOUBLE ? CV_64F : -1;

    if( type < 0 )
    {
        failmsg("toMat: Data type = %d is not supported", typenum);
    }

    int ndims = PyArray_NDIM(o);

    if(ndims >= CV_MAX_DIM)
    {
        failmsg("toMat: Dimensionality (=%d) is too high", ndims);
    }

    int size[CV_MAX_DIM+1];
    size_t step[CV_MAX_DIM+1], elemsize = CV_ELEM_SIZE1(type);
    const npy_intp* _sizes = PyArray_DIMS(o);
    const npy_intp* _strides = PyArray_STRIDES(o);
    bool transposed = false;

    for(int i = 0; i < ndims; i++)
    {
        size[i] = (int)_sizes[i];
        step[i] = (size_t)_strides[i];
    }

    if( ndims == 0 || step[ndims-1] > elemsize )
    {
        size[ndims] = 1;
        step[ndims] = elemsize;
        ndims++;
    }

    if( ndims >= 2 && step[0] < step[1] )
    {
        std::swap(size[0], size[1]);
        std::swap(step[0], step[1]);
        transposed = true;
    }

    if( ndims == 3 && size[2] <= CV_CN_MAX && step[1] == elemsize*size[2] )
    {
        ndims--;
        type |= CV_MAKETYPE(0, size[2]);
    }

    if( ndims > 2)
    {
        failmsg("toMat: Object has more than 2 dimensions");
    }

    m = Mat(ndims, size, type, PyArray_DATA(o), step);

    if( m.data )
    {
#if ( CV_MAJOR_VERSION < 3)
        m.refcount = refcountFromPyObject(o);
        m.addref(); // protect the original numpy array from deallocation
                    // (since Mat destructor will decrement the reference counter)
#else
        m.u = g_numpyAllocator.allocate(o, ndims, size, type, step);
        m.addref();
        Py_INCREF(o);
        //m.u->refcount = *refcountFromPyObject(o);
#endif
    };

    m.allocator = &g_numpyAllocator;

    if( transposed )
    {
        Mat tmp;
        tmp.allocator = &g_numpyAllocator;
        transpose(m, tmp);
        m = tmp;
    }
    return m;
}

PyObject* NDArrayConverter::toNDArray(const cv::Mat& m)
{
    if( !m.data )
        Py_RETURN_NONE;
    Mat temp;
    Mat *p = (Mat*)&m;
#if ( CV_MAJOR_VERSION < 3)
    if(!p->refcount || p->allocator != &g_numpyAllocator)
    {
        temp.allocator = &g_numpyAllocator;
        m.copyTo(temp);
        p = &temp;
    }
    p->addref();
    return pyObjectFromRefcount(p->refcount);
#else
    if(!p->u || p->allocator != &g_numpyAllocator)
    {
        temp.allocator = &g_numpyAllocator;
        m.copyTo(temp);
        p = &temp;
    }
    //p->addref();
    //return pyObjectFromRefcount(&p->u->refcount);
    PyObject* o = (PyObject*) p->u->userdata;
    Py_INCREF(o);
    return o;
#endif
}

}
5, Compiling source code and running
Compile DynaSLAM source code
cd DynaSLAM
chmod +x build.sh
./build.sh
If the last two parameters are not given at runtime, it behaves the same as plain ORB-SLAM2.
If you only want to use Mask R-CNN but do not want to save the masks, set PATH_TO_MASKS to no_save; otherwise give a folder path where the masks will be saved.
./Examples/RGB-D/rgbd_tum Vocabulary/ORBvoc.txt Examples/RGB-D/TUM3.yaml /XXX/tum_dataset/ /XXX/tum_dataset/associations.txt masks/ output/
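For reference, the two variants mentioned above (the dataset paths are placeholders, keep your own):
# omit the last two parameters: behaves like plain ORB-SLAM2, no Mask R-CNN
./Examples/RGB-D/rgbd_tum Vocabulary/ORBvoc.txt Examples/RGB-D/TUM3.yaml /XXX/tum_dataset/ /XXX/tum_dataset/associations.txt
# use Mask R-CNN but do not save the masks to disk
./Examples/RGB-D/rgbd_tum Vocabulary/ORBvoc.txt Examples/RGB-D/TUM3.yaml /XXX/tum_dataset/ /XXX/tum_dataset/associations.txt no_save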
If you find that LightTrack keeps failing and the system cannot initialize, increase the number of ORB feature points in the settings file (ORBextractor.nFeatures); the suggestions on GitHub generally change it to 3000.
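For example, a one-line way to bump that setting (this assumes the yaml still has the stock ORB-SLAM2 default of 1000; editing ORBextractor.nFeatures by hand works just as well):
sed -i 's/ORBextractor.nFeatures: 1000/ORBextractor.nFeatures: 3000/' Examples/RGB-D/TUM3.yaml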
The end, confetti~