Training YOLOv5 on your own dataset

1. Environment
Ubuntu 20.04
python 3.8.8
Graphics card driver 470.74
CUDA 11.5

View Ubuntu version number

cat /proc/version

View Python version number

python --version

View graphics driver version

nvidia-smi

View CUDA version

nvcc --version
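
Optionally, you can also confirm that PyTorch was installed with CUDA support and can see the GPU. A minimal check, assuming torch is already installed:

import torch
print(torch.__version__)                  # installed PyTorch version
print(torch.cuda.is_available())          # True if CUDA is usable
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))  # name of the first GPU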

2. Code and model download
2.1 Download the code

git clone https://github.com/ultralytics/yolov5
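
After cloning, it is recommended to install the repository's Python dependencies by running pip install -r requirements.txt from inside the yolov5 folder.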

2.2 Download the pre-trained model

It is recommended to create a new weights folder under the yolov5 root directory to hold the downloaded pre-trained model weights.
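
The checkpoints are published on the ultralytics/yolov5 releases page and can be fetched with a short script. A minimal sketch, assuming the v6.0 release tag and the yolov5x.pt file match the code you cloned (check the releases page for the correct tag and file name):

# Sketch: download a pre-trained checkpoint into weights/
# The release tag (v6.0) and file name (yolov5x.pt) are assumptions.
from pathlib import Path
import urllib.request

weights_dir = Path("weights")
weights_dir.mkdir(exist_ok=True)
url = "https://github.com/ultralytics/yolov5/releases/download/v6.0/yolov5x.pt"
urllib.request.urlretrieve(url, str(weights_dir / "yolov5x.pt"))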

3. Dataset preparation
3.1 Download Yolo_mark

git clone https://github.com/AlexeyAB/Yolo_mark

3.2 Place your images in the Yolo_mark/x64/Release/data/img folder and delete the pre-loaded sample images.

3.3 Build with cmake, then run yolo_mark

cd Yolo_mark
cmake .
make
chmod +x ./linux_mark.sh
./linux_mark.sh

3.4 Before labeling, modify Yolo_mark/x64/Release/data/obj.data to specify the number of label classes

classes= 1

Modify Yolo_mark/x64/Release/data/obj.names to specify the class names, one per line

Finger

3.5 The main interface works as follows: drag a box around each object to mark it, use Object ID to switch the label class, press c to clear the marks on the current image, and press the spacebar to save and move to the next image.

3.6 After labeling, a txt label file is automatically generated for each image, for example:

0 0.612500 0.174306 0.071875 0.184722
0 0.683984 0.120833 0.071094 0.183333
0 0.756641 0.097917 0.077344 0.168056
0 0.833594 0.134028 0.065625 0.145833
0 0.880859 0.311111 0.064844 0.155556
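
Each line follows the YOLO format: class_id x_center y_center width height, where the coordinates are normalized to [0, 1] relative to the image width and height. A minimal sketch of a (hypothetical) helper that converts one label line back to pixel coordinates:

def yolo_to_pixels(line, img_w, img_h):
    # Convert one YOLO label line to (class_id, (x1, y1, x2, y2)) in pixels.
    cls, xc, yc, w, h = line.split()
    xc, yc = float(xc) * img_w, float(yc) * img_h
    w, h = float(w) * img_w, float(h) * img_h
    return int(cls), (xc - w / 2, yc - h / 2, xc + w / 2, yc + h / 2)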

4. Training settings
4.1 file path setting
It is recommended to organize the yolov5 root directory as follows:
data - the dataset and the .yaml files used for training
models - the model definition .yaml files (the yolov5 variants)
runs - training results
weights - pre-trained model weights
root directory - train.py (the training entry point)

4.2 dataset settings
Create a new folder FingerDatasets under yolov5/data, and inside it create two folders, images and labels, holding the annotated jpg images and their txt labels; a copy-script sketch follows.
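
A minimal sketch of copying the Yolo_mark output into this layout, assuming Yolo_mark saved each label txt next to its image in data/img and that both repositories sit in the same parent folder (all paths are assumptions, adjust to your setup):

# Sketch: copy annotated images and labels into the YOLOv5 dataset layout
import shutil
from pathlib import Path

src = Path("Yolo_mark/x64/Release/data/img")   # Yolo_mark output (assumed)
dst = Path("yolov5/data/FingerDatasets")
(dst / "images").mkdir(parents=True, exist_ok=True)
(dst / "labels").mkdir(parents=True, exist_ok=True)

for f in src.iterdir():
    if f.suffix == ".jpg":
        shutil.copy(f, dst / "images" / f.name)
    elif f.suffix == ".txt":
        shutil.copy(f, dst / "labels" / f.name)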


4.3 Training, validation, and test set split
The labeled images need to be split into three sets: train, test, and val. This produces three txt files (train.txt, val.txt, test.txt), each listing the image paths for its set; a split-script sketch follows the example below.

The txt contents look like this:

data/FingerDatasets/images/71.jpg
data/FingerDatasets/images/72.jpg
data/FingerDatasets/images/73.jpg
data/FingerDatasets/images/74.jpg
data/FingerDatasets/images/75.jpg
data/FingerDatasets/images/76.jpg
data/FingerDatasets/images/77.jpg
data/FingerDatasets/images/78.jpg
data/FingerDatasets/images/79.jpg
data/FingerDatasets/images/80.jpg
data/FingerDatasets/images/81.jpg
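
A minimal sketch of how such a split could be generated (the 8:1:1 ratio, the shuffling, and the paths are assumptions; run it from the yolov5 root so the listed paths match the training configuration):

# Sketch: split the images into train/val/test lists
import random
from pathlib import Path

img_dir = Path("data/FingerDatasets/images")
images = sorted(str(p) for p in img_dir.glob("*.jpg"))
random.seed(0)
random.shuffle(images)

n = len(images)
n_train, n_val = int(0.8 * n), int(0.1 * n)
splits = {
    "train": images[:n_train],
    "val": images[n_train:n_train + n_val],
    "test": images[n_train + n_val:],
}
for name, files in splits.items():
    Path(f"data/FingerDatasets/{name}.txt").write_text("\n".join(files) + "\n")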

4.4 Training configuration file
Create a new FingerDetect.yaml file under yolov5/data. It tells the training script where to find the dataset splits and what the classes are:

train: data/FingerDatasets/train.txt
val:  data/FingerDatasets/val.txt
test: data/FingerDatasets/test.txt
 
nc: 1
 
names: ['Finger']

4.5 Modify the model
We use the yolov5x model. Open yolov5/models/yolov5x.yaml, set the number of classes to 1, and adjust the network depth and width as needed. The anchors, backbone, and head can also be adjusted once you are familiar with the network structure.

nc: 1  # number of classes
depth_multiple: 1.5  # model depth multiple
width_multiple: 1.33  # layer channel multiple
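
For reference, depth_multiple scales how many times each repeated block in the backbone and head is stacked, and width_multiple scales the number of channels in each layer; larger values give a bigger, slower, and usually more accurate model.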

4.6 Modify the training script

In the parse_opt function of train.py, modify the training parameters according to your setup:

Weights

parser.add_argument('--weights', type=str, default=ROOT / 'weights/yolov5x.pt', help='initial weights path')

Model

parser.add_argument('--cfg', type=str, default='models/yolov5x.yaml', help='model.yaml path')

Dataset

parser.add_argument('--data', type=str, default=ROOT / 'data/FingerDetect.yaml', help='dataset.yaml path')

Number of epochs

parser.add_argument('--epochs', type=int, default=500)

Batch size

parser.add_argument('--batch-size', type=int, default=4, help='total batch size for all GPUs, -1 for autobatch')

4.7 start training

cd yolov5
python3 train.py
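
Alternatively, the same parameters can be passed on the command line instead of editing the defaults in train.py, for example (paths match the layout assumed above):

python3 train.py --weights weights/yolov5x.pt --cfg models/yolov5x.yaml --data data/FingerDetect.yaml --epochs 500 --batch-size 4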

5. Description of training results
By default, the training results are placed in the latest exp folder under yolov5/runs/train.

Trained weights - in the weights folder inside exp (last.pt and best.pt)
Training process - results.csv
Loss and precision curves - results.png
Validation batch visualizations - val_batchX_labels.jpg
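
Once training finishes, the resulting weights can be loaded for a quick sanity check via the YOLOv5 PyTorch Hub interface. A minimal sketch (the exp folder name and the test image path are assumptions):

# Sketch: load the trained weights and run them on one image
import torch

model = torch.hub.load('ultralytics/yolov5', 'custom', path='runs/train/exp/weights/best.pt')
results = model('data/FingerDatasets/images/71.jpg')
results.print()  # prints a summary of the detections for the image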
