1. Environment Setup

Create a virtual environment

conda create -n yolov8 python=3.7
conda activate yolov8

Install PyTorch 1.8.0

conda install pytorch==1.8.0 torchvision==0.9.0 torchaudio==0.8.0 cudatoolkit=11.1 -c pytorch -c conda-forge

Clone the authors' open-source repository and install the remaining dependencies

git clone https://github.com/ultralytics/ultralytics
cd ultralytics
pip install -r requirements.txt
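
Optionally, a quick check (not part of the original steps) can confirm that PyTorch and CUDA are visible inside the new environment:

# Quick sanity check for the new environment (optional)
import torch
print(torch.__version__)            # should print 1.8.0
print(torch.cuda.is_available())    # should print True if the cudatoolkit build matches your GPU driver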

2. Dataset Preparation (Instance Segmentation)

(0) Capturing images of the target objects with a camera

This example uses a Realsense camera with Ubuntu 20.04 and ROS Noetic. Connect the camera to the computer via USB, then run the Python program below to photograph the target objects: each press of the s key saves one image, and pressing q or Esc exits recording.
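
A minimal sketch of such a capture script is shown below, assuming the Realsense color stream is published on /camera/color/image_raw; the topic name and the captures output directory are placeholders to adapt to your own setup.

#!/usr/bin/env python3
# Minimal capture sketch: show the Realsense color stream, save a frame on 's',
# quit on 'q' or Esc.
import os
import cv2
import rospy
from cv_bridge import CvBridge
from sensor_msgs.msg import Image

bridge = CvBridge()
latest_frame = None
save_dir = "captures"  # placeholder output directory
os.makedirs(save_dir, exist_ok=True)

def callback(msg):
    global latest_frame
    latest_frame = bridge.imgmsg_to_cv2(msg, desired_encoding="bgr8")

rospy.init_node("image_capture")
rospy.Subscriber("/camera/color/image_raw", Image, callback)  # assumed topic name

count = 0
while not rospy.is_shutdown():
    if latest_frame is not None:
        cv2.imshow("capture", latest_frame)
    key = cv2.waitKey(30) & 0xFF
    if key == ord("s") and latest_frame is not None:  # save one image per press of s
        cv2.imwrite(os.path.join(save_dir, "%06d.jpg" % count), latest_frame)
        count += 1
    elif key in (ord("q"), 27):  # q or Esc exits recording
        break
cv2.destroyAllWindows()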

It is recommended that the dataset images be as varied as possible in the number of objects, viewing angles, backgrounds, and so on.

(1) Creating an instance-segmentation dataset with Labelme

Install labelme

pip install labelme

Once installed, simply type labelme on the command line to launch it.

Annotate the images with labelme, then place the generated json files and the original jpg images in the same folder.

(2) Converting Labelme format to YOLO format

Refer to the labelme2yolo package on PyPI.

pip install labelme2yolo
# or use the Tsinghua mirror
pip install -i https://pypi.tuna.tsinghua.edu.cn/simple labelme2yolo

labelme2yolo --json_dir path/to/labelme/dir
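
labelme2yolo typically writes its output to a YOLODataset folder inside the labelme directory, with images/ and labels/ subfolders split into train/val plus a generated dataset.yaml; check that this structure is present before the next step.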

(3) Creating the dataset YAML file

Open the ultralytics/datasets directory, make a copy of coco128-seg.yaml, rename it to custom-seg.yaml, and then modify it to match your own dataset.

For example:

# Train/val/test sets as 1) dir: path/to/imgs, 2) file: path/to/imgs.txt, or 3) list: [path/to/imgs1, path/to/imgs2, ..]
path: /media/mahaofei/OneTouch/Dataset/Program_data/image_processing/ultralytics/20230223_Phone_4Obj_YOLO # dataset root dir
train: images/train2017 # train images (relative to 'path')
val: images/train2017 # val images (relative to 'path')
test: # test images (optional)

# Classes
names:
  0: ammeter
  1: coffeebox
  2: realsensebox
  3: sucker

3. Python Usage Tutorial

Training and testing are generally done this way, via the Python API.

(1) Training

from ultralytics import YOLO

model = YOLO('yolov8n.pt') # start from a pretrained model
model.train(epochs=5)

(2) Validation

from ultralytics import YOLO

model = YOLO("model.pt")
model.val() # evaluate using the data yaml stored in model.pt
model.val(data='coco128.yaml') # or evaluate on a specified dataset

(3) Prediction

Getting prediction results

from ultralytics import YOLO
from PIL import Image
import cv2

model = YOLO("model.pt")
# accepts all source types - image/dir/Path/URL/video/PIL/ndarray. 0 for webcam
# from a webcam
results = model.predict(source="0")
# from a folder
results = model.predict(source="folder", show=True) # Display preds. Accepts all YOLO predict arguments

# from a PIL image
im1 = Image.open("bus.jpg")
results = model.predict(source=im1, save=True) # save plotted images

# from an ndarray
im2 = cv2.imread("bus.jpg")
results = model.predict(source=im2, save=True, save_txt=True) # save predictions as labels

# from a list of PIL images / ndarrays
results = model.predict(source=[im1, im2])

Analyzing the prediction results (results is a list holding every prediction result; with many images, be careful to avoid running out of memory, especially for instance segmentation):

# 1. return as a list
results = model.predict(source="folder")

# 2. return as a generator (stream=True)
results = model.predict(source=0, stream=True)

for result in results:
    # Detection
    result.boxes.xyxy # box with xyxy format, (N, 4)
    result.boxes.xywh # box with xywh format, (N, 4)
    result.boxes.xyxyn # box with xyxy format but normalized, (N, 4)
    result.boxes.xywhn # box with xywh format but normalized, (N, 4)
    result.boxes.conf # confidence score, (N, 1)
    result.boxes.cls # cls, (N, 1)

    # Segmentation
    result.masks.data # masks, (N, H, W)
    result.masks.xy # x,y segments (pixels), List[segment] * N
    result.masks.xyn # x,y segments (normalized), List[segment] * N

    # Classification
    result.probs # cls prob, (num_class, )

    # Each result is composed of torch.Tensor by default,
    # in which you can easily use following functionality:
    result = result.cuda()
    result = result.cpu()
    result = result.to("cpu")
    result = result.numpy()
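
As a concrete illustration (a sketch, with the source path and weight file as placeholders), the snippet below streams predictions over a folder of images to keep memory low and converts each instance mask into a binary NumPy array:

# Sketch: stream predictions and pull per-object masks as NumPy arrays.
import numpy as np
from ultralytics import YOLO

model = YOLO("model.pt")  # placeholder weights

for result in model.predict(source="path/to/images", stream=True):  # generator, low memory
    if result.masks is None:  # image with no detections
        continue
    masks = result.masks.data.cpu().numpy()   # (N, H, W); resolution may differ from the original image
    classes = result.boxes.cls.cpu().numpy()  # (N,) class indices
    for mask, cls_id in zip(masks, classes):
        binary = (mask > 0.5).astype(np.uint8)  # per-object binary mask
        print(int(cls_id), int(binary.sum()), "mask pixels")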

4. Start Training

Create a new Python file, e.g. train.py, with the following content:

from ultralytics import YOLO

# Load a model
# model = YOLO('yolov8n-seg.yaml') # build a new model from YAML
model = YOLO('yolov8n-seg.pt') # load a pretrained model (recommended for training)
# model = YOLO('yolov8n-seg.yaml').load('yolov8n.pt') # build from YAML and transfer weights

# Train the model
model.train(data='custom-seg.yaml', epochs=100, imgsz=3904, batch=1)
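
After training, the weights are typically written under runs/segment/train*/weights/ (best.pt and last.pt). A minimal follow-up sketch, assuming the default output path, re-evaluates the trained model on the custom dataset:

# Evaluate the trained weights on the custom dataset
# (adjust the path if your run directory differs, e.g. runs/segment/train2).
from ultralytics import YOLO

model = YOLO('runs/segment/train/weights/best.pt')
metrics = model.val(data='custom-seg.yaml')
print(metrics)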

5. Prediction with the Trained Model