YOLO11 Improvement | Detection Head | Detect_MultiSEAM, a detection head that improves performance on small and occluded targets [Full Code]
💡💡💡 All programs in this series have been tested and run successfully 💡💡💡
Deep-learning-based face detection algorithms have made great progress. They fall roughly into two categories: two-stage detectors such as Faster R-CNN and one-stage detectors such as YOLO. Because one-stage detectors strike a better balance between accuracy and speed, they are widely used in practice. In this article we present a real-time detector built on the one-stage YOLO detector and optimized for occlusion with an attention module named SEAM. After introducing the main principle, the article walks step by step through adding and modifying the module code, and the complete modified code is provided at the end so it can be run directly; beginners can follow along easily, and readers with spare capacity can attempt the advanced section. The goal is to help you better tackle the challenges of the YOLO family of deep-learning object detectors.
Table of Contents
1. Principle
2. Implementing the Detect_MultiSEAM Code
2.1 Adding Detect_MultiSEAM to YOLO11
2.2 Modifying the __init__.py File
2.3 Adding the YAML File
2.4 Registering in tasks.py
2.5 Running the Program
3. Modified Network Structure Diagram
4. Full Code
5. GFLOPs
6. Going Further
7. Summary
1. Principle
Paper: YOLO-FaceV2: A Scale and Occlusion Aware Face Detector
Official code: official code repository
SEAM (Separated and Enhancement Attention Module) is designed to improve face detection, especially under occlusion. The main principles behind SEAM are:

1. Depthwise Separable Convolution
- Depthwise convolution: each input channel is convolved separately, which helps capture channel-specific features.
- Pointwise convolution: a 1x1 convolution combines the outputs of the depthwise convolution, integrating information across channels and preserving inter-channel relationships.

2. Residual Connection
- A residual connection lets important information pass through a direct path and mitigates problems such as vanishing gradients, so the network learns more effectively.

3. Fully Connected Network
- After the depthwise separable convolution, a two-layer fully connected network fuses the information from all channels, strengthening the model's ability to learn complex patterns and inter-channel relationships.

4. Exponential Normalization
- The logits from the fully connected layers are passed through an exponential function, expanding the value range from [0, 1] to [1, e]. This monotonic mapping makes the result more robust to positional errors.

5. Attention Mechanism
- The final output of the SEAM module serves as an attention map. This map is multiplied with the original features to emphasize relevant regions (e.g., faces) and suppress irrelevant ones (e.g., background).
- By focusing on these attention-enhanced features, the model detects faces more effectively, even when they are partially occluded.

Application in Detection
- SEAM mitigates the effect of occlusion by learning the relationship between occluded and unoccluded face regions, which helps accurately detect faces partially hidden by other objects.
- Integrating SEAM into the face detection pipeline improves performance, particularly in challenging scenes with occluded faces. A minimal sketch of the core re-weighting step follows.
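As a quick illustration of the attention mechanism described above, here is a minimal, self-contained sketch of the exponential re-weighting step. The tensor names are placeholders for this illustration only; the full module appears in Section 2.1.

import torch

# Stand-ins for real tensors: a feature map, and the sigmoid output of the
# two-layer fully connected network (one attention value per channel in [0, 1]).
features = torch.rand(1, 64, 20, 20)
logits = torch.rand(1, 64, 1, 1)

attn = torch.exp(logits)                        # monotonic map from [0, 1] to [1, e]
enhanced = features * attn.expand_as(features)  # emphasize attended channels
print(attn.min().item() >= 1.0)                 # True: no channel is scaled below identity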
2. Implementing the Detect_MultiSEAM Code
2.1 Adding Detect_MultiSEAM to YOLO11
Key step 1: paste the code below into /ultralytics/ultralytics/nn/modules/head.py, and add "Detect_MultiSEAM" to that file's __all__ (see the sketch just below).
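A sketch of the __all__ edit; the existing entries vary by Ultralytics version, so only the added name matters here. The module code itself follows.

# ultralytics/nn/modules/head.py
__all__ = "Detect", "Segment", "Pose", "Classify", "Detect_MultiSEAM"  # add Detect_MultiSEAM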
class Residual(nn.Module):
    def __init__(self, fn):
        super(Residual, self).__init__()
        self.fn = fn

    def forward(self, x):
        return self.fn(x) + x
class SEAM(nn.Module):
    def __init__(self, c1, c2, n, reduction=16):
        super(SEAM, self).__init__()
        if c1 != c2:
            c2 = c1
        # n repeated blocks of depthwise conv (with residual) + pointwise conv
        self.DCovN = nn.Sequential(
            *[nn.Sequential(
                Residual(nn.Sequential(
                    nn.Conv2d(in_channels=c2, out_channels=c2, kernel_size=3, stride=1, padding=1, groups=c2),
                    nn.GELU(),
                    nn.BatchNorm2d(c2)
                )),
                nn.Conv2d(in_channels=c2, out_channels=c2, kernel_size=1, stride=1, padding=0, groups=1),
                nn.GELU(),
                nn.BatchNorm2d(c2)
            ) for i in range(n)]
        )
        self.avg_pool = torch.nn.AdaptiveAvgPool2d(1)
        # two-layer fully connected network that fuses channel information
        self.fc = nn.Sequential(
            nn.Linear(c2, c2 // reduction, bias=False),
            nn.ReLU(inplace=True),
            nn.Linear(c2 // reduction, c2, bias=False),
            nn.Sigmoid()
        )

        self._initialize_weights()
        # self.initialize_layer(self.avg_pool)
        self.initialize_layer(self.fc)

    def forward(self, x):
        b, c, _, _ = x.size()
        y = self.DCovN(x)
        y = self.avg_pool(y).view(b, c)
        y = self.fc(y).view(b, c, 1, 1)
        y = torch.exp(y)  # exponential normalization: maps [0, 1] to [1, e]
        return x * y.expand_as(x)

    def _initialize_weights(self):
        for m in self.modules():
            if isinstance(m, nn.Conv2d):
                nn.init.xavier_uniform_(m.weight, gain=1)
            elif isinstance(m, nn.BatchNorm2d):
                nn.init.constant_(m.weight, 1)
                nn.init.constant_(m.bias, 0)

    def initialize_layer(self, layer):
        if isinstance(layer, (nn.Conv2d, nn.Linear)):
            torch.nn.init.normal_(layer.weight, mean=0., std=0.001)
            if layer.bias is not None:
                torch.nn.init.constant_(layer.bias, 0)
def DcovN(c1, c2, depth, kernel_size=3, patch_size=3):
    # patch embedding (stride = patch_size) followed by `depth` blocks of
    # depthwise conv (with residual) + pointwise conv
    dcovn = nn.Sequential(
        nn.Conv2d(c1, c2, kernel_size=patch_size, stride=patch_size),
        nn.SiLU(),
        nn.BatchNorm2d(c2),
        *[nn.Sequential(
            Residual(nn.Sequential(
                nn.Conv2d(in_channels=c2, out_channels=c2, kernel_size=kernel_size, stride=1, padding=1, groups=c2),
                nn.SiLU(),
                nn.BatchNorm2d(c2)
            )),
            nn.Conv2d(in_channels=c2, out_channels=c2, kernel_size=1, stride=1, padding=0, groups=1),
            nn.SiLU(),
            nn.BatchNorm2d(c2)
        ) for i in range(depth)]
    )
    return dcovn
class MultiSEAM(nn.Module):
    def __init__(self, c1, c2, depth, kernel_size=3, patch_size=[3, 5, 7], reduction=16):
        super(MultiSEAM, self).__init__()
        if c1 != c2:
            c2 = c1
        # three parallel branches with different patch sizes for multi-scale context
        self.DCovN0 = DcovN(c1, c2, depth, kernel_size=kernel_size, patch_size=patch_size[0])
        self.DCovN1 = DcovN(c1, c2, depth, kernel_size=kernel_size, patch_size=patch_size[1])
        self.DCovN2 = DcovN(c1, c2, depth, kernel_size=kernel_size, patch_size=patch_size[2])
        self.avg_pool = torch.nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Linear(c2, c2 // reduction, bias=False),
            nn.ReLU(inplace=True),
            nn.Linear(c2 // reduction, c2, bias=False),
            nn.Sigmoid()
        )

    def forward(self, x):
        b, c, _, _ = x.size()
        y0 = self.DCovN0(x)
        y1 = self.DCovN1(x)
        y2 = self.DCovN2(x)
        y0 = self.avg_pool(y0).view(b, c)
        y1 = self.avg_pool(y1).view(b, c)
        y2 = self.avg_pool(y2).view(b, c)
        y4 = self.avg_pool(x).view(b, c)
        y = (y0 + y1 + y2 + y4) / 4  # average pooled statistics of all branches and the input
        y = self.fc(y).view(b, c, 1, 1)
        y = torch.exp(y)
        return x * y.expand_as(x)
class Detect_SEAM(nn.Module):
    """YOLOv8 Detect head for detection models."""

    dynamic = False  # force grid reconstruction
    export = False  # export mode
    shape = None
    anchors = torch.empty(0)  # init
    strides = torch.empty(0)  # init

    def __init__(self, nc=80, ch=()):
        """Initializes the YOLOv8 detection layer with specified number of classes and channels."""
        super().__init__()
        self.nc = nc  # number of classes
        self.nl = len(ch)  # number of detection layers
        self.reg_max = 16  # DFL channels (ch[0] // 16 to scale 4/8/12/16/20 for n/s/m/l/x)
        self.no = nc + self.reg_max * 4  # number of outputs per anchor
        self.stride = torch.zeros(self.nl)  # strides computed during build
        c2, c3 = max((16, ch[0] // 4, self.reg_max * 4)), max(ch[0], min(self.nc, 100))  # channels
        self.cv2 = nn.ModuleList(
            nn.Sequential(Conv(x, c2, 3), SEAM(c2, c2, 1, 16), nn.Conv2d(c2, 4 * self.reg_max, 1)) for x in ch)
        self.cv3 = nn.ModuleList(nn.Sequential(Conv(x, c3, 3), SEAM(c3, c3, 1, 16), nn.Conv2d(c3, self.nc, 1)) for x in ch)
        self.dfl = DFL(self.reg_max) if self.reg_max > 1 else nn.Identity()

    def forward(self, x):
        """Concatenates and returns predicted bounding boxes and class probabilities."""
        shape = x[0].shape  # BCHW
        for i in range(self.nl):
            x[i] = torch.cat((self.cv2[i](x[i]), self.cv3[i](x[i])), 1)
        if self.training:
            return x
        elif self.dynamic or self.shape != shape:
            self.anchors, self.strides = (x.transpose(0, 1) for x in make_anchors(x, self.stride, 0.5))
            self.shape = shape

        x_cat = torch.cat([xi.view(shape[0], self.no, -1) for xi in x], 2)
        if self.export and self.format in ('saved_model', 'pb', 'tflite', 'edgetpu', 'tfjs'):  # avoid TF FlexSplitV ops
            box = x_cat[:, :self.reg_max * 4]
            cls = x_cat[:, self.reg_max * 4:]
        else:
            box, cls = x_cat.split((self.reg_max * 4, self.nc), 1)
        dbox = dist2bbox(self.dfl(box), self.anchors.unsqueeze(0), xywh=True, dim=1) * self.strides

        if self.export and self.format in ('tflite', 'edgetpu'):
            # Normalize xywh with image size to mitigate quantization error of TFLite integer models as done in YOLOv5:
            # https://github.com/ultralytics/yolov5/blob/0c8de3fca4a702f8ff5c435e67f378d1fce70243/models/tf.py#L307-L309
            # See this PR for details: https://github.com/ultralytics/ultralytics/pull/1695
            img_h = shape[2] * self.stride[0]
            img_w = shape[3] * self.stride[0]
            img_size = torch.tensor([img_w, img_h, img_w, img_h], device=dbox.device).reshape(1, 4, 1)
            dbox /= img_size

        y = torch.cat((dbox, cls.sigmoid()), 1)
        return y if self.export else (y, x)

    def bias_init(self):
        """Initialize Detect() biases, WARNING: requires stride availability."""
        m = self  # self.model[-1]  # Detect() module
        # cf = torch.bincount(torch.tensor(np.concatenate(dataset.labels, 0)[:, 0]).long(), minlength=nc) + 1
        # ncf = math.log(0.6 / (m.nc - 0.999999)) if cf is None else torch.log(cf / cf.sum())  # nominal class frequency
        for a, b, s in zip(m.cv2, m.cv3, m.stride):  # from
            a[-1].bias.data[:] = 1.0  # box
            b[-1].bias.data[:m.nc] = math.log(5 / m.nc / (640 / s) ** 2)  # cls (.01 objects, 80 classes, 640 img)
class Detect_MultiSEAM(Detect_SEAM):
    def __init__(self, nc=80, ch=()):
        super().__init__(nc, ch)
        self.nc = nc  # number of classes
        self.nl = len(ch)  # number of detection layers
        self.reg_max = 16  # DFL channels (ch[0] // 16 to scale 4/8/12/16/20 for n/s/m/l/x)
        self.no = nc + self.reg_max * 4  # number of outputs per anchor
        self.stride = torch.zeros(self.nl)  # strides computed during build
        c2, c3 = max((16, ch[0] // 4, self.reg_max * 4)), max(ch[0], min(self.nc, 100))  # channels
        self.cv2 = nn.ModuleList(
            nn.Sequential(Conv(x, c2, 3), MultiSEAM(c2, c2, 1), nn.Conv2d(c2, 4 * self.reg_max, 1)) for x in ch)
        self.cv3 = nn.ModuleList(nn.Sequential(Conv(x, c3, 3), MultiSEAM(c3, c3, 1), nn.Conv2d(c3, self.nc, 1)) for x in ch)
        self.dfl = DFL(self.reg_max) if self.reg_max > 1 else nn.Identity()
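The code above relies on names that, on an unmodified install, should already be imported at the top of head.py. They are listed here only so you can verify if your Ultralytics version differs:

# Imports assumed by the code above (present in a stock ultralytics/nn/modules/head.py):
import math

import torch
import torch.nn as nn

from ultralytics.utils.tal import dist2bbox, make_anchors
from .block import DFL
from .conv import Conv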
The main processing flow of SEAM (Separated and Enhancement Attention Module) can be broken down into the following steps:

1. Input Feature Map
- The input feature map comes from the previous network layer and may contain multi-channel feature information.

2. Depthwise Separable Convolution
- Depthwise convolution: convolves each input channel separately to capture the local features of each channel.
- Pointwise convolution: a 1x1 convolution integrates the outputs of the depthwise convolution, combining information from all channels. This step ensures that inter-channel relationships are preserved and enhanced.

3. Residual Connection
- The input feature map is added to the output of the pointwise convolution through a residual connection. This supports direct information flow, avoids vanishing gradients, and improves the network's learning capacity.

4. Fully Connected Network
- The output of the depthwise separable convolution and residual connection passes through a two-layer fully connected network, which further fuses the information from all channels and strengthens the model's ability to learn complex feature patterns.

5. Exponential Normalization
- The output of the fully connected network is processed by an exponential function, expanding the values from the [0, 1] range to the [1, e] range. This monotonic mapping makes the result more robust and less sensitive to positional errors.

6. Attention Map Generation
- The exponentially normalized output is used as an attention map that scores the importance of each position in the input feature map.

7. Feature Enhancement
- The attention map is multiplied with the input feature map, strengthening important regions (such as facial features) and suppressing unimportant ones (such as background) to produce an enhanced feature map.

8. Output
- The final enhanced feature map is passed to subsequent network layers for further detection or classification tasks.

Summary
The SEAM module captures local features with depthwise separable convolutions, preserves important information with residual connections, fuses channel information with a fully connected network, improves robustness with exponential normalization, and finally generates an attention map that is multiplied with the input features to enhance important regions. Through these steps, SEAM effectively handles occlusion and improves face detection accuracy.
2.2 Modifying the __init__.py File
Key step 2: modify the __init__.py file under the modules folder. First import the new classes, then declare them in __all__, as sketched below.
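A sketch of both edits, assuming the stock file layout; unrelated existing entries are abbreviated and vary by version.

# ultralytics/nn/modules/__init__.py
from .head import Classify, Detect, Detect_MultiSEAM, Pose, Segment  # add Detect_MultiSEAM to the import

__all__ = (
    "Classify",
    "Detect",
    "Detect_MultiSEAM",  # declare the new head
    "Pose",
    "Segment",
    # ... keep the remaining existing entries unchanged
)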
2.3 Adding the YAML File
Key step 3: create a new file named yolo11_Detect_MultiSEAM.yaml under /ultralytics/ultralytics/cfg/models/11 and paste in the following content.
# Ultralytics YOLO 🚀, AGPL-3.0 license
# YOLO11 object detection model with P3-P5 outputs. For Usage examples see https://docs.ultralytics.com/tasks/detect

# Parameters
nc: 80 # number of classes
scales: # model compound scaling constants, i.e. 'model=yolo11n.yaml' will call yolo11.yaml with scale 'n'
  # [depth, width, max_channels]
  n: [0.50, 0.25, 1024] # summary: 319 layers, 2624080 parameters, 2624064 gradients, 6.6 GFLOPs
  s: [0.50, 0.50, 1024] # summary: 319 layers, 9458752 parameters, 9458736 gradients, 21.7 GFLOPs
  m: [0.50, 1.00, 512] # summary: 409 layers, 20114688 parameters, 20114672 gradients, 68.5 GFLOPs
  l: [1.00, 1.00, 512] # summary: 631 layers, 25372160 parameters, 25372144 gradients, 87.6 GFLOPs
  x: [1.00, 1.50, 512] # summary: 631 layers, 56966176 parameters, 56966160 gradients, 196.0 GFLOPs

# YOLO11n backbone
backbone:
  # [from, repeats, module, args]
  - [-1, 1, Conv, [64, 3, 2]] # 0-P1/2
  - [-1, 1, Conv, [128, 3, 2]] # 1-P2/4
  - [-1, 2, C3k2, [256, False, 0.25]]
  - [-1, 1, Conv, [256, 3, 2]] # 3-P3/8
  - [-1, 2, C3k2, [512, False, 0.25]]
  - [-1, 1, Conv, [512, 3, 2]] # 5-P4/16
  - [-1, 2, C3k2, [512, True]]
  - [-1, 1, Conv, [1024, 3, 2]] # 7-P5/32
  - [-1, 2, C3k2, [1024, True]]
  - [-1, 1, SPPF, [1024, 5]] # 9
  - [-1, 2, C2PSA, [1024]] # 10

# YOLO11n head
head:
  - [-1, 1, nn.Upsample, [None, 2, "nearest"]]
  - [[-1, 6], 1, Concat, [1]] # cat backbone P4
  - [-1, 2, C3k2, [512, False]] # 13
  - [-1, 1, nn.Upsample, [None, 2, "nearest"]]
  - [[-1, 4], 1, Concat, [1]] # cat backbone P3
  - [-1, 2, C3k2, [256, False]] # 16 (P3/8-small)
  - [-1, 1, Conv, [256, 3, 2]]
  - [[-1, 13], 1, Concat, [1]] # cat head P4
  - [-1, 2, C3k2, [512, False]] # 19 (P4/16-medium)
  - [-1, 1, Conv, [512, 3, 2]]
  - [[-1, 10], 1, Concat, [1]] # cat head P5
  - [-1, 2, C3k2, [1024, True]] # 22 (P5/32-large)
  - [[16, 19, 22], 1, Detect_MultiSEAM, [nc]] # Detect(P3, P4, P5)
Tip: this article only adds the module on top of the base yolo11 configuration. To build the yolo11n/s/m/l/x variants, you only need to specify the corresponding depth, width, and max-channel multiples:

# YOLO11n
depth_multiple: 0.50 # model depth multiple
width_multiple: 0.25 # layer channel multiple
max_channels: 1024

# YOLO11s
depth_multiple: 0.50 # model depth multiple
width_multiple: 0.50 # layer channel multiple
max_channels: 1024

# YOLO11m
depth_multiple: 0.50 # model depth multiple
width_multiple: 1.00 # layer channel multiple
max_channels: 512

# YOLO11l
depth_multiple: 1.00 # model depth multiple
width_multiple: 1.00 # layer channel multiple
max_channels: 512

# YOLO11x
depth_multiple: 1.00 # model depth multiple
width_multiple: 1.50 # layer channel multiple
max_channels: 512
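Alternatively, if you keep the scales block from the config above, one way to select a variant (assuming the standard Ultralytics convention of inferring the scale from a "yolo11n/s/m/l/x" pattern in the filename) is to save a copy of the config under a scale-suffixed name:

from ultralytics import YOLO

# Hypothetical copy of the config saved as yolo11s_Detect_MultiSEAM.yaml;
# the 's' in "yolo11s" selects the corresponding row of the scales block.
model = YOLO("ultralytics/cfg/models/11/yolo11s_Detect_MultiSEAM.yaml")
model.info()  # prints layers, parameters, and GFLOPs for the selected scale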
2.4 Registering in tasks.py
Key step 4: register the head in ultralytics/nn/tasks.py. First, import Detect_MultiSEAM in tasks.py, then:
1. In the _apply function under the BaseModel class, add Detect_MultiSEAM.
2. In the __init__ function under the DetectionModel class, add Detect_MultiSEAM.
3. In the parse_model function, add Detect_MultiSEAM to the corresponding elif branch.
4. In the guess_model_task function, add Detect_MultiSEAM.
A sketch of these edits is shown below.
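The surrounding code differs between Ultralytics versions, so the following is a sketch of where each edit goes rather than an exact diff; the neighboring entries in your tasks.py will differ.

# ultralytics/nn/tasks.py (sketch of the touch points)
from ultralytics.nn.modules import Detect_MultiSEAM  # import the new head

# 1. BaseModel._apply: Detect_MultiSEAM does not subclass Detect, so extend the
#    isinstance check that moves strides/anchors, e.g.
#        if isinstance(m, (Detect, Detect_MultiSEAM)):

# 2. DetectionModel.__init__: extend the stride-computation branch the same way:
#        if isinstance(m, (Detect, Detect_MultiSEAM)):

# 3. parse_model: add the head to the branch that appends the input channels:
#        elif m in {Detect, Detect_MultiSEAM}:
#            args.append([ch[x] for x in f])

# 4. guess_model_task: make sure a config whose head ends in Detect_MultiSEAM maps
#    to the "detect" task (some versions already match any head name containing
#    "detect"; otherwise treat detect_multiseam like detect).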
2.5 Running the Program
Key step 5: create train.py in the ultralytics directory and set the model parameter to the path of yolo11_Detect_MultiSEAM.yaml:
from ultralytics import YOLO
import warnings
warnings.filterwarnings('ignore')
from pathlib import Path

if __name__ == '__main__':
    # Load the model from the modified yaml config
    model = YOLO("ultralytics/cfg/models/11/yolo11_Detect_MultiSEAM.yaml")  # path to the model yaml you want to use
    # Train the model
    results = model.train(data=r"path/to/your/dataset.yaml",  # replace with your dataset yaml
                          epochs=100, batch=16, imgsz=640, workers=4, name=Path(model.cfg).stem)
🚀 Run the program; if the following output appears, the module was added successfully 🚀
from n params module arguments
0 -1 1 464 ultralytics.nn.modules.conv.Conv [3, 16, 3, 2]
1 -1 1 4672 ultralytics.nn.modules.conv.Conv [16, 32, 3, 2]
2 -1 1 6640 ultralytics.nn.modules.block.C3k2 [32, 64, 1, False, 0.25]
3 -1 1 36992 ultralytics.nn.modules.conv.Conv [64, 64, 3, 2]
4 -1 1 26080 ultralytics.nn.modules.block.C3k2 [64, 128, 1, False, 0.25]
5 -1 1 147712 ultralytics.nn.modules.conv.Conv [128, 128, 3, 2]
6 -1 1 87040 ultralytics.nn.modules.block.C3k2 [128, 128, 1, True]
7 -1 1 295424 ultralytics.nn.modules.conv.Conv [128, 256, 3, 2]
8 -1 1 346112 ultralytics.nn.modules.block.C3k2 [256, 256, 1, True]
9 -1 1 164608 ultralytics.nn.modules.block.SPPF [256, 256, 5]
10 -1 1 249728 ultralytics.nn.modules.block.C2PSA [256, 256, 1]
11 -1 1 0 torch.nn.modules.upsampling.Upsample [None, 2, 'nearest']
12 [-1, 6] 1 0 ultralytics.nn.modules.conv.Concat [1]
13 -1 1 111296 ultralytics.nn.modules.block.C3k2 [384, 128, 1, False]
14 -1 1 0 torch.nn.modules.upsampling.Upsample [None, 2, 'nearest']
15 [-1, 4] 1 0 ultralytics.nn.modules.conv.Concat [1]
16 -1 1 32096 ultralytics.nn.modules.block.C3k2 [256, 64, 1, False]
17 -1 1 36992 ultralytics.nn.modules.conv.Conv [64, 64, 3, 2]
18 [-1, 13] 1 0 ultralytics.nn.modules.conv.Concat [1]
19 -1 1 86720 ultralytics.nn.modules.block.C3k2 [192, 128, 1, False]
20 -1 1 147712 ultralytics.nn.modules.conv.Conv [128, 128, 3, 2]
21 [-1, 10] 1 0 ultralytics.nn.modules.conv.Concat [1]
22 -1 1 378880 ultralytics.nn.modules.block.C3k2 [384, 256, 1, True]
23 [16, 19, 22] 1 3348640 ultralytics.nn.modules.head.Detect_MultiSEAM [80, [64, 128, 256]]
YOLO11_Detect_MultiSEAM summary: 553 layers, 5,507,808 parameters, 5,507,792 gradients, 7.3 GFLOPs
3. Modified Network Structure Diagram
4. Full Code
https://pan.baidu.com/s/1BBjITF2Thv2QIaJiBwVdEw?pwd=j2sc
Extraction code: j2sc
5. GFLOPs
For how GFLOPs are computed, see: 百面算法工程师 | 卷积基础知识——Convolution
Unmodified YOLO11n: 6.6 GFLOPs
With the Detect_MultiSEAM head: 7.3 GFLOPs
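To reproduce the comparison yourself, a minimal sketch using the standard Ultralytics model.info() call; the paths assume the configs used in this article:

from ultralytics import YOLO

# Baseline YOLO11n (reported above as 6.6 GFLOPs)
YOLO("ultralytics/cfg/models/11/yolo11.yaml").info()
# With the Detect_MultiSEAM head (reported above as 7.3 GFLOPs)
YOLO("ultralytics/cfg/models/11/yolo11_Detect_MultiSEAM.yaml").info()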
6. Going Further
Detect_MultiSEAM can be combined with other attention mechanisms or loss functions to further improve detection performance.
7. Summary
SEAM (Separated and Enhancement Attention Module) uses depthwise separable convolutions to process each input channel individually and capture channel-specific features, then integrates them with 1x1 convolutions to preserve cross-channel information. Residual connections mitigate vanishing gradients during training and keep important information flowing through a direct path. After the depthwise separable convolutions, a two-layer fully connected network fuses the information from all channels, strengthening the model's ability to learn complex patterns and inter-channel relationships. The output logits are passed through an exponential function, expanding the value range from [0, 1] to [1, e]; this monotonic mapping makes the result more robust to positional errors. Finally, the output of the SEAM module serves as an attention map that is multiplied with the original features, emphasizing relevant regions (such as faces) and suppressing irrelevant ones (such as background). In this way, SEAM learns the relationship between occluded and unoccluded regions and improves face detection accuracy under partial occlusion.