
Improving YOLO: Swapping the Backbone for GhostNetV2

While working through 迪菲赫尔曼's column article, I found some of the steps too terse and not very friendly to beginners. This post writes out the procedure in full detail.

迪菲赫尔曼's original article:

改进YOLO系列 | GhostNetV2: 用长距离注意力增强低成本运算 | 更换骨干网络之GhostNetV2_ghostnetv2 yolo-CSDN博客


Contents

1. GhostNetV2 Core Code

2. Modifying common.py

3. Modifying yolo.py

4. Creating the YAML File

5. Training


1. GhostNetV2 Core Code

Create a modules folder under the models folder, then create a new .py file inside it, here named GhostV2.py. Copy the code below into that file.

Note: many tutorials paste improvement code straight into common.py, which gets messy once several modifications pile up. Keeping each improvement in its own file under a modules folder makes them much easier to manage; the resulting layout is sketched below.
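The relevant files after this step (only the new items are annotated):

models/
├── common.py          # existing; edited in step 2
├── yolo.py            # existing; edited in step 3
└── modules/
    └── GhostV2.py     # new file containing the code below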

import math
from functools import partial

import numpy as np

import torch
import torch.nn as nn
import torch.nn.functional as F


def _make_divisible(x, divisor=4):
    return int(np.ceil(x * 1. / divisor) * divisor)


class MyHSigmoid(nn.Module):
    """
    Hard Sigmoid definition.

    Args:

    Returns:
        Tensor, output tensor.

    Examples:
        >>> MyHSigmoid()
    """

    def __init__(self):
        super(MyHSigmoid, self).__init__()
        self.relu6 = nn.ReLU6()

    def forward(self, x):
        """ forward """
        return self.relu6(x + 3.) * 0.16666667


class Activation(nn.Module):
    """
    Activation definition.

    Args:
        act_func(string): activation name.

    Returns:
         Tensor, output tensor.
    """

    def __init__(self, act_func):
        super(Activation, self).__init__()
        if act_func == 'relu':
            self.act = nn.ReLU()
        elif act_func == 'relu6':
            self.act = nn.ReLU6()
        elif act_func == 'sigmoid':
            self.act = nn.Sigmoid()
        elif act_func in ('hsigmoid', 'hard_sigmoid'):
            self.act = MyHSigmoid()
        elif act_func in ('hswish', 'hard_swish'):
            self.act = nn.Hardswish()
        else:
            raise NotImplementedError

    def forward(self, x):
        """ forward """
        return self.act(x)


class GlobalAvgPooling(nn.Module):
    """
    Global avg pooling definition.

    Args:

    Returns:
        Tensor, output tensor.

    Examples:
        >>> GlobalAvgPooling()
    """

    def __init__(self):
        super(GlobalAvgPooling, self).__init__()
        self.mean = nn.AdaptiveAvgPool2d(1)

    def forward(self, x):
        """ forward """
        x = self.mean(x)
        return x


class SE_Ghost(nn.Module):
    """
    SE (squeeze-and-excitation) wrapper definition.

    Args:
        num_out (int): Output channel.
        ratio (int): Reduction ratio for the middle channels.

    Returns:
        Tensor, output tensor.

    """

    def __init__(self, num_out, ratio=4):
        super(SE_Ghost, self).__init__()
        num_mid = _make_divisible(num_out // ratio)
        self.pool = GlobalAvgPooling()
        self.conv_reduce = nn.Conv2d(in_channels=num_out, out_channels=num_mid,
                                     kernel_size=1, bias=True, padding_mode='zeros')
        self.act1 = Activation('relu')
        self.conv_expand = nn.Conv2d(in_channels=num_mid, out_channels=num_out,
                                     kernel_size=1, bias=True, padding_mode='zeros')
        self.act2 = Activation('hsigmoid')

    def forward(self, x):
        """ forward of SE module """
        out = self.pool(x)
        out = self.conv_reduce(out)
        out = self.act1(out)
        out = self.conv_expand(out)
        out = self.act2(out)
        out = x * out
        return out


class ConvUnit(nn.Module):
    """
    ConvUnit wrapper definition (Conv2d + BatchNorm2d + optional activation).

    Args:
        num_in (int): Input channel.
        num_out (int): Output channel.
        kernel_size (Union[int, tuple[int]]): Kernel size.
        stride (int): Stride size.
        padding (Union[int, tuple[int]]): Padding number.
        num_groups (int): Number of convolution groups.
        use_act (bool): Whether to apply the activation.
        act_type (string): Activation type.

    Returns:
        Tensor, output tensor.

    """

    def __init__(self, num_in, num_out, kernel_size=1, stride=1, padding=0, num_groups=1,
                 use_act=True, act_type='relu'):
        super(ConvUnit, self).__init__()
        self.conv = nn.Conv2d(in_channels=num_in,
                              out_channels=num_out,
                              kernel_size=kernel_size,
                              stride=stride,
                              padding=padding,
                              groups=num_groups,
                              bias=False,
                              padding_mode='zeros')
        self.bn = nn.BatchNorm2d(num_out)
        self.use_act = use_act
        self.act = Activation(act_type) if use_act else None

    def forward(self, x):
        """ forward of conv unit """
        out = self.conv(x)
        out = self.bn(out)
        if self.use_act:
            out = self.act(out)
        return out


class GhostModule(nn.Module):
    """
    GhostModule wrapper definition.

    Args:
        num_in (int): Input channel.
        num_out (int): Output channel.
        kernel_size (int): Kernel size of the primary convolution.
        stride (int): Stride size.
        padding (int): Padding number.
        ratio (int): Reduction ratio.
        dw_size (int): Kernel size of the cheap (depthwise) operation.
        use_act (bool): Whether to apply the activation.
        act_type (string): Activation type.

    Returns:
        Tensor, output tensor.

    """

    def __init__(self, num_in, num_out, kernel_size=1, stride=1, padding=0, ratio=2, dw_size=3,
                 use_act=True, act_type='relu'):
        super(GhostModule, self).__init__()
        init_channels = math.ceil(num_out / ratio)
        new_channels = init_channels * (ratio - 1)

        self.primary_conv = ConvUnit(num_in, init_channels, kernel_size=kernel_size, stride=stride,
                                     padding=kernel_size // 2, num_groups=1, use_act=use_act, act_type=act_type)
        self.cheap_operation = ConvUnit(init_channels, new_channels, kernel_size=dw_size, stride=1,
                                        padding=dw_size // 2, num_groups=init_channels,
                                        use_act=use_act, act_type=act_type)

    def forward(self, x):
        """ ghost module forward """
        x1 = self.primary_conv(x)
        x2 = self.cheap_operation(x1)
        return torch.cat([x1, x2], dim=1)


class GhostModuleMul(nn.Module):
    """
    GhostModuleMul wrapper definition (GhostModule gated by the DFC attention branch).

    Args:
        num_in (int): Input channel.
        num_out (int): Output channel.
        kernel_size (int): Kernel size of the primary convolution.
        stride (int): Stride size.
        padding (int): Padding number.
        ratio (int): Reduction ratio.
        dw_size (int): Kernel size of the cheap (depthwise) operation.
        use_act (bool): Whether to apply the activation.
        act_type (string): Activation type.

    Returns:
        Tensor, output tensor.

    """

    def __init__(self, num_in, num_out, kernel_size=1, stride=1, padding=0, ratio=2, dw_size=3,
                 use_act=True, act_type='relu'):
        super(GhostModuleMul, self).__init__()
        # NOTE: the official GhostNetV2 code downsamples here with AvgPool2d(kernel_size=2, stride=2);
        # with (1, 1) this layer is an identity op, so the DFC branch runs at full resolution.
        self.avgpool2d = nn.AvgPool2d(kernel_size=1, stride=1)
        self.gate_fn = Activation('sigmoid')
        init_channels = math.ceil(num_out / ratio)
        new_channels = init_channels * (ratio - 1)

        self.primary_conv = ConvUnit(num_in, init_channels, kernel_size=kernel_size, stride=stride,
                                     padding=kernel_size // 2, num_groups=1, use_act=use_act, act_type=act_type)
        self.cheap_operation = ConvUnit(init_channels, new_channels, kernel_size=dw_size, stride=1,
                                        padding=dw_size // 2, num_groups=init_channels,
                                        use_act=use_act, act_type=act_type)
        self.short_conv = nn.Sequential(
            ConvUnit(num_in, num_out, kernel_size=kernel_size, stride=stride,
                     padding=kernel_size // 2, num_groups=1, use_act=False),
            ConvUnit(num_out, num_out, kernel_size=(1, 5), stride=1,
                     padding=(0, 2), num_groups=num_out, use_act=False),
            ConvUnit(num_out, num_out, kernel_size=(5, 1), stride=1,
                     padding=(2, 0), num_groups=num_out, use_act=False),
        )

    def forward(self, x):
        """ ghost module forward """
        res = self.avgpool2d(x)
        res = self.short_conv(res)
        res = self.gate_fn(res)

        x1 = self.primary_conv(x)
        x2 = self.cheap_operation(x1)
        out = torch.cat([x1, x2], dim=1)
        out = out * F.interpolate(res, size=out.shape[-2:], mode="bilinear", align_corners=True)
        return out


class GhostV2(nn.Module):
    """
    GhostV2 bottleneck (GhostBottleneck) wrapper definition.

    Args:
        num_in (int): Input channel.
        num_out (int): Output channel.
        num_mid (int): Middle (expansion) channel.
        kernel_size (int): Kernel size of the depthwise convolution.
        stride (int): Stride size.
        act_type (str): Activation type.
        use_se (bool): Whether to use the SE module.
        layer_id (int): Block index; blocks 0 and 1 use GhostModule, later blocks use GhostModuleMul.

    Returns:
        Tensor, output tensor.
    """

    def __init__(self, num_in, num_out, num_mid, kernel_size, stride=1, act_type='relu', use_se=False, layer_id=0):
        super(GhostV2, self).__init__()
        # The first two blocks (layer_id <= 1) use the plain GhostModule; deeper blocks add DFC attention.
        self.use_ori_module = layer_id <= 1
        if self.use_ori_module:
            self.ghost1 = GhostModule(num_in, num_mid, kernel_size=1,
                                      stride=1, padding=0, act_type=act_type)
        else:
            self.ghost1 = GhostModuleMul(num_in, num_mid, kernel_size=1,
                                         stride=1, padding=0, act_type=act_type)

        self.use_dw = stride > 1
        self.dw = None
        if self.use_dw:
            self.dw = ConvUnit(num_mid, num_mid, kernel_size=kernel_size, stride=stride,
                               padding=self._get_pad(kernel_size), act_type=act_type, num_groups=num_mid, use_act=False)

        self.use_se = use_se
        if use_se:
            self.se = SE_Ghost(num_mid)

        self.ghost2 = GhostModule(num_mid, num_out, kernel_size=1, stride=1,
                                  padding=0, act_type=act_type, use_act=False)

        self.down_sample = False
        if num_in != num_out or stride != 1:
            self.down_sample = True
        self.shortcut = None
        if self.down_sample:
            self.shortcut = nn.Sequential(
                ConvUnit(num_in, num_in, kernel_size=kernel_size, stride=stride,
                         padding=self._get_pad(kernel_size), num_groups=num_in, use_act=False),
                ConvUnit(num_in, num_out, kernel_size=1, stride=1,
                         padding=0, num_groups=1, use_act=False),
            )

    def forward(self, x):
        """ forward of ghostnet """
        shortcut = x
        out = self.ghost1(x)
        if self.use_dw:
            out = self.dw(out)
        if self.use_se:
            out = self.se(out)
        out = self.ghost2(out)
        if self.down_sample:
            shortcut = self.shortcut(shortcut)
        out = shortcut + out
        return out

    def _get_pad(self, kernel_size):
        """set the padding number"""
        pad = 0
        if kernel_size == 1:
            pad = 0
        elif kernel_size == 3:
            pad = 1
        elif kernel_size == 5:
            pad = 2
        elif kernel_size == 7:
            pad = 3
        else:
            raise NotImplementedError
        return pad
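
Before wiring the module into YOLOv5, a quick forward pass is a cheap way to confirm the file is self-consistent. A minimal sketch (arbitrary channel sizes; run from the repository root):

import torch
from models.modules.GhostV2 import GhostV2

block = GhostV2(num_in=16, num_out=16, num_mid=16, kernel_size=3,
                stride=1, act_type='relu', use_se=False, layer_id=0)
x = torch.randn(1, 16, 64, 64)
print(block(x).shape)  # expected: torch.Size([1, 16, 64, 64])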

2. Modifying common.py

Near the top of common.py, with the other imports, add the following line to pull in the contents of GhostV2.py:

from models.modules.GhostV2 import *

Then, after the C3Ghost module, add the C3_Ghostv2 module:

class C3_Ghostv2(nn.Module):
    def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5):  # ch_in, ch_out, number, shortcut, groups, expansion
        super().__init__()
        c_ = int(c2 * e)  # hidden channels
        self.cv1 = Conv(c1, c_, 1, 1)
        self.cv2 = Conv(c1, c_, 1, 1)
        self.cv3 = Conv(2 * c_, c2, 1)  # act=FReLU(c2)
        self.m = nn.Sequential(*(GhostV2(c_, c_, 2 * c_, 3, 1, 'relu', False, 2) for _ in range(n)))

    def forward(self, x):
        return self.cv3(torch.cat((self.m(self.cv1(x)), self.cv2(x)), dim=1))
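
Note that num_mid is hard-coded to 2 * c_ and layer_id to 2, so the GhostV2 blocks inside C3_Ghostv2 always take the GhostModuleMul (DFC attention) path. A quick shape check (hypothetical sizes; run it inside common.py's namespace, since C3_Ghostv2 depends on Conv):

m = C3_Ghostv2(64, 128, n=1)
print(m(torch.randn(1, 64, 32, 32)).shape)  # expected: torch.Size([1, 128, 32, 32])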

3. Modifying yolo.py

In yolo.py, find the import statement from models.common:

from models.common import (<modules to add>)

Add GhostV2 and C3_Ghostv2 to this list, as shown below.
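For example (a sketch; keep all the names your yolo.py already imports and simply append the two new ones):

from models.common import (C3, C3Ghost, Conv, Concat, GhostV2, C3_Ghostv2)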

Next, find the parse_model function and register both modules there. For GhostV2, add the following branch to the existing if/elif chain (before the final else):

elif m is GhostV2:
    c1, c2 = ch[f], args[0]
    if c2 != no:
        c2 = make_divisible(c2 * gw, 8)
    args = [c1, c2, *args[1:]]
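
C3_Ghostv2 behaves like the stock C3 variants, so the simplest registration is to append it to the two existing module sets near the top of parse_model (a sketch based on the v6.x parse_model; the exact membership of these sets varies by release):

        if m in {Conv, GhostConv, Bottleneck, GhostBottleneck, SPP, SPPF, DWConv, Focus,
                 BottleneckCSP, C3, C3TR, C3SPP, C3Ghost, C3_Ghostv2}:  # C3_Ghostv2 appended
            c1, c2 = ch[f], args[0]
            if c2 != no:
                c2 = make_divisible(c2 * gw, 8)
            args = [c1, c2, *args[1:]]
            if m in {BottleneckCSP, C3, C3TR, C3Ghost, C3_Ghostv2}:  # appended here too, so the repeat count n is inserted
                args.insert(2, n)
                n = 1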


4. Creating the YAML File

Under the models folder, create a new yaml file, here named yolov5-GhostNetv2.yaml.

Copy the following into the file:

# Paper: https://arxiv.org/abs/2211.12905
# Parameters
nc: 80  # number of classes
depth_multiple: 1  # model depth multiple
width_multiple: 1  # layer channel multiple
anchors:
  - [10,13, 16,30, 33,23]  # P3/8
  - [30,61, 62,45, 59,119]  # P4/16
  - [116,90, 156,198, 373,326]  # P5/32

# YOLOv5 v6.0 backbone
backbone:
  # [from, number, module, args]
  [[-1, 1, Conv, [16, 3, 2]],  # 0-P1/2
   [-1, 1, GhostV2, [16, 16, 3, 1, 'relu', False, 0]],   # 1  (outc, midc, k, s, act, se, id)

   [-1, 1, GhostV2, [24, 48, 3, 2, 'relu', False, 1]],   # 2
   [-1, 1, GhostV2, [24, 72, 3, 1, 'relu', False, 2]],   # 3  P2

   [-1, 1, GhostV2, [40, 72, 5, 2, 'relu', True, 3]],    # 4
   [-1, 1, GhostV2, [40, 120, 5, 1, 'relu', True, 4]],   # 5  P3

   [-1, 1, GhostV2, [80, 240, 3, 2, 'relu', False, 5]],  # 6
   [-1, 1, GhostV2, [80, 200, 3, 1, 'relu', False, 6]],  # 7
   [-1, 1, GhostV2, [80, 184, 3, 1, 'relu', False, 7]],  # 8
   [-1, 1, GhostV2, [80, 184, 3, 1, 'relu', False, 8]],  # 9
   [-1, 1, GhostV2, [112, 480, 3, 1, 'relu', True, 9]],  # 10
   [-1, 1, GhostV2, [112, 672, 3, 1, 'relu', True, 10]], # 11 P4

   [-1, 1, GhostV2, [160, 672, 5, 2, 'relu', True, 11]],  # 12
   [-1, 1, GhostV2, [160, 960, 5, 1, 'relu', False, 12]], # 13
   [-1, 1, GhostV2, [160, 960, 5, 1, 'relu', True, 13]],  # 14
   [-1, 1, GhostV2, [160, 960, 5, 1, 'relu', False, 14]], # 15
   [-1, 1, GhostV2, [160, 960, 5, 1, 'relu', True, 15]],  # 16 P5
  ]

# YOLOv5 v6.0 head
head:
  [[-1, 1, Conv, [80, 1, 1]],
   [-1, 1, nn.Upsample, [None, 2, 'nearest']],
   [[-1, 11], 1, Concat, [1]],  # cat backbone P4
   [-1, 3, C3, [192, False]],  # 20

   [-1, 1, Conv, [80, 1, 1]],
   [-1, 1, nn.Upsample, [None, 2, 'nearest']],
   [[-1, 5], 1, Concat, [1]],  # cat backbone P3
   [-1, 3, C3, [120, False]],  # 24 (P3/8-small)

   [-1, 1, Conv, [60, 3, 2]],
   [[-1, 21], 1, Concat, [1]],  # cat head P4
   [-1, 3, C3, [144, False]],  # 27 (P4/16-medium)

   [-1, 1, Conv, [80, 3, 2]],
   [[-1, 17], 1, Concat, [1]],  # cat head P5
   [-1, 3, C3, [160, False]],  # 30 (P5/32-large)

   [[24, 27, 30], 1, Detect, [nc, anchors]],  # Detect(P3, P4, P5)
  ]
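
Before training, it is worth checking that the yaml parses and every layer index resolves. The stock models/yolo.py has a small test entry point that builds a model from a --cfg argument (run from the repo root):

python models/yolo.py --cfg models/yolov5-GhostNetv2.yaml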

That completes the code changes.

5. Training

Because the backbone has been replaced, training starts from scratch whether or not pretrained weights are supplied.

Open coco.yaml under the data folder and update the dataset paths. Prefer absolute paths; relative paths are a common source of errors.
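For reference, the relevant fields look like this (hypothetical paths; substitute your own dataset location):

path: /home/user/datasets/coco   # dataset root (absolute path recommended)
train: images/train2017          # train images, relative to path
val: images/val2017              # val images, relative to path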

Then open train.py in the yolov5 directory and adjust its arguments.

For now, only --weights, --cfg, and --data need to be set; the rest can stay at their defaults.

--weights: start with the official YOLOv5 weights (e.g. yolov5s.pt); once you have trained your own model, you can point this at your own checkpoint. As noted above, the pretrained weights will not transfer to the new backbone, so training starts from scratch either way.

--cfg: the yolov5-GhostNetv2.yaml just created under models.

--data: the coco.yaml whose paths you just updated.

--batch-size: defaults to 16. If training aborts with an out-of-memory error, drop it to 8 (batch sizes are conventionally multiples of 8).
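
Putting the arguments together, a typical launch looks like this (a sketch; adjust the paths to your setup):

python train.py --weights yolov5s.pt --cfg models/yolov5-GhostNetv2.yaml --data data/coco.yaml --batch-size 16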

Open train.py in your editor. In VS Code, press Ctrl+Shift+P, search for "Python: Select Interpreter" in the command palette, and pick the virtual environment you created (here named yolo).

Click the Run button in the top-right corner and wait for training to finish.

Training results are saved under the runs folder (runs/train/exp by default).

