
DETR: End-to-End Object Detection with Transformers

[DETR paper deep-read (Bilibili)] https://b23.tv/Iy9k4O2

[DETR source-code walkthrough, part 4 (Bilibili)] https://b23.tv/Qp1uH5v

Abstract:

Object detection is cast as a set prediction problem.

Task: given an image, predict a set of boxes, where each box needs its coordinates and the class of the object it contains. The boxes can be viewed as a set; different images have different boxes, and hence different sets.

Removed: NMS and anchor generation.

Contributions:

① A new objective that uses bipartite matching to force the model to output a set of unique predictions (redundant boxes are removed; ideally each object yields exactly one box).

② A Transformer encoder-decoder architecture:

(1) The decoder introduces learned object queries, which combine with the global image information through repeated attention operations, letting the model directly output a set of predicted boxes.

(2) Boxes are predicted in parallel.

Introduction:

Background:

Most object detectors predict indirectly, replacing the detection problem with regression and classification over surrogates such as proposals or anchors.

Their performance is limited by the post-processing (NMS) step, which has to deal with large numbers of redundant boxes.

Pipeline:

① Extract features with a convolutional neural network.

② Flatten the features and feed them into a Transformer encoder and decoder to learn global information, letting every position interact with the whole feature map.

③ Output the predicted boxes, with the number of boxes bounded by the object queries.

④ Compute the loss with bipartite matching: a matching loss between predicted boxes and GT boxes decides which prediction corresponds to which GT box; for the matched pairs, a classification loss and a bounding-box loss are computed; unmatched predictions are labeled as background.

Related work:

Two-stage detectors: proposals.

One-stage detectors: anchors or object centers.

Set-based methods:

Relation networks and learnable NMS use self-attention-like mechanisms to model relations between objects.

They need no post-processing,

but their performance is lower and they still rely on hand-crafted components.

Encoder-decoder methods:

They capture more global information,

and output the target boxes in parallel, which improves efficiency.

Encoder:

The code structure corresponds to the figure:

(0): TransformerEncoderLayer(
          (self_attn): MultiheadAttention(
            (out_proj): NonDynamicallyQuantizableLinear(in_features=256, out_features=256, bias=True)
          )
          (linear1): Linear(in_features=256, out_features=2048, bias=True)
          (dropout): Dropout(p=0.1, inplace=False)
          (linear2): Linear(in_features=2048, out_features=256, bias=True)
          (norm1): LayerNorm((256,), eps=1e-05, elementwise_affine=True)
          (norm2): LayerNorm((256,), eps=1e-05, elementwise_affine=True)
          (dropout1): Dropout(p=0.1, inplace=False)
          (dropout2): Dropout(p=0.1, inplace=False)
        )
# q and k are the image features plus the positional encoding
q = k = self.with_pos_embed(src, pos)
# the value is still the plain feature map (no positional encoding)
src2 = self.self_attn(q, k, value=src, attn_mask=src_mask,
                      key_padding_mask=src_key_padding_mask)[0]

The Decoder as modified in DETR:

# pass the encoder-stage outputs (memory, padding mask, positional encoding, object queries) into the decoder
hs = self.decoder(tgt, memory, memory_key_padding_mask=mask,
                  pos=pos_embed, query_pos=query_embed)

The code structure corresponds to the figure:

(0): TransformerDecoderLayer(
          (self_attn): MultiheadAttention(
            (out_proj): NonDynamicallyQuantizableLinear(in_features=256, out_features=256, bias=True)
          )
          (multihead_attn): MultiheadAttention(
            (out_proj): NonDynamicallyQuantizableLinear(in_features=256, out_features=256, bias=True)
          )
          (linear1): Linear(in_features=256, out_features=2048, bias=True)
          (dropout): Dropout(p=0.1, inplace=False)
          (linear2): Linear(in_features=2048, out_features=256, bias=True)
          (norm1): LayerNorm((256,), eps=1e-05, elementwise_affine=True)
          (norm2): LayerNorm((256,), eps=1e-05, elementwise_affine=True)
          (norm3): LayerNorm((256,), eps=1e-05, elementwise_affine=True)
          (dropout1): Dropout(p=0.1, inplace=False)
          (dropout2): Dropout(p=0.1, inplace=False)
          (dropout3): Dropout(p=0.1, inplace=False)
        )

The first block of each decoder layer uses Multi-Head Self-Attention, while the second block uses (cross) Multi-Head Attention.

The first one, MHSA:

q = k = self.with_pos_embed(tgt, query_pos)
# tgt (the object-query representations) is used as the value in this self-attention
# attention scores are computed between every pair of query representations
tgt2 = self.self_attn(q, k, value=tgt, attn_mask=tgt_mask,
                      key_padding_mask=tgt_key_padding_mask)[0]
The second one, cross MHA:

# the cross-attention layer computes attention scores between the query representations and the image features
tgt2 = self.multihead_attn(
                           # q: output of the first attention + query positional embedding
                           query=self.with_pos_embed(tgt, query_pos),
                           # k: encoder output + position embedding (spatial positional encoding)
                           key=self.with_pos_embed(memory, pos),
                           # v: encoder output (memory), without positional encoding
                           value=memory,
                           attn_mask=memory_mask,
                           key_padding_mask=memory_key_padding_mask)[0]
Why this order:

the queries first coordinate with each other (via MHSA),

and only then interact with the image features to extract useful information (via cross MHA).
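Putting the two blocks together, here is a condensed sketch of one post-norm decoder layer, consistent with the modules printed above (masks and argument handling abridged; self.activation is the ReLU from the constructor defaults):

def decoder_layer_forward(self, tgt, memory, pos, query_pos):
    # 1) the queries first coordinate with each other (MHSA)
    q = k = self.with_pos_embed(tgt, query_pos)
    tgt = self.norm1(tgt + self.dropout1(self.self_attn(q, k, value=tgt)[0]))
    # 2) each query then looks at the image features (cross MHA)
    tgt2 = self.multihead_attn(query=self.with_pos_embed(tgt, query_pos),
                               key=self.with_pos_embed(memory, pos),
                               value=memory)[0]
    tgt = self.norm2(tgt + self.dropout2(tgt2))
    # 3) finally a feed-forward network refines each query independently
    tgt2 = self.linear2(self.dropout(self.activation(self.linear1(tgt))))
    return self.norm3(tgt + self.dropout3(tgt2))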

DETR:

import torch
import torch.nn as nn

class DETR(nn.Module):
    def __init__(self, num_classes, hidden_dim, nheads, num_encoder_layers, num_decoder_layers, num_queries):
        super().__init__()
        # Encoder
        encoder_layer = nn.TransformerEncoderLayer(d_model=hidden_dim, nhead=nheads, batch_first=True)
        self.transformer_encoder = nn.TransformerEncoder(encoder_layer, num_layers=num_encoder_layers)

        # Decoder
        decoder_layer = nn.TransformerDecoderLayer(d_model=hidden_dim, nhead=nheads, batch_first=True)
        self.transformer_decoder = nn.TransformerDecoder(decoder_layer, num_layers=num_decoder_layers)

        # Object queries
        self.query_embed = nn.Embedding(num_queries, hidden_dim)

        # Prediction heads
        self.class_embed = nn.Linear(hidden_dim, num_classes + 1)  # +1 for the background / "no object" class
        self.bbox_embed = MLP(hidden_dim, hidden_dim, 4, 3)        # 3-layer MLP helper from the DETR repo, predicts (cx, cy, w, h)

    def forward(self, images):
        # Assume images have already been turned by the CNN into features of shape [batch_size, num_channels, H, W];
        # the backbone is omitted here and the flattened, projected features are only sketched.
        src = ...  # [batch_size, src_seq_len, hidden_dim], where src_seq_len = H * W of the feature map

        # Object queries, broadcast across the batch
        query = self.query_embed.weight.unsqueeze(0).repeat(images.shape[0], 1, 1)  # [batch_size, num_queries, hidden_dim]

        # Encoder
        memory = self.transformer_encoder(src)

        # Decoder: the target sequence starts as zeros and the learned queries act as its positional embedding
        tgt = torch.zeros_like(query)
        outputs = self.transformer_decoder(tgt + query, memory)

        # Output predictions
        outputs_class = self.class_embed(outputs)
        outputs_coord = self.bbox_embed(outputs).sigmoid()  # sigmoid keeps the normalized box coordinates in [0, 1]

        return outputs_class, outputs_coord

Set-based objective:

A cost matrix is used. The situation is analogous to assigning three jobs to three workers so that each worker gets the job they are best at.

The linear_sum_assignment function from the scipy package performs the assignment, taking the cost matrix as input.

Think of a, b, c as the 100 predicted boxes and x, y, z as the GT boxes.
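A minimal sketch of this assignment step on a toy 3×3 cost matrix (in DETR the real matrix is num_queries × num_GT):

import numpy as np
from scipy.optimize import linear_sum_assignment

# Toy cost matrix: rows = predictions (a, b, c), columns = GT boxes (x, y, z);
# entry [i, j] is the cost of matching prediction i to GT box j.
cost = np.array([
    [0.9, 0.2, 0.7],
    [0.4, 0.8, 0.1],
    [0.3, 0.6, 0.5],
])

# Hungarian algorithm: indices of the minimum-cost one-to-one assignment
row_ind, col_ind = linear_sum_assignment(cost)
print(list(zip(row_ind, col_ind)))   # e.g. [(0, 1), (1, 2), (2, 0)]
print(cost[row_ind, col_ind].sum())  # total cost of the optimal matching: 0.2 + 0.1 + 0.3 = 0.6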

Loss function:

It contains a classification loss and a box loss.

To keep the two losses in a similar value range, the log is dropped from the classification term (the class probability is used directly), which gives better results.

For the bounding boxes, using the L1 loss alone is problematic because it depends on box size, so a generalized IoU loss (which is scale-invariant) is added.
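A minimal sketch of the generalized IoU term for axis-aligned boxes in (x1, y1, x2, y2) format; this is an illustrative implementation, not the exact helper from the DETR repository:

import torch

def generalized_iou(boxes1, boxes2):
    """GIoU for two [N, 4] tensors of boxes in (x1, y1, x2, y2) format, paired element-wise."""
    area1 = (boxes1[:, 2] - boxes1[:, 0]) * (boxes1[:, 3] - boxes1[:, 1])
    area2 = (boxes2[:, 2] - boxes2[:, 0]) * (boxes2[:, 3] - boxes2[:, 1])

    # intersection
    lt = torch.max(boxes1[:, :2], boxes2[:, :2])   # top-left corner of the overlap
    rb = torch.min(boxes1[:, 2:], boxes2[:, 2:])   # bottom-right corner of the overlap
    wh = (rb - lt).clamp(min=0)
    inter = wh[:, 0] * wh[:, 1]
    union = area1 + area2 - inter
    iou = inter / union

    # smallest enclosing box
    lt_c = torch.min(boxes1[:, :2], boxes2[:, :2])
    rb_c = torch.max(boxes1[:, 2:], boxes2[:, 2:])
    wh_c = (rb_c - lt_c).clamp(min=0)
    area_c = wh_c[:, 0] * wh_c[:, 1]

    # GIoU = IoU - (enclosing area - union) / enclosing area; the loss used is 1 - GIoU
    return iou - (area_c - union) / area_c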

Detail: to make the model converge faster and train more stably, many auxiliary losses (extra objectives) are added on the decoder side; after each of the 6 decoder layers a prediction FFN (with shared parameters) produces detection outputs, from which a loss is computed.
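A rough sketch of how these auxiliary losses can be wired up, assuming the decoder returns the stacked intermediate outputs of every layer (the return_intermediate flag in the decoder code further below) and that a hypothetical criterion computes the matching-based loss as a scalar:

# hs: [num_decoder_layers, batch, num_queries, hidden_dim], one slice per decoder layer
outputs_class = self.class_embed(hs)           # the shared classification head is applied to every layer
outputs_coord = self.bbox_embed(hs).sigmoid()  # the shared box head is applied to every layer

# the last layer gives the final prediction, the earlier layers give the auxiliary ones
loss = criterion(outputs_class[-1], outputs_coord[-1], targets)
for cls_l, box_l in zip(outputs_class[:-1], outputs_coord[:-1]):
    loss = loss + criterion(cls_l, box_l, targets)  # same matching loss on each intermediate output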

Network architecture:

Model creation:

detr = DETR(num_classes=91, hidden_dim=256, nheads=8, num_encoder_layers=6, num_decoder_layers=6)

Overall structure (printed model):

DETR(
  (transformer): Transformer(
    (encoder): TransformerEncoder(
      (layers): ModuleList(
        (0): TransformerEncoderLayer(
          (self_attn): MultiheadAttention(
            (out_proj): NonDynamicallyQuantizableLinear(in_features=256, out_features=256, bias=True)
          )
          (linear1): Linear(in_features=256, out_features=2048, bias=True)
          (dropout): Dropout(p=0.1, inplace=False)
          (linear2): Linear(in_features=2048, out_features=256, bias=True)
          (norm1): LayerNorm((256,), eps=1e-05, elementwise_affine=True)
          (norm2): LayerNorm((256,), eps=1e-05, elementwise_affine=True)
          (dropout1): Dropout(p=0.1, inplace=False)
          (dropout2): Dropout(p=0.1, inplace=False)
        )
        (1): TransformerEncoderLayer(
          (self_attn): MultiheadAttention(
            (out_proj): NonDynamicallyQuantizableLinear(in_features=256, out_features=256, bias=True)
          )
          (linear1): Linear(in_features=256, out_features=2048, bias=True)
          (dropout): Dropout(p=0.1, inplace=False)
          (linear2): Linear(in_features=2048, out_features=256, bias=True)
          (norm1): LayerNorm((256,), eps=1e-05, elementwise_affine=True)
          (norm2): LayerNorm((256,), eps=1e-05, elementwise_affine=True)
          (dropout1): Dropout(p=0.1, inplace=False)
          (dropout2): Dropout(p=0.1, inplace=False)
        )
        (2): TransformerEncoderLayer(
          (self_attn): MultiheadAttention(
            (out_proj): NonDynamicallyQuantizableLinear(in_features=256, out_features=256, bias=True)
          )
          (linear1): Linear(in_features=256, out_features=2048, bias=True)
          (dropout): Dropout(p=0.1, inplace=False)
          (linear2): Linear(in_features=2048, out_features=256, bias=True)
          (norm1): LayerNorm((256,), eps=1e-05, elementwise_affine=True)
          (norm2): LayerNorm((256,), eps=1e-05, elementwise_affine=True)
          (dropout1): Dropout(p=0.1, inplace=False)
          (dropout2): Dropout(p=0.1, inplace=False)
        )
        (3): TransformerEncoderLayer(
          (self_attn): MultiheadAttention(
            (out_proj): NonDynamicallyQuantizableLinear(in_features=256, out_features=256, bias=True)
          )
          (linear1): Linear(in_features=256, out_features=2048, bias=True)
          (dropout): Dropout(p=0.1, inplace=False)
          (linear2): Linear(in_features=2048, out_features=256, bias=True)
          (norm1): LayerNorm((256,), eps=1e-05, elementwise_affine=True)
          (norm2): LayerNorm((256,), eps=1e-05, elementwise_affine=True)
          (dropout1): Dropout(p=0.1, inplace=False)
          (dropout2): Dropout(p=0.1, inplace=False)
        )
        (4): TransformerEncoderLayer(
          (self_attn): MultiheadAttention(
            (out_proj): NonDynamicallyQuantizableLinear(in_features=256, out_features=256, bias=True)
          )
          (linear1): Linear(in_features=256, out_features=2048, bias=True)
          (dropout): Dropout(p=0.1, inplace=False)
          (linear2): Linear(in_features=2048, out_features=256, bias=True)
          (norm1): LayerNorm((256,), eps=1e-05, elementwise_affine=True)
          (norm2): LayerNorm((256,), eps=1e-05, elementwise_affine=True)
          (dropout1): Dropout(p=0.1, inplace=False)
          (dropout2): Dropout(p=0.1, inplace=False)
        )
        (5): TransformerEncoderLayer(
          (self_attn): MultiheadAttention(
            (out_proj): NonDynamicallyQuantizableLinear(in_features=256, out_features=256, bias=True)
          )
          (linear1): Linear(in_features=256, out_features=2048, bias=True)
          (dropout): Dropout(p=0.1, inplace=False)
          (linear2): Linear(in_features=2048, out_features=256, bias=True)
          (norm1): LayerNorm((256,), eps=1e-05, elementwise_affine=True)
          (norm2): LayerNorm((256,), eps=1e-05, elementwise_affine=True)
          (dropout1): Dropout(p=0.1, inplace=False)
          (dropout2): Dropout(p=0.1, inplace=False)
        )
      )
    )
    (decoder): TransformerDecoder(
      (layers): ModuleList(
        (0): TransformerDecoderLayer(
          (self_attn): MultiheadAttention(
            (out_proj): NonDynamicallyQuantizableLinear(in_features=256, out_features=256, bias=True)
          )
          (multihead_attn): MultiheadAttention(
            (out_proj): NonDynamicallyQuantizableLinear(in_features=256, out_features=256, bias=True)
          )
          (linear1): Linear(in_features=256, out_features=2048, bias=True)
          (dropout): Dropout(p=0.1, inplace=False)
          (linear2): Linear(in_features=2048, out_features=256, bias=True)
          (norm1): LayerNorm((256,), eps=1e-05, elementwise_affine=True)
          (norm2): LayerNorm((256,), eps=1e-05, elementwise_affine=True)
          (norm3): LayerNorm((256,), eps=1e-05, elementwise_affine=True)
          (dropout1): Dropout(p=0.1, inplace=False)
          (dropout2): Dropout(p=0.1, inplace=False)
          (dropout3): Dropout(p=0.1, inplace=False)
        )
        (1): TransformerDecoderLayer(
          (self_attn): MultiheadAttention(
            (out_proj): NonDynamicallyQuantizableLinear(in_features=256, out_features=256, bias=True)
          )
          (multihead_attn): MultiheadAttention(
            (out_proj): NonDynamicallyQuantizableLinear(in_features=256, out_features=256, bias=True)
          )
          (linear1): Linear(in_features=256, out_features=2048, bias=True)
          (dropout): Dropout(p=0.1, inplace=False)
          (linear2): Linear(in_features=2048, out_features=256, bias=True)
          (norm1): LayerNorm((256,), eps=1e-05, elementwise_affine=True)
          (norm2): LayerNorm((256,), eps=1e-05, elementwise_affine=True)
          (norm3): LayerNorm((256,), eps=1e-05, elementwise_affine=True)
          (dropout1): Dropout(p=0.1, inplace=False)
          (dropout2): Dropout(p=0.1, inplace=False)
          (dropout3): Dropout(p=0.1, inplace=False)
        )
        (2): TransformerDecoderLayer(
          (self_attn): MultiheadAttention(
            (out_proj): NonDynamicallyQuantizableLinear(in_features=256, out_features=256, bias=True)
          )
          (multihead_attn): MultiheadAttention(
            (out_proj): NonDynamicallyQuantizableLinear(in_features=256, out_features=256, bias=True)
          )
          (linear1): Linear(in_features=256, out_features=2048, bias=True)
          (dropout): Dropout(p=0.1, inplace=False)
          (linear2): Linear(in_features=2048, out_features=256, bias=True)
          (norm1): LayerNorm((256,), eps=1e-05, elementwise_affine=True)
          (norm2): LayerNorm((256,), eps=1e-05, elementwise_affine=True)
          (norm3): LayerNorm((256,), eps=1e-05, elementwise_affine=True)
          (dropout1): Dropout(p=0.1, inplace=False)
          (dropout2): Dropout(p=0.1, inplace=False)
          (dropout3): Dropout(p=0.1, inplace=False)
        )
        (3): TransformerDecoderLayer(
          (self_attn): MultiheadAttention(
            (out_proj): NonDynamicallyQuantizableLinear(in_features=256, out_features=256, bias=True)
          )
          (multihead_attn): MultiheadAttention(
            (out_proj): NonDynamicallyQuantizableLinear(in_features=256, out_features=256, bias=True)
          )
          (linear1): Linear(in_features=256, out_features=2048, bias=True)
          (dropout): Dropout(p=0.1, inplace=False)
          (linear2): Linear(in_features=2048, out_features=256, bias=True)
          (norm1): LayerNorm((256,), eps=1e-05, elementwise_affine=True)
          (norm2): LayerNorm((256,), eps=1e-05, elementwise_affine=True)
          (norm3): LayerNorm((256,), eps=1e-05, elementwise_affine=True)
          (dropout1): Dropout(p=0.1, inplace=False)
          (dropout2): Dropout(p=0.1, inplace=False)
          (dropout3): Dropout(p=0.1, inplace=False)
        )
        (4): TransformerDecoderLayer(
          (self_attn): MultiheadAttention(
            (out_proj): NonDynamicallyQuantizableLinear(in_features=256, out_features=256, bias=True)
          )
          (multihead_attn): MultiheadAttention(
            (out_proj): NonDynamicallyQuantizableLinear(in_features=256, out_features=256, bias=True)
          )
          (linear1): Linear(in_features=256, out_features=2048, bias=True)
          (dropout): Dropout(p=0.1, inplace=False)
          (linear2): Linear(in_features=2048, out_features=256, bias=True)
          (norm1): LayerNorm((256,), eps=1e-05, elementwise_affine=True)
          (norm2): LayerNorm((256,), eps=1e-05, elementwise_affine=True)
          (norm3): LayerNorm((256,), eps=1e-05, elementwise_affine=True)
          (dropout1): Dropout(p=0.1, inplace=False)
          (dropout2): Dropout(p=0.1, inplace=False)
          (dropout3): Dropout(p=0.1, inplace=False)
        )
        (5): TransformerDecoderLayer(
          (self_attn): MultiheadAttention(
            (out_proj): NonDynamicallyQuantizableLinear(in_features=256, out_features=256, bias=True)
          )
          (multihead_attn): MultiheadAttention(
            (out_proj): NonDynamicallyQuantizableLinear(in_features=256, out_features=256, bias=True)
          )
          (linear1): Linear(in_features=256, out_features=2048, bias=True)
          (dropout): Dropout(p=0.1, inplace=False)
          (linear2): Linear(in_features=2048, out_features=256, bias=True)
          (norm1): LayerNorm((256,), eps=1e-05, elementwise_affine=True)
          (norm2): LayerNorm((256,), eps=1e-05, elementwise_affine=True)
          (norm3): LayerNorm((256,), eps=1e-05, elementwise_affine=True)
          (dropout1): Dropout(p=0.1, inplace=False)
          (dropout2): Dropout(p=0.1, inplace=False)
          (dropout3): Dropout(p=0.1, inplace=False)
        )
      )
      (norm): LayerNorm((256,), eps=1e-05, elementwise_affine=True)
    )
  )
# prediction heads, object queries and input projection
  (class_embed): Linear(in_features=256, out_features=92, bias=True)
  (bbox_embed): MLP(
    (layers): ModuleList(
      (0): Linear(in_features=256, out_features=256, bias=True)
      (1): Linear(in_features=256, out_features=256, bias=True)
# the last layer outputs the box center (x, y) and the width/height (w, h)
      (2): Linear(in_features=256, out_features=4, bias=True)
    )
  )
# learnable object queries used by the decoder
  (query_embed): Embedding(100, 256)
# projects the 2048 backbone channels down to the 256 dims used by the transformer
  (input_proj): Conv2d(2048, 256, kernel_size=(1, 1), stride=(1, 1))
# CNN backbone (ResNet-50 with frozen BatchNorm)
  (backbone): Joiner(
    (0): Backbone(
      (body): IntermediateLayerGetter(
        (conv1): Conv2d(3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False)
        (bn1): FrozenBatchNorm2d()
        (relu): ReLU(inplace=True)
        (maxpool): MaxPool2d(kernel_size=3, stride=2, padding=1, dilation=1, ceil_mode=False)
        (layer1): Sequential(
          (0): Bottleneck(
            (conv1): Conv2d(64, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn1): FrozenBatchNorm2d()
            (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
            (bn2): FrozenBatchNorm2d()
            (conv3): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn3): FrozenBatchNorm2d()
            (relu): ReLU(inplace=True)
            (downsample): Sequential(
              (0): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
              (1): FrozenBatchNorm2d()
            )
          )
          (1): Bottleneck(
            (conv1): Conv2d(256, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn1): FrozenBatchNorm2d()
            (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
            (bn2): FrozenBatchNorm2d()
            (conv3): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn3): FrozenBatchNorm2d()
            (relu): ReLU(inplace=True)
          )
          (2): Bottleneck(
            (conv1): Conv2d(256, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn1): FrozenBatchNorm2d()
            (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
            (bn2): FrozenBatchNorm2d()
            (conv3): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn3): FrozenBatchNorm2d()
            (relu): ReLU(inplace=True)
          )
        )
        (layer2): Sequential(
          (0): Bottleneck(
            (conv1): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn1): FrozenBatchNorm2d()
            (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
            (bn2): FrozenBatchNorm2d()
            (conv3): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn3): FrozenBatchNorm2d()
            (relu): ReLU(inplace=True)
            (downsample): Sequential(
              (0): Conv2d(256, 512, kernel_size=(1, 1), stride=(2, 2), bias=False)
              (1): FrozenBatchNorm2d()
            )
          )
          (1): Bottleneck(
            (conv1): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn1): FrozenBatchNorm2d()
            (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
            (bn2): FrozenBatchNorm2d()
            (conv3): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn3): FrozenBatchNorm2d()
            (relu): ReLU(inplace=True)
          )
          (2): Bottleneck(
            (conv1): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn1): FrozenBatchNorm2d()
            (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
            (bn2): FrozenBatchNorm2d()
            (conv3): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn3): FrozenBatchNorm2d()
            (relu): ReLU(inplace=True)
          )
          (3): Bottleneck(
            (conv1): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn1): FrozenBatchNorm2d()
            (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
            (bn2): FrozenBatchNorm2d()
            (conv3): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn3): FrozenBatchNorm2d()
            (relu): ReLU(inplace=True)
          )
        )
        (layer3): Sequential(
          (0): Bottleneck(
            (conv1): Conv2d(512, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn1): FrozenBatchNorm2d()
            (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
            (bn2): FrozenBatchNorm2d()
            (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn3): FrozenBatchNorm2d()
            (relu): ReLU(inplace=True)
            (downsample): Sequential(
              (0): Conv2d(512, 1024, kernel_size=(1, 1), stride=(2, 2), bias=False)
              (1): FrozenBatchNorm2d()
            )
          )
          (1): Bottleneck(
            (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn1): FrozenBatchNorm2d()
            (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
            (bn2): FrozenBatchNorm2d()
            (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn3): FrozenBatchNorm2d()
            (relu): ReLU(inplace=True)
          )
          (2): Bottleneck(
            (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn1): FrozenBatchNorm2d()
            (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
            (bn2): FrozenBatchNorm2d()
            (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn3): FrozenBatchNorm2d()
            (relu): ReLU(inplace=True)
          )
          (3): Bottleneck(
            (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn1): FrozenBatchNorm2d()
            (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
            (bn2): FrozenBatchNorm2d()
            (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn3): FrozenBatchNorm2d()
            (relu): ReLU(inplace=True)
          )
          (4): Bottleneck(
            (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn1): FrozenBatchNorm2d()
            (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
            (bn2): FrozenBatchNorm2d()
            (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn3): FrozenBatchNorm2d()
            (relu): ReLU(inplace=True)
          )
          (5): Bottleneck(
            (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn1): FrozenBatchNorm2d()
            (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
            (bn2): FrozenBatchNorm2d()
            (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn3): FrozenBatchNorm2d()
            (relu): ReLU(inplace=True)
          )
        )
        (layer4): Sequential(
          (0): Bottleneck(
            (conv1): Conv2d(1024, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn1): FrozenBatchNorm2d()
            (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
            (bn2): FrozenBatchNorm2d()
            (conv3): Conv2d(512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn3): FrozenBatchNorm2d()
            (relu): ReLU(inplace=True)
            (downsample): Sequential(
              (0): Conv2d(1024, 2048, kernel_size=(1, 1), stride=(2, 2), bias=False)
              (1): FrozenBatchNorm2d()
            )
          )
          (1): Bottleneck(
            (conv1): Conv2d(2048, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn1): FrozenBatchNorm2d()
            (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
            (bn2): FrozenBatchNorm2d()
            (conv3): Conv2d(512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn3): FrozenBatchNorm2d()
            (relu): ReLU(inplace=True)
          )
          (2): Bottleneck(
            (conv1): Conv2d(2048, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn1): FrozenBatchNorm2d()
            (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
            (bn2): FrozenBatchNorm2d()
            (conv3): Conv2d(512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn3): FrozenBatchNorm2d()
            (relu): ReLU(inplace=True)
          )
        )
      )
    )
    (1): PositionEmbeddingSine()
  )
)

Forward pass:

① The input image [3×800×1066] goes through the convolutional network to obtain features.

# model definition in the simplified version: a ResNet-50 without its last two layers (avg-pool and fc)
self.backbone = nn.Sequential(*list(resnet50(pretrained=True).children())[:-2])
# forward pass
x = self.backbone(inputs)
# the backbone used in the official code: a CNN feature extractor
class Backbone(BackboneBase):
    """ResNet backbone with frozen BatchNorm."""
    def __init__(self, name: str,
                 train_backbone: bool,
                 return_interm_layers: bool,
                 dilation: bool):
        # getattr fetches the constructor with this name from torchvision.models
        backbone = getattr(torchvision.models, name)(
            replace_stride_with_dilation=[False, False, dilation],
            pretrained=is_main_process(), norm_layer=FrozenBatchNorm2d)
        # number of channels of the last feature map, set according to the ResNet variant
        num_channels = 512 if name in ('resnet18', 'resnet34') else 2048
        super().__init__(backbone, train_backbone, num_channels, return_interm_layers)

② At the last convolutional stage (conv5) the feature map is [2048×25×34]: 25 and 34 are 1/32 of 800 and 1066 respectively, and 2048 is the number of channels.

③ Before entering the Transformer the channels must be reduced: a 1×1 convolution with 256 output channels turns the map into [256×25×34].

# projection layer in the simplified version: reduces 2048 channels to 256
self.conv = nn.Conv2d(2048, hidden_dim, 1)
# forward pass
h = self.conv(x)
# in the official code the projected features go into the transformer (see input_proj)
hs = self.transformer(self.input_proj(src), mask, self.query_embed.weight, pos[-1])[0]

# input_proj converts the channel count
self.input_proj = nn.Conv2d(backbone.num_channels, hidden_dim, kernel_size=1)

④ The Transformer itself has no positional information, so a positional encoding is added; the fixed positional encoding has size [256×25×34] to keep the dimensions consistent.

# model definition in the simplified version: learnable row/column embeddings
self.row_embed = nn.Parameter(torch.rand(50, hidden_dim // 2))
self.col_embed = nn.Parameter(torch.rand(50, hidden_dim // 2))
# forward pass: concatenate the column and row embeddings into one positional encoding
pos = torch.cat([self.col_embed[:W].unsqueeze(0).repeat(H, 1, 1),
                 self.row_embed[:H].unsqueeze(1).repeat(1, W, 1)], dim=-1).flatten(0, 1).unsqueeze(1)

The paper encodes the image position in two dimensions:

Of the 256-dim vector, 128 dims encode the x position and the other 128 dims encode the y position.

# 2D encoding: half of the hidden dimension is used per axis
N_steps = args.hidden_dim // 2  # N_steps: 128

PE_{pos_{x},2i}=sin(pos_{x}/10000^{2i/128})

PE_{pos_{x},2i+1}=cos(pos_{x}/10000^{2i/128})

PE_{pos_{y},2i}=sin(pos_{y}/10000^{2i/128})

PE_{pos_{y},2i+1}=cos(pos_{y}/10000^{2i/128})

# build a 128-dim vector of frequencies
dim_t = torch.arange(self.num_pos_feats, dtype=torch.float32, device=x.device)
dim_t = self.temperature ** (2 * (dim_t // 2) / self.num_pos_feats)
# divide the cumulative positions by those frequencies
pos_x = x_embed[:, :, :, None] / dim_t
pos_y = y_embed[:, :, :, None] / dim_t

stack is used (unlike cat) so that the sin and cos values end up interleaved, as sketched below.
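A sketch of that interleaving step, continuing from the snippet above (shapes are illustrative):

# pos_x has shape [B, H, W, 128]; even channels get sin, odd channels get cos.
# torch.stack adds a new trailing dimension, so each sin/cos pair sits together,
# and flatten(3) interleaves them as (sin0, cos0, sin1, cos1, ...);
# torch.cat along the last dim would instead give all sines first, then all cosines.
pos_x = torch.stack((pos_x[:, :, :, 0::2].sin(),
                     pos_x[:, :, :, 1::2].cos()), dim=4).flatten(3)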

⑤ After adding the positional encoding, the input must be flattened: h and w are merged into a single dimension, giving [850×256], where 850 is the sequence length and 256 is the hidden dimension.

Addition:

# add the positional encoding pos onto the tensor (when a positional encoding is given)
def with_pos_embed(self, tensor, pos: Optional[Tensor]):
    return tensor if pos is None else tensor + pos

# flatten: [256, 25, 34] -> a sequence of 850 tokens, then add the positional encoding
h = self.transformer(pos + h.flatten(2).permute(2, 0, 1), self.query_pos.unsqueeze(1))

⑥ The sequence enters the Transformer encoder: the input is [850×256] and the output is also [850×256]; DETR stacks 6 encoder layers.

# default constructor arguments of the Transformer
def __init__(self, d_model=512, nhead=8, num_encoder_layers=6,
             num_decoder_layers=6, dim_feedforward=2048, dropout=0.1,
             activation="relu", normalize_before=False,
             return_intermediate_dec=False):

# the simplified version uses nn.Transformer for the whole encoder-decoder;
# the decoder side eventually yields 100×256
self.transformer = nn.Transformer(hidden_dim, nheads, num_encoder_layers, num_decoder_layers)

⑦ The Transformer decoder produces the boxes: input 1 and input 2 (below) repeatedly attend to each other, yielding [100×256] features; DETR stacks 6 decoder layers.

# in the forward pass the two inputs are fed to the transformer together
h = self.transformer(pos + h.flatten(2).permute(2, 0, 1), self.query_pos.unsqueeze(1))

Input 1: the object queries (a key novelty), a learnable positional embedding of size [100×256]; the 256 matches the encoder's feature dimension so the two can interact in attention, and the 100 means the model always produces 100 outputs.

# 100 object queries, so the model ultimately outputs 100 boxes
self.query_pos = nn.Parameter(torch.rand(100, hidden_dim))

Detail: in the first decoder layer the self-attention over the object queries could be skipped; it is this query self-attention that removes redundant boxes (the queries communicate so they do not all predict the same object).

Input 2: the global image features from the encoder side, of size [850×256].

⑧ The decoder features are fed to feed-forward heads, which predict the object class and the box:

For COCO the number of classes is 91 (plus one for "no object"):

self.linear_class = nn.Linear(hidden_dim, num_classes + 1)

The box is 4 values (x, y, w, h):

self.linear_bbox = nn.Linear(hidden_dim, 4)

⑨ The resulting 100 boxes are matched against the GT boxes with Hungarian matching.

logits, bboxes = detr(inputs)
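A hedged sketch of how the matching cost can be assembled from these outputs before calling linear_sum_assignment (the real DETR matcher also adds a GIoU term and uses its own box-conversion helpers; the weights below are illustrative):

import torch
from scipy.optimize import linear_sum_assignment

# for one image: logits [num_queries, num_classes + 1], bboxes [num_queries, 4] (normalized cx, cy, w, h)
# tgt_labels: LongTensor [num_gt] of class indices, tgt_boxes: [num_gt, 4]
prob = logits.softmax(-1)                        # class probabilities
cost_class = -prob[:, tgt_labels]                # [num_queries, num_gt]: higher probability -> lower cost
cost_bbox = torch.cdist(bboxes, tgt_boxes, p=1)  # [num_queries, num_gt]: pairwise L1 distance

cost = 1 * cost_class + 5 * cost_bbox            # weighted sum of the cost terms
row_ind, col_ind = linear_sum_assignment(cost.detach().cpu().numpy())
# query row_ind[i] is matched to GT col_ind[i]; unmatched queries are treated as "no object"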

Experiments:

Comparison with Faster R-CNN:

Changes to the training recipe account for a large part of the improvement; DETR detects large objects well, but its handling of small objects is mediocre.

Transformer Encoder:

class TransformerEncoder(nn.Module):

    def __init__(self, encoder_layer, num_layers, norm=None):
        super().__init__()
        # clone the encoder layer num_layers times (see _get_clones)
        self.layers = _get_clones(encoder_layer, num_layers)
        self.num_layers = num_layers
        self.norm = norm

    def forward(self, src,
                mask: Optional[Tensor] = None,
                src_key_padding_mask: Optional[Tensor] = None,
                pos: Optional[Tensor] = None):
        # the input features
        output = src

        for layer in self.layers:
            output = layer(output, src_mask=mask, src_key_padding_mask=src_key_padding_mask, pos=pos)

        if self.norm is not None:
            output = self.norm(output)

        return output

The encoder mainly learns global features, trying to separate the objects from one another as much as possible.

Visualizing the self-attention shows that the Transformer encoder pulls the objects in the image clearly apart; on top of that, detection and segmentation become comparatively easy.

As the number of encoder layers increases, more global features are learned and performance keeps improving, but at the cost of more parameters and slower speed.

Transformer Decoder:

class TransformerDecoder(nn.Module):

    def __init__(self, decoder_layer, num_layers, norm=None, return_intermediate=False):
        super().__init__()
        self.layers = _get_clones(decoder_layer, num_layers)
        self.num_layers = num_layers
        self.norm = norm
        self.return_intermediate = return_intermediate

    def forward(self, tgt, memory,
                tgt_mask: Optional[Tensor] = None,
                memory_mask: Optional[Tensor] = None,
                tgt_key_padding_mask: Optional[Tensor] = None,
                memory_key_padding_mask: Optional[Tensor] = None,
                pos: Optional[Tensor] = None,
                query_pos: Optional[Tensor] = None):
        output = tgt

        intermediate = []

        for layer in self.layers:
            output = layer(output, memory, tgt_mask=tgt_mask,
                           memory_mask=memory_mask,
                           tgt_key_padding_mask=tgt_key_padding_mask,
                           memory_key_padding_mask=memory_key_padding_mask,
                           pos=pos, query_pos=query_pos)
            if self.return_intermediate:
                intermediate.append(self.norm(output))

        if self.norm is not None:
            output = self.norm(output)
            if self.return_intermediate:
                intermediate.pop()
                intermediate.append(output)

        if self.return_intermediate:
            return torch.stack(intermediate)

        return output.unsqueeze(0)

The decoder devotes its attention to learning object boundaries and extremities, which helps to tell objects apart and to handle occlusion.

Object Queries:

(In the paper's visualization) each square represents one object query; green points stand for small bounding boxes, red points for large horizontal bounding boxes, and blue points for large vertical bounding boxes.

Object queries are similar to anchors:

Anchors: a set of bounding boxes is defined in advance, and the predictions are compared against these predefined boxes.

Object queries: learnable; each one is like a "person" asking the image questions in its own way and getting an answer, where the answer is the corresponding bounding box; if no answer is found, nothing is output.

After training, one query will always ask, for every image it sees, whether there are small objects in the bottom-left and a large horizontal object in the middle,

while another will ask whether there are small objects on the right and a large horizontal or vertical object in the middle.

Summary:

Strengths:

Proposes DETR, a completely new framework for object detection.

Uses a Transformer and bipartite matching so that the framework is an end-to-end learnable network.

Achieves strong results on the COCO dataset and on panoptic segmentation.

The global information brought by self-attention makes it work better on large objects.

Weaknesses:

Long training time.

Hard to optimize.

Poor performance on small objects.

Demo output:

