(Plug-and-Play Modules - Attention Series) 20. (2021) GAA: Gated Axial-Attention
Table of Contents
- 1. Gated Axial-Attention
- 2. Code Implementation
Paper: Medical Transformer: Gated Axial-Attention for Medical Image Segmentation
Code: https://github.com/jeya-maria-jose/Medical-Transformer
1. Gated Axial-Attention
The paper first analyzes the weaknesses of ViTs when trained on small-scale datasets and points out their relatively high computational complexity. To address this, it proposes Gated Axial-Attention, which extends the existing architecture by adding a gating mechanism to the self-attention module. After observing that positional biases are hard to learn on small datasets and that relative positional encodings can be inaccurate, the paper improves the design by applying a controllable positional bias to the encoded non-local context. The core idea of Gated Axial-Attention is this gating mechanism: learnable gates control how strongly the positional encodings influence the self-attention.
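To make the gating idea concrete, here is a minimal sketch of a single 1D (axial) attention step in which learnable scalar gates scale the relative-position terms before the softmax. The function name `gated_axial_scores` and the simplified single-head layout are assumptions for illustration only; the full multi-head, batch-normalized module is given in Section 2 below.

```python
import torch
import torch.nn.functional as F

def gated_axial_scores(q, k, r_q, r_k, gate_q, gate_k):
    """Simplified 1D axial attention scores with a gated positional bias.

    q, k     : (L, C) queries / keys along one axis (height or width)
    r_q, r_k : (C, L, L) relative position embeddings for the q / k terms
    gate_q/k : learnable scalars controlling how much positional bias is used
    """
    content = q @ k.t()                          # content term q_i . k_j, shape (L, L)
    pos_q = torch.einsum('ic,cij->ij', q, r_q)   # query-position term
    pos_k = torch.einsum('jc,cij->ij', k, r_k)   # key-position term
    # If the learned positional encoding is unreliable (e.g. on small datasets),
    # the gates can shrink its contribution toward zero.
    return F.softmax(content + gate_q * pos_q + gate_k * pos_k, dim=-1)
```

In the full gated module, a similar pair of gates also scales the value-side terms when the output is aggregated.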
For an input feature X, Gated Axial-Attention proceeds as follows:
- Input feature map: extract a feature map from the input image and apply a linear transformation along the channel dimension to obtain the Query, Key, and Value vectors.
- Axial-Attention: run 1D self-attention along the height axis to model dependencies between pixels, then run 1D self-attention along the width axis (the two passes are composed as shown in the sketch after this list).
- Positional Encoding: compute relative positional encodings and fold pixel-position information into the Query, Key, and Value terms.
- Gating mechanism: learnable gate parameters control how much the relative positional encodings influence the self-attention (as sketched above).
- Output feature map: apply a linear transformation to the gated self-attention output to obtain the final output feature map.
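Because each axial pass only mixes information along a single axis, the height pass and the width pass are applied back to back so that every pixel can eventually attend over the whole 2D plane. The wrapper below is a hypothetical sketch of that composition; it assumes the `AxialAttention` class defined in Section 2 below and a square input whose side length equals `size`.

```python
import torch.nn as nn

class AxialBlock2D(nn.Module):
    """Hypothetical wrapper: height-axis attention followed by width-axis attention."""

    def __init__(self, planes, size):
        super().__init__()
        # planes must be divisible by the number of groups (8 by default);
        # kernel_size must equal the spatial size along the attended axis.
        self.height_attn = AxialAttention(planes, planes, kernel_size=size, width=False)
        self.width_attn = AxialAttention(planes, planes, kernel_size=size, width=True)

    def forward(self, x):           # x: (N, C, H, W) with H == W == size
        x = self.height_attn(x)     # 1D self-attention along the height axis
        x = self.width_attn(x)      # 1D self-attention along the width axis
        return x
```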
Gated Axial-Attention structure diagram: (figure not reproduced here; see the original paper)
2. Code Implementation
import torch
import torch.nn as nn
import torch.nn.functional as F
import math


def conv1x1(in_planes, out_planes, stride=1):
    """1x1 convolution"""
    return nn.Conv2d(in_planes, out_planes, kernel_size=1, stride=stride, bias=False)


class qkv_transform(nn.Conv1d):
    """Conv1d for qkv_transform"""


class AxialAttention(nn.Module):
    def __init__(self, in_planes, out_planes, groups=8, kernel_size=56,
                 stride=1, bias=False, width=False):
        assert (in_planes % groups == 0) and (out_planes % groups == 0)
        super(AxialAttention, self).__init__()
        self.in_planes = in_planes
        self.out_planes = out_planes
        self.groups = groups
        self.group_planes = out_planes // groups
        self.kernel_size = kernel_size
        self.stride = stride
        self.bias = bias
        self.width = width

        # Multi-head self-attention: a single 1x1 Conv1d produces q, k and v together
        self.qkv_transform = qkv_transform(in_planes, out_planes * 2, kernel_size=1, stride=1,
                                           padding=0, bias=False)
        self.bn_qkv = nn.BatchNorm1d(out_planes * 2)
        self.bn_similarity = nn.BatchNorm2d(groups * 3)
        self.bn_output = nn.BatchNorm1d(out_planes * 2)

        # Relative position embedding shared by the q, k and v terms
        self.relative = nn.Parameter(torch.randn(self.group_planes * 2, kernel_size * 2 - 1), requires_grad=True)
        query_index = torch.arange(kernel_size).unsqueeze(0)
        key_index = torch.arange(kernel_size).unsqueeze(1)
        relative_index = key_index - query_index + kernel_size - 1
        self.register_buffer('flatten_index', relative_index.view(-1))
        if stride > 1:
            self.pooling = nn.AvgPool2d(stride, stride=stride)

        self.reset_parameters()

    def forward(self, x):
        if self.width:
            x = x.permute(0, 2, 1, 3)  # N, H, C, W -> attend along the width axis
        else:
            x = x.permute(0, 3, 1, 2)  # N, W, C, H -> attend along the height axis
        N, W, C, H = x.shape
        x = x.contiguous().view(N * W, C, H)

        # Transformations
        qkv = self.bn_qkv(self.qkv_transform(x))
        q, k, v = torch.split(qkv.reshape(N * W, self.groups, self.group_planes * 2, H),
                              [self.group_planes // 2, self.group_planes // 2, self.group_planes], dim=2)

        # Calculate position embedding
        all_embeddings = torch.index_select(self.relative, 1, self.flatten_index).view(self.group_planes * 2,
                                                                                       self.kernel_size,
                                                                                       self.kernel_size)
        q_embedding, k_embedding, v_embedding = torch.split(all_embeddings,
                                                            [self.group_planes // 2, self.group_planes // 2,
                                                             self.group_planes], dim=0)
        qr = torch.einsum('bgci,cij->bgij', q, q_embedding)
        kr = torch.einsum('bgci,cij->bgij', k, k_embedding).transpose(2, 3)
        qk = torch.einsum('bgci,bgcj->bgij', q, k)
        stacked_similarity = torch.cat([qk, qr, kr], dim=1)
        stacked_similarity = self.bn_similarity(stacked_similarity).view(N * W, 3, self.groups, H, H).sum(dim=1)

        # (N * W, groups, H, H)
        similarity = F.softmax(stacked_similarity, dim=3)
        sv = torch.einsum('bgij,bgcj->bgci', similarity, v)
        sve = torch.einsum('bgij,cij->bgci', similarity, v_embedding)
        stacked_output = torch.cat([sv, sve], dim=-1).view(N * W, self.out_planes * 2, H)
        output = self.bn_output(stacked_output).view(N, W, self.out_planes, 2, H).sum(dim=-2)

        if self.width:
            output = output.permute(0, 2, 1, 3)
        else:
            output = output.permute(0, 2, 3, 1)

        if self.stride > 1:
            output = self.pooling(output)

        return output

    def reset_parameters(self):
        self.qkv_transform.weight.data.normal_(0, math.sqrt(1. / self.in_planes))
        nn.init.normal_(self.relative, 0., math.sqrt(1. / self.group_planes))


if __name__ == '__main__':
    x = torch.randn(4, 512, 7, 7).cuda()
    # kernel_size must equal the spatial size (h, w) of the input feature map
    model = AxialAttention(512, 512, kernel_size=7).cuda()
    out = model(x)
    print(out.shape)
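Note that the `AxialAttention` class above is the un-gated base version; the gated variant adds learnable gate parameters that scale each positional term. The subclass below only sketches where those gates would sit: the parameter names are modeled loosely on the official repository's gated ("dynamic") module, the initial values are placeholders, and the forward pass is indicated with comments rather than copied in full.

```python
class GatedAxialAttention(AxialAttention):
    """Sketch only: adds learnable gates to the positional terms of AxialAttention."""

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        # One gate per term; initial values are placeholders, check the official repo.
        self.f_qr = nn.Parameter(torch.tensor(0.1))
        self.f_kr = nn.Parameter(torch.tensor(0.1))
        self.f_sv = nn.Parameter(torch.tensor(0.1))
        self.f_sve = nn.Parameter(torch.tensor(0.1))

    # Inside forward, after qr / kr / sv / sve are computed exactly as above:
    #   qr  = self.f_qr  * qr    # gated query-position term
    #   kr  = self.f_kr  * kr    # gated key-position term
    #   sv  = self.f_sv  * sv    # gated value aggregation
    #   sve = self.f_sve * sve   # gated value-position term
```

When the positional encodings are learned accurately, training can push the gates toward larger values; on small datasets where they are hard to learn, the gates can instead suppress their contribution.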
This post only pulls together the plug-and-play module from the paper, so some details are inevitably left out. For a more thorough understanding of the module, it is still worth reading the original paper, where you will certainly gain more.