Fine-Tuning a Vision Transformer (ViT) to Train a Driver Behavior State Recognition Model
1. Introduction to the Vision Transformer (ViT)
Large models have become a hot topic in artificial intelligence, and in this wave the Transformer, the core architecture behind them, has once again stood out and proven its power and broad applicability. Since Google proposed it in 2017, the Transformer has revolutionized NLP. Compared with traditional recurrent neural networks (RNN) and long short-term memory networks (LSTM), the Transformer's self-attention mechanism, end-to-end training, and clear advantage in handling long-range dependencies have produced excellent results on many NLP tasks, with well-known models such as BERT and GPT.
Following its success in NLP, the Transformer has gradually moved into computer vision (CV) as well. In CV, convolutional neural networks (CNN) have long been dominant. However, the local nature of the convolution operation limits a CNN's ability to capture global information, which can hurt performance on complex scenes. The Transformer, by contrast, captures long-range dependencies better, which helps it recognize global features in an image; in addition, self-attention lets the model attend to important information in different regions, improving the accuracy of feature extraction.
To apply a Transformer to images, however, we first need to work out how to turn an image into sequence data. A CNN takes a four-dimensional tensor, in PyTorch typically laid out as [batch size, channels, height, width]; since an RGB image is itself a three-dimensional array (height × width × 3 channels), image data fits this layout naturally. A Transformer instead expects a three-dimensional tensor of shape [batch size, sequence length, embedding dimension], and this mismatch means an image cannot be fed into a Transformer directly.
The Vision Transformer (ViT) solves this dimensionality mismatch elegantly by using a convolution, making it an innovative way to bring the Transformer architecture into CV. The ViT architecture diagram is shown below:
First, ViT splits the input image into a series of fixed-size patches (implemented with a convolution); each patch becomes an element of the sequence, much like a word in NLP, analogous to the embedding layer of a text model. This splitting preserves the image's local features and provides the basis for the subsequent processing. Next, to make sure the model understands the spatial position of each patch, ViT adds a positional encoding to every patch; these encodings are learnable parameters that indicate where each patch sits in the original image.
Then each patch is flattened into a one-dimensional vector and projected through a linear layer into a high-dimensional embedding, a process similar to mapping words to word embeddings in NLP. After embedding, the vectors are fed into a standard Transformer encoder, which consists of multiple self-attention layers and feed-forward networks that capture the complex interactions and dependencies between patches.
Finally, ViT adds a classification head, typically a fully connected layer, on top of the Transformer encoder's output to produce the final classification result (in practice a learnable class token is prepended to the patch sequence, and the head reads that token's output).
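To make the data flow concrete, below is a minimal, simplified sketch of that pipeline in PyTorch. It is my own reconstruction for illustration only; torchvision's actual VisionTransformer implementation differs in its details.

import torch
import torch.nn as nn

class TinyViT(nn.Module):
    def __init__(self, image_size=224, patch_size=16, dim=768,
                 depth=12, heads=12, num_classes=1000):
        super().__init__()
        num_patches = (image_size // patch_size) ** 2              # 14 * 14 = 196
        # Patch embedding: a convolution whose kernel and stride equal the patch size
        self.patch_embed = nn.Conv2d(3, dim, kernel_size=patch_size, stride=patch_size)
        self.cls_token = nn.Parameter(torch.zeros(1, 1, dim))      # learnable class token
        self.pos_embed = nn.Parameter(torch.zeros(1, num_patches + 1, dim))  # learnable positions
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                           dim_feedforward=dim * 4,
                                           activation="gelu",
                                           batch_first=True, norm_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        self.head = nn.Linear(dim, num_classes)                    # classification head

    def forward(self, x):                          # x: (B, 3, 224, 224)
        x = self.patch_embed(x)                    # (B, 768, 14, 14)
        x = x.flatten(2).transpose(1, 2)           # (B, 196, 768): one token per patch
        cls = self.cls_token.expand(x.size(0), -1, -1)
        x = torch.cat([cls, x], dim=1)             # (B, 197, 768)
        x = x + self.pos_embed                     # add positional encoding
        x = self.encoder(x)                        # standard Transformer encoder
        return self.head(x[:, 0])                  # classify on the class token

Calling TinyViT()(torch.randn(2, 3, 224, 224)) returns a (2, 1000) tensor of class logits, matching the output shape of the pretrained vit_b_16 used later.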
Below is the concrete structure of ViT-Base, as printed by torchvision:
VisionTransformer(
  (conv_proj): Conv2d(3, 768, kernel_size=(16, 16), stride=(16, 16))
  (encoder): Encoder(
    (dropout): Dropout(p=0.0, inplace=False)
    (layers): Sequential(
      (encoder_layer_0): EncoderBlock(
        (ln_1): LayerNorm((768,), eps=1e-06, elementwise_affine=True)
        (self_attention): MultiheadAttention(
          (out_proj): NonDynamicallyQuantizableLinear(in_features=768, out_features=768, bias=True)
        )
        (dropout): Dropout(p=0.0, inplace=False)
        (ln_2): LayerNorm((768,), eps=1e-06, elementwise_affine=True)
        (mlp): MLPBlock(
          (0): Linear(in_features=768, out_features=3072, bias=True)
          (1): GELU(approximate='none')
          (2): Dropout(p=0.0, inplace=False)
          (3): Linear(in_features=3072, out_features=768, bias=True)
          (4): Dropout(p=0.0, inplace=False)
        )
      )
      (encoder_layer_1): EncoderBlock(
        (ln_1): LayerNorm((768,), eps=1e-06, elementwise_affine=True)
        (self_attention): MultiheadAttention(
          (out_proj): NonDynamicallyQuantizableLinear(in_features=768, out_features=768, bias=True)
        )
        (dropout): Dropout(p=0.0, inplace=False)
        (ln_2): LayerNorm((768,), eps=1e-06, elementwise_affine=True)
        (mlp): MLPBlock(
          (0): Linear(in_features=768, out_features=3072, bias=True)
          (1): GELU(approximate='none')
          (2): Dropout(p=0.0, inplace=False)
          (3): Linear(in_features=3072, out_features=768, bias=True)
          (4): Dropout(p=0.0, inplace=False)
        )
      )
      (encoder_layer_2): EncoderBlock(
        (ln_1): LayerNorm((768,), eps=1e-06, elementwise_affine=True)
        (self_attention): MultiheadAttention(
          (out_proj): NonDynamicallyQuantizableLinear(in_features=768, out_features=768, bias=True)
        )
        (dropout): Dropout(p=0.0, inplace=False)
        (ln_2): LayerNorm((768,), eps=1e-06, elementwise_affine=True)
        (mlp): MLPBlock(
          (0): Linear(in_features=768, out_features=3072, bias=True)
          (1): GELU(approximate='none')
          (2): Dropout(p=0.0, inplace=False)
          (3): Linear(in_features=3072, out_features=768, bias=True)
          (4): Dropout(p=0.0, inplace=False)
        )
      )
      (encoder_layer_3): EncoderBlock(
        (ln_1): LayerNorm((768,), eps=1e-06, elementwise_affine=True)
        (self_attention): MultiheadAttention(
          (out_proj): NonDynamicallyQuantizableLinear(in_features=768, out_features=768, bias=True)
        )
        (dropout): Dropout(p=0.0, inplace=False)
        (ln_2): LayerNorm((768,), eps=1e-06, elementwise_affine=True)
        (mlp): MLPBlock(
          (0): Linear(in_features=768, out_features=3072, bias=True)
          (1): GELU(approximate='none')
          (2): Dropout(p=0.0, inplace=False)
          (3): Linear(in_features=3072, out_features=768, bias=True)
          (4): Dropout(p=0.0, inplace=False)
        )
      )
      (encoder_layer_4): EncoderBlock(
        (ln_1): LayerNorm((768,), eps=1e-06, elementwise_affine=True)
        (self_attention): MultiheadAttention(
          (out_proj): NonDynamicallyQuantizableLinear(in_features=768, out_features=768, bias=True)
        )
        (dropout): Dropout(p=0.0, inplace=False)
        (ln_2): LayerNorm((768,), eps=1e-06, elementwise_affine=True)
        (mlp): MLPBlock(
          (0): Linear(in_features=768, out_features=3072, bias=True)
          (1): GELU(approximate='none')
          (2): Dropout(p=0.0, inplace=False)
          (3): Linear(in_features=3072, out_features=768, bias=True)
          (4): Dropout(p=0.0, inplace=False)
        )
      )
      (encoder_layer_5): EncoderBlock(
        (ln_1): LayerNorm((768,), eps=1e-06, elementwise_affine=True)
        (self_attention): MultiheadAttention(
          (out_proj): NonDynamicallyQuantizableLinear(in_features=768, out_features=768, bias=True)
        )
        (dropout): Dropout(p=0.0, inplace=False)
        (ln_2): LayerNorm((768,), eps=1e-06, elementwise_affine=True)
        (mlp): MLPBlock(
          (0): Linear(in_features=768, out_features=3072, bias=True)
          (1): GELU(approximate='none')
          (2): Dropout(p=0.0, inplace=False)
          (3): Linear(in_features=3072, out_features=768, bias=True)
          (4): Dropout(p=0.0, inplace=False)
        )
      )
      (encoder_layer_6): EncoderBlock(
        (ln_1): LayerNorm((768,), eps=1e-06, elementwise_affine=True)
        (self_attention): MultiheadAttention(
          (out_proj): NonDynamicallyQuantizableLinear(in_features=768, out_features=768, bias=True)
        )
        (dropout): Dropout(p=0.0, inplace=False)
        (ln_2): LayerNorm((768,), eps=1e-06, elementwise_affine=True)
        (mlp): MLPBlock(
          (0): Linear(in_features=768, out_features=3072, bias=True)
          (1): GELU(approximate='none')
          (2): Dropout(p=0.0, inplace=False)
          (3): Linear(in_features=3072, out_features=768, bias=True)
          (4): Dropout(p=0.0, inplace=False)
        )
      )
      (encoder_layer_7): EncoderBlock(
        (ln_1): LayerNorm((768,), eps=1e-06, elementwise_affine=True)
        (self_attention): MultiheadAttention(
          (out_proj): NonDynamicallyQuantizableLinear(in_features=768, out_features=768, bias=True)
        )
        (dropout): Dropout(p=0.0, inplace=False)
        (ln_2): LayerNorm((768,), eps=1e-06, elementwise_affine=True)
        (mlp): MLPBlock(
          (0): Linear(in_features=768, out_features=3072, bias=True)
          (1): GELU(approximate='none')
          (2): Dropout(p=0.0, inplace=False)
          (3): Linear(in_features=3072, out_features=768, bias=True)
          (4): Dropout(p=0.0, inplace=False)
        )
      )
      (encoder_layer_8): EncoderBlock(
        (ln_1): LayerNorm((768,), eps=1e-06, elementwise_affine=True)
        (self_attention): MultiheadAttention(
          (out_proj): NonDynamicallyQuantizableLinear(in_features=768, out_features=768, bias=True)
        )
        (dropout): Dropout(p=0.0, inplace=False)
        (ln_2): LayerNorm((768,), eps=1e-06, elementwise_affine=True)
        (mlp): MLPBlock(
          (0): Linear(in_features=768, out_features=3072, bias=True)
          (1): GELU(approximate='none')
          (2): Dropout(p=0.0, inplace=False)
          (3): Linear(in_features=3072, out_features=768, bias=True)
          (4): Dropout(p=0.0, inplace=False)
        )
      )
      (encoder_layer_9): EncoderBlock(
        (ln_1): LayerNorm((768,), eps=1e-06, elementwise_affine=True)
        (self_attention): MultiheadAttention(
          (out_proj): NonDynamicallyQuantizableLinear(in_features=768, out_features=768, bias=True)
        )
        (dropout): Dropout(p=0.0, inplace=False)
        (ln_2): LayerNorm((768,), eps=1e-06, elementwise_affine=True)
        (mlp): MLPBlock(
          (0): Linear(in_features=768, out_features=3072, bias=True)
          (1): GELU(approximate='none')
          (2): Dropout(p=0.0, inplace=False)
          (3): Linear(in_features=3072, out_features=768, bias=True)
          (4): Dropout(p=0.0, inplace=False)
        )
      )
      (encoder_layer_10): EncoderBlock(
        (ln_1): LayerNorm((768,), eps=1e-06, elementwise_affine=True)
        (self_attention): MultiheadAttention(
          (out_proj): NonDynamicallyQuantizableLinear(in_features=768, out_features=768, bias=True)
        )
        (dropout): Dropout(p=0.0, inplace=False)
        (ln_2): LayerNorm((768,), eps=1e-06, elementwise_affine=True)
        (mlp): MLPBlock(
          (0): Linear(in_features=768, out_features=3072, bias=True)
          (1): GELU(approximate='none')
          (2): Dropout(p=0.0, inplace=False)
          (3): Linear(in_features=3072, out_features=768, bias=True)
          (4): Dropout(p=0.0, inplace=False)
        )
      )
      (encoder_layer_11): EncoderBlock(
        (ln_1): LayerNorm((768,), eps=1e-06, elementwise_affine=True)
        (self_attention): MultiheadAttention(
          (out_proj): NonDynamicallyQuantizableLinear(in_features=768, out_features=768, bias=True)
        )
        (dropout): Dropout(p=0.0, inplace=False)
        (ln_2): LayerNorm((768,), eps=1e-06, elementwise_affine=True)
        (mlp): MLPBlock(
          (0): Linear(in_features=768, out_features=3072, bias=True)
          (1): GELU(approximate='none')
          (2): Dropout(p=0.0, inplace=False)
          (3): Linear(in_features=3072, out_features=768, bias=True)
          (4): Dropout(p=0.0, inplace=False)
        )
      )
    )
    (ln): LayerNorm((768,), eps=1e-06, elementwise_affine=True)
  )
  (heads): Sequential(
    (head): Linear(in_features=768, out_features=1000, bias=True)
  )
)
As the structure shows, the 3-channel input image passes through a convolution with a (16, 16) kernel and a (16, 16) stride. For an input of size (224, 224), this yields 768 feature maps of size (14, 14). Flattening the spatial dimensions gives a tensor of shape (batch, 768, 196), which is then transposed to (batch, 196, 768) so that each of the 196 patches becomes a 768-dimensional token; from there the sequence (with the class token and positional embeddings added) can be fed into the Transformer encoder.
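The shape bookkeeping above can be verified with a few lines (a quick check I added, not part of the original write-up), using a plain Conv2d with the same kernel and stride as vit_b_16's conv_proj:

import torch
import torch.nn as nn

# Same kernel/stride as vit_b_16's conv_proj
conv_proj = nn.Conv2d(3, 768, kernel_size=16, stride=16)
x = torch.randn(1, 3, 224, 224)            # (batch, channels, height, width)
feat = conv_proj(x)                        # torch.Size([1, 768, 14, 14])
tokens = feat.flatten(2).transpose(1, 2)   # torch.Size([1, 196, 768])
print(feat.shape, tokens.shape)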
Now that we have a basic idea of how ViT works, let's use PyTorch's vit_b_16 model and fine-tune it on the driver state dataset from a Kaggle competition.
The dependency versions used in this experiment are as follows:
torch==1.13.1+cu116
torchvision==0.14.1+cu116
tensorboard==2.17.1
tensorboard-data-server==0.7.2
2. Preparing the Dataset
The driver state dataset comes from a Kaggle competition. Since it can no longer be downloaded from the official site, you can get it from Baidu's AI Studio public datasets:
https://aistudio.baidu.com/datasetdetail/35503
After downloading, you can see that the training set contains 10 classes, which represent the following:
Class | Description |
---|---|
c0 | Safe driving |
c1 | Using a phone with the right hand |
c2 | Talking on the phone with the right hand |
c3 | Using a phone with the left hand |
c4 | Talking on the phone with the left hand |
c5 | Operating the center console |
c6 | Drinking |
c7 | Reaching behind |
c8 | Touching hair or doing makeup |
c9 | Talking to a passenger |
Example images for each class are shown below:
The dataset distribution is as follows, with roughly 2,000 images per class:
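If you want to verify the per-class counts yourself, a quick script like the following works (my addition; it assumes the archive was extracted to imgs/train/c0 … imgs/train/c9, the same layout the training script below expects):

import os

data_dir = "imgs/train"
for cls in sorted(os.listdir(data_dir)):
    cls_dir = os.path.join(data_dir, cls)
    if os.path.isdir(cls_dir):
        # Number of image files in each class directory
        print(cls, len(os.listdir(cls_dir)))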
3. ViT Fine-Tuning
PyTorch (torchvision) already ships with the ViT architecture. Using vit_b_16 as an example, one option is to freeze all of the pretrained model's parameters and append a few new fully connected layers:
net.py
from torchvision import models
import torch.nn as nn


class Model(nn.Module):
    def __init__(self, num_classes):
        super(Model, self).__init__()
        # Load the pretrained vit_b_16 model
        self.base_model = models.vit_b_16(pretrained=True)
        print(self.base_model)
        # Freeze the backbone weights
        for param in self.base_model.parameters():
            param.requires_grad = False
        self.relu = nn.ReLU()
        # New trainable layers stacked on top of the 1000-dim ImageNet head
        self.fc1 = nn.Linear(self.base_model.heads.head.out_features, 1024)
        self.dropout1 = nn.Dropout(p=0.2)
        self.fc2 = nn.Linear(1024, 512)
        self.dropout2 = nn.Dropout(p=0.1)
        self.fc3 = nn.Linear(512, num_classes)

    def forward(self, x):
        x = self.base_model(x)
        x = self.fc1(x)
        x = self.relu(x)
        x = self.dropout1(x)
        x = self.fc2(x)
        x = self.relu(x)
        x = self.dropout2(x)
        x = self.fc3(x)
        return x
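Since the backbone is frozen in this first variant, only the newly added fully connected layers should be trainable. A quick optional check (my addition, not part of the original code):

from net import Model

model = Model(num_classes=10)
# Compare trainable parameters against the total parameter count
trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"trainable params: {trainable} / {total}")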
Alternatively, you can leave the original parameters unfrozen and keep the original model structure, continuing to train it on the new classes; in that case the following structure works, simply changing the output size of the head layer to the number of classes:
net.py
from torchvision import models
import torch.nn as nn


class Model(nn.Module):
    def __init__(self, num_classes):
        super(Model, self).__init__()
        # Load the pretrained model
        self.base_model = models.vit_b_16(pretrained=True)
        print(self.base_model)
        num_ftrs = self.base_model.heads.head.in_features
        # Replace the last layer so it outputs num_classes logits
        self.base_model.heads.head = nn.Linear(num_ftrs, num_classes)
        print(self.base_model)

    def forward(self, x):
        return self.base_model(x)
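For this second variant, a quick sanity check (again my addition) is to push a dummy batch through and confirm that the logits have one column per class:

import torch
from net import Model

model = Model(num_classes=10)
logits = model(torch.randn(2, 3, 224, 224))
print(logits.shape)   # expected: torch.Size([2, 10])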
Here I use the first approach, which occupies relatively little GPU memory. The full training procedure is shown below, using 80% of the data for training and 20% for validation:
import os.path
import torch
from torchvision import datasets, models, transforms
from torch.utils.data import DataLoader, random_split
import torch.nn as nn
import torch.optim as optim
from tqdm import tqdm
from torch.utils.tensorboard import SummaryWriter
from net import Model
import sys, json

# Fix the random seed so results are reproducible
torch.manual_seed(0)


# Load the dataset
def load_data(data_dir, train_ratio, data_transforms, batch_size):
    # Read the image folder dataset
    dataset = datasets.ImageFolder(data_dir, data_transforms)
    # Split into training and validation sets
    train_size = int(train_ratio * len(dataset))
    val_size = len(dataset) - train_size
    train_dataset, val_dataset = random_split(dataset, [train_size, val_size])
    train_loader = DataLoader(train_dataset, batch_size=batch_size, shuffle=True)
    val_loader = DataLoader(val_dataset, batch_size=batch_size, shuffle=False)
    return train_loader, val_loader, dataset.classes


# Training loop
def train_model(model, criterion, optimizer, train_loader, val_loader, device, output_dir, writer, num_epochs=10):
    best_accuracy, global_step = 0.0, 0
    for epoch in range(num_epochs):
        model.train()
        running_loss = 0.0
        for inputs, labels in tqdm(train_loader, file=sys.stdout, desc="Train Epoch: " + str(epoch)):
            inputs, labels = inputs.to(device), labels.to(device)
            optimizer.zero_grad()
            outputs = model(inputs)
            loss = criterion(outputs, labels)
            loss.backward()
            optimizer.step()
            running_loss += loss.item()
            writer.add_scalar('Loss/train', loss, global_step)
            global_step += 1
        train_loss = running_loss / len(train_loader)
        # Validate the model
        model.eval()
        accuracy, val_loss = validate_model(model, val_loader, device, epoch, criterion)
        tqdm.write(
            f'Epoch {epoch + 1}, Device: {device}, Loss: {train_loss}, Val Loss: {val_loss} , Current Accuracy: {accuracy}')
        writer.add_scalar('Loss/val', val_loss, epoch)
        writer.add_scalar('Accuracy/val', accuracy, epoch)
        if accuracy > best_accuracy:
            # Save the best model weights so far
            torch.save(model.state_dict(), os.path.join(output_dir, 'best_model.pth'))
            best_accuracy = accuracy
    # Save the final model weights
    torch.save(model.state_dict(), os.path.join(output_dir, 'last_model.pth'))


# Validate the model
def validate_model(model, val_loader, device, epoch, criterion):
    correct = 0
    total = 0
    running_loss = 0.0
    with torch.no_grad():
        for inputs, labels in tqdm(val_loader, file=sys.stdout, desc="Val Epoch: " + str(epoch)):
            inputs, labels = inputs.to(device), labels.to(device)
            outputs = model(inputs)
            loss = criterion(outputs, labels)
            running_loss += loss.item()
            _, predicted = torch.max(outputs, 1)
            total += labels.size(0)
            correct += (predicted == labels).sum().item()
    return 100 * correct / total, running_loss / len(val_loader)


def main():
    # Dataset directory
    data_dir = 'imgs/train'
    # Directory for saved models
    output_dir = "model"
    # Directory for TensorBoard logs
    logs_dir = "logs"
    # Fraction of data used for training
    train_ratio = 0.8
    # Batch size
    batch_size = 45
    # Learning rate
    lr = 1e-3
    # Number of epochs
    epochs = 50
    data_transforms = transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
    ])
    # Load the data: 80% for training, 20% for validation
    train_loader, val_loader, classes = load_data(
        data_dir=data_dir,
        train_ratio=train_ratio,
        data_transforms=data_transforms,
        batch_size=batch_size
    )
    if not os.path.exists(output_dir):
        os.mkdir(output_dir)
    # Record the class order
    with open(os.path.join(output_dir, "classify.txt"), "w", encoding="utf-8") as w:
        w.write(json.dumps(classes, ensure_ascii=False))
    # TensorBoard writer
    writer = SummaryWriter(logs_dir)
    # Device
    device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
    # Build the model
    model = Model(len(classes))
    print(model)
    # Loss function
    criterion = nn.CrossEntropyLoss()
    # Optimizer
    optimizer = optim.AdamW(model.parameters(), lr=lr)
    # Train
    model.to(device)
    train_model(
        model=model,
        criterion=criterion,
        optimizer=optimizer,
        train_loader=train_loader,
        val_loader=val_loader,
        device=device,
        output_dir=output_dir,
        writer=writer,
        num_epochs=epochs
    )
    writer.close()


if __name__ == '__main__':
    main()
Training occupies roughly 2 GB of GPU memory:
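If you want to check the memory usage on your own machine, you can watch nvidia-smi, or (a small addition of mine) query PyTorch's allocator after training has run for a while; note that the allocator figure only covers tensors and is typically a bit lower than what nvidia-smi reports.

import torch

if torch.cuda.is_available():
    # Peak memory allocated by tensors on GPU 0 since the start of the process
    print(f"peak allocated: {torch.cuda.max_memory_allocated(0) / 1024 ** 3:.2f} GB")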
During training you can see the validation accuracy gradually improving and the loss gradually converging:
After training finishes, you can inspect the loss and accuracy curves in TensorBoard:
tensorboard --logdir=logs --bind_all
Then open http://<ip>:6006/ in a browser.
The accuracy on the validation set reaches about 98.5%, though the loss still fluctuates quite a bit, so you may want to add further optimization strategies, such as the one sketched below.
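One example of such a strategy (my suggestion, not part of the original script) is a cosine learning-rate schedule layered on top of the existing AdamW optimizer:

import torch.nn as nn
import torch.optim as optim

model = nn.Linear(768, 10)     # stand-in for the real Model from net.py
optimizer = optim.AdamW(model.parameters(), lr=1e-3)
# T_max matches the number of epochs (50 in the training script above)
scheduler = optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=50)

for epoch in range(50):
    # ... run one epoch of training here ...
    scheduler.step()           # decay the learning rate once per epoch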
4. Model Testing
import os
import torch
from torchvision import transforms
from net import Model
import matplotlib.pyplot as plt
from PIL import Image
import json

# Use a font that can render the Chinese class names below
plt.rcParams['font.sans-serif'] = ['SimHei']

# Chinese display names for each class (see the table in section 2 for English descriptions)
classify_cn = {
    "c0": "安全驾驶",
    "c1": "右手使用手机",
    "c2": "右手打电话",
    "c3": "左手使用手机",
    "c4": "左手打电话",
    "c5": "操作中控台",
    "c6": "喝水",
    "c7": "向后伸手",
    "c8": "手摸头发或化妆",
    "c9": "与人交谈"
}


def main():
    image_dir = "imgs/test"
    # Read the class order saved during training
    with open("model/classify.txt", "r", encoding="utf-8") as r:
        classify = json.loads(r.read())
    # Device
    device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
    # Load the model and the best weights
    model = Model(len(classify))
    model.load_state_dict(torch.load('model/best_model.pth'))
    model = model.to(device)
    # Switch to eval mode so the added dropout layers are disabled during inference
    model.eval()
    data_transforms = transforms.Compose([
        transforms.Resize(224),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
    ])
    imgs = os.listdir(image_dir)
    # Group the images four at a time
    imgs = list(zip(*[iter(imgs)] * 4))
    for names in imgs:
        plt.figure(figsize=(8, 8))
        for i, name in enumerate(names):
            plt.subplot(2, 2, i + 1)
            image = Image.open(os.path.join(image_dir, name)).convert('RGB')
            img = data_transforms(image).unsqueeze(0)
            img = img.to(device)
            with torch.no_grad():
                output = model(img)
            _, predicted = torch.max(output.data, 1)
            label = classify_cn[classify[predicted[0].item()]]
            plt.imshow(image)
            plt.title(label)
        plt.show()


if __name__ == '__main__':
    main()
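If you also want a confidence score alongside the predicted class, the raw logits can be passed through softmax. A small self-contained sketch (my addition; it uses a random tensor as a stand-in for an image preprocessed as in main()):

import torch
from net import Model

model = Model(10)
model.load_state_dict(torch.load("model/best_model.pth", map_location="cpu"))
model.eval()

x = torch.randn(1, 3, 224, 224)   # stand-in for a preprocessed image
with torch.no_grad():
    probs = torch.softmax(model(x), dim=1)   # convert logits to probabilities
    conf, pred = torch.max(probs, 1)
print(f"class index: {pred.item()}, confidence: {conf.item():.2%}")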