CV (10) -- Object Detection
Preface
These notes only record my learning process; questions and discussion are welcome.
Object Detection
Object detection means precisely locating each object in a given image and labeling its category; the output is a classification label plus the object's bounding box (x, y, width, height).
Object detection algorithms:
1. Region proposals + deep-learning classification: extract candidate regions, then classify each region with a deep-learning-based method, e.g.:
• R-CNN (Selective Search + CNN + SVM)
• SPP-net (ROI Pooling)
• Fast R-CNN (Selective Search + CNN + ROI)
• Faster R-CNN (RPN + CNN + ROI)
2. Deep-learning-based regression methods: YOLO, SSD, etc.
IoU:
A simple metric that measures how well two boxes match: the area of the intersection between the predicted box and the ground-truth box divided by the area of their union. The closer the value is to 1, the better the match.
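The ratio above can be computed directly from two boxes; a minimal sketch (the (x1, y1, x2, y2) corner format is an assumption here, since the text describes boxes as (x, y, width, height)):

```python
def iou(box_a, box_b):
    # Boxes in (x1, y1, x2, y2) corner format
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    # Clamp to 0 so disjoint boxes give an empty intersection
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

print(iou((0, 0, 2, 2), (1, 1, 3, 3)))  # 1/7 ≈ 0.1429
```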
TP: predicted positive, actually positive
TN: predicted negative, actually negative
FP: predicted positive, actually negative
FN: predicted negative, actually positive
Precision and recall: precision measures how many of the detections are correct; recall measures how many of the ground-truth objects are found.
precision = TP / (TP + FP)
recall = TP / (TP + FN)
F1 = 2 * (precision * recall) / (precision + recall)
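A quick numeric check of these three formulas (the counts below are made up purely for illustration):

```python
def precision_recall_f1(tp, fp, fn):
    # Precision: fraction of predicted positives that are correct
    precision = tp / (tp + fp)
    # Recall: fraction of actual positives that were found
    recall = tp / (tp + fn)
    # F1: harmonic mean of precision and recall
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# e.g. 8 correct detections, 2 false alarms, 4 missed objects
p, r, f1 = precision_recall_f1(tp=8, fp=2, fn=4)
print(p, r, f1)  # 0.8  0.666...  0.727...
```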
Bounding box regression:
The goal is to learn a mapping that transforms the original proposal window into one close to the ground-truth window.
Input:
P = (Px, Py, Pw, Ph)
(Note: during training the input also includes the Ground Truth.)
Output:
The translation and scaling to apply: dx, dy, dw, dh (i.e., Δx, Δy, Sw, Sh).
With these four transforms we can map the proposal directly onto the Ground Truth.
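Applying the four transforms can be sketched with the standard R-CNN parameterization, in which the translations are measured relative to the proposal's width/height and the scale factors act through an exponential (the text above leaves the exact parameterization unspecified, so this choice is an assumption):

```python
import math

def apply_deltas(px, py, pw, ph, dx, dy, dw, dh):
    # Translation scaled by the proposal size; exponential scaling
    # means dw = dh = 0 leaves the width and height unchanged
    gx = pw * dx + px
    gy = ph * dy + py
    gw = pw * math.exp(dw)
    gh = ph * math.exp(dh)
    return gx, gy, gw, gh

print(apply_deltas(10, 10, 4, 4, 0.5, 0.0, 0.0, 0.0))  # (12.0, 10.0, 4.0, 4.0)
```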
Two-stage:
Faster R-CNN (RPN + CNN + ROI)
Faster R-CNN
- Conv layers: Faster R-CNN first uses a stack of basic conv + relu + pooling layers to extract feature maps from the image. These feature maps are shared by the subsequent RPN and the fully connected layers.
- All convolutions in the Faster R-CNN conv layers are padded (pad=1, i.e., one ring of zeros), so an MxN input becomes (M+2)x(N+2) and the following 3x3 convolution outputs MxN again. This setting is why the conv layers never change the spatial size between input and output.
- The pooling layers in the conv stack use kernel_size=2, stride=2, so each MxN map that passes through a pooling layer becomes (M/2)x(N/2).
- An MxN input therefore always leaves the conv layers as a fixed (M/16)x(N/16) feature map, so every feature-map location can be mapped back to the original image.
- Region Proposal Network (RPN): the RPN generates region proposals. A softmax classifies anchors as positive or negative, and bounding box regression then refines the anchors into accurate proposals.
- Generating detection boxes directly with the RPN is a major advantage of Faster R-CNN and greatly speeds up proposal generation.
- The upper branch classifies anchors with a softmax (9 anchors enumerated around each feature-map location), giving a positive/negative binary classification.
- The lower branch computes the bounding box regression offsets of the anchors to obtain accurate proposals.
- The final Proposal layer combines the positive anchors with their regression offsets to produce the proposals, and removes proposals that are too small or cross the image boundary.
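The "9 anchors per location" are typically 3 scales × 3 aspect ratios centred on the same point; a sketch (the concrete base size, scales, and ratios are assumptions chosen to mirror common Faster R-CNN settings):

```python
import numpy as np

def make_anchors(cx, cy, base=16, scales=(8, 16, 32), ratios=(0.5, 1.0, 2.0)):
    # One (x1, y1, x2, y2) box per (scale, ratio) pair, all centred on (cx, cy);
    # the sqrt split keeps the area fixed for each scale while varying the ratio
    anchors = []
    for s in scales:
        for r in ratios:
            w = base * s * np.sqrt(1.0 / r)
            h = base * s * np.sqrt(r)
            anchors.append((cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2))
    return np.array(anchors)

print(make_anchors(0, 0).shape)  # (9, 4)
```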
- RoI Pooling: this layer collects the feature maps and the proposals, extracts a fixed-size proposal feature map for each proposal, and passes it to the subsequent fully connected layers for category classification.
- A proposal is given at the MxN image scale, so the spatial_scale parameter first maps it onto the (M/16)x(N/16) feature map;
- the feature-map region of each proposal is then divided into a pooled_w * pooled_h grid;
- max pooling is applied to each grid cell.
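The grid-and-max-pool steps above can be sketched with plain numpy on a single-channel feature map (the floor/ceil bin boundaries are an assumption; real implementations differ in how they round):

```python
import numpy as np

def roi_max_pool(fmap, x1, y1, x2, y2, pooled_h=2, pooled_w=2):
    # Split the RoI into a pooled_h x pooled_w grid and max-pool each cell
    out = np.empty((pooled_h, pooled_w))
    h, w = y2 - y1, x2 - x1
    for i in range(pooled_h):
        for j in range(pooled_w):
            ys = y1 + int(np.floor(i * h / pooled_h))
            ye = y1 + int(np.ceil((i + 1) * h / pooled_h))
            xs = x1 + int(np.floor(j * w / pooled_w))
            xe = x1 + int(np.ceil((j + 1) * w / pooled_w))
            out[i, j] = fmap[ys:ye, xs:xe].max()
    return out

fmap = np.arange(16).reshape(4, 4)
print(roi_max_pool(fmap, 0, 0, 4, 4))  # [[ 5.  7.] [13. 15.]]
```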
- Classification: the proposal feature maps are used to predict each proposal's category, and a second bounding box regression yields the final, refined position of the detection box.
Implementing the Faster R-CNN network structure
"""
A Faster R-CNN implementation
"""
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision
import torchvision.transforms as transforms
from torch.utils.data import Dataset, DataLoader
import numpy as np
import cv2
# Backbone network, here a ResNet-50 with its classification head removed
class ResNetBackbone(nn.Module):
    def __init__(self):
        super(ResNetBackbone, self).__init__()
        resnet = torchvision.models.resnet50(pretrained=True)
        # Drop the final avgpool and fc layers, keep the convolutional stages
        self.features = nn.Sequential(*list(resnet.children())[:-2])

    def forward(self, x):
        return self.features(x)
# Region Proposal Network (RPN)
class RPN(nn.Module):
    def __init__(self, in_channels, num_anchors):
        super(RPN, self).__init__()
        self.conv = nn.Conv2d(in_channels, 512, kernel_size=3, stride=1, padding=1)
        # 2 classes per anchor (positive / negative): this conv predicts a score for each
        self.cls_layer = nn.Conv2d(512, num_anchors * 2, kernel_size=1, stride=1)
        # 4 bounding-box parameters predicted per anchor
        self.reg_layer = nn.Conv2d(512, num_anchors * 4, kernel_size=1, stride=1)

    def forward(self, x):
        x = F.relu(self.conv(x))
        cls_scores = self.cls_layer(x)
        bbox_preds = self.reg_layer(x)
        # Reshape to (batch, total_anchors, 2) and (batch, total_anchors, 4)
        cls_scores = cls_scores.permute(0, 2, 3, 1).contiguous().view(x.size(0), -1, 2)
        bbox_preds = bbox_preds.permute(0, 2, 3, 1).contiguous().view(x.size(0), -1, 4)
        return cls_scores, bbox_preds
# RoI pooling layer
class RoIPooling(nn.Module):
    def __init__(self, output_size):
        super(RoIPooling, self).__init__()
        self.output_size = output_size

    def forward(self, features, rois):
        roi_features = []
        for i in range(features.size(0)):
            # rois[i]: the (K, 4) boxes of interest for image i
            roi = rois[i]
            # features: (batch_size, channels, height, width); each RoI is pooled
            # to the fixed output_size. Note: spatial_scale is left at its default
            # of 1.0 here; a full implementation would pass 1/16 to map image
            # coordinates onto the feature map.
            roi_feature = torchvision.ops.roi_pool(features[i].unsqueeze(0), [roi], self.output_size)
            roi_features.append(roi_feature)
        return torch.cat(roi_features, dim=0)
# Faster R-CNN model
class FasterRCNN(nn.Module):
    def __init__(self, num_classes):
        super(FasterRCNN, self).__init__()
        self.backbone = ResNetBackbone()
        self.rpn = RPN(2048, 9)  # assume 9 anchors per location
        # Pool every RoI to a fixed 7x7
        self.roi_pooling = RoIPooling((7, 7))
        self.fc1 = nn.Linear(2048 * 7 * 7, 1024)
        self.fc2 = nn.Linear(1024, 1024)
        self.cls_layer = nn.Linear(1024, num_classes)
        self.reg_layer = nn.Linear(1024, num_classes * 4)

    def forward(self, x, rois=None):
        features = self.backbone(x)
        cls_scores, bbox_preds = self.rpn(features)
        if rois is not None:
            roi_features = self.roi_pooling(features, rois)
            roi_features = roi_features.view(roi_features.size(0), -1)
            fc1 = F.relu(self.fc1(roi_features))
            fc2 = F.relu(self.fc2(fc1))
            cls_preds = self.cls_layer(fc2)
            reg_preds = self.reg_layer(fc2)
            return cls_preds, reg_preds, cls_scores, bbox_preds
        else:
            return cls_scores, bbox_preds
# Custom dataset class
class CustomDataset(Dataset):
    def __init__(self, image_paths, target_paths, transform=None):
        self.image_paths = image_paths
        self.target_paths = target_paths
        self.transform = transform

    def __len__(self):
        return len(self.image_paths)

    def __getitem__(self, idx):
        image = cv2.imread(self.image_paths[idx])
        image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
        target = np.load(self.target_paths[idx], allow_pickle=True)
        if self.transform:
            image = self.transform(image)
        return image, target
# Data preprocessing
transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
])
# Training function (schematic: it assumes each target is a (K, 5) tensor whose
# first 4 columns are RoI coordinates and whose last column is the class label)
def train(model, dataloader, optimizer, criterion_cls, criterion_reg):
    model.train()
    total_loss = 0
    for images, targets in dataloader:
        images = images.to(device)
        rois = [t[:, :4].to(device) for t in targets]
        labels = torch.cat([t[:, 4].long().to(device) for t in targets])
        gt_boxes = torch.cat([t[:, :4].to(device) for t in targets])
        optimizer.zero_grad()
        cls_preds, reg_preds, cls_scores, bbox_preds = model(images, rois)
        # Classification loss over the RoI class predictions
        cls_loss = criterion_cls(cls_preds, labels)
        # Regression loss (simplified: regress the first 4 outputs against the
        # boxes; a full implementation would select the 4 offsets of the target class)
        reg_loss = criterion_reg(reg_preds[:, :4], gt_boxes)
        loss = cls_loss + reg_loss
        loss.backward()
        optimizer.step()
        total_loss += loss.item()
    return total_loss / len(dataloader)
# Evaluation function
def evaluate(model, dataloader):
    model.eval()
    correct = 0
    total = 0
    with torch.no_grad():
        for images, targets in dataloader:
            images = images.to(device)
            # Without rois the model returns only the RPN outputs
            cls_scores, bbox_preds = model(images)
            # Compute evaluation metrics here as needed, e.g. mAP
    return correct / total if total > 0 else 0.0
if __name__ == "__main__":
    # Hypothetical image and annotation file paths
    image_paths = ['img/street.jpg', 'img/street.jpg']
    target_paths = ['target1.npy', 'target2.npy']
    dataset = CustomDataset(image_paths, target_paths, transform)
    dataloader = DataLoader(dataset, batch_size=2, shuffle=True)
    device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
    num_classes = 2  # including the background class
    model = FasterRCNN(num_classes).to(device)
    optimizer = torch.optim.SGD(model.parameters(), lr=0.001, momentum=0.9)
    criterion_cls = nn.CrossEntropyLoss()
    criterion_reg = nn.SmoothL1Loss()
    num_epochs = 10
    for epoch in range(num_epochs):
        loss = train(model, dataloader, optimizer, criterion_cls, criterion_reg)
        print(f'Epoch {epoch + 1}/{num_epochs}, Loss: {loss}')
    # Evaluation
    accuracy = evaluate(model, dataloader)
    print(f'Accuracy: {accuracy}')