
OpenCV与AI深度学习 | Vehicle Accident Detection with YOLOv11 on a Custom Dataset (source code included, worth bookmarking!)

This article comes from the WeChat public account "OpenCV与AI深度学习" and is shared for academic purposes only; it will be removed upon any infringement claim. Packed with practical content.

Original article: Vehicle Accident Detection with YOLOv11 on a Custom Dataset

    In the field of intelligent transportation systems, the ability to detect vehicle accidents in real time is becoming increasingly important. This project applies advanced computer vision techniques, using the state-of-the-art object detection model YOLOv11, to accurately identify and classify vehicle accidents. The main goal is to improve road safety by sending timely alerts to emergency services and enabling faster response times.

    YOLOv11 is the latest version of YOLO from Ultralytics. Compared with previous versions it brings several improvements and new capabilities; for more information, see the official Ultralytics YOLOv11 article:

https://docs.ultralytics.com/models/yolo11/

YOLOv11 Is Here: Redefining What's Possible in AI

    This project involves several steps; it is a simple, prototype-level build. The steps are:

    1. Data preparation: dataset selection and preprocessing

    2. Model training: training a model on the custom data with YOLOv11

    3. Model evaluation: evaluating the model's performance on unseen data

    4. Model inference: running inference on unseen data with the ONNX version of the trained model

Data Preparation

    Data preparation and preprocessing are critical steps in computer vision model development. They ensure the model can learn effectively and generalize well to new data. Key reasons why they matter:

    1. Improved data quality

    2. Reduced overfitting

    3. Correct label handling

    In tasks such as object detection or segmentation, making sure labels (bounding boxes, masks, or keypoints) stay aligned with the preprocessed images is essential for accurate training. Misaligned labels can significantly degrade model performance.

    4. Better performance metrics

    Properly preprocessed data yields better accuracy, precision, and recall. Well-prepared data lets the model focus on meaningful features and improves its predictions.

    In short, data preparation and preprocessing directly affect the success of a computer vision model by improving data quality, reducing computational complexity, preventing overfitting, and enhancing generalization.

    In this project, I took the dataset from two different sources.

    The source code and the dataset are available in my GitHub repository:

https://github.com/varunpn7405/Vehicle_Accident_detection

Data Preprocessing

    As a first step, we need to remove empty annotations and their corresponding frames. This helps throughout model development, because it:

  • Avoids misleading the model: object detection models are trained to predict the presence and location of objects in images. If images without annotations (i.e., with no labeled objects) are included, the model may incorrectly learn that many images contain no objects, weakening its ability to detect objects effectively.
  • Improves training efficiency: training on unannotated images wastes compute, since the model learns nothing useful about object locations from them. Removing these images keeps training focused on relevant, informative data.
  • Reduces bias: including many empty images can bias the model toward predicting that images generally contain no objects, leading to a higher false-negative rate (i.e., failing to detect objects that are present).
  • Prevents overfitting: training on too many empty images can make the model overconfident in predicting "no objects" on future images, which can hurt generalization to real scenes where objects are present.
  • Ensures correct loss computation: object detection models typically use losses that depend on object presence (such as classification and localization losses). Empty images can affect how these losses are computed, especially if the model expects at least one object per image, which may cause instability during training.
import os, shutil

# Function to check if a file is empty
def is_empty_file(file_path):
    return os.path.exists(file_path) and os.stat(file_path).st_size == 0

image_extensions = ['.jpg', '.jpeg', '.png']
path = os.getcwd()
inputPar = os.path.join(path, r'dataset')
outputPar = os.path.join(path, r'filtered')

if not os.path.exists(outputPar):
    os.makedirs(outputPar)

folders = os.listdir(inputPar)
for folder in folders:
    if folder in ["test", "train", "valid"]:
        inputChild = os.path.join(inputPar, folder, "labels")
        outputChild1 = os.path.join(outputPar, folder, "labels")
        if not os.path.exists(outputChild1):
            os.makedirs(outputChild1)
        outputChild2 = os.path.join(outputPar, folder, "images")
        if not os.path.exists(outputChild2):
            os.makedirs(outputChild2)
        files = os.listdir(inputChild)
        for file in files:
            annotation_path = os.path.join(inputChild, file)
            # Keep only non-empty annotation files
            if not is_empty_file(annotation_path):
                shutil.copy(annotation_path, os.path.join(outputChild1, file))
                # Find and copy the corresponding image file
                image_name = os.path.splitext(file)[0]
                for ext in image_extensions:
                    image_path = os.path.join(inputPar, folder, "images", image_name + ext)
                    if os.path.exists(image_path):
                        shutil.copy(image_path, os.path.join(outputChild2, image_name + ext))
                        break

The next task: our second dataset has three classes, Accident, Car, and Fire, but we only need the Accident instances. So we strip the annotations for Car and Fire, and if a YOLO annotation file contains only Car and Fire objects, we delete both the annotation file and its image.

import os, shutil

image_extensions = ['.jpg', '.jpeg', '.png']
path = os.getcwd()
inputPar = os.path.join(path, r'accident detection.v10i.yolov11')
outputPar = os.path.join(path, r'accident detection.v10i.yolov11(Filtered)')

if not os.path.exists(outputPar):
    os.makedirs(outputPar)

folders = os.listdir(inputPar)
clsfile = os.path.join(path, 'classes.txt')
with open(clsfile) as tf:
    clsnames = [cl.strip() for cl in tf.readlines()]

for folder in folders:
    if folder in ["test", "train", "valid"]:
        inputChild = os.path.join(inputPar, folder, "labels")
        outputChild1 = os.path.join(outputPar, folder, "labels")
        if not os.path.exists(outputChild1):
            os.makedirs(outputChild1)
        outputChild2 = os.path.join(outputPar, folder, "images")
        if not os.path.exists(outputChild2):
            os.makedirs(outputChild2)
        files = os.listdir(inputChild)
        for file in files:
            fileName, ext = os.path.splitext(file)
            finput = os.path.join(inputChild, file)
            with open(finput) as tf:
                Yolodata = tf.readlines()

            # Drop objects of class 1 (Car) and 2 (Fire); keep the rest (Accident)
            new_yolo_data = []
            for obj in Yolodata:
                if int(obj.split(' ')[0]) not in (1, 2):
                    new_yolo_data.append(obj)

            # Skip files (and their images) that contained only Car/Fire objects
            if not new_yolo_data:
                continue

            with open(os.path.join(outputChild1, file), "w") as tf:
                tf.writelines(new_yolo_data)

            image_name = os.path.splitext(file)[0]
            for ext in image_extensions:
                image_path = os.path.join(inputPar, folder, "images", image_name + ext)
                if os.path.exists(image_path):
                    shutil.copy(image_path, os.path.join(outputChild2, image_name + ext))
                    break

    After all this filtering, combine the two datasets into one dataset suitable for training and validation.

To monitor and verify the annotation quality of the whole dataset visually, we can write a script for that as well:

import os
from PIL import Image, ImageDraw, ImageFont

font = ImageFont.truetype("arial.ttf", 15)
path = os.getcwd()
inputPar = os.path.join(path, r'Dataset')
outputPar = os.path.join(path, r'Visualisation')

if not os.path.exists(outputPar):
    os.makedirs(outputPar)

folders = os.listdir(inputPar)
cls_clr = {"Accident": "#eb0523"}
clsfile = os.path.join(path, 'classes.txt')
with open(clsfile) as tf:
    clsnames = [cl.strip() for cl in tf.readlines()]

for folder in folders:
    if folder in ["test", "train", "valid"]:
        inputChild = os.path.join(inputPar, folder, "labels")
        outputChild = os.path.join(outputPar, folder)
        if not os.path.exists(outputChild):
            os.makedirs(outputChild)
        files = os.listdir(inputChild)
        for file in files:
            fileName, ext = os.path.splitext(file)
            finput = os.path.join(inputChild, file)
            with open(finput) as tf:
                Yolodata = tf.readlines()
            # Locate the matching image, trying each supported extension
            imgpath = None
            for img_ext in ['.jpg', '.jpeg', '.png']:
                candidate = os.path.join(inputPar, folder, "images", fileName + img_ext)
                if os.path.exists(candidate):
                    imgpath = candidate
                    break
            if imgpath:
                print("plotting >>", os.path.basename(imgpath))
                img = Image.open(imgpath)
                draw = ImageDraw.Draw(img)
                width, height = img.size
                for obj in Yolodata:
                    clsName = clsnames[int(obj.split(' ')[0])]
                    xnew = float(obj.split(' ')[1])
                    ynew = float(obj.split(' ')[2])
                    wnew = float(obj.split(' ')[3])
                    hnew = float(obj.split(' ')[4])
                    label = f"{clsName}"
                    # box size
                    dw = 1 / width
                    dh = 1 / height
                    # coordinates
                    xmax = int(((2 * xnew) + wnew) / (2 * dw))
                    xmin = int(((2 * xnew) - wnew) / (2 * dw))
                    ymax = int(((2 * ynew) + hnew) / (2 * dh))
                    ymin = int(((2 * ynew) - hnew) / (2 * dh))
                    clr = cls_clr[clsName]
                    tw, th = font.getbbox(label)[2:]
                    # draw bbox and classname::
                    draw.rectangle([(xmin, ymin), (xmax, ymax)], outline=clr, width=2)
                    txtbox = [(xmin, ymin - th), (xmin + tw, ymin)]
                    draw.rectangle(txtbox, fill=clr)
                    draw.text((xmin, ymin - th), label, fill='white', font=font)
                fout = os.path.join(outputChild, os.path.basename(imgpath))
                img.save(fout)
            else:
                print(f'{fileName}: image not found')
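
The arithmetic in the drawing loop converts YOLO's normalized (x_center, y_center, w, h) values back to pixel corners: with dw = 1/width, xmax = ((2 * x_center) + w) / (2 * dw) reduces to (x_center + w/2) * width, and xmin, ymin, ymax follow the same pattern with the image width and height.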

The corresponding project configuration file (used when setting up the CVAT project) is:

[  {    "name": "Accident",    "id": 3505802,    "color": "#f80b2b",    "type": "any",    "attributes": []  }]

Unlike COCO-style annotations, YOLO annotations cannot be uploaded to CVAT directly; a specific folder structure has to be preserved, as sketched below.
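
A typical layout for CVAT's YOLO 1.1 import, sketched here from CVAT's documented conventions (the file names below are CVAT's, not this project's), is:

obj.data                # number of classes plus paths to the files below
obj.names               # one class name per line (here: Accident)
obj_train_data/         # the images together with one YOLO .txt file per image
train.txt               # relative image paths, one per line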

Then, optionally, reformat the dataset for more convenient training; a Python automation script such as the following can be used:

import os
import shutil
from concurrent.futures import ThreadPoolExecutor
import time

startTime = time.time()

def copy_files(src_dir, dst_dir):
    """Copy files from src_dir to dst_dir."""
    os.makedirs(dst_dir, exist_ok=True)
    for file in os.listdir(src_dir):
        shutil.copy(os.path.join(src_dir, file), os.path.join(dst_dir, file))

def process_folder(inputPar, outPar, folder):
    """Process 'images' and 'labels' subfolders within the given folder."""
    if folder in ["train", "valid", "test"]:
        inputChild = os.path.join(inputPar, folder)
        for subfldr in ["images", "labels"]:
            inputSubChild = os.path.join(inputChild, subfldr)
            outChild = os.path.join(outPar, subfldr, folder)
            if os.path.exists(inputSubChild):
                copy_files(inputSubChild, outChild)

def main():
    cPath = os.getcwd()
    inputPar = r"data_set"
    outPar = os.path.join(cPath, "Dataset")
    folders = ["train", "valid", "test"]
    with ThreadPoolExecutor() as executor:
        executor.map(lambda folder: process_folder(inputPar, outPar, folder), folders)

if __name__ == "__main__":
    main()
    endTime = time.time()
    print(f"Process completed in {endTime - startTime:.2f} s")

The final dataset structure should then look like this:
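
    As produced by the reformatting script above (each split's files end up under Dataset/images/<split> and Dataset/labels/<split>):

Dataset/
    images/
        train/
        valid/
        test/
    labels/
        train/
        valid/
        test/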

Training

    Create a data configuration file specifying the dataset paths and classes:

import yaml

# Define the data configuration
data_config = {
    'train': '/content/data_v2/images/train', # Replace with your train directory path
    'val': '/content/data_v2/images/valid', # Replace with your validation directory path
    'nc': 1,  # number of classes
    'names': ['Accident']
}

# Write the configuration to a YAML file
with open('data.yaml', 'w') as file:
    yaml.dump(data_config, file)
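
    For reference, the generated data.yaml will look like this (yaml.dump sorts keys alphabetically by default):

names:
- Accident
nc: 1
train: /content/data_v2/images/train
val: /content/data_v2/images/valid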

    Install Ultralytics:

pip install ultralytics

    Train the model:

from ultralytics import YOLO

# Load a model
model = YOLO("yolo11n.pt")

# Train the model
train_results = model.train(
    data="data.yaml",  # path to dataset YAML
    epochs=100,  # number of training epochs; adjust as needed
    imgsz=640,  # training image size
)
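
    After training, Ultralytics saves the weights under runs/detect/train/weights/ (best.pt and last.pt). As a quick pass at step 3 (model evaluation), the built-in validator can be run on the split defined in data.yaml; a minimal sketch:

# Evaluate the trained model on the validation split from data.yaml
metrics = model.val(data="data.yaml")
print(metrics.box.map50)  # mAP at IoU 0.50
print(metrics.box.map)    # mAP averaged over IoU 0.50-0.95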

    The next key step is converting the trained model to ONNX (Open Neural Network Exchange), which has several advantages:

    1. Cross-platform compatibility

    ONNX is an open format supported by various deep learning frameworks such as PyTorch, TensorFlow, and Keras. Once the YOLO model is converted to ONNX, it can be deployed across multiple platforms (cloud, mobile, edge devices) without being tied to a single deep learning framework.

    2. Improved inference performance

    ONNX models can be optimized with runtimes such as ONNX Runtime or TensorRT for better performance, which significantly speeds up inference, especially on hardware like NVIDIA GPUs and edge devices. This enables faster real-time predictions in object detection tasks.

    3. Easier deployment on diverse hardware

    A YOLO model converted to ONNX can be deployed on a wide range of hardware architectures (CPUs, GPUs, FPGAs, and custom AI accelerators) using ONNX-compatible runtimes. This flexibility is essential for deploying models in different environments, from data centers to embedded systems.

    4. Interoperability with other AI tools

    ONNX models integrate with a range of tools for optimization, quantization, and benchmarking. This helps reduce model size, improve execution efficiency, and enables compatibility with tools such as OpenVINO for Intel hardware.

    5. Scalability

    The ONNX format supports batching and parallelization, which is useful when scaling inference services across multiple devices or servers.

    6. Edge and mobile deployment

    Converting a YOLO model to ONNX lets you leverage frameworks such as TensorRT and ONNX Runtime for mobile to deploy the model efficiently on edge devices, phones, and IoT systems with optimized performance.

    7. Easier model optimization

    ONNX provides various tools that simplify model pruning, quantization, and other optimizations to reduce computational cost, which is crucial for deployment on resource-constrained devices.

    8. Standardized format

    Using ONNX helps unify the model lifecycle across development stages and frameworks, simplifying model conversion, validation, and versioning by maintaining a consistent, open standard format.
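
    With Ultralytics, the export itself is a single call on the trained model: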

model.export(format="onnx")  # creates 'best.onnx'

    Inference on a test image:

import cv2
from ultralytics import YOLO

image_path = r"Accident detection model.v2i.yolov11(Empty Filtered)\train\images\Accidents-online-video-cutter_com-_mp4-41_jpg.rf.549dce3991b2ae74ae65274cc32d8eff.jpg"  # Replace with your test image path
onnx_model = YOLO("best.onnx")
class_names = onnx_model.names

image = cv2.imread(image_path)

# Run inference
results = onnx_model(image)

# Extract predictions
for result in results:
    boxes = result.boxes  # get bounding boxes
    for box in boxes:
        x1, y1, x2, y2 = map(int, box.xyxy[0].tolist())  # Bounding box coordinates
        conf = box.conf.item()  # Confidence score
        class_id = int(box.cls.item())  # Class ID

        # Prepare the label text
        label = f"{class_names[class_id]}: {conf:.2f}"

        # Draw the bounding box (blue color, thickness of 2)
        cv2.rectangle(image, (x1, y1), (x2, y2), (255, 0, 0), 2)

        # Draw the label above the bounding box
        font = cv2.FONT_HERSHEY_SIMPLEX
        label_size, _ = cv2.getTextSize(label, font, 0.5, 1)
        label_ymin = max(y1, label_size[1] + 10)
        cv2.rectangle(image, (x1, label_ymin - label_size[1] - 10),
                     (x1 + label_size[0], label_ymin + 4), (255, 0, 0), -1)  # Draw label background
        cv2.putText(image, label, (x1, label_ymin), font, 0.5, (255, 255, 255), 1)  # Put label text

# Save the image
output_path = "output_image.jpg"
cv2.imwrite(output_path, image)
print(f"Saved inference result to {output_path}")

    Now we can compute and plot accuracy metrics for the developed model.

{    "image_name_1": {        "bboxes": [            {"bbox": [xmin1, ymin1, xmax1, ymax1], "class": "cls1"},            {"bbox": [xmin2, ymin2, xmax2, ymax2], "class": "cls2"}        ]    },    "image_name_2": {        "bboxes": [            {"bbox": [xmin3, ymin3, xmax3, ymax3], "class": "cls1"},            {"bbox": [xmin4, ymin4, xmax4, ymax4], "class": "cls2"},            {"bbox": [xmin5, ymin5, xmax5, ymax5], "class": "cls3"}        ]    }}

    Build JSON files in the format shown above for both the predictions and the ground truth in order to compute performance metrics such as precision, recall, F1 score, and a classification report. For the ground truth:

import os
import json
from PIL import Image

img_data_dict = {}
path = os.getcwd()
inputPar = os.path.join(path, r'Dataset')
folders = os.listdir(inputPar)

clsfile = os.path.join(path, 'classes.txt')
with open(clsfile) as tf:
    clsnames = [cl.strip() for cl in tf.readlines()]

for folder in folders:
    if folder in ["test"]:
        inputChild = os.path.join(inputPar, folder, "images")
        files = os.listdir(inputChild)
        for file in files:
            imgpath = os.path.join(inputChild, file)
            img_data_dict[file] = []
            fileName, ext = os.path.splitext(file)
            finput = os.path.join(inputPar, folder, "labels", fileName + '.txt')

            with open(finput) as tf:
                Yolodata = tf.readlines()
            if os.path.exists(imgpath):
                print("plotting >>", fileName + '.jpg')
                img = Image.open(imgpath)
                width, height = img.size
                for obj in Yolodata:
                    clsName = clsnames[int(obj.split(' ')[0])]
                    xnew = float(obj.split(' ')[1])
                    ynew = float(obj.split(' ')[2])
                    wnew = float(obj.split(' ')[3])
                    hnew = float(obj.split(' ')[4])
                    # box size
                    dw = 1 / width
                    dh = 1 / height
                    # coordinates
                    xmax = int(((2 * xnew) + wnew) / (2 * dw))
                    xmin = int(((2 * xnew) - wnew) / (2 * dw))
                    ymax = int(((2 * ynew) + hnew) / (2 * dh))
                    ymin = int(((2 * ynew) - hnew) / (2 * dh))
                    bbx_dict = {"Bbox": [xmin, ymin, xmax, ymax], "class": f"{clsName}"}
                    if file in img_data_dict:
                        img_data_dict[file].append(bbx_dict)
            else:
                print(f'{imgpath}  >> img not found:')

with open("img_gt.json", "w") as f:
    json.dump(img_data_dict, f, indent=4)

Code for the predictions:

import cv2
import json
import os
from ultralytics import YOLO

onnx_model = YOLO("best.onnx")
class_names = onnx_model.names
img_data_dict = {}
path = os.getcwd()
inputPar = os.path.join(path, r'Dataset')
folders = os.listdir(inputPar)

for folder in folders:
    if folder in ["test"]:
        inputChild = os.path.join(inputPar, folder, "images")
        files = os.listdir(inputChild)
        for file in files:
            img_data_dict[file] = []
            imgpath = os.path.join(inputChild, file)
            image = cv2.imread(imgpath)
            # Run inference
            results = onnx_model(image)
            # Extract predictions
            for result in results:
                boxes = result.boxes  # get bounding boxes
                for box in boxes:
                    x1, y1, x2, y2 = map(int, box.xyxy[0].tolist())  # Bounding box coordinates
                    conf = box.conf.item()  # Confidence score
                    class_id = int(box.cls.item())  # Class ID
                    clsName = class_names[class_id]
                    bbx_dict = {"Bbox": [x1, y1, x2, y2], "class": f"{clsName}"}
                    if file in img_data_dict:
                        img_data_dict[file].append(bbx_dict)

with open("img_pred.json", "w") as f:
    json.dump(img_data_dict, f, indent=4)

    One problem here is that the model's predictions are not always accurate, which can break the classification report (y_true and y_pred end up with different lengths), so balancing them is necessary. If needed, we can use a Python script to update both JSONs:

import json

def load_data(ground_truth_file, predictions_file):
    with open(ground_truth_file) as f:
        ground_truth = json.load(f)
    with open(predictions_file) as f:
        predictions = json.load(f)
    return ground_truth, predictions

# Load data
ground_truth_file = 'img_gt.json'
predictions_file = 'img_pred.json'
ground_truth, predictions = load_data(ground_truth_file, predictions_file)

# Make a copy of the data (dict.copy() is shallow, so the inner lists are shared;
# that is fine here because we only ever append to them)
ground_truth_upd, predictions_upd = ground_truth.copy(), predictions.copy()

# Update the lists so they have the same length. zip() pairs keys by insertion
# order; both JSONs were built from the same test folder, so the keys line up.
for gt_key, pred_key in zip(ground_truth, predictions):
    # Get the annotations for the current key
    gt_annotations = ground_truth[gt_key]
    pred_annotations = predictions[pred_key]
    if len(gt_annotations) != len(pred_annotations):
        gt_len = len(gt_annotations)
        pred_len = len(pred_annotations)
        # Add padding to the smaller list
        if gt_len < pred_len:
            # Pad ground truth with empty boxes and None class
            for _ in range(pred_len - gt_len):
                ground_truth_upd[gt_key].append({"Bbox": [0, 0, 0, 0], "class": None})
        elif pred_len < gt_len:
            # Pad predictions with empty boxes and None class
            for _ in range(gt_len - pred_len):
                predictions_upd[pred_key].append({"Bbox": [0, 0, 0, 0], "class": None})

# Save updated data
with open("img_gt_upd.json", "w") as f:
    json.dump(ground_truth_upd, f, indent=4)
with open("img_pred_upd.json", "w") as f:
    json.dump(predictions_upd, f, indent=4)

Code to evaluate the performance metrics:

import json
import numpy as np
import matplotlib.pyplot as plt
from sklearn.metrics import classification_report

def load_data(ground_truth_file, predictions_file):
    with open(ground_truth_file) as f:
        ground_truth = json.load(f)
    with open(predictions_file) as f:
        predictions = json.load(f)
    return ground_truth, predictions

def iou(box1, box2):
    # Calculate the intersection coordinates
    xi1 = max(box1[0], box2[0])
    yi1 = max(box1[1], box2[1])
    xi2 = min(box1[2], box2[2])
    yi2 = min(box1[3], box2[3])

    # Calculate the area of intersection
    intersection_area = max(0, xi2 - xi1) * max(0, yi2 - yi1)

    # Calculate the area of both boxes
    box1_area = (box1[2] - box1[0]) * (box1[3] - box1[1])
    box2_area = (box2[2] - box2[0]) * (box2[3] - box2[1])

    # Calculate the area of union
    union_area = box1_area + box2_area - intersection_area
    return intersection_area / union_area if union_area > 0 else 0

def calculate_metrics(ground_truth, predictions, iou_threshold=0.5):
    tp = 0  # True Positives
    fp = 0  # False Positives
    fn = 0  # False Negatives
    # For classification report
    y_true = []  # Ground truth classes
    y_pred = []  # Predicted classes
    for image in ground_truth:
        gt_boxes = ground_truth[image]
        pred_boxes = predictions[image]
        matched_gt = [False] * len(gt_boxes)  # Track which ground truths have been matched
        for pred in pred_boxes:
            pred_box = pred['Bbox']
            pred_class = pred['class']  # Append predicted class for report
            best_iou = 0
            best_gt_idx = -1
            for idx, gt in enumerate(gt_boxes):
                gt_box = gt['Bbox']
                gt_class = gt['class']
                # Only consider ground truths that match the predicted class
                if gt_class == pred_class and not matched_gt[idx]:
                    current_iou = iou(pred_box, gt_box)
                    if current_iou > best_iou:
                        best_iou = current_iou
                        best_gt_idx = idx
            # Check if the best IoU exceeds the threshold
            if best_iou >= iou_threshold and best_gt_idx != -1:
                tp += 1  # Count as true positive
                matched_gt[best_gt_idx] = True  # Mark this ground truth as matched
            else:
                fp += 1  # Count as false positive
        fn += matched_gt.count(False)  # Count unmatched ground truths as false negatives
        # Append ground truth classes for the report
        y_true.extend(gt['class'] for gt in gt_boxes)
        y_pred.extend(pred['class'] for pred in pred_boxes)
    # classification_report cannot mix string and integer labels, so map the
    # padded None entries to a placeholder class name instead of -1
    y_true = [label if label is not None else "none" for label in y_true]
    y_pred = [label if label is not None else "none" for label in y_pred]
    # Calculate precision, recall, F1 score
    precision = tp / (tp + fp) if (tp + fp) > 0 else 0
    recall = tp / (tp + fn) if (tp + fn) > 0 else 0
    f1_score = 2 * (precision * recall) / (precision + recall) if (precision + recall) > 0 else 0
    return tp, fp, fn, precision, recall, f1_score, y_true, y_pred

def plot_metrics(precision, recall, f1_score):
    metrics = [precision, recall, f1_score]
    labels = ['Precision', 'Recall', 'F1 Score']
    plt.figure(figsize=(8, 5))
    plt.bar(labels, metrics, color=['blue', 'orange', 'green'])
    plt.ylim(0, 1)
    plt.ylabel('Score')
    plt.title('Performance Metrics')
    plt.grid(axis='y')
    for i, v in enumerate(metrics):
        plt.text(i, v + 0.02, f"{v:.2f}", ha='center', va='bottom')
    plt.show()

def main(ground_truth_file, predictions_file):
    ground_truth, predictions = load_data(ground_truth_file, predictions_file)
    tp, fp, fn, precision, recall, f1_score, y_true, y_pred = calculate_metrics(ground_truth, predictions)
    print(f"True Positives: {tp}")
    print(f"False Positives: {fp}")
    print(f"False Negatives: {fn}")
    print(f"Precision: {precision:.2f}")
    print(f"Recall: {recall:.2f}")
    print(f"F1 Score: {f1_score:.2f}")
    # Generate classification report
    print("\nClassification Report:")
    print(f"Length of y_true: {len(y_true)}")
    print(f"Length of y_pred: {len(y_pred)}")
    print(classification_report(y_true, y_pred))
    # Plot metrics
    plot_metrics(precision, recall, f1_score)

# Example usage
ground_truth_file = 'img_gt_upd.json'
predictions_file = 'img_pred_upd.json'
main(ground_truth_file, predictions_file)

THE END !

That's all for this article; thanks for reading. Your likes, bookmarks, and comments motivate me to keep posting. If you have public accounts worth recommending, leave them in the comments so we can learn and improve together.

