
YOLO Automation Project Walkthrough (9): Navigation

Take the navigation feature we use all the time: at its core it is simply finding the shortest route between two points, i.e. pathfinding. We need some way to tell the program where it is going, and a path consists of a start point, a trajectory, and an end point.
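
As a rough sketch of that idea (the names and values here are illustrative, not from the project), a recorded path can be kept as a plain structure holding the start point, the ordered trajectory points, and the end point:

# Minimal sketch of a path record (illustrative values, not from the project).
path = {
    "start": (120, 340),                              # where recording began on the big map
    "track": [(125, 338), (131, 330), (140, 322)],    # intermediate waypoints, in order
    "end":   (152, 310),                              # where recording stopped
}

# Navigating is then just visiting the waypoints in order.
for waypoint in [path["start"], *path["track"], path["end"]]:
    print("move towards", waypoint)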

I. Recording the Trajectory

Looking at it in 2D, we can treat the whole zone as one big map and the area around our current position as a minimap. While we are somewhere on the big map, we need to work out from the minimap where exactly on the big map we currently are, so the first step is to locate our coordinates by matching the minimap against the big map.

1. The SIFT feature-matching algorithm

The function below uses SIFT (Scale-Invariant Feature Transform) to locate a small image (small_img) inside a big image (big_img). SIFT is a feature-detection algorithm from image processing and computer vision: it finds keypoints in an image that are largely invariant to changes in illumination, rotation, and scale.

    def find_img_all_sift(self, big_img, small_img, roi):
        """
        使用 SIFT 特征匹配在大图中找到小图的匹配位置
        :param big_img: 大图
        :param small_img: 小图
        :param roi: 感兴趣区域 (ROI)
        :return: 匹配结果列表
        """
        # 使用 SIFT 特征匹配
        sift = cv2.SIFT_create()
        kp1, des1 = sift.detectAndCompute(big_img, None)
        kp2, des2 = sift.detectAndCompute(small_img, None)

        # Make sure the descriptors are float32 (required by the matcher)
        des1 = des1.astype(np.float32)
        des2 = des2.astype(np.float32)

        bf = cv2.BFMatcher()
        matches = bf.knnMatch(des1, des2, k=2)
        good = []
        for m, n in matches:
            if m.distance < 0.75 * n.distance:
                good.append([m])
        if len(good) > 10:
            src_pts = np.float32([kp1[m[0].queryIdx].pt for m in good]).reshape(-1, 1, 2)
            dst_pts = np.float32([kp2[m[0].trainIdx].pt for m in good]).reshape(-1, 1, 2)
            M, mask = cv2.findHomography(dst_pts, src_pts, cv2.RANSAC, 5.0)  # map small-image points onto the big image
            h, w = small_img.shape[:2]
            pts = np.float32([[0, 0], [0, h - 1], [w - 1, h - 1], [w - 1, 0]]).reshape(-1, 1, 2)
            dst = cv2.perspectiveTransform(pts, M)
            return [{"result": tuple(map(int, dst[0][0])), "rectangle": [tuple(map(int, p[0])) for p in dst]}]
        return []

In short: it finds the small image's coordinates inside the big image.

Usage example
import cv2
import numpy as np

def find_img_all_sift(big_img, small_img, roi=None):
    """
    使用 SIFT 特征匹配在大图中找到小图的匹配位置
    :param big_img: 大图
    :param small_img: 小图
    :param roi: 感兴趣区域 (ROI),默认为None表示整个图像
    :return: 匹配结果列表
    """
    # 使用 SIFT 特征匹配
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(big_img, None)
    kp2, des2 = sift.detectAndCompute(small_img, None)

    # Make sure the descriptors are float32 (required by the matcher)
    des1 = des1.astype(np.float32)
    des2 = des2.astype(np.float32)

    bf = cv2.BFMatcher()
    matches = bf.knnMatch(des1, des2, k=2)
    good = []
    for m, n in matches:
        if m.distance < 0.75 * n.distance:
            good.append([m])

    if len(good) > 10:
        src_pts = np.float32([kp1[m[0].queryIdx].pt for m in good]).reshape(-1, 1, 2)
        dst_pts = np.float32([kp2[m[0].trainIdx].pt for m in good]).reshape(-1, 1, 2)
        M, mask = cv2.findHomography(dst_pts, src_pts, cv2.RANSAC, 5.0)
        h, w = small_img.shape[:2]
        pts = np.float32([[0, 0], [0, h - 1], [w - 1, h - 1], [w - 1, 0]]).reshape(-1, 1, 2)
        dst = cv2.perspectiveTransform(pts, M)
        rectangle = [tuple(map(int, p[0])) for p in dst]
        center_x = int(sum(p[0] for p in rectangle) / 4)
        center_y = int(sum(p[1] for p in rectangle) / 4)
        return [{"result": tuple(map(int, dst[0][0])), "rectangle": rectangle, "center": (center_x, center_y)}]
    return []

# Load the images
big_image_path = 'datu.png'
small_image_path = 'xiaotu.png'

big_image = cv2.imread(big_image_path, cv2.IMREAD_GRAYSCALE)
small_image = cv2.imread(small_image_path, cv2.IMREAD_GRAYSCALE)

# Make sure the images loaded successfully
if big_image is None:
    raise FileNotFoundError(f"Could not load the big image: {big_image_path}")
if small_image is None:
    raise FileNotFoundError(f"Could not load the small image: {small_image_path}")

# Call the function
results = find_img_all_sift(big_image, small_image, None)
for result in results:
    print(f"match point {result['result']}   corner points {result['rectangle']}   center point {result['center']}")

Drawing the small image's location on the big image

# Draw the bounding box and center point on the original image
big_image_color = cv2.cvtColor(big_image, cv2.COLOR_GRAY2BGR)
for result in results:
    rectangle = result['rectangle']
    center = result['center']
    # draw the bounding box
    cv2.polylines(big_image_color, [np.array(rectangle, dtype=np.int32)], True, (0, 255, 0), 2)
    # draw the center as a red dot
    cv2.circle(big_image_color, center, 5, (0, 0, 255), -1)

# Show the result
cv2.imshow('Matched Locations', big_image_color)
cv2.waitKey(0)
cv2.destroyAllWindows()

Approach

First, let's work through in detail how to implement pathfinding.

Pathfinding, as the name suggests, means finding a correct route to the target. But how do we let the machine know where we want to go? By continually feeding it new coordinates and having it move towards them.

We already worked out above how to obtain those coordinates: by capturing a screenshot and matching the small image's features against the big image.

The small image can be thought of as the in-game minimap. As everyone knows, most games keep the minimap in a round, dial-style widget pinned to a fixed area of the screen. When the character moves, its position on the minimap shifts; equivalently, we can pretend the character stays still while the map gets dragged underneath it, so what actually changes is the minimap's position within the big map.

Above we obtained the minimap's position and center point inside the big map. By repeatedly capturing the minimap region on screen and tracking how those coordinates change, we can record the character's trajectory across the big map.
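
A rough sketch of that recording loop (the helper names grab_minimap and match_fn are placeholders; find_img_all_sift above and the screenshot() function introduced in the next step are the obvious candidates to plug in):

import time

def record_track(big_image, grab_minimap, match_fn, interval=2.0, max_points=50):
    """Sketch: repeatedly match the captured minimap against the big map and
    keep the center point whenever it changes."""
    track = []
    for _ in range(max_points):
        small = grab_minimap()                    # e.g. a wrapper around screenshot()
        results = match_fn(big_image, small)      # e.g. find_img_all_sift from above
        if results:
            center = results[0]["center"]
            if not track or track[-1] != center:  # only record when the position moved
                track.append(center)
        time.sleep(interval)
    return track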

2. Image capture

The minimap's contents change constantly, and we cannot possibly prepare a small image for every area in advance. What we really want is to grab the current minimap in real time and decide from that, so the method below captures a specified region of the screen as the small image used for matching.

import ctypes
import cv2
import numpy as np
import win32gui
import win32ui


def screenshot(hwnd, left, top, right, bottom):
    # Get the window's device context
    hwndDC = win32gui.GetWindowDC(hwnd)
    mfcDC = win32ui.CreateDCFromHandle(hwndDC)
    saveDC = mfcDC.CreateCompatibleDC()

    # Size of the capture region
    rect = win32gui.GetWindowRect(hwnd)
    width = right - left
    height = bottom - top

    # Create a compatible bitmap
    saveBitMap = win32ui.CreateBitmap()
    saveBitMap.CreateCompatibleBitmap(mfcDC, width, height)
    saveDC.SelectObject(saveBitMap)

    # Set the clipping/offset so only the requested region is captured
    saveDC.SetWindowExt((width, height))
    saveDC.SetViewportExt((width, height))
    saveDC.SetWindowOrg((left, top))
    saveDC.SetViewportOrg((0, 0))

    # Take the screenshot (PrintWindow renders the window into our DC)
    result = ctypes.windll.user32.PrintWindow(hwnd, saveDC.GetSafeHdc(), 0)
    if result == 0:
        print("PrintWindow failed")
        return None

    bmpinfo = saveBitMap.GetInfo()
    bmpstr = saveBitMap.GetBitmapBits(True)

    # Convert the raw bitmap bytes to an OpenCV image
    im_cv = np.frombuffer(bmpstr, dtype='uint8')
    im_cv = im_cv.reshape((height, width, 4))

    # Release GDI resources
    win32gui.DeleteObject(saveBitMap.GetHandle())
    saveDC.DeleteDC()
    mfcDC.DeleteDC()
    win32gui.ReleaseDC(hwnd, hwndDC)

    return im_cv[:, :, :3]  # drop the alpha channel


# Definitions
hwnd = 65800  # window handle (the desktop in this example)
map_region = (100, 100, 200, 200)  # left, top, right, bottom
left, top, right, bottom = map_region
small_img = screenshot(hwnd, left, top, right, bottom)

# Save the screenshot
output_path = "screenshot.png"
cv2.imwrite(output_path, small_img)
print(f"Screenshot saved to: {output_path}")

3. Determining the capture coordinates

The captured image is a rectangular region, so we first need its center point; that center is treated as our own current position.
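
A minimal sketch of that center calculation (note that the helper below goes the other way, from a center point to a capture region):

def region_center(left, top, right, bottom):
    """Sketch: midpoint of a capture rectangle, treated as our own position."""
    return (left + right) // 2, (top + bottom) // 2

# e.g. region_center(100, 100, 200, 200) -> (150, 150)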

import time
import pyautogui


def get_screenshot_area(center_x, center_y, width, height):
    """
    根据中心点和宽高计算截图区域
    :param center_x: 中心点的 x 坐标
    :param center_y: 中心点的 y 坐标
    :param width: 截图区域的宽度
    :param height: 截图区域的高度
    :return: (left, top, right, bottom)
    """
    half_width = width // 2
    half_height = height // 2
    left = center_x - half_width
    top = center_y - half_height
    right = center_x + half_width
    bottom = center_y + half_height
    return left, top, right, bottom




def main():
    # Wait 2 seconds
    time.sleep(2)

    # Get the current mouse position
    mouse_x, mouse_y = pyautogui.position()
    print(f"Mouse position: ({mouse_x}, {mouse_y})")

    # Compute the capture region (this sets the size of the capture rectangle)
    width = 80  # adjust the width as needed
    height = 80  # adjust the height as needed
    left, top, right, bottom = get_screenshot_area(mouse_x, mouse_y, width, height)
    print(f"Capture region: ({left}, {top}, {right}, {bottom})")



if __name__ == "__main__":
    main()

4. Screenshot capture plus feature matching

Combining the code above: whenever the minimap of our current area changes position, every matched coordinate on the big map is recorded.

import time

import cv2
import numpy as np

def find_img_all_sift(big_img, small_img, roi=None):
    """
    使用 SIFT 特征匹配在大图中找到小图的匹配位置
    :param big_img: 大图
    :param small_img: 小图
    :param roi: 感兴趣区域 (ROI),默认为None表示整个图像
    :return: 匹配结果列表
    """
    # 使用 SIFT 特征匹配
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(big_img, None)
    kp2, des2 = sift.detectAndCompute(small_img, None)

    # 确保描述符类型为 float32
    des1 = des1.astype(np.float32)
    des2 = des2.astype(np.float32)

    bf = cv2.BFMatcher()
    matches = bf.knnMatch(des1, des2, k=2)
    good = []
    for m, n in matches:
        if m.distance < 0.75 * n.distance:
            good.append([m])

    if len(good) > 10:
        src_pts = np.float32([kp1[m[0].queryIdx].pt for m in good]).reshape(-1, 1, 2)
        dst_pts = np.float32([kp2[m[0].trainIdx].pt for m in good]).reshape(-1, 1, 2)
        M, mask = cv2.findHomography(dst_pts, src_pts, cv2.RANSAC, 5.0)
        h, w = small_img.shape[:2]
        pts = np.float32([[0, 0], [0, h - 1], [w - 1, h - 1], [w - 1, 0]]).reshape(-1, 1, 2)
        dst = cv2.perspectiveTransform(pts, M)
        rectangle = [tuple(map(int, p[0])) for p in dst]
        center_x = int(sum(p[0] for p in rectangle) / 4)
        center_y = int(sum(p[1] for p in rectangle) / 4)
        return [{"result": tuple(map(int, dst[0][0])), "rectangle": rectangle, "center": (center_x, center_y)}]
    return []



import ctypes
import cv2
import numpy as np
import win32gui
import win32ui


def screenshot(hwnd, left, top, right, bottom):
    # 获取窗口设备上下文
    hwndDC = win32gui.GetWindowDC(hwnd)
    mfcDC = win32ui.CreateDCFromHandle(hwndDC)
    saveDC = mfcDC.CreateCompatibleDC()

    # 获取窗口大小
    rect = win32gui.GetWindowRect(hwnd)
    width = right - left
    height = bottom - top

    # 创建位图
    saveBitMap = win32ui.CreateBitmap()
    saveBitMap.CreateCompatibleBitmap(mfcDC, width, height)
    saveDC.SelectObject(saveBitMap)

    # 设置剪切区域
    saveDC.SetWindowExt((width, height))
    saveDC.SetViewportExt((width, height))
    saveDC.SetWindowOrg((left, top))
    saveDC.SetViewportOrg((0, 0))

    # 截图
    result = ctypes.windll.user32.PrintWindow(hwnd, saveDC.GetSafeHdc(), 0)
    if result == 0:
        print("PrintWindow failed")
        return None

    bmpinfo = saveBitMap.GetInfo()
    bmpstr = saveBitMap.GetBitmapBits(True)

    # 转换为 OpenCV 图像
    im_cv = np.frombuffer(bmpstr, dtype='uint8')
    im_cv = im_cv.reshape((height, width, 4))

    # 清理资源
    win32gui.DeleteObject(saveBitMap.GetHandle())
    saveDC.DeleteDC()
    mfcDC.DeleteDC()
    win32gui.ReleaseDC(hwnd, hwndDC)

    return im_cv[:, :, :3]  # 去掉 alpha 通道




# Define the capture handle and region
hwnd = 65800  # window handle (the desktop in this example)
map_region = (100, 100, 200, 200)  # left, top, right, bottom
left, top, right, bottom = map_region
# Path to the big map
datu = 'datu.png'
# Load the map as a NumPy array (grayscale)
big_image = cv2.imread(datu, cv2.IMREAD_GRAYSCALE)






while True:
    time.sleep(2)

    # Capture the minimap (already a NumPy array, no conversion needed)
    small_image = screenshot(hwnd, left, top, right, bottom)

    # Match the minimap against the big map
    results = find_img_all_sift(big_image, small_image, None)

    # Print the results
    for result in results:
        print(f"match point {result['result']}   corner points {result['rectangle']}   center point {result['center']}")

5. Checking the recording window

While recording a path we definitely do not want to keep drawing after switching to another window, which would introduce a lot of noise. What we want is to pause recording as soon as we leave the specified window, so let's add a check for that.

import time

import cv2
import numpy as np
import win32api


def find_img_all_sift(big_img, small_img, roi=None):
    """
    使用 SIFT 特征匹配在大图中找到小图的匹配位置
    :param big_img: 大图
    :param small_img: 小图
    :param roi: 感兴趣区域 (ROI),默认为None表示整个图像
    :return: 匹配结果列表
    """
    # 使用 SIFT 特征匹配
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(big_img, None)
    kp2, des2 = sift.detectAndCompute(small_img, None)

    # 确保描述符类型为 float32
    des1 = des1.astype(np.float32)
    des2 = des2.astype(np.float32)

    bf = cv2.BFMatcher()
    matches = bf.knnMatch(des1, des2, k=2)
    good = []
    for m, n in matches:
        if m.distance < 0.75 * n.distance:
            good.append([m])

    if len(good) > 10:
        src_pts = np.float32([kp1[m[0].queryIdx].pt for m in good]).reshape(-1, 1, 2)
        dst_pts = np.float32([kp2[m[0].trainIdx].pt for m in good]).reshape(-1, 1, 2)
        M, mask = cv2.findHomography(dst_pts, src_pts, cv2.RANSAC, 5.0)
        h, w = small_img.shape[:2]
        pts = np.float32([[0, 0], [0, h - 1], [w - 1, h - 1], [w - 1, 0]]).reshape(-1, 1, 2)
        dst = cv2.perspectiveTransform(pts, M)
        rectangle = [tuple(map(int, p[0])) for p in dst]
        center_x = int(sum(p[0] for p in rectangle) / 4)
        center_y = int(sum(p[1] for p in rectangle) / 4)
        return [{"result": tuple(map(int, dst[0][0])), "rectangle": rectangle, "center": (center_x, center_y)}]
    return []



import ctypes
import cv2
import numpy as np
import win32gui
import win32ui


def screenshot(hwnd, left, top, right, bottom):
    # 获取窗口设备上下文
    hwndDC = win32gui.GetWindowDC(hwnd)
    mfcDC = win32ui.CreateDCFromHandle(hwndDC)
    saveDC = mfcDC.CreateCompatibleDC()

    # 获取窗口大小
    rect = win32gui.GetWindowRect(hwnd)
    width = right - left
    height = bottom - top

    # 创建位图
    saveBitMap = win32ui.CreateBitmap()
    saveBitMap.CreateCompatibleBitmap(mfcDC, width, height)
    saveDC.SelectObject(saveBitMap)

    # 设置剪切区域
    saveDC.SetWindowExt((width, height))
    saveDC.SetViewportExt((width, height))
    saveDC.SetWindowOrg((left, top))
    saveDC.SetViewportOrg((0, 0))

    # 截图
    result = ctypes.windll.user32.PrintWindow(hwnd, saveDC.GetSafeHdc(), 0)
    if result == 0:
        print("PrintWindow failed")
        return None

    bmpinfo = saveBitMap.GetInfo()
    bmpstr = saveBitMap.GetBitmapBits(True)

    # 转换为 OpenCV 图像
    im_cv = np.frombuffer(bmpstr, dtype='uint8')
    im_cv = im_cv.reshape((height, width, 4))

    # 清理资源
    win32gui.DeleteObject(saveBitMap.GetHandle())
    saveDC.DeleteDC()
    mfcDC.DeleteDC()
    win32gui.ReleaseDC(hwnd, hwndDC)

    return im_cv[:, :, :3]  # 去掉 alpha 通道







# Check which window is under the mouse
def get_window_handle_at_mouse_position():
    # Alternative: use the currently active (foreground) window instead
    # active_hwnd = ctypes.windll.user32.GetForegroundWindow()
    # return active_hwnd
    point = win32api.GetCursorPos()
    hwnd = win32gui.WindowFromPoint(point)
    return hwnd



def main():


    hwnd = 65800   # window handle

    map_region = (100, 100, 200, 200) # capture rectangle
    left, top, right, bottom = map_region

    # Define and load the big map
    datu = 'datu.png'
    # Load the map as a NumPy array (grayscale)
    big_image = cv2.imread(datu, cv2.IMREAD_GRAYSCALE)



    # Loop: capture the minimap and match features
    while True:
        # If the mouse is not over the specified window, pause recording
        if get_window_handle_at_mouse_position() != hwnd:
            print("The mouse has left the target window!")
            time.sleep(0.5)
            continue



        time.sleep(2)

        # 获取截取的小地图(获取的已经是NumPy 数组不需要再转换)
        small_image = screenshot(hwnd, left, top, right, bottom)

        # #匹配大小地图的特征
        results = find_img_all_sift(big_image, small_image, None)

        #变量输出参数
        for result in results:
            print(f"匹配点坐标 {result['result']}   匹配点4个顶点坐标 {result['rectangle']}   中心点坐标 {result['center']}")





if __name__ == '__main__':
    main()

6. Displaying the navigation trajectory

import time
import traceback
import cv2
import numpy as np
import win32api
import win32gui
import win32ui
import ctypes

def find_img_all_sift(big_img, small_img):
    """
    使用 SIFT 特征匹配在大图中找到小图的匹配位置
    :param big_img: 大图
    :param small_img: 小图
    :return: 匹配结果列表
    """
    # 使用 SIFT 特征匹配
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(big_img, None)
    kp2, des2 = sift.detectAndCompute(small_img, None)

    # 确保描述符类型为 float32
    des1 = des1.astype(np.float32)
    des2 = des2.astype(np.float32)

    bf = cv2.BFMatcher()
    matches = bf.knnMatch(des1, des2, k=2)
    good = []
    for m, n in matches:
        if m.distance < 0.75 * n.distance:
            good.append([m])

    if len(good) > 10:
        src_pts = np.float32([kp1[m[0].queryIdx].pt for m in good]).reshape(-1, 1, 2)
        dst_pts = np.float32([kp2[m[0].trainIdx].pt for m in good]).reshape(-1, 1, 2)
        M, mask = cv2.findHomography(dst_pts, src_pts, cv2.RANSAC, 5.0)
        h, w = small_img.shape[:2]
        pts = np.float32([[0, 0], [0, h - 1], [w - 1, h - 1], [w - 1, 0]]).reshape(-1, 1, 2)
        dst = cv2.perspectiveTransform(pts, M)
        rectangle = [tuple(map(int, p[0])) for p in dst]
        center_x = int(sum(p[0] for p in rectangle) / 4)
        center_y = int(sum(p[1] for p in rectangle) / 4)
        return [{"result": tuple(map(int, dst[0][0])), "rectangle": rectangle, "center": (center_x, center_y)}]
    return []

def screenshot(hwnd, left, top, right, bottom):
    # 获取窗口设备上下文
    hwndDC = win32gui.GetWindowDC(hwnd)
    mfcDC = win32ui.CreateDCFromHandle(hwndDC)
    saveDC = mfcDC.CreateCompatibleDC()

    # 获取窗口大小
    rect = win32gui.GetWindowRect(hwnd)
    width = right - left
    height = bottom - top

    # 创建位图
    saveBitMap = win32ui.CreateBitmap()
    saveBitMap.CreateCompatibleBitmap(mfcDC, width, height)
    saveDC.SelectObject(saveBitMap)

    # 设置剪切区域
    saveDC.SetWindowExt((width, height))
    saveDC.SetViewportExt((width, height))
    saveDC.SetWindowOrg((left, top))
    saveDC.SetViewportOrg((0, 0))

    # 截图
    result = ctypes.windll.user32.PrintWindow(hwnd, saveDC.GetSafeHdc(), 0)
    if result == 0:
        print("PrintWindow failed")
        return None

    bmpinfo = saveBitMap.GetInfo()
    bmpstr = saveBitMap.GetBitmapBits(True)

    # 转换为 OpenCV 图像
    im_cv = np.frombuffer(bmpstr, dtype='uint8')
    im_cv = im_cv.reshape((height, width, 4))

    # 清理资源
    win32gui.DeleteObject(saveBitMap.GetHandle())
    saveDC.DeleteDC()
    mfcDC.DeleteDC()
    win32gui.ReleaseDC(hwnd, hwndDC)

    return im_cv[:, :, :3]  # 去掉 alpha 通道

def get_window_handle_at_mouse_position():
    point = win32api.GetCursorPos()
    hwnd = win32gui.WindowFromPoint(point)
    return hwnd

def main():
    hwnd = 65800  # 窗口句柄
    map_region = (100, 100, 200, 200)  # 截图矩形
    left, top, right, bottom = map_region

    # Define and load the big map
    datu = 'datu.png'
    big_image = cv2.imread(datu, cv2.IMREAD_GRAYSCALE)

    big_img = cv2.imdecode(np.fromfile(file=datu, dtype=np.uint8), cv2.IMREAD_COLOR)  # color copy used for drawing
    big_height, big_width, _ = big_img.shape
    big_img_yt = big_img.copy()

    # Create a window to display the big map and set its size
    cv2.namedWindow('Matched Image', cv2.WINDOW_NORMAL)
    cv2.resizeWindow('Matched Image', big_width, big_height)

    while True:
        # 检查鼠标所在的窗口是否为指定的窗口id, 不是就停止
        if get_window_handle_at_mouse_position() != hwnd:
            print("鼠标离开了指定窗口!")
            time.sleep(0.5)
            continue

        time.sleep(2)

        # 获取截取的小地图(获取的已经是NumPy 数组不需要再转换)
        small_image = screenshot(hwnd, left, top, right, bottom)

        if small_image is not None:
            # 匹配大小地图的特征
            results = find_img_all_sift(big_image, small_image)

            print(results)
            # Mark the matched point on the big map
            for result in results:
                result_post = [result["center"][0], result["center"][1]]
                cv2.circle(big_img_yt, result_post, 2, (255, 0, 0), -1)

            # Show the annotated image
            cv2.imshow('Matched Image', big_img_yt)

            # Press 'q' to quit
            if cv2.waitKey(1) & 0xFF == ord('q'):
                break

    # Clean up
    cv2.destroyAllWindows()

if __name__ == '__main__':
    main()

Watching the live display above, we notice that when we drag the map downward, the recorded coordinates move upward, and when we drag it upward, the recorded coordinates move downward. The drawn dot can be treated as the character: when the map is dragged, it is really the character that is moving.
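
A small sketch of that relationship, assuming two consecutive center points returned by the matcher:

def movement_vector(prev_center, curr_center):
    """Sketch: the character's displacement on the big map between two samples.
    Because dragging the view downward shifts the matched region upward, the
    recorded displacement is opposite to the drag direction."""
    dx = curr_center[0] - prev_center[0]
    dy = curr_center[1] - prev_center[1]
    return dx, dy

# e.g. dragging the map downward gives a negative dy here (the dot moves up).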

7. Adding a start/stop switch

import time
import traceback
import cv2
import keyboard
import numpy as np
import win32api
import win32gui
import win32ui
import ctypes
import threading

def find_img_all_sift(big_img, small_img):
    """
    使用 SIFT 特征匹配在大图中找到小图的匹配位置
    :param big_img: 大图
    :param small_img: 小图
    :return: 匹配结果列表
    """
    # 使用 SIFT 特征匹配
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(big_img, None)
    kp2, des2 = sift.detectAndCompute(small_img, None)

    # 确保描述符类型为 float32
    des1 = des1.astype(np.float32)
    des2 = des2.astype(np.float32)

    bf = cv2.BFMatcher()
    matches = bf.knnMatch(des1, des2, k=2)
    good = []
    for m, n in matches:
        if m.distance < 0.75 * n.distance:
            good.append([m])

    if len(good) > 10:
        src_pts = np.float32([kp1[m[0].queryIdx].pt for m in good]).reshape(-1, 1, 2)
        dst_pts = np.float32([kp2[m[0].trainIdx].pt for m in good]).reshape(-1, 1, 2)
        M, mask = cv2.findHomography(dst_pts, src_pts, cv2.RANSAC, 5.0)
        h, w = small_img.shape[:2]
        pts = np.float32([[0, 0], [0, h - 1], [w - 1, h - 1], [w - 1, 0]]).reshape(-1, 1, 2)
        dst = cv2.perspectiveTransform(pts, M)
        rectangle = [tuple(map(int, p[0])) for p in dst]
        center_x = int(sum(p[0] for p in rectangle) / 4)
        center_y = int(sum(p[1] for p in rectangle) / 4)
        return [{"result": tuple(map(int, dst[0][0])), "rectangle": rectangle, "center": (center_x, center_y)}]
    return []

def screenshot(hwnd, left, top, right, bottom):
    # 获取窗口设备上下文
    hwndDC = win32gui.GetWindowDC(hwnd)
    mfcDC = win32ui.CreateDCFromHandle(hwndDC)
    saveDC = mfcDC.CreateCompatibleDC()

    # 获取窗口大小
    rect = win32gui.GetWindowRect(hwnd)
    width = right - left
    height = bottom - top

    # 创建位图
    saveBitMap = win32ui.CreateBitmap()
    saveBitMap.CreateCompatibleBitmap(mfcDC, width, height)
    saveDC.SelectObject(saveBitMap)

    # 设置剪切区域
    saveDC.SetWindowExt((width, height))
    saveDC.SetViewportExt((width, height))
    saveDC.SetWindowOrg((left, top))
    saveDC.SetViewportOrg((0, 0))

    # 截图
    result = ctypes.windll.user32.PrintWindow(hwnd, saveDC.GetSafeHdc(), 0)
    if result == 0:
        print("PrintWindow failed")
        return None

    bmpinfo = saveBitMap.GetInfo()
    bmpstr = saveBitMap.GetBitmapBits(True)

    # 转换为 OpenCV 图像
    im_cv = np.frombuffer(bmpstr, dtype='uint8')
    im_cv = im_cv.reshape((height, width, 4))

    # 清理资源
    win32gui.DeleteObject(saveBitMap.GetHandle())
    saveDC.DeleteDC()
    mfcDC.DeleteDC()
    win32gui.ReleaseDC(hwnd, hwndDC)

    return im_cv[:, :, :3]  # 去掉 alpha 通道

def get_window_handle_at_mouse_position():
    point = win32api.GetCursorPos()
    hwnd = win32gui.WindowFromPoint(point)
    return hwnd

def main():
    hwnd = 65800  # 窗口句柄
    map_region = (100, 100, 200, 200)  # 截图矩形
    left, top, right, bottom = map_region

    # 定义大地图及格式化
    datu = 'datu.png'
    big_image = cv2.imread(datu, cv2.IMREAD_GRAYSCALE)

    big_img = cv2.imdecode(np.fromfile(file=datu, dtype=np.uint8), cv2.IMREAD_COLOR)  # 加载大图
    big_height, big_width, _ = big_img.shape
    big_img_yt = big_img.copy()

    # 创建一个窗口来显示大图,并设置窗口大小
    cv2.namedWindow('Matched Image', cv2.WINDOW_NORMAL)
    cv2.resizeWindow('Matched Image', big_width, big_height)

    # Use a threading.Event to manage the recording state
    recording_event = threading.Event()

    def start_recording():
        recording_event.set()
        print("Recording started...")

    def stop_recording():
        recording_event.clear()
        print("Recording stopped...")

    # Register the hotkeys
    keyboard.add_hotkey('f7', start_recording)
    keyboard.add_hotkey('f8', stop_recording)

    while True:
        # 检查鼠标所在的窗口是否为指定的窗口id, 不是就停止
        if get_window_handle_at_mouse_position() != hwnd:
            print("鼠标离开了指定窗口!")
            time.sleep(0.5)
            continue

        time.sleep(2)

        # Only record while the switch is on
        if recording_event.is_set():
            print(111)

            # 获取截取的小地图(获取的已经是NumPy 数组不需要再转换)
            small_image = screenshot(hwnd, left, top, right, bottom)

            if small_image is not None:
                # 匹配大小地图的特征
                results = find_img_all_sift(big_image, small_image)

                print(results)
                # 在大图上标记匹配点
                for result in results:
                    result_post = [result["center"][0], result["center"][1]]
                    cv2.circle(big_img_yt, result_post, 2, (255, 0, 0), -1)

                # 显示标记后的图像
                cv2.imshow('Matched Image', big_img_yt)

                # 按 'q' 键退出
                if cv2.waitKey(1) & 0xFF == ord('q'):
                    break

    # 释放资源
    cv2.destroyAllWindows()

if __name__ == '__main__':
    main()

8. Removing the live display and saving the trajectory image

import time
import traceback
import cv2
import keyboard
import numpy as np
import win32api
import win32gui
import win32ui
import ctypes
import threading

def find_img_all_sift(big_img, small_img):
    """
    使用 SIFT 特征匹配在大图中找到小图的匹配位置
    :param big_img: 大图
    :param small_img: 小图
    :return: 匹配结果列表
    """
    # 使用 SIFT 特征匹配
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(big_img, None)
    kp2, des2 = sift.detectAndCompute(small_img, None)

    # 确保描述符类型为 float32
    des1 = des1.astype(np.float32)
    des2 = des2.astype(np.float32)

    bf = cv2.BFMatcher()
    matches = bf.knnMatch(des1, des2, k=2)
    good = []
    for m, n in matches:
        if m.distance < 0.75 * n.distance:
            good.append([m])

    if len(good) > 10:
        src_pts = np.float32([kp1[m[0].queryIdx].pt for m in good]).reshape(-1, 1, 2)
        dst_pts = np.float32([kp2[m[0].trainIdx].pt for m in good]).reshape(-1, 1, 2)
        M, mask = cv2.findHomography(dst_pts, src_pts, cv2.RANSAC, 5.0)
        h, w = small_img.shape[:2]
        pts = np.float32([[0, 0], [0, h - 1], [w - 1, h - 1], [w - 1, 0]]).reshape(-1, 1, 2)
        dst = cv2.perspectiveTransform(pts, M)
        rectangle = [tuple(map(int, p[0])) for p in dst]
        center_x = int(sum(p[0] for p in rectangle) / 4)
        center_y = int(sum(p[1] for p in rectangle) / 4)
        return [{"result": tuple(map(int, dst[0][0])), "rectangle": rectangle, "center": (center_x, center_y)}]
    return []

def screenshot(hwnd, left, top, right, bottom):
    # 获取窗口设备上下文
    hwndDC = win32gui.GetWindowDC(hwnd)
    mfcDC = win32ui.CreateDCFromHandle(hwndDC)
    saveDC = mfcDC.CreateCompatibleDC()

    # 获取窗口大小
    rect = win32gui.GetWindowRect(hwnd)
    width = right - left
    height = bottom - top

    # 创建位图
    saveBitMap = win32ui.CreateBitmap()
    saveBitMap.CreateCompatibleBitmap(mfcDC, width, height)
    saveDC.SelectObject(saveBitMap)

    # 设置剪切区域
    saveDC.SetWindowExt((width, height))
    saveDC.SetViewportExt((width, height))
    saveDC.SetWindowOrg((left, top))
    saveDC.SetViewportOrg((0, 0))

    # 截图
    result = ctypes.windll.user32.PrintWindow(hwnd, saveDC.GetSafeHdc(), 0)
    if result == 0:
        print("PrintWindow failed")
        return None

    bmpinfo = saveBitMap.GetInfo()
    bmpstr = saveBitMap.GetBitmapBits(True)

    # 转换为 OpenCV 图像
    im_cv = np.frombuffer(bmpstr, dtype='uint8')
    im_cv = im_cv.reshape((height, width, 4))

    # 清理资源
    win32gui.DeleteObject(saveBitMap.GetHandle())
    saveDC.DeleteDC()
    mfcDC.DeleteDC()
    win32gui.ReleaseDC(hwnd, hwndDC)

    return im_cv[:, :, :3]  # 去掉 alpha 通道

def get_window_handle_at_mouse_position():
    point = win32api.GetCursorPos()
    hwnd = win32gui.WindowFromPoint(point)
    return hwnd

def main():
    hwnd = 65800  # 窗口句柄
    map_region = (100, 100, 200, 200)  # 截图矩形
    left, top, right, bottom = map_region

    # 定义大地图及格式化
    datu = 'datu.png'
    big_image = cv2.imread(datu, cv2.IMREAD_GRAYSCALE)

    big_img = cv2.imdecode(np.fromfile(file=datu, dtype=np.uint8), cv2.IMREAD_COLOR)  # 加载大图
    big_height, big_width, _ = big_img.shape
    big_img_yt = big_img.copy()

    # Use a threading.Event to manage the recording state
    recording_event = threading.Event()

    def start_recording():
        recording_event.set()
        print("Recording started...")

    def stop_recording():
        recording_event.clear()
        print("Recording stopped...")
        # Save the annotated image
        cv2.imwrite('111.png', big_img_yt)
        print("Image saved as 111.png")

    # Register the hotkeys
    keyboard.add_hotkey('f7', start_recording)
    keyboard.add_hotkey('f8', stop_recording)

    while True:
        # 检查鼠标所在的窗口是否为指定的窗口id, 不是就停止
        if get_window_handle_at_mouse_position() != hwnd:
            print("鼠标离开了指定窗口!")
            time.sleep(0.5)
            continue

        time.sleep(2)

        # 基于按键判断
        if recording_event.is_set():
            print(111)

            # 获取截取的小地图(获取的已经是NumPy 数组不需要再转换)
            small_image = screenshot(hwnd, left, top, right, bottom)

            if small_image is not None:
                # 匹配大小地图的特征
                results = find_img_all_sift(big_image, small_image)

                print(results)
                # 在大图上标记匹配点
                for result in results:
                    result_post = [result["center"][0], result["center"][1]]
                    cv2.circle(big_img_yt, result_post, 2, (255, 0, 0), -1)

    # 释放资源
    cv2.destroyAllWindows()

if __name__ == '__main__':
    main()

9. Looking up coordinates from the trajectory

One more change is needed to the marking logic: we should not mark the map on every loop iteration, because the position may not have changed since the last pass. So a mark is only added when the coordinates actually change. In addition, the very first mark is drawn in green, the final mark made when recording stops is drawn in red, and every mark in between is drawn in blue.

The red and green marks should also stay on top and never be covered by other marks.
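
One way to honour that "stay on top" rule is to control the drawing order: paint the blue waypoints first and the green/red endpoints last. A sketch, assuming the marks list of (center, color) pairs used in the script below:

import cv2

def draw_marks(canvas, marks):
    """Sketch: draw blue track points first, then the green start and red end,
    so the endpoints are never covered by other dots."""
    blue = (255, 0, 0)
    endpoint_colors = {(0, 255, 0), (0, 0, 255)}   # green start, red end
    for center, color in marks:
        if color == blue:
            cv2.circle(canvas, center, 2, color, -1)
    for center, color in marks:
        if color in endpoint_colors:
            cv2.circle(canvas, center, 3, color, -1)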

import time
import traceback
import cv2
import keyboard
import numpy as np
import win32api
import win32gui
import win32ui
import ctypes
import threading

def find_img_all_sift(big_img, small_img):
    """
    使用 SIFT 特征匹配在大图中找到小图的匹配位置
    :param big_img: 大图
    :param small_img: 小图
    :return: 匹配结果列表
    """
    # 使用 SIFT 特征匹配
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(big_img, None)
    kp2, des2 = sift.detectAndCompute(small_img, None)

    # 确保描述符类型为 float32
    des1 = des1.astype(np.float32)
    des2 = des2.astype(np.float32)

    bf = cv2.BFMatcher()
    matches = bf.knnMatch(des1, des2, k=2)
    good = []
    for m, n in matches:
        if m.distance < 0.75 * n.distance:
            good.append([m])

    if len(good) > 10:
        src_pts = np.float32([kp1[m[0].queryIdx].pt for m in good]).reshape(-1, 1, 2)
        dst_pts = np.float32([kp2[m[0].trainIdx].pt for m in good]).reshape(-1, 1, 2)
        M, mask = cv2.findHomography(dst_pts, src_pts, cv2.RANSAC, 5.0)
        h, w = small_img.shape[:2]
        pts = np.float32([[0, 0], [0, h - 1], [w - 1, h - 1], [w - 1, 0]]).reshape(-1, 1, 2)
        dst = cv2.perspectiveTransform(pts, M)
        rectangle = [tuple(map(int, p[0])) for p in dst]
        center_x = int(sum(p[0] for p in rectangle) / 4)
        center_y = int(sum(p[1] for p in rectangle) / 4)
        return [{"result": tuple(map(int, dst[0][0])), "rectangle": rectangle, "center": (center_x, center_y)}]
    return []

def screenshot(hwnd, left, top, right, bottom):
    # 获取窗口设备上下文
    hwndDC = win32gui.GetWindowDC(hwnd)
    mfcDC = win32ui.CreateDCFromHandle(hwndDC)
    saveDC = mfcDC.CreateCompatibleDC()

    # 获取窗口大小
    rect = win32gui.GetWindowRect(hwnd)
    width = right - left
    height = bottom - top

    # 创建位图
    saveBitMap = win32ui.CreateBitmap()
    saveBitMap.CreateCompatibleBitmap(mfcDC, width, height)
    saveDC.SelectObject(saveBitMap)

    # 设置剪切区域
    saveDC.SetWindowExt((width, height))
    saveDC.SetViewportExt((width, height))
    saveDC.SetWindowOrg((left, top))
    saveDC.SetViewportOrg((0, 0))

    # 截图
    result = ctypes.windll.user32.PrintWindow(hwnd, saveDC.GetSafeHdc(), 0)
    if result == 0:
        print("PrintWindow failed")
        return None

    bmpinfo = saveBitMap.GetInfo()
    bmpstr = saveBitMap.GetBitmapBits(True)

    # 转换为 OpenCV 图像
    im_cv = np.frombuffer(bmpstr, dtype='uint8')
    im_cv = im_cv.reshape((height, width, 4))

    # 清理资源
    win32gui.DeleteObject(saveBitMap.GetHandle())
    saveDC.DeleteDC()
    mfcDC.DeleteDC()
    win32gui.ReleaseDC(hwnd, hwndDC)

    return im_cv[:, :, :3]  # 去掉 alpha 通道

def get_window_handle_at_mouse_position():
    point = win32api.GetCursorPos()
    hwnd = win32gui.WindowFromPoint(point)
    return hwnd

def main():
    hwnd = 65800  # 窗口句柄
    map_region = (100, 100, 200, 200)  # 截图矩形
    left, top, right, bottom = map_region

    # 定义大地图及格式化
    datu = 'datu.png'
    big_image = cv2.imread(datu, cv2.IMREAD_GRAYSCALE)

    big_img = cv2.imdecode(np.fromfile(file=datu, dtype=np.uint8), cv2.IMREAD_COLOR)  # 加载大图
    big_height, big_width, _ = big_img.shape
    big_img_yt = big_img.copy()

    # Use a threading.Event to manage the recording state
    recording_event = threading.Event()

    # Remember the previously recorded coordinates
    last_center = None
    first_mark = True
    marks = []  # all marks recorded so far, as (center, color) pairs

    def start_recording():
        nonlocal first_mark
        first_mark = True
        recording_event.set()
        print("Recording started...")

    def stop_recording():
        nonlocal last_center, first_mark
        recording_event.clear()
        print("Recording stopped...")

        # Take one last screenshot and mark it as the end point
        small_image = screenshot(hwnd, left, top, right, bottom)
        if small_image is not None:
            results = find_img_all_sift(big_image, small_image)
            if results:
                current_center = results[0]["center"]
                if last_center is None or last_center != current_center:
                    marks.append((current_center, (0, 0, 255)))  # red mark = end point
                    last_center = current_center

        # Draw all the marks
        for mark_center, color in marks:
            cv2.circle(big_img_yt, mark_center, 2, color, -1)

        # Save the annotated image
        cv2.imwrite('111.png', big_img_yt)
        print("Image saved as 111.png")

    # Register the hotkeys
    keyboard.add_hotkey('f7', start_recording)
    keyboard.add_hotkey('f8', stop_recording)

    while True:
        # If the mouse is not over the specified window, pause recording
        if get_window_handle_at_mouse_position() != hwnd:
            print("The mouse has left the target window!")
            time.sleep(0.5)
            continue

        time.sleep(2)

        # Only record while the switch is on
        if recording_event.is_set():

            # Capture the minimap (already a NumPy array, no conversion needed)
            small_image = screenshot(hwnd, left, top, right, bottom)

            if small_image is not None:
                # Match the minimap against the big map
                results = find_img_all_sift(big_image, small_image)

                print(results)
                if results:
                    current_center = results[0]["center"]
                    if last_center is None or last_center != current_center:
                        # Add a mark: blue by default, green for the very first point
                        color = (255, 0, 0)  # blue
                        if first_mark:
                            color = (0, 255, 0)  # green
                            first_mark = False

                        marks.append((current_center, color))
                        last_center = current_center

    # Clean up
    cv2.destroyAllWindows()

if __name__ == '__main__':
    main()

II. Getting Coordinates from the Trajectory

III. Calibrating the Direction

