
深蓝学院 Autonomous Parking, Assignment 3: IPM

Contents

  • 1 Problem Description
  • 2 Solution

1 Problem Description

The following fisheye camera parameters are given:

  1. image_width, the width of the image
  2. image_height, the height of the image
  3. $\xi$, the fisheye camera model parameter
  4. $k_1$, $k_2$, the radial distortion coefficients
  5. $p_1$, $p_2$, the tangential distortion coefficients
  6. $f_x$, $f_y$, the focal lengths of the camera
  7. $c_x$, $c_y$, the principal point of the camera

The transformation from the camera coordinate frame to the vehicle coordinate frame is also given:

  1. $t$, the translation vector
  2. $R$, the rotation matrix

Four sets of these parameters, together with the corresponding images, are provided; the task is to compute the final IPM image.
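For reference, a minimal sketch of a per-camera configuration struct that could hold these quantities is shown below. The member names (width_, height_, xi_, K_, D_, T_vehicle_cam_) mirror the accesses made in the solution code later in this post; the struct itself and the choice of Eigen::Isometry3d for the extrinsics are assumptions, not part of the assignment skeleton.

#include <opencv2/core.hpp>
#include <Eigen/Geometry>

// Hypothetical container for one fisheye camera's parameters.
struct CameraConfig {
  int width_;                        // image_width
  int height_;                       // image_height
  double xi_;                        // fisheye model parameter xi
  cv::Mat K_;                        // 3x3 intrinsics holding fx, fy, cx, cy
  cv::Mat D_;                        // 4x1 distortion vector: k1, k2, p1, p2
  Eigen::Isometry3d T_vehicle_cam_;  // camera-to-vehicle transform (R, t)
};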

2 Solution

(1) Set the size of the final IPM image to $1000 \times 1000$ (denoted $ipm\_img\_h \times ipm\_img\_w$), with each pixel covering 0.02 m in both length and width (denoted $pixel\_scale$).
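As a quick sanity check on the chosen scale (plain arithmetic; the centering on the vehicle follows from Eq. (1) below):

$$
ipm\_img\_w \cdot pixel\_scale = 1000 \cdot 0.02\,\mathrm{m} = 20\,\mathrm{m},
$$

so the IPM image covers a $20\,\mathrm{m} \times 20\,\mathrm{m}$ patch of ground around the vehicle.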

(2) Iterate over every pixel $(u, v)$ of the IPM image. The corresponding point in the vehicle (body) coordinate frame is

$$
p_v = \begin{bmatrix}
-(0.5 \cdot ipm\_img\_h - u) \cdot pixel\_scale \\
(0.5 \cdot ipm\_img\_w - v) \cdot pixel\_scale \\
0
\end{bmatrix} \tag{1}
$$

where the ground plane is assumed to lie at height 0 in the vehicle frame.
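As a quick check, evaluating Eq. (1) directly with the values from step (1) ($ipm\_img\_h = ipm\_img\_w = 1000$, $pixel\_scale = 0.02$):

$$
(u, v) = (500, 500) \;\Rightarrow\; p_v = (0, 0, 0)^T, \qquad
(u, v) = (0, 0) \;\Rightarrow\; p_v = (-10\,\mathrm{m},\, 10\,\mathrm{m},\, 0)^T,
$$

so the center pixel of the IPM image sits at the vehicle origin.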
(3) The point expressed in the camera coordinate frame is

$$
p_c = R^T \cdot p_v - R^T \cdot t \tag{2}
$$

p c p_c pc归一化操作,得到 p c _ n o r m p_{c\_norm} pc_norm

(4) This gives the point $(x_1, y_1)$ on the unit plane:

$$
x_1 = \frac{p_{c\_norm}[0]}{p_{c\_norm}[2] + \xi}, \qquad
y_1 = \frac{p_{c\_norm}[1]}{p_{c\_norm}[2] + \xi} \tag{3}
$$

(5) Applying the distortion model gives the point $(x_2, y_2)$:

$$
\begin{aligned}
r &= \sqrt{x_1^2 + y_1^2} \\
x_2 &= x_1 (1 + k_1 r^2 + k_2 r^4) + 2 p_1 x_1 y_1 + p_2 (r^2 + 2 x_1^2) \\
y_2 &= y_1 (1 + k_1 r^2 + k_2 r^4) + p_1 (r^2 + 2 y_1^2) + 2 p_2 x_1 y_1
\end{aligned} \tag{4}
$$

(6) Project $(x_2, y_2)$ into the image coordinate system to obtain $(u_1, v_1)$:

$$
u_1 = f_x x_2 + c_x, \qquad v_1 = f_y y_2 + c_y \tag{5}
$$
Assign the color of pixel $(u_1, v_1)$ of the fisheye image to pixel $(u, v)$ of the IPM image.
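Putting Eqs. (3)-(5) together, a small standalone helper could look like the sketch below (parameter names are assumptions; the same math appears inside GenerateIPMImage() further down):

// Project a normalized camera-frame point p_c_norm to pixel coordinates (u1, v1)
// using the unit-plane offset xi (Eq. (3)), the radial/tangential distortion
// k1, k2, p1, p2 (Eq. (4)), and the intrinsics fx, fy, cx, cy (Eq. (5)).
void ProjectToPixel(const double p_c_norm[3], double xi,
                    double k1, double k2, double p1, double p2,
                    double fx, double fy, double cx, double cy,
                    double* u1, double* v1) {
  // Eq. (3): point on the unit plane
  double x1 = p_c_norm[0] / (p_c_norm[2] + xi);
  double y1 = p_c_norm[1] / (p_c_norm[2] + xi);
  // Eq. (4): radial and tangential distortion
  double r2 = x1 * x1 + y1 * y1;
  double x2 = x1 * (1 + k1 * r2 + k2 * r2 * r2) + 2 * p1 * x1 * y1 + p2 * (r2 + 2 * x1 * x1);
  double y2 = y1 * (1 + k1 * r2 + k2 * r2 * r2) + p1 * (r2 + 2 * y1 * y1) + 2 * p2 * x1 * y1;
  // Eq. (5): pinhole projection into pixel coordinates
  *u1 = fx * x2 + cx;
  *v1 = fy * y2 + cy;
}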

Two points to note:

Tip 1: round $u_1$ and $v_1$ to the nearest integer before sampling the color.

Tip 2: if pixel $(u, v)$ of the IPM image has already been assigned a color (e.g. it is visible from more than one camera), keep a count of how many times it has been written; the final color is then a count-weighted running average,

$$
color = \frac{total - 1}{total} \cdot color + \frac{1}{total} \cdot new\_color \tag{6}
$$
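Equation (6) is simply the incremental form of the arithmetic mean: if $c_1, \dots, c_n$ are the colors written to a pixel so far and $color$ already equals the mean of the first $n-1$ of them, then applying Eq. (6) with $total = n$ and $new\_color = c_n$ gives

$$
\frac{n-1}{n} \cdot \frac{c_1 + \dots + c_{n-1}}{n-1} + \frac{1}{n} \cdot c_n = \frac{c_1 + \dots + c_n}{n},
$$

i.e. every camera that sees the pixel contributes equally.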

The final code is as follows:

cv::Mat IPM::GenerateIPMImage(const std::vector<cv::Mat>& images) const {
  // Initialize a black IPM image with dimensions ipm_img_h_ x ipm_img_w_ and 3 color channels
  cv::Mat ipm_image = cv::Mat::zeros(ipm_img_h_, ipm_img_w_, CV_8UC3);
  std::vector<std::vector<int>> uv_cnt(ipm_img_h_, std::vector<int>(ipm_img_w_, 0)); 

  // Check if the number of input images matches the number of cameras
  if (images.size() != cameras_.size()) {
    // If not, print an error message and return the initialized black IPM image
    std::cout << "IPM not initialized properly!" << std::endl;
    return ipm_image;
  }
  
  // Iterate over each pixel in the IPM image
  for (int u = 0; u < ipm_img_w_; ++u) {
    for (int v = 0; v < ipm_img_h_; ++v) {
      // Compute the point p_v in vehicle coordinates that corresponds to the current pixel (u, v).
      // The IPM plane is assumed to lie at height 0 in the vehicle frame.
      Eigen::Vector3d p_v(-(0.5 * ipm_img_h_ - u) * pixel_scale_,
                          (0.5 * ipm_img_w_ - v) * pixel_scale_, 0);

      // Iterate over each camera
      for (size_t i = 0; i < cameras_.size(); ++i) {
        // Project the vehicle point p_v into the image plane (u1, v1)
        /// TODO begin ///

        // Transform the point from the vehicle frame into the camera frame.
        // Equivalent to Eq. (2): p_c = R^T * p_v - R^T * t, with
        // R = T_vehicle_cam_.linear() and t = T_vehicle_cam_.translation().
        Eigen::Vector3d p_c = cameras_[i].T_vehicle_cam_.inverse() * p_v;

        if (p_c[2] < 0.0) {  // skip points behind the camera (z < 0)
          continue;
        }

        // Normalize onto the unit sphere and shift by xi (unified camera model)
        Eigen::Vector3d norm_p_c = p_c.normalized();
        norm_p_c[2] += cameras_[i].xi_;
        // Point (x1, y1) on the unit plane, Eq. (3)
        double x1 = norm_p_c[0] / norm_p_c[2];
        double y1 = norm_p_c[1] / norm_p_c[2];
        double r_2 = x1 * x1 + y1 * y1;

        // Read the distortion coefficients k1, k2, p1, p2
        double k1 = cameras_[i].D_.at<double>(0, 0);
        double k2 = cameras_[i].D_.at<double>(1, 0);
        double p1 = cameras_[i].D_.at<double>(2, 0);
        double p2 = cameras_[i].D_.at<double>(3, 0);

        // Distorted point (x2, y2), Eq. (4)
        double x2 = x1 * (1 + k1 * r_2 + k2 * r_2 * r_2) + 2.0 * p1 * x1 * y1 + p2 * (r_2 + 2 * x1 * x1);
        double y2 = y1 * (1 + k1 * r_2 + k2 * r_2 * r_2) + p1 * (r_2 + 2 * y1 * y1) + 2.0 * p2 * x1 * y1;

        // Read the intrinsics fx, fy, cx, cy
        double fx = cameras_[i].K_.at<double>(0, 0);
        double fy = cameras_[i].K_.at<double>(1, 1);
        double cx = cameras_[i].K_.at<double>(0, 2);
        double cy = cameras_[i].K_.at<double>(1, 2);
        
        // Point (u1, v1) in image coordinates, Eq. (5)
        double u1 = fx * x2 + cx;
        double v1 = fy * y2 + cy;

        // Round to the nearest integer pixel coordinates (uv0, uv1)
        int uv0 = static_cast<int>(std::round(u1));
        int uv1 = static_cast<int>(std::round(v1)); 
        /// TODO end ///
        // (uv0, uv1) is the projected pixel from p_v to cameras_[i]
        // Skip this point if the projected coordinates are out of bounds of the camera image
        if (uv0 < 0 || uv0 >= cameras_[i].width_ || uv1 < 0 ||
            uv1 >= cameras_[i].height_) {
          continue;
        }

        // Get the pixel color from the camera image and set it to the IPM image
        // If the IPM image pixel is still black (not yet filled), directly assign the color
        if (ipm_image.at<cv::Vec3b>(v, u) == cv::Vec3b(0, 0, 0)) {
          ipm_image.at<cv::Vec3b>(v, u) = images[i].at<cv::Vec3b>(uv1, uv0);
          uv_cnt[v][u] += 1;
        } else {
          // Otherwise, average the existing color with the new color
          uv_cnt[v][u] += 1;
          double total = 1.0 * uv_cnt[v][u]; 
          ipm_image.at<cv::Vec3b>(v, u) = (total - 1.0) / total * ipm_image.at<cv::Vec3b>(v, u) + 1.0 / total * 
                                           images[i].at<cv::Vec3b>(uv1, uv0); 
        }
      }
    }
  }

  // Return the generated IPM image
  return ipm_image;
}
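For completeness, a hypothetical driver is sketched below. How the IPM object itself is constructed is not shown in this post, so that step is only indicated by a comment; the image file names are placeholders, and only the cv::imread / cv::imwrite calls are standard OpenCV.

#include <opencv2/imgcodecs.hpp>
#include <string>
#include <vector>

int main() {
  // IPM ipm(/* four camera configurations; constructor not shown in this post */);

  // Load the four provided fisheye images (placeholder file names).
  std::vector<cv::Mat> images;
  for (const std::string& path :
       {"cam_front.png", "cam_left.png", "cam_back.png", "cam_right.png"}) {
    images.push_back(cv::imread(path, cv::IMREAD_COLOR));
  }

  // cv::Mat ipm_image = ipm.GenerateIPMImage(images);
  // cv::imwrite("ipm_result.png", ipm_image);
  return 0;
}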

The IPM image generated by this method is shown below.

[Figure: IPM image generated by the code above]

The reference IPM image provided with the assignment is shown below.

[Figure: reference IPM image from the assignment]

The IPM image obtained with this method shows clearer parking-slot lines!

