基于相机标定的跨相机场景拼接方法
计算机系统应用 (Computer Systems & Applications), 2020, Vol. 29, Issue (1): 176-183

Multi-Camera Traffic Scene Mosaic Based on Camera Calibration
WU Fei-Fan, LIANG Hao-Xiang, SONG Huan-Sheng, JIA Jin-Ming, LIU Li-Chen
School of Information Engineering, Chang’an University, Xi’an 710064, China
Foundation item: National Natural Science Foundation of China (61572083); Joint Fund of Ministry of Education (6141A02022610); Major Project of Key Research and Development Program of Shaanxi Province (2018ZDXM-GY-047); Team Incubation Project of the Central Universities of China (300102248402)
Abstract: Intelligent traffic applications in single-camera scenes are well developed, but cross-region research is still in its infancy. This study proposes a cross-camera scene stitching method based on camera calibration. First, vanishing-point calibration establishes the mapping between the physical information in each camera's sub-world coordinate system and the two-dimensional image, for both camera scenes. Second, the projective transformation between the cameras is obtained from feature information common to the two sub-world coordinate systems, and the road scenes are stitched using the proposed inverse-projection idea together with the translation-vector relationship. Experimental results show that the method achieves road scene stitching and cross-region physical measurement of the road, laying a foundation for related practical applications.
Key words: scene stitching; camera calibration; projection transformation; inverse projection; translation vector

1 Single-Scene Camera Calibration

1.1 Camera Calibration

 $\alpha p = HP = KRTP$ (1)

1.2 Road Scene Calibration Model

 Fig. 1. Camera calibration model for a single scene

 $K = \left[ {\begin{array}{*{20}{c}} f&0&{{C_x}}\\ 0&f&{{C_y}}\\ 0&0&1 \end{array}} \right]$ (2)

 $R = {R_x} = \left[ {\begin{array}{*{20}{c}} 1&0&0\\ 0&{\cos (\phi + \pi /2)}&{ - \sin (\phi + \pi /2)}\\ 0&{\sin (\phi + \pi /2)}&{\cos (\phi + \pi /2)} \end{array}} \right]$ (3)

 $T = \left[ {\begin{array}{*{20}{c}} 0\\ 0\\ { - h} \end{array}} \right]$ (4)
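Equations (1)–(4) together define the mapping from world coordinates to image pixels. The numpy sketch below composes K, R, and T into the 3×4 projection matrix H and projects a road-plane point; all numeric values (f, Cx, Cy, φ, h) are illustrative assumptions, not values from the paper:

```python
import numpy as np

# Hypothetical calibration values (for illustration only)
f, Cx, Cy = 1000.0, 640.0, 360.0   # focal length and principal point (px)
phi = np.deg2rad(15.0)             # camera tilt angle
h = 6.0                            # camera height above the road plane (m)

K = np.array([[f, 0, Cx],
              [0, f, Cy],
              [0, 0, 1.0]])        # intrinsic matrix, Eq. (2)

a = phi + np.pi / 2                # rotation about the X axis, Eq. (3)
R = np.array([[1, 0, 0],
              [0, np.cos(a), -np.sin(a)],
              [0, np.sin(a),  np.cos(a)]])

t = np.array([0.0, 0.0, -h])       # translation, Eq. (4)

# 3x4 projection matrix H = K R [I | t], realizing Eq. (1)
H = K @ R @ np.hstack([np.eye(3), t.reshape(3, 1)])

def project(Pw):
    """Map a world point (Xw, Yw, Zw) to pixel coordinates (u, v)."""
    p = H @ np.append(Pw, 1.0)     # homogeneous projection alpha*p = H*P
    return p[:2] / p[2]

u, v = project([2.0, 10.0, 0.0])   # a point on the road plane (Zw = 0)
```

A point with Xw = 0 lies in the camera's vertical symmetry plane and therefore projects to u = Cx, which makes a quick sanity check possible.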

 $u = \frac{{su}}{s} = \frac{{{X_w}{H_{11}} + {Y_w}{H_{12}}}}{{{X_w}{H_{31}} + {Y_w}{H_{32}} + {H_{34}}}}$ (5)
 $v = \frac{{sv}}{s} = \frac{{{X_w}{H_{21}} + {Y_w}{H_{22}} + {H_{24}}}}{{{X_w}{H_{31}} + {Y_w}{H_{32}} + {H_{34}}}}$ (6)

 $f = \sqrt { - (v_0^2 + {u_0}{u_1})}$ (7)
 $\phi = \arctan ({{ - {v_0}} / f})$ (8)
 $\cos \theta = \sqrt {{{({f^2} + v_0^2)} / {({f^2} + u_0^2 + v_0^2)}}}$ (9)
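Given the vanishing point (u0, v0) along the lane direction and the u-coordinate u1 of the perpendicular vanishing point, Eqs. (7)–(9) recover the focal length, tilt angle, and pan-angle cosine. A minimal sketch with assumed vanishing-point coordinates (illustrative only; image coordinates are taken relative to the principal point):

```python
import numpy as np

# Hypothetical vanishing points in principal-point-centred image coordinates
u0, v0 = 120.0, -300.0   # vanishing point along the lane direction
u1 = -2800.0             # u-coordinate of the perpendicular vanishing point

f = np.sqrt(-(v0**2 + u0 * u1))                 # focal length, Eq. (7)
phi = np.arctan(-v0 / f)                        # tilt angle, Eq. (8)
cos_theta = np.sqrt((f**2 + v0**2) /
                    (f**2 + u0**2 + v0**2))     # pan-angle cosine, Eq. (11)
```

Note that Eq. (7) only yields a real focal length when the two vanishing points lie on opposite sides of the principal point, i.e. when v0² + u0·u1 is negative.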

 $\cos \theta = \frac{{fl({v_f} - {v_0})({v_b} - {v_0})}}{{h({f^2} + v_0^2)({v_f} - {v_b})}}$ (10)
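Once f, v0, h, and cosθ are calibrated, Eq. (10) can be rearranged to recover the real length l of a road segment from the image v-coordinates of its two endpoints. A sketch with hypothetical values (the endpoint coordinates and parameters below are assumptions for illustration):

```python
import numpy as np

def segment_length(vf, vb, f, v0, h, cos_theta):
    """Real segment length l, obtained by rearranging Eq. (10) for l.

    vf, vb    : image v-coordinates of the segment's two endpoints
    f, v0     : focal length and lane-direction vanishing point v-coordinate
    h         : camera height above the road plane
    cos_theta : cosine of the pan angle
    """
    return h * (f**2 + v0**2) * (vf - vb) * cos_theta / (
        f * (vf - v0) * (vb - v0))

# Hypothetical values for illustration
l = segment_length(vf=400.0, vb=150.0, f=1000.0, v0=-300.0, h=6.0,
                   cos_theta=0.98)
```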
 Fig. 2. Determination of the vanishing point along the lane direction

 Fig. 3. Schematic of a line segment in the world coordinate system

 ${\cos ^2}\theta = \frac{{\left( {{f^2} + v_0^2} \right)}}{{{f^2} + u_0^2 + v_0^2}}$ (11)

 ${f^4} + [u_0^2 + 2v_0^2 - {\left( {{{wl} / h}} \right)^2}]{f^2} + (u_0^2 + v_0^2)v_0^2 = 0$ (12)
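Eq. (12) is a quadratic in f², so the focal length can be obtained with a standard root solver and the positive real solutions kept. In the sketch below, the combined term wl/h is an assumed illustrative value chosen so that real positive roots exist:

```python
import numpy as np

# Hypothetical vanishing point; wl_h stands for the combined term w*l/h
# in Eq. (12) (illustrative value, not from the paper)
u0, v0 = 120.0, -300.0
wl_h = 700.0

# Eq. (12) as a quadratic in x = f**2:  x**2 + b*x + c = 0
b = u0**2 + 2 * v0**2 - wl_h**2
c = (u0**2 + v0**2) * v0**2

roots = np.roots([1.0, b, c])               # candidate values of f**2
real_roots = roots[np.isreal(roots)].real   # discard complex solutions
f_sq = real_roots[real_roots > 0]           # keep physically valid f**2
focal_candidates = np.sqrt(f_sq)            # possible focal lengths (px)
```

When two positive roots survive, additional scene knowledge (e.g. an approximate field of view) is needed to pick the correct one.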
2 Road Scene Stitching

2.1 Rotation and Translation Between Cameras

 $\left[ {\begin{array}{*{20}{c}} {{x_{1i}}}\\ {{y_{1i}}}\\ {{z_{1i}}}\\ 1 \end{array}} \right] = M\left[ {\begin{array}{*{20}{c}} {{x_{2i}}}\\ {{y_{2i}}}\\ {{z_{2i}}}\\ 1 \end{array}} \right] = \left[ {\begin{array}{*{20}{c}} r&t\\ 0&1 \end{array}} \right]\left[ {\begin{array}{*{20}{c}} {{x_{2i}}}\\ {{y_{2i}}}\\ {{z_{2i}}}\\ 1 \end{array}} \right]$ (13)
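The block matrix of Eq. (13) can be realized as a 4×4 homogeneous matrix acting on homogeneous points. The rotation r and translation t below are illustrative assumptions, not calibrated values:

```python
import numpy as np

# Hypothetical rigid transform between the two sub-world coordinate
# systems: rotation r about the Z axis and translation t
angle = np.deg2rad(5.0)
r = np.array([[np.cos(angle), -np.sin(angle), 0],
              [np.sin(angle),  np.cos(angle), 0],
              [0, 0, 1.0]])
t = np.array([20.0, 35.0, 0.0])

# Homogeneous 4x4 matrix M of Eq. (13): [[r, t], [0, 1]]
M = np.eye(4)
M[:3, :3] = r
M[:3, 3] = t

p2 = np.array([4.0, 12.0, 0.0, 1.0])   # point in sub-world frame 2
p1 = M @ p2                            # same point in sub-world frame 1
```

Because M is rigid, its inverse maps frame-1 points back into frame 2, which is useful when either camera serves as the reference.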

2.2 Scene Stitching with Real Physical Information

 Fig. 4. Simulated illustration of road scene stitching

Step 1. Construct a world coordinate system W1 large enough to contain both scenes. Starting from the origin of W1, take world coordinate points along the X and Y axes at centimetre-level steps (since only the road plane is stitched, Zw = 0 throughout), denoted (xi, yi, 0).

Step 2. Since the unified world coordinate system and the reference coordinate system share the same origin and orientation, the transformation between them is the identity matrix E. Each world point (xi, yi, 0) is therefore mapped through the camera calibration, via Eq. (14), to its position (ui, vi) in camera view 1 (the view containing the reference coordinate system). The pixel value at (ui, vi) is read and written to position (xi, yi, 0) of the unified world coordinate system W2; the whole process is shown in Fig. 5. This is repeated until all road information in camera view 1 has been written into W2.

 $\left[ {\begin{array}{*{20}{c}} {{u_i}}\\ {{v_i}}\\ 1 \end{array}} \right] = {K_1}{R_1}{T_1}E\left[ {\begin{array}{*{20}{c}} {{x_i}}\\ {{y_i}}\\ 0\\ 1 \end{array}} \right]$ (14)

Step 3. For camera view 2, the unified world coordinate system differs from the non-reference coordinate system in both position and orientation, so the perspective transformation M obtained in Section 2.1 is used. Each world point (xi, yi, 0) is again mapped through the camera calibration, via Eq. (15), to its position (ui, vi) in camera view 2 (the view containing the non-reference coordinate system); the pixel value at (ui, vi) is read and written to position (xi, yi, 0) of W2. For world positions that fall outside the camera's field of view, the pixel information is recorded as 0.

 Fig. 5. Schematic of pixel information acquisition

 $\left[ {\begin{array}{*{20}{c}} {{u_i}}\\ {{v_i}}\\ 1 \end{array}} \right] = {K_2}{R_2}{T_2}M\left[ {\begin{array}{*{20}{c}} {{x_i}}\\ {{y_i}}\\ 0\\ 1 \end{array}} \right]$ (15)
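Steps 1–3 can be sketched as a single inverse-projection loop over the metric grid: each world point is projected into the reference view with E (Eq. (14)) and into the second view with M (Eq. (15)), and the first in-view pixel is copied into the mosaic. The projection matrices and images below are synthetic stand-ins, not calibrated data:

```python
import numpy as np

def stitch(img1, img2, P1, P2, M, x_range, y_range, step=0.01):
    """Inverse-projection stitching over a metric road-plane grid.

    img1, img2 : camera images (H x W x C)
    P1, P2     : 3x4 projection matrices K1*R1*T1 and K2*R2*T2
    M          : 4x4 transform between sub-world frames, Eq. (13)
    step       : grid spacing (cm-level in Step 1)
    """
    xs = np.arange(*x_range, step)
    ys = np.arange(*y_range, step)
    mosaic = np.zeros((len(ys), len(xs), img1.shape[2]), dtype=img1.dtype)
    for j, y in enumerate(ys):
        for i, x in enumerate(xs):
            Pw = np.array([x, y, 0.0, 1.0])          # road plane, Zw = 0
            # Camera 1 uses the identity E (Eq. (14)); camera 2 uses M (Eq. (15))
            for img, P, A in ((img1, P1, np.eye(4)), (img2, P2, M)):
                p = P @ (A @ Pw)
                if p[2] <= 0:                        # behind the camera
                    continue
                u, v = int(p[0] / p[2]), int(p[1] / p[2])
                if 0 <= v < img.shape[0] and 0 <= u < img.shape[1]:
                    mosaic[j, i] = img[v, u]
                    break                            # reference view wins
    return mosaic

# Tiny synthetic check: camera 1 maps the grid one-to-one onto its image
# (u = x, v = y); camera 2 never sees the plane, so its pixels stay 0.
img1 = np.arange(48, dtype=np.uint8).reshape(4, 4, 3)
img2 = np.zeros_like(img1)
P1 = np.array([[1.0, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1]])
P2 = np.zeros((3, 4))
mosaic = stitch(img1, img2, P1, P2, np.eye(4), (0, 4), (0, 4), step=1.0)
```

In practice the per-pixel Python loop would be vectorized, but the scalar form mirrors the step-by-step description above.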

Step 1. Obtain the coordinates of the centre points of several circular markers in camera scene 2 under that scene's sub-world coordinate system, denoted (X2i, Y2i, 0), and compute the centroid of the markers $( \overline {{X_2}} , \overline {{Y_2}} ,0 )$ from Eqs. (16) and (17), where n is the number of markers.

Step 2. Use the perspective transformation M to obtain the coordinates $\left( {\overline {{X_1}} ,\overline {{Y_1}} ,0} \right)$ of $\left( {\overline {{X_2}} ,\overline {{Y_2}} ,0} \right)$ in the sub-world coordinate system of camera scene 1, and then compute $\Delta X = \overline {{X_2}} - \overline {{X_1}}$ and $\Delta Y = \overline {{Y_2}} - \overline {{Y_1}}$ .

Step 3. When stitching the far region of camera scene 2 in the unified world coordinate system W1, apply the vector offset to (xi, yi, 0) in W1 to obtain $\left( {{x_i} - \Delta X,{y_i} - \Delta Y,0} \right)$ , compute its position (ui, vi) in camera view 2 (the view containing the non-reference coordinate system) via Eq. (15), read the pixel value at (ui, vi), and write it to position (xi, yi, 0) of the unified world coordinate system W2.

 $\overline {{X_2}} = \frac{{\displaystyle \sum\limits_{i = 1}^n {{X_{2i}}} }}{n}$ (16)
 $\overline {{Y_2}} = \frac{{\displaystyle \sum\limits_{i = 1}^n {{Y_{2i}}} }}{n}$ (17)
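The centroid-and-offset procedure of Eqs. (16)–(17) and Steps 1–3 above can be sketched as follows. The sign convention assumed here takes (ΔX, ΔY) as the difference between the frame-2 centroid and its image in frame 1; the marker coordinates and transform are illustrative:

```python
import numpy as np

def centroid_offset(marks2, M):
    """Translation vector (dX, dY) from circular-marker centroids.

    marks2 : (n, 2) array of marker centres (X2i, Y2i) in sub-world frame 2
    M      : 4x4 transform of Eq. (13) mapping frame 2 into frame 1
    """
    X2, Y2 = marks2.mean(axis=0)             # centroid, Eqs. (16) and (17)
    c1 = M @ np.array([X2, Y2, 0.0, 1.0])    # centroid expressed in frame 1
    X1, Y1 = c1[0] / c1[3], c1[1] / c1[3]
    return X2 - X1, Y2 - Y1                  # offset used in Step 3

# Illustrative check: frame 1 sees the frame-2 centroid shifted by (10, 20)
M = np.eye(4)
M[0, 3], M[1, 3] = 10.0, 20.0
dX, dY = centroid_offset(np.array([[1.0, 2.0], [3.0, 4.0]]), M)
```

Averaging over several markers, as in Eqs. (16)–(17), suppresses the localisation error of any single marker centre.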

3 Experimental Design

3.1 Error Analysis of Single-Camera Calibration

 Fig. 6. Monitoring scenes of different cameras

3.2 Error Analysis of Scene Stitching

 Fig. 7. Road scene stitching of the two scenes

 Fig. 8. Line segment selection in the panorama

 Fig. 9. Cross-region line segment measurement and comparison

4 Conclusion
