Tightly-coupled Real-time Pose Optimization Method for Mobile Terminal
Authors: Sun Xiaoming, Song Ying
Funding: Open Fund of the Key Laboratory of Silk Culture Heritage and Product Design Digital Technology, Ministry of Culture and Tourism (2020WLB10)

    Abstract:

    Pose estimation has long been a key problem in 3D reconstruction. To guarantee real-time performance under the limited computing resources of mobile terminals and to improve the accuracy of trajectory calculation, a tightly coupled real-time pose optimization method for the mobile terminal is proposed. First, image information and motion sensor information are acquired for preprocessing such as feature extraction and pre-integration. Then, the reprojection error and the inertial sensor error are computed according to the epipolar geometric constraints. Finally, the pose trajectory is obtained by joint optimization of the weighted errors. The tight coupling strategy effectively exploits the consistency of the pose constraints derived from image information and inertial motion information. Experiments on the public EuRoC dataset show that, compared with existing visual-inertial pose estimation methods, the proposed method achieves a smaller reconstructed camera trajectory error while maintaining real-time performance on the mobile terminal.
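The weighted joint optimization described in the abstract can be sketched in miniature. The toy below is not the authors' implementation: it optimizes a translation-only pose with an identity camera rotation and unit focal length, and reduces the IMU pre-integration term to a simple translation prior; the function names, the numeric-Jacobian Gauss-Newton solver, and the weights `w_vis`/`w_imu` are all illustrative assumptions. It shows only the structure of the idea: stack weighted visual and inertial residuals and minimize them jointly.

```python
import numpy as np

def project(points, t):
    # Pinhole projection of 3-D points seen from a camera translated by t
    # (identity rotation, unit focal length -- a deliberately simplified model).
    p = points - t
    return p[:, :2] / p[:, 2:3]

def residuals(t, points, pixels, t_imu, w_vis=1.0, w_imu=1.0):
    # Weighted stack of visual reprojection residuals and the inertial
    # residual (here the pre-integrated term is just a translation prior).
    r_vis = (project(points, t) - pixels).ravel()
    r_imu = t - t_imu
    return np.concatenate([np.sqrt(w_vis) * r_vis, np.sqrt(w_imu) * r_imu])

def gauss_newton(t0, points, pixels, t_imu, iters=20, eps=1e-6):
    # Minimize the joint cost ||r(t)||^2 with Gauss-Newton steps,
    # using a central-difference numerical Jacobian.
    t = t0.astype(float).copy()
    for _ in range(iters):
        r = residuals(t, points, pixels, t_imu)
        J = np.zeros((r.size, 3))
        for j in range(3):
            dt = np.zeros(3)
            dt[j] = eps
            J[:, j] = (residuals(t + dt, points, pixels, t_imu)
                       - residuals(t - dt, points, pixels, t_imu)) / (2 * eps)
        t -= np.linalg.solve(J.T @ J, J.T @ r)
    return t
```

Because both residual types enter one least-squares problem, the visual observations and the inertial prior constrain the pose simultaneously; with exact pixel measurements and a slightly biased inertial prior, the jointly optimized pose lands closer to the true pose than the inertial estimate alone.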

Cite this article

Sun Xiaoming, Song Ying. Tightly-coupled real-time pose optimization method for mobile terminal. Computer Systems & Applications, 2022, 31(2): 207-212

History
  • Received: 2021-04-11
  • Revised: 2021-05-11
  • Published online: 2022-01-28