 Computer Systems & Applications, 2020, Vol. 29, Issue 2: 238-243

Method of Indoor Robot Positioning Based on Improved MSCKF Algorithm
SUN Yi, ZHANG Xue-Li
College of Communication and Information Engineering, Xi’an University of Science and Technology, Xi’an 710054, China
Abstract: In indoor robot positioning based on the traditional MSCKF algorithm, the velocity and position state equations require integrating the accelerometer measurements from the IMU, which causes drift and cumulative error; the accelerometer is also constantly disturbed by gravity. To address this problem, this study proposes an improved MSCKF algorithm. Without using the accelerometer, the improved MSCKF exploits the wheel odometer's more accurate measurement of translation, fuses the wheel odometer data with the gyroscope data from the IMU, and improves the Extended Kalman Filter (EKF) state equations of the MSCKF algorithm. First, the improved EKF attitude equation is obtained from the angular velocity data of the gyroscope. Then, combining the translation data of the wheel odometer with the rotation information of the attitude equation yields the improved EKF velocity and position equations. Finally, the MSCKF algorithm and its improved version are implemented on the Robot Operating System (ROS) and verified in an indoor scene with a Turtlebot2 robot. The experimental results show that the trajectory of the improved MSCKF algorithm is closer to the real trajectory and its positioning accuracy is higher: the average closed-loop error decreases from 0.429 m to 0.348 m.
Key words: MSCKF; IMU; wheel odometer; EKF state equation; robot; indoor positioning

The fusion of IMU and camera data is also known as the VIO (Visual-Inertial Odometry) problem. Currently, the mainstream filter-based VIO algorithms include Multi-Sensor Fusion (MSF), Multi-State Constraint Kalman Filter (MSCKF), and Robust Visual Inertial Odometry (ROVIO). MSCKF [13] was proposed by Mourikis et al. in 2007; it fuses IMU and monocular camera sensor data with an Extended Kalman Filter (EKF) [14], using the IMU motion model to construct the EKF state equation and the monocular camera's reprojection error to build the EKF measurement model.

1 The MSCKF Algorithm

MSCKF is an EKF-based fusion algorithm for IMU and camera data. Before introducing the algorithm, the relevant coordinate frames must be defined. In this paper, I denotes the IMU frame, also called the body frame; it is rigidly attached to the vehicle and moves with it. C denotes the camera frame. G denotes the global frame, which is fixed; it is placed at the algorithm's initial position, with its origin at the robot's center, the z-axis pointing vertically upward, the x-axis pointing toward the front of the vehicle, and the y-axis pointing to its left. All of these frames are right-handed.

The system state vector X of the MSCKF algorithm is defined as:

 $X = {\left[ {{X_{\rm{IMU}}},{}_C^I{q^1},{}^Ip_C^1,{}_C^I{q^2},{}^Ip_C^2, \cdots ,{}_C^I{q^N},{}^Ip_C^N} \right]^{\rm{T}}}$ (1)
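To make the layout of Eq. (1) concrete, the following sketch stacks the IMU state with N camera-related pose blocks of 7 parameters each (quaternion plus position). It is illustrative only: the 16-dimensional IMU block (quaternion, gyro bias, velocity, accelerometer bias, position) is an assumption about the ordering, and the paper's implementation is in C++:

```python
import numpy as np

def build_msckf_state(imu_state, cam_quats, cam_positions):
    """Stack the MSCKF state vector of Eq. (1):
    X = [X_IMU, q^1, p^1, ..., q^N, p^N]^T.

    imu_state     : (16,) array -- quaternion(4), gyro bias(3),
                    velocity(3), accel bias(3), position(3) (assumed order)
    cam_quats     : list of (4,) quaternions, one per camera pose block
    cam_positions : list of (3,) positions, one per camera pose block
    """
    assert len(cam_quats) == len(cam_positions)
    parts = [np.asarray(imu_state, dtype=float)]
    for q, p in zip(cam_quats, cam_positions):
        parts.append(np.asarray(q, dtype=float))
        parts.append(np.asarray(p, dtype=float))
    return np.concatenate(parts)

# Example: IMU state plus two camera pose blocks -> 16 + 2*7 = 30 entries.
x = build_msckf_state(np.zeros(16),
                      [np.array([0.0, 0.0, 0.0, 1.0])] * 2,
                      [np.zeros(3)] * 2)
print(x.shape)  # (30,)
```

The key point is that the vector grows by 7 entries per retained camera pose, which is why the sliding window in Section 2.2 must be kept at a fixed length.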

The EKF state equation of the MSCKF algorithm uses IMU data to predict the system state vector. The IMU outputs angular velocity $\omega$ and linear acceleration $f$; after coordinate transformation these give the measurements $\tilde \omega$ and $\tilde f$ expressed in the IMU frame, which contain the gyroscope and accelerometer biases ${b_\omega }$, ${b_f}$ and the noise terms ${w_\omega }$, ${w_f}$. The true angular velocity ${\overset{\frown}{\omega }}$ and acceleration ${\overset{\frown}{f}}$ are therefore defined as:

 ${\overset{\frown}{\omega }} = \tilde \omega - {b_\omega } - {w_\omega }$ (2)
 ${\overset{\frown}{f}} = \tilde f - {b_f} - {w_f}$ (3)

The derivation of the MSCKF state equation is fairly involved; the continuous-time state equations are given directly below, and the full derivation can be found in [21]:

 ${}_G^I\dot{\overset{\frown}{q}} = \frac{1}{2}\varOmega \left( {\overset{\frown}{\omega}} \right){}_G^I{\overset{\frown}{q}}$ (4)
 ${\dot{\overset{\frown}{b}}_\omega } = {0_{3 \times 1}}$ (5)
 ${}^G\dot{\overset{\frown}{v}} = C_{{}_G^I{\overset{\frown}{q}}}^{\rm{T}}{\overset{\frown}{f}} + {}^Gg$ (6)
 ${\dot{\overset{\frown}{b}}_f} = {0_{3 \times 1}}$ (7)
 ${}^G\dot{\overset{\frown}{p}} = {}^G{\overset{\frown}{v}}$ (8)
 ${}_C^I\dot{\overset{\frown}{q}} = {0_{3 \times 1}}$ (9)
 ${}^I\dot{\overset{\frown}{p}}_C = {0_{3 \times 1}}$ (10)
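These continuous-time equations can be illustrated with a simple discrete Euler-integration sketch. This is an illustration only, with an assumed quaternion convention; a real implementation integrates more carefully and also propagates the error-state covariance:

```python
import numpy as np

def quat_to_rot(q):
    """Rotation matrix C(q) from a unit quaternion q = [x, y, z, w];
    taken here as mapping global-frame vectors into the IMU frame
    (conventions vary between implementations)."""
    x, y, z, w = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y + z*w),     2*(x*z - y*w)],
        [2*(x*y - z*w),     1 - 2*(x*x + z*z), 2*(y*z + x*w)],
        [2*(x*z + y*w),     2*(y*z - x*w),     1 - 2*(x*x + y*y)],
    ])

def omega_matrix(w):
    """4x4 quaternion-rate matrix Omega(w) used in Eq. (4)."""
    wx, wy, wz = w
    return np.array([
        [0.0,  wz, -wy,  wx],
        [-wz, 0.0,  wx,  wy],
        [ wy, -wx, 0.0,  wz],
        [-wx, -wy, -wz, 0.0],
    ])

def propagate(q, v, p, w_meas, f_meas, b_w, b_f, g, dt):
    """One Euler step of Eqs. (4), (6), (8); the biases are held
    constant, as in Eqs. (5) and (7)."""
    w = w_meas - b_w                            # Eq. (2): true angular rate
    f = f_meas - b_f                            # Eq. (3): true specific force
    q = q + 0.5 * (omega_matrix(w) @ q) * dt    # Eq. (4): attitude kinematics
    q = q / np.linalg.norm(q)                   # re-normalize the quaternion
    v = v + (quat_to_rot(q).T @ f + g) * dt     # Eq. (6): velocity dynamics
    p = p + v * dt                              # Eq. (8): position kinematics
    return q, v, p

# A stationary IMU measures the specific force -g, so velocity stays zero.
g = np.array([0.0, 0.0, -9.81])
q, v, p = propagate(np.array([0.0, 0.0, 0.0, 1.0]), np.zeros(3), np.zeros(3),
                    np.zeros(3), -g, np.zeros(3), np.zeros(3), g, 0.01)
print(v)  # [0. 0. 0.]
```

The stationary check makes the gravity problem mentioned in the abstract concrete: any accelerometer bias or noise that is not perfectly cancelled against ${}^Gg$ is integrated twice and appears as position drift.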

2 Improvement and Implementation of the MSCKF Algorithm

2.1 Improved MSCKF Algorithm

The improved algorithm avoids integrating accelerometer data by computing velocity and position from the wheel odometer instead. Taking $N_L$ and $N_R$ as the left and right wheel encoder counts accumulated over the sampling interval $\varDelta t$, $sum$ as the encoder counts per wheel revolution, $r$ as the wheel diameter (so that $\pi \cdot r$ is the wheel circumference), and $L$ as the wheel track, the left and right wheel velocities are:

 ${V_L} = \frac{{{N_L}}}{{sum}} \times \frac{{\pi \cdot r}}{{\varDelta t}}$ (11)
 ${V_R} = \frac{{{N_R}}}{{sum}} \times \frac{{\pi \cdot r}}{{\varDelta t}}$ (12)
 $v = \frac{{{V_L} + {V_R}}}{2}$ (13)
 ${\omega _0} = \frac{{2\left( {{V_L} - {V_R}} \right)}}{L}$ (14)
 ${\theta _{k + 1}} = {\theta _k} + {\omega _0}\varDelta t$ (15)
 ${x_{k + 1}} = {x_k} + v\cos ({\theta _{k + 1}})\varDelta t$ (16)
 ${y_{k + 1}} = {y_k} + v\sin ({\theta _{k + 1}})\varDelta t$ (17)
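A dead-reckoning step built from Eqs. (11)–(17) can be sketched as follows. The encoder resolution, wheel diameter, and wheel track below are placeholder values, not the Turtlebot2's actual parameters:

```python
import math

def wheel_odometry_step(x, y, theta, n_left, n_right,
                        ticks_per_rev=1000, wheel_diameter=0.076,
                        wheel_track=0.23, dt=0.02):
    """One wheel-odometry dead-reckoning step following Eqs. (11)-(17)."""
    # Eqs. (11)-(12): wheel velocity = revolutions * circumference / dt
    v_l = n_left / ticks_per_rev * (math.pi * wheel_diameter) / dt
    v_r = n_right / ticks_per_rev * (math.pi * wheel_diameter) / dt
    # Eqs. (13)-(14): body linear velocity and yaw rate
    v = (v_l + v_r) / 2.0
    omega_0 = 2.0 * (v_l - v_r) / wheel_track
    # Eqs. (15)-(17): integrate the heading first, then the position
    theta = theta + omega_0 * dt
    x = x + v * math.cos(theta) * dt
    y = y + v * math.sin(theta) * dt
    return x, y, theta

# Equal tick counts on both wheels: the robot drives straight along x.
x, y, theta = wheel_odometry_step(0.0, 0.0, 0.0, 100, 100)
print(theta, y)  # 0.0 0.0
```

Unlike double integration of the accelerometer, this update integrates a directly measured velocity once, so it has no gravity term and accumulates error more slowly, which is the motivation for the improved state equations.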

2.2 Implementation of the MSCKF and Improved MSCKF Algorithms

2.2.1 Implementation Workflow of the MSCKF and Improved MSCKF Algorithms

The MSCKF algorithm predicts the state vector from IMU data, observes the environment with a monocular camera to extract natural landmarks, and corrects and updates the predicted state using reprojection-error constraints derived from the landmark information. Fig. 1 shows the flowchart of the traditional MSCKF algorithm; the English labels in the figure correspond to the different functions in the program.

(1) Check whether initialization is complete. If not, initialize, construct the MSCKF state vector, and proceed to the next step; if initialization is already complete, proceed directly.

(2) EKF prediction. The traditional MSCKF algorithm takes IMU data as input, calls the functions written from the EKF state equations of Section 1, and outputs the predicted system state vector and its covariance matrix.

(3) Perform optical-flow tracking and matching on the new image, and extract new feature points.

(4) Augment the state vector and covariance with the pose of the new image frame.

(5) Feature-point handling.

① If a feature point from previous views is no longer observed in the current frame (a lost feature), its tracking list is passed to measurementUpdate and used to update the MSCKF state vector.

② If a feature observed in the current frame was already observed in previous views (a mature feature), it is appended to the tracking list.

③ Newly extracted feature points are assigned new featureIDs and added to the tracking list.

(6) Loop over all feature points passed to measurementUpdate and perform the EKF update to obtain the optimal estimate of the MSCKF system state vector.

① Triangulate each feature point to obtain an accurate position of the landmark in the global frame.

② Feature marginalization: marginalize the feature-error terms out of the reprojection-error constraints (left null-space projection), converting them into constraints on the system state vector (the variables to be optimized).

③ Using the reprojection error, perform one linearization in the visual update and solve for the optimal system state vector at the current frame.

(7) Marginalize image frames out of the sliding window to keep it at a fixed length.
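The left null-space projection of step (6)② can be sketched as follows. This is an illustrative stand-alone version (names and dimensions are assumptions, not the paper's code):

```python
import numpy as np

def marginalize_feature(H_x, H_f, r):
    """Given the stacked reprojection-error linearization
        r ≈ H_x @ dx + H_f @ df,
    eliminate the feature-position error df by projecting onto a basis A
    of the left null space of H_f (A^T H_f = 0), leaving a constraint on
    the state error dx alone."""
    U, s, _ = np.linalg.svd(H_f, full_matrices=True)
    rank = int(np.sum(s > 1e-10))
    A = U[:, rank:]              # columns span the left null space of H_f
    return A.T @ H_x, A.T @ r    # reduced system: r0 ≈ H0 @ dx

# Toy dimensions: 8 stacked measurement rows, 6 state-error dims,
# 3 feature-position dims; the projection removes rank(H_f) = 3 rows.
np.random.seed(0)
H_x, H_f, r = np.random.randn(8, 6), np.random.randn(8, 3), np.random.randn(8)
H0, r0 = marginalize_feature(H_x, H_f, r)
print(H0.shape, r0.shape)  # (5, 6) (5,)
```

Projecting away the feature error keeps the filter's state size independent of the number of landmarks, which is what distinguishes MSCKF from EKF-SLAM approaches that put landmarks in the state.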

 Fig. 1. Flowchart of the traditional MSCKF algorithm

2.2.2 Implementation of the MSCKF and Improved MSCKF Algorithms

 Fig. 2. The Turtlebot2 robot

The MSCKF and improved algorithms have two key source files: src/ros_interface.cpp and src/cornet_detector.cpp. The former is the main body of the algorithm, including the state prediction, state augmentation, and measurement update steps. The latter is the visual front end, mainly performing optical-flow tracking and feature-point processing.

3 Analysis of Experimental Results

 Fig. 3. Experimental environment

3.1 Motion Along a Prescribed Trajectory

3.2 Motion Along a Closed Trajectory

 Fig. 4. Motion trajectories along the prescribed route

4 Conclusion

[1] Durrant-Whyte H. Where am I? A tutorial on mobile vehicle localization. Industrial Robot: An International Journal, 1994, 21(2): 11-16. DOI:10.1108/EUM0000000004145
[2] Xia LN, Zhang B, Wang YG, et al. Robot localization based on inertial sensors and visual odometry. Chinese Journal of Scientific Instrument, 2013, 34(1): 166-172. (in Chinese) DOI:10.3969/j.issn.0254-3087.2013.01.024
[3] Liu YY, Ge SR, Zhu H, et al. Research on positioning technology of coal mine rescue robots. Coal Mine Machinery, 2011, 32(1): 49-52. (in Chinese) DOI:10.3969/j.issn.1003-0794.2011.01.021
[4] Masamune K, Kurima I, Kuwana K, et al. HIFU positioning robot for less-invasive fetal treatment. Procedia CIRP, 2013, 5: 286-289. DOI:10.1016/j.procir.2013.01.056
[5] Chu H, Li CY, Yang K, et al. Research on localization and navigation algorithms for logistics robots based on multi-information fusion. Machinery Design & Manufacture, 2019(4): 240-243. (in Chinese) DOI:10.3969/j.issn.1001-3997.2019.04.059
[6] Wen ZT, Feng SL. Design of an indoor positioning system based on VIO and Wi-Fi fingerprinting. Telecommunication Engineering, 2019, 59(4): 449-454. (in Chinese) DOI:10.3969/j.issn.1001-893x.2019.04.014
[7] Xu ZZ, Zhuang YB. Comparative study of mobile robot localization methods. Journal of System Simulation, 2009, 21(7): 1891-1896. (in Chinese)
[8] Chen XN, Huang YQ, Yang J. Application of multi-sensor information fusion in mobile robot localization. Transducer and Microsystem Technologies, 2008, 27(6): 110-113. (in Chinese) DOI:10.3969/j.issn.1000-9787.2008.06.035
[9] Yuan K, Wang H, Zhang H. Robot position realization based on multi-sensor information fusion algorithm. Fourth International Symposium on Computational Intelligence and Design. Hangzhou, China. 2011. 294-297.
[10] Marín L, Vallés M, Soriano Á, et al. Multi sensor fusion framework for indoor-outdoor localization of limited resource mobile robots. Sensors, 2013, 13(10): 14133-14160. DOI:10.3390/s131014133
[11] He ZZ, Ding DR, Wang YX. Mobile robot localization based on multi-sensor fusion. Computer & Digital Engineering, 2019, 47(2): 325-329, 343. (in Chinese) DOI:10.3969/j.issn.1672-9722.2019.02.014
[12] Corke P, Lobo J, Dias J. An introduction to inertial and visual sensing. The International Journal of Robotics Research, 2007, 26(6): 519-535. DOI:10.1177/0278364907079279
[13] Mourikis AI, Roumeliotis SI. A multi-state constraint Kalman filter for vision-aided inertial navigation. Proceedings of the 2007 IEEE International Conference on Robotics and Automation. Roma, Italy. 2007. 3565-3572.
[14] Groves PD. Principles of GNSS, Inertial, and Multisensor Integrated Navigation Systems. Translated by Lian JX, Cao JL, Wu WQ, et al. Beijing: National Defense Industry Press, 2011. 66-68. (Chinese edition)
[15] Li MY, Mourikis AI. Optimization-based estimator design for vision-aided inertial navigation. Robotics: Science and Systems. 2013. 241-248.
[16] Sun K, Mohta K, Pfrommer B, et al. Robust stereo visual inertial odometry for fast autonomous flight. IEEE Robotics and Automation Letters, 2018, 3(2): 965-972. DOI:10.1109/LRA.2018.2793349
[17] Ramezani M, Khoshelham K, Fraser C. Pose estimation by omnidirectional visual-inertial odometry. Robotics and Autonomous Systems, 2018, 105: 26-37. DOI:10.1016/j.robot.2018.03.007
[18] Feng W, Zheng L. Rapid and robust initialization for monocular visual inertial navigation within multi-state Kalman filter. Chinese Journal of Aeronautics, 2018, 31(1): 148-160. DOI:10.1016/j.cja.2017.10.011
[19] Li DX. Indoor mobile robot localization based on multi-sensor fusion [Master's thesis]. Hangzhou: Zhejiang University, 2018. (in Chinese)
[20] Liu HHS, Pang GKH. Accelerometer for mobile robot positioning. IEEE Transactions on Industry Applications, 2001, 37(3): 812-819. DOI:10.1109/28.924763
[21] Wen K. Research on multi-state multi-view constrained visual/inertial integrated navigation algorithms [Master's thesis]. Changsha: National University of Defense Technology, 2016. (in Chinese)