Received: January 04, 2023  Revised: February 03, 2023
Abstract: In indoor robot visual navigation, drivable area detection is an indispensable component and the basis for autonomous driving. Most existing solutions detect the drivable area by recognizing obstacles that appear in the dataset, which lacks flexibility. This study therefore proposes a drivable area detection method for flat indoor floors, such as those in subway stations, to improve practicality. The classic MobileNetV3 network classifies the captured forward-view images to determine whether they show a ground area. Because floor stickers such as landmarks and arrows affect this classification, non-ground regions must be examined further to distinguish them from conventional three-dimensional obstacles. To this end, feature points are matched between consecutive frames to obtain the camera's moving distance, and straight-line fitting is used to compute a slope that separates three-dimensional obstacles from flat floor markings. Experiments show that the proposed method detects the drivable area in front of the robot well and has high practical value.
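The abstract names two concrete ingredients: a MobileNetV3 ground / non-ground classifier and a feature-matching plus line-fitting step. The sketches below illustrate both under stated assumptions; they are not the authors' implementation, whose details (network variant, matching method, slope criterion) are not given on this page.

First, a minimal sketch of adapting a stock MobileNetV3 to a two-class ground / non-ground head, assuming torchvision and the Small variant (the variant choice is an assumption):

```python
import torch.nn as nn
from torchvision import models

# Assumed setup: torchvision's MobileNetV3-Small with its final layer
# replaced by a two-class (ground / non-ground) head. Pretrained weights
# would typically be loaded before fine-tuning.
model = models.mobilenet_v3_small(weights=None)
model.classifier[-1] = nn.Linear(model.classifier[-1].in_features, 2)
```

Second, a minimal sketch of the geometric step, assuming OpenCV and NumPy: ORB features are matched between two consecutive frames, and a straight line is fitted to the per-feature pixel displacement against image row. The function names and the use of displacement-vs-row as the fitted quantity are illustrative assumptions; the paper's exact slope criterion may differ.

```python
import cv2
import numpy as np

def matched_displacements(frame_prev, frame_curr, max_matches=200):
    """Match ORB features between two grayscale frames and return
    (image row, pixel displacement) pairs for the matched points."""
    orb = cv2.ORB_create()
    kp1, des1 = orb.detectAndCompute(frame_prev, None)
    kp2, des2 = orb.detectAndCompute(frame_curr, None)
    if des1 is None or des2 is None:
        return np.empty((0, 2))
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:max_matches]
    rows, disps = [], []
    for m in matches:
        x1, y1 = kp1[m.queryIdx].pt
        x2, y2 = kp2[m.trainIdx].pt
        rows.append(y1)                           # image row of the feature
        disps.append(np.hypot(x2 - x1, y2 - y1))  # how far it moved between frames
    return np.column_stack([rows, disps])

def displacement_slope(samples):
    """Fit a straight line to displacement vs. image row and return its slope.
    The assumption here is that points on the flat floor and points on a raised
    obstacle yield different slopes as the camera moves forward."""
    if len(samples) < 2:
        return 0.0
    slope, _ = np.polyfit(samples[:, 0], samples[:, 1], 1)
    return slope

# Hypothetical usage with two consecutive grayscale frames:
# prev = cv2.imread("frame_t0.png", cv2.IMREAD_GRAYSCALE)
# curr = cv2.imread("frame_t1.png", cv2.IMREAD_GRAYSCALE)
# slope = displacement_slope(matched_displacements(prev, curr))
```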
Funding: Science and Technology Innovation Special Zone Program (20-163-14-LZ-001-004-01)
Citation:
WANG Si-Gong, ZHU Ming. Drivable Area Detection in Subway Station Scenario. COMPUTER SYSTEMS APPLICATIONS, 2023, 32(7): 211-218