Abstract: Unmanned aerial vehicles (UAVs) cannot identify and locate foreign objects in the scene during inspection in low-light environments, which prevents downstream intelligent algorithms from obtaining semantic information about the environment. To address this, this study proposes a method that fuses information from the ORB-SLAM2 algorithm with the YOLOv5 model to improve object detection in low light. First, low-light datasets are collected with an RGB-D camera for deep-learning training and validation of the fusion algorithm. Then, the pixel coordinates of the target are extracted by combining keyframe information, the output of the object detection module, and the camera's intrinsic parameters. Finally, the position of the target object in the world coordinate system is solved from the keyframe pose and the pixel coordinates. The proposed method achieves more accurate recognition of target objects in low-light environments and localizes them in the world coordinate system with sub-meter accuracy, providing an effective solution for intelligent UAV inspection in low-light environments.
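
As a minimal sketch of the localization step summarized above, the following Python snippet back-projects a detected pixel with its RGB-D depth through a pinhole camera model and transforms it into the world frame using a keyframe pose. The function name, intrinsic values, and pose shown here are illustrative assumptions for clarity, not the paper's implementation.

```python
import numpy as np

def pixel_to_world(u, v, depth, fx, fy, cx, cy, T_wc):
    """Back-project a detected pixel with depth into the world frame.

    (u, v): pixel coordinates of the detection (e.g., the YOLOv5 box center)
    depth:  depth at (u, v) from the RGB-D camera, in meters
    fx, fy, cx, cy: camera intrinsic parameters
    T_wc:   4x4 keyframe pose (camera-to-world) from the SLAM module
    """
    # Pinhole back-projection from pixel + depth into the camera frame
    x_c = (u - cx) * depth / fx
    y_c = (v - cy) * depth / fy
    p_c = np.array([x_c, y_c, depth, 1.0])

    # Transform the camera-frame point into the world frame via the keyframe pose
    p_w = T_wc @ p_c
    return p_w[:3]

# Example with made-up intrinsics and an identity keyframe pose
K = dict(fx=615.0, fy=615.0, cx=320.0, cy=240.0)
T_wc = np.eye(4)  # identity: camera frame coincides with world frame
print(pixel_to_world(350, 260, 2.5, **K, T_wc=T_wc))
```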