Abstract: Image masking based on semantic segmentation is often used to suppress the interference of moving objects in three-dimensional (3D) reconstruction of static scenes. However, applying a mask to eliminate moving objects introduces a small number of invalid feature points. To address this problem, a method for eliminating moving objects at the feature-point level is proposed. A convolutional neural network is used to obtain moving-object information, and a feature point filtering module is constructed; the moving-object information is then used to filter and update the feature point list so that moving objects are eliminated completely. The proposed method is verified on a ground image dataset and an aerial image dataset, with DeepLabV3 and YOLOv4 used to extract the moving-object information. The results show that eliminating moving objects at the feature-point level removes them completely without generating additional invalid feature points. Compared with the image masking method, the proposed method shortens point cloud generation time by 13.36% and reduces the reprojection error by 9.93% on average.
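To illustrate the feature-point-level idea described above, the following is a minimal sketch, not the authors' implementation: it assumes a binary moving-object mask (e.g., produced by a segmentation or detection network such as DeepLabV3 or YOLOv4) and uses OpenCV SIFT keypoints as a stand-in for whatever feature detector the reconstruction pipeline actually employs; the function name `filter_moving_keypoints` is hypothetical.

```python
import numpy as np
import cv2


def filter_moving_keypoints(image, moving_mask):
    """Detect keypoints and keep only those lying outside moving-object regions.

    image       : HxW(x3) uint8 image
    moving_mask : HxW binary mask, nonzero where a moving object was detected
                  (assumed to come from a CNN such as DeepLabV3 or YOLOv4)
    """
    sift = cv2.SIFT_create()
    keypoints, descriptors = sift.detectAndCompute(image, None)
    if descriptors is None:          # no features found at all
        return [], np.empty((0, 128), dtype=np.float32)

    kept_kp, kept_desc = [], []
    for kp, desc in zip(keypoints, descriptors):
        x, y = int(round(kp.pt[0])), int(round(kp.pt[1]))
        if moving_mask[y, x] == 0:   # point lies on the static scene
            kept_kp.append(kp)
            kept_desc.append(desc)
    return kept_kp, np.asarray(kept_desc, dtype=np.float32)
```

Filtering the feature list before matching, rather than blacking out image regions, avoids creating the spurious feature points that can appear along mask boundaries.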