Abstract: High-resolution remote sensing images contain rich spatial features. To address the problems of model complexity, blurred boundaries, and multi-scale object segmentation in remote sensing land cover methods, this study proposes a lightweight semantic segmentation network based on boundary and multi-scale information. First, the method uses a lightweight MobileNetV3 classifier and depthwise separable convolutions to reduce computation. Second, it adopts top-down and bottom-up feature pyramid structures for multi-scale segmentation. Next, a boundary enhancement module is designed to provide rich boundary detail for the segmentation task. Then, a feature fusion module is designed to fuse boundary features with multi-scale semantic features. Finally, the method applies cross-entropy and Dice loss functions to address class imbalance. On the WHDLD dataset, the mean intersection over union reaches 59.64% and the overall accuracy reaches 87.68%; on the DeepGlobe dataset, the mean intersection over union reaches 70.42% and the overall accuracy reaches 88.81%. The experimental results show that the model can quickly and effectively perform land cover classification of remote sensing images.
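As a minimal illustration of the combined cross-entropy and Dice loss mentioned above, the sketch below computes both terms for a single foreground class over flattened pixel probabilities. The function names, the equal default weighting, and the smoothing constants are illustrative assumptions, not details taken from the paper.

```python
import math

def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss for one class: 1 - 2|P∩T| / (|P| + |T|).
    pred: predicted foreground probabilities per pixel; target: 0/1 labels."""
    inter = sum(p * t for p, t in zip(pred, target))
    total = sum(pred) + sum(target)
    return 1.0 - (2.0 * inter + eps) / (total + eps)

def cross_entropy_loss(pred, target, eps=1e-12):
    """Pixel-averaged binary cross-entropy."""
    return -sum(t * math.log(p + eps) + (1 - t) * math.log(1.0 - p + eps)
                for p, t in zip(pred, target)) / len(pred)

def combined_loss(pred, target, w_ce=1.0, w_dice=1.0):
    # Weighted sum (weights are assumed): cross-entropy supplies stable
    # per-pixel gradients, while Dice counteracts class imbalance because
    # it is driven by overlap rather than pixel counts.
    return w_ce * cross_entropy_loss(pred, target) + w_dice * dice_loss(pred, target)
```

For example, a confident correct prediction yields a near-zero loss, while swapping the predicted probabilities for the same labels yields a much larger one, which is the gradient signal that drives training toward better overlap on rare classes.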