Abstract: Natural disasters take many forms, which makes the semantic segmentation of remote sensing images challenging. To improve remote sensing image segmentation, this study proposes a three-layer semantic segmentation model for remote sensing images based on a generative adversarial network. To analyze different disaster scenes, a multi-level remote sensing semantic segmentation framework is designed on top of a fully convolutional network (FCN); it performs semantic segmentation of remote sensing images effectively and thereby improves the model's segmentation accuracy. Experiments demonstrate the model's effectiveness, as can be observed directly in the segmentation results for damaged buildings, where it achieves an mIoU of 82.28%. The model is also compared with other network models, and its performance evaluation indices are significantly better than theirs. Finally, by analyzing images of various natural disaster scenes, the model provides reliable data reports to emergency management departments.