Received: April 17, 2019    Revised: May 21, 2019
Abstract: Image semantic segmentation methods based on deep convolutional neural networks require large amounts of pixel-level annotated training data, but the annotation process is time-consuming and laborious. This study proposes a semi-supervised image semantic segmentation method with an encoder-decoder structure based on generative adversarial networks, in which the encoder-decoder module serves as the generator. The entire network is trained by coupling the standard multi-class cross-entropy loss with an adversarial loss. To make full use of the rich semantic information contained in the shallow layers, features at different scales in the encoder are fed into the classifier, and the resulting classification outputs of different granularities are fused to refine object boundaries. In addition, the discriminator enables semi-supervised learning by discovering trusted regions in the segmentation results of unlabeled data, which provide additional supervisory signals. Experiments on PASCAL VOC 2012 and Cityscapes show that the proposed method outperforms existing semi-supervised image semantic segmentation methods.
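To make the training objective concrete, below is a minimal PyTorch-style sketch of the two loss terms the abstract describes: a supervised step coupling cross-entropy with an adversarial term, and a semi-supervised step that self-trains only on regions the discriminator marks as trusted. The `generator` and `discriminator` modules, the hyperparameters `lambda_adv`, `lambda_semi`, and the confidence threshold `t_semi` are all illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn.functional as F

# Sketch only: `generator` is an encoder-decoder segmentation network,
# `discriminator` is a fully convolutional net producing a per-pixel
# confidence map. Hyperparameter values are assumed, not from the paper.

def supervised_step(generator, discriminator, images, labels,
                    lambda_adv=0.01):
    """Labeled data: multi-class cross-entropy + adversarial loss."""
    logits = generator(images)                       # (N, C, H, W)
    ce_loss = F.cross_entropy(logits, labels)        # standard CE term

    probs = F.softmax(logits, dim=1)
    d_out = discriminator(probs)                     # (N, 1, H, W) logits
    # Adversarial term: push the discriminator to score the predicted
    # segmentation map as if it were a ground-truth label map (target 1).
    adv_loss = F.binary_cross_entropy_with_logits(
        d_out, torch.ones_like(d_out))
    return ce_loss + lambda_adv * adv_loss

def semi_supervised_step(generator, discriminator, images,
                         lambda_semi=0.1, t_semi=0.2):
    """Unlabeled data: self-training restricted to trusted regions."""
    logits = generator(images)
    probs = F.softmax(logits, dim=1)
    with torch.no_grad():
        confidence = torch.sigmoid(discriminator(probs))  # per-pixel trust
        pseudo = probs.argmax(dim=1)                      # pseudo labels
    mask = confidence.squeeze(1) > t_semi                 # trusted pixels
    if mask.any():
        loss = F.cross_entropy(logits, pseudo, reduction='none')
        return lambda_semi * loss[mask].mean()
    return logits.sum() * 0.0  # no trusted pixels in this batch
```

In this reading, the discriminator's confidence map acts as a per-pixel filter on pseudo-labels, so the unlabeled loss is only applied where the segmentation already looks plausible; pixels below the threshold contribute nothing.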
Keywords: image semantic segmentation; encoder-decoder; deep learning; generative adversarial networks; semi-supervised learning
Citation:
LIU Bei-Bei, HUA Bei. Encoder-Decoder for Semi-Supervised Image Semantic Segmentation. COMPUTER SYSTEMS APPLICATIONS, 2019, 28(11): 182-187.