Computer Systems & Applications, 2022, 31(12): 412-419
Segmentation of Indoor Moving Object Shadow Based on Improved UNet Network
(College of Computer Science and Technology, Shenyang University of Chemical Technology, Shenyang 110142, China)
Received: April 21, 2022    Revised: May 22, 2022
Abstract: Shadows caused by lighting changes are difficult to identify and segment in intelligent surveillance video of indoor environments. To address this, this study proposes a UNet network that combines transfer learning with the SENet channel attention mechanism. First, because shadow features are blurry and hard to extract effectively, the SENet channel attention mechanism is added to the upsampling part of the UNet model, raising the feature weights of informative regions without increasing the network parameters. A pre-trained VGG16 network is then transferred into the UNet model to achieve feature transfer and parameter sharing, which improves the model's generalization ability and reduces training cost. Finally, the segmentation result is produced by the decoder. Experimental results show that, compared with the original UNet algorithm, the improved UNet achieves a segmentation accuracy of 96.09% on moving objects and 92.24% on shadows, with a mean intersection over union (MIoU) of 92.58%, a significant improvement in performance.

Keywords: shadow; transfer learning; attention mechanism; UNet; deep learning
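To make the abstract's two modifications concrete, below is a minimal sketch, assuming PyTorch and torchvision (the abstract does not state the framework, and all module names, channel sizes, and the reduction ratio are illustrative assumptions rather than the authors' published code): an SE channel attention block attached to a UNet decoder stage, an ImageNet-pretrained VGG16 backbone loaded for transfer learning, and an MIoU computation of the kind behind the 92.58% figure.

```python
# A minimal sketch, assuming PyTorch/torchvision; names, channel sizes,
# and the reduction ratio are illustrative, not the authors' configuration.
import torch
import torch.nn as nn
from torchvision.models import vgg16

class SEBlock(nn.Module):
    """Squeeze-and-excitation channel attention: reweights feature
    channels using globally pooled statistics."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction, bias=False),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels, bias=False),
            nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = x.mean(dim=(2, 3))           # squeeze: global average pooling
        w = self.fc(w).view(b, c, 1, 1)  # excitation: per-channel weights in (0, 1)
        return x * w                     # emphasize informative channels

class UpBlock(nn.Module):
    """One UNet decoder stage: upsample, concatenate the encoder skip
    connection, convolve, then apply SE attention to the result."""
    def __init__(self, in_ch: int, skip_ch: int, out_ch: int):
        super().__init__()
        self.up = nn.ConvTranspose2d(in_ch, out_ch, kernel_size=2, stride=2)
        self.conv = nn.Sequential(
            nn.Conv2d(out_ch + skip_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.se = SEBlock(out_ch)

    def forward(self, x, skip):
        x = self.up(x)
        x = torch.cat([x, skip], dim=1)
        return self.se(self.conv(x))

# Transfer learning: reuse an ImageNet-pretrained VGG16 backbone as the
# UNet encoder so its convolutional weights are shared rather than trained
# from scratch (the weights argument requires torchvision >= 0.13).
encoder = vgg16(weights="IMAGENET1K_V1").features

def mean_iou(pred: torch.Tensor, target: torch.Tensor, num_classes: int = 3) -> float:
    """MIoU over class-index maps, e.g. background / moving object / shadow."""
    ious = []
    for c in range(num_classes):
        inter = ((pred == c) & (target == c)).sum().item()
        union = ((pred == c) | (target == c)).sum().item()
        if union:
            ious.append(inter / union)
    return sum(ious) / len(ious)

# Shape check for one decoder stage: 16x16 bottleneck features upsampled
# and fused with a 32x32 skip map from the encoder.
dec = UpBlock(in_ch=512, skip_ch=256, out_ch=256)
out = dec(torch.randn(1, 512, 16, 16), torch.randn(1, 256, 32, 32))
print(out.shape)  # torch.Size([1, 256, 32, 32])
```

Placing the SE block after each decoder stage, rather than inside the encoder, matches the abstract's description of adding channel attention to the upsampling path while leaving the pre-trained VGG16 weights untouched.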
Citation: LIU Ying, YANG Shuo. Segmentation of Indoor Moving Object Shadow Based on Improved UNet Network. Computer Systems & Applications, 2022, 31(12): 412-419.