Multi-Branch CapsNet Method with Enhanced Representation Capability
Authors: Xie Haiwen, Ye Dongyi, Chen Zhaojiong
Funding:

National Natural Science Foundation of China (61672158); Natural Science Foundation of Fujian Province (2018J1798); Fujian Provincial University-Industry Cooperation Project (2018H6010)
Abstract:

CapsNet is a recent object-recognition model that uses dynamic routing between capsules to recognize novel poses of known objects. However, the input layer of the CapsNet decoder grows with the number of classes, which limits the model's scalability. To overcome this weakness, this paper proposes the Multi-Branch Auto-Encoder (MAE), which passes the coding vector of each class to the decoder separately, making the size of the decoder independent of the number of classes and thereby improving scalability. For the task of training a multi-class recognizer on single-class images, a new optimization objective is added to suppress the excitation that the coding vectors of non-label classes deliver to the decoder, strengthening the model's representation capability. Experiments on the MNIST dataset show that MAE achieves competitive recognition accuracy and clearly outperforms CapsNet in reconstruction, indicating a more complete representation capability.
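The two ideas in the abstract, a per-class decoder branch whose size does not depend on the class count, and a penalty on what non-label coding vectors excite in the decoder, can be made concrete with a short sketch. The following is a minimal PyTorch illustration of how such a multi-branch decoder and suppression term could be wired; the layer sizes, decoder architecture, suppress_weight coefficient, and exact form of the penalty are assumptions for illustration, not the authors' published settings.

import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_CLASSES, CAPS_DIM, IMG_PIXELS = 10, 16, 784   # MNIST-style sizes (assumed)

class SharedDecoder(nn.Module):
    """One decoder reused by every class branch. Its input size is a single
    capsule's dimension, so it stays fixed as classes are added; the original
    CapsNet decoder instead takes all masked capsules concatenated
    (NUM_CLASSES * CAPS_DIM), so its first layer grows with the class count."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(CAPS_DIM, 512), nn.ReLU(),
            nn.Linear(512, 1024), nn.ReLU(),
            nn.Linear(1024, IMG_PIXELS), nn.Sigmoid(),
        )

    def forward(self, capsule):                 # capsule: (batch, CAPS_DIM)
        return self.net(capsule)                # (batch, IMG_PIXELS)

def multi_branch_loss(capsules, images, labels, decoder, suppress_weight=0.1):
    """capsules: (batch, NUM_CLASSES, CAPS_DIM) class-capsule outputs.
    Every class branch is decoded; the label branch must reproduce the input
    image, while non-label branches are pushed toward an empty output."""
    batch = capsules.size(0)
    flat = capsules.reshape(batch * NUM_CLASSES, CAPS_DIM)
    recons = decoder(flat).reshape(batch, NUM_CLASSES, IMG_PIXELS)

    label_mask = F.one_hot(labels, NUM_CLASSES).unsqueeze(-1).float()
    target = images.reshape(batch, 1, IMG_PIXELS)

    recon_loss = ((recons - target).pow(2) * label_mask).sum() / batch
    # Assumed form of the added objective: penalize the output energy that
    # non-label coding vectors excite in the shared decoder.
    suppress_loss = (recons.pow(2) * (1.0 - label_mask)).sum() / batch
    return recon_loss + suppress_weight * suppress_loss

Because the decoder is shared across branches, adding a class adds capsules but no decoder parameters, which is the scalability property the abstract claims; in training, a loss of this kind would be combined with the usual CapsNet margin loss over capsule lengths.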

Cite this article:

Xie HW, Ye DY, Chen ZJ. Multi-branch CapsNet method with enhanced representation capability. Computer Systems & Applications, 2019, 28(3): 111-117.
History
  • Received: 2018-09-18
  • Revised: 2018-10-12
  • Published online: 2019-02-22