Computer Systems & Applications, 2020, 29(9): 16-25
Research Progress on Convolutional Neural Network Compression and Acceleration Technology
(1. Inspur Electronic Information Industry Co. Ltd., Jinan 250101, China; 2. Guangdong Inspur Big Data Research Co. Ltd., Guangzhou 510632, China)
Received: 2020-02-26    Revised: 2020-03-17
Chinese abstract (translated): The emergence of neural network compression techniques has eased the difficulty of deploying deep neural network models on resource-constrained devices such as mobile or embedded devices. However, neural network compression still faces challenges in automating the compression process, resolving the conflict between sparsity and hardware deployment, and avoiding retraining of compressed models. Building on a review of classic neural network models and existing compression toolkits, this paper summarizes the advantages and disadvantages of representative algorithms in four categories of compression methods: parameter pruning, parameter quantization, low-rank decomposition, and knowledge distillation. It then outlines the evaluation metrics and common datasets for compression methods, analyzes the performance of the various methods under different tasks and hardware resource constraints, and discusses promising research directions for neural network compression.
Abstract: The development of neural network compression has eased the difficulty of running deep neural networks on resource-constrained devices, such as mobile or embedded devices. However, neural network compression still faces challenges in automating the compression process, reconciling sparsity with hardware deployment, and avoiding retraining of compressed networks. This paper first reviews classic neural network models and current compression toolkits, then summarizes the advantages and weaknesses of representative methods in four categories: parameter pruning, parameter quantization, low-rank factorization, and knowledge distillation. It lists the evaluation metrics and common datasets used for performance evaluation, analyzes compression performance under different tasks and resource constraints, and finally outlines promising development trends as references for advancing neural network compression techniques.
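To make the first of the four method families concrete, the sketch below illustrates magnitude-based parameter pruning, one common representative of the parameter-pruning category the survey covers. It is a minimal NumPy illustration written for this summary, not code from the paper: weights whose absolute value falls in the smallest `sparsity` fraction are zeroed out, producing the sparse model that the abstract notes must then be reconciled with hardware deployment.

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude `sparsity` fraction of weights."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)  # number of weights to remove
    if k == 0:
        return weights.copy()
    # k-th smallest magnitude becomes the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > threshold  # keep only larger-magnitude weights
    return weights * mask

rng = np.random.default_rng(0)
w = rng.standard_normal((4, 4))
pruned = magnitude_prune(w, 0.5)
print(float(np.mean(pruned == 0)))  # fraction of zeroed weights
```

In practice such unstructured sparsity only speeds up inference if the target hardware or runtime exploits sparse formats, which is exactly the sparsity-versus-deployment tension the abstract highlights; structured pruning (removing whole filters or channels) trades flexibility for hardware friendliness.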
Article ID: 7632
Citation:
YIN Wen-Feng, LIANG Ling-Yan, PENG Hui-Min, CAO Qi-Chun, ZHAO Jian, DONG Gang, ZHAO Ya-Qian, ZHAO Kun. Research Progress on Convolutional Neural Network Compression and Acceleration Technology. Computer Systems & Applications, 2020, 29(9): 16-25.