Received: October 22, 2017    Revised: November 10, 2017
Abstract (Chinese): In a traditional neural network the learning rate is set manually from experience; a rate that is too large or too small easily leads to non-convergence or slow convergence. This paper proposes an adaptive learning rate method based on weight change, which removes the influence of human experience on the learning rate and improves error accuracy, and combines a normal distribution model with gradient ascent to speed up convergence. Taking the BP neural network as an example and comparing against a network with a fixed learning rate, simulation on the classical XOR problem shows that the improved network converges faster and with smaller error.
Abstract: An adaptive learning rate method based on weight change is proposed in this study to improve the learning rate of the traditional neural network. If the learning rate is set too large or too small, the network fails to converge or converges slowly. To offset this disadvantage, the study puts forward a new learning rate based on the weight gradient, removing the dependence of the learning rate on human experience, and combines a normal distribution model with gradient ascent to improve error accuracy and convergence speed. Taking the BP neural network as an example, comparing against a fixed learning rate, and simulating the classical XOR problem, we verify the proposed method. The results show that the improved neural network has a faster convergence speed and a smaller error.
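To make the idea concrete, the sketch below trains a small BP network on the XOR problem and adapts the learning rate from the change in the weight gradient between epochs. It is only an illustration under assumed choices: the 2-2-1 architecture, the growth/decay factors, the initial rate, and the omission of the paper's normal-distribution term are all assumptions, since the exact update rule is not given on this page.

```python
import numpy as np

# Minimal BP network for XOR with a simple adaptive learning rate.
# NOTE: the adaptation rule (scaling the rate by the change in gradient norm,
# a proxy for weight change) is an illustrative assumption, not the paper's formula.

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
Y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# 2-2-1 network (assumed size)
W1 = rng.normal(scale=1.0, size=(2, 2)); b1 = np.zeros((1, 2))
W2 = rng.normal(scale=1.0, size=(2, 1)); b2 = np.zeros((1, 1))

eta = 0.5          # initial learning rate (assumed value)
prev_gnorm = None  # previous gradient norm, used to adapt eta

for epoch in range(10000):
    # forward pass
    h = sigmoid(X @ W1 + b1)
    y = sigmoid(h @ W2 + b2)
    err = y - Y
    loss = 0.5 * np.mean(err ** 2)

    # backward pass (standard BP with sigmoid derivatives)
    d_out = err * y * (1 - y)
    dW2 = h.T @ d_out; db2 = d_out.sum(axis=0, keepdims=True)
    d_hid = (d_out @ W2.T) * h * (1 - h)
    dW1 = X.T @ d_hid; db1 = d_hid.sum(axis=0, keepdims=True)

    # adapt the learning rate: grow eta while the weight change (gradient norm)
    # keeps shrinking, shrink it when the gradient norm grows -- a crude stand-in
    # for the paper's weight-change-based rule.
    gnorm = np.sqrt((dW1 ** 2).sum() + (dW2 ** 2).sum())
    if prev_gnorm is not None:
        eta *= 1.05 if gnorm < prev_gnorm else 0.7
        eta = float(np.clip(eta, 1e-3, 5.0))
    prev_gnorm = gnorm

    # gradient-descent weight update
    W1 -= eta * dW1; b1 -= eta * db1
    W2 -= eta * dW2; b2 -= eta * db2

print("final loss:", loss)
print("outputs:", y.ravel().round(3))
```

With a fixed learning rate the same network would use a constant eta throughout; the point of the adaptive variant is that eta grows on smooth stretches of the error surface and shrinks when the gradient starts oscillating. Depending on the random seed, the 2-2-1 XOR network may still need more epochs or a restart to reach a low error.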
Keywords: neural network; adaptive learning rate; normal distribution model; gradient ascent method; XOR problem
Citation:
ZHU Zhen-Guo, TIAN Song-Lu. Improvement of Learning Rate of Feed Forward Neural Network Based on Weight Gradient. Computer Systems & Applications, 2018, 27(7): 205-210.