Abstract: This study proposes an adaptive learning-rate method, based on weight change, to improve the learning rate of traditional neural networks. If the learning rate is too large, the network struggles to converge; if it is too small, convergence is too slow. To overcome this drawback, we put forward a new learning rate based on the weight gradient, which accelerates convergence and removes the dependence of the traditional learning rate on human experience, and we combine it with the normal distribution and the gradient-ascent method to assess error accuracy and convergence speed. Taking the BP neural network as an example, we verify the proposed method by comparing it against a fixed-learning-rate network on a simulation of the classical XOR problem. The results show that the improved network converges faster and achieves a smaller error.
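The abstract does not give the exact weight-gradient update rule, so the following is only an illustrative sketch: it trains a small BP network on the XOR problem from the paper and adapts the learning rate with the classical "bold driver" heuristic (grow the step while the error falls, shrink it when the error rises) as a stand-in for the proposed rule. The function name `train_xor` and all hyperparameters are hypothetical choices, not the paper's.

```python
import numpy as np

def train_xor(epochs=5000, lr=0.5, seed=0):
    """Train a 2-4-1 sigmoid BP network on XOR with an adaptive step size.

    Stand-in adaptation (not the paper's rule): bold driver, i.e. the
    learning rate grows 5% after an epoch that reduces the error and is
    halved after an epoch that increases it.
    """
    rng = np.random.default_rng(seed)
    X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
    y = np.array([[0.], [1.], [1.], [0.]])
    W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
    W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    losses = []
    for _ in range(epochs):
        h = sig(X @ W1 + b1)      # hidden layer activations
        out = sig(h @ W2 + b2)    # network output
        err = out - y
        losses.append(float((err ** 2).mean()))
        # adapt the step size from the error trend (assumed rule)
        if len(losses) > 1:
            lr = lr * 1.05 if losses[-1] < losses[-2] else lr * 0.5
        # standard BP gradients for the sigmoid layers
        d_out = err * out * (1 - out)
        d_h = (d_out @ W2.T) * h * (1 - h)
        W2 -= lr * (h.T @ d_out); b2 -= lr * d_out.sum(0)
        W1 -= lr * (X.T @ d_h);   b1 -= lr * d_h.sum(0)
    return losses

losses = train_xor()
print(f"initial MSE {losses[0]:.4f} -> final MSE {losses[-1]:.4f}")
```

Replacing the bold-driver line with the paper's weight-gradient rule, or freezing `lr` to a constant, reproduces the adaptive-versus-fixed comparison the abstract describes.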