Shanghai Key Laboratory of Intelligent Information Processing, Shanghai Engineering Research Center for Video Technology and System, Shanghai 201203, China
School of Computer Science, Fudan University, Shanghai 201203, China; Shanghai Key Laboratory of Intelligent Information Processing, Shanghai Engineering Research Center for Video Technology and System, Shanghai 201203, China
Very deep convolutional neural networks based on residual learning have achieved higher accuracy than other methods on large-scale face recognition. However, the massive number of floating-point parameters in these models consumes extensive computational and memory resources, which cannot be afforded in resource-constrained settings. To address this issue, a very deep residual neural network with quantized model parameters was designed in this study. Specifically, starting from the Face-ResNet model, batch normalization layers and dropout layers were added and the total number of layers was increased. Applying binary quantization to the parameters of the designed network substantially compresses the model size and improves computational efficiency with little loss of recognition accuracy. Both theoretical analysis and experiments demonstrate the effectiveness of the proposed method.
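The abstract describes binary quantization of the network's parameters. The sketch below illustrates one common way such quantization is realized: sign-based weight binarization with a per-filter scaling factor and a straight-through estimator for training. This is a minimal sketch under that assumption, not the paper's exact scheme; the class name BinaryConv2d and the dummy input shape are illustrative.

```python
# Minimal sketch of binary weight quantization for a convolutional layer,
# assuming a sign-based scheme with a per-output-channel scale
# (in the spirit of BinaryConnect / XNOR-Net); details may differ from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F


class BinaryConv2d(nn.Conv2d):
    """Conv layer whose full-precision weights are binarized in the forward pass.

    Real-valued weights are kept for gradient updates; the forward computation
    uses {-alpha, +alpha} weights, so the stored model can be compressed to
    1 bit per weight plus one scale per output channel.
    """

    def forward(self, x):
        w = self.weight
        # Per-output-channel scale: mean absolute value of each filter.
        alpha = w.abs().mean(dim=(1, 2, 3), keepdim=True)
        # Binarize for the forward pass; straight-through estimator lets
        # gradients flow to the full-precision weights as if identity.
        w_bin = alpha * torch.sign(w)
        w_q = w + (w_bin - w).detach()
        return F.conv2d(x, w_q, self.bias, self.stride,
                        self.padding, self.dilation, self.groups)


if __name__ == "__main__":
    layer = BinaryConv2d(3, 16, kernel_size=3, padding=1)
    x = torch.randn(2, 3, 112, 96)  # dummy input batch (shape is illustrative)
    y = layer(x)
    print(y.shape)  # torch.Size([2, 16, 112, 96])
```

Storing only the sign of each weight plus a per-channel float scale is what yields the large compression ratio the abstract refers to: roughly 32x for the weights, with multiplications in the convolution reducible to sign flips and accumulations.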