Abstract: Traditional sign language recognition relies only on manually selected low-level features, which makes it difficult to adapt to the diverse backgrounds of sign language images. This study proposes a sign language recognition method based on multi-factor skin-color segmentation and an improved VGG network. The collected sign language images are first segmented with an elliptic skin-color model. Spurious skin-color regions are then excluded according to the maximum connected domain, and skin-color regions outside the hand are removed by a centroid positioning method, so that the sign language images are accurately segmented. The VGG network is improved by reducing the number of convolutional and fully connected layers, which lowers the required storage capacity and the number of parameters. The gray-scale image of the segmented hand is used as the network input, and the improved VGG network is used to build the sign language recognition model. A comparison of the recognition rates achieved by network models with different structures shows that the improved VGG network can learn features effectively, with an average recognition rate above 97% on the sign language images.
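The segmentation stage described above could be sketched roughly as follows. This is a minimal sketch assuming an OpenCV/NumPy environment; the ellipse parameters in the CbCr plane and the centroid-distance threshold are illustrative assumptions (commonly used values for elliptic skin models), not values taken from the paper.

```python
import cv2
import numpy as np

def segment_hand(bgr_image, cx=109.38, cy=152.02, a=25.39, b=14.03, theta=2.53,
                 centroid_dist_thresh=80.0):
    """Rough sketch of multi-factor skin-color segmentation.

    The ellipse centre (cx, cy), axes (a, b), rotation theta and the
    centroid-distance threshold are assumed example values.
    """
    # 1. Elliptic skin-color model in the CbCr plane.
    ycrcb = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2YCrCb)
    _, cr, cb = cv2.split(ycrcb)
    db = cb.astype(np.float32) - cx
    dr = cr.astype(np.float32) - cy
    # Rotate coordinates into the ellipse's principal axes.
    x = np.cos(theta) * db + np.sin(theta) * dr
    y = -np.sin(theta) * db + np.cos(theta) * dr
    skin_mask = (((x / a) ** 2 + (y / b) ** 2) <= 1.0).astype(np.uint8) * 255

    # 2. Label connected skin-color regions; treat the largest one as the hand candidate.
    num, labels, stats, centroids = cv2.connectedComponentsWithStats(skin_mask)
    if num <= 1:
        return skin_mask  # no skin-color region found
    areas = stats[1:, cv2.CC_STAT_AREA]
    hand_label = 1 + int(np.argmax(areas))
    hand_centroid = centroids[hand_label]

    # 3. Centroid positioning: drop skin-color regions whose centroids lie far
    #    from the hand candidate (e.g. the face).
    keep = np.zeros_like(skin_mask)
    for lbl in range(1, num):
        dist = np.linalg.norm(centroids[lbl] - hand_centroid)
        if lbl == hand_label or dist < centroid_dist_thresh:
            keep[labels == lbl] = 255
    return keep
```

The resulting mask could then be applied to the gray-scale image to produce the segmented hand image that is fed to the reduced VGG network.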