Abstract: Designing a convolutional neural network consumes substantial human effort, and training one demands considerable computing power. Replacing pooling operations with dilated convolutions can considerably enlarge the receptive field and reduce computational complexity, but dilated convolution causes a loss of spatial hierarchy and information continuity. This study proposes a parallel asymmetric dilated convolution module that recovers the information lost by dilated convolution and can be embedded in existing convolutional neural networks as a replacement for the 3×3 convolution during training. As a result, network convergence is accelerated and network performance is improved. The experimental results show that the proposed module significantly improves the classification accuracy of several classical networks on CIFAR-10 and other datasets.
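To make the idea concrete, the following is a minimal PyTorch sketch of one plausible form of such a module: two parallel branches of asymmetric (1×3 then 3×1) convolutions, one plain and one dilated, summed so that the plain branch fills in the pixels the dilated branch skips. The branch layout, dilation rate, and the class name `ParallelAsymmetricDilatedConv` are assumptions for illustration, not the paper's exact design.

```python
import torch
import torch.nn as nn


class ParallelAsymmetricDilatedConv(nn.Module):
    """Illustrative drop-in replacement for a 3x3 convolution (a sketch,
    not the paper's exact module). A plain asymmetric branch preserves
    local continuity while a dilated asymmetric branch enlarges the
    receptive field; their sum combines both."""

    def __init__(self, in_channels: int, out_channels: int, dilation: int = 2):
        super().__init__()
        # Plain asymmetric branch: 1x3 followed by 3x1; padding keeps size.
        self.local = nn.Sequential(
            nn.Conv2d(in_channels, out_channels, (1, 3), padding=(0, 1)),
            nn.Conv2d(out_channels, out_channels, (3, 1), padding=(1, 0)),
        )
        # Dilated asymmetric branch: same output size, larger receptive field.
        self.dilated = nn.Sequential(
            nn.Conv2d(in_channels, out_channels, (1, 3),
                      padding=(0, dilation), dilation=(1, dilation)),
            nn.Conv2d(out_channels, out_channels, (3, 1),
                      padding=(dilation, 0), dilation=(dilation, 1)),
        )
        self.bn = nn.BatchNorm2d(out_channels)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Summing the branches lets the plain branch supply the spatial
        # positions the dilated branch skips over.
        return self.act(self.bn(self.local(x) + self.dilated(x)))


# Usage: swap for an existing nn.Conv2d(64, 64, 3, padding=1).
block = ParallelAsymmetricDilatedConv(64, 64, dilation=2)
y = block(torch.randn(1, 64, 32, 32))
print(y.shape)  # torch.Size([1, 64, 32, 32])
```

Because both branches preserve the input's spatial size, the module can be substituted for a 3×3 convolution in an existing architecture without altering the surrounding layers.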