Abstract: Data collection and annotation must be adapted to actual engineering requirements and to fine-grained class definitions, which makes it difficult to keep the samples completely independent and identically distributed (i.i.d.). Non-i.i.d. data seriously degrade the training robustness of deep neural network models and their generalization performance on specific tasks. To overcome these shortcomings, this study proposes an improved batch normalization algorithm: when training starts, a fixed reference batch is normalized and its mean and variance are computed; the statistics of this reference batch are then used to normalize the other batches. Experimental results show that the proposed algorithm accelerates the training convergence of the neural network model and reduces the classification error by 0.3% compared with the standard BN algorithm. In addition, the robustness of the neural network model and the generalization performance of detection frameworks such as object detection and instance segmentation are also improved.
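The core idea described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes a 2-D input of shape `[batch, features]`, and the class name `ReferenceBatchNorm` and its parameters are hypothetical. The technique resembles what the GAN literature calls virtual batch normalization, in that the statistics come from a frozen reference batch rather than from each incoming batch.

```python
import numpy as np

class ReferenceBatchNorm:
    """Sketch of reference-batch normalization: a fixed reference batch is
    normalized once when training starts, and its mean/variance are reused
    to normalize every subsequent batch (instead of per-batch statistics)."""

    def __init__(self, reference_batch, eps=1e-5):
        # Statistics are computed once from the fixed reference batch
        # (shape: [N, features]) and frozen for the rest of training.
        self.mean = reference_batch.mean(axis=0)
        self.var = reference_batch.var(axis=0)
        self.eps = eps
        # Learnable affine parameters, as in standard BN.
        self.gamma = np.ones_like(self.mean)
        self.beta = np.zeros_like(self.mean)

    def __call__(self, batch):
        # Normalize any batch with the frozen reference statistics,
        # so non-i.i.d. batches no longer shift the normalization.
        x_hat = (batch - self.mean) / np.sqrt(self.var + self.eps)
        return self.gamma * x_hat + self.beta
```

Because the statistics never change after initialization, every batch is mapped through the same affine transform, which is what decouples the normalization from the (possibly non-i.i.d.) composition of each training batch.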