Abstract: Artificial neural networks (ANNs) have made significant progress in many fields, but their high demand for computing resources and energy consumption limits their deployment and application on hardware. Spiking neural networks (SNNs) perform well on neuromorphic hardware owing to their low power consumption and fast inference. However, the neural dynamics and spike propagation mechanism of SNNs make their training process complex, and current research focuses primarily on image classification tasks. This study applies SNNs to more complex computer vision tasks. Based on the YOLOv3-tiny network, we propose Spiking YOLOv3, a model whose structure conforms to the characteristics of SNNs. It achieves higher accuracy on detection tasks and reduces the average inference time to about one quarter of that of the original work. In addition, we analyze the conversion errors generated during the ANN-to-SNN conversion process and optimize Spiking YOLOv3 with a quantized activation function to reduce these errors. The optimized model further reduces the average inference time to about half of the original and achieves lossless ANN-to-SNN conversion on the VOC and UAV datasets, significantly improving detection efficiency.