Abstract: The uncertainty of a neural network reflects the predictive confidence of the deep learning model, enabling timely human intervention in unreliable decisions, which is crucial for enhancing system safety. However, existing uncertainty measurement methods often require significant modifications to the model or the training process, leading to high implementation complexity. To address this, this study proposes an uncertainty measurement approach based on statistical modeling and analysis of neuron activation values within a single forward pass. An improved kernel density estimation technique is employed to construct each neuron's activation distribution and estimate its normal operating range. A neighborhood-weighted density estimation method is then used to compute anomaly factors that quantify how far a test sample's activations deviate from these distributions. Finally, the per-neuron anomaly factors are statistically aggregated into a cumulative anomaly factor for the sample, providing a new perspective for assessing model uncertainty. Experimental results on multiple public datasets and models, including feature-map visualizations, demonstrate that the proposed method clearly distinguishes in-domain from out-of-domain samples. Moreover, the method performs strongly on out-of-domain detection, with AUROC exceeding that of competing methods across various experimental setups, validating its generality and effectiveness.
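To make the pipeline described above concrete, the following is a minimal sketch of the three steps: fitting per-neuron activation distributions, computing a neighborhood-weighted anomaly factor, and aggregating per-neuron factors into a sample-level score. It substitutes a standard Gaussian KDE for the paper's improved KDE and uses a simple k-nearest-neighborhood density ratio as the anomaly factor; all function names and the parameter `k` are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.stats import gaussian_kde

def fit_neuron_kdes(train_activations):
    """Fit a 1-D Gaussian KDE per neuron over its training activations.
    train_activations: array of shape (n_samples, n_neurons).
    Stand-in for the paper's improved KDE."""
    return [gaussian_kde(train_activations[:, j])
            for j in range(train_activations.shape[1])]

def anomaly_factor(kde, train_vals, x, k=20):
    """Neighborhood-weighted anomaly factor for a single neuron:
    the ratio of the mean density at the k training activations
    nearest to x over the density at x itself. Values far above 1
    indicate x falls in a low-density (anomalous) region."""
    neighbors = train_vals[np.argsort(np.abs(train_vals - x))[:k]]
    neighborhood_density = kde(neighbors).mean()
    return neighborhood_density / max(kde([x])[0], 1e-12)

def cumulative_anomaly(kdes, train_activations, test_activations, k=20):
    """Statistically combine per-neuron anomaly factors (here, a sum)
    into one uncertainty score for the test sample."""
    return sum(
        anomaly_factor(kde, train_activations[:, j], test_activations[j], k)
        for j, kde in enumerate(kdes)
    )
```

Under these assumptions, the score requires only the activations from a single forward pass of the test sample, consistent with the abstract's claim that no modification to the model or its training is needed.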