Abstract: Deep learning models deployed in practical applications often require a degree of interpretability, and vision is one of the most fundamental tools humans use to understand the world around them. Visualization techniques can transform model training from an opaque black box into an interactive, analyzable visual process, effectively improving the credibility and interpretability of models. However, the field currently lacks a systematic review of visualization tools for deep learning models, as well as research on the actual needs of different users and the evaluation of user experience. This study therefore surveys recent literature on interpretability and visualization to summarize how visualization tools are currently applied across different fields. It proposes a classification method and criteria for visualization tools oriented toward their target users, and introduces and compares each category of tool in terms of visualization content, computational cost, and other aspects, so that different users can select and deploy tools suited to their needs. Finally, on this basis, open problems in the field of visualization are discussed and future directions are outlined.