Abstract: Federated learning (FL) is an emerging distributed machine learning framework aimed at protecting data privacy while enabling efficient distributed computation. It allows multiple clients to collaboratively train a global model without sharing their raw data. However, because the data distributions of clients are heterogeneous, a single global model often fails to meet the personalized needs of different clients. To address this issue, this paper proposes a federated learning algorithm that combines self-distillation with decoupled knowledge distillation. The algorithm retains each client's historical local model as a teacher whose distilled knowledge guides the training of the current local model; the updated local model is then uploaded to the server for weighted-average aggregation. During distillation, target-class and non-target-class knowledge are decoupled and distilled separately, allowing personalized knowledge to be transferred more thoroughly. Experimental results show that the proposed method outperforms existing federated learning methods in classification accuracy on the CIFAR-10 and CIFAR-100 datasets.
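To make the two components of the abstract concrete, the sketch below shows one possible PyTorch realization of (a) a decoupled knowledge distillation loss that splits target-class and non-target-class knowledge, and (b) a client-side self-distillation step that uses the previous-round local model as the teacher. Function names, hyperparameters (`alpha`, `beta`, `T`, `lam`), and the masking constant are illustrative assumptions, not the paper's exact implementation.

```python
import torch
import torch.nn.functional as F

def decoupled_kd_loss(student_logits, teacher_logits, target,
                      alpha=1.0, beta=1.0, T=4.0):
    """Decoupled KD sketch: target-class (TCKD) + non-target-class (NCKD) terms.
    `alpha`, `beta`, and temperature `T` are hypothetical hyperparameters."""
    num_classes = student_logits.size(1)
    gt_mask = F.one_hot(target, num_classes=num_classes).bool()

    # Target-class knowledge: compare the binary split [p_target, p_non-target].
    p_s = F.softmax(student_logits / T, dim=1)
    p_t = F.softmax(teacher_logits / T, dim=1)
    b_s = torch.stack([(p_s * gt_mask).sum(1), (p_s * ~gt_mask).sum(1)], dim=1)
    b_t = torch.stack([(p_t * gt_mask).sum(1), (p_t * ~gt_mask).sum(1)], dim=1)
    tckd = F.kl_div(b_s.log(), b_t, reduction="batchmean") * (T ** 2)

    # Non-target-class knowledge: distribution over the remaining classes only.
    # A large negative value removes the target logit from the softmax.
    masked_s = student_logits / T - 1e9 * gt_mask.float()
    masked_t = teacher_logits / T - 1e9 * gt_mask.float()
    nckd = F.kl_div(F.log_softmax(masked_s, dim=1),
                    F.softmax(masked_t, dim=1),
                    reduction="batchmean") * (T ** 2)

    return alpha * tckd + beta * nckd


def local_step(model, prev_model, x, y, optimizer, lam=1.0):
    """One client-side update: cross-entropy plus self-distillation from the
    client's frozen historical model (weighting `lam` is an assumption)."""
    model.train()
    with torch.no_grad():
        teacher_logits = prev_model(x)   # previous-round local model as teacher
    student_logits = model(x)
    loss = F.cross_entropy(student_logits, y) \
        + lam * decoupled_kd_loss(student_logits, teacher_logits, y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

After local training, the updated model parameters would be sent to the server and combined by weighted averaging in the usual FedAvg style; the sketch above covers only the client-side distillation described in the abstract.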