Abstract: Federated learning protects user privacy by aggregating models trained locally on clients, thereby keeping raw data on the client devices. Because a large number of devices participate in training, client data are typically not independent and identically distributed (non-IID), and communication bandwidth is limited. Reducing communication cost is therefore an important research direction for federated learning. Gradient compression is an effective way to improve the communication efficiency of federated learning; however, most commonly used gradient compression methods are designed for IID data and do not account for the characteristics of federated learning. To address the non-IID setting in federated learning, this study proposes a sparse ternary compression algorithm based on projection. Communication cost is reduced by gradient compression on both the client and the server, and the negative impact of non-IID client data is mitigated by gradient projection aggregation on the server. Experimental results show that the proposed algorithm not only improves communication efficiency but also outperforms existing gradient compression algorithms in convergence speed and accuracy.
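To make the two mechanisms named in the abstract concrete, the following is a minimal sketch, not the authors' implementation: sparse ternary compression keeps only the largest-magnitude gradient entries and quantizes them to a shared magnitude, and a projection step removes the conflicting component between two client updates. The function names, the top-k fraction `p`, the PCGrad-style projection rule, and the use of NumPy are illustrative assumptions.

```python
import numpy as np

def sparse_ternary_compress(grad: np.ndarray, p: float = 0.01) -> np.ndarray:
    """Keep the top-p fraction of entries by magnitude and ternarize them
    to {-mu, 0, +mu}, where mu is the mean magnitude of the kept entries."""
    flat = grad.ravel()
    k = max(1, int(p * flat.size))
    idx = np.argpartition(np.abs(flat), -k)[-k:]  # indices of top-k magnitudes
    mu = np.abs(flat[idx]).mean()                 # shared ternary magnitude
    out = np.zeros_like(flat)
    out[idx] = mu * np.sign(flat[idx])
    return out.reshape(grad.shape)

def project_if_conflicting(g_i: np.ndarray, g_j: np.ndarray) -> np.ndarray:
    """Assumed projection rule: if g_i conflicts with g_j (negative inner
    product), project g_i onto the normal plane of g_j before aggregation."""
    dot = float(np.dot(g_i.ravel(), g_j.ravel()))
    if dot < 0:
        g_i = g_i - dot / (np.linalg.norm(g_j.ravel()) ** 2 + 1e-12) * g_j
    return g_i
```

In this sketch, compressed updates could be projected pairwise on the server before averaging; the paper's actual aggregation order and projection criterion are specified in the method section.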