Abstract: Federated learning allows multiple users to collaboratively train models without sharing their original data. To ensure that users' local datasets are not leaked, existing works propose secure aggregation protocols. However, most existing schemes fail to protect the privacy of the global model and incur high computational and communication costs. To address these problems, this study proposes an efficient and secure privacy-preserving aggregation scheme for federated learning. The scheme uses symmetric homomorphic encryption to protect the privacy of both the user models and the global model, and adopts secret sharing to handle user dropout. In addition, the Pedersen commitment is applied to verify the correctness of the aggregation results returned by the cloud server, and the BLS signature is used to protect data integrity during interactions between the users and the cloud server. Security analysis shows that the proposed protocol is provably secure, and performance analysis indicates that it is efficient and practical for federated learning systems with large numbers of users.