Abstract: In federated learning, barriers such as industry competition and privacy protection require users to keep their data locally, so models cannot be trained in a centralized manner. Instead, users train models cooperatively through a central server, fully utilizing their data and computing power, and share the resulting common model. However, the common model produces the same output for every user, so it cannot readily be applied to the common situation in which users' data are heterogeneous. To solve this problem, this study proposes a new algorithm, based on the meta-learning method Reptile, that learns personalized federated learning models for users. Reptile efficiently learns initial model parameters across multiple tasks; when a new task arrives, only a few steps of gradient descent are needed to converge to satisfactory model parameters. Leveraging this advantage, the proposed algorithm combines Reptile with federated averaging (FedAvg): each user terminal applies Reptile to its local tasks to update the parameters, after which the central server averages the users' updated parameters and iteratively learns a better model initialization. Finally, applying the proposed algorithm to each user's data yields a personalized model after only a few steps of gradient descent. In the experiments, this study sets up federated learning scenarios with both simulated and real data. The results show that the proposed algorithm converges faster and offers better personalized learning ability than competing algorithms.
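The training loop described in the abstract (clients run a few Reptile gradient steps locally; the server averages the returned parameters FedAvg-style and moves the shared initialization toward that average) can be sketched as follows. This is a minimal illustration under assumed choices, not the paper's implementation: the client model (linear regression), the learning rates, the step counts, and all function names here are placeholders for exposition.

```python
import numpy as np

def local_reptile_update(w_init, X, y, inner_steps=5, lr=0.05):
    """A few gradient-descent steps on one client's local task.
    Linear regression stands in for the client model (an assumption)."""
    w = w_init.copy()
    for _ in range(inner_steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # MSE gradient
        w -= lr * grad
    return w

def federated_reptile(client_data, dim, rounds=50, meta_lr=0.5):
    """Server loop: broadcast the shared initialization, collect each
    client's locally updated parameters, average them (FedAvg), and
    move the initialization toward the average (Reptile meta-update)."""
    w_init = np.zeros(dim)
    for _ in range(rounds):
        updates = [local_reptile_update(w_init, X, y) for X, y in client_data]
        w_avg = np.mean(updates, axis=0)      # FedAvg aggregation
        w_init += meta_lr * (w_avg - w_init)  # Reptile meta-step
    return w_init

# Heterogeneous clients: each client's true parameters are shifted,
# so a single common model cannot fit all of them well.
rng = np.random.default_rng(0)
clients, w_trues = [], []
for shift in (-1.0, 0.0, 1.0):
    w_true = np.array([1.0, -2.0, 0.5]) + shift
    X = rng.normal(size=(40, 3))
    clients.append((X, X @ w_true + 0.01 * rng.normal(size=40)))
    w_trues.append(w_true)

w0 = federated_reptile(clients, dim=3)
# Personalization: a few local gradient steps from the learned
# initialization adapt the model to one client's own data.
w_personal = local_reptile_update(w0, *clients[0], inner_steps=10)
```

The design point this sketch captures is that the server never sees raw client data: it receives only parameter vectors, averages them, and uses the average as the next shared initialization, which each client then personalizes locally.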