In federated learning, barriers such as industry competition and privacy protection require user data to remain local, so models cannot be trained in a centralized manner. To make full use of users' data and computing power, users can collaboratively train a model through a central server and share the resulting common model. However, the common model produces the same output for all users and therefore adapts poorly to the common situation where users' data are heterogeneous. To address this problem, this study proposes a new algorithm based on the meta-learning method Reptile to learn personalized federated learning models for users. Reptile efficiently learns model initialization parameters across multiple tasks; when a new task arrives, only a few gradient descent steps are needed to converge to good model parameters. Leveraging this advantage, the algorithm combines Reptile with federated averaging (FedAvg): each user terminal applies Reptile to its local tasks and updates its parameters, after which the central server averages the users' updated parameters, thereby iteratively learning a better model initialization. Finally, applying this initialization to each user's data yields a personalized model after only a few gradient descent steps. Federated learning scenarios are set up with both simulated and real data, and the experiments show that the proposed algorithm converges faster and offers better personalized learning ability than competing algorithms.
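The training loop described above can be illustrated with a minimal sketch. This is not the authors' implementation: the scalar linear model, the client data, and all hyperparameters (inner steps, learning rates, number of rounds) are illustrative assumptions. Each client performs a Reptile-style update on its local data, the server averages the resulting parameters (FedAvg), and personalization is a few gradient steps from the learned initialization.

```python
# Hypothetical sketch of Reptile + FedAvg for personalization.
# Each client k holds heterogeneous data generated by y = w_k * x + noise;
# the goal is a shared initialization that adapts to any client in a few steps.
import numpy as np

rng = np.random.default_rng(0)

true_w = [1.0, 2.0, 3.0]  # assumed per-client ground-truth parameters
clients = []
for w_k in true_w:
    x = rng.normal(size=20)
    clients.append((x, w_k * x + 0.01 * rng.normal(size=20)))

def sgd_steps(w, x, y, steps=5, lr=0.05):
    """Plain gradient descent on squared error for the scalar model y ~ w*x."""
    for _ in range(steps):
        grad = 2.0 * np.mean((w * x - y) * x)
        w = w - lr * grad
    return w

def reptile_update(w_init, x, y, inner_steps=5, inner_lr=0.05, meta_lr=0.5):
    """One Reptile step: move the initialization toward the adapted weights."""
    w_adapted = sgd_steps(w_init, x, y, inner_steps, inner_lr)
    return w_init + meta_lr * (w_adapted - w_init)

# Outer loop: clients run Reptile locally; the server averages (FedAvg).
w_global = 0.0
for _ in range(50):
    local = [reptile_update(w_global, x, y) for x, y in clients]
    w_global = float(np.mean(local))

# Personalization: only a few gradient steps from the shared initialization.
personalized = [sgd_steps(w_global, x, y, steps=5) for x, y in clients]
```

After training, `w_global` settles near the center of the clients' parameters, while each personalized model moves toward its own client's optimum, which is the behavior the abstract describes: a shared initialization that individual users adapt with a handful of gradient steps.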