Computer Systems & Applications, 2024, 33(5): 228-238
Comparative Analysis of Optimizers in Federated Learning Algorithms Under Non-independent and Identically Distributed Data
(Department of Special Education, Fuzhou Polytechnic, Fuzhou 350108, China)
Received: November 23, 2023    Revised: December 20, 2023
Abstract: Selecting an appropriate optimizer for a federated learning environment is an effective way to improve model performance, especially when the data are highly heterogeneous. This study focuses on the FedAvg and FedALA algorithms and proposes an improved variant, pFedALA. By allowing clients to continue local training while waiting for synchronization, pFedALA effectively reduces the resource waste caused by synchronization requirements. On this basis, the roles of the optimizers in these three algorithms are analyzed in detail: the performance of stochastic gradient descent (SGD), Adam, averaged SGD (ASGD), and AdaGrad in handling non-independent and identically distributed (Non-IID) and imbalanced data is compared on the MNIST and CIFAR-10 datasets. Particular attention is paid to practical heterogeneity based on the Dirichlet distribution and to extreme heterogeneity in the data settings. The experimental results suggest the following: 1) pFedALA outperforms FedALA, with an average test accuracy approximately 1% higher; 2) optimizers commonly used in traditional single-machine deep learning deliver markedly different performance in a federated learning environment, and compared with other mainstream optimizers, SGD, ASGD, and AdaGrad show stronger adaptability and robustness in this setting.
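To make the experimental setup described in the abstract concrete, the following is a minimal sketch, assuming PyTorch, of how a Dirichlet-based Non-IID data partition and a FedAvg-style local update with a swappable optimizer might be implemented. The function names (dirichlet_partition, local_update, fedavg_aggregate), learning rates, and other parameters are illustrative assumptions, not the paper's implementation.

```python
# A minimal sketch (not the authors' code): Dirichlet-based Non-IID partitioning
# plus FedAvg-style local training with a configurable optimizer (SGD, Adam,
# ASGD, or AdaGrad). Names and hyperparameters are hypothetical.
import numpy as np
import torch
from torch import nn, optim

def dirichlet_partition(labels, num_clients, alpha, seed=0):
    """Split sample indices across clients; smaller alpha -> stronger heterogeneity."""
    rng = np.random.default_rng(seed)
    num_classes = int(labels.max()) + 1
    client_indices = [[] for _ in range(num_clients)]
    for c in range(num_classes):
        idx = np.flatnonzero(labels == c)
        rng.shuffle(idx)
        # Proportion of class-c samples assigned to each client.
        proportions = rng.dirichlet(alpha * np.ones(num_clients))
        splits = (np.cumsum(proportions) * len(idx)).astype(int)[:-1]
        for client_id, part in enumerate(np.split(idx, splits)):
            client_indices[client_id].extend(part.tolist())
    return client_indices

# The optimizer choice is the variable under study in the comparison.
OPTIMIZERS = {
    "sgd":     lambda p: optim.SGD(p, lr=0.01),
    "adam":    lambda p: optim.Adam(p, lr=0.001),
    "asgd":    lambda p: optim.ASGD(p, lr=0.01),
    "adagrad": lambda p: optim.Adagrad(p, lr=0.01),
}

def local_update(model, loader, optimizer_name, epochs=1, device="cpu"):
    """One client's local training pass for a single federated round."""
    model.to(device).train()
    optimizer = OPTIMIZERS[optimizer_name](model.parameters())
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            optimizer.zero_grad()
            loss_fn(model(x), y).backward()
            optimizer.step()
    return {k: v.detach().cpu() for k, v in model.state_dict().items()}

def fedavg_aggregate(client_states, client_sizes):
    """Weighted average of client parameters (the FedAvg server step)."""
    total = sum(client_sizes)
    return {
        key: sum(state[key].float() * (n / total)
                 for state, n in zip(client_states, client_sizes))
        for key in client_states[0]
    }
```

In such a setup, the same partition and aggregation code is reused across runs, and only the optimizer_name passed to local_update changes, which isolates the optimizer as the factor being compared under the Non-IID splits.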
Citation:
FU Gang. Comparative Analysis of Optimizers in Federated Learning Algorithms Under Non-independent and Identically Distributed Data. Computer Systems & Applications, 2024, 33(5): 228-238.