Received: June 28, 2023    Revised: July 27, 2023
Chinese abstract (translated): Federated learning is a distributed machine learning approach that keeps data local and uploads only the computed results to the server, which improves the efficiency and security of model delivery and aggregation. However, a major challenge in federated learning is the growing size of the uploaded models: large numbers of parameters transmitted over many iterations create difficulties for small devices with limited communication capability. In this paper, the client and server are therefore restricted to a single round of communication. Another challenge in federated learning is that data scales differ across clients; under imbalanced data, server-side model aggregation becomes inefficient. To address these problems, this paper proposes a lightweight federated learning framework that requires only one round of communication and designs an aggregation policy algorithm for federated broad learning, namely FBL-LD. The algorithm collects reliable models in a single communication round, selects a dominant model, and generalizes the federated model by reasonably adjusting the participation weights of the other models via a validation set. FBL-LD maintains efficient aggregation with limited communication resources. Experimental results show that FBL-LD achieves lower overhead and higher accuracy than comparable federated broad learning algorithms and is robust to the data imbalance problem.
Abstract: Federated learning is a distributed machine learning approach that enables model delivery and aggregation without compromising the privacy and security of local data. However, federated learning faces a major challenge: the large size of the models and the parameters that need to be communicated multiple times between the client and the server, which brings difficulties to small devices with insufficient communication capability. Therefore, this study sets up the client and server to communicate with each other only once. Another challenge in federated learning is the data imbalance among different clients; server-side model aggregation becomes inefficient under data imbalance. To overcome these challenges, the study proposes a lightweight federated learning framework that requires only one-shot communication between the client and the server. The framework also introduces an aggregation policy algorithm, FBL-LD. The algorithm selects the most reliable and dominant model from the client models in a one-shot communication and adjusts the weights of other models based on a validation set to achieve a generalized federated model. FBL-LD reduces the communication overhead and improves aggregation efficiency. Experimental results show that FBL-LD outperforms existing federated learning algorithms in terms of accuracy and robustness to data imbalance.
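The abstract describes a one-shot aggregation policy: the server receives each client model once, selects a dominant model by validation performance, and assigns participation weights to the remaining models. The sketch below illustrates that general idea only; the accuracy-proportional weighting and weighted-vote ensemble used here are illustrative assumptions, not the published FBL-LD rule, and `predict_fn` is a hypothetical hook for whatever broad-network model the clients upload.

```python
import numpy as np

def aggregate_one_shot(models, predict_fn, X_val, y_val):
    """One-shot aggregation sketch: score each uploaded client model on the
    server's validation set, pick the dominant (best-scoring) model, and
    assign every model a normalized weight proportional to its accuracy.
    The proportional weighting is an assumption for illustration."""
    accs = np.array([np.mean(predict_fn(m, X_val) == y_val) for m in models])
    dominant = int(np.argmax(accs))          # index of the dominant model
    weights = accs / accs.sum()              # participation weights, sum to 1
    return dominant, weights

def ensemble_predict(models, predict_fn, weights, X, n_classes):
    """Federated prediction as a weighted vote over client models."""
    votes = np.zeros((len(X), n_classes))
    for m, w in zip(models, weights):
        preds = predict_fn(m, X)
        votes[np.arange(len(X)), preds] += w  # each model votes with its weight
    return votes.argmax(axis=1)
```

Because every client uploads exactly once and the server only evaluates and reweights, no further communication rounds are needed, which matches the lightweight one-shot setting the abstract emphasizes.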
keywords: federated learning; broad network; one-shot communication; privacy protection; machine learning
Funding: National Natural Science Foundation of China (61872153, 61972288)
Citation: WEN Jia-Bao, CHEN Min-Rong. Single Model Dominant Federation Learning Based on Broad Network Architecture. COMPUTER SYSTEMS APPLICATIONS, 2024, 33(1): 1-10