Computer Systems & Applications, 2020, 29(1): 1-13
Large Message Broadcast Design for Distributed Machine Learning
XIN Yi-Jie, XIE Bin, LI Zhen-Xing
(East China Institute of Computing Technology, Shanghai 201808, China)
Received: June 17, 2019    Revised: July 12, 2019
Chinese Abstract: MPI (Message Passing Interface) was designed for large-scale computing clusters with a large number of nodes. However, with the advent of MPI+CUDA (Compute Unified Device Architecture) applications and clusters whose compute nodes are equipped with GPUs, traditional communication libraries such as MPI can no longer meet the demand. The machine learning field faces the same challenge: in deep learning frameworks such as Caffe and CNTK (Microsoft Cognitive Toolkit), GPUs buffer very large amounts of data during training, and because most optimization algorithms used in machine learning training are iterative, inter-GPU communication is both voluminous and frequent. This has become one of the main factors limiting the training performance of deep learning. Although collective communication libraries such as NCCL (NVIDIA Collective Communication Library) have been introduced to address deep learning communication, they suffer from problems such as incompatibility with MPI. It is therefore particularly important to design a more efficient communication acceleration mechanism that matches these new trends. To address the above challenges, this study proposes two new broadcast mechanisms: (1) a Pipelined Chain (PC) mechanism based on MPI_Bcast, which provides efficient intra- and inter-node communication for GPU buffers; (2) a Topology-Aware Pipelined Chain (TA-PC) mechanism for multi-GPU cluster systems, which fully exploits the PCIe links available within a multi-GPU node. To validate the proposed broadcast designs, experiments were carried out on three GPU clusters with diverse configurations: a GPU-dense cluster RX1, a node-dense cluster RX2, and a balanced cluster RX3. Compared with an MPI+NCCL1 MPI_Bcast solution, the new designs achieve roughly 14× and 16.6× performance improvements for intra-node and inter-node communication, respectively; compared with NCCL2, they achieve roughly 10× improvement for small and medium messages and comparable performance for large messages. In addition, the TA-PC design achieves about a 50% improvement over the PC design on a 64-GPU cluster. The results clearly show that the proposed solution has significant advantages in both portability and performance.
Abstract: Traditionally, Message Passing Interface (MPI) runtimes have been designed for clusters with a large number of nodes. However, with the advent of MPI+CUDA applications and GPU clusters with a relatively small number of nodes, efficient communication schemes need to be designed for such systems. This, coupled with the new application workloads brought forward by Deep Learning (DL) frameworks like Caffe and Microsoft Cognitive Toolkit (CNTK), poses additional design constraints due to the very large messages exchanged between GPU buffers during the training phase. In this context, special-purpose libraries like NVIDIA NCCL have emerged to deal with DL workloads. In this study, we address these new challenges for MPI runtimes and propose two new designs to deal with them: (1) a Pipelined Chain (PC) design for MPI_Bcast that provides efficient intra- and inter-node communication of GPU buffers, and (2) a Topology-Aware PC (TA-PC) design for systems with multiple GPUs that fully exploits all the PCIe links available within a multi-GPU node. To highlight the benefits of the proposed designs, we present a performance evaluation on three GPU clusters with diverse characteristics: a dense multi-GPU system (RX1), a cluster with a single K80 GPU card per node (RX2), and a cluster with a single P100 GPU per node (RX3). The proposed designs offer up to 14× and 16.6× better performance than MPI+NCCL1 based solutions for intra- and inter-node broadcast latency. We have also enhanced the performance results with comparisons between the proposed MPI_Bcast designs and the ncclBroadcast (NCCL2) design: we report up to 10× better performance for small and medium message sizes and comparable performance for large message sizes. In addition, the TA-PC design is up to 50% better than the PC design for MPI_Bcast to 64 GPUs. The results clearly highlight the strength of the proposed solution in terms of both portability and performance.
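The abstracts describe the PC design only at a high level. As a rough illustration of what a pipelined-chain broadcast of a large GPU buffer can look like (a minimal sketch, not the authors' implementation), the following C code assumes a CUDA-aware MPI library that accepts device pointers, uses rank order 0 → 1 → … → N-1 as the chain, and picks an arbitrary 4 MiB pipeline unit; the names chain_bcast_gpu, chunk_len, and CHUNK_BYTES are hypothetical.

```c
/*
 * Illustrative sketch of a pipelined-chain (PC) broadcast of a large GPU
 * buffer.  Assumes a CUDA-aware MPI library (device pointers can be passed
 * directly to MPI calls) and a chain that follows rank order.
 */
#include <mpi.h>
#include <cuda_runtime.h>
#include <stddef.h>

#define CHUNK_BYTES (4UL * 1024 * 1024)   /* pipeline unit: 4 MiB (arbitrary) */

/* Length of chunk i of a `bytes`-sized buffer. */
static int chunk_len(size_t bytes, size_t i)
{
    size_t rem = bytes - i * CHUNK_BYTES;
    return (int)(rem < CHUNK_BYTES ? rem : CHUNK_BYTES);
}

/* Broadcast `bytes` of GPU memory at `d_buf` from rank 0 along a chain. */
static void chain_bcast_gpu(void *d_buf, size_t bytes, MPI_Comm comm)
{
    int rank, size;
    MPI_Comm_rank(comm, &rank);
    MPI_Comm_size(comm, &size);

    char *p = (char *)d_buf;
    size_t nchunks = (bytes + CHUNK_BYTES - 1) / CHUNK_BYTES;
    MPI_Request rreq = MPI_REQUEST_NULL;

    /* Non-root ranks pre-post the receive for the first chunk. */
    if (rank != 0)
        MPI_Irecv(p, chunk_len(bytes, 0), MPI_BYTE, rank - 1, 0, comm, &rreq);

    for (size_t i = 0; i < nchunks; i++) {
        size_t off = i * CHUNK_BYTES;
        int n = chunk_len(bytes, i);

        if (rank != 0) {
            /* Wait for chunk i, then pre-post chunk i+1 so that receiving
             * the next chunk overlaps with forwarding the current one. */
            MPI_Wait(&rreq, MPI_STATUS_IGNORE);
            if (i + 1 < nchunks)
                MPI_Irecv(p + (i + 1) * CHUNK_BYTES, chunk_len(bytes, i + 1),
                          MPI_BYTE, rank - 1, 0, comm, &rreq);
        }
        if (rank + 1 < size)          /* forward chunk i down the chain */
            MPI_Send(p + off, n, MPI_BYTE, rank + 1, 0, comm);
    }
}

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    cudaSetDevice(rank % 4);               /* assumes up to 4 GPUs per node */

    size_t bytes = 256UL << 20;            /* a "large message": 256 MiB */
    void *d_buf;
    cudaMalloc(&d_buf, bytes);

    chain_bcast_gpu(d_buf, bytes, MPI_COMM_WORLD);

    cudaFree(d_buf);
    MPI_Finalize();
    return 0;
}
```

The pre-posted MPI_Irecv is what makes the chain a pipeline: the hop into a rank (chunk i+1 arriving) overlaps with the hop out of it (chunk i being forwarded), so for large messages the chain sustains roughly link bandwidth regardless of the number of ranks. A chain like this one, however, simply follows rank order, which is what motivates the topology-aware variant.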
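For the TA-PC idea, the key ingredient is ordering the intra-node portion of the chain so that consecutive hops run over direct PCIe peer-to-peer links wherever possible. The sketch below is a hypothetical greedy ordering heuristic, not the ordering algorithm from the paper; it relies only on the standard CUDA runtime query cudaDeviceCanAccessPeer, and the name build_chain_order is an assumption.

```c
/*
 * Illustrative sketch of a "topology-aware" intra-node chain ordering:
 * greedily visit all local GPUs, preferring a next hop that the previous
 * GPU can reach via peer-to-peer (direct PCIe/NVLink) access.
 */
#include <cuda_runtime.h>
#include <stdio.h>

/* Fill `order` with a visiting order of the local GPUs (up to 64). */
static void build_chain_order(int ndev, int *order)
{
    int used[64] = {0};
    order[0] = 0;                 /* start the intra-node chain at GPU 0 */
    used[0] = 1;

    for (int i = 1; i < ndev; i++) {
        int prev = order[i - 1], next = -1;

        /* First pass: pick an unused GPU that `prev` can reach via P2P. */
        for (int d = 0; d < ndev && next < 0; d++) {
            int can = 0;
            if (!used[d]) {
                cudaDeviceCanAccessPeer(&can, prev, d);
                if (can) next = d;
            }
        }
        /* Fallback: any unused GPU (this hop is staged through the host). */
        for (int d = 0; d < ndev && next < 0; d++)
            if (!used[d]) next = d;

        order[i] = next;
        used[next] = 1;
    }
}

int main(void)
{
    int ndev = 0, order[64];
    cudaGetDeviceCount(&ndev);
    build_chain_order(ndev, order);

    printf("intra-node chain order:");
    for (int i = 0; i < ndev; i++)
        printf(" GPU%d", order[i]);
    printf("\n");
    return 0;
}
```

The resulting order could then be used to renumber the intra-node segment of the broadcast chain sketched above, so that large chunks flow GPU-to-GPU without unnecessary trips through the host or across a contended PCIe switch.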
Citation:
XIN Yi-Jie, XIE Bin, LI Zhen-Xing. Large Message Broadcast Design for Distributed Machine Learning. Computer Systems & Applications, 2020, 29(1): 1-13.