Received: January 19, 2023    Revised: February 23, 2023
Abstract: To address the high energy consumption of data centers, the random and dynamic nature of application task loads, and users' low-latency requirements, a container consolidation method based on the advantage actor-critic (A2C) algorithm is proposed on top of the fog computing system architecture, with the objective of minimizing energy consumption and average response time. The method uses checkpoint/restore technology to migrate containers in real time and thereby consolidate resources. An end-to-end decision model mapping the data center system state to container consolidation actions is constructed, an adaptive multi-objective reward function is proposed, and gradient-based backpropagation is used to accelerate the convergence of the decision model. Simulation results on real task load datasets show that the proposed method effectively reduces energy consumption while maintaining quality of service.
Keywords: fog computing | resource scheduling | deep reinforcement learning | container technology | modeling and simulation
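The abstract does not reproduce the reward formulation itself. As a rough illustration of how a multi-objective reward trading off energy consumption against average response time might be shaped, the following Python sketch normalizes both objectives and combines them with weights that adapt to an SLA-violation signal; every function name, weight, and normalization bound here is an assumption for illustration, not the authors' definition.

```python
# Illustrative sketch only: the paper's exact reward is not given on this page.
# All names, weights, and normalization bounds below are hypothetical.

def multi_objective_reward(energy_joules, avg_response_ms,
                           energy_bounds=(0.0, 5_000.0),
                           latency_bounds=(0.0, 500.0),
                           w_energy=0.5, w_latency=0.5):
    """Negative weighted sum of normalized energy and average response time.

    Lower energy and lower latency give a reward closer to 0, so an agent
    maximizing the reward minimizes both objectives jointly.
    """
    def normalize(x, lo, hi):
        return min(max((x - lo) / (hi - lo), 0.0), 1.0)

    e = normalize(energy_joules, *energy_bounds)
    t = normalize(avg_response_ms, *latency_bounds)
    return -(w_energy * e + w_latency * t)


def adaptive_weights(sla_violation_rate, base=0.5, shift=0.3):
    """Toy 'adaptive' weighting: emphasize latency as SLA violations rise."""
    w_latency = min(1.0, base + shift * sla_violation_rate)
    return 1.0 - w_latency, w_latency  # (w_energy, w_latency)


if __name__ == "__main__":
    w_e, w_l = adaptive_weights(sla_violation_rate=0.2)
    print(multi_objective_reward(3_200.0, 120.0, w_energy=w_e, w_latency=w_l))
```

In a setting like the one described, such a reward would presumably be computed once per consolidation step and fed, together with the observed data center state, into the A2C actor-critic update that trains the end-to-end decision model.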
Funding: Doctoral Research Start-up Fund of Taiyuan University of Science and Technology (20202063); Graduate Education Innovation Project of Taiyuan University of Science and Technology (SY2022063)
Citation:
DANG Wei-Chao, WANG Jue. Container Consolidation Based on Deep Reinforcement Learning in Fog Computing Environment. COMPUTER SYSTEMS APPLICATIONS, 2023, 32(8): 303-311.