Received: February 22, 2021  Revised: March 19, 2021
Abstract: Unlike traditional deep reinforcement learning, which trains on state transitions selected one by one from the experience replay buffer, the Deep Q Network (DQN) considered here uses entire episode trajectories as training samples. For this setting, a method is proposed that expands the set of episode samples with the crossover operator of genetic algorithms. Episode trajectories are generated during the agent's trial-and-error interaction with the environment, and similar key states occur across them. Taking a pair of similar states in two episode trajectories as the crossover point produces episode trajectories that have not yet appeared, which enlarges the number of episode samples and increases their diversity, thereby strengthening the agent's exploration ability and improving sample efficiency. Compared with DQN, which samples training transitions at random, and with the Episodic Backward Update (EBU) algorithm, which propagates updates backward through episode samples, the proposed method achieves higher rewards on Atari 2600 video games.
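As a rough illustration of the crossover idea described in the abstract, the following Python sketch splices two episode trajectories at a pair of similar states. All names here are hypothetical, states are assumed to be fixed-length NumPy vectors, and Euclidean distance below a threshold stands in for the paper's similarity criterion, which the abstract does not specify.

```python
import numpy as np

def crossover_episodes(ep_a, ep_b, threshold=0.1):
    """Splice two episode trajectories at a pair of similar states.

    ep_a, ep_b: lists of (state, action, reward, next_state) transitions,
    where states are NumPy vectors of equal length (an assumption here;
    the paper's own state representation and similarity test may differ).
    Returns two new trajectories, or None if no similar states are found.
    """
    for i, (s_a, *_rest_a) in enumerate(ep_a):
        for j, (s_b, *_rest_b) in enumerate(ep_b):
            # Treat two states as "similar" when their Euclidean
            # distance falls below the threshold (an illustrative choice).
            if np.linalg.norm(s_a - s_b) < threshold:
                # Cross over at the similar states: each child keeps one
                # parent's prefix and continues with the other's suffix,
                # yielding trajectories the agent has not actually seen.
                child_1 = ep_a[:i] + ep_b[j:]
                child_2 = ep_b[:j] + ep_a[i:]
                return child_1, child_2
    return None
```

The children produced this way could then be added to the pool of episode samples used for training, which is how the abstract describes enlarging sample quantity and diversity.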
Foundation items: National Natural Science Foundation of China (61562009); Science and Technology Program of Guizhou Province (Qian Ke He Ji Chu [2019]1130)
Citation:
YANG Tong, QIN Jin, XIE Zhong-Tao, YUAN Lin-Lin. Samples Expanding of Deep Q Network Based on Genetic Crossover Operator. COMPUTER SYSTEMS APPLICATIONS, 2021, 30(12): 155-162.