CCLF Algorithm with Bootstrapped Exploration

Authors: Du Zhibin, Huang Yinhao

Funding: General Program of the Natural Science Foundation of Guangdong Province (2023A1515011472)

Abstract:

Deep reinforcement learning can extract useful information from high-dimensional images and thereby automatically generate effective policies for complex tasks such as game AI, robot control, and autonomous driving. However, because task environments are complex and agents explore inefficiently, an agent still requires a large number of environment interactions even for relatively simple tasks. This study therefore proposes Bootstrapped CCLF, a CCLF algorithm combined with the bootstrapped exploration method. Multiple heads in the actor network generate a wider variety of candidate actions, so the agent can visit more distinct states, explore more efficiently, and converge faster. Experimental results show that the algorithm achieves better performance and stability than the original CCLF in the DeepMind Control environment, demonstrating its effectiveness.
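The abstract's core mechanism (an actor network with multiple heads, each proposing its own action, with one head committed to per episode in the style of bootstrapped exploration) can be sketched as follows. This is a hypothetical minimal illustration in NumPy, not the authors' implementation: the `BootstrappedActor` class, the linear heads, and all dimensions are assumptions made for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

class BootstrappedActor:
    """Toy multi-head actor: each head is an independent linear policy
    over the (shared) state representation. Sampling one head per episode
    and sticking with it drives temporally consistent, diverse exploration."""

    def __init__(self, state_dim, action_dim, n_heads=5):
        # Independently initialized heads give different action proposals
        # for the same state, which is the source of exploration diversity.
        self.heads = [rng.normal(0.0, 0.1, size=(state_dim, action_dim))
                      for _ in range(n_heads)]
        self.active = 0

    def reset_episode(self):
        # Commit to one randomly chosen head for the whole episode.
        self.active = int(rng.integers(len(self.heads)))

    def act(self, state):
        # tanh squashes the linear output into the bounded action range [-1, 1].
        return np.tanh(state @ self.heads[self.active])

actor = BootstrappedActor(state_dim=4, action_dim=2, n_heads=5)
state = rng.normal(size=4)

# Across episodes, different heads can map the same state to different actions.
actions = []
for _ in range(3):
    actor.reset_episode()
    actions.append(actor.act(state))
```

In a full agent these heads would sit on top of the learned image encoder and each would be trained on its own bootstrapped slice of the replay data; the sketch only shows the head-selection and action-generation step described in the abstract.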

Cite this article:

Du Zhibin, Huang Yinhao. CCLF algorithm with bootstrapped exploration. Computer Systems & Applications, 2023, 32(9): 162–168.
History
  • Received: 2023-03-02
  • Revised: 2023-04-04
  • Published online: 2023-07-17
Copyright: Institute of Software, Chinese Academy of Sciences