Intelligent Robot Depalletizing System Based on Visual Positioning
    Abstract:

    An intelligent robot depalletizing system based on visual positioning is designed to solve the problem that a traditional teach-and-playback robot can only perform depalletizing tasks at given positions along fixed trajectories and is therefore limited to fixed scenes. The system transforms the coordinates of the target's pixel center to obtain the corresponding world coordinates. Because deflection of the eye-in-hand camera may cause the image processing algorithm to report an inaccurate target rotation angle, the camera's extrinsic parameters are used to compensate for the rotation angle. Moreover, a depalletizing strategy is designed in which communication with the robot guides it to perform the depalletizing task automatically, grabbing targets from nearest to farthest without manual intervention. Experimental data show that the system can grab a target at an unknown position in an unknown work scene, with a position error of 1.1 mm and an angle error of 1.2°, and the time to locate a stacking layer is about 1.2 s. The system meets the precision and efficiency requirements for depalletizing robots in industrial scenarios.
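    The pixel-to-world coordinate transformation mentioned in the abstract can be sketched under a standard pinhole camera model, assuming the targets lie on a world plane Z = 0 and that the intrinsic matrix K and extrinsics R, t are known from hand-eye calibration. This is a generic illustration, not the paper's implementation; the function name `pixel_to_world` and all numeric calibration values are hypothetical.

    ```python
    import numpy as np

    def pixel_to_world(u, v, K, R, t):
        """Map a pixel (u, v) to world coordinates on the Z = 0 plane.

        Uses the pinhole projection s*[u, v, 1]^T = K [R | t] [X, Y, 0, 1]^T,
        which for the plane Z = 0 reduces to a homography H = K [r1 r2 t].
        """
        # Homography from the world plane Z = 0 to the image plane.
        H = np.column_stack((K @ R[:, 0], K @ R[:, 1], K @ t))
        # Back-project the pixel through H^-1 and dehomogenize.
        w = np.linalg.solve(H, np.array([u, v, 1.0]))
        return w[0] / w[2], w[1] / w[2]

    # Example with hypothetical calibration values:
    K = np.array([[800.0,   0.0, 320.0],
                  [  0.0, 800.0, 240.0],
                  [  0.0,   0.0,   1.0]])   # intrinsics
    R = np.eye(3)                           # camera axes aligned with world
    t = np.array([0.0, 0.0, 1000.0])        # camera 1000 mm above the plane
    X, Y = pixel_to_world(320.0, 240.0, K, R, t)  # principal point -> world origin
    ```

    With this formulation, the extrinsic parameters R and t also determine how a rotation measured in the image maps back to the world plane, which is the quantity the paper's rotation-angle compensation corrects for.
    
    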

Citation: Liu Baolin, Zou Wencai. Intelligent robot depalletizing system based on visual positioning. 计算机系统应用 (Computer Systems & Applications), 2023, 32(7): 138–144.
History
  • Received: December 14, 2022
  • Revised: January 20, 2023
  • Online: April 28, 2023