Adaptive Weight Block Matching Depth Estimation Algorithm Based on Light Field Image Sequence
Author: 公冶佳楠, 李轲
    Abstract:

    Existing depth estimation algorithms for light field image sequences match poorly and lack robustness when image brightness changes and in weakly textured regions. To address these problems, this study proposes an adaptive weight block matching algorithm based on the CIELab color space. Because color-difference matching in the RGB color space is affected by many factors, the algorithm converts images to CIELab space to compute a color-similarity weight, and then combines gradient and spatial-distance terms between the reference block and the candidate block in the image to be matched to obtain a comprehensive weight. Finally, exploiting the linear structure of the Epipolar Plane Image (EPI), blocks are matched across the image sequence and the depth map is computed. Simulation results show that the proposed algorithm estimates scene depth more accurately and is clearly superior to previous depth estimation algorithms, making it broadly applicable.
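    The abstract describes the weighting scheme only qualitatively. The sketch below illustrates the general idea in Python: a per-pixel weight that combines CIELab color similarity, gradient similarity, and spatial distance within a block, which then weights the matching cost between a reference block and a candidate block searched along the EPI line. The exponential weight form, the parameters gamma_c, gamma_g, gamma_d, the function names, and the use of skimage for the RGB-to-CIELab conversion are illustrative assumptions, not the paper's exact formulation.

    import numpy as np
    from skimage import color   # RGB -> CIELab conversion

    def adaptive_weights(block_rgb, gamma_c=10.0, gamma_g=10.0, gamma_d=5.0):
        # Per-pixel weight for a matching block: pixels whose CIELab color and
        # gradient are similar to the block center, and which lie close to it,
        # receive larger weights (weight form is an assumption for illustration).
        lab = color.rgb2lab(block_rgb)
        h, w, _ = lab.shape
        cy, cx = h // 2, w // 2

        # CIE76 color difference (Delta E) to the center pixel
        delta_e = np.linalg.norm(lab - lab[cy, cx], axis=2)

        # gradient-magnitude difference on the lightness channel
        gy, gx = np.gradient(lab[..., 0])
        grad = np.hypot(gx, gy)
        delta_g = np.abs(grad - grad[cy, cx])

        # Euclidean distance to the block center
        yy, xx = np.mgrid[0:h, 0:w]
        dist = np.hypot(yy - cy, xx - cx)

        return np.exp(-delta_e / gamma_c) * np.exp(-delta_g / gamma_g) * np.exp(-dist / gamma_d)

    def weighted_block_cost(ref_rgb, cand_rgb):
        # Adaptive-weight SAD cost between the reference block and a candidate
        # block from a neighbouring view; candidates are searched along the EPI
        # line, and the disparity of the lowest-cost candidate gives the depth.
        wts = adaptive_weights(ref_rgb)
        diff = np.abs(ref_rgb.astype(float) - cand_rgb.astype(float)).sum(axis=2)
        return float((wts * diff).sum() / wts.sum())

    Working in CIELab makes the color-difference term approximately perceptually uniform and less sensitive to brightness changes than a raw RGB difference, which is the motivation stated in the abstract.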

Get Citation

公冶佳楠, 李轲. Adaptive weight block matching depth estimation algorithm based on light field image sequence. 计算机系统应用 (Computer Systems & Applications), 2020, 29(4): 195-201.

History
  • Received: July 22, 2019
  • Revised: September 23, 2019
  • Online: April 09, 2020
  • Published: April 15, 2020