Infrared and Visible Image Fusion Method Based on Latent Low-Rank Representation and Guided Filtering

Author: Zhu YH (朱亚辉)

Funding: Scientific Research Program of Shaanxi Provincial Department of Education (20JK0585); Scientific Research Fund of Shaanxi Xueqian Normal University (2020YBKJ19)

    Abstract:

    This study proposes an infrared and visible image fusion method based on latent low-rank representation and guided filtering to address the severe loss of detail and poor visual quality that occur during fusion. First, the source images are decomposed by latent low-rank representation into low-rank layers and salient layers. To extract more structural information from the low-rank layers, they are further decomposed by guided filtering into base layers and structural layers. According to the characteristics of the base, structural, and salient layers, visual saliency weighting, gradient saliency weighting, and absolute-maximum selection are adopted as the respective fusion rules. In particular, since the initial weight maps are noisy and not aligned with object boundaries, they are optimized by guided filtering. Finally, the fused base, structural, and salient layers are summed to yield the fused image. Subjective and objective evaluations of several groups of fused images show that the proposed method effectively extracts the detail information of the source images and outperforms other fusion methods in both visual quality and objective metrics.
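    The layer-fusion stage described above can be sketched in code. The following is a minimal NumPy sketch under stated assumptions, not the paper's implementation: it presumes the latent low-rank decomposition has already produced the low-rank and salient layers (LatLRR itself is an iterative optimization and is omitted), the visual- and gradient-saliency measures are simplified stand-ins for the paper's weights, and `r`, `eps`, and the helper names (`box_filter`, `fuse`) are illustrative choices.

    ```python
    import numpy as np

    def box_filter(img, r):
        # Sliding-window mean of radius r, computed with an integral image.
        k = 2 * r + 1
        pad = np.pad(img, r, mode='edge')
        c = np.pad(np.cumsum(np.cumsum(pad, axis=0), axis=1), ((1, 0), (1, 0)))
        h, w = img.shape
        return (c[k:k + h, k:k + w] - c[:h, k:k + w]
                - c[k:k + h, :w] + c[:h, :w]) / (k * k)

    def guided_filter(guide, src, r=8, eps=0.04):
        # Guided image filter [12]: edge-preserving smoothing of src
        # steered by the local structure of guide.
        mean_g, mean_s = box_filter(guide, r), box_filter(src, r)
        var_g = box_filter(guide * guide, r) - mean_g ** 2
        cov_gs = box_filter(guide * src, r) - mean_g * mean_s
        a = cov_gs / (var_g + eps)
        b = mean_s - a * mean_g
        return box_filter(a, r) * guide + box_filter(b, r)

    def grad_mag(img):
        gy, gx = np.gradient(img)
        return np.hypot(gx, gy)

    def fuse(low_ir, low_vis, sal_ir, sal_vis, r=8, eps=0.04):
        # 1) Split each low-rank layer into a base layer and a detail
        #    ("structural") layer with a self-guided filter.
        base_ir = guided_filter(low_ir, low_ir, r, eps)
        base_vis = guided_filter(low_vis, low_vis, r, eps)
        det_ir, det_vis = low_ir - base_ir, low_vis - base_vis

        # 2) Base layers: a contrast-based stand-in for visual saliency
        #    (deviation from the local mean), turned into a binary weight
        #    map and then smoothed by the guided filter so the weights
        #    follow object boundaries, mirroring the paper's
        #    weight-optimization step.
        s_ir = np.abs(low_ir - box_filter(low_ir, r))
        s_vis = np.abs(low_vis - box_filter(low_vis, r))
        w_base = guided_filter(low_ir, (s_ir >= s_vis).astype(float), r, eps)
        base_f = w_base * base_ir + (1.0 - w_base) * base_vis

        # 3) Detail layers: gradient-saliency weighting, refined the same way.
        w_det = guided_filter(
            det_ir, (grad_mag(det_ir) >= grad_mag(det_vis)).astype(float),
            r, eps)
        det_f = w_det * det_ir + (1.0 - w_det) * det_vis

        # 4) Salient layers: absolute-maximum selection.
        sal_f = np.where(np.abs(sal_ir) >= np.abs(sal_vis), sal_ir, sal_vis)

        # 5) The fused image is the sum of the three fused layers.
        return base_f + det_f + sal_f
    ```

    A quick sanity check on the recombination: when both modalities supply identical layers, the weights cancel and the output reduces to the low-rank layer plus the salient layer, so the decompose-then-sum pipeline is lossless.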

    References
    [1] Zhu ZQ, Chai Y, Yin HP, et al. A novel dictionary learning approach for multi-modality medical image fusion. Neurocomputing, 2016, 214: 471–482. doi: 10.1016/j.neucom.2016.06.036
    [2] Guo QM, Wang Y, Li HS. Anti-halation method for fusion of visible and infrared images based on improved IHS-Curvelet transform. Infrared and Laser Engineering, 2018, 47(11): 440–448 (in Chinese)
    [3] Yang SY, Wang M, Jiao LC, et al. Image fusion based on a new contourlet packet. Information Fusion, 2010, 11(2): 78–84. doi: 10.1016/j.inffus.2009.05.001
    [4] Jin X, Jiang Q, Yao SW, et al. Infrared and visual image fusion method based on discrete cosine transform and local spatial frequency in discrete stationary wavelet transform domain. Infrared Physics & Technology, 2018, 88: 1–12
    [5] Wu CM, Chen L. Infrared and visible image fusion method of dual NSCT and PCNN. PLoS One, 2020, 15(9): e0239535. doi: 10.1371/journal.pone.0239535
    [6] Shen Y, Chen XP, Yuan YB, et al. Infrared and visible image fusion based on saliency matrix and neural network. Laser & Optoelectronics Progress, 2020, 57(20): 76–86 (in Chinese)
    [7] Liu GC, Lin ZC, Yu Y. Robust subspace segmentation by low-rank representation. Proceedings of the 27th International Conference on Machine Learning. Haifa, Israel. 2010. 663–670.
    [8] Liu GC, Yan SC. Latent low-rank representation for subspace segmentation and feature extraction. Proceedings of 2011 International Conference on Computer Vision. Barcelona, Spain. 2011. 1615–1622.
    [9] Yu S, Chen XP. Infrared and visible image fusion based on a latent low-rank representation nested with multiscale geometric transform. IEEE Access, 2020, 8: 110214–110226. doi: 10.1109/ACCESS.2020.3001974
    [10] Jiang ZT, Jiang Q, Huang YS, et al. Fusion method of infrared and weak visible enhanced images based on latent low-rank representation and composite filtering. Acta Photonica Sinica, 2020, 49(4): 0410001 (in Chinese)
    [11] Wang XZ, Yin JF, Zhang K, et al. Infrared weak-small targets fusion based on latent low-rank representation and DWT. IEEE Access, 2019, 7: 112681–112692. doi: 10.1109/ACCESS.2019.2934523
    [12] He KM, Sun J, Tang XO. Guided image filtering. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2013, 35(6): 1397–1409. doi: 10.1109/TPAMI.2012.213
    [13] Li HY, Ma CT. Edge detection of ship images at sea based on improved Scharr algorithm. Ship Electronic Engineering, 2019, 39(3): 103–106 (in Chinese)
    [14] Ma JY, Zhou Y. Infrared and visible image fusion via gradientlet filter. Computer Vision and Image Understanding, 2020, 197–198: 103016. doi: 10.1016/j.cviu.2020.103016
    [15] Tan W, Zhou HX, Song JLQ, et al. Infrared and visible image perceptive fusion through multi-level Gaussian curvature filtering image decomposition. Applied Optics, 2019, 58(12): 3064–3073. doi: 10.1364/AO.58.003064
    [16] Ma JL, Zhou ZQ, Wang B, et al. Infrared and visible image fusion based on visual saliency map and weighted least square optimization. Infrared Physics & Technology, 2017, 82: 8–17
    [17] Aktar M, Mamun MA, Hossain MA, et al. Weighted normalized mutual information based change detection in remote sensing images. Proceedings of the 19th International Conference on Computer and Information Technology. Dhaka, Bangladesh. 2016. 257–260.
    [18] Xydeas CS, Petrović V. Objective image fusion performance measure. Electronics Letters, 2000, 36(4): 308–309. doi: 10.1049/el:20000267
    [19] Ma KD, Zeng K, Wang Z. Perceptual quality assessment for multi-exposure image fusion. IEEE Transactions on Image Processing, 2015, 24(11): 3345–3356. doi: 10.1109/TIP.2015.2442920
    [20] Zhang L, Zhang L, Mou XQ, et al. FSIM: A feature similarity index for image quality assessment. IEEE Transactions on Image Processing, 2011, 20(8): 2378–2386. doi: 10.1109/TIP.2011.2109730
Cite this article:

Zhu YH. Infrared and visible image fusion method based on latent low-rank representation and guided filtering. 计算机系统应用 (Computer Systems & Applications), 2021, 30(9): 295–301
History
  • Received: 2020-12-16
  • Revised: 2021-01-18
  • Published online: 2021-09-04