Objective Assessment of Color Harmony in Dual-band Color Fused Images
Authors: Gao Shaoshu, Song Shangge, Ni Xiao
Funding: National Natural Science Foundation of China (61801517); Fundamental Research Funds for the Central Universities (19CX02029A, 19CX02027A)

    Abstract:

    Existing image quality assessment methods rarely exploit the color-coding mechanisms of the human retina and visual cortex, and they fail to fully account for the influence of color information on image quality. To address these problems, this study proposes an objective assessment model, based on multiple visual features, for the color harmony of visible-light (low-light) and infrared color fused images. The model incorporates more color information into image quality assessment by comprehensively considering several human visual features, including visual opponent-color features, color-information fluctuation features, and high-level visual content features. Through feature fusion and support vector regression training, the model achieves objective assessment of the color harmony of color fused images. Experimental comparisons and analyses are conducted on fused-image databases covering three typical scenes. The results show that, compared with eight existing objective image quality assessment methods, the proposed method agrees more closely with subjective human perception and achieves higher prediction accuracy.
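The pipeline described above, namely opponent-color feature extraction followed by feature fusion and support vector regression, can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: it assumes a Ruderman-style log-LMS opponent (lαβ) color space, summarizes each opponent channel by its mean and standard deviation, and fits scikit-learn's `SVR` to subjective harmony scores (MOS). The function names and the specific feature choices are hypothetical.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# RGB -> LMS cone-response matrix from the color-transfer literature.
_RGB2LMS = np.array([[0.3811, 0.5783, 0.0402],
                     [0.1967, 0.7244, 0.0782],
                     [0.0241, 0.1288, 0.8444]])
# log-LMS -> opponent axes: achromatic, yellow-blue, red-green.
_LMS2OPP = np.array([[1 / np.sqrt(3), 1 / np.sqrt(3), 1 / np.sqrt(3)],
                     [1 / np.sqrt(6), 1 / np.sqrt(6), -2 / np.sqrt(6)],
                     [1 / np.sqrt(2), -1 / np.sqrt(2), 0.0]])

def opponent_color_features(rgb):
    """Map an (H, W, 3) RGB image in [0, 1] to opponent-color channels and
    return each channel's mean and standard deviation (6 features).
    The standard deviation of the chromatic channels serves here as a crude
    stand-in for the color-information fluctuation feature."""
    lms = rgb @ _RGB2LMS.T
    lms = np.log10(np.clip(lms, 1e-6, None))  # retina-like compressive nonlinearity
    opp = lms @ _LMS2OPP.T                    # opponent-color decomposition
    return np.concatenate([opp.mean(axis=(0, 1)), opp.std(axis=(0, 1))])

def train_harmony_regressor(images, mos):
    """Fuse per-image features and fit an SVR to subjective harmony scores."""
    X = np.stack([opponent_color_features(im) for im in images])
    return make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0)).fit(X, mos)
```

In the full model the fused feature vector would also include higher-level visual content descriptors (for example, activations of a pretrained CNN); those could simply be concatenated into `X` before fitting.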

Cite this article:

Gao SS, Song SG, Ni X. Objective assessment of color harmony in dual-band color fused images. Computer Systems & Applications, 2024, 33(5): 170–177. (in Chinese)
History
  • Received: 2023-12-13
  • Revised: 2024-01-10
  • Published online: 2024-04-07