计算机系统应用 (COMPUTER SYSTEMS APPLICATIONS), 2017, 26(12): 1-8
Image Sentence Matching Model Based on Semantic Implication Relation
(Institute of Software, Chinese Academy of Sciences, Beijing 100190, China)
Received: March 22, 2017    Revised: April 13, 2017
Abstract: This paper proposes IRMatch, an image-sentence matching model based on semantic implication relations, which addresses the non-equivalent semantic matching problem between the image and sentence modalities. IRMatch first maps images and sentences into a common semantic space using convolutional neural networks, and then mines implication relations between images and sentences through a maximum soft-margin learning strategy, which strengthens the proximity of related image-sentence pairs in the common space and makes their matching scores more reasonable. On top of IRMatch, a bidirectional image-sentence retrieval method is implemented and compared, on the Flickr8k, Flickr30k, and Microsoft COCO datasets, with retrieval methods built on existing image-sentence matching models. Experimental results show that the IRMatch-based retrieval method outperforms the existing ones in terms of R@1, R@5, R@10, and Med r on all three datasets.
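The abstract outlines a common-space embedding scheme trained with a maximum soft-margin strategy. As a rough illustration only, the sketch below shows a generic bidirectional max-margin (hinge) ranking loss over image and sentence embeddings; it assumes PyTorch, cosine similarity, and a margin of 0.2, all of which are placeholder choices, and it does not reproduce the IRMatch model itself or its specific treatment of the implication relation between the two modalities.

```python
# Illustrative sketch only (not the authors' IRMatch code): a generic
# bidirectional max-margin ranking loss over image/sentence embeddings
# in a shared semantic space. Encoders, margin, and similarity are assumptions.
import torch
import torch.nn.functional as F


def cosine_scores(img_emb: torch.Tensor, sen_emb: torch.Tensor) -> torch.Tensor:
    """Pairwise cosine similarity; entry (i, j) = score(image_i, sentence_j)."""
    img_emb = F.normalize(img_emb, dim=1)
    sen_emb = F.normalize(sen_emb, dim=1)
    return img_emb @ sen_emb.t()


def bidirectional_margin_loss(img_emb, sen_emb, margin: float = 0.2) -> torch.Tensor:
    """Hinge loss pushing each matched image-sentence pair above all
    mismatched pairs, in both retrieval directions, by at least `margin`."""
    scores = cosine_scores(img_emb, sen_emb)      # (B, B) similarity matrix
    positives = scores.diag().view(-1, 1)         # matched pairs on the diagonal
    # Sentence retrieval: each image's positive vs. all other sentences.
    cost_s = (margin + scores - positives).clamp(min=0)
    # Image retrieval: each sentence's positive vs. all other images.
    cost_im = (margin + scores - positives.t()).clamp(min=0)
    mask = torch.eye(scores.size(0), dtype=torch.bool, device=scores.device)
    cost_s = cost_s.masked_fill(mask, 0)          # ignore the matched pairs themselves
    cost_im = cost_im.masked_fill(mask, 0)
    return cost_s.sum() + cost_im.sum()


if __name__ == "__main__":
    # Toy usage: 4 image/sentence pairs embedded in a 128-d common space.
    img = torch.randn(4, 128)
    sen = torch.randn(4, 128)
    print(bidirectional_margin_loss(img, sen).item())
```

In the paper's setting, the embeddings fed to such a loss would come from the convolutional neural networks that map images and sentences into the common semantic space; the encoders are omitted here for brevity.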
Foundation item: National 863 Program of China (2013AA01A603)
Citation:
柯川, 李文波, 汪美玲, 李孜. 基于语义蕴含关系的图片语句匹配模型. 计算机系统应用, 2017, 26(12): 1-8
KE Chuan, LI Wen-Bo, WANG Mei-Ling, LI Zi. Image Sentence Matching Model Based on Semantic Implication Relation. COMPUTER SYSTEMS APPLICATIONS, 2017, 26(12): 1-8