Image Sentence Matching Model Based on Semantic Implication Relation
Abstract:

In this paper, we propose IRMatch, a model for matching images and sentences based on semantic implication relations, to address the problem of matching images and sentences whose semantics are not equivalent. The IRMatch model first maps images and sentences into a common semantic space using convolutional neural networks, and then mines implication relations between images and sentences with a learning algorithm that introduces a maximum soft-margin strategy, which brings related images and sentences closer together in the common semantic space and makes their matching scores more reasonable. Based on the IRMatch model, we implement bidirectional image-sentence retrieval and compare it with approaches built on existing image-sentence matching models on the Flickr8k, Flickr30k, and Microsoft COCO datasets. Experimental results show that our retrieval approaches perform better in terms of R@1, R@5, R@10, and Med r on all three datasets.
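The following is a minimal sketch (not the authors' released code) of the matching idea the abstract describes: images and sentences are each projected into a common semantic space, a matching score is computed there, and a soft maximum-margin ranking objective pushes related pairs closer than unrelated ones. The encoder details, dimensions, margin value, and all identifiers below are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class IRMatchSketch(nn.Module):
    def __init__(self, img_feat_dim=4096, vocab_size=10000, embed_dim=300, common_dim=1024):
        super().__init__()
        # Image branch: assume precomputed CNN features; only the projection
        # into the common semantic space is learned here.
        self.img_proj = nn.Linear(img_feat_dim, common_dim)
        # Sentence branch: word embeddings followed by a 1-D convolution,
        # standing in for the sentence CNN the abstract mentions.
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.sent_conv = nn.Conv1d(embed_dim, common_dim, kernel_size=3, padding=1)

    def encode_image(self, img_feats):                    # (B, img_feat_dim)
        return F.normalize(self.img_proj(img_feats), dim=-1)

    def encode_sentence(self, token_ids):                 # (B, T)
        x = self.embed(token_ids).transpose(1, 2)         # (B, embed_dim, T)
        x = F.relu(self.sent_conv(x)).max(dim=2).values   # max-pool over time
        return F.normalize(x, dim=-1)

    def score(self, img_feats, token_ids):
        # Cosine-style matching scores in the common semantic space.
        return self.encode_image(img_feats) @ self.encode_sentence(token_ids).t()  # (B, B)

def soft_margin_ranking_loss(scores, margin=0.2):
    """Bidirectional hinge loss: matched pairs (the diagonal) should outscore
    mismatched pairs in the same row/column by at least `margin`."""
    pos = scores.diag().view(-1, 1)
    cost_s = (margin + scores - pos).clamp(min=0)         # image -> sentence direction
    cost_i = (margin + scores - pos.t()).clamp(min=0)     # sentence -> image direction
    mask = torch.eye(scores.size(0), dtype=torch.bool)
    return cost_s.masked_fill(mask, 0).sum() + cost_i.masked_fill(mask, 0).sum()

# Usage: scores = model.score(img_feats, token_ids); loss = soft_margin_ranking_loss(scores)
```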

Get Citation

Ke Chuan, Li Wenbo, Wang Meiling, Li Zi. Image sentence matching model based on semantic implication relation. Computer Systems & Applications, 2017, 26(12): 1-8

History
  • Received: March 22, 2017
  • Revised: April 13, 2017
  • Online: December 07, 2017