Received: September 07, 2023    Revised: October 20, 2023
Abstract (translated from Chinese): As the primary means of integrating multi-source heterogeneous knowledge graphs, entity alignment generally first encodes graph-structural information such as entities, and then obtains aligned entities by computing inter-entity similarities. However, existing multi-modal alignment methods often introduce pre-trained models directly to represent modal features, neglecting both the fusion between modalities and the fusion between modal features and the graph structure. This study therefore proposes a relation-aware multi-subgraph graph neural network (RAMS). RAMS combines modality information with the graph structure through a multi-subgraph GNN encoder to obtain entity representations, and derives alignment results through cross-domain similarity calculation. Extensive, multi-perspective experiments show that the proposed model outperforms baseline models in accuracy, efficiency, and robustness.
Abstract: Multi-modal entity alignment (MMEA) is a crucial technique for integrating multi-source heterogeneous multi-modal knowledge graphs (MMKGs). Alignment is typically achieved by encoding the graph structure and computing the similarity between the multi-modal representations of entities. However, existing MMEA methods tend to employ pre-trained models directly and overlook the fusion between modalities as well as the fusion between modal features and the graph structure. To address these limitations, this study proposes relation-aware multi-subgraph graph neural network (RAMS), a novel approach for obtaining multi-modal entity representations for entity alignment. RAMS fuses modality information with the graph structure through a multi-subgraph graph neural network to derive entity representations, and the alignment results are subsequently obtained through cross-domain similarity calculation. Extensive experiments demonstrate that RAMS outperforms baseline models in terms of accuracy, efficiency, and robustness.
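To make the pipeline described in the abstract concrete, the following is a minimal, hypothetical PyTorch sketch of the general idea only: fuse modal features with graph structure via GNN-style message passing, then align entities across two graphs by cross-domain cosine similarity. It is not the authors' RAMS implementation; the class names, tensor shapes, single-layer mean aggregation, and concatenation-based fusion are all assumptions made for illustration.

    # Illustrative sketch only -- not the RAMS implementation described in the paper.
    import torch
    import torch.nn.functional as F

    class SimpleFusionGNN(torch.nn.Module):
        """One mean-aggregation GNN layer over fused structural + modal features (assumed design)."""
        def __init__(self, struct_dim: int, modal_dim: int, out_dim: int):
            super().__init__()
            # Fuse graph-structure features with modal (e.g. visual/attribute) features.
            self.fuse = torch.nn.Linear(struct_dim + modal_dim, out_dim)
            self.prop = torch.nn.Linear(out_dim, out_dim)

        def forward(self, struct_x, modal_x, adj):
            # adj: dense adjacency matrix with self-loops, shape (n, n)
            h = torch.relu(self.fuse(torch.cat([struct_x, modal_x], dim=-1)))
            deg = adj.sum(dim=-1, keepdim=True).clamp(min=1.0)
            h = torch.relu(self.prop(adj @ h / deg))   # mean aggregation over neighbours
            return F.normalize(h, dim=-1)              # unit-norm entity embeddings

    def align_by_similarity(src_emb, tgt_emb):
        # Cross-domain similarity: dot product of unit-norm embeddings = cosine similarity.
        sim = src_emb @ tgt_emb.T
        return sim.argmax(dim=-1), sim                 # best-matching target entity per source entity

    if __name__ == "__main__":
        n1, n2 = 5, 6                                  # toy knowledge-graph sizes (assumed)
        enc = SimpleFusionGNN(struct_dim=8, modal_dim=4, out_dim=16)
        emb1 = enc(torch.randn(n1, 8), torch.randn(n1, 4), torch.eye(n1))
        emb2 = enc(torch.randn(n2, 8), torch.randn(n2, 4), torch.eye(n2))
        matches, _ = align_by_similarity(emb1, emb2)
        print(matches)                                 # predicted aligned entity indices

In this sketch the cross-domain step is a simple nearest-neighbour search in a shared embedding space; the paper's actual relation-aware multi-subgraph encoding and similarity calculation are more elaborate than this toy example.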
Keywords: multimodal entity alignment; graph neural network (GNN); knowledge graph; machine learning; deep learning
Funding: National Key Research and Development Program of China (2021YFB3900903)
Citation:
JIN Jia-Hui, LI Zhi-Jiang, LIU Yi-Zhang. Multimodal Entity Alignment Based on Relation-aware Multi-subgraph Graph Neural Network. COMPUTER SYSTEMS APPLICATIONS, 2024, 33(3): 245-254