融合机器学习与知识推理的可解释性框架
Interpretable Framework for Integrating Machine Learning and Knowledge Reasoning

Authors: 李迪媛, 康达周
Affiliation:

Fund Project: 13th Five-Year Equipment Pre-research Project (41402020501, 41402020101)

Abstract (translated from the Chinese):

To address the problem that the rules of a rule-based interpretable model may fail to reflect the model's actual decision-making, this paper proposes an interpretability framework that integrates two approaches: machine learning and knowledge reasoning. The framework evolves a target-feature result and a reasoning result, and achieves interpretability when the two agree and both are sufficiently reliable. The target-feature result is obtained directly from a machine learning model, while the reasoning result is derived by knowledge reasoning over sub-feature classification results combined with rules; the reliability of each result is judged by computing a credibility score. The framework is validated on a case that fuses learning and reasoning to recognize a class of cervical cancer cells in liquid-based cytology (TCT) images. Experiments show that the framework endows the model's actual decision results with interpretability and improves classification accuracy over iterations. This helps people understand the logic behind the system's decisions and better understand why its results may fail.

    Abstract:

    Because the rules of a rule-based interpretability model may fail to reflect the exact decision-making of the model, an interpretability framework combining machine learning and knowledge reasoning is proposed. The framework evolves a target-feature result and a reasoning result, and achieves interpretability when the two are the same and both are reliable. The target-feature result is obtained directly from the machine learning model, while the reasoning result is acquired through sub-feature classification combined with rules for knowledge reasoning. Whether the two results are reliable is judged by calculating their credibility. A case of recognizing a particular class of cervical cancer cells in TCT (liquid-based cytology) images, fusing learning and reasoning, is used to verify the framework. Experiments demonstrate that the framework makes the model's real decisions interpretable and improves classification accuracy during iteration. This helps people understand the logic of the system's decision-making and the reasons for its failures.
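The core decision rule described in the abstract — accept a prediction as interpretable only when the machine learning result and the rule-based reasoning result agree and both pass a credibility check — can be sketched as follows. The function name, its signature, and the 0.8 threshold are illustrative assumptions, not details from the paper:

```python
def interpretable_decision(target_result, target_credibility,
                           reasoning_result, reasoning_credibility,
                           threshold=0.8):
    """Return (prediction, interpretable_flag).

    The decision counts as interpretable only when the ML model's
    target-feature result and the knowledge-reasoning result agree
    AND both credibility scores reach the threshold (0.8 is an
    assumed value; the paper's own criterion may differ). Samples
    that fail the check would be candidates for the next iteration.
    """
    agree = target_result == reasoning_result
    reliable = (target_credibility >= threshold
                and reasoning_credibility >= threshold)
    return target_result, (agree and reliable)


# Hypothetical usage with a cervical-cell class label:
# agreement with high credibility -> interpretable
print(interpretable_decision("abnormal", 0.92, "abnormal", 0.88))
# disagreement -> prediction kept but flagged as not interpretable
print(interpretable_decision("abnormal", 0.92, "normal", 0.88))
```

In this sketch the prediction itself is always returned; the flag only records whether the framework's interpretability condition held, which matches the abstract's claim that failed checks feed the iterative refinement rather than being discarded.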

Cite this article:

李迪媛, 康达周. 融合机器学习与知识推理的可解释性框架 [Interpretable framework for integrating machine learning and knowledge reasoning]. 计算机系统应用 (Computer Systems & Applications), 2021, 30(7): 22–31.

History
  • Received: 2020-10-21
  • Revised: 2020-11-18
  • Published online: 2021-07-02
Copyright: Institute of Software, Chinese Academy of Sciences (中国科学院软件研究所)