Abstract: Because the rules of a rule-based interpretability model may fail to reflect the model's actual decision-making process, an interpretability framework combining machine learning and knowledge reasoning is proposed. The framework involves a target-feature result and a reasoning result, and achieves interpretability when the two results agree and both are reliable. The target-feature result is obtained directly from the machine learning model, while the reasoning result is acquired by sub-feature classification combined with rule-based knowledge reasoning. Whether the two results are reliable is judged by computing their credibility. A case study of cervical cancer cell recognition in TCT images, fusing learning and reasoning, is used to verify the framework. Experiments demonstrate that the framework makes the model's real decisions interpretable and improves classification accuracy over iterations. This helps people understand the logic of the system's decision-making and the reasons for its failures.
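The decision rule summarized in the abstract can be illustrated with a minimal sketch. All names here (CRED_THRESHOLD, Result, fuse) are hypothetical and not taken from the paper; the sketch only mirrors the stated idea that a prediction is accepted as interpretable when the learning result and the reasoning result agree and both pass a credibility check.

```python
# Illustrative sketch only; the threshold and data layout are assumptions,
# not the paper's actual credibility computation.
from dataclasses import dataclass
from typing import Optional

CRED_THRESHOLD = 0.8  # assumed reliability cutoff


@dataclass
class Result:
    label: str          # predicted class, e.g. a cervical cell category
    credibility: float  # credibility score of this result


def fuse(target: Result, reasoning: Result) -> Optional[str]:
    """Return an interpretable label only when both results agree and are reliable."""
    both_reliable = (target.credibility >= CRED_THRESHOLD
                     and reasoning.credibility >= CRED_THRESHOLD)
    if both_reliable and target.label == reasoning.label:
        return target.label  # decision is explainable via the reasoning rules
    return None  # disagreement or low credibility: flag for further iteration/review


# Example: the ML model and the rule-based reasoning agree with high credibility.
print(fuse(Result("abnormal", 0.93), Result("abnormal", 0.88)))  # -> "abnormal"
```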