ARTIFICIAL INTELLIGENCE FOR HEALTHCARE DECISION SUPPORT: A FAST-CHANGING INTERPLAY BETWEEN HUMAN EXPERTISE AND AI ALGORITHMS
The aim of this study was to examine the role of artificial intelligence (AI) in decision support in the healthcare sector and to show that such support is a fast-changing interplay between human expertise and AI algorithms. We present explainable AI (XAI) techniques for decision support in medical image analysis scenarios. To increase the explainability of a Convolutional Neural Network's (CNN) predictions, we applied three different explanation techniques to the same collection of medical imaging data. Visual explanations were applied to in vivo images acquired by video capsule endoscopy (VCE) in an effort to boost the confidence of medical professionals in black-box predictions. We applied two post hoc interpretable machine learning techniques, SHapley Additive exPlanations (SHAP) and Local Interpretable Model-Agnostic Explanations (LIME), as well as the Contextual Importance and Utility (CIU) method as an alternative explanation strategy. The generated explanations were then evaluated by humans: building on the justifications offered by LIME, SHAP, and CIU, we carried out three user studies in which participants from a variety of non-medical backgrounds completed a set of assessments in an online survey environment, providing feedback on their knowledge and comprehension of the explanations presented. A quantitative analysis was performed on three user groups (n = 10, 10, 10), each receiving a different type of explanation. As anticipated, we found that the CIU approach outperformed both LIME and SHAP in supporting human decision-making and was more transparent, i.e., easier for users to grasp. CIU also produced explanations faster than LIME and SHAP. Our results imply that human decision-making differs significantly depending on the type of explanation support provided. Accordingly, we offer three plausible explainable techniques that, with further development, can be applied to various medical data sets and can effectively assist medical professionals in making decisions.
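The paper itself does not include an implementation; the following minimal Python sketch illustrates how the LIME and SHAP visual explanations described above might be generated for a CNN prediction on a single VCE frame. The model and image file names ("vce_cnn.h5", "vce_frame.npy", "vce_background.npy") are hypothetical placeholders, not artifacts of this study, and CIU is omitted because image-oriented CIU implementations vary in their APIs.

import numpy as np
import tensorflow as tf
from lime import lime_image
import shap

# Hypothetical trained CNN for classifying VCE frames.
model = tf.keras.models.load_model("vce_cnn.h5")

def predict_fn(images):
    # LIME expects a function mapping a batch of HxWx3 images to class probabilities.
    return model.predict(np.asarray(images), verbose=0)

# LIME: perturb superpixels and fit a local linear surrogate around the prediction.
frame = np.load("vce_frame.npy")  # one in vivo frame, HxWx3, scaled to [0, 1]
lime_explainer = lime_image.LimeImageExplainer()
lime_exp = lime_explainer.explain_instance(
    frame, predict_fn, top_labels=1, hide_color=0, num_samples=1000
)
# Overlay the superpixels that most support the top predicted class.
lime_img, lime_mask = lime_exp.get_image_and_mask(
    lime_exp.top_labels[0], positive_only=True, num_features=5, hide_rest=False
)

# SHAP: approximate per-pixel Shapley values with DeepExplainer.
background = np.load("vce_background.npy")  # small sample of training frames
shap_explainer = shap.DeepExplainer(model, background)
shap_values = shap_explainer.shap_values(frame[np.newaxis, ...])
shap.image_plot(shap_values, frame[np.newaxis, ...])  # red regions support the class

Both techniques yield saliency-style overlays of the kind shown to the user-study participants; a CIU explanation would analogously attribute importance and utility to image regions.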