
Abstract
Artificial Intelligence (AI) is a foundational domain within computer science, dedicated to mimicking human cognitive functions for problem-solving in real-world settings. While recent advances in machine learning (ML) and deep learning (DL) have propelled AI into the mainstream, especially in high-stakes fields such as medicine, the challenge remains to ensure that these systems are explainable, interpretable, and trustworthy. This article explores the historical foundations of AI, its practical developments in ML and DL, and the growing importance of explainability, particularly in medical decision-making. We also introduce the concept of “causability”—the degree to which explanations can be causally understood by human experts—and emphasise its role in human-AI interaction.


1. Introduction
Artificial Intelligence (AI) is arguably one of the oldest and most ambitious fields in computer science, aimed at replicating human reasoning and learning. It integrates principles from cognitive science and computer science to develop systems that learn and think like humans. AI is often referred to as machine intelligence, in contrast to human intelligence (Poole, Mackworth, & Goebel, 1998; Russell & Norvig, 2010), and sits at the intersection of symbolic reasoning, statistical inference, and increasingly, data-driven learning.

AI’s early development focused on symbolic systems capable of logical reasoning, as demonstrated by McCarthy’s 1958 Advice Taker—a program envisioned to perform common-sense reasoning (McCarthy, 1960). Over time, the field has shifted towards data-centric approaches such as machine learning (ML), which enables systems to learn from data and improve over time. This evolution has been driven by both algorithmic innovations and the availability of large datasets and computational resources.


2. Machine Learning and Deep Learning: From Data to Decision
ML, a subset of AI, is focused on designing algorithms that can learn from data and make predictions. It thrives in domains that demand high adaptability and learning efficiency, such as diagnostics, finance, and recommendation systems (Michalski, Carbonell, & Mitchell, 1984). Challenges in ML include sense-making, context-awareness, and decision-making under uncertainty (Holzinger, 2017).

Deep learning (DL), a powerful family of ML methods based on multi-layered neural networks, has shown remarkable capabilities—ranging from image classification to natural language processing. Its popularity has surged due to its success in matching or exceeding human-level performance in tasks like skin cancer classification (Esteva et al., 2017) and detection of diabetic retinopathy (Ting et al., 2017). However, the downside is its “black-box” nature—making it difficult to interpret or explain the decision-making process.
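To make the “multi-layered” structure concrete, the following is a minimal sketch of a small convolutional image classifier in PyTorch. The layer sizes, the 64×64 input resolution, and the two-class setup are illustrative assumptions, not the architectures used in the cited studies; the point is only that the prediction emerges from many stacked, learned transformations that are hard to inspect directly.

```python
# Minimal sketch of a multi-layer ("deep") image classifier in PyTorch.
# Layer sizes and the two-class setup are illustrative assumptions.
import torch
import torch.nn as nn

class SmallImageClassifier(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        # Stacked convolutional layers learn increasingly abstract features.
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # A fully connected head maps the learned features to class scores.
        self.head = nn.Sequential(
            nn.Flatten(), nn.LazyLinear(64), nn.ReLU(), nn.Linear(64, num_classes)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x))

model = SmallImageClassifier()
dummy_batch = torch.randn(4, 3, 64, 64)   # four 64x64 RGB images
logits = model(dummy_batch)               # raw class scores
print(logits.softmax(dim=1))              # the model's opaque "decision"
```

Nothing in the printed probabilities reveals why one class was preferred over the other, which is exactly the “black-box” problem the following sections address.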


3. The Imperative of Explainable AI (XAI)
In medicine, AI is increasingly adopted for diagnosis, treatment planning, and clinical decision support. However, clinical data are often complex, noisy, and incomplete, and decision support in this domain demands not just accuracy but also transparency and interpretability.

Explainable AI (XAI) refers to models and systems that offer comprehensible and traceable justifications for their outputs. Early AI systems, such as MYCIN (Shortliffe & Buchanan, 1975), were built on logical rules and could explain their reasoning steps, but lacked scalability and usability in real-world clinical settings.
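The traceability of such rule-based systems is easy to illustrate. Below is a minimal forward-chaining sketch in Python whose “explanation” is simply the list of rules that fired; the rules and clinical facts are invented placeholders, not MYCIN’s actual knowledge base.

```python
# Minimal sketch of a rule-based system that can replay its reasoning steps,
# in the spirit of early expert systems such as MYCIN.
# The rules and facts below are invented placeholders.
RULES = [
    ({"fever", "productive_cough"}, "suspected_pneumonia"),
    ({"suspected_pneumonia", "gram_positive_culture"}, "consider_antibiotic_A"),
]

def infer(initial_facts):
    """Forward-chain over RULES, recording each fired rule as an explanation step."""
    facts, trace = set(initial_facts), []
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                trace.append(f"{' AND '.join(sorted(conditions))} -> {conclusion}")
                changed = True
    return facts, trace

facts, trace = infer({"fever", "productive_cough", "gram_positive_culture"})
print("Conclusions:", facts)
print("Explanation:")
for step in trace:
    print("  because", step)
```

The explanation here is a complete, human-readable chain of inference; the difficulty, as noted above, lies in scaling such hand-crafted rule bases to messy real-world clinical data.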

Today, the demand for explainable AI in medicine is not just about technical transparency, but also about fostering trust, enhancing decision-making, and ensuring ethical compliance. This includes enabling clinicians to audit AI decisions, correct errors, and integrate AI insights with domain expertise.


4. From Explainability to Causability
A novel concept introduced in recent years is causability, defined as the degree to which a human expert can understand the causal reasoning behind a machine’s decision within a given context. Unlike technical explainability—which focuses on algorithmic behaviour—causability is user-centred and reflects the effectiveness of an explanation in aiding human understanding (Holzinger et al., 2017).

The distinction is crucial. While explainable models highlight which features influenced a prediction, causable systems ensure that the explanation resonates with the human’s mental model. This concept mirrors usability in human-computer interaction, drawing attention to how well humans can interact with and trust AI systems.
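As a concrete illustration of the explainability side, the sketch below ranks input features by permutation importance using scikit-learn; the synthetic data and the clinical-sounding feature names are assumptions for illustration only. The ranked list is an explanation of model behaviour, but whether a clinician can map it onto a plausible causal story is precisely the causability question.

```python
# Minimal sketch of feature-level explainability: permutation importance tells
# us which inputs influenced the model's predictions.
# The synthetic data and "patient" feature names are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

feature_names = ["age", "blood_pressure", "glucose", "bmi"]
X, y = make_classification(n_samples=500, n_features=4, n_informative=3,
                           n_redundant=0, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

# A ranked list of influential features is an *explanation*; whether it matches
# a clinician's causal mental model is a question of *causability*.
for name, importance in sorted(zip(feature_names, result.importances_mean),
                               key=lambda pair: -pair[1]):
    print(f"{name:>15}: {importance:.3f}")
```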


5. Challenges in Medical AI Explainability
Several challenges plague the development of explainable systems in healthcare:

  • Data limitations: High-quality annotated datasets are rare, and medical data is often heterogeneous and incomplete (Holzinger, Dehmer, & Jurisica, 2014).

  • Performance vs. interpretability trade-off: High-performing models such as DL are often the least interpretable, whereas more transparent models such as decision trees may offer lower accuracy (Bologna & Hayashi, 2017); a short sketch illustrating this trade-off appears at the end of this section.

  • Contextual understanding: Medical decisions are context-sensitive and require not just pattern recognition but understanding of causal relationships and patient history.

To address these challenges, models must integrate multiple data sources, support interactive explanation interfaces, and maintain high predictive performance while offering causal clarity.
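The trade-off noted in the list above can be demonstrated in a few lines. In the sketch below, a depth-limited decision tree can be printed as auditable if-then rules, while a boosted ensemble typically scores somewhat higher on held-out data but offers no comparably compact explanation. The dataset, split sizes, and model settings are illustrative assumptions.

```python
# Minimal sketch of the performance-vs-interpretability trade-off: a shallow
# decision tree yields human-readable rules, while a boosted ensemble is usually
# more accurate but harder to inspect. Data and settings are illustrative.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, test_size=0.3, random_state=0)

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)
boost = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

print("shallow tree accuracy :", tree.score(X_test, y_test))
print("boosted model accuracy:", boost.score(X_test, y_test))

# The tree can be printed as if-then rules a domain expert can audit;
# the boosted ensemble provides no equally compact explanation.
print(export_text(tree, feature_names=list(data.feature_names)))
```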


6. Explanation Types and Human-AI Interaction
Three types of explanation are relevant to medical AI:

  1. Peer-to-peer explanations – as seen in clinical discussions.

  2. Educational explanations – between trainers and learners.

  3. Scientific explanations – rooted in hypothesis and deduction.

In clinical practice, peer-level explanations are crucial: AI must communicate decisions in a way that medical professionals can understand and validate. Visual interfaces, natural language summaries, and interactive exploration tools can bridge the gap between raw model output and expert understanding.
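A minimal sketch of the “natural language summary” idea follows. The label, attribution scores, and wording are invented placeholders intended only to show how raw model output might be rephrased for a clinical peer rather than presented as bare numbers.

```python
# Minimal sketch of turning raw model output into a peer-level, natural-language
# summary. The label, attribution values, and wording are invented placeholders,
# not the output of a real clinical model.
def summarise_prediction(label: str, probability: float, attributions: dict,
                         top_k: int = 2) -> str:
    """Render a prediction and its strongest feature attributions as a sentence."""
    ranked = sorted(attributions.items(), key=lambda kv: -abs(kv[1]))[:top_k]
    drivers = ", ".join(f"{name} ({value:+.2f})" for name, value in ranked)
    return (f"The model suggests '{label}' with probability {probability:.0%}; "
            f"the strongest contributing factors were: {drivers}.")

print(summarise_prediction(
    label="diabetic retinopathy",
    probability=0.87,
    attributions={"microaneurysm_count": 0.42, "haemorrhage_area": 0.31,
                  "patient_age": 0.08},
))
```

Such summaries do not replace the underlying model audit, but they give clinicians an entry point for validating or challenging a recommendation in their own terms.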


7. The Role of Human Expertise and Ethical Concerns
Human experts are not always capable of explaining their own decisions—especially when intuition and experience play a large role. This underscores the need for AI systems to support, rather than replace, human judgement.

Explainable AI also touches on broader ethical issues: transparency, accountability, fairness, and trustworthiness. Without adequate explanation mechanisms, AI risks being perceived as opaque or biased, especially in high-stakes applications like medicine.


8. Conclusion and Future Directions
As AI continues to transform healthcare, the importance of explainability and causability cannot be overstated. Building models that offer meaningful explanations, tailored to human reasoning, is essential for safe, ethical, and effective AI integration in clinical practice.

We propose further research into developing standardized metrics for measuring explanation quality (cf. Hoffman et al., 2018), including instruments such as the System Causability Scale, and into fostering multidisciplinary collaboration between AI researchers, clinicians, and human-computer interaction specialists.
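To make the idea of a measurable explanation-quality score concrete, the sketch below scores a causability questionnaire. It assumes a ten-item, five-point Likert instrument whose overall score is the normalised sum of the ratings; the ratings and this scoring rule are illustrative assumptions, and the published System Causability Scale should be consulted for the actual items and procedure.

```python
# Minimal sketch of scoring a causability questionnaire. Assumes a ten-item,
# five-point Likert instrument scored as the normalised sum of ratings;
# consult the published System Causability Scale for the actual items.
def causability_score(ratings: list[int], scale_max: int = 5) -> float:
    """Normalise Likert ratings (1..scale_max) to a score between 0 and 1."""
    if not all(1 <= r <= scale_max for r in ratings):
        raise ValueError("each rating must lie between 1 and the scale maximum")
    return sum(ratings) / (scale_max * len(ratings))

# One clinician's ratings of an explanation interface (placeholder values).
ratings = [4, 5, 3, 4, 4, 2, 5, 4, 3, 4]
print(f"causability score: {causability_score(ratings):.2f}")
```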

The future of medicine lies in synergistic human-AI collaboration—where intelligent systems not only deliver accurate predictions but also empower experts with insight, control, and confidence.


References:
(Only a few representative ones listed here for brevity; full reference list available on request)

  • Esteva, A., Kuprel, B., Novoa, R. A., et al. (2017). Dermatologist-level classification of skin cancer with deep neural networks. Nature.

  • Holzinger, A., et al. (2017). What do we need to build explainable AI systems for the medical domain? Review Paper.

  • LeCun, Y., Bengio, Y., & Hinton, G. (2015). Deep learning. Nature.

  • Michalski, R. S., Carbonell, J. G., & Mitchell, T. M. (1984). Machine Learning: An Artificial Intelligence Approach.

  • Poole, D., Mackworth, A., & Goebel, R. (1998). Computational Intelligence.
