Why it matters: Artificial intelligence is already helping scientists read medical images, monitor ICU patients, and study diseases from Alzheimer’s to cancer. But many AI systems act like “black boxes,” making it hard for clinicians and researchers to trust how they reached a result. A new invited review argues that explainable AI (XAI)—tools that show why an algorithm made a call—can bridge that gap in physiology research.
What the review looked at
Bettina Finzel, a researcher at the University of Bamberg, surveyed the recent wave of XAI in physiology, narrowing ~200 candidate papers down to 85 studies published between 2020 and 2024. The works span oncology, intensive care, neurology, aging, pain physiology, genetics, and more—underscoring how widely AI is being tried across biomedicine. The article appears in Pflügers Archiv – European Journal of Physiology as part of a special issue on AI.
The state of explainability—useful, but still basic
- Two tools dominate: The review finds heavy reliance on SHAP (a method that ranks which data features most influenced a prediction; used in 40 studies) and LIME (a method that builds a simple local approximation to explain a single prediction; used in 15). Their popularity stems from being model‑agnostic and easy to add to existing AI pipelines; a brief code sketch after this list shows what that typically looks like.
- Mostly visual, mostly static: Many explanations are delivered as bar charts, feature‑importance plots, or heatmaps over medical signals and images. (A composite figure on p. 10 of the paper shows SHAP plots, heatmaps from Grad‑CAM/LRP, and partial dependence curves commonly used to "open" black boxes.) What's missing, the author notes, are interactive, multimodal, human‑centered explanations that let experts probe "why" and "what if" questions in real time.
- Trust and validation: XAI is already helping experts validate whether an AI is “looking” at the right evidence—for instance, confirming that a mortality‑risk model weighs clinically sensible lab values rather than spurious artifacts. But many projects still rely on small or noisy datasets, so explanations also help flag when models might be learning the wrong lessons.
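What "adding SHAP or LIME to a pipeline" looks like: both tools are typically bolted onto a model that has already been trained, which is why the review describes them as easy to adopt. The sketch below is a minimal illustration, assuming a scikit-learn classifier and the open-source Python shap and lime packages; the dataset, model, and calls are stand-ins for demonstration, not code from the review.

```python
# Illustrative sketch only: the dataset, model, and API calls below
# (scikit-learn plus the open-source `shap` and `lime` packages) are
# assumptions for demonstration, not material from the review.
import shap
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Train an ordinary "black box" classifier on a public dataset.
data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# SHAP: attribute predictions to input features. The resulting values can be
# passed to shap.summary_plot for a global feature-importance view;
# shap.KernelExplainer is the fully model-agnostic option for non-tree models.
shap_values = shap.TreeExplainer(model).shap_values(X_test)

# LIME: fit a simple local surrogate around one prediction and report the
# handful of features that drove that single call.
lime_explainer = LimeTabularExplainer(
    X_train,
    feature_names=list(data.feature_names),
    class_names=list(data.target_names),
    mode="classification",
)
explanation = lime_explainer.explain_instance(
    X_test[0], model.predict_proba, num_features=5)
print(explanation.as_list())  # top local features and their weights
```

In both cases the explainer wraps the finished model without changing it, which is the model-agnostic convenience the review credits for the two methods' popularity.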
Two big opportunities ahead
- Build trustworthy, integrative physiology with XAI. Explanations can connect data‑hungry AI with the field’s holistic view of how cells, organs, and systems interact—helping researchers weave AI signals into established physiological knowledge.
- Design explanations the way humans explain. Insights from cognition and clinical practice (how people actually reason, teach, and justify) should shape the next generation of XAI so that explanations are useful at the bedside and the bench, not just technically correct.
Bottom line
The review’s message is optimistic but clear: explainability is moving from “nice‑to‑have” to “must‑have” as AI spreads through physiology and medicine. Today’s tools—especially SHAP and LIME—are a strong start. The next step is to go beyond static heatmaps to interactive, human‑centered explanations that earn trust, accelerate discovery, and ultimately improve patient care.
Source: Finzel B. "Current methods in explainable artificial intelligence and future prospects for integrative physiology," Pflügers Archiv – European Journal of Physiology (Invited Review).
Editor’s note: This article is for information only and is not a substitute for professional medical advice.