Article · Open Access

9 January 2026

Dialogical AI for Cognitive Bias Mitigation in Medical Diagnosis

1 Dipartimento di Scienze Sociali, Politiche e Cognitive, University of Siena, 53100 Siena, Italy
2 Dipartimento di Scienze Sociali e Politiche, University of Milan, 20122 Milan, Italy
3 Centro Chirurgico Toscano, 52100 Arezzo, Italy
* Authors to whom correspondence should be addressed.
This article belongs to the Section Computing and Artificial Intelligence

Abstract

Large Language Models (LLMs) promise to enhance clinical decision-making, yet empirical studies reveal a paradox: physician performance with LLM assistance shows minimal improvement or even deterioration. This failure stems from an “acquiescence problem”: current LLMs passively confirm rather than challenge clinicians’ hypotheses, reinforcing cognitive biases such as anchoring and premature closure. To address these limitations, we propose a Dialogic Reasoning Framework that operationalizes Dialogical AI principles in a prototype implementation named “Diagnostic Dialogue” (DiDi). The framework casts the LLM in three user-controlled roles: the Framework Coach (guiding structured reasoning), the Socratic Guide (asking probing questions), and the Red Team Partner (presenting evidence-based alternatives). Built on a Retrieval-Augmented Generation (RAG) architecture for factual grounding and traceability, the framework transforms LLMs from passive information providers into active reasoning partners that systematically mitigate cognitive bias. We evaluate its feasibility and qualitative impact through a pilot deployment of DiDi at Centro Chirurgico Toscano (CCT). Using purposive sampling of complex clinical scenarios, we present comparative case studies illustrating how the dialogic approach generates the cognitive friction needed to overcome the acquiescence observed in standard LLM interactions. While rigorous clinical validation through randomized controlled trials remains necessary, this work establishes a methodological foundation for designing LLM-based clinical decision support systems that genuinely augment human clinical reasoning.
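To make the role-based design concrete, the following is a minimal, hypothetical Python sketch of how a user-selected role might be dispatched over retrieved evidence. The three role names come from the abstract; the prompt wording, the retrieve_evidence stub, and the build_turn helper are illustrative assumptions, not the authors' DiDi implementation.

```python
# Minimal sketch of the three-role dispatch described in the abstract.
# Role names are from the paper; prompt wording, function names, and the
# retrieval stub are illustrative assumptions, not the DiDi implementation.

ROLE_PROMPTS = {
    "framework_coach": (
        "Guide the clinician step by step through a structured diagnostic "
        "framework: problem representation, differential, then workup plan."
    ),
    "socratic_guide": (
        "Do not state conclusions. Ask one probing question at a time that "
        "tests the current hypothesis for anchoring or premature closure."
    ),
    "red_team_partner": (
        "Argue for the strongest evidence-based alternative diagnosis, "
        "citing only the retrieved passages provided below."
    ),
}

def retrieve_evidence(case_summary: str, k: int = 3) -> list[str]:
    """Stand-in for the RAG step: return top-k guideline passages for grounding."""
    return [f"[passage {i} retrieved for: {case_summary[:40]}...]" for i in range(k)]

def build_turn(role: str, case_summary: str, hypothesis: str) -> str:
    """Compose a grounded prompt for the user-selected role."""
    if role not in ROLE_PROMPTS:
        raise ValueError(f"unknown role: {role!r}")
    evidence = "\n".join(retrieve_evidence(case_summary))
    return (
        f"System: {ROLE_PROMPTS[role]}\n"
        f"Retrieved evidence (cite by passage):\n{evidence}\n"
        f"Case: {case_summary}\n"
        f"Clinician's working hypothesis: {hypothesis}"
    )

if __name__ == "__main__":
    # The Red Team Partner is asked to push back on the working hypothesis.
    print(build_turn(
        "red_team_partner",
        "62-year-old with exertional chest pain and a normal resting ECG",
        "musculoskeletal pain",
    ))
```

Note that the abstract's key design point, that the clinician rather than the model chooses the stance, is reflected here in making role an explicit argument instead of a model-inferred mode.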
