Enhancing Intuitive Decision-Making and Reliance Through Human–AI Collaboration: A Review
Abstract
1. Introduction
1.1. Background
1.1.1. From AI Tools to Collaborative Partners: An Evolution
1.1.2. Applications of HAIC
1.1.3. Ethical Concerns and Challenges
1.2. Existing Gaps
1.3. Research Questions
2. Methodology
2.1. Search Strategy
2.2. Screening and Eligibility
2.3. Data Collection
2.4. Study Selection
2.5. Synthesis Methods: A Human–AI Collaborative Framework
2.5.1. Stage 1: AI-Assisted Deep Reading and Information Extraction
- Precise summaries of research questions and objectives.
- Standardized extraction of methodological elements (study design, sample characteristics, and data types).
- Itemized organization of core findings.
- Preliminary annotations of potential relevance to the three research questions (RQ1–RQ3) with supporting textual evidence.
2.5.2. Stage 2: Human Expert-Informed Judgment
2.5.3. Stage 3: Consensus Building Among Human Researchers
2.5.4. Stage 4: Final Synthesis
- RQ relevance (e.g., RQ1, RQ2, RQ3, or multiple).
- Methodology (e.g., study design and data sources).
- Core findings (e.g., key results and implications).
- Limitations (paper-specific, AI analysis, and human judgment).
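The per-paper coding scheme above maps naturally onto a structured record. The sketch below is illustrative only: the class and field names (`StudyRecord`, `rq_relevance`, and so on) are our own shorthand for the four coded dimensions, not identifiers from the review protocol.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class StudyRecord:
    """One coded paper: RQ relevance, methodology, findings, limitations."""
    citation: str
    rq_relevance: List[str]    # e.g. ["RQ1", "RQ3"], or all three
    study_design: str          # e.g. "randomized experiment"
    data_sources: List[str]
    core_findings: List[str]
    limitations: List[str]     # paper-specific, AI-analysis, and human-judgment notes

    def addresses(self, rq: str) -> bool:
        # True if the paper was coded as relevant to the given research question
        return rq in self.rq_relevance
```

A record like this makes the final synthesis filterable by research question, e.g. selecting all papers where `addresses("RQ2")` is true.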
2.6. Effectiveness of the Human–AI Collaborative Approach
3. Results
3.1. Review Findings and Discussions
3.1.1. Descriptive Statistics
Publication Year
Research Domains’ Distribution
Research Methods’ Distribution
3.2. RQ1: What Design Strategies Enable AI Systems to Support Humans’ Intuitive Capabilities While Maintaining Decision-Making Autonomy?
3.2.1. Strategy 1: Complementary Role Architecture
3.2.2. Strategy 2: Adaptive User-Centered Design
3.2.3. Strategy 3: Context-Aware Task Allocation
3.2.4. Strategy 4: Autonomous Reliance Calibration
3.2.5. Supporting Evidence and Implementation Considerations
3.3. RQ2: How Do AI Presentation and Interaction Approaches Influence Trust Calibration and Reliance Behaviors in HAIC?
3.3.1. How AI Systems Present Information to Users
Method 1: Visual Presentations: Showing Users What AI “Sees”
Method 2: Learning Through Examples: How AI Shows Similar Cases
3.3.2. How Users Interact and Collaborate with AI Systems
Method 1: Giving Users Control: Interactive Features and User Agency
Method 2: Expressing Uncertainty: How AI Communicates Confidence Levels
Method 3: Explaining the Process: How AI Describes Its Decision Making
3.3.3. Effects on User Trust: How Much Users Believe in AI
3.3.4. Effects on User Behavior: How Users Actually Use AI Recommendations
3.3.5. What Influences These Effects: Key Factors That Matter
3.4. RQ3: What Ethical and Practical Implications Arise from Integrating AI Decision Support Systems into High-Risk Human Decision Making, Particularly Regarding Trust Calibration, Skill Degradation, and Accountability Across Different Domains?
3.4.1. Challenge 1: Trust Calibration Challenges in High-Risk Contexts
Generalized Trust Formation and Its Risks
The Capability–Morality Trust Paradox
Error Pattern Sensitivity and Risk Assessment
Performance Paradox in HAIC
3.4.2. Challenge 2: Skill Degradation and Human Agency Preservation
Patterns of Human Agency in AI-Mediated Decision Making
Influence Dynamics and Skill Preservation
Domain-Specific Skill Degradation Risks
3.4.3. Challenge 3: Accountability Gaps and Responsibility Attribution
The Responsibility Attribution Challenge
Inadequacy of Current Certification Frameworks
Transparency Paradoxes in High-Risk Contexts
3.4.4. Domain-Specific Ethical and Practical Implications in High-Risk Environments
Healthcare: Clinical Decision Making and Patient Safety
Aviation and Safety-Critical Systems: Managing Catastrophic Risk
Public Institutions and Democratic Governance: Trust and Legitimacy
Military and Defense Applications: Command Authority and Accountability
Cross-Domain Risk Calibration Requirements
3.4.5. Implications for Responsible Implementation in High-Risk Contexts
Integrated Sociotechnical Approaches
Ethical Framework Integration
3.5. Quantitative Synthesis and Evidence Integration
3.5.1. Meta-Analytic Opportunities
3.5.2. Evidence Tables for Key Findings
3.5.3. Conflicting Findings and Resolution Needs
4. Limitations
5. Conclusions
Author Contributions
Funding
Acknowledgments
Conflicts of Interest
Abbreviations
| Abbreviation | Full Term |
|---|---|
| HAIC | Human–AI Collaboration |
| AI | Artificial Intelligence |
| LLM | Large Language Model |
| XAI | Explainable AI |
| TREWS | Targeted Real-Time Early Warning System |
| LIME | Local Interpretable Model-Agnostic Explanations |
| SHAP | SHapley Additive exPlanations |
References
- Reverberi, C.; Rigon, T.; Solari, A.; Hassan, C.; Cherubini, P.; GI Genius CADx Study Group; Cherubini, A. Experimental evidence of effective human–AI collaboration in medical decision-making. Sci. Rep. 2022, 12, 14952. [Google Scholar] [CrossRef]
- Dwivedi, Y.K.; Hughes, L.; Ismagilova, E.; Aarts, G.; Coombs, C.; Crick, T.; Duan, Y.; Dwivedi, R.; Edwards, J.; Eirug, A.; et al. Artificial Intelligence (AI): Multidisciplinary perspectives on emerging challenges, opportunities, and agenda for research, practice and policy. Int. J. Inf. Manag. 2021, 57, 101994. [Google Scholar] [CrossRef]
- Adams, R.; Henry, K.E.; Sridharan, A.; Soleimani, H.; Zhan, A.; Rawat, N.; Johnson, L.; Hager, D.N.; Cosgrove, S.E.; Markowski, A.; et al. Prospective, multi-site study of patient outcomes after implementation of the TREWS machine learning-based early warning system for sepsis. Nat. Med. 2022, 28, 1455–1460. [Google Scholar] [CrossRef]
- Maslej, N.; Fattorini, L.; Perrault, R.; Parli, V.; Reuel, A.; Brynjolfsson, E.; Etchemendy, J.; Ligett, K.; Lyons, T.; Manyika, J.; et al. Artificial Intelligence Index Report 2024. arXiv 2024, arXiv:2405.19522. [Google Scholar] [CrossRef]
- Hanna, M.G.; Pantanowitz, L.; Jackson, B.; Palmer, O.; Visweswaran, S.; Pantanowitz, J.; Deebajah, M.; Rashidi, H.H. Ethical and Bias Considerations in Artificial Intelligence/Machine Learning. Mod. Pathol. 2025, 38, 100686. [Google Scholar] [CrossRef]
- Gala, D.; Behl, H.; Shah, M.; Makaryus, A.N. The Role of Artificial Intelligence in Improving Patient Outcomes and Future of Healthcare Delivery in Cardiology: A Narrative Review of the Literature. Healthcare 2024, 12, 481. [Google Scholar] [CrossRef]
- Ahmad, S.F.; Han, H.; Alam, M.M.; Rehmat, M.K.; Irshad, M.; Arraño-Muñoz, M.; Ariza-Montes, A. Impact of artificial intelligence on human loss in decision making, laziness and safety in education. Humanit. Soc. Sci. Commun. 2023, 10, 311. [Google Scholar] [CrossRef]
- Hasanzadeh, F.; Josephson, C.B.; Waters, G.; Adedinsewo, D.; Azizi, Z.; White, J.A. Bias recognition and mitigation strategies in artificial intelligence healthcare applications. NPJ Digit. Med. 2025, 8, 154. [Google Scholar] [CrossRef]
- Smith, P.T. Resolving responsibility gaps for lethal autonomous weapon systems. Front. Big Data 2022, 5, 1038507. [Google Scholar] [CrossRef]
- Lavazza, A.; Farina, M. Leveraging autonomous weapon systems: Realism and humanitarianism in modern warfare. Technol. Soc. 2023, 74, 102322. [Google Scholar] [CrossRef]
- Topol, E.J. High-performance medicine: The convergence of human and artificial intelligence. Nat. Med. 2019, 25, 44–56. [Google Scholar] [CrossRef]
- Holmes, W. Artificial Intelligence in Education. In Encyclopedia of Education and Information Technologies; Tatnall, A., Ed.; Springer International Publishing: Cham, Switzerland, 2019; pp. 1–16. [Google Scholar]
- Tzirides, A.O.; Zapata, G.; Kastania, N.P.; Saini, A.K.; Castro, V.; Ismael, S.A.; You, Y.-l.; Santos, T.A.d.; Searsmith, D.; O’Brien, C.; et al. Combining human and artificial intelligence for enhanced AI literacy in higher education. Comput. Educ. Open 2024, 6, 100184. [Google Scholar] [CrossRef]
- Turing, A.M. Computing machinery and intelligence. Mind 1950, 59, 433–460. [Google Scholar] [CrossRef]
- McCarthy, J.; Minsky, M.L.; Rochester, N.; Shannon, C.E. A proposal for the Dartmouth summer research project on artificial intelligence. AI Mag. 2006, 27, 12. [Google Scholar]
- LeCun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 436–444. [Google Scholar] [CrossRef]
- Chiriatti, M.; Ganapini, M.; Panai, E.; Ubiali, M.; Riva, G. The case for human–AI interaction as system 0 thinking. Nat. Hum. Behav. 2024, 8, 1829–1830. [Google Scholar] [CrossRef]
- Vaccaro, M.; Almaatouq, A.; Malone, T. When combinations of humans and AI are useful: A systematic review and meta-analysis. Nat. Hum. Behav. 2024, 8, 2293–2303. [Google Scholar] [CrossRef]
- Tsvetkova, M.; Yasseri, T.; Pescetelli, N.; Werner, T. A new sociology of humans and machines. Nat. Hum. Behav. 2024, 8, 1864–1876. [Google Scholar] [CrossRef]
- Endsley, M.R. Toward a theory of situation awareness in dynamic systems. In Situational Awareness; Routledge: Oxfordshire, UK, 2017; pp. 9–42. [Google Scholar]
- Sweller, J. Cognitive load during problem solving: Effects on learning. Cogn. Sci. 1988, 12, 257–285. [Google Scholar] [CrossRef]
- Sweller, J. CHAPTER TWO—Cognitive Load Theory. In Psychology of Learning and Motivation; Mestre, J.P., Ross, B.H., Eds.; Academic Press: Cambridge, MA, USA, 2011; pp. 37–76. [Google Scholar]
- Klein, G.A. Sources of Power: How People Make Decisions; MIT Press: Cambridge, MA, USA, 2017. [Google Scholar]
- Ross, K.G.; Klein, G.A.; Thunholm, P.; Schmitt, J.F.; Baxter, H.C. The recognition-primed decision model. Mil. Rev. 2004, 74, 6–10. [Google Scholar]
- Bansal, G.; Wu, T.; Zhou, J.; Fok, R.; Nushi, B.; Kamar, E.; Ribeiro, M.T.; Weld, D. Does the Whole Exceed its Parts? The Effect of AI Explanations on Complementary Team Performance. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, Yokohama, Japan, 8–13 May 2021; pp. 1–16. [Google Scholar]
- Klein, G. Naturalistic decision making. Hum. Factors 2008, 50, 456–460. [Google Scholar] [CrossRef]
- Senoner, J.; Schallmoser, S.; Kratzwald, B.; Feuerriegel, S.; Netland, T. Explainable AI improves task performance in human-AI collaboration. Sci. Rep. 2024, 14, 31150. [Google Scholar] [CrossRef]
- Amann, J.; Blasimme, A.; Vayena, E.; Frey, D.; Madai, V.I. Explainability for artificial intelligence in healthcare: A multidisciplinary perspective. BMC Med. Inform. Decis. Mak. 2020, 20, 310. [Google Scholar] [CrossRef]
- Chen, V.; Liao, Q.V.; Vaughan, J.W.; Bansal, G. Understanding the Role of Human Intuition on Reliance in Human-AI Decision-Making with Explanations. Proc. ACM Hum. Comput. Interact. 2023, 7, 1–32. [Google Scholar] [CrossRef]
- Poursabzi-Sangdeh, F.; Goldstein, D.G.; Hofman, J.M.; Vaughan, J.W.W.; Wallach, H. Manipulating and Measuring Model Interpretability. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, Yokohama, Japan, 8–13 May 2021; p. 237. [Google Scholar]
- Lee, J.D.; See, K.A. Trust in automation: Designing for appropriate reliance. Hum. Factors 2004, 46, 50–80. [Google Scholar]
- McAllister, D.J. Affect- and cognition-based trust as foundations for interpersonal cooperation in organizations. Acad. Manag. J. 1995, 38, 24–59. [Google Scholar] [CrossRef]
- Hoff, K.A.; Bashir, M. Trust in automation: Integrating empirical evidence on factors that influence trust. Hum. Factors 2015, 57, 407–434. [Google Scholar] [CrossRef]
- Hunter, C.; Bowen, B.E. We’ll never have a model of an AI major-general: Artificial Intelligence, command decisions, and kitsch visions of war. J. Strateg. Stud. 2024, 47, 116–146. [Google Scholar] [CrossRef]
- Szabadföldi, I. Artificial Intelligence in Military Application—Opportunities and Challenges. Land. Forces Acad. Rev. 2021, 26, 157–165. [Google Scholar] [CrossRef]
- Kase, S.E.; Hung, C.P.; Krayzman, T.; Hare, J.Z.; Rinderspacher, B.C.; Su, S.M. The Future of Collaborative Human-Artificial Intelligence Decision-Making for Mission Planning. Front. Psychol. 2022, 13, 850628. [Google Scholar] [CrossRef]
- Dodeja, L.; Tambwekar, P.; Hedlund-Botti, E.; Gombolay, M. Towards the design of user-centric strategy recommendation systems for collaborative Human–AI tasks. Int. J. Hum. Comput. Stud. 2024, 184, 103216. [Google Scholar] [CrossRef]
- Berretta, S.; Tausch, A.; Ontrup, G.; Gilles, B.; Peifer, C.; Kluge, A. Defining human-AI teaming the human-centered way: A scoping review and network analysis. Front. Artif. Intell. 2023, 6, 1250725. [Google Scholar] [CrossRef]
- Nurkin, T.; Siegel, J. Battlefield Applications for Human-Machine Teaming: Demonstrating Value, Experimenting with New Capabilities and Accelerating Adoption; Atlantic Council, Scowcroft Center for Strategy and Security: Washington, DC, USA, 2023. [Google Scholar]
- Esmaeilzadeh, P. Challenges and strategies for wide-scale artificial intelligence (AI) deployment in healthcare practices: A perspective for healthcare organizations. Artif. Intell. Med. 2024, 151, 102861. [Google Scholar] [CrossRef]
- Wubineh, B.Z.; Deriba, F.G.; Woldeyohannis, M.M. Exploring the opportunities and challenges of implementing artificial intelligence in healthcare: A systematic literature review. Urol. Oncol. Semin. Orig. Investig. 2024, 42, 48–56. [Google Scholar] [CrossRef]
- Kelly, C.J.; Karthikesalingam, A.; Suleyman, M.; Corrado, G.; King, D. Key challenges for delivering clinical impact with artificial intelligence. BMC Med. 2019, 17, 195. [Google Scholar] [CrossRef]
- Bajwa, J.; Munir, U.; Nori, A.; Williams, B. Artificial intelligence in healthcare: Transforming the practice of medicine. Future Healthc. J. 2021, 8, e188–e194. [Google Scholar] [CrossRef]
- Nahavandi, S. Industry 5.0—A Human-Centric Solution. Sustainability 2019, 11, 4371. [Google Scholar] [CrossRef]
- Dhanda, M.; Rogers, B.A.; Hall, S.; Dekoninck, E.; Dhokia, V. Reviewing human-robot collaboration in manufacturing: Opportunities and challenges in the context of industry 5.0. Robot. Comput. Integr. Manuf. 2025, 93, 39. [Google Scholar] [CrossRef]
- Gomez, C.; Cho, S.M.; Ke, S.; Huang, C.-M.; Unberath, M. Human-AI collaboration is not very collaborative yet: A taxonomy of interaction patterns in AI-assisted decision making from a systematic review. Front. Comput. Sci. 2025, 6, 1521066. [Google Scholar] [CrossRef]
- Sharma, A.; Lin, I.W.; Miner, A.S.; Atkins, D.C.; Althoff, T. Human–AI collaboration enables more empathic conversations in text-based peer-to-peer mental health support. Nat. Mach. Intell. 2023, 5, 46–57. [Google Scholar] [CrossRef]
- Herrera, F. Reflections and attentiveness on eXplainable Artificial Intelligence (XAI). The journey ahead from criticisms to human–AI collaboration. Inf. Fusion. 2025, 121, 103133. [Google Scholar] [CrossRef]
- Rezaei, M.; Pironti, M.; Quaglia, R. AI in knowledge sharing, which ethical challenges are raised in decision-making processes for organisations? Manag. Decis. 2024, 63, 3369–3388. [Google Scholar] [CrossRef]
- Page, M.J.; McKenzie, J.E.; Bossuyt, P.M.; Boutron, I.; Hoffmann, T.C.; Mulrow, C.D.; Shamseer, L.; Tetzlaff, J.M.; Akl, E.A.; Brennan, S.E. The PRISMA 2020 statement: An updated guideline for reporting systematic reviews. BMJ 2021, 372, n71. [Google Scholar] [CrossRef] [PubMed]
- Zheng, Q.; Tang, Y.; Liu, Y.; Liu, W.; Huang, Y. UX Research on Conversational Human-AI Interaction: A Literature Review of the ACM Digital Library. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, New Orleans, LA, USA, 29 April–5 May 2022; p. 570. [Google Scholar]
- Helms Andersen, T.; Marcussen, T.; Termannsen, A.; Lawaetz, T.; Nørgaard, O. Using Artificial Intelligence Tools as Second Reviewers for Data Extraction in Systematic Reviews: A Performance Comparison of Two AI Tools Against Human Reviewers. Cochrane Evid. Synth. Methods 2025, 3, e70036. [Google Scholar] [CrossRef]
- de la Torre-López, J.; Ramírez, A.; Romero, J.R. Artificial intelligence to automate the systematic review of scientific literature. Computing 2023, 105, 2171–2194. [Google Scholar] [CrossRef]
- Bolanos, F.; Salatino, A.; Osborne, F.; Motta, E. Artificial intelligence for literature reviews: Opportunities and challenges. Artif. Intell. Rev. 2024, 57, 259. [Google Scholar] [CrossRef]
- Blaizot, A.; Veettil, S.K.; Saidoung, P.; Moreno-Garcia, C.F.; Wiratunga, N.; Aceves-Martins, M.; Lai, N.M.; Chaiyakunapruk, N. Using artificial intelligence methods for systematic review in health sciences: A systematic review. Res. Synth. Methods 2022, 13, 353–362. [Google Scholar] [CrossRef]
- Farber, S. Enhancing peer review efficiency: A mixed-methods analysis of artificial intelligence-assisted reviewer selection across academic disciplines. Learn. Publ. 2024, 37, e1638. [Google Scholar] [CrossRef]
- Berger-Tal, O.; Wong, B.B.; Adams, C.A.; Blumstein, D.T.; Candolin, U.; Gibson, M.J.; Greggor, A.L.; Lagisz, M.; Macura, B.; Price, C.J. Leveraging AI to improve evidence synthesis in conservation. Trends Ecol. Evol. 2024, 39, 548–557. [Google Scholar] [CrossRef]
- Lee, K.; Paek, H.; Ofoegbu, N.; Rube, S.; Higashi, M.K.; Dawoud, D.; Xu, H.; Shi, L.; Wang, X. A4SLR: An Agentic AI-Assisted Systematic Literature Review Framework to Augment Evidence Synthesis for HEOR and HTA. Value Health 2025, 28, 1655–1664. [Google Scholar] [CrossRef]
- Wang, D.; Weisz, J.D.; Muller, M.; Ram, P.; Geyer, W.; Dugan, C.; Tausczik, Y.; Samulowitz, H.; Gray, A. Human-AI Collaboration in Data Science: Exploring Data Scientists’ Perceptions of Automated AI. Proc. ACM Hum. Comput. Interact. 2019, 3, 1–24. [Google Scholar] [CrossRef]
- Choudari, S.; Sanwal, R.; Sharma, N.; Shastri, S.; Singh, D.A.P.; Deepa, G. Data Science collaboration in Human AI: Decision Optimization using Human-centered Automation. In Proceedings of the 2024 Second International Conference Computational and Characterization Techniques in Engineering & Sciences (IC3TES), Lucknow, India, 15–16 November 2024; pp. 1–6. [Google Scholar]
- Scholes, M.S. Artificial intelligence and uncertainty. Risk Sci. 2025, 1, 100004. [Google Scholar] [CrossRef]
- Cai, C.J.; Winter, S.; Steiner, D.; Wilcox, L.; Terry, M. "Hello AI": Uncovering the Onboarding Needs of Medical Practitioners for Human-AI Collaborative Decision-Making. Proc. ACM Hum. Comput. Interact. 2019, 3, 1–24. [Google Scholar] [CrossRef]
- Xu, B.; Song, X.; Cai, Z.; Professor, A.Y.-L.C.; Lim, E.; Tan, C.-W.; Yu, J. Artificial Intelligence or Augmented Intelligence: A Case Study of Human-AI Collaboration in Operational Decision Making. In Proceedings of the Pacific Asia Conference on Information Systems (PACIS), Dubai, United Arab Emirates, 20–24 June 2020. [Google Scholar]
- Ding, S.; Pan, X.; Hu, L.; Liu, L. A new model for calculating human trust behavior during human-AI collaboration in multiple decision-making tasks: A Bayesian approach. Comput. Ind. Eng. 2025, 200, 110872. [Google Scholar] [CrossRef]
- Hauptman, A.I.; Schelble, B.G.; McNeese, N.J.; Madathil, K.C. Adapt and overcome: Perceptions of adaptive autonomous agents for human-AI teaming. Comput. Hum. Behav. 2023, 138, 107451. [Google Scholar] [CrossRef]
- Gomez, C.; Unberath, M.; Huang, C.-M. Mitigating knowledge imbalance in AI-advised decision-making through collaborative user involvement. Int. J. Hum. Comput. Stud. 2023, 172, 102977. [Google Scholar] [CrossRef]
- Muijlwijk, H.; Willemsen, M.C.; Smyth, B.; IJsselsteijn, W.A. Benefits of Human-AI Interaction for Expert Users Interacting with Prediction Models: A Study on Marathon Running. In Proceedings of the IUI ‘24: 29th International Conference on Intelligent User Interfaces, Greenville, SC, USA, 18–21 March 2024; pp. 245–258. [Google Scholar]
- Liu, M.X.; Wu, T.; Chen, T.; Li, F.M.; Kittur, A.; Myers, B.A. Selenite: Scaffolding Online Sensemaking with Comprehensive Overviews Elicited from Large Language Models. In Proceedings of the CHI ‘24: CHI Conference on Human Factors in Computing Systems, Honolulu, HI, USA, 11–16 May 2024; pp. 1–26. [Google Scholar]
- Zheng, C.; Zhang, Y.; Huang, Z.; Shi, C.; Xu, M.; Ma, X. DiscipLink: Unfolding Interdisciplinary Information Seeking Process via Human-AI Co-Exploration. In Proceedings of the UIST ‘24: The 37th Annual ACM Symposium on User Interface Software and Technology, Pittsburgh, PA, USA, 13–16 October 2024; pp. 1–20. [Google Scholar]
- Shi, C.; Hu, Y.; Wang, S.; Ma, S.; Zheng, C.; Ma, X.; Luo, Q. RetroLens: A Human-AI Collaborative System for Multi-step Retrosynthetic Route Planning. In Proceedings of the CHI ‘23: CHI Conference on Human Factors in Computing Systems, Hamburg, Germany, 23–28 April 2023; pp. 1–20. [Google Scholar]
- Pinto, R.; Lagorio, A.; Ciceri, C.; Mangano, G.; Zenezini, G.; Rafele, C. A Conversationally Enabled Decision Support System for Supply Chain Management: A Conceptual Framework. IFAC Pap. 2024, 58, 801–806. [Google Scholar] [CrossRef]
- Meske, C.; Ünal, E. Investigating the Impact of Control in AI-Assisted Decision-Making—An Experimental Study. In Proceedings of the MuC ‘24: Mensch und Computer, Karlsruhe, Germany, 1–4 September 2024; pp. 419–423. [Google Scholar]
- Bharti, P.K.; Ghosal, T.; Agarwal, M.; Ekbal, A. PEERRec: An AI-based approach to automatically generate recommendations and predict decisions in peer review. Int. J. Digit. Libr. 2024, 25, 55–72. [Google Scholar] [CrossRef]
- Eisbach, S.; Langer, M.; Hertel, G. Optimizing human-AI collaboration: Effects of motivation and accuracy information in AI-supported decision-making. Comput. Hum. Behav. Artif. Hum. 2023, 1, 100015. [Google Scholar] [CrossRef]
- Zheng, Y.; Rowell, B.; Chen, Q.; Kim, J.Y.; Kontar, R.A.; Yang, X.J.; Lester, C.A. Designing Human-Centered AI to Prevent Medication Dispensing Errors: Focus Group Study with Pharmacists. JMIR Form. Res. 2023, 7, e51921. [Google Scholar] [CrossRef]
- Park, G. The Effect of Level of AI Transparency on Human-AI Teaming Performance Including Trust in Machine Learning Interface. Ph.D. Thesis, University of Michigan-Dearborn, Dearborn, MI, USA, 2023. [Google Scholar]
- Korentsides, J.; Keebler, J.R.; Fausett, C.M.; Patel, S.M.; Lazzara, E.H. Human-AI Teams in Aviation: Considerations from Human Factors and Team Science. J. Aviat. Aerosp. Educ. Res. 2024, 33, 7. [Google Scholar] [CrossRef]
- Schoonderwoerd, T.A.J.; Zoelen, E.M.V.; Bosch, K.V.D.; Neerincx, M.A. Design patterns for human-AI co-learning: A wizard-of-Oz evaluation in an urban-search-and-rescue task. Int. J. Hum. Comput. Stud. 2022, 164, 102831. [Google Scholar] [CrossRef]
- Jalalvand, F.; Baruwal Chhetri, M.; Nepal, S.; Paris, C. Alert Prioritisation in Security Operations Centres: A Systematic Survey on Criteria and Methods. ACM Comput. Surv. 2025, 57, 1–36. [Google Scholar] [CrossRef]
- Chen, J.; Lu, S. An Advanced Driving Agent with the Multimodal Large Language Model for Autonomous Vehicles. In Proceedings of the 2024 IEEE International Conference on Mobility, Operations, Services and Technologies (MOST), Dallas, TX, USA, 1–3 May 2024; pp. 1–11. [Google Scholar]
- Lin, J.; Tomlin, N.; Andreas, J.; Eisner, J. Decision-Oriented Dialogue for Human-AI Collaboration. Trans. Assoc. Comput. Linguist. 2024, 12, 892–911. [Google Scholar] [CrossRef]
- Cui, H.; Yasseri, T. AI-enhanced collective intelligence. Patterns 2024, 5, 101074. [Google Scholar] [CrossRef]
- Flathmann, C.; Schelble, B.G.; Rosopa, P.J.; McNeese, N.J.; Mallick, R.; Madathil, K.C. Examining the impact of varying levels of AI teammate influence on human-AI teams. Int. J. Hum. Comput. Stud. 2023, 177, 103061. [Google Scholar] [CrossRef]
- Ghaffar, F.; Furtado, N.M.; Ali, I.; Burns, C. Diagnostic Decision-Making Variability Between Novice and Expert Optometrists for Glaucoma: Comparative Analysis to Inform AI System Design. JMIR Med. Inform. 2025, 13, e63109. [Google Scholar] [CrossRef] [PubMed]
- Rastogi, C.; Zhang, Y.; Wei, D.; Varshney, K.R.; Dhurandhar, A.; Tomsett, R. Deciding Fast and Slow: The Role of Cognitive Biases in AI-assisted Decision-making. Proc. ACM Hum. Comput. Interact. 2022, 6, 1–22. [Google Scholar] [CrossRef]
- Paleja, R.; Munje, M.; Chang, K.; Jensen, R.; Gombolay, M. Designs for Enabling Collaboration in Human-Machine Teaming via Interactive and Explainable Systems. arXiv 2025, arXiv:2406.05003. [Google Scholar] [CrossRef]
- Rosenbacke, R. Cognitive Challenges in Human-AI Collaboration: A Study on Trust, Errors, and Heuristics in Clinical Decision-Making. Ph.D. Thesis, Copenhagen Business School, Copenhagen, Denmark, 2025. [Google Scholar]
- Kreps, S.; Jakesch, M. Can AI communication tools increase legislative responsiveness and trust in democratic institutions? Gov. Inf. Q. 2023, 40, 101829. [Google Scholar] [CrossRef]
- Schemmer, M.; Kuehl, N.; Benz, C.; Bartos, A.; Satzger, G. Appropriate Reliance on AI Advice: Conceptualization and the Effect of Explanations. In Proceedings of the IUI ‘23: 28th International Conference on Intelligent User Interfaces, Sydney, NSW, Australia, 27–31 March 2023; pp. 410–422. [Google Scholar]
- Chakravorti, T.; Singh, V.; Rajtmajer, S.; McLaughlin, M.; Fraleigh, R.; Griffin, C.; Kwasnica, A.; Pennock, D.; Giles, C.L. Artificial Prediction Markets Present a Novel Opportunity for Human-AI Collaboration. arXiv 2023, arXiv:2211.16590. pp. 2304–2306. [Google Scholar]
- Xu, Z.; Song, T.; Lee, Y.-C. Confronting verbalized uncertainty: Understanding how LLM’s verbalized uncertainty influences users in AI-assisted decision-making. Int. J. Hum. Comput. Stud. 2025, 197, 103455. [Google Scholar] [CrossRef]
- Tutul, A.A.; Nirjhar, E.H.; Chaspari, T. Investigating Trust in Human-Machine Learning Collaboration: A Pilot Study on Estimating Public Anxiety from Speech. In Proceedings of the ICMI ‘21: International Conference on Multimodal Interaction, Montréal, QC, Canada, 18–22 October 2021; pp. 288–296. [Google Scholar]
- Syiem, B.V.; Kelly, R.M.; Dingler, T.; Goncalves, J.; Velloso, E. Addressing attentional issues in augmented reality with adaptive agents: Possibilities and challenges. Int. J. Hum. Comput. Stud. 2024, 190, 103324. [Google Scholar] [CrossRef]
- Schmutz, J.B.; Outland, N.; Kerstan, S.; Georganta, E.; Ulfert, A.-S. AI-teaming: Redefining collaboration in the digital era. Curr. Opin. Psychol. 2024, 58, 101837. [Google Scholar] [CrossRef] [PubMed]
- Judkins, J.T.; Hwang, Y.; Kim, S. Human-AI interaction: Augmenting decision-making for IT leader’s project selection. Inf. Dev. 2025, 41, 1009–1035. [Google Scholar] [CrossRef]
- Daly, S.J.; Hearn, G.; Papageorgiou, K. Sensemaking with AI: How Trust Influences Human-AI Collaboration in Health and Creative Industries. Soc. Sci. Humanit. Open 2025, 11, 101346. [Google Scholar] [CrossRef]
- Lowell, L.; Adm, P.-B. Strategic alliance: Navigating challenges in human-ai collaboration for effective business decision-making. Int. J. Nov. Res. Dev. 2024, 9, a84–a94. [Google Scholar]
- Hah, H.; Goldin, D.S. How Clinicians Perceive Artificial Intelligence–Assisted Technologies in Diagnostic Decision Making: Mixed Methods Approach. J. Med. Internet Res. 2021, 23, e33540. [Google Scholar] [CrossRef]
- Papachristos, E.; Skov Johansen, P.; Møberg Jacobsen, R.; Bjørn Leer Bysted, L.; Skov, M.B. How do People Perceive the Role of AI in Human-AI Collaboration to Solve Everyday Tasks? In Proceedings of the CHI Greece 2021: 1st International Conference of the ACM Greek SIGCHI Chapter, Athens, Greece, 25–27 November 2021; pp. 1–6. [Google Scholar]
- Famiglini, L.; Campagner, A.; Barandas, M.; La Maida, G.A.; Gallazzi, E.; Cabitza, F. Evidence-based XAI: An empirical approach to design more effective and explainable decision support systems. Comput. Biol. Med. 2024, 170, 108042. [Google Scholar] [CrossRef] [PubMed]
- Chowdhury, A.; Nguyen, H.; Ashenden, D.; Pogrebna, G. POSTER: A Teacher-Student with Human Feedback Model for Human-AI Collaboration in Cybersecurity. In Proceedings of the ASIA CCS ‘23: ACM ASIA Conference on Computer and Communications Security, Melbourne, Australia, 10–14 July 2023; pp. 1040–1042. [Google Scholar]
- Vasconcelos, H.; Jörke, M.; Grunde-McLaughlin, M.; Gerstenberg, T.; Bernstein, M.S.; Krishna, R. Explanations Can Reduce Overreliance on AI Systems During Decision-Making. Proc. ACM Hum. Comput. Interact. 2023, 7, 1–38. [Google Scholar] [CrossRef]
- Westphal, M.; Vössing, M.; Satzger, G.; Yom-Tov, G.B.; Rafaeli, A. Decision control and explanations in human-AI collaboration: Improving user perceptions and compliance. Comput. Hum. Behav. 2023, 144, 107714. [Google Scholar] [CrossRef]
- Morrison, K.; Spitzer, P.; Turri, V.; Feng, M.; Kühl, N.; Perer, A. The Impact of Imperfect XAI on Human-AI Decision-Making. Proc. ACM Hum. Comput. Interact. 2024, 8, 1–39. [Google Scholar] [CrossRef]
- Ma, S.; Lei, Y.; Wang, X.; Zheng, C.; Shi, C.; Yin, M.; Ma, X. Who Should I Trust: AI or Myself? Leveraging Human and AI Correctness Likelihood to Promote Appropriate Trust in AI-Assisted Decision-Making. In Proceedings of the CHI ‘23: CHI Conference on Human Factors in Computing Systems, Hamburg, Germany, 23–28 April 2023; pp. 1–19. [Google Scholar]
- Andrews, R.W.; Mason, L.J.; Divya, S.; Feigh, K.M. The role of shared mental models in human-AI teams: A theoretical review. Theor. Issues Ergon. Sci. 2023, 24, 129–175. [Google Scholar] [CrossRef]
- Tabrez, A. Effective Human-Machine Teaming through Communicative Autonomous Agents that Explain, Coach, and Convince. In Proceedings of the 2023 International Conference on Autonomous Agents and Multiagent Systems, London, UK, 29 May–2 June 2023; pp. 3008–3010. [Google Scholar]
- Bashkirova, A.; Krpan, D. Confirmation bias in AI-assisted decision-making: AI triage recommendations congruent with expert judgments increase psychologist trust and recommendation acceptance. Comput. Hum. Behav. Artif. Hum. 2024, 2, 100066. [Google Scholar] [CrossRef]
- Sivaraman, V.; Bukowski, L.A.; Levin, J.; Kahn, J.M.; Perer, A. Ignore, Trust, or Negotiate: Understanding Clinician Acceptance of AI-Based Treatment Recommendations in Health Care. In Proceedings of the CHI ‘23: CHI Conference on Human Factors in Computing Systems, Hamburg, Germany, 23–28 April 2023; pp. 1–18. [Google Scholar]
- Schoeffer, J.; De-Arteaga, M.; Kühl, N. Explanations, Fairness, and Appropriate Reliance in Human-AI Decision-Making. In Proceedings of the CHI ‘24: CHI Conference on Human Factors in Computing Systems, Honolulu, HI, USA, 11–16 May 2024; pp. 1–18. [Google Scholar]
- Erlei, A.; Sharma, A.; Gadiraju, U. Understanding Choice Independence and Error Types in Human-AI Collaboration. In Proceedings of the CHI ‘24: CHI Conference on Human Factors in Computing Systems, Honolulu, HI, USA, 11–16 May 2024; pp. 1–19. [Google Scholar]
- Schaefer, K.E.; Chen, J.Y.; Szalma, J.L.; Hancock, P.A. A meta-analysis of factors influencing the development of trust in automation: Implications for understanding autonomy in future systems. Hum. Factors 2016, 58, 377–400. [Google Scholar] [CrossRef]
- Parasuraman, R.; Riley, V. Humans and automation: Use, misuse, disuse, abuse. Hum. Factors 1997, 39, 230–253. [Google Scholar]
- Duan, W.; Zhou, S.; Scalia, M.J.; Yin, X.; Weng, N.; Zhang, R.; Freeman, G.; McNeese, N.; Gorman, J.; Tolston, M. Understanding the Evolvement of Trust Over Time within Human-AI Teams. Proc. ACM Hum. Comput. Interact. 2024, 8, 1–31. [Google Scholar] [CrossRef]
- Gunning, D.; Aha, D. DARPA’s explainable artificial intelligence (XAI) program. AI Mag. 2019, 40, 44–58. [Google Scholar]
- Miller, T. Explanation in artificial intelligence: Insights from the social sciences. Artif. Intell. 2019, 267, 1–38. [Google Scholar] [CrossRef]
- Tolmeijer, S.; Christen, M.; Kandul, S.; Kneer, M.; Bernstein, A. Capable but Amoral? Comparing AI and Human Expert Collaboration in Ethical Decision Making. In Proceedings of the CHI ‘22: CHI Conference on Human Factors in Computing Systems, New Orleans, LA, USA, 29 April–5 May 2022; pp. 1–17. [Google Scholar]
- Wang, B.Y.; Boell, S.K.; Riemer, K.; Peter, S. Human Agency in AI Configurations Supporting Organizational Decision-making. In Proceedings of the Australasian Conferences on Information Systems, Wellington, New Zealand, 5–8 December 2023. [Google Scholar]
- Delgado-Aguilera Jurado, R.; Ye, X.; Ortolá Plaza, V.; Zamarreño Suárez, M.; Pérez Moreno, F.; Arnaldo Valdés, R.M. An introduction to the current state of standardization and certification on military AI applications. J. Air Transp. Manag. 2024, 121, 102685. [Google Scholar] [CrossRef]
- Hao, X.; Demir, E.; Eyers, D. Exploring collaborative decision-making: A quasi-experimental study of human and Generative AI interaction. Technol. Soc. 2024, 78, 102662. [Google Scholar] [CrossRef]
- Zhang, Y.; Zong, R.; Shang, L.; Yue, Z.; Zeng, H.; Liu, Y.; Wang, D. Tripartite Intelligence: Synergizing Deep Neural Network, Large Language Model, and Human Intelligence for Public Health Misinformation Detection (Archival Full Paper). In Proceedings of the CI ‘24: Collective Intelligence Conference, Boston, MA, USA, 27–28 June 2024; pp. 63–75. [Google Scholar]
- Heyder, T.; Passlack, N.; Posegga, O. Ethical management of human-AI interaction: Theory development review. J. Strateg. Inf. Syst. 2023, 32, 101772. [Google Scholar] [CrossRef]

| Study | Domain | Intervention Type | Outcome Measure | Effect Size/Change | Significance | Context | Limitations |
|---|---|---|---|---|---|---|---|
| [105] | General | Likelihood Display | Trust Appropriateness | Improved calibration | p < 0.05 | Reduced over-trust in incorrect AI | Non-expert users; low-stake tasks; and small complementarity |
| [67] | Sports Analytics | Interactive Prediction | Model Acceptance | β = 0.266 | p < 0.001 | Marathon finish time prediction | Single domain (marathon); unfamiliar runners only |
| [18] | Meta-analysis | Human–AI Combination | Performance vs. Best Agent | Hedges’ g = −0.23 | Significant | Decision tasks showed losses | Publication bias; high heterogeneity; and limited delegation research |
| [64] | General | Bayesian Adaptation | Trust Prediction Accuracy | 97.6% accuracy | Not reported | Task difficulty adaptation | Assumes rational decision making; ignores cognitive biases |
| [76] | Pharmacy | Transparency Display | Trust in AI Capability | Increased understanding | p < 0.05 | Reduced perceived workload | Only 2 transparency levels; n = 20 |

| Study | Presentation Type | Domain | Performance Change | Trust/Reliance Change | Notable Findings | Key Limitations |
|---|---|---|---|---|---|---|
| [100] | Visual CAMs | Medical Imaging | Improved accuracy and confidence | Increased physician trust | Red–blue coloring most effective | Single fracture type; small sample; and 8:1:1 split |
| [102] | Simple Highlights | General Tasks | Maintained performance | Reduced over-reliance | Outperformed complex explanations | Low-stakes maze task; ‘perfect’ explanations; and crowd-workers |
| [47] | Contextual Feedback | Peer Support | +19.6% empathy | Maintained autonomy | 300 participants, real-world setting | Non-clinical; 30 min sessions; and may not generalize to clinical |
| [29] | Example-based | General | Improved override decisions | Better unreliability detection | Three intuition-driven pathways | Think-aloud method; ML-experienced participants; and low stakes |
| [101] | LIME/SHAP | Cybersecurity | Enhanced interpretability | Increased expert trust | Malware detection tasks | Performance varies with datasets; emerging threat adaptation unclear |
Disclaimer/Publisher's Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Xu, G.; Murthy, S.V.; Jia, B. Enhancing Intuitive Decision-Making and Reliance Through Human–AI Collaboration: A Review. Informatics 2025, 12, 135. https://doi.org/10.3390/informatics12040135

