Article

Artificial Intelligence for Medication Management in Discordant Chronic Comorbidities: An Analysis from Healthcare Provider and Patient Perspectives

Department of Computer Science, University of Dayton, 300 College Park, Dayton, OH 45469, USA
* Authors to whom correspondence should be addressed.
Information 2025, 16(3), 237; https://doi.org/10.3390/info16030237
Submission received: 8 January 2025 / Revised: 5 March 2025 / Accepted: 6 March 2025 / Published: 17 March 2025
(This article belongs to the Special Issue 2nd Edition of Data Science for Health Services)

Abstract

Recent advances in artificial intelligence (AI) have created opportunities to enhance medical decision-making for patients with discordant chronic conditions (DCCs), where a patient has multiple, often unrelated, chronic conditions with conflicting treatment plans. This paper explores the perspectives of healthcare providers (n = 10) and patients (n = 6) regarding AI tools for medication management. Participants were recruited through two healthcare centers, with interviews conducted via Zoom. The semi-structured interviews (60–90 min) explored their views on AI, including its potential role and limitations in medication decision-making and the management of DCCs. Data were analyzed using a mixed-methods approach, including semantic analysis and grounded theory, yielding an inter-rater reliability of 0.9. Three themes emerged: empathy in AI–patient interactions, support for AI-assisted administrative tasks, and challenges in using AI for complex chronic diseases. Our findings suggest that while AI can support decision-making, its effectiveness depends on complementing human judgment, particularly in empathetic communication. The paper also highlights the importance of clear AI-generated information and the need for future research on embedding empathy and ethical standards in AI systems.

1. Introduction

Recent advancements in artificial intelligence (AI) have opened new possibilities for enhancing decision-making and patient care in healthcare systems [1,2]. One promising application is the use of AI-generated recommendations for managing complex disease scenarios, particularly in the context of discordant chronic conditions (DCCs): situations in which a patient has multiple, often unrelated, chronic conditions with conflicting treatment plans. DCCs create significant challenges for healthcare providers who aim to optimize patient outcomes [3]. With the rising prevalence of DCCs, developing effective treatment plans has become increasingly difficult for healthcare providers and patients alike [4,5].
Despite the potential of AI to assist with decision-making in complex care, significant gaps remain in the research on how AI tools are integrated into healthcare settings [6,7,8]. The current literature focuses on the technical advancements of AI but lacks robust qualitative insight into how healthcare providers and patients perceive these tools and the barriers they encounter in using them [9]. This paper addresses this gap by focusing on the perspectives of healthcare providers and patients regarding AI-driven tools for medication recommendations in the context of DCCs. Given the critical role healthcare providers play in patient care, understanding their concerns, suggestions, and ethical considerations is essential for developing AI systems that are both practical and ethically sound [10].
The significance of this paper lies in its contribution to the body of scientific knowledge on decision aids in healthcare, specifically through its qualitative insights into patient and provider experiences with existing tools and their needs for future aid designs. Decision support systems powered by AI have the potential to reshape patient care, but little is known about the real-world challenges of adoption. The limited exploration of issues such as trust, usability, and alignment with existing workflows hinders the successful deployment of AI in clinical settings [4,11]. Moreover, the ethical concerns surrounding AI-driven decision-making, such as accountability and transparency, remain under-explored, making it critical to delve into these areas to ensure equitable care and effective treatment strategies.
This paper also aims to enrich the understanding of patients’ and healthcare providers’ barriers, challenges, and ethical concerns when adopting AI in clinical practice. AI offers potential benefits to patients by promoting more equitable care across communities with disparities in access to healthcare, such as those impacted by the urban/rural digital divide. By addressing these disparities, AI could help ensure that patients in under-served areas receive better-tailored and timely care. Ultimately, this research contributes to the ongoing discourse on AI in healthcare by providing qualitative insights into how healthcare providers and patients engage with AI systems. It highlights the importance of fostering collaboration between healthcare providers and AI technologies, outlining strategies for improving the integration of AI in managing complex healthcare scenarios. Through these insights, this paper seeks to offer design recommendations for optimizing AI adoption, with the goal of improving care quality for patients with complex chronic conditions.

2. Related Works

Current research has explored the potential of AI to enhance clinical decision-making [4] and to generate medication recommendations for complex diseases [12]. AI’s capacity to analyze extensive datasets and discern nuanced patterns positions it as a promising tool for tackling the challenges posed by discordant chronic diseases: AI-driven tools can generate personalized treatment recommendations [11] and contribute valuable insights into strategies for integrating AI into clinical workflows [4,11]. Future work can explore how to seamlessly integrate AI-generated medication regimens into the existing practices of healthcare providers, ensuring a harmonious adoption of AI in healthcare decision-support systems.
Despite the potential benefits of AI tools in healthcare, several concerns hinder their widespread adoption, including trust, privacy, bias, safety, liability, and transparency. Research indicates that healthcare providers remain hesitant toward AI-driven systems due to these barriers [4,13,14,15]. One key issue is that AI tools lack the expertise and clinical judgment required to accurately diagnose patients or recommend appropriate treatments [16,17]. Additionally, biases in training datasets can cause these tools to overstate the urgency of certain situations, leading to potentially inaccurate or misleading information [18,19]. Furthermore, many AI tools do not fully comply with HIPAA regulations, raising concerns about patient privacy and data security [20]. Addressing these challenges is essential for promoting trust and ensuring the effective integration of AI in healthcare decision-support systems.
The management of DCCs using AI-driven tools presents distinctively complex challenges, including polypharmacy, drug interactions, and comorbidities that significantly complicate medication prescribing. Prescribing medications and planning care for DCCs requires balancing treatment protocols, particularly in cases with potentially conflicting medication requirements. Current AI tools have yet to master that balance, which substantially limits their adoption in patient care. For example, the failure of AI tools to comply with HIPAA regulations, fears of potential breaches, and the sale of patient information by AI developers [20] are major limitations that hinder AI adoption. Furthermore, physicians face challenges when estimating out-of-pocket medication costs, and the lack of transparent cost information is another major hindrance to adopting AI tools in medication management [21]. Future work is needed to address these challenges and improve patient outcomes.

3. Methods

3.1. Study Design

This qualitative study employed a semi-structured interview design to explore the perspectives of patients with discordant chronic comorbidities (DCCs) and healthcare providers on the use of AI tools in clinical practice for managing these conditions. This approach allowed us to capture the experiences and opinions of our participants, and the findings provide a foundational understanding for future research and the implementation of AI tools in healthcare [22].

3.2. Participant Selection

We included 16 participants (n = 16), a sample size consistent with typical qualitative research standards (10–30 participants) [23,24,25]. The sample consisted of 10 healthcare providers (HPs) and 6 patients (PPs); see Table 1 for a summary of the providers. Providers’ self-reported treating conditions matched our recruitment criteria, which focused on type-2 diabetes and its common comorbidities (e.g., depression, chronic kidney disease, arthritis); we excluded providers who did not treat these conditions.
Patients met the following criteria: aged 18–65, willing to participate in interviews, and self-reported as type-2 diabetic with at least one additional chronic comorbidity (e.g., depression, chronic kidney disease, arthritis). Those with limited consent capacity or significant communication issues were excluded.
Participants were provided with an overview of the study and assurance of confidentiality. Written consent was obtained from participants prior to the commencement of the study. Participants were recruited from two healthcare centers. Each participant received a USD 20 gift card as compensation for their participation. The study was approved by our University Institutional Review Board (IRB).
Each provider self-reported having worked with, helped, or supported a patient with DCCs and reported being knowledgeable about and comfortable using emerging technologies. Very knowledgeable participants manage DCC cases daily and have over ten years of experience; knowledgeable participants have experience with DCC patients, though it is not their primary role.
Since data were collected from only two healthcare centers, we acknowledge the limitations in diversity this may introduce. We are aware that the sample size and selection may not fully capture the breadth of experiences related to DCCs. We aimed to achieve thematic saturation through our diverse participant selection; however, further research may be necessary to explore additional perspectives across varied healthcare settings and demographics.

3.3. Interviews

We conducted semi-structured interviews lasting 60 to 90 min with each participant, focusing on their experiences and perceptions of AI tools in clinical practice, particularly for medication recommendations. Interviews were held remotely via Zoom and recorded with participants’ consent. Each session was transcribed verbatim for thorough analysis.
The interview guide was developed through a review of existing literature on AI applications in healthcare, followed by consultations with the research team. It covered several key topics aimed at eliciting participants’ experiences with, and opinions on, AI tools in clinical settings. The open-ended questions included the following:
  • Perspectives on the role of AI in healthcare;
  • Experiences with AI-generated recommendations;
  • Assessment of AI accuracy, relevance, and usability;
  • Concerns about AI integration into clinical workflows;
  • AI’s potential to support clinical decision-making and patient care.
Participants were encouraged to reflect on their prior interactions with AI, discussing the tools’ perceived benefits, limitations, and future roles in their practice. The guide was designed to allow flexibility for participants to express their unique perspectives, ensuring a rich and diverse dataset for analysis.

3.4. Data Analysis

Following the interviews, two researchers independently analyzed the participants’ responses to explore their perspectives on AI-generated medication recommendations. We utilized Dedoose 9.0.107 [26] qualitative analysis software, which supported thematic coding and organization of data [27].
Our data analysis approach followed a grounded theory framework: a systematic set of techniques and procedures that enable researchers to identify concepts and build theory from qualitative data [28]. This method was chosen because it allows for the development of theoretical insights directly from the data, aligning nicely with our exploratory study design and the semi-structured interviews used for data collection. The grounded theory approach is particularly suitable because it helps to identify emerging patterns in an area like AI-assisted healthcare, where existing theories are limited. The iterative nature of grounded theory also ensured that our analysis remained flexible and responsive to the evolving themes identified during the coding process.
We used a mixed-methods approach to ensure a comprehensive analysis. First, we conducted a semantic analysis to create a detailed codebook. Then, in three iterative rounds of grounded theory analysis, we refined, expanded, and merged codes based on discussions. This process led to the identification of three major themes: (i) empathy in AI–patient interactions, (ii) support for AI-assisted administrative tasks, and (iii) challenges and opportunities for AI in managing complex chronic diseases.
The emergence of empathy as a key theme led to the hypothesis that empathetic AI communication could enhance patient adherence to treatment recommendations. Our final analysis achieved a high inter-rater reliability score of 0.9, indicating strong agreement between researchers.
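As an illustration of how agreement between two coders can be quantified, the sketch below computes Cohen’s kappa over a handful of hypothetical code assignments. Both the choice of kappa and the example labels are assumptions made purely for illustration; the 0.9 score reported above may derive from a different agreement measure.

```python
# Minimal sketch: Cohen's kappa for two coders labeling the same excerpts.
# The statistic and the example codes are illustrative assumptions only.
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Agreement corrected for chance: (observed - expected) / (1 - expected)."""
    assert len(coder_a) == len(coder_b)
    n = len(coder_a)
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    counts_a, counts_b = Counter(coder_a), Counter(coder_b)
    # Chance agreement: both coders independently pick the same code.
    expected = sum(
        (counts_a[c] / n) * (counts_b[c] / n) for c in set(coder_a) | set(coder_b)
    )
    return (observed - expected) / (1 - expected)

# Hypothetical theme codes assigned by two researchers to six excerpts.
coder_1 = ["empathy", "admin", "empathy", "complexity", "admin", "empathy"]
coder_2 = ["empathy", "admin", "empathy", "complexity", "admin", "complexity"]
print(round(cohens_kappa(coder_1, coder_2), 2))  # 0.75
```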

4. Results

In this section, we explore the insights gathered from 16 in-depth interviews on the role of AI in healthcare, particularly focusing on empathy in AI–patient interactions, AI support for administrative tasks, and its application in managing complex chronic diseases.

4.1. Empathy in AI–Patient Interactions

Empathy emerged as a central theme in participants’ discussions regarding AI in healthcare, highlighting both the enthusiasm for its potential benefits and the concerns surrounding its limitations. Key aspects of this theme include the positive impact of AI on patient outcomes and experiences, the risks associated with artificial empathy, and AI’s role in addressing sensitive diagnoses.

4.1.1. Positive Impact on Patient Outcomes and Experience

Participants consistently noted that the combination of AI tools and empathetic communication could improve patient experiences. One participant observed that a lack of empathy in healthcare could lead to patients feeling that their providers are disengaged or robotic:
“A lot of people do not trust doctors who are very cut and dry, and have very poor bedside manner. Because it will make the patient think, ‘Oh, they don’t actually care about me. They’re very robotic, and they’re only doing their job. And they don’t really care about me as a patient, and the kind of care that I need.’”
PP4
Healthcare professionals also emphasized the importance of empathy in patient care. One participant stated the following:
“It’s hard to overstate how crucial empathy is. Removing it could diminish patient health outcomes just as much as neglecting the medical treatment itself.”
HP2
Patients further highlighted the value of empathy through personal experiences. One participant shared how a nurse’s empathetic approach made a significant difference in their care:
“With my chronic migraines, there was one nurse who sat me down and explained everything, making me feel truly comfortable.”
PP3

4.1.2. Risks of Artificial Empathy

Participants also expressed concerns about the potential risks of AI simulating empathy without truly understanding human emotions. Some healthcare professionals noted that AI could create a false sense of connection, leading patients to become overly reliant on AI for emotional support. One healthcare professional described this as follows:
“Patients may actually emotionally develop a relationship with this AI…just actually never see anybody.”
HP5
AI’s ability to fully understand human emotions is limited. In scenarios where patients interact with AI through typed responses or surveys, the system misses important non-verbal cues such as body language and tone of voice, which are key to understanding emotional context. Another healthcare professional explained it as follows:
“…there’s a lot of give and take with body language, voice tone, interaction that you pick up. As you know, like you and I. Okay. And if I was just typing in the responses, you would have a different take. Now, if the computer, the AI is responding to input that has no personality or human characteristics, but it’s responding as if it’s human, it’s misleading.”
HP5

4.1.3. AI’s Role in Sensitive Diagnoses

Another concern raised was the appropriateness of AI delivering emotionally charged diagnoses, such as life-threatening or chronic conditions, without human involvement, given AI’s emotional disconnect. Both healthcare providers and patients expressed reservations about this:
“We cannot deliver a diagnosis like HIV without a personal conversation. Some people have harmed themselves after receiving such news through a phone call.”
HP9

4.2. AI-Assisted Administrative and Clinical Tasks

Participants highlighted the potential of AI to enhance administrative workflows and clinical decision-making, particularly in the management of complex chronic conditions. The key sub-themes that emerged include augmenting the healthcare workforce, improving workflow efficiency, and enhancing patient education.

4.2.1. Augmenting the Healthcare Workforce

First, many healthcare professionals noted that AI could alleviate the burden of routine administrative tasks, enabling them to dedicate more time to direct patient care. For example, one participant stated the following:
“Start with replacing things that doctors don’t want/like to do. Such as dictation or generation of notes. Will give doctors more time to spend with patients.”
HP3
This sentiment reflects a broader consensus that AI is most effective as an assistant rather than a replacement in healthcare, particularly by reducing administrative load:
“Doctors should take advantage of AI. If you’re able to utilize AI to become a more efficient doctor and utilize it in your career, that will make you stand out. It’s actually just helping doctors be more efficient.”
HP6

4.2.2. Improving Workflow Efficiency

Participants also discussed AI’s ability to organize and present patient data in a way that optimizes clinical workflows:
“We use AI to look at interactions. We use AI if you have a patient that has 7, 8, 9 drugs. It’s so difficult to know each single interaction with each of them. So AI helps us crunch that out, you know, certain illness, certain conditions, you know, that we may not be aware.”
HP9
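The task HP9 describes is combinatorial: a patient on n medications has n(n − 1)/2 drug pairs to screen, so nine drugs already yield 36 pairwise interactions. The sketch below illustrates such a pairwise screen; the interaction table and drug names are hypothetical, and a production system would query a curated pharmacological database rather than a hard-coded dictionary.

```python
# Toy pairwise drug-interaction screen. The interaction table below is
# invented for illustration; real tools use curated pharmacology databases.
from itertools import combinations

KNOWN_INTERACTIONS = {  # unordered pairs -> hypothetical warnings
    frozenset({"metformin", "contrast_dye"}): "risk of lactic acidosis",
    frozenset({"lisinopril", "spironolactone"}): "risk of hyperkalemia",
}

def screen(medications):
    """Check every pair; the pair count grows as n * (n - 1) / 2."""
    return [
        (a, b, KNOWN_INTERACTIONS[frozenset({a, b})])
        for a, b in combinations(medications, 2)
        if frozenset({a, b}) in KNOWN_INTERACTIONS
    ]

regimen = ["metformin", "lisinopril", "spironolactone", "atorvastatin"]
for a, b, warning in screen(regimen):
    print(f"{a} + {b}: {warning}")
```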
By pulling relevant information from electronic health records (EHRs), AI could help providers take a more holistic approach to patient care, especially for those managing multiple comorbidities:
“AI could pull a patient’s health record from their EHR to see the patient as a whole.”
HP4

4.2.3. Enhancing Patient Education

Participants highlighted AI’s ability to deliver timely reminders and educational materials, which could improve patient adherence to treatment plans:
“AI could enhance patient adherence by sending reminders for appointments and tests, along with the rationale for why these steps are crucial in their care.”
HP4
Additionally, AI was recognized for its potential to provide unbiased and consistent information, which could be particularly beneficial in under-served communities where access to quality care is limited:
“Patients might not have transportation. They might not have the funds. So having access to a tool online, like medical AI, to help them diagnose whatever condition they have that’d be very helpful. Also, because medical AI can be distributed very widely to people who don’t have funds to see a primary care doctor.”
HP1

4.3. Challenges and Opportunities in AI-Driven Healthcare

Participants recognized the potential advantages of AI in healthcare but voiced concerns about its current limitations, particularly regarding the balance between AI and medical professionals. Several sub-themes emerged from this discussion; here, we focus on two: skepticism about AI’s capabilities, and reducing bias and healthcare disparities.

4.3.1. Skepticism About AI’s Capabilities

A key issue raised was skepticism about AI’s ability to perform complex healthcare tasks without human oversight. Many participants emphasized that AI should serve to complement healthcare professionals rather than replace them:
“I think it can be extremely helpful. I don’t think it should drive decisions necessarily, but it could work with doctors to make their work more efficient.”
PP1
This perspective highlights the need to maintain a balance between AI and human expertise, ensuring AI’s role is supportive and not autonomous in healthcare decision-making:
“It’s a good start. Especially researching different things to bring up to your doctor. But, I will always stand by the fact of going about getting medication and discussing treatments and conditions should always lead back to a doctor.”
PP3

4.3.2. Reducing Bias and Healthcare Disparities

Additionally, participants discussed the potential for AI to reduce bias and ensure objectivity in clinical settings. They noted that AI’s lack of emotional influence might allow for more data-driven, unbiased assessments in certain scenarios:
“AI doesn’t have empathy, which allows it to assess the situation more objectively.”
PP2
This feature of AI could be particularly useful in cases where objective analysis is crucial, though it must be balanced with the empathetic understanding that human healthcare providers offer. Used in conjunction with a human provider, such dispassionate evaluation may support more accurate diagnoses and treatment plans.
Participants also explored how AI could play a vital role in reducing healthcare disparities by offering consistent, reliable information to patients from various backgrounds. This capacity to provide equitable care was viewed as a key strength of AI in addressing inequalities:
“AI’s ability to provide unbiased information could be a key factor in reducing care disparities.”
HP9
However, participants stressed the importance of ensuring that AI-generated information is accessible to all patients, advocating for clear, simplified language and support for multilingual communication:
“Make the response easy to read and understand, at a fifth-grade reading level.”
HP1
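The fifth-grade target HP1 mentions can be checked automatically with standard readability formulas. The sketch below applies the Flesch–Kincaid grade-level formula as one possible check on AI-generated responses; the syllable counter is a crude heuristic and the sample response is invented for illustration.

```python
# Rough readability check using the Flesch-Kincaid grade-level formula:
# 0.39 * (words/sentences) + 11.8 * (syllables/words) - 15.59.
import re

def count_syllables(word):
    """Crude heuristic: count groups of consecutive vowels."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def fk_grade(text):
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * (len(words) / sentences) + 11.8 * (syllables / len(words)) - 15.59

response = "Take one pill each morning with food. Call us if you feel dizzy."
print(f"Estimated grade level: {fk_grade(response):.1f}")  # well below grade 5
```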
These insights demonstrate that while AI holds significant promise for improving healthcare outcomes, particularly in terms of reducing disparities, its design must prioritize the diverse needs of both patients and providers to be effective.

5. Discussion

The integration of AI in healthcare presents both opportunities and challenges, particularly for patients with discordant chronic conditions (DCCs). This paper identifies key design directions to improve AI-driven solutions, focusing on emotional responsiveness in decision-making, administrative efficiency, and patient education while also addressing regulatory compliance, technical challenges, and emerging trends in AI for healthcare.

5.1. Integrating Emotional Responsiveness in AI-Driven Healthcare Decision-Making

Participants in our study highlighted the importance of emotional responsiveness in AI-driven healthcare tools, particularly in medication recommendations. Some participants expressed that incorporating elements of emotional intelligence could enhance user experience, while others emphasized that AI should not attempt to replicate human empathy. Prior research supports the notion that patients value AI systems that offer empathetic responses, but concerns persist regarding the feasibility of AI achieving true empathy [29].
However, emotional responsiveness must be balanced with AI interpretability and transparency. Participants stressed the need for human oversight in AI-generated recommendations, reinforcing studies suggesting that AI should augment compassionate care rather than act as a substitute for it [30]. A critical issue raised was the potential for bias in AI recommendations due to limitations in training data. If AI systems rely on datasets that do not reflect diverse patient populations, they may reinforce existing healthcare disparities. Addressing bias through rigorous validation and bias-mitigation techniques is essential to ensure equitable AI-driven healthcare solutions. Furthermore, the interpretability of AI decisions remains a major concern. Ensuring that AI output is explainable to both clinicians and patients will be vital in building trust and improving decision-making.

5.2. AI for Administrative Efficiency and Its Integration into Providers’ Workflow

Participants highlighted AI’s potential to enhance administrative efficiency by automating documentation and summarizing patient data, allowing healthcare providers to dedicate more time to direct patient care. AI-driven tools, such as IBM Watson for Oncology and Google DeepMind Health, have demonstrated the ability to extract key patient information from electronic health records (EHRs) [30,31]. However, challenges remain, particularly regarding data privacy, interoperability across diverse EHR systems, and ensuring accuracy in complex medical contexts [32,33].
A major concern was the difficulty of integrating AI tools into the existing healthcare infrastructure. Many AI solutions lack seamless compatibility with multiple EHR platforms, hindering adoption [34,35]. Regulatory compliance, especially adherence to HIPAA guidelines, poses another challenge, with participants emphasizing the need for robust data security protocols and standardized frameworks to ensure secure data exchange.
Furthermore, participants noted that the absence of standardized protocols for AI–EHR integration leads to inconsistencies in how AI-generated recommendations are contextualized within clinical workflows [36,37]. While AI has the potential to improve efficiency and enhance the patient experience, its success depends on overcoming technical and policy-level barriers. Addressing these limitations requires collaboration among AI developers, healthcare institutions, and policymakers to establish interoperable, regulatory-compliant systems that balance innovation with patient privacy.

5.3. Using AI to Improve Patient Education and Address Healthcare Disparities

Participants in our study emphasized the potential of AI-driven tools to improve patient education by making health information more accessible. Existing applications, such as Healthwise’s Virtual Health Coach and Mayo Clinic’s AskMayoExpert, have demonstrated success in simplifying complex medical terminology, thus helping patients better understand their conditions and treatment options [38,39,40].
Participants also noted the role AI could play in addressing healthcare disparities by providing accessible and consistent medical information. Specifically, healthcare professionals pointed out that AI could improve access to medical guidance for economically disadvantaged individuals by offering online tools that eliminate the need for frequent in-person consultations. Several participants mentioned the importance of AI providing information in an easy-to-read format, ensuring comprehension for a wide range of patients.
Economic accessibility was another key concern. Transportation barriers often prevent individuals from seeking timely medical care, and participants saw AI as a solution for delivering remote guidance. To further reduce disparities, AI systems should be integrated into community-based access points, such as public libraries, telehealth kiosks, and mobile health applications with offline capabilities. Participants acknowledged that such applications could reduce disparities in healthcare access by enabling patients to receive preliminary assessments and recommendations without the need for an in-person visit.
While participants largely focused on the economic and logistical aspects of healthcare disparities, ensuring that AI tools remain unbiased and accessible was also a key concern. AI’s ability to provide objective, data-driven recommendations without emotional influence was seen as a potential advantage in reducing disparities in medical decision-making. However, participants stressed the importance of designing AI in a way that aligns with patients’ diverse needs. Future research should focus on developing and testing AI-driven patient education systems that actively mitigate bias, enhance accessibility, and support under-served populations.

5.4. Emerging Trends and Implementation Strategies

Participants in this study identified several emerging AI trends with the potential to enhance chronic disease management, including real-time health monitoring, predictive analytics, and AI-driven clinical decision support tools. These advancements offer opportunities to improve personalized care, optimize treatment planning, and support proactive disease management. However, their successful implementation is contingent upon addressing critical challenges related to scalability, interoperability, and integration within existing healthcare systems. Without standardized protocols for AI deployment, inconsistencies in data exchange and fragmented digital health infrastructures may limit the widespread adoption and effectiveness of these technologies.
A key concern raised was the lack of structured implementation strategies for AI adoption in clinical settings. While AI has demonstrated efficacy in controlled environments, its translation into real-world practice faces barriers such as resistance from healthcare professionals, insufficient clinical validation, and regulatory uncertainties [36,41]. Participants emphasized the need for clear, evidence-based implementation frameworks that define step-by-step strategies for AI integration. These frameworks should include standardized evaluation metrics; best practices for clinician training; and governance models that ensure patient safety, ethical AI use, and alignment with healthcare workflows.
Additionally, participants highlighted concerns about technical limitations, particularly bias in AI training data and challenges in model generalizability. If not carefully managed, algorithmic bias could perpetuate healthcare disparities rather than mitigate them. Ensuring fairness and reliability in AI-driven recommendations requires rigorous validation across diverse patient populations, continuous auditing of model performance, and transparent reporting of decision-making processes. Explainable AI (XAI) is emerging as a promising approach to enhance the interpretability of AI-generated recommendations, helping both clinicians and patients understand the rationale behind medical guidance. However, further research is needed to assess the effectiveness of XAI techniques in clinical practice and ensure they align with provider decision-making needs.
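To ground the discussion of explainability, the sketch below shows the simplest form such transparency can take: for a linear scoring model, each feature’s contribution is its weight times its value, and ranking contributions yields a per-recommendation explanation. The model, features, and weights here are invented for illustration; real XAI tooling for non-linear models relies on more involved attribution methods.

```python
# Toy linear-model explanation: rank each feature's contribution to a score.
# Features and weights are hypothetical, purely for illustration.
FEATURE_WEIGHTS = {
    "a1c_level": 0.8,   # higher A1c pushes the score up
    "egfr": -0.6,       # better kidney function pushes it down
    "on_nsaid": 0.3,    # NSAID use pushes it up
}

def explain(patient):
    """Return the score and features ranked by absolute contribution."""
    contributions = {f: w * patient[f] for f, w in FEATURE_WEIGHTS.items()}
    score = sum(contributions.values())
    ranked = sorted(contributions.items(), key=lambda kv: -abs(kv[1]))
    return score, ranked

score, ranked = explain({"a1c_level": 1.2, "egfr": 0.4, "on_nsaid": 1.0})
print(f"risk score: {score:.2f}")
for feature, contribution in ranked:
    print(f"  {feature}: {contribution:+.2f}")
```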
As AI adoption in healthcare advances, several critical trends and actionable implementation strategies must be considered. Participants emphasized the necessity of personalized AI-driven interventions that adapt to individual patient needs. For effective integration, AI tools should leverage real-time patient data and continuously refine recommendations based on health patterns. To ensure these tools are practical, developers must work closely with healthcare professionals to establish evidence-based personalization protocols.
A key trend identified was the increasing emphasis on hybrid AI–human collaboration. While AI can assist in clinical decision-making, participants underscored that AI should function as a co-pilot rather than an autonomous decision-maker. This requires the implementation of AI governance frameworks that define the boundaries of AI recommendations and ensure clinicians maintain oversight. Additionally, trust in AI systems can be strengthened by embedding transparency features, such as explainable AI models that clearly outline the reasoning behind recommendations.
To maximize adoption, AI solutions must be designed for seamless integration with electronic health record systems and telemedicine platforms. A modular, API-driven approach that facilitates interoperability across healthcare systems can mitigate existing compatibility barriers. Moreover, policy-level interventions must establish regulatory pathways for AI validation and deployment, ensuring compliance with HIPAA and other data privacy laws. AI tools should also undergo post-deployment monitoring to assess real-world efficacy and identify unintended biases.
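As a concrete illustration of this modular, API-driven approach, the sketch below separates AI logic from vendor-specific EHR access behind a single adapter interface. EHRAdapter, VendorXAdapter, and their methods are invented for this example and correspond to no real vendor API.

```python
# Adapter-pattern sketch: AI tooling targets one interface, and one adapter
# per EHR platform handles vendor specifics. All names are hypothetical.
from abc import ABC, abstractmethod

class EHRAdapter(ABC):
    """The AI layer depends only on this interface, never on a vendor SDK."""

    @abstractmethod
    def fetch_medications(self, patient_id: str) -> list[str]: ...

    @abstractmethod
    def write_note(self, patient_id: str, note: str) -> None: ...

class VendorXAdapter(EHRAdapter):
    """Stand-in for one vendor integration; calls are stubbed."""

    def fetch_medications(self, patient_id: str) -> list[str]:
        return ["metformin", "lisinopril"]  # would call the vendor API here

    def write_note(self, patient_id: str, note: str) -> None:
        print(f"[VendorX] note for {patient_id}: {note}")

def recommend(adapter: EHRAdapter, patient_id: str) -> None:
    meds = adapter.fetch_medications(patient_id)
    adapter.write_note(patient_id, f"Review interactions across {len(meds)} medications.")

recommend(VendorXAdapter(), "patient-001")
```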
Finally, proactive patient engagement strategies will be vital for the long-term success of AI in healthcare. Developers must prioritize intuitive, patient-friendly interfaces and incorporate educational components that demystify AI recommendations. By fostering trust through clear communication and user-centered design, AI-driven healthcare solutions can achieve sustainable and equitable impact.

6. Conclusions

In this paper, we examined the integration of AI in healthcare for patients with DCCs, highlighting key areas for improvement in AI design, implementation, and regulation. Participants emphasized the need for AI systems that enhance clinical outcomes while also improving patient experience through personalized, empathetic communication that empowers self-management. Although participants supported incorporating empathy into AI–patient interactions, concerns arose about possible misconceptions stemming from artificial empathy and the importance of AI systems complementing, rather than replacing, human interaction. AI’s potential to address healthcare personnel shortages, reduce disparities, and empower patients is significant, but accessible communication tailored to diverse populations remains essential.
Beyond empathy, this paper highlights the importance of interpretability, regulatory compliance, and structured implementation frameworks to ensure the practical and ethical adoption of AI in healthcare. Future research should focus on balancing empathy with transparency, developing scalable and interoperable AI systems, and establishing governance models to oversee AI deployment. While this paper provides valuable insights, it is limited by its qualitative approach and sample size, which may affect generalizability. Future studies incorporating quantitative data and empirical evaluation of AI-driven interventions will be crucial in refining AI-based healthcare solutions for widespread clinical use.

Author Contributions

Conceptualization, T.O.; Methodology, T.O. and Z.S.; Investigation, T.O. and Z.S.; Resources, T.O. and T.V.N.; Data curation, T.O.; Writing—original draft, T.O., T.V.N. and Z.S.; Writing—review and editing, T.O., T.V.N. and Z.S.; Supervision, T.O. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki and approved by the Institutional Review Board (or Ethics Committee) of the University of Dayton (protocol code 19781167; date of approval: 20 December 2023).

Informed Consent Statement

Informed consent was obtained from all participants involved in the study.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Chen, M.; Decary, M. Artificial intelligence in healthcare: An essential guide for health leaders. In Healthcare Management Forum; Sage Publications: Los Angeles, CA, USA, 2020; Volume 33, pp. 10–18. [Google Scholar] [CrossRef]
  2. Burgess, E.R.; Jankovic, I.; Austin, M.; Cai, N.; Kapuścińska, A.; Currie, S.; Overhage, J.M.; Poole, E.S.; Kaye, J. Healthcare AI treatment decision support: Design principles to enhance clinician adoption and trust. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, Hamburg, Germany, 23–28 April 2023; pp. 1–19. [Google Scholar]
  3. Ongwere, T.; Cantor, G.; Martin, S.R.; Shih, P.C.; Clawson, J.; Connelly, K. Design hotspots for care of discordant chronic comorbidities: Patients’ perspectives. In Proceedings of the 10th Nordic Conference on Human-Computer Interaction, Oslo, Norway, 29 September–3 October 2018; pp. 571–583. [Google Scholar]
  4. Bao, Y.; Gong, W.; Yang, K. A Literature Review of Human–AI Synergy in Decision Making: From the Perspective of Affordance Actualization Theory. Systems 2023, 11, 442. [Google Scholar] [CrossRef]
  5. Sebastian, A.M.; Peter, D. Artificial intelligence in cancer research: Trends, challenges and future directions. Life 2022, 12, 1991. [Google Scholar] [CrossRef]
  6. Albahri, A.S.; Duhaim, A.M.; Fadhel, M.A.; Alnoor, A.; Baqer, N.S.; Alzubaidi, L.; Albahri, O.; Alamoodi, A.; Bai, J.; Salhi, A.; et al. A systematic review of trustworthy and explainable artificial intelligence in healthcare: Assessment of quality, bias risk, and data fusion. Inf. Fusion 2023, 96, 156–191. [Google Scholar] [CrossRef]
  7. Vasey, B.; Nagendran, M.; Campbell, B.; Clifton, D.A.; Collins, G.S.; Denaxas, S.; Denniston, A.K.; Faes, L.; Geerts, B.; Ibrahim, M.; et al. Reporting guideline for the early stage clinical evaluation of decision support systems driven by artificial intelligence: DECIDE-AI. BMJ 2022, 377, e070904. [Google Scholar] [CrossRef] [PubMed]
  8. Bekbolatova, M.; Mayer, J.; Ong, C.W.; Toma, M. Transformative potential of AI in healthcare: Definitions, applications, and navigating the ethical landscape and public perspectives. Healthcare 2024, 12, 125. [Google Scholar] [CrossRef]
  9. Hassan, M.; Kushniruk, A.; Borycki, E. Barriers to and facilitators of artificial intelligence adoption in health care: Scoping review. JMIR Hum. Factors 2024, 11, e48633. [Google Scholar] [CrossRef]
  10. Sharma, I.P.; Nguyen, T.V.; Singh, S.A.; Ongwere, T. Predicting an Optimal Medication/Prescription Regimen for Patient Discordant Chronic Comorbidities Using Multi-Output Models. Information 2024, 15, 31. [Google Scholar] [CrossRef]
  11. He, T.; Fu, G.; Yu, Y.; Wang, F.; Li, J.; Zhao, Q.; Song, C.; Qi, H.; Luo, D.; Zou, H.; et al. Towards a psychological generalist ai: A survey of current applications of large language models and future prospects. arXiv 2023, arXiv:2312.04578. [Google Scholar]
  12. Kasula, B.Y.; Whig, P. AI-Driven Machine Learning Solutions for Sustainable Development in Healthcare—Pioneering Efficient, Equitable, and Innovative Health Service. Int. J. Sustain. Dev. AI ML IoT 2023, 2, 1–7. [Google Scholar]
  13. Prakash, A.V.; Das, S. Medical practitioner’s adoption of intelligent clinical diagnostic decision support systems: A mixed-methods study. Inf. Manag. 2021, 58, 103524. [Google Scholar] [CrossRef]
  14. Knop, M.; Weber, S.; Mueller, M.; Niehaves, B. Human factors and technological characteristics influencing the interaction of medical professionals with artificial intelligence–enabled clinical decision support systems: Literature review. JMIR Hum. Factors 2022, 9, e28639. [Google Scholar] [CrossRef] [PubMed]
  15. Choudhury, A. Factors influencing clinicians’ willingness to use an AI-based clinical decision support system. Front. Digit. Health 2022, 4, 920662. [Google Scholar] [CrossRef]
  16. Iftikhar, L.; Iftikhar, M.F.; Hanif, M.I. Docgpt: Impact of chatgpt-3 on health services as a virtual doctor. EC Paediatr. 2023, 12, 45–55. [Google Scholar]
  17. Khawaja, Z.; Bélisle-Pipon, J.-C. Your robot therapist is not your therapist: Understanding the role of AI-powered mental health chatbots. Front. Digit. Health 2023, 5, 1278186. [Google Scholar] [CrossRef]
  18. Lyons, R.J.; Arepalli, S.R.; Fromal, O.; Choi, J.D.; Jain, N. Artificial intelligence chatbot performance in triage of ophthalmic conditions. Can. J. Ophthalmol. 2023, 59, e301–e308. [Google Scholar] [CrossRef] [PubMed]
  19. Murphy, K.; Di Ruggiero, E.; Upshur, R.; Willison, D.J.; Malhotra, N.; Cai, J.C.; Malhotra, N.; Lui, V.; Gibson, J. Artificial intelligence for good health: A scoping review of the ethics literature. BMC Med. Ethics 2021, 22, 14. [Google Scholar] [CrossRef] [PubMed]
  20. Marks, M.; Haupt, C.E. AI chatbots, health privacy, and challenges to HIPAA compliance. JAMA 2023, 330, 309–310. [Google Scholar] [CrossRef]
  21. Sloan, C.E.; Millo, L.; Gutterman, S.; Ubel, P.A. Accuracy of physician estimates of out-of-pocket costs for medication filling. JAMA Netw. Open 2021, 4, e2133188. [Google Scholar] [CrossRef]
  22. Ongwere, T.; Rutuja, N.; Nguyen, T.V. Improving Medication Prescription Strategies for Discordant Chronic Comorbidities Through Medical Data Bench-Marking and Recommender Systems. In Science and Information Conference; Springer: Cham, Switzerland, 2024; pp. 237–250. [Google Scholar]
  23. Creswell, J.W.; Poth, C.N. Qualitative Inquiry and Research Design: Choosing Among Five Approaches; Sage Publications: Thousand Oaks, CA, USA, 2016. [Google Scholar]
  24. Guest, G.; Bunce, A.; Johnson, L. How many interviews are enough? An experiment with data saturation and variability. Field Methods 2006, 18, 59–82. [Google Scholar] [CrossRef]
  25. Patton, M.Q. Qualitative Research Evaluation Methods: Integrating Theory and Practice; Sage Publications: Thousand Oaks, CA, USA, 2014. [Google Scholar]
  26. Dedoose, Version 9.0.107: Cloud Application for Managing, Analyzing, and Presenting Qualitative and Mixed Method Research Data; SocioCultural Research Consultants, LLC: Los Angeles, CA, USA, 2023. Available online: www.dedoose.com (accessed on 7 January 2025).
  27. Salmona, M.; Lieber, E.; Kaczynski, D. Qualitative and Mixed Methods Data Analysis Using Dedoose: A Practical Approach for Research Across the Social Sciences; Sage Publications: Thousand Oaks, CA, USA, 2019. [Google Scholar]
  28. Foley, G.; Timonen, V. Using grounded theory method to capture and analyze health care experiences. Health Serv. Res. 2015, 50, 1195–1210. [Google Scholar] [CrossRef]
  29. Daher, K.; Casas, J.; Khaled, O.A.; Mugellini, E. Empathic chatbot response for medical assistance. In Proceedings of the 20th ACM International Conference on Intelligent Virtual Agents, Virtual, 20–22 October 2020; pp. 1–3. [Google Scholar]
  30. Batra, P.; Dave, D.M. Revolutionizing Healthcare Platforms: The Impact of AI on Patient Engagement and Treatment Efficacy. Int. J. Sci. Res. (IJSR) 2024, 13, 613–624. [Google Scholar]
  31. Alloghani, M.; Thron, C.; Subair, S. Cognitive computing, emotional intelligence, and artificial intelligence in Healthcare. In Artificial Intelligence for Data Science in Theory and Practice; Springer International Publishing: Cham, Switzerland, 2022; pp. 109–118. [Google Scholar]
  32. Faheem, H.; Dutta, S. Artificial Intelligence Failure at IBM ‘Watson for Oncology’. IUP J. Knowl. Manag. 2023, 21, 47–75. [Google Scholar]
  33. Kumah-Crystal, Y.A.; Pirtle, C.J.; Whyte, H.M.; Goode, E.S.; Anders, S.H.; Lehmann, C.U. Electronic health record interactions through voice: A review. Appl. Clin. Inform. 2018, 9, 541–552. [Google Scholar] [CrossRef]
  34. Somashekhar, S.; Sepúlveda, M.-J.; Puglielli, S.; Norden, A.; Shortliffe, E.; Kumar, C.R.; Rauthan, A.; Kumar, N.A.; Patil, P.; Rhee, K.; et al. Watson for Oncology and breast cancer treatment recommendations: Agreement with an expert multidisciplinary tumor board. Ann. Oncol. 2018, 29, 418–423. [Google Scholar] [CrossRef]
  35. Powles, J.; Hodson, H. Google DeepMind and healthcare in an age of algorithms. Health Technol. 2017, 7, 351–367. [Google Scholar] [CrossRef]
  36. Nair, M.; Svedberg, P.; Larsson, I.; Nygren, J.M. A comprehensive overview of barriers and strategies for AI implementation in healthcare: Mixed-method design. PLoS ONE 2024, 19, e0305949. [Google Scholar] [CrossRef]
  37. Ongwere, T.; Stolterman, E.; Shih, P.C.; James, C.; Connelly, K. Translating a DC3 Model into a Conceptual Tool (DCCs Ecosystem): A Case Study with a Design Team. In Proceedings of the International Conference on Pervasive Computing Technologies for Healthcare, Tel Aviv, Israel, 6–8 December 2021; pp. 381–397. [Google Scholar]
  38. Nazarko, L. Healthwise, Part 5. Prevention and treatment of type 2 diabetes. Br. J. Healthc. Assist. 2022, 16, 18–25. [Google Scholar] [CrossRef]
  39. Krey, M. mHealth Apps: Potentials for the patient—Physician relationship. J. Adv. Inf. Technol. 2018, 9, 102–109. [Google Scholar] [CrossRef]
  40. Maramba, G.; Smuts, H.; Adebesin, F.; Hattingh, M.; Mawela, T. KMS as a Sustainability Strategy during a Pandemic. Sustainability 2023, 15, 9158. [Google Scholar] [CrossRef]
  41. Dellink, R. Barriers to the Adoption of Artificial Intelligence in Medical Diagnosis: Procurement Perspective. Master’s Thesis, University of Twente, Enschede, The Netherlands, 2023. [Google Scholar]
Table 1. Summary of study participants (healthcare providers).

| Participant | Medical Position | Medical Specialty | Knowledge About DCCs |
|---|---|---|---|
| HP1 | Director of integrated health | Family Medicine | Very knowledgeable |
| HP2 | Medical doctor | Internal Medicine | Knowledgeable |
| HP3 | Medical doctor | Family Medicine | Very knowledgeable |
| HP4 | Medical doctor | Family Medicine | Very knowledgeable |
| HP5 | Medical doctor | Internal Medicine | Knowledgeable |
| HP6 | Medical doctor | Family Medicine | Very knowledgeable |
| HP7 | Psychiatrist | Internal Medicine | Knowledgeable |
| HP8 | Physician assistant | Internal Medicine | Knowledgeable |
| HP9 | Physician assistant | Family Medicine | Very knowledgeable |
| HP10 | Psychiatrist | Geriatrics | Knowledgeable |