Artificial Intelligence in Primary Care: Support or Additional Burden on Physicians’ Healthcare Work?—A Qualitative Study
Abstract
1. Introduction
1.1. Theoretical Background
1.1.1. Socio-Technical Systems Theory
1.1.2. Technostress and Digital Workload
1.1.3. Technology Acceptance and Resistance
1.2. Current State of Research
1.2.1. Opportunities in the Use of AI in Primary Care
1.2.2. Challenges in Using AI in Primary Care
1.3. Study Aim
- What types of AI applications are currently used in primary care, and how are they integrated into existing workflows?
- To what extent do physicians view AI as a tool that supports decision-making versus a source of additional cognitive or administrative burden?
- How do primary care physicians (PCPs) experience the influence of AI on doctor–patient interaction and the quality of care?
- What concerns do physicians have regarding responsibility, safety, and reliability of AI in patient care?
- What psychological effects, such as stress, anxiety, or relief, do physicians associate with the use of AI tools in everyday practice?
- What resources or conditions (e.g., training, infrastructure, support systems) are necessary for the successful and stress-reducing implementation of AI in primary care?
2. Materials and Methods
2.1. Study Design
2.2. Participant Recruitment and Sampling
2.3. Data Collection
2.4. Data Analysis
2.5. Criteria for Assessing Qualitative Research
2.6. Ethical Considerations
2.7. Use of AI Software
3. Results
3.1. Sociodemographic Data
3.2. Qualitative Results on AI Use in Primary Care
3.3. Perceptions and Utilization of AI in Primary Care
3.4. AI as Support: Opportunities and Benefits
3.5. Challenges and Additional Burdens
3.6. Impact on the Physician–Patient Relationship
3.7. Responsibility and Safety Concerns
3.8. Psychological Burden and Resources
3.9. Future Perspectives and Conditions for Successful Integration
4. Discussion
4.1. AI as a Supportive Tool: Opportunities and Benefits
4.2. Ambivalence and Burden: Risks of Overload and Cognitive Strain
4.3. Impact on the Doctor–Patient Relationship
4.4. Accountability, Safety, and Ethical Concerns
4.5. Preconditions for Sustainable Implementation
4.6. Strengths and Limitations
4.7. Research Implications
- The dynamics of trust and transparency in clinician–AI interactions
- The negotiation and preservation of professional autonomy in the context of algorithmic recommendations
- The evolving frameworks of accountability and responsibility when AI supports clinical decisions
- The role of digital literacy and organizational infrastructure in facilitating sustainable socio-technical integration.
4.8. Practical Implications
4.8.1. System Design and Integration
4.8.2. Training and Digital Literacy
4.8.3. Organizational Infrastructure and Change Management
4.8.4. Preserving the Human Dimension of Care
4.8.5. Ethical and Regulatory Frameworks
5. Conclusions
Supplementary Materials
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
| Quality Criterion | Operationalization | Illustrative Techniques or Procedures |
|---|---|---|
| Credibility/Trustworthiness | Ensure that the findings accurately capture participants’ viewpoints without undue influence or distortion. | Validation through participant feedback loops (e.g., respondent validation). |
| Dependability/Consistency/Reliability | Maintain transparency and traceability throughout the research process. | Documented decision pathways and use of consistent data handling protocols (e.g., methodological logbook). |
| Clarification Accuracy | Ensure accurate understanding of participants’ statements and reduce ambiguity. | Use of probing and clarification questions; restating participants’ responses for confirmation. |
| Justification/Reasoning Transparency | Explore underlying motives, rationales, or interpretive frameworks behind participants’ actions or decisions. | Encouraging participants to articulate the rationale behind behaviors or viewpoints (e.g., ‘why’ questions). |
| Objectivity/Confirmability | Demonstrate that interpretations derive directly from the data and are not shaped by researcher bias. | Maintenance of reflective logs and cross-checks with co-researchers to ensure neutrality. |
| Authenticity/Genuineness | Ensure that participants’ voices are presented faithfully and without distortion. | Application of inductive coding practices; effort to preserve original phrasing and intent. |
| Reflexivity/Reflective Awareness | Acknowledge and critically examine the researcher’s positionality and potential influence on the research. | Routine documentation of personal assumptions and theoretical lenses (e.g., reflective journaling). |
| Transferability/Contextual Applicability | Provide sufficient detail to support transfer of findings to other settings or populations. | Thick description of participant characteristics and research context to facilitate relevance assessment. |
| Category | Subcategory | n | (%) |
|---|---|---|---|
| Profession | General Practitioner | 20 | 71.4 |
| | Internist | 8 | 28.6 |
| Gender | Female | 13 | 46.4 |
| | Male | 15 | 53.6 |
| | Diverse | – | – |
| Age Group (in years) | 36–45 | 14 | 50.0 |
| | 46–55 | 8 | 28.6 |
| | 56–65 | 6 | 21.4 |
| Work Experience (in years) | 10–15 | 10 | 35.7 |
| | 16–25 | 9 | 32.1 |
| | 26–35 | 7 | 25.0 |
| | 36–45 | 2 | 7.2 |
| Main Category | Description | Key Themes |
|---|---|---|
| Perceptions and Utilization of AI | How AI is perceived and used in everyday primary care practice | Support for clinical decision-making; integration into routine tasks (e.g., reminders, risk assessments); clinical judgment remains central |
| AI as Support: Opportunities and Benefits | The supportive role of AI in reducing workload and improving care quality | Automation of administrative tasks; improved diagnostic confidence; strengthening patient trust through transparent use |
| Challenges and Additional Burdens | Challenges and potential burdens associated with AI | Alert fatigue from irrelevant notifications; pressure to conform to AI suggestions; “black-box” issues; uncertainty and stress |
| Impact on the Physician–Patient Relationship | AI’s influence on the therapeutic relationship | Concerns about loss of personal connection vs. increased patient confidence through transparent AI integration |
| Responsibility and Safety Concerns | Physician responsibility and safety considerations when using AI | Caution in following AI recommendations; demand for transparency and accountability |
| Psychological Burden and Resources | Psychological stress and necessary support resources | Anxiety and uncertainty due to lack of training; need for technical support and involvement in AI development |
| Future Perspectives and Conditions for Successful Integration | Outlook and prerequisites for successful AI adoption | Optimism about user-friendly design; physician involvement; organizational and educational support for integration |
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Mache, S.; Bernburg, M.; Würtenberger, A.; Groneberg, D.A. Artificial Intelligence in Primary Care: Support or Additional Burden on Physicians’ Healthcare Work?—A Qualitative Study. Clin. Pract. 2025, 15, 138. https://doi.org/10.3390/clinpract15080138