Article

Provider Perspectives on Sociotechnical Alignment of Intelligent Clinical Decision Support Systems

by
Andy Behrens
*,
Cherie Noteboom
and
Patti Brooks
Department of Information Systems, College of Business and Information Systems, Dakota State University, Madison, SD 57042, USA
*
Author to whom correspondence should be addressed.
Information 2026, 17(2), 191; https://doi.org/10.3390/info17020191
Submission received: 30 December 2025 / Revised: 30 January 2026 / Accepted: 9 February 2026 / Published: 13 February 2026
(This article belongs to the Special Issue Information Technology for Smart Healthcare)

Abstract

Intelligent Clinical Decision Support Systems (ICDSS) are increasingly integrated into healthcare settings to enhance clinical decision-making, efficiency, and patient safety. Despite advances in artificial intelligence-enabled decision support, ICDSS adoption remains inconsistent, particularly in complex clinical environments where professional autonomy, workflow alignment, and accountability are critical. This study examines healthcare providers’ perspectives on ICDSS through a grounded theory approach informed by established Information Systems theories, including the Unified Theory of Acceptance and Use of Technology (UTAUT), Technology Acceptance Model (TAM), Diffusion of Innovation (DOI), and the Human-Organization-Technology fit (HOT-fit) framework. Semi-structured interviews were conducted with 11 providers within a large, integrated healthcare organization, and data were analyzed using open, axial, and selective coding. The findings reveal three interrelated dimensions shaping ICDSS use: provider experience, clinical utility, and adaptation. While ICDSS were perceived as valuable for improving efficiency, supporting treatment decisions, and enhancing patient safety, their adoption was constrained by cognitive overload, workflow misalignment, data quality concerns, and perceived threats to professional autonomy. Trust, explainability, and workflow fit emerged as central mechanisms influencing selective use rather than full adoption. By grounding provider perspectives within a sociotechnical lens, this study extends existing IS theories to the context of AI-enabled clinical decision support and offers empirically grounded insights for designing ICDSS that better align with clinical practice.

Graphical Abstract

1. Introduction

The artificial intelligence (AI) healthcare market has experienced rapid growth, with projections estimating an increase from $11.06 billion in 2021 to $38.66 billion in 2025 [1]. The integration of Intelligent Clinical Decision Support Systems (ICDSS) represents a major technological shift aimed at improving diagnosis, decision-making efficiency, and precision medicine [2,3]. The growing influence of AI embedded in clinical decision support systems, i.e., ICDSS, stems from the rise of data-driven clinical decision-making. ICDSS are designed to free providers’ time and increase productivity, precision, and efficacy [3,4]. While the benefits of AI-driven clinical decision-making are well documented, provider resistance remains a critical barrier to ICDSS adoption.
Smart healthcare initiatives increasingly depend on the successful integration of AI-enabled systems into clinical workflows [5]. ICDSS are a core component of this transformation, serving as smart healthcare technologies that augment clinical reasoning, enhance diagnostic accuracy, and support data-driven care delivery. However, the effectiveness of smart healthcare depends not only on technical performance but also on how clinicians interpret, accept, and adapt to AI-enabled systems in everyday practice. In everyday practice, providers work under intense pressure, juggle multiple tasks, and carry high cognitive loads. They also vary in AI literacy, practice across different specialties and patient populations, and operate within organizational requirements and liability constraints. Given these pressures, understanding providers’ perspectives, including perceived utility, usability, explainability, workload impact, and accountability, is essential for ensuring the safe, trustworthy, and sustainable adoption of smart healthcare technologies.
Healthcare providers hold strong professional identities grounded in autonomy, expertise, and high-stakes accountability and have historically resisted clinical decision support technologies [6,7]. Providers frequently cite workflow misalignment, usability concerns, loss of flexibility, erosion of decision autonomy, and the dehumanization of patient care as barriers to adoption [8,9]. Although ICDSS are intended to augment clinical judgment, many systems are perceived as intrusive or misaligned with established practices, creating tensions between algorithmic recommendations and professional intuition. These tensions are particularly relevant in high-risk, knowledge-intensive environments such as medicine.

1.1. Research Gap and Significance

While prior research has examined ICDSS design, performance, and implementation challenges, provider perspectives and the sociotechnical dynamics of ICDSS adoption remain underexplored [2,10]. Existing studies tend to emphasize technological performance factors such as accuracy, usability, and explainability over the contextual and identity-related factors shaping acceptance in clinical practice [11,12]. Information Systems (IS) research lacks a grounded, empirical understanding of how providers evaluate, negotiate, and adapt to AI-enabled decision support systems within their workflow and professional culture [13,14].
A recent study showed that workflow fit is often a stronger predictor of ICDSS use than design quality itself, suggesting that sociotechnical alignment, not algorithmic sophistication, is the central barrier to adoption [8]. Therefore, the providers’ perspectives are critical due to workflow misalignment and professional autonomy concerns [12]. However, the mechanisms through which workflow fit, identity, and perceived autonomy shape ICDSS acceptance remain theoretically underdeveloped.
The lack of empirical investigation constrains the overall potential of ICDSS. Providers play a highly influential role in the sociotechnical alignment of these systems, and the way they view ICDSS offers deeper insight into what they expect and perceive as they use them [12]. Understanding providers’ unique needs and preferences is therefore key to strengthening sociotechnical alignment, improving both provider satisfaction and patient outcomes [12].
Furthermore, the extant literature prioritizes design over context, and researchers have largely neglected providers’ perspectives on sociotechnical factors [9,13]. It is essential to consider the sociotechnical context in which providers operate [12,15]; without insight into that context, a comprehensive empirical understanding of ICDSS is impossible, and the potential benefits of these systems remain limited.
Empirically understanding providers’ insights strengthens the sociotechnical alignment of ICDSS, yet further work is required to capture those perspectives [12,15]. At the time of writing, no empirical research or theory explains providers’ perspectives on ICDSS. Empirical evidence of sociotechnical alignment lays the foundation for tailoring ICDSS to the unique needs and preferences of providers.
This study addresses this gap by developing a grounded theoretical model of provider perspectives on ICDSS in a large, complex healthcare organization, the Veterans Affairs Healthcare System (VAHCS), which is governed by strict regulatory and compliance requirements. VAHCS provides healthcare services to a specific and distinct group of patients, making its services unique. Further exploration of providers’ perspectives will provide empirical insight into ICDSS optimization and bridge the gap between AI-driven decision support and sociotechnical alignment.
This study identifies a set of drivers and obstacles from qualitative interview data influencing provider perspectives of ICDSS, organized across provider experience, clinical utility, and adaptation dimensions. Overall, participants described value as emerging from the system’s ability to support clinical decision-making while minimizing disruption to workflow and professional practice.
From a provider experience perspective, ICDSS were viewed as most effective when they facilitated efficient access to relevant medical information, reduced documentation burden, and supported decision-making through data-driven intelligence. Participants emphasized the importance of intuitive navigation, clinically meaningful information groupings, and automation features that reduced manual input and saved time. Trust in recommendations was strengthened when guidance was transparent, evidence-based, and accompanied by accessible explanations of underlying calculations and guidelines.
Clinical utility was closely associated with treatment optimization and patient safety. Participants highlighted support for antibiotic stewardship, including awareness of restrictions, availability of alternatives, and decision guidance that reduced unnecessary prescribing. Safety-related features, such as drug interaction alerts and reminder mechanisms, were perceived as contributing to error reduction and improved first-time treatment selection. Diagnostic support further reinforced provider confidence and supported ongoing clinical learning.
Adaptation-related findings emphasized the role of training, social support, and technical familiarity in ICDSS adoption. Participants favored hands-on, practice-based training and noted the importance of pharmacy advocacy, peer dialog, and mentorship in sustaining use. Variability in technical proficiency across providers underscored the need for systems that are both learnable and supportive of diverse user capabilities.
Despite these drivers, participants identified several obstacles that constrained ICDSS effectiveness. Provider experience obstacles included cognitive overload, data quality concerns, and limited system flexibility, often resulting from poor navigation, excessive alerts, interoperability challenges, and non-customizable interfaces. Clinical utility obstacles reflected tensions related to perceived dehumanization of care, diminished provider autonomy, increased compliance-related workload, and misalignment with clinical workflow. Adaptation obstacles further revealed anxiety related to AI-enabled features, learning demands, skepticism toward system value, and concerns about system reliability in practice.

1.2. Research Question

The research question directly relates to empirically exploring the providers’ perspectives of ICDSS. The following research question is posited: “What are providers’ perspectives of Intelligent Clinical Decision Support Systems?”

1.3. Theoretical Frameworks

Established IS theories provide a useful foundation for examining provider interactions with ICDSS. In this study, the Unified Theory of Acceptance and Use of Technology (UTAUT), Technology Acceptance Model (TAM), Diffusion of Innovation (DOI), and the Human-Organization-Technology fit (HOT-fit) framework are employed as complementary theoretical lenses to interpret provider perspectives. Collectively, these models offer constructs related to perceived usefulness, effort expectancy, social influence, innovation adoption, and sociotechnical alignment that inform the analysis of ICDSS use in clinical settings.
However, these theories were largely developed to explain the adoption of conventional information systems and only partially account for the distinctive characteristics of AI-enabled clinical decision support. They offer limited explanatory power for understanding how clinicians negotiate issues of professional autonomy, uncertainty, identity, and workflow integration when interacting with systems that actively influence clinical judgment. As a result, while extant theories illuminate general adoption tendencies, they fall short of explaining the selective acceptance, adaptation, and resistance observed in ICDSS use within complex healthcare environments.
Despite rapid advances in AI-driven decision support, adoption remains uneven in practice. Prior research has emphasized technical performance and usability while underexamining how clinicians assess sociotechnical fit as ICDSS intersect with established workflows, accountability structures, and professional norms. This gap limits the ability of IS research to explain how workflow alignment, autonomy, and uncertainty jointly shape provider engagement with ICDSS beyond initial adoption.
To address this limitation, this study adopts a grounded theory approach to examine provider perspectives on ICDSS within a large, complex healthcare organization. Rather than testing existing models, the study leverages established IS theories as sensitizing concepts to surface empirically grounded mechanisms that explain how clinicians evaluate, integrate, and constrain ICDSS use in practice. In doing so, the study advances a more applied, context-sensitive understanding of ICDSS adoption that extends existing IS theory to the realities of AI-enabled clinical work.
This study advances IS research by developing a grounded theoretical explanation of how providers selectively engage with ICDSS in complex sociotechnical environments. The findings identify workflow alignment as a central mechanism shaping conditional use and extend established adoption theories by incorporating uncertainty, professional identity, and explainability in AI-enabled decision support. These insights inform the design of human-centered ICDSS that align with professional practice rather than disrupt it.

2. Materials and Methods

This study adopts an interpretive qualitative research approach to examine providers’ perspectives on ICDSS within their sociotechnical context [16]. Interpretive methods are well-suited for investigating phenomena involving human sensemaking, professional identity, and technology use in complex organizational settings [14]. Through this approach, the researcher captured insights into these elements within the specific organizational context, emphasizing the interplay between the providers, their environment, and the technology in use. Grounded theory was selected to enable theory development directly from data. Figure 1 outlines the methodology of our research approach.
The data was collected through semi-structured interviews with 11 providers, including physicians, nurse practitioners, and physician assistants. Interviews continued until theoretical saturation was achieved, consistent with grounded theory principles. All procedures were approved by Institutional Review Boards, and informed consent was obtained from all participants.
The data was analyzed using a grounded theory (GT) methodology [17]. GT is an ideal methodology for research in under-theorized areas such as provider perspectives. This study developed a substantive empirical theory and contributed to the generation of formal theory [18]. Data analysis occurred through a three-step coding process [17,18,19].
Open codes created bundles of meaning in the data. Open codes were categorized and sorted to create axial codes. Axial codes are concepts that have meaning. The concepts were further abstracted in the selective coding process to generate broad categories. Generated categories include relationships that are representative of the main focus of the research study [17].
This study used the coding instrument put forth by Qureshi and Unlu [20]. The coding instrument provided structure throughout the analysis. The instrument provided context to the categories that began to emerge. The instrument provided structure to the emergent theoretical constructs and their connections to the categories [20].
The instrument begins with Initial Coding, where raw data are examined to generate preliminary codes and initial concepts. These codes are then refined during Focused Coding, in which stronger, more salient concepts are consolidated into higher-level categories. Finally, Theoretical Coding integrates these categories into broader themes and emergent theory. The process is nonlinear and iterative: insights from later stages can inform and refine earlier coding decisions.
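The staged progression described above, whether labeled open/axial/selective or initial/focused/theoretical, can be sketched as successive groupings over the interview data. The sketch below is purely illustrative: the study used ATLAS.ti rather than custom tooling, and every code name and quotation reference in it is hypothetical, not drawn from the actual dataset.

```python
# Illustrative sketch of the grounded-theory coding progression
# (open -> axial -> selective). All code names and quotation
# references here are hypothetical examples, not study data.

# Open codes: bundles of meaning attached to interview segments.
open_codes = {
    "saves time on chart review": ["P3:12", "P7:4"],
    "too many alerts": ["P1:22", "P5:9", "P9:3"],
    "trusts evidence-based guidance": ["P2:15"],
}

# Axial codes (concepts): related open codes grouped together.
axial_codes = {
    "Efficiency": ["saves time on chart review"],
    "Cognitive overload": ["too many alerts"],
    "Trust": ["trusts evidence-based guidance"],
}

# Selective codes (categories): concepts abstracted into broader
# themes that carry the theoretical relationships.
categories = {
    "Provider experience drivers": ["Efficiency", "Trust"],
    "Provider experience obstacles": ["Cognitive overload"],
}

def trace(category: str) -> list[str]:
    """Trace a category back to the interview segments grounding it."""
    segments = []
    for concept in categories[category]:
        for code in axial_codes[concept]:
            segments.extend(open_codes[code])
    return segments

print(trace("Provider experience drivers"))  # → ['P3:12', 'P7:4', 'P2:15']
```

In ATLAS.ti the equivalent traceability comes from linking quotations to codes and codes to networks; the only point of the sketch is that each emergent category remains traceable back to verbatim interview evidence.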

2.1. Case Study Context

The VAHCS is America’s most extensive integrated healthcare system. The VAHCS provides care at 1321 healthcare facilities, including 172 medical centers and 1138 outpatient sites of care that serve nine million Veterans annually [21,22]. The State Summaries of South Dakota identify that 65,335 veterans use the VAHCS services in South Dakota [22]. VAHCS provides services that include surgery, critical care, mental health, orthopedics, pharmacy, radiology, and physical therapy. They also provide additional specialty services such as audiology and speech pathology, dermatology, dental geriatrics, neurology, oncology, podiatry, prosthetics, urology, and vision care [22].
The VAHCS’s mission focuses on providing exceptional healthcare that improves veteran health and well-being [22]. The VAHCS focuses on the relationships between the providers and the veterans through two goals in their “Long-Range Plan Framework.” The goals discussed by veterans choosing VAHCS as their healthcare provider and coordinator are built on trusted, long-term relationships and delivering high-quality, accessible, and integrated healthcare [22]. The Framework was outlined with a technology focus through the Office of IT [23].
Development of the Computerized Patient Record System (CPRS) started in 1994, making it the first user interface for the VAHCS IT architecture [23]. CPRS uses a client–server architecture and was accessible from users’ workstations. CPRS was released in 1997, and the VAHCS fully implemented it with the following features in 1999 [23]:
  • A clinical data repository with privacy protection,
  • Data retrieval and display,
  • Document entry with role-based business rules,
  • Problem lists, medication lists, reports (including radiology), and health summaries,
  • Provider order entry for all clinical services and departments, and
  • Clinical decision support with reminders, real-time clinical alert systems, notification systems, order checking, and disease management features.
The Chief Information Officer centralized all IT programs in 2002 [23]. CPRS includes the Veterans Health Information System and Technology Architecture (VistA). VistA provides additional information for providers, such as medications, health conditions, and laboratory test values [24,25].
VAHCS integrated additional intelligence into CPRS to automate decision support by providing recommendations and relevant information aggregated by pharmacy support staff through linked software. The ICDSS is an internally developed software system maintained and managed by the pharmacy staff, who curate and integrate clinical guidelines and recommendations into the logic of the ICDSS.
VAHCS is incorporating a new electronic health record from Cerner Corporation. Cerner will give VAHCS staff access to a holistic view of the Veteran’s care received during their time in the military and other care provided [26]. One of the critical components of the move to Cerner is removing reliance on multiple interfaces and continued manual data entry [26]. The primary goals of moving towards a modernized electronic health record at the VAHCS are as follows [26]:
  • Implement the same EHR system as the DOD and the United States Coast Guard. This system is interoperable with community care providers and enables the seamless sharing of Veteran records from active duty and beyond.
  • Provide Veterans and clinicians with a complete picture of a patient’s medical history, driving connections between military service and health outcomes through data analytics.
  • Offer an improved, more consistent patient scheduling experience at VA medical facilities and community care partners nationwide.
Cerner will provide clinicians with quicker and more efficient access to the data to improve the delivery of care for Veterans [26]. This transition creates an opportunity for examining provider perspectives on ICDSS, as changes to core clinical infrastructure directly affect workflow alignment, decision autonomy, and the integration of AI-enabled decision support into everyday clinical practice.

2.2. Data Collection

Data collection occurred between October 2022 and February 2023. Dakota State University (DSU) and VAHCS collaborated on the research. IRB approvals from DSU and the University of South Dakota were required to interview participants at VAHCS.
The Public Affairs Office at VAHCS recruited the providers. Any provider at VAHCS was eligible to participate. The variety of providers interviewed created a rich and unique perspective of ICDSS. The researchers then scheduled a time to meet and conduct each interview. Before beginning the interview, providers gave written or verbal consent.
The researcher collected data through semi-structured interviews. The interview guide included open-ended questions. The interview began with questions from the interview guide. Interview questions prompted responses from providers to understand the providers’ perspective of ICDSS. Follow-up questions fostered deeper insight into the provider and researcher discussions [19].
A total of 11 healthcare professionals participated in this study, representing organizations of national scope. Participants’ professional experience ranged from 1 to over 8 years, providing perspectives from both early-career and more experienced practitioners.
The sample included a mix of clinical roles, consisting of five nurse practitioners, four physicians, and two physician assistants. This distribution enabled this study to capture a range of clinical viewpoints across different practitioner roles while maintaining a focused clinical context. In total, 11 interviews were completed and included in the analysis.
The interviews totaled 382 min of recording, yielding 151 pages of transcribed data. The most frequent age group of interviewees was 30–39. The most frequent specialties were internal medicine/hospitalist and infectious disease, with two interviewees each.
The researchers recorded the interviews with Zoom Workplace Version 6.0. Zoom transcribed the recorded audio and stored the files in a secure location following the IRB policy specifications. The researcher reviewed and validated the transcribed audio files for accuracy. The researcher anonymized interviewee names and other identifiable information.
The researchers stopped data collection when the study reached theoretical saturation and achieved variability in the dataset [27]. Glaser and Strauss define theoretical saturation as the point at which no additional findings develop as properties of a category [18].

2.3. Data Analysis

To expose the theoretical underpinning of the providers’ perspectives of ICDSS, the following nonlinear steps were conducted throughout the analysis process: (a) the constant comparative method, (b) theoretical coding, (c) theoretical sampling, (d) theoretical saturation, and (e) theoretical sensitivity [28,29]. The data collection and analysis steps are iterative and require constant comparison of old and new data [17].
The researcher used inductive GT for data analysis. Inductive GT started with the data and moved toward theory generation. The researcher was theoretically sensitive and conceptualized and formulated a theory as it emerged from the data [30]. The research drew on a Glaserian inductive orientation while adopting the three-step coding process outlined by Corbin and Strauss [30].
The first step in the data analysis was to read the interview data and compare it with the audio for validation. The researcher cleaned the validated data to maintain accuracy and uploaded it to ATLAS.ti Version 24, where it was analyzed using the three types of coding suggested by Strauss and Corbin for GT: open, axial, and selective coding [17,31].

2.4. Open Coding

Open coding is the first step of the data analysis process. The researcher created open codes based on the thoughts and ideas formed throughout the analysis. Additional open codes generated from participants’ own words are called “in vivo” codes [19]. These in vivo codes were interwoven with the researcher-generated open codes.
ATLAS.ti was critical in developing and organizing the codes, relationships, and levels of abstraction, and its network diagrams facilitated relationship development between categories. Charmaz introduced guidelines for GT researchers developing initial codes [19]. The guidelines are described below.
Charmaz reported the importance of remaining close to the data, using simple and precise short codes, preserving action-oriented meanings, and continuously comparing data segments. The guidelines also encourage analysts to move efficiently through the data while maintaining analytic openness, supporting the early development of grounded concepts [19].

2.5. Axial Coding

The subsequent step is axial coding. After the data has been open-coded with descriptive and meaningful codes, it is ready for axial coding. The researcher compared open codes and identified and analyzed relationships between the open codes. The researcher developed axial codes to group codes and relationships to comprehensively understand the data. Axial coding sorts and groups the data into clustered concepts [19]. The concepts form the constructs during the theory development process.

2.6. Selective/Theoretical Coding

The last step is selective coding. Selective coding integrated the concept groupings into one core category [31]. A core category emerges during selective coding [17]. The categories reflect the providers’ perspectives of the ICDSS. The researchers developed theoretical codes to articulate the relationships between the axial codes [19]. The 6 Cs framework [19] structured the coding and relationship-building process.

2.7. Reliability

The researcher developed a case study protocol to specify how the case was conducted [32]. Replicability is supported through the appropriate organization and storage of documents, notes, and narratives to assist future investigators [33]. Adopting these protocol requirements allows future researchers to develop similar studies and yield comparable results.

2.8. Validation

It is essential to address validity concerns in case study research (CSR). Internal validity refers to the causal relationships between variables and results [33,34]. The researcher addressed internal validity through triangulation, verifying findings against organizational data, VAHCS reports, and published articles [33]. Additional researchers reviewed the data to ensure quality labeling and coding. The researcher addressed external validity by providing a clear rationale for the case study selection and extensive details on the context of the case study [34].
This study is intentionally designed as a qualitative investigation to examine provider perspectives on ICDSS with a focus on sociotechnical alignment. The identified constructs involve subjective experience, organizational context, professional judgment, and human–AI interaction, which cannot be adequately captured through system logs, performance metrics, or experimental validation alone.

3. Results

To support interpretive rigor and credibility in the absence of quantitative verification, we emphasized analytic transparency, systematic coding procedures, cross-participant comparison, and traceability between themes and verbatim interview evidence. Results are therefore presented as experiential and design-informing insights, contributing implementation-relevant knowledge rather than claims about ICDSS technical performance or predictive accuracy.
The researcher’s initial open codes collapsed into 161 open codes, which were grouped into twenty axial codes. Axial codes captured the thoughts and actions of the providers. These twenty concepts formed the six categories that emerged from the data.
Figure 2, a network diagram, visualizes the concepts and categories that emerged. The green categories are the drivers, while the red categories are the obstacles. The yellow category indicates external influences that exist at the VAHCS.
Figure 2. Thematic representation of ICDSS provider perspectives derived from qualitative analysis. Provider experience, clinical utility, and adaptation are shown as interrelated dimensions, with corresponding drivers and obstacles that shape selective use and adoption of AI-enabled clinical decision support.

Findings Table

Table 1 shows the progression of the coding process from open, focused, to theoretical coding. Open codes are bundles of meaning derived from the provider narrative interview data. The researcher analyzed the interview text and applied words or phrases to capture the meaning of the provider’s narrative. Axial codes are referred to as concepts in the Corbin and Strauss taxonomy [30]. Concepts were abstracted from the open codes, creating meaning from the related codes in the form of a word or phrase. The words or phrases of the concepts are the constructs of the emergent theory. Categories were developed as the last step in the process, a further abstraction that captures the theoretical relevance of related concepts. Corbin and Strauss [30] refer to selective codes as categories in their taxonomy. Table 1 shows the summarized provider perspectives through the coding progression.
The analysis revealed three overarching and interrelated themes that characterize providers’ experiences with Intelligent Clinical Decision Support Systems (ICDSS): provider experiences, clinical utility, and adaptation. Provider experiences captured how ICDSS simultaneously enabled efficiency, data-driven support, and transparency while also introducing cognitive overload, data quality concerns, and constraints on flexibility. Clinical utility reflected the extent to which ICDSS supported treatment decisions, reduced adverse events, and aided diagnostic reasoning, yet was undermined when systems disrupted workflow, diminished professional autonomy, or interfered with patient–provider interactions. Adaptation highlighted how trust in ICDSS evolved through training, team dialog, and technical literacy, while obstacles such as AI anxiety, skepticism, and learning challenges shaped long-term engagement. As illustrated in Figure 2 and Table 1, these themes interact to shape ICDSS use, with workflow alignment emerging as a central mechanism linking perceived value and resistance.
This study applied a grounded theory approach to understanding providers’ perspectives of ICDSS. The data analysis process yielded key concepts and categories of the providers’ narrative data. The researcher developed themes following the data analysis. The narrative data is categorized into three overarching themes:
  • Provider experiences are the perspectives highlighting the value and obstacles of the ICDSS.
  • Clinical utility refers to how well the ICDSS aligns with workflow and improves decision-making.
  • Adaptation reflects the factors that influence trust in the ICDSS and providers’ perceived autonomy.
Collectively, the data analysis revealed that ICDSS provides value for providers, but obstacles still exist. Together, these findings show that provider experience, clinical utility, and adaptation interact to shape ICDSS use, with workflow alignment influencing whether systems are engaged, selectively used, or bypassed.

4. Discussion

This section interprets the findings by examining their implications for IS theory, prior research, and the design of ICDSS.

4.1. Workarounds and Workflow Fit

An unanticipated finding was the emergence of provider workarounds, where providers bypassed or overrode ICDSS recommendations when they conflicted with clinical intuition. This aligns with the findings of Wang et al. [35], who identified provider frustration when ICDSS recommendations did not align with their expectations, leading them to develop alternative workflows.
These findings raise implications for patient safety. Prior research has identified workflow misalignment as a barrier [6], but this study reveals that misalignment often leads to active disengagement. This suggests that mandatory ICDSS implementation does not ensure actual usage. Instead, developers should prioritize workflow alignment by ensuring that ICDSS logic aligns with provider reasoning patterns rather than imposing rigid decision models.

4.2. Professional Autonomy and AI Collaboration

Providers emphasized the importance of retaining clinical judgment, even when ICDSS recommendations were available. This is consistent with the findings of Wang et al. [36], who found evidence that clinical expertise cannot be replaced by AI. Providers preferred ICDSS as an assistive support tool rather than an authoritative decision-maker. Trust in AI was conditional, requiring transparency and interpretability before recommendations were accepted. The ICDSS should augment decision-making, not replace it [35]. These findings reinforce human–AI collaboration models [37], demonstrating that ICDSS adoption increases when providers retain the ability to override recommendations.

4.3. De-Skilling Concerns

Providers expressed concern regarding potential de-skilling, where frequent reliance on ICDSS-generated recommendations could lead to diminished clinical expertise. This is consistent with the findings of Wang et al. [36], who found that overreliance on ICDSS may weaken diagnostic reasoning skills over time. Knop et al. [38] emphasized that clinical expertise is essential, and providers must remain engaged in diagnostic processes rather than deferring decisions to AI. ICDSS should function as an assistive learning tool rather than a substitute for clinical reasoning.

4.4. Junior vs. Senior Provider Adoption Differences

An important finding was that junior providers demonstrated greater receptiveness to AI-driven recommendations compared to their more experienced counterparts. This suggests that ICDSS may be more beneficial as a training tool for early-career clinicians. Knop et al. [38] found that ICDSS produced accurate recommendations, but senior providers resisted AI intervention due to decision inconsistencies. Wang et al. [36] identified that junior providers benefited from ICDSS in diagnosing rare cases, suggesting that ICDSS could be integrated into medical training programs. These findings indicate that ICDSS should be positioned as a supplemental learning tool for early-career providers while allowing experienced clinicians to use it selectively.

4.5. Explainability as an Antecedent to Trust

Explainability emerged as a critical determinant of trust in ICDSS adoption. Providers indicated the need to understand the origin of recommendations, the timing and reasoning behind suggestions, and whether and why recommendations changed over time. Providers expressed frustration with opaque AI logic, echoing Wang et al. [36], who found that confidence scores as low as 30% undermined provider trust. This study supports the argument that explainability is an antecedent to trust [38]. This suggests that AI systems lacking sufficient explainability face higher rejection rates [38,39]. ICDSS developers should implement explainability features, including confidence scores, rationale displays, and transparency mechanisms to improve adoption.

4.6. Theoretical Contributions

The grounded theory approach is well suited to discovering or extending theory. The findings of this study extend four established IS adoption frameworks, resulting in the theoretical contributions described below.

4.7. UTAUT: Uncertainty as a Mediating Factor

UTAUT emphasizes performance expectancy, effort expectancy, social influence, and facilitating conditions as determinants of technology adoption [40]. These constructs are relevant for ICDSS, particularly regarding productivity, ease of use, and organizational support. However, this study indicates that uncertainty regarding AI-driven decision support negatively affects adoption. Uncertainty acts as a mediating factor in adoption, shaped by trust levels rather than by performance expectancy alone.
UTAUT also assumes rational evaluation of system benefits and does not sufficiently address clinical uncertainty, the risk-laden nature of medical decisions, or the role of professional autonomy in shaping acceptance. For ICDSS, uncertainty and trust dynamics often override performance perceptions, revealing a gap in UTAUT’s ability to account for AI-driven decision support.

4.8. DOI: Conditional Adopters

DOI explains how new technologies spread based on innovation attributes such as relative advantage, compatibility, and complexity [41], and posits that early adopters embrace technology while laggards resist [41]. We propose a new category, “conditional adopters”: providers who accept ICDSS only when it augments, rather than replaces, clinical decision-making. This addition suggests that DOI does not fully capture ICDSS adoption in specialized fields such as healthcare.
However, DOI assumes adoption occurs along predictable trajectories and does not account for selective or conditional use, where clinicians adopt ICDSS only when recommendations align with their professional judgment. This missing adoption mode is particularly relevant for AI-enabled systems, where clinicians balance system output against expertise.

4.9. TAM: Professional Identity

TAM emphasizes perceived usefulness and ease of use as the primary adoption drivers [42] and has been widely applied in healthcare IT. However, this narrow focus does not capture the identity-based concerns that shape provider adoption of AI: clinical judgment, autonomy, and professional status influence whether providers view ICDSS as support tools or as threats.
This study finds that threats to professional identity and diagnostic autonomy may override usability considerations. Experienced clinicians expressed concern about ICDSS promoting overreliance, leading to diagnostic de-skilling. This suggests that TAM should be extended to incorporate professional identity as an adoption moderator, one that is especially salient among experienced clinicians.

4.10. HOT-Fit: Customization

The HOT-fit framework highlights alignment between human factors, organizational context, and technical characteristics [43]. This model is well-suited for complex clinical environments, as it acknowledges the interaction of systems with workflow, communication structures, and organizational support. However, HOT-fit treats workflow fit as one factor among many, whereas our findings indicate that workflow alignment is the primary determinant of ICDSS use. This suggests the need to elevate workflow fit from a subcomponent of fit to a central theoretical mechanism for ICDSS adoption.
Our study indicated that customization was a key enabler for ICDSS. Providers preferred adaptive AI interfaces that allowed personalization, and they indicated that obstacles with workflow fit were often tied to their unique specializations, which forced them into workflows they were not accustomed to. The link between our thematic findings, theoretical connections, and provider perspectives is summarized in Table 2.

4.11. Significance of Discussion

This study shifts attention from ICDSS technical performance to how clinicians actually engage with AI-driven decision support in practice. The findings show that ICDSS adoption is shaped less by accuracy alone and more by workflow fit, professional autonomy, trust, and explainability: factors that directly influence real-world use and patient safety.
When systems conflict with clinical reasoning, providers may disengage or bypass recommendations, potentially introducing new safety concerns. This underscores the need for ICDSS designs that align with clinical workflows and reasoning patterns rather than imposing rigid decision logic.
The findings also reinforce that clinicians view ICDSS as collaborative tools, not authoritative decision-makers. Trust in AI is conditional and depends on transparency, interpretability, and the ability to override recommendations. Without these features, ICDSS risk being perceived as threats to professional judgment rather than supports for care delivery.
Differences between junior and senior providers further highlight the developmental impact of ICDSS. While early-career clinicians may benefit from ICDSS as learning tools, experienced providers prefer selective, context-dependent use. This distinction is critical for organizations deploying ICDSS at scale without undermining expertise or professional identity.
Finally, this study extends information systems theory by demonstrating that dominant adoption models do not fully capture AI-enabled clinical decision support. Uncertainty, conditional use, professional identity, and workflow primacy emerge as central drivers of adoption, indicating the need to refine existing frameworks for high-stakes, AI-mediated work.

4.12. Implications for Healthcare Entities

Given these findings, healthcare organizations should adopt phased ICDSS implementation strategies that prioritize high-trust environments and allow providers to gradually increase their reliance on AI-supported tools over time. In parallel, ICDSS developers should integrate explainability features, enabling providers to understand the rationale behind system recommendations. Research by Xie et al. [46] demonstrates that explainability directly correlates with trust in AI-enabled clinical systems, and our findings support this claim.
To address workflow concerns, healthcare organizations should implement ICDSS customization features that allow providers to adjust automation levels based on their professional expertise and comfort level. For example, senior physicians might prefer minimal AI intervention, while junior providers may benefit from more structured guidance. This aligns with the HOT-fit framework, which emphasizes that healthcare technologies must be adapted to the organizational context rather than enforced as one-size-fits-all solutions.
Healthcare policymakers should also incorporate policies that promote accountable and trustworthy AI use within organizations. This would ensure that any AI use could be auditable and explainable in the provider’s decision-making process. For instance, structured documentation of clinical overrides can preserve provider autonomy while supporting liability management and regulatory compliance.
To move beyond theoretical interpretation, we translate key findings into concrete, actionable guidance for ICDSS designers, healthcare administrators, clinical informatics teams, and governance stakeholders.
For ICDSS designers, participants’ concerns about trust and cognitive overload suggest the need for interfaces that clearly communicate system confidence, evidence sources, and reasoning transparency. For example, ICDSS tools should display explanation layers tailored to clinician expertise, provide concise rationales rather than opaque recommendations, and allow clinicians to inspect or dismiss suggestions without workflow disruption.
For healthcare administrators and IT leadership, our findings highlight the importance of aligning ICDSS implementation with real-world clinical workflows rather than forcing workflow changes to fit technology. Practical strategies include piloting new ICDSS features in limited clinical settings before full-scale rollout, allocating protected time for training, and establishing clear accountability structures defining when clinicians may rely on or override system recommendations.
For clinical informatics teams, our results emphasize the need to monitor ICDSS adoption patterns and clinician feedback continuously. Practical steps include tracking alert fatigue rates, conducting regular usability audits, and incorporating provider feedback loops into system updates to ensure the tool remains responsive to evolving clinical needs.
For governance and policy stakeholders, providers’ concerns about liability, bias, and ethical responsibility indicate a need for formal oversight mechanisms. For example, institutions should create multidisciplinary AI governance committees, define policies for bias monitoring and escalation, and implement documentation standards for ICDSS recommendations influencing high-stakes clinical decisions.
Collectively, these implications demonstrate how qualitative provider insights can directly inform system design, organizational policy, and responsible deployment practices, helping ensure ICDSS technologies are not only technically capable but also usable, trusted, ethically governed, and sustainable in real-world clinical environments.
The analysis of providers’ perspectives on ICDSS yielded themes indicating that significant obstacles remain in the sociotechnical implementation of ICDSS. Because the unit of analysis was the provider, the findings are most valuable for healthcare entities implementing ICDSS for clinical staff. IS and information technology (IT) professionals can also glean valuable insight from these findings. IT departments can consider a “provider in the loop” implementation method, ensuring consistent provider communication during development, and would benefit from iterative design collaboration with providers. Future ICDSS development would benefit from personalized user interface designs that align with different specialties. Finally, providers indicated that they remain uneasy about AI.
Healthcare entities should continue evolving their AI conversations and building policies around them that are reflective of trustworthy AI processes. Continued involvement ensures the sociotechnical alignment of the provider’s already developed workflow, improving patient outcomes and provider satisfaction.

4.13. Limitations

This research addressed the gap by contributing a qualitative, emergent-theory account of the sociotechnical phenomena underlying providers’ perspectives on ICDSS. Sustained research into providers’ perspectives is still required; further investigation of the obstacles providers face would provide valuable insight for the future development of ICDSS.
Key areas of concern include further understanding AI anxiety; better understanding data quality and the obstacles providers face; determining how to integrate ICDSS into clinical workflows while minimizing the negative impact on patients; and developing more effective ways to meet compliance requirements while minimizing the burden on providers.
While this study provides deep insight into provider perspectives, it has limitations. First, this study is based on a single healthcare system, the VAHCS, whose unique policies and workflow constraints limit generalizability. Future research should expand to private hospitals and international settings to determine whether the findings generalize across different healthcare infrastructures. Second, our study relies on self-reported perspectives, which may introduce response bias. Lastly, our findings suggest that ICDSS adoption is not a binary process but rather a spectrum of acceptance and resistance.
Although this study includes a relatively small sample of 11 healthcare professionals, the research was designed as a qualitative inquiry prioritizing depth of insight over statistical breadth. In qualitative research, sample adequacy is determined by thematic saturation and richness of experiential data, rather than representativeness or dataset scale. The interviews yielded recurring patterns across professional roles, clinical contexts, and organizational settings, indicating sufficient saturation to support the study’s analytic goals.
Importantly, the objective of this study was not to analyze patient records, enterprise clinical databases, or large-scale system usage data (such as those maintained by the U.S. Department of Veterans Affairs), but to capture frontline provider perspectives on ICDSS adoption, trust, workflow integration, and decision-making context. These human-centered, experiential factors cannot be derived from administrative datasets or electronic health record repositories alone, which primarily reflect system activity rather than clinician reasoning, perception, or organizational constraints.
Additionally, access to VA enterprise datasets involves substantial regulatory, IRB, and governance requirements, and would represent a methodologically distinct study focused on system-level performance rather than sociotechnical implementation dynamics. Rather than replacing large-scale data analysis, this study offers a complementary qualitative lens, generating practice-grounded insights that can inform future mixed-methods or VA-scale quantitative research.

4.14. Future Research

Future research should further examine the impact of providers’ sociotechnical concerns. Replicating this GT study in private-sector or other healthcare settings would help assess the transferability of the findings beyond the VA environment. Comparative qualitative studies could explore how variations in organizational governance, accountability, and workflow structure influence conditional use of ICDSS.
Extending this research to additional patient care staff, such as nurses or nursing assistants, pharmacists, and ancillary departments, would provide valuable insight. Each role collaborates with the provider in a unique way, offers a distinct perspective on ICDSS, and has the potential to contribute to the body of research.
Quantitative studies are needed to determine the significance of the key factors identified by providers in the current study. Additionally, a longitudinal study would be beneficial to better understand provider adaptation over time.
While quantitative evaluation is essential for assessing algorithmic accuracy, predictive performance, or system efficiency, such measures address a different research objective than the one pursued in this study. Our aim is not to evaluate ICDSS performance, but rather to understand how clinicians perceive, interpret, adopt, resist, and operationalize intelligent decision support in real-world clinical environments. These insights are critical because adoption, trust, usability, and workflow fit often determine the real-world impact of clinical AI systems more strongly than technical accuracy alone.

5. Conclusions

This study does not claim empirical system performance validation and does not rely on usage telemetry, outcome metrics, or experimental benchmarking. Instead, its evidentiary basis lies in human-centered qualitative data, capturing clinician perspectives and sociotechnical constraints that are not observable through system logs alone.
By foregrounding provider experience and workflow alignment, this study offers actionable guidance for designing ICDSS implementations that improve clinical adoption without compromising professional autonomy or patient-centered care.
This study highlights the complexity of provider perspectives on ICDSS adoption, emphasizing that adoption is not solely driven by usability but also by experience, adaptation, and clinical utility concerns. By linking findings to established theories, identifying unexpected insights, and providing concrete policy recommendations, this research contributes to a more nuanced understanding of ICDSS implementation in clinical practice. Moving forward, ICDSS developers and healthcare organizations must prioritize sociotechnical alignment, ensuring that AI-driven augmented clinical judgment is integrated seamlessly into professional workflows rather than imposing rigid decision logic.

Author Contributions

Conceptualization, A.B. and C.N.; methodology, A.B.; software, A.B.; validation, A.B., C.N. and P.B.; formal analysis, A.B. and C.N.; investigation, A.B. and C.N.; resources, A.B., C.N. and P.B.; data curation, A.B.; writing—original draft preparation, A.B.; writing—review and editing, A.B., C.N. and P.B.; visualization, A.B.; supervision, C.N. and P.B.; project administration, C.N. and P.B. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

This study was conducted in accordance with the Declaration of Helsinki and approved by the Institutional Review Boards of Dakota State University (protocol code 20220407-AB, approved April 2022) and the University of South Dakota (protocol code IRB-22-111, approved 2022).

Informed Consent Statement

Informed consent was obtained from all participants prior to data collection. Participants were informed of the purpose of the study, the voluntary nature of their participation, and their right to withdraw at any time without penalty. All participants provided consent before participating in the interviews.

Data Availability Statement

The data presented in this study are available on request from the corresponding author due to privacy and ethical restrictions.

Acknowledgments

The authors would like to thank the healthcare providers who participated in this study for generously sharing their time and perspectives. The authors also acknowledge the support of Dakota State University and the Veterans Affairs Healthcare System for facilitating study coordination and recruitment. During the preparation of this manuscript, the authors used ChatGPT (OpenAI, GPT-5 series) for purposes of language refinement, structural editing, and clarity improvement. The authors reviewed, edited, and validated all generated content and take full responsibility for the accuracy and integrity of the final manuscript.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
AI	Artificial Intelligence
CDSS	Clinical Decision Support System
CPRS	Computerized Patient Record System
DOI	Diffusion of Innovation
DSU	Dakota State University
EHR	Electronic Health Record
GT	Grounded Theory
HOT-fit	Human–Organization–Technology fit
ICDSS	Intelligent Clinical Decision Support System
IRB	Institutional Review Board
IS	Information Systems
IT	Information Technology
NLP	Natural Language Processing
TAM	Technology Acceptance Model
UTAUT	Unified Theory of Acceptance and Use of Technology
VA	Veterans Affairs
VAHCS	Veterans Affairs Healthcare System
VistA	Veterans Health Information System and Technology Architecture

References

  1. Precedence Research. Artificial Intelligence (AI) in Healthcare Market Size Worldwide from 2021 to 2030. Statista. Available online: https://www.statista.com/statistics/1334826/ai-in-healthcare-market-sizeworldwide/ (accessed on 31 October 2023).
  2. Ramathilagam, A.; Pitchipoo, P. Modeling and Development of Fuzzy Logic-Based Intelligent Decision Support System. Rom. J. Inform. Sci. Technol. 2022, 25, 58–79.
  3. Wan, W.; Xu, J.; Zeng, Q.; Pan, L.; Sun, W. Development and Evaluation of Intelligent Medical Decision Support Systems. Acad. J. Sci. Technol. 2023, 8, 22–25.
  4. Khan, M.F.; Haider, F.; Al-Hmouz, A.; Mursaleen, M. Development of an Intelligent Decision Support System for Attaining Sustainable Growth within a Life Insurance Company. Mathematics 2021, 9, 1369.
  5. Tian, S.; Yang, W.; Grange, J.M.L.; Wang, P.; Huang, W.; Ye, Z. Smart healthcare: Making medical care more intelligent. Glob. Health J. 2019, 3, 62–65.
  6. Petitgand, C.; Motulsky, A.; Denis, J.-L.; Régis, C. Investigating the Barriers to Physician Adoption of an Artificial Intelligence-Based Decision Support System in Emergency Care: An Interpretative Qualitative Study. In Studies in Health Technology and Informatics; IOS Press: Amsterdam, The Netherlands, 2020; Volume 270, pp. 1001–1005.
  7. Jussupow, E.; Spohrer, K.; Heinzl, A.; Link, C. I am; We are-Conceptualizing Professional Identity Threats from Information Technology. In Proceedings of the International Conference on Information Systems, San Francisco, CA, USA, 13–16 December 2018.
  8. Abell, B.; Naicker, S.; Rodwell, D.; Donovan, T.; Tariq, A.; Baysari, M.; Blythe, R.; Parsons, R.; McPhail, S.M. Identifying Barriers and Facilitators to Successful Implementation of Computerized Clinical Decision Support Systems in Hospitals: A Nasss Framework-Informed Scoping Review. Implement. Sci. 2023, 18, 32.
  9. Dean, T.B.; Seecheran, R.; Badgett, R.G.; Zackula, R.; Symons, J. Perceptions and Attitudes Toward Artificial Intelligence Among Frontline Physicians and Physicians’ Assistants in Kansas: A Cross-Sectional Survey. JAMIA Open 2024, 7, ooae100.
  10. Pinsky, M.; Dubrawski, A.; Clermont, G. Intelligent Clinical Decision Support. Sensors 2022, 22, 1408.
  11. Romero-Brufau, S.; Wyatt, K.D.; Boyum, P.; Mickelson, M.; Moore, M.; Cognetta-Rieke, C. A Lesson in Implementation: A Pre-Post Study of Providers’ Experience with Artificial Intelligence-Based Clinical Decision Support. Int. J. Med. Inform. 2020, 137, 104072.
  12. Vijayakumar, S.; Lee, V.V.; Leong, Q.Y.; Hong, S.J.; Blasiak, A.; Ho, D. Physicians’ Perspectives on AI in Clinical Decision Support Systems: Interview Study of the CURATE.AI Personalized Dose Optimization Platform. JMIR Hum. Factors 2023, 10, e48476.
  13. Albahar, F.; Abu-Farha, R.K.; Alshogran, O.Y.; Alhamad, H.; Curtis, C.E.; Marriott, J.F. Healthcare Professionals’ Perceptions, Barriers, and Facilitators towards Adopting Computerised Clinical Decision Support Systems in Antimicrobial Stewardship in Jordanian Hospitals. Healthcare 2023, 11, 836.
  14. Samhammer, D.; Roller, R.; Hummel, P.; Osmanodja, B.; Burchardt, A.; Mayrdorfer, M.; Duettmann, W.; Dabrock, P. Nothing Works Without the Doctor: Physicians’ Perception of Clinical Decision-Making and Artificial Intelligence. Front. Med. 2022, 9, 1016366.
  15. Romero-Brufau, S.; Wyatt, K.D.; Boyum, P.; Mickelson, M.; Moore, M.; Cognetta-Rieke, C. Implementation of Artificial Intelligence-Based Clinical Decision Support to Reduce Hospital Readmissions at a Regional Hospital. Appl. Clin. Inform. 2020, 11, 570–577.
  16. Myers, M.D. Qualitative Research in Information Systems. MIS Q. 1997, 21, 241–242.
  17. Corbin, J.M.; Strauss, A. Grounded theory research: Procedures, canons, and evaluative criteria. Qual. Sociol. 1990, 13, 3–21.
  18. Glaser, B.G.; Strauss, A.L. The Discovery of Grounded Theory: Strategies for Qualitative Research; Observations (Chicago, Ill.); Aldine: Chicago, IL, USA, 1967; Available online: https://books.google.com/books?id=oUxEAQAAIAAJ (accessed on 29 January 2026).
  19. Charmaz, K. Constructing Grounded Theory: A Practical Guide Through Qualitative Analysis; Sage Publications: London, UK, 2006.
  20. Qureshi, H.A.; Ünlü, Z. Beyond the Paradigm Conflicts: A Four-Step Coding Instrument for Grounded Theory. Int. J. Qual. Methods 2020, 19, 160940692092818.
  21. Slightam, C.; Wray, C.; Tisdale, R.L.; Zulman, D.M.; Gray, C. Opportunities to Enhance the Implementation of Veterans Affairs Video-Based Care: Qualitative Perspectives of Providers from Diverse Specialties. J. Med. Internet Res. 2023, 25, e43314.
  22. Veterans Affairs. Providing Health Care for Veterans. Veterans Health Administration. Available online: https://www.va.gov/health/ (accessed on 8 August 2023).
  23. Veterans Affairs. Office of Information Technology. History of IT at VA. Available online: https://www.oit.va.gov/about/history.cfm (accessed on 8 August 2023).
  24. Li, J. A Service-Oriented Approach to Interoperable and Secure Personal Health Record Systems. In 2017 IEEE Symposium on Service-Oriented System Engineering (SOSE); IEEE: Piscataway, NJ, USA, 2017; pp. 38–46.
  25. Tomcsanyi, K.M.; Tran, K.A.; Bates, J.; Cunningham, F.E.; Silverman, R.; Norris, A.K.; Moore, V.R.; Voora, D. Veterans Health Administration: Implementation of pharmacogenomic clinical decision support with statin medications and the SLCO1B1 gene as an exemplar. Am. J. Health-Syst. Pharm. 2023, 80, 1082–1089.
  26. U.S Department of Veteran Affairs. Electronic Health Record Modernization: Transforming Health Care for Veterans, Revolutionizing Health Care for All. Fact Sheet. November 2022. Available online: https://digital.va.gov/ehr-modernization/frequently-asked-question/ (accessed on 17 August 2023).
  27. Miller, D.P., Jr.; Latulipe, C.; Melius, K.A.; Quandt, S.A.; Arcury, T.A. Primary Care Providers’ Views of Patient Portals: Interview Study of Perceived Benefits and Consequences. J. Med. Internet Res. 2016, 18, e8.
  28. Charmaz, K. Grounded Theory in the 21st Century: A Qualitative Method for Advancing Social Justice Research. In The Sage Handbook of Qualitative Research; Sage Publications Ltd.: London, UK, 2005; Volume 3, pp. 507–535.
  29. O’Reilly, K.; Paper, D.; Marx, S. Demystifying Grounded Theory for Business Research. Organ. Res. Methods 2012, 15, 247–262.
  30. Glaser, B.G.; Strauss, A.L. The Discovery of Grounded Theory: Strategies for Qualitative Research, 1st ed.; Routledge: Abingdon, UK, 2017.
  31. Boudreau, M.-C.; Robey, D. Enacting Integrated Information Technology: A Human Agency Perspective. Organ. Sci. 2005, 16, 3–18.
  32. Gibbert, M.; Filippetti, V.; Ruigrok, W. What passes as a rigorous case study? Strateg. Manag. J. 2008, 29, 22.
  33. Yin, R.K. Case Study Research: Design and Methods (Applied Social Research Methods); SAGE: Los Angeles, CA, USA, 1994.
  34. Cook, T.D.; Campbell, D.T. Quasi-Experimentation: Design & Analysis Issues for Field Settings; Houghton Mifflin: Boston, MA, USA, 1979.
  35. Wang, L.; Zhang, Z.; Wang, D.; Cao, W.; Zhou, X.; Zhang, P.; Liu, J.; Fan, X.; Tian, F. Human-Centered Design and Evaluation of AI-Empowered Clinical Decision Support Systems: A Systematic Review. Front. Comput. Sci. 2023, 5, 1187299.
  36. Wang, S.; Xu, J.; Tahmasebi, A.; Daniels, K.; Liu, J.B.; Curry, J.; Cottrill, E.; Lyshchik, A.; Eisenbrey, J.R. Incorporation of a Machine Learning Algorithm with Object Detection Within the Thyroid Imaging Reporting and Data System Improves the Diagnosis of Genetic Risk. Front. Oncol. 2020, 10, 591846.
  37. Qassim, S.; Golden, G.; Slowey, D.; Sarfas, M.; Whitmore, K.; Perez, T.; Strong, E.; Lundrigan, E.; Fradette, M.-J.; Baxter, J. A mixed-methods feasibility study of a novel AI-enabled, web-based, clinical decision support system for the treatment of major depression in adults. J. Affect. Disord. Rep. 2023, 14, 100677.
  38. Knop, M.; Weber, S.; Mueller, M.; Niehaves, B. Human Factors and Technological Characteristics Influencing the Interaction of Medical Professionals with Artificial Intelligence–Enabled Clinical Decision Support Systems: Literature Review. JMIR Hum. Factors 2022, 9, e28639.
  39. Markus, A.F.; Kors, J.A.; Rijnbeek, P.R. The Role of Explainability in Creating Trustworthy Artificial Intelligence for Health Care: A Comprehensive Survey of the Terminology, Design Choices, and Evaluation Strategies. J. Biomed. Inform. 2021, 113, 103655.
  40. Venkatesh, V.; Morris, M.G.; Davis, G.B.; Davis, F.D. User Acceptance of Information Technology: Toward a Unified View. MIS Q. 2003, 27, 425–478.
  41. Rogers, E.M. Diffusion of Innovations, 5th ed.; Free Press: London, UK; Collier Macmillan: New York, NY, USA, 2003.
  42. Davis, F.D. Perceived Usefulness, Perceived Ease of Use, and User Acceptance of Information Technology. MIS Q. 1989, 13, 319–340.
  43. Yusof, M.M.; Kuljis, J.; Papazafeiropoulou, A.; Stergioulas, L.K. An evaluation framework for Health Information Systems: Human, organization and technology-fit factors (HOT-fit). Int. J. Med. Inform. 2008, 77, 386–398.
  44. Wang, D.; Wang, L.; Zhang, Z.; Wang, D.; Zhu, H.; Gao, Y.; Fan, X.; Tian, F. ‘Brilliant AI Doctor’ in Rural Clinics: Challenges in AI-Powered Clinical Decision Support System Deployment. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems; ACM: New York, NY, USA, 2021; pp. 1–18. [Google Scholar] [CrossRef]
  45. Jauk, S.; Kramer, D.; Avian, A.; Berghold, A.; Leodolter, W.; Schulz, S. Technology Acceptance of a Machine Learning Algorithm Predicting Delirium in a Clinical Setting: A Mixed-Methods Study. J. Med. Syst. 2021, 45, 48. [Google Scholar] [CrossRef]
  46. Xie, Y.; Chen, M.; Kao, D.; Gao, G.; Chen, X.A. CheXplain: Enabling Physicians to Explore and Understand Data-Driven, AI-Enabled Medical Imaging Analysis. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems; ACM: New York, NY, USA, 2020; pp. 1–13. [Google Scholar] [CrossRef]
Figure 1. Qualitative case study design illustrating the grounded theory informed analytic workflow. Interview data are analyzed through iterative cycles of initial coding, focused coding, and theoretical coding, enabling constant comparison, relationship building, and theory development.
Figure 2. Providers’ Perspectives Network Diagram.
Table 1. Key results summary.
| Category | Concept | Open Codes |
|---|---|---|
| Provider Experience Drivers | Facilitates Medical Information Search | Easily accessible patient information, guided by indication or bacteria type, relevant text groupings on a single screen |
| | Improves Task Efficiency | Auto-fill my documentation, just guides me through it, minimize text input, saves me time |
| | Leverages Data-Driven Intelligence | Data helps with decision-making, historical patient data is valuable, value in knowing their service connection |
| | Provides Transparent and Reliable Recommendations | Provide inline resources for calculations, recommendations and guidelines are explained and evidence-based, update transparency |
| Clinical Utility Drivers | Improves Treatment Choices | Antibiotic alternatives available, certain antibiotics have restrictions, decision support for antibiotic use, decrease usage of unnecessary antibiotics, helps educate patients, manual order approval, order approval automated through CDSS, patient is most important |
| | Reduces Adverse Events | Alerts for drug interactions, auto population, choose the right antibiotic the first time, free text input for order adjustments, reminders help me so I do not forget |
| | Supports Diagnosis Process | Assures me, continuous learning, helps me make my clinical decision, include diagnostic support |
| Adaptation Drivers | Promotes Personalized Training | Hands-on learner, learning by observation, orientation, training maintains consistency, tutored right at your elbow |
| | Promotes Team Dialog | Advice maintained in house, advocated by pharmacy, pharmacy supports other providers, provider feedback encouraged, provider mentorship |
| | Improves Technical Literacy | Make most of your tools, technical proficiency varies by provider |
| Provider Experience Obstacles | Contributes to Cognitive Overload | Hard to navigate, information is hard to find, information overload, technical limitations, the extra clicks are inefficient |
| | Data Quality Issues | Documentation is not always right, lack of interoperability, problem list is not always updated, workarounds for limited functionality |
| | Lacks Flexibility | Adjustable user interface, customizable order sets, unable to favorite order sets |
| Clinical Utility Obstacles | Dehumanizes Patient–Provider Interaction | Internet facilitates patient questioning, patient first documentation second, technology can be impersonable |
| | Diminishes Provider Autonomy | Day of the specialization, knowledge is irreplaceable, physicians’ status is diminishing |
| | Increases Workload Through Compliance Requirements | Regulations create additional workload burden |
| | Misaligned with Workflow | Alert fatigue, alerts not contextually relevant, interrupts my thought process using tools outside EHR, mobile app lacks ordering functionality |
| Adaptation Obstacles | AI Anxiety | AI is helpful, fear of AI automation, natural language processing is not always accurate, natural language processing used for documentation |
| | Challenging to Learn | Effectiveness increases with familiarity, make most of your tools |
| | Skepticism | Change is hard, do not make more work for me, effectiveness increases with familiarity, have a backup plan for when technology fails, technology changes fast |
Table 2. Thematic mapping.
| Thematic Finding | Theoretical Connection | Perspective Category | Extant Literature |
|---|---|---|---|
| Uncertainty | UTAUT | Obstacle: Lack of transparency undermines trust | [38,44] |
| Conditional Adopters | DOI | Obstacle: Perceived loss of decision-making control | [35,44] |
| Professional Identity | TAM | Obstacle: Resistance due to overconfidence | [38] |
| Workflow Fit | HOT-Fit | Obstacle: ICDSS misalignment with provider workflows | [44,45] |
| | | Driver: Customizable interfaces improve efficiency | [44] |