4.1. Analysis of Findings
This study advances beyond frequency reporting by quantifying predictive pathways for physician acceptance using a compact, theory-anchored framework. Specifically, we demonstrate independent effects of future system value and training priority, test specialty-specific moderation of the training–acceptance slope, and evaluate mediation by XR task-utility, thereby linking practical implementation levers to established acceptance theory.
This multi-specialty sample of Romanian physicians expressed generally favorable attitudes toward AI/XR-enabled telemedicine (TAI = 3.9/5), with especially strong endorsement of integrating these topics into medical curricula (mean = 4.4/5). Importantly, acceptance was not a function of specialty group per se; rather, it tracked with beliefs about future system value and with prioritization of professional training. This pattern underscores that addressing how AI/XR fits within clinical pathways, and equipping clinicians with the skills to evaluate and apply tools safely, may matter more than tailoring messages purely by specialty.
Perceived barriers were concrete: technical reliability, financial outlay, and patient acceptance. Ethical focal points (clinician accountability and data confidentiality) signal a desire for guardrails that preserve professional responsibility and trust. Requested adoption strategies (hands-on workshops and continuing education) map directly onto these needs. XR’s perceived utility was moderate to high across the board and most pronounced in dentistry and A&ICU; because the between-specialty differences were not statistically significant, early pilots in receptive services could generate templates that other services can reuse.
The near-null slopes in A&ICU and Dentistry plausibly reflect ceiling effects (higher baseline familiarity and frequent simulation/remote counseling) and constraint profiles where training alone is not rate-limiting. In A&ICU, ongoing team simulation may already embed AI/XR concepts; in Dentistry, hardware ergonomics, setup time, and reimbursement can dominate perceived feasibility, dampening the marginal yield of generic workshops.
The multivariable model offers a practical blueprint. Acceptance is higher where physicians (a) are convinced of future system-level benefits and (b) view training as a priority. Implementation should therefore pair a compelling, governance-aligned narrative (safety, equity, accountability) with experiential learning (simulations, supervised cases, checklists). This approach addresses both head (evidence, policy) and hands (skills, workflows), while respecting core ethical commitments.
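To make the modeling approach concrete, the following is a minimal sketch of an ordinary least squares model with HC3 heteroskedasticity-robust standard errors, of the kind used for the multivariable acceptance model; the data and variable names (`future_value`, `training_priority`, `acceptance`) are synthetic illustrations, not the study's dataset.

```python
import numpy as np

def ols_hc3(X, y):
    """Fit OLS and return coefficients with HC3-robust standard errors.

    X: (n, k) design matrix including an intercept column.
    y: (n,) outcome vector.
    """
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = XtX_inv @ X.T @ y
    resid = y - X @ beta
    # Leverage values h_ii from the hat matrix H = X (X'X)^-1 X'
    h = np.einsum("ij,jk,ik->i", X, XtX_inv, X)
    # HC3 "meat": squared residuals inflated by (1 - h_ii)^2
    meat = X.T @ (X * (resid**2 / (1.0 - h) ** 2)[:, None])
    cov = XtX_inv @ meat @ XtX_inv
    se = np.sqrt(np.diag(cov))
    return beta, se

# Synthetic illustration: acceptance ~ future_value + training_priority
rng = np.random.default_rng(42)
n = 200
future_value = rng.normal(3.5, 0.8, n)
training_priority = rng.normal(4.0, 0.6, n)
acceptance = 1.0 + 0.4 * future_value + 0.3 * training_priority + rng.normal(0, 0.5, n)

X = np.column_stack([np.ones(n), future_value, training_priority])
beta, se = ols_hc3(X, acceptance)
print("coefficients:", np.round(beta, 2))
print("HC3 SEs:     ", np.round(se, 3))
```

Independent positive coefficients on both predictors, with robust standard errors, correspond to the pattern reported above: belief in future system value and training priority each carry their own weight.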
Romanian physicians’ generally favorable views of AI/XR in telemedicine, with acceptance tightly linked to beliefs about future system-level value and the priority given to training, align with several recent clinician surveys outside Romania. A statewide U.S. survey of frontline physicians and physician associates reported broadly positive attitudes toward AI’s potential but highlighted uneven familiarity and a desire for practical education and governance clarity [18]. A national sample of U.S. physicians similarly found limited formal knowledge about AI, coupled with interest in training and clear rules of the road, suggesting that “confidence through competence” is a common precondition for adoption [19]. Scoping evidence across multiple specialties reinforces these patterns, identifying training, workflow fit, and perceived usefulness as recurrent facilitators, with liability and transparency concerns as drag factors [20]. At the pipeline level, Romanian medical students express strong intentions to use digital health yet cite curricular and support needs, consistent with our finding that curriculum integration and training priority co-travel with acceptance among practicing clinicians [21].
Our cross-specialty results also converge with the XR literature. An umbrella review of extended reality (XR) in surgical training concludes that XR is generally acceptable and useful for skill acquisition, while emphasizing the importance of ergonomics, reliability, and instructional design, determinants that mirror the “technical” and “hands-on” themes reported by our respondents [22]. Specialty-specific studies support this: surgeons and residents exposed to augmented reality (AR) report high perceived value and usability when tasks are proximal to operative workflows [23], and orthopedic teams find AR feasible in the operating room, provided setup and visualization are dependable [24]. That our sample’s XR utility scores were strongest in dentistry and A&ICU is consistent with domains where visualization, simulation, and crisis resource management training can benefit from immersive tools, though the literature stresses that stable hardware, network performance, and targeted scenarios are prerequisites for sustained uptake [22,23,24]. However, the absence of a statistically significant indirect effect via XUI suggests that system-level expectations (interoperability, governance, funding) outweigh modality-specific XR beliefs when clinicians form acceptance intentions.
Perceived benefits in our data (time efficiency and improved access) track closely with surgical teleconsultation evidence. A nationwide survey of surgeons reported high interest in video visits for selected indications but flagged physical exam limitations, rapport challenges, and connectivity as top barriers, the same categories our respondents selected most often [25]. A systematic review comparing online video with face-to-face surgeon–patient consultations found shorter waiting and total appointment times and broadly similar satisfaction for follow-ups, while calling for more trials in high-stakes preoperative settings, again mapping onto our respondents’ enthusiasm for efficiency and caution about clinical adequacy [26]. At a systems level, an overview of telemedicine reviews across the WHO European Region likewise highlights infrastructure reliability, integration with records, and medico-legal clarity as determinants of sustainable practice, elements our participants prioritized as “technical” and “financial” challenges needing institutional solutions [27,28,29].
Reservations in our sample about erosion of the patient–physician relationship and about data confidentiality are also well supported. Comparative and qualitative syntheses describe mixed effects of telehealth on empathy and rapport: some patients and clinicians experience diminished nonverbal connection, while others note unique windows into patients’ home contexts, underscoring that communication training and visit design (e.g., camera placement, agenda-setting, hybrid pathways) materially influence perceived care quality [30,31]. These findings suggest that “acceptance” is inseparable from interaction quality: our observed link between training priority and acceptance is compatible with evidence that targeted upskilling in telecommunication skills can mitigate relational concerns and convert technical feasibility into perceived care value [26,30,31].
Governance signals in our cohort (emphasis on accountability, confidentiality, and transparency) mirror recent policy and institutional developments. Within the EU, the new AI Act establishes a risk-based framework with stringent obligations for high-risk medical AI and its interactions with the MDR/IVDR, highlighting the need for robust post-market monitoring, documentation, and human oversight, requirements that directly address clinicians’ safety and accountability concerns [28,29]. At the health-system level, case studies describe operational governance for clinical AI (e.g., model review committees, incident reporting, equity monitoring), offering practical templates for aligning frontline training with trustworthy deployment, precisely the blend our respondents favored: hands-on workshops embedded in clear rules [32]. Together, these developments make “future potential” more credible by reducing uncertainty around responsibility and risk.
A key moderation result of the current study is that the training–acceptance slope was most pronounced among surgeons and trended positive in medical specialties, which fits the literature’s lesson that adoption rises when training is proximal to real tasks and when organizational frameworks reduce friction. In practice, this argues for service-line pilots that pair experiential upskilling with concrete governance (e.g., XR-supported telementoring with clear data flows and audit trails), while aligning with macro-level regulatory expectations (EU AI Act) and micro-level communication best practices to preserve rapport [28,29,30,31,32]. In short, the external literature supports our core implementation levers: make the future value proposition tangible with guardrails, and invest in workflow-proximal training that builds both technical and communicative competence, particularly in specialties where the marginal benefit is highest.
Concurrently, advances in secure medical IoT, including dynamic ciphering schemes designed for healthcare data streams, illustrate practical pathways to harden telemedical ecosystems against interception and tampering [33]. In parallel, digital-twin and embodied-AI developments in mechatronics inform the reliability requirements for XR-enabled tele-operation and telementoring: latency, stability, and safety constraints that clinicians implicitly weigh when judging utility [34].
Practice implications can be specialty-tailored as follows. For surgical services, prioritize XR-assisted telementoring workshops with OR-adjacent setup drills, overseen by credentialed proctors and standardized checklists covering visualization quality, network latency, data-capture requirements, and explicit human-oversight points. In A&ICU settings, run crisis resource management simulations that integrate AI triage/risk scores and structured handoffs, coupled with exercises on fail-safe reversion when data streams degrade. Medical departments should adopt consult-oriented modules on AI-supported triage, differential-diagnosis safety nets, and documentation aids, with clear emphasis on medico-legal boundaries. Dentistry can use XR planning sandboxes focused on occlusion and anatomy, set time-to-setup and ergonomic targets, and define practical pathways for device sharing and sterilization logistics. Across all modules, activities should align with local AI governance (model review, incident reporting, equity monitoring) and rigorous data-protection protocols.
Sustainable adoption hinges on (i) reliable broadband and secure device provisioning; (ii) interoperable EHR integration and audit trails; (iii) clear financing for devices/licensing/training; and (iv) alignment with EU risk-based AI regulation (human oversight, documentation, post-market monitoring). Institutional AI governance committees can translate these macro-rules into service-line checklists and approval pathways that clinicians trust.
Beyond descriptive attitudes, this work quantifies predictive pathways for acceptance (training priority and future potential as independent predictors) and tests specialty moderation and mediation by XR utility using robust standard errors and bootstrap procedures, thereby advancing theory-anchored implementation levers rather than reporting frequencies alone. For future instrument refinement, we propose three additional XR items (not used in, and not affecting, the current results): (i) XR ergonomics are acceptable for routine clinical use; (ii) XR setup time is compatible with clinical workflow; and (iii) network reliability is sufficient for XR telementoring.
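The bootstrap test of an indirect effect, of the kind used here for mediation by XR utility, can be sketched as follows; the data are synthetic and the path coefficients are assumed for illustration only, with `training` as predictor, `xr_utility` as mediator, and `acceptance` as outcome.

```python
import numpy as np

def indirect_effect(x, m, y):
    """a*b indirect effect: a from m ~ x, b from y ~ x + m (both with intercepts)."""
    Xa = np.column_stack([np.ones_like(x), x])
    a = np.linalg.lstsq(Xa, m, rcond=None)[0][1]
    Xb = np.column_stack([np.ones_like(x), x, m])
    b = np.linalg.lstsq(Xb, y, rcond=None)[0][2]
    return a * b

rng = np.random.default_rng(7)
n = 150
training = rng.normal(4.0, 0.7, n)
# Synthetic mediator and outcome with a built-in indirect path (a = 0.5, b = 0.4)
xr_utility = 1.0 + 0.5 * training + rng.normal(0, 0.5, n)
acceptance = 0.5 + 0.2 * training + 0.4 * xr_utility + rng.normal(0, 0.5, n)

point = indirect_effect(training, xr_utility, acceptance)

# Percentile bootstrap: resample cases, recompute a*b, take the 2.5th/97.5th percentiles
boot = np.empty(2000)
for i in range(2000):
    idx = rng.integers(0, n, n)
    boot[i] = indirect_effect(training[idx], xr_utility[idx], acceptance[idx])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect = {point:.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```

An indirect effect is judged non-significant when the percentile interval covers zero, which is the criterion behind the null XUI mediation result discussed above.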
The reported acceptance in this cohort aligns with clinician samples from other systems that emphasize utility, governance clarity, and training as recurring determinants of telemedicine and AI uptake. Differences in specialty patterns—stronger training–acceptance coupling in procedure-dense services and more variable effects in ambulatory medical fields—mirror reports from surgical and critical-care settings, whereas medico-legal comfort and workflow fit remain salient in ambulatory care. Together, these consistencies support the external validity of the two actionable levers identified here—credible future system value and structured, workflow-proximal training—and underscore the need to embed hands-on upskilling within clear institutional governance.
4.2. Study Limitations
This training-oriented, cross-sectional survey relied on a small convenience sample (n = 43) with uneven specialty strata (over half from A&ICU), which limits precision, power for interaction terms and small between-group differences, and generalizability beyond the participating clinicians. Findings reflect Romanian practice patterns and regulatory context and may not transfer to other systems without accounting for differences in infrastructure, financing, and governance. All measures were self-reported and mapped to 5-point Likert-type scales after translation, introducing potential common-method bias, acquiescence effects, residual translation artifacts, and untested measurement invariance across specialties. Several relevant covariates (prior digital training, local IT infrastructure, EHR integration maturity, reimbursement exposure, and medico-legal familiarity) were not measured; these may confound or moderate the observed associations and should be collected prospectively. The TAI composite showed good internal consistency, but the XUI comprised only two face-valid items, chosen to minimize respondent burden, which constrains construct breadth; broader XR constructs (ergonomics, cognitive load, workflow fit) warrant multi-item scales and validation (EFA/CFA) in larger samples. Although we used HC3-robust standard errors and bootstrap procedures, multiple modeling steps and exploratory interaction probing raise the possibility of type I/II errors in a low-power setting. Finally, the design cannot establish causality, and we did not observe actual behavior change or clinical outcomes following training or XR exposure. Future work should employ stratified sampling with pre-specified cell sizes to ensure balanced specialty comparisons, and should triangulate self-report with behavioral indicators (e.g., audit logs, credentialing uptake) and vignette-based performance tasks.
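The internal-consistency claims above (good consistency for the TAI composite, limited breadth for the two-item XUI) rest on Cronbach's alpha; a minimal sketch of the computation on synthetic Likert-like item data is shown below, with all names and values illustrative.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)     # variance of each item
    total_var = items.sum(axis=1).var(ddof=1) # variance of the summed scale
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(1)
n = 120
trait = rng.normal(0, 1, n)
# Four correlated items sharing a common latent trait plus item-specific noise
items = np.column_stack([trait + rng.normal(0, 0.6, n) for _ in range(4)])
alpha = cronbach_alpha(items)
print(f"alpha = {alpha:.2f}")
```

Because alpha tends to rise with the number of items, a two-item index such as the XUI mechanically understates construct breadth, which is one reason multi-item XR scales are recommended for future work.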