4.1. Analysis of Findings
Our survey shows that Romanian patients report moderate-to-high acceptance of AI/XR-enabled telemedicine (Acceptance Index 3.27 ± 0.59; α = 0.780), with no measurable differences across age groups (all Kruskal–Wallis p > 0.45). Notably, 27.3% reported prior AI/XR use and 36.4% self-rated as moderately informed—figures that likely exceed general-population baselines. This may reflect the university-affiliated recruitment channels and higher educational attainment in our sample, and should temper generalizability. This pattern contrasts with concerns that older adults’ “digital readiness” gap necessarily depresses acceptance [6], although the oldest strata were small (61–70 y: n = 2; 71–80 y: n = 9) and estimates are imprecise. Prior exposure was associated with meaningfully higher acceptance (3.47 ± 0.47 vs. 3.19 ± 0.59; p = 0.0011), consistent with diffusion processes in which direct experience reduces uncertainty. Overall levels align with the literature showing sustained telehealth satisfaction post-pandemic [1,2,3,4,5]. The absence of age effects within this relatively well-educated sample (76.2% tertiary education) suggests that structural barriers, rather than age per se, may be the dominant constraint when they occur. Descriptively, the youngest (21–30 years: mean 3.38) and oldest (71–80 years: mean 3.44) strata showed relatively higher willingness to choose an AI-assisted visit, but differences were not statistically significant and small cell sizes warrant caution.
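As an illustrative sketch (not the study’s actual code or data), the two statistics reported above—the Kruskal–Wallis between-group test and the Cronbach’s α reliability coefficient—can be computed as follows; the group sizes and Likert scores are synthetic placeholders:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical 1-5 Likert Acceptance Index scores for three age strata
groups = [rng.integers(1, 6, size=n).astype(float) for n in (40, 30, 9)]

# Kruskal-Wallis H-test: non-parametric check for between-group differences
h_stat, p_value = stats.kruskal(*groups)

# Cronbach's alpha for a k-item scale (rows = respondents, cols = items)
def cronbach_alpha(items: np.ndarray) -> float:
    item_vars = items.var(axis=0, ddof=1)       # per-item variances
    total_var = items.sum(axis=1).var(ddof=1)   # variance of summed scale
    k = items.shape[1]
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

items = rng.integers(1, 6, size=(80, 6)).astype(float)
print(f"Kruskal-Wallis p = {p_value:.3f}, alpha = {cronbach_alpha(items):.3f}")
```

With random, uncorrelated items α will be near zero; the reported α = 0.780 reflects the internal consistency of the actual survey items.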
In the proportional-odds model, willingness to pay was strongly associated with choosing an AI-assisted visit (OR 6.81, 95% CI 3.39–13.66; p < 0.001), and perceived accessibility had an independent, albeit smaller, effect (OR 1.83, 95% CI 1.03–3.24; p = 0.040). OLS results converged, with willingness to pay being the only significant predictor of the continuous Acceptance Index (β = +0.432; p < 0.001; R² = 0.461). Knowledge correlated modestly with acceptance (ρ = 0.189; p = 0.048) but was not independently predictive; privacy concern neither correlated with acceptance (ρ = 0.072; p = 0.455) nor shifted choice (χ² = 0.052; p = 0.819). Taken together, these findings suggest that clear value propositions and low-friction access can outweigh generalized privacy anxieties—echoing evidence that patients trade off model performance, interpretability, and convenience [9,10,11]. For XR, evidence of benefit in rehabilitation and engagement [13,14] alongside cybersickness risks [15] underscores the need for deliberate content design and session titration to sustain acceptance.
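The proportional-odds (ordinal logistic) model behind these odds ratios can be sketched with only NumPy/SciPy. The data, coefficients, and predictor labels below are synthetic stand-ins for willingness to pay and perceived accessibility, not the study’s dataset; the point is how ordered thresholds and per-predictor odds ratios arise from one latent-scale model:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

rng = np.random.default_rng(1)
n = 300
X = rng.normal(size=(n, 2))               # columns: wtp, accessibility (synthetic)
latent = 1.9 * X[:, 0] + 0.6 * X[:, 1] + rng.logistic(size=n)
y = np.digitize(latent, [-1.0, 1.0])      # 3 ordered choice levels (0, 1, 2)

def neg_loglik(params):
    beta, cut1, gap = params[:2], params[2], params[3]
    cuts = np.array([cut1, cut1 + np.exp(gap)])  # exp(gap) keeps cuts ordered
    eta = X @ beta
    # cumulative probabilities P(y <= k) = expit(cut_k - eta)
    cum = expit(cuts[None, :] - eta[:, None])
    probs = np.column_stack([cum[:, 0], cum[:, 1] - cum[:, 0], 1 - cum[:, 1]])
    probs = np.clip(probs, 1e-12, 1.0)           # guard against log(0)
    return -np.log(probs[np.arange(n), y]).sum()

res = minimize(neg_loglik, np.zeros(4), method="BFGS")
odds_ratios = np.exp(res.x[:2])  # OR per 1-unit increase in each predictor
print("ORs:", odds_ratios)
```

Because the same β applies to every cumulative split, a single odds ratio summarizes each predictor’s effect across all response levels, which is the proportional-odds assumption.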
Within the EU, AI deployments are shaped by the AI Act’s risk-tiered obligations and by General Data Protection Regulation (GDPR)-anchored transparency and data-subject rights [17,18]. In Romania, the formalization of telemedicine continues amid uneven infrastructure and digital literacy [19], and physician adoption is driven by perceived usefulness and social norms [21]. Recent efforts to quantify telemedicine acceptance among Romanian patients with diabetes include the development and validation of a structured instrument by Patrascu et al., which demonstrated strong reliability and utility for assessing patients’ desirability for, acceptability of, and adherence to telemedicine. Their findings underscore the importance of capturing patient-specific needs to inform the development of telemedicine platforms and policies [22]. Nevertheless, social desirability effects (the tendency to provide favorable responses because AI/XR are perceived as innovative) may inflate acceptance estimates in such self-report instruments, including ours.
Our subgroup analysis suggests targeted levers for patients: prior use conferred the largest, statistically reliable benefit among those with secondary education (Cliff’s δ = 0.56; Holm-adjusted p = 0.029), pointing to value in short, supervised “try-before-you-decide” experiences, transparent data notices aligned with GDPR, and integrated onboarding (e.g., one-click scheduling, clear fallback to clinician-only care). Pricing clarity and tiered options may further align willingness to pay with perceived benefit, while XR deployments should include tolerability screening and accessible alternatives to protect equity.
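The subgroup effect size reported above combines Cliff’s δ with a Holm correction for multiple subgroup tests. A minimal, self-contained sketch of both computations (with illustrative scores, not the study’s data) is:

```python
import numpy as np

def cliffs_delta(a, b):
    """P(a > b) - P(a < b) over all cross-pairs; ranges from -1 to 1."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    diffs = a[:, None] - b[None, :]
    return (diffs > 0).mean() - (diffs < 0).mean()

def holm_adjust(pvals):
    """Holm step-down adjustment; returns adjusted p-values in input order."""
    p = np.asarray(pvals, float)
    order = np.argsort(p)
    m = len(p)
    adj = np.empty(m)
    running = 0.0
    for rank, idx in enumerate(order):
        running = max(running, (m - rank) * p[idx])  # enforce monotonicity
        adj[idx] = min(running, 1.0)
    return adj

prior = [4.0, 3.5, 3.8, 3.6, 3.9]   # illustrative acceptance scores, prior use
none = [3.0, 3.2, 2.9, 3.4, 3.1]    # illustrative scores, no prior use
print(cliffs_delta(prior, none))     # -> 1.0 (every prior-use score is higher)
print(holm_adjust([0.01, 0.04, 0.03]))  # -> [0.03 0.06 0.06]
```

A δ of 0.56, as observed in the secondary-education subgroup, means that in a random cross-pair the prior-use respondent scores higher about 78% of the time.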
Moreover, our finding of moderate-to-high acceptance (Acceptance Index 3.27 ± 0.59) with no detectable between-age differences aligns with work showing broadly favorable but not uniform patient receptivity to remote and AI-inflected care. In a German primary-care survey grounded in the Unified Theory of Acceptance and Use of Technology (UTAUT), overall acceptance of video consultations was also “moderate,” with performance and effort expectancy, rather than age, driving the intention to use [23]. A contemporary systematic review of video/phone telemental health also reported generally positive patient attitudes, tempered by variability across settings and outcomes, again suggesting that contextual factors (ease of use, clinical appropriateness) matter at least as much as demographics [24]. Broader public polling about AI in U.S. healthcare similarly finds cautious optimism, with interest in convenience and perceived effectiveness balanced against misgivings, consistent with our mid-to-upper scale means on trust and perceived improvement [25].
The positive association we observed between prior exposure and acceptance mirrors diffusion patterns in which experience reduces uncertainty and increases willingness to adopt. Experimental work indicates that how AI is framed also shapes support: communicating accuracy and bias, especially with explicit mention of clinician oversight, increases approval for medical AI across tasks [26]. A recent scoping review of patient perspectives on AI converges on the same levers: perceived usefulness, human-in-the-loop assurance, and transparent communication are recurrent facilitators of acceptance [27]. Although privacy concerns are salient attitudinally, their link to behavior is often weak. In a large telehealth study, privacy worry had little explanatory power for continued-use intention once usability and value were accounted for, a pattern congruent with our non-association between privacy concern and acceptance or choice [28].
Willingness to pay emerged as the strongest correlate of acceptance in our multivariable models (β ≈ +0.43 per one-level increase; proportional-odds OR ≈ 6.8), highlighting the centrality of perceived value. This finding aligns with a systematic review documenting wide variation in stated willingness to pay for telemedicine across conditions and contexts, with willingness consistently tracking perceived convenience and benefit, particularly when telemedicine substitutes for costly travel or wait time [29]. Disease-specific studies echo this sensitivity to out-of-pocket framing: in a telemedicine lifestyle program, most participants were willing to pay modest copays (<US $30), with acceptance rising as travel burdens increased [30]. From a system perspective, economic evaluations of digital health interventions highlight that uptake hinges not only on clinical performance but also on clear pricing and cost offsets (e.g., fewer visits, reduced transportation), which help patients and payers reconcile value with fees [31].
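To make the magnitude of an odds ratio concrete, a short worked illustration (the baseline probability is an assumption for exposition, not an estimate from our data) shows how an OR of about 6.8 moves the probability of choosing an AI-assisted visit:

```python
def shift_probability(p0: float, odds_ratio: float) -> float:
    """Apply an odds ratio to a baseline probability p0 and return the new probability."""
    odds = odds_ratio * p0 / (1 - p0)   # new odds = OR * baseline odds
    return odds / (1 + odds)

# Assumed 30% baseline: the OR of ~6.8 lifts it to roughly 74%
print(round(shift_probability(0.30, 6.8), 3))  # -> 0.745
```

The same OR implies different absolute shifts at different baselines, which is why pricing and access interventions matter most where baseline uptake is low.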
Accessibility also showed an independent association with choosing an AI-assisted visit (OR ≈ 1.83), consistent with evidence that lowering frictions around devices, connectivity, and onboarding shifts preferences toward virtual care. Reducing practical barriers—simpler log-in and scheduling, loaner devices where needed, reliable broadband or cellular alternatives, and brief guided first-use tutorials—operationalizes this lever. In the VA program that mailed video-enabled tablets to veterans with access barriers, two-thirds of recipients preferred or were indifferent to tablet-based visits relative to in-person care, citing fewer logistics and smoother access as key reasons [32]. A subsequent evaluation of the VA’s Digital Divide Consult found that targeted device distribution and support significantly boosted telehealth adoption among patients lacking access to technology or broadband, reinforcing the idea that “facilitating conditions” are leverage points for acceptance in real-world populations [33].
Lastly, our bundled AI/XR framing is consistent with emerging XR evidence suggesting generally good acceptability when content is purposeful and tolerability is actively managed—for example, by limiting motion-rich content, offering seated modes, shorter initial sessions, and easy opt-out. In mixed chronic pain cohorts, VR interventions demonstrate high user satisfaction and acceptance, alongside clinically meaningful symptom improvements [34]. Educational and preparatory use cases, such as VR walk-throughs before radiotherapy, demonstrate feasibility and reductions in anxiety, with encouraging patient-reported value [35]. “VR walk-throughs” refers to short, immersive previews that simulate a care pathway (e.g., radiotherapy setup) to set expectations and reduce anxiety. Oncology feasibility studies similarly report high acceptability of immersive programs during chemotherapy [36]; such programs typically deliver brief relaxation, mindfulness, or distraction modules via a headset during infusion sessions. On the technical edge, reviews of augmented reality for real-time telemedicine note usability gains but emphasize workflow fit and device ergonomics as prerequisites for sustained use [37], while implementation scoping reviews across XR catalog common barriers (cost, staff training, motion sensitivity) and recommend gradual onboarding and opt-in pathways, strategies that resonate with our secondary-education subgroup, where prior exposure had the largest acceptance gain [38]. “Gradual onboarding” means starting with optional, short trial sessions and stepping up only if tolerated; “opt-in pathways” means patients choose XR components explicitly, with standard telemedicine as the default alternative. Nevertheless, our instrument did not capture clinical condition or acuity; such contextual factors may materially shape preferences and should be incorporated in future designs.
Future studies should (i) validate the Acceptance Index with factor-analytic methods and test–retest reliability; (ii) experimentally separate AI versus XR attributes (e.g., discrete-choice designs varying accuracy, explainability, comfort, and cost); (iii) include objective usage metrics alongside stated preferences; (iv) evaluate equity-focused facilitation (loaner devices, guided onboarding) on acceptance; and (v) assess clinical-condition effects on preferences to tailor telemedicine pathways.