Article

Neuroception of Psychological Safety and Attitude Towards General AI in uHealth Context

by Anca-Livia Panfil 1,2, Simona C. Tamasan 1,2, Claudia C. Vasilian 2, Raluca Horhat 1,3,* and Diana Lungeanu 1,*

1 Center for Modeling Biological Systems and Data Analysis, “Victor Babes” University of Medicine and Pharmacy, 300041 Timisoara, Romania
2 Liaison Psychiatry, “Pius Brinzeu” County Emergency Hospital, 300723 Timisoara, Romania
3 Department of Functional Sciences, “Victor Babes” University of Medicine and Pharmacy, 300041 Timisoara, Romania
* Authors to whom correspondence should be addressed.
Multimodal Technol. Interact. 2026, 10(1), 4; https://doi.org/10.3390/mti10010004
Submission received: 23 November 2025 / Revised: 27 December 2025 / Accepted: 29 December 2025 / Published: 30 December 2025

Abstract

Interest in general AI is widespread, and much is expected from its large-scale adoption in the healthcare sector. However, the success of uHealth implementations relies on genuine trust, beyond technical performance. Neuroception of psychological safety (NPS), grounded in polyvagal theory, encompasses the human subconscious and automatic processes of safety and risk detection. We conducted a cross-sectional survey to explore a hypothesized connection between NPS and the perception of general AI in the uHealth context, using an anonymous online questionnaire comprising the following: the Neuroception of Psychological Safety Scale (NPSS), the four-item AI Attitude Scale (AIAS4), and questions on perceived AI threat, age, gender, and level of education. Multivariate analysis was performed using covariance-based structural equation modeling (SEM). We received 201 responses: 73 (36.3%) male and 128 (63.7%) female, all adults with varying levels of education (from 0 = basic formal education to 4 = master’s degree). Respondents belonged to four demographic cohorts, from Baby boomers to Generation Z. SEM results indicated that attitudes towards AI-driven health interventions are significantly influenced by social engagement and compassion (NPSS factors). Gender, education, and demographic cohort were confirmed as significant covariates. NPS-related attitudes towards AI should be considered and analyzed by healthcare providers, application developers, and policy or regulatory authorities.

1. Introduction

Artificial intelligence (AI) is defined as intelligence exhibited by machines, particularly computer systems, which enables them to perceive their environment and use learning and intelligence to take actions that maximize their chances of achieving defined goals [1,2]. Although narrow AI (i.e., AI limited to specific tasks) had been introduced, implemented, and discussed before, 2022 marked a major breakthrough with the advent of generative pre-trained transformers (GPT) and intelligent GPT-based chatbots (software applications or web interfaces designed to mimic human conversation): general AI (i.e., AI with human-level cognitive abilities) and large language models (LLMs) have entered public awareness and society at large [1]. Recent years have also seen growing interest in general AI, with high expectations for its adoption in the health sector, particularly for mobile health (mHealth) and connected or ubiquitous health (uHealth), whose potential to transform health services is widely recognized [3,4,5,6,7,8], including psychological and mental health support [7,9,10,11].
Beyond the enthusiasm generated by its exceptional performance and expected benefits, the success of uHealth implementations relies not only on formal acceptance but, above all, on genuine trust. Numerous concerns have been raised regarding the transparency of decisions, potential biases, ethical issues, accountability, and empathy or compassion in decision-making [6,11,12]. For many stakeholders (e.g., patients and health service providers), such concerns are not related to technical performance; quite often, people cannot fully explain or justify their reservations and fears [6,12,13,14,15,16]. Nevertheless, such reservations should be acknowledged and examined by healthcare providers, application developers, and policy or regulatory authorities.
Neuroscience and psychophysiology offer the context and scientific background for understanding the unconscious processes that influence human social and behavioral patterns, as well as emotional decisions. Neuroception describes the subconscious and automatic process by which the nervous system constantly scans the internal and external environments for cues of safety, danger, or life-threatening risk. This concept is essential to understanding the construct of well-being, as it plays an important role in recovery, rest, and social bonding [17]. Polyvagal theory elaborates on the classical understanding of autonomic nervous system function by showing that the parasympathetic system, via the vagus nerve, has two distinct pathways that evolved at different times. These pathways create a three-part hierarchy of response, explaining much of our behavior, particularly in situations of stress and trauma and during social interactions. The theory concludes that safety is a biological state rather than a cognitive concept [18,19]. Neuroception of psychological safety (NPS) is based on polyvagal theory and is increasingly recognized as a distinct construct within general emotional well-being [18,20]. The Neuroception of Psychological Safety Scale (NPSS) was developed in the early 2020s and has since been validated in various contexts and with different populations [20,21,22,23].
Faced with rapid societal change, human connections seem to be dwindling considerably, and attention is increasingly turning to social media and AI to fulfill the human need for harmony and synchronicity. The apparent contradiction between AI’s ability to provide this feeling through a cognitive approach and the biologically complex nature of human connections is sparking the curiosity of various stakeholders (e.g., application developers and policy and regulatory authorities), who are interested in investigating the subjective aspects of this multifaceted human–machine interaction.
Age and gender are widely acknowledged to influence attitudes towards AI and its societal pervasiveness [14,24]. More specifically, successive demographic cohorts are seen as social generations more digitally literate than their predecessors, and Generation Z (i.e., people born in the mid-to-late 1990s or later) have been dubbed “Zoomers” or “digital natives” (as opposed to the “digital immigrants” born before them) [6,25,26,27,28]. At the same time, it has been reported that, irrespective of demographic cohort, the relationship with conversational AI and trust in AI depend on gender and education level [6,9,11,14,24,28].
We hypothesized that individual differences in baseline neuroceptive subjective safety orientation would modulate responses to novel or ambiguous technological agents. That is to say, even if the safety of interaction with the AI itself cannot be measured, NPS scores would reflect individual attitudes towards AI and its perceived beneficial or harmful role in human society.
We conducted a cross-sectional survey in a convenience sample with the main objective of exploring this hypothetical connection between NPS and the perception of general AI within the context of smart health, i.e., the entire spectrum of technology-enabled healthcare. The study aimed to (a) analyze the contribution of NPS in connection with covariance factors, such as demographic cohort, gender, and level of education; and (b) investigate the NPS’s distinct facets (namely, social engagement, compassion, and body sensations) in regard to their contribution to AI perception and attitude.

2. Materials and Methods

This project is part of an awareness campaign for AI-driven health instruments: “The priority is YOU”. The data were collected through an anonymous cross-sectional survey announced at a public lecture on World Mental Health Day (10 October 2024). Convenience sampling was used: adult participants were invited to complete a web-based questionnaire on a voluntary basis and to further recruit other potential participants. The survey remained open for two months (until 9 December 2024).
The study was conducted in accordance with the Declaration of Helsinki and approved by the Ethics Committee of the “Pius Brinzeu” County Emergency Hospital (approval no. 477/01.08.2024).
A brief description of the project was provided in the introduction to the questionnaire, and informed consent was requested at the beginning: for each individual, actual data collection proceeded only after confirmation had been granted. No identifying data were collected, nor were any IP addresses tracked. No other restrictions were applied, based on the assumption that the adoption of digital health services would require a basic level of information literacy on the part of participants, so that the web-based questionnaire would not introduce selection bias into our investigation.

2.1. Tools for Data Collection: Scales and Variables

The neuroception of psychological safety scale (NPSS) was applied to collect the data regarding the individual neuroception of psychological safety of respondents. NPSS is a 29-item (each on a 5-point Likert scale) self-report instrument which measures three dimensions of psychological safety: social engagement (14 items), compassion (7 items), and body sensations (8 items) [20]. For each factor/dimension, higher scores indicate stronger feelings of safety. NPSS is grounded in polyvagal theory [18,19] and has been validated with various populations [21,22,23], namely the factor structure, measurement invariance, and test–retest and construct validity [23,29]. Although the polyvagal framework remains debated on mechanistic grounds [30,31,32,33], the NPSS has been psychometrically validated as a subjective measure of perceived safety, the level at which it is used in the present study.
For the application of the NPSS, participants were asked to consider a standardized scenario involving the use of AI-based connected health systems (namely, uHealth) in a healthcare setting. This scenario described digital health services in which AI tools support care delivery by analyzing health data and providing recommendations, alerts, or risk assessments to aid clinical decision-making or self-management. The AI system was presented as a decision support system rather than a standalone system, operating within existing care workflows and not replacing healthcare professionals. Participants were asked to answer the NPSS questions based on their anticipated experience of using, or being supported by, such AI-based uHealth systems, rather than on interpersonal interaction with a human agent. For analytical purposes and comparability, each item was rescaled to the [0, 1] range, and the arithmetic mean of each factor was further employed.
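As an illustration of this preprocessing step (the study itself used SPSS and R; the following Python sketch and its function names are ours, for illustration only), a 5-point Likert item can be rescaled to [0, 1] and the items of a factor averaged per respondent as follows:

```python
import numpy as np

def rescale_likert(responses, low=1, high=5):
    """Rescale raw Likert responses from [low, high] to the [0, 1] range."""
    responses = np.asarray(responses, dtype=float)
    return (responses - low) / (high - low)

def factor_score(item_matrix, low=1, high=5):
    """Arithmetic mean of the rescaled items per respondent (rows = respondents)."""
    return rescale_likert(item_matrix, low, high).mean(axis=1)

# e.g., three respondents answering a hypothetical 4-item factor on a 1-5 scale
items = np.array([[1, 3, 5, 3],
                  [5, 5, 4, 4],
                  [2, 2, 2, 2]])
scores = factor_score(items)  # -> array([0.5, 0.875, 0.25])
```

The same rescaling applies to all three NPSS factors, making their means directly comparable despite the different item counts.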
The attitude towards general AI was assessed through the four-item AI Attitude Scale (AIAS4) [2]. The AIAS4 items are measured on a 10-point proportional scale loading on a single factor; this scale has also been validated in various settings [24,28,34]. The arithmetic mean of the four items was used for data analysis. In addition to the AIAS4, a supplementary question asked respondents to rate their perceived threat of AI on a similar 10-point proportional scale. This response was treated as a distinct construct and a separate measure/variable; it had not undergone prior validation.
Age of the respondents was coded as demographic cohorts: Baby boomers (born up to 1964), Generation X (1965–1980), Generation Y or Millennials (1981–1996), and Generation Z (born in 1997 or later). In the analysis, the Baby boomers were considered the reference, and Generation Z was assigned the highest code.
Gender had two options, based on the self-declared legal status: male or female.
The level of education was quantified on five levels (from 0 = basic formal education to 4 = master’s degree, namely, at least five-year academic programs after high school).
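For clarity, the cohort coding described above can be expressed as a small function (a sketch of our own; the cutoffs follow the cohort definitions in the text, with Baby boomers as the reference code 0):

```python
def cohort_code(birth_year):
    """Code birth year into demographic cohort:
    0 = Baby boomers (born up to 1964, reference category),
    1 = Generation X (1965-1980),
    2 = Generation Y / Millennials (1981-1996),
    3 = Generation Z (born in 1997 or later, highest code)."""
    if birth_year <= 1964:
        return 0
    if birth_year <= 1980:
        return 1
    if birth_year <= 1996:
        return 2
    return 3
```

Coding the cohorts as ordered integers allows them to enter the structural model as a single ordinal covariate.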

2.2. Data Analysis

The reliability of the scales’ measurements was assessed based on Cronbach’s alpha, with values greater than 0.8 considered to indicate good internal consistency. The intraclass correlation coefficient (ICC) with its 95% confidence interval (CI) was also calculated as an index of the extent to which measurements can be replicated, with values higher than 0.5 considered to indicate moderate to good consistency.
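For reference (the actual analysis was run in IBM SPSS and R), Cronbach's alpha can be computed directly from a response matrix using the standard variance decomposition; this Python sketch is illustrative:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x k_items) response matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # per-item sample variances
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the sum score
    return k / (k - 1) * (1 - item_vars.sum() / total_var)
```

As a sanity check, perfectly parallel items (every respondent giving identical answers across items) yield alpha = 1, while uncorrelated items drive alpha towards 0.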
Exploratory descriptive statistical analysis was performed: observed frequency counts and percentages for categorical variables, and the sample mean and standard deviation (SD) for numerical variables, irrespective of their distribution. Univariate normality of numerical variables was tested with the Shapiro–Wilk test, and multivariate normality was tested with Mardia’s test [35]. All the numerical variables had non-normal distributions, mainly due to negative kurtosis values, with only slight skewness.
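The two normality checks can be sketched as follows (our own Python version, using scipy for Shapiro–Wilk and a direct implementation of Mardia's kurtosis statistic; the study itself used the R package "MVN"):

```python
import numpy as np
from scipy import stats

def mardia_kurtosis(X):
    """Mardia's multivariate kurtosis b2p and its asymptotic z-statistic.
    Under multivariate normality, E[b2p] = p*(p + 2) for an (n x p) matrix."""
    X = np.asarray(X, dtype=float)
    n, p = X.shape
    centered = X - X.mean(axis=0)
    S = np.cov(X, rowvar=False, bias=True)          # ML covariance estimate
    # squared Mahalanobis distance of each observation
    d2 = np.einsum('ij,jk,ik->i', centered, np.linalg.inv(S), centered)
    b2p = np.mean(d2 ** 2)
    z = (b2p - p * (p + 2)) / np.sqrt(8 * p * (p + 2) / n)
    return b2p, z

# univariate check: Shapiro-Wilk on one (simulated) numerical variable
w_stat, p_value = stats.shapiro(np.random.default_rng(1).normal(size=200))
```

A strongly negative z indicates platykurtic data, consistent with the negative kurtosis reported for this sample.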
Covariance-based structural equation modeling (SEM) was employed as a multivariate analysis to explore the hypothesized connection between NPS components and the perception of general AI in conjunction with observed covariance factors, such as gender, demographic cohorts, and level of education [36,37]. No latent variables were included in the analysis. SEM was used to estimate the strength and sign of the coefficients corresponding to the modeled structural connections, with no underlying prediction aim.
The goodness of fit indices for the SEM models and their respective [threshold values] were as follows: the model Chi-square test and resultant p-value [<0.05]; comparative fit index (CFI) [>0.90]; root mean square error of approximation (RMSEA) [<0.1]; and standardized root mean square residual (SRMR) [<0.08]. The SEM models were compared based on their fit statistics and the Akaike information criterion (AIC). To assess the statistical significance of changes in AIC values and select the simplest model, Vuong’s closeness test, based on likelihood ratio, was employed.
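These indices are reported by "lavaan" directly; as a reference for how they relate to the model chi-square, the following Python sketch (our own, using the standard formulas; n is the sample size and the baseline model is the independence model) may help:

```python
import numpy as np

def rmsea(chi2_m, df_m, n):
    """Root mean square error of approximation from the model chi-square."""
    return np.sqrt(max((chi2_m - df_m) / (df_m * (n - 1)), 0.0))

def cfi(chi2_m, df_m, chi2_b, df_b):
    """Comparative fit index relative to the baseline (independence) model."""
    d_m = max(chi2_m - df_m, 0.0)
    d_b = max(chi2_b - df_b, d_m, 0.0)
    return 1.0 - d_m / d_b if d_b > 0 else 1.0

def aic(log_likelihood, n_params):
    """Akaike information criterion: lower values indicate a better model."""
    return 2 * n_params - 2 * log_likelihood
```

A model with chi-square equal to its degrees of freedom yields RMSEA = 0 and CFI = 1, which is why the thresholds above (RMSEA < 0.1, CFI > 0.90) anchor acceptable fit near that ideal point.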
Sensitivity/robustness analysis was conducted considering two approaches: (a) separate univariate multiple linear regressions for the two dependent variables in SEM models with multicollinearity testing based on the variance inflation factor (VIF); and (b) alternative SEM models with gender as a grouping factor (namely, rather than introducing gender as a covariate in the SEM model).
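The VIF used in approach (a) can be computed by regressing each predictor on the remaining ones; this Python version is our illustrative sketch (the study used the R package "rms"):

```python
import numpy as np

def vif(X):
    """Variance inflation factor for each column of predictor matrix X:
    VIF_j = 1 / (1 - R2_j), with R2_j from regressing column j on the
    remaining columns (plus an intercept)."""
    X = np.asarray(X, dtype=float)
    n, k = X.shape
    out = []
    for j in range(k):
        y = X[:, j]
        others = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        beta, *_ = np.linalg.lstsq(others, y, rcond=None)
        resid = y - others @ beta
        r2 = 1 - (resid @ resid) / ((y - y.mean()) ** 2).sum()
        out.append(1.0 / (1.0 - r2))
    return np.array(out)
```

Mutually uncorrelated predictors give VIF values of 1; values below the common threshold of 5 (as the 1.08 to 2.06 reported in Section 3.4.1) indicate no problematic collinearity.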
The statistical analysis was conducted at the 95% level of confidence and 5% level of statistical significance, and all reported probability values were two-tailed. For the SEM fit indices, different thresholds were explicitly specified above.
Data analysis was performed with the statistical software IBM SPSS v. 20 and R v. 4.4.2 packages (including “lavaan” v. 0.6.19, “MVN” v. 5.9, “nonnest2” v. 0.5-8, “rms” v. 6.6-0, and “semPlot” v. 1.1.6).

2.3. Statistical Power and Sample Size

A priori analytical model-free power analysis was conducted [38]. The H0 model was compared against the saturated H1 model with the following requests: RMSEA as effect measure with 0.1 effect size; alpha = 0.05; power = 0.9; and 20 degrees of freedom (df).
The required sample size was N = 132 to detect an RMSEA of 0.1 (on 20 df), equivalent to a difference in the population minima of F0 = 0.2 and McDonald’s index of non-centrality Mc = 0.905.
Analysis of statistical power was conducted with R v. 4.4.2, package “semPower” v. 2.1.1.
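The reported figures are internally consistent: F0 = df × RMSEA² = 20 × 0.01 = 0.2, and Mc = exp(−F0/2) ≈ 0.905. The calculation can be re-derived approximately with the following Python sketch (our own; "semPower" was the tool actually used, and conventions for the noncentrality parameter vary slightly between implementations):

```python
import numpy as np
from scipy.stats import chi2, ncx2

def sem_power(n, df, rmsea, alpha=0.05):
    """Power of the RMSEA-based chi-square test of exact fit:
    noncentrality lambda = (n - 1) * F0, with F0 = df * rmsea**2."""
    f0 = df * rmsea ** 2
    ncp = (n - 1) * f0
    crit = chi2.ppf(1 - alpha, df)          # central chi-square critical value
    return 1 - ncx2.cdf(crit, df, ncp)      # tail of the noncentral chi-square

def required_n(df, rmsea, alpha=0.05, power=0.9):
    """Smallest N reaching the target power (simple linear search)."""
    n = df + 2
    while sem_power(n, df, rmsea, alpha) < power:
        n += 1
    return n

n_req = required_n(df=20, rmsea=0.1)    # close to the reported N = 132
mc = np.exp(-0.5 * 20 * 0.1 ** 2)       # McDonald's Mc, approx. 0.905
```

With 201 collected responses, the achieved sample comfortably exceeded this a priori requirement.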

3. Results

3.1. General Characteristics of Respondents

A total of 201 answers were collected; almost half of them, 93/201 (46%), were received during the first four days after the public announcement. Almost two thirds of the respondents were female; the gender proportions were as follows: 73 (36.3%) male and 128 (63.7%) female. Table 1 and Table 2 detail the respondents’ level of education and age groups (demographic cohorts), respectively.
The reliability and reproducibility of the scales’ measurements are presented in Table 3. The combined three-factor NPSS resulted in a rather low ICC in this dataset; therefore, the three factors were considered separately in the multivariate SEM analysis, and no subsequent analysis was conducted on the combined NPSS, apart from descriptive statistics.
The overall descriptive statistics for all scale measurements are presented in Table 4.

3.2. Respondents’ Characteristics Across Demographic Cohorts

Table 5 synthesizes the descriptive statistics across the demographic groups. The imbalance in gender and level of education is evident.
To illustrate the variability and gender-split differences across the demographic cohorts, Figure 1, Figure 2 and Figure 3 present box-plots for each characteristic. The NPSS scores appear rather uniform across the demographic cohorts, with lower levels among the males. The very large variability in the perception of AI as a threat is apparent, illustrated by the correspondingly large boxes for this variable. The imbalance in education is also apparent in Figure 1.

3.3. Connection Between NPS and Perception of General AI

Table 6 presents the model and numerical results of covariance-based SEM applied to explore the hypothesized connection between NPSS components/factors and the perception of general AI in conjunction with observed covariance factors. No latent variables were included in the model.
The regression results for AIAS4 are of particular importance. The especially large value of the regression coefficient of the NPSS score for social engagement in relation to AIAS4 should be noted, together with its corresponding standard error, which led to a large z-score and high statistical significance. Although with smaller estimates (but small standard errors as well), the relations between AIAS4 and covariates such as demographic cohort, level of education, and gender are highly significant, too. In this dataset, the NPSS score for body sensations is not related to AIAS4, but the coefficient for compassion is negative (i.e., individuals with higher compassion scores are less open towards AI) and approaches statistical significance.
One should also note the non-significant relations between the above factors and the perception of AI as a threat, although their signs/directions would suggest that females and less-educated individuals from older generations are more prone to see AI as a threat.
Nevertheless, in this dataset, a statistically significant negative covariance is observed between AIAS4 (which quantifies the perceived benefits and willingness/readiness to use AI tools) and perception of AI as a threat, with a z-score of −2.15 (Table 6). This significant relationship would suggest a partial interdependence between baseline safety orientation and explicit threat judgements. It should also be noted that, as a single item construct, perceived threat has a higher variance than the AIAS4 (as presented in Table 4 and Table 5).
Although with different relationships towards the AI attitude, the three components of NPSS were observed as being significantly correlated with each other. Moreover, for this dataset, education was significantly related to the demographic cohort (i.e., younger generations were more educated in this dataset).
Figure 4 illustrates the path diagram of this model, explicitly showing the estimates for the regression coefficients and covariates, allowing a comprehensive view over this multivariate model of inter-connections between NPS, AI attitude, AI perceived threat, gender, demographic cohorts, and level of education. The large value of the regression coefficient of NPSS score for social engagement in relation to AIAS4 is also apparent in this diagram. An additional aspect to note is the high variance in the score for AI perception as threat (almost double compared to the AIAS4 on the same proportional scale).

3.4. Sensitivity Analysis

3.4.1. Univariate Multiple Linear Regression Analysis

Univariate multiple linear regression confirmed the regression results for AIAS4, namely the strength and significance of NPSS scores for social engagement and compassion, and significant contributions of covariates such as demographic cohort, level of education, and gender. In addition, the regression results for the outcome of AI as a threat confirmed the non-significant results of the covariance-based SEM analysis.
The VIF values ranged from 1.08 to 2.06, indicating no problematic collinearity among the independent variables in the regression models.
The R code for this analysis and the results in full are provided as Supplementary Materials.

3.4.2. Alternative SEM Analysis with Gender as a Grouping Factor

We conducted a separate covariance-based SEM analysis applied to separate gender groups (maintaining the same structure in both groups), rather than including gender in the model itself. Table 7 presents the model and the fitted results, and details the parameter estimates of this gender-split analysis. Although some of the fit indices are slightly better in this approach (such as CFI, RMSEA, and SRMR; Table 7), the Chi-square test of fit and Vuong’s test showed that introducing gender into the model (namely, the previous model with gender included in the regression equations) provided a significantly better fit to the actual data (likelihood ratio LR = 66.417, p = 0.008).
On the other hand, this gender-split model would suggest distinct profile patterns for the two genders in their relation to AI-driven tools: AIAS4 would significantly associate with NPSS compassion with a negative regression coefficient in females (more open towards showing compassion, less open towards AI tools), in contrast with NPSS social engagement with a positive coefficient in males (more open towards social engagement, more open towards AI tools), and a non-significant relationship with demographic cohort in males (Table 7).
The variables of demographic cohort and education retained their influence (i.e., the regression coefficient estimates) on AI perception (Table 7 vs. Table 6). Their covariance was found as significant in both groups, with a higher z-score for females. The higher covariance estimate in females would imply a more pronounced imbalance in education in the four demographic cohorts among females compared to males (Table 7). This imbalance could influence the corresponding demographic and education regression coefficients for the AI attitude and AI perceived threat.
The score for AI perception as a threat maintained its high variance and non-significant associations with the NPSS scores and level of education, but proved to be significantly and negatively related to the demographic cohort in males (i.e., younger males seem to be significantly less worried about the potential threats of AI, compared to older generations), in contrast with the non-significant relationship in females (Table 7). The negative covariance between AIAS4 and AI perception as a threat was stronger among females (z-score = −3.36; Table 7) compared to the whole sample (z-score = −2.15; Table 6); it was positive and statistically non-significant in the male group (Table 7).
However, as this sample was imbalanced with regard to gender and education across the demographic cohorts, and the number of males was smaller than the a priori required sample size, such a gender-based distinction between the contributions of NPS to AI attitude might prove spurious in future investigations.

4. Discussion

We conducted a cross-sectional survey to explore a hypothesized connection between NPS and the perception of general AI in the context of AI-driven uHealth applications, by using an anonymous online questionnaire. The survey comprised the NPSS and AIAS4 scales to collect data on safety neuroception and attitude towards AI, respectively. It also included questions on covariation factors, such as age, gender, and level of education. The 201 answers in our convenience sample covered all education levels and demographic cohorts in the adult population, although the educational levels of the cohorts were uneven (with younger cohorts being more educated). The analysis employed a multivariate SEM model.
Psychological safety typically relies on cues drawn from immediate reality [18,19,20]. Therefore, one would expect perceptions of safety to differ in a virtual environment and in human–machine interactions compared to face-to-face encounters, with safety perceived primarily at a cognitive level: as opposed to traditional interaction, AI leverages anonymity, asynchronicity, control over privacy and data protection measures, and the lasting character of digital content [39,40]. However, it seems that the neurobiological perception of safety remains very important in human–machine interaction as well. Neuroception concerns the automatic evaluation of safety versus threat in ambiguous contexts, not only interpersonal ones.
NPSS was not designed for interaction with AI, and it is essential not to equate AI with a social agent. However, our results indicate that attitudes towards AI are impacted by individuals’ NPS scores, particularly by the dimensions of social engagement (trust, feeling accepted and understood, and the ability to express oneself without fear of judgement) and compassion (feeling connected and empathetic). It remains important not to frame the compassion scores as empathy towards AI.
Higher scores of social engagement reflect a greater likelihood of openness and positive attitudes towards AI-based applications. Such an anticipated positive relationship with artificial entities can also open a door to increasing human interaction with the help of AI stimulation, and it has been used in mental health interventions, although caveats regarding addiction circuits in the brain and worsened loneliness in the long term are discussed in the literature [7,8,9,10,11]. It is worth noting that social engagement reflects a general willingness to interact with external agents, including technology-mediated systems, rather than a belief that AI is a social partner.
Moreover, gender might be a moderator in this relationship. In our results regarding the group of women, the compassion dimension of NPSS was negatively associated with openness towards AI; this finding makes sense because compassion is a bidirectional emotion and, in a human–machine interaction, satisfaction may be perceived as being minimal [17,41].
For AI-based applications to be relied upon and adopted, the expected benefits must outweigh the anticipated risks [1,40,42]. Neuroception-related factors are associated with general attitudinal orientation as opposed to explicit judgments about threat; perceived threat may reflect a critical appraisal. The significant negative covariance between AIAS4 scores and the single item measuring perceived AI threat in our results confirms that attitude towards AI and perceived threat are related but distinct and non-redundant constructs: AIAS4 scores assess a general evaluative position, while the explicit threat perception is influenced by higher-order cognitive, cultural, and informational factors (e.g., media narratives, political discourse), which can override the baseline neuroceptive safety orientation. This distinction may be relevant for understanding the acceptance of AI in uHealth.
Our results showed that body sensations were associated with neither openness towards AI nor the perception of AI as a threat, suggesting a predominantly cognitive facet of anticipated interaction with AI-driven applications. This finding (i.e., the lack of influence of body sensations) might also relate to the effect of personality traits, embedded in the social engagement and compassion scores, which prevail over the body sensation scores. Previous studies have shown that engagement with AI-assisted health interventions is strongly influenced by personality traits [6,12,13,14,15,16,28].
Attitudes towards AI, while affected by the NPS scores in our investigation, were also significantly influenced by covariates such as gender, education level, and age, quantified by individuals’ belonging to one of four demographic cohorts (from Baby boomers to Generation Z). These results confirm not only our initial hypotheses, but also previous findings published in the literature [6,14,24,25,26,27].
Gender-split analysis revealed a substantive shift in the impact of social engagement on AIAS4 scores: its effect increased among men and decreased among women, favoring compassion at the expense of a positive attitude towards AI. This gender difference has been explored before: women tend to assess danger, whereas men are more inclined to take risks [43]. The stronger negative correlation between AIAS4 scores and the perception of AI as a potential threat in the gender-split analysis (compared to the whole-sample analysis) also implies a stronger emotional connection among women.
Moreover, our results confirmed earlier findings that less educated individuals in older generations are less positive towards AI [6,14]. There are acknowledged differences in online behavior between digital natives (Generation Z, accustomed to increasingly sophisticated technology) compared to Generation Y (with a hybrid behavior between traditional and digital) or the digital immigrants such as Generation X and Baby boomers (who are rather impulsive and radical) [25,26,27,28].
The effect of education level on the AI attitude, together with the cognitive facets of attitudes towards AI-based health interventions (suggested by the lack of connection with body sensations), would bring forward the importance of eHealth literacy as a key factor influencing AI acceptance in healthcare. Digital literacy in general has been reported to be a powerful predictor of attitudes towards AI-driven applications, transcending other limiting factors such as gender, age, general level of education, or socioeconomic status [3,6,10,12,13,14,15,24,25,27]. In this context, the observed effect of education level should be interpreted in light of the sample’s generational composition. The relatively high proportion of young participants (namely, Generations Y and Z) with advanced university degrees likely reflects educational pathways specific to these cohorts and their greater familiarity with digital technologies, rather than formal education acting as an isolated determinant of attitudes towards AI. Consequently, the association between education and AI acceptance observed in this study could reflect, at least in part, unevenly distributed differences in functional digital health skills and digital literacy across generations. This demographic configuration suggests that the role of education in shaping attitudes towards AI-based health interventions depends on broader socio-technical contexts and warrants caution when extrapolating the magnitude of this effect to populations where formal education is less closely linked to digital skills or familiarity with AI.
Even though the neuroception of safety in the virtual space does not operate in isolation, it strongly impacts the users’ value systems, patterns in behavior, interaction with AI-driven applications, and subsequent adoption of uHealth applications.
We also acknowledge the relevance and importance of risk analysis related to the deployment of AI-based uHealth. However, our investigation did not aim at threat detection or risk analysis; our results are limited to user perception and acceptance. Technical risk detection and human perceived safety or security remain two distinct, complementary layers: acceptance and trust are influenced by factors not reducible to detected risks alone. The NPSS-informed findings highlight the need to address perceived safety alongside technical safety.

Limitations

Firstly, this study employed convenience sampling and collected anonymous self-reported data, without objective confirmation of the demographic and education information. Primarily owing to the recruitment procedure, the sample was unbalanced with respect to gender and age relative to the reported level of education. Although mild, the non-normality of the model variables compounds this imbalance. These shortcomings may further combine with common method bias, leading to spurious associations on the one hand and underestimated effect sizes on the other. Moreover, the temporal distribution of responses was highly skewed (nearly half were received in the first four days after the public announcement of the questionnaire). To mitigate these anticipated effects, the actual sample size exceeded the a priori required value, and the survey questionnaire was designed to limit shared variance and to control for method bias by alternating the scale and factual questions.
Secondly, this cross-sectional survey with convenience sampling entails limited external validity. This limitation is compounded by the novelty of the scales employed and the use of the NPSS in a new context. More specifically, while the NPSS was designed to capture the biologically grounded dimensions of perceived safety in interpersonal contexts, this study does not establish a mechanism by which NPSS dimensions such as social engagement or compassion translate into attitudes towards a non-human agent. The observed associations should therefore not be interpreted as evidence that these concepts operate towards AI in an interpersonal or social sense, but rather as an indication that individual differences in baseline safety orientation can modulate evaluative responses to AI-based health technologies. Consequently, the application of the NPSS in this context is exploratory and does not constitute validation of the scale for human–AI interaction as such. In addition, the polyvagal theory itself is not unanimously accepted [18,30,31,32,33]. On the other hand, our study used covariance-based SEM rather than predictive analysis, and the observed strength of the coefficients for the NPS–AI attitude relationship supports a robust connection worth exploring further.
Thirdly, when the SEM analysis was conducted on gender-split groups, the gender imbalance in the sample left the male group slightly underpowered, which implies some caveats regarding the estimated contribution of demographic cohort to AI attitude among male respondents. However, the sensitivity analysis largely confirmed the robustness of the NPS–AI attitude relationship identified in the combined-gender analysis.
Despite these limitations, the findings of this survey add to the existing body of evidence regarding the factors that contribute to the adoption of eHealth technology, thus providing a solid background for future investigations on this topic. Questions to be explored include the extent to which personal psychological traits and digital literacy levels mediate or moderate the impact of NPS on attitudes, and whether personal experiences in the physical and virtual worlds combine in the NPS–AI attitude relationship.

5. Conclusions

In the context of ubiquitous healthcare solutions, neuroception-related dimensions appear to shape general attitudes towards AI, whereas the explicit perception of AI as a threat may be driven by distinct, more reflective processes. The significant covariance between these measures indicates conceptual relatedness without redundancy. Gender, demographic cohort, and level of education are covariates that significantly impact general attitudes rather than the explicit perception of threat.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/mti10010004/s1, PDF file: R code and results of univariate multiple regression analysis.

Author Contributions

Conceptualization, A.-L.P. and S.C.T.; methodology, A.-L.P., S.C.T. and D.L.; software, A.-L.P. and D.L.; validation, A.-L.P., S.C.T. and C.C.V.; formal analysis, R.H. and D.L.; investigation, A.-L.P., S.C.T. and C.C.V.; resources, A.-L.P., S.C.T. and D.L.; data curation, A.-L.P. and D.L.; writing—original draft preparation, A.-L.P., S.C.T., R.H. and D.L.; writing—review and editing, A.-L.P., S.C.T., C.C.V., R.H. and D.L.; project administration, S.C.T. and D.L. All authors have read and agreed to the published version of the manuscript.

Funding

The article processing costs were funded by “Victor Babes” University of Medicine and Pharmacy, Timisoara, Romania.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki, and approved by the Ethics Committee of “Pius Brinzeu” Emergency County Hospital, Timisoara, Romania (No. 477/01.08.2024).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The dataset is available from the first author, upon request.

Acknowledgments

We gratefully acknowledge the contribution of the survey’s anonymous respondents, who graciously agreed to share their opinions and feelings with us.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
AI: Artificial Intelligence
AIAS4: Four-item AI Attitude Scale (AIAS-4)
95% CI: 95% Confidence Interval
GPT: Generative Pre-trained Transformer
ICC: Intraclass Correlation Coefficient
LLM: Large Language Model
NPS: Neuroception of Psychological Safety
NPSS: Neuroception of Psychological Safety Scale
SD: Standard Deviation
SEM: Structural Equation Modeling

References

  1. Maslej, N.; Fattorini, L.; Perrault, R.; Gil, Y.; Parli, V.; Kariuki, N.; Capstick, E.; Reuel, A.; Brynjolfsson, E.; Etchemendy, J.; et al. The AI Index 2025 Annual Report; Stanford HAI: Stanford, CA, USA, 2025. [Google Scholar]
  2. Grassini, S. Development and Validation of the AI Attitude Scale (AIAS-4): A Brief Measure of General Attitude toward Artificial Intelligence. Front. Psychol. 2023, 14, 1191628. [Google Scholar] [CrossRef]
  3. Bhatt, P.; Liu, J.; Gong, Y.; Wang, J.; Guo, Y. Emerging Artificial Intelligence–Empowered MHealth: Scoping Review. JMIR Mhealth Uhealth 2022, 10, e35053. [Google Scholar] [CrossRef]
  4. Motwani, A.; Shukla, P.K.; Pawar, M. Ubiquitous and Smart Healthcare Monitoring Frameworks Based on Machine Learning: A Comprehensive Review. Artif. Intell. Med. 2022, 134, 102431. [Google Scholar] [CrossRef]
  5. Smits Serena, R.; Hinterwimmer, F.; Burgkart, R.; von Eisenhart-Rothe, R.; Rueckert, D. The Use of Artificial Intelligence and Wearable Inertial Measurement Units in Medicine: Systematic Review. JMIR Mhealth Uhealth 2025, 13, e60521. [Google Scholar] [CrossRef]
  6. Tan, J.Y.; Choo, J.S.H.; Iyer, S.C.; Lim, B.S.Y.; Tan, J.J.-R.; Ng, J.M.Y.; Lian, T.T.Y.; Hilal, S. A Cross Sectional Study of Role of Technology in Health for Middle-Aged and Older Adults in Singapore. Sci. Rep. 2024, 14, 18645. [Google Scholar] [CrossRef]
  7. Woll, S.; Birkenmaier, D.; Biri, G.; Nissen, R.; Lutz, L.; Schroth, M.; Ebner-Priemer, U.W.; Giurgiu, M. Applying AI in the Context of the Association Between Device-Based Assessment of Physical Activity and Mental Health: Systematic Review. JMIR Mhealth Uhealth 2025, 13, e59660. [Google Scholar] [CrossRef] [PubMed]
  8. Xiao, C.; Zhao, Y.; Li, G.; Zhang, Z.; Liu, S.; Fan, W.; Hu, J.; Yao, Q.; Yang, C.; Zou, J.; et al. Clinical Efficacy of Multimodal Exercise Telerehabilitation Based on AI for Chronic Nonspecific Low Back Pain: Randomized Controlled Trial. JMIR Mhealth Uhealth 2025, 13, e56176. [Google Scholar] [CrossRef] [PubMed]
  9. Sedlakova, J.; Trachsel, M. Conversational Artificial Intelligence in Psychotherapy: A New Therapeutic Tool or Agent? Am. J. Bioeth. 2023, 23, 4–13. [Google Scholar] [CrossRef]
  10. Poudel, U.; Jakhar, S.; Mohan, P.; Nepal, A. AI in Mental Health: A Review of Technological Advancements and Ethical Issues in Psychiatry. Issues Ment. Health Nurs. 2025, 46, 693–701. [Google Scholar] [CrossRef] [PubMed]
  11. Boucher, E.M.; Harake, N.R.; Ward, H.E.; Stoeckl, S.E.; Vargas, J.; Minkel, J.; Parks, A.C.; Zilca, R. Artificially Intelligent Chatbots in Digital Mental Health Interventions: A Review. Expert Rev. Med. Devices 2021, 18, 37–49. [Google Scholar] [CrossRef]
  12. Ryan, K.; Hogg, J.; Kasun, M.; Kim, J.P. Users’ Perceptions and Trust in AI in Direct-to-Consumer MHealth: Qualitative Interview Study. JMIR Mhealth Uhealth 2025, 13, e64715. [Google Scholar] [CrossRef]
  13. Yang, Y.; Ngai, E.W.T.; Wang, L. Resistance to Artificial Intelligence in Health Care: Literature Review, Conceptual Framework, and Research Agenda. Inf. Manag. 2024, 61, 103961. [Google Scholar] [CrossRef]
  14. Wilson, J.; Heinsch, M.; Betts, D.; Booth, D.; Kay-Lambkin, F. Barriers and Facilitators to the Use of E-Health by Older Adults: A Scoping Review. BMC Public Health 2021, 21, 1556. [Google Scholar] [CrossRef]
  15. Ng, S.W.T.; Zhang, R. Trust in AI Chatbots: A Systematic Review. Telemat. Inform. 2025, 97, 102240. [Google Scholar] [CrossRef]
  16. Ardon, O.; Schmidt, R.L. Clinical Laboratory Employees’ Attitudes Toward Artificial Intelligence. Lab. Med. 2020, 51, 649–654. [Google Scholar] [CrossRef]
  17. Goetz, J.L.; Keltner, D.; Simon-Thomas, E. Compassion: An Evolutionary Analysis and Empirical Review. Psychol. Bull. 2010, 136, 351–374. [Google Scholar] [CrossRef]
  18. Porges, S.W. Polyvagal Theory: A Journey from Physiological Observation to Neural Innervation and Clinical Insight. Front. Behav. Neurosci. 2025, 19, 1659083. [Google Scholar] [CrossRef]
  19. Porges, S.W. Polyvagal Theory: Current Status, Clinical Applications, and Future Directions. Clin. Neuropsychiatry 2025, 22, 169–184. [Google Scholar] [CrossRef]
  20. Morton, L.; Cogan, N.; Kolacz, J.; Calderwood, C.; Nikolic, M.; Bacon, T.; Pathe, E.; Williams, D.; Porges, S.W. A New Measure of Feeling Safe: Developing Psychometric Properties of the Neuroception of Psychological Safety Scale (NPSS). Psychol. Trauma 2024, 16, 701–708. [Google Scholar] [CrossRef]
  21. Cogan, N.; Campbell, J.; Morton, L.; Young, D.; Porges, S. Validation of the Neuroception of Psychological Safety Scale (NPSS) Among Health and Social Care Workers in the UK. Int. J. Environ. Res. Public Health 2024, 21, 1551. [Google Scholar] [CrossRef]
  22. Cogan, N.; Morton, L.; Campbell, J.; Irvine Fitzpatrick, L.; Lamb, D.; De Kock, J.; Ali, A.; Young, D.; Porges, S. Neuroception of Psychological Safety Scale (NPSS): Validation with a UK Based Adult Community Sample. Eur. J. Psychotraumatol. 2025, 16, 2490329. [Google Scholar] [CrossRef]
  23. Spinoni, M.; Zagaria, A.; Pecchinenda, A.; Grano, C. Factor Structure, Construct Validity, and Measurement Invariance of the Neuroception of Psychological Safety Scale (NPSS). Eur. J. Investig. Health Psychol. Educ. 2024, 14, 2702–2715. [Google Scholar] [CrossRef]
  24. Møgelvang, A.; Bjelland, C.; Grassini, S.; Ludvigsen, K. Gender Differences in the Use of Generative Artificial Intelligence Chatbots in Higher Education: Characteristics and Consequences. Educ. Sci. 2024, 14, 1363. [Google Scholar] [CrossRef]
  25. Chan, C.K.Y.; Lee, K.K.W. The AI Generation Gap: Are Gen Z Students More Interested in Adopting Generative AI Such as ChatGPT in Teaching and Learning than Their Gen X and Millennial Generation Teachers? Smart Learn. Environ. 2023, 10, 60. [Google Scholar] [CrossRef]
  26. Rudolph, C.W.; Zacher, H. Considering Generations From a Lifespan Developmental Perspective. Work Aging Retire. 2017, 3, waw019. [Google Scholar] [CrossRef]
  27. Vera-Toscano, E.; Meroni, E.C. An Age–Period–Cohort Approach to Disentangling Generational Differences in Family Values and Religious Beliefs: Understanding the Modern Australian Family Today. Demogr. Res. 2021, 45, 653–692. [Google Scholar] [CrossRef]
  28. Grassini, S.; Thorp, S.; Sævild Ree, A.; Sevic, A.; Cipriani, E. Attitudes Toward Technology and Artificial Intelligence: The Role of Demographic and Personality Factors. In Proceedings of the 36th Annual Conference of the European Association of Cognitive Ergonomics, Tallinn, Estonia, 7–10 October 2025; ACM: New York, NY, USA, 2025; pp. 1–5. [Google Scholar]
  29. Poli, A.; Miccoli, M. Validation of the Italian Version of the Neuroception of Psychological Safety Scale (NPSS). Heliyon 2024, 10, e27625. [Google Scholar] [CrossRef] [PubMed]
  30. Grossman, P.; Taylor, E.W. Toward Understanding Respiratory Sinus Arrhythmia: Relations to Cardiac Vagal Tone, Evolution and Biobehavioral Functions. Biol. Psychol. 2007, 74, 263–285. [Google Scholar] [CrossRef] [PubMed]
  31. Taylor, E.W.; Wang, T.; Leite, C.A.C. An Overview of the Phylogeny of Cardiorespiratory Control in Vertebrates with Some Reflections on the ‘Polyvagal Theory’. Biol. Psychol. 2022, 172, 108382. [Google Scholar] [CrossRef] [PubMed]
  32. Balzarotti, S.; Biassoni, F.; Colombo, B.; Ciceri, M.R. Cardiac Vagal Control as a Marker of Emotion Regulation in Healthy Adults: A Review. Biol. Psychol. 2017, 130, 54–66. [Google Scholar] [CrossRef]
  33. Grossman, P. Fundamental Challenges and Likely Refutations of the Five Basic Premises of the Polyvagal Theory. Biol. Psychol. 2023, 180, 108589. [Google Scholar] [CrossRef]
  34. Talik, W.; Talik, E.B.; Grassini, S. Measurement Invariance of the Artificial Intelligence Attitude Scale (AIAS-4): Cross-Cultural Studies in Poland, the USA, and the UK. Curr. Psychol. 2025, 44, 15758–15766. [Google Scholar] [CrossRef]
  35. Korkmaz, S.; Goksuluk, D.; Zararsiz, G. MVN: An R Package for Assessing Multivariate Normality. R J. 2014, 6, 151. [Google Scholar] [CrossRef]
  36. Rosseel, Y. Lavaan: An R Package for Structural Equation Modeling. J. Stat. Softw. 2012, 48, 1–36. [Google Scholar] [CrossRef]
  37. Kline, R.B. Principles and Practice of Structural Equation Modeling, 4th ed.; Kenny, D.A., Little, T.D., Eds.; The Guilford Press: New York, NY, USA, 2016; ISBN 978-1-4625-2334-4. [Google Scholar]
  38. Moshagen, M.; Bader, M. SemPower: General Power Analysis for Structural Equation Models. Behav. Res. Methods 2023, 56, 2901–2922. [Google Scholar] [CrossRef]
  39. McLeod, E.; Gupta, S. The Role of Psychological Safety in Enhancing Medical Students’ Engagement in Online Synchronous Learning. Med. Sci. Educ. 2023, 33, 423–430. [Google Scholar] [CrossRef]
  40. Shrestha, A.K.; Barthwal, A.; Campbell, M.; Shouli, A.; Syed, S.; Joshi, S.; Vassileva, J. Navigating AI to Unpack Youth Privacy Concerns: An In-Depth Exploration and Systematic Review. In Proceedings of the 2024 IEEE 15th Annual Information Technology, Electronics and Mobile Communication Conference (IEMCON), Berkeley, CA, USA, 24–26 October 2024; IEEE: New York, NY, USA; pp. 116–123. [Google Scholar]
  41. Holtzman, I.; Nimrod, G. Forgiveness in Human-Machine Interaction. Front. Comput. Sci. 2025, 7, 1617471. [Google Scholar] [CrossRef]
  42. Lee, C.-H.; Wang, Z.; Wang, D.; Lyu, S.; Chen, C.-H. Artificial-Intelligence-Driven Governance: Addressing Emerging Risks with a Comprehensive Risk-Prevention-Centred Model for Public Health Crisis Management. Health Res. Policy Syst. 2025, 23, 115. [Google Scholar] [CrossRef]
  43. Byrnes, J.P.; Miller, D.C.; Schafer, W.D. Gender Differences in Risk Taking: A Meta-Analysis. Psychol. Bull. 1999, 125, 367–383. [Google Scholar] [CrossRef]
Figure 1. Box-plots illustrating the level of education in the four-demographic and two-gender groups. The boxes are proportional to the inter-quartile range (IQR) with medians marked in-between, and the whiskers are proportional to 1.5 × IQR (or trimmed to the minimum or maximum values). The bullets are outliers.
Figure 2. Box-plots illustrating the level of neuroception of psychological safety (NPS) in the four-demographic and two-gender groups. The scores for the overall 19-item NPSS and for each of the three factors (i.e., social engagement, compassion, and body sensations) are depicted. The boxes are proportional to the inter-quartile range (IQR) with medians marked in-between, and the whiskers are proportional to 1.5 × IQR (or trimmed to the minimum or maximum values). The bullets are outliers.
Figure 3. Box-plots illustrating attitudes towards general AI (the higher the score, the more favorable the opinion) and the perception of AI as a threat (the higher the score, the greater the perceived threat of AI). The boxes are proportional to the inter-quartile range (IQR) with medians marked in-between, and the whiskers are proportional to 1.5 × IQR (or trimmed to the minimum or maximum values). The bullet is an outlier.
Figure 4. Path diagram of the covariance-based SEM model fitted on the actual data of 201 responses. NPSS factors are scaled on the interval [0, 1]. AIAS4 and AI perceived threat are measured on a 10-point proportional scale. There are four demographic cohorts and five levels of education. Single-variable bidirectional arrows represent variance values (not included in Table 6). Bidirectional arrows between variables show covariance values (presented in Table 6). Unidirectional arrows indicate regression coefficients (presented in Table 6). Abbreviations: AIAS4, four-item AI attitude scale; AI per threat, AI perceived threat; NPSS, neuroception of psychological safety scale.
Table 1. Respondents’ level of education.
Level of Education | N = 201 Respondents in Total
0 (basic formal education) | 24 (11.9%)
1 | 42 (20.9%)
2 | 54 (26.9%)
3 | 55 (27.4%)
4 (master’s degree) | 26 (12.9%)
Table 2. Respondents’ age groups, described as demographic cohorts based on their expected familiarity with digital instruments.
Demographic Cohort | N = 201 Respondents in Total
Baby boomers | 68 (33.8%)
Generation X | 56 (27.9%)
Generation Y (Millennials) | 56 (27.9%)
Generation Z | 21 (10.4%)
Table 3. Reliability of scales’ measurements.
Scale (N = 201 Answers) | No. of Items | Cronbach’s Alpha | ICC (95% CI)
NPSS | 29 | 0.956 | 0.428 (0.380–0.482)
NPSS social engagement | 14 | 0.946 | 0.557 (0.506–0.612)
NPSS compassion | 7 | 0.945 | 0.712 (0.665–0.757)
NPSS body sensations | 8 | 0.930 | 0.623 (0.571–0.676)
AIAS4 | 4 | 0.888 | 0.666 (0.608–0.721)
Abbreviations: AIAS4, four-item AI attitude scale; CI, confidence interval; ICC, intraclass correlation; NPSS, neuroception of psychological safety scale.
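The Cronbach’s alpha values above summarize internal consistency per scale. The coefficient can be reproduced from item-level responses with the standard formula; the following is a minimal Python sketch with synthetic data (not the authors’ R code; the function name and example data are illustrative):

```python
from statistics import variance

def cronbach_alpha(items):
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of totals).

    items: one inner list of k item scores per respondent.
    """
    k = len(items[0])
    # Variance of each item (column) across respondents, then of the total scores.
    item_vars = sum(variance(col) for col in zip(*items))
    total_var = variance([sum(row) for row in items])
    return k / (k - 1) * (1 - item_vars / total_var)

# Perfectly consistent items (each respondent answers identically) yield alpha ~ 1.
print(cronbach_alpha([[1, 1, 1], [2, 2, 2], [3, 3, 3], [5, 5, 5]]))
```

Note that the ICC values in the table come from a separate variance-decomposition model and are not captured by this formula.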
Table 4. Descriptive statistics for the scale measurements.
Scale (a) | N = 201 Respondents in Total
NPSS average; rescaled [0–1] | 0.80 ± 0.15
NPSS social engagement average; rescaled [0–1] | 0.80 ± 0.17
NPSS compassion average; rescaled [0–1] | 0.87 ± 0.16
NPSS body sensations average; rescaled [0–1] | 0.77 ± 0.20
AIAS4 average; scale [1–10] | 5.11 ± 2.59
AI perceived threat; scale [1–10] | 5.13 ± 3.17
(a) mean ± SD. Abbreviations: AIAS4, four-item AI attitude scale; NPSS, neuroception of psychological safety scale; SD, standard deviation.
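The “rescaled [0–1]” entries above map mean Likert responses onto the unit interval. A sketch of that linear mapping, assuming 5-point items scored 1–5 (the item range is our assumption for illustration, not stated in this excerpt):

```python
def rescale_unit(mean_score, lo=1.0, hi=5.0):
    # Linear map from the raw item range [lo, hi] onto [0, 1];
    # lo/hi default to an assumed 5-point Likert scoring.
    return (mean_score - lo) / (hi - lo)

print(rescale_unit(3))  # the scale midpoint of 3 maps to 0.5
```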
Table 5. Descriptive statistics across the demographic groups.
N = 201 respondents in total, by demographic cohort.
Variable | Baby Boomers (N = 68) | Gen X (N = 56) | Gen Y (N = 56) | Gen Z (N = 21)
Gender (a)
  F | 43 (63.2%) | 32 (57.1%) | 36 (64.3%) | 17 (81%)
  M | 25 (36.8%) | 24 (42.9%) | 20 (35.7%) | 4 (19%)
Level of education (a)
  0 (basic formal education) | 12 (17.6%) | 6 (10.7%) | 6 (10.7%) | 0
  1 | 22 (32.4%) | 16 (28.6%) | 3 (5.4%) | 1 (4.8%)
  2 | 20 (29.4%) | 19 (33.9%) | 15 (26.8%) | 0
  3 | 11 (16.2%) | 9 (16.1%) | 20 (35.7%) | 15 (71.4%)
  4 (master’s degree) | 3 (4.4%) | 6 (10.7%) | 12 (21.4%) | 5 (23.8%)
NPSS average (b) | 0.81 ± 0.12 | 0.80 ± 0.16 | 0.82 ± 0.17 | 0.80 ± 0.14
NPSS social engagement average (b) | 0.80 ± 0.15 | 0.78 ± 0.18 | 0.81 ± 0.18 | 0.79 ± 0.16
NPSS compassion average (b) | 0.86 ± 0.15 | 0.84 ± 0.18 | 0.90 ± 0.15 | 0.87 ± 0.18
NPSS body sensations average (b) | 0.76 ± 0.18 | 0.79 ± 0.20 | 0.79 ± 0.22 | 0.64 ± 0.23
AIAS4 average (b) | 4.30 ± 2.05 | 4.31 ± 2.49 | 5.50 ± 2.82 | 7.04 ± 2.73
AI perceived threat (b) | 5.10 ± 3.14 | 5.39 ± 3.18 | 5.30 ± 3.12 | 4.10 ± 3.40
(a) Observed frequency (percentage); (b) mean ± SD. Abbreviations: AIAS4, four-item AI attitude scale; NPSS, neuroception of psychological safety scale; SD, standard deviation.
Table 6. The covariance-based structural equation model (SEM) and the numerical results of the fitted model on the 201-observation dataset.
Covariance-based SEM for perception of general AI in connection with NPSS and demographic factors
AIAS4 ~ NPSSsocial + NPSScompassion + NPSSbody + demoCohort + education + genderM
AI perceived threat ~ NPSSsocial + NPSScompassion + NPSSbody + demoCohort + education + genderM

AIAS4 ~~ AI perceived threat
NPSSsocial ~~ NPSScompassion + NPSSbody
NPSScompassion ~~ NPSSbody
demoCohort ~~ education
Fit indices (201 observations)
   Chi-square test: 21.341 (df = 11), p = 0.03 *
   CFI: 0.962
   RMSEA: 0.068, 90% CI (0.021; 0.111)
   SRMR: 0.071
Parameter estimates (a)
   Regressions | estimate ± standard error | z-score (p-value)
     AIAS4 ~ NPSSsocial | 4.383 ± 1.335 | 3.283 (0.001 **)
     AIAS4 ~ NPSScompassion | −2.505 ± 1.283 | −1.952 (0.051)
     AIAS4 ~ NPSSbody | −0.111 ± 0.910 | −0.121 (0.903)
     AIAS4 ~ demoCohort | 0.515 ± 0.172 | 2.992 (0.003 **)
     AIAS4 ~ education | 0.661 ± 0.143 | 4.622 (<0.001 **)
     AIAS4 ~ genderM | 1.087 ± 0.327 | 3.325 (0.001 **)
     AI perceived threat ~ NPSSsocial | 0.139 ± 1.875 | 0.074 (0.941)
     AI perceived threat ~ NPSScompassion | 0.759 ± 1.802 | 0.421 (0.674)
     AI perceived threat ~ NPSSbody | 0.047 ± 1.279 | 0.037 (0.971)
     AI perceived threat ~ demoCohort | −0.094 ± 0.242 | −0.387 (0.699)
     AI perceived threat ~ education | −0.194 ± 0.201 | −0.967 (0.334)
     AI perceived threat ~ genderM | −0.689 ± 0.459 | −1.501 (0.133)
   Covariances | estimate ± standard error | z-score (p-value)
     AIAS4 ~~ AI perceived threat | −1.071 ± 0.497 | −2.152 (0.031 *)
     NPSSsocial ~~ NPSScompassion | 0.017 ± 0.002 | 7.670 (<0.001 **)
     NPSSsocial ~~ NPSSbody | 0.018 ± 0.003 | 6.601 (<0.001 **)
     NPSScompassion ~~ NPSSbody | 0.012 ± 0.002 | 5.001 (<0.001 **)
     demoCohort ~~ education | 0.515 ± 0.093 | 5.510 (<0.001 **)
(a) Regression coefficients and covariance values are expressed as estimate ± standard error; z-scores are used for the z-test in standardized testing; p-values represent statistical significance (* p < 0.05; ** p < 0.01). Abbreviations: AIAS4, four-item AI attitude scale; CFI, comparative fit index; NPSS, neuroception of psychological safety scale; RMSEA, root mean square error of approximation; SRMR, standardized root mean square residual. “~” and “~~” respectively indicate regression and covariance.
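Each z-score and p-value in Table 6 follows from a Wald z-test on the estimate and its standard error. A self-contained Python check against the first regression row (AIAS4 ~ NPSSsocial), using the standard normal CDF via `math.erf` (a sketch, not the authors’ analysis code):

```python
import math

def wald_z_test(estimate, se):
    """Wald test: z = estimate / SE; two-sided p from the standard normal CDF."""
    z = estimate / se
    p = 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))
    return z, p

# AIAS4 ~ NPSSsocial row of Table 6: estimate 4.383, SE 1.335.
z, p = wald_z_test(4.383, 1.335)
print(round(z, 3), round(p, 3))  # 3.283 0.001, matching the reported values
```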
Table 7. The covariance-based SEM model applied in the gender-split analysis, with fit results and parameter estimates for the two gender groups.
Covariance-based SEM for perception of general AI in connection with NPSS and demographic factors
AIAS4 ~ NPSSsocial + NPSScompassion + NPSSbody + demoCohort + education
AI perceived threat ~ NPSSsocial + NPSScompassion + NPSSbody + demoCohort + education

AIAS4 ~~ AI perceived threat
NPSSsocial ~~ NPSScompassion + NPSSbody
NPSScompassion ~~ NPSSbody
demoCohort ~~ education
Fit indices (128 + 73 observations)
   Chi-square test: 15.953 (df = 12), p = 0.193
   CFI: 0.986
   RMSEA: 0.057, 90% CI (0.000; 0.124)
   SRMR: 0.050
Parameter estimates (a)
   Female group (128 observations)
   Regressions | estimate ± standard error | z-score (p-value)
     AIAS4 ~ NPSSsocial | 2.243 ± 1.601 | 1.401 (0.161)
     AIAS4 ~ NPSScompassion | −3.718 ± 1.617 | −2.299 (0.022 *)
     AIAS4 ~ NPSSbody | −0.216 ± 1.086 | −0.198 (0.843)
     AIAS4 ~ demoCohort | 0.604 ± 0.203 | 2.968 (0.003 **)
     AIAS4 ~ education | 0.524 ± 0.169 | 3.093 (0.002 **)
     AI perceived threat ~ NPSSsocial | 1.333 ± 2.294 | 0.581 (0.561)
     AI perceived threat ~ NPSScompassion | 1.215 ± 2.318 | 0.524 (0.600)
     AI perceived threat ~ NPSSbody | 0.190 ± 1.557 | 0.122 (0.903)
     AI perceived threat ~ demoCohort | 0.439 ± 0.292 | 1.507 (0.132)
     AI perceived threat ~ education | −0.376 ± 0.243 | −1.550 (0.121)
   Covariances | estimate ± standard error | z-score (p-value)
     AIAS4 ~~ AI perceived threat | −2.090 ± 0.622 | −3.362 (0.001 **)
     NPSSsocial ~~ NPSScompassion | 0.012 ± 0.002 | 5.472 (<0.001 **)
     NPSSsocial ~~ NPSSbody | 0.016 ± 0.003 | 4.929 (<0.001 **)
     NPSScompassion ~~ NPSSbody | 0.007 ± 0.003 | 2.656 (<0.001 **)
     demoCohort ~~ education | 0.590 ± 0.128 | 5.510 (<0.001 **)
   Male group (73 observations)
   Regressions | estimate ± standard error | z-score (p-value)
     AIAS4 ~ NPSSsocial | 8.741 ± 2.189 | 3.992 (0.001 **)
     AIAS4 ~ NPSScompassion | −2.766 ± 2.089 | −1.324 (0.185)
     AIAS4 ~ NPSSbody | −0.135 ± 1.526 | −0.089 (0.929)
     AIAS4 ~ demoCohort | 0.444 ± 0.291 | 1.528 (0.127)
     AIAS4 ~ education | 0.688 ± 0.239 | 2.876 (0.004 **)
     AI perceived threat ~ NPSSsocial | −3.233 ± 2.986 | −1.083 (0.279)
     AI perceived threat ~ NPSScompassion | 1.398 ± 2.849 | 0.491 (0.624)
     AI perceived threat ~ NPSSbody | −0.081 ± 2.082 | −0.039 (0.969)
     AI perceived threat ~ demoCohort | −1.299 ± 0.396 | −3.276 (0.001 **)
     AI perceived threat ~ education | 0.271 ± 0.326 | 0.831 (0.406)
   Covariances | estimate ± standard error | z-score (p-value)
     AIAS4 ~~ AI perceived threat | 1.039 ± 0.728 | 1.427 (0.154)
     NPSSsocial ~~ NPSScompassion | 0.022 ± 0.004 | 4.999 (<0.001 **)
     NPSSsocial ~~ NPSSbody | 0.020 ± 0.005 | 4.224 (<0.001 **)
     NPSScompassion ~~ NPSSbody | 0.019 ± 0.005 | 4.024 (<0.001 **)
     demoCohort ~~ education | 0.358 ± 0.126 | 2.851 (0.004 **)
(a) Regression coefficients and covariance values are expressed as estimate ± standard error; z-scores are used for the z-test in standardized testing; p-values represent statistical significance (* p < 0.05; ** p < 0.01). Abbreviations: AIAS4, four-item AI attitude scale; CFI, comparative fit index; NPSS, neuroception of psychological safety scale; RMSEA, root mean square error of approximation; SRMR, standardized root mean square residual. “~” and “~~” respectively indicate regression and covariance.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
