Article

Assessing the Measurement Invariance of the Human–Computer Trust Scale

School of Digital Technologies, Tallinn University, 10120 Tallinn, Estonia
* Author to whom correspondence should be addressed.
Electronics 2025, 14(9), 1806; https://doi.org/10.3390/electronics14091806
Submission received: 25 March 2025 / Revised: 16 April 2025 / Accepted: 22 April 2025 / Published: 28 April 2025
(This article belongs to the Section Artificial Intelligence)

Abstract

Trust in technology is a topic of growing importance in Human–Computer Interaction due to the increasing impact of systems on daily lives. However, limited attention has been paid to how individuals' national culture shapes their propensity to trust. This study addresses a gap in trust in technology research by advancing towards a more accurate tool for quantitatively measuring propensity to trust across different contexts. Specifically, we evaluate the psychometric properties of the Human–Computer Trust Scale (HCTS) in Brazil, Singapore, Malaysia, Estonia, and Mongolia. To accomplish this, we used the Measurement Invariance of Composite Models (MICOM) procedure, which examines the equivalence of an instrument's psychometric properties across different groups. Our results highlight the importance of rigorous validation when applying psychometric instruments in cross-cultural contexts, offering insights into the differences between the countries investigated and into the procedure's potential for investigating trust across different groups.

1. Introduction

As technology increasingly permeates nearly every aspect of modern life, understanding how individuals trust systems has become critical in Human–Computer Interaction (HCI). Despite the growing importance of trust in technology, there has been limited attention toward validating trust assessment tools, as the rapid pace of technological change often conflicts with the time-intensive process of instrument development and validation. However, it is crucial to ensure that these tools function consistently across different demographic groups to enable meaningful cross-cultural analyses.
This study seeks to address this issue by advancing the Human–Computer Trust Scale (HCTS), a trust assessment instrument [1], by critically examining its applicability across Brazil, Singapore, Malaysia, Estonia, and Mongolia. We use Measurement Invariance of Composite Models (MICOM), a statistical procedure within Partial Least Squares Structural Equation Modeling (PLS-SEM), to assess whether the HCTS demonstrates consistent psychometric properties across the five countries [2,3,4]. Our guiding research question is whether measurement invariance can be established for the HCTS across the five countries with distinct cultural dimensions [5].
Additionally, despite a growing body of research on trust in technology [6,7], limited attention has been paid to how one’s national culture shapes their propensity to trust technological systems. This article builds on the evidence that national culture moderates trust in technology [8] and focuses on providing a more rigorous assessment of the HCTS’s psychometric properties. By exploring these dynamics, we provide insights into measuring propensity to trust and offer methodological guidance for applying MICOM in HCI research. Thus, our main research question is complemented by our secondary goal, which is to explore if and how MICOM analysis can help identify differences in trust in technology between the countries.
This paper begins with an overview of trust in technology, the conceptual grounding of the HCTS, and the impact of national culture on trust. We then outline our data collection and analysis procedures, followed by a detailed account of the MICOM procedure and results. In the discussion, we integrate our findings to address the research questions and provide practical insights for conducting trust assessments across countries. By addressing these questions, our study enhances the validity of the HCTS across cultures and presents a methodological approach for future HCI research.

2. Theoretical Background

2.1. Trust in Technology

Trust is a fundamental mechanism enabling individuals to manage uncertainty in social interactions [9]. It plays a crucial role across various dimensions of life, from personal relationships to broader societal engagements [10]. These characteristics carry over, to some extent, to human–technology interactions. From an HCI perspective, trust in technology also involves rational and emotional responses [11], potentially improving individuals' acceptance, sustained usage, and satisfaction when interacting with technology [12].
With the rapid advancement of technology in recent decades, people now encounter new ways to interact with digital systems, introducing new layers of complexity to trust. These complexities arise from unclear perceptions of technological characteristics and the indirect factors that influence trust within human–technology relationships [13]. Consequently, measuring trust in technology has become both more complicated and important. For those reasons, our goal is to contribute to advancing the validation of an existing psychometric instrument designed to assess trust in technology.
Trust as an intention is distinct from, but an antecedent to, behavior [14]. We follow the conceptual groundings of the HCTS and adopt Mayer's [15] trust conceptualization, which explains it as one's willingness to be vulnerable based on the expectation that the other party will perform a particular action. This definition focuses on trust as a propensity and highlights that this intention is based on individuals' subjective assessment of the trustee—in this case, technology. It also points out that the propensity to trust depends on the situation, implying that there is some degree of uncertainty or risk for the trustor [16].
Trust in technology can also be approached as a behavior, where it is defined as an attitude that reflects an individual’s belief that a system will help them achieve their goals in situations of uncertainty [17]. This concept differs from propensity, as it is characterized as an attitude that mediates interactions. While the two approaches are complementary and relevant, they entail different measures and yield distinct outcomes. We focus on propensity, which is more closely related to culturally shaped inclinations and is suitable for studying emerging technologies or hypothetical scenarios, thereby addressing an important aspect of HCI.

2.2. Human–Computer Trust Scale (HCTS)

The Human–Computer Trust Scale (HCTS) is a psychometric instrument designed to measure individuals’ propensity to trust technology [18]. It is based on the Human–Computer Trust Model (HCTM), a theoretically grounded model of individuals’ trust formation that approaches trust in technology through a socio-technical lens [19], that is, recognizing technology as embedded within social and organizational contexts.
The model and scale are derived from previous research on trust in technology, which examines the dynamics of trust formation in online relationships [14], as well as definitions and measures that examine trust-related attributes within the technological artifact rather than in the associated people or organizations [16]. Although these cannot be entirely disassociated, they imply different objects.
The HCTS focuses on assessing trust propensity, that is, individuals’ predispositions to trusting behaviors [20,21]. The direct translation of disposition to behavior is complex, as it is shaped by cognitive, social, and contextual influences [22,23]. However, understanding trust propensity is essential for anticipating interaction challenges [24].
The scale has been applied to measure trust predisposition in various technology use contexts, including e-voting, voice assistants, future scenarios [1], human–robot interaction, and messaging applications [25].
We use the revised HCTS [25], which includes one additional construct to the original scale [1] to better account for the characteristics of AI, as demonstrated in Figure 1. The model conceptualizes trust as the combination of individuals’ perceptions based on four constructs:
  • Competence (COM) refers to a system's capability to perform its intended tasks effectively through appropriate features and functionalities, meeting expectations. The concept and items are based on notions from McKnight and colleagues [16,26], who draw on the idea of the system's functionality in online interactions, such as e-commerce.
  • Benevolence (BEN) refers to the technology acting in the trustor’s best interest, even when there is no obligation or reward to act in such a manner [14]. In practical terms, it translates into the system’s ability to provide adequate support to help users achieve their interaction goals [1]. The items are based on the work of Bhattacherjee [14], who focuses on trust in online firms.
  • Perceived Risk (PR) refers to individuals’ subjective assessment of the potential consequences of adverse incidents when interacting with technology. It draws on notions from [27], who define Perceived Risk as a subjective belief in the possibility of loss due to potential opportunistic behaviors by online sellers, such as fraud or breaking their commitments. This construct is derived from notions of willingness, that is, the notion of the individual being open to relying on the technology [27], and honesty [26], the notion that the trustee will act with integrity. These constructs were initially included in the HCTM [18] but later combined and reformulated as Perceived Risk [1], a higher-order concept encompassing these notions.
  • Structural Assurance (SA) refers to the belief that reliable legal, contractual, or physical mechanisms are in place to support and secure technology use. This construct was removed from the conceptual model after the scale's initial validations [18], but was re-introduced to address a gap in the scale when targeting AI-based technologies [REF Beltrao]. It was initially conceptualized as part of "institution-based trust", belonging to the structure in which the technology is used rather than to an individual's disposition [26]. Nevertheless, considering the development of technology in recent years and the fuzzy legal landscape of AI-based systems, we approach Structural Assurance from the perspective of the trustor's perception of it, regardless of the actual protection offered to them.

2.3. Differences in Trust Across Countries

We approach trust from a socio-technical perspective [28], aiming to better understand the connection between the social structures and norms that shape trust at a collective level. This understanding is fundamental in the current context of technology adoption, where technological solutions are created in one place and rapidly adopted across various locations, often without an understanding of their potential impacts on the respective populations. Both too little and too much trust can negatively impact interactions with technology and harm individuals [29].
Research shows that an individual’s national culture can influence their tendency to trust others in interpersonal relationships [30,31]. Similarly, in HCI, studies have demonstrated that national culture can shape individuals’ trust in technology [32,33,34,35,36]. Nevertheless, most studies are case-specific and allow only limited generalization. Thus, while there is a general agreement that national culture and trust in technology are connected, there is no clear understanding of this relationship.
Along the same lines, national culture can influence how trust in technology is measured. Variations in shared values among individuals can shape their perceptions of trust, which affects the effectiveness of psychometric instruments used to assess this concept. Therefore, ultimately, this study aims to contribute to understanding the relationship between individuals’ national culture and their trust in technology.
This study explores differences between countries through the cultural dimensions framework by Hofstede and colleagues [37]. This model is widely adopted for investigating cultural differences at the national level, positing that national cultures influence patterns of thinking, feeling, and behaving that are shared and passed down among members of the same nation. The authors identify six dimensions that shape culture, some of which may impact citizens' tendency to trust one another. We acknowledge criticisms of the model regarding sample representativeness, universality, and relevance [38,39], as well as the existence of other models for comparing cultures, such as GLOBE [40] or Schwartz's theory of basic values [41]. Nonetheless, Hofstede's work remains a comprehensive framework that enables practical cross-national comparisons, especially at the more general level intended by this study.
The countries included in the study are Brazil, Singapore, Malaysia, Estonia, and Mongolia. They were selected primarily due to their differences in cultural dimensions found to be related to trust, namely Power Distance, Individualism, and Uncertainty Avoidance:
  • Power Distance refers to how the individuals of a society accept the distribution of power, shaping their view on how individuals in different power positions should interact [37]. In societies with higher Power Distance, individuals are more likely to recognize authority and hierarchical structures, so there is generally a higher trust in authority figures. Conversely, a lower Power Distance leads to less centralized power and flatter structures [42,43,44]. It has been demonstrated that societies with a higher Power Distance are less open to accepting and adopting new technologies [45], which is interrelated to their propensity to trust these systems.
  • Collectivism–Individualism describes the extent to which individuals are integrated into groups, with collectivist cultures emphasizing group harmony and cohesion over personal autonomy and individual achievement [5]. Yamagishi and Yamagishi [30] first identified the impact of collectivism on trust, and subsequent research has consistently demonstrated a positive relationship between collectivism and trust propensity [46]. Notably, in collectivist societies, trust is often stronger among in-group members, whereas in individualist cultures, it is more broadly distributed [30].
  • Uncertainty Avoidance reflects the extent to which a society feels threatened by ambiguity and unstructured situations, as opposed to tolerating them [5]. Cultures with high Uncertainty Avoidance generally prefer structure, order, and clear rules. Research indicates that this cultural dimension influences trust propensity [43]; specifically, individuals from high Uncertainty Avoidance cultures are more likely to trust when interactions are governed by clear structures and guidelines [47].
Here, we underline the effect of specific dimensions on trust alone to facilitate understanding. It is crucial to notice that each country has specific characteristics shaped by the unique combination of all the dimensions. The complete comparison, including all six dimensions, can be found in Appendix A.

3. Methodology

Our study follows the steps recommended for the MICOM procedure in the literature [2,48,49] to explore the influence of individuals' national culture on the HCTS. While the MICOM is at the core of our article, we first evaluated the measurement model to ensure its validity in the countries included, and later analyzed the path coefficients to further explore the effect of national culture on the model's behavior. All procedures are detailed in this section.
The five studies were conducted independently but followed the same protocol in each country between 2023 and 2024. This study presented a stimulus and relied on participants’ perceptions regarding the hypothetical implementation of such a system in their country. Facial Recognition Systems (FRSs) were chosen for this study because they have not yet been widely implemented anywhere. This allowed participants to form impressions of the technology in a more abstract manner, making it easier to compare their predispositions. In contrast, using an existing system could trigger past experiences that are context-specific and may vary significantly between countries, complicating the comparison of results.
The stimulus consisted of a 2 min long video featuring excerpts of FRSs implemented in China (Skynet) and England (Metropolitan Police of London). Each excerpt showcased real applications of the technology with slightly different approaches: Skynet focuses on how the system can enhance safety, while the Metropolitan Police emphasizes the importance of maintaining citizens’ privacy. The objective of this stimulus was to help participants understand the potential benefits and risks associated with FRSs without being limited to a specific example.
The questionnaire included items on socio-demographic factors, including age, gender, and education level; items related to technology usage and access; and the HCTS. The HCTS was tailored to reflect the focus on FRSs, following the most recent version of the instrument, containing 11 items measured on a 5-point Likert scale [25], and three items measuring Trust for the PLS-SEM validation [1].
In all cases, the questionnaire was made available in English, the language in which it was originally validated. Additionally, translations into the local languages were provided by native speakers: Portuguese in Brazil; Chinese, Tamil, and Malay in Singapore; Chinese and Malay in Malaysia; Estonian in Estonia; and Mongolian and Chinese in Mongolia. Participants were recruited through convenience sampling, with assistance from local institutions and the researchers' networks in each country. Data collection occurred online via LimeSurvey (https://www.limesurvey.org/, accessed 11 November 2024).

Samples

Participants were recruited using a convenience sampling strategy, which leveraged the authors’ network while also striving to include a diverse sample. We chose convenience sampling because of resource limitations that made probability sampling impractical. Since this is a preliminary investigation aimed at validating the measurement instrument and identifying patterns, a convenience sample is appropriate. However, we emphasize that further studies should focus on generalizability, as our sample may not be fully balanced or representative of the broader populations.
The sample size (N) for each country considered in this study is as follows: Brazil = 133, Singapore = 109, Malaysia = 107, Estonia = 117, and Mongolia = 120, resulting in a total sample size of N = 586. A detailed description of the sample breakdown by gender and age range can be seen in Table 1.

4. Analysis

4.1. Measurement Model Evaluation

Before conducting the MICOM analysis, we assessed the measurement model for each sample. Although the HCTS has already been validated, this assessment is advisable before the MICOM to ensure that the properties of the constructs hold in the different contexts investigated [50].
Table 2 presents the HCTS questionnaire adopted for the assessment. It comprises the constructs Competence (COM), Benevolence (BEN), Perceived Risk (PR) (reversed), and Structural Assurance (SA). The content within brackets refers to the specific technology assessed.
Our assessment of the measurement models focused on the measures’ internal consistency, reliability, convergent validity, and discriminant validity. The results showed considerable variability in the measures’ reliability and validity.
Reliability values refer to how well each indicator reflects its latent construct; values above 0.5 are considered adequate [3]. All the samples presented at least one problematic item, in every case within the constructs COM or PR. The most critical issues were with the item COM3 for the Singapore and Estonia samples, and the item PR2 for Mongolia and Malaysia.
Next, we examined the average variance extracted (AVE), which represents how well the indicators explain their construct. Following the problems with item reliability, the AVE for PR was below the 0.5 threshold for Mongolia. Finally, we considered composite reliability (CR), which assesses the internal consistency of the indicators. The problems remained, with CR values for PR below the necessary threshold of 0.7 for Malaysia and Mongolia. To address these issues, we removed the items with the lowest loadings: COM3 and PR2.
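These three diagnostics (indicator reliability, AVE, and CR) can be computed directly from standardized outer loadings. A minimal Python sketch follows; the loadings are hypothetical placeholders, not the study's data:

```python
# Reliability diagnostics from standardized outer loadings.
# All loading values below are hypothetical, not the study's data.

def indicator_reliability(loadings):
    """Squared standardized loading per item; values >= 0.5 are adequate."""
    return {item: l ** 2 for item, l in loadings.items()}

def ave(loadings):
    """Average variance extracted: mean squared loading; >= 0.5 is adequate."""
    squared = [l ** 2 for l in loadings.values()]
    return sum(squared) / len(squared)

def composite_reliability(loadings):
    """Composite reliability rho_c = (sum l)^2 / ((sum l)^2 + sum(1 - l^2));
    values >= 0.7 are adequate."""
    s = sum(loadings.values())
    error = sum(1 - l ** 2 for l in loadings.values())
    return s ** 2 / (s ** 2 + error)

pr = {"PR1": 0.82, "PR2": 0.48, "PR3": 0.76}  # hypothetical PR loadings
print(indicator_reliability(pr))  # PR2 falls below the 0.5 threshold
print(round(ave(pr), 3), round(composite_reliability(pr), 3))
```

Dropping the weak item (here PR2) and recomputing shows how the removal lifts both AVE and CR, mirroring the refinement step described above.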
The model without these items yielded better results, with only one loading below the 0.7 threshold for the Malaysia sample (PR3). However, it did not affect the AVE and CR, which were adequate for this version of the model. The results can be seen in Table 3. The results for the original model are available in Appendix A (Table A1).
Next, we assessed the discriminant validity (DV) of the refined model using the Fornell–Larcker criterion. This criterion compares the square root of each construct's AVE with the construct's correlations with the other constructs in the model. In all samples, each construct's highest value was associated with its own construct, demonstrating that there are no issues with DV. The table with the DV results is available in Appendix B (Table A2); the highest values per construct per country are highlighted to facilitate interpretation. Finally, we looked at R², representing the variance of the endogenous construct (Trust) explained by the exogenous constructs (COM, BEN, PR, and SA). The values were as follows: Brazil = 0.625, Singapore = 0.589, Malaysia = 0.510, Estonia = 0.682, and Mongolia = 0.736; all were considered adequate in our domain of study.
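The Fornell–Larcker check can be sketched in a few lines of Python; the AVE and correlation values below are hypothetical, not the study's results:

```python
# Fornell-Larcker criterion: the square root of each construct's AVE must
# exceed its correlations with every other construct. Hypothetical numbers.

def fornell_larcker(ave_values, corr, names):
    """Return, per construct, whether discriminant validity holds."""
    result = {}
    for i, name in enumerate(names):
        sqrt_ave = ave_values[name] ** 0.5
        max_corr = max(abs(corr[i][j]) for j in range(len(names)) if j != i)
        result[name] = sqrt_ave > max_corr
    return result

names = ["COM", "BEN", "PR", "SA"]
ave_values = {"COM": 0.71, "BEN": 0.68, "PR": 0.62, "SA": 0.66}
corr = [
    [1.00, 0.55, -0.40, 0.48],    # COM
    [0.55, 1.00, -0.35, 0.52],    # BEN
    [-0.40, -0.35, 1.00, -0.30],  # PR
    [0.48, 0.52, -0.30, 1.00],    # SA
]
print(fornell_larcker(ave_values, corr, names))  # all True: DV holds
```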
After removing two items, all the samples met most of the assessment criteria for reliability, convergent, and discriminant validity. Thus, we proceeded with the MICOM using the adjusted model with nine items. The problems encountered in the measurement model evaluation are further addressed in the Section 5.

4.2. Measurement Invariance Assessment (MICOM)

Next, we proceeded with the MICOM. The procedure comprises three steps, each of which should only be performed if the previous criteria have been met: (1) configural invariance, (2) compositional invariance, and (3) equal mean values and variances [48].

4.2.1. Configural Invariance

The first step refers to the assessment of the conceptual model structure, requiring (1) identical indicators per measurement model, (2) identical data treatment, and (3) identical algorithm settings [4].
Following the studies' design, the scale was implemented with the same items and under the same protocol in all the countries. The scale was administered in English and the local languages, following a back-translation process by native speakers. Our group-specific model estimations draw on identical algorithm settings, and given the measurement model evaluation and adjustments in the previous step of the analysis, we can also consider the PLS path model setups to be equal across the five countries. Thus, configural invariance is established.

4.2.2. Compositional Invariance

Compositional invariance assesses whether the relationships between indicators and the composite constructs are similar across groups. This step is required to ensure that the constructs are formed in the same way across the countries, which is necessary for comparing results between them [4].
This procedure was performed in paired comparisons. Since we had five countries, we had a total of 10 comparisons. We ran the permutation procedure with 1000 permutations and a 5% significance level for each paired comparison.
To assess compositional invariance, we compared the original correlations of the composite scores (C) with those generated by the permutation test (Cu). If C is above the 5% quantile of the Cu distribution, which is also reflected in non-significant p-values (p > 0.05), compositional invariance is established.
A permutation p-value > 0.05 indicates that the difference in a construct's composition is not statistically significant, so the results can be compared. Table 4 presents the p-values of the comparisons to facilitate the overview of the results. Only three pairs from our samples achieved full compositional invariance: Singapore vs. Brazil, Singapore vs. Estonia, and Mongolia vs. Malaysia. All other paired comparisons had between one and three constructs violating compositional invariance, most commonly the endogenous construct (Trust). The values in bold represent the cases in which compositional invariance was achieved, indicating that they can be compared. For some comparisons, the difference was significant only for Trust and close to the 0.05 threshold; similarly, some comparisons had only one significantly different construct.
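To make the mechanics of this step concrete, the sketch below implements a simplified version of the permutation procedure. It substitutes first-principal-component weights for PLS outer weights and runs on synthetic data; it illustrates the logic of the test, not the SmartPLS implementation used in the study:

```python
import numpy as np

rng = np.random.default_rng(42)

def composite_corr(group_a, group_b):
    """Correlation c between composite scores built with each group's own
    weights (first principal component as a stand-in for PLS outer
    weights), both applied to the pooled data."""
    pooled = np.vstack([group_a, group_b])
    w_a = np.linalg.svd(group_a - group_a.mean(0), full_matrices=False)[2][0]
    w_b = np.linalg.svd(group_b - group_b.mean(0), full_matrices=False)[2][0]
    if w_a @ w_b < 0:  # PC signs are arbitrary; align them
        w_b = -w_b
    return np.corrcoef(pooled @ w_a, pooled @ w_b)[0, 1]

def compositional_invariance(group_a, group_b, n_perm=1000):
    """MICOM step 2: invariance holds if the observed c is not
    significantly below the permutation distribution of c under random
    reassignment of rows to groups (permutation p-value > 0.05)."""
    c_obs = composite_corr(group_a, group_b)
    pooled = np.vstack([group_a, group_b])
    n_a = len(group_a)
    c_perm = []
    for _ in range(n_perm):
        idx = rng.permutation(len(pooled))
        c_perm.append(composite_corr(pooled[idx[:n_a]], pooled[idx[n_a:]]))
    p_value = float(np.mean(np.array(c_perm) <= c_obs))
    return c_obs, p_value, p_value > 0.05

# Synthetic example: two groups whose items load on one common factor
factor = rng.normal(size=(120, 1))
items = factor @ np.ones((1, 4)) + 0.3 * rng.normal(size=(120, 4))
c, p, invariant = compositional_invariance(items[:60], items[60:], n_perm=200)
print(round(c, 3), invariant)
```

Because both synthetic groups share the same factor structure, the observed c lands near 1; with real multi-country data, each pair of groups would be tested this way per construct.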
Although full compositional invariance was not achieved in most cases, we proceeded with further analyses to understand the differences between groups in more depth. This decision is based on the fact that the HCTS has already been validated and variations in invariance have been observed across the samples. Furthermore, it aligns with the exploratory nature of the MICOM, allowing us to further examine whether these problems are reflected in the analysis of equal means and variances.

4.3. Equal Mean Values and Variances

The final step of MICOM evaluates whether the mean values and variances of the constructs are equal across different groups [4]. Equal means indicate that the groups have similar tendencies for each construct, and equal variances imply similar dispersion. If the means and variances are considered equal, full measurement invariance is achieved, and the data from different groups can be pooled. It also means that any differences in path coefficients can be interpreted confidently rather than attributed to measurement variability.
Results are calculated similarly to compositional invariance, with p-values > 0.05 indicating that the differences are not statistically significant and that the results can be compared across the groups. Within our samples, no paired comparison presented equal composite means, and the similarities varied considerably between the pairs. Regarding variance, Brazil vs. Estonia, Singapore vs. Malaysia, and Singapore vs. Estonia had equal variances for all constructs. The number of constructs achieving measurement invariance in the other paired comparisons varied considerably. Table 5 and Table 6 present overviews of the results, with values in bold representing the cases in which the conditions were satisfied.
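The logic of this step can be sketched with a simple permutation test on composite scores; the scores below are synthetic and merely illustrative:

```python
import numpy as np

rng = np.random.default_rng(7)

def permutation_mean_test(scores_a, scores_b, n_perm=1000):
    """Two-sided permutation test for equal composite means (MICOM step 3);
    applying the same procedure to the log of the variances gives the
    variance test. p > 0.05 means the difference is not significant."""
    obs = scores_a.mean() - scores_b.mean()
    pooled = np.concatenate([scores_a, scores_b])
    n_a = len(scores_a)
    diffs = []
    for _ in range(n_perm):
        idx = rng.permutation(len(pooled))
        diffs.append(pooled[idx[:n_a]].mean() - pooled[idx[n_a:]].mean())
    return float(np.mean(np.abs(diffs) >= abs(obs)))

# Hypothetical standardized Trust scores for two countries
country_a = rng.normal(loc=0.0, size=80)
country_b = rng.normal(loc=0.8, size=80)  # clearly higher mean
print(permutation_mean_test(country_a, country_b))  # small p: means differ
```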
To answer our research question, measurement invariance was not achieved for the HCTS in most cases. Partial measurement invariance was achieved for the pairs Singapore vs. Brazil, Singapore vs. Estonia, and Mongolia vs. Malaysia, indicating that only in these cases can the HCTS results be compared, but not pooled.
If measurement invariance is not achieved, comparing path coefficients can be problematic because differences might be due to the measurement varying across groups rather than genuine differences in the relationships between constructs. Nevertheless, based on our exploratory intentions, we analyze the path coefficients. This analysis focuses on cautious relative comparisons, taking into account the results of the MICOM.

4.4. Path Coefficients

We proceed with the path coefficient analysis, considering that our comparisons reached mostly partial compositional invariance. Table 7 presents the path coefficients per country, where it can be observed that certain countries are more similar. For instance, based on the weights of the constructs BEN and COM, it is possible to identify two groups: Brazil, Malaysia, and Mongolia, with a higher weight of BEN over COM, and Singapore and Estonia, with the inverse proportion of these constructs' weights. Additionally, Figure 2 is included to facilitate the visualization of the results.
To evaluate the significance of the differences, we ran a bootstrapping analysis with 5000 subsamples at a 0.05 significance level. The analysis revealed that the differences were significant in only a few comparisons, and in no more than two constructs per case. The complete bootstrapping results are available in Appendix C (Table A3).
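The mechanics of such a bootstrapping comparison can be sketched as follows; a plain OLS slope stands in for a PLS path coefficient, and all data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(3)

def bootstrap_path_difference(x_a, y_a, x_b, y_b, n_boot=2000, alpha=0.05):
    """Percentile-bootstrap CI for the between-group difference in a path
    coefficient (approximated here by an OLS slope). The difference is
    significant at level alpha if the CI excludes zero."""
    def slope(x, y):
        return np.polyfit(x, y, 1)[0]
    diffs = []
    for _ in range(n_boot):
        ia = rng.integers(0, len(x_a), len(x_a))  # resample group A rows
        ib = rng.integers(0, len(x_b), len(x_b))  # resample group B rows
        diffs.append(slope(x_a[ia], y_a[ia]) - slope(x_b[ib], y_b[ib]))
    lo, hi = np.quantile(diffs, [alpha / 2, 1 - alpha / 2])
    return lo, hi, not (lo <= 0.0 <= hi)

# Synthetic groups: a BEN -> Trust slope of ~0.7 versus ~0.2
x1 = rng.normal(size=100); y1 = 0.7 * x1 + 0.3 * rng.normal(size=100)
x2 = rng.normal(size=100); y2 = 0.2 * x2 + 0.3 * rng.normal(size=100)
lo, hi, significant = bootstrap_path_difference(x1, y1, x2, y2)
print(round(lo, 2), round(hi, 2), significant)
```

With a true slope gap of 0.5 and little noise, the confidence interval sits well above zero, so the difference is flagged as significant; with the study's real data, most such intervals contained zero.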
Therefore, although we identified differences between the groups, the lack of statistical significance indicates that the findings must be interpreted cautiously. Figure 3 presents a summary of the procedure adopted, highlighting the outcomes of each step. The next section provides a complete reflection on our findings.

5. Discussion

The HCTS is an instrument for assessing trust in technology that has been validated and subsequently applied in various contexts with distinct groups of participants. Although researchers have been careful with the application of the scale, limited attention has been paid to the effects of the evaluation context on the instrument's functioning. The MICOM results demonstrated the procedure's potential for a deeper understanding of the instrument and for exploring the groups investigated. In this section, we discuss the implications of our findings, aiming to assist other researchers and practitioners in HCI in implementing the procedure.
First, the measurement model evaluation, a prerequisite for the MICOM, revealed that some constructs were problematic in particular samples, pointing towards a review of the scale [25] to ensure higher adequacy in different contexts—in this case, countries. The items with loading inadequacies varied according to the samples, providing initial indications that the scale behaves differently in each country, and not necessarily that there are structural problems with the scale [50], which we further explored with the MICOM. After the two items with persistent issues, COM3 and PR2, were removed, the scale’s reliability and convergent validity were improved, and adequate DV and R² were achieved for all samples, fulfilling the prerequisites for running the MICOM. Furthermore, it supported the assumption that there were problems with these items. As both constructs (COM and PR) remained with two items each, the removal improved the scale’s measurement quality without compromising the constructs’ conceptual meaning.
Thus, the first contribution of this study is the advancement of the HCTS [1], building on the revised version [8]. The revised scale is available in Appendix D Table A4, and can more consistently be used across different national groups. Insights into the behavior of the HCTS and of the countries investigated are described next.

5.1. Measurement Invariance Assessment (MICOM)

The first step of the MICOM analysis, configural invariance, assessed the conceptual model structure. This condition was met because the data were collected following the same protocol and analyzed using the same procedures.
The next step, the analysis of compositional invariance, is a prerequisite for confidently comparing results between groups. Our empirical research included five samples, so we ran ten paired comparisons. This analysis revealed that 7 out of 10 pairs had significantly different compositions for at least one construct. As shown in Table 4, invariance was most commonly not achieved for the endogenous construct (Trust). This finding suggests that the differences observed primarily relate to how the constructs affect trust rather than how the items compose the exogenous constructs. From a broader perspective, this result also points to variations in trust formation between the groups compared [8], rather than distinct interpretations of single items.
This result has serious implications for the HCTS, as the lack of compositional invariance indicates that the constructs are formed differently across groups; thus, their comparison can be misleading [4], as observed differences may stem from differences in the measurement model. In practical terms, this implies that comparisons between countries can only be confidently conducted for the pairs in which compositional invariance was achieved. Figure 4 summarizes the paired comparisons. The pairs with check marks (✓) achieved compositional invariance. The others have partial compositional invariance, and the numbers in the cells reflect the number of constructs that do not satisfy the condition. For the pairs with partial compositional invariance, only the constructs that satisfy the condition should be compared, as per Table 4.
According to MICOM's guidelines, the third step of the analysis should only be performed if compositional invariance is achieved [4], which was not the case in our study: most pairs did not reach compositional invariance, as shown in Table 4. However, three pairs failed only because of a single borderline value for Trust, while two others failed for a single exogenous construct each. Considering that these results fall only marginally short of the criteria in most cases, we chose to proceed with the analysis. Nevertheless, we stress that this decision deviates from a rigorous MICOM procedure and is justified by our aim to provide further exploratory insights.
In the third and final MICOM step, the assessment of equal means and variances, no pair had equal composite mean values, but three pairs had equal variances. The fact that none of the group pairs had equal mean values indicates that the average scores on the constructs differ between the groups. This suggests that individuals from different countries likely perceive trust in technology differently. For the three pairs that exhibited equal variances (Brazil vs. Mongolia, Singapore vs. Malaysia, and Singapore vs. Estonia), the consistency or spread of responses within those groups is similar. More specifically, the only pair that satisfied the conditions for compositional invariance and equal variances was Singapore vs. Estonia. These results imply that while significant differences exist between the countries, some comparisons remain valid, particularly regarding the relationships between constructs.
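The equality tests in this third step can be sketched as a generic permutation test on the composite scores (an illustration only; the log-variance statistic mirrors MICOM's variance comparison, and the function names are ours):

```python
import numpy as np

rng = np.random.default_rng(7)

def permutation_test(scores_a, scores_b, stat, n_perm=2000):
    """Two-sided permutation p-value for the difference stat(a) - stat(b)."""
    observed = stat(scores_a) - stat(scores_b)
    pooled = np.concatenate([scores_a, scores_b])
    n_a = len(scores_a)
    diffs = np.empty(n_perm)
    for i in range(n_perm):
        idx = rng.permutation(len(pooled))
        diffs[i] = stat(pooled[idx[:n_a]]) - stat(pooled[idx[n_a:]])
    return float(np.mean(np.abs(diffs) >= np.abs(observed)))

def equal_mean_p(a, b):
    """Step 3a: equality of composite mean values."""
    return permutation_test(a, b, np.mean)

def equal_variance_p(a, b):
    """Step 3b: equality of composite variances (compared on a log scale)."""
    return permutation_test(a, b, lambda x: np.log(np.var(x, ddof=1)))
```

Applied to each construct's composite scores per country pair, p-values above 0.05 correspond to the "equal means" and "equal variances" outcomes discussed above.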
Thus, we can assume that the scale has partial compositional invariance across the countries investigated, with the invariance varying between the countries compared. This means that the data cannot be pooled, and comparing results between countries is feasible only with caution [4], considering which specific indicators or constructs differ across the analyzed groups.
Although our results are limited to five countries, the findings provide evidence that it is necessary to further evaluate the behavior of a psychometric instrument (the HCTS or others) before making cross-country comparisons, as differences between the populations can affect the relationship between endogenous and exogenous constructs. In practice, this means that comparisons of results between groups might embed differences in how the groups interpret the constructs.
Interestingly, the study outcomes also enabled us to explore these differences further. The results helped us identify areas where the instrument behaves consistently or inconsistently, serving as a basis for understanding the disparities between groups and improving the accuracy of our trust assessment results. Building on this, we examined the results for each country.

5.2. Effects of National Culture on the HCTS

The path coefficients (Table 7 and Figure 2) point towards the existence of two main groups among the countries. The first includes Brazil, Malaysia, and Mongolia, in which COM has a considerably higher weight than BEN in shaping Trust. Additionally, Malaysia and Mongolia have similar proportions of PR and SC.
Another group comprises Singapore and Estonia, where the weights of all constructs follow similar proportions. Most notably, BEN and COM have inverse weights compared with the other group. The bootstrapping results revealed that the differences are not statistically significant in most cases, but this might have been caused by the lack of full measurement invariance, which can affect the validity of the estimations [3].
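Such a bootstrap comparison of group-specific coefficients can be sketched as follows. This is a simplified illustration only: the path is reduced to a bivariate standardized coefficient, whereas the article estimates the full PLS-SEM model; all names are ours:

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_path_diff(x_a, y_a, x_b, y_b, n_boot=2000):
    """Percentile-bootstrap test for the difference between two groups'
    standardized path coefficients (reduced here to the bivariate case)."""
    def beta(x, y):
        return np.corrcoef(x, y)[0, 1]  # standardized coefficient

    def resampled_beta(x, y):
        idx = rng.integers(0, len(x), len(x))  # resample cases with replacement
        return beta(x[idx], y[idx])

    diffs = np.array([resampled_beta(x_a, y_a) - resampled_beta(x_b, y_b)
                      for _ in range(n_boot)])
    observed = beta(x_a, y_a) - beta(x_b, y_b)
    lo, hi = np.percentile(diffs, [2.5, 97.5])
    significant = not (lo <= 0.0 <= hi)  # 0 outside the 95% CI
    return observed, (lo, hi), significant
```

When the bootstrap confidence interval for the difference contains zero, the group difference in that path is not statistically supported, which is the pattern reported for most pairs above.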
When combined with the MICOM results, we can see that the paired comparisons between Malaysia and Mongolia and between Singapore and Estonia reached compositional invariance, Brazil and Mongolia had a borderline result, and Singapore and Estonia additionally exhibited equal variances. These findings suggest that the outcomes of the HCTS for these specific pairs can be compared with greater confidence. Notably, this interpretation prioritizes the triangulation of methods over the strictness of the thresholds to explore how the applied procedures can reveal further differences between the groups, which can lead to practical insights. However, we emphasize that these findings are speculative and should guide further investigations, not generalizations.
Returning to Hofstede’s cultural dimensions model [37], we observe that it provides limited insight into the groupings. As illustrated in Figure 5, Singapore and Estonia, which show the highest similarities in how the model functions, have notably different scores for Power Distance (Estonia = 40, Singapore = 74) and Uncertainty Avoidance (Estonia = 60, Singapore = 8). While they also differ in Individualism (Estonia = 62, Singapore = 43), both nations have the highest scores among the analyzed countries. One hypothesis is that their higher levels of Individualism foster similar views on autonomy and privacy, which, in turn, may shape their understanding of trust in technology.
Malaysia and Mongolia exhibit close values for Power Distance (Malaysia = 100, Mongolia = 93). This dimension might also be related to these populations’ similar interpretations of trust in technology stemming from analogous views on authority and their acceptance of it. In both cases, the technological object under discussion, FRSs, may have intensified these relationships. However, we stress that these interpretations are speculative, and further studies are needed to investigate these hypotheses, taking into account the unique characteristics of each country as defined by their cultural dimensions or by considering alternative models.

5.3. Implications

Through the implementation of the MICOM, this study demonstrates that the HCTS, an instrument to measure the propensity to trust technology, can behave differently across countries. As such, the main implication of our study is that comparing such assessment results between countries may be misleading if these differences are not accounted for. The most rigorous solution to this issue is to follow the MICOM procedure before making such comparisons. However, as this may not always be feasible, other strategies may be adopted to mitigate the issue, such as investigating the differences between countries through qualitative approaches.
In addition, the results indicate that the understanding of trust in technology can vary significantly from one country to another, shaped by cultural perceptions and values. This is crucial for HCI because understanding these differences is essential for designing technologies that better align with users’ expectations in various regions.
By highlighting how trust formation varies across cultures, this study also demonstrates how multilayered this topic is. While the focus of our study is to move towards more accurate assessments using the HCTS, the analysis also led to insights about the differences in the meaning of trust in technology for the groups. It is also noteworthy that we approached culture from a single perspective, the national lens, which is one of numerous ways to investigate culture. Our results underscore the necessity for further cross-cultural investigations following other approaches.
If we consider emerging discussions about reaching the optimal, not the highest, levels of trust [29], these findings imply that trust calibration mechanisms must be tailored, considering the varying expectations of system performance and reliability. For instance, in Brazil, Malaysia, and Mongolia, mechanisms focused on Benevolence [14], such as providing adequate support or fostering community involvement, could more strongly influence trust. Conversely, Competence plays a greater role in Estonia and Singapore, so demonstrating that the system meets high technical standards, has precision, and is reliable [26] would have more meaningful effects on trust.
Finally, our outcomes demonstrate the complex and contextual nature of trust in technology. While the insights are useful for designing systems that address different concerns, ethical considerations must be prioritized. Knowledge about differences in trust can improve interactions and empower individuals, but it can similarly be used to deceive them and exacerbate existing disparities. This is even more crucial when considering our object of study, FRSs, as this technology has strong social implications. This discussion is beyond the scope of our research, as here, FRSs were used merely as a prompt, and questions regarding their actual implementation require a much more detailed account. Nevertheless, we highlight that researchers and practitioners investigating FRSs, and more generally, trust in technology, must follow ethical practices and commit to respecting fundamental rights.

6. Conclusions

This study contributes to the field of trust in technology by advancing the cross-country validation of the HCTS. By employing the MICOM procedure, we assessed the psychometric properties of the HCTS across five culturally diverse countries: Brazil, Singapore, Malaysia, Estonia, and Mongolia. Our findings revealed partial measurement invariance, indicating that while the instrument shows potential for cross-cultural applications, the results should be interpreted with caution because there are differences in how trust is understood and formed across these groups.
Our results underscore the importance of rigorous validation processes when applying psychometric instruments in cross-cultural contexts. Without such assessments, researchers may draw inaccurate conclusions about trust differences based on measurement variances rather than genuine disparities. The exploratory nature of our approach highlights the challenges of establishing full invariance and demonstrates the method’s potential in identifying patterns among the groups.
Additionally, our outcomes provide evidence of the interplay between national culture and trust in technology, emphasizing the need for future research to move beyond national boundaries to explore other dimensions of culture. While we focused on a single case (facial recognition systems for law enforcement), the findings are also useful for reflecting on the interaction with other applications.
Overall, this study takes a step toward enhancing the robustness and applicability of the HCTS for cross-cultural research and understanding differences in trust in technology across countries. We expect our findings to guide further culturally sensitive research on trust in technology.

7. Limitations

This study has several limitations that should be acknowledged. First, the sample used for analysis is not representative of the population of each country, and differences in the sample characteristics, such as gender and age distributions, may have influenced the results, which limits the generalizability of the findings. Second, the MICOM-recommended thresholds were not strictly adhered to in all cases. This decision was guided by the intention to allow further exploration of the topic, but it may have affected the robustness of some findings. In addition, using countries as proxies for cultural background represents only a superficial dimension of individuals’ cultural identity. Countries do not represent homogeneous cultural groups, and this approach may overlook important within-country variations. Future research should include more diverse and representative samples and explore additional dimensions of cultural differences to provide a more nuanced understanding of the interplay between culture and trust in technology.

Author Contributions

Conceptualization, G.B., S.S., and D.L.; methodology, G.B., S.S., and D.L.; validation, G.B.; formal analysis, G.B.; investigation, G.B.; data curation, G.B.; writing—original draft preparation, G.B.; writing—review and editing, D.L.; visualization, G.B.; supervision, S.S. and D.L.; project administration, S.S. and D.L.; funding acquisition, S.S. and D.L. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the project MARTINI, grant CHIST-ERA-21-OSNEM-004 (TKA22209), and the European Office of Aerospace Research and Development and US Air Force Office of Scientific Research: FA8655-22-1-7051.

Institutional Review Board Statement

All subjects gave their informed consent for inclusion before they participated in the study. The study was conducted according to the guidelines of the Declaration of Helsinki and approved by the Ethics Committee of Tallinn University (protocol code 20 and date of approval 14 June 2021).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The original data presented in the study are openly available in the Open Science Framework (OSF) at https://osf.io/xzamn/?view_only=90946a29e1d44f50b511239dcc3ed59d (accessed on 21 April 2025).

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A. Country Comparison

Figure A1 presents the complete comparison of Brazil, Singapore, Malaysia, Estonia, and Mongolia as per the country comparison tool, based on the cultural dimensions model [5]. The comparison was enabled through the online tool available at https://www.theculturefactor.com/country-comparison-tool (accessed on 21 April 2025).
Figure A1. Full country comparison based on the cultural dimensions model.

Appendix B. Measurement Model

A summary of the measurement model for the original model is presented in Table A1. The results of the assessment of DV for the refined model are presented in Table A2.
Table A1. Summary of measurement model evaluation results—original model.
Item  Construct  Reliability (>0.5): BR  SIN  MAL  EE  MON  |  AVE (>0.5): BR  SIN  MAL  EE  MON  |  CR (>0.7): BR  SIN  MAL  EE  MON
BEN1  BEN  0.720  0.752  0.727  0.789  0.726  |  0.771  0.722  0.661  0.744  0.693  |  0.855  0.808  0.787  0.864  0.788
BEN2       0.838  0.750  0.721  0.793  0.733
BEN3       0.754  0.663  0.534  0.649  0.620
COM1  COM  0.496 **  0.725  0.627  0.688  0.567  |  0.623  0.656  0.678  0.612  0.682  |  0.736  0.783  0.773  0.710  0.810
COM2       0.729  0.780  0.704  0.742  0.741
COM3       0.644  0.463 **  0.704  0.406 **  0.737
PR1   PR   0.692  0.656  0.923  0.688  0.630  |  0.707  0.693  0.530  0.779  0.401 **  |  0.835  0.936  0.227 **  0.870  −0.138 **
PR2        0.605  0.641  0.313 **  0.836  0.000 **
PR3        0.824  0.781  0.355 **  0.813  0.573
SA1   SA   0.840  0.793  0.828  0.851  0.619  |  0.852  0.811  0.846  0.827  0.735  |  0.831  0.773  0.828  0.802  0.756
SA2        0.865  0.830  0.865  0.803  0.852
TR1   TR   0.658  0.623  0.700  0.735  0.841  |  0.756  0.738  0.700  0.775  0.855  |  0.852  0.834  0.816  0.857  0.919
TR2        0.797  0.786  0.654  0.796  0.805
TR3        0.813  0.805  0.747  0.795  0.918
** Values below the threshold.
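For reference, the AVE and CR columns above follow the standard formulas for composite measurement models. A minimal sketch (the loadings below are illustrative and not taken from Table A1):

```python
def ave(loadings):
    """Average variance extracted: mean of the squared standardized loadings."""
    return sum(l * l for l in loadings) / len(loadings)

def composite_reliability(loadings):
    """Composite reliability: (sum of loadings)^2 divided by itself plus the
    summed error variances (1 - loading^2)."""
    s = sum(loadings) ** 2
    e = sum(1 - l * l for l in loadings)
    return s / (s + e)

loadings = [0.85, 0.87, 0.75]  # illustrative standardized loadings
print(round(ave(loadings), 3), round(composite_reliability(loadings), 3))
# → 0.681 0.864
```

Values above the 0.5 (AVE) and 0.7 (CR) thresholds indicate adequate convergent validity and internal consistency, as applied throughout the tables in this appendix.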
Table A2. Fornell–Larcker criterion for refined model.
Construct  Country  BEN  COM  PR  SC  Trust
BEN    BR   0.878
       SIN  0.850
       MAL  0.813
       EE   0.862
       MON  0.832
COM    BR   0.528  0.841
       SIN  0.446  0.903
       MAL  0.431  0.876
       EE   0.425  0.891
       MON  0.564  0.885
PR     BR   −0.408  −0.216  0.894
       SIN  −0.306  −0.250  0.856
       MAL  −0.237  −0.044  0.807
       EE   −0.534  −0.263  0.889
       MON  −0.105  0.130  0.869
SC     BR   0.606  0.330  −0.352  0.923
       SIN  0.639  0.453  −0.403  0.901
       MAL  0.588  0.405  −0.330  0.920
       EE   0.563  0.419  −0.617  0.909
       MON  0.495  0.482  −0.051  0.857
Trust  BR   0.708  0.481  −0.522  0.622  0.869
       SIN  0.556  0.642  −0.399  0.638  0.859
       MAL  0.646  0.406  −0.255  0.615  0.837
       EE   0.608  0.607  −0.576  0.730  0.880
       MON  0.722  0.555  −0.219  0.730  0.925
The values in bold (the diagonal) are the square roots of each construct's AVE. In all cases, this value exceeds the construct's correlations with the other constructs, demonstrating there are no issues with DV.
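The Fornell–Larcker check applied in Table A2 can be expressed as a small helper; the matrix and square-root-of-AVE values below are hypothetical, not taken from the table:

```python
import numpy as np

def fornell_larcker_ok(corr, sqrt_ave):
    """Discriminant validity holds when each construct's sqrt(AVE) exceeds
    its absolute correlation with every other construct."""
    corr = np.abs(np.asarray(corr, dtype=float))
    return all(sqrt_ave[i] > np.delete(corr[i], i).max()
               for i in range(len(sqrt_ave)))

# Illustrative 3-construct example (hypothetical values):
r = [[1.00,  0.53, -0.41],
     [0.53,  1.00, -0.22],
     [-0.41, -0.22, 1.00]]
print(fornell_larcker_ok(r, sqrt_ave=[0.88, 0.84, 0.89]))  # → True
```

The same check, run per country on the construct correlation matrix, reproduces the conclusion stated above for Table A2.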

Appendix C. Path Coefficients

Table A3 presents the bootstrapped p-values for the path coefficients.
Table A3. Summary of two-tailed p-values for path coefficients, obtained from bootstrapping in paired comparisons.
p-Values
Path         BRxEE  BRxMAL  BRxMON  BRxSIN  EExMAL  EExMON  EExSIN  MALxMON  MALxSIN  MONxSIN
BEN → Trust  0.066  0.793   0.813   0.056   0.022   0.050   0.818   0.984    0.022    0.042
COM → Trust  0.067  0.726   0.891   0.011   0.047   0.097   0.355   0.854    0.008    0.021
PR → Trust   0.321  0.046   0.336   0.208   0.344   0.860   0.816   0.235    0.469    0.664
SC → Trust   0.167  0.649   0.076   0.658   0.478   0.625   0.403   0.278    0.962    0.223
Values in bold indicate significant differences (p < 0.05).

Appendix D. HCTS Questionnaire

Table A4 presents the HCTS questionnaire revised based on this article's results. It comprises the constructs Competence (COM), Benevolence (BEN), Perceived Risk (PR, reversed), and Structural Assurance (SA). The content within brackets should be adjusted according to the technology assessed.
Table A4. Revised HCTS.
Construct  Item
COM1[Facial recognition systems] are competent and effective in [identifying dangerous individuals].
COM2[Facial recognition systems] perform their role in [identifying potentially dangerous individuals] very well.
PR1There could be negative consequences when using [facial recognition systems].
PR2It is risky to interact with [facial recognition systems].
BEN1[Facial recognition systems] will act in my best interest.
BEN2[Facial recognition systems] will do their best to help me if I need help.
BEN3[Facial recognition systems] are interested in understanding my needs and preferences.
SA1I feel assured that legal and technological structures provided by the government protect me when using [facial recognition systems].
SA2I can trust [facial recognition systems] because [Artificial Intelligence] systems are robust and safe.

References

  1. Gulati, S.; Sousa, S.; Lamas, D. Design, development and evaluation of a human-computer trust scale. Behav. Inf. Technol. 2019, 38, 1004–1015. [Google Scholar] [CrossRef]
  2. Sarstedt, M.; Henseler, J.; Ringle, C.M. Multigroup analysis in partial least squares (PLS) path modeling: Alternative methods and empirical results. In Measurement and Research Methods in International Marketing; Emerald Group Publishing Limited: Bingley, UK, 2011; pp. 195–218. [Google Scholar]
  3. Hair, J.F.; Hult, G.T.M.; Ringle, C.M.; Sarstedt, M. A Primer on Partial Least Squares Structural Equation Modeling (PLS-SEM); Sage Publications Inc.: Thousand Oaks, CA, USA, 2017. [Google Scholar]
  4. Henseler, J.; Ringle, C.M.; Sarstedt, M. Testing measurement invariance of composites using partial least squares. Int. Mark. Rev. 2016, 33, 405–431. [Google Scholar] [CrossRef]
  5. Hofstede, G.; Hofstede, G.J.; Minkov, M. Cultures and Organizations: Software of the Mind; Mcgraw-Hill: New York, NY, USA, 2005; Volume 2. [Google Scholar]
  6. Bach, T.A.; Khan, A.; Hallock, H.; Beltrão, G.; Sousa, S. A Systematic Literature Review of User Trust in AI-Enabled Systems: An HCI Perspective. Int. J. Hum.–Comput. Interact. 2024, 40, 1251–1266. [Google Scholar] [CrossRef]
  7. de Souza, D.F.; Sousa, S.; Kristjuhan-Ling, K.; Dunajeva, O.; Roosileht, M.; Pentel, A.; Mõttus, M.; Özdemir, M.C.; Gratšjova, Ž. Trust and Trustworthiness from Human-Centered Perspective in HRI—A Systematic Literature Review. arXiv 2025, arXiv:2501.19323. [Google Scholar]
  8. Beltrão, G.; Sousa, S.; Lamas, D.; Goh, S.T. Assessing Trust in Technology Across Cultures. Hum. Behav. Emerg. Technol. 2025. forthcoming. [Google Scholar]
  9. Luhmann, N. Trust and Power; John Wiley & Sons: Hoboken, NJ, USA, 2018. [Google Scholar]
  10. Robbins, B.G. What is trust? A multidisciplinary review, critique, and synthesis. Sociol. Compass 2016, 10, 972–986. [Google Scholar] [CrossRef]
  11. Gambetta, D. Can we trust trust. Trust. Mak. Break. Coop. Relations 2000, 13, 213–237. [Google Scholar]
  12. Kassim, E.S.; Jailani, S.F.A.K.; Hairuddin, H.; Zamzuri, N.H. Information system acceptance and user satisfaction: The mediating role of trust. Procedia-Soc. Behav. Sci. 2012, 57, 412–418. [Google Scholar] [CrossRef]
  13. Sousa, S.; Lamas, D.; Dias, P. Value creation through trust in technological-mediated social participation. Technol. Innov. Educ. 2016, 2, 1–10. [Google Scholar] [CrossRef]
  14. Bhattacherjee, A. Individual trust in online firms: Scale development and initial test. J. Manag. Inf. Syst. 2002, 19, 211–241. [Google Scholar]
  15. Mayer, R.C.; Davis, J.H.; Schoorman, F.D. An integrative model of organizational trust. Acad. Manag. Rev. 1995, 20, 709–734. [Google Scholar] [CrossRef]
  16. Mcknight, D.H.; Carter, M.; Thatcher, J.B.; Clay, P.F. Trust in a specific technology: An investigation of its components and measures. ACM Trans. Manag. Inf. Syst. (TMIS) 2011, 2, 1–25. [Google Scholar] [CrossRef]
  17. Lee, J.D.; See, K.A. Trust in automation: Designing for appropriate reliance. Hum. Factors 2004, 46, 50–80. [Google Scholar] [CrossRef]
  18. Gulati, S.; Sousa, S.; Lamas, D. Modelling trust in human-like technologies. In Proceedings of the 9th Indian Conference on Human-Computer Interaction, Bangalore, India, 16–18 December 2018; pp. 1–10. [Google Scholar]
  19. Emery, F.E.; Trist, E.L. Socio-technical systems. Manag. Sci. Model. Tech. 1960, 2, 83–97. [Google Scholar]
  20. Gefen, D.; Karahanna, E.; Straub, D.W. Trust and TAM in online shopping: An integrated model. MIS Q. 2003, 27, 51–90. [Google Scholar] [CrossRef]
  21. Gill, H.; Boies, K.; Finegan, J.E.; McNally, J. Antecedents of trust: Establishing a boundary condition for the relation between propensity to trust and intention to trust. J. Bus. Psychol. 2005, 19, 287–302. [Google Scholar] [CrossRef]
  22. McKnight, D.H.; Cummings, L.L.; Chervany, N.L. Initial trust formation in new organizational relationships. Acad. Manag. Rev. 1998, 23, 473–490. [Google Scholar] [CrossRef]
  23. Jessup, S.A.; Schneider, T.R.; Alarcon, G.M.; Ryan, T.J.; Capiola, A. The measurement of the propensity to trust automation. In Proceedings of the Virtual, Augmented and Mixed Reality, Applications and Case Studies: 11th International Conference, VAMR 2019, Held as Part of the 21st HCI International Conference, HCII 2019, Orlando, FL, USA, 26–31 July 2019; Proceedings, Part II 21. Springer: Berlin/Heidelberg, Germany, 2019; pp. 476–489. [Google Scholar]
  24. Kaplan, A.D.; Kessler, T.T.; Brill, J.C.; Hancock, P.A. Trust in artificial intelligence: Meta-analytic findings. Hum. Factors 2023, 65, 337–359. [Google Scholar] [CrossRef]
  25. Beltrão, G.; Sousa, S. Factors Influencing Trust in WhatsApp: A Cross-Cultural Study. In Proceedings of the International Conference on Human-Computer Interaction, Washington, DC, USA, 24–29 July 2021; Springer: Cham, Switzerland, 2021; pp. 495–508. [Google Scholar]
  26. McKnight, D.H.; Choudhury, V.; Kacmar, C. Developing and validating trust measures for e-commerce: An integrative typology. Inf. Syst. Res. 2002, 13, 334–359. [Google Scholar] [CrossRef]
  27. Pavlou, P.A.; Gefen, D. Building effective online marketplaces with institution-based trust. Inf. Syst. Res. 2004, 15, 37–59. [Google Scholar] [CrossRef]
  28. Bostrom, R.P.; Heinen, J.S. MIS problems and failures: A socio-technical perspective. Part I: The causes. MIS Q. 1977, 1, 17–32. [Google Scholar] [CrossRef]
  29. De Visser, E.J.; Peeters, M.M.; Jung, M.F.; Kohn, S.; Shaw, T.H.; Pak, R.; Neerincx, M.A. Towards a theory of longitudinal trust calibration in human–robot teams. Int. J. Soc. Robot. 2020, 12, 459–478. [Google Scholar] [CrossRef]
  30. Yamagishi, T.; Yamagishi, M. Trust and commitment in the United States and Japan. Motiv. Emot. 1994, 18, 129–166. [Google Scholar] [CrossRef]
  31. Hayashi, N.; Ostrom, E.; Walker, J.; Yamagishi, T. Reciprocity, trust, and the sense of control: A cross-societal study. Ration. Soc. 1999, 11, 27–46. [Google Scholar] [CrossRef]
  32. Vance, A.; Elie-Dit-Cosaque, C.; Straub, D.W. Examining trust in information technology artifacts: The effects of system quality and culture. J. Manag. Inf. Syst. 2008, 24, 73–100. [Google Scholar] [CrossRef]
  33. Carter, L.; Weerakkody, V. E-government adoption: A cultural comparison. Inf. Syst. Front. 2008, 10, 473–482. [Google Scholar] [CrossRef]
  34. Lowry, P.B.; Zhang, D.; Zhou, L.; Fu, X. Effects of culture, social presence, and group composition on trust in technology-supported decision-making groups. Inf. Syst. J. 2010, 20, 297–315. [Google Scholar] [CrossRef]
  35. Julsrud, T.E.; Krogstad, J.R. Is there enough trust for the smart city? Exploring acceptance for use of mobile phone data in Oslo and Tallinn. Technol. Forecast. Soc. Chang. 2020, 161, 120314. [Google Scholar] [CrossRef]
  36. Park, J.; Gunn, F.; Han, S.L. Multidimensional trust building in e-retailing: Cross-cultural differences in trust formation and implications for perceived risk. J. Retail. Consum. Serv. 2012, 19, 304–312. [Google Scholar] [CrossRef]
  37. Hofstede, G. Dimensionalizing cultures: The Hofstede model in context. In Online Readings in Psychology and Culture; The Berkeley Electronic Press: Berkeley, CA, USA, 2011; Volume 2, ISSN 2307-0919. [Google Scholar]
  38. McSweeney, B. Hofstede’s model of national cultural differences and their consequences: A triumph of faith-a failure of analysis. Hum. Relations 2002, 55, 89–118. [Google Scholar] [CrossRef]
  39. Jones, M.L. Hofstede-culturally questionable? In Proceedings of the 2007 Oxford Business & Economics Conference, Oxford, UK, 24–26 June 2007.
  40. House, R.J.; Hanges, P.J.; Ruiz-Quintanilla, S.A.; Dorfman, P.W.; Javidan, M.; Dickson, M.; Gupta, V. Cultural influences on leadership and organizations: Project GLOBE. Adv. Glob. Leadersh. 1999, 1, 171–233. [Google Scholar]
  41. Schwartz, S.H. The refined theory of basic values. In Values and Behavior: Taking a Cross Cultural Perspective; Springer: Berlin/Heidelberg, Germany, 2017; pp. 51–72. [Google Scholar]
  42. Gefen, D.; Heart, T.H. On the need to include national culture as a central issue in e-commerce trust beliefs. J. Glob. Inf. Manag. (JGIM) 2006, 14, 1–30. [Google Scholar] [CrossRef]
  43. Srite, M.; Karahanna, E. The role of espoused national cultural values in technology acceptance. MIS Q. 2006, 30, 679–704. [Google Scholar] [CrossRef]
  44. Gelfand, M.J.; Erez, M.; Aycan, Z. Cross-cultural organizational behavior. Annu. Rev. Psychol. 2007, 58, 479–514. [Google Scholar] [CrossRef]
  45. Mensah, I.K. Impact of power distance and uncertainty avoidance on the adoption of electronic government services. Int. J. E-Serv. Mob. Appl. (IJESMA) 2020, 12, 1–17. [Google Scholar] [CrossRef]
  46. Westjohn, S.A.; Magnusson, P.; Franke, G.R.; Peng, Y. Trust propensity across cultures: The role of collectivism. J. Int. Mark. 2022, 30, 1–17. [Google Scholar] [CrossRef]
  47. Hwang, Y.; Lee, K.C. Investigating the moderating role of uncertainty avoidance cultural values on multidimensional online trust. Inf. Manag. 2012, 49, 171–176. [Google Scholar] [CrossRef]
  48. Schlägel, C.; Sarstedt, M. Assessing the measurement invariance of the four-dimensional cultural intelligence scale across countries: A composite model approach. Eur. Manag. J. 2016, 34, 633–649. [Google Scholar] [CrossRef]
  49. Kuppelwieser, V.G.; Putinas, A.C.; Bastounis, M. Toward application and testing of measurement scales and an example. Sociol. Methods Res. 2019, 48, 326–349. [Google Scholar] [CrossRef]
  50. Sarstedt, M.; Ringle, C.M.; Hair, J.F. Partial least squares structural equation modeling. In Handbook of Market Research; Springer: Berlin/Heidelberg, Germany, 2021; pp. 587–632. [Google Scholar]
Figure 1. HCTS conceptual model.
Figure 2. Path coefficients per country.
Figure 3. Summary of procedure and respective outcomes.
Figure 4. Summary of compositional invariance results.
Figure 5. Country comparison based on the cultural dimensions model, presenting three dimensions.
Table 1. Summary of sample characteristics.
            Brazil  Estonia  Malaysia  Mongolia  Singapore  Total
Total       133     117      107       120       109        586
Gender
Female      41      70       46        74        73         304
Male        89      44       59        46        36         274
Other       3       3        2         –         –          8
Age Range
17 or less  3       –        –         2         –          5
18–24       93      14       76        53        2          238
25–34       28      36       11        32        32         139
35–44       4       41       16        23        45         129
45–54       5       22       3         9         17         56
55–64       –       4        1         –         8          13
65 or more  –       –        –         1         5          6
Table 2. Instrument under assessment (HCTS).
Construct  Item
COM1[Facial recognition systems] are competent and effective in [identifying dangerous individuals].
COM2[Facial recognition systems] perform their role in [identifying potentially dangerous individuals] very well.
COM3 **[Facial recognition systems] have all the functionalities I would expect from [Artificial Intelligence].
PR1There could be negative consequences when using [facial recognition systems].
PR2 **I must be cautious when using [facial recognition systems].
PR3It is risky to interact with [facial recognition systems].
BEN1[Facial recognition systems] will act in my best interest.
BEN2[Facial recognition systems] will do their best to help me if I need help.
BEN3[Facial recognition systems] are interested in understanding my needs and preferences.
SA1I feel assured that legal and technological structures provided by the government protect me when using [facial recognition systems].
SA2I can trust [facial recognition systems] because [Artificial Intelligence] systems are robust and safe.
TR1 *I am willing to use [facial recognition systems].
TR2 *I can rely on [facial recognition systems] for [law enforcement].
TR3 *I can trust the outcomes of [facial recognition systems].
* Trust (TR) items were used to assess the measurement and structural model using PLS-SEM. ** Items removed due to low loadings.
Table 3. Summary of measurement model evaluation results—refined model.
Item  Construct  Reliability (>0.5): BR  SIN  MAL  EE  MON  |  AVE (>0.5): BR  SIN  MAL  EE  MON  |  CR (>0.7): BR  SIN  MAL  EE  MON
BEN1  BEN  0.719  0.752  0.728  0.789  0.726  |  0.771  0.722  0.661  0.744  0.693  |  0.855  0.808  0.787  0.864  0.788
BEN2       0.837  0.750  0.721  0.794  0.733
BEN3       0.753  0.663  0.534  0.648  0.619
COM1  COM  0.575  0.776  0.734  0.773  0.714  |  0.707  0.815  0.767  0.794  0.784  |  0.704  0.798  0.708  0.747  0.789
COM2       0.841  0.852  0.801  0.815  0.854
PR1   PR   0.776  0.591  0.912  0.745  0.821  |  0.800  0.733  0.651  0.790  0.756  |  0.757  0.820  0.886  0.761  0.716
PR3        0.823  0.874  0.391 **  0.835  0.691
SA1   SA   0.841  0.794  0.828  0.850  0.619  |  0.852  0.811  0.846  0.827  0.735  |  0.831  0.773  0.828  0.802  0.756
SA2        0.865  0.830  0.865  0.803  0.852
TR1   TR   0.659  0.623  0.699  0.738  0.843  |  0.756  0.738  0.701  0.775  0.855  |  0.851  0.834  0.814  0.856  0.920
TR2        0.796  0.787  0.656  0.792  0.805
TR3        0.812  0.803  0.748  0.796  0.918
** Values below the threshold. AVE = average variance extracted; CR = composite reliability; BR = Brazil, SIN = Singapore, MAL = Malaysia, EE = Estonia, MON = Mongolia.
Table 4. Compositional invariance significance test summary.
Permutation p-Value
Construct  BRxSIN  BRxMAL  BRxEE  BRxMON  SINxMAL  SINxEE  SINxMON  MALxEE  MALxMON  EExMON
BEN        0.941   0.011   0.019  0.597   0.123    0.139   0.712    0.893   0.446    0.457
COM        0.325   0.378   0.179  0.675   0.785    0.612   0.682    0.869   0.659    0.468
PR         0.155   0.009   0.593  0.392   0.056    0.244   0.265    0.003   0.576    0.227
SA         0.937   0.850   0.106  0.157   0.894    0.081   0.217    0.058   0.231    0.003
Trust      0.833   0.002   0.079  0.045   0.036    0.243   0.047    0.008   0.089    0.088
Values in bold represent p > 0.05, indicating that the difference in the construct’s composition is not significantly different. ✓ represents the cases where compositional invariance was achieved. BR = Brazil, SIN = Singapore, MAL = Malaysia, EE = Estonia, MON = Mongolia.
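For reference, the statistic behind the compositional invariance test (MICOM step 2) is the correlation c between composite scores formed from the same indicator data using the outer weights estimated in each group. A minimal illustrative sketch in Python, using synthetic data and hypothetical weight vectors (not the study's):

```python
import numpy as np

def composite_correlation(indicators, weights_a, weights_b):
    """MICOM step-2 statistic c: correlation between composite scores
    built from the same indicators with two groups' outer weights."""
    scores_a = np.asarray(indicators) @ np.asarray(weights_a)
    scores_b = np.asarray(indicators) @ np.asarray(weights_b)
    return float(np.corrcoef(scores_a, scores_b)[0, 1])

# Synthetic three-indicator construct; weights are illustrative only.
rng = np.random.default_rng(42)
X = rng.normal(size=(150, 3))
c = composite_correlation(X, [0.50, 0.30, 0.40], [0.45, 0.35, 0.38])
```

In the full MICOM procedure, c is then compared against a permutation distribution obtained by re-estimating the weights on randomly permuted group assignments; a c close to 1 with p > 0.05 supports compositional invariance.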
Table 5. Summary of mean value significance tests (permutation p-values).

| Construct | BR×SIN | BR×MAL | BR×EE | BR×MON | SIN×MAL | SIN×EE | SIN×MON | MAL×EE | MAL×MON | EE×MON |
|-----------|--------|--------|-------|--------|---------|--------|---------|--------|---------|--------|
| BEN | 0.011 | 0.429 | 0.000 | 0.078 | 0.000 | 0.321 | 0.000 | 0.000 | 0.437 | 0.000 |
| COM | 0.335 | 0.230 | 0.101 | 0.105 | 0.050 | 0.569 | 0.380 | 0.009 | 0.011 | 0.761 |
| PR | 0.053 | 0.093 | 0.000 | 0.304 | 0.781 | 0.040 | 0.002 | 0.013 | 0.006 | 0.000 |
| SA | 0.001 | 0.000 | 0.060 | 0.000 | 0.064 | 0.000 | 0.004 | 0.000 | 0.265 | 0.000 |
| Trust | 0.462 | 0.779 | 0.000 | 0.526 | 0.241 | 0.002 | 0.210 | 0.000 | 0.776 | 0.000 |

Values in bold represent p > 0.05, indicating that the constructs' mean values do not differ significantly. BR = Brazil, SIN = Singapore, MAL = Malaysia, EE = Estonia, MON = Mongolia.
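The p-values above come from permutation tests. As a minimal illustration of how such a p-value is obtained for a difference in group means, the following Python sketch uses synthetic composite scores for two hypothetical country samples (not the study's data):

```python
import numpy as np

def permutation_p_value(a, b, n_perm=5000, seed=0):
    """Two-sided permutation test for a difference in group means:
    the fraction of random relabelings whose absolute mean difference
    is at least as large as the observed one."""
    rng = np.random.default_rng(seed)
    a, b = np.asarray(a, float), np.asarray(b, float)
    observed = abs(a.mean() - b.mean())
    pooled = np.concatenate([a, b])
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)                     # random relabeling
        diff = abs(pooled[:len(a)].mean() - pooled[len(a):].mean())
        if diff >= observed:
            count += 1
    return count / n_perm

# Hypothetical composite scores for two country samples
g1 = np.random.default_rng(1).normal(3.5, 0.8, 120)
g2 = np.random.default_rng(2).normal(4.3, 0.8, 110)
p = permutation_p_value(g1, g2)
```

A p-value above 0.05 would indicate that the observed mean difference is compatible with random group assignment, i.e., no significant difference.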
Table 6. Summary of variance significance tests (permutation p-values).

| Construct | BR×SIN | BR×MAL | BR×EE | BR×MON | SIN×MAL | SIN×EE | SIN×MON | MAL×EE | MAL×MON | EE×MON |
|-----------|--------|--------|-------|--------|---------|--------|---------|--------|---------|--------|
| BEN | 0.056 | 0.021 | 0.157 | 0.518 | 0.651 | 0.585 | 0.011 | 0.273 | 0.006 | 0.023 |
| COM | 0.653 | 0.612 | 0.948 | 0.021 | 0.954 | 0.763 | 0.015 | 0.721 | 0.010 | 0.021 |
| PR | 0.004 | 0.017 | 0.157 | 0.955 | 0.830 | 0.196 | 0.032 | 0.439 | 0.037 | 0.170 |
| SA | 0.197 | 0.696 | 0.765 | 0.339 | 0.156 | 0.379 | 0.052 | 0.414 | 0.557 | 0.187 |
| Trust | 0.057 | 0.001 | 0.396 | 0.193 | 0.291 | 0.281 | 0.002 | 0.036 | 0.000 | 0.050 |

Values in bold represent p > 0.05, indicating that the constructs' variances do not differ significantly. ✓ represents the cases where invariance was achieved. BR = Brazil, SIN = Singapore, MAL = Malaysia, EE = Estonia, MON = Mongolia.
Table 7. Path coefficients per country.

| Path | BR | SIN | MAL | EE | MON |
|------|-------|-------|-------|-------|-------|
| BEN → Trust | 0.388 | 0.133 | 0.400 | 0.162 | 0.346 |
| COM → Trust | 0.145 | 0.414 | 0.128 | 0.330 | 0.220 |
| PR → Trust | −0.243 | −0.134 | −0.066 | −0.116 | −0.155 |
| SA → Trust | 0.244 | 0.298 | 0.293 | 0.409 | 0.404 |
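Path coefficients of this kind are standardized weights of the endogenous construct (Trust) on its predictors. As a rough, hypothetical illustration of their interpretation, the sketch below fits ordinary least squares on z-standardized synthetic composite scores; this is a plain-regression simplification, not the PLS-SEM algorithm used in the study:

```python
import numpy as np

def standardized_paths(X, y):
    """OLS coefficients after z-standardizing predictors and outcome,
    analogous in interpretation to standardized path coefficients."""
    Xz = (X - X.mean(axis=0)) / X.std(axis=0)
    yz = (y - y.mean()) / y.std()
    coef, *_ = np.linalg.lstsq(Xz, yz, rcond=None)
    return coef

# Synthetic composite scores for four hypothetical predictors of Trust
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))                 # e.g. BEN, COM, PR, SA
y = (0.4 * X[:, 0] + 0.2 * X[:, 1] + 0.3 * X[:, 3]
     + rng.normal(scale=0.5, size=200))
paths = standardized_paths(X, y)              # one weight per predictor
```

Because all variables are standardized, each coefficient expresses the change in Trust, in standard deviations, per standard-deviation change in the predictor, which is what makes the per-country values in Table 7 comparable within a column.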

Share and Cite

MDPI and ACS Style

Beltrão, G.; Sousa, S.; Lamas, D. Assessing the Measurement Invariance of the Human–Computer Trust Scale. Electronics 2025, 14, 1806. https://doi.org/10.3390/electronics14091806
