Article

GAAIS-J: Translation and Validation of the Japanese Version of the General Attitudes Toward Artificial Intelligence Scale

Yasumasa Yamaguchi *, Chiaki Hashimoto and Nagayuki Saito
Department of Sports Intelligence & Mass Media, Sendai University, 2-2-18, Funaoka Minami, Shibata-machi, Shibata-Gun 989-1693, Miyagi-ken, Japan
* Author to whom correspondence should be addressed.
Behav. Sci. 2025, 15(12), 1668; https://doi.org/10.3390/bs15121668
Submission received: 10 August 2025 / Revised: 26 November 2025 / Accepted: 27 November 2025 / Published: 3 December 2025
(This article belongs to the Section Social Psychology)

Abstract

The rapid integration of artificial intelligence (AI) into daily life highlights the need for culturally adapted tools to assess public attitudes. This study translated, culturally adapted, and validated the Japanese version of the General Attitudes toward Artificial Intelligence Scale (GAAIS-J) to enable cross-cultural research. The original GAAIS was translated, back-translated, and reviewed by experts for cultural and linguistic equivalence. A web-based survey of 3689 Japanese adults was conducted. Confirmatory factor analysis supported a two-factor model—positive and negative attitudes—after removing one low-loading item and allowing select item covariances, yielding acceptable fit indices. Reliability was high (ω = 0.94, α = 0.92), and average variance extracted values indicated satisfactory convergent validity. Construct validity was demonstrated through correlations with AI trust/distrust, generative AI usage frequency, socioeconomic status, gender, and Big Five traits. Positive attitudes were linked to higher trust, income, and openness; negative attitudes correlated with older age, lower education, and lower agreeableness. The GAAIS-J is a reliable, valid instrument for assessing AI attitudes in Japan and supporting international comparisons.

1. Introduction

Artificial Intelligence (AI) is being increasingly integrated into various domains such as healthcare, finance, education, and entertainment, reshaping societal structures and everyday life (Dixon, 2024). As the influence of AI expands, understanding public attitudes toward it becomes critical for its ethical development, responsible deployment, and social acceptance. One established tool for measuring attitudes toward AI is the General Attitudes toward Artificial Intelligence Scale (GAAIS), developed by Schepman and Rodway (2023). The GAAIS captures both positive and negative perceptions of AI and has demonstrated strong psychometric properties in prior research.
The GAAIS employs a two-factor structure—positive and negative attitudes—enabling comprehensive assessment of cognitive and emotional evaluations of AI (Schepman & Rodway, 2023). It has been associated with various psychosocial variables such as personality traits, political orientation, and risk perception, and shows external validity through its relationships with AI trust and usage experiences (Fast & Horvitz, 2017; Zhang & Dafoe, 2019).
Given increasing interest in cross-cultural AI perception, adapting such instruments requires more than linguistic translation. Cultural context significantly shapes AI-related attitudes, and establishing construct equivalence through cross-cultural validation is essential (F. J. R. Van de Vijver & Leung, 1997; Tussyadiah, 2020). Thus, translating and validating international psychological instruments in the Japanese context is a critical step toward advancing research on attitudes toward AI.
Despite the growing relevance of AI in Japan, no standardized Japanese version of the GAAIS currently exists. Previous research has suggested that Japanese attitudes toward AI and robotics differ from those in Western societies, often showing greater affinity (Nomura et al., 2006; Nomura et al., 2008). Developing a culturally adapted, psychometrically sound version of the GAAIS would thus facilitate both domestic and international comparative studies.
The present study aimed to translate and culturally validate the GAAIS for use in Japan, examining structural equivalence and external validity while maintaining fidelity to the original instrument. Through a rigorous translation and back-translation process and confirmatory factor analysis, this study assesses the structural validity of the GAAIS-J. Further, associations with theoretically related constructs—including AI trust/distrust, generative AI usage frequency, and Big Five personality traits—are examined to evaluate external validity. The validated GAAIS-J is expected to serve as a standard tool for assessing AI attitudes in Japan and contribute to cross-cultural research in human–AI interaction.

2. Literature Review

Attitudes toward AI are grounded in established theories from technology acceptance and trust research. The Technology Acceptance Model (TAM) emphasizes perceived usefulness and ease of use as predictors of technology attitudes (Davis, 1989). Studies applying TAM to AI show that usefulness strongly predicts AI adoption, while ease of use varies in effect (Schepman & Rodway, 2023). The Unified Theory of Acceptance and Use of Technology (UTAUT) expands TAM by adding performance expectancy, effort expectancy, social influence, and facilitating conditions, moderated by user traits (Venkatesh et al., 2003). Both models explain how functional appraisals and contextual factors shape AI attitudes. Trust theory further enriches this understanding. Trust in AI—defined as willingness to rely on AI under uncertainty—depends on perceived competence, integrity, and benevolence (Glikson & Woolley, 2020). High trust correlates with acceptance, while distrust limits uptake. Together, these models frame attitudes as shaped by rational appraisals and emotional evaluations.
Culture moderates AI perceptions. Japanese society is often described as more accepting of AI, influenced by animism and media (Nomura et al., 2006). While early studies showed differences in robot attitudes (Bartneck et al., 2010), later work suggested more cross-cultural similarity than assumed (MacDorman et al., 2009). Nonetheless, cultural framing matters. Japanese individuals tend to ascribe more emotional and autonomous qualities to AI (Nomura et al., 2006), while Westerners focus on practical utility. Recent studies show that similar psychological factors predict AI attitudes in both Japan and the West, including job loss anxiety and AI familiarity (Persson et al., 2021). Cultural narratives still color perception, but universal drivers such as trust, exposure, and perceived threat are influential across contexts.
Several instruments have been developed to assess attitudes toward AI. The General Attitudes toward Artificial Intelligence Scale (GAAIS) captures both positive and negative sentiments (Schepman & Rodway, 2023). The Attitude Toward Artificial Intelligence Scale (ATAI) measures acceptance and fear with five items (Sindermann et al., 2020), and the Threats of AI Scale (TAI) focuses on fear-related aspects (Kieslich et al., 2021). In Japan, the Negative Attitudes toward Robots Scale (NARS) was one of the earliest tools developed to capture public responses to robotics (Nomura et al., 2008), reflecting the country’s longstanding interest in human–robot interaction. More recently, trust-specific instruments have also emerged, including Katase’s (2021) AI Trust Scale, which differentiates between distrust in AI’s fidelity and trust in its social utility. Outside Japan, Jian et al. (2000) proposed a general trust-in-automation scale, and later tools have measured trust’s specific subdimensions (Glikson & Woolley, 2020). Together, these instruments provide a multidimensional view of AI perception across contexts.
Demographic and psychological factors also influence AI attitudes. Age consistently predicts AI perceptions, with younger people showing more optimism (Zhang & Dafoe, 2019). Education and socioeconomic status (SES) correlate with AI familiarity and acceptance (Ipsos, 2022). Gender effects are modest—men often report slightly greater confidence, but attitudinal differences are small when controlling for other factors (Schepman & Rodway, 2023). Personality traits also matter: openness and conscientiousness are linked to positive AI views, while agreeableness predicts trust. Neuroticism shows mixed associations, and extraversion tends to be unrelated (Schepman & Rodway, 2023). Traits such as technophilia and technology anxiety further contribute to individual differences (Sindermann et al., 2020). Cross-national research indicates that these predictors operate similarly across cultures; for instance, job-loss concerns and AI exposure affect attitudes in both Sweden and Japan (Persson et al., 2021). Thus, while certain measurement tools and cultural contexts are specific to Japan, the underlying psychological mechanisms shaping AI attitudes appear broadly shared across societies.
Building upon these findings, the present study aimed to translate and culturally validate the General Attitudes toward Artificial Intelligence Scale (GAAIS) for use in Japan. In doing so, we also examined how demographic and psychological factors—such as age, education, income, and personality traits—relate to individuals’ positive and negative attitudes toward AI. These variables were included not as the main focus but as external criteria to test the construct validity of the adapted scale. By integrating the translation and validation process with an examination of theoretically relevant correlates, this study seeks to provide a comprehensive understanding of AI attitudes in the Japanese context and to contribute to cross-cultural comparisons with the original English version.

3. Translation Procedure

Two native Japanese speakers familiar with neuropsychological questionnaires independently translated the original GAAIS items into Japanese. A reconciled version was then created by integrating the two translations. This version was back-translated into English by a bilingual speaker fluent in both Japanese and English. Discrepancies between the original and back-translated versions were reviewed and resolved through consensus, emphasizing conceptual over literal translation to ensure semantic and cultural equivalence.

4. Method

A web-based survey was conducted to assess the reliability and validity of the GAAIS-J. Participants were informed that participation was voluntary and that their responses would be anonymized and analyzed statistically; informed consent was obtained before the survey began. Participants were recruited via a survey company that distributed the questionnaire only to eligible monitors. The survey was conducted from 14 to 17 March 2025, yielding 5575 responses. After excluding responses that failed attention checks, data from 3689 valid participants were analyzed (2161 males, 1528 females; mean age = 51.65 years, SD = 11.52). The questionnaire covered demographic information, the GAAIS-J, items on generative AI usage frequency, the Japanese version of the Ten Item Personality Inventory (TIPI-J; Oshio et al., 2012) for the Big Five personality traits, and Katase’s (2021) AI Trust Scale for construct validity testing. The survey was conducted with the approval of the ethics committee of the first author’s affiliated research institution.
All analyses were conducted using JASP (version 0.19.3), an open-source software for advanced statistical modeling and structural equation analysis.
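Although all analyses were run in JASP, the measurement model can be expressed compactly in lavaan-style syntax. The following is a minimal sketch, not the authors' workflow, of the initial two-factor CFA using the Python package semopy; the data file name and item column labels (Pos1 through Neg20) are assumptions for illustration, and semopy's default maximum likelihood objective does not reproduce the robust (MLR) corrections reported in the Results.

```python
# Minimal sketch (not the authors' JASP workflow) of the initial two-factor
# GAAIS-J CFA using semopy, which accepts lavaan-style model syntax.
# The CSV file and item column names are assumptions for illustration.
import pandas as pd
import semopy

GAAIS_DESC = (
    "Positive =~ Pos1 + Pos2 + Pos4 + Pos5 + Pos7 + Pos11 + Pos12"
    " + Pos13 + Pos14 + Pos16 + Pos17 + Pos18\n"
    "Negative =~ Neg3 + Neg6 + Neg8 + Neg9 + Neg10 + Neg15 + Neg19 + Neg20\n"
)

data = pd.read_csv("gaais_j_responses.csv")  # hypothetical item-level dataset

model = semopy.Model(GAAIS_DESC)
model.fit(data)                      # plain ML; no robust (MLR) corrections
print(semopy.calc_stats(model).T)    # chi-square, CFI, TLI, RMSEA, ...
print(model.inspect(std_est=True))   # standardized factor loadings
```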

5. Results

5.1. Confirmatory Factor Analysis (CFA)

CFA was conducted assuming the same factor structure as the original GAAIS. Unlike the original validation study (Schepman & Rodway, 2023), which employed the DWLS estimator, we used the robust maximum likelihood (MLR) estimator. Although our items were ordinal, we opted for MLR because our dataset was substantially larger and the response distribution approximated continuity. Prior simulation studies have shown that MLR performs comparably to DWLS for Likert-type scales with five or more response categories (Rhemtulla et al., 2012; Li, 2016), and it offers greater robustness to non-normality and missing data through full-information estimation. Initial model fit indices were χ² = 4371.523, df = 169, p < 0.001, CFI = 0.884, TLI = 0.870, NFI = 0.880, PNFI = 0.783, RFI = 0.865, IFI = 0.884, and RNI = 0.884. However, RMSEA = 0.082 (90% CI = [0.080, 0.084]) and SRMR = 0.102 indicated suboptimal model fit. Item Neg3 (“Organizations use AI unethically”) showed a low factor loading (0.17), suggesting poor alignment with the intended construct. Given its impact on convergent validity and model fit, Neg3 was removed and the CFA was re-conducted (Table 1).
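Continuing the semopy sketch from Section 4, the revision amounts to deleting Neg3 from the model description; the residual covariances reported in the next paragraph can be freed with lavaan-style `~~` terms. Only the two pairs named in the text are shown, and the published model may include others.

```python
# Continuation of the earlier sketch: Neg3 dropped; residual covariances
# freed for the two item pairs named in the text ("e.g." implies the
# published model may free additional pairs).
REVISED_DESC = (
    "Positive =~ Pos1 + Pos2 + Pos4 + Pos5 + Pos7 + Pos11 + Pos12"
    " + Pos13 + Pos14 + Pos16 + Pos17 + Pos18\n"
    "Negative =~ Neg6 + Neg8 + Neg9 + Neg10 + Neg15 + Neg19 + Neg20\n"
    "Neg15 ~~ Neg9\n"   # residual covariance (lavaan-style "~~")
    "Pos7 ~~ Pos18\n"
)

revised = semopy.Model(REVISED_DESC)
revised.fit(data)
print(semopy.calc_stats(revised).T)  # compare fit with the initial model
```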
The revised model, in which residual covariances were also allowed between specific item pairs (e.g., Neg15 and Neg9; Pos7 and Pos18), demonstrated improved fit: χ² = 2163.885, df = 145, p < 0.001, CFI = 0.943, TLI = 0.932, NNFI = 0.932, NFI = 0.939, RFI = 0.928, IFI = 0.943, RNI = 0.943, RMSEA = 0.061 (90% CI = [0.059, 0.064]), and SRMR = 0.070. Reliability indicators were also strong (McDonald’s ω = 0.937; Cronbach’s α = 0.924). The AVE for the positive factor exceeded 0.50, while that for the negative factor was acceptable (>0.40) (Table 2). The final model met all reliability and validity standards.
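These factor-level summaries can be spot-checked from the standardized loadings in Table 2 with the usual composite reliability (ω) and AVE formulas. The sketch below hard-codes the positive-factor loadings from Table 2; note that it yields factor-level values, whereas the ω and α reported above are scale-level estimates from JASP.

```python
# Spot-check of composite reliability (omega) and AVE from standardized
# loadings, using the standard congeneric-model formulas:
#   omega = (sum(l))^2 / ((sum(l))^2 + sum(1 - l^2)),   AVE = mean(l^2)
import numpy as np

def omega_and_ave(loadings):
    l = np.asarray(loadings, dtype=float)
    common = l.sum() ** 2          # variance attributable to the factor
    resid = (1.0 - l**2).sum()     # residual variances (standardized items)
    return common / (common + resid), (l**2).mean()

# Positive-factor loadings from Table 2 (factor-level check only).
pos = [0.587, 0.750, 0.773, 0.774, 0.771, 0.793,
       0.723, 0.590, 0.706, 0.618, 0.769, 0.678]
print(omega_and_ave(pos))  # roughly (0.93, 0.51): AVE > 0.50, as reported
```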

5.2. Construct Validity

In Japan, no psychometric scale with established reliability and validity currently exists for measuring general impressions of AI. However, with the recent advancement of AI technologies, scales that assess trust and perceptions toward generative AI have begun to receive attention. Katase (2021) developed the “AI Trust Scale” through a nationwide web survey and examined its reliability and validity. The scale consists of two subfactors, “distrust in AI’s fidelity” and “trust in AI’s social utility,” with reliability coefficients (α) of 0.73 and 0.72, respectively, although Katase concluded that further validation of the scale was needed. In the present study, Katase’s (2021) instrument was used as one criterion to assess the external validity of the GAAIS-J. Specifically, we examined whether the mean scores of the GAAIS-J subscales (positive and negative attitudes) correlated with scores on Katase’s two subfactors and with the weekly frequency of generative AI use. Spearman’s rank correlation coefficients were used for the analysis (Table 3).
The GAAIS-J positive subscale showed a strong positive correlation with trust in AI’s social utility (ρ = 0.617, p < 0.001) and a moderate correlation with generative AI usage frequency (ρ = 0.313, p < 0.001). The negative subscale showed a moderate negative correlation with trust in AI’s social utility (ρ = −0.367, p < 0.001), a moderate positive correlation with distrust in AI’s fidelity (ρ = 0.400, p < 0.001), and a weak but significant negative correlation with AI usage frequency (ρ = −0.093, p < 0.001). These findings support the scale’s external validity.
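Computationally these are plain Spearman rank correlations; a sketch with SciPy follows, where `data` is the DataFrame from the sketch in Section 4 and the subscale/usage column names are hypothetical stand-ins for the study's actual variable labels.

```python
# External-validity checks as Spearman rank correlations; the column
# names below are assumptions, not the survey's actual variable labels.
from scipy.stats import spearmanr

rho, p = spearmanr(data["gaais_pos_mean"], data["trust_social_utility"])
print(f"Pos x Trust in AI's social utility: rho = {rho:.3f}, p = {p:.3g}")

rho, p = spearmanr(data["gaais_neg_mean"], data["genai_weekly_use"])
print(f"Neg x Generative AI usage: rho = {rho:.3f}, p = {p:.3g}")
```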

5.3. Socioeconomic Status (SES) and Gender Analysis

To examine the relationships between the GAAIS-J’s positive and negative factors and socioeconomic status (SES) as well as gender, correlation analyses were conducted using Spearman’s rank correlation coefficients (Table 4). SES indicators included age, income, and highest level of education. The analysis revealed no significant correlation between the positive factor and age (ρ = −0.012). However, significant positive correlations were observed with income (ρ = 0.188, p < 0.001) and education level (ρ = 0.051, p < 0.01), suggesting a modest association between more positive attitudes toward generative AI and higher income and educational attainment.
For the negative factor, a significant positive correlation was found with age (ρ = 0.033, p < 0.05), while significant negative correlations were observed with education (ρ = −0.284, p < 0.001) and income (ρ = −0.065, p < 0.001). These results suggest that negative attitudes toward generative AI may be more common among older individuals and those with lower levels of education, although the effect sizes were small.
Gender differences were observed only for the positive factor: men scored slightly higher than women, t(3687) = 5.663, p < 0.001, Cohen’s d = 0.189, 95% CI [0.124, 0.255] (small). No difference emerged for the negative factor, t(3687) = −0.521, p = 0.602, Cohen’s d = −0.017, 95% CI [−0.083, 0.048].
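For reference, this comparison reduces to a Student's t-test plus a pooled-SD Cohen's d; the sketch below also adds the normal-approximation confidence interval for d. The gender column and its coding are assumptions for illustration.

```python
# Gender comparison: equal-variance t-test with pooled-SD Cohen's d and a
# normal-approximation 95% CI for d. Column names/coding are assumptions.
import numpy as np
from scipy.stats import ttest_ind

men = data.loc[data["gender"] == "male", "gaais_pos_mean"].to_numpy()
women = data.loc[data["gender"] == "female", "gaais_pos_mean"].to_numpy()

t, p = ttest_ind(men, women)  # Student's t-test, df = n1 + n2 - 2
n1, n2 = len(men), len(women)
pooled_sd = np.sqrt(((n1 - 1) * men.var(ddof=1) + (n2 - 1) * women.var(ddof=1))
                    / (n1 + n2 - 2))
d = (men.mean() - women.mean()) / pooled_sd
se_d = np.sqrt((n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2)))  # approx. SE
print(f"t({n1 + n2 - 2}) = {t:.3f}, p = {p:.3g}, "
      f"d = {d:.3f} [{d - 1.96 * se_d:.3f}, {d + 1.96 * se_d:.3f}]")
```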
Taken together, these findings indicate that the positive factor of the GAAIS-J is primarily associated with income and education, with men exhibiting more favorable attitudes toward generative AI. In contrast, the negative factor was more strongly associated with older age and lower educational attainment, and showed no significant gender differences.

5.4. Big Five Personality Traits

To examine the associations between the positive and negative factors of the GAAIS-J and the Big Five personality traits (extraversion, agreeableness, conscientiousness, neuroticism, and openness), correlation analyses were conducted using Spearman’s rank correlation coefficients (Table 5).
The results showed that the positive factor of the GAAIS-J was significantly positively correlated with extraversion (ρ = 0.087, p < 0.001), agreeableness (ρ = 0.206, p < 0.001), conscientiousness (ρ = 0.302, p < 0.001), and openness (ρ = 0.302, p < 0.001). Among these, the strongest correlations were observed with openness and conscientiousness, suggesting that individuals with higher levels of these traits tend to have more positive attitudes toward generative AI. In contrast, the negative factor was significantly negatively correlated with agreeableness (ρ = −0.215, p < 0.001), conscientiousness (ρ = −0.231, p < 0.001), and openness (ρ = −0.191, p < 0.001), and showed no significant correlation with extraversion (ρ = −0.019, p = 0.243).
Notably, neuroticism showed a distinct pattern: it was negatively correlated with the positive factor (ρ = −0.172, p < 0.001) and positively correlated with the negative factor (ρ = 0.141, p < 0.001). This suggests that individuals higher in neuroticism tend to hold less positive and more negative attitudes toward generative AI, possibly reflecting heightened anxiety, distrust, or sensitivity to perceived risks associated with AI technologies. Unlike the other Big Five traits, which showed consistent directions of correlation across both factors, neuroticism emerged as a potential dispositional risk factor for AI-related apprehension.
Taken together, these findings confirm that both the positive and negative factors of the GAAIS-J are meaningfully associated with the Big Five traits. These relationships suggest that individuals’ attitudes and trust toward AI may be influenced by their personality characteristics, with neuroticism operating in a direction opposite to that of traits such as openness and conscientiousness.

6. Discussion

6.1. Cultural Adaptation and Item Evaluation

This study aimed to validate the reliability and construct validity of the Japanese version of the General Attitudes toward Artificial Intelligence Scale (GAAIS-J) through a large-scale web-based survey. The findings demonstrated significant associations between attitudes toward generative AI and individual differences, including socioeconomic status (SES), gender, and Big Five personality traits, supporting the external validity of the scale.
Item diagnostics revealed that item Neg3 (“Organizations use AI unethically”) showed weak loading on its intended factor. A plausible interpretation is that this item evokes culturally framed ethical concerns, which may diverge between Japanese and Anglophone contexts. Previous research suggests that Japanese individuals may emphasize relational or assurance-based trust over generalized trust (Yamagishi & Yamagishi, 1994), potentially reducing the salience of abstract institutional-ethics prompts.
The removal of Neg3 was thus based not only on its weak psychometric performance but also on its potential for cultural non-equivalence. In Japan, public trust in institutions tends to be higher and perceptions of AI more technophilic (Nomura et al., 2006), contrasting with more skeptical views prevalent in Western contexts (Pew Research Center, 2025). Recent global surveys suggest that responses to questions about ethical AI misuse often reflect institutional trust rather than AI-specific concerns (Gillespie et al., 2025; Rieger et al., 2021). Consequently, Neg3 may conflate cultural differences in institutional cynicism and attitudes toward AI itself. Its exclusion enhances construct validity and aligns with cross-cultural adaptation guidelines that recommend removing items with interpretive inequivalence (F. Van de Vijver & Tanzer, 2004).
Furthermore, a comparison with the original English GAAIS (Schepman & Rodway, 2023) indicates that the Japanese adaptation maintained the same two-factor structure and comparable reliability indices. Apart from the exclusion of one culturally inconsistent item (Neg3), the pattern of factor loadings and the distinction between positive and negative attitudes were consistent with those reported in the original validation. These results suggest that the GAAIS-J preserves the theoretical and psychometric integrity of the source instrument while achieving cultural and linguistic equivalence.

6.2. Model Evaluation and Construct Validity

In line with structural equation modeling guidelines (Byrne, 2016; Kline, 2023), modification indices (MIs) were examined. A small number of theoretically justified residual covariances were introduced between semantically similar items (Brown, 2015). These additions, which do not alter the latent factor structure, addressed method variance while preserving theoretical coherence. All covariances were guided by conceptual rationale, adhering to psychometric standards for model parsimony and theoretical justification (MacCallum et al., 1992; Hair et al., 2019).
The scale’s external validity was supported through correlations with variables such as SES, personality traits, and general AI usage, which align with well-established frameworks. For instance, the Technology Acceptance Model (TAM) and UTAUT frameworks predict that positive AI attitudes relate to greater perceived usefulness and actual use (Davis, 1989). Observed associations between personality traits (e.g., openness, conscientiousness) and AI attitudes reflect broader findings linking personality to digital adoption in organizational contexts. Trust in AI, as a multidimensional construct encompassing both affective and cognitive components (Glikson & Woolley, 2020), is echoed in the scale’s two-factor structure.

6.3. Generalizability, Limitations, and Future Directions

Socio-demographic gradients such as age and education mirrored typical digital-divide patterns, yet these results should be interpreted with caution. Given the large sample size, even very small effects reached statistical significance. To address this, interpretations were calibrated according to effect sizes, and we recommend that future studies supplement p-values with confidence intervals (Putnick & Bornstein, 2016).
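One concrete way to implement this recommendation is to attach a Fisher-z interval to each reported coefficient; the sketch below applies it to the positive-attitude and income correlation from Table 4 (for Spearman's ρ this normal-theory interval is only an approximation).

```python
# Fisher-z confidence interval for a correlation coefficient; for
# Spearman's rho this normal-theory interval is an approximation.
import numpy as np

def fisher_ci(r, n, z_crit=1.96):
    z = np.arctanh(r)                # Fisher z-transform of r
    se = 1.0 / np.sqrt(n - 3)        # large-sample standard error
    return float(np.tanh(z - z_crit * se)), float(np.tanh(z + z_crit * se))

# Example: positive attitudes x income (Table 4), n = 3689.
print(fisher_ci(0.188, 3689))  # roughly (0.157, 0.219)
```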
Although alternative structural specifications (e.g., higher-order or bifactor models) could be examined, they were intentionally excluded in order to maintain structural fidelity with the original GAAIS (Schepman & Rodway, 2023). As emphasized by Borsa et al. (2012), cross-cultural validation should prioritize construct equivalence, and introducing new model forms can risk conceptual drift.
The AVE for the negative factor fell slightly below the conventional 0.50 threshold (Fornell & Larcker, 1981), a result that is common for scales encompassing items with diverse content (Hair et al., 2019). This factor includes elements ranging from technical malfunction to distrust, which may naturally reduce inter-item correlations. Nevertheless, all factor loadings were significant and theoretically coherent. Discriminant validity was supported by distinct correlation patterns between the two factors and external constructs (e.g., general trust, AI-specific distrust, and Big Five traits). Although future research might further explore bifactor or method-factor models, the present approach ensures cross-cultural comparability and remains faithful to the original validation framework.
While a multi-group CFA was not conducted due to scope constraints, future studies should test measurement invariance across demographic subgroups. Likewise, behavioral validation (e.g., usage logs) could provide deeper insight into the dynamics between attitudes and actual AI use.
Finally, although the large-scale online sample afforded high statistical power, its nonprobabilistic nature limits generalizability. Future research should employ probability sampling, preregistered invariance testing, and mixed-methods approaches to enhance robustness. Incorporating behavioral data would also help elucidate the attitude–behavior relationships central to technology acceptance models (Davis, 1989).

7. Conclusions

The GAAIS-J reproduced the original two-factor structure with satisfactory fit and reliability, supporting its use for assessing public attitudes toward AI in Japan. Situating the findings within established frameworks, our results are consistent with multidimensional views of AI trust and with technology-acceptance theories linking favorable attitudes to self-reported use (Davis, 1989; Glikson & Woolley, 2020; Venkatesh et al., 2003). Methodologically, we prioritized structural fidelity to the source instrument over exploratory re-specification, in line with cross-cultural adaptation guidance. As a planned extension, we will examine multi-group measurement invariance (configural → metric → scalar) across gender and age groups—and, ideally, across languages—to confirm that observed differences reflect substantive attitudes rather than measurement artifacts (Borsa et al., 2012; Putnick & Bornstein, 2016).
Tools such as the GAAIS-J can inform evaluation of educational and workplace interventions, public communication, and ethical governance of AI. To enhance applied value, future studies should report effect sizes and confidence intervals, integrate objective usage indicators, and test invariance to support fair cross-group/cross-national comparisons.

Author Contributions

Conceptualization, Y.Y.; methodology, Y.Y.; software, Y.Y.; validation, Y.Y. and C.H.; formal analysis, Y.Y.; investigation, Y.Y.; resources, Y.Y. and N.S.; data curation, C.H.; writing—original draft preparation, Y.Y.; writing—review and editing, Y.Y.; visualization, Y.Y.; supervision, Y.Y. and N.S.; project administration, Y.Y. and N.S.; funding acquisition, Y.Y. and N.S. All authors have read and agreed to the published version of the manuscript.

Funding

This work was made possible through the support of a grant from the Creative Education and Research Plan of Sendai University.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki and approved by the Ethics Committee of Sendai University (protocol code 2024-47(1), 9 February 2025).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

Anonymized data and analysis code that support the findings of this study are available from the corresponding author upon reasonable request.

Acknowledgments

During the preparation of this study, the authors used JASP (version 0.19.3) for statistical analysis. The authors have reviewed and edited the output and take full responsibility for the content of this publication.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Bartneck, C., Bleeker, T., Bun, J., Fens, P., & Riet, L. (2010). The influence of robot anthropomorphism on the feelings of embarrassment when interacting with robots. Paladyn, 1(2), 109–115. [Google Scholar] [CrossRef]
  2. Borsa, J. C., Damasio, B. F., & Bandeira, D. R. (2012). Cross-cultural adaptation and validation of psychological instruments: Some considerations. Paideia, 22(53), 423–432. [Google Scholar] [CrossRef]
  3. Brown, T. A. (2015). Confirmatory factor analysis for applied research. Guilford Press. [Google Scholar]
  4. Byrne, B. M. (2016). Structural equation modeling with AMOS: Basic concepts, applications, and programming. Routledge. [Google Scholar]
  5. Davis, F. D. (1989). Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Quarterly, 13(3), 319–340. [Google Scholar] [CrossRef]
  6. Dixon, P. (2024). How AI will change your life: A futurist’s guide to a super-smart world. Profile Books. [Google Scholar]
  7. Fast, E., & Horvitz, E. (2017, February 4–9). Long-term trends in the public perception of artificial intelligence. Thirty-First AAAI Conference on Artificial Intelligence (pp. 963–969), San Francisco, CA, USA. [Google Scholar]
  8. Fornell, C., & Larcker, D. F. (1981). Evaluating structural equation models with unobservable variables and measurement error. Journal of Marketing Research, 18(1), 39–50. [Google Scholar] [CrossRef]
  9. Gillespie, N., Lockey, S., Ward, T., Macdade, A., & Hassed, G. (2025). Trust, attitudes and use of artificial intelligence: A global study 2025. The University of Melbourne and KPMG. [Google Scholar]
  10. Glikson, E., & Woolley, A. W. (2020). Human trust in artificial intelligence: Review of empirical research. Academy of Management Annals, 14(2), 627–660. [Google Scholar] [CrossRef]
  11. Hair, J. F., Hult, G. T. M., Ringle, C., & Sarstedt, M. (2019). A primer on partial least squares structural equation modeling (PLS-SEM) (2nd ed.). Sage Publications. [Google Scholar]
  12. Ipsos. (2022). Opinions about AI vary depending on countries’ level of economic development [Survey report]. Ipsos. Available online: https://www.ipsos.com/en/global-opinions-about-ai-january-2022 (accessed on 9 August 2025).
  13. Jian, J. Y., Bisantz, A. M., & Drury, C. G. (2000). Foundations for an empirically determined scale of trust in automated systems. International Journal of Cognitive Ergonomics, 4(1), 53–71. [Google Scholar] [CrossRef]
  14. Katase, T. (2021). Development of a trust scale for Artificial Intelligence (AI) and examination of its reliability and validity—Cohort analysis of gender and age by national Web survey. Research Report of JSET Conferences, 2021(3), 172–179. [Google Scholar]
  15. Kieslich, K., Lünich, M., & Marcinkowski, F. (2021). The threats of artificial intelligence scale (TAI) development, measurement and test over three application domains. International Journal of Social Robotics, 13(7), 1563–1577. [Google Scholar] [CrossRef]
  16. Kline, R. B. (2023). Principles and practice of structural equation modeling. Guilford Press. [Google Scholar]
  17. Li, C. H. (2016). Confirmatory factor analysis with ordinal data: Comparing robust maximum likelihood and diagonally weighted least squares. Behavior Research Methods, 48(3), 936–949. [Google Scholar] [CrossRef] [PubMed]
  18. MacCallum, R. C., Roznowski, M., & Necowitz, L. B. (1992). Model modifications in covariance structure analysis: The problem of capitalization on chance. Psychological Bulletin, 111(3), 490–504. [Google Scholar] [CrossRef]
  19. MacDorman, K. F., Vasudevan, S. K., & Ho, C. C. (2009). Does Japan really have robot mania? Comparing attitudes by implicit and explicit measures. AI & Society, 23(4), 485–510. [Google Scholar]
  20. Nomura, T., Kanda, T., Suzuki, T., & Kato, K. (2008). Prediction of human behavior in human–robot interaction using psychological scales for anxiety and negative attitudes toward robots. IEEE Transactions on Robotics, 24(2), 442–451. [Google Scholar] [CrossRef]
  21. Nomura, T., Suzuki, T., Kanda, T., & Kato, K. (2006). Measurement of negative attitudes toward robots. Interaction Studies. Social Behaviour and Communication in Biological and Artificial Systems, 7(3), 437–454. [Google Scholar] [CrossRef]
  22. Oshio, A., Abe, S., & Cutrone, P. (2012). Development, reliability, and validity of the Japanese version of ten item personality inventory (TIPI-J). The Japanese Journal of Personality, 21(1), 40–52. [Google Scholar] [CrossRef]
  23. Persson, A., Laaksoharju, M., & Koga, H. (2021). We mostly think alike: Individual differences in attitude towards AI in Sweden and Japan. The Review of Socionetwork Strategies, 15(1), 123–142. [Google Scholar] [CrossRef]
  24. Pew Research Center. (2025, October 15). How people around the world view AI. Available online: https://www.pewresearch.org/global/2025/10/15/how-people-around-the-world-view-ai/ (accessed on 9 August 2025).
  25. Putnick, D. L., & Bornstein, M. H. (2016). Measurement invariance conventions and reporting: The state of the art and future directions for psychological research. Developmental Review: DR, 41, 71–90. [Google Scholar] [CrossRef] [PubMed]
  26. Rhemtulla, M., Brosseau-Liard, P. É., & Savalei, V. (2012). When can categorical variables be treated as continuous? A comparison of robust continuous and categorical SEM estimation methods. Psychological Methods, 17(3), 354–373. [Google Scholar] [CrossRef]
  27. Rieger, M. O., Wang, M., Massloch, M., & Reinhardt, D. (2021). Opinions on technology: A cultural divide between East Asia and Germany? Review of Behavioral Economics, 8(1), 73–110. [Google Scholar] [CrossRef]
  28. Schepman, A., & Rodway, P. (2023). The general attitudes towards artificial intelligence scale: Confirmatory validation and associations with personality, corporate distrust and general trust. International Journal of Human–Computer Interaction, 39(13), 2724–2741. [Google Scholar] [CrossRef]
  29. Sindermann, C., Sha, P., Zhou, M., Wernicke, J., Schmitt, H. S., Li, M., Sariyska, R., Stavrou, M., Becker, B., & Montag, C. (2020). Assessing the attitude towards artificial intelligence: Introduction of a short measure in German, Chinese, and English language. KI—Künstliche Intelligenz, 35(1), 109–118. [Google Scholar] [CrossRef]
  30. Tussyadiah, I. P. (2020). A review of research into automation in tourism: Launching the annals of tourism research curated collection on AI and robotics in tourism. Annals of Tourism Research, 81, 102883. [Google Scholar] [CrossRef]
  31. Van de Vijver, F., & Tanzer, N. K. (2004). Bias and equivalence in cross-cultural assessment: An overview. European Review of Applied Psychology, 54(2), 119–135. [Google Scholar] [CrossRef]
  32. Van de Vijver, F. J. R., & Leung, K. (1997). Methods and data analysis for cross-cultural research. Sage Publications. [Google Scholar]
  33. Venkatesh, V., Morris, M. G., Davis, G. B., & Davis, F. D. (2003). User acceptance of information technology: Toward a unified view. MIS Quarterly, 27(3), 425–478. [Google Scholar] [CrossRef]
  34. Yamagishi, T., & Yamagishi, M. (1994). Trust and commitment in the United States and Japan. Motivation and Emotion, 18(2), 129–166. [Google Scholar] [CrossRef]
  35. Zhang, B., & Dafoe, A. (2019). Artificial intelligence: American attitudes and trends. Center for the Governance of AI, Future of Humanity Institute, University of Oxford. [Google Scholar]
Table 1. Factor loadings for CFA1.

| Item | Wording | Standardized Estimate | SE | z | p | 95% CI Lower | 95% CI Upper |
|---|---|---|---|---|---|---|---|
| Positive | | | | | | | |
| Pos1 | For routine transactions, I would rather interact with an artificially intelligent system than with a human. | 0.645 | 0.018 | 35.942 | <0.001 | 0.609 | 0.680 |
| Pos2 | Artificial Intelligence can provide new economic opportunities for this country. | 0.712 | 0.016 | 45.736 | <0.001 | 0.682 | 0.743 |
| Pos4 | Artificially intelligent systems can help people feel happier. | 0.694 | 0.014 | 48.231 | <0.001 | 0.666 | 0.722 |
| Pos5 | I am impressed by what Artificial Intelligence can do. | 0.775 | 0.014 | 53.701 | <0.001 | 0.747 | 0.804 |
| Pos7 | I am interested in using artificially intelligent systems in my daily life. | 0.815 | 0.014 | 58.027 | <0.001 | 0.787 | 0.842 |
| Pos11 | Artificial Intelligence can have positive impacts on people’s wellbeing. | 0.686 | 0.014 | 47.647 | <0.001 | 0.658 | 0.715 |
| Pos12 | Artificial Intelligence is exciting. | 0.672 | 0.015 | 43.574 | <0.001 | 0.642 | 0.702 |
| Pos13 | An artificially intelligent agent would be better than an employee in many routine jobs. | 0.567 | 0.018 | 30.685 | <0.001 | 0.531 | 0.604 |
| Pos14 | There are many beneficial applications of Artificial Intelligence. | 0.600 | 0.016 | 36.812 | <0.001 | 0.568 | 0.632 |
| Pos16 | Artificially intelligent systems can perform better than humans. | 0.564 | 0.017 | 32.728 | <0.001 | 0.530 | 0.598 |
| Pos17 | Much of society will benefit from a future full of Artificial Intelligence. | 0.666 | 0.015 | 45.609 | <0.001 | 0.637 | 0.694 |
| Pos18 | I would like to use Artificial Intelligence in my own job. | 0.748 | 0.015 | 48.910 | <0.001 | 0.718 | 0.778 |
| Negative | | | | | | | |
| Neg6 | I think artificially intelligent systems make many errors. | 0.394 | 0.020 | 19.545 | <0.001 | 0.355 | 0.434 |
| Neg8 | I find Artificial Intelligence sinister. | 0.659 | 0.016 | 40.515 | <0.001 | 0.627 | 0.691 |
| Neg9 | Artificial Intelligence might take control of people. | 0.754 | 0.018 | 40.797 | <0.001 | 0.718 | 0.791 |
| Neg10 | I think Artificial Intelligence is dangerous. | 0.795 | 0.016 | 51.254 | <0.001 | 0.765 | 0.826 |
| Neg15 | I shiver with discomfort when I think about future uses of Artificial Intelligence. | 0.714 | 0.018 | 39.064 | <0.001 | 0.679 | 0.750 |
| Neg19 | People like me will suffer if Artificial Intelligence is used more and more. | 0.637 | 0.019 | 33.401 | <0.001 | 0.600 | 0.675 |
| Neg20 | Artificial Intelligence is used to spy on people. | 0.522 | 0.019 | 26.939 | <0.001 | 0.484 | 0.560 |
| Neg3 | Organizations use Artificial Intelligence unethically. | 0.170 | 0.023 | 7.439 | <0.001 | 0.125 | 0.215 |
Table 2. Factor loadings for CFA2.

| Item | Wording | Standardized Estimate | SE | z | p | 95% CI Lower | 95% CI Upper |
|---|---|---|---|---|---|---|---|
| Positive | | | | | | | |
| Pos1 | For routine transactions, I would rather interact with an artificially intelligent system than with a human. | 0.587 | 0.014 | 41.584 | <0.001 | 0.559 | 0.614 |
| Pos2 | Artificial Intelligence can provide new economic opportunities for this country. | 0.750 | 0.010 | 72.191 | <0.001 | 0.729 | 0.770 |
| Pos4 | Artificially intelligent systems can help people feel happier. | 0.773 | 0.009 | 82.918 | <0.001 | 0.755 | 0.791 |
| Pos5 | I am impressed by what Artificial Intelligence can do. | 0.774 | 0.009 | 88.499 | <0.001 | 0.757 | 0.791 |
| Pos7 | I am interested in using artificially intelligent systems in my daily life. | 0.771 | 0.009 | 88.279 | <0.001 | 0.754 | 0.788 |
| Pos11 | Artificial Intelligence can have positive impacts on people’s wellbeing. | 0.793 | 0.010 | 83.351 | <0.001 | 0.774 | 0.812 |
| Pos12 | Artificial Intelligence is exciting. | 0.723 | 0.011 | 63.264 | <0.001 | 0.700 | 0.745 |
| Pos13 | An artificially intelligent agent would be better than an employee in many routine jobs. | 0.590 | 0.016 | 36.845 | <0.001 | 0.558 | 0.621 |
| Pos14 | There are many beneficial applications of Artificial Intelligence. | 0.706 | 0.013 | 53.264 | <0.001 | 0.680 | 0.732 |
| Pos16 | Artificially intelligent systems can perform better than humans. | 0.618 | 0.015 | 39.933 | <0.001 | 0.588 | 0.649 |
| Pos17 | Much of society will benefit from a future full of Artificial Intelligence. | 0.769 | 0.011 | 70.059 | <0.001 | 0.747 | 0.790 |
| Pos18 | I would like to use Artificial Intelligence in my own job. | 0.678 | 0.011 | 60.372 | <0.001 | 0.656 | 0.699 |
| Negative | | | | | | | |
| Neg6 | I think artificially intelligent systems make many errors. | 0.431 | 0.021 | 20.208 | <0.001 | 0.389 | 0.473 |
| Neg8 | I find Artificial Intelligence sinister. | 0.752 | 0.014 | 52.104 | <0.001 | 0.723 | 0.780 |
| Neg9 | Artificial Intelligence might take control of people. | 0.679 | 0.017 | 39.502 | <0.001 | 0.646 | 0.713 |
| Neg10 | I think Artificial Intelligence is dangerous. | 0.791 | 0.012 | 66.274 | <0.001 | 0.768 | 0.815 |
| Neg15 | I shiver with discomfort when I think about future uses of Artificial Intelligence. | 0.699 | 0.016 | 43.997 | <0.001 | 0.668 | 0.730 |
| Neg19 | People like me will suffer if Artificial Intelligence is used more and more. | 0.624 | 0.017 | 36.833 | <0.001 | 0.591 | 0.658 |
| Neg20 | Artificial Intelligence is used to spy on people. | 0.534 | 0.018 | 29.065 | <0.001 | 0.498 | 0.570 |
Note. CFI = Comparative Fit Index; TLI = Tucker–Lewis Index (also known as NNFI = Non-Normed Fit Index); NFI = Normed Fit Index; RFI = Relative Fit Index; IFI = Incremental Fit Index; RNI = Relative Noncentrality Index; RMSEA = Root Mean Square Error of Approximation; SRMR = Standardized Root Mean Square Residual; AVE = Average Variance Extracted.
Table 3. Correlations between the GAAIS and the subscales of the AI Trust Scale.

| | How Often Do You Use Generative AI in a Typical Week? | Trust in AI’s Social Utility | Distrust in AI’s Fidelity |
|---|---|---|---|
| GAAIS Pos | 0.313 *** | 0.617 *** | −0.094 *** |
| GAAIS Neg | −0.093 *** | −0.367 *** | 0.400 *** |

*** p < 0.001.
Table 4. Correlations between the GAAIS and socioeconomic status (SES).

| | Age | Income | Educational Level |
|---|---|---|---|
| GAAIS Pos | −0.012 | 0.188 *** | 0.161 *** |
| GAAIS Neg | 0.033 * | −0.065 *** | 0.067 *** |

* p < 0.05, *** p < 0.001.
Table 5. Correlations between the GAAIS and the Big Five personality traits.

| | Openness | Conscientiousness | Extraversion | Agreeableness | Neuroticism |
|---|---|---|---|---|---|
| GAAIS Pos | 0.158 *** | 0.166 *** | 0.087 *** | 0.206 *** | −0.172 *** |
| GAAIS Neg | −0.039 * | −0.097 *** | −0.019 | −0.215 *** | 0.141 *** |

* p < 0.05, *** p < 0.001.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
