Article

AI Literacy and Gender Bias: Comparative Perspectives from the UK and Indonesia

by Amrita Deviayu Tunjungbiru 1, Bernardi Pranggono 2,*, Riri Fitri Sari 1,*, Erika Sanchez-Velazquez 2, Prima Dewi Purnamasari 1, Dewi Yanti Liliana 3 and Nur Afny Catur Andryani 4

1 Department of Electrical Engineering, Faculty of Engineering, Universitas Indonesia, Depok 16424, Indonesia
2 School of Computing and Information Science, Anglia Ruskin University, Cambridge CB1 1PT, UK
3 Department of Informatics and Computer Engineering, Politeknik Negeri Jakarta, Depok 16424, Indonesia
4 Computer Science Department, BINUS Graduate Program-Doctor of Computer Science, Bina Nusantara University, Jakarta 11530, Indonesia
* Authors to whom correspondence should be addressed.
Educ. Sci. 2025, 15(9), 1143; https://doi.org/10.3390/educsci15091143
Submission received: 26 March 2025 / Revised: 30 July 2025 / Accepted: 26 August 2025 / Published: 2 September 2025
(This article belongs to the Special Issue AI Literacy: An Essential 21st Century Competence)

Abstract

Artificial Intelligence (AI) is reshaping industries and workforce demands globally. To ensure that individuals are prepared for an increasingly AI-driven world, it is crucial to develop robust AI literacy and address persistent gender biases in STEM fields. This paper presents a comparative study of AI literacy and gender bias among 192 participants from the United Kingdom and Indonesia. Using a survey-based approach, the study examines participants’ familiarity with AI concepts, confidence in utilizing AI tools, and engagement in ethical discussions related to AI. The findings reveal that while overall AI literacy levels are similar across both countries, UK respondents demonstrate significantly higher familiarity with programming and AI technologies, likely reflecting differences in educational frameworks and digital infrastructure. Moreover, despite widespread use of AI, discussions on its ethical implications remain limited in both countries. The study also highlights persistent gender biases that affect women’s participation and progression in AI and STEM fields; differences in perceptions of gender bias in recruitment, leadership promotion, and support for women suggest that, although progress is being made, significant barriers still exist. The study uncovers nuanced cultural variations in the perception of gender bias: UK participants exhibit greater confidence in gender inclusivity within recruitment and leadership roles, whereas Indonesian respondents report a higher prevalence of targeted initiatives to support women in technology. Overall, this research deepens our understanding of how AI literacy varies across diverse cultural and technological landscapes and offers valuable strategic guidance for tailoring interventions to overcome specific barriers, ultimately supporting innovative developments for women in STEM and women in AI in particular.

1. Introduction

Artificial Intelligence (AI) has emerged as a transformative force in the modern era, influencing nearly every industry and redefining workforce expectations (Broo et al., 2022; Jain et al., 2021; Li, 2022). As AI technologies gain traction, they are reshaping the global economy and altering the landscape of employment opportunities. The need for widespread AI literacy—the ability to understand, use, and critically engage with AI technologies—has never been more pressing (Long & Magerko, 2020). In this context, AI literacy is not merely a technical skill; it is a crucial competency that enables individuals to navigate the complexities of a digital world increasingly dominated by AI systems (Hornberger et al., 2023; Long & Magerko, 2020). This competency is essential for professional growth and informed participation in the digital age, as it empowers individuals to engage with AI technologies responsibly and ethically (Ng et al., 2021). AI literacy enables users to understand fundamental AI concepts, allowing them to identify potential biases and appreciate the ethical implications (Wang et al., 2023). While technical abilities such as programming are occasionally framed as core to AI literacy, most definitions emphasize its role in fostering versatile, domain-agnostic competencies that integrate diverse fields of knowledge (Ng et al., 2023). The recent Paris AI Action Summit international declaration, endorsed by 60 nations, also outlines a commitment to bridging digital divides by advancing AI accessibility, fostering gender equality, and ensuring that the technology’s development remains transparent, safe, secure, and trustworthy (Elysee_Palace, 2025).
While AI literacy is critical for workforce readiness, gender biases in access to education and training exacerbate inequalities in STEM participation. Access to AI knowledge and skills remains unevenly distributed, often reflecting broader societal inequalities, including gender-based ones. The gender gap in science, technology, engineering, and mathematics (STEM) fields, particularly in AI, is a significant concern, as it perpetuates systemic barriers that hinder women’s participation in technology-oriented professions (Rubin & Utomo, 2022; Pal et al., 2024). Narrowing the gender gap also brings broader benefits, including economic ones (Morais Maceira, 2017). Research indicates that women are underrepresented in AI-related roles, with studies highlighting the challenges they face in accessing education and training opportunities in this field (Young et al., 2021; González-Pérez et al., 2022). For instance, the United Kingdom (UK) labor market has been shown to have a persistent underrepresentation of female graduates in STEM disciplines, compounded by cultural and institutional barriers that limit their advancement in AI careers (Callea et al., 2024; González-Pérez et al., 2022). Similarly, in Indonesia, statistics reveal that women have lower chances of pursuing advanced education relevant to technological sectors, further exacerbating gender disparities in AI literacy and participation (Lemus-Delgado & Cerda, 2025; Schwab et al., 2021).
The global AI capabilities of the UK and Indonesia show substantial disparities, particularly concerning government strategy, infrastructure, and research and development (Tortoise, 2024). The UK ranks 4th in the Global AI Index (2024) due to its strong research and policy framework, while Indonesia ranks 49th, reflecting an emerging AI landscape. These differences provide an ideal basis for exploring AI literacy and gender bias in distinct contexts (Tortoise, 2024).
While a growing body of research has examined AI literacy (Long & Magerko, 2020; Ng et al., 2021) and gender bias (Sarah Myers West, 2020; González-Pérez et al., 2022) within national or institutional contexts, few studies have adopted a truly comparative, cross-cultural approach. In particular, there is a paucity of work contrasting countries with markedly different technological infrastructures and socio-cultural frameworks, for example, between a high-income, AI-advanced nation and an emerging economy (Kong et al., 2024; Sarah Myers West, 2020). Much of the extant literature focuses on high-income or culturally homogeneous regions (e.g., Europe and the US in (Hornberger et al., 2025) and Arab universities in (Hobeika et al., 2024)), limiting the applicability of findings to under-studied global contexts. This narrow focus risks overlooking unique barriers and facilitators operative in lower-resourced or culturally distinct environments (Bahagijo et al., 2022; Lemus-Delgado & Cerda, 2025). Furthermore, many studies operationalize AI literacy in purely technical terms, such as code proficiency or tool usage, without adequately addressing ethical, social, and cultural dimensions (Hornberger et al., 2023; Laupichler et al., 2022). In non-Western settings, where AI adoption may raise distinct privacy, equity, and cultural concerns, such holistic definitions are especially critical (Kazanidis & Pellas, 2024; Shah, 2024).
Our study addresses this limitation by offering a multidimensional comparative perspective of AI literacy and gender bias in the UK and Indonesia, two countries that differ significantly in their AI readiness, educational strategies, and gender equity initiatives, thereby contributing new insights to a relatively underexplored area in the field.
The remainder of this paper is organized as follows. Section 2 presents the related work on AI literacy and gender bias, providing a comprehensive overview of the existing literature in this domain. Section 3 outlines the materials and methods employed in the study, detailing the research design and data collection process. The results of our study are discussed and analyzed in Section 4. We discuss our key findings and their implications in Section 5. Finally, we summarize our findings in Section 6, offering insights into how educational systems and policies can be improved to ensure greater inclusivity in AI literacy.

2. Related Work

The burgeoning influence of AI across global sectors necessitates a critical examination of AI literacy and its equitable development. This section reviews the existing literature on AI literacy, technology adoption in diverse cultural contexts, and the pervasive issue of gender bias within STEM fields, with a particular focus on its manifestation in AI. By integrating these theoretical perspectives, we aim to provide a conceptual framework for our comparative study of AI literacy and gender bias in the UK and Indonesia.

2.1. AI Literacy and Technology Adoption in a Global Context

AI literacy is described as the fundamental knowledge, competencies, and critical awareness required to understand and interact successfully with AI technologies (Long & Magerko, 2020; Ng et al., 2021). Long and Magerko (2020) described AI literacy as multidimensional, encompassing technical knowledge, ethical issues, and the ability to evaluate the societal impact of AI (Long & Magerko, 2020). This definition underlines the importance of not only understanding how AI functions but also being aware of its broader implications for society and individuals. The need for AI literacy is more critical in education, where equipping individuals with the necessary skills can help them navigate through a future that is increasingly AI-driven with ease (Kong et al., 2024; Laupichler et al., 2022). Empirical findings have suggested that AI literacy gaps are widespread among the general public across different populations, including factors such as gender and socio-economic status (Kasinidou, 2023). Druga et al. (2022) point out the significant disparities in the levels of AI literacy, where it is noted that women and other marginalized groups face a lack of access to AI education and resources (Druga et al., 2022). This calls for tailored approaches sensitive to local contexts and addressing specific challenges that various groups face. In a cross-national investigation, Hobeika et al. (2024) assessed the reliability and validity of the Arabic-language AI Literacy Scale (AILS) among university students across four Arab countries: Lebanon, Saudi Arabia, Morocco, and Palestine (Hobeika et al., 2024). AI literacy is often seen as a subset of digital literacy. Kazanidis and Pellas (2024) investigated how generative AI could be used to enhance digital literacy among early childhood education students and computer science undergraduates in Greece (Kazanidis & Pellas, 2024).
The adoption and integration of new technologies, including AI, are not uniform across cultures. Theoretical models such as Hofstede’s cultural dimensions theory provide a foundational lens for understanding cross-cultural differences in technology adoption and gender dynamics (Hofstede, 2001). Two dimensions are particularly relevant to our study: power distance and masculinity–femininity. Power distance reflects the extent to which less powerful members of society accept an unequal power distribution, while the masculinity–femininity dimension captures societal preferences for achievement, assertiveness, and material success versus cooperation, modesty, and quality of life (Hofstede et al., 2010). In diverse cultural contexts like the UK and Indonesia, differences in digital infrastructure, educational emphasis on STEM, and societal openness to technological change can significantly impact the rate and nature of AI literacy acquisition. For instance, countries with higher levels of individualism or power distance may exhibit distinct patterns in how individuals perceive and adopt AI tools, affecting confidence and engagement (Hofstede, 2001). Similarly, the Technology Acceptance Model (TAM) (Davis, 1989) emphasizes perceived usefulness and perceived ease of use as critical determinants of technology adoption, which can vary based on cultural backgrounds, prior educational experiences, and access to resources. Venkatesh and Zhang (2010) demonstrated that cultural dimensions significantly moderate the relationships between technology acceptance factors, with individualistic cultures showing different adoption patterns compared to collectivistic cultures (Venkatesh & Zhang, 2010). Examining AI literacy through these cross-cultural technology adoption frameworks allows for a nuanced understanding of observed differences in AI familiarity and application across nations. Indonesia’s much lower individualism score (14, versus 89 for the UK), indicating a more collectivistic orientation, suggests that technology adoption decisions may be more influenced by social networks and community acceptance in Indonesia, while UK adoption patterns may be driven more by individual perceived usefulness and ease of use (TheCultureFactor, n.d.). This theoretical framework helps explain our finding that Indonesian respondents demonstrate high practical engagement with AI tools despite lower technical familiarity—adoption may be driven by social proof and community endorsement rather than deep technical understanding.
Furthermore, the role of AI literacy in bridging the gender gap in STEM subjects cannot be overstated. Research suggests that improving AI literacy among women may result in heightened involvement and persistence in STEM professions, which have traditionally been male-dominated (Shah, 2024). The establishment of community networks, the provision of mentorship, and academic assistance are recognized as essential approaches to motivate women to engage in careers within AI and analogous disciplines. By fostering an environment that encourages AI proficiency, educational institutions can greatly contribute to closing the gap and ensuring that women have equal opportunities to become part of the workforce in the future (Casad et al., 2021).

2.2. Gender Bias in STEM and AI Across Cultures

Gender bias remains a pervasive issue in AI education and employment, perpetuating inequalities that hinder women’s participation in the field. Gender bias refers to the unequal treatment, perceptions, or opportunities afforded to individuals based on their gender. In the context of AI and STEM fields, gender bias manifests in various ways, including societal stereotypes about women’s aptitude for technology, discriminatory workplace practices, and systemic barriers that limit women’s access to education and career advancement (Callea et al., 2024; Sarah Myers West, 2020).
Research indicates that teams with diverse expertise and cultural backgrounds tend to produce work with greater originality and long-term impact, especially in science and technology fields (Zheng et al., 2022). Consequently, increasing female representation in AI not only addresses ethical and equity concerns but also enhances technological creativity and problem-solving capacities. Tackling implicit biases, which often subtly influence hiring practices, performance assessments, and professional interactions, is crucial. Regular audits and interventions designed to identify and rectify biased behaviors and decision-making processes within organizations can significantly contribute to overcoming gender disparities.
Ultimately, overcoming gender bias in AI requires sustained, collective efforts from academia, industry leaders, policymakers, and civil society (Bahagijo et al., 2022). By prioritizing gender equality initiatives, the AI community can ensure a more inclusive, fair, and innovative technological future.

2.3. Comparative Context: UK and Indonesia

Comparing AI literacy and gender bias between the UK and Indonesia is valuable for several reasons, as these two countries differ significantly in terms of economic, educational, technological, and cultural landscapes. A comparative study can offer important insights into how AI literacy is shaped by these differences and how gender bias manifests in AI-related fields across diverse contexts.
The educational environments in the UK and Indonesia offer various challenges and prospects for the cultivation of AI literacy among women. As one of the leading countries in AI, the UK has initiated several programs to improve AI literacy, including the National AI Strategy of 2021, which places diversity and gender equity at its core (Secretary of State for Digital, Culture, Media and Sport, 2021). Despite these efforts, there is still a significant gender gap in AI-related jobs, particularly at higher levels of employment. Research indicates that while there are programs to encourage more women to pursue AI and STEM fields, socio-cultural barriers continue to hinder them. These include the lack of female role models in AI, limited mentorship programs targeting women, and persistent workplace bias in promotion decisions (Nweje et al., 2025; Roopaei et al., 2021).
On the other hand, the higher education system in Indonesia has shown increased participation by women in technology-related fields, yet challenges persist. The Global Gender Gap Report 2021 highlights that, despite initiatives aimed at boosting female participation, significant barriers persist for women in accessing higher education in STEM fields. In fact, the report indicates that women constitute only 12.39% of STEM students, compared with 29.39% for their male counterparts (Schwab et al., 2021). These disparities demonstrate the need for comparative research that explores the potential for each country’s educational system to be improved in ways that would increase AI literacy and help achieve gender equality in STEM fields. Through an analysis of the different contexts existing in the UK and Indonesia, this research seeks to shed light on the current state of AI education and the factors that influence women’s participation in these crucial fields. Taken together, the literature reviewed here reveals the multidimensional relationship between AI literacy, gender bias, and both the UK and Indonesian educational settings. It is important to address these challenges so that a more inclusive and fairer technological environment can be fostered, where women can actively participate and contribute toward the advancement of AI and related fields.
While existing studies have explored AI literacy and gender bias in national or institutional contexts (Hobeika et al., 2024; Shah, 2024), there is a notable lack of comparative, cross-cultural analyses, particularly between countries with significantly different technological infrastructures and socio-cultural frameworks. Much of the current literature focuses on high-income nations or culturally similar regions, which limits the generalizability of findings to diverse global contexts. Furthermore, many studies adopt a narrow or technical definition of AI literacy, often overlooking ethical and socio-cultural components that are especially relevant in non-Western settings. These gaps constrain our understanding of how AI literacy and gender perceptions are shaped by broader systemic and cultural influences. Our study addresses this limitation by offering a multidimensional comparative perspective of AI literacy and gender bias in the UK and Indonesia—two countries that differ significantly in their AI readiness, educational strategies, and gender equity initiatives—thereby contributing new insights to a relatively underexplored area in the field.

3. Materials and Methods

The above literature reveals several gaps in understanding regarding the intertwinement of AI literacy and gender bias in the context of the UK and Indonesia.
To address the above research gap, in this study we delve into the comparative analysis of a survey on AI literacy and gender bias that was conducted in the UK and Indonesia. This study investigates the following research questions (RQs):
  • RQ1: How does AI literacy differ between the UK and Indonesia?
  • RQ2: How do perceptions of gender bias in the technology field differ between respondents in the UK and Indonesia?
Using a comparative framework, this study investigates how individuals from different cultural contexts (the UK vs. Indonesia) and technological backgrounds vary in their levels of AI literacy and experiences of gender bias. The research deepens our understanding of these issues in both countries and may inspire further studies and the development of best practices in various fields. Ultimately, the findings have the potential to influence educational policies and training programs.
To examine the multidimensional aspects of AI literacy, we followed the definition by Hossain et al. (2025) and employed a detailed breakdown of AI components, as illustrated in Table 1 (Hossain et al., 2025).

3.1. Research Design

A questionnaire instrument was developed with Likert and open-ended questions to address the research questions. The responses were then analyzed using quantitative analysis approaches. In addition to descriptive statistics, two independent-sample t-tests were conducted to test for population-level differences in AI literacy between the Indonesian and UK samples. For the analysis, the following hypotheses were formulated (formalized in the notational sketch following the hypotheses):
H0: 
There is no difference in AI literacy between Indonesia and the UK.
H1: 
There is a significant difference in AI literacy between Indonesia and the UK.
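For reference, the hypotheses and the equal-variance test statistic used in Section 4.1 can be written in notation as follows. This is a sketch, not text from the original instrument: μ denotes the population mean of a given AI literacy dimension, and the pooled-variance form is consistent with the df = n1 + n2 − 2 reported later.

```latex
H_0:\ \mu_{\mathrm{ID}} = \mu_{\mathrm{UK}}, \qquad
H_1:\ \mu_{\mathrm{ID}} \neq \mu_{\mathrm{UK}}
\qquad\text{with}\qquad
t = \frac{\bar{x}_{\mathrm{UK}} - \bar{x}_{\mathrm{ID}}}
         {s_p\sqrt{\tfrac{1}{n_{\mathrm{UK}}} + \tfrac{1}{n_{\mathrm{ID}}}}},
\qquad
s_p^2 = \frac{(n_{\mathrm{UK}}-1)\,s_{\mathrm{UK}}^2 + (n_{\mathrm{ID}}-1)\,s_{\mathrm{ID}}^2}
             {n_{\mathrm{UK}} + n_{\mathrm{ID}} - 2}
```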

3.2. Participants

A total of 192 respondents participated voluntarily in this study, with 131 from Indonesia (66 females, 65 males) and 61 from the UK (26 females, 35 males). A summary of the participants’ demographics is shown in Table 2.
The median age range is 25–34 years for both Indonesian and UK participants. The participants’ age groups represented in the dataset span from ‘18–24’ to ‘45+’ years. Participant roles primarily included university students, lecturers, researchers, professional workers, and entrepreneurs. Fields of study and work spanned science and technology, the medical field, economy and business, humanities and social sciences, education, and information technology (IT)/digital fields, among others.

3.3. Instruments

The survey questionnaire was developed by integrating established frameworks and adapting items from validated tools to ensure reliability and relevance. The instrument comprised 30 Likert-scale items (1 = ‘Strongly disagree’ to 5 = ‘Strongly agree’ for perception-based questions, with other scales used as appropriate for familiarity/confidence questions) across three dimensions, plus 5 open-ended questions. A detailed list of survey questions used in this paper is available in the Supplementary Materials.
The survey was developed based on established frameworks in AI literacy research, incorporating core components such as familiarity with AI, conceptual and technical knowledge, and ethical perceptions. These were adapted from prior validated instruments, including the AI literacy framework by Long and Magerko (2020), the Arabic AILS adaptation by Hobeika et al. (2024), and guidelines from I. Lee et al. (2021) on educational AI frameworks. Specifically, the AI literacy dimensions (familiarity, knowledge/application, and ethical perceptions) were operationalized using Hossain et al.’s (2025) multidimensional framework, which has been rigorously tested for internal consistency (Cronbach’s α > 0.85 in prior studies). To align with our comparative focus, we adapted questions from Laupichler et al. (2023) and Wang et al. (2023), both of whom validated their instruments for cross-cultural AI literacy assessments. For instance, items measuring confidence in AI tools were drawn from Wang et al.’s AI Literacy Scale (AILS), while ethical perception questions were adapted from Laupichler et al.’s validated self-assessment tool.
Specific measurements were as follows:
AI Literacy: The study assessed AI literacy through the following dimensions, derived from questions such as ‘How familiar are you with and how often do you interact with AI?’ (for familiarity), ‘How confident are you in using AI-based tools or applications?’ (for confidence), and ‘Have you ever discussed ethical issues related to AI…?’ (for ethics discussion):
  • Familiarity with AI concepts: Participants’ awareness of AI systems and their ability to identify them.
  • Confidence in AI tools: The extent to which respondents feel comfortable using AI applications.
  • Discussion of AI ethics: Engagement in conversations about the ethical and societal impact of AI.
To categorize respondents’ AI literacy, we adapted the three-tier scheme used by Hornberger et al. (2023); a minimal illustrative sketch of this percentile binning follows the list below:
  • No AI Literacy: Respondents scoring in the bottom quintile (≤20th percentile) on our composite AI literacy index, indicating minimal awareness of AI concepts and tools.
  • Basic AI Literacy: Those scoring between the 21st and 60th percentiles, reflecting familiarity with AI terminology and limited hands-on experience (e.g., can identify common AI applications but lack deeper understanding).
  • Advanced AI Literacy: Respondents in the top 40% (≥61st percentile), demonstrating both conceptual understanding (e.g., algorithmic principles) and practical skills (e.g., regular use of AI tools for problem-solving), as well as engagement with ethical and societal implications.
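The sketch below illustrates the percentile-based categorization. The composite index construction (a simple mean of the three Likert dimensions) and the column names are illustrative assumptions, not the study's exact scoring procedure.

```python
import pandas as pd

# Illustrative respondent-level data; column names and values are hypothetical.
df = pd.DataFrame({
    "familiarity": [2, 4, 3, 5, 1, 3],
    "confidence":  [3, 5, 4, 5, 2, 4],
    "ethics":      [1, 4, 2, 5, 1, 3],
})

# Composite AI literacy index: here, a simple mean of the three dimensions
# (an assumption; the paper does not specify the exact weighting).
df["ai_literacy_index"] = df[["familiarity", "confidence", "ethics"]].mean(axis=1)

# Percentile rank of each respondent within the full sample.
df["pct_rank"] = df["ai_literacy_index"].rank(pct=True) * 100

def tier(pct: float) -> str:
    """Map a percentile rank to the three-tier scheme adapted from Hornberger et al. (2023)."""
    if pct <= 20:
        return "No AI Literacy"
    elif pct <= 60:
        return "Basic AI Literacy"
    return "Advanced AI Literacy"

df["literacy_tier"] = df["pct_rank"].apply(tier)
print(df[["ai_literacy_index", "pct_rank", "literacy_tier"]])
```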
Perceptions of Gender Bias: The study examined gender bias in the technology field based on the following indicators, derived from questions like ‘Have you ever witnessed or experienced gender bias…?’ (for witnessed bias), ‘Does your workplace provide mentors…?’ (for mentorship programs), ‘How often do you see women being promoted…?’ (for leadership opportunities), ‘Do you feel women have more difficulty being accepted…?’ (for challenges), and ‘How often do you hear stereotypes that men are superior…?’ (for stereotypes):
  • Witnessed gender bias: Whether participants have observed gender discrimination in AI-related environments.
  • Availability of mentorship programs for women: Awareness of structured initiatives supporting women’s participation in AI and STEM fields.
  • Opportunities for women in leadership: Perceived frequency of women’s promotion into leadership positions.
  • Challenges for women in tech careers: Agreement with statements about barriers that women face in entering and advancing in AI and STEM fields.
  • Prevalence of gender stereotypes: How often respondents encounter gender-based stereotypes in technology-related roles.

3.4. Analysis Methods

Quantitative Analysis: Descriptive statistics were used to summarize Likert-scale responses (means and standard deviations). For inferential statistics, two independent-sample t-tests were applied to assess differences in responses between the UK and Indonesia at the 95% confidence level, and χ2 tests were used for categorical dependencies. A Cronbach’s α value was computed for each survey dimension to assess internal consistency.
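As an illustration, Cronbach's α for the items of one survey dimension can be computed as below. This is a sketch with hypothetical item responses; the function name and data are not taken from the study materials.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x n_items) array of Likert responses."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                         # number of items in the dimension
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical responses (rows = respondents, columns = items of one dimension).
responses = np.array([
    [4, 5, 4, 4],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 3, 2, 2],
    [4, 4, 3, 4],
])
print(f"Cronbach's alpha = {cronbach_alpha(responses):.2f}")
```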
Qualitative Analysis: Open-ended responses were coded using thematic analysis to identify recurring themes related to AI literacy and gender bias. Inter-coder reliability was a limitation of this analysis. Patterns and contrasts between the two groups were analyzed to provide deeper context for the quantitative results.
This methodology ensured a comprehensive evaluation of AI literacy and gender bias in both countries, offering insights into the cultural and systemic factors influencing these issues. The findings contribute to discussions on AI education, workplace inclusivity, and strategies for fostering gender diversity in STEM fields.

4. Results

The results of this study are organized into several sections, each focusing on different aspects of the collected data and addressing the research questions.

4.1. Analysis of AI Literacy

To answer RQ1 and test the corresponding hypotheses, the analysis focused on comparing the levels of AI literacy between individuals in Indonesia and the UK. An F-test was first conducted to assess the equality of variances between the groups for each AI literacy dimension.
  • For AI concept familiarity, the F-test showed F(60, 130) = 1.386, p > 0.05. As we failed to reject the null hypothesis of equal variances, equal variances were assumed for the subsequent t-test.
  • For AI tool confidence, the F-test showed F(60, 130) = 1.228, p > 0.05. As we failed to reject the null hypothesis of equal variances, equal variances were assumed for the subsequent t-test.
  • For discussion of AI ethics, the F-test showed F(60, 130) = 1.194, p > 0.05. As we failed to reject the null hypothesis of equal variances, equal variances were assumed for the subsequent t-test.
Based on the outcome of these F-tests (indicating homogeneity of variances), an independent-sample t-test assuming equal variances was applied to evaluate the differences in familiarity with AI concepts, confidence in AI tools, and engagement in AI ethics discussions. The results are presented in Table 3. The degrees of freedom (df) reported in Table 3 are calculated as (n1 + n2 − 2), which is consistent with the assumption of equal variances validated by the F-test results.
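The procedure described above (an F-test for homogeneity of variances, followed by an equal-variances t-test with df = n1 + n2 − 2) can be reproduced along the following lines. The score arrays are random placeholders, not the study data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Placeholder Likert-style scores for one AI literacy dimension (not the study data).
uk_scores = rng.integers(1, 6, size=61).astype(float)          # n1 = 61
indonesia_scores = rng.integers(1, 6, size=131).astype(float)  # n2 = 131

# F-test for equality of variances: F = s1^2 / s2^2 with (n1 - 1, n2 - 1) degrees of freedom.
f_stat = uk_scores.var(ddof=1) / indonesia_scores.var(ddof=1)
df1, df2 = len(uk_scores) - 1, len(indonesia_scores) - 1
p_var = 2 * min(stats.f.cdf(f_stat, df1, df2), stats.f.sf(f_stat, df1, df2))

# If variances are judged equal (p_var > 0.05), run the pooled (equal-variance) t-test.
t_stat, p_val = stats.ttest_ind(uk_scores, indonesia_scores, equal_var=(p_var > 0.05))
df_pooled = len(uk_scores) + len(indonesia_scores) - 2  # df used when equal_var=True
print(f"F({df1}, {df2}) = {f_stat:.3f}, p = {p_var:.3f}")
print(f"t({df_pooled}) = {t_stat:.3f}, p = {p_val:.4f}")
```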
The results of the two independent-sample hypothesis tests presented in Table 3, above, estimate the mean differences between the UK and Indonesian populations in terms of familiarity with AI concepts, confidence in utilizing AI tools, and ethical AI discussion. In terms of familiarity with AI concepts, the measurement parameter (p-value = 0.0002 < 0.05) indicates sufficient evidence to support a mean difference between the UK and Indonesian populations at the 95% confidence level. Supported by the means and standard deviations for AI concept familiarity (Indonesian respondents reporting a mean = 2.47 and an SD = 0.79 and UK respondents a mean = 2.97 and an SD = 0.93), it can be inferred that the UK population has slightly better familiarity with AI concepts.
Meanwhile, in terms of confidence in AI tool utilization, the measurement parameters (p-value = 0.8008 > 0.05) indicate that there is no significant evidence to support a mean difference in AI tool utilization confidence between the UK and Indonesian populations. Supported by the means and standard deviations (Indonesian respondents reporting a mean = 4.05 and an SD = 0.74 and UK respondents a mean = 4.08 and an SD = 0.82), confidence in utilizing AI tools tends to be similar between the UK and Indonesian populations.
Regarding engagement in AI ethics discussions, the statistical test indicates that there is insufficient evidence to support a mean difference between the UK and Indonesian populations. This is supported by a p-value = 0.6061, which is greater than the significance level of α = 0.05. Referring to the sample means and standard deviations for the two groups (Indonesian respondents reporting a mean = 2.66 and an SD = 0.97 and UK respondents a mean = 2.74 and an SD = 1.06), the engagement in AI ethics discussion of Indonesian and UK respondents is comparable.
The hypothesis tests for the AI literacy analysis reveal that UK respondents report greater familiarity with AI concepts than Indonesian respondents, whereas confidence in AI tool utilization is similar across the two groups. It can be inferred that the Indonesian population tends to engage with AI in a more practical, application-oriented way.
Ethical AI is not directly tied to day-to-day AI utilization; however, it covers urgent aspects of appropriate AI adoption and development. Based on the hypothesis test, the Indonesian and UK respondents indicated similar awareness of ethical AI, which was low (as reflected in the means and standard deviations). Hence, greater efforts to build awareness of ethical AI aspects are crucial for both the Indonesian and UK populations.
To further explore AI literacy differences, we analyzed literacy levels across gender and country (see Table 4). The findings indicate that female respondents in Indonesia exhibit the highest proportion of no AI literacy (74.24%), with only 12.12% at the basic level and 13.64% at the advanced level. Indonesian male respondents demonstrate slightly higher AI literacy, with 60% categorized as having no AI literacy, while 21.54% possess basic AI literacy and 18.46% are at the advanced level.
In contrast, UK respondents show a higher overall AI literacy level. Among female participants, 42.31% have no AI literacy, while 30.77% have basic knowledge, and 26.92% demonstrate advanced proficiency. UK male respondents reported the highest AI literacy, with 34.29% classified as having no AI literacy, 20% classified at the basic level, and a significant 45.71% classified at the advanced level. The advanced literacy results in both countries were higher for males than for females.
While these trends indicate notable disparities in AI literacy, it is essential to consider whether these differences are statistically significant. We conducted dependency tests to determine whether education level, field of study, and professional background influence AI literacy outcomes. The results suggest that AI literacy disparities are significantly associated with educational background (p < 0.001), indicating that individuals with higher education levels tend to report greater AI literacy. However, the effect of gender alone was not statistically significant (p = 0.12), suggesting that other factors may contribute more strongly to AI literacy differences.
These findings highlight that AI literacy gaps may not solely be due to nationality or gender but could be influenced by access to education, professional exposure, and digital literacy initiatives. Future research should explore how AI-related education programs and industry exposure contribute to these differences.
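A dependency of this kind between two categorical variables (e.g., education level and AI literacy tier) can be examined with a χ2 test of independence, as mentioned in Section 3.4. The sketch below uses a purely illustrative contingency table; the counts do not reproduce the study's data.

```python
from scipy.stats import chi2_contingency

# Illustrative contingency table (rows: education level, columns: AI literacy tier).
# Counts are hypothetical and do not reproduce the study data.
observed = [
    #  No   Basic  Advanced
    [  40,   15,     5],   # Secondary / diploma
    [  50,   30,    20],   # Undergraduate
    [  10,   12,    10],   # Postgraduate
]

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi2({dof}) = {chi2:.2f}, p = {p_value:.4f}")
```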
To gain further qualitative insights, we analyzed open-ended responses regarding the purposes for which the respondents use AI tools and the specific AI tools they frequently use. Both UK and Indonesian respondents predominantly use AI for academic and professional tasks, including the following:
  • Writing scientific papers;
  • Completing university assignments;
  • Reviewing study materials;
  • Enhancing work quality and efficiency.
The most frequently mentioned AI tools include ChatGPT, Grammarly, Quillbot, and ScopusAI, indicating a strong preference for AI-driven text generation, grammar correction, and research assistance. Additionally, several respondents cited the use of Adobe Firefly, D-ID, Freepik Pikaso, and Blockade Labs, suggesting that AI-powered design and creative content generation tools are also gaining traction. The AI tools most often used by UK respondents are ChatGPT, Gemini, Copilot, and Claude. Other UK respondents also mentioned the use of MATLAB, SciSpace, Canva, and Adobe Firefly.
Interestingly, a few respondents provided critical perspectives on AI use. Some expressed concerns about over-reliance on AI for academic tasks, questioning whether AI truly enhances learning or merely facilitates task completion without deep understanding. This is in agreement with recent studies, which found that increased reliance on GenAI in the workplace is linked to a decline in critical thinking (Gerlich, 2025; H.-P. Lee et al., 2025). Other participants highlighted ethical dilemmas, such as data privacy concerns and biases in AI-generated content, demonstrating an awareness of AI-related challenges that extend beyond simple usability.
These qualitative responses reinforce the quantitative findings regarding confidence in AI tools and engagement in AI ethics discussions. While users express high confidence in AI tools, concerns about ethical implications and responsible AI usage remain underexplored.

4.2. Analysis of Gender Bias

To answer RQ2 (‘How do perceptions of gender bias in the technology field differ between respondents in the UK and Indonesia?’) and test H2 (‘Women will report higher perceptions of gender bias than men in both contexts’), the analysis focused on perceptions of gender bias in the technology field, comparing responses from individuals in the UK and Indonesia. Five key factors were examined: witnessed gender bias, programs for women, promotion to leadership, difficulty in tech careers, and tech stereotypes.
The perception of witnessing gender bias in the technology industry varies across the different demographic groups surveyed, as illustrated in Figure 1. Among Indonesian females, a majority reported uncertainty or less frequent encounters with bias: 56.1% were ‘Unsure’ if they had witnessed gender bias, while 30.3% reported witnessing it ‘Yes, a few times’. A smaller proportion, 10.6%, stated ‘Never’, and only 3.0% reported ‘Yes, several times’. Notably, 0.0% of Indonesian females reported witnessing gender bias ‘Yes, often’. For Indonesian males, the most common response was observing bias ‘Yes, a few times’ (43.1%), followed by being ‘Unsure’ (35.4%). A total of 21.5% of Indonesian males reported ‘Never’ witnessing gender bias, with 0.0% reporting ‘Yes, several times’ and 0.0% ‘Yes, often’.
In the UK female cohort, responses were more distributed across experiencing bias. The largest group (38.5%) stated they had witnessed gender bias ‘Yes, several times’, and 34.6% reported ‘Yes, a few times’. A further 15.4% of UK females reported ‘Yes, often’, while 11.5% stated ‘Never’, and 0.0% were ‘Unsure’. Among UK males, 42.9% reported witnessing bias ‘Yes, a few times’, making this the most common response for this group. A total of 31.4% reported ‘Never’ witnessing gender bias, 17.1% reported ‘Yes, several times’, and 8.6% reported ‘Yes, often’. Similar to UK females, 0.0% of UK males selected ‘Unsure’.
The high percentage of ‘Unsure’ responses among Indonesian females (56.1%) and a notable portion of Indonesian males (35.4%) suggests that gender bias may be less explicitly recognized or is a less salient issue for a significant number of respondents in Indonesia, or that they may lack clarity on what constitutes gender bias in this context. In contrast, the complete absence of ‘Unsure’ responses from both male and female UK participants indicates a more definitive stance or awareness regarding their experiences with or observations of gender bias.
UK females reported more definitive experiences of witnessing bias, with a combined 88.5% (38.5% ‘Yes, several times’ + 34.6% ‘Yes, a few times’ + 15.4% ‘Yes, often’) indicating some level of observed bias. UK males also reported notable levels of witnessing bias (‘Yes, a few times’ at 42.9% and ‘Yes, several times’ at 17.1%). This suggests that in the UK context there is a generally higher reported observation of gender bias incidents compared to the Indonesian cohort, especially when comparing the ‘Yes, often’ and ‘Yes, several times’ categories. The Indonesian groups, particularly females, showed much lower reporting for these more frequent/intense categories of witnessing bias, with a large proportion being unsure.
Regarding the availability of mentorship programs for women (see Figure 2), a significant majority of Indonesian female respondents (72.7%) reported that ‘Yes, women received’ such programs. This contrasts with Indonesian male respondents, where 43.1% reported ‘Yes, women received’ these programs, and an identical 43.1% stated such programs were ‘Never received’. Only 4.5% of Indonesian females were ‘Unsure’ about the existence of these programs, compared to 13.8% of Indonesian males. In the UK, 38.5% of female respondents and 31.4% of male respondents indicated that ‘Yes, women received’ mentorship programs. A similar proportion of males and females in the UK reported that these programs were ‘Never received’ (34.3% for males and 34.6% for females). A notable percentage of respondents in the UK were also ‘Unsure’ about these programs, with 26.9% of females and 34.3% of males selecting this option.
The data suggests a strong perception among Indonesian females that mentorship programs for women are available and received (72.7%). This is notably higher than the perception among Indonesian males, where opinions are more evenly split between receiving such programs and them never being received, with a moderate level of uncertainty.
In the UK, the perception of program availability and reception is more mixed across both genders, with roughly a third of respondents in each gender group reporting receiving programs, another third reporting never receiving them, and a significant portion (especially males at 34.3%) being unsure. The relatively high percentage of ‘Unsure’ responses in the UK, and also among Indonesian males, may indicate that even if such programs exist, their visibility or accessibility could be improved. The strong positive response from Indonesian females, however, points to a potentially successful outreach or implementation of such initiatives for this specific group in that context. Organizations aiming to bridge gender gaps could focus on enhancing the promotion and accessibility of these mentorship initiatives, particularly to groups showing higher uncertainty or lower perceived reception.
Figure 3 illustrates varying perceptions regarding the frequency of women being promoted to leadership positions in the technology sector across different demographic groups. Among Indonesian males, responses were distributed across the spectrum. The most frequently cited category was ‘Often’ (32.3%), followed closely by ‘Occasionally’ (29.2%). A smaller but notable portion reported ‘Rarely’ (21.5%) or ‘Never’ (12.3%), while only 4.6% perceived such promotions ‘Very often’. For Indonesian females, the predominant perception was that women are promoted ‘Occasionally’ (45.5%). This was followed by ‘Rarely’ (28.8%) and ‘Often’ (18.2%). Relatively few Indonesian females reported seeing promotions ‘Never’ (3.0%) or ‘Very often’ (4.5%).
In the UK, responses from male participants indicated a tendency towards observing more frequent promotions. The largest group selected ‘Occasionally’ (37.1%), followed by ‘Often’ (28.6%) and ‘Very often’ (17.1%). Fewer UK males reported ‘Never’ (8.6%) or ‘Rarely’ (8.6%). UK female respondents also most commonly reported ‘Occasionally’ (34.6%), followed by ‘Rarely’ (30.8%). Promotions were seen ‘Often’ by 19.2% and ‘Very often’ by 11.5% of this group, with only 3.8% stating ‘Never’.
The data reveals distinct patterns in how frequently leadership promotions for women are perceived across the surveyed groups. In Indonesia, males reported seeing promotions ‘Often’ or ‘Very often’ at a combined rate of 36.9% (32.3% + 4.6%), slightly higher than those reporting ‘Never’ or ‘Rarely’ (33.8%). Indonesian females, however, perceived frequent promotions (‘Often’ or ‘Very often’) less commonly, at a combined 22.7% (18.2% + 4.5%), with ‘Occasionally’ being the dominant response (45.5%).
In the UK, both genders perceived promotions for women occurring with some regularity. UK males reported the highest combined frequency of ‘Often’ or ‘Very often’ at 45.7% (28.6% + 17.1%). UK females also perceived these occurring frequently, though at a slightly lower combined rate of 30.7% (19.2% + 11.5%), with ‘Occasionally’ (34.6%) and ‘Rarely’ (30.8%) also being prominent responses.
Overall, UK respondents, particularly males, tended to report a higher frequency of women being promoted to leadership positions compared to Indonesian respondents. While ‘Occasionally’ was a common perception across most groups (except for Indonesian males, where ‘Often’ was slightly more common), the perception of promotions occurring ‘Often’ or ‘Very often’ was lowest among Indonesian females. These results suggest that while leadership opportunities for women are perceived to exist, particularly in the UK, there may still be differences in how these opportunities are viewed across genders and national contexts. The varied frequencies reported, especially the lower rates of ‘Often’ or ‘Very often’ perceived by Indonesian females, may still highlight areas where structured leadership pipelines and support for gender diversity in senior roles could be beneficial.
Figure 4 presents respondents’ perceptions regarding whether women face more difficulties than men in entering and advancing in technology professions. Among Indonesian female respondents, a significant portion perceived such difficulties: 7.6% ‘Strongly agree’ and 40.9% ‘Agree’ that women face more difficulties. Meanwhile, 36.4% were ‘Neutral’, 15.2% reported ‘Disagree’, and 0.0% selected ‘Strongly disagree’.
In contrast, UK female respondents expressed an even stronger perception of difficulties. A total of 34.6% reported that they ‘Strongly agree’ and another 34.6% that they ‘Agree’ with the statement. A smaller percentage were ‘Neutral’ (11.5%) or reported that they ‘Disagree’ (19.2%), with 0.0% reporting that they ‘Strongly disagree’. The perception of challenges was notably lower among male respondents in both countries. For Indonesian males, 32.3% ‘Disagree’ and 3.1% ‘Strongly disagree’ that women face more difficulties. Conversely, only 7.7% ‘Strongly agree’ and 13.8% ‘Agree’, while the largest group (43.1%) remained ‘Neutral’. Similarly, UK male respondents also showed lower agreement levels. A total of 31.4% ‘Disagree’ and 2.9% ‘Strongly disagree’ that women face greater difficulties. While 5.7% ‘Strongly agree’ and 25.7% ‘Agree’, a significant portion (34.3%) were ‘Neutral’.
These findings highlight a clear disparity in perceptions between genders. Women in both countries, particularly in the UK, perceive greater structural barriers to entry and advancement in technology careers than their male counterparts acknowledge. A substantial combined 48.5% of Indonesian females and an even more pronounced 69.2% of UK females either agree or strongly agree that women face more difficulties.
In contrast, males in both nations are less likely to perceive these heightened difficulties for women. Among Indonesian males, a combined 35.4% disagree or strongly disagree, with 43.1% remaining neutral, compared to only 21.5% who agree or strongly agree. For UK males, a combined 34.3% disagree or strongly disagree, 34.3% are neutral, and 31.4% agree or strongly agree. While UK male responses show a somewhat more distributed view than Indonesian males, both male groups exhibit considerably less agreement on the existence of such difficulties compared to their female counterparts. The strong agreement among UK females, in particular, suggests that despite any progress in inclusivity policies, a significant proportion still experience or observe gender-based challenges in the industry. The high neutrality among males, especially Indonesian males, might indicate a lack of awareness or exposure to the specific challenges women encounter.
Figure 5 presents the frequencies with which respondents reported hearing gender stereotypes suggesting male superiority in technology. The data indicates that such stereotypes remain a notable issue, particularly within the Indonesian cohort. Among Indonesian female respondents, a striking majority reported frequent encounters with these stereotypes: 53.0% stated they heard them ‘Often’, and an additional 24.2% reported hearing them ‘Very often’, culminating in 77.2% perceiving tech stereotypes frequently. Far fewer Indonesian females reported hearing these stereotypes ‘Occasionally’ (16.7%), ‘Rarely’ (4.5%), or ‘Never’ (1.5%). This perception of tech stereotypes was also evident among Indonesian males, although at a lower combined frequency for the higher categories. A total of 41.5% of Indonesian males reported hearing such stereotypes ‘Often’ or ‘Very often’. More specifically, 33.8% of Indonesian males heard them ‘Occasionally’, 16.9% ‘Rarely’, and 7.7% ‘Never’.
In contrast, respondents in the UK generally appeared to perceive tech stereotypes less intensely. Among UK females, 23.1% reported hearing gender stereotypes ‘Often’ and 19.2% ‘Very often’, totaling 42.3% for these more frequent categories. The most common response for this group was ‘Occasionally’ (46.2%), with 11.5% reporting ‘Rarely’ and 0.0% ‘Never’. Of the four demographic groups compared, UK male respondents reported the lowest exposure to these stereotypes. A significant 42.9% stated they ‘Rarely’ encountered such biases, and 8.6% indicated ‘Never’. For this group, 17.1% reported hearing them ‘Occasionally’, while 14.3% heard them ‘Often’ and 17.1% ‘Very often’, summing to 31.4% for the combined ‘Often’ and ‘Very often’ categories.
These findings suggest that cultural attitudes may play a key role in shaping the prevalence and perception of gender stereotypes in technology. The significantly higher reporting of frequently encountered stereotypes (‘Often’ or ‘Very often’) among Indonesian respondents (Indonesian females: 77.2%; Indonesian males: 41.5%) compared to UK respondents (UK females: 42.3%; UK males: 31.4%) highlights a more pronounced issue of stereotyping in the Indonesian context as perceived by the participants.
The particularly high percentage among Indonesian females underscores a substantial challenge. The lower, though still present, reports of stereotypes in the UK, especially among UK males, might reflect more established progressive attitudes toward gender equality in STEM fields, potentially influenced by differing educational frameworks, workplace policies, and broader societal changes. Nevertheless, the fact that over 40% of UK females still perceive these stereotypes ‘Often’ or ‘Very often’ indicates that the issue is far from resolved in that context either. The persistent and varied levels of perceived gender stereotypes across both countries and genders emphasize the ongoing need for targeted awareness campaigns and policy interventions designed to foster more inclusive and equitable environments in technology.

5. Discussion

This study aimed to explore the general public’s experiences with AI literacy and gender bias in the UK and Indonesia, two countries with distinct technological, economic, educational, and socio-cultural landscapes. The global AI capabilities of the UK and Indonesia show substantial disparities, particularly concerning government strategy, infrastructure, and research and development. As per the Global AI Index (2024), the UK is positioned 4th in terms of AI utilization, whereas Indonesia is ranked 49th (Tortoise, 2024). By exploring the nuances of gender disparities in AI literacy, this study seeks to contribute to the expanding research supporting inclusivity and diversity in AI and STEM disciplines.
This study utilized a self-assessment approach, a method widely employed in previous research to measure AI literacy (e.g., Laupichler et al., 2023). The results suggest that UK respondents generally report higher familiarity with both AI and programming compared to their Indonesian counterparts. This disparity may stem from differences in AI education integration within national curricula, access to advanced technological infrastructure, and the prevalence of AI-related industries. The UK’s National AI Strategy (2021) places a strong emphasis on AI education, which may contribute to greater awareness and technical proficiency (Secretary of State for Digital, Culture, Media and Sport, 2021). In contrast, while Indonesia has made strides in digital literacy, AI-specific education remains less structured, which could explain the lower familiarity levels. Moreover, research indicates that AI literacy among Indonesian higher education students is relatively uniform across different demographics, with no significant variations based on gender or age. However, regional disparities exist, influenced by socio-economic conditions and access to technological infrastructure. This suggests that while informal, application-based learning pathways are contributing to AI literacy, structural factors still play a significant role in shaping educational outcomes (Sari et al., 2025). Over time, the integration of AI tools into daily life and informal learning environments may enhance certain aspects of AI literacy, such as functional understanding and confidence in using AI technologies. This bottom-up approach to skill acquisition contrasts with the UK’s more top-down, curriculum-driven model, highlighting the importance of recognizing diverse pathways to AI literacy. By acknowledging and supporting these varied learning experiences, educational strategies can be better tailored to promote AI literacy across different cultural and infrastructural contexts.
The positive correlation between programming and AI literacy observed in both countries reinforces the notion that foundational programming skills can enhance AI understanding. This aligns with prior research suggesting that exposure to coding fosters computational thinking, a critical aspect of AI literacy (Laupichler et al., 2023; Sulmont et al., 2019; Hornberger et al., 2023). However, as AI literacy extends beyond programming, future educational initiatives should focus on broader AI competencies, including ethical considerations and real-world applications. Our findings show higher advanced literacy results in both countries for males than for females. These findings align with prior research suggesting that gender bias in self-perception exists in AI knowledge (Laupichler et al., 2024; Cachero et al., 2025).
The study uncovers subtle differences in perceptions of gender bias between the UK and Indonesia. UK respondents generally express greater confidence in gender inclusivity within recruitment and leadership roles, while Indonesian participants report a higher prevalence of initiatives designed to support women in technology. This indicates that although gender equity measures are more firmly established in the UK, Indonesia is actively implementing targeted programs to boost female participation, or is better at promoting such programs.
The findings also indicate that a significant proportion of respondents in both countries have witnessed gender bias in technology-related roles. This highlights the persistence of structural barriers and societal stereotypes that continue to limit women’s advancement in AI and STEM fields, as noted by previous studies. The results align with the existing literature showing that despite global efforts to bridge the gender gap in AI, women remain underrepresented in leadership positions and technical roles (Piloto, 2023; Ramseook-Munhurrun et al., 2025). Addressing these disparities requires not only educational interventions but also systemic changes in workplace policies and AI development practices (Rubin & Utomo, 2022). Addressing gender bias within AI and STEM fields demands targeted strategies at multiple levels. Educational institutions must actively challenge gender stereotypes by fostering inclusive environments, implementing mentorship programs, and highlighting female role models in AI to inspire future generations (Bennaceur et al., 2018; González-Pérez et al., 2022). Providing equitable access to resources, including scholarships, training workshops, mentorship, and networking opportunities tailored specifically for women, can mitigate systemic barriers that hinder their advancement (Casad et al., 2021). Additionally, organizational policies within technology companies should enforce unbiased recruitment and promotion processes, regular gender bias training, and transparent accountability systems to cultivate inclusive workplaces (Christie et al., 2017; Nweje et al., 2025).
This study reveals a nuanced connection between AI literacy and women’s participation in STEM, particularly in AI-related fields. Key findings indicate that higher AI literacy correlates with STEM engagement: respondents with advanced AI literacy (e.g., understanding AI concepts and confidence in using AI tools) were more likely to pursue or persist in STEM careers. However, women in both countries lagged behind men in advanced AI literacy (see Table 4). This aligns with findings that perceived challenges are higher for women; for example, 50.0% of Indonesian females and 42.9% of UK females ‘strongly agreed’ that women face difficulties in entering and advancing in tech careers (see Figure 4). Improving women’s AI literacy, especially in ethical and technical domains, could therefore be a tool for empowerment, potentially increasing their representation in STEM. Such literacy gains must, however, be paired with systemic reforms, such as dedicated mentorship programs and gender bias audits, to be fully effective. The study therefore advocates for policies that combine AI education (to boost literacy) with gender-inclusive workplace practices (to dismantle barriers), aligning with frameworks such as the Paris AI Action Summit’s emphasis on equity.

5.1. Implications

Our study provides educators, organizations, policymakers, and researchers with a framework to assess and improve AI literacy among the general public in the UK and Indonesia. By emphasizing factual knowledge, the study offers insights that can inform curriculum development and educational initiatives aimed at enhancing the understanding and responsible use of AI.
The findings highlight the need for targeted AI education strategies that address both technical proficiency and ethical awareness. In Indonesia, there is an opportunity to strengthen AI curricula and improve access to AI education, particularly for underrepresented groups. Meanwhile, in the UK, efforts should focus on ensuring inclusivity and reducing barriers that deter women from pursuing AI-related careers.
For policymakers, these insights underscore the importance of investing in AI literacy programs that move beyond basic interface familiarity to address the underlying mechanisms of AI and to foster critical analysis and ethical reflection. By promoting gender-inclusive AI education and addressing societal biases, both countries can work towards a more equitable and responsible AI landscape.

5.2. Limitations

The present study has certain limitations. The overrepresentation of students in the sample may have influenced the findings, as many students lack firsthand experience of leadership promotion and instead draw on information acquired through family and friends. This reliance on second-hand knowledge may not accurately reflect the complexities and nuances of leadership dynamics and gender bias in professional settings. Consequently, the findings related to workplace gender bias should be interpreted with caution, as they might not fully capture the experiences and challenges of those actively engaged in the workforce. Future research should strive to include a more balanced sample with a greater proportion of working professionals to enhance the generalizability of results concerning leadership and workplace gender bias.
Additionally, unequal sample sizes (UK n = 61; Indonesia n = 131) limit direct comparability, though proportional analysis mitigates this where possible. The unequal sample sizes resulted from practical constraints in data collection across different geographic contexts and institutional partnerships. The smaller sample size for the UK warrants caution when generalizing findings specifically for this group. We conducted a post hoc power analysis for our primary statistical tests (independent t-tests). For the significant finding regarding AI concept familiarity (effect size d = 0.59, α = 0.05), our current sample sizes provide adequate power (>0.80) to detect meaningful differences.
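For reproducibility, this post hoc power check can be illustrated with a minimal sketch in Python using the statsmodels package, assuming the reported effect size (d = 0.59), a two-sided test at α = 0.05, and the actual group sizes (UK n = 61, Indonesia n = 131); it is an illustrative reconstruction rather than the authors’ original script.

from statsmodels.stats.power import TTestIndPower

# Post hoc power for an independent-samples t-test with the reported parameters
analysis = TTestIndPower()
achieved_power = analysis.power(
    effect_size=0.59,      # Cohen's d for familiarity with AI concepts
    nobs1=61,              # UK sample size
    ratio=131 / 61,        # Indonesian sample size relative to the UK sample
    alpha=0.05,
    alternative="two-sided",
)
print(f"Achieved power: {achieved_power:.2f}")  # approx 0.97, well above the 0.80 threshold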
Self-reporting bias also constrains generalizability. Furthermore, our survey did not collect information on crucial dimensions of identity such as ethnicity, socio-economic background, disability status, or rural/urban residence. These factors often intersect with gender to shape both access to AI education and the perception (and reality) of bias in STEM fields. For instance, women from under-resourced communities may face compounded barriers, including weaker digital infrastructure, fewer local role models, and less institutional support for mentorship; similarly, women from racial or ethnic minorities may experience forms of stereotyping that differ from those faced by majority-group peers. Incorporating intersectional measures in future surveys would allow researchers to disentangle how multiple forms of disadvantage combine to influence AI literacy and career advancement, and would support the design of more precisely targeted interventions.
Although this study included a culturally diverse sample, it did not incorporate culturally specific measures to assess the potential influence of these factors on the results. In addition, the use of age ranges instead of exact ages limits precise demographic comparisons; future studies should collect granular age data to enable more nuanced analyses. Despite these limitations, the study’s findings add valuable insights to the expanding body of research on AI literacy and gender bias. Future studies should incorporate larger and more demographically balanced samples to better understand gender bias in AI workplaces and the broader, intersecting factors that shape it.

6. Conclusions

This study provides a comparative analysis of AI literacy and gender bias between the UK and Indonesia, revealing both similarities and nuanced differences shaped by distinct cultural, educational, and technological contexts. Our findings indicate that while overall AI literacy levels are similar across the two countries, UK respondents demonstrate higher familiarity with programming and AI, potentially reflecting the influence of a well-established educational framework and advanced digital infrastructure. Conversely, Indonesian respondents, despite exhibiting comparable average age and self-reported AI literacy, face unique challenges in terms of access to quality AI education and broader societal factors that shape gender perceptions.
Importantly, the study highlights persistent gender biases that affect women’s participation and advancement in AI and STEM fields. Differences in perceptions of gender bias in recruitment, leadership promotion, and support for women suggest that while progress is being made in both contexts, significant barriers remain. These disparities underscore the need for targeted interventions—ranging from curriculum reform and enhanced mentorship programs to policy initiatives aimed at bridging the gender gap in technology sectors.
The implications of these findings are twofold. First, they offer strategic insights for educators and policymakers seeking to design more inclusive and effective AI education programs. Second, they emphasize the importance of contextualized approaches in addressing gender bias within diverse cultural settings. Although the study’s use of voluntary sampling and an overrepresentation of students may limit the generalizability of the results, the comparable demographic profiles and robust analytical methods employed provide a valuable foundation for future research.
Moving forward, further investigations should aim to include a broader range of professional experiences and additional cultural contexts to better understand the evolving dynamics of AI literacy and gender bias globally. By addressing these challenges, stakeholders can work towards fostering a more equitable digital landscape where all individuals are empowered to engage with and benefit from AI technologies.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/educsci15091143/s1.

Author Contributions

Conceptualization, B.P. and R.F.S.; methodology, P.D.P., R.F.S., D.Y.L., and N.A.C.A.; software, A.D.T.; investigation, A.D.T. and P.D.P.; writing—original draft preparation, B.P. and A.D.T.; writing—review and editing, B.P., A.D.T., R.F.S., E.S.-V., P.D.P., D.Y.L., and N.A.C.A.; visualization, A.D.T.; supervision, B.P., R.F.S., and P.D.P.; funding acquisition, B.P. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the British Council under the Going Global Partnerships grant GEP2023-058.

Institutional Review Board Statement

The study protocol was approved by the Ethics Committee of Universitas Indonesia (Ket-489/UN2.F10.D11/PPM.00.02/2025).

Informed Consent Statement

The survey was conducted on an anonymous and voluntary basis. The information provided for this survey was treated confidentially and analyzed at an aggregated level. The collected data were kept in a secure file by the researchers involved and may be deleted once the intended purposes have been achieved. Personally identifiable information (if any) will not be shared.

Data Availability Statement

The data used is publicly available, and its citation is provided in the manuscript.

Acknowledgments

We would like to express our gratitude to all participants who voluntarily participated in this study.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Bahagijo, S., Prasetyo, Y. E., Kawuryan, D., Tua, B., & Eridani, A. D. (2022). Closing the digital gender gap in Indonesia through the roles and initiatives of civil society organizations. Jurnal Ilmu Sosial, 21(1), 14–38. [Google Scholar] [CrossRef]
  2. Bennaceur, A., Cano, A., Georgieva, L., Kiran, M., Salama, M., & Yadav, P. (2018, May 27–June 3). Issues in gender diversity and equality in the UK. 2018 IEEE/ACM 1st International Workshop on Gender Equality in Software Engineering (GE) (pp. 5–9), Gothenburg, Sweden. [Google Scholar] [CrossRef]
  3. Broo, D. G., Kaynak, O., & Sait, S. M. (2022). Rethinking engineering education at the age of industry 5.0. Journal of Industrial Information Integration, 25, 100311. [Google Scholar] [CrossRef]
  4. Cachero, C., Tomás, D., & Pujol, F. A. (2025). Gender bias in self-perception of artificial intelligence knowledge, impact, and support among higher education students: An observational study. ACM Transactions on Computing Education, 25(2), 15. [Google Scholar] [CrossRef]
  5. Callea, V., Dagklis, E., Nantsou, T., Otegui, X., Tovar, E., & Villa, G. (2024, May 8–11). Factors influencing women’s underrepresentation in engineering: A literature review at EDUCON. 2024 IEEE Global Engineering Education Conference (EDUCON) (pp. 1–10), Kos, Greece. [Google Scholar] [CrossRef]
  6. Casad, B. J., Franks, J. E., Garasky, C. E., Kittleman, M. M., Roesler, A. C., Hall, D. Y., & Petzel, Z. W. (2021). Gender inequality in academia: Problems and solutions for women faculty in STEM. Journal of Neuroscience Research, 99(1), 13–23. [Google Scholar] [CrossRef]
  7. Christie, M., O’Neill, M., Rutter, K., Young, G., & Medland, A. (2017). Understanding why women are under-represented in science, technology, engineering and mathematics (STEM) within higher education: A regional case study. Production, 27, e20162205. [Google Scholar] [CrossRef]
  8. Davis, F. D. (1989). Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Quarterly, 13(3), 319–340. [Google Scholar] [CrossRef]
  9. Druga, S., Otero, N., & Ko, A. J. (2022, July 11–13). The landscape of teaching resources for AI education. 27th ACM Conference on Innovation and Technology in Computer Science Education (Vol. 1, pp. 96–102), Dublin, Ireland. [Google Scholar]
  10. Elysee_Palace. (2025, February 11). Statement on inclusive and sustainable artificial intelligence for people and the planet. Elysee Palace. Available online: https://www.elysee.fr/en/emmanuel-macron/2025/02/11/statement-on-inclusive-and-sustainable-artificial-intelligence-for-people-and-the-planet (accessed on 14 February 2025).
  11. Gerlich, M. (2025). AI tools in society: Impacts on cognitive offloading and the future of critical thinking. Societies, 15(1), 6. [Google Scholar] [CrossRef]
  12. González-Pérez, S., Martínez-Martínez, M., Rey-Paredes, V., & Cifre, E. (2022). I am done with this! Women dropping out of engineering majors. Frontiers in Psychology, 13, 918439. [Google Scholar] [CrossRef] [PubMed]
  13. Hobeika, E., Hallit, R., Malaeb, D., Sakr, F., Dabbous, M., Merdad, N., Rashid, T., Amin, R., Jebreen, K., Zarrouq, B., Alhuwailah, A., Shuwiekh, H. A. M., Hallit, S., Obeid, S., & Fekih-Romdhane, F. (2024). Multinational validation of the Arabic version of the artificial intelligence literacy scale (AILS) in university students. Cogent Psychology, 11(1), 2395637. [Google Scholar] [CrossRef]
  14. Hofstede, G. (2001). Culture’s consequences: Comparing values, behaviors, institutions and organizations across nations. Sage Publications. [Google Scholar]
  15. Hofstede, G., Hofstede, G. J., & Minkov, M. (2010). Cultures and organizations: Software of the mind (3rd ed.). McGraw Hill LLC. [Google Scholar]
  16. Hornberger, M., Bewersdorff, A., & Nerdel, C. (2023). What do university students know about artificial intelligence? Development and validation of an AI literacy test. Computers and Education: Artificial Intelligence, 5, 100165. [Google Scholar] [CrossRef]
  17. Hornberger, M., Bewersdorff, A., Schiff, D. S., & Nerdel, C. (2025). A multinational assessment of AI literacy among university students in Germany, the UK, and the US. Computers in Human Behavior: Artificial Humans, 4, 100132. [Google Scholar] [CrossRef]
  18. Hossain, Z., Biswas, M. S., & Khan, G. (2025). AI literacy of library and information science students: A study of Bangladesh, India and Pakistan. Journal of Librarianship and Information Science, 09610006241309323. [Google Scholar] [CrossRef]
  19. Jain, H., Padmanabhan, B., Pavlou, P. A., & Raghu, T. S. (2021). Editorial for the special section on humans, algorithms, and augmented intelligence: The future of work, organizations, and society. Information Systems Research, 32(3), 675–687. [Google Scholar] [CrossRef]
  20. Kasinidou, M. (2023, July 7–12). AI literacy for all: A participatory approach. 2023 Conference on Innovation and Technology in Computer Science Education (Vol. 2, pp. 607–608), Turku, Finland. [Google Scholar] [CrossRef]
  21. Kazanidis, I., & Pellas, N. (2024). Harnessing generative artificial intelligence for digital literacy innovation: A comparative study between early childhood education and computer science undergraduates. AI, 5(3), 1427–1445. [Google Scholar] [CrossRef]
  22. Kong, S.-C., Korte, S.-M., Burton, S., Keskitalo, P., Turunen, T., Smith, D., Wang, L., Lee, J. C.-K., & Beaton, M. C. (2024). Artificial intelligence (AI) literacy—An argument for AI literacy in education. Innovations in Education and Teaching International, 62(2), 477–483. [Google Scholar] [CrossRef]
  23. Laupichler, M. C., Aster, A., Meyerheim, M., Raupach, T., & Mergen, M. (2024). Medical students’ AI literacy and attitudes towards AI: A cross-sectional two-center study using pre-validated assessment instruments. BMC Medical Education, 24(1), 401. [Google Scholar] [CrossRef]
  24. Laupichler, M. C., Aster, A., Perschewski, J.-O., & Schleiss, J. (2023). Evaluating AI courses: A valid and reliable instrument for assessing artificial-intelligence learning through comparative self-assessment. Education Sciences, 13(10), 978. [Google Scholar] [CrossRef]
  25. Laupichler, M. C., Aster, A., Schirch, J., & Raupach, T. (2022). Artificial intelligence literacy in higher and adult education: A scoping literature review. Computers and Education: Artificial Intelligence, 3, 100101. [Google Scholar] [CrossRef]
  26. Lee, H.-P., Sarkar, A., Tankelevitch, L., Drosos, I., Rintel, S., Banks, R., & Wilson, N. (2025, April 26–May 1). The impact of generative AI on critical thinking: Self-reported reductions in cognitive effort and confidence effects from a survey of knowledge workers. ACM CHI Conference on Human Factors in Computing Systems, Yokohama, Japan. [Google Scholar] [CrossRef]
  27. Lee, I., Ali, S., Zhang, H., DiPaola, D., & Breazeal, C. (2021, March 13–20). Developing middle school students’ AI literacy. 52nd ACM Technical Symposium on Computer Science Education (pp. 191–197), Virtual Event. [Google Scholar] [CrossRef]
  28. Lemus-Delgado, D., & Cerda, C. (2025). ASEAN, gender equality and women’s empowerment in STEM. Asian Education and Development Studies, 14(2), 299–313. [Google Scholar] [CrossRef]
  29. Li, L. (2022). Reskilling and upskilling the future-ready workforce for industry 4.0 and beyond. Information Systems Frontiers, 26, 1697–1712. [Google Scholar] [CrossRef]
  30. Long, D., & Magerko, B. (2020, April 25–30). What is AI literacy? Competencies and design considerations. 2020 CHI Conference on Human Factors in Computing Systems (pp. 1–16), Honolulu, HI, USA. [Google Scholar] [CrossRef]
  31. Morais Maceira, H. (2017). Economic benefits of gender equality in the EU. Intereconomics, 52(3), 178–183. [Google Scholar] [CrossRef]
  32. Ng, D. T. K., Lee, M., Tan, R. J. Y., Hu, X., Downie, J. S., & Chu, S. K. W. (2023). A review of AI teaching and learning from 2000 to 2020. Education and Information Technologies, 28(7), 8445–8501. [Google Scholar] [CrossRef]
  33. Ng, D. T. K., Leung, J. K. L., Chu, K. W. S., & Qiao, M. S. (2021). AI literacy: Definition, teaching, evaluation and ethical issues. Proceedings of the Association for Information Science and Technology, 58(1), 504–509. [Google Scholar] [CrossRef]
  34. Nweje, U., Amaka, N. S., & Makai, C. C. (2025). Women in STEM: Breaking barriers and building the future. International Journal of Science and Research Archive, 14(1), 202–217. [Google Scholar] [CrossRef]
  35. Pal, S., Lazzaroni, R. M., & Mendoza, P. (2024). AI’s missing link: The gender gap in the talent pool. Available online: https://www.stiftung-nv.de/publications/ai-gender-gap (accessed on 1 March 2025).
  36. Piloto, C. (2023). The gender gap in STEM: Still gaping in 2023. MIT Professional Education. [Google Scholar]
  37. Ramseook-Munhurrun, P., Naidoo, P., & Armoogum, S. (2025). Navigating the challenges of female leadership in the information and communication technology and engineering sectors. Journal of Business and Socio-Economic Development, 5(1), 55–70. [Google Scholar] [CrossRef]
  38. Roopaei, M., Horst, J., Klaas, E., Foster, G., Salmon-Stephens, T. J., & Grunow, J. (2021, May 10–13). Women in AI: Barriers and solutions. 2021 IEEE World AI IoT Congress (AIIoT), Seattle, WA, USA. [Google Scholar] [CrossRef]
  39. Rubin, C., & Utomo, E. (2022). Strengthening ASEAN women’s participation in STEM. Available online: https://asean.org/wp-content/uploads/2023/10/Policy-Brief-Strengthening-ASEAN-Womens-Participation-in-STEM-Endorsed.FINAL_.pdf (accessed on 1 March 2025).
  40. Sari, D. K., Supahar, S., Rosana, D., Dinata, P. A., & Istiqlal, M. (2025). Measuring artificial intelligence literacy: The perspective of Indonesian higher education students. Journal of Pedagogical Research, 9(2), 143–157. [Google Scholar] [CrossRef]
  41. Schwab, K., Samans, R., Zahidi, S., Leopold, T. A., Ratcheva, V., Hausmann, R., & Tyson, L. D. (2021). The global gender gap report 2021. World Economic Forum. Available online: https://www3.weforum.org/docs/WEF_GGGR_2021.pdf (accessed on 31 January 2025).
  42. Secretary of State for Digital, Culture, Media and Sport. (2021). National AI strategy. Command Paper 525. HM Government. Available online: https://www.gov.uk/government/publications/national-ai-strategy (accessed on 31 January 2025).
  43. Shah, S. S. (2024). Gender bias in artificial intelligence: Empowering women through digital literacy. Journal of Artificial Intelligence, 1, 1000088. [Google Scholar] [CrossRef]
  44. Sulmont, E., Patitsas, E., & Cooperstock, J. R. (2019, February 27–March 2). Can you teach me to machine learn? 50th ACM Technical Symposium on Computer Science Education (pp. 948–954), Minneapolis, MN, USA. [Google Scholar] [CrossRef]
  45. TheCultureFactor. (n.d.). Country comparison tool. Available online: https://www.hofstede-insights.com/country-comparison/ (accessed on 6 February 2025).
  46. Tortoise. (2024). The global AI index. Available online: https://www.tortoisemedia.com/intelligence/global-ai (accessed on 1 March 2025).
  47. Venkatesh, V., & Zhang, X. (2010). Unified theory of acceptance and use of technology: US vs. China. Journal of Global Information Technology Management, 13(1), 5–27. [Google Scholar] [CrossRef]
  48. Wang, B., Rau, P.-L. P., & Yuan, T. (2023). Measuring user competence in using artificial intelligence: Validity and reliability of artificial intelligence literacy scale. Behaviour & Information Technology, 42(9), 1324–1337. [Google Scholar] [CrossRef]
  49. West, S. M. (Director). (2020, February 25). Discriminating systems: Gender, race and power in artificial intelligence [Video recording]. AI Now Institute. Available online: http://hdl.handle.net/1853/62480 (accessed on 14 February 2025).
  50. Young, E., Wajcman, J., & Sprejer, L. (2021). Where are the women? Mapping the gender job gap in AI. The Alan Turing Institute. [Google Scholar]
  51. Zheng, H., Li, W., & Wang, D. (2022). Expertise diversity of teams predicts originality and long-term impact in science and technology. arXiv, arXiv:2210.04422. [Google Scholar] [CrossRef]
Figure 1. Witnessed gender bias.
Figure 2. Programs for women.
Figure 3. Promotion to leadership.
Figure 4. Difficulty in tech careers.
Figure 5. Tech stereotypes.
Table 1. AI components.
AI Component | Description
Familiarity | Level of awareness about AI systems, ability to identify them, and practical experience using AI tools
Knowledge and application | Understanding of AI concepts combined with technical expertise in implementing and utilizing AI technologies
Ethical perceptions | Views on how AI affects academic integrity and its broader societal implications
Table 2. Participants’ demographics.
Age distribution | Indonesia (n = 131) | UK (n = 61)
18–24 years | 45% | 33%
25–34 years | 38% | 41%
35–44 years | 12% | 20%
45+ years | 5% | 6%
Median age range | 25–34 years | 25–34 years
Table 3. Hypothesis tests for AI concept familiarity, AI tool confidence and AI ethical engagement.
Measure | Indonesia (Mean ± SD) | UK (Mean ± SD) | t | df | p-Value | 95% CI (Lower, Upper)
Familiarity with AI Concepts | 2.47 ± 0.79 | 2.97 ± 0.93 | 3.8550 | 190 | 0.0002 | (−0.7558, −0.2442)
Confidence in AI Tools | 4.05 ± 0.74 | 4.08 ± 0.82 | 0.2526 | 190 | 0.8008 | (−0.2643, 0.2043)
Discussion of AI Ethics | 2.66 ± 0.97 | 2.74 ± 1.06 | 0.5165 | 190 | 0.6061 | (−0.3855, 0.2255)
Table 4. AI literacy levels (in %).
Country | Gender | No AI Literacy | Basic Literacy | Advanced Literacy
Indonesia | Female | 74.24 | 12.12 | 13.64
Indonesia | Male | 60.00 | 21.54 | 18.46
UK | Female | 42.31 | 30.77 | 26.92
UK | Male | 34.29 | 20.00 | 45.71
