Article

Generative AI for Culturally Responsive Science Assessment: A Conceptual Framework

by Matthew Nyaaba 1,2,3,*, Xiaoming Zhai 1,2,4,* and Morgan Z. Faison 3

1 AI4STEM Education Center, University of Georgia, Athens, GA 30602, USA
2 National Center on Generative AI for Uplifting STEM+C Education, University of Georgia, Athens, GA 30602, USA
3 Department of Educational Theory and Practice, University of Georgia, Athens, GA 30602, USA
4 Department of Mathematics, Science, and Social Studies Education, University of Georgia, Athens, GA 30602, USA
* Authors to whom correspondence should be addressed.
Educ. Sci. 2024, 14(12), 1325; https://doi.org/10.3390/educsci14121325
Submission received: 2 September 2024 / Revised: 21 October 2024 / Accepted: 21 November 2024 / Published: 30 November 2024

Abstract: In diverse classrooms, one of the challenges educators face is creating assessments that reflect the different cultural backgrounds of every student. This study presents a novel approach to the automatic generation of cultural and context-specific science assessment items for K-12 education using generative AI (GenAI). We first developed a GenAI Culturally Responsive Science Assessment (GenAI-CRSciA) framework that connects CRSciA, specifically key cultural tenets such as Indigenous language, Indigenous knowledge, ethnicity/race, and religion, with the capabilities of GenAI. Using the GenAI-CRSciA framework, along with interactive guided dynamic prompt strategies, we developed the CRSciA-Generator tool within the OpenAI platform. The CRSciA-Generator allows users to automatically generate assessment items customized to their students’ cultural and contextual needs. We further conducted a pilot demonstration comparing item generation by the CRSciA-Generator with that of the base GPT-4o under standard prompts. Both tools were tasked with generating CRSciAs aligned with the Next Generation Science Standards on predator–prey relationships for use with students from Ghana, the USA, and China. The results showed that the CRSciA-Generator produced assessment items with more tailored cultural and contextual content for each group, with examples such as traditional stories of lions and antelopes in Ghana, Native American views on wolves in the USA, and Taoist or Buddhist teachings on the Amur tiger in China, compared with the standard-prompt items from the base GPT-4o. However, because the pilot demonstration focused on nationality, the CRSciA-Generator treated the countries as culturally homogeneous, overlooking subcultural diversity within them. We therefore recommend that educators provide detailed background information about their students when using the CRSciA-Generator. We further recommend future studies involving expert reviews to assess the cultural and contextual validity of the assessment items generated by the CRSciA-Generator.

1. Introduction

“Sometimes it’s not the student who is failing the assessment—it might be that the assessment is failing to fully assess the abilities of the student”—
[1]
In today’s diverse science classrooms, one of the challenges educators face is creating science assessments that genuinely reflect the cultural backgrounds of every student [2]. K-12 science education plays a vital role in equipping students to tackle social and cultural issues like climate change, public health, and technological advancements. Consequently, K-12 science assessments, guided by frameworks like the Next Generation Science Standards (NGSS) [3,4], seek to cultivate a profound grasp of fundamental and culturally relevant scientific concepts and the capacity to utilize this knowledge in addressing real-world challenges [5,6]. This goal encourages the teaching and assessment of science to engage students in practices that are meaningful to their cultural experiences, treating those experiences as assets in their learning processes [6,7]. There have been initiatives, such as culturally responsive assessment (CRA), to connect science concepts to students’ cultural experiences as an asset [8]. Studies on culturally responsive science assessments (CRSciAs) indicate that they make learning more relevant and engaging and improve achievement among students [9]. However, implementing CRSciAs in K-12 classrooms remains challenging for educators. Educators continue to face difficulties due to increasing classroom diversity and the dominance of traditional, Western-centric assessments, which often disadvantage migrant, historically marginalized, and Indigenous students [10]. Furthermore, CRSciA practices, though essential, are time-consuming and complex, making them hard to scale effectively [11,12].
The recent development of Generative Artificial Intelligence (GenAI) offers a promising solution for CRSciAs [13,14,15]. GenAI can provide multimodal learning opportunities [16], handling a variety of cultural contexts, such as adapting to different languages, symbols, and local knowledge, more often and more effectively than a single classroom science teacher could manage alone [17,18]. Recent studies indicate a significant rise in the use of GenAI, particularly ChatGPT, by educators to automate assessment generation. This trend reflects the potential of GenAI to address the traditionally time-consuming and labor-intensive aspects of assessment creation [18].
Nonetheless, the rapid adoption of GenAI for this purpose also presents challenges, as it does not yet fully assist teachers in achieving culturally responsive science assessments. Concerns persist about cultural and contextual misrepresentations and inaccuracies, which may stem from the reliance on the generic, large datasets used to train GenAI systems, as well as imperfections in the prompts crafted by teachers or users [19,20,21]. For instance, Kıyak [22], in their review of GenAI and automatic assessment, highlighted that GenAI-generated assessments often lack contextual knowledge relating to languages and Indigenous knowledge. Moreover, Chan et al. [23], in their study exploring ChatGPT’s potential in question generation, identified cultural biases concerning race, ethnicity, and religious beliefs in the generated questions [24]. These studies, among others, recommended datasets and frameworks that can guide GenAI prompt engineering to consider the diverse cultural backgrounds of students as assets in assessment generation. These recommendations emphasize the urgent importance of addressing these issues: without careful consideration, GenAI could hegemonically reinforce, through assessment generation, the very inequities it is intended to overcome in education [25].
In response to these challenges, this study seeks to develop a conceptual framework that brings together the capabilities of GenAI and the core tenets of CRSciA for K-12 science assessments. Specifically, we first develop a GenAI-CRSciA conceptual framework by identifying the key concepts of culturally responsive assessment alongside GenAI capabilities and articulating the relationships between them [26]. This lays the foundation for the CRSciA-Generator within the OpenAI model. We then integrate the conceptual framework into the GPT model to create the CRSciA-Generator. This involves an interactive guided dynamic prompt method based on the GenAI-CRSciA framework. This prompt approach dynamically initiates a conversation and further interacts with users to generate assessment items tailored to their students’ cultural and context-specific needs based on the information the user provides [27]. Finally, we pilot the CRSciA-Generator by comparing it with the base GPT-4o under standard prompts across the cultures of the US, Ghana, and China. The findings show that the CRSciA-Generator can automatically generate cultural and context-specific science assessments, acknowledging students’ cultural backgrounds as an asset to scientific literacy.

2. Literature Review

There are few studies related to CRA, in general, and, consequently, to CRSciA. This section provides an overview of the impact of standardized assessment, which has traditionally dominated science education. It also examines the discourse about CRSciA, the challenges that persist in implementing CRSciA, and finally, the capabilities of GenAI in addressing these challenges.

2.1. Impact of Standardized Assessment

One of the key impacts of standardized assessment (traditional standardized assessment) is the high possibility of creating an “achievement gap”. Achievement gap discourse is prominent, particularly in countries like the US with diverse student populations [28,29]. This term refers to the disparities in standardized test scores between Native Indigenous, Black, Latina/o, recent immigrant, and White students [30]. While this is a concern, Ladson-Billings [28] further posited that even a focus on the gap is misplaced. Instead, we need to look at the “education debt” that has accumulated historically over time. She draws an analogy with the concept of national debt, which she contrasts with that of a national budget deficit, to argue the significance of historical inequities in education. Moreover, historically marginalized students, including Native Indigenous, Black, and Latina/o students, have accumulated disadvantages, limited opportunities, and a lack of access to equal education for generations [31].
Beyond the supposed “achievement gap”, CRSciA aims to address the educational debt in science. This implies that the lack of representation of cultures in science assessments has profound and far-reaching effects across every aspect of life in wider society. A biased science assessment that affects a particular group of students can influence their career goals and limit their contributions to society [32]. Studies show that unfair assessments have broader societal impacts, such as contributing to higher school dropout rates [33]. Students who are unfairly assessed may become disengaged from the educational system and may be pushed out (drop out) of school [33]. This, in turn, can result in a larger number of unproductive citizens, which negatively affects society by increasing the burden on social services and reducing overall economic productivity [34,35,36]. However, recent research shows that, unlike high-stakes assessments, culturally responsive assessments motivate students and promote authentic, life-long learning [37]. Furthermore, students who possess a well-developed understanding and awareness of cultural issues are ready to engage with CRSciAs [38]. GenAI’s ability to generate personalized educational content demonstrates its potential to address the diverse needs of students through CRSciAs.

2.2. Culturally Responsive Assessments in Science Education

Even though there was an initial perception that science education is unsuitable for culturally responsive assessments, this misconception has been dispelled [39]. Recent studies have shown remarkable CRSciA practices and have even expanded to include Science, Technology, Engineering, Arts, and Mathematics subjects, demonstrating the value of culturally responsive assessments across all disciplines and educational levels [39]. Moreover, the Framework for K–12 Science Education [3] articulates a broader set of expectations to ensure that, by the end of 12th grade, all students possess sufficient knowledge of science to engage in their everyday lives, continue learning about science outside of school, and have the skills to pursue careers of their choice. The phrase “all students” is emphasized throughout this framework to provide equitable opportunities, including assessment, for all students to succeed in science [40].

2.3. Challenges of Implementing Culturally Responsive Assessments in Science Education

The implementation of CRSciAs faces significant challenges, including teachers’ biases, limited cultural knowledge, and personal identities that influence their engagement with diverse student experiences. Additionally, the misalignment of CRSciAs with established curricular standards, inadequate institutional support, and insufficient professional development hinder effective adoption.
One major challenge arises from the limitations stemming from classroom teachers’ identities and biases, as well as their limited knowledge of diverse cultural contexts [41]. Teachers’ personal identities, including their race, ethnicity, and cultural background, can influence how they perceive and engage with CRSciAs [42]. For example, in science assessments, a teacher who lacks familiarity with the cultural experiences of their students may unintentionally introduce biases into the assessment process, either by favoring certain cultural narratives or by overlooking others.
Another challenge lies in the integration of CRSciA within the Framework for K-12 Science Education [3]. Though this reflects a broader struggle within educational systems to effectively align CRSciAs with established curricular standards, it applies critically to science assessment [11,43,44]. Ladson-Billings [39] critiques this misalignment between CRSciA and the Framework for K-12 Science Education by highlighting how top-down initiatives to implement CRA often miss the mark; as countries, states, districts, and professional organizations attempt to address cultural issues in assessment through various frameworks and guidelines, their efforts frequently fall short of the theory’s original intent [39]. A further challenge for CRSciAs is inadequate resources and a lack of institutional support, which make it difficult for science teachers to effectively incorporate cultural responsiveness into their assessments; without the necessary tools, guidance, and resources, educators are unable to fully leverage CRSciA practices [45].
These challenges, such as limited teacher knowledge and the misalignment of CRA with curricular standards, call for continuous professional development; yet studies indicate that there is currently inadequate continuous professional development available for teachers regarding CRSciAs [46]. For instance, Harris et al. [7] surveyed teachers across 18 states in the US and revealed that 86.36% of K-12 teachers viewed the integration of CRA with the NGSS positively but suggested more robust teacher training programs to enhance awareness and effective adoption of both the NGSS and CRA in science classrooms.

2.4. Generative AI and Culturally Responsive Assessment

There is currently limited research on the integration of GenAI into CRAs. However, this study draws upon the documented potential of GenAI in the existing literature to establish its capabilities in addressing CRSciAs. Generative AI refers to advanced computational techniques that create new, meaningful content such as text, images, and audio from existing data [14]. These capabilities of GenAI can address significant challenges that teachers face in designing CRAs, such as the time-intensive nature of creating materials that are both culturally relevant and pedagogically sound [47,48]. Furthermore, the interactive nature of GenAI-based assessments allows for real-time feedback and adaptation, providing teachers and students with immediate opportunities to learn and correct misunderstandings [49,50]. For instance, GPT-4’s ability to interpret and generate multimodal content, including visual data like graphs and chemical diagrams, enhances its utility in crafting assessments that are aligned with cultural contexts and could engage students in ways that traditional automatic text-based assessments cannot [16]. This multimodal approach, grounded in the theory of multimedia learning, can help overcome the limitations of traditional AI, which has been largely text-bound, by incorporating a broader spectrum of human experience into the assessment process [14,16,51].

3. Generative AI Framework for Culturally Responsive Assessments in Science

The framework of this study was grounded in cultural tenets that are central to shaping individuals’ identities and learning experiences in science, as well as in GenAI capabilities. Currently, GenAI can translate across more than 50 languages, an essential feature that plays a significant role in assessment generation. Studies also show that, with proper prompts, GenAI could reduce race and ethnicity biases [52,53,54]. Furthermore, GenAI is noted for demonstrating respect for religious and cultural differences and has the potential to enhance Indigenous knowledge [55,56,57,58]. These qualities of GenAI largely influenced the selection of the cultural tenets [59]. We acknowledge other cultural tenets, such as socio-economic status and gender, and advocate for their inclusion in future frameworks as the field of GenAI evolves [60]. The framework specifically focuses on the prevailing tenets of Indigenous language, Indigenous knowledge, ethnicity/race, religious beliefs, and community and family (see Figure 1).

3.1. Indigenous Language

Indigenous language plays a crucial role in CRA within K-12 education, particularly in science classrooms where students from diverse linguistic backgrounds are increasingly prevalent [43]. Wright and Domke [61] highlight the significant role of language in the NGSS, noting how language can contribute to science learning and achievement among K-12 students. Kūkea Shultz and Englert [62] provide a compelling example of culturally and linguistically responsive assessment with the development of the Kaiapuni Assessment of Educational Outcomes (KĀʻEO) in Hawaii. KĀʻEO was developed to address the inequities faced by students in the Hawaiian Language Immersion Program (Kaiapuni), who were historically assessed with tools that were culturally and linguistically inappropriate. The assessment demonstrated significant outcomes by providing culturally and linguistically valid measures that better captured the academic abilities of Kaiapuni students. In the context of linguistically responsive assessment, the application of GenAI in educational settings has shown significant potential in addressing the diverse linguistic needs of students. A notable example is the study by Latif et al. [63], which introduced G-SciEdBERT (German Science Education BERT), a specialized adaptation of the standard German BERT (G-BERT) model. G-SciEdBERT was developed to overcome G-BERT’s limitations in scoring written science responses. By pre-training G-SciEdBERT on a substantial corpus of German-written science responses and fine-tuning it on specific assessment items, the researchers demonstrated a marked improvement in scoring accuracy, with a 10% increase in the quadratic weighted kappa compared to G-BERT. This finding highlights the role of GenAI models in creating linguistically and culturally responsive assessments.
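For context, the quadratic weighted kappa used in that comparison is a chance-corrected agreement statistic for ordinal scores; it is not defined in the cited study, but a standard formulation for N score categories is

$$\kappa = 1 - \frac{\sum_{i,j} w_{ij} O_{ij}}{\sum_{i,j} w_{ij} E_{ij}}, \qquad w_{ij} = \frac{(i - j)^{2}}{(N - 1)^{2}},$$

where $O_{ij}$ counts responses scored $i$ by the model and $j$ by human raters, and $E_{ij}$ is the corresponding count expected by chance from the marginal score distributions. The quadratic weights penalize large scoring disagreements more heavily than near misses.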

3.2. Religion

Religious beliefs are a critical factor to consider in the development of CRA in K-12 science education. Mantelas and Mavrikaki [64] highlighted that the intersection of religiosity and scientific concepts, such as evolution, presents unique challenges for educators in CRSciA. Their study demonstrates that students with strong religious convictions may struggle with certain scientific ideas, which can negatively impact their academic performance if assessments do not account for these beliefs. This means CRSciA should allow students to demonstrate their scientific understanding without forcing them to choose between their religious beliefs and academic success [65].
Barnes et al. [66] further stress the importance of considering religious backgrounds in CRA, particularly for students of color who may rely on religion as a critical support system. Their research shows that strong religious beliefs can influence students’ acceptance of scientific theories like evolution, which, in turn, affects their academic success in science-related subjects. Culturally responsive assessments must, therefore, be designed to account for these factors, ensuring that they do not inadvertently disadvantage students whose religious beliefs differ from mainstream scientific views.
Moreover, Owens et al. [67] contribute to this discussion by advocating for a “pedagogy of difference” in teaching science, which could be extended to assessment practices in K-12 science education. This pedagogical approach encourages students to explore the relationship between their religious beliefs and scientific concepts, fostering an environment where multiple perspectives are acknowledged and valued. In light of this, Sumarni et al. [68] proposed a holistic model for integrating religious and cultural knowledge into STEM education, which can serve as a foundation for CRA practices. Their RE-STEM model emphasizes the importance of bridging the gap between religion, culture, and science, suggesting that assessments should be designed to reflect this integration.
Although the intersection of GenAI and religious contexts remains underexplored, GenAI’s potential for engaging in nuanced discussions about religious concepts is promising [69]. For instance, GenAI models like ChatGPT have shown the ability to participate in theological dialogues, offering responses that respect religious traditions [58]. This capability allows for the development of assessments that honor students’ religious beliefs. Nonetheless, ethical considerations are paramount; as Ashraf [70] argues, GenAI applications must be carefully monitored to avoid infringing on religious freedoms and to prevent bias and disrespect in digital interactions.

3.3. Indigenous Knowledge

Indigenous knowledge plays a pivotal role in shaping CRSciA in K-12 education, offering a means to create equitable achievement [71]. Trumbull and Nelson-Barber [72] explored the challenges and opportunities in developing CRA for Indigenous students in the US, highlighting the limitations of standardized assessments that often disregard Indigenous knowledge systems. They argued that these assessments could be ineffective and even harmful, as they fail to engage Indigenous students or accurately measure their knowledge. This affirms Muhammad et al.’s [31] assertion that traditional assessments and curricula often overlook the historical and cultural contexts of Black and Brown children in the US, as highlighted by the National Assessment of Educational Progress (NAEP).
Therefore, the concept of “culturally valid assessment” was proposed by Trumbull and Nelson-Barber [72] to incorporate Indigenous ways of knowing and to be responsive to the cultural and linguistic diversity of the students. This approach is crucial for creating assessments that support students’ academic success while also preserving and respecting their cultural identities [73]. Furthermore, Jin [74] systematically reviewed educational programs aimed at supporting Indigenous students in science and STEM fields, revealing the positive impact of integrating Indigenous knowledge with Western scientific assessment. This review shows that culturally responsive assessment approaches in these programs lead to improved educational outcomes, as they allow Indigenous students to draw connections between their cultural heritage and the scientific concepts they are learning.
GenAI has the potential to challenge dominant Eurocentric narratives and promote the inclusion of Indigenous perspectives in K-12 science assessment [75]. For instance, GenAI tools can help ensure that science assessments honor and reflect Indigenous cultural identities. However, it is essential that GenAI-generated content is contextually accurate and respects the complexity of Indigenous cultures. Castro Nascimento and Pimentel [76] emphasized the need for GenAI models to be trained on diverse cultural datasets to avoid perpetuating narrow perspectives. The deliberate integration of Indigenous knowledge into GenAI models can significantly enhance the cultural relevance of science education.

3.4. Race and Ethnicity

CRSciA requires a thorough understanding of how race and ethnicity shape students’ learning experiences and outcomes. Atwater et al. [77] emphasized that traditional science assessments often overlooked the diverse cultural backgrounds of students, particularly those from African American, Latino, and Asian American communities. They, therefore, advocated for science assessments that were inclusive and reflective of the race and ethnicity within classrooms. Similarly, Wells [78] called for strategic cross-sector collaboration between K-12 and higher education to address the sociocultural factors affecting diversity in education, reinforcing the need for assessments that are sensitive to the varied cultural contexts of students.
The importance of factoring race and ethnicity into science teaching and assessment was further highlighted by Riegle-Crumb et al. [79], who found that inquiry-based learning was associated with more positive attitudes toward science among students from diverse racial and ethnic backgrounds. When assessments are designed to reflect the competencies of students’ cultural backgrounds, they allow them to demonstrate their understanding through exploration, critical thinking, and problem-solving in STEM [80].
However, a study by Choudhary [81] highlights the prevalence of racial bias in GenAI tools, including ChatGPT, underscoring the necessity of mitigating these biases. In the context of CRA, GenAI tools must be designed to promote fairness and inclusivity, particularly in the assessment of students from diverse racial and ethnic backgrounds. Warr et al. [82] provided evidence of racial bias affecting GenAI evaluations, demonstrating that racial descriptors could influence GenAI-generated assessments. This underlines the importance of developing transparent and fair GenAI tools that can assist teachers in developing assessments that address the diverse cultural backgrounds of all students.

3.5. Family and Community Engagement

While family and community might not be directly involved in the creation of assessment items, the role of family and community experts in validating the cultural factors of assessments is crucial in the CRSciA framework. Family and community involvement provides essential context and resources that directly impact students’ academic performance and engagement. Denton et al. [83] emphasized the significance of community cultural wealth in STEM education, particularly in K-12 settings. They argued that an assets-based approach, which recognized the diverse forms of capital, such as familial, linguistic, and social, that students from nondominant communities bring, was essential for developing assessments that truly reflected students’ backgrounds. In K-12 science assessments, this means creating assessment tools that account for the cultural and social capital that students acquire from their families and communities.
Gerde et al. [84] provide insight into the specific ways that families contribute to science learning at the K-12 level. Their study shows that the resources and opportunities families provide, such as access to science-related books and toys, and community experiences like visits to parks and science centers, vary widely based on parents’ self-efficacy and beliefs about science. This is an indication that CRSciA at this level should not only measure what students know from formal education but also integrate the informal learning experiences that occur within their family and community contexts. Soto-Lara and Simpkins [85] further elaborated on the role of culturally grounded family support in the science education of Mexican-descent adolescents, focusing on the K-12 educational context. Their findings reveal that parents provide support through both traditional means, such as encouraging their children, and nontraditional methods, like Indigenous cultural practices and leveraging extended family networks.
Family and community involvement is a cornerstone of student success, particularly in science assessment. Garbacz et al. [86] stressed the importance of involving families and communities through ecological support. GenAI-driven tools can bridge the gap between school and home by providing culturally relevant resources and information, fostering a supportive learning environment. However, Said and Al-Amadi [87] noted challenges in engaging families, particularly in areas with limited digital literacy and technology access. GenAI has the potential to address these challenges by creating accessible communication channels between schools and families. The goal is to leverage GenAI not only for assessment but also to strengthen the connection between students’ educational experiences and their broader family and community contexts, enhancing overall educational support and involvement.

4. Developing the CRSciA-Generator

The OpenAI platform offers the opportunity to configure and customize GPT models as specialized ChatGPTs (such as Code Tutor) for particular use cases [88]. This configuration allows users to tailor the model to meet specific needs, making it more relevant and effective for specialized tasks based on context. This process is essential for this study because base GPT models lack domain-specific focus and are not fine-tuned for specialized tasks or contexts, making them more prone to biases or hallucinations [89].
The development of the CRSciA-Generator follows a three-step process: Configuration and Customization, Prompt Engineering, and Final Output. These processes were designed to ensure that educators can engage seamlessly with the CRSciA-Generator tool and generate CRSciA items for their classes without encountering inaccurate outputs. With these steps, the model was made more user-friendly, allowing educators of all prompt-engineering skill levels to follow the process and obtain the best possible output from the system (see Figure 2).

4.1. Configuration and Customization

The CRSciA-Generator is grounded in rigorous configuration and customization processes, starting with the integration of the GenAI-CRSciA framework and SNAP items. These documents are uploaded to the model as files. A critical aspect of customization involves optimizing the uploaded files for efficient processing while staying within the token limits of the GPT model. Exceeding the token capacity could have resulted in incomplete data processing. To mitigate this, we chose to upload PDF files instead of DOC/DOCX files, primarily due to their smaller file size. PDFs are generally more compact because they are designed for viewing rather than editing, allowing for content compression without a loss in quality [90,91]. The use of PDFs is particularly important for the SNAP items, which contain both text and visual components.
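As a rough illustration of this token constraint, the footprint of text extracted from a candidate file can be checked before upload. The sketch below assumes the tiktoken tokenizer library; the file name and token budget are hypothetical, not taken from the study.

```python
# Illustrative token-budget check before uploading reference material;
# assumes the tiktoken library. File name and budget are hypothetical.
import tiktoken

enc = tiktoken.encoding_for_model("gpt-4")  # resolves to the cl100k_base tokenizer

# Text previously extracted from the framework PDF (hypothetical file).
with open("genai_crscia_framework.txt", encoding="utf-8") as f:
    framework_text = f.read()

n_tokens = len(enc.encode(framework_text))
TOKEN_BUDGET = 8000  # illustrative share of the context window reserved for files

status = "within" if n_tokens <= TOKEN_BUDGET else "over"
print(f"Framework text: {n_tokens} tokens ({status} the {TOKEN_BUDGET}-token budget)")
```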
To support the creation of dynamic and interactive assessments, the configuration involves the incorporation of advanced tools such as web search, DALL·E image generation, and a code interpreter. The web search feature enables the generator to access relevant and up-to-date scientific information, ensuring the assessments remain accurate and aligned with the latest developments. The DALL·E image generation feature provides customized diagrams or visuals to accompany certain assessment questions, adding an interactive and visual dimension (multimodal ability), which is especially useful for science subjects requiring illustrations. Additionally, the code interpreter enhances the generator’s functionality by allowing for the creation of programming-based assessment items, particularly in STEM subjects.
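A programmatic analogue of this configuration can be sketched with the OpenAI Python client and Assistants API. The study configured the CRSciA-Generator through the GPT-builder interface, so the assistant name, instructions, file paths, and prompts below are hypothetical illustrations rather than the authors' exact setup.

```python
# Hypothetical sketch of the configuration step via the OpenAI Assistants API;
# all names and paths are illustrative, not the authors' exact setup.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Upload the GenAI-CRSciA framework and SNAP items as PDF reference files.
framework_file = client.files.create(
    file=open("genai_crscia_framework.pdf", "rb"), purpose="assistants"
)
snap_file = client.files.create(
    file=open("snap_items.pdf", "rb"), purpose="assistants"
)

# Create the assistant with retrieval over the uploaded PDFs and a code
# interpreter for programming-based items. (How the uploaded files are
# attached, e.g., via tool_resources or vector stores, varies by API version.)
assistant = client.beta.assistants.create(
    name="CRSciA-Generator",
    model="gpt-4o",
    instructions=(
        "Generate NGSS-aligned, culturally responsive science assessment "
        "items guided by the GenAI-CRSciA framework and SNAP items."
    ),
    tools=[{"type": "file_search"}, {"type": "code_interpreter"}],
)

# An accompanying visual could be produced with the image-generation endpoint.
image = client.images.generate(
    model="dall-e-3",
    prompt="Textbook-style diagram of a savanna predator-prey food chain",
)
```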

4.2. Prompt Engineering

The customization involves an interactive guided dynamic prompt (dynamic prompt) strategy, with proactive user prompts, based on the GenAI-CRSciA framework to support teachers and users in generating CRSciA items. The interactive guided dynamic prompt strategy specifically involves two components: conversation starters and dynamic replies. The conversation starters are designed to initiate interaction by asking key questions at the outset [92]. For example, the model prompts the user with:
“Welcome! I am your culturally responsive science assessment generator (CRSciA-Generator). I am here to help you develop science assessment items that meet the diverse cultural and context-specific needs of your students. Would you like assistance in developing a culturally responsive science assessment for your students that aligns with the NGSS? Please type ‘Yes’ or ‘No’ to proceed.”
Moreover, the dynamic replies simplify user interaction with suggested inputs and follow-up questions, such as “Yes” or “No”. For example, a “Yes” reply to the conversation starter ensures that users have the flexibility to tailor the assessments to specific needs and preferences. An example might be as follows:
“Great! I can help you create an assessment aligned with NGSS standards. Would you like me to use the SNAP questions from the Stanford NGSS Assessment Project? Please type ‘Yes’ or ‘No’.”
To use the OpenAI API (such as GPT-4) through the OpenAI Python library, the interactive guided dynamic prompt can be adapted as in the example (see Table 1). In the provided Python code, parameters such as temperature = 0.7 and max_tokens = 150 are set by default to control the randomness and length of the model’s output [89] (see Appendix B). This ensures consistency with the CRSciA-Generator parameters within the OpenAI platform, preventing any divergent behavior in the model’s response generation.

4.3. Piloting the CRSciA-Generator

To determine the efficacy of the CRSciA-Generator, a pilot demonstration was conducted to compare it with standard prompts within the base GPT-4o. The objective was to identify the aspects where the GenAI-CRSciA framework intersected with standard prompt strategies and to assess any differences in outcomes. A standard prompt offers a simple and direct approach to instructing ChatGPT by specifying a particular task for the model to execute, often using a format like “Generate a [task]” [93]. Given that SNAP items require students to complete specific tasks, standard prompts are an ideal choice for comparison in this context. In contrast, dynamic prompts provide a more flexible and adaptive approach, adjusting based on user inputs and the specific context of the task. This responsiveness makes interactive guided dynamic prompts particularly useful for generating content that aligns with the user’s needs. In the CRSciA-Generator, for example, interactive guided dynamic prompts start with a conversation starter, facilitating a deeper and more interactive engagement with the user [94,95]. A sketch of the one-shot standard-prompt condition is shown below.
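For concreteness, a one-shot standard-prompt call might look as follows; the prompt wording is a hypothetical illustration in the “Generate a [task]” format, not the exact prompt from Table 3, and the sketch assumes the OpenAI Python client.

```python
# Hypothetical one-shot standard prompt for the base-model comparison;
# contrast this single turn with the multi-turn dynamic loop in Appendix B.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

standard_prompt = (
    "Generate an NGSS-aligned science assessment item on predator-prey "
    "relationships for students from Ghana, the USA, and China."
)
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": standard_prompt}],
    temperature=0.7,  # same defaults as the CRSciA-Generator comparison
    max_tokens=150,
)
print(response.choices[0].message.content)
```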

4.4. Use Cases of the CRSciA-Generator and Prompts

To apply the GenAI-CRSciA framework in practical examples, we adapted the NGSS Life Science questions to prompt the base ChatGPT-4o (see Table 2) using the standard prompt (see Table 3) and the interactive guided dynamic prompt within the CRSciA-Generator (see Table 4). The original and generated assessment items for both the CRSciA-Generator and the standard prompt within the base GPT, for use with students from Ghana, the USA, and China, are presented below.
In evaluating the outputs, the standard prompt was found to generate suggestive approaches to developing questions that could meet the GenAI-CRSciA framework, in particular suggesting that teachers or users “Think about a predator and prey relationship that is familiar in your cultural context or from your region (Ghana, the USA, or China)”. This leaves it limited in its ability to fully embrace individual cultural nuances, which constrains the specificity of the output.
In contrast, the output from the CRSciA-Generator comprehensively addresses the GenAI-CRSciA framework by generating different sets of questions for each cultural group. For example, regarding language, the CRSciA-Generator translates certain keywords into the specific languages of each country (e.g., Akan for Ghana, Mandarin for China), ensuring linguistic accessibility and cultural connection for students. It further generates relevant examples and visuals that align with local species and ecological relationships, enhancing student engagement by resonating with their racial and ethnic contexts. Moreover, the CRSciA-Generator acknowledges the religious components within the framework, providing culturally specific examples, such as an antelope, a wolf, or an Amur tiger, that address beliefs or stories unique to each country.
Nevertheless, one observation about the CRSciA-Generator output is that it generalized the content for each country rather than specifying distinct regional or subcultural contexts within each country, which could introduce bias by overlooking the rich diversity of subcultures. This limitation is likely related to the amount of background information provided, as the input was specified only at the country level.

5. Discussion

The alignment of the CRSciA-Generator with the unique cultural contexts of Ghana, the USA, and China involves careful adaptation across several key tenets. In terms of language, the questions are presented in English for both Ghana and the USA, where it is the dominant language of instruction, while in China the questions are delivered in Mandarin [97,98]. This ensures that students can engage fully with the content in their native or dominant instructional language. For instance, the efficiency of the GenAI-CRSciA framework becomes particularly evident when addressing the challenges faced by students in linguistically diverse environments, such as China and Ghana. In China, despite policies aimed at incorporating English as a medium of instruction, research shows that Chinese students often struggle in predominantly English-medium contexts [99]. The GenAI-CRSciA framework effectively mitigates these challenges by tailoring content to specific cultural and linguistic contexts, such as translating content into Chinese and aligning it with the students’ cultural backgrounds and linguistic needs [100]. Similarly, in Ghana, where the language-in-education policy mandates the use of the mother tongue (L1) as the medium of instruction in the lower primary grades (KG1 to Primary 3) and English (L2) from Primary 4 onwards, the CRSciA-Generator demonstrates its cultural awareness by generating science assessments predominantly in English. This aligns with the reality of Ghana’s educational system, where English remains the dominant medium of instruction and assessment, particularly in science education [101,102].
Indigenous knowledge is also thoughtfully integrated, with traditional Ghanaian stories and proverbs highlighting the relationship between lions and antelopes, Native American perspectives shedding light on the spiritual and ecological significance of wolves and moose, and Chinese history and folklore contextualizing the interaction between Amur tigers and sika deer. For example, in Ghana, the framework integrates culturally significant symbols, such as the lion and deer, which are deeply embedded in local folklore and traditions. The ‘Aboakyire Festival’ celebrated by the people of Winneba involves the symbolic hunting of a deer, an event that carries profound cultural and spiritual significance [103]. Similarly, among the Akans of Akyem Abuakwa, animal symbolism, including that of the lion and deer, plays a crucial role in cultural identity and educational practices, conveying lessons and wisdom that are essential to the community’s heritage [104]. Likewise, in China, the framework demonstrates its effectiveness by incorporating ecologically significant animals such as the Amur tiger and sika deer into educational content, aligning the assessments with the local knowledge and values of communities in Northeast China [105].
Further adaptations reflect the importance of race and ethnicity, where the symbolic significance of lions is considered within various Ghanaian ethnic groups, differing views on wolves and moose are acknowledged among Native American tribes and European settlers in the USA, and the cultural heritage of Amur tigers is recognized within both the Han Chinese majority and ethnic minorities [106]. Moreover, religious beliefs are explored, with Ghanaian students reflecting on traditional animistic beliefs, American students considering how spiritual beliefs shape their views on wolves and moose, and Chinese students examining Taoist or Buddhist concepts of balance and harmony as they relate to the relationship between Amur tigers and sika deer [107,108]. This comprehensive approach ensures that the assessments are scientifically sound and deeply rooted in the cultural realities of the students. The findings demonstrate GenAI’s potential to revolutionize 21st-century assessments, which aligns with the conclusions of Owoseni et al. [109], who highlight GenAI’s capability to enhance summative assessments by automating tasks like question generation and grading.
Again, the findings of this study suggest that customization and interactive guided dynamic prompt engineering will be central to the future of AI literacy. This evolution highlights the need for further research and the integration of prompt engineering into education. This suggestion aligns with those of Knoth et al. [110] and Arvidsson and Axell [111], who emphasize the crucial role of prompt engineering in optimizing GenAI across different fields, including education and requirements engineering. Both works highlight that the quality of GenAI outputs is directly influenced by the precision of prompt engineering, underscoring the importance of GenAI literacy [111], and further point out how domain-specific guidelines can improve the accuracy and utility of GenAI for specialized tasks. Furthermore, the study suggests that teachers need to improve their knowledge to be able to use AI as adopters, collaborators, or inventors [112]. Most educators, including in-service and pre-service teachers, are not yet prepared with this knowledge and therefore need professional learning, such as in prompt engineering [113].

6. Conclusions and Future Directions

This study presents a novel approach to the automatic generation of cultural and context-specific science assessments for K-12 education using GenAI. We first developed a GenAI-CRSciA framework that establishes the relationship between CRSciA and GenAI by incorporating key cultural tenets such as Indigenous language, Indigenous knowledge, ethnicity/race, and religion. Using the GenAI-CRSciA framework and interactive guided dynamic prompt strategies, we developed the CRSciA-Generator tool within the OpenAI platform. The interactive guided dynamic prompt, based on the GenAI-CRSciA framework, automatically guides educators through prompting and generates assessments from the conversational information users provide. This strategy addresses the challenges associated with prompt engineering skills, as users require neither advanced prompting skills nor iterative refinements, while still ensuring high-quality GenAI outputs. We further conducted a pilot demonstration comparing the CRSciA-Generator with standard prompts in generating cultural and context-specific science items.
The CRSciA-Generator produced assessment items that incorporate more local knowledge, traditional stories, and cultural perspectives, making them more culturally responsive to students in regions such as Ghana, the USA, and China than those produced by the base GPT-4o with a standard prompt. The assessments specifically feature examples such as traditional stories of lions and antelopes in Ghana, Native American views on wolves in the USA, and Taoist or Buddhist teachings on the Amur tiger in China. These culturally grounded examples illustrate the potential of the CRSciA-Generator to create equitable and inclusive assessments tailored to the cultural and contextual experiences of diverse learners.
However, the pilot test was overgeneralized in its focus on broad national contexts, treating entire countries as culturally homogeneous. We therefore recommend that teachers provide sufficient cultural and contextual background information about their students to enable the CRSciA-Generator to produce more contextually accurate assessments. Additionally, the pilot demonstration does not fully validate the model’s efficacy. Future studies involving human experts are necessary to review and verify the cultural and contextual specificity of the generated assessments. We recommend conducting empirical studies in diverse contexts to further use and validate the tool’s overall effectiveness.
Furthermore, the results highlight customized GPTs as potential tools for future learning management systems (LMS). Customized GPTs in education will be impactful when powered by cultural frameworks, like those embedded in the CRSciA-Generator, as they ensure education remains equitable, inclusive, and culturally relevant. However, this argument would be strengthened by addressing how these tools integrate with existing LMS infrastructure and by providing specific examples of their scalability in diverse educational settings.

Author Contributions

Conceptualization, M.N. and X.Z.; methodology, M.N.; validation, X.Z. and M.Z.F.; writing—original draft preparation, M.N.; writing—review and editing, X.Z. and M.Z.F.; visualization, M.N.; supervision, X.Z. All authors have read and agreed to the published version of the manuscript.

Funding

The authors received no external funding for this study.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

No data is available for this study.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A. Demonstration of the CRSciA-Generator

This screenshot showcases the interactive process of the CRSciA-Generator in assisting educators to create culturally responsive science assessments.

Appendix B. Python Code Snippet of the Interactive Guided Dynamic Prompt

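In the published version, this appendix appears as a screenshot. The following is a minimal reconstruction sketch assuming the OpenAI Python client; the condensed system instructions are our own illustration, while the conversation starter and the temperature = 0.7 and max_tokens = 150 settings follow the description in Section 4.2.

```python
# Minimal sketch of the interactive guided dynamic prompt loop; the system
# instructions are an illustrative condensation, not the authors' exact text.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are the CRSciA-Generator. Ask the user short guiding questions about "
    "their students' Indigenous language, Indigenous knowledge, ethnicity/race, "
    "religion, and community context, then generate NGSS-aligned, culturally "
    "responsive science assessment items from their answers."
)
STARTER = (
    "Welcome! I am your culturally responsive science assessment generator "
    "(CRSciA-Generator). Would you like assistance in developing a culturally "
    "responsive science assessment for your students that aligns with the "
    "NGSS? Please type 'Yes' or 'No' to proceed."
)

messages = [
    {"role": "system", "content": SYSTEM_PROMPT},
    {"role": "assistant", "content": STARTER},
]
print(STARTER)

while True:
    user_reply = input("> ").strip()
    if user_reply.lower() in {"quit", "exit"}:
        break
    messages.append({"role": "user", "content": user_reply})
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=messages,
        temperature=0.7,  # default randomness, per Section 4.2
        max_tokens=150,   # default response length, per Section 4.2
    )
    assistant_reply = response.choices[0].message.content
    messages.append({"role": "assistant", "content": assistant_reply})
    print(assistant_reply)
```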

References

  1. Frommert, C. Creating an Environment Where All Students See That They Can Be Good at Math. 27 January 2023. Available online: https://www.edutopia.org/article/helping-students-see-themselves-good-math (accessed on 5 August 2024).
  2. Kouo, J.L. Don’t Assess a Fish by Its Ability to Climb a Tree: Considerations and Strategies to Ensure Equitable Formative Assessment Practices for All Learners. In Research Anthology on Physical and Intellectual Disabilities in an Inclusive Society; IGI Global: Hershey, PA, USA, 2022; pp. 873–889. [Google Scholar] [CrossRef]
  3. NRC. A Framework for K-12 Science Education: Practices, Crosscutting Concepts, and Core Ideas; National Academies Press: Washington, DC, USA, 2012. [Google Scholar]
  4. Ashreef, S.Y.; Asowayan, A.A.; Omar, S.H. A Systematic Review: The Next Generation Science Standards and the Increased Cultural Diversity. Engl. Lang. Teach. 2017, 10, 63–76. [Google Scholar] [CrossRef]
  5. Mullis IV, S.; Martin, M.O.; von Davier, M. (Eds.) TIMSS 2023 assessment frameworks. TIMSS & PIRLS International Study Center, Lynch School of Education and Human Development, Boston College, & International Association for the Evaluation of Educational Achievement (IEA). 2021. Available online: https://timssandpirls.bc.edu/timss2023/frameworks/index.html (accessed on 1 September 2024).
  6. States, N.N.L. Next Generation Science Standards: For States, by States; National Academies Press: Washington, DC, USA, 2013. [Google Scholar]
  7. Harris, K.; Sithole, A.; Kibirige, J.; McCarthy, P. The Next Generation Science Standards and the quest towards culturally responsive pedagogy: Perceptions of K-12 educators. Am. Acad. Sci. Res. J. Eng. Technol. Sci. 2018, 49, 192–208. [Google Scholar]
  8. Câmara, J.N. Funds of knowledge: Towards an asset-based approach to refugee education and family engagement in England. Br. Educ. Res. J. 2024, 50, 876–904. [Google Scholar] [CrossRef]
  9. O’Leary, E.S.; Shapiro, C.; Toma, S.; Sayson, H.W.; Levis-Fitzgerald, M.; Johnson, T.; Sork, V.L. Creating inclusive classrooms by engaging STEM faculty in culturally responsive teaching workshops. Int. J. Stem Educ. 2020, 7, 32. [Google Scholar] [CrossRef]
  10. Idrus, F.; Sohid, M. Teachers’ expectations and challenges in using culturally responsive teaching (CRT) strategies in the ESL classroom. J. Lang. Teach. Res. 2023, 14, 629–635. [Google Scholar] [CrossRef]
  11. Nortvedt, G.A.; Wiese, E.; Brown, M.; Burns, D.; McNamara, G.; O’Hara, J.; Altrichter, H.; Fellner, M.; Herzog-Punzenberger, B.; Nayir, F.; et al. Aiding culturally responsive assessment in schools in a globalising world. Educ. Assess. Eval. Account. 2020, 32, 5–27. [Google Scholar] [CrossRef]
  12. Preston, J.P.; Claypool, T.R. Analyzing assessment practices for Indigenous students. In Frontiers in Education; Frontiers Media SA: Lausanne, Switzerland, 2021; p. 679972. [Google Scholar]
  13. Wu, X.; He, X.; Liu, T.; Liu, N.; Zhai, X. Matching Exemplar as Next Sentence Prediction (MeNSP): Zero-Shot Prompt Learning for Automatic Scoring in Science Education; Springer Nature Switzerland: Cham, Switzerland, 2023; pp. 401–413. [Google Scholar]
  14. Lee, G.-G.; Shi, L.; Latif, E.; Gao, Y.; Bewersdorf, A.; Nyaaba, M.; Guo, S.; Wu, Z.; Liu, Z.; Wang, H. Multimodality of ai for education: Towards artificial general intelligence. arXiv 2023, arXiv:2312.06037. [Google Scholar]
  15. Li, X.; Li, B.; Cho, S.-J. Empowering Chinese language learners from low-income families to improve their Chinese writing with ChatGPT’s assistance afterschool. Languages 2023, 8, 238. [Google Scholar] [CrossRef]
  16. Bewersdorff, A.; Hartmann, C.; Hornberger, M.; Seßler, K.; Bannert, M.; Kasneci, E.; Kasneci, G.; Zhai, X.; Nerdel, C. Taking the Next Step with Generative Artificial Intelligence: The Transformative Role of Multimodal Large Language Models in Science Education. arXiv 2024, arXiv:2401.00832. [Google Scholar]
  17. Zhai, X. ChatGPT for next generation science learning. XRDS Crossroads ACM Mag. Stud. 2023, 29, 42–46. [Google Scholar] [CrossRef]
  18. Tan, B.; Armoush, N.; Mazzullo, E.; Bulut, O.; Gierl, M. A Review of Automatic Item Generation in the Large Language Model Era. Conference Poster Presented at the Faculty of Education Celebrating Socially Transformative Research and Teaching Event, 28 March 2024. Available online: https://era.library.ualberta.ca/items/2338861e-e0d5-44ac-9fd9-3552f69c255a (accessed on 28 June 2024).
  19. Hwang, K.; Challagundla, S.; Alomair, M.; Chen, K.; Choa, F.-S. Towards AI-Assisted Multiple Choice Question Generation and Quality Evaluation at Scale: Aligning with Bloom’s Taxonomy. Poster Presented at the Workshop: Generative AI for Education (GAIED): Advances, Opportunities, and Challenges, NeurIPS Conference. 2023. Available online: https://neurips.cc/virtual/2023/79098 (accessed on 22 April 2024).
  20. Richards, M.; Waugh, K.; Slaymaker, M.; Petre, M.; Woodthorpe, J.; Gooch, D. Bob or Bot: Exploring ChatGPT’s Answers to University Computer Science Assessment. ACM Trans. Comput. Educ. 2024, 24, 5. [Google Scholar] [CrossRef]
  21. Andreadis, K.; Newman, D.R.; Twan, C.; Shunk, A.; Mann, D.M.; Stevens, E.R. Mixed methods assessment of the influence of demographics on medical advice of ChatGPT. J. Am. Med. Inform. Assoc. 2024, 31, ocae086. [Google Scholar] [CrossRef]
  22. Kıyak, Y.S.; Emekli, E. ChatGPT Prompts for Generating Multiple-Choice Questions in Medical Education and Evidence on Their Validity: A Literature Review. Postgrad. Med. J. 2024, 100(1189), 858–865. [Google Scholar] [CrossRef] [PubMed]
  23. Chan, W.; An, A.; Davoudi, H. A Case Study on ChatGPT Question Generation. In Proceedings of the 2023 IEEE International Conference on Big Data (BigData), Sorrento, Italy, 15–18 December 2023; pp. 1647–1656. [Google Scholar]
  24. Sembok, T.M.T.; Wani, S. Is ChatGPT not Appropriate for Religious Use? Springer Nature Singapore: Singapore, 2024; pp. 595–605. [Google Scholar]
  25. Nyaaba, M.; Wright, A.L.; Choi, G.L. Generative AI and Digital Neocolonialism in Global Education: Towards an Equitable Framework. arXiv 2024, arXiv:2406.02966. [Google Scholar]
  26. Jaakkola, E. Designing conceptual articles: Four approaches. AMS Rev. 2020, 10, 18–26. [Google Scholar] [CrossRef]
  27. McKenney, S.R.; Thomas, C. Conducting Educational Design Research; Routledge: London, UK, 2012. [Google Scholar]
  28. Ladson-Billings, G. From the achievement gap to the education debt: Understanding achievement in US schools. Educ. Res. 2006, 35, 3–12. [Google Scholar] [CrossRef]
  29. Yerrick, R.; Ridgeway, M. Culturally Responsive Pedagogy, Science Literacy, and Urban Underrepresented Science Students. In Inclusive Principles and Practices in Literacy Education; Milton, M., Ed.; Emerald Publishing Limited: Bingley, UK, 2017; Volume 11, pp. 87–103. [Google Scholar]
  30. Morgan, P.L.; Hu, E.H.; Farkas, G.; Hillemeier, M.M.; Oh, Y.; Gloski, C.A. Racial and Ethnic Disparities in Advanced Science and Mathematics Achievement During Elementary School. Gift. Child Q. 2023, 67, 151–172. [Google Scholar] [CrossRef]
  31. Muhammad, G.E.; Ortiz, N.A.; Neville, M.L. A Historically Responsive Literacy Model for Reading and Mathematics. Read. Teach. 2021, 75, 73–81. [Google Scholar] [CrossRef]
  32. Hoosain, M.S.; Paul, B.S.; Ramakrishna, S. The impact of 4IR digital technologies and circular thinking on the United Nations sustainable development goals. Sustainability 2020, 12, 10143. [Google Scholar] [CrossRef]
  33. Johnston-Goodstar, K.; Roholt, R.V. “Our kids aren’t dropping out; they’re being pushed out”: Native American students and racial microaggressions in schools. In Microaggressions and Social Work Research, Practice and Education; Routledge: London, UK, 2020; pp. 31–48. [Google Scholar]
  34. Apple, M.W. Can Education Change Society? Routledge: London, UK, 2012. [Google Scholar]
  35. Baker, D. The Schooled Society: The Educational Transformation of Global Culture; Stanford University Press: Redwood City, CA, USA, 2020. [Google Scholar]
  36. Fink, D.; Stoll, L. Educational change: Easier said than done. In International Handbook of Educational Change: Part One; Springer: Berlin/Heidelberg, Germany, 1998; pp. 297–321. [Google Scholar]
  37. Benediktsson, A.I.; Ragnarsdóttir, H. Immigrant students’ experiences of assessment methods used in Icelandic universities. Multicult. Educ. Rev. 2020, 12, 98–116. [Google Scholar] [CrossRef]
  38. Maaruf, S.Z.; Mohamad, S.N.A.; Mohamed, A.M.D.; Shukor, N. Examining Knowledge, Awareness and Action Orientation Through Multicultural Assessment Proficiency (MAP) for Cultural Responsive Pedagogy in Higher Education Practices. In Proceedings of the 2nd International Research Conference on Business and Economics (IRCBE), Semarang, Indonesia, 3–4 August 2016. [Google Scholar]
  39. Ladson-Billings, G. Three decades of culturally relevant, responsive, & sustaining pedagogy: What lies ahead. In The Educational Forum; Taylor & Francis: Abingdon, UK, 2021; pp. 351–354. [Google Scholar]
  40. Ghattas, N.I.; Carver, J.S. Cultural responsiveness of the next generation science standards. J. Stem Teach. Educ. 2017, 52, 5. [Google Scholar] [CrossRef]
  41. Brown, M.; Bruns, D.; Macnamara, G.; O’Hara, J. Culturally responsive classroom-based assessment A case study of secondary schools in Ireland. Rie-Rev. Investig. Educ. 2022, 40, 15–32. [Google Scholar] [CrossRef]
  42. Edwards, F.; Edwards, R. A story of culture and teaching: The complexity of teacher identity formation. Curric. J. 2017, 28, 190–211. [Google Scholar] [CrossRef]
  43. Rosheim, K.M.; Tamte, K.G.; Froemming, M.J. Reducing Inequalities Inherent in Literacy Assessment of Multilingual Learners. Read. Psychol. 2024, 45, 735–751. [Google Scholar] [CrossRef]
  44. Nayir, F.; Brown, M.; Burns, D.; O’Hara, J.; McNamara, G.; Nortvedt, G.; Skedsmo, G.; Gloppen, S.K.; Wiese, E.F. Assessment with and for Migration Background Students-Cases from Europe. Eurasian J. Educ. Res. 2019, 19, 39–68. [Google Scholar] [CrossRef]
  45. Muñiz, J. Culturally Responsive Teaching: A 50-State Survey of Teaching Standards; New America: Washington, DC, USA, 2019; Available online: http://files.eric.ed.gov/fulltext/ED594599.pdf (accessed on 26 July 2024).
  46. Stepp, Z.A.; Brown, J.C. The (lack of) relationship between secondary science teachers’ self-efficacy for culturally responsive instruction and their observed practices. Int. J. Sci. Educ. 2021, 43, 1504–1523. [Google Scholar] [CrossRef]
  47. Ionescu, V.M.; Enescu, M.C. Using ChatGPT for Generating and Evaluating Online Tests. In Proceedings of the 2023 15th International Conference on Electronics, Computers and Artificial Intelligence (ECAI), Bucharest, Romania, 29–30 June 2023; pp. 1–6. [Google Scholar]
  48. Yi, G.; Chen, Z.; Chen, Z.; Wong, W.E.; Chau, N. Exploring the Capability of ChatGPT in Test Generation. In Proceedings of the 2023 IEEE 23rd International Conference on Software Quality, Reliability, and Security Companion (QRS-C), Chiang Mai, Thailand, 22–26 October 2023; pp. 72–80. [Google Scholar]
  49. Pandey, M.K.; Upadhyay, J. Overview of ChatGPT Model Architecture. In Advanced Applications of Generative AI and Natural Language Processing Models; IGI Global: Hershey, PA, USA, 2024; pp. 87–107. [Google Scholar]
  50. Lee, U.; Jung, H.; Jeon, Y.; Sohn, Y.; Hwang, W.; Moon, J.; Kim, H. Few-shot is enough: Exploring ChatGPT prompt engineering method for automatic question generation in English education. Educ. Inf. Technol. 2023, 29, 11483–11515. [Google Scholar] [CrossRef]
  51. Alasadi, E.A.; Baiz, C.R. Multimodal Generative Artificial Intelligence Tackles Visual Problems in Chemistry. J. Chem. Educ. 2024, 101, 2716–2729. [Google Scholar] [CrossRef]
  52. Gupta, N.; Choudhuri, S.S.; Hamsavath, P.N.; Varghese, A. Fundamentals of Chat GPT for Beginners Using AI; Academic Guru Publishing House: Bhopal, India, 2024. [Google Scholar]
  53. Bevilacqua, M.; Oketch, K.; Qin, R.; Stamey, W.; Zhang, X.; Gan, Y.; Yang, K.; Abbasi, A. When Automated Assessment Meets Automated Content Generation: Examining Text Quality in the Era of GPTs. arXiv 2023, arXiv:2309.14488. [Google Scholar] [CrossRef]
  54. Kartal, G. Contemporary Language Teaching and Learning with ChatGPT. In Contemporary Research in Language and Linguistics; Galip KARTAL: Konya, Turkey, 2023; Volume 1, ISSN 2980-2253. [Google Scholar] [CrossRef]
  55. Arifianto, M.L.; Thoriquttyas, T.; Izzudin, I.F. Inquiring Arabic-Islamic Debatable Terms to ChatGPT: Will It Take a Side? Digit. Muslim Rev. 2023, 1, 18–35. [Google Scholar] [CrossRef]
  56. Shabbir, A.; Rizvi, S.; Alam, M.M.; Su’ud, M.M. Beyond boundaries: Navigating the positive potential of ChatGPT, empowering education in underdeveloped corners of the world. Heliyon 2024, 10, e35845. [Google Scholar] [CrossRef] [PubMed]
  57. Zheng, Y.D.; Stewart, N. Improving EFL students’ cultural awareness: Reframing moral dilemmatic stories with ChatGPT. Comput. Educ. Artif. Intell. 2024, 6, 100223. [Google Scholar] [CrossRef]
  58. Oxenberg, R. A Dialogue on the Existence and Nature of God with ChatGPT (Part I). PhilArchive. Available online: https://philarchive.org/rec/OXEADO (accessed on 28 June 2024).
  59. Jayaweera, M. From Jargon to Clarity: Enhancing Science Communication with ChatGPT. Vidyodaya J. Sci. 2023, 26. [Google Scholar]
  60. Elu, J. Gender and science education in sub-Saharan Africa. J. Afr. Dev. 2018, 20, 105–110. [Google Scholar] [CrossRef]
  61. Wright, T.S.; Domke, L.M. The role of language and literacy in K-5 science and social studies standards. J. Lit. Res. 2019, 51, 5–29. [Google Scholar] [CrossRef]
  62. Kūkea Shultz, P.; Englert, K. Cultural validity as foundational to assessment development: An indigenous example. Front. Educ. 2021, 6, 701973. [Google Scholar]
  63. Latif, E.; Lee, G.-G.; Neuman, K.; Kastorff, T.; Zhai, X. G-SciEdBERT: A Contextualized LLM for Science Assessment Tasks in German. arXiv 2024, arXiv:2402.06584. [Google Scholar]
  64. Mantelas, N.; Mavrikaki, E. Religiosity and students’ acceptance of evolution. Int. J. Sci. Educ. 2020, 42, 3071–3092. [Google Scholar] [CrossRef]
  65. Black, P. Christian beliefs and values in science and religious education: An essay to assist the work of teachers of both subjects. Int. Stud. Cathol. Educ. 2017, 9, 206–222. [Google Scholar] [CrossRef]
  66. Barnes, M.E.; Supriya, K.; Dunlop, H.M.; Hendrix, T.M.; Sinatra, G.M.; Brownell, S.E. Relationships between the religious backgrounds and evolution acceptance of Black and Hispanic biology students. CBE—Life Sci. Educ. 2020, 19, ar59. [Google Scholar] [CrossRef]
  67. Owens, D.C.; Pear, R.S.; Alexander, H.A.; Reiss, M.J.; Tal, T. Scientific and religious perspectives on evolution in the curriculum: An approach based on pedagogy of difference. Res. Sci. Educ. 2018, 48, 1171–1186. [Google Scholar] [CrossRef]
  68. Sumarni, W.; Faizah, Z.; Subali, B.; Wiyanto, W. The Urgency of Religious and Cultural Science in STEM Education: A Meta Data Analysis. Int. J. Eval. Res. Educ. 2020, 9, 1045–1054. [Google Scholar] [CrossRef]
  69. Cheong, P.H. Robots, religion and communication: Rethinking piety, practices and pedagogy in the era of artificial intelligence. In Religion in the Age of Digitalization; Routledge: London, UK, 2020; pp. 86–96. [Google Scholar]
  70. Ashraf, C. Exploring the impacts of artificial intelligence on freedom of religion or belief online. Int. J. Hum. Rights 2022, 26, 757–791. [Google Scholar] [CrossRef]
  71. Black, A.; Tylianakis, J.M. Teach Indigenous knowledge alongside science. Science 2024, 383, 592–594. [Google Scholar] [CrossRef]
  72. Trumbull, E.; Nelson-Barber, S. The Ongoing Quest for Culturally-Responsive Assessment for Indigenous Students in the U.S. Front. Educ. 2019, 4, 40. [Google Scholar] [CrossRef]
  73. Azam, S.; Goodnough, K. Learning Together About Culturally Relevant Science Teacher Education: Indigenizing a Science Methods Course. Int. J. Innov. Sci. Math. Educ. 2018, 26, 74–88. Available online: https://www.researchgate.net/publication/324971384 (accessed on 28 June 2024).
  74. Jin, Q. Supporting indigenous students in science and STEM education: A systematic review. Educ. Sci. 2021, 11, 555. [Google Scholar] [CrossRef]
  75. Washburn, J.; McCutchen, J.M. AI Meets AI: ChatGPT as a Pedagogical Tool to Teach American Indian History. Crit. Humanit. 2024, 2, 2. [Google Scholar] [CrossRef]
  76. Castro Nascimento, C.M.; Pimentel, A.S. Do large language models understand chemistry? a conversation with chatgpt. J. Chem. Inf. Model. 2023, 63, 1649–1655. [Google Scholar] [CrossRef]
  77. Atwater, M.M.; Lance, J.; Woodard, U.; Johnson, N.H. Race and ethnicity: Powerful cultural forecasters of science learning and performance. Theory Pract. 2013, 52, 6–13. [Google Scholar] [CrossRef]
  78. Wells, A.S. Racial, ethnic, and cultural diversity across K–12 and higher education sectors: Challenges and opportunities for cross-sector learning. Chang. Mag. High. Learn. 2020, 52, 56–61. [Google Scholar] [CrossRef]
  79. Riegle-Crumb, C.; Morton, K.; Nguyen, U.; Dasgupta, N. Inquiry-based instruction in science and mathematics in middle school classrooms: Examining its association with students’ attitudes by gender and race/ethnicity. AERA Open 2019, 5, 2332858419867653. [Google Scholar] [CrossRef] [PubMed]
  80. Lesseig, K.; Firestone, J.; Morrison, J.; Slavit, D.; Holmlund, T. An analysis of cultural influences on STEM schools: Similarities and differences across K-12 contexts. Int. J. Sci. Math. Educ. 2019, 17, 449–466. [Google Scholar] [CrossRef]
  81. Choudhary, T. Reducing Racial and Ethnic Bias in AI Models: A Comparative Analysis of ChatGPT and Google Bard. Preprints 2024, 2024062016. [Google Scholar] [CrossRef]
  82. Warr, M.; Pivovarova, M.; Mishra, P.; Oster, N.J. Is ChatGPT Racially Biased? The Case of Evaluating Student Writing; Elsevier: Amsterdam, The Netherlands, 2024. [Google Scholar]
  83. Denton, M.; Borrego, M.; Boklage, A. Community cultural wealth in science, technology, engineering, and mathematics education: A systematic review. J. Eng. Educ. 2020, 109, 556–580. [Google Scholar] [CrossRef]
  84. Gerde, H.K.; Pikus, A.E.; Lee, K.; Van Egeren, L.A.; Huber, M.S.Q. Head Start children’s science experiences in the home and community. Early Child. Res. Q. 2021, 54, 179–193. [Google Scholar] [CrossRef]
  85. Soto-Lara, S.; Simpkins, S.D. Parent support of Mexican-descent high school adolescents’ science education: A culturally grounded framework. J. Adolesc. Res. 2022, 37, 541–570. [Google Scholar] [CrossRef]
  86. Garbacz, S.A.; Herman, K.C.; Thompson, A.M.; Reinke, W.M. Family engagement in education and intervention: Implementation and evaluation to maximize family, school, and student outcomes. J. Sch. Psychol. 2017, 62, 1–10. [Google Scholar] [CrossRef]
  87. Said, Z.; Al-Amadi, A. The Role of Family Engagement in Students’ Science Learning in Qatari Schools. Eurasia Proc. Educ. Soc. Sci. 2018, 11, 142–152. [Google Scholar]
  88. Shahin, M.; Chen, F.F.; Hosseinzadeh, A. Harnessing customized AI to create voice of customer via GPT3.5. Adv. Eng. Inform. 2024, 61, 102462. [Google Scholar] [CrossRef]
  89. Gallifant, J.; Fiske, A.; Levites Strekalova, Y.A.; Osorio-Valencia, J.S.; Parke, R.; Mwavu, R.; Martinez, N.; Gichoya, J.W.; Ghassemi, M.; Demner-Fushman, D. Peer review of GPT-4 technical report and systems card. PLOS Digit. Health 2024, 3, e0000417. [Google Scholar] [CrossRef] [PubMed]
  90. Bast, H.; Korzen, C. A benchmark and evaluation for text extraction from PDF. In Proceedings of the 2017 ACM/IEEE joint conference on digital libraries (JCDL), Toronto, ON, Canada, 19–23 June 2017; pp. 1–10. [Google Scholar]
  91. Siregar, A.Z. Implementation of ADFGVX Cipher and RSA on TXT and Doc File Security. J. Basic Sci. Technol. 2021, 10, 105–110. [Google Scholar] [CrossRef]
  92. Janssens, R.; Wolfert, P.; Demeester, T.; Belpaeme, T. Integrating Visual Context into Language Models for Situated Social Conversation Starters. IEEE Trans. Affect. Comput. 2024. Available online: https://ieeexplore.ieee.org/document/10599883 (accessed on 1 September 2024). [CrossRef]
  93. Spasić, A.J.; Janković, D.S. Using ChatGPT standard prompt engineering techniques in lesson preparation: Role, instructions and seed-word prompts. In Proceedings of the 2023 58th International Scientific Conference on Information, Communication and Energy Systems and Technologies (ICEST), Nis, Serbia, 29 June–1 July 2023; pp. 47–50. [Google Scholar]
  94. Yang, F.; Yang, S.; Butt, M.A.; van de Weijer, J. Dynamic prompt learning: Addressing cross-attention leakage for text-based image editing. Adv. Neural Inf. Process. Syst. 2023, 36, 26291–26303. [Google Scholar]
  95. Mo, W.; Zhang, T.; Bai, Y.; Su, B.; Wen, J.-R.; Yang, Q. Dynamic Prompt Optimizing for Text-to-Image Generation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 16–22 June 2024; pp. 26627–26636. [Google Scholar]
  96. Concord Consortium. Wolves and Moose Interactions (ID# 116-04-Y03); NGSS Standard MS-LS2-2; 2024. Available online: https://scienceeducation.stanford.edu/assessments (accessed on 18 February 2024).
  97. Erling, E.J.; Adinolfi, L.; Hultgren, A.K.; Buckler, A.; Mukorera, M. Medium of instruction policies in Ghanaian and Indian primary schools: An overview of key issues and recommendations. In English as a Medium of Instruction in Postcolonial Contexts; Routledge: London, UK, 2018; pp. 18–34. [Google Scholar]
  98. Loh, E.K.; Tam, L.C.; Lau, K.-c. Moving between language frontiers: The challenges of the medium of instruction policy for Chinese as a second language. Lang. Policy 2019, 18, 131–153. [Google Scholar] [CrossRef]
  99. Zhang, Z. English-medium instruction policies in China: Internationalisation of higher education. J. Multiling. Multicult. Dev. 2018, 39, 542–555. [Google Scholar] [CrossRef]
  100. Yu, S.; Wang, Y.; Jiang, L.; Wang, B. Coping with EMI (English as a medium of instruction): Mainland China students’ strategies at a university in Macau. Innov. Educ. Teach. Int. 2021, 58, 462–472. [Google Scholar] [CrossRef]
  101. Owu-Ewie, C.; Eshun, E.S. The Use of English as Medium of Instruction at the Upper Basic Level (Primary Four to Junior High School) in Ghana: From Theory to Practice. J. Educ. Pract. 2015, 6, 72–82. [Google Scholar]
  102. Owu-Ewie, C.; Eshun, E.S. Language representation in the Ghanaian lower primary classroom and its implications: The case of selected schools in the Central and Western Regions of Ghana. Curr. Issues Lang. Plan. 2019, 20, 365–388. [Google Scholar] [CrossRef]
  103. Akyeampong, O.A. Aboakyer: Traditional festival in decline. Ghana Soc. Sci. J. 2019, 16, 97. [Google Scholar]
  104. Lumor, F. Significance of Animal Symbolism Among the Akans of Akyem Abuakwa Traditional Area; Kwame Nkrumah University of Science and Technology: Kumasi, Ghana, 2009. [Google Scholar]
  105. Li, Y.; Powell, J.; Jin, A.; Ryoo, H.K.; Li, H.; Pandey, P.; Zhu, W.; Li, D.; Lee, H. Community attitudes towards Amur tigers (Panthera tigris altaica) and their prey species in Yanbian, Jilin province, a region of northeast China where tigers are returning. PLoS ONE 2022, 17, e0276554. [Google Scholar] [CrossRef] [PubMed]
  106. Rudenko, S.V.; Sobolievskyi, Y.A. Philosophical Ideas in Spiritual Culture of the Indigenous Peoples of North America. Anthropol. Meas. Philos. Res. 2020, 18, 168–182. [Google Scholar] [CrossRef]
  107. Kosoe, E.A.; Adjei, P.O.-W.; Diawuo, F. From sacrilege to sustainability: The role of indigenous knowledge systems in biodiversity conservation in the Upper West Region of Ghana. GeoJournal 2020, 85, 1057–1074. [Google Scholar] [CrossRef]
  108. Tavor, O. The Land of Tigers and Snakes: Living with Animals in Medieval Chinese Religions; Chen, H., Ed.; Oxford University Press US: Oxford, UK, 2024. [Google Scholar]
  109. Owoseni, A.; Kolade, O.; Egbetokun, A. Applications of Generative AI in Summative Assessment. In Generative AI in Higher Education: Innovation Strategies for Teaching and Learning; Springer Nature Switzerland: Cham, Switzerland, 2024; pp. 97–122. [Google Scholar]
  110. Knoth, N.; Tolzin, A.; Janson, A.; Leimeister, J.M. AI literacy and its implications for prompt engineering strategies. Comput. Educ. Artif. Intell. 2024, 6, 100225. [Google Scholar] [CrossRef]
  111. Arvidsson, S.; Axell, J. Prompt engineering guidelines for LLMs in Requirements Engineering. Bachelor’s Thesis, Software Engineering and Management, Gothenburg University, Göteborg, Sweden, 2023. [Google Scholar]
  112. Zhai, X. Transforming Teachers’ Roles and Agencies in the Era of Generative AI: Perceptions, Acceptance, Knowledge, and Practices. J. Sci. Educ. Technol. 2024, 1–11. [Google Scholar] [CrossRef]
  113. Nyaaba, M. Transforming Teacher Education in Developing Countries: The Role of Generative AI in Bridging Theory and Practice. arXiv 2024, arXiv:2411.10718. [Google Scholar]
Figure 1. Generative AI Culturally Responsive Science Assessment Framework (GenAI-CRSciA).
Figure 2. CRSciA-Generator System.
Table 1. Python Code for Dynamic Prompt in the CRSciA-Generator.

# Import libraries
import openai

# API key setup
openai.api_key = "API Key"

# Get-response function. GPT-4 is served through the chat completions
# endpoint, so ChatCompletion (openai-python < 1.0) is used here rather
# than the legacy Completion endpoint, which does not accept chat models.
def get_openai_response(prompt, model="gpt-4"):
    response = openai.ChatCompletion.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        max_tokens=150,
        temperature=0.7,
        n=1,
        stop=None,
    )
    # Return the model's reply as plain text
    return response.choices[0].message["content"].strip()

# Conversation starter function: greets the teacher and asks for a topic
def conversation_starter():
    starter_prompt = (
        "Welcome! I am your culturally responsive science assessment generator (CRSciA-Generator). "
        "I am here to help you create assessment items that meet the diverse cultural and context-specific needs of your class "
        "that align with the NGSS. Let's begin with a few questions to tailor the assessment for your class.\n"
        "What science topic or NGSS standard would you like to cover?"
    )
    # User topic input
    user_topic = input(get_openai_response(starter_prompt) + "\n")
    return user_topic

# User-prompted pathway function: collects language and cultural context
def user_prompted_pathway():
    language_prompt = "What are the dominant languages your students can read and write in for science?"
    cultural_prompt = "Would you like to include any culturally specific knowledge or context in the assessment? (Yes/No)"
    # Get responses from the user
    language = input(get_openai_response(language_prompt) + "\n")
    cultural_relevance = input(get_openai_response(cultural_prompt) + "\n")
    # Cultural context check
    if cultural_relevance.lower() == "yes":
        context_prompt = "Please provide some details about the cultural context you'd like to include."
        context = input(get_openai_response(context_prompt) + "\n")
        return language, context
    else:
        return language, None

# Main function to generate the assessment
def generate_assessment():
    topic = conversation_starter()
    language, context = user_prompted_pathway()
    # Display summary
    print("\n--- Assessment Summary ---")
    print(f"Science Topic: {topic}")
    print(f"Language: {language}")
    if context:
        print(f"Cultural Context: {context}")
    else:
        print("No specific cultural context included.")
    print("\nYour assessment will be tailored based on the information provided.")

# Run the generator
generate_assessment()
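The listing above stops at an on-screen summary. As a minimal, hedged sketch (not part of the published table), one further step could assemble the collected inputs into a single dynamic prompt and request the items from the model; the helper name request_assessment_items and the prompt wording below are illustrative assumptions, and the sketch reuses get_openai_response from the listing.

# Hedged sketch: combine the collected inputs into one dynamic prompt and
# request assessment items. The wording is illustrative, not the exact
# prompt used by the CRSciA-Generator.
def request_assessment_items(topic, language, context):
    dynamic_prompt = (
        f"Create culturally responsive science assessment items on '{topic}', "
        f"written in {language} and aligned with the NGSS."
    )
    if context:
        dynamic_prompt += f" Incorporate this cultural context: {context}."
    return get_openai_response(dynamic_prompt)

Calling request_assessment_items(topic, language, context) at the end of generate_assessment() would then return the tailored items for display.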
Table 2. Original Questions.
NGSS standard for MS-LS2-2 [96]
MS-LS2-2_024_ Wolves and moose interactions
Ecosystem Interactions
Construct an explanation that predicts patterns of interactions among organisms across multiple ecosystems.
Activity: Wolves and moose interactions (ID# 116-04-Y03)
Construct an evidence-based argument about whether there exists a predator and prey relationship between wolves and moose.
From: Page 1 Wolves and moose interactions (ID# 116-04-Y03) (concord.org)
Source: This resource is part of the Concord Consortium’s Middle Grades (6–8) project
Task
Wolves and moose interactions (ID# 116-04-Y03)
[Image: wolf and moose photographs]
Wolf and moose images are licensed under the public domain.
Table 3. Standard Prompt with Base GPT-4o.
Prompt:
Create a culturally responsive science assessment question based on the NGSS standard MS-LS2-2, which involves ecosystem interactions and predator–prey relationships. The question should be suitable for middle school students in Ghana, the USA, and China. The question should ask students to construct an evidence-based argument about a predator and prey relationship, using examples that are relevant to their cultural context.
Output
Culturally Responsive Science Assessment Question:
NGSS Standard: MS-LS2-2
Concept: Ecosystem Interactions
Grade Level: Middle School
Target Regions: Ghana, the USA, China
Question:
In ecosystems around the world, animals interact as predators and prey. Think about a predator and prey relationship that is familiar in your cultural context or from your region (Ghana, the USA, or China). Construct an evidence-based argument about the relationship between these two organisms. Consider the following in your response:
  • Identify the predator and the prey in this relationship;
  • Describe the ecosystem where these interactions occur;
  • Provide evidence from your knowledge or experiences that shows how the predator and prey populations affect each other;
  • Explain how this relationship might be similar or different from other predator–prey relationships around the world.
Table 4. CRSciA-Generator Conversation Starter and Interactive Guided Dynamic Prompt.
Prompt: Following CRSciA-Generator Conversation Starter and Interactive Guided Dynamic Prompt Strategies

Objective:
Create a culturally responsive science assessment question that is scientifically rigorous and tailored to the cultural, linguistic, and educational contexts of students from [insert country/region, e.g., Ghana, the USA, China]. The question should be based on the [insert specific scientific concept or standard, e.g., ecosystem interactions, photosynthesis, forces and motion].
Instructions:
Use the following guidelines to structure the assessment question:
  • Language:
    Use [insert language, e.g., English, Mandarin, Twi] as the primary language for the question;
    Ensure the language used is appropriate for the educational level and cultural context of the students;
  • Indigenous Knowledge:
    Incorporate local or Indigenous knowledge relevant to the students’ cultural backgrounds;
    Use examples that resonate with the student’s experiences, such as [insert specific cultural examples related to the scientific concept, e.g., traditional ecological practices, folklore].
  • Race/Ethnicity:
    Respect and reflect the ethnic and cultural identities of the students;
    Include scenarios that are geographically and culturally relevant to the student’s environment;
  • Family:
    Encourage the student to engage with their family or community members to gather insights or stories related to the scientific concept being assessed;
    Ask the student to incorporate these familial insights into their response;
  • Religious Beliefs:
    Allow for the exploration of religious or spiritual beliefs that might influence the student’s understanding of the scientific concept;
    Ensure the question does not conflict with or marginalize any religious beliefs.
  • Standards

Construct a question that requires the student to engage with the scientific concept by analyzing data, making predictions, or constructing an evidence-based argument. Ensure the question aligns with the [insert specific scientific standard or concept, e.g., NGSS standard MS-LS2-2].
Provide clear instructions for the student on how to incorporate cultural elements into their response.
CRSciA-Generator Questions

Ghana:
Language: English
Context: Consider how local knowledge and traditional stories in Ghana describe predator–prey relationships, such as those between lions and antelopes in the savanna ecosystem. Think about how the knowledge passed down through your family or community elders might help you understand similar relationships in other parts of the world.
Task:
  • Based on the graph and your understanding of predator–prey relationships, write an argument that shows whether you agree or disagree with Jason’s explanation;
  • Your argument should include:
    A statement about whether you agree or disagree with Jason’s claim. Make sure to include Jason’s claim if you agree with him or write your own claim if you disagree;
    A statement about what is accurate or inaccurate in Jason’s evidence and reasoning;
    Examples from the graph to show why Jason’s evidence is accurate or inaccurate;
    Reflect on traditional Ghanaian stories about predator–prey relationships and discuss how they support or challenge Jason’s reasoning.
    [Image: predator–prey population graph]
USA:
Language: English
Context: Reflect on how Native American perspectives describe wolves as spiritual beings and key regulators in ecosystems. Consider how this Indigenous knowledge might influence your understanding of the predator–prey relationships shown in the graph.
        Task:
  • Based on the graph and your understanding of predator–prey relationships, write an argument that shows whether you agree or disagree with Jason’s explanation;
  • Your argument should include the following:
    A statement about whether you agree or disagree with Jason’s claim. Make sure to include Jason’s claim if you agree with him or write your own claim if you disagree;
    A statement about what is accurate or inaccurate in Jason’s evidence and reasoning;
    Examples from the graph to show why Jason’s evidence is accurate or inaccurate;
    Incorporate Native American perspectives on wolves and discuss how these perspectives might support or challenge Jason’s reasoning.
    [Image: predator–prey population graph]
China:
Language: Mandarin
Context: Consider how traditional Chinese views on predators like the Amur tiger can help you understand predator–prey relationships in ecosystems. Reflect on how your family’s stories or the teachings from Taoism or Buddhism might influence your interpretation of the graph.
        任务:
  • 根据图表以及你对捕食者与猎物关系的理解, 写一篇论述, 说明你是否同意 Jason 的解释.
  • 你的论述应包括:
    关于你是否同意 Jason 观点的陈述.如果你同意他的观点, 请务必包含 Jason 的论点; 如果你不同意, 请写下你自己的观点.
    关于 Jason 的证据和推理中哪些内容是准确的或不准确的陈述.
    从图表中举例说明为什么 Jason 的证据是准确的或不准确的.
    结合中国传统文化中对捕食者 (如东北虎) 的看法, 讨论这些文化视角如何支持或挑战 Jason 的推理.

        (Translation: Task:
  • Based on the graph and your understanding of predator–prey relationships, write an argument that shows whether you agree or disagree with Jason’s explanation;
  • Your argument should include:
    A statement about whether you agree or disagree with Jason’s claim. Make sure to include Jason’s claim if you agree with him or write your own claim if you disagree;
    A statement about what is accurate or inaccurate in Jason’s evidence and reasoning;
    Examples from the graph to show why Jason’s evidence is accurate or inaccurate;
    Incorporate traditional Chinese perspectives on predators like the Amur tiger and discuss how these cultural views might support or challenge Jason’s reasoning.)
    [Image: predator–prey population graph]
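To make the guided template above concrete, the following is a minimal sketch assuming plain Python string substitution; the TEMPLATE text is an abridged paraphrase of the Table 4 instructions rather than the verbatim prompt, and the region/language pairs simply mirror the pilot demonstration, not the CRSciA-Generator’s internal logic.

# Hedged sketch: fill the Table 4 placeholders for each pilot region.
# TEMPLATE abridges the instructions above; it is not the verbatim prompt.
TEMPLATE = (
    "Create a culturally responsive science assessment question that is "
    "scientifically rigorous and tailored to students from {region}. "
    "Base the question on {standard} and use {language} as the primary language."
)

for region, language in [("Ghana", "English"), ("the USA", "English"), ("China", "Mandarin")]:
    prompt = TEMPLATE.format(
        region=region,
        standard="NGSS standard MS-LS2-2 (ecosystem interactions)",
        language=language,
    )
    print(prompt)

Each filled prompt could then be passed to get_openai_response from Table 1 to produce region-specific items like those shown above.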