Article

A Qualitative Descriptive Study of Teachers’ Beliefs and Their Design Thinking Practices in Integrating an AI-Based Automated Feedback Tool

by Meerita Kunna Segaran 1,* and Synnøve Heggedal Moltudal 2
1 Department of Educational Sciences, Faculty of Humanities, Sports and Educational Science, University of South-Eastern Norway, 3679 Drammen, Norway
2 Department of Teacher Education and Humanities, Faculty of Pedagogy, Volda University College, 6103 Volda, Norway
* Author to whom correspondence should be addressed.
Educ. Sci. 2025, 15(7), 910; https://doi.org/10.3390/educsci15070910
Submission received: 26 March 2025 / Revised: 22 April 2025 / Accepted: 14 July 2025 / Published: 16 July 2025

Abstract

In this post-digital age, writing assessment has been markedly influenced by advances in artificial intelligence (AI), emphasizing the role of automated formative feedback in supporting second language (L2) writing. This study investigates how Norwegian teachers used an AI-driven automated feedback tool, the Essay Assessment Technology (EAT), in process writing for the first time. Framed by the second- and third-order barriers framework, we examined teachers’ beliefs and the design-level changes that they made in their teaching. Data were collected in autumn 2022, during the testing of EAT’s first prototype. The informants were three English as a second language teachers from different schools, who were first introduced to EAT in a workshop and then used it in the classroom with their 9th-grade students (13 years old). Through individual teacher interviews, this descriptive qualitative study explores teachers’ perceptions, user experiences, and pedagogical decisions when incorporating EAT into their practices. The findings indicate that teachers’ beliefs about technology and its role in student learning, as well as their views on students’ responsibilities in task completion, significantly influenced their instructional choices. Additionally, teachers not only adopted the AI-driven tool but were also able to reflect on and resolve complex teaching and learning challenges in the classroom, demonstrating that they applied design thinking processes when integrating technology into their teaching. Based on these results, we suggest the need for targeted professional development to support effective technology integration.

1. Introduction

Teaching is a highly complex profession that demands that educators constantly make pedagogical decisions. In the context of emerging Artificial Intelligence (AI) technologies, AI presents promising opportunities; however, its integration into education relies heavily on teachers (Abel et al., 2022). In the pre-digital age, when new technology was being introduced to the educational system, the primary objective was to enhance teachers’ digital competencies for the practical application of these technologies (McDonagh et al., 2021). The complexity of early technological systems necessitated a focus on developing the practical skills needed to navigate and utilize them effectively. Today, however, contemporary digital technologies, such as computers, tablets, and smartphones, feature significantly more user-friendly interfaces. Consequently, the emphasis within the education system has shifted from merely knowing how to operate these technologies to understanding how they can be integrated within pedagogical and didactic frameworks and used effectively to improve teaching and learning (Lund et al., 2014; McDonagh et al., 2021; Mishra & Koehler, 2006). From learning management systems to augmented reality, teachers continuously need to upgrade their knowledge to adopt changes that will enhance classroom instruction. While these technological resources promise exciting opportunities, there has been an ongoing discourse on being not only a digitally literate teacher but also a digitally competent one (Brevik et al., 2019; Gudmundsdottir & Björnsson, 2021; Nagel et al., 2023). With respect to digital competence, recent studies suggest that some teachers lead the process of technology integration and are eager to exploit the potential of technology, while others do not share this affinity (Roblin et al., 2018; Tondeur et al., 2013).
Despite the potential of AI in education, a recent global education monitoring report advocates that educational tools should serve to enhance, but never replace, the human connections that are fundamental to teaching and learning. The report states that the primary focus should be on learning outcomes rather than the mere integration of digital tools (UNESCO, 2023). As Reinders (2009) notes, knowing how to use a software program does not necessarily equate to the ability to incorporate it effectively into pedagogical practices. Previous studies on teachers’ perceptions of technology integration reveal that these decisions are primarily driven by practical concerns, such as the availability of technology in schools (Ertmer, 1999). It quickly became clear that simply introducing or providing access to technology does not guarantee its integration into teaching practices, as it may still be used in ways that reflect traditional pedagogies (Vandeyar & Adegoke, 2024). True integration of technology requires addressing teachers’ beliefs (Abedi, 2023; Ifinedo et al., 2020; Prestridge, 2012), and the pedagogical decisions that teachers make in problem-solving, described as ‘design thinking’, are seen as necessary for technology integration (Johnson & Tawfik, 2022; Łuczak et al., 2023; Tsai & Chai, 2012).
A growing challenge is how teachers adapt their decision-making practices to effectively integrate AI-based tools. In this study, we argue that teachers’ beliefs and their engagement in design thinking processes are essential for making thoughtful and effective decisions, especially when integrating AI-based automated feedback. Design thinking encompasses the cognitive strategies and problem-solving approaches that designers use when addressing complex challenges (Meinel & Krohn, 2022; Razzouk & Shute, 2012). The concept has gained prominence in various fields, including education. While interest in linking design thinking to education continues to grow, its practical application in classrooms remains underexplored. In the context of this study, design thinking is viewed as a useful framework for engaging teachers with real-world problems of practice. Positioning teachers as designers, particularly within innovative, feedback-oriented, and technology-supported learning environments, can foster agency and support their capacity to respond creatively and effectively to ongoing challenges in education (Scott & Lock, 2021). The challenges teachers face in practice are often complex and multifaceted, ranging from designing meaningful feedback processes and empathizing deeply with students’ needs to navigating communication with diverse stakeholders (Baran & AlZoubi, 2024). These so-called ‘problems of practice’ are open-ended and rarely have a single, clear-cut solution. Adopting a design thinking mindset can support teachers in approaching these challenges creatively. Just as designers explore how people interact with products or systems, teachers can reflect on how students experience automated feedback: how they interpret it, respond to it, and act upon it. This perspective enables teachers to gain deeper insights into students’ learning journeys and to iteratively adapt feedback tools and strategies in more thoughtful, responsive ways. Understanding how teachers perceive and utilize AI tools in their practice can provide valuable insights into how these technologies can support and enhance their decision-making processes. Although the promise of AI in education, especially in formative assessment practices, has been widely acknowledged, there remains a gap in understanding how teachers redesign their instructional practices at a pedagogical level to accommodate these tools.
Prior research has explored the adoption of AI-driven tools in education, focusing primarily on teachers’ and students’ access to technology (Ertmer, 1999), enthusiasm for or resistance to incorporating technology (Elstad, 2016), and initial usage and individual or organizational factors (Burner et al., 2025; Elstad & Eriksen, 2024). To our knowledge, there is a lack of studies on how teachers actively redesign their pedagogical practices and redefine their facilitation roles when using an automated feedback tool. Specifically, little is known about the design-level changes teachers make, their beliefs about integrating an AI-based automated feedback tool effectively, and how these changes influence their teaching practices. This study sought to understand and describe teachers’ experiences of using an automated feedback tool for the first time, including their beliefs and how they iteratively redesign their pedagogical practices and make design-level changes to integrate automated formative feedback.

2. Automated Formative Feedback as an Instructional Tool

The development of AI-based automated formative feedback systems stemmed from the need to address the expensive and time-intensive process of employing human evaluators for large-scale standardized language assessments in which a significant number of learners are evaluated (Zhao et al., 2023). Consequently, the initial development of Automated Writing Evaluation (AWE) was directed predominantly towards producing numerical scores for summative evaluations, while largely overlooking the provision of qualitative written feedback (Stevenson & Phakiti, 2019). The introduction of AWE systems traces back to the late 1960s with the development of Project Essay Grade (PEG) by Ellis Page and his research team (Page, 1966). Since this pioneering work, a variety of commercial and non-commercial automated scoring systems have emerged. Starting in the 1990s, alongside advances in AWE technologies, there has been a heightened focus on employing AWE systems to deliver written feedback, primarily for educational purposes. As such, the focus shifted to supplementing teachers with feedback tools that provide not only an evaluative numerical score but also written feedback that helps learners improve their writing.
AI has the potential to support formative assessment practices by enhancing both the quantity and quality of feedback (Burner et al., 2025). In formative assessment, feedback plays a critical role in guiding student progress, identifying learning gaps, and offering support to improve learning outcomes (Hattie & Timperley, 2007; Sandal et al., 2022). The primary objectives of formative assessment include delivering constructive feedback that is tailored to individual student responses and aiding teachers in their instructional methods (Hopfenbeck et al., 2023). The use of AI in this context facilitates the broad application of formative assessment across various time frames, both cross-sectional and longitudinal. This approach enhances sustainability and efficiency, enabling educators to effectively track the progression of student learning (Zhu et al., 2020). It also assists in pinpointing the specific areas where students may lack knowledge or skills over time.
AI-based automated systems have several distinctive features, such as evaluating student writing by offering insights on grammatical structure (Barrot, 2020; Koltovskaia, 2020) and on the mechanics of writing, such as spelling, capitalization, and punctuation (Liu et al., 2017). Other features of automated feedback systems include identifying issues related to content, development of ideas, vocabulary, and organization (Wang & Han, 2022; Wilson & Czik, 2016). Despite criticism concerning the length of the feedback and concerns about its accuracy (Hoang, 2022; Qian et al., 2019), the evidence indicates that automated formative feedback systems have the potential to support various instructional approaches and enhance educational objectives (Shadiev & Feng, 2023). A synthesis of existing automated feedback research (Link et al., 2020; Wilson et al., 2021) shows that, as with any other technological tool, the teacher plays a central role in how automated formative feedback is incorporated into classroom writing instruction (Wilson et al., 2021).

2.1. Barriers to Integrating Technology

The challenges of integrating AI into education are often defined as conditions that hinder the effective implementation of technology in classrooms (Vatanartiran & Karadeniz, 2015). Research identifies differing classifications, but most frameworks distinguish between external barriers, such as institutional and resource limitations (Fredrickson et al., 2014; Mustafa et al., 2024), and internal barriers, which stem from teachers’ attitudes and perceptions (Makki et al., 2018). Ertmer (1999) discusses these in terms of first-order (incremental and institutional) and second-order (fundamental and personal) barriers to technology integration in education. First-order barriers encompass external factors that can impede integration, such as insufficient access to equipment, time, training, and institutional support. These factors are external to teachers. Second-order barriers, by contrast, are internal; according to Ertmer (1999), they are intrinsic to teachers and include their pedagogical beliefs, technological beliefs, and willingness to change. These personal beliefs can obstruct the implementation of technology in classrooms. Building on this framework, Tsai and Chai (2012) argued that a third type of barrier is the lack of problem-solving capacity, or what they described as ‘design thinking’. They argue that this capacity is essential for teachers to creatively and effectively integrate technology into their teaching practices. Thus, while first- and second-order barriers address practical and attitudinal challenges, the third-order barrier highlights the need for teachers to develop a mindset that embraces innovation and problem-solving in the context of technology integration. We employ second-order barriers (Ertmer, 1999) as a comparative lens to examine teachers’ adoption of an automated feedback tool in a lower secondary school setting, while also examining a third-order barrier, namely design thinking practices in the use of the automated feedback tool (Tsai & Chai, 2012).

2.1.1. First-Order Barriers

Given the pervasiveness of technology use in Norwegian schools, it would seem likely that schools in Norway have high levels of technology integration. Norway has made significant strides in addressing first-order barriers to technology integration in schools, ensuring the technological infrastructure and capacity to support educational goals. This achievement reflects comprehensive policies and initiatives aimed at equipping schools with the necessary resources for digital learning environments. The 2017–2021 Action Plan for Digitalization in Primary and Secondary Education and Training marked a critical step in ensuring that students and staff have access to adequate digital infrastructure. Schools have increasingly adopted a one-to-one device model, from first grade up to grade thirteen at the upper secondary level, in which nearly 100% of students have access to school-provided computers that can be taken home (Gilje et al., 2020). This ensures that students and teachers across varying levels and subjects can access secure, purpose-driven technological devices tailored to their educational and administrative needs.
Additionally, the integration of high-speed internet into schools is a cornerstone of Norway’s educational infrastructure. The 2003 Electronic Communications Act designates internet access as part of its universal service obligations, ensuring digital communications networks are accessible to all. Complementing this, the 2017–2021 digitalization plan emphasized providing students and staff with reliable internet access and high-quality digital equipment and content (Kunnskapsdepartementet, 2017). These measures have facilitated seamless connectivity, ensuring that schools remain equipped for pedagogical practices. Both students and teachers thus have access to technology, and first-order barriers can be considered exceptionally low.

2.1.2. Second-Order Barriers

While first-order barriers refer to external challenges such as insufficient resources, lack of infrastructure, and limited access to technology, second-order barriers are rooted in internal factors. These include teachers’ beliefs, attitudes, skills, and pedagogical practices that influence how they integrate technology into their classrooms (Xie et al., 2021). Unlike first-order barriers, which can often be mitigated through policy changes or investments in resources, second-order barriers require addressing deeply ingrained beliefs and pedagogical habits. As such, they are considered pivotal in ensuring sustainable and meaningful technology integration. Research has shown that even when adequate technological resources are available, the successful integration of technology often hinges on overcoming second-order barriers (Durff & Carter, 2019; Ottenbreit-Leftwich et al., 2018; Prestridge, 2012). These barriers influence whether and how teachers adopt technology in ways that align with transformative educational goals.

2.1.3. Third-Order Barriers

First-order barriers address challenges associated with digital infrastructure, while second-order barriers focus on individual beliefs and practices. In comparison, third-order barriers examine systemic and design-level challenges that affect the teaching and learning process. The concept of third-order barriers is deeply tied to the iterative process of design thinking (Tsai & Chai, 2012). Within educational contexts, design thinking is exemplified by various constructs involving the creation of knowledge and practice by teachers as they adopt technology and its pedagogical opportunities (Makki et al., 2018). This process challenges teachers to not only adapt but also innovate in response to evolving technological and pedagogical demands. A strong understanding of technological pedagogical knowledge (TPK) (Mishra & Koehler, 2006) is essential for this stage, providing a deep insight into how teaching and learning can be transformed when digital technologies are utilized effectively. Tsai and Chai (2012) argue that without cultivating these skills, teachers may struggle to fully leverage the pedagogical affordances of technology or address systemic barriers effectively. Through the lens of third-order barriers, the focus shifts toward empowering teachers as designers who can navigate complexity, reimagine instructional practices, and innovate to enhance learning outcomes. To support the design thinking process, a structured approach to problem-solving is needed in educational settings. The process can be described in four stages (New South Wales Department of Education, 2021).
  • Identify and Define (Understand the problem)
  • Research and Plan (Research and ideate or brainstorm a solution)
  • Produce and Implement (Create a draft or possible solution, get feedback)
  • Test and Evaluate (Refine project, exhibit, and evaluate or reflect on the outcome)

2.2. Research Questions

While the use of AI-driven automated formative feedback in Norwegian secondary schools is gaining interest, only a limited number of studies have explored its adoption by learners (Bueie, 2024), and there is a lack of research addressing the design-level changes that teachers make to integrate AI technology into their teaching practices and their beliefs about incorporating new technologies into the educational process. This study describes the experiences of Norwegian teachers when implementing AI-driven feedback, while also documenting successes, lessons learned, and teachers’ perceptions of continued technology use in the classroom. By examining both teachers’ beliefs and the design-level changes that the teachers made, this research sheds light on practical strategies and policy considerations that can support the effective integration of AI-driven feedback in diverse educational settings. This study uses Ertmer’s (1999) and Tsai and Chai’s (2012) frameworks of second- and third-order barriers, with the following research questions:
  • In what ways do teachers’ beliefs shape their approach to integrating automated feedback into their classroom practices?
  • How do teachers redesign their pedagogical practices and implement design-level changes to integrate AI-driven automated feedback tools into their teaching?

3. Methodology

3.1. The Design of Our Study

This descriptive qualitative study was designed to describe teachers’ beliefs and the design-level changes made by the teachers. The descriptive qualitative design was selected as the methodological approach because it enables us to provide a thorough, holistic, and rich summary of teachers’ experiences of using automated formative feedback. This approach remains closely aligned with the data, ensuring a direct and accurate representation of the participants’ experiences (Hall & Liebenberg, 2024).

3.2. Essay Assessment Technology (EAT)

EAT is a design-based research (DBR) initiative (Brown, 1992) developed and designed within a four-year DBR project called Artificial Intelligence Assessment for Learning (AI4AfL). It is learning technology software developed to support formative assessment practices. The software uses AI to personalize and adapt feedback to the needs of each learner and is being developed through three iterations. The software offers (i) language feedback (spelling, grammar, and punctuation), (ii) content feedback, and (iii) structure and organizational feedback. EAT is designed to assist learners in their writing processes. It offers a range of functionalities, with the most relevant to this study being its focus on writing mechanics. The first prototype specifically addresses the syntactical aspects of writing, including grammar, spelling, and punctuation. EAT does not provide direct corrections; instead, it offers feedback that encourages improvement and provides guidance on how to revise the text. The feedback highlights problematic words or sentences, prompting students to make the necessary changes. EAT fosters reflection and deeper learning by offering developmental feedback, helping students understand the underlying reasons for their errors, and promoting a better grasp of the writing process. Drawing on the vast resources available online, we included relevant grammar rules and explanations as external links within the feedback provided. This enabled students to access additional information and deepen their understanding of grammatical concepts.
The first prototype was developed using an open-source grammar checker, LanguageTool, which enables EAT to analyze texts and identify errors in grammar, punctuation, and spelling. Open-source language tools are widely used for natural language processing (NLP) tasks such as text classification, part-of-speech tagging, and machine translation. This prototype was built prior to the emergence of generative AI. However, as the project progressed, generative AI was embedded into the second and third prototypes of EAT. The second prototype focuses on content, providing feedback on ideas, arguments, and the development of the student’s message. The third prototype addresses organization, guiding students on the structure and flow of their writing. The current study explores the first prototype, which focuses on giving syntactical feedback (spelling, punctuation, and grammar).
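To make the mechanism concrete, the sketch below illustrates one way an open-source LanguageTool engine could be wrapped to produce the kind of developmental, non-correcting feedback described above. This is a minimal illustration, not the EAT implementation: the `language_tool_python` wrapper, the helper function name, and the “read more” URL pattern are assumptions made for the example.

```python
# Minimal sketch (not the EAT codebase) of LanguageTool-backed developmental feedback.
# Assumes the third-party wrapper `language_tool_python`; the helper name and the
# rule-reference URL are illustrative placeholders, not project specifics.
import language_tool_python

tool = language_tool_python.LanguageTool("en-US")  # starts a local LanguageTool server


def developmental_feedback(text: str) -> list[dict]:
    """Point at problem spans and explain them, without applying corrections."""
    feedback = []
    for match in tool.check(text):
        span = text[match.offset: match.offset + match.errorLength]
        feedback.append({
            "highlight": span,          # the problematic word(s) to be marked in the text
            "hint": match.message,      # LanguageTool's explanation of the likely issue
            # External link so learners can read about the underlying rule
            # (EAT links to online grammar explanations; this URL is only a placeholder).
            "read_more": f"https://languagetool.org/insights/?rule={match.ruleId}",
        })
    return feedback


if __name__ == "__main__":
    draft = "i think the essay are good , but it need more practise."
    for item in developmental_feedback(draft):
        print(f"Check '{item['highlight']}': {item['hint']} ({item['read_more']})")
```

The point mirrored from EAT’s design is that each finding is surfaced as a highlight plus a hint and a reference, never as an automatically applied correction, leaving the revision to the student.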
While tools like Grammarly are commonly used for grammar correction, EAT was developed with specific pedagogical goals in mind. Grammarly mainly provides surface-level syntactical corrections (Ebadi et al., 2023). Unlike Grammarly, EAT emphasizes formative assessment and supports students’ development across content, structure, and language. Additionally, Grammarly is not used in the Norwegian educational context due to concerns around General Data Protection Regulation (GDPR) compliance, making EAT a more suitable and legally appropriate alternative for educational institutions in this setting.

3.3. Informants, Context, and Settings

This study employed convenience sampling, a purposeful sampling technique (Merriam & Tisdell, 2015). The schools were selected based on their accessibility and willingness to participate, ensuring a feasible and efficient data collection process while still capturing relevant insights into the studied phenomena. This study draws on data collected during the initial phase of the AI4AfL research project. Data were collected in autumn 2022, during the first iteration of the EAT prototype. The informants were teachers from three lower secondary schools in a municipality in Eastern Norway. This study was approved by the Norwegian Agency for Shared Services in Education and Research, and all names used in the analysis are pseudonyms. Table 1 presents the demographic information of the in-service teachers who participated in this study. Prior to data collection, a workshop was organized so that the teachers could familiarize themselves with the EAT technology, and they were encouraged to explore its features.

3.4. Data Collection

3.4.1. Individual Semi-Structured Interviews

The current study aimed to provide valuable insights into teachers’ perceptions of using automated formative feedback to facilitate students’ writing processes. Three participating teachers agreed to be interviewed. The interviews were conducted face-to-face at the end of the process-writing period. The interview questions were designed to elicit teachers’ perceptions, experiences, and attitudes toward the use of automated formative feedback. The interviews, which were recorded and transcribed, lasted between 35 and 45 min. They followed a semi-structured format, as outlined in Kvale and Brinkmann (2009), with open-ended topics and questions that allowed the interviewees to express their thoughts freely. The interviews were conducted primarily in Norwegian by the project team members.

3.4.2. Video Observations

We utilized the video data to cross-validate and corroborate the information provided by the teachers during their interviews. Video material presents a unique and invaluable opportunity for educational scholars and researchers to systematically examine authentic classroom settings. Utilizing video-recorded materials offers several advantages, such as enabling detailed and systematic observations of complex classroom situations and natural settings over extended periods (Klette, 2022). The design of video observations in this study involved the use of three cameras in each classroom. One camera was positioned to focus on the teacher, while the other two were directed towards the students. Additionally, three microphones were employed in this setup: one attached to the teacher and two fixed to the student groups to capture their discussions. This comprehensive recording approach, as demonstrated by Fischer and Neumann (2012), captures both the actions and viewpoints of teachers and students in a single recording, allows teaching practices to be broken down into smaller components, and makes it possible to approach the same recorded teaching segment from various analytical perspectives.
Once the video recordings were exported, we examined the videos repeatedly, attempting to describe objectively and as completely as possible what was happening, without reference to preconceived categories or prejudices. We confined ourselves to pointing out what was visible or audible in the recordings. Klette and Blikstad-Balas (2018) highlight the synergy of video data, which enables the recording, reviewing, analysis, and synthesis of diverse instructional practices. This enhances the overall trustworthiness of observations, not only because the initial observer can revisit the same segments multiple times, but also because it enables traditional quality assurance methods such as reliability testing, member checking, and secondary analyses in novel ways. Video data promote transparency and clarity, making codes and coding systems the subjects of collaborative discussion and validation, encompassing the development and testing of the different codes for validity and reliability. The total video recording time was 8 h 25 min for Laura, 7 h 42 min for Faben, and 7 h 30 min for John.

3.5. Data Analysis and Trustworthiness

The interview data were transcribed verbatim by the researchers. In the initial phase, transcription software (Microsoft Word online), which automatically transcribes audio to text, was used. This process allowed for easy editing and helped ensure that the original Norwegian responses were captured accurately. After the software-generated transcription, the researcher listened to the recordings and edited the text to ensure an accurate representation of the teachers’ responses. The researcher then translated the Norwegian verbatim text into English, making every effort to translate accurately while preserving the original meaning as conveyed in Norwegian. We screened the transcriptions to identify common themes emerging from teachers’ responses. The data were then coded manually by the researcher, who developed codes and themes directly from the data and maintained detailed notes throughout the coding process to ensure transparency and consistency. A thematic analysis approach was used, identifying recurring patterns and key themes related to the teachers’ perceptions and experiences. The initial analysis was conducted by the first author, with the second author independently reviewing the results to ensure internal validity. Any discrepancies were resolved through discussion between the authors. In the second phase, the codes were refined. The findings are thus based on the triangulation of video and interview data. However, in line with Mathison (1988), our aim was not limited to triangulation for validity checks, but also extended to elaborating on convergence, contradiction, and inconsistency in line with the purpose and research questions of this study. Credibility was established through constant interaction with the data and the use of researcher triangulation (Donkoh & Mensah, 2023; Lincoln et al., 1985). Regular peer debriefings ensured that the investigators stayed informed about the research process and continually evaluated the preliminary findings and interpretations of the data.

4. Results

In exploring the integration of an AI-driven automated formative feedback tool, we examine teachers’ use and perceptions of AI-based automated feedback. Through a comprehensive examination of teacher interviews and video observations, we aim to offer insights into teachers’ beliefs and the design-level changes they made in their teaching. The results are presented according to the research questions.

4.1. Research Question 1: In What Ways Do Teachers’ Beliefs Shape Their Approach to Integrating Automated Feedback into Their Classroom Practices?

4.1.1. Teachers’ Beliefs About Technology

None of the teachers in this study had prior experience with using an automated feedback tool. The teachers emphasized the importance of becoming familiar with the tool before feeling confident enough to integrate it into their teaching. As Faben explained, “I had to familiarize myself with it first and see how it worked… at the beginning. So I took some time. But… you know what… I… I used it.” This initial hesitation gave way to active use as he began incorporating the tool: “I walked around and said, ‘Look at this mistake here. You have grammatical errors here. Shall we search a bit in the text? See now. It says you’re missing some commas. Shall we search together in the text?’” This illustrates a shift from uncertainty to a more exploratory and student-centered use of the tool, in which the teacher guides students in interpreting feedback and reflecting on their writing. Teachers also helped students interpret the output from the automated feedback. Automated feedback increased the teachers’ desire to help students understand the feedback they had received, highlighting how belief in technology’s usefulness may grow through direct experimentation. Despite minimal familiarity with the tool, the teacher was very optimistic and viewed it as effective in helping to achieve his pedagogical objectives. We observed that he helped students and facilitated their understanding and interpretation of the feedback, actively engaging in the feedback process with the students.

4.1.2. Teachers’ Beliefs About Students’ Role

The teachers in this study believed in student-centered learning, where learners are active participants. Students are expected to guide and give each other feedback (peer feedback) and engage with the content. Through such practices, students are seen as capable of constructing knowledge through dialogue. Faben articulated that
Yes, the students were supposed to guide… or give feedback to each other. So I decided that they should give their PCs to each other and read the text first, and then they should talk. And what happened then was that they gave feedback on what they needed to improve in the text. That could be things like language, structure… uh… grammar… depending on the differences in the groups’ levels. Of course, if you have a group with lower-level differences than one with higher-level differences, the feedback will also differ, I think. So it worked well. I was very careful that they should give constructive feedback—not just be nice to each other, as they usually are. So it improved. So overall, I think it was a very successful part. It could have been much better, of course.
(Faben)
Rather than seeing students as passive recipients of information, the teacher believed that students should take ownership of their own learning and position themselves as independent thinkers. Faben held a positive view of his students’ engagement with the feedback provided by the automated system and of their collaboration practices with their peers. Additionally, the teacher noted that the students responded quickly to the feedback, indicating a high level of motivation and willingness to participate. This observation suggests that the students were effectively engaged and found the feedback process beneficial. However, the teacher also observed that some students exhibited uncertainty, which could be attributed to limited training in how to effectively utilize the feedback from the tool. This indicates a need for more comprehensive instruction on how to interpret and apply feedback, to ensure that all students can benefit from the automated system. Moreover, the teacher observed that students initially struggled to grasp the feedback process but noted that their understanding improved after several rounds of feedback. This progression highlights the importance of repeated exposure to the feedback system, allowing students to become more familiar and comfortable with the process over time.
I think it went well. I thought it was nice to see that most of them got started quickly. I was not sure because 40 min is not long. Many managed that, and I thought that was nice to see. It was a bit concerning that some of them locked up at the beginning, and they never got started. And a couple of students achieved that, but then they got started, and yes, it was perhaps the first time they received feedback, so it went very well. Then maybe it was a bit different the third time or the second time they got feedback. That, I do not know. Maybe they were a bit tired or something?
(John)

4.1.3. Teachers’ Pedagogical Beliefs (Task Completion)

Another aspect apparent in this study is teachers’ beliefs about task completion. The experienced teacher (21 years of teaching) in this study focused heavily on task completion and displayed concern about students not completing the task within the time frame. The teacher believed that the amount of feedback overwhelmed her students and suggested reducing the time for group discussion to enable students to write. In using the automated feedback tool, we found that the teacher was more concerned about completing the task than about engaging with the feedback from the automated system. The teacher noticed that students were stressed about not completing their stories, which they perceived as critical for their grades. This stress prompted the teacher to reflect on her teaching practice, and she suggested that in the future she would reduce the time for group discussions. The teacher, who is accustomed to established routines, felt that the time allocated for the group discussions was excessively long. This suggests that there may be a need to adjust the duration of these sessions to better align with the teacher’s beliefs about optimal time management and efficiency. While recognizing the importance of peer discussions for understanding feedback, the teacher nevertheless felt that reducing discussion time would allow more time for actual writing, which she deemed a natural and productive shift.
Because they were stressed about not having finished their stories, umm, they thought it was important for their grades to complete the story and that they had to spend a lot of time explaining to the others in the group that they had a little mistake with ‘liten i’ [small ‘i’] for example. So, I might have reduced the time for discussions a bit. Of course, I understand that it’s important for them to discuss, but when everyone is satisfied [light laughter] and feels done with discussing the rules, then I think it’s natural to spend the rest of the time writing.
(Laura)

4.2. Research Question 2: How Do Teachers Redesign Their Pedagogical Practices and Implement Design-Level Changes to Integrate AI-Driven Automated Feedback Tools into Their Teaching?

We identified an iterative design thinking process undertaken by teachers as they adapted their practices for AI integration. As outlined in Table 2, this process involved addressing two broad dimensions: pedagogical issues and learning issues.

4.2.1. Pedagogical Issues

The design thinking process in this study aligns with commonly accepted models that involve identifying and defining problems, researching and planning, producing and implementing, and finally testing and evaluating (New South Wales Department of Education, 2021). These stages provide a conceptual lens through which we analyzed teachers’ experiences with the automated feedback tool. We identified three sub-issues in our study. The first sub-issue is instructional strategy. In the identify and define phase, the teacher noticed the challenges students faced while using the automated feedback system. For example, the teacher noticed that the students had issues navigating the interface; in the process of using it, one student lost half of their text. This can be problematic, as evidenced by instances where teachers find themselves needing to intervene when students click through the process too quickly and lose substantial portions of their text. Such situations highlight the need for teachers to possess a robust understanding of technology to provide timely and effective support. Here, the teacher quickly brainstormed a solution. In the research and plan phase, the teacher had not previously considered a backup strategy, but was able to come up with a solution whereby students were encouraged to save their texts in Microsoft Word first instead of typing directly into the EAT system. This quick response demonstrates adaptive planning and a willingness to co-construct a solution, which is part of a design thinking process.
In the second sub-issue, the teachers demonstrated a critical ability to identify and address students’ delays in getting started with process writing. The teachers recognized that some students hesitated to engage fully with the iterative writing process, often requiring additional support to get started or to use the feedback effectively. Moving to the research and plan phase, teachers responded creatively to these problems. By noticing patterns of disengagement or hesitation early on, teachers were able to intervene strategically by providing students with targeted support, ensuring that students could move forward. Their understanding of these challenges allowed them to adapt their instructional strategies to better support students, balancing the autonomy intended by the automated feedback tool with guided assistance when necessary.
The issue of student grouping was another key area where teachers engaged in a design thinking process, aligning particularly well with the phases of identify and define, plan, implement, and evaluate. Teachers identified the problem of grouping students by similar academic level, which limited the quality of peer discussions, as it reduced the potential for knowledge-sharing and varied perspectives. Next, in the ‘Plan’ stage, the teacher proposed a solution: organizing groups with mixed academic levels in future sessions. This suggestion demonstrates both empathy for students’ learning needs and a proactive mindset, with the teacher considering how peer dynamics can scaffold understanding, especially for weaker students. Though the ‘Implement’ stage is not directly evidenced in the quote, the teacher’s intention to revise groupings in future iterations of the activity suggests an ongoing, reflective process typical of design thinking. Finally, this feeds into the ‘Evaluate’ phase, as the teacher was already evaluating the impact of past grouping strategies and using this insight to improve future practice.

4.2.2. Learning Issues

When the automated system generated a large volume of feedback, teachers expressed concerns about the potential for cognitive overload among students. Using the design thinking model, we can trace how teachers engaged with problem identification, planning, implementation, and evaluation. In the ‘Identify and Define’ phase, teachers observed that students, particularly weaker learners, felt overwhelmed by the amount of feedback they received. This cognitive overload hindered students’ ability to process and act on the feedback meaningfully. To mitigate this (the plan phase), the teachers asked the learners to prioritize the feedback, demonstrating an ability to devise a solution immediately. By highlighting the most critical feedback for students to address first, the teachers reduced anxiety among the weaker learners.

5. Discussion and Conclusions

Although this was the teachers’ first attempt to integrate the tool into their teaching, the teachers in this study demonstrated a beginning understanding of the theoretical underpinnings of the automated feedback tool. This could be attributed to their openness to new methodologies. Their enthusiasm for adopting the tool aligns with findings from prior research (Karolčík & Marková, 2023), which suggests that teachers are more likely to embrace innovative practices as part of their professional growth. This is probably because teachers mainly think of technology as a new way of supporting and innovating teaching. Facilitating automated feedback effectively requires a blend of pedagogical knowledge and technological skills from the teacher. In this respect, we found that the teachers were able to nurture and provide positive classroom support to the learners. In addition, teachers recognized the importance of facilitating students’ active role in learning. Teachers were seen promoting a culture of two-way dialogue with the students to help learners interpret the output from the tool. Teachers had better control of the learning process because of the personalized feedback and could therefore assist learners who needed more help and support them in interpreting the feedback. Additionally, the automated feedback tool not only complements traditional instructional methods but also cultivates a reflective and collaborative learning culture. The teacher’s role involved not only guiding and mentoring learning practices but also scaffolding that helps learners develop their agency.
Moreover, teachers in this study saw the potential of the tool to enhance teaching without overshadowing established practices. By reinforcing their existing strategies with technology, these teachers demonstrated the ability to bridge traditional and innovative approaches effectively. Their belief that group discussions should be shortened to allow students more time to write reflects a pragmatic approach focused on time management and task execution. This aligns with previous research indicating that teachers often view innovations through the lens of classroom efficiency and student outcomes (Abel et al., 2022). Although teachers are very positive towards technology integration and express their interest in using automated feedback, they still hold the belief that technology can affect lesson time and flow (Bećirović, 2023). This resonates with prior research suggesting that teachers develop positive beliefs when they perceive a new technology as useful and recognize its value (Bowman et al., 2020; Xie et al., 2021).
Another aspect prevalent in this study is that teachers’ intentions to utilize technology were influenced by their exposure to design thinking. The teachers’ proactive identification of students’ problems highlights their critical role in facilitating the effective use of automated feedback systems. Their capacity to observe, analyze, and address resistance to process writing ensures that such tools are integrated in ways that align with students’ learning needs and encourage progress toward greater independence. The teachers encountered a range of obstacles that tested their adaptability, digital proficiency, and pedagogical strategies. These challenges highlight the complexity of effectively merging traditional teaching methods with technology to meet the diverse needs of learners. However, the teachers were able to solve complex problems and come up with solutions. The teachers in this study not only provided solutions to immediate challenges but also intended to develop long-term strategies for integrating technology in meaningful ways. This aligns with findings by Blundell (2024), who argues that teachers can embrace the complexities of teaching-learning practice, move beyond superficial technology use, and develop sustainable, pedagogically sound practices. Furthermore, Bartlett (2021) highlights that such intentional design thinking practices, in which teachers expand their repertoire of actions, embrace ambiguity, and remain optimistic, eventually lead to improved student outcomes.
This study extends the current literature on teachers’ beliefs and design thinking. This paper therefore concludes that the successful integration of AI-based automated formative feedback demands a transformation not only in curriculum and pedagogy but also in teachers’ beliefs, that is, in the way teachers think and behave. For AI-based automated feedback tools to be effectively integrated into classrooms, teachers must understand how they perceive their role, how they approach teaching, and how they engage with new technologies. This transformation is critical in ensuring that the potential of AI-based automated feedback is fully maximized.

6. Implications of This Study

There are several implications of this study. First, teachers need targeted professional development; specifically, they need guidelines and support when using new technology. Teachers often face challenges in facilitating meaningful interactions when students have similar abilities, which can limit the potential for peer learning. In contrast, mixed-ability groups can enhance classroom dynamics, as students of varying strengths support each other through collaboration. Educators could adopt strategies that encourage interaction among diverse groups, fostering an inclusive learning environment in which all students can contribute. When teachers possess a limited understanding of how to utilize new applications, a significant portion of instructional time may inadvertently be dedicated to navigating and deciphering the functionalities of these tools. This not only detracts from the quality of instruction and the learning process but also impedes the integration of technology into the curriculum. We strongly encourage professional development programs to incorporate design thinking principles to help teachers navigate the complexities of technology integration. Furthermore, teacher education programs should emphasize the development of adaptive, design-focused mindsets, equipping educators to manage not only technical and logistical challenges but also the deeper pedagogical and cognitive barriers that arise in today’s digital learning environments.
Second, when responding to increasingly diverse student cohorts, teachers need awareness of how to promote independent learning in relation to each student’s academic needs, together with the skill to draw on appropriate tools in ways that support students in each of these areas. Although we know that personalized learning experiences are essential for promoting learner independence, teachers with limited knowledge of how to develop, support, and promote autonomous learning will encounter difficulty when they see learners who are struggling.
Third, higher education institutions should not only focus on teaching future teachers how to teach with technology; teacher preparation should also encourage reflection on teachers’ beliefs and on how those beliefs might change, which is not about fixing beliefs but about helping teachers examine them. While believing in the value of technology is a necessary precondition, it does not guarantee its use in the classroom. Preservice teachers must engage in authentic classroom situations that prompt them to consider and confront their beliefs regarding technology as an instructional tool. The strategic placement of preservice teacher candidates with mentors who are enthusiastic about the use of technology may facilitate the translation of preservice teachers’ theoretical commitments into practice while helping them overcome their own barriers.

7. Limitations and Future Research

Several limitations are apparent in this study. Firstly, the intervention period was relatively short (one school day), and data were collected for only one cycle of process writing. This limited timeframe may have restricted our ability to observe how teachers modify their instructional strategies and to gain a better understanding of their facilitative role in supporting student learning. A longer observation period, such as across the academic year, would have allowed for a deeper exploration of how the tool is integrated into teaching practices over time. Longitudinal studies could provide more comprehensive insights into the long-term impact of such tools on both teaching practices and student outcomes.
Secondly, all observations were scheduled in advance, which may have led some teachers to adjust their preparations with the students, anticipating the observations. This preparation could have influenced the patterns of tool usage observed during this study. To address this limitation, future studies could incorporate a mix of pre-scheduled and randomly occurring observations, which would allow for a more naturalistic exploration of tool usage in everyday teaching contexts.

Author Contributions

Conceptualization, M.K.S. and S.H.M.; methodology, M.K.S.; formal analysis, M.K.S.; data curation, M.K.S.; writing—original draft preparation, M.K.S.; review and editing, M.K.S. and S.H.M. All authors have read and agreed to the published version of the manuscript.

Funding

This study was funded by the Norwegian Research Council (Grant number 326607), and the APC was funded by the University of South-Eastern Norway.

Institutional Review Board Statement

This study was conducted in accordance with the Declaration of Helsinki and approved by the Norwegian Agency for Shared Services in Education and Research (SIKT).

Informed Consent Statement

Informed consent was obtained from all subjects involved in this study.

Data Availability Statement

The data are available upon request from the corresponding author.

Acknowledgments

The authors would like to express their sincere gratitude to Irina Engeness, the project leader, for providing the resources and support for the completion of this study.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Abedi, E. A. (2023). Tensions between technology integration practices of teachers and ICT in education policy expectations: Implications for change in teacher knowledge, beliefs and teaching practices. Journal of Computers in Education, 11, 1215–1234. [Google Scholar] [CrossRef]
  2. Abel, V. R., Tondeur, J., & Sang, G. (2022). Teacher Perceptions about ICT Integration into classroom instruction. Education Sciences, 12(9), 609. [Google Scholar] [CrossRef]
  3. Baran, E., & AlZoubi, D. (2024). Design thinking in teacher education: Morphing preservice teachers’ mindsets and conceptualizations. Journal of Research on Technology in Education, 56(5), 496–514. [Google Scholar] [CrossRef]
  4. Barrot, J. S. (2020). Integrating technology into ESL/EFL writing through Grammarly. RELC Journal, 53(3), 764–768. [Google Scholar] [CrossRef]
  5. Bartlett, S. (2021). Teacher as designer of learning: Possibilities and praxis of deep design. In D. Scott, & J. Lock (Eds.), Teacher as designer: Design thinking for educational change (1st ed.). Springer Nature. [Google Scholar]
  6. Bećirović, S. (2023). Challenges and barriers for effective integration of technologies into teaching and learning. In Digital pedagogy. Springer briefs in education. Springer. [Google Scholar] [CrossRef]
  7. Blundell, C. N. (2024). Using design thinking to embrace the complexities of teacher learning-practice with digital technologies. Professional Development in Education, 51, 386–404. [Google Scholar] [CrossRef]
  8. Bowman, M. A., Vongkulluksn, V. W., Jiang, Z., & Xie, K. (2020). Teachers’ exposure to professional development and the quality of their instructional technology use: The mediating role of teachers’ value and ability beliefs. Journal of Research on Technology in Education, 54, 188–204. [Google Scholar] [CrossRef]
  9. Brevik, L. M., Gudmundsdottir, G. B., Lund, A., & Strømme, T. A. (2019). Transformative agency in teacher education: Fostering professional digital competence. Teaching and Teacher Education, 86, 1–15. [Google Scholar] [CrossRef]
  10. Brown, A. L. (1992). Design experiments: Theoretical and Methodological challenges in creating complex interventions in classroom settings. The Journal of the Learning Sciences, 2(2), 141–178. [Google Scholar] [CrossRef]
  11. Bueie, A. A. (2024). Elevperspektiver på å bruke samtaleroboter i skriving. In T. Stenseth, & M. Oddvik (Eds.), Læring og undervisning i en digital verden (pp. 283–305). Gyldendal. [Google Scholar]
  12. Burner, T., Lindvig, Y., & Wærness, J. I. (2025). “We should not be like a dinosaur”—Using AI technologies to provide formative feedback to students. Education Sciences, 15(1), 58. [Google Scholar] [CrossRef]
  13. Donkoh, S., & Mensah, J. (2023). Application of triangulation in qualitative research. Journal of Applied Biotechnology and Bioengineering, 10(1), 6–9. [Google Scholar] [CrossRef]
  14. Durff, L., & Carter, M. (2019). Overcoming second-order barriers to technology integration in K–5 schools. Journal of Educational Research and Practice, 9(1), 246–260. [Google Scholar] [CrossRef]
  15. Ebadi, S., Mina, G., & Vakili, S. (2023). Investigating the effects of using Grammarly in EFL writing: The case of articles. Computers in the Schools, 40(1), 85–105. [Google Scholar] [CrossRef]
  16. Elstad, E. (Ed.). (2016). Why is there a wedge between the promise of educational technology and the experiences in a technology-rich Pioneer School? In Digital expectations and experiences in education (pp. 77–96). Sense Publishers. [Google Scholar]
  17. Elstad, E., & Eriksen, H. (2024). High school teachers’ adoption of generative AI: Antecedents of instructional AI utility in the early stages of school-specific chatbot implementation. Nordic Journal of Comparative and International Education (NJCIE), 8(1), 2–18. [Google Scholar] [CrossRef]
  18. Ertmer, P. A. (1999). Addressing first- and second-order barriers to change: Strategies for technology integration. Educational Technology Research and Development, 47(4), 47–61. [Google Scholar] [CrossRef]
  19. Fischer, H. E., & Neumann, K. (2012). Video analysis as a tool for understanding science instruction. In Science education research and practice in Europe: Retrospective and prospective (pp. 115–139). Sense Publishers. [Google Scholar] [CrossRef]
  20. Fredrickson, S., Vu, P., & Crow, S. R. (2014). Availability and use of digital technologies in P-12 classrooms of selected countries. Issues and Trends in Educational Technology, 2, 1–14. [Google Scholar] [CrossRef]
  21. Gilje, Ø., Bjerke, Å., & Thuen, F. (2020). Gode Eksempler på Praksis: Undervisning i en-til-en-klasserommet. [Rapport]. Enhet for Forskning, Innovasjon og Kompetanseutvikling i Skolen (FIKS), Universitetet i Oslo. [Google Scholar]
  22. Gudmundsdottir, G. B., & Björnsson, J. K. (2021). Hvor godt er lærere forberedt på den digitale hverdagen? In J. K. Björnsson (Ed.), Hva kan vi lære av TALIS 2018? Gode relasjoner som grunnlag for læring. Cappelen Damm Akademisk. [Google Scholar] [CrossRef]
  23. Hall, S., & Liebenberg, L. (2024). Qualitative description as an introductory method to qualitative research for master’s-level students and research trainees. International Journal of Qualitative Methods, 23, 1–5. [Google Scholar] [CrossRef]
  24. Hattie, J., & Timperley, H. (2007). The power of feedback. Review of Educational Research, 77(1), 81–112. [Google Scholar] [CrossRef]
  25. Hoang, G. (2022). Feedback precision and learners’ responses: A study into ETS Criterion automated corrective feedback in EFL writing classrooms. The JALT CALL Journal, 18, 444–467. [Google Scholar] [CrossRef]
  26. Hopfenbeck, T. N., Zhang, Z., Sun, S. Z., Robertson, P., & McGrane, J. A. (2023). Challenges and opportunities for classroom-based formative assessment and AI: A perspective article. Frontiers in Education, 8, 1–9. [Google Scholar] [CrossRef]
  27. Ifinedo, E., Rikala, J., & Hämäläinen, T. (2020). Factors affecting Nigerian teacher educators’ technology integration: Considering characteristics, knowledge constructs, ICT practices and beliefs. Computers and Education, 146, 103760. [Google Scholar] [CrossRef]
  28. Johnson, B. T., & Tawfik, A. A. (2022). First, second, and third-order barriers to information literacy and inquiry-based learning for teachers in poverty contexts. Educational Technology Research and Development, 70(4), 1221–1246. [Google Scholar] [CrossRef]
  29. Karolčík, Š., & Marková, H. (2023). Less experienced teachers’ perception of innovation and its impact on the use of digital technology in teaching. Journal of Research in Innovative Teaching & Learning, 18, 39–55. [Google Scholar] [CrossRef]
  30. Klette, K. (2022). The use of video capturing in international large-scale assessment studies: Methodological and theoretical considerations. Springer. [Google Scholar]
  31. Klette, K., & Blikstad-Balas, M. (2018). Observation manuals as lenses to classroom teaching: Pitfalls and possibilities. European Educational Research Journal EERJ, 17(1), 129–146. [Google Scholar] [CrossRef]
  32. Koltovskaia, S. (2020). Student engagement with automated written corrective feedback (AWCF) provided by Grammarly: A multiple case study. Assessing Writing, 44, 100450. [Google Scholar] [CrossRef]
  33. Kunnskapsdepartementet. (2017). Framtid, fornyelse og digitalisering: Digitaliseringsstrategi for grunnopplæringen 2017–2021. Available online: https://www.regjeringen.no/contentassets/dc02a65c18a7464db394766247e5f5fc/kd_framtid_fornyelse_digitalisering_nett.pdf (accessed on 10 March 2025).
  34. Kvale, S., & Brinkmann, S. (2009). Interviews: Learning the craft of qualitative research interviewing (2nd ed.). Sage. [Google Scholar]
  35. Lincoln, Y. S., Guba, E. G., & Pilotta, J. J. (1985). Naturalistic inquiry. International Journal of Intercultural Relations, 9(4), 438–439. [Google Scholar] [CrossRef]
  36. Link, S., Mehrzad, M., & Rahimi, M. (2020). Impact of automated writing evaluation on teacher feedback, student revision, and writing improvement. Computer Assisted Language Learning, 35(4), 605–634. [Google Scholar] [CrossRef]
  37. Liu, M., Li, Y., Xu, W., & Liu, L. (2017). Automated essay feedback generation and its impact on revision. IEEE Transactions on Learning Technologies, 10(4), 502–513. [Google Scholar] [CrossRef]
  38. Lund, A., Furberg, A., Bakken, J., & Engelien, K. L. (2014). What does professional digital competence mean in teacher education? Nordic Journal of Digital Literacy, 9(4), 280–298. [Google Scholar] [CrossRef]
  39. Łuczak, K., Chomiak-Orsa, I., & Piwoni-Krzeszowska, E. (2023). The use of an ontology to evaluate the suitability of using Design Thinking in teaching based on the syllabus. Procedia Computer Science, 225, 2714–2722. [Google Scholar] [CrossRef]
  40. Makki, T. W., O’Neal, L. J., Cotten, S. R., & Rikard, R. (2018). When first-order barriers are high: A comparison of second- and third-order barriers to classroom computing integration. Computers & Education, 120, 90–97. [Google Scholar]
  41. Mathison, S. (1988). Why triangulate? Educational Researcher, 17(2), 13–17. [Google Scholar] [CrossRef]
  42. McDonagh, A., Camilleri, P., Engen, B. K., & McGarr, O. (2021). Introducing the PEAT model to frame professional digital competence in teacher education. Nordic Journal of Comparative and International Education (NJCIE), 5(4), 5–17. [Google Scholar] [CrossRef]
  43. Meinel, C., & Krohn, T. (2022). Design thinking in education: Innovation can be learned (1st ed.). Springer. [Google Scholar] [CrossRef]
  44. Merriam, S. B., & Tisdell, E. J. (2015). Qualitative research: A guide to design and implementation (4th ed.). Wiley. [Google Scholar]
  45. Mishra, P., & Koehler, M. J. (2006). Technological pedagogical content knowledge: A framework for teacher knowledge. Teachers College Record: The Voice of Scholarship in Education, 108(6), 1017–1054. [Google Scholar] [CrossRef]
  46. Mustafa, F., Nguyen, H. T. M., & Gao, X. (2024). The challenges and solutions of technology integration in rural schools: Systematic literature review. International Journal of Educational Research, 126, 1–14. [Google Scholar] [CrossRef]
  47. Nagel, I., Guðmundsdóttir, G. B., & Afdal, H. W. (2023). Teacher educators’ professional agency in facilitating professional digital competence. Teaching and Teacher Education, 132, 1–14. [Google Scholar] [CrossRef]
  48. New South Wales Department of Education. (2021). Phases of design thinking. New South Wales Government. Available online: https://resources.education.nsw.gov.au (accessed on 5 March 2025).
  49. Ottenbreit-Leftwich, A., Liao, J. Y. C., Sadik, O., & Ertmer, P. (2018). Evolution of teachers’ technology integration knowledge, beliefs, and practices: How can we support beginning teachers use of technology? Journal of Research on Technology in Education, 50(4), 282–304. [Google Scholar] [CrossRef]
  50. Page, E. B. (1966). The imminence of… grading essays by computer. The Phi Delta Kappan, 47(5), 238–243. [Google Scholar]
  51. Prestridge, S. (2012). The beliefs behind the teacher that influences their ICT practices. Computers and Education, 58(1), 449–458. [Google Scholar] [CrossRef]
  52. Qian, L., Zhao, Y., & Cheng, Y. (2019). Evaluating China’s automated essay scoring system iWrite. Journal of Educational Computing Research, 58(4), 771–790. [Google Scholar] [CrossRef]
  53. Razzouk, R., & Shute, V. (2012). What is design thinking and why is it important? Review of Educational Research, 82(3), 330–348. [Google Scholar] [CrossRef]
  54. Reinders, H. (2009). Technology and second language teacher education. In A. Burns, & J. Richards (Eds.), The Cambridge guide to second language teacher education (pp. 230–238). Cambridge University Press. [Google Scholar]
  55. Roblin, P. N., Tondeur, J., Voogt, J., Bruggeman, B., Mathieu, G., & van Braak, J. (2018). Practical considerations informing teachers’ technology integration decisions: The case of tablet PCs. Technology, Pedagogy and Education, 27(2), 165–181. [Google Scholar] [CrossRef]
  56. Sandal, A. K., Helleve, I., Smith, K., & Gamlem, S. M. (2022). Feedback practice in lower secondary school: Exploring development of perceptions of feedback practice among students and teachers. Cogent Education, 9(1), 1–19. [Google Scholar] [CrossRef]
  57. Scott, D., & Lock, J. (2021). Teacher as designer: Design thinking for educational change (1st ed.). Springer Singapore. [Google Scholar] [CrossRef]
  58. Shadiev, R., & Feng, Y. (2023). Using automated corrective feedback tools in language learning: A review study. Interactive Learning Environments, 32, 2538–2566. [Google Scholar] [CrossRef]
  59. Stevenson, M., & Phakiti, A. (2019). Automated feedback and second language writing: Contexts and issues. In K. Hyland, & F. Hyland (Eds.), Feedback in second language writing (2nd ed., pp. 125–142). Cambridge University Press. [Google Scholar]
  60. Tondeur, J., Roblin, N. P., van Braak, J., Fisser, P., & Voogt, J. (2013). Technological pedagogical content knowledge in teacher education: In search of a new curriculum. Educational Studies, 39(2), 239–243. [Google Scholar] [CrossRef]
  61. Tsai, C. C., & Chai, C. S. (2012). The “third”-order barrier for technology-integration instruction: Implications for teacher education. Australasian Journal of Educational Technology, 28(6), 1057–1060. [Google Scholar] [CrossRef]
  62. UNESCO. (2023). Global education monitoring report: Technology in education: A tool on whose terms? UNESCO. [Google Scholar]
  63. Vandeyar, T., & Adegoke, O. O. (2024). Teachers’ ICT in pedagogy: A case for mentoring and mirrored practice. Education and Information Technologies, 29, 18985–19004. [Google Scholar] [CrossRef]
  64. Vatanartiran, S., & Karadeniz, S. (2015). A needs analysis for technology integration plan: Challenges and needs of teachers. Contemporary Educational Technology, 6(3), 206–220. [Google Scholar] [CrossRef] [PubMed]
  65. Wang, Z., & Han, F. (2022). The effects of teacher feedback and automated feedback on cognitive and psychological aspects of foreign language writing: A mixed-methods research [original research]. Frontiers in Psychology, 13, 1–12. [Google Scholar] [CrossRef] [PubMed]
  66. Wilson, J., Ahrendt, C., Fudge, E. A., Raiche, A., Beard, G., & MacArthur, C. (2021). Elementary teachers’ perceptions of automated feedback and automated scoring: Transforming the teaching and learning of writing using automated writing evaluation. Computers & Education, 168, 104208. [Google Scholar] [CrossRef]
  67. Wilson, J., & Czik, A. (2016). Automated essay evaluation software in English language arts classrooms: Effects on teacher feedback, student motivation, and writing quality. Computers & Education, 100, 94–109. [Google Scholar] [CrossRef]
  68. Xie, K., Nelson, M. J., Cheng, S.-L., & Jiang, Z. (2021). Examining changes in teachers’ perceptions of external and internal barriers in their integration of educational digital resources in K-12 classrooms. Journal of Research on Technology in Education, 55(2), 281–306. [Google Scholar] [CrossRef]
  69. Zhao, R., Zhuang, Y., Zou, D., Xie, Q., & Yu, P. L. H. (2023). AI-assisted automated scoring of picture-cued writing tasks for language assessment. Education and Information Technologies, 28(6), 7031–7063. [Google Scholar] [CrossRef]
  70. Zhu, M., Liu, O. L., & Lee, H.-S. (2020). The effect of automated feedback on revision behavior and learning gains in formative assessment of scientific argument writing. Computers & Education, 143, 1–15. [Google Scholar] [CrossRef]
Table 1. Demographic Profile of the EFL Writing Teachers.

| Name 1 | Faben | Laura | John |
|---|---|---|---|
| Class | Target Class | Target Class | Target Class |
| School | B | A | C |
| Education | General Teacher Education | International University | General Teacher Education |
| Years of teaching | 4 years | 21 years | 10 years |
| Subjects that teachers are teaching | English | English | English |
| Gender | Male | Female | Male |
| Experience with automated feedback | No | No | No |

1 Note: All names are pseudonyms.
Table 2. Key themes in relation to design thinking.

| Issues | Sub-Issues | Descriptions | Quotations |
|---|---|---|---|
| Pedagogical | Instructional strategy | Navigating the interface | "There were some students who clicked through a bit too quickly, who had submitted the text and then we had to make an update to go back a few steps in the writing process, and then many had lost much of the text, and that was not so good. No, exactly, but when that happened, they asked you, and then you felt that you needed to have that knowledge about technology to help the students out of it." |
| Pedagogical | Knowledge on how to promote independent learners | Resistance to process writing and feedback due to time constraints | "It was a bit concerning that some of them locked up at the beginning, and they never got started." |
| Pedagogical | Grouping | Unable to generate quality discussions due to same academic level | "Umm, and then maybe the organization of the groups, because now we put together groups with similar, the same academic level. And I think I would organize the groups a bit differently next time to have more variation in the academic level. Because—umm, the strongest ones grasp all the rules right away, no problem, while it can be difficult if you have some weaker students at the same level." |
| Learning | Cognitively overloaded | Feedback was overwhelming for students | "There was one who managed to write 38 words. Struggled a long time with those 38 words. Well, he did not get—he did not have 80 errors, but he did—yes on 38 words he had about 16 errors." |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
