Applied Sciences
  • Article
  • Open Access

13 November 2025

Artificial Intelligence and Formative and Shared Assessment in Teacher Education

Facultad de Educación de Segovia, Universidad de Valladolid, 40005 Segovia, Spain
* Author to whom correspondence should be addressed.
This article belongs to the Special Issue Recent Developments in E-learning: Learning and Teaching AI, Learning and Teaching with AI

Abstract

(1) Background: AI is increasingly being used in teacher education (TE), but there appear to be gaps and deficits in how to use it well to enhance learning processes. One possibility lies in integrating AI into formative and shared assessment (F&SA) processes. The purpose of this study was therefore to analyse the results of implementing learning activities that combined the use of AI and F&SA. (2) Methods: A double case study was carried out on two learning tasks applied in TE. The data collection techniques were activity reports (students), teacher observations, dialogical interviews, focus groups and students’ reflective video diaries. Quantitative and qualitative data were combined. (3) Results: In Case 1, the results show considerable variety in students’ competence in using AI, and that the students managed to improve some aspects of their theoretical frameworks with AI (writing, links between paragraphs and new relationships between the topics covered). The main learnings reported by the students were the need to review what AI offers slowly and carefully, as it may contain important mistakes, and that using it well can help them to improve their work. In Case 2, the integration of an AI simulator into an F&SA protocol improved the students’ communication skills, and the formative power of the assessment protocol created was confirmed; the students found the integrated AI simulator, the complementary feedback from the teacher and the self-assessment carried out through reflective video diaries useful. (4) Conclusions: The proper integration of AI with F&SA processes seems to favour the learning and competences of TE students, but teachers remain key to making students literate in the appropriate use of AI. More research on this topic appears to be needed.

1. Introduction

The use of artificial intelligence (AI) in teacher education (TE) degrees is becoming more widespread. In our context, practically all students use AI applications in their academic work. Faced with this situation, TE teachers are considering the need to design learning situations and activities that cover the contents of our subjects while also teaching our students to use AI in an academically appropriate way. The aim is to ensure that AI helps them learn more and meet the basic standards of an academic paper, instead of simply completing the work through deficient learning processes [,,]. In this type of learning process and task, the use of formative and shared assessment (F&SA) systems is a fundamental methodology to ensure good learning processes in students.
Therefore, in this article we worked on the combination of these two topics: the use of AI and F&SA systems to improve learning processes in TE.
The rapid advancement of AI technologies is transforming the way we teach and learn in higher education []. AI tools deployed in learning processes enable progress towards the automation and personalisation of learning []. In addition, they help students to become more aware of their difficulties and successes in real time, building their knowledge progressively in a way that motivates and engages them [,].
In this context, AI tools and assistants are beginning to be used that can act as Socratic opponents (to develop critical thinking), personal tutors (guiding the student and giving immediate feedback on their progress), motivational tutors (offering challenges and rewards) and dynamic evaluators (AI can provide teachers with an immediate profile of the degree of development of the competences involved) []. Some authors consider that AI generates motivation and different types of gratification in university students [].
The fear among teachers is that students will use AI simply to complete academic work quickly, accepting the preliminary results offered by AI and consuming them passively, without effort, revision or analysis, treating it as a way of finishing the tasks set by teachers as fast as possible and without dedicating time or effort to them. This misuse could lead to academic plagiarism. A recent study highlights the potential drawbacks of AI, pointing out that it could impose restrictions on the teaching profession, replace teachers and produce biased results. The study suggests that capacity-building strategies for trainee teachers should be enriched in different courses to raise awareness of AI applications [].
This scenario seems to be generating animosity and habitual criticism among university teaching staff, as they discover that the learning methodologies that have been valid up to now no longer make sense unless they are profoundly transformed. It therefore seems logical to review how AI is used and to contribute to student and teacher literacy in the use of AI in training processes. In this respect, one study highlights the importance of practical knowledge in promoting the integration of AI into educational practices, finding that higher education teachers were more likely to use AI than teachers at other educational levels [].
Some authors [] consider that incorporating an AI tool into assessment processes in higher education can identify early signs of competence deficits, providing opportunities for improvement and helping teachers to build more accurate and effective personalised training itineraries so that students can overcome the diagnosed deficits []. The use of AI in education generates new educational opportunities, including improving writing, creating educational resources, supporting lesson planning and increasing teacher productivity. Significant challenges were also identified, such as the ethical use of AI and strategies for detecting AI-generated text. Best practices include clear ethical guidelines, prompt development techniques and continuous professional training, to ensure teachers can effectively and responsibly integrate AI tools into their instructional practices [].
However, although these studies follow this line of learning monitoring, and some even provide enriched information in real time [,,,], they do not explain the assessment model. Nor do they explain the processes and protocols in which these AI tools are integrated, the guarantees of ethical use [,] or the effect they have on the development of students’ skills.
The “F&SA” model is well known in countries where Spanish is the official language [,,], but some publications on the model can also be found in English [,,,,]. It is a model that unites two different concepts of assessment in a single system. The ‘formative assessment’ concept refers to an assessment aimed at improving student learning, teacher competence and the quality of the teaching–learning process that takes place. The concept of ‘shared assessment’ refers to student participation in the assessment processes and the development of dialogical assessment processes between teachers and students, with the aim of improving learning.
The F&SA model must fulfil three basic characteristics: (a) assessment has to be clearly oriented to improving student learning, improving teaching competence and improving the teaching–learning processes in the subject, not only to the qualification; (b) assessment has to be part of the teaching–learning process and be carried out in a continuous way throughout the semester, not only at the end; (c) it should promote the active participation of students in the assessment processes, usually through the techniques of self-assessment, peer assessment and shared assessment, with a subsequent process of shared assessment with the teacher [,,].
There are several reasons that justify the use of F&SA systems in TE, following the evidence accumulated by several studies [,,,,,,]:
  • They serve to learn more and better.
  • They help to better develop many teaching competences.
  • They encourage students to focus more on their learning process and to take responsibility for it.
  • They tend to generate more educational success and better academic performance.
  • They are a basic competence for all teachers, which is normally acquired more through practical experimentation than through theoretical study.
  • They facilitate a better transfer between what is learned in pre-service teacher education (PTE) and the educational practice in schools.
These results are in line with the results found in the international literature on the effects of using formative assessment in higher education [,,,,,,].
We understand that it is essential to use F&SA systems in higher education (HE) for three main reasons: (a) it is a model that usually generates better learning processes and the capacity for lifelong learning and self-regulation; (b) assessment is a teaching competence that all teachers should acquire, and therefore should be worked on and experienced; (c) it is difficult for a teacher to implement F&SA systems and processes in the future if they have not lived and experienced them during their HE [,].
Throughout the introduction, we have reviewed the integration of AI into F&SA processes in TE. There do not seem to be enough studies that integrate the use of AI in PTE with F&SA processes. Therefore, our research objective was focused on analysing the results generated by the combined use of AI and F&SA in learning activities in PTE, through two case studies. More specifically, we posed three research questions:
  • What outcomes are generated by the responsible and appropriate use of AI as an aid to the development of the theoretical framework of tutored learning projects in PTE?
  • Do the verbal and non-verbal communication skills of PTE students improve using an AI simulator integrated into an F&SA protocol?
  • What is the formative power of combining AI and an F&SA protocol to improve the communication skills of PTE students?

2. Materials and Methods

The present research is based on two case studies []: two different experiences of integrating AI into F&SA processes in PTE. The aim was to investigate the particularities, singularities and complexity of the two experiences and the results found, focusing on how the participants think, act and learn [].
The qualitative nature of the research allowed us to investigate each case and obtain information from instruments such as activity reports (students), teacher observations and dialogical interviews (Case 1) and the focus group and reflective video diaries of the students (Case 2). ATLAS.ti software (version 9) was used for data analysis. A process of data reduction was carried out, synthesising the information into units of analysis or evidence, followed by a categorisation process to understand and explain the results []. The three researchers reviewed and refined the categories, ensuring their reliability through triangulation.
The two cases of AI integration into F&SA processes in PTE are described below.

2.1. Case 1

The experience was developed in PTE in Spain, in a subject in the 2nd year of the University Degree in Primary Education, taught in the 4th semester. It is a compulsory subject of 6 ECTS credits, consisting of three weekly sessions: (a) two hours of practice; (b) a 75 min theory session; and (c) a 45 min seminar. The group was composed of 36 students (30 women and 6 men), with a positive attitude and good academic level. They knew how to carry out tutored learning projects (TLP), having used this methodology in previous subjects.
The didactic objective was to use AI in an appropriate and responsible way as an aid for developing the theoretical framework of the tutored learning projects (TLP). The experience with AI was carried out within the compulsory TLP that students complete in groups of 3–4 in the practical part of the subject. Specifically, 10 TLPs were carried out, on different topics. Table 1 shows the TLP development form to be followed by the students. For Section 3 of the TLP, the “theoretical framework”, a protocol was established for the students to use AI in an appropriate way and to include a brief report of its use (Table 2). The teacher suggested using ChatGPT (version 4) or DeepSeek (version 2). In the evaluative context of the experience, these two AI tools acted as catalysts for self-assessment: interacting with them through prompts enabled students to evaluate the plausibility of the answers and to construct the theoretical framework of the TLP responsibly.
Table 1. TLP development form.
Table 2. AI protocol for the development of the theoretical framework with the help of AI.
An F&SA system was used throughout the course. We will focus on how F&SA was applied in the preparation and the exposition of the theoretical framework of the TLP, as this was where students had to use AI following the indicated protocol. The F&SA processes were as follows:
(a)
The theoretical framework of the TLP was drawn up by each group and corrected by the teacher in feedback and feedforward cycles, through face-to-face tutorials and comments in the online Word document shared between the teacher and the students, ensuring that the TLP had a correct structure and that there was internal coherence between the content and the session plan. In addition, the teacher reminded students to explain the use of AI in a short final report.
(b)
After the presentation of the TLP, an assessment dialogue was carried out with all the students.
(c)
The group presenting the TLP provided an individual reflection on the development of the TLP and a group self-assessment through a descriptive scale, with the first 3 items being those related to the theoretical framework.

2.2. Case 2

The didactic objective of the experience was to improve students’ verbal and non-verbal communication skills through the use of an AI simulator integrated into an F&SA protocol and to test the formative power of the designed protocol.
The experience was carried out in PTE in Spain, within the University Degree in Primary Education. The subject is taught in the third year, in the sixth semester, and is an optional subject of 6 ECTS credits. It was taken by 55 students (30 men and 25 women), who were highly committed and motivated by the subject.
Sixteen students (8 men and 8 women) participated in the case study on a voluntary basis. All of them were very committed to improving their expressive and communicative skills and interested in interacting with the AI simulator (“Yoodli”, https://yoodli.ai) that was incorporated into the F&SA protocol.
The protocol consisted of two learning and assessment activities: (1) oral video presentations and (2) individual reflective video diaries of students interacting with the AI simulator and the teacher alternately in three action-reflection cycles (Table 3).
Table 3. F&SA protocol.
Each cycle of interaction (with the AI simulator and with the teacher) was repeated in three time sequences, allowing the students to accumulate experience and improve the competences being worked on. Each interaction with the AI simulator ‘Yoodli’ was intended to activate a first, direct self-assessment exercise, providing an initial response from which students could make their first decisions for improvement. Feedback from the teacher complemented and augmented the formative response of the AI tool, facilitating new self-assessment exercises.
From the very beginning, a self-assessment sheet was designed and shared with the students, covering twelve verbal and non-verbal communication skills and five levels of development and evaluation: excellent (5), advanced (4), intermediate (3), basic (2) and initial (1). Using this sheet, the students could map the content of the feedback received in each interaction cycle, monitoring progress or blockages and assessing their level of competence in each communicative skill (Table 4). They could then complete their reflective video diaries and close each cycle. At the end of the process, they had to send the teacher the reflective video diaries and the final self-assessment. This reflective information helped the teacher to better understand the learning process of each student.
Table 4. Self-assessment sheet on communicative competence and indicators.
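To make the structure of the protocol easier to follow, the sketch below models one student’s pass through the action-reflection cycles as a simple record combining the three feedback sources (AI simulator, teacher, self-assessment on the five-level scale). It is a minimal illustrative sketch only: the field names, the example skills and the feedback strings are hypothetical, and only the five-level scale and the cycle logic are taken from the protocol described above.

```python
from dataclasses import dataclass, field

# Five-level scale used on the self-assessment sheet (Table 4).
LEVELS = {5: "excellent", 4: "advanced", 3: "intermediate", 2: "basic", 1: "initial"}

# The sheet covers twelve verbal and non-verbal skills; only a few hypothetical
# examples are listed here for illustration.
SKILLS = ["discursive order", "information selection", "gesture control",
          "gaze management", "voice modulation", "body position"]

@dataclass
class CycleRecord:
    cycle: int                           # action-reflection cycle number (1, 2 or 3)
    simulator_feedback: str              # alerts from the AI simulator
    teacher_feedback: str                # complementary, personalised comments
    self_assessment: dict = field(default_factory=dict)  # skill -> level (1-5)
    video_diary_entry: str = ""          # reflective closing of the cycle

def close_cycle(record: CycleRecord) -> None:
    """Check that every rated skill uses a listed skill name and a valid level."""
    for skill, level in record.self_assessment.items():
        assert skill in SKILLS and level in LEVELS, f"invalid entry: {skill}={level}"

# Example: one student's first cycle (all content invented for illustration).
portfolio = [
    CycleRecord(1, "monotone speech flagged at 01:12", "work on pauses and volume",
                {"voice modulation": 2, "gaze management": 3}, "I speak too fast..."),
]
for record in portfolio:
    close_cycle(record)
    print(f"Cycle {record.cycle}: {len(record.self_assessment)} skills rated")
```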

3. Results

The results of the two case studies combining the use of AI and F&SA processes are presented below.

3.1. Results of Case Study 1

The first research question was the following: what results are generated by the responsible and appropriate use of AI as an aid to the development of the theoretical framework for TLP in PTE?
The vast majority of the groups (9 out of 10) included the AI report in the theoretical framework of the TLP. These reports can be classified into three main types: (a) good reports, with two prompts, corrections, errors detected and changes (groups 1, 3, 6 and 10); (b) average reports, which did not stick completely to what was asked for or omitted some sections (groups 2, 4, 5 and 7); and (c) incorrect reports, poorly developed, with two very general prompts and without corrections, errors or changes (group 8).
To analyse the results found, we established three categories: (1) use of AI, especially prompts; (2) improvements in the work thanks to the use of AI and (3) learning achieved by the students after the use of AI in their work.
Category 1. Use of AI, especially prompts.
The results show a lot of variety, from groups that presented very elaborate and well-constructed prompts to groups that used simple prompts just to summarise and highlight the main ideas of the articles, or to help synthesise the accumulated information. The following are examples of very elaborate and well-structured prompts:
“We are students in the 3rd year of a double degree in Primary and Early Childhood Education. Within the framework of the subject of Physical Education, we have to carry out a project from which we have to plan and implement a proposal in relation to the topic given by our teacher in a real classroom in Primary Education. We have been assigned the subject of the hybridisation of methodologies in Physical Education. This is the theoretical framework that we have developed for the moment in relation to the TLP of Physical Education on the hybridisation of methodologies. As future teachers in Primary Education, we want to complete the information provided in the TLP. We need bibliographical sources with which we can fill in the gaps that the theoretical framework may have at the moment. Could you suggest sections that we still need to address? Are there any other aspects to correct that we may have missed in the final revision?”.
(group 10)
“We have thought of making a final section in which, after the initial changes made, previously suggested by you, we will cover the different pedagogical models that we have used as inspiration for our implementation in the 4th grade Primary Education classroom in the subject of Physical Education. These models are: Cooperative Learning, Personal and Social Responsibility Models (PSRM) and Adventure Action Spaces. It will serve as a final closure for the theoretical framework, theoretical contextualisation of our work on the Hybridisation of Methodologies”.
(group 10)
Below are examples of simple prompts to summarise, synthesise or highlight information:
“Make me a synthesis of these summaries (previous own elaboration) that does not take more than 4 pages”.
(group 8)
“Highlight the most important parts, sections and ideas of these articles on Attitudinal Style and Models of Personal and Social Responsibility”.
(group 8)
“Can you give me a summary of the main differences between cognitive and creative teaching styles in Physical Education, as explained in the book by Mosston and Ashworth (1993)”.
(group 3)
On several occasions the teacher had to remind the students to include the report on the use of AI, both in face-to-face tutorials and in comments in the shared Word document, as the students forgot to include it.
Finally, it is worth noting a point raised by a student in one of the assessment dialogues: students will continue to use AI, so it is teachers who should adapt tasks and activities either so that AI cannot be used or so that its appropriate use is encouraged.
“Teachers are the ones who have to change the way tasks are designed and assessed. We are not going to stop using AI, but you can do activities in which it cannot be used or is used to learn”.
(E2, group 4)
Category 2. Improvements in work through the use of AI.
The data indicate notable differences among the groups in terms of improving their work, depending on the prompts they used and how they subsequently managed the information provided by AI. The good groups found that AI helped them to improve the wording of some paragraphs, to link paragraphs better and to write a very interesting final synthesis of the paper, despite the hard work involved in proofreading and correcting everything AI provided.
“The changes we have mainly added to our theoretical framework are in the wording recommended by AI, and in the final part of the framework itself, as it suggested how to finalise the framework by addressing the different pedagogical models we utilise during the implementation in a real Primary Education classroom.
Therefore, the changes are: the final paragraph that summarises the methodologies used in our implementation in a 4th grade of Primary Education classroom and the changes of a connecting nature between paragraphs that we have made in order to unify the whole text under a coherent and cohesive narrative.
As well as completing in a more meaningful way the models that we decided to hybridise, thus finding the common points that connected them and allowing us to realise the interrelation and potential that exist between these three when implementing them in the classroom”.
(group 10)
These groups were able to critically analyse the pros and cons of using AI in their work.
“We highlight the indiscriminate appearance of authors already used by us and the redundancy of content, but written with other words from the Chat, as well as the years and/or articles, journals to which it made reference, finally being found elsewhere. The information per se is not incorrect, but it is not complete either, it is very brief and not specific, so it has been hard work to fill in the blanks left despite having been asked to fill in the blanks”.
(group 10)
“In our analysis, we noted that AI mentioned the studies of Mosston (1993) and Delgado Noguera (1992) in a general way, but did not elaborate on their approaches. We also noticed that specific examples of traditional games were missing, so we added activities such as “the rope” and “the handkerchief” to illustrate the application of these styles. In addition, AI includes unverifiable quotes, which we replaced from recent and relevant academic sources. Finally, we adjusted the explanation of Mosston’s and Delgado Noguera’s approaches, incorporating direct quotations and specific details from their research”.
(group 1)
“AI oversimplified the content, removing important nuances about the relationship between understanding, analysis and decision-making in students”.
(group 6)
The medium groups reported that AI mainly helped them to complete and/or synthesise some parts of the theoretical framework. In addition, some groups used it to add more bibliographical citations on the topic.
“Finally, we have corrected our framework, replacing the repetitive paragraphs with the synthesis provided by AI”.
(group 2)
“On the other hand, the citation of Sicilia Delgado (2002) was also correct, having verified both the author and the year and citation. This definition seems appropriate to us so we decided to add it to the theoretical framework, as it correctly defines what pedagogical models are”.
(group 4)
The data collected show how the weakest group made generic and simple prompts.
“Make me a synthesis of these summaries (previous own elaboration) that does not take more than 4 pages”.
(group 8)
This group reported only that they had to make some small modifications to what AI produced.
“After giving these instructions to AI, we did have to make some small modifications”.
(group 8)
Category 3. Learning achieved by the students after using AI in their work.
The results show how both the good and medium groups discovered that AI makes many mistakes (inventing references that do not exist, changing authors, changing years, etc.), so it is essential to check all the texts generated by AI slowly and systematically. They also found that it often does not provide new information, but simply re-explains the same content, rewritten in a different way.
“However, when we asked it about the definition of a concept (pedagogical models), it gave us the definition of an author. When we checked this quotation, we did not find the author mentioned above, so it was very faulty and unreliable”.
(group 5)
“By entering the quote, we were able to find the authors and the year properly. However, the sentence does not exist verbatim in the document, so the quote is not valid. It interprets what is said in the document and from this it derives its own definition”.
(group 4)
“AI is a useful tool, as long as the information it gives us is correctly checked and verified, as it can make mistakes”.
(group 5)
“Furthermore, although general studies are mentioned, no relevant recent research is cited”.
(group 1)
The weakest group also briefly pointed out that AI makes mistakes, but did not reflect on this.
“…we added something that the AI did not say correctly, or did not name at all”.
(group 8)
In the assessment dialogue with the teacher, some students pointed out further lessons, such as being aware that they use AI excessively and that this implies a possible loss of reading and writing skills.
“AI makes us stupid. We are going to stop reading, writing and we won’t even know how to write”.
(E1, group 1)

3.2. Results of Case Study 2

The second research question was the following: do PTE students’ verbal and non-verbal communication skills improve when using an AI simulator integrated into an F&SA protocol?
To answer it, we first analysed the verbal and non-verbal communication skills that the students had worked on in these learning activities. The data analysis generated 106 pieces of evidence and four communicative categories, which are presented in Table 5. Of the evidence collected, 56.6% showed progress in verbal communication, specifically in relation to the students’ discursive behaviour. The remaining 43.4% of the evidence showed non-verbal communicative elements brought into play and improved in three categories: kinaesthetic behaviour (gestures in communication, 22.6%), paralinguistic behaviour (non-verbal elements that accompany and qualify verbal communication, 15.1%) and proxemic behaviour (management of the body in the interpersonal communicative space, 5.7%).
Table 5. Improvements in verbal and non-verbal communication skills.
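As a quick arithmetic check on the distribution just reported (the study’s own coding was carried out in ATLAS.ti, not in code), the percentages can be converted back into evidence counts; the four categories recombine exactly into the 106 units of evidence. The snippet below is a minimal illustrative sketch with hypothetical variable names.

```python
# Reported shares of the 106 pieces of evidence (Table 5).
total_evidence = 106
shares = {
    "discursive (verbal)": 56.6,
    "kinaesthetic": 22.6,
    "paralinguistic": 15.1,
    "proxemic": 5.7,
}

# Convert percentages back into counts and confirm they account for all evidence.
counts = {name: round(total_evidence * pct / 100) for name, pct in shares.items()}
print(counts)                # {'discursive (verbal)': 60, 'kinaesthetic': 24,
                             #  'paralinguistic': 16, 'proxemic': 6}
print(sum(counts.values()))  # 106
assert abs(sum(shares.values()) - 100.0) < 0.1
```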
Below is the analysis by categories.
With regard to verbal communication, the students expressed improvements after the training carried out with the AI simulator and with the teacher, above all at the discursive level, and in terms of order and clarity in the transmission of the message, as well as from the point of view of the selection of information and the ability to synthesise in order to structure ideas and avoid repetition. One of the students revealed: “the organisational aspect of the ideas to be transmitted was the main aspect to be taken into account. […] I managed to make a speech that was more and more ordered, clearer, structured, without jumping around and without repetition” (Student 5).
In the area of non-verbal communication, we have three categories. Regarding “kinaesthetic behaviour”, the feedback from the AI simulator suggested to the students some gestural skills that are useful to accompany verbal speech: for example, the use of the hands to illustrate the word or the management of the gaze to empathise with the interlocutor. One learner indicated: “I used to gesticulate too much, especially with my hands. I think I have controlled this” (Student 1).
Another student stated: “I constantly lost eye contact with the camera, which is the interlocutor’s gaze; I did not address the camera, but read a script” (Student 8).
In the next category (“paralinguistic behaviour”), the students became aware of aspects related to voice management, tone, volume or rhythm from the personalised feedback provided by the teacher. One of them realised the need to avoid monotonous speech: “I have learnt to communicate better, oscillating the volume of my voice and trying not to be too monotonous” (Student 8). Another student was aware of considering the tone and speed of her speech in order to improve: “indeed, I speak a lot and at a high speed, so practising the tone and tempo made it easier for me to improve” (Student 4).
In the last category (“proxemic behaviour”), the students, by observing themselves in the interaction recordings with the AI simulator, and after reviewing the feedback provided by the simulator, were learning to position themselves in front of the camera and to be more aware of the location of the body in the communicative space: “yes, by being able to see yourself with the AI simulator you learn to position yourself correctly: showing your whole body, centred and with a good “visual” position and attitude towards the space and the others” (Student 7).
The third research question was the following: what is the formative power of the combination of AI and an F&SA protocol to improve the communication skills of students in PTE? In relation to this, 70 pieces of evidence and three categories of verification of the formative value were found (Table 6).
Table 6. Formative power of the F&SA protocol applied to the experience.
In relation to the formative capacity of the AI simulator, students highlighted the accuracy of the tool in alerting them to communication skills to be improved. One student shared the following in her reflective video diary: “at the beginning I was struck by its precise alerts about seriousness, monotony in speech and little animation and stimulation on my part. It even indicated to me the minute and second in which I could review these communicative shortcomings” (Student 4). Another student expressed in his video diary the motivating effect of the AI simulator to show communicative improvements through expressions of recognition and reward: “I noticed that the simulator warned me about the lack of pauses and silences. Then, training with it, I applied some strategic pauses and various connectors to the speech to make it flow and it recognised the change by rewarding me” (Student 6).
Regarding the usefulness of teacher feedback, students acknowledged the accuracy of the messages, but, above all, the clarifying length of each explanation, giving personalised advice on how to improve communication skills. In one of the reflective video diaries, a student indicated the following: “the first keys of the AI tool to improve my communication have been fundamental. But the teacher’s feedback, by focusing on them in more depth, has made me think more” (Student 5). In a similar vein, one student revealed the following: “the teacher explained to me in all the feedback clearly and precisely which aspects of gaze management affected the effectiveness of my communicative ability. This was indicated to me by the AI simulator, although not so deeply” (Student 8).
Finally, in relation to the usefulness of the self-assessment exercise, the progressive construction of the reflective video diary seems to have helped the students to develop a deeper capacity for reflection and self-criticism, based on the communicative results gathered from the feedback of the AI simulator and the teacher. This self-assessment exercise was useful for the students, mainly having two effects on their learning process.
First, there was a triangulation effect of the information. The feedback from the AI simulator and from the teacher helped students to tie up loose ends and advance in the improvement of communicative skills by focusing on the ability to reflect on the learning of specific skills. This idea was noted in the focus group discussion, with one of the students expressing how he closed the circle after observing himself, triangulating with the AI simulator and the teacher, thinking and applying his conclusions to improve: “the reflective video diary, in each recorded passage after each cycle of interaction with AI and the teacher, is making me remember and reflect on the trained communicative skills. I triangulated the feedback from both, especially the aspects that could be improved and applied them in the next practice” (Student 4).
The second recognised effect reveals a gain in confidence. One of the students indicated the following: “above all, recording myself in my video diaries helped me to verbalise and understand my shortcomings and my successes. This helped me to have more confidence in myself and my potential […] I know that I have improved, but I have to continue to work more carefully for the future” (Student 1).

4. Discussion

We have organised the discussion according to the research questions.
The first question focused on analysing what results are generated by the responsible and appropriate use of AI as an aid to the development of the theoretical framework of tutored learning projects in PTE. The results show considerable variety in students’ competence in using AI and in their habits of using AI as part of a learning activity, which is consistent with a study indicating that, although AI is present in university students’ assignments, it is not part of more autonomous or deeper learning habits []. Some studies with PTE students indicate that teachers need to design and guide the task with the use of AI in mind and/or to work specifically on AI competences so that students use it appropriately [,,].
On the other hand, students managed to improve some aspects of their theoretical frameworks thanks to AI. The most common improvements have to do with the writing, with creating good links between the paragraphs and topics of the theoretical framework and with finding analyses and relations between the topics covered that had not occurred to the students. In the same vein, the literature [,,,] indicates that PTE students make very varied uses of AI: improving writing, summarising, contributing ideas, personalised learning (explanatory help, guided tutoring), designing practical tasks or regulating their learning.
After the process, one of the main lessons learnt by the students was awareness of the need to review slowly what AI offers, given that it may contain important errors. In this regard, the authors of another study [] observed that when students evaluated ChatGPT with predefined criteria, their perception of the quality of the answers changed significantly, increasing their awareness of the limitations and potential errors of AI; this shows that critical reflection improves their ability to evaluate, rather than automatically accept, the results generated by AI. Another lesson learnt was that they had to be aware that using AI well can help them to improve their work, but that this requires time and effort. Similarly, another study [] reports that PTE students recognise that AI improves their educational effectiveness and facilitates personalised learning. However, they also indicate that it requires time to explore and adapt its use for real pedagogical purposes, as well as the need to critically reflect on processes and outcomes. Another recent study [] with PTE students shows that students are aware that the effective use of AI requires effort, initial training and continued practice.
The students also indicate that it is we teachers who should be in charge of modifying our learning tasks and activities, given the massive use of AI by students. In this regard, recent studies [,] point out that the use of AI among university teachers is still low (less than half) and highlight the need for adequate training to acquire skills in using AI, although its uptake in higher education surpasses that observed at other educational stages [].
Finally, one student expressed concern about a potential decline in reading and writing skills due to AI. This aligns with an experimental study which shows that the exclusive use of AI for academic reading and writing tasks can reduce comprehension accuracy by 25.1% []. Similarly, another study revealed that 65% of university students use AI mainly for grammatical and structural correction and 45% expressed concern about a possible loss of error self-detection and creativity []. In this context, another study [] highlights the importance of building the capacity of student teachers in learning and teaching AI, emphasising that adequate training is essential not only to develop pedagogical competencies, but also to address the ethical challenges associated with AI integration, ensuring its responsible and effective use in higher education.
The second question focused on whether PTE students improve their verbal and non-verbal communication skills by using an AI simulator integrated into an F&SA protocol. The results show that there are improvements in each of the four communicative skills categories. In similar studies [,], the use of AI tools and simulators to train oral expressiveness, strategies and approaches to speeches and oral presentations, or paralinguistic and gestural skills, helped to develop communicative competence in a more personal, effective, on-demand and real-time way, as in this case.
On the other hand, by identifying specific deficits in four modes of verbal and non-verbal communication during their interaction with the AI simulator, students became more aware of both their weaknesses and their strengths. In this respect, some specialised studies [,] consider interaction with an AI simulator to be a first step towards improving speech, clarity and conciseness in the transmission of the message, and towards using the body and gestures as communication tools complementary to words, placing students in the communicative scenario in a more conscious and active way. However, AI dialogic systems still need to improve in some key aspects of communication, such as the recognition of spontaneity or expressive creativity, since they remain subject to rigid communicative canons, as pointed out in a specialised study [].
In short, in response to the first two research questions, the inclusion of AI tools has activated important skills in students for their professional future, such as reading and writing, critical thinking and verbal and non-verbal communication. It is not so much the use of the tools themselves but how to use them that is a valuable lesson for their future as teachers.
The third question asked whether the combination of AI and an F&SA protocol has the formative power to improve the communication skills of students in PTE. Again, the results indicate that it does.
On the one hand, the formative capacity of the AI simulator helped students to take a more active role from the beginning of the learning process. In this regard, some specialised studies [,] highlight the educational value of AI in terms of the personalisation of learning, access to advanced competence resources and the efficiency and effectiveness of the learning processes, shortening times and encouraging students to make more precise decisions in the margins of improvement offered. As suggested in the literature [,], the automatic feedback offered by AI tools allows students to self-regulate learning, monitor their competence progress and, consequently, adjust the focus of their work in a more active way.
On the other hand, students found the teacher’s feedback during the assessment to be accurate, in-depth and personalised for their communicative skills. This is also in line with some specialised studies [,] that reveal this complementary and personalised formative value of teacher feedback, which is essential to provide nuance, to complement the information coming from AI, to prevent the risks of over-dependence on technology and to safeguard human interactions. In this way, the formative value of teacher feedback lies in its AI literacy power, to teach students not to take every result that AI tools provide as unique or absolute. The specialised literature [,,,] calls for this teaching role, which is capable of activating the literate, responsible, critical and ethical sense of the use of AI tools in higher education.
Video reflections as guided self-assessment helped students develop reflective and self-critical thinking by integrating feedback from both the AI simulator and the teacher. Some studies [,,] suggest that when students have data upon which to base their reflection on tasks, they learn better, with less uncertainty and more motivation. Therefore, providing varied feedback from different sources, so that they can triangulate, reflect, verbalise and self-regulate their learning, is essential for them to make better decisions and progress in their competence development.

5. Conclusions

We present below the most relevant conclusions of the study, taking into account the three research questions.
  • What are the results of the responsible and appropriate use of AI as an aid for the development of the theoretical framework of the tutored learning projects in PTE? The results show that PTE students demonstrate a considerable range of skills in the use of AI for the development of theoretical frameworks, from very simple to very elaborate and in-depth uses. On a practical level, it was found that the appropriate use of AI can help the student in the development of theoretical frameworks, especially in writing and creating links between paragraphs and topics. Therefore, proper use of AI can help them improve their work, but improper use can cause problems, such as reducing their reading comprehension and writing skills or creating a dependence on AI to perform all activities. Finally, some students point out that in view of the current common use of AI at university, we teachers should modify our learning tasks and activities.
  • Do the verbal and non-verbal communication skills of PTE students improve using an AI simulator integrated into an F&SA protocol? The results indicate that this combination has improved students’ communication skills on discursive, kinaesthetic, paralinguistic and proxemic levels. The AI simulator makes it possible to detect some verbal communicative deficits, such as the inability to select relevant information and structure it with an intentional order of ideas, as well as some non-verbal deficits, such as the lack of control of gestures, monotony and lack of voice modulation or the incorrect positioning of the body in the communicative scenario. The result is a greater awareness of their communicative competence in real time, as well as of what and how to improve.
  • What is the formative power of the combination of AI and an F&SA protocol to improve the communication skills of students in PTE? The results show that this methodological combination has a formative power to improve students’ communication skills through three elements: (1) the feedback from the AI simulator, (2) the feedback from the teacher and (3) the self-assessment exercise carried out on their communicative progress. The first is valuable for its ability to detect communication deficits and successes in real time, alerting and motivating students. The second adds nuance to, complements and brings AI-literacy value to the information provided by the AI simulator. The third guides reflection and self-criticism aimed at communicative improvement and progress, as well as favouring the triangulation of all the information. This combination helps to generate literacy processes in the use of AI, which go beyond accompanying students in the technical use of AI tools and extend to understanding the ethical considerations regarding their use, such as the authenticity of the response data or the security of the information handled.
In addition, we highlight a series of contributions and lessons learned that may be of interest for scientific and professional knowledge: (a) the literacy of PTE teachers and students is necessary to be able to introduce AI into their learning activities in an appropriate way; (b) teachers must adapt their learning activities so that the use of AI by students is appropriate and improves their learning; (c) the use of AI simulators integrated in F&SA protocols offers first answers on learning or competences to be developed by students; (d) hybrid F&SA protocols (AI and teacher intervention) have a literacy-enhancing power in the use of AI and (e) some students are aware of the possible loss of competences generated by an inappropriate use of AI.
The main limitation is that the two case studies were implemented recently, for the first and only time (academic year 2024/25), and each case involved only one group of students. It would be desirable to replicate the study with more courses and groups, with the relevant adaptations and changes, in order to draw more solid conclusions. On the other hand, given the increasingly numerous, diverse and complex resources offered by AI, this represents a field of opportunities and challenges for research on how to use AI in PTE and combine it with F&SA systems or other teaching methodologies, in all types of subjects, including the placement or final degree project.

Author Contributions

Conceptualization, T.F.-N., J.L.A.-H. and V.M.L.-P.; methodology, T.F.-N., J.L.A.-H. and V.M.L.-P.; software, T.F.-N., J.L.A.-H. and V.M.L.-P.; validation, T.F.-N., J.L.A.-H. and V.M.L.-P.; formal analysis, T.F.-N., J.L.A.-H. and V.M.L.-P.; investigation, T.F.-N., J.L.A.-H. and V.M.L.-P.; resources, T.F.-N., J.L.A.-H. and V.M.L.-P.; data curation, T.F.-N., J.L.A.-H. and V.M.L.-P.; writing—original draft preparation, T.F.-N., J.L.A.-H. and V.M.L.-P.; writing—review and editing, T.F.-N., J.L.A.-H. and V.M.L.-P.; visualization, T.F.-N., J.L.A.-H. and V.M.L.-P.; supervision, T.F.-N., J.L.A.-H. and V.M.L.-P. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

The study did not require ethical approval.

Data Availability Statement

The original contributions presented in the study are included in the article; further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Zhang, C.; Schießl, J.; Plößl, L.; Hofman, F.; Gläser-Zikuda, M. Acceptance of artificial intelligence among pre-service teachers: A multigroup analysis. Int. J. Educ. Technol. High. Educ. 2023, 20, 49. [Google Scholar] [CrossRef]
  2. Runge, I.; Hebibi, F.; Lazarides, R. Acceptance of Pre-Service Teachers Towards Artificial Intelligence (AI): The Role of AI-Related Teacher Training Courses and AI-TPACK Within the Technology Acceptance Model. Educ. Sci. 2025, 15, 167. [Google Scholar] [CrossRef]
  3. Al-Abdullatif, A.M. Modeling Teachers’ Acceptance of Generative Artificial Intelligence Use in Higher Education: The Role of AI Literacy, Intelligent TPACK, and Perceived Trust. Educ. Sci. 2024, 14, 1209. [Google Scholar] [CrossRef]
  4. Zawacki-Richter, O.; Marín, V.I.; Bond, M.; Gouverneur, F. Systematic review of research on artificial intelligence applications in higher education—Where are the educators? Int. J. Educ. Technol. High. Educ. 2019, 16, 39. [Google Scholar] [CrossRef]
  5. Dogan, M.E.; Dogan, T.G.; Bozkurt, A. The Use of Artificial Intelligence (AI) in Online Learning and Distance Education Processes: A Systematic Review of Empirical Studies. Appl. Sci. 2023, 13, 3056. [Google Scholar] [CrossRef]
  6. Luo, J.; Zheng, C.; Yin, J.; Hay Teo, H. Design and assessment of AI-based learning tools in higher education: A systematic review. Int. J. Educ. Technol. High. Educ. 2025, 22, 42. [Google Scholar] [CrossRef]
  7. Ossa, C.; Willatt, C. Uso de la Inteligencia Artificial Generativa para retroalimentar escritura académica en procesos de formación inicial docente. Eur. J. Educ. Psychol. 2023, 16, 1–16. [Google Scholar] [CrossRef]
  8. Ruiz-Velasco Sánchez, E.; Bárcenas López, J. Inteligencia Artificial Para la Transformación de la Educación; Sociedad Mexicana de Computación en la Educación: Ciudad de México, Mexico, 2023. [Google Scholar]
  9. Niu, W.; Zhang, W.; Zhang, C.; Chen, X. The Role of Artificial Intelligence Autonomy in Higher Education: A Uses and Gratification Perspective. Sustainability 2024, 16, 1276. [Google Scholar] [CrossRef]
  10. Altinay, Z.; Altinay, F.; Sharma, R.C.; Dagli, G.; Shadiev, R.; Yikici, B.; Altinay, M. Capacity Building for Student Teachers in Learning, Teaching Artificial Intelligence for Quality of Education. Societies 2024, 14, 148. [Google Scholar] [CrossRef]
  11. Güneyli, A.; Burgul, N.S.; Dericioğlu, S.; Cenkova, N.; Becan, S.; Şimşek, Ş.E.; Güneralp, H. Exploring Teacher Awareness of Artificial Intelligence in Education: A Case Study from Northern Cyprus. Eur. J. Investig. Health Psychol. Educ. 2024, 14, 2358–2376. [Google Scholar] [CrossRef]
  12. Vera, V. Integración de la Inteligencia Artificial en la Educación Superior: Desafíos y oportunidades. Transform. Electron. J. 2023, 4, 17–34. [Google Scholar]
  13. Dai, Y.; Wu, Z. Mobile-assisted pronunciation learning with feedback from peers and/or automatic speech recognition: A mixed-methods study. Comput. Assist. Lang. Learn. 2023, 36, 861–884. [Google Scholar] [CrossRef]
  14. Cordero, J.; Torres-Zambrano, J.; Cordero-Castillo, A. Integration of Generative Artificial Intelligence in Higher Education: Best Practices. Educ. Sci. 2025, 15, 32. [Google Scholar] [CrossRef]
  15. Darban, M. The future of virtual team learning: Navigating the intersection of AI and education. J. Res. Technol. Educ. 2023, 57, 659–675. [Google Scholar] [CrossRef]
  16. Rokhayani, A.; Rukmini, D.; Hartono, R.; Mujiyanto, J. Integrating technology in online learning based on computer-mediated communication artificial intelligence to improve students’ achievement. J. High. Educ. Theory Pract. 2022, 22, 234–244. [Google Scholar] [CrossRef]
  17. Rahman, M.; Watanobe, Y. ChatGPT for Education and Research: Opportunities, Threats, and Strategies. Appl. Sci. 2023, 13, 5783. [Google Scholar] [CrossRef]
  18. López-Pastor, V.M. Assessment Practices in Physical Education: Case Studies in Primary, Secondary and Teacher Education; University of Valladolid: Valladolid, Spain, 1999. [Google Scholar]
  19. López-Pastor, V.M. Formative and Shared Assessment in Higher Education: Proposals, Techniques, Instruments and Experiences; Narcea: Madrid, Spain, 2009. [Google Scholar]
  20. López-Pastor, V.M.; Pérez-Pueyo, Á. Formative and Shared Assessment in Education: Successful Experiences in All Educational Stages; University of León: León, Spain, 2017; Available online: http://buleria.unileon.es/handle/10612/5999 (accessed on 10 July 2025).
  21. López-Pastor, V.M.; Castejón, J.; Sicilia Camacho, A.; Navarro, V.; Webb, G. The process of creating a cross-university network for formative and shared assessment in higher education in Spain and its potential applications. Innov. Educ. Teach. Int. 2011, 48, 79–90. [Google Scholar] [CrossRef]
  22. López-Pastor, V.M.; Kirk, D.; Lorente-Catalán, E.; MacPhail, A.; Macdonald, D. Alternative assessment in physical education: A review of international literature. Sport Educ. Soc. 2013, 18, 57–76. [Google Scholar] [CrossRef]
  23. López-Pastor, V.M.; Sicilia-Camacho, A. Formative and shared assessment in higher education. Lessons learned and challenges for the future. Assess. Eval. High. Educ. 2017, 42, 77–97. [Google Scholar] [CrossRef]
  24. Hamodi, C.; López-Pastor, V.M.; López-Pastor, A.T. If I experience formative assessment whilst studying at university, will I put it into practice later as a teacher? Formative and shared assessment in Initial Teacher Education (ITE). Eur. J. Teach. Educ. 2017, 40, 171–190. [Google Scholar] [CrossRef]
  25. Herrero, D.; López-Pastor, V.M.; Manrique, J.C.; Moura, A. Formative and Shared Assessment: Literature review on the main contributions in physical education and physical education teacher education. Eur. Phys. Educ. Rev. 2024, 30, 493–510. [Google Scholar] [CrossRef]
  26. Cañadas, L.; Santos-Pastor, M.L.; Castejón, F.J. Desarrollo de competencias docentes en la Formación Inicial del Profesorado de Educación Física. Relación con los Instrumentos de Evaluación. Estud. Pedagógicos 2018, 44, 111–126. [Google Scholar]
  27. Fraile, A.; Aparicio, J.L.; Asún, S.; Romero-Martín, M.R. La evaluación formativa de las competencias genéricas en la formación inicial del profesorado de Educación Física. Estud. Pedagógicos 2018, 44, 39–53. [Google Scholar] [CrossRef]
  28. Black, P.; Wiliam, D. Assessment for Learning in the Classroom. In Assessment and Learning; Gardner, J., Ed.; SAGE Publications: London, UK, 2006; pp. 9–25. [Google Scholar]
  29. Boud, D.; Falchikov, N. Rethinking Assessment in Higher Education. In Learning for the Long Term; Routledge: Oxford, UK, 2007. [Google Scholar]
  30. Brown, S. Learning, Teaching and Assessment in Higher Education: Global Perspectives; Palgrave-MacMillan: London, UK, 2015. [Google Scholar]
  31. Brown, S.; Glasner, A. Evaluar en la universidad. In Problemas y Nuevos Enfoques; Narcea: Madrid, Spain, 2003. [Google Scholar]
  32. Ibarra-Sáiz, M.S.; Rodríguez-Gómez, G.; Boud, D. Developing student competence through peer assessment: The role of feedback, self-regulation and evaluative judgement. High. Educ. 2020, 80, 137–156. [Google Scholar] [CrossRef]
  33. Nicol, D. Resituar el feedback, de reactivo a proactivo. In El Feedback en Educación Superior y Profesional. Comprenderlo y Hacerlo Bien; Boud, D., Ed.; Narcea: Madrid, Spain, 2015; pp. 115–130. [Google Scholar]
  34. Panadero, E.; Jonsson, A. The use of scoring rubrics for formative assessment purposes revisited: A review. Educ. Res. Rev. 2013, 9, 129–144. [Google Scholar] [CrossRef]
  35. Simons, H. El Estudio de Caso: Teoría y Práctica; Morata: Madrid, Spain, 2009. [Google Scholar]
  36. Stake, R. Estudios de casos cualitativos. In Las Estrategias de Investigación Cualitativa; Denzin, N.K., Lincoln, Y.S., Eds.; Gedisa: Barcelona, Spain, 2013; pp. 154–197. [Google Scholar]
  37. Miles, M.B.; Huberman, A.M.; Saldaña, J. Qualitative Data Analysis: A Methods Sourcebook, 3rd ed.; SAGE: London, UK, 2014. [Google Scholar]
  38. Fošner, A. University Students’ Attitudes and Perceptions towards AI Tools: Implications for Sustainable Educational Practices. Sustainability 2024, 16, 8668. [Google Scholar] [CrossRef]
  39. Kalniņa, D.; Nīmante, D.; Baranova, S. Artificial intelligence for higher education: Benefits and challenges for pre-service teachers. Front. Educ. 2024, 9, 1501819. [Google Scholar] [CrossRef]
  40. Ayuso del Puerto, D.; Gutiérrez Esteban, P. La Inteligencia Artificial como recurso educativo durante la formación inicial del profesorado. RIED-Rev. Iberoam. Educ. Distancia 2022, 25, 347–362. [Google Scholar] [CrossRef]
  41. Kelley, M.; Wenzel, T. Advancing Artificial Intelligence Literacy in Teacher Education Through Professional Partnership Inquiry. Educ. Sci. 2025, 15, 659. [Google Scholar] [CrossRef]
  42. Spasopoulos, T.; Sotiropoulos, D.; Kalogiannakis, M. Generative AI in Pre-Service Science Teacher Education: A Systematic Review. Adv. Mob. Learn. Educ. Res. 2025, 5, 1501–1523. [Google Scholar] [CrossRef]
  43. Blonder, R.; Feldman-Maggor, Y.; Rap, S. Are They Ready to Teach? Generative AI as a Means to Uncover Pre-Service Science Teachers’ PCK and Enhance Their Preparation Program. J. Sci. Educ. Technol. 2024. [Google Scholar] [CrossRef]
  44. Sadidi, F.; Prestel, T. Impact of Criterion Based Reflection on Prospective Physics Teachers’ Perceptions of ChatGPT Generated Content. arXiv 2024. [Google Scholar] [CrossRef]
  45. Karataş, İ.; Yüce, M. AI and the future of teaching: Preservice teachers’ reflections on using ChatGPT in education. Int. Rev. Res. Open Distrib. Learn 2024, 25, 45–65. [Google Scholar] [CrossRef]
  46. Voogt, J.; Pieters, J.; Kuiper, E. Artificial intelligence adoption amongst digitally proficient trainee teachers: Perceptions, challenges, and instructional changes. Soc. Sci. 2024, 14, 355. [Google Scholar] [CrossRef]
  47. Bae, H.; Hur, J.; Park, J.; Choi, G.; Moon, J. Pre-service teachers’ dual perspectives on generative AI: Benefits, challenges, and integration into teaching and learning. Online Learn. J. 2024, 28, 1–20. [Google Scholar] [CrossRef]
  48. Mah, D.K.; Groß, N. Artificial intelligence in higher education: Exploring faculty use, self-efficacy, distinct profiles, and professional development needs. Int. J. Educ. Technol. High. Educ. 2024, 21, 58. [Google Scholar] [CrossRef]
  49. Qirui, J. Experimental Evidence on Negative Impact of Generative AI on Scientific Learning Outcomes. SSRN Electron. J. 2023, 1–15. [Google Scholar] [CrossRef]
  50. Ziar, P.Q. Impact of AI-Based Tools on Writing Skills. J. Res. Educ. 2025, 1, 97–112. [Google Scholar] [CrossRef]
  51. Afzaal, M.; Zia, A.; Nouri, L.; Fors, U. Informative Feedback and Explainable AI-Based Recommendations to Support Students’ Self-regulation. Technol. Knowl. Learn. 2024, 29, 331–354. [Google Scholar] [CrossRef]
  52. Rodero, E. Inteligencia Artificial (IA) Para Mejorar la Competencia en las Presentaciones en Público; Universidad Pompeu Fabra: Barcelona, Spain, 2024. [Google Scholar]
  53. Zou, B.; Liviero, S.; Hao, M.; Wei, C. Artificial Intelligence Technology for EAP Speaking Skills: Student Perceptions of Opportunities and Challenges. In Technology and the Psychology of Second Language Learners and Users. New Language Learning and Teaching Environments; Freiermuth, M.R., Zarrinabadi, N., Eds.; Palgrave Macmillan: Cham, Switzerland, 2020; pp. 433–463. [Google Scholar]
  54. Lotze, N. Goodbye to Classroom Teaching? Artificial Intelligence in Language Learning; Goethe-Institute: Berlin, Germany, 2018; Available online: https://bib.learnit2teach.ca/blog/goodbye-to-classroom-teaching-artificial-intelligence-in-language-learning/ (accessed on 10 July 2025).
  55. Allen, L.; Kendeou, K. ED-AI lit: An interdisciplinary framework for AI literacy in education. Policy Insights Behav. Brain Sci. 2024, 11, 3–10. [Google Scholar] [CrossRef]
  56. Köbis, L.; Mehner, C. Ethical questions raised by AI-supported mentoring in higher education. Front. Artif. Intell. 2021, 4, 624050. [Google Scholar] [CrossRef] [PubMed]
  57. Lee, S.; Park, G. Development and validation of ChatGPT literacy scale. Curr. Psychol. 2024, 43, 18992–19004. [Google Scholar] [CrossRef]
  58. Barbieri, W.; Nguyen, N. Generative AI as a “placement buddy”: Supporting pre-service teachers in work-integrated learning, self-management and crisis resolution. Australas. J. Educ. Technol. 2025, 41, 34–39. [Google Scholar] [CrossRef]
  59. Gašević, D.; Mirriahi, N.; Dawson, S. Analytics of the effects of video use and instruction to support reflective learning. In Proceedings of the Fourth International Conference on Learning Analytics and Knowledge, Indianapolis, IN, USA, 24–28 March 2014. [Google Scholar]
  60. Algayres, M.G.; Triantafyllou, E. Learning analytics in flipped classrooms: A scoping review. Electron. J. e-Learn. 2020, 18, 397–409. [Google Scholar] [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
