Artificial Intelligence from Google Environment for Effective Learning Assessment
Abstract
1. Introduction
2. Materials and Methods
2.1. The Developed System
2.2. The Course and the Tests
- Definitions of knowledge, skills, attitudes, context, and competences.
- Assessment and evaluation of competences in sports.
- Assessment and evaluation of competences at school.
- Peer assessment and evaluation.
- Certification of competences at school.
- European frameworks and laws for the certification of the competences.
2.3. The People
2.4. The Qualitative and Quantitative Evaluation of the Tests
2.4.1. Item Analysis
2.4.2. The Questionnaire
3. Results
3.1. The Effective Participants
3.2. Item Analysis Results
3.3. Answers to the Questionnaire
4. Discussion
Difficulty index:

- >0.70: too easy;
- 0.30–0.70: optimal;
- <0.30: too difficult.

Discrimination index:

- >0.30: good;
- 0.20–0.30: fair;
- 0.10–0.20: marginal;
- <0.10: poor, needs revision.
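These cut-offs can be applied programmatically when screening items. The sketch below is a minimal illustration using the thresholds above; the function names are mine, not from the study:

```python
def classify_difficulty(p: float) -> str:
    """Classify an item by its difficulty index (proportion of correct answers)."""
    if p > 0.70:
        return "too easy"
    if p >= 0.30:
        return "optimal"
    return "too difficult"


def classify_discrimination(d: float) -> str:
    """Classify an item by its discrimination index."""
    if d > 0.30:
        return "good"
    if d >= 0.20:
        return "fair"
    if d >= 0.10:
        return "marginal"
    return "poor, needs revision"
```

For example, an item with difficulty 0.93 and discrimination 0.25 would be flagged as "too easy" but "fair" at discriminating between stronger and weaker students.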
5. Conclusions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
Stage | Method/Technology | Example/Use Case |
---|---|---|
1. Early AIG | Template-based algorithms | Automatic item generation using fixed, algorithmic templates to produce large numbers of test items. |
2. Practical Applications | Domain-specific templates | Implementation of AIG in high-stakes and formative assessments—such as large-scale exams, opinion surveys, and classroom tests—to quickly generate items. |
3. Emergence of GenAI | Large Language Models (LLMs) | Use of LLMs to automatically generate customizable learning materials, for example, creating multiple-choice questions aligned with instructor-specified learning outcomes. |
4. Expanded Functionality | Integrative AI-based systems | Deployment of LLMs to support various assessment tasks: test planning, question creation, instruction preparation, scoring, and feedback provision. |
Question | Possible Answers |
---|---|
Course of Study | Open text |
Course Year | Number |
Section B: General Test Evaluation | |
B1. The overall quality of the test questions is high. | Rating scale: 1 = Totally disagree; 2 = Disagree; 3 = Neutral; 4 = Agree; 5 = Totally agree
B2. Were the questions relevant to the topics covered in the specified study material? | Rating scale: 1 = Totally disagree; 2 = Disagree; 3 = Neutral; 4 = Agree; 5 = Totally agree |
B3. Did the test adequately cover the topics it was intended to assess? | Rating scale: 1 = Totally disagree; 2 = Disagree; 3 = Neutral; 4 = Agree; 5 = Totally agree |
B4. Is the overall difficulty level of the test appropriate to your expected level of preparation? | Rating scale: 1 = Totally disagree; 2 = Disagree; 3 = Neutral; 4 = Agree; 5 = Totally agree |
Section C: Specific Evaluation of Applications | |
C1. Were the questions clearly worded and easy to understand? | Rating scale: 1 = Totally disagree; 2 = Disagree; 3 = Neutral; 4 = Agree; 5 = Totally agree
C2. Were the questions grammatically correct? | Rating scale: 1 = Totally disagree; 2 = Disagree; 3 = Neutral; 4 = Agree; 5 = Totally agree
C3. Did the questions appear to be content-wise correct (contain no factual or conceptual errors)? | Rating scale: 1 = Totally disagree; 2 = Disagree; 3 = Neutral; 4 = Agree; 5 = Totally agree |
C4. Were the questions “fair” (not tricky or based on excessively minor details)? | Rating scale: 1 = Totally disagree; 2 = Disagree; 3 = Neutral; 4 = Agree; 5 = Totally agree |
Section D: Specific Evaluation of Response Options. The test included multiple-choice questions; think about the answer options provided and evaluate the following aspects: | |
D1. Were the answers clearly worded and easy to understand? | Rating scale: 1 = Totally disagree; 2 = Disagree; 3 = Neutral; 4 = Agree; 5 = Totally agree
D2. Were the answers grammatically correct? | Rating scale: 1 = Totally disagree; 2 = Disagree; 3 = Neutral; 4 = Agree; 5 = Totally agree
D3. Were the wrong answers plausible but clearly incorrect? | Rating scale: 1 = Totally disagree; 2 = Disagree; 3 = Neutral; 4 = Agree; 5 = Totally agree |
D4. Were the answers consistent with the question? | Rating scale: 1 = Totally disagree; 2 = Disagree; 3 = Neutral; 4 = Agree; 5 = Totally agree |
Section E: Identifying Specific Problems. This section is very important to help us identify specific problems. | |
E1. Did you find cases where more than one answer seemed correct? | Yes/No |
E2. Did you find any cases where no answer seemed correct? | Yes/No |
Section F: Comparison and Final Comments | |
F1. If you have taken tests on similar topics prepared by instructors before, how would you compare the overall quality of the questions on this AI-generated test to those prepared by humans? | Significantly worse/Slightly worse/Similar/Slightly better/Significantly better/Don’t know/I have no terms of comparison |
F2. Do you have any other comments, suggestions, or observations regarding the AI-generated quiz questions or answers that you would like to share? | Open text |
Test n.1 | Item 1.1 | Item 1.2 | Item 1.3 | Item 1.4 | Item 1.5 | Item 1.6 | Item 1.7 | Item 1.8 | Item 1.9 | Item 1.10 | Item 1.11 | Item 1.12 | Item 1.13 | Item 1.14 | Item 1.15 |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Difficulty Index (1) | 0.56 | 0.75 | 0.80 | 0.67 | 0.87 | 0.52 | 0.85 | 0.54 | 0.59 | 0.54 | 0.93 | 0.92 | 0.84 | 0.59 | 0.61 |
Discriminatory Power (2) | 0.99 | 0.74 | 0.63 | 0.88 | 0.46 | 1.00 | 0.50 | 0.99 | 0.97 | 0.99 | 0.25 | 0.30 | 0.55 | 0.97 | 0.95 |
Selectivity Index (3) | 0.25 | 0.35 | 0.30 | 0.65 | 0.20 | 0.75 | 0.15 | 0.55 | 0.40 | 0.15 | 0.20 | 0.10 | 0.30 | 0.50 | 0.20 |
Reliability Index (4) | 0.14 | 0.26 | 0.24 | 0.44 | 0.17 | 0.39 | 0.13 | 0.30 | 0.24 | 0.08 | 0.19 | 0.09 | 0.25 | 0.30 | 0.12 |
Test n.2 | Item 2.1 | Item 2.2 | Item 2.3 | Item 2.4 | Item 2.5 | Item 2.6 | Item 2.7 | Item 2.8 | Item 2.9 | Item 2.10 | Item 2.11 | Item 2.12 | Item 2.13 | Item 2.14 | Item 2.15 |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Difficulty Index (1) | 1.00 | 0.95 | 0.81 | 0.44 | 0.68 | 0.68 | 0.95 | 0.63 | 0.71 | 0.44 | 0.94 | 0.92 | 0.83 | 0.98 | 0.97 |
Discriminatory Power (2) | 0.00 | 0.18 | 0.62 | 0.99 | 0.87 | 0.87 | 0.18 | 0.93 | 0.82 | 0.99 | 0.24 | 0.29 | 0.58 | 0.06 | 0.12 |
Selectivity Index (3) | 0.00 | 0.14 | 0.33 | 0.62 | 0.33 | 0.76 | 0.05 | 0.43 | 0.43 | 0.24 | 0.14 | 0.05 | 0.24 | 0.05 | 0.05 |
Reliability Index (4) | 0.00 | 0.14 | 0.27 | 0.28 | 0.23 | 0.52 | 0.05 | 0.27 | 0.31 | 0.11 | 0.13 | 0.04 | 0.20 | 0.05 | 0.05 |
Test n.3 | Item 3.1 | Item 3.2 | Item 3.3 | Item 3.4 | Item 3.5 | Item 3.6 | Item 3.7 | Item 3.8 | Item 3.9 | Item 3.10 | Item 3.11 | Item 3.12 | Item 3.13 | Item 3.14 | Item 3.15 |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Difficulty Index (1) | 0.90 | 0.93 | 0.99 | 0.38 | 0.87 | 0.33 | 0.83 | 0.67 | 0.88 | 0.99 | 0.78 | 0.67 | 0.97 | 0.80 | 0.86 |
Discriminatory Power (2) | 0.36 | 0.27 | 0.06 | 0.94 | 0.45 | 0.89 | 0.57 | 0.89 | 0.41 | 0.06 | 0.68 | 0.89 | 0.11 | 0.65 | 0.50 |
Selectivity Index (3) | 0.13 | 0.13 | 0.04 | 0.48 | 0.22 | 0.39 | 0.17 | 0.48 | 0.17 | 0.04 | 0.30 | 0.43 | 0.04 | 0.26 | 0.26 |
Reliability Index (4) | 0.12 | 0.12 | 0.04 | 0.18 | 0.19 | 0.13 | 0.14 | 0.32 | 0.15 | 0.04 | 0.24 | 0.29 | 0.04 | 0.21 | 0.22 |
Test n.4 | Item 4.1 | Item 4.2 | Item 4.3 | Item 4.4 | Item 4.5 | Item 4.6 | Item 4.7 | Item 4.8 | Item 4.9 | Item 4.10 | Item 4.11 | Item 4.12 | Item 4.13 | Item 4.14 | Item 4.15 |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Difficulty Index (1) | 0.93 | 0.98 | 0.97 | 0.92 | 0.94 | 0.92 | 0.88 | 0.90 | 0.88 | 0.87 | 0.86 | 0.85 | 0.63 | 0.82 | 0.76 |
Discriminatory Power (2) | 0.07 | 0.06 | 0.06 | 0.00 | 0.06 | 0.06 | 0.06 | 0.06 | 0.06 | 0.06 | 0.06 | 0.06 | 0.05 | 0.05 | 0.00 |
Selectivity Index (3) | 0.15 | 0.00 | 0.00 | 0.10 | 0.00 | 0.00 | 0.05 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.60 | 0.00 | 0.20 |
Reliability Index (4) | 0.14 | 0.00 | 0.00 | 0.09 | 0.00 | 0.00 | 0.04 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.38 | 0.00 | 0.15 |
Test n.5 | Item 5.1 | Item 5.2 | Item 5.3 | Item 5.4 | Item 5.5 | Item 5.6 | Item 5.7 | Item 5.8 | Item 5.9 | Item 5.10 | Item 5.11 | Item 5.12 | Item 5.13 | Item 5.14 | Item 5.15 |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Difficulty Index (1) | 0.55 | 0.52 | 0.78 | 0.98 | 0.95 | 0.92 | 0.91 | 1.00 | 1.00 | 0.97 | 0.71 | 0.91 | 0.82 | 0.65 | 0.98 |
Discriminatory Power (2) | 0.99 | 1.00 | 0.68 | 0.06 | 0.18 | 0.28 | 0.34 | 0.00 | 0.00 | 0.12 | 0.83 | 0.34 | 0.60 | 0.91 | 0.06 |
Selectivity Index (3) | 0.48 | 0.71 | 0.52 | 0.00 | 0.14 | 0.05 | 0.05 | 0.00 | 0.00 | −0.05 | 0.76 | 0.19 | 0.52 | 0.62 | 0.05 |
Reliability Index (4) | 0.26 | 0.37 | 0.41 | 0.00 | 0.14 | 0.04 | 0.04 | 0.00 | 0.00 | −0.05 | 0.54 | 0.17 | 0.43 | 0.40 | 0.05 |
Test n.6 | Item 6.1 | Item 6.2 | Item 6.3 | Item 6.4 | Item 6.5 | Item 6.6 | Item 6.7 | Item 6.8 | Item 6.9 | Item 6.10 | Item 6.11 | Item 6.12 | Item 6.13 | Item 6.14 | Item 6.15 |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Difficulty Index (1) | 0.80 | 0.64 | 0.97 | 0.64 | 0.98 | 0.55 | 0.98 | 0.80 | 0.97 | 0.92 | 1.00 | 0.98 | 0.94 | 0.97 | 0.75 |
Discriminatory Power (2) | 0.65 | 0.92 | 0.12 | 0.92 | 0.06 | 0.99 | 0.06 | 0.65 | 0.12 | 0.29 | 0.00 | 0.06 | 0.23 | 0.12 | 0.75 |
Selectivity Index (3) | 0.33 | 0.57 | 0.10 | 0.62 | 0.05 | 0.48 | 0.05 | 0.33 | 0.05 | 0.24 | 0.00 | 0.05 | 0.10 | 0.10 | 0.33 |
Reliability Index (4) | 0.27 | 0.37 | 0.09 | 0.40 | 0.05 | 0.26 | 0.05 | 0.27 | 0.05 | 0.22 | 0.00 | 0.05 | 0.09 | 0.09 | 0.25 |
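Indices like those tabulated above can be computed from a binary scored-response matrix (students × items, 1 = correct). The sketch below uses standard classical-test-theory definitions (difficulty as the proportion of correct answers; discrimination as the difference between the upper and lower 27% groups ranked by total score); these are common conventions and may not match the exact formulas used in the study:

```python
import numpy as np


def item_stats(scores: np.ndarray) -> dict:
    """Classical item statistics for a students x items 0/1 score matrix.

    Difficulty: proportion of students answering each item correctly.
    Discrimination: proportion correct in the top 27% of students
    (by total score) minus the proportion correct in the bottom 27%.
    """
    n_students, _ = scores.shape
    difficulty = scores.mean(axis=0)

    # Rank students by total score; take the upper and lower 27% groups.
    totals = scores.sum(axis=1)
    order = np.argsort(totals)
    k = max(1, int(round(0.27 * n_students)))
    lower, upper = scores[order[:k]], scores[order[-k:]]
    discrimination = upper.mean(axis=0) - lower.mean(axis=0)

    return {"difficulty": difficulty, "discrimination": discrimination}
```

An item everyone answers correctly (difficulty 1.00) necessarily has zero discrimination, which is consistent with the pattern visible in the tables above (e.g., Items 2.1, 5.8, and 5.9).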
Statistic | B1 | B2 | B3 | B4 | C1 | C2 | C3 | C4 | D1 | D2 | D3 | D4 | E1 | E2 |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Min | 3 | 2 | 3 | 4 | 3 | 4 | 4 | 4 | 3 | 2 | 2 | 1 | ||
Max | 5 | 5 | 5 | 5 | 5 | 5 | 5 | 5 | 5 | 5 | 5 | 5 | ||
Average | 3.19 | 4.28 | 4.02 | 4.56 | 4.03 | 4.53 | 4.44 | 4.50 | 4.05 | 3.48 | 3.47 | 2.97 | ||
Number of Yes Answers | 4 | 11 | ||||||||||||
Number of No Answers | 60 | 53 |
Answer | Count |
---|---|
Significantly worse | 0 |
Slightly worse | 0 |
Similar | 46 |
Slightly better | 9 |
Significantly better | 7 |
Don’t know | 2 |
I have no terms of comparison | 0 |
Opinion Cluster | Count |
---|---|
Context missing | 6 |
The question needs improvement | 3 |
The answers need improvement | 6 |
Good as it is | 48 |
Other | 1 |
© 2025 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Miranda, S. Artificial Intelligence from Google Environment for Effective Learning Assessment. Information 2025, 16, 462. https://doi.org/10.3390/info16060462