The Proof Is in the Eating: Lessons Learnt from One Year of Generative AI Adoption in a Science-for-Policy Organisation
Abstract
1. Introduction
2. Context and Case
2.1. JRC’s Approach to Leveraging GenAI
2.2. Anatomy of a State-of-the-Art GenAI System: The Case of GPT@JRC
2.2.1. Design Fundamentals
2.2.2. High-Level Architecture
2.2.3. Introducing the AI-IQ: A Simplified Measure of GenAI System Capabilities
2.3. The Human Factor in GenAI: Meet the GPT@JRC Community
2.3.1. Community Approach
2.3.2. User Adoption and Onboarding Trajectory
2.3.3. How We Learnt from Our Users: Our Research Material
2.3.4. Analysis of User Information for Identifying Use Cases
- Collection of intended use case data and first classification: data was collected through the GPT@JRC Access request form, focusing on intended use and specific research focus or policy areas. We identified seven broad use case categories through an initial analysis of user-reported applications, followed by a qualitative comparison with GenAI literature review findings to establish a refined set of generic use cases.
- Use of GPT@JRC API for classification: the GPT@JRC API was used, with classification prompts, to categorise the data under the identified generic use cases. The users’ responses were processed through this API by two different AI models, LLaMa3 and Mistral-7b-OpenOrca, to classify each entry. The models were used as-is, without fine-tuning, with a prompt requiring the LLM to behave like a classifier and “label” each use case with the categories defined in step 1.
- Data insertion and quality check: the classified data was then inserted back into the database. A rigorous quality check was conducted manually by our team, using filters and pivot tables to ensure the accuracy of the classification using human oversight.
- Manual corrections: after the quality assessment, a total of 362 entries (6.6% of the classification) were manually corrected to adjust any misclassifications by the AI models.
- Selection of a classification model: based on the manual checks, a comparison could be drawn between the accuracy of the two models used, LLaMa3 and Mistral-7b-OpenOrca. The classification performed by the LLaMa3 model was ultimately selected for use in further analysis, as it was estimated to offer the best accuracy and consistency.
- Cross-tabulation and statistical analysis: finally, pivot tables were created, and several cross-tabulations were performed to analyse user distribution across variables such as role, expertise level, and motivational factors.
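As an illustration of the second step (using an LLM as a classifier via the API), the sketch below shows the general shape of such a pipeline. It is not the authors' code: the prompt wording and category fallback are invented for this example, and `complete` is an injected stand-in for the actual GPT@JRC API call, which also makes it easy to run the same batch through different models (e.g., LLaMa3 and Mistral-7b-OpenOrca) for comparison.

```python
# The seven generic use case categories identified in step 1.
CATEGORIES = [
    "Text enhancement",
    "Programming assistance",
    "Text/data analysis and interpretation",
    "Literature review",
    "Learning",
    "Project/process management",
    "Creative assistance/critical review",
]

# Hypothetical classifier prompt; the real prompt used by the authors
# is not published in this paper.
PROMPT = (
    "You are a classifier. Assign the following use case description to "
    "exactly one of these categories: {cats}.\n"
    "Reply with the category name only.\n\nUse case: {text}"
)

def classify(text, complete, categories=CATEGORIES):
    """Label one user-reported use case with an LLM.

    `complete(prompt)` is an injected callable standing in for a call to
    the GPT@JRC API with a given model.
    """
    answer = complete(PROMPT.format(cats=", ".join(categories), text=text))
    answer = answer.strip()
    # Guard against free-form replies: anything that is not an exact
    # category name is routed to manual review.
    return answer if answer in categories else "NEEDS_REVIEW"

def classify_batch(entries, complete):
    """Classify a batch of entries, keeping the raw text alongside the label."""
    return [(entry, classify(entry, complete)) for entry in entries]
```

Entries the model fails to label cleanly end up in a review queue, mirroring the manual quality check and correction steps described above.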
3. Results and Findings
3.1. A Resource Value Pyramid for Mapping GenAI Use Cases
3.1.1. Level 1: Out-of-the-Box LLMs
3.1.2. Level 2: Enhancing Existing Tools and Processes
3.1.3. Level 3: Creating New, Specialised GenAI Systems
3.2. A Compass for Innovating with GenAI
3.2.1. From the Diamond Model to a Checklist for Your GenAI Use Case Journey
- Can the output be verified or evaluated for quality and/or accuracy (auditability)? Responses include: Yes, by anyone using the tool; Yes, but it requires domain expertise; Yes, but additional technical implementation is required; No, extensive manual verification is required.
- To what extent does this use case fit with the core tasks of the organisation?
- Are there ethical and/or regulatory concerns about relying, even partially, on AI in this use case (e.g., the requirements from the AI Act in the EU applicable to AI systems that fall under the category of high risk in the regulation)?
- What is the required level of user expertise with Generative AI to successfully leverage this use case within the organisation?
- Is the level of digital literacy across the organisation sufficient to ensure that user expertise in GenAI can be acquired quickly?
- Is the level of trust in the technology sufficient for user acceptance, or is it likely to bring about resistance to change?
- What is the potential of this use case to bring value to the organisation?
- What are the potential associated risks (reputational, financial, etc.) in the case of poor quality or insufficient accuracy?
- Is the use case described in sufficiently precise terms to allow for the design of a GenAI system that would most likely address it?
- If so, which AI-IQ score would the GenAI solution require?
- Does the organisation have the necessary financial resources, skills, and time to implement and evaluate a solution with such an AI-IQ, either independently or with the assistance of external partners (e.g., outsourcing certain aspects of the work)?
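One way to make such a checklist operational is to encode each question as a scored dimension and surface the weakest one. The 0–3 scale and the "weakest link" aggregation below are illustrative assumptions of ours, not part of the published checklist:

```python
# Hypothetical encoding of the use case checklist: each question is answered
# on a 0-3 scale (0 = blocking concern, 3 = no concern). The dimension names
# paraphrase the questions above.
CHECKLIST = [
    "auditability",
    "fit_with_core_tasks",
    "ethical_regulatory",
    "required_user_expertise",
    "digital_literacy",
    "trust",
    "potential_value",
    "risk",
    "use_case_specificity",
    "ai_iq_feasibility",
]

def assess(answers):
    """Return (overall score, weakest dimensions) for a candidate use case.

    The overall score is the minimum answer: a single blocking concern
    caps the readiness of the whole use case.
    """
    missing = [q for q in CHECKLIST if q not in answers]
    if missing:
        raise ValueError(f"unanswered questions: {missing}")
    weakest = min(answers[q] for q in CHECKLIST)
    flagged = [q for q in CHECKLIST if answers[q] == weakest]
    return weakest, flagged
```

A use case scoring 0 on, say, `ethical_regulatory` would be flagged regardless of how valuable it is on the other dimensions.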
3.2.2. Making People Work Together with the Technology: A Driver for GenAI Adoption
4. Discussion: Perspectives for GenAI Adoption
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
Declaration of Generative AI in Scientific Writing
References
AI-IQ Level | GenAI System Capabilities | Typical GenAI System Configuration |
---|---|---|
0 | Non-conversational—Limited to answering the predefined questions anticipated by the system developer. | Unlike open-ended interactive conversational LLMs, this level represents a more basic semantic search system that retrieves answers to similar questions predetermined in advance. Examples of this level include traditional chatbots found on airline websites, which provide pre-scripted responses to common user inquiries. |
1 | Conversational—Capable of engaging in open-ended conversations and generating text on demand. The accuracy and reliability of the responses may be limited by outdated knowledge and a lack of validation mechanisms, and there is a risk of hallucinations. | The system runs an LLM with some relevant information provided in the context window, such as a custom system prompt (i.e., an instruction that is always provided to the LLM at the beginning of each conversation, made by its programmer and not always shown to the user). |
2 | Basic RAG—Capable of providing answers based on specific, proprietary knowledge from the organisation that owns the system, complementing the GenAI’s (e.g., LLM’s) general knowledge. | The GenAI system employs a basic form of retrieval-augmented generation (RAG), enabling the retrieval of knowledge from organisation-specific documents or knowledge bases and making this information available to the LLM to inform its responses. |
3 | Advanced RAG—The GenAI system is capable of accessing, interpreting, and synthesising knowledge from the organisation’s proprietary sources in a targeted and optimal way, specifically tailored to meet the requirements of identified use cases. | Multiple knowledge bases are made available to the GenAI system as specialised knowledge tools, each optimised for specific use cases through tailored RAG modalities. For example, one knowledge tool might be designed to generate a digest of the latest relevant research papers. The user chooses which knowledge tool(s) to activate in the context of the interaction with the GenAI system, allowing it to focus on the most relevant information for the task at hand. Advanced RAG is also characterised by the level of sophistication and optimisation of the RAG system, e.g., for dividing documents into information chunks, converting such chunks into machine-understandable embeddings, or ranking the relevance of selected information. |
4 | Basic Agentic—Enables the GenAI system to proactively leverage specialised data sources and tools, automatically selecting the most relevant ones from a predefined set of corporate resources and tools that have been made available with all necessary information for the system to utilise them. | A predefined set of tools and corporate resources is made available to the GenAI system, complete with all necessary information for their utilisation. An agentic system component leverages one or multiple LLMs to interpret user requests and understand how to effectively use the available tools. A RAG-based approach is used by the system using the relevant tools. This may involve tasks such as launching API requests, generating Python code that is executed in a sandbox, or taking other actions that enable the system to harness the capabilities of the provided tools. |
5 | Full Agentic—The GenAI system possesses advanced agentic features with some level of autonomy, enabling it to iteratively utilise available tools and resources to not only complete the provided task but also to adapt and improve its approach in response to changing circumstances or new information. This level of agentic capability allows the system to operate with increased independence and flexibility, pursuing multiple lines of inquiry and incorporating new data or insights as it works to achieve its objectives. | The GenAI system incorporates agentic components that enable it to iteratively complete complex tasks by breaking them down into smaller, more manageable tasks, leveraging available resources and tools to execute each step. The GenAI system benefits from a degree of autonomy in exploring various avenues, identifying an optimal solution, and planning various execution steps. Ultimately, the GenAI system can execute routine business tasks with the appropriate level of human oversight, freeing up resources for more strategic and high-value activities. |
6 | Multi-Agentic Systems—The GenAI system functions as a swarm of agents, each with advanced agentic capabilities, working collaboratively to meet complex objectives. This level represents a significant leap forward in GenAI capabilities, enabling the system to tackle intricate tasks that require coordination, negotiation, and collective problem-solving. The swarm intelligence allows the system to adapt and respond to changing circumstances, leveraging collective knowledge and the ecosystem of available tools and data. | This level of GenAI system is characterised by a decentralised architecture, in which multiple agents operate autonomously, interacting with each other and leveraging the available tools and data to achieve the goal requested by the user. Individual agents contribute to the collective objectives of the swarm. Advanced knowledge management and sharing mechanisms can facilitate the exchange of information and expertise among agents, potentially enabling them to learn from each other and improve overall performance through continuous collaboration. |
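To make the jump from AI-IQ level 1 to level 2 concrete: a basic RAG setup amounts to ranking organisational knowledge chunks by embedding similarity to the query and prepending the best matches to the LLM prompt. The sketch below is a toy version under that assumption; `embed` is an injected stand-in for a real embedding model, and the prompt template is invented for illustration.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v) if norm_u and norm_v else 0.0

def retrieve(query, chunks, embed, k=2):
    """Rank document chunks by similarity to the query (the 'R' in RAG)."""
    q = embed(query)
    ranked = sorted(chunks, key=lambda c: cosine(embed(c), q), reverse=True)
    return ranked[:k]

def build_prompt(query, chunks, embed):
    """Augment the LLM prompt with retrieved organisational knowledge."""
    context = "\n".join(retrieve(query, chunks, embed))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"
```

Levels 3 and above layer further machinery on top of this core: multiple specialised knowledge tools, optimised chunking and ranking, and eventually agentic components that decide for themselves which tools to invoke.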
Use Case Type | Short Description | Detailed Description and Examples |
---|---|---|
Text enhancement | Proofreading, summarising, translating, and drafting assistance. | Text enhancement involves the augmentation of written text to improve clarity, coherence, and overall quality. This includes activities such as proofreading, summarising, translating, and providing writing assistance. Examples of applications include the following: |
Programming assistance | Helping with specific programming tasks and automatic documentation. | Programming assistance includes support for various programming-related activities aimed at increasing programming speed and efficiency. Examples of using AI for programming assistance are as follows: |
Text/data analysis and interpretation | Analysing documents, extracting specific information, and helping to draw conclusions. | Text/data analysis and interpretation focuses on extracting and analysing information from diverse textual sources to inform decision-making processes. Examples of these applications include the following: |
Literature review | Discovering and summarising scientific literature. | Literature review aims to streamline the process of identifying and summarising relevant scientific literature across various fields. It assists in the drafting of scientific papers, synthesising information to identify research gaps, and exploring new scientific questions. Examples of using AI to assist literature review tasks are as follows: |
Learning | Answering questions about anything. | Learning encompasses a broad range of activities designed to expand knowledge on topics ranging from science to policy analysis, as well as on AI itself, including its underlying technology and ethical considerations such as bias in AI, data privacy, and the implications of AI-generated content. Examples of applications include the following: |
Project/process management | Automating tasks, helping with planning, and supporting decision-making. | Project/process management involves the use of AI to automate administrative tasks such as document management and planning, and to support decision-making in project-related activities. Examples of using AI to assist project management tasks include the following: |
Creative assistance/critical review | Generating ideas, role-playing, and crafting scenarios. | For creative assistance, users leverage AI to generate ideas, facilitate role-playing, and craft scenarios for various creative and strategic purposes. Examples of using AI to help in performing creative tasks are as follows: |
API Use Case Type | Description | Examples |
---|---|---|
Sending batches of requests to GPT models | Users leverage this approach to facilitate large-scale text processing and analysis within scientific projects. This method involves programmatic access, typically via Python scripts, to dispatch multiple inquiries to LLM and/or embedding models. The applications range from content summarising, information extraction and classification, to more complex tasks such as generating synthetic data, conducting adversarial robustness tests, and exploring model explainability. This batch processing enables the handling of extensive datasets, allowing for the extraction of structured information from unstructured text, the comparison of model performance, and the augmentation of existing data analysis pipelines. | Examples of engaging with the API to send batches of requests to GPT models include the following: |
Integrating GPT functionality within an IT system | This integration aims to enhance existing information systems by embedding GenAI capabilities to enable new functionalities. Use cases include the automation of tasks such as text classification, metadata assignment, and summarising within various domains such as finance, customs, and scientific research. The integration extends to the development of conversational chatbots, the augmentation of user interfaces with natural language processing features, and the provision of immediate user support by bypassing first-level query responses. The overarching goal is to streamline workflows, automate tasks, improve user experience, and leverage AI to provide assistance in complex processes. | Examples of engaging with the API to integrate GPT GenAI functionality within an IT system include the following: |
Experimenting | The experimental use of the API serves as a preliminary phase for testing and development purposes and to carry out experimental research. It encompasses the investigation of research on GenAI capabilities and limitations and broad experimentation with potential GenAI applications in scientific projects. Experimentation also includes the development of pilot projects to assess the feasibility and performance of GPT@JRC in various scenarios. This phase is crucial for understanding the potential and limitations of the API before committing to full-scale implementation or integration within IT systems. | Examples of experimenting with the API include the following: |
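The first API pattern in the table (sending batches of requests) essentially reduces to a dispatch loop with error handling around each call. A minimal sketch, assuming a `send` callable that wraps the actual GPT@JRC API request (endpoint, authentication, and model parameters are omitted here):

```python
def run_batch(prompts, send, retries=2):
    """Dispatch a batch of prompts, retrying transient failures per prompt.

    `send(prompt)` stands in for a call to the GPT@JRC API; any exception
    it raises (e.g. a rate limit or timeout) triggers a retry, and a prompt
    that keeps failing yields an error marker instead of aborting the batch.
    """
    results = []
    for prompt in prompts:
        last_err = None
        for _ in range(retries + 1):
            try:
                results.append(send(prompt))
                break  # success: move on to the next prompt
            except Exception as err:
                last_err = err
        else:
            # All attempts failed: record the error and keep going.
            results.append(f"ERROR: {last_err}")
    return results
```

In a real script the per-item results would typically be written back to a dataset for the kind of downstream quality checks and cross-tabulations described in Section 2.3.4.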
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
De Longueville, B.; Sanchez, I.; Kazakova, S.; Luoni, S.; Zaro, F.; Daskalaki, K.; Inchingolo, M. The Proof Is in the Eating: Lessons Learnt from One Year of Generative AI Adoption in a Science-for-Policy Organisation. AI 2025, 6, 128. https://doi.org/10.3390/ai6060128