Article

Human–AI Learning: Architecture of a Human–AgenticAI Learning System

Independent Researcher, Hull HU6 7RX, UK
Information 2025, 16(12), 1101; https://doi.org/10.3390/info16121101
Submission received: 16 August 2025 / Revised: 30 October 2025 / Accepted: 6 December 2025 / Published: 12 December 2025
(This article belongs to the Special Issue Feature Papers in Information in 2024–2025)

Abstract

The Ancient Greeks foresaw non-human automata and the power of dialogic learning, but Generative AI and AgenticAI afford the prospect of going beyond interlocutor to co-creator in an empowering partnership between learner and AI agent to address ‘whole person’ education. This exploratory study reviews existing conceptual models and implementations of learning with AI before proposing the novel and original architecture of a human–AgenticAI learning system. In this, the learner and human tutor are each supported by AI assistants, and an AI tutor coordinates the generation, presentation and assessment of adaptive learning activities requiring the partnership of learner and AI assistant in the co-creation of learning outcomes. The proposed model is significant for incorporating 21st-century skills in a diversity of realistic learning environments. It tracks a formative assessment pathway of the learner’s contribution to co-created outcomes through to the compilation of a summative achievement portfolio for external warranting. Although focused upon learning in universities, the model is transferable to other educational milieux.

Graphical Abstract

1. Introduction

1.1. The Educational Potential of AI

The driver of a modern automobile is assisted by several electronic helpers: the headlamps automatically light at dusk and the windscreen wipers detect rainfall; the cabin is kept at a comfortable temperature, while cruise control and lane guidance maintain constant speed and trajectory; gears are engaged automatically and a satnav provides route guidance and real-time advice on traffic congestion. In all, this partnership uses the power of automated systems to complement and augment the driver’s agency while reducing cognitive load—resulting in safer and less stressful journeys.
This paper examines an analogous partnership in the process and management of learning through recent developments in AI. Large Language Model Generative AI (GenAI) agents such as ChatGPT, DeepSeek and Gemini saw rapid take-up by students within a few months of launch [1]. Educational institutions have been slower to react [2], casting around for viable adaptation strategies in the face of a new technology that presents significant threats to established practice but offers radical opportunities for pedagogy and systems of delivery [3]. The autonomic features of AgenticAI—a development of GenAI—take this further.

1.2. Purpose and Structure of This Paper

The purpose of this paper is to outline the architecture of an innovative learning experience platform that employs AI. It will be argued that, in contrast to recent trends in learning analytics, which have strengthened oversight and external controls on the learner, AI has the potential to adapt and personalise educational experience—and can be used to empower the learner. The orientation of the paper is pedagogical rather than technical, and learner-focused rather than teacher-focused. As will be detailed later, many studies in this area aim to employ AI as an administrative tool to automate the conventional practice of lectures and summative testing; the aim of the learning experience platform described here is more radical. It provides AI partnership for the learner through personalised and adaptive dialogic engagement and formative assessment, human–agentic co-creation of learning outcomes, and the practising of 21st-century skills in a diversity of environments.
Traditionally, learning and assessment have been sited separately in time and space from the application of knowledge and skills; however, technological affordances of dialogic engagement and criterion-referenced formative assessment make this separation unnecessary. The proposed model embraces these opportunities to obviate the need for lectures and summative norm-referenced examinations. The considerable implications for traditional practice will be examined in more detail in the final section of the paper.
Section 2 of the paper discusses the educational context of AI. An overview of 21st-century graduate skills is followed by a brief review of Competency-Based Education and emerging developments in social learning analytics. In Section 3, parallels are identified between traditional Socratic method, formative assessment, the role of Generative AI tutors and the opportunities of AgenticAI. Section 4 is an overview of current AI-supported learning management systems, making the point that many see AI as a tool to reinforce existing practice rather than as an opportunity to explore new ways of working. Section 5 presents the main thesis—and novel offering—of the paper: the architecture of a human–AgenticAI learning system that integrates personalised dialogic engagement and formative assessment within an environment embracing collaborative working and simulated and situated learning. The system logs the learner’s contribution to co-created outcomes, resulting in the compilation of a summative achievement portfolio for CBE warranting. Section 6 concludes with a discussion of the learner empowerment potential of an AgenticAI approach, some limitations of the proposed system, and the implications presented for a higher education system still dominated by traditional practice.

2. Educational Context

2.1. Context and Rationale

This section of the paper provides context and rationale supporting the educational implications of the proposed human–AgenticAI learning system. It argues that the exigencies of the knowledge economy necessitate shorter learning-to-application cycles and the interpersonal skills and dispositions of computer-supported collaborative working. By extension, education of the ‘whole person’ becomes more feasible than in conventional institutions of higher education, which developed in the industrial era for the transmission of subject knowledge but perpetuate the mediaeval practice of lectures and summative norm-referenced examinations. Furthermore, the rapidly adaptive nature of Competency-Based Education and advances in the monitoring of collaborative working make these objectives more achievable and facilitate an education for sustainability [2].

2.2. 21st-Century Graduate Skills and Dispositions

The first quarter of the 21st century has seen higher education struggling to adapt to waves of technological change, with AI as the latest disruptor. Stein [4] comments on what he calls the ‘shrinking half-life of knowledge’ accompanied by rapid growth in procedural ‘frontier knowledge’, and notes increasing pressure in many knowledge-intensive academic disciplines to keep curricula current and relevant. The Future of Jobs Report [5] finds a pattern in which two-fifths of existing skillsets will be substantially changed over the 2025–2030 period. The Report places AI and big data at the top of the list of fastest-growing skills, but also predicts increasing demand for the interpersonal skills and dispositions of computer-supported collaborative working. The professional services network PricewaterhouseCoopers reports similar findings [6]: skills in AI-exposed jobs are changing 66% faster than in other jobs, and AI skills command a 56% wage premium.

2.3. Competency-Based Education

Against this background, Competency-Based Education (CBE) has grown in importance. CBE is defined by the U.S. Department of Education [7] as allowing students ‘to advance based on their ability to master a skill or competency at their own pace regardless of environment’. The same source distinguishes between CBE programmes that measure progress using credit hours and those called direct assessment, which measures progress by directly assessing whether a student can demonstrate command of a content area or skill. Where conventional curricula can be outpaced by rapid change, the nature of CBE makes it more adaptive, with direct assessment as a closer partner to formative assessment. Sturgis [8] describes CBE as a learning environment that provides timely and personalised support and formative feedback, with the aim being for students to develop and apply a broad set of skills and dispositions to foster critical thinking, problem-solving, communication, and collaboration.
Where the assessment of collaborative working was problematic in the past, social learning analytics (SLAs) are making progress. A systematic review by Kaliisa et al. [9] examined recent studies of SLAs in computer-supported collaborative learning environments. They concluded that while most studies employed this approach to interpret student behaviours from a constructivist perspective, there remained many opportunities for the employment of multiple analytic tools and the use of SLA data to inform teaching decisions.

2.4. Educating the Whole Person

Education of the ‘whole person’ is paramount. A sole focus on graduate skills and competencies has been widely criticised as narrow and instrumental. Datnow et al. [10] argue the need for academic systems to evolve into humanistic systems that support not only knowledge acquisition and certification but also the social, emotional, moral, and civic development of students. This orientation is reflected in the Future of Jobs Report cited earlier, which aligns personal and interpersonal qualities with the challenges facing graduates. The UK National Foundation for Educational Research [11] identifies the top five ‘soft skills’ as problem solving/decision making; critical thinking/analysis; communication; collaboration/cooperation; and creativity/innovation, but also emphasises the importance of lifelong learning in the acquisition of emergent knowledge, skills and dispositions, and for students to develop as ethically oriented individuals prepared for complex, global citizenship [12]. As Zhao [13] argues, these changing priorities challenge traditional higher education curricula that have focused more narrowly on the transmission of subject knowledge.

3. Learning with AI

This section introduces a number of ways in which the affordances of AI can facilitate educational change. Most immediate is the relationship between learner and tutor, where GenAI supports Socratic dialogue and formative dialogic assessment. AgenticAI, a development of GenAI, facilitates the transformation of this tutorial relationship into a hybrid partnership where human and agent contribute towards shared outcomes. AI can also support a broader palette of educational activities and environments to address the needs of the whole person. In turn, this has the potential to link adaptive curriculum design to implementation, and to manage the competency-based assessment of these key skills within diverse learning environments, including social and collaborative settings. In summary, five principal affordances of AI are identified: dialogic engagement; human–AI co-creation; integration of formative assessment; flexible employment in diverse environments; and active engagement with the socially shared regulation of learning. These will be returned to later in the paper.

3.1. Socratic Dialogue

Recent AI-in-education literature has focused on parallels between traditional Socratic method and the educational stance of AI tutors. Socrates, a Greek philosopher and teacher living in the fifth century BCE, developed the Socratic method as a form of dialogue-based learning to encourage participants to ask and answer questions in order to stimulate critical thinking and clarify ideas. Orynbassarova and Porta [14] go further by defining the Socratic method in the context of AI and performing pilot studies with ChatGPT (version: GPT-4). An analogy between the Socratic method and the Oxford University tutorial is noted by Tapper and Palfreyman [15] and by Balan [16], as an educational approach to promote critical thinking, co-constructed meaning and personalised feedback. Lissack and Meagher [17] draw parallels between the Oxford tutorial and GenAI. A related pedagogical orientation is Vygotsky’s Zone of Proximal Development (ZPD), which refers to knowledge and skills that are too difficult for a learner to acquire alone, but possible with guidance from a ‘more knowledgeable other’. A systematic literature review by Cai et al. [18] found over 150 studies reporting how AI tools can be used to create and facilitate the necessary ZPD scaffolding for effective learning.

3.2. Formative Assessment and Learning Objectives

Many commentators have noted synergies between the Socratic method and formative assessment. A significant feature of the latter is the provision of ongoing feedback, and studies over a number of years have found that well-constructed formative feedback can have high motivational value to enhance learning [19,20,21]. Assessment for Learning (AfL) is an approach within formative assessment that emphasises actively involving and empowering learners in their own learning [22]. A related strand of formative assessment is Assessment as Learning (AaL), with the purpose of supporting learners engaged in self- and peer-assessment in monitoring their own learning processes to develop self-regulation and self-direction [23]. Team working has been found to be particularly effective in this development of learners’ metacognitive skills [24,25].
GenAI provides many new opportunities for formative assessment. These include interactive simulations, virtual reality, gaming and real-time logging of achievements [3]. In these, the potential for assessment through dialogue has been explored in a number of recent studies. Vashishth et al. [26] note the effectiveness of AI-driven learning analytics in enhancing student engagement and supporting individualised learning. However, they caution that ethical considerations and the induction of teaching staff (faculty) must be addressed to make this possible. Another facet of personalisation is the potential of continuous formative and adaptive assessment, and research involving 120 students by Winarno and Al Azies [27] showed positive effects on student confidence and the potential for enhanced learning outcomes. A related issue is students’ self-regulation, and studies by He [28] and Xia et al. [29] both found evidence of the positive effects of formative assessment on students’ self-regulated learning behaviours. The cautions identified by Vashishth et al. [26] are also noted by Mao et al. [2], who share ethical concerns around AI and advocate the need for students to develop new AI literacies to prepare them for future workplace demands. Ilieva et al. [30] also express concern for responsible GenAI adoption and propose a holistic framework for GenAI-supported assessment in which teaching staff design adaptive and AI-informed tasks and provide feedback; learners engage with these tools transparently and ethically; and institutional bodies manage accountability through compliance standards and policies.
Assessment design and the formulation of learning objectives link GenAI to ZPD and Bloom’s Taxonomy. Sideeg [31] takes the position of Wiggins and McTighe [32] that the nature of assessment evidence should be defined before statements of learning outcomes are formulated. He argues that these should target the ZPD zone of what learners can achieve with the support of a ‘more knowledgeable other’; in this context, through Socratic dialogue with a GenAI agent. He cites the influential revision of Bloom’s Taxonomy, made in 2001 by Anderson, Krathwohl et al. [33], in which the highest level of learning outcomes on the cognitive process dimension involves Creating (replacing Bloom’s original model, which rated Evaluation above Synthesis). This recognises that Creating not only integrates other cognitive processes on this dimension—such as Applying, Analysing and Evaluating—but also involves critical thinking, collaboration and communication. These are the same skills and dispositions identified earlier as important to whole-person education and the needs of 21st-century life.
A related conceptual model for learning with technology is the SAMR model developed by Puentedura [34]. In this, educational technology is conceived to operate at four levels in the following way.
Substitution—technology is a direct tool substitute, without requiring any other change to teaching methods and delivery.
Augmentation—technology is a direct tool substitute, but results in functional improvements.
Modification—technology enables significant redesign of teaching delivery and learning tasks.
Redefinition—technology enables the creation of novel teaching delivery and learning tasks that were previously inconceivable.
At the Substitution and Augmentation levels, the effect of technology is to enhance the existing educational paradigm; whereas at the Modification and Redefinition levels, the effect of technology is to transform the educational paradigm by making new delivery and learning tasks possible. The SAMR model can be seen to be related to the Anderson, Krathwohl et al. revision [33] of Bloom’s Taxonomy by associating the ‘highest’ cognitive process of Creating with Puentedura’s Redefinition, and the ‘lower’ ones (Applying, Understanding and Remembering) with the Substitution and Augmentation levels.
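The enhancement/transformation split described above can be made concrete in a short sketch. The level names follow Puentedura’s published model; the `transforms_paradigm` helper and its threshold are this author’s illustrative encoding, not part of the SAMR model itself.

```python
from enum import IntEnum

class SAMRLevel(IntEnum):
    """Puentedura's SAMR levels, ordered from enhancement to transformation."""
    SUBSTITUTION = 1   # direct tool substitute, no functional change
    AUGMENTATION = 2   # direct substitute with functional improvements
    MODIFICATION = 3   # significant redesign of delivery and tasks
    REDEFINITION = 4   # previously inconceivable delivery and tasks

def transforms_paradigm(level: SAMRLevel) -> bool:
    """Modification and Redefinition transform the educational paradigm;
    Substitution and Augmentation merely enhance the existing one."""
    return level >= SAMRLevel.MODIFICATION
```

Ordering the levels as integers also reflects the text’s association of the ‘higher’ cognitive processes of the revised Bloom’s Taxonomy with the upper SAMR levels.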
Dialogic co-construction and co-creation are key processes of the human–AgenticAI learning system elaborated later in this paper.

3.3. AgenticAI and Co-Creation

An emerging extension of GenAI is AgenticAI. The key difference between agentic systems and generative large language model AI lies in self-learning and evolutionary self-direction. Where the behaviour of conventional GenAI agents follows set algorithms, AgenticAI learns from experience and awareness of context, continuously evolving its algorithms to make its behaviour adaptive and proactive [35]. Table 1 summarises key differences in behaviour in terms of agent autonomy, workflow automation, decision-making and tutor roles.
Some wider implications of AgenticAI are discussed by Hughes et al. [36], who foresee in many industries a decentralisation of decision-making, reshaping of organisational structures, and enhancement of cross-functional collaboration. They also call for robust governance frameworks, cross-industry collaboration, and research into ethical safeguards.
In the context of this paper, AgenticAI extends the learner-support role of GenAI to facilitate a human–AI hybrid partnership. Molenaar [37], an early pioneer in this field, proposed a transfer of control model with six levels: from teacher-only control, through four steps of increasing technology involvement, to full automation, in which AI controls all learning tasks. Cukurova [38] takes a different view of the ‘high end’ of automation with AI, distinguishing between two types of human involvement. In the first, human tasks are replaced by AI, resulting in a decline in human competence over time. In the second, human cognition is tightly coupled with AI through human–AI hybrid partnership, resulting in an extension of human competence over time.
These conceptualisations assume that humans are acting independently rather than in groups. The role of AI-supported learners in social–collaborative settings is explored by Järvelä et al. [39]. Here, they adapt a framework for socially shared regulation in learning to include what they call hybrid human–AI shared regulation in learning (HASRL). In this, AI can be used to identify events in the interactions between co-workers that are associated with productive collaboration. As discussed earlier, team-working experiences can be effective in the development of learners’ metacognitive skills. Through socially shared regulation of learning [39], the HASRL model has the potential for application in a variety of team-working and situated-learning environments, such as those requiring the interpersonal soft skills discussed in Section 2.
GenAI and AgenticAI show great potential to support learners and enhance learning, but may complicate assessment. Perkins et al. [40] developed a model to address this, with five levels of AI engagement in what they call an Artificial Intelligence Assessment Scale (AIAS). Activities at these levels range from no use of AI at Level 1, through increasing involvement at Levels 2, 3 and 4, to the full use of AI as human–AgenticAI co-creation at Level 5. The scale is summarised in Table 2. It provides clear descriptors for each level but appears to focus on the individual learner, taking no account of the learning environment.
A student assignment involving researching and preparing an information digest on a set topic is used here as an example of how AI might be employed at Levels 2–5. Typical Level 2 activities would include the student’s use of GenAI in brainstorming and developing initial ideas, components and layouts for the digest; no AI-created content would be allowed in the final submission. At Level 3, GenAI would be used for ongoing inspection of student-created work, identifying errors and making minor suggestions to improve the format and finish; AI-created content would be allowed, but an appendix of original student-created content would accompany the final submission. The use of AI at Level 4 is more intensive, perhaps involving AgenticAI to complete specific subtasks; the student would provide descriptive commentary on the AI-generated content, requiring critical engagement with that content and its evaluation in relation to the assignment task. Level 5 activity might also involve AgenticAI in the form of dialogic engagement of AI as a ‘co-pilot’ and collaborator to go beyond—in a ZPD fashion—what the student might be able to achieve unaided. At this level, the student would not be required to specify which elements of the submission were wholly student-created or AI-created.
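The submission rules attached to each level in the digest example could be encoded as follows. This is a minimal sketch: the level descriptors are paraphrases of the worked example above, and the `Submission` class and its method names are hypothetical, not part of the published AIAS.

```python
from dataclasses import dataclass

# Paraphrased descriptors for the digest-assignment example (AIAS Levels 1-5);
# not the official wording of Perkins et al.'s scale.
AIAS_DESCRIPTORS = {
    1: "No AI: all content student-created",
    2: "AI for ideation only: no AI-created content in the submission",
    3: "AI inspection and polish: AI content allowed with student-original appendix",
    4: "AI completes subtasks: student supplies critical commentary",
    5: "Full co-creation: no separation of student and AI contributions required",
}

@dataclass
class Submission:
    """Hypothetical record of one assignment submission and its AIAS level."""
    student: str
    aias_level: int

    def requires_original_appendix(self) -> bool:
        # Level 3: an appendix of original student-created content must accompany it
        return self.aias_level == 3

    def requires_commentary(self) -> bool:
        # Level 4: descriptive commentary on AI-generated content is required
        return self.aias_level == 4
```

Such a record would let the system enforce, per submission, the evidence requirements that distinguish Levels 3 and 4 from full co-creation at Level 5.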
A mapping of the AIAS to collaborative working environments involving groups of students will first require a comparison of the roles of AI agents as tutors in the two situations. Table 3 presents roles that AI tutors could take in supporting individual working and team working. For individual working (left-hand column), the four roles are similar to Levels 2–5 of the AIAS. At the first level is what might be called a ‘secretarial support’ role through the curation of study activity; this is followed by Socratic tutoring and dialogic formative assessment; next is a loose involvement of AI in finessing the student’s contribution to the learning activity; and finally there is a ‘hybrid’ level of jointly contributed learning outcomes, which Järvelä et al. [39] call HASRL and which this paper terms human–AgenticAI co-creation.
In the right-hand column of Table 3 are four levels describing increasing involvement of AgenticAI in supporting the team working of a group of learners; these also follow the AIAS Levels 2–5 in reflecting the increasing involvement of AgenticAI. The final level includes the roles of promoting productive collaboration through ‘hybrid human-AI shared regulation in learning’ [39].
Team working is one of the soft skills identified by NFER [11] and discussed in the previous section. These learning activities may be exercised in a range of environments typical of contemporary higher education. Table 4 illustrates a mapping of activities to learning environments, and can be employed as a means of logging the occasions and quality of engagement using the AIAS descriptors.
In Table 4, the six columns are headed by types of activities included in the Learning Activities Library (see Section 5). The five rows are headed by types of environments included in the Learning Environments Library. Sample learning activities are shown and rated at various levels on the AIAS (see Table 2). For each column, these are as follows:
  • a Problem-Based Learning Activity rated at Level 2, involving a small group of medical students with assistance from AI in researching medical symptoms and generating hypotheses for diagnoses;
  • an individual project rated at Level 3, in which AI assists an engineering student in the presentation of designs and calculations for a load-bearing beam;
  • an individual project rated at Level 4, in which AI works with a mathematics student to research and model the behaviour of advanced hyperbolic functions;
  • a workplace simulation rated at Level 5, in which AI works with a small group of students developing an online game in the identification and design of multiple outcomes;
  • an online conferencing activity rated at Level 2, in which AI assists business management students in the design and presentation of slideshows to pitch for a contract;
  • a face-to-face viva voce examination rated at Level 1, in which an urban design student is questioned on the safety features of a shopping mall.
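The Table 4 mapping suggests a simple data model for logging occasions and quality of engagement: each logged occasion pairs an activity type with an environment type and an AIAS rating. The sketch below is illustrative only; the activity and environment labels are this author’s stand-ins for entries in the hypothetical Learning Activities and Learning Environments Libraries.

```python
from collections import defaultdict

# Each record: (activity_type, environment_type, aias_level), loosely
# following the six sample activities listed above. Labels are assumptions.
engagement_log = [
    ("problem-based learning", "collaborative group", 2),
    ("individual project",     "independent study",   3),
    ("individual project",     "independent study",   4),
    ("workplace simulation",   "collaborative group", 5),
    ("online conferencing",    "remote",              2),
    ("viva voce",              "face-to-face",        1),
]

def levels_by_environment(log):
    """Summarise AIAS levels logged per environment type."""
    summary = defaultdict(list)
    for activity, environment, level in log:
        summary[environment].append(level)
    return summary
```

A formative-assessment pathway could aggregate such records over time into the summative achievement portfolio described in Section 5.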

4. AI-Supported Learning Systems

This section reviews the current literature on AI-supported learning systems, distinguishing between traditional learning management systems (LMSs) and Learning Experience Platforms (LXPs). The rationale for multi-agent systems is outlined, followed by some examples of applications. The section concludes with an overview of the opportunities created by generative multi-agent collaboration to enable adaptive learner-centred systems, and some criteria against which these might be evaluated.

4.1. Learning Experience Platforms

The origins of traditional LMSs can be traced to Behaviourism and Instructional Design. More recent theories of learning, such as Constructivism, Social Constructivism and Connectivism, have focused on the experience of the learner and have influenced the development of LXPs [41,42]. Table 5 provides a summary comparison of the two approaches. Essential differences lie in the degree of control, personalisation of course content and the learner’s experience, and in opportunities for social and collaborative learning.
LXPs show higher compatibility with the educational orientation of Socratic dialogue, formative assessment, human–AI co-creation and team working discussed in the previous section. Ways in which LXPs can foster soft-skills development are discussed by Valdiviezo and Crawford [42]. LXPs are also compatible with competency-based education, as shown by a bibliometric analysis by Radu et al. [44] and by a correlational study of the impact of technology-based collaborative learning on students’ CBE [45]. Improved student engagement and personalised learning are the target of a novel AI-enabled Intelligent Assistant built upon the Canvas LMS [46]. Similarly, Shamsudin and Hoon [47] report on the integration of ChatGPT into the Moodle LMS in Malaysian universities.

4.2. Multi-Agent Systems

Multi-agent AI systems offer the potential of improved agent reliability and verification, but their complexity creates challenges. Jesson et al. [48] discuss the problem of ‘hallucination’—where an agent generates text that is factually incorrect—and propose a method for estimating hallucination rates for in-context learning with GenAI. In a multi-agent setting, the danger is that misinformation from one agent might be accepted and propagated by others in the network, so the detection and mitigation of hallucinations is a vital but complex task. Kulesza et al. [49] share concerns affecting interaction, adaptation and autonomy, and propose aspect-oriented techniques to facilitate the code generation and modelling of agent crosscutting features. Other recent studies propose a variety of alternative approaches to multi-agent system design [50,51,52]. However, the reliability of multi-agent systems remains a significant problem to be overcome.
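One crude mitigation of cross-agent propagation of misinformation is to accept an answer only when a quorum of independently queried agents agree, escalating disagreement to human review. The sketch below illustrates the idea under that assumption; it is not the statistical estimation method of Jesson et al., and the quorum threshold is arbitrary.

```python
from collections import Counter

def consensus_answer(answers, quorum=0.6):
    """Accept the modal answer only if at least `quorum` of independent
    agents agree; otherwise return None to flag for human review.
    Naive string matching stands in for real semantic comparison."""
    if not answers:
        return None
    best, count = Counter(answers).most_common(1)[0]
    return best if count / len(answers) >= quorum else None
```

In a learning system, a `None` result would route the query to the human tutor rather than letting one agent’s hallucination propagate through the network.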

4.3. Overview of Recent AI in Education Studies

The academic literature reviewing recent AI applications in education falls into two groups, reflecting development along the S-curve in this area [53]. It begins with individual case studies (published over 2023–2025) reporting the use of specific AI tools. Following this are systematic reviews (published in 2025) employing theoretical frameworks to classify case studies and identify trends and interpretations.
A typical individual case study is that of Qureshi [54], in which two groups of students in a computer lab environment were given programming challenges. The control group had access to print resources but no Internet access, whereas students in the experimental group were encouraged to use ChatGPT to help. The second group achieved higher scores, but there were more inaccuracies in their submitted code. A similar study by Bernabei et al. [55] incorporated AI into the normal teaching environment of biomedical students, using ChatGPT to help write weekly assignments. Thematic analysis showed positive outcomes, indicating that AI allowed students to engage more efficiently in metacognition. A study by French et al. [56] engaged a small group of games programming students in evaluating OpenAI tools in the context of game development. Five case studies presented detailed outcomes, indicating the students’ positive views of AI in helping refine their skills in programming, problem-solving, critical reflection and exploratory design. While these studies reported the performance of individual students with AI, research by de Araujo et al. [57] investigated the interactions of secondary school student pairs with an AI Collaborative Conversational Agent that shares some of the characteristics of GenAI. This experience promoted the student pairs’ dialogue productivity, including listening to one another and sharing each other’s reasoning. However, it was not found to have an impact on deepening reasoning or enhancing knowledge acquisition.
Systematic reviews of the literature around AI in education are more recent entrants to the field. For example, a systematic review by Vaccaro et al. [58] made a meta-analysis of 106 experimental studies comparing the performance of humans alone, with AI alone and human–AI combinations. For the latter, there were performance losses in tasks that involved making decisions but significantly greater gains in tasks that involved creating content. More positive evaluations were made by Kovari [59], where a review of 27 studies found that predictive analytics and multimodal approaches supported by AI enhanced student engagement and motivation. Other systematic reviews have employed theoretical frameworks to classify case studies and identify trends and interpretations. Garzón et al. [60] examined 155 empirical studies, classified by the AI systems employed, on the nature of educational benefits and the types of challenges presented. A review by Belkina et al. [61] found similar overall results, but also categorised the 21 empirical studies in terms of Laurillard’s Conversational Framework [62], the SAMR model [34] and the Technological Pedagogical Content Knowledge framework [63].
This overview of recent AI in education studies shows a rapid growth of research in the area, a generally positive assessment of the educational benefits of AI, and a move from isolated trials towards pedagogical approaches integrating a number of affordances of the technology. This topic will be explored later in the context of the human–AgenticAI learning system to be introduced in the next section of this paper.

4.4. Ethical Issues and Guidelines

Ethical issues around AI relate to its specific affordances. This section will examine the separate but related issues pertaining to GenAI and to AgenticAI, and suggest ways in which they might be mitigated. This will be followed by an overview of statutory regulations and guidelines at international, national and sectoral levels.
For GenAI, the main ethical issues include bias, data protection, hallucinations and transparency. LLM systems reflect the social and cultural biases of the content on which they are trained, and attempts to control for these are complex and problematic [64]. Similarly, issues of data protection and intellectual property are difficult to control retrospectively after LLMs have been trained. The problem of hallucinations remains unresolved except by human-in-the-loop oversight [65]. The lack of transparency of AI agents is highlighted by Jedličková [66] alongside the related issues of accountability and trustworthiness. She recommends the establishment of robust frameworks and organisational procedures to address these concerns: again, these include human-in-the-loop architectures.
For AgenticAI, many of the ethical issues pertaining to GenAI are compounded by the independence, delegation and loss of human control entailed in autonomous action. Ghose [67] identifies four main risks. First is the loss of direct human control in delegating decisions to agents without checks on their ethical consequences. Second is deceptive and scheming behaviour: Ghose cites the example of an agent that deliberately deceived developers, disabled monitoring, and acted in self-preservation (there are clear parallels with the actions of the HAL computer in the movie 2001: A Space Odyssey). This lack of trustworthiness, mentioned above by Jedličková, introduces major ethical concerns that relate to the third and fourth risks identified by Ghose: the safety of systems in the face of misalignment with human values [67], and their security against adversarial attack by malicious external agents [68]. The solutions proposed are sandbox testing and human-review checkpoints for all autonomous actions. In a wide-ranging paper, Frenette [69] proposes similar solutions in a core framework for human oversight of AI. This framework includes human intervention models ranging from human-in-command, where humans retain full control over AI decision-making, through human-in-the-loop, where humans continuously validate AI decisions, to human-on-the-loop, where AI operates autonomously but human supervisors monitor and intervene when anomalies occur (as is the case with autonomous vehicles). In this way, hybrid decision-making, using AI as an assistant rather than as an autonomous decision-maker, could leverage the speed and accuracy of AI while retaining human intuition, empathy and contextual awareness.
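The distinction between these three intervention models can be made concrete in code. The following is a minimal Python sketch, not drawn from any of the cited frameworks: the enum values and the `execute_decision` routing function are hypothetical names introduced here for illustration, and a real system would wrap far richer decision and monitoring logic.

```python
from enum import Enum, auto

class OversightModel(Enum):
    """Human intervention models for AI decision-making (hypothetical labels)."""
    HUMAN_IN_COMMAND = auto()   # humans retain full control over AI decisions
    HUMAN_IN_THE_LOOP = auto()  # humans continuously validate AI decisions
    HUMAN_ON_THE_LOOP = auto()  # AI acts autonomously; humans monitor and intervene

def execute_decision(model, ai_decision, human_approve, anomaly_detected=False):
    """Route an AI decision through the chosen oversight model.

    `ai_decision` is a zero-argument callable producing the action's result;
    `human_approve` is a callable returning True/False for human sign-off.
    Returns the action result, or None when a human withholds approval or
    intervenes on a flagged anomaly.
    """
    if model is OversightModel.HUMAN_IN_COMMAND:
        # The human decides first; the AI merely proposes an action.
        return ai_decision() if human_approve() else None
    if model is OversightModel.HUMAN_IN_THE_LOOP:
        # The AI proposes; every decision is validated before taking effect.
        result = ai_decision()
        return result if human_approve() else None
    # HUMAN_ON_THE_LOOP: the AI acts autonomously; a human supervisor
    # is consulted only when monitoring flags an anomaly.
    result = ai_decision()
    if anomaly_detected and not human_approve():
        return None
    return result
```

Under human-on-the-loop oversight, an unapproved action still executes as long as no anomaly is flagged; under human-in-the-loop, it is always blocked. This is the trade-off between autonomy and control that the models encode.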
Statutory regulations and guidelines on the use of AI are emerging slowly at international and national levels. International documents include the 2024 European Union Artificial Intelligence Act [70], which prescribes pre-market conformity assessments and post-market monitoring for high-risk products; and the 2021 UNESCO Recommendation on the Ethics of Artificial Intelligence [71], which focuses on human rights and dignity, recommending the operationalisation of ethical AI principles in national policies. At national government level, the UK’s AI Regulatory Principles: Guidance for regulators [72] advises the UK Competition and Markets Authority on sector regulation of transparency, contestability, and competition concerns for agentic systems. In the USA, the National Institute of Standards and Technology provides guidance [73] on risk management, performance monitoring and human-in-the-loop controls. Overall, there remain differences in orientation between the mandatory nature of EU and UK regulation and the less directive US guidance. However, what is common is their slow responsiveness to the rapidity of technical development, particularly of AgenticAI systems, which weakens safeguards against malicious damage and uncontrolled outcomes.
At sectoral level, guidelines are more frequently updated, but lack the force of law and wider public consensus. In the context of this paper, the most relevant sets of guidelines are those of the Institute of Electrical and Electronics Engineers (IEEE), the major technology corporations and independent institutions. The IEEE is based in the USA but has global reach and authority. As part of its Global Initiative on Ethics of Autonomous and Intelligent Systems, the publication Ethically Aligned Design: A Vision for Prioritizing Wellbeing with Autonomous and Intelligent Systems [74] focuses on human wellbeing, oversight and transparency and directly addresses AgenticAI. The IEEE sets the P7999 Standard for Integrating Organizational Ethics Oversight [75] and also manages various working groups on specific topics in this area. The major technology corporations have their own in-house standards in the form of operational playbooks that combine principles with concrete engineering controls. These are prescriptive for internal product teams, enforcing operationalisation of the principles. Examples are the standards from Microsoft [76], Google [77] and OpenAI [78]. Finally, independent institutions provide their own guidelines, such as the Beijing Academy of Artificial Intelligence [79] in China and the Alan Turing Institute [80] in the UK.

5. Architecture of a Human–AgenticAI Co-Created Learning System

The proposed human–AgenticAI co-created learning system—abbreviated in this paper as HCLS—is a multi-agent learning experience platform designed with safeguards for self-correction and ethical compliance. Its primary purpose is to support and empower learners through dialogic engagement and formative assessment in the practising of skills and competences through co-creation with AgenticAI in a variety of activities and environments, assessed by social learning analytics. The system will be discussed at three levels: the principal agents; an overview of system processes; and functions of the AI agents.

5.1. Principal Agents of the HCLS

This first level introduces the five principal agents of the HCLS. The humans involved are the Learner and the Human Tutor. The AI agents involved are the AI Supervisor, the Learner’s AI Assistant and the Human Tutor’s AI Assistant. The relations between these principal agents are illustrated in Figure 1, with the double-headed arrows indicating two-way interactions.
Each human is supported by an AI assistant. The Human Tutor’s AI Assistant provides advice and feedback on course management and the Learner’s AI Assistant supports and interacts with the learner in ways to be detailed later.

5.2. Overview of HCLS Processes

This second level introduces two more AI agents: the Learning Activity Scheduler and the Learning Activity Outcomes Assessor. Figure 2 illustrates how these are located in relation to the five principal agents and to the remaining components and processes of the HCLS.
The main processes of the HCLS are overviewed here and further details will follow later. A typical learning and assessment cycle is as follows.
The AI Supervisor consults the Course Syllabus and relevant libraries to select a learning activity for the Learner. The difficulty level and suitability are determined in consultation with the Learner’s AI Assistant and the details are passed to the Learning Activity Scheduler. This agent specifies a learning activity which is forwarded to the Learner’s AI Assistant and the Human Tutor’s AI Assistant.
The Learner’s AI Assistant cues the activity with the Learner at an opportune time, supports the Learner in completing the learning activity, and forwards the outcomes to the Learning Activity Outcomes Assessor.
The Learning Activity Outcomes Assessor evaluates the outcomes against the specification and reports to the Human Tutor’s AI Assistant.
The Human Tutor’s AI Assistant reports to the Human Tutor and forwards evidence of competence levels to the Learner’s Record of Achievement Portfolio. This is then made available to external systems for academic warranting and awards.
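The cycle above can be sketched as message passing between stub agents. The following Python sketch is purely illustrative: every class and method name is a hypothetical stand-in for the roles described, and each stub returns fixed values where a real system would invoke GenAI-driven behaviour.

```python
# Minimal sketch of one pass through the HCLS learning-and-assessment
# cycle. All names are hypothetical illustrations, not an implementation.

class AISupervisor:
    def select_activity(self, syllabus, learner_assistant):
        # Consult the syllabus and negotiate difficulty with the
        # Learner's AI Assistant (stubbed as a fixed choice here).
        activity = {"id": "A1", "topic": syllabus[0]}
        activity["level"] = learner_assistant.suggest_level()
        return activity

class LearningActivityScheduler:
    def specify(self, activity):
        # Turn the selection into a concrete, scheduled specification.
        return {**activity, "scheduled": True}

class LearnerAIAssistant:
    def suggest_level(self):
        return "intermediate"

    def complete_with_learner(self, spec):
        # Co-creation with the Learner happens here; stub outcomes returned.
        return {"activity": spec["id"], "outcome": "co-created artefact"}

class OutcomesAssessor:
    def assess(self, outcomes, spec):
        # Evaluate the logged outcomes against the activity specification.
        return {"activity": spec["id"], "competence_level": "achieved"}

# One pass through the cycle described in the steps above.
syllabus = ["agent architectures"]
supervisor, scheduler = AISupervisor(), LearningActivityScheduler()
assistant, assessor = LearnerAIAssistant(), OutcomesAssessor()

spec = scheduler.specify(supervisor.select_activity(syllabus, assistant))
outcomes = assistant.complete_with_learner(spec)
report = assessor.assess(outcomes, spec)   # forwarded to the tutor's assistant
portfolio = [report]                       # Record of Achievement Portfolio
```

The point of the sketch is the data flow, not the stubs: each agent consumes the previous agent’s output, and only the assessed report reaches the portfolio.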

5.3. Functions of AI Agents in the HCLS

5.3.1. Functions of the AI Supervisor

The AI Supervisor liaises closely with the Learner, the Human Tutor and the Learner’s AI Assistant. It consults the Course Syllabus, the Ethics Library, the Learning Activities Library, the Key Competences Library and the Learning Environments Library to select suitable activities and environments. In addition to being ethically compliant and appropriate to the Learner’s needs and abilities, these activities must be of diverse types, address specific competences, and be specific to the context of individual or team working and of real or simulated scenarios. The Ethics Library holds a comprehensive list of ethically compliant activities drawn from authoritative sources, such as IEEE P7999 [75] and the Alan Turing Institute [80]. The AI Supervisor comprises two separate sub-agents that operate independently to make approvals; in the case of conflict, a human-on-the-loop safeguard alerts the Human Tutor to seek a resolution. The Learning Activities Library contains activity specifications that relate to the Course Syllabus, including problem-based learning, projects, research, teamwork, presentations/performances/demonstrations, and viva voce examinations. The Key Competences Library lists all the competences specified in the Course Syllabus. Similarly, the Learning Environments Library holds a comprehensive list of all the learning environments specified in the Course Syllabus that the Learner will encounter: flipped classroom/blended learning; individual online study; collaborative online study; real or simulated workplace or gaming environments; and activities in laboratory/workshop/studio/performance spaces. The principal functions of the AI Supervisor are illustrated in Figure 3.
Generated activities are approved by the Human Tutor, after which the Learning Activity Scheduler and the Learner’s AI Assistant are notified. Finally, the AI Supervisor provides a quality feedback loop by evaluating and reporting to the Course Syllabus on the effectiveness and suitability of items in the Learning Activities Library.
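The dual sub-agent approval with its human-on-the-loop safeguard can be sketched as follows. This is a minimal Python illustration under stated assumptions: the sub-agents are modelled as boolean-returning callables, the stub checks are invented for the example, and none of the names come from the cited sources.

```python
def approve_activity(activity, subagent_a, subagent_b, alert_human_tutor):
    """Dual independent approvals with a human-on-the-loop safeguard.

    Each sub-agent is a callable returning True (approve) or False
    (reject); on disagreement the Human Tutor is alerted and their
    resolution is returned.
    """
    a, b = subagent_a(activity), subagent_b(activity)
    if a == b:
        return a                        # independent agreement: auto-resolve
    return alert_human_tutor(activity)  # conflict: escalate to the tutor

# Illustrative sub-agents: one checks the Ethics Library (stubbed as a
# set of pre-approved identifiers), the other checks syllabus fit.
ethics_compliant = lambda act: act["id"] in {"A1", "A2"}
fits_syllabus = lambda act: act.get("competence") is not None
```

The human is consulted only on disagreement, which is what distinguishes this human-on-the-loop safeguard from a checkpoint that validates every decision.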

5.3.2. Functions of the Learner’s AI Assistant

The Learner’s AI Assistant performs two main functions: secretarial support and co-creation partnership. The secretarial support functions include liaising with the Learning Activity Scheduler and the AI Supervisor to notify the Learner of upcoming learning activities; using learning analytics data collated in the Learner Activity Monitor to track and report the Learner’s level of engagement in various activities and environments (for example, using the rating system shown in Table 2); and charting progress towards activity completion. The assistant also logs achievements evaluated by the Learning Activity Outcomes Assessor for the Learner’s Record of Achievement Portfolio. The co-creation partnership functions include researching, collating and summarising information; monitoring course communications from the learning management system (LMS) and peer learners, notifying the Learner on a need-to-know basis; engaging in dialogic/Socratic/formative assessment discussion with the Learner in relation to understanding course material and associated ideas; and assisting the Learner in addressing and structuring responses to learning activities. In addition, the assistant logs all co-creation interactions, including the contributions of each partner, in the Co-created Learning Activity Outcomes datastore for examination by the Learning Activity Outcomes Assessor. These functions are illustrated in Figure 4.
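A minimal sketch of the co-creation log the assistant might keep in the Co-created Learning Activity Outcomes datastore is given below. The schema, field names and the `learner_share` measure are hypothetical illustrations; the point is that tagging each entry by contributor makes the Learner’s own contribution separable for later assessment.

```python
from dataclasses import dataclass, field

@dataclass
class CoCreationLog:
    """Hypothetical per-activity log of learner and AI contributions."""
    activity_id: str
    entries: list = field(default_factory=list)

    def record(self, contributor, kind, content):
        # contributor: "learner" or "ai_assistant";
        # kind: e.g. "draft", "question", "summary", "revision".
        self.entries.append({"contributor": contributor,
                             "kind": kind, "content": content})

    def learner_share(self):
        """Fraction of logged contributions made by the Learner."""
        if not self.entries:
            return 0.0
        learner = sum(e["contributor"] == "learner" for e in self.entries)
        return learner / len(self.entries)

# Illustrative use for one activity.
log = CoCreationLog("A1")
log.record("ai_assistant", "summary", "Collated three sources on the topic")
log.record("learner", "draft", "First structured response to the activity")
log.record("learner", "revision", "Reworked argument after Socratic dialogue")
```

A simple contribution ratio like this is only one conceivable analytic; an assessor agent would more plausibly weigh the kind and substance of each entry, not just its count.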

5.3.3. Functions of the Human Tutor’s AI Assistant

The Human Tutor’s AI Assistant performs two main functions: secretarial support and course management. It advises the Human Tutor on the ethical compliance, suitability and scheduling of learning activities. It also collates performance data received from the Learning Activity Outcomes Assessor on learners in the tutor’s supervision group, and has oversight of the assessed outcomes data sent to the Record of Achievement Portfolios of relevant learners. A summary illustration of these functions is presented in Figure 5.

5.3.4. Functions of the Learning Activity Scheduler

The functions of the Learning Activity Scheduler are firstly, to liaise with the AI Supervisor and Learner’s AI Assistant in cueing learning activities; and secondly, to notify the Human Tutor and Human Tutor’s AI Assistant. These functions are illustrated in Figure 6.

5.3.5. Functions of the Learning Activity Outcomes Assessor

The main function of this agent is to make a summative assessment of items logged as Co-created Learning Activity Outcomes, alongside the report of the Learner’s AI Assistant. The outcomes are assessed against the activity definition criteria in the Learning Activities Library. Two separate sub-agents within the Learning Activity Outcomes Assessor operate independently to make recommendations, which are passed to a human-in-the-loop sub-agent that asks the Human Tutor for final approval. The results of this assessment are then reported to the Human Tutor’s AI Assistant and the Learner’s AI Assistant. An illustration is presented in Figure 7.
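This pipeline can be sketched in the same illustrative style as the Supervisor’s approval pattern. All names below are hypothetical; the key difference captured is that the Human Tutor is consulted on every assessment (human-in-the-loop), not only on disagreement (human-on-the-loop).

```python
def assess_outcomes(outcomes, criteria, subassessor_a, subassessor_b,
                    tutor_approves):
    """Two independent recommendations, then a human-in-the-loop checkpoint.

    Returns the ratified recommendation, or None if the Human Tutor
    withholds approval. Sub-assessors are callables mapping (outcomes,
    criteria) to a recommendation value.
    """
    rec_a = subassessor_a(outcomes, criteria)
    rec_b = subassessor_b(outcomes, criteria)
    # Agreement yields a single recommendation; disagreement is flagged
    # for review. Either way, the tutor sees it: unlike the Supervisor's
    # human-on-the-loop safeguard, approval is sought on every assessment.
    recommendation = rec_a if rec_a == rec_b else ("review", rec_a, rec_b)
    return recommendation if tutor_approves(recommendation) else None
```

A worked call might pass two grading lambdas and a tutor callback; nothing is written to the portfolio unless the tutor approves.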

5.4. Feedback Paths Within the HCLS

The HCLS employs five self-correcting feedback paths. In the first of these, the AI Supervisor liaises with the Learner, the Human Tutor and the Learner’s AI Assistant in consulting the Course Syllabus and attached libraries to select, adapt and schedule ethically appropriate learning activities suited to the Learner’s needs and the relevant learning environment. The second set of feedback paths involves the Learner’s AI Assistant engaging with data from the Learning Activity Monitor and negotiating with the AI Supervisor and the Learning Activity Outcomes Assessor to determine the outcomes to be forwarded to the Learner’s Record of Achievement Portfolio. The Human Tutor’s AI Assistant is involved in the third set of feedback paths, inspecting performance data from the Learning Activity Outcomes Assessor and assuring congruity in the reports sent to the Learner’s Record of Achievement Portfolio. In the fourth set, the Learning Activity Scheduler interacts with the AI Supervisor, the Learner’s AI Assistant, the Human Tutor and the Human Tutor’s AI Assistant to agree and schedule learning activities. Finally, the fifth set involves the Learning Activity Outcomes Assessor assessing outcomes against the Learning Activities Library definitions, for ratification by the Learner’s AI Assistant and the Human Tutor’s AI Assistant. In these ways, interoperating designed-in features facilitate self-correction and quality management of the HCLS in relation to its external environment.
In addition to the internal self-correction afforded by multiple-agent AgenticAI, the HCLS incorporates a form of automated quality management. In consultation with the Human Tutor, the Human Tutor’s AI Assistant provides analytics feedback to the Course Syllabus on the usefulness and suitability of course and libraries content. In this curriculum development loop, the HCLS enables adaptive and continuing course tuning and enhancement.

6. Discussion and Conclusions

6.1. Evaluating the HCLS Against Educational and Ethical Criteria

This section attempts to evaluate the HCLS against educational and ethical criteria introduced earlier in the paper. First are the five principal affordances of AI discussed in Section 3: dialogic engagement, human–AI co-creation, integration of formative assessment, flexible employment in diverse environments, and active engagement with the socially shared regulation of learning [39]. The second set of criteria is derived from Puentedura’s Substitution–Augmentation–Modification–Redefinition model for learning with technology [34] discussed in Section 3. The third set of criteria concerns compliance with the ethical issues and guidelines discussed in Section 4.
The five principal affordances of AI have all been addressed in the HCLS design. Dialogic engagement, ongoing formative assessment and human–AI co-creation between the Learner and the Learner’s AI Assistant are integrated at the core of the system. There is also full compatibility with social, collaborative and work-situated learning, and examples can be found in Table 4. It has been argued earlier in the paper that this combination of experiences fosters whole-person education. As detailed in the previous section, summative assessment by the Learning Activity Outcomes Assessor is ratified by other agents, obviating the need for the formal assessment periods that punctuate learning in conventional settings. Turning to the second set of criteria, this departure from traditional institutional practice across all five principal affordances of AI is characteristic of the Redefinition level of the SAMR model, at which technology enables radically different methods of teaching and assessment. The overview of recent AI-in-education studies made earlier, which examined reports of individual studies as well as systematic reviews, found many examples of innovative practice, but none appeared to be transformative across all five affordance areas. The third set of criteria, concerning compliance with ethical issues and guidelines, has been addressed in the HCLS design through direct access by the AI Supervisor to the Ethics Library and through the central oversight and regulatory role of this agent in the safe technical operation detailed in the next section.

6.2. Evaluating the HCLS Against Criteria for Safe Technical Operation

The earlier discussion of multi-agent systems has implications that inform criteria for the safe technical operation of the HCLS.
A principal criterion is that the functionality and demands made of the AI agents in the HCLS are realistically achievable, so that multiple agents in the network will interact productively without conflicts, and any ‘hallucinations’ from one agent will be detected and rectified by others [48]. The AI Supervisor and the human-on-the-loop safeguard go some way towards addressing this problem. However, at the time of writing, ways to ensure fully reliable interoperability and cross-agent verification in multi-agent systems are still under development internationally. It must be assumed, given the rapid pace of development in the power of new AI processor chips and in the functionality of multi-agent LXP architectures, that this problem can eventually be solved.
A related criterion is that the proactive decision-making features and potential ‘scheming’ behaviour [67] of AgenticAI would be moderated by cross-agent verification systems which, as discussed, are currently under development. Finally, as for all LLM-based systems, extensive training and iterative refinement [68] would be necessary before the HCLS can be considered safe and reliable.

6.3. Potential Adoption of the HCLS in Higher Education

As mentioned in the Introduction, the orientation of this paper is pedagogical rather than technical, so the HCLS is proposed as an ideational conception rather than a blueprint for practical implementation. Nevertheless, it has been developed with potential application in mind, as a means to help prepare new entrants to a rapidly changing employment landscape in which education of the whole person is key to sustainability. In this regard, some consideration of how the HCLS might be adopted in higher education is explored here. Firstly, there is discussion of ‘human experience’ factors in the adoption of this new way of working. Secondly, there is anticipation of which subject disciplines and course offerings might be most closely aligned with the HCLS and likely to produce early adopters. Finally, there is discussion of the opportunities and problems around the induction of students and staff (faculty) into a substantially different way of working.
The first consideration is whether humans might find the HCLS disorienting or difficult to use. Recent developments in the use of AI to fine-tune User Interface (UI) and User Experience (UX) design [81,82] would help make the system intuitive and easy to navigate. Both the Learner and the Human Tutor would benefit from reduced administrative duties: the secretarial support functions of the Learner’s AI Assistant and the secretarial and course management functions of the Human Tutor’s AI Assistant would free up time and mental energy for educational matters. The widely used Technology Acceptance Model (TAM) devised by Davis [83] suggests that users are more likely to accept and adopt a new technology when they perceive it as useful and easy to use.
The second consideration is which subject disciplines the HCLS might be particularly compatible with. These are likely to include STEM subjects, business management and medicine. These subjects are also offered by universities specialising in CBE (for example, [84,85,86]) and are typically delivered part-time to home-based students. Here, the HCLS employment of situated learning could fit well with work-oriented courses. For similar reasons, the HCLS is likely to be compatible with MBA and other vocationally focused postgraduate courses, in which specialised individual tuition is more common.
The final consideration is the induction of users to the HCLS; here, students seem likely to prove far easier to induct than staff (faculty). Students’ rapid take-up of AI tools was mentioned earlier and is also evident in the overview of recent empirical studies, as are the motivational effects of formative assessment and of the real-time feedback of dialogic engagement. Students might also find variety and social engagement in vocationally oriented team working in diverse learning environments. The induction of staff could prove more difficult. This issue was noted in Section 3 in the concerns of Vashishth et al. [26] and Mao et al. [2], and points to the need for effective staff (faculty) training, perhaps along the lines suggested by Ilieva et al. [30]. Evidence from the COVID-19 years, when universities rapidly switched from face-to-face to online delivery, indicates that the main problems encountered were staff competence with digital technology and the need to rapidly change methods of assessment [87,88]. However, a study of 30 institutions in Turkey noted fewer issues in universities with developed distance education capacities [89]. By extension, this suggests that staff in universities offering CBE courses, who typically employ online delivery, might find the transition to working with AI easier. In turn, having a core of staff who are experienced and confident in these new ways of working could prove useful in disseminating practice more widely within and across institutions.
In conclusion, it seems that the technical issues of controlling for reliability and predictability within the HCLS may be more easily resolved than the human issues requiring changes in skillsets and ways of working. However, the arrival of AI tools in higher education cannot be ignored. As was argued in the Introduction to this paper, new technologies present a choice: institutions can use them either to strengthen oversight and external controls on the learner or, through personalisation, to empower the learner; the HCLS has been conceived with the latter intention. The goal of the HCLS is, in the terminology of the SAMR model, to go beyond Substitution, Augmentation and Modification to Redefinition, where AI can transform and situate teaching, learning and assessment to foster whole-person development for sustainable learners in a rapidly changing future.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The author declares no conflicts of interest.

References

  1. Niedbał, R.; Sokołowski, A.; Wrzalik, A. Students’ Use of the Artificial Intelligence Language Model in their Learning Process. Procedia Comput. Sci. 2023, 225, 3059–3066. [Google Scholar] [CrossRef]
  2. Mao, J.; Chen, B.; Liu, J. Generative Artificial Intelligence in Education and Its Implications for Assessment. TechTrends 2024, 68, 58–66. [Google Scholar] [CrossRef]
  3. Loorbach, D.A.; Wittmayer, J. Transforming universities. Sustain. Sci. 2024, 19, 19–33. [Google Scholar] [CrossRef]
  4. Stein, R.M. The Half-Life of Facts: Why Everything We Know Has an Expiration Date. Quant. Financ. 2014, 14, 1701–1703. [Google Scholar] [CrossRef]
  5. World Economic Forum. The Future of Jobs; World Economic Forum: Cologny, Switzerland, 2016; Available online: https://www3.weforum.org/docs/WEF_Future_of_Jobs.pdf (accessed on 20 October 2025).
  6. PricewaterhouseCoopers. The Fearless Future: 2025 Global AI Jobs Barometer; PwC: London, UK, 2025; Available online: https://www.pwc.com/gx/en/issues/artificial-intelligence/ai-jobs-barometer.html (accessed on 20 October 2025).
  7. U.S. Department of Education. Direct Assessment (Competency-Based) Programs; U.S. Department of Education: Washington, DC, USA, 2025. Available online: https://www.ed.gov/laws-and-policy/higher-education-laws-and-policy/higher-education-policy/direct-assessment-competency-based-programs (accessed on 20 October 2025).
  8. Sturgis, C. Reaching the Tipping Point: Insights on Advancing Competency Education in New England; iNACOL: Vienna, VA, USA, 2016; Available online: https://files.eric.ed.gov/fulltext/ED590523.pdf (accessed on 20 October 2025).
  9. Kaliisa, R.; Rienties, B.; Mørch, A.; Kluge, A. Social Learning Analytics in Computer-Supported Collaborative Learning Environments: A Systematic Review of Empirical Studies. Comput. Educ. Open 2022, 3, 100073. [Google Scholar] [CrossRef]
  10. Datnow, A.; Park, V.; Peurach, D.J.; Spillane, J.P. Transforming Education for Holistic Student Development; Brookings Institution: Washington, DC, USA, 2022; Available online: https://www.brookings.edu/articles/transforming-education-for-holistic-student-development/ (accessed on 20 October 2025).
  11. NFER. The Skills Imperative 2035; NFER: London, UK, 2022; Available online: https://www.nfer.ac.uk/the-skills-imperative-2035 (accessed on 20 October 2025).
  12. Saito, N.; Akiyama, T. On the Education of the Whole Person. Educ. Philos. Theory 2022, 56, 153–161. [Google Scholar] [CrossRef]
  13. Zhao, K. Educating for Wholeness, but Beyond Competences: Challenges to Key-Competences-Based Education in China. ECNU Rev. Educ. 2020, 3, 470–487. [Google Scholar] [CrossRef]
  14. Orynbassarova, D.; Porta, S. Implementing the Socratic Method with AI: Opportunities and Challenges of Integrating ChatGPT into Teaching Pedagogy. 2024. Available online: https://www.semanticscholar.org/paper/ab1cf9d86d10bcdec408489c0ae534aa944a65f4 (accessed on 20 October 2025).
  15. Tapper, T.; Palfreyman, D. The Tutorial System: The Jewel in the Crown. In Oxford, the Collegiate University; Springer: Dordrecht, The Netherlands, 2011; pp. 1–20. [Google Scholar] [CrossRef]
  16. Balan, A. Reviewing the effectiveness of the Oxford tutorial system in teaching an undergraduate qualifying law degree: A discussion of preliminary findings from a pilot study. Law Teach. 2017, 52, 171–189. [Google Scholar] [CrossRef]
  17. Lissack, M.; Meagher, B. Responsible Use of Large Language Models: An Analogy with the Oxford Tutorial System. She Ji 2024, 10, 389–413. [Google Scholar] [CrossRef]
  18. Cai, L.; Msafiri, M.M.; Kangwa, D. Exploring the impact of integrating AI tools in higher education using the Zone of Proximal Development. Educ. Inf. Technol. 2025, 30, 7191–7264. [Google Scholar] [CrossRef]
  19. Black, P.; Wiliam, D. Classroom assessment and pedagogy. Assess. Educ. Princ. Policy Pract. 2018, 25, 551–575. [Google Scholar] [CrossRef]
  20. Parmigiani, D.; Nicchia, E.; Murgia, E.; Ingersoll, M. Formative assessment in higher education: An exploratory study within programs for professionals in education. Front. Educ. 2024, 9, 1366215. [Google Scholar] [CrossRef]
  21. Muafa, A.; Lestariningsih, W. Formative Assessment Strategies to Increase Student Participation and Motivation. Proc. Int. Conf. Relig. Sci. Educ. 2025, 4, 195–199. [Google Scholar]
  22. Sambell, K.; McDowell, L.; Montgomery, C. Assessment for Learning in Higher Education; Routledge: London, UK, 2012. [Google Scholar] [CrossRef]
  23. Schellekens, L.H.; Bok, H.G.; de Jong, L.H.; van der Schaaf, M.F.; Kremer, W.D.; van der Vleuten, C.P. A scoping review on the notions of Assessment as Learning (AaL), Assessment for Learning (AfL), and Assessment of Learning (AoL). Stud. Educ. Eval. 2021, 71, 101094. [Google Scholar] [CrossRef]
  24. Atjonen, P.; Kontkanen, S.; Ruotsalainen, P.; Pöntinen, S. Pre-Service Teachers as Learners of Formative Assessment in Teaching Practice. Eur. J. Teach. Educ. 2024, 47, 267–284. [Google Scholar] [CrossRef]
  25. Fleckney, P.; Thompson, J.; Vaz-Serra, P. Designing Effective Peer Assessment Processes in Higher Education: A Systematic Review. High. Educ. Res. Dev. 2025, 44, 386–401. [Google Scholar] [CrossRef]
  26. Vashishth, T.K.; Sharma, V.; Sharma, K.K.; Kumar, B.; Panwar, R.; Chaudhary, S. AI-Driven Learning Analytics for Personalized Feedback and Assessment in Higher Education. In Using Traditional Design Methods to Enhance AI-Driven Decision Making; IGI Global: Hershey, PA, USA, 2024. [Google Scholar] [CrossRef]
  27. Winarno, S.; Al Azies, H. The Effectiveness of Continuous Formative Assessment in Hybrid Learning Models: An Empirical Analysis in Higher Education Institutions. Int. J. Pedagog. Teach. Educ. 2024, 8, 1–11. [Google Scholar] [CrossRef]
  28. He, S.; Epp, C.; Chen, F.; Cui, Y. Examining change in students’ self-regulated learning patterns after a formative assessment using process mining techniques. Comput. Hum. Behav. 2024, 152, 108061. [Google Scholar] [CrossRef]
  29. Xia, Q.; Weng, X.; Ouyang, F.; Jin, T.; Chiu, T. A scoping review on how generative artificial intelligence transforms assessment in higher education. Int. J. Educ. Technol. High. Educ. 2024, 21, 40. [Google Scholar] [CrossRef]
  30. Ilieva, G.; Yankova, T.; Ruseva, M.; Kabaivanov, S. A Framework for Generative AI-Driven Assessment in Higher Education. Information 2025, 16, 472. [Google Scholar] [CrossRef]
  31. Sideeg, A. Bloom’s Taxonomy, Backward Design, and Vygotsky’s Zone of Proximal Development in Crafting Learning Outcomes. Int. J. Linguist. 2016, 8, 158. [Google Scholar] [CrossRef]
  32. Wiggins, G.; McTighe, J. Understanding by Design; Merrill-Prentice-Hall: Hoboken, NJ, USA, 2005. [Google Scholar]
  33. Anderson, L.W.; Krathwohl, D.R. (Eds.) A Taxonomy for Learning, Teaching, and Assessing: A Revision of Bloom’s Taxonomy of Educational Objectives; Allyn and Bacon: Boston, MA, USA, 2001. [Google Scholar]
  34. Puentedura, R. SAMR: Moving from Enhancement to Transformation. 2013. Available online: https://www.hippasus.com/rrpweblog/archives/2013/04/16/SAMRGettingToTransformation.pdf (accessed on 16 October 2025).
  35. Sehgal, G. AI Agentic AI in Education: Shaping the Future of Learning; Medium Blog: Gurugram, Haryana, India, 2025; Available online: https://medium.com/accredian/ai-agentic-ai-in-education-shaping-the-future-of-learning-1e46ce9be0c1 (accessed on 20 October 2025).
  36. Hughes, L.; Dwivedi, Y.K.; Malik, T.; Shawosh, M.; Albashrawi, M.A.; Jeon, I.; Dutot, V.; Appanderanda, M.; Crick, T.; De’, R.; et al. AI Agents and Agentic Systems: A Multi-Expert Analysis. J. Comput. Inf. Syst. 2025, 65, 489–517. [Google Scholar] [CrossRef]
  37. Molenaar, I. Towards hybrid human-AI learning technologies. Eur. J. Educ. 2022, 57, 632–645. [Google Scholar] [CrossRef]
  38. Cukurova, M. The interplay of learning, analytics and artificial intelligence in education: A vision for hybrid intelligence. Br. J. Educ. Technol. 2025, 56, 469–488. [Google Scholar] [CrossRef]
  39. Järvelä, S.; Zhao, G.; Nguyen, A.; Chen, H. Hybrid Intelligence: Human-AI Co-evolution and Learning. Br. J. Educ. Technol. 2025, 56, 455–468. [Google Scholar] [CrossRef]
  40. Perkins, M.; Furze, L.; Roe, J.; MacVaugh, J. The Artificial Intelligence Assessment Scale (AIAS): A Framework for Ethical Integration of Generative AI in Educational Assessment. J. Univ. Teach. Learn. Pract. 2024, 21, 6. [Google Scholar] [CrossRef]
  41. Dahal, P.; Nugroho, S.; Schmidt, C.; Sänger, V. Practical Use of AI-Based Learning Recommendations in Higher Education. In Methodologies and Intelligent Systems for Technology Enhanced Learning, 14th International Conference. MIS4TEL 2024, Salamanca, Spain, 26–28 June 2024; Herodotou, C., Papavlasopoulou, S., Santos, C., Milrad, M., Otero, N., Vittorini, P., Gennari, R., Di Mascio, T., Temperini, M., De la Prieta, F., Eds.; Lecture Notes in Networks and Systems; Springer: Cham, Switzerland, 2024; Volume 1171. [Google Scholar] [CrossRef]
  42. Clark, D. Artificial Intelligence for Learning: Using AI and Generative AI to Support Learner Development; Kogan Page Publishers: London, UK, 2024. [Google Scholar]
  43. Valdiviezo, A.D.; Crawford, M. Fostering Soft-Skills Development through Learning Experience Platforms (LXPs). In Handbook of Teaching with Technology in Management, Leadership, and Business; Edward Elgar: Cheltenham, UK, 2020; pp. 312–321. [Google Scholar]
  44. Radu, C.; Ciocoiu, C.N.; Veith, C.; Dobrea, R.C. Artificial Intelligence and Competency-Based Education: A Bibliometric Analysis. Amfiteatru Econ. 2024, 26, 220–240. [Google Scholar] [CrossRef]
  45. Asad, M.M.; Qureshi, A. Impact of technology-based collaborative learning on students’ competency-based education: Insights from the higher education institution of Pakistan. High. Educ. Ski. Work-Based Learn. 2025, 15, 562–575. [Google Scholar] [CrossRef]
  46. Sajja, R.; Sermet, Y.; Cikmaz, M.; Cwiertny, D.; Demir, I. Artificial Intelligence-Enabled Intelligent Assistant for Personalized and Adaptive Learning in Higher Education. Information 2024, 15, 596. [Google Scholar] [CrossRef]
  47. Shamsudin, N.; Hoon, T. Exploring the Synergy of Learning Experience Platforms (LXP) with Artificial Intelligence for Enhanced Educational Outcomes. In Proceedings of the International Conference on Innovation & Entrepreneurship in Computing, Engineering & Science Education; Advances in Computer Science Research; Atlantis Press: Paris, France, 2024; Volume 117, pp. 30–39. [Google Scholar] [CrossRef]
  48. Jesson, A.; Beltran-Velez, N.; Chu, Q.; Karlekar, S.; Kossen, J.; Gal, Y.; Cunningham, J.P.; Blei, D. Estimating the hallucination rate of generative AI. Adv. Neural Inf. Process. Syst. 2024, 37, 31154–31201. [Google Scholar]
  49. Kulesza, U.; Garcia, A.; Lucena, C.; Alencar, P. A generative approach for multi-agent system development. In International Workshop on Software Engineering for Large-Scale Multi-Agent Systems; Springer: Berlin/Heidelberg, Germany, 2004. [Google Scholar]
  50. Yang, P.F.T.; Liang, M.; Wang, L.; Gao, Y. OC-HMAS: Dynamic Self-Organization and Self-Correction in Heterogeneous Multi-Agent Systems Using Multi-Modal Large Models. IEEE Internet Things J. 2025, 12, 13538–13555. [Google Scholar]
  51. Cheng, Z.; Ma, Y.; Lang, J.; Zhang, K.; Zhong, T.; Wang, Y.; Zhou, F. Generative Thinking, Corrective Action: User-Friendly Composed Image Retrieval via Automatic Multi-Agent Collaboration. In Proceedings of the 31st ACM SIGKDD Conference on Knowledge Discovery and Data Mining V. 2, Toronto, ON, Canada, 3–7 August 2025. [Google Scholar]
  52. Ni, B.; Buehler, M.J. MechAgents: Large language model multi-agent collaborations can solve mechanics problems, generate new data, and integrate knowledge. Extrem. Mech. Lett. 2024, 67, 102131. [Google Scholar] [CrossRef]
  53. Kar, S.; Kar, A.K.; Gupta, M.P. Understanding the S-Curve of Ambidextrous Behavior in Learning Emerging Digital Technologies. IEEE Eng. Manag. Rev. 2021, 49, 76–98. [Google Scholar] [CrossRef]
  54. Qureshi, B. ChatGPT in Computer Science Curriculum Assessment: An analysis of Its Successes and Shortcomings. In Proceedings of the 9th International Conference on e-Society e-Learning and e-Technologies, Portsmouth, UK, 9–11 June 2023; Volume 2023, pp. 7–13. [Google Scholar] [CrossRef]
  55. Bernabei, M.; Colabianchi, S.; Falegnami, A.; Costantino, F. Students’ use of large language models in engineering education: A case study on technology acceptance, perceptions, efficacy, and detection chances. Comput. Educ. Artif. Intell. 2023, 5, 100172. [Google Scholar] [CrossRef]
  56. French, F.; Levi, D.; Maczo, C.; Simonaityte, A.; Triantafyllidis, S.; Varda, G. Creative Use of OpenAI in Education: Case Studies from Game Development. Multimodal Technol. Interact. 2023, 7, 81. [Google Scholar] [CrossRef]
  57. de Araujo, A.; Papadopoulos, P.M.; McKenney, S.; de Jong, T. Investigating the Impact of a Collaborative Conversational Agent on Dialogue Productivity and Knowledge Acquisition. Int. J. Artif. Intell. Educ. 2025, 35, 1–27. [Google Scholar] [CrossRef]
  58. Vaccaro, M.; Almaatouq, A.; Malone, T. When combinations of humans and AI are useful: A systematic review and meta-analysis. Nat. Hum. Behav. 2025, 8, 2293–2303. [Google Scholar] [CrossRef] [PubMed]
  59. Kovari, A. A systematic review of AI-powered collaborative learning in higher education: Trends and outcomes from the last decade. Soc. Sci. Humanit. Open 2025, 11, 101335. [Google Scholar] [CrossRef]
  60. Garzón, J.; Patiño, E.; Marulanda, C. Systematic Review of Artificial Intelligence in Education: Trends, Benefits, and Challenges. Multimodal Technol. Interact. 2025, 9, 84. [Google Scholar] [CrossRef]
  61. Belkina, M.; Daniel, S.; Nikolic, S.; Haque, R.; Lyden, S.; Neal, P.; Grundy, S.; Hassan, G.M. Implementing generative AI (GenAI) in higher education: A systematic review of case studies. Comput. Educ. Artif. Intell. 2025, 8, 100407. [Google Scholar] [CrossRef]
  62. Laurillard, D. Teaching as a Design Science: Building Pedagogical Patterns for Learning and Technology; Routledge: London, UK, 2012. [Google Scholar] [CrossRef]
  63. Mishra, P.; Warr, M.; Islam, R. TPACK in the age of ChatGPT and Generative AI. J. Digit. Learn. Teach. Educ. 2023, 39, 235–251. [Google Scholar] [CrossRef]
  64. Qu, Y.; Huang, S.; Li, L.; Nie, P.; Yao, Y. Beyond Intentions: A Critical Survey of Misalignment in LLMs. Comput. Mater. Contin. 2025, 85, 249–300. [Google Scholar] [CrossRef]
  65. Kim, S.; Park, C.; Jeon, G.; Kim, S.; Kim, J.H. Automated Audit and Self-Correction Algorithm for Seg-Hallucination Using MeshCNN-Based On-Demand Generative AI. Bioengineering 2025, 12, 81. [Google Scholar] [CrossRef] [PubMed]
  66. Jedličková, A. Ethical Approaches in Designing Autonomous and Intelligent Systems. AI Soc. 2024, 39, 2201–2214. [Google Scholar]
  67. Ghose, S. The Next “Next Big Thing”: Agentic AI’s Opportunities and Risks; UC Berkeley Sutardja Center: Berkeley, CA, USA, 2025; Available online: https://scet.berkeley.edu/the-next-next-big-thing-agentic-ais-opportunities-and-risks/ (accessed on 20 October 2025).
  68. Ouyang, L.; Wu, J.; Jiang, X.; Almeida, D.; Wainwright, C.; Mishkin, P.; Zhang, C.; Agarwal, S.; Slama, K.; Ray, A.; et al. Training Language Models to Follow Instructions with Human Feedback. Adv. Neural Inf. Process. Syst. 2022, 35, 27730–27744. [Google Scholar] [CrossRef]
  69. Frenette, J. Ensuring human oversight in high-performance AI systems: A framework for control and accountability. World J. Adv. Res. Rev. 2025, 20, 1507–1516. [Google Scholar] [CrossRef]
  70. Official Journal of the European Union. Regulation (EU) 2024/1689 of the European Parliament and of the Council (Artificial Intelligence Act). 12 July 2024. Available online: https://eur-lex.europa.eu/eli/reg/2024/1689/oj (accessed on 20 October 2025).
  71. UNESCO. Recommendation on the Ethics of Artificial Intelligence. 2021. Available online: https://unesdoc.unesco.org/ark:/48223/pf0000380455 (accessed on 20 October 2025).
  72. Department for Digital, Culture, Media & Sport (UK). Implementing the UK’s AI Regulatory Principles: Guidance for Regulators. 5 February 2024. Available online: https://www.gov.uk/government/publications/implementing-the-uks-ai-regulatory-principles-initial-guidance-for-regulators (accessed on 20 October 2025).
  73. Tabassi, E. Artificial Intelligence Risk Management Framework (AI RMF 1.0); NIST AI 100-1; National Institute of Standards and Technology (NIST): Gaithersburg, MD, USA, 2023. Available online: https://nvlpubs.nist.gov/nistpubs/ai/nist.ai.100-1.pdf (accessed on 20 October 2025).
  74. IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems. Ethically Aligned Design: A Vision for Prioritizing Wellbeing with Autonomous and Intelligent Systems, Version 1; IEEE: New York, NY, USA, 2016; Available online: https://standards.ieee.org/wp-content/uploads/import/documents/other/ead_v1.pdf (accessed on 20 October 2025).
  75. IEEE Standards Association. P7999: Standard for Integrating Organizational Ethics Oversight in Projects and Processes Involving Artificial Intelligence. Available online: https://sagroups.ieee.org/7999-series/ (accessed on 16 October 2025).
  76. Microsoft. Microsoft Responsible AI Standard—General Requirements; Microsoft Corp.: Redmond, WA, USA, 2022; Available online: https://cdn-dynmedia-1.microsoft.com/is/content/microsoftcorp/microsoft/final/en-us/microsoft-brand/documents/Microsoft-Responsible-AI-Standard-General-Requirements.pdf (accessed on 20 October 2025).
  77. Google. AI Principles and Google Cloud Responsible AI Guidance. 2018. Available online: https://ai.google/principles/ (accessed on 20 October 2025).
  78. OpenAI. Safety & Responsibility; Usage Policies. OpenAI: San Francisco, CA, USA. Available online: https://openai.com/safety/ (accessed on 16 October 2025).
  79. Beijing Academy of Artificial Intelligence. The Chinese Approach to Artificial Intelligence: An Analysis of Policy, Ethics, and Regulation; BAAI: Beijing, China, 2020. [Google Scholar]
  80. Leslie, D. Understanding Artificial Intelligence Ethics and Safety: A Guide for the Responsible Design and Implementation of AI Systems in the Public Sector; The Alan Turing Institute: London, UK, 2019. [Google Scholar] [CrossRef]
  81. Luo, Y. Designing with AI: A Systematic Literature Review on the Use, Development, and Perception of AI-Enabled UX Design Tools. Adv. Hum.-Comput. Interact. 2025, 2025, 3869207. [Google Scholar] [CrossRef]
  82. Vlasenko, K.V.; Lovianova, I.V.; Volkov, S.V.; Sitak, I.V.; Chumak, O.O.; Krasnoshchok, A.V.; Bohdanova, N.G.; Semerikov, S.O. UI/UX design of educational on-line courses. CTE Workshop Proc. 2022, 9, 184–199. [Google Scholar] [CrossRef]
  83. Davis, F.D. Perceived Usefulness, Perceived Ease of Use, and User Acceptance of Information Technology. MIS Q. 1989, 13, 319. [Google Scholar] [CrossRef]
  84. Western Governors University. Available online: https://www.wgu.edu/ (accessed on 16 October 2025).
  85. Capella University. FlexPath. Available online: https://www.capella.edu/capella-experience/flexpath/ (accessed on 16 October 2025).
  86. Purdue Global. ExcelTrack. Available online: https://www.purdueglobal.edu/degree-programs/business/exceltrack-competency-based-mba-degree/ (accessed on 16 October 2025).
  87. Koh, J.H.L.; Daniel, B.K. Shifting Online during COVID-19: A Systematic Review of Teaching and Learning Strategies and Their Outcomes. Int. J. Educ. Technol. High. Educ. 2022, 19, 56. [Google Scholar] [CrossRef] [PubMed]
  88. Fhloinn, E.N.; Fitzmaurice, O. Mathematics Lecturers’ Views on the Student Experience of Emergency Remote Teaching Due to COVID-19. Educ. Sci. 2022, 12, 787. [Google Scholar] [CrossRef]
  89. Karadağ, E.; Su, A.; Ergin-Kocaturk, H. Multi-Level Analyses of Distance Education Capacity, Faculty Members’ Adaptation, and Indicators of Student Satisfaction in Higher Education during COVID-19 Pandemic. Int. J. Educ. Technol. High. Educ. 2021, 18, 57. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Principal agents of the human–agentic co-created learning system (HCLS).
Figure 2. Overview of components and processes of the HCLS.
Figure 3. Functions of the AI Supervisor.
Figure 4. Functions of the Learner’s AI Assistant.
Figure 5. Functions of the Human Tutor’s AI Assistant.
Figure 6. Functions of the Learning Activity Scheduler.
Figure 7. Functions of the Learning Activity Outcomes Assessor.
Table 1. Comparison of the educational affordances of GenAI and AgenticAI (adapted from Sehgal [35]).
Feature | GenAI | AgenticAI
Autonomy | Acts in response to human input | Acts autonomously in response to learner and environment
Workflow | Automates given workflow processes | Optimises and evolves new workflow processes
Decision-making | Makes decisions on the basis of predictive learning analytics data | Employs self-learning for proactive decision-making
AI Tutor roles | ‘Secretarial support’ and dialogic engagement | Adapting and personalising activities and curriculum for the learner
Table 2. Artificial intelligence assessment scale (adapted from Perkins et al. [40]).
Level 1 | No use of AI
Level 2 | AI used for brainstorming, creating structures, and generating ideas
Level 3 | AI-assisted editing, improving the quality of student-created work
Level 4 | Use of AI to complete certain elements of the task, with students providing a commentary on which elements were involved
Level 5 | Full use of AI as ‘co-pilot’ in a collaborative partnership without specification of which elements were wholly AI-generated
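The five-point scale above lends itself to a simple machine-readable representation for use in assessment tooling. The following Python sketch is illustrative only: the class name `AIASLevel`, the member names, and the helper function are the present author's assumptions, not part of the AIAS specification by Perkins et al. [40].

```python
from enum import IntEnum


class AIASLevel(IntEnum):
    """Artificial Intelligence Assessment Scale (after Perkins et al. [40]).

    Member names are hypothetical labels for the five levels in Table 2.
    """
    NO_AI = 1             # Level 1: no use of AI
    AI_IDEATION = 2       # Level 2: brainstorming, structures, idea generation
    AI_EDITING = 3        # Level 3: AI-assisted editing of student-created work
    AI_TASK_ELEMENTS = 4  # Level 4: AI completes elements; student commentary required
    FULL_AI_COPILOT = 5   # Level 5: full collaborative 'co-pilot' use of AI


def requires_student_commentary(level: AIASLevel) -> bool:
    """Only Level 4 mandates a commentary on which elements were AI-completed."""
    return level == AIASLevel.AI_TASK_ELEMENTS
```

Encoding the scale as an `IntEnum` preserves its ordinal character, so levels can be compared (e.g. flagging submissions above a module's permitted level) while remaining readable in assessment records.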
Table 3. Support roles of AgenticAI in students’ individual and team-working environments.
AgenticAI Support for Individual Working | AgenticAI Support for Team Working
Curating student’s study activity with notes, summaries, diary management and links to resources | Curating information and resources, team communications and liaison to support students’ team working
Providing Socratic tutoring and dialogic formative assessment | Providing Socratic tutoring and dialogic formative assessment
Checking and improving the quality of student-created work | Identifying and curating team working and improving the quality of collaborative achievements
Human–AgenticAI co-creation between student and AI tutor | Supporting peer evaluations of collaborative working; engaging in ‘hybrid human-AI shared regulation in learning’ (HASRL)
Table 4. Learning activities mapped to environments, with six sample learning outcomes rated on the Artificial Intelligence Assessment Scale.
Activities/Environments | PBL | Projects | Research | Teamwork | Presentations | Viva Voce
Flipped classroom/blended | | B (Level 3) | | | |
Individual online | | | C (Level 4) | | |
Collaborative online | | | | | E (Level 2) |
Workplace/simulation | A (Level 2) | | | D (Level 5) | |
Laboratory/workshop/studio | | | | | | F (Level 1)
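The activity–environment grid of Table 4 is, in effect, a sparse lookup from (environment, activity) pairs to sample outcomes and their AIAS ratings. The Python sketch below is a hypothetical data-structure illustration under that reading; the dictionary name and function are the present author's assumptions, and the entries simply transcribe the six sample outcomes A–F.

```python
from typing import Dict, Optional, Tuple

# (environment, activity) -> (sample outcome label, AIAS level), per Table 4
OUTCOME_MAP: Dict[Tuple[str, str], Tuple[str, int]] = {
    ("Workplace/simulation", "PBL"): ("A", 2),
    ("Flipped classroom/blended", "Projects"): ("B", 3),
    ("Individual online", "Research"): ("C", 4),
    ("Workplace/simulation", "Teamwork"): ("D", 5),
    ("Collaborative online", "Presentations"): ("E", 2),
    ("Laboratory/workshop/studio", "Viva Voce"): ("F", 1),
}


def aias_level(environment: str, activity: str) -> Optional[int]:
    """Return the AIAS level for a sampled cell, or None where the grid is empty."""
    entry = OUTCOME_MAP.get((environment, activity))
    return entry[1] if entry else None
```

A sparse mapping of this kind could let a Learning Activity Scheduler query the permitted degree of AI involvement for a proposed activity–environment pairing before generating it.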
Table 5. Summary comparison of learning management systems with learning experience platforms (adapted from Dahal et al. [43]).
Function | Learning Management Systems | Learning Experience Platforms
Locus of control | Tutor/Administrator control. Cognitivist orientation in focus on content delivery and management. | Learner control. Constructivist orientation in focus on learner experience and engagement.
Personalisation | Limited personalisation of content and tasks. | AI-driven personalisation of content and activities, based on user preferences and behaviour.
Social and collaborative orientation | Limited social interaction features. | Flexible opportunities for social and collaborative learning.

Share and Cite

MDPI and ACS Style

Williams, P. Human–AI Learning: Architecture of a Human–AgenticAI Learning System. Information 2025, 16, 1101. https://doi.org/10.3390/info16121101
