Article

Rewired Leadership: Integrating AI-Powered Mediation and Decision-Making in Higher Education Institutions

by Margarita Aimilia Gkanatsiou 1,*, Sotiria Triantari 2, Georgios Tzartzas 1, Triantafyllos Kotopoulos 1 and Stavros Gkanatsios 3,*

1 Department of Early Childhood Education, School of Humanities and Social Science, Florina University of Western Macedonia, 53100 Florina, Greece
2 Department of International and European Studies, University of Macedonia, Egnatia 156, 54636 Thessaloniki, Greece
3 Faculty of Mechanical Engineering and Mechatronics, National University of Science and Technology Politehnica, Splaiul Independentei 313, Sector 6, 060042 Bucharest, Romania
* Authors to whom correspondence should be addressed.
Technologies 2025, 13(9), 396; https://doi.org/10.3390/technologies13090396
Submission received: 16 July 2025 / Revised: 13 August 2025 / Accepted: 26 August 2025 / Published: 2 September 2025

Abstract

This study examines how university students perceive AI-powered tools for mediation in higher education, with a focus on the influence of communication richness and social presence on trust and the intention to use such systems. Although AI is increasingly used in educational settings, its role in handling academic mediation, where ethical sensitivity, empathy, and trust are essential, remains underexplored. To fill this gap, this study presents a model that integrates Media Richness Theory, Social Presence Theory, Technology Acceptance Models, and Trust Theory, incorporating digital fluency and conflict ambiguity as key moderating elements. Using a convergent mixed-methods design, the research involves 287 students from a variety of academic institutions. The quantitative findings indicate that students’ willingness to adopt AI mediation tools is significantly influenced by automation, efficiency, and trust, while their perceptions are shaped by how clearly the conflict is understood and by students’ digital skills. The qualitative insights reveal concerns about emotional responsiveness, transparency, and institutional capacity. According to the results, user trust rooted in perceived presence, fairness, and emotional connection is a central factor in AI acceptance, and emotionally aware, transparent, and context-sensitive algorithmic design strategies should be a system-level priority for institutions when integrating AI mediation tools into academic environments.

1. Introduction

1.1. The Rhetorical Crossroads of Tradition and Transformation

Leadership in higher education institutions (HEIs) has traditionally been shaped by classical rhetorical principles such as Aristotle’s concepts of logos (logic), ethos (credibility), and pathos (emotion) [1], principles supporting a human-centered approach focusing on ethical decision-making, emotional connection, and trust. Until recently, interpersonal dialogue, mentorship structures, collegial engagement, and conflict resolution were approached with emotional intelligence.
However, the digitization of higher education, which was also accelerated by the pandemic and the rise of artificial intelligence (AI), is changing how leadership communication functions. Many HEIs now use AI in areas like administrative decisions, personalized learning, student advising, and importantly, leadership communication and mediation [2]. AI programs can now mark assignments, give students feedback, and even step in when there is a dispute. These are jobs that normally rely on a person’s judgment. The rhetorical question is as follows: Can AI perform conflict mediation with the same depth, ethical sense, and trust that a face-to-face leader offers, even though AI works only through a screen? [3,4]. AI does bring speed, uniform grading, and the ability to handle large groups, but its limits in terms of empathy, openness, and “human feel” can weaken trust and damage an institution’s reputation. In this shifting communicative landscape, the present study examines leadership not as a managerial position but as an evolving communicative practice, specifically through the function of conflict mediation [5,6]. Within the context of AI integration, student perceptions of digitally facilitated mediation processes offer valuable insight into how trust, ethical engagement, and institutional responsiveness are being reconfigured [7]. Thus, the concept of “rewired leadership” in this work refers to the transformation of leadership communication as experienced by stakeholders through AI-powered systems that mediate not only information but also values, expectations, and emotional presence [8,9]. Students are not passive recipients of institutional directives; instead, they are active agents in both the adoption and legitimation of AI-powered mediation tools. Their perceptions regarding trust, fairness, and institutional ethics do not merely mirror system performance; rather, they actively inform and reshape it. Within this evolving communicative terrain, rewired leadership materializes as a participatory and digitally infused process, one co-constructed through student interaction with AI. Leadership, in this context, ceases to be a static designation and is instead refracted through expectations of transparency, emotional attunement, and ethical coherence [10,11].

1.2. Conceptual Model Context: Bridging Human–AI Interaction Through Integrated Theory

To understand how traditional leadership practices are evolving in response to AI, this study uses a combined framework based on five interconnected theories. These are synthesized into an integrated conceptual model, as shown in Figure 1. The conceptual framework illustrated in Figure 1 has been constructed with the aim of capturing the dynamic interaction between perceptual and contextual parameters that shape students’ acceptance of AI-powered mediation tools. Within this integrated model, constructs such as perceived media richness, drawing from Media Richness Theory [12], and perceived social presence [13], informed by Social Presence Theory [14], are conceptualized as antecedents of trust, which constitutes the central mediating mechanism. The dimension of behavioral intention to engage with AI tools is theoretically anchored in Technology Acceptance Models [15]. Furthermore, the model includes two moderating variables: conflict uncertainty and digital fluency, which encapsulate the complexity of the mediation context and the individual user’s technological capacity, respectively. The directional links visualized in the figure represent theoretically grounded and empirically tested hypotheses. Although social presence is often viewed as emerging from media richness, in the present model, they are deliberately examined as distinct constructs, so as to highlight their independent and potentially divergent contributions to the formation of trust. Each component of the model is further elaborated in the subsections that follow.
Figure 1 illustrates the proposed conceptual model, delineating the key factors that shape students’ intention to engage with AI-powered mediation tools. It integrates theoretical constructs such as perceived media richness and social presence, both of which function as antecedents of trust, a pivotal mediating variable influencing behavioral intention. In parallel, the model incorporates two moderating contextual parameters, conflict ambiguity and digital fluency, that reflect the situational complexity and technological readiness of users, respectively.

1.3. AI Mediation for Education 5.0

As higher education institutions progressively transition from the logic of Industry 4.0 toward the more human-centric paradigm of Industry 5.0, the need for adaptive and integrated educational frameworks becomes evident [16]. Drawing upon the “Construction Sites of the Future” paradigm, such frameworks can be technically underpinned by digital twin environments, IoT-enabled sensing, LiDAR-based spatial modeling, and AI-driven performance analytics, enabling real-time monitoring, adaptive feedback loops, and data-rich decision-making [17]. Education 4.0 has primarily emphasized the convergence of pedagogy and digital innovation with strong personalization, whereas Education 5.0 extends this approach by embedding sustainability, ethical accountability, and humanistic values into the learning process [18]. Within this evolving context, the present study aims to contribute to the operationalization of Education 5.0 by examining how students perceive AI-supported mediation mechanisms and under what specific conditions these systems can promote trust, facilitate effective academic communication, and support broader adoption in higher education settings [19].

1.4. Conceptual Foundations

1.4.1. Media Richness Theory (MRT)

Refs. [12,20] explain how different channels vary in their ability to convey complex, ambiguous information, making MRT a key lens for assessing AI’s engineered richness in conflict mediation. MRT evaluates how well a communication medium handles uncertainty and complex issues, such as conflict. AI tools can offer “engineered richness” through features like natural language processing and personalized responses [21,22]. However, their effectiveness depends on users’ digital skills and ability to trust the system.

1.4.2. Social Presence Theory (SPT)

Refs. [14,23] describe the degree to which a medium makes users feel emotionally connected, a crucial determinant when AI must simulate empathy in mediation. Social presence refers to how much a medium makes others feel real and emotionally present [24,25]. Since AI lacks human consciousness, it often struggles to convey authenticity and empathy, making trust a critical factor in user acceptance [26].

1.4.3. Technology Acceptance Models (TAMs and UTAUT)

Refs. [15,27] identify perceived usefulness and ease of use as primary drivers of adoption, framing our measures of behavioral intention toward AI mediation tools. Building on TAMs, UTAUT also incorporates social influence (peer and leadership endorsements) and facilitating conditions (organizational and technical support), which are critical in higher education settings. In our survey, perceived usefulness and ease of use map onto the efficiency and automation constructs, while social influence and facilitating conditions inform our items concerning institutional support and digital readiness [28]. These dimensions together explain how judgments about trustworthiness and social realism shape students’ willingness to adopt AI-powered mediation.

1.4.4. Trust Theory

Trust Theory, as articulated in [29,30], positions trust as the pivotal nexus between users’ system perceptions and their ensuing behaviors. Within this framework, trust is deconstructed into three foundational dimensions: ability, denoting the system’s demonstrable competence; integrity, signifying adherence to proclaimed principles and commitments; and benevolence, reflecting the system’s perceived orientation toward users’ welfare [31]. These dimensions acquire particular resonance in ethically sensitive domains such as AI-mediated conflict resolution, where assessments of technical reliability are inextricably intertwined with judgments of fairness and goodwill [32].
In our study, these dimensions directly inform our trust items, capturing not only students’ judgments of AI’s technical reliability but also their perceptions of its fairness, transparency, and concern for user welfare. Our findings confirm that higher ratings for ability, integrity, and benevolence predict greater behavioral intention to use AI mediation tools, underscoring trust’s central mediating role in our conceptual model.

1.4.5. Ethically Aligned Design and Machine Agency

Ethically aligned design [33] and machine agency [34] frame how users project moral expectations and perceived autonomy onto AI systems. These perspectives are crucial for AI mediation, where students expect not only technically competent responses but also decisions that reflect human values: fairness, transparency, and accountability. In our survey, ethical alignment maps onto the confidentiality and explainability constructs [35], capturing student concerns about data protection and the understanding of AI decision logic, while machine agency informs items on automation and perceived control, reflecting how much independence students are willing to grant the system [36].

1.4.6. The Central Role of Trust

Positioning trust as the central, unifying construct that mediates the relationship between perceived system characteristics (e.g., richness, presence, efficiency, confidentiality) [37] and students’ behavioral intention to adopt AI-powered mediation, this study shows that trust is not only about technical competence, as it also encompasses perceptions of fairness, empathy, and relational safety [34], which are vital in conflict mediation contexts.

1.4.7. The Paradox of Leadership in the Digital Age

HEIs now face a leadership paradox. On the one hand, AI offers operational efficiency, objectivity, and scalability [38]. On the other, it risks diminishing the ethical credibility and human warmth traditionally expected in leadership communication. Institutions that fail to adapt may struggle to scale mediation efforts [36]. Conversely, those adopting AI without ethical safeguards may erode stakeholder trust and face backlash in high-stakes disputes [39].

1.5. Research Gap: From Functionality to Relational Legitimacy

Despite the growing body of literature on AI in education [40,41,42] and well-established technology acceptance frameworks, a key research gap remains. Existing studies rarely interrogate how relational dimensions such as perceived empathy, openness, or ethical alignment shape user trust and influence adoption behavior in contexts like conflict mediation [43]. This study bridges that gap by exploring how students perceive AI tools’ communicative richness and social presence, how these perceptions shape trust in both the system and the institution, and finally, how trust, in turn, drives behavioral intention to adopt and use AI-powered mediation tools.

1.5.1. Research Question

How do students’ perceptions of media richness and social presence in AI-powered mediation tools influence their trust in these systems? And how does this trust subsequently shape their behavioral intentions toward adoption and use within higher education institutions? Furthermore, how are these relationships moderated by contextual factors such as conflict uncertainty and digital fluency?

1.5.2. Research Objectives

  • Perception and trust: Assess how students’ perceptions of media richness and social presence influence trust in AI mediation tools.
  • Trust intention: Determine how trust impacts behavioral intentions to adopt and use such tools.
  • Moderators: Examine how conflict uncertainty and digital fluency moderate these relationships.
  • Qualitative nuance: Explore students’ ethical concerns, emotional expectations, and perceptions through open-ended responses.
  • Model validation: The proposed integrated, idea-driven model was empirically tested across a diverse sample of university students, aiming to examine the hypothesized relationships depicted through the directional pathways in Figure 1.
To substantiate the theoretical orientation of this study and reinforce the conceptual underpinnings of the model, a structured and thematically coherent review of the relevant literature was conducted. The following section outlines five interrelated theoretical domains, each of which directly informs a specific component of the conceptual framework.
(1) Digital transformation and leadership in higher education establishes the broader institutional context, highlighting how AI integration influences leadership discourse, trust dynamics, and communicative practices.
(2) The application of AI in educational mediation and governance provides the conceptual foundation for constructs such as automation, confidentiality, and explainability.
(3) Media richness and communicative effectiveness in digital environments elucidates the constructs of perceived media richness and social presence.
(4) Trust and social presence in human–AI interaction constitutes the theoretical core of the model’s mediating mechanism—trust—as shaped by emotional perception and ethical interpretation.
(5) Technology acceptance frameworks within institutional settings inform the outcome variable of behavioral intention while also incorporating the contextual moderators of digital fluency and conflict ambiguity.
Collectively, these five domains comprise the theoretical scaffolding of the conceptual model and underscore both the empirical significance and the theoretical gap this study seeks to address within the broader discourse on AI mediation in higher education.

2. Theoretical Foundations

2.1. Classical Foundations and Digital Disruption

Traditional leadership in HEIs has relied on ethical persuasion, emotional engagement, and relational trust, often being expressed through faculty discussions, mentoring, and in-person conflict resolution. Digital acceleration, however, driven by the pandemic and the rise of AI, is shifting these practices [20]. Tasks that once relied on human judgment, including delicate work like mediation, are now increasingly managed by software, but can AI reproduce the relational depth and ethical nuance that academic leaders bring to conflict resolution? Do staff and students see such systems as credible and trustworthy? To explore these issues, this study utilizes Media Richness Theory, Social Presence Theory, Trust Theory, and Technology Acceptance Models, which together offer insight into how digital tools perform in ambiguous, emotionally charged situations and how user trust develops. Prior research suggests that AI-mediated exchanges often feel mechanical or detached [44], and although AI promises efficiency and objectivity, it can lack the authentic presence required to build trust [45]. Consequently, we investigate how students judge AI mediation tools with respect to communication richness, perceived human presence, and overall trustworthiness, while also examining how conflict uncertainty and users’ digital fluency shape their willingness to accept these systems.

2.1.1. Media Richness Theory (MRT): Reconfiguring Richness for AI

Effective leadership communication in higher education institutions depends heavily on managing uncertainty, especially in situations of conflict, where clarity, empathy, and responsiveness are considered essential. Media Richness Theory (MRT) [12] provides a useful framework for evaluating different communication tools and how well those tools handle complexities. Originally designed to help managers choose the right communication channels, MRT suggests that media vary in “richness,” or their ability to convey detailed messages and reduce uncertainty [44]. According to MRT, richness is determined by four core dimensions: (1) multiple cues: including verbal intonation, facial expressions, body language, and other nonverbal signals; (2) feedback immediacy: the speed and interactivity of communication exchanges; (3) natural language: the use of expressive and flexible linguistic structures; and (4) personalization: the extent to which messages are tailored to individuals and specific contexts. Face-to-face interaction represents the richest medium, enabling simultaneous transmission of verbal and nonverbal cues, instantaneous feedback, and deeply framed dialogue. In the context of higher education leadership, such richness helps the resolution of emotionally charged conflicts, ranging from disputes over departmental resources to complex student grievances involving issues of equity, identity, or academic integrity [45,46,47]. Yet, in the age of AI-driven communication tools, this long-standing ranking is undergoing a profound disruption. Artificial intelligence introduces new, non-human modes of interaction that challenge the original criteria of MRT. Virtual assistants, chatbots, and AI mediation platforms now claim to replicate or even enhance human communicative functions through technological innovation. These systems aim to offer pseudo-richness by leveraging Natural Language Processing (NLP), affect recognition, and contextual adaptation. However, a question arises: Can artificially constructed richness equate to genuine human interaction in emotionally and mentally complex leadership contexts [48]?

2.1.2. AI-Driven “Pseudo-Richness” and Its Limitations

The traditional ranking of communication media is being disrupted by AI. Tools such as virtual assistants, chatbots, and AI mediation platforms are designed to replicate, and sometimes enhance, aspects of human communication. These tools claim to offer what can be called “pseudo-richness” by using technologies like Natural Language Processing (NLP), emotion detection, and contextual adaptation [49]. But can this engineered richness truly replace the depth of human interaction in emotionally complex leadership situations? An example of pseudo-richness is seen in virtual avatars and agents that simulate human cues using real-time emotion recognition, voice modulation, and facial expression tracking [50]. These platforms can reflect a user’s emotional state and adapt responses accordingly, creating the appearance of rich, responsive interaction. AI chatbots may personalize communication by recognizing emotional tone or prior interactions [51]. However, AI systems operate within automated boundaries: unlike human mediators, they may structure responses empathetically, but they lack consciousness, ethical judgment, and contextual memory, all of which contribute to the authenticity and depth of real human mediation. Tools like Otter.ai or Zoom transcription assistants can offer real-time feedback in text but fall short of replicating nonverbal reciprocity. These systems illustrate the issue of “cue simplification,” where emotional signals are captured, processed, and reassembled artificially. This process risks reducing richness to a technical formula, rather than a lived interpersonal experience, thus raising concerns about trust and emotional safety in leadership communication [52].

2.1.3. Reframing Richness: Channel Expansion and the Role of Digital Fluency

MRT has evolved significantly. Carlson and Zmud’s [53] Channel Expansion Theory argues that media richness is not fixed but grows with user experience. According to this point of view, the perceived richness of a medium does not depend solely on its technical features but also on how familiar the user is with the topic, the person they are communicating with, and the platform being used. In the context of AI mediation, this means that individuals with high digital fluency may view chatbot or virtual reality (VR) environments as sufficiently rich [54,55]. However, others may find them impersonal, somewhat confusing, or even intimidating. For instance, digitally skilled students might feel comfortable resolving academic conflicts in AI-powered mediation rooms with emotion-sensitive avatars, while less experienced users might distrust these systems or view them as depersonalizing [56,57]. This insight is crucial for understanding why some users accept AI-mediated communication while others do not. Technical advancement alone does not guarantee meaningfulness. If users do not feel understood, respected, or emotionally supported, especially in sensitive situations, they are unlikely to trust or adopt AI tools, no matter how advanced the features may be.

2.2. Toward a Redefined AI-Mediated Richness

In light of these developments, scholars have argued for a reconceptualization of MRT in AI contexts. Richness must be understood not only as a media property but also as a socio-relational construct that includes openness, explainability, and value alignment. In conflict mediation, for example, a chatbot may meet the criteria of “rich” media in terms of cue simulation and message adaptation. However, if users perceive the AI as biased, opaque, or unempathetic, the interaction is not experienced as rich but rather as mechanized and emotionally void [58,59]. Furthermore, as AI systems increasingly integrate emotion recognition and contextual decision-making, there is a growing need to assess whether this form of richness is genuine or manipulative. AI may detect sadness or frustration in a student’s voice and respond with a pre-programmed show of empathy, but this is not the same as authentic interpersonal validation. Such interactions may risk reinforcing simulated empathy, which may be perceived as deceptive or emotionally unsafe [60]. Finally, this leads to the critical intersection with trust, where richness alone is insufficient. As later sections will explore, the perception of AI’s integrity and benevolence plays a crucial role in shaping whether users engage with AI-mediated platforms at all. In essence, richness without trust is hollow, and perceived trustworthiness may ultimately mediate the adoption of AI in even the richest communicative environments [61]. Media Richness Theory provides an essential starting point for evaluating AI-mediated communication in HEI leadership. However, AI tools reconfigure the foundational dimensions of richness, transforming nonverbal cues into biometric feedback, personalization into automated tailoring, and immediacy into response automation. The perceived richness of AI-mediated platforms, therefore, is deeply shaped by digital fluency, the emotional context, and trust in the system’s openness and intentions. This positions MRT not as a static ranking but as a dynamic interpretive space, one where user perception, context, and ethical design converge to determine whether AI can truly support conflict resolution and leadership communication in academic environments.

2.2.1. Social Presence Theory (SPT): Trust, Relationality, and AI and the Black Box

As established, the richness of a communication medium, particularly in the context of AI-mediated conflict resolution, is no longer a fixed characteristic of the technology itself but a perception mediated by user experience, contextual relevance, and trust. However, perceived richness alone is not sufficient to foster acceptance or effectiveness in leadership communication. To understand the emotional and relational dynamics of digitally mediated leadership, we must complement MRT with Social Presence Theory (SPT), which shifts the analytical focus from information transfer to relational connection. While Media Richness Theory asks whether a medium is equipped to resolve uncertainty, Social Presence Theory [14] asks whether the medium is capable of making the other person feel psychologically “real.” This distinction is especially critical in the context of mediation, where outcomes are often shaped not only by clarity but by empathy, respect, and emotional safety, features less concerned with the cue bandwidth and more with the felt presence of the interlocutor. SPT defines social presence as the “degree of importance of the other person in the interaction and the consequent importance of the interpersonal relationship.” In simpler terms, it refers to the sense that a real person is behind the message, emotionally available, contextually responsive, and ethically attuned. Traditionally, face-to-face interactions and synchronous video calls are classified as high-presence media, while asynchronous text-based communication, such as email, is considered low-presence. High presence fosters warmth, immediacy, and mutual understanding, qualities essential for trust-building and conflict transformation [62,63].

2.2.2. AI and the Presence Deficit

In the case of AI-mediated communication, particularly in conflict resolution contexts within HEIs, social presence becomes a contested terrain. While AI may simulate some indicators of richness, it typically struggles to produce the relational depth required to sustain meaningful presence. For instance, text-based chatbots, even those trained on sentiment analysis or cultural competence algorithms, often fail to convey benevolence or emotional resonance. Users quickly sense the lack of genuine empathy, which reduces the perception of an authentic human presence on the other side of the dialogue [64]. The interaction becomes functional but not relational, mechanical rather than transformational. Moreover, the opacity of AI algorithms, the so-called “black box” problem, exacerbates the relational deficit. In traditional mediation, trust can be built through reason-giving, perspective-taking, and the transparent negotiation of values. In AI-led mediation, by contrast, decisions or suggestions are often generated via opaque automated processes [65]. When users cannot access the logic behind a recommendation (say, why a chatbot suggested a particular compromise or why a predictive model flagged a student for at-risk behavior), the sense of immediacy and intimacy is eroded [66]. This lack of explainability not only undermines presence but also raises suspicion, especially in high-stakes academic settings, where fairness, due process, and institutional credibility are constantly scrutinized.

2.3. Paradoxical Successes: Presence Through Hyper-Relevance

Despite these limitations, AI-mediated systems are not universally devoid of presence. In fact, certain applications have paradoxically succeeded in fostering user connection, even with technically lean media. For instance, AI4PCR, a simulation-based training tool used for conflict resolution and peer communication, has been shown to elicit high user engagement and perceived relationality [67]. This is not because the medium is inherently rich or emotionally expressive but because the content is hyper-relevant, tailored, and cognitively aligned with user expectations and goals. This finding highlights an important modification to classical SPT: users interpret presence not only through cues and immediacy but also through personalization, control, and contextual resonance. A system that speaks to one’s specific needs, even if via chatbot, may be perceived as “present” simply because it listens better than a distracted human. The success of such systems implies that presence can be partially reconstructed through responsiveness and perceived attentiveness, even in the absence of true relational emotion. However, these cases remain the exception, not the norm. In most higher education institutions, particularly those with limited infrastructure or insufficient training for faculty and staff, AI-mediated tools still feel sterile, depersonalized, or even alienating [68]. Thus, presence must be understood not as a binary (present/absent) but as a spectrum of perceived authenticity, with AI often languishing in the lower ranges, unless carefully designed for relational fidelity.

2.4. Trust as the Mediating Construct

Given these challenges, trust becomes the bridge between social presence and system adoption. Trust Theory posits that trust is grounded in three pillars: (1) ability: users must believe the AI system is competent (e.g., accurate in detecting emotion or parsing conflict types); (2) integrity: users must believe the system acts fairly and transparently; and (3) benevolence: users must believe the system acts in their best interest. Of these three, benevolence is most directly related to social presence. If users perceive the system as emotionally indifferent or ethically aloof, presence collapses even if the AI scores high on technical ability. Without empathic design, AI remains a tool, not a partner [69]. For example, a chatbot that offers conflict resolution strategies without acknowledging the emotional gravity of an academic discrimination case will likely be rejected as cold or even harmful. Moreover, lack of openness undermines integrity, as users are unable to interrogate or validate the fairness of the AI’s reasoning process. This is particularly problematic in HEIs, where decisions about disciplinary action, academic misconduct, or funding allocation carry significant reputational and emotional weight [32]. This dynamic explains the relative success of AI tools like AI4PCR, which are designed not to replace human judgment but to enhance human skill. These systems build trust not by resolving real conflicts but by preparing users to engage in relationally sensitive conversations, promoting empowerment rather than decision-making. Perceived trustworthiness does not come from mimicking human traits; it comes from being clearly designed to support interpersonal growth and constructive dialogue, which is particularly relevant in the context of leadership within higher education. To conclude, Social Presence Theory offers insights into why AI-driven mediation tools often struggle to gain real traction in conflict-heavy settings such as academic institutions. Presence, in this context, is not just a technical attribute; rather, it is an emotional impression, shaped by trust, openness, and a sense of genuine connection. While media richness and social presence are often treated as sequential in communication theory, this study conceptualizes them as distinct constructs. This separation allows for a clearer examination of their independent roles in shaping trust, acknowledging that technologically engineered richness does not necessarily produce authentic emotional or interpersonal presence [70]. While AI may reproduce certain features, like quick replies or tailored messaging, its impact depends on whether users truly feel acknowledged and respected. At its core, presence intersects with perceived goodwill and fairness, forming a key part of how people decide whether to engage with AI-based mediation systems [71,72]. These ideas are further examined in the next section.

2.5. Technology Acceptance (TAM/UTAUT): Adoption Barriers as Trust Failures

In higher education leadership, the success of AI-assisted communication does not depend solely on a system’s capabilities (Media Richness Theory) or the emotional tone of an interaction (Social Presence Theory), as it also depends on whether people act on their impressions and whether they actually choose to use these tools or not. Technology Acceptance models help explain this shift by showing how users adopt digital tools in institutional settings. The Technology Acceptance Model (TAM) [73] and the Unified Theory of Acceptance and Use of Technology (UTAUT) [74] help explain why AI tools are either embraced or resisted. These models show that successful adoption does not rely only on technical efficiency or platform design; it also depends on how users evaluate the system’s usefulness, ease of interaction, and perceived credibility. In this sense, resistance to AI in university leadership is not simply about usability, as it often stems from doubts about trust, emotional connection, and the perceived quality of communication [27]. This discourse is dominated by these two foundational models, both of which identify the primary drivers behind technology adoption, especially in organizational contexts. The TAM focuses on (1) perceived usefulness (PU): the extent to which users believe a technology enhances their performance or outcomes; and (2) perceived ease of use (PEOU): the degree to which the technology is intuitive, accessible, and not effortful. The UTAUT expands this model with (3) social influence: the role of peer, cultural, or institutional pressure; and (4) facilitating conditions: the infrastructural and technical support enabling usage. In the case of AI-powered mediation in higher education institutions, each of these dimensions becomes intricately tied to the trust-related breakdowns discussed in previous sections, especially failures in perceived richness, presence, and ethical credibility [27].

2.6. Low Perceived Usefulness: When AI Lacks Contextual Intelligence

Many AI tools deployed in HEIs are perceived as having limited usefulness, especially in emotionally or ethically complex scenarios such as conflict mediation. Lean communication tools like static email bots or unsupervised sentiment analyzers struggle with ambiguous, high-stakes conflict [75]. When students or staff members bring deeply personal, context-specific grievances (e.g., discrimination claims or authorship disputes), automated responses often appear tone-deaf, simplistic, or dismissive. This breakdown mirrors earlier critiques from Media Richness Theory, where the lack of nonverbal cues and adaptive feedback results in shallow, ineffective engagement. Even technically rich systems, such as VR mediation platforms, may fail to demonstrate perceived usefulness if they do not align with users’ values, emotional needs, or expectations for fairness [76]. Yet, in rare cases, AI systems have proven their value by aligning technical performance with relational and institutional goals. For example, Georgia State University successfully deployed predictive analytics to enhance student retention by using data ethically and transparently to intervene early with at-risk students [77]. Their model demonstrated that AI can be trusted when it is designed to support rather than supplant human relationships, leading to improved outcomes across diverse student populations. However, such cases are not the norm. In many HEIs, AI remains underutilized or outright rejected, not because of weaknesses in automation but due to a lack of trust, context, and emotional intelligence.

2.7. Low Perceived Ease of Use: When Rich Tools Are Hard to Navigate

Even when AI systems are potentially valuable, their adoption is often undermined by usability challenges. AI interfaces that rely on immersive technologies, such as VR-based mediation tools, dynamic chatbot frameworks, or dashboard-style analytics engines, require users to possess a high level of digital literacy. Without sufficient training, these tools, rather than being pathways to resolution, become barriers to access. Channel Expansion Theory [53] emphasizes that users’ perceptions of a system’s richness and effectiveness heavily depend on their familiarity with the medium and the content domain. Digital skills in HEIs vary significantly among faculty, staff, and students; therefore, ease of use becomes a critical adoption filter. The invisible complexity of AI systems, particularly those driven by machine learning, complicates matters further, meaning that users may not understand how conclusions are reached. This opacity contributes to what is described as AI fatigue and mental overload [78], where users are discouraged from engaging because the learning curve feels unjustified, not because they are incapable.

2.8. Social Influence: Cultural Friction and Institutional Resistance

Beyond factors like ease of use and perceived value, the cultural context within academic institutions plays a major role in shaping whether AI tools are accepted or avoided. The “social influence” element in the UTAUT is especially important in universities, where faculty traditions and norms often clash with automation, particularly in areas that rely on emotional complexity, such as conflict mediation or student support. Educators often worry that AI tools reinforce bureaucratic control, weaken academic autonomy, and promote values that run counter to human-centered education [79]. This concern also reflects the fact that AI is frequently viewed with skepticism due to its ties to surveillance, uniformity, and impersonal learning experiences [80]. Even when AI systems are not inherently intrusive, there is often a widespread sense that they reduce emotional work like listening, empathizing, or resolving disputes to cold algorithms. If staff or students believe that AI favors speed and consistency over human sensitivity, their response is likely to be negative. On the other hand, when institutions clearly position AI as a supportive tool paired with human input and aligned with shared academic values, resistance tends to decrease [81,82,83].

2.9. Facilitating Conditions: Inequity in Infrastructure and Access

The ability to adopt and implement AI in universities also depends on the resources and support structures in place. Systems such as virtual reality mediation rooms, emotion-sensing interfaces, and advanced chatbots promise faster, more objective conflict resolution. Yet they also demand robust internet access, strong cybersecurity, and dedicated technical support [84], resources that smaller or less-funded universities may lack. This infrastructure gap is not merely technical; it raises ethical concerns because digital innovation can become another source of inequality when only wealthier institutions can adopt AI meaningfully. When students and staff suspect that AI tools are being introduced primarily to save money rather than to enhance academic wellbeing, their trust erodes, especially if access appears uneven or the benefits flow to only a select group [85,86].

3. Materials and Methods

This study used a cross-sectional, convergent, mixed-methods design to examine university students’ perceptions and behavioral intentions toward AI-powered mediation tools in higher education institutions (HEIs), gathering quantitative and qualitative data at the same time and integrating the findings during interpretation to capture both statistical patterns and rich contextual insights [87]. An online questionnaire measured students’ perceptions of trust, efficiency, confidentiality, automation, and intention to use AI-powered mediation, while 127 open-ended responses were thematically analyzed with NVivo 14 (Lumivero, Denver, CO, USA) following Braun and Clarke’s method [88]. The sample comprised 287 undergraduate and postgraduate students from universities in Greece, Spain, Romania, the United Kingdom, the United States, and multilingual European programs, ensuring a diverse mix of academic backgrounds and levels. By combining survey metrics with students’ own words, this research could determine not only whether they would adopt AI-supported conflict resolution but also how their digital fluency, ethical concerns, and perceptions of institutional fairness shape their acceptance of these emerging tools. The online questionnaire (Google Forms, Google Ireland Ltd., Dublin, Ireland) was designed and refined through an iterative development cycle, in line with Yin’s [89] methodological principles, and received ethical approval under Protocol No. 113/2025 prior to implementation. Initially, content and face validity were established through expert review by four domain specialists in educational technology, AI-supported decision-making, conflict resolution, and industrial AI applications. Their feedback ensured theoretical coherence and conceptual alignment. A pilot study involving 15 students spanning all levels of study (undergraduate, postgraduate, and doctoral) was then conducted to assess item clarity and logical flow, resulting in minor revisions to language and sequencing. Construct validity was systematically ensured by aligning each item block with one of five guiding theories: the Technology Acceptance Model, the Unified Theory of Acceptance and Use of Technology, Media Richness Theory, Social Presence Theory, and Trust Theory, following the scale development protocols of Boateng et al. [90] and DeVellis and Thorpe [91]. Internal consistency and reliability were confirmed across all the subscales, with Cronbach’s α exceeding 0.78, surpassing accepted thresholds [92] and consistent with recent reconsiderations of internal consistency standards [93]. Although an exact response rate could not be determined due to the combined use of targeted academic mailing lists and open online distribution, the final sample of 287 valid responses surpasses minimum sample size recommendations for factor and regression analyses, thereby ensuring statistical adequacy. All procedures complied fully with GDPR requirements and with anonymity and informed-consent protocols.

3.1. Participants and Sampling

The study sample comprised 287 university students drawn from institutions in Greece, Spain, Romania, the UK, the US, and an international cohort studying in multilingual European programs. Participants were recruited via academic mailing lists and digital forums. The inclusion criteria ensured a mix of undergraduate and postgraduate students with varied academic backgrounds.
The demographic distribution was as follows:
- Gender: 74.7% female, 23.2% male, 2.1% preferred not to disclose.
- Age: 78.8% aged 18–24; 12.1% aged 25–34; 5.1% aged 35–44; 4% aged 45+.
- Fields of study: 42.4% social sciences, 19.2% business/economics, 12.1% engineering, 10.1% humanities, 16.2% other.
- Institution type: 75.8% public universities, 20.2% private institutions, 4% colleges.
- Level of study: 60.6% undergraduate (bachelor), 34.5% postgraduate (master), 4.9% other (e.g., PhD, non-specified).
Figure 2 presents the demographic distribution of the study sample (N = 287). The participants were predominantly female (74.7%) and primarily aged between 18 and 24 (78.8%), indicating a young, digitally native population. Most respondents studied at public universities (75.8%), with the remainder split between private institutions and colleges. In terms of academic fields, the largest group came from the social sciences (42.4%), followed by business, engineering, humanities, and other disciplines. This composition aligns with the study’s aim of gathering a broad spectrum of student perspectives across diverse educational environments, fields of study, and demographic characteristics.

3.2. Instrumentation and Constructs

The survey was divided into four sections: demographics, mediation experience, perceptions of AI-based mediation, and behavioral intention to use AI tools. The key constructs measured included the following:
- Media richness: personalization, ease of use, multimodal cues.
- Social presence: authenticity, empathy.
- Trust: integrity, competence, benevolence.
- Confidentiality: perceived data protection.
- Behavioral intention: willingness to adopt AI-powered mediation.
All the constructs were measured using 5-point Likert scales. The Cronbach’s alpha for each subscale exceeded 0.78, demonstrating internal reliability.
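To illustrate how such a subscale reliability check can be computed, the following minimal sketch (in Python, whereas the study’s analysis was run in SPSS) calculates Cronbach’s alpha for a hypothetical trust subscale; the item names and response values are invented for demonstration only.

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a set of Likert items (rows = respondents, cols = items)."""
    items = items.dropna()
    k = items.shape[1]                          # number of items in the subscale
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 1-5 Likert responses for three trust items (illustrative data only).
responses = pd.DataFrame({
    "trust_integrity":   [4, 5, 3, 4, 2, 5, 4],
    "trust_competence":  [4, 4, 3, 5, 2, 5, 4],
    "trust_benevolence": [3, 5, 2, 4, 2, 4, 4],
})

# The study reported alpha > 0.78 for every subscale; this toy check follows the same formula.
print(f"Cronbach's alpha (trust subscale): {cronbach_alpha(responses):.2f}")
```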
Representative items for each construct included the following:
Media richness (e.g., “How easy do you think it is to use AI-based mediation tools in conflict resolution?”).
Social presence and trust (e.g., “Would you trust AI systems to deliver impartial outcomes in mediation between students and/or staff?”).
Confidentiality (e.g., “How confident are you that AI tools can maintain confidentiality during the mediation process?”).
Behavioral intention (e.g., “Are you willing to use AI-based mediation tools to resolve conflicts with students and/or colleagues?”).
These items were rated on 5-point Likert scales ranging from 1 (Not at all/Strongly disagree) to 5 (Completely/Strongly agree) and mapped to five guiding theoretical frameworks: the Technology Acceptance Model (TAM), Unified Theory of Acceptance and Use of Technology (UTAUT), Media Richness Theory (MRT), Social Presence Theory (SPT), and Trust Theory (TT).
Additional items addressed mediation knowledge and behavior under the Theory of Planned Behavior (TPB), such as “Do you believe mediation between conflicting parties would contribute to a positive academic environment?” and “Do you think mediation training should be offered to students?”
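As a purely illustrative sketch of how the item-to-construct-to-theory mapping described above could be organized for scoring, the snippet below uses hypothetical item keys and a simple subscale-averaging function; it is not the study’s actual instrument file.

```python
from statistics import mean

# Hypothetical mapping of constructs to their guiding theory and example item keys.
CONSTRUCTS = {
    "media_richness":       {"theory": "MRT",       "items": ["mr_ease", "mr_personalization"]},
    "social_presence_trust": {"theory": "SPT/TT",    "items": ["sp_impartial_trust"]},
    "confidentiality":      {"theory": "TT",        "items": ["conf_data_protection"]},
    "behavioral_intention": {"theory": "TAM/UTAUT", "items": ["bi_willingness"]},
}

def subscale_scores(response: dict[str, int]) -> dict[str, float]:
    """Average the 1-5 Likert ratings belonging to each construct."""
    return {
        name: mean(response[item] for item in spec["items"])
        for name, spec in CONSTRUCTS.items()
    }

# Example respondent (ratings on the 1-5 scale, invented for illustration).
respondent = {
    "mr_ease": 4, "mr_personalization": 3,
    "sp_impartial_trust": 2,
    "conf_data_protection": 3,
    "bi_willingness": 4,
}
print(subscale_scores(respondent))
```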
To explore perceptions beyond closed-ended responses, qualitative data were also collected. Participants were invited to respond to three open-ended prompts:
“What changes would you like to see in the current mediation practices at your institution?”
“If given a choice between AI-based mediation and human mediation, which would you prefer and why?”
“What do you believe are the potential benefits and challenges of using AI systems for conflict resolution between students and/or staff?”
These questions were designed to elicit nuanced views on ethical acceptability, emotional intelligence, institutional readiness, and AI–human comparisons, allowing the qualitative findings to directly support or complicate the statistical trends.

3.3. Quantitative Analysis

Descriptive statistics indicated the following:
- 74.2% of students found mediation techniques effective.
- 63.6% expressed concern about the confidentiality of AI tools.
- 41.4% showed positive behavioral intention to use AI tools.
- 68.7% supported automation when combined with human oversight.
The inferential analysis employed chi-square tests and Spearman correlations. Notable findings included the following:
- Trust × Intention: ρ = 0.49, p < 0.001
- Automation × Intention: ρ = 0.54, p < 0.001
- Efficiency × Intention: ρ = 0.47, p < 0.001
The cluster analysis (k = 3) revealed three groups: “Tech Enthusiasts,” “Pragmatic Skeptics,” and “Ethically Concerned.”
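The three-cluster segmentation can be outlined with a standard k-means procedure. The sketch below is a hypothetical reconstruction using scikit-learn on simulated construct scores; the feature set and random data are assumptions, and only the analytic steps (standardization followed by k = 3 clustering) mirror the analysis reported above.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)

# Hypothetical construct scores (rows = respondents; cols = trust, efficiency,
# automation, confidentiality, behavioral intention), stand-ins for the survey data.
X = rng.uniform(1, 5, size=(287, 5))

X_std = StandardScaler().fit_transform(X)                     # put constructs on a common scale
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X_std)

# Cluster sizes and centroids; in the study the three profiles were labeled
# "Tech Enthusiasts", "Pragmatic Skeptics", and "Ethically Concerned".
print(np.bincount(kmeans.labels_))
print(kmeans.cluster_centers_.round(2))
```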
A multiple regression model explained 57.6% of the variance in behavioral intention (Adj. R2 = 0.571, F = 126.0, p < 0.001). The strongest predictors were automation (β = 0.380, p < 0.001), efficiency (β = 0.255, p < 0.001), and trust (β = 0.214, p < 0.001), followed by confidentiality (β = 0.097, p = 0.009).
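For readers who want to see the shape of these inferential analyses in code, the following sketch reproduces the workflow (Spearman correlations with behavioral intention, then a multiple regression with standardized predictors) using SciPy and statsmodels on simulated data; the column names and values are illustrative assumptions rather than the study’s dataset.

```python
import numpy as np
import pandas as pd
from scipy.stats import spearmanr
import statsmodels.api as sm

rng = np.random.default_rng(1)

# Hypothetical construct scores; names mirror the predictors reported in the text.
df = pd.DataFrame(
    rng.uniform(1, 5, size=(287, 5)),
    columns=["trust", "automation", "efficiency", "confidentiality", "intention"],
)

# Spearman rank correlations between each predictor and behavioral intention.
for predictor in ["trust", "automation", "efficiency"]:
    rho, p = spearmanr(df[predictor], df["intention"])
    print(f"{predictor} x intention: rho = {rho:.2f}, p = {p:.3f}")

# Multiple regression with z-scored variables so coefficients read as standardized betas.
z = (df - df.mean()) / df.std(ddof=0)
X = sm.add_constant(z[["automation", "efficiency", "trust", "confidentiality"]])
model = sm.OLS(z["intention"], X).fit()
print(model.summary())  # adjusted R^2 and beta weights, analogous to the values reported above
```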

3.4. Qualitative Analysis

The open-ended responses (n = 127) were analyzed using NVivo and Braun and Clarke’s [88] thematic analysis. The emergent themes included the following:
- Explainability concerns and demand for transparency.
- Preference for hybrid human–AI models.
- Emotional intelligence deficits in AI.
- Institutional infrastructure and readiness gaps.
Figure 3 illustrates the thematic coding structure, highlighting how the qualitative responses were grouped into four major categories based on recurring patterns. These themes capture students’ nuanced concerns beyond statistical metrics, particularly the emotional, ethical, and institutional aspects of AI mediation, and they highlight the psychological considerations that shape students’ evaluations of AI-mediated conflict resolution. Notably, student concerns related to empathy, fairness, and institutional readiness resonate with core assumptions of Social Presence Theory and Trust Theory. This alignment indicates that perceived emotional intelligence and ethical integrity often carry more weight in acceptance decisions than technical efficiency alone.

3.5. Ethical Considerations

All participants provided informed consent, and anonymity was preserved in accordance with the GDPR and institutional ethical standards. This study addressed potential concerns related to algorithmic bias, emotional insensitivity, and ethical oversight in terms of AI design.

3.6. Analytical Tools

Data were processed using IBM SPSS Statistics for statistical analysis, IBM SPSS AMOS 29 for modeling, and NVivo 14 for qualitative coding. Johnson–Neyman procedures were used to assess conditional interactions.
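As a simplified illustration of the Johnson–Neyman logic, the sketch below assumes a single moderation model in which digital fluency moderates the trust–intention path: it fits the interaction term with statsmodels and scans moderator values to locate where the conditional slope of trust reaches significance. All variable names and data are hypothetical; the study’s actual conditional analyses were run with the tools listed above.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 287

# Hypothetical mean-centered scores for trust and digital fluency, and a simulated outcome.
trust = rng.normal(0, 1, n)
fluency = rng.normal(0, 1, n)
intention = 0.4 * trust + 0.2 * fluency + 0.25 * trust * fluency + rng.normal(0, 1, n)

X = pd.DataFrame({"trust": trust, "fluency": fluency, "trust_x_fluency": trust * fluency})
fit = sm.OLS(intention, sm.add_constant(X)).fit()
params, cov = fit.params, fit.cov_params()

# Johnson-Neyman scan: conditional slope of trust = b_trust + b_interaction * fluency,
# with its standard error derived from the coefficient covariance matrix.
for w in np.linspace(fluency.min(), fluency.max(), 9):
    slope = params["trust"] + params["trust_x_fluency"] * w
    se = np.sqrt(
        cov.loc["trust", "trust"]
        + 2 * w * cov.loc["trust", "trust_x_fluency"]
        + w**2 * cov.loc["trust_x_fluency", "trust_x_fluency"]
    )
    t = slope / se
    sig = "significant" if abs(t) > 1.97 else "n.s."  # approx. critical t for df around 283
    print(f"fluency = {w:+.2f}: slope = {slope:.2f} (t = {t:.2f}, {sig})")
```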

3.7. Alignment with Research Question

The convergent mixed-methods approach [94] provided both statistical evidence and contextual richness to identify key predictors and inhibitors of AI mediation acceptance in HEIs. The quantitative findings revealed that automation, efficiency, and trust were the strongest predictors of behavioral intention, while the qualitative analysis highlighted students’ ethical concerns, emotional expectations, and infrastructural challenges. The integration of both strands demonstrated that emotional, contextual, and ethical factors, especially trust and presence, are more predictive of acceptance than technical efficiency alone. This approach validates and extends classical technology adoption frameworks while addressing underexplored humanistic dimensions in digitally mediated conflict resolution.
Figure 4 helps to further support the quantitative findings, providing a correlation matrix of the structural equation model constructs, highlighting the relationships between trust, efficiency, confidentiality, automation, and students’ behavioral intention to use AI-powered mediation tools.

4. Results

This study explored what shapes university students’ willingness to use AI-powered mediation tools in higher education. It looked at how key factors like trust, efficiency, automation, and confidentiality influence their intention to adopt these tools, drawing from five main theories: the Technology Acceptance Model (TAM), Unified Theory of Acceptance and Use of Technology (UTAUT), Media Richness Theory (MRT), Social Presence Theory, and Trust Theory.
The quantitative results showed that automation and efficiency had the biggest impact on students’ willingness to use AI, followed by trust and confidentiality. These findings are consistent with earlier research under the TAM and UTAUT, which emphasized how important usefulness and ease of use are. They also support MRT’s idea that systems seen as rich and responsive are more likely to be accepted. On the qualitative side, students raised concerns about what numbers alone could not show, like the need for transparency, emotional sensitivity, and reliable institutional support. These insights help explain why some students may hesitate to rely on AI in situations that involve conflict or ethical nuance.
Figure 5 below illustrates the theoretical and empirical links between the key constructs and the outcome variable. It demonstrates how each theory contributed to shaping students’ perceptions and, ultimately, their behavioral intention to use AI-powered mediation. This integrated view sets the stage for a deeper examination of the individual findings, their theoretical implications, and their potential impact on policy and practice in digitally transforming academic leadership.

4.1. Theoretical Framework for AI-Powered Mediation Acceptance in Higher Education

Figure 6 presents the standardized path coefficients derived from the multiple regression analysis, illustrating the integrated conceptual model informed by five foundational theories: the TAM, UTAUT, Media Richness Theory, Social Presence Theory, and Trust Theory. These theories underpin the operational constructs: trust, efficiency, confidentiality, and automation, which collectively predict students’ behavioral intention to adopt AI-powered mediation tools. The path coefficients reflect standardized beta weights, significant at p < 0.001 or p < 0.05 as indicated in the figure, highlighting the relative influence of each construct on behavioral intention.
The recent literature further reinforces the centrality of trust and ethical parameters in shaping the acceptance of AI-powered tools in educational contexts. Trust is increasingly framed as a multidimensional construct that includes perceived usefulness, institutional readiness, and perceived barriers to adoption [95]. Simultaneously, emerging discourse stresses the importance of addressing key concerns such as data privacy, fairness, and algorithmic bias within the specific context of higher education [96].

4.2. Interpretation of Key Predictors in Light of Theoretical Models

The quantitative analysis showed that automation and efficiency were the strongest predictors of students’ willingness to use AI-powered mediation tools, followed by trust and confidentiality. These results strongly support the Technology Acceptance Model (TAM) and the Unified Theory of Acceptance and Use of Technology (UTAUT), both of which emphasize perceived usefulness and ease of use or performance expectancy as key drivers of technology adoption [15,24].
The fact that automation stood out as the top predictor suggests that students value AI’s ability to make processes faster and reduce the workload traditionally handled by humans. This preference reflects broader trends in educational technology, where users increasingly expect streamlined, self-operating systems [97,98]. It also connects with Rogers’ [99] Diffusion of Innovations theory, particularly the idea of relative advantage, namely that users are more likely to adopt a tool if they believe it clearly improves on what came before.
Efficiency was also a strong influence. According to Media Richness Theory, students likely viewed AI tools as useful because they could handle complex input, respond quickly, and offer some level of personalization. This is consistent with earlier studies on learning analytics and AI in education, which showed that students respond positively to tools that provide timely and efficient support [100]. Trust came out as a moderately strong predictor of students’ intention to adopt AI-powered mediation tools, reflecting the ethical and relational dimensions emphasized in Trust Theory [29]. These findings are consistent with recent studies that highlight how characteristics such as competence, integrity, and benevolence continue to matter even when the interaction involves AI rather than a human [101,102]. In this study, the concept of social presence, based on how students perceived empathy and authenticity in the system, also played a role in shaping that trust.
Confidentiality, while the least influential of the four main predictors, was still statistically significant. This supports existing research on rising concerns about data privacy in digital learning environments [103,104]. Students’ worries about algorithmic surveillance and the adequacy of institutional protections reflect broader extensions of Trust Theory, which recognize ethical data practices as being key to the acceptance of digital tools [105]. These concerns also resonate with findings from the qualitative responses in this study, where students voiced unease about how personal data would be handled and whether institutions could be trusted to protect it.
These results underscore the interdependence of technical affordances and human-centric values in the digital transformation of conflict resolution processes. As such, any implementation strategy for AI mediation tools in HEIs must not only be functionally robust but also emotionally intelligent and ethically transparent, as previously emphasized [106].

4.3. Integration of Quantitative and Qualitative Findings

The integration of both data strands reveals a nuanced and multi-dimensional picture of students’ attitudes toward AI-powered mediation in higher education. While the quantitative data prioritized efficiency, automation, and trust as statistically significant predictors of behavioral intention, the qualitative insights deepened our understanding of why students respond as they do, particularly illuminating concerns around ethics, emotional intelligence, and institutional readiness.
Students’ preference for hybrid mediation models, where AI supports rather than replaces human involvement, is aligned with the moderate but notable role of trust found in the quantitative data. While students value both the efficiency and consistency provided by AI, they remain hesitant to rely entirely on it, especially for emotionally charged or ethically complex conflicts. This finding aligns with recent research emphasizing the importance of designing AI systems that support human judgment and emotional awareness in educational and mediation contexts [107].
Similarly, the emotional intelligence gap identified in qualitative responses aligns with Oh et al.’s [21] systematic review on social presence, which found that users continue to seek warmth, attentiveness, and human-like cues, even when interacting through digital platforms. Although trust and social presence are often examined as separate constructs, studies by Hancock et al. [108] and Mustofa et al. [109] indicated a strong connection between the two, particularly in contexts where technology supports decision-making and conflict resolution.
Although confidentiality was not among the strongest statistical predictors, concerns about fairness and transparency were mentioned by 74% of students in the open-ended responses, revealing its deeper significance. These concerns align closely with the ethical AI and explainability literature [110,111], which highlights that transparency and perceived justice are key to fostering trust and encouraging adoption, even when not reflected prominently in quantitative data.
Students also expressed concerns about whether higher education institutions (HEIs) have the necessary digital infrastructure and staff training to support the effective use of AI mediation tools. This links directly to the concept of facilitating conditions in the UTAUT, which refers to the availability of institutional support and resources that enable technology adoption [24]. Their concerns also echo challenges observed in the rollout of learning analytics systems, where the potential of new technologies often goes unrealized due to fragmented organizational structures and inconsistent levels of digital competence among staff [112].
Findings from the mixed-methods approach indicate that students are cautiously optimistic. While they recognize the value of AI in resolving disputes, they still prefer human-centered safeguards such as emotional presence, explainability, and institutional integrity. This perspective aligns with recent arguments that AI should not replace human judgment but rather support and enhance ethical and relational decision-making [113].

5. Discussion

5.1. Implications for Theory

This study enhances existing technology acceptance theories by demonstrating that students consider not only efficiency but also ethical and emotional factors when evaluating AI tools. In other words, what students view as “useful” now includes empathy and fairness, along with speed and ease of use [11,15,27]. Our findings, gathered through both quantitative surveys and qualitative feedback, align with recent studies [61], emphasizing that the moral and emotional fit of technology is as crucial as its practical functionality.
Specifically, trust emerged as a key factor, influenced by how students perceive empathy, fairness, and interpersonal interactions when using AI. This finding bridges the technology acceptance, social presence, and trust theories [29,114], indicating that trust in AI mediation tools heavily depends on emotional and ethical considerations. Additionally, students strongly expressed the need for AI to be transparent, fair, and emotionally clear, echoing recommendations from the IEEE’s Ethically Aligned Design framework [32] and supporting the arguments made by Holmes et al. [60] that ethics must be central to technology acceptance theories rather than just an optional extra.
Moreover, by applying Media Richness Theory to AI mediation, this research confirms the “machine agency” idea, which suggests that students evaluate AI using criteria similar to those they apply to humans, such as immediacy and personalization. The mixed-methods approach used here also supports Renick et al.’s [111] recommendation to interpret quantitative data alongside qualitative narratives, uncovering previously less-explored aspects such as the blending of human and AI roles (AI–human hybridity) and expectations around ethical transparency [115,116].
Furthermore, this study identifies critical dimensions that have not been fully explored in previous acceptance models, such as how emotional intelligence and ethical transparency directly affect user attitudes toward AI. Participants frequently emphasized the importance of clearly understanding how AI arrives at decisions, reflecting broader societal concerns about algorithmic transparency and accountability. These findings suggest a substantial theoretical expansion is necessary, integrating concepts from human–computer interaction (HCI), affective computing, and digital ethics into traditional technology acceptance frameworks [117].
The students’ emphasis on emotional intelligence suggests future theoretical models must explicitly consider affective factors when evaluating technology. This would include not only traditional usability metrics but also how users feel when interacting with AI, especially in sensitive or emotionally charged scenarios such as conflict resolution. The theoretical implications extend beyond conventional usability, urging researchers to prioritize the interplay between technical functionality and emotional responsiveness [118,119].
Additionally, students highlighted fairness as a key evaluative criterion, which broadens the traditional scope of Trust Theory by integrating organizational and systemic dimensions of trust, such as data governance and institutional accountability. This extended view of trust underscores the importance of considering both interpersonal and structural factors in technology acceptance research [120,121].
In conclusion, our findings advocate for a more comprehensive and ethically robust theoretical framework, one that accounts for the nuanced interplay of technical efficiency, emotional intelligence, ethical transparency, and institutional trust. Such a holistic approach better captures the complex ways in which students and potentially other user groups evaluate and engage with AI technologies in educational and broader societal contexts.

5.2. Practical Implications for Higher Education Institutions (HEIs)

The findings of this study provide clear practical guidance for university administrators, technologists, and educational policymakers aiming to implement AI mediation tools effectively. Primarily, AI tools should be viewed as supportive instruments that complement human judgment rather than replace it entirely. Practical tasks such as scheduling appointments, managing paperwork, and conducting initial conflict assessments can be efficiently automated, enabling mediators and administrators to focus on more nuanced and emotionally charged interactions. This strategic use of automation can significantly enhance efficiency while simultaneously preserving the trust and satisfaction of students [99,116].
To effectively integrate these AI tools, institutions must prioritize transparency in their algorithms and clearly communicate how AI decisions are made. Adopting explainable AI systems designed to clearly demonstrate their reasoning processes is critical for building and maintaining trust among students and faculty. Transparent data policies, ensuring the security and ethical use of personal information, are equally essential. Actively involving students and other stakeholders in oversight committees can further reinforce trust and engagement with these technologies, aligning closely with principles outlined in relation to trustworthy AI [117].
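As an illustration of the kind of explainability this implies, the minimal sketch below shows a transparent "first-pass" triage step whose per-feature contributions can be surfaced to students alongside its recommendation. The feature names, toy data, and model choice are assumptions introduced for demonstration only; they do not describe any institutional system discussed in this study.

```python
# Minimal sketch of an explainable first-pass triage step for a mediation request.
# Feature names, training data, and thresholds are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

feature_names = ["message_length", "negative_sentiment", "prior_cases", "urgency_flag"]

# Toy training data: whether a request was escalated to a human mediator (1) or not (0).
X = np.array([[120, 0.8, 2, 1],
              [ 40, 0.2, 0, 0],
              [300, 0.9, 1, 1],
              [ 60, 0.1, 0, 0],
              [200, 0.7, 3, 1],
              [ 80, 0.3, 0, 0]], dtype=float)
y = np.array([1, 0, 1, 0, 1, 0])

scaler = StandardScaler().fit(X)
clf = LogisticRegression().fit(scaler.transform(X), y)

def explain(case):
    """Print the per-feature contributions behind the escalation recommendation."""
    z = scaler.transform([case])[0]
    contributions = clf.coef_[0] * z               # linear contribution of each feature
    score = contributions.sum() + clf.intercept_[0]
    for name, c in sorted(zip(feature_names, contributions), key=lambda t: -abs(t[1])):
        print(f"{name:>20s}: {c:+.2f}")
    print(f"{'escalation score':>20s}: {score:+.2f}")

explain([250, 0.85, 1, 1])  # a hypothetical incoming request
```

Because every contribution is a simple product of a coefficient and a standardized input, the reasoning behind the recommendation can be stated in plain language, which is the property students asked for rather than any particular algorithm.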
A critical practical implication highlighted in this study is the necessity of preserving a collaborative dynamic between human mediators and AI systems, especially in sensitive scenarios requiring high empathy and ethical judgment [33,105]. AI technologies should facilitate, rather than diminish, human capabilities for empathy, fairness, and nuanced decision-making. This collaborative model not only addresses ethical concerns but also supports user acceptance by aligning technological functionalities with the emotional and interpersonal needs of students and staff.
Additionally, infrastructure challenges such as inconsistent internet connectivity, insufficient cybersecurity measures, and limited staff preparedness were identified as significant barriers. Addressing these issues requires targeted investments by universities. Specifically, institutions must enhance their technological infrastructures by upgrading internet capabilities, strengthening cybersecurity frameworks, and providing ongoing professional development and training programs for staff. Such investments will ensure that AI mediation tools are not only implemented effectively but also maintained securely and sustainably [58].
Furthermore, the successful adoption of AI mediation technologies necessitates sensitivity to disciplinary, cultural, and generational differences among users. AI tools must be inclusively designed to respect and adapt to these diverse factors. Drawing on Hofstede’s [122] cultural dimensions theory and intercultural communication insights [107], universities can create tools that are responsive and accessible to all users, regardless of their background or academic discipline.
In conclusion, the effective integration of AI into academic mediation processes hinges on balancing technological advancements with emotional intelligence, ethical transparency, inclusive design, and equitable access. Universities that achieve this balance will likely experience enhanced mediation effectiveness, improved student and staff satisfaction, and a strengthened institutional culture that genuinely embraces technological innovation.

5.3. Limitations

The survey of 287 self-selected students may overrepresent those who are already comfortable with digital tools, which could bias the results. Because we captured opinions at a single point in time, we cannot see how views change as AI tools become more common on campus. Finally, by focusing only on students, important insights from faculty, professional mediators, and administrators could be missed, preventing a full picture of AI tool adoption.

5.4. Future Research

Building on the current work, future studies should first adopt longitudinal designs to observe how students’ trust, perceived utility, and behavioral intentions evolve as AI mediation tools become more deeply integrated into institutional processes. To enrich our understanding of emotional and ethical nuances, mixed-methods approaches that combine large-scale surveys with qualitative interviews or focus groups are recommended [123]. Expanding the participant pool beyond students to include faculty members, professional mediators, IT staff, and administrators will provide a more comprehensive picture of system readiness, cultural fit, and governance requirements. Additionally, integrating AI mediation training and ethics modules into higher education curricula could help transform students from passive recipients into critical evaluators of automated conflict resolution systems. Finally, comparative studies across domains such as healthcare, justice, or corporate environments could test the scalability and adaptability of ethics-first AI mediation models beyond the academy.
From survey insight to scalable implementation, a logical next step is to design a lightweight, text-based AI mediation prototype for multilingual academic settings, leveraging state-of-the-art NLP models and a transparent rule layer to balance automation with human oversight. Incoming chat turns in Greek, Spanish, Romanian, and English would be tokenized and lemmatized with spaCy 3.7 [124] and embedded via XLM-R [125], then fine-tuned using Hugging Face Transformers v4.41 and PyTorch 2.1 [126,127]. A two-layer BiLSTM emotion classifier and a logistic-regression phase detector aligned with ISO 24617-2:2020 [128] annotations would execute within 300 ms on an RTX 4060. A parallel YAML rule layer (~25 rules) distilled from the mediation literature would be aligned via the Levenshtein distance [129], with token-level SHAP overlays providing explainability [130]. Deployed through a React/FastAPI front end (MIT license), the system could be localized in under a week. A small A/B trial (~10 sessions per cohort) analyzed via SEM in R (lavaan) would compare AI-augmented versus human-only mediation, tracing the causal pathway: AI feedback → perceived fairness → satisfaction, and informing iterative refinements aligned with our ethical and emotional design priorities.
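To make the envisaged pipeline more concrete, the sketch below illustrates two of the proposed components in Python: embedding a chat turn with XLM-R via Hugging Face Transformers, and matching the turn against a small YAML rule layer using a plain Levenshtein edit distance. The model checkpoint, the example rule, the distance threshold, and the placement of the BiLSTM classifier are assumptions consistent with the tools cited above, not a finished design.

```python
# Minimal sketch of two prototype components: multilingual embedding of a chat turn
# and fuzzy matching against a small YAML rule layer. The checkpoint, example rule,
# and similarity threshold are illustrative assumptions, not the final prototype.
import yaml
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
encoder = AutoModel.from_pretrained("xlm-roberta-base")

def embed(turn: str) -> torch.Tensor:
    """Mean-pooled XLM-R embedding of one chat turn (any supported language)."""
    inputs = tokenizer(turn, return_tensors="pt", truncation=True)
    with torch.no_grad():
        hidden = encoder(**inputs).last_hidden_state   # shape: (1, tokens, 768)
    return hidden.mean(dim=1).squeeze(0)

RULES_YAML = """
- id: acknowledge_emotion
  trigger: "i feel ignored"
  action: "Reflect the feeling back before proposing next steps."
"""

def levenshtein(a: str, b: str) -> int:
    """Plain dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def match_rule(turn: str, rules, max_distance: int = 10):
    """Return the rule whose trigger is closest to the turn, if close enough."""
    scored = [(levenshtein(turn.lower(), r["trigger"]), r) for r in rules]
    distance, rule = min(scored, key=lambda t: t[0])
    return rule if distance <= max_distance else None

rules = yaml.safe_load(RULES_YAML)
turn = "I feel ignored lately"
vector = embed(turn)                 # would feed the proposed BiLSTM emotion classifier
print(vector.shape, match_rule(turn, rules))
```

In a full prototype, the embedding would be consumed by the emotion classifier and phase detector, while the rule layer would remain human-readable so that mediators and oversight committees can audit exactly which triggers produce which suggestions.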

6. Conclusions

This study contributes to the evolving discourse on the responsible adoption of artificial intelligence in higher education by empirically investigating students’ trust in, acceptance of, and perceived utility of AI-powered mediation tools. Drawing on well-established theoretical models, including the Technology Acceptance Model [15], Media Richness Theory [12], Social Presence Theory [14], and Trust Theory [114], the research identifies a multi-dimensional acceptance landscape shaped not only by efficiency and functionality but also by ethical transparency, emotional sensitivity, and human–machine relational dynamics. The findings reinforce earlier research highlighting the importance of communication richness and interpersonal connection in digitally mediated interactions [131,132]. Students’ willingness to adopt AI tools for conflict resolution is strongly tied to their perceptions of emotional intelligence, confidentiality protections, and algorithmic transparency, factors that also align with current calls for ethically aligned AI design. Notably, social presence and perceived fairness were central to fostering trust, reiterating insights from Lowry et al. [108] and Ma et al. [133] about the socio-emotional underpinnings of technology-mediated governance. The cross-national data also support arguments that ethical leadership in digitally transformed institutions must integrate emotional engagement and CSR-like accountability [134].
The model proposed here offers a validated framework that institutions may adopt to design AI systems not as impersonal arbiters but as participatory tools aligned with academic values, stakeholder equity, and relational trust. The robustness of the proposed model is further reinforced by the composition of the research sample, which comprised significant participant groups from Romania (n = 104), Greece (n = 101), and Spain (n = 90), in addition to students attending multilingual, cross-national academic programs across Europe. Such representation across different cultural and institutional settings remains uncommon in student-focused research on AI adoption. In the present study, this diversity is treated not as a methodological constraint but as a key strength, as it highlights the broader relevance of trust, fairness, and emotional resonance in AI-mediated conflict resolution. The conclusions drawn from this model therefore possess substantial cross-cultural validity and practical applicability in varied academic environments.
While the study is not without limitations, including its cross-sectional design and single-stakeholder focus, it offers a foundation for longitudinal, interdisciplinary, and multi-actor research into AI in academic mediation contexts. The future of AI integration in higher education will not be defined solely by technical innovation but also by the cultivation of systems that are transparent, fair, and socially intelligent. Ultimately, as educational institutions transition toward digital governance, the human experience must remain central. This study reinforces that trust, empathy, and ethical design are not peripheral to AI adoption but are prerequisites for its legitimacy and success in the uniquely relational terrain of academic conflict resolution.

Author Contributions

Conceptualization, M.A.G. and S.T.; methodology, M.A.G., S.T. and G.T.; software, M.A.G. and S.G.; validation, M.A.G., S.G., S.T. and T.K.; formal analysis, M.A.G., S.T., G.T., T.K. and S.T.; investigation, M.A.G., S.T., G.T., T.K. and S.T.; resources, M.A.G., S.T., G.T., T.K. and S.T.; data curation, M.A.G., S.T., G.T., T.K. and S.T.; writing—original draft preparation, M.A.G.; writing—review and editing, M.A.G., S.T., G.T., T.K. and S.T.; visualization, M.A.G., S.T., G.T., T.K. and S.T.; supervision, S.T., G.T. and T.K.; project administration, S.T., G.T. and T.K.; funding acquisition, M.A.G. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

This study was conducted in accordance with the Declaration of Helsinki and approved by the Institutional Review Board (or Ethics Committee) of the University of Western Macedonia (113/2025, 10 December 2024), although it did not involve clinical studies on humans or animals.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding authors.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Triantari, S. Leadership: Leadership Theories from the Aristotelian Orator to the Modern Leader; K. & M. Stamoulis: Thessaloniki, Greece, 2020; ISBN 978-960-656-012-5. [Google Scholar]
  2. Zawacki-Richter, O.; Marín, V.I.; Bond, M.; Gouverneur, F. Systematic Review of Research on Artificial Intelligence Applications in Higher Education—Where Are the Educators? Int. J. Educ. Technol. High. Educ. 2019, 16, 39. [Google Scholar] [CrossRef]
  3. Battisti, D. Second-Person Authenticity and the Mediating Role of AI: A Moral Challenge for Human-to-Human Relationships? Philos. Technol. 2025, 38, 28. [Google Scholar] [CrossRef]
  4. Bolden, R.; Gosling, J.; O’Brien, A.; Peters, K.; Ryan, M.; Haslam, S.A. Academic Leadership: Changing Conceptions, Identities and Experiences in UK Higher Education; Leadership Foundation for Higher Education: London, UK, 2012; ISBN 978-1-906627-35-5. [Google Scholar]
  5. Liden, R.C.; Wang, X.; Wang, Y. The Evolution of Leadership: Past Insights, Present Trends, and Future Directions. J. Bus. Res. 2025, 186, 115036. [Google Scholar] [CrossRef]
  6. Avolio, B.J.; Kahai, S.S. Adding the “E” to E-Leadership: How It May Impact Your Leadership. Organ. Dyn. 2003, 31, 325–338. [Google Scholar] [CrossRef]
  7. Oncioiu, I.; Bularca, A.R. Artificial Intelligence Governance in Higher Education: The Role of Knowledge-Based Strategies in Fostering Legal Awareness and Ethical Artificial Intelligence Literacy. Societies 2025, 15, 144. [Google Scholar] [CrossRef]
  8. Guo, Y.; Dong, P.; Lu, B. The Influence of Public Expectations on Simulated Emotional Perceptions of AI-Driven Government Chatbots: A Moderated Study. J. Theor. Appl. Electron. Commer. Res. 2025, 20, 50. [Google Scholar] [CrossRef]
  9. Flavián, C.; Belk, R.W.; Belanche, D.; Casaló, L.V. Automated Social Presence in AI: Avoiding Consumer Psychological Tensions to Improve Service Value. J. Bus. Res. 2024, 175, 114545. [Google Scholar] [CrossRef]
  10. Guo, K. The Relationship Between Ethical Leadership and Employee Job Satisfaction: The Mediating Role of Media Richness and Perceived Organizational Transparency. Front. Psychol. 2022, 13, 885515. [Google Scholar] [CrossRef]
  11. Fraser-Burgess, S.; Heybach, J.; Metro-Roland, D. Emerging Ethical Pathways and Frameworks: Integration, Disruption, and New Ethical Paradigms. In The Cambridge Handbook of Ethics and Education, 1st ed.; Kristjánsson, K., Gregory, M., Eds.; Cambridge University Press: Cambridge, UK, 2024; pp. 593–867. [Google Scholar]
  12. Daft, R.L.; Lengel, R.H. Organizational Information Requirements, Media Richness and Structural Design. Manag. Sci. 1986, 32, 554–571. [Google Scholar] [CrossRef]
  13. Gunawardena, C.N. Social Presence Theory and Implications for Interaction and Collaborative Learning in Computer Conferences. Int. J. Educ. Telecommun. 1995, 1, 147–166. [Google Scholar]
  14. Short, J.; Williams, E.; Christie, B. The Social Psychology of Telecommunications; John Wiley & Sons: Hoboken, NJ, USA, 1976. [Google Scholar]
  15. Davis, F.D. Perceived Usefulness, Perceived Ease of Use, and User Acceptance of Information Technology. MIS Q. 1989, 13, 319–340. [Google Scholar] [CrossRef]
  16. Forcael, E.; Garcés, G.; Lantada, A.D. Convergence of Educational Paradigms into Engineering Education 5.0. In Proceedings of the 2023 World Engineering Education Forum—Global Engineering Deans Council (WEEF-GEDC), Monterrey, Mexico, 23–27 October 2023; pp. 1–8. [Google Scholar] [CrossRef]
  17. Broo, D.G.; Kaynak, O.; Sait, S.M. Rethinking engineering education at the age of Industry 5.0. J. Ind. Inf. Integr. 2021, 25, 100311. [Google Scholar] [CrossRef]
  18. Shahidi Hamedani, S.; Aslam, S.; Mundher Oraibi, B.A.; Wah, Y.B.; Shahidi Hamedani, S. Transitioning towards tomorrow’s workforce: Education 5.0 in the landscape of Society 5.0—A systematic literature review. Educ. Sci. 2024, 14, 1041. [Google Scholar] [CrossRef]
  19. Calvetti, D.; Mêda, P.; de Sousa, H.; Chichorro Gonçalves, M.; Amorim Faria, J.M.; Moreira da Costa, J. Experiencing Education 5.0 for civil engineering. Procedia Comput. Sci. 2024, 232, 2416–2425. [Google Scholar] [CrossRef]
  20. Wang, Z. Media richness and continuance intention to online learning platforms: The mediating role of social presence and the moderating role of need for cognition. Front. Psychol. 2022, 13, 950501. [Google Scholar] [CrossRef] [PubMed]
  21. Oh, C.S.; Bailenson, J.N.; Welch, G.F. A systematic review of social presence: Definition, antecedents, and implications. Front. Robot. AI 2018, 5, 114. [Google Scholar] [CrossRef]
  22. Kreijns, K.; Xu, K.; Weidlich, J. Social presence: Conceptualization and measurement. Educ. Psychol. Rev. 2022, 34, 139–170. [Google Scholar] [CrossRef] [PubMed]
  23. Luo, Y.; Sun, L. The Effects of Social and Spatial Presence on Learning Engagement in Sustainable E-Learning. Sustainability 2025, 17, 4082. [Google Scholar] [CrossRef]
  24. Mari, A.; Mandelli, A.; Algesheimer, R. Empathic voice assistants: Enhancing consumer responses in voice commerce. J. Bus. Res. 2024, 175, 114566. [Google Scholar] [CrossRef]
  25. Zhai, C.; Wibowo, S.; Li, L.D. The effects of over-reliance on AI dialogue systems on students’ cognitive abilities: A systematic review. Smart Learn. Environ. 2024, 11, 28. [Google Scholar] [CrossRef]
  26. Ahmad, S.F.; Han, H.; Alam, M.M.; Rehmat, M.; Irshad, M.; Arraño-Muñoz, M.; Ariza-Montes, A. Impact of artificial intelligence on human loss in decision making, laziness and safety in education. Humanit. Soc. Sci. Commun. 2023, 10, 17. [Google Scholar] [CrossRef]
  27. Venkatesh, V.; Thong, J.Y.L.; Xu, X. Unified Theory of Acceptance and Use of Technology: A Synthesis and the Road Ahead. J. Assoc. Inf. Syst. 2016, 17, 328–376. [Google Scholar] [CrossRef]
  28. Lee, A.T.; Ramasamy, R.K.; Subbarao, A. Understanding Psychosocial Barriers to Healthcare Technology Adoption: A Review of TAM Technology Acceptance Model and Unified Theory of Acceptance and Use of Technology and UTAUT Frameworks. Healthcare 2025, 13, 250. [Google Scholar] [CrossRef]
  29. Alarcon, G.M.; Lyons, J.B.; Christensen, J.C.; Klosterman, S.L.; Bowers, M.A.; Ryan, T.J. The Effect of Propensity to Trust and Perceptions of Trustworthiness on Trust Behaviors in Dyads. Behav. Res. 2018, 50, 1906–1920. [Google Scholar] [CrossRef]
  30. Cetinkaya, N.E.; Krämer, N. Between Transparency and Trust: Identifying Key Factors in AI System Perception. Behav. Inf. Technol. 2025; online ahead of print. [Google Scholar] [CrossRef]
  31. Aydoğan, R.; Baarslag, T.; Gerding, E. Artificial Intelligence Techniques for Conflict Resolution. Group Decis. Negot. 2021, 30, 879–883. [Google Scholar] [CrossRef]
  32. Sundar, S.S.; Lee, E. Rise of Machine Agency: A Framework for Studying the Psychology of Human–AI Interaction (HAII). J. Comput.-Mediat. Commun. 2020, 25, 74–88. [Google Scholar] [CrossRef]
  33. Castellanos-Reyes, D.; Richardson, J.C.; Maeda, Y. The Evolution of Social Presence: A Longitudinal Exploration of the Effect of Online Students’ Peer-Interactions Using Social Network Analysis. Internet High. Educ. 2024, 61, 100939. [Google Scholar] [CrossRef]
  34. IEEE. Ethically Aligned Design: A Vision for Prioritizing Human Well-Being with Autonomous and Intelligent Systems, 2nd ed.; IEEE Standards Association: Piscataway, NJ, USA, 2019; Available online: https://standards.ieee.org/wp-content/uploads/import/documents/other/ead_v2.pdf (accessed on 30 April 2025).
  35. Vieriu, A.M.; Petrea, G. The Impact of Artificial Intelligence (AI) on Students’ Academic Development. Educ. Sci. 2025, 15, 343. [Google Scholar] [CrossRef]
  36. Đerić, E.; Frank, D.; Milković, M. Trust in Generative AI Tools: A Comparative Study of Higher Education Students, Teachers, and Researchers. Information 2025, 16, 622. [Google Scholar] [CrossRef]
  37. Bankins, S.; Formosa, P.; Griep, Y.; Richards, D. AI Decision Making with Dignity? Contrasting Workers’ Justice Perceptions of Human and AI Decision Making in a Human Resource Management Context. Inf. Syst. Front. 2022, 24, 857–875. [Google Scholar] [CrossRef]
  38. Marvi, R.; Foroudi, P.; AmirDadbar, N. Dynamics of User Engagement: AI Mastery Goal and the Paradox Mindset in AI–Employee Collaboration. Int. J. Inf. Manag. 2025, 83, 102908. [Google Scholar] [CrossRef]
  39. Xiao, Y.; Yu, S. Can ChatGPT Replace Humans in Crisis Communication? The Effects of AI-Mediated Crisis Communication on Stakeholder Satisfaction and Responsibility Attribution. Int. J. Inf. Manag. 2025, 80, 102835. [Google Scholar] [CrossRef]
  40. Chan, C.K.Y. AI as the Therapist: Student Insights on the Challenges of Using Generative AI for School Mental Health Frameworks. Behav. Sci. 2025, 15, 287. [Google Scholar] [CrossRef] [PubMed]
  41. Sposato, M. Artificial Intelligence in Educational Leadership: A Comprehensive Taxonomy and Future Directions. Int. J. Educ. Technol. High. Educ. 2025, 22, 20. [Google Scholar] [CrossRef]
  42. Sethi, S.S.; Jain, K. AI Technologies for Social Emotional Learning: Recent Research and Future Directions. J. Res. Innov. Teach. Learn. 2024, 17, 213–225. [Google Scholar] [CrossRef]
  43. Ciechanowski, L.; Przegalinska, A.; Magnuski, M.; Gloor, P. In the Shades of the Uncanny Valley: An Experimental Study of Human–Chatbot Interaction. Future Gener. Comput. Syst. 2019, 92, 539–548. [Google Scholar] [CrossRef]
  44. Murire, O.T. Artificial Intelligence and Its Role in Shaping Organizational Work Practices and Culture. Adm. Sci. 2024, 14, 316. [Google Scholar] [CrossRef]
  45. Kezar, A.; Eckel, P.D. The Effect of Institutional Culture on Change Strategies in Higher Education: Universal Principles or Culturally Responsive Concepts? J. High. Educ. 2002, 73, 435–460. [Google Scholar] [CrossRef]
  46. Aldosari, A.M.; Alramthi, S.M.; Eid, H.F. Improving Social Presence in Online Higher Education: Using Live Virtual Classrooms to Confront Learning Challenges during the COVID-19 Pandemic. Front. Psychol. 2022, 13, 994403. [Google Scholar] [CrossRef]
  47. Dwivedi, Y.K.; Kshetri, N.; Hughes, L.; Slade, E.L.; Jeyaraj, A.; Kar, A.K.; Baabdullah, A.M.; Koohang, A.; Raghavan, V.; Ahuja, M.; et al. Opinion Paper: “So What If ChatGPT Wrote It?” Multidisciplinary Perspectives on Opportunities, Challenges and Implications of Generative Conversational AI for Research, Practice and Policy. Int. J. Inf. Manag. 2023, 71, 102642. [Google Scholar] [CrossRef]
  48. Ngo, A.; Kocoń, J. Integrating Personalized and Contextual Information in Fine-Grained Emotion Recognition in Text: A Multi-Source Fusion Approach with Explainability. Inf. Fusion 2025, 118, 102966. [Google Scholar] [CrossRef]
  49. Aly, M. Revolutionizing online education: Advanced facial expression recognition for real-time student progress tracking via deep learning model. Multimed. Tools Appl. 2025, 84, 12575–12614. [Google Scholar] [CrossRef]
  50. Chen, A.; Evans, R.; Zeng, R. Editorial: Coping with an AI-saturated world: Psychological dynamics and outcomes of AI-mediated communication. Front. Psychol. 2024, 15, 1479981. [Google Scholar] [CrossRef] [PubMed]
  51. Lim, J.S.; Hong, N.; Schneider, E. How Warm-Versus Competent-Toned AI Apologies Affect Trust and Forgiveness through Emotions and Perceived Sincerity. Comput. Hum. Behav. 2025, 172, 108761. [Google Scholar] [CrossRef]
  52. Carbonell, G.; Barbu, C.-M.; Vorgerd, L.; Brand, M.; Molnar, A. The Impact of Emotionality and Trust Cues on the Perceived Trustworthiness of Online Reviews. Cogent Bus. Manag. 2019, 6, 1586062. [Google Scholar] [CrossRef]
  53. Carlson, J.R.; Zmud, R.W. Channel expansion theory and the experiential nature of media richness perceptions. Acad. Manag. J. 1999, 42, 153–170. [Google Scholar] [CrossRef]
  54. Zhang, L.; Mo, L.; Sun, X.; Zhou, Z.; Ren, J. How visual and mental human-likeness of virtual influencers affects customer–brand relationship on e-commerce platform. J. Theor. Appl. Electron. Commer. Res. 2025, 20, 200. [Google Scholar] [CrossRef]
  55. Rapanta, C.; Bhatt, I.; Bozkurt, A.; Chubb, L.A.; Erb, C.; Forsler, I.; Gravett, K.; Koole, M.; Lintner, T.; Örtegren, A.; et al. Critical GenAI literacy: Postdigital configurations. Postdigit. Sci. Educ. 2025; online ahead of print. [Google Scholar] [CrossRef]
  56. Hassani, H.; Silva, E.S.; Unger, S.; TajMazinani, M.; MacFeely, S. Artificial intelligence (AI) or intelligence augmentation (IA): What is the future? AI 2020, 1, 143–155. [Google Scholar] [CrossRef]
  57. Ifenthaler, D.; Yau, J.Y.K. Utilising learning analytics to support study success in higher education: A systematic review. Educ. Technol. Res. Dev. 2020, 68, 1961–1990. [Google Scholar] [CrossRef]
  58. Zafar, M. Normativity and AI moral agency. AI Ethics 2025, 5, 2605–2622. [Google Scholar] [CrossRef]
  59. Holmes, W.; Bialik, M.; Fadel, C. Artificial Intelligence in Education: Promises and Implications for Teaching and Learning; Global Series; Globethics Publications: Geneva, Switzerland, 2019. [Google Scholar] [CrossRef]
  60. Mattioli, M.; Cabitza, F. Not in my face: Challenges and ethical considerations in automatic face emotion recognition technology. Mach. Learn. Knowl. Extr. 2024, 6, 2201–2231. [Google Scholar] [CrossRef]
  61. Choung, H.; Seberger, J.S.; David, P. When AI is perceived to be fairer than a human: Understanding perceptions of algorithmic decisions in a job application context. Int. J. Hum.-Comput. Interact. 2023, 40, 7451–7468. [Google Scholar] [CrossRef]
  62. Wang, Y.; Gong, D.; Xiao, R.; Wu, X.; Zhang, H. A systematic review on extended reality-mediated multi-user social engagement. Systems 2024, 12, 396. [Google Scholar] [CrossRef]
  63. Cummings, J.J.; Wertz, E.E. Capturing social presence: Concept explication through an empirical analysis of social presence measures. J. Comput.-Mediat. Commun. 2023, 28, zmac027. [Google Scholar] [CrossRef]
  64. Jim, J.R.; Talukder, M.A.R.; Malakar, P.; Kabir, M.M.; Nur, K.; Mridha, M.F. Recent advancements and challenges of NLP-based sentiment analysis: A state-of-the-art review. Nat. Lang. Process. J. 2024, 6, 100059. [Google Scholar] [CrossRef]
  65. Papa, R.; Jackson, K.M. (Eds.) AI Transforms Twentieth-Century Learning. In Artificial Intelligence, Human Agency and the Educational Leader; Springer: Cham, Switzerland, 2021; pp. 1–32. [Google Scholar] [CrossRef]
  66. Ki, S.; Park, S.; Ryu, J.; Kim, J.; Kim, I. Alone but not isolated: Social presence and cognitive load in learning with 360 virtual reality videos. Front. Psychol. 2024, 15, 1305477. [Google Scholar] [CrossRef]
  67. Hsu, A.; Chaudhary, D. AI4PCR: Artificial intelligence for practicing conflict resolution. Comput. Hum. Behav. Artif. Hum. 2023, 1, 100002. [Google Scholar] [CrossRef]
  68. Li, B.J.; Lee, E.W.J.; Goh, Z.H.; Tandoc, E. From frequency to fatigue: Exploring the influence of videoconference use on videoconference fatigue in Singapore. Comput. Hum. Behav. Rep. 2022, 7, 100214. [Google Scholar] [CrossRef]
  69. Denzin, N.K.; Lincoln, Y.S. (Eds.) The SAGE Handbook of Qualitative Research, 5th ed.; SAGE Publications: Thousand Oaks, CA, USA, 2018; ISBN 978-1-4833-4980-0. [Google Scholar]
  70. Sun, X.; Yang, Y.; Song, Y. Unlocking the Synergy: Increasing Productivity through Human–AI Collaboration in the Industry 5.0 Era. Comput. Ind. Eng. 2025, 200, 110657. [Google Scholar] [CrossRef]
  71. Kinchin, N. “Voiceless”: The procedural gap in algorithmic justice. Int. J. Law Inf. Technol. 2024, 32, eaae024. [Google Scholar] [CrossRef]
  72. Wanner, J.; Herm, L.V.; Heinrich, K.; Janiesch, C. The effect of transparency and trust on intelligent system acceptance: Evidence from a user-based study. Electron. Mark. 2022, 32, 2079–2102. [Google Scholar] [CrossRef]
  73. Cao, N.; Cheung, S.-O.; Li, K. Perceptive biases in construction mediation: Evidence and application of artificial intelligence. Buildings 2023, 13, 2460. [Google Scholar] [CrossRef]
  74. Al-Adwan, A.S.; Li, N.; Al-Adwan, A.; Abbasi, G.A.; Albelbisi, N.A.; Habibi, A. Extending the technology acceptance model (TAM) to predict university students’ intentions to use metaverse-based learning platforms. Educ. Inf. Technol. 2023, 28, 15381–15413. [Google Scholar] [CrossRef]
  75. Ibrahim, F.; Münscher, J.-C.; Daseking, M.; Telle, N.-T. The technology acceptance model and adopter type analysis in the context of artificial intelligence. Front. Artif. Intell. 2025, 7, 1496518. [Google Scholar] [CrossRef]
  76. Yusuf, A.; Pervin, N.; Román-González, M. Generative AI and the future of higher education: A threat to academic integrity or reformation? Evidence from multicultural perspectives. Int. J. Educ. Technol. High. Educ. 2024, 21, 21. [Google Scholar] [CrossRef]
  77. Oppong, M. Leveraging predictive analytics to enhance student retention: A case study of Georgia State University. ResearchGate, 2024; accessed online. [Google Scholar] [CrossRef]
  78. Lopes, S.; Chalil Madathil, K.; Bertrand, J.; Brady, C.; Li, D.; McNeese, N. Effect of mental fatigue on trust and workload with AI-enabled infrastructure visual inspection systems. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting, Atlanta, GA, USA, 10–14 October 2022; Sage: Los Angeles, CA, USA, 2022; Volume 66, pp. 285–286. [Google Scholar] [CrossRef]
  79. Mosakas, K. On the moral status of social robots: Considering the consciousness criterion. AI Soc. 2021, 36, 429–443. [Google Scholar] [CrossRef]
  80. Dong, X.; Wang, Z.; Han, S. Mitigating learning burnout caused by generative artificial intelligence misuse in higher education: A case study in programming language teaching. Informatics 2025, 12, 51. [Google Scholar] [CrossRef]
  81. Smutny, P.; Schreiberova, P. Chatbots for learning: A review of educational chatbots for the Facebook Messenger. Comput. Educ. 2020, 151, 103862. [Google Scholar] [CrossRef]
  82. Selwyn, N. The future of AI and education: Some cautionary notes. Eur. J. Educ. 2022, 57, 620–631. [Google Scholar] [CrossRef]
  83. Bittle, K.; El-Gayar, O. Generative AI and academic integrity in higher education: A systematic review and research agenda. Information 2025, 16, 296. [Google Scholar] [CrossRef]
  84. Buijsman, S.; Carter, S.E.; Bermúdez, J.P. Autonomy by design: Preserving human autonomy in AI decision-support. Philos. Technol. 2025, 38, 97. [Google Scholar] [CrossRef]
  85. Sadek, M.; Mougenot, C. Challenges in value-sensitive AI design: Insights from AI practitioner interviews. Int. J. Hum.-Comput. Interact. 2024; accessed online. [Google Scholar] [CrossRef]
  86. Ulven, J.B.; Wangen, G. A systematic review of cybersecurity risks in higher education. Future Internet 2021, 13, 39. [Google Scholar] [CrossRef]
  87. Creswell, J.W.; Creswell, J.D. Research Design: Qualitative, Quantitative, and Mixed Methods Approaches, 5th ed.; SAGE Publications: Thousand Oaks, CA, USA, 2018. [Google Scholar]
  88. Braun, V.; Clarke, V. Using thematic analysis in psychology. Qual. Res. Psychol. 2006, 3, 77–101. [Google Scholar] [CrossRef]
  89. Yin, R.K. Case Study Research: Design and Methods, 4th ed.; SAGE Publications: Thousand Oaks, CA, USA, 2009. [Google Scholar]
  90. Boateng, G.O.; Neilands, T.B.; Frongillo, E.A.; Melgar-Quiñonez, H.R.; Young, S.L. Best practices for developing and validating scales for health, social, and behavioral research: A primer. Front. Public Health 2018, 6, 149. [Google Scholar] [CrossRef]
  91. DeVellis, R.F.; Thorpe, C.T. Scale Development: Theory and Applications, 5th ed.; SAGE Publications: Thousand Oaks, CA, USA, 2021. [Google Scholar]
  92. Nunnally, J.C.; Bernstein, I.H. Psychometric Theory, 3rd ed.; McGraw-Hill: New York, NY, USA, 1994. [Google Scholar]
  93. Vaske, J.J.; Beaman, J.; Sponarski, C.C. Rethinking internal consistency in Cronbach’s alpha. Leis. Sci. 2016, 39, 163–173. [Google Scholar] [CrossRef]
  94. Fošner, A. University students’ attitudes and perceptions towards AI tools: Implications for sustainable educational practices. Sustainability 2024, 16, 8668. [Google Scholar] [CrossRef]
  95. Bulathwela, S.; Pérez-Ortiz, M.; Holloway, C.; Cukurova, M.; Shawe-Taylor, J. Artificial intelligence alone will not democratise education: On educational inequality, techno-solutionism and inclusive tools. Sustainability 2024, 16, 781. [Google Scholar] [CrossRef]
  96. Nazaretsky, T.; Mejia-Domenzain, P.; Swamy, V.; Frej, J.; Käser, T. The critical role of trust in adopting AI-powered educational technology for learning: An instrument for measuring student perceptions. Comput. Educ. Artif. Intell. 2025, 8, 100368. [Google Scholar] [CrossRef]
  97. Al-Zahrani, A.M.; Alasmari, T.M. Exploring the impact of artificial intelligence on higher education: The dynamics of ethical, social, and educational implications. Humanit. Soc. Sci. Commun. 2024, 11, 912. [Google Scholar] [CrossRef]
  98. Hair, J.F.; Babin, B.J.; Anderson, R.E.; Black, W.C. Multivariate Data Analysis, 8th ed.; Cengage Learning: Boston, MA, USA, 2019. [Google Scholar]
  99. Bond, M.; Khosravi, H.; De Laat, M.; Bergdahl, N.; Negrea, V.; Oxley, E.; Pham, P.; Siemens, S.W.C.G. A meta systematic review of artificial intelligence in higher education: A call for increased ethics, collaboration, and rigour. Int. J. Educ. Technol. High. Educ. 2024, 21, 4. [Google Scholar] [CrossRef]
  100. Rogers, E.M. Diffusion of Innovations, 5th ed.; Free Press: New York, NY, USA, 2003. [Google Scholar]
  101. Bates, T.; Cobo, C.; Mariño, O.; Wheeler, S. Can artificial intelligence transform higher education? Int. J. Educ. Technol. High. Educ. 2020, 17, 42. [Google Scholar] [CrossRef]
  102. Stacks, D.W.; Salwen, M.B. (Eds.) An Integrated Approach to Communication Theory and Research, 2nd ed.; Routledge: New York, NY, USA, 2008. [Google Scholar] [CrossRef]
  103. Viberg, O.; Hatakka, M.; Bälter, O.; Mavroudi, A. The current landscape of learning analytics in higher education. Comput. Hum. Behav. 2018, 89, 98–110. [Google Scholar] [CrossRef]
  104. Choung, H.; David, P.; Ross, A. Trust in AI and its role in the acceptance of AI technologies. Int. J. Hum.-Comput. Interact. 2022, 39, 1727–1739. [Google Scholar] [CrossRef]
  105. Zhou, M.; Liu, L.; Feng, Y. Building citizen trust to enhance satisfaction in digital public services: The role of empathetic chatbot communication. Behav. Inf. Technol. 2025; accessed online. [Google Scholar] [CrossRef]
  106. Pew Research Center. How Americans View Data Privacy: The Role of Technology Companies, AI and Regulation—Plus Personal Experiences with Data Breaches, Passwords, Cybersecurity and Privacy Policies; Pew Research Center: Washington, DC, USA, 2023; Available online: https://www.pewresearch.org/internet/2023/10/18/how-americans-view-data-privacy/ (accessed on 10 April 2025).
  107. Willems, J.; Schmid, M.J.; Vanderelst, D.; Vogel, D.; Ebinger, F. AI-Driven Public Services and the Privacy Paradox: Do Citizens Really Care about Their Privacy? Public Manag. Rev. 2022, 24, 2116–2134. [Google Scholar] [CrossRef]
  108. Hancock, P.A.; Kessler, T.T.; Kaplan, A.D.; Stowers, K.; Brill, J.C.; Billings, D.R.; Schaefer, K.E.; Szalma, J.L. How and Why Humans Trust: A Meta-Analysis and Elaborated Model. Front. Psychol. 2023, 14, 1081086. [Google Scholar] [CrossRef]
  109. Mustofa, R.H.; Kuncoro, T.G.; Atmono, D.; Hermawan, H.D.; Sukirman. Extending the technology acceptance model: The role of subjective norms, ethics, and trust in AI tool adoption among students. Comput. Educ. Artif. Intell. 2025, 8, 100379. [Google Scholar] [CrossRef]
  110. Stahl, B.C. Embedding responsibility in intelligent systems: From AI ethics to responsible AI ecosystems. Sci. Rep. 2023, 13, 7586. [Google Scholar] [CrossRef]
  111. Renick, J.; Wegemer, C.M.; Reich, S.M. Relational principles for enacting social justice values in educational partnerships. J. High. Educ. Outreach Engagem. 2024, 28, 135–152. [Google Scholar]
  112. Kamalov, F.; Santandreu Calonge, D.; Gurrib, I. New era of artificial intelligence in education: Towards a sustainable multifaceted revolution. Sustainability 2023, 15, 12451. [Google Scholar] [CrossRef]
  113. Kozak, J.; Fel, S. How sociodemographic factors relate to trust in artificial intelligence among students in Poland and the United Kingdom. Sci. Rep. 2024, 14, 28776. [Google Scholar] [CrossRef]
  114. Mayer, R.C.; Davis, J.H.; Schoorman, F.D. An Integrative Model of Organizational Trust. Acad. Manag. Rev. 1995, 20, 709–734. [Google Scholar] [CrossRef]
  115. Saranya, A.; Subhashini, R. A systematic review of explainable artificial intelligence models and applications: Recent developments and future trends. Decis. Anal. J. 2023, 7, 100230. [Google Scholar] [CrossRef]
  116. Baker, R.S.; Hawn, A. Algorithmic bias in education. Int. J. Artif. Intell. Educ. 2022, 32, 1052–1092. [Google Scholar] [CrossRef]
  117. Almheiri, H.M.; Ahmad, S.Z.; Khalid, K.; Ngah, A.H. Examining the impact of artificial intelligence capability on dynamic capabilities, organizational creativity and organization performance in public organizations. J. Syst. Inf. Technol. 2025, 27, 1–20. [Google Scholar] [CrossRef]
  118. Blanka, C.; Krumay, B.; Rueckel, D. The interplay of digital transformation and employee competency: A design science approach. Technol. Forecast. Soc. Change 2022, 178, 121575. [Google Scholar] [CrossRef]
  119. Díaz-Rodríguez, N.; Del Ser, J.; Coeckelbergh, M.; López de Prado, M.; Herrera-Viedma, E.; Herrera, F. Connecting the dots in trustworthy artificial intelligence: From AI principles, ethics, and key requirements to responsible AI systems and regulation. Inf. Fusion 2023, 99, 101896. [Google Scholar] [CrossRef]
  120. Tzirides, A.O.; Zapata, G.; Kastania, N.P.; Saini, A.K.; Castro, V.; Ismael, S.A.; You, Y.; dos Santos, T.A.; Searsmith, D.; O’Brien, C.; et al. Combining human and artificial intelligence for enhanced AI literacy in higher education. Comput. Educ. Open 2024, 6, 100184. [Google Scholar] [CrossRef]
  121. Liu, B.L.; Morales, D.; Roser-Chinchilla, J.; Sabzalieva, E.; Valentini, A.; Vieira do Nascimento, D.; Yerovi, C. Harnessing the Era of Artificial Intelligence in Higher Education: A Primer for Higher Education Stakeholders; UNESCO International Institute for Higher Education in Latin America and the Caribbean: Paris, France; Caracas, Venezuela, 2023; Available online: https://unesdoc.unesco.org/ark:/48223/pf0000386670 (accessed on 10 April 2025).
  122. Hofstede, G. Culture’s Consequences: Comparing Values, Behaviors, Institutions and Organizations Across Nations, 2nd ed.; SAGE Publications: Thousand Oaks, CA, USA, 2001. [Google Scholar]
  123. Dell’Aquila, E.; Ponticorvo, M.; Limone, P. Psychological foundations for effective human–computer interaction in education. Appl. Sci. 2025, 15, 3194. [Google Scholar] [CrossRef]
  124. Honnibal, M.; Montani, I.; Van Landeghem, S.; Boyd, A. spaCy: Industrial-Strength Natural Language Processing in Python. 2020. Available online: https://spacy.io (accessed on 10 April 2025).
  125. Conneau, A.; Khandelwal, K.; Goyal, N.; Chaudhary, V.; Wenzek, G.; Guzmán, F.; Grave, E.; Ott, M.; Zettlemoyer, L.; Stoyanov, V. Unsupervised Cross-lingual Representation Learning at Scale. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics (ACL 2020), Online, 5–10 July 2020; pp. 8440–8451. [Google Scholar] [CrossRef]
  126. Chen, S.; Wang, C.; Chen, Z.; Wu, Y.; Liu, S.; Chen, Z.; Li, J.; Kanda, N.; Yoshioka, T.; Xiao, X.; et al. WavLM: Large-Scale Self-Supervised Pre-Training for Full-Stack Speech Processing. IEEE J. Sel. Top. Signal Process. 2022, 16, 1505–1518. [Google Scholar] [CrossRef]
  127. Chai, C.P. Comparison of Text Preprocessing Methods. Nat. Lang. Eng. 2022, 29, 509–553. [Google Scholar] [CrossRef]
  128. Bunt, H.; Petukhova, V.; Gilmartin, E.; Pelachaud, C.; Fang, A.; Keizer, S.; Prévot, L. The ISO Standard for Dialogue Act Annotation, Second Edition. In Proceedings of the Twelfth Language Resources and Evaluation Conference, Marseille, France, 11–16 May 2020; Calzolari, N., Béchet, F., Blache, P., Choukri, K., Cieri, C., Declerck, T., Goggi, S., Isahara, H., Maegaard, B., Mariani, J., et al., Eds.; European Language Resources Association: Marseille, France, 2020; pp. 549–558, ISBN 979-10-95546-34-4. Available online: https://aclanthology.org/2020.lrec-1.69/ (accessed on 1 April 2025).
  129. Nguyen, G.; Dlugolinsky, S.; Bobák, M.; Tran, V.; García, Á.L.; Heredia, I.; Malík, P.; Hluchý, L. Machine Learning and Deep Learning Frameworks and Libraries for Large-Scale Data Mining: A Survey. Artif. Intell. Rev. 2019, 52, 77–124. [Google Scholar] [CrossRef]
  130. Navarro, G. A Guided Tour to Approximate String Matching. ACM Comput. Surv. 2001, 33, 31–88. [Google Scholar] [CrossRef]
  131. Suh, K.S. Impact of Communication Medium on Task Performance and Satisfaction: An Examination of Media-Richness Theory. Inf. Manag. 1999, 35, 295–312. [Google Scholar] [CrossRef]
  132. Walther, J.B. Interpersonal Effects in Computer-Mediated Interaction: A Relational Perspective. Commun. Res. 1992, 19, 52–90. [Google Scholar] [CrossRef]
  133. Ma, L.; Zhang, X.; Xu, Y.; Liu, C. Perceived Social Fairness and Trust in Government Serially Mediate the Effect of Governance Quality on Subjective Well-Being. Sci. Rep. 2024, 14, 14321. [Google Scholar] [CrossRef]
  134. Triantari, S.; Vavouras, E. Decision-making in the modern manager-leader: Organizational ethics, business ethics, and corporate social responsibility. Cogito 2024, 16, 7–15. [Google Scholar]
Figure 1. Conceptual model of factors influencing students’ intention to use AI-powered mediation tools.
Figure 2. Demographic distribution of the study sample (N = 287) by gender, age, institution type, and academic field.
Figure 3. Thematic map of the qualitative responses (n = 127), based on Braun and Clarke’s (2006) [88] six-phase approach, showing the four main categories: transparency, hybrid models, emotional intelligence, and infrastructure.
Figure 4. Correlation matrix of the principal constructs in the SEM, showing the Pearson coefficients and highlighting strong positive links among automation, efficiency, trust, and behavioral intention, consistent with the conceptual model in Figure 1.
Figure 5. Theoretical synthesis diagram linking key frameworks to constructs influencing students’ adoption of AI-powered mediation tools.
Figure 6. Standardized regression weights (β) showing the influence of trust, efficiency, confidentiality, and automation on students’ behavioral intention to use AI-powered mediation tools (K). Values are derived from the multiple regression model (Adj. R2 = 0.571, F = 126.0, p < 0.001). Asterisks indicate significance levels: * p < 0.05, *** p < 0.001.