Article

Prospects for Integrating Artificial Intelligence into the Administration of Higher Education in Greece

by Ourania Bousiou 1,2,3,*, Michael Paraskevas 1,4, Vaggelis Kapoulas 1 and Panagiotis Liargovas 5
1 Computer Technology Institute and Press “Diophantus”, 26504 Patras, Greece
2 Department of Education and Social Work, University of Patras, 26504 Patras, Greece
3 School of Humanities, Hellenic Open University, 26335 Patras, Greece
4 Electrical and Computer Engineering Department, University of Peloponnese, 26334 Patras, Greece
5 Department of Management Science and Technology, University of Peloponnese, 22100 Tripoli, Greece
* Author to whom correspondence should be addressed.
Adm. Sci. 2026, 16(3), 131; https://doi.org/10.3390/admsci16030131
Submission received: 2 January 2026 / Revised: 24 February 2026 / Accepted: 28 February 2026 / Published: 6 March 2026

Abstract

This study examines administrative employees’ perceptions of integrating Artificial Intelligence (AI) into the administration of Greek public universities. Using a cross-sectional online questionnaire administered across three universities (N = 127), we map perceptions across five domains: (i) perceived efficiency/effectiveness contributions, (ii) perceived automation benefits, (iii) perceived adoption challenges, (iv) perceived ethics and data protection requirements, and (v) perceived skills development needs. Results indicate a generally supportive climate for AI use in university administration, but support is conditional: ethics and data protection are prioritized most strongly, whereas perceived efficiency/effectiveness gains are closer to neutral-to-slightly positive. Respondents endorse task-level automation more than broad organizational performance claims and emphasize training and human oversight as enabling conditions for responsible deployment. These findings suggest that a governance-first and capacity-first implementation pathway may be more aligned with staff priorities in the Greek public university context. The study provides an exploratory baseline for future evaluative research on AI-enabled administrative modernization.

1. Introduction

In recent years, debates on public-sector modernization have increasingly shifted from paper-based, rule-bound administration toward digital governance that is citizen-oriented, data-driven, and accountable (Dessler, 2024; Reis et al., 2019). Within this broader transition, Artificial Intelligence (AI) has been promoted as a potential enabler of faster service delivery, improved resource allocation, and more consistent decision support (Chowdhury et al., 2023). Greece, like many European countries, has pursued national digital governance reforms, yet the translation of these macro-level ambitions into organizational practice remains uneven, especially in public organizations characterized by procedural formalism and limited administrative capacity. In higher education, these pressures coincide with rising demands for timely student services and compliance-heavy processes, making administrative modernization a practical governance problem rather than a purely technical upgrade.
A key analytical challenge is that national digital-governance goals do not automatically map onto the institutional constraints and workflows of universities (EC, 2024). Public universities operate under distinct legal mandates, multi-actor governance, and heterogeneous information systems, and they must balance responsiveness with transparency, due process, and data protection (Chatterjee & Sreenivasulu, 2019). In university administration, AI adoption would typically target back-office processes, such as case handling and document management, information provision to students and staff, scheduling, financial and procurement support, and management reporting, where automation and decision-support tools could reduce routine workload. University workflows also involve discretionary judgment and multi-step approvals, which limit end-to-end automation and highlight the role of human-in-the-loop decision support and clear accountability arrangements (Foss & Laursen, 2012; Oracle, 2019; Votto et al., 2021; Uzun et al., 2022).
These realities are particularly salient in Greece, where administrative procedures are often perceived as complex and time-consuming and where reforms are implemented within a centralized and legally intensive public administration environment. Consequently, the prospects of AI integration depend not only on expected performance gains (e.g., efficiency and effectiveness) (Vinichenko et al., 2020) but also on legitimacy considerations (e.g., ethics, fairness, transparency, GDPR compliance) (Chatterjee & Sreenivasulu, 2019; Leslie et al., 2021; Lin et al., 2021; Kerr, 2020; Kaplan, 2023; Dabis & Csáki, 2024; Jungwirth, 2025) and on organizational capacity (e.g., staff skills, training, and human oversight) (Berber & Slavić, 2016; Manresa et al., 2019; Oracle, 2019; Funck & Karlsson, 2020; Chowdhury et al., 2023). Administrative employees are therefore central actors: their perceptions can shape adoption trajectories, resistance, and the practical design of governance safeguards (Androutsopoulou et al., 2019; Hammerschmid et al., 2019; O. J. George et al., 2023; Kefalaki, 2024). In addition, AI use cases depend on data quality, interoperability, and lawful processing; staff may therefore evaluate AI through trade-offs involving liability, contestability, and equitable treatment of users, not only through speed or cost (Davis, 1989; Uzun et al., 2022).
Although AI in higher education has received increasing scholarly attention, empirical work has focused predominantly on teaching and learning applications or student-facing services (Christopoulos et al., 2018; Ocaña-Fernández et al., 2019), with comparatively less evidence on administrative back-office governance and staff perceptions. This gap is consequential because administrative AI implicates public-sector accountability, algorithmic decision-making risks, and procedural fairness in ways that differ from pedagogical use cases (Ahmad et al., 2022; Floridi et al., 2018; Allen, 2019; Wirtz & Müller, 2019). Moreover, evidence from Southern European, centralized systems, where legitimacy constraints may be especially influential, remains limited.
To address this gap, we position the present study within a theory-informed adoption lens that integrates classic technology adoption constructs with public-sector AI governance. Specifically, we interpret attitudes through a legitimacy-capacity-performance logic: support for AI is shaped by perceived performance value (e.g., efficiency/effectiveness and automation benefits) (Spanou, 2016; Liargovas & Pilichos, 2022), perceived legitimacy risks (e.g., ethics, bias, transparency, and data protection) (Pudelko, 2006; Scherer, 2015; Wirtz & Müller, 2019; Black & Murray, 2019; Allen, 2019; Saaida, 2023), and perceived facilitating conditions (e.g., skills development and human-in-the-loop oversight) (Foss & Laursen, 2012; Berber & Slavić, 2016; Wirtz et al., 2020). This framing enables the paper to move beyond advocacy-oriented claims and to generate context-relevant implications for research and policy. Importantly, this lens treats “ethics” and “training” not as generic slogans but as adoption mechanisms that can condition acceptance in public organizations (Manresa et al., 2019; Kerr, 2020; Leslie et al., 2021; Dabis & Csáki, 2024). By organizing results at the construct level, we can also interpret why apparently predictable themes may matter: for instance, a legitimacy-first ordering of priorities suggests that governance safeguards can dominate perceived performance value under strong compliance constraints (Taylor & Machado, 2006; Doyle & Brady, 2018; Leslie et al., 2021; Manzoni et al., 2022).
Against this background, the Greek case offers a theoretically meaningful boundary condition. Greek public universities operate within heightened regulatory oversight (Greek Law No. 4940/2022; Greek Law 4957/2022) and constrained procurement and staffing flexibility, which can amplify concerns about accountability, trust, and compliance (Scherer, 2015; Petit, 2017; Floridi et al., 2018; Mogaji et al., 2022). We therefore expect staff acceptance to be conditional, with legitimacy and capacity priorities potentially dominating generic efficiency expectations (Vinichenko et al., 2020; Lin et al., 2021), and we examine whether the relative priority of ethics/data protection, capacity-building, and automation benefits forms a distinctive pattern in this context, an ordering that can then be compared across national systems. The following section reviews the relevant literature and clarifies how these mechanisms have been discussed in relation to AI-enabled administrative modernization.

1.1. Literature Review

The integration of AI into university administration is increasingly discussed as part of broader reforms in public governance and digital transformation (EC, 2024). Prior scholarship suggests that AI can contribute to administrative modernization by automating repetitive tasks, improving information management, and supporting managerial decision-making through predictive and diagnostic analytics (Spanou, 2016; Liargovas & Pilichos, 2022). However, these potential benefits are not realized automatically; they depend on organizational context, data quality, and governance arrangements that define accountability and human oversight (Oracle, 2019; Votto et al., 2021; Uzun et al., 2022). The literature further distinguishes between digitization (moving forms online) and AI-enabled transformation (classifying, recommending, or predicting), which shifts governance responsibilities and requires stronger oversight and documentation (Sallis, 2014; McCaffery, 2018).
In operational terms, administrative AI can be grouped into two broad families. First, automation applications target routine, rules-based work, e.g., document classification, routing, form checking, appointment scheduling, and standard communication (Ackermann et al., 2025; Androutsopoulou et al., 2019; Singh & Slack, 2022). Second, decision-support applications assist managers and staff in synthesizing large datasets for reporting (Kappos & Kling, 2020), forecasting (Glyptis et al., 2020), or risk detection (Pudelko, 2006; Scherer, 2015; Wirtz & Müller, 2019; Black & Murray, 2019; Allen, 2019; Saaida, 2023). While both families may improve throughput and consistency, decision-support tools raise additional concerns about explainability, contestability, and the appropriate role of human judgment. Because such tools may influence prioritization, staff perceptions of transparency and appealability become central (Scherer, 2015; Petit, 2017; Floridi et al., 2018; Mogaji et al., 2022).
From an adoption perspective, perceived performance is only one driver of acceptance. Technology adoption research emphasizes performance expectancy and facilitating conditions, including skills, training, and supportive infrastructure (Manresa et al., 2019; Abdullah et al., 2021; Larsson et al., 2025; Chowdhury et al., 2023; Vasilenko & Seliverstova, 2024). In public-sector settings, perceived legitimacy risks are often equally decisive: algorithmic bias, opacity, procedural unfairness, and personal data risks can undermine trust and institutional legitimacy (Uzun et al., 2022; Wirtz et al., 2019; Nikolinakos, 2023; Kaplan, 2023). Accordingly, a governance-first approach highlights the importance of transparency, data protection, and explicit responsibility allocation (human-in-the-loop) as prerequisites for acceptable deployment. Public-sector AI governance work emphasizes transparency, audit trails, risk assessment, and continuous monitoring as prerequisites for trustworthy deployment, aligning legitimacy safeguards with adoption feasibility (Chatterjee & Sreenivasulu, 2019; Oracle, 2019; Votto et al., 2021; Uzun et al., 2022). We also treat facilitating conditions broadly, including skills, change management, and procedural guidance for oversight, because without such support even technically feasible AI tools may be rejected or underused (Sallis, 2014; McCaffery, 2018).
In the Greek context, these issues are intensified by a centralized and legally intensive administrative environment. University administration is embedded in public law procedures, formal documentation requirements, and evolving interoperability with national digital platforms. This setting can make efficiency improvements attractive, yet procedural formalism and constrained staffing can also heighten sensitivity to GDPR compliance, auditability, and error correction (Pyrgiotakis & Symeou, 2016; Glyptis et al., 2020; Mohamed et al., 2022; Kaplan, 2023). As a result, AI adoption may be evaluated through a legitimacy-first lens, where safeguards and procedural alignment are prioritized before scale-up.
Empirical studies on AI in higher education administration remain comparatively scarce, especially regarding administrative employees’ perceptions and enabling needs. Existing evidence suggests that staff acceptance is typically conditional on clarity about use cases, the perceived distribution of benefits and risks, and the availability of training and support. These findings point to the value of construct-level benchmarks that can guide future hypothesis testing, comparative research across institutional systems, and evaluations of implementation outcomes. Baseline evidence from administrative staff is therefore valuable for designing governance-first implementation pathways.

1.2. Research Purpose

The purpose of this study is to map administrative employees’ perceptions of AI adoption in the administrative functions of Greek universities, with a focus on five domains: (a) perceived contribution of AI to efficiency and effectiveness, (b) perceived benefits of automating routine administrative processes, (c) perceived challenges and barriers to adoption, (d) perceived ethics and data protection requirements, and (e) perceived skill development needs as facilitating conditions. Accordingly, we address the following research questions: (Q1) How do employees perceive AI’s potential contribution to improving efficiency and effectiveness? (Q2) How do employees assess the expected benefits of automation? (Q3) What challenges do employees perceive as most salient for integration? (Q4) How do employees prioritize ethics and data protection in this context? (Q5) How do employees view training and skills development as enabling conditions for responsible adoption? To strengthen scholarly positioning, we also outline exploratory propositions for future testing of whether attitudes vary by demographic profile, institutional role, and prior exposure to AI or digital systems. These propositions are presented as analytic signposts rather than confirmed causal claims; for example, future work may test whether prior AI exposure predicts higher perceived benefits or lower perceived barriers and whether legitimacy risk predicts conditional support independent of performance expectations. We explicitly treat the study as exploratory and perception-based: it maps attitudes and perceived priorities and does not evaluate effectiveness, organizational readiness, or implementation outcomes.
The structure of our research is the following: First, we present the theoretical framework and summarize the relevant literature. Next, we describe the research design, including the questionnaire, sample, and analysis strategy. We then report the results organized by the five construct domains, discuss implications for governance-first AI modernization in university administration, and conclude with limitations and directions for future theory-driven and evaluative research.

2. Theoretical Framework

2.1. Theoretical and Empirical Background

Rather than treating AI as uniformly beneficial, recent scholarship emphasizes that adoption in public and quasi-public organizations depends on how employees evaluate performance value, legitimacy risks, and implementation capacity (Spanou, 2016; Hammerschmid et al., 2019; O. J. George et al., 2023; Kefalaki, 2024). In higher education institutions (HEIs), administrative AI differs from pedagogical AI because it intersects with public accountability, procedural fairness, and legally constrained data handling. Accordingly, this review is organized around five domains that the study measures: perceived efficiency/effectiveness, perceived automation benefits, perceived adoption challenges, perceived ethics and data protection requirements, and perceived skill development needs as a facilitating condition.
Technology adoption research (e.g., TAM/UTAUT and related streams) identifies perceived usefulness/performance expectancy as a core driver of acceptance (Neumann et al., 2024; Schorr, 2023). In administrative contexts, perceived usefulness often refers to shorter processing times, fewer manual errors, improved information retrieval, and more consistent application of rules. Importantly, “usefulness” is not monolithic: staff may endorse narrow automation gains more strongly than broader claims about organizational effectiveness, especially when effectiveness depends on coordination and service quality dimensions beyond routine processing (Ackermann et al., 2025; Androutsopoulou et al., 2019). This motivates separating perceived efficiency/effectiveness from automation-specific benefits (Kankanhalli et al., 2019; Uzun et al., 2022; B. George & Wooden, 2023).
Public-sector AI and algorithmic governance research argue that legitimacy risks can dominate performance value in shaping acceptability. Risks include opacity and limited contestability, discrimination or bias, and erosion of procedural fairness. In GDPR-constrained environments, lawful processing, data minimization, and security safeguards become central preconditions for trust. Consequently, staff support may be conditional on credible governance safeguards (human oversight, audit trails, clear accountability). This motivates treating ethics and data protection as a distinct high-salience domain (Kitsakis et al., 2022; Dabis & Csáki, 2024).
The organizational readiness and sociotechnical perspectives emphasize that adoption is shaped by implementation capacity: skills, training, workflow redesign, interoperability, and change management support. In public organizations, capacity constraints are often intensified by procurement rules, staffing rigidity, and formalized procedures. Thus, skills development can be viewed as a facilitating condition that influences feasibility, confidence, and perceived risk (Vinichenko et al., 2020; Chowdhury et al., 2023).
Institutional and public value perspectives explain why the same AI capability can be evaluated differently across systems. Universities are hybrid organizations combining public law obligations with professional norms and multi-actor governance. In centralized and legally intensive settings, legitimacy constraints may be amplified and may reorder priorities—placing ethics and data protection ahead of performance expectations. This contextual mechanism is particularly relevant for Greece and motivates a theory-informed interpretation of staff perceptions in the sections that follow.

2.2. Conceptual Model of AI Adoption in University Administration

To provide an explanatory lens (rather than a policy overview), we adopt a legitimacy-capacity-performance model of AI adoption in university administration. The model synthesizes core elements from technology adoption theory (performance expectancy/usefulness; facilitating conditions) with public-sector AI governance (legitimacy, accountability, and risk management) and sociotechnical systems (fit between technology, workflows, and human roles). We treat the Greek regulatory and governance context as a boundary condition that amplifies legitimacy and capacity mechanisms rather than as the theoretical framework itself.
Employees’ support for AI integration is expected to be shaped by: (1) performance value (PERF): perceived contribution to efficiency and effectiveness; (2) automation benefits (AUTO): perceived value of applying AI to routine, rules-based tasks; (3) adoption challenges (CHAL): perceived barriers and downsides such as implementation complexity and role uncertainty; (4) legitimacy risks (LEGIT): perceived ethics and data protection requirements (fairness, transparency, accountability, GDPR-aligned security); and (5) facilitating capacity (CAP): perceived need for skills development and training, including human-in-the-loop oversight.
The relationships and interpretation logic among the above five (5) indicators are: PERF and AUTO capture expected value and are anticipated to increase acceptance, while CHAL captures constraints that can dampen acceptance. LEGIT represents a distinct mechanism: in public and GDPR-constrained settings, legitimacy risks operate as gatekeeping conditions and can override performance value if safeguards are perceived as insufficient. CAP functions as a facilitating condition that supports feasibility and can reduce perceived challenges by increasing competence and confidence. This logic explains why staff perceptions may differ across domains: employees can endorse automation benefits while simultaneously prioritizing ethics/data protection and training as prerequisites for acceptable deployment.
To sharpen conceptual precision and ensure measurement transparency, we define the key constructs below and align them with the questionnaire items and composite domains used in the Results section.

2.3. Key Definitions and Alignment with Questionnaire Items

Table 1 summarizes operational definitions and their alignment to questionnaire items/composites.

3. Research Design and Methodology

3.1. Study Design and Setting

This study employed a cross-sectional, quantitative survey design. Data were collected via an online questionnaire administered to university employees in three Greek public universities. The study is exploratory and descriptive: it provides a baseline mapping of staff perceptions and does not aim to estimate population parameters for all Greek universities or to establish causal effects.
Data collection took place between 23 November 2024 and 4 December 2024. Participation was voluntary and anonymous, and respondents completed the questionnaire electronically (average completion time ≈ 10 min). Because distribution occurred via internal channels and some invitations may have been forwarded beyond the initial recipients, a precise response rate could not be calculated for all sites; we therefore report the achieved sample as indicative.

3.2. Sampling Strategy and Recruitment

The sampling strategy used an accessible sampling frame within the participating universities. Invitations were distributed electronically through institutional mailing lists. Where complete staff lists were available, invitees were selected via simple random sampling within the accessible frame; where this was not feasible, recruitment followed a non-probability convenience approach. For transparency, we therefore interpret the achieved sample as exploratory and indicative rather than fully representative of all Greek university employees.
Eligibility criteria included employment in a Greek university. We invited participants to respond once; no incentives were provided. Of the 200 questionnaires distributed, 127 were completed, a response rate of approximately 64%. To support assessment of potential selection bias, we report demographic characteristics and employment categories in Section 4.1 and discuss sampling limitations in Section 6.
Accordingly, the findings are not claimed to be nationally representative, nor is the sampling approach claimed to ensure validity for broader university populations. Instead, we emphasize transparency about recruitment and treat the findings as a baseline for future probability-based or multi-site comparative studies. At the same time, we note that the achieved sample concentrates on the employee categories that are most substantively representative of university administrative work in Greece: Higher Education (HE) staff typically constitute the core professional administrative cohort, High School (HS) staff are relatively few, and Elementary Education (EE) staff are predominantly cleaning personnel. In our data, 88% of responses belonged to the HE category, suggesting that the sample reasonably captures perceptions of the main professional administrative group most likely to engage with AI-enabled administrative workflows. Although the overall sample size is modest and national representativeness cannot be guaranteed, the use of random selection within the accessible frame (where implemented) and the alignment of the sample’s job category composition with the practical structure of university administration help support the credibility of the findings for this cohort.

3.3. Questionnaire, Item Transparency, and Content Validity

The instrument was a structured questionnaire organized into thematic sections aligned with the study constructs (PERF, AUTO, CHAL, LEGIT, CAP). Items were designed to capture perceived efficiency/effectiveness, perceived automation benefits, perceived adoption challenges, perceived ethics and data protection requirements, and perceived skill development needs.
Most attitudinal items used a 5-point Likert scale with anchors specified in the instrument. The original online questionnaire can be viewed at: https://docs.google.com/forms/d/e/1FAIpQLSeWSr1hYCZadrYkBV1tKDVRFDFtnU1F2zHa8hGWJDx3smO3JQ/viewform (accessed on 20 October 2024).
Individual Likert-type items are ordinal; however, it is common in applied social science survey research to summarize multi-item constructs using mean-based composites when response categories are symmetric, the scale has five points, and the goal is descriptive comparison of relative salience across domains. In this study, means are used as a parsimonious descriptive summary to compare construct-level priorities rather than to claim precise interval distances.
The questionnaire was developed based on the literature on AI adoption and public-sector AI governance. Prior to distribution, content validity was supported through an expert review and a pilot test (1–5 November 2024, 26 responses) focusing on clarity, relevance, and coverage of the constructs.

3.4. Measures and Data Analysis

To facilitate analysis, we grouped the questionnaire items into five central variables (MO_H1–MO_H5) to extract the overall results. Each composite score was calculated as the mean of its corresponding items, as reflected in Table 2.
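As an illustrative sketch (not part of the paper's analysis pipeline), the composite scoring described above can be expressed in a few lines of Python. The response values below are hypothetical and merely stand in for the Q4–Q5 items that form MO_H1:

```python
import numpy as np

# Hypothetical 5-point Likert responses from three respondents to Q4 and Q5
# (illustrative data only; the study's real responses are not reproduced here).
responses = np.array([
    [3, 3],   # respondent 1: Q4=3, Q5=3
    [4, 3],   # respondent 2
    [3, 3],   # respondent 3
])

# Composite score: mean of the domain's items per respondent,
# then averaged across respondents for the domain-level mean.
mo_h1_per_respondent = responses.mean(axis=1)
mo_h1_overall = mo_h1_per_respondent.mean()
```

The same per-respondent averaging extends to MO_H2–MO_H5 by substituting the corresponding item columns.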
Descriptive statistics (means and variances) are reported to compare the relative importance of performance expectations, legitimacy risks, and enabling conditions. Internal consistency was assessed with Cronbach’s alpha, and exploratory subgroup comparisons were conducted using the demographic data. Throughout, results are interpreted cautiously, in line with the study’s exploratory design.
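Cronbach's alpha, used above as the reliability check, can be computed directly from the item-score matrix using the standard formula alpha = k/(k-1) * (1 - sum of item variances / variance of the summed scale). The sketch below uses hypothetical data and is illustrative, not the study's actual computation:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) matrix of Likert scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # per-item sample variances
    total_var = items.sum(axis=1).var(ddof=1)   # sample variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical responses (4 respondents x 3 items, 5-point Likert scale).
scores = np.array([
    [4, 4, 3],
    [3, 3, 3],
    [5, 4, 5],
    [2, 3, 2],
])
alpha = cronbach_alpha(scores)
```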

3.5. Ethics, Data Protection, and Data Handling

The survey was administered anonymously; no directly identifying personal data was collected. Participants were informed about the purpose of the study and the voluntary nature of participation at the start of the questionnaire. Data were stored securely and processed in accordance with GDPR requirements.

4. Results

To analyze the empirical data, we used a five-point Likert scale for the attitude sections of the questionnaire. In Section 6, we draw conclusions based on the five central variables to support the interpretation of the relative priorities in relation to performance expectations, legitimacy risks, and enabling conditions. The figures provide detailed descriptive statistics for each domain.

4.1. Demographic Data

Regarding gender, the sample was approximately balanced (48% female, 50% male, 2% non-binary). In terms of educational attainment, most respondents held at least an undergraduate degree (Figure 1). To move beyond descriptive reporting, we conducted exploratory subgroup comparisons of the composite domains by gender (female vs. male; non-binary excluded from inferential tests due to very small n). These tests are interpreted cautiously and are presented as indicative patterns rather than population estimates.
With respect to age (Figure 1), the largest proportion of participants fell within the 45–54 years category (47.6%), followed by those aged >55 years (32.5%). Smaller shares belonged to the 35–44 years category (12.7%) and the 25–34 years category (7.1%). Because the distribution is weighted toward older employees, we treat age as a potentially meaningful context factor and explore whether age is associated with perceived benefits, concerns, and training needs.
Taken together, these descriptive distributions suggest that the achieved sample is primarily composed of mid-career and senior employees. This matters analytically because technology adoption perceptions may vary across career stages (e.g., perceived training needs or perceived risks). Therefore, any age-related patterns are reported as exploratory and interpreted in light of the study’s descriptive design and partially non-probability recruitment.

4.2. General Questions

Within the “General Questions” category, respondents answered two questions. The first relates to the “perceived usefulness” of employing AI to support communication between the university and its administrative services, while the second concerns the perceived need for AI use in administrative support functions. In line with the descriptive design, responses are interpreted as perceived priorities and attitudes (not as demonstrated necessities).

4.3. Opinions of Administrative Staff on the Contribution of AI to Improving the Efficiency and Effectiveness of Greek Universities

For the overall evaluation of questions Q4 and Q5, we computed the composite domain AI Efficiency/Effectiveness Perceptions (MO_H1; mean of Q4–Q5). As shown in Figure 2, the composite mean is around 3.2, which we interpret as “neutral-to-slightly positive” rather than as strong confidence in AI-driven performance gains. This pattern indicates cautious openness to potential improvements, consistent with conditional support in a regulated administrative environment.
Overall, perceived efficiency/effectiveness gains are cautious (neutral-to-slightly positive), reinforcing conditional—not unqualified—support.

4.4. Employee Views on the Benefits of Automating University Administrative Processes Using AI

This section reports six questions (Q7–Q12) on Automation Benefits (MO_H2; mean of Q7–Q12).
Figure 3 suggests a “moderately positive” orientation toward automation-related gains (e.g., streamlining routine processes), but the magnitude should be interpreted as “moderate endorsement” rather than certainty. The contrast between MO_H2 and MO_H1 is analytically informative: respondents appear more confident about task-level automation than about broad organizational effectiveness improvements.
Respondents endorse routine process automation more strongly than broad organizational performance claims, suggesting automation is the more acceptable early adoption domain.

4.5. Challenges and Prospects from the Integration of AI in University Administration

This section summarizes five questions (Q14–Q18) on Adoption Challenges (MO_H3; mean of Q14–Q18). Figure 4 indicates an overall “moderate stance”, consistent with conditional support alongside recognition of constraints. Rather than treating these as definitive barriers, we interpret them as perceived implementation frictions that may vary by demographics and prior exposure.
Perceived challenges indicate implementation frictions rather than rejection, highlighting the importance of governance safeguards and change management support.

4.6. Employees’ Views on Ethics and Deontology with the Integration of AI in University Administration

Figure 5 shows that ethics and data protection (MO_H4; Q20–Q23) receives the strongest endorsement (mean ≈ 4.0). Analytically, this high score can be read in two non-exclusive ways: (i) as “responsible readiness”—staff awareness that legitimate AI deployment requires strong safeguards; and/or (ii) as “hesitancy/barrier signaling”—high concern that could constrain acceptance if safeguards are perceived as insufficient. As an exploratory check, we examine whether MO_H4 is positively associated with perceived benefits (MO_H2) or instead with perceived challenges (MO_H3), using Spearman’s correlations (Table 3):
The data in Table 3 show that in both cases the relationship between the two variables was weakly positive. With the present sample size, this positive relationship cannot be confirmed statistically; a larger sample would likely yield more reliable estimates.
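For transparency about the statistic used, Spearman’s ρ is the Pearson correlation of the rank vectors, with tied values assigned average ranks. A minimal, dependency-free Python sketch follows; the composite scores are hypothetical, not the study’s data:

```python
def average_ranks(xs):
    """1-based ranks; tied values receive the mean of their positions."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg_rank = (i + j) / 2 + 1  # mean of positions i..j, 1-based
        for k in range(i, j + 1):
            ranks[order[k]] = avg_rank
        i = j + 1
    return ranks

def spearman_rho(x, y):
    """Spearman's rank correlation: Pearson correlation of the ranks."""
    rx, ry = average_ranks(x), average_ranks(y)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Illustrative composite scores for five hypothetical respondents.
mo_h4 = [4.00, 3.50, 4.50, 3.00, 4.00]
mo_h2 = [3.50, 3.00, 4.00, 3.00, 3.50]
print(round(spearman_rho(mo_h4, mo_h2), 3))
```

In practice the same result is obtained from a statistics package (e.g., `scipy.stats.spearmanr`), which additionally reports the significance level discussed above.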
From the responses to Question 25, we report descriptive frequency results (Figure 6), summarizing which issue participants may consider most critical following AI implementation.
Q25 was designed as a multiple-response item; respondents could select more than one option. Therefore, percentages represent the share of respondents selecting each option (i.e., a “multiple-response” percentage) and do not sum to 100%. Within this framing, the most frequently selected issues were ethics/deontology and lack of knowledge about AI, followed by concerns about job reduction and personal-data risks. These frequencies are substantively informative because they triangulate the composite results: ethical/data protection priorities (MO_H4) and capacity concerns (MO_H5) appear not only in scaled ratings but also in the spontaneous salience of “critical issues” (for MO_H4:MO_H5, Spearman’s ρ = 0.007, p = 0.001; a weak correlation).
For brevity, Figure 6 reports the full multiple-response distribution; the narrative highlights only the most salient contrasts.
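To make the reporting convention concrete, multiple-response percentages are computed per option over the full respondent base, so the columns need not sum to 100%. A short Python sketch with hypothetical selections (the option labels are illustrative, not the questionnaire’s wording):

```python
# Each respondent's set of selected "critical issues" for an item like Q25.
selections = [
    {"ethics", "lack_of_knowledge"},
    {"ethics"},
    {"job_reduction", "data_risks", "ethics"},
    {"lack_of_knowledge"},
]

def multiple_response_pct(rows):
    """Share of respondents selecting each option; totals may exceed 100%."""
    n = len(rows)
    counts = {}
    for row in rows:
        for opt in row:
            counts[opt] = counts.get(opt, 0) + 1
    return {opt: round(100 * c / n, 1) for opt, c in counts.items()}

pcts = multiple_response_pct(selections)
print(pcts)  # "ethics" is selected by 3 of 4 respondents -> 75.0
```

Here the percentages sum to 175%, which is expected and correct for a multiple-response item.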

4.7. Employees’ Views on the Contribution of Employee Skills Development to the Smooth Integration of AI into the Administration of Greek Universities

Figure 7 summarizes Skills Development Needs (MO_H5; Q26–Q28).
The composite mean (≈3.6) indicates moderately high emphasis on training and upskilling as facilitating conditions for AI adoption. Rather than treating this as an empirically demonstrated necessity, we interpret it as a perceived enabling requirement: staff signal that competence-building and guidance for human-in-the-loop oversight may reduce anxiety and improve implementation feasibility. Overall, ethics and GDPR-aligned data protection emerge as the strongest priority, signaling a legitimacy-first adoption logic in this context.

5. Discussion

5.1. Data Interpretations

This section interprets the results through the legitimacy-capacity-performance lens introduced in Section 2. Rather than treating any single mean score as “high” or “low” in isolation, we emphasize (i) the relative ordering of domains and (ii) the size of the differences between them, because these patterns signal which adoption mechanisms are most salient in the Greek public university administrative context.
Firstly, the domain ordering is theoretically informative. The variable “ethics and data protection” (MO_H4) receives the strongest endorsement (≈4.0), whereas the variable “AI Efficiency/Effectiveness Perceptions” (MO_H1) is closer to neutral-to-slightly positive (≈3.2). This gap (roughly 0.8 points on a 5-point scale) suggests that perceived legitimacy and risk governance are more central drivers of AI acceptability than generic performance expectations. In other words, staff support appears conditional: AI is viewed as desirable only when embedded in credible safeguards for data protection, fairness, transparency, and accountability.
Secondly, perceptions differ across two adoption domains that are often conflated in broad narratives: (a) AI for administrative automation (workflow acceleration, routine case handling, and error reduction) and (b) AI for managerial decision support (analytics, forecasting, and decision-making assistance). Our findings are more consistent with stronger endorsement of task-level automation benefits (MO_H2) than of broad organizational effectiveness claims (MO_H1), which aligns with the idea that employees may more readily trust AI in repetitive, rules-based processes than in higher-discretion or consequential decision contexts. The latter domain typically raises sharper concerns about explainability, contestability, and responsibility allocation.
This distinction also clarifies the governance-oriented insight that emerges across domains: respondents repeatedly emphasize human oversight and the need for staff competence. This aligns with established public-sector algorithmic governance and human-in-the-loop accountability principles, where legitimate deployment requires clear lines of responsibility, auditability, and the ability to contest or correct AI-influenced outputs. In a public university setting, such arrangements are not merely technical choices; they are legitimacy conditions that protect institutional trust.
Thirdly, the variable “Adoption Challenges” (MO_H3) shows a moderate pattern that is consistent with perceived implementation frictions rather than outright rejection. Combined with the high salience of the variable “Ethics/data security” (MO_H4), this suggests a governance-first adoption pathway: acceptance depends on procedural alignment, risk management, and organizational preparedness (rather than on enthusiasm alone). The multiple-response question “critical issues” (Q25) reinforces this interpretation by highlighting ethics/deontology, limited AI knowledge, job-related concerns, and personal data risks as prominent salience signals.
Fourthly, the variable “Skills Development Needs” (MO_H5) is rated moderately high, supporting the interpretation that capacity-building is a key facilitating condition. Importantly, training needs here should not be read only as “skills gaps”; they also indicate the perceived necessity of building organizational capability for human-in-the-loop oversight, i.e., staff confidence in interpreting AI outputs, knowing when to override them, and following documented procedures for accountability.
Finally, the prominence of “Ethics/data protection” priorities (MO_H4) can be interpreted analytically in two alternative ways: as responsible readiness (awareness that legitimate AI deployment requires safeguards) and/or as hesitancy (a barrier signal if safeguards are perceived as inadequate). Future work can distinguish these mechanisms by testing whether MO_H4 correlates more strongly with perceived benefits (MO_H2) or with perceived challenges (MO_H3), and by examining whether such relationships vary by age, job role, or prior AI exposure.
Overall, the discussion indicates that the Greek case is not merely a setting where respondents report “positive attitudes.” Instead, the pattern is conditional and governance-centered: legitimacy risks and capacity constraints appear to structure acceptance more strongly than generic expectations of efficiency. This contributes an interpretable mechanism for future comparative research on AI-enabled administrative modernization in higher education.
We interpret the value of ≈3.2 as neutral-to-slightly positive, whereas the value of ≈4.0 indicates strong prioritization of safeguards. This calibration keeps the statistical reporting and the narrative interpretation aligned.

5.2. Limitations of the Research

Limitations are integrated into this interpretation. All patterns reported in Section 4 and Section 5 should be considered in light of the study’s self-report, cross-sectional design and achieved sample. Responses reflect perceived attitudes and priorities at one point in time and may be affected by social desirability or common method bias. In addition, recruitment did not fully ensure probability-based representativeness across all staff groups, and some categories are underrepresented. Therefore, implications are framed as indicative, governance-relevant hypotheses for future evaluative and longitudinal work rather than as claims about realized performance impacts or population-level effects.
Our findings are broadly consistent with international research that highlights the role of accountability, data governance, and staff capability in public-sector AI adoption while also emphasizing a context-sensitive ordering of priorities in Greek public university administration. In this setting, a governance-first implementation pathway—starting from legitimacy safeguards and capacity-building—appears more aligned with staff perceptions than a performance-first narrative focused solely on speed or cost reduction.

5.3. Recommendations

Therefore, the suggestions we arrive at are as follows:
  • Policymakers should take clear steps toward a governance policy that accompanies AI implementation programs. Given the high importance of ethics/data security (MO_H4) recorded by survey participants and the conditional nature of AI support, universities may consider establishing a clear AI governance framework before scaling up use cases. This includes clear accountability arrangements, documented decision rights (who approves, who monitors, who can override), risk assessment processes, and audit trails for outcomes affected by AI.
  • Our findings suggest stronger support for task-level automation benefits (MO_H2) than for broad improvements in organizational effectiveness (MO_H1). Therefore, for smooth and faster AI integration, designers should focus initially on routine, rules-based administrative tasks (accelerating workflows, reducing errors, standardizing communication) rather than on high-risk managerial decisions (analysis/forecasting), where explainability and contestability risks are higher. To this end, universities should record the available AI tools and map the tasks that could be automated without significant resistance from administrative staff. Gradually, with staff’s continuing familiarization and specialized training in AI, organizational attitudes and culture are expected to change so that more demanding processes can be integrated.
Figure 8 presents a conceptual model that links institutional factors, staff attitudes, perceived barriers, and adoption pathways. The figure is offered as an organizing framework grounded in the literature and informed by our research; however, it is not tested or validated by this study’s design. Unless model testing is performed (e.g., correlational/SEM analyses among composites), we treat the model as a proposed organizing framework. As an initial step, future work can examine correlational relationships among MO_H1–MO_H5 and test whether legitimacy (MO_H4) and capacity (MO_H5) mediate or moderate the association between perceived benefits (MO_H1/MO_H2) and adoption intentions.
  • Regarding investment in capacity-building as a condition for facilitating AI integration, the Skills Development Needs composite (MO_H5) was rated moderately high, indicating that staff consider training and guidance necessary prerequisites for responsible adoption. We therefore propose that universities invest in training that is personalized and graded in difficulty to meet the needs of employees at all stages and levels (executives and front-line staff). Training should cover: (i) practical use of approved tools, (ii) data management practices and GDPR compliance, and (iii) procedures for human oversight (for high-risk processes).
  • In accordance with public-sector legitimacy requirements, Artificial Intelligence systems used in administration should be designed with mechanisms for human review, appeal, and correction, especially for decisions that affect individuals’ rights or access to services. Our results suggest that employees want universities to define “non-automation” zones for important, high-impact tasks and to ensure that staff retain the authority to challenge and modify AI outcomes when problems or risks are identified.
  • An important observation from our findings is that the composite MO_H4 received the highest mean. This leads us to conclude that staff trust is related to transparent rules for data collection, storage, access, and sharing. It is therefore both desirable and necessary for universities to strengthen compliance with the GDPR and with applicable EU/international AI governance requirements, including data minimization in storage and processing, role-based access controls, and incident response protocols. We further recommend that university administrations implement transparency measures, such as documenting training data sources (where relevant) and providing clear notices to users, to further enhance legitimacy and user trust.
  • The challenges of integrating Artificial Intelligence into university governance (MO_H3) indicate perceived friction. Universities may therefore pilot AI tools in limited processes, collect feedback from users, and iterate workflows before universal implementation. Active participation and open dialogue with employees before any AI integration effort can reduce uncertainty and help distinguish responsible preparation from risk-based hesitation, particularly regarding ethics, fears about the impact on work, and the data protection concerns highlighted in the multiple-response item “critical issues” (Q25).
The most significant finding of our research is that governance and capability are prioritized for the adoption of Artificial Intelligence in Greek university administration. Personnel seem more willing to support automation-related processes, especially when combined with strong ethical and data security safeguards and human oversight. We believe that future evaluative research could assess how such conditions influence outcomes and how they translate into measurable effectiveness, service quality improvements, or cost implications.

5.4. Suggestions for Future Research

The findings suggest that administrative staff perceive AI adoption as potentially relevant to the future performance and sustainability of university administration, but only under strong governance safeguards (ethics, data protection, transparency) and adequate capacity-building (training and oversight). Accordingly, the study should be read as an indicative baseline rather than as evidence of realized benefits.
Future research could examine in more depth how AI can be integrated in ways that enhance efficiency and effectiveness while addressing ethical, organizational, and human factors. Specifically, it would be useful to conduct research:
  • With comparative studies across universities. Research could compare the degree and forms of AI use in Greek universities with those in other European or non-European institutions. Such studies could identify factors that enable some universities to achieve a higher level of AI integration, as well as barriers that delay or hinder adoption in others. Comparative designs could also address differences between types of universities (e.g., large vs. small, metropolitan vs. regional, technical vs. humanistic, etc.) and provide insights into how institutional context shapes AI-related policies and practices.
  • Using a triangulation methodology (e.g., questionnaires and interviews), which would enable the simultaneous collection of quantitative and qualitative data. Such a mixed-methods design could provide a richer (e.g., capturing subtle perceptions, emotional reactions, and ethical considerations) and more holistic understanding of AI integration in higher education.
  • Evaluating specific AI interventions already being implemented in universities on a pilot or universal basis. For example, studies of pilot training programs for employees on AI tools, algorithmic bias, data protection, and digital skills would be valuable, as would targeted initiatives to support staff groups with lower levels of digital skills. Such evaluation research could examine not only effectiveness (e.g., impact on processing times, error rates, or user satisfaction) but also acceptance, perceived fairness, and unintended consequences for different categories of workers.
  • With a multicenter stratified sample including all categories of staff. Expanding the scope and composition of the sample would strengthen the generalizability of the results. Research involving employees from all Greek universities, as well as from different staff categories and educational levels, could provide a more comprehensive picture of attitudes towards Artificial Intelligence. It is important to pay attention to groups that are often underrepresented in online surveys (e.g., cleaning staff, security or technical staff, and employees with limited digital access) so that their views and needs are considered when designing policies and interventions related to Artificial Intelligence in higher education.

6. Conclusions

This study provides a theory-informed baseline of administrative staff perceptions of AI adoption in Greek public universities. The results indicate a generally supportive climate for AI integration, but support is conditional and governance-centered: ethical and data protection safeguards are prioritized most strongly, automation benefits are endorsed more than broad claims of organizational effectiveness, and skills development and human oversight are viewed as enabling conditions for responsible deployment. Interpreted through the legitimacy-capacity-performance lens, the Greek case suggests that perceived legitimacy risks and organizational capacity constraints can weigh more heavily than generic efficiency expectations in shaping acceptability in centralized, GDPR-constrained public administration.
Because the data are self-reported and cross-sectional, the study speaks to perceived priorities and attitudes rather than to observed sustainability outcomes, productivity impacts, cost savings, or implementation success. Recruitment constraints and underrepresentation of some staff groups further limit population-level generalization. Accordingly, the manuscript frames its contribution as a baseline benchmark and a context-sensitive mechanism proposal (governance-first, capacity-first adoption) for subsequent testing.
To increase analytical rigor, future work is recommended to focus on: (i) reporting inferential comparisons and predictors (e.g., gender/age differences, prior AI exposure, role category) with effect sizes; (ii) providing full measurement transparency (item wording, scale structure, reliability per composite, and basic construct checks such as inter-item correlations and EFA); (iii) further tightening theoretical positioning by explicitly linking constructs to adoption and public-sector legitimacy frameworks; and (iv) sharpening the distinct contribution of the Greek context by comparing the observed priority ordering (ethics/legitimacy > performance) to patterns reported in less regulated or more decentralized systems. These steps would strengthen this baseline survey into a more rigorous, theory-relevant contribution on AI-enabled administrative modernization in higher education.
In conclusion, staff perceptions indicate a supportive climate for AI integration that is conditional on strong governance safeguards (ethics/GDPR), human oversight, and targeted capacity development, rather than evidence of achieved performance improvements.

Author Contributions

Conceptualization: O.B. and P.L.; methodology: O.B. and M.P.; investigation: O.B.; data curation: O.B.; formal analysis: O.B.; writing—original draft: O.B. (this research was conducted in the context of her Master’s thesis under the supervision of P.L.); review and editing: M.P. (substantial interventions and textual corrections) and V.K. (comments and suggestions for improvement). All authors have read and agreed to the published version of the manuscript.

Funding

The publication fees (open access) were funded by the Computer Technology Institute and Press “Diophantus” (CTI), as part of its policy to support the research activities of the organization’s research teams.

Institutional Review Board Statement

Ethical review and approval were waived for this study because it involved an anonymous, voluntary questionnaire survey of adult participants without any intervention and without the collection of directly identifying personal data; the study posed minimal risk to participants. Data were handled in accordance with applicable data protection requirements.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study; participation was voluntary and anonymous.

Data Availability Statement

The data presented in this study are not publicly available due to privacy and ethical restrictions (GDPR) and to protect the confidentiality of participants. An anonymized dataset may be made available from the corresponding author upon reasonable request, subject to institutional approval and data protection requirements.

Acknowledgments

The authors would like to thank all participants for their time and contribution to the research. The authors also thank the Computer Technology Institute and Press “Diophantus” for covering the cost of publishing this work. The authors have reviewed and edited the output and take full responsibility for the content of this publication.

Conflicts of Interest

The authors declare no conflicts of interest. The Computer Technology Institute and Press “Diophantus” (CTI and Press) supported the article processing charges (open access fee) and had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

  1. Abdullah, Y. I., Schuman, J. S., Shabsigh, R., Caplan, A., & Al-Aswad, L. A. (2021). Ethics of artificial intelligence in medicine and ophthalmology. The Asia-Pacific Journal of Ophthalmology, 10(3), 289–298. [Google Scholar] [CrossRef]
  2. Ackermann, H., Henke, A., Chevalère, J., Yun, H. S., Hafner, V. V., Pinkwart, N., & Lazarides, R. (2025). Physical embodiment and anthropomorphism of AI tutors and their role in student enjoyment and performance. NPJ Science of Learning, 10(1), 1. [Google Scholar] [CrossRef]
  3. Ahmad, S. F., Alam, M. M., Rahmat, M. K., Mubarik, M. S., & Hyder, S. I. (2022). Academic and administrative role of Artificial Intelligence in education. Sustainability, 14(3), 1101. [Google Scholar] [CrossRef]
  4. Allen, J. (2019). Digital entrepreneurship. Routledge. [Google Scholar]
  5. Androutsopoulou, A., Karacapilidis, N., Loukis, E., & Charalabidis, Y. (2019). Transforming the communication between citizens and government through AI-guided chatbots. Government Information Quarterly, 36(2), 358–367. [Google Scholar] [CrossRef]
  6. Berber, N., & Slavić, A. (2016). The practice of employees’ training in Serbia based on Cranet research. Economic Themes, 54(4), 535–548. [Google Scholar] [CrossRef]
  7. Black, J., & Murray, A. (2019). Regulating AI and machine learning: Setting the regulatory agenda. European Journal of Law and Technology, 10(3). Available online: https://www.ejlt.org/index.php/ejlt/article/view/722/980 (accessed on 15 December 2025).
  8. Chatterjee, S., & Sreenivasulu, N. S. (2019). Personal data sharing and legal issues of human rights in the era of artificial intelligence: Moderating effect of government regulation. International Journal of Electronic Government Research (IJEGR), 15(3), 21–36. [Google Scholar] [CrossRef]
  9. Chowdhury, S., Dey, P., Joel-Edgar, S., Bhattacharya, S., Rodriguez-Espindola, O., Abadie, A., & Truong, L. (2023). Unlocking the value of Artificial Intelligence in human resource management through AI capability framework. Human Resource Management Review, 33(1), 100899. [Google Scholar] [CrossRef]
  10. Christopoulos, A., Conrad, M., & Shukla, M. (2018). Increasing student engagement through virtual interactions: How? Virtual Reality, 22(4), 353–369. [Google Scholar] [CrossRef]
  11. Dabis, A., & Csáki, C. (2024). AI and ethics: Investigating the first policy responses of higher education institutions to the challenge of generative AI. Humanities and Social Sciences Communications, 11(1), 1–13. [Google Scholar] [CrossRef]
  12. Davis, F. D. (1989). Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Quarterly, 13(3), 319–340. [Google Scholar] [CrossRef]
  13. Dessler, G. (2024). Human resource management (17th ed.). Pearson Education Limited. [Google Scholar]
  14. Doyle, T., & Brady, M. (2018). Reframing the university as an emergent organisation: Implications for strategic management and leadership in higher education. Journal of Higher Education Policy and Management, 40(4), 305–320. [Google Scholar] [CrossRef]
  15. EC. (2024). Greece 2024 digital decade country report. Available online: https://digital-strategy.ec.europa.eu/en/factpages/greece-2024-digital-decade-country-report (accessed on 15 December 2025).
  16. Floridi, L., Cowls, J., Beltrametti, M., Chatila, R., Chazerand, P., Dignum, V., Luetge, C., Madelin, R., Pagallo, U., Rossi, F., Schafer, B., & Valcke, P. (2018). AI4People—An ethical framework for a good AI society: Opportunities, risks, principles, and recommendations. Minds and Machines, 28(4), 689–707. [Google Scholar] [CrossRef]
  17. Foss, N. J., & Laursen, K. (2012). Human resource management practices and innovation (pp. 505–530). Institut for Strategic Management and Globalization. ISBN 9788791815805. [Google Scholar]
  18. Funck, E. K., & Karlsson, T. S. (2020). Twenty-five years of studying new public management in public administration: Accomplishments and limitations. Financial Accountability & Management, 36(4), 347–375. [Google Scholar] [CrossRef]
  19. George, B., & Wooden, O. (2023). Managing the strategic transformation of higher education through artificial intelligence. Administrative Sciences, 13(9), 196. [Google Scholar] [CrossRef]
  20. George, O. J., Okon, S. E., & Akaighe, G. O. (2023). Psychological capital and work engagement among employees in the Nigerian public sector: The mediating role of emotional intelligence. International Journal of Public Administration, 46(6), 445–453. [Google Scholar] [CrossRef]
  21. Glyptis, L., Christofi, M., Vrontis, D., Del Giudice, M., Dimitriou, S., & Michael, P. (2020). E-Government implementation challenges in small countries: The project manager’s perspective. Technological Forecasting and Social Change, 152, 119880. [Google Scholar] [CrossRef]
  22. Hammerschmid, G., Van de Walle, S., Andrews, R., & Mostafa, A. M. S. (2019). New public management reforms in Europe and their effects: Findings from a 20-country top executive survey. International Review of Administrative Sciences, 85(3), 399–418. [Google Scholar] [CrossRef]
  23. Jungwirth, C. (2025). Balancing Acts: Navigating Leadership, Transparency, and Compliance in University Governance. Beiträge zur Hochschulforschung, 47, 10–26. [Google Scholar]
  24. Kankanhalli, A., Charalabidis, Y., & Mellouli, S. (2019). IoT and AI for smart government: A research agenda. Government Information Quarterly, 36(2), 304–309. [Google Scholar] [CrossRef]
  25. Kaplan, S. (2023). Experiential intelligence: Harness the power of experience for personal and business breakthroughs. BenBella Books. [Google Scholar]
  26. Kappos, D., & Kling, A. (2020). Ground-level pressing issues at the intersection of AI and IP. Columbia Science & Technology Law Review, 22, 263–283. [Google Scholar]
  27. Kefalaki, M. (2024). The role of strategic management and leadership in higher education institutions. The case of public universities in Greece. Journal of Applied Learning and Teaching, 7(2), 433–440. [Google Scholar] [CrossRef]
  28. Kerr, K. (2020). Ethical considerations when using artificial intelligence-based assistive technologies in education. In Ethical use of technology in digital learning environments: Graduate student perspectives. Open Education Alberta. Available online: https://pressbooks.openeducationalberta.ca/educationaltechnologyethics/ (accessed on 15 December 2025).
  29. Kitsakis, D., Kalogianni, E., & Dimopoulou, E. (2022). Public law restrictions in the context of 3D land administration—Review on legal and technical approaches. Land, 11, 88. [Google Scholar] [CrossRef]
  30. Larsson, C., Launberg, A., & Lindell, E. (2025). Scoping ethical AI in working life: Lessons for the Nordic model. Nordic Journal of Working Life Studies. [Google Scholar] [CrossRef]
  31. Leslie, D., Burr, C., Aitken, M., Cowls, J., Katell, M., & Briggs, M. (2021). Artificial intelligence, human rights, democracy, and the rule of law. The Council of Europe’s Ad Hoc Committee on Artificial Intelligence. Available online: https://edoc.coe.int/en/artificial-intelligence/10206-artificial-intelligence-human-rights-democracy-and-the-rule-of-law-a-primer.html# (accessed on 15 December 2025).
  32. Liargovas, P., & Pilichos, V. (2022). Is EU fiscal governance effective? A case study for the period 1999–2019. Economies, 10(8), 187. [Google Scholar] [CrossRef]
  33. Lin, C. C., Huang, Y. Y., Zhang, J. Q., & Chang, S. F. (2021). Combining AI with corporate governance to enhance operational efficiency of universities. Journal of Physics: Conference Series, 1827(1), 012126. [Google Scholar] [CrossRef]
  34. Manresa, A., Bikfalvi, A., & Simon, A. (2019). The impact of training and development practices on innovation and financial performance. Industrial and Commercial Training, 51(7/8), 421–444. [Google Scholar] [CrossRef]
  35. Manzoni, M., Medaglia, R., Tangi, L., van Noordt, C., Vaccari, L., & Gattwinkel, D. (2022). AI watch road to the adoption of Artificial Intelligence by the public sector: A handbook for policymakers. Publications Office of the European Union. [Google Scholar] [CrossRef]
  36. McCaffery, P. (2018). The higher education manager’s handbook: Effective leadership and management in universities and colleges. Routledge. [Google Scholar] [CrossRef]
  37. Mogaji, E., Jain, V., Maringe, F., & Hinson, R. E. (2022). Re-imagining educational futures in developing countries. Springer International Publishing. [Google Scholar] [CrossRef]
  38. Mohamed, M. Z. B., Hidayat, R., Suhaizi, N. N. B., Sabri, N. B. M., Mahmud, M. K. H. B., & Baharuddin, S. N. B. (2022). Artificial Intelligence in mathematics education: A systematic literature review. International Electronic Journal of Mathematics Education, 17(3), em0694. [Google Scholar] [CrossRef]
  39. Neumann, O., Guirguis, K., & Steiner, R. (2024). Exploring Artificial Intelligence adoption in public organizations: A comparative case study. Public Management Review, 26(1), 114–141. [Google Scholar] [CrossRef]
  40. Nikolinakos, N. T. (2023). Ethical principles for trustworthy AI. In EU policy and legal framework for artificial intelligence, robotics and related technologies: The AI Act (pp. 101–166). Springer International Publishing. [Google Scholar] [CrossRef]
  41. Ocaña-Fernández, Y., Valenzuela-Fernández, L. A., & Garro-Aburto, L. L. (2019). Artificial Intelligence and its implications in higher education. Journal of Educational Psychology-Propositos y Representaciones, 7(2), 553–568. [Google Scholar] [CrossRef]
  42. Oracle. (2019). The 2019 state of Artificial Intelligence in talent acquisition. HR Research Institute. Available online: https://www.oracle.com/a/ocom/docs/artificial-intelligence-in-talent-acquisition.pdf (accessed on 15 December 2025).
  43. Petit, N. (2017). Law and regulation of artificial intelligence and robots: Conceptual framework and normative implications. SSRN Electronic Journal. [Google Scholar] [CrossRef]
  44. Pudelko, M. (2006). Universalities, particularities, and singularities in cross-national management research. International Studies of Management & Organization, 36(4), 9–37. [Google Scholar] [CrossRef]
  45. Pyrgiotakis, E. I., & Symeou, L. (2016). Qualitative research and the scientific value of the knowledge produced in the social sciences and humanities. In Research methodology in social sciences and education. Contribution to epistemological theory and research practice. Pedio. (In Greek) [Google Scholar]
  46. Reis, J., Santo, P. E., & Melão, N. (2019). Artificial Intelligence in government services: A systematic literature review. In World conference on information systems and technologies (pp. 241–252). Springer. [Google Scholar] [CrossRef]
  47. Saaida, M. B. (2023). AI-driven transformations in higher education: Opportunities and challenges. International Journal of Educational Research and Studies, 5(1), 29–36. [Google Scholar]
  48. Sallis, E. (2014). Total quality management in education. Routledge. [Google Scholar] [CrossRef]
  49. Scherer, M. U. (2015). Regulating artificial intelligence systems: Risks, challenges, competencies, and strategies. Harvard Journal of Law & Technology, 29, 353. [Google Scholar]
  50. Schorr, A. (2023). The Technology Acceptance Model (TAM) and its importance for digitalization research: A review. Proceedings TecPsy, 2023, 55. [Google Scholar] [CrossRef]
  51. Singh, G., & Slack, N. J. (2022). New public management and customer perceptions of service quality—A mixed-methods study. International Journal of Public Administration, 45(3), 242–256. [Google Scholar] [CrossRef]
  52. Spanou, C. (2016). Policy conditionality, structural adjustment and the domestic policy system. Conceptual framework and research agenda. Robert Schuman Centre for Advanced Studies Research Paper No. RSCAS 2016/60. European University Institute. [Google Scholar]
  53. Taylor, J., & Machado, M. D. L. (2006). Higher education leadership and management: From conflict to interdependence through strategic planning. Tertiary Education and Management, 12(2), 137–160. [Google Scholar] [CrossRef]
  54. Uzun, M. M., Yıldız, M., & Önder, M. (2022). Big questions of AI in public administration and policy. Siyasal: Journal of Political Sciences, 31(2), 423–442. [Google Scholar] [CrossRef]
  55. Vasilenko, L. A., & Seliverstova, A. D. (2024). Symbiotic intelligence in smart configuration: Opportunities and challenges. In Digital transformation: What are the smart cities today? (Vol. 846, pp. 173–180). Lecture Notes in Networks and Systems. Springer. [Google Scholar] [CrossRef]
  56. Vinichenko, M. V., Melnichuk, A. V., & Karácsony, P. (2020). Technologies of improving the university efficiency by using artificial intelligence: Motivational aspect. Entrepreneurship and Sustainability Issues, 7(4), 2696. [Google Scholar] [CrossRef]
  57. Votto, A. M., Valecha, R., Najafirad, P., & Rao, H. R. (2021). Artificial Intelligence in tactical human resource management: A systematic literature review. International Journal of Information Management Data Insights, 1(2), 100047. [Google Scholar] [CrossRef]
  58. Wirtz, B. W., & Müller, W. M. (2019). An integrated Artificial Intelligence framework for public management. Public Management Review, 21(7), 1076–1100. [Google Scholar] [CrossRef]
  59. Wirtz, B. W., Weyerer, J. C., & Geyer, C. (2019). Artificial intelligence and the public sector—Applications and challenges. International Journal of Public Administration, 42(7), 596–615. [Google Scholar] [CrossRef]
  60. Wirtz, B. W., Weyerer, J. C., & Sturm, B. J. (2020). The dark sides of artificial intelligence: An integrated AI governance framework for public administration. International Journal of Public Administration, 43(9), 818–829. [Google Scholar] [CrossRef]
Figure 1. Distribution of the variable “age”.
Figure 2. Employee views on the contribution of AI to improving the efficiency and effectiveness of Greek universities.
Figure 3. Employee opinions on the benefits of automating university administrative processes using AI.
Figure 4. Challenges and prospects of integrating AI into university administration.
Figure 5. Employees’ views on ethics and data protection in the integration of AI into university administration.
Figure 6. Results to the question Q25—In your opinion, what may be considered the most critical issue that will arise from the integration of AI into the Administration of Greek Universities? Percentages represent the proportion of respondents selecting each option (N = 127).
Figure 7. Employees’ views on the contribution of employee skills development to the smooth integration of AI into the administration of Greek universities.
Figure 8. Proposed conceptual (not empirically validated) framework for AI adoption in university administration.
Table 1. Operational definitions and their alignment to questionnaire items.

Construct/Term | Operational Definition in This Paper | Alignment to Questionnaire Items/Composite
AI integration (administration) | The perceived introduction and use of AI-enabled automation and/or decision-support tools within university administrative workflows under defined governance rules and human oversight (not pedagogical AI). | Measured indirectly via domain items (Q4–Q5; Q7–Q12; Q14–Q18; Q20–Q23; Q26–Q28).
Efficiency | Perceived reduction in processing time, workload, or resource use per administrative case attributed to AI support. | MO_H1 items Q4–Q5 (efficiency/effectiveness domain).
Effectiveness | Perceived improvement in service quality, accuracy, consistency, responsiveness, or decision quality in administration attributed to AI support. | MO_H1 items Q4–Q5 (efficiency/effectiveness domain).
Automation benefits | Perceived advantages of AI for routine, rules-based task automation (e.g., speed, error reduction, consistency, standard communication). | MO_H2 items Q7–Q12.
Ethics and data protection | Perceived requirements/risks related to fairness, transparency, accountability, explainability, lawful processing, data security, and GDPR compliance. | MO_H4 items Q20–Q23 (and Q25 as a multiple-response salience item, if applicable).
Skills development (facilitating condition) | Perceived need for training, upskilling, and guidance to enable feasible and responsible AI use, including human-in-the-loop oversight. | MO_H5 items Q26–Q28.
Adoption challenges/barriers | Perceived obstacles to implementation and use (e.g., complexity, uncertainty, role impacts, limited suitability for high-discretion tasks). | MO_H3 items Q14–Q18.
Table 2. Five central variables (description and questions).

Name | Description | Questions
MO_H1 | AI Efficiency/Effectiveness Perceptions | Q4–Q5
MO_H2 | Automation Benefits | Q7–Q12
MO_H3 | Adoption Challenges | Q14–Q18
MO_H4 | Ethics and Data Protection | Q20–Q23 & Q25
MO_H5 | Skills Development Needs | Q26–Q28
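As an illustration of how the five composites in Table 2 can be assembled from individual Likert items, the sketch below averages each composite's items per respondent. The item-to-composite mapping follows Table 2; averaging (rather than summing or another scoring rule) is an assumption of this sketch, and Q25 is omitted from the MO_H4 average here because Table 1 flags it as a multiple-response salience item rather than a Likert item.

```python
# Hypothetical sketch: per-respondent composite scores from Likert items.
# The Q-to-composite mapping mirrors Table 2; averaging the items is an
# assumption, not a scoring rule stated in the paper. Q25 (a
# multiple-response item per Table 1) is excluded from the MO_H4 average.
COMPOSITE_ITEMS = {
    "MO_H1": ["Q4", "Q5"],                             # efficiency/effectiveness
    "MO_H2": ["Q7", "Q8", "Q9", "Q10", "Q11", "Q12"],  # automation benefits
    "MO_H3": ["Q14", "Q15", "Q16", "Q17", "Q18"],      # adoption challenges
    "MO_H4": ["Q20", "Q21", "Q22", "Q23"],             # ethics and data protection
    "MO_H5": ["Q26", "Q27", "Q28"],                    # skills development needs
}

def composite_scores(response):
    """Average each composite's items for one respondent.

    `response` maps item codes (e.g., "Q4") to Likert answers (e.g., 1-5).
    """
    return {
        name: sum(response[q] for q in items) / len(items)
        for name, items in COMPOSITE_ITEMS.items()
    }
```

A respondent answering 4 on every item would score 4.0 on all five composites; mixed answers within a domain average out (e.g., answering 1 on Q4 and 5 on Q5 gives MO_H1 = 3.0).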
Table 3. Results of Spearman’s correlations.

Pair | Spearman’s ρ | p-Value | Characterization
MO_H4 : MO_H2 | 0.160 | 0.073 | weak correlation
MO_H4 : MO_H3 | 0.171 | 0.055 | weak correlation
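The coefficients in Table 3 are Spearman rank correlations between composite scores. As a minimal stdlib-only illustration of the computation (not the authors' code, and without the p-values, which require a separate significance test), ρ can be obtained as a Pearson correlation over tie-adjusted ranks:

```python
def _ranks(xs):
    """Return 1-based ranks of xs, assigning tied values their average rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(xs):
        j = i
        # Extend j over the run of tied values starting at position i.
        while j + 1 < len(xs) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of the 1-based positions i+1 .. j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman_rho(x, y):
    """Spearman's rho: Pearson correlation of the rank-transformed data."""
    rx, ry = _ranks(x), _ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5
```

Any strictly monotone relationship yields ρ = 1.0 (or −1.0 if decreasing), regardless of linearity. For real analyses, `scipy.stats.spearmanr` also returns the two-sided p-value of the kind reported in the table.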
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Bousiou, O.; Paraskevas, M.; Kapoulas, V.; Liargovas, P. Prospects for Integrating Artificial Intelligence into the Administration of Higher Education in Greece. Adm. Sci. 2026, 16, 131. https://doi.org/10.3390/admsci16030131


