Article

Artificial Intelligence: Objective or Tool in the 21st-Century Higher Education Strategy and Leadership?

by
Lucien Bollaert
Quality Education Board, Ghent University, 9000 Ghent, Belgium
Educ. Sci. 2025, 15(6), 774; https://doi.org/10.3390/educsci15060774
Submission received: 5 March 2025 / Revised: 21 May 2025 / Accepted: 4 June 2025 / Published: 18 June 2025
(This article belongs to the Special Issue Higher Education Governance and Leadership in the Digital Era)

Abstract

Since the launch of ChatGPT, (generative) AI has developed so much and so fast that it has entered higher education (HE) and higher education institutions (HEIs). This article is meant to help HE(Is) deal with AI strategically and in terms of leadership. It investigates the influence that AI and the use of AI tools are having on HE(Is). To that end, four research questions are formulated: how do AI and AI tools influence HE(Is) in their mission, organization, and context; should AI and its applications then be regarded as a strategic objective or only as a tool to realize the strategy; how are AI and the use of AI tools, as developed and described in an AI strategy, best managed so as to be adopted and integrated in an effective and responsible way; and, finally, what influence do AI and its tools have on leadership and culture? In order to answer those questions, the article first describes our contemporary times and the leadership they require, then delves into the history of the development of AI and its tools, and investigates the current and future attitudes towards, degrees of implementation of, and uses of AI and its tools among the internal and external stakeholders of HE(Is). The findings result from a global literature study of international surveys and two case studies, selected on the basis of topical usefulness, international scope, (statistical) relevance, and overall quality of research. In this way, the article aims to help HE(Is) develop an AI strategy and can thus be read as a policy paper underpinned by a meta-analysis. The main results are that, although the use of AI in HEIs is divided, the effective and responsible adoption and integration of AI is a new strategic objective that helps realize HE’s three-fold mission in a well-planned and managed way, asking for visionary leadership and a clear policy framework and guidelines, in which transparency, responsibility, and critical thinking link AI use to the enhancement of uniquely human competences.

1. Introduction

Since 2000, or perhaps some decades before, higher education (HE) has been confronted with more numerous, deeper, and faster changes in the world. The much-attended yearly World Economic Forum (WEF) in Davos, for instance, has identified the emergence of a so-called 4th Industrial Revolution. Its founder and chairman, Klaus Schwab (Schwab, 2017), describes this global technology revolution as a new era that builds on and extends the impact of digitization in new, integrated, and unanticipated ways, in the form of cyber-physical systems embedded within societies and even human bodies.
Transformation is the word most encountered when describing our contemporary times. HE and Higher Education Institutions1 (HEIs) have been challenged to adapt to these quite fundamental global changes. Although their mission still seems to consist of the well-known three activities (learning, research, and societal services), the changed context has certainly altered not only their contents and strategic goals but also the ways HE and HEIs are structured, governed, and managed, demanding other competencies2 and skills.
Former Google CEO Eric Schmidt (2025) lately repeated after others that (generative) artificial intelligence (GenAI) “(…) will be the most transformative technology since electricity.” Not surprisingly, Henry Kissinger argues in the book he wrote with Eric Schmidt and Daniel Huttenlocher, dean of the Massachusetts Institute of Technology (MIT) Schwarzman College of Computing (Kissinger et al., 2021), that AI is going to change everything in international relations as nothing has since World War II, both economically and militarily.
In its sequel, Kissinger’s last book (Kissinger et al., 2023), with Craig Mundie (ex-Microsoft) and again Eric Schmidt as co-authors, the authors present the dilemma of co-evolution or co-existence with AI and identify the alignment of AI systems with human values as the real challenge of our age. If this is disregarded, profit-driven or ideologically driven purposeful misalignments, as well as accidental ones, might outweigh the benefits of integration with AI, a path that will not be easy to retreat from.
The 2024 Nobel Prize in Physics is clearly linked to AI twice: it was won by John Hopfield (Princeton University), who created a structure that can store and reconstruct information, and by Geoffrey Hinton (University of Toronto and Google), who invented a method that can independently discover properties in data and that has become important for the deep artificial neural networks now in use. Remarkably, Hinton left Google in 2023 in order to be free to warn against AI’s dangers should it become more intelligent than humans, perhaps within 20 years. Half of the 2024 Nobel Prize in Chemistry, awarded to Demis Hassabis and John Jumper, is also linked to AI through Google’s private British laboratory DeepMind (see Section 5.2 and Section 6.3).
The MIT Technology Review has likewise listed the following AI breakthroughs for 2025 (MIT, 2025): AI agents, small language models, generative virtual playgrounds, LLMs that “reason”, AI in science, AI companies getting cozier with national security, and legitimate competition for the chip market leader, Nvidia.
This article focuses on the breakthrough of AI as a further, most transformative and influential technology and on the ways it may transform HE and HEIs, urging another kind of leadership and strategy.

2. Methodology

The ultimate aim of the article is to help HE(Is) adopt and integrate AI in a responsible and ethical way by developing and realizing a successful strategy, an adapted leadership, and an appropriate culture.
As far as methodology is concerned, this article is based on surveys of and reports on AI, as global as possible, published on the internet by international and national institutions, organizations, and consultancies. The search for interesting surveys and research literature about AI and its usage was conducted with the help of AI. The ultimate selection from the long list of publications gathered by AI, however, was made by the author himself, based on topical usefulness, international scope, (statistical) relevance, and overall quality of research. In this way, this article can be regarded as a meta-analysis in the sense of gathering many studies in order to answer particular research questions and obtain a global picture of AI and its current usage in HE(Is). Except for the case studies, the stakeholders’ surveys needed to be international and recent in order to provide the widest view of the quickly changing use of AI and its tools. For each stakeholder, at least two international surveys are used in order to check and compare the results as well as to observe possible trends. In this way, the research and analysis come close to, and contain many elements of, a review. The results of the surveys as well as the two case studies are eventually used to answer the question in the title of this article in an argued and underpinned way. Since the aim of the article is to help HEIs develop a responsible AI strategy, the article can also be read as a policy paper underpinned by a meta-analysis.
The four main research questions formulated in this article are: how do AI and AI tools influence HE(Is) in their mission, organization, and context; should AI and its applications then be regarded as a strategic objective or only as a tool to help realize the existing non-AI strategy; how are AI and the use of AI tools, as developed and described in an AI strategy, best managed so as to be adopted and integrated in an effective and responsible way; and, finally, what influence do AI and its tools have on leadership and culture? In order to answer those questions in an argued and informed way, the article first describes our contemporary times and the leadership they require, then delves into the history and development of AI and its tools, and investigates the current and future attitudes towards, degrees of implementation of, and uses of AI and its tools among the internal and external stakeholders of HE(Is). In this way, the research and analysis of the article endorse the stakeholders’ model as the underlying concept of good governance and of successful strategy building and performance in HE(Is).
In the analysis and writing of this article, no AI (tool) was used. Some case studies and analyses of good practices are taken from institutions the author knows to have a strong reputation for ICT in general and AI in particular. At Ghent University and the University of Applied Sciences of West-Flanders (HOWEST), the author conducted in-depth interviews with the AI officers and researchers, who could react to drafts of the text, as did some personal critical friends. Final amendments based on their feedback and on the blind peer review led to this final version.

3. Transformative Times and Challenges

Together with the concept of the 4th Industrial Revolution, Schwab and the WEF also rightly point to the unevenness of these technologies and warn of three big areas of concern: inequality, security, and identity (Schwab, 2017). By asking the question “what do we want these technologies to deliver for us?”, they touch upon the ethical and moral dimensions involved in ensuring that this new shift creates benefits for the many, rather than the few.
The same observations and considerations of a new era lay at the base of the adoption of the 2030 Agenda for Sustainable Development by the United Nations (UN) in 2015 (UN, 2015a, 2015b). The agenda provides a shared blueprint for peace and prosperity for people and the planet, now and in the future. At its heart are the well-known 17 Sustainable Development Goals (SDGs), which are an urgent call for action by all countries recognizing that ending poverty and other deprivations must go hand-in-hand with strategies that improve health and education, reduce inequality, and spur economic growth while tackling climate change and working to preserve the oceans and forests. The fourth goal is to ensure inclusive and equitable quality education and promote lifelong learning opportunities for all.
Since the 1970s, Bollaert (Bollaert, 2019, 2023, 2024) has witnessed many changes in HE. In 2024 he identified five global and interlinked challenges:
  • the global economy, its financial crises, and finances;
  • the geopolitical dimension of tensions and conflicts;
  • the global climate change;
  • digitalization, IT and AI;
  • the physical and mental health of students and staff.
Bollaert’s five transformative global challenges are comparable to those of the OECD. In its report on trends shaping education in 2025 (OECD, 2025), the following four trends are observed and explored:
- global conflict and cooperation straining public spending, with security and defense budgets expanding at the expense of other priorities, such as education;
- work and progress transforming global labor markets due to technological advancements, most notably AI, and sustainability imperatives;
- voices and storytelling focusing on whose voices are heard and whose stories are told in an increasingly digital and globalized world, in which democracies have seen a decline in voter turnout, reflecting growing dissatisfaction with traditional political processes, and in which the rise of populism and polarization highlights the need for education to promote social cohesion and critical thinking;
- bodies and minds, exploring the intricate connections between physical and mental health, environmental factors, and societal changes.
The list and descriptions above also show how the OECD report rightly deals with two major themes cross-cutting the four trends: advancements in technology, including AI, and environmental sustainability. Specifically on learning in an AI-driven world, the OECD report explores how AI is reshaping the educational landscape, which will be further explored in this article from Section 6 onwards.

4. University Leadership in Transformative Times

Since the globalization of the economy and the internet-linked digitalization of the 1990s, leadership in both the profit and social-profit sectors has become crucial for survival. Digitalization has had a deep influence on organizations and their management, universities included. Without effective digital academic leadership (DAL), HEIs risk falling behind in a world where technology is central to academic success, operational efficiency, and student satisfaction (Shrivastava & Shrivastava, 2022). DAL is a relatively new concept that has not yet been researched extensively. The broadest definitions extend beyond technology and emphasize that DAL is above all the ability to lead by using information and communication technologies to reach the HEIs’ strategic aims (Harbani et al., 2021). In this way, DAL should inspire digital change but also motivate students, teachers, and indeed all other stakeholders to actively participate in digital transformation (Salah, 2023). Cheng et al. (2024) have recently published a systematic review of DAL in HE, in which they distinguish no fewer than nine descriptions of DAL with 16 sub-divisions, ranging from leading and managing digital/technological knowledge to the broad strategic use of digital assets to achieve organizational goals.
Together, the five current global challenges have caused considerable changes in the leadership and strategies of HE(Is). Although these challenges may seem a far-away show, they are actually closely linked to everyday life at universities. Next to the formulation and realization of a good strategy through effective actions, and next to a matching organizational culture starting from a vision and a mission that answer the questions “Why?”, “How?”, and “What?” in that specific order (Sinek, 2009), visionary leadership seems to be crucial.
The features that characterize the new kind of leadership HE(Is) need are many and demanding: visionary, transformational, service-oriented, truly listening and caring, authentic and consistent, engaged, attentive to and addressing culture, value-driven, respected rather than authoritarian, empowering and facilitating, quality-driven, inspired by societal added value, ethically “correct”, flexible, dynamic, resilient, transparent, honest, a team player, and a superb professional manager are among those most encountered.
This long list of highly demanding, sometimes somewhat contradictory leadership requirements already makes clear that it is almost impossible for one individual to possess and practice them all. Modern leadership should indeed be understood as a collective or team of driven persons at all levels of responsibility in an organization. It is therefore of the utmost importance that any rector, vice-chancellor, or general executive of an HEI constitutes a professional team sharing the same vision, mission, beliefs, and values.
The central outcome of this article should answer the question of whether AI can be regarded as just a further step in digitalization or whether AI digs deeper into the transformation of HE(Is), their mission, their organization and management, their stakeholders, and their contexts. A further question to be answered is whether (Gen)AI causes and/or calls for any changes in this new kind of HE leadership and strategy. Therefore, we should first look at what AI really is and what it causes or can cause.

5. AI in a Nutshell

5.1. AI in a Nutshell: The History of AI (Technology)

Since the successful release of ChatGPT by OpenAI in November 2022, (Gen)AI has been at the center of (technology) news and controversy. Because GenAI carries such transformational power in so many fields and applications, it is worth describing it in a nutshell.
Contrary to what one might think, AI already has quite a long history. Though some histories link AI to Greek mythology and medieval legends about artificial beings, the real birth of AI can be situated in the period 1941–1956. As early as 1936, the famous British mathematician Alan Turing conceived a machine that would be able to run any algorithm that terminates in time. This not only gave birth to the computer; neuroscientists also started modeling the human brain as a machine. In 1950 Turing published a landmark paper (Turing, 1950) in which he speculated about the possibility of creating thinking machines. He simplified “thinking” as “being able to have a conversation indistinguishable from that with a human being”. This became the famous Turing Test, in which machines can imitate thinking, but not reason by themselves.
At the pivotal Dartmouth workshop of 1956, organized by Marvin Minsky and John McCarthy with the support of IBM scientists, McCarthy introduced the term “Artificial Intelligence” based on the assumption that “every aspect of learning or any other feature of intelligence can be so precisely described that a machine can be made to simulate it” (McCarthy et al., 1955). In the autumn of the same year, Newell and Simon presented the Logic Theorist at a meeting of the Special Interest Group in Information Theory at MIT. This was the birth of the interdisciplinary paradigm shift joining symbolic artificial intelligence, generative linguistics (Chomsky), cognitive science, cognitive psychology, cognitive neuroscience, and the philosophical schools of computationalism and functionalism. AI laboratories were then set up at a number of British and US universities and subsidized by government agencies like the American Defense Advanced Research Projects Agency.
An important goal of AI research is to allow computers to communicate in natural languages. For that purpose, semantic networks were developed with interlinked nodes or concepts. In the 1960s AI laboratories also researched neural networks.
Nevertheless, AI research and development have also known two so-called “AI winters” (1974–1980 and 1987–2000), each followed by an AI hype cycle. In the early 2000s, machine learning was applied thanks to the availability of powerful computer hardware, the collection of immense data sets, and the application of solid mathematical methods. Deep learning proved to be a breakthrough technology, and the transformer architecture has been used to produce generative AI applications since 2017.
(Gen)AI is thus a not very well-defined container concept that entails all developments from the early expert systems to the latest neural networks. Figure 1 below shows the relations among its several elements.
Let us describe the elements in the figure above in order to be precise and understand them. In general terms AI can be described as “a technology that allows machines and computer applications to mimic human intelligence, learning from experience via iterative processing and algorithmic training” (Colorado State University Global, 2024). The verb “to mimic” is quite important in this definition.
Machine learning (ML) is a field of study in artificial intelligence dealing with the development and study of statistical algorithms that can learn from data and generalize to unseen data, thus performing tasks without explicit instructions. Within machine learning, advances in the subdiscipline of deep learning have allowed neural networks, a class of statistical algorithms, to surpass many previous machine learning approaches in performance.
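To make the idea of learning from data concrete, the minimal sketch below (purely illustrative; the toy data and numbers are invented for the example) fits a tiny linear model to example pairs by iteratively reducing its prediction error, after which it can generalize to an input it has never seen; no explicit rules are programmed.

```python
# Minimal illustration of machine learning: learn y ≈ w*x + b from example
# pairs, then predict for unseen data. No rules are programmed; the two
# parameters are adjusted iteratively to reduce the prediction error.

data = [(1.0, 3.1), (2.0, 4.9), (3.0, 7.2), (4.0, 8.8)]  # (x, y) examples

w, b = 0.0, 0.0          # model parameters, initially arbitrary
learning_rate = 0.01

for _ in range(5000):                    # iterative processing of the data
    for x, y in data:
        error = (w * x + b) - y          # how wrong is the current model?
        w -= learning_rate * error * x   # nudge the parameters so that the
        b -= learning_rate * error       # error shrinks (gradient descent)

print(f"learned model: y = {w:.2f}*x + {b:.2f}")
print("prediction for unseen x = 5:", round(w * 5.0 + b, 2))
```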
An LLM is a type of machine learning model designed for natural language processing tasks such as language generation. LLMs are language models with many parameters, trained with self-supervised learning on vast amounts of text. The largest and most capable LLMs are generative pretrained transformers (GPTs).
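The self-supervised objective behind LLMs, predicting the next token from the preceding text, can be illustrated at toy scale. The sketch below is a drastically simplified, hedged illustration (a bigram word model in plain Python; real LLMs use transformer networks with billions of parameters): the raw text itself provides the training signal, so no human labels are needed.

```python
from collections import defaultdict, Counter
import random

# Toy self-supervised "language model": learn next-word statistics from raw
# text. The text itself is the training signal; no labels are required.
corpus = "the cat sat on the mat and the dog sat on the rug".split()

counts = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    counts[current][nxt] += 1            # count observed continuations

def generate(word, length=6):
    """Generate text by repeatedly sampling a likely next word."""
    out = [word]
    for _ in range(length):
        followers = counts.get(out[-1])
        if not followers:                # no known continuation: stop
            break
        words, freqs = zip(*followers.items())
        out.append(random.choices(words, weights=freqs)[0])
    return " ".join(out)

print(generate("the"))                   # e.g., "the cat sat on the mat and"
```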
A(n) (artificial) neural network is software that does not contain programmed rules but consists instead of a programmed network of nodes, called artificial neurons, and connections. It always needs to be trained to perform a task. The structure and functioning of neural networks are inspired by the biological networks of brains, and these networks have proved to improve performance significantly. Artificial neural networks are used for various tasks, including predictive modeling, adaptive control, and solving problems in artificial intelligence. They can learn from experience and derive conclusions from a complex and seemingly unrelated set of information. In Figure 2 below, each circular node represents an artificial neuron, and an arrow represents a connection from the output of one artificial neuron to the input of another.
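The sketch below shows, in a hedged and minimal form, what such a network computes (the weights are hard-coded and invented for the illustration; in practice they are learned during training): each neuron sums its weighted inputs, applies an activation function, and passes the result along its outgoing connections, exactly as the arrows in Figure 2 suggest.

```python
import math

# A tiny feedforward network as in Figure 2: two inputs -> two hidden
# neurons -> one output neuron. Each arrow in the figure corresponds to one
# weight below. Weights are invented here; normally they are learned.

def neuron(inputs, weights, bias):
    """Weighted sum of inputs plus bias, squashed by a sigmoid activation."""
    z = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))

inputs = [0.5, -1.2]                          # the two input nodes

# hidden layer: each neuron is connected to both inputs
h1 = neuron(inputs, [0.8, -0.4], bias=0.1)
h2 = neuron(inputs, [-0.3, 0.9], bias=-0.2)

# output neuron, connected to both hidden neurons
output = neuron([h1, h2], [1.5, -1.1], bias=0.05)
print(f"hidden activations: {h1:.3f}, {h2:.3f} -> output: {output:.3f}")
```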
AI systems need to be trained by the input of data (the more data, the better the expected results) and thus work by combining large data sets with intelligent, iterative processing algorithms that learn from patterns and features in the data they analyze. Each time an AI system runs a round of data processing, it tests and measures its own performance and develops additional expertise.
Figure 2 also shows how we know the input and get to know the output of AI, but do not understand its hidden processes. Contrary to rule-based software, where the programmer thinks up the solution, with machine learning it is the computer itself that comes up with the solution. This so-called “black box” is one of the main reasons why many do not trust AI. The fact that AI’s output is sometimes so unexpected or so clearly wrong can be related to the input data, which are always biased in one way or another, but it can never be explained from its way of processing.
Another important category of AI flaws is the so-called “hallucinations”. Those misleading results can be caused by a variety of factors, including insufficient training data, incorrect assumptions made by the model, or biases in the data used to train the model.
Last but not least, AI will always lack self-awareness and consciousness, since it does not (yet) have subjective experience, personal identity, or the ability to reflect on its own existence. Humans possess a level of self-awareness that is currently beyond the scope of artificial intelligence (Riggins, 2024). Because of all these fundamental constraints, AI will never develop the way a human being does and will always lack uniquely human features such as critical thinking, since it remains a machine (see also Section 5.2).
Scientists are working on ways to make hallucinations less frequent and less problematic by developing a toolbox of tricks, including external fact-checking, internal self-reflection, or even conducting “brain scans” of an LLM’s artificial neurons to reveal patterns of deception (Jones, 2025a). Yet, since it cannot be known how AI makes things up in its “black box”, AI hallucinations cannot be stopped, and the results should therefore always be checked by a human with knowledge of the discipline or fact in question, as well as of its context (see also Section 5.2).
Since GenAI is based on recognizing patterns in a way that mimics the human brain, and considering that we know more about how the human brain works than about AI’s “black box”, the concern about AI’s processing should remain, yet not be blown out of proportion. The human brain, too, makes mistakes in pattern recognition, even in science. Is it not at the heart of science always to double-check results, certainly when discovering new things or exploring new hypotheses or explanatory theories? Should technology not always be combined with human critical thinking about its results, its possible applications, and its use in various contexts?
The “black box” should not be overemphasized, also because recent models have already been developed in which ANNs not only recognize patterns, comparably to our brains, but can also generate patterns themselves. As early as 2014, the American computer scientist Ian Goodfellow and his colleagues created the Generative Adversarial Network (GAN), in which two neural networks, the Generator and the correcting Discriminator, compete and thereby improve (Goodfellow et al., 2014).
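As a hedged sketch of this adversarial idea (a toy example assuming the PyTorch library, not Goodfellow’s original implementation), the Generator below learns to produce samples resembling a simple target distribution, while the Discriminator learns to tell real samples from generated ones; each improves by competing with the other.

```python
import torch
import torch.nn as nn

# Toy GAN: the Generator maps random noise to fake samples; the
# Discriminator scores samples as real (1) or fake (0). Both improve by
# competing, as in Goodfellow et al. (2014). Target "real" data: N(3, 0.5).
G = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0       # samples of the real data
    fake = G(torch.randn(64, 1))                # Generator's attempts

    # Discriminator step: label real as 1 and generated as 0
    opt_d.zero_grad()
    d_loss = bce(D(real), torch.ones(64, 1)) + \
             bce(D(fake.detach()), torch.zeros(64, 1))
    d_loss.backward()
    opt_d.step()

    # Generator step: try to make the Discriminator call fakes real
    opt_g.zero_grad()
    g_loss = bce(D(fake), torch.ones(64, 1))
    g_loss.backward()
    opt_g.step()

# After training, generated samples should center near the real mean of 3.0
print("mean of generated samples:", G(torch.randn(500, 1)).mean().item())
```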
The latest development is the coming of computer-using agents (CUAs) or AI agents, such as Anthropic’s Computer Use (October 2024), Google DeepMind’s Mariner (December 2024), and, in January 2025, OpenAI’s Operator. Those agents are web applications that can carry out routine online tasks in a browser themselves, such as booking tickets or filling an online grocery order, instead of only giving answers as chatbots do. Unlike traditional AI and even GenAI such as ChatGPT, which require explicit instructions for every task, agentic AI assesses situations, formulates plans, and executes them with minimal, yet not entirely absent, human oversight. Key characteristics of AI agents include autonomy in operation, adaptability through interaction with humans and feedback from past actions, clear goal orientation, proactive information search and analysis, and taking actions without further prompting.
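Conceptually, what distinguishes such an agent from a chatbot can be sketched as a loop of observing, planning, and acting towards a goal. The sketch below is purely illustrative (all class and method names are invented; no vendor’s actual product exposes this interface): instead of returning one answer per prompt, the agent repeats observe-plan-act cycles, recording feedback from past actions, until the goal is reached.

```python
# Illustrative agent loop (all names invented for this sketch; not any
# vendor's real API). A chatbot returns one answer per prompt; an agent
# pursues a goal through repeated observe -> plan -> act cycles.

class ToyEnvironment:
    """Simulated online task: three steps must be completed in order."""
    def __init__(self):
        self.steps_done = 0

    def observe(self):
        return f"{self.steps_done} of 3 steps completed"

    def execute(self, action):
        self.steps_done += 1
        return f"executed: {action}"

    def goal_reached(self):
        return self.steps_done >= 3

class ToyAgent:
    def __init__(self, goal):
        self.goal = goal
        self.memory = []                 # feedback from past actions

    def plan(self, observation):
        # formulate the next step towards the goal from the current state
        return f"step towards '{self.goal}' (state: {observation})"

    def run(self, env, max_steps=10):
        for _ in range(max_steps):       # autonomy: no prompt per action
            if env.goal_reached():
                return "goal reached"
            action = self.plan(env.observe())
            self.memory.append(env.execute(action))   # act and adapt
        return "gave up"

agent = ToyAgent("fill an online grocery order")
print(agent.run(ToyEnvironment()))       # -> "goal reached"
```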
On the other hand, AI agents also require careful consideration and proactive management. A key concern is accountability for AI-made decisions; another is transparency in decision-making processes. Equally important is security against misuse: AI agents are both corruptible and hackable.
What those AI agents could mean for routine jobs is still unclear, but in time they will replace them. With this future, we enter the ethical dimension of the context and applications of AI. According to Terence Tse (2025), agentic AI represents more than just technological advancement: it signals a fundamental shift in how we approach work and problem-solving. He argues that AI agents will fundamentally change the modern workplace by automating routine tasks, allowing humans to focus on higher-value activities requiring critical thinking, creativity, and emotional intelligence, which in turn can lead to more fulfilling work and higher job satisfaction. In addition, this evolution in workplace dynamics creates opportunities for professionals to develop new skills and expertise in AI system management and strategic oversight, fostering a more dynamic work environment.

5.2. AI in a Nutshell: The Ethical Dimension of Developing and Applying AI (Tools)

Though ChatGPT is only a powerful LLM-based chatbot, AI in general has sparked vigorous discussions across a wide range of fields. Indeed, AI has met a lot of criticism, both from the analysis of its own unknown processing in its “black box” and because of its huge energy needs and potentially dangerous applications.
As far as energy is concerned, a Goldman Sachs report (Singer et al., 2024) calculated a 160% increase in data center power consumption by 2030, noting that some large data centers already consume 100 MW, the equivalent of powering 80,000 homes. It may not be surprising that leading cloud service providers (CSPs), like Microsoft, Google, and Amazon, are increasingly turning to nuclear energy (Patrizio, 2024). Nuclear power is a more consistent, reliable source of energy than solar or wind, since both are susceptible to the whims of nature. On the other hand, the nuclear power plant that Microsoft’s 20-year deal plans to reopen in 2028 is Three Mile Island, the site of the worst nuclear accident in US history, which was shut down following a partial meltdown in March 1979.
The sudden and surprising appearance of the Chinese open-source chatbot DeepSeek on 20 January 2025 not only caused disruptions on Wall Street and geopolitical tensions but also proved that AI can be deployed with less energy-consuming chips and at lower cost: only 5.6 million dollars against 100 million for ChatGPT. The fact that this AI model was also developed by a relatively young team that has existed for only two years and includes graduates and current students from leading Chinese universities is more than surprising. While the AI race in China has been dominated by technology giants like Alibaba and ByteDance, backed by heavyweight investors, the sudden rise of DeepSeek has also caused a shift towards smaller innovators. Caiwei Chen has already identified four other Chinese AI startups to watch beyond DeepSeek (Chen, 2025). At the same time, DeepSeek’s surprise could encourage more students to pursue opportunities at home amid a global race for talent. DeepSeek’s development is said to be already inspirational in Asia and Africa (Bhattacharya, 2025).
Looking at AI’s short history above, it is clear that the development and uses of AI have always been close to the military-industrial complex. The so-called lethal autonomous weapons are the latest, terrifying development and are still unregulated in a world full of geopolitical conflicts and tensions. Kamikaze or exploding drones, for instance, are already in use in the Gaza and Ukraine wars. Next to Israel and Russia, the USA, the UK, China, and South Korea also possess and develop such killer-robot systems with significant autonomy in selecting and attacking targets, which are therefore more arbitrary and unaccountable. Lushenko and Carter (Lushenko & Carter, 2024) observe that the debate on and development of AI in warfare are largely driven by so-called “tech bros” and other entrepreneurs who stand to profit immensely from militaries’ uptake of AI-enabled capabilities, thus forming a new kind of military-industrial complex.
As recently as 4 February 2025, Alphabet, Google’s parent company, announced in a blog post (Manyika & Hassabis, 2025) the publication of its 6th annual Responsible AI Progress Report (Google, 2025) and an update to its Frontier Safety Framework and AI principles. Unfortunately, in this update of its ethical guidelines regarding AI, any former reference to the use of such advanced technologies solely for peaceful purposes has been removed. Demis Hassabis, CEO and co-founder of Google’s DeepMind and official AI advisor to the UK government, thereby states that these guidelines are being “overhauled in a changing world” and that “AI should protect national security”. Even seven years before the current update of its AI principles, Google stepped into Project Maven, a purely military program of the US Department of Defense to harness AI for military targeting with satellite imagery. As the technology matured, it was reported to be deployed in real-world operations, including in Iraq, Syria, Yemen, and the Russia-Ukraine war (Ibrahim, 2024).
Another common ethical question is how to assess the dilemma between security and privacy, for example with facial recognition making use of AI. Facial recognition of possible terrorists or criminals on the loose can increase our security, but it can also be used to identify and track political opponents in authoritarian regimes. As for the always-biased output of AI due to its input, a painful experiment was conducted by the Institutes of Tropical Medicine of Antwerp (Belgium) and Nagasaki (Japan), together with the University of Oxford. The AI tool Midjourney Bot was asked to generate an image of a white child being vaccinated by an African doctor. The outcome can be read in the famous medical journal The Lancet (Alenichev et al., 2023): in more than 300 attempts, the patients always had dark skin, and some results even included exaggerated African stereotypes such as giraffes, elephants, or caricatured clothing. The researchers speak of a “wake-up call” for awareness of potential biases in AI-generated images.
Yet the application of AI in health care is not only very wide but also very helpful in a good sense. With the help of AI pattern recognition, nano-scale and hidden fractures and diseases can be discovered that current medical imaging, let alone microscopes or even MRI scans, cannot observe and show.
Others blame AI for being merely a processor of plagiarism, re-designing its input with or without copyright. Artists, such as composers, pictorial artists, designers, architects, photographers, and film, cartoon, and video artists, have already voiced their concerns and warnings about their copyrights, while AI-generated (artistic) content does not yet seem to be copyrightable (Glover, 2024). The long SAG-AFTRA union strike of film and television actors in Hollywood, from 14 July to 9 November 2023, demanded not only larger minimum-pay increases and a streaming bonus but also “consent and compensation” provisions against AI. How exactly the latter are to be enforced remains to be seen.
Another important observation regarding AI’s ethical dimension is its extensive use in cyberattacks. How cyberattackers have used GenAI to accelerate phishing since the launch of ChatGPT was analyzed for McKinsey & Company: the analysts (Greis & Sorel, 2024) observed a rise of no less than 138 percent.
Another important observation is that, although some governments have spent a lot of money on the development of AI, the dynamics have always been in (semi-)private hands. This situation can be compared to the research, production, distribution, and selling of the anti-COVID vaccines. Most HEIs that have strategically chosen AI research cannot keep up with the speed of research, applications, and products from firms like Apple, Alphabet (Google), Microsoft, Nvidia, Meta (Facebook), and OpenAI. A comparable situation can be seen in the development of chips so necessary for AI. The latest public investment of 500 billion dollars announced by President Trump only one day after his inauguration proves not only how competitive the private market of AI companies is, but also how huge sums of public money go to private ownership.
Most appealing to the imagination is how generative AI could transform the human being. At the very least, the inspiration drawn from the biological brain’s structure and functioning, and their comparability in pattern recognition, shows how close the human brain and artificial neural networks can be. Turning the relationship around, from brain to computer, the famous MIT psychologist and linguist Steven Pinker undertook gripping and powerful research in trying to explain how the mind works by comparing it to an advanced computer (Pinker, 1997).
The moral question often asked is not only whether and when AI will become more intelligent than humans, but also whether artificial intelligence is going to take over from human intelligence. As early as 2017, Facebook unveiled plans for a brain-reading hat that you could use to text just by thinking.
Lately, scientists at the University of Texas (Tang & Huth, 2025) have improved a “brain decoder” that uses AI to convert thoughts into text. The new AI “brain decoder” can read a person’s thoughts with just a quick brain scan and almost no training, based on their brain’s responses to stories heard inside an MRI machine. Previously, that listening took many hours, and the decoder worked only for the individuals on whom it had been trained. The findings could one day support people with aphasia, a brain disorder that affects a person’s ability to communicate (see LiveScience 18 February 2025, AI ‘brain decoder’). AI and neuroscience have also collaborated in other good ways. Lately, for example, researchers used machine learning to identify 13 proteins in the blood that predict the pace at which a person’s brain ages compared with the rest of their body (Liu et al., 2024). The results could help scientists identify molecules to target in future treatments for age-related brain diseases.
One of Elon Musk’s companies, Neuralink (founded in 2017), wants to expand human intelligence with AI by means of a chip implanted in the head, thus bringing biological and artificial brains very close together. At the moment, Neuralink still focuses on generalized brain interfaces to restore autonomy to those with unmet medical needs, but its mission clearly also includes unlocking human potential tomorrow.
For some, the real aim is to vanquish death: Larry Ellison (Oracle) has spent hundreds of millions of dollars on research into life-extending technology. Peter Diamandis, American engineer, entrepreneur, and (co-)founder and chairman of many (space) organizations, such as the XPRIZE Foundation, the private Singularity University, the International Space University, and Human Longevity, Inc., is very open about vanquishing death in his book, Longevity Guidebook. How to Slow, Stop, and Reverse Aging—and NOT Die from Something Stupid (Diamandis, 2024).
This target of everlasting life is precisely what Yuval Noah Harari, the Israeli history professor at the Hebrew University of Jerusalem and research fellow at the University of Cambridge, warns us against in the best-selling second book of his trilogy, Homo Deus (Harari, 2016). In his most recent book, Harari not only tells a brief history of information networks from the Stone Age to the age of AI (Harari, 2024) but also warns us to make urgent choices in order not to be overrun by AI. In his earlier 21 Lessons for the 21st Century (Harari, 2018), the rising power of big tech is the first theme Harari addresses, next to nuclear war and ecological disaster.
Yet one should never forget that (Gen)AI can only recognize patterns or pictures but never really know or understand its own output. For AI, recognizing a face or an animal means being able to identify and name it as such, but it will never understand what the person with this face or the animal with this name really means, let alone in a holistic way, even if it identifies the person with this particular face as your relative or as being in a depressive mood. AI will never understand or feel what it means to be an identified person’s father or sister, for instance, in an emotional or empathic way. Therefore, however much AI might help us in searching for or identifying somebody or something, it should always be accompanied by the human touch, both to critically check the output and to deal with the result in a human and ethical way.
All the observations and considerations about AI, its development, its functioning, and its innumerable applications within the geopolitical context and everybody’s life make clear that there is a large and important ethical dimension to AI. How the HE stakeholders want and are able to incorporate and manage this dimension, next to AI’s technicalities, is investigated in the next sections.

6. AI in the Eyes of the Internal Stakeholders

6.1. AI in the Eyes of the Internal Stakeholders: Students

The students’ perspective on AI can be regarded as quite important, because youngsters tend to take up and use the latest ICT tools quickly. Notwithstanding this reality, and although student feedback on learning, teaching, and assessment has become widespread practice, especially in Quality Assurance (QA), not many HEIs, (inter)national organizations, and/or nations have organized interesting surveys in order to learn about students’ experiences and attitudes, let alone as strategic input.
Most institutional analyses of the students’ perspective on AI are fragmented and limited because of the small number of students involved (Jatautaite, 2023). Zuzeviciute et al. (2023) acknowledge these limitations, but their way of linking the students’ results to the main themes of AI in HE (its potential to enhance teaching, its potential to enhance learning, and ethical considerations) is still interesting.
Fortunately, some research is wide enough and digs deeper to better understand students’ use of and attitudes towards AI. A good, informative AI survey of students was published in the fall of 2024 by the UK-based international membership organization, the Digital Education Council (DEC). Its global survey (DEC, 2024) gathered 3839 responses from bachelor, master, and doctorate students in multiple fields of study across 16 countries. The survey covers the status of AI usage and readiness, student perception of AI use cases, expectations and preferences regarding university actions on AI (see also Section 6.2 and Section 9), as well as satisfaction with institutions’ AI adoption, concerns, and key attributes for AI use.
The report states that 86 percent of the responding students claim already to use AI in their studies, 24 percent even on a daily basis. ChatGPT emerges as the most widely used tool, with 66 percent of students using it. Grammarly and Microsoft Copilot each reach a 25 percent adoption rate. A wide range of other AI applications, such as Claude AI, Blackbox, DeepL, and Canva image generators, are also used. On average, each student seems to use more than two AI tools, while 22 percent even claim to use more than three.
Another recent annual US survey (EducationDynamics, 2025) makes an interesting differentiation between undergraduate and graduate students. All in all, it confirms students’ AI usage with comparable results. Nearly 70 percent of respondents, with near-equal numbers of undergraduate and graduate students, have utilized AI chatbots, while 32 percent have not. ChatGPT is the most widely used AI platform (49%), followed by Gemini (24%) and Copilot (13%). Traditional undergraduate students are more likely to use AI chatbots than their non-traditional and graduate counterparts.
As for the purposes of using AI tools according to the DEC student report, information search tops the list (69%), followed by grammar checking (42%), summarizing documents (33%), paraphrasing documents (28%), and only then creating drafts (24%). It thus seems that GenAI is becoming the new Google before becoming the new writing tool, although searching for quotations can be part of papers and essays.
Surprisingly, and in contrast to the DEC’s global staff AI survey 2025 (see Section 6.2), there is no separate subchapter specifically on assessment and/or cheating. From the students’ reported use of AI tools, there is no indication that students consciously try to cheat. This seems contrary to the popular belief that cheating with AI has become normal, a belief confirmed by the majority of academics (54%), who are convinced that current student evaluation methods require significant to complete (and urgent) change, while 28 percent of academics use AI to detect cheating (see Section 6.2).
As for the quality of education, linked to a potential decrease in the perceived value of the degree, only 27 percent of students and 25 percent of instructors identify it as a concern, while 24 percent of academics do not see any clear benefits and 25 percent see more risks than rewards in using AI in teaching. There still seems to be a nostalgic feeling about true student writing, even with errors, and hands-on scholarship.
Next to this perception of the quality of teaching and learning with AI, studies show mixed results on the effectiveness of learning, even with the most advanced AI models (Forero & Herrera-Suárez, 2023; Kumar et al., 2023). While LLMs can answer technical questions, their unguided use lets students complete assignments without engaging in critical thinking. After all, AI chatbots are generally designed to be helpful, not to promote learning. They are not trained to follow pedagogical best practices, such as facilitating active learning, managing cognitive load, and promoting a growth mindset.
Another well-known flaw of AI tutors is their uncanny confidence when giving an incorrect answer or when marking a correct reply as incorrect. Research therefore advocates carefully designed AI tutoring systems, using the best of current (Gen)AI technology and deployed appropriately, in order not only to overcome these challenges but also to address known issues in pedagogy in an accessible way that can offer good education to any community or learning environment with an internet connection (Kestin et al., 2024).
This conclusion, of defining and incorporating AI literacy and equity into pedagogical design instead of just outsourcing work to AI, was also the practical experience of Dan Myers, associate professor of computer science, and Anne Murdaugh, associate professor of physics. They introduced GenAI in several courses at the American Rollins College by requiring students to complete semester-long research projects using Claude and Copilot to brainstorm paper topics, conduct literature reviews, develop a thesis, outline drafts, and revise their papers. At each step, students had to use logbooks to write down the prompts they used, the responses they received, and how the experience shaped their thinking. This shift toward collaborating with AI did not unsettle Myers and Murdaugh, because the skills that students need to engage thoughtfully with AI are the same ones that colleges are good at teaching, namely: knowing how to obtain and use information, thinking critically and analytically, and understanding what and how you are trying to communicate (McMurtrie, 2024).
Related to the fact that 58 percent of the DEC responding students feel they do not yet have sufficient AI knowledge and skills (question: to what extent do you agree or disagree with the statement “I have sufficient AI knowledge and skills”) and 48 percent do not feel adequately prepared for an AI-enabled workplace (same question for the statement “I feel prepared for a future that heavily utilizes AI”), an overwhelming 80 percent say that AI use at their universities does not fully meet their expectations.
Students expect universities to provide more AI training for themselves (41% strongly agree) as well as for faculty (42%), and expect more courses on AI literacy (41%). In all, 73 percent agree that universities should provide training for staff on the effective use of AI tools, while 59 percent expect their institutions to increase the use of AI in teaching and learning.
The results in the DEC report also make clear that a majority of students (71%) demand to be involved in their institutions’ AI decision-making, while only 34 percent think that their university actively seeks their feedback in that matter. Yet only 18 percent of students believe that courses primarily created and delivered by AI are more valuable than traditional ones, which should make universities cautious when using AI in content creation and delivery.
The latest report on student AI use published during this research (1 February 2025) came from the Oxford-based Higher Education Policy Institute (Hepi), which polled 1250 UK undergraduate students and found that 88 percent of them already use tools such as ChatGPT to help with their assignments, up from 53 percent in 2024 (Freeman, 2025). Using AI in assessment, though, does not mean the students were necessarily breaking the rules: 59 percent agreed that their universities had changed how they conduct assessments in response to the rise of AI, and three-quarters were confident that their institution could spot AI use in assessed work. Only a small fraction (5%) said they had submitted AI-generated text without editing it. Instead, most students responded that they use AI tools to explain concepts, suggest research ideas, summarize articles, or assist with grammar, translation, and essay structure. The proportion of students who reported using any AI tool at all, and not only for assessed assignments, has jumped from 66 percent last year to 92 percent this year. Actual “generating text” is the most popular reason for using AI, ahead of editing work and accessing university textbooks. Interestingly, the report also shows that uptake has been higher among those from more privileged backgrounds (58%) than among those from the least privileged backgrounds (51%). The least that can be said is that students’ use of (Gen)AI has grown extraordinarily.

6.2. AI in the Eyes of the Internal Stakeholders: Educators

In January 2025, the DEC (2025) published its survey of academics, complementary to that of the students. This survey gathered 1681 responses from faculty members of 52 participating HEIs across 28 countries. A majority of respondents (61%) report having already used AI in teaching. Yet 88 percent of them have only used AI sparingly (minimal to moderate use), while 86 percent see themselves using AI in teaching in the future and 66 percent agree that incorporating AI is essential in preparing students for future AI-augmented work environments. Current AI usage ranges from creating teaching materials (75%), supporting administrative tasks (58%), and teaching students to use and evaluate AI in class (50%) to boosting student engagement in class (45%), detecting cheating (28%), and generating feedback on student work (24%).
John Warner develops an interesting practice and approach in his latest book, More Than Words: How to Think About Writing in the Age of AI (Warner, 2025). Starting from the observation that “writing is thinking”, and thus that the use of emotion, memory, physicality, and community all allow humans to create writing that AI cannot reproduce, he argues that HEIs need to make the process of learning a “root value” of writing, rather than the production of the perfect essay, which drives students to ChatGPT as a shortcut to the desired, and thus attractive, outcome. Warner states that GenAI programs like ChatGPT not only can kill the student essay but should, since these assignments do not challenge students to do the real work of writing. The fact that we ask students to complete so many assignments that a machine could do is a sign that something has gone very wrong with writing instruction, he argues. The book calls for using AI as an opportunity to reckon with how we work with words, and for all of us to rethink our relationship with writing. With this in mind, Warner makes a plea to concentrate on the process and the values of experience, reflection, and metacognition rather than on the outcome. In his classes, he asks questions like “What do you know that you didn’t know before?” or “What can you do now that you could not do before?”, questions that bring knowledge into play as a new start and can hardly be answered by AI, since they require a comparison with the situation before AI was used. Questions like these can be embedded in subject-matter courses and in reviewing or assessing students’ writing.
The most critical subject regarding AI for academics seems to be student assessment. In the DEC 2025 survey, more than half of them (54%) believe that current student evaluation methods require significant changes, with 13 percent even calling for an urgent, complete revamp and 50 percent believing that assignment redesign is needed. This is linked to the fact that a significant majority of faculty members (83%) express concern about students’ ability to critically evaluate AI-generated output.
An approach to assessment similar to Warner’s questions (see above) is used in a growing number of HEIs (see also Section 9). Colleagues from Manchester Metropolitan University (Dorobat et al., 2024) discussed the expectations around AI and assessment and tried out some options. They kept to their expectations: encourage peer/student collaboration in a team-driven learning process while working honestly and with integrity, each submission being a product of the students’ own understanding and effort, with all sources acknowledged.
They came up with three options for assessment. At one extreme, AI is made impractical, impossible, or forbidden. At the other extreme, a surreptitious use of GenAI tools is allowed, yet they regard this as not only undermining integrity and originality but also as ensuring that future graduates will be easily replaced in the workplace by more skilled AI technology. Their choice is a partnership between students and AI tools: academic work enhanced by AI but not replaced by it, thus keeping creative and cognitive processes embedded in the academic work.
The assessors propose an interesting three-step process for assessment redesign:
  • Understand how a specific assessment task allows for AI collusion by stress-testing it;
  • Map the assessment task against several key parameters, including structure, context, criticality, format, or foundation by taking a quiz;
  • Redesign assessments by moving and combining these parameters to create multidimensional tasks that even encourage AI collaboration.
A most interesting and meanwhile well-known way of assessing the use of AI in assignments, taking ethics into consideration, has been designed by Perkins et al. (2024). They developed the AI Assessment Scale (AIAS) in order to empower educators to select the appropriate level of GenAI usage in assessments based on the learning outcomes they seek to address. The AIAS is shown in Table 1 below, with each level specifying the extent of allowed AI use and the student’s responsibility.
The AIAS offers greater clarity and transparency for students and educators and provides a fair and equitable policy tool for institutions to work with. HOWEST used AIAS as inspiration in its strategic AI framework (see Section 10.2). At the same time, it offers a nuanced approach that embraces the opportunities of AI while recognizing there are instances where AI tools may not be pedagogically appropriate or necessary.
Further reasons mentioned in the DEC 2025 report for faculty not using AI in teaching range from not having the time or resources to explore AI (40%) and not knowing how to use it (38%) to clearly negative reasons, such as being concerned about potential negative impacts (32%), seeing more risks than rewards in using AI (25%), and seeing no benefits at all (24%).
This rather negative attitude is reflected in a clearly divided faculty sentiment on AI: 57 percent feel positive, while 13 percent hold a negative sentiment and 30 percent stay neutral, being uncertain or having mixed feelings about AI’s impact on education. Yet, all in all, 65 percent see AI as an opportunity and 64 percent believe it will bring significant to transformative change to the role of instructors. Though only 9 percent of staff believe AI will bring no or minimal change to the role of instructors, 16 percent feel AI will indeed change teaching but are not fully aware of the possible changes.
A large number of faculty (40%) report that they have no understanding of AI or are only beginners in terms of AI literacy and skills. The skills academics identify as needed in the age of AI are: facilitating students’ critical thinking and learning (81%), AI and digital literacy (66%), adaptability and flexibility (55%), expertise in ethical and responsible AI and technology (52%), and innovative pedagogy (50%).
Last but not least, academics are currently very critical of their institutions’ AI frameworks and guidelines (see also Section 9). An overwhelming majority perceive their institutional AI guidelines for teaching as neither comprehensive nor clear (80% in both cases). The most populated zone, called “the lost” by DEC 2025, is made up of the 27 percent of faculty who are unaware of the AI guidelines and believe them to lack comprehensiveness.
Other recent research publications on internal stakeholders’ AI usage and feelings in HE confirm the DEC findings in broad terms. In December 2024, for instance, Cengage, the Boston-based (USA) private company providing educational content, technology, and services, published its report on the state of GenAI in HE (Cengage, 2024). Exploring AI in Higher Education surveyed 455 faculty members across EMEA countries (UK 35% of responses, Europe 35%, South Africa 15%, Middle East 6%, and others 9%), most of them lecturers (36%), but also professors (37%) and students and staff (12%). Its first key finding is that most instructors (97%) hold a positive view of GenAI, valuing its potential to enhance efficiency and innovation in educational settings. This enthusiasm is reflected in institutional policies, with over 40 percent of HEIs supporting faculty use of AI. Yet, while 74 percent of instructors believe AI could improve student engagement, HEIs are more cautious about student use, with 34 percent opposing it compared to 26 percent in favor.
Majorities of instructors are still waiting for institutional AI guidance (64%), rely on learning from peers (64%), and are still in need of assistance to evaluate the marketing materials and information related to AI (59%). The second key finding is thus clear: despite the optimism, instructors are actively seeking guidance to navigate GenAI use effectively while at the same time protecting academic integrity.
This divided AI landscape in HE is also reflected in the benefits and challenges identified by the internal stakeholders. The perceived top benefits of AI usage in teaching and learning are efficiency and productivity enhancement, enhanced creativity and innovation, and improved teaching and learning experiences. The challenges, on the other hand, are accuracy and reliability, ethical and academic integrity, and the steep learning curve involved in acquiring the right knowledge and techniques to use AI.

6.3. AI in the Eyes of the Internal Stakeholders: Researchers

Since the use of AI tools such as chatbots boomed in November 2022, AI research has also expanded enormously. In this context, it is important first to distinguish between research IN AI, research OF AI, and research WITH AI. These distinctions are linked to the different internationally accepted modes of research in the well-known Frascati Manual 2015 (OECD, 2015).
“Research IN AI” refers to research into the foundational aspects, possibilities, and further development of GenAI as a computerized process comparable to human thinking, which imitates and assists human thinking but cannot replace it as a phenomenon. “Research OF AI” investigates and develops better uses of AI than those already existing. DeepSeek, for instance, is a result of research and development that needs less data, fewer chips, less energy, and lower costs than other chatbots, but is still deliberately biased by its country of origin. “Research WITH AI” deals with the development of huge numbers of AI applications that can be used in all sectors and is thus again subject to ethical considerations and choices. While “research of AI” investigates and further develops AI tools, “research with AI” uses those tools, adapting and applying them to the needs of the world of work. Though the latter is clearly applied research, there are many links and grey zones between the different modes of AI research, comparable to, for instance, space travel and weapons development. The three modes of AI research need to be distinguished from the use of AI in research, which is said to be a game-changer in research as well.
In order to get a better view of how AI research is structured, it is worth looking again at the history of AI. Next to the aforementioned 2024 Nobel Prize in Physics, the 2024 Nobel Prize in Chemistry is also linked to AI. The prize was shared by chemist David Baker and by Sir Demis Hassabis and John Jumper of DeepMind, the latter two for their work on predicting the structure of proteins with deep learning.
Hassabis, who successfully created the game “Theme Park” while waiting a year to be admitted to study computer science at Cambridge University, obtained his PhD in cognitive neuroscience after working for the gaming company Lionhead Studios and founding his own, ultimately failing, Elixir Studios. In 2010 he established DeepMind together with Shane Legg, still with DeepMind, and Mustafa Suleyman, currently head of AI at Microsoft. In 2014 DeepMind was bought by Google for a reported sum of around 500 million dollars. Already in 2016, DeepMind made world news with AlphaGo, an AI system that defeated Lee Sedol, the world champion of Go, a board game many times more complicated than chess. It was DeepMind’s AI application AlphaFold that then solved the prediction of protein structures in record time and is now the reason for the 2024 Nobel Prize.
Within Google, DeepMind was complemented by Google’s own AI lab in Silicon Valley: Google Brain (the two merged into Google DeepMind in 2023). In 2017 Google Brain was responsible for yet another AI breakthrough: the transformer architecture that made generative AI boom. The T in GPT still stands for “(Generative Pretrained) Transformer”. Yet Google doubted the transformer’s possibilities, so it was Sam Altman’s OpenAI that brought ChatGPT to the market with the Google-developed technology in November 2022. The far-reaching AI hype that followed is well known and is being followed globally.
The story above shows some essential elements of research in AI. Firstly, as the very beginnings of AI also prove (see Section 5.1), AI research is inter- and multi-disciplinary. The collaboration of computer scientists with neurologists, linguists, psychologists, and even philosophers is quite common and poses a real challenge to the traditional university structure (see Section 12).
Secondly, and most importantly, it is clear that the drive of AI research, whatever its mode, is situated in private enterprise. Before the appearance of DeepSeek, it was thought that those companies had to be international, huge, and very wealthy, thus always looking for and needing more profit, next to being innovative. The new experience is that smaller and younger startups can not only develop AI (tools) but also cause quite important disruption on a global scale. Yet the current overwhelming power of large private AI companies puts HE(Is) before the strategic challenge of how to relate to and collaborate with the private sector.
Thirdly, AI research, both in itself and in its applications, is moving very fast. Indeed, the 2024 Nobel Prize in Chemistry winners for AI work, Hassabis (48) and Jumper (39), are quite young by the tradition of Nobel prizes. DeepMind (founded in 2010) and AlphaFold (first released in 2018) are relatively young as well, and their influence has already run deep enough to produce Nobel Prize winners.
Fourthly, although the companies and researchers may be quite young, they make large use of earlier research. AlphaFold 1, for instance, was built on work developed by various teams in the 2010s that examined large databanks of related DNA sequences in order to find changes at different residues that appeared to be correlated. Building on work from shortly before 2018, AlphaFold used deep learning methods to estimate a probability distribution over the likely distance map between residues. This observation raises the question of the degree of open access to, and ownership of, former research, given that the drive of AI research lies primarily in private hands. Though many governments are now beginning to subsidize public AI research at universities and public laboratories, Trump’s announced investment of 500 billion dollars was, unsurprisingly, reserved for a partnership of selected private AI companies.
On the other hand, the fact that DeepSeek has deliberately chosen openness is important. For researchers, DeepSeek R1’s openness could be game-changing: they can use its online chatbot, DeepThink, or download the model to their own servers and run and build on it for free. Indeed, since its launch on 20 January 2025, many researchers have been training their own reasoning models, based on and inspired by R1, backed up by data from Hugging Face, an open-science repository for AI that hosts the DeepSeek R1 code. In the week after its release, the site logged more than three million downloads of different versions of R1, including those already built on by independent users (Gibney, 2025).
While the current hype and international competition around new AI tools and applications are everywhere in the media, fundamental research in AI is still going on but does not easily make the front pages. One such work, at the University of Freiburg (Germany), could be revolutionary for the field of data science. Published in Nature in January 2025, it answers the question of whether AI can still provide reliable answers when trained on fewer data sets. The study (Hollmann et al., 2025) suggests that reliable results can be achieved if AI models are trained on “synthetic data”, which are randomly generated data that mimic the statistical properties of real-world data.
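As a minimal illustration of this idea of synthetic data (and explicitly not the authors’ actual method, which trains a tabular foundation model), the sketch below generates random records that reproduce the estimated means and covariances of a small stand-in “real-world” data set.

```python
# Minimal illustration of "synthetic data": randomly generated records
# that mimic the statistical properties (here: means and covariances)
# of a real-world data set. Not the method of Hollmann et al. (2025).
import numpy as np

rng = np.random.default_rng(seed=42)

# Stand-in for a small real-world data set: 200 samples, 3 features.
real = rng.normal(loc=[1.0, 5.0, -2.0], scale=[0.5, 2.0, 1.0], size=(200, 3))

# Estimate the statistics of the real data ...
mu = real.mean(axis=0)
cov = np.cov(real, rowvar=False)

# ... and draw synthetic samples that share those statistics.
synthetic = rng.multivariate_normal(mu, cov, size=1000)

print("real means     :", np.round(mu, 2))
print("synthetic means:", np.round(synthetic.mean(axis=0), 2))
```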
Next to the inter- and multidisciplinary research on the development of GenAI itself, the mode of research using AI in other disciplines (research WITH AI) seems to be huge. From DeepMind’s AlphaFold predicting 3D models of protein structures to the discovery of a new family of antibiotics and of materials for more efficient batteries, recent advances in deep learning, GenAI, and foundation models are proving transformative. In WEF’s flagship report, Top 10 Emerging Technologies of 2024 (Fink et al., 2024), the authors predict advances in areas such as:
- Diagnosis, treatment, and prevention of diseases;
- Novel materials that enable next-generation green technologies;
- Breakthroughs in the life sciences that extend current understanding of biology;
- Transformative leaps in how the human mind is understood, and many more.
A recent international study (Luo et al., 2024) finds that AI tools powered by LLMs can predict the results of proposed neuroscience studies more accurately than humans can. The researchers developed BrainBench, a benchmark that evaluates how well LLMs predict neuroscience study results. On average, the LLMs reached 81 percent accuracy, while the human experts averaged 63 percent.
Having dealt with AI research as such, we now need to look at how AI might change researching itself. As was already clear from the AI possibilities in teaching and learning (see Section 6.1 and Section 6.2), AI also helps researchers find references and write abstracts faster, but there are many more uses of AI in research today, and there will certainly be more in the future.
In a survey of nearly 5000 researchers in more than 70 countries by the publishing company Wiley (Wiley, 2025), respondents were asked whether they would identify themselves as early, average, or later adopters. Figure 3 below shows how divided the landscape of AI use in research still is.
Although these results are more favorable to AI than the DEC results about students and instructors (researchers seem to be more inclined towards new things and innovation), the number of late adopters is still almost as large as that of early ones.
The Wiley survey also asked the researchers which GenAI tools they are currently using, including chatbots such as ChatGPT and DeepSeek, and how they feel about various potential applications of the technology. The top 5 ways researchers are currently using GenAI tools in their work are shown in Figure 4 below.
The results, shown in Figure 5 below (Naddaf, 2025), suggest that the majority of researchers see AI becoming central to scientific research and publishing.
More than half of the respondents think that AI currently outperforms humans at more than 20 of the tasks given. This outperformance is accepted in use cases such as reviewing large sets of papers, summarizing research findings, detecting errors in writing, checking for plagiarism, and organizing citations. More than half of the researchers expect AI to become mainstream in 34 out of 43 use cases within the next two years.
For finding references and processing literature reviews, AI is a technology that has already been used for quite a while, often without researchers being aware of it. Google Scholar, for example, as well as Scopus and Web of Science, are open, though not very transparent, about their AI use. The Philadelphia-based US company Clarivate, on the other hand, which advertises itself as a leading global provider of transformative intelligence and offers data, analytics, workflows, and other research services making use of, among others, the Web of Science, openly declares its use of AI.
In the sharply increased and hype-driven competition among the AI Big Techs, OpenAI released as recently as 2 February 2025 a pay-for-access tool called “Deep Research”, which synthesizes information from hundreds of websites into a cited report several pages long. The tool follows a similar one from Google, released in December 2024 and based on Gemini 1.5 Pro rather than on Google’s leading reasoning model, 2.0 Flash. It acts as a personal assistant, doing the equivalent of hours of work in tens of minutes (Jones, 2025b). Both firms present such tools as a step towards AI agents that can handle complex tasks by combining the improved reasoning skills of o3-class LLMs with the ability to search the internet. Although many scientists who have tried them are impressed with their ability to write literature reviews or full review papers and even identify gaps in knowledge, others are less enthusiastic. OpenAI admits on its website that its tool “is still early and has limitations”: it can get citations wrong, hallucinate facts, fail to distinguish authoritative information from rumors, and fail to convey its uncertainty accurately.
Google’s countermove did not take long: it launched its “Co-scientist” in February 2025. This scientific AI agent produced in only two days a biological hypothesis, namely that some antibiotic-resistant bacteria make use of various viruses, which it had taken scientists years to prove. This latest achievement shows again how AI can be a great help in scientific research thanks to its superhuman speed and its pattern recognition on a very wide basis. On the other hand, it also challenges the place of the scientist’s own intellectual analysis, insights, decisions, and recommendations.
The consensus of the academic publishing community is that the use of AI tools in the actual writing process must be declared in the published article. However, an analysis of the first 500 examples collected reveals that undeclared use of AI is widespread and has penetrated the journals and conference proceedings of highly respected publishers (Glynn, 2024). Surprisingly, undeclared AI use appears more often in journals with higher citation metrics and higher article processing charges (APCs).
Another study (Lund et al., 2024) examined ethical considerations surrounding the integration of AI into academia by reviewing recent interdisciplinary literature. The research focused on the potential for AI to be used for scholarly misconduct and necessary oversight when using it for writing, editing, and reviewing scholarly papers. The findings highlight the need for collaborative approaches to AI usage among publishers, editors, reviewers, and authors to ensure that this technology is used ethically and productively.
With academic publishers looking for lucrative contracts with AI companies for training data, the call for regulation is paired with respect for copyright and the need for open access. Taylor & Francis, for example, was expected to make 75 million dollars from AI licensing deals for academic publications in 2024, and Wiley 44 million. Oxford University Press has confirmed it is working toward similar deals. Major publishing houses, like Elsevier and Wiley, already offer the use of AI to search themes and subjects in their large libraries and databases for research. While authors feel troubled by the way their academic work is being exploited for profit, paywalls or more restrictive licenses are not the answer; instead, the public good should be prioritized through open access, some scholars argue (Waibel & Hansen, 2025).
In the USA, both the Biden and Trump administrations announced large projects to improve academic research into AI by pulling major private technology companies into partnerships, hoping to soften their longstanding reluctance to share their vast and valuable datasets, alongside calls for a regulation of AI usage in research.
This call for collaborative regulation of AI usage in research is certainly becoming urgent now that two London School of Economics scholars have developed an LLM chatbot that is said to be able to conduct even research interviews with thousands of participants in a matter of hours. The tool does not use a standard set of multiple-choice and open-text questions, traditional in online surveys, but encourages participants to express their views freely and then poses follow-up questions to ensure clarity, using “cognitive empathy” (Rowsell, 2024).
From an international survey with 380 academic respondents, Digital Science (Digital Science et al., 2024) derived the following five key findings on research transformation in the era of AI, covering technology, open research, impact and evaluation, collaboration, and research security:
  • Open research is transforming in a positive sense, but challenges include a lack of awareness, funding, support, resources, and infrastructure, while there are concerns around data security, research quality, and competitiveness;
  • Research metrics are evolving, calling for a more holistic evaluation of research quality and impact, a shift towards more responsible use of traditional metrics, and the introduction of alternative ones, with HEIs addressing academic-culture issues and giving greater recognition to non-traditional contributions;
  • AI’s transformative potential is huge and is expected to drive efficiencies in data, analytics, and open research, while the AI skills gap should be addressed and change-management strategies introduced, as concerns around ethics, security and integrity, AI bias, hallucinations, and the impact on critical thinking remain;
  • Interconnected technology and open research make collaboration booming, but there are increasing concerns over funding and security;
  • Security threats and risk management need a proactive strategic and cultural overhaul while HEIs are not equipped and tend to ‘wait and see’.
The observed picture of, and attitudes towards, the current state of the art of research in these times of AI-accelerated transformation are comparable to those of the stakeholders in teaching and learning. In research too, AI is welcomed as a helpful and powerful tool, with specific concerns. The reactive attitudes and activities of the HEIs are still divided and need an adapted leadership (see Section 6.4 and Section 12) and strategy (see Section 11).

6.4. AI in the Eyes of the Internal Stakeholders: Academic Leaders

In January 2025 the American Association of Colleges and Universities (AAC&U), together with Elon University (USA), published a report based on the responses of 337 HE leaders. From 4 November to 7 December 2024, university presidents, chancellors, provosts, deans, and other senior leaders at public and private American institutions were asked to examine generative AI tools’ current and future impacts.
This Leading Through Disruption report (Watson & Rainie, 2025) deals with by-now common themes such as student and faculty AI use, cheating, and faculty concerns, as well as with future-focused initiatives, investments, and expectations. On those themes the report finds the same divided reality, and it offers some interesting observations about institutional readiness. While responding university leaders say student use of AI tools is nearly ubiquitous, they conclude that faculty use trails significantly behind.
More than a third of HE leaders (35%) perceive their institutions to be below average or far behind others in using GenAI tools. Sixty percent say their schools are not yet prepared to use GenAI effectively to help students, faculty, and staff. Yet 48 percent expect significant changes in their institutions’ typical teaching model over the next 5 years, while predicting only minor reductions in staff and faculty. Sixty-two percent expect GenAI tools to enhance some aspects of the role that HEIs play in society and to diminish others, pointing for instance to greater digital inequities and serious concerns about academic integrity. HEI leaders overwhelmingly say it is necessary to address the ethical issues spawned by the spread of GenAI tools.
As for the use of AI in research specifically, the Digital Science transformation report (Digital Science et al., 2024) recommends that leadership foster a culture of transparency and collaboration, ensure that leadership principles filter throughout the HEI, and take a proactive approach to research security rather than waiting for issues to arise. The report recommends actively looking for anomalies and unusual behaviors that may put researchers and institutions at risk.
That being said, a specific aspect of AI is the way it could be used in decision-making. Given the overwhelming volume of information and stress associated with decision-making, it is no surprise that AI is increasingly turned to for support. Workplace tools like spreadsheets are no doubt helpful, but AI’s ability to handle vast datasets in real time has shifted it from a supportive tool to an actor in decision-making. No longer just a back-office assistant, AI is playing a crucial role in driving strategic choices.
However, despite all these advancements, José Miguel Diez Valle and Nikita (Valle & Nikita, 2025) argue that a full technological takeover of decision-making must still be accompanied by ethical considerations, regulatory compliance, and human intervention. AI should thus be regarded as a new tool, not a new manager. They think it unlikely that AI will fully replace human decision-making in the foreseeable future, particularly in roles that demand higher cognitive skills.

7. AI in the Eyes of the External Stakeholders

7.1. AI in the Eyes of the External Stakeholders: World of Work

All reports on the introduction and use of AI in the world of work agree in predicting a thorough transformation of its structure, needed skills, and leadership. The year 2024 was definitely the year of AI’s breakthrough in the world of work. According to Microsoft’s and LinkedIn’s 2024 annual work trends report (Microsoft & LinkedIn, 2024), 75 percent of knowledge workers use AI at work today, and 46 percent of users started using it less than six months ago. Users say AI helps them save time (90%), focus on their most important work (85%), be more creative (84%), and enjoy their work more (83%).
On the other hand, while most leaders agree AI is a necessity, the pressure to show an immediate return on investment is making them move slowly: 79 percent of leaders agree their company needs to adopt AI to stay competitive, but 59 percent worry about quantifying the productivity gains of AI. This uncertainty is stalling vision: 60 percent of leaders worry that their organization’s leadership lacks a plan and vision to implement AI. This gap is comparable to the scarcity of strategic AI plans, frameworks, or guidelines in HE(Is) (see Section 9 and Section 10).
According to the same Microsoft 2024 report though, 66 percent of business leaders say they would not hire someone who does not have AI skills; 71 percent say they would even rather hire a less experienced employee with AI skills than a more experienced employee who lacks them.
McKinsey partners go even deeper and state that as many employees adopt GenAI at work, companies not only struggle to follow suit, but must transform their processes, structures, and approach to talent (Relyea et al., 2024). The McKinsey & Company report shows early AI adopters are better than others at addressing talent- and training-related challenges, but also that all companies have room to improve.
A comparable conclusion may be drawn as far as mindset and behavioral changes within an organization are concerned: those organizational and cultural elements, too, seem to be more widespread among early adopters than among others.
The researchers conclude that GenAI is not only causing transformation in the world of work but that AI’s next inflection point is a strategic re-organization of the company with a holistic approach to transforming how the whole organization will work with GenAI and not only focus on the technology.
In WEF’s latest insight report on labor-market transformation and jobs (WEF, 2025) the most important macro trend driving business transformation according to the surveyed employers is the broadening of digital access (60%), while increased geopolitical division and conflicts only account for 34 percent and stricter anti-trust and competition regulations score the lowest (17%). Within the technological change, AI seems to be acknowledged as the most important technology trend driving the coming business transformation in 2025–2030 (86%).
The WEF authors write that GenAI, in particular, has witnessed a rapid surge in both investment and adoption across various sectors, although they acknowledge that more generalized adoption of AI remains comparatively low, with only a small fraction of firms using it in 2023. Adoption is growing rapidly though, albeit unevenly across sectors. They also conclude that, while the full extent of long-term productivity gains from AI remains uncertain, enhancing human skills and performance is more likely to come about if technology development focuses on augmenting rather than substituting for human capabilities, since substitution increases inequality and unemployment.
The latest OECD study on the impact of AI on the workforce (Lane, 2024) investigates which workers will be most affected. The paper draws a correct and interesting distinction between the AI workforce (the narrow set of workers, mostly with a university degree, who have the skills to develop and maintain AI systems) and AI users (workers who interact with AI at work). The author concludes that, though empirical analysis does not suggest that overall employment levels have fallen due to AI, the non-routine, cognitive tasks of tertiary-educated workers in “white-collar” occupations will also likely face disruption. The main risk for workers without tertiary education, female workers, and older workers is that they will lose out due to lower access to AI-related employment opportunities and to productivity-enhancing AI tools in the workplace. This could again increase inequalities and societal resistance to technological progress. At the same time, the study finds that, if used correctly, some features of AI could open up new opportunities for traditionally underrepresented groups. Consequently, education is an important determinant of AI’s impact on the world of work.
It has become clear that in 2025 AI investment is growing spectacularly across all sectors of industry and life. Companies earning over $500 m are spending 5 percent of their revenue on artificial intelligence initiatives. One in three companies across all markets are planning to spend $25 m or more on AI in 2025, a study from Boston Consulting Group (BCG) has reported (de Bellefonds et al., 2025).
However, after all the hype over AI, the value is still hard to find. CEOs have authorized investments, hired talent, and launched pilots, but only 22 percent of companies have advanced beyond the proof-of-concept stage to generate some value, and only 4 percent are creating substantial value, according to new BCG research. In the interview with HOWEST’s AI Lab (see Section 10.2), the coordinator felt that, even during the AI hype, small and medium-sized enterprises (SMEs) in particular seem to be backing away from AI as they start realizing what effort and cost it would take to turn their processes into data.
The divide between early AI adopters among students and academics on the one hand and HEIs’ leaders on the other seems comparable to the situation in industry at a time when AI is rapidly developing and innovating. A recent McKinsey report, Superagency in the Workplace (Meyer et al., 2025), surveyed 3613 employees in various roles and 238 C-level executives in October and November 2024, 81% from the USA and the rest from 5 other countries: Australia, India, New Zealand, Singapore, and the United Kingdom. The report states that three times more employees are using GenAI for a third or more of their work than their leaders imagine, and more than 70 percent of all employees believe that within 2 years GenAI will change 30 percent or more of their work. While 92 percent of companies plan to invest more in AI over the next 3 years, only 1 percent of their leaders believe their investments have reached maturity. Almost half of the executives (47%) say their companies are developing AI tools too slowly. Almost half of the employees (48%) rank training as the most important factor for AI adoption, yet nearly half feel they are actually receiving moderate or less support. As in HE, leaders need to recognize their responsibility in driving the needed AI transformation.
The latest trend is that AI development and applications are not limited to ICT companies. McKinsey recently published a report (Viswa et al., 2025) concluding that AI has also entered the life sciences industry. The challenge in this sector, as in others, is to rethink how to scale it in order for AI to deliver transformational business value.
Even given this mixed landscape of AI use in education, Coursera data generated for WEF’s Future of Jobs Report 2025 reveal significant growth in demand for GenAI training among both individual learners and enterprises. The data show a rapid increase in monthly GenAI training enrolments since July 2023, growing from barely 10,000 to c. 200,000 in August 2024.
The enrolment trends, however, call for a tailored approach to GenAI learning, in which individuals focus on foundational knowledge-building, such as prompt engineering, trustworthy AI practices, and strategic decision-making around AI, while organizations prioritize training that delivers immediate workplace productivity gains, such as leveraging AI tools to enhance efficiency in Excel or to develop applications.
As for which particular skills are demanded by both adult students and employers in these transformative times, many studies (Martin, 2018; McKinsey, 2022; WEF, 2025) have come up with the so-called “21st-century competences”. They comprise a mixture of interpersonal or transversal skills, such as (self-)leadership and teamwork; cognitive skills, such as creativity and critical thinking; technical competences, such as ICT skills and computational thinking; and communication skills, such as presenting and listening.
Zooming in on the specific AI skills, HOWEST’s strategic framework (see Section 10.2) gives a good overview of them and at the same time relates them to the 21st-century competences, as in Table 2 below. The latter remain future-proof and become even more crucial with AI.
Linking AI skills with the transversal skills needed more than ever, such as critical thinking, writing, and teamwork, is also the starting observation of a recently published guide to teaching with AI (Bowen & Watson, 2024). José Antonio Bowen, a former president of Goucher College (USA), and C. Edward Watson, associate vice-president for curricular and pedagogical innovation at the American Association of Colleges and Universities (AAC&U), wrote this practical book to engender a mindset among academics and students that AI does not do the work for them; rather, it works with them and should thus enhance their critical competences, human creativity, and ideas. They wrote the book to help people start small and build confidence gradually by experimenting with AI in ways that fit their specific disciplines and expertise, rather than trying to master everything at once. The goal is to understand how these AI tools can enhance existing teaching methods without replacing them.
As with researchers, students, and academics, the growing demand for AI skills is a transformational challenge for HEIs’ strategic societal services to the world of work. Addressing the necessary AI skills in Life-Long Learning (LLL) as well as developing AI applications for the world of work asks for a redesigned strategic collaboration between the HEI and the profit and social-profit sectors.
The external stakeholders of HE are definitely in search of institutional strategic engagement with the introduction, use, and development of AI and with AI literacy and skills. A good and efficient practice is to combine this AI need of the internal stakeholders (students, academics, and administrative staff) with that of the external ones. Recently, several US community colleges, having founded an AI Readiness Consortium, announced that they will collaborate over the next several months to design 25 new courses giving students the chance to use AI tools to solve real-world problems or create efficiencies within organizations in the world of work. Ideally, those courses will offer a model for other faculty members and community colleges, while ensuring that their low-income students are not disadvantaged in developing AI skills.

7.2. AI in the Eyes of the External Stakeholders: Authorities

As has already become clear, AI, with its development, research, and many applications, is a very global and complex phenomenon with technological as well as ethical dimensions that ask for international, national, and organizational strategies and regulations.
One of the most extensive and interesting international guidelines has been produced by UNESCO. Its series of publications on AI consists of a recommendation on the ethics of AI (UNESCO, 2021), the use of GenAI in education and research (UNESCO, 2023), practical AI competency frameworks for teachers (UNESCO, 2024a) and students (UNESCO, 2024b) and a global education monitoring report on technology in education with the latest data per country.
As can be expected from UNESCO, its approach towards AI and the use of AI tools in teaching and research is one of co-creation by all stakeholders, regulating AI use as a responsible collaboration between AI and human beings while remaining conscious of concerns around bias, the necessary human and digital skills, equity, privacy, and further ethics. The publications have inspired regions, nations, and indeed HEIs to design their own policies in the form of AI frameworks, guidelines, and regulations.
Another well-known regulation is the European AI Act (EU, 2025), which has been described as the most prescriptive one and the first-ever legal framework on AI that explicitly addresses its risks. The EU approach is to build AI on excellence and trust: the Act aims to boost research and industrial capacity while ensuring safety and fundamental rights.
The Act uses a risk-based approach by distinguishing four degrees of risk:
- unacceptable risk, such as harmful and criminal AI-based manipulation and deception;
- high risk, such as AI safety components in critical infrastructures risking the life and health of citizens (e.g., transport, robot-assisted surgery);
- transparency risk, where openness about the use of AI is needed;
- minimal or no risk.
The AI Act prohibits eight unacceptable-risk practices, formulates strict obligations for high-risk AI systems before they can be put on the market, introduces specific disclosure obligations for systems with a transparency risk in order to preserve trust, and does not introduce rules for AI deemed of minimal or no risk. The Act categorizes AI solutions used in HEIs that may determine access to education and the course of someone’s professional life, such as the scoring of exams, as high risk. Chatbots and AI-generated content are identified as carrying a transparency risk; their use should therefore be clearly and visibly disclosed.
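To illustrate how these tiers translate into obligations for a HEI, the sketch below maps a few HE-related use cases onto the four risk levels as described above; the mapping and the printed obligations are simplified assumptions for illustration, not an official compliance instrument.

```python
# Illustrative sketch only (not an official EU AI Act instrument):
# mapping HE-related AI use cases to the Act's four risk tiers.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited practice"
    HIGH = "strict obligations before market entry"
    TRANSPARENCY = "disclosure obligations to preserve trust"
    MINIMAL = "no specific rules"

# Simplified example classifications, following the Act's treatment
# of HE use cases as described in the text above.
HE_USE_CASES = {
    "harmful AI-based manipulation of students": RiskTier.UNACCEPTABLE,
    "AI scoring of exams that determines access to education": RiskTier.HIGH,
    "student-facing chatbot answering course questions": RiskTier.TRANSPARENCY,
    "AI-generated content in course materials": RiskTier.TRANSPARENCY,
    "spam filtering of departmental email": RiskTier.MINIMAL,
}

for use_case, tier in HE_USE_CASES.items():
    print(f"{use_case}: {tier.name} risk -> {tier.value}")
```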
Once an AI system is on the market, authorities such as the European AI Office and national authorities in the EU member states are in charge of market surveillance. Deployers must ensure human oversight and monitoring, and providers must have a post-market monitoring system in place.
Though the AI Act aims to position Europe to play a leading role globally, it was quickly opposed by AI providers’ lobbies as too restrictive and bureaucratic. Indeed, this was the general tenor that could be heard at the global AI Action Summit in Paris on 10 and 11 February 2025. Several reports had already made clear that Europe was behind in the hyped and fast development of AI and AI tools (Gallup, 2024; Sukharevsky et al., 2024).
At the global Paris AI Summit, the Commission’s president, Ursula von der Leyen, responded to Trump’s Stargate promise of 500 billion dollars by announcing an investment of 200 billion euros (50 billion of it public European money), while others rightly warn against deregulation and a purely voluntary approach (Hoffmann et al., 2025). Unfortunately, the Paris summit did not deal with concerns and international regulation as its predecessor, the first global AI safety summit hosted by the UK in 2023, had done (Burki, 2024). Instead, in Paris AI was clearly approached as a mighty technological development in the context of the geopolitical world economy. Times have indeed changed since Trump and the American Big Tech surprisingly united to proclaim deregulation hand in hand with investment in AI.
The DEC reports, too, give useful considerations and recommendations for governments and regulators. In its student survey (DEC, 2024) the Council points out that:
- Governments should collaborate with academia to guard against over-reliance on AI in an effort to avoid negative impacts on the productivity and competitiveness of the future workforce;
- Governments should reflect on the appropriate role to play in upskilling people in AI through appropriate upskilling incentives, taking into consideration expected economic outcomes and stakeholders’ expectations;
- Regulators need to consider balancing the promotion of a positive environment for AI innovation and adoption in universities with effective compliance requirements to ensure equity, transparency, and accountability.
Although not focusing directly on investment in AI research and development, other good practices can be found in various countries. In the UK, for example, there is the not-for-profit organization and membership community Jisc, with offices in Bristol, London, Oxford, and across the UK. It is a digital, data, and technology agency focused on tertiary education, research, and innovation. It provides managed and brokered products and services, enhanced with expertise and intelligence, to build sector leadership and enable digital transformation. Jisc has 1250 staff members, 359 skills providers, and 24 researchers, and is linked with 165 UK HEIs.
Another good public practice has been developed in Flanders (Belgium). All Flemish universities and universities of applied sciences and arts together with employers’ organizations have founded the Flanders AI Academy, VAIA (https://www.vaia.be). VAIA is meant to be a hub for collecting and supporting training courses to help professionals learn about or apply the possibilities of AI.
Yet, since ICT is a very international and globally used technology, there is a clear need for international, worldwide regulation. That is also what Sam Altman, CEO of OpenAI, called for at the World Governments Summit in Dubai in February 2024. He even suggested creating a body like the International Atomic Energy Agency, because possible “very subtle societal misalignments” could make systems wreak havoc, and warned against leaving the industry in the driver’s seat when it comes to making regulations governing an AI industry that is likely advancing faster than the world expects.

8. AI and the University as Organization of Professional Services

After this long overview of the stakeholders’ perceptions, usages, and attitudes towards AI and its tools, we can return to the HEI itself and look at possible AI use on an institutional level. It may already be clear that AI needs to be approached in a strategic (see Section 11) and thus institution-wide manner.
Next to the expensive installation of the necessary AI infrastructure for integrating and using AI (tools) in teaching and learning, research, and societal services, there are some other institutional processes in which AI could be used. This opportunity should in fact be one more reason for a HEI to organize training in digital and AI literacy; quite a few staff members could even advance their careers this way. Unfortunately, a global survey of 3863 academics and HE staff from 1949 universities across 100 countries, conducted by the Times Higher Education (THE) consultancy arm, found that while respondents believed they engaged in professional development activities and stayed up to date with digital technologies, they were less likely to believe that digital competence was given due recognition by managers (THE, 2024).
It is often heard that quite a few routine administrative tasks can and will be done by AI in the future, particularly with agentic AI. AI will thus lower costs, handling these tasks faster and more efficiently than many hours of work by a large administrative staff. That should not necessarily mean that people become redundant, although the latest generation of AI agents will certainly influence routine administrative jobs.
AI can be and is already used in finding and reaching out to prospective students. The annual US survey by EducationDynamics (2025) gathered some interesting data on AI use in collecting school information. Thirty-seven percent of its respondents confirmed they had already used AI tools such as AI chatbots to gather information about schools. This indicates another opportunity for AI: providing comprehensive and accessible information and supporting prospective students in their decision-making process. The information most sought after by those AI users includes tuition fees (57%), course offerings (51%), admission requirements (43%), campus facilities (37%), and student reviews (35%). An overwhelming majority (75%) said they were satisfied with the information provided. Graduate students show slightly higher satisfaction (41% very satisfied) than undergraduates (37%); traditional undergraduate students are less likely to be fully satisfied with AI tools. About two-thirds find AI tools at least somewhat trustworthy, while 20 percent highly trust them in decisions about (considering) enrolling. Forty-nine percent even responded that responses from generative search experiences impacted their consideration. Overall, 26 percent of respondents have used school website chatbots to gather information and support, although traditional undergraduate students were far less likely to use them during their search. Of the website chat users, 93 percent found them helpful. These results suggest that AI and AI tools will be used more and more to support prospective students.
AI and data could also be used to prioritize, personalize, and target emails and SMS campaigns. Universities have become more aware of email and other message fatigue since the pandemic. Universities are also focusing more on the entire life cycle of a student, from prospective student to alumnus.
AI can also be integrated into many other aspects of communication. It can support staff by crafting a few sentences to plug into their emails, whether for communication with or among students or for marketing purposes. Fundraising, for instance, can be made more effective and enrollment increased. AI is even already used to identify and contact prospective students, draft office communications, and sort through application materials.
Another example of institution-wide AI use lies in the care for students’ wellbeing, although, as with the internal organizational student services, only 24 percent of the students responding to the DEC survey perceive AI monitoring of wellbeing positively. While discovery and diagnosis could be faster and better with AI, the counselling itself during the healing process surely needs human intervention.

9. Regulatory University AI Policies and Frameworks

The importance of developing AI policies and frameworks that try to enhance the AI literacy of students and staff is spreading widely and is supported by research as well as by international recommendations, such as UNESCO’s (see Section 7.2). International research has shown that lower artificial intelligence literacy predicts greater AI receptivity (Tully et al., 2025). This link is not explained by differences in perceptions of AI’s capability, ethicality, or feared impact on humanity. Instead, it occurs because people with lower AI literacy are more likely to perceive AI as magical and to experience feelings of awe when AI executes tasks that seem to require uniquely human attributes. Additionally, efforts to demystify AI may inadvertently reduce its appeal, indicating that maintaining an aura of magic around AI could be beneficial for adoption. These findings, although they contradict expectations, already show how AI can both be and cause disruption: in itself, in its use, and in the globalized, volatile world.
The least that a HEI, its leadership, and its governance can do at this moment, when AI is accelerating global transformation and is being used more and more by HE stakeholders, is to try to regulate the adoption of AI in the best and most responsible way. This challenge is usually proposed, described, and/or regulated in institution-wide frameworks and guidelines, which are best developed in co-creation with all stakeholders’ views, practices, and situations as described above (see Section 6 and Section 7) in order to have sufficient support and impact.
The American researchers Wang et al. (2024) analyzed policies, statements, resources, and guidelines at the 100 top-ranked universities in the US to see how they respond and adapt to the development of GenAI, especially ChatGPT, in their academic contexts. The results again show that the majority of these universities adopt an open but cautious approach, with primary concerns about ethical usage, accuracy, and data privacy. Yet the study also concludes that more than one-third of the 100 top US universities had unclear or undecided policies on AI use, and more than half left decisions to individual instructors. Although the latter makes some sense as far as academic freedom and ownership are concerned, faculty are deeply divided on whether AI use constitutes academic dishonesty. Failing to issue an institution-wide policy, moreover, undermines the mission of modern research universities to prepare students with the literacy competencies they need to prosper in a workplace that is being transformed by AI.
In times in which the reading and writing of students and academics are more and more taken over by AI, it should at least be clear that assignments and their assessment must be adapted to the new situation.
The staff responses in the DEC (2025a) faculty survey (see Section 6.2) clearly show a division among academics, who adopt a cautious approach and are not fully aware of the possible changes AI will bring to teaching, while 40 percent still have no understanding of AI or are only beginners in its use. This indicates an information gap that HE(Is) urgently need to close through AI literacy and skills training.
With more than 90 percent of students using AI (DEC, 2024) (see Section 6.1) and 82 percent of staff worried about students becoming overly dependent on AI tools (DEC, 2025a) (see Section 6.2), 55 percent of whom identify this issue as a significant concern, there is a pressing need to promote appropriate use of AI in education by developing appropriate AI policies and frameworks.
These frameworks and guidelines should be comprehensive and clear, which 80 percent of staff in DEC 2025 find they currently are not. HEIs therefore need clear frameworks and/or guidelines that ensure accountability and oversight and limit over-reliance on AI among students and staff in order to maintain a high quality of education.
As recently as 10 March 2025 the DEC published its AI literacy framework (DEC, 2025b). Designed to support HEIs, the framework provides structured guidance for developing AI literacy approaches that equip both students and faculty with foundational AI competencies as well as industry-specific applications. The framework comprehensively defines five key dimensions of AI literacy: understanding AI and data; critical thinking and judgement; ethical and responsible use; human-centricity, emotional intelligence, and creativity; and domain expertise. Each dimension is linked to specific AI competences, distinguishes three levels, and defines them with examples of competencies and of actions for progression. Not surprisingly, the DEC AI literacy framework takes a human-centred approach to AI literacy, emphasising the importance of human skills such as critical thinking, creativity, and emotional intelligence.
A special concern is the use of AI in student writing and assessment. Most colleges and universities do not prohibit the use of AI tools by students, faculty, or staff, but they frequently have guidelines on how to use them (Weldon, 2025). Both APA and MLA already provide guidelines for citing generative AI work. The well-known AIAS (Perkins et al., 2024), with its five levels of AI allowance, student responsibility, and pedagogical links to learning outcomes, has entered many HEIs’ frameworks and guidelines (see Section 6.2).
The wider context of AI as described above is full of ethical and other concerns, and this is reflected in the opinions and perceptions of the stakeholders. A majority of students responding to the global DEC 2024 survey identify privacy and data security (61%) and the trustworthiness of AI-generated content (51%) as top concerns, followed by the likewise content-related concern of bias and fairness in AI evaluations and decisions (32%).
With over half of academics (57%) expecting their students to use AI with disclosure and instructions, the time is ripe to produce and implement AI governance frameworks with clear instructions and specific guidelines. Considering the findings and observations above, a general table of contents for a HEI’s AI framework, or of subjects to address in guidelines, can be drawn up. Table 3 below also makes use of UNESCO’s guidance and competency frameworks, as well as Wang et al.’s (2024) coding scheme for analyzing university AI policies and statements.
The three essential words that keep being repeated in the frameworks and guidelines are indeed: transparency, responsibility or accountability, and critical thinking. In the in-depth interviews, the AI officers (see Section 10) kept repeating that AI should only be a tool that enhances those human competences so much needed today.
Looking at a number of frameworks and/or guidelines and at the in-depth interviews at Ghent University and HOWEST (see Section 10), it becomes clear that it is not enough to have a clear framework or guidelines. The regulations and recommendations that are formulated in them cannot be implemented and followed if they are not communicated well and not accompanied by a variety of practices, such as training, workshops, helpdesk, and an active institution-wide AI platform. According to Wang et al. (2024) most universities also actively respond and provide diverse types of resources, such as syllabus templates, workshops, shared articles, and one-to-one consultations focusing on a range of topics, such as general technical introduction, ethical concerns, pedagogical applications, preventive strategies, data privacy, limitations, and detective tools.
Finally, as far as teaching and learning are concerned, Wang et al.’s findings provide four practical pedagogical implications for educators in teaching practice: accept AI’s presence, align its use with learning objectives, evolve the curriculum to prevent misuse, and adopt multifaceted evaluation strategies rather than relying on AI detectors. The authors also have two recommendations for education policies: establishing discipline-specific policies and guidelines and managing sensitive information carefully. Here we touch upon the place of AI in strategy and/or strategic planning and actions (see Section 11).

10. Case-Studies

10.1. Case Study: AI at Ghent University

Ghent University has about 50 professors, 50 postdocs, and 200 PhD students involved in AI research, resulting in collaboration with over 300 local and international companies. The university also has a partnership with Microsoft. Within Flanders, the university is currently known for having the most permissive guidelines on the use of AI in education (UGent, 2024a). The stated reason is that AI is here to stay, so students and academic staff need to learn how to use it in a responsible way, rather than it being forbidden unsuccessfully.
The Ghent AI guidelines on education were established in consultation with the education directors of the eleven faculties and have been concretized in supporting material. They are quite extensive and deal with defining AI and describing existing systems, their possibilities, and their risks, before prescribing use with reference to certain competences and AI literacy. The guidelines applying to writing tasks not only permit “a responsible use” of GenAI tools but even encourage such use in preparation for the master’s thesis. “Responsible” refers to being careful in terms of privacy, reliability, and bias. Yet individual teachers and departments can still prohibit its use in specific subjects or make the rules less permissive.
The guidelines also refer to the importance of the learning outcomes of programs and courses, and to the impact of GenAI use on the evaluation of those learning outcomes. Lecturers will have to assess in other ways, for instance by asking for the process. Self-reflection is said to play an important role here, next to effective communication, critical thinking, and creativity, i.e., generic competencies. The insertion of specific AI competences into the learning outcomes is still a matter for the individual program commissions and lecturers.
The institutional AI officer confirmed that, in accordance with the European AI Act, the aim is to strive for AI literacy for everyone. Long training on specific AI tools is not envisaged, because the tools change so fast. She also said that the guidelines were discussed by the university board because of their importance and their link with the university’s baseline: dare to think. The university provides a lot of introductory and training material, workshops, and even face-to-face guidance for both students and academics in order to bring the guidelines to life.
As for the use of AI in research, there are comparable guidelines referring to integrity using the university’s Commission for Research Integrity, to the European Code of Conduct for Research Integrity (EU, 2023) and to its own policies and procedures (UGent, 2024b).

10.2. Case-Study: AI at HOWEST

HOWEST, too, has international renown in ICT, especially in gaming and cybersecurity. As a university of applied sciences, it has developed a quite detailed and comprehensive framework, approved by its board, which refers to its strategy of anticipating the future through innovation. The institution offers some AI degrees and specialized courses in the field of AI. It also conducts applied research in AI in specific disciplines, such as education in its EdHub, and for enterprises and social-profit organizations as societal services through an AI Lab. The mission of its AI Lab is to bridge the gap between cutting-edge AI research and real-world solutions by developing practical AI applications for SMEs and citizens.
HOWEST’s extensive framework deals with basic knowledge of AI, AI applications in the disciplines, problem-solving with AI, critical thinking, and ethics and responsibility. Within those five themes, the framework formulates institution-wide and program regulations and actions from the academic year 2024–2025 onwards. As far as assessment and the final paper are concerned, the framework gives a five-level instruction table, ranging from prohibition to usage that must be disclosed, as developed by Perkins et al. (2024) (see Section 6.2). As mentioned above (see Table 2), the framework also contains a good overview of AI competences linked to the other learning outcomes. The ultimate goal is to embed AI in (generally the first year of) each program and in assessment with the help of LLM validation, yet not without a clear policy framework and training.
In the in-depth interviews, the institutional AI coach and the AI Lab director stressed the necessity of having a good multi-disciplinary platform and of providing information as well as workshops and training in order to integrate AI in a responsible way. As for competences, they regarded AI in education as an accelerator of the transversal competences needed rather than as a set of new competences. To them, transparency, responsibility, and critical thinking remain the crucial competences for the future.

11. Strategy and Governance with AI

In general, the transformations that (Gen)AI is causing are not only deep but also wide, encompassing not only technology and innovation but also innumerable applications in a world characterized by geopolitical tensions and climate change, which bring a heavy ethical dimension with them.
GenAI has definitely entered HE in many aspects and uses by all its stakeholders (see Section 6 and Section 7). In December 2024 the Cengage report, Exploring AI in Higher Education (Cengage, 2024), rated successful use of AI at 71 percent in teaching, 61 percent in research, and 61 percent in administrative tasks.
From the various fields of activity explored above it should be clear that the emergence and further development of AI has a transformative effect on the realization of the three-fold mission of HEIs as well as on the organization itself, its structure and management, not to forget the global context in which HE is active. The opinions, feedback, and expectations of the stakeholders have already made clear which strategic aims and actions the HEI and its leadership should take. What is also clear from the information and research above is that an attitude of “if you can’t beat it, join it” is out of place with AI. AI is so transformative, innovative, and complex, while at the same time disruptive, that strategic thinking and action are the only way of dealing with it.
Unfortunately, the THE 2024 Digital Maturity Index (THE, 2024), the DEC reports on AI use by students and staff (DEC, 2024, 2025a), and the many other reports mentioned above show that universities have been both reactive and resilient when it comes to managing the changes needed to adapt to and adopt AI in a good, responsible, and ethically correct way. Yet the initial AI shock felt within HEIs has shifted towards greater acceptance, moving from imposing stringent policies to promoting research and practical use in the classroom. Notwithstanding HE(Is)’ cautious approach, universities must progress with the deep transformations caused by AI in a strategic way and with strong leadership (see Section 12), considering their student population, staff needs, funding, and broader institutional and societal goals, such as preparing their students for the future world.
In Elsevier’s recent Academia Futura report (Elsevier, 2025) the words “future” and “transformation” play a central role. In the underlying survey, 450 academic leaders from 20 countries evaluated their HEIs’ priorities and judged how well their institutions were performing on each of the goals. The report notes: “The primary line of inquiry revealed that academic leaders are focused on digital transformation, developing global networks and building sustainable institutions, while consistently recognizing the importance of excellent graduate outcomes. Whilst there is lingering circumspection about AI, they see that digital transformation can convert the potential of new technology into tangible performance improvements across an institution’s operations.” (p. 3)
Among Elsevier’s 25 high-performance strategic objectives ranked by transformational potential, the top five are: excellent graduate outcomes (66%), academic excellence in knowledge creation and research outputs (bibliometrics) (64%), a developed and strong global education network (63%), high sustainability performance (62%), and an effective digital transformation (62%). At the other end, the lowest-ranked high-performance objective is effective AI integration (38%).
Again, there is quite a gap between the 84 percent of academic leaders saying effective digital transformation is a top priority and the only 48 percent reporting good progress. The transformational potential ascribed to effective digital transformation also varies by region, with 76 percent agreeing with the statement in North America, some 69 percent in Europe, and just 50 percent in Asia Pacific. As for (Gen)AI, two-thirds of academic leaders regard effective integration of GenAI as a high priority.
Yet, it is clear that digital transformation, certainly when linked with (Gen)AI, has definitely entered HEIs’ strategies. Next to financial health, research performance, research outcomes and impact, student success, a diverse student body, a diverse faculty and staff, and diversity in leadership positions, the Elsevier 2025 report also lists AI as a transformative strategic objective, with the underlying general action to “ensure effective integration and adoption of generative AI across the institution through an evaluation structure, such as a committee, and processes to oversee strategy and responsible adoption.” (p. 18)
One of the central elements that appears even more strongly with the rise of AI in these transformative times of the 4th industrial revolution is critical thinking. It has been formulated as one of the most important so-called competencies of the 21st century and has grown into a strategic goal for the whole of HE.
Since the DEC AI report 2024 (DEC, 2024) indicates that a vast majority of students feel their university’s AI integration has not fully met their expectations (see Section 6.1), it is critical for institutions to address this gap by understanding the AI use cases and attributes that students perceive as most valuable. This understanding could enable HEIs to integrate AI in a more effective and student-centered way. To this end, DEC (2024) formulates four strategic actions HEIs should consider for AI integration:
- HEIs should ensure that all educators are trained and proficient in handling the AI tools that they plan to integrate into teaching and learning;
- HEIs should seek feedback from staff and students about how AI should be implemented and on the effectiveness of AI integration to identify areas of improvement;
- HEIs should identify the attributes of AI that their students and staff value;
- HEIs should define guidelines for AI usage by staff and students to ensure accountability and preserve academic integrity.
The order of the strategic actions above is not without importance. The fact that teaching personnel should first be trained before AI tools are integrated and AI enters the programs rightly presumes that HEIs should have a strategic plan before integrating AI. Unfortunately, and partly because the development of AI lies more in private than in public hands, the reality is that most HEIs were caught off guard and certainly overtaken by the speed with which more and more students were using AI tools in their learning. However, since only a minority of students (18%) believe that courses primarily created and delivered by AI are more valuable than traditional courses, universities should consider cautious moves and prescribe mandatory transparency in their regulations.
All in all, HEIs should realize and make use of the fact that more than 50 percent of students do not want to become over-reliant on AI, and they do not want their professors to become so either. Strategically, universities should thus strike a balance between integrating AI and guarding against over-reliance on it. The most important focus should remain the quality of education. In this respect, AI should rather be regarded as infrastructure and/or a tool, and certainly as an accelerator, while the mission of quality education and research remains.
Operational actions that could enable the integration of AI in teaching and learning, alongside the four strategic actions above, are (with the percentage of current and future AI users calling for them):
- Give access to tools and resources (65%);
- Train staff on AI literacy and skills (64%);
- Collect best practices and use cases for AI integration (60%);
- Develop or clarify guidelines on AI in teaching (50%).
Instead of using AI primarily as a marketing tool, HEIs should stay focused on the core aims and processes of their missions, integrating AI strategically in co-creation with and for the benefit of their internal and external stakeholders, and thus treating AI as an investment in performing better and with greater impact.
Although AI is transforming the contexts, realities, and possible realizations of HE(I)s’ three-fold mission deeply and widely, AI does not necessarily need to enter the mission statement explicitly. Most HEIs still formulate their mission as providers of quality teaching and learning, groundbreaking research, and impactful societal services.
The common definition of strategy, though, is precisely that it formulates objectives and actions by which the mission can be realized (better) (Bollaert, 2019). Thus, it seems quite necessary to incorporate AI in new strategies as an objective, while at the same time it is a tool, or way, offering the best chances of realizing the current missions.
In order to prepare students with the competences needed in the world of work, to add to interdisciplinary knowledge about AI and its applications, and to perform better in societal services and impact by developing AI applications tailored to the needs of profit and social-profit organizations, the main strategic AI objective could then be formulated as “effective and responsible integration and adoption of (Gen)AI” (see also Elsevier, 2025).
Underlying actions and indicators need to be developed in order to measure and monitor the new strategic AI objective; a hypothetical sketch follows below. Both private consultancies and universities’ umbrella organizations have developed policies, events, and support to help realize strategic AI objectives through well-planned, institution-wide actions. The European University Association (EUA), for example, has been working on the theme for quite a while and provides interesting material, such as on institutional strategies, the ethical dimension, and AI uses. The report of the AAC&U (Watson & Rainie, 2025) and the practical guidebook co-authored by its associate vice-president (Bowen & Watson, 2024) have already been mentioned. The Elsevier 2025 report on the future provides, under its 25 top objectives, interesting actions and quite a few indicators.
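As a hypothetical illustration of what such underlying indicators might look like once operationalized, the following sketch models the strategic AI objective as a small set of measurable targets that an IQA cycle could monitor; all indicator names, targets, and measurements are invented for the example:

```python
# Hypothetical sketch: monitoring a strategic AI objective through quantitative
# indicators inside an IQA cycle. All names, targets, and values are invented.
from dataclasses import dataclass

@dataclass
class Indicator:
    name: str
    target: float  # target value (percent)
    actual: float  # last measured value (percent)

ai_objective = [
    Indicator("staff trained in AI literacy", target=80.0, actual=55.0),
    Indicator("courses with explicit AI-use guidelines", target=100.0, actual=70.0),
    Indicator("students aware of responsible-use rules", target=75.0, actual=62.0),
]

# A simple semester-by-semester monitoring pass.
for ind in ai_objective:
    gap = ind.target - ind.actual
    status = "on track" if gap <= 10.0 else "needs action"
    print(f"{ind.name}: {ind.actual:.0f}% of {ind.target:.0f}% target -> {status}")
```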
The private, San Francisco-based company Grammarly not only offers a free AI writing assistant but has also published a useful prioritization guide for adopting responsible AI in HEIs (Grammarly, 2025). Grammarly’s clear pathway to practical AI adoption in HEIs starts from security, transparency, and trust, with IT professionals, administrators, educators, and students as stakeholders. Its responsible AI (RAI) framework plans three actionable steps: assess current AI practices, identify gaps and define next steps, and finally engage stakeholders for shared responsibility. Each theme (security, transparency, and trust) is then broken down into a step-by-step implementation plan. Together with the University of Texas at Austin, Grammarly also published a faculty guide (Grammarly, 2025) describing 20 activities and 9 lesson plans developed at that university.
Another private company, Turnitin, has published actionable strategies for evaluating students’ use of AI writing tools (Turnitin, 2024). The publication deals with such important steps as safeguarding academic integrity standards, cultivating students’ original thinking, empowering educators to drive change, and adapting teaching and assessment for authentic learning.
As far as accountability is concerned, both to internal and external stakeholders, the AI strategy, actions, and realizations can be measured and monitored smoothly and with the least administrative burden if they are integrated into the existing internal (IQA) and external quality assurance (EQA). In this way, QA also incorporates strategy and policies, teaching and learning as well as research and the impact of societal services. Thus, the often fragmented and therefore burdensome QA grows into a kind of total quality management (TQM) tailored to HE(Is) (Bollaert, 2019).

12. Leadership and Culture with AI

The strongest leadership is indeed the one that, by using a strategic and comprehensive TQM focused on critical (self-)reflection, learns from its failures within an organizational culture of continuous enhancement and improvement.
Effective digital transformation, including (Gen)AI (tools), as mentioned in HEIs’ strategies, demands strong leadership. HEIs should incorporate digital leadership into their strategic plans, develop leaders with these skills, and allocate part of their technology budget to leadership and workforce training, next to teaching, research, and societal service.
The fast and deep consequences of AI in all aspects of HE(Is), the context of its mission, its strategy, its stakeholders, and its core processes and management call for leadership that not only acknowledges these transformations but can also look ahead to a future spanning more than the five-year period of a mission statement and strategy. This is precisely the definition of “visionary”. Just as climate change will outlast this period, AI will continue to develop and influence HE thoroughly. The only appropriate answer to these long-term challenges is a vision that takes these transformations into account, carried by visionary leadership.
Crucial in the vision on AI is the cooperation between AI and humans: not replacing humans but augmenting them, their activities, and their ever more needed unique competences such as critical thinking, decision-making, and communicative cooperation. José Miguel Diez Valle and Nikita (Valle & Nikita, 2025) share this view on using AI to improve outcomes and take complex decisions. They argue that AI certainly has its limitations in complex decision-making and that, while it can help prepare and underpin a decision, human insight is indispensable.
Another important element of the leadership’s vision applies to the structure of the HEI itself. The history of AI development (see Section 5.1) has already made clear that AI has not been developed only in mathematics, ICT engineering, or computer science, but in interdisciplinary collaboration with the neurosciences, psychology, linguistics, and even archeology and philosophy. Since its inception, and because AI has become not only more complicated but also more widely applicable, AI and AI tools can be found in all academic disciplines, even at the most comprehensive universities. This is a great challenge for the structure of HEIs and their leadership. Maybe the time has come to develop a vision of a future institutional structure that consists of multi- and interdisciplinary teams and projects co-creating with all stakeholders, instead of the departmental structure of traditional disciplines (Bollaert, 2019).
Gallup’s report on global leadership, surveyed in 52 countries (Gallup, 2024), arrives at the same characteristics as Sinek in his famous book of 2013 (Sinek, 2013). The essence is that, while the demands of leadership are complex, the foundation of being a good leader is rooted in knowing and meeting the needs of those whom they serve. Gallup’s report also shows that the more leaders can provide their followers with hope (56% of respondents list this first), trust (33%), compassion (7%), and stability (4%), by leaning into their unique strengths and applying them to the specifics of their role, the more successful they will be.
To be a successful leader building multiple teams, Gallup (2024) mentions seven important expectations, grouped around three themes of behavior, as made clear in Table 4 below:
On the other hand, current management practices still produce very little development. Old management practices call for a significant change of workplace strategy (see Section 11) and a transformation of culture (see below). Since the introduction of New Public Management (NPM) in the public and social-profit sectors, HEIs have indeed increasingly been managed in ways comparable to companies (Bollaert, 2019). Considering, though, that HEIs have different governance structures, closer to democratic stakeholder models, and a different drive and culture than profit-making, Sinek’s and Gallup’s features of successful leadership can still be inspirational.
Linked to the many possible AI applications in the various processes of an HEI, let alone the other four challenges for HE(Is) mentioned at the beginning of this article (see Section 3), it should be clear that modern leadership can only be a collective that tries to unify all executive managers and team leaders at all levels within an HEI: from institutional to course level, from academic to administrative level, from research to teaching level, from personnel to student, and from internal to external stakeholder.
As with QA, the organizational culture is a critical dimension in successfully integrating AI within the HEI’s strategy. A well-known definition of organizational culture was given by Edgar Henry Schein of MIT’s Sloan School of Management: “A pattern of shared basic assumptions that the group learned as it solved its problems of external adaptation and internal integration, that has worked well enough to be considered valid and, therefore, to be taught to new members as the correct way to perceive, think, and feel in relation to those problems.” (Schein, 1985, p. 90) Organizational culture is a layer that is partly hidden and in which values and beliefs shape behaviors and attitudes (Bollaert, 2019). The table above already holds some cultural elements, such as trust, expectations, listening, and certainly vision.
Indeed, vision is the most important element in shaping an organizational culture, in the sense that it should first answer the question “why?” (its vision), and only then “how?” (strategy) and “what?” (results), as Sinek investigated in 2009. This led him to conclude that “A company is a culture. A group of people brought together around a common set of values and beliefs. It’s not products or services that bind a company together. It’s not size and might that make a company strong, it’s the culture—the strong sense of beliefs and values that everyone, from the CEO to the receptionist, all share.” (Sinek, 2009, p. 90)
How vision and mission are linked to Sinek’s questions, and how they form the real starting point of a vision, a mission, and a strategy whose realization should be measured and monitored through self-reflective TQM by all stakeholders, also in HE, is shown in Figure 6 below (Bollaert, 2019, p. 268).
Red arrows have been added to the original of Figure 6 above to make clear that AI has such a transformative influence on the wide context of HE that it should be taken into account in the vision dealing with the long-term future and the realization of the HEI’s mission. It is therefore necessary to develop an AI strategy that integrates AI in a transparent, responsible, and ethical way, making sure that the uniqueness of the human dimension is enhanced and that the organizational culture is made AI-proof by values and beliefs that engage that human uniqueness even more strongly.
Unfortunately, HE(Is) seem(s) not to be that strong in formulating distinctive and engaging visions and missions. Already in 2015, Gallup looked at the mission statements of 50 HEIs (Dvorak, 2015). The analysis concluded that they are too similar, representing the broad views and aspirations of education leaders and their institutions, and hardly differentiate the HEIs from financial services and retail companies. Moreover, the statements offer little guidance to current and future students in their selection of institutions. The three recommendations of the study were: to establish a clear and differentiated purpose by answering the questions “Why do we exist?” and “What value do we provide to the world?”; to align the brand with tangible outcomes; and to support identity with an engaged culture.
In 2019, an international group of researchers looking at the mission statements of UK universities came to the same conclusion (Seeber et al., 2019). They used 20 factors that are often mentioned in UK universities’ missions, such as student graduates served, staff served, quality attribute, research mean, community served, society served, innovation attribute, and international attribute. The researchers could only conclude that mission statements are identity narratives, more a type of symbolic representation of the HEI.
The following year, two authors of the previous group (Huisman & Mampaey, 2020) extended the research to 21 categories that UK universities could use to differentiate themselves, comparing mission statements between 2005 and 2015. They found even more similarities in 2015 than in 2005. The elements were rather vague and common, if not bland, without further specifying what was meant by words like “excellence”, “top quality”, “a strong community”, and “a stimulating atmosphere”.
Organizational practices, values, and beliefs that underpin critical thinking are transparency, authenticity and integrity, innovation, sustainability, empowered engagement and active commitment (synonymous with broad entrepreneurship), open communication, and respect for diversity, equity, and inclusion (DEI) (Bollaert, 2024). Indeed, we arrive at a long and demanding list of features of modern leadership in HE(Is), which matches the long list at the beginning of this article (see Section 4). Those values and beliefs typically shape the organizational culture that HEIs need even more than before AI. Unfortunately, several of these values are currently under attack by some national governments and AI company executives. It is therefore important that the whole of the leadership, at each level of the institution, acts as a collective in upholding those values.
This organizational culture will neither be shaped nor survive if the university’s leadership does not reflect the values above. ‘Walking the talk’ is a most convincing and motivating practice for modern leadership. It is therefore crucial to create an environment that encourages innovation and tolerates failure, also in AI use. That was also one of the recommendations of the study by the Centre for Higher Education Governance Ghent (CHEGG) at Ghent University and the Centre for Higher Education Policy Studies (CHEPS) at the University of Twente on how one can create a culture for quality enhancement (Kottman et al., 2016). Regarding leadership, the researchers also added the importance of blended leadership, combining managerial and academic values in teaching and learning, and addressing the collective, not solely the individual teachers.
Indeed, the combination of academic and managerial competences and values is only one of the many dimensions university leadership has to take into account, address, and manage. This complexity is also acknowledged in the new framework for leading in HE published by the UK’s Advance HE on 12 February 2025 (Advance HE, 2025). The framework was developed in conjunction with the global HE sector and recognizes the multiplicity of roles and contexts in which HE leaders operate, as can be seen in Figure 7 below.
It was published together with a practical report on the development process (Lennoxsmith & Foster, 2025).
Particularly in relation to AI integration and adoption, the already mentioned Gallup report on cultural readiness for AI in Europe is worth reading (Gallup, 2024). The report concludes that Europe’s lag in AI adoption is not due to a lack of financial resources but rather to organizational cultures not being ready to embrace AI.
Indeed, next to the conclusion that workplaces might not be prepared for AI, the question is whether organizational cultures are AI-ready. Ratanjee and Royal (2024) argue that organizations should prioritize the cultural component of AI adoption and digital transformation. To that end, they have designed a framework with three key dimensions of organizational readiness that are essential to building a culture that prepares employees to make the most of AI and other digital technologies, as shown in Table 5 below.
The preparatory self-reflective framework above clearly interweaves elements of organizational culture (vision, agility, anticipation), strategy (goals, full potential, policies), actions (guidelines, learning paths), and QA (feedback loops).
These last observations on the links between the vision, mission, and organizational culture(s) of HEIs, whether public or private, make it possible to answer the question formulated in the title of this article, as well as the four underlying research questions, by drawing some conclusions.

13. Conclusions

In order to answer the title question in an argued and informed way, this article began by describing the main challenges to HE in our contemporary times and the leadership needed, delved into the history and development of AI and its tools, and investigated the currently divided, yet quickly shifting attitudes towards, degrees of implementation of, and uses of AI (tools) among the internal and external stakeholders of HE(Is). Let us return to the four research questions formulated at the beginning of this article in order to draw some conclusions. As the title question is of strategic importance, the article deals with institutional policy and leadership and can also be read as a policy paper underpinned by surveys showing the uses of and attitudes towards AI and AI tools by HE’s stakeholders.
The first question was how AI and AI tools influence HE(Is) in their mission, organization, and context. In order to answer this question, we started by observing that we are currently living in a globalized and transformative world with a lot of disruption and volatility due to geopolitical and economic tensions and conflicts, climate change, and digitalization, causing a 4th industrial revolution burdened with mental and physical health problems and ethical dilemmas. These global and contextual challenges have entered HE and have direct consequences for the ways and degrees of realization of the three-fold mission of teaching and learning, research, and societal services.
While this three-fold mission is globally accepted by most HEIs and often occurs in their mission statements, differing only in the adjectives of quality and commitment (Dvorak, 2015; Huisman & Mampaey, 2020), AI is not (yet), and does not (yet) need to be, explicitly mentioned in HEIs’ mission statements. This might be surprising, as other identified global challenges, such as the international dimension and sustainability, do appear in the missions.
This means AI can and should be strategically considered as an accelerator of and game-changer in the realization of the existing mission, as well as an introducer of new skills. Ultimately, however, AI seems to increase the importance of uniquely human transversal competences such as creativity, critical thinking, and empathy. This shows that AI should be considered not as contrary to the human brain and being, but as an opportunity to enhance them. The latter belongs to a leadership’s vision and is of strategic importance.
The second question was precisely whether AI and its applications should be regarded as a strategic objective or only as a tool to realize the strategy. The answer is twofold. The effective and responsible adoption and integration of AI and AI tools is clearly a strategic objective to realize HE(I)’s three-fold mission in a better way in a hopefully better world. In this sense, AI is a strategic tool to realize the mission. On the other hand, HE(Is) need(s) a clear and well-planned AI strategy in order to meet this strategic AI objective, formulated as “an effective, efficient, and responsible adoption and integration of AI”.
As such, this strategic AI objective and its underlying actions need to be formulated and well planned in co-creation with all stakeholders, approved institution-wide by the board, and managed and communicated well by an engaged leadership and staff. The implementation and realization of this AI objective need to be measured and monitored through underlying self-critical reflection based on quantitative and qualitative indicators in order to enable enhancement. As the changes brought by AI are so fast, a mid-term review of the strategy, or even shorter strategy cycles, and certainly more flexibility seem important.
This brings us naturally to the third question: how are AI and the use of AI tools, as developed and described in an AI strategy, best managed in order to be adopted and integrated in an effective and responsible way? Although the use of AI both in HEIs and in the world of work is currently clearly divided among early adopters, followers, and opponents, the understanding is quickly growing that these tools pose fundamental challenges to some of the foundational structures of education.
AI adoption and integration need to be carefully planned and well managed. Both the infrastructure and technology and the ethical dimensions need to be addressed in a clear and transparent way towards all stakeholders in order to have them join the AI transformation.
Central and crucial in the management of this strategic AI plan are a clear framework and guidelines, linked to professional workshops, training, and help for students, staff, and societal partners through an active, institution-wide platform. The most common content of such AI frameworks and guidelines is a short introduction to AI, regulations about its use (especially in writing, assessment, and research), the AI help offered by the institution, warnings about bias and hallucinations, and the ethical dimension. Some frameworks also contain a list of AI skills linked to discipline-specific learning outcomes as well as to the transversal competences of programs and courses. The essential words of an AI framework and/or guidelines are transparency and/or disclosure (against plagiarism), responsibility, and critical thinking. Good examples and practices have been given in this article. The UNESCO and EU frameworks and guidelines, as well as the examples and help provided by various member organizations and consultancies, can be inspiring.
The fourth and final question concerned the influence of AI and its tools on leadership and culture. As AI is speeding up the degree and velocity of the transformation that characterizes our time, the competences already needed by modern leadership are only becoming more important. Again, AI seems to be more an accelerator than a newly needed feature of successful leadership. Modern HE leaders need to be visionary in the longer term, in all aspects of the three-fold mission, going beyond the five-year strategic plans more than ever. As the big challenges of our times (geopolitical tensions and conflicts, the global economy and finances, climate change, digitalization, and health) are fundamental and yet unpredictable, vision and resilient flexibility are musts.
If we think only about the inter- and multidisciplinary character of AI and its possible applications in all disciplines, the leadership also needs to be visionary about the HEI’s structure and way of cooperating. The time might have come to do away with the traditional disciplinary departments and to work in a more project-based way with multiple teams in research as well as in teaching and societal work, co-creating with all stakeholders.
Essential managerial competences such as listening, communicating, empowering, convincing, stimulating, and innovating have become more important than controlling and coordinating processes. Vision is crucial for being proactive.
Finally, leadership can only be successful when it is authentically practiced by an engaged collective that is felt and accepted as such by the stakeholders. A culture of trust, bottom-up as well as top-down, is pivotal. The belief in the change necessary to integrate AI and to use AI tools effectively in the organization in a responsible and ethical way should be shared. That AI and its tools should be considered, approached, and used for the enhancement of human intelligence, of uniquely human competences, and of the creation of more non-routine, interesting work is part of the leadership’s new vision and should be embedded in the organizational culture. The culture of the organization is precisely the home of beliefs and values, without losing (self-)critical reflection. The more AI (tools) is (are) used, the greater the need for human intelligence seems to become. The human touch is necessary not only for the critical check of AI results but also for holistically humanistic and ethical reasons.
The most future-proof attitude is not to deny or forbid AI usage but to look at the opportunities to explore and integrate it in such a way that the focus shifts even more towards the human being, with its unique and ever more necessary characteristics, in a global context challenged by dangerous and disastrous anti-human and unethical evolutions. Looking at AI in this wider global context, the strategic need to address and manage AI seems to apply not only to HE and HEIs but also to society as a whole.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

All data and their details remain the property of the authors and organizations referred to. The author obtained their permission to use, refer to, and cite them.

Conflicts of Interest

The author declares no conflict of interest.

Notes

1
In this article, ‘university’ is synonymous with any HEI with the traditional three-fold mission of providing teaching, research, and societal services, whether it is called a university, university of applied sciences, polytechnic, technical university, community college, graduate school, (higher) academy, school of arts, or any other name in the national legislation.
2
The words “competence”, “competency”, and “skill” are used as synonyms in this article, meaning a combination of knowledge, insight, ability, and attitude that enables successful functioning in a specific context.

References

  1. Advance HE. (2025). Framework for leading in higher education: Diagram and explanation. Available online: https://advance-he.ac.uk/leadership/framework-leading-higher-education (accessed on 19 February 2025).
  2. Alenichev, A., Kingori, P., & Grietens, K. (2023). Reflections before the storm: The AI reproduction of biased imagery in global health visuals. The Lancet, 11, e1496–e1498. Available online: https://www.thelancet.com/pdfs/journals/langlo/PIIS2214-109X(23)00329-7.pdf (accessed on 28 November 2024). [CrossRef] [PubMed]
  3. Bhattacharya, A. (2025). Non-Western founders say DeepSeek is proof that innovation need not cost billions of dollars. Available online: https://restofworld.org/2025/deepseek-ai-model-openai-dominance-challenge/ (accessed on 3 February 2025).
  4. Bollaert, L. (2019). A manual for internal quality assurance in higher education: Looking for a new quality in HE in a new world (updated and revised 2nd ed.). EURASHE. [Google Scholar]
  5. Bollaert, L. (2023). Staat het geglobaliseerd hoger onderwijs op een historisch keerpunt? (Is global HE at a historical turning-point?). Tijdschrift voor Onderwijsrecht en Onderwijsbeleid, 2023–2024, 52–86. [Google Scholar]
  6. Bollaert, L. (2024, July 4). New global challenges for higher education ask for new visionary leadership. International LEAD Conference (pp. 7–30), Brussels, Belgium. [Google Scholar]
  7. Bowen, J. A., & Watson, C. E. (2024). Teaching with AI: A practical guide to a new era of human learning. Johns Hopkins University Press. [Google Scholar]
  8. Burki, T. (2024). Crossing the frontier: The first global AI safety summit. The Lancet, 6(2), e91–e92. [Google Scholar] [CrossRef]
  9. Cengage. (2024). Exploring AI in higher education—The state of GenAI in higher education: EMEA instructor insight report. Available online: https://www.cengage.uk/gen-ai-research/ (accessed on 16 January 2025).
  10. Chen, C. (2025). Four Chinese AI startups to watch beyond DeepSeek. MIT Technology Review. Available online: https://www.technologyreview.com/2025/02/04/1110942/four-chinese-ai-startups-deepseek/ (accessed on 4 February 2025).
  11. Cheng, Z., Dinh, N. B. K., Caliskan, A., & Zhu, C. (2024). A systematic review of digital academic leadership in higher education. International Journal of Higher Education, 13(4), 38. [Google Scholar] [CrossRef]
  12. Colorado State University Global. (2024). How does AI actually work? Available online: https://csuglobal.edu/blog/how-does-ai-actually-work (accessed on 12 December 2024).
  13. de Bellefonds, N., Charanya, T., Franke, M. R., Apotheker, J., Forth, P., Grebe, M., Luther, A., de Laubier, R., Lukic, V., Martin, M., Nopp, C., & Sassine, J. (2025). Where’s the value in AI? Boston Consulting Group. Available online: https://web-assets.bcg.com/a5/37/be4ddf26420e95aa7107a35aae8d/bcg-wheres-the-value-in-ai.pdf (accessed on 17 February 2025).
  14. DEC. (2024). AI or not AI: What students want–Digital education council global AI student survey 2024. Available online: https://www.digitaleducationcouncil.com/post/digital-education-council-global-ai-student-survey-2024 (accessed on 12 December 2024).
  15. DEC. (2025a). AI meets academia–Digital education council global AI faculty survey 2025. Available online: https://www.digitaleducationcouncil.com/post/digital-education-council-global-ai-faculty-survey (accessed on 12 February 2025).
  16. DEC. (2025b). DEC AI literacy framework: AI literacy for all. Available online: https://www.digitaleducationcouncil.com/post/digital-education-council-ai-literacy-framework (accessed on 12 March 2025).
  17. Diamandis, P. H. (2024). Longevity guidebook. How to slow, stop and reverse aging–And not die from something stupid. Ethos Collective. [Google Scholar]
  18. Digital Science, Hahnel, M., Porter, S., & Delevante, R. (2024). Research transformation: Change in the era of AI, open and impact: Voices from the academic community. Available online: https://digitalscience.figshare.com/articles/report/Research_transformation_change_in_the_era_of_open_AI_and_impact/27193923?file=50137452 (accessed on 22 December 2024). [CrossRef]
  19. Dorobat, C.-E., Underwood, S., Larner, A., & Sutherst, J. (2024). How to team up with AI: 3 steps for assessment redesign. AdvanceHE. Available online: https://www.advance-he.ac.uk/news-and-views/how-team-ai-3-steps-assessment-redesign (accessed on 31 October 2024).
  20. Dvorak, N. (2015, August 11). It’s hard to differentiate one higher ed brand from another. Gallup Education. Available online: https://www.gallup.com/education/243425/hard-differentiate-one-higher-brand.aspx (accessed on 21 March 2016).
  21. EducationDynamics. (2025). Engaging the modern learner: 2025 report on the preferences & behaviors shaping higher education. EducationDynamics LLC. Available online: https://insights.educationdynamics.com/rs/183-YME-928/images/EDDY-Modern-Learner-Report-2025.pdf (accessed on 21 February 2025).
  22. Elsevier. (2025). Academia futura: High performance never stops transforming. Available online: https://assets.ctfassets.net/o78em1y1w4i4/4CXOoRfvjeczg6vt6XO1oV/b0a22b4ac661d1cc47f4cb0ae04396ad/Elsevier_Transformation_Report_WEB.pdf (accessed on 19 February 2025).
  23. EU. (2023). European code of conduct for research integrity. Available online: https://allea.org/code-of-conduct/ (accessed on 10 February 2025).
  24. EU. (2025). The AI act, regulation EU 2024/1689. Available online: https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai (accessed on 10 February 2025).
  25. Fink, O., Hartung, T., Lee, S. Y., & Maynard, A. (2024). Top 10 emerging technologies of 2024. WEF. Available online: https://www3.weforum.org/docs/WEF_Top_10_Emerging_Technologies_of_2024.pdf (accessed on 14 January 2025).
  26. Forero, M. G., & Herrera-Suárez, H. J. (2023). ChatGPT in the classroom: Boon or bane for physics students’ academic performance? arXiv, arXiv:2312.02422. [Google Scholar]
  27. Freeman, J. (2025). Student generative AI survey 2025. Hepi/Kortext. Available online: https://www.hepi.ac.uk/2025/02/26/hepi-kortext-ai-survey-shows-explosive-increase-in-the-use-of-generative-ai-tools-by-students/ (accessed on 26 February 2025).
  28. Gallup. (2024). Culture of AI–Benchmark report: State of AI adoption and culture readiness in Europe. Gallup. Available online: https://www.gallup.com/workplace/652784/culture-of-ai-and-adoption-report.aspx (accessed on 17 February 2025).
  29. Gibney, E. (2025, January 29). Scientists flock to DeepSeek: How they’re using the blockbuster AI model. Nature. Available online: https://www.nature.com/articles/d41586-025-00275-0 (accessed on 29 January 2025).
  30. Glover, E. (2024). AI-generated content and copyright law: What we know: AI-generated content isn’t protected by U.S. copyright laws. But there are still a lot of legal questions to untangle. Available online: https://builtin.com/artificial-intelligence/ai-copyright (accessed on 17 February 2025).
  31. Glynn, A. (2024). Suspected undeclared use of artificial intelligence in the academic literature. arXiv, arXiv:2411.15218v1. Available online: https://www.researchgate.net/publication/386112581_Suspected_Undeclared_Use_of_Artificial_Intelligence_in_the_Academic_Literature_An_Analysis_of_the_Academ-AI_Dataset (accessed on 12 December 2024).
  32. Goodfellow, I. J., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., & Bengio, Y. (2014). Generative adversarial nets. Available online: https://arxiv.org/pdf/1406.2661 (accessed on 22 November 2024).
  33. Google. (2025). Responsible AI progress report. Available online: https://ai.google/static/documents/ai-responsibility-update-published-february-2025.pdf (accessed on 28 February 2025).
  34. Grammarly. (2025). Responsible AI in higher education: A prioritization guide. Available online: https://www.grammarly.com/edu/events-resources/responsible-ai-framework-guide (accessed on 28 February 2025).
  35. Greis, J., & Sorel, M. (2024). The cybersecurity provider’s next opportunity: Making AI safer. McKinsey & Company. Available online: https://www.mckinsey.com/capabilities/risk-and-resilience/our-insights/the-cybersecurity-providers-next-opportunity-making-ai-safer (accessed on 22 November 2024).
  36. Harari, Y. N. (2016). Homo Deus—A brief history of tomorrow. Harvill Secker. [Google Scholar]
  37. Harari, Y. N. (2018). 21 lessons for the 21st century. Penguin Random House. [Google Scholar]
  38. Harari, Y. N. (2024). Nexus—A brief history of information networks from the stone age to AI. Penguin Random House. [Google Scholar]
  39. Harbani, H., Muna, J., & Judiarni, J. A. (2021). Digital leadership in facing challenges in the Era Industrial Revolution 4.0. Webology, 18(S105), 975–990. [Google Scholar] [CrossRef]
  40. Hoffmann, M., Narayanan, M., & Daniels, O. J. (2025). Will the Paris artificial intelligence summit set a unified approach to AI governance—Or just be another conference? Bulletin of the Atomic Scientists. Available online: https://thebulletin.org/2025/02/will-the-paris-artificial-intelligence-summit-set-a-unified-approach-to-ai-governance-or-just-be-another-conference/ (accessed on 6 February 2025).
  41. Hollmann, N., Müller, S., Purucker, L., Krishnakumar, A., Körfer, M., Hoo, S. B., Schirrmeister, R. T., & Hutter, F. (2025). Accurate predictions on small data with a tabular foundation model. Nature, 637, 319–326. Available online: https://www.nature.com/articles/s41586-024-08328-6 (accessed on 20 February 2025). [CrossRef] [PubMed]
  42. Huisman, J., & Mampaey, J. (2020). Use your imagination: What UK universities want you to think of them. Oxford Review of Education, 44(4). Available online: https://www.researchgate.net/publication/323271994_Use_your_imagination_What_UK_universities_want_you_to_think_of_them (accessed on 13 February 2025). [CrossRef]
  43. Ibrahim, A. (2024, December 15). United States’ project maven and the rise of AI-assisted warfare. Global Defense. Available online: https://defensetalks.com/united-states-project-maven-and-the-rise-of-ai-assisted-warfare/ (accessed on 19 December 2024).
  44. Jatautaite, D. (2023, December 12). Students’ attitudes towards learning foreign languages for specific purposes via AI mediation. Conference Proceedings Advanced Learning Technologies and Applications (ALTA’23) (pp. 23–34), Kaunas, Lithuania. [Google Scholar]
  45. Jones, N. (2025a). AI hallucinations can’t be stopped—But these techniques can limit their damage. Nature, 637, 778–780. [Google Scholar] [CrossRef] [PubMed]
  46. Jones, N. (2025b). OpenAI’s ‘deep research’ tool: Is it useful for scientists? Nature. Available online: https://www.nature.com/articles/d41586-025-00377-9 (accessed on 12 February 2025).
  47. Kestin, G., Miller, K., Klales, A., Milbourne, T., & Ponti, G. (2024). AI tutoring outperforms active learning. Available online: https://www.researchsquare.com/article/rs-4243877/v1 (accessed on 28 November 2024). [CrossRef]
  48. Kissinger, H., Schmidt, E., & Huttenlocher, D. (2021). The age of AI: And our human future. John Murray Publishers Ltd. [Google Scholar]
  49. Kissinger, H., Schmidt, E., & Mundie, C. (2023). Genesis: Artificial intelligence, hope and the human spirit. John Murray Publishers Ltd. [Google Scholar]
  50. Kottman, A., Huisman, J., Brockerhoff, L., Cremonini, L., & Mampaey, J. (2016). How can one create a culture for quality enhancement? Available online: https://www.researchgate.net/publication/309610434_How_can_one_create_a_culture_for_quality_enhancement (accessed on 21 February 2025).
  51. Kumar, H., Rothschild, D. M., Goldstein, D. G., & Hofman, J. M. (2023). Math education with large language models: Peril or promise? Available online: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4641653 (accessed on 12 December 2024).
  52. Lane, M. (2024). Who will be the workers most affected by AI? A closer look at the impact of AI on women, low-skilled workers and other groups. OECD Publishing. Available online: https://www.oecd.org/content/dam/oecd/en/publications/reports/2024/10/who-will-be-the-workers-most-affected-by-ai_fb7fcccd/14dc6f89-en.pdf (accessed on 12 December 2024).
  53. LeCun, Y., Bengio, Y., & Hinton, G. (2015). Deep learning. Nature, 521(7553), 436–444. [Google Scholar] [CrossRef] [PubMed]
  54. Lennoxsmith, F., & Foster, W. (2025). Framework for leading in higher education: A report on the process of development. Advance HE. Available online: https://s3.eu-west-2.amazonaws.com/assets.creode.advancehe-document-manager/documents/advance-he/Framework_for_Leading_report_1739437352.pdf (accessed on 25 February 2025).
  55. Liu, W.-S., You, J., Chen, S.-D., Zhang, Y., Feng, J.-F., Xu, Y.-M., Yu, J.-T., & Cheng, W. (2024). Plasma proteomics identify biomarkers and undulating changes of brain aging. Nature Aging, 5, 99–112. Available online: https://www.nature.com/articles/s43587-024-00753-6 (accessed on 9 December 2024). [CrossRef] [PubMed]
  56. Lund, B., Lamba, M., & Oh, S. H. (2024). The impact of AI on academic research and publishing. arXiv, arXiv:2406.06009. Available online: https://www.researchgate.net/publication/381307162_The_Impact_of_AI_on_Academic_Research_and_Publishing (accessed on 20 December 2024).
  57. Luo, X., Rechardt, A., Sun, G., Nejad, K. K., Yáñez, F., Yilmaz, B., Lee, K., Cohen, A. O., Borghesani, V., Pashkov, A., & Marinazzo, D. (2024). Large language models surpass human experts in predicting neuroscience results. Nature Human Behaviour, 9, 305–315. Available online: https://www.nature.com/articles/s41562-024-02046-9 (accessed on 3 December 2024). [CrossRef] [PubMed]
  58. Lushenko, P., & Carter, K. (2024). A new military-industrial complex: How tech bros are hyping AI’s role in war. Bulletin of the Atomic Scientists. Available online: https://thebulletin.org/2024/10/a-new-military-industrial-complex-how-tech-bros-are-hyping-ais-role-in-war/ (accessed on 10 October 2024).
  59. Manyika, J., & Hassabis, D. (2025). Responsible AI: Our 2024 report and ongoing work. Available online: https://blog.google/technology/ai/responsible-ai-2024-report-ongoing-work (accessed on 19 February 2025).
  60. Martin, J. P. (2018). Skills for the 21st century: Findings and policy lessons from the OECD survey of adult skills. OECD. [Google Scholar] [CrossRef]
  61. McCarthy, J., Minsky, M., Rochester, N., & Shannon, C. (1955). A proposal for the dartmouth summer research project on artificial intelligence. Available online: https://www-formal.stanford.edu/jmc/history/dartmouth/dartmouth.html (accessed on 12 September 2024).
  62. McKinsey. (2022). The skills revolution and the future of learning and earning. McKinsey&Company. Available online: https://www.mckinsey.com/~/media/mckinsey/industries/education/the-skills-revolution-and-the-future-of-learning-and-earning-report-f.pdf (accessed on 6 December 2024).
  63. McMurtrie, B. (2024, October 3). The future is hybrid—Colleges begin to reimagine learning in an AI world. The Chronicle. Available online: https://www.chronicle.com/article/the-future-is-hybrid (accessed on 11 October 2024).
  64. Meyer, H., Yee, L., Chui, M., & Roberts, R. (2025). Superagency in the Workplace—Empowering people to unlock AI’s full potential. McKinsey. Available online: https://www.mckinsey.com/~/media/ (accessed on 21 February 2025).
  65. Microsoft & LinkedIn. (2024). AI at work is here. Now comes the hard part—2024 work trend index annual report from Microsoft and LinkedIn. Available online: https://assets-c4akfrf5b4d3f4b7.z01.azurefd.net/assets/2024/05/2024_Work_Trend_Index_Annual_Report_6_7_24_666b2e2fafceb.pdf (accessed on 20 December 2024).
  66. MIT. (2025). Technology review: 10 breakthrough technologies 2025. Available online: https://www.technologyreview.com/2025/01/03/1109178/10-breakthrough-technologies-2025/ (accessed on 19 February 2025).
  67. Naddaf, M. (2025, February 4). How are researchers using AI? Survey reveals pros and cons for science. Nature. Available online: https://www.nature.com/articles/d41586-025-00343-5 (accessed on 12 February 2025).
  68. OECD. (2015). Frascati manual 2015: Guidelines for collecting and reporting data on research and experimental development, the measurement of scientific, technological and innovation activities. OECD Publishing. Available online: https://www.oecd.org/content/dam/oecd/en/publications/reports/2015/10/frascati-manual-2015_g1g57dcb/9789264239012-en.pdf (accessed on 12 December 2024). [CrossRef]
  69. OECD. (2025). Trends shaping education 2025. OECD Publishing. Available online: https://www.oecd.org/en/publications/trends-shaping-education-2025_ee6587/fd-en/full-report.html (accessed on 19 February 2025).
  70. Patrizio, A. (2024, December 4). Three tech companies eyeing nuclear power for AI energy. Tech Target Network. Available online: https://www.techtarget.com/whatis/feature/Three-tech-companies-eyeing-nuclear-power-for-AI-energy (accessed on 29 January 2025).
  71. Perkins, M., Furze, L., Roe, J., & MacVaugh, J. (2024). The Artificial Intelligence Assessment Scale (AIAS): A framework for ethical integration of generative AI in educational assessment. Journal of University Teaching & Learning Practice, 21(6). [Google Scholar] [CrossRef]
  72. Pinker, S. (1997). How the mind works. W.W. Norton. [Google Scholar]
  73. Ratanjee, V., & Royal, K. (2024, November 1). Your AI strategy will fail without a culture that supports it. Gallup. Available online: https://www.gallup.com/workplace/652727/strategy-fail-without-culture-supports.aspx (accessed on 25 November 2024).
  74. Relyea, C., Maor, D., Durth, S., & Bouly, J. (2024). Gen AI’s next inflection point: From employee experimentation to organizational transformation. QuantumBlack, AI by McKinsey, McKinsey & Company. Available online: https://www.mckinsey.com/capabilities/people-and-organizational-performance/our-insights/gen-ais-next-inflection-point-from-employee-experimentation-to-organizational-transformation (accessed on 20 September 2024).
  75. Riggins, S. (2024, October 22). Digital nexus in higher education: Artificial intelligence impact on academic integrity. The Cengage Blog. Available online: https://blog.cengage.com/digital-nexus-in-higher-education-artificial-intelligence-impact-on-academic-integrity/ (accessed on 25 October 2024).
  76. Rowsell, J. (2024, November 20). AI chatbot can conduct research interviews on unprecedented scale. THE. Available online: https://www.timeshighereducation.com/news/ai-chatbot-can-conduct-research-interviews-unprecedented-scale (accessed on 22 November 2024).
  77. Salah, D. (2023). The effect of women managers’ digital leadership competencies on glass ceiling: A research at universities in Turkey [Doctoral dissertation, Mersin University]. [Google Scholar]
  78. Schein, E. H. (1985). Organisational culture and leadership (3rd ed.). John Wiley & Sons. [Google Scholar]
  79. Schwab, K. (2017). The fourth industrial revolution. Penguin. [Google Scholar]
  80. Seeber, M., Barberio, V., Huisman, J., & Mampaey, J. (2019). Factors affecting the content of universities’ mission statements: An analysis of the United Kingdom higher education system. Studies in Higher Education, 44(2). Available online: https://www.researchgate.net/publication/318284091_Factors_affecting_the_content_of_universities’_mission_statements_an_analysis_of_the_United_Kingdom_higher_education_system (accessed on 14 January 2025).
  81. Shrivastava, S. K., & Shrivastava, C. (2022). The impact of digitalization in higher educational institutions. International Journal of Soft Computing and Engineering, 11(2), 7–11. [Google Scholar] [CrossRef]
  82. Sinek, S. (2009). Start with why—How great leaders inspire everyone to take action. Penguin. [Google Scholar]
  83. Sinek, S. (2013). Leaders eat last—Why some teams pull together and others don’t. Penguin. [Google Scholar]
  84. Singer, B., Bingham, D. R., Corbett, B., Davenport, C., & Gandolfi, A. (2024). AI/data centers’ global power surge and the Sustainability impact. The Goldman Sachs Group Inc. Available online: https://www.goldmansachs.com/images/migrated/insights/pages/gs-research/gs-sustain-generational-growth-ai-data-centers-global-power-surge-and-the-sustainability-impact/sustain-data-center-redaction.pdf (accessed on 18 December 2024).
  85. Sukharevsky, A., Hazan, E., Smit, S., de la Chevasnerie, M.-A., de Jong, M., Hieronimus, S., Mischke, J., & Dagorret, G. (2024). Time to place our bets: Europe’s AI opportunity. QuantumBlack, McKinsey Global Institute. Available online: https://www.mckinsey.com/~/media/mckinsey/business%20functions/quantumblack/our%20insights/time%20to%20place%20our%20bets%20europes%20ai%20opportunity/time-to-place-our-bets-europes-ai-opportunity.pdf (accessed on 13 November 2024).
  86. Tang, J., & Huth, A. G. (2025). Semantic language decoding across participants and stimulus modalities. Current Biology, 35(5), 1023–1032. [Google Scholar] [CrossRef] [PubMed]
  87. THE. (2024). Digital maturity index—Examining the global digital landscape in higher education. Available online: https://www.timeshighereducation.com/content/digital-maturity-index (accessed on 11 September 2024).
  88. Tse, T. (2025, February 11). With autonomous problem-solving, agentic AI will upend what you consider work. LSE Business Review. Available online: https://blogs.lse.ac.uk/businessreview/2025/02/11/with-autonomous-problem-solving-agentic-ai-will-upend-what-you-consider-work/ (accessed on 14 February 2025).
  89. Tully, S., Longoni, C., & Appel, G. (2025). Lower artificial intelligence literacy predicts greater AI receptivity. Journal of Marketing. [Google Scholar] [CrossRef]
  90. Turing, A. (1950). Computing machinery and intelligence. Mind, 59(236), 433–460. [Google Scholar] [CrossRef]
  91. Turnitin. (2024). Why evaluating students’ use of AI writing tools is important: Actionable strategies for institutions. Available online: https://www.turnitin.com/ebooks/actionable-strategies-evaluate-students-ai-writing-use (accessed on 28 January 2025).
  92. UGent. (2024a). Generative AI in education at Ghent University. Available online: https://onderwijstips.ugent.be/en/tips/chatgpt-een-generatief-ai-systeem-met-impact-op-he/ (accessed on 31 October 2024).
  93. UGent. (2024b). Onderzoekstips (Research hints). Available online: https://onderzoektips.ugent.be/en/tips/00002188/ (accessed on 31 October 2024).
  94. UN. (2015a). Transforming our World: The 2030 agenda for sustainable development. Available online: https://sdgs.un.org/sites/default/files/publications/21252030%20Agenda%20for%20Sustainable%20Development%20web.pdf (accessed on 9 January 2025).
  95. UN. (2015b). Sustainable development goals. Available online: https://sdgs.un.org/goals (accessed on 9 January 2025).
  96. UNESCO. (2021). Recommendation on the ethics of artificial intelligence. Available online: https://www.unesco.org/en/articles/recommendation-ethics-artificial-intelligence (accessed on 8 January 2025).
  97. UNESCO. (2023). Guidance for generative AI in education and research. Available online: https://unesdoc.unesco.org/ark:/48223/pf0000386693 (accessed on 8 January 2025).
  98. UNESCO. (2024a). AI competence framework for teachers. Available online: https://unesdoc.unesco.org/ark:/48223/pf0000391104 (accessed on 8 January 2025).
  99. UNESCO. (2024b). AI competence framework for students. Available online: https://unesdoc.unesco.org/ark:/48223/pf0000391105 (accessed on 8 January 2025).
  100. Valle, J. M. D., & Nikita. (2025). AI is revolutionising decision-making, but it can’t replace human leaders. LSE Business Review. Available online: https://blogs.lse.ac.uk/businessreview/2025/01/03/ai-is-revolutionising-decision-making-but-it-cant-replace-human-leaders/ (accessed on 22 January 2025).
  101. Viswa, C. A., Shu, D., Zurkiya, D., & Bleys, J. (2025). Scaling gen AI in the life sciences industry. McKinsey & Company. Available online: https://www.mckinsey.com/industries/life-sciences/our-insights/scaling-gen-ai-in-the-life-sciences-industry#/ (accessed on 16 January 2025).
  102. Waibel, G., & Hansen, D. (2025, January 7). AI and the struggle for control over research. Inside Higher Ed. Available online: https://www.insidehighered.com/news/tech-innovation/artificial-intelligence (accessed on 30 January 2025).
  103. Wang, H., Dang, A., Wu, Z., & Mac, S. (2024, July 12). Generative AI in higher education: Seeing ChatGPT through Universities’ policies, resources, and guidelines. Elsevier. [Google Scholar]
  104. Warner, J. (2025). More than words: How to think about writing in the age of AI. Basic Books. [Google Scholar]
  105. Watson, C. E., & Rainie, L. (2025). Leading through disruption: Higher education executives assess AI’s impacts on teaching and learning. Available online: www.aacu.org/research/leading-through-disruption (accessed on 21 February 2025).
  106. WEF. (2025). Future of jobs report 2025. WEF. [Google Scholar]
  107. Weldon, W. (2025). How to ensure student success in higher education with AI-powered feedback analytics. Available online: https://narratives.insidehighered.com/explorance-ensure-student-success-in-higher-education-ai-feedback-analytics/ (accessed on 7 January 2025).
  108. Wiley. (2025). ExplanAItions. Wiley. Available online: https://www.wiley.com/content/dam/wiley-dotcom/en/b2c/content-fragments/explanaitions-ai-report/pdfs/Wiley_ExplanAItions_AI_Study_February_2025.pdf (accessed on 26 February 2025).
  109. Zuzeviciute, V., Butrime, E., & Jatautaite, D. (2023, December 12). HE in the context of the AI expansion: Students’ perspective. In Conference proceedings of advanced learning technologies and applications (ALTA’23) (pp. 55–62). Kaunas, Lithuania. [Google Scholar]
Figure 2. A general AI process. Deep learning is a subset of machine learning that focuses on utilizing neural networks to perform tasks such as classification, regression, and representation learning. The field takes inspiration from biological neuroscience and is centered around stacking artificial neurons into layers and “training” them to process data. The adjective “deep” refers to the use of multiple layers (ranging from three to several hundred or thousands) in the network. Methods used can be either supervised, semi-supervised, or unsupervised (LeCun et al., 2015).
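To make the caption’s notion of “stacking artificial neurons into layers” concrete, the following minimal sketch builds a tiny feed-forward network in plain NumPy. It is illustrative only and not taken from the article or from LeCun et al. (2015); the layer sizes, the ReLU activation, and the initialization scale are arbitrary assumptions.

import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # Simple non-linearity applied between layers.
    return np.maximum(0.0, x)

# Three weight layers (4 -> 8 -> 8 -> 2): "deep" simply means stacking more of these.
layer_sizes = [4, 8, 8, 2]
weights = [rng.standard_normal((m, n)) * 0.1 for m, n in zip(layer_sizes, layer_sizes[1:])]
biases = [np.zeros(n) for n in layer_sizes[1:]]

def forward(x):
    # Each hidden layer applies an affine map followed by the non-linearity,
    # turning raw inputs into successively more abstract representations.
    for w, b in zip(weights[:-1], biases[:-1]):
        x = relu(x @ w + b)
    return x @ weights[-1] + biases[-1]  # final layer left linear

print(forward(rng.standard_normal(4)))  # a 2-dimensional output vector

In a real setting the weights would be “trained”, i.e., adjusted by an optimizer to reduce a loss on labeled (supervised) or unlabeled (unsupervised) data; this forward-only sketch omits that step.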
Figure 3. Landscape of AI use in research. Question: Which of the following best describes the part you want to play in the adoption of GenAI in your work, whether at your institution/organization and/or among members of your field of study?
Figure 4. Top 5 ways researchers are currently using GenAI tools.
Figure 5. AI Uses. Question: Which, if any, of these represent use cases or solutions that are similar to anything you are already doing and/or have already tried with AI in the past? Source: ExplanAItions report, Wiley.
Figure 6. Link between vision, mission and strategy in interaction with AI.
Figure 7. Advance HE framework for leading in HE.
Table 1. The AI Assessment Scale (AIAS).
# | Level | Description
1 | No AI | The assessment is completed entirely without AI use, ensuring that students rely solely on their own knowledge, understanding, and skills. AI must not be used at any point during the assessment.
2 | AI-assisted idea generation & structuring | AI can be used for brainstorming, creating structures, and generating ideas for improving the work. No AI content is allowed in the final submission.
3 | AI-assisted editing | AI can be used to improve the clarity or quality of the work and thus the final output, but no new content may be created using AI. The original work, containing no AI content, must be provided in an appendix.
4 | AI task completion, human evaluation | AI is used to complete specified elements of the task, with students providing discussion or commentary on the AI-generated content. This level requires critical engagement with the AI output and an evaluation of it. Any AI-created content must be cited.
5 | Full AI | AI may be used as a “co-pilot” throughout the assessment to meet its requirements, allowing for a collaborative approach that enhances creativity. AI-generated content supports the student’s own work and does not have to be specified.
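Institutions adopting the AIAS may want it in machine-readable form, for instance to tag assessments in a learning management system. The sketch below is one hypothetical Python encoding of Table 1; the class name, the two boolean fields, and the simplifications they impose (e.g., level 3’s appendix requirement reduced to a comment) are this example’s own assumptions, not part of the AIAS itself.

from dataclasses import dataclass

@dataclass(frozen=True)
class AIASLevel:
    level: int
    label: str
    new_ai_content_allowed: bool  # may newly AI-generated content appear in the hand-in?
    citation_required: bool       # must AI contributions be explicitly cited?

AIAS = [
    AIASLevel(1, "No AI", False, False),
    AIASLevel(2, "AI-assisted idea generation & structuring", False, False),
    AIASLevel(3, "AI-assisted editing", False, False),   # original non-AI draft in an appendix
    AIASLevel(4, "AI task completion, human evaluation", True, True),
    AIASLevel(5, "Full AI", True, False),                # AI use need not be itemized
]

def ai_content_allowed(level: int) -> bool:
    # Look up whether newly AI-generated content may appear in the final submission.
    return AIAS[level - 1].new_ai_content_allowed

print(ai_content_allowed(3))  # False: AI editing is allowed, but no new AI content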
Table 2. HOWEST AI competences matrix.
Understand AI Concepts: basic knowledge of (Gen)AI; accounts & costs; knowledge of buttons; prompt effectively; find help & explanation.
Act Ethically and Responsibly: integrate output; refer correctly; apply copyright & legislation; determine privacy; request diversity; screen impact on well-being; screen impact on sustainability.
Think Critically: make choices; recognize AI in a product; recognize AI sources; analyze output; recognize diversity; assess added value.
Apply in Own Discipline: see opportunities; optimize work processes; choose the appropriate tool; use tools efficiently; be open to LLL (lifelong learning).
Problem-Solve with AI: ask for a step-by-step plan; brainstorm & ask questions; give feedback to the AI model; ask for feedback; convert what is learnt into actions.
Table 3. General table of contents of an HEI’s AI framework.
Subject | Possibility or sub-item | Description
Introduction | | The way the framework was designed, co-created and decided, with dates
University decision | Allow use with conditions | The university permits the use of AI under conditions, such as appropriate citation
University decision | Ban use | The university prohibits the use of AI
University decision | Instructor decides | The university allows the use of AI depending on the instructor’s decision
Instructor decision | Prohibition by default | The use of AI is generally not allowed unless explicitly permitted by the instructor
Instructor decision | Permissibility by default | The use of AI is generally allowed unless explicitly prohibited by the instructor
Instructor decision | Neutral | The university relies on the instructor’s decision without a specific stance
Education purpose | Plagiarism prevention | To prevent students from directly copying texts generated by AI
Education purpose | Authorship and attribution | To require acknowledgement of AI-generated content in student academic assignments
Education purpose | Limitations | To address limitations, including biased, inaccurate, unreliable, or falsely cited information generated by AI
Education purpose | Learning outcomes and competencies | The university formulates specific AI competencies and links them with existing and new learning outcomes
Education purpose | Assessment | The university addresses points of attention and/or new forms of assessment arising from the use of AI
Research purpose | Intellectual property | To highlight the importance of acknowledging AI-generated content in professional research settings
Research purpose | Data privacy and security | To address the confidentiality and security of data when using AI in professional research
Research purpose | Integrity | The university has (no) specific rules on academic integrity
Research purpose | AI research competencies | The university explicitly addresses AI competencies that can be used in research
Faculty, staff and management/leadership | AI competencies | The university explicitly states which new and future AI competencies (if any) are required of faculty, staff and management/leadership, and the tools used to achieve them
Ethical dimension | (No) ethical prohibitions | The university does (not) explicitly prohibit particular AI uses and/or applications for specific ethical reasons
Ethical dimension | Ethical concerns | The university only voices ethical concerns about particular AI uses and/or applications and refers them to an ethical body for case-by-case decision-making
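A framework like the one in Table 3 can also be captured in machine-readable form so that course tooling can query it consistently. The sketch below is purely hypothetical: every key, value, and the resolution rule in course_ai_allowed are assumptions of this example rather than a standard schema.

# One hypothetical encoding of an institutional AI framework (cf. Table 3).
ai_policy = {
    "university_decision": "instructor_decides",    # or "allow_with_conditions" / "ban"
    "instructor_default": "prohibition_by_default", # or "permissibility_by_default" / "neutral"
    "education": {
        "plagiarism_prevention": True,
        "authorship_attribution_required": True,
        "limitations_addressed": ["bias", "inaccuracy", "unreliability", "false citations"],
    },
    "research": {
        "intellectual_property": True,
        "data_privacy_and_security": True,
    },
    "ethics": {
        "explicit_prohibitions": [],               # uses banned outright, if any
        "concerns_referred_to_ethics_body": True,  # case-by-case decisions otherwise
    },
}

def course_ai_allowed(policy, instructor_permits=None):
    # Resolve whether AI use is allowed in a course under this sketch's rules:
    # a university-wide ban wins, an explicit instructor decision comes next,
    # and otherwise the institutional default applies.
    if policy["university_decision"] == "ban":
        return False
    if instructor_permits is not None:
        return instructor_permits
    return policy["instructor_default"] == "permissibility_by_default"

print(course_ai_allowed(ai_policy))        # False under prohibition-by-default
print(course_ai_allowed(ai_policy, True))  # True once the instructor permits it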
Table 4. Gallup’s 7 expectations to focus on behaviors.
PEOPLE
1. Build relationships: establish connections with others to share ideas, achieve goals and build trust.
2. Develop people: help them become more effective by setting clear expectations, encouraging and coaching.
PURPOSE
3. Inspire others through positivity, vision, confidence and recognition.
4. Communicate clearly by sharing information concisely and with purpose, and by being open to listening.
DECISIONS
5. Lead change and efforts to adapt work so that they align with the new vision.
6. Think critically about information and seek to solve problems.
PERFORMANCE
7. Create accountability by holding yourself and others responsible for performance (empowering).
Table 5. Gallup’s framework to prepare an AI-ready culture.
STRATEGY
1. Is there a clear vision for how AI will help the organization achieve its goals?
2. Is the workforce optimistic about the impact of AI on individual, team and organizational performance? In other words, organize a survey.
3. Does the organization have the necessary organizational agility to adapt the vision as it increases its deployment of AI tools and applications?
SKILLS
4. Do employees know how to use AI and AI tools?
5. Is a robust learning strategy implemented to ensure that the organization continually tests, adapts and evolves its vision for AI technologies and their deployment?
6. Is an effective feedback loop foreseen for testing and learning as AI adoption grows?
SECURITY
7. Do all, or at least the majority of, employees understand your organization’s AI policies and guidelines?
8. Are potential limits and barriers to AI adoption being anticipated and planned for?
9. What assumptions have your organization’s security measures been based on?
10. Is the objective merely to try to control AI, or is it to unleash AI’s full potential for the organization?