Article

Will AI Replace Us? Changing the University Teacher Role

by Walery Okulicz-Kozaryn 1,*, Artem Artyukhov 2,3,4 and Nadiia Artyukhova 3,4

1 Faculty of Social Sciences and Humanities, Humanitas University, 41-200 Sosnowiec, Poland
2 Institute of Public Administration and Business, WSEI University, 20-209 Lublin, Poland
3 Faculty of Commerce, Bratislava University of Economics and Business, 852-35 Bratislava, Slovakia
4 Academic and Research Institute of Business, Economics and Management, Sumy State University, 40-007 Sumy, Ukraine
* Author to whom correspondence should be addressed.
Societies 2026, 16(1), 32; https://doi.org/10.3390/soc16010032
Submission received: 18 November 2025 / Revised: 11 January 2026 / Accepted: 14 January 2026 / Published: 16 January 2026
(This article belongs to the Special Issue Technology and Social Change in the Digital Age)

Abstract

This study examines how Artificial Intelligence (AI) is reshaping the role of university teachers and transforming the foundations of academic work in the digital age. Building on the Dynamic Capabilities Theory (sensing–seizing–transforming), the article proposes a theoretical reframing of university teachers’ perceptions of AI. This approach allows us to bridge micro-level emotions with meso-level HR policies and macro-level sustainability goals (SDGs 4, 8, and 9). The empirical foundation includes a survey of 453 Ukrainian university teachers (2023–2025), analyzed with descriptive statistics and supplemented by a bibliometric analysis of 26,425 Scopus-indexed documents. The results indicate that teachers do not anticipate a large-scale replacement by AI within the next five years. However, their fear of losing control over AI technologies is stronger than the fear of job displacement. This divergence, interpreted through the lens of dynamic capabilities, reveals weak sensing signals regarding professional replacement but stronger signals requiring managerial seizing and institutional transformation. The bibliometric analysis further demonstrates a theoretical evolution of the university teacher’s role: from a technological adopter (2021–2022) to a mediator of ethics and integrity (2023–2024), and, finally, to a designer and architect of AI-enhanced learning environments (2025). The study contributes to theory by extending the application of Dynamic Capabilities Theory to higher education governance and by demonstrating that teachers’ perceptions of AI serve as indicators of institutional resilience. Based on Dynamic Capabilities Theory, the managerial recommendations are divided into three levels: government, institutional, and scientific-didactic (academic).

1. Introduction

This study examines whether the role of university teachers will change under the influence of Artificial Intelligence. Here, the authors consider an extreme scenario: will AI replace us? The study’s premise was the potential conflict between SDG 8 (decent work and economic growth [1]) and SDGs 9 (industry, innovation, and infrastructure [2,3]) and 4 (quality education [4,5]) in the context of the digital era.
AI technology is rapidly transforming a crucial subsystem of society, the labor market [6,7,8,9]. Employers and employees face challenges that extend far beyond the implementation of new technology; they are redefining professional roles, redistributing responsibilities, and creating new foundations for the sustainability of business systems [7,10,11]. Where digitalization was previously perceived as an auxiliary process, it is now becoming a factor that determines the competitiveness and viability of enterprises [10,12,13].
At the center of these changes is the university teacher [13,14,15,16,17]. On the one hand, it is the university teacher who brings knowledge, develops critical thinking, and supports academic culture. On the other hand, the question is increasingly being raised: if algorithms are capable of generating texts, checking papers, and designing courses [13,17,18,19], will the teaching profession be displaced by AI tools? This question goes beyond individual fears. It reflects a profound tension between technological pressures and limited human resources. It also raises questions about the sustainability of the existing social academic contract [19,20,21].
Our study aims to examine how university teachers perceive the threat of their displacement by AI and how this perception relates to the management tasks of universities and other stakeholders in the context of digital transformation.
The authors approached this question through the lens of management science. Dynamic Capabilities Theory (DCT [22,23,24]) served as the theoretical framework for the study. This theory allows us to consider the situation more broadly than simply as a threat to university teachers’ employment. University leaders must be able to recognize risks and opportunities (sensing) and make decisions to maximize benefits and minimize costs (seizing). They must also be able to adapt and transform management practices, assign roles, and create new staff support mechanisms (transforming). Within this theory, university teachers’ perceptions of AI tools become less a psychological marker and more an indicator of organizational adaptability and resilience.
The empirical basis of the study is a survey of 453 academic teachers at Ukrainian universities, which enables an assessment of expectations and fears. The results offer management guidance on maintaining professional resilience, reducing staff anxiety, developing effective HR strategies, and ensuring the long-term competitiveness and sustainable development of higher education. Bibliometric analysis was used to qualitatively assess the changing roles of university teachers from 2021 to 2025.
  • Research hypothesis 1: University teachers perceive an imbalance between technological requirements and available professional resources, which is positively associated with the expectation that AI tools will replace faculty within the next five years. This expectation signals the need for universities to identify risks early and develop staff support strategies.
  • Research hypothesis 2: University teachers fear that the implementation and use of AI will spiral out of control within the next five years. This fear may be related to the expectation of a breach of the unspoken professional contract. Such fears can reduce faculty engagement and necessitate targeted management decisions to foster trust and psychological resilience.
  • Research hypothesis 3: The expectation that AI tools will replace university teachers is an important, but not the primary, source of their fears about losing control over AI technologies. University leaders need to transform their HR policies, addressing the sources of these fears and developing new adaptation mechanisms.
It is important to emphasize that the presented hypotheses are formulated descriptively. The single-item measures used reflect only individual manifestations of perception and do not allow for testing multidimensional constructs such as “professional contract” or “resource imbalance”. Consequently, the hypotheses are viewed as empirical expectations rather than elements of rigorous theoretical testing.
This study contributes to the scholarly debate on the digital transformation of the labor market, using higher education as an example, in three key ways. First, it extends the application of Dynamic Capabilities Theory [22,23,25] to university management. By analyzing university teachers’ expectations and fears, the study demonstrates how perceptions of AI become an indicator of universities’ ability to sense, seize, and transform. Second, the study reveals that the fear of losing control over AI is statistically significantly stronger than the expectation of replacing university teachers. This outcome shifts the focus of management and transformation: it is not only about preserving jobs but also about the need to strengthen trust and institutional predictability in university personnel policies. Third, the findings have practical implications for university leaders. They enable the development of HR strategies aimed at reducing anxiety, developing digital competencies, and supporting the professional resilience of university teachers. Thus, the article connects empirical analysis with management practices that promote the long-term sustainability of academic personnel management in higher education.

2. Literature Review

2.1. AI and University Teacher Management: Opportunities and Challenges for SDGs 4, 8, and 9

The widespread adoption of AI tools in universities is not only a technological innovation (SDG 9 [2,3]). It is also a factor radically changing human resource management (SDG 8 [1]). University teachers perceive AI from two perspectives: as a tool for improving their effectiveness (SDGs 4 and 9 [2,3,4,5]) and as a threat to their professional identity (SDG 8 [1]). For sustainable higher education management in the context of university digitalization, leaders need to consider both perspectives.
This paper continues our previous study [26], in which we demonstrated that AI has changed the structure of higher education: student activity has brought an uninvited assistant, Artificial Intelligence, into the classroom. In the current study, we ask: could AI replace university teachers—that is, us?
The implementation and application of AI tools in universities offer the following opportunities (SDGs 4 and 9):
  • freeing university teachers from routine tasks [14,15,27]. This activity enables them to focus on mentoring, academic supervision, and developing soft skills. As a result, the quality of higher education services improves;
  • new formats of “human + AI” interaction in education [15,19,28,29,30]. This phenomenon leads to the emergence of new didactic tools. As a result, academics will need to develop didactic theory [31] grounded in these new AI tools;
  • a tool for strengthening the competitiveness of universities [14,32,33,34,35,36];
  • the potential for forming more sustainable human resource strategies through the development of digital competencies [14,35].
The rapid implementation and application of generative AI tools in universities pose the following challenges (SDG 8):
  • the threat of replacing faculty roles [17,19,21,26,37,38];
  • violation of the unspoken “professional contract” (autonomy, stability, predictability) [14,19,20,21,36,39,40];
  • increased anxiety and mistrust, which increases the risk of decreased engagement and innovative activity [40,41,42,43,44,45];
  • the risk of staff instability and attrition [14,36,39,46,47,48,49].
The opportunities and challenges associated with the widespread implementation of GenAI directly impact the management of university teachers. Balancing opportunities and challenges requires university leaders to make systemic HR decisions and to develop long-term HR strategies. Therefore, research into university teachers’ perceptions of AI is essential for the sustainability of university teacher management and forms the basis for further analysis (testing hypotheses H1–H3).

2.2. Transformation of Higher Education Under the AI’s Influence

While broader research on digital governance and innovation provides useful context, this study draws primarily on the literature on AI adoption in higher education and faculty perceptions of technology. This literature set allows us to clarify the role of trust, readiness, and professional identity in the digital transformation of universities.
The transformation of education under the influence of digitalization and AI has become a central topic in modern research, linking governance, innovation, and sustainable development. The allocation of education expenditures significantly determines the digital readiness of nations, shaping regional equality and innovation capacity [50]. Aligning with global frameworks, investments in quality education remain the backbone of post-pandemic recovery and sustainable human development [51]. In Ukraine, the evolution of dual higher education has strengthened connections between academic learning and labor market needs, reflecting the adaptation of educational systems to the challenges of war and globalization [52].
Education’s role as a driver of socioeconomic development is increasingly viewed through the lens of governance and sustainability. Studies of financial institutions highlight that effective governance practices promote sustainability, transparency, and accountability, providing valuable lessons for educational management [53]. The management of public expenditures in the Baltic States offers additional insights into the determinants of fiscal efficiency and innovation-oriented governance [54]. Similarly, research into the Ukrainian educational and scientific potential after the full-scale invasion highlights the need for international cooperation and policy alignment to sustain education and innovation capacities in crisis settings [55].
At the institutional level, leadership and organizational climate are essential for educational transformation. The organizational culture within higher education institutions determines openness, communication, and innovation potential, with a constructive environment reducing organizational silence and resistance to change [56]. Teachers’ professional standards and behavioral intentions strongly influence their willingness to adopt new technologies, thereby facilitating digital transitions in schools [57]. Inclusivity and diversity also contribute to innovation and performance, as the mix of age, ethnicity, and education in academic workforces creates conditions for a more adaptive and equitable system [58].
Digitalization itself has become a defining leadership trend in education, integrating innovation transfer and technological adaptation into institutional strategies [59]. The interplay between entrepreneurship and digitalization further supports sustainable development through mutual reinforcement of innovation ecosystems [60]. The creative industries, as illustrated through university-based information institutions, demonstrate how the strategic management of innovation and creativity can drive regional competitiveness and social impact [61]. Meanwhile, fostering gender balance and aligning education with the needs of Generation Z contribute to building a more sustainable and responsive workforce [62].
The integration of AI into education and science has revolutionized learning environments, informed decision-making, and enhanced governance processes. AI is increasingly used to resolve educational problems and to improve learning quality through data-driven insights [63]. Blended learning systems and open learning forms have demonstrated their efficiency in promoting flexible, lifelong learning models [64]. Bibliometric evidence confirms the growing academic attention to AI and ChatGPT tools in higher education, signaling the expansion of research into the economic and social effects of these technologies [65]. The application of AI also raises new governance and ethical challenges. The use of AI in e-recruitment for education and science has enhanced transparency, administrative efficiency, and fairness, contributing to the prevention of corruption and bias in decision-making [66]. The creative economy likewise benefits from AI-based tools, yet remains challenged by issues of trust, ethics, and human-AI collaboration [67]. Public attitudes towards AI range from optimism to skepticism, revealing divergent expectations about its future societal impact and ethical boundaries [68]. Understanding these attitudes is crucial for shaping AI policies that strike a balance between technological progress and human-centered values.
The literature demonstrates that education, leadership, governance, and AI are deeply intertwined in shaping the future of knowledge-based societies. Digitalization and innovation demand not only technological capacity but also ethical governance, inclusivity, and adaptability. Strengthening the integration between educational policy, AI ethics, and socioeconomic sustainability will be essential for ensuring that technological transformation supports human-centered and resilient development.

2.3. Theoretical Framework

The study is based on the Dynamic Capabilities Theory [22,23,25] developed by David Teece, Gary Pisano, and Amy Shuen [22]. According to this theory, organizations’ sustainable competitive advantage is formed not by the presence of static resources, but by their ability to: (a) recognize opportunities and threats in the external environment (sensing), (b) exploit them through timely management decisions (seizing), and (c) transform internal processes and structures to adapt to change (transforming).
Contemporary research emphasizes that faculty readiness to use AI, their level of AI literacy, as well as trust and explainability factors, have a critical impact on the adoption of technologies in the educational environment [57,61,66,68]. Studies [69,70,71] demonstrate that emotional safety, professional identity, and perception of technological risks form the basis for the development of sensing and seizing capabilities in universities. Organizational learning in business structures also positively influences developing dynamic capabilities (sensing, seizing, and reconfiguring) [72]. The paper [73] examines the transition to SDGs from the perspective of the Dynamic Capabilities Theory. The results reveal sensing capabilities caused by knowledge acquisition, environmentally conscious thinking, and market assessment. Reconfiguration of capabilities encompasses the restructuring of the workforce and supply chain. However, as one recent study showed, tangible changes in business models under the influence of AI have not yet occurred [74].
Regarding higher education, the implementation of AI tools simultaneously presents an opportunity for universities (SDGs 4 and 9 [2,3,4,5]) and a threat to university teachers (SDG 8 [1]). For universities, this means the need to: (a) recognize the risks and potential of using AI in the educational process, (b) develop HR strategies aimed at supporting professional resilience and reducing anxiety among university teachers, and (c) transform university teacher management, redistribute roles and functions, and create new mechanisms to support digital adaptation.
This approach enables university teachers’ expectations and fears to be understood not only as individual psychological reactions but also as organizational signals that indicate the adaptive capacity of universities. In this sense, university teachers’ perceptions of AI can be viewed as an indicator of the effectiveness of management decisions and HR policies in the context of digital transformation and the achievement of SDGs 4, 8, and 9.

3. Materials and Methods

3.1. General Description

The methodological basis of the study combines quantitative and qualitative approaches. The results of a questionnaire survey of 453 Ukrainian university teachers served as the empirical basis. Furthermore, the authors use several near-synonymous terms, including “university teacher”, “faculty”, “academic”, “teacher”, and “educator”, interchangeably to avoid monotony. All these terms denote a university teacher who is not only a transmitter of existing and new (personally acquired) knowledge, but also a provider of growth mindset principles and soft skills.
The choice of approach reflects the task of identifying stable patterns in how university teachers perceive the threat that GenAI will substitute for the academic profession.
The survey consisted of two single-item questions. The exact wording was as follows:
  • Will AI replace university teachers in 5 years?
  • Do you feel afraid that the use of AI will get out of control within the next 5 years?
Both questions were answered on a 5-point Likert scale:
  • (1) definitely no;
  • (2) rather no;
  • (3) difficult to say;
  • (4) rather yes;
  • (5) definitely yes.
The scale was identical for all five respondent groups.
It should be noted that the indicators used are single-item measures. In this format, responses reflect primarily the respondents’ emotional and cognitive reactions, rather than complex psychological and professional constructs. Therefore, the study results are descriptive in nature and do not claim to be theoretically or conceptually comprehensive. The results were interpreted in light of the selected theoretical framework.
The following specific methods were used to achieve the research goals.
A questionnaire was the primary method of data collection. A structured questionnaire was developed, including demographic questions and blocks assessing the degree of expectations and concerns about the introduction of AI tools [75]. The answers were recorded in the Likert-scale format [75,76]. During processing, they were assigned numerical values from 0 to 4.
Two single-item indicators capture limited aspects of perception. They do not operationalize complex conceptual models and do not allow for their empirical testing. In this study, statistical methods are used solely to assess differences in mean values and are descriptive in nature.
Statistical data processing: descriptive statistics methods (expected value, standard deviation [75,76]) were employed. It should be noted that when using single-item indicators and groups of different compositions, statistical tests (t or z) can only be used descriptively. In this study, they are used as approximate measures of differences in mean values, not as methods for rigorous hypothesis testing. Accordingly, the results of the analysis should not be interpreted as statistical confirmation of the theoretical models.
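As an illustration of the descriptive indicators used (the expected value and standard deviation of the coded Likert responses), the calculation can be sketched as follows. The response counts in this example are hypothetical, not the study’s data; only the coding scheme (0 to 4) follows the article.

```python
import statistics

# Hypothetical coded Likert responses (0 = definitely no ... 4 = definitely yes);
# the counts below are illustrative only, not the study's actual distribution.
coded = [0] * 120 + [1] * 150 + [2] * 100 + [3] * 60 + [4] * 23  # n = 453

mean = statistics.mean(coded)      # expected value M(x)
stdev = statistics.pstdev(coded)   # standard deviation, population form

print(f"n = {len(coded)}, M(x) = {mean:.2f}, delta_x = {stdev:.2f}")
```

A mean below the neutral value of 2.0, as in this sketch, corresponds to the skeptical tendency the article reports for Q1.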
The bibliometric analysis was carried out in accordance with the PRISMA diagram (Figure 1). Analysis tool—VOSviewer, version 1.6.19, ©2009–2023 Nees Jan van Eck and Ludo Waltman.
The analysis was conducted using both manual and automated calculations, employing reliable and cost-effective methods. Data processing adhered to the principles of reproducibility and transparency in quantitative research.
Keyword-cleaning rules:
  • Keyword normalization.
  • Singular–plural harmonization.
  • Spelling standardization.
  • Acronym and full-term unification.
  • Synonym merging.
  • Removal of overly generic or non-informative keywords.
  • Removal of domain-irrelevant keywords.
  • Multi-word keyword consolidation.
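A minimal sketch of how such cleaning rules can be applied before loading keywords into VOSviewer, assuming a hand-made synonym map and stop-list; the specific entries below are hypothetical, not the rules actually used in the study.

```python
# Illustrative synonym map and stop-list; not the study's actual thesaurus.
SYNONYMS = {
    "artificial intelligence": "ai",          # acronym and full-term unification
    "universities": "university",             # singular-plural harmonization
    "higher-education": "higher education",   # spelling standardization
}
STOPLIST = {"article", "study", "analysis"}   # overly generic keywords

def clean_keywords(raw_keywords):
    """Normalize, merge synonyms, and drop generic terms from a keyword list."""
    cleaned = set()
    for kw in raw_keywords:
        kw = kw.strip().lower()               # keyword normalization
        kw = SYNONYMS.get(kw, kw)             # synonym/acronym merging
        if kw and kw not in STOPLIST:         # remove generic keywords
            cleaned.add(kw)
    return sorted(cleaned)

print(clean_keywords(["Artificial Intelligence", "AI", "Universities", "Article"]))
# → ['ai', 'university']
```

In practice, VOSviewer supports the same kind of merging via a thesaurus file; the script above only illustrates the listed rules.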

3.2. Respondents

The respondents were the faculty from Ukrainian universities. All respondents were located in Ukraine at the time of the survey. All of them were participants of the Polish–Ukrainian internship “Fundraising and foundations of project activities in educational institutions: European experience” (180 h) and represented all regions of Ukraine. It is important to note that this is a convenience sample, drawn from participants in a professional development program. Accordingly, the results are not representative of all Ukrainian university faculty, and the study’s findings should be interpreted as reflecting the perceptions of this particular professional group. At the same time, professional development is a mandatory element of university teachers’ professional activities. Therefore, participation in an educational program is typical for most academics. This fact allows us to consider the obtained data as representative of one common professional segment, although it does not claim to cover the entire teaching population.
The survey was conducted on the first day of the internship, so the training itself did not affect the results. The five groups do not represent repeated measurements of the same individuals. Each group is an independent cross-sectional sample, formed from a different internship cohort. Thus, the groups are independent, not longitudinal. The general characteristics of the respondents are presented in Table 1.
Table 1 shows the uniform distribution of respondents across the time points under study. The total number of participants is 453, which provides a sufficient sample size for the reliability of the statistical analysis. Each group of respondents includes both men and women, which reduces the risk of gender-related bias. The spread of numbers between groups is within acceptable limits, allowing intergroup comparisons without a significant risk of structural disproportion. Table 2 presents the characteristics of respondents according to the age group boundaries used in European standards.
Table 2 confirms that the age distribution of respondents corresponds to the typical structure of university teachers. The largest share comprises university teachers in the 35–44 and 45–54 age groups, which reflects their active participation in educational and scientific activities. The minimum number of respondents in the 15–24 age group is expected, given the profession’s age specificity. The presence of representatives of all age categories supports the sample’s coverage of the profession’s age structure. The sample enables us to consider age as a potential factor in interpreting differences in the perception of AI, although this was not part of the study’s objectives.

3.3. Statistics

Due to the nature of the data and the study’s design, statistical analysis is limited to descriptive comparisons of mean values for two single-item indicators (Q1 and Q2). This approach is justified because each of the five respondent groups represents an independent cross-sectional sample, rather than repeated measurements of the same participants.
The hypotheses presented in the study serve as analytical guidelines for interpreting the empirical data. Given the limitations of the measurement instruments and the lack of multivariate scales, formal statistical testing of the hypotheses was not conducted. Statistical analysis focused on the direction and relative magnitude of differences between group means, ensuring the validity of interpretations within the study design. Statistical calculations were performed using standard methods described in [75,76].
The following were used for the descriptive analysis:
Mean values and standard deviations across groups;
Comparison of mean values for Q1 and Q2;
Visualization of differences using a two-dimensional graph (Section 4.1), allowing us to assess the structural asymmetry between expectations of teacher replacement and concerns about loss of control over AI.
The respondents’ answers were assigned the following values:
  • Definitely yes = 4;
  • Rather yes = 3;
  • Hard to say = 2;
  • Rather not = 1;
  • Definitely no = 0.
The neutral value in the survey was the response “Hard to say”. This answer characterizes undecided academic staff. This answer was assigned a value of 2. Therefore, the value μ0 = 2.0 was adopted when verifying statistical hypotheses.
The answers “Rather not” and “Definitely no” were assigned values of 1 and 0, respectively. These answers correspond to the acceptance of the null hypothesis, that is, the absence of expectations of replacing academic staff with AI tools and concerns that the use of AI will get out of control within the next 5 years.
The answers “Rather yes” and “Definitely yes” were assigned values of 3 and 4, respectively. These answers correspond to the acceptance of alternative hypotheses, that is, expectations of replacing academic staff with AI tools and concerns that the use of AI will get out of control within the next five years.
The following conditions were adopted for statistics [76,77]. Under these conditions, a sufficient sample size is 423 persons [77]. Thus, the number of respondents in the study meets the requirements of statistical standards.
This statistical approach ensures the necessary transparency of analysis and is consistent with recommendations for interpreting data based on single-element indicators and independent samples.

4. Results

4.1. Calculation and Visualization of Statistical Indicators

Table 3 presents the distribution of respondents’ answers to the first research question: Will AI replace university teachers within the next 5 years?
Table 3 presents the distribution of responses to the question about the expectation of replacing university teachers with AI within the next five years. The most frequent responses, “Rather not” and “Definitely no”, indicate a dominant skeptical attitude towards the full automation of teaching activities due to AI. The response “Hard to say” also accounts for a significant share, reflecting the uncertainty and doubts of most respondents. The relatively low number of positive responses (“Rather yes” and “Definitely yes”) confirms that the fear of replacement in the foreseeable future is not the dominant mood among teachers. However, the very fact of such expectations requires a response when forming digital transformation strategies in universities.
Table 4 shows the distribution of respondents’ responses to the second research question: Do you feel fear that the use of AI will get out of control within the next 5 years?
Table 4 shows the distribution of responses to the question about the fear that the use of AI will get out of control within the next five years. In contrast to Table 3, the share of positive responses (“Rather yes” and “Definitely yes”) is significantly higher here. This is especially noticeable in group 1, where these responses even outnumber the negative ones. The answer “Hard to say” also takes up a significant share, recording a high level of uncertainty of perception. This fact indicates that the anxiety of university teachers is more related to the general unpredictability of technology than to the direct threat of replacing teachers with AI tools. The combined data (Table 3 and Table 4) confirm that the fear of losing control over AI is perceived more acutely than the fear of losing one’s job. Therefore, the fear of losing control may be a key source of emotional stress in the academic environment. Table 5 presents the results of calculating the statistical indicators M(x) and δx in response to the first research question: Will AI replace university teachers within the next 5 years?
Table 5 shows statistical indicators (expected value, standard deviation) for answering the question about the expectation of replacing teachers with AI. All groups exhibit a consistently low expected value (ranging from 1.15 to 1.39), which confirms the dominance of a skeptical or neutral position among respondents. No group reaches the neutral boundary of 2.0. This outcome allows us to conclude that there are no pronounced expectations of replacing teachers with AI tools in the foreseeable future. Standard deviations from 0.92 to 1.11 indicate an acceptable range of opinions without significant polarization. The total expected value (1.24) emphasizes the trend’s stability for the entire sample.
Table 6 shows the results of calculating the statistical indicators M(x) and δx for answering the second research question: Do you fear AI use will get out of control within the next 5 years?
Table 6 contains statistical indicators for the answers to the question about the fear that AI will get out of control within the next five years. In contrast to the expectations of substitution (Table 5), the expected values for all groups are closer to the neutral mark of 2.0. In group 1, the expected value even exceeds it (2.05). This value indicates a higher degree of concern among university teachers about the unpredictability and lack of control over the consequences of AI implementation. The total expected value for the sample (1.81) confirms that concerns about losing control over AI technologies are expressed more strongly than the expectation of substitution. Standard deviations in the range from 1.00 to 1.10 indicate a relatively uniform distribution of opinions and the presence of a stable emotional background among respondents. Figure 2 presents a graphical interpretation of the answers to the first and second research questions. The point numbers correspond to the group numbers, based on the measurement times from 2023 to 2025 (Table 1). The dotted line is the X = Y straight line.
Figure 2 presents the relationship between the average replacement expectation (x-axis) and the average fear of loss of control (y-axis) for the five respondent groups. All data points lie above the diagonal line (y = x), indicating that fears of loss of control are consistently higher than expectations of direct replacement. This descriptive pattern reveals a notable asymmetry in the perceptions of university teachers: uncertainty about controllability appears to be a more powerful psychological factor than the expectation of replacement itself. Moreover, the difference between the fear of losing control and the expectation of replacement was largest at the first measurement point, decreased toward group 3, and then increased again toward group 5.
Four of the five data points are located below the 2.0 value. On the adopted school-style rating scale (0; 1; 2; 3; 4; 5), this means that both indicators fall below the neutral line. However, this visually observed difference between expectations and fears may be due to random deviations; an exact answer would require z-statistics.
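The check described above can be sketched with a two-sample z statistic. This is an illustration only: the means, standard deviations, and group size below are stand-in values consistent with the sample-level figures reported in the text (M = 1.81 for Q2, M = 1.24 for Q1, n = 453), not a reanalysis of the actual data, and the two measures are treated as independent samples for simplicity.

```python
import math

def z_stat(m1, s1, n1, m2, s2, n2):
    """Two-sample z statistic for the difference between independent means."""
    return (m1 - m2) / math.sqrt(s1 ** 2 / n1 + s2 ** 2 / n2)

# Stand-in values: overall fear of loss of control (Q2) vs. replacement
# expectation (Q1) for a sample of 453 respondents.
z = z_stat(1.81, 1.05, 453, 1.24, 1.00, 453)

# |z| > 1.96 would indicate a difference unlikely to arise from random
# deviations at the conventional 5% significance level.
```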
Before linking these empirical findings to broader theoretical implications, it is essential to emphasize that the results reflect differences between independent groups and do not represent longitudinal trends.

4.2. Descriptive Analysis of Differences in Mean Values

To assess differences in perceptions between five independent groups of respondents, a descriptive analysis of the means for two single-item measures was used. This approach is suitable for the nature of the data and does not require formal statistical hypothesis testing. This analysis examines the direction and relative magnitude of differences between groups, reflecting variability in perceptions of the potential replacement of faculty by AI and concerns about losing control over technology. The sample reflects the perceptions of respondents voluntarily participating in the educational program, which may influence their level of interest in AI, readiness for change, and professional expectations.
Hypothesis 1.
University teachers expect a high probability of AI replacing their jobs.
A descriptive analysis of the data in Table 5 reveals that, in all five groups, the mean values for question Q1 (“Will AI replace university teachers in 5 years?”) are well below the neutral level of 2.0, ranging from 1.15 to 1.39. These values correspond to the responses “rather no” and “definitely no” on the Likert scale. No group approaches the neutral point, reflecting respondents’ persistent distrust of the likelihood that AI tools will replace university teachers in the foreseeable future. The overall sample mean of 1.24 confirms the consistency of perceptions across all study participants.
Standard deviations (0.92–1.11) indicate limited variability in opinions, suggesting a relative consensus among teachers on this issue. Regardless of the time of data collection and group size, no trend toward increasing expectations of replacement is observed.
Thus, the descriptive differences in the means do not support Hypothesis 1: teachers do not hold strong expectations that AI will replace their professional activities within the next five years.
Hypothesis 2.
University teachers fear that the implementation and use of AI will spiral out of control within the next five years.
A descriptive analysis of the data in Table 6 reveals that the mean values for question Q2 (“Do you feel afraid that the use of AI will get out of control within the next 5 years?”) are systematically higher than the mean values for question Q1 in the corresponding groups. Across the five groups, Q2 values range from 1.62 to 2.05, while Q1 values range from 1.15 to 1.39. The overall mean level of fear of loss of control is M = 1.81, while the overall mean of replacement expectations is M = 1.24. This result suggests that, for the entire sample, concerns about losing control are more pronounced than expectations of university teachers being directly replaced by AI.
It is important to note that in four of the five groups, the mean value for fear of loss of control remains below the neutral level of 2.0, and in the first group (M ≈ 2.05), it only slightly exceeds this level. Thus, we can speak of moderately expressed concerns that do not develop into a dominant state of panic or total distrust. Standard deviations (from 1.00 to 1.10) indicate a relatively homogeneous distribution of responses and the absence of sharp polarization of opinions.
Overall, the descriptive differences in mean values partially support Hypothesis 2: fear of losing control over AI does exist, but in most groups it does not reach the level of a clearly dominant emotional state and remains between “rather not” and “neutral”.
Hypothesis 3.
The expectation that AI tools will replace university teachers is an important, but not the primary, source of their fears about losing control over AI technologies.
A descriptive analysis of mean values for questions Q1 and Q2 reveals a consistent asymmetry between the two types of perception. In all five groups, mean values for fear of loss of control (Q2) are systematically higher than mean values for replacement expectations (Q1). For replacement expectations (Q1), the mean values fall within a narrow range of approximately 1.15–1.39, well below the neutral level of 2.0 and corresponding to responses of “rather not” and “definitely not”. For fear of loss of control (Q2), the mean values are higher, ranging from approximately 1.62 to 2.05, and in one group they slightly exceed the neutral level. These values show that the level of fear is consistently higher than the level of replacement expectations, and the gap between them persists across all groups.
This asymmetry is clearly illustrated in Figure 2, where the mean values of replacement expectations (Q1) are plotted on the x-axis, and the mean values of fear of loss of control (Q2) are plotted on the y-axis. All five data points, corresponding to the respondent groups, are located above the diagonal Y = X, indicating a consistent “excess” of fear of loss of control over expectations of replacement. The distance between the data points and the diagonal can be interpreted as a descriptive indicator of the portion of fear that is not explained solely by the expectation of AI tools replacing university teachers. It is essential to note that the differences between the data points in Figure 2 reflect the distinct characteristics of independent cross-sectional groups and do not indicate a temporal trend.
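The vertical distance from each point in Figure 2 to the y = x diagonal is simply the per-group gap Q2 − Q1. A minimal sketch, using illustrative group means chosen within the reported ranges (Q1: 1.15–1.39; Q2: 1.62–2.05) rather than the exact values from Tables 5 and 6:

```python
# Illustrative group means within the reported ranges; the exact values
# are in Tables 5 and 6 of the article.
q1 = [1.39, 1.25, 1.20, 1.15, 1.22]  # replacement expectation (x-axis)
q2 = [2.05, 1.80, 1.62, 1.70, 1.88]  # fear of loss of control (y-axis)

# Vertical distance of each point above the y = x diagonal in Figure 2
gaps = [round(f - e, 2) for e, f in zip(q1, q2)]
```

A strictly positive gap for every group reproduces the pattern of all five points lying above the diagonal.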
At the same time, expectations of replacement themselves remain consistently low and vary little between groups, while the level of fear exhibits more noticeable variability. This descriptive combination (low and relatively stable replacement expectations with higher levels of fear of loss of control) supports Hypothesis 3 in its empirical, descriptive sense: while replacement expectations are indeed important, they are not the only, and, judging by the data, not the primary, source of faculty anxiety about the future of AI. Additional sources of these anxieties go beyond the measurement of two single-item indicators and require further research using more complex scales and instruments.
The resulting asymmetry for Q1 and Q2 may also reflect differences between rational expectations and emotionally charged reactions to technological uncertainty. Replacement expectations are associated with probability assessments, while fear of loss of control is associated with risk perception, which is inherently more sensitive to uncertainty.
This systematic excess of fear of loss of control over replacement expectations is consistent with the logic of Dynamic Capabilities Theory. University faculty respond more actively to the risks of loss of control (sensing) than to predictions of possible automation (seizing), creating an asymmetric profile in their perceptions of technological change.

4.3. Evolution of the University Teacher’s Role in an AI-Driven Educational Environment

A total of 26,425 scientific articles were analyzed using the keywords “artificial intelligence” and “education”. The top 1% of keywords were selected for analysis.
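The keyword maps in Figures 3–5 rest on co-occurrence counts of author keywords across documents. A minimal sketch of that counting step, using toy records in place of the 26,425 Scopus documents (VOSviewer performs this counting, plus thresholding, normalization, and clustering, internally):

```python
from collections import Counter
from itertools import combinations

# Toy keyword lists standing in for Scopus records
records = [
    ["artificial intelligence", "education", "machine learning"],
    ["artificial intelligence", "chatgpt", "academic integrity"],
    ["artificial intelligence", "education", "teachers"],
]

co_occurrence = Counter()
for keywords in records:
    # Count each unordered keyword pair once per document
    for pair in combinations(sorted(set(keywords)), 2):
        co_occurrence[pair] += 1
```

Pairs with high counts, such as “artificial intelligence”–“education” here, become the strong links and central nodes of the resulting map.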
Between 2021 and 2025, the academic discourse on AI in education reveals a steady and multifaceted evolution in the understanding of the teacher’s role. The VOSviewer maps (Figure 3, Figure 4 and Figure 5) illustrate not only a technological and pedagogical transformation but also the emergence of economic and ethical dimensions linked to sustainable development.
In the first period, 2021–2022, research around AI in education remains primarily technological and infrastructure-oriented (Figure 3). The dominant clusters revolve around keywords such as “artificial intelligence”, “machine learning”, “internet of things”, “industry 4.0”, and “education”. The proximity of these terms to metadata, information use, and digital transformation indicates that AI was perceived as an emerging technological wave shaping how universities functioned. The term “teachers” is connected to teaching methods, online learning, and students, but its position is peripheral and descriptive rather than conceptual. At this stage, teachers are mainly seen as adopters of new technologies and facilitators of AI-based systems. Their focus is on implementation, experimenting with digital tools, adapting existing pedagogical models, and supporting students’ engagement with intelligent platforms (SDG 4). The research at this time emphasizes efficiency, automation, and technological access, closely tied to the economic aspects of innovation and productivity, an implicit connection to SDG 8, as digital competence becomes a crucial factor in labor market readiness. However, teachers themselves are not yet viewed as agents of economic or systemic transformation, but rather as implementers of externally developed solutions.
By 2023–2024, the research landscape had undergone significant changes (Figure 4). The maps illustrate the emergence of new terms, including “ChatGPT”, “generative artificial intelligence”, “chatbots”, “assessment”, and “academic integrity”. These terms are closely associated with students, curriculum, and learning systems, reflecting a pedagogical and ethical turn in the literature. Teachers now appear more centrally positioned, linked to topics such as digital transformation, personnel training, and education. This fact indicates an expanded understanding of the educator’s role as a mediator between human learning and machine intelligence. The discussion shifts from adoption to regulation and evaluation, as teachers take responsibility for shaping ethical practices and maintaining academic standards in AI-driven contexts. They are no longer merely applying tools; they are interpreting their impact on learning, integrity, and fairness. Economically, this period corresponds with universities recognizing the value of human expertise and judgment as part of sustainable innovation ecosystems. Teachers begin to guide students in developing AI literacy, a skill increasingly critical for future employment and decent work, aligning directly with SDG 8. Thus, educators become mediators not only of knowledge but of socioeconomic adaptation, helping students connect technological skills with employability, creativity, and ethical awareness.
By 2025, the third map reveals a mature and interconnected research landscape (Figure 5). The networks showcase teachers and higher education institutions in key roles, including decision-making, ethics, AI literacy, problem-solving, quality control, and educational innovation. These terms show that educators have moved from the margins to the center of the AI-education ecosystem. Teachers are now conceptualized as architects of the educational environment and professionals who design, adapt, and lead AI-enhanced learning systems. Their function extends beyond instruction; they engage in co-designing curricula, evaluating data-driven insights, and ensuring that AI supports inclusive and human-centered education. The proximity of economic keywords such as performance, training systems, and innovation reflects the growing recognition of education as a key driver of sustainable economic development. Through their role in developing digital and ethical competencies, teachers contribute to producing a workforce that is both technologically literate and socially responsible, thereby reinforcing the objectives of SDG 8 and SDG 4 simultaneously.
This evolution also marks a profound transformation in the teacher’s pedagogical identity. The traditional image of the teacher as a provider of information has given way to a new role: that of a mentor, guide, and designer of learning environments. In AI-supported ecosystems, information is abundant and instantly accessible; the teacher’s task is no longer to transmit knowledge, but to curate, contextualize, and humanize it. Teachers foster critical thinking, creativity, and collaboration skills, essential for navigating a complex, technology-driven economy. They model ethical reasoning and sustainable thinking, ensuring that AI technologies serve inclusive learning and human flourishing, rather than merely automating education or profiting from it.
Overall, the progression from 2021 to 2025 reveals a remarkable professional and conceptual shift (Table 7). Teachers evolve from technology adopters to mediators, and finally to mentors and architects of intelligent learning systems. Their role becomes increasingly strategic, connecting pedagogy, ethics, and economics within a broader framework of sustainable development. This trajectory aligns with the integration of education into the logic of the Sustainable Development Goals, where AI serves as a means to achieve equitable growth, innovation, and lifelong learning. In this emerging paradigm, teachers stand at the intersection of knowledge, ethics, and economy, ensuring that the future of AI in education remains both human-centered and socially sustainable.
The evolution illustrated in Table 7 demonstrates that the role of teachers is not diminishing in the era of AI but is instead undergoing a profound transformation. From initial adopters of technology, educators have become ethical mediators, mentors, and architects of AI-enhanced learning environments. This progression reflects a shift from merely delivering content toward designing holistic, inclusive, and human-centered educational experiences. The integration of AI does not replace the teacher’s intellectual or emotional contribution; instead, it redefines their purpose: placing them at the center of pedagogical design, ethical decision-making, and the cultivation of critical and creative thinking skills. In this way, teachers remain indispensable to achieving the Sustainable Development Goals, particularly SDG 4 on quality education and SDG 8 on decent work and economic growth.
However, for teachers to maintain this pivotal role, they must continuously adapt to technological advances and remain informed about the evolving landscape of AI in education. Lifelong professional development, AI literacy, and interdisciplinary collaboration are essential to ensure that educators can guide students in navigating digital complexity responsibly and effectively. Teachers who stay engaged with innovation become the guarantors of human values in a technology-driven environment, ensuring that AI supports (not supplants) the essence of teaching and learning. The future of education will thus depend not on replacing teachers with machines, but on empowering teachers to lead the intelligent, ethical, and sustainable transformation of learning.
It is important to note that the 2021–2025 visualization reflects the dynamics of changing academic discourse and research topics, rather than providing direct empirical evidence of the changing role of faculty. The “adopter → mediator → architect” model is presented as an interpretative tool based on a combination of bibliometric data and existing literature, rather than as a rigorous scientific conclusion.
Based on these descriptive distinctions, Section 5 discusses the managerial implications for three levels of management: government, institutional, and scientific-didactic (academic).

5. Discussion

The interpretations are based on raw response distributions and are not intended as tests of complex theoretical models. The results reflect teachers’ general attitudes rather than underlying psychological or professional constructs. Therefore, the findings should be understood as descriptive signals of perception, requiring further investigation using multivariate instruments.
The study’s results reveal a difference between two key perceptions held by university teachers. The expectation that university teachers will be replaced by AI tools (Q1) remains low across all groups. Meanwhile, the fear of losing control over AI technologies (Q2) is noticeably stronger and is evident across all respondent groups. This observation shifts the focus of the analysis: the threat to university sustainability is not the prospect of university teachers being forced out of the profession, but rather the perceived unpredictability and uncontrollability of technology. The nature of the data limits interpretation of the results, as single-item indicators do not fully capture professional or institutional constructs. Therefore, the findings should be understood as descriptive indicators of perceptions, rather than as confirmation of theoretical models.
In this study, the components of Dynamic Capabilities Theory (sensing, seizing, and transforming) are used as a theoretical framework to structure the managerial implications of the identified perceptions. The data are not intended to test the theory, but rather serve as a basis for interpreting possible trajectories of digital adaptation. Interpretation through the lens of Dynamic Capabilities Theory allows us to clarify the significance of these results. The low level of expectation that university teachers will be replaced (Q1) reflects weak signals (sensing [22]) about the risk of losing the profession. At the same time, a more pronounced fear of losing control over AI technologies (Q2) indicates the need for new management decisions (seizing [22]). The difference between the two types of perception confirms the need to transform (transforming [22]) personnel policies and institutional practices, with the aim of strengthening trust and creating sustainable conditions for the digital adaptation of higher education.
The findings add a managerial dimension to the description of differences in university teachers’ concerns. This study understands their perceptions not as individual attitudes or psychological reactions, but as descriptive signals of organizational sensing. These signals indicate the extent to which universities are institutionally prepared to adopt AI tools. The gap between low expectations of being replaced and more pronounced fears of losing control indicates not a fear of automation per se, but rather tension in managing digital transformation processes at universities.
At the same time, the interpretations based on the statistical data obtained are descriptive in nature and serve to identify the general direction of university teachers’ perceptions of AI. They do not claim to exhaustively explain all aspects of psychological and professional transformation, and further research using multivariate instruments is required. Therefore, the statistical results should be viewed as descriptive differences in perception distributions rather than strict statistical inferences. This limitation is consistent with the nature of the data and is taken into account when discussing the study’s findings.
For academic staff management, this means that university teachers’ emotional reactions should not be viewed as individual or random. They serve as systemic indicators that influence their engagement, innovative activity, and readiness for transformation (as noted in [22]). Ignoring these signals increases the risk of staff instability and may undermine the quality of educational services. Therefore, university leaders need to implement HR strategies that include digital adaptation programs, digital competency development, and measures to reduce university teachers’ anxiety [78,79].
A comparison of the obtained data with international studies confirms this general trend [80]. Global literature [6,36,41] has found that anxiety about AI is often associated with its unpredictability. The Ukrainian context [52,55,60] adds to this picture the factor of war recovery, which intensifies emotional reactions. At the same time, this context highlights the importance of targeted HR policies in supporting university resilience. The results of the bibliometric analysis confirm that university teachers’ perceptions are consistent with a global shift in academic discourse on AI and higher education. Between 2021 and 2025, a shift occurred from a technological understanding of the teacher’s role as facilitator to a conceptualization of them as mediators, mentors, and architects of the learning environment, ensuring the quality of educational services. The bibliometric analysis confirms that the key challenge of digital transformation is preserving the human dimension in education, the trust and professional identity of university teachers. These results reinforce the significance of the empirical data. Taken together, the empirical and bibliometric data aim to foster a digital pedagogical culture in which AI is viewed as a partner, rather than a competitor, for university teachers.
The study’s results enable the formulation of management recommendations aimed at achieving the UN Sustainable Development Goals: SDG 8 (decent work and economic growth), SDG 4 (quality education), and SDG 9 (innovation and infrastructure).
The study’s findings are relevant to SDG 8 (Decent Work and Economic Growth) because they reflect faculty perceptions of the risks associated with AI-driven transformations of knowledge work. Low expectations of replacement indicate a limited understanding of the threat posed by automation in the academic environment. However, concerns about loss of control point to the need to develop institutional mechanisms for supporting, retraining, and adapting faculty, which aligns with the objectives of sustainable employment development and preparing staff for new professional demands.
The observed asymmetry between low expectations of faculty replacement and more pronounced concerns about loss of control directly correlates with the objectives of SDG 4 (Quality Education). The results highlight the need to develop digital literacy programs and train faculty in the use of AI as a prerequisite for ensuring the sustainability and quality of higher education. Universities’ ability to support educators in the face of technological change is becoming an essential element in providing an inclusive, safe, and effective educational process.
From the perspective of SDG 9 (Innovation and Infrastructure), the study’s findings reflect the early stages of AI integration into educational ecosystems. The higher level of apprehension compared to expectations of replacement signals existing barriers to the acceptance of technological innovation. These expectations underscore the need to develop institutional dynamic capabilities (sensing, seizing, and transforming) that enable universities to safely implement AI infrastructures, support faculty, and reduce the uncertainty associated with new technologies.
Based on the Theory of Dynamic Capabilities (sensing–seizing–transforming), recommendations can be presented at three levels of management: government, institutional, and scientific-didactic (academic).

5.1. For Governments

Public education authorities should recognize weak signals of digital transformation and respond to them promptly [14,36,39,46,47,48,49,53,54]:
  • Develop national strategies for digital adaptation of teachers;
  • Integrate digital resilience indicators into university accreditation and funding systems;
  • Include the development of digital and emotional competence of university teachers in state support and modernization programs for higher education;
  • Ensure interdepartmental cooperation between the ministries of education, digitalization, and labor to coordinate personnel and technological reforms.

5.2. For University Leaders

University leaders should leverage the potential of AI to strengthen organizational resilience and reduce staff anxiety [14,26,32,33,34,35,36,59]:
  • Implement new HR strategies aimed at developing digital competencies of university teachers and building their trust in technology;
  • Create ethical codes and regulations for the responsible use of AI in educational activities;
  • Monitor the level of digital stress and the emotional state of university teachers;
  • Engage university teachers in the co-design of educational courses using AI tools.

5.3. For Educational Researchers

Scholars studying learning theory play a key role in the transformation of didactics [14,15,26,27,28,29,30,31,68]. Their tasks include:
  • Developing a concept for “student + AI + university teacher” interaction in teaching and mentoring;
  • Revisiting the concepts of authorship and academic integrity in the context of generative technologies;
  • Participating in the development of international standards for digital didactics.
Implementation of these recommendations will ensure coordinated actions at all levels of educational management and make digital transformation a key factor in the sustainable and human development of universities. Thus, university teachers’ perceptions of AI are not only a subject for psychological assessment but also an important management signal. In this sense, teachers’ perceptions do not ‘measure’ dynamic capabilities directly but function as organizational sensing signals that indicate whether universities can recognize and respond to AI-related risks and opportunities. Taking this into account brings universities closer to achieving SDGs 4, 8, and 9. In other words, the results obtained contribute to ensuring decent employment (SDG 8), quality education (SDG 4), and the sustainable, innovative transformation of higher education (SDG 9).

6. Conclusions

The study’s results indicate that, at this stage, differences between replacement expectations and fears of loss of control should be viewed as early descriptive indicators of institutional adaptation, rather than as a reflection of individual faculty members’ fears. These findings characterize current sentiments and require further research.
From a theoretical perspective, the findings expand the application of dynamic capabilities theory to university management. University teachers’ perceptions of AI serve as an indicator of organizational adaptability: weak expectations of replacement reflect limited sensing, while more pronounced fears indicate the need for new management decisions (seizing) and the adaptation (transforming) of HR strategies and management practices. Thus, the study’s results allow us to take a new look at the emotional reactions of academic staff as systemic management signals. The practical significance of this study lies in the fact that university teachers’ fears and expectations directly impact the sustainability of universities’ HR policies. Ignoring these signals leads to declining engagement, increased instability, and a decline in the quality of higher education services. To minimize these risks, universities need HR strategies that focus on reducing staff anxiety, developing digital competencies, and fostering professional resilience.
Furthermore, the authors have prepared three-tiered recommendations for governments, university leaders, and educational researchers.
Interpreting the results through the lens of the Sustainable Development Goals underscores the importance of this study. Managing university teachers’ perceptions of AI is linked to ensuring quality education (SDG 4), decent employment (SDG 8), and developing innovative infrastructure (SDG 9). Thus, supporting academic staff in the context of digital transformation is not only a local management objective in the labor market but also a strategic prerequisite for the sustainable development of higher education.
The study has several limitations that must be considered when interpreting the results. First, the sample consists only of university teachers from Ukrainian universities. This context may have influenced anxiety and expectations, limiting the ability to directly extrapolate the findings to other countries and higher education systems. In addition, all respondents had participated in a professional development program; their views may therefore differ from those of university teachers not currently engaged in professional development. Second, the study employs both qualitative and quantitative methods, incorporating self-assessment. This approach captures consistent statistical patterns but does not provide a comprehensive understanding of the individual motivations and emotional mechanisms underlying perceptions of AI. Third, the questionnaire design is narrow, consisting of two single-item indicators that capture only surface-level expectations and concerns. A more in-depth analysis requires validated multidimensional scales measuring professional identity, trust in technology, and components of the psychological contract. Fourth, the bibliometric section relies on co-occurrence analysis, which provides a descriptive overview of thematic structures but cannot capture the full complexity of conceptual relationships. Finally, the results should be understood as preliminary indicators of university teachers’ attitudes. They demonstrate differences in perceptions of the threat of replacement and loss of control, but do not allow consistent trends or structural changes to be determined without further research. A combination of quantitative analysis, interviews, case studies, and cross-country comparisons would be helpful in future research.

Author Contributions

Conceptualization, W.O.-K. and A.A.; methodology, W.O.-K.; software, A.A.; validation, W.O.-K., A.A. and N.A.; formal analysis, W.O.-K. and A.A.; investigation, W.O.-K. and A.A.; resources, N.A.; data curation, N.A.; writing—original draft preparation, W.O.-K., A.A. and N.A.; writing—review and editing, N.A.; visualization, W.O.-K.; supervision, W.O.-K.; project administration, A.A.; funding acquisition, W.O.-K., A.A. and N.A. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the scientific project “Analysis and Development of Recommendations for Teachers of Ukrainian Universities to Improve the Quality of Educational Services” No. EU 1/21, by the EU NextGenerationEU through the Recovery and Resilience Plan for Slovakia under project (No. 09I03-03-V01-00130), by the Ministry of Education and Science of Ukraine, “Modeling and forecasting of socioeconomic consequences of higher education and science reforms in wartime” (No. 0124U000545), by EU project “Immersive Marketing in Education: Model Testing and Consumers’ Behavior” (No. 09I03-03-V04-00522/2024/VA) and by the grant “Application of hybrid swarm intelligence algorithms in the development of proactive multi-agent systems for the digital educational environment” of the Science Committee of the Ministry of Science and Higher Education of the Republic of Kazakhstan (No. AP26196023).

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki and the ethical standards applicable to non-invasive research; the protocol was approved by the “Sobornost” Institutional Review Board (project identification code Sobornist ZSL 20211204/1) on 4 February 2022. Participation was entirely voluntary, no personal or confidential data were collected, and all responses were anonymous. In accordance with the host institution’s guidelines and national legislation, formal IRB approval was not required for this type of research; the Institutional Review Board confirmed in its statement of 4 February 2022 that the experimental studies in this article do not require approval.

Informed Consent Statement

Informed consent for participation was obtained from all subjects involved in the study. Respondents were informed about the purpose of the study, the anonymity of their responses, and the right to refuse participation at any time.

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors on request.

Acknowledgments

The authors thank the reviewers for their valuable advice, which significantly improved the quality of the manuscript.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
AI: Artificial Intelligence
DCT: Dynamic Capabilities Theory

References

  1. United Nations. Promote Inclusive and Sustainable Economic Growth, Employment and Decent Work for All. Available online: https://www.un.org/sustainabledevelopment/economic-growth/ (accessed on 8 September 2025).
  2. United Nations. 2017 HLFP Thematic Review of SDG 9: Build Resilient Infrastructure, Promote Inclusive and Sustainable Industrialization and Foster Innovation. Available online: https://sustainabledevelopment.un.org/content/documents/14363SDG9format-revOD.pdf (accessed on 10 September 2025).
  3. United Nations. Goal 9: Build Resilient Infrastructure, Promote Sustainable Industrialization and Foster Innovation. Available online: https://www.un.org/sustainabledevelopment/infrastructure-industrialization/ (accessed on 12 September 2025).
  4. United Nations. Ensure Inclusive and Equitable Quality Education and Promote Lifelong Learning Opportunities for All. Available online: https://sdgs.un.org/goals/goal4 (accessed on 1 October 2025).
  5. UNESCO. Global Education Monitoring Report 2020: Inclusion and Education—All Means All; UNESCO: Paris, France, 2020. [Google Scholar] [CrossRef]
  6. Nyberg, A.J.; Schleicher, D.J.; Bell, B.S.; Boon, C.; Cappelli, P.; Collings, D.G.; Dalle Molle, J.E.; Feuerriegel, S.; Gerhart, B.; Jeong, Y.; et al. A Brave New World of Human Resources Research: Navigating Perils and Identifying Grand Challenges of the GenAI Revolution. J. Manag. 2025, 51, 2677–2718. [Google Scholar] [CrossRef]
  7. Huang, M.-H.; Rust, R.T. The Caring Machine: Feeling AI for Customer Care. J. Mark. 2024, 88, 1–23. [Google Scholar] [CrossRef]
  8. Eom, H.J.; Lee, M.J. Labor Market Changes in the Era of Intelligent Information Society Based on Artificial Intelligence (AI): A Socioeconomic Approach. Inf. Soc. Media 2020, 21, 1–20. [Google Scholar] [CrossRef]
  9. Koo, G.J.; Kim, J.W.; Choi, Y.B.; Kim, G.H.; Hong, J.E. Development of a Job Substitution Possibility Index for Analyzing the Labor Market Impact of Artificial Intelligence and Robotics. Korean J. Public Adm. 2024, 62, 37–74. [Google Scholar] [CrossRef]
  10. Dubey, S.S.; Astvansh, V.; Kopalle, P.K. Generative AI Solutions to Empower Financial Firms. J. Public Policy Mark. 2025, 44, 411–435. [Google Scholar] [CrossRef]
  11. Grewal, D.; Okazaki, S.; Guha, A.; Liu-Thompkins, Y. Generative AI: Delivering on the Promises and Understanding the Perils. J. Public Policy Mark. 2025, 44, 299–308. [Google Scholar] [CrossRef]
  12. Xie, Y.; Avila, S. The Social Impact of Generative LLM-Based AI. Chin. J. Sociol. 2025, 11, 31–57. [Google Scholar] [CrossRef]
  13. Warr, M.; Heath, M.K. Uncovering the Hidden Curriculum in Generative AI: A Reflective Technology Audit for Teacher Educators. J. Teach. Educ. 2025, 76, 245–261. [Google Scholar] [CrossRef]
  14. Pence, H.E. Artificial Intelligence in Higher Education: New Wine in Old Wineskins? J. Educ. Technol. Syst. 2019, 48, 5–13. [Google Scholar] [CrossRef]
  15. Frey, C.B.; Osborne, M.A. The Future of Employment: How Susceptible Are Jobs to Computerisation? 2013. Available online: https://www.oxfordmartin.ox.ac.uk/downloads/academic/The_Future_of_Employment.pdf (accessed on 1 October 2025).
  16. Artyukhov, A.; Wołowiec, T.; Artyukhova, N.; Bogacki, S.; Vasylieva, T. SDG 4, Academic Integrity and Artificial Intelligence: Clash or Win-Win Cooperation? Sustainability 2024, 16, 8483. [Google Scholar] [CrossRef]
  17. Bilgic, E.; Gorgy, A.; Young, M.; Abbasgholizadeh-Rahimi, S.; Harley, J.M. Artificial Intelligence in Surgical Education: Considerations for Interdisciplinary Collaborations. Surg. Innov. 2021, 29, 137–138. [Google Scholar] [CrossRef]
  18. Hsu, C.H.; Tan, G.; Stantic, B. A Fine-Tuned Tourism-Specific Generative AI Concept. Ann. Tour. Res. 2024, 104, 103723. [Google Scholar] [CrossRef]
  19. Henriet, J. Artificial Intelligence–Virtual Trainer: An Educative System Based on Artificial Intelligence and Designed to Produce Varied and Consistent Training Lessons. Proc. Inst. Mech. Eng. Part P J. Sports Eng. Technol. 2016, 231, 110–124. [Google Scholar] [CrossRef]
  20. Randhawa, G.K.; Jackson, M. The Role of Artificial Intelligence in Learning and Professional Development for Healthcare Professionals. Healthc. Manag. Forum 2019, 33, 19–24. [Google Scholar] [CrossRef] [PubMed]
  21. Hockly, N. Artificial Intelligence in English Language Teaching: The Good, the Bad and the Ugly. RELC J. 2023, 54, 445–451. [Google Scholar] [CrossRef]
  22. Teece, D.J.; Pisano, G.; Shuen, A. Dynamic Capabilities and Strategic Management. Strateg. Manag. J. 1997, 18, 509–533. [Google Scholar] [CrossRef]
  23. Schilke, O.; Helfat, C.E. Unlocking Dynamic Capabilities: Pathways for Empirical Research. J. Manag. Sci. Rep. 2025, 3, 71–87. [Google Scholar] [CrossRef]
  24. Helfat, C.E.; Finkelstein, S.; Mitchell, W.; Peteraf, M.; Singh, H.; Teece, D.; Winter, S.G. Dynamic Capabilities: Understanding Strategic Change in Organizations; John Wiley & Sons: Hoboken, NJ, USA, 2009. [Google Scholar]
  25. Li, Y.; Cui, L.; Wu, L.; Lowry, P.B.; Kumar, A.; Tan, K.H. Digitalization and Network Capability as Enablers of Business Model Innovation and Sustainability Performance: The Moderating Effect of Environmental Dynamism. J. Inf. Technol. 2023, 39, 687–715. [Google Scholar] [CrossRef]
  26. Okulich-Kazarin, V.; Artyukhov, A. (Un)invited Assistant: AI as a Structural Element of the University Environment. Societies 2025, 15, 297. [Google Scholar] [CrossRef]
  27. Gellai, D.B. Enterprising Academics: Heterarchical Policy Networks for Artificial Intelligence in British Higher Education. ECNU Rev. Educ. 2022, 6, 568–596. [Google Scholar] [CrossRef]
  28. McKenzie, L. Pushing the Boundaries of Learning with AI. Inside Higher Ed. 2018. Available online: https://www.insidehighered.com/digital-learning/article/2018/09/26/academics-push-expand-use-ai-higher-ed-teaching-and-learning (accessed on 10 October 2025).
  29. Sánchez-Morales, L.N.; Alor-Hernández, G.; Rosales-Morales, V.Y.; Cortes-Camarillo, C.A.; Sánchez-Cervantes, J.L. Generating Educational Mobile Applications Using UIDPs Identified by Artificial Intelligence Techniques. Comput. Stand. Interfaces 2020, 70, 103407. [Google Scholar] [CrossRef]
  30. Mollick, E.R.; Mollick, L. New Modes of Learning Enabled by AI Chatbots: Three Methods and Assignments. SSRN 2022, preprint. [Google Scholar] [CrossRef]
  31. Zawacki-Richter, O.; Marín, V.I.; Bond, M.; Gouverneur, F. Systematic Review of Research on Artificial Intelligence Applications in Higher Education—Where Are the Educators? Int. J. Educ. Technol. High. Educ. 2019, 16, 39. [Google Scholar] [CrossRef]
  32. Jessop, B. The Rise of Governance and the Risks of Failure: The Case of Economic Development. Int. Soc. Sci. J. 1998, 50, 29–45. [Google Scholar] [CrossRef]
  33. Abdelwahab, H.R.; Rauf, A.; Chen, D. Business Students’ Perceptions of Dutch Higher Educational Institutions in Preparing Them for Artificial Intelligence Work Environments. Ind. High. Educ. 2022, 37, 22–34. [Google Scholar] [CrossRef]
  34. Yang, X. Accelerated Move for AI Education in China. ECNU Rev. Educ. 2019, 2, 347–352. [Google Scholar] [CrossRef]
  35. Hamzah, H.A.; Abu Seman, M.S.; Ahmed, M. The Impact of Artificial Intelligence in Enhancing Online Learning Platform Effectiveness in Higher Education. Inf. Dev. 2025, 41, 794–810. [Google Scholar] [CrossRef]
  36. Ruano-Borbalan, J.-C. The Transformative Impact of Artificial Intelligence on Higher Education: A Critical Reflection on Current Trends and Futures Directions. Int. J. Chin. Educ. 2025, 14. [Google Scholar] [CrossRef]
  37. Koć-Januchta, M.M.; Schönborn, K.J.; Tibell, L.A.E.; Chaudhri, V.K.; Heller, H.C. Engaging with Biology by Asking Questions: Investigating Students’ Interaction and Learning with an Artificial Intelligence-Enriched Textbook. J. Educ. Comput. Res. 2020, 58, 1190–1224. [Google Scholar] [CrossRef]
  38. Wang, H.; Liu, M. Methods and Content Innovation Strategies of Digital Education in Higher Vocational Colleges under the Background of Artificial Intelligence. J. Comput. Methods Sci. Eng. 2025, 25, 2630–2641. [Google Scholar] [CrossRef]
  39. Liu, X.; Guo, B.; He, W.; Hu, X. Effects of Generative Artificial Intelligence on K-12 and Higher Education Students’ Learning Outcomes: A Meta-Analysis. J. Educ. Comput. Res. 2025, 63, 1249–1291. [Google Scholar] [CrossRef]
  40. Bhat, N.A. Job Market in the Era of Illiberalism, Trade Wars, Artificial Intelligence and Big Data: A Study. Indian J. Public Adm. 2024, 70, 817–828. [Google Scholar] [CrossRef]
  41. Marshik, T.; McCracken, C.; Kopp, B.; O’Marrah, M. Student and Instructor Perceptions and Uses of Artificial Intelligence in Higher Education. Teach. Psychol. 2024, 52, 339–346. [Google Scholar] [CrossRef]
  42. Al-Zahrani, A.M. Exploring the Impact of Artificial Intelligence Chatbots on Human Connection and Emotional Support Among Higher Education Students. Sage Open 2025, 15. [Google Scholar] [CrossRef]
  43. Gilmore, J.N.; Whims, T.; Blair, B.W.; Katarzynski, B.; Steffen, L. Technology Acceptance, Moral Panic, and Perceived Ease of Use: Negotiating ChatGPT at Research One Universities. Convergence 2025, 31, 1251–1266. [Google Scholar] [CrossRef]
  44. Filgueiras, F. Artificial Intelligence and Education Governance. Educ. Citizsh. Soc. Justice 2023, 19, 349–361. [Google Scholar] [CrossRef]
  45. Lin, T.; Zhang, J.; Xiong, B. Effects of Technology Perceptions, Teacher Beliefs, and AI Literacy on AI Technology Adoption in Sustainable Mathematics Education. Sustainability 2025, 17, 3698. [Google Scholar] [CrossRef]
  46. Okulich-Kazarin, V.; Artyukhov, A.; Skowron, Ł.; Artyukhova, N.; Dluhopolskyi, O.; Cwynar, W. Sustainability of Higher Education: Study of Student Opinions about the Possibility of Replacing Teachers with AI Technologies. Sustainability 2024, 16, 55. [Google Scholar] [CrossRef]
  47. Aguado-García, J.-M.; Alonso-Muñoz, S.; De-Pablos-Heredero, C. Using Artificial Intelligence for Higher Education: An Overview and Future Research Avenues. Sage Open 2025, 15. [Google Scholar] [CrossRef]
  48. Holgado-Apaza, L.A.; Ulloa-Gallardo, N.J.; Aragon-Navarrete, R.N.; Riva-Ruiz, R.; Odagawa-Aragon, N.K.; Castellon-Apaza, D.D.; Carpio-Vargas, E.E.; Villasante-Saravia, F.H.; Alvarez-Rozas, T.P.; Quispe-Layme, M. The Exploration of Predictors for Peruvian Teachers’ Life Satisfaction through an Ensemble of Feature Selection Methods and Machine Learning. Sustainability 2024, 16, 7532. [Google Scholar] [CrossRef]
  49. Behera, M.; Nigam, S. Automation, Artificial Intelligence (AI) and Labour Market in India: Challenges and Road Ahead. Indian. J. Hum. Dev. 2025, 19, 159–167. [Google Scholar] [CrossRef]
  50. Yu, Y.; Ruoxi, L.; Tingting, Y.; Xinxin, W. Convergence and Disparities in Higher Education Fiscal Expenditures in China: A Regional Perspective. Financ. Mark. Inst. Risks 2023, 7, 31–47. [Google Scholar] [CrossRef]
  51. Zindi, B.; Majam, T. Harnessing SDG 4 (Quality Education) towards Post COVID-19 Recovery in the Education Sector. Health Econ. Manag. Rev. 2025, 6, 111–130. [Google Scholar] [CrossRef]
  52. Davlikanova, O. Evolution of the Development of the Ukrainian Pattern of Dual Higher Education. Knowl. Econ. Lifelong Learn. 2025, 1, 1–20. [Google Scholar] [CrossRef]
  53. Dachi, A.; Kasztelnik, K. Building Bridges: Implementing Governance for Sustainability in the Microfinance Banks of Developing Countries. Financ. Mark. Inst. Risks 2024, 8, 1–16. [Google Scholar] [CrossRef]
  54. Filipova, M.; Djakona, A.; Haram, V. Determinants of Government Expenditures in the Baltic States. Financ. Mark. Inst. Risks 2025, 9, 66–89. [Google Scholar] [CrossRef]
  55. Iurchenko, M.; Ponomarenko, M. Ukrainian Educational and Scientific Potential After the Full-Scale Invasion: Socioeconomic Challenges and Prospects. SocioEcon. Chall. 2025, 9, 21–38. [Google Scholar] [CrossRef]
  56. Kettaf, R.; Karima, K.; Zohra, D. Investigating the Impact of Organizational Climate on Organizational Silence in Higher Education Institutions. SocioEcon. Chall. 2024, 8, 170–182. [Google Scholar] [CrossRef]
  57. Akther, R.; Hossain, M.M.; Kamrozzaman, M.; Manik, M.M.H. Professional Standards and Educational Leadership: Higher Secondary Teachers’ Behavioral Intention Towards Adopting New Teaching Technologies. Bus. Ethics Leadersh. 2024, 8, 184–198. [Google Scholar] [CrossRef]
  58. Lama, P.B.; Budhathoki, P.B.; Ojha, H.P. Fostering an Inclusive Academic Workforce: Assessing the Impact of Age, Ethnic and Educational Diversity on Employee Performance. Bus. Ethics Leadersh. 2025, 9, 167–179. [Google Scholar] [CrossRef]
  59. Koibichuk, V.; Samoilikova, A.; Vasylieva, T. Digitalization and Innovation Transfer as a Leadership Trend in Education: Bibliometric Analysis and Social Analytics. In Leadership, Entrepreneurship and Sustainable Development Post COVID-19, Proceedings of the NILBEC 2022, 2022 Prague Institute for Qualification Enhancement (PRIZK) International Leadership Conference, Prague, Czech Republic, 24–25 June 2022; Strielkowski, W., Ed.; Springer Proceedings in Business and Economics; Springer: Cham, Switzerland, 2023. [Google Scholar] [CrossRef]
  60. Mursalov, M.; Yarovenko, H.; Vasilyeva, T. Entrepreneurial Ecosystem and Digitalization: Relationship and Synergy of Development. In Leadership, Entrepreneurship and Sustainable Development Post COVID-19, Proceedings of the NILBEC 2022, 2022 Prague Institute for Qualification Enhancement (PRIZK) International Leadership Conference, Prague, Czech Republic, 24–25 June 2022; Strielkowski, W., Ed.; Springer Proceedings in Business and Economics; Springer: Cham, Switzerland, 2023. [Google Scholar] [CrossRef]
  61. Bugrov, V.; Sitnicki, M.W.; Serbin, O. Strategic Management of Creative Industries: A Case Study of University Information Institutions. Probl. Perspect. Manag. 2021, 19, 453–467. [Google Scholar] [CrossRef]
  62. Kozová, K.; Grenčíková, A.; Habánik, J. Building a Sustainable Future: Gender, Education & Workforce Needs of Gen Z. Econ. Sociol. 2024, 17, 209–223. [Google Scholar] [CrossRef]
  63. Skrynnyk, O.; Lyeonov, S.; Lenska, S.; Litvinchuk, S.; Galaieva, L.; Radkevych, O. Artificial Intelligence in Solving Educational Problems. J. Inf. Technol. Manag. 2022, 14, 132–146. [Google Scholar] [CrossRef]
  64. Skrynnyk, O.; Vasilyeva, T. Comparison of Open Learning Forms in Organizational Education. CEUR Workshop Proc. 2020, 2732, 1314–1328. [Google Scholar]
  65. Vorontsova, A.; Tarasenko, S.; Duranowski, W.; Durasiewicz, A.; Soss, J.; Bilovol, A. A Bibliometric Analysis of the Economic Effects of Using Artificial Intelligence and ChatGPT Tools in Higher Education Institutions. Probl. Perspect. Manag. 2025, 23, 101–114. [Google Scholar] [CrossRef]
  66. Pollifroni, M.; Ioana, A.; Canuta, I.L.; Pollifroni, F. AI-Driven E-Recruitment in Education and Science: Moving Towards Good Governance, Prevention of Corruption, Administrative Transparency, and Bias-Free Decision-Making. Bus. Ethics Leadersh. 2025, 9, 225–237. [Google Scholar] [CrossRef]
  67. Schinello, S. Challenges and Opportunities in the Use of Artificial Intelligence in Creative Economy: Insights from Expert Interviews. Econ. Sociol. 2025, 18, 199–216. [Google Scholar] [CrossRef]
  68. Perevozova, I.; Babenko, V.; Krykhovetska, Z.; Popadynets, I. Holistic Approach Based Assessment of Social Efficiency of Research Conducted by Higher Educational Establishments. E3S Web Conf. 2020, 166, 13022. [Google Scholar] [CrossRef]
  69. Teece, D.J. Managing the University: Why “Organized Anarchy” Is Unacceptable in the Age of Massive Open Online Courses. Strateg. Organ. 2017, 16, 92–102. [Google Scholar] [CrossRef]
  70. Leih, S.; Teece, D. Campus Leadership and the Entrepreneurial University: A Dynamic Capabilities Perspective. Acad. Manag. Perspect. 2016, 30, 182–210. [Google Scholar] [CrossRef]
  71. Zajac, E.J.; Kraatz, M.S. A Diametric Forces Model of Strategic Change: Assessing the Antecedents and Consequences of Restructuring in the Higher Education Industry. Strateg. Manag. J. 1993, 14, 83–102. [Google Scholar] [CrossRef]
  72. Bornay-Barrachina, M.; López-Cabrales, Á.; Salas-Vallina, A. Sensing, Seizing, and Reconfiguring Dynamic Capabilities in Innovative Firms: Why Does Strategic Leadership Make a Difference? BRQ Bus. Res. Q. 2023, 28, 399–420. [Google Scholar] [CrossRef]
  73. Phalswal, S.; Bhardwaj, R.; Sahay, A.; Akbar, M. Dynamic Capabilities of Environmentally Sustainable Enterprises: An Exploratory Study. South Asian J. Bus. Manag. Cases 2024, 13, 148–165. [Google Scholar] [CrossRef]
  74. Zabel, C.; Duckwitz, A. Seizing AI: Dynamic Seizing Capabilities in Emerging Technology Markets. The Case of Artificial Intelligence Adoption by German Influencer Marketing Agencies. Emerg. Media 2025, 3, 664–687. [Google Scholar] [CrossRef]
  75. BUS_9641; Business Statistics: Textbook for the Program “Masters of Business Administration”. Kingston University: London, UK, 2010.
  76. Okulich-Kazarin, V. Statistics Using Neural Networks in the Context of Sustainable Development Goal 9.5. Sustainability 2024, 16, 8395. [Google Scholar] [CrossRef]
  77. Scanmarket. Calculator to Calculate a Sufficient Sample Size. Available online: https://scanmarket.ru/blog/vyborka-razmer-ne-glavnoe-ili-glavnoe#calc1 (accessed on 8 May 2025).
  78. Ho, M.; Soo, C.; Tian, A.; Teo, S.T. Influence of Strategic HRM and Entrepreneurial Orientation on Dynamic Capabilities and Innovation in Small- and Medium-Sized Enterprises. Int. Small Bus. J. 2023, 42, 611–640. [Google Scholar] [CrossRef]
  79. Li, J. Construction of Undergraduate Education Teacher Team under the Background of Big Data and Artificial Intelligence. J. Comput. Methods Sci. Eng. 2025, 25, 963–976. [Google Scholar] [CrossRef]
  80. Okulich-Kazarin, V. Sustainable Development Goal 4 and Education Research: A Review of Polish Specifics Against the Background of Global Trends. Sustainability 2025, 17, 2747. [Google Scholar] [CrossRef]
Figure 1. PRISMA diagram for bibliometric analysis (a) and explicit search string in text ((b), screenshot from Scopus database). Notes: The asterisk (*) in Figure 1a is a standard wildcard designation for a group of files sharing the same extension (file type): rather than listing every file name individually, the asterisk stands in for them, a well-known convention in computer science, including bibliometric analysis.
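The wildcard convention described in the note above can be mimicked in a few lines of Python using a regular expression, where `\w*` plays the role of the asterisk (the terms below are illustrative, not the study’s actual search string):

```python
import re

# "teach*" in a search string matches teach, teacher, teaching, teachers, etc.
# The regex equivalent replaces the trailing * with \w* (any word characters).
pattern = re.compile(r"teach\w*")

terms = ["teacher", "teaching", "taught", "teachers"]
matched = [t for t in terms if pattern.fullmatch(t)]
print(matched)  # ['teacher', 'teaching', 'teachers']
```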
Figure 2. Perceived likelihood of AI replacement (Q1) and fear of losing control (Q2): mean values across five groups—visual comparison of answers to the first and second research questions (comparison of values M(x)).
Figure 3. Query “education” and “artificial intelligence”: keyword map, 2021–2022 (https://www.scopus.com/, accessed on 6 October 2025, analysis tool—VOSviewer).
Figure 4. Query “education” and “artificial intelligence”: keyword map, 2023–2024 (https://www.scopus.com/, accessed on 6 October 2025, analysis tool—VOSviewer).
Figure 5. Query “education” and “artificial intelligence”: keyword map, 2025-ongoing (https://www.scopus.com/, accessed on 6 October 2025, analysis tool—VOSviewer).
Table 1. General characteristics of the respondents.
| Indicator | Group 1 | Group 2 | Group 3 | Group 4 | Group 5 | Common |
|---|---|---|---|---|---|---|
| Survey date | 23 June 2023 | 23 December 2023 | 20 May 2024 | 30 November 2024 | 17 May 2025 | 23 June 2023–17 May 2025 |
| Total | 84 | 116 | 89 | 126 | 38 | 453 |
| Men | 18 | 34 | 22 | 33 | 7 | 114 |
| Women | 65 | 82 | 67 | 93 | 30 | 337 |
| No data | 1 | 0 | 0 | 0 | 1 | 2 |
Table 2. Distribution of respondents by age groups, number of persons.
| Age group | Group 1 | Group 2 | Group 3 | Group 4 | Group 5 | Common |
|---|---|---|---|---|---|---|
| 15–24 | 0 | 3 | 0 | 2 | 2 | 7 |
| 25–34 | 4 | 11 | 16 | 16 | 3 | 50 |
| 35–44 | 36 | 45 | 32 | 30 | 17 | 160 |
| 45–54 | 24 | 35 | 30 | 43 | 7 | 139 |
| 55–64 | 16 | 18 | 7 | 26 | 8 | 75 |
| 65–74 | 4 | 4 | 4 | 9 | 1 | 22 |
| Total | 84 | 116 | 89 | 126 | 38 | 453 |
Table 3. Answers to the first research question.
| No | Reply | Group 1 | Group 2 | Group 3 | Group 4 | Group 5 | Common |
|---|---|---|---|---|---|---|---|
| 1 | Definitely yes | 0 | 2 | 1 | 2 | 1 | 6 |
| 2 | Rather yes | 12 | 10 | 12 | 9 | 6 | 49 |
| 3 | Hard to say | 20 | 28 | 17 | 33 | 10 | 108 |
| 4 | Rather not | 37 | 54 | 28 | 45 | 11 | 175 |
| 5 | Definitely no | 15 | 22 | 31 | 37 | 10 | 115 |
| 6 | Total | 84 | 116 | 89 | 126 | 38 | 453 |
Table 4. Answers to the second research question.
| No | Reply | Group 1 | Group 2 | Group 3 | Group 4 | Group 5 | Common |
|---|---|---|---|---|---|---|---|
| 1 | Definitely yes | 6 | 5 | 3 | 8 | 2 | 24 |
| 2 | Rather yes | 24 | 27 | 15 | 30 | 11 | 106 |
| 3 | Hard to say | 25 | 28 | 26 | 29 | 10 | 119 |
| 4 | Rather not | 26 | 47 | 35 | 47 | 12 | 167 |
| 5 | Definitely no | 3 | 9 | 10 | 12 | 3 | 37 |
| 6 | Total | 84 | 116 | 89 | 126 | 38 | 453 |
Table 5. Statistical indicators for the first research question.
| Indicator | Group 1 | Group 2 | Group 3 | Group 4 | Group 5 | Common |
|---|---|---|---|---|---|---|
| Sample size, n | 84 | 116 | 89 | 126 | 38 | 453 |
| Sample mean, M(x) | 1.3452 | 1.2759 | 1.1461 | 1.1587 | 1.3947 | 1.2406 |
| Sample standard deviation, δx | 0.9322 | 0.9246 | 1.0763 | 0.9793 | 1.1131 | 0.9931 |
Table 6. Statistical indicators for the second research question.
| Indicator | Group 1 | Group 2 | Group 3 | Group 4 | Group 5 | Common |
|---|---|---|---|---|---|---|
| Sample size, n | 84 | 116 | 89 | 126 | 38 | 453 |
| Sample mean, M(x) | 2.0476 | 1.7586 | 1.6180 | 1.8016 | 1.9211 | 1.8079 |
| Sample standard deviation, δx | 1.0107 | 1.0307 | 1.0001 | 1.0985 | 1.0608 | 1.0509 |
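The means and standard deviations in Tables 5 and 6 can be reproduced from the response counts in Tables 3 and 4. A minimal Python sketch, assuming the five response options are coded from 4 (“definitely yes”) down to 0 (“definitely no”) and a population-style standard deviation (dividing by n), recovers the reported “Common” values:

```python
import math

# "Common" column counts, ordered from "definitely yes" to "definitely no".
q1 = [6, 49, 108, 175, 115]   # replacement by AI (Table 3)
q2 = [24, 106, 119, 167, 37]  # fear of losing control (Table 4)

# Assumed coding: definitely yes = 4, ..., definitely no = 0.
codes = [4, 3, 2, 1, 0]

def mean_and_std(counts):
    n = sum(counts)
    m = sum(c * x for c, x in zip(counts, codes)) / n
    var = sum(c * (x - m) ** 2 for c, x in zip(counts, codes)) / n  # divide by n
    return m, math.sqrt(var)

m1, s1 = mean_and_std(q1)
m2, s2 = mean_and_std(q2)
print(round(m1, 4), round(s1, 4))  # 1.2406 0.9931 (Table 5, Common)
print(round(m2, 4), round(s2, 4))  # 1.8079 1.0509 (Table 6, Common)
```

Under this coding, the gap between M(x) for Q2 (1.8079) and Q1 (1.2406) is the quantitative basis for the paper’s claim that fear of losing control over AI exceeds fear of job displacement.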
Table 7. How the role of university teachers has changed in an AI-driven educational environment.
| Period | Dominant Research Focus | Position of the Teacher in the AI-Education Network | Role of the Teacher | Main Competencies and Responsibilities | Relation to SDGs | Economic and Institutional Implications |
|---|---|---|---|---|---|---|
| 2021–2022 | Technological adoption and digital transformation | Peripheral: linked to “teaching methods”, “online learning”, “students” | Technology adopter (facilitator) | Integrating AI tools into existing teaching methods; experimenting with online and blended learning; supporting students’ digital engagement | SDG 4 | Focus on digital infrastructure and automation; teachers as implementers, not decision-makers; early connection to SDG 8 through digital skills training |
| 2023–2024 | Generative AI, ethics, and academic integrity | More central: connected to “curriculum”, “assessment”, “ChatGPT”, and “learning systems” | Mediator/evaluator/ethical guide | Managing academic integrity and AI ethics; adapting assessment to AI-generated content; teaching AI literacy and critical thinking; guiding responsible AI use | SDG 4; SDG 8 | Recognition of teachers as key to ethical and employability-oriented AI education; universities emphasize digital competence and ethical governance; growing demand for teacher training in AI literacy |
| 2025 | Ethical innovation, AI literacy, and educational ecosystems | Central: connected to “decision making”, “ethics”, “AI literacy”, “learning systems”, and “higher education” | Designer/mentor/architect of learning environments | Co-designing AI-supported curricula and adaptive systems; leading ethical and inclusive implementation of AI; mentoring students as creators, not consumers, of knowledge; promoting lifelong learning and innovation skills | SDG 4; SDG 8; SDG 9 | Teachers as strategic agents in innovation and workforce development; education as an engine of sustainable economic growth; institutional shift toward AI-informed policy, research, and leadership roles |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
