Article

Perceptions of Opportunities and Risks Posed by Artificial Intelligence: A Survey of Early Childhood Education Professionals in Austria

by
Eva Pölzl-Stefanec
Early Childhood Education Unit, Department of Education Research and Teacher Education, University of Graz, Hilmgasse 4, 8010 Graz, Austria
Educ. Sci. 2026, 16(2), 202; https://doi.org/10.3390/educsci16020202
Submission received: 26 September 2025 / Revised: 25 January 2026 / Accepted: 26 January 2026 / Published: 28 January 2026
(This article belongs to the Section Early Childhood Education)

Abstract

This study investigates how early childhood education and care (ECEC) professionals in Austria perceive the opportunities and risks associated with artificial intelligence (AI). While AI has gained increasing attention in educational research, empirical evidence in the ECEC context remains scarce. A quantitative online survey (May–June 2025) was administered to 292 Austrian ECEC teachers. While only 9.6% fully integrate AI into their practice, 34.3% use it at least occasionally. In contrast, 65.6% recognise its potential to support pedagogical processes such as documentation, observation, or language education. Non-parametric group comparisons indicated significant differences in opportunity perceptions by age, education, and professional experience, whereas no systematic differences emerged in critical attitudes. The results suggest that professional experience and qualifications shape openness to AI, yet scepticism remains widespread across groups. For practice, the findings highlight the need for targeted professional development formats, collegial learning opportunities, and pilot projects that enable professionals to explore AI tools in a reflective manner. Structural support in terms of infrastructure, regulation, and resources is essential to prevent additional strain. This study emphasises the significance of integrating AI into broader professionalisation processes.

1. Introduction

In recent years, digitalisation has become an increasingly important part of early childhood education and care (ECEC), with experts, providers, and politicians all regarding it as a key challenge (Dardanou et al., 2023). To systematically analyse this transformation, this study distinguishes between two central conceptual dimensions: “openness” (the proactive willingness of professionals to integrate digital tools) and “critical attitude” (the reflective evaluation of ethical risks and pedagogical integrity). The development and availability of artificial intelligence (AI) have raised new questions about its potential applications, opportunities, and risks in early childhood education (ECE). Initial international studies suggest that while experts recognise the potential benefits of AI in supporting planning, documentation, and professionalisation, they also have significant concerns, such as those regarding accessibility, restrictions on socio-emotional development, and data protection risks (Mohammed, 2023; Crescenzi-Lanna, 2023; Huang et al., 2021). These attitudes are often shaped by existing structural and cultural conditions. For instance, Mohammed’s (2023) study demonstrates that Ghanaian ECE professionals’ attitudes toward the use of AI in childcare facilities are significantly influenced by its perceived long-term feasibility, as well as by such structural and cultural conditions. These attitudes reflect both potential benefits, such as personalised learning opportunities, interactive learning environments, and automated administrative processes, and profound concerns, such as the loss of human interaction, data protection risks, and the danger of deprofessionalisation. Attitudes range from curiosity and openness to scepticism and reluctance, but the desire for professional development and for the ethical, child-friendly use of AI remains central.
This study highlights the importance of continuous training, clear ethical guidelines, cultural sensitivity, and close cooperation between politics, research, and practice for the sustainable implementation of AI in childcare facilities. Chen and Lin (2024) describe AI in the context of ECE as a “double-edged sword” whose positive effects only materialise when educational principles such as professionalism, responsibility, and ethical reflection are consistently considered. They analyse both the opportunities of AI, such as personalised learning and interactive play environments, and the risks that arise from its excessive or unreflective use, such as impaired social interactions and weakened critical thinking. They propose the POWER principles (Purposeful, Optimal, Wise, Ethical, Responsible) as a normative framework to be integrated into the AI competence development of educational professionals, parents, and children. The authors advocate for ethically grounded AI literacy, using current examples such as ChatGPT to highlight the urgent need for clear guidelines on the responsible use of AI technologies in ECE.
Ghamrawi et al. (2024) examine the influence of artificial intelligence (AI) on teacher leadership, discussing whether AI expands the scope of pedagogy or, conversely, threatens the professional autonomy of specialists. Based on qualitative interviews with thirteen teachers from five Arab countries who already use AI in their teaching, the authors present a mixed picture: while AI can reduce workloads and enable data-based decision-making and collaborative leadership, there is also a risk that technical specifications and algorithmic control could restrict pedagogical self-determination. The study concludes that, to strengthen their leadership role in the AI age, teachers require specific skills in technological literacy, adaptability, coaching, cooperation, data-based reflection, and human-centred approaches. These areas of competence can also be transferred to the ECEC sector, where professionalisation through continuous training is essential for sustainable AI integration, particularly given the uncertainties and training needs revealed by Austrian studies on AI integration in ECEC.
The use of ChatGPT-4o for lesson and education planning is one specific area of application that is increasingly being studied empirically at an international level, with a view to identifying potential efficiency gains and the risks of pedagogical reduction (Kalenda et al., 2025; Jambunathan & Jayaraman, 2025; Uğraş, 2024). Studies show that while AI-supported tools such as ChatGPT offer structural support in areas such as brainstorming, structuring, and orienting towards didactic models, they fall short of professional standards when used without reflection. In particular, experts critically evaluated the tools’ ability to create complete, differentiated, and developmentally appropriate lesson plans, criticising inaccurate information, insufficient differentiation, a lack of Universal Design for Learning (UDL) elements, and incoherence between learning objectives and assessments, among other things. At the same time, experts recognise the potential to reduce workloads, promote individual learning processes, and support language education, provided that AI use is accompanied by critical reflection and takes ethical standards into account.
Studies by Jambunathan and Jayaraman (2025) and Uğraş (2024) consistently demonstrate the potential of ChatGPT as a supportive tool in elementary education, provided certain conditions are met. While the former analyse the didactic strengths and weaknesses of ChatGPT using a specific mathematics lesson plan, Uğraş (2024) highlights practical usage scenarios in everyday preschool life, along with the associated opportunities and risks. Both studies emphasise the need for pedagogical follow-up, professional expertise, and context-specific adaptations to ensure a child-friendly, high-quality application. Durrani et al. (2024) call for targeted training, technical equipment, and ethical and content guidelines. These findings complement results from Austria, where experts recognise the potential of AI for planning, documentation, and language development. However, they also caution against unreflective use and advocate for the creation of spaces for professional reflection. Overall, these international studies demonstrate that AI can only be effectively and responsibly utilised in early childhood education if educational professionals possess the requisite skills, resources, and guidance.
Initial empirical studies suggest that ChatGPT could be a valuable tool in early childhood education, facilitating tasks such as lesson planning, language development, and creative activity design. Uğraş (2024) acknowledges its potential to motivate and enhance technical competencies but also highlights risks such as misinformation and social isolation. By using a specific lesson plan, Jambunathan and Jayaraman (2025) demonstrate that while ChatGPT provides a useful basic structure, the content it generates is not automatically developmentally appropriate, differentiated, or context-sensitive. Therefore, pedagogical expertise is required to adapt the suggestions to specific groups. Both studies emphasise the need for technical infrastructure, targeted training, and ethical guidelines to ensure quality and developmental appropriateness for each child.
By systematically collecting the perceptions of Austrian professionals, this study addresses the research gap regarding the negotiation of AI integration within the high-pressure ECEC system. This analysis aims to uncover how varying degrees of “openness” and “critical attitudes” are shaped by these specific structural realities.

2. Theoretical Background

In Austria, over 66% of children aged 0–5 attend early childhood education institutions (Statistics Austria, 2024). However, the field has been characterised by structural challenges for years. Professional training is predominantly below the tertiary level, interaction quality is considered substandard, and a viable research infrastructure is only just beginning to emerge (Krenn-Wache, 2024). Furthermore, in their initial training, ECE professionals have gained limited experience with digital education (Pölzl-Stefanec, 2021).
However, since the onset of the pandemic, digitalisation has gained momentum. While professionals complained about inadequate technical equipment in 2020, digital media are now used more widely, albeit contingent on conditions for success, such as technical support and further training and education in this area (Pölzl-Stefanec & Feierabend, 2025). These developments are taking place within a system characterised by a lack of space, staff shortages, and high workloads (Löffler et al., 2022). Studies show that ECE professionals are generally more prone to stress and exhaustion than other occupational groups (OECD, 2025; Ng et al., 2023), even though their mental health is central to stable relationships, emotional support, and high-quality education (Jennings & Greenberg, 2009; Hascher & Waber, 2021).
Against this backdrop, AI is emerging as a potential tool to reduce workloads. Its potential is particularly evident in areas such as documentation, lesson planning, and language support, where it can take over routine tasks. However, introducing new technologies without sufficient preparation carries risks, as it can create additional uncertainties and demands (Jambunathan & Jayaraman, 2025). Austria is therefore a revealing case in the international debate: a sector with high participation but precarious structural conditions. It illustrates how professionals assess the opportunities and risks of new technologies and which framework conditions are necessary for AI to contribute to professionalisation and quality development.
To examine these dynamics empirically, the present study utilises a mixed-methods survey. It assesses how ECEC professionals evaluate the specific opportunities and risks of AI, their current usage patterns, and the extent to which individual factors—such as educational background, professional experience, and age—influence these perceptions. By doing so, this study provides a necessary empirical foundation to discuss future pathways for professionalisation, ethical standards, and quality assurance in the field.

3. Materials and Methods

This study employs a mixed-methods design, combining a quantitative online survey of ECEC professionals with a systematic qualitative analysis of open-ended responses. This approach allows for a comprehensive understanding of the current state of AI integration, as well as the perceived opportunities and risks within the sector. Consequently, this study addresses the following key research question:
How do early childhood education professionals in Austria use AI applications in early childhood education, care, and upbringing, and what opportunities and risks do they perceive?
The online questionnaire was conducted in Austria between May and June 2025. It was designed as part of a project and, after a pre-test, was distributed via a nationwide newsletter addressed to all kindergartens in the Styrian Association of Kindergartens. In addition, the survey was distributed via national groups on social media. A total of 292 professionals from the field of ECEC and upbringing in Austria participated. The exact socio-demographic data can be found in Table 1.
The questionnaire comprised standardised items in the form of Likert scales and open questions on key aspects of AI use in ECE.
To provide a clear overview of the statistical analyses conducted, Table 2 maps the specific questions to the corresponding tests, p-values, and effect sizes.
The conscious decision to use a four-point scale instead of the conventional five-point scale served to create a forced choice. By omitting a neutral middle category, participants were encouraged to take a clear position (agreement or disagreement) on the still relatively new and often controversial topic of AI. Topics covered included the current use of AI tools (e.g., for documentation and planning), subjectively perceived opportunities and risks, self-assessed competence in dealing with AI, future intentions for use, and individual training needs. The survey was conducted anonymously, and all participants were able to discontinue the survey at any time without providing a reason. The data were first analysed descriptively. Group differences were then tested using Kruskal–Wallis H tests and Mann–Whitney U tests, and effect sizes (r) were interpreted according to Cohen (1988).
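As a rough illustration of this test sequence (not the study’s actual analysis script, and using invented placeholder data), the Kruskal–Wallis omnibus test, the Mann–Whitney follow-up, and the effect size r = |Z|/√N can be sketched in Python with SciPy:

```python
import numpy as np
from scipy import stats

# Invented 4-point Likert responses for three illustrative groups
# (placeholders, not the study's data).
rng = np.random.default_rng(0)
young = rng.integers(1, 5, size=60)
middle = rng.integers(1, 5, size=80)
older = rng.integers(1, 5, size=70)

# Omnibus comparison across all groups (Kruskal-Wallis H test).
h_stat, p_kw = stats.kruskal(young, middle, older)

# Pairwise follow-up (Mann-Whitney U test) on two pooled groups.
pooled = np.concatenate([middle, older])
u_stat, p_mw = stats.mannwhitneyu(young, pooled, alternative="two-sided")

# Effect size r = |Z| / sqrt(N); Z is recovered from the normal
# approximation of the U distribution (tie correction omitted for brevity).
n1, n2 = len(young), len(pooled)
mu_u = n1 * n2 / 2
sigma_u = np.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
z = (u_stat - mu_u) / sigma_u
r = abs(z) / np.sqrt(n1 + n2)

print(f"H = {h_stat:.2f}, p = {p_kw:.3f}")
print(f"U = {u_stat:.1f}, Z = {z:.2f}, r = {r:.2f}")
```

Following Cohen (1988), effect sizes of r ≈ 0.10, 0.30, and 0.50 are conventionally read as small, medium, and large.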

4. Results

4.1. Quantitative Data

Of the 292 ECEC professionals surveyed, 46.0% stated that they currently do not use AI applications in their practice, while only 9.6% stated that they actively and regularly use them (see Table 3).
Despite this low level of use, 61.1% of participants see fundamental potential for AI support in educational tasks such as documentation, language development, and portfolio work. At the same time, 55.7% of respondents were critical of further AI integration into their professional activities.

4.1.1. Between Experience and Training: Differences in the Assessment of AI Applications

This study examined the extent to which respondents’ age, level of education, and professional experience influence the perception of AI applications as an opportunity to provide targeted support for educational processes. A focus was placed on differences between different age cohorts, educational qualifications, and levels of experience among professionals.
The following subsections (Table 4) present the results of these analyses in detail and highlight the correlations between socio-demographic characteristics and the assessments of experts.
Greater Openness with Increasing Age
The question of whether AI represents an opportunity to support educational processes (e.g., in observation, language development, or portfolio work) was assessed using a four-point Likert scale. A Kruskal–Wallis H test revealed significant differences between the age groups (H(4) = 14.31, p = 0.006). Professionals aged 55 and over (MR = 177.77) and those aged 45–54 (MR = 159.69) agreed with the statement more often than their younger colleagues. A supplementary Mann–Whitney U test compared professionals under 35 years of age (categories 19–24 and 25–34 years) with professionals aged 35 and above. The analysis revealed a significant difference—U = 7695.50, Z = −3.50, and p < 0.001—with a small to medium effect (r ≈ 0.21). Professionals aged 35 and above (MR = 159.01, n = 179) considered AI applications as an opportunity significantly more often than professionals under 35 (MR = 125.21, n = 112).
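The reported effect size is consistent with the conversion r = |Z|/√N. With Z = −3.50 and N = 179 + 112 = 291 participants in this comparison, a quick check reproduces r ≈ 0.21:

```python
import math

# Values reported for the under-35 vs. 35-and-over comparison.
z = -3.50
n = 179 + 112  # total participants in the two compared groups

r = abs(z) / math.sqrt(n)
print(round(r, 2))  # → 0.21
```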
Greater Openness with Increasing Level of Education
The respondents’ level of education also influenced their assessment of AI. The Kruskal–Wallis H test revealed significant differences (H(4) = 13.62, p = 0.009): professionals with a high school diploma (MR = 156.34) rated AI applications more positively than those with a bachelor’s degree (MR = 132.98), those without a high school diploma (MR = 125.98), those with a master’s degree (MR = 113.85), and those with a magister degree (MR = 65.10). A supplementary Mann–Whitney U test compared professionals without a university degree (0) to those with a university degree (1). The analysis revealed a significant difference: U = 5703.50, Z = −2.49, p = 0.013, and r ≈ 0.15 (small effect). Professionals without a university degree (MR = 152.09, n = 229) considered AI applications an opportunity significantly more often than professionals with a university degree (MR = 123.49, n = 62).
Greater Openness with Increasing Professional Experience
Professional experience also had a significant influence (H(4) = 11.28, p = 0.024). Professionals with more than 31 years of experience (MR = 171.06) and 19–30 years (MR = 154.93) of experience rated AI applications significantly more positively than colleagues with less professional experience.
Pairwise comparisons with Bonferroni correction demonstrated that experienced professionals (MR = 98.08, n = 114) rated AI significantly more positively than new entrants to the profession (MR = 77.41, n = 66), U = 2898.00, Z = −2.72, p = 0.007, and r ≈ 0.20. There was no significant difference between mid-career professionals and new entrants.

4.1.2. Critical Attitudes Towards the Use of AI in Educational Work

To supplement the perspective on opportunities, a further step was taken to examine whether age, level of education, and professional experience are also associated with a more critical attitude towards the use of AI applications in educational work. The focus was on whether certain groups of professionals tend to be sceptical about AI and whether socio-demographic characteristics systematically influence these assessments. The following subsections (Table 5) present the results of these analyses in detail.
Critical Attitudes Regardless of Age
Critical attitudes towards AI (“I am critical of the use of AI applications in educational work”) were measured on a four-point Likert scale. A Kruskal–Wallis H test revealed no significant differences between the age groups (H(4) = 2.70, p = 0.609). The supplementary Mann–Whitney U test used to compare professionals under 35 and over 35 also revealed no significant difference (U = 9552.00, Z = −0.71, p = 0.480). The mean ranks indicate that younger and older professionals do not systematically differ in their critical attitude towards AI applications.
Critical Attitudes Are Not Affected by Level of Education
No statistically significant differences were identified regarding level of education (H(4) = 6.61, p = 0.158). Although professionals without a high school diploma (MR = 176.53) and with a master’s degree (MR = 166.62) had higher mean ranks, these differences were not significant. A supplementary Mann–Whitney U test compared professionals without a degree to those with a degree. Similarly, no significant difference was identified (U = 6879.50, Z = −0.39, p = 0.696), suggesting that formal educational attainment in this sample is not systematically associated with a more critical attitude towards AI applications.
Critical Attitudes Independent of Professional Experience
In regard to professional experience, no significant differences were demonstrated in critical attitudes towards AI (H(4) = 0.75, p = 0.946). The supplementary Mann–Whitney U tests for the groups of career starters vs. experienced professionals (U = 3540.00, p = 0.697), career starters vs. career veterans (U = 3720.00, p = 0.896), and experienced professionals vs. career veterans (U = 6025.50, p = 0.518) did not reveal any significant differences. These results indicate that neither the length of professional experience nor a particular stage of professional development is systematically associated with a more or less critical attitude towards the use of AI applications in educational settings.
A comparison of the two sets of findings reveals that openness to AI and critical reflection are not mutually exclusive but rather coexist. The results demonstrate that the positive assessment of AI applications as an educational opportunity varies between groups: older, experienced, and non-academically trained professionals are more likely to recognise specific applications, such as for observation or language support. Their openness seems to be based less on technological expertise than on practical experience. In contrast, having a critical attitude towards AI is independent of age, education, or professional experience. Scepticism is evident across all groups and seems to be individually motivated, for example, by ethical concerns, data protection issues, or uncertainties of working with technology. Openness and critical reflection thus exist in parallel and independently of each other.
ECEC professionals may find AI applications helpful and supportive while at the same time expressing nuanced concerns. Positive assessments often relate to functional aspects, while critical attitudes focus on structural, ethical, or educational issues.

4.2. Qualitative Data

The quantitative results show that critical attitudes towards AI are not systematically linked to age, education, or professional experience but remain individual. These parallel perspectives were explored further through the open-ended survey question: “What questions or concerns do you personally have regarding the use of AI applications in early education?”
This is precisely where the qualitative data is relevant. Open-ended responses provide deeper insight into the mindsets, uncertainties, and reflection processes of the experts. They enrich the quantitative findings by revealing nuances in content that cannot be captured in standardised response formats. This makes it clear that an approval of AI is not synonymous with unconditional acceptance; experts provide differentiated reasons for their critical attitudes towards the use of AI.
The open-ended responses were evaluated in an inductive coding process. While 292 professionals participated in the survey, a total of 314 thematic units were identified and coded, as some participants addressed multiple topics within their responses. Two researchers independently assigned the responses to thematic categories that were developed directly from the material. The coding was based on a jointly developed code book, which was adapted and refined several times during the course of the analysis.
To ensure intercoder reliability, all data were coded twice. The agreement rate was 68%, which provides a solid basis for further evaluation. Deviations were discussed in a structured consensus process and jointly resolved. This reflexive negotiation process served to ensure quality and contributed to the clarity and consistency of the categories (Table 6).
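If the 68% figure is a simple percent agreement, i.e., the share of thematic units that both coders assigned to the same category, it can be computed as in this minimal sketch (with invented example codings, not the study’s data):

```python
# Category assignments by two coders for the same thematic units
# (illustrative labels, not the study's actual codebook).
coder_a = ["data protection", "ethics", "quality", "training", "time"]
coder_b = ["data protection", "ethics", "time",    "training", "time"]

matches = sum(a == b for a, b in zip(coder_a, coder_b))
agreement = matches / len(coder_a)
print(f"{agreement:.0%}")  # → 80%
```

Chance-corrected coefficients such as Cohen’s kappa are stricter alternatives when category base rates are skewed.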

4.2.1. Concerns Regarding the Use of AI

The feedback reveals a wide range of concerns that can be divided into several key topics.
A key concern is data protection. Many educators express uncertainty about how personal data is processed, stored, and protected. Compliance with the General Data Protection Regulation is cited as a basic requirement, and there is great concern that sensitive information could be unwittingly disclosed or reused for unknown purposes. The responsible handling of data is particularly important in observations, documentation, and portfolio work. Some reject the entry of children’s data on principle, while others want transparent programmes and clear guidelines: “I sometimes use AI for planning, writing letters to parents, etc. However, I am very cautious because I am unsure about data protection. I would never enter children’s data (such as observations) into AI, not even in anonymised form” (Q91).
Ethical and social issues are also frequently addressed. The use of AI is seen as a socially relevant development that cannot be stopped and yet, or precisely because of this, must be used critically. There is concern that AI will be adopted blindly and that the responsibility of professionals will be undermined. A lack of source references, possible misinformation, and a lack of transparency are perceived as problematic: “Incorrect information provided by AI due to a lack of high-quality sources and incorrect generation by AI” (Q112). The question of equality of opportunity is also raised in the context of accessibility: “If AI-supported education is only accessible to certain institutions, existing inequalities could be further exacerbated” (Q116). Ecological aspects such as energy and water consumption are also viewed critically: “In addition, the use of AI is highly wasteful in terms of energy and water consumption, and thus a worrying development for our environment” (Q251).
Many professionals express concern that the use of AI will have a negative impact on independent thinking, creativity, and language skills: “I am absolutely in favour of placing more emphasis on natural intelligence in adults and children. AI is seen by many as a substitute for independent thinking and engagement with work, planning, etc.” (Q12).
Another key point concerns individualisation and relationship building. AI is perceived as lacking empathy and individuality. The danger of standardisation and loss of authenticity is emphasised. Personal relationships with children, a culture of dialogue, and situational planning cannot be replaced by AI: “Especially in elementary education, the emotional and social components should be preserved for the well-being of the child. I am already very critical of digital media in early childhood education, so AI in the educational field just makes me shake my head” (Q228). There are fears that portfolio work and observations will become too impersonal as a result of AI.
Some voices argue that AI jeopardises educational responsibility and quality. Content could be adopted without being checked, and personal formulations could be replaced by automated texts. Professionals could be restricted in their independence. The professional eye of educators is described as indispensable, especially when it comes to portfolio work and development documentation.
Many professionals feel that they are not sufficiently informed or trained. There is a great need for further training, materials, and clear communication about available tools. Statements such as “I don’t know anything about it” (Q235) or “I don’t know how AI can be used in education” (Q223) are common. The meaningful use of AI is only considered possible with sufficient expertise and support from specialists. There is a demand for training courses to be offered in a structured manner rather than having to be developed individually.
Furthermore, professionals report that there is often not enough time to integrate AI offerings meaningfully into everyday life.

4.2.2. Critical Comments Regarding the Use of AI in Early Education

The quality of AI content is also critically questioned. There are concerns that programmes such as ChatGPT do not offer reliable sources and that content may be inaccurate, unreflective, or even uninspiring. Further concerns relate to bureaucratisation and the additional effort involved: “I don’t think it saves time, as the documents usually have to be reworked and checked” (Q258).
Even though many have no experience with AI, they recognise its potential for meaningful use. However, this requires that their personal influence is retained and that content is critically reviewed. AI could help to explain unclear content or improve wording: “In educational documentation, AI could be helpful in expressing oneself more professionally and in a more subject-specific manner” (Q164).
Overall, it is clear that attitudes towards AI in the context of early education are highly polarised. While some professionals draw clear boundaries and reject its use, others see opportunities and use concrete examples to show where they already use AI in their everyday educational work.

4.2.3. Specific Applications: Support with Text Work

AI is described by many educators as a helpful tool for planning and documentation. AI can be used, for example, to support the creation of educational and learning stories, to review specialist knowledge, or to prepare educational materials. The time saved is particularly appreciated when preparation time is limited: “Huge time savings when used in a targeted manner” (Q105). AI-supported observation, portfolio, and reflection tools are considered potentially helpful but are currently hampered by limited awareness or availability: “How feasible is the use of AI in relation to evidence-based planning?” (Q23). AI is often used to formulate letters to parents and other texts. Many see this as a way of making work easier and increasing efficiency. It is emphasised that no child data should be entered. Some express concerns that the use of AI could lead to less reflection on or questioning of content: “Content is blindly adopted without comparing and adapting it to one’s own knowledge, personality or nursery group” (Q75). Media literacy and training in the use of AI are considered necessary to ensure its safe and professional use: “I don’t feel sufficiently informed about AI yet. There has been no training on this yet” (Q220).
Many appreciate AI as a source of ideas. It serves as inspiration and broadens their own perspective. At the same time, attitudes remain critical: AI is seen as a tool for gaining knowledge, but it cannot replace pedagogical experience or reflection. “I am convinced that AI should only be used in exceptional cases. I feel confident in my knowledge in my profession and would like to keep it that way!” (Q224).
The direct use of AI in everyday life with children is viewed with scepticism by the majority. Although its potential for individual support and for analysing learning processes is recognised, there is a lack of suitable, trustworthy applications. The lack of empathy in digital systems and the danger of excessive technologisation are seen as problematic. Personal contact and situational, needs-oriented work with children remain the focus.

5. Discussion

This study has several limitations. It is a cross-sectional survey based on self-reported data, which precludes causal inferences and leaves room for social desirability bias. Furthermore, the convenience sampling approach via newsletters and social media introduces a self-selection bias, as primarily tech-savvy or highly motivated individuals were reached, while the regional focus on Styria limits generalisability across Austria and to other countries. Additionally, as AI implementation in early childhood education was still in its early stages during the 2025 survey, these findings represent a snapshot of a rapidly evolving field. Future research should therefore employ longitudinal designs, international comparisons, and qualitative approaches to capture long-term developments and in-depth professional experiences. The present findings paint a nuanced picture of the attitudes of educational professionals towards AI applications in early education. While actual use rates have been low to date, the majority of respondents recognise AI as a potential tool for supporting educational processes. This tension between opportunities and critical attitudes is reflected in both the quantitative data and the open-ended feedback.
The educational discussion about the use of AI in documentation and planning work reveals a complex and nuanced range of opinions. Although actual use has been minimal to date, with only 9.6% of the surveyed professionals in Austria actively using AI, a clear majority of respondents recognise it as a potential tool for supporting educational processes. This discrepancy between positive assessment and practical reluctance is not an isolated case but reflects an international pattern. The accompanying critical stance corresponds to the risks described by Chen and Lin (2024), such as the loss of social interaction and the weakening of critical thinking through the unreflective use of AI. The results thus underline the relevance of an ethically sound orientation framework, such as the POWER principles (Purposeful, Optimal, Wise, Ethical, Responsible), for integrating AI into early childhood education in a meaningful and responsible manner.
The results reveal a complex interplay between openness and critical attitudes, which should be understood not as opposing forces but as concurrent dimensions. Openness to AI in this field mostly relates to functional support, such as easing everyday tasks like planning lessons or writing reports. Critical attitudes, by contrast, are driven by deeply held moral and educational values: professionals’ scepticism concerns less the technology itself than the quality of educational relationships and the integrity of child-centred work. This coexistence of openness and criticism indicates that professionals can recognise AI tools as beneficial instruments while simultaneously weighing their use against ethical risks and data protection concerns.
The low levels of AI use in Austrian early education reflect structural uncertainties rather than technical hurdles: a lack of educational strategies, unclear guidelines, and insufficient training. In the Austrian context, this reflexivity is particularly significant, as the system is characterised by high participation in professional development despite precarious structural conditions, making it a revealing case of professional adaptation. Similarly, Kasinidou et al. (2024) show that despite a fundamental interest in AI, teachers often work without didactic materials, technical equipment, or clear definitions. Both perspectives illustrate that targeted training and institutional anchoring are necessary in order to integrate AI meaningfully, both as a supportive tool and as an educational topic.
Nevertheless, more than half of the respondents expressed concerns about the use of AI in educational planning. This critical attitude is an expression of individual reflection and professional protective mechanisms: data protection, ethical issues, a lack of relationship building, and fears of creative impoverishment characterise these reservations. International studies (Amado-Salvatierra et al., 2024; Chen, 2024) confirm these ethical concerns and underline the need to protect educational values such as empathy, relationships, and situational responsiveness. Interestingly, openness to AI correlates more strongly with professional experience than with academic qualifications. Perceptions of AI are therefore not merely a matter of technical affinity but an indication of professional reflexivity, which varies with professional background: experienced professionals tend to be open to AI because they see it as a pragmatic way to ease their workload, whereas professionals with academic degrees are more sensitive to ethical standards and pedagogical risks. This differentiated view demonstrates that the discussion about AI in the ECEC field is closely tied to professional identity and to the need to protect core pedagogical values.
A multidimensional strategy that goes beyond technical training alone is required to make AI a useful part of early childhood education. Integrating AI into the ECEC sector calls for a professionalisation strategy that differs from generic professional development programmes and fits the way professionals actually work. To meet these needs, the following formats are recommended:
  • Workplace-based professional development (PD): training embedded in daily tasks so that knowledge can be put into practice immediately.
  • Short-form PD: compact educational modules that help ECEC professionals acquire new skills quickly, even when time is scarce.
  • Pilot implementations: safe digital test environments with de-identified data that allow professionals to try out AI tools and assess their usefulness for their specific groups without data protection concerns.
Such collaborative learning opportunities and pilot projects foster not only technical skills but also deeper ethical reflection. This encompasses technological literacy, adaptability, collaboration and mentorship skills, data-driven decision-making, and human-centred methodologies (Kasinidou et al., 2024). At the same time, clear rules are needed to enable ethical reflection and algorithmic competence among all users and to provide professionals with guidance. To ensure that AI does not add to educator workloads, structural support measures are required: a functioning technical infrastructure, clear data protection rules, and adequate resources are all necessary for AI to be used in a way that eases work and improves the quality of education.
Overall, it is clear that AI in early education is neither rejected outright nor endorsed unreservedly. Rather, a differentiated picture of opinions is emerging that opens up space for professional development, collegial exchange, and context-sensitive use.

Funding

This research received no external funding.

Institutional Review Board Statement

This study was conducted in accordance with the Declaration of Helsinki and approved by the Institutional Review Board (Ethics Committee), approval no. GZ. 39/63/63 ex 2024/25.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Acknowledgments

A big thank you goes to all ECEC professionals who participated in the survey and made this research possible. Open access funding was provided by the University of Graz.

Conflicts of Interest

The author declares no conflicts of interest.

References

  1. Amado-Salvatierra, H. R., Morales-Chan, M., Hernandez-Rizzardini, R., & Rosales, M. (Eds.). (2024, March 10–13). Exploring educators’ perceptions: Artificial intelligence integration in higher education. 2024 IEEE World Engineering Education Conference (EDUNINE), Guatemala City, Guatemala. [Google Scholar]
  2. Chen, J. J. (2024). A scoping study of AI affordances in early childhood education: Mapping the global landscape, identifying research gaps, and charting future research directions. Journal of Artificial Intelligence Research, 81, 701–740. [Google Scholar] [CrossRef]
  3. Chen, J. J., & Lin, J. C. (2024). Artificial intelligence as a double-edged sword: Wielding the POWER principles to maximise its positive effects and minimise its negative effects. Contemporary Issues in Early Childhood, 25(1), 146–153. [Google Scholar] [CrossRef]
  4. Cohen, J. (1988). Set correlation and contingency tables. Applied Psychological Measurement, 12(4), 425–434. [Google Scholar] [CrossRef]
  5. Crescenzi-Lanna, L. (2023). Literature review of the reciprocal value of artificial and human intelligence in early childhood education. Journal of Research on Technology in Education, 55(1), 21–33. [Google Scholar] [CrossRef]
  6. Dardanou, M., Hatzigianni, M., Kewalramani, S., & Palaiologou, I. (2023). Professional development for digital competencies in early childhood education and care: A systematic review (OECD Education Working Papers No. 295). OECD. [Google Scholar]
  7. Durrani, R., Iqbal, A., & Akram, H. (2024). Artificial intelligence (AI) in early childhood education, exploring challenges, opportunities and future directions: A scoping review. Qlantic Journal of Social Sciences, 5(2), 411–423. [Google Scholar] [CrossRef]
  8. Ghamrawi, N., Shal, T., & Ghamrawi, N. A. (2024). Exploring the impact of AI on teacher leadership: Regressing or expanding? Education and Information Technologies, 29(7), 8903–8921. [Google Scholar] [CrossRef]
  9. Hascher, T., & Waber, J. (2021). Teacher well-being: A systematic review of the research literature from the year 2000–2019. Educational Research Review, 34, 100411. [Google Scholar] [CrossRef]
  10. Huang, J., Saleh, S., & Liu, Y. (2021). A review on artificial intelligence in education. Academic Journal of Interdisciplinary Studies, 10(3), 206. [Google Scholar] [CrossRef]
  11. Jambunathan, S., & Jayaraman, J. D. (2025). Exploring the efficacy of lesson planning in early childhood education using ChatGPT. Journal of Early Childhood Teacher Education, 1–15. [Google Scholar] [CrossRef]
  12. Jennings, P. A., & Greenberg, M. T. (2009). The prosocial classroom: Teacher social and emotional competence in relation to student and classroom outcomes. Review of Educational Research, 79(1), 491–525. [Google Scholar] [CrossRef]
  13. Kalenda, P. J., Rath, L., Abugasea Heidt, M., & Wright, A. (2025). Pre-service teacher perceptions of ChatGPT for lesson plan generation. Journal of Educational Technology Systems, 53(3), 219–241. [Google Scholar] [CrossRef]
  14. Kasinidou, M., Kleanthous, S., & Otterbacher, J. (2024, September 4–9). “Artificial intelligence is a very broad term”: How educators perceive artificial intelligence? 2024 International Conference on Information Technology for Social Good, GoodIT ’24 (pp. 315–323), Bremen, Germany. [Google Scholar]
  15. Krenn-Wache, M. (2024). ECEC workforce profile. In Early childhood workforce profiles across Europe (p. 8). State Institute for Early Childhood Research and Media Literacy. [Google Scholar]
  16. Löffler, R., Michitsch, V., Bauer, V., Esterl, A., Geppert, C., Mayerl, M., Petanovitsch, A., & Pirstnig, M. (2022). Educational and career paths of graduates from educational institutions and colleges for elementary education: Synthesis report to the Federal Ministry of Education, Science and Research. Federal Ministry of Education, Science and Research.
  17. Mohammed, A. S. (2023). Examining the implementation of artificial intelligence in early childhood education settings in Ghana: Educators’ attitudes and perceptions towards its long-term viability. Online Submission, 2(4), 36–49. [Google Scholar] [CrossRef]
  18. Ng, J., Rogers, M., & McNamara, C. (2023). Early childhood educator’s burnout: A systematic review of the determinants and effectiveness of interventions. Issues in Educational Research, 33(1), 173–206. [Google Scholar]
  19. OECD. (2025). Results from TALIS starting strong 2024: Strengthening early childhood education and care. TALIS, OECD Publishing. [Google Scholar] [CrossRef]
  20. Pölzl-Stefanec, E. (2021). Challenges and barriers to Austrian early childhood educators’ participation in online professional development programmes. British Journal of Educational Technology, 52(6), 2192–2208. [Google Scholar] [CrossRef]
  21. Pölzl-Stefanec, E., & Feierabend, S. (2025). Conditions: Online professional development in early childhood education. Journal of Social and Scientific Education, 2(1), 1–11. [Google Scholar] [CrossRef]
  22. Statistics Austria. (2024). Daycare centres in Austria: Monitoring report 2024/25. Statistics Austria. [Google Scholar]
  23. Uğraş, M. (2024). Evaluation of ChatGPT usage in preschool education: Teacher perspectives. Eğitim ve İnsani Bilimler Dergisi: Teori ve Uygulama, 15(30), 387–414. [Google Scholar] [CrossRef]
Table 1. Socio-demographics of participants.

Education (%)
  Without secondary school certificate: 13.4
  With secondary school certificate: 60.7
  Bachelor’s degree: 18.4
  Master’s degree: 7.5
Gender (%)
  Female: 97.9
  Male: 2.1
Age (%)
  19–24 years: 12.0
  25–34 years: 26.5
  35–44 years: 25.8
  45–54 years: 26.8
  55 years and older: 8.9
Work experience (%)
  1–3 years: 11.7
  4–6 years: 11.0
  7–18 years: 38.1
  19–30 years: 25.8
  Over 31 years: 13.4
Position (%)
  Centre head, released from group duties: 9.7
  Centre head with group duties: 47.2
  Educator: 30.3
  Other: 12.7
Facility (%)
  Nursery (0–3 years): 14.1
  Nursery school (3–6 years): 54.3
  Extended age group (0–6 years): 13.7
Operator (%)
  Private: 25.1
  Public: 55.0
  Church-run: 19.9

Note: N = 292.
Table 2. Mapping of research questions and statistical analyses.

Perceptions of opportunity (openness)
  RQ 1: To what extent does age influence the perception of AI as a pedagogical opportunity among ECEC professionals? Kruskal–Wallis H: H(4) = 14.31, N = 292, p = 0.006 (significant).
  RQ 1a: Is there a significant difference between younger (<35 years) and older (≥35 years) professionals regarding their assessment of AI potential? Mann–Whitney U: U = 7695.50, Z = −3.50, N = 291, p < 0.001, r = 0.21 (small effect).
  RQ 2: How does the level of education (academic vs. non-academic) affect the openness toward integrating AI into educational processes? Mann–Whitney U: U = 5703.50, Z = −2.49, N = 291, p = 0.013, r = 0.15 (small effect).
  RQ 3: What is the relationship between years of professional experience and the willingness to utilise AI applications in daily teaching? Kruskal–Wallis H: H(4) = 11.28, N = 292, p = 0.024 (significant).
  RQ 3a: Do career starters (<6 years) and experienced professionals (≥19 years) differ significantly in their willingness to use AI? Mann–Whitney U: U = 2898.00, Z = −2.72, N = 180, p = 0.007, r = 0.20 (small effect).

Critical attitudes
  RQ 4: Is there a correlation between the age of ECEC professionals and their critical reflection on AI integration? Kruskal–Wallis H: H(4) = 2.70, N = 292, p = 0.609 (not significant).
  RQ 4a: Do critical attitudes toward AI differ significantly between age groups (<35 vs. ≥35 years)? Mann–Whitney U: U = 9552.00, Z = −0.71, N = 291, p = 0.480, r = 0.04 (negligible).
  RQ 5: To what extent does educational attainment influence the critical assessment of AI risks? Kruskal–Wallis H: H(4) = 6.61, N = 292, p = 0.158 (not significant).
  RQ 5a: Does the level of academic education serve as a predictor for critical reflection skills regarding AI? Mann–Whitney U: U = 6879.50, Z = −0.39, N = 291, p = 0.696, r = 0.02 (negligible).
  RQ 6: How does the length of professional experience relate to a professional’s critical attitude toward AI? Kruskal–Wallis H: H(4) = 0.75, N = 292, p = 0.946 (not significant).
  RQ 6a: Is there a significant difference in scepticism toward AI between career starters and experienced professionals? Mann–Whitney U: U = 3720.00, Z = −0.13, N = 180, p = 0.896, r = 0.01 (negligible).

Note. N = case number of the specific comparison; p = significance level (exact); r = effect size. The interpretation of r follows Cohen (1988): 0.10 (small), 0.30 (medium), and 0.50 (large).
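For readers who wish to reproduce this type of analysis, the following sketch shows how the non-parametric tests and effect sizes reported above can be computed with SciPy. The data here are synthetic (the item-level responses are not public), and the Z value is derived from the normal approximation to U without tie correction, so the numbers are illustrative only.

```python
# Sketch of the Mann-Whitney U and Kruskal-Wallis analyses (synthetic data).
import math
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Hypothetical 4-point Likert responses for two age groups (<35 vs. >=35).
young = rng.integers(1, 5, size=112)
old = rng.integers(1, 5, size=179)

# Mann-Whitney U test (two-sided), as in RQ 1a.
u, p = stats.mannwhitneyu(young, old, alternative="two-sided")

# Z from the normal approximation (no tie correction) and r = |Z| / sqrt(N).
n1, n2 = len(young), len(old)
mu = n1 * n2 / 2
sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
z = (u - mu) / sigma
r = abs(z) / math.sqrt(n1 + n2)
print(f"U = {u:.2f}, p = {p:.3f}, r = {r:.2f}")

# Kruskal-Wallis H across five age bands (group sizes from Table 1), as in RQ 1.
groups = [rng.integers(1, 5, size=n) for n in (35, 77, 75, 78, 26)]
h, p_kw = stats.kruskal(*groups)
print(f"H(4) = {h:.2f}, p = {p_kw:.3f}")
```

With random data, both tests should of course come out non-significant; the point is the computation of U, Z, H, and r, mirroring the note above (r = Z/√N, interpreted per Cohen, 1988).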
Table 3. Descriptive data.

Item 1: I already use AI applications (e.g., for documentation, planning, or reflection) in my teaching practice.
  Completely agree 9.6%, agree 24.7%, slightly disagree 19.6%, completely disagree 46.0%; M = 3.02, SD = 1.04, n = 292.
Item 2: I see AI applications as an opportunity to provide targeted support for educational processes (e.g., observation, language training, portfolio work).
  Completely agree 19.9%, agree 41.2%, slightly disagree 21.0%, completely disagree 17.9%; M = 2.37, SD = 0.99, n = 292.
Item 3: I plan to make greater use of AI applications in my educational work in the future (e.g., for documentation, planning, or reflection).
  Completely agree 15.1%, agree 33.7%, slightly disagree 31.3%, completely disagree 19.9%; M = 2.56, SD = 0.97, n = 292.
Item 4: I feel sufficiently qualified to use AI applications in a reflective and responsible manner in my educational practice.
  Completely agree 22.0%, agree 28.2%, slightly disagree 26.5%, completely disagree 23.4%; M = 2.51, SD = 1.07, n = 292.
Item 5: I am critical of the use of AI applications in educational work.
  Completely agree 21.3%, agree 34.4%, slightly disagree 31.6%, completely disagree 12.7%; M = 2.36, SD = 0.95, n = 292.
Table 4. Group comparisons on perception of AI as an educational opportunity: socio-demographic characteristics, test procedures, and effect sizes.

Age (Kruskal–Wallis H: H(4) = 14.31, p = 0.006)
  19–24 years: n = 35, MR = 126.96
  25–34 years: n = 77, MR = 124.42
  35–44 years: n = 75, MR = 151.80
  45–54 years: n = 78, MR = 159.69
  55 years and older: n = 26, MR = 177.77
Age, dichotomised (Mann–Whitney U: U = 7695.50, Z = −3.50, p < 0.001, r = 0.21)
  <35 years: n = 112, MR = 125.21
  ≥35 years: n = 179, MR = 159.01
Education (Kruskal–Wallis H: H(4) = 13.62, p = 0.009)
  Without secondary school certificate: n = 32, MR = 125.98
  With Matura: n = 197, MR = 156.34
  Bachelor: n = 44, MR = 132.98
  Master: n = 13, MR = 113.85
  Diploma: n = 5, MR = 65.10
University degree (Mann–Whitney U: U = 5703.50, Z = −2.49, p = 0.013, r = 0.15)
  No degree: n = 229, MR = 152.09
  Degree: n = 62, MR = 123.49
Work experience (Kruskal–Wallis H: H(4) = 11.28, p = 0.024)
  1–3 years: n = 34, MR = 111.57
  4–6 years: n = 32, MR = 144.31
  7–18 years: n = 111, MR = 142.19
  19–30 years: n = 75, MR = 154.93
  Over 31 years: n = 39, MR = 171.06
Pairwise comparisons by professional experience (Mann–Whitney U)
  Experienced (7–18 years), n = 111, MR = 105.95 vs. experienced professionals (≥19 years), n = 114, MR = 119.87: U = 5544.00, Z = −1.68, p = 0.093, r = 0.11
  Career starters (≤6 years), n = 66, MR = 77.41 vs. experienced professionals (≥19 years), n = 114, MR = 98.08: U = 2898.00, Z = −2.72, p = 0.007, r = 0.20
  Career starters (≤6 years), n = 66, MR = 83.54 vs. experienced (7–18 years), n = 111, MR = 92.25: U = 3302.50, Z = −1.15, p = 0.251, r = 0.09

Note: MR = mean rank; r calculated as Z/√N; effect sizes interpreted according to Cohen (1988): r = 0.10 (small), r = 0.30 (medium), and r = 0.50 (large).
Table 5. Group comparisons of critical attitudes towards AI: socio-demographic characteristics, test procedures, and effect sizes.

Age (Kruskal–Wallis H: H(4) = 2.70, p = 0.609)
  19–24 years: n = 35, MR = 138.34
  25–34 years: n = 77, MR = 155.61
  35–44 years: n = 75, MR = 141.32
  45–54 years: n = 78, MR = 140.25
  55 years and older: n = 26, MR = 158.60
Age, dichotomised (Mann–Whitney U: U = 9552.00, Z = −0.71, p = 0.480, r = 0.04)
  <35 years: n = 112, MR = 125.21
  ≥35 years: n = 179, MR = 159.01
Education (Kruskal–Wallis H: H(4) = 6.61, p = 0.158)
  Without secondary school certificate: n = 32, MR = 125.98
  With Matura: n = 197, MR = 156.34
  Bachelor: n = 44, MR = 132.98
  Master: n = 13, MR = 113.85
  Diploma: n = 5, MR = 65.10
University degree (Mann–Whitney U: U = 6879.50, Z = −0.39, p = 0.696, r = 0.02)
  No degree: n = 229, MR = 152.09
  Degree: n = 62, MR = 123.49
Work experience (Kruskal–Wallis H: H(4) = 0.75, p = 0.946)
  1–3 years: n = 34, MR = 111.57
  4–6 years: n = 32, MR = 144.31
  7–18 years: n = 111, MR = 142.19
  19–30 years: n = 75, MR = 154.93
  Over 31 years: n = 39, MR = 171.06
Pairwise comparisons by professional experience (Mann–Whitney U)
  Experienced (7–18 years), n = 111, MR = 105.95 vs. experienced professionals (≥19 years), n = 114, MR = 119.87: U = 6025.50, Z = −0.65, p = 0.518, r = 0.04
  Career starters (≤6 years), n = 66, MR = 77.41 vs. experienced professionals (≥19 years), n = 114, MR = 98.08: U = 3720.00, Z = −0.13, p = 0.896, r = 0.01
  Career starters (≤6 years), n = 66, MR = 83.54 vs. experienced (7–18 years), n = 111, MR = 92.25: U = 3540.00, Z = −0.39, p = 0.697, r = 0.03

Note: MR = mean rank; r calculated as Z/√N; effect sizes interpreted according to Cohen (1988): r = 0.10 (small), r = 0.30 (medium), and r = 0.50 (large).
Table 6. Frequencies of thematic categories in open-ended responses.

Scepticism (47)
  High scepticism without providing reasons: 30
  No scepticism without providing reasons: 17
Use (34)
  Educational planning: 13
  Formulating text: 7
  As a collection of ideas: 7
Concerns (23)
  Data protection and data security: 5
  Ethical and social issues: 37
  Lack of individualisation and relationship building: 40
  Cognitive and creative impoverishment: 34
  Lack of knowledge and qualifications: 39
  Loss of educational quality and professionalism: 25

Note: The open-ended responses were grouped by topic and can be divided into three main areas: specific applications, concerns raised, and general scepticism about the use of AI in early education. These categories provide a structured overview of the main topics. N = 314.