Article

University Students’ Conceptualisation of AI Literacy: Theory and Empirical Evidence

Department of Information Science and Librarianship, Faculty of Arts, Masaryk University in Brno, 602 00 Brno, Czech Republic
Soc. Sci. 2024, 13(3), 129; https://doi.org/10.3390/socsci13030129
Submission received: 11 December 2023 / Revised: 20 February 2024 / Accepted: 21 February 2024 / Published: 23 February 2024

Abstract

This research endeavours to systematically investigate the multifaceted domain of AI literacy, given the pervasive impact of artificial intelligence on diverse facets of contemporary human existence. The inquiry is motivated by a fundamental question posed to educators: how best to cultivate AI literacies and competencies and how these proficiencies are structured and influenced. Employing a rigorous two-part methodology, the initial phase scrutinises 28 studies from the SCOPUS database, unveiling five distinct discourses germane to AI literacy. Subsequently, the second phase involves the administration of questionnaires to 73 students, whose responses undergo thematic analysis to discern patterns within the four domains delineated by Ng et al. The ensuing discourse underscores a pivotal revelation: despite formal adherence to established discourses, the conceptualisation of AI literacy necessitates a departure from conventional perspectives. Ethical principles, elucidated by students, emerge not merely as individual components but as integral facets of a broader societal literacy profile, thereby advocating a paradigm shift towards social reflection. This novel insight prompts a critical re-evaluation of AI literacy’s prevailing assumptions and conceptual frameworks, urging a transition towards models grounded in ecological or network dynamic interactionist principles.

1. Introduction

The development of artificial intelligence (AI) over the last ten years has undeniably become a key topic of discussion in economics, philosophy, the labour market, art, and education. Current education systems in many countries are based on the concept of industrial society, and the impact of AI on specific areas of industry (in a broad sense) will be considerable (Mian et al. 2020; Coşkun et al. 2019). It is the challenge of education to prepare students to work with artificial intelligence (Yi 2021; Ng et al. 2021b). Therefore, new literacies, such as AI literacy, are beginning to be discussed (Ng et al. 2021a). This study aims to analyse the topic based on the available literature and, at the same time, to present the results of our empirical study among university students on the subject of AI literacy and their perceptions of this phenomenon.
AI is a relatively robust field of computer science with a long history (Benko and Lányi 2009; Haenlein and Kaplan 2019; Muthukrishnan et al. 2020). It can be traced back at least to the work of Alan Turing (the Church–Turing thesis) in the late 1930s and early 1940s (Boker and Dershowitz 2022). In this conception, artificial intelligence can be understood as a technical model of human thinking. The Turing test was, from the beginning, fixated on speech games, i.e., dialogue, and the indistinguishability of human–human from human–machine interaction (Pinar Saygin et al. 2000). In the literature, the concepts of weak artificial intelligence, strong artificial intelligence (Braga and Logan 2017), and possibly superintelligence (Baum 2018; Bostrom 2016) mark the three stages of AI development. Until 2022, one could speak of a clear focus on weak AI, while in the last year, more and more attention has been paid to more general concepts. In the Turing paradigm, the goal of AI may be to achieve machines that can solve problems as efficiently as humans. However, the goal should rather be that a particular class of problems can be solved more efficiently by AI systems (Wang et al. 2016; Hassabis 2017). A general psychological problem in the relationship between humans and artificial intelligence is the lack of clarity in the definition of human intelligence (Braga and Logan 2017; Baum 2018) and the growing criticism of the concept of its stability or strict rationality (Damasio 1994).
The development of artificial intelligence in recent years is affecting all areas of human life—Manyika (2017) estimates that by 2030, nearly half of US jobs will be at risk of automation. Other parts of the Western cultural sphere can be expected to be similarly affected. Today, we can see the emergence of automation and Industry 4.0 (Mian et al. 2020) based on the collaboration of humans and machines (Ghobakhloo 2020). AI is transforming the work of journalists, teachers, and marketers and making inroads into medicine and pharmacy (Wang and Siau 2019; Stray 2019; Howard 2019; Rajpurkar et al. 2022). Today, it is hard to find an area of human work where AI is not transforming the content of jobs or creating new jobs and opportunities. Education must look for ways to respond to these transformations.
Artificial intelligence in education has long been associated with computationally demanding tasks, which limited it to university or other professional education for a relatively long time. Recently, however, technological changes have made it possible to teach AI at a practical level to primary school pupils (for example, the Scratch tutorial (Estevez et al. 2019) has tasks working with AI), libraries and tools using AI are readily available through cloud services and open libraries (TensorFlow, MS Azure), and tools using AI to solve everyday tasks are widely available to all (Parekh 2017; Lee 2022). Interrogating written texts using ChatGPT, transcribing or coding conversations, generating images, or searching for relevant literature are common in students’ work (Iskender 2023; Zhai 2022; Lo 2023). Thus, the question of how to teach students to work with AI becomes of practical and broad relevance.
Literacy was traditionally understood as the ability to read, write, and later perform elementary numerical operations. Today, literacy can be understood as a set of skills and knowledge necessary for life in a particular society and culture (Wilkinson and Bruch 2012; Keefe and Copeland 2011; Gross and Latham 2012). Thus, with the growing complexity and technologisation of society, we can speak of the emergence of new literacies such as information, data, digital, legal, or health literacy (Lawless et al. 2016; Bawden 2001; Vuorikari Rina et al. 2022). Artificial intelligence transforms the world, so we can identify a separate AI literacy (Cetindamar et al. 2022; Eguchi et al. 2021; Ng et al. 2023b), which can be intuitively understood as the ability to use and critically reflect on AI systems in everyday life (Neyland 2019). Its more precise conceptualisation and definition is the goal of this study.

AI Literacy

We used a systematic review approach to describe the current state of knowledge (Torres-Carrion et al. 2018; Ginieis et al. 2012; Mareš 2013), for which we used the following search query in the SCOPUS database:
TITLE-ABS-KEY (“AI Literacy”) AND (LIMIT-TO (EXACTKEYWORD, “Ai Literacy”) OR LIMIT-TO (EXACTKEYWORD, “Artificial Intelligence Literacy”) OR LIMIT-TO (EXACTKEYWORD, “AI Literacy”)) AND (LIMIT-TO (DOCTYPE, “ar”)).
The results were reduced to studies that address the topic of AI literacy, and we limited them to studies that had been cited at least once. Our goal was not to produce a complete systematic review but to conduct a basic conceptualisation of the topic using a systematic and methodologically transparent research sample. We further restricted the results to journal articles, motivated by the lower quality of proceedings papers and the often very different conceptualisation found in other document types (books and book chapters), which would have severely limited research of this kind. In total, we worked with 29 studies that we identified as relevant (i.e., actually related to the topic we were addressing). The initial set found with the above search query contained 39 documents (also analysed in Figure 1, Figure 2 and Figure 3).
According to SCOPUS data (Figure 1), the number of studies has been increasing in recent years, and the topic is gradually becoming an essential subject of research interest. The SCOPUS database provides results that should have passed a sound peer review process and thus offer some guarantee of quality. The WoS database could be used similarly. The search (and thus the fixation of results) was performed on 9 August 2023.
Regarding disciplinary focus (Figure 2), studies from the social sciences (mainly education) and computer science are dominant due to the topic’s relationship to computing. At the same time, Figure 2 shows that the issue is not concentrated in one discipline but provides a robust interdisciplinary overlap.
The last chart (Figure 3) shows the countries (by the authors’ universities) in which the authors work on the topic. The USA (11 studies) and especially China (four studies from mainland China and nine from Hong Kong, thirteen in total) show the most robust research interest in this area. They are followed by European countries (Germany, the UK, Austria, Belgium) and East Asian countries (South Korea, Taiwan). Two studies come from Canada.
We carefully read the 28 selected documents and answered the research questions:
  • What tools are most often used to investigate AI literacy empirically?
  • Which target groups are the studies addressing?
  • Are empirical or theoretical studies predominant?
  • How do different studies define AI literacy?
Regarding research instruments, six studies worked with a questionnaire; four were survey studies; five were theoretical; three worked with some form of document or artefact analysis; two with interviews; and two studies worked with tests. Other studies worked with methods that were not repeated. There is a clear predominance of quantitative approaches over qualitative ones. The topic’s novelty is associated with a relatively high proportion of purely theoretical studies.
For the target groups, primary school (to which we added studies targeting children aged 5–7) was the most frequently represented with ten studies, university students with nine, adults with six, and secondary school students with four. The total is higher than the number of investigations because some studies had multiple target groups.
There were 17 empirical studies and 12 theoretical studies. This division has some limitations—the theoretical studies included review studies that are not conceptualisations of the phenomenon, while the empirical ones included all those with an empirical component, a significant proportion of which (six studies) were associated with an application component. The division between theoretical and empirical is not entirely practical given the novelty of the phenomenon, as most authors attempt to conceptualise the topic in some way without having a sufficient theoretical underpinning to do so—the review study, pedagogical experiment, test, or observation are merely tools for the initial framing of and reflection on the phenomenon.
The studies do not work with a single definition of AI literacy, but at least five different discourses can be identified, each emphasising different aspects of the phenomenon (Table 1). The first category, AI literacy as a competence for everyday life, is interesting because it adheres to the original definition of literacy as a particular instrument for living in a changing society, whether at the social or individual level. It can be said to represent a specifically humanistic position, which opposes the last category, AI literacy as a prerequisite for future success in the labour market, which emphasises the neoliberal discourse of labour-market success as the goal of education. At the same time, it is clear that the modes of education in these two concepts will be fundamentally different.
The other two discourses try to understand the phenomenon as non-isolated but specifically structured. The conception of AI literacy as part of a competence structure typically emphasises its relation to other competencies such as digital literacy, statistical literacy, and data literacy. In this conception, it is a form of new literacy constituted within a particular multiliteracy concept, where it forms a specific niche. What is crucial for educational practice here is that it cannot be developed in isolation but instead emerges as a consequence of the development of other literacies. This conception is significantly opposed by the self-contained structuring of AI literacy as a composite structure, which understands the phenomenon as a new, separate competence that has its own internal structure but acts autonomously as a whole. Understood this way, it makes sense to treat it autonomously in the education process: to define its own educational contents, courses, and activities. It is more than an abstraction or an integration of other literacies because it constitutes a literacy in its own right.
The fifth discourse has no counterpart, and its thought construction is based on a highly technical worldview. AI literacy as a form of technical knowledge and skills rests on the assumption that it is a particular technical skill. One first learns to create AI systems, to program, and to apply ready-made frameworks to specific tasks; one is thus primarily a computer expert. AI literacy is approached here as a conservative extension of this technical foundation with further detailed knowledge and skills. It is, therefore, not a competency a priori available to all but a specific skill for highly specialised experts.
AI literacy can be seen as a prerequisite for living in a world where AI is increasingly influential in all areas of human life. Dai et al. (2020) emphasise that AI literacy impacts self-awareness and the willingness to use AI-enabled systems, which can be reflected in their actual application in different areas of human life. Su and Zhong (2022) relate AI literacy to coping with everyday life and think of it as the ability, knowledge, and attitude towards AI and its use in everyday life. Leichtmann et al. (2023) think of it as the general ability of users to understand and use AI technology. Laupichler et al. (2022) understand AI literacy as the ability to use AI in practice. Fyfe (2023) offers a similar approach, emphasising that the transformation brought about by AI is not intuitively manageable. Yang (2022) highlights the coupling of understanding the principles of AI systems with the ability to apply them in everyday life. A more encompassing notion is associated with Kaspersen et al. (2022), who argue that AI literacy becomes a prerequisite for full participation in society, with the term literacy indicating that the goal is not only to develop children’s instrumental skills but also a critical understanding of the manifestations of power and ideology in AI technologies, and thus their personal and societal implications.
In some respects, the opposite approach can be seen in studies that focus not on personal development and coping with everyday life but on success in the labour market. Eguchi et al. (2021) emphasise a person’s ability to predict future outcomes as a crucial competitive advantage. Williams et al. (2023) underline that the future job market will require the ability to work with AI as an essential job competency. At the same time, they point out that the educational emphasis on this aspect should not limit critical discussion and exploration of the implications of AI implementation in society. Cetindamar et al. (2022) talk about four domains that make up the content of AI literacy—technology skills (and working with data), competencies for work (linking AI and one’s domain), HCI and collaboration within hybrid systems, and the ability to learn throughout life. At the same time, they point out that part of AI literacy must include employee engagement with AI and a drive to make work processes more efficient through AI. Henry et al. (2021) emphasise the connection between the technical and ethical levels, stressing that critical reflection on AI in all respects is at the core of AI literacy but cannot be separated from the technical background.
The question is whether AI literacy is a new literacy, a component of a multiliteracy structure, or something completely new. Some studies place AI literacy in the existing field of other competencies and try to analyse the phenomenon within them. Long et al. (2021) emphasise the link between AI literacy and scientific and computer literacy, pointing out that it overlaps with data literacy, with an emphasis on machine learning. Wiljer and Hakim (2019) discuss how AI literacy will influence the transformation of practice in specific fields, with process modifications in four areas: 1. the principles of data management; 2. the fundamentals of statistics and algorithmic decision making; 3. data visualisation and the ability to interpret it; and 4. understanding how business or clinical processes will change due to the integration of AI technologies in a specific field. Wienrich and Carolus (2021) note that the term AI literacy is too broad and that it would be helpful to narrow it down to particular subfields, as AI systems are now available almost everywhere. At the same time, the authors point out a big difference between the competence to use an AI system regularly and the ability to conceptualise it. Humans know AI tools but do not understand them, which poses a potential problem. Ng et al. (2023b) discuss teacher training in AI literacy and highlight the importance of linking this literacy to the professional domain of teachers. In a study by Carolus et al. (2023), AI literacy is seen as a sub-domain of digital competencies but is further structured according to sub-applications.
In a particular context, an oppositional approach is offered by studies emphasising a certain autonomy and independence of AI literacy, usually based on the work of Ng et al. (2021b), who understand it as a set of four components: (1) knowledge and understanding of AI; (2) the use and application of AI; (3) the creation and evaluation of AI tools; and (4) AI and ethics, possibly in another analogous composite structure. This division is explicitly advocated by Ng et al. (2022) and is also employed by Ng et al. (2023a). Kong (2014) also works with four components of AI literacy: project work, knowledge of AI concepts, the ethical aspects of working with AI, and reflection on the relationship and responsibility between AI and humans. Southworth et al. (2023) adopt the same four domains as Ng et al. (2021b) but add a fifth domain labelled “AI Enablement”, which contains a set of knowledge and skills that are only marginally related to AI (such as programming) but are essential for AI literacy per se. According to Chen and Lin (2023), AI literacy must have five important characteristics: the student must engage with the technology purposefully, optimally, wisely (reflectively), ethically, and responsibly with respect to AI.
The last group of studies focuses on the technical aspect of AI literacy and tries to show that it relates to the ability to work with AI at the level of application development or design, although this does not mean that it is reduced to the technical aspect only. However, technical skills play a central role here. Lin et al. (2021) emphasise that even non-engineering students should learn to work with AI because it will be essential. The content is linked to the use of AI in competencies transferred from STEM, which are complemented by ethical reflection. Mertala et al. (2022) emphasise the combination of technical and social aspects and stress that systematic education is crucial for their development, as these competencies will always remain only partial in informal courses. According to Adams et al. (2023), AI literacy involves knowledge of concepts, skills, and ethical considerations related to creating and using AI and human–machine collaboration. According to Chen and Lin (2023), AI literacy is associated with the ability and willingness to program AI systems, which then need to be adapted to the needs of a particular student. Yi (2021) argues that AI literacy is the ability to work with information with sufficiently developed technical and metacognitive skills to meet the challenges of the future world; it thus combines metacognitive aspects with technical knowledge and skills. According to Yi (2021), AI literacy is also associated with anticipating future trends and adjusting one’s work and study strategies.

2. Methodology

This research study aims to analyse how students at a humanities university conceptualise and relate to the phenomenon of AI literacy in different dimensions. It aims to map students’ perceptions of this phenomenon and analyse future educational challenges that must be addressed in the following semesters of study.
The study is divided into two parts—the first offers a descriptive self-evaluation by students (Mannila et al. 2018; Tramontano et al. 2021) in various aspects of working with AI; the second focuses on their qualitative reflection on the phenomenon. The qualitative part is structured according to the students’ free-response questionnaire answers.
The construction of the research questions in the qualitative part was inspired by the research of Mertala et al. (2022), although that study targeted primary school students. The questions were also constructed with regard to the four dimensions reported in the study by Ng et al. (2021b). The resulting research design associated with each question thus replicates the research field defined in the literature. The research works with questionnaires containing a self-assessment scale and free responses treated as qualitative data. In this respect, it is a mixed-design approach.
The study aims to answer how students conceptualise AI literacy through a questionnaire and its evaluation. The research design uses thematic analysis (Braun and Clarke 2012; Tuckett 2005), in which responses to individual questions are coded using open coding, tracking the themes addressed in students’ responses. The unique codes are then clustered into thematic clusters, presented and analysed in the table in Section 4.
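To make this coding procedure concrete, the following minimal sketch (in Python) shows how open codes assigned to free-text answers can be tallied into thematic clusters. The codes and cluster names are invented placeholders, not the categories reported below; the actual coding was performed manually in Atlas.ti.

```python
from collections import Counter

# Hypothetical open codes assigned to three free-text answers
# (placeholders, not actual codes from this study).
coded_answers = [
    ["mimics human thinking", "neural network"],
    ["learns from data", "neural network"],
    ["assistant for routine tasks"],
]

# Hypothetical mapping of open codes to thematic clusters.
clusters = {
    "mimics human thinking": "AI as imitation of thought",
    "neural network": "technical conception",
    "learns from data": "learning and adaptability",
    "assistant for routine tasks": "AI as tool or assistant",
}

# Tally how often each thematic cluster occurs across the answers.
theme_counts = Counter(clusters[code] for answer in coded_answers for code in answer)
for theme, n in theme_counts.most_common():
    print(f"{theme}: {n}")
```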

2.1. Research Sample

A total of 73 of the 251 students invited answered the questionnaire, a return rate of 29% for this optional questionnaire. The respondents were students of several university courses of a broader interdisciplinary nature, not fixated on the study of a single discipline.
There were 45 female and 28 male respondents in the sample. Forty-seven students were enrolled in a bachelor’s degree programme, twenty-three in a follow-up master’s degree programme, two in a five-year master’s degree programme, and one in a doctoral programme. Fifty-seven of the students were from the Faculty of Arts. Also represented were students of the Faculty of Informatics (6), the Faculty of Science (4), and the Faculty of Social Studies (4). Other faculties were represented by fewer than four students each.
Two students out of seventy-three wanted to be excluded from the research.

2.2. Research Tool

The research instrument for data collection was a Google Forms questionnaire designed to be entirely anonymous; it is not possible to identify any person from the data. The questionnaire contained questions focused on different areas of information literacy; those relevant for this research are:
  • Describe what you think artificial intelligence means.
  • Describe what AI can now be used for.
  • Describe how you think AI works.
  • Describe why artificial intelligence is used. You can relate your answer to specific areas or answer in general terms.
  • Describe which AI tools you use (if there are more than one, choose one).
  • Describe any ethical issues or challenges you may encounter when working with it.
  • Describe what a person should be able to do to say that they are literate to work with AI.
Students were free to answer these questions. Typically, they responded in one or two sentences, but longer or shorter answers could be found in the sample. Data were exported to Atlas.ti, where open data coding occurred.
These questions were followed by a set of self-assessment items using a Likert scale (Arnold et al. 1967) with values ranging from one (beginner) to seven (expert). Students assessed themselves in the following areas:
  • In the use of specific tools;
  • In reflecting on the ethics of working with artificial intelligence;
  • In the ability to create new tools or processes when working with artificial intelligence;
  • In the knowledge of how artificial intelligence works;
  • In the ability to predict the future development of artificial intelligence;
  • Overall.
Due to the sample size, these data were used only descriptively in this research, aiming to describe the research sample rather than to support more sophisticated statistical analysis. The final questions were demographic, and Google Forms was used to process them.
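As an illustration of this purely descriptive treatment, the following minimal sketch computes per-item means from the 1–7 self-assessment scale; the three response rows are synthetic examples, not data from this study.

```python
# Items of the self-assessment scale (1 = beginner, 7 = expert).
items = ["tools", "ethics", "creating", "knowledge", "prediction", "overall"]

# Synthetic responses for illustration only (not data from this study).
responses = [
    [3, 4, 2, 3, 2, 3],
    [2, 3, 1, 2, 2, 2],
    [4, 3, 2, 3, 3, 3],
]

# Descriptive statistics: the arithmetic mean of each item.
for i, item in enumerate(items):
    scores = [row[i] for row in responses]
    print(f"{item}: mean {sum(scores) / len(scores):.2f}")
```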

2.3. Data Collection and Processing

Data collection was conducted from 19 September 2023 to 9 October 2023. Students were invited to respond via an email sent to all students in the selected courses. The collection was strictly anonymous and avoided exerting any pressure on students to fill in the questionnaire, given the awareness of the discourse of power present in teacher–student relations.
We used the Atlas.ti tool and the open coding method to process the qualitative data. We created an export of the separate answers for each question, read them repeatedly, and then created codes for the categories that appeared significant. A first round of coding was then conducted. In the second round, we reread the text and assigned codes (new ones or codes from the original set) to the responses without codes. The third step involved merging the categories into groups. These groups structure the presentation of the results in Section 3. Individual student statements are selected from a given category to represent essential attitudes.
In this respect, we also took inspiration from the study of Mertala et al. (2022). Individual codes were formulated separately for each question. The questions were analysed separately (with one exception, where the responses about tools were logically related to the users’ experience with AI). We did not work with the continuity of individual responses except where students explicitly referred to their previous answers.

2.4. Research Limits and Ethics

Students participated in the research voluntarily and entirely anonymously. Two students completed the questionnaire but did not want to be included in the study, and their answers were therefore discarded. The fact that the research was conducted at one time among students of several courses of a non-disciplinary character (some of them inter-faculty) further ensured the anonymity of the learners. We did not use demographic data in the interpretation of the data but only to describe the sample. Analysing each question separately reduces the likelihood that any student could be identified. For the publication, a higher level of anonymity is further ensured by translating the responses from Czech and Slovak (the languages of the research) into English (the language of the publication).
No answer in the questionnaire was compulsory, so students could decline to answer questions they perceived as problematic. Collecting responses outside the university’s information system, in an external tool, further reduces the risk of identifying students by their digital footprint. It was not possible to identify a person or group from the data. Students were informed of the research design in the text of the questionnaire and in the accompanying email.
Students were informed that their answers would be used for research. Participation was optional and based on the information in the email and the questionnaire description, so the research can be assessed as wholly voluntary.
We cannot guarantee that some students did not answer untruthfully or did not generate answers in tools such as ChatGPT. At the same time, given the anonymity, the voluntary nature of participation, and the fact that they knew the researcher, they had no motivation to modify and deliberately distort the results. Nevertheless, this factor cannot be ruled out and is one of the possible limitations of the study. On the other hand, the quality of ChatGPT responses in Czech during the period in question corresponded only to a limited extent to the typical communication style of the students in the questionnaire.
The research limits can be seen mainly in three areas. The first is the limited sample size, which is a barrier to the possible generalisability of the conclusions; since we work with the responses qualitatively, we do not perceive this limit as fundamental. The second limit is the composition of the sample—a return rate of around one-third, as well as deliberate sampling of students in particular courses, may lead to some specificity in the responses or, again, a reduction in possible generalisability.
The third significant limitation is the brevity of the answers and the impossibility of asking follow-up questions. Both are linked to the research procedure and the desire to ensure anonymity, which led to relatively short and only loosely formulated answers that cannot be interpreted in greater depth than is done in this research. Follow-up qualitative research asking specific questions in greater depth would be helpful.

3. Results

The results in this section are structured according to the questions in the questionnaire—each question corresponds to a subheading, with the answers further differentiated through thematic analysis. In the first part, we focus on the results of the students’ self-assessment scale. The individual items correspond to the four dimensions of Ng et al. (2021b), with the addition of an “Overall” item. The response scale ran from 1 (beginner) to 7 (expert). We expect a linear dissociation of the items, which is not necessarily a trivially satisfied assumption.
An interesting phenomenon appears in the students’ self-assessment if we compute the arithmetic average of the responses in each area. Students’ overall self-ratings do not simply copy the arithmetic average of the sub-areas (Table 2); they tend to place themselves more extremely than their overall rating would suggest. The expected emphasis on midpoints, typical of scaled scores, thus did not occur. This phenomenon may be associated with undervaluation, or with the sub-aspects not covering the overall idea of AI literacy, or not covering it evenly. Answering this question would require further research. At the same time, we can infer from the responses that students perceive the parts as being of varying importance, since the overall rating does not copy the arithmetic mean, or that some characteristics are missing from the list.
Students in our research (Figure 4) were the least confident in their ability to create new tools or techniques when working with AI (mean competence level of 2.05), followed by their ability to predict future developments in AI (2.14). Conversely, students rated themselves highest in reflecting on the ethics of working with AI (3.18) and in using specific tools (2.89). The high score in ethics is probably related to the educational profile of the students: a significant number of them, students of the Faculty of Arts, have encountered the topic of the ethics of working with AI in their previous curriculum, and the faculty library also covers topics related to selected ethical issues and AI. These two items are also interesting because they show a relatively high uniformity of responses, at least in part of the spectrum of answers offered. The average sub-item competency value was 2.59, compared to the stated overall competency of 2.64.
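As a small arithmetic check of the reported aggregates: the mean for the fifth sub-item (knowledge of how artificial intelligence works) is not quoted individually above, so the sketch below infers it from the stated five-item average of 2.59. The inferred value is a derivation under that assumption, not a reported figure.

```python
# Reported per-item means (1 = beginner, 7 = expert).
reported = {
    "use of specific tools": 2.89,
    "ethical reflection": 3.18,
    "creating new tools or processes": 2.05,
    "predicting future development": 2.14,
}
stated_subitem_mean = 2.59  # reported average across the five sub-items
stated_overall = 2.64       # reported overall self-assessment

# Inferred mean of the unreported fifth item ("knowledge of how AI works"),
# assuming the stated five-item average is exact.
inferred_knowledge = 5 * stated_subitem_mean - sum(reported.values())
print(f"inferred 'knowledge' mean: {inferred_knowledge:.2f}")  # about 2.69
print(f"overall minus sub-item mean: {stated_overall - stated_subitem_mean:.2f}")  # 0.05
```

The gap of 0.05 between the overall rating and the mean of the sub-items is small in absolute terms, but, as noted above, it indicates that students do not derive their overall rating by simply averaging the parts.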

3.1. What Do You Think Artificial Intelligence Means?

Students conceptualise the phenomenon of artificial intelligence in several categories that have limited overlap. The first group is students who relate to AI as a tool that can be used for some purpose and has a particular form of technical implementation.
AI is a neural network that can produce adequate (most likely) output based on the patterns found from the learning data (learned neurons). More generally, I see AI as an effort to mimic human thinking/perception of the world technologically.
Artificial intelligence is a general umbrella term for a fairly broad field. Within AI, we are trying to make machines perform activities with the same quality as humans. As a sub-category, we can see, for example, machine learning, neural networks, deep learning and so on. A subset that has been talked about a lot lately is generative AI, which can generate content. This can be based on so-called transformers for text or diffusion for images.
A form of machine learning, an ever-expanding database.
The responses of all three students show a broader view of AI as a tool with some technical background and limitations. At the same time, misconceptions or technically inaccurate ideas were also present in the students’ answers. Still, since they do not relate to the conceptualisation of the phenomenon, we will not explicitly mention them here.
The second group is students who associate AI with learning and adaptability; for them, the ability to learn is the distinguishing feature that separates intelligent systems from non-intelligent ones. For example:
Artificial intelligence (AI) is a phenomenon with a problematic definition; for me, the critical dimension of AI is learning and observing patterns, based on which it generates, sorts, combines, and personalises (…) digital content.
It is a program or technology capable of learning on its own and solving problems on its own.
A tool or technology that can learn and adapt to the situation. It can process large amounts of data in a short time. It encourages creativity and helps users to develop their thinking and formulations.
Intelligence as such is implicitly related to learning, and the fact that students draw attention to it in this way is relevant to conceptualising the topic. If systems learn (the word is mainly anthropomorphic, or at least biomorphic), they may become an imitation or mimicry of human thought and action:
This is the ability of the program to mimic human thinking.
Model of man and his thinking.
AI is an attempt by humans to mimic human thinking in areas where only the human brain can. So, it is an attempt to create our artificial brain.
It is a “layer” that is capable of searching, sorting, analysing (perhaps with limitations), rearranging, and creating “new” forms of existing multimedia data. While it is inherently more efficient and capable than a human, it is also limited by how humans use it.
To what extent the concepts of imitation or mimicry mean the same thing, or what exact meaning individual students ascribe to their statements, is questionable. Still, the degree of anthropomorphisation of the concept of AI is high. The dominant belief is that it is an attempt to create human thinking, albeit perhaps more powerful or associated with certain limitations, realised through technical means.
In the statements, however, one can also find a view of AI as a kind of assistant or tool extending human possibilities, i.e., not as an imitation of thinking but as a complement or extension of human capacities:
A device that could handle repetitive and programmable activities better than a human so that people can use creative thinking
A tool or technology that can learn and adapt to the situation. It can process large amounts of data in a short time. It encourages creativity and helps users to develop their thinking and formulations.
Technology is a progressively evolving field of computer science and a discovery for humanity that will eventually be the same technological leap as the Internet once was for society. Furthermore, more and more people will use it as a tool for everyday life.
In these examples, it is evident that the students’ approach to the AI phenomenon as a helper is productive in that it opens up the question of being able to work with, use, and learn such systems in a way that will be effective in socio-technical systems connecting human actors with technical ones.

3.2. What AI Can Now Be Used for

Answers to this question are strongly linked to the question “Describe which AI tools you use (if there are more than one, choose one).” Students use AI systems and bring this reflected experience into their answers. The individual responses to the two questions are closely related, with students with more user experience giving more specific examples.
In the answers, we find both more general approaches that relate to broader societal perspectives and specific enumerations. Examples of the first category are the statements:
Artificial intelligence is mainly used to make everyday life easier—it helps to organise the household, control electronic appliances or cars, perform routine tasks in the work process, collect large amounts of data and create statistics from them.
It’s the same thing we use humans for. To create virtual works—texts, audiovisual production, art; to communicate information—data, numbers, concepts; to control processes and machines—vehicles, weapons, production lines, accounting; to manipulate others—fake content, mass emails.
There are two levels to this. One is accessible to people with an internet connection. Here, the use appears to be in text creation, search, image creation and other enhancements. Then there is a more technical, specialist plane into which we lack insight (complete robots—so more physical).
Some perspective is added by the statement of another student who says that:
The better question is what it cannot be used for.
Examples of more specifically targeted responses include:
When studying and completing assignments, it is beneficial in formulating theoretical descriptive passages, summarising longer texts, creating tables from numbers embedded in the text, coding, programming, etc. However, it is necessary to check everything constantly—the error rate is relatively high.
One of the biggest buzzwords is content generation (images using datasets and tools like DALL-E, text using ChatGPT). Still, it is also helpful in creating translations or personalising content on social media.
Prediction and forecasting (car collisions, traffic, …); combination and counting (chess, knowledge competitions); translation; image recognition; …
Students in this area have relatively high experience with individual tools and their practical use. The most frequent explicit references were to ChatGPT (42 responses) and Midjourney (10), arguably the two primary tools that create the AI experience in the anthropomorphic model. Other students mentioned DeepL, Canva, Google, and other tools that do not have such a strong “creative” effect and whose design does not evoke the feeling of imitating human thinking. However, for many students, this is an area strongly associated with AI.
The answers to this and the previous question show the presence of two essential discourses—some of the students perceive AI as a technical tool that can be used to perform specific tasks, while others perceive AI as an imitation of human thinking.

3.3. How Do You Think AI Works?

When analysing the various concepts associated with AI literacy, a recurring theme is the emphasis on understanding how AI works. Whether a comprehensive understanding of the theoretical underpinnings helps with application or with critically reflective work with AI systems is open to debate. Still, some elementary knowledge is necessary for users to consider what a given system can be used for and how.
Some students find distinguishing AI concepts from deterministic algorithms challenging and imagine AI as a set of carefully programmed rules. In this mechanistic view, systems with and without intelligence differ primarily in the complexity of the overall structure. Examples of such statements are:
In a simplified form, a bunch of “if/else” conditions (if “something”, then “something”, otherwise “something”). More accurately and correctly, an artificial neural network is used.
The programmers create algorithms for how it will work and insert a data package as the basis. Based on that, the AI acts or can evolve.
It is programmed so that when we ask it to find specific books on a topic, it will search several libraries and write down certain books. It is easier because sometimes you need to think of particular titles, so it is up to you to see if the book has sufficiently broken down the problem.
The second group consists of wildly inaccurate answers, which suggest that the students do work with AI tools in some way but have no realistic idea of how such programming structures work:
It uses information from all sources in the world.
The Turing test determines whether a machine can have the same mindset as a human. It is a database of systems, data, and imputations applied to other systems, websites, and applications. It is a running program. It runs on algorithms that it learns from working with data.
Both responses point to a broader problem: students have some mechanical knowledge of computer science concepts but cannot work with them adequately because they do not understand them. Examples of answers that can be described as correct or partially correct are:
It consists of two processes (probably). A learning process that works based on gradient descent. The neurons’ weights are used to compute their output function and then the input to the other layers, which are adjusted based on each neuron’s inputs (learning data). Based on the desired output, the neurons change their weights via gradient descent to minimise the deviation between the function produced by the neural network and the desired one (learning data that has the desired outputs beforehand). Somehow, once the neural network learns in this way, the data we want to know the AI’s response to are transferred to the input layer of the neural network, and the neural network selects the output that most closely approximates some pattern (excites specific neurons in each layer, whose values influence the output selection).
It is an artificial neural network with several layers trained with test data. According to the selected test data, it then chooses the most likely option when making decisions. The data entry process can be repeated indefinitely, with a different result.
It depends on the type of AI. It can be either an encoder–decoder-based algorithm (transformers) or diffusion. However, it is about taking a large amount of data to train the model. This training can be either spontaneous or supervised. Based on this, a model is created that the intelligence can use.
Some student responses then focus on the probabilistic aspect of AI systems, which is typical of artificial neural networks, or the inability to track the actual inference process easily:
AI is often thought of as a black box, and so am I. Still, I have a very rough understanding of the mechanics of machine learning, with its emphasis on observing patterns in a dataset and then making decisions based on those patterns.
Search for common themes to a question and calculate the probability of the following word.
Some AI models analyse massive datasets and use probability to predict and optimise. The more data and training available, the more reliable and efficient the AI.
It is impossible to estimate the ratio of “correct” and “wrong” answers from the responses; instead, they form a continuum along which students move. This is an interesting opportunity for further research: to observe how students’ conceptions of AI are linked to their other abilities to think about or work with AI. Students generally find conceptualising the phenomenon challenging, and it would likely be practical to devote more space to it in the curriculum.
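For readers who want to evaluate the quoted conceptions against the mechanism itself, the following minimal sketch implements the loop the more accurate answers gesture at: gradient descent on a single linear “neuron”, with synthetic training data (an illustration only, not part of the study’s method).

```python
import random

# Synthetic "learning data": pairs (x, y) generated from y = 2x + 1.
random.seed(0)
data = [(x, 2.0 * x + 1.0) for x in range(10)]

# One linear "neuron": prediction = w * x + b, weights initialised randomly.
w, b, lr = random.random(), random.random(), 0.01

for epoch in range(200):
    for x, target in data:
        pred = w * x + b
        err = pred - target  # deviation from the desired output
        # Gradient descent: adjust the weights to reduce the squared error.
        w -= lr * err * x
        b -= lr * err

print(f"learned w = {w:.2f}, b = {b:.2f} (true values: 2.0 and 1.0)")
```

The essential point the better student answers capture is visible here: the system does not store explicit rules but repeatedly adjusts weights to minimise the deviation between its output and the desired output.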

3.4. What Ethical Issues or Challenges You May Encounter When Working with AI

Students in our research were well aware of the various partial limits of AI and its use in society. The responses can be divided according to structure—some are enumerative, while others focus on one particular phenomenon. Here, however, we will use (as in the previous sections) a thematic categorisation.
A significant and well-delineated theme concerns authorship and novelty. We can see students’ concerns about committing plagiarism, as well as a broader discussion about what data AI works with, how citation is ensured, and the general understanding of the phenomenon of authorship. It is evident from the responses that, on the one hand, they contain some resentment about how the technology is progressing. Still, at the same time, students are aware that generative AI systems are performative in this area and will necessarily lead to a change in theoretical conceptualisation. Examples of statements are:
Issues of ownership, authorship and plagiarism. Where is the line between helping and inspiring and copying?
For example, AI uses texts, images and pictures from the Internet, i.e., materials that are subject to copyright, and creates texts and images from them. Also interesting is the issue of writing books with AI—the author of such a book is the human who assigned the task, even though the AI draws on books already registered and the assignor made almost no effort in writing.
It does not cite, which means it takes an author’s work from a website and passes it off as its own.
In particular, the problem of authorship and plagiarism, using AI for school assignments, etc.
These statements should also be understood in the context of the specifics of the research sample—a relatively large proportion of the students study information science. At the same time, the fact that they are students increases the emphasis on compliance with copyright and citation standards. The second group of concerns are those whose common denominator is the fear of discrimination and of the social impacts AI may have on selected groups of people. Examples of such statements are:
Racial and discriminatory issues, and also, who decides to restrict them? Other ethical issues could relate to, e.g., the digital reanimation of deceased people, exploitation in music, cinema or images, e.g., fake pornography, fake cover songs (on YouTube, Johnny Cash—Barbie Girl).
Discrimination, diversity. The challenge would be with the available data that AI (ChatGPT) will supply and paint only some of the pictures of some hot topics. Inability to assign emotional/moral ratings for given arguments. So, in pure quantification of ideas, people will say, “Look, this argument has so many pros and cons. Yours has fewer pros, so ours is better, and we’re right.”
Discrimination based on race, sex, age, workability and experience or financial situation in recruitment.
Overall, using artificial intelligence for something that one then passes off as one’s own seems slightly unethical to me, but at the same time, in this day and age, one cannot avoid artificial intelligence. At the same time, it can be a problem; for example, if AI is programmed to follow prejudices, it is not objective and can create unpleasant societal situations.
Discrimination is one of the critical topics described in various recommendations for developing AI systems. The society in which we live carries certain discriminatory forms, prejudices, and structural arrangements that lead to discrimination or to framings of certain phenomena that may manifest themselves in datasets or in the value design of particular systems. We see critical thinking about what data are being worked with and what they can produce as essential.
Other answers touch on the relationship between cognition and ethics, which Floridi (2019b) already points out: AI systems provide information that affects how people think, interact, and view the world. It is in this area that severe ethical problems can arise. Examples of statements in this category:
Lots of potential for abuse, plus a loss of control and an overall poor understanding of how AI models work. If AI systems are so complex in their computations that it is not easy for humanity to check how they arrived at their results, this may bring increasing distrust from users and creators. Society will be reluctant to entrust some tasks to computers unless they are sufficiently tested and secured with additional security measures.
Overall, using artificial intelligence for something that one then passes off as one’s own seems slightly unethical to me, but at the same time, in this day and age, one cannot avoid artificial intelligence. At the same time, it can be a problem; for example, if AI is programmed to follow prejudices, it is not objective and can create unpleasant societal situations.
It is difficult to determine the source from which it gets its information. If I make a decision based on data from it, I have to answer for it. Artificial intelligence does not have to answer for anything.
On closer analysis, it can be said that the technology leads to a loss of a sense of control or security. The narrative of neoliberal social discourse, which expects one to make decisions responsibly and based on knowledge of the situation, gradually breaks down without the ability to understand the problem sufficiently. On the one hand, one is responsible for one’s decision-making (being able to act ethically); on the other hand, one is deprived of the possibility of knowing adequately. The connection between ethics and epistemology is an essential topic in this field.
The fourth area is reflection on social impacts. Students do not see ethics as an individual matter but strongly emphasise its social dimension. They know AI will change social processes, the labour market, and the workplace. Examples of such statements are:
Especially competition for job consultant positions (many positions can be eliminated as AI can answer all questions).
Offers/gives information to the wrong people (e.g., ChatGPT is already capable of putting together a plan to overthrow a government if you ask indirectly) and lacks empathy—it will offer a radical and statistically best solution to a problem. Still, the question remains whether it is the right one.
It is challenging to set any limits on what we should allow AI to control.
It reflects our society. If the data is racist in some way, for example, the AI will act that way. It is the same with gender issues or a Westernised worldview, for example.
The latter statement brings us back to responsibility and discrimination as social-ethical issues that have exceptional urgency in the context of the advent of artificial intelligence and a different content than social ethics would have focused on a few years ago. At the same time, the answers show that students conceptualise the topic—they are aware of the serious problems—but fail to offer solutions. Thus, AI plunges students (and, in their vision, society as a whole) into an environment of uncertainty and a process of finding new solutions and practices.
At the same time, we can see an emphasis on entirely new topics, such as working with the legacy of deceased people and its use in AI applications (see the quote in the discrimination theme, “digital reanimation of deceased people”), and, in general, the problem of incorrectly constructed or non-transparent datasets on which AI systems are trained (in addition to the excerpts above, also, for example):
Flawed information, propaganda by its authors, and biased information.
No sources are given; it makes stuff up when it does not know. However, if you mean something more general: if an analysis of the music of a particular composer is used in the creation of music, is it a work of authorship? And whose? Also, what data does it have available? What about social media data?
In this case, the boundaries between ethics and epistemology, and between ethics and technology, may become blurred. The transparent choice of the dataset, its appropriate setup, and its construction are not just technical details but strongly determine how the whole system works.

3.5. What a Person Should Know to Be Considered AI Literate

In the last question in the questionnaire, we focused on how students imagine a person who could be described as literate about AI. The responses to this question were quite heterogeneous, and when comparing them with the discourses identified in the literature, it is possible to see some similarities.
The first group of statements focused on a critical analysis of the phenomenon of AI. Here, a literate person is seen as someone critical, who works with technology but at the same time maintains a certain distance from it:
To be able to critically distinguish the work of artificial intelligence, to know its limits, and to make decisions according to one’s intuition.
Do not take the AI’s work as perfect; rewrite, proofread, and be careful with what it produces.
Be critical, be analytical.
This notion of distance, control, or an analytical stance is not—in the context of the issues analysed above—a surprising concept, because it reflects the fact that AI is not a tool among tools, not a Heideggerian hammer, but rather a performative agent, something that transforms the society in which we must be able to live. This notion is followed by the emphasis that students place on the ethical side of such a competent or literate human:
It opens up space for virtue and ethics in general. The children already mentioned did not have to have God knows what knowledge to use AI. However, they needed to be taught why AI should not be used for such a purpose.
Such a person knows approximately how these tools work, what the monetary model behind them is, who manages them and what their goal is; understands how they can help in a given area (I assume the person will want to use it for assistance in some specific area, more if necessary); knows how to use it properly (for a chatbot this is, for example, prompting, which is a whole science in itself, and companies offer six-figure salaries for knowing the right “prompting”); and knows where it has weaknesses and where it is often mistaken, and how to potentially correct or mitigate these mistakes.
A literate person is familiar with examples of use cases and knows what can be created with AI and its limits. Furthermore, they acquire essential digital competencies (they know how to use a computer).
The last answer shows how strongly the discourses of ethics and the critical approach are related. We need to look for ways of educating in AI literacy that see ethics as an essential domain, not just as individual ethics or virtue but as social ethics. Ethics is a critical reflection on action and something that enables and is connected to action. Other statements in our research sample related to specific examples of working with AI as a tool:
He should be able to use the tools associated with artificial intelligence and know how it works and how to “control” it.
Create the correct assignment to get the answer he wanted exactly. Sometimes, working with prompts is also challenging and improving it will significantly multiply the use of AI.
Humans should use AI tools to be significantly more efficient with them.
These statements are interesting in that they emphasise the human as the one who actively controls the AI, works with it, and has control or power over it. We can thus see a renewed effort to return to an ethical (in the broad sense) construction of literacy as free will, as the competence to be able to do something. Closely related to this theme is another category of statements that could be linked to practical tasks, with a more concrete and consistent relation to everyday functional activity:
He should be able to generate what he needs with it quickly. Text/image, etc, with minimum errors and the need for further editing. He should also realise that it is still a machine, a detailed tool and search engine and not literally “intelligence or thinking”.
He should be able to generate a professional text including citations, e.g., a final project report or a photograph.
He should be able to use various AI tools for his work and private life; he should be able to explain the basic principles of AI to someone else who does not have this knowledge. He should not be afraid to use these tools and should spread awareness. He should be able to recognise when an AI tool has been used.
The common denominator of these statements is that they concern things humans have to do anyway but for which AI can be helpful or practical. Literacy here is directed towards making everyday life more comfortable, convenient, or efficient. To work with AI, users also need basic computer or digital literacy, as well as linguistic competence, because all the tools work better in English than in Czech or Slovak, the native languages of the respondents:
To work with a computer, to know the applications and their functions. However, de facto one can learn it along the way, as many programs work on their own.
He should have computer experience, speak English and know all the risks.
The last specific group of respondents are those who emphasise that to use a tool, it is necessary to understand its principles: to know how it works, what to expect from it, and what lies in the technical background. Here we can speak of a specific technical discourse:
To know what principle AI works on. Moreover, I see it as a helper and a tool.
For the first option, literacy is knowing the APIs for creating AI, a basic understanding of how AI works, where it can be used, and how to prepare data so that AI can learn from it.
Understands how AI works technically; knows how to use different AI tools effectively—has hands-on experience with AI; is aware of the ethical and social aspects of working with AI.
Importantly, knowledge of principles always stands alongside other factors: utility, ethics, and efficiency. Knowledge of the underlying principles is thus not an end in itself but a means to truly informed, competent handling of AI.

4. Analysis and Discussion

The thematic analysis can be summarised in a table (Table 3). The table shows that each area is differentiated into three to six sub-themes. A specific case was the area of working with tools (what AI can be used for), which yielded more evidence that students work with AI than precise insight into how they conceptualise AI literacy.
The available data from the thematic analysis allow us to conceptualise AI literacy from two perspectives: the one that students themselves perceive as essential, i.e., the students’ own grasp of the concept, and, on the other hand, an analysis of the thematic aspects that students would need to develop for AI literacy if it were to rest on already functional and robust theoretical models.
If we start with the second perspective, it is evident that students lack, first of all, a technical understanding of how AI systems work. The basis for this claim is the self-assessment data, in which it is precisely the technical aspects of the competence profile that students rate as weakest, together with our qualitative data, which suggest that some students do not have a clear or correct understanding of how artificial neural networks work, while another group of students perceives just such technical knowledge of operating principles as essential. In this respect, there is a clear link with the studies analysed above (Henry et al. 2021; Yi 2021). This technical aspect should be significantly reinforced in the education of the students studied. Solid technical background knowledge can then be transferred to other aspects that the students rated as critical, namely ethical aspects such as discrimination analysis, social impacts, and the ability to adapt the tools. It can thus be hypothesised from the data that students’ low scores in predicting future AI developments may be related to a lack of understanding of the technical aspects of AI (Yi 2021).
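To make the self-assessment figures concrete, the Difference row of Table 2 can be recomputed directly from the two rows reported there. The following minimal sketch (illustrative Python, not the original analysis script) shows the arithmetic:

import numpy as np

# Rows of Table 2: students' overall self-rating per level (1 = beginner,
# 7 = expert) and the average of the four sub-domain ratings.
overall = np.array([22, 11, 20, 13, 5, 2, 0])
average = np.array([23.8, 15.8, 14.4, 9.2, 6.6, 2.4, 0.8])

# Difference = Average - Overall: positive at levels 1-2, negative at
# levels 3-4, i.e., the single overall rating sits slightly higher than
# the average of the four sub-domain ratings.
difference = average - overall
print(np.round(difference, 1))  # [ 1.8  4.8 -5.6 -3.8  1.6  0.4  0.8]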
Students thus have a conceptual grasp of AI at the level of a tool that “can do something”, but it seems that for some of them, the technical barriers to understanding the phenomenon concern not only the inability to adapt systems or create new ones, but above all the limits of their thinking about AI as a whole. Clarifying the theoretical foundations could therefore lead to a more critical evaluation and a deeper reflection on AI-related phenomena. At the same time, this is an area in which students have experience and which they conceptualise in a certain way on the basis of their experience with particular tools.
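To make concrete the kind of operating principle at stake (the following example is ours, added for illustration, and was not part of the study or the questionnaire), a single artificial neuron, the elementary unit of the networks that some students could not describe correctly, can be sketched in a few lines of Python:

import numpy as np

# A single artificial neuron: a weighted sum of inputs passed through a
# non-linear activation. "Learning" means adjusting w and b on data, which
# is why such systems behave probabilistically and depend on their training
# datasets rather than on hand-coded rules.
def neuron(x, w, b):
    return 1 / (1 + np.exp(-(np.dot(w, x) + b)))  # sigmoid activation

x = np.array([0.2, 0.9])   # illustrative input features
w = np.array([1.5, -0.8])  # weights (learned in practice; set by hand here)
b = 0.1
print(neuron(x, w, b))     # a value in (0, 1), readable as a probability

Even this toy example separates the picture of classical deterministic algorithms, which Table 3 shows many students hold, from the data-driven, probabilistic picture that a correct understanding of artificial neural networks requires.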
Students declare an ability to work with the tools. Their explicit user experience was directed towards systems based on generative approaches to language (ChatGPT) or images (Midjourney) (Cobb 2023; Borji 2023). Still, their actual experience can be expected to be broader, as many aspects of working with AI may go unnoticed by the students or remain unmentioned in the research. This area is covered by students’ own activity, perhaps better than a formal curriculum could cover it (Ng et al. 2023b).
Students also have experience using the tools and perceive their ethical implications. They name both micro-ethical topics, focusing on working with specific tools in specific life situations (corresponding, for example, to the context of Fyfe (2023)), and broader social-ethical themes (Borji 2023; Dorrien 2009; Bowker et al. 2009). However, they lack the embedding of these phenomena in theoretical frameworks and approaches, and the dominance of intuitive reasoning is evident; it needs to be complemented and developed through critical discussion. Teaching has much to build on in this area, and in its complexity and systematicity it is probably irreplaceable.

Student Perspective on AI Literacy

The second perspective is the students’ conceptualisation of the AI literacy phenomenon itself. This is mainly evident in Table 3 and throughout Section 3. If we look at the discourses defined in the literature, we can say that students touch upon all of them in some way. There is a belief that understanding technical principles is necessary to work effectively with AI and to think critically about it. As in the theoretical studies, connections between the technical dimension and other reflective or conative components are evident.
We can see quite clearly the emphasis some students place on understanding AI as a tool for coping with everyday life or making it more pleasant, i.e., relating to literacy as a tool for “everyday life”. Part of this area of thought is also the understanding of AI as a tool whose application should have a defined goal (Cerbone 1999).
Some statements point to the connection of AI literacy with digital and language competencies, thus treating it as one of their integral parts, while others emphasise the novelty of the whole phenomenon, often accompanied by the need to rethink the work with data and the construction of datasets.
The discourse related to the future of the labour market is the least represented. Three reasons for this can be identified in the available data. First, the pace of development is so fast that students cannot predict the future of AI, and so they cannot relate to the question of the end of work: the development is too rapid and the effects too complex for them to make sense of and construct such a discourse. A second reason may be the composition of the research sample: students typically place more emphasis on issues concerning their near future in their studies than on the changing labour market itself. It is more relevant for them to ask about citation practices or professional writing than about a labour market that does not yet concern them. The ethical-social component is, however, also crucial: students do not want to adhere to a neoliberal view of the economy and the labour market and are more likely to ask questions about justice, education, and discrimination. They feel the need to discuss the future of the world and its values rather than cling to particular positions; the welfare of the whole seems more important to them than their own success in a competitive society. This phenomenon may also be influenced by specific Czech conditions, namely very low unemployment (around 2.6%, with the EU average at 5.9%; data for August) and specifically low youth unemployment. The economically favourable social climate may support a different perspective than that captured by some other studies.
These aspects are significant in the context of responsible AI development in the European environment (Ulnicane 2022; Floridi 2019a). Such an approach is consistent with the ethics guidelines for trustworthy AI (European Commission 2020), which name seven principles for AI development: human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination, and fairness; societal and environmental well-being; and accountability. The last three are crucial in students’ thinking about the ethical dimension of AI literacy. Interestingly, students thus transfer them from recommendations for developers (i.e., the systems side) to the level of literacy (i.e., the end user, every citizen).
In the context of this last point, we see three aspects as particularly interesting. According to students, they are closely related to AI literacy, yet they receive little attention in the studies we analysed.
Artificial intelligence, alongside other social phenomena (globalisation, climate change), is gradually changing the understanding of ethics’ basic premises and concepts. For students, ethics in relation to AI is not primarily individual (“how should I act”) but social (“how should society act”). This transformation may appear trivial and easy to analyse, but it points, to some extent, to a new conception of ethical reflection (Cobianchi et al. 2022; Porcaro et al. 2023).
The social turn in ethics leads to a greater emphasis on topics such as equality, diversity, and discrimination (European Commission 2020), which students write about in relation to AI literacy. It is thus no longer just a matter of analysing the impact of specific applications on society but of individuals’ ability to consider the impact of their own actions on these phenomena. The interconnectedness of activities and the need to look for tools and ways to use AI systems ethically are essential to understanding the phenomenon (Porcaro et al. 2023; Floridi 2013, 2014).
These insights lead to an understanding of AI literacy as a competency for critical discussion about the future of the world. Students set aside the development of sub-components (Floridi 2019b) as individual advantages and instead emphasise the need for a broader debate about the world they want to live in (Adam et al. 2000; Beck 1992). Through its complexity and novelty, AI literacy can make an exciting contribution to a debate understood in this way. Other topics, such as the future of work, education, or health care, are secondary to AI literacy; reflecting on them explicitly presupposes an understanding of broader social perspectives.
The second important phenomenon that students highlighted, and which we see as a contribution of this study, is the conceptualisation of accountability and control (Floridi 2013). Students repeatedly discuss losing control over technology, the information environment, and their world. At the same time, they feel responsibility for their actions and for society, which creates an unsettling tension: how is it possible to be responsible without having control? This question does not come up explicitly with the students but can easily be read from their answers. Unquestionably, part of AI literacy must be the search for a solution to this problem. One answer may lie in the approach typical of Latour (Latour 1993; Latour and Rose 2021) or of authors criticising the early modern way of thinking (Šíp 2019; Dewey 1923), associated with redefining the notions of control and responsibility. Equally, we can look for answers within modernity thus understood, precisely in the specific definition and structuring of AI literacy.

5. Conclusions

The phenomenon of artificial intelligence is fundamentally transforming many aspects of human activity. Technology is becoming a tool for revealing the hiddenness of the world, but it is also a subject of critical debate and part of social change. In contrast to many historical innovations and inventions, artificial intelligence raises disturbing questions of an entirely new kind: first by working with the concept of “intelligence”, which we have been accustomed to associating primarily with humans, and second through applications that push specific human competencies out of the space of activities that are paid for or that make sense for humans to perform. Technology is thus, to some extent, forcing humans to redefine key concepts to which they have been accustomed, such as the boundaries of thinking (in Czech schools, we expect thinking to be an expression of the individual (Rupert 2010; Lakoff and Johnson 1999; Lakoff 2008); there are exceptions such as dialogic learning, project-based learning, or teamwork, but the primary mindset of Czech schools sees thinking as an individual phenomenon), the uniqueness of humanity, efficiency, responsibility, and control. Artificial intelligence as an innovation also enters the social context in a wholly new and unusual way: it is one of the first literacies that cannot be thought of (as our research has shown) as an individual, entity-bound phenomenon but must be related to through the paradigm of ecosystems with dynamic, complex interactions (Floridi 2019b; Latour and Rose 2021).
Our research identified five fundamental discourses of approaches to AI literacy that are prevalent in the literature:
  • AI literacy as a competence for everyday life;
  • AI literacy as a prerequisite for future success in the labour market;
  • AI literacy as part of the competence structure;
  • AI literacy as a composite structure;
  • AI literacy as a form of technical knowledge and skills.
All of these approaches are also present in our student research, except for one: AI literacy as a prerequisite for future success in the labour market. Instead, the students emphasise social-ethical discussion as a prerequisite for a possible co-shaping of the labour market. The networked or interconnected model of reflection on AI literacy can be seen most clearly in two specific perspectives that students emphasised:
  • Ethics is primarily a social phenomenon: the individual’s behaviour impacts the whole, and if ethical reflection is to be meaningful, it must focus on societal phenomena such as discrimination, the public good, and sustainable development. An ethics pursuing the interests of the individual is egoistic, rooted in neoliberal discourse, and unacceptable to students; it is, moreover, evidently associated with an emphasis on the isolated entity and its behaviour rather than on the whole system.
  • AI literacy should be understood as a manifestation of complexity: on the one hand, the close connection between technology and ethics must be considered, since the two cannot easily be separated; on the other hand, it is a complex phenomenon composed of many interrelated sub-components. The fundamental problem is the rigidity of concepts (control, responsibility, authorship) that rely on overly entity-centric, single-object mental constructs, which fail to conceptualise complex phenomena such as the relationship between AI and society.
The perspective that our research has enabled us to identify can contribute significantly to the academic debate on the real-world teaching of AI literacy and on its theoretical conceptualisation. At the same time, our work has produced a thematic map of university students’ reflection on the basic dimensions of AI literacy, which can be used to design educational activities in this area.
It is clear that the teaching of ethics needs to focus more on social-ethical aspects and that thinking of the relationship between technology and humans in binary oppositional structures is unsustainable in the long run. AI education in schools can build on small, simple examples but must systematically reflect on issues of complexity, global problems, and grand challenges. It is essential to emphasise and critically reflect on the broadest possible analysis of the impact of AI in its social, cultural, and ethical dimensions and in the environment of everyday life. An emphasis on individual or partial knowledge, skills, and competencies does not, in our research, appear to be what students want, nor does it seem the more meaningful path to AI literacy.

Funding

This research received no external funding.

Institutional Review Board Statement

The study works with student responses but was designed so that no collected data would allow a particular response to be linked to a specific student; it is therefore fully anonymous. The students responded voluntarily, and even those who did respond could (and two did) indicate on the questionnaire that they did not wish to be included in the research. The questionnaire did not collect first names, last names, or other identifiers. In the data processing, individual responses were handled separately, limiting the possibility of contextual inference of identity. All texts were translated into English (from Czech and Slovak), eliminating the possibility of identifying a student by linguistic means. No ethics committee approval is required for this type of data collection: projects using anonymous questionnaire surveys of the adult population do not require Ethics Committee assessment.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors on request.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Adam, Barbara, Ulrich Beck, and Joost van Loon, eds. 2000. The Risk Society and beyond: Critical Issues for Social Theory. London, Thousand Oaks and Calif: SAGE. [Google Scholar]
  2. Adams, Catherine, Patti Pente, Gillian Lemermeyer, and Geoffrey Rockwell. 2023. Ethical Principles for Artificial Intelligence in K-12 Education. Computers and Education: Artificial Intelligence 4: 100131. [Google Scholar] [CrossRef]
  3. Arnold, William E., James C. McCroskey, and Samuel V. O. Prichard. 1967. The Likert-type Scale. Today’s Speech 15: 31–33. [Google Scholar] [CrossRef]
  4. Baum, Seth. 2018. Countering Superintelligence Misinformation. Information 9: 244. [Google Scholar] [CrossRef]
  5. Bawden, David. 2001. Information and Digital Literacies: A Review of Concepts. Journal of Documentation 57: 218–59. [Google Scholar] [CrossRef]
  6. Beck, Ulrich. 1992. Risk Society: Towards a New Modernity. Theory, Culture & Society. London, Newbury Park and Calif: Sage Publications. [Google Scholar]
  7. Benko, Attila, and Cecília Sik Lányi. 2009. History of Artificial Intelligence. In Encyclopedia of Information Science and Technology, 2nd ed. Edited by Mehdi Khosrow-Pour. Hershey: IGI Global, pp. 1759–62. [Google Scholar] [CrossRef]
  8. Boker, Udi, and Nachum Dershowitz. 2022. What Is the Church-Turing Thesis? In Axiomatic Thinking II. Edited by Fernando Ferreira, Reinhard Kahle and Giovanni Sommaruga. Cham: Springer International Publishing, pp. 199–234. [Google Scholar] [CrossRef]
  9. Borji, Ali. 2023. Qualitative Failures of Image Generation Models and Their Application in Detecting Deepfakes. Image and Vision Computing 137: 104771. [Google Scholar] [CrossRef]
  10. Bostrom, Nick. 2016. Superintelligence: Paths, Dangers, Strategies. Oxford and New York: Oxford University Press. [Google Scholar]
  11. Bowker, Geoffrey C., Susan Leigh Star, William Turner, and Leslie George Gasser, eds. 2009. Social Science, Technical Systems, and Cooperative Work: Beyond the Great Divide. New York: Psychology Press. [Google Scholar]
  12. Braga, Adriana, and Robert Logan. 2017. The Emperor of Strong AI Has No Clothes: Limits to Artificial Intelligence. Information 8: 156. [Google Scholar] [CrossRef]
  13. Braun, Virginia, and Victoria Clarke. 2012. Thematic Analysis. In APA Handbook of Research Methods in Psychology, Vol 2: Research Designs: Quantitative, Qualitative, Neuropsychological, and Biological. Edited by Harris Cooper, Paul M. Camic, Debra L. Long, A. T. Panter, David Rindskopf and Kenneth J. Sher. Washington, DC: American Psychological Association, pp. 57–71. [Google Scholar] [CrossRef]
  14. Carolus, Astrid, Yannik Augustin, André Markus, and Carolin Wienrich. 2023. Digital Interaction Literacy Model—Conceptualizing Competencies for Literate Interactions with Voice-Based AI Systems. Computers and Education: Artificial Intelligence 4: 100114. [Google Scholar] [CrossRef]
  15. Cerbone, David R. 1999. Composition and Constitution: Heidegger’s Hammer. Philosophical Topics 27: 309–29. [Google Scholar] [CrossRef]
  16. Cetindamar, Dilek, Kirsty Kitto, Mengjia Wu, Yi Zhang, Babak Abedin, and Simon Knight. 2022. Explicating AI Literacy of Employees at Digital Workplaces. IEEE Transactions on Engineering Management 71: 810–23. [Google Scholar] [CrossRef]
  17. Chen, Jennifer J, and Jasmine C Lin. 2023. Artificial Intelligence as a Double-Edged Sword: Wielding the POWER Principles to Maximize Its Positive Effects and Minimize Its Negative Effects. Contemporary Issues in Early Childhood 25. [Google Scholar] [CrossRef]
  18. Cobb, Peter J. 2023. Large Language Models and Generative AI, Oh My!: Archaeology in the Time of ChatGPT, Midjourney, and Beyond. Advances in Archaeological Practice 11: 363–69. [Google Scholar] [CrossRef]
  19. Cobianchi, Lorenzo, Juan Manuel Verde, Tyler J Loftus, Daniele Piccolo, Francesca Dal Mas, Pietro Mascagni, Alain Garcia Vazquez, Luca Ansaloni, Giuseppe Roberto Marseglia, Maurizio Massaro, and et al. 2022. Artificial Intelligence and Surgery: Ethical Dilemmas and Open Issues. Journal of the American College of Surgeons 235: 268–75. [Google Scholar] [CrossRef] [PubMed]
  20. Coşkun, Selim, Yaşanur Kayıkcı, and Eray Gençay. 2019. Adapting Engineering Education to Industry 4.0 Vision. Technologies 7: 10. [Google Scholar] [CrossRef]
  21. Dai, Yun, Ching-Sing Chai, Pei-Yi Lin, Morris Siu-Yung Jong, Yanmei Guo, and Jianjun Qin. 2020. Promoting Students’ Well-Being by Developing Their Readiness for the Artificial Intelligence Age. Sustainability 12: 6597. [Google Scholar] [CrossRef]
  22. Damasio, Antonio R. 1994. Descartes’ Error: Emotion, Reason, and the Human Brain. New York: Putnam. [Google Scholar]
  23. Dewey, John. 1923. Democracy and Education: An Introduction to the Philosophy of Education. New York: Macmillan. [Google Scholar]
  24. Dorrien, Gary. 2009. Social Ethics in the Making: Interpreting an American Tradition. Hoboken: John Wiley & Sons. [Google Scholar]
  25. Eguchi, Amy, Hiroyuki Okada, and Yumiko Muto. 2021. Contextualizing AI Education for K-12 Students to Enhance Their Learning of AI Literacy Through Culturally Responsive Approaches. KI—Künstliche Intelligenz 35: 153–61. [Google Scholar] [CrossRef] [PubMed]
  26. Estevez, Julian, Gorka Garate, and Manuel Grana. 2019. Gentle Introduction to Artificial Intelligence for High-School Students Using Scratch. IEEE Access 7: 179027–36. [Google Scholar] [CrossRef]
  27. European Commission. 2020. Directorate General for Communications Networks, Content and Technology. In The Assessment List for Trustworthy Artificial Intelligence (ALTAI) for Self Assessment. Luxembourg: Publications Office. [Google Scholar]
  28. Floridi, Luciano. 2013. The Ethics of Information. Oxford: Oxford University Press. [Google Scholar] [CrossRef]
  29. Floridi, Luciano. 2014. The Fourth Revolution: How the Infosphere Is Reshaping Human Reality. Oxford: OUP Oxford. [Google Scholar]
  30. Floridi, Luciano. 2019a. Establishing the Rules for Building Trustworthy AI. Nature Machine Intelligence 1: 261–62. [Google Scholar] [CrossRef]
  31. Floridi, Luciano. 2019b. The Logic of Information: A Theory of Philosophy as Conceptual Design. Oxford: Oxford University Press. [Google Scholar]
  32. Fyfe, Paul. 2023. How to Cheat on Your Final Paper: Assigning AI for Student Writing. AI & SOCIETY 38: 1395–405. [Google Scholar] [CrossRef]
  33. Ghobakhloo, Morteza. 2020. Industry 4.0, Digitization, and Opportunities for Sustainability. Journal of Cleaner Production 252: 119869. [Google Scholar] [CrossRef]
  34. Ginieis, Matías, María-Victoria Sánchez-Rebull, and Fernando Campa-Planas. 2012. The Academic Journal Literature on Air Transport: Analysis Using Systematic Literature Review Methodology. Journal of Air Transport Management 19: 31–35. [Google Scholar] [CrossRef]
  35. Gross, Melissa, and Don Latham. 2012. What’s Skill Got to Do with It?: Information Literacy Skills and Self-Views of Ability among First-Year College Students. Journal of the American Society for Information Science and Technology 63: 574–83. [Google Scholar] [CrossRef]
  36. Haenlein, Michael, and Andreas Kaplan. 2019. A Brief History of Artificial Intelligence: On the Past, Present, and Future of Artificial Intelligence. California Management Review 61: 5–14. [Google Scholar] [CrossRef]
  37. Hassabis, Demis. 2017. Artificial Intelligence: Chess Match of the Century. Nature 544: 413–14. [Google Scholar] [CrossRef]
  38. Henry, Julie, Alyson Hernalesteen, and Anne-Sophie Collard. 2021. Teaching Artificial Intelligence to K-12 Through a Role-Playing Game Questioning the Intelligence Concept. KI-Künstliche Intelligenz 35: 171–79. [Google Scholar] [CrossRef]
  39. Howard, John. 2019. Artificial Intelligence: Implications for the Future of Work. American Journal of Industrial Medicine 62: 917–26. [Google Scholar] [CrossRef]
  40. Iskender, Ali. 2023. Holy or Unholy? Interview with Open AI’s ChatGPT. European Journal of Tourism Research 34: 3414. [Google Scholar] [CrossRef]
  41. Kaspersen, Magnus Høholt, Karl-Emil Kjær Bilstrup, Maarten Van Mechelen, Arthur Hjort, Niels Olof Bouvin, and Marianne Graves Petersen. 2022. High School Students Exploring Machine Learning and Its Societal Implications: Opportunities and Challenges. International Journal of Child-Computer Interaction 34: 100539. [Google Scholar] [CrossRef]
  42. Keefe, Elizabeth B., and Susan R. Copeland. 2011. What Is Literacy? The Power of a Definition. Research and Practice for Persons with Severe Disabilities 36: 92–99. [Google Scholar] [CrossRef]
  43. Kong, Siu Cheung. 2014. Developing Information Literacy and Critical Thinking Skills through Domain Knowledge Learning in Digital Classrooms: An Experience of Practicing Flipped Classroom Strategy. Computers & Education 78: 160–73. [Google Scholar] [CrossRef]
  44. Lakoff, George. 2008. Women, Fire, and Dangerous Things: What Categories Reveal about the Mind. Chicago: University of Chicago Press. [Google Scholar]
  45. Lakoff, George, and Mark Johnson. 1999. Philosophy in the Flesh: The Embodied Mind and Its Challenge to Western Thought. New York: Basic Books. [Google Scholar]
  46. Latour, Bruno. 1993. We Have Never Been Modern. Cambridge: Harvard University Press. [Google Scholar]
  47. Latour, Bruno, and Julie Rose. 2021. After Lockdown: A Metamorphosis. Cambridge and Medford: Polity Press. [Google Scholar]
  48. Laupichler, Matthias C., Dariusch R. Hadizadeh, Maximilian W. M. Wintergerst, Leon Von Der Emde, Daniel Paech, Elizabeth A. Dick, and Tobias Raupach. 2022. Effect of a Flipped Classroom Course to Foster Medical Students’ AI Literacy with a Focus on Medical Imaging: A Single Group Pre-and Post-Test Study. BMC Medical Education 22: 803. [Google Scholar] [CrossRef] [PubMed]
  49. Lawless, J., Coleen E. Toronto, and Gail L. Grammatica. 2016. Health Literacy and Information Literacy: A Concept Comparison. Reference Services Review 44: 144–62. [Google Scholar] [CrossRef]
  50. Lee, Hye-Kyung. 2022. Rethinking Creativity: Creative Industries, AI and Everyday Creativity. Media, Culture & Society 44: 601–12. [Google Scholar] [CrossRef]
  51. Leichtmann, Benedikt, Christina Humer, Andreas Hinterreiter, Marc Streit, and Martina Mara. 2023. Effects of Explainable Artificial Intelligence on Trust and Human Behavior in a High-Risk Decision Task. Computers in Human Behavior 139: 107539. [Google Scholar] [CrossRef]
  52. Lin, Chun-Hung, Chih-Chang Yu, Po-Kang Shin, and Leon Yufeng Wu. 2021. STEM Based Artificial Intelligence Learning in General Education for Non-Engineering Undergraduate Students. Educational Technology & Society 24: 224–37. [Google Scholar]
  53. Lo, Chung Kwan. 2023. What Is the Impact of ChatGPT on Education? A Rapid Review of the Literature. Education Sciences 13: 410. [Google Scholar] [CrossRef]
  54. Long, Duri, Takeria Blunt, and Brian Magerko. 2021. Co-Designing AI Literacy Exhibits for Informal Learning Spaces. Proceedings of the ACM on Human-Computer Interaction 5: 1–35. [Google Scholar] [CrossRef]
  55. Mannila, Linda, Lars-Åke Nordén, and Arnold Pears. 2018. Digital Competence, Teacher Self-Efficacy and Training Needs. In Proceedings of the 2018 ACM Conference on International Computing Education Research. Espoo: ACM, pp. 78–85. [Google Scholar] [CrossRef]
  56. Manyika, James. 2017. A Future That Works: Automation, Employment, and Productivity. Available online: https://www.jbs.cam.ac.uk/wp-content/uploads/2020/08/170622-slides-manyika.pdf (accessed on 10 December 2023).
  57. Mareš, Jiří. 2013. Přehledové Studie: Jejich Typologie, Funkce a Způsob Vytváření. Pedagogická Orientace 23: 427–54. [Google Scholar] [CrossRef]
  58. Mertala, Pekka, Janne Fagerlund, and Oscar Calderon. 2022. Finnish 5th and 6th Grade Students’ Pre-Instructional Conceptions of Artificial Intelligence (AI) and Their Implications for AI Literacy Education. Computers and Education: Artificial Intelligence 3: 100095. [Google Scholar] [CrossRef]
  59. Mian, Syed Hammad, Bashir Salah, Wadea Ameen, Khaja Moiduddin, and Hisham Alkhalefah. 2020. Adapting Universities for Sustainability Education in Industry 4.0: Channel of Challenges and Opportunities. Sustainability 12: 6100. [Google Scholar] [CrossRef]
  60. Muthukrishnan, Nikesh, Farhad Maleki, Katie Ovens, Caroline Reinhold, Behzad Forghani, and Reza Forghani. 2020. Brief History of Artificial Intelligence. Neuroimaging Clinics of North America 30: 393–99. [Google Scholar] [CrossRef]
  61. Neyland, Daniel. 2019. Introduction: Everyday Life and the Algorithm. In The Everyday Life of an Algorithm. Palgrave Pivot. Cham: Palgrave Pivot. [Google Scholar] [CrossRef]
  62. Ng, Davy Tsz Kit, Jac Ka Lok Leung, Jiahong Su, Ross Chi Wui Ng, and Samuel Kai Wah Chu. 2023a. Teachers’ AI Digital Competencies and Twenty-First Century Skills in the Post-Pandemic World. Educational Technology Research and Development 71: 137–61. [Google Scholar] [CrossRef]
  63. Ng, Davy Tsz Kit, Jac Ka Lok Leung, Kai Wah Samuel Chu, and Maggie Shen Qiao. 2021a. AI Literacy: Definition, Teaching, Evaluation and Ethical Issues. Proceedings of the Association for Information Science and Technology 58: 504–9. [Google Scholar] [CrossRef]
  64. Ng, Davy Tsz Kit, Jac Ka Lok Leung, Samuel Kai Wah Chu, and Maggie Shen Qiao. 2021b. Conceptualizing AI Literacy: An Exploratory Review. Computers and Education: Artificial Intelligence 2: 100041. [Google Scholar] [CrossRef]
  65. Ng, Davy Tsz Kit, Min Lee, Roy Jun Yi Tan, Xiao Hu, J. Stephen Downie, and Samuel Kai Wah Chu. 2023b. A Review of AI Teaching and Learning from 2000 to 2020. Education and Information Technologies 28: 8445–501. [Google Scholar] [CrossRef]
  66. Ng, Davy Tsz Kit, Wanying Luo, Helen Man Yi Chan, and Samuel Kai Wah Chu. 2022. Using Digital Story Writing as a Pedagogy to Develop AI Literacy among Primary Students. Computers and Education: Artificial Intelligence 3: 100054. [Google Scholar] [CrossRef]
  67. Parekh, Rajesh. 2017. Designing AI at Scale to Power Everyday Life. Paper presented at the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Halifax, NS, Canada, August 13–17; Halifax: ACM, p. 27. [Google Scholar] [CrossRef]
  68. Pinar Saygin, Ayse, Ilyas Cicekli, and Varol Akman. 2000. Turing Test: 50 Years Later. Minds and Machines 10: 463–518. [Google Scholar] [CrossRef]
  69. Porcaro, Lorenzo, Carlos Castillo, Emilia Gómez, and João Vinagre. 2023. Fairness and Diversity in Information Access Systems. arXiv arXiv:2305.09319. [Google Scholar]
  70. Rajpurkar, Pranav, Emma Chen, Oishi Banerjee, and Eric J. Topol. 2022. AI in Health and Medicine. Nature Medicine 28: 31–38. [Google Scholar] [CrossRef]
  71. Rupert, Robert D. 2010. Cognitive Systems and the Extended Mind. Oxford: Oxford University Press. [Google Scholar]
  72. Šíp, Radim. 2019. Proč školství a jeho aktéři selhávají. Brno: Masarykova univerzita. [Google Scholar]
  73. Southworth, Jane, Kati Migliaccio, Joe Glover, Ja’Net Glover, David Reed, Christopher McCarty, Joel Brendemuhl, and Aaron Thomas. 2023. Developing a Model for AI Across the Curriculum: Transforming the Higher Education Landscape via Innovation in AI Literacy. Computers and Education: Artificial Intelligence 4: 100127. [Google Scholar] [CrossRef]
  74. Stray, Jonathan. 2019. Making Artificial Intelligence Work for Investigative Journalism. Digital Journalism 7: 1076–97. [Google Scholar] [CrossRef]
  75. Su, Jiahong, and Yuchun Zhong. 2022. Artificial Intelligence (AI) in Early Childhood Education: Curriculum Design and Future Directions. Computers and Education: Artificial Intelligence 3: 100072. [Google Scholar] [CrossRef]
  76. Torres-Carrion, Pablo Vicente, Carina Soledad Gonzalez-Gonzalez, Silvana Aciar, and Germania Rodriguez-Morales. 2018. Methodology for Systematic Literature Review Applied to Engineering and Education. Paper presented at the 2018 IEEE Global Engineering Education Conference (EDUCON), Santa Cruz de Tenerife, Spain, April 17–20; Tenerife: IEEE, pp. 1364–73. [Google Scholar] [CrossRef]
  77. Tramontano, Carlo, Christine Grant, and Carl Clarke. 2021. Development and Validation of the E-Work Self-Efficacy Scale to Assess Digital Competencies in Remote Working. Computers in Human Behavior Reports 4: 100129. [Google Scholar] [CrossRef]
  78. Tuckett, Anthony G. 2005. Applying Thematic Analysis Theory to Practice: A Researcher’s Experience. Contemporary Nurse 19: 75–87. [Google Scholar] [CrossRef]
  79. Ulnicane, Inga. 2022. Artificial Intelligence in the European Union. In The Routledge Handbook of European Integrations, 1st ed. Edited by Thomas Hoerber, Gabriel Weber and Ignazio Cabras. London: Routledge, pp. 254–69. [Google Scholar] [CrossRef]
  80. Vuorikari Rina, Riina, Stefano Kluzer, and Yves Punie. 2022. DigComp 2.2, The Digital Competence Framework for Citizens: With New Examples of Knowledge, Skills and Attitudes. No. JRC128415. Brussels: Joint Research Centre. [Google Scholar]
  81. Wang, Fei-Yue, Jun Jason Zhang, Xinhu Zheng, Xiao Wang, Yong Yuan, Xiaoxiao Dai, Jie Zhang, and Liuqing Yang. 2016. Where Does AlphaGo Go: From Church-Turing Thesis to AlphaGo Thesis and Beyond. IEEE/CAA Journal of Automatica Sinica 3: 113–20. [Google Scholar] [CrossRef]
  82. Wang, Weiyu, and Keng Siau. 2019. Artificial Intelligence, Machine Learning, Automation, Robotics, Future of Work and Future of Humanity: A Review and Research Agenda. Journal of Database Management 30: 61–79. [Google Scholar] [CrossRef]
  83. Wienrich, Carolin, and Astrid Carolus. 2021. Development of an Instrument to Measure Conceptualizations and Competencies About Conversational Agents on the Example of Smart Speakers. Frontiers in Computer Science 3: 685277. [Google Scholar] [CrossRef]
  84. Wiljer, David, and Zaki Hakim. 2019. Developing an Artificial Intelligence–Enabled Health Care Practice: Rewiring Health Care Professions for Better Care. Journal of Medical Imaging and Radiation Sciences 50: S8–S14. [Google Scholar] [CrossRef] [PubMed]
  85. Wilkinson, Carroll Wetzel, and Courtney Bruch, eds. 2012. Transforming Information Literacy Programs: Intersecting Frontiers of Self, Library Culture, and Campus Community. ACRL Publications in Librarianship 64. Chicago: Association of College and Research Libraries, A Division of the American Library Association. [Google Scholar]
  86. Williams, Randi, Safinah Ali, Nisha Devasia, Daniella DiPaola, Jenna Hong, Stephen P. Kaputsos, Brian Jordan, and Cynthia Breazeal. 2023. AI + Ethics Curricula for Middle School Youth: Lessons Learned from Three Project-Based Curricula. International Journal of Artificial Intelligence in Education 33: 325–83. [Google Scholar] [CrossRef] [PubMed]
  87. Yang, Weipeng. 2022. Artificial Intelligence Education for Young Children: Why, What, and How in Curriculum Design and Implementation. Computers and Education: Artificial Intelligence 3: 100061. [Google Scholar] [CrossRef]
  88. Yi, Yumi. 2021. Establishing the Concept of AI Literacy: Focusing on Competence and Purpose. JAHR 12: 353–68. [Google Scholar] [CrossRef]
  89. Zhai, Xiaoming. 2022. ChatGPT User Experience: Implications for Education. SSRN Electronic Journal. [Google Scholar] [CrossRef]
Figure 1. Increase in the number of documents over time (documents in 2022 and especially 2023 will be even higher due to the gradual indexing of studies in the SCOPUS database).
Figure 2. Distribution of documents by scientific fields.
Figure 3. Distribution of documents by region.
Figure 4. The graph shows students’ self-assessments. The individual colours indicate the degree of subjectively perceived expertise on a scale of 1, beginner, to 7, expert. The first diagram (Summary) assesses the overall rating of AI competence.
Table 1. Overview of studies and their understanding of AI literacy. The table describes five basic versions of the phenomenon.

Category | Study | Description
AI literacy as a competence for everyday life | (Dai et al. 2020; Su and Zhong 2022; Leichtmann et al. 2023; Laupichler et al. 2022; Kaspersen et al. 2022; Fyfe 2023; Yang 2022) | Studies see this literacy as a prerequisite for successful everyday life: literacy in the true sense of the word. Its absence is a significant handicap to understanding the world in which we live.
AI literacy as a prerequisite for future success in the labour market | (Cetindamar et al. 2022; Eguchi et al. 2021; Williams et al. 2023; Henry et al. 2021) | Studies link AI literacy to future employment, competitive advantage, and competitiveness. Skills related to working with AI need to be developed through concrete activities, applications, and examples with a view to practical application.
AI literacy as part of the competence structure | (Ng et al. 2023b; Long et al. 2021; Wiljer and Hakim 2019; Carolus et al. 2023; Wienrich and Carolus 2021) | Studies understand AI literacy as part of a broader competence field from which it emerges or in which it is constituted. It is not isolated and cannot be developed in isolation; it always stands in a specific structural arrangement with other skills, knowledge, and attitudes.
AI literacy as a composite structure | (Ng et al. 2021b, 2022, 2023a; Kong 2014; Southworth et al. 2023; Chen and Lin 2023) | The definition of AI literacy relies heavily on the work of Ng et al., who view it as a set of four components: (1) knowledge and understanding of AI; (2) use and application of AI; (3) creation and evaluation of AI tools; and (4) AI and ethics, possibly in another analogous composite structure.
AI literacy as a form of technical knowledge and skills | (Yi 2021; Chen and Lin 2023; Lin et al. 2021; Mertala et al. 2022; Adams et al. 2023) | AI literacy is primarily (though not exclusively) associated with using or understanding the technical means of implementing AI in different ways to solve tasks. Sufficient technical and computer science education is thus primary, and AI literacy extends it.
Table 2. Overall corresponds to the final scores of students, Average is constructed from all four sub-domains, and Difference is the difference between Average and Overall.

Level      | 1    | 2    | 3    | 4    | 5   | 6   | 7
Overall    | 22   | 11   | 20   | 13   | 5   | 2   | 0
Average    | 23.8 | 15.8 | 14.4 | 9.2  | 6.6 | 2.4 | 0.8
Difference | 1.8  | 4.8  | −5.6 | −3.8 | 1.6 | 0.4 | 0.8
Table 3. Thematic analysis of student responses.

Question | Subtopics
What do you think artificial intelligence means?
  • AI as a tool
  • AI as an adaptive system
  • AI as a learning system
  • AI as an imitation of thought
  • AI as an extension of human capabilities
What AI can now be used for
  • A social-anthropological perspective
  • A practical-applicational perspective
  • Omnipresence
How do you think AI works?
  • Perspective of classical deterministic algorithms
  • Completely erroneous use of computer science terms
  • Correct understanding of artificial neural networks
  • Emphasis on the probabilistic nature of AI and the importance of datasets
What ethical issues or challenges one may encounter when working with it
  • Copyright issues
  • Discrimination
  • Loss of control, influencing thoughts and actions
  • Social impacts of AI
  • Ethical implications associated with technical errors or problems
What a person should know to be considered AI literate
  • A critical approach
  • Ethical approach
  • Completing specific tasks
  • Improving everyday life
  • Digital competences
  • Knowledge of technical basics
