Article

Assessing the Impact of Prior Coding and Artificial Intelligence Learning on Non-Computing Majors’ Perception of AI in a University Context

1 College of Liberal Arts, Kongju National University, Cheonan 31080, Republic of Korea
2 Department of Education, Chonnam National University, Gwangju 61187, Republic of Korea
* Author to whom correspondence should be addressed.
Information 2025, 16(4), 277; https://doi.org/10.3390/info16040277
Submission received: 5 March 2025 / Revised: 23 March 2025 / Accepted: 27 March 2025 / Published: 29 March 2025

Abstract

Artificial intelligence (AI) has emerged as a critical subject in global educational contexts, not only within computing majors but also across all academic disciplines. This shift mirrors the rise of digital literacy in the late 20th century, positioning AI literacy as a necessary skill for future generations. Despite its importance, there is ongoing debate about what exactly AI literacy entails and the skills it requires. While previous research has explored how computational thinking and varying educational levels affect AI literacy, there is a gap in research on the impact of coding experience and the age at which students first learn about AI, especially for university students in non-computer-based majors. This exploratory study revealed that South Korean university students with prior coding experience consistently demonstrated significantly greater AI literacy than did those without such experience. However, the age at which students first learn about AI does not seem to play a major role in their overall perception of AI. These results suggest that the claim that coding experience is unnecessary for AI literacy warrants additional scrutiny, and further research is needed to fully understand the factors that contribute to AI literacy among university students in non-computer-based majors.

1. Introduction

Since OpenAI introduced ChatGPT, a publicly available artificial intelligence (AI) tool, there has been widespread curiosity about this ‘futuristic’ technology. ChatGPT 3.5 was released to the public in November 2022 and reached an estimated 100 million users by January 2023 [1]. The public introduction of ChatGPT provided educational researchers with an opportunity to explore how AI could benefit teaching on a much larger scale. A systematic review examining the use of AI in higher education research between 2007 and 2018 revealed that only nine percent of the published articles were from education departments [2]. However, a rapid review of ChatGPT in education conducted by the end of February 2023 indicated that 50 studies were published within three months of ChatGPT’s release [3].
The increasing focus on AI in the educational environment has led researchers to question and examine what constitutes AI literacy. Long and Magerko [4] suggest that AI literacy involves developing student competencies in areas such as evaluation, communication, and collaboration with AI technologies. Offering a slightly different perspective, Kong et al. [5] suggest that AI literacy should focus on understanding AI concepts and their application in real-world evaluation and problem solving. Long and Magerko [4] argue that individuals do not necessarily need programming skills (computational literacy) unless they aim to become AI developers. For those who simply want to use AI, possessing digital literacy, particularly in how to use the Internet, is essential. Research by Kong et al. [5] demonstrated that a seven-hour course on AI concepts for university students, covering topics such as machine learning, supervised and unsupervised learning, regression, classification, and clustering, could significantly enhance the AI literacy of students from diverse backgrounds. Importantly, the authors found that programming ability was not required for these gains.
Although evidence suggests that programming knowledge is not essential for significantly enhancing university students’ AI literacy, the impact of early programming experience from elementary, middle, or high school on AI literacy, compared to that of students with no programming experience learning in a university context, remains unclear. In 2015, South Korea implemented block-based and text-based coding in primary and secondary education at designated schools nationwide to foster software awareness. Consequently, current university students have varying backgrounds of programming experience and differ in the age at which they were first exposed to AI concepts. Their AI literacy can be evaluated based on the type of coding experience (block-based coding vs. text-based coding vs. no coding) and the age at which they first learned about AI (elementary school vs. middle school vs. high school vs. university). This study examines whether early exposure to different types of programming (block-based coding, text-based coding, or no coding) influences university students’ AI literacy. Additionally, the study investigates whether there is an optimal age (elementary school, middle school, high school, or university) at which exposure to AI education provides the most significant benefit for enhancing AI literacy among university students.

2. Literature Review

Technology is a fundamental part of everyday life in the 21st century; as a result, children today are introduced to technology at a very young age. Consequently, education systems have increasingly focused on enhancing students’ computational literacy skills through coding programs in early education [6]. The introduction of coding activities during childhood has been linked to improved critical thinking skills, which are crucial for enhancing problem-solving capabilities [7]. Notably, coding can be introduced to children as young as three years old [8]. However, computational thinking activities related to coding must be age-appropriate. Rijke et al. [9] found that students over the age of 10 exhibited more advanced skills in abstraction, which involves discerning relevant details and disregarding irrelevant details [10], than did their younger counterparts. Similarly, the ability to use decomposition—breaking down large or complex problems into smaller, more manageable subproblems—significantly improved in students older than 11 years. For younger learners, activities should be more tangible, offering hands-on experiences. Sullivan and Bers [11] discovered that prekindergarten (pre-K) through second-grade students could learn robotics and programming knowledge effectively. However, pre-K students needed more repetition and adult support. As children grow older, their cognitive abilities develop, enabling them to handle more complex programming tasks and concepts.
The maturation of cognitive abilities not only enhances students’ computational thinking but also raises a debate about the integration of these skills into other subjects. Several studies, such as [12], have found no relationship between computational thinking and academic performance. In contrast, a meta-analysis by Lei et al. [13] reported a medium-sized positive correlation between computational thinking and academic performance in other subjects. However, this increase in academic performance associated with computational thinking was not universal. It was primarily observed in Eastern cultures rather than in Western cultures. The authors suggest that this discrepancy might be attributed to differences in educational approaches, with more integrated curricula in Eastern cultures and more fragmented curricula in Western cultures. This might also explain why the study showed the strongest connection between computational thinking and academic performance in elementary school students, whose curriculum is generally more integrated, compared to secondary school students, with the weakest association observed in university students. Although teaching computational thinking appears to be beneficial for enhancing cognitive skills in young learners, its impact on broader academic performance is complex and varies across different educational and cultural contexts.
The literature provides evidence suggesting that students from prekindergarten through secondary school can benefit from learning programming as a part of computational thinking. However, the effectiveness of teaching computational thinking to university students who have no prior programming experience is more complex. Researchers have indicated that computational thinking can be challenging for university students in non-computing majors, particularly in regard to solving complex problems, due to a lack of systematic approaches for handling such problems [14,15]. Yeh et al. [14] found that while novice learners of computational thinking could remember definitions and arguments, they often struggled with the application and problem-solving aspects. The challenge in defining computational thinking and determining the necessary components for students outside of computing majors is compounded by the need to align with students’ existing schemas and the relevance to their majors. Lyon and Magana [16] argue that higher education could benefit from clearer definitions of computational thinking and more research into effective ways of integrating this type of thinking in university settings.

2.1. Block-Based and Text-Based Coding

An effective method for students to practice computational thinking is coding, which can be categorized into two distinct types: block-based coding and text-based coding, each with its own strengths and weaknesses. Block-based coding employs a drag-and-drop interface of graphical blocks that represent code commands. These blocks can be combined to perform various functions. If students try to combine incompatible blocks, the program prevents them from snapping together, thus preventing the introduction of errors into the code [17]. Programs such as Scratch or Alice are popular among novice learners, as they facilitate the understanding of complex coding structures in an accessible way [18].
In contrast, text-based coding requires learners to manually write lines of code to execute desired functions. This approach is often more challenging for beginners. A study by Altadmri and Brown [19], which analyzed 37 million compilations from more than 250,000 students worldwide, revealed that novices learning text-based coding frequently struggled with syntax errors (e.g., mismatched brackets), type errors (e.g., incorrect method calls), and semantic errors (e.g., missing return statements). The study revealed that while syntax errors tend to decrease over time, occurring mostly at the beginning of the learning process, semantic and type errors often increase later. This trend is likely influenced by the structure of the learning process itself. In the context of teaching novices to code, block-based coding can serve as an introductory tool, helping students grasp the fundamental concepts of coding and syntax before they tackle the more complex task of writing text-based code like professional programmers.
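The three error classes tracked by Altadmri and Brown [19] can be illustrated with short Python analogues. This is a hypothetical sketch for readers unfamiliar with the terminology; the original study analyzed Java compilations, so the exact error names differ.

```python
# Illustrative Python analogues of the three novice error classes
# discussed above (the original study analyzed Java, not Python).

# 1. Syntax error -- e.g., mismatched brackets; rejected before the code runs:
#        total = sum([1, 2, 3      # missing ']' and ')' -> SyntaxError

# 2. Type error -- e.g., calling a method the value does not support:
#        "abc".append("d")         # str has no append() -> AttributeError

# 3. Semantic error -- the code runs, but a missing return makes it wrong:
def double(x):
    x * 2  # the product is computed but never returned

result = double(3)
print(result)  # the caller silently receives None
```

Block-based environments rule out the first class entirely, which is one reason they suit novices: incompatible blocks simply refuse to snap together.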

2.2. AI Literacy

The field of AI encompasses various subfields, including machine learning, natural language processing, and deep learning [2]. Given that these subfields are heavily reliant on programming knowledge and computational thinking, individuals with these skills are likely to exhibit higher levels of AI literacy. In a study involving 865 university undergraduate and graduate students, Celik [20] explored how cognitive absorption, the digital divide, and computational thinking influence AI literacy. The study yielded two significant findings. First, aspects of the digital divide, such as access to technology and its usage, were found to substantially enhance computational thinking skills. Second, improved computational thinking skills were shown to facilitate AI literacy among students. These results suggest that the skills necessary for developing computational thinking may overlap considerably with those needed for AI literacy.
However, one of the challenges in developing AI literacy begins in the K–12 context. AI literacy is a relatively new concept, and most teachers lack the necessary qualifications to design courses that integrate AI into various subjects. Furthermore, undergraduate programs are not adequate for preparing preservice teachers for AI implementation in the classroom [21,22]. Even with professional development (PD) opportunities for current teachers, researchers have noted the limited availability of such programs [23]. Moreover, there is a lack of clarity regarding the specific skills and knowledge K–12 teachers need in order to effectively integrate AI education into their teaching practices [24]. In a mixed-methods study involving 67 experienced (3+ years) elementary teachers from AI-leading schools in South Korea, the authors of [25] assessed the potential skills and knowledge needed for integrating AI education in K–12 settings. The findings suggested that effective implementation of AI education requires teachers to develop strong coding skills, data analysis capabilities, proficiency in using AI technologies, and an understanding of the ethical issues associated with AI. The literature indicates that the development of AI literacy may hinge on concepts such as computational thinking. However, integrating these concepts into the classroom and training teachers accordingly might demand more comprehensive strategies than the typically limited PD opportunities currently available.

2.3. Software Education in Korea

In 2015, the Korean Ministry of Education (KMOE) decided to revise the national curriculum in response to the societal changes brought about by the 4th Industrial Revolution. This revision included the introduction of software education (SW) as a core subject rather than an elective. As a result, 5th and 6th graders were mandated to complete more than 17 h of practical coursework, while middle and high school students were required to complete more than 34 h [26]. Initially, in 2015, 68 schools were designated as SW education schools. This number increased significantly to 1082 by 2019 [27]. A survey conducted by Park [28] across 50 leading schools revealed that, on average, elementary students received approximately 19.1 h of SW education, middle school students 38.25 h, and high school students 49.88 h.
In addition to curriculum hours, staffing and professional development were also key aspects of SW education integration. Schools employed teachers with specialized backgrounds in information and computer technologies to teach these courses. On average, five, ten, and eight specialized teachers were employed in elementary, middle, and high schools, respectively. Furthermore, professional development sessions for teachers and parents were conducted annually. Elementary schools held the most sessions, averaging 2.73 sessions per year, while middle schools and high schools conducted fewer sessions, with 1.42 and 0.88 sessions per year, respectively [28].
The content of SW education in South Korea has been categorized into three distinct areas: unplugged activities, educational programming language (EPL), and physical computing [28]. Unplugged activities, which are exclusive to elementary schools, do not involve the use of computers or technology. Instead, they focus on developing computational thinking through pencil-and-paper activities, games, magic tricks, and competitions [29]. In the realm of EPL, elementary and middle schools primarily use block-based coding languages such as Entry and Scratch, while high school students use the text-based coding language Python. Physical computing, which encompasses the use of robots, drones, and sensor boards, is used at all school levels. However, its use is more prevalent in elementary schools than in middle and high schools [28].
Research focusing on SW education across elementary, middle, and high schools has revealed that students generally exhibit increased awareness of SW, with elementary school students demonstrating the highest level of awareness [27,30]. Furthermore, students who engaged in club activities centered around SW-related content showed greater awareness of and satisfaction with SW education [28,30]. In a study employing multiple regression analysis, Kim et al. [30] reported that both students’ and parents’ awareness, along with their level of interest in SW-related education, had a positive impact on their perceptions of SW education.
Despite the existing studies on computational thinking and AI literacy, notable gaps remain. Specifically, the impact of coding experience and early AI exposure on AI literacy among university students in non-computing disciplines is underexplored. Prior studies have focused on general populations or students explicitly studying computer science. Therefore, our study addresses these shortcomings by investigating how prior coding experience (block-based vs. text-based) and the age at which students first encounter AI concepts influence AI literacy among university students in non-computer-based majors. Identifying these factors will provide valuable insights into curriculum development, potentially guiding educators in designing effective AI literacy programs tailored to diverse student backgrounds and educational experiences.

3. Research Questions

The literature on how coding and the introduction of AI in primary and secondary education impact AI literacy in university students, particularly those in non-computer-based majors, is limited. With South Korea’s curriculum revision in 2015, there exists a cohort of students who have received SW education alongside a cohort with no prior SW education experience. This dichotomy provides a unique opportunity for an early evaluation of how Korea’s SW education initiative influences university students’ AI literacy, especially given the increasing prominence of AI in contemporary society. In this context, AI literacy refers to the skills and competencies necessary for effectively understanding, evaluating, and interacting with AI technologies. This research aimed to examine the influence of early programming and software education on AI literacy among university students in non-computer-based majors. While early coding programs have shown evidence of enhancing computational thinking, and recent studies such as Celik’s [20] suggest that computational thinking is a key component of AI literacy, the specific impact of early SW education on AI literacy in university students pursuing non-computer-based majors remains underexplored. With the increasing public availability of AI and the growing emphasis on AI literacy, this research seeks to address the following questions:
  • RQ1: To what extent does prior coding experience influence AI literacy in university students enrolled in non-computer-based majors?
  • RQ2: How does the age at which students are first exposed to AI components affect their AI literacy in university non-computer-based majors?

4. Methods

4.1. Study Participants

The participants in this study were students enrolled at a university in Seoul, Republic of Korea. All participants were majoring in programs that did not include mandatory AI-focused courses, although they had the option to choose AI-related electives if desired. The survey was distributed and collected online during the summer semester of 2023. The participating students were required to sign a consent form, which explained that their participation involved data collection and that they could withdraw at any time without any penalty. After providing consent, the students were asked to provide demographic information, including gender, major, academic year, prior coding experience, and the age at which they first learned about AI. Following the demographic section, the students responded to survey questions pertaining to their AI literacy. Table 1 shows the demographic data.

4.2. Data Collection

4.2.1. Instruments

The online survey comprised 14 questions adapted from Kang and Lee [31], which previously examined university students’ AI awareness (2 questions), the social impact of AI (8 questions), the self-impact of AI (2 questions), and the need for AI (2 questions). This instrument was selected due to its prior validated use with a similar demographic of university students and its comprehensive coverage of relevant AI-related attitudes and perceptions, aligning closely with the objectives of the current study. When adapting the instrument, particular attention was given to language clarity and contextual appropriateness. The original survey, initially designed in Korean, required careful translation into English to ensure semantic accuracy and consistency. A back-translation method was employed, involving bilingual experts fluent in Korean and English, to verify the accuracy of the translation and minimize language-related discrepancies. No significant translation issues arose during this process, and an independent expert in educational research reviewed the final English version to further ensure linguistic and conceptual integrity. Additionally, pilot testing with a small group of students confirmed that the survey was easily understandable and culturally suitable, resulting in minor revisions for enhanced clarity prior to the main data collection. To measure the reliability of the social impact of AI scale, Cronbach’s alpha was utilized, yielding an alpha of 0.72, indicative of acceptable reliability. The other scales, consisting of only two questions, were unsuitable for Cronbach’s alpha analysis. Instead, Spearman’s rank correlation coefficient was used to assess the relationships between pairs of questions [32]. A significant positive correlation was found for AI awareness, r(220) = 0.47, p < 0.001.
Similarly, the self-impact of AI and the need for AI demonstrated significant positive correlations, with r(220) = 0.15, p = 0.023 and r(220) = 0.52, p < 0.001, respectively. The survey utilized a 5-point Likert scale for each item.
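The reliability checks described above can be reproduced with a short script. The sketch below is illustrative only: the simulated Likert responses stand in for the actual survey data, and the variable names are hypothetical. It computes Cronbach’s alpha for a multi-item scale and Spearman’s rank correlation for a two-item pair.

```python
import numpy as np
from scipy.stats import spearmanr

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # per-item variances
    total_var = items.sum(axis=1).var(ddof=1)    # variance of summed scores
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Simulated 5-point Likert responses (n = 222) -- NOT the study's data.
rng = np.random.default_rng(0)
trait = rng.normal(3.5, 0.8, size=(222, 1))                       # shared latent attitude
social_impact = np.clip(np.rint(trait + rng.normal(0, 0.7, (222, 8))), 1, 5)

alpha = cronbach_alpha(social_impact)  # reliability of the 8-item scale

# Two-item scales are too short for alpha; Spearman's rho is used instead.
rho, p_value = spearmanr(social_impact[:, 0], social_impact[:, 1])
```

An alpha of roughly 0.7 or above is the conventional threshold for acceptable reliability, which matches how the paper interprets its value of 0.72.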

4.2.2. Interviews

Five focus group participants were recruited from the 222 survey respondents based on purposeful sampling criteria, aiming to represent a diverse range of perspectives on AI awareness and attitudes. Participants were specifically selected according to their varied survey responses to ensure differing levels of engagement, interest, and perspectives toward AI. The primary aim of these interviews was to gain in-depth feedback on their survey responses. The interviews were conducted in Korean via Zoom Live, and each session lasted approximately one hour. Thematic analysis was employed to analyze the interview data [33]. Detailed information about the focus group interview participants can be found in Table 2.

5. Results

5.1. RQ1: To What Extent Does Prior Coding Experience Influence AI Literacy in University Students Enrolled in Non-Computer-Based Majors?

A MANOVA test was conducted on the categories measuring AI literacy. Box’s M test indicated that there was a violation of homogeneity (χ² = 33.69, p = 0.028), and the Shapiro–Wilk test indicated a violation of normality (W = 0.968, p < 0.001). Pillai’s trace was used since it is robust to violations of assumptions of homogeneity and distribution [34]. Pillai’s trace indicated that there were significant differences between the groups (Pillai’s trace = 0.171, F(8,434) = 5.07, p < 0.001). Due to these significant findings, ANOVAs were conducted on the subcategories of AI awareness, the social impact of AI, self-impact of AI, and need for AI. Cohen’s d effect sizes were calculated and assessed as small (d = 0.20), medium (d = 0.50), or large (d ≥ 0.80; Cohen, 1988). Table 3 shows the means and standard deviations related to the effect of prior coding experience on AI literacy.
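Pillai’s trace can be computed directly from the between- and within-group sums-of-squares-and-cross-products (SSCP) matrices. The sketch below uses simulated data shaped like this design (three coding-experience groups, four literacy subscales); it is not the study’s dataset.

```python
import numpy as np

def pillai_trace(X, groups):
    """Pillai's trace V = tr(H @ inv(H + E)) for a one-way MANOVA.

    X: (n, p) matrix of dependent variables; groups: (n,) group labels.
    """
    X = np.asarray(X, dtype=float)
    grand_mean = X.mean(axis=0)
    p = X.shape[1]
    H = np.zeros((p, p))  # between-groups (hypothesis) SSCP
    E = np.zeros((p, p))  # within-groups (error) SSCP
    for g in np.unique(groups):
        Xg = X[groups == g]
        diff = Xg.mean(axis=0) - grand_mean
        H += len(Xg) * np.outer(diff, diff)
        centered = Xg - Xg.mean(axis=0)
        E += centered.T @ centered
    return float(np.trace(H @ np.linalg.inv(H + E)))

# Simulated data: three coding-experience groups, four literacy subscales.
rng = np.random.default_rng(1)
groups = np.repeat(["text", "block", "none"], [80, 70, 72])
shift = {"text": 0.4, "block": 0.3, "none": 0.0}
X = np.vstack([rng.normal(3.5 + shift[g], 0.6, 4) for g in groups])

V = pillai_trace(X, groups)  # bounded by s = min(p, groups - 1) = 2
```

Because Pillai’s trace aggregates bounded eigenvalue ratios rather than relying on a single dominant root, it remains comparatively stable when homogeneity and normality assumptions are violated, which is why the paper prefers it here.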

5.2. AI Awareness

To test the effect of prior coding experience (text-based coding, block-based coding, no coding) on AI awareness, an ANOVA was conducted, revealing a significant difference between the groups (F(2,219) = 9.518, p < 0.001). Tukey’s post hoc test indicated that there was a significant difference between text-based coding and no coding experience (p < 0.001; d = 0.63) and between block-based coding and no coding experience (p = 0.018; d = 0.50). Moreover, there was no significant difference between text-based coding and block-based coding (p = 0.745; d = 0.14). Both text-based coding and block-based coding yielded medium effect sizes against the no coding group. There were no other significant differences found.
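The per-subscale pipeline (omnibus one-way ANOVA followed by pairwise effect sizes) can be sketched as follows. The group scores are simulated and merely shaped like the study’s 5-point Likert data; the group sizes and means are illustrative assumptions.

```python
import numpy as np
from scipy.stats import f_oneway

def cohens_d(a, b):
    """Cohen's d using the pooled standard deviation."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    na, nb = len(a), len(b)
    pooled = np.sqrt(((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1))
                     / (na + nb - 2))
    return (a.mean() - b.mean()) / pooled

# Simulated 5-point Likert scores per coding background (NOT the study's data).
rng = np.random.default_rng(42)
text_based  = np.clip(rng.normal(4.0, 0.6, 80), 1, 5)
block_based = np.clip(rng.normal(3.9, 0.6, 70), 1, 5)
no_coding   = np.clip(rng.normal(3.5, 0.6, 72), 1, 5)

# Omnibus test across the three groups.
F, p = f_oneway(text_based, block_based, no_coding)

# Pairwise effect sizes (the paper pairs these with Tukey's HSD p-values).
d_text_vs_none  = cohens_d(text_based, no_coding)
d_block_vs_none = cohens_d(block_based, no_coding)
```

In the paper’s workflow, a significant omnibus F is followed by Tukey’s HSD for the pairwise p-values, with Cohen’s d reported alongside each comparison.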

5.3. Social Impact of AI

An ANOVA determined that there were significant differences between the groups (F(2,219) = 7.075, p < 0.001). Tukey’s post hoc test indicated that the text-based coding group perceived the social impact of AI significantly more strongly than the noncoding group did (p < 0.001; d = 0.56). The block-based coding group was not significantly different from the noncoding group (p = 0.105; d = 0.37), and there was no significant difference between the text-based and block-based coding groups.

5.4. Self-Impact of AI

An ANOVA determined that there were significant differences between the groups (F(2,219) = 7.075, p < 0.001). Tukey’s post hoc tests indicated that the text-based coding group and block-based coding group had significantly greater self-impact ratings of AI than did the noncoding group (p = 0.012; d = 0.44 and p = 0.012; d = 0.52, respectively). There were no other significant differences found.

5.5. Need for AI

An ANOVA determined that there were significant differences between the groups (F(2,219) = 14.49, p < 0.001). Tukey’s post hoc tests indicated that the text-based coding group and block-based coding group rated the need for AI significantly higher than the noncoding group did (p < 0.001, d = 0.72 and p < 0.001, d = 0.75, respectively). There were no other significant differences between the groups.

5.6. RQ2: How Does the Age at Which Students Are First Exposed to AI Components Affect Their AI Literacy in University Non-Computer-Based Majors?

After examining the level of schooling (no learning, elementary school, middle school, high school, university) at which students learned about AI, a MANOVA test was conducted on the categories measuring AI literacy. Box’s M test indicated that there was a violation of homogeneity (χ² = 6.849, p = 0.002), and the Shapiro–Wilk test indicated a violation of normality (W = 0.968, p < 0.001). Similar to research question one, Pillai’s trace was used due to its robustness against violations of assumptions. Pillai’s trace indicated that there were significant differences between the groups (Pillai’s trace = 0.174, F(4,16) = 2.47, p = 0.001). Due to these significant findings, ANOVAs were conducted on the subcategories of AI awareness, social impact of AI, self-impact of AI, and need for AI. Table 4 shows the means and standard deviations of the age at which students learned about AI.

5.7. AI Awareness

An ANOVA revealed that there was a significant difference between the groups (F(4,217) = 7.290, p < 0.001). Tukey’s post hoc test indicated that students who first learned about AI in middle school or high school had significantly greater AI awareness than students who had not learned about AI (p = 0.004, d = 0.90 and p < 0.001, d = 0.97, respectively). Both effect sizes are considered large. There were no other significant differences in the data, although learning about AI in high school approached significance when compared against first learning at university (p = 0.061; d = 0.52), and learning at university approached significance against no previous learning experience (p = 0.053; d = 0.45).

5.8. Social Impact of AI

An ANOVA revealed that there was no significant difference between the groups (F(4,217) = 1.974, p = 0.100). Tukey’s post hoc test confirmed that there were no significant differences between the groups.

5.9. Self-Impact of AI

An ANOVA revealed that there was no significant difference between the groups (F(4,217) = 2.047, p = 0.089). Tukey’s post hoc test confirmed that there were no significant differences between the groups.

5.10. Need for AI

An ANOVA revealed that there was a significant difference between the groups (F(4,217) = 10.403, p = 0.003). Tukey’s post hoc test was performed, and it indicated that only students who began learning about AI at university rated their need for AI significantly higher than students who had never studied AI (p = 0.018; d = 0.51). The effect size was considered medium. There were no other significant differences found between the groups.

5.11. Interview Findings

Five university students participated in a focus group interview to explore their detailed opinions on AI convergence education based on their survey responses. This study employed thematic analysis, dividing the interview analysis into three main themes: (1) the importance of AI education, (2) prior experiences with AI education from K–12, and (3) diverse demands for AI education in liberal arts courses.
Regarding the first theme, all participants expressed a high interest in learning about AI in a liberal arts context. The participants acknowledged the significance of AI applications, influenced by the widespread discussion of the fourth industrial revolution in the media. Additionally, news outlets consistently emphasize the importance of learning about AI for future promising industries. The following excerpt is from one of the interviews:
Currently, everyone in the media is talking about AI and recognizing its importance. I am not in a computer-related major, so I do not directly experience the importance or necessity of class due to my major. It will be helpful to get a job if I have basic skills related to AI. It is one of the good skills to put in my resume (Student C, interview transcript).
Student A provided a similar response to the above comments.
Looking around my friends, I see many people who want to get a job as a software developer by learning computer coding or advanced technology skills. However, the basic AI education classes universities provide are insufficient, so they sometimes go to private academies to obtain certificates related to this AI field. While talking with them, I also need to be ready for AI applications. However, my major is humanities, so I still need to improve in AI (Student A, interview transcript).
These responses indicated that all interview participants recognized the importance of AI education and the necessity of acquiring knowledge and skills in AI applications for employment purposes. However, students not majoring in computer-related fields reported limited exposure and opportunities to learn about AI application skills through formal education. While they might have access to one or two AI convergence courses within their majors, they expressed a need for greater confidence in applying these skills for job-related purposes. The following excerpt illustrates the second theme.
Before entering this university, my high school was designated a SW (Software) leading school, so I took a few classes related to basic computer coding skills or using the Entry apps. Of course, it was not an advanced AI course, but these prior AI learning experiences greatly helped me take courses at the university level. Those courses I took in high school influenced the classes we are taking now at the university (Student B, interview transcript).
Student D also agreed with the above comments about students’ prior AI learning experiences.
I agree with the above comment. I went to local/regional schools, so I did not have a chance to learn about AI-related education in middle and high school. Therefore, when I was admitted to this university and tried to take AI classes as one of the mandatory courses, I had many difficulties. I tried to teach myself while watching YouTube tutorial videos, but it took work (Student D, interview transcript).
These responses suggest that students’ prior experiences with SW or AI learning from K–12 can significantly influence their performance in academic AI courses at the university level. Students with previous AI education tend to approach university courses with greater confidence. In contrast, those lacking prior AI experience often require additional effort and time to keep pace with the AI courses offered by the university. Student E said the following regarding the third theme from the interview analysis:
AI classes in liberal arts are being conducted targeting a general average level that does not fit the individual students’ majors and AI application abilities. For example, some students may want to receive more advanced courses because they are already good at computer coding. In contrast, others may want very basic AI or SW education. I hope that various AI classes are provided so that students can find suitable courses for their interests and AI application skills (Student E, interview transcript).
Student B also agreed with the above statement and continued the explanation.
I feel the same way. Before opening new AI-related courses, I hope the university can survey to determine students’ needs and interests. Particularly for students who are not computer majors, the requirements can vary because they are not experts in AI applications (Student B, interview transcript).
The interview participants expressed a desire for AI-related courses tailored to their interests, levels of AI application skills, and respective majors. This preference stems from the fact that the AI courses currently offered by universities may be too basic for students specializing in AI but challenging for those at beginner levels of AI application skills. Therefore, universities should consider offering AI courses that cater to the diverse, multilevel capabilities of students when designing new AI curricula for liberal arts classes.

6. Discussion

This research examined university students’ AI literacy based on coding experience (text-based, block-based, none) and the age at which they first learned about AI (elementary school, middle school, high school, university, never). The purpose was to examine how coding (and the computational thinking it develops) and the age at which students are introduced to AI concepts contribute to university students’ AI literacy. On a smaller scale, the results also explore the potential impact of the KMOE’s 2015 revised curriculum, which mandated software education in designated schools across Korea, on the AI literacy of students in non-computer-based majors.

6.1. RQ1: To What Extent Does Prior Coding Experience Influence AI Literacy in University Students Enrolled in Non-Computer-Based Majors?

The impact of prior coding experience on AI literacy among university students enrolled in non-computer-based majors has not been extensively explored in the literature. Therefore, the results of this study should be considered within the broader context of coding and computational thinking among university students. These findings are consistent with Celik’s [20] suggestion that learning computational thinking can significantly enhance university students’ AI literacy. One possible explanation for why students with prior text-based coding and block-based coding experience reported significantly greater AI literacy than did those who had never learned about coding is that coding is a fundamental component of creating AI programs or applications. Coding plays a crucial role in AI subfields such as machine learning, natural language processing, and deep learning [2]. Consequently, the computational thinking skills developed through coding could directly impact students’ AI literacy, as they may have a more in-depth understanding of how AI functions and its potential capabilities as a technology.
This research further expands Kong et al.’s [5] findings, which suggest that programming is not a prerequisite for developing AI concepts. Like Kong et al. [5], this study assessed AI literacy among students majoring in liberal arts or other non-computer-based disciplines where programming is not typically needed. Kong et al. [5] suggested that students from diverse majors could acquire AI literacy without a programming background; however, the results of this study indicate that AI literacy is significantly enhanced for students from non-computer-based subjects who have prior programming knowledge. Therefore, these findings imply that if AI literacy becomes an essential skill in future higher education, integrating programming and computational thinking into primary and secondary education could be beneficial. However, it is also plausible to suggest that AI literacy might parallel digital literacy, where increased exposure to AI programs and software enables users to effectively utilize AI tools without necessarily understanding how to program such applications.
In terms of the social impact of AI, the text-based coding condition differed significantly from the noncoding condition, whereas the block-based coding condition did not. This distinction between text-based and block-based coding may indicate that the deeper technical skills required for text-based coding are closely associated with understanding social aspects of AI, such as the future job market, human control over AI, and the technology’s development in the coming years. It is plausible that proficiency in text-based coding gives students the foresight and analytical abilities necessary to assess and comprehend the social implications of AI.
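As a rough, illustrative check on the size of this kind of group gap, a Welch’s t statistic can be computed directly from the descriptive statistics reported in Table 3. This is only a sketch: the study’s own inferential analysis is not reproduced here, and recomputing from rounded summary values can shift the result slightly.

```python
import math

def welch_t(m1, s1, n1, m2, s2, n2):
    """Welch's t statistic and degrees of freedom from group summary statistics."""
    se1, se2 = s1 ** 2 / n1, s2 ** 2 / n2  # squared standard errors
    t = (m1 - m2) / math.sqrt(se1 + se2)
    # Welch-Satterthwaite approximation for the degrees of freedom
    df = (se1 + se2) ** 2 / (se1 ** 2 / (n1 - 1) + se2 ** 2 / (n2 - 1))
    return t, df

# AI awareness, Table 3: text-based coding (M = 6.40, SD = 1.53, n = 81)
# vs. no coding experience (M = 5.49, SD = 1.31, n = 96)
t, df = welch_t(6.40, 1.53, 81, 5.49, 1.31, 96)
print(f"t = {t:.2f}, df = {df:.1f}")
```

The resulting statistic lands well above conventional significance thresholds, which is consistent with the group difference the authors report, although the test itself is not the analysis the authors performed.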
Interviews with students suggest that prior coding experience is beneficial for enhancing university students’ AI literacy. One student shared that he attended a school designated as an SW leading school by the KMOE, and the basic skills acquired during that time were helpful in selecting university courses (Student B). In contrast, another student who did not attend a designated SW school recounted her struggles with a mandatory AI class, ultimately resorting to self-teaching through online videos (Student D).
These data provide evidence that coding programs, even those introducing basic coding concepts, can positively impact students’ AI literacy. This trend is reminiscent of the development of digital literacy, which evolved from a growing awareness of computers in the 1980s to the advent of hypermedia and the Internet and, ultimately, to networks by the 2000s [35]. Similarly, AI literacy is an emerging field that, like digital literacy, warrants further exploration [36]. A deeper understanding of how AI works could enable students to be more informed about its capabilities and applications as the technology advances and becomes more integrated into society. However, this does not necessitate students acquiring the coding skills of a professional programmer. Rather, a foundational understanding of coding concepts that promote computational thinking could significantly enhance AI literacy.

6.2. RQ2: How Does the Age at Which Students Are First Exposed to AI Components Affect Their AI Literacy in University Non-Computer-Based Majors?

While computational thinking and coding knowledge are important factors, they are not the sole determinants of AI literacy. The results concerning the age at which university students began learning about AI suggest that certain aspects of AI literacy may benefit from early exposure, while others may not. Notably, the social impact and self-impact of AI did not differ significantly across the age groups (elementary school, middle school, high school, university, no prior learning). However, AI awareness was significantly higher, with large effect sizes, among students who started learning in middle school (d = 0.90) or high school (d = 0.97) than among those with no prior learning experience. The difference in AI awareness between the high school and university groups approached significance, as did the difference between the university group and the group without prior learning. On the need-for-AI subscale, only students who first learned about AI at university rated the need for AI significantly higher than those with no prior learning experience.
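For readers less familiar with the effect-size metric, Cohen’s d scales a mean difference by a pooled standard deviation, with d ≥ 0.8 conventionally read as a large effect. Below is a minimal sketch using the rounded descriptives from Table 4; because the exact pooling convention the authors used is not stated, values recomputed this way will not exactly match the reported d = 0.90 and d = 0.97.

```python
import math

def cohens_d(m1, s1, n1, m2, s2, n2):
    """Cohen's d using the pooled standard deviation of two groups."""
    pooled_sd = math.sqrt(((n1 - 1) * s1 ** 2 + (n2 - 1) * s2 ** 2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

# AI awareness descriptives from Table 4: (M, SD, n)
no_learning = (5.35, 1.38, 65)
middle_school = (6.619, 1.02, 21)
high_school = (6.718, 1.10, 39)

d_middle = cohens_d(*middle_school, *no_learning)
d_high = cohens_d(*high_school, *no_learning)
print(f"middle school vs. no learning: d = {d_middle:.2f}")
print(f"high school vs. no learning:   d = {d_high:.2f}")
```

Both values clear the d = 0.8 threshold, matching the paper’s characterization of these as large effects.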
AI literacy is a relatively new concept [21,22]; as such, the literature exploring how AI literacy developed through primary and secondary education impacts university-level AI literacy is not extensive. More comprehensive data on the effects of early SW education on AI literacy among university students will likely emerge from the Korean context in the coming years. This expectation stems from the KMOE’s 2015 curriculum revision and its subsequent progressive inclusion of more schools. While direct research on this specific topic is currently limited, there are studies, although indirectly related, and interview transcripts that provide valuable insights into potential explanations for the observed findings.
First, the observation that students who began learning in middle school or high school demonstrated significantly greater AI awareness aligns with the findings of Kim et al. [30], who showed that schools with a focus on software (SW) education had notably greater AI awareness than did those lacking SW programs. Contrary to Kim et al. [30], this study did not identify a significantly greater level of AI awareness among students who began in elementary school than among the other groups. Park [28] reported that secondary schools placed approximately two to two-and-a-half times more emphasis on SW education, with nearly double the number of instructors, compared with primary schools. This increased exposure could explain why students who began learning in secondary school, rather than primary school, developed higher levels of AI awareness. Numerous factors can influence AI literacy in primary and secondary education, but the extent to which these experiences translate to AI literacy in university students is unclear. Therefore, further research is necessary to identify the key variables that significantly contribute to increasing AI awareness among university students.
However, students who first learned about AI at the university level recognized the need for AI significantly more than those with no prior AI learning experience did. This observation could suggest that as students progress through university and approach graduation, they increasingly consider how their studies will affect their future job prospects. Additionally, the recent surge of AI integration into university curricula, possibly influenced by the advent of technologies such as ChatGPT and the growing number of web-based applications built on them, might have heightened students’ awareness of their need for AI knowledge. During the interviews, several students expressed the belief that AI-related skills could enhance their employment opportunities (Student C), and one student noted that while basic AI education classes at university were insufficient, there was a clear need for AI knowledge in the future (Student A). These responses indicate that the escalating emphasis on AI in society is increasingly influencing the academic focus of students in non-computer-based majors at the university level.
The conceptual understanding of AI literacy, including its fundamental components, remains a subject of debate. It is not known whether AI literacy is about grasping concepts that aid in evaluation and problem solving, as suggested by Kong et al. [5], or if it encompasses the development of competencies in evaluation, communication, and collaboration with AI technologies, as proposed by Long and Magerko [4]. Both perspectives imply that programming ability is not a prerequisite for AI literacy, and the findings related to the age at which students learn about AI lend some support to this view. However, when assessing AI literacy in relation to students’ previous coding experience, the results of this study suggest that prior coding experience and computational thinking might indeed be valuable components of AI literacy. Therefore, the factors influencing AI literacy among university students warrant further exploration to gain a more comprehensive understanding.
Overall, these findings should be viewed with nuance, as AI is a fairly recent focus in education. Although the integration of computational thinking into the curriculum might differ holistically between Eastern and Western cultures [13], Eastern cultures are not a monolith. In the Asia–Pacific region, curriculum reforms by major countries with strong technology industries (i.e., China, Japan, Hong Kong, Korea, and Singapore) have impacted pedagogical practices and policies differently [22]. Therefore, practices and policies in South Korea can provide some guidance for countries in the Asia–Pacific region, but due to each country’s unique cultural, educational, and technological landscape, they may not be directly applicable or effective in other contexts. Although South Korea’s practices and policies offer valuable insights, they should serve as a starting point for further adaptation and customization, rather than a blueprint for replication.

7. Limitations

This research has several limitations. Firstly, the small sample size and the fact that the study was conducted at a single university in South Korea restrict the generalizability of the findings. Future research should include larger and more diverse samples from multiple universities across Korea to address this limitation. Secondly, the coding technique utilized for thematic analysis could benefit from further elaboration to improve transparency and replicability. Additionally, the study’s findings may not directly apply to university students in other countries due to South Korea’s standardized national curriculum at the primary and secondary levels. Future research could explore similar studies in varied educational contexts internationally to enhance applicability. Lastly, a deeper discussion regarding the practical implications of the research would further clarify its value for educators and policymakers, providing clear guidance on leveraging the findings to improve AI literacy education.

8. Conclusions

The concept of AI literacy, especially in the context of university students majoring in non-computer-based disciplines such as the liberal arts or humanities, is a subject of emerging debate. These majors, traditionally not associated with computational thinking or coding skills, present a unique challenge for AI literacy. While previous research has largely examined coding for K–12 students and computational thinking for university students, the specific exploration of AI literacy in non-computer-based majors has been insufficient. Historically, AI has predominantly been the domain of computer-based majors, often seeming inaccessible to those without coding backgrounds. However, the introduction of ChatGPT in November 2022 marked a significant shift, making AI tools readily available to a broader range of students, teachers, and researchers. Consequently, AI literacy has now become an increasingly relevant topic across all educational fields.
In light of South Korea’s 2015 initiative to integrate coding and SW education into the curriculum, starting in select schools, forthcoming data about AI literacy education in primary and secondary schools are expected to provide more substantial evidence for the potential benefits of dedicated AI literacy education for non-computer-based majors. This research contributes to the understanding that early SW education can influence AI literacy in non-computer-based university majors. However, questions remain about what drives the acquisition of AI literacy: is it coding and the computational thinking derived from it, the age at which students are first exposed to AI concepts, or a combination of these factors? These questions require further exploration.
For future research, this study recommends several specific areas for further exploration. First, it suggests investigating the particular AI literacy skills that would be most beneficial for students majoring in liberal arts and humanities. Additionally, future studies should focus on determining optimal methods and timing for introducing AI concepts to students who lack traditional computational backgrounds. Another important area is assessing the impact of integrating user-friendly AI tools, such as ChatGPT, into non-technical curricula. Furthermore, examining teacher preparation and professional development programs aimed at enhancing educators’ competencies in teaching AI literacy to diverse student populations will be critical. Specifically, teachers should integrate accessible AI tools like ChatGPT into their curricula to reduce barriers to AI literacy. They should also encourage interdisciplinary approaches that combine AI with traditional liberal arts and humanities content and actively participate in professional development initiatives designed to strengthen their competencies in AI literacy. These steps will significantly improve educators’ abilities to support students in acquiring practical and conceptual AI knowledge.

Author Contributions

Conceptualization, Y.-J.L.; methodology, Y.-J.L. and R.O.D.; validation, Y.-J.L. and R.O.D.; formal analysis, Y.-J.L. and R.O.D.; investigation, Y.-J.L. and R.O.D.; data curation, Y.-J.L.; writing—original draft preparation, Y.-J.L. and R.O.D.; writing—review and editing, Y.-J.L. and R.O.D. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

All procedures performed in this study involving human participants were in accordance with the ethical standards of the Declaration of Helsinki and approved by the Institutional Review Board of Chonnam National University (protocol code 1040198-240403-HR-050-02).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The datasets used and/or analyzed during the current study are available from the corresponding author upon reasonable request.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Hu, K. ChatGPT Sets Record for Fastest-Growing User Base-Analyst Note. 2023. Available online: https://www.reuters.com/technology/chatgpt-sets-record-fastest-growing-user-base-analyst-note-2023-02-01 (accessed on 29 November 2023).
  2. Zawacki-Richter, O.; Marín, V.I.; Bond, M.; Gouverneur, F. Systematic review of research on artificial intelligence applications in higher education–where are the educators? Int. J. Educ. Technol. High. Educ. 2019, 16, 39. [Google Scholar]
  3. Lo, C.K. What is the impact of ChatGPT on education? A rapid review of the literature. Educ. Sci. 2023, 13, 410. [Google Scholar] [CrossRef]
  4. Long, D.; Magerko, B. What is AI literacy? Competencies and design considerations. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, Honolulu, HI, USA, 25–30 April 2020; pp. 1–16. [Google Scholar] [CrossRef]
  5. Kong, S.C.; Cheung, W.M.; Zhang, G. Evaluation of an artificial intelligence literacy course for university students with diverse study backgrounds. Comput. Educ. Artif. Intell. 2021, 2, 100026. [Google Scholar]
  6. Manches, A.; Plowman, L. Computing education in children’s early years: A call for debate. Br. J. Educ. Technol. 2017, 48, 191–201. [Google Scholar] [CrossRef]
  7. Bers, M.U. Coding and Computational Thinking in Early Childhood: The Impact of ScratchJr in Europe. Eur. J. STEM Educ. 2018, 3, 1–13. [Google Scholar] [CrossRef]
  8. Bers, M.U.; González-González, C.; Armas–Torres, M.B. Coding as a playground: Promoting positive learning experiences in childhood classrooms. Comput. Educ. 2019, 138, 130–145. [Google Scholar]
  9. Rijke, W.J.; Bollen, L.; Eysink, T.H.; Tolboom, J.L. Computational thinking in primary school: An examination of abstraction and decomposition in different age groups. Inform. Educ. 2018, 17, 77–92. [Google Scholar] [CrossRef]
  10. Wing, J.M. Computational thinking and thinking about computing. Philos. Trans. R. Soc. A Math. Phys. Eng. Sci. 2008, 366, 3717–3725. [Google Scholar] [CrossRef]
  11. Sullivan, A.; Bers, M.U. Robotics in the early childhood classroom: Learning outcomes from an 8-week robotics curriculum in pre-kindergarten through second grade. Int. J. Technol. Des. Educ. 2016, 26, 3–20. [Google Scholar] [CrossRef]
  12. Doleck, T.; Bazelais, P.; Lemay, D.J.; Saxena, A.; Basnet, R.B. Algorithmic thinking, cooperativity, creativity, critical thinking, and problem solving: Exploring the relationship between computational thinking skills and academic performance. J. Comput. Educ. 2017, 4, 355–369. [Google Scholar]
  13. Lei, H.; Chiu, M.M.; Li, F.; Wang, X.; Geng, Y.J. Computational thinking and academic achievement: A meta-analysis among students. Child. Youth Serv. Rev. 2020, 118, 105439. [Google Scholar] [CrossRef]
  14. Yeh, K.C.; Xie, Y.; Ke, F. Teaching computational thinking to non-computing majors using spreadsheet functions. In Proceedings of the 2011 Frontiers in Education Conference (FIE), Rapid City, SD, USA, 12–15 October 2011; p. F3J-1. [Google Scholar]
  15. Park, S.H. Study of SW education in university to enhance computational thinking. J. Digit. Converg. 2016, 14, 1–10. [Google Scholar] [CrossRef]
  16. Lyon, J.A.; Magana, A.J. Computational thinking in higher education: A review of the literature. Comput. Appl. Eng. Educ. 2020, 28, 1174–1189. [Google Scholar] [CrossRef]
  17. Weintrop, D.; Wilensky, U. Comparing block-based and text-based programming in high school computer science classrooms. ACM Trans. Comput. Educ. (TOCE) 2017, 18, 1–25. [Google Scholar] [CrossRef]
  18. Xu, Z.; Ritzhaupt, A.D.; Tian, F.; Umapathy, K. Block-based versus text-based programming environments on novice student learning outcomes: A meta-analysis study. Comput. Sci. Educ. 2019, 29, 177–204. [Google Scholar] [CrossRef]
  19. Altadmri, A.; Brown, N.C. 37 million compilations: Investigating novice programming mistakes in large-scale student data. In Proceedings of the 46th ACM Technical Symposium on Computer Science Education, Kansas City, MO, USA, 4–7 March 2015; pp. 522–527. [Google Scholar] [CrossRef]
  20. Celik, I. Exploring the determinants of artificial intelligence (AI) literacy: Digital divide, computational thinking, cognitive absorption. Telemat. Inform. 2023, 83, 102026. [Google Scholar] [CrossRef]
  21. Huang, X. Aims for cultivating students’ key competencies based on artificial intelligence education in China. Educ. Inf. Technol. 2021, 26, 5127–5147. [Google Scholar] [CrossRef]
  22. Su, J.; Zhong, Y.; Ng, D.T.K. A meta-review of literature on educational approaches for teaching AI at the K-12 levels in the Asia-Pacific region. Comput. Educ. Artif. Intell. 2022, 3, 100065. [Google Scholar] [CrossRef]
  23. Ayanwale, M.A.; Sanusi, I.T.; Adelana, O.P.; Aruleba, K.D.; Oyelere, S.S. Teachers’ readiness and intention to teach artificial intelligence in schools. Comput. Educ. Artif. Intell. 2022, 3, 100099. [Google Scholar] [CrossRef]
  24. Kim, S.A.; Lee, Y.H.; Hong, J.Y.; Koo, D.H.; Park, J.H. Recognition of SW Education of Students, Parents, and Teachers in Elementary, Middle, and High Schools: Focused on the SW Leading School. J. Korean Assoc. Inf. Educ. 2021, 23, 591–598. [Google Scholar] [CrossRef]
  25. Kim, K.; Kwon, K. Exploring the AI competencies of elementary school teachers in South Korea. Comput. Educ. Artif. Intell. 2023, 4, 100137. [Google Scholar] [CrossRef]
  26. Korean Ministry of Education. White Paper on ICT in Education Korea. 2017. Available online: https://www.keris.or.kr/eng/cm/cntnts/cntntsView.do?mi=1188&cntntsId=1334 (accessed on 23 November 2023).
  27. Ma, D.S. A Comparative Analysis Study on the Perceptions and Attitudes of Elementary School Teachers and Students on SW Education: Focusing SW education leading schools and general schools. J. Korean Assoc. Inf. Educ. 2021, 25, 185–193. [Google Scholar] [CrossRef]
  28. Park, J.H. An Analysis on the Current Status and Effectiveness of Software Education Leading School. J. Digit. Contents Soc. 2020, 21, 1845–1854. [Google Scholar] [CrossRef]
  29. Bell, T.; Alexander, J.; Freeman, I.; Grimley, M. Computer science unplugged: School students doing real computing without computers. N. Z. J. Appl. Comput. Inf. Technol. 2009, 13, 20–29. [Google Scholar]
  30. Kim, S.; Jang, Y.; Choi, S.; Kim, W.; Jung, H.; Kim, S.; Kim, H. Analyzing teacher competency with TPACK for K-12 AI education. KI-Künstliche Intell. 2021, 35, 139–151. [Google Scholar] [CrossRef]
  31. Kang, S.; Lee, J. Artificial intelligence liberal arts curriculum design for non-computer majors. J. Digit. Contents Soc. 2022, 23, 57–66. [Google Scholar]
  32. Schober, P.; Boer, C.; Schwarte, L.A. Correlation coefficients: Appropriate use and interpretation. Anesth. Analg. 2018, 126, 1763–1768. [Google Scholar] [CrossRef]
  33. Castleberry, A.; Nolen, A. Thematic analysis of qualitative research data: Is it as easy as it sounds? Curr. Pharm. Teach. Learn. 2018, 10, 807–815. [Google Scholar] [CrossRef]
  34. Olson, C.L. Practical considerations in choosing a MANOVA test statistic: A rejoinder to Stevens. Psychol. Bull. 1979, 86, 1350–1352. [Google Scholar] [CrossRef]
  35. Dobson, T.; Willinsky, J. Digital literacy. In The Cambridge Handbook of Literacy; Cambridge University Press: Cambridge, UK, 2009; Volume 10, pp. 286–312. [Google Scholar]
  36. Ng, D.T.K.; Leung, J.K.L.; Chu, S.K.W.; Qiao, M.S. Conceptualizing AI literacy: An exploratory review. Comput. Educ. Artif. Intell. 2021, 2, 100041. [Google Scholar] [CrossRef]
Table 1. Demographic data of survey participants.

| Category | | Frequency (N = 222) | Percentage (%) |
|---|---|---|---|
| Gender | Male | 76 | 34.23 |
| | Female | 146 | 65.77 |
| Major | Science | 74 | 33.33 |
| | Humanities | 10 | 4.51 |
| | Arts | 114 | 51.35 |
| | Music | 8 | 3.60 |
| | PE | 1 | 0.45 |
| | Communications | 11 | 4.96 |
| | Social Science | 4 | 1.80 |
| Grade/Year | Freshman | 35 | 15.77 |
| | Sophomore | 115 | 51.80 |
| | Junior | 40 | 18.02 |
| | Senior | 32 | 14.41 |
| Coding Experience | Text-based coding | 81 | 36.49 |
| | Block-based coding | 45 | 20.27 |
| | No coding experience | 96 | 43.24 |
| Age of Learning about AI | No learning experience | 65 | 29.28 |
| | Elementary school | 12 | 5.41 |
| | Middle school | 21 | 9.46 |
| | High school | 39 | 17.57 |
| | University | 85 | 38.28 |
Table 2. Information about interview participants.

| Category | Student A | Student B | Student C | Student D | Student E |
|---|---|---|---|---|---|
| Gender | Male | Male | Female | Female | Female |
| Major | Science | Science | Humanities | Social studies | Art |
| Prior AI experience | O | O | X | X | O |
| AI course taken at university | O | O | X | X | O |
Table 3. Prior coding experience on AI literacy.

| | N | AI Awareness M (SD) | AI Social Impact M (SD) | AI Self Impact M (SD) | Need for AI M (SD) |
|---|---|---|---|---|---|
| Text-based coding | 81 | 6.40 (1.53) | 32.15 (3.50) | 7.17 (1.71) | 8.04 (1.54) |
| Block-based coding | 45 | 6.20 (1.50) | 31.40 (3.33) | 7.31 (1.31) | 8.09 (1.61) |
| No coding | 96 | 5.49 (1.31) | 29.95 (4.51) | 6.46 (1.70) | 6.93 (1.53) |
Table 4. Age at which students learned about AI.

| | N | AI Awareness M (SD) | AI Social Impact M (SD) | AI Self Impact M (SD) | Need for AI M (SD) |
|---|---|---|---|---|---|
| No learning | 65 | 5.35 (1.38) | 30.12 (4.84) | 6.49 (1.59) | 7.08 (1.47) |
| Elementary school | 12 | 5.50 (1.17) | 30.25 (4.60) | 6.42 (1.44) | 6.67 (2.31) |
| Middle school | 21 | 6.619 (1.02) | 32.52 (3.92) | 7.29 (1.59) | 8.14 (1.65) |
| High school | 39 | 6.718 (1.10) | 31.69 (2.77) | 7.26 (1.43) | 7.64 (1.39) |
| University | 85 | 5.988 (1.64) | 31.200 (3.72) | 7.00 (1.84) | 7.89 (1.65) |

Share and Cite

MDPI and ACS Style

Lee, Y.-J.; Davis, R.O. Assessing the Impact of Prior Coding and Artificial Intelligence Learning on Non-Computing Majors’ Perception of AI in a University Context. Information 2025, 16, 277. https://doi.org/10.3390/info16040277
