1. Introduction
In recent years, generative artificial intelligence (GenAI) has significantly influenced various domains, notably education. AI applications like chatbots and virtual teachers are revolutionizing learning environments by enhancing student engagement, personalizing education, and streamlining administrative tasks. Scholars have highlighted AI’s potential to improve higher education through customized assistance, real-time feedback, and intelligent tutoring systems (Bettayeb et al., 2024; Yang, 2020; Meng et al., 2022). However, concerns about over-reliance, ethical issues, and academic integrity persist (Bailey, 2023; Griesbeck et al., 2024; Hamidouche, 2023).
Despite extensive AI research, student acceptance and engagement remain key to successful implementation (Dolenc & Brumen, 2024). Some studies have attempted to address this gap, uncovering a complex and varied landscape of student attitudes toward AI in education. These findings reflect both optimism and concern. Students recognize the potential benefits of AI in enhancing educational processes, such as improving productivity, providing personalized learning experiences, and offering immediate feedback on tasks like writing and coding (Zhou et al., 2024; Schei et al., 2024; Saúde et al., 2024). AI tools, particularly chatbots, are also seen as valuable for grammar correction, idea generation, and personal task assistance (Arowosegbe et al., 2024). Furthermore, many students view AI as a motivating factor in learning and anticipate its integration into educational settings (Stöhr et al., 2024).
On the other hand, students express significant apprehensions, particularly regarding academic integrity, privacy, and the sustainability of higher education. Concerns that over-reliance on AI tools may undermine critical thinking and creativity are prevalent (Zhou et al., 2024), alongside worries about data privacy and the potential biases inherent in AI systems (Lünich et al., 2024). Additionally, some students believe AI could threaten the sustainability of higher education, raising concerns about its long-term impact on educational quality and safety (Okulich-Kazarin et al., 2024).
Within this strand of research, some studies show that students’ perceptions of AI vary across demographic groups, academic backgrounds, and cultural contexts. For example, Stöhr et al. (2024) found that female students and those in humanities and medicine tended to hold more negative views than male students and those in technology and engineering. Dolenc and Brumen (2024) also found that students’ perceptions of AI in language education varied by gender and discipline, highlighting the need for targeted educational initiatives to address these gender and disciplinary gaps. Ma et al. (2024) argued that cultural differences further shaped attitudes, as international students in their study exhibited different perceptions compared to their Chinese counterparts. Therefore, more research should be conducted to explore how students from diverse academic and cultural backgrounds perceive AI technologies (Chan & Hu, 2023).
However, most extant studies on student perceptions of AI focus on Western or general student populations. Little is known about the perceptions of students situated in the unique socio-cultural and institutional environments of top-tier Chinese universities. The academic pressure and technological exposure at these universities differ significantly from those at other institutions, potentially leading to distinct student perceptions and experiences with AI. For instance, in the C9 League (an alliance of nine leading universities in China), students face intense academic competition and high expectations, which may shape their perceptions of AI differently than those of students in less competitive environments. The demand for efficiency and accuracy in such settings may also raise expectations for AI tools.
This study, therefore, aims to address this gap by investigating student perceptions of AI use in teaching and learning at a leading university in China, where admission is highly competitive and academic expectations are exceptionally high (Li et al., 2022). It investigates three key research questions (RQs):
RQ1: How do students at a top-tier Chinese university perceive AI’s usefulness and ease of use in teaching and learning?
RQ2: What are their primary functional and affective concerns regarding AI adoption?
RQ3: How do personal, institutional, and social factors shape their AI tool expectations and usage patterns?
By focusing on this specific context, we seek to develop a nuanced understanding of how students perceive AI at a leading university in China. This focus is critical because top universities play a pivotal role in shaping higher education and societal development (Cheng et al., 2014); their students’ perceptions may directly inform the design and implementation of AI tools for broader educational contexts. By identifying the aspects of AI that students find beneficial, this study may contribute to the academic discourse on AI in education and offer practical implications for educators and policymakers. Ultimately, this research aims to enhance the learning experience by ensuring that AI tools are aligned with students’ needs and preferences.
2. Materials and Methods
This study utilized a questionnaire-based survey to collect both quantitative and qualitative data on university students’ perceptions of AI utility at a higher education institution in China. Adopting a convergent parallel mixed methods design (Creswell & Plano Clark, 2018), we simultaneously gathered standardized survey responses and open-ended qualitative insights, enabling systematic comparison of patterns and themes that provided both broad trends and nuanced perspectives aligned with our exploratory objectives.
2.1. Context
This study was conducted at a top-tier C9 League university in China, known for its high academic rigor and strong technological integration—an ideal setting to examine student perceptions in elite academic environments. In early 2024, the university partnered with a leading AI company to develop a multimodal large language model, which was then deployed in a pilot AI-assisted teaching program across multiple disciplines. The same year, the university further strengthened its AI focus by establishing the School of Artificial Intelligence and later announcing a new AI-focused college to foster interdisciplinary integration and cultivate high-level tech talent. Given its pioneering role in AI-driven education, this institution provides a distinctive context for exploring how students in competitive academic settings perceive AI—likely differing from less research-intensive universities.
2.2. Participants
Detailed demographic characteristics of participants are presented in Table 1. The study included 253 students (204 undergraduates, 49 graduates) from diverse disciplines, recruited via convenience sampling. As shown in Table 1, Engineering majors predominated (52.17%, n = 132), followed by natural sciences (18.58%, n = 47), social sciences (11.86%, n = 30), humanities/arts (4.74%, n = 12), and other fields (12.65%, n = 32). The gender distribution (69.57% male vs. 26.88% female) and low international student representation (1.19%) mirrored the university’s demographics. Undergraduates and engineering students were slightly overrepresented due to voluntary participation.
2.3. Data Collection
We designed a mixed-methods questionnaire based on the Technology Acceptance Model (TAM) (Davis, 1989). The survey included:
Demographics: Gender, academic level, major, international status, and prior AI tool usage (5 questions).
Quantitative Measures: 28 Likert-scale items rated on a 5-point scale (1 = Strongly disagree, 2 = Disagree, 3 = Neutral, 4 = Agree, 5 = Strongly agree), assessing seven AI perception constructs (3–5 items per construct): Perceived Usefulness, Perceived Ease of Use, Concerns and Challenges, Personal Traits, Social Influence, Behavioral Intention to Use, and Actual Use Behavior (see the sketch following this list). Scale items were contextually adapted from established instruments: Perceived Usefulness and Ease of Use items derived from Davis’s (1989) Technology Acceptance Model, Concerns and Challenges from Albayati’s (2024) AI ethics scale, while Social Influence and other constructs integrated validated items from Chan and Hu (2023) and Dolenc and Brumen (2024).
Qualitative Insights: Two open-ended questions on AI’s impactful use cases and future expectations in education.
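To make the construct-to-item structure concrete, the following minimal sketch shows how composite (construct-level) scores can be computed from 28 Likert items. The per-construct item counts and the simulated responses are illustrative assumptions, not the actual instrument layout.

```python
# Minimal sketch: construct-level composite scores from 28 Likert items.
# Item counts per construct are illustrative assumptions (3-5 each);
# responses are simulated, not study data.
import numpy as np

CONSTRUCTS = {  # construct name -> assumed number of items
    "Perceived Usefulness": 5,
    "Perceived Ease of Use": 4,
    "Concerns and Challenges": 5,
    "Personal Traits": 4,
    "Social Influence": 4,
    "Behavioral Intention to Use": 3,
    "Actual Use Behavior": 3,
}
assert sum(CONSTRUCTS.values()) == 28

rng = np.random.default_rng(1)
responses = rng.integers(1, 6, size=(241, 28))  # 241 AI users, 1-5 scale

start = 0
for name, k in CONSTRUCTS.items():
    block = responses[:, start:start + k]  # items belonging to this construct
    print(f"{name}: Mean = {block.mean():.2f}, SD = {block.std(ddof=1):.2f}")
    start += k
```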
The psychometric properties of the questionnaire were established through three validation phases. First, an expert review was conducted with three researchers who independently evaluated item relevance, resulting in excellent content validity indices (I-CVI > 0.78 for all items). Second, pilot testing with 84 participants demonstrated excellent internal consistency for the full scale (α = 0.923) and acceptable-to-good reliability across subscales, with Perceived Usefulness showing the highest reliability (α = 0.876) and Concerns and Challenges the lowest (α = 0.78). Third, cross-cultural adaptation was implemented through a standardized translation protocol: the original English items were forward-translated into Chinese, back-translated by bilingual experts, and subsequently adjusted to ensure cultural and academic appropriateness for the Chinese university context.
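For readers unfamiliar with the two indices reported above, the sketch below computes an item-level content validity index (I-CVI) and Cronbach’s alpha on simulated placeholder data, assuming a 4-point relevance scale for the expert review and 5-point Likert items for the pilot.

```python
# Minimal sketch of the validity/reliability indices above; all data are
# simulated placeholders, not the study's expert ratings or pilot responses.
import numpy as np

def i_cvi(expert_ratings):
    # Proportion of experts rating an item 3 or 4 on a 4-point relevance
    # scale; with three experts, exceeding the 0.78 criterion requires
    # unanimous agreement (I-CVI = 1.0).
    return float(np.mean(np.asarray(expert_ratings) >= 3))

def cronbach_alpha(items):
    # alpha = k/(k-1) * (1 - sum of item variances / variance of totals)
    # for an (n_respondents, n_items) score matrix.
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(0)
base = rng.integers(2, 6, size=(84, 1))  # 84 pilot respondents
scores = np.clip(base + rng.integers(-1, 2, size=(84, 4)), 1, 5)
print(f"I-CVI (3 experts, ratings 4/4/3): {i_cvi([4, 4, 3]):.2f}")
print(f"Cronbach's alpha (4-item subscale): {cronbach_alpha(scores):.3f}")
```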
The survey was distributed via Questionnaire Star (a reputable Chinese platform) through WeChat, with anonymized responses and minor incentives to boost participation.
2.4. Data Analysis
Quantitative data were analyzed using descriptive statistics (means, standard deviations, frequency distributions) via Questionnaire Star’s built-in tools. For qualitative responses, we applied Braun and Clarke’s (2006) thematic analysis: (1) familiarization with the data, (2) generating initial codes, (3) identifying themes, (4) reviewing and refining themes, and (5) defining final themes. Analysis of open-ended responses revealed seven core themes regarding AI’s educational applications: Language Support (e.g., writing polishing, translation), Research Aids (e.g., idea generation, literature synthesis), Efficiency Improvement (e.g., automating repetitive tasks), Knowledge Retrieval (e.g., concept explanation, knowledge structuring), Programming/Debugging (e.g., code generation, error fixing), Authenticity and Accuracy Concerns (e.g., citation fraud risks), and Human-Centric Expectations (e.g., emotional support, career guidance). Themes were derived through iterative coding, with 20% of transcripts double-coded (Cohen’s κ = 0.81).
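As a companion to the agreement figure reported above, this sketch computes Cohen’s kappa for a double-coded subset; the theme labels and paired codings are hypothetical placeholders.

```python
# Minimal sketch of Cohen's kappa for two coders' theme assignments.
# The paired labels below are hypothetical, not actual coded responses.
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    # kappa = (p_o - p_e) / (1 - p_e), where p_o is observed agreement and
    # p_e is the agreement expected by chance from each coder's marginals.
    n = len(coder_a)
    p_o = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    labels = set(coder_a) | set(coder_b)
    p_e = sum(freq_a[lab] * freq_b[lab] for lab in labels) / n**2
    return (p_o - p_e) / (1 - p_e)

a = ["Language", "Research", "Efficiency", "Knowledge", "Language", "Programming"]
b = ["Language", "Research", "Efficiency", "Research", "Language", "Programming"]
print(f"Cohen's kappa = {cohens_kappa(a, b):.2f}")
```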
In line with the convergent parallel design, quantitative and qualitative data were analyzed separately during the Results phase, with integration occurring in the Discussion through narrative comparison—identifying areas of alignment and divergence. This approach ensured methodological rigor while leveraging the strengths of both data types: quantitative statistics provided generalizable insights, whereas qualitative responses revealed contextual nuances about students’ AI perceptions.
3. Results
Among the 253 participants, 12 reported never using AI for learning assistance. Seven expressed an intention to adopt AI in the future, while the remaining five did not. Consequently, the subsequent analysis focused on the 241 participants who indicated prior use of AI in their academic activities.
3.1. Perceived Usefulness
As illustrated in Table 2, participants generally held favorable perceptions of AI’s utility in education (Mean = 3.97, SD = 0.94 across all items), though with variations by task type. The highest mean score was observed for the statement “I believe AI can enhance efficiency and save time in learning” (Mean = 4.34, SD = 0.78), with nearly 90% of participants agreeing. This item also ranked as the highest-scoring statement in the survey, reflecting a strong consensus on AI’s ability to improve learning efficiency. In contrast, the statement “I believe AI can assist in solving complex problems in learning” received the lowest mean score (Mean = 3.55, SD = 1.09), suggesting that participants perceived AI as less effective in addressing complex academic challenges, though responses varied considerably.
3.2. Perceived Ease of Use
Table 3 presents participants’ moderately positive perceptions of the ease of using AI in learning (Mean = 3.68, SD = 0.89 across all items). While most students found AI relatively easy to use (Mean = 3.83, SD = 0.85), some reported instances where AI failed to fully meet their academic needs (Mean = 3.53, SD = 0.96). These findings suggest that while AI is generally user-friendly, there is room for improvement in its ability to address specific learning requirements.
3.3. Concerns and Challenges
Participants expressed moderate concern regarding educational AI use (Mean = 3.51, SD = 0.95 across all items). As detailed in Table 4, the highest mean score was observed for concerns about the accuracy and reliability of AI-generated content (Mean = 3.74, SD = 0.79), indicating considerable awareness of potential limitations in AI’s output. In contrast, participants were relatively neutral about the impact of AI on interpersonal interactions and socialization (Mean = 2.93), while the high standard deviation (SD = 1.02) reflects substantial divergence in participants’ assessments of AI’s influence on this aspect. Moreover, concerns about AI-related plagiarism showed moderate agreement (Mean = 3.46, SD = 0.95).
3.4. Personal Traits
Participants demonstrated moderately positive personal dispositions toward AI technology overall (Mean = 3.70, SD = 0.89). As shown in Table 5, participants reported high levels of interest in using AI (Mean = 4.18, SD = 0.69), but their self-assessed proficiency with AI tools was comparatively lower (Mean = 3.44, SD = 0.90). This suggests a gap between students’ enthusiasm for AI and their competence in utilizing it effectively, underscoring a need for better training and support in AI utilization.
3.5. Social Influence
As shown in Table 6, participants perceived social and institutional influences on AI adoption as moderately strong overall (Mean = 3.74, SD = 0.82). The second and third statements, concerning institutional influence, received the highest mean scores (Mean = 4.01, SD = 0.69; Mean = 3.84, SD = 0.84), indicating that institutional resources and support significantly motivated students to explore AI. The fourth statement, on peer influence, had a relatively lower mean score (Mean = 3.64, SD = 0.84), and the influence of immediate social circles (e.g., family, classmates, teachers) was perceived as least pronounced (Mean = 3.48, SD = 0.80). Overall, although participants viewed their immediate social circles as exerting comparatively little influence on their AI usage, institutional support and, to a lesser extent, peer influence appeared to encourage them to adopt these technologies.
3.6. Behavioral Intention to Use
As shown in Table 7, participants showed generally strong behavioral intentions to use AI in learning (Mean = 3.82, SD = 0.86 overall). While over 90% of participants expressed a willingness to use AI in learning (Mean = 4.22, SD = 0.66), the majority approached its use selectively and cautiously, as reflected in the lower mean score for the statement “I believe I intend to use AI in learning as much as possible” (Mean = 3.39, SD = 0.90).
3.7. Actual Use Behavior
Table 8 demonstrates that participants reported frequent integration of AI into academic activities (Mean = 3.90, SD = 0.83 overall), with comparable mean scores ranging from 3.87 to 3.93 across various use cases. This indicates a relatively high level of engagement with AI tools for academic routines such as knowledge acquisition, idea generation, and task assistance.
While the quantitative data revealed participants’ overall attitudes, their qualitative responses unpacked how these perceptions translated into actual usage patterns and unmet needs.
3.8. Specific Assistance
Qualitative data analysis revealed that participants applied AI broadly across multiple domains, particularly language and writing, academic research, efficiency improvement, knowledge construction, and programming and debugging. Participants generally believed that AI provided effective assistance, saved time, improved academic work efficiency, and, in some cases, offered new possibilities for creativity and inspiration.
3.8.1. Language Support
Many participants mentioned using AI to polish articles, generate outlines, expand content, and improve writing efficiency and quality. More specifically, some noted that AI could substantially improve their writing throughout the entire drafting process, such as by refining structure and content, proofreading, and providing feedback:
AI can quickly generate document outlines as a reference base and produce more mature boilerplate text, reducing the time I spend on such content. AI also helps me find inspiration for writing, as it can quickly combine the necessary content to provide ideas.
AI could proofread and help avoid basic errors when an article was mostly completed. It also provides constructive and personalized feedback to refine my article.
Additionally, AI was valued for its ability to assist with language learning, including vocabulary learning, grammar and spelling, and translation. Some participants claimed:
AI can provide valuable resources for me as a second language learner. I used generative AI tools to assist in translating Japanese novels or song lyrics. AI can capture the nuances of word choice and phrasing better than general machine translation while also helping me accumulate Japanese vocabulary.
3.8.2. Research Aids
According to some participants, AI was applied in academic research, particularly in sparking ideas, shaping research, streamlining literature reviews, and enhancing communication and outreach. For example, in certain circumstances, students experienced difficulty generating ideas or finding topics before formally conducting research or writing papers. They found,
When I did a project I had no prior experience with, AI provided possible directions and methods. Especially, when finalizing a research topic, it would be practical to give AI a general direction, let it brainstorm, and then select a few interesting suggestions until finally coming up with a feasible topic.
Moreover, some used AI for retrieving and reviewing literature,
During the research workflow, AI was competent in creating a framework of ideas, reviewing relevant literature, and synthesizing lengthy literature.
Some also used AI to generate visual content for presentations. Participants reported using AI to create visual materials, such as PPT slides and images:
AI can instantly produce PPT slides for my class presentation if provided with an outline and some text.
3.8.3. Efficiency Improvement
According to participants, AI was extensively used to improve work efficiency and save time, especially in handling repetitive tasks.
AI helps write the scripts for dull and unimportant speeches when the deadline approaches.
Participants highlighted AI’s ability to streamline time-sensitive tasks, such as generating text for assignments. A participant remarked,
AI’s ability to quickly generate large amounts of text can help me complete seemingly meaningless homework assignments. Especially when deadlines are approaching, AI comes to the rescue.
Overall, students stressed AI’s significant role in promoting their learning efficiency across various domains.
3.8.4. Knowledge Retrieval
AI was used to explain complex concepts and help students better understand and construct knowledge. Several students noted AI’s support in organizing knowledge structures.
AI helped a lot in organizing the logical structure of disciplinary knowledge. It could also provide me with knowledge I was unaware of. When asking AI about specific chemical concepts and principles, further questioning made the conversation feel like interacting with a real teacher.
One participant also employed AI in class to obtain problem-solving ideas:
When the teacher asked a question in class, and I did not know the answer, I used AI. The answers provided by AI were unique and appeared more sophisticated.
3.8.5. Programming and Debugging
According to many participants, AI was widely used in programming, including code checking, error fixing, and code generation, helping them improve efficiency by reducing repetitive labor. Some participants specifically highlighted AI’s role in debugging:
When solving code errors, AI can provide different solutions to help me solve problems more quickly without having to search step-by-step online for similar issues.
3.9. Expectations for Future Development
Participants generally hoped that AI could provide more authentic and accurate content and help them better tackle intricate tasks in academic, work, and daily life contexts. Additionally, they emphasized the need for more human-centric AI tools that offer emotional support and career guidance.
3.9.1. Authenticity and Accuracy
Although AI can promptly provide fluent and human-sounding responses, many participants found that the authenticity and accuracy of these responses could not always be guaranteed. This sentiment aligned with the quantitative results shown in Table 4. For instance, a doctoral student described AI as a “cheat” that frequently fabricated citations. Participants conveyed their expectations for reducing the ambiguity of AI-generated content, as two pointed out:
AI should provide reliable references when answering unanswerable questions instead of just winging them.
Given these issues with AI-generated content, many expressed concerns about its use in learning and research, claiming,
I hope AI can address issues of citation fraud, ensuring accuracy when I learn new knowledge or cite references. Only with improved accuracy can AI unlock greater possibilities.
3.9.2. Intricate Problem-Solving
Participants noted that in solving some complex problems, AI might struggle to perform adequately due to limited logical reasoning and robustness. Many participants expressed a wish for improvement in this area:
I hope AI’s mathematical capabilities can improve, mastering calculus, linear algebra, and more. It should ideally feature enhanced multilingual capabilities, efficient and low-cost logical reasoning, and greater robustness.
Several participants provided more detailed descriptions in this regard, hoping that AI could help with seemingly advanced formulas and complicated mathematical derivations in economics. As such, AI still confronted limits when handling intricate and logic-dependent problems. Participants also highlighted its limitations in tasks requiring high levels of precision and customization, an issue mentioned repeatedly. For example,
I wish AI could help me design high-quality PPT slides and format my article according to specific style guidelines.
3.9.3. Human-Centricity
The human-centricity theme underscored the students’ desire for AI to prioritize their individualized learning requirements and emotional well-being. Many expected to use AI for emotional support, self-development, and life assistance. For example,
I hope AI can develop a stronger sense of humanism so it can offer me emotional value, help me tackle psychological challenges, and provide solutions for life planning.
Notably, three participants highlighted the possibility of “dating”, “seeking a partner”, and “finding teammates for games” with the help of AI. One even imagined:
One day, AI could be directly implanted as a chip in my human brain, becoming my true secondary brain.
4. Discussion
This study explores university students’ perceptions and use of AI technology in a top-tier Chinese university, revealing several key insights into how students perceive, utilize, and critically evaluate AI tools in their academic routines. Quantitative results revealed that participants acknowledged AI’s utility for learning efficiency but doubted its complex problem-solving capabilities. Despite generally positive ease-of-use ratings, concerns emerged regarding content accuracy, over-reliance, and privacy. While institutional support strongly encouraged adoption, students’ self-reported proficiency lagged behind their interest levels. Qualitative data enriched these findings by identifying valued applications in language learning, research assistance, and coding support. They also highlighted AI’s limitations in content reliability and solving intricate problems requiring deep critical thinking or logical reasoning. Beyond functional needs, participants emphasized the need for more human-centric AI tools that offer emotional support and personalized guidance, reflecting a desire for technologies that address both academic and emotional well-being.
4.1. Contextual Duality in AI Adoption
This study reveals a contextually nuanced duality in students’ AI adoption attitudes at an elite Chinese university. Quantitative results confirm established findings about AI’s benefits for learning efficiency—evidenced by average usefulness ratings approaching but not exceeding 4 on a 5-point scale. Qualitative data unpack this apparent acceptance through two revealing patterns: enthusiastic adoption of AI in routine tasks like idea generation and tedious work automation, contrasted with sharp skepticism toward complex problem-solving. This tension was most pronounced in technical disciplines, where open-ended comments simultaneously praised AI’s coding assistance while condemning its “inability to handle advanced problem-solving” in calculus proofs.
Our results corroborate existing knowledge about AI’s educational value, including its well-documented utility for enhancing productivity as demonstrated by Albayati (2024) and Holmes (2020). Similarly, we observed the persistent limitations in creative and logical tasks that Hwang et al. (2020) and others have identified, along with the accuracy concerns noted by Chan and Hu (2023). Moreover, our research context revealed several distinctive patterns that advance current understanding. Most significantly, we uncovered discipline-specific skepticism, particularly in technical fields where participants reported AI’s frequent failures in handling advanced problem-solving tasks. Furthermore, the unique cultural–academic environment of China’s top universities appears to amplify verification behaviors, creating what might be termed a “trust-but-verify” approach to AI adoption. Students in our study demonstrated sophisticated selective usage patterns, strategically employing AI for drafting and ideation while maintaining human oversight for final outputs.
This critical–utilitarian balance suggests that institutional and cultural factors mediate technology acceptance more profoundly than current models account for. While our findings fundamentally support the core constructs of the Technology Acceptance Model (Davis, 1989), they simultaneously demand expanded theoretical frameworks to better capture technology adoption dynamics in high-achievement educational contexts. The observed patterns of conditional acceptance and heightened verification behaviors point to the need for more nuanced models that account for disciplinary rigor and institutional prestige as key moderating variables.
4.2. Interest–Competence Gap and Institutional Role
This study elucidates two critical factors shaping AI adoption in elite academic settings. First, quantitative data revealed a significant disparity between students’ interest in using AI (Mean = 4.18/5) and their self-assessed proficiency levels (Mean = 3.44/5), confirming but contextualizing Ng et al.’s (2021) broader findings about digital literacy gaps. While Ng et al. reported this pattern in general student populations, our results specifically characterize its manifestation in high-achievement educational environments.
Second, institutional support emerged as a key adoption facilitator, aligning with Ertmer and Ottenbreit-Leftwich’s (2010) argument for the importance of institutional facilitation. The university’s provision of technological infrastructure and resources created enabling conditions for AI experimentation, even as students maintained critical awareness of tool limitations—a nuance not fully captured in prior studies.
These results collectively demonstrate how micro-level competency gaps and macro-level institutional supports interact in unique ways within competitive academic ecosystems. They extend existing frameworks by revealing how these factors interact in elite university contexts, where both technical preparedness and institutional endorsement prove essential for meaningful integration.
4.3. Emerging Demand for Human-Centric AI
An important finding emerging from this study is participants’ expressed interest in AI tools that could provide both academic support and emotional guidance. This expectation appears particularly pronounced in the high-pressure environment of this elite university. While quantitative data showed that most participants primarily used AI for practical academic tasks like drafting and research, qualitative responses unexpectedly revealed parallel desires for more human-like interaction.
These observations intersect with existing literature in nuanced ways. Previous studies have noted the impersonal nature of AI interactions (Chen et al., 2023) and the general acceptance of this limitation (Essel et al., 2022). However, our findings suggest that in highly competitive academic settings, students may develop distinct expectations for technologies that address both cognitive and affective needs. This mirrors Holmes’s (2020) findings on educational technology trends but specifically demonstrates how institutional environments shape technology expectations.
The coexistence of utilitarian AI use and emotional support requests in our data points to an understudied dimension of technology acceptance in elite educational environments. Rather than replicating prior findings about AI’s emotional limitations, these results indicate that academic pressure and institutional culture may generate unique user expectations that merit further investigation.
5. Conclusions
This study set out to investigate AI adoption in the distinctive context of a top-tier Chinese university, where exceptional academic pressures and institutional resources create unique technology acceptance dynamics.
5.1. Key Contributions
This study makes several important contributions to the growing educational literature on AI. First, it provides empirical evidence on how students at a top-tier Chinese university perceive and interact with AI tools in their academic environment. By focusing on a highly competitive academic setting, the study can contribute to a more nuanced understanding of AI integration in elite educational institutions. Second, this study uniquely highlights the dual nature of students’ perceptions of AI, a less explored aspect in prior research. This perspective is especially relevant in high-stakes academic environments, where students have higher expectations for the quality and reliability of educational tools. Third, this study identifies a significant gap between students’ high interest in AI and their relatively low self-assessed AI proficiency. It also highlights institutional support as a significant motivator for AI adoption. While previous research has explored students’ acceptance and usage of AI, this study emphasizes the need for targeted training and institutional support, resources, and policies to integrate AI into education successfully. Finally, this study’s distinctive contribution is its emphasis on the need for human-centric AI tools. Students desired AI tools that provide academic support, emotional interaction, and personalized guidance. This finding goes beyond the functional aspects of AI, which are often the focus of prior research, and highlights the importance of addressing students’ holistic well-being through AI technologies.
5.2. Limitations and Future Directions
This study has several limitations that should be considered when interpreting the results. First, the research was conducted at a single top-tier Chinese university (C9 League), and therefore, the findings may not generalize to less competitive institutions or different cultural contexts. The unique environment of elite universities likely shaped students’ AI perceptions in ways that might not apply elsewhere. Second, the convenience sampling method led to demographic imbalances in our sample. Engineering majors and undergraduates were overrepresented, which could have influenced the overall patterns we observed regarding AI acceptance and concerns. Third, our reliance on self-reported data means the results might be affected by response biases. Students may have reported more positive views of AI than they actually hold due to social desirability effects. Finally, as an exploratory study, our analysis focused on identifying broad patterns rather than testing causal relationships. More advanced statistical modeling would be needed to fully understand how different factors interact to shape AI perceptions.
Building on this exploratory study, future research should adopt a multi-method approach to address current limitations and expand understanding of AI perceptions in academia. First, stratified sampling would control for selection bias while enabling systematic examination of how demographic variables (e.g., gender, discipline, academic level) interact to shape AI adoption. Second, integrating behavioral data (e.g., AI usage logs) with self-reports through longitudinal designs could reveal disparities between stated perceptions and actual usage patterns, particularly regarding learning outcomes and mental well-being in competitive environments. Third, advanced statistical methods (e.g., SEM) should be used to analyze how individual and institutional factors jointly shape AI adoption. Finally, comparative cross-cultural studies would clarify how institutional, individual, and technological factors differentially influence educational AI adoption in Eastern versus Western contexts.
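As one concrete direction, the sketch below shows what such structural modeling might look like in Python using the semopy package (lavaan-style syntax). The item names, path structure, and data file are illustrative assumptions for future work, not an analysis performed in this study.

```python
# Hypothetical TAM-style SEM specification for future work; item names,
# paths, and the data file are assumptions, not this study's analysis.
import pandas as pd
from semopy import Model

TAM_SPEC = """
PU =~ pu1 + pu2 + pu3
PEOU =~ peou1 + peou2 + peou3
SI =~ si1 + si2 + si3
BI =~ bi1 + bi2 + bi3
USE =~ use1 + use2 + use3
PU ~ PEOU
BI ~ PU + PEOU + SI
USE ~ BI
"""

data = pd.read_csv("survey_items.csv")  # placeholder file of Likert items
model = Model(TAM_SPEC)                 # measurement + structural model
model.fit(data)
print(model.inspect())                  # parameter estimates and p-values
```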
5.3. Implications for Practice and Policy
While acknowledging its limitations, this study provides critical guidance for stakeholders shaping the future of AI in education. For educators at elite universities, the findings underscore the need to move beyond basic AI adoption and instead develop discipline-specific frameworks that address ethical dilemmas, reinforce critical thinking, and safeguard academic integrity—particularly as students demonstrate technical readiness but require nuanced guidance. Policymakers should heed the clear demand for AI systems that prioritize transparency, accuracy, and privacy protection, ensuring these technologies adapt to diverse learning contexts rather than imposing one-size-fits-all solutions. For AI developers, the results reveal an urgent mandate. While students are technologically adept, they require tools that not only improve content reliability but also actively support their cognitive and emotional well-being through human-centered design. Ultimately, these insights call for coordinated action across sectors, suggesting that educators, policymakers, and developers collaborate to create AI ecosystems that are as pedagogically sound as they are technologically advanced.
By focusing on a unique elite academic context, this study reveals three distinctive characteristics of AI adoption: (1) students’ critical awareness of AI’s limitations alongside their enthusiasm for its use, (2) the notable gap between students’ interest in AI and their self-assessed competence, and (3) the persistent demand for tools that complement rather than replace human support. These insights may contribute to the growing discourse on AI in education while providing actionable guidance for optimizing AI integration in competitive learning environments.