Article

Youth and ChatGPT: Perceptions of Usefulness and Usage Patterns of Generation Z in Polish Higher Education

Faculty of Economic Sciences, University of Warmia and Mazury in Olsztyn, 10-719 Olsztyn, Poland
*
Author to whom correspondence should be addressed.
Youth 2025, 5(4), 106; https://doi.org/10.3390/youth5040106
Submission received: 27 August 2025 / Revised: 30 September 2025 / Accepted: 2 October 2025 / Published: 5 October 2025

Abstract

This article examines how young adults in higher education (Generation Z) perceive the usefulness of ChatGPT by analyzing five learning-support roles within the Technology Acceptance Model (TAM), Expectation–Confirmation Theory (ECT), and Task–Technology Fit (TTF). Drawing on an online survey of 409 students from Polish universities and nonparametric analyses, this study consistently finds that students rate ChatGPT’s potential higher than its current usefulness. The tool is evaluated most favorably as a tutor, task assistant, text editor, and teacher, while its motivational role is rated least effective. Usage patterns matter: students who used ChatGPT for writing tasks rated its assistance with educational assignments more highly, and those who used it for learning activities rated its teaching role more strongly. The strongest evaluations appear when model capabilities such as structuring, summarizing, step-by-step explanations, and personalization align with task requirements. By integrating TAM, ECT, and TTF, this study advances evidence on how Gen Z engages with conversational AI and offers practical guidance for educators, support services, and youth-focused policymakers on equitable and responsible use.

1. Introduction

Contemporary education is facing dynamic technological transformations that significantly influence both teaching and learning methods. Tools based on generative artificial intelligence (GenAI) are becoming increasingly widespread in educational settings, offering unique opportunities for personalized instruction.
One of the most popular of these tools is ChatGPT, which in June 2025 alone recorded approximately 5.395 billion visits to its main website (Similarweb, n.d.). ChatGPT’s rapid success can be attributed primarily to its potential to reshape the way individuals perform various tasks, including knowledge acquisition and skills development (Aydın & Karaarslan, 2023).
Findings from OpenAI’s 2025 report, based on a survey among U.S. students aged 18–24, indicate that approximately 25% of respondents’ interactions with ChatGPT involved learning and tutoring. Furthermore, the report highlights indirect educational applications, such as text summarization (≈48%), text refinement (≈43%), and essay preparation (≈32%) (OpenAI, 2025).
According to numerous scholars, ChatGPT serves as a versatile educational tool, offering, among other features, personalized feedback, enhanced accessibility, interactive discussions, assessment support, and innovative approaches to teaching complex concepts (Munaye et al., 2025; T. Xu et al., 2024; Tarchi et al., 2024).
Data from Similarweb show that ChatGPT is most frequently used by individuals aged 18–24 and 25–34, who together account for more than half of all users (Figure 1).
The dominance of younger generations in this demographic distribution aligns with Everett M. Rogers’s diffusion of innovations theory, which describes how new solutions are adopted in society (Rogers, 1983). In this context, so-called “digital natives,” that is, Generation Z, function as early adopters of generative artificial intelligence; Rogers introduced the term “early adopters” to denote individuals optimistic about applying new technologies (Rogers, 1983).
Research by Chan and Lee confirms that Generation Z perceives ChatGPT as a tool that can enhance their productivity and learning efficiency, which corresponds with their broader characteristics, including a fundamental inclination toward adopting innovative solutions (Chan & Lee, 2023).
The literature raises concerns about excessive dependence on GenAI, which may lead to a reduction in independence, critical thinking, and competence development (Chan & Hu, 2023; F. Liu et al., 2024; Mirriahi et al., 2025). A key mechanism is cognitive offloading, that is, outsourcing analysis and synthesis to the model, which can gradually habituate students to seek hints instead of constructing their own solutions (Mirriahi et al., 2025). Students commonly turn to GenAI for convenience and efficiency, which encourages “learning shortcuts” (skipping primary readings, reducing deliberate practice) and may result in gradual skill attrition and shallower understanding (Chan & Hu, 2023; Fawns et al., 2024).
Moreover, uneven answer quality (e.g., hallucinations) can foster an illusion of knowledge and lead to questionable academic decisions (F. Liu et al., 2024; Mirriahi et al., 2025). Major risk factors include unclear institutional guidelines and grading rules, low AI literacy (weak habits of verification, citation, and reflection), time pressure, and uncertainty about ethical boundaries and authorship responsibility (Hamerman et al., 2024; Hsiao & Tang, 2024). Accordingly, GenAI use should be paired with safeguards: explicit course policies, mandatory source-checking, process documentation, and a staged approach to tool support (Hamerman et al., 2024; Hsiao & Tang, 2024).
In this article, we use the term usage patterns to denote students’ self-reported ways of interacting with ChatGPT in study contexts (e.g., tutoring, task assistance, editing), not stable learning styles or strategies. The purpose of this study is to identify how Generation Z perceives ChatGPT as an educational tool. This article aims to contribute to the advancement of knowledge by providing a deeper understanding of the dynamics of GenAI adoption, particularly of ChatGPT, within educational contexts.

2. Theoretical Background

2.1. Generation Z’s Attitude Toward ChatGPT

Generation Z generally demonstrates a positive attitude toward ChatGPT, particularly in the context of education and emotional support. Young users appreciate its usefulness, innovativeness, and the possibility of personalized interaction, which translates into willingness to use the tool both for learning and for psychological support (Biloš & Budimir, 2024; Kavitha et al., 2024; Tiwari et al., 2023).
The key factors influencing ChatGPT acceptance include expected effectiveness, social influence, motivation through enjoyment, and habitual use of new technologies, while ease of use and cost appear to have limited significance (Biloš & Budimir, 2024; Tiwari et al., 2023).
Most individuals in this age group adopt a stance of cautious optimism: they are open to new technologies but approach them with some degree of reservation, paying attention to ethical concerns, privacy, and academic integrity (Acosta-Enriquez et al., 2024; Zhang et al., 2025). ChatGPT is also perceived as a tool supporting emotional resilience, providing a safe, non-judgmental space for interaction, although concerns persist regarding privacy and cultural representation (Kavitha et al., 2024).
Despite these positive attitudes, knowledge and enthusiasm alone do not guarantee effective or responsible use of ChatGPT (Acosta-Enriquez et al., 2024). It should be emphasized that attitudes toward the tool are diverse, ranging from enthusiastic acceptance, through rational optimism, to more complex and ambivalent perspectives (Zhang et al., 2025).

2.2. Roles of ChatGPT in Education

As higher education experiments with generative AI, ChatGPT’s contribution can be organized into five complementary roles: tutor, task assistant (assistant in educational tasks), text editor, teacher, and motivator. Together, these roles capture how students and instructors use the tool for explanation, scaffolding, revision, guidance, and engagement.
ChatGPT increasingly functions as a tutoring aid across foreign language learning, mathematics, programming, and argumentative writing. It offers immediate feedback, support with problem solving, explanations of difficult concepts, and practice that can build critical thinking and creativity (Da Silva et al., 2024; Kohnke et al., 2023; Sirisathitkul & Jaroonchokanan, 2025; Y. Su et al., 2023). Students and teachers value its accessibility, rapid responses, and capacity for personalized instruction. They also stress the need for critical evaluation and human oversight because outputs may be imprecise or erroneous, especially for complex mathematical or scientific tasks (Ding et al., 2023; Lo, 2023; Wardat et al., 2023). Learners sometimes trust answers that are not entirely accurate. This underscores the need for responsible-use training and digital literacy (Ding et al., 2023; Kiryakova & Angelova, 2023).
As a task assistant, ChatGPT helps students generate ideas, improve structure, paraphrase, and obtain rapid formative feedback. These functions can support more efficient learning and time management (Imran & Almusharraf, 2023; Ngo, 2023; Y. Su et al., 2023). Support does not automatically translate into higher-quality work. Students using AI have not consistently earned better grades than peers who work independently, and text authenticity can be slightly lower in AI-assisted submissions (Bašić et al., 2023; Oates & Johnson, 2025). The tool is useful for critical text analysis and for developing argumentative skills, but it does not replace independent thinking or creativity. Ongoing concerns about source reliability, plagiarism, and ethical use highlight the need for clear institutional guidelines and policies (AlAfnan et al., 2023; Imran & Almusharraf, 2023; Rejeb et al., 2024).
In academic settings, particularly in foreign-language learning and scholarly writing, ChatGPT is frequently used to edit and refine texts. Students value its speed and the effectiveness of feedback that improves clarity, coherence, and tone (Allen & Mizumoto, 2024; Koltovskaia et al., 2024; Rojas, 2024). It is helpful for eliminating minor errors, paraphrasing, and elevating professional style. Some users remain cautious about the complete accuracy of suggested revisions (Koltovskaia et al., 2024; Rojas, 2024).
Beyond on-demand tutoring, ChatGPT can support and monitor individualized learning. It provides personalized assistance, immediate feedback, and opportunities for self-regulated study. When integrated thoughtfully into course activities and supervised by instructors, it can foster metacognitive self-regulation, motivate learning, and encourage reflection on progress (Chiu, 2024; Dahri et al., 2024).
ChatGPT can also act as a motivator for students and teachers. Reported benefits include stronger feelings of autonomy, competence, and social connectedness. These factors can raise intrinsic motivation and engagement, especially when learners have sufficient AI literacy (Chiu, 2024; Shah et al., 2024; Tummalapenta et al., 2024). Instructors also point to individualized pacing, quick access to information, and streamlined problem-solving as reasons to adopt the tool in educational contexts (Bhaskar & Rana, 2024; Chiu, 2024).

3. Materials and Methods

3.1. Characteristics of the Research Sample

The study involved 409 students representing various fields of study. The survey was distributed among students from universities located in several major Polish cities, including Gdańsk, Toruń, Warsaw, Kraków, and Olsztyn. This multi-city sampling strategy reduced the risk of regional bias in the results. The target group was selected based on two criteria: first, current participation in the educational process, and second, the demographic tendency indicating that younger individuals are more frequent users of ChatGPT v3.5 or v4.0 (see Figure 1). The composition of the student sample by field cluster and level of study is presented in Table 1.
The questionnaire was designed within the quantitative research paradigm, which enabled comparability of results and facilitated the identification of trends in the perception of ChatGPT in educational contexts. Data collection took place in the second quarter of 2024 through an online survey distributed via the Webankieta platform. This method was chosen for its advantages, particularly its capacity to ensure complete anonymity of respondents. Webankieta allows responses to be collected without acquiring personal data, thereby reducing identification concerns and encouraging honest self-reporting. Moreover, the online survey format facilitated automated data collection and preliminary processing, which improved the reliability of the analyses and reduced the risk of errors typically associated with manual data entry.
We provide a sample of the questionnaire and example items for each role in Appendix A, which shows the layout with current and potential usefulness side by side.

3.2. Research Framework

The objective of this study is to identify how Generation Z perceives ChatGPT in the educational context. The research framework integrates TAM, ECT, and TTF, which allows for a comprehensive perspective on technology acceptance and its alignment with educational tasks.
Respondents were asked to evaluate the perceived usefulness of ChatGPT in education on a five-point Likert scale, ranging from 1 (very low usefulness) to 5 (very high usefulness), across five roles: tutor, task assistant, text editor, teacher, and motivator (Table 2). These categories provided a holistic view of ChatGPT’s potential functions within the education sector.
Perceived usefulness was divided into two temporal horizons: current (“present” usefulness) and future (“potential” usefulness). Additionally, respondents were asked to indicate their prior ways of using ChatGPT for educational purposes. This enabled the identification of relationships between actual usage patterns and perceptions of usefulness across different time perspectives.
The preparatory steps outlined above allowed for the formalization of the hypothesis validation process, which is illustrated in Figure 2.
We treated role evaluations as ordinal outcomes from five-point Likert items. Distributions were non-normal with many ties and ceiling effects, which weakens the assumptions of parametric tests about interval scaling, homoscedasticity, and normality of paired differences. We therefore used the Wilcoxon signed-rank test for within-person comparisons of “current” versus “potential” usefulness, and the Mann–Whitney U test for between-group contrasts, including users versus non-users and experience subgroups. Alongside exact p-values, we report effect sizes robust for ordinal data with ties: Rosenthal’s r for Wilcoxon and Mann–Whitney, and Cliff’s δ for between-group contrasts. Given the large sample, we also present medians and interquartile ranges to convey practical importance rather than only detectability.
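The within-person testing step described above can be sketched in a few lines. The snippet below is an illustrative reconstruction, not the authors’ code: it uses simulated Likert ratings to show a Wilcoxon signed-rank comparison of paired “current” versus “potential” scores and the computation of Rosenthal’s r = |Z| / √N.

```python
# Illustrative sketch (simulated ratings, not the study data):
# within-person Wilcoxon signed-rank test on paired "current" vs.
# "potential" Likert items, with Rosenthal's r as the effect size.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
current = rng.integers(1, 6, size=409)  # 1-5 Likert, "current" usefulness
# Simulate the upward shift toward "potential" ratings, capped at 5.
potential = np.clip(current + rng.integers(0, 3, size=409), 1, 5)

# Likert data produce many zero differences and ties; the 'pratt'
# zero-method keeps zero differences in the ranking.
res = stats.wilcoxon(current, potential, zero_method="pratt")

# Recover |Z| from the two-sided p-value of the normal approximation,
# then Rosenthal's r = |Z| / sqrt(N), N = number of paired observations.
z = stats.norm.isf(res.pvalue / 2)
r = z / np.sqrt(len(current))
print(f"p = {res.pvalue:.3g}, Rosenthal r = {r:.3f}")
```

Medians and interquartile ranges per role can then be read from `np.percentile` on each item, mirroring the descriptive statistics reported alongside the tests.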
All statistical analyses were conducted using Statistica software (v13.3), which offers extensive capabilities for statistical computation, visualization of results, and testing of complex research models.

3.3. Hypothesis Development

Building on TAM, ECT and TTF, we formulate role-specific and testable hypotheses. We examine five roles through which students engage with ChatGPT: tutor, task assistant, text editor, teacher, and motivator.
H1: 
Respondents will report higher “potential” than “current” evaluations for each role.
Under TTF, perceived usefulness grows when the model’s capabilities fit what students need to do in their studies. Editing, clarity improvement, formative feedback, step-by-step explanations and task structuring align well with typical academic tasks, which supports higher usefulness in these roles. ECT explains why future-oriented evaluations can exceed present assessments when early exposure and expectation formation amplify perceived benefits, especially while full task–technology fit is still developing (Gupta et al., 2020; Y. Liu et al., 2024). Evidence on affective outcomes is mixed and often contingent on human support, which points to a smaller effect for the motivator role (Deng et al., 2024; Huang et al., 2025).
H2: 
Non-users will demonstrate a greater increase in evaluations for instructional roles (“tutor” and “teacher”) than users.
Instructional uses depend on dialogue and adaptive guidance, which is the core of TTF in tutoring and teaching contexts (Lai & Lin, 2025). Non-users have not calibrated these affordances against real limitations such as hallucinations or uneven accuracy, so their expectations remain less constrained by experience (Hsiao & Tang, 2024). ECT predicts that expectations without confirmation are more optimistic, which widens the potential-minus-current gap for those who have not used the tool (Baig & Yadegaridehkordi, 2025). Thus, we expect non-users to report a greater increase in evaluations in instructional roles, while users will report smaller gains.
H3: 
ChatGPT users who have employed the tool for writing assignments will report higher “potential” evaluations of the task assistant role than those without such experience.
TTF is role-specific. Writing scenarios involve outlining, summarizing, idea generation and revision, which map directly to assistance and editing capabilities. This close fit strengthens beliefs about usefulness for the task assistant role and makes higher potential ratings likely among students who have used ChatGPT for writing (Saif et al., 2023; Yang et al., 2024). Therefore, we expect students with prior writing experience to report higher potential evaluations for the task assistant role, with no a priori differences for tutor, teacher or motivator.
H4: 
ChatGPT users who have employed the tool for learning will report higher “potential” evaluations of the teacher role than those without such experience.
Learning use reveals step-by-step explanations, difficulty adjustment, and immediate formative feedback. TTF links these capabilities to instructional tasks, which should raise perceptions of usefulness in the teacher role. Consistent empirical reports of these instructional affordances support higher potential ratings among students with learning experience using ChatGPT (Almulla, 2024; Dahri et al., 2024; Lai & Lin, 2025; Saif et al., 2023). Accordingly, we expect students with prior learning use to report higher potential evaluations for the teacher role, with no a priori differences for task assistant, text editor, or motivator.

4. Results

4.1. Perception of “Current” and “Potential” Usefulness of ChatGPT in Education

To identify the relationship between “current” and “potential” evaluations of ChatGPT’s usefulness across the defined roles, the Wilcoxon signed-rank test was applied (Table 3).
The Wilcoxon test results show that for each role, the “potential” usefulness of ChatGPT was rated significantly higher than the “current” one (all p < 0.001). According to Cohen’s thresholds for effect sizes (|r| ≈ 0.10 = small, 0.30 = medium, 0.50 = large) (Prajzner, 2023), the obtained Rosenthal r values indicate large effects for tutor (r = 0.5459), text editor (r = 0.6087), teacher (r = 0.6175), and motivator (r = 0.6173), and a medium-to-large effect for task assistant (r = 0.4900, close to the “large” threshold).
To further verify the relationship between “current” and “potential” evaluations, descriptive statistics are presented in Figure 3 and Table 4.
The quartile plots show a clear upward shift from “current” to “potential” evaluations for every role. The assistant, text editor, and teacher roles move most strongly upward, with Q1 and the medians approaching the top of the scale. For the tutor, the median stays at 4, but the lower quartile rises from 3 to 4, which signals improvement among lower ratings. The motivator also shifts upward but remains the lowest rated and the most dispersed.
The assistant, text editor, and teacher roles rise in median from 4 to 5. The tutor keeps a median of 4, while the motivator increases from 2 to 3. In the “current” condition, the mode is 4 for all roles except the motivator, which is 2. In the “potential” condition, the mode is 5 for the assistant, text editor, teacher, and tutor, while the motivator remains at 2. These shifts are consistent with the Wilcoxon results and indicate a general increase in expected usefulness.

4.2. Evaluation of Instructional Roles Among Non-Users

Internal consistency (Cronbach’s alpha) of the tutor–teacher pair was low for “current” evaluations (α = 0.43) and moderate for “potential” evaluations (α = 0.63). Therefore, hypothesis H2 was tested at the level of individual roles within the instructional group (Δ tutor, Δ teacher).
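As a point of reference for this reliability check, Cronbach’s alpha for a two-item scale can be computed directly from item and total-score variances. The sketch below uses simulated ratings (illustrative values only, not the study data).

```python
# Illustrative sketch: Cronbach's alpha for a k-item scale, such as the
# tutor-teacher pair. Simulated data, not the study ratings.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: (n_respondents, k_items) matrix of ratings."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total score
    return (k / (k - 1)) * (1 - item_vars / total_var)

rng = np.random.default_rng(1)
base = rng.integers(1, 6, size=400)               # shared component
tutor = np.clip(base + rng.integers(-1, 2, size=400), 1, 5)
teacher = np.clip(base + rng.integers(-2, 3, size=400), 1, 5)

alpha = cronbach_alpha(np.column_stack([tutor, teacher]))
print(f"alpha = {alpha:.2f}")
```

With only two items, alpha is a direct function of the inter-item correlation, which is why a modest correlation between the tutor and teacher evaluations yields the low-to-moderate values reported above.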
To identify differences in the increase in evaluations (Δ = “potential” − “current”) between users and non-users, the nonparametric Mann–Whitney U test was applied (Table 5). The analysis was conducted separately for the instructional roles of tutor and teacher. Alongside U/Z statistics and p-values, effect sizes r (Rosenthal) and Cliff’s δ are reported to assess both statistical significance and the strength of the differences.
For the tutor role, non-users demonstrated a significantly greater increase in evaluations (p < 0.001, r = 0.4008, Cliff’s δ = 0.7265, large effect). For the teacher role, no significant differences were found (p = 0.5059, r = −0.0301, δ = −0.0546, negligible effect).
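The between-group step can be illustrated as follows. The Δ scores below are simulated for demonstration only; Rosenthal’s r is recovered from the normal approximation, and Cliff’s δ is derived algebraically from the U statistic.

```python
# Illustrative sketch (simulated deltas, not the study data):
# Mann-Whitney U on "potential" - "current" gains for two groups,
# with Rosenthal's r and Cliff's delta as effect sizes.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
delta_nonusers = rng.integers(1, 5, size=150)   # gains of 1-4 points
delta_users = rng.integers(-1, 2, size=250)     # changes near zero

res = stats.mannwhitneyu(delta_nonusers, delta_users,
                         alternative="two-sided")
n1, n2 = len(delta_nonusers), len(delta_users)

# Rosenthal's r = |Z| / sqrt(n1 + n2), |Z| from the two-sided p-value.
z = stats.norm.isf(res.pvalue / 2)
r = z / np.sqrt(n1 + n2)

# Cliff's delta = P(X > Y) - P(X < Y). Since U1 counts pairs where the
# first sample exceeds the second (ties count 0.5), delta = 2*U1/(n1*n2) - 1.
cliffs_delta = 2 * res.statistic / (n1 * n2) - 1
print(f"p = {res.pvalue:.3g}, r = {r:.3f}, Cliff's delta = {cliffs_delta:.3f}")
```

Cliff’s δ ranges from −1 to 1 and is robust to ties and ordinal scaling, which is why it is reported here alongside r.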
Descriptive statistics for increases in the tutor and teacher roles are presented in Table 6 to illustrate the magnitude and dispersion of the effect in both groups.
The descriptive statistics reveal a strong contrast in the tutor role. Among non-users, the median increase was 3, indicating a substantial rise in perceived usefulness, while among users the median equaled 0, with small and symmetrical changes around zero. For the teacher role, both groups had a median of 0, confirming the absence of significant differences between them and indicating a much weaker effect compared to the tutor role. These descriptive distributions align with the Mann–Whitney U test results, namely that the significant increase concerns only the tutor role among non-users.

4.3. Evaluation of ChatGPT’s Usefulness as a “Task Assistant” Among Users

A preliminary Wilcoxon signed-rank test confirmed a significant within-person difference between “current” and “potential” evaluations of the task assistant role (p < 0.001) with a large effect size (r = 0.8666). This justified subgroup analyses comparing students who reported using ChatGPT for writing assignments (N1 = 145) with those who had not (N0 = 217).
Subsequently, the Mann–Whitney U test was applied to identify the direction of this difference. Students who used ChatGPT for writing assignments achieved higher average ranks (198.38 vs. 170.22), yielding U = 13,284.5, Z = −2.913, p = 0.0036. The effect size was small (r = 0.1319, Cliff’s δ = −0.1556). These findings indicate that experience with ChatGPT in writing assignments is associated with a more optimistic evaluation of its potential usefulness in the assistant role.
To illustrate these relationships in more detail, descriptive statistics are provided in Figure 4 and Table 7.
In all three groups (all respondents, users, and non-users), “potential” ratings for the task assistant role exceed “current” ratings. The median potential sits near the top of the scale with an interquartile range of 4 to 5. Among users, the box is slightly higher and tighter around 5 than among non-users, indicating a larger concentration of maximum scores for those who have used ChatGPT for writing.
Subgroup differences appear in the share of maximum ratings. The median and mode are 5 in every row (N = 362 overall; 145 users; 217 non-users), yet about 71% of users gave the top score (103 of 145) compared with about 55% of non-users (120 of 217). This pattern aligns with the Mann–Whitney U test and points to more optimistic potential assessments among students with writing experience.

4.4. Evaluation of ChatGPT’s Usefulness as a “Teacher” Among Users

A preliminary Wilcoxon signed-rank test confirmed a significant within-person difference between “current” and “potential” evaluations of the teacher role (p < 0.001) with a large effect size (r = 0.8666), which justified subgroup comparisons between students who reported using ChatGPT for learning (N1 = 247) and those who had not (N0 = 115).
The Mann–Whitney U test showed that students with learning experience using ChatGPT achieved higher average ranks (190.50 vs. 162.17), yielding U = 11,979, Z = −2.5324, p = 0.0113. The effect size was small (r = 0.1260, Cliff’s δ = −0.1566). The results suggest that learning experience with ChatGPT is associated with a more optimistic evaluation of its potential usefulness in the teacher role, although the effect is modest.
To provide a clearer picture of these relationships, descriptive statistics are presented in Figure 5 and Table 8.
In the “potential” assessment of the teacher role, there is a clear increase in all three groups, with the highest ratings among those who used ChatGPT for learning. The box for users shifts upward and clusters closer to 4–5 than for non-users, indicating a moderate but consistent increase in expected usefulness. By comparison, “current” ratings remain lower in all groups.
In both subgroups, the teacher role was evaluated highly (median = 4). The difference lies in the distribution peak: among users, the mode was 5 (mode count = 104), while among non-users it was 4 (mode count = 47). This indicates a greater concentration of maximum evaluations among students who had used ChatGPT for learning. This subtle difference aligns with the Mann–Whitney U test results.

5. Discussion

5.1. Perception of “Current” and “Potential” Usefulness of ChatGPT in Education

According to Paradeda et al., 77% of users consider ChatGPT effective or highly effective in education (Paradeda et al., 2025). The present findings confirm this tendency: in four out of five roles, the median for “current” evaluation was 4, which indicates high usefulness of ChatGPT in the roles of tutor, task assistant, text editor, and teacher. From the perspective of TAM, such high scores reflect perceived usefulness (PU) in areas aligned with the strengths of large language models (LLMs). Given the consistently high perceived ease of use (PEOU), PU translates into the intention to continue usage in the future (Davis, 1989).
The exception was the role of motivator, for which the median was only 2. This may be explained by the fact that motivation is deeply rooted in internal and interpersonal factors, such as peer, teacher, or family support, which are difficult to replace with interaction through a technological tool (Menges et al., 2016). The lower evaluation of the motivator role suggests a weaker PEOU → PU relationship. In other words, even an easy-to-use tool does not generate high usefulness when its function does not match the task requirement. In line with TAM, the absence of such a match reduces PU and the intention to use (Davis, 1989).
Overall, findings suggest that Gen Z students see ChatGPT’s educational potential as not yet fully realized (average effect size across roles r̄ = 0.576). The gap between “current” and “potential” evaluations is consistent with Expectation–Confirmation Theory. Positive experiences and the anticipated improvement of models lead to expectation confirmation, satisfaction, and a stronger intention to continue (Bhattacherjee, 2001). This aligns with other research suggesting that ChatGPT may in the future become a standard educational tool shaping teaching and learning strategies (Da Silva et al., 2024; Looi & Jia, 2025; Sirisathitkul & Jaroonchokanan, 2025; Rejeb et al., 2024). These results reflect the assumptions of both TAM and ECT. Consequently, hypothesis H1 is accepted.

5.2. Evaluation of Instructional Roles Among Non-Users

The findings indicate that among non-users, the increase in evaluation concerned primarily the tutor role (p < 0.001), while for the teacher role, no significant differences between groups were observed (p = 0.5059). This pattern may be explained in psychological terms: individuals without direct experience are more likely to rely on cognitive heuristics such as availability and representativeness. It is easier to imagine ChatGPT as a “second source of suggestions” than as a full-fledged teacher. Moreover, the absence of experience weakens the stability and consistency of attitudes, making them more susceptible to external influences and more cautious in assessing high-stakes applications such as the teacher role. This is consistent with classical work on judgment heuristics (Tversky & Kahneman, 1974) as well as with meta-analyses showing that direct experience strengthens the attitude–behavior link (Glasman & Albarracín, 2006).
Additionally, expectancy effects may influence anticipated effectiveness. Positive or negative predispositions can raise or lower evaluations independently of actual performance (Foroughi et al., 2016). Classical media research has also cautioned that “novelty effects” and attribution of gains to the medium may bias user impressions (Clark, 1983). In light of these considerations, H2 in its general form is rejected, since empirical support applies only to the tutor role, not the teacher role.

5.3. Evaluation of ChatGPT’s Usefulness as an Assistant Among Users

The results show that users who employed ChatGPT for writing assignments reported higher “potential” usefulness in the assistant role compared to those without such experience (Wilcoxon: p < 0.001, r = 0.8666; Mann–Whitney: p = 0.0036, small effect: r = 0.1319, Cliff’s δ ≈ −0.156). This finding is consistent with Task–Technology Fit theory, which holds that technology produces benefits when its functions match task requirements, such as structuring content, summarization, rapid retrieval, and text outlining. Real experience with such alignment strengthens perceptions of usefulness (Goodhue & Thompson, 1995).
Empirical studies on ChatGPT’s role in writing support confirm positive engagement and usefulness in editing and planning tasks (Allen & Mizumoto, 2024; Koltovskaia et al., 2024; Oates & Johnson, 2025). Broader reviews highlight the advantages of assistant-type applications over full substitution of student work (Rejeb et al., 2024). In the context of this study, the conclusion is that actual use in “matching” tasks teaches users where ChatGPT works best and how to utilize it, which enhances operational fluency and reduces uncertainty. This naturally leads to higher evaluations of “potential” usefulness in the assistant role. Accordingly, H3 is accepted.

5.4. Evaluation of ChatGPT’s Usefulness as a Teacher Among Users

The analysis demonstrates that students who used ChatGPT for learning assigned higher “potential” evaluations to the teacher role compared to others (Wilcoxon: p < 0.001, r = 0.8666; Mann–Whitney: p = 0.0113, small effect: r = 0.1260, Cliff’s δ ≈ −0.1566). This indicates a meaningful, though modest, upward shift in evaluations among those with direct learning experience using AI. From the perspective of TTF, this finding supports the argument that matching the tool’s functions (explanation, step-by-step examples, adjustment of difficulty) to the requirements of the task (independent learning, problem-solving) strengthens perceived usefulness and the intention to continue use (Goodhue & Thompson, 1995).
Recent research identifies three “teaching” mechanisms that facilitate this fit:
  • personalization, that is, dynamic adjustment of content and difficulty to student needs (Looi & Jia, 2025);
  • metacognitive self-regulation, where ChatGPT supports the planning and monitoring of learning (Dahri et al., 2024);
  • structuring of self-regulated learning (SRL) activities, aligned with motivational frameworks (Chiu, 2024).
Comprehensive reviews of LLMs in education confirm that such functions contribute to higher evaluations of usefulness when they are applied in tasks to which they effectively “fit” (Yang et al., 2024). In light of the empirical findings and theoretical justification, H4 is accepted.
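The within-group shift underlying these results (paired “current” vs. “potential” ratings) corresponds to a Wilcoxon signed-rank test. As a hedged sketch with made-up ratings (not the study’s data), the effect size r = |Z|/√N can be recovered from the two-sided p-value, with N counted over non-tied pairs as in Table 3:

```python
import numpy as np
from scipy.stats import norm, wilcoxon

# Illustrative paired 1-5 Likert ratings (placeholders, NOT the study's data):
# each learner's "current" vs. "potential" rating of the teacher role
current = np.array([3, 4, 2, 3, 4, 3, 2, 4, 3, 3, 4, 2])
potential = np.array([4, 5, 4, 4, 4, 5, 3, 5, 4, 4, 5, 3])

# zero_method="wilcox" drops zero-difference pairs, matching the
# "non-tied pairs" N convention used when reporting such tests
res = wilcoxon(current, potential, zero_method="wilcox")
n = int(np.sum(current != potential))  # non-tied pairs
z = abs(norm.ppf(res.pvalue / 2))      # |Z| recovered from the two-sided p
r = z / np.sqrt(n)                     # effect size r = |Z| / sqrt(N)
print(f"T={res.statistic:.1f}, p={res.pvalue:.4f}, N={n}, r={r:.3f}")
```

Recovering |Z| from the p-value assumes the asymptotic (normal-approximation) test, which SciPy applies here because the paired differences contain zeros and ties.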

5.5. Implications for AI Literacy

AI literacy is becoming a key component of education. Students and teachers should understand how ChatGPT works, know its limitations, evaluate the credibility of generated content, and use AI tools responsibly (Bender, 2024; Kohnke et al., 2023; J. Su & Yang, 2023). Building these skills requires embedding topics on how models function, their social impact, and the basics of ethical use into curricula (Bender, 2024; Chaudhuri & Terrones, 2024; J. Su & Yang, 2023).
In practice, a simple routine works well: prompt, verify, cite, reflect. It trains query formulation, verification, transparent attribution of AI support, and brief reflection on the process.

5.6. Implications for Pedagogy

ChatGPT can support personalization, help students approach difficult content, and foster engagement and creativity. At the same time, there is a risk of shallow learning, reduced independence, and over-reliance on the tool (Adiguzel et al., 2023; Z. Lin, 2023; Malik et al., 2024). The teacher’s role shifts from content provider to guide: one who models responsible practice and teaches effective use of AI while maintaining quality and integrity standards (Bettayeb et al., 2024; Karataş & Yüce, 2024; Kohnke et al., 2023).
This requires tasks and assessments that promote source verification, transparency about AI contributions, students’ own analysis, and documentation of the work process (Antoniou et al., 2025; Chaudhuri & Terrones, 2024). An assist-first approach eases implementation: start with process support and editing, then introduce dialogic uses with stronger scaffolding.

5.7. Ethical and Institutional Implications

The main ethical challenges include plagiarism and authenticity of work, transparency about AI use, privacy protection, equity of access, and the risk of algorithmic bias (Adel et al., 2024; Anders, 2023; García-López et al., 2024). Institutions should develop clear policies for AI use, communicate them to students, and teach transparent ways to report the role of AI in coursework (García-López et al., 2024; Vaccino-Salvadore, 2023).
Local and discipline-specific ethical frameworks are also needed. They should set clear boundaries for acceptable assistance in different teaching contexts (Vetter et al., 2024). With such guidance, ChatGPT serves as a partner in learning rather than a substitute for students’ own work or for quality standards.

6. Limitations and Future Recommendations

This study is cross-sectional and relies on self-reported data from Generation Z students in Poland, collected online. Accordingly, causal inferences are limited, and generalization beyond this population should be cautious. Role usefulness was measured on five-point Likert scales, and analyses were conducted primarily with nonparametric tests. Some between-group effects, such as those in H3 and H4, were small in magnitude, which further warrants cautious interpretation.
Future studies on this topic should employ longitudinal and/or panel designs to trace the dynamics of “current” and “potential” evaluations in relation to user experience and advances in language models. It is also recommended to extend the sample beyond Polish Generation Z students to include other age groups and international contexts, as well as to report results by discipline and task type.

7. Conclusions

This study examined the perceived usefulness of ChatGPT across five roles, in both “current” and “potential” horizons, within the TAM, ECT, and TTF frameworks. Potential evaluations exceeded current ones in all roles (H1 accepted). Among non-users, the increase applied only to the tutor role (H2 rejected), whereas among users, higher evaluations were observed for the assistant role in writing assignments (H3 accepted) and the teacher role in learning activities (H4 accepted). The highest evaluations were concentrated in tasks aligned with the strengths of LLMs, such as structuring, summarization, and step-by-step explanations. This pattern confirms the TTF mechanism and links current perceived usefulness and ease of use (TAM) with the predominance of “potential” evaluations, which is explained by expectation confirmation (ECT).
In terms of practice and policy for young people, these findings suggest prioritizing ChatGPT as a process assistant and instructional support in low-stakes tasks, combined with clear guidelines on verification and academic integrity. The contribution lies in distinguishing usefulness across five roles and demonstrating that experience with high task–technology fit, rather than mere exposure to AI, leads to the most positive ratings. This provides evidence-based guidance for student support services, curriculum design, and youth-focused policy. The limitations (cross-sectional design, self-reporting, small effects) point to directions for future work, including longitudinal interventions, interregional replications, and research on equity of access and digital inclusion across youth groups.

Author Contributions

Conceptualization, M.O. and K.S.; methodology, M.O. and K.S.; software, K.S.; validation, M.O. and K.S.; formal analysis, K.S.; investigation, M.O.; resources, M.O.; data curation, K.S.; writing—original draft preparation, K.S.; writing—review and editing, M.O.; visualization, K.S.; supervision, M.O.; project administration, M.O. and K.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki and approved by the Scientific Research Ethics Committee of the University of Warmia and Mazury in Olsztyn (Decision No. 8/2024, 25 March 2024).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors on request.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A

Survey Questionnaire (sample)
  • Have you ever used ChatGPT?
    Yes.
    No.
  • For what purposes do you use ChatGPT?
    (multiple options possible)
    Assistance in writing routine letters (e.g., official letters, applications, etc.).
    Assistance in writing academic papers, articles, etc.
    Help in learning.
    Searching for information.
    Other (please specify) _____________________________________________
  • For each role below, please rate how useful ChatGPT is in education now and how useful it will be, in your opinion, in the near future (2–3 years).
    (use the scale: 1—Definitely NOT useful; 2—Rather NOT useful; 3—Neither useful nor not useful; 4—Rather useful; 5—Definitely useful).
| Role | Characteristics | Current Usefulness | Potential Usefulness |
| Tutor | Explaining complex topics and answering students’ questions at various educational levels. | | |
| Assistant in educational tasks | For example, create an essay plan or write a few sentences of a given text for an essay, etc. | | |
| Text editor | Will correct the text, find errors, propose changes. | | |
| Teacher | Prepare quizzes, tests, and other educational materials that facilitate learning. | | |
| Motivator | Motivating individuals in education to continue their efforts, pointing out progress in learning, setting goals. | | |

References

  1. Acosta-Enriquez, B. G., Ballesteros, M. A. A., Vargas, C. G. A. P., Ulloa, M. N. O., Ulloa, C. R. G., Romero, J. M. P., Jaramillo, N. D. G., Orellana, H. U. C., Anzoátegui, D. X. A., & Roca, C. L. (2024). Knowledge, attitudes, and perceived Ethics regarding the use of ChatGPT among generation Z university students. International Journal for Educational Integrity, 20(1), 10. [Google Scholar] [CrossRef]
  2. Adel, A., Ahsan, A., & Davison, C. (2024). ChatGPT promises and challenges in education: Computational and ethical perspectives. Education Sciences, 14(8), 814. [Google Scholar] [CrossRef]
  3. Adiguzel, T., Kaya, M. H., & Cansu, F. K. (2023). Revolutionizing education with AI: Exploring the transformative potential of ChatGPT. Contemporary Educational Technology, 15(3), ep429. [Google Scholar] [CrossRef]
  4. AlAfnan, M. A., Dishari, N. S., Jovic, N. M., & Lomidze, N. K. (2023). ChatGPT as an educational tool: Opportunities, challenges, and recommendations for communication, business writing, and composition courses. Journal of Artificial Intelligence and Technology, 3(2), 60–68. [Google Scholar] [CrossRef]
  5. Allen, T. J., & Mizumoto, A. (2024). ChatGPT over my friends: Japanese English-as-a-Foreign-Language Learners’ preferences for editing and proofreading strategies. RELC Journal. [Google Scholar] [CrossRef]
  6. Almarashdi, H. S., Jarrah, A. M., Khurma, O. A., & Gningue, S. M. (2024). Unveiling the potential: A systematic review of ChatGPT in transforming mathematics teaching and learning. Eurasia Journal of Mathematics Science and Technology Education, 20(12), em2555. [Google Scholar] [CrossRef]
  7. Almashy, A., Ahmed, A. S. M. M., Jamshed, M., Ansari, M. S., Banu, S., & Warda, W. U. (2024). Analyzing the impact of CALL tools on English learners’ writing skills: A Comparative study of errors correction. World Journal of English Language, 14(6), 657. [Google Scholar] [CrossRef]
  8. Almulla, M. A. (2024). Investigating influencing factors of learning satisfaction in AI ChatGPT for research: University students perspective. Heliyon, 10(11), e32220. [Google Scholar] [CrossRef]
  9. Alsaweed, W., & Aljebreen, S. (2024). Investigating the accuracy of ChatGPT as a writing error correction tool. International Journal of Computer-Assisted Language Learning and Teaching, 14(1), 1–18. [Google Scholar] [CrossRef]
  10. Anders, B. A. (2023). Is using ChatGPT cheating, plagiarism, both, neither, or forward thinking? Patterns, 4(3), 100694. [Google Scholar] [CrossRef] [PubMed]
  11. Antoniou, C., Pavlou, A., & Ikossi, D. G. (2025). Let’s chat! Integrating ChatGPT in medical student assignments to enhance critical analysis. Medical Teacher, 47(5), 791–793. [Google Scholar] [CrossRef]
  12. Aydın, Ö., & Karaarslan, E. (2023). Is ChatGPT leading generative AI? What is beyond expectations? Academic Platform Journal of Engineering and Smart Systems, 11(3), 118–134. [Google Scholar] [CrossRef]
  13. Baig, M. I., & Yadegaridehkordi, E. (2025). Factors influencing academic staff satisfaction and continuous usage of generative artificial intelligence (GenAI) in higher education. International Journal of Educational Technology in Higher Education, 22(1), 5. [Google Scholar] [CrossRef]
  14. Bašić, Ž., Banovac, A., Kružić, I., & Jerković, I. (2023). ChatGPT-3.5 as writing assistance in students’ essays. Humanities and Social Sciences Communications, 10(1), 750. [Google Scholar] [CrossRef]
  15. Bender, S. M. (2024). Awareness of Artificial Intelligence as an essential digital literacy: ChatGPT and Gen-AI in the classroom. Changing English, 31(2), 161–174. [Google Scholar] [CrossRef]
  16. Berman, J., McCoy, L., & Camarata, T. (2024). LLM-generated multiple choice practice quizzes for pre-clinical medical students; use and validity. Physiology, 39(S1), 118–134. [Google Scholar] [CrossRef]
  17. Bettayeb, A. M., Talib, M. A., Altayasinah, A. Z. S., & Dakalbab, F. (2024). Exploring the impact of ChatGPT: Conversational AI in education. Frontiers in Education, 9, 1379796. [Google Scholar] [CrossRef]
  18. Bhaskar, P., & Rana, S. (2024). The ChatGPT dilemma: Unravelling teachers’ perspectives on inhibiting and motivating factors for adoption of ChatGPT. Journal of Information Communication and Ethics in Society, 22(2), 219–239. [Google Scholar] [CrossRef]
  19. Bhattacherjee, A. (2001). Understanding information systems continuance: An expectation-confirmation model. MIS Quarterly, 25(3), 351. [Google Scholar] [CrossRef]
  20. Biloš, A., & Budimir, B. (2024). Understanding the adoption dynamics of ChatGPT among generation Z: Insights from a modified UTAUT2 model. Journal of Theoretical and Applied Electronic Commerce Research, 19(2), 863–879. [Google Scholar] [CrossRef]
  21. Chan, C. K. Y., & Hu, W. (2023). Students’ voices on generative AI: Perceptions, benefits, and challenges in higher education. International Journal of Educational Technology in Higher Education, 20(1), 43. [Google Scholar] [CrossRef]
  22. Chan, C. K. Y., & Lee, K. K. W. (2023). The AI generation gap: Are Gen Z students more interested in adopting generative AI such as ChatGPT in teaching and learning than their Gen X and millennial generation teachers? Smart Learning Environments, 10(1), 60. [Google Scholar] [CrossRef]
  23. Chaudhuri, J., & Terrones, L. (2024). Reshaping academic library information literacy programs in the advent of ChatGPT and other generative AI technologies. Internet Reference Services Quarterly, 29(1), 1–25. [Google Scholar] [CrossRef]
  24. Chiu, T. K. F. (2024). A classification tool to foster self-regulated learning with generative artificial intelligence by applying self-determination theory: A case of ChatGPT. Educational Technology Research and Development, 72(4), 2401–2416. [Google Scholar] [CrossRef]
  25. Clark, R. E. (1983). Reconsidering research on learning from media. Review of Educational Research, 53(4), 445–459. [Google Scholar] [CrossRef]
  26. Črček, N., & Patekar, J. (2023). Writing with AI: University students’ use of ChatGPT. Journal of Language and Education, 9(4), 128–138. [Google Scholar] [CrossRef]
  27. Dahri, N. A., Yahaya, N., Al-Rahmi, W. M., Aldraiweesh, A., Alturki, U., Almutairy, S., Shutaleva, A., & Soomro, R. B. (2024). Extended TAM based acceptance of AI-powered ChatGPT for supporting metacognitive self-regulated learning in education: A mixed-methods study. Heliyon, 10(8), e29317. [Google Scholar] [CrossRef]
  28. Da Silva, C. A. G., Ramos, F. N., De Moraes, R. V., & Santos, E. L. D. (2024). ChatGPT: Challenges and benefits in software programming for higher education. Sustainability, 16(3), 1245. [Google Scholar] [CrossRef]
  29. Davis, F. D. (1989). Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Quarterly, 13(3), 319. [Google Scholar] [CrossRef]
  30. Deng, R., Jiang, M., Yu, X., Lu, Y., & Liu, S. (2024). Does ChatGPT enhance student learning? A systematic review and meta-analysis of experimental studies. Computers & Education, 227, 105224. [Google Scholar] [CrossRef]
  31. Ding, L., Li, T., Jiang, S., & Gapud, A. (2023). Students’ perceptions of using ChatGPT in a physics class as a virtual tutor. International Journal of Educational Technology in Higher Education, 20(1), 63. [Google Scholar] [CrossRef]
  32. ElSayary, A. (2023). An investigation of teachers’ perceptions of using ChatGPT as a supporting tool for teaching and learning in the digital era. Journal of Computer Assisted Learning, 40(3), 931–945. [Google Scholar] [CrossRef]
  33. Fawns, T., Henderson, M., Matthews, K., Oberg, G., Liang, Y., Walton, J., Corbin, T., Bearman, M., Shum, S. B., McCluskey, T., McLean, J., Shibani, A., Bakharia, A., Lim, L., Pepperell, N., Slade, C., Chung, J., & Seligmann, A. (2024). Gen AI and student perspectives of use and ambiguity. Proceedings of Australasian Society for Computers in Learning in Tertiary Education, 132–134. [Google Scholar] [CrossRef]
  34. Fokides, E., & Peristeraki, E. (2024). Comparing ChatGPT’s correction and feedback comments with that of educators in the context of primary students’ short essays written in English and Greek. Education and Information Technologies, 30(2), 2577–2621. [Google Scholar] [CrossRef]
  35. Foroughi, C. K., Monfort, S. S., Paczynski, M., McKnight, P. E., & Greenwood, P. M. (2016). Placebo effects in cognitive training. Proceedings of the National Academy of Sciences, 113(27), 7470–7474. [Google Scholar] [CrossRef]
  36. García-López, I. M., González, C. S. G., Ramírez-Montoya, M., & Molina-Espinosa, J. (2024). Challenges of implementing ChatGPT on education: Systematic literature review. International Journal of Educational Research Open, 8, 100401. [Google Scholar] [CrossRef]
  37. Glasman, L. R., & Albarracín, D. (2006). Forming attitudes that predict future behavior: A meta-analysis of the attitude-behavior relation. Psychological Bulletin, 132(5), 778–822. [Google Scholar] [CrossRef] [PubMed]
  38. Goodhue, D. L., & Thompson, R. L. (1995). Task-Technology fit and individual performance. MIS Quarterly, 19(2), 213. [Google Scholar] [CrossRef]
  39. Guo, A. A., & Li, J. (2023). Harnessing the power of ChatGPT in medical education. Medical Teacher, 45(9), 1063. [Google Scholar] [CrossRef]
  40. Guo, K., & Wang, D. (2023). To resist it or to embrace it? Examining ChatGPT’s potential to support teacher feedback in EFL writing. Education and Information Technologies, 29(7), 8435–8463. [Google Scholar] [CrossRef]
  41. Gupta, A., Yousaf, A., & Mishra, A. (2020). How pre-adoption expectancies shape post-adoption continuance intentions: An extended expectation-confirmation model. International Journal of Information Management, 52, 102094. [Google Scholar] [CrossRef]
  42. Hamerman, E. J., Aggarwal, A., & Martins, C. (2024). An investigation of generative AI in the classroom and its implications for university policy. Quality Assurance in Education, 33(2), 253–266. [Google Scholar] [CrossRef]
  43. Han, J., & Li, N. M. (2024). Exploring ChatGPT-supported teacher feedback in the EFL context. System, 126, 103502. [Google Scholar] [CrossRef]
  44. Hsiao, C., & Tang, K. (2024). Beyond acceptance: An empirical investigation of technological, ethical, social, and individual determinants of GenAI-supported learning in higher education. Education and Information Technologies, 30(8), 10725–10750. [Google Scholar] [CrossRef]
  45. Huang, W., Jiang, J., King, R. B., & Fryer, L. K. (2025). Chatbots and student motivation: A scoping review. International Journal of Educational Technology in Higher Education, 22(1), 26. [Google Scholar] [CrossRef]
  46. Imran, M., & Almusharraf, N. (2023). Analyzing the role of ChatGPT as a writing assistant at higher education level: A systematic review of the literature. Contemporary Educational Technology, 15(4), ep464. [Google Scholar] [CrossRef]
  47. Karataş, F., Abedi, F. Y., Gunyel, F. O., Karadeniz, D., & Kuzgun, Y. (2024). Incorporating AI in foreign language education: An investigation into ChatGPT’s effect on foreign language learners. Education and Information Technologies, 29(15), 19343–19366. [Google Scholar] [CrossRef]
  48. Karataş, F., & Yüce, E. (2024). AI and the future of teaching: Preservice teachers’ reflections on the use of artificial intelligence in open and distributed learning. The International Review of Research in Open and Distributed Learning, 25(3), 304–325. [Google Scholar] [CrossRef]
  49. Kavitha, K., Joshith, V. P., & Sharma, S. (2024). Beyond text: ChatGPT as an emotional resilience support tool for Gen Z—A sequential explanatory design exploration. E-Learning and Digital Media, 1–27. [Google Scholar] [CrossRef]
  50. Kiryakova, G., & Angelova, N. (2023). ChatGPT—A challenging tool for the university professors in their teaching practice. Education Sciences, 13(10), 1056. [Google Scholar] [CrossRef]
  51. Kıyak, Y. S., & Emekli, E. (2024). ChatGPT prompts for generating multiple-choice questions in medical education and evidence on their validity: A literature review. Postgraduate Medical Journal, 100(1189), 858–865. [Google Scholar] [CrossRef]
  52. Kohnke, L., Moorhouse, B. L., & Zou, D. (2023). ChatGPT for language teaching and learning. RELC Journal, 54(2), 537–550. [Google Scholar] [CrossRef]
  53. Koltovskaia, S., Rahmati, P., & Saeli, H. (2024). Graduate students’ use of ChatGPT for academic text revision: Behavioral, cognitive, and affective engagement. Journal of Second Language Writing, 65, 101130. [Google Scholar] [CrossRef]
  54. Lai, C., & Lin, C. (2025). Analysis of learning behaviors and outcomes for students with different knowledge levels: A case study of intelligent tutoring system for coding and learning (ITS-CAL). Applied Sciences, 15(4), 1922. [Google Scholar] [CrossRef]
  55. Lin, S., & Crosthwaite, P. (2024). The grass is not always greener: Teacher vs. GPT-assisted written corrective feedback. System, 127, 103529. [Google Scholar] [CrossRef]
  56. Lin, Z. (2023). Why and how to embrace AI such as ChatGPT in your academic life. Royal Society Open Science, 10(8), 230658. [Google Scholar] [CrossRef]
  57. Liu, F., Chang, X., Zhu, Q., Huang, Y., Li, Y., & Wang, H. (2024). Assessing clinical medicine students’ acceptance of large language model: Based on technology acceptance model. BMC Medical Education, 24(1), 1251. [Google Scholar] [CrossRef]
  58. Liu, Y., Park, J., & McMinn, S. (2024). Using generative artificial intelligence/ChatGPT for academic communication: Students’ perspectives. International Journal of Applied Linguistics, 34(4), 1437–1461. [Google Scholar] [CrossRef]
  59. Lo, C. K. (2023). What is the impact of ChatGPT on education? A rapid review of the literature. Education Sciences, 13(4), 410. [Google Scholar] [CrossRef]
  60. Looi, C., & Jia, F. (2025). Personalization capabilities of current technology chatbots in a learning environment: An analysis of student-tutor bot interactions. Education and Information Technologies, 30, 14165–14195. [Google Scholar] [CrossRef]
  61. Mahapatra, S. (2024). Impact of ChatGPT on ESL students’ academic writing skills: A mixed methods intervention study. Smart Learning Environments, 11(1), 9. [Google Scholar] [CrossRef]
  62. Malik, A., Khan, M. L., Hussain, K., Qadir, J., & Tarhini, A. (2024). AI in higher education: Unveiling academicians’ perspectives on teaching, research, and ethics in the age of ChatGPT. Interactive Learning Environments, 33(3), 2390–2406. [Google Scholar] [CrossRef]
  63. Menges, J. I., Tussing, D. V., Wihler, A., & Grant, A. M. (2016). When job performance is all relative: How family motivation energizes effort and compensates for intrinsic motivation. Academy of Management Journal, 60(2), 695–719. [Google Scholar] [CrossRef]
  64. Mirriahi, N., Marrone, R., Barthakur, A., Gabriel, F., Colton, J., Yeung, T. N., Arthur, P., & Kovanovic, V. (2025). The relationship between students’ self-regulated learning skills and technology acceptance of GenAI. Australasian Journal of Educational Technology, 41(2), 16–33. [Google Scholar] [CrossRef]
  65. Munaye, Y. Y., Admass, W., Belayneh, Y., Molla, A., & Asmare, M. (2025). ChatGPT in education: A systematic review on opportunities, challenges, and future directions. Algorithms, 18(6), 352. [Google Scholar] [CrossRef]
  66. Ngo, T. T. A. (2023). The perception by university students of the use of ChatGPT in education. International Journal of Emerging Technologies in Learning (IJET), 18(17), 4–19. [Google Scholar] [CrossRef]
  67. Nugroho, A., Andriyanti, E., Widodo, P., & Mutiaraningrum, I. (2024). Students’ appraisals post-ChatGPT use: Students’ narrative after using ChatGPT for writing. Innovations in Education and Teaching International, 62(2), 499–511. [Google Scholar] [CrossRef]
  68. Oates, A., & Johnson, D. (2025). ChatGPT in the classroom: Evaluating its role in fostering critical evaluation skills. International Journal of Artificial Intelligence in Education. [Google Scholar] [CrossRef]
  69. OpenAI. (2025, February 20). Building an AI-ready workforce: A look at college student ChatGPT adoption in the US. Available online: https://openai.com/global-affairs/college-students-and-chatgpt/ (accessed on 28 July 2025).
  70. Paradeda, R. B., Torres, D. T. B., & Takahashi, A. (2025). Generative AI in education: A study of undergraduate students’ expectations of ChatGPT. RENOTE, 22(3), 174–185. [Google Scholar] [CrossRef]
  71. Prajzner, A. (2023). Selected indicators of effect size in psychological research. Annales Universitatis Mariae Curie-Skłodowska Sectio J—Paedagogia-Psychologia, 35(4), 139–157. [Google Scholar] [CrossRef]
  72. Rejeb, A., Rejeb, K., Appolloni, A., Treiblmaier, H., & Iranmanesh, M. (2024). Exploring the impact of ChatGPT on education: A web mining and machine learning approach. The International Journal of Management Education, 22(1), 100932. [Google Scholar] [CrossRef]
  73. Rogers, E. M. (1983). Diffusion of innovations (3rd ed.). The Free Press. [Google Scholar]
  74. Rojas, A. J. (2024). An investigation into ChatGPT’s application for a scientific writing assignment. Journal of Chemical Education, 101(5), 1959–1965. [Google Scholar] [CrossRef]
  75. Saif, N., Khan, S. U., Shaheen, I., ALotaibi, F. A., Alnfiai, M. M., & Arif, M. (2023). Chat-GPT; validating Technology Acceptance Model (TAM) in education sector via ubiquitous learning mechanism. Computers in Human Behavior, 154, 108097. [Google Scholar] [CrossRef]
  76. Sallam, M., Salim, N., Barakat, M., & Al-Tammemi, A. (2023). ChatGPT applications in medical, dental, pharmacy, and public health education: A descriptive study highlighting the advantages and limitations. Narra J, 3(1), e103. [Google Scholar] [CrossRef]
  77. Shah, C. S., Mathur, S., & Vishnoi, S. K. (2024). Is ChatGPT enhancing youth’s learning, engagement and satisfaction? Journal of Computer Information Systems, 1–16. [Google Scholar] [CrossRef]
  78. Similarweb. (n.d.). chatgpt.com. Available online: https://pro.similarweb.com/#/digitalsuite/websiteanalysis/overview/website-performance/*/999/1m?webSource=Total&key=chatgpt.com (accessed on 28 July 2025).
  79. Sirisathitkul, C., & Jaroonchokanan, N. (2025). Implementing ChatGPT as tutor, tutee, and tool in physics and chemistry. Substantia, 9(1), 89–101. [Google Scholar] [CrossRef]
  80. Su, J., & Yang, W. (2023). Unlocking the power of ChatGPT: A framework for applying generative AI in education. ECNU Review of Education, 6(3), 355–366. [Google Scholar] [CrossRef]
  81. Su, Y., Lin, Y., & Lai, C. (2023). Collaborating with ChatGPT in argumentative writing classrooms. Assessing Writing, 57, 100752. [Google Scholar] [CrossRef]
  82. Tarchi, C., Zappoli, A., Ledesma, L. C., & Brante, E. W. (2024). The use of ChatGPT in source-based writing tasks. International Journal of Artificial Intelligence in Education, 35(2), 858–878. [Google Scholar] [CrossRef]
  83. Tiwari, C. K., Bhat, M. A., Khan, S. T., Subramaniam, R., & Khan, M. A. I. (2023). What drives students toward ChatGPT? An investigation of the factors influencing adoption and usage of ChatGPT. Interactive Technology and Smart Education, 21(3), 333–355. [Google Scholar] [CrossRef]
  84. Tummalapenta, S., Pasupuleti, R., Chebolu, R., Banala, T., & Thiyyagura, D. (2024). Factors driving ChatGPT continuance intention among higher education students: Integrating motivation, social dynamics, and technology adoption. Journal of Computers in Education. [Google Scholar] [CrossRef]
  85. Tversky, A., & Kahneman, D. (1974). Judgment under uncertainty: Heuristics and biases. Science, 185(4157), 1124–1131. [Google Scholar] [CrossRef]
  86. Vaccino-Salvadore, S. (2023). Exploring the ethical dimensions of using ChatGPT in language learning and beyond. Languages, 8(3), 191. [Google Scholar] [CrossRef]
  87. Vetter, M. A., Lucia, B., Jiang, J., & Othman, M. (2024). Towards a framework for local interrogation of AI ethics: A case study on text generators, academic integrity, and composing with ChatGPT. Computers and Composition, 71, 102831. [Google Scholar] [CrossRef]
  88. Wang, L., Chen, X., Wang, C., Xu, L., Shadiev, R., & Li, Y. (2023). ChatGPT’s capabilities in providing feedback on undergraduate students’ argumentation: A case study. Thinking Skills and Creativity, 51, 101440. [Google Scholar] [CrossRef]
  89. Wardat, Y., Tashtoush, M. A., AlAli, R., & Jarrah, A. M. (2023). ChatGPT: A revolutionary tool for teaching and learning mathematics. Eurasia Journal of Mathematics Science and Technology Education, 19(7), em2286. [Google Scholar] [CrossRef]
  90. Wu, T., Lee, H., Li, P., Huang, C., & Huang, Y. (2023). Promoting Self-Regulation progress and knowledge construction in blended learning via ChatGPT-Based Learning Aid. Journal of Educational Computing Research, 61(8), 3–31. [Google Scholar] [CrossRef]
  91. Xiao, Y., & Zhi, Y. (2023). An exploratory study of EFL learners’ use of ChatGPT for language learning tasks: Experience and perceptions. Languages, 8(3), 212. [Google Scholar] [CrossRef]
  92. Xu, T., Weng, H., Liu, F., Yang, L., Luo, Y., Ding, Z., & Wang, Q. (2024). Current status of ChatGPT use in medical education: Potentials, challenges, and strategies. Journal of Medical Internet Research, 26, e57896. [Google Scholar] [CrossRef]
  93. Xu, X., Wang, X., Zhang, Y., & Zheng, R. (2024). Applying ChatGPT to tackle the side effects of personal learning environments from learner and learning perspective: An interview of experts in higher education. PLoS ONE, 19(1), e0295646. [Google Scholar] [CrossRef]
  94. Yang, J., Jin, H., Tang, R., Han, X., Feng, Q., Jiang, H., Zhong, S., Yin, B., & Hu, X. (2024). Harnessing the power of LLMs in practice: A survey on ChatGPT and beyond. ACM Transactions on Knowledge Discovery From Data, 18(6), 1–32. [Google Scholar] [CrossRef]
  95. Zhang, Y., Yang, X., & Tong, W. (2025). University students’ attitudes toward ChatGPT profiles and their relation to ChatGPT intentions. International Journal of Human-Computer Interaction, 41(5), 3199–3212. [Google Scholar] [CrossRef]
  96. Zou, M., & Huang, L. (2023). The impact of ChatGPT on L2 writing and expected responses: Voice from doctoral students. Education and Information Technologies, 29(11), 13201–13219. [Google Scholar] [CrossRef]
Figure 1. Demographic distribution of ChatGPT usage. Source: own elaboration based on https://www.similarweb.com/website/chatgpt.com/#overview (accessed on 1 October 2025).
Figure 2. Hypothesis validation process.
Figure 3. Quartile distribution for “current” and “potential” evaluations across educational roles. Source: own elaboration.
Figure 4. Quartile distribution for “current” and “potential” assessments of the role of a task assistant across respondents who use ChatGPT for writing assignments. Source: own elaboration.
Figure 5. Quartile distribution for “current” and “potential” assessments of the role of a teacher across respondents who use ChatGPT for learning.
Table 1. Sample composition by level of study and field cluster.
| Field of Study (Polish Classification) | Full-Time Bachelor | Part-Time Bachelor | Full-Time Master | Part-Time Master | Total |
| Humanities | 6 | 1 | 2 | - | 9 |
| Engineering and technology | 54 | 6 | 13 | 1 | 74 |
| Agricultural sciences | 1 | 1 | - | - | 2 |
| Social sciences | 137 | 58 | 59 | 6 | 260 |
| Natural sciences | 23 | 5 | 25 | - | 53 |
| Other | 8 | 2 | 1 | - | 11 |
| Total | 229 | 73 | 100 | 7 | 409 |
Source: own elaboration.
Table 2. Five roles of ChatGPT in study contexts with task–technology fit and representative literature.
| Role | Short Description (TTF) | Example Literature |
|---|---|---|
| Tutor | Step-by-step explanations and Q&A that align with study tasks | (Almarashdi et al., 2024; Da Silva et al., 2024; A. A. Guo & Li, 2023; Sallam et al., 2023; Xiao & Zhi, 2023) |
| Task assistant | Outlining, planning, and summarizing that fit workflow support | (Črček & Patekar, 2023; Mahapatra, 2024; Nugroho et al., 2024; Y. Su et al., 2023; Zou & Huang, 2023) |
| Text editor | Clarity improvement, revision, and tone adjustment for writing tasks | (Almashy et al., 2024; Alsaweed & Aljebreen, 2024; Fokides & Peristeraki, 2024; S. Lin & Crosthwaite, 2024) |
| Teacher | Formative feedback and guidance for learning activities | (Berman et al., 2024; K. Guo & Wang, 2023; Han & Li, 2024; Kıyak & Emekli, 2024; Wang et al., 2023) |
| Motivator | Prompts to start and persist, effort regulation with human scaffolding | (Chiu, 2024; ElSayary, 2023; Karataş et al., 2024; X. Xu et al., 2024; Wu et al., 2023) |
Source: own elaboration based on literature.
Table 3. Wilcoxon signed-rank test for “current” and “potential” evaluations of ChatGPT’s usefulness.
| Role | N * | T | Z | p | r |
|---|---|---|---|---|---|
| Tutor | 204 | 3872.50 | 7.7973 | 0.0000 | 0.5459 |
| Task assistant | 155 | 2630.50 | 6.0999 | 0.0000 | 0.4900 |
| Text editor | 192 | 2761.50 | 8.4339 | 0.0000 | 0.6087 |
| Teacher | 205 | 3038.50 | 8.8417 | 0.0000 | 0.6175 |
| Motivator | 188 | 2560.00 | 8.4635 | 0.0000 | 0.6173 |
* N indicates the number of non-tied pairs. Source: own elaboration.
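For readers who wish to reproduce this type of analysis, the computation behind Table 3 can be sketched as follows. This is an illustrative sketch on synthetic paired Likert ratings, not the authors' code: it assumes SciPy's `stats.wilcoxon` with the normal approximation, drops tied (zero-difference) pairs as in the table footnote, and computes the effect size r = |Z| / √N with N counting non-tied pairs.

```python
# Illustrative sketch (synthetic data, not the authors' code):
# Wilcoxon signed-rank test on paired "current" vs. "potential"
# ratings, with effect size r = |Z| / sqrt(N) as in Table 3.
import math
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
current = rng.integers(1, 6, size=200)                      # "current" ratings, 1-5
potential = np.clip(current + rng.integers(-1, 3, size=200), 1, 5)

diff = potential - current
n_nontied = int(np.count_nonzero(diff))                     # N counts non-tied pairs

# Normal approximation yields a Z statistic for samples of this size;
# zero-difference pairs are discarded (Wilcoxon's original treatment).
res = stats.wilcoxon(current, potential, zero_method="wilcox", method="approx")
z = stats.norm.isf(res.pvalue / 2)                          # recover |Z| from two-sided p
r = z / math.sqrt(n_nontied)                                # effect size r
print(f"N = {n_nontied}, T = {res.statistic}, |Z| = {z:.3f}, r = {r:.3f}")
```

Applied to the study's data, this procedure would yield the N, T, Z, p, and r values reported per role in Table 3.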
Table 4. Descriptive statistics for “current” and “potential” evaluations across educational roles.
| Role | N | Median | Mode | Mode Count |
|---|---|---|---|---|
| "Current" evaluation | | | | |
| Tutor | 409 | 4 | 4 | 212 |
| Assistant in educational tasks | | 4 | 4 | 183 |
| Text editor | | 4 | 4 | 162 |
| Teacher | | 4 | 4 | 164 |
| Motivator | | 2 | 2 | 125 |
| "Potential" evaluation | | | | |
| Tutor | 409 | 4 | 5 | 203 |
| Assistant in educational tasks | | 5 | 5 | 244 |
| Text editor | | 5 | 5 | 224 |
| Teacher | | 4 | 5 | 150 |
| Motivator | | 3 | 2 | 113 |
Source: own elaboration.
Table 5. Mann–Whitney U test results for differences in evaluation increases between users and non-users.
| Role | N (Users) | N (Non-Users) | Avg. Rank (Users) | Avg. Rank (Non-Users) | U | Z (Corrected) | p | r | Cliff's δ |
|---|---|---|---|---|---|---|---|---|---|
| Tutor | 362 | 47 | 187.9268 | 336.5000 | 2326.500 | 8.2307 | 0.0000 | 0.4008 | 0.7265 |
| Teacher | | | 206.2831 | 195.1170 | 8042.500 | −0.6653 | 0.5059 | −0.0301 | −0.0546 |
Source: own elaboration.
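The between-group comparison in Table 5 can be sketched in the same spirit. This is an illustrative sketch on hypothetical evaluation-increase scores, not the authors' code; it uses SciPy's `stats.mannwhitneyu` and derives Cliff's δ from the U statistic via δ = 2U / (n₁n₂) − 1.

```python
# Illustrative sketch (synthetic data, not the authors' code):
# Mann-Whitney U test comparing users and non-users, with Cliff's
# delta derived from the U statistic, as reported in Table 5.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
users = rng.integers(0, 4, size=362)        # hypothetical increase scores, users
non_users = rng.integers(1, 5, size=47)     # hypothetical increase scores, non-users

u = stats.mannwhitneyu(users, non_users, alternative="two-sided")
n1, n2 = len(users), len(non_users)

# Cliff's delta = P(X > Y) - P(X < Y); with ties counted half,
# this equals 2*U/(n1*n2) - 1 for the U statistic of the first sample.
delta = 2 * u.statistic / (n1 * n2) - 1
print(f"U = {u.statistic}, p = {u.pvalue:.4f}, Cliff's delta = {delta:.3f}")
```

The sign of δ depends on which group is entered first; its magnitude corresponds to the |δ| values reported in Table 5.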
Table 6. Descriptive statistics for evaluation increases among users and non-users, tutor and teacher roles.
| Role | N | Median | Mode | Mode Count | Lower Quartile | Upper Quartile |
|---|---|---|---|---|---|---|
| Users | | | | | | |
| Tutor | 362 | 0 | 0 | 87 | −1 | 1 |
| Teacher | | 0 | 0 | 188 | 0 | 1 |
| Non-users | | | | | | |
| Tutor | 47 | 3 | 3 | 16 | 2 | 3 |
| Teacher | | 0 | 0 | 29 | 0 | 1 |
Source: own elaboration.
Table 7. Descriptive statistics for the “potential” evaluation of the assistant role among students using and not using ChatGPT for writing assignments.
| Use of ChatGPT for Writing Assignments | N | Median | Mode | Mode Count |
|---|---|---|---|---|
| All respondents | 362 | 5 | 5 | 223 |
| Users | 145 | 5 | 5 | 103 |
| Non-users | 217 | 5 | 5 | 120 |
Source: own elaboration.
Table 8. Descriptive statistics for the “potential” evaluation of the teacher role among students using and not using ChatGPT for learning.
| Use of ChatGPT for Learning | N | Median | Mode | Mode Count |
|---|---|---|---|---|
| All respondents | 362 | 4 | 5 | 136 |
| Users | 247 | 4 | 5 | 104 |
| Non-users | 115 | 4 | 4 | 47 |
Source: own elaboration.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Oliński, M.; Sieciński, K. Youth and ChatGPT: Perceptions of Usefulness and Usage Patterns of Generation Z in Polish Higher Education. Youth 2025, 5, 106. https://doi.org/10.3390/youth5040106
