Article

Beyond Differences: A Generational Convergence in Technology Use Among Business Students

Faculty of Economics and Business, University of Maribor, 2000 Maribor, Slovenia
*
Author to whom correspondence should be addressed.
Systems 2025, 13(12), 1095; https://doi.org/10.3390/systems13121095
Submission received: 31 October 2025 / Revised: 28 November 2025 / Accepted: 2 December 2025 / Published: 3 December 2025

Abstract

The rapid digitalization of higher education has transformed how students learn, collaborate, and engage with emerging technologies such as artificial intelligence (AI). While earlier research emphasized generational or academic-level differences in digital behavior, recent evidence suggests convergence in technology use. This study explores whether undergraduate and postgraduate students at the Faculty of Economics and Business, University of Maribor, display distinct technology engagement patterns across five constructs: excessive technology use, online collaboration, the use of digital learning tools (E-Boards), the use of AI in education, and perceived academic success. Responses from a survey of 285 students were analyzed using the non-parametric Mann–Whitney U test, as the data were not normally distributed. The findings showed that postgraduate students did not report higher levels of E-Board use, online collaboration, or perceived academic success. Undergraduate students scored higher on one item related to excessive technology use, but not across the full construct. However, significant differences emerged in AI use, where postgraduate students showed greater confidence and willingness to integrate AI tools. The findings suggest that digital competence and the quality of technology integration, rather than study level, shape students’ learning experiences. Higher education institutions should therefore promote balanced and ethical technology use, strengthen AI literacy, and foster self-regulated learning skills.

1. Introduction

In recent decades, digital technologies have become deeply embedded in the academic environment of higher education. The increasing reliance on smartphones, online platforms, collaborative tools, and, more recently, artificial intelligence (AI) applications has substantially reshaped how students engage in learning, communication, and the organization of their academic activities. Although it could be assumed that different student populations (e.g., undergraduate versus graduate) exhibit distinct patterns of digital engagement, empirical evidence suggests that these differences are less pronounced [1].
Recent studies have increasingly examined whether students’ technology use is better explained by generational characteristics or by their academic level (undergraduate or postgraduate) [2,3]. While earlier work often highlighted generational divides in comfort with and adoption of digital tools, more recent studies offer a more nuanced picture in which generation alone may not be a decisive predictor of technology-related behavior. For instance, systematic reviews of generational differences in technology behavior observe that members of Generation Z (born 1995–2010) generally share high digital adaptability and pervasive use of technology, often reducing within-generation differences across subgroups [4,5].
Moreover, in the context of higher education, empirical investigations suggest that attitudes toward integrating technology into courses and general student attitudes do not always differ significantly by generation. For example, Culp-Roche et al. [6] found that while older students (Baby Boomers, Generation X) exhibited greater anxiety around technology use, differences between generations in terms of technology integration into courses and student attitudes were not statistically significant. As the educational landscape continues to transform, the rise of generative AI adds a new layer to the understanding of technology adoption in higher education. Recent studies examining student perceptions and use of generative AI point to demographic variations in readiness, confidence, and perceived benefits, which in some cases extend across academic levels. Evidence from a survey of 482 higher education students in the United States showed that although many participants reported feeling comfortable with generative AI, factors such as academic level, enrollment status, and institutional setting were linked to notable differences in attitudes and usage patterns [7]. A complementary global study drawing on the Diffusion of Innovations theory analyzed adoption strategies across 40 universities and emphasized that institutional, cultural, and contextual conditions strongly influence how students approach and integrate AI tools into their learning practices [8]. These developments open an important line of inquiry for our research. If today’s students are part of a generation that shares similar digital experiences, then technology-related habits may no longer vary substantially across academic levels. This raises the possibility that undergraduate and graduate students are more alike than different in how they engage with technology during their studies.
At the same time, the rapid emergence of AI introduces a new dimension that may reveal or even magnify subtle distinctions within an otherwise homogeneous group. As digital ecosystems continue to evolve, distinctions once defined by generational or academic boundaries appear to be dissolving. The contemporary university classroom has become a hybrid cognitive space where human intelligence interacts with algorithmic systems, and learning outcomes depend increasingly on students’ ability to navigate, evaluate, and ethically apply technology. Understanding whether academic level still plays a differentiating role in this environment is therefore not only a question of digital competence but also of adaptability, critical awareness, and cognitive resilience. Examining this issue among business students provides a valuable opportunity to capture how future professionals, educated at different academic stages, conceptualize the role of technology in shaping their academic and professional success. The present study addresses these issues by examining business students at the Faculty of Economics and Business, University of Maribor. In this study, the terms undergraduate and postgraduate students refer to students enrolled in first- and second-cycle study programs at the Faculty of Economics and Business, University of Maribor. We investigate multiple facets of technology use, including excessive use, e-board practices, online collaboration, and AI adoption, as well as students’ perceptions of their academic success.

2. Literature Review

2.1. Digital Transformation in Higher Education

The digitalization of higher education has accelerated in recent years, with AI emerging as one of the most influential technologies shaping teaching and learning practices. Unlike earlier technologies that primarily served as delivery platforms, AI offers possibilities for personalized learning, adaptive feedback, and new forms of academic support. Systematic reviews confirm that the adoption of AI in higher education is increasing, although its integration is still uneven and often experimental [9]. Castillo-Martínez et al. [10] emphasize that artificial intelligence has already established its role in the university context by supporting both research activities and students’ learning and writing.
Research by Arowosegbe et al. [2] provides valuable insights into how students in UK higher education perceive and use generative AI in their studies. The findings reveal a high level of awareness, with 94% of students reporting familiarity with generative AI and more than half (52%) having already used it. The most common applications included grammar correction (56%), idea generation (55%), answering questions (41%), and summarization of content (37%). Younger students (18–24 years old) tended to express more negative views, whereas students in the 25–34 age group and postgraduate cohorts reported more positive attitudes. This suggests that both age and level of study may shape the degree of confidence and readiness with which students engage with AI. Looking toward the future, 83% of students anticipated that generative AI would play an increasingly important role in academia, and nearly half indicated that it should be formally integrated into the curriculum.
Chan and Hu [11] investigated students’ perceptions of generative artificial intelligence in higher education and found a predominantly positive orientation toward its use. Students regarded generative AI as a valuable resource for enhancing learning and academic productivity, particularly through applications such as writing assistance, idea generation, content summarization, and personalized learning support. Nevertheless, their perceptions were thoughtful and critical. Concerns were raised about the accuracy and reliability of AI-generated outputs, ethical implications including plagiarism, and the risk of overreliance that might undermine critical thinking and creativity. Importantly, the study highlighted students’ awareness of these limitations and their recognition of the necessity for explicit institutional guidelines to ensure responsible and effective integration of AI into academic practice.

2.2. Academic Level and Technology Use: From Perceived Gaps to Digital Homogeneity

In recent years, researchers have increasingly questioned the assumption that younger generations, simply because of their continuous exposure to digital technologies, are more digitally competent than older cohorts. Several studies show that frequent use of mobile phones, computers, or social media does not necessarily translate into advanced digital literacy. Skills such as critical thinking, evaluating online information, or ensuring privacy and security often remain underdeveloped, even among students who are considered highly “tech-savvy” [12].
More recent analyses also highlight that the idea of clear generational divides in technology use is outdated. For instance, Mertala et al. [13] demonstrate that the term “digital native” still circulates in educational discourse but lacks an empirical foundation. Similarly, Pongrac et al. [14] show that digital competences among younger generations are uneven, varying across domains such as information literacy and online safety. These findings suggest that digital competence is shaped more by education, socio-economic background, and institutional support than by age alone. Çoklar and Tatli [15] investigated digital nativity levels among Generations X, Y, and Z using the Digital Nativity Assessment Scale and found that younger generations generally score higher in areas such as multitasking, preference for visual communication, and expectation of immediacy. Their results show a gradual increase in digital nativity from Generation X to Generation Z, suggesting that digital exposure and familiarity have become more pervasive over time. At the same time, the study emphasizes that these differences are not absolute: contextual factors such as access to devices, socio-economic background, and educational opportunities significantly influence digital competence. Importantly, gender differences that were more visible in older cohorts diminished in Generation Z, reflecting greater equality in access to and use of digital technologies. Overall, the authors conclude that while younger cohorts display stronger digital nativity traits, the gap between generations is narrowing, pointing to the growing role of contextual rather than purely generational determinants of digital skills.
Previous studies examining excessive technology use among university students have produced mixed and sometimes contradictory findings. Research suggests that younger students tend to engage more frequently in multitasking, smartphone checking, and digital distraction, which may stem from generational patterns of technology immersion and habitual device use [6,14]. A recent study identifies excessive non-academic internet use among undergraduate students and links it to reduced study time and increased psychological distress [16]. On the other hand, investigations comparing mobile learning adoption between undergraduate and postgraduate students report no significant differences, indicating that both groups often rely on digital devices primarily for personal rather than academic purposes [17]. These divergent results highlight the need to clarify whether excessive technology use systematically differs across academic levels. Based on the tendency of younger, undergraduate students to display more frequent digital distraction, we propose the following hypothesis:
H1. 
Undergraduate students exhibit higher levels of excessive technology use compared to postgraduate students.

2.3. Technology Use in Higher Education

Digital learning tools, including e-boards, interactive applications, and audience response systems, are increasingly recognized as central to active and student-centered learning in higher education. Empirical evidence shows that interactive environments can significantly enhance students’ cognitive outcomes. Song and Cai [18] demonstrated that the use of mobile-based interactive applications led to measurable improvements in critical thinking skills such as analysis, inference, and problem-solving. Similarly, Martín-Sómer et al. [19] examined the use of multiple interactive applications in engineering education and found that these tools significantly increased student participation and engagement. The study highlighted that relying on a single application often led to monotony and reduced attention, whereas combining several interactive applications broke routine, stimulated motivation, and encouraged active involvement in both online and face-to-face settings. Importantly, the integration of these tools was also associated with improved academic performance compared to previous years when only one application had been used. Both students and teachers reported that interactive applications helped maintain interest, enhanced feedback opportunities, and supported deeper interaction during lectures.
The interactive features of e-boards, such as enabling students to provide input and receive immediate feedback, align with findings by Mand et al. [20], who showed that technology-enhanced audience interaction fosters active learning and creates a psychologically safe environment by allowing broader participation. Complementing these perspectives, Zhao et al. [3] investigated the broader construct of digital learning power among undergraduates and graduates and showed that while the overall level of digital learning power was similar across academic levels, graduate students performed significantly better in sub-dimensions such as mindful agency, sense-making, and optimism. These findings highlight that digital learning tools can enhance engagement, participation, and higher-order thinking, but their impact may vary across dimensions and student groups, pointing to the importance of both pedagogical design and academic level in shaping their effectiveness.
Lau et al. [17], in their study on m-learning via smartphones, found no significant differences between undergraduates and postgraduates in adopting mobile learning, despite some variation in their learning patterns. Both groups reported relatively low frequencies of academic use, suggesting that mobile devices were more often employed for personal rather than study-related purposes. In contrast, Xu and Du [21] investigated satisfaction with digital libraries in China and revealed that graduate students were consistently more satisfied than undergraduates across several dimensions, including system quality, information quality, service quality, perceived ease of use, and perceived usefulness. These findings suggest that postgraduates, due to their stronger research orientation and more advanced information needs, tend to engage more deeply with digital learning resources, whereas undergraduates may face greater barriers in perceiving their usefulness. Rembielak [22] found that postgraduate students tend to evaluate digital learning tools more critically and meaningfully, suggesting that they may perceive greater value in structured e-learning environments such as E-Boards. Moreover, some evidence suggests that postgraduate students may use digital learning resources more efficiently due to greater academic experience and information-processing skills [21]. Therefore, the following hypothesis was developed:
H2. 
Postgraduate students evaluate the use of E-Boards for learning more positively than undergraduate students.

2.4. Online Collaboration and Student Learning

Online collaborative learning has emerged as an important pedagogical approach that promotes deeper understanding and the development of competencies essential for academic and professional success [23]. De Nooijer et al. [24] emphasize that such learning can be effective when carefully designed to balance course structure with student autonomy. Overall, the literature suggests that well-structured online collaboration, supported by synchronous interaction and community-building strategies, can substantially enhance students’ learning gains, satisfaction, and development of collaborative skills, making it a vital component of modern higher education [23,24]. In addition, Mahdi et al. [25] compared undergraduate and postgraduate students’ attitudes toward cooperative learning in online environments. Their study found that both groups held generally positive perceptions, emphasizing that online group work fostered creativity, improved attitudes toward course content, and encouraged collaboration. Interestingly, no significant differences were found between undergraduates and postgraduates, suggesting that online collaborative learning is valued similarly across academic levels. Students did, however, express a preference for smaller group sizes, which were perceived as more effective for active participation, equitable task distribution, and creative engagement. In contrast, Renau [26] identified clearer distinctions in perceptions between undergraduates and master’s degree students. While 68.85% of undergraduates expressed positive views, their responses also included a notable proportion of neutral (15.57%) and negative (11.48%) experiences, often linked to unequal participation, communication issues, and time management challenges. In comparison, 98% of master’s students reported positive experiences, indicating that academic maturity and professional experience strongly shape attitudes toward group work. 
Master’s students were more likely to perceive group work as productive, relevant, and aligned with their career goals, reflecting higher levels of satisfaction and engagement. These findings suggest that while undergraduates may require greater support and structure in group work, postgraduates benefit from tasks that emphasize professional application and autonomy. Al Fadda [27] found that postgraduate students report more positive attitudes toward online cooperative learning compared to undergraduate students, indicating clearer benefits perceived by students at higher academic levels. Some studies find that postgraduate students perceive online collaboration more positively, often due to higher academic maturity, clearer academic goals, and greater professional experience [25,26]. Conversely, other studies report no significant differences, suggesting that both groups value online collaboration similarly, particularly when it enhances communication, mutual support, and active engagement [25,28]. Thus, we propose the following hypothesis:
H3. 
Postgraduate students perceive online collaboration as more beneficial than undergraduate students.

2.5. Artificial Intelligence in Higher Education

The study by Tsiani et al. [29] examined how undergraduate and master’s students perceive the use of generative AI in education. The findings revealed that students from both study levels exhibited positive attitudes toward integrating AI tools into the learning process. They recognized that generative AI could enhance creativity, support personalized learning, assist in idea generation, and improve the overall efficiency of academic tasks. Despite their generally favorable perceptions, students expressed concerns regarding ethical use, reliability, and academic integrity, indicating a balanced awareness of both the benefits and risks of AI in education. The study by Aldreabi et al. [30] examined the determinants influencing the adoption of generative artificial intelligence (GenAI) tools among undergraduate and postgraduate students in Jordanian universities. Using a structural equation modeling approach, the authors identified several key predictors that significantly shape students’ behavioral intention to use AI tools for learning. The results revealed that effort expectancy (the perceived ease of use) and performance expectancy (the perceived usefulness of AI) were the strongest predictors of students’ intention to adopt generative AI. This suggests that students are more likely to use AI tools when they believe that such tools are both easy to use and effective in enhancing their academic performance. Additionally, facilitating conditions such as institutional support, access to digital infrastructure, and training opportunities were found to have a positive influence on adoption. The study also highlighted the role of cost value (perceived affordability and accessibility) as a moderating factor, indicating that students’ willingness to use AI depends partly on how accessible these technologies are within their educational environment [30]. 
Moreover, Chan and Hu [11] surveyed 399 undergraduate and postgraduate students in Hong Kong and found that participants demonstrated a solid understanding of the capabilities and limitations of generative AI. The students expressed a strong willingness to integrate such tools into their academic practices and future professional activities, highlighting several perceived benefits, including personalized and immediate learning support, assistance with writing and idea generation, research and analytical guidance, and improved efficiency in both administrative and creative tasks. Further, Black and Tomlinson [31] highlighted that the educational use of artificial intelligence should not be limited to developing technical skills but should also foster critical and responsible engagement with technology. The research conducted among undergraduate university students found that those with higher levels of AI literacy demonstrated a more reflective and ethical approach to using AI in their studies. Students employed AI for various purposes, including cognitive support, creative exploration, and writing enhancement, suggesting that AI serves as a complementary learning tool rather than a substitute for traditional learning. The authors also emphasized the need for institutional frameworks and training programs to strengthen students’ critical AI literacy and ensure the ethical integration of AI in higher education. Evidence shows that postgraduate students often report higher readiness, confidence, and perceived usefulness regarding AI-based tools, which may derive from their greater academic experience and more demanding research tasks [2,7]. Gandhi et al. [32] found that postgraduate students are more likely than undergraduate students to adopt AI tools in their studies, demonstrating a clear difference in AI engagement across academic levels. 
However, other studies find that concerns about ethical use, reliability, and the limitations of generative AI are shared across both undergraduate and postgraduate cohorts, indicating areas of similarity rather than difference [11,29]. Accordingly, we formulate the following hypothesis:
H4. 
Postgraduate students use AI tools more actively and confidently than undergraduate students.

2.6. Perceived Academic Success and Technology

Cohen et al. [33] examined how undergraduate students from Tel Aviv University and Monash University use technology and perceive its usefulness in supporting their studies. The findings showed that students who viewed digital tools as more useful also reported greater efficiency and confidence in managing their learning tasks, suggesting a link between technology use and perceived academic success. While Australian students emphasized the value of academic digital resources, Israeli students highlighted the importance of collaborative online environments and social media for peer learning. Overall, the study indicated that the perceived usefulness of technology positively contributes to students’ sense of learning effectiveness and academic achievement [33]. Tareke et al. [34] conducted a systematic review of 21 empirical studies to examine how modern technological approaches influence students’ perceptions of academic success in higher education. Their findings revealed that the effective use of digital learning tools, e-learning environments, and artificial intelligence applications could enhance students’ motivation, engagement, and learning outcomes. The study emphasized that technology use is particularly beneficial when it supports self-regulated learning and fosters active participation in the learning process. Despite the positive effects, the authors also noted potential challenges, such as technostress and the existing gap between traditional models of academic success and contemporary technology-driven approaches. Overall, the review concluded that digital tools could improve students’ perceived academic effectiveness and satisfaction when they are appropriately integrated into the learning environment, while poor access or ineffective use of technology may have the opposite effect. 
The authors therefore recommended that higher education institutions adopt comprehensive strategies that align technological innovation with pedagogical and motivational dimensions of learning [34]. Simon [35] found that postgraduate students tend to evaluate their educational experience more positively, suggesting they perceive greater academic progress and satisfaction than undergraduate students. In contrast, other research demonstrates minimal or no differences between study levels, suggesting that perceived academic success may depend more on individual factors than on academic stage alone [36,37]. Thus, we propose the following hypothesis:
H5. 
Postgraduate students report higher levels of perceived academic success than undergraduate students.

3. Materials and Methods

3.1. Data and Sample

Between 3 January and 20 February 2024, an online survey was conducted among 285 students of the Faculty of Economics and Business at the University of Maribor, Slovenia. The sample included both undergraduate (55%) and postgraduate students (45%). In terms of gender composition, 57% of respondents were female and 43% were male. Participants represented a wide range of study fields: 3% were enrolled in strategic and project management, 3% in international business economics, 7% in economics, 6% in accounting, auditing, and taxation, 9% in entrepreneurship and innovation, 8% in management informatics and e-business, 16% in finance and banking, 26% in marketing, and 22% in management, organization, and human resources. Regarding self-reported academic performance, 25% of students indicated acceptable-quality achievements with certain deficiencies, 50% reported moderate-quality achievements, 23% reported very-high-quality achievements, while 2% described their results as outstanding.

3.2. Measurement Instrument

The research instrument consisted of a structured online questionnaire. Respondents evaluated their level of agreement with the statements on a five-point Likert scale, where 1 indicated “strongly disagree” and 5 represented “completely agree”. The measurement items were drawn from previously validated sources. Specifically, the construct excessive use of technology was measured using items adapted from Abid et al. [38] and Navarro-Martínez and Peña-Acuña [39]. Items capturing the construct use of an E-Board for learning were based on Mishra et al. [40]. To assess online collaboration, items were adapted from Venter [41] and Forman and Miller [28]. The construct use of artificial intelligence in education was measured using items from Chen et al. [42] and Wang et al. [43]. Finally, perceived academic success was evaluated with items adopted from Cunningham [36] and Rea [37].

3.3. Statistical Analysis

Descriptive statistics (means, medians, and standard deviations) were calculated to summarize students’ responses. To prevent arbitrary and overly rigid response classifications, the results are interpreted descriptively without applying absolute threshold values to label agreement or disagreement. Since the Kolmogorov–Smirnov test indicated that the data deviated significantly from a normal distribution (p < 0.001), we used the Mann–Whitney U test to compare the two independent samples, i.e., first-cycle and second-cycle students.
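The per-item comparisons described above can be illustrated with a short computation. The following is a minimal, standard-library-only Python sketch of the two-sample Mann–Whitney U test using the normal approximation; the tie correction to the variance is omitted for brevity, so p-values on heavily tied Likert data are approximate, and the function names and sample responses are illustrative rather than taken from the study’s dataset.

```python
import math


def _ranks(values):
    """Assign average 1-based ranks to values, handling ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(values):
        j = i
        # Extend j over a run of tied values.
        while j + 1 < len(values) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j + 2) / 2.0  # average of 1-based positions i+1 .. j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks


def mann_whitney_u(group1, group2):
    """Two-sided Mann-Whitney U test (normal approximation, no tie correction).

    Returns (U, p), where U is the smaller of the two U statistics.
    """
    n1, n2 = len(group1), len(group2)
    ranks = _ranks(list(group1) + list(group2))
    r1 = sum(ranks[:n1])                      # rank sum of group 1
    u1 = r1 - n1 * (n1 + 1) / 2.0
    u = min(u1, n1 * n2 - u1)
    mu = n1 * n2 / 2.0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = (u - mu) / sigma
    # Two-sided p-value: 2 * Phi(-|z|), with Phi from the error function.
    p = 1 + math.erf(-abs(z) / math.sqrt(2))
    return u, p


if __name__ == "__main__":
    # Hypothetical five-point Likert responses for one questionnaire item.
    first_cycle = [4, 5, 4, 3, 5, 4, 4, 5, 3, 4]
    second_cycle = [3, 3, 4, 2, 3, 4, 3, 3, 2, 3]
    u, p = mann_whitney_u(first_cycle, second_cycle)
    print(f"U = {u}, two-sided p = {p:.4f}")
```

In practice, `scipy.stats.mannwhitneyu` provides the same test with exact tie handling; for the one-tailed hypotheses reported in Section 4, the two-sided p-value is halved (p/2), matching the convention used in the tables.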

4. Results

The selected descriptive statistics and the results of the Mann–Whitney U test for “excessive use of technology” are presented in Table 1.
The results of the Mann–Whitney U test indicate that first-cycle students report statistically significantly higher agreement with the statement that they frequently use technology because it helps them learn (p/2 < 0.05), whereas for the other statements related to excessive technology use, neither student group reports statistically significantly higher levels (Table 1). Across the 13 items measuring excessive technology use, both groups reported relatively higher mean and median values for checking their smartphones during study sessions and for the perception that reducing screen time could improve their academic performance. First-cycle students also reported slightly higher values regarding extending their technology use beyond intended study time and being distracted by online content, whereas second-cycle students showed more neutral response patterns on these items. Both groups reported lower values on the item related to feeling anxious when unable to check their phone or social media during study time. Second-cycle students additionally reported lower values for using screen-time limiting apps and for experiencing sleep disturbances related to technology use, while first-cycle students showed more neutral responses. For the remaining items, both groups demonstrated similar and generally moderate response patterns (Table 1). Table 2 presents the descriptive statistics and the results of the Mann–Whitney U test for “using an E-Board for learning”.
Table 2 shows that the only statement with a statistically significant one-tailed difference (p/2 < 0.05), concerning finding it more difficult to remember information learned from physical books compared to an E-Board, does not contribute to a more positive evaluation of E-Board use for learning among second-cycle students relative to first-cycle students. For the other statements on E-Board use for learning, neither student group reaches statistically significantly higher levels. Across the items, both groups reported lower mean and median values on the statement regarding finding it more difficult to remember information learned from physical books compared to an E-Board. In contrast, both groups reported higher values on the items indicating that studying from physical books results in better memorization, that comprehension is better when using physical books compared to E-Boards, and that traditional book-based learning supports retention and recall more effectively despite the benefits of technology. First-cycle students also reported somewhat higher values on the statement suggesting that E-Boards do not promote as deep a level of reading and understanding as physical books, while second-cycle students displayed more neutral response patterns on this item. For the remaining statements describing “using an E-Board for learning,” responses in both groups were generally moderate and similar. For the construct “online collaboration,” the selected descriptive statistics and the results of the Mann–Whitney U test are presented in Table 3.
By examining the results in Table 3, it can be concluded that there are no statistically significant differences between first- and second-cycle students regarding the statements related to “online collaboration” (p > 0.05). Both groups reported higher mean and median values for feeling comfortable collaborating with peers online, finding online collaboration tools easy to use, perceiving online collaboration as enhancing their learning experience, and considering communication and idea sharing in online environments as easy. First-cycle students also reported slightly higher values for statements related to gaining diverse perspectives through online collaboration, perceiving peer feedback in online settings as beneficial for their understanding of the subject, viewing online collaboration as contributing to skills relevant for their future career, and expressing a preference for in-person collaboration over online collaboration. Second-cycle students, however, showed more neutral response patterns on these items. For the remaining statements describing “online collaboration,” responses from both groups were generally moderate and similar. For the statements describing the “use of AI in the study,” the descriptive statistics and the results of the Mann–Whitney U test are presented in Table 4.
By examining the results in Table 4, it can be observed that second-cycle students reported higher mean and median values on the statements indicating that AI has the potential to make complex topics easier to understand through visualizations and simulations, and that AI-based tools such as language translators or grammar checkers are beneficial for their studies. First-cycle students displayed more neutral response patterns on these items. The Mann–Whitney U test confirmed statistically significantly higher levels among second-cycle students compared to first-cycle students (p/2 < 0.05). Although both first- and second-cycle students reported generally moderate values regarding the regular use of AI-based educational tools, the effectiveness of AI-driven tutoring systems in supplementing their learning, and the belief that AI can make learning more engaging and interactive, second-cycle students tended to endorse these items at higher levels. The Mann–Whitney U test again indicated statistically significant differences between the two groups (p/2 < 0.05). Apart from these items, the remaining statements describing the “use of AI in the study” did not show statistically significant differences between the groups (p > 0.05). First-cycle students reported somewhat higher values regarding concerns that the use of AI in the study could reduce the need for human interaction, while second-cycle students demonstrated more neutral response tendencies. Conversely, second-cycle students reported higher values on the statement that the use of AI in education will be beneficial for their future career, given increasing digitization across sectors, whereas first-cycle students presented more moderate responses. For the remaining statements within the construct “use of AI in the study,” both groups reported similar and generally moderate response patterns.
Table 5 presents the selected descriptive statistics and the results of the Mann–Whitney U test for perceived academic success.
By examining the individual statements describing “perceived academic success,” the results of the Mann–Whitney U test in Table 5 show that first-cycle students achieve statistically significantly higher levels of gaining practical knowledge that they can apply in everyday life compared to second-cycle students (p/2 < 0.05); second-cycle students showed only moderate responses on this item. For the other statements regarding perceived academic success, neither group reaches statistically significantly higher levels (p/2 > 0.05). Both groups reported generally moderate response patterns on the statements indicating that their academic performance exceeds expectations and that they perceive their study progress as positive. For the remaining statements within the construct “perceived academic success,” students from both study cycles consistently reported higher mean and median values, indicating generally positive perceptions of their academic functioning.
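The group comparisons reported in Tables 1–5 rely on the Mann–Whitney U test, with one-tailed significance obtained by halving the two-tailed p-value (p/2 < 0.05). As a rough illustration of the procedure, and not of the study’s actual data, the sketch below implements the test with average ranks and a tie-corrected normal approximation; the Likert responses, group sizes, and item are all hypothetical:

```python
# Illustrative stdlib implementation of the Mann-Whitney U test with a
# tie-corrected normal approximation, mirroring the one-tailed (p/2)
# comparisons reported in the article. All data below are invented.
import math

def mann_whitney_u(x, y):
    """Return (U for group x, two-sided p) using average ranks."""
    n1, n2 = len(x), len(y)
    combined = sorted((v, i) for i, v in enumerate(x + y))
    ranks = [0.0] * (n1 + n2)
    i = 0
    while i < len(combined):
        j = i
        while j < len(combined) and combined[j][0] == combined[i][0]:
            j += 1
        avg_rank = (i + j + 1) / 2  # average of 1-based ranks i+1 .. j
        for k in range(i, j):
            ranks[combined[k][1]] = avg_rank
        i = j
    r1 = sum(ranks[:n1])
    u1 = r1 - n1 * (n1 + 1) / 2
    # Variance of U with the standard correction for tied values
    ties = {}
    for v, _ in combined:
        ties[v] = ties.get(v, 0) + 1
    tie_term = sum(t ** 3 - t for t in ties.values())
    n = n1 + n2
    mu = n1 * n2 / 2
    sigma = math.sqrt(n1 * n2 / 12 * ((n + 1) - tie_term / (n * (n - 1))))
    z = (u1 - mu) / sigma
    p_two = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return u1, p_two

# Hypothetical 5-point Likert responses to a single item
first_cycle = [4, 5, 3, 4, 4, 5, 2, 4, 3, 5]   # undergraduate
second_cycle = [3, 3, 2, 4, 3, 2, 3, 3, 4, 2]  # postgraduate

u, p_two = mann_whitney_u(first_cycle, second_cycle)
p_half = p_two / 2  # one-tailed p as reported in the article (p/2)
print(f"U = {u:.1f}, two-sided p = {p_two:.4f}, p/2 = {p_half:.4f}")
```

Halving the two-tailed p-value yields the directional p-value only when the observed difference lies in the hypothesized direction; dedicated statistical packages also apply a continuity correction, so their results may differ slightly from this approximation.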

5. Discussion

The increasing integration of digital technologies into higher education has significantly changed the way students study, communicate, and manage their academic responsibilities. However, while technology offers flexibility and access to learning resources, excessive or uncontrolled use may interfere with students’ concentration, self-regulation, and overall academic performance. Prior studies have linked frequent multitasking, smartphone dependence, and digital distraction with reduced academic outcomes and lower study efficiency (e.g., Jeilani and Abubakar [44]; Cohen et al. [33]; Chan and Hu [11]). Since both undergraduate and postgraduate students rely heavily on digital tools for study-related purposes, it is important to explore whether their level of study influences the perceived intensity of technology overuse and its impact on learning effectiveness. Our results show that postgraduate students do not report higher levels of excessive technology use, use of E-boards, online collaboration, or perceived academic success, which means that H2, H3, and H5 are rejected. H1 is supported only partially, as undergraduate students achieved higher scores on one item within the construct of excessive technology use, while the remaining items showed no meaningful differences. The only statistically significant difference emerged in the construct use of AI in the study, where postgraduate students demonstrated higher confidence and greater engagement in AI-based tools, thereby supporting H4. Thus, the results show that undergraduate students express slightly higher agreement only on the item suggesting that frequent technology use helps them learn, whereas the remaining indicators of excessive technology use do not differ between the two groups. This means that the expected pattern of higher excessive use among first-cycle students is not consistently supported across the construct (Table 1). 
Both groups most readily agreed that they often check their smartphones during study sessions and that reducing screen time could improve their academic performance, while they tended to disagree that they feel anxious if unable to check the phone; postgraduates also disagreed that their sleep is disturbed by technology or that they regularly use screen-time limiters. These patterns suggest broadly similar habits and self-perceptions across study levels, with minor nuances in self-regulation. The absence of differences between undergraduate and postgraduate students is consistent with earlier evidence indicating that patterns of mobile technology use do not systematically vary across study levels, despite differences in academic demands [45]. Furthermore, the observed patterns in this study are in line with meta-analytic findings showing that digital distraction and technostress are negatively associated with students’ perceived learning effectiveness and academic performance, but that structured guidance and institutional support can mitigate these effects [46]. Overall, these results suggest that perceptions of excessive technology use are shared across study levels, implying that universities should promote digital well-being and responsible technology use among all students, not only specific academic groups. Based on these findings, it is recommended that higher education institutions promote digital well-being through structured awareness initiatives. Universities should organize workshops and seminars on time management, digital detox strategies, and responsible information consumption to help students establish healthy study and life boundaries. Including reflective digital-use assignments in the curriculum, such as short digital journals or self-assessments, could help students recognize when technology supports their learning and when it becomes a distraction. 
Moreover, academic advisors should be trained to identify early signs of digital fatigue or technostress and provide personalized guidance. Institutions may also consider implementing “technology-free” study sessions or awareness campaigns that emphasize balance rather than avoidance, thereby supporting sustainable engagement with technology over the long term. This could be achieved through the establishment of designated “digital well-being days” on campus, where students are encouraged to participate in offline academic and social activities. Universities could also collaborate with student organizations to create peer-led initiatives that promote mindful technology use and share effective strategies for maintaining focus and concentration during study sessions.
In recent years, digital learning boards and interactive platforms have become an integral part of higher education, supporting visual engagement, collaborative learning, and communication between students and instructors. E-learning boards are often used to share course materials, collect assignments, and provide feedback, thus enhancing student interaction and course organization. Previous research indicates that students generally perceive such platforms as beneficial for structuring their learning process and improving communication efficiency (e.g., Martín-Gutiérrez et al. [47]; Al-Fraihat et al. [48]). However, differences in students’ familiarity with technology and learning strategies may influence how effectively they use these digital tools and how useful they perceive them to be [49]. Table 2 presents the descriptive statistics and Mann–Whitney test results for the construct E-Board for learning. The results show that postgraduate students do not demonstrate higher evaluations of the use of E-Boards for learning. Although one item showed a statistically higher value for undergraduate students, this difference does not support the hypothesis, as it appears in the opposite direction of the theoretical expectation. Overall, both groups perceive E-Boards similarly in terms of usefulness, ease of use, and learning support, indicating that academic level does not meaningfully shape students’ attitudes toward digital learning boards. Although inclined toward physical books, both groups of students also showed an interest in e-learning boards, agreeing that such tools help them to better organize their studies and facilitate access to materials and feedback from instructors. These findings suggest that both undergraduate and postgraduate students view digital learning boards as supportive tools that enhance study efficiency and communication. 
The lack of differences between the two groups indicates that experience with digital platforms is now widespread across study levels, reflecting a broader normalization of technology use in higher education. Similar results were reported by Al-Fraihat et al. [48], who found that perceived usefulness and system quality are key predictors of e-learning satisfaction, regardless of students’ study level. Likewise, Martín-Gutiérrez et al. [47] emphasized that visual and interactive learning environments foster engagement and comprehension among students across academic disciplines. Overall, the results confirm that students perceive e-learning boards as valuable components of the learning process that facilitate organization, collaboration, and feedback. These tools appear to contribute positively to students’ perceived learning effectiveness, aligning with previous studies that link digital platforms with improved engagement and learning satisfaction [50]. These results suggest that faculties should strengthen online collaboration frameworks to enhance social and academic interaction. Faculties can achieve this by introducing structured peer-learning projects and incorporating collaborative problem-solving tasks into courses. Educators should receive training in designing online activities that combine synchronous (e.g., live discussions) and asynchronous (e.g., shared documents, forums) modes of communication. Encouraging the use of digital platforms can provide students with ongoing opportunities for engagement beyond class hours. Furthermore, embedding peer assessment mechanisms within group assignments could enhance accountability and deepen students’ reflection on teamwork dynamics. Such practices may cultivate digital collaboration skills that are not only valuable for academic success but also transferable to professional contexts. 
To achieve these goals, faculties should adopt a strategic and institution-wide approach to fostering digital collaboration competencies. This could involve establishing structured professional development programs where educators receive ongoing training in online pedagogical design and digital facilitation. Faculties might also encourage interdisciplinary collaboration by creating shared online teaching repositories or communities of practice that enable educators to exchange materials, reflect on challenges, and share effective strategies for integrating collaborative tools into coursework. Moreover, faculties could implement pilot initiatives that embed collaborative digital platforms into course delivery to model effective virtual teamwork. Providing educators with access to instructional designers or technology support staff would further ensure that the use of these tools aligns with learning objectives and enhances student engagement rather than merely replicating traditional classroom dynamics online. Finally, institutional policies should recognize and reward innovative teaching practices that promote online collaboration, thereby encouraging faculty members to continuously experiment with new digital approaches to student engagement.
Collaborative learning has become a cornerstone of modern higher education, particularly in online and blended learning environments. Through digital collaboration tools such as discussion forums, shared documents, and virtual classrooms, students are encouraged to exchange ideas, co-create knowledge, and develop teamwork skills that are essential in professional contexts. Prior research shows that online collaboration enhances student engagement, social connectedness, and learning satisfaction when interactions are well-structured and supported by instructors [23,51]. It can also promote deeper understanding through peer feedback and collective problem-solving [28]. The results in Table 3 revealed no statistically significant differences between undergraduate and postgraduate students, indicating that both groups perceive online collaboration similarly. Mean values suggest moderate agreement with statements related to collaborative activities, such as sharing ideas, working in teams, and contributing to group learning outcomes. These findings indicate that the level of study does not influence students’ perceptions of online collaboration, and both groups recognize its importance for academic success and skill development. This outcome is consistent with Mahdi et al. [25], who reported that undergraduate and postgraduate students similarly value collaborative digital learning experiences, viewing them as opportunities to enhance communication and critical thinking rather than as level-dependent practices. Likewise, Venter [41] found that effective online collaboration supports students’ confidence and participation in learning activities across academic levels. The absence of differences in this study suggests that collaborative competencies and digital communication habits are now well-integrated into both undergraduate and postgraduate curricula. 
In summary, the results demonstrate that online collaboration is perceived as a positive and valuable learning component among all students, regardless of their study level. This reinforces previous evidence that the success of online collaboration depends less on students’ academic level and more on the design of learning environments and the facilitation strategies implemented by instructors [51]. Based on these findings, it is recommended that faculties systematically integrate AI literacy education into both undergraduate and postgraduate curricula. Such initiatives should not only focus on developing technical competencies (e.g., using AI for research assistance, writing enhancement, or data analysis) but also cultivate ethical awareness and critical evaluation skills. Educators should emphasize transparency about AI capabilities and limitations, encouraging students to question bias, reliability, and authorship when using generative AI systems. Furthermore, faculties could establish interdisciplinary “AI Learning Labs”, where students from various fields collaboratively explore responsible AI applications. Regular workshops and case studies on ethical decision-making in AI-assisted learning could empower students to use these technologies consciously and responsibly. Institutional policies should also ensure equitable access to AI tools and provide clear guidelines for academic integrity in AI-supported coursework. To achieve these objectives, faculties should integrate AI ethics and literacy modules into existing curricula across different study programs rather than limiting them to computer science or engineering courses. Establishing partnerships between academic departments, IT services, and industry experts could provide students with hands-on exposure to real-world AI applications and ethical dilemmas.
Universities could also create interdisciplinary working groups composed of educators, researchers, and student representatives to continuously evaluate the pedagogical, ethical, and technical implications of AI use in higher education. Moreover, faculties might develop certification programs or micro-credentials in responsible AI use, which would recognize students’ competencies in understanding algorithmic transparency, data privacy, and digital ethics. Investment in dedicated AI resource centers would allow students and educators to experiment safely with generative AI tools under guided supervision. Finally, institutions should implement continuous professional development programs for educators, enabling them to stay informed about evolving AI technologies and to model critical, reflective, and ethical use of these tools in their own teaching practice.
The rapid advancement of AI has transformed higher education, offering new opportunities for personalized learning, automated feedback, and academic support. AI-driven tools enable students to access real-time explanations, improve writing quality, and enhance problem-solving skills. Studies highlight that AI integration can increase students’ engagement, self-regulated learning, and perceived academic success when used purposefully and ethically [11,32,34,42]. However, the effectiveness of AI adoption often depends on students’ awareness of its limitations, trust in AI systems, and institutional support for ethical AI use [44]. The results in Table 4 indicate that postgraduate students show higher levels of AI use. Although most items within the construct did not differ between the groups, several statements revealed clearly higher evaluations among postgraduate students, particularly their regular use of AI-based tools (e.g., ChatGPT), their confidence in integrating AI into learning, and their belief that AI can make learning more engaging and complex topics easier to understand through visualizations and simulations. These differences align with the theoretical assumption that postgraduate students, owing to greater academic experience and more demanding study tasks, tend to adopt AI tools more actively and with greater confidence. Both groups, on average, neither agreed nor disagreed that AI can support personalized learning and provide instant feedback, while expressing some caution about reduced human interaction and potential overreliance on AI systems.
This aligns with Black and Tomlinson [32], who found that students use AI to enhance comprehension, structure ideas, and improve writing, but remain critical of its accuracy and prefer maintaining intellectual autonomy. Similarly, Chan and Hu [11] observed that both undergraduate and postgraduate students perceive AI as beneficial for brainstorming, information retrieval, and writing assistance, demonstrating comparable readiness to integrate AI into their academic routines. Tareke et al. [34] further emphasized that the perceived benefits of technology, including AI, depend largely on students’ motivation and self-regulated learning skills, while Jeilani and Abubakar [44] highlighted the crucial role of institutional support and technological self-efficacy in enhancing students’ learning outcomes. Overall, the findings suggest that attitudes toward AI are largely consistent across study levels, apart from the items on which postgraduate students reported greater confidence and engagement. However, the results also underline the importance of promoting AI literacy and ethical awareness to ensure responsible and reflective use of AI technologies in higher education.
A practical implication of these findings is that educators should maximize the pedagogical potential of digital tools by embedding them purposefully into course design. Interactive E-boards, collaborative whiteboards, and digital polling systems can transform traditional lectures into dynamic, participatory sessions. Universities should invest in continuous professional development for teachers, focusing on digital pedagogy and instructional design. Workshops could help educators learn how to use E-boards effectively for concept visualization, problem-solving demonstrations, and immediate formative feedback. To ensure sustainability, institutional strategies should promote the sharing of good teaching practices through peer observation and teaching innovation networks. Furthermore, aligning the use of digital boards with active-learning principles—such as flipped classrooms or inquiry-based methods—can deepen conceptual understanding and bridge the gap between technology and pedagogy.
Perceived academic success represents students’ subjective evaluation of their academic performance, learning progress, and competence in achieving study-related goals. It reflects not only objective outcomes but also students’ self-efficacy, engagement, and satisfaction with their learning experience. In the digital learning context, numerous studies have emphasized the positive role of educational technologies in enhancing students’ perceptions of success, motivation, and learning effectiveness [34,48]. Students who effectively integrate technology into their learning routines often report greater confidence, improved self-regulation, and a stronger sense of academic achievement [52]. The results of this study do not allow confirming the hypothesis that postgraduate students report higher levels of perceived academic success. Although one item within the construct showed a statistically higher value for undergraduate students, this difference does not support the hypothesis, as it appears in the direction opposite to the theoretical expectation. Overall, both groups expressed comparable evaluations of their academic performance, learning progress, and confidence in their learning effectiveness (Table 5). This suggests that perceived academic success is shaped more by individual motivation, digital literacy, and learning strategies than by the level of study. These findings are consistent with recent empirical research highlighting that technology use and self-regulated learning play a crucial role in shaping students’ perceptions of their academic progress. For example, Chen [53] demonstrated that digital literacy directly predicts academic achievement, with learning adaptation and online self-regulation acting as mediating factors. Similarly, Shi et al.
[54] found that artificial intelligence literacy and self-regulated learning jointly enhance students’ writing performance and academic confidence. Huang and Lee-Post [55] found that integrating self-assessment and commitment strategies in an asynchronous online course significantly improved students’ time management, engagement, and exam performance. The intervention enhanced students’ metacognitive awareness, helping them better plan, monitor, and adjust their study behaviors. These improvements in self-regulated learning behaviors translated into higher academic achievement over time. The findings of our research confirm that fostering self-assessment and reflective practices in technology-supported learning environments can positively influence students’ perceived and actual academic success. In conclusion, the findings show that both undergraduate and postgraduate students perceive themselves as academically competent and technologically adaptive. This indicates that perceived academic success in the digital age is increasingly linked to students’ ability to self-regulate, engage, and effectively use technology to support their learning goals. Based on these findings, it is recommended that educators design learning environments that promote self-regulated learning and reflective practice. Courses should incorporate digital tools such as progress dashboards, self-assessment quizzes, or reflection prompts that encourage students to set goals, monitor progress, and adjust learning strategies. Embedding formative feedback mechanisms through AI-based or peer-supported systems could enhance students’ confidence and perception of learning growth. Furthermore, mentoring programs could help students translate digital engagement into tangible academic outcomes by linking self-regulation with goal achievement. 
At the institutional level, universities should emphasize the role of digital self-efficacy training in orientation programs, ensuring that students develop the confidence and autonomy needed to navigate complex learning environments. Ultimately, supporting students’ metacognitive awareness and reflective engagement can lead to both improved perceived and actual academic success.

6. Conclusions

This study examined differences between undergraduate and postgraduate students in their use of technology in higher education across five key constructs: the excessive use of technology, online collaboration, the use of artificial intelligence in learning, the use of digital learning tools, and perceived academic success. The results showed that, although a few individual items differed in favor of undergraduate students, these differences did not support the expected direction of the hypotheses. Overall, the two groups displayed broadly similar patterns across four of the five constructs, suggesting that their technology-related learning behaviors are largely comparable. A significant difference only emerged in the use of artificial intelligence in education, suggesting that postgraduate students are more confident and experienced in applying AI tools to their academic work. Although several individual items in other constructs also showed statistically higher values for undergraduate students, these item-level differences were not aligned with the expected direction of the hypotheses and therefore do not change the overall pattern of findings. The findings contribute to the growing body of literature emphasizing that the quality of technology integration and students’ self-regulated learning skills are more crucial for academic success than the level of study itself. Across constructs, the results highlight that students benefit from technology when its use is intentional, balanced, and supported by institutional structures that promote ethical, reflective, and collaborative engagement. This research provides valuable insight into the learning behaviors of economics and business students and extends previous work by comparing study levels within a single academic context. 
Furthermore, the study extends the existing literature by situating these insights within the context of economics and business education, where digital proficiency and adaptability are becoming essential professional competencies. The results thus not only advance theoretical discussions about technology use in academia but also offer practical implications for developing curricula that prepare students for digitally mediated work environments.
Several limitations should be acknowledged. The study was conducted within a single faculty, offering an in-depth view of students’ technology-related learning practices in a specific academic context. Building on this approach, future studies could compare data across multiple faculties or universities to examine whether disciplinary cultures, curriculum structures, or institutional priorities lead to variations in students’ engagement with digital technologies and AI-supported learning. Longitudinal designs would provide deeper insight into how students’ technological competencies and learning attitudes evolve over time. Future studies could also explore how contextual factors such as teaching style, curriculum design, or institutional digital maturity influence students’ technology use and learning outcomes. Comparative research across disciplines or between public and private higher education institutions would additionally help clarify how different academic settings shape digital engagement patterns. In conclusion, this study reaffirms that technology alone does not determine academic success. Its impact depends on how consciously and purposefully it is integrated into students’ learning practices. By fostering digital competence, self-regulation, and ethical awareness, higher education institutions can create a sustainable and empowering learning environment for both undergraduate and postgraduate students.
Moreover, future studies could expand the scope of this research by including a wider age range of respondents, enabling comparisons between university students and graduates, particularly in relation to the use of AI-based learning tools. Such an extension would provide valuable insights into potential generational differences in technology adoption and AI literacy.
Future studies could extend this research by including students from multiple faculties or universities, which would allow for broader generalization of findings and a more diverse representation of study fields. To complement the self-reported data used in this study, future research could also incorporate objective measures of digital behavior or triangulate self-reports with behavioral data to reduce potential social desirability bias.
Since the present study focused primarily on differences between study cycles, further research could explore the underlying reasons for the absence of statistically significant differences across most constructs. Potential explanatory variables include students’ digital competence, motivation, prior experience with technology, and socio-economic background. Examining these factors would offer deeper insight into the mechanisms behind digital convergence and provide a more nuanced understanding of students’ digital practices in higher education.
Although the validated scales used in this study have been widely adopted in previous research, we did not apply multivariate statistical techniques, such as exploratory or confirmatory factor analysis, to examine the dimensionality of the constructs within our sample. Our approach was based on the study’s main objective, which was to compare undergraduate and postgraduate students using established constructs rather than to revalidate existing measurement models. However, future studies may benefit from using multivariate techniques such as confirmatory factor analysis or structural equation modeling to further investigate the factorial validity of the constructs and the relationships among them. Such approaches could provide additional insights into the measurement properties and interdependencies of the scales used.

Author Contributions

Conceptualization, V.Č., M.R. and P.T.; methodology, V.Č., M.R. and P.T.; software, V.Č., M.R. and P.T.; validation, V.Č., M.R. and P.T.; formal analysis, V.Č., M.R. and P.T.; investigation, V.Č., M.R. and P.T.; resources, V.Č., M.R. and P.T.; data curation, V.Č., M.R. and P.T.; writing—original draft preparation, V.Č., M.R. and P.T.; writing—review and editing, V.Č., M.R. and P.T.; visualization, V.Č., M.R. and P.T. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Slovenian Research and Innovation Agency (research core funding No. P5-0023, ‘Entrepreneurship for Innovative Society’).

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

Table 1. Descriptive Statistics and Mann–Whitney Test for the construct “excessive use of technology”.

| Excessive Use of Technology | Mann–Whitney U | Asymp. Sig. (2-Tailed) | First-Cycle Mean | First-Cycle Median | First-Cycle Std. Dev. | Second-Cycle Mean | Second-Cycle Median | Second-Cycle Std. Dev. |
|---|---|---|---|---|---|---|---|---|
| I often find myself checking my smartphone during study sessions. | 9882.000 | 0.939 | 3.69 | 4.00 | 1.183 | 3.71 | 4.00 | 1.207 |
| Social media notifications frequently distract me when I am studying. | 9295.000 | 0.396 | 3.43 | 3.00 | 1.305 | 3.40 | 3.00 | 1.245 |
| I believe my academic performance could improve if I reduced my screen time. | 9566.500 | 0.646 | 3.76 | 4.00 | 1.155 | 3.75 | 4.00 | 1.108 |
| I frequently use my smartphone or computer for non-study purposes during study time. | 9574.500 | 0.656 | 3.49 | 3.00 | 1.137 | 3.43 | 3.00 | 1.082 |
| My use of technology often extends beyond the time I had allocated for it, cutting into my study time. | 9682.500 | 0.719 | 3.52 | 3.00 | 1.094 | 3.44 | 4.00 | 1.046 |
| I sometimes lose track of time when using social media, leading to shorter or rushed study sessions. | 9607.500 | 0.559 | 3.47 | 3.00 | 1.184 | 3.40 | 4.00 | 1.206 |
| Online content (like videos, articles, or social media posts) often diverts my attention from studying. | 9729.000 | 0.772 | 3.52 | 3.00 | 1.010 | 3.42 | 4.00 | 1.191 |
| I find it challenging to concentrate on studying after using my smartphone or other devices. | 9767.500 | 0.729 | 3.39 | 3.00 | 1.073 | 3.30 | 3.00 | 1.202 |
| I regularly use apps or features to limit my screen time during study hours. | 9244.000 | 0.356 | 2.58 | 3.00 | 1.317 | 2.49 | 2.00 | 1.345 |
| My sleep pattern is disturbed due to excessive use of technology, affecting my study schedule. | 9030.000 | 0.174 | 2.60 | 3.00 | 1.327 | 2.35 | 2.00 | 1.168 |
| I often use technology, such as study apps or online resources, because it helps me learn. | 8892.000 | 0.096 | 3.42 | 3.00 | 1.053 | 3.15 | 3.00 | 1.191 |
| I feel anxious if I am unable to check my phone or social media for a period of time, including study time. | 9587.500 | 0.670 | 2.40 | 2.00 | 1.214 | 2.30 | 2.00 | 1.168 |
| The use of digital devices for studying often leads me to multitask, reducing my study effectiveness. | 9641.000 | 0.745 | 3.33 | 3.00 | 1.103 | 3.35 | 3.00 | 1.240 |
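U statistics and asymptotic two-sided p-values of the kind reported in these tables can be reproduced as sketched below. This is an illustrative pure-Python implementation using the normal approximation with mid-ranks and a tie correction (appropriate for Likert-type data, where ties are pervasive), not the exact routine of the statistical package used for the article; note also that packages may report U for either group, since U1 + U2 = n1·n2.

```python
import math
from collections import Counter

def mann_whitney_u(x, y):
    """Mann-Whitney U for group x, with a two-sided p-value from the
    normal approximation using mid-ranks and a tie correction."""
    n1, n2 = len(x), len(y)
    pooled = sorted(x + y)
    # Mid-ranks: tied values share the average of the ranks they span.
    rank = {}
    i = 0
    while i < len(pooled):
        j = i
        while j < len(pooled) and pooled[j] == pooled[i]:
            j += 1
        rank[pooled[i]] = (i + 1 + j) / 2  # 1-based average rank
        i = j
    r1 = sum(rank[v] for v in x)           # rank sum of group 1
    u1 = r1 - n1 * (n1 + 1) / 2            # U for group 1 (U1 + U2 = n1*n2)
    # Normal approximation with tie-corrected variance.
    n = n1 + n2
    ties = sum(t**3 - t for t in Counter(pooled).values())
    mu = n1 * n2 / 2
    sigma = math.sqrt(n1 * n2 / 12 * (n + 1 - ties / (n * (n - 1))))
    z = (u1 - mu) / sigma
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return u1, p
```

With the first-cycle and second-cycle responses to one item passed as `x` and `y`, the returned pair corresponds to the “Mann–Whitney U” and “Asymp. Sig. (2-Tailed)” columns above, up to the group-ordering convention of the software.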
Table 2. Descriptive Statistics and Mann–Whitney Test for the construct “using an E-Board for learning”.

| Using an E-Board for Learning | Mann–Whitney U | Asymp. Sig. (2-Tailed) | First-Cycle Mean | First-Cycle Median | First-Cycle Std. Dev. | Second-Cycle Mean | Second-Cycle Median | Second-Cycle Std. Dev. |
|---|---|---|---|---|---|---|---|---|
| I find it more difficult to remember information learned from physical books compared to an E-Board. | 8747.500 | 0.087 | 2.17 | 2.00 | 1.160 | 1.98 | 1.00 | 1.240 |
| I often get distracted when studying from an E-Board, impacting my attention to the material. | 9390.000 | 0.366 | 2.81 | 3.00 | 1.238 | 2.61 | 3.00 | 1.376 |
| I feel that studying from physical books results in better memorization of the material. | 9251.500 | 0.256 | 3.89 | 4.00 | 1.188 | 3.70 | 4.00 | 1.240 |
| I struggle to focus on the material when using an E-Board for a prolonged period. | 9753.000 | 0.713 | 3.16 | 3.00 | 1.203 | 3.10 | 3.00 | 1.348 |
| The lack of tactile experience in E-Boards as compared to physical books affects my ability to learn. | 9546.500 | 0.494 | 3.08 | 3.00 | 1.150 | 2.95 | 3.00 | 1.326 |
| Navigating through digital content on an E-Board is easier than flipping through pages of a book. | 9692.000 | 0.791 | 3.09 | 3.00 | 1.442 | 2.97 | 3.00 | 1.355 |
| I feel the interactive elements in E-Boards can sometimes distract from the main learning material. | 10,188.500 | 0.684 | 3.31 | 3.00 | 1.128 | 3.36 | 3.00 | 1.186 |
| My comprehension of the subject matter is better when studying from physical books than from E-Boards. | 9941.500 | 0.933 | 3.67 | 4.00 | 1.315 | 3.64 | 4.00 | 1.257 |
| I am more likely to skim through material on an E-Board rather than read it thoroughly. | 9916.500 | 0.927 | 3.37 | 3.00 | 1.224 | 3.21 | 3.00 | 1.284 |
| I feel that the lack of physical highlighting or annotating affects my ability to memorize the material on E-Boards. | 9896.500 | 0.956 | 3.29 | 3.00 | 1.259 | 3.16 | 3.00 | 1.294 |
| I experience more eye strain when using E-Boards, which affects my study sessions. | 10,531.500 | 0.304 | 3.49 | 3.00 | 1.171 | 3.46 | 4.00 | 1.217 |
| I feel that E-Boards do not promote as deep a level of reading and understanding as physical books do. | 9660.000 | 0.752 | 3.56 | 3.50 | 1.223 | 3.46 | 3.00 | 1.225 |
| The convenience of switching between different materials or resources on an E-Board can disrupt my focused study time. | 9871.500 | 0.941 | 3.21 | 3.00 | 1.189 | 3.19 | 3.00 | 1.057 |
| Despite the benefits of technology, I believe traditional book-based learning is more conducive to retaining and recalling information. | 9398.500 | 0.364 | 3.77 | 4.00 | 1.240 | 3.62 | 4.00 | 1.236 |
Table 3. Descriptive Statistics and Mann–Whitney Test for the construct “online collaboration”.

| Online Collaboration | Mann–Whitney U | Asymp. Sig. (2-Tailed) | First-Cycle Mean | First-Cycle Median | First-Cycle Std. Dev. | Second-Cycle Mean | Second-Cycle Median | Second-Cycle Std. Dev. |
|---|---|---|---|---|---|---|---|---|
| I feel comfortable collaborating with my peers online. | 9665.500 | 0.617 | 3.66 | 4.00 | 1.104 | 3.51 | 4.00 | 1.169 |
| I find online collaboration tools (like Google Docs, Slack, etc.) easy to use. | 9985.000 | 0.732 | 3.97 | 4.00 | 1.019 | 3.94 | 4.00 | 1.064 |
| I believe that online collaboration enhances my learning experience. | 10,516.500 | 0.431 | 3.56 | 3.00 | 1.077 | 3.59 | 4.00 | 1.124 |
| I find it easy to communicate and share ideas with peers in an online environment. | 10,117.000 | 0.780 | 3.63 | 4.00 | 1.143 | 3.67 | 4.00 | 1.025 |
| My learning is enriched by the diverse perspectives shared during online collaborations. | 9665.000 | 0.630 | 3.52 | 3.00 | 1.049 | 3.37 | 3.00 | 1.195 |
| I feel my contributions are valued in online collaborative tasks. | 9482.000 | 0.447 | 3.49 | 4.00 | 1.010 | 3.25 | 3.00 | 1.133 |
| I find it difficult to manage my time effectively during online collaborative tasks. | 9238.500 | 0.206 | 3.08 | 3.00 | 0.985 | 2.95 | 3.00 | 0.994 |
| I find it challenging to establish a personal connection with my peers in an online environment. | 9571.500 | 0.534 | 2.98 | 3.00 | 1.196 | 2.90 | 3.00 | 1.148 |
| The feedback I receive from my peers during online collaboration improves my understanding of the subject. | 8937.000 | 0.112 | 3.52 | 3.00 | 1.056 | 3.20 | 3.00 | 1.013 |
| Online collaboration motivates me to stay engaged with the course material. | 9443.500 | 0.349 | 3.29 | 3.00 | 1.074 | 3.09 | 3.00 | 1.048 |
| I feel comfortable expressing disagreement or offering constructive criticism in an online collaborative setting. | 10,680.000 | 0.239 | 3.40 | 3.00 | 1.059 | 3.44 | 3.00 | 1.100 |
| Online collaborations help me develop skills that are relevant to my future career. | 9495.500 | 0.566 | 3.50 | 3.00 | 0.936 | 3.37 | 3.00 | 1.049 |
| I would prefer in-person collaborations over online collaborations. | 9110.000 | 0.209 | 3.58 | 3.00 | 1.068 | 3.41 | 3.00 | 1.158 |
Table 4. Descriptive Statistics and Mann–Whitney Test for the construct “use of AI in the study”.

| Use of AI in the Study | Mann–Whitney U | Asymp. Sig. (2-Tailed) | First-Cycle Mean | First-Cycle Median | First-Cycle Std. Dev. | Second-Cycle Mean | Second-Cycle Median | Second-Cycle Std. Dev. |
|---|---|---|---|---|---|---|---|---|
| I regularly use AI-based educational tools to assist in my learning (for example, ChatGPT, etc.). | 11,775.000 | 0.008 | 2.67 | 3.00 | 1.431 | 3.13 | 3.00 | 1.373 |
| AI-driven tutoring systems have been effective in supplementing my learning. | 11,107.000 | 0.099 | 2.87 | 3.00 | 1.286 | 3.20 | 3.00 | 1.311 |
| I believe AI technologies can personalize learning to suit my needs and pace. | 10,657.000 | 0.259 | 3.16 | 3.00 | 1.229 | 3.38 | 3.00 | 1.236 |
| I often rely on AI-based tools for grading or feedback on my assignments. | 9659.500 | 0.615 | 2.79 | 3.00 | 1.336 | 2.78 | 3.00 | 1.323 |
| AI tools that provide instant feedback help me to learn and rectify mistakes quickly. | 10,547.500 | 0.472 | 2.95 | 3.00 | 1.344 | 3.12 | 3.00 | 1.342 |
| I feel comfortable interacting with AI-based educational systems. | 10,907.500 | 0.133 | 2.88 | 3.00 | 1.300 | 3.13 | 3.00 | 1.316 |
| I believe that AI in the study can make learning more engaging and interactive. | 11,629.500 | 0.010 | 3.00 | 3.00 | 1.207 | 3.34 | 3.00 | 1.247 |
| I am concerned that AI use in the study could reduce the need for human interaction. | 9446.000 | 0.358 | 3.65 | 4.00 | 1.175 | 3.47 | 3.00 | 1.241 |
| I believe AI can help me manage my study schedule effectively by providing reminders and updates. | 9625.500 | 0.512 | 3.14 | 3.00 | 1.122 | 3.04 | 3.00 | 1.143 |
| AI has the potential to make complex topics easier to understand through visualizations and simulations. | 11,658.500 | 0.012 | 3.40 | 3.00 | 1.133 | 3.72 | 4.00 | 1.061 |
| I trust AI’s ability to provide accurate and unbiased information. | 9143.500 | 0.278 | 2.95 | 3.00 | 1.192 | 2.91 | 3.00 | 1.186 |
| I find AI-based tools like language translators or grammar checkers beneficial in my studies. | 11,345.000 | 0.031 | 3.42 | 3.00 | 1.091 | 3.69 | 4.00 | 1.163 |
| The use of AI in education will be beneficial for my future career, considering the increasing digitization in all sectors. | 10,784.500 | 0.196 | 3.28 | 3.00 | 1.164 | 3.52 | 3.00 | 1.161 |
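The tables report U statistics and p-values but no effect sizes; a common companion to the Mann–Whitney test is the rank-biserial correlation, r = 1 − 2U/(n1·n2). The sketch below is illustrative only: the 150/135 undergraduate/postgraduate split of the 285 respondents is an assumed example, not the article’s actual group sizes, and the sign convention depends on which group’s U is used.

```python
def rank_biserial(u, n1, n2):
    """Rank-biserial correlation for a Mann-Whitney U statistic:
    r = 1 - 2U / (n1 * n2), ranging from -1 to +1; the sign convention
    depends on which group's U is reported."""
    return 1 - 2 * u / (n1 * n2)

# Example with the significant AI item (U = 11,775.000) and a
# hypothetical 150/135 split of the 285 respondents.
r = rank_biserial(11775.0, 150, 135)
```

Under these assumed group sizes the magnitude of r is around 0.16, i.e., a small effect, which is consistent with the modest mean differences shown above.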
Table 5. Descriptive Statistics and Mann–Whitney Test for the construct “perceived academic success”.

| Perceived Academic Success | Mann–Whitney U | Asymp. Sig. (2-Tailed) | First-Cycle Mean | First-Cycle Median | First-Cycle Std. Dev. | Second-Cycle Mean | Second-Cycle Median | Second-Cycle Std. Dev. |
|---|---|---|---|---|---|---|---|---|
| I meet the official performance requirements expected of a student. | 10,499.500 | 0.444 | 3.93 | 4.00 | 0.948 | 3.97 | 4.00 | 1.014 |
| I adequately complete assigned duties. | 10,150.000 | 0.815 | 4.10 | 4.00 | 0.853 | 4.02 | 4.00 | 0.940 |
| I fulfil responsibilities specified in the course outline. | 9854.000 | 0.824 | 4.17 | 4.00 | 0.886 | 4.09 | 4.00 | 1.001 |
| I perform tasks that are expected of me. | 10,528.000 | 0.401 | 4.12 | 4.00 | 0.917 | 4.13 | 4.00 | 0.961 |
| My performance is beyond demands. | 10,029.000 | 0.945 | 3.27 | 3.00 | 1.015 | 3.21 | 3.00 | 0.968 |
| I rate my performance in studies as positive and progressive. | 9911.000 | 0.994 | 3.25 | 3.00 | 1.025 | 3.24 | 3.00 | 0.956 |
| I try my best if I do not receive the best grade. | 10,805.000 | 0.212 | 3.86 | 4.00 | 1.018 | 3.96 | 4.00 | 0.950 |
| I gain practical knowledge that I can apply in everyday life. | 8677.500 | 0.059 | 3.70 | 4.00 | 0.974 | 3.30 | 3.00 | 1.184 |
| I believe that I have adapted well to the demands of the study environment and am achieving good results. | 10,301.500 | 0.717 | 3.77 | 4.00 | 0.922 | 3.70 | 4.00 | 0.960 |

Share and Cite

MDPI and ACS Style

Čančer, V.; Rožman, M.; Tominc, P. Beyond Differences: A Generational Convergence in Technology Use Among Business Students. Systems 2025, 13, 1095. https://doi.org/10.3390/systems13121095


