
Sustainable Mobile Microlearning: Evaluating Learners’ Perceptions and Learning Outcomes in IT Education

1 School of Computing and Technology, Eastern Mediterranean University, Famagusta 99628, North Cyprus, Türkiye
2 Faculty of Education, Eastern Mediterranean University, Famagusta 99628, North Cyprus, Türkiye
* Author to whom correspondence should be addressed.
Sustainability 2025, 17(23), 10860; https://doi.org/10.3390/su172310860
Submission received: 7 November 2025 / Revised: 1 December 2025 / Accepted: 2 December 2025 / Published: 4 December 2025

Abstract

Mobile Microlearning (MML) has emerged as a sustainable digital learning strategy capable of improving cognitive efficiency, reducing learner fatigue, and supporting scalable instructional delivery. This study investigates how MML compares with conventional Mobile Learning (ML) when both formats deliver identical instructional content in an introductory programming module for undergraduate IT students. Sixty-eight students were randomly assigned to either an ML course or a redesigned MML course built from the same curriculum but reorganized into short, interactive micro-units. Learners completed a pre/post multiple-choice programming test and a five-scale Course Characteristics Questionnaire evaluating Ease of Use, Clarity & Coherence, Appeal, Difficulty, and Focus. Results indicated significant learning gains in both groups; however, the MML group demonstrated substantially greater improvement. Appeal, Difficulty, and Focus were the strongest predictors of learning growth, with Appeal and Difficulty significantly mediating the relationship between course format and performance. Because the two formats differed only in delivery design (not in content), the findings highlight micro-temporal structuring, lightweight interaction, and immediate feedback as key mechanisms driving the superiority of MML. Grounded in sustainability principles, the study shows that MML reduces cognitive load, enables more efficient study patterns, and provides a scalable, reusable content structure that supports accessible, resilient learning ecosystems. These results offer design-level insights for creating sustainable mobile instructional experiences in higher education IT programs.

1. Introduction

Mobile microlearning (MML) has become an increasingly significant pedagogical approach amidst the rising demand for sustainable, scalable, and cognitively efficient learning environments in higher education. With smartphones now representing the most widely adopted computing devices globally [1], mobile-based instructional models have gained prominence due to their accessibility, flexibility, and low resource requirements. However, the shift toward sustainability-oriented digital education requires not only mobile access but also learning designs that reduce cognitive load, optimize engagement, and support long-term, equitable learning participation.

1.1. Mobile Learning

Mobile learning (ML) refers to the delivery of educational content through mobile devices, enabling flexible and context-independent learning opportunities [2,3]. ML supports sustainable education by reducing reliance on printed materials and enabling scalable digital ecosystems [4]. However, ML often relies on lengthy videos or slide-based presentations that can impose high cognitive load, encourage multitasking, and exacerbate attention fragmentation, problems consistently reported in the mobile learning literature [5,6].

1.2. Microlearning

Microlearning presents instructional content in short, focused units designed around the limits of working memory [7,8]. Research highlights microlearning’s ability to reduce cognitive overload, prevent fatigue, and enhance information retention through short, sequential learning episodes [9,10,11]. The approach is inherently sustainable: it supports time-efficient study habits, reduces mental effort, and allows learning to fit into fragmented daily routines, features especially relevant for today’s mobile-first learners [12].

1.3. Mobile Microlearning (MML)

MML integrates mobile delivery with microlearning design principles, offering short, interactive sessions optimized for small screens, accompanied by automated feedback, gamified elements, and precise sequencing [13,14,15]. MML is conceptualized as a sustainability-aligned instructional method that supports Education for Sustainable Development (ESD) by enabling reusable micro-content, reducing cognitive strain, and providing inclusive learning opportunities through brief participation windows.

1.4. Need for Clear Mechanistic Explanation

Although prior research consistently concludes that MML enhances motivation, engagement, and performance [16,17,18], the mechanisms explaining why MML outperforms traditional ML remain underexplored. This study addresses that gap by investigating five design characteristics (Ease of Use, Clarity & Coherence, Appeal, Difficulty, and Focus) and examining how they shape learning growth within ML and MML contexts.

1.5. Sustainability Frame

Beyond cognitive benefits, MML strengthens sustainable education in the following ways:
  • Reducing resource consumption through lightweight digital content;
  • Supporting scalability and reusability across instructional contexts;
  • Enabling inclusive access for learners with limited time or bandwidth;
  • Reinforcing long-term digital resilience through micro-content ecosystems.

1.6. Purpose of the Study

The purpose of this study is to evaluate how MML compares with ML in terms of learning performance and learner perceptions within an undergraduate IT programming context. The study isolates design-driven effects by keeping content constant while modifying delivery structure.

2. Literature Review

2.1. Mobile Learning

Mobile learning (ML) has become a prominent educational modality due to smartphones’ global accessibility and portability [1,19]. ML allows learners to access content anytime and anywhere [2,20] and supports sustainable practices by reducing reliance on physical materials and enabling scalable digital delivery [3,21]. Nevertheless, ML introduces challenges such as distractions, multitasking, and screen-size limitations that can hamper attention and comprehension [5,6].

2.2. Microlearning

Microlearning delivers content in small, homogeneous segments that align with cognitive load theory and the limited capacity of working memory [7,8]. Studies show that microlearning improves retention, reduces mental fatigue, and enhances learner motivation [9,10,12]. Its design inherently supports sustainability: it minimizes cognitive load, enables efficient engagement, and allows learners to study in short intervals; these features are particularly valuable in mobile-based contexts.

2.3. Mobile Microlearning

Mobile microlearning (MML) merges microlearning’s cognitive benefits with mobile devices’ flexibility. MML provides short, interactive sessions optimized for small screens and often includes quizzes, feedback, and gamified features [13,14]. MML has demonstrated effectiveness across various disciplines, including programming [22], language learning [23], journalism [15], and medical education [18,24].
The sustainability advantages of MML (flexibility, reusability, and reduced cognitive resource consumption) further reinforce its pedagogical value within higher education.

2.4. MML Design Mechanisms

Despite broad support for MML’s effectiveness, prior studies seldom examine how specific design features, such as appeal, difficulty, focus, clarity, and usability, contribute to learning outcomes. Accordingly, this study conceptualizes these five course characteristics as mechanistic pathways that shape cognitive and motivational experiences during mobile learning [25,26].

2.5. Research Gap

Although evidence supports MML’s benefits, few studies explain why and how MML works better than ML when the educational content remains constant. This study addresses this gap by systematically analyzing design-based differences and their impact on learning growth in IT education.

3. Method

3.1. Research Context and Participants

The study was conducted at Eastern Mediterranean University (EMU), a public, non-profit institution located in Famagusta, Northern Cyprus. EMU attracts a socio-economically diverse student body, combining local students with international students from the Middle East, Africa, Central Asia, and Europe. Although the university offers modern campus facilities and technology-rich learning environments, a considerable portion of students originate from families with modest or lower-middle-income backgrounds, where access to high-end computing devices may be limited. Smartphones, however, are nearly universal among EMU students and represent their primary digital learning tool, making the mobile-based instructional approach particularly well aligned with their habitual technology use. This socio-economic profile is relevant for interpreting the results: the effectiveness of MML may be partially attributable to its compatibility with the everyday digital practices and resource constraints of learners who rely heavily on mobile devices for academic work.
A total of 68 students participated (46 male, 22 female; Mage = 22.4, SD = 2.72).
Inclusion criteria were as follows:
  (1) No previous programming coursework;
  (2) Proficiency in English;
  (3) Ownership of a smartphone (Android or iOS).
Seven students were excluded (4 missing posttests; 3 incomplete courses). Participation was voluntary and embedded within a short extracurricular instructional module, with no academic grading stakes.

3.2. Research Design

The study employed a two-group randomized experimental design comparing the following:
  • Mobile Learning (ML) group: A conventional video-based mobile course.
  • Mobile Microlearning (MML) group: A redesigned course broken into micro-units with high interactivity.
Randomization was performed using a pre-generated allocation list (simple randomization), with allocation concealment enforced until students completed the pretest. Both groups were exposed to the same curriculum, same examples, and same assessment, ensuring that delivery design, not content, was the only manipulated variable.
The intervention spanned three days, allowing learners to engage with materials at their own pace, reflecting authentic mobile-learning behavior.
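The allocation procedure described above can be sketched in a few lines. This is an illustrative reconstruction, not the authors' actual script: the seed, group labels, and the concealment mechanism are assumptions consistent with "simple randomization" and "allocation concealment enforced until students completed the pretest".

```python
import random

def make_allocation_list(n_participants, groups=("ML", "MML"), seed=2023):
    """Pre-generate a simple (unrestricted) random allocation list.

    The seed and the exact generator are hypothetical; the paper states
    only that a pre-generated list with simple randomization was used.
    """
    rng = random.Random(seed)
    return [rng.choice(groups) for _ in range(n_participants)]

allocations = make_allocation_list(68)

def reveal_assignment(participant_index, pretest_done):
    """Allocation concealment: the assignment is revealed only after
    the participant has completed the pretest."""
    if not pretest_done:
        raise RuntimeError("Allocation concealed until pretest is completed")
    return allocations[participant_index]
```

Note that simple (unrestricted) randomization does not guarantee equal group sizes; block randomization would, but the paper does not report using it.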

3.3. Course Design

3.3.1. Mobile Learning (ML) Condition

The Mobile Learning (ML) condition consisted of a 20–25 min instructional video converted from a slide-based lecture that covered six introductory programming topics, including variables and types, value versus reference types, classes and objects, fields and methods, access modifiers, and inheritance. Students were able to play, pause, and rewind the video at their own pace; however, the format offered minimal interaction, reflecting typical ML implementations reported in the previous literature [2,12]. Although ML does not necessarily imply learning exclusively through a smartphone, in this study, the term refers to the mobile delivery of long-form instructional content.

3.3.2. Mobile Microlearning (MML) Condition

The MML course was created using EdApp (a mobile microlearning authoring tool; URL: [27]).
The ML course was transformed into 26 micro-units, each designed to be completed in 1–3 min, following the principles of:
  • Micro-temporal sequencing [10];
  • Single-concept focus [8];
  • Interactivity for engagement [16];
  • Instant feedback for cognitive reinforcement [14].
Each micro-unit included the following:
  • Swipe-based navigation of short textual statements;
  • Interactive card examples;
  • A multiple-choice micro-quiz;
  • Topic-level recap with true/false cards;
  • Automated feedback with scores.
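The micro-unit structure above can be expressed as a small data model. This is a hypothetical sketch of the design, not EdApp's actual schema; the field names, the example content, and the feedback wording are illustrative assumptions.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class QuizItem:
    prompt: str
    options: List[str]
    answer_index: int

    def feedback(self, chosen: int) -> str:
        # Instant feedback closes the cognitive loop inside the micro-unit.
        if chosen == self.answer_index:
            return "Correct!"
        return "Not quite. The answer is: " + self.options[self.answer_index]

@dataclass
class MicroUnit:
    title: str                      # single-concept focus: one idea per unit
    cards: List[str]                # swipe-based short textual statements
    quiz: List[QuizItem]            # multiple-choice micro-quiz
    recap: List[str] = field(default_factory=list)  # true/false recap cards
    target_minutes: float = 2.0     # within the 1-3 min completion window

# Example unit drawn from the course topics (content is paraphrased):
unit = MicroUnit(
    title="Value vs. reference types",
    cards=["A value type stores its data directly.",
           "A reference type stores a reference to its data."],
    quiz=[QuizItem("Which kind of type is copied on assignment?",
                   ["A reference type", "A value type"], 1)],
)
```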

3.4. Instruments

3.4.1. Programming Test

The programming test contained 30 multiple-choice items, developed based on the instructional content. Items had 6 options and were scored as +1 correct, 0 blank, −1 incorrect; total scores were normalized to a 0–100 scale.
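The scoring rule above can be made concrete. The linear mapping from the raw range (−30 to +30) onto 0–100 is an assumption, since the paper states only that totals were "normalized to a 0–100 scale":

```python
def score_test(responses, answer_key):
    """Score the multiple-choice test: +1 correct, 0 blank (None),
    -1 incorrect; then map the raw total linearly onto 0-100.

    The linear normalization is an assumed convention, not confirmed
    by the paper.
    """
    raw = 0
    for given, correct in zip(responses, answer_key):
        if given is None:          # blank answer: no penalty, no credit
            continue
        raw += 1 if given == correct else -1
    n = len(answer_key)            # raw score lies in [-n, +n]
    return 100 * (raw + n) / (2 * n)
```

Under this mapping, all-correct scores 100, all-blank scores 50, and all-incorrect scores 0, which keeps the guessing penalty visible after normalization.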
Test Development Process
  • Items were drafted according to the six topics.
  • Two programming instructors independently reviewed all items.
  • A pilot test (N = 18) established clarity and difficulty distribution.
  • Item difficulty index (p-values) ranged from 0.32–0.78.
  • Item discrimination (point-biserial) ranged from 0.28–0.61.
Internal consistency: KR-20 = 0.82, acceptable for short conceptual tests.
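For reference, KR-20 for dichotomously scored items is k/(k−1) · (1 − Σpᵢqᵢ/σ²), where pᵢ is the proportion answering item i correctly, qᵢ = 1 − pᵢ, and σ² is the variance of total scores. A minimal sketch (using the population-variance convention, one of two common choices):

```python
def kr20(item_matrix):
    """Kuder-Richardson formula 20 for 0/1-scored items.

    item_matrix: list of respondents, each a list of 0/1 item scores.
    Uses the population variance of total scores; some sources use the
    sample (n-1) variance instead.
    """
    k = len(item_matrix[0])        # number of items
    n = len(item_matrix)           # number of respondents
    p = [sum(resp[i] for resp in item_matrix) / n for i in range(k)]
    sum_pq = sum(pi * (1 - pi) for pi in p)
    totals = [sum(resp) for resp in item_matrix]
    mean = sum(totals) / n
    var_total = sum((t - mean) ** 2 for t in totals) / n
    return (k / (k - 1)) * (1 - sum_pq / var_total)
```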

3.4.2. Course Characteristics Questionnaire

The questionnaire evaluated learners’ perceptions across five scales, each containing 4 items:
  • Ease of Use;
  • Clarity & Coherence;
  • Appeal;
  • Difficulty;
  • Focus.
Items used a 5-point Likert scale (1 = strongly agree to 5 = strongly disagree), later normalized to 0–100.
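Because 1 denotes the strongest agreement, normalization presumably reverses the scale so that higher normalized scores mean stronger endorsement. The exact mapping is an assumption; a linear reverse-keyed version looks like this:

```python
def normalize_likert(score, low=1, high=5):
    """Map a 5-point Likert response (1 = strongly agree ...
    5 = strongly disagree) onto 0-100 so that stronger agreement
    scores higher. The reverse-keyed linear mapping is assumed;
    the paper says only that scores were normalized to 0-100.
    """
    return 100 * (high - score) / (high - low)
```

So 1 maps to 100, 3 (neutral) to 50, and 5 to 0.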
Reliability and EFA Results
The five perception scales demonstrated satisfactory internal consistency, with Cronbach’s alpha values ranging from 0.71 to 0.91. Exploratory Factor Analysis using Principal Axis Factoring and oblimin rotation confirmed the unidimensionality of each scale. Factor loadings ranged from 0.42 to 0.89, and each scale explained between 63.7% and 83.5% of the variance. These results indicate that the questionnaire reliably captured distinct dimensions of learners’ perceptions related to ease of use, clarity and coherence, appeal, difficulty, and focus.

3.5. Procedure (Figure 1)

  • Participants completed a demographic survey.
  • They took the pretest (~20 min).
  • They were randomly assigned to ML or MML.
  • They were given three days to complete the course.
  • Upon return, they confirmed course completion and took the posttest (~18 min).
  • They completed the Course Characteristics Questionnaire.
  • Participants were debriefed.
Figure 1. Workflow Diagram.

3.6. Data Analysis

Data were analyzed using the following:
  • Mixed ANOVA for pre/post differences;
  • Independent t-tests for group comparisons;
  • Pearson correlations for relationships between perceptions and learning growth;
  • Multiple regression to identify predictors;
  • Nonparametric bootstrapped mediation;
  • Assumption checks:
    Normality (Shapiro–Wilk);
    Homogeneity of variance (Levene’s test);
    Multicollinearity (VIF < 3).

4. Results

4.1. Effects of Course Format on Programming Performance

A two-way mixed ANOVA examined whether programming performance differed across delivery formats (ML vs. MML) and over time (pretest vs. posttest). Descriptive statistics are presented in Table 1.
The analysis revealed a significant Course Format × Course Completion interaction, F(3, 64) = 15.39, p = 0.001, η2 = 0.33, indicating that both groups improved, but the MML group experienced substantially larger learning gains.
A robust main effect of Course Completion was also observed, F(3, 64) = 10.62, p = 0.001, η2 = 0.51, confirming that completing the course, regardless of format, enhanced programming knowledge.
These results show that identical content, when delivered through micro-structured, interactive mobile units, produces stronger learning gains than traditional mobile video delivery.

4.2. Reliability and Factor Structure of Course Characteristics

Internal consistency for the five questionnaire scales was high (Table 2a). All Cronbach’s alpha values ranged from 0.71 to 0.91, indicating solid reliability.
Exploratory Factor Analysis (EFA) confirmed unidimensionality for each scale (Table 2b). Factor loadings were strong (0.42–0.89), and each dimension explained substantial variance (63.7–83.5%).

4.3. Correlations Between Course Characteristics and Learning Growth

Learning Growth (posttest–pretest) correlated differently across ML and MML groups (Table 3).
Course Appeal and Course Difficulty showed significant associations with Learning Growth in both delivery formats.
Focus was significantly associated with Learning Growth only in the MML group.
This pattern suggests that focus-enhancing design mechanisms (micro-steps, recaps, card-based navigation) may become meaningful only in micro-structured environments.
Given the exploratory nature of the correlation analyses, no formal correction for multiple testing was applied, consistent with similar microlearning studies.

4.4. Predictors of Learning Growth

A stepwise regression determined that Appeal, Difficulty, and Focus were significant predictors, jointly explaining 61.3% of the variance in Learning Growth, F(3, 64) = 24.6, p < 0.001.
Rather than functioning as mere subjective impressions, these learner-centered characteristics appear to capture the cognitive and emotional conditions under which mobile learning becomes more productive.

4.5. Mediation Effects

Mediation analyses revealed that Appeal partially mediated the relationship between course format and learning growth. While course format significantly predicted both Appeal and learning outcomes, the explanatory effect of format diminished after controlling for Appeal, indicating that the enhanced motivational qualities of the microlearning environment contributed meaningfully to its effectiveness. Difficulty operated as a full mediator: the MML condition significantly reduced perceived difficulty, which in turn predicted higher learning growth. Once Difficulty was accounted for, the direct effect of course format became non-significant, suggesting that reductions in cognitive load were a central mechanism explaining MML’s advantage.
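The nonparametric bootstrapped mediation reported above can be sketched as a percentile bootstrap of the indirect effect a·b (format → mediator → learning growth). This is an illustrative reconstruction, not the authors' analysis script; the resampling count and CI level are conventional defaults.

```python
import numpy as np

def bootstrap_indirect(x, m, y, n_boot=5000, seed=0):
    """Percentile-bootstrap 95% CI for the indirect effect a*b in a
    simple mediation model x -> m -> y.

    a: slope of mediator m on predictor x.
    b: slope of outcome y on m, controlling for x.
    The indirect effect is deemed significant if the CI excludes 0.
    """
    rng = np.random.default_rng(seed)
    x, m, y = map(np.asarray, (x, m, y))
    n = len(x)
    estimates = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)          # resample cases with replacement
        xb, mb, yb = x[idx], m[idx], y[idx]
        a = np.polyfit(xb, mb, 1)[0]         # a-path slope
        X = np.column_stack([np.ones(n), xb, mb])
        b = np.linalg.lstsq(X, yb, rcond=None)[0][2]  # b-path slope
        estimates.append(a * b)
    lo, hi = np.percentile(estimates, [2.5, 97.5])
    return lo, hi
```

With course format coded 0/1, a CI for a·b that excludes zero while the direct effect shrinks (partial mediation, as with Appeal) or becomes non-significant (full mediation, as with Difficulty) matches the pattern reported above.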

4.6. Group Differences in Course Characteristics

t-tests confirmed that, compared with the ML group, the MML group reported the following:
  • Higher Appeal (t(67) = −8.4, p < 0.001);
  • Higher Focus (t(67) = −11.5, p < 0.005);
  • Lower Difficulty (t(67) = 5.3, p < 0.001).
Clarity & Coherence and Ease of Use did not differ significantly between groups.

5. Discussion

The findings demonstrate a coherent, design-driven pattern: when identical instructional content is transformed into micro-units enriched with interactivity and frequent feedback, learners experience the material as more appealing, less difficult, and easier to focus on; these conditions collectively support stronger learning outcomes.
  • Key Insight 1: MML Does Not Simply “Shorten Content”: It Reconfigures Cognitive Experience
The significantly higher performance in the MML group cannot be attributed to content differences or increased instructional time, as both were held constant. Instead, microlearning’s structural features appeared to shape learners’ cognitive trajectories:
  • Short, self-contained units reduce fatigue accumulation.
  • Interactive elements sustain curiosity.
  • Frequent feedback closes cognitive loops quickly.
While the study cannot claim causation, the consistency of associations across perceptions and outcomes suggests a plausible cognitive mechanism rather than a superficial format effect.
  • Key Insight 2: Appeal and Difficulty Are Mechanistic Predictors
The mediation analyses highlight two central mechanisms:
  • Appeal (positive emotion, enjoyment, engagement)
  • Difficulty (fatigue, cognitive load)
These characteristics did not merely reflect learner preference; they predicted actual learning gains.
In other words, effective microlearning operates at the intersection of emotional engagement and cognitive efficiency.
  • Key Insight 3: Focus Matters, But Only When Design Enables It
Focus showed a significant correlation with Learning Growth only in the MML group. This suggests the following:
  • Focus is not a universal feature of mobile learning.
  • It emerges from design, not device.
  • Key Insight 4: Sustainability Contributions Extend Beyond Learning Outcomes
  • MML reduces cognitive waste (fatigue, inefficiency).
  • MML supports low-resource, reusable micro-content structures.
  • MML enables flexible participation, reducing dropout risks.
  • MML aligns with environmentally responsible digital ecosystems.
  • MML strengthens resilience by enabling stable learning even under fragmented schedules.
In short, sustainable digital pedagogy is not only about environmental cost; it is also about cognitive sustainability.

6. Sustainability Implications

The findings of this study position Mobile Microlearning (MML) not merely as a pedagogical enhancement but as a strategic contributor to sustainable digital education ecosystems. Accordingly, this section reframes the results through sustainability-oriented lenses, including resource efficiency, equity, scalability, digital resilience, and long-term institutional viability.
The sustainability implications of this study extend beyond pedagogical considerations by highlighting how microlearning supports long-term educational resilience. By organizing content into compact, single-concept units, MML reduces cognitive load and mental fatigue, promoting cognitive sustainability and allowing learners to maintain attention without exhaustion. Additionally, the micro-temporal structure aligns with fragmented modern routines, supporting temporal and behavioral sustainability by enabling consistent, low-strain engagement. The modular nature of micro-units reduces resource requirements for instructors and institutions, as these units can be reused, updated, and scaled with minimal environmental and labor cost. Because microlearning leverages widely available smartphones, it also supports digital inclusion, allowing students from diverse economic backgrounds to participate fully without reliance on expensive hardware or high-bandwidth environments. Finally, the brief, asynchronous nature of microlearning ensures that learning can continue even under disruptions, enhancing the resilience of educational systems.

7. Conclusions

This study investigated how Mobile Microlearning (MML) enhances learners’ performance and perceptions relative to traditional Mobile Learning (ML), using a tightly controlled design in which content, assessments, and learning objectives were held constant. Across all analyses, a consistent narrative emerged: the transformation of delivery design, rather than content, accounts for MML’s superior effectiveness.
Learners in the MML condition reported significantly higher Appeal, stronger Focus, and markedly lower Difficulty, and these characteristics significantly predicted learning growth. The mediation analyses further showed that MML improved performance primarily by increasing Appeal and reducing mental fatigue. These findings affirm that structured micro-units, immediate feedback, and interaction-rich navigation meaningfully shape cognitive and emotional engagement, ultimately enabling learners to succeed at higher levels.
By contextualizing these outcomes within sustainability-oriented educational discourse, the study demonstrates that MML is not merely a more engaging version of mobile learning but a strategic, sustainable, and cognitively efficient instructional paradigm. It supports inclusion, reduces cognitive and material resource demands, and strengthens the adaptability of higher education digital ecosystems, particularly in demanding fields such as IT education.
The implications for educational design are clear: short, reusable, feedback-driven micro-experiences represent a scalable and resilient pathway toward sustainable digital pedagogy. In a rapidly evolving technological landscape, MML offers a human-centered, resource-conscious, and future-ready model that aligns with global sustainability goals and the practical realities of modern learners.

Author Contributions

Conceptualization, Z.Y.; methodology, Z.Y.; software, Z.Y.; validation, Z.Y.; formal analysis, Z.Y.; investigation, Z.Y.; resources, Z.Y.; data curation, Z.Y.; writing—original draft preparation, Z.Y.; writing—review and editing, Z.Y.; visualization, Z.Y.; supervision, F.D.; project administration, F.D. All authors have read and agreed to the published version of the manuscript.

Funding

This research did not receive external funding.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki and approved by the Eastern Mediterranean University (EMU) Human Research Ethics Committee (protocol code ETK00–2023–0127; date of approval 23 June 2023).

Informed Consent Statement

All participants were fully informed about the purpose, procedures, confidentiality protections, and voluntary nature of the study. Written informed consent was obtained from every participant prior to participation, in accordance with institutional ethical guidelines.

Data Availability Statement

Due to ethical restrictions, individual-level participant data cannot be publicly shared. Aggregated datasets and all study instruments are available upon reasonable request from the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest. The funders had no role in the study’s design, data collection, analyses, interpretation of results, manuscript preparation, or decision to publish the findings.

References

  1. International Telecommunication Union. World Telecommunication/ICT Indicators Database 2021; International Telecommunication Union: Geneva, Switzerland, 2021. [Google Scholar]
  2. Wong, L.H. A learner-centric view of mobile seamless learning. Br. J. Educ. Technol. 2012, 43, 19–23. [Google Scholar] [CrossRef]
  3. Ally, M. Mobile Learning: Transforming the Delivery of Education and Training; AU Press: Athabasca, AB, Canada, 2009. [Google Scholar]
  4. Agah, T.K.; Ayse, A. Differences between m-learning (mobile learning) and e-learning: Basic terminology and usage of m-learning in education. Procedia–Soc. Behav. Sci. 2011, 15, 1925–1930. [Google Scholar]
  5. Shail, M.S. Using microlearning on mobile applications to increase knowledge retention and work performance: A review of the literature. Cureus 2019, 11, e5307. [Google Scholar] [CrossRef] [PubMed]
  6. Seong, D.S.K.; Broga, J. Usability guidelines for designing mobile learning portals. In Proceedings of the Mobility 06 Conference Proceedings, Bangkok, Thailand, 25–27 October 2006. [Google Scholar]
  7. Simon, H.A. How big is a chunk? Science 1974, 183, 482–488. [Google Scholar] [CrossRef] [PubMed]
  8. Cowan, N. The focus of attention as observed in visual working memory tasks: Making sense of competing claims. Neuropsychologia 2011, 49, 1401–1406. [Google Scholar] [CrossRef] [PubMed]
  9. Cates, S.; Barron, D.; Ruddiman, P. MobiLearn Go: Mobile microlearning as an active, location-aware game. In Proceedings of the 19th International Conference on Human–Computer Interaction with Mobile Devices and Services, Vienna, Austria, 4–7 September 2017; pp. 1–8. [Google Scholar]
  10. Jahnke, I.; Lee, Y.M.; Pham, M.; He, H.; Austin, L. Unpacking the inherent design principles of mobile microlearning. Technol. Knowl. Learn. 2020, 25, 585–619. [Google Scholar] [CrossRef]
  11. Zheng, R.; Zhu, J.; Zhang, M.; Liu, R.; Wu, Q.; Yang, L. A resource-efficient deployment approach to mobile microlearning. Wirel. Commun. Mob. Comput. 2019, 2019, 7430860. [Google Scholar] [CrossRef]
  12. Lee, Y.M. Mobile microlearning: A systematic literature review and its implications. Interact. Learn. Environ. 2023, 31, 4636–4651. [Google Scholar] [CrossRef]
  13. Giurgiu, L. Microlearning: An evolving e-learning trend. Sci. Bull. 2017, 22, 18–23. [Google Scholar]
  14. Nikou, S.A.; Economides, A.A. Mobile-based microlearning and assessment: Impact on learning performance and motivation of high school students. J. Comput. Assist. Learn. 2018, 34, 269–278. [Google Scholar] [CrossRef]
  15. Lee, Y.M.; Jahnke, I.; Austin, L. Mobile microlearning design and effects on learning efficacy and learner experience. Educ. Technol. Res. Dev. 2021, 69, 885–915. [Google Scholar] [CrossRef]
  16. Dingler, T.; Weber, D.; Pielot, M.; Cooper, J.; Chang, C.-C.; Henze, N. Language learning on-the-go. In Proceedings of the MobileHCI ’17, Vienna, Austria, 4–7 September 2017. [Google Scholar] [CrossRef]
  17. Göschlberger, B.; Bruck, P.A. Gamification in mobile and workplace-integrated microlearning. In Proceedings of the iiWAS ’17, Salzburg, Austria, 4–6 December 2017. [Google Scholar]
  18. Simons, L.P.; Foerster, F.; Bruck, P.A.; Motiwalla, L.; Jonker, C.M. Microlearning app raises health competence: Hybrid service design. Health Technol. 2015, 5, 35–43. [Google Scholar] [CrossRef] [PubMed]
  19. Brom, C.; Levcik, D.; Buchtova, M.; Klement, D. Playing educational micro-games at high schools: Individually or collectively? Comput. Hum. Behav. 2015, 48, 682–694. [Google Scholar] [CrossRef]
  20. Virvou, M.; Alepis, E. Mobile educational features in authoring tools for personalized tutoring. Comput. Educ. 2005, 44, 53–68. [Google Scholar] [CrossRef]
  21. Agnes, K.H. How the higher education workforce adapts to technology. Internet High. Educ. 2012, 15, 247–254. [Google Scholar]
  22. Skalka, J.; Drlik, M. Educational model for improving programming skills using conceptual microlearning. In Proceedings of the ICL 2018, Kos Island, Greece, 25–28 September 2018; pp. 923–934. [Google Scholar]
  23. Fang, Q. A study of college English teaching mode in the context of micro-learning. In Proceedings of the International Conference on Management and Education, Humanities and Social Sciences, Hangzhou, China, 14–15 April 2018; pp. 235–239. [Google Scholar]
  24. Hui, B. Application of micro-learning in physiology teaching for adult nursing students. J. Qiqihar Univ. Med. 2014, 21, 3219–3220. [Google Scholar]
  25. Reeves, T.C.; Lin, L. The research we have is not the research we need. Educ. Technol. Res. Dev. 2020, 68, 1991–2001. [Google Scholar] [CrossRef] [PubMed]
  26. Loewenstein, G. The psychology of curiosity: A review and reinterpretation. Psychol. Bull. 1994, 116, 75–98. [Google Scholar] [CrossRef]
  27. EdApp, Mobile Microlearning Authoring Tool. Available online: www.edapp.com (accessed on 1 February 2023).
Table 1. Descriptive Statistics for Programming Performance.
Measure | Mobile Learning M (SD) | Mobile Microlearning M (SD)
Pretest | 21.70 (1.10) | 23.15 (1.68)
Posttest | 63.31 (2.46) | 81.92 (2.72)
Learning Growth | 41.61 | 58.77
Table 2. (a) Reliability of Course Characteristics Questionnaire. (b) EFA Summary.
(a)
Scale | α
Ease of Use | 0.71
Clarity & Coherence | 0.78
Appeal | 0.85
Difficulty | 0.91
Focus | 0.81
(b)
Factor | Loading Range | Total Variance Explained | Items | Dimension
1 | 0.47–0.76 | 71.8% | 4 | Ease of Use
2 | 0.42–0.65 | 63.7% | 4 | Clarity & Coherence
3 | 0.77–0.89 | 81.9% | 4 | Appeal
4 | 0.53–0.82 | 77.6% | 4 | Difficulty
5 | 0.51–0.79 | 83.5% | 4 | Focus
Table 3. Characteristics × Learning Growth Correlations.
Course Characteristic | ML M | ML SD | ML r | MML M | MML SD | MML r
Ease of Use | 68 | 1.20 | 0.13 | 73 | 2.10 | 0.08
Clarity & Coherence | 56 | 0.92 | 0.06 | 62 | 1.18 | 0.12
Appeal | 59 | 1.43 | 0.53 * | 89 | 0.75 | 0.62 **
Difficulty | 51 | 2.71 | −0.59 * | 26 | 1.90 | −0.68 *
Focus | 42 | 0.84 | 0.27 | 72 | 1.01 | 0.41 *
* p < 0.01, ** p < 0.001.

Share and Cite

MDPI and ACS Style

Yuca, Z.; Dabaj, F. Sustainable Mobile Microlearning: Evaluating Learners’ Perceptions and Learning Outcomes in IT Education. Sustainability 2025, 17, 10860. https://doi.org/10.3390/su172310860

