Article

Design Principles and Impact of a Learning Analytics Dashboard: Evidence from a Randomized MOOC Experiment

Inma Borrella and Eva Ponce-Cueto
Center for Transportation and Logistics, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
* Author to whom correspondence should be addressed.
Appl. Sci. 2025, 15(21), 11493; https://doi.org/10.3390/app152111493
Submission received: 20 September 2025 / Revised: 21 October 2025 / Accepted: 21 October 2025 / Published: 28 October 2025
(This article belongs to the Special Issue Applications of Digital Technology and AI in Educational Settings)

Featured Application

This study demonstrates that Learning Analytics Dashboards are most effective when designed with actionable, motivationally framed feedback. In practice, MOOC providers and instructional designers can apply these design principles to create dashboards that foster learner commitment and reduce cognitive burden. Specifically, dashboards should combine clear, low-inference visualizations with concise ARCS-framed feedback and visible pacing cues (e.g., weekly study indicators) to strengthen motivation and support self-regulated learning in online course environments.

Abstract

Learning Analytics Dashboards (LADs) are increasingly deployed to support self-regulated learning in online courses. Yet many existing dashboards lack strong theoretical grounding, contextual alignment, or actionable feedback, and some designs have been shown to inadvertently discourage learners through excessive social comparison or high inference costs. In this study, we designed and evaluated a LAD grounded in the COPES model of self-regulated learning and tailored to a credit-bearing Massive Open Online Course (MOOC) using a data-driven approach. We conducted a randomized controlled trial with 8745 learners, comparing a control group, a dashboard without feedback, and a dashboard with ARCS-framed actionable feedback. The results showed that the dashboard with feedback significantly increased learners’ likelihood of verification (i.e., paying for the certification track), with mixed effects on engagement and no measurable impact on final grades. These findings suggest that dashboards are not uniformly beneficial: while feedback-supported LADs can enhance motivation and persistence, dashboards that lack interpretive support may impose cognitive burdens without improving outcomes. This study contributes to the literature on learning analytics by (1) articulating the design principles for theoretically and contextually grounded LADs and (2) providing experimental evidence on their impact in authentic MOOC settings.

1. Introduction

Massive Open Online Courses (MOOCs) have expanded access to education worldwide, but they continue to face persistent challenges with learner engagement and success. One contributing factor is the lack of individualized feedback to support self-regulated learning (SRL), the iterative process of planning, monitoring, and evaluating one’s learning, which is strongly associated with improved achievement [1,2,3]. Although SRL is widely recognized as beneficial, many MOOC learners struggle to engage in it effectively [4,5].
Learning Analytics Dashboards (LADs) have been proposed as tools to foster SRL at scale by visualizing indicators of performance, progress, and engagement [6,7]. When designed well, dashboards can help learners reflect on their behavior and adapt strategies [8,9]. Yet their effectiveness remains contested. Reviews have shown that many LADs lack theoretical grounding or contextual alignment [7,10], and others caution that poorly designed dashboards may increase cognitive load, encourage unhelpful social comparison, or even discourage learners [11,12]. The evidence to date has been mixed, with many studies relying on small-scale pilots or lab settings that limit generalizability [13,14]. This has led to ongoing debate about whether LADs have truly lived up to their promise [11].
A recurring critique of LAD design is the absence of strong theoretical foundations. Matcha et al. [7] found that most dashboards lacked explicit connections to SRL theory, while more recent reviews suggest incremental progress but continued inconsistency. Paulsen et al. [15] argue that the field is moving “from analytics to learning” but note that theoretical integration is often partial. Similarly, Masiello [16] highlights that dashboards remain largely descriptive, translating data into visualizations but rarely into actionable, pedagogically meaningful guidance.
Another concern is the use of peer-referenced indicators. While showing learners how they compare to their peers can provide useful benchmarks, it can also trigger counterproductive forms of social comparison. Classic theory suggests that upward comparison often undermines motivation [17,18], and empirical studies confirm this risk in online learning settings [12,19]. To mitigate such effects, researchers have proposed using prior cohorts rather than real-time peers as reference points [20] or offering multiple benchmarks (e.g., passing, certificate-ready, mastery) aligned with diverse learner goals [21,22].
A third critique relates to inference cost, the difficulty of interpreting and acting on dashboard information. Complex or abstract visualizations have been shown to impose high cognitive demands [23,24], with their benefits often skewed toward more educated learners [20]. Recent studies emphasize that explanatory, goal-oriented designs reduce extraneous cognitive load [25], while poor designs can undermine motivation by eroding learners’ sense of competence [26]. Reducing inference costs requires not only careful visualization design but also the inclusion of actionable feedback. The ARCS model [27], which structures feedback to capture Attention, highlight Relevance, build Confidence, and foster Satisfaction, has proven effective in online learning environments [28,29].
Finally, LAD evaluation practices have often been limited. Many studies rely on small pilots, lab settings, or usability testing, restricting generalizability [13,14]. Field studies in authentic learning contexts are rarer but crucial, as they reveal confounding factors and provide stronger evidence for both researchers and practitioners [12,30,31,32]. Reviews consistently call for more controlled evaluations of dashboards in real courses with diverse learners [11].
This study addresses these gaps by designing a theory- and context-grounded LAD for a credit-bearing MOOC in supply chain management and evaluating it through a randomized controlled trial (RCT) with 8745 learners. Guided by the COPES model of SRL [33,34] and informed by historical course data, the dashboard incorporated pacing and progress indicators, with or without actionable ARCS-framed feedback. Our findings show that dashboards without feedback offered no measurable benefits, while dashboards with feedback significantly increased learners’ verification rates (a marker of commitment) but had mixed effects on engagement and no effect on final performance. These results suggest that dashboards are not inherently beneficial; their impact depends on specific design choices. By combining design principles with experimental evidence, this work contributes to ongoing debates about the value of LADs and offers practical guidance for building dashboards that support self-regulated learning at scale.

2. Materials and Methods

This section describes the context, design, and evaluation of the study. We begin with an overview of the MOOC that served as the research setting, then detail the development of the Learning Analytics Dashboard (LAD), and finally outline the experimental design used to assess its impact.

2.1. Course Context

The intervention was implemented in a 14-week MOOC in supply chain analytics, part of a credential-bearing online program in supply chain management hosted on the edX platform (edx/2U, Arlington, VA, USA). The course ran from April to August 2023 and enrolled 8745 learners, the majority of whom were working professionals. All enrolled learners were included in the sampling frame at the start of the course; no exclusions were applied beyond standard platform registration criteria, ensuring that the randomized groups represented the full course population. Learners could audit the course for free or enroll as verified learners by paying a fee. Only verified learners obtained access to graded assignments and the final exam and were eligible for a certificate if they achieved a grade of 60% or higher.
The course consisted of five instructional modules followed by a final exam. Each module included lecture videos, practice problems, and a graded assignment. Practice problems permitted three attempts with immediate automated feedback. After either answering correctly or exhausting all attempts, learners could access detailed explanations. These problems were designed to scaffold concepts that would be assessed in subsequent graded assignments. Graded assignments contributed 10% of the final grade, while the exam contributed 90%. Survey data from previous course runs (2021–2022), implemented using Qualtrics XM (Qualtrics LLC, Provo, UT, USA), indicated that verified learners typically pursued two main goals: earning a certificate for professional purposes or achieving a high grade as a pathway toward graduate credit. A recurring theme in these surveys was anxiety about pacing and uncertainty about progress, which motivated the development of a learner-facing dashboard to scaffold self-regulated learning.
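For illustration, the grading scheme reduces to a simple weighted sum. The sketch below (not part of the course platform) applies the 10%/90% weights and the 60% certificate threshold described above.

```python
ASSIGNMENT_WEIGHT, EXAM_WEIGHT, PASS_THRESHOLD = 0.10, 0.90, 0.60

def final_grade(assignment_avg: float, exam_score: float) -> float:
    """Weighted final grade on a 0-1 scale, using the course weights above."""
    return ASSIGNMENT_WEIGHT * assignment_avg + EXAM_WEIGHT * exam_score

# Example: 80% on assignments and 60% on the exam yields 0.62, above the 0.60 threshold.
grade = final_grade(assignment_avg=0.80, exam_score=0.60)
print(round(grade, 2), "certificate earned" if grade >= PASS_THRESHOLD else "below threshold")
```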

2.2. Dashboard Design

The Learning Analytics Dashboard (LAD) was designed using the COPES model of self-regulated learning [33,34], which conceptualizes regulation as cycles of conditions, operations, products, evaluations, and standards. From this perspective, dashboards act as external feedback systems that can supplement or correct learners’ often biased self-assessments.
To contextualize the design, we conducted exploratory analyses on clickstream data from earlier runs of the course (2021–2022). These analyses were performed using Python 3.10 in Google Colab (Google LLC, Mountain View, CA, USA), relying on the pandas, numpy, and statsmodels libraries. Multiple linear regression was used to predict final grades from behavioral traces (see Table 1). Indicators were selected if they showed statistical significance in the linear model (p < 0.05). This process yielded three indicators: (1) the number of unique lecture videos completed; (2) the number of unique practice problems submitted; and (3) the number of practice problem solutions viewed.
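As an illustration of this indicator-selection step, the following sketch runs a multiple linear regression with statsmodels and retains predictors significant at p < 0.05. The file name and column names are hypothetical placeholders, not the variables of the original analysis.

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical feature table built from a prior course run: one row per learner,
# candidate behavioral indicators as columns, plus the final grade (0-1 scale).
features = pd.read_csv("prior_run_features.csv")  # placeholder file name

candidate_cols = [
    "videos_completed", "practice_submitted", "solutions_viewed",
    "gap_between_modules", "time_within_module", "video_revisits",
    "forum_posts", "forum_views",
]  # placeholder names mirroring the candidate indicators in Table 1

X = sm.add_constant(features[candidate_cols])
y = features["final_grade"]

model = sm.OLS(y, X, missing="drop").fit()
print(model.summary())

# Keep indicators whose coefficients are significant at alpha = 0.05
selected = [c for c in candidate_cols if model.pvalues[c] < 0.05]
print("Selected indicators:", selected)
```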
Consultations with the teaching staff underscored concerns about learners’ pacing behaviors and highlighted the critical role of structured study planning in online learning. These insights informed the inclusion of time-related features in the dashboard to scaffold effective pacing strategies. The decision was further motivated by evidence on the spacing effect, which demonstrates that learning is more effective when study sessions are distributed over time rather than massed together [35,36].
Two dashboard variants were developed to examine the role of feedback in shaping learners’ interpretation and use of these indicators. Both dashboards presented the indicators with basic visualizations and a space to display messages. The dashboard shown to Group A contained generic, static messages. The dashboard shown to Group B contained personalized and actionable feedback messages. These personalized feedback messages were drafted using Keller’s ARCS model [27]: Attention (capture learner interest), Relevance (connect learning to learner goals and needs), Confidence (build belief in ability to succeed), and Satisfaction (reinforce accomplishment). Each message was linked to the learner’s current progress and automatically selected from a message bank.

2.3. Experimental Design

We conducted a three-arm randomized controlled trial (RCT) to evaluate the LAD’s impact on learner outcomes. Upon enrollment, learners were randomly assigned into one of three groups: (1) Group C (Control): no dashboard; (2) Group A (LAD without feedback): dashboard with indicators and generic messages; (3) Group B (LAD with feedback): dashboard with indicators plus ARCS-framed actionable feedback. Randomization occurred automatically at the time of enrollment, without a separate consent procedure, as learners participated under the platform’s standard terms of use.
Learners in Groups A and B accessed their dashboards through an “Engagement Dashboard” button on the course landing page. To ensure consistency in the interface, control learners saw a similar button leading to a survey. Engagement with the dashboard was voluntary, with no incentives to click. As learners were randomly assigned to conditions at enrollment, analyses followed an intent-to-treat design; dashboard usage frequency was not recorded, as the focus was on the impact of dashboard availability rather than self-selected engagement.
Three outcome variables were analyzed: verification status, engagement, and performance. Verification status was a binary variable indicating whether a learner upgraded to the verified track (i.e., paid for access to graded assignments, the exam, and the option to earn a certificate). Engagement was operationalized as the total time spent in the course. Engagement time was computed from clickstream logs as the sum of session durations, where each session comprised consecutive events separated by less than 10 min of inactivity. Total time was calculated as the cumulative duration of all sessions for each learner. Performance was measured as the final course grade, expressed on a 0–1 scale, and available only for verified learners. Verification status outcomes were analyzed using logistic regression, with Group C (no dashboard) as the reference category. Engagement and performance were analyzed using one-way ANOVAs, followed by Tukey’s HSD tests for post hoc comparisons. ANOVA was chosen to maintain comparability with prior learning analytics research and to address our primary goal of testing the mean group differences.
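The sessionization rule can be implemented along the following lines with pandas; the column names (learner_id, timestamp) are assumed for illustration and do not reflect the platform’s actual log schema.

```python
import pandas as pd

SESSION_GAP = pd.Timedelta(minutes=10)  # inactivity threshold described above


def total_engagement_hours(events: pd.DataFrame) -> pd.Series:
    """Sum per-learner session durations from a clickstream log.

    `events` is assumed to contain one row per logged event with the columns
    'learner_id' and 'timestamp' (datetime); these names are illustrative.
    """
    events = events.sort_values(["learner_id", "timestamp"])
    gaps = events.groupby("learner_id")["timestamp"].diff()
    # A new session starts at a learner's first event or after >= 10 min of inactivity
    new_session = gaps.isna() | (gaps >= SESSION_GAP)
    session_id = new_session.groupby(events["learner_id"]).cumsum().rename("session")
    # Duration of each session = last event time minus first event time
    durations = (
        events.groupby(["learner_id", session_id])["timestamp"]
        .agg(lambda ts: ts.max() - ts.min())
    )
    return durations.groupby(level="learner_id").sum().dt.total_seconds() / 3600
```

Under this rule, a session containing a single event contributes zero time; how such sessions were handled in the original pipeline is not specified.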
Because the engagement data were positively skewed, we assessed robustness by conducting the analyses on both raw and log-transformed values. Assumption checks indicated that the log transformation improved distributional properties: Levene’s test confirmed homogeneity of variances across groups for the log-transformed data (W = 2.74, p = 0.065) but not for the raw data (W = 4.26, p = 0.014). The Shapiro–Wilk tests remained significant (p < 0.001) due to the large sample size (N = 8745), though normality improved substantially (mean W increased from 0.43 to 0.77).
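The assumption checks and the raw versus log-transformed comparison can be reproduced along these lines with scipy and statsmodels; the synthetic engagement values below stand in for the per-learner hour totals.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Placeholder engagement data (hours) for Groups A, B, and C; in the study these
# were the per-learner totals computed from the clickstream sessionization.
rng = np.random.default_rng(42)
hours_a, hours_b, hours_c = (rng.lognormal(mean=0.5, sigma=1.8, size=2900) for _ in range(3))

groups_raw = [hours_a, hours_b, hours_c]
groups_log = [np.log1p(g) for g in groups_raw]  # log(1 + x) transform

# Homogeneity of variances (Levene) on raw vs. log-transformed values
print("Levene raw:", stats.levene(*groups_raw))
print("Levene log:", stats.levene(*groups_log))

# Normality (Shapiro-Wilk) per group on the log scale
print("Shapiro W (log):", [round(stats.shapiro(g).statistic, 2) for g in groups_log])

# One-way ANOVA on both scales
print("ANOVA raw:", stats.f_oneway(*groups_raw))
print("ANOVA log:", stats.f_oneway(*groups_log))

# Tukey's HSD post hoc comparisons (log scale shown)
values = np.concatenate(groups_log)
labels = np.repeat(["A", "B", "C"], [len(g) for g in groups_log])
print(pairwise_tukeyhsd(values, labels, alpha=0.05))
```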
Assumption checks for performance data indicated deviations from normality (Shapiro–Wilk p < 0.001), but homogeneity of variances was satisfied (Levene’s p = 0.065). Given the bounded nature of grade data and the large sample size, the ANOVA was considered robust to these deviations, and analyses were conducted on raw values.
Effect sizes were reported as odds ratios (ORs) for logistic regression and eta squared (η²) for ANOVAs. Statistical significance was set at α = 0.05. All analyses were performed in Python 3.12 in Google Colab (Google LLC, Mountain View, CA, USA), using the pandas, numpy, statsmodels, and scipy libraries.
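As a sketch of the verification analysis, the snippet below fits a logistic regression with Group C as the reference category and converts the coefficients to odds ratios with 95% confidence intervals. The learner-level table is reconstructed from the aggregate counts reported in Table 4 and is intended only to illustrate the modeling approach.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Illustrative learner-level table: one row per enrollee, with the assigned
# condition and a 0/1 flag for upgrading to the verified track (counts from Table 4).
df = pd.DataFrame({
    "group": np.repeat(["C", "A", "B"], [2886, 2952, 2907]),
    "verified": np.concatenate([
        np.r_[np.ones(452), np.zeros(2886 - 452)],
        np.r_[np.ones(462), np.zeros(2952 - 462)],
        np.r_[np.ones(527), np.zeros(2907 - 527)],
    ]),
})

# Treatment coding with Group C (no dashboard) as the reference category
model = smf.logit("verified ~ C(group, Treatment(reference='C'))", data=df).fit(disp=0)

# Odds ratios and 95% confidence intervals
odds_ratios = np.exp(model.params)
conf_int = np.exp(model.conf_int())
print(pd.concat([odds_ratios.rename("OR"), conf_int], axis=1))
```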

3. Results

The results are organized into two parts. First, we present the design of the Learning Analytics Dashboard (LAD), derived from theory and contextual data. Second, we report the outcomes from the randomized controlled field experiment.

3.1. Learning Analytics Dashboard Design

Grounded in the COPES model and informed by historical learner data, we derived four principles for the design of a theory- and context-driven LAD: (1) use benchmarks from prior cohorts instead of real-time peers to avoid unproductive social comparison; (2) provide multiple goal-aligned standards to reflect the diverse intentions of MOOC learners; (3) combine low-inference visualizations with actionable, ARCS-framed feedback to reduce inference costs and support motivation; and (4) make pacing visible to promote spacing effects. These principles, along with their implementation in dashboard components and theoretical grounding, are summarized in Table 2.
The final LAD (Figure 1) implemented these principles through six components. Engagement KPIs and Module Videos (a, b) provided progress benchmarks and workload references, aligned with both passing and high-achievement trajectories. Final Exam Countdown, Weekly Streak, and Total Time Spent (c, d, e) emphasized pacing, encouraging learners to spread out study sessions rather than relying on cramming. All visualizations were deliberately kept simple (bar and line charts with explanatory tooltips) to minimize extraneous cognitive load. Groups A and B therefore used dashboards with identical visual layouts and indicators; their only difference lay in the feedback mechanism.
Messages (f) differed by dashboard variant: Group A received static, motivational, or course-related messages, while Group B received personalized messages to encourage sustained effort and suggest specific actions. Each learner in Group B was categorized into one of three pacing states, ahead, on-target, or behind, according to their progress relative to the course timeline. The classification followed the logic below:
n/w < k_i/a_i: ahead;    n/w = k_i/a_i: on-target;    n/w > k_i/a_i: behind
where
  • n = number of weeks since the learner enrolled;
  • w = total number of weeks between enrollment and the course’s final exam;
  • k_i = number of activities of type i completed by the learner;
  • a_i = total number of activities of type i available in the course;
  • i ∈ {lecture videos, practice problems, solution checks}.
Learners were thus categorized as ahead if they had completed proportionally more activities than expected for the elapsed time, on-target if their progress matched the expected rate, and behind if they had completed fewer activities than expected. Table 3 presents representative messages for Groups A and B.
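A minimal sketch of this classification and message selection is given below. Pooling the three activity types into a single progress ratio and the contents of the message bank (adapted from Table 3) are simplifying assumptions; the production dashboard drew on a larger, course-specific message bank.

```python
# Message bank adapted from Table 3; illustrative only.
MESSAGE_BANK = {
    "ahead": "Excellent progress, you're ahead of schedule! "
             "You can use the extra time to review previous modules.",
    "on-target": "You're right on track. Schedule your next study session "
                 "to keep your rhythm consistent.",
    "behind": "You're a bit behind but you can still make it! "
              "Prioritize lecture videos and practice problems.",
}


def pacing_state(weeks_elapsed: int, weeks_total: int,
                 completed: dict, available: dict) -> str:
    """Classify a learner as ahead, on-target, or behind.

    completed[i] is k_i (activities of type i finished) and available[i] is a_i
    (activities of type i in the course). Pooling the activity types into one
    progress ratio is an assumption; the formula above compares n/w against
    k_i/a_i per activity type without specifying how types are combined.
    """
    time_fraction = weeks_elapsed / weeks_total                        # n / w
    progress_fraction = sum(completed.values()) / sum(available.values())
    if time_fraction < progress_fraction:
        return "ahead"
    if time_fraction > progress_fraction:
        return "behind"
    return "on-target"  # exact equality, following the rule as stated


state = pacing_state(
    weeks_elapsed=4, weeks_total=14,
    completed={"videos": 20, "practice": 15, "solutions": 10},
    available={"videos": 60, "practice": 45, "solutions": 45},
)
print(state, "->", MESSAGE_BANK[state])
```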

3.2. Experimental Evaluation

We evaluated the impact of the LAD through a randomized controlled trial with three conditions: Group C (control, no dashboard), Group A (dashboard without feedback), and Group B (dashboard with feedback). Outcomes were examined across three dimensions: verification (whether learners paid to pursue a certificate), engagement (time invested in the course), and performance (final grades in the course). These measures capture both learners’ behavioral commitment to the course and their academic achievement.

3.2.1. Verification

Verification rates are summarized in Table 4. Learners who received the dashboard with feedback (Group B) were more likely to upgrade to the verified track (18.1%) than those in the control condition (15.7%) or those using the dashboard without feedback (15.7%). Logistic regression confirmed a significant effect of condition, with learners in Group B showing higher odds of verification relative to the control (OR = 1.19, 95% CI [1.04, 1.37], p = 0.012; Table 5). By contrast, the dashboard without feedback (Group A) had no effect compared to the control (OR = 1.00, 95% CI [0.87, 1.15], p = 0.990). These findings indicate that dashboards enhanced with feedback increased learners’ willingness to invest financially in the course to pursue a certificate, whereas dashboards without feedback had no measurable effect.

3.2.2. Engagement

Engagement was measured as total time spent in the course (in hours). Descriptive statistics are reported in Table 6. A one-way ANOVA of the raw values indicated significant differences between groups, F(2, 8745) = 4.26, p = 0.014, but the effect size was negligible (η² = 0.001) (Table 7). Tukey’s post hoc tests showed that learners in Group B (dashboard with feedback) spent significantly more time than those in Group A (dashboard without feedback) and marginally more than those in the control group (Group C; p = 0.056) (Table 7). However, due to the positive skew of the engagement distribution, we repeated the analysis on log-transformed data. In this robustness check, the group effect was no longer significant, F(2, 8745) = 1.53, p = 0.217, and no pairwise contrasts reached significance (Table 8). Taken together, these results indicate that the differences observed in the raw data were driven by a small subset of highly engaged learners rather than a consistent effect across the population.
It is worth noting that the engagement values in Table 6 reflect averages across all enrolled learners, including a large number who registered but showed little or no activity. Consequently, mean engagement time appears low compared with that of highly active learners, such as the illustrative example in Figure 1. Among learners who achieved a passing grade, the mean total engagement time was substantially higher (89.7 h), indicating wide variability in study commitment across the population.

3.2.3. Performance

Performance was analyzed using final course grades. This analysis was restricted to verified learners, since they were the only ones who had access to graded assignments and the final exam. Descriptive statistics are reported in Table 9: learners in Group C (control) achieved slightly higher mean grades (M = 0.39) than those in Group B (M = 0.37) and Group A (M = 0.35), but the differences were small. A one-way ANOVA revealed no significant differences between groups (F(2, 1438) = 1.46, p = 0.233), with a negligible effect size (η² = 0.002). Tukey’s post hoc comparisons confirmed that none of the pairwise contrasts were significant (all p > 0.20; Table 10). These results indicate that neither dashboard condition, with or without feedback, had a measurable effect on learners’ final grades.

4. Discussion

This study set out to design and evaluate a Learning Analytics Dashboard (LAD) grounded in theory and contextualized in a MOOC, with the goal of enhancing self-regulated learning (SRL) and improving learner outcomes. By combining insights from SRL theory, data-driven indicator selection, and prior critiques of LAD design, we proposed a set of design principles and tested their impact in an online course field experiment. The results highlight both the promise and the pitfalls of LADs: dashboards with actionable, ARCS-framed feedback increased learners’ likelihood of verification (paying for the option to get a certificate) and showed tentative signs of boosting engagement, while dashboards without feedback provided no measurable benefit and may have imposed additional cognitive costs. Across all conditions, no significant effects were observed on final performance.
These findings extend the ongoing debate captured by Kaliisa et al. [11], who noted that the evidence for positive effects of LADs remains mixed. Our study underscores that dashboards are not uniformly beneficial: simply visualizing learner data, without clear interpretive support, can increase inference costs and fail to motivate action. In contrast, dashboards that integrate low-inference visualizations with actionable feedback can foster motivation and persistence, leading to higher levels of commitment. In this sense, the question is not whether LADs “work” but under what design conditions they provide value.
The results also align with research on cognitive load and motivation. Explanatory, goal-oriented designs have been shown to reduce extraneous cognitive load [25], while poorly structured feedback can undermine learners’ sense of competence [26]. Our findings fit this dual mechanism: learners who received indicators without guidance saw little benefit, whereas those who received ARCS-framed feedback showed stronger commitment to the course. Feedback thus appears essential for transforming dashboard data into actionable strategies while also sustaining motivation.
A further contribution of this study concerns pacing. Building on evidence that distributed engagement predicts certification more strongly than total time-on-task [38], we incorporated features such as weekly streaks and time-on-task indicators. While these elements alone were insufficient to produce significant effects, their integration with actionable feedback may explain why the feedback condition produced more favorable outcomes. Consistent with this, Group B exhibited higher verification odds than the control group and a modest raw engagement advantage over Group A that attenuated under log transformation (see Table 5, Table 7 and Table 8); however, no grade differences were observed (Table 10). This pattern suggests that pacing supports are most effective when combined with guidance that helps learners interpret and act on them.
Taken together, our findings emphasize that the impact of LADs depends not on their mere presence but on their theoretical grounding, contextualization, and feedback design. Effective dashboards need to reduce inference costs, support motivational needs, and provide meaningful pacing scaffolds. Future research should continue to explore which combinations of features influence different learner outcomes and how learner characteristics (e.g., prior achievement, goals, self-efficacy) moderate dashboard effectiveness.

Limitations and Future Work

This study has three main limitations. First, it was conducted in a single MOOC on supply chain management, primarily targeting working professionals, which constrains the generalizability to other subjects, learner populations, or formats such as instructor-paced or non-credit-bearing courses. Second, the dashboard was evaluated as a bundled intervention, making it impossible to isolate the specific contributions of individual components, for example, whether the increase in verification rates was driven by ARCS-framed messages, pacing indicators, or their interaction. Third, this study focused on short-term outcomes within a single course; we did not examine whether exposure to dashboards fostered lasting SRL practices or longer-term academic gains.
Future work should therefore test LADs across more diverse settings, employ experimental designs that isolate the contribution of individual features, and track learners longitudinally to capture sustained impacts. In parallel, our ongoing research agenda aims to refine the dashboard through component-level A/B tests and explore the integration of agentic AI systems capable of delivering adaptive, context-aware feedback in real time. These developments, together with multi-course replications, will help clarify how actionable, personalized feedback can strengthen self-regulated learning at scale.

5. Conclusions

This study designed and evaluated a Learning Analytics Dashboard (LAD) grounded in SRL theory and contextualized in a MOOC, testing its impact in an online course field experiment. The results show that dashboards are not inherently beneficial: a LAD without actionable feedback offered no measurable advantages and in some cases was associated with negative outcomes, while a LAD with ARCS-framed feedback increased learners’ verification rates and showed tentative benefits for engagement. No differences were found in final course performance across groups.
These findings suggest that LADs are most effective when they combine low-inference visualizations with actionable, motivationally framed feedback and when they make pacing strategies explicit. For practitioners, these results caution against deploying dashboards that simply display learner data while highlighting the value of embedding interpretive support that helps learners translate indicators into concrete actions. For researchers, this study underscores the need for designs that isolate the contribution of dashboard features and for evaluations across diverse contexts to determine when and for whom LADs are effective.
Ultimately, the question is not whether LADs have lived up to the hype but under what conditions they can deliver on their promise. By grounding dashboards in theory, contextualizing them with course-specific data, and embedding feedback that supports both cognition and motivation, future work can move toward LADs that reliably foster self-regulated learning and learner success at scale.

Author Contributions

Conceptualization: I.B. and E.P.-C.; methodology: I.B.; software: I.B.; validation: I.B. and E.P.-C.; formal analysis: I.B.; investigation: I.B.; resources: E.P.-C.; data curation: I.B.; writing—original draft preparation: I.B.; writing—review and editing: I.B. and E.P.-C.; visualization: I.B.; supervision: E.P.-C.; project administration: I.B.; funding acquisition: E.P.-C. All authors have read and agreed to the published version of the manuscript.

Funding

This study was funded by the Massachusetts Institute of Technology (MIT) Integrated Learning Initiative (MITili), 2022–2023 Grant.

Institutional Review Board Statement

Ethical review and approval were waived for this study. The research involved secondary analysis of fully de-identified learner data from the MITx MicroMasters Program in Supply Chain Management, hosted on the edX platform, and therefore did not require formal review by the MIT Committee on the Use of Humans as Experimental Subjects (COUHES).

Informed Consent Statement

Informed consent was waived for this study. The analysis used anonymized learner data collected through the edX platform under its standard Terms of Service, with no personally identifiable information disclosed.

Data Availability Statement

The learner data analyzed in this study were obtained under a data-use agreement with MIT Open Learning and the edX platform. Due to privacy and institutional restrictions, these data cannot be shared publicly. However, detailed methodological descriptions are provided in the paper to enable replication of the analytic procedures using comparable datasets.

Conflicts of Interest

The authors declare no conflicts of interest. The funders had no role in the design of this study; in the collection, analyses, or interpretation of the data; in the writing of the manuscript; or in the decision to publish the results.

References

  1. Panadero, E. A review of self-regulated learning: Six models and four directions for research. Front. Psychol. 2017, 8, 422. [Google Scholar] [CrossRef]
  2. Winne, P.H.; Hadwin, A.F. Studying as self-regulated learning. Metacogn. Educ. Theory Pract. 1998, 93, 27–30. [Google Scholar]
  3. Schunk, D.H.; Zimmerman, B.J. Self-regulated learning and performance: An introduction and an overview. In Handbook of Self-Regulation of Learning and Performance; Routledge: Abingdon, UK, 2011; pp. 15–26. [Google Scholar]
  4. Lin, X.; Lehman, J.D. Supporting learning of variable control in a computer-based biology environment: Effects of prompting college students to reflect on their own thinking. J. Res. Sci. Teach. 1999, 36, 837–858. [Google Scholar] [CrossRef]
  5. Berardi-Coletta, B.; Buyer, L.S.; Dominowski, R.L.; Rellinger, E.R. Metacognition and problem solving: A process-oriented approach. J. Exp. Psychol. Learn. Mem. Cogn. 1995, 21, 205–223. [Google Scholar] [CrossRef]
  6. Schwendimann, B.A.; Rodríguez-Triana, M.J.; Vozniuk, A.; Prieto, L.P.; Boroujeni, M.S.; Holzer, A.; Gillet, D.; Dillenbourg, P. Perceiving Learning at a Glance: A Systematic Literature Review of Learning Dashboard Research. IEEE Trans. Learn. Technol. 2017, 10, 30–41. [Google Scholar] [CrossRef]
  7. Matcha, W.; Gašević, D.; Pardo, A. A systematic review of empirical studies on learning analytics dashboards: A self-regulated learning perspective. IEEE Trans. Learn. Technol. 2020, 13, 226–245. [Google Scholar] [CrossRef]
  8. Aguilar, S.J.; Karabenick, S.A.; Teasley, S.D.; Baek, C. Associations between learning analytics dashboard exposure and motivation and self-regulated learning. Comput. Educ. 2021, 162, 104085. [Google Scholar] [CrossRef]
  9. Hattie, J.; Timperley, H. The Power of Feedback. Rev. Educ. Res. 2007, 77, 81–112. [Google Scholar] [CrossRef]
  10. Jivet, I.; Scheffel, M.; Drachsler, H.; Specht, M. Awareness Is Not Enough: Pitfalls of Learning Analytics Dashboards in the Educational Practice. In Proceedings of the Data Driven Approaches in Digital Education, Tallinn, Estonia, 12–15 September 2017; Springer International Publishing: New York, NY, USA, 2017; pp. 82–96. [Google Scholar]
  11. Kaliisa, R.; Misiejuk, K.; López-Pernas, S.; Khalil, M.; Saqr, M. Have learning analytics dashboards lived up to the hype? A systematic review of impact on students’ achievement, motivation, participation and attitude. In Proceedings of the 14th Learning Analytics and Knowledge Conference, Kyoto, Japan, 18–22 March 2024; pp. 295–304. [Google Scholar]
  12. Valle, N.; Antonenko, P.; Valle, D.; Sommer, M.; Huggins-Manley, A.C.; Dawson, K.; Kim, D.; Baiser, B. Predict or describe? How learning analytics dashboard design influences motivation and statistics anxiety in an online statistics course. Educ. Technol. Res. Dev. 2021, 69, 1405–1431. [Google Scholar] [CrossRef]
  13. Ez-zaouia, M.; Lavoué, E. EMODA: A tutor oriented multimodal and contextual emotional dashboard. In Proceedings of the Seventh International Learning Analytics & Knowledge Conference, Vancouver, BC, Canada, 13–17 March 2017; pp. 429–438. [Google Scholar]
  14. Mejia, C.; Florian, B.; Vatrapu, R.; Bull, S.; Gomez, S.; Fabregat, R. A Novel Web-Based Approach for Visualization and Inspection of Reading Difficulties on University Students. IEEE Trans. Learn. Technol. 2017, 10, 53–67. [Google Scholar] [CrossRef]
  15. Paulsen, L.; Lindsay, E. Learning analytics dashboards are increasingly becoming about learning and not just analytics—A systematic review. Educ. Inf. Technol. 2024, 29, 14279–14308. [Google Scholar] [CrossRef]
  16. Masiello, I.; Mohseni, Z.; Palma, F.; Nordmark, S.; Augustsson, H.; Rundquist, R. A current overview of the use of learning analytics dashboards. Educ. Sci. 2024, 14, 82. [Google Scholar] [CrossRef]
  17. Festinger, L. A Theory of Social Comparison Processes. Hum. Relat. 1954, 7, 117–140. [Google Scholar] [CrossRef]
  18. Blanton, H.; Buunk, B.P.; Gibbons, F.X.; Kuyper, H. When better-than-others compare upward: Choice of comparison and comparative evaluation as independent predictors of academic performance. J. Personal. Soc. Psychol. 1999, 76, 420. [Google Scholar] [CrossRef]
  19. Tong, W.; Shakibaei, G. The role of social comparison in online learning motivation through the lens of social comparison theory. Acta Psychol. 2025, 258, 105291. [Google Scholar] [CrossRef]
  20. Davis, D.; Jivet, I.; Kizilcec, R.F.; Chen, G.; Hauff, C.; Houben, G.J. Follow the successful crowd: Raising MOOC completion rates through social comparison at scale. In Proceedings of the Seventh International Learning Analytics & Knowledge Conference, Vancouver, BC, Canada, 13–17 March 2017; pp. 454–463. [Google Scholar]
  21. Zheng, S.; Rosson, M.B.; Shih, P.C.; Carroll, J.M. Understanding Student Motivation, Behaviors and Perceptions in MOOCs. In Proceedings of the CSCW ’15: 18th ACM Conference on Computer Supported Cooperative Work & Social Computing, Vancouver, BC, Canada, 14–18 March 2015; pp. 1882–1895. [Google Scholar]
  22. Elliot, A.J.; Thrash, T.M. Achievement goals and the hierarchical model of achievement motivation. Educ. Psychol. Rev. 2001, 13, 139–156. [Google Scholar] [CrossRef]
  23. Dowding, D.; Merrill, J.A.; Onorato, N.; Barrón, Y.; Rosati, R.J.; Russell, D. The impact of home care nurses’ numeracy and graph literacy on comprehension of visual display information: Implications for dashboard design. J. Am. Med. Inform. Assoc. 2018, 25, 175–182. [Google Scholar] [CrossRef]
  24. Hou, X.; Nagashima, T.; Aleven, V. Design a Dashboard for Secondary School Learners to Support Mastery Learning in a Gamified Learning Environment. In Proceedings of the Educating for a New Future: Making Sense of Technology-Enhanced Learning Adoption, Toulouse, France, 12–16 September 2022; Springer International Publishing: New York, NY, USA, 2022; pp. 542–549. [Google Scholar]
  25. Cheng, N.; Zhao, W.; Xu, X.; Liu, H.; Tao, J. The influence of learning analytics dashboard information design on cognitive load and performance. Educ. Inf. Technol. 2024, 29, 19729–19752. [Google Scholar] [CrossRef]
  26. Evans, P.; Vansteenkiste, M.; Parker, P.; Kingsford-Smith, A.; Zhou, S. Cognitive load theory and its relationships with motivation: A self-determination theory perspective. Educ. Psychol. Rev. 2024, 36, 7. [Google Scholar] [CrossRef]
  27. Keller, J.M. Development and use of the ARCS model of instructional design. J. Instr. Dev. 1987, 10, 2. [Google Scholar] [CrossRef]
  28. Inkelaar, T.; Simpson, O. Challenging the ‘distance education deficit’ through ‘motivational emails’. Open Learn. 2015, 30, 152–163. [Google Scholar] [CrossRef]
  29. Parte, L.; Mellado, L. Motivational emails in distance university. J. Educ. Online 2021, 18, 3. [Google Scholar] [CrossRef]
  30. Davis, D.; Chen, G.; Jivet, I.; Hauff, C.; Houben, G.J. Encouraging Metacognition & Self-Regulation in MOOCs through Increased Learner Feedback. In Proceedings of the LAK Workshop on Learning Analytics for Learners, Edinburgh, UK, 26 April 2016; pp. 17–22. [Google Scholar]
  31. Kizilcec, R.F.; Davis, G.M.; Cohen, G.L. Towards Equal Opportunities in MOOCs: Affirmation Reduces Gender & Social-Class Achievement Gaps in China. In Proceedings of the Fourth (2017) ACM Conference on Learning @ Scale, Cambridge, MA, USA, 20–21 April 2017; pp. 121–130. [Google Scholar]
  32. Davis, D.; Triglianos, V.; Hauff, C.; Houben, G.J. SRLx: A Personalized Learner Interface for MOOCs. In Proceedings of the Lifelong Technology-Enhanced Learning, Leeds, UK, 3–5 September 2018; Springer International Publishing: New York, NY, USA, 2018; pp. 122–135. [Google Scholar]
  33. Winne, P.H. Experimenting to bootstrap self-regulated learning. J. Educ. Psychol. 1997, 89, 397. [Google Scholar] [CrossRef]
  34. Winne, P.H. Modeling self-regulated learning as learners doing learning science: How trace data and learning analytics help develop skills for self-regulated learning. Metacogn. Learn. 2022, 17, 773–791. [Google Scholar] [CrossRef]
  35. Dempster, F.N. The spacing effect: A case study in the failure to apply the results of psychological research. Am. Psychol. 1988, 43, 627–634. [Google Scholar] [CrossRef]
  36. Carvalho, P.F.; Sana, F.; Yan, V.X. Self-regulated spacing in a massive open online course is related to better learning. npj Sci. Learn. 2020, 5, 2. [Google Scholar] [CrossRef]
  37. Murayama, K.; Elliot, A.J. The joint influence of personal achievement goals and classroom goal structures on achievement-relevant outcomes. J. Educ. Psychol. 2009, 101, 432–447. [Google Scholar] [CrossRef]
  38. Miyamoto, Y.R.; Coleman, C.; Williams, J.J.; Whitehill, J.; Nesterko, S.; Reich, J. Beyond time-on-task: The relationship between spaced study and certification in MOOCs. J. Learn. Anal. 2015, 2, 47–69. [Google Scholar] [CrossRef]
Figure 1. Screenshot of the Learning Analytics Dashboard (LAD) shown to learners in Group A (dashboard without feedback). Components (a–e) provide low-inference visualizations of engagement and pacing, while component (f) delivers messages. Groups A and B shared identical visual components; however, Group A received only generic motivational or course-related messages, whereas Group B received more personalized, ARCS-structured feedback. The example shown corresponds to a highly engaged learner (152 h total) and was used for illustrative purposes.
Table 1. Exploratory regression results (MLR) for behavioral indicators.
Indicator | Definition | MLR Significance (p)
1. Number of unique lecture videos completed | Log entries for unique videos with ≥1 “stop_video” event | 0.001
2. Number of unique practice problems submitted | Log entries for unique practice problems with ≥1 submission | <0.001
3. Number of solutions for unique practice problems checked | Ratio of solution views (“show_answer”) over incorrect attempts | <0.001
4. Time period between modules | Time difference between final attempt in one module and first “play_video” in the next | 0.379
5. Time period within a module | Time difference between first “play_video” and last attempt in same module | 0.009
6. Number of video revisits during/after graded assignments | Log entries for unique videos with ≥1 “play_video” after initial attempt in module test | 0.627
7. Writing activities on discussion forums | Log entries for forum posts/comments with “created” event | 0.246
8. Reading activities on discussion forums | Log entries for forum posts/comments with “viewed” event | 0.604
Table 2. Mapping of LAD design principles to dashboard components, SRL mechanisms (COPES), and supporting evidence.
Design Principle | Dashboard Component(s) | SRL Mechanism (COPES) | Supporting Evidence
Benchmarks from prior cohorts | Engagement KPIs (a) | Standards and Evaluation | Reduces unproductive social comparison [12,17,19,20]
Multiple goal-aligned standards | Engagement KPIs (a), Module Videos (b) | Standards and Planning | Supports diverse learner goals; avoids mismatches [21,22,37]
Low-inference visuals + actionable ARCS feedback | All Visualizations (a–e) + Messages (f) | Operations, Products, Evaluation | Reduces inference cost and extraneous load; supports competence and persistence [23,24,25,26,27]
Spacing effect (make pacing visible) | Final Exam Countdown (c), Weekly Streak (d), Total Time Spent (e) | Conditions, Planning | Distributed engagement predicts certification beyond time-on-task [35,36,38]
Table 3. Representative dashboard messages for Groups A and B.
Group A (static generic messages):
  • “Think about what your goal is today.”
  • “The key concepts document is a good source to search quickly for specific concepts when studying.”
Group B (ARCS-framed actionable feedback messages, by learner progress):
  • Ahead: ARCS-framed message: “Excellent progress, you’re ahead of schedule! Maintaining this pace will help you get your certificate.” Actionable message: “You can use the extra time to review previous modules or try the supplemental materials.”
  • On-target: ARCS-framed message: “You’re right on track. Steady weekly effort is the best predictor of course success.” Actionable message: “Schedule your next study session to keep your rhythm consistent and finish strong.”
  • Behind: ARCS-framed message: “Good job with returning to the course! You’re a bit behind but you can still make it!” Actionable message: “Prioritize lecture videos and practice problems at the end of each unit.”
Table 4. Verification status by experimental group. Percentages reflect the proportion of learners who verified (paid for a certificate) out of all enrolled in that group.
Group | Verified n (%) | Total n
Group C (no dashboard) | 452 (15.7%) | 2886
Group A (no feedback) | 462 (15.7%) | 2952
Group B (with feedback) | 527 (18.1%) | 2907
Table 5. Logistic regression predicting verification (1 = paid for certificate). Group C (no dashboard) is the reference category.
Predictor | OR | 95% CI | p-Value
Intercept | 0.19 | [0.17, 0.21] | <0.001
Group A (no feedback) | 1.00 | [0.87, 1.15] | 0.990
Group B (with feedback) | 1.19 | [1.04, 1.37] | 0.012
Table 6. Descriptive statistics for engagement (total time in hours) by experimental group.
Group | n | Mean | SD | Median
Group A (no feedback) | 2952 | 10.53 | 28.09 | 0.48
Group B (with feedback) | 2907 | 12.64 | 32.54 | 0.46
Group C (control) | 2886 | 10.84 | 28.72 | 0.44
Table 7. One-way ANOVA and Tukey’s post hoc results for engagement (raw values). Engagement was measured as total time in hours.
Source | Sum Sq | df | F | p
Experimental group | 7.59 × 10³ | 2 | 4.26 | 0.014
Residual | 7.79 × 10⁶ | 8745 | |
Effect size: η² = 0.001
Post hoc Tukey HSD comparisons
Comparison | Mean diff (h) | 95% CI | p | Significant
Group A vs. Group B | 2.11 | [0.28, 3.94] | 0.019 | Yes
Group A vs. Group C | 0.31 | [−1.52, 2.14] | 0.917 | No
Group B vs. Group C | −1.80 | [−3.64, 0.04] | 0.056 | No (trend)
Table 8. One-way ANOVA and Tukey’s post hoc results for engagement (log-transformed values). Engagement was measured as total time in hours, log-transformed using log(1 + x).
Source | Sum Sq | df | F | p
Experimental group | 1.05 × 10¹ | 2 | 2.60 | 0.075
Residual | 1.77 × 10⁴ | 8745 | |
Effect size: η² = 0.001
Post hoc Tukey HSD comparisons
Comparison | Mean diff (log h) | 95% CI | p | Significant
Group A vs. Group B | 0.067 | [−0.020, 0.154] | 0.168 | No
Group A vs. Group C | −0.012 | [−0.099, 0.075] | 0.946 | No
Group B vs. Group C | −0.079 | [−0.166, 0.009] | 0.088 | No
Table 9. Descriptive statistics for performance (final grade, verified learners only) by experimental group.
Group | n | Mean | SD | Median
Group A (no feedback) | 462 | 0.351 | 0.360 | 0.245
Group B (with feedback) | 527 | 0.368 | 0.368 | 0.280
Group C (control) | 452 | 0.392 | 0.371 | 0.400
Table 10. One-way ANOVA and Tukey’s post hoc results for performance (final course grade, verified learners only).
Source | Sum Sq | df | F | p
Group | 0.392 | 2 | 1.46 | 0.233
Residual | 192.993 | 1438 | |
Effect size: η² = 0.002
Post hoc Tukey’s HSD comparisons
Comparison | Mean diff | 95% CI | p | Significant
Group A vs. Group B | 0.017 | [−0.038, 0.072] | 0.743 | No
Group A vs. Group C | 0.041 | [−0.016, 0.098] | 0.205 | No
Group B vs. Group C | 0.024 | [−0.031, 0.079] | 0.561 | No