1. Introduction
Learning Analytics (LA) has emerged as a global field of research and practice, attracting substantial investment and policy attention across higher education and K-12. Yet despite a global market for learning analytics, recent multi-institutional studies have revealed that classroom teachers rarely use data dashboards to guide their instruction [1,2]. This contrast exposes a critical paradox: education invests in data tools yet fails to integrate them into practice, suggesting that many LA tools measure activity but do not generate actionable insights [3,4].
This research contributes to the broader field of decision-making through LA in education. Learning analytics is an interdisciplinary approach that collects, analyses, and visualises large-scale learning data to offer real-time feedback and support ongoing engagement. It is a significant area of education research, integrating various disciplines to enhance the interaction between technology and humans in educational contexts. In recent years, ethical issues arising from the use of LA have garnered research interest. Critics argue that ethical considerations surrounding LA may hinder its effective implementation, thus limiting its potential benefits for students and educators [5,6]. However, the literature on addressing these issues in LA teaching interventions remains limited [7]. Another area of interest among educational researchers is the guidance and influence of LA on the learning process [8,9,10,11]. Furthermore, researchers suggest that LA adoption remains somewhat restricted [4,12,13,14,15]. Overall, the identified research gaps concern (a) specific strategies to address ethical challenges in LA, (b) measurable outcomes from successful LA implementation in educational settings, and (c) how to improve training for educators to adopt and effectively utilise LA in their teaching practices.
Therefore, the main problem statement is: “What conditions are required for LA interventions—designed around ethical safeguards and instructional guidance—to become pedagogically actionable and realistically adopted in formal educational settings?” Rather than attempting to cover the entire LA landscape exhaustively, this study focuses on three tightly scoped and complementary contributions: (a) ethics-by-design principles, (b) interface-based guidance effects, and (c) teacher adoption conditions. The research questions (RQs) addressed in this paper are as follows.
First, to address the gap in ethical challenges, we ask (RQ1): How can a framework for ethics by design be developed to address ethical concerns in learning analytics implementation?
Second, to respond to the limited evidence of measurable outcomes, we ask (RQ2): What is the impact of interface-based learning analytics guidance (strong versus minimal) on students’ self-regulated learning skills and academic performance?
Concerning the second research question, we propose two hypotheses.
Hypothesis 1. The self-regulated learning skills of the experimental group do not differ significantly from those of the control group.
Hypothesis 2. The academic performance, as measured by the course grade of the experimental group, does not differ significantly from that of the control group.
Third, to respond to the challenge of limited teacher training and adoption, we ask (RQ3): What factors encourage or hinder the adoption of learning analytics in educational environments?
Overall, this investigation aims to examine the ethical and pedagogical challenges presented by LA to better understand its impact on learning outcomes. Furthermore, the paper investigates the frontiers of educational technology by emphasising the actionability and adoption of LA, while also addressing related ethical issues. In addition, this study makes three main contributions: (a) a synthesised ethics-by-design framework for LA, derived from a meta-synthesis of 53 studies; (b) causal evidence from a quasi-experimental study showing that strong LA-based guidance improves SRL skills and academic performance compared with minimal guidance; and (c) a multi-layer adoption model for K-12 teachers, extending the Unified Theory of Acceptance and Use of Technology (UTAUT) with emotional, cultural, and human-centred constructs, validated through mixed methods (surveys, focus groups, and ethnographic observations).
This study builds on a series of prior empirical investigations conducted by the authors; however, it goes beyond reproducing earlier results. Specifically, while RQ1 and RQ2 draw on previously published empirical work, they are re-analysed and re-contextualised here to support a unified investigation of learning analytics actionability and adoption. The empirical data for RQ3 are newly collected, and the primary contribution of this article lies in the integrative synthesis of ethical design (RQ1), pedagogical effectiveness (RQ2), and adoption mechanisms (RQ3) into a single coherent socio-technical framework.
Concerning the structure of the paper, after Section 2 presents the methods, Section 3 reviews ethical challenges and introduces ethics by design, Section 4 examines the impact of interface-based guidance, and Section 5 analyses adoption factors. Section 6 discusses the findings and concludes.
2. Methods
To address the three research questions holistically, we adopted a multi-method design in which each component corresponds to one research question but also informs the others. Taken together, these approaches provide a cohesive picture of LA as both a technological and socio-cultural innovation. Specifically, we frame our study as a sequence of mutually informing phases: Phase 1 maps ethical issues and develops the ethics-by-design framework; Phase 2 tests whether LA can deliver actionable pedagogical impact through interface-based guidance; and Phase 3 investigates adoption conditions, with ethics and pedagogical values as inputs.
Regarding the methodology employed, this study analyses data collected from individual surveys [11,15,16]. We aim to answer the same research questions using an alternative methodology and to explore additional aspects of the original research questions. Specifically, we conduct a meta-synthesis of surveys conducted with mixed methods to deepen understanding, interpret findings, and generate new knowledge in the field of LA.
The research design used in this study aligns with the research questions: a literature meta-synthesis examined ethical issues, a quasi-experimental intervention evaluated the effects of strong versus minimal LA guidance on student outcomes, and a mixed-methods approach investigated factors influencing teachers’ adoption of LA. Each component is theoretically grounded in self-regulated learning theory and the UTAUT. The combination of quantitative and qualitative methods, including surveys, focus groups, and ethnographic observations, offers both breadth and depth, thereby enhancing the robustness and credibility of the findings. Notably, the intervention study extends beyond descriptive analysis by testing the causal effect of LA guidance on student learning outcomes and offering practical implementation insights.
Finally, ethical safeguards condition adoption, pedagogical effectiveness drives adoption, and adoption feeds back into the need for ethics by design. Hence, combining these approaches allows us to capture LA as a socio-technical ecosystem rather than a narrow intervention. Although each component could stand alone, their integration is necessary to address the core research problem of LA actionability and adoption, which cannot be understood through a single methodological lens.
Survey Designs and Instruments
This study employed multiple survey instruments, pre- and post-questionnaires, focus groups, and ethnographic observations. The following subsections describe the survey designs, item structure, participant characteristics, and analysis procedures, which together ensure transparency regarding the data collection and interpretation processes.
Instruments for RQ2—SRL Questionnaire (Pre/post)
To evaluate the impact of strong versus minimal LA guidance, a pre–post questionnaire measured students’ self-regulated learning (SRL) skills. Items used a 7-point Likert scale (1 = strongly disagree, 7 = strongly agree). Pre- and post-questionnaires were administered in weeks 1 and 13 of the semester. Internal consistency across subscales ranged from Cronbach’s α = 0.73 to 0.87 in the original instrument; similar reliability was observed in this study. The questionnaire, adapted from the SOL-Q-R, consisted of 42 items covering the following subscales:
Metacognitive regulation (e.g., “I plan my study tasks before starting the lesson.”).
Time management (e.g., “I structure my study time effectively.”).
Perseverance (e.g., “I continue working even when the material is difficult.”).
Help-seeking (e.g., “I reach out for assistance when I do not understand a concept.”).
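The reliability figures above (Cronbach’s α = 0.73–0.87) can be reproduced for any subscale from the raw item responses. The sketch below is a minimal NumPy implementation; the simulated 7-point Likert responses are illustrative assumptions, not the study’s data.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) response matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of respondent totals
    return k / (k - 1) * (1 - item_vars / total_var)

# Hypothetical 7-point Likert responses for a 4-item subscale:
# a shared latent trait plus item-level noise, rounded and clipped to 1..7
rng = np.random.default_rng(0)
latent = rng.normal(4, 1.2, size=(100, 1))
responses = np.clip(np.rint(latent + rng.normal(0, 0.8, size=(100, 4))), 1, 7)
print(round(cronbach_alpha(responses), 2))
```

Items that track the same latent trait closely, as here, yield an alpha toward the upper end of the reported range.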
Regarding participants and procedure, 93 undergraduate students participated: 47 in the strong-guidance (SG) condition and 46 in the minimal-guidance (MG) condition. All participants were enrolled in the same degree programme, attended the same online course, and completed identical assignments. The only difference between groups was the level of LA-based guidance. ANCOVA was used to compare post-test SRL scores, with pre-test scores as covariates. Independent t-tests were used to compare course performance between groups.
Instruments for RQ3—Teacher Survey based on UTAUT Questionnaire
The adoption study used a structured questionnaire adapted from the Unified Theory of Acceptance and Use of Technology (UTAUT). The survey was administered during a national webinar titled “Analysis of Learning Data in Education,” which attracted over 70 K-12 teachers from mathematics, literature, computer science, and economics. The instrument consisted of Likert-scale items grouped into the following constructs:
Performance expectancy (e.g., “LA can help me improve my teaching strategies.”).
Effort expectancy (e.g., “Learning to use LA tools would require significant effort.”).
Facilitating conditions (e.g., “My school provides adequate technical support for new digital tools.”).
Social influence (e.g., “My colleagues would encourage me to use LA.”).
Emotions (motivation, anxiety, technophobia).
Self-efficacy (e.g., “I feel confident in interpreting learning data.”).
Human-centredness (e.g., “LA tools should be designed with teacher participation.”).
Focus Groups and Ethnographic Study
Five focus groups were conducted immediately after the webinar, each lasting approximately 45–60 min. Guiding prompts included: “What motivates you to adopt LA in your teaching?” and “What barriers prevent you from applying LA in your school?” The ethnographic study included observations and interviews with five teachers. Observations focused on everyday teaching practices, communication channels, and attitudes toward LA. Interviews explored teachers’ values, perceived risks, technical readiness, and expectations for future use.
Regarding analysis procedures, qualitative data from focus groups and ethnographic interviews were analysed using thematic analysis, following open coding and iterative theme refinement. Quantitative survey data were analysed descriptively and used to triangulate the qualitative results, resulting in the combined adoption model presented in Figure 1. Where relevant, previously published datasets are reused to support cross-question synthesis rather than result replication, enabling theoretical integration across the three research questions.
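As an illustration of the tallying step in such a thematic analysis, the sketch below counts open codes per theme; the excerpts and theme labels are invented for illustration, not drawn from the study’s transcripts.

```python
from collections import Counter

# Hypothetical (open code, theme) pairs assigned to focus-group excerpts
coded_excerpts = [
    ("needs hands-on training", "facilitating conditions"),
    ("fear of complex dashboards", "effort expectancy"),
    ("wants evidence of learning impact", "performance expectancy"),
    ("colleagues already experimenting", "social influence"),
    ("worried about pupil privacy", "ethics"),
    ("no time to learn new tools", "effort expectancy"),
]

# Tally how often each theme is evidenced, for triangulation against the survey
theme_counts = Counter(theme for _, theme in coded_excerpts)
for theme, n in theme_counts.most_common():
    print(f"{theme}: {n}")
```

Such frequency tables do not replace interpretive refinement of themes, but they make the triangulation with descriptive survey statistics explicit.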
3. Background—Ethical Challenges in Learning Analytics
To address the first research question (How can a framework for ethics by design be developed to address ethical concerns in learning analytics implementation?), a review was conducted to map the critical concerns across the dimensions of LA, particularly its ethics. In the current age of big data, social networks, and cloud computing, every piece of data is recorded, leaving a digital footprint that increases the volume and variety of educational learning data [17]. Ethics is a framework of moral principles concerning what is right for the individual and the community [18]; it can also be defined as a set of complex rules that vary across cultures [19]. The literature addresses numerous technological and pedagogical controversies that educational stakeholders encounter in LA. This section analyses these antagonisms by emphasising the diverse perspectives that must be considered.
For the review [16], an extensive survey of the LA literature was conducted to understand and document current trends in LA and the ethical issues that have arisen. Regarding the article selection process, after screening more than 500 articles that initially met the selection criteria and examining their abstracts and results, we selected a corpus of 53 articles covering the ethical principles of LA. Finally, a bottom-up comparative analysis of the selected literature resulted in a classification scheme that describes the dimensions of LA: Object of analysis, Backend data processing technology, Target of intervention, Stakeholders, and Ethics. To answer the first research question, we present the results as a list of instructional values related to data management goals, including the significant ethical issues of LA highlighted in the articles studied.
Labelling and Algorithmic Fairness. Although data-driven instruction offers the benefits of enhancing learning outcomes and reducing student attrition, it raises concerns about labelling learners, specifically the risk of students being unfairly stereotyped. Scholes [20] claims that the emphasis on self-regulation and personalisation may inadvertently overlook the diverse needs and learning styles of all students.
Therefore, instructors must ensure that their feedback does not discourage or manipulate students. Additionally, any LA intervention should be implemented according to a specific instructional design framework, such as self-regulated learning theory [21].
Errors in data analysis arise for various reasons, such as misinterpretation of data and the use of misleading models [22]. Systems depend on data, so incomplete, noisy, or unrepresentative data or models can cause incorrect decisions. Standard statistical methods may also be inaccurate when applied to unstructured textual data. As a result, from a teaching perspective, the outcomes might be inconsistent for learners.
Data Privacy and Ownership. Privacy is a fundamental human need; however, a significant challenge posed by big data is its global and persistent nature. In the past, stakeholders addressed privacy through trust; however, some stakeholders lack mutual trust in LA. In addition, many educational institutions do not control the storage of trainee data because it is managed outside their institutions, or even outside the country where they are based, where different laws may apply [23]. Furthermore, views on privacy vary across cultures, and different countries have distinct perceptions of what constitutes ethical behaviour.
Data ownership is a complex legal and ethical issue in data management. The primary data belongs to its creator; in practice, however, the processed data no longer belongs to the learner. Ownership pertains to the data collected, the analytics used, and the resulting output of those analytics. Finally, Hoel et al. [24] referred to the “learners’ right to be forgotten,” which relates to minimising the data and limiting its use.
Transparency and Duty to Act. Consent is an ongoing process in which the individual permits data collection and makes decisions based on the outcomes of data processing. Arnold and Sclater [25] stated that educational institutions have an ethical duty to obtain the highest quality educational data to ensure they provide optimal support. This suggests that allowing learners to opt out could itself be unethical, as opting out might create significant gaps in the dataset. Finally, Herder and Kawase [26] emphasised that knowledge and confidentiality are essential prerequisites for learners to provide their informed consent for data collection.
The cost of studying is high due to tuition fees and the time and effort required. Therefore, from both managerial and pedagogical perspectives, educational institutions should actively support and motivate their students [20]. It is unethical to neglect the predictive value of managing learning data (e.g., performance) [27]. Additionally, Prinsloo and Slade [17] argued that educational institutions have an ethical duty to act when instructional data highlight the need for instructional interventions.
These ethical safeguards set the stage for testing whether LA can actually deliver actionable pedagogical value. Next, the role of LA-based guidance in the educational process and the factors that influence or impede its adoption in everyday educational practice are examined.
4. Actionable Guidance via Learning Analytics
Regarding the second research question (What is the impact of interface-based learning analytics guidance on students’ self-regulated learning skills and academic performance?), several studies [3,28] have reported a slight improvement in learning outcomes through LA. Our focus is on LA’s contribution to improved learning outcomes, and the specific question we address is whether strong (SG) or minimal (MG) guidance [7,11] is more effective in practice. In particular, we emphasise the role of SG versus MG based on LA in the development of self-regulated learning skills and learning performance. Our intervention involved students in two comparative conditions. The MG group followed a low-prompting (reflection) approach that informed participants through visualised information. The SG group followed a high-prompting approach: the instructor implemented an intervention protocol that included posting a traffic-signal message indicating each student’s performance and facilitating online interactions among students and the instructor. The research objective is to investigate whether minimal and strong guidance based on the LA interface have the same impact on learning outcomes.
Participants and instructional context. The research [11] was carried out as part of a seventh-semester undergraduate course offered online by an IT department at a technological education institution in Greece. We chose this course because of its high dropout and failure rates in previous semesters. According to our design, 93 students participated: 47 formed the experimental group, which received the LA intervention with SG, and 46 formed the control group, which served as a baseline and received the LA intervention with MG.
Results (RQ2)
Pre-analysis procedures. Before analysing group differences, all SRL subscale scores were screened for normality and internal consistency. Reliability was acceptable across scales (α = 0.73–0.87). Before conducting the ANCOVA, the analysis’s assumptions were examined. Independence of observations was ensured by the study design, as participants were assigned to only one experimental condition and contributed a single set of measurements. Normality of residuals was assessed by visual inspection of Q–Q plots and the Shapiro–Wilk test, which indicated no substantial deviations from normality. Homogeneity of variances was evaluated using Levene’s test and was found to be satisfactory. In addition, the assumption of homogeneity of regression slopes was examined by testing the interaction between the covariate (pre-test scores) and group membership, which was not statistically significant. These results indicate that the ANCOVA assumptions were adequately met. ANCOVA was then applied to compare post-intervention SRL scores between groups while controlling for pre-intervention measurements. Course performance was analysed using independent samples t-tests.
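The assumption checks and the ANCOVA described above can be scripted end to end. The sketch below uses simulated pre/post scores (the study’s dataset is not public), statsmodels for the models, and SciPy for the Shapiro–Wilk and Levene tests; the variable names and simulated effect sizes are illustrative assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm
from scipy import stats

rng = np.random.default_rng(1)

# Simulated pre/post SRL scores for two conditions (SG n=47, MG n=46);
# the SG advantage built in here is illustrative, not the study's data.
n_sg, n_mg = 47, 46
group = np.array(["SG"] * n_sg + ["MG"] * n_mg)
pre = rng.normal(4.5, 0.8, n_sg + n_mg)
post = pre + np.where(group == "SG", 0.6, 0.1) + rng.normal(0, 0.5, n_sg + n_mg)
df = pd.DataFrame({"pre": pre, "post": post, "group": group})

# Homogeneity of regression slopes: the pre x group interaction should be n.s.
slopes = smf.ols("post ~ pre * C(group)", data=df).fit()
interaction_p = slopes.pvalues["pre:C(group)[T.SG]"]

# ANCOVA: post-test score by condition, controlling for pre-test score
ancova = smf.ols("post ~ pre + C(group)", data=df).fit()
group_p = ancova.pvalues["C(group)[T.SG]"]

# Residual normality (Shapiro-Wilk) and homogeneity of variances (Levene)
shapiro_p = stats.shapiro(ancova.resid).pvalue
levene_p = stats.levene(df.loc[df.group == "SG", "post"],
                        df.loc[df.group == "MG", "post"]).pvalue

# Partial eta squared for the group effect, from Type II sums of squares
aov = anova_lm(ancova, typ=2)
eta_p2 = aov.loc["C(group)", "sum_sq"] / (
    aov.loc["C(group)", "sum_sq"] + aov.loc["Residual", "sum_sq"])

print(f"group p = {group_p:.4f}, partial eta^2 = {eta_p2:.2f}")
```

Fitting the interaction model first mirrors the reported workflow: only if the pre × group term is non-significant is the additive ANCOVA model interpretable.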
Hypothesis 1. The self-regulated learning skills of the experimental group do not differ significantly from those of the control group.
Regarding Hypothesis 1, ANCOVAs were conducted on the self-regulated learning skills scores from the post-questionnaire, with the pre-questionnaire scores as covariates. Statistically significant differences (p < 0.01) between the experimental and control conditions were found across the five subscales. The findings indicated that students in the experimental group scored significantly higher on the metacognitive activity subscales (before and after the learning process), as well as on time management, perseverance, and help-seeking.
Hypothesis 2. The performance, as measured by the course grade of the experimental group, does not differ significantly from that of the control group.
Regarding Hypothesis 2, the mean course grade of the experimental group (M = 7.22, SD = 2.71) was higher than that of the control group (M = 5.28, SD = 3.95). An independent t-test comparing group scores showed a statistically significant difference (t = 2.75, p = 0.007). Overall, the null hypothesis is rejected.
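The reported t-test can be sanity-checked from the published summary statistics alone; a short SciPy sketch (the small difference from the reported t = 2.75 presumably reflects rounding in the published means and SDs):

```python
from scipy import stats

# Reported summary statistics: experimental (SG) vs control (MG) course grades
res = stats.ttest_ind_from_stats(
    mean1=7.22, std1=2.71, nobs1=47,   # SG group
    mean2=5.28, std2=3.95, nobs2=46,   # MG group
    equal_var=True,                    # pooled-variance t-test
)
print(round(res.statistic, 2), round(res.pvalue, 3))  # close to t = 2.75, p = 0.007
```

Setting `equal_var=False` gives the Welch variant, which lands in the same neighbourhood; either way the group difference is significant at p < 0.01.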
Discussing the above findings, this research highlights intervention strategies that improve learning outcomes and provides evidence that the SG mode is more effective than the MG mode. Specifically, the experimental group achieved higher scores than the control group, in line with previous studies [29] reporting that learners tend to perform better when they receive well-targeted LA interventions.
Furthermore, the results showed that the degree of mentoring significantly influenced participants’ development of self-regulated learning skills [30]. The effect size (ηp²) indicated that variations in skill development were attributable to the level of mentoring, aligning with related studies [28]. In conclusion, since the only difference between the groups was the level of mentoring, we suggest that effective mentoring leads to better learning outcomes and, consequently, to more actionable learning.
Overall, this study demonstrated that the SG group outperformed the MG group in enhancing learning outcomes. As both groups experienced the same instructional environment, we attribute this difference to the level of instruction. Having demonstrated learning impact, we next examine whether teachers are willing and able to adopt such tools in practice. The following section examines K-12 teachers’ perceptions and readiness for adopting LA.
5. Teachers’ Perceptions of Learning Analytics Adoption
Regarding the third research question (What factors encourage or hinder the adoption of learning analytics in educational environments?), this section synthesises two studies [15,31] that focus on LA adoption in K-12 education. The introduction of LA in schools is considered necessary for social, economic, and pedagogical reasons [14]. Despite increasing interest in implementing LA in schools, teachers are often characterised as technophobic and sceptical about adopting it [13]. Few studies have provided empirical evidence on the educational challenges of adopting LA from classroom teachers’ perspectives; in particular, few have applied the unified theory of acceptance and use of technology (UTAUT) [32] to investigate teachers’ adoption of LA. Guided by the UTAUT [33], we conducted a survey [15] using a questionnaire and held five focus group interviews with K-12 teachers. We explored their perceptions of LA adoption, followed by an ethnographic study with observations and interviews.
A holistic theoretical framework can provide a structure for understanding the factors underlying LA adoption. We draw evidence from the UTAUT model, which builds on the theory of reasoned action, TAM, the theory of planned behaviour, and innovation diffusion theory, as a framework that sheds light on how teachers perceive LA adoption. Finally, by extending the UTAUT acceptance model to the educational setting, we link a well-established information systems theoretical framework to empirical studies in education.
Method. The survey was carried out in Greece through a webinar titled “Analysis of Learning Data in Education.” The participants included over 70 teachers specialising in mathematics, literature, and computer science. Additionally, the second (ethnographic) study involved five teachers of mathematics, literature, and economics (the survey was content-independent). In ethnographic research, there is a history of studies employing small-N designs, in which each teacher is treated as a replicated unit [34]. Furthermore, ethnographic qualitative research investigates teachers’ beliefs and values in their daily practices. The data collection instruments used to gauge teachers’ perceptions included a research questionnaire and five focus groups conducted after the seminar. Moreover, in our ethnographic research, we utilised methods such as observation, interviews, and field notes.
Results (RQ3)
Survey responses (N = 73) were analysed descriptively and used to identify UTAUT-related patterns. Qualitative data from five focus groups and five ethnographic interviews were analysed using thematic analysis, with two researchers independently coding transcripts to enhance consistency and validity. Observational notes were used to triangulate self-reported perceptions.
Survey results. The survey questionnaire focused on teachers’ perceptions regarding the UTAUT model. Regarding perceived usefulness (performance expectancy), participants believe that LA enhances students’ self-regulated learning skills, particularly metacognitive skills, time management, help-seeking, and perseverance. Additionally, they stated that adopting LA could improve outcomes, including attendance, performance, and satisfaction. Lastly, the most valued items by respondents were the obligation to act and the importance of understanding learning.
Focus group results. The questionnaire directed the focus groups in exploring teachers’ motivations for adopting LA. The results (theory-centred themes with teacher-centred concepts in parentheses) are presented as follows:
Performance expectancy (participation, performance, and self-reflection);
Effort expectancy (workload, complexity);
Feelings (technophobia, anxiety, satisfaction);
Future use intentions (scepticism, guidance);
Facilitating conditions (training, technical infrastructure);
Social influence (community of practice, professional development);
Anthropocentricity (sense-making);
Self-efficacy (data literacy skills);
Data culture (resistance, comfort zone).
Ethnographic interview results. The interviews captured participating teachers’ readiness for adopting LA. The results—comprising theory-centred themes with teacher-centred concepts in parentheses—are presented as follows:
Performance expectancy (time management, awareness);
Effort expectancy (workload);
Feelings (anxiety, scepticism, satisfaction, expectation);
Future intentions for use (training, guidance, impact);
Human-centred LA (well-being, co-design, explainability);
Data culture (resistance, age differentiation);
Ethics (privacy, surveillance, trust, algorithm accuracy);
Social context (facilitating conditions, communication, professional development).
Observation results. During the ethnographic study, we engaged with the school community and documented numerous observations, focusing on teachers’ perceptions and guided by the UTAUT model. Regarding perceived usefulness (performance expectancy), participants believed that LA improved students’ persistence, suggesting that adopting LA could enhance student engagement. Specifically, Teacher 1 (T1) appeared interested in LA and requested training; she seeks professional development and uses LA for group work activities. Teacher 2 utilises LA for absence management interventions, Teacher 3 shows a reserved interest, and Teacher 4 employs LA for communication purposes.
Mixed-method results. Figure 1 illustrates the classification model for the adoption process. The boxes display the (theory-centred) constructs along with some overlapping (participant-centred) concepts. Specifically, the green boxes are derived solely from the first study (questionnaire and focus groups). In contrast, the blue boxes come exclusively from the second/ethnographic study (observations and ethnographic interviews). Lastly, the white boxes are shared between both studies.
Specifically, Figure 1 synthesises quantitative survey findings and qualitative evidence (focus groups and ethnographic observations) into a theory-building classification model extending UTAUT with emotional, cultural, ethical, and human-centred constructs. Arrows indicate theoretically inferred influences and empirically convergent relationships, not statistically tested causal paths. The model is intended as a conceptual and integrative framework rather than a structural equation model.
Building on the above findings, we examine LA adoption from the perspective of the following constructs, thereby broadening the UTAUT framework.
Facilitating Conditions and Self-efficacy. Teachers have reported a lack of confidence in their data literacy skills [34,35] and express a need for practical training within schools to make LA more actionable. Additionally, the teaching community has yet to develop a shared understanding of LA, as it emphasises the shift from human mediation (agency) to algorithmic mediation.
Expected Performance. According to our research, most teachers recognise LA’s usefulness (perceived relative advantage). In addition, teachers reported several benefits, including providing early support to students at risk of failure and improving teaching strategies (target of intervention) [36,37]. Furthermore, teachers firmly believe that LA enhances students’ metacognitive skills (effectiveness of intervention). Conversely, some educators feel that the data do not adequately capture the subtle differences in teaching.
Human-centred Learning Analytics and Expected Effort. The success of LA cannot be judged solely by technical metrics; it must also be evaluated based on its effectiveness within educational institutions [38,39]. Anthropocentricity is a characteristic of systems designed by identifying critical stakeholders and their relationships [40]. Involving teachers in the design of LA can be complex and time-consuming. However, engaging them through participatory and co-creation methods (shared experience) can transform an impersonal prototype into a successfully adopted system. Therefore, shifting LA from something imposed on teachers to a collaborative effort involving them exemplifies a human-centred approach [41]. Finally, we emphasise the importance of human-centred learning analytics, highlighting student and teacher participation, system explainability, and ethical safeguards such as fairness and privacy, which add depth and societal relevance.
Usability and time were taken into account when evaluating LA’s usefulness relative to its perceived complexity [42]. A common challenge faced by some teachers is workload and time constraints, which lead to reluctance to adopt new technologies [43]. Lastly, we consider the view that the complexities of educational settings might hinder the adoption of LA, making its benefits less clear.
Data Culture and the Intention to Use Learning Analytics. The implementation of analytical methods is not an inherent part of school culture, as analytics originated outside educational contexts. When LA enters the school community, it therefore represents an externally driven need for change, and teachers face the additional challenge of accepting the necessity of change and adapting the school’s environment to align with this new culture [44]. Ultimately, we argue that the challenge of cultural change in educational institutions is a fundamental barrier that diminishes the potential benefits of LA.
Feelings and Social Influence. The most commonly reported positive emotions (psychological factors) were motivation and satisfaction, while the prevalent negative emotions were irritation and confusion. A comparison of the findings from the questionnaire, focus groups, and ethnographic interviews reveals notable similarities: encouragement, confidence, and perseverance recur across sources (perceived compatibility). By contrast, some teachers reported dissatisfaction and anxiety.
Additionally, teachers are motivated to use LA to enhance their professional profile and social influence [
14], which pertains to how their colleagues view them. Furthermore, interpersonal communication channels are crucial for spreading LA adoption through participatory design (shared experience). Lastly, we observe that both teachers’ lack of training and ethical considerations are significant barriers to the effective implementation of LA.
Overall,
Figure 2 illustrates the conceptual interdependence between ethics, pedagogical impact, and adoption in learning analytics. The bidirectional arrows do not imply strict causal relationships but rather reciprocal influences and feedback mechanisms. Ethical safeguards constrain and shape pedagogical interventions, while demonstrated pedagogical impact reinforces trust and willingness to adopt learning analytics. Conversely, adoption practices feed back into ethical considerations and pedagogical refinement. The figure should therefore be read as a conceptual model highlighting mutual dependencies rather than a causal pathway.
6. Discussion and Conclusions
The three dimensions examined in this study—ethics, pedagogical impact, and adoption—are not treated as independent outcomes but as interrelated components of a socio-technical system. Ethical safeguards (RQ1) serve as enabling conditions for trust and legitimacy, prerequisites for both pedagogical effectiveness and institutional acceptance of learning analytics. Demonstrating pedagogical impact through actionable guidance (RQ2) provides empirical justification for adoption, as stakeholders are unlikely to adopt systems that do not show measurable educational value. Finally, adoption conditions (RQ3) feed back into both ethics and pedagogy by shaping how learning analytics are interpreted, enacted, and sustained in real-world contexts.
Our findings suggest that ethics by design (RQ1), effective guidance (RQ2), and adoption conditions (RQ3) are mutually reinforcing dimensions: without ethical safeguards, adoption is undermined; without demonstrated pedagogical value, adoption stalls; and without adoption, ethical and pedagogical advances remain theoretical. The multi-method design, therefore, allows us to capture the interdependence of these factors and propose a more holistic model of LA actionability.
In RQ2, we asked whether strong vs. minimal guidance affects SRL skills and performance. Our findings show clear gains under strong guidance. This extends earlier correlational research [
28] by providing causal evidence and supports self-regulated learning theory, which predicts that external scaffolds promote metacognitive regulation. However, unlike previous studies, we also found improvements in perseverance and help-seeking, suggesting that LA-based interventions can influence both motivational and cognitive aspects of SRL.
Contrasting the groups’ perceptions, secondary school teachers responded more positively regarding their readiness to adopt LA. Secondary students’ greater cognitive maturity, relative to primary students, may enable more innovative teaching approaches. Computer science teachers responded even more positively regarding their expectations of adopting LA; their stronger ICT and data literacy skills give them greater resources and expertise for innovative teaching techniques. In this sense, we conclude that early adopters who invest time in integrating technology into their teaching are more likely to adopt innovative techniques and serve as agents of change.
Consistent with UTAUT and studies on technology adoption, we found that the main factors encouraging the adoption of LA were (a) added value that makes teachers’ monitoring practices more structured (performance expectancy), (b) emotions such as motivation, satisfaction, and confidence, and (c) social influence. Conversely, factors inhibiting LA adoption included (a) effort expectancy in terms of time and effort, (b) low self-efficacy due to a lack of data-handling skills, (c) facilitating conditions such as training challenges and technological infrastructure issues, and (d) cultural change, notably the lack of recognition of LA’s added value. Furthermore, ethical concerns related to LA may outweigh its potential educational advantages. Overall, the findings are somewhat contradictory, reflecting both scepticism and a lack of confidence alongside expectations and enthusiasm for new opportunities. We identified considerable potential for LA to improve learning experiences, but also significant challenges, particularly in K-12 environments.
Our experimental results (RQ2) demonstrate that the mere collection and visualisation of learning data (Minimal Guidance) is insufficient to improve learning outcomes. Significant improvements in SRL and academic performance were achieved only when data were transformed into proactive, personalised guidance (Strong Guidance). Therefore, we conclude that the pedagogical design of the intervention—specifically, the level and quality of guidance—is a more critical factor for enhancing teaching and learning than the volume of data alone.
Based on our mixed-methods findings (RQ3), we conclude that two core perceptions primarily drive teacher adoption of LA: (1) its perceived pedagogical usefulness (Performance Expectancy) for tasks like early intervention, and (2) the extent to which tools are designed with and for teachers (Human-Centredness). Qualitative data reveal that without co-design, even useful tools may be rejected as top-down impositions. Conversely, the most significant barriers are not simply technical but socio-cultural: the perceived increase in workload (Effort Expectancy) and a deep-seated resistance to data-driven cultural change within schools, which emerged as a dominant theme in our ethnographic work.
The ethics-by-design framework (RQ1) is not merely a theoretical recommendation but a prerequisite for adoption. Our qualitative findings (RQ3) explicitly show that teachers’ ethical concerns—regarding privacy, surveillance, and algorithmic fairness—are not abstract worries but concrete barriers to adoption. Teachers expressed scepticism and anxiety about tools that lack transparency. Therefore, we conclude that addressing these ethical issues through participatory design (e.g., co-creating consent protocols, explainable dashboards) is not ancillary but central to overcoming adoption resistance.
While our quasi-experiment was conducted in higher education and the adoption study focused on K-12 teachers, the interdependence of ethics, pedagogy, and adoption appears to be a foundational principle across educational contexts. However, the relative weight of specific barriers may differ. For instance, ‘data culture’ resistance may be more pronounced in K-12 settings with established traditional practices, whereas in higher education, ‘facilitating conditions’ such as institutional support may be a more salient factor. Future research should test this model across levels.
Next, we outline key points for research and practice derived from these results.
Big data alone cannot enhance teaching; more research into the pedagogical aspects of LA is necessary.
Ethics acts as a mediating factor in the impact and adoption of LA.
When strong guidance is implemented through LA, the results indicate improved final achievement and enhanced self-regulated learning skills among students.
Students favour the participatory and human-centred design of LA.
The limited adoption of LA may stem from insufficient teacher training rather than from LA’s effectiveness.
The factors that promote the adoption of LA are performance expectancy, anthropocentricity, social influence, and emotions.
The factors that hinder the adoption of LA are effort expectancy, facilitating conditions, ethical issues, and cultural change.
Overall, stakeholder co-design steps, consent artefacts, transparency notices, and bias checks could improve system explainability and strengthen teachers’ trust and awareness. Below are summary tables (
Table 1,
Table 2,
Table 3 and
Table 4) that synthesise the article’s main findings, providing a structured overview.
Consequently, these findings suggest that teacher adoption of LA is not merely a rational cost–benefit calculation (as assumed in UTAUT), but is shaped by affective factors (confidence, anxiety) and cultural norms. Extending adoption models to include these dimensions could yield a more realistic account of LA uptake in schools.
7. Future Research
The broader aim of this research was to explore the ethical evaluation, learning impact, and adoption of learning analytics in education. Specifically, LA is situated within an instructional design framework that emphasises ethics-by-design and, together with appropriate instructor guidance, improves learning outcomes (such as performance, satisfaction, engagement, and self-regulatory skills), serving as a human-centred instructional tool. Our mixed-methods research demonstrated that LA, when supported by suitable guidance and adoption models (UTAUT) that promote participatory and co-design approaches involving stakeholders, can facilitate actionable use of LA by learners, teachers, and institutions across K-12 and higher education. Moreover, the proposed adoption model provides a theoretically grounded basis for future Structural Equation Modelling-based testing, particularly to examine mediation effects between human-centredness, ethical concerns, and intention to adopt LA.
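Before full Structural Equation Modelling, the mediation hypothesis sketched above can be prototyped with a simple regression-based decomposition. The snippet below is an illustration only: it simulates survey-style scores for three hypothetical constructs (human-centredness, ethical concern, intention to adopt) and estimates the total, direct, and indirect effects; all variable names and effect sizes are assumptions for demonstration, not findings of this study.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 500

# Simulated standardised survey scores (hypothetical constructs)
human_centredness = rng.normal(0, 1, n)
ethical_concern = -0.5 * human_centredness + rng.normal(0, 1, n)   # mediator
intention = 0.3 * human_centredness - 0.4 * ethical_concern + rng.normal(0, 1, n)

def ols(y, predictors):
    """Least-squares coefficients with an intercept column prepended."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta  # [intercept, b1, b2, ...]

# Path c: total effect of human-centredness on intention
c = ols(intention, [human_centredness])[1]
# Path a: effect of human-centredness on the mediator
a = ols(ethical_concern, [human_centredness])[1]
# Paths c' and b: predictor and mediator together
_, c_prime, b = ols(intention, [human_centredness, ethical_concern])

indirect = a * b  # effect mediated through ethical concern
print(f"total={c:.2f} direct={c_prime:.2f} indirect={indirect:.2f}")
```

For ordinary least squares the decomposition total = direct + indirect holds exactly, which makes the sketch a useful sanity check before estimating the full latent-variable model with SEM software.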
In conclusion, interesting initial hypotheses can serve as valuable inputs for future research, guided by our insights. Future work in the examined research areas could build upon these findings and conclusions to further investigate the conditions necessary for the adoption of LA. Specifically, it could explore the attitudes of acceptance and resistance among secondary school students towards implementing LA through a multi-site ethnographic study. Additionally, we might examine whether the limited adoption of LA is due to a lack of institutional support rather than the technology’s effectiveness. Furthermore, future research could explore how ethical concerns might be balanced with the educational benefits of LA, potentially increasing its practical adoption. Finally, it is important to investigate how professional development activities can improve teachers’ competence with LA tools [
45].
While this study provides valuable insights, its scope is limited to data collected in Greece. This focus was chosen because Greece represents a context where LA adoption is still emerging, offering a valuable opportunity to study early-stage implementation challenges. However, institutional and cultural factors—such as data governance frameworks, teacher training systems, and prevailing attitudes toward technology—vary significantly across educational systems. As a result, our findings should be interpreted with caution when transferred to other contexts. Future studies should replicate and extend this research in diverse cultural and institutional settings to examine whether the factors we identified (e.g., performance expectancy, human-centred design, ethical safeguards) hold consistent relevance internationally.
Furthermore, the intervention was delivered via a single online course in Greek. Future studies should address generalisability (across different disciplines, institutions, and delivery methods) and outline a replication plan (e.g., cluster-randomised or stepped-wedge across sections). Subsequently, replication studies could draw on datasets large enough to qualify as big data.
While our study focused on actionable insights from traditional analytics, the growing field of Artificial Intelligence (AI) offers transformative potential to address the challenges of adoption, actionability, and personalisation we have identified. Specifically, AI advances LA beyond descriptive and predictive analytics towards prescriptive and adaptive systems capable of creating truly personalised learning experiences at scale. In this context, machine learning techniques such as classification, clustering, semantic analysis, and summarisation (including LSA, LDA, and keyword extraction methods) can be employed for student profiling, evaluation, personalised feedback, participation, and sentiment analysis (e.g., satisfaction and anxiety). Therefore, the impact of AI on personalised educational analytics could be further explored.
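As a minimal illustration of the simplest technique listed above, the sketch below performs frequency-based keyword extraction over a few invented forum posts. The posts, the stop-word list, and the `keywords` helper are all hypothetical; a production pipeline would use the richer methods named in the text (e.g., LSA, LDA, or trained sentiment models).

```python
from collections import Counter
import re

# Hypothetical forum posts from an online course (illustrative data only)
posts = [
    "The dashboard feedback helped me plan my study schedule",
    "I feel anxious about the quiz deadline and the dashboard alerts",
    "Great feedback on my essay, the personalised hints were useful",
]

# A tiny illustrative stop-word list
STOP = {"the", "and", "my", "i", "me", "on", "about", "were", "feel", "a", "to"}

def keywords(texts, top_n=5):
    """Frequency-based keyword extraction over a small corpus."""
    tokens = []
    for text in texts:
        tokens += [w for w in re.findall(r"[a-z]+", text.lower()) if w not in STOP]
    return Counter(tokens).most_common(top_n)

print(keywords(posts))
```

Even this toy version surfaces recurring concerns ("dashboard", "feedback") that a teacher-facing tool could flag; terms such as "anxious" hint at how sentiment analysis could complement keyword counts.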
Our study highlighted a key issue: a primary obstacle to adopting LA is teachers’ lack of self-efficacy and data literacy skills, along with a need for practical training. A future study will examine how a theory-based professional development programme affects teachers’ competence, confidence, and willingness to use LA tools in their practice.
Future research should involve key educational stakeholders by examining the findings in various social contexts. We believe that the effectiveness of LA might be overstated due to a lack of comprehensive studies on its long-term impact across different educational environments. Future research could also investigate the types of support that enhance teachers’ external and internal motivations to adopt LA. Additionally, while this study offers an analysis of teachers’ perceptions, it lacks longitudinal evidence on how students experience and respond to LA over time. Incorporating student attitudes and acceptance would improve the thoroughness of the adoption analysis. Finally, meta-analyses should examine the implementation and validation stages to assess the sustainability of the LA adoption process.