Article

Digital Learning Hubs: Evaluating Their Role in Fostering Complex and Computational Thinking

by Inés Alvarez-Icaza 1,2, Luis Magdiel Oliva-Córdova 3, Rasikh Tariq 1 and José Luis Martín-Núñez 4,*

1 Institute for the Future of Education, Tecnologico de Monterrey, Mexico City 64849, Mexico
2 School of Architecture, Art and Design, Tecnologico de Monterrey, Mexico City 14380, Mexico
3 Faculty of Humanities, University of San Carlos of Guatemala, Guatemala City 01012, Guatemala
4 Institute for Education Sciences, Universidad Politécnica de Madrid, 28040 Madrid, Spain
* Author to whom correspondence should be addressed.
Future Internet 2026, 18(1), 55; https://doi.org/10.3390/fi18010055
Submission received: 21 October 2025 / Revised: 10 December 2025 / Accepted: 29 December 2025 / Published: 19 January 2026

Abstract

Digital Learning Hubs and educational repositories are key tools for offering innovative educational experiences in the context of the digital transformation of education. However, their evaluation has often been approached from fragmented perspectives, limiting a comprehensive understanding of their role as integrated digital learning ecosystems. This study aimed to evaluate the functionalities, usability, and accessibility of 25 digital platforms through 1519 observations, conceiving these dimensions as interconnected components that support the development of complex and computational thinking, and testing five hypotheses related to their performance. A quantitative descriptive–correlational approach was employed. Internal and external functionalities were assessed using a specifically designed instrument, while usability was analyzed according to ISO 9241: Ergonomics of human–system interaction (efficiency, effectiveness, and user satisfaction), and accessibility was evaluated against WCAG 2.1 standards. The results showed that platforms with higher scores in internal functionalities, particularly personalization and evaluation, exhibited a positive correlation with higher usability metrics, including efficiency and user satisfaction. Accessibility limitations and weaknesses in external functionalities were identified as relevant factors affecting platform performance, with recurring shortcomings in the Operable and Understandable principles. In addition, the limited availability of courses explicitly focused on complex and computational thinking revealed a gap in specialized training within the analyzed platforms. From a research perspective, this study contributes a multi-criteria evaluation framework and comparative empirical evidence that clarifies the relationships between platform functionality, usability, accessibility, and the development of complex and computational thinking. These findings support the development of Digital Learning Hubs as balanced, robust, and evidence-informed digital learning ecosystems.


1. Introduction

Digital technologies are transforming education globally, improving accessibility and enabling more personalized learning approaches [1]. According to the OECD report [2], digital education ecosystems are critical to this transformation. The report underlines the importance of technological infrastructure and the changing role of educators in this process, highlighting the need for a systemic approach to creating effective digital educational ecosystems. In recent years, with the consolidation of open educational hubs and the pandemic effect, education has experienced an acceleration of the digitization process. These hubs have facilitated massive access to high-quality educational resources, eliminating geographic and economic barriers, while the pandemic forced institutions to adopt digital technologies rapidly to support distance learning [3]. In turn, the complexity of virtual ecosystems increases significantly with the integration of new technologies such as artificial intelligence (AI), augmented reality (AR), virtual reality (VR), and the Internet of Things (IoT). These technologies, while offering enormous potential to enrich the user experience and expand the applications of these ecosystems, also introduce additional challenges in terms of interoperability, security, and data management [4,5]. This increasing complexity demands constant evaluation to ensure these platforms remain usable, accessible, and functional. Without rigorous evaluation, the risk is that virtual ecosystems will become challenging for some users, including individuals with disabilities, thereby limiting educational and operational effectiveness [6].
Digital ecosystems must meet accessibility, usability, and functionality criteria to ensure that they are inclusive, practical, and valuable for all users, regardless of their abilities. Accessibility ensures that people with disabilities can use the system, which is a fundamental right and, in many cases, a legal obligation. Even if a system is of high quality in terms of content and functionality, if it is not accessible to all learners, especially those with disabilities, its value is significantly compromised [7]. Lack of accessibility reduces student satisfaction and can negatively affect their academic performance and overall educational experience [8,9]. Usability focuses on users interacting with the system efficiently, which enhances the user experience and promotes greater adoption. Lack of usability can lead to frustration and abandonment of the system, regardless of the accessibility and functionality it offers [10].
Functionality, on the other hand, ensures that the system meets its operational objectives and can adapt to various needs without compromising the quality of the experience. Poor functionality can severely limit the usefulness of a digital ecosystem, even if it is accessible and easy to use [11]. Most of the reviewed studies analyze accessibility, usability, or functionality separately. However, integrating accessibility, usability, and functionality into a holistic framework is essential to developing effective and sustainable digital ecosystems. Despite significant advances in each of these areas, prior research has often approached them in isolation, limiting a comprehensive understanding of how their combined performance influences learning outcomes and the development of advanced cognitive skills such as complex and computational thinking.
This study evaluated the internal and external functionalities, usability, and accessibility of digital learning hubs and educational repositories through a structured multi-criteria evaluation framework, and identified their impact on students' perception of developing advanced skills such as complex and computational thinking. This study contributes to the literature by proposing and applying an integrated multi-criteria evaluation framework that jointly examines functionality, usability, accessibility, and course availability, explicitly linking these dimensions to the development of complex and computational thinking in digital learning hubs and educational repositories. This evaluation is necessary to help these learning spaces fulfill their purpose of democratizing access to education and facilitating the development of complex thinking. These aspects are critical to fostering an adaptive learning environment that supports the development of complex and computational thinking, enabling students to critically and systemically address problems, identify patterns, abstract vital concepts, and interact creatively and algorithmically with content. Recent research has demonstrated that incorporating computational thinking in science education significantly enhances students' ability to analyze and solve real-world problems through algorithmic modeling and data-driven decision-making [12]. By integrating multiple dimensions, including usability, accessibility, and course availability, this framework provides a comprehensive approach to assessing the effectiveness of digital learning environments. In addition, such evaluations contribute to the advancement of digital education by ensuring that educational platforms can evolve and adapt to the changing needs of students and society at large, promoting inclusive and practical learning.
The scope of this study is limited to a cross-sectional evaluation of selected digital learning hubs and educational repositories; therefore, longitudinal effects and direct measurements of learning outcomes are beyond the scope of the present analysis.

2. Literature Review and Conceptual Framework

2.1. Virtual Learning Ecosystems

Virtual learning ecosystems are integrated digital environments that combine various educational resources and tools designed to offer flexible, personalized, and adaptive learning experiences. These ecosystems respond to the growing demand for flexibility in education, allowing students to access content from anywhere and at any time, which enriches the educational process [13]. According to Briscoe, these ecosystems can evolve and dynamically adjust to changing conditions, making them especially effective in the academic environment [14].
Within these virtual ecosystems, digital learning hubs and open educational resource repositories are critical in providing organized structures and essential tools for educational interaction.
A digital learning hub (DLH) is defined as an integrated platform that combines educational resources, interactive tools, and advanced functionalities, such as learning personalization, automated assessment, and real-time collaboration, offering an adaptive and enriching learning experience for users. It can also be understood as an online platform that facilitates educational activities through digital technologies, promoting collaboration, innovation, and access to resources for both students and educators. Thus, DLHs constitute a collaborative network that enhances the learning experience. For example, Saggah et al. propose a synergistic digital hub (SDH) that uses gamification elements to reinforce educational objectives, demonstrating the effectiveness of such hubs in creating engaging learning environments [15]. In this context, Langegård et al. highlight the importance of digital tools in these environments because of the challenges they represent and the significant impact these actions have on traditional learning methods [16].
Kucirkova and Littleton concur, highlighting the need for practical, community-oriented digital learning centers with collaborative and distributed learning approaches [17]. As noted, DLHs go beyond mere digital tools used as academic distractors; they are instrumental in driving digital transformation. Crupi et al. add to the definition of digital innovation hubs (DIHs), indicating that they facilitate digital transformation in the social and business sectors, illustrating the broader implications of digital hubs in fostering innovation and economic development, generating a linkage of academia with social development [18]. Similarly, Martins discusses the importance of digital hubs in human resource management, especially in promoting digital skills and networking opportunities [19]. This highlights the multifaceted nature of DLHs; they are helpful in all application contexts: educational, occupational, and economic.
On the other hand, an open educational repository (OER) is a functional and practical resource within digital ecosystems because it focuses on the provision and organization of publicly accessible educational materials, such as documents, videos, learning objects, research results, presentations and didactic resources in general, all designed to facilitate universal access to educational contents [20].
OERs serve multiple functions, such as improving access to educational materials, supporting collaboration among educators, and facilitating resource reuse in various educational contexts. OERs are instrumental in promoting open access to educational materials. They provide a framework for educators to freely share and adapt resources, thus fostering an environment conducive to collaborative teaching and learning practices. As Santos-Hermosa et al. point out, OERs should not only store content but also help educators adopt open educational practices, allowing the adaptation and modification of resources without economic, geographic, and social barriers [21]. Furthermore, the functionality of educational repositories extends further: they act as knowledge stores where educational materials are cataloged and made accessible for use in different contexts and settings.
According to Richardson et al., digital repositories serve as electronic performance support systems that facilitate the organization and classification of educational resources, thus enhancing their usability for instructors [22]. Zibani et al. support this, describing research repositories as essential components of higher education institutions, capturing scholarly outcomes and supporting academic development [23].
In this sequence of ideas, it is essential to note that hubs and repositories are fundamental in developing a meaningful educational experience. Hubs provide interactive and personalized experiences, while repositories ensure open access to quality content. The synergy between these elements contributes to the cohesion of the digital ecosystem, integrating digital libraries, interactive modules, collaboration tools, and assessment systems into a unified platform. This simplifies access to resources and makes it easier for educators to manage the teaching process, allowing them to monitor students’ progress and adapt content according to their needs [20,24].
In addition, these ecosystems foster continuous and collaborative learning by integrating tools and resources into a cohesive environment, which enriches the educational experience and develops advanced technical skills in students [25]. However, ensuring a good user experience requires that these platforms maintain adequate cohesion between their tools and functionalities [26].
In this study, digital learning hubs and educational repositories are examined not only as standalone components of virtual learning ecosystems, but as integrated environments whose functionality, usability, and accessibility are jointly evaluated to understand their role in supporting advanced cognitive skills.

2.2. Quality of User Experience (UX)

User experience in e-learning ecosystems is closely linked to usability, facilitating intuitive navigation, quick access to educational resources, and efficient task completion. In e-learning environments, semantic integration and consistency in the user interface are crucial to maintaining usability and user experience, which improves efficiency and user satisfaction [25]. Vasileva shows in her findings that a well-structured design that supports collaboration and project management within an educational environment can significantly increase usability, thus improving the user experience [27].
Accessibility is another critical factor influencing the user experience within virtual learning ecosystems, especially regarding satisfaction and fairness. Digital accessibility is essential to ensure that people with special needs can fully participate in online learning, increasing user satisfaction and engagement [7]. Accessibility in this context is not limited to technical aspects but also encompasses the adaptability of educational content and user interfaces to meet different learning styles.
When a learning ecosystem can adapt to each user’s characteristics, it generates a more personalized and accessible experience. User satisfaction is significantly enhanced in learning ecosystems prioritizing inclusion and personalization, resulting in a more positive and practical user experience [28].
A functional ecosystem is essential for delivering optimal user experience, as it is directly influenced by the system’s ability to adapt to learners’ diverse and changing needs. When learners can navigate seamlessly between different tools and resources, their interaction with the environment becomes more intuitive and less frustrating, which increases their engagement and motivation [25]. The robust functionality of an educational digital ecosystem fosters research and innovation, allowing students to explore different methodologies and content adaptively. It improves the quality of academic experience and supports the development of critical and creative skills [11], such as complex thinking.
In this study, user experience directly informs the usability dimension of the evaluation framework, guiding the assessment of efficiency, effectiveness, and user satisfaction across digital learning hubs and repositories.

2.3. Complex Thinking

The development of complex thinking offers contemporary educators a way to expand both their learners' capacities and their own. Defined by Morin [29] as a way of thinking that integrates multidimensional knowledge and rejects the reductionist consequences of simplifying what is real, this competence holds great potential [30] for constructing educational scenarios. In modern and future education, the ability to connect with other disciplines and areas of knowledge is indispensable for addressing the problems of a complex reality [31], one that appears around us and in various forms in distant latitudes, in the present and in the future, as a vast, timeless, and multidimensional system.
In this sense, digital educational ecosystems consider the various dimensions relevant to constructing educational scenarios. These digital scenarios comprise a series of resources [32] articulated to offer a multiplicity of visions, connecting with the learner and the educator and providing narratives, conceptualizations, and paths through reality for achieving learning goals. These OERs [33] also allow a greater number of people to be reached without the restrictions of proprietary ownership common in the educational environment, in line with Sustainable Development Goal 4, “Quality Education” [34]. The combination of complex thinking development and open educational resources promises a strong impulse for connecting ideas and contextual priorities that can be transferred to other realities to construct a shared future.
Additionally, the application of digital technologies in constructing this type of resource opens possibilities for adaptability and access for diverse learners, offering customizable paths. Technologies such as natural language processing (NLP) [35] allow artificial intelligence tools to construct such resources in different versions and formats to promote access. However, these applications require evaluation and review to ensure their production meets the stated requirements and learning purposes [36]. Therefore, the design of evaluation tools for this type of resource encourages the adequate production of open resources as promoters of inclusion and access through the development of complex thinking.
Within this research, complex thinking serves as a key reference for analyzing whether digital learning hubs provide learning environments that support multidimensional reasoning, critical integration of knowledge, and contextual problem solving.

2.4. Computational Thinking

Computational thinking (CT) is increasingly recognized as a vital skill that transcends the traditional boundaries of computer science and encompasses a range of problem-solving skills applicable across various disciplines. It consists of four key elements: abstraction, decomposition, algorithm design, and generalization, which have proven to be essential in problem-solving across multiple fields of knowledge [37]. Broadly defined, CT involves formulating problems using computational methods, designing systems, and understanding human behavior through the lens of computational principles [38,39]. This perspective should be integrated into educational curricula at all levels to prepare students for a technology-driven world [38,40]. In academic contexts, computational thinking is often divided into decomposition, pattern recognition, abstraction, and algorithm design [41,42]. These elements facilitate a structured approach to problem-solving, enabling students to tackle complex challenges systematically.
In the context of this study, computational thinking is used as an analytical lens to examine the availability of learning opportunities and resources explicitly designed to support abstraction, problem decomposition, and algorithmic reasoning within digital platforms.

2.5. Evaluation of Digital Learning Hubs

The evaluation of DLHs is a delicate and relevant process. Some studies, especially after the emergence of MOOCs (Massive Open Online Courses), have focused on three types of evaluation: learning outcomes, learner engagement and participation, and user experience on digital platforms [43]. Informal digital learning, facilitated by online platforms and networking environments, plays a fundamental role in the development of computational thinking in higher education. Recent research has shown that digital networking skills can positively mediate the acquisition of computational competencies [44].
However, recent publications have highlighted ethical considerations, evaluating the risks of using educational resources generated with NLP-based applications integrated into DLHs due to biases, prejudices, and loss of human interaction [45]. It is important to emphasize that the evaluation should establish criteria for evaluating effectiveness against a reference framework that guarantees its quality.
The quality assessment of DLHs and OERs has been examined since their massive expansion during the COVID-19 outbreak in 2020 [46], with subsequent revisions establishing priorities for the different actors in this process. Some of these priorities concern the retention of students in courses facilitated with OERs through these platforms [47]. However, a comparative analysis by Gherheș et al. indicates that the quality of DLHs can also be assessed through the student's experience of online learning and that it depends not only on the skills of the teacher but also on the characteristics and digital skills of the students [48]. This suggests that DLHs should include strategies to develop learners' digital competence, ensuring all participants can fully benefit from the learning opportunities.
Of particular importance in this regard is the ability of digital platforms to adapt to the specific needs of their users, which underlines the relevance of applying universal design principles [49]. Evaluating these functionalities [50] and implementing them still represent a challenge, so research on how best to adapt platforms to students' needs and preferences, especially those of students with disabilities, should be strengthened as a basis for programming new actions.
While prior studies have examined accessibility, usability, functionality, or learning outcomes of digital learning platforms in isolation, relatively few have explicitly analyzed how these dimensions jointly shape the user experience and support the development of advanced cognitive skills such as complex and computational thinking within digital learning hubs.
Taken together, the literature reviewed in this section supports a conceptual framework in which digital learning hubs and open educational repositories are understood as integrated environments whose functionality, usability, and accessibility shape the user experience and, in turn, influence opportunities for developing complex and computational thinking. This relationship guides the design of the multi-criteria evaluation framework applied in the present study.
The present study performs a comprehensive evaluation of DLHs, considering key elements such as internal and external functionalities, accessibility based on WCAG 2.1 principles, usability according to ISO 9241, and the availability of training options and resources in critical areas such as complex and computational thinking [51]. This analysis seeks to provide the field of education with conceptual and methodological tools that strengthen the design and improvement of DLHs and enhance the development of advanced competencies in complex and computational thinking.
Building on these prior approaches, the present study advances existing research by applying a unified multi-criteria evaluation framework that integrates usability, accessibility, functionality, and course availability, explicitly considering their relationship with the development of complex and computational thinking.

3. Research Questions and Hypotheses

The analysis of digital learning platforms, particularly Digital Learning Hubs (DLHs) and Open Educational Repositories (OER), has gained increasing relevance due to their role in contemporary educational transformation. As discussed in the literature review (Section 2.2, Section 2.3, Section 2.4 and Section 2.5), prior studies highlight the importance of usability, accessibility, and functionality as independent quality dimensions, while recent work emphasizes their relevance for fostering advanced competencies such as complex thinking and computational thinking.
However, existing research has often addressed these dimensions in isolation, providing limited empirical evidence on how their combined performance shapes user experience and supports the development of higher-order cognitive skills within integrated digital learning ecosystems (Section 2.5). This gap motivates the need for a structured analytical approach capable of examining these relationships holistically.

3.1. Research Questions

To guide the empirical analysis, this study is driven by the following main research question (MRQ):
MRQ: 
How do the functionalities, usability, and accessibility of digital learning hubs and educational repositories influence their perceived effectiveness and their potential to support the development of complex and computational thinking?
This question is grounded in prior research on user experience and digital accessibility (Section 2.2), the educational value of complex thinking (Section 2.3), the role of computational thinking in digital environments (Section 2.4), and recent approaches to evaluating digital learning hubs (Section 2.5).

3.2. Hypotheses Development

Building on the theoretical foundations and empirical findings discussed in Section 2.2, Section 2.3, Section 2.4 and Section 2.5 and following established approaches to hypothesis-driven evaluation of digital learning environments [47,48,49], the following hypotheses are proposed. These hypotheses translate the conceptual relationships identified in the literature into measurable and testable propositions:
H1. 
Platforms with higher scores in internal functionalities have a better perception of usability (derived from studies linking personalization, content management, and system design to user experience and usability outcomes [50,51,52]).
H2. 
Platforms with higher scores in external functionalities have a higher perception of accessibility (based on research emphasizing device compatibility, interoperability, and external integrations as key accessibility enablers [50,51]).
H3. 
Educational platforms partially comply with the accessibility principles defined by WCAG 2.1, excelling in the Perceivable principle but presenting deficiencies in Operable and Understandable (consistent with previous accessibility audits of educational platforms highlighting uneven compliance with WCAG principles [53,54,55]).
H4. 
There is a significant correlation between usability and accessibility levels of digital platforms (supported by prior evidence suggesting strong interdependencies between usability and accessibility in digital learning environments [56]).
H5. 
Digital Learning Hubs offer a wide range of multidisciplinary resources, but the availability of courses on complex and computational thinking remains limited, highlighting a gap in specialized training opportunities (aligned with prior analyses of MOOCs and online repositories that report insufficient integration of computational and complex thinking content [57,58]).
Together, these hypotheses operationalize the conceptual framework presented in the previous section by linking platform design dimensions (functionality, usability, and accessibility) with user experience and the potential for developing complex and computational thinking. This structure ensures coherence between the literature review, the research design, and the empirical analyses presented in the subsequent sections.

4. Methodology

This study adopted a quantitative research design with a descriptive–correlational approach, based on the need to comprehensively evaluate educational digital platforms and their impact on learning. This design relies on collecting quantitative data to identify significant relationships between key dimensions, aligning with the structured approaches suggested by [59,60] to ensure reliable results in educational evaluations. The research was developed in four stages: (1) platform selection, (2) functionality evaluation, (3) usability analysis, and (4) accessibility evaluation, ensuring an exhaustive and systematic assessment. International reference standards such as ISO 9241 and WCAG 2.1 were adopted as methodological frameworks to guide the evaluation process, rather than as certification mechanisms.

4.1. Platform Selection

The first stage involved identifying and evaluating 50 digital platforms, classified into Digital Learning Hubs and Open Educational Repositories. After applying selection criteria, 25 platforms were retained for the final analysis, comprising 13 Digital Learning Hubs and 12 Open Educational Repositories. This process adhered to the recommended criteria [61,62,63,64,65,66] and was adjusted to ensure relevance, diversity, and representativeness within the study. The selected platforms represent a diverse international landscape, primarily originating from Europe, North America, and global open-education initiatives. Platforms from regions such as East Asia were not included due to language constraints, access limitations, and the focus on platforms with broad international reach and publicly available evaluation data.
Although the hypotheses were formulated at a general level, this classification allowed for a more detailed comparative analysis of their functionalities, usability, accessibility, and course availability, providing additional insights into the differences between these platform types.
The selection criteria included:
  • Availability: publicly accessible platforms during the study period.
  • Relevance: international recognition in the educational field, identified through scientific literature and usage reports.
  • Coverage: a variety of educational resources offered, spanning content types and academic levels.
  • Interactivity: advanced functionalities such as learning personalization and collaborative tools.
Table 1 details the selected repositories, and Table 2 shows the digital learning hubs.

4.2. Evaluation of Functionalities

All evaluation stages were carried out by the research team following standardized protocols, with the usability assessment additionally supported by trained student participants. In this stage, the functionalities of the selected platforms, classified as internal and external, were analyzed based on their presence or absence. For this purpose, an evaluation instrument based on a structured multiple-option selection method was used, allowing evaluators to systematically identify the available functionalities from a predefined set of attributes within each category. Table 3 shows the criteria for assessing functionalities.
The functionalities were divided into two categories (see the scoring sketch after this list):
  • Internal functionalities: features that directly support the teaching–learning process and user interaction within the platforms, such as personalization, content management, collaboration, assessment, gamification, and certification.
  • External functionalities: access-related and contextual features that enable initial interaction with the platforms, including registration, communication, security and privacy, availability, and languages/localization.
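As an illustration of this presence/absence scoring, the following minimal Python sketch shows how checklist observations could be aggregated into the percentage-based functionality scores reported in the Results; the platform IDs, column names, and values are hypothetical and do not reproduce the study's instrument.

```python
# Hypothetical sketch: each evaluator marks the presence (1) or absence (0)
# of predefined attributes; scores are then aggregated per platform type.
import pandas as pd

# One row per (platform, functionality) observation from the checklist
observations = pd.DataFrame({
    "platform":      ["H01", "H01", "R01", "R01"],
    "type":          ["Hub", "Hub", "Repository", "Repository"],
    "functionality": ["F05_personalization", "F06_assessment",
                      "F05_personalization", "F06_assessment"],
    "present":       [1, 1, 0, 0],
})

# Percentage of platforms exhibiting each functionality, by platform type
scores = (observations
          .groupby(["type", "functionality"])["present"]
          .mean() * 100)
print(scores)
```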

4.3. Usability Evaluation

The third stage analyzed the usability of the platforms following the criteria established by the ISO 9241 standard. The dimensions evaluated were:
  • Effectiveness: ability to achieve specific objectives.
  • Efficiency: optimization of time and effort when interacting with the platforms.
  • Satisfaction: level of comfort and confidence perceived by users.
  • Simplicity: clarity of navigation and ease of use.
To ensure a rigorous and standardized evaluation, 194 trained students from Tecnológico de Monterrey participated in the process. Of these, 192 completed the full demographic questionnaire, while two participants provided incomplete demographic information but were retained for usability scoring, as their responses to the evaluation tasks were valid. The sample comprised 130 men and 62 women, predominantly aged between 18 and 24 years (99%). Each platform was evaluated by multiple participants, ensuring diverse perspectives and a balanced assessment. Before the evaluation, students received specific training on usability principles and platform interaction guidelines to ensure consistency in responses. The participants, who had previous experience with educational platforms, rated the usability dimensions using a 5-point Likert scale, where 1 represented “Strongly Disagree” and 5 “Strongly Agree”. Additionally, participants provided open-ended feedback on their user experience, complementing the numerical ratings. Table 4 shows the criteria for assessing usability.
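To illustrate how such Likert responses can be condensed into platform-level usability scores, the following sketch aggregates hypothetical ratings per platform; the values and IDs are invented for demonstration only.

```python
# Illustrative aggregation of 5-point Likert ratings (1 = "Strongly Disagree",
# 5 = "Strongly Agree") into per-platform usability summaries.
import pandas as pd

ratings = pd.DataFrame({
    "platform":  ["H02", "H02", "H06", "H06"],
    "principle": ["U01", "U03", "U01", "U03"],
    "rating":    [3, 2, 2, 1],
})

# Mean and standard deviation per platform across all usability principles
usability = ratings.groupby("platform")["rating"].agg(["mean", "std"])
print(usability)
```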

4.4. Accessibility Evaluation

Accessibility was evaluated according to the principles established by WCAG 2.1, using the TAW (Web Accessibility Test) tool. WCAG 2.1 consists of 13 guidelines that group 78 success criteria under four main principles:
  • Perceivable: provision of textual alternatives and distinguishable content.
  • Operable: full use of the keyboard and adequate time to access the content.
  • Understandable: textual clarity and error correction.
  • Robust: compatibility with assistive technologies.
The TAW tool reports accessibility issues by identifying errors and warnings associated with WCAG 2.1 success criteria. To enable comparison across platforms, the number of detected issues was normalized and transformed into proportional scores at the principle level, where fewer violations indicate higher accessibility compliance. These scores were used for analytical purposes rather than formal certification. Table 5 shows the criteria used to evaluate accessibility.
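A minimal sketch of this normalization step is shown below, assuming the TAW reports have already been tabulated as violation counts per WCAG principle; the inversion rule (fewer violations yields a higher score) and the 0–5 scale are simplifying assumptions for illustration, not the authors' exact weighting scheme.

```python
# Sketch: turn per-principle violation counts into proportional compliance
# scores, so that fewer detected issues yields a higher score.
import pandas as pd

violations = pd.DataFrame(
    {"Perceivable": [4, 12], "Operable": [18, 25],
     "Understandable": [22, 30], "Robust": [9, 7]},
    index=["H01", "R01"],  # hypothetical platform IDs
)

# Normalize each principle's counts to [0, 1], invert, and rescale to 0-5
max_per_principle = violations.max()
compliance = (1 - violations / max_per_principle) * 5
print(compliance.round(2))
```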

4.5. Evaluation of Course Availability

To identify courses explicitly related to Complex Thinking and Computational Thinking, a structured search protocol was applied across all Digital Learning Hubs. The protocol included keyword-based searches using terms such as “computational thinking”, “complex thinking”, “problem solving”, “systems thinking”, and “algorithmic thinking”, as well as manual verification of course titles, descriptions, learning objectives, and curricular focus. Only courses explicitly aligned with these cognitive domains were classified as Complex/CT related, ensuring consistency and comparability across platforms.
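The keyword-based screening step of this protocol could look like the following sketch; the course entries and the simple substring-matching rule are illustrative, and in the study keyword hits were additionally verified manually against titles, descriptions, learning objectives, and curricular focus.

```python
# Hedged sketch of the keyword-based screening step of the search protocol.
KEYWORDS = ["computational thinking", "complex thinking", "problem solving",
            "systems thinking", "algorithmic thinking"]

def is_ct_related(title: str, description: str) -> bool:
    """Flag a course whose title or description matches any protocol keyword."""
    text = f"{title} {description}".lower()
    return any(keyword in text for keyword in KEYWORDS)

courses = [
    ("Introduction to Computational Thinking", "Abstraction and algorithms"),
    ("Art History", "Renaissance painting"),
]
flagged = [c for c in courses if is_ct_related(*c)]
print(flagged)  # only the first course matches
```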
The availability of courses on complex and computational thinking was assessed by analyzing digital learning hubs. A descriptive statistical analysis was conducted to compare the number of courses available across platforms. This evaluation aimed to determine how these platforms integrate advanced cognitive skill development within their educational offerings. The assessment was based on key criteria, including the total number of courses, the proportion of complex and computational thinking courses, access type, and certification availability, as detailed in Table 6.

4.6. Instrument Validation

To ensure the validity and reliability of the evaluation instrument, a structured validation process was conducted, including (1) expert review, (2) pilot testing, and (3) statistical reliability analysis. This study introduces a structured multi-criteria evaluation framework, integrating functionalities, usability, accessibility, and course availability, enabling a comprehensive assessment of Digital Learning Hubs. This framework ensures a systematic and objective evaluation by combining multiple key dimensions relevant to digital learning environments. The methodology followed established best practices in instrument validation, ensuring that the assessment criteria accurately captured platform functionalities and usability. Accessibility evaluation was conducted using an automated tool aligned with international standards. Table 7 presents the details of the validation process.

4.7. Data Analysis

The data analysis strategy was selected in direct alignment with the research questions and hypotheses presented in Section 3, as well as with the multi-stage evaluation design described in Section 4.1, Section 4.2, Section 4.3, Section 4.4, Section 4.5 and Section 4.6. Each analytical technique was chosen to address a specific evaluative purpose, ensuring methodological coherence across stages and supporting transparent interpretation of results.
The data collected were analyzed statistically to identify patterns, correlations, and significant relationships between the dimensions evaluated. The following analyses were performed:
  • Descriptive: to identify general trends in functionality, usability, and accessibility, which allowed an initial characterization of the evaluated educational platforms.
  • Correlational: to explore the relationships between the evaluated dimensions, highlighting significant interactions, such as the relationship between internal functionalities and the perception of usability.
  • Comparative: to identify significant differences between Digital Learning Hubs and repositories, highlighting their strengths and weaknesses.
Data analysis was performed in Python 3.10, using statistical and visualization packages such as Pandas (v2.0.3), NumPy (v1.24.4), Matplotlib (v3.7.2), Seaborn (v0.12.2), and SciPy (v1.11.2). These packages allowed us to tabulate the data accurately, calculate measures such as averages and standard deviations, and generate clear and compelling visualizations of the results.
The data were organized and tabulated by assigning a proportional weight to each category evaluated, based on its relevance to the user experience. In addition, global scores were calculated for each platform, integrating the dimensions of functionality, usability, and accessibility into a composite index that facilitated direct comparisons between Hubs and repositories.
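A sketch of this composite-index construction in pandas is shown below, assuming each dimension has already been rescaled to a common range; the category weights are placeholders, since the paper reports only that weights were proportional to each category's relevance to the user experience.

```python
# Sketch: weighted composite index per platform from three dimension scores.
import pandas as pd

dimensions = pd.DataFrame(
    {"functionality": [3.8, 2.1], "usability": [2.44, 1.67],
     "accessibility": [3.0, 2.2]},
    index=["H02", "R01"],  # hypothetical platform IDs
)
weights = {"functionality": 0.4, "usability": 0.35, "accessibility": 0.25}  # placeholders

# Weighted sum across dimensions gives the global score per platform
dimensions["composite"] = sum(dimensions[d] * w for d, w in weights.items())
print(dimensions.sort_values("composite", ascending=False))
```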
This methodological approach provides a robust and systematic framework for evaluating digital educational platforms, generating relevant insights for continuous improvement in digital learning ecosystems.

5. Results

This study presents a comprehensive evaluation of digital educational platforms, encompassing both Digital Learning Hubs and Repositories. For this purpose, three main variables were considered: functionality, usability, and accessibility, selected for their direct impact on the user experience and their ability to reveal key differences between the two types of platforms. The results are presented in a structured approach that begins with an analysis of fundamental functionality, followed by a detailed exploration of usability and accessibility. This analytical framework establishes a solid foundation for discussing the hypotheses raised in the subsequent sections.
Figure 1 shows a detailed comparison between Digital Learning Hubs and Repositories on ten key functionalities. Functionality scores represent the percentage of platforms exhibiting each functionality within each platform type. Hubs present consistently higher scores in areas such as content management (90% vs. 40%) and communication (80% vs. 25%), highlighting their focus on interaction and personalization. In contrast, repositories, while less prominent in aspects such as gamification (10% vs. 65%) and evaluation (10% vs. 75%), maintain competitive performance in security (70% vs. 75%) and availability (85% vs. 90%), essential for accessibility. These differences reflect complementary approaches that respond to different educational needs.
Figure 2 and Figure 3 provide representative examples of a DLH and a Repository, respectively, highlighting the specific functional differences in each type of platform. While Hubs prioritize interaction and diverse content management, repositories emphasize stability and accessibility. These visual comparisons consolidate the initial analysis and establish a framework for the subsequent sections.
Based on these observations, the following sections will examine how these results support or challenge the hypotheses raised. This approach will allow for a deeper understanding of each platform’s strengths and limitations and their implications for designing and evaluating digital educational tools.

5.1. Internal Functionalities and Usability

Figure 4 shows a significant positive correlation (r = 0.89, p < 0.001) between internal functionality scores and perceived usability in digital educational platforms. The reported correlation coefficient (r = 0.89) was calculated using Pearson’s correlation test, based on the aggregated internal functionality scores and the corresponding average usability scores across all evaluated platforms.
These internal functionalities include (F05) Personalization, (F04) Content management, (F07) Collaboration, (F06) Assessment, (F08) Gamification, and (F10) Certification and Recognition.
Platforms with higher internal functionality scores consistently exhibited higher average usability values compared to platforms with lower functionality scores, even though overall usability ratings remained within a moderate range. This is explained by the aggregation of multiple usability principles (U01–U07), each evaluated using a 5-point Likert scale, which causes usability values to cluster around mid-range scores rather than reaching the extremes of the scale.
The consistency observed in the data highlights the importance of internal functionalities as critical predictors of perceived usability. These results reinforce the initial hypothesis (H1) and underscore the impact of these features on user experience.
To ensure meaningful interpretation, usability was analyzed comparatively across platforms rather than as an absolute measure. This approach allows for identifying patterns, strengths, and weaknesses between platforms, rather than solely assessing whether a platform achieves a high or low usability score in isolation.
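For reference, the Pearson test named above can be reproduced with SciPy as follows; the score vectors here are fabricated solely to illustrate the procedure, not the study's data.

```python
# Minimal reproduction of the correlation test between aggregated internal
# functionality scores and average usability scores across platforms.
from scipy.stats import pearsonr

internal_functionality = [3.9, 3.5, 2.8, 2.4, 1.9]  # hypothetical aggregates
mean_usability         = [2.44, 2.36, 2.10, 1.95, 1.67]

r, p_value = pearsonr(internal_functionality, mean_usability)
print(f"r = {r:.2f}, p = {p_value:.4f}")
```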
Table 8 breaks down the average scores obtained for each usability principle (U01–U07) in the Digital Learning Hubs (H01–H13) and educational repositories (R01–R12). Platforms such as H02 (Coursera) and H09 (OpenCourseWare) recorded the highest scores, with overall averages of 2.44 and 2.36, respectively. These platforms demonstrated outstanding compliance with fundamental principles such as effectiveness (U01), satisfaction (U03), and user-centered design (U07). On the other hand, platforms such as H06 (INTEF) and R01 (Archive) showed the lowest values, with overall averages of 1.86 and 1.67, respectively, reflecting notable deficiencies in all the principles evaluated.

5.2. External Functionalities and Accessibility

Figure 5 reveals a significant positive correlation (r = 0.91, p < 0.001) between external functionality scores and the perception of accessibility in digital educational platforms. These external functionalities include (F01) Registration, (F07) Communication, (F09) Security and Privacy, (F03) Access and Availability, and (F02) Languages/Localization.
Platforms with scores above 3.5 in external functionalities achieved accessibility levels close to 3.0, while those below 2.0 obtained accessibility values below 2.0. The observed relationship confirms the importance of these functionalities in the perception of accessibility, especially those that facilitate initial access and user interaction, thus supporting hypothesis H2.

5.3. Accessibility Principles Compliance

Figure 6 shows the evaluation of the compliance of Digital Learning Hubs and repositories with the accessibility principles defined by WCAG 2.1. The results show that the platforms partially comply with these principles, with a better performance in the principle of Perceivable (A01), which reached averages of 3.4 in the Digital Learning Hubs and 3.2 in the repositories. This reflects significant progress in guidelines such as providing textual alternatives (1.1) and making content distinguishable (1.4).
However, the principles of Operable (A02) and Understandable (A03) show lower scores. Operable obtained averages of 2.5 in Digital Learning Hubs and 2.3 in repositories, highlighting difficulties in guidelines such as ensuring keyboard accessibility (2.1) and facilitating navigation (2.4). Understandable registered the lowest scores, with 2.1 in Digital Learning Hubs and 2.0 in repositories, with notable shortcomings in text readability (3.1) and help to avoid and correct errors (3.3).
To calculate these scores, the accessibility evaluation was conducted using the TAW (Web Accessibility Test) tool, which generates reports based on WCAG 2.1 success criteria. The number of violations per criterion was analyzed, and scores were assigned following a weighted system that considered the severity and frequency of the detected issues. The results were then grouped by WCAG 2.1 subprinciples, averaging the compliance levels within each principle to facilitate interpretation.
On the other hand, the Robust principle (A04) showed moderate performance, with 2.8 in Digital Learning Hubs and 3.1 in repositories, reflecting efforts in technological compatibility but with room for improvement.
The results partially confirm the hypothesis (H3). While the platforms excel in Perceivable, they perform to a limited extent in Operable and Understandable, which supports the need for improvements in these principles.

5.4. Accessibility and Usability in Platforms

Figure 7 explores the relationship between usability and accessibility scores of digital platforms, evidencing a significant positive correlation (r = 0.87, p < 0.001). Platforms with higher usability scores show better accessibility performance, regardless of whether they are DLHs or repositories.
The heat map reflects a concentration of platforms within comparable mid-range usability and accessibility values, highlighting consistent patterns of co-variation between both dimensions rather than absolute high-score levels. Importantly, repositories cluster in slightly lower score ranges than Digital Learning Hubs, indicating differences in design approach and user experience priorities.
These results confirm hypothesis (H4) and demonstrate that improving usability often translates into improved accessibility, underscoring the interconnection between these two aspects in digital platform design.

5.5. Topic Representation in Platforms

Figure 8 represents the percentage distribution of courses offered by Digital Learning Hubs, classified into three thematic categories: complex thinking, computational thinking, and other topics. OpenCourseWare leads with 30% and 20% representation in complex and computational thinking, respectively, followed by Coursera (22% and 17%) and edX (20% and 15%). Udemy, although ranked fourth, also presents a significant contribution, with 18% in complex thinking and 14% in computational thinking. In contrast, platforms such as Alison (1.4% and 0.1%) and OpenLearn (0.5% and 2%) have minimal representation in these categories.
The “Other Subjects” category continues to dominate the offerings of most platforms. For example, FutureLearn, Khan Academy, and OpenLearn concentrate more than 90% of their courses in this category, while OpenCourseWare and Coursera show a more balanced distribution, with 50% and 61%, respectively. This pattern is evident in the heat map, where the blue color intensity highlights the relative proportions.
These results partially support Hypothesis H5, confirming that Digital Learning Hubs offer diverse resources but with limited representation of complex and computational thinking courses. While platforms like OpenCourseWare and Coursera lead in these areas, most hubs do not prioritize them. Additionally, the accessibility findings reveal strong performance in the Perceivable principle but notable weaknesses in Operable and Understandable. Improving keyboard accessibility, navigation, and error prevention could enhance usability and inclusivity, ensuring these platforms better support learners engaging with advanced cognitive skills.

6. Discussion

This study’s analysis provides a detailed understanding of the structure of digital learning platforms in terms of their functionalities, usability, and accessibility. The most relevant findings are discussed below.
Specifically, integrating well-structured internal functionalities, such as personalization and content management, significantly improves the user experience, fostering higher levels of usability. Figure 4 shows that platforms with internal functionality scores above 3.5 achieved usability levels above 3.0, while those below 2.5 recorded usability levels below 2.0, with a significant positive correlation (r = 0.89, p < 0.001). This agrees with [50], who emphasize that robust internal functionalities significantly improve user perception, and with [51], who note that external functionalities are essential for a comprehensive experience. These results confirm H1 by demonstrating that internal functionalities directly impact perceived usability, underlining their importance in designing digital educational platforms.
External functionalities, such as intuitive registration methods and device compatibility, are critical to ensuring inclusive accessibility in digital learning platforms. Figure 5 shows that platforms with higher scores on external functionalities present a significant positive correlation with accessibility levels (r = 0.91, p < 0.001). This aligns with [52], who stress that external functionalities are determinants of equitable access, although [54] identifies persistent challenges in their universal implementation. H2 is therefore accepted, evidencing that external functionalities are vital to ensuring an accessible and inclusive educational experience.
Usability, measured through criteria such as effectiveness, efficiency, and satisfaction, has a significant impact on user perception and participation. Table 8 shows that platforms with higher usability scores also exhibit consistently higher satisfaction, efficiency, and simplicity ratings compared to platforms with lower usability scores. These differences reflect relative performance levels derived from aggregated Likert scale evaluations rather than absolute percentage-based measures. This finding supports previous studies highlighting the role of intuitive and user-centered design in enhancing engagement and perceived quality of digital learning platforms [52,55], evidencing that platforms with better usability achieve higher levels of interaction and satisfaction, which is essential for meaningful learning experiences. Recent research confirms that computational thinking, when integrated into digital platforms with a structured pedagogical approach, enhances student engagement and learning effectiveness, reinforcing the importance of usability in these environments.
Likewise, accessibility and usability are closely related, functioning as interdependent dimensions that strengthen educational inclusion. Figure 7 shows a significant correlation (r = 0.87, p < 0.001) between accessibility and usability levels, where platforms with accessibility scores above 3.5 obtained usability evaluations above 3.0. This agrees with [56], who argue that improved usability is associated with more efficient accessibility, although [55] highlights limitations in effectively implementing both dimensions. H4 is accepted, underscoring the importance of jointly addressing these dimensions to maximize the impact of digital educational platforms.
Regarding the availability of courses related to complex and computational thinking, such courses remain underrepresented despite the wide multidisciplinary offering of Digital Learning Hubs. Only 10% of the platforms included more than five courses in these areas, while 70% of the resources were concentrated on general or multidisciplinary content. This agrees with [57], who identified significant gaps in the integration of computational thinking in MOOCs, and [55], who pointed out that STEM courses should integrate a focus on developing these competencies. H5 is partially accepted, evidencing a gap in the representation of advanced skills and highlighting the need to design educational strategies that prioritize these contents. A recent systematic review further supports these findings, revealing that most efforts to integrate computational thinking are concentrated in science and mathematics education, leaving significant gaps in other fields that could benefit from this approach [66,67].
For the equitable inclusion of learners in developing the competencies addressed in this study, i.e., complex and computational thinking, it is key to provide convenient, affordable, and usable resources. Resources for Spanish speakers, in particular, are limited and decontextualized; therefore, creating a repository of OERs to develop these competencies, one that includes contextual references and a variety of disciplinary areas and their interrelations with reality, plays a key role in the functional design of a new digital hub. Additionally, fostering a community that can exchange visions and expertise is relevant for keeping the content up to date, attractive, and relevant. Finally, providing guidance for those who navigate the content in search of collaboration with peers, content creation, or mentoring represents an opportunity to develop the competence in larger groups and at different levels of mastery.
The study highlights key implications for the design and improvement of Digital Learning Hubs and Open Educational Repositories. The strong link between internal functionalities and usability emphasizes the need for adaptive, user-centered design to enhance engagement and learning outcomes.
The partial compliance with WCAG 2.1 signals the importance of strengthening Operable and Understandable to foster inclusivity. Additionally, the limited availability of complex and computational thinking courses underscores the need for more specialized training resources.
These findings call for continuous evaluation frameworks that integrate usability, accessibility, and content alignment, ensuring that platforms evolve to meet emerging educational and technological demands. Table 9 summarizes the study’s contribution through the hypotheses.

Threats to Validity

Despite the rigorous validation process implemented in this study, certain limitations must be acknowledged. First, the composition of the participant group is a limitation. Although the evaluation process was performed by trained students and conducted rigorously by the research team, the sample covered a narrow age range and drew on students from a single institution, whose perceptions might differ from those of older learners, individuals with less digital proficiency, or users from other educational contexts. This condition might bias the evaluation and, therefore, the results [67]. Future work should include a wider age range and more varied educational contexts.
Second, while the study ensured multiple evaluators per platform to capture diverse perspectives, the assessment relied on subjective user feedback through a Likert scale and open-ended responses, which, despite providing valuable insights, may not fully capture the complexity of platform usability. Future research should incorporate evaluators with diverse professional backgrounds and varying levels of digital literacy to enhance the external validity of the study's conclusions.
Third, although the multi-criteria evaluation framework ensures a comprehensive assessment by integrating key dimensions such as usability, accessibility, and course availability, the particularity of the reviewed platforms might limit the study's applicability. Future work should explore different validation approaches to refine the framework and instrument and improve their reliability across different types and classes of digital hubs.
From a research perspective, this study contributes to the literature on virtual learning ecosystems by proposing and empirically validating a unified multi-criteria evaluation framework that integrates functionality, usability, accessibility, and content alignment. Unlike previous studies that examine these dimensions separately, the proposed approach offers a holistic lens for assessing the quality and sustainability of digital learning hubs.
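To illustrate how such a unified framework could operate in practice, the sketch below folds the four dimensions into a single composite platform score. The equal weighting and the function name composite_score are illustrative assumptions, not the study's published procedure.

```python
# A minimal sketch, assuming equal weights, of aggregating the four evaluated
# dimensions into one composite platform score; `composite_score` and the
# weighting are illustrative assumptions, not the study's published procedure.
def composite_score(functionality: float, usability: float, accessibility: float,
                    content_alignment: float,
                    weights: tuple = (0.25, 0.25, 0.25, 0.25)) -> float:
    """Weighted mean of the four dimension scores, each normalized to [0, 1]."""
    dims = (functionality, usability, accessibility, content_alignment)
    return sum(w * d for w, d in zip(weights, dims))

# Example: strong functionality/usability, weak complex/computational-thinking coverage.
print(composite_score(0.86, 0.78, 0.64, 0.10))  # -> 0.595
```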

7. Conclusions

This study evaluated the functionalities, usability, and accessibility of digital educational platforms, including Learning Hubs and Repositories, to identify their capacity to promote advanced cognitive skills, such as complex and computational thinking. The results show that, although the Hubs present significant strengths in internal functionalities, such as personalization, content management, and usability, limitations persist in the representation of courses focused on complex and computational thinking, barely reaching 10% of the total resources evaluated. This evidences the need to design platforms with a more balanced integration of contents that address key cognitive competencies, reflecting opportunities to improve the educational impact of these tools [68].
In terms of educational practices, the findings underline the importance of robust functionalities and inclusive accessibility, supported by a positive correlation between these variables. The most effective platforms integrate compatibility and security and provide personalized learning experiences that foster user engagement. To enhance usability and accessibility in platforms with lower scores, it is recommended to implement improvements such as intuitive navigation, adaptive interfaces, enhanced screen reader compatibility, and periodic accessibility audits. Additionally, ensuring user participation in platform evaluation, particularly from individuals with disabilities, can contribute to refining accessibility strategies and improving the learning experience.
It is recommended that Learning Hubs be developed specifically to address these areas, including a stronger representation of advanced skills. In addition, digital tools and formative assessments stand out as valuable means to enhance the learning experience and contribute to training in complex thinking.
Moreover, as STEM disciplines increasingly require interdisciplinary and technology-driven approaches, ensuring that these platforms effectively integrate computational thinking and problem-solving skills becomes essential for equipping students with the competencies needed in scientific and engineering fields. Expanding the representation of these skills within digital learning environments can enhance their role as critical resources for STEM education, bridging gaps in access to high-quality educational content.
Despite the advances, the study has limitations, such as its focus on a specific set of platforms and cross-sectional assessment, which restricts the extrapolation of the results. Future studies should expand the samples and explore how the evolution of functionalities, accessibility, and usability impacts different educational contexts and disciplines. Finally, this work invites further research on the design and implementation of Learning Hubs that prioritize the integration of key competencies such as complex and computational thinking, positioning these platforms as essential tools for the education of the future. To maximize the effectiveness of these platforms, developers should integrate AI-driven adaptive learning systems that personalize content delivery, while educators should receive targeted training on computational thinking pedagogies to ensure meaningful implementation across disciplines. Additionally, establishing open-access repositories with structured curricula and guidelines can support both learners and instructors in effectively embedding computational thinking into diverse educational frameworks.
Appendix A shows tables that globally demonstrate the criteria evaluated in the study.

Author Contributions

Conceptualization, I.A.-I. and L.M.O.-C.; methodology, I.A.-I. and L.M.O.-C.; software, R.T.; validation, I.A.-I. and L.M.O.-C.; formal analysis, L.M.O.-C.; investigation, I.A.-I. and L.M.O.-C.; data curation, L.M.O.-C.; writing—original draft preparation, I.A.-I., L.M.O.-C., R.T. and J.L.M.-N.; writing—review and editing, L.M.O.-C. and J.L.M.-N.; visualization, L.M.O.-C. and J.L.M.-N.; supervision, L.M.O.-C. and J.L.M.-N.; project administration, J.L.M.-N. and L.M.O.-C.; funding acquisition, I.A.-I. and J.L.M.-N. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Tecnológico de Monterrey, grant number IJXT070-23EG99001, and the APC was funded by the Institute for Education Sciences of Universidad Politécnica de Madrid.

Institutional Review Board Statement

The study was reviewed by the Institutional Research Ethics Committee (CIEI) of Tecnológico de Monterrey and was granted ethical exemption status (IFE-2024-01).

Informed Consent Statement

Informed consent was obtained from all participants involved in the study. Participation was voluntary and all data were collected and analyzed anonymously.

Data Availability Statement

Data are available from the authors upon reasonable request.

Acknowledgments

The authors would like to acknowledge the financial support of Writing Lab, Institute for the Future of Education, Tecnologico de Monterrey, Mexico, in the production of this work.

Conflicts of Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Appendix A

Table A1. Criteria for Assessing Functionality.

Functionality | Category | Detail | Average LDH | Average Repositories | Difference | Frequency LDH (%) | Frequency Repositories (%)
Registration/Login | F01C1 | F01C1D1 | 4.3 | 3.9 | +0.4 | 86 | 78
Registration/Login | F01C1 | F01C1D2 | 2.4 | 1.7 | +0.7 | 48 | 35
Registration/Login | F01C1 | F01C1D3 | 2 | 1.3 | +0.7 | 41 | 26
Registration/Login | F01C1 | F01C1D4 | 2 | 2 | – | 39 | 40
Registration/Login | F01C2 | F01C2D1 | 4.3 | 4 | +0.3 | 87 | 80
Registration/Login | F01C2 | F01C2D2 | 0.8 | 0.7 | +0.1 | 15 | 14
Registration/Login | F01C2 | F01C2D3 | 0.6 | 1 | −0.4 | 12 | 20
Registration/Login | F01C3 | F01C3D1 | 0.6 | 0.9 | −0.3 | 13 | 17
Registration/Login | F01C3 | F01C3D2 | 3.9 | 3.4 | +0.5 | 78 | 67
Registration/Login | F01C3 | F01C3D3 | 0.4 | 0.8 | −0.4 | 9 | 16
Languages/Localization | F02C1 | F02C1D1 | 2.6 | 2.9 | −0.3 | 51 | 59
Languages/Localization | F02C1 | F02C1D2 | 0.5 | 0.4 | +0.1 | 10 | 9
Languages/Localization | F02C1 | F02C1D3 | 0.3 | 0.2 | +0.1 | 7 | 5
Languages/Localization | F02C1 | F02C1D4 | 1.2 | 1.1 | +0.1 | 23 | 22
Languages/Localization | F02C1 | F02C1D5 | 0.5 | 0.4 | +0.1 | 10 | 9
Languages/Localization | F02C1 | F02C1D6 | 0.3 | 0.3 | – | 5 | 5
Languages/Localization | F02C2 | F02C2D1 | 2.3 | 2.4 | −0.1 | 46 | 49
Languages/Localization | F02C2 | F02C2D2 | 2.7 | 2.6 | +0.1 | 54 | 51
Access/Availability | F03C1 | F03C1D1 | 4.4 | 4.4 | – | 88 | 88
Access/Availability | F03C1 | F03C1D2 | 1.2 | 1 | +0.2 | 23 | 20
Access/Availability | F03C1 | F03C1D3 | 1 | 0.9 | +0.1 | 21 | 18
Access/Availability | F03C1 | F03C1D4 | 3 | 2.7 | +0.3 | 61 | 53
Access/Availability | F03C1 | F03C1D5 | 1.8 | 1.5 | +0.3 | 35 | 30
Access/Availability | F03C1 | F03C1D6 | 1.2 | 1 | +0.2 | 24 | 21
Access/Availability | F03C2 | F03C2D1 | 4.1 | 4.1 | – | 83 | 82
Access/Availability | F03C2 | F03C2D2 | 4.2 | 4.1 | +0.1 | 84 | 81
Access/Availability | F03C2 | F03C2D3 | 4.9 | 5.0 | −0.1 | 99 | 99
Access/Availability | F03C3 | F03C3D1 | 3.1 | 3.1 | – | 62 | 62
Access/Availability | F03C3 | F03C3D2 | 1.9 | 1.9 | – | 38 | 38
Content Management | F04C1 | F04C1D1 | 3.8 | 3.5 | +0.3 | 75 | 70
Content Management | F04C1 | F04C1D2 | 4 | 4 | – | 81 | 81
Content Management | F04C1 | F04C1D3 | 2.8 | 2.9 | −0.1 | 57 | 57
Content Management | F04C1 | F04C1D4 | 2 | 1.7 | +0.3 | 40 | 35
Content Management | F04C1 | F04C1D5 | 2.7 | 2.3 | +0.4 | 54 | 46
Content Management | F04C1 | F04C1D6 | 0.7 | 0.8 | −0.1 | 14 | 17
Content Management | F04C2 | F04C2D1 | 4.6 | 4.5 | +0.1 | 92 | 89
Content Management | F04C2 | F04C2D2 | 1.6 | 1.4 | +0.2 | 33 | 27
Content Management | F04C2 | F04C2D3 | 1.4 | 1.4 | – | 29 | 27
Content Management | F04C2 | F04C2D4 | 2.9 | 2.7 | +0.2 | 57 | 54
Content Management | F04C3 | F04C3D1 | 0.5 | 0.7 | −0.2 | 10 | 13
Content Management | F04C3 | F04C3D2 | 0.4 | 0.3 | +0.1 | 7 | 6
Content Management | F04C3 | F04C3D3 | 0.1 | 0 | +0.1 | 2 | 1
Content Management | F04C3 | F04C3D4 | 4 | 4 | – | 81 | 80
Content Management | F04C4 | F04C4D1 | 1.2 | 1.2 | – | 24 | 25
Content Management | F04C4 | F04C4D2 | 2.5 | 2.4 | +0.1 | 50 | 49
Content Management | F04C4 | F04C4D3 | 0.5 | 0.4 | +0.1 | 9 | 8
Content Management | F04C5 | F04C5D1 | 1.2 | 1.2 | – | 25 | 23
Content Management | F04C5 | F04C5D2 | 3 | 3.1 | −0.1 | 60 | 61
Content Management | F04C5 | F04C5D3 | 0.7 | 0.8 | −0.1 | 15 | 15
Personalization/Adaptability | F05C1 | F05C1D1 | 2.1 | 2.6 | −0.5 | 43 | 52
Personalization/Adaptability | F05C1 | F05C1D2 | 2.9 | 2.4 | +0.4 | 57 | 48
Personalization/Adaptability | F05C2 | F05C2D1 | 2.5 | 2.7 | −0.2 | 51 | 55
Personalization/Adaptability | F05C2 | F05C2D2 | 2.5 | 2.3 | +0.2 | 49 | 45
Personalization/Adaptability | F05C3 | F05C3D1 | 2.7 | 3.1 | −0.4 | 53 | 61
Personalization/Adaptability | F05C3 | F05C3D2 | 2.3 | 1.9 | +0.4 | 47 | 39
Personalization/Adaptability | F05C4 | F05C4D1 | 2.8 | 3.2 | −0.4 | 57 | 65
Personalization/Adaptability | F05C4 | F05C4D2 | 2.2 | 1.8 | +0.4 | 43 | 35
Assessment/Tracking | F06C1 | F06C1D1 | 2.7 | 2.2 | +0.5 | 54 | 44
Assessment/Tracking | F06C1 | F06C1D2 | 2.1 | 1.9 | +0.2 | 42 | 37
Assessment/Tracking | F06C1 | F06C1D3 | 1.4 | 1.5 | −0.1 | 29 | 29
Assessment/Tracking | F06C1 | F06C1D4 | 1.9 | 2.2 | −0.3 | 37 | 45
Assessment/Tracking | F06C2 | F06C2D1 | 1.3 | 1.2 | +0.1 | 27 | 23
Assessment/Tracking | F06C2 | F06C2D2 | 1.8 | 1.3 | +0.5 | 36 | 26
Assessment/Tracking | F06C2 | F06C2D3 | 1.3 | 1.1 | +0.2 | 25 | 22
Assessment/Tracking | F06C2 | F06C2D4 | 2.5 | 3 | −0.5 | 50 | 60
Assessment/Tracking | F06C3 | F06C3D1 | 2.1 | 2.4 | −0.3 | 41 | 47
Assessment/Tracking | F06C3 | F06C3D2 | 1.4 | 1.1 | +0.3 | 29 | 23
Assessment/Tracking | F06C3 | F06C3D3 | 1.5 | 1.5 | – | 30 | 30
Communication/Collaboration | F07C1 | F07C1D1 | 1.3 | 1.8 | −0.5 | 26 | 37
Communication/Collaboration | F07C1 | F07C1D2 | 0.1 | 0.1 | – | 2 | 1
Communication/Collaboration | F07C1 | F07C1D3 | 3.6 | 3.1 | +0.5 | 72 | 62
Communication/Collaboration | F07C2 | F07C2D1 | 1.7 | 1.0 | +0.7 | 34 | 20
Communication/Collaboration | F07C2 | F07C2D2 | 1 | 1.5 | −0.5 | 19 | 29
Communication/Collaboration | F07C2 | F07C2D3 | 2.3 | 2.5 | −0.2 | 47 | 50
Communication/Collaboration | F07C3 | F07C3D1 | 2.2 | 2.2 | – | 44 | 45
Communication/Collaboration | F07C3 | F07C3D2 | 2.4 | 1.9 | +0.5 | 49 | 38
Communication/Collaboration | F07C3 | F07C3D3 | 0.3 | 0.9 | −0.6 | 7 | 18
Communication/Collaboration | F07C4 | F07C4D1 | 0.4 | 0.4 | – | 8 | 8
Communication/Collaboration | F07C4 | F07C4D2 | 3.3 | 3.6 | −0.3 | 66 | 72
Communication/Collaboration | F07C4 | F07C4D3 | 1.2 | 0.7 | +0.5 | 24 | 14
Communication/Collaboration | F07C4 | F07C4D4 | 0.1 | 0.3 | −0.2 | 2 | 6
Gamification | F08C1 | F08C1D1 | 1.4 | 1.2 | +0.2 | 27 | 23
Gamification | F08C1 | F08C1D2 | 1 | 0.6 | +0.4 | 20 | 12
Gamification | F08C1 | F08C1D3 | 0 | 0 | – | 0 | 0
Gamification | F08C1 | F08C1D4 | 0.2 | 0.1 | +0.1 | 5 | 2
Gamification | F08C1 | F08C1D5 | 2.4 | 3.1 | −0.7 | 48 | 62
Gamification | F08C2 | F08C2D1 | 1.4 | 1.1 | +0.3 | 28 | 22
Gamification | F08C2 | F08C2D2 | 1.2 | 0.8 | +0.4 | 24 | 15
Gamification | F08C2 | F08C2D3 | 2.4 | 3.1 | −0.7 | 48 | 62
Security/Privacy | F09C1 | F09C1D1 | 2.4 | 2.7 | −0.3 | 48 | 55
Security/Privacy | F09C1 | F09C1D2 | 0.1 | 0.1 | – | 3 | 1
Security/Privacy | F09C1 | F09C1D3 | 2.3 | 1.9 | +0.4 | 47 | 38
Security/Privacy | F09C1 | F09C1D4 | 0.1 | 0.3 | −0.2 | 3 | 6
Security/Privacy | F09C2 | F09C2D1 | 1.3 | 1.7 | −0.4 | 27 | 34
Security/Privacy | F09C2 | F09C2D2 | 0.5 | 0.2 | +0.3 | 10 | 5
Security/Privacy | F09C2 | F09C2D3 | 2.5 | 2 | +0.5 | 50 | 41
Security/Privacy | F09C2 | F09C2D4 | 0.6 | 1 | −0.4 | 13 | 20
Security/Privacy | F09C3 | F09C3D1 | 0.5 | 0.3 | +0.2 | 11 | 6
Security/Privacy | F09C3 | F09C3D2 | 4.2 | 4 | +0.2 | 83 | 79
Security/Privacy | F09C3 | F09C3D3 | 0.3 | 0.7 | −0.4 | 6 | 15
Security/Privacy | F09C4 | F09C4D1 | 2.6 | 2.8 | +0.2 | 53 | 56
Security/Privacy | F09C4 | F09C4D2 | 0.8 | 0.5 | +0.3 | 16 | 10
Security/Privacy | F09C4 | F09C4D3 | 1.5 | 1.7 | −0.2 | 31 | 34
Certifications/Recognitions | F10C1 | F10C1D1 | 1.9 | 1.8 | +0.1 | 39 | 35
Certifications/Recognitions | F10C1 | F10C1D2 | 0.5 | 0.2 | +0.3 | 10 | 4
Certifications/Recognitions | F10C1 | F10C1D3 | 0.2 | 0.1 | +0.1 | 5 | 2
Certifications/Recognitions | F10C1 | F10C1D4 | 2.3 | 2.9 | −0.6 | 46 | 58
Certifications/Recognitions | F10C2 | F10C2D1 | 2.6 | 1.9 | +0.7 | 52 | 38
Certifications/Recognitions | F10C2 | F10C2D2 | 2.4 | 3.1 | −0.7 | 48 | 62
Note: – marks a cell left blank in the source table (no difference between the two averages).
Table A2. Criteria for Assessing Usability.

Usability | Category | Detail | Average LDH | Average Repositories | Difference | Frequency LDH (%) | Frequency Repositories (%)
Effectiveness | U01C1 | U01C1D1 | 0.1 | 0.2 | −0.1 | 1 | 3
Effectiveness | U01C1 | U01C1D2 | 0.2 | 0.3 | −0.1 | 4 | 6
Effectiveness | U01C1 | U01C1D3 | 0.5 | 0.8 | −0.3 | 10 | 15
Effectiveness | U01C1 | U01C1D4 | 1.6 | 1.6 | – | 33 | 33
Effectiveness | U01C1 | U01C1D5 | 2.6 | 2.1 | +0.5 | 52 | 43
Effectiveness | U01C2 | U01C2D1 | 0.1 | 0.2 | −0.1 | 2 | 4
Effectiveness | U01C2 | U01C2D2 | 0.2 | 0.3 | −0.1 | 4 | 6
Effectiveness | U01C2 | U01C2D3 | 0.6 | 0.8 | −0.2 | 13 | 16
Effectiveness | U01C2 | U01C2D4 | 1.9 | 1.8 | +0.1 | 38 | 36
Effectiveness | U01C2 | U01C2D5 | 2.2 | 1.9 | +0.3 | 43 | 38
Effectiveness | U01C3 | U01C3D1 | 0.1 | 0.2 | −0.1 | 1 | 4
Effectiveness | U01C3 | U01C3D2 | 0.2 | 0.4 | −0.2 | 4 | 7
Effectiveness | U01C3 | U01C3D3 | 0.7 | 0.8 | −0.1 | 14 | 16
Effectiveness | U01C3 | U01C3D4 | 1.8 | 1.7 | +0.1 | 35 | 34
Effectiveness | U01C3 | U01C3D5 | 2.3 | 1.9 | +0.4 | 46 | 38
Effectiveness | U01C4 | U01C4D1 | – | 0.1 | −0.1 | 1 | 3
Effectiveness | U01C4 | U01C4D2 | 0.2 | 0.3 | −0.1 | 3 | 6
Effectiveness | U01C4 | U01C4D3 | 0.6 | 0.8 | −0.2 | 11 | 15
Effectiveness | U01C4 | U01C4D4 | 2 | 1.9 | −0.1 | 39 | 38
Effectiveness | U01C4 | U01C4D5 | 2.3 | 1.9 | +0.4 | 46 | 38
Efficiency | U02C1 | U02C1D1 | 0.1 | 0.2 | −0.1 | 1 | 5
Efficiency | U02C1 | U02C1D2 | 0.2 | 0.4 | −0.2 | 4 | 8
Efficiency | U02C1 | U02C1D3 | 0.6 | 0.8 | −0.2 | 13 | 17
Efficiency | U02C1 | U02C1D4 | 1.8 | 1.7 | −0.1 | 37 | 34
Efficiency | U02C1 | U02C1D5 | 2.3 | 1.8 | +0.5 | 45 | 36
Efficiency | U02C2 | U02C2D1 | 0.1 | 0.3 | −0.2 | 1 | 6
Efficiency | U02C2 | U02C2D2 | 0.4 | 0.5 | −0.1 | 8 | 9
Efficiency | U02C2 | U02C2D3 | 0.8 | 1 | −0.2 | 17 | 19
Efficiency | U02C2 | U02C2D4 | 2 | 1.7 | −0.3 | 41 | 35
Efficiency | U02C2 | U02C2D5 | 1.7 | 1.5 | +0.2 | 34 | 31
Efficiency | U02C3 | U02C3D1 | 0.1 | 0.2 | −0.1 | 2 | 4
Efficiency | U02C3 | U02C3D2 | 0.3 | 0.5 | −0.2 | 6 | 10
Efficiency | U02C3 | U02C3D3 | 0.8 | 1 | −0.2 | 17 | 19
Efficiency | U02C3 | U02C3D4 | 2 | 1.8 | +0.2 | 39 | 36
Efficiency | U02C3 | U02C3D5 | 1.8 | 1.5 | +0.3 | 36 | 31
Satisfaction | U03C1 | U03C1D1 | 0.1 | 0.3 | −0.2 | 2 | 6
Satisfaction | U03C1 | U03C1D2 | 0.3 | 0.5 | −0.2 | 6 | 9
Satisfaction | U03C1 | U03C1D3 | 0.7 | 0.8 | −0.1 | 13 | 17
Satisfaction | U03C1 | U03C1D4 | 1.9 | 1.8 | +0.1 | 38 | 36
Satisfaction | U03C1 | U03C1D5 | 2 | 1.6 | +0.4 | 41 | 33
Satisfaction | U03C2 | U03C2D1 | 0.1 | 0.3 | −0.2 | 2 | 5
Satisfaction | U03C2 | U03C2D2 | 0.3 | 0.5 | −0.2 | 6 | 9
Satisfaction | U03C2 | U03C2D3 | 0.7 | 0.9 | −0.2 | 15 | 18
Satisfaction | U03C2 | U03C2D4 | 1.9 | 1.8 | +0.1 | 39 | 35
Satisfaction | U03C2 | U03C2D5 | 2 | 1.6 | +0.2 | 39 | 33
Satisfaction | U03C3 | U03C3D1 | 0.1 | 0.3 | −0.2 | 2 | 6
Satisfaction | U03C3 | U03C3D2 | 0.4 | 0.5 | −0.1 | 8 | 11
Satisfaction | U03C3 | U03C3D3 | 0.8 | 0.9 | −0.1 | 15 | 18
Satisfaction | U03C3 | U03C3D4 | 1.8 | 1.7 | +0.1 | 37 | 34
Satisfaction | U03C3 | U03C3D5 | 1.9 | 1.6 | +0.3 | 38 | 31
Satisfaction | U03C4 | U03C4D1 | 0.1 | 0.2 | −0.1 | 2 | 4
Satisfaction | U03C4 | U03C4D2 | 0.3 | 0.4 | −0.1 | 5 | 9
Satisfaction | U03C4 | U03C4D3 | 0.7 | 1.1 | −0.4 | 15 | 21
Satisfaction | U03C4 | U03C4D4 | 1.9 | 1.6 | +0.3 | 38 | 32
Satisfaction | U03C4 | U03C4D5 | 2 | 1.7 | +0.3 | 40 | 33
Simplicity | U04C1 | U04C1D1 | 0.1 | 0.2 | −0.1 | 2 | 5
Simplicity | U04C1 | U04C1D2 | 0.2 | 0.5 | −0.3 | 4 | 9
Simplicity | U04C1 | U04C1D3 | 0.8 | 0.8 | – | 16 | 16
Simplicity | U04C1 | U04C1D4 | 2 | 1.9 | −0.1 | 41 | 38
Simplicity | U04C1 | U04C1D5 | 1.9 | 1.6 | +0.3 | 38 | 33
Simplicity | U04C2 | U04C2D1 | 0.1 | 0.3 | −0.2 | 2 | 6
Simplicity | U04C2 | U04C2D2 | 0.3 | 0.5 | −0.2 | 6 | 9
Simplicity | U04C2 | U04C2D3 | 0.8 | 0.8 | – | 16 | 17
Simplicity | U04C2 | U04C2D4 | 1.8 | 1.7 | +0.1 | 35 | 34
Simplicity | U04C2 | U04C2D5 | 2 | 1.7 | +0.3 | 41 | 34
Simplicity | U04C3 | U04C3D1 | 0.1 | 0.2 | −0.1 | 2 | 4
Simplicity | U04C3 | U04C3D2 | 0.4 | 0.4 | – | 7 | 8
Simplicity | U04C3 | U04C3D3 | 0.8 | 0.6 | +0.2 | 17 | 12
Simplicity | U04C3 | U04C3D4 | 1.7 | 1.9 | −0.2 | 33 | 38
Simplicity | U04C3 | U04C3D5 | 2 | 1.9 | +0.1 | 41 | 38
Simplicity | U04C4 | U04C4D1 | 0.1 | 0.2 | −0.1 | 1 | 4
Simplicity | U04C4 | U04C4D2 | 0.2 | 0.4 | −0.2 | 3 | 9
Simplicity | U04C4 | U04C4D3 | 0.6 | 0.7 | −0.1 | 12 | 14
Simplicity | U04C4 | U04C4D4 | 2.1 | 1.9 | +0.2 | 41 | 38
Simplicity | U04C4 | U04C4D5 | 2.1 | 1.8 | +0.3 | 42 | 36
Compatibility and Accessibility | U05C1 | U05C1D1 | – | 0.1 | −0.1 | 1 | 3
Compatibility and Accessibility | U05C1 | U05C1D2 | 0.1 | 0.2 | −0.1 | 2 | 4
Compatibility and Accessibility | U05C1 | U05C1D3 | 0.4 | 0.5 | −0.1 | 9 | 10
Compatibility and Accessibility | U05C1 | U05C1D4 | 1.9 | 1.8 | +0.1 | 38 | 35
Compatibility and Accessibility | U05C1 | U05C1D5 | 2.5 | 2.4 | +0.1 | 51 | 48
Compatibility and Accessibility | U05C2 | U05C2D1 | 0.1 | 0.2 | −0.1 | 1 | 4
Compatibility and Accessibility | U05C2 | U05C2D2 | 0.2 | 0.3 | −0.1 | 4 | 5
Compatibility and Accessibility | U05C2 | U05C2D3 | 0.8 | 0.8 | – | 16 | 16
Compatibility and Accessibility | U05C2 | U05C2D4 | 2 | 1.8 | +0.2 | 40 | 35
Compatibility and Accessibility | U05C2 | U05C2D5 | 1.9 | 2 | −0.1 | 39 | 40
Interactivity | U06C1 | U06C1D1 | – | 0.2 | −0.2 | 1 | 4
Interactivity | U06C1 | U06C1D2 | 0.1 | 0.2 | −0.1 | 3 | 4
Interactivity | U06C1 | U06C1D3 | 0.4 | 0.6 | −0.2 | 8 | 12
Interactivity | U06C1 | U06C1D4 | 1.9 | 1.8 | +0.1 | 37 | 36
Interactivity | U06C1 | U06C1D5 | 2.5 | 2.2 | +0.3 | 51 | 44
Interactivity | U06C2 | U06C2D1 | 0.1 | 0.2 | −0.1 | 1 | 3
Interactivity | U06C2 | U06C2D2 | 0.1 | 0.2 | −0.1 | 2 | 5
Interactivity | U06C2 | U06C2D3 | 0.5 | 0.6 | −0.1 | 10 | 12
Interactivity | U06C2 | U06C2D4 | 1.8 | 1.9 | −0.1 | 36 | 37
Interactivity | U06C2 | U06C2D5 | 2.5 | 2.1 | +0.4 | 50 | 42
Interactivity | U06C3 | U06C3D1 | 0.1 | 0.2 | −0.1 | 1 | 4
Interactivity | U06C3 | U06C3D2 | 0.1 | 0.4 | −0.3 | 3 | 7
Interactivity | U06C3 | U06C3D3 | 0.6 | 0.6 | – | 12 | 13
Interactivity | U06C3 | U06C3D4 | 2.2 | 2.1 | +0.1 | 45 | 42
Interactivity | U06C3 | U06C3D5 | 2 | 1.7 | +0.3 | 39 | 34
User-Centered Design | U07C1 | U07C1D1 | 0.1 | 0.2 | −0.1 | 2 | 5
User-Centered Design | U07C1 | U07C1D2 | 0.2 | 0.5 | −0.3 | 4 | 10
User-Centered Design | U07C1 | U07C1D3 | 0.8 | 0.9 | −0.1 | 17 | 19
User-Centered Design | U07C1 | U07C1D4 | 2.1 | 1.7 | +0.4 | 41 | 34
User-Centered Design | U07C1 | U07C1D5 | 1.8 | 1.6 | +0.2 | 37 | 33
User-Centered Design | U07C2 | U07C2D1 | 0.2 | 0.7 | −0.5 | 4 | 14
User-Centered Design | U07C2 | U07C2D2 | 0.4 | 0.6 | −0.2 | 8 | 12
User-Centered Design | U07C2 | U07C2D3 | 1.1 | 0.9 | +0.2 | 21 | 19
User-Centered Design | U07C2 | U07C2D4 | 1.5 | 1.5 | – | 30 | 29
User-Centered Design | U07C2 | U07C2D5 | 1.9 | 1.3 | +0.6 | 38 | 26
User-Centered Design | U07C3 | U07C3D1 | 0.1 | 0.3 | −0.2 | 2 | 6
User-Centered Design | U07C3 | U07C3D2 | 0.2 | 0.6 | −0.4 | 4 | 12
User-Centered Design | U07C3 | U07C3D3 | 0.8 | 1 | −0.2 | 16 | 19
User-Centered Design | U07C3 | U07C3D4 | 2.1 | 1.5 | +0.6 | 41 | 31
User-Centered Design | U07C3 | U07C3D5 | 1.8 | 1.6 | +0.2 | 36 | 32
Note: – marks a cell left blank in the source table (no difference between the two averages, or a value not reported).

References

1. Tawil, S.; Miao, F. Steering the Digital Transformation of Education: UNESCO's Human-Centered Approach. Front. Digit. Educ. 2024, 1, 51–58.
2. Vincent-Lancrin, S.; González-Sancho, C. Interoperability: Unifying and Maximising Data Reuse within Digital Education Ecosystems; OECD Publishing: Paris, France, 2023.
3. Bagga, M.K.; Agrati, L.S. Digitalization in education: Developing tools for effective learning and personalisation of education. Front. Educ. 2024, 9, 1463596.
4. Prado, B.D.B.; Gobbo Junior, J.A.; Bezerra, B.S. Emerging themes for digital accessibility in education. Sustainability 2023, 15, 11392.
5. Chong, H.T.; Lim, C.K.; Ahmed, M.F.; Tan, K.L.; Mokhtar, M.B. Virtual reality usability and accessibility for cultural heritage practices: Challenges mapping and recommendations. Electronics 2021, 10, 1430.
6. Fonseca, D.; García-Peñalvo, F.J.; Camba, J.D. New methods and technologies for enhancing usability and accessibility of educational data. Univers. Access Inf. Soc. 2021, 20, 421–427.
7. Dimitrova, M.; Bogdanova, G.; Noev, N.; Sabev, N.; Angelov, G.; Paunski, Y.; Ekmekci, M.; Krastev, A. Digital Accessibility for People with Special Needs: Conceptual Models and Innovative Ecosystems. In Proceedings of the 2023 8th International Conference on Smart and Sustainable Technologies (SpliTech), Split/Bol, Croatia, 20–23 June 2023; IEEE: New York, NY, USA, 2023; pp. 1–5.
8. Timbi-Sisalima, C.; Sánchez-Gordón, M.; Hilera-Gonzalez, J.R.; Otón-Tortosa, S. Quality Assurance in E-Learning: A Proposal from Accessibility to Sustainability. Sustainability 2022, 14, 3052.
9. Granić, A. Technology Acceptance and Adoption in Education. In Handbook of Open, Distance and Digital Education; Springer: Singapore, 2022.
10. Rusu, C.; Rusu, V.; Roncagliolo, S.; González, C. Usability and user experience: What should we care about? Int. J. Inf. Technol. Syst. Approach 2015, 8, 1–12.
11. Chvanova, M.S. Digital educational ecosystem's functional abilities for promotion of master's degree students' research and innovative activities. Tambov Univ. Rev. Ser. Humanit. 2023, 28, 1043–1062.
12. Krakowski, A.; Greenwald, E.; Roman, N.; Morales, C.; Loper, S. Computational Thinking for Science: Positioning coding as a tool for doing science. J. Res. Sci. Teach. 2024, 61, 1574–1608.
13. Márquez Díaz, J.E.; Domínguez Saldaña, C.A.; Rodríguez Avila, C.A. Virtual World as a Resource for Hybrid Education. Int. J. Emerg. Technol. Learn. 2020, 15, 94–109.
14. Briscoe, G.; Sadedin, S.; Wilde, P. Digital Ecosystems: Ecosystem-Oriented Architectures. Nat. Comput. 2011, 10, 1143–1194.
15. Saggah, A.; Atkins, A.S.; Campion, R.J. A Repository Collaboration Model to Gamify Education Using Synergistic Digital Hub. In Proceedings of the 2020 4th World Conference on Smart Trends in Systems, Security and Sustainability (WorldS4), London, UK, 27–28 July 2020; IEEE: New York, NY, USA, 2020; pp. 651–655.
16. Langegård, U.; Kiani, K.; Nielsen, S.J.; Svensson, P.A. Nursing students' experiences of a pedagogical transition from campus learning to distance learning using digital tools. BMC Nurs. 2021, 20, 23.
17. Kucirkova, N.; Littleton, K. Digital learning hubs: Theoretical and practical ideas for innovating massive open online courses. Learn. Media Technol. 2017, 42, 324–330.
18. Crupi, A.; Del Sarto, N.; Di Minin, A.; Gregori, G.L.; Lepore, D.; Marinelli, L.; Spigarelli, F. The digital transformation of SMEs–a new knowledge broker called the digital innovation hub. J. Knowl. Manag. 2020, 24, 1263–1288.
19. Martins, D. Digital Human Resources Management HUB: Exploring their Importance as Learning Space. In Proceedings of the 23rd European Conference on Knowledge Management, Naples, Italy, 1–2 September 2022; University of Aveiro: Aveiro, Portugal, 2022; Volume 2.
20. Barykin, S.; Kapustina, I.; Kirillova, T.; Yadykin, V.; Konnikov, Y. Economics of Digital Ecosystems. J. Open Innov. Technol. Mark. Complex. 2020, 6, 124.
21. Santos-Hermosa, G.; Ferran-Ferrer, N.; Abadal, E. Repositories of open educational resources: An assessment of reuse and educational aspects. Int. Rev. Res. Open Distrib. Learn. 2017, 18, 84–120.
22. Richardson, J.C.; Castellanos Reyes, D.; Janakiraman, S.; Duha, M.S.U. The process of developing a digital repository for online teaching using design-based research. TechTrends 2023, 67, 217–230.
23. Zibani, P.; Rajkoomar, M.; Naicker, N. A systematic review of faculty research repositories at higher education institutions. Digit. Libr. Perspect. 2022, 38, 237–248.
24. Rivas, A.; González-Briones, A.; Hernández, G.; Prieto, J.; Chamoso, P. Artificial neural network analysis of the academic performance of students in virtual learning environments. Neurocomputing 2021, 423, 713–720.
25. Dodero, J.; Palomo-Duarte, M.; Ruiz-Rube, I.; Traverso, I.; Mota, J.; Balderas, A. Learning Technologies and Semantic Integration of Learning Resources. IEEE Rev. Iberoam. Tecnol. Aprendiz. 2015, 10, 11–16.
26. Rötkönen, E.; Winschiers-Theophilus, H.; Winschiers-Goagoses, N.; Zaman, T.; Itenge, H.; Tan, D.Y.W.; Sutinen, E. Creating Smart Connected Learning Ecosystems: A Hybrid Model for Design-Based Learning. IxD&A 2022, 52, 81–100.
27. Vasileva, T.; Tchoumatchenko, V.; Lakkala, M.; Kosonen, K. Infrastructure supporting collaborative project based learning in engineering education. Int. J. Eng. Educ. 2011, 27, 656–669.
28. Jacka, L. Successful Integration of Virtual Worlds in Learning Environments: A Case Study of a Supportive Learning Ecosystem. Interdiscip. J. Virtual Learn. Med. Sci. 2021, 12, 169–176.
29. Morin, E. Introducción al Pensamiento Complejo; Gedisa Editorial: Barcelona, Spain, 1995.
30. Montoya, M.S.R.; McGreal, R.; Agbu, J.-F.O. Horizontes digitales complejos en el futuro de la educación 4.0: Luces desde las recomendaciones de UNESCO. RIED Rev. Iberoam. Educ. Distancia 2022, 25, 9–21.
31. Ramírez-Montoya, M.S.; Castillo-Martínez, I.M.; Sanabria-Zepeda, J.C.; Miranda, J. Complex Thinking in the Framework of Education 4.0 and Open Innovation—A Systematic Literature Review. J. Open Innov. Technol. Mark. Complex. 2022, 8, 4.
32. Sanabria-Z, J.C.; Castillo-Martínez, I.M.; González-Pérez, L.I.; Ramírez-Montoya, M.S. Complex thinking through a Transition Design-guided Ideathon: Testing an AI platform on the topic of sharing economy. Front. Educ. 2023, 8, 1186731.
33. Ochieng, V.O.; Gyasi, R.M. Open educational resources and social justice: Potentials and implications for research productivity in higher educational institutions. E-Learn. Digit. Media 2021, 18, 105–124.
34. Chaleta, E. Higher education and sustainable development goals (SDG)-potential contribution of the undergraduate courses of the school of social sciences of the University of Évora. Sustainability 2021, 13, 1828.
35. Chowdhary, K.R. Natural Language Processing. In Fundamentals of Artificial Intelligence; Springer: New Delhi, India, 2020; pp. 603–649.
36. Wu, R.; Yu, Z. Do AI chatbots improve students learning outcomes? Evidence from a meta-analysis. Br. J. Educ. Technol. 2023, 55, 10–33.
37. Chakraborty, P. Computer, computer science, and computational thinking: Relationship between the three concepts. Hum. Behav. Emerg. Technol. 2024, 2024, 5044787.
38. Buitrago Flórez, F.; Casallas, R.; Hernández, M.; Reyes, A.; Restrepo, S.; Danies, G. Changing a generation's way of thinking: Teaching computational thinking through programming. Rev. Educ. Res. 2017, 87, 834–860.
39. Yadav, A.; Stephenson, C.; Hong, H. Computational thinking for teacher education. Commun. ACM 2017, 60, 55–62.
40. Alfaro Ponce, B.; Patiño, A.; Sanabria-Z, J. Components of computational thinking in citizen science games and its contribution to reasoning for complexity through digital game-based learning: A framework proposal. Cogent Educ. 2023, 10, 2191751.
41. Susanti, R.D.; Taufik, M. Analysis of Student Computational Thinking in Solving Social Statistics Problems. Supremum J. Math. Educ. 2021, 5, 1–9.
42. Eickelmann, B.; Labusch, A.; Vennemann, M. Computational thinking and problem-solving in the context of IEA-ICILS 2018. In Empowering Learners for Life in the Digital Age, Proceedings of the IFIP TC 3 Open Conference on Computers in Education, OCCE 2018, Linz, Austria, 24–28 June 2018; Revised Selected Papers; Springer International Publishing: Cham, Switzerland, 2019; pp. 14–23.
43. Alturkistani, A.; Lam, C.; Foley, K.; Stenfors, T.; Blum, E.R.; Van Velthoven, M.H.; Meinert, E. Massive open online course evaluation methods: Systematic review. J. Med. Internet Res. 2020, 22, e13851.
44. Huallpa, J.J.; et al. Exploring the ethical considerations of using Chat GPT in university education. Period. Eng. Nat. Sci. (PEN) 2023, 11, 105.
45. Mehrvarz, M.; Keshavarzi, F.; Heidari, E.; McLaren, B.M. Improving computational thinking: The role of students' networking skills and digital informal learning. Interact. Learn. Environ. 2024, 32, 6081–6095.
46. Huang, R.; Tlili, A.; Chang, T.-W.; Zhang, X.; Nascimbeni, F.; Burgos, D. Disrupted classes, undisrupted learning during COVID-19 outbreak in China: Application of open educational practices and resources. Smart Learn. Environ. 2020, 7, 19.
47. Nusbaum, A.T.; Cuttler, C.; Swindell, S. Open educational resources as a tool for educational equity: Evidence from an introductory psychology class. Front. Educ. 2020, 4, 152.
48. Gherheș, V.; Stoian, C.E.; Fărcașiu, M.A.; Stanici, M. E-learning vs. face-to-face learning: Analyzing students' preferences and behaviors. Sustainability 2021, 13, 4381.
49. Rose, D. Universal design for learning. J. Spec. Educ. Technol. 2000, 15, 47–51.
50. Fatmawati, A. Evaluasi Usability pada Learning Management System OpenLearning Menggunakan System Usability Scale. Inovtek Polbeng-Seri Inform. 2021, 6, 120–134.
51. Creswell, J.W.; Poth, C.N. Qualitative Inquiry and Research Design: Choosing Among Five Approaches; Sage Publications: Thousand Oaks, CA, USA, 2016.
52. May, T.; Perry, B. Social Research: Issues, Methods and Process; McGraw-Hill Education: Maidenhead, UK, 2022.
53. Creswell, J.W.; Creswell, J.D. Research Design: Qualitative, Quantitative, and Mixed Methods Approaches; SAGE Publications: Thousand Oaks, CA, USA, 2017.
54. Arshad, R.; Majeed, A.; Afzal, H.; Muzammal, M.; ur Rahman, A. Evaluation of navigational aspects of Moodle. Int. J. Adv. Comput. Sci. Appl. 2016, 7, 258–264.
55. Ortega-Morán, J.F.; Pagador, J.B.; Sánchez-Peralta, L.F.; Sánchez-González, P.; Noguera, J.; Burgos, D.; Gómez, E.J.; Sánchez-Margallo, F.M. Validation of the three web quality dimensions of a minimally invasive surgery e-learning platform. Int. J. Med. Inform. 2017, 107, 1–10.
56. Soltanzadeh, L.; Sangar, A.B.; Majidzadeh, K. The Review of Usability Evaluation Methods on Tele health or Telemedicine Systems. Front. Health Inform. 2022, 11, 112.
57. Ryan, G.S.; Haroon, M.; Melvin, G. Evaluation of an educational website for parents of children with ADHD. Int. J. Med. Inform. 2015, 84, 974–981.
58. Kenter, R.M.F.; Schønning, A.; Inal, Y. Internet-Delivered Self-help for Adults With ADHD (MyADHD): Usability Study. JMIR Form. Res. 2022, 6, e37137.
59. Gordillo, A.; Barra, E.; Aguirre, S.; Quemada, J. The usefulness of usability and user experience evaluation methods on an e-Learning platform development from a developer's perspective: A case study. In Proceedings of the 2014 IEEE Frontiers in Education Conference (FIE) Proceedings, Madrid, Spain, 22–25 October 2014; IEEE: New York, NY, USA, 2014.
60. Amante, L.; Souza, E.B.; Quintas-Mendes, A.; Miranda-Pinto, M. Designing a mooc on computational thinking, programming and robotics for early childhood educators and primary school teachers: A pilot test evaluation. Educ. Sci. 2023, 13, 863.
61. Umutlu, D. Tpack leveraged: A redesigned online educational technology course for stem preservice teachers. Australas. J. Educ. Technol. 2022, 38, 99–116.
62. Creswell, J.W.; Clark, V.L.P. Designing and Conducting Mixed Methods Research; Sage Publications: Thousand Oaks, CA, USA, 2017.
63. Poma, A.; Rodríguez, G.; Torres, P. User Experience Evaluation in MOOC Platforms: A Hybrid Approach. In Iberoamerican Workshop on Human-Computer Interaction; Springer International Publishing: Cham, Switzerland, 2021; pp. 208–224.
64. Abubakari, M.S.; Hungilo, G. Evaluating an e-Learning platform at graduate school based on user experience evaluation technique. J. Phys. Conf. Ser. 2021, 1737, 012019.
65. Martin, J.L.; Salvatierra, H.A.; González, J.R.H. MOOCs for all: Evaluating the accessibility of top MOOC platforms. Int. J. Eng. Educ. 2016, 32, 2274–2283.
66. Nurbekova, Z.; Aimicheva, G.; Baigusheva, K.; Sembayev, T.; Mukametkali, M. A Decision-Making Platform for Educational Content Assessment Within a Stakeholder-Driven Digital Educational Ecosystem. Int. J. Eng. Pedagog. 2023, 13, 4–23.
67. Oliva-Cordova, L.M.; Garcia-Cabot, A.; Alejandra Recinos-Fernandez, S.; Suleny Bojorquez-Roque, M.; Amado-Salvatierra, H.R. Evaluating technological acceptance of virtual learning environments (VLE) in an emergency remote situation. Int. J. Eng. Educ. 2022, 38, 421–436.
68. Bojórquez-Roque, M.S.; Garcia-Cabot, A.; Garcia-Lopez, E.; Oliva-Córdova, L.M. Digital competence learning ecosystem in higher education: A mapping and systematic review of the literature. IEEE Access 2024, 12, 87596–87614.
Figure 1. The figure illustrates the positive correlation (r = 0.89, p < 0.001) between the scores of internal functionalities and the perceived usability of digital learning platforms. The trend line shows that platforms with higher functionality scores exhibit higher usability ratings.
Figure 2. Screenshot of edX, an example of a Digital Learning Hub. EdX provides online courses from global universities and institutions, incorporating key functionalities such as personalized learning, interactive assessments, and accessibility features. This figure illustrates the platform's structural design, which was analyzed in terms of usability, accessibility, and learning experience.
Figure 3. Screenshot of HippoCampus.org, an example of an Open Educational Repository. HippoCampus.org offers open educational resources across multiple disciplines, emphasizing content availability and ease of access. This figure represents a repository model focused on resource distribution, which was evaluated based on its functionalities, usability, and accessibility.
Figure 4. This figure shows the association between internal system functionalities and usability across the evaluated cases.
Figure 5. The figure displays the positive correlation between external functionalities and evaluated accessibility, with a trend line reflecting the strong relationship between the two variables and minimal data dispersion.
Figure 6. The figure shows the performance of Digital Learning Hubs and repositories on the accessibility principles, with substantial compliance with Perceivable, moderate compliance with Robust, and deficiencies in Operable and Understandable.
Figure 7. Heatmap illustrating the relationship between usability and accessibility scores across Digital Learning Hubs and educational repositories. Color intensity reflects score magnitude, allowing the identification of patterns, clustering, and relative differences in platform performance.
Figure 8. Distribution of courses in Digital Learning Hubs by thematic category. Bars represent the percentage of courses related to complex thinking, computational thinking, and other topics, highlighting differences in thematic focus across platforms.
Table 1. Open Educational Repositories.

No. | Platform Name | URL
R01 | Agrega | agrega2.es
R02 | Archive | archive.org
R03 | Europeana | europeana.eu
R04 | HippoCampus | hippocampus.org
R05 | MERLOT | merlot.org
R06 | NDL | ndl.iitkgp.ac.in
R07 | OER Commons | oercommons.org
R08 | Open Michigan | open.umich.edu
R09 | OpenStax | openstax.org
R10 | RITEC | repositorio.tec.mx
R11 | TED-Ed | ed.ted.com
R12 | Wikiversity | wikiversity.org
Table 2. Digital Learning Hubs.

No. | Platform Name | URL
H01 | Alison | alison.com
H02 | Coursera | coursera.org
H03 | edX | edx.org
H04 | FutureLearn | futurelearn.com
H05 | GitHub Education | education.github.com
H06 | INTEF | intef.es
H07 | Khan Academy | khanacademy.org
H08 | MiriadaX | miriadax.net
H09 | OpenCourseWare | ocw.mit.edu
H10 | OpenLearn | open.edu/openlearn
H11 | P2PU | p2pu.org
H12 | Saylor Academy | saylor.org
H13 | Udemy | udemy.com
Table 3. Criteria For Assessing Functionalities.

No. | Functionality (10) | Attributes (32)
F01 | Registration/Login | Login methods, password recovery and security, ease of registration
F02 | Languages/Localization | Available languages, cultural adaptation
F03 | Access/Availability | Content language, compatible devices, offline access to resources
F04 | Content Management | Content types, resource organization, content updates, resource evaluation, user-generated content management
F05 | Personalization/Adaptability | Interest-based recommendations, progress adaptation, skill-based adjustments, personalized learning paths
F06 | Assessment/Tracking | Assessment tools, progress monitoring, customized reporting
F07 | Communication/Collaboration | Communication methods, collaborative tools, automated notifications, technical support
F08 | Gamification | Gamification elements, progress-based rewards
F09 | Security/Privacy | Data protection, role-based access levels, security standards compliance, help center
F10 | Certifications/Recognition | Certification types, official recognition
Note: Functionalities are classified as internal or external according to their direct involvement in the teaching–learning process (internal) or their role in enabling access and contextual interaction with the platform (external).
Table 4. Criteria For Assessing Usability.

No. | Category (7) | Indicators (23)
U01 | Effectiveness | Easily find educational content; facilitates access to necessary content; efficient access to content; provides appropriate tools for accessing content.
U02 | Efficiency | Finds content in minimal time; optimizes effort and clicks required; minimizes effort to locate content.
U03 | Satisfaction | The system is comfortable to use; easy to use; intuitive design with no frustration; overall experience is satisfactory.
U04 | Simplicity | Navigation is straightforward; the interface is clear and not confusing; avoids unnecessary visual elements; interaction is easy without learning new tools.
U05 | Compatibility and Accessibility | Works well across different devices; accessible for users with diverse technical needs.
U06 | Interactivity | Quickly responds to user actions; provides a seamless experience; allows uncomplicated interaction.
U07 | User-Centered Design | Designed with user needs in mind; enables feedback on system usage; understands the user's needs when accessing educational content.
Table 5. Criteria For Assessing Accessibility.

No. | Principle (4) | Guidelines (12)
A01 | Perceivable | 1.1 Provide text alternatives; 1.2 Provide alternatives for time-based media; 1.3 Create adaptable content; 1.4 Make content distinguishable
A02 | Operable | 2.1 Make all functionality keyboard accessible; 2.2 Provide enough time to read and use content; 2.3 Avoid content that causes seizures; 2.4 Help users navigate
A03 | Understandable | 3.1 Ensure text content is readable; 3.2 Make web pages operate predictably; 3.3 Help users avoid and correct errors
A04 | Robust | 4.1 Maximize compatibility with assistive technologies
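As a rough illustration of how one WCAG 2.1 criterion from Table 5 can be checked automatically (the study itself used the TAW tool, not this script), the sketch below flags images that lack a text alternative (guideline 1.1). The helper name and example URL are hypothetical.

```python
# Minimal sketch, assuming requests and beautifulsoup4 are installed: flag <img>
# elements with no non-empty alt attribute (WCAG 2.1 guideline 1.1).
import requests
from bs4 import BeautifulSoup

def images_missing_alt(url: str) -> list[str]:
    """Return the src of every image on the page without a usable alt text."""
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    return [img.get("src", "?") for img in soup.find_all("img")
            if not (img.get("alt") or "").strip()]

# Hypothetical usage: print(images_missing_alt("https://example.org"))
```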
Table 6. Criteria for Assessing Course Availability.

No. | Criterion | Description
CA1 | Platform | Name of the Digital Learning Hub analyzed.
CA2 | Total Courses | Total number of courses available on the platform.
CA3 | Complex Thinking Courses (%) | Percentage of courses related to complex thinking relative to the total.
CA4 | Computational Thinking Courses (%) | Percentage of courses focused on computational thinking relative to the total.
CA5 | Other Categories Courses (%) | Proportion of courses in other subject areas.
CA6 | Access Type | Defines whether courses are free, require enrollment, or are paid.
CA7 | Assessment and Certification | Indicates whether courses include formal assessments and official certification upon completion.
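A minimal sketch of how the percentage criteria CA3–CA5 can be derived from a course catalog follows; the keyword lists and sample titles are illustrative assumptions, not the study's classification scheme.

```python
# Hypothetical keyword-based classification of a course catalog into the
# CA3-CA5 percentages of Table 6; titles and keywords are illustrative only.
courses = ["Introduction to Computational Thinking", "Art History",
           "Complex Systems and Complexity Thinking", "Statistics"]

ct_keywords = ("computational thinking",)
cx_keywords = ("complex",)

n = len(courses)
ct = sum(any(k in c.lower() for k in ct_keywords) for c in courses)
cx = sum(any(k in c.lower() for k in cx_keywords) for c in courses)
print(f"CA4 computational thinking: {100 * ct / n:.0f}%")        # 25%
print(f"CA3 complex thinking:       {100 * cx / n:.0f}%")        # 25%
print(f"CA5 other categories:       {100 * (n - ct - cx) / n:.0f}%")  # 50%
```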
Table 7. Instrument Validation Methods and Reliability Analysis.

Validation Type | Method Used | Details
Content Validation | Expert review | Five experts in educational technology and human–computer interaction evaluated the instrument.
Pilot Test | Student evaluation | Conducted with 15 students to ensure clarity and consistency in the assessment criteria.
Reliability Analysis | Cronbach's Alpha (α) | Functionalities (α = 0.83), Usability (α = 0.87), Overall (α = 0.85).
Accessibility Tool | TAW (Web Accessibility Test) | Automated tool based on WCAG 2.1, widely validated in accessibility studies.
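For reference, the reliability coefficients in Table 7 follow the standard Cronbach's alpha formula, α = k/(k − 1) · (1 − Σσᵢ²/σₜ²), where k is the number of items, σᵢ² the variance of each item, and σₜ² the variance of the total score. A minimal sketch, assuming a respondents × items ratings matrix (the matrix below is hypothetical, not the study's raw data):

```python
# Cronbach's alpha over a respondents x items ratings matrix; the ratings
# below are hypothetical, not the study's data.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

ratings = np.array([[4, 5, 4], [3, 4, 3], [5, 5, 4], [2, 3, 2]])  # hypothetical
print(round(cronbach_alpha(ratings), 2))
```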
Table 8. Assessing Usability in Platforms.

Platform | U01 | U02 | U03 | U04 | U05 | U06 | U07 | x̄ | σ
H01 | 2.3 | 2.3 | 2.4 | 2.2 | 2.3 | 2.3 | 2.2 | 2.3 | 0.07
H02 | 2.5 | 2.4 | 2.5 | 2.4 | 2.5 | 2.4 | 2.4 | 2.44 | 0.05
H03 | 2.4 | 2.3 | 2.4 | 2.3 | 2.4 | 2.4 | 2.3 | 2.36 | 0.05
H04 | 2.2 | 2.1 | 2.2 | 2.1 | 2.2 | 2.2 | 2.1 | 2.16 | 0.05
H05 | 1.9 | 1.9 | 2 | 1.9 | 1.9 | 2 | 1.9 | 1.93 | 0.05
H06 | 1.8 | 1.8 | 1.8 | 1.8 | 1.9 | 1.9 | 1.8 | 1.86 | 0.05
H07 | 2.4 | 2.3 | 2.4 | 2.3 | 2.4 | 2.4 | 2.3 | 2.36 | 0.05
H08 | 2 | 2 | 2.1 | 2 | 2.1 | 2.1 | 2 | 2.06 | 0.03
H09 | 2.3 | 2.2 | 2.3 | 2.2 | 2.3 | 2.3 | 2.3 | 2.26 | 0.05
H10 | 2 | 2 | 2 | 2 | 2.1 | 2.1 | 2 | 2.03 | 0.03
H11 | 1.8 | 1.8 | 1.8 | 1.8 | 1.9 | 1.8 | 1.8 | 1.85 | 0.02
H12 | 1.9 | 1.9 | 1.9 | 1.9 | 2 | 1.9 | 1.9 | 1.94 | 0.02
H13 | 2.3 | 2.2 | 2.3 | 2.2 | 2.3 | 2.3 | 2.3 | 2.26 | 0.05
R01 | 1.7 | 1.6 | 1.7 | 1.6 | 1.7 | 1.7 | 1.7 | 1.67 | 0.0
R02 | 1.8 | 1.8 | 1.8 | 1.8 | 1.9 | 1.8 | 1.8 | 1.85 | 0.0
R03 | 2 | 2 | 2 | 2 | 2.1 | 2.1 | 2 | 2.01 | 0.0
R04 | 1.9 | 1.9 | 2 | 1.9 | 2 | 2 | 1.9 | 1.95 | 0.0
R05 | 2.1 | 2.1 | 2.2 | 2.1 | 2.2 | 2.2 | 2.1 | 2.16 | 0.0
R06 | 1.9 | 1.9 | 1.9 | 1.9 | 2 | 1.9 | 1.9 | 1.95 | 0.0
R07 | 1.9 | 1.9 | 1.9 | 1.9 | 2 | 1.9 | 1.9 | 1.95 | 0.0
R08 | 1.8 | 1.8 | 1.8 | 1.8 | 1.9 | 1.8 | 1.8 | 1.85 | 0.0
R09 | 1.9 | 1.9 | 1.9 | 1.9 | 2 | 1.9 | 1.9 | 1.95 | 0.0
R10 | 1.7 | 1.7 | 1.7 | 1.7 | 1.8 | 1.7 | 1.7 | 1.72 | 0.0
R11 | 2.3 | 2.3 | 2.3 | 2.3 | 2.4 | 2.4 | 2.3 | 2.32 | 0.1
R12 | 1.8 | 1.8 | 1.8 | 1.8 | 1.9 | 1.8 | 1.8 | 1.85 | 0.0
Note: In the published version, color shading visually highlights relative score ranges across items; darker tones indicate higher values and lighter/red tones indicate lower values. All highlighted cells follow the same criteria.
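The summary columns of Table 8 are the row mean (x̄) and sample standard deviation (σ) over the seven usability categories U01–U07. A minimal sketch using H01's published scores:

```python
# Row summary for one platform (H01) from Table 8: mean and sample standard
# deviation (ddof=1) over the seven usability categories U01-U07.
import numpy as np

h01 = np.array([2.3, 2.3, 2.4, 2.2, 2.3, 2.3, 2.2])  # U01..U07 for H01
print(round(h01.mean(), 2), round(h01.std(ddof=1), 2))  # 2.29, 0.07 (table lists 2.3, 0.07)
```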
Table 9. Summary of Hypotheses Testing and Main Findings.

Hypothesis | Result | Comments
H1 | Accepted | A positive correlation (r = 0.89, p < 0.001) confirms the critical role of internal functionalities.
H2 | Accepted | Significant relationships (r = 0.91, p < 0.001) emphasize external functionalities' impact on inclusivity.
H3 | Partially Accepted | High performance in Perceivable; significant deficiencies in Operable and Understandable.
H4 | Accepted | Strong correlation (r = 0.87, p < 0.001); demonstrates the interconnectedness of these variables.
H5 | Partially Accepted | Only 10% of the evaluated platforms provide substantial resources in these areas, underscoring significant opportunities for enhancement.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
