Abstract
Hybrid education is a model that combines different settings within the learning process. In this paper, four dimensions related to different features of the learning process are considered, namely, space, time, language, and frameset. The first one relates to its location, the second one relates to when it takes place, the third one relates to how it is imparted, and the fourth one relates to the way in which it is conducted. The goal is to modify learning features in each session of a course to increase student engagement and improve academic performance. Additionally, this layout may also help students prepare for potential disruptive events in the future, which might have an impact on the way class sessions are run. The results obtained confirmed a statistically significant improvement in academic performance with respect to the previous course, which was taught in a traditional manner, as well as a high level of engagement. However, the actual sample size was not sufficient to detect the effect size achieved; hence, further research should be conducted with a larger sample size.
1. Introduction
Hybrid education integrates traditional on-site learning with innovative online learning via a combination of face-to-face instruction and remote instruction (Kniffin & Greenleaf, 2024). Hybrid education combines the benefits of face-to-face interactions with the use of technology and offers multiple possibilities to tailor the learning process in order to fit the needs of students (Bidarra et al., 2023). In this way, the more direct relationship between teachers and learners provided by on-site education is mixed with the flexibility and customization allowed by online education (Guerrero-Quiñonez et al., 2023). However, hybrid teaching and learning also involve some challenges, such as technological issues and communication problems, along with time management difficulties and assessment complexities (Mayer et al., 2024). Furthermore, this approach may also lead to a lower sense of belonging due to the reduced interactions, as well as altered social dynamics, and even academic performance drops (Klimplová, 2024). Nonetheless, hybrid education has been consolidated since the COVID-19 pandemic and has been widely adopted in multiple educational contexts (Colina-Ysea et al., 2024).
On the other hand, learning environments may be modified in order to improve the learning process or to adapt to changing situations (Volotovska et al., 2024). In fact, there are studies in the literature stating that changing learning environments lead to a rise in students’ interest and motivation (Schweder & Raufelder, 2022). Likewise, other studies highlight the connection between changing aspects of the learning environment and improving academic results (Mørk et al., 2020), which also may enhance the overall classroom experience (Gaskins et al., 2015). In summary, hybrid education requires the integration of on-site and online settings with respect to the learning environment, where each of those settings may be adjusted in different ways in order to better adapt to the particularities of a given course. In this context, the main purpose of this paper was to implement a hybrid education scheme in a home automation course where the learning environment was modified in each session.
Several studies in the literature related to the use of hybrid learning spaces have recently been carried out. For instance, Ramírez-Mera et al. explored the role of digital technologies in the transformation of higher education spaces in both quantitative and qualitative manners, concluding that hybrid environments support high-quality education, which involves multidirectional interactions, as well as adaptability and personalization of the learning process (Ramírez-Mera et al., 2025). Likewise, Monika and Kristanto investigated the applicability of hybrid education in college, concluding that it has positive effects in social interaction, active engagement, and critical inquiry, thus showing the great potential of hybrid classrooms (Monika & Kristanto, 2024). Regarding learning environments, Ma highlighted the different values for some features between on-site and online learning, such as space and time, stating that hybrid learning must effectively combine them all in order to be effective (Ma, 2023).
Gudoniene et al. presented a systematic review on hybrid education focusing on five research questions. The first one was related to the pedagogical frameworks used in hybrid education, where the use of active learning methods was the most adopted solution. The second one was related to the enhancement of students’ engagement in hybrid education, where collaborative and interactive activities were reported as the most appropriate ones. The third one was related to the impact of technological integration in hybrid learning scenarios, which presented a wide array of options, from simple multimedia equipment integrated into an LMS solution to the use of sophisticated robotic telepresence devices. The fourth one was related to assessing the influence of training and support on the ability to implement hybrid education, with the authors concluding that it is a crucial point in successful deployments. And the fifth one was related to the tools for monitoring students’ progress and providing effective support, with the authors concluding that it is necessary to have comprehensive training and support from all stakeholders, such as administrative personnel, the management board, and IT staff (Gudoniene et al., 2025).
Khakimova and Kayumova claimed that the main point for hybrid education to succeed was the implementation of a powerful learning management system to centralize all learning and monitoring resources, which can help manage the on-site and online paradigms at once (Khakimova & Kayumova, 2022). Nørgård presented the concept of hybridity in hybrid learning, which integrates simultaneous teaching for on-site and online attendants and a technological setup to facilitate interactions among all attendants, as well as active engagement and community building for all (Nørgård, 2021). Kokko et al. discussed the importance of effective training and support for teachers in setting up and managing hybrid classes, stating that this is a critical point for success in hybrid education (Kokko et al., 2024). Similarly, Almusaed et al. pointed out the importance of meaningful training and support for students in order to maximize the success of hybrid learning setups as this is the proper way for remote attendants to seamlessly communicate with in-person attendants and with the instructor (Almusaed et al., 2023).
Yumnam and Thomas proposed a set of best practices for hybrid education, such as to invest in professional development, enhance infrastructure and support services, foster a culture of innovation, embrace data-driven decision-making, and promote equity and inclusivity (Yumnam & Thomas, 2021). Li et al. established the most common challenges for teachers in hybrid learning, such as the increase in workload related to class preparation, the difficulty in dealing with face-to-face and online learners at the same time, the unfamiliarity with interactive learning design for both situations, and the difficulties associated with the monitoring of both types of students at once. The main recommendation is to provide hardware and software support in order to tackle the technical challenges inherent to hybrid setups (Li et al., 2023).
Considering this background, the question “What is the main purpose of this study?” can be answered as “To design a course following the hybrid learning paradigm where the learning environment is modified in each session”. A follow-up question is “What are the expected findings of this study?”, whose answer is “The literature points to a relationship between hybrid learning and academic performance, as well as between hybrid learning and engagement”. Another question is “What are the expected findings regarding academic performance and engagement with hybrid learning?”, whose answer is “One of the goals is to establish a quantitative relationship between hybrid learning with a changing learning environment and the variation in academic performance and engagement with respect to a previous course taught in a traditional manner”.
A further question is “If there is a variation in academic performance with hybrid learning, is it statistically significant?”; the results obtained in this study determine whether that variation is statistically significant. A follow-up question is “If there is a variation in academic performance with hybrid learning, is the actual sample size sufficient to detect it?”; the results obtained determine whether the sample size suffices to detect that variation. A last question is “Regarding the variation in engagement, is it statistically significant?”; again, the results obtained determine whether that variation is statistically significant.
Therefore, in order to assess how the changes in the learning environment would affect academic performance, a changing layout was put in place in a home automation course, which is classified within STEM education. The idea was to change the particular settings for each given session in order to check how students would adapt to it. Furthermore, this type of experience could improve preparedness in the case of an unexpected disruptive event such as a pandemic or a natural disaster, which would prevent the learning process from being delivered in the normal way (Bekereci-Sahin & Aslan, 2025). The settings are considered herein as learning dimensions and are defined as space, time, language, and frameset. Specifically, space refers to the place where the learning process is held, time refers to the moment when the learning process takes place, language refers to how participants communicate in the learning process, and frameset refers to the way in which the learning process is conducted.
Hence, the research question proposed is whether it would be possible to set up a changing learning environment in a STEM course, with a high level of engagement and a higher academic performance compared to traditional masterclass courses. In this sense, the research goal is to achieve a high degree of engagement according to a specific construct, as well as a rise in academic performance according to the same course taught in a traditional manner. The results obtained from this comparison between the outcome of this innovative course held this academic year and the traditional course held the last academic year is the main contribution of this paper. In this way, not only were the descriptive statistics from the distribution of scores in both courses studied, but also inferential statistics were calculated in order to check whether the potential increase in the results in the current course was statistically significant, as well as the values of effect size achieved and sample size required. Additionally, a further contribution is the proposal of a coefficient of effort for hybrid learning for measuring the degree of difficulty of the different setups for each dimension.
2. Related Work
To start with, Section 2.1 describes the evolution of personalized learning. Then, Section 2.2 presents an outline of the related work on different dimensions in the education field. Finally, Section 2.3 focuses on the related work on the dimensions considered herein, which are space, time, language, and frameset.
2.1. Evolution of Personalized Learning
Personalized learning can be implemented across different learning modalities. Nonetheless, there are some instances in the literature where neither a traditional nor an online approach to the learning process shows a clear advantage over the other. As stated above, each system has its own benefits and drawbacks. However, it seems that the results with either approach are highly contingent on specific contextual factors (Chenari et al., 2024). The factors influencing the effectiveness of the learning process include those related to learners and teachers, along with those referring to the instructional design, while the technological, institutional, and environmental factors should also be taken into account (Sherif & Amudha, 2025). In fact, there are some instances in the literature regarding studies with courses taught with the same instructor and course materials, though with different instructional modalities. For example, Black et al. concluded that student performance revealed no noteworthy differences among the learning modalities, which were in-person, online, or hybrid (Black et al., 2025). Similarly, Alarifi and Song reported that student performance showed no clear tendency among learning modalities (Alarifi & Song, 2024).
On the other hand, online education could be seen as an evolution of distance education, where instructors and learners were physically separated (Kentnor, 2015). The communications between them evolved as the technology advanced, including parcel post, telephone calls, radio broadcasts, and audio and video tapes (Kurzman, 2013). However, a remarkable milestone in modern remote education was the design of HTTP and HTML around 1990 by Tim Berners-Lee, which led to the concepts of the World Wide Web and the URL system (Berners-Lee, 1999). The capabilities of the World Wide Web have evolved since the Web 1.0 paradigm in the 1990s, when users were merely consumers of information. After that, the Web 2.0 paradigm emerged in the 2000s, where users were both producers and consumers. Afterwards, the Web 3.0 paradigm appeared in the 2010s, where the processing of contents became customizable (Ibrahim, 2021). Additionally, the new Web 4.0 paradigm, which is predicted to rise in the 2020s, has not yet been clearly established. Nonetheless, it is expected to extend the processing of web services with a wide range of technologies, such as the Internet of Things, artificial intelligence, and big data (Almeida, 2017).
There is some parallelism in the literature between the evolution of the concept of Web x.0 and the concept of Education x.0 (Huk, 2021), even though such a comparison is not accepted by all authors. Assuming this similarity, the concept of Education 1.0 was based on a one-way communication with no interaction with the contents. This concept could be associated with static websites in Web 1.0 as students could only access online resources such as websites or e-books (Gerstein, 2014). Then, the concept of Education 2.0 was based on a two-way communication with interaction between contents and users and also among users. This concept could be linked to dynamic websites in Web 2.0 as students could not only communicate but also contribute and collaborate (Vagelatos et al., 2010). In turn, the concept of Education 3.0 was based on a personalized, self-determined education, where co-creation of contents and proactivity were key factors. This concept could be aligned with the semantic web, also known as Web 3.0, where web experiences are tailor-made according to each user’s profile, such that the context is taken into consideration (Twyman, 2014).
However, the concept of Education 4.0 is usually related to Industry 4.0 in the literature, maybe because there is no universal definition of Web 4.0 (Chakraborty et al., 2023). One of the main features of Education 4.0 is the possibility of accessing learning contents from any location, at any time, with any device, and at the student’s own pace (Magetos et al., 2024). Also, personalization could be achieved through the use of customizable software applications. Moreover, the focus on practical activities such as project-based learning, problem-based learning, and team-based learning helps students to not only acquire technical skills but also soft skills (Bonfield et al., 2020). In this way, student-centered learning through the definition of tailor-made learning paths is available thanks to the combination of different technologies in order to create the appropriate learning environment for each individual student (Gueye & Expósito, 2023). The rapid deployment of online education due to COVID-19 led institutions to take advantage of the possibilities offered by Education 4.0 (Li et al., 2023).
Therefore, one of the key points in Education 4.0 relates to the customization of the education process, also known as personalized learning (Gunawardena et al., 2024). This concept represents tailoring instruction in order to meet the particular needs, learning styles, and goals of each given learner (Ning et al., 2025). Personalized learning can be undertaken through direct guidance in order to address the personalized needs of learners, although Education 4.0 brings the necessary technology so as to deliver customized education not only for each learner but also for different circumstances in the learning environment (Tuo et al., 2025). Focusing on changing learning environments, it has been reported that modifications in the learning environment may have a positive influence on the motivation of students (Schweder & Raufelder, 2024). Moreover, it has been stated that changes in the learning environment may foster deeper learning (El Sheikh & Assaad, 2018). Finally, it has been claimed that the use of physically flexible learning spaces may facilitate multimodal pedagogies so as to meet the needs of individual learners (Grannäs & Stavem, 2020).
2.2. Related Work About Learning Dimensions
The concept of dimension has been defined in several ways and according to several features in the literature regarding the teaching/learning process. For instance, the Felder and Silverman Learning Styles Model (FSLSM) addresses the learning differences among students according to four dimensions, each offering two alternative values. The first dimension focuses on processing information and classifies learners as active or reflective. The second one focuses on perceiving information and categorizes learners as sensing or intuitive. The third one focuses on input information and distinguishes between visual and verbal learners. The fourth one focuses on understanding information and divides learners into sequential and global (Jamali & Mohamad, 2018).
Liu et al. reviewed the three main dimensions of the learning experience for students, namely, their perception of the learning environment along with their attitudes and behaviors during the learning process as well as the learning activities (Liu et al., 2023). Toiviainen et al. established six dimensions for properly configuring learning spaces, which are all organized in pairs, namely, social–spatial, material–instrumental, moral–ethical, political–economical, personal–professional, and temporal–developmental (Toiviainen et al., 2022). Chaffar and Frasson proposed six dimensions for affective learning, namely, emotional, social, aesthetic, moral, spiritual, and motivational (Chaffar & Frasson, 2012). Jackson highlighted the dimensions related to affect and emotion in the educational field, treating them as key players (Jackson, 2018).
Nicol et al. exhibited the social dimensions of online learning, related to how learners interact with each other and with their instructors by means of electronic communication networks, leading to the conclusion that the social context of online learning qualitatively differs from that of traditional in-person learning (Nicol et al., 2003). Vega-Martínez et al. discussed the dimensions involved in learning orientation in business-related learning, finding a direct relationship between commitment to learning and competitiveness, as well as between shared vision and competitiveness, whereas the relationship between open-mindedness and competitiveness was not found to be particularly important (Vega-Martínez et al., 2020). Lagrutta et al. described a theoretical framework for learning space structural dimensions, including physical settings, technological resources, organizational resources, actors and interactions, and culture and atmosphere (Lagrutta et al., 2023).
Miyake and Kirschner studied the social and interactive dimensions of collaborative learning, considering the shift from in-class learning, where group members share a common space, to online learning, where asynchronous and distributed groups are formed. The latter are considered as computer-supported collaborative learning (CSCL) environments, where the main difference with classrooms regarding collaborative learning lies in the social aspects, as technological and pedagogical aspects have indeed been improved in recent times (Miyake & Kirschner, 2014). Mohammadi et al. identified three dimensions in flipped learning, namely, input, teaching and learning process, and output. The first one depended on the equipment, learner, and teacher, while the second one depended on learner preparation, teacher preparation, and learning activities and interaction, whereas the third one depended on both implicit and objective results (Mohammadi et al., 2023).
Christ et al. presented a model built upon three basic dimensions used in German-speaking countries, which are cognitive activation, classroom management, and student support. These are considered as opportunities provided and interact with the use made by students of learning opportunities, namely, depth of processing, time-on-task, and need satisfaction, in order to eventually lead to the outcomes of the learning process, namely, achievement and motivation (Christ et al., 2024). Murray explored the social dimensions of learner autonomy centered on language learning. In that context, three key dimensions were established, namely, the emotional one, related to obtaining learning autonomy and self-regulation; the spatial one, referring to the different characteristics of in-class learning and distant language learning; and the political dimension, concerning the complexity of learning networks and their social features (Murray, 2014).
2.3. Related Work About Space, Time, Language, and Frameset
Sinha and Gärdenfors presented a study related to the treatment of space and time in language learning. They proposed an event-based account for representing time and temporal relations in language. They distinguished between deictically based representations and sequentially based representations, where the former sees temporal events from the standpoint of the current moment, while the latter sees them as tenseless, where no standpoint is taken. Nonetheless, they claim that spatial and temporal dimensions are tied together in most cultures as both time representations are closely related to space (Sinha & Gärdenfors, 2018).
Nocchi and Blin also undertook a study related to space and time dimensions in language learning by means of virtual worlds, where the concepts of heterotopia and chronotope were used to conceptualize both dimensions in the real and virtualized realities. With regard to space, heterotopia accounts for a single place containing two or more locations, which may be all compatible, as in the case of augmented reality, or otherwise incompatible, as in the case of virtual reality. With respect to time, the chronotope is used to conceptualize interconnected spatial and temporal relationships, which leads to the description of time in learning spaces, and it accounts for a single moment containing two or more historical periods. Both concepts allow the description of different setups related to cyber experiences (Nocchi & Blin, 2013).
Pumpe and Jonkmann described the influence of decision latitude in distance learning. That concept represents allowing flexibility with regards to time and place of study, along with the selection of learning strategies in order to achieve the best possible outcome out of the learning process. With respect to space, the physical space where the learning process takes place was considered as well as the tools being used during the learning process, which could be both physical and digital. Regarding time, it was considered either synchronous or asynchronous and also considered in categorizing the learning process, which could be full-time, part-time, or flexitime (Pumpe & Jonkmann, 2025).
Tolentino and Tokowicz reviewed how the similarity between a first language and a second language, namely, L1 and L2, respectively, influences the processing of the latter, and their conclusion states that non-native speakers may exhibit native-like processing behavior of the second language (Tolentino & Tokowicz, 2011). In this way, there are some aspects which are deeply influenced by L1, such as how time is conceptualized spatially (Cheng & Wu, 2024) or how time references are dealt with (Alcaraz-Carrión & Valenzuela, 2021). However, it is widely assumed that the use of more than one language on a regular basis has positive effects on the learning process (Bialystok, 2017). The concept of translanguaging, where two or more languages are used interchangeably in educational environments, is still developing (Hamman-Ortiz, 2020). In fact, translanguaging is at the core of dual language bilingual education (Duffy & Feist, 2023), whereas it is also related to language entropy, a measurement adapted from information theory aiming at quantifying the diversity of language use across different contexts (De Luca et al., 2020).
On the other hand, the concept of frameset has been recently coined, and it refers to the setup of a classroom and is directly related to the level of interactivity between teachers and learners. In this sense, the project Extended Learning for Higher Education Teachers and Trainers, whose acronym is XL4HET, funded by the European Union, identifies different didactic settings according to a set of variables, such as the number of students, the expected interactions among the participants, the tools available to support contents, and the flexibility of learning spaces (XL4HET, 2024b). Based on those elements, three classroom settings were identified, namely, frontal class, interactive class, and hands-on class. The first one represents a masterclass-style session, where a teacher imparts the lesson and the students merely listen and ask questions. The second one accounts for a session where there is a high degree of interaction among teachers and students. The third one describes a session where students take the active role, whereas teachers play the role of facilitators (XL4HET, 2024a).
In summary, the concepts of space, time, language, and frameset have slightly different meanings in the literature, although each of them can be clearly defined for educational contexts. In this sense, space represents the place where the learning process takes place, whether a physical location or a virtual one. Time represents the moment when the learning process takes place for the student with respect to the moment when it is delivered by the teacher and is considered either synchronous or asynchronous. Moreover, language defines whether the class session is delivered in mother tongue, namely, L1; in the common foreign language, namely, L2; or even with a different foreign language. Furthermore, frameset represents the setup of a classroom, which is usually masterclass, interactive class, or hands-on class. All these concepts will be used in the following section to craft a coefficient of effort, which evaluates the difficulty of undertaking a class in a hybrid environment.
3. Dimensions
To begin with, the first subsection describes the dimensions considered herein, which are space, time, language, and frameset. Then, the second subsection describes the possible combinations available with the values defined for those dimensions. Finally, the third subsection describes a novel concept termed the coefficient of effort.
3.1. Space, Time, Language, and Frameset
Four dimensions are considered in this study on hybrid education, namely, space, time, language, and frameset. Three possible options are assigned to each of those dimensions, which are in turn associated with one of the values within the integer set {0, 1, 2}. The lowest value accounts for the option attached to traditional education, whereas the middle value represents the option tied to innovative education, while the highest value portrays the option linked to hybrid education, where both approaches are considered at once (Roig et al., 2025).
Table 1 displays the operationalization of variables, where the values available for each dimension are shown, along with their meaning. Regarding the space dimension, it refers to the learning place; thus, it relates to the question “Where?”. With respect to the time dimension, it refers to the learning moment; hence, it relates to the question “When?”. With regard to the language dimension, it refers to the learning language; thus, it relates to the question “In which language?”. With respect to the frameset dimension, it refers to the learning setup; thus, it relates to the question “In what way?”.
Table 1.
Operationalization of variables.
3.2. Combinations Available
There are up to 3^4 = 81 different combinations of the 4 dimensions, namely, space, time, language, and frameset, each with 3 possible options. Hence, the teaching/learning process could potentially be undertaken in 81 different ways, which offers plenty of possibilities. An easy way to identify each of those combinations could be by using a quadruplet, whose 4 members are the values for each dimension, thus staying within the range {0, 1, 2}. The resulting list would have 81 components, ranging from (0, 0, 0, 0) to (2, 2, 2, 2).
However, in order to offer a shorter list, Table 2 displays all 27 combinations available if only space, time, and language are considered, thus leaving aside the frameset dimension. In this way, each combination can be identified by a specific triplet, composed of an integer value for each variable. It is to be noted that the triplets are listed in truth-table order for clarity purposes; specifically, each triplet is composed of three symbols of a ternary alphabet, where the available digits are 0, 1, and 2.
Table 2.
Combinations available for the 3 dimensions considered: space, time, and language.
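As an illustrative sketch (the paper itself contains no code, and all names below are our own assumptions), the enumeration of the quadruplets and triplets described above can be reproduced in a few lines of Python:

```python
from itertools import product

# Four dimensions, each taking a value in {0, 1, 2}:
# 0 = traditional, 1 = innovative, 2 = hybrid.
DIMENSIONS = ("space", "time", "language", "frameset")

# All 3**4 = 81 quadruplets, from (0, 0, 0, 0) to (2, 2, 2, 2),
# generated in the same lexicographic (truth-table) order as Table 3.
quadruplets = list(product(range(3), repeat=len(DIMENSIONS)))

# The shorter list of Table 2 drops the frameset dimension,
# leaving the 3**3 = 27 triplets for space, time, and language.
triplets = list(product(range(3), repeat=3))

print(len(quadruplets), quadruplets[0], quadruplets[-1])
# → 81 (0, 0, 0, 0) (2, 2, 2, 2)
print(len(triplets))
# → 27
```

Since `itertools.product` emits tuples in lexicographic order, the listing matches the truth-table layout of the tables directly.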
On the other hand, Table 3 presents some of the combinations available when the frameset dimension is also considered. Furthermore, the sum of the values being part of each quadruplet is also shown, as will be mentioned in the next subsection. As a side note, the implementation of all scenarios available would be facilitated with the use of a learning management system, which is commonly known as an LMS, in order to deliver the contents associated with the teaching/learning process no matter the values given for each dimension.
Table 3.
Combinations available for the dimensions considered: space, time, language, and frameset.
3.3. Coefficient of Effort
The components of each combination available can be used to craft a coefficient, which may be tied to the relative degree of difficulty for a teacher to impart a session. It can be labeled as coefficient of effort as the higher the coefficient is, the higher the effort to impart the session. The reason for this is that the lowest value for each dimension is assigned to the way traditional learning is imparted, whereas the middle value for each dimension is associated with the way innovative learning is imparted, while the highest value for each dimension is tied to hybrid learning, where both approaches are taken at once.
In other words, if a given variable within one of the quadruplets defined above has a value of 0, then a traditional setting is involved, so it accounts for no extra effort for a teacher. Otherwise, if a given variable has a value of 1, then extra effort is required for a teacher to impart a session. Additionally, if a given variable has a value of 2, then an even greater effort is needed as the traditional and the innovative approaches must be taken at the same time, each with its own particularities.
Therefore, a coefficient of effort may be established so as to measure the relative difficulty of preparing for a given session. The most straightforward manner to obtain this coefficient is to add up the four components of a quadruplet. In this way, a natural number within the range between 0 and 8 is achieved, where the former accounts for all traditional settings, and the latter represents all hybrid settings. Generally speaking, if the number of dimensions considered is n, then the possible values for this coefficient of effort would range from 0 to 2n. This could be seen as the plain version of the coefficient of effort, where the values assigned to any dimension have the same relative weight. This is the case of the “Sum” column shown in Table 3, where different combinations with sum values from 0 to 8 are depicted.
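As a minimal sketch, the plain coefficient of effort is simply the sum of the quadruplet components (the function name is ours, for illustration only):

```python
def plain_effort(quadruplet):
    """Plain coefficient of effort: the sum of the four dimension values
    (space, time, language, frameset), each in {0, 1, 2}."""
    return sum(quadruplet)

print(plain_effort((0, 0, 0, 0)))  # 0 -> all traditional settings
print(plain_effort((2, 2, 2, 2)))  # 8 -> all hybrid settings
```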
However, a more realistic method would assign different weights to each value before calculating the sum, which would take into consideration the relative extra effort required to carry out an innovative session or a hybrid session in each particular dimension. In this way, specific measurements would have to be made so as to properly calibrate that extra effort. As a consequence, this would lead to defining the weighted version of the coefficient of effort, which would most likely account more precisely for the potential extra difficulty of implementing an innovative or a hybrid session in each given dimension.
Alternatively, this weighted version could be defined by means of generic values. For instance, variable s could be linked to the space dimension, variable t could be tied to the time dimension, variable l could be attached to the language dimension, and variable f could be connected to the frameset dimension. In this context, the innovative approach could be assigned to subindex 1 of each variable, whereas the hybrid approach could be associated with subindex 2 of each variable. Nonetheless, the actual values of s1, s2, t1, t2, l1, l2, f1, and f2 should be established through further research in order to devise accurate values for those weights.
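A sketch of the weighted version could look as follows; the weights below are purely illustrative placeholders, since the actual values would have to be calibrated through further research:

```python
# Illustrative placeholder weights: value 0 (traditional) always weighs 0;
# subindex 1 (innovative) and subindex 2 (hybrid) carry hypothetical weights.
WEIGHTS = {
    "space":    {0: 0.0, 1: 1.0, 2: 2.0},   # s1, s2
    "time":     {0: 0.0, 1: 1.0, 2: 2.0},   # t1, t2
    "language": {0: 0.0, 1: 1.5, 2: 2.5},   # l1, l2
    "frameset": {0: 0.0, 1: 1.0, 2: 2.0},   # f1, f2
}

def weighted_effort(quadruplet):
    """Weighted coefficient of effort for a (space, time, language,
    frameset) quadruplet, using the placeholder weights above."""
    dims = ("space", "time", "language", "frameset")
    return sum(WEIGHTS[d][v] for d, v in zip(dims, quadruplet))

print(weighted_effort((0, 0, 0, 0)))  # 0.0 -> all traditional settings
```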
4. Methods
First of all, the proposed course on home automation is described. After that, the schedule for the course is exhibited, and the values of the dimensions for each session are displayed. Next, the coefficient of effort for the proposed course is calculated according to the features of each session. Afterwards, the evaluation of the course is explained in detail. Finally, the statistical study carried out to obtain the results is reported.
4.1. Course on Home Automation
We set up a course on home automation at a college where students were familiarized with the basic concepts of the KNX standard. This course was organized in the current academic year 2024–2025, and it was attended by 20 students. The course was organized into seven sessions, and the learning environment changed in each of the sessions. The changes in layout were related to the four dimensions described above, namely, space, time, language, and frameset. The aim of this changing environment was to break the routine in the learning process. In this way, the main idea was to increase student motivation with those changes in order to enhance academic performance. Additionally, this changing layout allowed students to be trained to face hypothetical unexpected disruptive events, such as the COVID-19 pandemic in the year 2020, that may disrupt their usual learning process.
During the last academic year 2023–2024, the same course was run in a traditional manner, and it was also attended by 20 students. In this way, the same curriculum, both theory and practice, was delivered to the registered students. The former was carried out through masterclasses, and the latter through practical exercises, both on-site and synchronously. Moreover, the classes were imparted in the primary language of the students, namely, Spanish, according to the format of frontal classes. The evaluation of this course was conducted through a written exam along with a practical exercise, where each part had a value of 50%.
Focusing on the theoretical knowledge, the course covered the basics of the KNX standard, such as its main features and its transmission media. Focusing on the practical skills, the course used the official platform to configure and manage KNX devices, namely, ETS, whereas practical scenarios were created and managed through a platform called KNX Virtual. In this way, the proposed course was an introduction to KNX, which may well be associated with a KNX basic course, imparting the necessary knowledge for the trainees to deal with elementary KNX deployments.
4.2. Schedule for the Seminar Proposed
The course was run during 7 sessions, where the settings for space, time, and language were modified in every session, as well as the frameset in which the session was delivered. Table 4 depicts the planning of the seminar, along with the specific features for each session.
Table 4.
Planning of the sessions within the course.
The first session was organized as an on-site masterclass, where the instructor presented the course guidelines to the students within the classroom. This session was held in-person and synchronously. It was imparted in the primary language, and the frameset of the session was a frontal class, where the teacher played the active role. The aim of this session was to explain the basics of the KNX standard.
The second session was set up as an on-site practical class, where students within the classroom were familiarized with a pair of software applications for home automation. This session was held in-person and synchronously. It was imparted in the secondary language, and the frameset of the session was an interactive class, where the active role was shared between the teacher and students. The aim of that session was to get used to the software applications ETS and KNX Virtual.
The third session consisted of a self-managed workshop, where students got together in small groups of two members within the classroom in order to undertake a specific activity on a team basis. This session was held in-person and asynchronously. It was imparted in the primary language as students had no restriction language-wise, and the frameset of the session was a hands-on class, where the active role was played by students, and the teacher only acted as a facilitator. The aim of that session was to work in breakout rooms in order to complete an activity with the software applications ETS and KNX Virtual.
The fourth session was organized as a live online masterclass, where the instructor presented some advanced guidelines to the students outside the classroom. This session was held online and synchronously. It was imparted in the primary language, and the frameset of the session was a frontal class, where the teacher played the active role. The aim of this session was to explain some advanced concepts of the software applications ETS and KNX Virtual.
The fifth session was set up as a live online practical class, where students started working with advanced concepts within the software applications selected for home automation. This session was held online and synchronously. It was imparted in the secondary language, and the frameset of the session was an interactive class, where the active role was shared between the teacher and students. The aim of that session was to get used to the advanced concepts with the software applications ETS and KNX Virtual.
The sixth session consisted of a homework session, where students got together outside the classroom in the same small groups established above in order to undertake a specific activity on a team basis. This session was held online and asynchronously. It was imparted in the primary language as students had no restriction language-wise, and the frameset of the session was a hands-on class, where the active role was played by students, and the teacher acted only as an online facilitator. The aim of that session was to work in breakout rooms in order to complete an advanced activity with the software applications ETS and KNX Virtual.
The seventh session was organized as a series of pitch presentations, where each team of two members had to give its own presentation. This session was held on-site and synchronously. It was delivered in the secondary language, and the frameset of the session was a frontal class, where all teams had to give presentations in order to show their work and how it was carried out to their peers and also to the teacher. The aim of that session was for each team to exhibit the results of the work performed.
4.3. Coefficient of Effort for the Proposed Course
Sticking to the plain version of the coefficient of effort, the possible values for each dimension are 0, 1, and 2, where the first one accounts for the traditional setting, the second one represents the innovative setting, and the third one represents the hybrid setting. In this context, Table 5 depicts the plain coefficient of effort for each of the sessions planned.
Table 5.
Plain coefficient of effort for the sessions proposed.
According to those results, session ID 6 is the toughest to deal with as the plain coefficient of effort yields 4, followed by sessions ID 3 and ID 5, where the plain coefficient of effort is 3. On the other hand, session ID 1 is the easiest to deal with as the plain coefficient of effort is 0, which is related to a traditional learning environment.
On the other hand, taking the weighted version of the coefficient of effort, the traditional setting remains 0. The innovative setting takes the corresponding variable with subindex 1, namely, s1, t1, l1, and f1. Moreover, the hybrid setting takes the corresponding variable with subindex 2, namely, s2, t2, l2, and f2. In this context, Table 6 exhibits the weighted coefficient of effort for each of the sessions proposed.
Table 6.
Weighted coefficient of effort for the sessions planned.
According to those results, no conclusion can be drawn without knowing the value of the variables used as those values would affect the results obtained. Hence, further research needs to be undertaken in order to properly adjust those values to the real effort needed to deploy a session with each corresponding feature.
4.4. Evaluation of the Course
This course was evaluated through the presentation of a small project carried out in groups composed of two members. All teams were randomly formed at the beginning of session 3, which was dedicated to starting the project in the form of a self-managed workshop with the basic concepts of KNX explained during sessions 1 and 2. At a later stage, each team had to complete the project during session 6, which was organized as a homework session, where the advanced concepts of KNX explained during sessions 4 and 5 were applied. Finally, session 7 was dedicated to each team delivering a pitch presentation with a time constraint of 5 min. Each team had to prepare a presentation based on what the project involved and how it was conducted.
Each presentation was evaluated on a peer-review basis, where a clean-slate construct was prepared for that purpose. However, as the construct was a brand new one, it first had to be validated by a panel of experts in the field. Hence, six elements were rated on a 4-point Likert-type scale by a panel of 5 experts according to two dimensions, namely, the construction of each item and its clarity. On the Likert-type scale, 1 point represents fully disagree, 2 points represent disagree, 3 points represent agree, and 4 points represent fully agree. It is to be noted that there was no neutral score, such that judges were forced to decide whether they agreed or disagreed with each item, either fully or otherwise.
Once the rating was performed by all members in the panel of experts, Aiken’s V test (Aiken, 1985) was performed. The threshold selected was 0.87, which was the original benchmark proposed by Aiken, thus accounting for a tight agreement among the judges. The six elements initially selected were divided into two categories, namely, configuration and communication. In this way, the three items belonging to the former were types of devices, types of functions, and originality, while the three items belonging to the latter were project functionality, proof of concept, and presentation skills.
After the elements within the construct were validated by the judges, the construct was ready to be used for peer-review purposes. In this way, after each pitch presentation, all students had to rate it with the previously validated construct, even though the rating was conducted with a 5-point Likert-type scale. On the Likert-type scale, 1 point represents fully disagree, 2 points represent disagree, 3 points represent neither agree nor disagree, 4 points represent agree, and 5 points represent fully agree. In this way, students had the same rating options as the judges along with an extra option for a neutral score. Furthermore, the dimensions considered in this construct for peer-review rating were only the categories indicated in the validation construct, namely, configuration and communication, each of which is composed of the three elements stated above.
It is to be noted that the use of a 4-point Likert-type scale in a validation instrument for the judges and the use of 5-point Likert-type scale in a peer-review instrument for students has already been discussed in the literature (Adelson & McCoach, 2010). The difference between both rating schemes is the introduction of a neutral option in the latter which is not present in the former. In this way, judges are forced to make a decision, thus preventing respondents from choosing the safe neutral option, which in fact improves validity on controversial topics. On the other hand, students have a neutral option regarding their ratings, thus offering them a wider range of options. Nonetheless, some key considerations for choosing a 4-point Likert-type rating scale are respondent expertise, topic sensitivity, and reliability of the results (Medina et al., 2019).
Furthermore, it has to be considered that the peer-review process can be influenced by different factors, such as personal relationships, leniency/severity bias, and lack of experience. Therefore, some measures can be taken so as to compensate for those factors in further course editions with respect to student peer-review, such as the addition of calibration exercises in order for students to practice rating against determined benchmarks to strengthen consistency, along with the combination of peer-review with instructor evaluation, or even external evaluation, in order to balance student judgments. On the other hand, all students registered in the course participated in peer-review ratings, such that individual biases are minimized and the anonymity of reviewers is not needed as long as aggregated data is used.
During the last session of the seminar, each team had to give its pitch presentation, which was followed by one minute for all students to rate it with the peer-review construct. Additionally, after all presentations were delivered, a further construct was administered to the students in order to assess their level of engagement with the whole course. The ISA engagement scale (Soane et al., 2012) was chosen for this purpose. This scale was originally aimed at working environments, although it is also used in learning environments (Mañas-Rodríguez et al., 2016). It contains 3 dimensions, namely, intellectual, social, and affective, each of which has 3 elements, thus accounting for 9 questions overall. This is a standard construct which had already been validated at the design stage (Nwachukwu & Osa-Izeko, 2022). The rating of each element was performed with a 7-point Likert-type scale, where 1 point represents fully disagree, 2 points represent disagree, 3 points represent partially disagree, 4 points represent neither agree nor disagree, 5 points represent partially agree, 6 points represent agree, and 7 points represent fully agree. In this context, the expected average score for each dimension and overall is 6 as that is the score related to agree.
In summary, three constructs are needed to assess the academic performance and the level of engagement of this course. The first one was intended for a panel of experts in order to validate the items to be used for peer-review. The second one was intended for all students in order to rate each pitch presentation, which establishes the academic performance. The third one was aimed at all students in order to rate the level of engagement during the course.
4.5. Statistical Study of the Results
As stated above, the academic performance of this course was calculated on a peer-review basis. Specifically, as each peer-review construct was composed of 6 items, the academic performance for each team was established as the average of all items for all constructs submitted. Therefore, the highest possible mark would be 5 points, which would be the case where all items in all constructs are rated as fully agree. On the other hand, this course was held in a Spanish college; thus, the Spanish grading system applies (Polytechnic University of Valencia, n.d.). In this way, the grades ranged from 0 to 10, including decimal marks, whereas the passing grade was 5. Hence, it was straightforward to convert the average rating obtained in the peer-review construct by each team to the Spanish grading system as it was only necessary to multiply the average rating obtained by 2, as 5 × 2 = 10. As a side note, the score obtained by a particular team was assigned to all its members.
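The conversion can be sketched as follows; the helper function is hypothetical, introduced here only to illustrate the arithmetic:

```python
def team_grade(item_ratings):
    """Convert 5-point Likert peer-review ratings to the Spanish 0-10 scale.

    item_ratings: all item scores (1-5) collected for a team across the
    submitted constructs. The mean is doubled, so a perfect rating of 5
    maps to the top mark of 10.
    """
    average = sum(item_ratings) / len(item_ratings)   # between 1 and 5
    return average * 2                                # 5 * 2 = 10 at most

print(team_grade([5, 5, 5, 5, 5, 5]))  # 10.0 -> all items rated fully agree
```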
Therefore, the rating obtained by each student in the current academic year was calculated as the average score attained through the 5-point Likert-type peer-review construct after the corresponding team pitch presentation. In this way, the rating of both members of a given team was determined by averaging all scores obtained in all six items within the peer-review construct. Finally, the academic performance of each student was derived by doubling the rating obtained in order to adapt that rating to the Spanish grading system, whose top mark is 10. On the other hand, the academic performance of each student in the previous academic year was equally weighted between a written exam and a practical exercise, where both components were evaluated according to the Spanish grading system. Hence, the comparison of the academic performance across both years can be conducted fairly as the same grading system is shared.
Apart from the average rating per team, other descriptive statistics were calculated, covering both centralization and dispersion. The average belongs to the former, which also includes measurements such as mode, median, and quartiles. Moreover, the measurements related to the latter were variance, standard deviation, and coefficient of variation. The reliability of the scores achieved in both academic years was calculated through Cronbach’s alpha, which also gives information about the degree of correlation between the distributions of scores attained in both years.
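A minimal sketch of these calculations with the Python standard library (the function names are ours; the Cronbach's alpha helper assumes one list of scores per item, aligned by respondent):

```python
from statistics import mean, median, variance, stdev

def describe(scores):
    """Basic centralization and dispersion statistics for a score list."""
    m = mean(scores)
    return {
        "mean": m,
        "median": median(scores),
        "variance": variance(scores),          # sample variance
        "std_dev": stdev(scores),
        "coef_variation": stdev(scores) / m,   # dispersion relative to mean
    }

def cronbach_alpha(items):
    """Cronbach's alpha: items is a list of per-item score lists."""
    k = len(items)
    totals = [sum(col) for col in zip(*items)]        # total per respondent
    item_var_sum = sum(variance(it) for it in items)
    return k / (k - 1) * (1 - item_var_sum / variance(totals))
```

Perfectly correlated items yield an alpha of 1, while unrelated items drive it toward 0.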
Regarding the inferential statistics, a t-test was undertaken in order to check whether the potential increase in the results obtained in the current course with a changing layout was statistically significant with respect to the results achieved in the past course with a traditional layout. Furthermore, the effect size achieved was calculated through Cohen’s d, which in turn was used to calculate the sample size required to attain that effect size, considering the most common values of α = 0.05 for type-I errors and β = 0.20 for type-II errors. Finally, the actual sample size and the sample size required were compared in order to check whether the actual sample size was sufficient to detect the effect size achieved.
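The t statistic can be sketched as follows, assuming equal variances and a pooled variance estimate; the p-value would then come from the t distribution with the returned degrees of freedom (e.g., via scipy.stats, if available):

```python
from math import sqrt

def two_sample_t(mean_a, sd_a, n_a, mean_b, sd_b, n_b):
    """Two-sample t statistic with pooled variance (equal variances assumed).

    Returns the t statistic and the degrees of freedom n_a + n_b - 2.
    """
    pooled_var = ((n_a - 1) * sd_a**2 + (n_b - 1) * sd_b**2) / (n_a + n_b - 2)
    t = (mean_b - mean_a) / sqrt(pooled_var * (1 / n_a + 1 / n_b))
    return t, n_a + n_b - 2
```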
It is to be noted that the statistical power considered in this study was 0.80, which is the most common value in the literature as other common values like 0.90 and 0.95 are used far less frequently in similar studies. In fact, the statistical power can be seen as the probability of not committing a type-II error; hence, its value can be calculated as 1 − β. Furthermore, the sample size used in this study was the overall number of attendees of the course; thus, it was assigned as a predefined value. Nonetheless, a priori power analysis may guide sample design in future editions of the course, such that if the sample is too small, then the use of bootstrapping methods might help validate results with more stability.
On the other hand, it should be noted that the data used in the three constructs employed in this study are ordinal, and the response options were mapped to a subset of consecutive natural numbers that were equally spaced. However, the reliance on mean scores may raise some concerns as the intervals between those values might not be perceived as equal, in which case the use of frequencies and percentages would yield more appropriate results. Therefore, the results obtained in each construct need to be carefully looked into in order to establish whether the use of ordinal data is more suitable or otherwise whether the use of frequencies and percentages is more appropriate.
Regarding the first construct, namely, the instrument used by judges to validate the proposed elements, the results must be ordinal in nature in order to calculate the average score of the whole construct, which is a key input for calculating Aiken’s V test. With respect to the second construct, namely, the instrument used by students to evaluate the pitch presentations on a peer-review basis, the results have to be ordinal in nature in order to translate them into the Spanish grading system, which requires a grade between 0 and 10, thereby accounting for the academic performance in the course. With regards to the third construct, namely, the instrument used by students to assess their level of engagement, the results need to be ordinal in nature in order to determine the average score per dimension and overall, which is the primary aim of this construct.
Therefore, all three constructs need to be measured using ordinal values, each serving a specific purpose. Nonetheless, the ordinal scores obtained with those constructs might also be calculated in the form of frequencies and percentages in order to analyze the data from a different perspective. It should be noted that the former are also known as absolute frequencies, which are determined by counting, whereas the latter are also known as relative frequencies, which are calculated by dividing the former by the total count.
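As an illustrative sketch, absolute and relative frequencies of a set of ordinal ratings can be obtained by counting (the helper name is ours):

```python
from collections import Counter

def frequencies(ratings):
    """Absolute (count) and relative (share) frequencies per ordinal score."""
    counts = Counter(ratings)
    total = len(ratings)
    return {score: (counts[score], counts[score] / total)
            for score in sorted(counts)}

freqs = frequencies([5, 4, 5, 3, 5, 4])
print(freqs[5])  # (3, 0.5) -> the score 5 appears 3 times out of 6
```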
4.6. Limitations of This Study
It is to be highlighted that this study investigated the academic performance and engagement in a course attended by 20 students in two consecutive editions, where the former was imparted in a traditional manner and the latter was imparted in an innovative manner. Hence, the main limitation lies in the characteristics of the samples as all students within each year were considered. Thus, although the statistical analysis of course scores provides valuable insights into students’ academic performance and engagement, those findings are based on a relatively small group of participants. In this way, the results cannot be generalized to broader populations without some caution as differences in institutional context, instructional approaches, and student backgrounds may significantly affect outcomes, making it difficult to claim external validity.
Another limitation concerns the reliance on a limited number of assessment instruments. With respect to the final grades in the current year, they are only composed of the average scores obtained with the peer-review construct, which may lead to some degree of unfairness due to personal relations or lack of expertise. Hence, combining the peer-review approach with instructor evaluations may lead to a fairer assessment. With regards to the final grades in the previous year, they provide a balance between theoretical knowledge and applied skills, even though they may not capture the complexity of student learning, such as collaboration or sustained effort throughout a semester.
A further limitation involves the interpretation of the statistical results, which is constrained by variability across different components of the course. The courses run during the previous year and the present year were imparted in a traditional and in an innovative manner, respectively, which may have had some influence on the results obtained, thus leading to potential reliability concerns. Moreover, the use of self- and peer-assessments may introduce subjective bias, which can affect the precision of reported averages. Hence, while the statistical results provide a useful overview, they should be interpreted with awareness of these potential sources of error.
5. Results
Table 7 shows the elements included in the construct aimed at judges for validation purposes, along with their classification into categories. Regarding the configuration category, item 1 refers to the types of devices, whereas item 2 relates to the types of functions, and item 3 focuses on originality. With respect to the communication category, item 4 refers to the project functionality, item 5 relates to the proof of concept, and item 6 focuses on presentation skills.
Table 7.
Items for validation purposes.
Table 8 exhibits the average values of the construct aimed at judges in order to validate the elements. Specifically, item 1 averaged 3.8 in construction and 3.4 in clarity. Item 2 averaged 3.6 in both construction and clarity, and the same average scores were obtained for item 3. Item 4 averaged 3.8 in construction and 3.6 in clarity. Item 5 averaged 3.6 in both construction and clarity, while item 6 averaged 3.8 in both dimensions. In summary, the average score per dimension was 3.7 for construction and 3.6 for clarity, which led to an average score for the whole construct of 3.65. That value was used to calculate Aiken’s V test, leading to a value of 0.88.
Table 8.
Aiken’s V test for validation purposes (ordinal format).
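With ordinal ratings mapped to consecutive integers, Aiken's V for the whole construct can be computed from its mean score; a sketch assuming the 4-point scale used here (the function name is ours):

```python
def aikens_v(mean_score, low=1, high=4):
    """Aiken's V from a mean rating on a Likert-type scale from low to high.

    V is the mean's position within the scale, normalized to [0, 1].
    """
    return (mean_score - low) / (high - low)

print(round(aikens_v(3.65), 2))  # 0.88, above the 0.87 threshold selected
```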
Table 9 displays the most relevant descriptive statistics, focusing on centralization measurements. The outcome is shown per dimension and overall. Specifically, the average for the configuration dimension was 9.30, whereas it was 8.97 for the communication dimension, thus leading to an overall average of 9.13. The mode obtained in both dimensions and overall was 10 in all cases. The minimum was 7.33 for each dimension, while the overall minimum was 7.66. The first quartile achieved 8.67 for configuration and 8 for communication, thus yielding an overall value of 8.67. The second quartile attained 10 for configuration and 9 for communication, thus leading to an overall value of 9.17. The third quartile obtained 10 for both dimensions and overall, which was also the case for the maximum.
Table 9.
Centralization statistics related to the scores in the current academic year.
Moreover, Table 10 depicts the most relevant descriptive statistics with the focus on dispersion measurements. The outcome is shown per dimension and overall. Specifically, the ranges obtained in each dimension go from 7.33 to 10, although the overall range goes from 7.67 to 10. The variance obtained for configuration was 1.37, whereas it was 1.23 for communication, and the overall value was 1.70. The standard deviation attained for configuration was 1.17, whereas it was 1.11 for communication, and the overall value was 1.30. The coefficient of variation obtained for configuration was 0.13, whereas it was 0.12 for communication, and the overall value was 0.14.
Table 10.
Dispersion statistics related to the scores in the current academic year.
Table 11 shows the outcome of Cronbach’s alpha, calculated per each dimension, as well as overall. Regarding the configuration dimension, the value obtained was 0.70, whereas the value attained for the communication dimension was 0.78, while the overall value achieved was 0.74.
Table 11.
Cronbach’s alpha related to the scores in the current academic year.
Table 12 exhibits the Pearson correlation coefficient and Spearman’s rank correlation coefficient, calculated with the distributions of scores related to each dimension. With respect to the Pearson correlation coefficient, the value obtained was 0.31, while the value attained for Spearman’s rank correlation coefficient was 0.35.
Table 12.
Correlation coefficients related to the scores in the current academic year.
Additionally, an inferential statistical test was carried out in order to establish whether the improvements obtained are statistically significant. In order to do so, the relevant data from the last and the current academic years are the sample sizes, the averages, also known as means, and the standard deviations. In this way, sample A represents the past academic year, whose size is 20, while its mean is 8.20, and its standard deviation is 1.11. On the other hand, sample B represents the present academic year, whose size is 20, while its mean is 9.13, and its standard deviation is 1.30, as stated above. All these values are shown in Table 13.
Table 13.
Key descriptive statistics in order to apply a t-test.
Hence, in order to determine whether there is a statistically significant difference between both samples, a two-tailed t-test was performed. The t-value obtained was 2.44, which led to a p-value of 0.02 after considering 38 degrees of freedom. Therefore, as the alpha value chosen was 0.05 (also known as the significance level), which is the most common case, and the p-value obtained is lower than that threshold, it can be stated that the increase in the results obtained this year was statistically significant. This implies that the differences found in both samples are unlikely to have happened due to chance alone, which establishes the statistical significance of the reported improvements. This is displayed in Table 14.
Table 14.
Inferential statistics for the distribution of scores in each year.
Furthermore, the effect size achieved was calculated, followed by the sample size calculation in order to determine the number of participants needed to detect that effect size. In order to first calculate the effect size, it was necessary to divide the difference of means by the pooled standard deviation, which combines the dispersion of both samples around their respective course means, thus acting as a weighted average of their standard deviations. It usually results in values between 0 and 1, even though it might be higher than 1 in some cases.
The difference of means was 0.93, whereas the pooled standard deviation was 1.21; thus, the resulting Cohen’s d, which is the value giving the standardized effect size, was 0.77. This value is around the upper boundary of the interval between 0.5 and 0.8, where the former accounts for medium effect and the latter represents large effect. Table 15 displays the values involved in the calculation of the effect size.
Table 15.
Effect size achieved.
Taking the Cohen’s d value of 0.77, along with the alpha value of 0.05 and the beta value of 0.20, which are the most common values quoted in the literature, results in a required sample size of 27 participants per group. Therefore, as the sample size used was 20 participants for each course, this number is lower than 27; hence, the actual sample size used is not sufficient to detect the effect size given by 0.77. As a consequence, further research should be carried out with a larger group in order to detect the effect size achieved of 0.77. Table 16 summarizes the relevant data in the sample size calculation.
Table 16.
Sample size to obtain the effect size achieved.
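The required sample size can be reproduced both with Lehr's approximation and with the exact normal-approximation expression. This Python sketch is illustrative, with alpha = 0.05 and beta = 0.20 (power = 0.80) as in the paper:

```python
from math import ceil
from statistics import NormalDist

def sample_size_per_group(d, alpha=0.05, power=0.80):
    """Per-group sample size for a two-sided two-sample comparison."""
    z = NormalDist().inv_cdf
    n_exact = 2 * (z(1 - alpha / 2) + z(power)) ** 2 / d ** 2
    n_lehr = 16 / d ** 2  # Lehr's rule of thumb for alpha=0.05, power=0.80
    return ceil(n_exact), ceil(n_lehr)

print(sample_size_per_group(0.77))  # (27, 27)
```

Both routes round up to 27 participants per group, above the 20 actually available in each course.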
Regarding the level of engagement, Table 17 depicts the items associated with each dimension in the ISA construct. Specifically, the intellectual dimension contains three elements: Q1 states that “I focus hard on my work”, while Q2 states that “I concentrate on my work”, and Q3 states that “I pay a lot of attention to my work”. The social dimension also contains three items: Q4 states that “I share the same work values as my colleagues”, while Q5 states that “I share the same work goals as my colleagues”, and Q6 states that “I share the same work attitudes as my colleagues”. The affective dimension contains three questions as well: Q7 states that “I feel positive about my work”, Q8 states that “I feel energetic in my work”, and Q9 states that “I am enthusiastic in my work”.
Table 17.
Questions within the ISA engagement scale.
Furthermore, Table 18 exhibits the average scores achieved in the ratings of the items associated with each dimension, as well as the overall average scores, for the previous and the present academic years. Regarding the former, the intellectual dimension obtained an average score of 6.08, the social dimension 6.13, and the affective dimension 6.12, resulting in an overall average score of 6.11. With respect to the latter, the intellectual dimension obtained an average score of 6.53, the social dimension 6.65, and the affective dimension 6.67, resulting in an overall average score of 6.62.
Table 18.
Average scores for the level of engagement obtained.
6. Discussion
Table 7 shows the six elements originally proposed for inclusion in the peer-review construct, with the aim of presenting them to the students after the pitch presentation of each team. Those elements were evenly distributed into two categories, namely, configuration and communication. In this way, the former included the items concerning the configuration skills involved in the project design, whereas the latter included the items concerning the communication skills involved in the pitch presentation.
Table 8 exhibits the results of Aiken’s V test by showing the average ratings given by the judges to each of the proposed elements according to two dimensions, namely, construction and clarity. It can be seen that all items were rated with average scores between 3.4 and 3.8 in both dimensions. This indicates that there was a certain degree of discrepancy among judges, even though the ratings were quite high on average, as the top mark was 4. Furthermore, the averages per dimension and overall were calculated, which eventually led to the value of Aiken’s V. It accounted for 0.88, which is slightly higher than the benchmark proposed by Aiken, namely, 0.87 (García-Ceberino et al., 2020). Hence, all items within the construct were validated; thus, they were ready to be included in the peer-review construct to rate the students’ pitch presentations.
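Aiken's V can be computed from the average ratings by rescaling the mean to the unit interval over the range of the 4-point scale. The item ratings below are hypothetical placeholders chosen so their mean of 3.64 reproduces the reported V of 0.88:

```python
def aikens_v(ratings, lo=1, hi=4):
    """Aiken's V: mean rating rescaled to [0, 1] over the scale range."""
    mean = sum(ratings) / len(ratings)
    return (mean - lo) / (hi - lo)

# Hypothetical average ratings per item (the paper reports only
# that all items fell between 3.4 and 3.8 on average).
ratings = [3.6, 3.7, 3.5, 3.8, 3.6, 3.64]
print(round(aikens_v(ratings), 2))  # 0.88
```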
Table 9 focuses on the central tendency statistics for the outcomes of the peer-review assessments conducted by the students after each pitch presentation. Specifically, those statistics are given per dimension and overall. The values show that the outcome achieved in the configuration dimension is higher than that attained in the communication dimension. This can be explained by the fact that STEM students often perform better when dealing with technical skills than with soft skills. Nonetheless, the difference between them is relatively small. Moreover, the overall average score is 9.13 out of 10, which is an outstanding result in the Spanish grading system.
Table 10 exhibits the dispersion statistics for the peer-review results, where the outcomes obtained per dimension and overall are quite close to each other. In fact, the ranges obtained in all cases are quite similar, whereas the values obtained for the standard deviation are all slightly higher than one. The coefficients of variation obtained in all cases lie in the range between 0.12 and 0.14. In this way, all of them are under the upper boundary of low variability, which is 0.15 (Li et al., 2018). Hence, it can be stated that the scores collected per dimension and overall have low variability; thus, most of them are close to the corresponding mean values.
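The low-variability claim follows from the definition of the coefficient of variation as the standard deviation divided by the mean. In this illustrative sketch, the standard deviation of 1.18 is a hypothetical value "slightly higher than one", while 9.13 is the reported overall mean:

```python
def coefficient_of_variation(sd, mean):
    """Relative dispersion: standard deviation over the mean."""
    return sd / mean

cv = coefficient_of_variation(1.18, 9.13)
print(round(cv, 2))  # 0.13, under the 0.15 boundary of low variability
```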
Table 11 displays the values of Cronbach’s alpha per dimension and overall. All those values lie between 0.7 and 0.8, which indicates an acceptable reliability of the scores collected. Regarding the correlation between the scores obtained in both dimensions, Table 12 depicts the Pearson correlation coefficient and the Spearman’s rank correlation coefficient. The former measures the degree of linear correlation between the scores of both dimensions, whereas the latter measures the degree of monotonic correlation between them. The results obtained are 0.31 for the former and 0.35 for the latter; thus, both fall within the interval between 0.30 and 0.49 in absolute value, which indicates a moderate correlation between the scores of both dimensions (López-Martín & Ardura-Martínez, 2023).
Table 13 summarizes the descriptive statistics needed to carry out a two-tailed t-test between two samples, namely, their sample sizes, their means, and their standard deviations. In this way, the two-tailed t-value obtained from the distributions of scores for the current year and the previous year permits inference of whether the difference found between both distributions is statistically significant. Table 14 exhibits the two-tailed t-value found, as well as the degrees of freedom associated with the number of samples involved. Both values allow calculation of the p-value, which happens to be 0.02. This value is lower than the s-value established in this case, namely, 0.05, which is the most commonly used in the literature. Hence, as the p-value is lower than the s-value, it can be concluded that the differences in the distributions of scores obtained in both years are not due to mere chance, so such a difference is statistically significant (Imbens, 2021).
Table 15 displays the effect size achieved by means of calculating Cohen’s d. The value obtained is 0.77, which is quite close to 0.80, which is the boundary for a large effect size. This indicates that the means of both distributions of scores are separated by a distance of around 0.8 standard deviations. In other words, the effect size obtained can be considered as large, which means that such an effect size would be easy to see even when the sample size is not high (Gülkesen et al., 2022).
Table 16 depicts the sample size needed to detect the effect size achieved. The calculation used the most common values in the literature for α and β, which are 0.05 and 0.20, respectively. The sample size required was calculated with Lehr’s approximation, whose closest upper integer is 27, as well as with the exact expression, which also yields 27. On the other hand, the actual sample size is 20, which is the average number of attendants taking part in the course in both years. Hence, as the sample size required is greater than the actual sample size, it can be stated that the actual sample size is not sufficient to detect the effect size achieved (Althubaiti, 2023).
Table 17 shows the dimensions and the elements of the ISA engagement scale, which is the construct used to measure the level of engagement of the students during the course in the current academic year. It is composed of three dimensions, namely, intellectual, social, and affective, each of which contains three elements.
Table 18 exhibits the average results per dimension and overall obtained in the ISA engagement scale. In the current academic year, the intellectual dimension gives an average of 6.53, while the social dimension presents an average of 6.65, and the affective dimension has an average of 6.67. All of them are above the threshold of 6, which is assigned to “agree” in the 7-point Likert-type scale. Hence, it can be considered that the intellectual engagement, the social engagement, and the affective engagement are all high (Sidharta, 2019). As a consequence, the overall average is also higher than 6, which accounts for an overall high level of engagement (Laranjeira & Teixeira, 2025). Regarding the values obtained, all of them are above 6.5, which means that those values are closer to the top score, namely, 7, than to the lower threshold, namely, 6. Furthermore, the intellectual dimension had a slightly lower value than the social dimension and the affective dimension, although the difference is quite small. In summary, it can be concluded that students became substantially engaged, which may well be closely related to the academic results (Espey, 2022).
Regarding the previous academic year, the intellectual dimension gives an average of 6.08, whereas the social dimension presents an average of 6.13, and the affective dimension has an average of 6.12. All three values are slightly above the threshold of 6; thus, all three kinds of engagement can be seen as high. Consequently, the overall average is slightly greater than 6, which represents an overall high level of engagement. Hence, the level of engagement achieved in both academic years is high because all average ratings are higher than 6 in both years. However, there is a clear difference between the average scores obtained in the current year compared to those attained in the last year as the current edition received an overall average score roughly half a point higher than the previous edition. The reason for this increase in engagement may be associated with the fact that students consider the changing layout adopted for each course session in the current year to be more attractive and motivating.
In summary, three constructs were used in this study: the first one was used by judges to validate peer-review elements, the second one was used by students to rate the pitch presentations on a peer-review basis, and the third one was used by students to assess their level of engagement during the course. The findings obtained in the first construct yielded an Aiken’s V value of 0.88, which is greater than the benchmark of 0.87 proposed by Aiken. Consequently, all elements within the first construct were validated, and in turn, they were all included in the second construct. As mentioned, Aiken’s threshold is considered to be tight in the literature as it forces a low degree of discrepancy among judges. However, there are other common thresholds allowing for a higher degree of discrepancy among judges as in the case of 0.70 proposed by Charter (Charter, 2003) or 0.50 proposed by Cicchetti (Cicchetti, 1994).
On the other hand, the findings obtained in the second construct yield slightly better results in the configuration dimension than in the communication dimension. This point is not unusual for STEM students as they tend to demonstrate greater competence in technical skills compared to soft skills. Both dimensions present similar values for dispersion statistics, reliability, and correlation level. Additionally, the difference in academic performance found in the present course with respect to the previous course is statistically significant, which indicates the positive role played by the changing layout in the learning process. However, the actual sample size, namely, the number of students registered per class, was lower than the required sample size, namely, the number of students needed to detect the effect size achieved. Hence, further research must be undertaken with a greater cohort of students.
Regarding the findings obtained in the third construct, the average scores obtained with the ISA engagement scale in the current academic year deliver values higher than 6 in all dimensions and overall. All the average scores achieved are greater than 6.5; thus, they are closer to the maximum score than to the threshold score. Furthermore, the average scores attained with the same construct in the previous academic year also deliver values greater than 6 in all dimensions and overall, although such scores are slightly higher than 6; thus, they are far from the maximum score. Therefore, a rise was observed in the engagement level in the present course with respect to the previous course, which indicates the valuable contribution of the changing layout in the learning process.
Ultimately, all those results answer the research question proposed at the end of the introduction in the affirmative, as it was possible to set up a changing learning environment in a STEM course that attained a high level of engagement and a higher academic performance compared to a similar course taught in a traditional manner. Therefore, the research goal was achieved, as the level of engagement was high and the increase in academic performance was statistically significant. However, the actual sample size was not sufficient to detect the effect size achieved; thus, further research needs to be carried out with a larger number of students involved.
7. Conclusions
In this paper, the setup of a college course on home automation where the learning environment has been modified in each session has been described. Those setup modifications were applied to four dimensions, namely, the space where the learning process is held, the time when the learning process takes place, the language in which the learning process is delivered, and the frameset in which the learning process is conducted. The motivation for these constant changes in each session was to increase student engagement, improve academic performance, and prepare students for potential disruptive events, such as a pandemic or natural disaster, which could interrupt their usual routines.
Each of the four dimensions considered in this study can take on three possible values, namely, {0, 1, 2}. In this context, the lowest value represents the scenario of traditional education, the middle value accounts for the scenario of innovative education, and the highest value represents the scenario of hybrid education, where both traditional and innovative education are provided at once. Hence, the higher the value for any dimension, the more difficult it is to deliver a class session. Focusing on space, the lowest value refers to in-class, the middle value refers to online, and the highest value refers to a blended scenario. Regarding time, the lowest value refers to synchronous, the middle value refers to asynchronous, while the greatest value refers to a bichronous scenario. Centering on language, the lowest value refers to first language, the middle value refers to second language, while the highest value refers to bilingual. Regarding frameset, the lowest value refers to the frontal class, the middle value refers to the interactive class, and the greatest value refers to the hands-on class.
Regarding the options available for each dimension, a coefficient of effort has been proposed in order to measure the relative difficulty of delivering a given session. In this way, different values have been assigned to the possible options within a particular dimension according to the difficulty of conducting a class session. Two variations of this coefficient of effort have been presented, where a plain one assigns a natural number to each option, while a weighted one assigns a real number to each option. The former is easier to deploy, whereas the latter is more precise, although further research is needed to establish the proper weights.
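As a minimal sketch of how the plain and weighted coefficients of effort could be computed, the following Python snippet assigns a value to each of the four dimensions and sums them per session. The weighted values are hypothetical, since, as noted above, establishing proper weights requires further research:

```python
# Each dimension takes a value in {0, 1, 2}: traditional, innovative, hybrid.
PLAIN = {0: 0, 1: 1, 2: 2}            # plain coefficient: natural numbers
WEIGHTED = {0: 0.0, 1: 1.2, 2: 2.5}   # hypothetical real-valued weights

def session_effort(space, time, language, frameset, weights=PLAIN):
    """Sum the per-dimension effort values for one class session."""
    return sum(weights[v] for v in (space, time, language, frameset))

# A fully hybrid session: blended, bichronous, bilingual, hands-on.
print(session_effort(2, 2, 2, 2))                    # 8
print(session_effort(2, 2, 2, 2, weights=WEIGHTED))  # 10.0
```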
With respect to the structure of the course, it was organized into seven sessions, each with its own combination of values related to space, time, language, and frameset. Some of those sessions were focused on theoretical concepts, whereas some others were centered on carrying out a group-based practical activity. That activity had to be completed before the final class session, which was dedicated to all teams delivering their pitch presentations, where each team had to display their work and how it was carried out.
The evaluation of the course was carried out on a peer-review basis with a newly developed construct, which was previously validated by a panel of five experts by means of Aiken’s V test. The last class session was dedicated to delivering pitch presentations, which all students evaluated using this construct. The academic performance was calculated based on the collected ratings, with the average score close to the top mark and the coefficient of variation indicating low variability. Furthermore, Cronbach’s alpha showed an acceptable reliability of the results obtained, whereas the correlation coefficients denoted a moderate correlation.
Regarding the inferential statistics, a two-tailed t-test was performed, which led to a p-value of 0.02. This value was lower than the most common s-value described in the literature, which is 0.05; hence, the difference between the distribution of scores obtained in the current and previous year was statistically significant. Additionally, the effect size achieved was calculated as Cohen’s d, with a value of 0.77. This was quite close to 0.80, which is considered as a large effect, indicating that the observed difference between both score distributions is practically significant and likely meaningful in real-world contexts.
However, the sample size required to detect the effect size achieved was 27, whereas the actual sample size was 20. Hence, as the actual sample size was lower than the sample size needed, it can be concluded that the actual sample size was not sufficient to detect the effect size achieved. Therefore, further research should be carried out with a higher number of students.
With respect to the level of engagement achieved, the results of the ISA engagement scale for the three dimensions involved, namely, intellectual, social, and affective, presented values above 6.5 out of 7; thus, the overall value also exceeded 6.5. Hence, considering that a value above 6 was required in all dimensions for engagement to be considered high, it can be concluded that this level was achieved.
Therefore, the results regarding academic performance and engagement suggest that implementing a STEM course within a dynamic learning environment was more effective than delivering a similar course in a traditional setting. Those results may be attributed to several motivating factors among the students, such as the novelty of the layout and the practice of working collaboratively in teams, although potential biases associated with peer-review assessments cannot be ruled out.
In summary, the deployment of this course, in which the learning environment changed constantly across sessions, can be considered a success, as student engagement was high and the improvement in academic performance compared to the previous course imparted last year was statistically significant. However, further research needs to be carried out with a higher number of students involved.
Author Contributions
Conceptualization, P.J.R.; formal analysis, P.J.R.; supervision, P.J.R., S.A., K.G., C.B. and C.J.; validation, P.J.R. All authors have read and agreed to the published version of the manuscript.
Funding
This research received no external funding.
Institutional Review Board Statement
This study was conducted in accordance with the Declaration of Helsinki. No approval by the Institutional Ethics Committee was necessary as all data were collected anonymously from capable, consenting adults. The data are not traceable to participating individuals. The procedure complies with the general data protection regulation (GDPR).
Informed Consent Statement
Informed consent was obtained from all subjects involved in the study.
Data Availability Statement
The datasets used and analyzed during the current study are available from the corresponding author on reasonable request.
Conflicts of Interest
The authors declare no conflicts of interest.
Abbreviations
The following abbreviations are used in this manuscript:
| CAT | Computer-Aided Translation |
| ICT | Information and Communication Technologies |
| IP | Internet Protocol |
| IT | Information Technology |
| LMS | Learning Management System |
| PBL | Project-Based Learning |
| SDL | Self-Directed Learning |
| SEG | Serious Educational Games |
| STEM | Science, Technology, Engineering, Mathematics |
| TBL | Team-Based Learning |
References
- Adelson, J. L., & McCoach, B. (2010). Measuring the mathematical attitudes of elementary students: The effects of a 4-point or 5-point likert-type scale. Educational and Psychological Measurement, 70(5), 796–807. [Google Scholar] [CrossRef]
- Aiken, L. R. (1985). Three coefficients for analyzing the reliability and validity of rating. Educational and Psychological Measurement, 45(1), 131–142. [Google Scholar] [CrossRef]
- Alarifi, B. N., & Song, S. (2024). Online vs in-person learning in higher education: Effects on student achievement and recommendations for leadership. Nature, 11, 86. [Google Scholar] [CrossRef]
- Alcaraz-Carrión, D., & Valenzuela, J. (2021). Time as space vs. time as quantity in Spanish: A co-speech gesture study. Language and Cognition, 14(1), 1–18. [Google Scholar] [CrossRef]
- Almeida, F. (2017). Concept and dimensions of web 4.0. International Journal of Computers and Technology, 16(7), 7040–7046. [Google Scholar] [CrossRef]
- Almusaed, A., Almssad, A., & Rico-Cortez, M. (2023, April 13–16). Maximizing student engagement in a hybrid learning environment: A comprehensive review and analysis. International Conference on Humanities, Social and Education Sciences, Denver, CO, USA. [Google Scholar]
- Althubaiti, A. (2023). Sample size determination: A practical guide for health researchers. Journal of General and Family Medicine, 24(2), 72–78. [Google Scholar] [CrossRef] [PubMed]
- Bekereci-Sahin, M., & Aslan, R. (2025). Navigating online microteaching: Pre-service teachers’ experiences and insights. Pedagogies: An International Journal, 2025, 2535343. [Google Scholar] [CrossRef]
- Berners-Lee, T. (1999). Weaving the web. The original design and ultimate destiny of the world wide web by its inventor (1st ed., pp. 1–226). Harper. [Google Scholar]
- Bialystok, E. (2017). The bilingual adaptation: How minds accommodate experience. Psychological Bulletin, 143(3), 233–262. [Google Scholar] [CrossRef]
- Bidarra, J., Rocio, V., Sousa, N., & Coutinho-Rodrigues, J. (2023). Problems and prospects of hybrid learning in higher education. Open Learning: The Journal of Open, Distance and e-Learning, 2023, 2404036. [Google Scholar] [CrossRef]
- Black, K., Bullis, A., Marcotte, J., McGuire, P., Wagoner, J., Wiley, J., Wrobel, R., & Weiss, D. J. (2025). How instructional modalities shape learning: A longitudinal study of student perceptions, engagement, and outcomes in general chemistry II during COVID-19 and post-Pandemic. Journal of Chemical Education, 102(6), 2355–2363. [Google Scholar] [CrossRef]
- Bonfield, C. A., Salter, M., Longmuir, A., Benson, M., & Adachi, C. (2020). Transformation or evolution?: Education 4.0, teaching and learning in the digital age. Higher Education Pedagogies, 5(1), 223–246. [Google Scholar] [CrossRef]
- Chaffar, S., & Frasson, C. (2012). Affective dimensions of learning. In Encyclopedia of the sciences of learning (pp. 169–172). Springer. [Google Scholar]
- Chakraborty, S., González-Triana, Y., Mendoza, J., & Galatro, D. (2023). Insights on mapping Industry 4.0 and Education 4.0. Frontiers in Education, 8, 1150190. [Google Scholar] [CrossRef]
- Charter, R. A. (2003). A breakdown of reliability coefficients by test type and reliability method, and the clinical implications of low reliability. The Journal of General Psychology, 130(3), 290–304. [Google Scholar] [CrossRef] [PubMed]
- Chenari, M. U., Sarvestani, M. S., Azarkhavarani, A. R., Izadi, S., & Cirella, G. T. (2024). Enhancing E-learning in higher education: Lessons learned from the pandemic. E-Learning and Digital Media, 2024, 20427530241268. [Google Scholar] [CrossRef]
- Cheng, S., & Wu, Z. (2024). Spatialization of time in bilinguals: What do we make of the effect of the testing language? Frontiers in Psychology, 15, 1355065. [Google Scholar] [CrossRef]
- Christ, A. A., Capon-Sieber, V., Köhler, C., Klieme, E., & Praetorius, A. K. (2024). Revisiting the three basic dimensions model: A critical empirical investigation of the indirect effects of student-perceived teaching quality on student outcomes. Frontline Learning Research, 12(1), 1349. [Google Scholar] [CrossRef]
- Cicchetti, D. V. (1994). Guidelines, criteria, and rules of thumb for evaluating normed and standardized assessment instruments in psychology. Psychological Assessment, 6(4), 284–290. [Google Scholar] [CrossRef]
- Colina-Ysea, F., Pantigoso-Leython, N., Abad-Lezama, I., Calla-Vásquez, K., Chávez-Campó, S., Sanabria-Boudri, F. M., & Soto-Rivera, C. (2024). Implementation of hybrid education in peruvian public universities: The challenges. Education Sciences, 14(4), 419. [Google Scholar] [CrossRef]
- De Luca, V., Rothman, J., Bialystok, E., & Pliatsikas, C. (2020). Duration and extent of bilingual experience modulate neurocognitive outcomes. NeuroImage, 204, 116222. [Google Scholar] [CrossRef] [PubMed]
- Duffy, S. E., & Feist, M. I. (2023). Conceptualizing time through language and space. In Time, metaphor, and language—A cognitive science perspective (1st ed., pp. 82–111). Cambridge University Press. [Google Scholar]
- El Sheikh, S., & Assaad, R. Y. (2018). The impact of changing learning environment on students’ learning in marketing education: A case-study applied in higher education in Egypt. Compass Journal of Learning and Teaching, 11(2), 675. [Google Scholar] [CrossRef]
- Espey, M. (2022). Variation in individual engagement in team-based learning and final exam performance. International Review of Economics Education, 41, 100251. [Google Scholar] [CrossRef]
- García-Ceberino, J. M., Antúnez, A., Ibáñez, S. J., & Feu, S. (2020). Design and validation of the instrument for the measurement of learning and performance in football. International Journal of Environmental Research and Public Health, 17(13), 4629. [Google Scholar] [CrossRef]
- Gaskins, W., Johnson, J., Maltbie, C., & Kukreti, A. (2015). Changing the learning environment in the college of engineering and applied science using challenge based learning. International Journal of Engineering Pedagogy (IJEP), 5(1), 33–41. [Google Scholar] [CrossRef]
- Gerstein, J. (2014). Moving from education 1.0 through education 2.0 towards education 3.0. In L.-M. Blaschke, C. Kenyon, & S. Hase (Eds.), Experiences in self-determined learning (pp. 83–99). Create Space Independent Publishing Platform. [Google Scholar]
- Grannäs, J., & Stavem, S. M. (2020). Transitions through remodelling teaching and learning environments. Education Enquiry, 12(3), 266–281. [Google Scholar] [CrossRef]
- Gudoneine, D., Staneviciene, E., Huet, I., Dickel, J., Dieng, D., Degroote, J., Rocio, V., Butkiene, R., & Casanova, D. (2025). Hybrid teaching and learning in higher education: A systematic literature review. Sustainability, 17(2), 756. [Google Scholar] [CrossRef]
- Guerrero-Quiñonez, A. J., Bedoya-Flores, M. C., Mosquera-Quiñonez, E. F., Ango-Ramos, E. D., & Tambaco, R. L. (2023). Hybrid education: Current challenges. Ibero-American Journal of Education & Society Research, 3(1), 276–279. [Google Scholar] [CrossRef]
- Gueye, M. L., & Expósito, E. (2023). Education 4.0: Proposal of a model for autonomous management of learning processes. Lecture Notes in Computer Science, 13821, 106–117. [Google Scholar]
- Gunawardena, M., Bishop, P., & Aviruppola, K. (2024). Personalized learning: The simple, the complicated, the complex and the chaotic. Teaching and Teacher Education, 139, 104429. [Google Scholar] [CrossRef]
- Gülkesen, K. H., Bora, F., Ilhanli, N., Avsar, E., & Zayim, N. (2022). Cohen’s d and physicians’ opinion on effect size: A questionnaire on anemia treatment. Journal of Investigative Medicine, 70(3), 814–819. [Google Scholar] [CrossRef] [PubMed]
- Hamman-Ortiz, L. (2020). Cultivating a critical translanguaging space in dual language bilingual education. International Multilingual Research Journal, 18(2), 119–139. [Google Scholar] [CrossRef]
- Huk, T. (2021). From education 1.0 to education 4.0–Challenges for the contemporary school. The New Education Review, 1, 36–46. [Google Scholar] [CrossRef]
- Ibrahim, A. K. (2021). Evolution of the web: From web 1.0 to 4.0. Qubahan Academic Journal, 1(3), 20–28. [Google Scholar] [CrossRef]
- Imbens, G. W. (2021). Statistical significance, p-values, and the reporting of uncertainty. Journal of Economic Perspectives, 35(3), 157–174. [Google Scholar] [CrossRef]
- Jackson, C. (2018). Affective dimensions of learning. In Contemporary theories of learning (2nd ed., pp. 139–152). Routledge. [Google Scholar]
- Jamali, A. R., & Mohamad, M. M. (2018). Dimensions of learning styles among engineering students. Journal of Physics: Conference Series, 1049, 012055. [Google Scholar] [CrossRef]
- Kentnor, H. (2015). Distance education and the evolution of online learning in the United States. Curriculum and Teaching Dialogue, 17(1&2), 21–34. [Google Scholar]
- Khakimova, M. F., & Kayumova, M. S. (2022, December 15). Factors that increase the effectiveness of hybrid teaching in a digital educational environment. 6th International Conference on Future Networks & Distributed Systems (ICFNDS’22), Tashkent, Uzbekistan. [Google Scholar]
- Klimplová, L. (2024, July 1–3). Hybrid teaching in higher education: Current insights and future research directions. 16th International Conference on Education and New Learning Technologies (EDULEARN24) (pp. 5727–5736), Palma de Mallorca, Spain. [Google Scholar]
- Kniffin, L. E., & Greenleaf, J. (2024). Hybrid teaching and learning in higher education: An appreciative inquiry. International Journal of Teaching and Learning in Higher Education, 35(2), 13. [Google Scholar]
- Kokko, M., Pramila-Savukoski, S., Ojala, J., Kuivila, H. M., Juntunen, J., Törmänen, T., & Mikkonen, K. (2024). The effect of educational intervention on hybrid teaching competence of health sciences and medical educators—A mixed methods study. Scandinavian Journal of Educational Research, 69(5), 973–988. [Google Scholar] [CrossRef]
- Kurzman, P. A. (2013). The evolution of distance learning and online education. Journal of Teaching in Social Work, 33(4–5), 331–338. [Google Scholar] [CrossRef]
- Lagrutta, R., Carlucci, D., Santasiero, F., & Schiuma, G. (2023, September 7–8). Distinguishing the dimensions of learning spaces: A systematic literature review. 24th European Conference on Knowledge Management (Vol. 24), Lisbon, Portugal. [Google Scholar]
- Laranjeira, M., & Teixeira, M. O. (2025). Relationships between engagement, achievement and well-being: Validation of the engagement in higher education scale. Studies in Higher Education, 50(4), 756–770. [Google Scholar] [CrossRef]
- Li, K. C., Wong, B. T. M., Kwan, R., Chan, H. T., Wu, M. M. F., & Cheung, S. K. S. (2023). Evaluation of hybrid learning and teaching practices: The perspective of academics. Sustainability, 15(8), 6780. [Google Scholar] [CrossRef]
- Li, X., Xin, X., Peng, Z., Zhang, H., Yi, C., & Li, B. (2018). Analysis of the spatial variability of land surface variables for ET estimation: Case study in HiWATER campaign. Remote Sensing, 10(1), 91. [Google Scholar] [CrossRef]
- Li, Y., Doewes, R. I., Al-Abyadh, M. H. A., & Islam, M. M. (2023). How does remote education facilitate student performance? Appraising of sustainable learning perspective midst of COVID-19. Economic Research, 36, 2162561. [Google Scholar] [CrossRef]
- Liu, Y. D., Morard, S., Adinda, S., Sánchez, E., & Trestini, M. (2023, October 26–27). A systematic review: Criteria and dimensions of learning experience. 22nd European Conference on e-Learning–ECEL 2023 (Vol. 22), Pretoria, South Africa. [Google Scholar]
- López-Martín, E., & Ardura-Martínez, D. (2023). The effect size in scientific publication. Educación XX1, 26(1), 9–17. [Google Scholar]
- Ma, Z. (2023). Hybrid learning: A new learning model that connects online and offline. Journal of Education and Educational Research, 3(2), 130–132. [Google Scholar] [CrossRef]
- Magetos, D., Mitropoulos, S., & Douligeris, C. (2024, September 20–22). The evolution of the web and its impact on education: From web 1.0 to the metaverse. 9th South-East Europe Design Automation, Computer Engineering, Computer Networks and Social Media Conference (SEEDA-CECNSM 2024) (pp. 103–108), Athens, Greece. [Google Scholar]
- Mañas-Rodríguez, M. A., Alcaraz-Pardo, L., Pecino-Medina, V., & Limbert, C. (2016). Validation of the Spanish version of Soane’s ISA engagement scale. Journal of Work and Organizational Psychology, 32(2), 87–93. [Google Scholar] [CrossRef]
- Mayer, S., Refaie, R. A., & Uebernickel, F. (2024). The challenges and opportunities of hybrid education with location asynchrony: Implications for education policy. Policy Futures in Education, 2024, 14782103231224507. [Google Scholar] [CrossRef]
- Medina, M. S., Smith, W. T., Kolluru, S., Sheaffer, E. A., & DiVall, M. (2019). A review of strategies for designing, administering, and using student ratings of instruction. American Journal of Pharmaceutical Education, 83(5), 7177. [Google Scholar] [CrossRef]
- Miyake, N., & Kirschner, P. A. (2014). The social and interactive dimensions of collaborative learning. In The Cambridge handbook of the learning sciences (pp. 418–438). Cambridge University Press. [Google Scholar]
- Mohammadi, Z. S., Sharifian, L., Morad, S., & Araghieh, A. (2023). Identifying the dimensions and components of education based on flipped learning in elementary school. Iranian Journal of Educational Sociology, 6(2), 45–53. [Google Scholar] [CrossRef]
- Monika, Y., & Kristanto, S. B. (2024). Adapting education: Navigating hybrid classrooms in the post-pandemic era. Journal of Digital Learning and Education, 4(2), 156–166. [Google Scholar] [CrossRef]
- Mørk, G., Magne, T. A., Cartensen, T., Stigen, L., Åsli, L. A., Gramstad, A., Johnson, S. G., & Bonsaksen, T. (2020). Associations between learning environment variables and students’ approaches to studying: A cross-sectional study. BMC Medical Education, 20, 120. [Google Scholar] [CrossRef] [PubMed]
- Murray, G. (2014). Exploring the social dimensions of autonomy in language learning. In Social dimensions of autonomy in language learning (pp. 3–11). Palgrave Macmillan. [Google Scholar]
- Nicol, D., Minty, I., & Sinclair, C. (2003). The social dimensions of online learning. Innovations in Education and Teaching International, 40(3), 270–280. [Google Scholar] [CrossRef]
- Ning, J., Ma, Z., Yao, J., Wang, Q., & Zhang, B. (2025). Personalized learning supported by learning analytics: A systematic review of functions, pathways, and educational outcomes. Interactive Learning Environments, 2025, 2478437. [Google Scholar] [CrossRef]
- Nocchi, S., & Blin, F. (2013, September 11–14). Understanding presence, affordance and the time/space dimensions for language learning in virtual worlds. 2013 EUROCALL Conference, Évora, Portugal. [Google Scholar]
- Nørgård, R. T. (2021). Theorising hybrid lifelong learning. British Journal of Educational Technology, 52(4), 1709–1723. [Google Scholar] [CrossRef]
- Nwachukwu, C., & Osa-Izeko, E. (2022). Assessment of intellectual, social and affective (ISA) engagement of academics in Nigerian universities. Nigerian Journal of Management Sciences, 23(2), 20–28. [Google Scholar]
- Polytechnic University of Valencia. (n.d.). Spanish grading system. Available online: https://www.upv.es/entidades/OPII/infoweb/pi/info/1147768normali.html (accessed on 20 August 2025).
- Pumpe, I. E., & Jonkmann, K. (2025). Study demands and resources in distance education—Their associations with engagement, emotional exhaustion, and academic success. Education Sciences, 15(6), 664. [Google Scholar] [CrossRef]
- Ramírez-Mera, U. N., Ramírez-Díaz, J. A., & Palencia, M. M. (2025). Transforming hybrid learning spaces in higher education: Digital technology integration and the role of socio-material and embodied spaces across the pandemic timeline. Higher Education Research & Development, 2025, 2503821. [Google Scholar]
- Roig, P. J., Alcaraz, S., Gilly, K., Bernad, C., & Juiz, C. (2025). Combining space, time, and language in active learning setups. Education Sciences, 15(6), 672. [Google Scholar] [CrossRef]
- Schweder, S., & Raufelder, D. (2022). Students’ interest and self-efficacy and the impact of changing learning environments. Contemporary Educational Psychology, 70, 102082. [Google Scholar] [CrossRef]
- Schweder, S., & Raufelder, D. (2024). Does changing learning environments affect student motivation? Learning and Instruction, 89, 101829. [Google Scholar] [CrossRef]
- Sherif, M. N., & Amudha, R. (2025). Effectiveness of e-learning among higher education students: A literature review. International Journal of Research in Management, 7(1), 129–135. [Google Scholar] [CrossRef]
- Sidharta, I. (2019). The intellectual, social, affective engagement scale (ISA engagement scale): A validation study. Jurnal Computech & Bisnis, 13(1), 50–57. [Google Scholar]
- Sinha, C., & Gärdenfors, P. (2018). Time, space, and events in language and cognition: A comparative view. Annals of the New York Academy of Sciences, 1326, 72–81. [Google Scholar] [CrossRef]
- Soane, E., Truss, C., Alfes, K., Shantz, A., Rees, C., & Gatenby, M. (2012). Development and application of a new measure of employee engagement: The ISA engagement scale. Human Resource Development International, 15(5), 529–547. [Google Scholar] [CrossRef]
- Toiviainen, H., Sadik, S., Bound, H., Pasqualoni, P., & Ramsamy-Prat, P. (2022). Dimensions of expansion for configuring learning spaces in global work. Journal of Workplace Learning, 34(1), 41–57. [Google Scholar] [CrossRef]
- Tolentino, L. C., & Tokowicz, N. (2011). Across language, space, and time: A review of the role of cross-language similarity in L2 (Morpho)Syntactic processing as revealed by fMRI and ERP methods. Studies in Second Language Acquisition, 33(1), 91–125. [Google Scholar] [CrossRef]
- Tuo, P., Bicakci, M., Ziegler, A., & Zhang, B. (2025). Measuring personalized learning in the smart classroom learning environment: Development and validation of an instrument. Education Sciences, 15(5), 620. [Google Scholar] [CrossRef]
- Twyman, J. S. (2014). Envisioning education 3.0: The fusion of behavior analysis, learning science and technology. Revista Mexicana de Análisis de la Conducta, 40(2), 20–38. [Google Scholar] [CrossRef]
- Vagelatos, A. T., Foskolos, F. K., & Komninos, T. P. (2010, September 10–12). Education 2.0: Bringing innovation to the classroom. 14th Panhellenic Conference on Informatics, Tripoli, Greece. [Google Scholar]
- Vega-Martínez, J. E., Martínez-Serna, M. C., & Parga-Montoya, N. (2020). Dimensions of learning orientation and its impact on organizational performance and competitiveness in SMEs. Journal of Business Economics & Management, 21(2), 395–420. [Google Scholar]
- Volotovska, T., Mykhailiv, H. M., Porada, O., Zelman, L., & Kyshakevych, S. (2024). Change management within the learning environment: The role of leadership and innovative management. Conhecimento & Diversidade, 16(43), 116–152. [Google Scholar]
- XL4HET–Extended Learning for Higher Education Teachers and Trainers. (2024a). The teacher practical handbook to extended learning. Available online: https://www.extendedlearning.eu/wp-content/uploads/2024/05/2021-1-IT02-KA220-HED-000027596_R2_Teacher-guide-EN.pdf (accessed on 20 August 2025).
- XL4HET–Extended Learning for Higher Education Teachers and Trainers. (2024b). Toolbox of pilot practices in extended learning. Available online: https://www.extendedlearning.eu/wp-content/uploads/2024/05/2021-1-IT02-KA220-HED-000027596_R1_Toolbox-EN.pdf (accessed on 20 August 2025).
- Yumnam, V., & Thomas, N. T. (2021). The hybrid learning revolution: A paradigm shift in education. Ilkogretim Online–Elementary Education Online, 20(6), 6175–6186. [Google Scholar]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).