
From Sensor Data to Educational Insights

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Intelligent Sensors".

Deadline for manuscript submissions: closed (31 March 2022) | Viewed by 60832

Special Issue Editors


Guest Editor
Department of Information and Communication Engineering, University of Murcia, 30100 Murcia, Spain
Interests: learning analytics; games; technology-enhanced learning; computational social science; human-computer interaction

Guest Editor
Monash University, Australia
Interests: learning analytics; sensors; human–computer interaction; co-design

Guest Editor
DIPF | Leibniz Institute for Research and Information in Education, Rostocker Straße 6, 60323 Frankfurt am Main, Germany
Interests: artificial intelligence in education; hybrid AI; multimodal learning analytics; intelligent tutoring systems; human–computer interaction

Guest Editor
DIPF | Leibniz Institute for Research and Information in Education, Germany
Interests: sensor-based learning; multimodal learning analytics; human-computer interaction

Special Issue Information

Dear Colleagues,

Technology is gradually being incorporated as an integral part of learning at all educational levels. This includes the now pervasive presence of virtual learning environments (VLEs), but also devices that are used or worn by learners or that are present in the classroom. This new educational ecosystem has greatly facilitated the capture of data about learners, and thus research areas such as learning analytics (LA), educational data mining (EDM), and artificial intelligence in education (AIED) have grown exponentially during the last decade. However, the inferences about learning that can be made by analyzing trace data from VLEs alone are rather limited. Therefore, these research communities have started to move beyond VLE data by incorporating data from external sources such as sensors, pervasive devices, and computer vision systems. Within the context of education, this subfield is often referred to as multimodal learning analytics (MMLA), but the use of these data sources is also common in broader research areas such as affective computing and human–computer interaction (HCI). The promise is to augment and improve the extent and quality of the analyses that can be performed with these new data sources. The challenge is how to embed sensors and the resulting data representations in authentic educational settings in pedagogically meaningful and ethical ways.

In this Special Issue, we welcome publications that include approaches to convert data captured using sensors (e.g., cameras, smartphones, microphones, or temperature sensors), wearables (e.g., smart wristbands, watches, or glasses), or other IoT devices (e.g., interactive whiteboards, eBooks, or tablets) into meaningful educational insights. The submitted articles need to explain clearly how the inclusion of data from such devices can augment the analyses performed to improve teaching, learning, or the educational context where they occur (e.g., classrooms, VLEs, or other educational spaces).

This Special Issue welcomes all kinds of empirical case studies that fulfil the aforementioned criteria, as well as experimental architectures and positioning/survey papers. The topics of interest include, but are not limited to:

  • Empirical case studies that include data from sensors and IoT devices to make an impact on teaching and learning practices;
  • Learner modeling and intelligent tutoring using multimodal data sources;
  • Critical views or theoretical perspectives regarding how to transform data from these sensors and IoT devices into educational insights;
  • Systematic literature reviews or surveys about the role of data from sensors and IoT devices in research areas such as LA, EDM, AIED, affective computing or HCI to improve education;
  • Architectures or frameworks to manage the orchestration of these sensors and IoT devices to improve education;
  • Privacy, security, and ethical concerns about the use of these sensor data in educational settings.

Dr. José A. Ruipérez-Valiente
Dr. Roberto Martinez-Maldonado
Dr. Daniele Di Mitri
Dr. Jan Schneider
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • sensors and IoT devices in education
  • learning analytics
  • educational data mining
  • artificial intelligence in education
  • affective computing
  • human–computer interaction
  • multimodal learning analytics
  • technology-enhanced learning
  • orchestration
  • multisensorial networks in education

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (13 papers)


Editorial


6 pages, 198 KiB  
Editorial
From Sensor Data to Educational Insights
by José A. Ruipérez-Valiente, Roberto Martínez-Maldonado, Daniele Di Mitri and Jan Schneider
Sensors 2022, 22(21), 8556; https://doi.org/10.3390/s22218556 - 7 Nov 2022
Cited by 7 | Viewed by 1768
Abstract
Technology is gradually becoming an integral part of learning at all levels of educational [...] Full article
(This article belongs to the Special Issue From Sensor Data to Educational Insights)

Research


27 pages, 1622 KiB  
Article
A Data-Driven Approach to Quantify and Measure Students’ Engagement in Synchronous Virtual Learning Environments
by Xavier Solé-Beteta, Joan Navarro, Brigita Gajšek, Alessandro Guadagni and Agustín Zaballos
Sensors 2022, 22(9), 3294; https://doi.org/10.3390/s22093294 - 25 Apr 2022
Cited by 15 | Viewed by 3103
Abstract
In face-to-face learning environments, instructors (sub)consciously measure student engagement to obtain immediate feedback on the training they are leading. This constant monitoring enables instructors to dynamically adapt the training activities according to the perceived student reactions, with the aim of keeping students engaged in the learning process. However, when shifting from face-to-face to synchronous virtual learning environments (VLEs), assessing to what extent students are engaged in the training process during the lecture becomes a challenging and arduous task. Typical indicators such as students’ faces, gestural poses, or even their voices can easily be masked by the intrinsic nature of the virtual domain (e.g., cameras and microphones can be turned off). The purpose of this paper is to propose a methodology, and its associated model, to measure student engagement in VLEs based on the systematic analysis of more than 30 types of digital interactions and events during a synchronous lesson. To validate the feasibility of this approach, a software prototype was implemented to measure student engagement in two different learning activities in a synchronous session: a masterclass and a hands-on session. The obtained results aim to help those instructors who feel that the connection with their students has weakened due to the virtuality of the learning environment. Full article
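The interaction-weighting idea described in the abstract can be illustrated with a minimal sketch. The event names and weights below are hypothetical illustrations, not the authors' actual model of 30+ interaction types:

```python
# Toy engagement index: weight each type of digital interaction event
# observed during a synchronous session and normalize by session length.
# Event names and weights are illustrative assumptions only.

EVENT_WEIGHTS = {
    "chat_message": 1.0,
    "poll_answer": 2.0,
    "mic_on": 1.5,
    "camera_on": 1.5,
    "reaction": 0.5,
}

def engagement_index(events, session_minutes):
    """Return a per-minute weighted engagement score for one student.

    events: list of (event_type, count) pairs observed in the session.
    Unknown event types contribute zero weight.
    """
    total = sum(EVENT_WEIGHTS.get(kind, 0.0) * count for kind, count in events)
    return total / session_minutes

score = engagement_index([("chat_message", 6), ("poll_answer", 3), ("reaction", 4)], 60)
print(round(score, 3))  # (6*1.0 + 3*2.0 + 4*0.5) / 60 = 14/60
```

A real system would additionally weight events by recency and learning activity, as the paper's two activity types (masterclass vs. hands-on) suggest.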
(This article belongs to the Special Issue From Sensor Data to Educational Insights)

18 pages, 4005 KiB  
Article
When Eyes Wander Around: Mind-Wandering as Revealed by Eye Movement Analysis with Hidden Markov Models
by Hsing-Hao Lee, Zih-Ling Chen, Su-Ling Yeh, Janet Huiwen Hsiao and An-Yeu (Andy) Wu
Sensors 2021, 21(22), 7569; https://doi.org/10.3390/s21227569 - 14 Nov 2021
Cited by 17 | Viewed by 4015
Abstract
Mind-wandering has been shown to largely influence our learning efficiency, especially in the digital and distracting era nowadays. Detecting mind-wandering thus becomes imperative in educational scenarios. Here, we used a wearable eye-tracker to record eye movements during the sustained attention to response task. Eye movement analysis with hidden Markov models (EMHMM), which takes both spatial and temporal eye-movement information into account, was used to examine if participants’ eye movement patterns can differentiate between the states of focused attention and mind-wandering. Two representative eye movement patterns were discovered through clustering using EMHMM: centralized and distributed patterns. Results showed that participants with the centralized pattern had better performance on detecting targets and rated themselves as more focused than those with the distributed pattern. This study indicates that distinct eye movement patterns are associated with different attentional states (focused attention vs. mind-wandering) and demonstrates a novel approach in using EMHMM to study attention. Moreover, this study provides a potential approach to capture the mind-wandering state in the classroom without interrupting the ongoing learning behavior. Full article
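EMHMM fits person-specific hidden Markov models over fixation locations; as a much-reduced sketch of its temporal component, one can estimate maximum-likelihood transition probabilities between discrete gaze regions from a single fixation sequence. The region labels here are hypothetical, and this is not the full EMHMM pipeline:

```python
from collections import Counter, defaultdict

def transition_matrix(fixation_regions):
    """Maximum-likelihood transition probabilities between discrete
    gaze regions, estimated from one fixation sequence."""
    pair_counts = Counter(zip(fixation_regions, fixation_regions[1:]))
    row_totals = defaultdict(int)
    for (src, _), n in pair_counts.items():
        row_totals[src] += n
    return {pair: n / row_totals[pair[0]] for pair, n in pair_counts.items()}

# A "centralized" gazer mostly stays in the screen centre ("C"),
# while a "distributed" gazer wanders across regions ("L", "R", "C").
seq = ["C", "C", "C", "L", "C", "C", "R", "C", "C"]
probs = transition_matrix(seq)
print(probs[("C", "C")])  # 4 of the 6 transitions out of C stay in C
```

High self-transition probability on a central region is the kind of signal that would distinguish the paper's "centralized" cluster from the "distributed" one.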
(This article belongs to the Special Issue From Sensor Data to Educational Insights)

26 pages, 3934 KiB  
Article
Mobile Sensing with Smart Wearables of the Physical Context of Distance Learning Students to Consider Its Effects on Learning
by George-Petru Ciordas-Hertel, Sebastian Rödling, Jan Schneider, Daniele Di Mitri, Joshua Weidlich and Hendrik Drachsler
Sensors 2021, 21(19), 6649; https://doi.org/10.3390/s21196649 - 7 Oct 2021
Cited by 7 | Viewed by 3570
Abstract
Research shows that various contextual factors can have an impact on learning. Some of these factors can originate from the physical learning environment (PLE) in this regard. When learning from home, learners have to organize their PLE by themselves. This paper is concerned with identifying, measuring, and collecting factors from the PLE that may affect learning using mobile sensing. More specifically, this paper first investigates which factors from the PLE can affect distance learning. The results identify nine types of factors from the PLE associated with cognitive, physiological, and affective effects on learning. Subsequently, this paper examines which instruments can be used to measure the investigated factors. The results highlight several methods involving smart wearables (SWs) to measure these factors from PLEs successfully. Third, this paper explores how software infrastructure can be designed to measure, collect, and process the identified multimodal data from and about the PLE by utilizing mobile sensing. The design and implementation of the Edutex software infrastructure described in this paper will enable learning analytics stakeholders to use data from and about the learners’ physical contexts. Edutex achieves this by utilizing sensor data from smartphones and smartwatches, in addition to response data from experience samples and questionnaires from learners’ smartwatches. Finally, this paper evaluates to what extent the developed infrastructure can provide relevant information about the learning context in a field study with 10 participants. The evaluation demonstrates how the software infrastructure can contextualize multimodal sensor data, such as lighting, ambient noise, and location, with user responses in a reliable, efficient, and protected manner. Full article
(This article belongs to the Special Issue From Sensor Data to Educational Insights)

27 pages, 28739 KiB  
Article
Two-Dimensional Cartesian Coordinate System Educational Toolkit: 2D-CACSET
by Víctor H. Castañeda-Miranda, Luis F. Luque-Vega, Emmanuel Lopez-Neri, Jesús Antonio Nava-Pintor, Héctor A. Guerrero-Osuna and Gerardo Ornelas-Vargas
Sensors 2021, 21(18), 6304; https://doi.org/10.3390/s21186304 - 21 Sep 2021
Cited by 8 | Viewed by 4235
Abstract
Engineering education benefits from the application of modern technology, allowing students to learn essential Science, Technology, Engineering, and Mathematics (STEM) concepts through hands-on experiences. Robotic kits have been used as an innovative tool in some educational fields, being readily accepted and adopted. However, such kits often require an understanding of basic concepts that is not always appropriate for the student. A critical concept in engineering is the Cartesian Coordinate System (CCS), an essential tool for every engineering discipline, from graphing functions to data analysis in robotics and control applications and beyond. This paper presents the design and implementation of a novel Two-Dimensional Cartesian Coordinate System Educational Toolkit (2D-CACSET) to teach two-dimensional representations as the first step towards constructing spatial thinking. This innovative educational toolkit is based on real-time location systems using Ultra-Wide Band technology. It comprises a workbench; four Anchors pinpointing the X+, X−, Y+, and Y− axes; seven Tags representing points in the plane; one listener connected to a PC collecting the positions of the Tags; and a Graphical User Interface displaying these positions. The Educational Mechatronics Conceptual Framework (EMCF) enables constructing knowledge at concrete, graphic, and abstract levels, so students acquire this knowledge to apply it further down their career path. For this paper, three instructional designs were created using the 2D-CACSET and the EMCF to learn about coordinate axes, quadrants, and points in the CCS. Full article
(This article belongs to the Special Issue From Sensor Data to Educational Insights)

21 pages, 8290 KiB  
Article
An Adaptive Game-Based Learning Strategy for Children Road Safety Education and Practice in Virtual Space
by Noman Khan, Khan Muhammad, Tanveer Hussain, Mansoor Nasir, Muhammad Munsif, Ali Shariq Imran and Muhammad Sajjad
Sensors 2021, 21(11), 3661; https://doi.org/10.3390/s21113661 - 25 May 2021
Cited by 49 | Viewed by 7465
Abstract
Virtual reality (VR) has been widely used as a tool to help people learn and simulate situations that are too dangerous or risky to practice in real life, one of which is road safety training for children. Traditional video- and presentation-based road safety training yields only average results, as it lacks physical practice and the involvement of children during training, and offers no practical examination to check a child's learned abilities before exposure to real-world environments. Therefore, in this paper, we propose a 3D realistic open-ended VR and Kinect sensor-based training setup using the Unity game engine, wherein children are educated and involved in road safety exercises. The proposed system applies the concepts of VR in a game-like setting to let children learn traffic rules and practice them at home without any risk of exposure to the outside environment. Thus, with our interactive and immersive training environment, we aim to minimize road accidents involving children and contribute to the generic domain of healthcare. Furthermore, the proposed framework evaluates the overall performance of the students in a virtual environment (VE) to develop their road-awareness skills. To ensure safety, the proposed system has an extra examination layer for evaluating children's abilities, whereby a child is considered fit for real-world practice once they fulfil certain criteria by achieving set scores. To show the robustness and stability of the proposed system, we conducted four types of subjective activities involving a group of ten students with average grades in their classes. The experimental results show the positive effect of the proposed system in improving the road-crossing behavior of the children. Full article
(This article belongs to the Special Issue From Sensor Data to Educational Insights)

22 pages, 1765 KiB  
Article
Towards Automatic Collaboration Analytics for Group Speech Data Using Learning Analytics
by Sambit Praharaj, Maren Scheffel, Marcel Schmitz, Marcus Specht and Hendrik Drachsler
Sensors 2021, 21(9), 3156; https://doi.org/10.3390/s21093156 - 2 May 2021
Cited by 21 | Viewed by 4986
Abstract
Collaboration is an important 21st-century skill. Co-located (or face-to-face) collaboration (CC) analytics gained momentum with the advent of sensor technology. Most of these works have used the audio modality to detect the quality of CC. CC quality can be detected from simple indicators of collaboration, such as total speaking time, or complex indicators, such as synchrony in the rise and fall of the average pitch. Most studies in the past focused on “how” group members talk (i.e., spectral and temporal features of audio, such as pitch) and not on “what” they talk about. The “what” of the conversations is more overt than the “how”. Very few studies have examined “what” group members talk about, and these studies were lab-based, showing a representative overview of specific words as topic clusters instead of analysing the richness of the content of the conversations by understanding the linkage between these words. To overcome this, in this technical paper based on field trials we made a first step towards prototyping a tool for automatic collaboration analytics. We designed a technical setup to collect, process, and visualize audio data automatically. The data collection took place while a board game was played among university staff with pre-assigned roles, to create awareness of the connection between learning analytics and learning design. We not only performed a word-level analysis of the conversations but also analysed their richness by interactively visualizing the strength of the linkage between words and phrases. In this visualization, we used a network graph to show turn-taking exchanges between different roles along with the word-level and phrase-level analysis. We also used centrality measures to further understand the network graph, based on how much hold certain words have over the network and how influential they are. Finally, we found that this approach had certain limitations in terms of automation in speaker diarization (i.e., who spoke when) and text data pre-processing. Therefore, we concluded that even though the technical setup was only partially automated, it is a way forward to understanding the richness of conversations between different roles and makes a significant step towards automatic collaboration analytics. Full article
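The word-network idea the abstract describes (linking words and measuring how much hold each has over the network) can be approximated with a small co-occurrence graph. The utterances and the centrality measure (weighted degree) below are illustrative choices, not the authors' pipeline:

```python
from collections import Counter
from itertools import combinations

def cooccurrence_graph(utterances):
    """Undirected word co-occurrence network: edge weight = number of
    utterances in which two words appear together."""
    edges = Counter()
    for text in utterances:
        words = sorted(set(text.lower().split()))
        for pair in combinations(words, 2):
            edges[pair] += 1
    return edges

def weighted_degree(edges):
    """Simple centrality: total weight of edges incident to each word."""
    degree = Counter()
    for (a, b), w in edges.items():
        degree[a] += w
        degree[b] += w
    return degree

# Hypothetical utterances from a learning-design board game session.
talk = [
    "learning design needs analytics",
    "analytics inform design",
    "design the board game",
]
central = weighted_degree(cooccurrence_graph(talk))
print(central.most_common(1)[0][0])  # "design" co-occurs with the most words
```

A production tool would add speaker diarization and stop-word filtering before building the graph, which is exactly where the authors report their automation limits.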
(This article belongs to the Special Issue From Sensor Data to Educational Insights)

18 pages, 2108 KiB  
Article
Table Tennis Tutor: Forehand Strokes Classification Based on Multimodal Data and Neural Networks
by Khaleel Asyraaf Mat Sanusi, Daniele Di Mitri, Bibeg Limbu and Roland Klemke
Sensors 2021, 21(9), 3121; https://doi.org/10.3390/s21093121 - 30 Apr 2021
Cited by 26 | Viewed by 4870
Abstract
Beginner table-tennis players require constant real-time feedback while learning the fundamental techniques. However, due to various constraints, such as the mentor's inability to be around all the time and the expense of sensors and equipment for sports training, beginners are unable to get the immediate real-time feedback they need during training. Sensors have been widely used to train beginners and novices in various skills, including psychomotor skills. Sensors enable the collection of multimodal data, which can be used with machine learning to classify training mistakes, give feedback, and further improve learning outcomes. In this paper, we introduce the Table Tennis Tutor (T3), a multi-sensor system consisting of a smartphone with its built-in sensors for collecting motion data and a Microsoft Kinect for tracking body position. We focused on forehand stroke mistake detection. We collected a dataset recording an experienced table tennis player performing 260 short forehand strokes (correct) and mimicking 250 long forehand strokes (mistake). We analysed and annotated the multimodal data to train a recurrent neural network that classifies correct and incorrect strokes. To investigate the accuracy of the aforementioned sensors, three combinations were validated in this study: smartphone sensors only, the Kinect only, and both devices combined. The results show that the smartphone sensors alone perform worse than the Kinect, while combining both devices achieves similar accuracy with better precision. To further strengthen T3's potential for training, an expert interview was held virtually with a table tennis coach to investigate the coach's perception of a real-time feedback system to assist beginners during training sessions. The outcome of the interview shows positive expectations and provided further input that can be beneficial for future implementations of T3. Full article
(This article belongs to the Special Issue From Sensor Data to Educational Insights)

19 pages, 3470 KiB  
Article
Studying Collaboration Dynamics in Physical Learning Spaces: Considering the Temporal Perspective through Epistemic Network Analysis
by Milica Vujovic, Ishari Amarasinghe and Davinia Hernández-Leo
Sensors 2021, 21(9), 2898; https://doi.org/10.3390/s21092898 - 21 Apr 2021
Cited by 6 | Viewed by 2907
Abstract
The role of the learning space is especially relevant in the application of active pedagogies, for example those involving collaborative activities. However, there is limited evidence informing learning design on the potential effects of collaborative learning spaces. In particular, there is a lack of studies generating evidence derived from temporal analyses of the influence of learning spaces on the collaborative learning process. The temporal analysis perspective has been shown to be essential in the analysis of collaboration processes, as it reveals the relationships between students’ actions. The aim of this study is to explore the potential of a temporal perspective to broaden understanding of the effects of table shape on collaboration when different group sizes and genders are considered. On-task actions such as explanation, discussion, non-verbal interaction, and interaction with physical artefacts were observed while students were engaged in engineering design tasks. Results suggest that table shape influences student behaviour when taking into account different group sizes and different genders. Full article
(This article belongs to the Special Issue From Sensor Data to Educational Insights)

27 pages, 2091 KiB  
Article
EFAR-MMLA: An Evaluation Framework to Assess and Report Generalizability of Machine Learning Models in MMLA
by Pankaj Chejara, Luis P. Prieto, Adolfo Ruiz-Calleja, María Jesús Rodríguez-Triana, Shashi Kant Shankar and Reet Kasepalu
Sensors 2021, 21(8), 2863; https://doi.org/10.3390/s21082863 - 19 Apr 2021
Cited by 19 | Viewed by 3923
Abstract
Multimodal Learning Analytics (MMLA) researchers are progressively employing machine learning (ML) techniques to develop predictive models to improve learning and teaching practices. These predictive models are often evaluated for their generalizability using methods from the ML domain, which do not take into account MMLA’s educational nature. Furthermore, there is a lack of systematization in model evaluation in MMLA, which is also reflected in the heterogeneous reporting of the evaluation results. To overcome these issues, this paper proposes an evaluation framework to assess and report the generalizability of ML models in MMLA (EFAR-MMLA). To illustrate the usefulness of EFAR-MMLA, we present a case study with two datasets, each with audio and log data collected from a classroom during a collaborative learning session. In this case study, regression models are developed for collaboration quality and its sub-dimensions, and their generalizability is evaluated and reported. The framework helped us to systematically detect and report that the models achieved better performance when evaluated using hold-out or cross-validation but quickly degraded when evaluated across different student groups and learning contexts. The framework helps to open up a “wicked problem” in MMLA research that remains fuzzy (i.e., the generalizability of ML models), which is critical to both accumulating knowledge in the research community and demonstrating the practical relevance of these techniques. Full article
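The gap EFAR-MMLA exposes (good hold-out scores that degrade across student groups) rests on how the evaluation splits are built. The leave-one-group-out splitter below is a generic sketch of that idea, not the framework itself, and the group labels are hypothetical:

```python
def leave_one_group_out(samples):
    """Yield (held_out_group, train_indices, test_indices) splits where
    each student group is held out in turn, so the model is always
    evaluated on a group it never saw during training."""
    groups = sorted({g for _, g in samples})
    for held_out in groups:
        train = [i for i, (_, g) in enumerate(samples) if g != held_out]
        test = [i for i, (_, g) in enumerate(samples) if g == held_out]
        yield held_out, train, test

# (feature, group) pairs for six observations from three student groups.
data = [(0.2, "A"), (0.4, "A"), (0.5, "B"), (0.7, "B"), (0.1, "C"), (0.9, "C")]
splits = list(leave_one_group_out(data))
print(len(splits))  # one split per group -> 3
print(splits[0])    # -> ('A', [2, 3, 4, 5], [0, 1])
```

Comparing a model's score on these splits against a plain random hold-out is what reveals the generalizability drop the paper reports.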
(This article belongs to the Special Issue From Sensor Data to Educational Insights)

23 pages, 6884 KiB  
Article
Implementation of a MEIoT Weather Station with Exogenous Disturbance Input
by Héctor A. Guerrero-Osuna, Luis F. Luque-Vega, Miriam A. Carlos-Mancilla, Gerardo Ornelas-Vargas, Víctor H. Castañeda-Miranda and Rocío Carrasco-Navarro
Sensors 2021, 21(5), 1653; https://doi.org/10.3390/s21051653 - 27 Feb 2021
Cited by 9 | Viewed by 3191
Abstract
Due to the emergence of the coronavirus disease (COVID-19), education systems in most countries have adapted and quickly changed their teaching strategy to online teaching. This paper presents the design and implementation of a novel Internet of Things (IoT) device, called the MEIoT weather station, which incorporates an exogenous disturbance input, within the National Digital Observatory of Smart Environments (OBNiSE) architecture. The exogenous disturbance input involves a wind blower based on a DC brushless motor. It can be controlled, via the Node-RED platform, manually through a sliding bar or automatically via different predefined profile functions, modifying the wind speed and wind vane sensor variables. An application to engineering education is presented with a case study that includes the instructional design for the least-squares regression topic, for linear, quadratic, and cubic approximations, within the Educational Mechatronics Conceptual Framework (EMCF), to show the relevance of this proposal. This work's main contribution to the state of the art is to turn a weather monitoring system into a hybrid hands-on learning approach thanks to the integrated exogenous disturbance input. Full article
(This article belongs to the Special Issue From Sensor Data to Educational Insights)

33 pages, 5846 KiB  
Article
Epistemic Network Analyses of Economics Students’ Graph Understanding: An Eye-Tracking Study
by Sebastian Brückner, Jan Schneider, Olga Zlatkin-Troitschanskaia and Hendrik Drachsler
Sensors 2020, 20(23), 6908; https://doi.org/10.3390/s20236908 - 3 Dec 2020
Cited by 11 | Viewed by 3605
Abstract
Learning to solve graph tasks is one of the key prerequisites of acquiring domain-specific knowledge in most study domains. Analyses of graph understanding often use eye-tracking and focus on analyzing how much time students spend gazing at particular areas of a graph—Areas of [...] Read more.
Learning to solve graph tasks is one of the key prerequisites of acquiring domain-specific knowledge in most study domains. Analyses of graph understanding often use eye-tracking and focus on analyzing how much time students spend gazing at particular areas of a graph—Areas of Interest (AOIs). To gain a deeper insight into students' task-solving process, we argue that the gaze shifts between students' fixations on different AOIs (so-called transitions) also need to be included in holistic analyses of graph understanding that consider the importance of transitions for the task-solving process. Thus, we introduced Epistemic Network Analysis (ENA) as a novel approach to analyze eye-tracking data of 23 university students who solved eight multiple-choice graph tasks in physics and economics. ENA is a method for quantifying, visualizing, and interpreting network data, allowing a weighted analysis of the gaze patterns of both correct and incorrect graph task solvers that considers the interrelations between fixations and transitions. After an analysis of the differences in the number of fixations and the number of single transitions between correct and incorrect solvers, we conducted an ENA for each task. We demonstrate that an isolated analysis of fixations and transitions provides only a limited insight into graph-solving behavior. In contrast, ENA identifies differences between the gaze patterns of students who solved the graph tasks correctly and incorrectly across the multiple graph tasks. For instance, incorrect solvers shifted their gaze from the graph to the x-axis and from the question to the graph comparatively more often than correct solvers. The results indicate that incorrect solvers often have problems transferring textual information into graphical information and rely more on partly irrelevant parts of a graph. Finally, we discuss how the findings can be used to design experimental studies and for innovative instructional procedures in higher education.
(This article belongs to the Special Issue From Sensor Data to Educational Insights)
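The transitions the abstract describes are the raw input such network analyses build on. The sketch below is not ENA itself, only a minimal illustration of how a sequence of AOI fixations can be reduced to transition counts; the AOI names are hypothetical.

```python
from collections import Counter

def transition_counts(fixation_sequence):
    """Count gaze transitions between consecutive, distinct AOIs.
    Repeated fixations on the same AOI do not count as transitions."""
    counts = Counter()
    for src, dst in zip(fixation_sequence, fixation_sequence[1:]):
        if src != dst:
            counts[(src, dst)] += 1
    return counts

# Hypothetical fixation sequence for one graph task
sequence = ["question", "graph", "x-axis", "graph", "graph", "question", "graph"]
counts = transition_counts(sequence)
# counts[("question", "graph")] == 2, counts[("graph", "x-axis")] == 1
```

ENA goes further by normalizing and projecting such co-occurrence structures into a shared low-dimensional space so that correct and incorrect solvers' networks can be compared directly.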

Other


23 pages, 4444 KiB  
Systematic Review
Detecting Emotions through Electrodermal Activity in Learning Contexts: A Systematic Review
by Anne Horvers, Natasha Tombeng, Tibor Bosse, Ard W. Lazonder and Inge Molenaar
Sensors 2021, 21(23), 7869; https://doi.org/10.3390/s21237869 - 26 Nov 2021
Cited by 59 | Viewed by 6878
Abstract
There is a strong increase in the use of devices that measure physiological arousal through electrodermal activity (EDA). Although there is a long tradition of studying emotions during learning, researchers have only recently started to use EDA to measure emotions in the context of education and learning. This systematic review aimed to provide insight into how EDA is currently used in these settings. The review aimed to investigate the methodological aspects of EDA measures in educational research and synthesize existing empirical evidence on the relation of physiological arousal, as measured by EDA, with learning outcomes and learning processes. The methodological results pointed to considerable variation in the usage of EDA in educational research and indicated that few implicit standards exist. Results regarding learning revealed inconsistent associations between physiological arousal and learning outcomes, which seem mainly due to underlying methodological differences. Furthermore, EDA frequently fluctuated during different stages of the learning process. Compared to this unimodal approach, multimodal designs provide the potential to better understand these fluctuations at critical moments. Overall, this review signals a clear need for explicit guidelines and standards for EDA processing in educational research in order to build a more profound understanding of the role of physiological arousal during learning.
(This article belongs to the Special Issue From Sensor Data to Educational Insights)
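The processing variation the review criticizes often starts at a very basic step: deciding what counts as an arousal response in the EDA signal. The sketch below is one minimal, assumption-laden approach, counting candidate skin conductance responses as local maxima whose rise from the preceding minimum exceeds an amplitude threshold; real pipelines add filtering and tonic/phasic decomposition, and the 0.05 µS threshold is only one common convention, not a standard.

```python
def count_scr_peaks(eda, min_amplitude=0.05):
    """Count candidate skin conductance responses (SCRs): local maxima
    whose rise from the preceding local minimum exceeds min_amplitude
    (in microsiemens). A minimal sketch, not a validated pipeline."""
    peaks = 0
    trough = eda[0]
    for prev, cur, nxt in zip(eda, eda[1:], eda[2:]):
        if cur < trough:
            trough = cur          # track the running minimum
        if prev < cur > nxt and (cur - trough) >= min_amplitude:
            peaks += 1
            trough = cur          # next response measured from a new minimum
    return peaks

# Hypothetical EDA samples (microsiemens): two rises above threshold
signal = [0.30, 0.31, 0.40, 0.35, 0.33, 0.45, 0.38]
```

Because every choice here (threshold, smoothing, decomposition) changes the resulting arousal counts, reporting these parameters explicitly is exactly the kind of standardization the review calls for.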
