Special Issue "Advanced Sensors Technology in Education"

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Intelligent Sensors".

Deadline for manuscript submissions: closed (20 June 2019).

Special Issue Editors

Guest Editor
Dr. Ruben Gonzalez Crespo

Faculty of Engineering, Universidad Internacional de La Rioja, Av. de la Paz, 137, 26006 Logroño, La Rioja, Spain
Interests: soft computing; accessibility; artificial intelligence; learning analytics
Guest Editor
Prof. Dr. Daniel Burgos

Research Institute for Innovation & Technology in Education, Universidad Internacional de La Rioja, Av. de la Paz, 137, 26006 Logroño, La Rioja, Spain
Interests: educational technology; educational innovation; e-learning; open education; learning analytics

Special Issue Information

Dear Colleagues,

One of the most well-known requirements in educational settings is the need to know what happens during a course, lesson plan or full academic programme. This is true for any type of education, but in particular for open education, which has multiple dimensions of openness. On the one hand, educators (i.e., teachers, professors, tutors, etc.) and practitioners of open education need to reshape the course plan according to the actual features of the learners (e.g., learning styles, motivation, performance, etc.), and they therefore require real-time analytical information to supervise, assess, adapt and offer feedback to the learners. On the other hand, open education offers specific opportunities through online learning using open educational resources (OER). The online environments and platforms provide huge amounts of data on all activities (so-called big data).

More importantly, open education, with open teaching and learning, is now commonly shaped by a learner-centred approach that pushes learners to be the drivers of their own learning. That is, learners require awareness to self-assess their progress throughout the course and make decisions regarding their next steps.

All kinds of sensors that support these tasks are welcome to improve the quality of this new epoch in the online education paradigm. Sensors that track biometrics, actuators, hardware such as wearables that enable augmented reality activities, and different software applications will all improve the expected results.

Dr. Ruben Gonzalez Crespo
Prof. Dr. Daniel Burgos
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • technology-enhanced learning
  • augmented reality
  • learning analytics
  • virtual sensors

Published Papers (12 papers)


Research

Open Access Article
Physical and Tactical Demands of the Goalkeeper in Football in Different Small-Sided Games
Sensors 2019, 19(16), 3605; https://doi.org/10.3390/s19163605
Received: 20 June 2019 / Revised: 9 August 2019 / Accepted: 14 August 2019 / Published: 19 August 2019
Abstract
Background: Several studies have examined the differences between the different small-sided game (SSG) formats. However, only one study has analysed how the different variables that define SSGs can modify the goalkeeper’s behavior. The aim of the present study was to analyze how the modification of the pitch size in SSGs affects the physical demands of the goalkeepers. Methods: Three professional male football goalkeepers participated in this study. Three different SSGs were analysed (62 m × 44 m for a large pitch; 50 m × 35 m for a medium pitch; and 32 m × 23 m for a small pitch). Positional data for each goalkeeper were gathered using an 18.18 Hz global positioning system. The data gathered were used to compute the players’ spatial exploration index, standard ellipse area and prediction ellipse area. The distance covered, distance covered at different intensities and accelerations/decelerations were used to assess the players’ physical performance. Results and Conclusions: There were differences between small and large SSGs in relation to the distances covered at different intensities and pitch exploration. Intensities were lower when the pitch size was larger. In addition, the pitch exploration variables increased along with the increment of the pitch size.
(This article belongs to the Special Issue Advanced Sensors Technology in Education)
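The ellipse-based exploration measures named in the abstract (standard and prediction ellipse areas) can be derived from the positional data alone. A minimal sketch, assuming a 95% prediction ellipse computed from the sample covariance matrix (hypothetical helper, not the study's code):

```python
import numpy as np

def prediction_ellipse_area(xy, chi2_crit=5.991):
    """Area of the 95% prediction ellipse for 2-D positional data.

    The ellipse x' Sigma^-1 x = c has area pi * c * sqrt(det(Sigma));
    chi2_crit = 5.991 is the 95% chi-square critical value with 2 d.o.f.
    """
    cov = np.cov(np.asarray(xy, dtype=float).T)  # 2x2 sample covariance
    return float(np.pi * chi2_crit * np.sqrt(np.linalg.det(cov)))

# Example: positions scattered over a 10 m x 5 m region of the pitch
rng = np.random.default_rng(0)
xy = rng.uniform([0.0, 0.0], [10.0, 5.0], size=(1000, 2))
area = prediction_ellipse_area(xy)
```

With positions sampled at 18.18 Hz, the same covariance matrix also yields the standard ellipse by using a different scale constant.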

Open Access Article
Using Depth Cameras to Detect Patterns in Oral Presentations: A Case Study Comparing Two Generations of Computer Engineering Students
Sensors 2019, 19(16), 3493; https://doi.org/10.3390/s19163493
Received: 4 July 2019 / Revised: 1 August 2019 / Accepted: 5 August 2019 / Published: 9 August 2019
Abstract
Speaking and presenting in public are critical skills for academic and professional development. These skills are in demand across society, and their development and evaluation are a challenge faced by higher education institutions. It is challenging to evaluate presentations objectively, as well as to generate valuable information for professors and appropriate feedback for students. In this paper, in order to understand and detect patterns in oral student presentations, we collected data from 222 Computer Engineering (CE) freshman students at three different times, over two different years (2017 and 2018). For each presentation, using a developed system and Microsoft Kinect, we detected 12 features related to body posture and oral speaking. These features were used as input for the clustering and statistical analysis that allowed for identifying three different clusters in the presentations of both years, with stronger patterns in the presentations of the year 2017. A Wilcoxon rank-sum test allowed us to evaluate the evolution of the presentation attributes over each year and pointed out a convergence in terms of the reduction of the number of features statistically different between presentations given at the same course time. The results can further help to give students automatic feedback in terms of their postures and speech throughout the presentations and may serve as baseline information for future comparisons with presentations from students coming from different undergraduate courses.
(This article belongs to the Special Issue Advanced Sensors Technology in Education)
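The Wilcoxon rank-sum comparison mentioned above is easy to illustrate at the level of its test statistic. A self-contained sketch (average ranks for ties; in practice the p-value would come from a statistics library):

```python
def mann_whitney_u(x, y):
    """U statistic of sample x for the Wilcoxon rank-sum (Mann-Whitney) test.

    All observations are pooled and ranked, with tied values receiving the
    average of their 1-based ranks; then U_x = R_x - n_x(n_x + 1)/2,
    where R_x is the rank sum of x.
    """
    pooled = sorted(list(x) + list(y))
    rank = {}
    i = 0
    while i < len(pooled):
        j = i
        while j < len(pooled) and pooled[j] == pooled[i]:
            j += 1
        rank[pooled[i]] = (i + 1 + j) / 2  # average of ranks i+1 .. j
        i = j
    r_x = sum(rank[v] for v in x)
    return r_x - len(x) * (len(x) + 1) / 2

# Two feature samples with no overlap give the extreme value U = 0
u_low = mann_whitney_u([1.1, 1.2, 1.3], [2.1, 2.2, 2.3])
```

Symmetrically, swapping the samples yields U = n_x * n_y, the other extreme.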

Open Access Article
Beyond Reality—Extending a Presentation Trainer with an Immersive VR Module
Sensors 2019, 19(16), 3457; https://doi.org/10.3390/s19163457
Received: 18 June 2019 / Revised: 2 August 2019 / Accepted: 4 August 2019 / Published: 7 August 2019
Abstract
The development of multimodal sensor-based applications designed to support learners with the improvement of their skills is expensive, since most of these applications are tailor-made and built from scratch. In this paper, we show how the Presentation Trainer (PT), a multimodal sensor-based application designed to support the development of public speaking skills, can be modularly extended with a Virtual Reality real-time feedback module (VR module), which makes the use of the PT more immersive and comprehensive. The described study consists of a formative evaluation and has two main objectives. Firstly, a technical objective concerns the feasibility of extending the PT with an immersive VR module. Secondly, a user experience objective focuses on the level of satisfaction when interacting with the VR-extended PT. To study these objectives, we conducted user tests with 20 participants. The results of our test show the feasibility of modularly extending existing multimodal sensor-based applications and, in terms of learning and user experience, indicate a positive attitude of the participants towards using the application (PT + VR module).
(This article belongs to the Special Issue Advanced Sensors Technology in Education)

Open Access Article
Introducing Low-Cost Sensors into the Classroom Settings: Improving the Assessment in Agile Practices with Multimodal Learning Analytics
Sensors 2019, 19(15), 3291; https://doi.org/10.3390/s19153291
Received: 28 June 2019 / Revised: 23 July 2019 / Accepted: 24 July 2019 / Published: 26 July 2019
Abstract
Currently, the improvement of core skills appears as one of the most significant educational challenges of this century. However, assessing the development of such skills is still a challenge in real classroom environments. In this context, Multimodal Learning Analytics techniques appear as an attractive alternative to complement the development and evaluation of core skills. This article presents an exploratory study that analyzes the collaboration and communication of students in a Software Engineering course who performed a learning activity simulating Scrum with Lego® bricks. Data from the Scrum process were captured, and multidirectional microphones were used in the retrospective ceremonies. Social network analysis techniques were applied, and a correlational analysis was carried out with all the registered information. The results obtained allowed important relationships and characteristics of the collaborative and non-collaborative groups to be detected, relating to productivity, effort, and the predominant personality styles in the groups. From all the above, we can conclude that Multimodal Learning Analytics techniques offer considerable potential to support the process of skills development in students.
(This article belongs to the Special Issue Advanced Sensors Technology in Education)

Open Access Article
Evaluation of the [email protected] Game-Based Learning–Teaching Approach
Sensors 2019, 19(15), 3251; https://doi.org/10.3390/s19153251
Received: 20 June 2019 / Revised: 16 July 2019 / Accepted: 18 July 2019 / Published: 24 July 2019
Abstract
The constructivist approach is interested in creating knowledge through active engagement and encourages students to build their knowledge from their experiences in the world. Learning through digital game making is a constructivist approach that allows students to learn by developing their own games, enhancing problem-solving skills and fostering creativity. In this context, two tools, the [email protected] App and the Project Management Dashboard (PMD), were developed to enable students from different countries to adapt their learning material by programming and designing games for their academic subjects, thereby integrating the game mechanics, dynamics, and aesthetics into the academic curriculum. This paper focuses on presenting the validation context as well as the evaluation of these tools. The Hassenzahl model and the AttrakDiff survey were used to measure users’ experience and satisfaction and to understand emotional responses, thus providing information that enables testing of the acceptability and usability of the developed apps. After two years of usage of code-making apps (i.e., [email protected] and its pre-design version, Pocket Code), the pupils spontaneously processed knowledge from their academic subjects as game-based embedded knowledge. The students demonstrated creativity, a practical approach, and enthusiasm for making games focused on academic content that led them to learning, using mobile devices, sensors, images, and contextual information. This approach was widely accepted by students and teachers as part of their everyday class routines.
(This article belongs to the Special Issue Advanced Sensors Technology in Education)

Open Access Article
Can You Ink While You Blink? Assessing Mental Effort in a Sensor-Based Calligraphy Trainer
Sensors 2019, 19(14), 3244; https://doi.org/10.3390/s19143244
Received: 20 June 2019 / Revised: 17 July 2019 / Accepted: 20 July 2019 / Published: 23 July 2019
Abstract
Sensors can monitor physical attributes and record multimodal data in order to provide feedback. The calligraphy trainer application exploits these affordances in the context of handwriting learning. It records an expert’s handwriting performance to compute an expert model, and then uses the expert model to provide guidance and feedback to the learners. However, new learners can be overwhelmed by the feedback, as handwriting learning is a tedious task. This paper presents a pilot study conducted with the calligraphy trainer to evaluate the mental effort induced by the various types of feedback provided by the application. Ten participants, five in the control group and five in the treatment group, all Ph.D. students in the technology-enhanced learning domain, took part in the study. The participants used the application to learn three characters from the Devanagari script. The results show higher mental effort in the treatment group when all types of feedback are provided simultaneously. The mental effort for individual types of feedback was similar to that of the control group. In conclusion, the feedback provided by the calligraphy trainer does not impose high mental effort, and, therefore, the design considerations of the calligraphy trainer can be insightful for multimodal feedback designers.
(This article belongs to the Special Issue Advanced Sensors Technology in Education)

Open Access Article
Use of Computing Devices as Sensors to Measure Their Impact on Primary and Secondary Students’ Performance
Sensors 2019, 19(14), 3226; https://doi.org/10.3390/s19143226
Received: 20 May 2019 / Revised: 17 July 2019 / Accepted: 18 July 2019 / Published: 22 July 2019
Abstract
The constant innovation in new technologies and the increase in the use of computing devices in different areas of society have contributed to a digital transformation in almost every sector. This digital transformation has also reached the world of education, making it possible for members of the educational community to adopt Learning Management Systems (LMS), where the digital contents replacing traditional textbooks are exploited and managed. This article aims to study the relationship between the type of computing device from which students access the LMS and how it affects their performance. To achieve this, the LMS accesses of students in a school comprising stages from elementary to bachelor’s degree have been monitored, with the different computing devices acting as sensors to gather data such as the type of device and operating system used by the students. The main conclusion is that students who access the LMS significantly improve their performance, and that the type of device and the operating system have an influence on the number of passed subjects. Moreover, a predictive model has been generated to predict the number of passed subjects according to these factors, showing promising results.
(This article belongs to the Special Issue Advanced Sensors Technology in Education)

Open Access Article
Detecting Mistakes in CPR Training with Multimodal Data and Neural Networks
Sensors 2019, 19(14), 3099; https://doi.org/10.3390/s19143099
Received: 20 May 2019 / Revised: 24 June 2019 / Accepted: 5 July 2019 / Published: 13 July 2019
Abstract
This study investigated to what extent multimodal data can be used to detect mistakes during Cardiopulmonary Resuscitation (CPR) training. We complemented the Laerdal QCPR ResusciAnne manikin with the Multimodal Tutor for CPR, a multi-sensor system consisting of a Microsoft Kinect for tracking body position and a Myo armband for collecting electromyogram information. We collected multimodal data from 11 medical students, each of them performing two sessions of two-minute chest compressions (CCs). We gathered in total 5254 CCs that were all labelled according to five performance indicators, corresponding to common CPR training mistakes. Three out of five indicators, CC rate, CC depth and CC release, were assessed automatically by the ResusciAnne manikin. The remaining two, related to arm and body position, were annotated manually by the research team. We trained five neural networks, one to classify each of the five indicators. The results of the experiment show that multimodal data can provide accurate mistake detection compared to the ResusciAnne manikin baseline. We also show that the Multimodal Tutor for CPR can detect additional CPR training mistakes, such as incorrect use of the arms and body weight; thus far, these mistakes could be identified only by human instructors. Finally, to inform future implementations of the Multimodal Tutor for CPR, we administered a questionnaire to collect feedback on valuable aspects of CPR training.
(This article belongs to the Special Issue Advanced Sensors Technology in Education)
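The three manikin-assessed indicators lend themselves to simple threshold rules. A hypothetical sketch, using the compression-quality targets from the 2015 resuscitation guidelines (rate 100–120/min, depth 50–60 mm) as illustrative thresholds rather than the paper's trained classifiers:

```python
def label_compression(rate_cpm, depth_mm, full_release):
    """Return the list of mistakes for one chest compression.

    Thresholds follow the 2015 AHA guideline targets (100-120
    compressions/min, 50-60 mm depth, full chest release); they are
    illustrative, not the study's neural-network classifiers.
    """
    mistakes = []
    if not 100 <= rate_cpm <= 120:
        mistakes.append("rate")
    if not 50 <= depth_mm <= 60:
        mistakes.append("depth")
    if not full_release:
        mistakes.append("release")
    return mistakes

# A guideline-compliant compression yields no mistake labels
ok = label_compression(110, 55, True)
```

The two remaining indicators (arm and body position) have no manikin ground truth, which is why the study annotated them manually before training classifiers.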

Open Access Article
A Visual Dashboard to Track Learning Analytics for Educational Cloud Computing
Sensors 2019, 19(13), 2952; https://doi.org/10.3390/s19132952
Received: 10 April 2019 / Revised: 1 July 2019 / Accepted: 2 July 2019 / Published: 4 July 2019
Abstract
Cloud providers such as Amazon Web Services (AWS) stand out as useful platforms to teach distributed computing concepts as well as the development of Cloud-native scalable application architectures on real-world infrastructures. Instructors can benefit from high-level tools to track the progress of students during their learning paths on the Cloud, and this information can be disclosed via educational dashboards for students to understand their progress through the practical activities. To this end, this paper introduces CloudTrail-Tracker, an open-source platform to obtain enhanced usage analytics from a shared AWS account. The tool provides the instructor with a visual dashboard that depicts the aggregated usage of resources by all the students during a certain time frame and the specific use of AWS by a specific student. To facilitate the self-regulation of students, the dashboard also depicts the percentage of progress for each lab session and the actions still pending for the student. The dashboard has been integrated in four Cloud subjects that use different learning methodologies (from face-to-face to online learning), and the students positively highlight the usefulness of the tool for Cloud instruction in AWS. This automated collection of evidence of student activity on the Cloud results in close to real-time learning analytics, useful both for semi-automated assessment and for students’ self-awareness of their own training progress.
(This article belongs to the Special Issue Advanced Sensors Technology in Education)
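The per-student progress percentage described above amounts to checking which of a lab session's expected API actions appear in the student's event log. A minimal sketch under an assumed event format (not CloudTrail-Tracker's actual schema):

```python
def lab_progress(expected_actions, student_events):
    """Percentage of a lab's expected actions present in a student's
    event log, plus the actions still pending.

    `student_events` is assumed to be a list of dicts with an
    "eventName" key, loosely mimicking CloudTrail records.
    """
    seen = {event["eventName"] for event in student_events}
    pending = [a for a in expected_actions if a not in seen]
    done = len(expected_actions) - len(pending)
    pct = 100.0 * done / len(expected_actions)
    return pct, pending

# Example: the student launched an instance and created a bucket,
# but has not yet uploaded an object
events = [{"eventName": "RunInstances"}, {"eventName": "CreateBucket"}]
pct, pending = lab_progress(["RunInstances", "CreateBucket", "PutObject"], events)
```

The dashboard would then render `pct` per lab session and list `pending` as the student's remaining steps.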

Open Access Article
Time Orientation Technologies in Special Education
Sensors 2019, 19(11), 2571; https://doi.org/10.3390/s19112571
Received: 15 May 2019 / Revised: 2 June 2019 / Accepted: 3 June 2019 / Published: 6 June 2019
Abstract
A device to train children in time orientation has been designed, developed and evaluated. It is framed within a long-term cooperation between a university and special education schools. It uses a specific, cognitively accessible time display: the time left in the day is represented by a row of luminous elements that are initially on, and the passing of time is represented by turning off each luminous element sequentially and gradually every 15 min. The agenda is displayed by relating time to tasks with standard pictograms for further accessibility. Notifications of tasks to come, both for management support and anticipation of changes, use visual and auditory information. The agenda can be described in an Alternative and Augmentative Communication pictogram language already used by the children, supporting individual and class activities on the agenda. Validation has been performed with 16 children in 12 classrooms of four special education schools. The evaluation methodology compares prior and posterior assessments, based on the International Classification of Functioning, Disability and Health (ICF) from the World Health Organization (WHO), together with observation registers. Results show consistent improvement in performance related to time orientation.
(This article belongs to the Special Issue Advanced Sensors Technology in Education)

Open Access Article
Touch-Typing Detection Using Eyewear: Toward Realizing a New Interaction for Typing Applications
Sensors 2019, 19(9), 2022; https://doi.org/10.3390/s19092022
Received: 18 March 2019 / Revised: 25 April 2019 / Accepted: 28 April 2019 / Published: 30 April 2019
Abstract
Typing skills are important in the digital information society of this generation. As a method to improve typing speed, in this study we focused on the training of touch typing, which enables typing a key without looking at the keyboard. To support touch-typing training, it is effective to apply a penalty if a learner looks at the keyboard; however, to realize this penalty method, the computer needs to recognize whether the learner has looked at the keyboard. We therefore proposed a method that uses eyewear to detect a learner’s eye gaze, namely whether the learner has looked at the keyboard, and then evaluated the detection accuracy of our proposed method. We also examined the necessity for our system by analyzing the relationship between a learner’s eye gaze and touch-typing skills.
(This article belongs to the Special Issue Advanced Sensors Technology in Education)

Open Access Article
Data-Driven Interaction Review of an Ed-Tech Application
Sensors 2019, 19(8), 1910; https://doi.org/10.3390/s19081910
Received: 12 February 2019 / Revised: 18 April 2019 / Accepted: 18 April 2019 / Published: 22 April 2019
Abstract
Smile and Learn is an Ed-Tech company that runs a smart library with more than 100 applications, games and interactive stories, aimed at children aged two to ten and their families. The platform gathers thousands of data points from interaction with the system to subsequently offer reports and recommendations. Given the complexity of navigating all the content, the library implements a recommender system. The purpose of this paper is to evaluate two aspects of such a system focused on children: the influence of the order of recommendations on user exploratory behavior, and the impact of the choice of the recommendation algorithm on engagement. The assessment, based on data collected between 15 October 2018 and 1 December 2018, required the analysis of the number of clicks performed on the recommendations depending on their ordering, and an A/B/C test in which two standard recommendation algorithms were compared with a random recommendation that served as a baseline. The results suggest a direct connection between the order of a recommendation and the interest raised, and the superiority of recommendations based on popularity over the other alternatives.
(This article belongs to the Special Issue Advanced Sensors Technology in Education)
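The popularity baseline that came out ahead in the A/B/C comparison is straightforward to reproduce from a click log. A sketch under an assumed (user, item) log format:

```python
from collections import Counter

def popularity_recommendations(click_log, k=3):
    """Rank items by total click count across all users.

    `click_log` is assumed to be a list of (user, item) pairs; the
    popularity baseline simply recommends the k most-clicked items,
    the same for every user.
    """
    counts = Counter(item for _user, item in click_log)
    return [item for item, _ in counts.most_common(k)]

# Hypothetical log: item "a" is clicked most, then "b", then "c"
log = [("u1", "a"), ("u2", "a"), ("u1", "b"),
       ("u3", "a"), ("u2", "b"), ("u1", "c")]
top = popularity_recommendations(log)
```

In an A/B/C test like the one described, this ranking would be served to one cohort and compared against the alternative algorithm and the random baseline on click-through.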

Sensors EISSN 1424-8220. Published by MDPI AG, Basel, Switzerland.