Open Access Article

Using Depth Cameras to Detect Patterns in Oral Presentations: A Case Study Comparing Two Generations of Computer Engineering Students

1 Centro de Ciências, Tecnologias e Saúde, Universidade Federal de Santa Catarina, Araranguá 88906072, Brazil
2 Escuela de Ingeniería Informática, Pontificia Universidad Católica de Valparaíso, Valparaíso 2362807, Chile
3 Escuela de Ingeniería Civil Informática, Universidad de Valparaíso, Valparaíso 2362735, Chile
* Authors to whom correspondence should be addressed.
Sensors 2019, 19(16), 3493; https://doi.org/10.3390/s19163493
Received: 4 July 2019 / Revised: 1 August 2019 / Accepted: 5 August 2019 / Published: 9 August 2019
(This article belongs to the Special Issue Advanced Sensors Technology in Education)

Abstract

Speaking and presenting in public are critical skills for academic and professional development. These skills are in demand across society, and their development and evaluation are a challenge faced by higher education institutions. Evaluating presentations objectively, generating valuable information for professors, and providing appropriate feedback to students all remain open challenges. In this paper, in order to understand and detect patterns in students' oral presentations, we collected data from 222 first-year Computer Engineering (CE) students at three different times over two years (2017 and 2018). For each presentation, using a system we developed together with the Microsoft Kinect, we detected 12 features related to body posture and oral speaking. These features were used as input for the clustering and statistical analyses, which identified three different clusters in the presentations of both years, with stronger patterns in the presentations from 2017. A Wilcoxon rank-sum test allowed us to evaluate how presentation attributes evolved over each year and pointed to a convergence: the number of features that differed statistically between presentations given at the same point in the course decreased. The results can further help give students automatic feedback on their posture and speech throughout presentations, and may serve as baseline information for future comparisons with presentations from students of other undergraduate courses.
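As a rough illustration of the analysis pipeline the abstract describes, the sketch below clusters per-presentation feature vectors with k-means (k = 3, matching the three clusters reported) and compares one attribute between two groups with a Wilcoxon rank-sum test. The data, feature values, and grouping are placeholders, not the authors' code or dataset; scikit-learn and SciPy are assumed as implementations.

import numpy as np
from scipy.stats import ranksums
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)

# Placeholder for the 12 posture/speech features per presentation;
# in the study these come from the Kinect-based detection system.
X = rng.normal(size=(222, 12))

# Standardize so no single feature dominates the Euclidean distances,
# then fit k-means with the three clusters reported in the paper.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(
    StandardScaler().fit_transform(X))
print("cluster sizes:", np.bincount(labels))

# Wilcoxon rank-sum test on one feature between two presentation
# moments (random halves here, standing in for course times).
stat, p_value = ranksums(X[:111, 0], X[111:, 0])
print(f"rank-sum statistic = {stat:.2f}, p = {p_value:.3f}")

Standardizing before k-means is a common choice when features mix units (e.g., seconds of speech versus joint positions); the paper's actual preprocessing may differ.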
Keywords: MS Kinect; multimodal learning analytics; oral presentations; k-means; educational data mining

This is an open access article distributed under the Creative Commons Attribution License (CC BY 4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Share & Cite This Article

MDPI and ACS Style

Roque, F.; Cechinel, C.; Weber, T.O.; Lemos, R.; Villarroel, R.; Miranda, D.; Munoz, R. Using Depth Cameras to Detect Patterns in Oral Presentations: A Case Study Comparing Two Generations of Computer Engineering Students. Sensors 2019, 19, 3493.


Note that from the first issue of 2016, MDPI journals use article numbers instead of page numbers.
