Search Results (20)

Search Parameters:
Keywords = classroom scenes

19 pages, 4076 KB  
Article
Enhancing Lecture Interactivity Through Virtual Reality
by Marián Matys, Martin Gašo, Tomáš Balala and Ľuboslav Dulina
Appl. Sci. 2026, 16(2), 711; https://doi.org/10.3390/app16020711 - 9 Jan 2026
Viewed by 359
Abstract
Although conventional lectures can provide a wide range of information to a large group of people, maintaining attention and ensuring knowledge transfer can be a challenge. Therefore, it is important to look for new, engaging, and effective approaches. This pilot feasibility study explores the effectiveness of virtual reality (VR) in increasing student engagement and knowledge transfer during lectures in the field of supply chain logistics and inventory selection systems. An educational VR game was developed through the systematic design of application logic, the creation of 3D assets, the construction of virtual scenes, and the implementation of gameplay. The application simulates three inventory picking methods: conventional selection, Pick by Light, and Pick by Vision systems. A total of 22 master’s students participated in the pilot study. They tested three different versions of the VR game, compared the time they needed to complete it, and participated in a guided discussion and questionnaire. The preliminary student reports indicated that students felt more engaged in the learning process and reported a perceived higher engagement with inventory picking systems compared to the traditional lecture format. On the other hand, participants mentioned concerns about nausea and the unavailability of VR headsets. The pilot results indicate that VR shows potential as an educational tool for teaching industrial logistics because it transforms the typical classroom environment into a more active and playful one, leading to a more natural understanding of the subject. Full article
(This article belongs to the Special Issue Advances in Virtual Reality Applications)

28 pages, 515 KB  
Review
From Cues to Engagement: A Comprehensive Survey and Holistic Architecture for Computer Vision-Based Audience Analysis in Live Events
by Marco Lemos, Pedro J. S. Cardoso and João M. F. Rodrigues
Multimodal Technol. Interact. 2026, 10(1), 8; https://doi.org/10.3390/mti10010008 - 8 Jan 2026
Viewed by 1123
Abstract
The accurate measurement of audience engagement in real-world live events remains a significant challenge, with the majority of existing research confined to controlled environments like classrooms. This paper presents a comprehensive survey of Computer Vision AI-driven methods for real-time audience engagement monitoring and proposes a novel, holistic architecture to address this gap; this architecture is the paper’s main contribution. The survey identifies and defines five core constructs essential for a robust analysis: Attention, Emotion and Sentiment, Body Language, Scene Dynamics, and Behaviours. Through a selective review of state-of-the-art techniques for each construct, it highlights the necessity of a multimodal approach that surpasses the limitations of isolated indicators. The work synthesises a fragmented field into a unified taxonomy and introduces a modular architecture that integrates these constructs with practical, business-oriented metrics such as Commitment, Conversion, and Retention. Finally, by integrating cognitive, affective, and behavioural signals, this work provides a roadmap for developing operational systems that can transform live event experience and management through data-driven, real-time analytics. Full article
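The five constructs above lend themselves to a simple fusion sketch. Everything below is an illustrative assumption (the field names, the 0-to-1 score scaling, the equal default weights); the paper proposes the architecture, not this code.

```python
from dataclasses import dataclass

@dataclass
class ConstructScores:
    """Per-construct scores in [0, 1]; the five fields mirror the survey's
    constructs, but the numeric scaling is a hypothetical assumption."""
    attention: float
    emotion_sentiment: float
    body_language: float
    scene_dynamics: float
    behaviours: float

def fuse_engagement(s: ConstructScores, weights=None) -> float:
    """Weighted average of the construct scores into one engagement index."""
    vals = [s.attention, s.emotion_sentiment, s.body_language,
            s.scene_dynamics, s.behaviours]
    if weights is None:
        weights = [1.0] * len(vals)  # equal weighting is a placeholder choice
    return sum(w * v for w, v in zip(weights, vals)) / sum(weights)
```

Business-oriented metrics such as Commitment or Retention would then be derived from this index over time; that mapping is beyond this sketch.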

30 pages, 2818 KB  
Article
LAViTSPose: A Lightweight Cascaded Framework for Robust Sitting Posture Recognition via Detection–Segmentation–Classification
by Shu Wang, Adriano Tavares, Carlos Lima, Tiago Gomes, Yicong Zhang, Jiyu Zhao and Yanchun Liang
Entropy 2025, 27(12), 1196; https://doi.org/10.3390/e27121196 - 25 Nov 2025
Viewed by 638
Abstract
Sitting posture recognition, defined as automatically localizing and categorizing seated human postures, has become essential for large-scale ergonomics assessment and longitudinal health-risk monitoring in classrooms and offices. However, in real-world multi-person scenes, pervasive occlusions and overlaps induce keypoint misalignment, causing global-attention backbones to fail to localize critical local structures. Moreover, annotation scarcity makes small-sample training commonplace, leaving models insufficiently robust to misalignment perturbations and thereby limiting cross-domain generalization. To address these challenges, we propose LAViTSPose, a lightweight cascaded framework for sitting posture recognition. Concretely, a YOLOR-based detector trained with a Range-aware IoU (RaIoU) loss yields tight person crops under partial visibility; ESBody suppresses cross-person leakage and estimates occlusion/head-orientation cues; a compact ViT head (MLiT) with Spatial Displacement Contact (SDC) and a learnable temperature (LT) mechanism performs skeleton-only classification with a local structural-consistency regularizer. From an information-theoretic perspective, our design enhances discriminative feature compactness and reduces structural entropy under occlusion and annotation scarcity. We conducted a systematic evaluation on the USSP dataset, and the results show that LAViTSPose outperforms existing methods on both sitting posture classification and face-orientation recognition while meeting real-time inference requirements. Full article
(This article belongs to the Special Issue Entropy in Machine Learning Applications, 2nd Edition)
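The detection → segmentation → classification cascade can be illustrated with stubbed stages; the function names and stub outputs below are hypothetical stand-ins for the YOLOR-based detector, ESBody, and the MLiT head, not the paper's implementation.

```python
def detect_persons(frame):
    """Stage 1 (detector): return person boxes as (x, y, w, h). The real
    framework uses a YOLOR-based detector with a Range-aware IoU loss; this
    stub returns two fixed boxes."""
    return [(0, 0, 4, 4), (4, 0, 4, 4)]

def extract_skeleton(frame, box):
    """Stage 2 (ESBody stand-in): crop the box and estimate keypoints.
    Stubbed as the box centre."""
    x, y, w, h = box
    return [(x + w / 2, y + h / 2)]

def classify_posture(keypoints):
    """Stage 3 (MLiT stand-in): skeleton-only posture label. Stubbed by the
    first keypoint's x coordinate."""
    return "upright" if keypoints[0][0] < 4 else "leaning"

def cascade(frame):
    """Run the three stages in sequence, one pass per detected person."""
    results = []
    for box in detect_persons(frame):
        keypoints = extract_skeleton(frame, box)
        results.append((box, classify_posture(keypoints)))
    return results
```

The point of the cascade is that each stage narrows the input of the next: tight crops shield the classifier from cross-person leakage in crowded scenes.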

24 pages, 3721 KB  
Article
Interactive Environment-Aware Planning System and Dialogue for Social Robots in Early Childhood Education
by Jiyoun Moon and Seung Min Song
Appl. Sci. 2025, 15(20), 11107; https://doi.org/10.3390/app152011107 - 16 Oct 2025
Viewed by 796
Abstract
In this study, we propose an interactive environment-aware dialogue and planning system for social robots in early childhood education, aimed at supporting the learning and social interaction of young children. The proposed architecture consists of three core modules. First, semantic simultaneous localization and mapping (SLAM) accurately perceives the environment by constructing a semantic scene representation that includes attributes such as position, size, color, purpose, and material of objects, as well as their positional relationships. Second, the automated planning system enables stable task execution even in changing environments through planning domain definition language (PDDL)-based planning and replanning capabilities. Third, the visual question answering module leverages scene graphs and SPARQL conversion of natural language queries to answer children’s questions and engage in context-based conversations. The experiment conducted in a real kindergarten classroom with children aged 6 to 7 years validated the accuracy of object recognition and attribute extraction for semantic SLAM, the task success rate of the automated planning system, and the natural language question answering performance of the visual question answering (VQA) module. The experimental results confirmed the proposed system’s potential to support natural social interaction with children and its applicability as an educational tool. Full article
(This article belongs to the Special Issue Robotics and Intelligent Systems: Technologies and Applications)
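The VQA module's scene-graph querying can be illustrated with a toy triple store; the real system converts natural-language questions to SPARQL over a semantic map, whereas the triples and pattern matcher below are simplified assumptions with invented entities.

```python
# Scene graph as (subject, predicate, object) triples; a parsed question
# becomes a pattern match over them. The entities and relations are invented
# examples, not objects from the paper's kindergarten experiments.
TRIPLES = [
    ("ball", "color", "red"),
    ("ball", "on", "shelf"),
    ("block", "color", "blue"),
]

def query(subject=None, predicate=None, obj=None):
    """Return all triples matching the non-None pattern slots."""
    return [t for t in TRIPLES
            if (subject is None or t[0] == subject)
            and (predicate is None or t[1] == predicate)
            and (obj is None or t[2] == obj)]

# "What colour is the ball?" would be parsed into query("ball", "color").
```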

16 pages, 749 KB  
Article
The Use of 360-Degree Video to Reduce Anxiety and Increase Confidence in Mental Health Nursing Students: A Mixed Methods Preliminary Study
by Caroline Laker, Pamela Knight-Davidson and Andrew McVicar
Nurs. Rep. 2025, 15(5), 157; https://doi.org/10.3390/nursrep15050157 - 30 Apr 2025
Cited by 1 | Viewed by 1295
Abstract
Background: Stress affects 45% of NHS staff. More research is needed to explore how to develop resilient mental health nurses who face multiple workplace stressors, including interacting with distressed clients. Higher Education Institutions are uniquely placed to introduce coping skills that help reduce anxiety and increase confidence for pre-registration nurses entering placements for the first time. Methods: A convenience sample of first-year mental health student nurses (whole cohort), recruited before their first clinical placement, were invited to participate. Following a mixed methods design, we developed a 360-degree virtual reality (VR) video, depicting a distressed service user across three scenes, filmed in a real-life decommissioned in-patient ward. Participants followed the service user through the scenes, as though in real life. We used the video alongside a cognitive reappraisal/solution-focused/VERA worksheet and supportive clinical supervision technique to explore students’ experiences of VR as an educative tool and to help build emotional coping skills. Results: N = 21 mental health student nurses were recruited to the study. Behavioural responses to the distressed patient scenario were varied. Students who had prior experience in health work were more likely to feel detached from the distress of the service user. Although for some students VR provided a meaningful learning experience in developing emotional awareness, other students felt more like a ‘fly on the wall’ than an active participant. Empathetic and compassionate responses were strongest in those who perceived a strong immersive effect. Overall, the supportive supervision appeared to decrease the anxiety of the small sample involved, but confidence was not affected. Conclusion: The use of 360-degree VR technology as an educative, classroom-based tool to moderate anxiety and build confidence in pre-placement mental health nursing students was partially supported by this study. The effectiveness of such technology appeared to be dependent on the degree to which ‘immersion’ and a sense of presence were experienced by students. Our cognitive reappraisal intervention proved useful in reducing anxiety caused by ‘the patient in distress’ scenario, but only for students who achieved a deep immersive effect. Students with prior exposure to distressing events (in their personal lives and in clinical settings) might have developed other coping mechanisms (e.g., detachment). These findings support the idea that ‘presence’ is a subjective VR experience and can vary among users. Full article

21 pages, 4811 KB  
Article
YOLO-AMM: A Real-Time Classroom Behavior Detection Algorithm Based on Multi-Dimensional Feature Optimization
by Yi Cao, Qian Cao, Chengshan Qian and Deji Chen
Sensors 2025, 25(4), 1142; https://doi.org/10.3390/s25041142 - 13 Feb 2025
Cited by 12 | Viewed by 5391
Abstract
Classroom behavior detection is a key task in constructing intelligent educational environments. However, the existing models are still deficient in detail feature capture capability, multi-layer feature correlation, and multi-scale target adaptability, making it challenging to realize high-precision real-time detection in complex scenes. This paper proposes an improved classroom behavior detection algorithm, YOLO-AMM, to solve these problems. Firstly, we constructed the Adaptive Efficient Feature Fusion (AEFF) module to enhance the fusion of semantic information between different features and improve the model’s ability to capture detailed features. Then, we designed a Multi-dimensional Feature Flow Network (MFFN), which fuses multi-dimensional features and enhances the correlation information between features through the multi-scale feature aggregation module and contextual information diffusion mechanism. Finally, we proposed a Multi-Scale Perception and Fusion Detection Head (MSPF-Head), which significantly improves the adaptability of the head to different scale targets by introducing multi-scale feature perception, feature interaction, and fusion mechanisms. The experimental results showed that compared with the YOLOv8n model, YOLO-AMM improved the mAP0.5 and mAP0.5-0.95 by 3.1% and 4.0%, significantly improving the detection accuracy. Meanwhile, YOLO-AMM increased the detection speed (FPS) by 12.9 frames per second to 169.1 frames per second, which meets the requirement for real-time detection of classroom behavior. Full article
(This article belongs to the Special Issue Sensor-Based Behavioral Biometrics)

16 pages, 2344 KB  
Article
ADYOLOv5-Face: An Enhanced YOLO-Based Face Detector for Small Target Faces
by Linrunjia Liu, Gaoshuai Wang and Qiguang Miao
Electronics 2024, 13(21), 4184; https://doi.org/10.3390/electronics13214184 - 25 Oct 2024
Cited by 10 | Viewed by 6317
Abstract
Benefiting from advancements in generic object detectors, significant progress has been achieved in the field of face detection. Among these algorithms, the You Only Look Once (YOLO) series plays an important role due to its low training computation cost. However, we have observed that face detectors based on lightweight YOLO models struggle with accurately detecting small faces. This is because they preserve more semantic information for large faces while compromising the detailed information for small faces. To address this issue, this study makes two contributions to enhance detection performance, particularly for small faces: (1) modifying the neck part of the architecture by integrating a Gather-and-Distribute mechanism instead of the traditional Feature Pyramid Network to tackle the information fusion challenges inherent in YOLO-based models; and (2) incorporating an additional detection head specifically designed for detecting small faces. To evaluate the performance of the proposed face detector, we introduce a new dataset named XD-Face for the face detection task. In the experimental section, the proposed model is trained using the Wider Face dataset and evaluated on both Wider Face and XD-face datasets. Experimental results demonstrate that the proposed face detector outperforms other excellent face detectors across all datasets involving small faces and achieved improvements of 1.1%, 1.09%, and 1.35% in the AP50 metric on the WiderFace validation dataset compared to the baseline YOLOv5s-based face detector. Full article

15 pages, 2762 KB  
Article
Research on Student Classroom Behavior Detection Based on the Real-Time Detection Transformer Algorithm
by Lihua Lin, Haodong Yang, Qingchuan Xu, Yanan Xue and Dan Li
Appl. Sci. 2024, 14(14), 6153; https://doi.org/10.3390/app14146153 - 15 Jul 2024
Cited by 11 | Viewed by 5833
Abstract
With the rapid development of artificial intelligence and big data technology, intelligent education systems have become a key research focus in the field of modern educational technology. This study aims to enhance the intelligence level of educational systems by accurately detecting student behavior in the classroom using deep learning techniques. We propose a method for detecting student classroom behavior based on an improved RT-DETR (Real-Time Detection Transformer) object detection algorithm. By combining actual classroom observation data with AI-generated data, we create a comprehensive and diverse student behavior dataset (FSCB-dataset). This dataset not only more realistically simulates the classroom environment but also effectively addresses the scarcity of datasets and reduces the cost of dataset construction. The study introduces MobileNetV3 as a lightweight backbone network, reducing the model parameters to one-tenth of the original while maintaining nearly the same accuracy. Additionally, by incorporating learnable position encoding and dynamic upsampling techniques, the model significantly improves its ability to recognize small objects and complex scenes. Test results on the FSCB-dataset show that the improved model achieves significant improvements in real-time performance and computational efficiency. The lightweight network is also easy to deploy on mobile devices, demonstrating its practicality in resource-constrained environments. Full article

20 pages, 4867 KB  
Article
MultiFusedNet: A Multi-Feature Fused Network of Pretrained Vision Models via Keyframes for Student Behavior Classification
by Somsawut Nindam, Seung-Hoon Na and Hyo Jong Lee
Appl. Sci. 2024, 14(1), 230; https://doi.org/10.3390/app14010230 - 26 Dec 2023
Cited by 3 | Viewed by 2596
Abstract
This research proposes a deep learning method for classifying student behavior in classrooms that follow the professional learning community teaching approach. We collected data on five student activities: hand-raising, interacting, sitting, turning around, and writing. We used the sum of absolute differences (SAD) in the LUV color space to detect scene changes. The K-means algorithm was then applied to select keyframes using the computed SAD. Next, we extracted features using multiple pretrained deep learning models from the convolutional neural network family. The pretrained models considered were InceptionV3, ResNet50V2, VGG16, and EfficientNetB7. We leveraged feature fusion, incorporating optical flow features and data augmentation techniques, to increase the necessary spatial features of selected keyframes. Finally, we classified the students’ behavior using a deep sequence model based on the bidirectional long short-term memory network with an attention mechanism (BiLSTM-AT). The proposed method with the BiLSTM-AT model can recognize behaviors from our dataset with high precision, recall, and F1-scores of 0.97, 0.97, and 0.97, respectively. The overall accuracy was 96.67%. This high efficiency demonstrates the potential of the proposed method for classifying student behavior in classrooms. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
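The keyframe-selection step (SAD scene-change scores clustered with K-means) can be sketched in plain Python. Real frames would be LUV-converted images; the flat intensity lists, the deterministic two-cluster K-means, and the midpoint threshold below are simplifying assumptions.

```python
def sad(frame_a, frame_b):
    """Sum of absolute differences between two equal-length pixel sequences.
    (The paper computes SAD on LUV-converted frames; flat intensity lists
    are used here for brevity.)"""
    return sum(abs(a - b) for a, b in zip(frame_a, frame_b))

def kmeans_1d(values, iters=50):
    """Tiny deterministic two-cluster k-means over the 1-D SAD scores."""
    centroids = [float(min(values)), float(max(values))]
    for _ in range(iters):
        clusters = ([], [])
        for v in values:
            idx = 0 if abs(v - centroids[0]) <= abs(v - centroids[1]) else 1
            clusters[idx].append(v)
        new = [sum(c) / len(c) if c else centroids[i]
               for i, c in enumerate(clusters)]
        if new == centroids:
            break
        centroids = new
    return centroids

def select_keyframes(frames):
    """Mark a frame as a keyframe when the SAD to its predecessor falls in
    the high-change cluster (i.e. a scene change happened just before it)."""
    scores = [sad(frames[i], frames[i + 1]) for i in range(len(frames) - 1)]
    low, high = sorted(kmeans_1d(scores))
    threshold = (low + high) / 2
    return [i + 1 for i, s in enumerate(scores) if s >= threshold]
```

For six synthetic frames where the scene jumps at frame 3, `select_keyframes` returns only that frame index.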

19 pages, 7857 KB  
Article
Multi-Scale Audio Spectrogram Transformer for Classroom Teaching Interaction Recognition
by Fan Liu and Jiandong Fang
Future Internet 2023, 15(2), 65; https://doi.org/10.3390/fi15020065 - 2 Feb 2023
Cited by 10 | Viewed by 5421
Abstract
Classroom interactivity is one of the important metrics for assessing classrooms, and identifying classroom interactivity through classroom image data is limited by the interference of complex teaching scenarios. However, audio data within the classroom are characterized by significant student–teacher interaction. This study proposes a multi-scale audio spectrogram transformer (MAST) speech scene classification algorithm and constructs a classroom interactive audio dataset to achieve interactive teacher–student recognition in the classroom teaching process. First, the original speech signal is sampled and pre-processed to generate a multi-channel spectrogram, which enhances the representation of features compared with single-channel features. Second, in order to efficiently capture the long-range global context of the audio spectrogram, the audio features are globally modeled by the multi-head self-attention mechanism of MAST, and the feature resolution is reduced during feature extraction to continuously enrich the layer-level features while reducing the model complexity. Finally, a further combination with a time-frequency enrichment module maps the final output to a class feature map, enabling accurate audio category recognition. The experimental comparison of MAST is carried out on a public environment audio dataset and the self-built classroom audio interaction dataset. Compared with the previous state-of-the-art methods on the public datasets AudioSet and ESC-50, its accuracy improved by 3% and 5%, respectively, and accuracy on the self-built classroom audio interaction dataset reached 92.1%. These results demonstrate the effectiveness of MAST in the field of general audio classification and the smart classroom domain. Full article
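The first stage, turning the raw waveform into a spectrogram, can be sketched with a naive windowed DFT. The paper builds a multi-channel spectrogram and feeds it to the transformer; this single-channel, pure-Python version (with assumed frame length and hop) is only illustrative and far slower than an FFT.

```python
import cmath
import math

def frame_signal(signal, frame_len, hop):
    """Split a waveform into overlapping frames, zero-padding the last one."""
    frames = []
    for start in range(0, max(len(signal) - frame_len, 0) + 1, hop):
        frame = list(signal[start:start + frame_len])
        frames.append(frame + [0.0] * (frame_len - len(frame)))
    return frames

def dft_magnitudes(frame):
    """Hann-windowed naive DFT; keep the non-redundant half of the bins."""
    n = len(frame)
    hann = [0.5 - 0.5 * math.cos(2 * math.pi * i / (n - 1)) for i in range(n)]
    windowed = [s * w for s, w in zip(frame, hann)]
    return [abs(sum(windowed[i] * cmath.exp(-2j * math.pi * k * i / n)
                    for i in range(n)))
            for k in range(n // 2 + 1)]

def spectrogram(signal, frame_len=8, hop=4):
    """Time-frequency magnitude map: one row of bins per frame."""
    return [dft_magnitudes(f) for f in frame_signal(signal, frame_len, hop)]

# A 2-cycles-per-8-samples sinusoid should peak in bin 2 of an 8-point DFT.
tone = [math.sin(2 * math.pi * 2 * i / 8) for i in range(16)]
spec = spectrogram(tone, frame_len=8, hop=4)
```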

18 pages, 4295 KB  
Article
Deep Learning-Based Cost-Effective and Responsive Robot for Autism Treatment
by Aditya Singh, Kislay Raj, Teerath Kumar, Swapnil Verma and Arunabha M. Roy
Drones 2023, 7(2), 81; https://doi.org/10.3390/drones7020081 - 23 Jan 2023
Cited by 124 | Viewed by 9535
Abstract
Recent studies state that, for a person with autism spectrum disorder, learning and improvement are often seen in environments where technological tools are involved. A robot is an excellent tool for use in therapy and teaching. It can transform teaching methods, not just in classrooms but also in in-house clinical practice. With the rapid advancement of deep learning techniques, robots have become more capable of handling human behaviour. In this paper, we present a cost-efficient, socially designed robot called ‘Tinku’, developed to assist in teaching children with special needs. ‘Tinku’ is low cost but full of features and able to produce human-like expressions. Its design is inspired by the widely accepted animated character ‘WALL-E’. Its capabilities include offline speech processing and computer vision—we used light object detection models, such as Yolo v3-tiny and single shot detector (SSD)—for obstacle avoidance, non-verbal communication, expressing emotions in an anthropomorphic way, etc. It uses an onboard deep learning technique to localize objects in the scene and uses this information for semantic perception. We have developed several training lessons using these features; a sample lesson about brushing is discussed to show the robot’s capabilities. Tinku was developed under the supervision of clinical experts, and the conditions for its clinical application have been taken into account. A small survey on its appearance is also discussed. More importantly, it was tested with young children to assess acceptance of the technology and compatibility in terms of voice interaction. It supports autistic children using state-of-the-art deep learning models. Autism spectrum disorders are being identified increasingly often, and studies show that children tend to interact more comfortably with technology than with a human instructor. To meet this demand, we present a cost-effective solution in the form of a robot with a set of common lessons for training a child affected by autism. Full article
(This article belongs to the Topic Artificial Intelligence in Sensors)

14 pages, 36850 KB  
Article
A Serious Mixed-Reality Game for Training Police Officers in Tagging Crime Scenes
by Giovanni Acampora, Pasquale Trinchese, Roberto Trinchese and Autilia Vitiello
Appl. Sci. 2023, 13(2), 1177; https://doi.org/10.3390/app13021177 - 16 Jan 2023
Cited by 24 | Viewed by 5849
Abstract
Recognizing and collecting evidence at a crime scene are essential tasks for gathering information about perpetrators and/or the dynamics of a criminal event. Hence, the success of a crime investigation is strongly based on the ability of forensic investigators to perform these tasks. Recent studies observing and comparing the performance of experts and novices have highlighted the importance of experience and training for search and recovery strategies at crime scenes. Therefore, relevant training programs in evidence-recovery techniques should be attended by novices to improve their skills. However, the knowledge transfer between skills acquired in the classroom and their practical application in the field is a challenging task. In order to relieve this problem, this paper proposes a serious mixed-reality game, which is called TraceGame, aiming to support the training activities of novice forensic investigators by improving their skills related to the search and recovery of evidence at crime scenes. The purpose of the game is to identify the greatest number of useful traces present in a crime scene that is physically reconstructed at the training site as quickly as possible. As shown in an experimental session, TraceGame is a promising tool for supporting the training of novice forensic investigators. Full article
(This article belongs to the Section Computing and Artificial Intelligence)

14 pages, 2437 KB  
Article
Pose Mask: A Model-Based Augmentation Method for 2D Pose Estimation in Classroom Scenes Using Surveillance Images
by Shichang Liu, Miao Ma, Haiyang Li, Hanyang Ning and Min Wang
Sensors 2022, 22(21), 8331; https://doi.org/10.3390/s22218331 - 30 Oct 2022
Cited by 2 | Viewed by 3752
Abstract
Solid developments have been seen in deep-learning-based pose estimation, but few works have explored performance in dense crowds, such as a classroom scene; furthermore, no specific knowledge is considered in the design of image augmentation for pose estimation. A masked autoencoder was shown to have a non-negligible capability in image reconstruction, where the masking mechanism that randomly drops patches forces the model to build unknown pixels from known pixels. Inspired by this self-supervised learning method, where the restoration of the feature loss induced by the mask is consistent with tackling the occlusion problem in classroom scenarios, we discovered that the transfer performance of the pre-trained weights could be used as a model-based augmentation to overcome the intractable occlusion in classroom pose estimation. In this study, we proposed a top-down pose estimation method that utilized the natural reconstruction capability of missing information of the MAE as an effective occluded image augmentation in a pose estimation task. The difference with the original MAE was that instead of using a 75% random mask ratio, we regarded the keypoint distribution probabilistic heatmap as a reference for masking, which we named Pose Mask. To test the performance of our method in heavily occluded classroom scenes, we collected a new dataset for pose estimation in classroom scenes named Class Pose and conducted many experiments, the results of which showed promising performance. Full article
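The masking idea, using the keypoint heatmap rather than a uniform 75% random mask, can be sketched as follows. The deterministic top-k selection and the 0.4 default ratio are assumptions; the paper states only that the keypoint distribution heatmap serves as the masking reference.

```python
def heatmap_guided_mask(patch_probs, mask_ratio=0.4):
    """Pick patch indices to mask, preferring patches where the keypoint
    heatmap is strongest, so reconstruction is forced to rebuild body parts.
    The top-k scheme and the 0.4 ratio are illustrative assumptions."""
    n_mask = round(len(patch_probs) * mask_ratio)
    ranked = sorted(range(len(patch_probs)),
                    key=lambda i: patch_probs[i], reverse=True)
    return sorted(ranked[:n_mask])
```

With per-patch keypoint probabilities `[0.1, 0.9, 0.3, 0.8, 0.05]` and a 0.4 ratio, the two patches most likely to contain keypoints (indices 1 and 3) are masked.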

19 pages, 3614 KB  
Article
Experiment Information System Based on an Online Virtual Laboratory
by Chuanyan Hao, Anqi Zheng, Yuqi Wang and Bo Jiang
Future Internet 2021, 13(2), 27; https://doi.org/10.3390/fi13020027 - 24 Jan 2021
Cited by 33 | Viewed by 8177
Abstract
In the information age, MOOCs (Massive Open Online Courses), micro-classes, flipped classrooms, and other blended teaching scenes have improved students’ learning outcomes. However, incorporating technologies into experimental courses, especially electronic and electrical experiments, has its own characteristics and difficulties. The focus of this paper is to introduce virtual technology into an electronic circuit experiment course and to explore its teaching strategy, thereby realizing the informatization of experiment teaching. First, this paper explores the design concepts and implementation details of the digital circuit virtual laboratory, which is then developed based on previous literature and a pre-questionnaire given to users. Second, the informatization process of the experiment learning model based on traditional custom lab benches is shown through a blended learning scheme that integrates the online virtual laboratory. Finally, the experiment information system is verified and analyzed with a control-group experiment and questionnaires. The blended program proved to be an effective teaching model that complements the deficiencies of existing physical laboratories. The research conclusions show that the virtual experiment system provides students with a rich, efficient, and expansive experimental experience; in particular, the flexibility, repeatability, and visual appeal of a virtual platform could promote the development of students’ abilities in active learning, reflective thinking, and creativity. Full article
(This article belongs to the Section Smart System Infrastructure and Applications)

35 pages, 37612 KB  
Article
Visual Comfort in Modern University Classrooms
by Yun-Shang Chiou, Satryo Saputro and Dany Perwita Sari
Sustainability 2020, 12(9), 3930; https://doi.org/10.3390/su12093930 - 11 May 2020
Cited by 26 | Viewed by 8748
Abstract
Universities are at the front line of promoting sustainability. The wellbeing of their students plays a key role in advancing such agendas. In the past decade, many university classrooms have been equipped with a projector; however, the lighting design of the classroom remains unchanged. This paper presents a visual comfort study of modern university classrooms by considering three working surfaces: the student’s desk, the whiteboard, and the projector screen. The study cross-examines the quality of the classroom lighting using high dynamic range image (HDRi) photography and the students’ wellbeing using user satisfaction surveys. Comparisons are organized based on the seating area of the student, the type of learning (text-based or image-based) in the classroom, and the lighting scene with and without a projector in use. Spot illuminance, luminance, HDRi spatial luminance distribution, and the Unified Glare Rating (UGR) are the parameters used to describe lighting quality. This paper found that more than 70% of the respondents experienced some adverse physical symptoms, and nearly 50% felt that the lighting condition was not ideal for task performance. UGR indicated the presence of minor glare problems in whiteboard-based teaching scenarios, and daylight was too strong to be utilized. The results suggest that lighting design needs to take a luminance-distribution-minded approach to support students’ wellbeing in classroom learning. Full article
(This article belongs to the Section Environmental Sustainability and Applications)
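The UGR reported in the study follows the CIE formula UGR = 8 log10((0.25/Lb) Σ L²ω/p²). A minimal evaluation is sketched below; the luminaire values are hypothetical, not measurements from the paper.

```python
import math

def unified_glare_rating(background_luminance, luminaires):
    """CIE Unified Glare Rating:
        UGR = 8 * log10((0.25 / Lb) * sum(L**2 * omega / p**2))
    Lb: background luminance (cd/m^2); for each luminaire, L is its
    luminance (cd/m^2), omega its solid angle at the observer's eye (sr),
    and p its Guth position index."""
    glare_sum = sum(L ** 2 * omega / p ** 2 for L, omega, p in luminaires)
    return 8 * math.log10(0.25 / background_luminance * glare_sum)

# Hypothetical single-luminaire scene (values are illustrative only).
ugr = unified_glare_rating(30, [(2000, 0.01, 1.5)])
```

Because the luminaire luminance enters squared, doubling L raises the UGR by about 4.8 units, which is why glare ratings are so sensitive to bright sources such as unshielded daylight.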