Search Results (93)

Search Parameters:
Keywords = computer vision education

20 pages, 4162 KiB  
Article
Discovering the Emotions of Frustration and Confidence During the Application of Cognitive Tests in Mexican University Students
by Marco A. Moreno-Armendáriz, Jesús Mercado-Ríos, José E. Valdez-Rodríguez, Rolando Quintero and Victor H. Ponce-Ponce
Big Data Cogn. Comput. 2025, 9(8), 195; https://doi.org/10.3390/bdcc9080195 - 24 Jul 2025
Viewed by 312
Abstract
Emotion detection using computer vision has advanced significantly in recent years, achieving remarkable performance that, in some cases, surpasses that of humans. Convolutional neural networks (CNNs) excel at this task by capturing facial features that allow for effective emotion classification. However, most research focuses on basic emotions such as happiness, anger, or sadness, neglecting more complex emotions like frustration. People set expectations or goals; when these are not met, frustration arises, producing reactions such as annoyance, anger, and disappointment that can harm confidence and motivation. This makes frustration especially relevant in mental health and educational contexts, where detecting it could help mitigate its adverse effects. In this research, we developed a CNN-based approach to detect frustration through facial expressions. The scarcity of datasets for this task led us to design an experimental protocol to generate our own. The classification task is difficult because facial expressions of frustration vary widely across participants. Despite this, our model achieved an F1-score of 0.8080, establishing an adequate baseline.
(This article belongs to the Special Issue Application of Deep Neural Networks)
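The baseline F1-score reported above is the harmonic mean of precision and recall. A minimal sketch of the computation, using hypothetical confusion-matrix counts (the listing does not publish the paper's actual counts):

```python
def f1_score(tp, fp, fn):
    """Harmonic mean of precision and recall for a binary classifier."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Hypothetical counts for a frustrated / not-frustrated classifier
print(round(f1_score(tp=80, fp=18, fn=20), 4))  # → 0.8081
```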

16 pages, 2050 KiB  
Article
Analysis, Evaluation, and Prediction of Machine Learning-Based Animal Behavior Imitation
by Yu Qi, Siyu Xiong and Bo Wu
Electronics 2025, 14(14), 2816; https://doi.org/10.3390/electronics14142816 - 13 Jul 2025
Viewed by 328
Abstract
Expressive imitation in the performing arts is typically trained through animal behavior imitation, aiming not only to reproduce action trajectories but also to recreate rhythm, style, and emotional states. However, evaluation of such imitation relies heavily on teachers' subjective judgments: it lacks structured criteria, shows low inter-rater consistency, and is difficult to quantify. To enhance the objectivity and interpretability of scoring, this study develops an auxiliary evaluation framework for imitation quality based on machine learning and structured pose data. The framework constructs three feature sets (baseline, ablation, and enhanced) and integrates recursive feature elimination with feature importance ranking to identify a stable, interpretable set of core structural features, enabling the training of models that handle structured data well and remain sensitive to informative features. The modeling results indicate that temporal-rhythm features play a significant role in score prediction and that only a small number of key features are required to model teachers' ratings with high precision. The framework lays a methodological foundation for standardized, AI-assisted evaluation in performing arts education and extends the application of computer vision and machine learning in this field.
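The recursive feature elimination that the framework pairs with importance ranking can be sketched as a loop that repeatedly drops the weakest feature. The feature names and scores below are hypothetical stand-ins; a real RFE run would also refit the model after each elimination rather than reuse fixed scores:

```python
def recursive_feature_elimination(features, importance, keep):
    """Iteratively drop the least important feature until `keep` remain."""
    selected = list(features)
    while len(selected) > keep:
        worst = min(selected, key=importance)  # lowest-importance feature
        selected.remove(worst)
    return selected

# Toy importance scores standing in for model-derived importances
scores = {"rhythm_var": 0.9, "joint_angle": 0.4, "speed_mean": 0.7, "pose_sym": 0.2}
print(recursive_feature_elimination(scores, scores.get, keep=2))
# → ['rhythm_var', 'speed_mean']
```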

23 pages, 3752 KiB  
Article
Food Waste Detection in Canteen Plates Using YOLOv11
by João Ferreira, Paulino Cerqueira and Jorge Ribeiro
Appl. Sci. 2025, 15(13), 7137; https://doi.org/10.3390/app15137137 - 25 Jun 2025
Viewed by 831
Abstract
This work presents a Computer Vision (CV) platform for Food Waste (FW) detection in canteen plates, addressing a research gap in automated FW detection with CV models. Following a machine learning methodology, we first created a custom dataset of canteen plate images taken before and after lunch or dinner, applying data augmentation techniques to enhance the model's robustness. We then developed a CV model using YOLOv11 to classify the percentage of FW on a plate, distinguishing edible food items from non-edible discarded material. To evaluate the model, we used a real dataset as well as three benchmark datasets of food plates on which waste could be detected. On the real dataset, the system achieved a mean average precision (mAP) of 0.343, a precision of 0.62, and a recall of 0.322 on the test set, while demonstrating high accuracy in classifying waste on the benchmark datasets under traditional evaluation metrics. Given these promising results and the open-source code provided in a GitHub repository, the platform can be readily used by the research community and educational institutions to monitor FW in student meals and proactively implement reduction strategies.
(This article belongs to the Special Issue Artificial Intelligence and Numerical Simulation in Food Engineering)
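Detection metrics such as the mAP reported above rest on intersection-over-union (IoU) matching between predicted and ground-truth boxes. A minimal sketch (the boxes are illustrative, not from the paper):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)            # overlap area
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# Two 10x10 boxes overlapping in a 5x5 corner: IoU = 25 / 175 = 1/7
print(iou((0, 0, 10, 10), (5, 5, 15, 15)))
```

A prediction typically counts as a true positive only when its IoU with a ground-truth box exceeds a threshold such as 0.5.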

14 pages, 2121 KiB  
Article
Community-Integrated Project-Based Learning for Interdisciplinary Engineering Education: A Mechatronics Case Study of a Rideable 5-Inch Gauge Railway
by Hirotaka Tsutsumi
Educ. Sci. 2025, 15(7), 806; https://doi.org/10.3390/educsci15070806 - 23 Jun 2025
Viewed by 608
Abstract
This study presents a case of community-integrated project-based learning (PBL) at a Japanese National Institute of Technology (KOSEN). Three students collaborated to design and build a rideable 5-inch gauge railway system, integrating mechanical design, brushless motor control, and computer vision. The project was showcased at public events and a partner high school, providing authentic feedback and enhancing learning relevance. Over 15 weeks, students engaged in hands-on prototyping, interdisciplinary teamwork, and real-world problem-solving. The course design was grounded in four educational frameworks: experiential learning, situated learning, constructive alignment, and self-regulated learning (SRL). SRL refers to students' ability to plan, monitor, and reflect on their learning—a key skill for managing complex engineering tasks. A mixed-methods evaluation—including surveys, reflections, classroom observations, and communication logs—revealed significant gains in technical competence, engagement, and learner autonomy. Although limited by a small sample size, the study offers detailed insights into how small-scale, resource-conscious PBL can support meaningful interdisciplinary learning and community involvement. This case illustrates how the KOSEN approach, combining technical education with real-world application, can foster both domain-specific and transferable skills, and provides a model for broader implementation of authentic, student-driven engineering education.
(This article belongs to the Topic Advances in Online and Distance Learning)

22 pages, 932 KiB  
Review
Advances in Video Emotion Recognition: Challenges and Trends
by Yun Yi, Yunkang Zhou, Tinghua Wang and Jin Zhou
Sensors 2025, 25(12), 3615; https://doi.org/10.3390/s25123615 - 9 Jun 2025
Viewed by 1053
Abstract
Video emotion recognition (VER), situated at the convergence of affective computing and computer vision, aims to predict the primary emotion a video evokes in most viewers, with extensive applications in video recommendation, human-computer interaction, and intelligent education. This paper begins with an analysis of the psychological models that constitute the foundation of VER theory, then elaborates on the datasets and evaluation metrics commonly used in VER. It reviews VER algorithms by category and compares the experimental results of classic methods on four datasets. From this analysis, the paper identifies the prevailing challenges in the VER field, including the gap between emotional representations and labels, the scarcity of large-scale, high-quality VER datasets, and the efficient integration of multiple modalities. It then proposes potential research directions to address these challenges, e.g., advanced neural network architectures, efficient multimodal fusion strategies, high-quality emotional representation, and robust active learning strategies.
(This article belongs to the Section Sensing and Imaging)

11 pages, 12478 KiB  
Article
Computer Vision-Based Obstacle Detection Mobile System for Visually Impaired Individuals
by Gisel Katerine Bastidas-Guacho, Mario Alejandro Paguay Alvarado, Patricio Xavier Moreno-Vallejo, Patricio Rene Moreno-Costales, Nayely Samanta Ocaña Yanza and Jhon Carlos Troya Cuestas
Multimodal Technol. Interact. 2025, 9(5), 48; https://doi.org/10.3390/mti9050048 - 18 May 2025
Viewed by 918
Abstract
Traditional aids, such as canes, are no longer enough to support the mobility and orientation of visually impaired people in complex environments, so technological solutions based on computer vision tasks are promising alternatives for detecting obstacles. Object detection models are easy to integrate into mobile systems, require modest resources on a phone, and run in real time to alert users to the presence of obstacles. However, existing object detectors were mostly trained on images from platforms such as Kaggle, and the number of object classes they cover is still limited. This study therefore implements a mobile system that integrates an object detection model to identify obstacles for visually impaired people. The application also provides multimodal feedback through auditory and haptic interaction, delivering real-time obstacle alerts via voice guidance and vibration to improve accessibility and responsiveness across navigation contexts. The development scenario is the Specialized Educational Unit Dr. Luis Benavides for impaired people, which supplied the images for building the dataset and the participants for its evaluation. To determine the best model, YOLO's performance was tuned by varying the number of training epochs on a proprietary dataset of 7600 diverse images; the YOLO-300 model performed best, with a mAP of 0.42.
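The mAP figure above averages per-class average precision (AP). One common way to compute AP from a ranked list of detections is to average the precision at each true-positive rank; the ranked hits below are hypothetical:

```python
def average_precision(ranked_hits, total_positives):
    """AP as the mean of precision values at each true-positive rank."""
    tp = 0
    precisions = []
    for rank, hit in enumerate(ranked_hits, start=1):
        if hit:                       # detection matched a ground-truth object
            tp += 1
            precisions.append(tp / rank)
    return sum(precisions) / total_positives

# Hypothetical confidence-ranked detections: True = matched an obstacle
print(average_precision([True, False, True, True, False], total_positives=4))
```

Averaging this quantity over all object classes gives the mAP used to compare the YOLO variants.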

17 pages, 852 KiB  
Review
A Review of Multimodal Interaction in Remote Education: Technologies, Applications, and Challenges
by Yangmei Xie, Liuyi Yang, Miao Zhang, Sinan Chen and Jialong Li
Appl. Sci. 2025, 15(7), 3937; https://doi.org/10.3390/app15073937 - 3 Apr 2025
Cited by 1 | Viewed by 1853
Abstract
Multimodal interaction technology has become a key part of remote education, enriching student engagement and learning outcomes by using speech, gesture, and visual feedback as complementary sensory channels. This paper presents a systematic review of multimodal interaction in remote education. Through the analysis of 25 selected research papers, it surveys the key enabling technologies, such as speech recognition, computer vision, and haptic feedback, that allow learners and learning platforms to exchange data fluidly. We also investigate the role of multimodal learning analytics in measuring students' cognitive and emotional states, targeting personalized feedback and refined instructional strategies. While multimodal communication can substantially improve online education, it still faces challenges such as media synchronization, higher computational demand, physical adaptability, and privacy concerns; these problems demand further research into algorithm optimization, access to technology guidance, and the ethical use of big data. By synthesizing existing findings, this study highlights how multimodal learning analytics, speech recognition, gesture-based interaction, and haptic feedback enhance remote learning.
(This article belongs to the Special Issue Current Status and Perspectives in Human–Computer Interaction)

29 pages, 6518 KiB  
Article
Generative AI Models (2018–2024): Advancements and Applications in Kidney Care
by Fnu Neha, Deepshikha Bhati and Deepak Kumar Shukla
BioMedInformatics 2025, 5(2), 18; https://doi.org/10.3390/biomedinformatics5020018 - 3 Apr 2025
Cited by 1 | Viewed by 2567
Abstract
Kidney disease poses a significant global health challenge, affecting millions and straining healthcare systems due to limited nephrology resources. This paper examines the transformative potential of Generative AI (GenAI), Large Language Models (LLMs), and Large Vision Models (LVMs) in addressing critical challenges in kidney care. GenAI supports research and early interventions through the generation of synthetic medical data. LLMs enhance clinical decision-making by analyzing medical texts and electronic health records, while LVMs improve diagnostic accuracy through advanced medical image analysis. Together, these technologies show promise for advancing patient education, risk stratification, disease diagnosis, and personalized treatment strategies. This paper highlights key advancements in GenAI, LLMs, and LVMs from 2018 to 2024, focusing on their applications in kidney care and presenting common use cases. It also discusses their limitations, including knowledge cutoffs, hallucinations, contextual understanding challenges, data representation biases, computational demands, and ethical concerns. By providing a comprehensive analysis, this paper outlines a roadmap for integrating these AI advancements into nephrology, emphasizing the need for further research and real-world validation to fully realize their transformative potential.

9 pages, 893 KiB  
Article
Real-Time Monitoring of Personal Protective Equipment Adherence Using On-Device Artificial Intelligence Models
by Yam Horesh, Renana Oz Rokach, Yotam Kolben and Dean Nachman
Sensors 2025, 25(7), 2003; https://doi.org/10.3390/s25072003 - 22 Mar 2025
Viewed by 736
Abstract
Personal protective equipment (PPE) is crucial for infection prevention and is effective only when worn correctly and consistently. Health organizations often use education or inspections to mitigate non-compliance, but these are costly and have limited success. This study developed a novel on-device, AI-based computer vision system to monitor healthcare worker PPE adherence in real time. Using a custom-built image dataset of 7142 images of 11 participants wearing various combinations of PPE (mask, gloves, gown), we trained a series of binary classifiers for each PPE item. By utilizing a lightweight MobileNetV3 model, we optimized the system for edge computing on a Raspberry Pi 5 single-board computer, enabling rapid image processing without the need for external servers. Our models achieved high accuracy in identifying individual PPE items (93–97%), with an overall accuracy of 85.58 ± 0.82% when all items were correctly classified. Real-time evaluation with 11 unseen medical staff in a cardiac intensive care unit demonstrated the practical viability of our system, maintaining a high per-item accuracy of 87–89%. This study highlights the potential for AI-driven solutions to significantly improve PPE compliance in healthcare settings, offering a cost-effective, efficient, and reliable tool for enhancing patient safety and mitigating infection risks.
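The gap between the 93–97% per-item accuracies and the 85.58% overall accuracy reflects the stricter criterion that every PPE item in a frame must be classified correctly at once. A toy sketch of both metrics, with hypothetical frames:

```python
def per_item_and_overall_accuracy(predictions, labels):
    """Accuracy per PPE item, plus the stricter all-items-correct accuracy."""
    items = labels[0].keys()
    per_item = {k: sum(p[k] == t[k] for p, t in zip(predictions, labels)) / len(labels)
                for k in items}
    overall = sum(all(p[k] == t[k] for k in items)
                  for p, t in zip(predictions, labels)) / len(labels)
    return per_item, overall

# Hypothetical frames: each dict flags whether mask/gloves/gown are worn
truth = [{"mask": 1, "gloves": 1, "gown": 0}, {"mask": 0, "gloves": 1, "gown": 1}]
preds = [{"mask": 1, "gloves": 0, "gown": 0}, {"mask": 0, "gloves": 1, "gown": 1}]
per_item, overall = per_item_and_overall_accuracy(preds, truth)
print(per_item, overall)  # overall is lower: one frame fails on gloves alone
```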

21 pages, 2702 KiB  
Article
Analyzing Fairness of Computer Vision and Natural Language Processing Models
by Ahmed Rashed, Abdelkrim Kallich and Mohamed Eltayeb
Information 2025, 16(3), 182; https://doi.org/10.3390/info16030182 - 27 Feb 2025
Viewed by 2034
Abstract
Machine learning (ML) algorithms play a critical role in decision-making across various domains, such as healthcare, finance, education, and law enforcement. However, concerns about fairness and bias in these systems raise significant ethical and social challenges. To address them, this research utilizes two prominent fairness libraries, Fairlearn by Microsoft and AIF360 by IBM, which offer comprehensive frameworks for fairness analysis: tools to evaluate fairness metrics, visualize results, and implement bias mitigation algorithms. The study focuses on assessing and mitigating biases in unstructured datasets using Computer Vision (CV) and Natural Language Processing (NLP) models. The primary objective is a comparative analysis of the performance of the mitigation algorithms from the two libraries, applied either individually at one stage of the ML lifecycle (pre-processing, in-processing, or post-processing) or sequentially across more than one stage. The results reveal that some sequential applications improve mitigation by effectively reducing bias while maintaining the model's performance. Publicly available datasets from Kaggle were chosen for this research, providing a practical context for evaluating fairness in real-world machine learning workflows.
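A representative fairness metric exposed by both libraries is the demographic parity difference: the gap in positive-prediction rates across demographic groups (Fairlearn provides it as `fairlearn.metrics.demographic_parity_difference`). A dependency-free sketch with made-up predictions:

```python
def demographic_parity_difference(y_pred, groups):
    """Gap in positive-prediction rate between demographic groups (0 = parity)."""
    rates = {}
    for g in set(groups):
        idx = [i for i, gi in enumerate(groups) if gi == g]
        rates[g] = sum(y_pred[i] for i in idx) / len(idx)   # selection rate
    return max(rates.values()) - min(rates.values())

# Hypothetical binary predictions for two groups, "a" and "b"
preds  = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_difference(preds, groups))  # → 0.5
```

Mitigation algorithms aim to drive this gap toward zero without degrading overall accuracy, which is the trade-off the study measures.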

16 pages, 2665 KiB  
Article
Development of New Generation Portable Camera-Aided Surgical Simulator for Cognitive Training in Laparoscopic Cholecystectomy
by Yucheng Li, Victoria Nelson, Cuong T. Nguyen, Irene Suh, Suvranu De, Ka-Chun Siu and Carl Nelson
Electronics 2025, 14(4), 793; https://doi.org/10.3390/electronics14040793 - 18 Feb 2025
Viewed by 706
Abstract
Laparoscopic cholecystectomy (LC) is the standard procedure for gallbladder removal, but improper identification of anatomical structures can lead to biliary duct injury (BDI). The critical view of safety (CVS) is a standardized technique designed to mitigate this risk. However, existing surgical training systems primarily emphasize haptic feedback and physical skill development, making them expensive and less accessible. This paper presents the next-generation Portable Camera-Aided Surgical Simulator (PortCAS), a cost-effective, portable, vision-based surgical training simulator designed to enhance cognitive skill acquisition in LC. The system consists of an enclosed physical module equipped with a vision system, a single-board computer for real-time instrument tracking, and a virtual simulation interface that runs on a user-provided computer. Unlike traditional simulators, PortCAS prioritizes cognitive training over force-based interactions, eliminating the need for costly haptic components. The system was evaluated through user studies assessing accuracy, usability, and training effectiveness. Results demonstrate that PortCAS provides sufficiently accurate tracking performance for training surgical skills such as CVS, offering a scalable and accessible solution for surgical education.
(This article belongs to the Special Issue Virtual Reality Applications in Enhancing Human Lives)

20 pages, 2011 KiB  
Article
Machine Learning Approaches for Real-Time Mineral Classification and Educational Applications
by Paraskevas Tsangaratos, Ioanna Ilia, Nikolaos Spanoudakis, Georgios Karageorgiou and Maria Perraki
Appl. Sci. 2025, 15(4), 1871; https://doi.org/10.3390/app15041871 - 11 Feb 2025
Cited by 1 | Viewed by 2247
Abstract
The main objective of the present study was to develop a real-time mineral classification system designed for multiple detection, which integrates classical computer vision techniques with advanced deep learning algorithms. The system employs three CNN architectures—VGG-16, Xception, and MobileNet V2—designed to identify multiple minerals within a single frame and output probabilities for various mineral types, including Pyrite, Aragonite, Quartz, Obsidian, Gypsum, Azurite, and Hematite. Among these, MobileNet V2 demonstrated exceptional performance, achieving the highest accuracy (98.98%) and the lowest loss (0.0202), while Xception and VGG-16 also performed competitively, excelling in feature extraction and detailed analyses, respectively. Gradient-weighted Class Activation Mapping visualizations illustrated the models' ability to capture distinctive mineral features, enhancing interpretability. Furthermore, a stacking ensemble approach achieved an impressive accuracy of 99.71%, effectively leveraging the complementary strengths of individual models. Despite its robust performance, the ensemble method poses computational challenges, particularly for real-time applications on resource-constrained devices. The application of this methodology in Mineral Quest, an educational Python-based game, underscores its practical potential in geology education, mining, and geological surveys, offering an engaging and accurate tool for real-time mineral classification.
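The stacking ensemble trains a meta-learner on the three CNNs' outputs; the simpler soft-voting variant sketched below, with made-up probability vectors, shows the underlying idea of combining complementary models:

```python
def soft_vote(prob_rows):
    """Average class probabilities from several models, then pick the argmax."""
    n_models = len(prob_rows)
    n_classes = len(prob_rows[0])
    avg = [sum(row[c] for row in prob_rows) / n_models for c in range(n_classes)]
    return max(range(n_classes), key=avg.__getitem__), avg

# Hypothetical per-model probabilities over three minerals (pyrite, quartz, gypsum)
vgg       = [0.2, 0.7, 0.1]
xception  = [0.1, 0.6, 0.3]
mobilenet = [0.3, 0.5, 0.2]
label, avg = soft_vote([vgg, xception, mobilenet])
print(label)  # → 1 (all three models lean toward the second class)
```

Stacking replaces the fixed average with a trained combiner, which is why it can exceed the best single model at extra computational cost.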

16 pages, 436 KiB  
Article
Improved Localization and Recognition of Handwritten Digits on MNIST Dataset with ConvGRU
by Yalin Wen, Wei Ke and Hao Sheng
Appl. Sci. 2025, 15(1), 238; https://doi.org/10.3390/app15010238 - 30 Dec 2024
Cited by 1 | Viewed by 1296
Abstract
Video location prediction for handwritten digits presents unique challenges in computer vision due to complex spatiotemporal dependencies and the need to keep digits legible across predicted frames. While existing deep learning-based video prediction models have shown promise, they often struggle to preserve local details and typically achieve clear predictions for only a limited number of frames. In this paper, we present a novel video location prediction model based on Convolutional Gated Recurrent Units (ConvGRU) that specifically addresses these challenges in the context of handwritten digit sequences. Our approach introduces three key innovations. First, a specialized decoupling model using modified Generative Adversarial Networks (GANs) effectively separates background and foreground information, significantly improving prediction accuracy. Second, an enhanced ConvGRU architecture replaces the traditional linear operations in the gating mechanism with convolutional operations, substantially reducing spatiotemporal information loss. Third, an optimized parameter-tuning strategy ensures continuous feature transmission while maintaining computational efficiency. Extensive experiments on both the MNIST dataset and custom mobile datasets demonstrate the effectiveness of our approach. Our model achieves a structural similarity index of 0.913 between predicted and actual sequences, surpassing current state-of-the-art methods by 1.2%, and shows superior long-term prediction stability, with consistent accuracy maintained across extended sequences. Notably, it also reduces training time by 9.5% compared to existing approaches while maintaining higher prediction accuracy. These results establish new benchmarks for handwritten digit video prediction and provide practical solutions for real-world applications in digital education, document processing, and real-time handwriting recognition systems.
(This article belongs to the Special Issue Advances in Image Recognition and Processing Technologies)
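The gating mechanism that the enhanced ConvGRU modifies follows the standard GRU update. A scalar sketch with arbitrary weights (a ConvGRU replaces each product below with a convolution over feature maps, which is the substitution the paper describes):

```python
import math

def gru_cell(x, h_prev, w):
    """One scalar GRU step; ConvGRU swaps these products for convolutions."""
    sig = lambda v: 1 / (1 + math.exp(-v))
    z = sig(w["z_x"] * x + w["z_h"] * h_prev)            # update gate
    r = sig(w["r_x"] * x + w["r_h"] * h_prev)            # reset gate
    h_tilde = math.tanh(w["h_x"] * x + w["h_h"] * (r * h_prev))  # candidate state
    return (1 - z) * h_prev + z * h_tilde                # blended new state

# Arbitrary illustrative weights and a toy input sequence
w = {"z_x": 0.5, "z_h": 0.1, "r_x": 0.4, "r_h": 0.2, "h_x": 1.0, "h_h": 0.8}
h = 0.0
for x in [1.0, 0.5, -0.2]:
    h = gru_cell(x, h, w)
print(round(h, 4))
```

Because `h_tilde` passes through `tanh` and the output is a convex blend of `h_prev` and `h_tilde`, the hidden state stays bounded in (-1, 1) for a zero-initialized state.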

17 pages, 21533 KiB  
Article
From Junk to Genius: Robotic Arms and AI Crafting Creative Designs from Scraps
by Jiaqi Liu, Xiang Chen and Shengliang Yu
Buildings 2024, 14(12), 4076; https://doi.org/10.3390/buildings14124076 - 22 Dec 2024
Viewed by 1401
Abstract
As sustainable architecture increasingly emphasizes material reuse, this study proposes a novel, interactive workflow that integrates robotic arms and artificial intelligence to transform waste materials from architectural models into creative design components. Unlike existing recycling efforts, which focus on the construction phase, this research uniquely targeted discarded architectural model materials, particularly polystyrene foam, that are often overlooked despite their environmental impact. The workflow combined computer vision and machine learning, utilizing a YOLOv5 model that achieved a classification accuracy exceeding 83% for the polygon, rectangle, and circle categories, demonstrating superior recognition performance. Robotic sorting processed up to six foam blocks per minute under controlled conditions. By integrating Stable Diffusion, we further generated speculative architectural renderings, enhancing creativity and design exploration. Participant testing revealed that human interaction reduced stacking errors by 57% and significantly improved user satisfaction. Moreover, human-robot collaboration not only corrected robotic errors but also fostered innovative, collaborative solutions, demonstrating the system's potential as a versatile tool for education and industry while promoting sustainability in design. The workflow thus offers a scalable approach to creative material reuse, promoting sustainable practices from the model-making stage of architectural design. While these initial results are promising, further research is needed to adapt the technique to larger-scale construction materials, addressing real-world constraints and broadening its applicability.
(This article belongs to the Section Building Materials, and Repair & Renovation)

11 pages, 244 KiB  
Article
Prevalence of Obesity and Dental Caries in Kindergarten Children During the First Decade of Saudi Vision 2030: A Cross-Sectional Study
by Heba M. Elkhodary, Deema J. Farsi, Nada J. Farsi, Logain K. Alattas, Ali B. Alshaikh and Najat M. Farsi
Children 2024, 11(12), 1531; https://doi.org/10.3390/children11121531 - 18 Dec 2024
Cited by 1 | Viewed by 1264
Abstract
Background/Objectives: Obesity and dental caries are significant health issues affecting children worldwide. This study investigates the prevalence of obesity and dental caries among kindergarten children in Saudi Arabia during the early implementation years of the Vision 2030 initiative. Specifically, it examines obesity rates in public versus private kindergartens and assesses the correlation between obesity and caries risk. Methods: We conducted a cross-sectional study involving a stratified sample of 347 kindergarten children in Jeddah, Saudi Arabia, from September 2022 to March 2023, as part of a larger project assessing obesity and dental caries prevalence in school-aged children. Body mass index (BMI) was computed after weight and height were measured. Following an oral examination, decayed, missing, and filled teeth (dmft) scores were recorded. The relationships between dmft and BMI, sex, and school type were studied using non-parametric tests, and predictors of dmft were assessed as well. Results: 15.3% of the children were classified as obese based on BMI, while 9.8% were categorized as overweight. The prevalence of obesity did not differ significantly by school type. The mean dmft score was 2.8 ± 3.6, with children in public kindergartens showing significantly higher dmft scores than their private counterparts (p < 0.001). Notably, no relationship was observed between obesity and caries activity. Conclusions: Despite the implementation of Saudi Vision 2030, the high prevalence of obesity and dental caries among kindergarten children suggests that current health initiatives may be insufficient. The lack of a relationship between obesity and caries activity highlights the complexity of these health issues and the need for targeted interventions. To improve health outcomes, it is recommended to enhance awareness campaigns on oral health and nutrition, increase access to preventive dental care, and integrate nutrition education into kindergarten curricula.
(This article belongs to the Section Global Pediatric Health)