Search Results (1,686)

Search Parameters:
Keywords = video-monitoring

10 pages, 903 KiB  
Article
Gender Differences in Visual Information Perception Ability: A Signal Detection Theory Approach
by Yejin Lee and Kwangtae Jung
Appl. Sci. 2025, 15(15), 8621; https://doi.org/10.3390/app15158621 - 4 Aug 2025
Abstract
The accurate perception of visual stimuli in human–machine systems is crucial for improving system safety, usability, and task performance. The widespread adoption of digital technology has significantly increased the importance of visual interfaces and information. Therefore, it is essential to design visual interfaces and information with user characteristics in mind to ensure accurate perception of visual information. This study employed the Cognitive Perceptual Assessment for Driving (CPAD) to evaluate and compare gender differences in the ability to perceive visual signals within complex visual stimuli. The experimental setup included a computer with CPAD installed, along with a touch monitor, mouse, joystick, and keyboard. The participants included 11 male and 20 female students, with an average age of 22 for males and 21 for females. Prior to the experiment, participants were instructed to determine whether a signal stimulus was present: if a square, presented as the signal, was included in the visual stimulus, they moved the joystick to the left; otherwise, they moved it to the right. Each participant performed a total of 40 trials. The entire experiment was recorded on video to measure overall response times. The experiment measured the number of correct detections of signal presence, response times, the number of misses (failing to detect the signal when present), and false alarms (detecting the signal when absent). The analysis of experimental data revealed no significant differences in perceptual ability or response times for visual stimuli between genders. However, males demonstrated slightly superior perceptual ability and marginally shorter response times compared to females. Analyses of sensitivity and response bias, based on signal detection theory, also indicated a slightly higher perceptual ability in males. 
In conclusion, although these differences were not statistically significant, males demonstrated a slightly better perception ability for visual stimuli. The findings of this study can inform the design of information, user interfaces, and visual displays in human–machine systems, particularly in light of the recent trend of increased female participation in the industrial sector. Future research will focus on diverse types of visual information to further validate these findings. Full article
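The sensitivity and response-bias analysis mentioned above rests on the standard signal detection theory formulas: sensitivity d′ = Z(hit rate) − Z(false-alarm rate) and criterion c = −[Z(H) + Z(F)]/2. A minimal sketch of that computation, not the authors' code (the trial counts below are hypothetical, and the log-linear correction is one common choice for avoiding infinite z-scores at perfect rates):

```python
from statistics import NormalDist

def dprime_criterion(hits, misses, false_alarms, correct_rejections):
    """Compute signal-detection sensitivity d' and response bias c.

    Applies the log-linear correction (add 0.5 to each cell) so that
    perfect hit or false-alarm rates do not yield infinite z-scores.
    """
    h = (hits + 0.5) / (hits + misses + 1.0)                  # corrected hit rate
    f = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    z = NormalDist().inv_cdf                                  # inverse standard normal CDF
    d_prime = z(h) - z(f)
    c = -(z(h) + z(f)) / 2.0
    return d_prime, c

# Hypothetical counts from 40 signal-detection trials
d, c = dprime_criterion(hits=18, misses=2, false_alarms=3, correct_rejections=17)
print(f"d' = {d:.2f}, c = {c:.2f}")
```

A d′ near zero would indicate chance-level discrimination; a negative c indicates a liberal bias toward reporting "signal present".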

13 pages, 1780 KiB  
Article
The Use of Sound Recorders to Remotely Measure Grass Intake Behaviour in Horses
by Daisy E. F. Taylor, Bryony E. Lancaster and Andrea D. Ellis
Animals 2025, 15(15), 2273; https://doi.org/10.3390/ani15152273 - 4 Aug 2025
Abstract
Visual observation to record grass intake is time-consuming and labour-intensive. Technological methods, such as activity monitors, have been used but only record head position. This study aimed to evaluate sound recorders attached to headcollars to acoustically measure grass intake behaviour in horses as a low-cost alternative method. Pilot Study 1 assessed 6 × 11 min periods comparing bites/min and chews/min between video footage (VD) and sound recorders (SR). Grazing was identified audibly (SRear) and visually through soundwave pattern software (SRwav). Chew rates (SRear: 47 ± 5 chews/min, VD: 43 ± 4 chews/min) were similar between methods. Pilot Study 2 compared hourly grass intake times between SRwav and visual observation (VO) for two horses during a 3 h period. Results showed significant correlation between methods (rho = 0.99, p < 0.01, Spearman). The main study measured intake behaviour using SRwav and VO methods for three free-ranging horses during 3 h observation periods over multiple days, adding up to 3 × 24 h in winter and in spring (n = 48). Mean differences per period between SRwav and VO were 1.8% ± 3 s.d. Foraging duration per period measured with SRwav closely matched VO (r2 = 0.99, p < 0.001). Sound recorders accurately recorded grass intake time and chews in grazing horses during moderate weather conditions. Full article
(This article belongs to the Section Equids)
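The agreement between the acoustic and visual intake measurements was quantified with Spearman's rank correlation. A minimal pure-Python sketch of that statistic (the hourly intake values below are made up for illustration, and tie handling, which real data would need, is omitted):

```python
def spearman_rho(x, y):
    """Spearman rank correlation via the classic 1 - 6*sum(d^2)/(n(n^2-1)) formula.

    Assumes no tied values; ties would require average ranks.
    """
    def ranks(values):
        order = sorted(range(len(values)), key=lambda i: values[i])
        r = [0.0] * len(values)
        for rank, idx in enumerate(order, start=1):
            r[idx] = float(rank)
        return r

    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d_squared = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1.0 - 6.0 * d_squared / (n * (n * n - 1))

# Hypothetical grass-intake minutes per hour from the two methods
sr_wav = [42, 35, 50, 28, 44, 39]   # sound-recorder soundwave analysis
visual = [40, 33, 52, 27, 45, 38]   # visual observation
print(spearman_rho(sr_wav, visual))  # 1.0 (identical rank order)
```

A rho near 1 with a significant p-value, as reported in Pilot Study 2, indicates the two methods rank the observation periods almost identically.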

21 pages, 3755 KiB  
Article
Thermal and Expansion Analysis of the Lebanese Flatbread Baking Process Using a High-Temperature Tunnel Oven
by Yves Mansour, Pierre Rahmé, Nemr El Hajj and Olivier Rouaud
Appl. Sci. 2025, 15(15), 8611; https://doi.org/10.3390/app15158611 - 4 Aug 2025
Abstract
This study investigates the thermal dynamics and material behavior involved in the baking process for Lebanese flatbread, focusing on the heat transfer mechanisms, water loss, and dough expansion under high-temperature conditions. Despite previous studies on flatbread baking using impingement or conventional ovens, this work presents the first experimental investigation of the traditional Lebanese flatbread baking process under realistic industrial conditions, specifically using a high-temperature tunnel oven with direct flame heating, extremely short baking times (~10–12 s), and peak temperatures reaching ~650 °C, which are essential to achieving the characteristic pocket formation and texture of Lebanese bread. This experimental study characterizes the baking kinetics of traditional Lebanese flatbread, recording mass loss pre- and post-baking, thermal profiles, and dough expansion through real-time temperature measurements and video recordings, providing insights into the dough’s thermal response and expansion behavior under high-temperature conditions. A custom-designed instrumented oven with a steel conveyor and a direct flame burner was employed. The dough, prepared following a traditional recipe, was analyzed during the baking process using K-type thermocouples and visual monitoring. Results revealed that Lebanese bread undergoes significant water loss due to high baking temperatures (~650 °C), leading to rapid crust formation and pocket development. Empirical equations modeling the relationship between baking time, temperature, and expansion were developed with high predictive accuracy. Additionally, an energy analysis revealed that the total energy required to bake Lebanese bread is approximately 667 kJ/kg, with an overall thermal efficiency of only 21%, dropping to 16% when preheating is included. According to previous CFD (Computational Fluid Dynamics) simulations, most heat loss in similar tunnel ovens occurs via the chimney (50%) and oven walls (29%). 
These findings contribute to understanding the broader thermophysical principles that can be applied to the development of more efficient baking processes for various types of bread. The empirical models developed in this study can be applied to automating and refining the industrial production of Lebanese flatbread, ensuring consistent product quality across different baking environments. Future studies will extend this work to alternative oven designs and dough formulations. Full article
(This article belongs to the Special Issue Chemical and Physical Properties in Food Processing: Second Edition)
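The ~667 kJ/kg figure is consistent with a simple energy balance: sensible heating of the dough plus latent heat for the evaporated water. A back-of-the-envelope sketch (the specific-heat and latent-heat constants are textbook values and the 20% mass loss is an assumed round number, not figures taken from the paper):

```python
CP_DOUGH = 2.8    # kJ/(kg·K), typical specific heat of bread dough (assumed)
L_VAP = 2257.0    # kJ/kg, latent heat of vaporisation of water at 100 °C

def useful_baking_energy(water_loss_frac, t_in=25.0, t_boil=100.0):
    """Useful energy per kg of dough: heat the dough to ~100 °C,
    then evaporate the lost water."""
    sensible = CP_DOUGH * (t_boil - t_in)
    latent = water_loss_frac * L_VAP
    return sensible + latent

e = useful_baking_energy(0.20)  # assume ~20% mass loss during baking
print(round(e))                 # ~661 kJ/kg, close to the reported ~667 kJ/kg
print(round(e / 0.21))          # implied fuel input per kg at 21% overall efficiency
```

The second print shows why efficiency matters: at 21% overall efficiency, roughly five times the useful energy must be supplied as fuel, most of it lost through the chimney and oven walls per the cited CFD results.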

21 pages, 4252 KiB  
Article
AnimalAI: An Open-Source Web Platform for Automated Animal Activity Index Calculation Using Interactive Deep Learning Segmentation
by Mahtab Saeidifar, Guoming Li, Lakshmish Macheeri Ramaswamy, Chongxiao Chen and Ehsan Asali
Animals 2025, 15(15), 2269; https://doi.org/10.3390/ani15152269 - 3 Aug 2025
Abstract
Monitoring the activity index of animals is crucial for assessing their welfare and behavior patterns. However, traditional methods for calculating the activity index, such as pixel intensity differencing of entire frames, suffer from significant interference and noise, leading to inaccurate results. These classical approaches also do not support group or individual tracking in a user-friendly way, and no open-access platform exists for non-technical researchers. This study introduces an open-source web-based platform that allows researchers to calculate the activity index from top-view videos by selecting individual or group animals. It integrates Segment Anything Model 2 (SAM2), a promptable deep learning segmentation model, to track animals without additional training or annotation. The platform accurately tracked Cobb 500 male broilers from weeks 1 to 7 with a 100% success rate, IoU of 92.21% ± 0.012, precision of 93.87% ± 0.019, recall of 98.15% ± 0.011, and F1 score of 95.94% ± 0.006, based on 1157 chickens. Statistical analysis showed that tracking 80% of birds in week 1, 60% in week 4, and 40% in week 7 was sufficient (r ≥ 0.90; p ≤ 0.048) to represent group activity at the respective ages. This platform offers a practical, accessible solution for activity tracking, supporting animal behavior analytics with minimal effort. Full article
(This article belongs to the Section Animal Welfare)
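The classical activity index the platform improves on is frame differencing; restricting the difference to a segmentation mask (as SAM2 tracking enables) suppresses background noise. A minimal numpy sketch of the masked version (the threshold and frame shapes are illustrative, not the platform's actual parameters):

```python
import numpy as np

def activity_index(prev_frame, curr_frame, mask, threshold=15):
    """Fraction of masked pixels whose grey value changed by more than
    `threshold` between two consecutive frames."""
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    moving = (diff > threshold) & mask
    return moving.sum() / max(int(mask.sum()), 1)

# Toy example: a 2x2 patch of a 10x10 frame changes by 50 grey levels
prev = np.zeros((10, 10), dtype=np.uint8)
curr = prev.copy()
curr[:2, :2] = 50
mask = np.ones((10, 10), dtype=bool)     # whole frame, or a per-bird SAM2 mask
print(activity_index(prev, curr, mask))  # 0.04
```

Passing a per-animal mask instead of a whole-frame mask is what turns this global index into the individual or group activity index the platform computes.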

20 pages, 4569 KiB  
Article
Lightweight Vision Transformer for Frame-Level Ergonomic Posture Classification in Industrial Workflows
by Luca Cruciata, Salvatore Contino, Marianna Ciccarelli, Roberto Pirrone, Leonardo Mostarda, Alessandra Papetti and Marco Piangerelli
Sensors 2025, 25(15), 4750; https://doi.org/10.3390/s25154750 - 1 Aug 2025
Abstract
Work-related musculoskeletal disorders (WMSDs) are a leading concern in industrial ergonomics, often stemming from sustained non-neutral postures and repetitive tasks. This paper presents a vision-based framework for real-time, frame-level ergonomic risk classification using a lightweight Vision Transformer (ViT). The proposed system operates directly on raw RGB images without requiring skeleton reconstruction, joint angle estimation, or image segmentation. A single ViT model simultaneously classifies eight anatomical regions, enabling efficient multi-label posture assessment. Training is supervised using a multimodal dataset acquired from synchronized RGB video and full-body inertial motion capture, with ergonomic risk labels derived from RULA scores computed on joint kinematics. The system is validated on realistic, simulated industrial tasks that include common challenges such as occlusion and posture variability. Experimental results show that the ViT model achieves state-of-the-art performance, with F1-scores exceeding 0.99 and AUC values above 0.996 across all regions. Compared to a previous CNN-based system, the proposed model improves classification accuracy and generalizability while reducing complexity and enabling real-time inference on edge devices. These findings demonstrate the model’s potential for unobtrusive, scalable ergonomic risk monitoring in real-world manufacturing environments. Full article
(This article belongs to the Special Issue Secure and Decentralised IoT Systems)
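The RULA scores used as training labels map to standard action levels defined by the method itself. A small sketch of that standard mapping (the binary risk threshold shown is an illustrative choice for deriving per-region labels, not necessarily the one used in the paper):

```python
def rula_action_level(grand_score):
    """Map a RULA grand score (1-7) to its standard action level."""
    if grand_score <= 2:
        return 1  # posture acceptable if not held for long periods
    if grand_score <= 4:
        return 2  # further investigation; change may be needed
    if grand_score <= 6:
        return 3  # investigate and change soon
    return 4      # investigate and implement change immediately

def risky(grand_score, level_threshold=3):
    """One possible binarisation into a training label (hypothetical cut-off)."""
    return rula_action_level(grand_score) >= level_threshold

print([rula_action_level(s) for s in range(1, 8)])  # [1, 1, 2, 2, 3, 3, 4]
```

A frame-level classifier trained on such labels predicts the discretised risk directly from pixels, which is what lets the system skip joint-angle estimation at inference time.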

28 pages, 5699 KiB  
Article
Multi-Modal Excavator Activity Recognition Using Two-Stream CNN-LSTM with RGB and Point Cloud Inputs
by Hyuk Soo Cho, Kamran Latif, Abubakar Sharafat and Jongwon Seo
Appl. Sci. 2025, 15(15), 8505; https://doi.org/10.3390/app15158505 - 31 Jul 2025
Abstract
Recently, deep learning algorithms have been increasingly applied in construction for activity recognition, particularly for excavators, to automate processes and enhance safety and productivity through continuous monitoring of earthmoving activities. These deep learning algorithms analyze construction videos to classify excavator activities for earthmoving purposes. However, previous studies have solely focused on single-source external videos, which limits the activity recognition capabilities of the deep learning algorithm. This paper introduces a novel multi-modal deep learning-based methodology for recognizing excavator activities, utilizing multi-stream input data. It processes point clouds and RGB images using a two-stream convolutional neural network–long short-term memory (CNN-LSTM) method to extract spatiotemporal features, enabling the recognition of excavator activities. A comprehensive dataset comprising 495,000 video frames of synchronized RGB and point cloud data was collected across multiple construction sites under varying conditions. The dataset encompasses five key excavator activities: Approach, Digging, Dumping, Idle, and Leveling. To assess the effectiveness of the proposed method, the performance of the two-stream CNN-LSTM architecture is compared with that of single-stream CNN-LSTM models on the same RGB and point cloud datasets, separately. The results demonstrate that the proposed multi-stream approach achieved an accuracy of 94.67%, outperforming existing state-of-the-art single-stream models, which achieved 90.67% accuracy for the RGB-based model and 92.00% for the point cloud-based model. These findings underscore the potential of the proposed activity recognition method, making it highly effective for automatic real-time monitoring of excavator activities, thereby laying the groundwork for future integration into digital twin systems for proactive maintenance and intelligent equipment management. Full article
(This article belongs to the Special Issue AI-Based Machinery Health Monitoring)
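The abstract does not detail how the RGB and point-cloud streams are combined; a common pattern for two-stream networks is late fusion of the per-stream class probabilities, sketched here (the class names are the paper's five activities, but the probabilities and equal weighting are hypothetical):

```python
def late_fusion(p_rgb, p_pc, w_rgb=0.5):
    """Weighted average of the two streams' class-probability vectors."""
    return [w_rgb * a + (1 - w_rgb) * b for a, b in zip(p_rgb, p_pc)]

CLASSES = ["Approach", "Digging", "Dumping", "Idle", "Leveling"]
p_rgb = [0.10, 0.55, 0.20, 0.10, 0.05]   # RGB stream output (hypothetical)
p_pc  = [0.05, 0.70, 0.10, 0.10, 0.05]   # point-cloud stream output (hypothetical)

fused = late_fusion(p_rgb, p_pc)
prediction = CLASSES[max(range(len(fused)), key=fused.__getitem__)]
print(prediction)  # Digging
```

When both streams agree, fusion sharpens the decision; when they disagree, the weighting determines which modality dominates, which is one reason multi-stream models can beat either single stream alone.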

19 pages, 3130 KiB  
Article
Deep Learning-Based Instance Segmentation of Galloping High-Speed Railway Overhead Contact System Conductors in Video Images
by Xiaotong Yao, Huayu Yuan, Shanpeng Zhao, Wei Tian, Dongzhao Han, Xiaoping Li, Feng Wang and Sihua Wang
Sensors 2025, 25(15), 4714; https://doi.org/10.3390/s25154714 - 30 Jul 2025
Abstract
The conductors of high-speed railway OCSs (Overhead Contact Systems) are susceptible to conductor galloping due to the impact of natural elements such as strong winds, rain, and snow, resulting in conductor fatigue damage and significantly compromising train operational safety. Consequently, monitoring the galloping status of conductors is crucial, and instance segmentation techniques, by delineating the pixel-level contours of each conductor, can significantly aid in the identification and study of galloping phenomena. This work expands upon the YOLO11-seg model and introduces an instance segmentation approach for galloping video and image sensor data of OCS conductors. The algorithm, designed for the stripe-like distribution of OCS conductors in the data, employs four-direction Sobel filters to extract edge features in horizontal, vertical, and diagonal orientations. These features are subsequently integrated with the original convolutional branch to form the FDSE (Four Direction Sobel Enhancement) module. It integrates the ECA (Efficient Channel Attention) mechanism for the adaptive augmentation of conductor characteristics and utilizes the FL (Focal Loss) function to mitigate the class-imbalance issue between positive and negative samples, hence enhancing the model’s sensitivity to conductors. Finally, segmentation masks from neighboring frames are compared, and mask-difference analysis autonomously detects conductor galloping locations, emphasizing their contours to clearly depict galloping characteristics. Experimental results demonstrate that the enhanced YOLO11-seg model achieves 85.38% precision, 77.30% recall, 84.25% AP@0.5, 81.14% F1-score, and a real-time processing speed of 44.78 FPS. When combined with the galloping visualization module, it can issue real-time alerts of conductor galloping anomalies, providing robust technical support for railway OCS safety monitoring. Full article
(This article belongs to the Section Industrial Sensors)
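The FDSE module builds on four-direction Sobel filtering. A minimal numpy sketch of the four edge kernels and a naive sliding-window application (the kernels are the standard Sobel operators and their 45°-rotated variants; the learned convolutional branch that the module fuses these responses with is omitted):

```python
import numpy as np

# Sobel kernels for horizontal, vertical, and the two diagonal orientations
SOBEL = {
    "horizontal": np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]], float),
    "vertical":   np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float),
    "diag_45":    np.array([[0, 1, 2], [-1, 0, 1], [-2, -1, 0]], float),
    "diag_135":   np.array([[-2, -1, 0], [-1, 0, 1], [0, 1, 2]], float),
}

def conv2d_valid(img, kernel):
    """Naive 'valid' 2-D cross-correlation (what deep-learning 'convolution'
    computes), for illustration only."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = np.sum(img[i:i + 3, j:j + 3] * kernel)
    return out

# A vertical step edge responds to the vertical kernel, not the horizontal one
img = np.zeros((5, 5))
img[:, 3:] = 1.0
print(conv2d_valid(img, SOBEL["vertical"]).max())              # 4.0
print(np.abs(conv2d_valid(img, SOBEL["horizontal"])).max())    # 0.0
```

Stripe-like conductors produce strong responses in whichever of the four orientations matches their direction, which is why concatenating all four response maps with the learned features helps the segmentation head.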

23 pages, 4510 KiB  
Article
Identification and Characterization of Biosecurity Breaches on Poultry Farms with a Recent History of Highly Pathogenic Avian Influenza Virus Infection Determined by Video Camera Monitoring in the Netherlands
by Armin R. W. Elbers and José L. Gonzales
Pathogens 2025, 14(8), 751; https://doi.org/10.3390/pathogens14080751 - 30 Jul 2025
Abstract
Biosecurity measures applied on poultry farms, with a recent history of highly pathogenic avian influenza virus infection, were monitored using 24 h/7 days-per-week video monitoring. Definitions of biosecurity breaches were based on internationally acknowledged norms. Farms of four different production types (two broiler, two layer, two breeder broiler, and one duck farm) were selected. Observations of entry to and exit from the anteroom revealed a high degree of biosecurity breaches on six poultry farms; only one farm showed good biosecurity practice, strictly maintaining the separation between clean and potentially contaminated areas in the anteroom. Hand washing with soap and water and/or using disinfectant lotion was rarely observed at entry to the anteroom and was almost absent at exit. Egg transporters neither disinfected fork-lift wheels when entering the egg-storage room nor changed or properly disinfected footwear. The egg-storage room was not cleaned and disinfected after egg transport by the farmer. Similarly, footwear and trolley wheels were not disinfected when introducing young broilers or ducklings to the poultry unit. Biosecurity breaches were observed when introducing bedding material in the duck farm. This study shows a need for an engaging awareness and training campaign for poultry farmers and their co-workers as well as for transporters to promote good biosecurity practices. Full article

22 pages, 554 KiB  
Systematic Review
Smart Homes: A Meta-Study on Sense of Security and Home Automation
by Carlos M. Torres-Hernandez, Mariano Garduño-Aparicio and Juvenal Rodriguez-Resendiz
Technologies 2025, 13(8), 320; https://doi.org/10.3390/technologies13080320 - 30 Jul 2025
Abstract
This review examines advancements in smart home security through the integration of home automation technologies. Various security systems, including surveillance cameras, smart locks, and motion sensors, are analyzed, highlighting their effectiveness in enhancing home security. These systems enable users to monitor and control their homes in real-time, providing an additional layer of security. The document also examines how these security systems can enhance the quality of life for users by providing greater convenience and control over their domestic environment. The ability to receive instant alerts and access video recordings from anywhere allows users to respond quickly to unexpected situations, thereby increasing their sense of security and well-being. Additionally, the challenges and future trends in this field are addressed, emphasizing the importance of designing solutions that are intuitive and easy to use. As technology continues to evolve, it is crucial for developers and manufacturers to focus on creating products that seamlessly integrate into users’ daily lives, facilitating their adoption and use. This comprehensive state-of-the-art review, based on the Scopus database, provides a detailed overview of the current status and future potential of smart home security systems. It highlights how ongoing innovation in this field can lead to the development of more advanced and efficient solutions that not only protect homes but also enhance the overall user experience. Full article
(This article belongs to the Special Issue Smart Systems (SmaSys2024))

24 pages, 1408 KiB  
Systematic Review
Fear Detection Using Electroencephalogram and Artificial Intelligence: A Systematic Review
by Bladimir Serna, Ricardo Salazar, Gustavo A. Alonso-Silverio, Rosario Baltazar, Elías Ventura-Molina and Antonio Alarcón-Paredes
Brain Sci. 2025, 15(8), 815; https://doi.org/10.3390/brainsci15080815 - 29 Jul 2025
Abstract
Background/Objectives: Fear detection through EEG signals has gained increasing attention due to its applications in affective computing, mental health monitoring, and intelligent safety systems. This systematic review aimed to identify the most effective methods, algorithms, and configurations reported in the literature for detecting fear from EEG signals using artificial intelligence (AI). Methods: Following the PRISMA 2020 methodology, a structured search was conducted using the string (“fear detection” AND “artificial intelligence” OR “machine learning” AND NOT “fnirs OR mri OR ct OR pet OR image”). After applying inclusion and exclusion criteria, 11 relevant studies were selected. Results: The review examined key methodological aspects such as algorithms (e.g., SVM, CNN, Decision Trees), EEG devices (Emotiv, Biosemi), experimental paradigms (videos, interactive games), dominant brainwave bands (beta, gamma, alpha), and electrode placement. Non-linear models, particularly when combined with immersive stimulation, achieved the highest classification accuracy (up to 92%). Beta and gamma frequencies were consistently associated with fear states, while frontotemporal electrode positioning and proprietary datasets further enhanced model performance. Conclusions: EEG-based fear detection using AI demonstrates high potential and rapid growth, offering significant interdisciplinary applications in healthcare, safety systems, and affective computing. Full article
(This article belongs to the Special Issue Neuropeptides, Behavior and Psychiatric Disorders)
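The beta/gamma-band findings rest on standard spectral band-power features extracted from the EEG. A minimal sketch of relative band power via a plain FFT periodogram (the band edges are conventional definitions and the synthetic 20 Hz test signal is illustrative; real pipelines typically use windowed estimators such as Welch's method):

```python
import numpy as np

# Conventional EEG band edges in Hz (gamma capped at 45 Hz here)
BANDS = {"alpha": (8, 13), "beta": (13, 30), "gamma": (30, 45)}

def relative_band_powers(signal, fs):
    """Relative spectral power per band from a plain FFT periodogram."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2
    total = psd[(freqs >= 1) & (freqs <= 45)].sum()
    return {band: float(psd[(freqs >= lo) & (freqs < hi)].sum() / total)
            for band, (lo, hi) in BANDS.items()}

# A pure 20 Hz oscillation should land almost entirely in the beta band
fs = 128
t = np.arange(2 * fs) / fs                     # 2 s of samples
powers = relative_band_powers(np.sin(2 * np.pi * 20 * t), fs)
print(max(powers, key=powers.get))             # beta
```

Feature vectors of such band powers per electrode are what classifiers like the SVMs and CNNs surveyed here consume to discriminate fear from neutral states.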

19 pages, 750 KiB  
Article
Parents as First Responders: Experiences of Emergency Care in Children with Nemaline Myopathy: A Qualitative Study
by Raúl Merchán Arjona, Juan Francisco Velarde-García, Enrique Pacheco del Cerro and Alfonso Meneses Monroy
Nurs. Rep. 2025, 15(8), 271; https://doi.org/10.3390/nursrep15080271 - 29 Jul 2025
Abstract
Background: Nemaline myopathy is a rare congenital neuromuscular disease associated with progressive weakness and frequent respiratory complications. In emergency situations, families often serve as the first and only responders. The aim of this study is to explore how parents in Spain care for children with nemaline myopathy during emergency situations, focusing on the clinical responses performed at home and the organizational challenges encountered when interacting with healthcare systems. Methods: A qualitative phenomenological study was conducted with 17 parents from 10 families belonging to the Asociación Yo Nemalínica. Semi-structured interviews were performed via video calls, transcribed verbatim, and analyzed using Giorgi’s descriptive method and ATLAS.ti software (version 24). Methodological rigor was ensured through triangulation, reflexivity, and member validation. Results: Four themes were identified. First, families were described as acting under extreme pressure and in isolation during acute home emergencies, often providing cardiopulmonary resuscitation and respiratory support without professional backup. Second, families managed ambiguous signs of deterioration using clinical judgment and home monitoring tools, often preventing fatal outcomes. Third, parents frequently assumed guiding roles in emergency departments due to a lack of clinician familiarity with the disease, leading to delays or errors. Finally, the transition to the Pediatric Intensive Care Unit was marked by emotional distress and rapid decision-making, with families often participating in critical choices about invasive procedures. These findings underscore the complex, multidisciplinary nature of caregiving. Conclusions: Parents play an active clinical role during emergencies and episodes of deterioration. Their lived experience should be formally integrated into emergency protocols and the continuity of care strategies to improve safety and outcomes. Full article

22 pages, 1359 KiB  
Article
Fall Detection Using Federated Lightweight CNN Models: A Comparison of Decentralized vs. Centralized Learning
by Qasim Mahdi Haref, Jun Long and Zhan Yang
Appl. Sci. 2025, 15(15), 8315; https://doi.org/10.3390/app15158315 - 25 Jul 2025
Abstract
Fall detection is a critical task in healthcare monitoring systems, especially for elderly populations, for whom timely intervention can significantly reduce morbidity and mortality. This study proposes a privacy-preserving and scalable fall-detection framework that integrates federated learning (FL) with transfer learning (TL) to train deep learning models across decentralized data sources without compromising user privacy. The pipeline begins with data acquisition, in which annotated video-based fall-detection datasets formatted in YOLO are used to extract image crops of human subjects. These images are then preprocessed, resized, normalized, and relabeled into binary classes (fall vs. non-fall). A stratified 80/10/10 split ensures balanced training, validation, and testing. To simulate real-world federated environments, the training data is partitioned across multiple clients, each performing local training using pretrained CNN models including MobileNetV2, VGG16, EfficientNetB0, and ResNet50. Two FL topologies are implemented: a centralized server-coordinated scheme and a ring-based decentralized topology. During each round, only model weights are shared, and federated averaging (FedAvg) is applied for global aggregation. The models were trained using three random seeds to ensure result robustness and stability across varying data partitions. Among all configurations, decentralized MobileNetV2 achieved the best results, with a mean test accuracy of 0.9927, F1-score of 0.9917, and average training time of 111.17 s per round. These findings highlight the model’s strong generalization, low computational burden, and suitability for edge deployment. Future work will extend evaluation to external datasets and address issues such as client drift and adversarial robustness in federated environments. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
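The FedAvg aggregation used in both topologies is a sample-size-weighted mean of the clients' model parameters. A minimal sketch of one aggregation round (the parameter vectors and client sizes are toy values; real FL frameworks aggregate full tensors layer by layer and repeat over many rounds):

```python
def fedavg(client_weights, client_sizes):
    """Federated averaging: weighted mean of clients' parameter vectors,
    weighted by each client's number of training samples."""
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    averaged = [0.0] * n_params
    for weights, n in zip(client_weights, client_sizes):
        for i in range(n_params):
            averaged[i] += (n / total) * weights[i]
    return averaged

# Two clients with unequal data: the larger client dominates the average
global_weights = fedavg([[1.0, 2.0], [3.0, 4.0]], client_sizes=[100, 300])
print(global_weights)  # [2.5, 3.5]
```

Only these weight vectors cross the network, never the raw fall-detection images, which is the privacy property the framework relies on; the ring topology differs only in how the averages propagate, not in the averaging rule itself.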

54 pages, 1242 KiB  
Review
Optical Sensor-Based Approaches in Obesity Detection: A Literature Review of Gait Analysis, Pose Estimation, and Human Voxel Modeling
by Sabrine Dhaouadi, Mohamed Moncef Ben Khelifa, Ala Balti and Pascale Duché
Sensors 2025, 25(15), 4612; https://doi.org/10.3390/s25154612 - 25 Jul 2025
Abstract
Optical sensor technologies are reshaping obesity detection by enabling non-invasive, dynamic analysis of biomechanical and morphological biomarkers. This review synthesizes recent advances in three key areas: optical gait analysis, vision-based pose estimation, and depth-sensing voxel modeling. Gait analysis leverages optical sensor arrays and video systems to identify obesity-specific deviations, such as reduced stride length and asymmetric movement patterns. Pose estimation algorithms—including markerless frameworks like OpenPose and MediaPipe—track kinematic patterns indicative of postural imbalance and altered locomotor control. Human voxel modeling reconstructs 3D body composition metrics, such as waist–hip ratio, through infrared-depth sensing, offering precise, contactless anthropometry. Despite their potential, challenges persist in sensor robustness under uncontrolled environments, algorithmic biases in diverse populations, and scalability for widespread deployment in existing health workflows. Emerging solutions such as federated learning and edge computing aim to address these limitations by enabling multimodal data harmonization and portable, real-time analytics. Future priorities involve standardizing validation protocols to ensure reproducibility, optimizing cost-efficiency for scalable deployment, and integrating optical systems with wearable technologies for holistic health monitoring. By shifting obesity diagnostics from static metrics to dynamic, multidimensional profiling, optical sensing paves the way for scalable public health interventions and personalized care strategies.
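As an illustration of the kind of obesity-specific gait deviation such systems quantify, a stride-length symmetry index can be computed from per-foot heel-strike positions extracted by any pose estimator. The function names and sample values below are hypothetical — a minimal sketch using the widely used Robinson symmetry index, not a method from any of the reviewed papers:

```python
def stride_lengths(heel_x):
    """Stride lengths (m) from successive heel-strike x-positions of one foot."""
    return [b - a for a, b in zip(heel_x, heel_x[1:])]

def symmetry_index(left, right):
    """Robinson symmetry index (%): 0 means perfectly symmetric strides."""
    ml, mr = sum(left) / len(left), sum(right) / len(right)
    return 100.0 * abs(ml - mr) / (0.5 * (ml + mr))

# Hypothetical heel-strike positions from a pose-estimation track.
left = stride_lengths([0.0, 1.2, 2.4, 3.6])    # mean stride 1.2 m
right = stride_lengths([0.5, 1.5, 2.5, 3.5])   # mean stride 1.0 m
si = symmetry_index(left, right)               # ~18% asymmetry
```

In practice the heel positions would come from 2D/3D keypoints (e.g., OpenPose or MediaPipe ankle landmarks) projected onto the walking direction.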

18 pages, 8446 KiB  
Article
Evaluation of Single-Shot Object Detection Models for Identifying Fanning Behavior in Honeybees at the Hive Entrance
by Tomyslav Sledevič
Agriculture 2025, 15(15), 1609; https://doi.org/10.3390/agriculture15151609 - 25 Jul 2025
Abstract
Thermoregulatory fanning behavior in honeybees is a vital indicator of colony health and environmental response. This study presents a novel dataset of 18,000 annotated video frames containing 57,597 instances capturing fanning behavior at the hive entrance across diverse conditions. Three state-of-the-art single-shot object detection models (YOLOv8, YOLO11, YOLO12) are evaluated using standard RGB input and two motion-enhanced encodings: Temporally Stacked Grayscale (TSG) and Temporally Encoded Motion (TEM). Results show that models incorporating temporal information via TSG and TEM significantly outperform RGB-only input, achieving up to 85% mAP@50 with real-time inference capability on high-performance GPUs. Deployment tests on the Jetson AGX Orin platform demonstrate feasibility for edge computing, though with accuracy–speed trade-offs in smaller models. This work advances real-time, non-invasive monitoring of hive health, with implications for precision apiculture and automated behavioral analysis.
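The paper's exact TSG and TEM definitions are not given in the abstract, so the following is only a plausible reading: TSG stacks three consecutive grayscale frames into the three input channels of a standard detector, while TEM combines the current frame with frame-difference motion maps. Both functions here are assumptions for illustration, not the authors' encodings:

```python
import numpy as np

def tsg_encode(frames):
    """Stack three consecutive grayscale frames (H, W) into one (H, W, 3)
    pseudo-RGB input, so a single-shot detector sees short-term motion."""
    assert len(frames) == 3
    return np.stack(frames, axis=-1)

def tem_encode(frames):
    """A simple temporally encoded motion map: the current frame plus
    absolute frame differences in separate channels (sketch only)."""
    f0, f1, f2 = frames
    return np.stack([f2, np.abs(f2 - f1), np.abs(f1 - f0)], axis=-1)

# Three toy 4x4 grayscale frames with increasing intensity.
frames = [np.zeros((4, 4)), np.ones((4, 4)), np.full((4, 4), 3.0)]
x = tsg_encode(frames)   # shape (4, 4, 3), drop-in replacement for RGB input
m = tem_encode(frames)   # channels: current frame, |f2-f1|, |f1-f0|
```

Either encoding keeps the detector architecture unchanged, which is why RGB-pretrained YOLO weights can still be used as the starting point.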

19 pages, 3862 KiB  
Article
Estimation of Total Hemoglobin (SpHb) from Facial Videos Using 3D Convolutional Neural Network-Based Regression
by Ufuk Bal, Faruk Enes Oguz, Kubilay Muhammed Sunnetci, Ahmet Alkan, Alkan Bal, Ebubekir Akkuş, Halil Erol and Ahmet Çağdaş Seçkin
Biosensors 2025, 15(8), 485; https://doi.org/10.3390/bios15080485 - 25 Jul 2025
Abstract
Hemoglobin plays a critical role in diagnosing various medical conditions, including infections, trauma, hemolytic disorders, and Mediterranean anemia, which is particularly prevalent in Mediterranean populations. Conventional measurement methods require blood sampling and laboratory analysis, which are often time-consuming and impractical during emergency situations with limited medical infrastructure. Although portable oximeters enable non-invasive hemoglobin estimation, they still require physical contact, posing limitations for individuals with circulatory or dermatological conditions. Additionally, reliance on disposable probes increases operational costs. This study presents a non-contact and automated approach for estimating total hemoglobin levels from facial video data using three-dimensional regression models. A dataset was compiled from 279 volunteers, with synchronized acquisition of facial video and hemoglobin values using a commercial pulse oximeter. After preprocessing, the dataset was divided into training, validation, and test subsets. Three 3D convolutional regression models, including 3D CNN, channel attention-enhanced 3D CNN, and residual 3D CNN, were trained, and the most successful model was implemented in a graphical interface. Among these, the residual model achieved the most favorable performance on the test set, yielding an RMSE of 1.06, an MAE of 0.85, and a Pearson correlation coefficient of 0.73. This study offers a novel contribution by enabling contactless hemoglobin estimation from facial video using 3D CNN-based regression techniques.
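The reported test metrics (RMSE 1.06, MAE 0.85, Pearson r 0.73) are standard regression measures that can be computed for any model in a few lines of NumPy. The sample values below are illustrative placeholders, not data from the study:

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    """RMSE, MAE, and Pearson r for a regression model's predictions."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    err = y_pred - y_true
    rmse = float(np.sqrt(np.mean(err ** 2)))          # root-mean-square error
    mae = float(np.mean(np.abs(err)))                 # mean absolute error
    r = float(np.corrcoef(y_true, y_pred)[0, 1])      # Pearson correlation
    return rmse, mae, r

# Hypothetical hemoglobin values (g/dL): ground truth vs. model output.
rmse, mae, r = regression_metrics([12.0, 13.5, 15.0], [12.5, 13.0, 15.5])
# Errors are +0.5, -0.5, +0.5, so RMSE = MAE = 0.5 here.
```

On the paper's scale, an MAE of 0.85 g/dL means predictions are, on average, within about one unit of the oximeter reference.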
