Search Results (3,350)

Search Parameters:
Keywords = machine vision

28 pages, 16427 KB  
Article
A Multidimensional Assessment Framework for Urban Green Perception Using Large Vision Models and Mixed Reality
by Jingchao Wang, Yuehao Cao, Ximing Yue and Lulu Wang
Buildings 2026, 16(4), 877; https://doi.org/10.3390/buildings16040877 (registering DOI) - 22 Feb 2026
Abstract
Accurately assessing urban green perception is crucial for sustainable urban development and human well-being, yet conventional approaches often depend on simplistic objective metrics and non-immersive, screen-based subjective surveys, undermining ecological validity. This study develops and validates a multidimensional assessment framework that integrates Large Vision Models (LVMs) and Mixed Reality (MR) to couple objective environmental features with immersive human perception. The framework comprises 30 objective and 6 subjective indicators; state-of-the-art LVMs including DINOv2 and Depth Anything were applied to accurately extract objective features from Street View Imagery (SVI); and the MR device, Meta Quest 3, was utilized for the immersive collection of high-quality subjective data. In an empirical study with 74 volunteers in Shenzhen, China, machine learning models trained on MR-based data achieved 20–50% higher R2 for subjective perception than models trained on traditional screen-based data. The validated framework was then applied to 61,131 SVIs citywide to map the spatial distribution of multidimensional green perception and to quantify relationships between objective and subjective indicators. Going beyond technical validation, this study demonstrates how the framework serves as a critical tool for urban planning and landscape upgrading. By diagnosing perceptual deficits where greening quantity does not translate into quality experiences, the framework supports a paradigm shift from quantity-oriented greening to perception-oriented spatial optimization. These findings offer actionable insights for policymakers to prioritize interventions that effectively enhance public health and environmental equity in high-density cities. Full article
(This article belongs to the Section Architectural Design, Urban Science, and Real Estate)
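The study's final analysis step quantifies relationships between objective and subjective indicators. A minimal numpy sketch of that idea using a rank correlation; the indicator values and ratings below are invented for illustration, not the study's data:

```python
import numpy as np

# Illustrative sketch: rank-correlating an objective indicator with a
# subjective perception score. All values here are made up, not the study's.
def spearman(x, y):
    """Spearman rank correlation for samples without ties."""
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    rx -= rx.mean()
    ry -= ry.mean()
    return float((rx @ ry) / np.sqrt((rx @ rx) * (ry @ ry)))

greenery = np.array([0.10, 0.25, 0.40, 0.55, 0.70])   # objective: greenery ratio
perception = np.array([2.0, 3.1, 3.0, 4.2, 4.8])      # subjective: rated greenness
rho = spearman(greenery, perception)
```

A rank correlation is a reasonable choice here because subjective ratings are ordinal; the paper's actual statistical machinery may differ.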
22 pages, 920 KB  
Article
Growth and Development Dynamic of the Lena Population Siberian Sturgeon (Acipenser baerii Brandt, 1869) Bred in a Recirculating Aquaculture System
by Anna A. Belous, Petr I. Otradnov, Amina K. Nikipelova, Nikolay V. Bardukov, Vladislav I. Nikipelov, Grigoriy A. Shishanov, Alisa S. Rakova, Polina S. Ilyushina, Igor V. Gusev and Natalia A. Zinovieva
Animals 2026, 16(4), 677; https://doi.org/10.3390/ani16040677 (registering DOI) - 21 Feb 2026
Abstract
Siberian sturgeon (Acipenser baerii Brandt, 1869), characterized by its rapid mass accumulation and high survival rate under industrial breeding conditions, is one of the most promising aquacultural species. This research aimed to study the growth and development of farmed Siberian sturgeon to improve breeding programs. The work was conducted at the Federal Research Center for Animal Husbandry named after Academy Member L.K. Ernst and focused on the Lena population broodstock of Siberian sturgeon of the April 2022 generation (n = 98), grown in a recirculating aquaculture system (RAS). The experiment took into account body weight (W, g) and thirteen morphological measurements: L—absolute length (cm); LR—fish body length increase (cm/day); l—commercial length (cm); L2—fork length (cm); HL—head length (cm); PV—pectoventral distance (cm); VA—ventroanal distance (cm); pl1—peduncle length (cm); H—body height (cm); h—peduncle height (cm); SC—body thickness (cm); GC—body circumference (cm); and Cc—peduncle circumference (cm). These measurements were taken from the same sample of fish at six different time points, all belonging to the same generation and approximately the same age. Measurements were taken every 3 to 9 months: 1 y (group G1), 1 y. 5 m. (group G2), 2 y. 2 m. (group G3), 2 y. 5 m. (group G4), 3 y. 2 m. (group G5), and 3 y. 5 m. (group G6). To evaluate the rate of growth and development, the relative speed of growth (SGR) and the relative speed of lengthening (SLR) during the observation period were determined. To characterize the fish’s exterior, we evaluated Fulton’s condition factor (KF) and the leanness index (Q). With increasing age, there was a significant (p < 0.01) decline in both SGR (from 0.454 to 0.065 g%/day) and SLR (from 0.132 to 0.028 cm%/day), which reflects changes in the fish’s physiological processes tied to the transition from the growth phase to the puberty phase. Relatively large variability was observed in body weight (Cv = 19.7–30.4%) compared to morphological measurements (Cv = 5.7–14.9%). Correlations between morphological measurements and the body weight of the fish varied from low to high (r = 0.22–0.97). Equations that allow for precise (coefficient of determination R2 = 0.800–0.933) estimation of the fish’s body weight from morphological measurements were developed. The most preferable predictors were measurements of H (R2 = 0.931), SC (R2 = 0.933), and L2 (R2 = 0.930). These morphological measurements are promising candidates for the future development of contactless live weight estimation using computer vision and machine learning algorithms. The study of live weight conjugacy at different ages showed that the best time to use this measurement to select fish for reproduction is at the age of 2 y. 2 m. or older. The acquired data can be used for the development and improvement of programs for the selection and breeding of Siberian sturgeon grown in a recirculating aquaculture system. Full article
(This article belongs to the Section Aquatic Animals)
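The regression equations above predict body weight from single morphological measurements. A hedged sketch of the same idea, fitting an allometric model W ~ a * H**b to synthetic sturgeon-like data; the coefficients, sample values, and noise level are assumptions, not the paper's measurements:

```python
import numpy as np

# Fit an allometric weight model W ~ a * H**b by least squares in log space.
# The data below are synthetic stand-ins for real (H, W) measurements.
def fit_allometric(H, W):
    """Return (a, b) for W ~ a * H**b, fitted on log-transformed data."""
    b, log_a = np.polyfit(np.log(H), np.log(W), 1)
    return np.exp(log_a), b

def r_squared(y, y_hat):
    """Coefficient of determination, as reported per predictor in the abstract."""
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return 1.0 - ss_res / ss_tot

rng = np.random.default_rng(0)
H = rng.uniform(5.0, 15.0, 50)                        # body height, cm (synthetic)
W = 2.5 * H ** 3.1 * rng.lognormal(0.0, 0.05, 50)     # body weight, g, with noise

a, b = fit_allometric(H, W)
r2 = r_squared(np.log(W), np.log(a) + b * np.log(H))
```

On real measurements, the R2 of such a fit is the figure the abstract reports for each candidate predictor (H, SC, L2).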
21 pages, 6376 KB  
Article
Unraveling Patch Size Effects in Vision Transformers: Adversarial Robustness in Hyperspectral Image Classification
by Shashi Kiran Chandrappa, Sidike Paheding and Abel A. Reyes-Angulo
Remote Sens. 2026, 18(4), 656; https://doi.org/10.3390/rs18040656 (registering DOI) - 21 Feb 2026
Abstract
Vision Transformers (ViTs) have demonstrated strong performance in hyperspectral image (HSI) classification; however, their robustness is highly sensitive to patch size. This study investigates the impact of spatial patch size on clean accuracy and adversarial robustness using a standard ViT and a Channel Attention Fusion variant (ViT-CAF). Patch sizes from 1 × 1 to 19 × 19 are evaluated across four benchmark datasets under FGSM, BIM, CW, PGD, and RFGSM attacks. Descriptive results show that smaller patches, particularly 1 × 1 and 3 × 3, generally yield higher adversarial accuracy, while larger patches amplify localized perturbations and degrade robustness. Parameter analysis indicates that patch-size-dependent variations arise mainly from the embedding layer, with the Transformer backbone remaining fixed, confirming that robustness differences are driven primarily by spatial context rather than model capacity. These findings reveal a trade-off between spatial granularity and adversarial resilience and provide guidance for patch size selection in ViT-based HSI applications. Full article
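FGSM, the simplest attack in the list above, perturbs the input by epsilon in the direction of the sign of the loss gradient. A minimal numpy sketch on a linear (logistic) classifier; the paper attacks ViTs, and the weights, input, and epsilon here are toy assumptions chosen only to show the mechanism:

```python
import numpy as np

# Minimal FGSM sketch on a logistic classifier (not the paper's ViT models).
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, eps):
    """One FGSM step for the loss -log(sigmoid(y * w.x)), with y in {-1, +1}."""
    grad_x = -y * sigmoid(-y * (w @ x)) * w   # gradient of the loss w.r.t. x
    return x + eps * np.sign(grad_x)          # move each coordinate by +/- eps

w = np.array([1.0, -2.0, 0.5])     # toy classifier weights (assumed)
x = np.array([0.2, -0.1, 0.4])     # clean input
y = 1                              # true label
x_adv = fgsm(x, y, w, eps=0.3)

clean_margin = y * (w @ x)         # > 0 means correctly classified
adv_margin = y * (w @ x_adv)       # FGSM drives the margin down
```

The per-coordinate bound of eps is what ties this attack family to patch size in the paper's setting: larger patches give each token more pixels over which such perturbations accumulate.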
26 pages, 6887 KB  
Article
Decoding Urban Riverscape Perception: An Interpretable Machine Learning Approach Integrating Computer Vision and High-Fidelity 3D Models
by Yuzhen Tang, Shensheng Chen, Wenhui Xu, Jinxuan Ren and Junjie Luo
ISPRS Int. J. Geo-Inf. 2026, 15(2), 91; https://doi.org/10.3390/ijgi15020091 - 20 Feb 2026
Abstract
Visual perception serves as a crucial interface connecting human psychology with the built environment. However, current studies on urban riverscapes often rely on static 2D imagery, failing to capture the spatial depth and immersive experience essential for ecological validity. Furthermore, the “black box” nature of traditional machine learning models hinders the understanding of how specific environmental features drive public perception. To address these gaps, this study proposes an innovative framework integrating high-fidelity 3D models, computer vision (CV), and explainable artificial intelligence (XAI). Using the River Thames (London) and the River Seine (Paris) as diverse case studies, we constructed high-precision 3D digital twins to quantify 3D spatial metrics (e.g., Viewshed Area, H/W Ratio) and applied the SegFormer model to extract 2D visual elements (e.g., Green View Index) from water-based panoramic imagery. Subjective perception data were collected via immersive Virtual Reality (VR) experiments. Random Forest models combined with SHAP were employed to decode the non-linear driving mechanisms of perception. The results reveal three universal principles: (1) Sense of Affluence and Vibrancy are primarily driven by high building density and vertical enclosure, challenging the traditional preference for openness in waterfronts; (2) Scenic Beauty is determined by a synergy of a high Green View Index and quality artificial interfaces, suggesting a preference for nature-culture integration; (3) Sense of Boredom is significantly positively correlated with Viewshed Area, indicating that empty prospects without visual foci lead to monotony. This study demonstrates the efficacy of integrating Digital Twins and XAI in revealing robust perception mechanisms across different urban contexts, providing a scientific, evidence-based tool for precision urban planning and riverside regeneration. Full article
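The Green View Index mentioned above is, at its core, the fraction of pixels a segmentation model assigns to vegetation. A minimal sketch, assuming a hypothetical class id; SegFormer's actual label map depends on the dataset it was trained on:

```python
import numpy as np

# Green View Index sketch: fraction of vegetation pixels in a semantic
# segmentation mask. VEGETATION_CLASS is a placeholder id, not SegFormer's.
VEGETATION_CLASS = 8   # assumed id for "vegetation"

def green_view_index(seg_mask):
    """seg_mask: 2D integer array of per-pixel class ids; returns GVI in [0, 1]."""
    return float(np.mean(seg_mask == VEGETATION_CLASS))

mask = np.zeros((4, 4), dtype=int)
mask[:2, :] = VEGETATION_CLASS     # top half of the view is vegetation
gvi = green_view_index(mask)
```

In the study's pipeline, one such scalar per panoramic image becomes an input feature to the Random Forest models.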

19 pages, 3583 KB  
Article
Edge AI-Based Gait-Phase Detection for Closed-Loop Neuromodulation in SCI Mice
by Ahnsei Shon, Justin T. Vernam, Xiaolong Du and Wei Wu
Sensors 2026, 26(4), 1311; https://doi.org/10.3390/s26041311 - 18 Feb 2026
Abstract
Real-time detection of gait phase is a critical challenge for closed-loop neuromodulation systems aimed at restoring locomotion after spinal cord injury (SCI). However, many existing gait analysis approaches rely on offline processing or computationally intensive models that are unsuitable for low-latency, embedded deployment. In this study, we present a hybrid AI-based sensing architecture that enables real-time kinematic extraction and on-device gait-phase classification for closed-loop neuromodulation in SCI mice. A vision AI module performs marker-assisted, high-speed pose estimation to extract hindlimb joint angles during treadmill locomotion, while a lightweight edge AI model deployed on a microcontroller classifies gait phase and generates real-time phase-dependent stimulation triggers for closed-loop neuromodulation. The integrated system generalized to unseen SCI gait patterns without injury-specific retraining and enabled precise phase-locked biphasic stimulation in a bench-top closed-loop evaluation. This work demonstrates a low-latency, attachment-free sensing and control framework for gait-responsive neuromodulation, supporting future translation to wearable or implantable closed-loop neurorehabilitation systems. Full article
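As a deliberately simplified stand-in for the on-device gait-phase classifier, one could label stance versus swing from the sign of the joint-angle velocity. The sinusoidal "joint angle" and the threshold rule are illustrative assumptions, not the paper's edge AI model:

```python
import numpy as np

# Toy gait-phase labeling: classify each sample from the sign of the
# joint-angle velocity. A stand-in for the learned classifier, not it.
def gait_phases(angle, dt=0.01):
    velocity = np.gradient(angle, dt)            # finite-difference velocity
    return np.where(velocity >= 0, "swing", "stance")

t = np.arange(0.0, 1.0, 0.01)                    # one synthetic gait cycle, s
angle = np.sin(2.0 * np.pi * t)                  # hindlimb joint angle (toy)
phases = gait_phases(angle)                      # per-sample phase labels
```

A real closed-loop system would gate stimulation triggers on such per-sample phase labels, with far stricter latency constraints than this offline sketch implies.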

19 pages, 2621 KB  
Article
Defective Photovoltaic Module Detection Using EfficientNet-B0 in the Machine Vision Environment
by Minseop Shin, Junyoung Seo, In-Bae Lee and Sojung Kim
Machines 2026, 14(2), 232; https://doi.org/10.3390/machines14020232 - 16 Feb 2026
Abstract
Machine vision based on artificial intelligence technology is being actively utilized to reduce defect rates in the photovoltaic (PV) module production process. This study aims to propose a machine vision approach using EfficientNet-B0 for defective photovoltaic module detection. In particular, the proposed approach is applied to the electroluminescence (EL) operation, which identifies microcracks in PV modules by using polarization current. The proposed approach extracts low-level structures and local brightness variations, such as busbars, fingers, and cell boundaries, from a single convolutional block. Furthermore, the mobile inverted bottleneck convolution (MBConv) block progressively transforms defect patterns, such as microcracks and dark spots, that appear at various shooting angles into high-level feature representations. The converted image is then processed using global average pooling (GAP), Dropout, and a final fully connected layer (Dense) to calculate the probability of a defective module. A sigmoid activation function is then used to determine whether a PV module is defective. Experiments show that the proposed EfficientNet-B0-based methodology can stably achieve defect detection accuracy comparable to AlexNet and GoogLeNet, despite its relatively small number of parameters and fast processing speed. Therefore, this study will contribute to increasing the efficiency of the EL operation in industrial fields and improving the productivity of PV modules. Full article
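The classification head described above (GAP, then a dense layer, then a sigmoid) can be sketched in a few lines of numpy. The 1280x7x7 feature-map shape mimics EfficientNet-B0's final stage, but the weights are random placeholders, and Dropout is omitted since it only acts at training time:

```python
import numpy as np

# Sketch of a GAP -> Dense -> sigmoid defect-probability head.
# Shapes are EfficientNet-B0-like; weights are untrained placeholders.
def classify_defect(feature_map, w, b):
    """feature_map: (C, H, W) activations; returns a defect probability."""
    pooled = feature_map.mean(axis=(1, 2))   # global average pooling -> (C,)
    logit = pooled @ w + b                   # final fully connected (Dense) layer
    return 1.0 / (1.0 + np.exp(-logit))      # sigmoid activation

rng = np.random.default_rng(1)
features = rng.standard_normal((1280, 7, 7))
w = rng.standard_normal(1280) * 0.01
p = classify_defect(features, w, b=0.0)      # p > 0.5 would flag a defect
```

GAP is what keeps this head so small: it collapses each of the 1280 channels to one scalar, so the dense layer needs only 1280 weights regardless of image resolution.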

48 pages, 3308 KB  
Review
From Neurons to Networks: A Holistic Review of Electroencephalography (EEG) from Neurophysiological Foundations to AI Techniques
by Christos Kalogeropoulos, Konstantinos Theofilatos and Seferina Mavroudi
Signals 2026, 7(1), 17; https://doi.org/10.3390/signals7010017 - 16 Feb 2026
Abstract
Electroencephalography (EEG) has transitioned from a subjective observational method into a data-intensive analytical field that utilises sophisticated algorithms and mathematical models. This review provides a holistic foundation by detailing the neurophysiological basis, recording techniques, and applications of EEG before providing a rigorous examination of traditional and modern analytical pillars. Statistical and Time-Series Analysis, Spectral and Time-Frequency Analysis, Spatial Analysis and Source Modelling, Connectivity and Network Analysis, and Nonlinear and Chaotic Analysis are explored. Afterwards, while acknowledging the historical role of Machine Learning (ML) and Deep Learning (DL) architectures, such as Support Vector Machines (SVMs) and Convolutional Neural Networks (CNNs), this review shifts the primary focus toward current state-of-the-art Artificial Intelligence (AI) trends. We place emphasis on the emergence of Foundation Models, including Large Language Models (LLMs) and Large Vision Models (LVMs), adapted for high-dimensional neural sequences. Finally, we explore the integration of Generative AI for data augmentation and review Explainable AI (XAI) frameworks designed to bridge the gap between “black-box” decoding and clinical interpretability. We conclude that the next generation of EEG analysis will likely converge into Neuro-Symbolic architectures, synergising the massive generative power of foundation models with the rigorous, rule-based interpretability of classical signal theory. Full article

21 pages, 6687 KB  
Article
Visual Navigation Line Detection and Extraction for Hybrid Rapeseed Seed Production Parent Rows
by Ping Jiang, Xiaolong Wang, Siliang Xiang, Cong Liu, Wenwu Hu and Yixin Shi
Agriculture 2026, 16(4), 454; https://doi.org/10.3390/agriculture16040454 - 14 Feb 2026
Abstract
We aim to address the insufficient robustness of navigation line detection for the male parent rows of hybrid rapeseed seed production in complex field scenarios, and the challenges existing models face in balancing precision, real-time performance, and resource consumption. Taking YOLOv8n-seg as the baseline, we first introduced the ADown module to mitigate information loss during feature subsampling and enhance computational efficiency. Subsequently, the DySample module was employed to strengthen target feature representation and improve object discrimination in complex scenarios. Finally, the C2f module was replaced with C2f_FB to optimise feature fusion and reinforce multi-scale feature integration. Performance was evaluated through comparative experiments, ablation studies, and scenario testing. The resulting model, SegNav-YOLOv8n, achieves an average precision of 99.2%, mAP50-95 of 84.5%, a frame rate of 90.21 frames per second, and 2.6 million parameters, demonstrating superior segmentation performance in complex scenarios. SegNav-YOLOv8n balances performance and resource requirements, validating the effectiveness of the improvements and providing reliable technical support for navigating agricultural machinery in rapeseed seed production. Full article
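The mAP50-95 figure quoted above thresholds detections by Intersection over Union (IoU) at levels from 0.50 to 0.95. A minimal IoU sketch with made-up box coordinates:

```python
# IoU sketch: the box-overlap measure underlying the mAP50-95 metric.
# Boxes are (x1, y1, x2, y2); the example coordinates are invented.
def iou(a, b):
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])      # intersection top-left
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])      # intersection bottom-right
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)  # clamp to zero if disjoint
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

score = iou((0, 0, 10, 10), (5, 0, 15, 10))        # two half-overlapping boxes
```

For segmentation models such as the one above, the same ratio is computed on masks rather than boxes, but the intersection-over-union logic is identical.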

29 pages, 3204 KB  
Systematic Review
A Systematic Review of Fall Detection and Prediction Technologies for Older Adults: An Analysis of Sensor Modalities and Computational Models
by Muhammad Ishaq, Dario Calogero Guastella, Giuseppe Sutera and Giovanni Muscato
Appl. Sci. 2026, 16(4), 1929; https://doi.org/10.3390/app16041929 - 14 Feb 2026
Abstract
Background: Falls are a leading cause of morbidity and mortality among older adults, creating a need for technologies that can automatically detect falls and summon timely assistance. The rapid evolution of sensor technologies and artificial intelligence has led to a proliferation of fall detection systems (FDS). This systematic review synthesizes the recent literature to provide a comprehensive overview of the current technological landscape. Objective: The objective of this review is to systematically analyze and synthesize the evidence from the academic literature on fall detection technologies. The review focuses on three primary areas: the sensor modalities used for data acquisition, the computational models employed for fall classification, and the emerging trend of shifting from reactive detection to proactive fall risk prediction. Methods: A systematic search of electronic databases was conducted for studies published between 2008 and 2025. Following the PRISMA guidelines, 130 studies met the inclusion criteria and were selected for analysis. Information regarding sensor technology, algorithm type, validation methods, and key performance outcomes was extracted and thematically synthesized. Results: The analysis identified three dominant categories of sensor technologies: wearable systems (primarily Inertial Measurement Units), ambient systems (including vision-based, radar, WiFi, and LiDAR), and hybrid systems that fuse multiple data sources. Computationally, the field has shown a progression from threshold-based algorithms to classical machine learning and is now dominated by deep learning architectures, such as Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), and Transformers. Many studies report high performance, with accuracy, sensitivity, and specificity often exceeding 95%. An important trend is the expansion of research from post-fall detection to proactive fall risk assessment and pre-impact fall prediction, which aim to prevent falls before they cause injury. Conclusions: The technological capabilities for fall detection are well-developed, with deep learning models and a variety of sensor modalities demonstrating high accuracy in controlled settings. However, a critical gap remains: our analysis reveals that 98.5% of studies rely on simulated falls, with only two studies validating against real-world, unanticipated falls in the target demographic. Future research should prioritize real-world validation, address practical implementation challenges such as energy efficiency and user acceptance, and advance the development of integrated, multi-modal systems for effective fall risk management. Full article

24 pages, 3572 KB  
Article
Integrated Wavefront Detection for Large-Aperture Segmented Planar Mirrors: Concept & Principle
by Rui Sun, Qichang An and Xiaoxia Wu
Photonics 2026, 13(2), 189; https://doi.org/10.3390/photonics13020189 - 14 Feb 2026
Abstract
Planar mirrors play a crucial role in autocollimation testing and optical beam relay systems for telescopes and other fields. However, for next-generation large-aperture telescopes, typical monolithic planar mirrors fall short of anticipated performance requirements, owing to their high costs and fabrication limitations. Here, a new integrated multimodal testing method for 3–4 m class segmented planar mirrors is proposed. The presented system utilizes an innovative keystone architecture with a central mirror and keystone-shaped segments, which is superior to the traditional hexagonal architecture. To facilitate rapid coarse alignment, a machine vision system based on edge detection is investigated. Furthermore, the dispersed fringe technique is used for robust co-phasing. By using a segmented planar mirror designed with a sub-aperture stitching strategy and combining local apertures, the system cost was reduced and high-precision measurement was achieved. Eventually, the alignment, co-focus, and co-phasing measurements based on the proposed concept were completed, and the transfer characteristics were determined by analyzing the Optical Transfer Function (OTF). Test data show a co-phasing accuracy of better than 30 nm RMS (root mean square) and an alignment accuracy of less than 10 arcseconds. In addition, the system uses small-aperture mirrors in autocollimation testing to facilitate flexible alignment and testing of individual segments. The test optical path is configured to match the effective focal length of the system under test, and the optical lever effect of the reflectors enhances the alignment sensitivity. The method combines autocollimation and wavefront sensing, allowing the approach to provide high-precision control of co-focus, co-phasing, and surface error correction. Full article
(This article belongs to the Special Issue Advances in Optical Fiber Sensing Technology)
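The edge-detection step used for coarse alignment can be illustrated with a plain Sobel gradient filter. The half-bright synthetic image below stands in for real segment imagery, and a production system would use an optimized library rather than this explicit loop:

```python
import numpy as np

# Sobel edge-detection sketch, the kind of machine vision step used for
# coarse alignment of mirror segments. The image is synthetic.
SOBEL_X = np.array([[-1.0, 0.0, 1.0], [-2.0, 0.0, 2.0], [-1.0, 0.0, 1.0]])
SOBEL_Y = SOBEL_X.T

def sobel_edges(img):
    """Return the gradient-magnitude map of a 2D grayscale image."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = img[i:i + 3, j:j + 3]
            gx = np.sum(SOBEL_X * patch)   # horizontal gradient
            gy = np.sum(SOBEL_Y * patch)   # vertical gradient
            out[i, j] = np.hypot(gx, gy)   # gradient magnitude
    return out

img = np.zeros((16, 16))
img[:, 8:] = 1.0                   # vertical brightness edge mid-image
edges = sobel_edges(img)           # strong response only near the edge
```

Locating such edges between adjacent segments is what lets a vision system estimate their relative positions before the finer dispersed-fringe co-phasing takes over.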
28 pages, 14898 KB  
Article
Deep Learning for Classification of Internal Defects in Fused Filament Fabrication Using Optical Coherence Tomography
by Valentin Lang, Qichen Zhu, Malgorzata Kopycinska-Müller and Steffen Ihlenfeldt
Appl. Syst. Innov. 2026, 9(2), 42; https://doi.org/10.3390/asi9020042 - 14 Feb 2026
Abstract
Additive manufacturing is increasingly adopted for the industrial production of small series of functional components, particularly in thermoplastic strand extrusion processes such as Fused Filament Fabrication. This transition relies on technological advances addressing key process limitations, including dimensional instability, weak interlayer bonding, extrusion defects, moisture sensitivity, and insufficient melting. Process monitoring therefore focuses on early defect detection to minimize failed builds and costs, while ultimately enabling process optimization and adaptive control to mitigate defects during fabrication. For this purpose, a data processing pipeline for monitoring Optical Coherence Tomography images acquired during Fused Filament Fabrication is introduced. Convolutional neural networks are used for the automatic classification of tomographic cross-sections. A dataset of tomographic images undergoes semi-automatic labeling, preprocessing, model training, and evaluation. A sliding window detects outlier regions in the tomographic cross-sections, while masks suppress peripheral noise, enabling label generation based on outlier ratios. Data are split into training, validation, and test sets using block-based partitioning to limit leakage. The classification model employs a ResNet-V2 architecture with BottleneckV2 modules. Hyperparameters are optimized, with N = 2, K = 2, a dropout rate of 0.5, and a learning rate of 0.001 yielding the best performance. The model achieves 0.9446 accuracy and outperforms EfficientNet-B0 and VGG16 in both accuracy and efficiency. Full article
(This article belongs to the Special Issue AI-Driven Decision Support for Systemic Innovation)
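The sliding-window, outlier-ratio labeling step described above can be sketched on a 1D intensity profile. The window size, the z-score rule, and both thresholds are assumptions for illustration, not the paper's settings:

```python
import numpy as np

# Sketch of outlier-ratio labeling: slide a window over an intensity profile,
# flag windows whose mean deviates strongly from the global statistics, and
# call the sample defective if the fraction of flagged windows is too high.
def outlier_ratio(profile, win=8, z_thresh=3.0):
    mu, sigma = profile.mean(), profile.std()
    flags = [
        abs(profile[s:s + win].mean() - mu) > z_thresh * sigma / np.sqrt(win)
        for s in range(0, len(profile) - win + 1, win)
    ]
    return float(np.mean(flags))

def label_defective(profile, ratio_thresh=0.1):
    return outlier_ratio(profile) > ratio_thresh

profile = np.ones(64)
profile[16:24] = 5.0               # one bright block standing in for a defect
ratio = outlier_ratio(profile)     # exactly 1 of 8 windows is flagged
```

The paper applies the same idea in 2D over masked tomographic cross-sections; the resulting labels then supervise the CNN classifier.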

36 pages, 1999 KB  
Review
Artificial Intelligence in Construction Health and Safety: Use Cases, Benefits and Barriers
by Adetayo Onososen and Innocent Musonda
Safety 2026, 12(1), 30; https://doi.org/10.3390/safety12010030 - 13 Feb 2026
Abstract
Despite sustained efforts to improve construction health and safety (CHS), accident and injury rates remain persistently high, driving increased interest in Artificial Intelligence (AI)-enabled safety solutions. This study presents a thematic systematic literature review of 148 peer-reviewed journal articles published between 2013 and 2025, conducted in accordance with PRISMA guidelines and sourced from Scopus. The synthesis identifies four dominant thematic areas: AI use cases, adoption barriers, realised benefits, and future research directions. Findings indicate a strong concentration of studies on vision-based monitoring, predictive hazard detection, and automated risk assessment, while organisational, ethical, and governance dimensions remain comparatively underexplored. Recurring impediments include data quality limitations, algorithmic opacity, fragmented digital ecosystems, and organisational resistance, highlighting persistent non-technical constraints on implementation. Reported benefits consistently emphasise improved predictive accuracy, real-time situational awareness, and proactive safety intervention, signalling a transition from reactive compliance-based approaches toward anticipatory, data-driven safety management. Based on these patterns, future research should prioritise explainable AI, interoperable data infrastructures, and cross-disciplinary integration to support scalable and trustworthy AI adoption in CHS. Full article

20 pages, 5744 KB  
Article
FibroidX: Vision Transformer-Powered Prognosis and Recurrence Prediction for Uterine Fibroids Using Ultrasound Images
by Fatma M. Talaat, Yathreb Bayan Mohamed, Amira Abdulrahman, Mohamed Salem and Mohamed Shehata
Cancers 2026, 18(4), 605; https://doi.org/10.3390/cancers18040605 - 12 Feb 2026
Abstract
Background/Objectives: Uterine fibroids (UFs) are a common gynecological condition that can substantially affect women’s reproductive health and quality of life. Early fibroid prognosis and recurrence prediction are essential for personalized treatment planning and for reducing long-term consequences. In this context, prognosis refers to anticipated symptom progression and treatment response, while recurrence prediction estimates the likelihood of regrowth after interventions such as myomectomy or uterine artery embolization (UAE), or of new fibroid formation during follow-up. Conventional techniques for predicting UF prognosis and recurrence rely on imaging, clinical evaluations, and statistical models, but they are often subjective and of limited accuracy. Methods: To overcome these obstacles, we introduce FibroidX, which uses vision transformers and self-attention mechanisms to improve forecast accuracy, automate feature extraction, and provide individualized risk assessments. Prognosis encompasses overall disease progression, symptom severity, and response to therapy, whereas recurrence prediction focuses on post-treatment regrowth or new fibroid formation. Results: The dataset comprises 1990 ultrasound images split into training and test sets (80-20). With an accuracy of 98.4%, the proposed model outperformed baseline models such as Model A (92.3%) and Model B (94.1%). Precision and recall were 97.8% and 96.9%, respectively, ensuring a high proportion of correctly predicted cases. The F1-score of 97.3% reflects a balanced precision-recall trade-off, and the AUC-ROC score of 0.99 confirms strong class discrimination. Compared to traditional machine learning techniques, the model achieved a 15% increase in accuracy and a 12% reduction in the false positive rate. Conclusions: With an average inference time of 0.02 s per sample, the model is suitable for real-time applications and proved effective and reliable in prediction tasks. Full article
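The reported metrics are internally consistent: the F1-score is the harmonic mean of precision and recall, so the abstract's 97.3% follows directly from its 97.8% precision and 96.9% recall. A minimal check (not from the paper's code, using only the values quoted above):

```python
# F1 = 2PR / (P + R): harmonic mean of precision and recall.
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

precision, recall = 0.978, 0.969  # values reported in the abstract
print(round(f1_score(precision, recall), 3))  # 0.973, matching the reported 97.3%
```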
17 pages, 294 KB  
Review
Facial Expressions as a Nexus for Health Assessment
by Jinani Sooriyaarachchi and Di Jiang
Bioengineering 2026, 13(2), 208; https://doi.org/10.3390/bioengineering13020208 - 12 Feb 2026
Abstract
Facial expressions are crucial in conveying emotions and for engaging in social interactions. The facial musculature activations and their pattern of movements under emotions are similar in all humans; hence, facial expressions are considered a behavioral phenotype. Facial features related to the expression of various emotions change under different health impairments, including cognitive decline and pain experience. Hence, evaluating these facial expression deviations in comparison to healthy baseline conditions can help in the early detection of health impairments. Recent advances in machine learning and computer vision have introduced a multitude of tools for extracting human facial features, and researchers have explored the application of these tools in early screening and detection of different health conditions. Advances in these studies can especially help in telemedicine applications and in remote patient monitoring, potentially reducing the current excessive demand on the healthcare system. In addition, once developed, these technologies can assist healthcare professionals in emergency room triage, early diagnosis, and treatment. The aim of the present review is to discuss the available tools that can objectively measure facial features and to record the studies that use these tools in various health assessments. Our findings indicate that analyzing facial expressions for the detection of multiple health impairments is indeed feasible. However, for these technologies to achieve reliable real-world deployment, they must incorporate disease-specific facial features and address existing limitations, including concerns related to patient privacy. Full article
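The core idea of comparing facial expression deviations against healthy baseline conditions can be sketched as a standardized-deviation screen: features measured in a patient are z-scored against a healthy-population distribution, and large deviations flag candidates for clinical follow-up. A minimal sketch (not from the review; the action-unit names and values are hypothetical, for illustration only):

```python
from statistics import mean, stdev

def deviation_scores(baseline: dict[str, list[float]],
                     observed: dict[str, float]) -> dict[str, float]:
    """Z-score of each observed facial feature against healthy baseline samples."""
    return {
        feature: (observed[feature] - mean(values)) / stdev(values)
        for feature, values in baseline.items()
        if feature in observed
    }

# Hypothetical facial action-unit intensities (healthy samples vs. one patient).
healthy = {"AU4_brow_lowerer": [0.20, 0.30, 0.25, 0.28],
           "AU12_lip_corner": [0.60, 0.55, 0.65, 0.58]}
patient = {"AU4_brow_lowerer": 0.90, "AU12_lip_corner": 0.20}

# Flag features deviating more than two standard deviations from baseline.
flags = {au: z for au, z in deviation_scores(healthy, patient).items() if abs(z) > 2}
print(sorted(flags))
```

In practice, the facial features themselves would come from the computer-vision extraction tools surveyed in the review, and the thresholds would need to be disease-specific, as the authors note.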
(This article belongs to the Section Biomedical Engineering and Biomaterials)
42 pages, 3053 KB  
Review
A Comprehensive Review of Deepfake Detection Techniques: From Traditional Machine Learning to Advanced Deep Learning Architectures
by Ahmad Raza, Abdul Basit, Asjad Amin, Zeeshan Ahmad Arfeen, Muhammad I. Masud, Umar Fayyaz and Touqeer Ahmed Jumani
AI 2026, 7(2), 68; https://doi.org/10.3390/ai7020068 - 11 Feb 2026
Abstract
Deepfake technology poses unprecedented threats to the authenticity of digital media, creating high demand for reliable detection systems. This systematic review analyzes deepfake detection methods published from 2018 to 2025, spanning deep learning approaches, machine learning methods, and classical image processing techniques, with a specific focus on the trade-offs between accuracy, computational efficiency, and cross-dataset generalization. Through an in-depth analysis of peer-reviewed studies using three benchmark datasets (FaceForensics++, DFDC, Celeb-DF), we expose important findings that call some of the field’s prevailing assumptions into question. Our analysis yields three key results that reshape the understanding of detection capabilities and limitations. Transformer-based architectures generalize significantly better across datasets (11.33% performance decline) than CNN-based architectures (more than 15% decline), but at 3–5× the computational cost. Conversely, the assumed superiority of deep learning is not well supported: traditional machine learning methods (in our case, Random Forest) achieve comparable performance (99.64% accuracy on DFDC) at dramatically lower computational cost, opening up resource-constrained deployment scenarios. Most critically, we demonstrate systematic performance deterioration (10–15% on average) across all methodological classes, providing empirical evidence that current detection systems largely learn dataset-specific compression artifacts rather than generalizable deepfake characteristics.
These results highlight the importance of moving from accuracy-focused evaluation toward more comprehensive approaches that jointly weigh generalization capability, computational feasibility, and practical deployment constraints, thereby directing future research toward detection systems that can be deployed in practical applications. Full article
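The cross-dataset performance declines cited in the abstract correspond to a simple relative-drop measure: the fraction of in-domain accuracy lost when the same model is evaluated on an unseen benchmark. A back-of-the-envelope sketch (not from the review; the input accuracies are hypothetical values chosen to reproduce the quoted declines of ~11.33% for transformers and >15% for CNNs):

```python
def generalization_decline(in_domain_acc: float, cross_dataset_acc: float) -> float:
    """Relative accuracy drop, in percent, when moving to an unseen dataset."""
    return (in_domain_acc - cross_dataset_acc) / in_domain_acc * 100

# Hypothetical in-domain vs. cross-dataset accuracies for illustration.
print(round(generalization_decline(0.96, 0.8512), 2))  # transformer-like decline
print(round(generalization_decline(0.97, 0.8148), 2))  # CNN-like decline
```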
(This article belongs to the Section Medical & Healthcare AI)