Search Results (213)

Search Parameters:
Keywords = colour learning

19 pages, 5552 KB  
Proceeding Paper
Detection of Net Blotch Disease of Barley Using UAV-Based RGB and Multispectral Imagery at Plot Scale
by Huajian Liu, Reddy Pullanagari, Dillon Campbell, Marnie Denlay, Molly Hennekam, Hari Dadu, Paul Telfer, Stewart Coventry and Bettina Berger
Biol. Life Sci. Forum 2026, 57(1), 7; https://doi.org/10.3390/blsf2026057007 - 1 Apr 2026
Viewed by 102
Abstract
Net blotch, caused by Pyrenophora teres, is a major barley disease that occurs in two forms, spot form net blotch (SFNB) and net form net blotch (NFNB), reducing grain yield and quality worldwide. Accurate detection is critical for disease management and breeding resistant cultivars; however, traditional disease scoring is labour-intensive and error-prone. This study evaluates the use of UAV-based red–green–blue (RGB) and multispectral imagery, combined with machine learning, for determining net blotch infection levels at the plot scale across multiple sites and seasons in Australia. Various colour features, vegetation indices, and algorithms were tested, including cross-domain testing for model generalisation. We propose a robust UAV-driven pipeline enabling precise disease monitoring and phenotyping in barley breeding programs.
(This article belongs to the Proceedings of The 5th International Electronic Conference on Agronomy (IECAG 2025))

34 pages, 9802 KB  
Article
Attention-Enhanced GAN for Spatial–Spectral Fusion and Chlorophyll-a Inversion in Chen Lake, China
by Chenxi Zeng, Cheng Shang, Yankun Wang, Shan Jiang, Ningsheng Chen, Chengyu Geng, Yadong Zhou and Yun Du
Sensors 2026, 26(7), 2107; https://doi.org/10.3390/s26072107 - 28 Mar 2026
Viewed by 301
Abstract
The Sentinel-3 Ocean and Land Colour Instrument (OLCI) is designed for water monitoring. Its 21 spectral bands serve as the basis for the precise retrieval of water quality parameters. However, its coarse resolution restricts the depiction of the spatial distribution of water quality parameters in small inland water bodies. Spatial–spectral fusion is a common method to address the inherent constraints between the spatial and spectral resolutions of sensors, and deep learning-based methods are central among current approaches. Nonetheless, deep-learning-based models still face challenges in fusing Sentinel-2 Multi-Spectral Instrument (MSI) and Sentinel-3 OLCI data. Here, we propose a Multi-Scale-Attention-based Unsupervised Generative Adversarial Network (MSA-UGAN), which effectively integrates OLCI’s spectral advantage and MSI’s spatial resolution. Quantitative evaluation was conducted against five benchmark methods, including traditional approaches (GS, SFIM, MTF-GLP) and deep learning models (SRCNN, UCGAN). The results show that MSA-UGAN achieves the best overall performance: QNR (0.9709) and SSIM (0.9087) are the highest, while SAM (1.1331), spatial distortion (DS = 0.0389), and spectral distortion (Dλ = 0.0252) are the lowest. This shows that MSA-UGAN can better preserve the spatial details of S2 MSI and the spectral features of S3 OLCI data. Moreover, ERGAS (2.2734) also performs excellently in the comparative experiments. The experiment of Chlorophyll-a inversion using the fused image in Chen Lake revealed a spatial gradient ranging from 3.25 to 19.33 µg/L, with the highest concentrations in the southwestern nearshore waters, likely associated with aquaculture. These results jointly indicate that MSA-UGAN can generate high-spatial-resolution multispectral images, and the fused images can be effectively utilized for water quality monitoring, thereby providing essential data support for the precision management and scientific decision-making regarding inland lakes.
(This article belongs to the Section Remote Sensors)
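As a point of reference for the spectral-distortion figures quoted above: the Spectral Angle Mapper (SAM) measures the angle between a reference spectrum and a fused spectrum at each pixel, with lower values indicating better spectral preservation. A minimal illustrative sketch (the sample spectra below are invented, not taken from the study):

```python
import numpy as np

def spectral_angle(a, b):
    # Spectral Angle Mapper (SAM): angle in degrees between two spectra;
    # 0 degrees means the fused spectrum matches the reference exactly.
    a, b = np.asarray(a, float), np.asarray(b, float)
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

# Hypothetical 4-band reference and fused spectra for one pixel
ref = [0.12, 0.34, 0.56, 0.78]
fused = [0.13, 0.33, 0.57, 0.77]
angle = spectral_angle(ref, fused)   # small angle -> good spectral preservation
```

Averaging this angle over all pixels gives the image-level SAM score reported in fusion benchmarks.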

29 pages, 2065 KB  
Article
Effects of Caffeine Ingestion on Morning Cognitive and Muscle Strength Measures in Males: A Standardized Approach
by João P. S. Agulhari, Neil Chester, Magali Giacomoni, Karl C. Gibbons, Dani Hajdukiewicz, Haydyn L. O’Brien, Thomas D. O’Brien, Jack Jensen, Briony Lucas, Samantha L. Moss, Samuel A. Pullinger and Ben J. Edwards
Nutrients 2026, 18(6), 954; https://doi.org/10.3390/nu18060954 - 18 Mar 2026
Viewed by 1009
Abstract
Background/Objectives: We investigated whether ingestion of caffeine (~1 h before) was beneficial to subsequent morning (07:30 h) mood, strength and cognitive measures. Methods: Fourteen recreationally active males were recruited and completed six sessions: (i) one repetition maximum (1RM) for bench press and back squat; (ii) two familiarization sessions of strength measures; (iii) three experimental conditions administered in a double-blinded, randomized counterbalanced design order, either caffeine (Caffeine [CAFF], 300 mg or 2.8–4.3 mg/kg body weight), placebo (Placebo [PLAC]) ingested at 06:30 h, or no-pill control (No Pill [NoPill]). For each experimental session, on arrival at the laboratory, rectal and skin temperature were measured, as well as cognitive performance through a battery of tests (trail-making test, Rey’s auditory verbal learning test, and Stroop word–colour interference test). Thereafter, maximum voluntary contraction on an isometric chair (MVC) without and with stimulation was conducted, and three repetitions were performed at 40, 60 and 80% of 1RM for bench press and back squat. Average power (AP), average velocity (AV), peak velocity (PV), mean propulsive velocity (MPV), average acceleration (RDV), displacement (D) and time-to-peak velocity (tPV) were recorded using MuscleLab linear encoders. Rating of perceived exertion and effort (RPE) was recorded after each set. The data were analysed using a general linear model with repeated measures. Results: MVC peak-force values with and without stimulation showed a significant increase in the CAFF condition compared to the NoPill condition and, with stimulation, the PLAC condition (stim: Δ9.0 and 8.7%; no stim: 8.3%; p < 0.05; η2p = 0.33 and 0.42). Greater muscle % activation was achieved for CAFF than the other conditions (~6%, p ≤ 0.042; η2p = 0.33). In the non-stimulated MVC, RPE was perceived as easier (4.8%, p = 0.04).
AV and MPV values were higher in both bench press (Δ3.3 and 4.6%) and back squat (Δ7.7 and 9.2%) in CAFF than the PLAC condition (p = 0.031; η2p = 0.24 and 0.23 and 0.24 and 0.32). CAFF improved auditory total recall compared to NoPill (9.5%, p = 0.040; η2p = 0.22). Conclusions: Early morning ingestion of caffeine improved MVC to levels observed by others in the evening, as well as some aspects of bench press, back squat and recall performance. Caffeine ingestion had no effect on core temperature, mood, tiredness, alertness or other measures of cognitive performance. Full article
(This article belongs to the Section Sports Nutrition)

32 pages, 1006 KB  
Review
Exploring Textile Fibre Characterisation: A Review of Vibrational Spectroscopy and Chemometrics
by Diva Santos, A. Margarida Teixeira, M. Leonor Sousa, Andréa Marinho and Clara Sousa
Textiles 2026, 6(1), 34; https://doi.org/10.3390/textiles6010034 - 18 Mar 2026
Viewed by 331
Abstract
The identification/classification of textile fibres is essential in manufacturing, forensic science, cultural heritage preservation, and recycling. Conventional methods, including solubility tests, optical microscopy, and chromatographic techniques, are often destructive, labour-intensive, and limited in scope. Vibrational spectroscopy, particularly near-infrared (NIR), Fourier-transform infrared (FTIR), and Raman spectroscopy, has emerged as a rapid, non-destructive, and accurate alternative for fibre analysis. However, multi-composition textiles, dyes, finishing agents, and ageing effects frequently cause overlapping spectral features, hampering direct interpretation. This review examines the combined use of vibrational spectroscopy and chemometrics for textile fibre discrimination. It critically evaluates the performance of different spectroscopic techniques in classifying natural, synthetic, and blended fibres. The role of multivariate analysis methods, such as PCA, PLS, LDA, SIMCA, and machine learning algorithms, in improving spectral interpretation and classification accuracy is highlighted. Key factors affecting model robustness, including spectral pre-processing, sample heterogeneity, moisture, and colour, are also discussed. The integration of spectroscopy with chemometrics provides a robust, scalable, and sustainable solution for fibre identification, supporting quality control, fraud detection, and circular economy initiatives. This approach demonstrates significant potential for both research and industrial applications. Full article

28 pages, 14615 KB  
Article
Anatomic Interactive Atlas of the Loggerhead Sea Turtle (Caretta caretta) Coelomic Cavity
by Alberto Arencibia, Aday Melián and Jorge Orós
Animals 2026, 16(5), 754; https://doi.org/10.3390/ani16050754 - 28 Feb 2026
Viewed by 406
Abstract
The coelomic cavity of sea turtles is affected by congenital, developmental, traumatic, infectious, and organ- or system-specific disorders, making accurate anatomical knowledge essential for veterinary practice. This study presents an open-access, interactive two-dimensional (2D) anatomical atlas of the coelomic cavity of the loggerhead sea turtle (Caretta caretta), developed using images obtained from osteology, gross anatomical dissections, computed tomography (CT), and magnetic resonance imaging (MRI). The atlas comprises six osteology images, sixteen anatomical dissection images, eight transverse CT images acquired using bone and soft-tissue windows, six three-dimensional (3D) volume-rendered CT images, and fourteen MRI images (four transverse, five dorsal, and five sagittal), all provided in PNG format. Relevant anatomical structures were segmented and colour-coded for each figure using manual layer-based segmentation software. The Unity 3D platform was employed for image visualisation and assessment, supporting the development of interactive two-dimensional content. This atlas serves as a useful interactive tool for anatomical learning and clinical reference for professionals and students engaged in the conservation of loggerhead sea turtles. Full article
(This article belongs to the Section Herpetology)

32 pages, 3836 KB  
Review
Application of Visual Information in Music Education Digital Technologies: A Scoping Review
by Bahareh Behzadaval, Laura Serra Marin and Luc Nijs
Educ. Sci. 2026, 16(2), 309; https://doi.org/10.3390/educsci16020309 - 13 Feb 2026
Viewed by 1214
Abstract
The relationship between sound and visual representation has long intrigued artists and educators, with historical explorations ranging from colour–music correspondence to alternative notations and graphic visualisations of music. Recent advances in digital technologies have significantly expanded the pedagogical potential of visual information in music education. However, there is still no comprehensive review mapping how visual information is applied in digital music education tools. This scoping review maps the application of visual modalities in original digital tools for music teaching and learning, drawing on 63 studies published between 2014 and 2024. Following Arksey and O’Malley’s five-stage framework and the Preferred Reporting Items for Systematic Reviews and Meta-Analyses Extension for Scoping Reviews (PRISMA-ScR) reporting guidelines, this review analyses the methodological characteristics, pedagogical foundations, and design features of these tools. Findings reveal a dominant focus on performance skills and individual learning, often supported by visual feedback and interactivity. However, other aspects of learning such as creativity, responsiveness, and collaboration remain underexplored. While references to concepts such as multimodality and embodied learning are common, a robust theoretical grounding is frequently lacking or implicit. This review calls for a shift from technology-driven innovation toward pedagogy-led design, advocating for a more holistic educational approach and more rigorous empirical research. Implications highlight the potential of visual information not only to support performance skill acquisition but also to foster creative, expressive, and collaborative dimensions of music learning. Full article

25 pages, 4064 KB  
Article
Application of CNN and Vision Transformer Models for Classifying Crowns in Pine Plantations Affected by Diplodia Shoot Blight
by Mingzhu Wang, Christine Stone and Angus J. Carnegie
Forests 2026, 17(1), 108; https://doi.org/10.3390/f17010108 - 13 Jan 2026
Viewed by 477
Abstract
Diplodia shoot blight, caused by an opportunistic fungal pathogen, affects many conifer species and has a global distribution. Depending on the duration and severity of the disease, affected needles appear yellow (chlorotic) for a brief period before becoming red or brown in colour. These symptoms can occur on individual branches or over the entire crown. Aerial sketch-mapping and the manual interpretation of aerial photography for tree health surveys are labour-intensive and subjective. Recently, however, the application of deep learning (DL) techniques to detect and classify tree crowns in high-spatial-resolution imagery has gained significant attention. This study evaluated two complementary DL approaches for the detection and classification of Pinus radiata trees infected with diplodia shoot blight across five geographically dispersed sites with varying topographies over two acquisition years: (1) object detection using YOLOv12 combined with the Segment Anything Model (SAM) and (2) pixel-level semantic segmentation using U-Net, SegFormer, and EVitNet. The three damage classes for the object detection approach were ‘yellow’, ‘red-brown’ (both whole-crown discolouration) and ‘dead tops’ (partially discoloured crowns), while for semantic segmentation the three classes were yellow, red-brown, and background. The YOLOv12m model achieved an overall mAP50 score of 0.766 and mAP50–95 of 0.447 across all three classes, with red-brown crowns demonstrating the highest detection accuracy (mAP50: 0.918, F1 score: 0.851). Among the semantic segmentation models, SegFormer showed the strongest performance (IoU of 0.662 for red-brown and 0.542 for yellow) but at the cost of the longest training time, while EVitNet offered the most cost-effective solution, achieving comparable accuracy to SegFormer with superior training efficiency thanks to its lighter architecture.
Accurate identification and classification of crown damage symptoms support the calibration and validation of satellite-based monitoring systems and assist in the prioritisation of ground-based diagnosis and management interventions.
(This article belongs to the Section Forest Health)

25 pages, 3364 KB  
Article
Automated Weed Detection in Red Beet (Beta vulgaris L., Conditiva Group, cv. Kestrel F1) Using Deep Learning Models
by Oscar Leonardo García-Navarrete, Anibal Bregon Bregon and Luis Manuel Navas-Gracia
Agronomy 2026, 16(2), 167; https://doi.org/10.3390/agronomy16020167 - 9 Jan 2026
Cited by 1 | Viewed by 446
Abstract
Weed competition in red beet (Beta vulgaris L. Conditiva Group) directly reduces crop yield and quality, making detection and eradication essential. This study proposed a three-phase experimental protocol for multi-class detection (cultivation and six types of weeds) based on RGB (red-green-blue) colour images acquired in a greenhouse, using state-of-the-art deep learning (DL) models (YOLO and RT-DETR family). The objective was to evaluate and optimise performance by identifying the combination of architecture, model scale and input resolution that minimises false negatives (FN) without compromising robust overall performance. The experimental design was conceived as an iterative improvement process, in which each phase refines models, configurations, and selection criteria based on performance from the previous phase. In phase 1, the base models YOLOv9s and RT-DETR-l were compared at 640 × 640 px; in phase 2, the YOLOv8s, YOLOv9s, YOLOv10s, YOLO11s, YOLO12s and RT-DETR-l models were compared at 640 × 640 px and the best ones were selected using the F1 score and the FN rate. In phase 3, the YOLOv9 (s = small, m = medium, c = compact, e = extended) and YOLOv10 (s = small, m = medium, l = large, x = extra-large) families were scaled according to the number of parameters (s/m/c-e/l-x sizes) and resolutions of 1024 × 1024 and 2048 × 2048 px. The best results were achieved with YOLOv9e-2048 (F1: 0.738; mAP@0.5 (mean Average Precision): 0.779; FN: 28.3%) and YOLOv10m-2048 (F1: 0.744; mAP@0.5: 0.775; FN: 27.5%). In conclusion, the three-phase protocol allows for the objective selection of the combination of architecture, scale, and resolution for weed detection in greenhouses. Increasing the resolution and scale of the model consistently reduced FNs, raising the sensitivity of the system without affecting overall performance; this is agronomically relevant because each FN represents an untreated weed. Full article
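As context for the F1 and false-negative figures above: F1 is the harmonic mean of precision and recall, and the FN rate is simply 1 − recall, so driving FNs down (each FN being an untreated weed) directly raises recall. A small sketch with made-up detection counts, not values from the study:

```python
def f1_score(tp, fp, fn):
    # F1 = harmonic mean of precision and recall
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Hypothetical counts: 725 weeds detected, 275 missed (FN rate 27.5%), 250 false alarms
tp, fp, fn = 725, 250, 275
fn_rate = fn / (tp + fn)      # 0.275, i.e. recall = 0.725
f1 = f1_score(tp, fp, fn)
```

This makes explicit why the authors optimise for low FN rather than F1 alone: a model can hold a decent F1 while still missing a quarter of the weeds.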

25 pages, 2448 KB  
Article
The Clinical Significance of the Manchester Colour Wheel in a Sample of People Treated for Insured Injuries
by John Edward McMahon, Ashley Craig and Ian Douglas Cameron
J. Clin. Med. 2026, 15(1), 75; https://doi.org/10.3390/jcm15010075 - 22 Dec 2025
Viewed by 479
Abstract
Background/Objectives: The Manchester Colour Wheel (MCW) was developed as an alternative way of assessing health status, mood and treatment outcomes. There has been a dearth of research on this alternative assessment approach. The present study examines the sensitivity of the MCW to pain, psychological factors and recovery status in 1098 people with insured injuries treated in an interdisciplinary clinic. Methods: A deidentified data set of clients treated in a multidisciplinary clinic was conveyed to the researchers, containing results of the MCW and injury-specific psychometric tests at intake, as well as recovery status at discharge. Systematic machine learning modelling was applied. Results: There were no significant differences on the MCW between the four injury types studied: motor crash-related Whiplash Associated Disorder (WAD) and workplace-related Shoulder Injury (SI), Back Injury (BI) and Neck Injury (NI). Augmenting the MCW with Machine Learning (ML) models showed overall classification rates for Classification and Regression Trees (CRT) of 75.6% for Anxiety, 70.3% for Depression and 68.5% for Stress, while Quick Unbiased Efficient Statistical Trees could identify 68.5% of Pain Catastrophisation and 62.7% of Kinesiophobia. Combining the MCW with psychometric measurements markedly increased the predictive power, with a CRT model predicting WAD recovery status with 80.7% accuracy, SI recovery status with 81.7% accuracy and BI recovery status with 78% accuracy. A Naïve Bayes Classifier predicted recovery status in NI with 96.4% accuracy; however, this likely represents overfitting. Conclusions: Overall, the MCW augmented with ML offers a promising alternative to questionnaires, and the MCW appears to measure some unique psychological features that contribute to recovery from injury.
(This article belongs to the Section Mental Health)

42 pages, 12738 KB  
Article
Spectral Indices and Principal Component Analysis for Lithological Mapping in the Erongo Region, Namibia
by Ryan Theodore Benade and Oluibukun Gbenga Ajayi
Appl. Sci. 2025, 15(24), 13251; https://doi.org/10.3390/app152413251 - 18 Dec 2025
Viewed by 882
Abstract
The mineral deposits in Namibia’s Erongo region are renowned and frequently associated with complex geological environments, including calcrete-hosted paleochannels and hydrothermal alteration zones. Mineral extraction is hindered by high operational costs, restricted accessibility and stringent environmental regulations. To address these challenges, this study proposes an integrated approach that combines satellite remote sensing and machine learning to map and identify mineralisation-indicative zones. Sentinel 2 Multispectral Instrument (MSI) and Landsat 8 Operational Land Imager (OLI) multispectral data were employed due to their global coverage, spectral fidelity and suitability for geological investigations. Normalized Difference Vegetation Index (NDVI) masking was applied to minimise vegetation interference. Spectral indices—the Clay Index, Carbonate Index, Iron Oxide Index and Ferrous Iron Index—were developed and enhanced using false-colour composites. Principal Component Analysis (PCA) was used to reduce redundancy and extract significant spectral patterns. Supervised classification was performed using Support Vector Machine (SVM), Random Forest (RF) and Maximum Likelihood Classification (MLC), with validation through confusion matrices and metrics such as Overall Accuracy, User’s Accuracy, Producer’s Accuracy and the Kappa coefficient. The results showed that RF achieved the highest accuracy on Landsat 8 and MLC outperformed others on Sentinel 2, while SVM showed balanced performance. Sentinel 2’s higher spatial resolution enabled improved delineation of alteration zones. This approach supports efficient and low-impact mineral prospecting in remote environments. Full article
(This article belongs to the Section Environmental Sciences)
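The NDVI masking step mentioned above can be sketched as follows; the reflectance values and the 0.2 threshold are illustrative assumptions, not parameters taken from the study:

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    # Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)
    nir = np.asarray(nir, float)
    red = np.asarray(red, float)
    return (nir - red) / (nir + red + eps)

# Hypothetical reflectance values for three pixels
nir = np.array([0.60, 0.30, 0.10])
red = np.array([0.10, 0.25, 0.20])
values = ndvi(nir, red)
bare_mask = values < 0.2   # keep only sparsely vegetated pixels for lithological mapping
```

Pixels above the threshold are vegetated and would contaminate the spectral indices, so they are excluded before classification.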

23 pages, 6739 KB  
Article
SPX-GNN: An Explainable Graph Neural Network for Harnessing Long-Range Dependencies in Tuberculosis Classifications in Chest X-Ray Images
by Muhammed Ali Pala and Muhammet Burhan Navdar
Diagnostics 2025, 15(24), 3236; https://doi.org/10.3390/diagnostics15243236 - 18 Dec 2025
Cited by 3 | Viewed by 889
Abstract
Background/Objectives: Traditional medical image analysis methods often suffer from locality bias, limiting their ability to model long-range contextual relationships between spatially distributed anatomical structures. To overcome this challenge, this study proposes SPX-GNN (Superpixel Explainable Graph Neural Network). This novel method reformulates image analysis as a structural graph learning problem, capturing both local anomalies and global topological patterns in a holistic manner. Methods: The proposed framework decomposes images into semantically coherent superpixel regions, converting them into graph nodes that preserve topological relationships. Each node is enriched with a comprehensive feature vector encoding complementary diagnostic clues, including colour (CIELAB), texture (LBP and Haralick), shape (Hu moments), and spatial location. A Graph Neural Network is then employed to learn the relational dependencies between these enriched nodes. The method was rigorously evaluated using 5-fold stratified cross-validation on a public dataset comprising 4200 chest X-ray images. Results: SPX-GNN demonstrated exceptional performance in tuberculosis classification, achieving a mean accuracy of 99.82%, an F1-score of 99.45%, and a ROC-AUC of 100.00%. Furthermore, an integrated Explainable Artificial Intelligence module addresses the black box problem by generating semantic importance maps, which illuminate the decision mechanism and enhance clinical reliability. Conclusions: SPX-GNN offers a novel approach that successfully combines high diagnostic accuracy with methodological transparency. By providing a robust and interpretable workflow, this study presents a promising solution for medical imaging tasks where structural information is critical, paving the way for more reliable clinical decision support systems. Full article
(This article belongs to the Section Machine Learning and Artificial Intelligence in Diagnostics)

21 pages, 1667 KB  
Article
Advanced Retinal Lesion Segmentation via U-Net with Hybrid Focal–Dice Loss and Automated Ground Truth Generation
by Ahmad Sami Al-Shamayleh, Mohammad Qatawneh and Hany A. Elsalamony
Algorithms 2025, 18(12), 790; https://doi.org/10.3390/a18120790 - 14 Dec 2025
Cited by 1 | Viewed by 935
Abstract
An early and accurate detection of retinal lesions is imperative to intercept the course of sight-threatening ailments such as Diabetic Retinopathy (DR) or Age-related Macular Degeneration (AMD). Manual expert annotation of all such lesions would take a long time and would be subject to interobserver variability, especially in large screening projects. This work introduces an end-to-end deep learning pipeline for automated retinal lesion segmentation, tailored to datasets without available expert pixel-level reference annotations. The approach centres on a novel multi-stage automated ground truth mask generation method, based on colour space analysis, entropy filtering and morphological operations, which creates reliable pseudo-labels from raw retinal images. These pseudo-labels then serve as the training input for a U-Net, a convolutional encoder–decoder architecture for biomedical image segmentation. To address the inherent class imbalance often encountered in medical imaging, we employ and thoroughly evaluate a novel hybrid loss function combining Focal Loss and Dice Loss. The proposed pipeline was rigorously evaluated on the ‘Eye Image Dataset’ from Kaggle, achieving a state-of-the-art segmentation performance with a Dice Similarity Coefficient of 0.932, Intersection over Union (IoU) of 0.865, Precision of 0.913, and Recall of 0.897. This work demonstrates the feasibility of achieving high-quality retinal lesion segmentation even in resource-constrained environments where extensive expert annotations are unavailable, thus paving the way for more accessible and scalable ophthalmological diagnostic tools.
(This article belongs to the Section Algorithms for Multidisciplinary Applications)
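The hybrid Focal–Dice loss idea can be illustrated with a minimal NumPy sketch; the weighting alpha = 0.5 and gamma = 2 are common defaults assumed here, not values reported by the authors:

```python
import numpy as np

def dice_loss(p, y, eps=1e-6):
    # Dice loss: 1 - Dice coefficient; rewards region overlap
    p, y = np.asarray(p, float), np.asarray(y, float)
    inter = np.sum(p * y)
    return 1.0 - (2.0 * inter + eps) / (np.sum(p) + np.sum(y) + eps)

def focal_loss(p, y, gamma=2.0, eps=1e-7):
    # Focal loss: down-weights easy pixels, focusing training on hard ones
    p, y = np.asarray(p, float), np.asarray(y, float)
    p = np.clip(p, eps, 1.0 - eps)
    pt = np.where(y == 1, p, 1.0 - p)
    return float(np.mean(-((1.0 - pt) ** gamma) * np.log(pt)))

def hybrid_loss(p, y, alpha=0.5):
    # Weighted sum of the pixel-level focal term and the region-level dice term
    return alpha * focal_loss(p, y) + (1.0 - alpha) * dice_loss(p, y)
```

The focal term counteracts the heavy background/lesion pixel imbalance, while the dice term directly optimises the overlap metric being reported.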

27 pages, 17286 KB  
Article
Vision-Based Trajectory Reconstruction in Human Activities: Methodology and Application
by Jasper Lottefier, Peter Van den Broeck and Katrien Van Nimmen
Sensors 2025, 25(24), 7577; https://doi.org/10.3390/s25247577 - 13 Dec 2025
Viewed by 643
Abstract
Modern civil engineering structures, such as footbridges, are increasingly susceptible to vibrations induced by human activities, emphasizing the importance of accurately assessing crowd-induced loading. Developing realistic load models requires detailed insight into the underlying crowd dynamics, which in turn depend on the coordination between individuals and the spatial organization of the group. A deeper understanding of these human–human interactions is therefore essential for capturing the collective behaviour that governs crowd-induced vibrations. This paper presents a vision-based trajectory reconstruction methodology that captures individual movement trajectories in both small groups and large-scale running events. The approach integrates colour-based image segmentation for instrumented participants, deep learning–based object detection for uninstrumented crowds, and a homography-based projection method to map image coordinates to world space. The methodology is applied to empirical data from two urban running events and controlled experiments, including both stationary and dynamic camera perspectives. Results show that the framework reliably reconstructs individual trajectories under varied field conditions, applicable to both walking and running activities. The approach enables scalable monitoring of human activities and provides high-resolution spatio-temporal data for studying human–human interactions and modelling crowd dynamics. In this way, the findings highlight the potential of vision-based methods as practical, non-intrusive tools for analysing human-induced loading in both research and applied engineering contexts. Full article
(This article belongs to the Section Optical Sensors)
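The homography-based projection step described above, which maps image coordinates to world space, can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the authors' implementation: a direct linear transform (DLT) is fitted to four hypothetical ground control points whose world positions (in metres) are assumed known from a site survey.

```python
import numpy as np

def fit_homography(image_pts, world_pts):
    """Estimate the 3x3 homography H mapping image to world coordinates
    via the direct linear transform (null space of A, found with SVD)."""
    A = []
    for (x, y), (X, Y) in zip(image_pts, world_pts):
        A.append([-x, -y, -1, 0, 0, 0, X * x, X * y, X])
        A.append([0, 0, 0, -x, -y, -1, Y * x, Y * y, Y])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    return Vt[-1].reshape(3, 3)  # right singular vector of smallest singular value

def to_world(H, pixel_xy):
    """Project an (N, 2) array of pixel coordinates to world coordinates."""
    pts = np.hstack([pixel_xy, np.ones((len(pixel_xy), 1))])  # homogeneous coords
    proj = pts @ H.T
    return proj[:, :2] / proj[:, 2:3]  # perspective divide
```

With the homography fitted once per (stationary) camera pose, every detected runner's image position can be projected to metres on the ground plane; a moving camera would require re-estimating H per frame.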
25 pages, 7384 KB  
Article
Remote Sensing-Assisted Physical Modelling of Complex Spatio-Temporal Nitrate Leaching Patterns from Silvopastoral Systems
by Kiril Manevski, Magdalena Ullfors, Maarit Mäenpää, Uffe Jørgensen, Ji Chen and Anne Grete Kongsted
Remote Sens. 2025, 17(24), 3965; https://doi.org/10.3390/rs17243965 - 8 Dec 2025
Viewed by 617
Abstract
Affordable optical data from Unmanned Aerial Vehicles (UAVs) coupled with process-based models could constitute an integrative platform to map complex spatio-temporal patterns of nitrate leaching and reduce uncertainties in tightening the nitrogen (N) cycle of silvopastoral systems. This study uses field data from a commercial farm in Denmark with lactating sows housed in paddocks with pastures flanking a central zone of poplars, either pruned (P) or unpruned (tall, T), each with resources (feed and hut) on the same (S) or opposite side (O) of the tree zone. The poplar leaf area index, derived from canopy cover using a computer vision approach on true-colour UAV imagery, was fed to a process-based model alongside soil data and geostatistical analyses to derive the soil water balance across the paddocks and explicitly map the variation in soil nitrate leaching. The results revealed previously unobserved patterns of nitrate leaching hotspots, which shifted from high values in the pre-study year without animals to diluted, lower values in the main study year involving the pigs. The results also captured seasonal and spatial variation in leaching of 7 to 860 kg N ha−1 year−1, a wide range that would otherwise be difficult to resolve with a process-based model using only mean effective parameters. Nitrate leaching was in the order PO > PS > TO > TS. The N cycle was tightened with T regardless of S/O. The approach could be improved with machine learning-aided process-based modelling to operationally monitor complex silvopastoral systems and alleviate nitrate leaching in outdoor pig systems.
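The step of deriving leaf area index from canopy cover can be illustrated with a minimal sketch. This is not the authors' computer vision approach: it assumes a simple Excess Green threshold for canopy classification and a Beer–Lambert gap-fraction inversion with a hypothetical extinction coefficient k.

```python
import math

def canopy_cover(rgb_pixels, exg_threshold=0.1):
    """Fraction of pixels classified as canopy using the Excess Green index
    (ExG = 2G - R - B). rgb_pixels: iterable of (r, g, b) in [0, 1]."""
    pixels = list(rgb_pixels)
    green = sum(1 for r, g, b in pixels if 2 * g - r - b > exg_threshold)
    return green / len(pixels)

def lai_from_cover(cc, k=0.5):
    """Invert the Beer-Lambert gap-fraction model CC = 1 - exp(-k * LAI).
    k is an assumed canopy extinction coefficient."""
    return -math.log(1.0 - cc) / k
```

Both the ExG threshold and k are illustrative placeholders that would need site-specific calibration before feeding LAI to a process-based model.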

11 pages, 3022 KB  
Article
A Real-World Comparison of Three Deep Learning Systems for Diabetic Retinopathy in Remote Australia
by Jocelyn J. Drinkwater, Qiang Li, Kerry Woods, Emma Douglas, Mark Chia, Yukun Zhou, Steve Bartnik, Yachana Shah, Vaibhav Shah, Pearse A. Keane and Angus W. Turner
Diabetology 2025, 6(12), 146; https://doi.org/10.3390/diabetology6120146 - 1 Dec 2025
Viewed by 761
Abstract
Background/objective: Deep learning systems (DLSs) may improve access to screening for diabetic retinopathy (DR), a leading cause of vision loss. Therefore, the aim was to prospectively compare the performance of three DLSs, Google ARDA, Thirona RetCAD™, and EyRIS SELENA+, in the detection of referable DR in a real-world setting. Methods: Participants with self-reported diabetes presented to a mobile facility for DR screening in the remote Pilbara region of Western Australia, which has a high proportion of First Nations people. Sensitivity, specificity, and other performance indicators were calculated for each DLS, compared to grading by an ophthalmologist adjudication panel. Results: Single-field colour fundus photographs from 188 eyes of 94 participants (51% male, 70% First Nations Australians, and mean ± SD age of 60.3 ± 12.0 years) were assessed; 39 images had referable DR, 135 had no referable DR, and 14 images were ungradable. The sensitivity/specificity of ARDA was 100% (95% CI: 91.03–100%)/94.81% (89.68–97.47%), RetCAD™ was 97.37% (86.50–99.53%)/97.01% (92.58–98.83%), and SELENA+ was 91.67% (78.17–97.13%)/80.80% (73.02–86.74%). Conclusions: In a small, real-world service evaluation comprising predominantly First Nations people from remote Western Australia, DLSs had high sensitivity and specificity for detecting referable DR. A comparative service evaluation can be useful to highlight differences between DLSs, especially in unique settings or with minority populations.
(This article belongs to the Special Issue New Perspectives and Future Challenges in Diabetic Retinopathy)
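The sensitivity and specificity figures reported above are simple functions of the confusion-matrix counts. A minimal sketch follows, using hypothetical counts and a Wilson score interval for the confidence bounds (the CI method used in the study is not stated here, so this choice is an assumption):

```python
import math

def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    return tp / (tp + fn), tn / (tn + fp)

def wilson_ci(successes, n, z=1.96):
    """95% Wilson score interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half, centre + half
```

For example, 90 true positives and 10 false negatives give a sensitivity of 0.90, with a Wilson interval that, unlike the naive normal approximation, stays within [0, 1] even for proportions near 100%.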
