Search Results (5,433)

Search Parameters:
Keywords = real-world dataset

23 pages, 7291 KB  
Article
Evaluating LiDAR Perception Algorithms for All-Weather Autonomy
by Himanshu Gupta, Achim J. Lilienthal and Henrik Andreasson
Sensors 2025, 25(24), 7436; https://doi.org/10.3390/s25247436 - 6 Dec 2025
Abstract
LiDAR is used in autonomous driving for navigation, obstacle avoidance, and environment mapping. However, adverse weather conditions introduce noise into sensor data, potentially degrading the performance of perception algorithms and compromising the safety and reliability of autonomous driving systems. Hence, in this paper, we investigate the limitations of LiDAR perception algorithms in adverse weather conditions, explore ways to mitigate the effects of noise, and propose future research directions to achieve all-weather autonomy with LiDAR sensors. Using real-world datasets and synthetically generated dense fog, we characterize the noise in adverse weather such as snow, rain, and fog; its effect on sensor data; and how to mitigate it effectively for tasks like object detection, localization, and SLAM. Specifically, we investigate point cloud filtering methods and compare them based on their ability to denoise point clouds, focusing on processing time, accuracy, and limitations. Additionally, we evaluate the impact of adverse weather on state-of-the-art 3D object detection, localization, and SLAM methods, as well as the effect of point cloud filtering on the algorithms’ performance. We find that point cloud filtering methods are partially successful at removing noise due to adverse weather, but must be fine-tuned for the specific LiDAR, application scenario, and type of adverse weather. 3D object detection was negatively affected by adverse weather, but performance improved with dynamic filtering algorithms. We found that heavy snowfall does not affect localization when using a map constructed in clear weather, but localization fails in dense fog due to a low number of feature points. SLAM also failed in thick fog outdoors, but it performed well in heavy snowfall. Filtering algorithms have varied effects on SLAM performance depending on the type of scan-matching algorithm.
(This article belongs to the Special Issue Recent Advances in LiDAR Sensing Technology for Autonomous Vehicles)

21 pages, 2343 KB  
Article
Emissions-Based Predictive Maintenance Framework for Hybrid Electric Vehicles Using Laboratory-Simulated Driving Conditions
by Abdulrahman Obaid, Jafar Masri and Mohammad Ismail
Vehicles 2025, 7(4), 155; https://doi.org/10.3390/vehicles7040155 - 6 Dec 2025
Abstract
This study presents a predictive maintenance framework for hybrid electric vehicles (HEVs) based on emissions behaviour under laboratory-simulated driving conditions. Vehicle speed, road gradient, and ambient temperature were selected as the principal input variables affecting emission levels. Using simulated datasets, three machine learning models, Linear Regression, Multilayer Perceptron (MLP), and Random Forest, were trained and evaluated. Among these, the Random Forest model demonstrated the best performance, achieving an R2 score of 0.79, Mean Absolute Error (MAE) of 12.57 g/km, and root mean square error (RMSE) of 15.4 g/km, significantly outperforming both Linear Regression and MLP. A MATLAB-based graphical interface was developed to allow real-time classification of emission severity using defined thresholds (Normal ≤ 150 g/km, Warning ≤ 220 g/km, Critical > 220 g/km) and to provide automatic maintenance recommendations derived from the predicted emissions. Scenario-based validation confirmed the system’s ability to detect emission anomalies, which might function as early indicators of mechanical degradation when interpreted relative to operating conditions. The proposed framework, developed using laboratory-simulated datasets, provides a practical, interpretable, and accurate solution for emissions-based predictive maintenance. Although the results demonstrate feasibility, the framework should be further confirmed with real-world on-road data prior to large-scale use.
(This article belongs to the Special Issue Data-Driven Intelligent Transportation Systems)
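The severity bands quoted in the abstract above (Normal ≤ 150 g/km, Warning ≤ 220 g/km, Critical > 220 g/km) can be sketched as a minimal classifier; the function name is illustrative, not taken from the paper's MATLAB interface:

```python
def classify_emission_severity(emission_g_per_km: float) -> str:
    """Map a predicted emission value (g/km) to the severity bands
    stated in the abstract: Normal <= 150, Warning <= 220, Critical > 220."""
    if emission_g_per_km <= 150:
        return "Normal"
    if emission_g_per_km <= 220:
        return "Warning"
    return "Critical"
```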

20 pages, 4204 KB  
Systematic Review
A Multidimensional Benchmark of Public EEG Datasets for Driver State Monitoring in Brain–Computer Interfaces
by Sirine Ammar, Nesrine Triki, Mohamed Karray and Mohamed Ksantini
Sensors 2025, 25(24), 7426; https://doi.org/10.3390/s25247426 - 6 Dec 2025
Abstract
Electroencephalography (EEG)-based brain–computer interfaces (BCIs) hold significant potential for enhancing driver safety through real-time monitoring of cognitive and affective states. However, the development of reliable BCI systems for Advanced Driver Assistance Systems (ADAS) depends on the availability of high-quality, publicly accessible EEG datasets collected during driving tasks. Existing datasets lack standardized parameters and contain demographic biases, which undermine their reliability and prevent the development of robust systems. This study presents a multidimensional benchmark analysis of seven publicly available EEG driving datasets. We compare these datasets across multiple dimensions, including task design, modality integration, demographic representation, accessibility, and reported model performance. This benchmark synthesizes existing literature without conducting new experiments. Our analysis reveals critical gaps, including significant age and gender biases, overreliance on simulated environments, insufficient affective monitoring, and restricted data accessibility. These limitations hinder real-world applicability and reduce ADAS performance. To address these gaps and facilitate the development of generalizable BCI systems, this study provides a structured, quantitative benchmark analysis of publicly available driving EEG datasets, suggesting criteria and recommendations for future dataset design and use. Additionally, we emphasize the need for balanced participant distributions, standardized emotional annotation, and open data practices.
(This article belongs to the Section Cross Data)

28 pages, 683 KB  
Article
A New Topp–Leone Heavy-Tailed Odd Burr X-G Family of Distributions with Applications
by Fastel Chipepa, Bassant Elkalzah, Broderick Oluyede, Neo Dingalo and Abdurahman Aldukeel
Symmetry 2025, 17(12), 2093; https://doi.org/10.3390/sym17122093 - 5 Dec 2025
Abstract
This paper introduces the Topp–Leone Heavy-Tailed Odd Burr X-G (TL-HT-OBX-G) family of distributions (FOD), designed to model diverse data patterns. The new distribution is an infinite linear combination of the established exponentiated-G distributions. We used the established properties of the exponentiated-G distribution to infer the properties of the new FOD. The properties considered include the quantile function, moments and moment generating functions, probability-weighted moments, order statistics, stochastic orderings, and Rényi entropy. Parameter estimation is performed using multiple techniques, such as maximum likelihood, least squares, weighted least squares, Anderson–Darling, Cramér–von Mises, and Right-Tail Anderson–Darling. The maximum likelihood estimation method produced superior results in the Monte Carlo simulation studies. A special case of the developed model was applied to three real-world datasets, with parameters estimated by maximum likelihood. The selected special model was compared to other competing models, and goodness-of-fit was evaluated using several statistics. The developed model fit the selected real-world datasets better than all the selected competing models. The new FOD provides a new framework for modeling data in the health sciences and reliability settings.
(This article belongs to the Section Mathematics)

22 pages, 1610 KB  
Article
A Novel Automatic Detection and Positioning Strategy for Buried Cylindrical Objects Based on B-Scan GPR Images
by Yubao Liu, Zhenda Zeng, Hang Ye, Xinyu Sun, Zhiqiang Zou and Dongguo Zhou
Electronics 2025, 14(24), 4799; https://doi.org/10.3390/electronics14244799 - 5 Dec 2025
Abstract
This paper presents DeepMask-GPR, a novel deep learning framework for automatic detection and geometric estimation of buried cylindrical objects in ground-penetrating radar (GPR) B-scan images. Built upon Mask R-CNN, the proposed method integrates hyperbola detection, apex localization, and real-world coordinate mapping in an end-to-end architecture. A curvature-enhanced dual-channel input improves the visibility of weak hyperbolic patterns, while a quadratic regression loss guides the network to recover precise geometric parameters. DeepMask-GPR eliminates the need for raw signal data or manual post-processing, enabling robust and scalable deployment in field scenarios. On two public datasets, DeepMask-GPR achieves consistently higher TPR/IoU for spatial localization than baselines. On an in-house B-scan set, it attains low MAE/RMSE for radius estimation.
(This article belongs to the Special Issue Applications of Image Processing and Sensor Systems)
28 pages, 5110 KB  
Article
WISEST: Weighted Interpolation for Synthetic Enhancement Using SMOTE with Thresholds
by Ryotaro Matsui, Luis Guillen, Satoru Izumi and Takuo Suganuma
Sensors 2025, 25(24), 7417; https://doi.org/10.3390/s25247417 - 5 Dec 2025
Abstract
Imbalanced learning occurs when rare but critical events are missed because classifiers are trained primarily on majority-class samples. This paper introduces WISEST, a locality-aware weighted-interpolation algorithm that generates synthetic minority samples within a controlled threshold near class boundaries. Benchmarked on more than a hundred real-world imbalanced datasets, such as KEEL, with different imbalance ratios, noise levels, and geometries, as well as security and IoT sets (IoT-23 and BoT-IoT), WISEST consistently improved minority detection in at least one metric on about half of those datasets, achieving up to a 25% relative recall increase and up to an 18% increase in F1 compared to the original training and other approaches. However, in most cases these gains involve trade-offs in accuracy and precision, depending on the dataset and classifier. These results indicate that WISEST is a practical and robust option when minority support and borderline structure permit safe synthesis, although no single sampler uniformly outperforms others across all datasets.
(This article belongs to the Special Issue Advances in Security of Mobile and Wireless Communications)
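The abstract above describes WISEST as a SMOTE-style weighted interpolation constrained by a threshold. A minimal sketch of the underlying SMOTE step, with a `max_step` cap standing in for that threshold, is shown below; the exact WISEST weighting and neighbour selection are not given in the abstract and are not reproduced here:

```python
import random

def smote_interpolate(x, neighbor, max_step=1.0, rng=None):
    """Generate one synthetic minority sample on the segment between a
    minority point `x` and a same-class neighbour, as in classic SMOTE.
    `max_step` caps the interpolation fraction, loosely mirroring the
    boundary-threshold control described in the abstract (assumption)."""
    rng = rng or random.Random()
    step = rng.uniform(0.0, max_step)
    return [xi + step * (ni - xi) for xi, ni in zip(x, neighbor)]
```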

20 pages, 14885 KB  
Article
MultiPhysio-HRC: A Multimodal Physiological Signals Dataset for Industrial Human–Robot Collaboration
by Andrea Bussolan, Stefano Baraldo, Oliver Avram, Pablo Urcola, Luis Montesano, Luca Maria Gambardella and Anna Valente
Robotics 2025, 14(12), 184; https://doi.org/10.3390/robotics14120184 - 5 Dec 2025
Abstract
Human–robot collaboration (HRC) is a key focus of Industry 5.0, aiming to enhance worker productivity while ensuring well-being. The ability to perceive human psycho-physical states, such as stress and cognitive load, is crucial for adaptive and human-aware robotics. This paper introduces MultiPhysio-HRC, a multimodal dataset containing physiological, audio, and facial data collected during real-world HRC scenarios. The dataset includes electroencephalography (EEG), electrocardiography (ECG), electrodermal activity (EDA), respiration (RESP), electromyography (EMG), voice recordings, and facial action units. The dataset integrates controlled cognitive tasks, immersive virtual reality experiences, and industrial disassembly activities performed manually and with robotic assistance, to capture a holistic view of the participants’ mental states. Rich ground truth annotations were obtained using validated psychological self-assessment questionnaires. Baseline models were evaluated for stress and cognitive load classification, demonstrating the dataset’s potential for affective computing and human-aware robotics research. MultiPhysio-HRC is publicly available to support research in human-centered automation, workplace well-being, and intelligent robotic systems.
(This article belongs to the Special Issue Human–Robot Collaboration in Industry 5.0)

27 pages, 9435 KB  
Article
Research on an Intelligent Grading Method for Beef Freshness in Complex Backgrounds Based on the DEVA-ConvNeXt Model
by Xiuling Yu, Yifu Xu, Chenxiao Qu, Senyue Guo, Shuo Jiang, Linqiang Chen and Yang Zhou
Foods 2025, 14(24), 4178; https://doi.org/10.3390/foods14244178 - 5 Dec 2025
Abstract
This paper presents a novel DEVA-ConvNeXt model to address challenges in beef freshness grading, including data collection difficulties, complex backgrounds, and model accuracy issues. The Alpha-Background Generation Shift (ABG-Shift) technology enables rapid generation of beef image datasets with complex backgrounds. By incorporating the Dynamic Non-Local Coordinate Attention (DNLC) and Enhanced Depthwise Convolution (EDW) modules, the model enhances feature extraction in complex environments. Additionally, Varifocal Loss (VFL) accelerates key feature learning, reducing training time and improving convergence speed. Experimental results show that DEVA-ConvNeXt outperforms models like ResNet101 and ShuffleNet V2 in terms of overall performance. Compared to the baseline model ConvNeXt, it achieves significant improvements in recognition Accuracy (94.8%, a 6.2% increase), Precision (94.8%, a 5.4% increase), Recall (94.6%, a 5.9% increase), and F1 score (94.7%, a 6.0% increase). Furthermore, real-world deployment and testing on embedded devices confirm the feasibility of this method in terms of accuracy and speed, providing valuable technical support for beef freshness grading and equipment design.
(This article belongs to the Section Food Engineering and Technology)

24 pages, 3486 KB  
Article
Zero-Shot Industrial Anomaly Detection via CLIP-DINOv2 Multimodal Fusion and Stabilized Attention Pooling
by Junjie Jiang, Zongxiang He, Anping Wan, Khalil AL-Bukhaiti, Kaiyang Wang, Peiyi Zhu and Xiaomin Cheng
Electronics 2025, 14(24), 4785; https://doi.org/10.3390/electronics14244785 - 5 Dec 2025
Abstract
Industrial visual inspection demands high-precision anomaly detection amid scarce annotations and unseen defects. This paper introduces a zero-shot framework leveraging multimodal feature fusion and stabilized attention pooling. CLIP’s global semantic embeddings are hierarchically aligned with DINOv2’s multi-scale structural features via a Dual-Modality Attention (DMA) mechanism, enabling effective cross-modal knowledge transfer for capturing macro- and micro-anomalies. A Stabilized Attention-based Pooling (SAP) module adaptively aggregates discriminative representations using self-generated anomaly heatmaps, enhancing localization accuracy and mitigating feature dilution. Trained solely on auxiliary datasets with multi-task segmentation and contrastive losses, the approach requires no target-domain samples. Extensive evaluation across seven benchmarks (MVTec AD, VisA, BTAD, MPDD, KSDD, DAGM, DTD-Synthetic) demonstrates state-of-the-art performance, achieving 93.4% image-level AUROC, 94.3% AP, 96.9% pixel-level AUROC, and 92.4% AUPRO on average. Ablation studies confirm the efficacy of DMA and SAP, while qualitative results highlight superior boundary precision and noise suppression. The framework offers a scalable, annotation-efficient solution for real-world industrial anomaly detection.

15 pages, 3348 KB  
Article
Efficient Dataset Creation for MEMS-Based Magnetic Sensor Systems in Intelligent Transportation Applications
by Michal Hodoň, Peter Šarafín, Lukáš Formanek and Andrea Kociánová
Sensors 2025, 25(24), 7407; https://doi.org/10.3390/s25247407 - 5 Dec 2025
Abstract
This article describes the innovative use of an advanced annotation tool designed specifically for creating datasets tailored to MEMS (Micro-Electro-Mechanical Systems) sensor systems in the intelligent transportation domain. By optimizing the data annotation process, this tool significantly enhances the efficiency and accuracy of dataset development, which is critical for the optimal performance and reliability of MEMS-based applications. The tool was tested with a specialized sensor system based on magnetometers for traffic flow monitoring, demonstrating its practical applications and effectiveness in real-world scenarios. The proposed approach offered a clear improvement over manual labelling by reducing the time needed per event and increasing the number of events that could be processed, without compromising the consistency of the assigned labels. The discussion includes a detailed overview of the tool’s features, its integration into existing workflows, and the benefits it offers engineers and researchers in the field of sensor technology.
(This article belongs to the Special Issue Sensors in Intelligent Transport Systems)

19 pages, 4054 KB  
Article
DSGF-YOLO: A Lightweight Deep Neural Network for Traffic Accident Detection and Severity Classifications
by Weijun Li, Huawei Xie and Peiteng Lin
Vehicles 2025, 7(4), 153; https://doi.org/10.3390/vehicles7040153 - 5 Dec 2025
Abstract
Traffic accidents pose unpredictable and severe social and economic challenges. Rapid and accurate accident detection, along with reliable severity classification, is essential for timely emergency response and improved road safety. This study proposes DSGF-YOLO, an enhanced deep learning framework based on the YOLOv13 architecture, developed for automated road accident detection and severity classification. The proposed methodology integrates two novel components: the DS-C3K2-FasterNet-Block module, which enhances local feature extraction and computational efficiency, and the Grouped Channel-Wise Self-Attention (G-CSA) module, which strengthens global context modeling and small-object perception. Comprehensive experiments on a diverse traffic accident dataset validate the effectiveness of the proposed framework. The results show that DSGF-YOLO achieves higher precision, recall, and mean average precision than state-of-the-art models such as Faster R-CNN, DETR, and other YOLO variants, while maintaining real-time performance. These findings highlight its potential for intelligent transportation systems and real-world accident monitoring applications.

20 pages, 3620 KB  
Article
EMS-UKAN: An Efficient KAN-Based Segmentation Network for Water Leakage Detection of Subway Tunnel Linings
by Meide He, Lei Tan, Xiaohui Yang, Fei Liu, Zhimin Zhao and Xiaochun Wu
Appl. Sci. 2025, 15(24), 12859; https://doi.org/10.3390/app152412859 - 5 Dec 2025
Abstract
Water leakage in subway tunnel linings poses significant risks to structural safety and long-term durability, making accurate and efficient leakage detection a critical task. Existing deep learning methods, such as UNet and its variants, often suffer from large parameter sizes and limited ability to capture multi-scale features, which restrict their applicability in real-world tunnel inspection. To address these issues, we propose an Efficient Multi-Scale U-shaped KAN-based Segmentation Network (EMS-UKAN) for detecting water leakage in subway tunnel linings. To reduce computational cost and enable edge-device deployment, the backbone replaces conventional convolutional layers with depthwise separable convolutions, and an Edge-Enhanced Depthwise Separable Convolution Module (EEDM) is incorporated in the decoder to strengthen boundary representation. The PKAN Block is introduced in the bottleneck to enhance nonlinear feature representation and improve the modeling of complex relationships among latent features. In addition, an Adaptive Multi-Scale Feature Extraction Block (AMS Block) is embedded within early skip connections to capture both fine-grained and large-scale leakage features. Extensive experiments on the newly collected Tunnel Water Leakage (TWL) dataset demonstrate that EMS-UKAN outperforms classical models, achieving competitive segmentation performance. In addition, it effectively reduces computational complexity, providing a practical solution for real-world tunnel inspection.

26 pages, 3269 KB  
Article
DiagNeXt: A Two-Stage Attention-Guided ConvNeXt Framework for Kidney Pathology Segmentation and Classification
by Hilal Tekin, Şafak Kılıç and Yahya Doğan
J. Imaging 2025, 11(12), 433; https://doi.org/10.3390/jimaging11120433 - 4 Dec 2025
Abstract
Accurate segmentation and classification of kidney pathologies from medical images remain a major challenge in computer-aided diagnosis due to complex morphological variations, small lesion sizes, and severe class imbalance. This study introduces DiagNeXt, a novel two-stage deep learning framework designed to overcome these challenges through an integrated use of attention-enhanced ConvNeXt architectures for both segmentation and classification. In the first stage, DiagNeXt-Seg employs a U-Net-based design incorporating Enhanced Convolutional Blocks (ECBs) with spatial attention gates and Atrous Spatial Pyramid Pooling (ASPP) to achieve precise multi-class kidney segmentation. In the second stage, DiagNeXt-Cls utilizes the segmented regions of interest (ROIs) for pathology classification through a hierarchical multi-resolution strategy enhanced by Context-Aware Feature Fusion (CAFF) and Evidential Deep Learning (EDL) for uncertainty estimation. The main contributions of this work include: (1) enhanced ConvNeXt blocks with large-kernel depthwise convolutions optimized for 3D medical imaging, (2) a boundary-aware compound loss combining Dice, cross-entropy, focal, and distance transform terms to improve segmentation precision, (3) attention-guided skip connections preserving fine-grained spatial details, (4) hierarchical multi-scale feature modeling for robust pathology recognition, and (5) a confidence-modulated classification approach integrating segmentation quality metrics for reliable decision-making. Extensive experiments on a large kidney CT dataset comprising 3847 patients demonstrate that DiagNeXt achieves 98.9% classification accuracy, outperforming state-of-the-art approaches by 6.8%. The framework attains near-perfect AUC scores across all pathology classes (Normal: 1.000, Tumor: 1.000, Cyst: 0.999, Stone: 0.994) while offering clinically interpretable uncertainty maps and attention visualizations. The superior diagnostic accuracy, computational efficiency (6.2× faster inference), and interpretability of DiagNeXt make it a strong candidate for real-world integration into clinical kidney disease diagnosis and treatment planning systems.
(This article belongs to the Topic Machine Learning and Deep Learning in Medical Imaging)

26 pages, 6470 KB  
Article
Impact of Synthetic Data on Deep Learning Models for Earth Observation: Photovoltaic Panel Detection Case Study
by Enes Hisam, Jesus Gimeno, David Miraut, Manolo Pérez-Aixendri, Marcos Fernández, Rossana Gini, Raúl Rodríguez, Gabriele Meoni and Dursun Zafer Seker
ISPRS Int. J. Geo-Inf. 2025, 14(12), 481; https://doi.org/10.3390/ijgi14120481 - 4 Dec 2025
Abstract
This study explores the impact of synthetic data, both physically based and generatively created, on deep learning analytics for earth observation (EO), focusing on the detection of photovoltaic panels. A YOLOv8 object detection model was trained using a publicly available, multi-resolution very high resolution (VHR) EO dataset (0.8 m, 0.3 m, and 0.1 m), comprising 3716 images from various locations in Jiangsu Province, China. Three benchmarks were established using only real EO data. Subsequent experiments evaluated how the inclusion of synthetic data, in varying types and quantities, influenced the model’s ability to detect photovoltaic panels in VHR imagery. Physically based synthetic images were generated using the Unity engine, which allowed the generation of a wide range of realistic scenes by varying scene parameters automatically. This approach produced not only realistic RGB images but also semantic segmentation maps and pixel-accurate masks identifying photovoltaic panel locations. Generative synthetic data were created using diffusion-based models (DALL·E 3 and Stable Diffusion XL), guided by prompts to simulate satellite-like imagery containing solar panels. All synthetic images were manually reviewed, and corresponding annotations were ensured to be consistent with the real dataset. Integrating synthetic with real data generally improved model performance, with the best results achieved when both data types were combined. Performance gains were dependent on data distribution and volume, with the most significant improvements observed when synthetic data were used to meet the YOLOv8-recommended minimum of 1500 images per class. In this setting, combining real data with both physically based and generative synthetic data yielded improvements of 1.7% in precision, 3.9% in recall, 2.3% in mAP@50, and 3.3% in mAP@95 compared to training with real data alone. The study also emphasizes the importance of carefully managing the inclusion of synthetic data in training and validation phases to avoid overfitting to synthetic features, with the goal of enhancing generalization to real-world data. Additionally, a pre-training experiment using only synthetic data, followed by fine-tuning with real images, demonstrated improved early-stage training performance, particularly during the first five epochs, highlighting potential benefits in computationally constrained environments.
(This article belongs to the Topic Artificial Intelligence Models, Tools and Applications)

19 pages, 2090 KB  
Article
Towards In-Vehicle Non-Contact Estimation of EDA-Based Arousal with LiDAR
by Jonas Brandstetter, Eva-Maria Knoch and Frank Gauterin
Sensors 2025, 25(23), 7395; https://doi.org/10.3390/s25237395 - 4 Dec 2025
Abstract
Driver monitoring systems are increasingly relying on physiological signals to assess cognitive and emotional states for improved safety and user experience. Electrodermal activity (EDA) is a particularly informative biomarker of arousal but is conventionally measured with skin-contact electrodes, limiting its applicability in vehicles. This work explores the feasibility of non-contact EDA estimation using Light Detection and Ranging (LiDAR) as a novel sensing modality. In a controlled laboratory setup, LiDAR reflection intensity from the forehead was recorded simultaneously with conventional finger-based EDA. Both classification and regression tasks were performed as follows: feature-based machine learning models (e.g., Random Forest and Extra Trees) and sequence-based deep learning models (e.g., CNN, LSTM, and TCN) were evaluated. Results demonstrate that LiDAR signals capture arousal-related changes, with the best regression model (Temporal Convolutional Network) achieving a mean absolute error of 14.6 on the normalized arousal factor scale (–50 to +50) and a correlation of r = 0.85 with ground-truth EDA. While random split validations yielded high accuracy, performance under leave-one-subject-out evaluation highlighted challenges in cross-subject generalization. The algorithms themselves were not the primary research focus but served to establish feasibility of the approach. These findings provide the first proof-of-concept that LiDAR can remotely estimate EDA-based arousal without direct skin contact, addressing a central limitation of current driver monitoring systems. Future research should focus on larger datasets, multimodal integration, and real-world driving validation to advance LiDAR towards practical in-vehicle deployment.
(This article belongs to the Section Vehicular Sensing)
