Search Results (115)

Search Parameters:
Keywords = best detector selection

23 pages, 98915 KB  
Article
vinum-Analytics
by Nuno Ferreira, Filipe Pinto, António Valente, Diana Augusto, Manuela Reis and Salviano Soares
Mach. Learn. Knowl. Extr. 2026, 8(4), 106; https://doi.org/10.3390/make8040106 - 18 Apr 2026
Viewed by 91
Abstract
Old-vine vineyards often contain dozens of grapevine varieties intermingled and irregularly distributed, making plant-level varietal identification slow and expensive when based on ampelography or molecular approaches. This paper proposes a field-oriented computer-vision pipeline for Vitis vinifera variety identification using images with a natural background from the historic “Vinha Maria Teresa” parcel (Quinta do Crasto, Portugal). A single-class YOLO11 detector is trained to localize the vine leaf and generate standardized crops, and a YOLO11 classifier is then fine-tuned on leaf regions of interest (ROIs) for eight selected varieties in the Douro UNESCO region. We annotated 2015 vineyard images for classification and supplemented detection training with 2648 additional leaf images; detectors (YOLO11n/s/m) were benchmarked under four augmentation regimes and evaluated on a fixed 48-image subset, including runtime on CPU and GPU. The best detector reached mAP@50–95 of 0.918 on the benchmark, while YOLO11n achieved ∼27 FPS on CPU for fast cropping. On a 303-image test set, the best classifier (YOLO11s with mixed augmentations) achieved 94.06% Top-1 accuracy, 93.92% macro-F1, and 100% Top-5 accuracy with remaining errors concentrated among morphologically similar varieties. To assess deployment-oriented performance, classifiers trained under three input settings (manual crops, detector-generated crops, and full images) were evaluated on a held-out 48-image benchmark subset; removing the detection step reduced Top-1 accuracy from 75.00% to 68.75%, while the gap between manual and automatic crops was only 2.44 pp on successfully detected images with detection failures (14.6%) representing the primary operational bottleneck. Repeated retraining of the best manual-crop YOLO11s configuration across multiple random seeds showed stable performance with low variability in Top-1 accuracy and macro-F1. 
Under identical training conditions, ResNet50 and EfficientNet-B0 provided competitive baselines, but YOLO11s remained the strongest overall model on the held-out field benchmark. These results indicate that lightweight leaf detection plus crop-based classification can support scalable varietal identification in old vineyards under realistic acquisition conditions. Full article
(This article belongs to the Section Learning)
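The detect-then-crop step this abstract describes can be sketched in a few lines. This is an illustrative sketch only, not the authors' code: the function name, the `pad` margin, and the box format are assumptions; detections are taken as `(x1, y1, x2, y2)` pixel boxes from any single-class leaf detector.

```python
def crop_leaf_rois(image, boxes, pad=0.1):
    """Cut detector boxes out of an image with a small relative margin,
    producing standardized crops for a downstream leaf classifier.

    image: list of H rows, each a list of W pixels (a numpy array works too);
    boxes: iterable of (x1, y1, x2, y2) in pixel coordinates (hypothetical format).
    """
    h, w = len(image), len(image[0])
    crops = []
    for x1, y1, x2, y2 in boxes:
        bw, bh = x2 - x1, y2 - y1
        # Expand each side by a fraction `pad` of the box size, clamped to the image.
        xa, ya = max(0, int(x1 - pad * bw)), max(0, int(y1 - pad * bh))
        xb, yb = min(w, int(x2 + pad * bw)), min(h, int(y2 + pad * bh))
        crops.append([row[xa:xb] for row in image[ya:yb]])
    return crops
```

Each crop can then be resized and passed to the fine-tuned classifier; the abstract's result that skipping this step costs about 6 Top-1 points suggests the crop is doing real normalization work.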
35 pages, 5649 KB  
Article
Cross-Dataset Benchmarking of Deep Learning Models for Surface Defect Classification in Metal Parts
by Fábio Mendes da Silva, João Manuel R. S. Tavares, António Mendes Lopes and Antonio Ramos Silva
Appl. Sci. 2026, 16(6), 3022; https://doi.org/10.3390/app16063022 - 20 Mar 2026
Cited by 1 | Viewed by 441
Abstract
Accurate surface defect classification is critical for industrial quality control. Although Deep Learning achieves strong results on individual datasets, most prior studies benchmark only a narrow set of models under inconsistent pipelines, limiting comparability and industrial relevance. This work introduces the first systematic benchmark of ten architectures—CNNs (CNN, ResNet18/50), lightweight models (MobileNetV2, SuperSimpleNet, GhostNet, EfficientNetV2), Vision Transformers (Swin Transformer), a hybrid CNN–Transformer (CoAtNet), and a one-stage detector (YOLOv12)—across five public defect datasets (NEU-DET, X-SDD, KolektorSDD2, DAGM, MTDD) under a unified pipeline. Results show that Swin Transformer and CoAtNet achieve the best performance (mean F1-scores 90.8% and 85.5%), while EfficientNetV2 underperformed (41.9%), underscoring the need for domain-specific benchmarks. Lightweight models such as MobileNetV2, GhostNet, and SuperSimpleNet deliver competitive accuracy at much lower cost, offering practical solutions for edge deployment. By bridging the gap between academic benchmarks and manufacturing requirements, this study provides actionable guidance for selecting defect detection models in automated inspection. Full article

25 pages, 6302 KB  
Article
Artificial Intelligence-Based Detection of On-Ground Chestnuts Toward Automated Picking
by Kaixuan Fang, Yuzhen Lu and Xinyang Mu
AgriEngineering 2026, 8(3), 116; https://doi.org/10.3390/agriengineering8030116 - 19 Mar 2026
Viewed by 587
Abstract
Traditional mechanized chestnut harvesting is too costly for small producers, non-selective, and prone to damaging nuts. Accurate, reliable detection of chestnuts on the orchard floor is crucial for developing low-cost, vision-guided automated harvesting technology. However, developing a reliable chestnut detection system faces challenges in complex environments with shading, varying natural light conditions, and interference from weeds, fallen leaves, stones, and other foreign on-ground objects, which have remained unaddressed. This study collected 319 images of chestnuts on the orchard floor, containing 6524 annotated chestnuts. A comprehensive set of 29 state-of-the-art real-time object detectors, including 14 in the YOLO (v11–v13) and 15 in the RT-DETR (v1–v4) families at various model scales, was systematically evaluated through replicated modeling experiments for chestnut detection. Experimental results show that the YOLOv12m model achieved the best mAP@0.5 of 95.1% among all the evaluated models, while RT-DETRv2-R101 was the most accurate variant among the RT-DETR models, with mAP@0.5 of 91.1%. In terms of mAP@[0.5:0.95], the YOLOv11x model achieved the best accuracy of 80.1%. All models demonstrated significant potential for real-time chestnut detection, and YOLO models outperformed RT-DETR models in terms of both detection accuracy and inference, making them better suited for on-board deployment. This work lays a foundation for developing AI-based, vision-guided intelligent chestnut harvest systems. Full article
(This article belongs to the Special Issue Applications of Computer Vision in Agriculture)

33 pages, 9350 KB  
Article
Machine Learning-Based Inversion of Axial-Segment Characterization for Spent Fuel Materials
by Qi Zhang, Zining Ni, Qi Huang, Chao Yang and Zhenping Chen
Coatings 2026, 16(3), 329; https://doi.org/10.3390/coatings16030329 - 8 Mar 2026
Viewed by 356
Abstract
The burnup, initial enrichment, and cooling time of spent nuclear fuel collectively determine the activities of key gamma-emitting nuclides (e.g., 134Cs, 137Cs, 154Eu). In safeguards verification, a non-destructive assay (NDA) using radiation detectors can directly acquire the gamma-ray emission signatures associated with these characteristic nuclides. Previous studies have reported empirical relationships between the activities of nuclides such as 134Cs, 137Cs, and 154Eu and the assembly burnup. However, the non-uniform axial power distribution in fuel assemblies leads to variations in axial-segment burnup. Accordingly, this study utilizes a nuclide sample database of a typical pressurized water reactor (PWR) assembly generated by OpenMC 0.15.3 depletion calculations. The calculated results are analyzed, and a sensitivity analysis of the hydrogen-to-uranium atomic ratio (H/U) on the characteristic nuclides is presented, confirming the necessity of incorporating the H/U ratio as an input parameter to improve the cross-condition generalization of the surrogate models. Subsequently, MLP and CNN based on PyTorch 2.9.1 (CUDA 13.0 build: 2.9.1+cu130), and XGBoost 3.0.2 models are implemented to invert axial-segment burnup, initial enrichment, and the number densities of selected actinides under various discrete operating conditions based on characteristic nuclide activities. A comparative analysis of the prediction results from different feature inversion methods is provided. The results indicate that the MLP model performs best with Method A, which incorporates absolute 137Cs activity and the 154Eu/137Cs ratio, achieving a relative prediction deviation of only 5.2% for initial enrichment. Under Method C, the XGBoost model attains a relative prediction deviation of only 0.9% for axial-segment burnup (BU_zone). Full article

27 pages, 4413 KB  
Article
Cross-Protocol Domain Gap in Internet of Things Intrusion and Anomaly Detection: An Empirical Internet Protocol-to-Bluetooth Low Energy Study of Domain-Adversarial Training
by Hyejin Jin
Sensors 2026, 26(4), 1184; https://doi.org/10.3390/s26041184 - 11 Feb 2026
Viewed by 549
Abstract
Intrusion and anomaly detectors trained on Internet Protocol (IP) traffic are increasingly deployed in heterogeneous IoT environments where Bluetooth Low Energy (BLE) links coexist with IP networks. We quantify the cross-protocol domain gap in an IP → BLE transfer setting under unsupervised domain adaptation (UDA), where target labels are unavailable for training and model selection. Using 14 lightweight window-level statistics and leakage-aware splits, we benchmark classical baselines and alignment methods (CORAL and MMD) against domain-adversarial neural networks (DANNs). Under random window splits, DANNs can yield modest target gains but exhibit strong seed sensitivity and non-monotonic domain confusion. We propose R3, a domain-aware checkpoint rule that combines near-best source validation with domain discriminator accuracy as a proxy for alignment, improving the target ROC-AUC by ~+0.053 across three representative seeds and producing more consistent AP gains over 20 seeds. However, under a stricter capture-wise leave-one-capture-out (LOCO) protocol, UDA collapses to near-chance ranking and can underperform simple baselines, highlighting the risk of optimistic random splits. Finally, we show that transferring a source-tuned threshold can trigger unsafe operating points (micro-FPR = 1.0 on benign-only captures), motivating PR-based metrics and calibration/operating-point audits. We have released derived feature tables, split definitions, and scripts to support reproducibility under restricted raw data access. Full article
(This article belongs to the Special Issue Privacy and Cybersecurity in IoT-Based Applications)
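The R3 checkpoint rule is only summarized in the abstract ("near-best source validation" combined with "domain discriminator accuracy as a proxy for alignment"); a minimal sketch of that stated idea follows. The function name, the `src_auc`/`disc_acc` keys, the tolerance `eps`, and the tie-breaking by distance to chance (0.5) are all assumptions, not the paper's exact rule.

```python
def select_checkpoint_r3(checkpoints, eps=0.01):
    """Domain-aware checkpoint selection in the spirit of R3:
    restrict to checkpoints whose source validation AUC is within `eps`
    of the best, then prefer the one whose domain-discriminator accuracy
    is closest to chance (0.5), taken as a proxy for feature alignment.

    checkpoints: list of dicts with 'src_auc' and 'disc_acc' (assumed schema).
    """
    best_src = max(c["src_auc"] for c in checkpoints)
    near_best = [c for c in checkpoints if c["src_auc"] >= best_src - eps]
    return min(near_best, key=lambda c: abs(c["disc_acc"] - 0.5))
```

The point of the two-stage filter is that target labels are unavailable under UDA, so model selection must rely entirely on source-side and domain-level signals.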

30 pages, 11402 KB  
Article
Striping Noise Reduction: A Detector-Selection Approach in Multi-Column Scanning Radiometers
by Xiaowei Jia, Xiuju Li, Tao Wen and Changpei Han
Remote Sens. 2026, 18(2), 233; https://doi.org/10.3390/rs18020233 - 11 Jan 2026
Viewed by 527
Abstract
Striping noise is a common problem in multi-detector scanning radiometers on remote sensing satellites, typically caused by response inconsistency among detector elements. For payloads with a multi-column redundant architecture, this paper proposes a detector-selection framework that jointly considers sensitivity and uniformity from the perspective of detector-element selection to mitigate striping noise. First, the degree of detector consistency is quantified using the Inter-Row Brightness Temperature Difference (IRBTD). Then, a dynamic programming approach based on the Viterbi algorithm is employed to select detector elements row by row with linear time complexity, optimizing the process through a weighted cost function that integrates sensitivity and consistency. Experiments on raw data from the FY-4B Geostationary High-speed Imager (GHI) show that the method reduces inconsistency by 10–40% while increasing the noise-equivalent temperature difference (NEdT) by only 1–4% (≤4 mK). The average IRBTD decreases by approximately 20–100 mK, and high-frequency striping energy is significantly suppressed (reduction of 50–90%). The algorithm exhibits linear time complexity and low computational overhead, making it suitable for real-time on-board processing. Its weighting parameter enables flexible trade-offs between sensitivity and uniformity. By suppressing striping noise directly during the detector-selection stage without introducing data distortion or requiring calibration adjustments, the proposed method can be widely applied to scanning radiometers that employ multi-column long-linear-arrays. Full article
(This article belongs to the Special Issue Remote Sensing Data Preprocessing and Calibration)
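The row-by-row Viterbi selection the abstract describes can be sketched as a small dynamic program. This is a toy illustration of the idea, not the paper's implementation: the per-element cost `1 - sensitivity`, the consistency term as an absolute response difference to the previous row, and the weight `alpha` are stand-ins for the paper's weighted cost function and IRBTD metric.

```python
def select_detectors(sensitivity, response, alpha=0.5):
    """Viterbi-style dynamic program that picks one detector element per
    scan row from several redundant columns, trading off element quality
    (here: 1 - sensitivity) against inter-row response consistency
    (|response difference| to the previous row's choice), weighted by
    `alpha`. Runtime is linear in the number of rows.

    sensitivity[r][c], response[r][c]: per-row, per-column metrics (assumed).
    """
    n_rows, n_cols = len(sensitivity), len(sensitivity[0])
    cost = [alpha * (1 - s) for s in sensitivity[0]]
    back = []
    for r in range(1, n_rows):
        new_cost, ptrs = [], []
        for c in range(n_cols):
            # Best predecessor column under the consistency term.
            p = min(range(n_cols),
                    key=lambda q: cost[q]
                    + (1 - alpha) * abs(response[r][c] - response[r - 1][q]))
            new_cost.append(cost[p]
                            + (1 - alpha) * abs(response[r][c] - response[r - 1][p])
                            + alpha * (1 - sensitivity[r][c]))
            ptrs.append(p)
        cost, back = new_cost, back + [ptrs]
    # Backtrack the cheapest path of column choices.
    c = min(range(n_cols), key=cost.__getitem__)
    path = [c]
    for ptrs in reversed(back):
        c = ptrs[c]
        path.append(c)
    return path[::-1]
```

Lowering `alpha` weights inter-row consistency more heavily (less striping), while raising it favors per-element sensitivity, mirroring the trade-off the abstract describes.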

23 pages, 4663 KB  
Article
Element Evaluation and Selection for Multi-Column Redundant Long-Linear-Array Detectors Using a Modified Z-Score
by Xiaowei Jia, Xiuju Li and Changpei Han
Remote Sens. 2026, 18(2), 224; https://doi.org/10.3390/rs18020224 - 9 Jan 2026
Viewed by 406
Abstract
New-generation geostationary meteorological satellite radiometric imagers widely employ multi-column redundant long-linear-array detectors, for which the Best Detector Selection (BDS) strategy is crucial for enhancing the quality of remote sensing data. Addressing the limitation of current BDS methods that often rely on a single metric and thus fail to fully exploit the detector’s comprehensive performance, this paper proposes a detector evaluation method based on a modified Z-score. This method systematically categorizes detector metrics into three types: positive, negative, and uniformity. It introduces, for the first time, spectral response deviation (SRD) as an effective quantitative measure for the Spectral Response Function (SRF) and employs a robust normalization strategy using the Interquartile Range (IQR) instead of standard deviation, enabling multi-dimensional detector evaluation and selection. Validation using laboratory data from the FY-4C/AGRI long-wave infrared band demonstrates that, compared to traditional single-metric optimization strategies, the best detectors selected by our method show significant improvement across multiple performance indicators, markedly enhancing both data quality and overall system performance. The proposed method features low computational complexity and strong adaptability, supporting on-orbit real-time detector optimization and dynamic updates, thereby providing reliable technical support for high-quality processing of remote sensing data from geostationary meteorological satellites. Full article
(This article belongs to the Special Issue Remote Sensing Data Preprocessing and Calibration)
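The core normalization the abstract describes, a Z-score that uses the median and interquartile range instead of mean and standard deviation, can be sketched as follows. The exact quartile convention, the handling of uniformity-type metrics, and the function name are assumptions; only the IQR-based robustness idea comes from the abstract.

```python
import statistics

def robust_scores(values, higher_is_better=True):
    """Modified Z-score in the spirit described above: center each metric
    on the median and scale by the interquartile range (IQR) rather than
    the standard deviation, so a few outlier detector elements cannot
    stretch the scale. Negative-type metrics are flipped so that larger
    scores always mean better elements.
    """
    xs = sorted(values)
    n = len(xs)
    q1 = statistics.median(xs[:n // 2])         # lower-half median (one quartile convention)
    q3 = statistics.median(xs[(n + 1) // 2:])   # upper-half median
    iqr = (q3 - q1) or 1.0                      # guard against a zero IQR
    med = statistics.median(xs)
    sign = 1.0 if higher_is_better else -1.0
    return [sign * (x - med) / iqr for x in values]
```

Scores computed this way for each metric (sensitivity, SRD, uniformity, ...) can then be summed per element, so selection reflects multi-dimensional performance rather than a single metric.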

24 pages, 3622 KB  
Article
Deep Learning-Based Intelligent Monitoring of Petroleum Infrastructure Using High-Resolution Remote Sensing Imagery
by Nannan Zhang, Hang Zhao, Pengxu Jing, Yan Gao, Song Liu, Jinli Shen, Shanhong Huang, Qihong Zeng, Yang Liu and Miaofen Huang
Processes 2026, 14(1), 28; https://doi.org/10.3390/pr14010028 - 20 Dec 2025
Viewed by 757
Abstract
The rapid advancement of high-resolution remote sensing technology has significantly expanded observational capabilities in the oil and gas sector, enabling more precise identification of petroleum infrastructure. Remote sensing now plays a critical role in providing real-time, continuous monitoring. Manual interpretation remains the predominant approach, yet is plagued by multiple limitations. To overcome the limitations of manual interpretation in large-scale monitoring of upstream petroleum assets, this study develops an end-to-end, deep learning-driven framework for intelligent extraction of key oilfield targets from high-resolution remote sensing imagery. Specific aims are as follows: (1) To leverage temporal diversity in imagery to construct a representative training dataset. (2) To automate multi-class detection of well sites, production discharge pools, and storage facilities with high precision. This study proposes an intelligent monitoring framework based on deep learning for the automatic extraction of petroleum-related features from high-resolution remote sensing imagery. Leveraging the temporal richness of multi-temporal satellite data, a geolocation-based sampling strategy was adopted to construct a dedicated petroleum remote sensing dataset. The dataset comprises over 8000 images and more than 30,000 annotated targets across three key classes: well pads, production ponds, and storage facilities. Four state-of-the-art object detection models were evaluated—two-stage frameworks (Faster R-CNN, Mask R-CNN) and single-stage algorithms (YOLOv3, YOLOv4)—with the integration of transfer learning to improve accuracy, generalization, and robustness. Experimental results demonstrate that two-stage detectors significantly outperform their single-stage counterparts in terms of mean Average Precision (mAP). 
Specifically, the Mask R-CNN model, enhanced through transfer learning, achieved an mAP of 89.2% across all classes, exceeding the best-performing single-stage model (YOLOv4) by 11 percentage points. This performance gap highlights the trade-off between speed and accuracy inherent in single-shot detection models, which prioritize real-time inference at the expense of precision. Additionally, comparative analysis among similar architectures confirmed that newer versions (e.g., YOLOv4 over YOLOv3) and the incorporation of transfer learning consistently yield accuracy improvements of 2–4%, underscoring its effectiveness in remote sensing applications. Three oilfield areas were selected for practical application. The results indicate that the constructed model can automatically extract multiple target categories simultaneously, with average detection accuracies of 84% for well sites and 77% for production ponds. For multi-class targets over 100 square kilometers, manual detection previously required one day but now takes only one hour. Full article

29 pages, 33246 KB  
Article
Regional Forest Wildfire Mapping Through Integration of Sentinel-2 and Landsat 8 Data in Google Earth Engine with Semi-Automatic Training Sample Generation
by Yue Chen, Weili Kou, Xiong Yin, Rui Wang, Jiangxia Ye and Qiuhua Wang
Remote Sens. 2025, 17(24), 4038; https://doi.org/10.3390/rs17244038 - 16 Dec 2025
Viewed by 1635
Abstract
Accurate mapping of burned forest areas in mountainous regions is essential for wildfire assessment and post-fire ecological management. This study develops an FS-SNIC-ML workflow that integrates multi-source optical fusion, semi-automatic sample generation, feature selection, and object-based machine-learning classification to support reliable burned-area mapping under complex terrain conditions. A pseudo-invariant feature (PIFS)-based fusion of Sentinel-2 and Landsat 8 imagery was employed to generate cloud-free, gap-free, and spectrally consistent pre- and post-fire reflectance datasets. Burned and unburned samples were constructed using a semi-automatic SAM–GLCM–PCA–Otsu procedure and county-level stratified sampling to ensure spatial representativeness. Feature selection using LR, RF, and Boruta identified dNBR, dNDVI, and dEVI as the most discriminative variables. Within the SNIC-supported GEOBIA framework, four classifiers were evaluated; RF performed best, achieving overall accuracies of 92.02% for burned areas and 94.04% for unburned areas, outperforming SVM, CART, and KNN. K-means clustering of dNBR revealed spatial variation in fire conditions, while geographical detector analysis showed that NDVI, temperature, soil moisture, and their pairwise interactions were the dominant drivers of wildfire hotspot density. The proposed workflow provides an effective and transferable approach for high-precision burned-area extraction and quantification of wildfire-driving factors in mountainous forest regions. Full article
(This article belongs to the Special Issue Advances in Remote Sensing for Burned Area Mapping)
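The dNBR variable the abstract identifies as most discriminative follows from the standard Normalized Burn Ratio; a minimal sketch is below. The band choices (which NIR/SWIR bands of Sentinel-2 or Landsat 8 to use) are sensor-specific and not taken from the abstract.

```python
def nbr(nir, swir):
    """Normalized Burn Ratio from NIR and SWIR reflectance."""
    return (nir - swir) / (nir + swir)

def dnbr(nir_pre, swir_pre, nir_post, swir_post):
    """Differenced NBR (pre-fire minus post-fire); burned pixels show a
    drop in NBR, i.e. a positive dNBR."""
    return nbr(nir_pre, swir_pre) - nbr(nir_post, swir_post)
```

dNDVI and dEVI, the other two selected features, are built the same way: compute the index on the pre- and post-fire composites and difference them.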

23 pages, 3296 KB  
Article
Enhancing the Effectiveness of Juvenile Protection: Deep Learning-Based Facial Age Estimation via JPSD Dataset Construction and YOLO-ResNet50
by Yuqiang Wu, Qingyang Gao, Yichen Lin, Zhanhai Yang and Xinmeng Wang
Appl. Syst. Innov. 2025, 8(6), 185; https://doi.org/10.3390/asi8060185 - 29 Nov 2025
Viewed by 921
Abstract
An increasing number of juveniles are accessing adult-oriented venues, such as bars and nightclubs, where supervision is frequently inadequate, thereby elevating their risk of both offline harm and unmonitored exposure to harmful online content. Existing facial age estimation systems, which are primarily designed for adults, have significant limitations when it comes to protecting juveniles, hindering the efficiency of supervising them in key venues. To address these challenges, this study proposes a facial age estimation solution for juvenile protection. First, we have designed a ‘detection–cropping–classification’ framework comprising three stages. This first detects facial regions using a detection algorithm, then crops the image before inputting the results into a classification model for age estimation. Secondly, we constructed the Juvenile Protection Surveillance and Detection (JPSD) Dataset by integrating five public datasets: UTKface, AgeDB, APPA-REAL, MegaAge and FG-NET. This dataset contains 14,260 images categorised into four age groups: 0–8 years, 8–14 years, 14–18 years and over 18 years. Thirdly, we conducted baseline model comparisons. In the object detection phase, three YOLO algorithms were selected for face recognition. In the age estimation phase, traditional convolutional neural networks (CNNs), such as ResNet50 and VGG16, were contrasted with vision transformer (ViT)-based models, such as ViT and BiFormer. Gradient-weighted Class Activation Mapping (Grad-CAM) was used for visual analysis to highlight differences in the models’ decision-making processes. Experiments revealed that YOLOv11 is the optimal detector for accurate facial localisation, and that ResNet50 is the best base classifier for enhancing age-sensitive feature extraction, outperforming BiFormer. The results show that the framework achieves Recall of 89.17% for the 0–8 age group and 95.17% for the over-18 age group.
However, we have found that the current model has low Recall rates for the 8–14 and 14–18 age groups. Therefore, in the near term, we emphasise that this technology should only be used as a decision-support tool under strict human-in-the-loop supervision. This study provides an essential dataset and technical framework for juvenile facial age estimation, offering support for juvenile online protection, smart policing and venue supervision. Full article
(This article belongs to the Section Artificial Intelligence)
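The four-group labeling used to build the JPSD dataset reduces to a simple binning of estimated age. Which group owns the exact boundary ages (8, 14, 18) is not stated in the abstract, so the half-open intervals below are an assumption.

```python
def jpsd_age_group(age):
    """Map an age in years to the four JPSD groups named in the abstract
    (0-8, 8-14, 14-18, over 18), using half-open [low, high) intervals;
    the boundary convention is an assumption, not taken from the paper."""
    if age < 8:
        return "0-8"
    if age < 14:
        return "8-14"
    if age < 18:
        return "14-18"
    return "over 18"
```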

28 pages, 879 KB  
Article
Performance Bounds of Ranging Precision in SPAD-Based dToF LiDAR
by Hao Wu, Yingyu Wang, Shiyi Sun, Lijie Zhao, Limin Tong, Linjie Shen and Jiang Zhu
Sensors 2025, 25(19), 6184; https://doi.org/10.3390/s25196184 - 6 Oct 2025
Viewed by 1845
Abstract
LiDAR with direct time-of-flight (dToF) technology based on single-photon avalanche diode detectors (SPADs) has been widely adopted in various applications. However, a comprehensive theoretical understanding of its fundamental ranging performance bounds—particularly the degradation caused by pile-up effects due to system dead time and the potential benefits of photon-number-resolving detectors—remains incomplete and has not been systematically established in prior work. In this work, we present the first theoretical derivation of the Cramér–Rao lower bound (CRLB) for dToF systems explicitly accounting for dead time effects, generalize the analysis to SPADs with photon-number-resolving capabilities, and further validate the results through Monte Carlo simulations and maximum likelihood estimation. Our analysis reveals that pile-up not only reduces the information contained within individual ToF but also introduces a previously overlooked statistical coupling between distance and photon flux rate, further degrading ranging precision. The derived CRLB enables the determination of the optimal optical photon flux, laser pulse width (with FWHM of 0.56τ), and ToF quantization resolution that yield the best achievable ranging precision, showing that an optimal precision of approximately 0.53τ/N remains theoretically achievable, where τ is TDC resolution and N is the number of laser pulses. The analysis further quantifies the limited performance improvement enabled by increased photon-number resolution, which exhibits rapidly diminishing returns. Overall, these findings establish a unified theoretical framework for understanding the fundamental limits of SPAD-based dToF LiDAR, filling a gap left by earlier studies and providing concrete design guidelines for the selection of optimal operating points. Full article
(This article belongs to the Section Radar Sensors)
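The pile-up mechanism the abstract analyzes, dead time suppressing all but the first detection per laser shot, is easy to reproduce with a toy Monte Carlo. This sketch only illustrates the histogram distortion; it is not the paper's CRLB derivation, and the flat-flux assumption and bin-wise Poisson model are simplifications chosen here.

```python
import math
import random

def first_photon_histogram(n_pulses, n_bins, rate_per_bin, seed=0):
    """Toy Monte Carlo of SPAD pile-up: in each laser shot, photons arrive
    in every TDC bin with Poisson statistics, but only the FIRST detection
    is recorded because dead time exceeds the measurement window. Even
    under a flat photon flux the recorded histogram piles up in the early
    bins, which is the distortion the abstract refers to."""
    rng = random.Random(seed)
    p_detect = 1 - math.exp(-rate_per_bin)  # P(>= 1 photon in a bin)
    hist = [0] * n_bins
    for _ in range(n_pulses):
        for b in range(n_bins):
            if rng.random() < p_detect:
                hist[b] += 1
                break  # detector dead for the rest of this shot
    return hist
```

Raising `rate_per_bin` makes the early-bin skew worse, which is why an optimal optical flux exists: too few photons starve the estimator, too many distort the histogram.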

19 pages, 2868 KB  
Article
Leveraging Transfer Learning for Determining Germination Percentages in Gray Mold Disease (Botrytis cinerea)
by Luis M. Gómez-Meneses, Andrea Pérez, Angélica Sajona, Luis F. Patiño, Jorge Herrera-Ramírez, Juan Carrasquilla and Jairo C. Quijano
AgriEngineering 2025, 7(9), 303; https://doi.org/10.3390/agriengineering7090303 - 18 Sep 2025
Viewed by 1273
Abstract
The rapid and accurate identification of pathogenic spores is essential for the early diagnosis of diseases in modern agriculture. Gray mold disease, caused by Botrytis cinerea, is a significant threat to several crops and is traditionally controlled using fungicides or, alternatively, by UV-C radiation. Classically, the determination of conidial germination percentage, a key indicator for assessing pathogen viability, has been a manual, time-consuming, and error-prone process. This study proposes an approach based on deep learning, using one-stage detectors to automate the detection and counting of germinated and non-germinated conidia in microscopy images. We trained and assessed the performance of three models under several metrics: YOLOv8, YOLOv11, and RetinaNET. The results show that these three architectures provide an efficient and accurate solution for the recognition of gray mold conidia viability. Selecting the best model, we performed the task of detecting and counting conidia for determining the germination percentage on samples treated with different UV-C radiation dosages. The results show that these deep-learning models achieved counting accuracies that closely matched those obtained with conventional manual methods, yet they delivered results far more rapidly. Because they operate continuously without fatigue or operator bias, these models begin to open possibilities, after widening field tests and datasets, for efficient and fully automated monitoring pipelines for disease management in the agro-industry. Full article
(This article belongs to the Special Issue Implementation of Artificial Intelligence in Agriculture)

29 pages, 3400 KB  
Article
Synthetic Data Generation for Machine Learning-Based Hazard Prediction in Area-Based Speed Control Systems
by Mariusz Rychlicki and Zbigniew Kasprzyk
Appl. Sci. 2025, 15(15), 8531; https://doi.org/10.3390/app15158531 - 31 Jul 2025
Cited by 1 | Viewed by 1664
Abstract
This work focuses on the possibilities of generating synthetic data for machine learning in hazard prediction in area-based speed monitoring systems. The purpose of the research conducted was to develop a methodology for generating realistic synthetic data to support the design of a [...] Read more.
This work focuses on the possibilities of generating synthetic data for machine learning in hazard prediction in area-based speed monitoring systems. The purpose of the research was to develop a methodology for generating realistic synthetic data to support the design of a continuous vehicle speed monitoring system that minimizes the risk of traffic accidents caused by speeding. The SUMO traffic simulator was used to model driver behavior in the analyzed area and within a given road network. Data from OpenStreetMap and field measurements from over a dozen speed detectors were integrated. Preliminary tests were carried out to record vehicle speeds. Based on these data, several simulation scenarios were run and compared to real-world observations using average speed, the percentage of speed limit violations, root mean square error (RMSE), and percentage compliance. A new metric, the Combined Speed Accuracy Score (CSAS), was introduced to assess the consistency of simulation results with real-world data. A basic hazard prediction model was also developed using LoRaWAN sensor network data and environmental contextual variables, including time, weather, location, and accident history. The research yields a method for evaluating and selecting the simulation scenario that best represents reality and drivers' propensity to exceed speed limits. The results demonstrate that traffic simulators can produce synthetic data that agree with real data at a level exceeding 90%, confirming that synthetic data can be generated for machine learning in hazard prediction for area-based speed control systems. Full article
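The abstract names the standard comparison metrics but not the CSAS formula, so the sketch below covers only the stated components: mean speeds, the share of speed-limit violations, RMSE between paired simulated and observed measurements, and an assumed definition of percentage compliance (100% minus the relative RMSE). All numbers are illustrative, not the paper's data.

```python
import math

def speed_metrics(simulated, observed, speed_limit=50.0):
    """Compare simulated vs. observed speeds (km/h) at one detector.

    Returns mean speeds, the share of limit violations in each series,
    the RMSE between paired measurements, and a compliance figure under
    an assumed definition: 100% minus the RMSE relative to the
    observed mean.
    """
    assert len(simulated) == len(observed) and observed
    n = len(observed)
    rmse = math.sqrt(
        sum((s - o) ** 2 for s, o in zip(simulated, observed)) / n
    )
    mean_obs = sum(observed) / n
    return {
        "mean_sim": sum(simulated) / n,
        "mean_obs": mean_obs,
        "viol_sim_pct": 100.0 * sum(s > speed_limit for s in simulated) / n,
        "viol_obs_pct": 100.0 * sum(o > speed_limit for o in observed) / n,
        "rmse": rmse,
        "compliance_pct": 100.0 * (1.0 - rmse / mean_obs),
    }

# Illustrative comparison of one scenario against field measurements
m = speed_metrics([48.0, 52.0, 55.0, 47.0], [50.0, 51.0, 56.0, 46.0])
print(round(m["rmse"], 2))  # 1.32
```

Scenario selection would then rank candidate SUMO configurations by these per-detector scores, keeping the one closest to the field data.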

25 pages, 378 KB  
Article
Markov Observation Models and Deepfakes
by Michael A. Kouritzin
Mathematics 2025, 13(13), 2128; https://doi.org/10.3390/math13132128 - 29 Jun 2025
Cited by 1 | Viewed by 765
Abstract
Herein, expanded Hidden Markov Models (HMMs) are considered as potential deepfake generation and detection tools. The most specific model is the HMM, while the most general is the pairwise Markov chain (PMC). In between, the Markov observation model (MOM) is proposed, where the observations form a Markov chain conditionally on the hidden state. An expectation-maximization (EM) analog to the Baum–Welch algorithm is developed to estimate the transition probabilities as well as the initial hidden-state-observation joint distribution for all the models considered. This new EM algorithm also includes a recursive log-likelihood equation so that model selection can be performed (after parameter convergence). Once models have been learnt through the EM algorithm, deepfakes are generated through simulation, while they are detected using the log-likelihood. Our three models were compared empirically in terms of their generative and detective ability. PMC and MOM consistently produced the best deepfake generator and detector, respectively. Full article
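The log-likelihood-based detection idea can be illustrated with the standard scaled forward recursion for a plain discrete HMM; the paper's MOM and PMC variants generalize this by letting the observations themselves form a conditional Markov chain. All parameters below are toy values, not estimates from the paper.

```python
import math

def hmm_log_likelihood(obs, pi, A, B):
    """Scaled forward algorithm for a discrete HMM.

    obs : sequence of observation indices
    pi  : initial state distribution, length K
    A   : K x K transition matrix, A[i][j] = P(state j | state i)
    B   : K x M emission matrix, B[i][o] = P(obs o | state i)
    Returns log P(obs | model); a detector would threshold this value
    or compare it across candidate models.
    """
    K = len(pi)
    alpha = [pi[i] * B[i][obs[0]] for i in range(K)]
    log_lik = 0.0
    for t, o in enumerate(obs):
        if t > 0:
            alpha = [
                sum(alpha[i] * A[i][j] for i in range(K)) * B[j][o]
                for j in range(K)
            ]
        c = sum(alpha)              # per-step scaling avoids underflow
        log_lik += math.log(c)
        alpha = [a / c for a in alpha]
    return log_lik

# Toy two-state model (all parameters hypothetical)
A = [[0.9, 0.1], [0.2, 0.8]]
B = [[0.7, 0.3], [0.1, 0.9]]
ll = hmm_log_likelihood([0, 0, 1, 1], [0.5, 0.5], A, B)
print(ll)  # more negative means less likely under this model
```

In the detection setting described above, a sequence scoring unusually low under the model fitted to genuine data would be flagged as a candidate deepfake.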

15 pages, 3017 KB  
Article
Assessment of Spectral Computed Tomography Image Quality and Detection of Lesions in the Liver Based on Image Reconstruction Algorithms and Virtual Tube Voltage
by Areej Hamami, Mohammad Aljamal, Nora Almuqbil, Mohammad Al-Harbi and Zuhal Y. Hamd
Diagnostics 2025, 15(8), 1043; https://doi.org/10.3390/diagnostics15081043 - 19 Apr 2025
Cited by 1 | Viewed by 1579
Abstract
Background: Spectral detector computed tomography (SDCT) has demonstrated superior diagnostic performance and image quality in liver disease assessment compared with traditional CT. Selecting the right reconstruction algorithm and tube voltage is essential to avoid increased noise and diagnostic errors. Objectives: This study evaluated improvements in image quality achieved using various virtual tube voltages and reconstruction algorithms for diagnosing common liver diseases with spectral CT. Methods: This retrospective study involved forty-seven patients who underwent spectral CT scans for liver conditions, including fatty liver, hemangiomas, and metastatic lesions. The assessment utilized the signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR), with images reconstructed using various algorithms (IMR, iDose) at different levels and virtual tube voltages. Three experienced radiologists analyzed the reconstructed images to identify the best combinations of reconstruction method and tube voltage for diagnosing these liver pathologies. Results: The SNR was highest for spectral CT images reconstructed with the IMR3 algorithm in metastatic, hemangioma, and fatty liver cases. A strong positive correlation was found between IMR3 at 120 keV and 70 keV (p < 0.001). In contrast, iDose2 at 120 keV and 70 keV showed a low correlation of 0.291 (p = 0.045). Evaluators noted that IMR1 at 70 keV provided the best visibility for liver lesions (mean = 3.58), while IMR3 at 120 keV had the lowest image quality (mean = 2.65). Conclusions: SDCT improved image quality, especially the SNR of liver tissues at low radiation doses with a specific IMR level. The IMR1 algorithm reduced noise, enhancing the visibility of liver lesions for better diagnosis. Full article
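The abstract does not spell out its SNR/CNR formulas, so the sketch below uses the conventional ROI-based definitions: SNR as the mean attenuation of a region of interest over the noise standard deviation, and CNR as the absolute lesion-to-background contrast over the same noise term. The HU values in the example are hypothetical.

```python
def snr(mean_roi, sd_noise):
    """Signal-to-noise ratio: ROI mean attenuation over noise SD."""
    return mean_roi / sd_noise

def cnr(mean_lesion, mean_background, sd_noise):
    """Contrast-to-noise ratio between a lesion and surrounding liver."""
    return abs(mean_lesion - mean_background) / sd_noise

# Hypothetical HU measurements from one reconstruction
print(snr(110.0, 10.0))        # 11.0
print(cnr(60.0, 110.0, 10.0))  # 5.0
```

Comparing these values across reconstructions (e.g. IMR levels vs. iDose levels at each virtual keV) is the kind of quantitative assessment the study performs alongside the radiologists' visual scoring.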
(This article belongs to the Special Issue Computed Tomography Imaging in Medical Diagnosis, 2nd Edition)
