Search Results (725)

Search Parameters:
Keywords = single best model approach

34 pages, 2740 KiB  
Article
Lightweight Anomaly Detection in Digit Recognition Using Federated Learning
by Anja Tanović and Ivan Mezei
Future Internet 2025, 17(8), 343; https://doi.org/10.3390/fi17080343 - 30 Jul 2025
Viewed by 132
Abstract
This study presents a lightweight autoencoder-based approach for anomaly detection in digit recognition using federated learning on resource-constrained embedded devices. We implement and evaluate compact autoencoder models on the ESP32-CAM microcontroller, enabling both training and inference directly on the device using 32-bit floating-point arithmetic. The system is trained on a reduced MNIST dataset (1000 resized samples) and evaluated using EMNIST and MNIST-C for anomaly detection. Seven fully connected autoencoder architectures are first evaluated on a PC to explore the impact of model size and batch size on training time and anomaly detection performance. Selected models are then re-implemented in the C programming language and deployed on a single ESP32 device, achieving training times as short as 12 min, inference latency as low as 9 ms, and F1 scores of up to 0.87. Autoencoders are further tested on ten devices in a real-world federated learning experiment using Wi-Fi. We explore non-IID and IID data distribution scenarios: (1) digit-specialized devices and (2) partitioned datasets with varying content and anomaly types. The results show that small unmodified autoencoder models can be effectively trained and evaluated directly on low-power hardware. The best models achieve F1 scores of up to 0.87 in the standard IID setting and 0.86 in the extreme non-IID setting. Despite some clients being trained on corrupted datasets, federated aggregation proves resilient, maintaining high overall performance. The resource analysis shows that more than half of the models and all the training-related allocations fit entirely in internal RAM. These findings confirm the feasibility of local float32 training and collaborative anomaly detection on low-cost hardware, supporting scalable and privacy-preserving edge intelligence. Full article
(This article belongs to the Special Issue Intelligent IoT and Wireless Communication)
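The anomaly-scoring idea above (train a compact autoencoder on normal digits, then flag inputs whose reconstruction error is high) can be illustrated with a minimal sketch. This is not the authors' ESP32/federated C implementation; the layer sizes, input dimension, epoch count, and 95th-percentile threshold rule are assumptions for illustration only.

```python
# Minimal reconstruction-error anomaly detector (illustrative sketch, not the paper's ESP32 code).
import numpy as np
import torch
import torch.nn as nn

class TinyAE(nn.Module):
    def __init__(self, d_in=196, d_hid=32):   # 14x14 input and 32-unit bottleneck are assumptions
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(d_in, d_hid), nn.ReLU())
        self.dec = nn.Sequential(nn.Linear(d_hid, d_in), nn.Sigmoid())

    def forward(self, x):
        return self.dec(self.enc(x))

def train_ae(model, x_train, epochs=10, lr=1e-3, batch=32):
    opt, loss_fn = torch.optim.Adam(model.parameters(), lr=lr), nn.MSELoss()
    for _ in range(epochs):
        perm = torch.randperm(len(x_train))
        for i in range(0, len(x_train), batch):
            xb = x_train[perm[i:i + batch]]
            opt.zero_grad()
            loss_fn(model(xb), xb).backward()
            opt.step()

def anomaly_scores(model, x):
    with torch.no_grad():
        return ((model(x) - x) ** 2).mean(dim=1).numpy()   # per-sample reconstruction MSE

# Placeholder tensors stand in for the reduced, resized MNIST training set.
x_normal, x_test = torch.rand(1000, 196), torch.rand(200, 196)
ae = TinyAE()
train_ae(ae, x_normal)
threshold = np.percentile(anomaly_scores(ae, x_normal), 95)   # assumed thresholding rule
is_anomaly = anomaly_scores(ae, x_test) > threshold
```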

24 pages, 1508 KiB  
Article
Genomic Prediction of Adaptation in Common Bean (Phaseolus vulgaris L.) × Tepary Bean (P. acutifolius A. Gray) Hybrids
by Felipe López-Hernández, Diego F. Villanueva-Mejía, Adriana Patricia Tofiño-Rivera and Andrés J. Cortés
Int. J. Mol. Sci. 2025, 26(15), 7370; https://doi.org/10.3390/ijms26157370 - 30 Jul 2025
Viewed by 193
Abstract
Climate change is jeopardizing global food security, with at least 713 million people facing hunger. To meet this challenge, legumes such as common beans could offer a nature-based solution, sourcing nutrients and dietary fiber, especially for rural communities in Latin America and Africa. However, since common beans are generally heat- and drought-susceptible, it is imperative to speed up their molecular introgressive adaptive breeding so that they can be cultivated in regions affected by extreme weather. Therefore, this study aimed to couple an advanced panel of common bean (Phaseolus vulgaris L.) × tolerant Tepary bean (P. acutifolius A. Gray) interspecific lines with Bayesian regression algorithms to forecast adaptation to the humid and dry sub-regions of the Caribbean coast of Colombia, where the common bean typically exhibits maladaptation to extreme heat waves. A total of 87 advanced lines with hybrid ancestries were successfully bred, overcoming the interspecific incompatibilities. This hybrid panel was genotyped by sequencing (GBS), leading to the discovery of 15,645 single-nucleotide polymorphism (SNP) markers. Three yield components (yield per plant, and number of seeds and pods) and two biomass variables (vegetative and seed biomass) were recorded for each genotype and input into several Bayesian regression models to identify the top genotypes with the best genetic breeding values across three localities on the Colombian coast. We comparatively analyzed several regression approaches, and the model with the best performance for all traits and localities was BayesC. We also compared the utilization of all markers against only those determined as associated by a priori genome-wide association study (GWAS) models. Better prediction ability with the complete SNP set was indicative of missing heritability in the GWAS reconstructions. Furthermore, optimal SNP sets per trait and locality were determined as the top 500 most explicative markers according to their β regression effects. These 500 SNPs overlapped, on average, by 5.24% across localities, which reinforced the locality-dependent nature of polygenic adaptation. Finally, we retrieved the genomic estimated breeding values (GEBVs) and selected the top 10 genotypes for each trait and locality as part of a recommendation scheme targeting narrow adaptation in the Caribbean. After validation under field conditions and stability screening, candidate genotypes and SNPs may be used in further introgressive breeding cycles for adaptation. Full article
(This article belongs to the Special Issue Plant Breeding and Genetics: New Findings and Perspectives)
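The paper's genomic prediction relies on Bayesian regressions (BayesC and related models), which are usually fit with dedicated packages; as a hedged stand-in in Python, the sketch below runs a ridge regression over a simulated SNP dosage matrix to produce GEBV-style rankings. The marker matrix, trait values, and ridge penalty are simulated assumptions, not the paper's data or its BayesC model.

```python
# Illustrative ridge-regression stand-in for SNP-based genomic prediction
# (the paper uses Bayesian models such as BayesC; this is explicitly not that implementation).
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_lines, n_snps = 87, 2000                      # 87 hybrid lines as in the abstract; SNP count reduced
X = rng.integers(0, 3, size=(n_lines, n_snps)).astype(float)   # simulated 0/1/2 allele dosages
beta = rng.normal(0.0, 0.05, n_snps)
y = X @ beta + rng.normal(0.0, 1.0, n_lines)    # simulated yield-like trait

model = Ridge(alpha=100.0).fit(X, y)
gebv = model.predict(X)                          # genomic estimated breeding values (in-sample)
top10 = np.argsort(gebv)[::-1][:10]              # top-10 genotypes, as in the recommendation scheme
print("cross-validated R2 (predictive ability proxy):",
      cross_val_score(model, X, y, cv=5, scoring="r2").mean())
print("top-10 genotype indices:", top10)
```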

22 pages, 16421 KiB  
Article
Deep Neural Network with Anomaly Detection for Single-Cycle Battery Lifetime Prediction
by Junghwan Lee, Longda Wang, Hoseok Jung, Bukyu Lim, Dael Kim, Jiaxin Liu and Jong Lim
Batteries 2025, 11(8), 288; https://doi.org/10.3390/batteries11080288 - 30 Jul 2025
Viewed by 311
Abstract
Large-scale battery datasets often contain anomalous data due to sensor noise, communication errors, and operational inconsistencies, which degrade the accuracy of data-driven prognostics. However, many existing studies overlook the impact of such anomalies or apply filtering heuristically without rigorous benchmarking, which can potentially introduce biases into training and evaluation pipelines. This study presents a deep learning framework that integrates autoencoder-based anomaly detection with a residual neural network (ResNet) to achieve state-of-the-art prediction of remaining useful life at the cycle level using only a single-cycle input. The framework systematically filters out anomalous samples using multiple variants of convolutional and sequence-to-sequence autoencoders, thereby enhancing data integrity before optimizing and training the ResNet-based models. Benchmarking against existing deep learning approaches demonstrates a significant performance improvement, with the best model achieving a mean absolute percentage error of 2.85% and a root mean square error of 40.87 cycles, surpassing prior studies. These results indicate that autoencoder-based anomaly filtering significantly enhances prediction accuracy, reinforcing the importance of systematic anomaly detection in battery prognostics. The proposed method provides a scalable and interpretable solution for intelligent battery management in electric vehicles and energy storage systems. Full article
(This article belongs to the Special Issue Machine Learning for Advanced Battery Systems)
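The pipeline shape described above (score each sample with an autoencoder, drop high-error samples, then train the lifetime regressor) can be sketched generically. The quantile threshold below is a common choice rather than the paper's exact filtering rule, and the MAPE/RMSE helpers simply mirror the reported error metrics.

```python
# Generic "filter anomalies, then fit" sketch plus the two reported error metrics
# (threshold rule and regressor are placeholders, not the paper's autoencoder/ResNet pipeline).
import numpy as np

def mape(y_true, y_pred):
    return float(np.mean(np.abs((y_true - y_pred) / y_true)) * 100.0)   # mean absolute percentage error

def rmse(y_true, y_pred):
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))              # root mean square error (cycles)

def filter_then_fit(X, y, ae_score, fit_fn, keep_quantile=0.95):
    """Drop samples whose autoencoder reconstruction error exceeds a quantile threshold,
    then fit the downstream RUL regressor on the cleaned data."""
    threshold = np.quantile(ae_score, keep_quantile)
    keep = ae_score <= threshold
    return fit_fn(X[keep], y[keep]), keep
```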

24 pages, 5200 KiB  
Article
DRFAN: A Lightweight Hybrid Attention Network for High-Fidelity Image Super-Resolution in Visual Inspection Applications
by Ze-Long Li, Bai Jiang, Liang Xu, Zhe Lu, Zi-Teng Wang, Bin Liu, Si-Ye Jia, Hong-Dan Liu and Bing Li
Algorithms 2025, 18(8), 454; https://doi.org/10.3390/a18080454 - 22 Jul 2025
Viewed by 291
Abstract
Single-image super-resolution (SISR) plays a critical role in enhancing visual quality for real-world applications, including industrial inspection and embedded vision systems. While deep learning-based approaches have made significant progress in SR, existing lightweight SR models often fail to accurately reconstruct high-frequency textures, especially under complex degradation scenarios, resulting in blurry edges and structural artifacts. To address this challenge, we propose a Dense Residual Fused Attention Network (DRFAN), a novel lightweight hybrid architecture designed to enhance high-frequency texture recovery in challenging degradation conditions. Moreover, by coupling convolutional layers and attention mechanisms through gated interaction modules, the DRFAN enhances local details and global dependencies with linear computational complexity, enabling the efficient utilization of multi-level spatial information while effectively alleviating the loss of high-frequency texture details. To evaluate its effectiveness, we conducted ×4 super-resolution experiments on five public benchmarks. The DRFAN achieves the best performance among all compared lightweight models. Visual comparisons show that the DRFAN restores more accurate geometric structures, with up to +1.2 dB/+0.0281 SSIM gain over SwinIR-S on Urban100 samples. Additionally, on a domain-specific rice grain dataset, the DRFAN outperforms SwinIR-S by +0.19 dB in PSNR and +0.0015 in SSIM, restoring clearer textures and grain boundaries essential for industrial quality inspection. The proposed method provides a compelling balance between model complexity and image reconstruction fidelity, making it well-suited for deployment in resource-constrained visual systems and industrial applications. Full article
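For reference, ×4 super-resolution models such as the one above are typically scored with PSNR and SSIM against the ground-truth high-resolution image. The sketch below shows that evaluation step, assuming images scaled to [0, 1], a small border crop, and a recent scikit-image; the paper's exact protocol is not specified here.

```python
# Hedged sketch of PSNR/SSIM scoring for a x4 super-resolution output
# (border handling and color treatment vary between papers; this is an approximation).
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_sr(hr, sr, border=4):
    """hr, sr: float arrays in [0, 1], shape (H, W) or (H, W, 3); border pixels are cropped first."""
    hr_c = hr[border:-border, border:-border]
    sr_c = sr[border:-border, border:-border]
    psnr = peak_signal_noise_ratio(hr_c, sr_c, data_range=1.0)
    ssim = structural_similarity(hr_c, sr_c, data_range=1.0,
                                 channel_axis=-1 if hr_c.ndim == 3 else None)
    return psnr, ssim

# Random placeholders stand in for a reference/reconstructed image pair.
rng = np.random.default_rng(0)
hr = rng.random((128, 128, 3))
print(evaluate_sr(hr, np.clip(hr + rng.normal(0, 0.05, hr.shape), 0, 1)))
```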

25 pages, 1566 KiB  
Article
Combining QSAR and Molecular Docking for the Methodological Design of Novel Radiotracers Targeting Parkinson’s Disease
by Juan A. Castillo-Garit, Mar Soria-Merino, Karel Mena-Ulecia, Mónica Romero-Otero, Virginia Pérez-Doñate, Francisco Torrens and Facundo Pérez-Giménez
Appl. Sci. 2025, 15(15), 8134; https://doi.org/10.3390/app15158134 - 22 Jul 2025
Viewed by 237
Abstract
Parkinson’s disease (PD) is a neurodegenerative disorder marked by the progressive loss of dopaminergic neurons in the nigrostriatal pathway. The dopamine active transporter (DAT), a key protein involved in dopamine reuptake, serves as a selective biomarker for dopaminergic terminals in the striatum. DAT binding has been extensively studied using in vivo imaging techniques such as Single-Photon Emission Computed Tomography (SPECT) and Positron Emission Tomography (PET). To support the design of new radiotracers targeting DAT, we employ Quantitative Structure–Activity Relationship (QSAR) analysis on a structurally diverse dataset composed of 57 compounds with known affinity constants for DAT. The best-performing QSAR model includes four molecular descriptors and demonstrates robust statistical performance: R2 = 0.7554, Q2LOO = 0.6800, and external R2 = 0.7090. These values indicate strong predictive capability and model stability. The predicted compounds are evaluated using a docking methodology to check the correct coupling and interactions with the DAT. The proposed approach—combining QSAR modeling and docking—offers a valuable strategy for screening and optimizing potential PET/SPECT radiotracers, ultimately aiding in the neuroimaging and early diagnosis of Parkinson’s disease. Full article
(This article belongs to the Special Issue Application of Artificial Intelligence in Biomedical Informatics)
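The validation statistics quoted above (fitted R2 and Q2 from leave-one-out cross-validation) can be computed for any small linear QSAR model as sketched below; the 57 × 4 descriptor matrix and affinity values are simulated placeholders, not the paper's dataset.

```python
# Minimal sketch of QSAR validation statistics: fitted R2 and leave-one-out Q2
# for a four-descriptor linear model (data are simulated placeholders).
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

rng = np.random.default_rng(1)
X = rng.normal(size=(57, 4))                                       # 57 compounds x 4 molecular descriptors
y = X @ np.array([0.8, -0.5, 0.3, 0.2]) + rng.normal(0, 0.3, 57)   # simulated affinity values

model = LinearRegression().fit(X, y)
r2 = model.score(X, y)                                             # fitted R2
y_loo = cross_val_predict(LinearRegression(), X, y, cv=LeaveOneOut())
q2_loo = 1.0 - np.sum((y - y_loo) ** 2) / np.sum((y - y.mean()) ** 2)   # Q2(LOO)
print(f"R2 = {r2:.3f}, Q2(LOO) = {q2_loo:.3f}")
```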

23 pages, 2908 KiB  
Article
A Gradient Enhanced Efficient Global Optimization-Driven Aerodynamic Shape Optimization Framework
by Niyazi Şenol, Hasan U. Akay and Şahin Yiğit
Aerospace 2025, 12(7), 644; https://doi.org/10.3390/aerospace12070644 - 21 Jul 2025
Viewed by 303
Abstract
The aerodynamic optimization of airfoil shapes remains a critical research area for enhancing aircraft performance under various flight conditions. In this study, the RAE 2822 airfoil was selected as a benchmark case to investigate and compare the effectiveness of surrogate-based methods under an Efficient Global Optimization (EGO) framework and an adjoint-based approach in both single-point and multi-point optimization settings. Prior to optimization, the computational fluid dynamics (CFD) model was validated against experimental data to ensure accuracy. For the surrogate-based methods, Kriging (KRG), Kriging with Partial Least Squares (KPLS), Gradient-Enhanced Kriging (GEK), and Gradient-Enhanced Kriging with Partial Least Squares (GEKPLS) were employed. In the single-point optimization, the GEK method achieved the highest drag reduction, outperforming other approaches, while in the multi-point case, GEKPLS provided the best overall improvement. Detailed comparisons were made against existing literature results, with the proposed methods showing competitive and superior performance, particularly in viscous, transonic conditions. The results underline the importance of incorporating gradient information into surrogate models for achieving high-fidelity aerodynamic optimizations. The study demonstrates that surrogate-based methods, especially those enriched with gradient information, can effectively match or exceed the performance of gradient-based adjoint methods within reasonable computational costs. Full article
(This article belongs to the Section Aeronautics)
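One iteration of the EGO loop underlying the study (fit a Kriging surrogate to sampled designs, then pick the next design by Expected Improvement) is sketched below with an ordinary Gaussian-process model. Gradient-enhanced variants such as GEK/GEKPLS need dedicated libraries and are not reproduced, and the toy quadratic objective merely stands in for the CFD-evaluated drag.

```python
# One Efficient Global Optimization (EGO) step with an ordinary Kriging (GP) surrogate
# and Expected Improvement acquisition (toy objective; not the paper's GEK/GEKPLS setup).
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def expected_improvement(gp, X_cand, y_best):
    mu, sigma = gp.predict(X_cand, return_std=True)
    sigma = np.maximum(sigma, 1e-12)
    z = (y_best - mu) / sigma                          # minimization convention
    return (y_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

rng = np.random.default_rng(2)
X = rng.uniform(-1, 1, size=(12, 2))                   # initial sampled design variables (placeholder)
y = np.sum(X ** 2, axis=1)                             # stand-in objective, e.g. a drag coefficient
gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True).fit(X, y)

X_cand = rng.uniform(-1, 1, size=(500, 2))             # candidate designs
x_next = X_cand[np.argmax(expected_improvement(gp, X_cand, y.min()))]   # next point for CFD evaluation
print("next design to evaluate:", x_next)
```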

28 pages, 2612 KiB  
Article
Optimizing Economy with Comfort in Climate Control System Scheduling for Indoor Ice Sports Venues’ Spectator Zones Considering Demand Response
by Zhuoqun Du, Yisheng Liu, Yuyan Xue and Boyang Liu
Algorithms 2025, 18(7), 446; https://doi.org/10.3390/a18070446 - 20 Jul 2025
Viewed by 169
Abstract
With the growing popularity of ice sports, indoor ice sports venues are drawing increasing numbers of spectators. Maintaining comfort in spectator zones presents a significant challenge for the operational scheduling of climate control systems, which integrate ventilation, heating, and dehumidification functions. To explore the potential for cost savings while ensuring user comfort, this study proposes a demand response-integrated optimization model for climate control systems. To enhance the model’s practicality and decision-making efficiency, a two-stage optimization method combining multi-objective optimization algorithms with the technique for order preference by similarity to an ideal solution (TOPSIS) is proposed. For the algorithm comparison, the performance of three typical multi-objective optimization algorithms—NSGA-II, standard MOEA/D, and Multi-Objective Brown Bear Optimization (MOBBO)—is systematically evaluated. The results show that NSGA-II demonstrates the best overall performance on evaluation metrics including runtime, hypervolume (HV), and inverted generational distance (IGD). Simulations conducted in China’s cold regions show that, under comparable comfort levels, schedules incorporating dynamic tariffs are significantly more economical than those that do not incorporate them, reducing operating costs by 25.3%, 24.4%, and 18.7% on typical summer, transitional, and winter days, respectively. Compared to single-objective optimization approaches that focus solely on either comfort enhancement or cost reduction, the proposed multi-objective model achieves a better balance between user comfort and economic performance. This study not only provides an efficient and sustainable solution for climate control scheduling in energy-intensive buildings such as ice sports venues but also offers a valuable methodological reference for energy management and optimization in similar settings. Full article
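The second stage described above, selecting one schedule from the Pareto front by closeness to the ideal solution, is the standard TOPSIS calculation sketched here; the objective weights and the small example matrix of [comfort, cost] values are assumptions, not the paper's data.

```python
# Minimal TOPSIS sketch: rank Pareto-front schedules by relative closeness to the ideal solution
# (weights and the example decision matrix are assumptions).
import numpy as np

def topsis(F, weights, benefit):
    """F: (n_solutions, n_objectives); benefit[j] is True if larger is better for objective j."""
    N = F / np.linalg.norm(F, axis=0)                      # vector-normalize each objective column
    V = N * weights
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    worst = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_pos = np.linalg.norm(V - ideal, axis=1)
    d_neg = np.linalg.norm(V - worst, axis=1)
    return d_neg / (d_pos + d_neg)                         # higher score = closer to the ideal

F = np.array([[0.82, 120.0],                               # [comfort score, operating cost] per schedule
              [0.75,  98.0],
              [0.90, 150.0]])
scores = topsis(F, weights=np.array([0.5, 0.5]), benefit=np.array([True, False]))
best_schedule = int(np.argmax(scores))                     # schedule recommended for operation
print(scores, best_schedule)
```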

23 pages, 2695 KiB  
Article
Estimation of Subtropical Forest Aboveground Biomass Using Active and Passive Sentinel Data with Canopy Height
by Yi Wu, Yu Chen, Chunhong Tian, Ting Yun and Mingyang Li
Remote Sens. 2025, 17(14), 2509; https://doi.org/10.3390/rs17142509 - 18 Jul 2025
Viewed by 356
Abstract
Forest biomass is closely related to carbon sequestration capacity and can reflect the level of forest management. This study utilizes four machine learning algorithms, namely Multivariate Stepwise Regression (MSR), K-Nearest Neighbors (k-NN), Artificial Neural Network (ANN), and Random Forest (RF), to estimate forest aboveground biomass (AGB) in Chenzhou City, Hunan Province, China. In addition, a canopy height model, constructed from a digital surface model (DSM) derived from Sentinel-1 Interferometric Synthetic Aperture Radar (InSAR) and an ICESat-2-corrected SRTM DEM, is incorporated to quantify its impact on the accuracy of AGB estimation. The results indicate the following: (1) The incorporation of multi-source remote sensing data significantly improves the accuracy of AGB estimation compared with single-source models, with the RF model performing best (R2 = 0.69, RMSE = 24.26 t·ha⁻¹). (2) The canopy height model (CHM) obtained from InSAR-LiDAR effectively alleviates the signal saturation effect of optical and SAR data in high-biomass areas (>200 t·ha⁻¹). When the canopy height feature (FCH) is added to the RF model combined with multi-source remote sensing data, the R2 of the AGB estimation model improves to 0.74. (3) In 2018, AGB in Chenzhou City shows clear spatial heterogeneity, with a mean of 51.87 t·ha⁻¹. Biomass increases from the western hilly part (32.15–68.43 t·ha⁻¹) to the eastern mountainous area (89.72–256.41 t·ha⁻¹), peaking in Dongjiang Lake National Forest Park (256.41 t·ha⁻¹). This study proposes a comprehensive feature integration framework that combines red-edge spectral indices for capturing vegetation physiological status, SAR-derived texture metrics for assessing canopy structural heterogeneity, and canopy height metrics to characterize forest three-dimensional structure. This integrated approach enables the robust and accurate monitoring of carbon storage in subtropical forests. Full article
(This article belongs to the Collection Feature Paper Special Issue on Forest Remote Sensing)
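The random-forest regression at the core of the workflow above (plot-level AGB regressed on multi-source remote sensing features and reported with R2 and RMSE) can be sketched as follows; the feature matrix is a random placeholder rather than the Sentinel-derived predictors.

```python
# Sketch of random-forest AGB regression on multi-source features
# (features and biomass values are simulated; not the paper's Sentinel/CHM dataset).
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
X = rng.normal(size=(300, 12))                     # e.g. red-edge indices, SAR texture, canopy height
y = 50 + 20 * X[:, 0] - 10 * X[:, 5] + rng.normal(0, 15, 300)   # simulated AGB in t/ha

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
rf = RandomForestRegressor(n_estimators=500, random_state=0).fit(X_tr, y_tr)
pred = rf.predict(X_te)
print(f"R2 = {r2_score(y_te, pred):.2f}, RMSE = {mean_squared_error(y_te, pred) ** 0.5:.2f} t/ha")
```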

37 pages, 6001 KiB  
Article
Deep Learning-Based Crack Detection on Cultural Heritage Surfaces
by Wei-Che Huang, Yi-Shan Luo, Wen-Cheng Liu and Hong-Ming Liu
Appl. Sci. 2025, 15(14), 7898; https://doi.org/10.3390/app15147898 - 15 Jul 2025
Viewed by 385
Abstract
This study employs a deep learning-based object detection model, GoogleNet, to identify cracks in cultural heritage images. Subsequently, a semantic segmentation model, SegNet, is utilized to determine the location and extent of the cracks. To establish a scale ratio between image pixels and real-world dimensions, a parallel laser-based measurement approach is applied, enabling precise crack length calculations. The results indicate that the percentage error between crack lengths estimated using deep learning and those measured with a caliper is approximately 3%, demonstrating the feasibility and reliability of the proposed method. Additionally, the study examines the impact of iteration count, image quantity, and image category on the performance of GoogleNet and SegNet. While increasing the number of iterations significantly improves the models’ learning performance in the early stages, excessive iterations lead to overfitting. The optimal performance for GoogleNet was achieved at 75 iterations, whereas SegNet reached its best performance after 45,000 iterations. Similarly, while expanding the training dataset enhances model generalization, an excessive number of images may also contribute to overfitting. GoogleNet exhibited optimal performance with a training set of 66 images, while SegNet achieved the best segmentation accuracy when trained with 300 images. Furthermore, the study investigates the effect of different crack image categories by classifying datasets into four groups: general cracks, plain wall cracks, mottled wall cracks, and brick wall cracks. The findings reveal that training GoogleNet and SegNet with general crack images yielded the highest model performance, whereas training with a single crack category substantially reduced generalization capability. Full article
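The parallel-laser scale conversion described above reduces to a simple ratio: two laser dots with a known physical spacing give millimetres per pixel, which converts the segmented crack's pixel length into a real length. The dot coordinates and the 50 mm spacing in the sketch are assumptions, not the paper's setup.

```python
# Pixel-to-millimetre crack length via a parallel-laser scale (spacing and coordinates are assumed).
import numpy as np

def crack_length_mm(crack_pixel_length, laser_px_a, laser_px_b, laser_spacing_mm=50.0):
    """laser_px_a / laser_px_b: (row, col) image coordinates of the two laser dots."""
    px_dist = np.linalg.norm(np.asarray(laser_px_a, float) - np.asarray(laser_px_b, float))
    mm_per_px = laser_spacing_mm / px_dist
    return crack_pixel_length * mm_per_px

# A 930-pixel crack skeleton with laser dots 410 px apart and 50 mm real spacing:
print(round(crack_length_mm(930, (120, 200), (120, 610)), 1), "mm")
```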

18 pages, 3006 KiB  
Article
Non-Linear Regression with Repeated Data—A New Approach to Bark Thickness Modelling
by Krzysztof Ukalski and Szymon Bijak
Forests 2025, 16(7), 1160; https://doi.org/10.3390/f16071160 - 14 Jul 2025
Viewed by 188
Abstract
Broader use of multioperational machines in forestry requires efficient methods for determining various timber parameters. Here, we present a novel approach to model the bark thickness (BT) as a function of stem diameter. Stem diameter (D) is any diameter measured along the bole, not a specific one. The following four regression models were tested: marginal model (MM; reference), classical nonlinear regression with independent residuals (M1), nonlinear regression with residuals correlated within a single tree (M2), and nonlinear regression with the correlation of residuals and random components, taking into account random changes between the trees (M3). Empirical data consisted of larch (Larix sp. Mill.) BT measurements carried out at two sites in northern Poland. Relative root mean square error (RMSE%) and adjusted R-squared (R2adj) served to compare the fitted models. Model fit was tested for each tree separately and for all trees combined. Of the analysed models, M3 turned out to be the best fit at both the individual-tree and all-trees levels. The fit of the regression function M3 for SITE1 (a 50-year-old pure stand in northern Poland) was 87.44% (R2adj), and for SITE2 (a 63-year-old pure stand, also in northern Poland) it was 80.6%. Taking into account the values of RMSE%, at the individual-tree level the M3 model fit at SITE1 was closest to the MM, while at SITE2 it was better than the MM. For the most comprehensive regression model, M3, we checked how the error of the bark thickness estimate varied with stem diameter at different heights (from the base of the trees to the top). In general, the model’s accuracy increased with greater tree height. Full article
(This article belongs to the Section Forest Inventory, Modeling and Remote Sensing)
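The two fit statistics used to compare the bark-thickness models, relative RMSE (RMSE%) and adjusted R-squared, are defined as in the sketch below; the bark measurements themselves are not reproduced.

```python
# Fit statistics used to compare the bark-thickness models: RMSE% and adjusted R-squared.
import numpy as np

def rmse_percent(y_obs, y_fit):
    rmse = np.sqrt(np.mean((y_obs - y_fit) ** 2))
    return 100.0 * rmse / np.mean(y_obs)                 # RMSE relative to the mean observed value

def r2_adjusted(y_obs, y_fit, n_params):
    n = len(y_obs)
    ss_res = np.sum((y_obs - y_fit) ** 2)
    ss_tot = np.sum((y_obs - np.mean(y_obs)) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    return 1.0 - (1.0 - r2) * (n - 1) / (n - n_params - 1)
```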

40 pages, 3646 KiB  
Article
Novel Deep Learning Model for Glaucoma Detection Using Fusion of Fundus and Optical Coherence Tomography Images
by Saad Islam, Ravinesh C. Deo, Prabal Datta Barua, Jeffrey Soar and U. Rajendra Acharya
Sensors 2025, 25(14), 4337; https://doi.org/10.3390/s25144337 - 11 Jul 2025
Viewed by 569
Abstract
Glaucoma is a leading cause of irreversible blindness worldwide, yet early detection can prevent vision loss. This paper proposes a novel deep learning approach that combines two ophthalmic imaging modalities, fundus photographs and optical coherence tomography scans, as paired images from the same eye of each patient for automated glaucoma detection. We develop separate convolutional neural network models for fundus and optical coherence tomography images and a fusion model that integrates features from both modalities for each eye. The models are trained and evaluated on a private clinical dataset (Bangladesh Eye Hospital and Institute Ltd.) consisting of 216 healthy eye images (108 fundus, 108 optical coherence tomography) from 108 patients and 200 glaucomatous eye images (100 fundus, 100 optical coherence tomography) from 100 patients. Our methodology includes image preprocessing pipelines for each modality, custom convolutional neural network/ResNet-based architectures for single-modality analysis, and a two-branch fusion network combining fundus and optical coherence tomography feature representations. We report the performance (accuracy, sensitivity, specificity, and area under the curve) of the fundus-only, optical coherence tomography-only, and fusion models. In addition to a fixed test set evaluation, we perform five-fold cross-validation, confirming the robustness and consistency of the fusion model across multiple data partitions. On our fixed test set, the fundus-only model achieves 86% accuracy (AUC 0.89) and the optical coherence tomography-only model, 84% accuracy (AUC 0.87). Our fused model reaches 92% accuracy (AUC 0.95), an absolute improvement of 6 percentage points and 8 percentage points over the fundus and OCT baselines, respectively. McNemar’s test on pooled five-fold validation predictions for fundus-only vs. fused (b = 3, c = 18) yields χ² = 10.7 (p = 0.001), and for optical coherence tomography-only vs. fused (b_o = 5, c_o = 20) yields χ² = 9.0 (p = 0.003), confirming that the fusion gains are significant. Five-fold cross-validation further confirms these improvements (mean AUC 0.952 ± 0.011). We also compare our results with the existing literature and discuss the clinical significance, limitations, and future work. To the best of our knowledge, this is the first time a deep learning model has been applied to a fusion of paired fundus and optical coherence tomography images of the same patient for the detection of glaucoma. Full article
(This article belongs to the Special Issue AI and Big Data Analytics for Medical E-Diagnosis)
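The McNemar statistics quoted above follow directly from the discordant-pair counts. The check below reproduces the reported values with the uncorrected statistic (b − c)²/(b + c); whether the authors applied a continuity correction is not stated, so this is a consistency check rather than their exact computation.

```python
# McNemar's test from discordant-pair counts; the reported chi-square values match the
# uncorrected statistic (continuity-correction choice in the paper is not stated).
from scipy.stats import chi2

def mcnemar_chi2(b, c, continuity=False):
    num = (abs(b - c) - 1) ** 2 if continuity else (b - c) ** 2
    stat = num / (b + c)
    return stat, chi2.sf(stat, df=1)        # (chi-square statistic, p-value with 1 dof)

print(mcnemar_chi2(3, 18))    # fundus-only vs. fused:  ~10.7, p ~= 0.001
print(mcnemar_chi2(5, 20))    # OCT-only vs. fused:     ~9.0,  p ~= 0.003
```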

37 pages, 100736 KiB  
Article
Hybrid GIS-Transformer Approach for Forecasting Sentinel-1 Displacement Time Series
by Lama Moualla, Alessio Rucci, Giampiero Naletto, Nantheera Anantrasirichai and Vania Da Deppo
Remote Sens. 2025, 17(14), 2382; https://doi.org/10.3390/rs17142382 - 10 Jul 2025
Cited by 1 | Viewed by 309
Abstract
This study presents a deep learning-based approach for forecasting Sentinel-1 displacement time series, with particular attention to irregular temporal patterns—an aspect often overlooked in previous works. Displacement data were generated using the Parallel Small BAseline Subset (P-SBAS) technique via the Geohazard Thematic Exploitation Platform (G-TEP). Initial experiments on a regular dataset from Lombardy employed Long Short-Term Memory (LSTM) models to forecast multiple future time steps. Empirical analysis determined that optimal forecasting is achieved with a 50-time-step input sequence, and that predicting 10% of the input sequence length strikes a balance between temporal coverage and accuracy. The investigation then extended to irregular datasets from Lisbon and Washington, comparing two preprocessing strategies: imputation and the inclusion of time intervals as a second feature. While imputation improved one-step predictions, it was inadequate for multi-step forecasting. To address this, a Time-Gated LSTM (TG-LSTM) was implemented. TG-LSTM outperformed standard LSTM for irregular data in one-step prediction but faced limitations in handling heteroscedasticity and computational cost during multi-step forecasting. These issues were effectively resolved using Temporal Fusion Transformers (TFT), which achieved the best performance, with RMSE values of 1.71 mm/year (Lisbon) and 1.26 mm/year (Washington). A key contribution of this work is the development of a GIS-integrated forecasting toolbox that incorporates LSTM models for regular sequences and TG-LSTM/TFT models for irregular ones. The toolbox enables both single- and multi-step displacement predictions, offering a scalable solution for geohazard monitoring and early warning applications. Full article
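The windowing rule described above (50-step input sequences with a forecast horizon of 10% of the input length, i.e., 5 steps) can be sketched as follows; the displacement series here is synthetic, and the LSTM/TG-LSTM/TFT models themselves are not reproduced.

```python
# Windowing a displacement series into 50-step inputs and 5-step targets
# (synthetic series; the forecasting models themselves are not reproduced here).
import numpy as np

def make_windows(series, n_in=50, horizon_frac=0.10):
    n_out = max(1, int(round(n_in * horizon_frac)))          # 10% of the input length -> 5 steps
    X, Y = [], []
    for i in range(len(series) - n_in - n_out + 1):
        X.append(series[i:i + n_in])
        Y.append(series[i + n_in:i + n_in + n_out])
    return np.array(X), np.array(Y)

displacement = np.cumsum(np.random.default_rng(4).normal(0, 0.5, 400))   # synthetic mm series
X, Y = make_windows(displacement)      # X: (n, 50) input windows, Y: (n, 5) multi-step targets
print(X.shape, Y.shape)
```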

17 pages, 1937 KiB  
Article
Hybrid Deep Learning Model for Improved Glaucoma Diagnostic Accuracy
by Nahum Flores, José La Rosa, Sebastian Tuesta, Luis Izquierdo, María Henriquez and David Mauricio
Information 2025, 16(7), 593; https://doi.org/10.3390/info16070593 - 10 Jul 2025
Viewed by 308
Abstract
Glaucoma is an irreversible neurodegenerative disease that affects the optic nerve, leading to partial or complete vision loss. Early and accurate detection is crucial to prevent vision impairment, which necessitates the development of highly precise diagnostic tools. Deep learning (DL) has emerged as a promising approach for glaucoma diagnosis, where the model is trained on datasets of fundus images. To improve the detection accuracy, we propose a hybrid model for glaucoma detection that combines multiple DL models with two fine-tuning strategies and uses a majority voting scheme to determine the final prediction. In experiments, the hybrid model achieved a detection accuracy of 96.55%, a sensitivity of 98.84%, and a specificity of 94.32%. Integrating datasets was found to improve the performance compared to using them separately even with transfer learning. When compared to individual DL models, the hybrid model achieved a 20.69% improvement in accuracy compared to the best model when applied to a single dataset, a 13.22% improvement when applied with transfer learning across all datasets, and a 1.72% improvement when applied to all datasets. These results demonstrate the potential of hybrid DL models to detect glaucoma more accurately than individual models. Full article
(This article belongs to the Section Artificial Intelligence)
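The majority-voting step that combines the individual networks' predictions is sketched below; the tie-breaking rule (classify as glaucoma when the vote is split) is an assumption, not the paper's stated policy.

```python
# Majority voting over base-model predictions (0 = healthy, 1 = glaucoma); tie-break rule assumed.
import numpy as np

def majority_vote(predictions):
    """predictions: (n_models, n_samples) array of 0/1 class labels."""
    predictions = np.asarray(predictions)
    votes = predictions.sum(axis=0)
    return (2 * votes >= predictions.shape[0]).astype(int)   # 1 if at least half the models vote glaucoma

preds = np.array([[1, 0, 1, 1],        # model A
                  [1, 0, 0, 1],        # model B
                  [0, 0, 1, 1]])       # model C
print(majority_vote(preds))            # -> [1 0 1 1]
```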

18 pages, 2260 KiB  
Article
Study of Detection of Typical Pesticides in Paddy Water Based on Dielectric Properties
by Shuanggen Huang, Mei Yang, Junshi Huang, Longwei Shang, Qi Chen, Fang Peng, Muhua Liu, Yan Wu and Jinhui Zhao
Agronomy 2025, 15(7), 1666; https://doi.org/10.3390/agronomy15071666 - 9 Jul 2025
Viewed by 250
Abstract
Due to the dramatic increase in pesticide usage and improper application, large amounts of unused pesticides enter the environment through paddy water, causing severe pesticide pollution. To find a rapid method for identifying pesticide types and predicting their concentrations, the frequency response of the dielectric properties of pesticides in paddy water was analyzed. A rapid detection method for typical pesticides such as chlorpyrifos, isoprothiolane, imidacloprid, and carbendazim was developed based on their dielectric properties. Amplitude and phase frequency response data for blank paddy water samples and 15 types of pesticide-containing paddy water samples were collected at 10 different temperatures. Principal component analysis (PCA) and competitive adaptive reweighted sampling (CARS) were used to extract characteristic frequencies. A species identification model based on a support vector machine (SVM) for rapid detection of pesticides in paddy water was established using the amplitude and phase frequency response data separately. A total of 431 sets of frequency response data from nine types of paddy water samples were divided into training and prediction sets at a 3:1 ratio, and a content prediction model based on artificial neural networks (ANN) with multiple inputs and a single output was established using the amplitude and phase frequency response data after CARS feature extraction. The experimental results show that both the PCA-SVM and CARS-SVM species identification models established using amplitude and phase frequency response data achieve identification accuracies above 90%. The PCA-SVM model based on phase frequency response data identifies typical pesticides in paddy water best, with prediction accuracies of 97.5–100%. The ANN content prediction model established using phase frequency response data also performs well: the highest R2 prediction values for chlorpyrifos, isoprothiolane, imidacloprid, and carbendazim in paddy water were 0.8249, 0.8639, 0.9113, and 0.8368, respectively. This research established a dielectric-property-based method for identifying and predicting the content of typical pesticides in paddy water, providing a theoretical basis for the hardware design of capacitive sensors based on dielectric properties and for the detection of pesticide residues in paddy water. It offers a new method for pesticide residue detection, which is of great significance for scientific pesticide application and sustainable agricultural development. Full article
(This article belongs to the Section Pest and Disease Management)
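The PCA-SVM identification step maps each measured frequency-response vector to a sample class after dimensionality reduction, roughly as sketched below; the spectra are random placeholders, and the component count and SVM settings are assumptions rather than the paper's tuned values.

```python
# PCA + SVM identification of sample type from frequency-response vectors
# (random placeholder data; component count and SVM hyperparameters are assumptions).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(5)
X = rng.normal(size=(431, 200))              # 431 samples x 200 frequency points (simulated)
y = rng.integers(0, 9, size=431)             # nine paddy-water sample classes

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, stratify=y, random_state=0)
clf = make_pipeline(StandardScaler(), PCA(n_components=10), SVC(kernel="rbf", C=10.0))
clf.fit(X_tr, y_tr)
print("identification accuracy:", clf.score(X_te, y_te))
```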

24 pages, 4465 KiB  
Article
A Deep Learning-Based Echo Extrapolation Method by Fusing Radar Mosaic and RMAPS-NOW Data
by Shanhao Wang, Zhiqun Hu, Fuzeng Wang, Ruiting Liu, Lirong Wang and Jiexin Chen
Remote Sens. 2025, 17(14), 2356; https://doi.org/10.3390/rs17142356 - 9 Jul 2025
Viewed by 326
Abstract
Radar echo extrapolation is a critical forecasting tool in the field of meteorology, playing an especially vital role in nowcasting and weather modification operations. In recent years, spatiotemporal sequence prediction models based on deep learning have garnered significant attention and achieved notable progress in radar echo extrapolation. However, most of these extrapolation network architectures are built upon convolutional neural networks, using radar echo images as input. Typically, radar echo intensity values ranging from −5 to 70 dBZ with a resolution of 5 dBZ are converted into 0–255 grayscale images from pseudo-color representations, which inevitably results in the loss of important echo details. Furthermore, as the extrapolation time increases, the smoothing effect inherent to convolution operations leads to increasingly blurred predictions. To address the algorithmic limitations of deep learning-based echo extrapolation models, this study introduces three major improvements: (1) A Deep Convolutional Generative Adversarial Network (DCGAN) is integrated into the ConvLSTM-based extrapolation model to construct a DCGAN-enhanced architecture, significantly improving the quality of radar echo extrapolation; (2) Considering that the evolution of radar echoes is closely related to the surrounding meteorological environment, the study incorporates specific physical variable products from the initial zero-hour field of RMAPS-NOW (the Rapid-update Multiscale Analysis and Prediction System—NOWcasting subsystem), developed by the Institute of Urban Meteorology, China. These variables are encoded jointly with high-resolution (0.5 dB) radar mosaic data to form multiple radar cells as input. A multi-channel radar echo extrapolation network architecture (MR-DCGAN) is then designed based on the DCGAN framework; (3) Since radar echo decay becomes more prominent over longer extrapolation horizons, this study departs from previous approaches that use a single model to extrapolate 120 min. Instead, it customizes time-specific loss functions for spatiotemporal attenuation correction and independently trains 20 separate models to achieve the full 120 min extrapolation. The dataset consists of radar composite reflectivity mosaics over North China within the range of 116.10–117.50°E and 37.77–38.77°N, collected from June to September during 2018–2022. A total of 39,000 data samples were matched with the initial zero-hour fields from RMAPS-NOW, with 80% (31,200 samples) used for training and 20% (7800 samples) for testing. Based on the ConvLSTM and the proposed MR-DCGAN architecture, 20 extrapolation models were trained using four different input encoding strategies. The models were evaluated using the Critical Success Index (CSI), Probability of Detection (POD), and False Alarm Ratio (FAR). Compared to the baseline ConvLSTM-based extrapolation model without physical variables, the models trained with the MR-DCGAN architecture achieved, on average, 18.59%, 8.76%, and 11.28% higher CSI values, 19.46%, 19.21%, and 19.18% higher POD values, and 19.85%, 11.48%, and 9.88% lower FAR values under the 20 dBZ, 30 dBZ, and 35 dBZ reflectivity thresholds, respectively. Among all tested configurations, the model that incorporated three physical variables—relative humidity (rh), u-wind, and v-wind—demonstrated the best overall performance across various thresholds, with CSI and POD values improving by an average of 16.75% and 24.75%, respectively, and FAR reduced by 15.36%. 
Moreover, the SSIM of the MR-DCGAN models demonstrates a more gradual decline and maintains higher overall values, indicating superior capability in preserving echo structural features. Meanwhile, the comparative experiments demonstrate that the MR-DCGAN (u, v + rh) model outperforms the MR-ConvLSTM (u, v + rh) model in terms of evaluation metrics. In summary, the model trained with the MR-DCGAN architecture effectively enhances the accuracy of radar echo extrapolation. Full article
(This article belongs to the Special Issue Advance of Radar Meteorology and Hydrology II)
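The categorical verification scores used above (CSI, POD, FAR) are computed from hit, miss, and false-alarm counts at a reflectivity threshold, as sketched below; the forecast and observation fields are placeholders, and the small epsilon guard against empty contingency cells is an added convenience.

```python
# CSI, POD, and FAR at a reflectivity threshold from gridded forecast/observation fields
# (placeholder fields; a tiny epsilon guards against empty contingency-table cells).
import numpy as np

def csi_pod_far(forecast_dbz, observed_dbz, threshold=30.0, eps=1e-12):
    f = forecast_dbz >= threshold
    o = observed_dbz >= threshold
    hits = np.sum(f & o)
    misses = np.sum(~f & o)
    false_alarms = np.sum(f & ~o)
    csi = hits / (hits + misses + false_alarms + eps)
    pod = hits / (hits + misses + eps)
    far = false_alarms / (hits + false_alarms + eps)
    return csi, pod, far

rng = np.random.default_rng(6)
obs = rng.uniform(0, 60, size=(256, 256))                 # placeholder composite reflectivity (dBZ)
fcst = np.clip(obs + rng.normal(0, 5, obs.shape), 0, 70)  # placeholder extrapolated field
print(csi_pod_far(fcst, obs, threshold=30.0))
```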
