Search Results (139)

Search Parameters:
Keywords = DR-Net

37 pages, 1831 KiB  
Review
Deep Learning Techniques for Retinal Layer Segmentation to Aid Ocular Disease Diagnosis: A Review
by Oliver Jonathan Quintana-Quintana, Marco Antonio Aceves-Fernández, Jesús Carlos Pedraza-Ortega, Gendry Alfonso-Francia and Saul Tovar-Arriaga
Computers 2025, 14(8), 298; https://doi.org/10.3390/computers14080298 - 22 Jul 2025
Viewed by 36
Abstract
Age-related ocular conditions like macular degeneration (AMD), diabetic retinopathy (DR), and glaucoma are leading causes of irreversible vision loss globally. Optical coherence tomography (OCT) provides essential non-invasive visualization of retinal structures for early diagnosis, but manual analysis of these images is labor-intensive and prone to variability. Deep learning (DL) techniques have emerged as powerful tools for automating retinal layer segmentation in OCT scans, potentially improving diagnostic efficiency and consistency. This review systematically evaluates the state of the art in DL-based retinal layer segmentation using the PRISMA methodology. We analyze various architectures (including CNNs, U-Net variants, GANs, and transformers), examine the characteristics and availability of datasets, discuss common preprocessing and data augmentation strategies, identify frequently targeted retinal layers, and compare performance evaluation metrics across studies. Our synthesis highlights significant progress, particularly with U-Net-based models, which often achieve Dice scores exceeding 0.90 for well-defined layers, such as the retinal pigment epithelium (RPE). However, it also identifies ongoing challenges, including dataset heterogeneity, inconsistent evaluation protocols, difficulties in segmenting specific layers (e.g., OPL, RNFL), and the need for improved clinical integration. This review provides a comprehensive overview of current strengths, limitations, and future directions to guide research towards more robust and clinically applicable automated segmentation tools for enhanced ocular disease diagnosis.
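As a concrete illustration of the Dice scores reported in this review, the coefficient can be computed from a pair of binary segmentation masks; a minimal pure-Python sketch (the masks, sizes, and function name are illustrative, not taken from any of the surveyed papers):

```python
def dice_score(pred, target):
    """Dice coefficient for two binary masks given as flat 0/1 lists."""
    intersection = sum(p * t for p, t in zip(pred, target))
    total = sum(pred) + sum(target)
    if total == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * intersection / total

# Toy example: 8-pixel masks for a single retinal layer
pred   = [1, 1, 1, 0, 0, 1, 0, 0]
target = [1, 1, 0, 0, 0, 1, 1, 0]
print(round(dice_score(pred, target), 3))  # → 0.75
```

A Dice score above 0.90, as reported for the RPE, means the overlap term dominates both mask areas almost completely.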

34 pages, 712 KiB  
Review
Transformation of Demand-Response Aggregator Operations in Future US Electricity Markets: A Review of Technologies and Open Research Areas with Game Theory
by Styliani I. Kampezidou and Dimitri N. Mavris
Appl. Sci. 2025, 15(14), 8066; https://doi.org/10.3390/app15148066 - 20 Jul 2025
Viewed by 143
Abstract
The decarbonization of electricity generation by 2030 and the realization of a net-zero economy by 2050 are central to the United States’ climate strategy. However, large-scale renewable integration introduces operational challenges, including extreme ramping, unsafe dispatch, and price volatility. This review investigates how demand–response (DR) aggregators and distributed loads can support these climate goals while addressing critical operational challenges. We hypothesize that current DR aggregator frameworks fall short in the areas of distributed load operational flexibility, scalability with the number of distributed loads (prosumers), prosumer privacy preservation, DR aggregator and prosumer competition, and uncertainty management, limiting their potential to enable large-scale prosumer participation. Using a systematic review methodology, we evaluate existing DR aggregator and prosumer frameworks through the proposed FCUPS criteria—flexibility, competition, uncertainty quantification, privacy, and scalability. The main results highlight significant gaps in current frameworks: limited support for decentralized operations; inadequate privacy protections for prosumers; and insufficient capabilities for managing competition, uncertainty, and flexibility at scale. We conclude by identifying open research directions, including the need for game-theoretic and machine learning approaches that ensure privacy, scalability, and robust market participation. Addressing these gaps is essential to shape future research agendas and to enable DR aggregators to contribute meaningfully to US climate targets.

33 pages, 5602 KiB  
Article
CELM: An Ensemble Deep Learning Model for Early Cardiomegaly Diagnosis in Chest Radiography
by Erdem Yanar, Fırat Hardalaç and Kubilay Ayturan
Diagnostics 2025, 15(13), 1602; https://doi.org/10.3390/diagnostics15131602 - 25 Jun 2025
Viewed by 489
Abstract
Background/Objectives: Cardiomegaly—defined as the abnormal enlargement of the heart—is a key radiological indicator of various cardiovascular conditions. Early detection is vital for initiating timely clinical intervention and improving patient outcomes. This study investigates the application of deep learning techniques for the automated diagnosis of cardiomegaly from chest X-ray (CXR) images, utilizing both convolutional neural networks (CNNs) and Vision Transformers (ViTs). Methods: We assembled one of the largest and most diverse CXR datasets to date, combining posteroanterior (PA) images from PadChest, NIH CXR, VinDr-CXR, and CheXpert. Multiple pre-trained CNN architectures (VGG16, ResNet50, InceptionV3, DenseNet121, DenseNet201, and AlexNet), as well as Vision Transformer models, were trained and compared. In addition, we introduced a novel stacking-based ensemble model—Combined Ensemble Learning Model (CELM)—that integrates complementary CNN features via a meta-classifier. Results: The CELM achieved the highest diagnostic performance, with a test accuracy of 92%, precision of 99%, recall of 89%, F1-score of 0.94, specificity of 92.0%, and AUC of 0.90. These results highlight the model’s high agreement with expert annotations and its potential for reliable clinical use. Notably, Vision Transformers offered competitive performance, suggesting their value as complementary tools alongside CNNs. Conclusions: With further validation, the proposed CELM framework may serve as an efficient and scalable decision-support tool for cardiomegaly screening, particularly in resource-limited settings such as intensive care units (ICUs) and emergency departments (EDs), where rapid and accurate diagnosis is imperative.
(This article belongs to the Special Issue Machine-Learning-Based Disease Diagnosis and Prediction)
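The stacking idea behind an ensemble like CELM can be sketched as a meta-classifier over base-model probabilities. The sketch below uses a logistic meta-classifier with invented weights; the paper's actual meta-classifier and its learned parameters are not specified here, so every number is an illustrative placeholder:

```python
import math

def meta_predict(base_probs, weights, bias):
    """Stacking: a logistic meta-classifier combines base-model outputs.

    base_probs: per-image cardiomegaly probabilities from base CNNs
    weights/bias: meta-classifier parameters (illustrative values; in a
    real stack they would be fit on held-out base-model predictions)
    """
    z = bias + sum(w * p for w, p in zip(weights, base_probs))
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid -> ensemble probability

# Probabilities from three hypothetical base models for one chest X-ray
probs = [0.80, 0.65, 0.90]
weights = [1.5, 1.0, 2.0]  # assumed, not from the paper
p = meta_predict(probs, weights, bias=-2.0)
print(p > 0.5)  # classify as cardiomegaly if above 0.5 → True
```

The design point is that the meta-classifier can learn which base model to trust for which kind of input, rather than averaging them uniformly.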

27 pages, 20364 KiB  
Article
A Comparative Study of Lesion-Centered and Severity-Based Approaches to Diabetic Retinopathy Classification: Improving Interpretability and Performance
by Gang-Min Park, Ji-Hoon Moon and Ho-Gil Jung
Biomedicines 2025, 13(6), 1446; https://doi.org/10.3390/biomedicines13061446 - 12 Jun 2025
Viewed by 444
Abstract
Background: Despite advances in artificial intelligence (AI) for Diabetic Retinopathy (DR) classification, traditional severity-based approaches often lack interpretability and fail to capture specific lesion-centered characteristics. To address these limitations, we constructed the National Medical Center (NMC) dataset, independently annotated by medical professionals with detailed labels of major DR lesions, including retinal hemorrhages, microaneurysms, and exudates. Methods: This study explores four critical research questions. First, we assess the analytical advantages of lesion-centered labeling compared to traditional severity-based labeling. Second, we investigate the potential complementarity between these labeling approaches through integration experiments. Third, we analyze how various model architectures and classification strategies perform under different labeling schemes. Finally, we evaluate decision-making differences between labeling methods using visualization techniques. We benchmarked the lesion-centered NMC dataset against the severity-based public Asia Pacific Tele-Ophthalmology Society (APTOS) dataset, conducting experiments with EfficientNet—a convolutional neural network architecture—and diverse classification strategies. Results: Our results demonstrate that binary classification effectively identifies severe non-proliferative Diabetic Retinopathy (Severe NPDR) exhibiting complex lesion patterns, while relationship-based learning enhances performance for underrepresented classes. Transfer learning from NMC to APTOS notably improved severity classification, achieving performance gains of 15.2% in mild cases and 66.3% in severe cases through feature fusion using Bidirectional Feature Pyramid Network (BiFPN) and Feature Pyramid Network (FPN). Visualization results confirmed that lesion-centered models focus more precisely on pathological features. Conclusions: Our findings highlight the benefits of integrating lesion-centered and severity-based information to enhance both accuracy and interpretability in DR classification. Future research directions include spatial lesion mapping and the development of clinically grounded learning methodologies.
(This article belongs to the Section Endocrinology and Metabolism Research)

24 pages, 58563 KiB  
Article
Interpretable Deep Learning for Diabetic Retinopathy: A Comparative Study of CNN, ViT, and Hybrid Architectures
by Weijie Zhang, Veronika Belcheva and Tatiana Ermakova
Computers 2025, 14(5), 187; https://doi.org/10.3390/computers14050187 - 12 May 2025
Viewed by 1477
Abstract
Diabetic retinopathy (DR) is a leading cause of vision impairment worldwide, requiring early detection for effective treatment. Deep learning models have been widely used for automated DR classification, with Convolutional Neural Networks (CNNs) being the most established approach. Recently, Vision Transformers (ViTs) have shown promise, but a direct comparison of their performance and interpretability remains limited. Additionally, hybrid models that combine CNN and transformer-based architectures have not been extensively studied. This work systematically evaluates CNNs (ResNet-50), ViTs (Vision Transformer and SwinV2-Tiny), and hybrid models (Convolutional Vision Transformer, LeViT-256, and CvT-13) on DR classification using publicly available retinal image datasets. The models are assessed based on classification accuracy and interpretability, applying Grad-CAM and Attention-Rollout to analyze decision-making patterns. Results indicate that hybrid models outperform both standalone CNNs and ViTs, achieving a better balance between local feature extraction and global context awareness. The best-performing model (CvT-13) achieved a Quadratic Weighted Kappa (QWK) score of 0.84 and an AUC of 0.93 on the test set. Interpretability analysis shows that CNNs focus on fine-grained lesion details, while ViTs exhibit broader but less localized attention. These findings provide valuable insights for optimizing deep learning models in medical imaging, supporting the development of clinically viable AI-driven DR screening systems.
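The Quadratic Weighted Kappa reported for CvT-13 penalizes ordinal disagreements by the squared distance between severity grades, so confusing grade 0 with grade 4 costs far more than confusing adjacent grades. A minimal sketch of the metric itself (not the authors' evaluation code):

```python
def quadratic_weighted_kappa(y_true, y_pred, n_classes):
    """Quadratic Weighted Kappa for ordinal labels 0..n_classes-1."""
    # Observed confusion matrix
    O = [[0] * n_classes for _ in range(n_classes)]
    for t, p in zip(y_true, y_pred):
        O[t][p] += 1
    n = len(y_true)
    hist_t = [sum(row) for row in O]                                   # true-label histogram
    hist_p = [sum(O[i][j] for i in range(n_classes)) for j in range(n_classes)]
    num = den = 0.0
    for i in range(n_classes):
        for j in range(n_classes):
            w = (i - j) ** 2 / (n_classes - 1) ** 2  # quadratic penalty
            E = hist_t[i] * hist_p[j] / n            # expected count under independence
            num += w * O[i][j]
            den += w * E
    return 1.0 - num / den

# Perfect agreement on a 5-grade DR scale gives kappa = 1.0
print(quadratic_weighted_kappa([0, 1, 2, 3, 4], [0, 1, 2, 3, 4], 5))  # → 1.0
```

A QWK of 0.84 therefore indicates strong ordinal agreement well beyond chance, not merely 84% exact-match accuracy.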

19 pages, 2252 KiB  
Article
Enhanced ResNet50 for Diabetic Retinopathy Classification: External Attention and Modified Residual Branch
by Menglong Feng, Yixuan Cai and Shen Yan
Mathematics 2025, 13(10), 1557; https://doi.org/10.3390/math13101557 - 9 May 2025
Viewed by 701
Abstract
One of the common microvascular complications in diabetic patients is diabetic retinopathy (DR), which primarily impacts the retinal blood vessels. As diabetes progresses, the incidence of DR gradually increases, and, in serious cases, it can cause vision loss and even blindness. Diagnosing DR early is essential to mitigate its consequences, and deep learning models provide an effective approach. In this study, we propose an improved ResNet50 model that replaces the 3 × 3 convolution in the residual structure with an external attention mechanism, which improves the model’s awareness of global information and allows it to grasp the characteristics of the input data more thoroughly. Multiscale convolution is added to the residual branch, further improving the model’s ability to extract both local and global features and increasing the processing accuracy of image details. Finally, the Sophia optimizer replaces the traditional Adam optimizer, further optimizing the model’s classification performance. In this study, 3662 images from the Kaggle open dataset were used to generate 20,184 images for model training after image preprocessing and data augmentation. Experimental results show that the improved ResNet50 model achieves a classification accuracy of 96.68% on the validation set, 4.36% higher than the original architecture, with a Kappa value increased by 5.45%. These improvements contribute to the early diagnosis of DR and decrease the likelihood of blindness among patients.

35 pages, 7003 KiB  
Article
Federated LeViT-ResUNet for Scalable and Privacy-Preserving Agricultural Monitoring Using Drone and Internet of Things Data
by Mohammad Aldossary, Jaber Almutairi and Ibrahim Alzamil
Agronomy 2025, 15(4), 928; https://doi.org/10.3390/agronomy15040928 - 10 Apr 2025
Cited by 1 | Viewed by 782
Abstract
Precision agriculture is necessary for dealing with problems like pest outbreaks, water scarcity, and declining crop health. Manual inspections and broad-spectrum pesticide application are inefficient, time-consuming, and dangerous. Drone photography and IoT sensors offer rapid, high-resolution, multimodal agricultural data collection, but regional diversity, data heterogeneity, and privacy concerns make it difficult to draw conclusions from these data. This study proposes a lightweight, hybrid deep learning architecture called federated LeViT-ResUNet that combines the spatial efficiency of LeViT transformers with ResUNet’s exact pixel-level segmentation to address these issues. The system uses multispectral drone footage and IoT sensor data to identify insect hotspots, assess crop health, and predict yield in real time. The dynamic relevance and sparsity-based feature selector (DRS-FS) improves feature ranking and reduces redundancy. Spectral normalization, spatial–temporal alignment, and dimensionality reduction provide reliable input representation. Unlike centralized models, our platform trains across dispersed client datasets using federated learning to preserve privacy and capture regional trends. A large, open-access agricultural dataset covering varied environmental conditions was used for simulation experiments. The proposed approach improves on conventional models like ResNet, DenseNet, and the vision transformer, with a 98.9% classification accuracy and 99.3% AUC. The LeViT-ResUNet system is scalable and sustainable for privacy-preserving precision agriculture because of its high generalization, low latency, and communication efficiency. This study lays the groundwork for real-time, intelligent agricultural monitoring systems in diverse, resource-constrained farming situations.
(This article belongs to the Section Precision and Digital Agriculture)
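Federated training of the kind described above typically aggregates client models with FedAvg-style weighted averaging, so raw field data never leaves each farm. A toy sketch under that assumption (the abstract does not specify the aggregation rule; the two clients, their parameters, and dataset sizes are invented):

```python
def fed_avg(client_weights, client_sizes):
    """FedAvg: average client parameters weighted by local dataset size.

    client_weights: list of flat parameter vectors, one per client (farm)
    client_sizes: number of local training samples per client
    Only parameters are shared with the server, never the raw sensor data.
    """
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [
        sum(w[k] * s for w, s in zip(client_weights, client_sizes)) / total
        for k in range(dim)
    ]

# Two hypothetical clients with 2-parameter models
global_w = fed_avg([[1.0, 0.0], [3.0, 2.0]], client_sizes=[100, 300])
print(global_w)  # → [2.5, 1.5], pulled toward the larger client
```

One communication round consists of broadcasting `global_w`, local training on each client, and re-aggregating; privacy comes from never centralizing the data itself.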

25 pages, 5958 KiB  
Article
Characterization of Energy Profile and Load Flexibility in Regional Water Utilities for Cost Reduction and Sustainable Development
by B. M. Ruhul Amin, Rakibuzzaman Shah, Suryani Lim, Tanveer Choudhury and Andrew Barton
Sustainability 2025, 17(8), 3364; https://doi.org/10.3390/su17083364 - 9 Apr 2025
Viewed by 737
Abstract
Water utilities use a significant amount of electrical energy due to the rising demand for wastewater treatment driven by environmental and economic reasons. The growing demand for energy, rising energy costs, and the drive toward achieving net-zero emissions require a sustainable energy future for the water industry. This can be achieved by integrating onsite renewable energy sources (RESs), energy storage, demand management, and participation in demand response (DR) programs. This paper analyzes the energy profile and load flexibility of water utilities using a data-driven approach to reduce energy costs by leveraging RESs for regional water utilities. It also assesses the potential for DR participation across different types of water utilities, considering peak-load shifting and battery storage installations. Given the increasing frequency of extreme weather events, such as bushfires, heatwaves, droughts, and prolonged cold and wet season floods, regional water industries in Australia serve as a relevant case study of sectors already impacted by these challenges. First, the data characteristics across the water and energy components of regional water industries are analyzed. Next, barriers and challenges in data acquisition and processing in water industries are identified and recommendations are made for improving data coordination (interoperability) to enable the use of a single platform for identifying DR opportunities. Finally, the energy profile and load flexibility of regional water industries are examined to evaluate onsite generation and battery storage options for participating in DR operations. Operational data from four regional sites across two regional Australian water utilities are used in this study.
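In the simplest two-tariff case, the value of the peak-load shifting assessed above reduces to pricing the shifted energy at the tariff difference. An illustrative sketch with invented prices and load figures (the study's actual tariff data is not reproduced here):

```python
def shifting_savings(shifted_kwh, peak_price, offpeak_price):
    """Daily cost saving from moving flexible load out of the peak window.

    Prices in $/kWh; a simplified two-tariff model. All numbers are
    illustrative, not taken from the study's utility data.
    """
    return shifted_kwh * (peak_price - offpeak_price)

# A 500 kWh pumping run moved from a $0.45 peak to a $0.18 off-peak tariff
saving = shifting_savings(500, 0.45, 0.18)
print(f"${saving:.2f} saved per day")  # prints "$135.00 saved per day"
```

Pumped storage and treatment scheduling make water utilities unusually flexible loads, which is why this arithmetic scales well in DR programs.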

15 pages, 1619 KiB  
Article
Optimal Convolutional Networks for Staging and Detecting of Diabetic Retinopathy
by Minyar Sassi Hidri, Adel Hidri, Suleiman Ali Alsaif, Muteeb Alahmari and Eman AlShehri
Information 2025, 16(3), 221; https://doi.org/10.3390/info16030221 - 13 Mar 2025
Cited by 1 | Viewed by 579
Abstract
Diabetic retinopathy (DR) is the main ocular complication of diabetes. Asymptomatic for a long time, it is subject to annual screening using dilated fundus or retinal photography to look for early signs. Fundus photography and optical coherence tomography (OCT) are used by ophthalmologists to assess retinal thickness and structure, as well as detect edema, hemorrhage, and scarring. The effectiveness of ConvNets is well established, and their use in imaging has overcome many barriers that older methods could not. Throughout this study, a robust and optimal deep ConvNet is proposed to analyze fundus images and automatically distinguish between healthy, moderate, and severe DR. The proposed model combines a ConvNet architecture taken from ImageNet with data augmentation, class balancing, and transfer learning in order to establish a benchmarking test. A significant improvement was achieved for the middle class, which corresponds to the early stage of DR and was the major problem in previous studies. By eliminating the need for retina specialists and broadening access to retinal care, the proposed model is substantially more robust in objective early staging and detection of DR.
(This article belongs to the Section Artificial Intelligence)

22 pages, 8375 KiB  
Article
From Pixels to Diagnosis: Early Detection of Diabetic Retinopathy Using Optical Images and Deep Neural Networks
by Amira J. Zaylaa and Sylva Kourtian
Appl. Sci. 2025, 15(5), 2684; https://doi.org/10.3390/app15052684 - 3 Mar 2025
Viewed by 1752
Abstract
The detection of diabetic retinopathy (DR) is challenging, as the current diagnostic methods rely heavily on the expertise of specialists and require the mass screening of diabetic patients. The prevalence of avoidable vision impairment due to DR necessitates the exploration of alternative diagnostic techniques. Specifically, it is necessary to develop reliable automatic methods to enable the early diagnosis and detection of DR from optical images. To address the lack of such methods, this research focused on employing various pre-trained deep neural networks (DNNs) and statistical metrics to provide an automatic framework for detecting DR in optical images. The receiver operating characteristic (ROC) was employed to examine the performance of each network. Ethically obtained real datasets were utilized to validate and enhance the robustness of the proposed detection framework. The experimental results showed that, in terms of the overall performance in DR detection, ResNet-50 was the best, followed by GoogleNet, with 99.44% sensitivity, while they were similar in terms of accuracy (93.56%). ResNet-50 outperformed GoogleNet in terms of the specificity (89.74%) and precision (90.07%) of DR detection. The ROC curves of both ResNet-50 and GoogleNet yielded optimal results, followed by SqueezeNet. MobileNet-v2 showed the weakest performance in terms of the ROC, while all networks showed negligible errors in diagnosis and detection. These results show that the automatic detection and diagnosis framework for DR is a promising tool enabling doctors to diagnose DR early and save time. As future directions, it is necessary to develop a grading algorithm and to explore other strategies to further improve the automatic detection and diagnosis of DR and integrate it into digital slit lamp machines.
(This article belongs to the Special Issue Diagnosis and Therapy for Retinal Diseases)
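The sensitivity, specificity, and precision figures quoted above all follow directly from confusion-matrix counts; a small sketch with illustrative counts (not the study's data):

```python
def detection_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, and precision from confusion-matrix counts."""
    return {
        "sensitivity": tp / (tp + fn),  # recall on DR-positive eyes
        "specificity": tn / (tn + fp),  # true-negative rate on healthy eyes
        "precision":   tp / (tp + fp),  # fraction of positive calls that are right
    }

# Illustrative counts for a hypothetical DR screening run
m = detection_metrics(tp=90, fp=10, tn=85, fn=5)
print({k: round(v, 3) for k, v in m.items()})
```

Reporting all three together matters because, as in the abstract, one model can lead on sensitivity while another leads on specificity and precision.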

15 pages, 8698 KiB  
Article
Geometric Self-Supervised Learning: A Novel AI Approach Towards Quantitative and Explainable Diabetic Retinopathy Detection
by Lucas Pu, Oliver Beale and Xin Meng
Bioengineering 2025, 12(2), 157; https://doi.org/10.3390/bioengineering12020157 - 6 Feb 2025
Viewed by 1230
Abstract
Background: Diabetic retinopathy (DR) is the leading cause of blindness among working-age adults. Early detection is crucial to reducing DR-related vision loss risk but is fraught with challenges. Manual detection is labor-intensive and often misses tiny DR lesions, necessitating automated detection. Objective: We aimed to develop and validate an annotation-free deep learning strategy for the automatic detection of exudates and bleeding spots on color fundus photography (CFP) images and ultrawide field (UWF) retinal images. Materials and Methods: Three cohorts were created: two CFP cohorts (Kaggle-CFP and E-Ophtha) and one UWF cohort. Kaggle-CFP was used for algorithm development, while E-Ophtha, with manually annotated DR-related lesions, served as the independent test set. For additional independent testing, 50 DR-positive cases from both the Kaggle-CFP and UWF cohorts were manually outlined for bleeding and exudate spots. The remaining cases were used for algorithm training. A multiscale contrast-based shape descriptor transformed DR-verified retinal images into contrast fields. High-contrast regions were identified, and local image patches from abnormal and normal areas were extracted to train a U-Net model. Model performance was evaluated using sensitivity and false positive rates based on manual annotations in the independent test sets. Results: Our trained model on the independent CFP cohort achieved high sensitivities for detecting and segmenting DR lesions: microaneurysms (91.5%, 9.04 false positives per image), hemorrhages (92.6%, 2.26 false positives per image), hard exudates (92.3%, 7.72 false positives per image), and soft exudates (90.7%, 0.18 false positives per image). For UWF images, the model’s performance varied by lesion size. Bleeding detection sensitivity increased with lesion size, from 41.9% (6.48 false positives per image) for the smallest spots to 93.4% (5.80 false positives per image) for the largest. Exudate detection showed high sensitivity across all sizes, ranging from 86.9% (24.94 false positives per image) to 96.2% (6.40 false positives per image), though false positive rates were higher for smaller lesions. Conclusions: Our experiments demonstrate the feasibility of training a deep learning neural network for detecting and segmenting DR-related lesions without relying on their manual annotations.
(This article belongs to the Special Issue Artificial Intelligence-Based Medical Imaging Processing)

25 pages, 4789 KiB  
Article
Application of Deep Learning Framework for Early Prediction of Diabetic Retinopathy
by Fahad Mostafa, Hafiz Khan, Fardous Farhana and Md Ariful Haque Miah
AppliedMath 2025, 5(1), 11; https://doi.org/10.3390/appliedmath5010011 - 5 Feb 2025
Viewed by 1410
Abstract
Diabetic retinopathy (DR) is a severe microvascular complication of diabetes that affects the eyes, leading to progressive damage to the retina and potential vision loss. Timely intervention and detection are crucial for preventing irreversible damage. With the advancement of technology, deep learning (DL) has emerged as a powerful tool in medical diagnostics, offering a promising solution for the early prediction of DR. This study compares four convolutional neural network architectures, DenseNet201, ResNet50, VGG19, and MobileNetV2, for predicting DR. The evaluation is based on both accuracy and training time. MobileNetV2 outperforms the other models, with a validation accuracy of 78.22%, and ResNet50 has the shortest training time (15.37 s). These findings emphasize the trade-off between model accuracy and computational efficiency, stressing MobileNetV2’s potential applicability for DR prediction due to its balance of high accuracy and reasonable training time. Performing a 5-fold cross-validation with 100 repetitions, the ensemble of MobileNetV2 and a Graph Convolution Network exhibits a validation accuracy of 82.5%, significantly outperforming MobileNetV2 alone, which shows a 5-fold validation accuracy of 77.4%. This superior performance is further validated by the area under the receiver operating characteristic (ROC) curve, demonstrating the enhanced capability of the ensemble method in accurately detecting diabetic retinopathy. This suggests its competence in effectively classifying data and highlights its robustness across multiple validation scenarios. Moreover, the proposed clustering approach can find damaged locations in the retina using the developed Isolate Regions of Interest method, which achieves almost 90% accuracy. These findings are useful for researchers and healthcare practitioners seeking efficient and effective models for predictive analytics to diagnose diabetic retinopathy.
(This article belongs to the Special Issue Optimization and Machine Learning)
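The 5-fold cross-validation protocol mentioned above partitions the sample indices into disjoint folds, with each fold serving once as the validation set. A minimal index-splitting sketch (not the authors' pipeline; fold assignment here is a simple shuffled round-robin):

```python
import random

def kfold_indices(n_samples, k, seed=0):
    """Return k (train_indices, val_indices) splits for cross-validation."""
    idx = list(range(n_samples))
    random.Random(seed).shuffle(idx)          # deterministic shuffle
    folds = [idx[i::k] for i in range(k)]     # round-robin fold assignment
    splits = []
    for i in range(k):
        val = folds[i]
        train = [j for f in folds[:i] + folds[i + 1:] for j in f]
        splits.append((train, val))
    return splits

splits = kfold_indices(10, 5)
print(len(splits), [len(val) for _, val in splits])  # → 5 [2, 2, 2, 2, 2]
```

Repeating this with 100 different seeds, as the study does, averages out the variance introduced by any single random partition.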

30 pages, 3765 KiB  
Article
Efficient Distributed Denial of Service Attack Detection in Internet of Vehicles Using Gini Index Feature Selection and Federated Learning
by Muhammad Dilshad, Madiha Haider Syed and Semeen Rehman
Future Internet 2025, 17(1), 9; https://doi.org/10.3390/fi17010009 - 1 Jan 2025
Cited by 1 | Viewed by 1530
Abstract
Considering that smart vehicles are becoming interconnected through the Internet of Vehicles (IoV), cybersecurity threats like Distributed Denial of Service (DDoS) attacks pose a great challenge. Detection methods currently face challenges due to the complex and enormous amounts of data inherent in IoV systems. This paper presents a new approach to improving DDoS attack detection by using the Gini index for feature selection and Federated Learning (FL) during model training. The Gini index assists in filtering out important features, hence simplifying the models for higher accuracy. FL enables decentralized training across many devices while preserving privacy and allowing scalability. The results make the case for this approach: it detects DDoS attacks effectively, preserves data confidentiality, and reduces computational load. As noted in this paper, the average accuracy of the models is 91%. Moreover, different types of DDoS attacks were identified by employing our proposed technique. Precisions achieved are as follows: DrDoS_DNS: 28.65%, DrDoS_SNMP: 28.94%, DrDoS_UDP: 9.20%, and NetBIOS: 20.61%. We foresee the potential gains from integrating advanced feature selection with FL so that IoV systems can meet modern cybersecurity requirements, providing a robust and efficient solution for the future automotive industry. By carefully selecting only the most important data features and decentralizing model training to devices, we reduce both time and memory usage. This makes the system much faster and lighter on resources, making it well suited to real-time IoV applications. Our approach is both effective and efficient for detecting DDoS attacks in IoV environments.
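Gini-index feature selection of the kind used above ranks a feature by how much splitting on it reduces label impurity; features whose best split yields little impurity reduction are dropped. A small sketch with invented traffic data (the paper's actual feature set and thresholds are not shown here):

```python
def gini_impurity(labels):
    """Gini impurity of a label list (0 = pure)."""
    if not labels:
        return 0.0
    n = len(labels)
    counts = {}
    for y in labels:
        counts[y] = counts.get(y, 0) + 1
    return 1.0 - sum((c / n) ** 2 for c in counts.values())

def gini_gain(feature_values, labels, threshold):
    """Impurity reduction from splitting a numeric feature at a threshold."""
    left  = [y for x, y in zip(feature_values, labels) if x <= threshold]
    right = [y for x, y in zip(feature_values, labels) if x > threshold]
    n = len(labels)
    child = (len(left) / n) * gini_impurity(left) + \
            (len(right) / n) * gini_impurity(right)
    return gini_impurity(labels) - child

# Toy traffic feature: packet rate vs. attack label (illustrative data)
rate   = [10, 12, 11, 95, 90, 99]
attack = [0, 0, 0, 1, 1, 1]
print(round(gini_gain(rate, attack, threshold=50), 3))  # → 0.5
```

A gain of 0.5 is the maximum possible for a balanced binary problem, so this toy feature would rank at the top and be kept for the federated model.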
30 pages, 6099 KiB  
Article
Partial Attention in Global Context and Local Interaction for Addressing Noisy Labels and Weighted Redundancies on Medical Images
by Minh Tai Pham Nguyen, Minh Khue Phan Tran, Tadashi Nakano, Thi Hong Tran and Quoc Duy Nam Nguyen
Sensors 2025, 25(1), 163; https://doi.org/10.3390/s25010163 - 30 Dec 2024
Viewed by 1410
Abstract
Recently, deep neural networks applied to anomaly detection on medical images have had to contend with noisy labels, including overlapping objects and similar classes. This study addresses that challenge by proposing a unique attention module that helps deep neural networks focus on important object features under noisy medical image conditions. The module integrates global context modeling, which creates long-range dependencies, with local interactions that enable channel attention via 1D convolution; it not only performs well with noisy labels but also consumes significantly fewer resources, without any dimensionality reduction. The module is named Global Context and Local Interaction (GCLI). We further propose a partial attention strategy for the GCLI module that efficiently reduces weighted redundancies: it uses a subset of channels, rather than every single channel, to produce the attention weights, greatly reducing the risk of introducing weighted redundancies when modeling global context. For classification, the proposed method helps ResNet34 achieve up to 82.5% accuracy on the Chaoyang test set, the highest figure among the compared SOTA attention modules, without using any processing filter to mitigate noisy labels. For object detection, GCLI boosts YOLOv8 to 52.1% mAP50 on the GRAZPEDWRI-DX test set, the highest among the compared attention modules, and ranks second in mAP50 on the VinDR-CXR test set. In terms of model complexity, the GCLI module requires up to 225 times fewer extra parameters and runs up to 30% faster at inference than the other attention modules. Full article
(This article belongs to the Section Sensing and Imaging)
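The mechanism described above (global average pooling to capture context, a 1D convolution across channels for local interaction, and attention weights computed only on a subset of channels) can be sketched roughly as follows. This is an ECA-style approximation in NumPy under our own assumptions; the actual GCLI module almost certainly differs in its details, and all names and parameters here are illustrative.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention_1d(feat, kernel, subset=None):
    """Sketch of channel attention via 1D convolution.
    feat:   feature map of shape (C, H, W)
    kernel: 1D conv weights of odd length k (cross-channel interaction)
    subset: optional channel indices used to build the attention weights
            (the 'partial attention' idea); unselected channels keep
            weight 1.0 and pass through unchanged.
    """
    C = feat.shape[0]
    idx = np.arange(C) if subset is None else np.asarray(subset)
    # Global context: average-pool each selected channel to one scalar.
    desc = feat[idx].mean(axis=(1, 2))
    # Local interaction: 1D convolution over the channel descriptor,
    # 'same' length via edge padding.
    pad = len(kernel) // 2
    padded = np.pad(desc, pad, mode="edge")
    conv = np.array([padded[i:i + len(kernel)] @ kernel
                     for i in range(len(desc))])
    # Sigmoid gate -> per-channel weights; rescale the feature map.
    weights = np.ones(C)
    weights[idx] = sigmoid(conv)
    return feat * weights[:, None, None]
```

Because the weights come from a 1D convolution over a pooled descriptor, the extra parameter count is just the kernel length, which is consistent with the very small overhead the abstract reports.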
24 pages, 2956 KiB  
Article
Optimizing Heat Pump Control in an NZEB via Model Predictive Control and Building Simulation
by Christian Baumann, Philipp Wohlgenannt, Wolfgang Streicher and Peter Kepplinger
Energies 2025, 18(1), 100; https://doi.org/10.3390/en18010100 - 30 Dec 2024
Cited by 4 | Viewed by 1076
Abstract
From 2028 onward, EU regulations become stricter by imposing net-zero energy building (NZEB) standards on new residential buildings, including on-site renewable energy integration. Heat pumps (HPs) exploiting the thermal mass of the building, combined with Model Predictive Control (MPC), provide a viable solution to this problem. However, the potential of MPC in NZEBs, including its impact on indoor comfort, has not yet been investigated comprehensively. We therefore present a co-simulation approach combining MPC optimization with an IDA ICE building simulation, and investigate the demand response (DR) potential of a ground-source HP and the long-term indoor comfort of an NZEB located in Vorarlberg, Austria, over a one-year period. The optimization is formulated as a Mixed-Integer Linear Program (MILP) based on a simplified RC model, and the HP in the building simulation is controlled by the power signals obtained from the optimization. The investigation shows reductions in electricity costs of up to 49% for the HP and up to 5% for the building, as well as increases in PV self-consumption and the self-sufficiency ratio of up to 4% pt. each in two distinct optimization scenarios; grid consumption consequently decreased by up to 5%. Moreover, compared to the reference PI controller, the MPC scenarios enhanced indoor comfort by reducing room temperature fluctuations and lowering the average percentage of people dissatisfied by 1% pt., resulting in more stable indoor conditions. Precooling strategies in particular mitigated overheating risks in summer and kept indoor comfort within EN 16798-1 class II standards. Full article
(This article belongs to the Special Issue Energy Efficiency and Energy Performance in Buildings)
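The "simplified RC model" mentioned above can be illustrated with a one-zone 1R1C discrete-time temperature update. The paper solves a MILP over such a model; the sketch below only simulates the thermal dynamics and pairs them with a toy "run in the cheapest hours" demand-response heuristic. All parameter values (R, C, COP, time step) and function names are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def simulate_rc(T0, T_out, P_hp, R=5.0e-3, C=2.0e7, cop=4.0, dt=3600.0):
    """One-zone 1R1C thermal model: C * dT/dt = (T_out - T)/R + COP * P_hp.
    T0:    initial room temperature [degC]
    T_out: outdoor temperature per time step [degC]
    P_hp:  heat-pump electrical power per time step [W]
    R [K/W] and C [J/K] are illustrative lumped parameters; cop is a
    constant coefficient of performance. Returns the temperature trace.
    """
    T = [T0]
    for t_out, p in zip(T_out, P_hp):
        dT = dt / C * ((t_out - T[-1]) / R + cop * p)
        T.append(T[-1] + dT)
    return np.array(T)

def cheapest_hours_schedule(prices, n_on, p_rated):
    """Toy DR heuristic: run the HP at rated power during the n_on
    cheapest hours (a real MPC would instead solve a MILP subject to
    comfort constraints on the simulated temperature)."""
    order = np.argsort(prices)[:n_on]
    P = np.zeros(len(prices))
    P[order] = p_rated
    return P
```

Shifting the same thermal energy into low-price hours is what produces the cost reductions reported above, while the comfort constraints in the real MILP keep the resulting temperature swing within EN 16798-1 limits.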