
Search Results (3,604)

Search Parameters:
Keywords = deep learning (DL)

25 pages, 1078 KB  
Systematic Review
Evaluating Artificial Intelligence Models for ICU Length of Stay Prediction: A Systematic Review and Meta-Analysis
by Carlos Zepeda-Lugo, Andrea Insfran-Rivarola, Marcos Sanchez-Lizarraga, Sharon Macias-Velasquez, Ana-Pamela Arevalos, Yolanda Baez-Lopez and Diego Tlapa
Healthcare 2026, 14(9), 1131; https://doi.org/10.3390/healthcare14091131 - 23 Apr 2026
Abstract
Background/Objectives: Efficient management of intensive care unit (ICU) resources is a critical challenge for modern healthcare systems, which must balance high-quality patient care with operational and financial performance. ICU length of stay (LOS) is a key metric of clinical complexity and hospital efficiency. However, traditional methods for predicting LOS often fail to capture the complex, nonlinear interactions among physiological, demographic, and treatment-related variables. Machine learning (ML) and deep learning (DL) models have emerged as promising tools for enhancing predictive accuracy and supporting data-driven decision-making. Methods: This study presents a systematic review and meta-analysis of ML and DL approaches for predicting ICU LOS in adult patients. Following PRISMA guidelines, eight scientific databases were searched, yielding 33 eligible studies published between 2015 and 2025. Results: Mixed medical–surgical ICUs were the most common setting (51.5%), and 45.5% of datasets were sourced from public repositories. Most studies (19/33) focused on binary classification of prolonged stays, although thresholds ranged from >48 h to ≥14 days. The pooled results from ten studies yielded an AUROC of 0.9005 (95% CI: 0.8890–0.9121), indicating strong predictive capability across diverse clinical contexts. Subgroup analyses showed comparable performance between specialized surgical and general ICUs. Conclusions: These findings suggest that AI-driven LOS prediction models exhibit strong discriminatory power for ICU LOS prediction, supporting hospital capacity planning. However, to translate this into reliable clinical support, the methodological heterogeneity, scarcity of external validation, and near absence of calibration reporting identified in this review need to be addressed. Full article
(This article belongs to the Section Healthcare and Sustainability)
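The pooled AUROC with a 95% CI reported above can be illustrated with a minimal fixed-effect, inverse-variance pooling sketch. The review's actual meta-analytic model (e.g., random-effects) is not stated here, and the per-study AUROCs and standard errors below are invented purely for illustration.

```python
import math

def pool_fixed_effect(estimates, standard_errors):
    """Fixed-effect inverse-variance pooling of per-study estimates.

    Returns the pooled estimate and its 95% confidence interval.
    """
    weights = [1.0 / se**2 for se in standard_errors]
    pooled = sum(w * est for w, est in zip(weights, estimates)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    return pooled, (pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se)

# Hypothetical per-study AUROCs and standard errors (illustrative only).
aurocs = [0.88, 0.91, 0.90, 0.92, 0.89]
ses = [0.015, 0.012, 0.020, 0.010, 0.018]
est, (lo, hi) = pool_fixed_effect(aurocs, ses)
```

More precise studies (smaller standard errors) pull the pooled value toward themselves, which is why the pooled CI is narrower than any single study's.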
22 pages, 5140 KB  
Article
Application of Deep Multi-Scale Representation Learning Based on Eye-Tracking and Facial Expression Data in Cognitive Decline Assessment
by Yanfeng Xue, Xianpeng Luo, Shuai Guo and Tao Song
Sensors 2026, 26(9), 2600; https://doi.org/10.3390/s26092600 - 23 Apr 2026
Abstract
Digital biomarkers derived from eye-tracking and facial expression hold significant potential for the non-invasive screening of cognitive decline (CD). However, existing approaches predominantly rely on single-task or feature engineering-based unimodal methods, which struggle to capture complex temporal behavioral patterns. While deep learning (DL) excels at extracting hierarchical features and intricate temporal dynamics from behavioral sequences, its application in this specific multimodal sensing domain remains exploratory. Addressing this gap, this study designed an assessment system integrating five multi-dimensional cognitive paradigms and collected eye-tracking and facial expression data from 20 healthy controls (HC) and 20 individuals with CD. For these multimodal sequences, we propose a deep neural network capable of multi-scale representation learning. By utilizing subspace exploration and multi-scale convolutions, this architecture extracts deep representations directly from data and incorporates a decision fusion mechanism to enhance diagnostic robustness. Experimental results demonstrate that our method achieves a 90% classification accuracy, outperforming machine learning models. Furthermore, statistical analyses conducted in this study validated several features associated with CD and also explored some novel potential behavioral patterns. This study confirms the feasibility of a DL framework based on eye-tracking and facial expression signals for identifying CD, providing a reference for developing objective and efficient digital screening tools. Full article
(This article belongs to the Section Biomedical Sensors)

31 pages, 7259 KB  
Article
Enhancing IoT Network Security: A BPSO-Optimized Attention-GRU Deep Learning Framework for Intrusion Detection
by Abdallah Elayan and Michel Kadoch
Computers 2026, 15(5), 266; https://doi.org/10.3390/computers15050266 - 23 Apr 2026
Abstract
The exponential expansion of computer networks, alongside the rapid development of the Internet of Things (IoT), has significantly increased the volume and complexity of transmitted data, emphasizing the need for robust network security measures to secure sensitive data and prevent unauthorized access or breaches. Intrusion Detection Systems (IDSs) have emerged as a vital tool for protecting networks and IoT environments from threats. Various IDSs have been proposed in the literature; however, the lack of optimal feature learning, computational efficiency, and reliance on obsolete datasets poses significant challenges, limiting their effectiveness against evolving cyber threats. Moreover, traditional IDSs struggle to efficiently manage the high-dimensional and imbalanced nature of IoT network traffic data. To address these challenges, this research proposes a hybrid deep learning (DL)-based IDS integrating Binary Particle Swarm Optimization (BPSO), MultiHead Attention mechanisms (MHA), and a deep Gated Recurrent Unit (GRU) architecture, improving detection effectiveness while reducing computational overhead. Our proposed approach also utilizes a Target Sampling strategy to balance class distributions, enhancing the model’s ability to accurately identify minority attacks. The BPSO algorithm is employed to identify the most influential features from the high-dimensional network traffic datasets, enhancing model interpretability and supporting more efficient learning. This optimized feature subset is then fed into a GRU-based DL architecture augmented with MHA, which performs sequence processing and attention-based learning for intrusion detection. The performance of the proposed model is evaluated utilizing the BoT-IoT and the CIC-IDS2017 benchmark datasets, ensuring a comprehensive assessment of anomaly detection capabilities. 
Extensive experimental results demonstrate the superior performance of the proposed model, achieving a recall of 98.42% and 99.76%, with F1-score of 98.94% and 99.76% for binary classification and a recall of 99.79% and 98.69%, with F1-score of 99.89% and 98.04% for multiclass classification on the BoT-IoT and CIC-IDS2017 datasets, respectively, highlighting the effectiveness of our model in enhancing threat detection for computer networks and IoT environments in comparison to recent state-of-the-art IDSs. Full article
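The BPSO feature-selection step named in the abstract can be sketched with the canonical binary PSO update, in which velocities are mapped through a sigmoid transfer function to bit-flip probabilities. The particle count, coefficients, and toy fitness below are illustrative assumptions, not the paper's settings.

```python
import math
import random

def bpso_select(n_features, fitness, n_particles=8, n_iters=20,
                w=0.7, c1=1.5, c2=1.5, seed=0):
    """Canonical binary PSO: each particle is a 0/1 mask over features.

    `fitness(mask)` scores a candidate feature subset (higher is better).
    """
    rng = random.Random(seed)
    pos = [[rng.randint(0, 1) for _ in range(n_features)] for _ in range(n_particles)]
    vel = [[0.0] * n_features for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_fit = [fitness(p) for p in pos]
    gbest = pbest[max(range(n_particles), key=lambda i: pbest_fit[i])][:]

    for _ in range(n_iters):
        for i in range(n_particles):
            for d in range(n_features):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                # Sigmoid transfer: a large positive velocity makes bit d
                # likely to be set to 1 in the new position.
                pos[i][d] = 1 if rng.random() < 1.0 / (1.0 + math.exp(-vel[i][d])) else 0
            f = fitness(pos[i])
            if f > pbest_fit[i]:
                pbest[i], pbest_fit[i] = pos[i][:], f
                if f > fitness(gbest):
                    gbest = pos[i][:]
    return gbest

# Toy fitness: reward masks matching a known "informative" subset.
target = [1, 0, 1, 1, 0, 0, 1, 0]
best = bpso_select(8, lambda m: sum(a == b for a, b in zip(m, target)))
```

In the paper's pipeline, the fitness function would instead score a feature subset by validation performance of the downstream GRU classifier, which is far more expensive than this toy objective.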
17 pages, 359 KB  
Systematic Review
The Use of Artificial Intelligence in Planning Dental Implant Procedures: A Systematic Review
by Gulvash Zaman, Rabia S. Khan, Adam Spacey, Cemal Ucer and Simon Wright
Dent. J. 2026, 14(5), 248; https://doi.org/10.3390/dj14050248 - 23 Apr 2026
Abstract
Background: Artificial intelligence (AI) is increasingly being integrated into dental implantology, particularly in treatment planning, a critical phase for implant success. Traditionally dependent on clinician expertise, planning can now be supported by AI-assisted systems that aim to improve diagnostic accuracy, precision, and efficiency. Objective: To synthesise recent evidence on the use of AI in dental implant planning, particularly its ability to analyse cone beam computed tomography (CBCT) imaging to identify edentulous regions and assess bone dimensions compared with conventional planning methods. Methods: A systematic search was conducted across PubMed, Scopus, Google Scholar, and the Cochrane Library, with additional manual searches from October 2024 to July 2025. Eligibility was defined using the Population, Intervention, Comparison, Outcome (PICO) framework, focusing on adults undergoing implant procedures planned using AI-assisted CBCT imaging and deep learning (DL) models, particularly U-Net architectures, for CBCT segmentation. Results: Ten studies were included; AI systems demonstrated high accuracy (92–99.7%) in detecting teeth and edentulous regions, with precision and recall frequently exceeding 90%. AI-assisted planning also showed improved efficiency, and, in one study, higher implant success rates compared with traditional planning (92% vs. 78%). However, variability in study design, inconsistent reporting, and limited ethical oversight were noted. Conclusions: AI, particularly DL models applied to CBCT imaging, shows strong potential to enhance diagnostic precision and efficiency in dental implant planning. Nevertheless, the field requires standardised evaluation metrics, larger datasets, and well-designed clinical trials before widespread clinical implementation. Full article
(This article belongs to the Special Issue Artificial Intelligence in Oral Rehabilitation)

30 pages, 1435 KB  
Review
A Review of Machine Learning Modeling Approaches of Spatiotemporal Urbanization and Land Use Land Cover
by Farasath Hasan, Jian Liu and Xintao Liu
Smart Cities 2026, 9(5), 74; https://doi.org/10.3390/smartcities9050074 - 22 Apr 2026
Abstract
Artificial Intelligence (AI), particularly Machine Learning (ML) and Deep Learning (DL), is transforming the modeling of complex spatiotemporal urban processes such as urban growth, sprawl, shrinkage, redevelopment, and Land Use/Land Cover Change (LULCC). However, despite rapid methodological innovation, applications remain fragmented, and there is limited synthesis of how AI-based models complement, extend, or supersede conventional approaches. This study addresses this gap through a systematic review of 6356 records, from which 120 articles were selected for detailed analysis. It investigates: (i) how ML/DL techniques are embedded within spatiotemporal modeling frameworks; (ii) their use in simulating urbanization dynamics and land-use (LU) transitions; (iii) methodological and performance gains relative to traditional statistical and rule-based models; and (iv) emerging research frontiers and limitations. The review shows that LULCC dominates current applications, with Artificial Neural Networks (ANNs) as the most prevalent ML method, increasingly complemented by DL architectures. Across cases, AI is primarily used to learn non-linear transition dynamics, represent spatial and temporal dependencies, identify influential drivers, and improve classification performance and computational efficiency. Building on these insights, the paper synthesizes the roles of AI in spatiotemporal urban modeling and outlines forward-looking research directions to support more robust, transparent, and policy-relevant applications for urban sustainability. Full article
28 pages, 994 KB  
Review
Deep Learning for Credit Risk Prediction: A Survey of Methods, Applications, and Challenges
by Ibomoiye Domor Mienye, Ebenezer Esenogho and Cameron Modisane
Information 2026, 17(4), 395; https://doi.org/10.3390/info17040395 - 21 Apr 2026
Abstract
Credit risk prediction is central to financial stability and regulatory compliance, guiding lending decisions and portfolio risk management. While traditional approaches such as logistic regression and tree-based models have long been the industry standard, recent advances in deep learning (DL) have introduced architectures capable of capturing complex nonlinearities, temporal dynamics, and relational dependencies in borrower data. This study provides a comprehensive review of DL methods applied to credit risk prediction, covering multi-layer perceptron, recurrent and convolutional neural networks, transformer, and graph neural networks. We examine benchmark and large-scale datasets, highlight peer-reviewed applications across corporate, consumer, and peer-to-peer lending, and evaluate the benefits of DL relative to classical machine learning. In addition, we critically assess key challenges and identify emerging opportunities. By synthesising methods, applications, and open challenges, this paper offers a roadmap for advancing trustworthy deep learning in credit risk modelling and bridging the gap between academic research and industry deployment. Full article
(This article belongs to the Special Issue Predictive Analytics and Data Science, 3rd Edition)
20 pages, 847 KB  
Review
Closing the Loop in Neuromodulation: A Review of Machine Learning Approaches for EEG-Guided Transcranial Magnetic Stimulation
by Elena Mongiardini and Paolo Belardinelli
Algorithms 2026, 19(4), 323; https://doi.org/10.3390/a19040323 - 21 Apr 2026
Abstract
Transcranial magnetic stimulation (TMS) combined with electroencephalography (EEG) provides a powerful framework to probe and modulate human cortical and corticospinal excitability. In recent years, brain state-dependent EEG–TMS paradigms have gained increasing interest by synchronizing stimulation to ongoing neural activity. However, traditional approaches relying on single oscillatory features or fixed thresholds have yielded heterogeneous and often inconsistent results, motivating the adoption of machine learning (ML) and artificial intelligence (AI) methods to model brain state in a multivariate, data-driven manner. This review synthesizes current ML and deep learning (DL) approaches aimed at predicting cortical and corticospinal excitability from pre-stimulus EEG. We contextualize these methods within brain state-dependent EEG–TMS frameworks based on oscillatory phase, power, and network-level features, and within evolving definitions of brain state that move beyond local biomarkers toward distributed, large-scale, and dynamically evolving neural representations. The reviewed studies span feature-engineered models, data-driven decoding approaches, and emerging adaptive closed-loop frameworks. Finally, we discuss key methodological challenges, translational barriers, and future directions toward personalized, interpretable, and fully closed-loop neuromodulation systems. Full article

35 pages, 2823 KB  
Article
FedCycle: An Improved Federated Learning Framework for Assessment Across Modalities and Domains
by Betul Dundar, Ebru Akcapinar Sezer, Feyza Yildirim Okay and Suat Ozdemir
Electronics 2026, 15(8), 1752; https://doi.org/10.3390/electronics15081752 - 21 Apr 2026
Abstract
Artificial Intelligence (AI) systems based on traditional Deep Learning (DL) are expected to play a leading role in the early detection of various diseases in healthcare applications. However, there are two major drawbacks of these systems: protecting patient privacy and obtaining sufficiently large, high-quality datasets to train reliable models. In traditional DL, collecting data from different sources on a single central server increases system complexity and raises serious privacy and security concerns. Federated Learning (FL) makes it possible to train models locally at multiple data locations while collaboratively improving a global model without exposing raw data, making it a promising architectural solution for privacy preservation. Although previous studies have reported that FL can achieve performance comparable to centralized DL approaches, traditional FL approaches often struggle to maintain consistent performance across different settings. This limitation becomes more noticeable when heterogeneous data distributions, modalities, and domains are involved. In these situations, client drift, overfitting, and limited generalization capability of the global model arise as major challenges. Thus, this study presents FedCycle, an incremental improvement of the FedAvg algorithm that modifies the aggregation frequency, aiming to overcome these drawbacks and make the global model more stable and efficient. FedCycle eliminates centralized data collection, enhances data security, and effectively reduces client drift and overfitting by supporting model training across heterogeneous data distributions, modalities, and domains. The performance evaluation involves extensive experiments using various real-world breast cancer image datasets, namely BREAKHIS, ROBOFLOW, RSNA, BUSI, and BCFPP. The presented method is evaluated against both traditional DL and FL approaches using accuracy, precision, recall, F1-score, and AUC.
The findings confirm that applying fine-tuning within FedCycle reduces overfitting during training. As a result, FedCycle achieves performance improvements of 7.75% and 4.65% in accuracy and F1-score on the RSNA and BCFPP datasets compared to traditional DL approaches, while also providing an average improvement of approximately 1.5% in accuracy and F1-score across BREAKHIS, ROBOFLOW, and BUSI datasets compared to FedAvg. Full article
(This article belongs to the Special Issue Federated Learning and Its Application)
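FedCycle is described above as an incremental change to FedAvg's aggregation frequency; the baseline FedAvg aggregation step it builds on, a sample-size-weighted average of client parameters, can be sketched as follows. FedCycle's own cycle logic is not specified in this abstract, so only the standard FedAvg step is shown, with toy clients invented for illustration.

```python
import numpy as np

def fedavg_aggregate(client_weights, client_sizes):
    """FedAvg aggregation: average each parameter tensor across clients,
    weighted by the number of local training samples per client."""
    total = sum(client_sizes)
    n_layers = len(client_weights[0])
    return [
        sum((n / total) * w[layer] for w, n in zip(client_weights, client_sizes))
        for layer in range(n_layers)
    ]

# Two toy clients with one "layer" each; client B holds 3x the data of A,
# so the global model is pulled toward B's parameters.
a = [np.array([0.0, 0.0])]
b = [np.array([4.0, 8.0])]
global_model = fedavg_aggregate([a, b], client_sizes=[1, 3])  # -> [3., 6.]
```

Changing how often this step runs relative to local epochs is the knob the abstract says FedCycle turns: aggregating less often lowers communication but lets clients drift; aggregating more often does the reverse.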

45 pages, 2083 KB  
Systematic Review
AI-Driven Breast Cancer Diagnosis: A Systematic Review of Imaging Modalities, Deep Learning, and Explainability
by Margo Sabry, Hossam Magdy Balaha, Khadiga M. Ali, Ali Mahmoud, Dibson Gondim, Mohammed Ghazal, Tayseer Hassan A. Soliman and Ayman El-Baz
Cancers 2026, 18(8), 1305; https://doi.org/10.3390/cancers18081305 - 20 Apr 2026
Abstract
Background: This article provides a comprehensive overview of recent advancements in artificial intelligence (AI) and deep-learning technologies for breast cancer (BC) diagnosis across various imaging modalities. Methods: A systematic review was conducted in strict adherence to the PRISMA guidelines, incorporating a comparative analysis of 65 peer-reviewed studies published between 2018 and 2024. The evaluation focused on diagnostic performance, architectural developments, and clinical integration strategies. Results: The review synthesizes primary findings on convolutional neural networks (CNNs), emerging architectures including graph neural networks, and hybrid models, with diagnostic accuracy, risk prediction, and personalized screening strategies identified as the leading research domains. Notable achievements include CNNs attaining up to 98.5% accuracy in mammography and Vision Transformers reaching 96% in histopathological analysis. Furthermore, the implementation of explainable AI methodologies, such as SHAP, LIME, and Grad-CAM, is emphasized for maintaining transparency, trust, and accountability in clinical decision-making. Conclusions: AI constitutes a pivotal factor in facilitating early BC diagnosis and optimizing treatment outcomes. Nevertheless, significant challenges persist, including dataset heterogeneity, model generalizability, standardization of imaging protocols, computational resource limitations, and the seamless integration of these technologies into established clinical workflows. Future research must prioritize robust multi-dataset validation and standardized implementation frameworks to overcome existing limitations and advance successful BC diagnostic practices. Full article
(This article belongs to the Section Methods and Technologies Development)

31 pages, 4910 KB  
Article
Comparative Evaluation of Machine Learning and Deep Learning Models for Tropical Cyclone Track and Intensity Forecasting in the North Atlantic Basin
by Henry A. Ogu, Liping Liu and Yuh-Lang Lin
Atmosphere 2026, 17(4), 418; https://doi.org/10.3390/atmos17040418 - 20 Apr 2026
Abstract
Accurate forecasts of tropical cyclone (TC) track and intensity with a sufficient lead time are critical for disaster preparedness and risk mitigation. Traditional numerical weather prediction models, while fundamental to operational forecasting, often exhibit systematic errors due to limitations in observations, physical parameterizations, and model resolution. In recent years, machine learning (ML) and deep learning (DL) approaches have emerged as promising data-driven alternatives for improving TC forecasts. This study presents a comparative evaluation of six ML and DL models—Random Forest (RF), Extreme Gradient Boosting (XGBoost), Light Gradient Boosting Machine (LightGBM), Categorical Boosting (CatBoost), Artificial Neural Network (ANN), and Convolutional Neural Network (CNN)—for forecasting TC track and intensity in the North Atlantic basin. The models are trained using the National Hurricane Center’s (NHC) HURDAT2 best-track dataset for storms from 1990 to 2019 and evaluated on an independent test set from the 2020 season. Model performance is compared across all models and benchmarked against the 2020 mean Decay-SHIFOR5 intensity error, CLIPER5 track errors, and the NHC official forecast (OFCL) errors. Forecast skill is assessed using mean absolute error (MAE) with 95% bootstrap confidence intervals and the coefficient of determination (R2) across lead times of 6, 12, 18, 24, 48, and 72 h. The results show that: (1) several ML and DL models achieve intensity forecast performance that is broadly comparable in magnitude to the 2020 mean OFCL benchmarks, with an average error reduction of 5–11% at the 24 h lead time; (2) among the ML models, XGBoost and CatBoost slightly outperform LightGBM and RF in accuracy, while LightGBM demonstrates the highest computational efficiency; and (3) among the DL models, CNNs outperform ANNs in predictive accuracy and intensity forecasting efficiency, while ANNs exhibit lower computational cost for track forecast. 
Bootstrap confidence intervals indicate relatively low variability in model errors, supporting the statistical stability of the results within the 2020 season. However, these results reflect within-season variability and do not necessarily generalize across different years or climatological conditions. Overall, the findings demonstrate the potential of ML/DL-based approaches to complement existing operational forecast systems and enhance TC track and intensity forecasting in the North Atlantic basin. Full article
(This article belongs to the Special Issue Machine Learning for Atmospheric and Remote Sensing Research)
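The study's headline skill metric, MAE with 95% bootstrap confidence intervals, can be sketched with a percentile bootstrap over forecast cases. The resample count, seed, and the toy 24 h intensity values below are illustrative assumptions, not the study's data.

```python
import numpy as np

def mae_bootstrap_ci(y_true, y_pred, n_boot=2000, alpha=0.05, seed=0):
    """Mean absolute error with a percentile bootstrap confidence interval,
    obtained by resampling the per-case absolute errors with replacement."""
    rng = np.random.default_rng(seed)
    errors = np.abs(np.asarray(y_true, float) - np.asarray(y_pred, float))
    mae = errors.mean()
    boots = np.array([rng.choice(errors, size=errors.size, replace=True).mean()
                      for _ in range(n_boot)])
    lo, hi = np.quantile(boots, [alpha / 2, 1 - alpha / 2])
    return mae, (lo, hi)

# Hypothetical observed vs. forecast intensities (kt) at one lead time.
obs = np.array([65, 80, 95, 50, 70, 110, 85, 60])
fcst = np.array([60, 85, 90, 55, 75, 100, 80, 66])
mae, (lo, hi) = mae_bootstrap_ci(obs, fcst)
```

A narrow bootstrap interval, as the study reports, indicates that the MAE is stable under case resampling within the season; it says nothing about year-to-year generalization, which is the caveat the authors raise.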
17 pages, 1247 KB  
Article
Report-Level Impact of DL Assistance on Teleradiology Quality Support for Brain Metastases: Real-World Clinical Practice at a Single Tertiary Center
by Jieun Roh, Hye Jin Baek, Seung Kug Baik, Bora Chung, Kwang Ho Choi, Hwaseong Ryu and Bong Kyeong Son
Diagnostics 2026, 16(8), 1211; https://doi.org/10.3390/diagnostics16081211 - 17 Apr 2026
Abstract
Objective: Existing deep learning (DL) studies on brain metastasis have largely focused on algorithm or reader performance in controlled settings, whereas its role in routine teleradiology quality support remains unestablished. We evaluated the report-level impact of DL assistance on brain metastasis interpretation in a real-world teleradiology workflow using dual-sequence MRI. Materials and Methods: In this retrospective study, 600 patients who underwent contrast-enhanced dual-sequence brain MRI during two consecutive 3-month periods before (pre-DL, n = 286) and after (post-DL, n = 314) DL integration into teleradiology workflow were analyzed. Ten board-certified teleradiologists interpreted all the cases with or without DL-generated overlays. Report-level diagnostic metrics were assessed against a consensus reference standard established by faculty neuroradiologists. Subsequently, exploratory case-level stratified sensitivity analyses were performed for metastasis-positive examinations based on lesion multiplicity and the largest lesion size. Teleradiologists’ perceptions were assessed using a post-interpretation survey. Results: Compared with the pre-DL group, the post-DL group showed higher sensitivity (77.7% vs. 90.8%, p < 0.001), specificity (82.3% vs. 90.8%, p = 0.002), accuracy (80.8% vs. 90.8%, p < 0.001), positive predictive value (68.2% vs. 85.7%, p < 0.001), and negative predictive value (88.3% vs. 94.2%, p = 0.011). False-positive and false-negative rates were lower after DL implementation (11.9% vs. 5.7%, p = 0.009; 7.3% vs. 3.5%, p = 0.045). Sensitivity gains were most pronounced for cases with single metastasis (74.6% vs. 91.2%, p = 0.007) and with the largest lesion ≤ 5 mm (74.3% vs. 92.0%, p = 0.004), whereas sensitivity was similar for multiple metastases and for cases with a largest lesion > 5 mm. Survey responses suggested favorable usability and diagnostic support. 
Conclusions: In this real-world teleradiology workflow, DL implementation was associated with higher report-level diagnostic metrics and fewer false interpretations. DL assistance may help support quality control for brain metastasis interpretation, particularly in more subtle and diagnostically challenging cases, although radiologist judgment remains essential for subtle or borderline lesions. Full article
(This article belongs to the Special Issue AI-Assisted Diagnostics in Telemedicine and Digital Health)
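The report-level metrics quoted above (sensitivity, specificity, accuracy, PPV, NPV) follow the standard 2x2 confusion-matrix definitions against a reference standard. A minimal sketch with invented counts, not the study's data:

```python
def report_metrics(tp, fp, fn, tn):
    """Report-level diagnostic metrics from a 2x2 confusion matrix
    (report calls vs. the consensus reference standard)."""
    return {
        "sensitivity": tp / (tp + fn),   # true-positive rate
        "specificity": tn / (tn + fp),   # true-negative rate
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
    }

# Illustrative counts only.
m = report_metrics(tp=90, fp=10, fn=10, tn=90)
```

Note that PPV and NPV, unlike sensitivity and specificity, depend on the positivity rate of the case mix, which is why pre-DL vs. post-DL comparisons in the study rely on the same two consecutive screening periods.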
34 pages, 1891 KB  
Review
Deep Learning and Cardiovascular Diseases: An Updated Narrative Review
by Angelika Myśliwiec, Dorota Bartusik-Aebisher, Marvin Xavierselvan, Avijit Paul and David Aebisher
J. Clin. Med. 2026, 15(8), 3053; https://doi.org/10.3390/jcm15083053 - 16 Apr 2026
Abstract
Background: Artificial intelligence (AI) and deep learning (DL) are rapidly changing the field of diagnostics and imaging in cardiology, offering tools for automatic segmentation, quantification of changes, and risk stratification. These technologies have the potential to increase diagnostic accuracy, work efficiency, and individualization of patient care. Methods: This structured narrative review critically evaluates clinically validated applications of artificial intelligence (AI) and deep learning (DL) in cardiovascular medicine, focusing on imaging (echocardiography, coronary CT angiography, cardiac MRI, and ECG), risk stratification, and biomarker integration. A systematic literature search was conducted in PubMed for studies published between January 2015 and December 2026, supplemented by references from key articles. Original English-language studies reporting quantitative clinical outcomes were included, with 78 studies ultimately analyzed. Results: AI and DL models, including convolutional neural networks and transformers, achieved performance comparable to experts in cardiac imaging, myocardial perfusion assessment, valve defect detection, and coronary event prediction. Multimodal approaches improved diagnostic accuracy and reproducibility, while explainable AI enhanced transparency and clinical confidence. Deep learning also enabled faster image acquisition and processing without compromising precision. Conclusions: AI and DL have transformative potential in cardiology, offering fast, accurate, and scalable diagnostic tools. The integration of multimodal data, the validation of algorithms in prospective studies, and ensuring the transparency of models are key. Future research should focus on prospective, multicenter validations and the ethical and safe implementation of AI in everyday clinical practice. Full article

17 pages, 1597 KB  
Article
Strategic Approach for Enhancing Deep Learning Models
by Oded Koren, Yoav Gvili and Liron J. Friedman
Algorithms 2026, 19(4), 311; https://doi.org/10.3390/a19040311 - 16 Apr 2026
Abstract
Modern large language models have achieved remarkable growth and performance across domains, yet their intensive resource use and high computational costs present challenges to scalability and sustainability. Current attempts to surpass baseline (naïve) AutoDL (Automated Deep Learning) models often rely on complex manipulations that yield marginal accuracy gains while demanding deep domain knowledge and heavy computation. To address these known inefficiencies in computation and implementation, this study proposes a strategic approach that enhances processing, without compromising model accuracy or performance, through a simplified, scalable methodology. We present a novel AutoDL weight-optimization method that identifies the most accurate deep learning starting point and achieves the best outcomes while accounting for the additional "presetting" analysis overhead. Using 20 real-world datasets, we conducted experiments across three models, six weight configurations, and ten seeds, totaling 62,400 epochs. In all experiments, the optimized model outperformed the baselines, achieving higher accuracy on every dataset while limiting preprocessing to only two epochs per seed. These results demonstrate that such minimal preprocessing can substantially lower computational demand while maintaining precision. As global demand for AI deployment accelerates, this conservation-oriented approach will be critical to sustaining innovation within resource and infrastructural constraints, enabling advances in computational sustainability and responsible AI development, tangible savings across multiple dimensions of resource consumption, and broader access to deep learning technologies. Full article
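The "presetting" idea sketched in this abstract — spend a small, fixed warmup budget per seed, keep only the most promising starting point, then devote the full training budget to that winner — can be illustrated roughly as follows. This is a minimal sketch under stated assumptions, not the authors' implementation: `train_fn`, `eval_fn`, and the epoch counts are hypothetical stand-ins.

```python
def preset_then_train(train_fn, eval_fn, seeds, warmup_epochs=2, full_epochs=50):
    """Cheap presetting phase followed by one full training run.

    train_fn(seed, epochs) -> model   (hypothetical trainer)
    eval_fn(model) -> score           (validation metric, higher is better)
    """
    # Presetting phase: only `warmup_epochs` per seed, so the overhead
    # stays small relative to a full training run.
    warm = [(eval_fn(train_fn(s, warmup_epochs)), s) for s in seeds]
    best_score, best_seed = max(warm)
    # Full training budget is spent only on the best starting point.
    return train_fn(best_seed, full_epochs)
```

The saving comes from replacing many full training runs (one per seed) with many two-epoch probes plus a single full run.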
(This article belongs to the Special Issue AI Applications and Modern Industry)

37 pages, 1793 KB  
Systematic Review
The Role of Artificial Intelligence in Prognosis, Recurrence Prediction, and Treatment Outcomes in Laryngeal Cancer: A Systematic Review
by Hadi Afandi Al-Hakami, Ismail A. Abdullah, Nora S. Almutairi, Rimaz R. Aldawsari, Ghadah Ali Alluqmani, Halah Ahmed Fallatah, Yara Saud Alsulami, Elyas Mohammed Alasiri, Rahaf D. Alsufyani, Raghad Ayman Alorabi and Reffal Mohammad Aldainiy
Cancers 2026, 18(8), 1257; https://doi.org/10.3390/cancers18081257 - 16 Apr 2026
Abstract
Background: Laryngeal cancer (LC), a common subtype of head and neck cancers (HNC), is most frequently represented by laryngeal squamous cell carcinoma (LSCC). Prognosis largely depends on early detection; however, traditional prognostic tools, including tumor-node-metastasis (TNM) staging, often show limited predictive accuracy. Artificial intelligence (AI), including machine learning (ML), natural language processing, and deep learning (DL), has emerged as a promising approach to improving cancer diagnosis, prognosis, and treatment planning by analyzing clinical data and medical imaging. Objective: This systematic review assesses the role of AI in prognosis, recurrence prediction, and treatment outcomes in LC. Methods: PubMed, MEDLINE, Scopus, Web of Science, IEEE Xplore, and ScienceDirect were searched up to January 2025. A total of 1062 records were identified; after title/abstract screening and full-text assessment, 29 studies were included. Eligible studies involved adult patients with LC and applied AI to diagnose, prognose, predict recurrence, or assess treatment outcomes using human datasets. Study quality and risk of bias were evaluated using the QUADAS-2 and QUIPS tools. Results: The 29 included studies were mostly retrospective, with sample sizes ranging from 10 to 63,000 patients. Most focused on LSCC, with a higher prevalence in males. The studies utilized various AI techniques, including deep learning models such as convolutional neural networks (CNNs) and DeepSurv, as well as ML algorithms such as random survival forest, gradient boosting machines, random forest, k-nearest neighbors, naïve Bayes, and decision trees. AI models demonstrated strong prognostic performance, surpassing Cox regression and TNM staging in predicting survival and recurrence. Several studies reported treatment-related outcomes, such as chemotherapy response, occult lymph node metastasis, and the need for salvage surgery. Methodological quality varied, with biases related to patient selection and confounding factors. Conclusions: AI has the potential to improve prognosis estimation, recurrence prediction, and treatment outcome assessment in LC. However, although AI can be a helpful addition to clinical decision-making, more prospective studies, external validation, and standardized evaluation are necessary before these technologies can be confidently adopted in everyday clinical practice. Full article
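The claim that AI models "surpass Cox regression and TNM staging in predicting survival and recurrence" is typically backed by a discrimination metric such as Harrell's concordance index (C-index). For illustration only — this is a plain-Python sketch of the standard metric, not any included study's evaluation code:

```python
def concordance_index(times, events, risks):
    """Harrell's C-index: the fraction of comparable patient pairs in
    which the model assigns the higher risk score to the patient who
    experiences the event earlier.

    times  -- observed follow-up times
    events -- 1 if the event (e.g. death, recurrence) was observed,
              0 if the patient was censored
    risks  -- model risk scores (higher = predicted earlier failure)
    """
    concordant, ties, comparable = 0, 0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # A pair is comparable only when patient i is known to
            # fail before patient j's observed time.
            if events[i] == 1 and times[i] < times[j]:
                comparable += 1
                if risks[i] > risks[j]:
                    concordant += 1
                elif risks[i] == risks[j]:
                    ties += 1
    return (concordant + 0.5 * ties) / comparable
```

A value of 0.5 corresponds to random ranking and 1.0 to perfect discrimination, which is how a DeepSurv or random survival forest model can be compared head-to-head with Cox regression.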
(This article belongs to the Topic Machine Learning and Deep Learning in Medical Imaging)

28 pages, 1775 KB  
Article
A Deep Learning-Assisted Multi-Relay DCSK Communication System
by Tingting Huang, Shengmin Hong, Jundong Chen and Liangyi Kang
Sensors 2026, 26(8), 2420; https://doi.org/10.3390/s26082420 - 15 Apr 2026
Abstract
This paper proposes a novel multi-relay deep learning-assisted differential chaos shift keying (MR-DL-DCSK) communication system to enhance the capabilities of existing chaos-based cooperative communication systems. Channel quality significantly affects transmission reliability, yet existing channel quality evaluation methods require channel state information (CSI). To address this limitation, this paper employs a deep neural network (DNN) classifier at the receiver to perform joint channel quality assessment and symbol demodulation. We propose a channel quality-aware relay coordination strategy: at the relay stage, all relays assess their channel qualities using the DNN-output probability distribution, and relays with lower channel quality align their decoded bits with the bits from the relay with the highest channel quality before forwarding; at the destination stage, the destination selects the signal with the highest channel quality probability for final demodulation. This joint detection approach enables reliable demodulation without requiring explicit CSI, while the channel quality-aware relay coordination mechanism ensures that signals from the most reliable links are prioritized in the final decision. Comprehensive simulation results demonstrate that the proposed multi-relay DL-DCSK system achieves superior bit error rate performance. Furthermore, the system exhibits excellent generalization capability when tested on vehicle-to-vehicle (V2V) communication channels modeled by the double-generalized gamma distribution, validating its practical applicability in diverse wireless environments. Full article
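The two-stage coordination rule described in this abstract — lower-quality relays align their decoded bits with the highest-quality relay before forwarding, and the destination then decodes from the link with the highest quality probability — can be sketched as follows. The data layout and the scalar quality score standing in for the DNN-output probability are illustrative assumptions, not the paper's implementation:

```python
def coordinate_relays(relays):
    """relays: list of (quality, bits) tuples, where `quality` stands in
    for the DNN-estimated probability that the relay's channel is good,
    and `bits` is that relay's decoded bit sequence.
    Returns the bit sequence the destination finally demodulates.
    """
    # Relay stage: every relay compares itself against the best relay;
    # lower-quality relays align their bits with the winner's decision.
    best_quality, best_bits = max(relays, key=lambda r: r[0])
    forwarded = [(quality, best_bits) for quality, _ in relays]
    # Destination stage: select the forwarded signal whose channel
    # quality probability is highest for the final decision.
    return max(forwarded, key=lambda r: r[0])[1]
```

In the real system the DNN performs this assessment jointly with demodulation per symbol; the sketch only captures the selection logic that makes explicit CSI unnecessary.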
(This article belongs to the Section Communications)
