Search Results (170)

Search Parameters:
Keywords = deep neural network recommender system

31 pages, 2154 KB  
Review
Application of Machine Learning in Food Safety Risk Assessment
by Qingchuan Zhang, Zhe Lu, Zhenqiao Liu, Jialu Li, Mingchao Chang and Min Zuo
Foods 2025, 14(23), 4005; https://doi.org/10.3390/foods14234005 - 22 Nov 2025
Viewed by 604
Abstract
With the increasing globalization of supply chains, ensuring food safety has become more complex, necessitating advanced approaches for risk assessment. This study aims to review the transformative role of machine learning (ML) and deep learning (DL) in enabling intelligent food safety management by efficiently analyzing high-quality and nonlinear data. We systematically summarize recent advances in the application of ML and DL, focusing on key areas such as biotoxin detection, heavy metal contamination, analysis of pesticide and veterinary drug residues, and microbial risk prediction. While traditional algorithms including support vector machines and random forests demonstrate strong performance in classification and risk evaluation, unsupervised methods such as K-means and hierarchical cluster analysis facilitate pattern recognition in unlabeled datasets. Furthermore, novel DL architectures, such as convolutional neural networks, recurrent neural networks, and transformers, enable automated feature extraction and multimodal data integration, substantially improving detection accuracy and efficiency. In conclusion, we recommend future work to emphasize model interpretability, multi-modal data fusion, and integration into HACCP systems, thereby supporting intelligent, interpretable, and real-time food safety management. Full article
(This article belongs to the Section Food Analytical Methods)
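As a rough illustration of the supervised models this review covers, the sketch below trains a random forest risk classifier on toy tabular contaminant measurements; the feature names, labeling rule, and data are invented for the example and do not come from the paper.

```python
# Hedged sketch of a tabular food-safety risk classifier (random forest).
# All features, thresholds, and labels below are hypothetical placeholders.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(42)
df = pd.DataFrame({
    "heavy_metal_ppm": rng.gamma(2.0, 0.5, 500),
    "pesticide_ppm": rng.gamma(1.5, 0.3, 500),
    "aflatoxin_ppb": rng.gamma(1.2, 2.0, 500),
})
# toy rule just to produce labels for the demo
df["risk"] = np.where(df.sum(axis=1) > df.sum(axis=1).median(), "high", "low")

X_train, X_test, y_train, y_test = train_test_split(
    df.drop(columns="risk"), df["risk"], test_size=0.2, random_state=42)
clf = RandomForestClassifier(n_estimators=300, random_state=42).fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
```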

24 pages, 1680 KB  
Review
Leveraging Artificial Intelligence for Sustainable Tutoring and Dropout Prevention in Higher Education: A Scoping Review on Digital Transformation
by Washington Raúl Fierro Saltos, Fabian Eduardo Fierro Saltos, Veloz Segura Elizabeth Alexandra and Edgar Fabián Rivera Guzmán
Information 2025, 16(9), 819; https://doi.org/10.3390/info16090819 - 22 Sep 2025
Cited by 2 | Viewed by 1245
Abstract
The increasing integration of artificial intelligence into educational processes offers new opportunities to address critical issues in higher education, such as student dropout, academic underperformance, and the need for personalized tutoring. This scoping review aims to map the scientific literature on the use of AI techniques to predict academic performance, risk of dropout, and the need for academic advising, with an emphasis on e-learning or technology-mediated environments. The study follows the Joanna Briggs Institute PCC strategy, and the review was reported following the PRISMA-ScR checklist for search reporting. A total of 63 peer-reviewed empirical studies (2019–2025) were included after systematic filtering from the Scopus and Web of Science databases. The findings reveal that supervised machine learning models, such as decision trees, random forests, and neural networks, dominate the field, with an emerging interest in deep learning, transfer learning, and explainable AI. Academic, behavioral, emotional, and contextual variables are integrated into increasingly complex and interpretable models. Most studies focus on undergraduate students in digital and hybrid learning contexts, particularly in regions with high dropout rates. The review highlights the potential of AI to enable early intervention and improve the effectiveness of tutoring systems, while noting limitations such as lack of model generalization and ethical concerns. Recommendations are provided for future research and institutional integration. Full article
(This article belongs to the Special Issue Real-World Applications of Machine Learning Techniques)

42 pages, 5040 KB  
Systematic Review
A Systematic Review of Machine Learning Analytic Methods for Aviation Accident Research
by Aziida Nanyonga, Ugur Turhan and Graham Wild
Sci 2025, 7(3), 124; https://doi.org/10.3390/sci7030124 - 4 Sep 2025
Cited by 1 | Viewed by 2469
Abstract
The aviation industry prioritizes safety and has embraced innovative approaches for both reactive and proactive safety measures. Machine learning (ML) has emerged as a useful tool for aviation safety. This systematic literature review explores ML applications for safety within the aviation industry over the past 25 years. Through a comprehensive search on Scopus and backward reference searches via Google Scholar, 87 of the most relevant papers were identified. The investigation focused on the application context, ML techniques employed, data sources, and the implications of contextual nuances for safety analysis outcomes. ML techniques have been effective for post-accident analysis, predictive, and real-time incident detection across diverse aviation scenarios. Supervised, unsupervised, and semi-supervised learning methods, including neural networks, decision trees, support vector machines, and deep learning models, have all been applied for analyzing accidents, identifying patterns, and forecasting potential incidents. Notably, data sources such as the Aviation Safety Reporting System (ASRS) and the National Transportation Safety Board (NTSB) datasets were the most used. Transparency, fairness, and bias mitigation emerge as critical factors that shape the credibility and acceptance of ML-based safety research in aviation. The review revealed seven recommended future research directions: (1) interpretable AI; (2) real-time prediction; (3) hybrid models; (4) handling of unbalanced datasets; (5) privacy and data security; (6) human–machine interface for safety professionals; (7) regulatory implications. These directions provide a blueprint for further ML-based aviation safety research. This review underscores the role of ML applications in shaping aviation safety practices, thereby enhancing safety for all stakeholders. It serves as a constructive and cautionary guide for researchers, practitioners, and decision-makers, emphasizing the value of ML when used appropriately to transform aviation safety to be more data-driven and proactive. Full article
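For illustration only, the sketch below shows one common setup from this literature: a TF-IDF plus linear SVM classifier over incident narratives. The tiny corpus and category labels are placeholders, not data from ASRS or NTSB.

```python
# Hedged sketch of supervised text classification of accident/incident reports.
# The in-line narratives and labels are illustrative placeholders only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

narratives = [
    "loss of engine power during climb, forced landing in field",
    "runway incursion by ground vehicle during taxi",
    "severe turbulence encountered at cruise, minor injuries",
]
labels = ["powerplant", "ground_ops", "weather"]   # hypothetical categories

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
model.fit(narratives, labels)
print(model.predict(["engine failure shortly after takeoff"]))
```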

24 pages, 1646 KB  
Article
Differential Weighting and Flexible Residual GCN-Based Contrastive Learning for Recommendation
by Fuqiang Xie, Min Wang, Jianrong Peng and Dingcai Shen
Symmetry 2025, 17(8), 1320; https://doi.org/10.3390/sym17081320 - 14 Aug 2025
Cited by 1 | Viewed by 678
Abstract
The recommendation system based on graphs aims to infer the symmetrical relationship between unconnected user and item nodes. Graph convolutional neural networks (GCNs) are powerful deep learning models widely used in recommender systems, showcasing outstanding performance. However, existing GCN-based recommendation models still suffer from the well-known issue of over-smoothing, which remains a significant obstacle to improving recommendation performance. Additionally, the traditional neighborhood aggregation methods used in GCN-based recommendation models do not differentiate nodes by importance, which also degrades recommendation quality. To address these problems, we first propose a simple yet efficient GCN-based recommendation model, named WR-GCN, with a node-based dynamic weighting method and a flexible residual strategy. WR-GCN can effectively alleviate the over-smoothing issue and utilize the interaction information among the graph nodes, enhancing recommendation performance. Furthermore, building upon the outstanding performance of contrastive learning (CL) in recommendation systems and its robust capability to address data sparsity, we integrate the proposed WR-GCN into a simple CL framework to form a more potent recommendation model, WR-GCL, which incorporates an initial-embedding control method to balance high-frequency information. We conducted extensive experiments with the proposed WR-GCN and WR-GCL models on multiple datasets. The experimental results show that WR-GCN and WR-GCL outperform several state-of-the-art baselines. Full article
(This article belongs to the Section Computer)
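A minimal sketch of the general idea (not the authors' implementation): LightGCN-style propagation with a residual connection back to the initial embeddings, one common way to limit over-smoothing. The layer averaging, the residual weight, and the toy adjacency are assumptions for the example; any per-edge weighting is assumed to be baked into the normalized adjacency.

```python
# Hedged sketch of residual GCN propagation for recommendation embeddings.
import torch

def propagate(adj_norm, emb0, num_layers=3, residual_alpha=0.2):
    """adj_norm: sparse normalized (optionally weighted) user-item adjacency,
    emb0: initial node embeddings."""
    emb = emb0
    layer_outputs = [emb0]
    for _ in range(num_layers):
        emb = torch.sparse.mm(adj_norm, emb)                       # neighbour aggregation
        emb = (1 - residual_alpha) * emb + residual_alpha * emb0   # residual to layer 0
        layer_outputs.append(emb)
    return torch.stack(layer_outputs).mean(dim=0)                  # average of all layers

# toy usage: 4 nodes, 8-dim embeddings, identity-like adjacency
n, d = 4, 8
emb0 = torch.randn(n, d)
idx = torch.arange(n)
adj = torch.sparse_coo_tensor(torch.stack([idx, idx]), torch.ones(n), (n, n))
print(propagate(adj, emb0).shape)   # torch.Size([4, 8])
```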

26 pages, 571 KB  
Article
SHARP: Blockchain-Powered WSNs for Real-Time Student Health Monitoring and Personalized Learning
by Zeqiang Xie, Zijian Li and Xinbing Liu
Sensors 2025, 25(16), 4885; https://doi.org/10.3390/s25164885 - 8 Aug 2025
Cited by 2 | Viewed by 1176
Abstract
With the rapid advancement of the Internet of Things (IoT), artificial intelligence (AI), and blockchain technologies, educational research has increasingly explored smart and personalized learning systems. However, current approaches often suffer from fragmented integration of health monitoring and instructional adaptation, insufficient prediction accuracy of physiological states, and unresolved concerns regarding data privacy and security. To address these challenges, this study introduces SHARP, a novel blockchain-enhanced wireless sensor networks (WSNs) framework designed for real-time student health monitoring and personalized learning in smart educational environments. Wearable sensors enable continuous collection of physiological data, including heart rate variability, body temperature, and stress indicators. A deep neural network (DNN) processes these inputs to detect students’ physical and affective states, while a reinforcement learning (RL) algorithm dynamically generates individualised educational recommendations. A Proof-of-Authority (PoA) blockchain ensures secure, immutable, and transparent data management. Preliminary evaluations in simulated smart classrooms demonstrate significant improvements: the DNN achieves a 94.2% F1-score in state recognition, the RL module reduces critical event response latency, and energy efficiency improves by 23.5% compared to conventional baselines. Notably, intervention groups exhibit a 156% improvement in quiz scores over control groups. Compared to existing solutions, SHARP uniquely integrates multi-sensor physiological monitoring, real-time AI-based personalization, and blockchain-secured data governance in a unified framework. This results in superior accuracy, higher energy efficiency, and enhanced data integrity compared to prior IoT-based educational platforms. By combining intelligent sensing, adaptive analytics, and secure storage, SHARP offers a scalable and privacy-preserving solution for next-generation smart education. Full article
(This article belongs to the Special Issue Sensor-Based Recommender System for Smart Education and Smart Living)
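A minimal sketch of the kind of DNN state classifier the SHARP pipeline describes, assuming a three-feature physiological input and four discrete student states purely for illustration; the layer sizes are not the paper's configuration.

```python
# Hedged sketch: small feed-forward network mapping physiological features
# (heart-rate variability, temperature, a stress index) to a student state.
import torch
import torch.nn as nn

class StateClassifier(nn.Module):
    def __init__(self, n_features=3, n_states=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 64), nn.ReLU(),
            nn.Linear(64, 32), nn.ReLU(),
            nn.Linear(32, n_states),              # logits over student states
        )

    def forward(self, x):
        return self.net(x)

model = StateClassifier()
batch = torch.tensor([[0.82, 36.6, 0.31]])        # hrv, temperature, stress index (toy)
print(model(batch).softmax(dim=-1))               # probabilities over the 4 states
```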

27 pages, 2496 KB  
Article
A Context-Aware Tourism Recommender System Using a Hybrid Method Combining Deep Learning and Ontology-Based Knowledge
by Marco Flórez, Eduardo Carrillo, Francisco Mendes and José Carreño
J. Theor. Appl. Electron. Commer. Res. 2025, 20(3), 194; https://doi.org/10.3390/jtaer20030194 - 2 Aug 2025
Cited by 2 | Viewed by 3881
Abstract
The Santurbán paramo is a sensitive high-mountain ecosystem exposed to pressures from extractive and agricultural activities, as well as increasing tourism. In response, this study presents a context-aware recommendation system designed to support sustainable tourism through the integration of deep neural networks and ontology-based semantic modeling. The proposed system delivers personalized recommendations—such as activities, accommodations, and ecological routes—by processing user preferences, geolocation data, and contextual features, including cost and popularity. The architecture combines a trained TensorFlow Lite model with a domain ontology enriched with GeoSPARQL for geospatial reasoning. All inference operations are conducted locally on Android devices, supported by SQLite for offline data storage, which ensures functionality in connectivity-restricted environments and preserves user privacy. Additionally, the system employs geofencing to trigger real-time environmental notifications when users approach ecologically sensitive zones, promoting responsible behavior and biodiversity awareness. By incorporating structured semantic knowledge with adaptive machine learning, the system enables low-latency, personalized, and conservation-oriented recommendations. This approach contributes to the sustainable management of natural reserves by aligning individual tourism experiences with ecological protection objectives, particularly in remote areas like the Santurbán paramo. Full article
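For illustration, a minimal geofencing check of the kind described above, using the standard haversine distance; the zone coordinates and the 500 m radius are hypothetical placeholders, not the system's actual configuration.

```python
# Hedged sketch: trigger an environmental notification when the user enters a
# radius around an ecologically sensitive zone.
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two WGS84 points."""
    r = 6_371_000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

SENSITIVE_ZONE = (7.233, -72.883)      # hypothetical point in the paramo
RADIUS_M = 500

def check_geofence(user_lat, user_lon):
    if haversine_m(user_lat, user_lon, *SENSITIVE_ZONE) <= RADIUS_M:
        return "You are entering a protected area: please stay on marked routes."
    return None

print(check_geofence(7.2331, -72.8832))
```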

27 pages, 569 KB  
Article
Construction Worker Activity Recognition Using Deep Residual Convolutional Network Based on Fused IMU Sensor Data in Internet-of-Things Environment
by Sakorn Mekruksavanich and Anuchit Jitpattanakul
IoT 2025, 6(3), 36; https://doi.org/10.3390/iot6030036 - 28 Jun 2025
Viewed by 1221
Abstract
With the advent of Industry 4.0, sensor-based human activity recognition has become increasingly vital for improving worker safety, enhancing operational efficiency, and optimizing workflows in Internet-of-Things (IoT) environments. This study introduces a novel deep learning-based framework for construction worker activity recognition, employing a deep residual convolutional neural network (ResNet) architecture integrated with multi-sensor fusion techniques. The proposed system processes data from multiple inertial measurement unit sensors strategically positioned on workers’ bodies to identify and classify construction-related activities accurately. A comprehensive pre-processing pipeline is implemented, incorporating Butterworth filtering for noise suppression, data normalization, and an adaptive sliding window mechanism for temporal segmentation. Experimental validation is conducted using the publicly available VTT-ConIoT dataset, which includes recordings of 16 construction activities performed by 13 participants in a controlled laboratory setting. The results demonstrate that the ResNet-based sensor fusion approach outperforms traditional single-sensor models and other deep learning methods. The system achieves classification accuracies of 97.32% for binary discrimination between recommended and non-recommended activities, 97.14% for categorizing six core task types, and 98.68% for detailed classification across sixteen individual activities. Optimal performance is consistently obtained with a 4-second window size, balancing recognition accuracy with computational efficiency. Although the hand-mounted sensor proved to be the most effective as a standalone unit, multi-sensor configurations delivered significantly higher accuracy, particularly in complex classification tasks. The proposed approach demonstrates strong potential for real-world applications, offering robust performance across diverse working conditions while maintaining computational feasibility for IoT deployment. This work advances the field of innovative construction by presenting a practical solution for real-time worker activity monitoring, which can be seamlessly integrated into existing IoT infrastructures to promote workplace safety, streamline construction processes, and support data-driven management decisions. Full article
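A minimal sketch of the described pre-processing steps: Butterworth filtering of raw IMU channels followed by sliding-window segmentation. The sampling rate, cutoff frequency, and overlap are assumptions; the paper only reports the 4-second window as optimal.

```python
# Hedged sketch of IMU pre-processing: low-pass Butterworth filter + windows.
import numpy as np
from scipy.signal import butter, filtfilt

FS = 50                      # assumed sampling rate (Hz)
WINDOW_S, OVERLAP = 4, 0.5   # 4-second windows, assumed 50% overlap

def preprocess(imu, fs=FS, cutoff=10):
    b, a = butter(N=4, Wn=cutoff / (fs / 2), btype="low")
    return filtfilt(b, a, imu, axis=0)            # filter each channel

def sliding_windows(signal, fs=FS, window_s=WINDOW_S, overlap=OVERLAP):
    size = int(window_s * fs)
    step = int(size * (1 - overlap))
    return np.stack([signal[i:i + size]
                     for i in range(0, len(signal) - size + 1, step)])

raw = np.random.randn(1000, 6)        # toy accelerometer + gyroscope stream
windows = sliding_windows(preprocess(raw))
print(windows.shape)                  # (n_windows, 200, 6)
```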

29 pages, 1602 KB  
Article
A Recommender System Model for Presentation Advisor Application Based on Multi-Tower Neural Network and Utility-Based Scoring
by Maria Vlahova-Takova and Milena Lazarova
Electronics 2025, 14(13), 2528; https://doi.org/10.3390/electronics14132528 - 22 Jun 2025
Viewed by 3555
Abstract
Delivering compelling presentations is a critical skill across academic, professional, and public domains—yet many presenters struggle with structuring content, maintaining visual consistency, and engaging their audience effectively. Existing tools offer isolated support for design or delivery but fail to promote long-term skill development. This paper presents a novel intelligent application, the Presentation Advisor application, powered by a personalized recommendation engine that goes beyond fixing slide content and visualization, enabling users to build presentation competence. The recommendation engine leverages a model based on hybrid multi-tower neural network architecture enhanced with temporal encoding, problem sequence modeling, and utility-based scoring to deliver adaptive context-aware feedback. Unlike current tools, the presented system analyzes user-submitted presentations to detect common issues and delivers curated educational content tailored to user preferences, presentation types, and audiences. The system also incorporates strategic cold-start mitigation, ensuring high-quality recommendations even for new users or unseen content. Comprehensive experimental evaluations demonstrate that the suggested model significantly outperforms content-based filtering, collaborative filtering, autoencoders, and reinforcement learning approaches across both accuracy and personalization metrics. By combining cutting-edge recommendation techniques with a pedagogical framework, the Presentation Advisor application enables users not only to improve individual presentations but to become consistently better presenters over time. Full article
(This article belongs to the Section Computer Science & Engineering)
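A minimal two-tower sketch of the multi-tower idea combined with a utility re-scoring term; the dimensions, utility weight, and feature choices are illustrative assumptions, not the Presentation Advisor model.

```python
# Hedged sketch: separate towers embed the user profile and a candidate
# resource, and a utility term re-weights the match score.
import torch
import torch.nn as nn

def tower(in_dim, out_dim=32):
    return nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, out_dim))

class TwoTowerRecommender(nn.Module):
    def __init__(self, user_dim=16, item_dim=24):
        super().__init__()
        self.user_tower = tower(user_dim)
        self.item_tower = tower(item_dim)

    def forward(self, user_feats, item_feats, utility):
        u = self.user_tower(user_feats)
        v = self.item_tower(item_feats)
        match = (u * v).sum(dim=-1)                    # similarity between towers
        return match + 0.3 * utility                   # utility-based re-scoring (toy weight)

model = TwoTowerRecommender()
score = model(torch.randn(5, 16), torch.randn(5, 24), torch.rand(5))
print(score.shape)   # torch.Size([5])
```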

38 pages, 2728 KB  
Review
Sequential Recommendation System Based on Deep Learning: A Survey
by Peiyang Wei, Hongping Shu, Jianhong Gan, Xun Deng, Yi Liu, Wenying Sun, Tinghui Chen, Can Hu, Zhenzhen Hu, Yonghong Deng, Wen Qin and Zhibin Li
Electronics 2025, 14(11), 2134; https://doi.org/10.3390/electronics14112134 - 24 May 2025
Cited by 3 | Viewed by 6024
Abstract
With the rapid development of deep learning in artificial intelligence, sequential recommendation systems play an increasingly important role in e-commerce, social media, digital entertainment, and other fields. This work systematically reviews the research progress of deep learning in sequential recommendation systems from a methodological perspective. This paper focuses on analyzing three dominant technical paradigms: contrastive learning, graph neural networks, and attention mechanisms, elucidating their theoretical innovations and evolutionary trajectories in sequential recommendation systems. Through empirical investigation, we categorize the prevailing evaluation metrics, benchmark datasets, and characteristic distributions of typical application scenarios within this domain. This work further proposes promising avenues for sequential recommendation systems in the future. Full article
(This article belongs to the Section Computer Science & Engineering)
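As a rough illustration of the attention-based paradigm the survey reviews, the sketch below embeds an item sequence, applies self-attention, and scores the next item against the item embedding table (loosely SASRec-style); all sizes are illustrative.

```python
# Hedged sketch of a tiny self-attention sequential recommender.
import torch
import torch.nn as nn

class TinySeqRec(nn.Module):
    def __init__(self, n_items=1000, d=32, max_len=50):
        super().__init__()
        self.item_emb = nn.Embedding(n_items, d)
        self.pos_emb = nn.Embedding(max_len, d)
        self.attn = nn.MultiheadAttention(d, num_heads=4, batch_first=True)

    def forward(self, seq):                           # seq: (batch, seq_len) item ids
        pos = torch.arange(seq.size(1), device=seq.device)
        h = self.item_emb(seq) + self.pos_emb(pos)
        h, _ = self.attn(h, h, h)                     # self-attention over the history
        user_state = h[:, -1]                         # last position summarizes the user
        return user_state @ self.item_emb.weight.T    # scores for every item

model = TinySeqRec()
scores = model(torch.randint(0, 1000, (2, 10)))
print(scores.shape)   # torch.Size([2, 1000])
```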

27 pages, 1758 KB  
Article
Cybersecure XAI Algorithm for Generating Recommendations Based on Financial Fundamentals Using DeepSeek
by Iván García-Magariño, Javier Bravo-Agapito and Raquel Lacuesta
AI 2025, 6(5), 95; https://doi.org/10.3390/ai6050095 - 2 May 2025
Viewed by 2410
Abstract
Background: Stock investment decisions are among the most complex tasks due to uncertainty about which stocks will increase or decrease in value. A diversified portfolio statistically reduces the risk; however, stock choice still substantially influences the profitability. Methods: This work proposes a methodology to automate investment decision recommendations with clear explanations. It utilizes generative AI, guided by prompt engineering, to interpret price predictions derived from neural networks. The methodology also includes the Artificial Intelligence Trust, Risk, and Security Management (AI TRiSM) model to provide robust security recommendations for the system. The proposed system provides long-term investment recommendations based on the financial fundamentals of companies, such as the price-to-earnings ratio (PER) and the net profit margin over total revenue. The proposed explainable artificial intelligence (XAI) system uses DeepSeek for describing recommendations and suggested companies, as well as several charts based on Shapley additive explanation (SHAP) values and local interpretable model-agnostic explanations (LIME) for showing feature importance. Results: In the experiments, we compared the profitability of the proposed portfolios, ranging from 8 to 28 stocks, with the maximum expected price increases over 4 years in the NASDAQ-100 and S&P-500, where bull and bear markets were considered, respectively before and after the customs duty increases on international trade introduced by the USA in April 2025. The proposed system achieved an average profitability of 56.62% while considering 120 different portfolio recommendations. Conclusions: A Student's t-test confirmed that the difference in profitability compared to the index was statistically significant. A user study revealed that the participants agreed that the portfolio explanations were useful for trusting the system, with an average score of 6.14 on a 7-point Likert scale. Full article
(This article belongs to the Special Issue AI in Finance: Leveraging AI to Transform Financial Services)
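For illustration, a small sketch of the SHAP-based explanation step on toy fundamentals (e.g., PER and net margin); the data, model, and two-feature setup are fabricated for the example and are not the authors' system.

```python
# Hedged sketch: fit a model on toy company fundamentals and inspect
# per-feature contributions with SHAP values.
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))                 # columns: [PER, net_margin] (toy data)
y = 0.5 * X[:, 1] - 0.2 * X[:, 0] + rng.normal(scale=0.1, size=200)

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)        # per-sample feature contributions
print(np.abs(shap_values).mean(axis=0))       # global importance of PER vs net margin
```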

15 pages, 717 KB  
Article
Integration of Causal Models and Deep Neural Networks for Recommendation Systems in Dynamic Environments: A Case Study in StarCraft II
by Fernando Moreira, Jairo Ivan Velez-Bedoya and Jeferson Arango-López
Appl. Sci. 2025, 15(8), 4263; https://doi.org/10.3390/app15084263 - 12 Apr 2025
Cited by 1 | Viewed by 1766
Abstract
In the context of real-time strategy video games like StarCraft II, strategic decision-making is a complex challenge that requires adaptability and precision. This research develops a hybrid recommendation system that combines causal models and deep neural networks to suggest the best strategies based on the resources and conditions of the game. Data were collected from 100 controlled matches using PySC2 and the official StarCraft II API, with conditions standardized to the Terran race. Synthetic data were generated with a Conditional Tabular Generative Adversarial Network to address data scarcity, and were validated using Kolmogorov–Smirnov tests and correlation analysis. The causal model, implemented with PyMC, captured key causal relationships between variables such as resources, military units, and strategies. Its predictions were integrated as additional features into a deep neural network trained with PyTorch. The results show that the hybrid system is 1.1% more accurate and achieves a higher F1 score than a purely neural model, and it adapts its suggestions to the resources available. However, certain limitations were identified, such as a bias toward offensive strategies in the original data. This approach highlights the potential of combining causal knowledge with machine learning for recommendation systems in dynamic environments. Full article
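A minimal sketch of the hybrid pattern described above, where a causal estimate is appended as an extra input feature to a neural classifier; the stand-in causal function, feature count, and strategy classes are assumptions, not the paper's PyMC or PyTorch models.

```python
# Hedged sketch: a causal-model prediction becomes an additional feature of a
# neural strategy recommender.
import torch
import torch.nn as nn

def causal_feature(minerals, gas):
    """Placeholder for the causal model's predicted signal (toy formula)."""
    return 0.6 * minerals + 0.4 * gas

class StrategyNet(nn.Module):
    def __init__(self, n_raw=4, n_strategies=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_raw + 1, 32), nn.ReLU(),    # +1 for the causal feature
            nn.Linear(32, n_strategies),
        )

    def forward(self, raw):
        causal = causal_feature(raw[:, 0], raw[:, 1]).unsqueeze(-1)
        return self.net(torch.cat([raw, causal], dim=-1))

logits = StrategyNet()(torch.rand(8, 4))            # 8 game states, 4 raw features
print(logits.argmax(dim=-1))                         # recommended strategy index
```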

23 pages, 7010 KB  
Article
The Explanation and Sensitivity of AI Algorithms Supplied with Synthetic Medical Data
by Dan Munteanu, Simona Moldovanu and Mihaela Miron
Electronics 2025, 14(7), 1270; https://doi.org/10.3390/electronics14071270 - 24 Mar 2025
Cited by 2 | Viewed by 1508
Abstract
The increasing complexity and importance of medical data in improving patient care, advancing research, and optimizing healthcare systems motivated this study, which presents a novel methodology for evaluating the sensitivity of artificial intelligence (AI) algorithms when provided with real data, synthetic data, a mix of both, and synthetic features. Two medical datasets, the Pima Indians Diabetes Database (PIDD) and the Breast Cancer Wisconsin Dataset (BCWD), were used, employing the Gaussian Copula Synthesizer (GCS) and the Synthetic Minority Oversampling Technique (SMOTE) to generate synthetic data. We classified the new datasets using fourteen machine learning (ML) models incorporated into PyCaret AutoML (Automated Machine Learning) and two deep neural networks, evaluating performance using accuracy (ACC), F1-score, Area Under the Curve (AUC), Matthews Correlation Coefficient (MCC), and Kappa metrics. Local Interpretable Model-agnostic Explanations (LIME) provided explanation and justification of the classification results. The quality and content of medical data are critical; when classification of the original data is unsatisfactory, we recommend generating synthetic data with the SMOTE technique, which achieved an accuracy of 0.924 in our experiments, and supplying the AI algorithms with a combination of original and synthetic data. Full article
(This article belongs to the Special Issue Explainable AI: Methods, Applications, and Challenges)
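For illustration, a short sketch of the SMOTE oversampling step the study recommends, applied to a generated toy dataset rather than PIDD or BCWD.

```python
# Hedged sketch: balance an imbalanced tabular dataset with SMOTE before
# classification. The toy data is generated on the fly.
from collections import Counter
from imblearn.over_sampling import SMOTE
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=300, n_features=8,
                           weights=[0.9, 0.1], random_state=0)
print("before:", Counter(y))                  # imbalanced classes

X_res, y_res = SMOTE(random_state=0).fit_resample(X, y)
print("after: ", Counter(y_res))              # classes balanced with synthetic rows
```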

22 pages, 2102 KB  
Systematic Review
Advancing Diabetic Retinopathy Screening: A Systematic Review of Artificial Intelligence and Optical Coherence Tomography Angiography Innovations
by Alireza Hayati, Mohammad Reza Abdol Homayuni, Reza Sadeghi, Hassan Asadigandomani, Mohammad Dashtkoohi, Sajad Eslami and Mohammad Soleimani
Diagnostics 2025, 15(6), 737; https://doi.org/10.3390/diagnostics15060737 - 15 Mar 2025
Cited by 5 | Viewed by 4759
Abstract
Background/Objectives: Diabetic retinopathy (DR) remains a leading cause of preventable blindness, with its global prevalence projected to rise sharply as diabetes incidence increases. Early detection and timely management are critical to reducing DR-related vision loss. Optical Coherence Tomography Angiography (OCTA) now enables non-invasive, layer-specific visualization of the retinal vasculature, facilitating more precise identification of early microvascular changes. Concurrently, advancements in artificial intelligence (AI), particularly deep learning (DL) architectures such as convolutional neural networks (CNNs), attention-based models, and Vision Transformers (ViTs), have revolutionized image analysis. These AI-driven tools substantially enhance the sensitivity, specificity, and interpretability of DR screening. Methods: A systematic review of PubMed, Scopus, WOS, and Embase databases, including quality assessment of published studies, investigating the result of different AI algorithms with OCTA parameters in DR patients was conducted. The variables of interest comprised training databases, type of image, imaging modality, number of images, outcomes, algorithm/model used, and performance metrics. Results: A total of 32 studies were included in this systematic review. In comparison to conventional ML techniques, our results indicated that DL algorithms significantly improve the accuracy, sensitivity, and specificity of DR screening. Multi-branch CNNs, ensemble architectures, and ViTs were among the sophisticated models with remarkable performance metrics. Several studies reported that accuracy and area under the curve (AUC) values were higher than 99%. Conclusions: This systematic review underscores the transformative potential of integrating advanced DL and machine learning (ML) algorithms with OCTA imaging for DR screening. By synthesizing evidence from 32 studies, we highlight the unique capabilities of AI-OCTA systems in improving diagnostic accuracy, enabling early detection, and streamlining clinical workflows. These advancements promise to enhance patient management by facilitating timely interventions and reducing the burden of DR-related vision loss. Furthermore, this review provides critical recommendations for clinical practice, emphasizing the need for robust validation, ethical considerations, and equitable implementation to ensure the widespread adoption of AI-OCTA technologies. Future research should focus on multicenter studies, multimodal integration, and real-world validation to maximize the clinical impact of these innovative tools. Full article
(This article belongs to the Special Issue Artificial Intelligence Application in Cornea and External Diseases)

30 pages, 7287 KB  
Article
Context-Aware Tomato Leaf Disease Detection Using Deep Learning in an Operational Framework
by Divas Karimanzira
Electronics 2025, 14(4), 661; https://doi.org/10.3390/electronics14040661 - 8 Feb 2025
Cited by 7 | Viewed by 3082
Abstract
Tomato cultivation is a vital agricultural practice worldwide, yet it faces significant challenges due to various diseases that adversely affect crop yield and quality. This paper presents a novel tomato disease detection system within an operational framework that leverages an innovative deep learning-based classifier, specifically a Vision Transformer (ViT) integrated with cascaded group attention (CGA) and a modified Focaler-CIoU (Complete Intersection over Union) loss function. The proposed method aims to enhance the accuracy and robustness of disease detection by effectively capturing both local and global contextual information while addressing the challenges of sample imbalance in the dataset. To improve interpretability, we integrate Explainable Artificial Intelligence (XAI) techniques, enabling users to understand the rationale behind the model’s classifications. Additionally, we incorporate a large language model (LLM) to generate comprehensive, context-aware explanations and recommendations based on the identified diseases and other relevant factors, thus bridging the gap between technical analysis and user comprehension. Our evaluation against state-of-the-art deep learning methods, including convolutional neural networks (CNNs) and other transformer-based models, demonstrates that the ViT-CGA model significantly outperforms existing techniques, achieving an overall accuracy of 96.5%, an average precision of 93.9%, an average recall of 96.7%, and an average F1-score of 94.2% for tomato leaf disease classification. The integration of CGA and Focaler-CIoU loss not only contributes to improved model interpretability and stability but also empowers farmers and agricultural stakeholders with actionable insights, fostering informed decision making in disease management. This research advances the field of automated disease detection in crops and provides a practical framework for deploying deep learning solutions in agricultural settings, ultimately supporting sustainable farming practices and enhancing food security. Full article

15 pages, 10789 KB  
Article
Deep Double Towers Click Through Rate Prediction Model with Multi-Head Bilinear Fusion
by Yuan Zhang, Xiaobao Cheng, Wei Wei and Yangyang Meng
Symmetry 2025, 17(2), 159; https://doi.org/10.3390/sym17020159 - 22 Jan 2025
Viewed by 2175
Abstract
Click-through rate (CTR) prediction is a mainstream research direction in recommender systems, especially for online advertising recommendations. The multilayer perceptron (MLP) has been extensively utilized as the cornerstone of deep CTR prediction models. However, current neural network-based CTR prediction models commonly employ a single MLP network to capture nonlinear interactions between high-order features, while disregarding the interaction among differentiated features, resulting in poor model performance. Although studies such as DeepFM have proposed dual-branch interaction models to learn complex features, they still fall short of achieving more nuanced feature fusion. To address these challenges, we propose a novel model, the Deep Double Towers model (DDT), which improves the accuracy of CTR prediction through multi-head bilinear fusion while incorporating symmetry in its architecture. Specifically, the DDT model leverages symmetric parallel MLP networks to capture the interactions between differentiated features in a more structured and balanced manner. Furthermore, the multi-head bilinear fusion layer enables refined feature fusion through symmetry-aware operations, ensuring that feature interactions are aligned and symmetrically integrated. Experimental results on publicly available datasets, such as Criteo and Avazu, show that DDT surpasses existing models in improving the accuracy of CTR prediction, with symmetry contributing to more effective and balanced feature fusion. Full article
(This article belongs to the Special Issue Symmetry/Asymmetry in Image Processing and Computer Vision)
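A minimal sketch of multi-head bilinear fusion between two tower outputs, as named in the abstract; the head count, dimensions, and final sigmoid layer are illustrative assumptions rather than the DDT configuration.

```python
# Hedged sketch: several bilinear heads fuse two tower representations, and
# their concatenated outputs feed the final CTR logit.
import torch
import torch.nn as nn

class MultiHeadBilinearFusion(nn.Module):
    def __init__(self, dim=32, n_heads=4, head_dim=8):
        super().__init__()
        self.heads = nn.ModuleList(
            [nn.Bilinear(dim, dim, head_dim) for _ in range(n_heads)])
        self.out = nn.Linear(n_heads * head_dim, 1)

    def forward(self, tower_a, tower_b):
        fused = torch.cat([head(tower_a, tower_b) for head in self.heads], dim=-1)
        return torch.sigmoid(self.out(fused)).squeeze(-1)   # predicted CTR

fusion = MultiHeadBilinearFusion()
ctr = fusion(torch.randn(16, 32), torch.randn(16, 32))      # two tower outputs
print(ctr.shape)   # torch.Size([16])
```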
