Search Results (2,954)

Search Parameters:
Keywords = heterogenous learning

19 pages, 1160 KiB  
Article
Multi-User Satisfaction-Driven Bi-Level Optimization of Electric Vehicle Charging Strategies
by Boyin Chen, Jiangjiao Xu and Dongdong Li
Energies 2025, 18(15), 4097; https://doi.org/10.3390/en18154097 (registering DOI) - 1 Aug 2025
Abstract
The accelerating integration of electric vehicles (EVs) into contemporary transportation infrastructure has underscored significant limitations in traditional charging paradigms, particularly in accommodating heterogeneous user requirements within dynamic operational environments. This study presents a differentiated optimization framework for EV charging strategies through the systematic classification of user types. A multidimensional decision-making environment is established for three representative user categories—residential, commercial, and industrial—by synthesizing time-variant electricity pricing models with dynamic carbon emission pricing mechanisms. A bi-level optimization architecture is subsequently formulated, leveraging deep reinforcement learning (DRL) to capture user-specific demand characteristics through customized reward functions and adaptive constraint structures. Validation is conducted within a high-fidelity simulation environment featuring 90 autonomous EV charging agents operating in a metropolitan parking facility. Empirical results indicate that the proposed typology-driven approach yields a 32.6% average cost reduction across user groups relative to baseline charging protocols, with statistically significant improvements in expenditure optimization (p < 0.01). Further interpretability analysis employing gradient-weighted class activation mapping (Grad-CAM) demonstrates that the model’s attention mechanisms are well aligned with theoretically anticipated demand prioritization patterns across the distinct user types, thereby confirming the decision-theoretic soundness of the framework. Full article
(This article belongs to the Section E: Electric Vehicles)
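
As a rough illustration of the typology-driven idea above, the sketch below encodes a per-user-type charging reward that combines a time-of-use cost term with a carbon-price term; the user categories, weights, and prices are assumptions for illustration, not values from the article.

```python
# Minimal sketch of a user-type-specific charging reward of the kind the abstract
# describes. The weights, price values, and categories are illustrative assumptions.
WEIGHTS = {                      # (cost weight, carbon weight) per user category
    "residential": (1.0, 0.3),
    "commercial":  (0.7, 0.5),
    "industrial":  (0.5, 0.8),
}

def charging_reward(power_kw, price_per_kwh, carbon_price, user_type, dt_h=0.25):
    """Negative of the weighted monetary + carbon cost of one charging step."""
    w_cost, w_co2 = WEIGHTS[user_type]
    energy = power_kw * dt_h
    return -(w_cost * energy * price_per_kwh + w_co2 * energy * carbon_price)

# One 15-minute step: 7 kW charging at 0.32 $/kWh with a 0.05 $/kWh carbon signal.
print(charging_reward(7.0, 0.32, 0.05, "residential"))
```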

20 pages, 2223 KiB  
Article
Category Attribute-Oriented Heterogeneous Resource Allocation and Task Offloading for SAGIN Edge Computing
by Yuan Qiu, Xiang Luo, Jianwei Niu, Xinzhong Zhu and Yiming Yao
J. Sens. Actuator Netw. 2025, 14(4), 81; https://doi.org/10.3390/jsan14040081 (registering DOI) - 1 Aug 2025
Abstract
Space-Air-Ground Integrated Network (SAGIN), which is considered a network architecture with great development potential, exhibits significant cross-domain collaboration characteristics at present. However, most of the existing works ignore the matching and adaptability of differentiated tasks and heterogeneous resources, resulting in significantly inefficient task execution and undesirable network performance. As a consequence, we formulate a category attribute-oriented resource allocation and task offloading optimization problem with the aim of minimizing the overall scheduling cost. We first introduce a task–resource matching matrix to facilitate optimal task offloading policies with computation resources. In addition, virtual queues are constructed to take the impact of randomized task arrivals into account. To solve the optimization objective, which jointly considers bandwidth allocation, transmission power control, and task offloading decisions, we propose a deep reinforcement learning (DRL) algorithm framework that accounts for type matching. Simulation experiments demonstrate the effectiveness of our proposed algorithm as well as its superior performance compared to other approaches. Full article
(This article belongs to the Section Communications and Networking)
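
For intuition, the sketch below shows one simple way a task–resource matching matrix can be turned into an assignment by minimizing total scheduling cost; the cost values and the one-to-one assignment are illustrative simplifications, not the authors' formulation.

```python
# Hedged sketch of a task–resource matching matrix: rows are tasks, columns are
# heterogeneous compute resources, entries are scheduling costs, and the Hungarian
# algorithm picks the minimum-cost assignment. All numbers are illustrative.
import numpy as np
from scipy.optimize import linear_sum_assignment

cost = np.array([                 # cost[i, j]: cost of running task i on resource j
    [4.0, 2.5, 7.0],              # e.g. ground server, aerial relay, satellite
    [3.0, 6.0, 2.0],
    [5.5, 4.0, 4.5],
])
tasks, resources = linear_sum_assignment(cost)   # minimises total scheduling cost
for t, r in zip(tasks, resources):
    print(f"task {t} -> resource {r} (cost {cost[t, r]})")
print("total cost:", cost[tasks, resources].sum())
```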

25 pages, 2859 KiB  
Article
Feature-Based Normality Models for Anomaly Detection
by Hui Yie Teh, Kevin I-Kai Wang and Andreas W. Kempa-Liehr
Sensors 2025, 25(15), 4757; https://doi.org/10.3390/s25154757 (registering DOI) - 1 Aug 2025
Abstract
Detecting previously unseen anomalies in sensor data is a challenging problem for artificial intelligence when sensor-specific and deployment-specific characteristics of the time series need to be learned from a short calibration period. From the application point of view, this challenge becomes increasingly important because many applications are gravitating towards utilising low-cost sensors for Internet of Things deployments. While these sensors offer cost-effectiveness and customisation, their data quality does not match that of their high-end counterparts. To improve sensor data quality while addressing the challenges of anomaly detection in Internet of Things applications, we present an anomaly detection framework that learns a normality model of sensor data. The framework models the typical behaviour of individual sensors, which is crucial for the reliable detection of sensor data anomalies, especially when dealing with sensors observing significantly different signal characteristics. Our framework learns sensor-specific normality models from a small set of anomaly-free training data while employing an unsupervised feature engineering approach to select statistically significant features. The selected features are subsequently used to train a Local Outlier Factor anomaly detection model, which adaptively determines the boundary separating normal data from anomalies. The proposed anomaly detection framework is evaluated on three real-world public environmental monitoring datasets with heterogeneous sensor readings. The sensor-specific normality models are learned from extremely short calibration periods (as short as the first 3 days or 10% of the total recorded data) and outperform four other state-of-the-art anomaly detection approaches with respect to F1-score (between 5.4% and 9.3% better) and Matthews correlation coefficient (between 4.0% and 7.6% better). Full article
(This article belongs to the Special Issue Innovative Approaches to Cybersecurity for IoT and Wireless Networks)
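
A minimal sketch of the normality-model idea, assuming simple rolling-window statistics in place of the paper's feature-selection pipeline and using scikit-learn's Local Outlier Factor in novelty mode; the synthetic data and thresholds are assumptions, not the published pipeline.

```python
# Hedged sketch: learn a sensor-specific normality model from a short, anomaly-free
# calibration period, then flag anomalous windows in new data.
import numpy as np
from sklearn.neighbors import LocalOutlierFactor

rng = np.random.default_rng(0)
calibration = rng.normal(20.0, 0.5, size=2000)           # anomaly-free sensor readings
test = np.concatenate([rng.normal(20.0, 0.5, 500),
                       rng.normal(35.0, 0.5, 20)])        # drifted segment = anomalies

def window_features(signal, width=16):
    """Simple per-window statistics standing in for the paper's feature selection."""
    windows = np.lib.stride_tricks.sliding_window_view(signal, width)
    return np.column_stack([windows.mean(axis=1),
                            windows.std(axis=1),
                            windows.max(axis=1) - windows.min(axis=1)])

lof = LocalOutlierFactor(n_neighbors=30, novelty=True)    # novelty=True allows .predict
lof.fit(window_features(calibration))
labels = lof.predict(window_features(test))               # +1 = normal, -1 = anomaly
print("flagged windows:", int((labels == -1).sum()))
```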

22 pages, 8105 KiB  
Article
Extraction of Sparse Vegetation Cover in Deserts Based on UAV Remote Sensing
by Jie Han, Jinlei Zhu, Xiaoming Cao, Lei Xi, Zhao Qi, Yongxin Li, Xingyu Wang and Jiaxiu Zou
Remote Sens. 2025, 17(15), 2665; https://doi.org/10.3390/rs17152665 (registering DOI) - 1 Aug 2025
Abstract
The unique characteristics of desert vegetation, such as different leaf morphology, discrete canopy structures, sparse and uneven distribution, etc., pose significant challenges for remote sensing-based estimation of fractional vegetation cover (FVC). The Unmanned Aerial Vehicle (UAV) system can accurately distinguish vegetation patches, extract weak vegetation signals, and navigate through complex terrain, making it suitable for applications in small-scale FVC extraction. In this study, we selected the floodplain fan with Caragana korshinskii Kom as the constructive species in Hatengtaohai National Nature Reserve, Bayannur, Inner Mongolia, China, as our study area. We investigated the remote sensing extraction method of desert sparse vegetation cover by placing samples across three gradients: the top, middle, and edge of the fan. We then acquired UAV multispectral images; evaluated the applicability of various vegetation indices (VIs) using methods such as supervised classification, linear regression models, and machine learning; and explored the feasibility and stability of multiple machine learning models in this region. Our results indicate the following: (1) We discovered that the multispectral vegetation index is superior to the visible vegetation index and more suitable for FVC extraction in vegetation-sparse desert regions. (2) By comparing five machine learning regression models, it was found that the XGBoost and KNN models exhibited relatively lower estimation performance in the study area. The spatial distribution of plots appeared to influence the stability of the SVM model when estimating fractional vegetation cover (FVC). In contrast, the RF and LASSO models demonstrated robust stability across both training and testing datasets. Notably, the RF model achieved the best inversion performance (R2 = 0.876, RMSE = 0.020, MAE = 0.016), indicating that RF is one of the most suitable models for retrieving FVC in naturally sparse desert vegetation. This study provides a valuable contribution to the limited existing research on remote sensing-based estimation of FVC and characterization of spatial heterogeneity in small-scale desert sparse vegetation ecosystems dominated by a single species. Full article
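
As a hedged illustration of the RF-based FVC regression described above, the sketch below fits a random forest to synthetic vegetation-index features and reports the same metrics; the data, index names, and model settings are assumptions, not the study's.

```python
# Illustrative sketch: multispectral vegetation indices as predictors, plot-level
# fractional vegetation cover (FVC) as the target, random forest regression.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error, mean_absolute_error

rng = np.random.default_rng(42)
X = rng.uniform(0.0, 0.6, size=(300, 3))                       # columns: NDVI, EVI, SAVI
y = 0.8 * X[:, 0] + 0.15 * X[:, 1] + rng.normal(0, 0.02, 300)  # synthetic FVC

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
rf = RandomForestRegressor(n_estimators=500, random_state=0).fit(X_tr, y_tr)
pred = rf.predict(X_te)
print(f"R2={r2_score(y_te, pred):.3f}  "
      f"RMSE={mean_squared_error(y_te, pred) ** 0.5:.3f}  "
      f"MAE={mean_absolute_error(y_te, pred):.3f}")
```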

23 pages, 3580 KiB  
Article
Distributed Collaborative Data Processing Framework for Unmanned Platforms Based on Federated Edge Intelligence
by Siyang Liu, Nanliang Shan, Xianqiang Bao and Xinghua Xu
Sensors 2025, 25(15), 4752; https://doi.org/10.3390/s25154752 (registering DOI) - 1 Aug 2025
Abstract
Unmanned platforms such as unmanned aerial vehicles, unmanned ground vehicles, and autonomous underwater vehicles often face challenges of data, device, and model heterogeneity when performing collaborative data processing tasks. Existing research does not simultaneously address issues from these three aspects. To address this issue, this study designs an unmanned platform cluster architecture inspired by the cloud-edge-end model. This architecture integrates federated learning for privacy protection, leverages the advantages of distributed model training, and utilizes edge computing’s near-source data processing capabilities. Additionally, this paper proposes a federated edge intelligence method (DSIA-FEI), which comprises two key components. Based on traditional federated learning, a data sharing mechanism is introduced, in which data is extracted from edge-side platforms and placed into a data sharing platform to form a public dataset. At the beginning of model training, random sampling is conducted from the public dataset and distributed to each unmanned platform, so as to mitigate the impact of data distribution heterogeneity and class imbalance during collaborative data processing in unmanned platforms. Moreover, an intelligent model aggregation strategy based on similarity measurement and loss gradient is developed. This strategy maps heterogeneous model parameters to a unified space via hierarchical parameter alignment, and evaluates the similarity between local and global models of edge devices in real-time, along with the loss gradient, to select the optimal model for global aggregation, reducing the influence of device and model heterogeneity on cooperative learning of unmanned platform swarms. This study carried out extensive validation on multiple datasets, and the experimental results showed that the accuracy of the DSIA-FEI proposed in this paper reaches 0.91, 0.91, 0.88, and 0.87 on the FEMNIST, FEAIR, EuroSAT, and RSSCN7 datasets, respectively, which is more than 10% higher than the baseline method. In addition, the number of communication rounds is reduced by more than 40%, which is better than the existing mainstream methods, and the effectiveness of the proposed method is verified. Full article
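
The aggregation idea can be sketched as follows: score each client model by its similarity to the current global model and by its reported loss, then average only the top-scoring clients. The 0.5/0.5 blend and the top-k cutoff below are illustrative assumptions, not the published DSIA-FEI rule.

```python
# Hedged sketch of similarity-plus-loss model selection before global aggregation.
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def select_and_aggregate(global_w, client_ws, client_losses, k=3):
    sims = np.array([cosine(global_w, w) for w in client_ws])
    losses = np.array(client_losses)
    # higher similarity and lower loss are both better; normalise both to [0, 1]
    score = 0.5 * (sims - sims.min()) / (np.ptp(sims) + 1e-12) \
          + 0.5 * (1 - (losses - losses.min()) / (np.ptp(losses) + 1e-12))
    chosen = np.argsort(score)[-k:]
    return np.mean([client_ws[i] for i in chosen], axis=0), chosen

rng = np.random.default_rng(1)
global_w = rng.normal(size=128)                        # flattened global parameters
clients = [global_w + rng.normal(scale=s, size=128) for s in (0.1, 0.1, 0.5, 1.0, 0.2)]
losses = [0.4, 0.35, 0.9, 1.2, 0.5]
new_global, picked = select_and_aggregate(global_w, clients, losses)
print("aggregated clients:", picked)
```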

29 pages, 6397 KiB  
Article
Task Travel Time Prediction Method Based on IMA-SURBF for Task Dispatching of Heterogeneous AGV System
by Jingjing Zhai, Xing Wu, Qiang Fu, Ya Hu, Peihuang Lou and Haining Xiao
Biomimetics 2025, 10(8), 500; https://doi.org/10.3390/biomimetics10080500 (registering DOI) - 1 Aug 2025
Abstract
The heterogeneous automatic guided vehicle (AGV) system, composed of several AGVs with different load capabilities and handling functions, offers good flexibility and agility in meeting operational requirements. Accurate task travel time prediction (T3P) is vital for the efficient operation of heterogeneous AGV systems. However, T3P remains a challenging problem due to individual task correlations and dynamic changes in model input/output dimensions. To address these challenges, a biomimetics-inspired learning framework based on a radial basis function (RBF) neural network with an improved mayfly algorithm and a selective update strategy (IMA-SURBF) is proposed. Firstly, a T3P model is constructed by using travel-influencing factors as input and task travel time as output of the RBF neural network, where the input/output dimension is determined dynamically. Secondly, the improved mayfly algorithm (IMA), a biomimetic metaheuristic method, is adopted to optimize the initial parameters of the RBF neural network, while a selective update strategy is designed for parameter updates. Finally, simulation experiments on model design, parameter initialization, and comparison with deep learning-based models are conducted in a complex assembly line scenario to validate the accuracy and efficiency of the proposed method. Full article
(This article belongs to the Section Biological Optimisation and Management)

17 pages, 5062 KiB  
Article
DropDAE: Denoising Autoencoder with Contrastive Learning for Addressing Dropout Events in scRNA-seq Data
by Wanlin Juan, Kwang Woo Ahn, Yi-Guang Chen and Chien-Wei Lin
Bioengineering 2025, 12(8), 829; https://doi.org/10.3390/bioengineering12080829 (registering DOI) - 31 Jul 2025
Abstract
Single-cell RNA sequencing (scRNA-seq) has revolutionized molecular biology and genomics by enabling the profiling of individual cell types, providing insights into cellular heterogeneity. Deep learning methods have become popular in single cell analysis for tasks such as dimension reduction, cell clustering, and data imputation. In this work, we introduce DropDAE, a denoising autoencoder (DAE) model enhanced with contrastive learning, to specifically address the dropout events in scRNA-seq data, where certain genes show very low or even zero expression levels due to technical limitations. DropDAE uses the architecture of a denoising autoencoder to recover the underlying data patterns while leveraging contrastive learning to enhance group separation. Our extensive evaluations across multiple simulation settings based on synthetic data and a real-world dataset demonstrate that DropDAE not only reconstructs data effectively but also further improves clustering performance, outperforming existing methods in terms of accuracy and robustness. Full article
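
A minimal denoising-autoencoder sketch in the spirit of DropDAE, with dropout events simulated by zero-masking; the contrastive term is omitted here and all layer sizes are assumptions rather than the authors' architecture.

```python
# Hedged sketch: corrupt a toy expression matrix by zero-masking, train the network
# to reconstruct the unmasked counts. Sizes and training length are illustrative.
import torch
import torch.nn as nn

class DenoisingAE(nn.Module):
    def __init__(self, n_genes=2000, latent=32):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(n_genes, 256), nn.ReLU(),
                                 nn.Linear(256, latent))
        self.dec = nn.Sequential(nn.Linear(latent, 256), nn.ReLU(),
                                 nn.Linear(256, n_genes))

    def forward(self, x):
        z = self.enc(x)
        return self.dec(z), z

x = torch.rand(64, 2000)                       # toy expression matrix (cells x genes)
corrupted = x * (torch.rand_like(x) > 0.3)     # simulate dropouts by zero-masking
model = DenoisingAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(5):                             # a few illustrative training steps
    recon, z = model(corrupted)
    loss = nn.functional.mse_loss(recon, x)    # reconstruct the uncorrupted input
    opt.zero_grad(); loss.backward(); opt.step()
print("final reconstruction loss:", float(loss))
```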

32 pages, 17155 KiB  
Article
Machine Learning Ensemble Methods for Co-Seismic Landslide Susceptibility: Insights from the 2015 Nepal Earthquake
by Tulasi Ram Bhattarai and Netra Prakash Bhandary
Appl. Sci. 2025, 15(15), 8477; https://doi.org/10.3390/app15158477 (registering DOI) - 30 Jul 2025
Abstract
The Mw 7.8 Gorkha Earthquake of 25 April 2015 triggered over 25,000 landslides across central Nepal, with 4775 events concentrated in Gorkha District alone. Despite substantial advances in landslide susceptibility mapping, existing studies often overlook the compound role of post-seismic rainfall and lack robust spatial validation. To address this gap, we validated an ensemble machine learning framework for co-seismic landslide susceptibility modeling by integrating seismic, geomorphological, hydrological, and anthropogenic variables, including cumulative post-seismic rainfall. Using a balanced dataset of 4775 landslide and non-landslide instances, we evaluated the performance of Logistic Regression (LR), Random Forest (RF), and eXtreme Gradient Boosting (XGBoost) models through spatial cross-validation, SHapley Additive exPlanations (SHAP) explainability, and ablation analysis. The RF model outperformed all others, achieving an accuracy of 87.9% and a Receiver Operating Characteristic (ROC) Area Under the Curve (AUC) value of 0.94, while XGBoost closely followed (AUC = 0.93). Ensemble models collectively classified over 95% of observed landslides into High and Very High susceptibility zones, demonstrating strong spatial reliability. SHAP analysis identified elevation, proximity to fault, peak ground acceleration (PGA), slope, and rainfall as dominant predictors. Notably, the inclusion of post-seismic rainfall substantially improved recall and F1 scores in ablation experiments. Spatial cross-validation revealed the superior generalizability of ensemble models under heterogeneous terrain conditions. The findings underscore the value of integrating post-seismic hydrometeorological factors and spatial validation into susceptibility assessments. We recommend adopting ensemble models, particularly RF, for operational hazard mapping in earthquake-prone mountainous regions. Future research should explore the integration of dynamic rainfall thresholds and physics-informed frameworks to enhance early warning systems and climate resilience. Full article
(This article belongs to the Section Earth Sciences)
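
For orientation, a hedged sketch of an RF susceptibility classifier on synthetic conditioning factors (elevation, fault distance, PGA, slope, post-seismic rainfall); the data and coefficients are invented for illustration, and SHAP is replaced here by plain impurity importances.

```python
# Illustrative sketch: balanced landslide / non-landslide samples, a handful of
# conditioning factors, ROC AUC as the headline metric. All values are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(7)
n = 2000
X = np.column_stack([
    rng.uniform(500, 4000, n),     # elevation (m)
    rng.uniform(0, 20, n),         # distance to fault (km)
    rng.uniform(0.1, 0.8, n),      # peak ground acceleration (g)
    rng.uniform(0, 60, n),         # slope (deg)
    rng.uniform(0, 400, n),        # post-seismic rainfall (mm)
])
logit = 0.002 * X[:, 0] - 0.2 * X[:, 1] + 4 * X[:, 2] + 0.05 * X[:, 3] + 0.004 * X[:, 4] - 6
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)
rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)
print("ROC AUC:", round(roc_auc_score(y_te, rf.predict_proba(X_te)[:, 1]), 3))
print("importances:", dict(zip(
    ["elevation", "fault_dist", "pga", "slope", "rainfall"],
    rf.feature_importances_.round(3))))
```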

13 pages, 3360 KiB  
Review
Technological Advances in Pre-Operative Planning
by Mikolaj R. Kowal, Mohammed Ibrahim, André L. Mihaljević, Philipp Kron and Peter Lodge
J. Clin. Med. 2025, 14(15), 5385; https://doi.org/10.3390/jcm14155385 - 30 Jul 2025
Abstract
Surgery remains a healthcare intervention with significant risks for patients. Novel technologies can now enhance the peri-operative workflow, with artificial intelligence (AI) and extended reality (XR) to assist with pre-operative planning. This review focuses on innovation in AI, XR and imaging for hepato-biliary surgery planning. The clinical challenges in hepato-biliary surgery arise from heterogeneity of clinical presentations, the need for multiple imaging modalities and highly variable local anatomy. AI-based models have been developed for risk prediction and multi-disciplinary tumor (MDT) board meetings. The future could involve an on-demand and highly accurate AI-powered decision tool for hepato-biliary surgery, assisting the surgeon to make the most informed decision on the treatment plan, conferring the best possible outcome for individual patients. Advances in AI can also be used to automate image interpretation and 3D modelling, enabling fast and accurate 3D reconstructions of patient anatomy. Surgical navigation systems utilizing XR are already in development, showing an early signal towards improved patient outcomes when used for hepato-biliary surgery. Live visualization of hepato-biliary anatomy in the operating theatre is likely to improve operative safety and performance. The technological advances in AI and XR provide new applications in pre-operative planning with potential for patient benefit. Their use in surgical simulation could accelerate learning curves for surgeons in training. Future research must focus on standardization of AI and XR study reporting, robust databases that are ethically and data protection-compliant, and development of inter-disciplinary tools for various healthcare applications and systems. Full article
(This article belongs to the Special Issue Surgical Precision: The Impact of AI and Robotics in General Surgery)

18 pages, 8520 KiB  
Article
Cross-Layer Controller Tasking Scheme Using Deep Graph Learning for Edge-Controlled Industrial Internet of Things (IIoT)
by Abdullah Mohammed Alharthi, Fahad S. Altuwaijri, Mohammed Alsaadi, Mourad Elloumi and Ali A. M. Al-Kubati
Future Internet 2025, 17(8), 344; https://doi.org/10.3390/fi17080344 - 30 Jul 2025
Abstract
Edge computing (EC) plays a critical role in advancing the next-generation Industrial Internet of Things (IIoT) by enhancing production, maintenance, and operational outcomes across heterogeneous network boundaries. This study builds upon EC intelligence and integrates graph-based learning to propose a Cross-Layer Controller Tasking Scheme (CLCTS). The scheme operates through two primary phases: task grouping assignment and cross-layer control. In the first phase, controller nodes executing similar tasks are grouped based on task timing to achieve monotonic and synchronized completions. The second phase governs controller re-tasking both within and across these groups. Graph structures connect the groups to facilitate concurrent tasking and completion. A learning model is trained on inverse outcomes from the first phase to mitigate task acceptance errors (TAEs), while the second phase focuses on task migration learning to reduce task prolongation. Edge nodes interlink the groups and synchronize tasking, migration, and re-tasking operations across IIoT layers within unified completion periods. Departing from simulation-based approaches, this study presents a fully implemented framework that combines learning-driven scheduling with coordinated cross-layer control. The proposed CLCTS achieves an 8.67% reduction in overhead, a 7.36% decrease in task processing time, and a 17.41% reduction in TAEs while enhancing the completion ratio by 13.19% under maximum edge node deployment. Full article

40 pages, 3463 KiB  
Review
Machine Learning-Powered Smart Healthcare Systems in the Era of Big Data: Applications, Diagnostic Insights, Challenges, and Ethical Implications
by Sita Rani, Raman Kumar, B. S. Panda, Rajender Kumar, Nafaa Farhan Muften, Mayada Ahmed Abass and Jasmina Lozanović
Diagnostics 2025, 15(15), 1914; https://doi.org/10.3390/diagnostics15151914 - 30 Jul 2025
Abstract
Healthcare data rapidly increases, and patients seek customized, effective healthcare services. Big data and machine learning (ML) enabled smart healthcare systems hold revolutionary potential. Unlike previous reviews that separately address AI or big data, this work synthesizes their convergence through real-world case studies, cross-domain ML applications, and a critical discussion on ethical integration in smart diagnostics. The review focuses on the role of big data analysis and ML towards better diagnosis, improved efficiency of operations, and individualized care for patients. It explores the principal challenges of data heterogeneity, privacy, computational complexity, and advanced methods such as federated learning (FL) and edge computing. Applications in real-world settings, such as disease prediction, medical imaging, drug discovery, and remote monitoring, illustrate how ML methods, such as deep learning (DL) and natural language processing (NLP), enhance clinical decision-making. A comparison of ML models highlights their value in dealing with large and heterogeneous healthcare datasets. In addition, the use of nascent technologies such as wearables and Internet of Medical Things (IoMT) is examined for their role in supporting real-time data-driven delivery of healthcare. The paper emphasizes the pragmatic application of intelligent systems by highlighting case studies that reflect up to 95% diagnostic accuracy and cost savings. The review ends with future directions that seek to develop scalable, ethical, and interpretable AI-powered healthcare systems. It bridges the gap between ML algorithms and smart diagnostics, offering critical perspectives for clinicians, data scientists, and policymakers. Full article
(This article belongs to the Special Issue Machine-Learning-Based Disease Diagnosis and Prediction)

34 pages, 1156 KiB  
Systematic Review
Mathematical Modelling and Optimization Methods in Geomechanically Informed Blast Design: A Systematic Literature Review
by Fabian Leon, Luis Rojas, Alvaro Peña, Paola Moraga, Pedro Robles, Blanca Gana and Jose García
Mathematics 2025, 13(15), 2456; https://doi.org/10.3390/math13152456 - 30 Jul 2025
Abstract
Background: Rock–blast design is a canonical inverse problem that joins elastodynamic partial differential equations (PDEs), fracture mechanics, and stochastic heterogeneity. Objective: Guided by the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) protocol, a systematic review of mathematical methods for geomechanically informed blast modelling and optimisation is provided. Methods: A Scopus–Web of Science search (2000–2025) retrieved 2415 records; semantic filtering and expert screening reduced the corpus to 97 studies. Topic modelling with Bidirectional Encoder Representations from Transformers Topic (BERTOPIC) and bibliometrics organised them into (i) finite-element and finite–discrete element simulations, including arbitrary Lagrangian–Eulerian (ALE) formulations; (ii) geomechanics-enhanced empirical laws; and (iii) machine-learning surrogates and multi-objective optimisers. Results: High-fidelity simulations delimit blast-induced damage with ≤0.2 m mean absolute error; extensions of the Kuznetsov–Ram equation cut median-size mean absolute percentage error (MAPE) from 27% to 15%; Gaussian-process and ensemble learners reach a coefficient of determination (R2>0.95) while providing closed-form uncertainty; Pareto optimisers lower peak particle velocity (PPV) by up to 48% without productivity loss. Synthesis: Four themes emerge—surrogate-assisted PDE-constrained optimisation, probabilistic domain adaptation, Bayesian model fusion for digital-twin updating, and entropy-based energy metrics. Conclusions: Persisting challenges in scalable uncertainty quantification, coupled discrete–continuous fracture solvers, and rigorous fusion of physics-informed and data-driven models position blast design as a fertile test bed for advances in applied mathematics, numerical analysis, and machine-learning theory. Full article

20 pages, 732 KiB  
Review
AI Methods Tailored to Influenza, RSV, HIV, and SARS-CoV-2: A Focused Review
by Achilleas Livieratos, George C. Kagadis, Charalambos Gogos and Karolina Akinosoglou
Pathogens 2025, 14(8), 748; https://doi.org/10.3390/pathogens14080748 - 30 Jul 2025
Abstract
Artificial intelligence (AI) techniques—ranging from hybrid mechanistic–machine learning (ML) ensembles to gradient-boosted decision trees, support-vector machines, and deep neural networks—are transforming the management of seasonal influenza, respiratory syncytial virus (RSV), human immunodeficiency virus (HIV), and severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). Symptom-based triage models using eXtreme Gradient Boosting (XGBoost) and Random Forests, as well as imaging classifiers built on convolutional neural networks (CNNs), have improved diagnostic accuracy across respiratory infections. Transformer-based architectures and social media surveillance pipelines have enabled real-time monitoring of COVID-19. In HIV research, support-vector machines (SVMs), logistic regression, and deep neural network (DNN) frameworks advance viral-protein classification and drug-resistance mapping, accelerating antiviral and vaccine discovery. Despite these successes, persistent challenges remain—data heterogeneity, limited model interpretability, hallucinations in large language models (LLMs), and infrastructure gaps in low-resource settings. We recommend standardized open-access data pipelines and integration of explainable-AI methodologies to ensure safe, equitable deployment of AI-driven interventions in future viral-outbreak responses. Full article
(This article belongs to the Section Viral Pathogens)

30 pages, 5307 KiB  
Article
Self-Normalizing Multi-Omics Neural Network for Pan-Cancer Prognostication
by Asim Waqas, Aakash Tripathi, Sabeen Ahmed, Ashwin Mukund, Hamza Farooq, Joseph O. Johnson, Paul A. Stewart, Mia Naeini, Matthew B. Schabath and Ghulam Rasool
Int. J. Mol. Sci. 2025, 26(15), 7358; https://doi.org/10.3390/ijms26157358 - 30 Jul 2025
Abstract
Prognostic markers such as overall survival (OS) and tertiary lymphoid structure (TLS) ratios, alongside diagnostic signatures like primary cancer-type classification, provide critical information for treatment selection, risk stratification, and longitudinal care planning across the oncology continuum. However, extracting these signals solely from sparse, high-dimensional multi-omics data remains a major challenge due to heterogeneity and frequent missingness in patient profiles. To address this challenge, we present SeNMo, a self-normalizing deep neural network trained on five heterogeneous omics layers—gene expression, DNA methylation, miRNA abundance, somatic mutations, and protein expression—along with the clinical variables, that learns a unified representation robust to missing modalities. Trained on more than 10,000 patient profiles across 32 tumor types from The Cancer Genome Atlas (TCGA), SeNMo provides a baseline that can be readily fine-tuned for diverse downstream tasks. On a held-out TCGA test set, the model achieved a concordance index of 0.758 for OS prediction, while external evaluation yielded 0.73 on the CPTAC lung squamous cell carcinoma cohort and 0.66 on an independent 108-patient Moffitt Cancer Center cohort. Furthermore, on Moffitt’s cohort, baseline SeNMo fine-tuned for TLS ratio prediction aligned with expert annotations (p < 0.05) and sharply separated high- versus low-TLS groups, reflecting distinct survival outcomes. Without altering the backbone, a single linear head classified primary cancer type with 99.8% accuracy across the 33 classes. By unifying diagnostic and prognostic predictions in a modality-robust architecture, SeNMo demonstrated strong performance across multiple clinically relevant tasks, including survival estimation, cancer classification, and TLS ratio prediction, highlighting its translational potential for multi-omics oncology applications. Full article
(This article belongs to the Section Molecular Pathology, Diagnostics, and Therapeutics)
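
A minimal sketch of a self-normalizing building block of the kind SeNMo stacks: SELU activations, AlphaDropout, and LeCun-style initialisation; the layer sizes and input dimensionality are assumptions, not the published architecture.

```python
# Hedged sketch: self-normalizing MLP blocks keep activations roughly zero-mean and
# unit-variance, which suits sparse, high-dimensional multi-omics inputs.
import torch
import torch.nn as nn

def snn_block(d_in, d_out, dropout=0.1):
    layer = nn.Linear(d_in, d_out)
    nn.init.kaiming_normal_(layer.weight, nonlinearity="linear")  # ~LeCun normal for SELU
    nn.init.zeros_(layer.bias)
    return nn.Sequential(layer, nn.SELU(), nn.AlphaDropout(dropout))

model = nn.Sequential(                 # concatenated multi-omics vector in, risk score out
    snn_block(8000, 1024),
    snn_block(1024, 256),
    snn_block(256, 64),
    nn.Linear(64, 1),                  # e.g. a survival risk head
)

x = torch.randn(16, 8000)              # 16 patients, 8000 concatenated omics features
print(model(x).shape)                  # torch.Size([16, 1])
```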

21 pages, 2255 KiB  
Article
Cloud-Based Architecture for Hydrophone Data Acquisition and Processing of Surface and Underwater Vehicle Detection
by Francisco Pérez Carrasco, Anaida Fernández García, Alberto García, Verónica Ruiz Bejerano, Álvaro Gutiérrez and Alberto Belmonte-Hernández
J. Mar. Sci. Eng. 2025, 13(8), 1455; https://doi.org/10.3390/jmse13081455 - 30 Jul 2025
Abstract
This paper presents a cloud-based architecture for the acquisition, transmission, and processing of acoustic data from hydrophone arrays, designed to enable the detection and monitoring of both surface and underwater vehicles. The proposed system offers a modular and scalable cloud infrastructure that supports real-time and distributed processing of hydrophone data collected in diverse aquatic environments. Acoustic signals captured by heterogeneous hydrophones—featuring varying sensitivity and bandwidth—are streamed to the cloud, where several machine learning algorithms can be deployed to extract distinguishing acoustic signatures from vessel engines and propellers in interaction with water. The architecture leverages cloud-based services for data ingestion, processing, and storage, facilitating robust vehicle detection and localization through propagation modeling and multi-array geometric configurations. Experimental validation demonstrates the system’s effectiveness in handling high-volume acoustic data streams while maintaining low-latency processing. The proposed approach highlights the potential of cloud technologies to deliver scalable, resilient, and adaptive acoustic sensing platforms for applications in maritime traffic monitoring, harbor security, and environmental surveillance. Full article
(This article belongs to the Section Ocean Engineering)
