Search Results (8,285)

Search Parameters:
Keywords = hybrid learning

50 pages, 17736 KB  
Article
Swin–YOLOv12: A Hybrid Transformer-Based Deep Learning Approach for Enhanced Real-Time Brain Tumor Detection in MRI Images
by Mubashar Tariq and Kiho Choi
Mathematics 2026, 14(9), 1447; https://doi.org/10.3390/math14091447 - 25 Apr 2026
Abstract
Brain tumors (BTs) arise from the abnormal growth of cells within brain tissue and may spread rapidly, making them a major cause of mortality worldwide. Early detection of BTs remains highly challenging due to the brain’s complex structure and the heterogeneous nature of tumors. Magnetic Resonance Imaging (MRI) provides detailed information about tumor size, location, and shape, thereby supporting clinical decision-making for treatments such as chemotherapy, radiation therapy, and surgery. Traditional machine learning (ML) approaches mainly rely on manual feature extraction, whereas recent advances in Computer-Aided Diagnosis (CAD) and deep learning (DL) have enabled more accurate detection of small and complex tumor regions. To improve automated tumor detection, we propose a hybrid Swin–YOLO framework that combines the Swin Transformer (ST) with the latest CNN-based YOLOv12 model. In this framework, the Swin Transformer serves as the main backbone for feature extraction, while the Feature Pyramid Network (FPN) and Path Aggregation Network (PANet) are employed in the neck to better capture multi-scale features. For training, we used the publicly available Br35H dataset and applied data augmentation to enhance the model’s robustness and generalization capability. The experimental results show that the proposed framework achieved 99.7% accuracy, 99.4% mAP@50, and 87.2% mAP@50:95. Furthermore, we incorporated Explainable Artificial Intelligence (XAI) techniques, including Grad-CAM and SHAP, to improve the interpretability of the model by visually highlighting the tumor regions that contributed most to the prediction. In addition, we developed NeuroVision AI, a web-based application designed to support faster and more accurate clinical decision-making. Although the proposed model demonstrated strong performance on the dataset, these results should be interpreted within the context of the current experimental setting. Full article
25 pages, 9962 KB  
Review
Data-Driven Quantum Simulation of Artificial Quantum Materials with Rydberg Atoms
by Minhyuk Kim
Materials 2026, 19(9), 1758; https://doi.org/10.3390/ma19091758 - 25 Apr 2026
Abstract
Programmable quantum simulators based on Rydberg atom arrays provide a versatile platform for data-driven quantum simulation of strongly correlated systems, combinatorial optimization problems, and artificial quantum materials. In this review, we present a unified perspective on how materials-inspired effective Hamiltonians can be engineered and probed in Rydberg arrays, highlighting representative phenomena such as quantum phase transitions, frustrated spin-liquid–like states, symmetry-protected topological phases, and nonequilibrium dynamics. We further discuss recent progress in machine learning-based approaches, including phase identification from experimental snapshots, neural network quantum states, Hamiltonian learning, and quantum reservoir computing. A central theme is the emergence of closed-loop classical–quantum hybrid workflows, in which quantum simulation, measurement, and classical inference are integrated through iterative feedback. These developments position Rydberg atom arrays not only as programmable simulators but also as data-driven platforms for the scalable exploration, characterization, and design of complex quantum materials. Full article
25 pages, 8307 KB  
Article
A Physics–Data Hybrid Framework Using Uncalibrated Consumer CMOS Vision: Pilot Study on Monocular Automatic TUG Assessment Towards Early Parkinson’s Disease Risk Screening
by Yuxiang Qiu, Xiaodong Sun, Fan Yang, Jarred Fastier-Wooller, Shun Muramatsu, Michitaka Yamamoto and Toshihiro Itoh
Micromachines 2026, 17(5), 523; https://doi.org/10.3390/mi17050523 - 25 Apr 2026
Abstract
The Timed Up and Go (TUG) test is a clinical gold standard for assessing elderly mobility, yet its automated deployment in home-monitoring and resource-limited areas is hindered by high hardware costs and expert calibration requirements. This study introduces a Physics–Data Hybrid framework specifically designed for uncalibrated consumer-grade CMOS cameras, enabling a “plug-and-play” solution for early Parkinson’s disease (PD) risk screening. The proposed pipeline integrates learning-based pose perception with a self-evolving physics model to recover absolute metric-scale motion without manual checkerboard calibration. A noise-adaptive fusion strategy is implemented to reconcile 2D pixel dynamics with 3D kinematic consistency, overcoming the inherent scale ambiguity of monocular vision. Crucially, this framework enables the extraction of high-dimensional spatiotemporal parameters—such as stride length coefficient of variation and mean gait velocity—which provide a finer diagnostic resolution for capturing subtle motor fluctuations than conventional timing-only systems. Results from our pilot study with a cohort of 10 subjects demonstrate that these extracted metric features serve as decisive markers for risk staging simulated by dual-task-induced cognitive-motor-interference, achieving 98% screening accuracy and an overall classification accuracy of 87.32%. This framework provides a robust, low-cost tool for ubiquitous telehealth, potentially supporting early PD risk assessment in underserved populations. Full article
19 pages, 1618 KB  
Article
Simulation and Correction Study of Solar Irradiance in Guangdong Based on WRF-Solar and Random Forest
by Yuanhong He, Zheng Li, Fang Zhou and Zhiqiu Gao
Energies 2026, 19(9), 2077; https://doi.org/10.3390/en19092077 - 24 Apr 2026
Abstract
To improve solar irradiance simulation accuracy for precise photovoltaic power forecasting, we developed a hybrid framework combining WRF-Solar numerical simulation and random forest (RF) machine learning for a PV plant in Guangdong, China. Weather conditions were objectively classified into clear, intermittent cloudy, and overcast using the Daily Variability Index (DVI) and Daily Clear-sky Index (DCI). We calibrated the WRF-Solar model’s microphysics and radiative transfer schemes via sensitivity tests to optimize overcast-sky performance, then applied RF correction to the simulated irradiance. Results show that RF correction significantly reduces simulation errors for intermittent and overcast conditions, while the original WRF-Solar outperforms the corrected results under clear skies due to RF overfitting. Full article
(This article belongs to the Special Issue Advanced Artificial Intelligence for Photovoltaic Energy Systems)
38 pages, 6938 KB  
Article
DeepSense: An Adaptive Scalable Ensemble Framework for Industrial IoT Anomaly Detection
by Amir Firouzi and Ali A. Ghorbani
Sensors 2026, 26(9), 2662; https://doi.org/10.3390/s26092662 - 24 Apr 2026
Abstract
The Industrial Internet of Things (IIoT) has become a cornerstone of modern industrial automation, enabling real-time monitoring, intelligent decision-making, and large-scale connectivity across cyber–physical systems. However, the growing scale, heterogeneity, and dynamic behavior of IIoT environments significantly expand the attack surface and challenge the effectiveness of conventional security mechanisms. In this paper, we propose DeepSense, a hybrid and adaptive anomaly and intrusion detection framework specifically designed for resource-constrained and heterogeneous IIoT deployments. DeepSense integrates three complementary components: DataSense, a realistic data pipeline and experimental testbed supporting synchronized sensor and network data processing; RuleSense, a lightweight rule-based detection layer that provides fast, deterministic, and interpretable anomaly screening at the edge; and NeuroSense, a learning-driven detection module comprising an adaptive ensemble of 22 machine learning and deep learning models spanning classical, neural, hybrid, and Transformer-based architectures. NeuroSense operates as a second detection stage that validates suspicious events flagged by RuleSense and enables both coarse-grained and fine-grained attack classification. To support rigorous and practical assessment, this work further introduces a comprehensive performance evaluation framework that extends beyond accuracy-centric metrics by jointly considering detection quality, latency, resource efficiency, and detection coverage, alongside an optimization-based process for selecting Pareto-optimal model ensembles under realistic IIoT constraints. Extensive experiments across diverse detection scenarios demonstrate that DeepSense exhibits strong generalization, lower false positive rates, and robust performance under evolving attack behaviors. 
The proposed framework provides a scalable and efficient IIoT security solution that meets the operational requirements of Industry 4.0 and the resilience-oriented objectives of Industry 5.0. Full article
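The DeepSense abstract above mentions an optimization-based process for selecting Pareto-optimal model ensembles under multiple criteria (detection quality, latency, resource efficiency, coverage). One common building block for such a process is a non-dominance filter, sketched here in Python; the metric layout and function name are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def pareto_front(scores):
    """Return indices of models not dominated by any other model.

    `scores` is a (models x metrics) array where every metric is oriented
    to be maximized (e.g. detection quality, negated latency, negated
    resource cost). A model is dominated if another model is at least as
    good on every metric and strictly better on at least one.
    """
    s = np.asarray(scores, dtype=float)
    keep = []
    for i in range(len(s)):
        dominated = any(
            np.all(s[j] >= s[i]) and np.any(s[j] > s[i])
            for j in range(len(s)) if j != i
        )
        if not dominated:
            keep.append(i)
    return keep
```

For example, with metrics (accuracy, -latency), a model scoring (1, 1) is dominated by one scoring (2, 2), while a model scoring (0, 3) survives because it is fastest.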
21 pages, 6210 KB  
Article
Robust Path Planning via Deep Reinforcement Learning
by Daeyeol Kang, Jongyoon Park and Pileun Kim
Sensors 2026, 26(9), 2658; https://doi.org/10.3390/s26092658 - 24 Apr 2026
Abstract
Deep reinforcement learning (DRL) for autonomous mobile robot navigation faces several inherent limitations. The stochastic nature of actions generated by DRL policies can undermine performance consistency, while inefficient exploration frequently delays the learning process or prevents the discovery of optimal solutions. This research aims to enhance the robustness of path planning by addressing these challenges. To achieve this goal, we propose a hybrid approach that integrates the flexible decision-making capabilities of deep reinforcement learning with the stability of traditional path planning. The proposed model adopts the Twin Delayed Deep Deterministic Policy Gradient (TD3) network as its base. Notably, we pre-process LiDAR point cloud data to extract only essential features for the state representation, thereby preventing performance degradation from high-dimensional inputs and improving computational efficiency. Our model optimizes the learning process through two core strategies. First, it prioritizes experience data generated during training based on negative rewards, guiding the model to learn more frequently from critical failures rather than redundant successes. Second, it dynamically compares the action proposed by the TD3 network with a goal-oriented action from a classical path-planning algorithm in real time. By selecting the action with the higher estimated value, the model guides the policy toward a stable and effective trajectory from the earliest stages of training. To validate the efficacy of our approach, we conducted simulation-based experiments comparing the performance of the proposed model with existing reinforcement learning networks. To ensure statistical significance and mitigate the impact of random initialization, all reported results are averaged over 10 independent runs with different random seeds. 
The results quantitatively demonstrate that our model achieves significantly higher and more stable reward values, confirming a robust improvement in the path-planning process. Full article
(This article belongs to the Special Issue Advancements in Autonomous Navigation Systems for UAVs)
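The abstract above describes comparing the TD3 action against a goal-oriented action from a classical planner and executing whichever has the higher estimated value. That gating step can be sketched minimally as follows; `critic` stands in for the learned Q-function and all names are illustrative, not the paper's code:

```python
def select_action(critic, state, rl_action, planner_action):
    """Value-gated action selection: score both candidate actions with the
    learned critic Q(s, a) and execute the one with the higher estimate.
    `critic` is any callable Q(state, action) -> float (assumed interface).
    Ties go to the RL action so the policy keeps receiving on-policy data.
    """
    q_rl = critic(state, rl_action)
    q_plan = critic(state, planner_action)
    return rl_action if q_rl >= q_plan else planner_action
```

In early training, when the critic rates the untrained policy's actions poorly, this gate tends to route the robot through the planner's stable trajectory, which matches the stabilizing effect the abstract reports.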
19 pages, 3718 KB  
Article
Sustainable Landslide Risk Assessment in Zonguldak Province Using AHP and Artificial Intelligence: Integration with InSAR and Inventory Data
by Senol Hakan Kutoglu and Deniz Arca
Sustainability 2026, 18(9), 4263; https://doi.org/10.3390/su18094263 - 24 Apr 2026
Abstract
This study evaluates the landslide susceptibility of Zonguldak Province, Türkiye, by integrating the Analytical Hierarchy Process (AHP), artificial intelligence (AI) algorithms, and SBAS-InSAR deformation data. Eight environmental and geological parameters—elevation, slope, aspect, lithology, hydrogeology, land use, and distances to rivers and roads—were weighted using AHP and analyzed through 25 AI models. Among them, the Ensemble Bagged Trees (EBT) algorithm achieved the highest predictive accuracy (84%), demonstrating strong adaptability to complex geological datasets. The resulting susceptibility maps were validated using both traditional landslide inventories and InSAR-derived deformation maps, achieving an overall agreement of 83.05%. This dual-validation approach allows for the identification of unrecorded or active slope movements not captured in existing inventories. The combined use of AHP and AI significantly improves model reliability by incorporating both expert judgment and data-driven learning. The study introduces a novel hybrid framework for landslide susceptibility mapping and provides a valuable reference for disaster risk management and spatial planning in regions with complex topography. This study also contributes to sustainability by supporting risk-informed land-use planning, reducing potential economic losses, and enhancing environmental resilience in landslide-prone regions. The proposed framework aligns with sustainable development goals by integrating geospatial technologies and data-driven approaches for long-term hazard mitigation. Full article
(This article belongs to the Section Hazards and Sustainability)
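The AHP weighting used in the abstract above conventionally derives criterion weights from the principal eigenvector of a reciprocal pairwise comparison matrix. A minimal sketch, with an illustrative matrix (the paper's actual comparison values are not given here):

```python
import numpy as np

def ahp_weights(pairwise):
    """Compute AHP criterion weights as the normalized principal
    eigenvector of a reciprocal pairwise comparison matrix."""
    vals, vecs = np.linalg.eig(np.asarray(pairwise, dtype=float))
    # Take the eigenvector for the largest (real part of the) eigenvalue.
    principal = np.real(vecs[:, np.argmax(np.real(vals))])
    w = np.abs(principal)           # eigenvector sign is arbitrary
    return w / w.sum()              # normalize weights to sum to 1
```

For a two-criterion matrix [[1, 3], [1/3, 1]] ("criterion A is 3x as important as B"), this yields weights of 0.75 and 0.25.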
28 pages, 1065 KB  
Article
Normalising Flow Enhanced GARCH Models: A Two-Stage Framework for Flexible Innovation Modelling in Financial Time Series
by Abdullah Hassan, Farai Mlambo and Wilson Tsakane Mongwe
Risks 2026, 14(5), 100; https://doi.org/10.3390/risks14050100 - 24 Apr 2026
Abstract
We introduce the Normalising Flow GARCH (NF-GARCH), a two-stage hybrid framework that enhances traditional GARCH models by replacing restrictive parametric innovation distributions with learned densities via normalising flows. Our approach preserves the interpretability of standard variance dynamics while addressing the common issue of innovation misspecification. In the first stage, we estimate standard GARCH variants (sGARCH, TGARCH, and gjrGARCH) to extract standardised residuals. In the second stage, a Masked Autoregressive Flow learns the underlying residual distribution, with samples from the flow subsequently driving the GARCH recursion for out-of-sample forecasting. Evaluated on 13 daily financial series (six FX pairs and seven equities), NF-GARCH demonstrates systematic, statistically significant improvements in forecast accuracy for skewed-t baselines. Wilcoxon signed-rank tests confirm superior performance specifically for gjrGARCH-sstd and sGARCH-sstd specifications. While the framework offers enhanced flexibility and generative realism, we observe that computational overhead is increased, and the log-variance specification of eGARCH exhibits instability when paired with flow-based innovations. These results suggest that while NF-GARCH effectively captures empirical tail behaviour in univariate settings, future research should explore conditional flow architectures and multivariate extensions to account for time-varying innovation shapes. For risk management, gains are most relevant where skewed-t baselines are used and where closer residual realism supports scenario analysis; effect sizes remain modest relative to model risk and implementation cost. Full article
(This article belongs to the Special Issue Volatility Modeling in Financial Market)
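The two-stage recipe in the NF-GARCH abstract above (fit a GARCH recursion, extract standardized residuals, then drive the recursion forward with sampled innovations) can be sketched minimally in Python. This sketch substitutes a bootstrap draw from the empirical residuals for the paper's Masked Autoregressive Flow, and all parameter names are illustrative, not the authors' implementation:

```python
import numpy as np

def garch11_forecast(returns, omega, alpha, beta, horizon, rng=None):
    """Two-stage sketch: (1) run the GARCH(1,1) variance recursion over
    in-sample returns to obtain standardized residuals; (2) simulate
    forward by drawing innovations from the residual pool (a learned
    normalising flow would replace this bootstrap draw)."""
    rng = np.random.default_rng(rng)
    returns = np.asarray(returns, dtype=float)
    # Stage 1: in-sample conditional variances and standardized residuals.
    sigma2 = np.empty(len(returns))
    sigma2[0] = returns.var()
    for t in range(1, len(returns)):
        sigma2[t] = omega + alpha * returns[t - 1] ** 2 + beta * sigma2[t - 1]
    z = returns / np.sqrt(sigma2)
    # Stage 2: drive the same recursion forward with sampled innovations.
    sim_r, s2, r_prev = [], sigma2[-1], returns[-1]
    for _ in range(horizon):
        s2 = omega + alpha * r_prev ** 2 + beta * s2
        r_prev = np.sqrt(s2) * rng.choice(z)
        sim_r.append(r_prev)
    return np.array(sim_r)
```

The point of the NF-GARCH design is precisely that the draw in stage 2 comes from a learned density rather than a Gaussian, skewed-t, or bootstrap assumption, while the variance recursion itself stays interpretable.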
20 pages, 1775 KB  
Article
AI-Driven Energy Management for Sustainable Transformation of Recreational Boats: A Simulation Study for the Croatian Adriatic Coast
by Jasmin Ćelić, Aleksandar Cuculić, Ivan Panić and Marko Vukšić
Appl. Sci. 2026, 16(9), 4186; https://doi.org/10.3390/app16094186 - 24 Apr 2026
Abstract
Croatia hosts one of the most intensive recreational boating activities in the Mediterranean, with over 134,600 registered vessels along 5835 km of Adriatic coastline. This paper presents an AI-driven simulation framework for evaluating electrification pathways for the Croatian recreational vessel fleet. A key contribution is the explicit treatment of the AIS data gap: recreational vessels in Croatia are not required to carry AIS transponders, so synthetic operational profiles calibrated from manufacturer specifications and verified economic data are used instead. Six machine learning architectures are compared for vessel energy demand forecasting, with a proposed Transformer-based model achieving the best simulated performance. Fleet-weighted Monte Carlo simulation across three electrification scenarios suggests that an AI-optimised hybrid configuration can, subject to use intensity, reduce per-vessel CO2 emissions by up to 56.8% relative to conventional engines. Techno-economic analysis shows payback periods ranging from over 15 years for low-use private owners to 7–9 years for charter operators, supporting targeted incentive design. The framework is intended to be transferable to other Mediterranean coastal regions facing comparable data and operational constraints. Full article
(This article belongs to the Special Issue AI Applications in the Maritime Sector)
20 pages, 539 KB  
Article
Hybrid Blended WiFi Fingerprint Indoor Localization Using Multi-Task Learning and Feature-Space WKNN
by Yujie Li and Sang-Chul Kim
Appl. Sci. 2026, 16(9), 4184; https://doi.org/10.3390/app16094184 - 24 Apr 2026
Abstract
WiFi fingerprinting remains attractive for indoor localization because it reuses existing wireless infrastructure, yet RSSI fingerprints are high-dimensional, sparse, and often ambiguous across adjacent floors and building regions. This study develops a hybrid blended localization framework that combines multi-task learning with feature-space weighted k-nearest-neighbor refinement. A shared neural encoder predicts building labels, floor labels, and normalized coordinates from 520-dimensional WiFi fingerprints, and the learned embedding space is then used for semantically constrained WKNN correction. The final model is trained with AdamW, a learning rate of 8×10⁻⁴, batch size 512, and a joint loss over building classification, floor classification, and coordinate regression, without a learning-rate scheduler. Experiments on a public WiFi fingerprint dataset show that the hybrid model achieves the strongest overall localization robustness among the evaluated non-ensemble methods. On the official validation split, it obtains a mean localization error of 9.01, a median error of 6.25, and an RMSE of 12.95 in the dataset coordinate units. On the internal semantic validation split, it reaches 94.81% floor classification accuracy and 97.62% building classification accuracy. Floor-wise and building–floor analyses further show that the largest errors are concentrated in a small number of difficult semantic regions, especially the highest floor and sparsely constrained partitions. Full article
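The semantically constrained, feature-space WKNN correction described in the abstract above can be sketched as follows. The floor constraint, inverse-distance weighting, and all names are illustrative assumptions, not the paper's code:

```python
import numpy as np

def wknn_refine(query_emb, ref_embs, ref_coords, ref_floors, floor, k=3):
    """Refine a coordinate estimate with feature-space WKNN: restrict the
    reference fingerprints to the predicted floor (semantic constraint),
    find the k nearest neighbors in the learned embedding space, and
    return their coordinates averaged with inverse-distance weights."""
    mask = ref_floors == floor                     # semantic constraint
    embs, coords = ref_embs[mask], ref_coords[mask]
    d = np.linalg.norm(embs - query_emb, axis=1)   # embedding-space distances
    idx = np.argsort(d)[:k]
    w = 1.0 / (d[idx] + 1e-9)                      # inverse-distance weights
    return (coords[idx] * w[:, None]).sum(axis=0) / w.sum()
```

Because neighbors are selected in the encoder's embedding space rather than raw RSSI space, references that the network considers semantically similar dominate the correction, which is the mechanism the abstract credits for robustness across floors.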
34 pages, 9046 KB  
Article
Predicting the Strength of Sustainable Graphene-Enhanced Cementitious Composites Using Novel Machine Learning and Explainable AI Techniques
by Sanjog Chhetri Sapkota, Moinul Haq, Bipin Thapa, Sabin Adhikari, Anupam Dhakal, Roshan Paudel, Aashish Ghimire and Tushar Bansal
Infrastructures 2026, 11(5), 146; https://doi.org/10.3390/infrastructures11050146 - 24 Apr 2026
Abstract
The prediction of compressive strength (CS) for sustainable concrete reinforced with graphene nanoplatelets (GNPs) is difficult because of nonlinear interactions between chemical composition, dispersion state, and curing conditions. To address this, an interpretable ensemble machine learning framework is developed to provide accurate predictions of CS. The major input parameters are sand content, graphene diameter, graphene thickness, percentage of GNP to sand (GNP%; w/w), water-to-cement ratio (W/C), ultrasonication time (UST, in s), and curing age (CA, in days), while CS (in MPa) is the target output. The random forest (RF) and XGBoost (XGB) models are combined with two novel metaheuristic optimization techniques, the Drawer-based Optimization Algorithm (DOA) and the Giant Trevally Optimizer (GTO), to enhance hyperparameter tuning and generalization. Across all models, the DOA–XGB hybrid is the most predictive, with testing R² values up to 0.98, an RMSE of around 2.9 MPa, an MAE of approximately 2.0 MPa, and well over 97% of predictions within ±20% error bounds. Explainable artificial intelligence methodologies such as Shapley Additive exPlanations (SHAP), Local Interpretable Model-Agnostic Explanations (LIME), partial dependence plots, and Individual Conditional Expectation plots reveal curing age and graphene thickness as the dominant parameters. High strengths above 70 MPa are consistently achieved with longer curing ages, W/C ratios from 0.3 to 0.4, and graphene dosages from 0.5 to 2.5%. A Python GUI is developed for efficient and accurate strength predictions suitable for practical applications. The proposed approach provides a robust, interpretable, and efficient alternative to extensive testing for GNP-reinforced concrete. Full article
42 pages, 3267 KB  
Systematic Review
Fiber-Optic Sensor-Based Structural Health Monitoring with Machine Learning: A Task-Oriented and Cross-Domain Review
by Yasir Mahmood, Nof Yasir, Kathryn Quenette, Gul Badin, Ying Huang and Luyang Xu
Sensors 2026, 26(9), 2641; https://doi.org/10.3390/s26092641 - 24 Apr 2026
Abstract
Structural health monitoring (SHM) plays an increasingly important role in managing aging, safety-critical infrastructure under growing environmental and operational demands. In recent years, fiber-optic sensors (FOSs) have attracted significant attention for SHM applications due to their immunity to electromagnetic interference, durability in harsh environments, multiplexing capability, and suitability for both localized and fully distributed measurements. In parallel, advances in machine learning (ML) have enabled new approaches for extracting actionable insights from large, high-dimensional sensing datasets. This paper presents a systematic review of FOS-based SHM systems integrated with ML across civil, transportation, energy, marine, and aerospace infrastructures. Following PRISMA 2020 guidelines, peer-reviewed studies were identified and synthesized to examine sensing principles, deployment configurations, data characteristics, and learning-based analytical strategies. Fiber optic technologies are categorized into point-based, quasi-distributed, and fully distributed systems, and their capabilities for capturing strain, temperature, and spatiotemporal structural responses are critically evaluated. ML approaches are examined from a task-oriented perspective, including damage detection, localization, severity assessment, environmental compensation, and prognosis, with emphasis on the alignment between sensing configurations and appropriate learning paradigms. Key challenges remain, particularly regarding large data volumes, environmental variability, limited labeled damage datasets, model generalization, and system-level integration. Emerging directions such as physics-informed and hybrid learning, transfer learning, uncertainty-aware modeling, and integration with digital twins are discussed as pathways toward more robust and scalable SHM systems. 
By jointly addressing sensing physics and data-driven intelligence, this review provides a structured reference and practical roadmap for advancing intelligent FOS-based SHM in next-generation infrastructure. Full article
(This article belongs to the Special Issue Smart Sensor Technology for Structural Health Monitoring)
24 pages, 2958 KB  
Article
DK-VCA Net: A Topography-Aware Dual-Decomposition Framework for Mountain Traffic Flow Forecasting
by Chuanhe Shi, Shuai Fu, Zhen Zeng, Nan Zheng, Haizhou Cheng and Xu Lei
Information 2026, 17(5), 407; https://doi.org/10.3390/info17050407 - 24 Apr 2026
Abstract
Traffic flow prediction is important for traffic management and safety control in mountainous areas. In these environments, traffic flow is affected by complex terrain, changing weather, and mixed vehicle types, so the resulting time series often show strong fluctuation and poor stability. Many existing prediction models were developed for urban roads or flat highways, and their performance is therefore limited in mountainous scenarios. To address this problem, this paper proposes a hybrid model called DK-VCA Net. The model combines adaptive signal decomposition with a terrain-aware deep learning structure to separate useful traffic variation from complex noise. It also integrates traffic flow, speed, slope, and weather information to better describe mountain traffic conditions. The proposed method is evaluated using real traffic data collected at 5 min intervals from detection stations on the Guibi Expressway in Guizhou Province, China, during September 2020. Experimental results show that DK-VCA Net achieves better prediction accuracy than several representative baseline models, including 1D-CNN, LSTM, Transformer, STWave, and Mamba. Across the 15 min, 30 min, and 60 min forecasting tasks, the proposed model reduces the average RMSE by 14.8% compared with the conventional 1D-CNN model and by 8.9% compared with the baseline Transformer model. The ablation study further proves the effectiveness of the decomposition strategy, terrain-related features, and the attention mechanism. The results show that the proposed method is effective for traffic flow prediction in the studied mountainous highway scenario. Full article
22 pages, 1328 KB  
Review
Bridging Traditional Modeling and Artificial Intelligence in Measles Epidemiology: Methods, Applications, and Future Directions—A Narrative Review
by Andrei Florentin Baiasu, Alexandra-Daniela Rotaru-Zavaleanu, Ana-Maria Boldea, Mihai-Andrei Ruscu, Mircea-Sebastian Serbanescu and Lucretiu Radu
J. Clin. Med. 2026, 15(9), 3242; https://doi.org/10.3390/jcm15093242 - 24 Apr 2026
Abstract
Measles remains one of the most contagious infectious diseases globally and continues to pose substantial public health risks despite decades of effective vaccination. This narrative review examines both classical and contemporary computational approaches used for measles monitoring, prediction, and control, with particular attention given to the emerging role of artificial intelligence (AI). We synthesized findings from 46 studies: 31 focused directly on measles, and 15 were methodologically relevant studies of related infectious diseases (COVID-19, influenza, malaria), selected through searches of PubMed, Scopus, Web of Science, IEEE Xplore, and preprint servers conducted between June and December 2025. Traditional compartmental models (SIR, SEIR, MSEIR), statistical tools (ARIMA, SARIMA), and seroepidemiological analysis provide transparent, well-characterized frameworks for estimating transmission dynamics and simulating intervention scenarios. Spatial modeling, network analysis, and Monte Carlo simulations have added geographic granularity to outbreak characterization. More recently, AI and machine learning (ML) methods, including supervised algorithms (Random Forest, XGBoost, SVM), deep learning architectures (CNN, LSTM), and hybrid mechanistic ML models, have shown improved predictive performance by integrating multiple data sources: epidemiological records, demographic profiles, mobility patterns, and behavioral indicators. AI-based approaches appear most valuable for high-dimensional risk prediction and image-based diagnostic tasks, while classical models retain clear advantages for policy-oriented scenario analysis. However, no AI-based or hybrid model identified in this review has been adopted into routine national measles surveillance or used for vaccination policy decisions at scale. Important challenges remain: data quality varies across settings, model generalizability cannot be assumed, and computational infrastructure disparities limit deployment in high-burden regions. Explainable AI, federated learning, workforce training for model interpretation, and integration of vaccination registries with mobility and genomic surveillance data represent concrete future directions for strengthening computational support for measles elimination. Full article
(This article belongs to the Special Issue New Advances of Infectious Disease Epidemiology)
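Compartmental models such as SIR, which this review treats as the classical baseline, can be sketched with a simple forward-Euler integration. The parameter values below are purely illustrative assumptions and are not drawn from the review:

```python
def sir_step(s, i, r, beta, gamma, dt):
    """One Euler step of the SIR equations:
    dS/dt = -beta*S*I/N,  dI/dt = beta*S*I/N - gamma*I,  dR/dt = gamma*I."""
    n = s + i + r
    new_inf = beta * s * i / n * dt
    new_rec = gamma * i * dt
    return s - new_inf, i + new_inf - new_rec, r + new_rec

def simulate_sir(s0, i0, r0, beta, gamma, days, dt=0.1):
    """Integrate the SIR model over `days` and return final (S, I, R)."""
    s, i, r = float(s0), float(i0), float(r0)
    for _ in range(int(days / dt)):
        s, i, r = sir_step(s, i, r, beta, gamma, dt)
    return s, i, r
```

By construction each step conserves the total population, so S + I + R stays fixed over the run; real measles analyses would use measured contact and recovery rates rather than these toy values.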
20 pages, 10122 KB  
Data Descriptor
A Decadal Dataset of Offshore Weather and Normalized Wind–Solar Power Yield for Long-Term Evolution and Capacity Siting Planning in the Beibu Gulf, China
by Ziniu Li, Xin Guo, Zhonghao Qian, Aihua Zhou, Lin Peng and Suyang Zhou
Data 2026, 11(5), 92; https://doi.org/10.3390/data11050092 - 24 Apr 2026
Abstract
For offshore renewable energy planning and intelligent power management, access to long-term, high-resolution, and physically consistent meteorological and power generation records is essential. Such data supports a wide range of tasks, including resource assessment, hybrid system capacity sizing, grid operation planning, and data-driven forecasting model development. This article presents the construction of a 10-year continuous hourly dataset for 16 deep-sea grid sites in the Beibu Gulf, China, spanning from January 2016 to December 2025. The raw meteorological variables, including 10 m wind speed, wind direction, solar irradiance, and 2 m air temperature, were retrieved from the NASA POWER satellite database and subsequently cleaned using a 24 h periodic substitution algorithm designed to preserve the physical integrity of daily weather cycles. The dataset is organized into two sub-datasets, the Historical Weather Dataset and the Normalized Power Yield Dataset, with the latter providing normalized wind and solar power outputs on a 1.0 per-unit (p.u.) basis derived from a wind turbine power curve model and a PV thermodynamic model. All 32 CSV files are freely accessible online with UTF-8 encoding. The utility of the dataset is illustrated through two representative application cases: offshore site selection with hybrid capacity sizing, and physics-informed deep learning forecasting, demonstrating its suitability for both engineering analysis and machine learning model development. Full article
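The Normalized Power Yield Dataset derives per-unit wind output from a turbine power curve. A minimal sketch of such a curve follows; the cut-in, rated, and cut-out speeds and the cubic ramp are generic illustrative assumptions, not the dataset's actual turbine model:

```python
def wind_power_pu(v, v_cut_in=3.0, v_rated=12.0, v_cut_out=25.0):
    """Normalized (per-unit) wind power from a generic turbine power curve.

    Output is 0 below cut-in and at/above cut-out, 1.0 p.u. at/above rated
    speed, and follows a cubic ramp in between (wind power scales with v^3).
    All speed thresholds here are illustrative placeholders.
    """
    if v < v_cut_in or v >= v_cut_out:
        return 0.0
    if v >= v_rated:
        return 1.0
    return ((v - v_cut_in) / (v_rated - v_cut_in)) ** 3
```

Applying such a mapping hourly to the 10 m wind speed series would yield a normalized power trace on the 1.0 p.u. basis the dataset describes, though the published files may use a different curve shape or hub-height correction.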