
Search Results (465)

Search Parameters:
Keywords = scale-recurrent network

34 pages, 7008 KB  
Article
Development of a TimesNet–NLinear Framework Based on Seasonal-Trend Decomposition Using LOESS for Short-Term Motion Response of Floating Offshore Wind Turbines
by Xinheng Zhang, Yao Cheng, Peng Dou, Yihan Xing, Renwei Ji, Pei Zhang, Puyi Yang, Xiaosen Xu and Shuaishuai Wang
J. Mar. Sci. Eng. 2026, 14(6), 571; https://doi.org/10.3390/jmse14060571 - 19 Mar 2026
Abstract
Floating offshore wind turbines (FOWTs) exhibit complex motions under marine environmental loads and frequently undergo coupled oscillations across multiple degrees of freedom (DOFs). Accurate short-term motion prediction of these responses is crucial for operational safety and maintenance. To overcome the limitations of traditional “black-box” models under complex aero-hydrodynamic loads, this study proposes STL–TimesNet–NLinear, a novel physics-informed deep learning framework. The framework utilizes STL decomposition to explicitly decouple motion signals: NLinear captures non-stationary low-frequency slow drifts, while TimesNet extracts multi-periodic wave-frequency responses. The model was evaluated across different platform typologies—a 5 MW semi-submersible and a larger-scale 15 MW Spar-type platform—under various typical operational and extreme environmental conditions. Model performance was evaluated using comparative and ablation experiments. At a prediction-ahead time (PAT) of 5 s, the proposed model achieves coefficients of determination (R2) exceeding 0.95. Even at longer PATs, the R2 remains above 0.90, consistently outperforming multiple benchmark models. Compared to traditional recurrent neural networks (e.g., LSTM), it decreases the Mean Absolute Error (MAE) for pitch motion under extreme sea states by 54.7% and increases the R2 to 0.9573. Furthermore, the inference latency is only 2.4 ms per step. These findings confirm that the proposed STL–TimesNet–NLinear model provides fast and accurate solutions for the short-term motion response prediction of FOWTs, demonstrating valuable potential applications for enhancing the safety planning of offshore wind turbine operation and maintenance.
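The decoupling idea in this abstract (split the motion signal into a slow trend, a periodic part, and a residual, then model each separately) can be sketched as follows. This is only a structural illustration, not the authors' implementation: the paper's LOESS smoother is replaced here by a centered moving average, and the `decompose` function, `period` argument, and toy signal are assumptions of the sketch.

```python
# Illustrative seasonal-trend split in the spirit of STL.
# A centered moving average stands in for the LOESS trend smoother,
# so only the structure of the decomposition is shown here.

def decompose(signal, period):
    """Split a series into trend, seasonal, and residual parts."""
    n = len(signal)
    half = period // 2
    mean = sum(signal) / n
    # Trend: centered moving average (edges fall back to the global mean).
    trend = []
    for i in range(n):
        lo, hi = i - half, i + half + 1
        if lo < 0 or hi > n:
            trend.append(mean)
        else:
            window = signal[lo:hi]
            trend.append(sum(window) / len(window))
    detrended = [x - t for x, t in zip(signal, trend)]
    # Seasonal: average of the detrended series at each phase of the period.
    phase_mean = [0.0] * period
    for p in range(period):
        vals = detrended[p::period]
        phase_mean[p] = sum(vals) / len(vals)
    seasonal = [phase_mean[i % period] for i in range(n)]
    residual = [d - s for d, s in zip(detrended, seasonal)]
    return trend, seasonal, residual

signal = [i * 0.1 + (1.0 if i % 4 == 0 else 0.0) for i in range(24)]
trend, seasonal, residual = decompose(signal, period=4)
# By construction the three components reconstruct the series exactly.
recon = [t + s + r for t, s, r in zip(trend, seasonal, residual)]
assert all(abs(a - b) < 1e-9 for a, b in zip(recon, signal))
```

In the paper's framework, the slow trend would go to the NLinear branch and the periodic component to TimesNet; here both branches are omitted.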
(This article belongs to the Special Issue Breakthrough Research in Marine Structures)

25 pages, 4870 KB  
Article
Multi-Scale Dilated Autoformer for UAV Energy Consumption Forecasting
by Zalza Karima, Muhammad Fairuz Mummtaz, Khairi Hindriyandhito Nurcahyo, Ida Bagus Krishna Yoga Utama and Yeong Min Jang
Drones 2026, 10(3), 215; https://doi.org/10.3390/drones10030215 - 18 Mar 2026
Abstract
Understanding power consumption conditions is necessary for optimizing UAV energy use, particularly during flight under varying weather conditions and environmental factors. Maintaining UAV energy while accounting for multiple influencing variables and vulnerability to weather conditions provides an appropriate case study for advanced predictive modeling. This study investigates UAV power consumption during hovering flight by forecasting power usage using an MDFA network to improve prediction accuracy and better adapt to rapid weather-induced variations. To capture intricate temporal dependencies and recurrent oscillatory behavior, the integrated model combines multi-scale dilated convolutions with a Fourier-enhanced mechanism. According to the experimental results, this model achieves 3% error reductions under all tested flight conditions, indicating a significant improvement in performance. Overall, the MDFA model consistently showed better performance under high power consumption conditions than under low power consumption conditions, and it produced the lowest error in heavy flight compared to low and medium flight.

30 pages, 1713 KB  
Article
Safe-Calibrated TCN–Transformer Transfer Learning for Reliable Battery SoH Estimation Under Lab-to-Field Domain Shift
by Kumbirayi Nyachionjeka and Ehab H. E. Bayoumi
World Electr. Veh. J. 2026, 17(3), 149; https://doi.org/10.3390/wevj17030149 - 17 Mar 2026
Abstract
Battery state-of-health (SoH) estimation is central to transportation electrification because it conditions safety limits, warranty accounting, power capability management, and long-horizon fleet optimization. Although deep temporal architectures can achieve high laboratory accuracy, field deployment is frequently limited by laboratory (Lab)-to-field (L2F) domain shift that alters input statistics, feature definitions, and noise regimes. Under such a shift, predictors may remain strongly monotonic, preserving degradation ordering, yet become operationally unreliable due to systematic output distortion (e.g., compression/warping of the SoH scale). A deployment-complete L2F transfer learning pipeline is presented, built around a gated Temporal Convolutional Network (TCN)–Transformer fusion backbone, domain-specific adapters and heads, alignment-regularized fine-tuning, and row-level inference via sliding-window overlap averaging. To address the dominant deployment failure mode, a Safe Calibration stage robustly filters calibration pairs and selects among candidate calibrators under a strict do-no-harm criterion. On an unseen deployment stream (2154 labeled rows), overlap-averaged raw inference achieves a Mean Absolute Error (MAE) of 0.0439, a Root Mean Squared Error (RMSE) of 0.0501, and R2 = 0.7451, consistent with mid-to-high SoH range compression, while Safe Calibration (Isotonic-Balanced selected) corrects nonlinear scaling without violating monotonic structure, improving to MAE = 0.0188, RMSE = 0.0252, and R2 = 0.9357. To obtain a complete understanding of the challenges posed by domain shift, the evaluation is extended to include other architecture baselines, such as TCN-only, Transformer-only, Gated Recurrent Unit (GRU), Long Short-Term Memory (LSTM), and a Ridge regression baseline, along with explicit alignment and calibration ablations: CORAL off/on, and calibration set to none vs. Safe-Global vs. Context-Aware, under identical leakage-safe splits and the same overlap-averaged deployment inference operator. This work goes beyond peak-score reporting and examines the robustness of the pipeline under domain shift, quantified across four random seeds and multiple deployment streams, with uncertainty summarized via mean ± std and bootstrap confidence intervals for MAE/RMSE computed from per-example absolute errors.
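The abstract's selected calibrator is isotonic, i.e. a monotonic least-squares fit that can correct nonlinear scale compression without reordering predictions. A minimal sketch of the classic pool-adjacent-violators algorithm behind isotonic regression is shown below; this is a generic textbook version, not the authors' Safe Calibration code, and the function name `isotonic_fit` is an assumption.

```python
def isotonic_fit(y):
    """Pool-adjacent-violators: least-squares non-decreasing fit to y."""
    # Each block holds [sum, count]; adjacent blocks whose means violate
    # monotonicity are merged, which is the PAVA pooling step.
    blocks = []
    for v in y:
        blocks.append([v, 1])
        while (len(blocks) > 1
               and blocks[-2][0] / blocks[-2][1] > blocks[-1][0] / blocks[-1][1]):
            s, c = blocks.pop()
            blocks[-1][0] += s
            blocks[-1][1] += c
    fitted = []
    for s, c in blocks:
        fitted.extend([s / c] * c)
    return fitted

print(isotonic_fit([1.0, 3.0, 2.0, 4.0]))  # [1.0, 2.5, 2.5, 4.0]
```

Because the fitted curve is non-decreasing by construction, applying it as a calibrator preserves the degradation ordering of the raw SoH predictions, which is exactly the "without violating monotonic structure" property claimed above.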
(This article belongs to the Section Storage Systems)

17 pages, 4808 KB  
Article
Predicting Groundwater Depth Using Historical Data Trend Decomposition: Based on the VMD-LSTM Hybrid Deep Learning Model
by Jie Yue, Hong Guo, Deng Pan, Huanxiang Wang, Yawen Xin, Furong Yu, Yingying Shao and Rui Dun
Water 2026, 18(6), 689; https://doi.org/10.3390/w18060689 - 15 Mar 2026
Abstract
Groundwater is a critical natural and strategic economic resource, and the accurate prediction of groundwater depth dynamics is essential for the rational development and utilization of water resources. However, under the combined influence of climate variability, human activities, and complex hydrogeological conditions, groundwater level time series exhibit strong nonlinear and non-stationary characteristics, posing great challenges to the accurate prediction of groundwater level dynamics. Most existing prediction models rely on sufficient hydro-meteorological and exploitation data that are difficult to obtain in water-scarce regions, or fail to effectively decouple the multi-scale features of non-stationary groundwater level signals, resulting in limited prediction accuracy and insufficient generalization ability. To address these research gaps, this study takes Zhengzhou, a typical water-deficient city in the Yellow River Basin, as the study area, and proposes a hybrid deep learning framework combining Variational Mode Decomposition (VMD) and Long Short-Term Memory (LSTM) neural network for predicting shallow and intermediate-deep groundwater level changes. Kolmogorov–Arnold Networks (KANs) and Gated Recurrent Units (GRUs) are selected as benchmark models to verify the superior performance of the proposed framework. In this framework, the non-stationary groundwater level signal is adaptively decomposed into Intrinsic Mode Functions (IMFs) with distinct frequency characteristics via VMD. An independent LSTM model is constructed for each IMF to capture its unique temporal variation pattern, and the final groundwater level prediction is obtained by linearly reconstructing the predicted results of all IMFs. The results show that the coefficient of determination (R2) of the VMD-LSTM model exceeds 0.90 for all monitoring datasets, with low Mean Absolute Error (MAE) and Mean Squared Error (MSE). It significantly outperforms the benchmark models in handling nonlinear and non-stationary time series features. Using only historical groundwater level data as input, the proposed framework effectively overcomes the limitation of insufficient driving variables in data-scarce regions and fully explores the multi-scale evolution of groundwater dynamics through the synergistic effect of multi-scale decomposition and deep learning. The method presented in this study provides a novel and reliable technical approach for groundwater level prediction in water-deficient and data-limited areas, and also offers scientific support for the rational management and sustainable utilization of regional groundwater resources. Future research will incorporate driving factors such as meteorology and exploitation to further improve the model’s ability to capture abrupt changes in groundwater level dynamics.
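The pipeline shape described here (decompose the signal into components, fit one independent predictor per component, reconstruct the forecast as the linear sum of per-component forecasts) can be sketched with trivial stand-ins. VMD is replaced by a two-band moving-average split and each LSTM by a one-parameter AR(1) fit, so only the architecture of the VMD-LSTM framework is real in this sketch; all function names are assumptions.

```python
# Decompose-predict-reconstruct pattern, with toy stand-ins for VMD/LSTM.

def two_band_split(x, k=3):
    """Stand-in decomposition: slow band = moving average, fast band = rest."""
    slow = []
    for i in range(len(x)):
        window = x[max(0, i - k + 1):i + 1]
        slow.append(sum(window) / len(window))
    fast = [a - b for a, b in zip(x, slow)]
    return [slow, fast]  # components sum back to x by construction

def fit_ar1(component):
    """Stand-in per-component model: least-squares AR(1) coefficient."""
    num = sum(a * b for a, b in zip(component[1:], component[:-1]))
    den = sum(a * a for a in component[:-1]) or 1.0
    return num / den

def forecast(x):
    comps = two_band_split(x)
    # One independent model per component, then linear reconstruction.
    return sum(fit_ar1(c) * c[-1] for c in comps)

series = [0.1 * i for i in range(20)]
print(round(forecast(series), 3))
```

Swapping `two_band_split` for a VMD routine and `fit_ar1` for a trained LSTM yields the framework's structure without changing this control flow.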

27 pages, 10919 KB  
Article
Annual 10 m Mapping of Winter Fallow Fields in the Wanjiang Plain Using Sentinel-1/2 and a Random Forest–FR-Net Framework: Dynamics and Environmental Associations
by Shi Chen, Yinlan Huang and Shasha Hu
ISPRS Int. J. Geo-Inf. 2026, 15(3), 123; https://doi.org/10.3390/ijgi15030123 - 13 Mar 2026
Abstract
Winter fallow fields (WFF) are widespread across humid subtropical croplands in the Yangtze River Economic Belt, exerting direct implications for annual land-use efficiency and winter production potential. However, acquiring fine-scale, year-to-year WFF information remains challenging due to frequent cloud contamination and the high fragmentation of agricultural parcels. Here, we mapped the annual 10 m WFF distribution in the Wanjiang Plain for six winter seasons (2019–2024). We employed a hierarchical mapping framework that integrates winter-stage Sentinel-1/2 composites with a Random Forest (RF) pre-classifier and a Fine Resolution Network (FR-Net) refinement module. Parcel-wise validation demonstrated robust and consistent performance across years (pooled OA = 0.969, F1-score = 0.969, MCC = 0.938). Spatiotemporal analyses revealed that WFF persistently occupied 52.3–65.6% of the regional cropland (7.59 × 103–9.52 × 103 km2), exhibiting a pronounced “hot-north, cold-south” spatial clustering. Approximately 52% of the cropland experienced high fallow recurrence (>67% frequency), forming stable high-recurrence cores. Furthermore, our MaxEnt association model (AUC = 0.739) identified relief amplitude, slope, and silt content as the most influential biophysical constraints. While these correlational variables act as proxies for underlying drainage and workability constraints rather than deterministic drivers, our high-fidelity 10-m WFF layers provide a consistent, policy-relevant baseline for hotspot-oriented screening and targeted winter-cropping optimization.

37 pages, 4154 KB  
Article
Banking Efficiency Under Systemic Uncertainty: A Bibliometric Lens on Sustainability
by Alina Georgiana Manta, Claudia Gherțescu, Roxana Maria Bădîrcea and Nicoleta Mihaela Doran
Int. J. Financial Stud. 2026, 14(3), 74; https://doi.org/10.3390/ijfs14030074 - 12 Mar 2026
Abstract
This study delves into how the literature conceptualizes banking efficiency as a capability shaping sustainability-oriented pathways under conditions of systemic uncertainty, including recurrent economic–financial disruptions and geopolitical shocks. Using records indexed in the Web of Science Core Collection, the study combines bibliometric mapping with conceptual structuring to examine publication dynamics, collaboration networks, and the thematic evolution of research linking bank efficiency, green finance intermediation, sustainable digital innovation, and risk governance. The study reveals a multidimensional knowledge base organized around two converging streams: (i) research on efficiency, stability, and crisis transmission emphasizing intermediation quality, performance under stress, and prudential responses; and (ii) sustainability and innovation scholarship focusing on how financial systems enable eco-innovation diffusion and low-carbon transition through capital allocation, governance mechanisms, and digitally enabled transformation. Across these streams, banking efficiency is increasingly discussed not merely as a performance ratio, but as a strategic capability that becomes particularly salient in crisis environments: it can reduce intermediation frictions when funding conditions tighten, strengthen screening and monitoring of green projects amid elevated uncertainty, and support the continuity and scaling of eco-innovations by improving decision speed and resource allocation through digital tools. Collaboration patterns indicate growing interdisciplinary engagement—especially among European and Asian institutions—where crisis, sustainability, and innovation perspectives are integrated into systems-based approaches to green finance. Building on these insights, the article outlines a research agenda oriented toward innovation outcomes in turbulent contexts, emphasizing (a) measurement strategies that connect efficiency to eco-innovation diffusion and adoption rates during stress periods; (b) comparative analyses of how policy incentives and green market signals interact with bank efficiency across crisis episodes; and (c) hybrid methodological designs combining econometric identification, network analytics, scenario-based stress framing, and AI-enabled analytical tools to capture nonlinear dynamics in efficiency–innovation linkages. Overall, the study clarifies how banking efficiency may condition the capacity of financial institutions to sustain green investment intermediation and advance eco-innovation pathways when uncertainty is systemic rather than episodic.
(This article belongs to the Special Issue Digital Banking, FinTech, and AI for Climate and Sustainable Finance)

19 pages, 2727 KB  
Article
Plasmid-Driven Resistome Diversity in 9700 Escherichia coli Genomes Across Phylogroups and Sequence Types
by Adel Azour, Ghassan M. Matar and Melhem Bilen
Antibiotics 2026, 15(3), 287; https://doi.org/10.3390/antibiotics15030287 - 12 Mar 2026
Abstract
Background/Objectives: Plasmids are key vehicles for the dissemination of antimicrobial resistance (AMR), yet their contribution to the global resistome architecture of Escherichia coli remains poorly resolved. This study aimed to quantify how plasmid backbones shape the distribution, mobility, and stabilization of resistance genes across diverse phylogenetic backgrounds. Methods: We analyzed 9700 high-quality genomes spanning major phylogroups and sequence types. Plasmidome reconstruction was integrated with lineage-resolved antimicrobial resistance gene (ARG) mapping to characterize plasmid–ARG associations and evolutionary patterns. Results: Although most ARGs are chromosomal, plasmids disproportionately encode clinically important determinants including blaNDM-5, mcr-1.1, and multiple blaCTX-M alleles that show strong, recurrent associations with a restricted set of backbone families, most notably IncX3, IncX4, IncI, and IncF. These conserved plasmid–gene modules recur across phylogenetic backgrounds and continental scales. We identify a marked divergence in evolutionary strategies: generalist phylogroups (A, B1, D) maintain plasmid-rich and highly diverse resistomes, whereas globally dominant Extraintestinal Pathogenic E. coli (ExPEC) clones such as ST131 and ST410 exhibit reduced plasmid dependency and frequent chromosomal integration of extended-spectrum β-lactamase (ESBL) genes, particularly blaCTX-M-15, consistent with a shift toward vertically stabilized resistomes. By integrating plasmidome reconstruction with lineage-resolved ARG mapping, this study delivers the most extensive plasmid-focused resistome analysis to date, revealing highly modular plasmid–ARG networks structured around a small number of high-risk backbone types. These backbones account for the majority of globally relevant ARGs, including 64.6% of blaNDM-5 and 76.4% of mcr-1.1 detections. Conclusions: Together, our findings establish plasmid lineages rather than individual genes or clones as central units of AMR dissemination and critical targets for future genomic surveillance and intervention strategies.

18 pages, 4042 KB  
Article
Markov Transition Fields-Based Dual-Modal Fusion Method on Transient Stability Assessment for Power Systems
by Min Yan, Qian Chen, Zhihua Huang, Beiqi Qian, Lei Zhang, Yifan Ding and Zehua Su
Energies 2026, 19(6), 1417; https://doi.org/10.3390/en19061417 - 11 Mar 2026
Abstract
There is an extremely urgent need to develop a transient stability assessment method for new power systems with greater rapidity and higher accuracy due to the increased complexity and difficulty caused by massive nonlinear power electronics-dominated generation and loads. In recent years, computing power has increased significantly, artificial intelligence (AI) algorithms have developed rapidly, and large-scale AI models have become available. Among them, deep learning (DL) algorithms have received particular attention due to their inherent advantages and now underpin many assessment strategies and methods, but these algorithms are still not sufficiently applicable. Therefore, a Markov Transition Field (MTF)-based dual-modal fusion method for transient stability assessment of power systems is proposed in this paper. First, the influence and effect on transient stability assessment by the fusion of both image modality and time series modality are studied. Then, for enhancing key features, the strategy to convert the time series modality into image modality by MTF is established, which allows the features to be described at multiple time scales and the feature correlation between different time points to be strengthened. Features from the image modality and the time series modality are then extracted by Convolutional Neural Networks (CNNs) and gated recurrent units, respectively, and the extracted features are further fused by a concatenation fusion method. It is demonstrated by the simulation results that the accuracy of the transient stability assessment is improved effectively by the aforementioned fusion method.
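The MTF conversion mentioned above has a standard recipe: quantize the series into bins, estimate a first-order Markov transition matrix from consecutive samples, then spread that matrix over all pairs of time points so the result is an image. The sketch below follows that generic recipe (not this paper's specific implementation); the function name `mtf` and the bin count are assumptions.

```python
# Sketch of a Markov Transition Field: pixel (i, j) holds the estimated
# transition probability between the quantization bins of x_i and x_j.

def mtf(x, n_bins=4):
    lo, hi = min(x), max(x)
    width = (hi - lo) / n_bins or 1.0
    bins = [min(int((v - lo) / width), n_bins - 1) for v in x]
    # Row-normalized transition counts between consecutive bins.
    counts = [[0.0] * n_bins for _ in range(n_bins)]
    for a, b in zip(bins[:-1], bins[1:]):
        counts[a][b] += 1.0
    for row in counts:
        total = sum(row)
        if total:
            for j in range(n_bins):
                row[j] /= total
    # The field: one pixel per (i, j) pair of time points.
    return [[counts[bins[i]][bins[j]] for j in range(len(x))]
            for i in range(len(x))]

field = mtf([0.0, 1.0, 2.0, 3.0, 2.0, 1.0], n_bins=2)
# A length-n series yields an n-by-n image.
assert len(field) == 6 and all(len(r) == 6 for r in field)
```

The resulting image is what a CNN branch would consume, while the raw series feeds the recurrent branch before concatenation fusion.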
(This article belongs to the Special Issue Advanced in Modeling, Analysis and Control of Microgrids)

27 pages, 4887 KB  
Article
Urban Freight in Casablanca: Congestion, Emissions, and Welfare Losses from Large-Scale Simulation-Based Dynamic Assignment
by Amine Mohamed El Amrani, Mouhsene Fri, Othmane Benmoussa and Naoufal Rouky
Smart Cities 2026, 9(3), 48; https://doi.org/10.3390/smartcities9030048 - 10 Mar 2026
Abstract
Urban business-to-business distribution in Casablanca relies heavily on light commercial vehicles (LCVs) operating in a constrained street environment where loading/unloading access, intersection capacity, and recurring bottlenecks jointly shape performance and environmental impacts. However, high-resolution freight origin–destination (OD) observations and junction calibration data are limited, which complicates direct estimations of congestion and externalities attributable to commercial activity. This study develops a reproducible, large-scale modeling workflow that couples tour-based freight demand generation in order units with simulation-based traffic assignment (SBA) on a metropolitan network and translates network performance into emissions and monetary losses. Warehouses are modeled as primary producers and commercial activity zones as attractors via sector-tagged production and attraction functions; the resulting order distribution is converted to OD vehicle trips using the tour-based trip generation procedure with the mean targets-per-tour fixed to one to ensure numerical stability, yielding a direct-shipment approximation appropriate for stress–response analysis. Junction impedance is represented through turn-type volume–delay relationships and node-level impedance procedures, and congestion is evaluated using vehicle kilometers traveled/vehicle hours traveled (VKT/VHT)-based indicators, delay-intensity measures, and link/node bottleneck rankings. Across demand-scaling scenarios, VKT increases from 302,159 to 1,017,686 veh·km/day, while network delay rises nonlinearly from 392.5 to 2738.4 veh·h/day, indicating saturation-driven amplification of time losses. The Handbook of Emission Factors for Road Transport (HBEFA)-compatible emission estimates scale with activity: total carbon dioxide (CO2) increases from 154.1 to 519.5 t/day, and nitrogen oxides (NOx) and particulate matter (PM2.5) totals rise proportionally under fixed fleet assumptions. Monetizing delay with a purchasing-power-adjusted value-of-time range yields a congestion cost per trip that increases from approximately 0.20 to 0.41 Moroccan dirhams (MAD) per trip (at 60 MAD/veh·h), consistent with rising delay intensity. Bottleneck extraction shows welfare losses to be structurally concentrated on a small persistent corridor set, led by ‘Boulevard de la Résistance’, with recurrent hotspots including ‘Rue d’Arcachon’ and ‘Rue d’Ifni’. The framework supports policy-relevant reporting of congestion, emissions, and welfare impacts under data scarcity, with explicit sensitivity bounds.
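The monetization step is simple arithmetic: total network delay times a value of time, divided by the number of trips. The delay figure (392.5 veh·h/day) and the 60 MAD/veh·h value of time come from the abstract, but the daily trip count below is a hypothetical placeholder chosen only to land near the reported order of magnitude; the paper does not state it here.

```python
# Assembling a congestion cost per trip from network totals.
# NOTE: n_trips is a hypothetical value, not a figure from the paper.

def congestion_cost_per_trip(total_delay_veh_h, value_of_time, n_trips):
    """Average monetized delay per trip (currency units per trip)."""
    return total_delay_veh_h * value_of_time / n_trips

cost = congestion_cost_per_trip(392.5, 60.0, n_trips=120_000)
print(round(cost, 2))  # ~0.20 MAD/trip with this assumed trip count
```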
(This article belongs to the Special Issue Cost-Effective Transportation Planning for Smart Cities)

13 pages, 2079 KB  
Article
Trend Prediction of Distribution Network Fault Symptoms Based on XLSTM-Informer Fusion Model
by Zhen Chen, Lin Gao and Yuanming Cheng
Energies 2026, 19(6), 1389; https://doi.org/10.3390/en19061389 - 10 Mar 2026
Abstract
Accurate prediction of distribution network operating states is essential for implementing proactive fault warning systems. However, with the high penetration of distributed energy resources, measurement data exhibit strong nonlinearity and multi-scale temporal characteristics, posing significant challenges to existing prediction methods. Current mainstream approaches face a critical dilemma: traditional recurrent neural network (RNN) models (e.g., LSTM) suffer from vanishing gradients and memory bottlenecks in long-sequence forecasting, making it difficult to capture long-term evolutionary trends. In contrast, while standard Transformer models excel at global modeling, their smoothing effect renders them insensitive to subtle transient abrupt changes such as voltage sags, and they incur high computational complexity. To address the dual challenges of “difficulty in capturing transient abrupt changes” and “inability to simultaneously handle long-term trends,” this paper proposes a fault precursor trend prediction model that integrates Extended Long Short-Term Memory (XLSTM) with Informer, termed XLSTM-Informer. To tackle the challenge of extracting transient features, an XLSTM-based local encoder is constructed. By replacing the conventional Sigmoid activation with an improved exponential gating mechanism, the model achieves significantly enhanced sensitivity to instantaneous fluctuations in voltage and current. Additionally, a matrix memory structure is introduced to effectively mitigate information forgetting issues during long-sequence training. To overcome the challenge of modeling long-term dependencies, Informer is employed as the global decoder. Leveraging its ProbSparse sparse self-attention mechanism, the model substantially reduces computational complexity while accurately capturing long-range temporal dependencies. Experimental results on a real-world distribution network dataset demonstrate that the proposed model achieves substantially lower Mean Squared Error (MSE) and Mean Absolute Percentage Error (MAPE) compared to standalone CNN, LSTM, and other baseline models, as well as conventional LSTM–Informer hybrid approaches. Particularly under extreme operating conditions—such as sustained high summer loads and winter heating peak loads—the model successfully overcomes the trade-off limitations of traditional methods, enabling simultaneous and accurate prediction of both local precursors and global trends. This provides a reliable technical foundation for proactive warning systems in distribution networks.

43 pages, 4986 KB  
Review
Alcalase for Food-Protein-Derived Bioactive Peptides: Trends, Gaps, and Translational Opportunities
by Jesús Guadalupe Pérez-Flores, Laura García-Curiel, Emmanuel Pérez-Escalante, Elizabeth Contreras-López, Gabriela Mariana Rodríguez-Serrano, Marisa Rivera-Arredondo, Israel Oswaldo Ocampo-Salinas, José Antonio Sánchez-Franco, Rita Paz-Samaniego and José Antonio Guerrero-Solano
Macromol 2026, 6(1), 16; https://doi.org/10.3390/macromol6010016 - 9 Mar 2026
Abstract
Comparative studies report inconsistent peptide yields, bioactivities, and sensory outcomes for Alcalase across substrates, creating uncertainty about when it should be favored over other proteases. This study mapped research on hydrolysis of food proteins with Alcalase to quantify scientific output, organize thematic trends, and identify gaps relevant to peptide-based functional foods. A bibliometric analysis of Web of Science records (2004–2024) was performed in R (bibliometrix), using co-occurrence networks, temporal overlays, and conceptual mapping. The dataset comprised 203 documents from 78 sources, exhibiting a 10.3% annual growth rate and a 36.9% international co-authorship rate. Themes cluster around antioxidant and angiotensin-converting enzyme (ACE) inhibitory peptides, particularly in dairy and marine matrices, and are supported by workflows combining Alcalase hydrolysis with size-guided ultrafiltration, RP-HPLC (Reverse Phase High-Performance Liquid Chromatography), and, more recently, in silico analyses and encapsulation studies. Recurrent limitations were identified: heterogeneous hydrolysates and uneven reporting that hinder sequence–activity correlations, gastrointestinal degradation and bitterness affecting applicability, and scale-up and purification choices influencing feasibility. The mapping clarified where Alcalase enables bioactive peptide generation and highlighted practical priorities, including protocol standardization and enzyme benchmarking, the integration of peptidomics and machine learning with targeted assays, and formulation-focused validation (encapsulation, stability, and delivery) to bridge in vitro activity to real-world use. These directions support the production of reproducible, application-ready peptide ingredients.

22 pages, 3908 KB  
Article
Physics-Topology-Anchored Learning: A Robust and Lightweight Framework for Time-Series Prediction and Anomaly Detection Under Data Scarcity
by Xuanhao Hua, Weiqi Yin, Libin Wang, Meng Ma, Jianfeng Yuan and Jing Zhang
Sensors 2026, 26(5), 1721; https://doi.org/10.3390/s26051721 - 9 Mar 2026
Abstract
Health monitoring of complex systems is critical for ensuring reliability and achieving cost-effective reusability. However, deploying deep learning models in this domain is impeded by two primary constraints: the scarcity of high-quality fault samples and the restricted computational resources available on-board. To address these challenges, this paper proposes a Physics-Topology-Anchored Learning (PTAL) framework. The core innovation lies in the effective integration of physical inductive bias into the model architecture. Specifically, PTAL incorporates a predefined adjacency matrix, derived from the physical mechanism, as a structural prior. This design anchors the neural network to explicit physical causality, effectively constraining the hypothesis space and reducing the model’s dependency on large-scale data. Furthermore, by coupling this physics-informed structure with a lightweight recurrent attention mechanism, the model avoids the high computational overhead typical of generic large-scale networks. Experimental evaluations demonstrate that PTAL achieves a peak diagnostic accuracy of 97.8% and a low standard deviation of 0.1145, significantly outperforming baseline models in data-scarce regimes. The results confirm that the proposed model successfully leverages physical bias to maintain a favorable trade-off between diagnostic performance and computational efficiency, making it highly suitable for the resource-constrained environments of complex systems. Full article
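The PTAL idea of anchoring a network to explicit physical causality can be illustrated by masking a learnable recurrent weight matrix with a predefined adjacency matrix, so information flows only along physically connected nodes. A minimal numpy sketch with an invented 4-node chain topology (not the paper's architecture):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical physical topology of a 4-sensor system:
# sensor i is influenced by sensor j only where A[i, j] = 1.
A = np.array([[1, 1, 0, 0],
              [1, 1, 1, 0],
              [0, 1, 1, 1],
              [0, 0, 1, 1]], dtype=float)

W = rng.standard_normal((4, 4))   # learnable coupling weights
W_masked = W * A                  # structural prior: zero out non-physical links

def step(h, x):
    """One recurrent update constrained by the physical adjacency."""
    return np.tanh(W_masked @ h + x)

h = np.zeros(4)
for x in rng.standard_normal((10, 4)):
    h = step(h, x)
```

The mask shrinks the hypothesis space: any influence between sensors 0 and 3 must propagate through sensors 1 and 2, which is one way a physics-informed prior reduces dependence on large training sets.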
(This article belongs to the Special Issue AI-Assisted Condition Monitoring and Fault Diagnosis)

46 pages, 990 KB  
Review
Machine Learning for Outdoor Thermal Comfort Assessment and Optimization: Methods, Applications and Perspectives
by Giouli Mihalakakou, John A. Paravantis, Alexandros Romeos, Sonia Malefaki, Paraskevas N. Georgiou and Athanasios Giannadakis
Sustainability 2026, 18(5), 2600; https://doi.org/10.3390/su18052600 - 6 Mar 2026
Abstract
Urban environments face increasing thermal stress from climate change and the Urban Heat Island effect, with significant implications for livability, public health, and energy sustainability. Outdoor thermal comfort, defined as the state in which conditions are perceived as acceptable, depends on interactions among meteorological, morphological, physiological, and behavioral factors. This review synthesizes the application of machine learning (ML) to outdoor thermal comfort assessment into a practice-oriented taxonomy. Research spans diverse climates and urban forms, using inputs across environmental and human domains. Supervised learning dominates. Regression approaches (linear regression, support vector regression, random forest, gradient boosting) and classification algorithms (decision trees, support vector machines, K-nearest neighbors, Naïve Bayes, random forest classifiers) are widely used to predict thermal indices such as the Physiological Equivalent Temperature and Universal Thermal Climate Index, or to classify subjective responses including thermal sensation, comfort, and acceptability. Unsupervised learning (clustering, principal component analysis) supports identification of microclimatic zones and perceptual clusters, while deep learning (multilayer perceptrons, convolutional and recurrent neural networks, generative adversarial networks) achieves superior accuracy for complex, high-dimensional, and spatiotemporal data. Algorithms such as random forests, support vector machines, and gradient boosting consistently show strong performance for both indices and subjective responses when integrating multi-domain inputs. Semi-supervised and reinforcement learning remain underexplored but offer promise for leveraging large-scale sensor data and enabling adaptive, real-time comfort management. The review concludes with a roadmap emphasizing explainable artificial intelligence, scalable surrogate modeling, and integration with simulation-based optimization and parametric design tools. Full article
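One of the classifiers named above, K-nearest neighbors, can be sketched in a few lines of numpy for classifying thermal sensation from microclimate inputs. The feature columns, data, and labels here are invented for illustration only:

```python
import numpy as np

def knn_predict(X_train, y_train, x, k=3):
    """Classify a sample by majority vote among its k nearest neighbours."""
    d = np.linalg.norm(X_train - x, axis=1)       # Euclidean distances
    nearest = y_train[np.argsort(d)[:k]]          # labels of k closest samples
    vals, counts = np.unique(nearest, return_counts=True)
    return vals[np.argmax(counts)]

# columns: air temperature (degC), mean radiant temperature (degC), wind speed (m/s)
X = np.array([[22, 30, 1.0], [24, 35, 0.5], [33, 55, 0.3],
              [35, 60, 0.2], [27, 40, 1.5], [31, 50, 0.4]], float)
y = np.array([0, 0, 2, 2, 1, 2])   # 0 = comfortable, 1 = warm, 2 = hot

pred = knn_predict(X, y, np.array([34, 58, 0.25]))
```

In practice the inputs would be standardized first (wind speed in m/s is on a far smaller scale than temperature in degrees), which is one reason preprocessing choices matter as much as the algorithm in this literature.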

26 pages, 3000 KB  
Article
Material Classification from Non-Line-of-Sight Acoustic Echoes Using Wavelet-Acoustic Hybrid Feature Fusion
by Dilan Onat Alakuş and İbrahim Türkoğlu
Sensors 2026, 26(5), 1577; https://doi.org/10.3390/s26051577 - 3 Mar 2026
Abstract
Acoustic material classification under non-line-of-sight (NLOS) conditions—where direct sound paths are obstructed—is a challenging task due to echo attenuation, complex reflections, and noise effects. This study aims to improve NLOS material recognition by introducing a novel wavelet–acoustic hybrid feature fusion method integrated with deep recurrent neural network architectures. Echo signals from nine different materials were collected using the newly developed ANLOS-R (Acoustic Non-Line-of-Sight Recognition) dataset, which was specifically designed to simulate realistic NLOS propagation environments. From these recordings, time-domain acoustic features and multi-scale wavelet-based energy and entropy statistics were extracted using ten wavelet families. The resulting 70-dimensional hybrid feature set was used to train several deep learning architectures, including Long Short-Term Memory (LSTM), Bidirectional LSTM (BiLSTM), Gated Recurrent Unit (GRU), and Convolutional Neural Network–LSTM (CNN–LSTM). Among these, the CNN–LSTM achieved the highest balanced accuracy and macro-F1 score of 0.99, showing strong generalization and convergence performance. SHapley Additive exPlanations (SHAP) analysis indicated that Mel-Frequency Cepstral Coefficients (MFCCs) and wavelet entropy–energy features play complementary roles in material discrimination. The proposed approach provides a robust and interpretable framework for real-time NLOS acoustic sensing, bridging data-driven deep learning with the physical understanding of acoustic material behavior. Full article
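The wavelet energy and entropy statistics described above can be sketched with a single-level Haar transform on a synthetic echo (the study itself used ten wavelet families over multiple scales; this toy uses numpy only):

```python
import numpy as np

def haar_level(x):
    """One level of the orthonormal Haar wavelet transform."""
    x = x[: len(x) // 2 * 2]                  # even length
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)  # low-pass band
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)  # high-pass band
    return approx, detail

def band_features(coeffs):
    """Energy and Shannon entropy of one coefficient band."""
    e = coeffs ** 2
    energy = e.sum()
    p = e / energy                             # energy distribution
    entropy = -(p * np.log2(p + 1e-12)).sum()
    return energy, entropy

# toy attenuated echo, standing in for an NLOS recording
t = np.linspace(0, 1, 256, endpoint=False)
echo = np.sin(2 * np.pi * 40 * t) * np.exp(-3 * t)

a, d = haar_level(echo)
feats = [*band_features(a), *band_features(d)]  # 4 features per level
```

Because the Haar transform is orthonormal, the band energies sum to the signal energy, so these features describe how a material's reflection redistributes energy across scales; repeating over several levels and wavelet families yields a feature vector like the 70-dimensional set used in the paper.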
(This article belongs to the Section Sensor Materials)

23 pages, 919 KB  
Article
A Hybrid Deep Learning Architecture for Intrusion Detection Deploying Multi-Scale Feature Interaction and Temporal Modeling
by Eva Jakubcova, Maros Jakubec and Peter Pocta
AI 2026, 7(3), 87; https://doi.org/10.3390/ai7030087 - 2 Mar 2026
Abstract
Network intrusion detection is a core component of modern cybersecurity, but it remains challenging due to highly imbalanced traffic, heterogeneous feature types, and the presence of short-term temporal dependencies in network flows. Traditional machine learning models often rely on handcrafted features and struggle with complex attack patterns, while deep learning approaches may become overly complex or difficult to interpret. In this paper, we propose a neural intrusion detection method that combines structured feature preprocessing with a compact hybrid architecture. Numerical and categorical traffic features are processed separately using robust normalisation and trainable embeddings, and then merged into a unified representation. The proposed model builds on a multi-scale feature interaction block, followed by channel-wise attention and a single bidirectional gated recurrent unit layer with attention pooling to capture short-term temporal behavior. The method is evaluated on two widely used benchmark datasets, i.e., the CIC-IDS2017 and CSE-CIC-IDS2018 datasets. Experimental results show that the proposed approach consistently outperforms classical machine learning baselines and achieves competitive or superior performance compared to recent deep learning methods proposed in the literature. The results confirm that the proposed architectural choices effectively capture both feature interactions and temporal patterns in network traffic. Full article
(This article belongs to the Section AI Systems: Theory and Applications)
