Search Results (8,784)

Search Parameters:
Keywords = short-term memory

12 pages, 765 KB  
Article
A Bayesian-Optimized Mixture of Experts Framework for Short-Term Traffic Flow Prediction
by Jianqing Wu, Jiaao Ren, Hui Wang, Fei Xie, Shaohan Chen and Mengjie Jiang
Modelling 2026, 7(2), 55; https://doi.org/10.3390/modelling7020055 - 16 Mar 2026
Abstract
Accurate and reliable short-term traffic flow prediction is crucial for managing urban congestion but is challenged by the complex spatio-temporal dependencies inherent in traffic systems. Conventional single models, such as Long Short-Term Memory (LSTM) and Temporal Convolutional Network (TCN), often fail to capture these nonlinear dynamics. To address this, we propose a novel Bayesian-Optimized Mixture of Experts (BO-MoE) framework. This hybrid architecture utilizes a Mixture of Experts (MoE) to dynamically integrate multiple specialized deep learning models, allowing it to adapt to diverse and complex traffic patterns. Bayesian Optimization (BO) is further integrated to automate hyperparameter tuning, significantly enhancing predictive accuracy and model efficiency. We evaluated BO-MoE on three real-world traffic datasets. Empirical results demonstrate that our model consistently outperforms strong baselines, including TCN. Specifically, on PEMS04, it reduces MAE, RMSE, and MAPE by 1.97%, 1.19%, and 3.23%, respectively, while on PEMS08, the corresponding reductions reach 3.83%, 1.26%, and 5.49%. On the NZ dataset, BO-MoE also achieves superior performance, with improvements comparable to those on PEMS benchmarks.
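As a rough illustration of the gated mixture-of-experts idea (not the authors' BO-MoE implementation), the sketch below combines an LSTM expert and a dilated-convolution expert through a learned softmax gate. All layer sizes are assumptions, and the Bayesian hyperparameter search is omitted; in practice it would tune values like `hidden`.

```python
import torch
import torch.nn as nn

class TinyMoE(nn.Module):
    def __init__(self, n_features, hidden=64):
        super().__init__()
        # Expert 1: LSTM over the input window.
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        # Expert 2: a dilated 1-D convolution standing in for a TCN branch.
        self.conv = nn.Sequential(
            nn.Conv1d(n_features, hidden, kernel_size=3, padding=2, dilation=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.head1 = nn.Linear(hidden, 1)
        self.head2 = nn.Linear(hidden, 1)
        # Gating network: soft weights over the two experts.
        self.gate = nn.Sequential(nn.Linear(n_features, 2), nn.Softmax(dim=-1))

    def forward(self, x):                        # x: (batch, time, features)
        h, _ = self.lstm(x)
        e1 = self.head1(h[:, -1])                # LSTM expert prediction
        e2 = self.head2(self.conv(x.transpose(1, 2)).squeeze(-1))
        w = self.gate(x[:, -1])                  # gate conditioned on latest step
        return w[:, 0:1] * e1 + w[:, 1:2] * e2   # gated combination
```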

23 pages, 2965 KB  
Article
Hybrid Supervised Classification and Deep Embedding–Based Profiling Framework for Electricity Consumption Analysis
by Mihriban Gunay, Ozal Yildirim, Yakup Demir, Marin Zhilevski, Mikho Mikhov and Nikolay Yordanov
Appl. Sci. 2026, 16(6), 2827; https://doi.org/10.3390/app16062827 - 16 Mar 2026
Abstract
This study proposes a hybrid deep learning framework that integrates supervised classification and unsupervised profiling for electricity consumption analysis. In the supervised phase, a one-dimensional Convolutional Neural Network combined with Long Short-Term Memory (1D CNN–LSTM) architecture is developed to classify daily load patterns. The performance of the proposed model is compared with traditional machine learning and deep learning approaches, including Support Vector Machine (SVM), k-Nearest Neighbors (KNN), a standalone Long Short-Term Memory (LSTM) model, a Transformer-based model, and a standalone 1D CNN model. Experimental results on the Precon house dataset and the CU-BEMS dataset demonstrate that the proposed hybrid architecture outperforms the benchmark models, achieving classification accuracies of 87.59% and 86.40%, respectively. In the unsupervised phase, the trained CNN–LSTM encoder is utilized as a deep feature extractor. The resulting 32-dimensional latent embeddings are clustered using K-Means, Gaussian Mixture Model (GMM), Agglomerative, Spectral, and Ensemble methods. Clustering robustness is evaluated through bootstrap-based stability analysis using the Adjusted Rand Index (ARI) and the Normalized Mutual Information (NMI). The results demonstrate stable and interpretable electricity consumption profiles, particularly in the residential dataset, where near-perfect clustering stability is observed for K-Means. The proposed framework provides both improved classification performance and robust consumption profiling based on deep embedding, offering a practical tool for energy management.
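A minimal sketch of such a 1D CNN–LSTM, with illustrative channel counts and class numbers assumed; the `embed()` method mirrors the paper's use of the trained encoder as a 32-dimensional feature extractor.

```python
import torch.nn as nn

class CNNLSTMClassifier(nn.Module):
    def __init__(self, n_channels=1, n_classes=4, embed_dim=32):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv1d(n_channels, 16, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.lstm = nn.LSTM(16, embed_dim, batch_first=True)
        self.head = nn.Linear(embed_dim, n_classes)

    def embed(self, x):                   # x: (batch, channels, time)
        h = self.cnn(x).transpose(1, 2)   # -> (batch, time/2, 16)
        _, (h_n, _) = self.lstm(h)
        return h_n[-1]                    # 32-dim latent embedding

    def forward(self, x):
        return self.head(self.embed(x))   # daily load-pattern logits
```

After training, clustering the `embed()` outputs with, e.g., scikit-learn's `KMeans(n_clusters=k).fit_predict(embeddings)` would reproduce the profiling stage in spirit.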

33 pages, 3876 KB  
Article
Predictive Network Slicing Resource Orchestration: A VNF Approach
by Andrés Cárdenas, Luis Sigcha and Mohammadreza Mosahebfard
Future Internet 2026, 18(3), 149; https://doi.org/10.3390/fi18030149 - 16 Mar 2026
Abstract
As network slicing gains traction in cloud computing environments, efficient management and orchestration systems are required to realize the benefits of this technology. These systems must enable dynamic provisioning and resource optimization of virtualized services spanning multiple network slices. Nevertheless, the common resource overprovisioning practice implemented by service providers leads to the inefficient use of resources, limiting the ability of Mobile Network Operators (MNOs) to rent new network slices to more vertical customers. Hence, efficient resource allocation mechanisms are essential to achieve optimal network performance and cost-effectiveness. This paper proposes a predictive model for network slice resource optimization based on resource sharing between Virtualized Network Functions (VNFs). The model employs deep learning models based on Long Short-Term Memory (LSTM) and Transformers for CPU resource usage prediction and a reactive algorithm for resource sharing between VNFs. The model is powered by a telemetry system proposed as an extension of the 3GPP network slice management architectural framework. The extended architectural framework enhances the automation and optimization of the network slice lifecycle management. The model is validated through a practical use case, demonstrating the effectiveness of the resource sharing algorithm in preventing VNF overload and predicting resource usage accurately. The findings demonstrate that the sharing mechanism enhances resource optimization and ensures compliance with service level agreements, mitigating service degradation. This work contributes to the efficient management and utilization of network resources in 5G networks and provides a basis for further research in network slice resource optimization.
(This article belongs to the Special Issue Software-Defined Networking and Network Function Virtualization)
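The reactive sharing step could look like the hedged sketch below: when a VNF's predicted CPU usage approaches its allocation, it borrows headroom from underloaded peers in the same slice. The 0.9 watermark and the accounting in cores are illustrative assumptions, not the paper's algorithm.

```python
def share_cpu(allocated, predicted, high=0.9):
    """allocated, predicted: dicts mapping VNF name -> CPU cores."""
    overloaded = [v for v in allocated if predicted[v] > high * allocated[v]]
    donors = [v for v in allocated if predicted[v] <= high * allocated[v]]
    for v in overloaded:
        need = predicted[v] / high - allocated[v]    # get back under the watermark
        for d in donors:
            spare = max(allocated[d] - predicted[d] / high, 0.0)
            give = min(need, spare)
            allocated[d] -= give                     # donor stays under watermark
            allocated[v] += give
            need -= give
            if need <= 1e-9:
                break
    return allocated
```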

17 pages, 4808 KB  
Article
Predicting Groundwater Depth Using Historical Data Trend Decomposition: Based on the VMD-LSTM Hybrid Deep Learning Model
by Jie Yue, Hong Guo, Deng Pan, Huanxiang Wang, Yawen Xin, Furong Yu, Yingying Shao and Rui Dun
Water 2026, 18(6), 689; https://doi.org/10.3390/w18060689 - 15 Mar 2026
Abstract
Groundwater is a critical natural and strategic economic resource, and the accurate prediction of groundwater depth dynamics is essential for the rational development and utilization of water resources. However, under the combined influence of climate variability, human activities, and complex hydrogeological conditions, groundwater level time series exhibit strong nonlinear and non-stationary characteristics, posing great challenges to the accurate prediction of groundwater level dynamics. Most existing prediction models rely on sufficient hydro-meteorological and exploitation data that are difficult to obtain in water-scarce regions, or fail to effectively decouple the multi-scale features of non-stationary groundwater level signals, resulting in limited prediction accuracy and insufficient generalization ability. To address these research gaps, this study takes Zhengzhou, a typical water-deficient city in the Yellow River Basin, as the study area, and proposes a hybrid deep learning framework combining Variational Mode Decomposition (VMD) and Long Short-Term Memory (LSTM) neural network for predicting shallow and intermediate-deep groundwater level changes. Kolmogorov–Arnold Networks (KANs) and Gated Recurrent Units (GRUs) are selected as benchmark models to verify the superior performance of the proposed framework. In this framework, the non-stationary groundwater level signal is adaptively decomposed into Intrinsic Mode Functions (IMFs) with distinct frequency characteristics via VMD. An independent LSTM model is constructed for each IMF to capture its unique temporal variation pattern, and the final groundwater level prediction is obtained by linearly reconstructing the predicted results of all IMFs. The results show that the coefficient of determination (R2) of the VMD-LSTM model exceeds 0.90 for all monitoring datasets, with low Mean Absolute Error (MAE) and Mean Squared Error (MSE). It significantly outperforms the benchmark models in handling nonlinear and non-stationary time series features. Using only historical groundwater level data as input, the proposed framework effectively overcomes the limitation of insufficient driving variables in data-scarce regions and fully explores the multi-scale evolution of groundwater dynamics through the synergistic effect of multi-scale decomposition and deep learning. The method presented in this study provides a novel and reliable technical approach for groundwater level prediction in water-deficient and data-limited areas, and also offers scientific support for the rational management and sustainable utilization of regional groundwater resources. Future research will incorporate driving factors such as meteorology and exploitation to further improve the model’s ability to capture abrupt changes in groundwater level dynamics.
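A condensed sketch of the decompose-predict-reconstruct pipeline described above. It assumes the vmdpy package for VMD (the call signature and every hyperparameter here are assumptions, not the paper's settings) and a hypothetical `make_lstm` factory returning a Keras-style regressor with fit/predict.

```python
import numpy as np
from vmdpy import VMD

def vmd_lstm_forecast(series, make_lstm, K=5, lookback=12):
    # alpha: bandwidth constraint; tau: noise tolerance; DC=0: no DC mode.
    imfs, _, _ = VMD(series, alpha=2000, tau=0.0, K=K, DC=0, init=1, tol=1e-7)
    total = 0.0
    for imf in imfs:                              # one independent LSTM per IMF
        X = np.stack([imf[i:i + lookback] for i in range(len(imf) - lookback)])
        y = imf[lookback:]
        model = make_lstm()                       # hypothetical factory (fit/predict)
        model.fit(X[..., None], y)
        nxt = model.predict(imf[None, -lookback:, None])   # one-step-ahead forecast
        total += float(np.ravel(nxt)[0])
    return total                                  # linear reconstruction across IMFs
```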

18 pages, 2774 KB  
Article
Hybrid RF–ConvLSTM Approach for Rainfall Estimation from MSG Data over Northern Algeria
by Fethi Ouallouche, Mourad Lazri, Karim Labadi, Djamal Alouache, Yacine Mohia, Mounir Sehad and Soltane Ameur
Atmosphere 2026, 17(3), 296; https://doi.org/10.3390/atmos17030296 - 15 Mar 2026
Abstract
This study introduces a novel approach to 3-hourly and daily precipitation estimation over northern Algeria. The approach combines the classification capabilities of Random Forest (RF) with the predictive power of Convolutional Long Short-Term Memory (ConvLSTM) regression, using multi-temporal observations from the SEVIRI radiometer onboard the Meteosat Second Generation (MSG) satellite. The approach is a two-stage process: a Random Forest classifier is first used to provide a probabilistic characterization of precipitation occurrence and rainfall regimes. The ConvLSTM model then applies spatio-temporal regression to estimate rainfall intensities by analyzing multi-channel temporal sequences. The hybrid model produces spatially and temporally consistent precipitation fields by taking advantage of the spatio-temporal correlations of meteorological events, with the aim of obtaining accurate 3-hourly and daily rainfall accumulations for northern Algeria. Results show a dramatic improvement over the reference RF-based technique, with correlation coefficients reaching 0.89 for 3-hourly accumulations and 0.91 for daily rainfall.
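A hedged sketch of the two-stage inference: a scikit-learn Random Forest flags raining pixels, and a stand-in regressor (the paper uses a ConvLSTM over multi-channel SEVIRI sequences) estimates intensity only where rain is predicted. The array shapes and the 0.5 probability threshold are assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def estimate_rainfall(features, rf, regressor, p_thresh=0.5):
    """features: (n_pixels, n_channels) per-pixel SEVIRI predictors."""
    p_rain = rf.predict_proba(features)[:, 1]           # stage 1: rain occurrence
    rates = np.zeros(len(features))
    wet = p_rain > p_thresh
    if wet.any():
        rates[wet] = regressor.predict(features[wet])   # stage 2: intensity
    return rates

# e.g. rf = RandomForestClassifier(n_estimators=200).fit(X_train, y_rain)
```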

17 pages, 998 KB  
Article
A Novel SOC Estimation Method for Lithium-Ion Batteries Based on Serial LSTM-UKF Fusion
by Yao Li, Rong Wang, Yi Jin, Zhenxin Sun, Hui Liu, Yu Liu, Yanhui Liu, Jiahuan Xu, Ye Tao, Zhaoyu Jiang, Yue Ma and Jiuchun Jiang
Energies 2026, 19(6), 1467; https://doi.org/10.3390/en19061467 - 14 Mar 2026
Abstract
Accurate estimation of the State of Charge (SOC) of lithium-ion batteries is one of the core functions of a battery management system and is of great significance for ensuring the safe operation of electric vehicles and optimizing energy utilization. However, due to the strong nonlinearity, time-varying characteristics, and interference from complex operating conditions within the battery, high-precision SOC estimation faces severe challenges. To address the problems that a single data-driven method lacks physical constraints and a single model-driven method struggles to characterize complex nonlinearities, this paper proposes a series-connected LSTM-UKF fusion estimation method. This method first utilizes a Long Short-Term Memory network to learn the dynamic characteristics of the battery from historical voltage and current data, capturing the long-term dependencies of SOC changes to achieve an initial prediction. Subsequently, using this predicted value as the observation input, an Unscented Kalman Filter based on a second-order RC equivalent circuit model is introduced for optimal state correction, effectively suppressing model uncertainty and measurement noise. Simulation validation under various dynamic conditions, such as constant current discharge and FUDS, shows that compared to single LSTM or UKF algorithms, the proposed fusion method has significant advantages in estimation accuracy, convergence speed, and robustness. Its root mean square error is reduced to 0.0031, and it maintains stable estimation performance under different operating conditions. This study provides an effective data-model fusion solution for high-precision SOC estimation of lithium-ion batteries under complex operating conditions.
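The serial fusion can be pictured with the hedged sketch below, using filterpy's Unscented Kalman Filter: the LSTM's SOC prediction serves as the measurement, and coulomb counting serves as the process model. The paper's second-order RC equivalent-circuit model is simplified to a single SOC state here, and the capacity and noise values are assumptions.

```python
import numpy as np
from filterpy.kalman import MerweScaledSigmaPoints, UnscentedKalmanFilter

CAPACITY_AH, DT = 2.5, 1.0                    # assumed cell capacity (Ah), step (s)

def fx(x, dt, current=0.0):                   # coulomb-counting process model
    return np.array([x[0] - current * dt / (3600.0 * CAPACITY_AH)])

def hx(x):                                    # the LSTM output observes SOC directly
    return x

points = MerweScaledSigmaPoints(n=1, alpha=1e-3, beta=2.0, kappa=0.0)
ukf = UnscentedKalmanFilter(dim_x=1, dim_z=1, dt=DT, fx=fx, hx=hx, points=points)
ukf.x = np.array([1.0])                       # start from a full charge
ukf.P *= 0.01
ukf.R *= 1e-3                                 # measurement noise: LSTM prediction
ukf.Q *= 1e-6                                 # process noise: coulomb counting

def fuse_step(lstm_soc, current):
    ukf.predict(current=current)              # extra kwargs are forwarded to fx
    ukf.update(np.array([lstm_soc]))
    return float(ukf.x[0])                    # corrected SOC estimate
```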
26 pages, 2686 KB  
Article
Algorithmic Stability in Turbulent Markets: Unveiling the Superiority of Shallow Learning over Deep Architectures in Cryptocurrency Forecasting
by Ceyda Yerdelen Kaygın, Musa Gün, Osman Nuri Akarsu, Haşim Bağcı and Ahmet Yanık
Mathematics 2026, 14(6), 989; https://doi.org/10.3390/math14060989 - 14 Mar 2026
Abstract
Forecasting cryptocurrency prices is challenging due to extreme volatility, nonlinear dynamics, and frequent structural shifts in digital asset markets. While recent research increasingly applies deep learning architectures, the predictive advantage of highly complex models in noisy financial environments remains uncertain. This study evaluates the forecasting performance of shallow and deep learning approaches by comparing Support Vector Machines (SVM), Long Short-Term Memory (LSTM), and Gated Recurrent Unit (GRU) models, along with hybrid configurations (GRU + SVM, LSTM + SVM, and GRU + LSTM). Using daily data spanning from 1 October 2020 to 23 September 2025 for five major cryptocurrencies—Bitcoin, Ethereum, Binance Coin, Solana, and Ripple—the models are estimated within a consistent framework and assessed using out-of-sample performance metrics, including MAE, MAPE, MSE, and R2. The results indicate that greater algorithmic complexity does not necessarily improve forecasting accuracy. In several cases, the parsimonious SVM model outperforms deep neural network architectures, particularly for highly volatile assets, while hybrid models fail to provide systematic improvements and sometimes amplify prediction errors. SHapley Additive exPlanations (SHAP) analysis further shows that immediate price-based variables dominate predictive power, whereas many lagged technical indicators contribute relatively limited explanatory value. Overall, the findings underscore the importance of algorithmic parsimony, suggesting that simpler machine learning models may deliver more robust forecasts in highly volatile cryptocurrency markets.
(This article belongs to the Special Issue Recent Computational Techniques to Forecast Cryptocurrency Markets)
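A minimal sketch of the kind of out-of-sample scoring loop such a comparison implies, using scikit-learn metrics; the SVR hyperparameters in the usage note are assumptions, not the study's settings.

```python
from sklearn.metrics import (mean_absolute_error, mean_absolute_percentage_error,
                             mean_squared_error, r2_score)
from sklearn.svm import SVR

def score(model, X_train, y_train, X_test, y_test):
    model.fit(X_train, y_train)              # same split for every candidate model
    pred = model.predict(X_test)
    return {"MAE": mean_absolute_error(y_test, pred),
            "MAPE": mean_absolute_percentage_error(y_test, pred),
            "MSE": mean_squared_error(y_test, pred),
            "R2": r2_score(y_test, pred)}

# e.g. score(SVR(C=10.0, epsilon=0.01), X_tr, y_tr, X_te, y_te)
```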

17 pages, 2083 KB  
Article
Monitoring of Liquid Metal Reactor Heater Zones with Recurrent Neural Network Learning of Temperature Time Series
by Maria Pantopoulou, Derek Kultgen, Lefteri Tsoukalas and Alexander Heifetz
Energies 2026, 19(6), 1462; https://doi.org/10.3390/en19061462 - 14 Mar 2026
Abstract
Advanced high-temperature fluid reactors (ARs), such as sodium fast reactors (SFRs) and molten salt cooled reactors (MSCRs), utilize high-temperature fluids at ambient pressure. To melt the fluid during reactor startup and prevent fluid freezing during cooldown, the thermal–hydraulic systems of such ARs include heater zones consisting of specific heaters with controllers, temperature sensors, and thermal insulation. The failure of heater zones due to insulation material degradation or improper installation, resulting in parasitic heat losses, can lead to fluid freezing. The detection of faults using a heat-transfer model is difficult because of a lack of knowledge of the experimental details. Data-driven machine learning of heater zone temperature time series offers a viable alternative. In this study, we benchmarked the performance of recurrent neural networks (RNNs) in an analysis of heat-up transient temperature time series of heater zones installed on a liquid sodium vessel. The RNN models include long short-term memory (LSTM) and gated recurrent unit (GRU) networks, as well as their bi-directional variants, BiLSTM and BiGRU. Anomalous temperature points were designated using a percentile-based threshold applied to residual fluctuations in the detrended temperature time series. Additionally, the impact of the exponentially weighted moving average (EWMA) method on detection accuracy was examined. The RNN models’ performance was assessed using precision, recall, and F1 score metrics. Results demonstrated that RNN models effectively detect anomalies in temperature time series, with the best models for each heater zone achieving F1 scores of over 93%. To explain the variations in RNN model performance across different heater zones, we used Kullback–Leibler (KL) divergence to quantify the relative entropy between training and testing data, and Detrended Fluctuation Analysis (DFA) to assess long-range temporal correlations. For datasets with strong long-range correlations and minimal relative entropy between training and testing data, GRU is the best-performing model. When the data exhibits weaker long-term correlations and a significant relative entropy between training and testing distributions, BiGRU shows the best performance. For datasets with intermediate values of both KL divergence and DFA, the best performance is obtained with LSTM and BiLSTM, respectively.
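The percentile rule reduces to a few lines. This hedged sketch treats the RNN forecast as the trend, optionally smooths residuals with an EWMA, and flags exceedances; the EWMA span and the 99th percentile are illustrative assumptions.

```python
import numpy as np
import pandas as pd

def flag_anomalies(measured, forecast, percentile=99.0, ewma_span=None):
    # Residuals of the detrended series: measurement minus RNN prediction.
    residuals = np.abs(np.asarray(measured) - np.asarray(forecast))
    if ewma_span:                         # optional EWMA smoothing of residuals
        residuals = pd.Series(residuals).ewm(span=ewma_span).mean().to_numpy()
    threshold = np.percentile(residuals, percentile)
    return residuals > threshold          # boolean mask of anomalous points
```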
19 pages, 2968 KB  
Article
CBAM-Enhanced CNN-LSTM with Improved DBSCAN for High-Precision Radar-Based Gesture Recognition
by Shiwei Yi, Zhenyu Zhao and Tongning Wu
Sensors 2026, 26(6), 1835; https://doi.org/10.3390/s26061835 - 14 Mar 2026
Abstract
In recent years, radar-based gesture recognition technology has been widely applied in industrial and daily life scenarios. However, increasingly complex application scenarios have imposed higher demands on the accuracy and robustness of gesture recognition algorithms, and challenges such as clutter interference, inter-gesture similarity, and spatial–temporal feature ambiguity limit recognition performance. To address these challenges, a novel framework named CECL, which incorporates the Convolutional Block Attention Module (CBAM) into a Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM) architecture, is proposed for high-accuracy radar-based gesture recognition. The CBAM adaptively highlights discriminative spatial regions and suppresses irrelevant background, and the CNN-LSTM network captures temporal dynamics across gesture sequences. During gesture signal processing, the Blackman window is applied to suppress spectral leakage. Additionally, a combination of wavelet thresholding and dynamic energy nulling is employed to effectively suppress clutter and enhance feature representation. Furthermore, an improved Density-Based Spatial Clustering of Applications with Noise (DBSCAN) algorithm further eliminates isolated sparse noise while preserving dense and valid target signal regions. Experimental results demonstrate that the proposed algorithm achieves 98.33% average accuracy in gesture classification, outperforming other baseline models. It exhibits excellent recognition performance across various distances and angles, demonstrating significantly enhanced robustness.
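A compact CBAM sketch (channel then spatial attention) of the kind inserted into such a CNN-LSTM; the reduction ratio and 7×7 spatial kernel follow common CBAM defaults and are assumptions here.

```python
import torch
import torch.nn as nn

class CBAM(nn.Module):
    def __init__(self, channels, reduction=16, kernel_size=7):
        super().__init__()
        self.mlp = nn.Sequential(              # shared MLP for channel attention
            nn.Conv2d(channels, channels // reduction, 1), nn.ReLU(),
            nn.Conv2d(channels // reduction, channels, 1))
        self.spatial = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):                      # x: (batch, channels, H, W)
        ca = torch.sigmoid(self.mlp(x.mean((2, 3), keepdim=True)) +
                           self.mlp(x.amax((2, 3), keepdim=True)))
        x = x * ca                             # reweight channels
        sa = torch.sigmoid(self.spatial(torch.cat(
            [x.mean(1, keepdim=True), x.amax(1, keepdim=True)], dim=1)))
        return x * sa                          # highlight discriminative regions
```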

17 pages, 1953 KB  
Article
Early Detection and Classification of Gibberella zeae Contamination in Maize Kernels Using SWIR Hyperspectral Imaging and Machine Learning
by Kaili Liu, Shiling Li, Wenbo Shi, Zhen Guo, Xijun Shao, Yemin Guo, Jicheng Zhao, Xia Sun, Nortoji A. Khujamshukurov and Fangling Du
Sensors 2026, 26(6), 1834; https://doi.org/10.3390/s26061834 - 14 Mar 2026
Abstract
Early-stage fungal contamination in maize kernels is difficult to identify visually, and it can cause severe quality and safety risks during storage and transportation. Short-wave infrared (SWIR) hyperspectral imaging offers a rapid, non-destructive approach by capturing chemical information related to water, proteins, and lipids. This study investigates the early detection and classification of Gibberella zeae contamination in maize kernels using SWIR hyperspectral imaging combined with machine learning. Two maize varieties were artificially inoculated and cultured under controlled conditions, followed by hyperspectral data collection over six contamination stages. Various preprocessing techniques, including standard normal variate (SNV), second derivative (SD), and multiplicative scatter correction (MSC), were evaluated to enhance data quality. Feature wavelength selection was performed using the successive projections algorithm (SPA), competitive adaptive reweighted sampling (CARS), and uninformative variable elimination (UVE), significantly reducing redundancy and improving classification performance. Multiple models, including linear discriminant analysis (LDA), a multilayer perceptron (MLP), a support vector machine (SVM), a convolutional neural network (CNN), a long short-term memory (LSTM) network, and a hybrid architecture integrating a CNN, an LSTM network, and a Transformer (abbreviated as CLT), were constructed for both binary (healthy vs. contaminated) and multiclass classification tasks. Specifically, the multiclass task consisted of six contamination stages corresponding to contamination time from Day 0 to Day 5. The best binary classification accuracy of 100% was achieved using SNV-preprocessed data with the MLP model. For the multiclass classification task, the SD-preprocessed LDA model reached a test accuracy of 92.56%. Combined with appropriate preprocessing, feature selection, and modeling, these results demonstrate that hyperspectral imaging is a powerful tool for the non-destructive, early-stage identification of fungal contamination in maize kernels, offering strong support for food safety and quality monitoring.
(This article belongs to the Section Smart Agriculture)
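SNV preprocessing, one of the techniques named above, normalizes each spectrum by its own mean and standard deviation; a minimal sketch (the pipeline would then continue with feature selection such as SPA, CARS, or UVE):

```python
import numpy as np

def snv(spectra):
    """spectra: (n_samples, n_wavelengths) SWIR reflectance matrix."""
    mean = spectra.mean(axis=1, keepdims=True)   # per-spectrum mean
    std = spectra.std(axis=1, keepdims=True)     # per-spectrum spread
    return (spectra - mean) / std                # centre and scale each row
```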

24 pages, 1800 KB  
Article
D3PG-Light: A Lightweight and Stable Resource Scheduling Framework for UAV-Integrated Sensing, Communication, and Computation Systems
by Qing Cheng, Wenwen Wu and Yebo Zhou
Sensors 2026, 26(6), 1829; https://doi.org/10.3390/s26061829 - 13 Mar 2026
Abstract
Unmanned Aerial Vehicles (UAVs) are gradually emerging as key platforms for Integrated Sensing, Communication, and Computation (ISCC) systems in next-generation wireless networks. However, strict resource constraints and task coupling make static allocation inefficient in dynamic environments. This paper studies a UAV-driven ISCC system in which a single UAV dynamically allocates communication bandwidth, sensing resources, and computing power. Considering that sensing data in mission-critical applications is highly time-sensitive, minimizing the response time is paramount. To reduce system latency while maintaining sensing quality and energy efficiency, we propose D3PG-Light, a deployment-oriented and stability-enhanced refinement of the D3PG deep reinforcement learning framework, specifically tailored for real-time resource scheduling under UAV hardware constraints. D3PG-Light incorporates an adaptive gradient stabilization mechanism, Long Short-Term Memory (LSTM), and feature fusion to enhance training stability. Simulation results based on real air–ground channel measurements show that D3PG-Light converges faster and achieves more stable learning behavior than DDPG, TD3, and the original D3PG. In particular, the proposed method reduces the 95th-percentile latency from over 100 ms to approximately 24 ms, achieves higher converged reward values, and requires fewer than 50 k model parameters. These results demonstrate the effectiveness of D3PG-Light for latency-sensitive UAV-ISCC applications.
(This article belongs to the Section Communications)
15 pages, 1452 KB  
Article
Hybrid Deep Learning and Transformer-Based Framework for Multivariate Electricity Consumption Forecasting
by Muzaffer Ertürk, Murat Emeç and Mahmut Turhan
Appl. Sci. 2026, 16(6), 2760; https://doi.org/10.3390/app16062760 - 13 Mar 2026
Abstract
Accurate forecasting of multivariate time series is essential for energy management, grid optimisation, and policy planning. This study presents a hybrid deep learning and Transformer-based forecasting framework for predicting hourly electricity consumption across Turkey using nationwide data from Energy Exchange Istanbul (EPİAŞ) between 2018 and 2025. The dataset comprises 15 variables representing diverse energy sources and market indicators, including consumption, generation, and the market-clearing price (MCP). The proposed hybrid model integrates Long Short-Term Memory (LSTM), Bidirectional LSTM (BLSTM), and Gated Recurrent Unit (GRU) layers to capture both short- and long-term temporal dependencies, while a Transformer model leveraging multi-head self-attention mechanisms is used for comparison. All models were trained using standardised preprocessing, a 24 h lookback window, and optimised hyperparameters via GridSearchCV. Experimental results reveal that the hybrid model achieved the best overall performance, with MAE = 464.01, RMSE = 663.39, and R2 = 0.9902, significantly outperforming the baseline and Transformer models. The Transformer demonstrated robust long-horizon learning capability (R2 = 0.9257) but at a higher computational cost. These results confirm that combining multiple recurrent architectures enhances predictive accuracy and stability for large-scale, real-time energy forecasting. The proposed framework offers a reliable foundation for smart grid operations, demand prediction, and data-driven energy policy development.
(This article belongs to the Section Computing and Artificial Intelligence)
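A hedged Keras sketch of such a stacked LSTM + BiLSTM + GRU forecaster, using the 24 h lookback and 15 input variables from the abstract; the unit counts, layer order, and optimizer are assumptions, not the paper's tuned configuration.

```python
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    keras.Input(shape=(24, 15)),                   # 24 h lookback, 15 variables
    layers.LSTM(64, return_sequences=True),
    layers.Bidirectional(layers.LSTM(32, return_sequences=True)),
    layers.GRU(16),
    layers.Dense(1),                               # next-hour consumption
])
model.compile(optimizer="adam", loss="mse", metrics=["mae"])
```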

20 pages, 3027 KB  
Article
Acoustic Signal-Based Piezoelectric Thin-Film Microbalance: A Versatile and Portable Platform for Biomedical Sensing and Point-of-Care Testing
by Bei Zhao, Xiaomeng Li, Jing Shi and Huiling Liu
Biosensors 2026, 16(3), 160; https://doi.org/10.3390/bios16030160 - 13 Mar 2026
Abstract
This study introduces a portable piezoelectric thin-film microbalance platform that combines acoustic signal analysis with deep learning for point-of-care mass detection. The system employs a flexible polyvinylidene fluoride sensor, a smartphone for acoustic signal acquisition, and three deep learning models: convolutional neural network, long short-term memory network, and Transformer. Experimental findings indicate that the Transformer achieves the highest classification accuracy of 99.5%, outperforming the convolutional neural network at 96.9% and the long short-term memory network at 97.3%, attributed to its enhanced capability to capture long-range temporal dependencies. The platform facilitates real-time, label-free detection without the need for bulky instrumentation, providing a cost-effective and scalable solution for decentralized diagnostics. This research establishes a foundational framework for intelligent portable micro-mass sensing with significant potential applications in precision medicine, environmental monitoring, and personalized healthcare.
(This article belongs to the Section Biosensor and Bioelectronic Devices)

17 pages, 4541 KB  
Article
Neurophysiological In Vitro Model of Amyloid-β-Induced Deficits of Hippocampal LTP Involving Neuronal Adenosine A2A Receptor Dysfunction Through CD73
by Francisco Q. Gonçalves, Henrique B. Silva, Ângelo R. Tomé, Paula Agostinho, Rodrigo A. Cunha and João P. Lopes
Cells 2026, 15(6), 510; https://doi.org/10.3390/cells15060510 - 13 Mar 2026
Abstract
Amyloid-β peptides (Aβ) are considered a main culprit of Alzheimer’s disease (AD), leading to synaptic dysfunction and memory deficits. Although studies in animal models of AD converge to show alterations of synaptic plasticity, namely of long-term potentiation (LTP), the mechanisms through which Aβ affects synaptic function remain to be unveiled. In this study, we established experimental conditions showing that the acute exposure of mouse hippocampal slices to optimized concentrations of Aβ impaired short-term (paired-pulse facilitation, PPF) and long-term (LTP) plasticity without altering basal synaptic transmission. We observed that the elimination of extracellular adenosine with adenosine deaminase abrogated the impact of Aβ on synaptic plasticity, showing a mandatory involvement of extracellular adenosine in the neurophysiological effects of Aβ. Additionally, inhibiting adenosine receptor function with caffeine, as well as selectively blocking adenosine A1 receptors (A1R) with DPCPX, or adenosine A2A receptors (A2AR) with either the antagonist SCH58261 or through knocking out A2AR, demonstrated that acute Aβ modified mouse hippocampal PPF via A1R and LTP through A2AR. Furthermore, the use of slices from mice bearing forebrain-neuron A2AR deletion, along with the application of α,β-methylene ADP, a CD73 inhibitor, confirmed that the neurophysiological actions of Aβ on hippocampal LTP occur selectively through the overfunction of neuronal A2AR via CD73-mediated formation of extracellular adenosine. Overall, the exploitation of a neurophysiological model of early AD, based on the acute administration of Aβ to hippocampal slices, confirmed the critical involvement of adenosine signaling in the impact of Aβ on synaptic plasticity.
(This article belongs to the Special Issue New Discoveries in Calcium Signaling-Related Neurological Disorders)

18 pages, 4314 KB  
Article
Remaining Useful Life Prediction for Rotating Machinery via Multi-Graph-Based Spatiotemporal Feature Fusion
by Xiangang Cao, Chenjian Gao and Xinyuan Zhang
Appl. Sci. 2026, 16(6), 2738; https://doi.org/10.3390/app16062738 - 13 Mar 2026
Abstract
Rotating machinery serves as a critical component in various engineering systems, making accurate prediction of its Remaining Useful Life (RUL) essential for ensuring operational stability. To address the technical limitations of mainstream RUL prediction models in comprehensively capturing spatial correlations among multiple sensors, this paper proposes a multi-graph-structured spatiotemporal feature fusion model for RUL prediction of rotating machinery. Rather than constructing a single correlation graph, the model first builds two distinct graphs—a prior correlation graph based on the structural mechanism of the rotating machinery and a similarity correlation graph derived from monitoring data distribution characteristics. These dual-perspective graphs collectively characterize the potential spatial dependencies among multiple sensors. Subsequently, a Graph Attention Network (GAT) is introduced to aggregate spatial features from both graphs, and a feature concatenation fusion strategy is adopted to achieve a comprehensive representation of the inter-sensor spatial dependencies. Finally, a Long Short-Term Memory (LSTM) network is employed to extract temporal evolution features from the operational data. The effective fusion of these spatial and temporal features enhances the model’s RUL prediction performance. Simulation experiments conducted on the Commercial Modular Aero-Propulsion System Simulation (C-MAPSS) dataset validated the robustness of the proposed method.
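A hedged sketch of the dual-graph spatial aggregation followed by temporal modeling, assuming PyTorch Geometric's GATConv; the dimensions and the mean pooling over sensors are illustrative choices, not the paper's exact architecture.

```python
import torch
import torch.nn as nn
from torch_geometric.nn import GATConv

class DualGraphRUL(nn.Module):
    def __init__(self, n_feats, hidden=32):
        super().__init__()
        self.gat_prior = GATConv(n_feats, hidden)   # structural-mechanism graph
        self.gat_sim = GATConv(n_feats, hidden)     # data-similarity graph
        self.lstm = nn.LSTM(2 * hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)            # RUL estimate

    def forward(self, x_seq, edges_prior, edges_sim):
        # x_seq: (time, n_sensors, n_feats); aggregate spatially per time step,
        # concatenate the two graph views, then pool over sensors.
        fused = torch.stack([
            torch.cat([self.gat_prior(x, edges_prior),
                       self.gat_sim(x, edges_sim)], dim=-1).mean(0)
            for x in x_seq])                        # -> (time, 2*hidden)
        out, _ = self.lstm(fused.unsqueeze(0))      # add batch dimension
        return self.head(out[:, -1])                # predict from the last step
```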
