Search Results (5,336)

Search Parameters:
Keywords = memory enhancers

24 pages, 1217 KB  
Review
Apolipoprotein E4 in Alzheimer’s Disease: Role in Pathology, Lipid Metabolism, and Drug Treatment
by Nour F. Al-Ghraiybah, Amer E. Alkhalifa, Yutaka Itokazu, Taylor O. Farr, Naima C. Perez, Hande Ali and Amal Kaddoumi
Int. J. Mol. Sci. 2026, 27(2), 1004; https://doi.org/10.3390/ijms27021004 (registering DOI) - 19 Jan 2026
Abstract
Alzheimer’s Disease (AD) is a neurodegenerative disorder characterized by cognitive decline and memory loss. Among the genetic risk factors linked to AD, the apolipoprotein E4 (ApoE4) remains the strongest. It is well known that carrying the ApoE4 isoform is associated with advanced AD pathology, blood–brain barrier (BBB) disruption, and changes in lipid metabolism. In this review, we provide an overview of the role of centrally and peripherally produced ApoE in AD. After this introduction, we focus on new findings regarding ApoE4’s effects on AD pathology and BBB function. We then discuss ApoE’s role in lipid metabolism in AD, highlighting examples of lipid changes caused by carrying the ApoE4 isoform. Next, the review explores the implications of ApoE4 isoforms for current treatments—whether they involve anti-amyloid therapy or other pharmacological agents used for AD—emphasizing the importance of personalized medicine approaches for patients with this high-risk allele. This review aims to provide an updated overview of ApoE4’s effects on AD pathology and treatment. By integrating recent discoveries, it underscores the critical need to consider ApoE4 status in both research and clinical settings to enhance therapeutic strategies and outcomes for individuals with AD.
23 pages, 3992 KB  
Article
A Sparse Aperture ISAR Imaging Based on a Single-Layer Network Framework
by Haoxuan Song, Xin Zhang, Taonan Wu, Jialiang Xu, Yong Wang and Hongzhi Li
Remote Sens. 2026, 18(2), 335; https://doi.org/10.3390/rs18020335 (registering DOI) - 19 Jan 2026
Abstract
Under sparse aperture (SA) conditions, inverse synthetic aperture radar (ISAR) imaging becomes a severely ill-posed inverse problem due to undersampled and noisy measurements, leading to pronounced degradation in azimuth resolution and image quality. Although deep learning approaches have demonstrated promising performance for SA-ISAR imaging, their practical deployment is often hindered by black-box behavior, fixed network depth, high computational cost, and limited robustness under extreme operating conditions. To address these challenges, this paper proposes an ADMM Denoising Deep Equilibrium Framework (ADnDEQ) for SA-ISAR imaging. The proposed method reformulates an ADMM-based unfolding process as an implicit deep equilibrium (DEQ) model, where ADMM provides an interpretable optimization structure and a lightweight DnCNN is embedded as a learned proximal operator to enhance robustness against noise and sparse sampling. By representing the reconstruction process as the equilibrium solution of a single-layer network with shared parameters, ADnDEQ decouples forward and backward propagation, achieves constant memory complexity, and enables flexible control of inference iterations. Experimental results demonstrate that the proposed ADnDEQ framework achieves superior reconstruction quality and robustness compared with conventional layer-stacked networks, particularly under low sampling ratios and low-SNR conditions, while maintaining significantly reduced computational cost.
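The equilibrium formulation at the heart of this abstract can be illustrated with a toy fixed-point solver. This is not the paper's ADnDEQ code: `layer` below is a hypothetical scalar contraction standing in for one ADMM-plus-denoiser step. It does, however, exhibit the two properties claimed above: memory use that is constant in the number of iterations, and an iteration budget that can be changed freely at inference time.

```python
import math

def deq_fixed_point(f, x, z0=0.0, tol=1e-9, max_iter=200):
    """Solve z* = f(z, x) by fixed-point iteration.

    Only the current iterate is stored, so memory is constant in the
    iteration count, and `max_iter` can be tuned per deployment.
    """
    z = z0
    for _ in range(max_iter):
        z_next = f(z, x)
        if abs(z_next - z) < tol:
            return z_next
        z = z_next
    return z

# Toy contraction standing in for one ADMM-plus-denoiser step
# (the 0.5 weight keeps |f'| < 1, so the iteration converges).
def layer(z, x, w=0.5):
    return math.tanh(w * z + x)

z_star = deq_fixed_point(layer, x=0.3)
# At equilibrium, applying the layer once more changes nothing.
assert abs(layer(z_star, 0.3) - z_star) < 1e-6
```

In the actual framework the iterate is an image and `f` is an ADMM step with a learned denoiser, but the equilibrium-solving loop has the same shape.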

19 pages, 3684 KB  
Article
Building Cooling Load Prediction Based on GWO-CNN-LSTM
by Xuelong Zhang, Chao Zhang, Yongzhi Ma and Kunyu Liu
Energies 2026, 19(2), 498; https://doi.org/10.3390/en19020498 - 19 Jan 2026
Abstract
Accurate prediction of building cooling load is crucial for enhancing energy efficiency and optimizing the operation of Heating, Ventilation, and Air Conditioning (HVAC) systems. To improve predictive accuracy, we propose a hybrid Grey Wolf Optimizer-Convolutional Neural Network–Long Short-Term Memory (GWO-CNN-LSTM) prediction model. A 3D model of the building was first developed using SketchUp, and its cooling load was subsequently simulated with EnergyPlus and OpenStudio. The Grey Wolf Optimizer (GWO) algorithm is employed to automatically tune the hyperparameters of the CNN-LSTM model, thereby improving both training efficiency and predictive performance. A comparative analysis with other models demonstrates that the proposed model effectively captures both long-term temporal patterns and short-term fluctuations in cooling load, outperforming baseline models such as Long Short-Term Memory (LSTM), Genetic Algorithm-Convolutional Neural Network-Long Short-Term Memory (GA-CNN-LSTM), and Particle Swarm Optimization-Convolutional Neural Network–Long Short-Term Memory (PSO-CNN-LSTM). The GWO-CNN-LSTM model achieves an R2 of 0.9266, with MAE and RMSE of 218.7830 W and 327.4012 W, respectively, representing improvements of 35.0% and 27.0% in MAE and RMSE compared to LSTM, and 20.8% and 16.3% compared to GA-CNN-LSTM.
(This article belongs to the Section G: Energy and Buildings)
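For readers unfamiliar with the Grey Wolf Optimizer the abstract relies on, a generic GWO can be sketched in a few dozen lines. This is not the authors' code, and the objective and bounds below are hypothetical stand-ins for a CNN-LSTM validation loss over two hyperparameters; only the pack-update rule is the standard GWO scheme.

```python
import random

def gwo_minimize(f, bounds, n_wolves=12, n_iter=60, seed=0):
    """Minimal Grey Wolf Optimizer for box-constrained minimisation.

    The three best wolves (alpha, beta, delta) steer the pack, while
    the coefficient `a` decays linearly from 2 to 0, shifting the
    search from exploration toward exploitation.
    """
    rng = random.Random(seed)
    pack = [[rng.uniform(lo, hi) for lo, hi in bounds]
            for _ in range(n_wolves)]
    for t in range(n_iter):
        pack.sort(key=f)
        leaders = [w[:] for w in pack[:3]]       # alpha, beta, delta
        a = 2.0 * (1.0 - t / n_iter)
        for i in range(n_wolves):
            for d, (lo, hi) in enumerate(bounds):
                est = 0.0
                for leader in leaders:
                    A = 2 * a * rng.random() - a  # step scale
                    C = 2 * rng.random()          # leader weighting
                    est += leader[d] - A * abs(C * leader[d] - pack[i][d])
                pack[i][d] = min(hi, max(lo, est / 3))
    return min(pack, key=f)

# Toy quadratic standing in for validation loss as a function of two
# hypothetical hyperparameters; its true optimum sits at (0.3, 5.0).
best = gwo_minimize(lambda p: (p[0] - 0.3) ** 2 + (p[1] - 5.0) ** 2,
                    bounds=[(0.0, 1.0), (1.0, 10.0)])
```

In a hyperparameter-tuning setting, `f` would train the CNN-LSTM with the candidate settings and return its validation error, which is why metaheuristics like GWO are attractive: they need only function evaluations, not gradients.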

22 pages, 3531 KB  
Article
Active Fault-Tolerant Method for Navigation Sensor Faults Based on Frobenius Norm–KPCA–SVM–BiLSTM
by Zexia Huang, Bei Xu, Guoyang Ye, Pu Yang and Chunli Shao
Actuators 2026, 15(1), 64; https://doi.org/10.3390/act15010064 - 19 Jan 2026
Abstract
Aiming to address the safety and stability issues caused by typical faults of Unmanned Aerial Vehicle (UAV) navigation sensors, a novel fault-tolerant method is proposed, which can capture the temporal dependencies of fault feature evolution, and complete the classification, prediction, and data reconstruction of fault data. In this fault-tolerant method, the feature extraction module adopts the FNKPCA method—integrating the Frobenius Norm (F-norm) with Kernel Principal Component Analysis (KPCA)—to optimize the kernel function’s ability to capture signal features, and enhance the system reliability. By combining FNKPCA with Support Vector Machine (SVM) and Bidirectional Long Short-Term Memory (BiLSTM), an active fault-tolerant processing method, namely FNKPCA–SVM–BiLSTM, is obtained. This study conducts comparative experiments on public datasets, and verifies the effectiveness of the proposed method under different fault states. The proposed approach has the following advantages: (1) It achieves a detection accuracy of 98.64% for sensor faults, with an average false alarm rate of only 0.15% and an average missed detection rate of 1.16%, demonstrating excellent detection performance. (2) Compared with the Long Short-Term Memory (LSTM)-based method, the proposed fault-tolerant method can reduce the RMSE metrics of Global Positioning System (GPS), Inertial Measurement Unit (IMU), and Ultra-Wide-Band (UWB) sensors by 77.80%, 14.30%, and 75.00%, respectively, exhibiting a significant fault-tolerant effect.
(This article belongs to the Section Actuators for Manufacturing Systems)
21 pages, 2881 KB  
Article
A Rapid Prediction Model of Rainstorm Flood Targeting Power Grid Facilities
by Shuai Wang, Lei Shi, Xiaoli Hao, Xiaohua Ren, Qing Liu, Hongping Zhang and Mei Xu
Hydrology 2026, 13(1), 37; https://doi.org/10.3390/hydrology13010037 - 19 Jan 2026
Abstract
Rainstorm floods constitute one of the major natural hazards threatening the safe and stable operation of power grid facilities. Constructing a rapid and accurate prediction model is of great significance in order to enhance the disaster prevention capacity of the power grid. This study proposes a rapid prediction model for urban rainstorm flood targeting power grid facilities based on deep learning. The model utilizes computational results of high-precision mechanism models as data-driven input and adopts a dual-branch prediction architecture of space and time: the spatial prediction module employs a multi-layer perceptron (MLP), and the temporal prediction module integrates a convolutional neural network (CNN), a long short-term memory network (LSTM), and an attention mechanism (ATT). The constructed water dynamics model of the right bank of Liangshui River in Fengtai District of Beijing has been verified to be reliable in the simulation of the July 2023 (“23·7”) extreme rainstorm event in Beijing, which provides high-quality training and validation data for the deep learning-based surrogate model (SM model). Compared with traditional high-precision mechanism models, the SM model shows distinctive advantages: the R2 value of the overall inundation water depth prediction of the spatial prediction module reaches 0.9939, with an average absolute water depth error of 0.013 m, and the R2 values of the temporal water depth processes predicted by the temporal prediction module at all substations are all higher than 0.92. With rainfall data as the only input, the water depth at power grid facilities can be output within seconds, providing an effective tool for rapid assessment of flood risks to power grid facilities. In short, the main contribution of this study lies in the proposal of the SM model driven by the high-precision mechanism model. Through its dual-branch spatial and temporal modules, the model achieves, for the first time, second-scale, high-precision prediction from rainfall input to water depth output for flood-threatened power grid facilities, providing an expandable method for real-time simulation of complex physical processes.
15 pages, 2074 KB  
Article
Research on Encryption and Decryption Technology of Microservice Communication Based on Block Cipher
by Shijie Zhang, Xiaolan Xie, Ting Fan and Yu Wang
Electronics 2026, 15(2), 431; https://doi.org/10.3390/electronics15020431 - 19 Jan 2026
Abstract
The efficiency optimization of encryption and decryption algorithms in cloud environments is addressed in this study, where the processing speed of encryption and decryption is enhanced through the application of multi-threaded parallel technology. In view of the high-concurrency and distributed storage characteristics of cloud platforms, a multi-threaded concurrency mechanism is adopted for the direct processing of data streams. Compared with the traditional serial processing mode, four distinct encryption algorithms, namely AES, DES, SM4 and Ascon, are employed, and different data units are processed concurrently by means of multithreaded technology. Based on multi-dimensional performance evaluation indicators (including throughput, memory footprint and security level), comparative analyses are carried out to optimize the design scheme; accordingly, multi-threaded collaborative encryption is realized to improve the overall operation efficiency. Experimental results indicate that, in comparison with the traditional serial encryption method, the encryption and decryption latency of the algorithm is reduced by around 50%, which significantly lowers the time overhead associated with encryption and decryption processes. Simultaneously, the throughput of AES and DES algorithms is observed to be doubled, which leads to a remarkable improvement in communication efficiency. Moreover, under the premise that the original secure communication capability is guaranteed, system resource overhead is effectively reduced by SM4 and Ascon algorithms. On this basis, a quantitative reference basis is provided for cloud platforms to develop targeted encryption strategies tailored to diverse business demands. In conclusion, the proposed approach is of profound significance for advancing the synergistic optimization of security and performance in cloud-native data communication scenarios.
(This article belongs to the Special Issue AI for Wireless Communications and Security)
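The chunk-parallel pattern described in the abstract can be sketched as follows. This is an illustration, not the paper's implementation: to stay dependency-free it uses a SHA-256 XOR keystream as a stand-in for AES/DES/SM4/Ascon, and the chunk size and counter layout are assumptions. The key point it demonstrates is that independent data units can be encrypted concurrently and reassembled in order.

```python
import hashlib
from concurrent.futures import ThreadPoolExecutor

def _keystream(key: bytes, counter: int, n: int) -> bytes:
    """Hash-based keystream; a stand-in for a real block cipher in CTR mode."""
    out = bytearray()
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(out[:n])

def encrypt_chunk(key: bytes, index: int, chunk: bytes) -> bytes:
    # Each chunk draws its keystream from a disjoint counter range, so
    # chunks are independent and can be processed in any order.
    ks = _keystream(key, index * (1 << 20), len(chunk))
    return bytes(a ^ b for a, b in zip(chunk, ks))

def parallel_encrypt(key: bytes, data: bytes, chunk_size: int = 4096) -> bytes:
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    with ThreadPoolExecutor() as pool:
        # map() preserves input order, so the ciphertext reassembles correctly.
        enc = pool.map(encrypt_chunk, [key] * len(chunks),
                       range(len(chunks)), chunks)
    return b"".join(enc)

key = b"demo-key"
msg = b"microservice payload " * 500
ct = parallel_encrypt(key, msg)
# An XOR keystream is symmetric: encrypting twice recovers the plaintext.
assert parallel_encrypt(key, ct) == msg and ct != msg
```

With a real cipher library the per-chunk function would change, but the thread-pool orchestration, which is where the paper's latency reduction comes from, stays the same.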

24 pages, 4302 KB  
Article
TPC-Tracker: A Tracker-Predictor Correlation Framework for Latency Compensation in Aerial Tracking
by Xuqi Yang, Yulong Xu, Renwu Sun, Tong Wang and Ning Zhang
Remote Sens. 2026, 18(2), 328; https://doi.org/10.3390/rs18020328 - 19 Jan 2026
Abstract
Online visual object tracking is a critical component of remote sensing-based aerial vehicle physical tracking, enabling applications such as environmental monitoring, target surveillance, and disaster response. In real-world remote sensing scenarios, the inherent processing delay of tracking algorithms results in the tracker’s output lagging behind the actual state of the observed scene. This latency not only degrades the accuracy of visual tracking in dynamic remote sensing environments but also impairs the reliability of UAV physical tracking control systems. Although predictive trackers have shown promise in mitigating latency impacts by forecasting target future states, existing methods face two key challenges in remote sensing applications: weak correlation between trackers and predictors, where predictions rely solely on motion information without leveraging rich remote sensing visual features; and inadequate modeling of continuous historical memory from discrete remote sensing data, limiting adaptability to complex spatiotemporal changes. To address these issues, we propose TPC-Tracker, a Tracker-Predictor Correlation Framework tailored for latency compensation in remote sensing-based aerial tracking. A Visual Motion Decoder (VMD) is designed to fuse high-dimensional visual features from remote sensing imagery with motion information, strengthening the tracker-predictor connection. Additionally, the Visual Memory Module (VMM) and Motion Memory Module (M3) model discrete historical remote sensing data into continuous spatiotemporal memory, enhancing predictive robustness. Compared with state-of-the-art predictive trackers, TPC-Tracker reduces the Mean Squared Error (MSE) by up to 38.95% in remote sensing-oriented physical tracking simulations. Deployed on VTOL drones, it achieves stable tracking of remote sensing targets at 80 m altitude and 20 m/s speed. Extensive experiments on public UAV remote sensing datasets and real-world remote sensing tasks validate the framework’s superiority in handling latency-induced challenges in aerial remote sensing scenarios.
(This article belongs to the Section AI Remote Sensing)

29 pages, 6120 KB  
Article
Bionic Technology in Prosthetics: Multi-Objective Optimization of a Bioinspired Shoulder-Elbow Prosthesis with Embedded Actuation
by Jingxu Jiang, Gengbiao Chen, Xin Wang and Hongwei Yan
Biomimetics 2026, 11(1), 79; https://doi.org/10.3390/biomimetics11010079 (registering DOI) - 19 Jan 2026
Abstract
The development of upper-limb prostheses is often hindered by limited dexterity, a restricted workspace, and bulky designs, primarily due to performance limitations in proximal joints like the shoulder and elbow, which contribute to high user abandonment rates. To overcome these challenges, this paper presents a novel, bioinspired, and integrated prosthetic system as an advancement in bionic technology. The design incorporates a shoulder joint based on an asymmetric 3-RRR spherical parallel mechanism (SPM) with actuators embedded within the moving platform, and an elbow joint actuated by low-voltage Shape Memory Alloy (SMA) springs. The inverse kinematics of the shoulder mechanism was established, revealing the existence of up to eight configurations. We employed Multi-Objective Particle Swarm Optimization (MOPSO) to simultaneously maximize workspace coverage, enhance dexterity, and minimize joint torque. The optimized design achieves remarkable performance: (1) 85% coverage of the natural shoulder’s workspace; (2) a maximum von Mises stress of merely 3.4 MPa under a 40 N load, ensuring structural integrity; and (3) a sub-0.2 s response time for the SMA-driven elbow under low-voltage conditions (6 V) at a motion velocity of 6°/s. Both motion simulation and prototype testing validated smooth and anthropomorphic motion trajectories. This work provides a comprehensive framework for developing lightweight, high-performance prosthetic limbs, establishing a solid foundation for next-generation wearable robotics and bionic devices. Future research will focus on the integration of neural interfaces for intuitive control.

22 pages, 1120 KB  
Review
Beyond Cognitive Load Theory: Why Learning Needs More than Memory Management
by Andrew Sortwell, Evgenia Gkintoni, Jesús Díaz-García, Peter Ellerton, Ricardo Ferraz and Gregory Hine
Brain Sci. 2026, 16(1), 109; https://doi.org/10.3390/brainsci16010109 - 19 Jan 2026
Abstract
Background: The role of cognitive load theory (CLT) in understanding effective pedagogy has received increased attention in the fields of education and psychology in recent years. A considerable amount of literature has been published on the CLT construct as foundational guidance for instructional design by focusing on managing cognitive load in working memory to enhance learning outcomes. However, recent neuroscientific findings and practical critiques suggest that CLT’s emphasis on content-focused instruction and cognitive efficiency may overlook the complexity of human learning. Methods: This conceptual paper synthesises evidence from cognitive science, developmental psychology, neuroscience, health sciences and educational research to examine the scope conditions and limitations of CLT when applied as a general framework for K–12 learning. One of the major theoretical issues identified is the lack of consideration for the broad set of interpersonal and self-management skills, creating potential limitations for real-world educational contexts, where social-emotional and self-regulatory abilities are as crucial as cognitive competencies. Results: As a result of the critique, this paper introduces the Neurodevelopmental Informed Holistic Learning and Development Framework as a neuroscience-informed construct that integrates cognitive, emotional, and interpersonal dimensions essential for effective learning. Conclusions: In recognising the limitations of CLT, the paper offers practitioners contemporary, neurodevelopmentally informed insights that extend beyond cognitive efficiency alone and better reflect the multidimensional nature of real-world learning.
(This article belongs to the Special Issue Neuroeducation: Bridging Cognitive Science and Classroom Practice)

12 pages, 2328 KB  
Article
A Rapid Single-Phase Blackout Detection Algorithm Based on Clarke–Park Transformations
by Avelina Alejo-Reyes, Julio C. Rosas-Caro, Antonio Valderrabano-González, Jesus E. Valdez-Resendiz, Johnny Posada and Juana E. Medina-Alvarez
Electricity 2026, 7(1), 8; https://doi.org/10.3390/electricity7010008 (registering DOI) - 19 Jan 2026
Abstract
This paper presents a detection algorithm for identifying when a sinusoidal signal becomes zero, which can provide information about its amplitude. This method can be used to detect voltage interruptions in a single-phase sinusoidal waveform, which may be applied in the rapid recognition of power outages in single-phase electrical systems. The method requires the measurement of a voltage signal. Other analysis methods, like calculating the Root Mean Square (RMS), are based on window sampling and require storing a relatively large number of samples in system memory; an advantage of the proposed method is that it does not require as many samples, but its main advantage is its ability to reduce the detection time compared to other approaches. Techniques like the RMS value or amplitude detection through FFT typically require one full AC cycle to change from a 100% to 0% output signal and then detect a blackout, whereas the proposed method achieves detection within only a quarter cycle without considering additional rate-of-change enhancements, which can be further applied. The algorithm treats the measured single-phase voltage as the α component of an αβ Clarke pair and generates the β component by introducing a 90° electrical delay through a delayed replica of the original signal. The resulting αβ signals are then transformed into the dq reference frame in which the d component is used for outage detection, as it rapidly decreases from 100% to 0% within a quarter cycle following an interruption. This rapid response makes the proposed method suitable for applications that demand minimal detection latency, such as battery backup systems. Both simulation and experimental results validate the effectiveness of the approach.
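The quarter-cycle behaviour described in the abstract follows directly from the Clarke–Park construction and can be reproduced in a short simulation. The sketch below is an illustration under assumed conditions (50 Hz line, 10 kHz sampling, a detection threshold of 0.1 p.u., and a fault instant chosen for the example), not the authors' implementation.

```python
import math

F, FS = 50.0, 10_000.0             # line frequency and sample rate (Hz)
QUARTER = int(FS / F / 4)          # samples in a quarter cycle

def d_component(samples):
    """dq-frame d component of a single-phase signal.

    The measured signal is taken as v_alpha; v_beta is a replica
    delayed by 90 electrical degrees, as the abstract describes.
    """
    d = []
    for n, v_alpha in enumerate(samples):
        v_beta = samples[n - QUARTER] if n >= QUARTER else 0.0
        theta = 2 * math.pi * F * n / FS
        # With v_alpha = V*cos(theta) and v_beta = V*sin(theta),
        # d = V*(cos^2 + sin^2) = V: the d axis tracks the amplitude.
        d.append(v_alpha * math.cos(theta) + v_beta * math.sin(theta))
    return d

# Simulate a 1 p.u. sine that blacks out at t0 = 60 ms.
t0 = int(0.060 * FS)
v = [math.cos(2 * math.pi * F * n / FS) if n < t0 else 0.0
     for n in range(int(0.1 * FS))]
d = d_component(v)

# d reads ~1 p.u. before the fault; afterward the delayed replica
# empties within a quarter cycle, so d is guaranteed to collapse.
detect = next(n for n in range(t0, len(d)) if abs(d[n]) < 0.1)
assert detect - t0 <= QUARTER
```

The worst case is bounded structurally: a quarter cycle after the interruption the delay buffer holds only post-fault zeros, so the d component is exactly zero regardless of the fault's phase angle.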

23 pages, 1505 KB  
Article
Loss Prediction and Global Sensitivity Analysis for Distribution Transformers Based on NRBO-Transformer-BiLSTM
by Qionglin Li, Yi Wang and Tao Mao
Electronics 2026, 15(2), 420; https://doi.org/10.3390/electronics15020420 - 18 Jan 2026
Abstract
As distributed energy resources and nonlinear loads are integrated into power grids on a large scale, power quality issues have grown increasingly prominent, triggering a substantial rise in distribution transformer losses. Traditional approaches struggle to accurately forecast transformer losses under complex power quality conditions and lack quantitative analysis of the influence of various power quality indicators on losses. This study presents a data-driven methodology for transformer loss prediction and sensitivity analysis in such environments. First, an experimental platform is designed and built to measure transformer losses under composite power quality conditions, enabling the collection of actual measurement data when multi-source disturbances exist. Second, a high-precision loss prediction model—dubbed Newton-Raphson-Based Optimizer-Transformer-Bidirectional Long Short-Term Memory (NRBO-Transformer-BiLSTM)—is developed on the basis of an enhanced deep neural network. Finally, global sensitivity analysis methods are utilized to quantitatively evaluate the impact of different power quality indicators on transformer losses. Experimental results reveal that the proposed prediction model achieves an average error rate of less than 0.18% and a similarity coefficient of over 0.9989. Among all power quality indicators, voltage deviation has the most significant impact on transformer losses (with a sensitivity of 0.3268), followed by three-phase unbalance (sensitivity: 0.0109) and third harmonics (sensitivity: 0.0075). This research offers a theoretical foundation and technical support for enhancing the energy efficiency of distribution transformers and implementing effective power quality management.
23 pages, 13094 KB  
Article
PDR-STGCN: An Enhanced STGCN with Multi-Scale Periodic Fusion and a Dynamic Relational Graph for Traffic Forecasting
by Jie Hu, Bingbing Tang, Langsha Zhu, Yiting Li, Jianjun Hu and Guanci Yang
Systems 2026, 14(1), 102; https://doi.org/10.3390/systems14010102 - 18 Jan 2026
Abstract
Accurate traffic flow prediction is a core component of intelligent transportation systems, supporting proactive traffic management, resource optimization, and sustainable urban mobility. However, urban traffic networks exhibit heterogeneous multi-scale periodic patterns and time-varying spatial interactions among road segments, which are not sufficiently captured by many existing spatio-temporal forecasting models. To address this limitation, this paper proposes PDR-STGCN (Periodicity-Aware Dynamic Relational Spatio-Temporal Graph Convolutional Network), an enhanced STGCN framework that jointly models multi-scale periodicity and dynamically evolving spatial dependencies for traffic flow prediction. Specifically, a periodicity-aware embedding module is designed to capture heterogeneous temporal cycles (e.g., daily and weekly patterns) and emphasize dominant social rhythms in traffic systems. In addition, a dynamic relational graph construction module adaptively learns time-varying spatial interactions among road nodes, enabling the model to reflect evolving traffic states. Spatio-temporal feature fusion and prediction are achieved through an attention-based Bidirectional Long Short-Term Memory (BiLSTM) network integrated with graph convolution operations. Extensive experiments are conducted on three datasets, including Metro Traffic Los Angeles (METR-LA), Performance Measurement System Bay Area (PEMS-BAY), and a real-world traffic dataset from Guizhou, China. Experimental results demonstrate that PDR-STGCN consistently outperforms state-of-the-art baseline models. For next-hour traffic forecasting, the proposed model achieves average reductions of 16.50% in RMSE, 9.00% in MAE, and 0.34% in MAPE compared with the second-best baseline. Beyond improved prediction accuracy, PDR-STGCN reveals latent spatio-temporal evolution patterns and dynamic interaction mechanisms, providing interpretable insights for traffic system analysis, simulation, and AI-driven decision-making in urban transportation networks.

15 pages, 16477 KB  
Article
Defect Classification Dataset and Algorithm for Magnetic Random Access Memory
by Hui Chen and Jianyi Yang
Mathematics 2026, 14(2), 323; https://doi.org/10.3390/math14020323 - 18 Jan 2026
Abstract
Defect categorization is essential to product quality assurance during the production of magnetic random access memory (MRAM). Nevertheless, traditional defect detection techniques continue to face difficulties in large-scale deployments, such as a lack of labeled examples with complicated defect shapes, which results in inadequate identification accuracy. In order to overcome these problems, we create the MARMset dataset, which consists of 39,822 images and covers 14 common defect types for MRAM defect detection and classification. Furthermore, we present a baseline framework (GAGBnet) for MRAM defect classification, including a global attention module (GAM) and an attention-guided block (AGB). Firstly, the GAM is introduced to enhance the model’s feature extraction capability. Secondly, inspired by the feature enhancement strategy, the AGB is designed to incorporate an attention-guided mechanism during feature fusion to remove redundant information and focus on critical features. Finally, the experimental results show that the average accuracy rate of this method on the MARMset reaches 92.90%. In addition, we test on the NEU-CLS dataset to evaluate cross-dataset generalization, achieving an average accuracy of 98.60%.
(This article belongs to the Section E1: Mathematics and Computer Science)
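The abstract describes the attention-guided block only at a high level. A minimal sketch of one plausible attention-guided fusion step, assuming a squeeze-and-excitation-style channel gate; the function and weight names (`attention_guided_fusion`, `w1`, `w2`) are illustrative, not from the paper:

```python
import numpy as np

def attention_guided_fusion(shallow, deep, w1, w2):
    """Fuse two (C, H, W) feature maps, gating the shallow branch with
    channel attention computed from the deep branch (illustrative only)."""
    pooled = deep.mean(axis=(1, 2))                # global average pool -> (C,)
    hidden = np.maximum(w1 @ pooled, 0.0)          # ReLU bottleneck -> (C//r,)
    gate = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))    # sigmoid gate in (0, 1) -> (C,)
    # channels the gate scores near zero are suppressed before fusion,
    # which is one way to "remove redundant information" during fusion
    return deep + shallow * gate[:, None, None]
```

Because the gate lies in (0, 1), each fused channel deviates from the deep features by at most the magnitude of the corresponding shallow activation.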

16 pages, 6489 KB  
Article
LIF-VSR: A Lightweight Framework for Video Super-Resolution with Implicit Alignment and Attentional Fusion
by Songyi Zhang, Hailin Zhang, Xiaolin Wang, Kailei Song, Zhizhuo Han, Zhitao Zhang and Wenchi Cheng
Sensors 2026, 26(2), 637; https://doi.org/10.3390/s26020637 - 17 Jan 2026
Abstract
Video super-resolution (VSR) has advanced rapidly in enhancing video quality and restoring compressed content, yet leading methods often remain too costly for real-world use. We present LIF-VSR, a lightweight, near-real-time framework built with an efficiency-first philosophy, comprising economical temporal propagation, a new neighboring-frame fusion strategy, and three streamlined core modules. For temporal propagation, a uni-directional recurrent architecture transfers context through a compact inter-frame memory unit, avoiding the heavy compute and memory of multi-frame parallel inputs. For fusion and alignment, we discard 3D convolutions and optical flow, instead using (i) a deformable convolution module for implicit feature-space alignment, and (ii) a sparse attention fusion module that aggregates adjacent-frame information via learned sparse key sampling points, sidestepping dense global computation. For feature enhancement, a cross-attention mechanism selectively calibrates temporal features at far lower cost than global self-attention. Across public benchmarks, LIF-VSR achieves competitive results with only 3.06 M parameters and a very low computational footprint, reaching 27.65 dB on Vid4 and 31.61 dB on SPMCs. Full article
(This article belongs to the Section Intelligent Sensors)
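The sparse attention fusion module is described only in outline. Here is a minimal sketch of attending over a handful of sampled key points per query instead of all neighbor-frame positions; the names and the integer gather-based sampling are assumptions (a real implementation would likely sample at learned fractional offsets with bilinear interpolation):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def sparse_cross_attention(query, neighbor, sample_idx):
    """query: (N, d) current-frame features; neighbor: (M, d) adjacent-frame
    features; sample_idx: (N, k) sparse key locations per query position."""
    d = query.shape[1]
    keys = neighbor[sample_idx]                           # (N, k, d) gather
    scores = np.einsum('nd,nkd->nk', query, keys) / np.sqrt(d)
    attn = softmax(scores, axis=-1)                       # over k points only
    # each query aggregates k sampled neighbor features, so the cost is
    # O(N * k) rather than the O(N * M) of dense global attention
    return np.einsum('nk,nkd->nd', attn, keys)
```

With k = 1 the softmax is trivially 1 and each output reduces to the single sampled neighbor feature, which is a convenient sanity check.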

14 pages, 14186 KB  
Article
Efficient and Spatially Aware 3D Gaussian Splatting for Compact Large-Scale Scene Reconstruction
by Hao Luo, Zhituo Tu, Jialei He and Jie Yuan
Appl. Sci. 2026, 16(2), 965; https://doi.org/10.3390/app16020965 (registering DOI) - 17 Jan 2026
Abstract
While 3D Gaussian Splatting (3DGS) has significantly advanced large-scale 3D reconstruction and novel view synthesis, it still suffers from high memory consumption and slow training speed. To address these issues without compromising reconstruction quality, we propose a novel 3DGS-based framework tailored for large-scale scenes. Specifically, we introduce a visibility-aware camera selection strategy within a divide-and-conquer training approach to dynamically adjust the number of input views for each sub-region. During training, a spatially aware densification strategy is employed to improve the reconstruction of distant objects, complemented by depth regularization to refine geometric details. Moreover, we apply an enhanced Gaussian pruning method to re-evaluate the importance of each Gaussian, prune redundant Gaussians with low contributions, and improve efficiency while reducing memory usage. Experiments on multiple large-scale scene datasets demonstrate that our approach achieves superior performance in both quality and efficiency. With its robustness and scalability, our method shows great potential for real-world applications such as autonomous driving, digital twins, urban mapping, and virtual reality content creation. Full article
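The enhanced pruning step can be illustrated with a simple contribution-based score. The abstract does not give the paper's actual importance measure, so opacity times rasterization hit count below is purely a stand-in, and the function and parameter names are hypothetical:

```python
import numpy as np

def prune_gaussians(opacity, hit_count, keep_ratio=0.9):
    """Return sorted indices of Gaussians to keep, dropping the
    lowest-scoring (1 - keep_ratio) fraction.
    Score = opacity * hit_count (a stand-in importance measure)."""
    score = opacity * hit_count
    k = max(1, int(len(score) * keep_ratio))
    keep = np.argsort(score)[::-1][:k]     # top-k by descending contribution
    return np.sort(keep)
```

Pruning by a global score like this trades a small, bounded quality loss for lower memory and faster rasterization, since low-contribution Gaussians rarely affect rendered pixels.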
