Search Results (739)

Search Parameters:
Keywords = network slicing

17 pages, 4972 KB  
Article
Seismic Attribute Fusion and Reservoir Prediction Using Multiscale Convolutional Neural Networks and Self-Attention: A Case Study of the B Gas Field, South Sumatra Basin
by Ziyun Cheng, Wensong Huang, Xiaoling Zhang, Zhanxiang Lei, Guoliang Hong, Wenwen Wang, Mengyang Zhang, Linze Li and Jian Li
Processes 2026, 14(6), 981; https://doi.org/10.3390/pr14060981 - 19 Mar 2026
Abstract
Strong heterogeneity and ambiguous seismic responses hinder reliable sandstone thickness prediction when using a single seismic attribute in the lower sandstone interval of the Talang Akar Formation (hereafter abbreviated as the LTAF interval) in the B gas field, South Sumatra Basin. To address this challenge, we propose a seismic attribute fusion and reservoir sweet-spot prediction framework based on a multiscale convolutional neural network (CNN) integrated with a self-attention module. Multiple seismic attribute volumes are organized as multi-channel 2D attribute slices, and parallel convolutions with kernel sizes of 3 × 3, 5 × 5, and 7 × 7 are employed to capture spatial features ranging from thin-bed boundaries and channel morphology to sand-body assemblage distribution. The self-attention module explicitly models inter-attribute dependencies and performs adaptive weighted fusion to suppress noise and emphasize informative attributes. The network adopts a dual-output design, producing (i) a sandstone thickness prediction map at the same spatial resolution as the input and (ii) attribute importance scores for quantitative attribute selection and geological interpretation. Using 3D seismic data and well-constrained thickness labels, the proposed model achieves an R2 of 0.8954, outperforming linear regression (R2 = 0.8281) and random forest regression (R2 ≈ 0.8453). The learned importance scores indicate that amplitude-related attributes (e.g., RMS amplitude and maximum amplitude) contribute most to thickness prediction, whereas frequency- and energy-related attributes show relatively lower contributions, which is consistent with bandwidth-limited resolution effects. Overall, the proposed framework unifies attribute fusion, thickness prediction, and interpretability within a single model, providing practical support for fine reservoir characterization and development optimization in heterogeneous sandstone reservoirs.
(This article belongs to the Special Issue Applications of Intelligent Models in the Petroleum Industry)
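The adaptive weighted fusion step described in this abstract can be sketched minimally in numpy: raw importance scores are softmax-normalized and used to weight-sum the attribute maps. The scores, attribute names, and array shapes below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def attention_fuse(attribute_maps, scores):
    """Weighted fusion of seismic attribute maps.

    attribute_maps: (n_attr, H, W) stack of 2D attribute slices.
    scores: (n_attr,) raw importance scores (hypothetical values here).
    Returns the fused (H, W) map and the normalized weights.
    """
    w = np.exp(scores - scores.max())
    w = w / w.sum()                                  # softmax -> importance weights
    fused = np.tensordot(w, attribute_maps, axes=1)  # weighted sum over attributes
    return fused, w

rng = np.random.default_rng(0)
maps = rng.normal(size=(3, 4, 4))  # e.g. RMS amplitude, max amplitude, dominant frequency
fused, w = attention_fuse(maps, np.array([2.0, 1.0, 0.1]))
```

The softmax normalization keeps the weights interpretable as importance scores, which mirrors the attribute-ranking output described in the dual-output design.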

24 pages, 1451 KB  
Review
AI-Driven Network Optimization for the 5G-to-6G Transition: A Taxonomy-Based Survey and Reference Framework
by Rexhep Mustafovski, Galia Marinova, Besnik Qehaja, Edmond Hajrizi, Shejnaze Gagica and Vassil Guliashki
Future Internet 2026, 18(3), 155; https://doi.org/10.3390/fi18030155 - 17 Mar 2026
Abstract
This paper presents a taxonomy-based survey of AI-driven network optimization mechanisms relevant to the transition from fifth generation (5G) to sixth generation (6G) mobile communication systems. In contrast to earlier generational shifts that are often described as technology replacement cycles, the 5G-to-6G evolution is increasingly characterized in the literature as a prolonged period of coexistence, hybrid operation, and progressive integration of new capabilities across radio, edge, core, and service layers. To structure this transition, the paper organizes prior work into a transition-oriented taxonomy covering migration strategies, AI-enabled closed-loop control, RAN disaggregation and edge intelligence, core virtualization and slice orchestration, spectrum-aware coexistence, service-driven requirements, and security-aware governance. Rather than introducing a new optimization algorithm or an experimentally validated architecture, the contribution of this survey is analytical and integrative. Specifically, it consolidates fragmented research directions into a reference view of how AI-driven control mechanisms are distributed across spectrum, RAN, edge, and core domains during hybrid 5G–6G operation. In addition, the paper includes a structured evidence synthesis of performance trends, deployment maturity signals, and recurring methodological limitations reported across the literature. The review indicates that meeting anticipated 6G objectives, including ultra-low latency, high reliability, scalability, and improved energy efficiency, depends less on isolated enhancements at individual protocol layers and more on coordinated cross-layer optimization supported by AI-native control loops. At the same time, the surveyed literature reveals persistent gaps in service-to-control mapping, security-aware orchestration, interoperability across heterogeneous domains, and reproducible evaluation methodologies for hybrid 5G–6G environments. The survey is intended to provide researchers, network operators, and standardization stakeholders with a structured analytical basis for assessing how AI-driven optimization can support the staged evolution from 5G systems toward 6G-ready infrastructures.
(This article belongs to the Section Network Virtualization and Edge/Fog Computing)

33 pages, 3876 KB  
Article
Predictive Network Slicing Resource Orchestration: A VNF Approach
by Andrés Cárdenas, Luis Sigcha and Mohammadreza Mosahebfard
Future Internet 2026, 18(3), 149; https://doi.org/10.3390/fi18030149 - 16 Mar 2026
Abstract
As network slicing gains traction in cloud computing environments, efficient management and orchestration systems are required to realize the benefits of this technology. These systems must enable dynamic provisioning and resource optimization of virtualized services spanning multiple network slices. Nevertheless, the common resource overprovisioning practice implemented by service providers leads to the inefficient use of resources, limiting the ability of Mobile Network Operators (MNOs) to rent new network slices to more vertical customers. Hence, efficient resource allocation mechanisms are essential to achieve optimal network performance and cost-effectiveness. This paper proposes a predictive model for network slice resource optimization based on resource sharing between Virtualized Network Functions (VNFs). The model employs deep learning models based on Long Short-Term Memory (LSTM) and Transformers for CPU resource usage prediction and a reactive algorithm for resource sharing between VNFs. The model is powered by a telemetry system proposed as an extension of the 3GPP network slice management architectural framework. The extended architectural framework enhances the automation and optimization of the network slice lifecycle management. The model is validated through a practical use case, demonstrating the effectiveness of the resource sharing algorithm in preventing VNF overload and predicting resource usage accurately. The findings demonstrate that the sharing mechanism enhances resource optimization and ensures compliance with service level agreements, mitigating service degradation. This work contributes to the efficient management and utilization of network resources in 5G networks and provides a basis for further research in network slice resource optimization.
(This article belongs to the Special Issue Software-Defined Networking and Network Function Virtualization)
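The reactive resource-sharing idea, shifting spare CPU headroom from underloaded VNFs to overloaded ones based on predicted usage, can be illustrated with a toy sketch. The headroom margin, dictionary layout, and function name are assumptions for illustration, not the paper's algorithm.

```python
def share_cpu(allocated, predicted, headroom_margin=0.1):
    """Reallocate CPU units between VNFs of a slice (toy sketch).

    allocated/predicted: dicts mapping VNF name -> CPU units.
    A VNF donates only capacity above predicted usage plus a safety margin.
    Total allocated CPU is conserved.
    """
    new_alloc = dict(allocated)
    deficit = {v: predicted[v] - allocated[v]
               for v in allocated if predicted[v] > allocated[v]}
    surplus = {v: allocated[v] - predicted[v] * (1 + headroom_margin)
               for v in allocated
               if allocated[v] > predicted[v] * (1 + headroom_margin)}
    for needy, need in deficit.items():
        for donor in list(surplus):
            give = min(need, surplus[donor])
            new_alloc[donor] -= give    # donor releases headroom
            new_alloc[needy] += give    # overloaded VNF absorbs it
            surplus[donor] -= give
            need -= give
            if surplus[donor] <= 0:
                del surplus[donor]
            if need <= 0:
                break
    return new_alloc

out = share_cpu({"vnf1": 4, "vnf2": 4}, {"vnf1": 6, "vnf2": 1})
```

In practice the predicted-usage inputs would come from the LSTM or Transformer forecaster described above; here they are fixed numbers.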

24 pages, 6557 KB  
Article
Ka-Band 16-Channel T/R Module Based on MMIC with Low Cost and High Integration
by Mengyun He, Qinghua Zeng, Xuesong Zhao, Song Wang, Yan Zhao, Pengfei Zhang, Gaoang Li and Xiao Liu
Electronics 2026, 15(6), 1185; https://doi.org/10.3390/electronics15061185 - 12 Mar 2026
Abstract
Based on monolithic microwave integrated circuit (MMIC) technology, this paper presents the design and implementation of a low-cost, highly integrated Ka-band sixteen-channel transmit/receive (T/R) module, specifically tailored to meet the application requirements of phased array antennas in airborne and spaceborne radar systems, satellite communications, and 5G/6G millimeter-wave networks. The proposed module employs an MMIC-based single-channel dual-chip discrete architecture, optimally integrating amplitude-phase multifunction chips and transmit-receive multifunction chips in terms of both fabrication process and performance characteristics, achieving a favorable balance between high performance and high-integration density. Using low-cost, low-temperature co-fired ceramic (LTCC) substrates, full-silver conductive paste, and a nickel–palladium–gold plating process, a novel “back-to-back” thin-slice packaging technique is presented to improve integration, lower manufacturing costs, and boost long-term reliability. Furthermore, the design incorporates glass insulators and a direct array interconnection scheme, which significantly minimizes transmission losses and reduces interface dimensions. The final module measures 70.3 mm × 26.2 mm × 10.9 mm and weighs only 34 g. Experimental results demonstrate a transmit output power of at least 23 dBm, a receive gain exceeding 26 dB, and a noise figure below 3.5 dB, achieving a 22.5–58% reduction in volume per channel while maintaining competitive RF performance. To improve testing effectiveness and guarantee data consistency, an automated radio frequency (RF) test system based on Python 3.11.5 was also developed. This work provides a practical technical approach for the engineering realization of Ka-band phased array systems.

20 pages, 3279 KB  
Article
Pore Structure Characteristics of Vegetated Concrete and Their Influence on Physical Properties
by Fazhi Huo, Xinjun Yan, Jiaqi Liu and Peiyuan Zhuang
Materials 2026, 19(5), 1042; https://doi.org/10.3390/ma19051042 - 9 Mar 2026
Abstract
In this study, CT scanning technology was combined with ImageJ 1.54r and Avizo 3D 2022 professional image analysis software to quantify porosity. The aim was to reveal the intrinsic correlation between the pore structure characteristics and the macroscopic properties of vegetated concrete. A combination of 3D reconstruction, fractal analysis, and multi-parameter regression modelling techniques was utilised to quantify the association between pore parameters and material properties. The mechanistic role of pore structure in regulating the strength–permeability trade-off relationship was elucidated. The results show that: (1) aggregate particle size and porosity are significantly negatively correlated with the compressive strength of vegetated concrete and strongly positively correlated with the water permeability coefficient, while their effects on the pH value of the material are negligible; (2) the porosity obtained by the image analysis method meets the design requirements of the target porosity, and the deviation between the computed 3D porosity from CT scanning and the 2D sliced porosity is less than 1%; the image analysis porosity is slightly lower than the measured value, a deviation within a reasonable range; (3) there is a robust positive correlation between the fractal dimension of the vegetated concrete structural surface and porosity: with increasing aggregate size, porosity gradually increases, pore network connectivity is significantly enhanced, and the fractal dimension increases correspondingly; (4) function fitting analysis confirms that the correlation between the connected porosity and the compressive strength and permeability coefficient is more significant than that of the cross-sectional porosity. Specifically, compressive strength is significantly negatively correlated with equivalent pore size and fractal dimension, and the water permeability coefficient is strongly positively correlated with these two parameters. This study can provide important theoretical support and engineering reference for the optimization of the mix proportion and performance control of vegetated concrete.
(This article belongs to the Section Construction and Building Materials)
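The 3D-versus-2D porosity comparison described above reduces to voxel counting on a segmented volume: volume porosity is the pore-voxel fraction, and per-slice porosities can be averaged for the 2D view. This numpy sketch uses a synthetic binary volume, not the paper's CT data.

```python
import numpy as np

def porosity_3d(volume):
    """Pore-voxel fraction of a segmented volume (True = pore)."""
    return volume.mean()

def porosity_2d_slices(volume):
    """Porosity of each 2D slice along the first axis."""
    return volume.reshape(volume.shape[0], -1).mean(axis=1)

# Synthetic 4-slice volume with 20% pore space
vol = np.zeros((4, 10, 10), dtype=bool)
vol[:, :2, :] = True
```

For a fixed segmentation the mean of slice porosities equals the 3D porosity exactly; the sub-1% deviation reported in the paper reflects differences between independently measured 2D and 3D reconstructions rather than this identity.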

25 pages, 2809 KB  
Article
Multi-Architecture Deep Learning for Early Alzheimer’s Detection in MRI: Slice- and Scan-Level Analysis
by Isabelle Bricaud and Giovanni Luca Masala
Int. J. Environ. Res. Public Health 2026, 23(3), 322; https://doi.org/10.3390/ijerph23030322 - 5 Mar 2026
Abstract
Alzheimer’s disease (AD), the most common form of dementia, is a progressive and irreversible neurodegenerative disorder. Structural MRI is widely used for diagnosis, revealing brain changes associated with AD. However, these alterations are often subtle and difficult to detect manually, particularly at early stages. Early intervention during prodromal stages, such as mild cognitive impairment (MCI), can help slow disease progression, highlighting the need for reliable automated methods. In this work, we introduce a dual-level evaluation framework comparing fifteen deep learning architectures, including convolutional neural networks (CNNs), Transformers, and hybrid models, for classifying AD, MCI, and cognitively normal (CN) subjects using the ADNI dataset. A central focus of our work is the impact of robust and standardized preprocessing pipelines, which we identified as a critical yet underexplored factor influencing model reliability. By evaluating performance at both the slice level and the scan level, we reveal that multi-slice aggregation affects architectures asymmetrically. By systematically optimizing preprocessing steps to reduce data variability and enhance feature consistency, we established preprocessing quality as an essential determinant of deep learning performance in neuroimaging. Experimental results show that CNNs and hybrid pre-trained models outperform Transformer-based models in both slice-level and scan-level classification. ConvNeXtV2-L achieved the best scan-level performance (91.07%), EfficientNetV2-L the highest slice-level accuracy (86.84%), and VGG19 balanced results (86.07%/88.52%). ConvNeXtV2-L and SwinV1-L exhibited scan-level improvements of 7.60% and 9.04%, respectively, while EfficientNetV2-L experienced degradation of 2.66%, demonstrating that architectural selection and aggregation strategy are interdependent factors. These findings suggest that carefully designed preprocessing not only improves classification accuracy but may also serve as a foundation for more reproducible and interpretable Alzheimer’s disease detection pipelines.
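One common way to realize the slice-to-scan aggregation discussed above is to average per-slice class probabilities and take the argmax. This is a generic illustration; the paper's exact aggregation rule may differ, and the class order and probabilities below are made up.

```python
def aggregate_scan(slice_probs):
    """Aggregate per-slice class probabilities to one scan-level label.

    slice_probs: list of probability vectors, e.g. ordered [CN, MCI, AD].
    Returns (predicted class index, mean probability vector).
    """
    n = len(slice_probs)
    n_classes = len(slice_probs[0])
    mean = [sum(p[c] for p in slice_probs) / n for c in range(n_classes)]
    return max(range(n_classes), key=mean.__getitem__), mean

# Three slices of one scan: the third class wins on average
label, mean_probs = aggregate_scan(
    [[0.6, 0.3, 0.1], [0.2, 0.5, 0.3], [0.1, 0.2, 0.7]]
)
```

Mean-probability aggregation rewards architectures whose per-slice confidences are well calibrated, which is one plausible reason aggregation helps some models (e.g. scan-level gains) and hurts others.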

30 pages, 1397 KB  
Article
GAN-Based Cross-Modality Brain MRI Synthesis: Paired Versus Unpaired Training and Comparison with Diffusion and Transformer Models
by Behnam Kiani Kalejahi, Sebelan Danishvar and Mohammad Javad Rajabi
Biomimetics 2026, 11(3), 175; https://doi.org/10.3390/biomimetics11030175 - 2 Mar 2026
Abstract
Incomplete or faulty MRI sequences are common in clinical practice and can impair AI-based analyses that rely on complete multi-contrast data. The relative effectiveness of classical generative adversarial networks (GANs) versus modern diffusion and transformer-based models for clinically usable MRI synthesis remains unclear. This study evaluates cross-modality MRI synthesis using the BraTS 2019 brain tumour dataset, focusing on T1-to-T2 translation. We assess paired and unpaired CycleGAN models and compare them with two stronger but computationally intensive baselines, a conditional denoising diffusion probabilistic model (DDPM) and a transformer-enhanced GAN, using identical data splits and preprocessing pipelines. Inter-modality correlation was evaluated to estimate the achievable similarity between modalities. Conceptually, modality synthesis may be viewed as a representation-learning approach that compensates for missing imaging information by reconstructing clinically relevant features from available contrasts. Paired CycleGAN achieved correlations of r ≈ 0.92–0.93 and SSIM of 0.90–0.92, approaching the natural T1–T2 correlation (r ≈ 0.95) while maintaining very fast inference (<50 ms/slice). Unpaired CycleGAN achieved r ≈ 0.74–0.78 and SSIM of 0.82–0.85, producing clinically interpretable reconstructions without voxel-level supervision. DDPM achieved the highest fidelity (SSIM 0.93–0.95, r ≈ 0.94) but required substantially greater computational resources, while transformer-enhanced GAN performance was intermediate. Qualitative analysis showed that CycleGAN and DDPM best preserved tumour and tissue boundaries, whereas unpaired CycleGAN occasionally over-smoothed subtle lesions. These findings highlight the trade-off between fidelity and efficiency in cross-modality MRI synthesis, suggesting paired CycleGAN for time-sensitive clinical workflows and diffusion models as a computationally expensive accuracy upper bound.

22 pages, 10242 KB  
Article
Cross-Modality Whole-Heart MRI Reconstruction with Deep Motion Correction and Super-Resolution
by Jinwei Dong, Wenhao Ke, Wangbin Ding, Liqin Huang and Mingjing Yang
Sensors 2026, 26(5), 1565; https://doi.org/10.3390/s26051565 - 2 Mar 2026
Abstract
Magnetic resonance imaging (MRI) inherently suffers from motion artifacts and inter-slice misalignment, primarily due to sequential slice acquisition and the prolonged scanning time required for dynamic cardiac motion. These acquisition-induced inconsistencies often lead to anatomically implausible representations of cardiac structures, impairing subsequent clinical analyses such as 3D reconstruction and regional functional assessment. On the other hand, acquiring high-resolution MRI demands extended scan durations that increase patient burden and potential health risks. To address this challenge, we propose a deep motion correction and super-resolution whole-heart reconstruction (DeepWHR) framework. It learns cardiac structure prior knowledge from computed tomography (CT) data and transfers it to reconstruct cardiac structure from conventional misaligned, large-slice-thickness MRI images. Specifically, DeepWHR utilizes CT anatomy data to train a deep motion correction model that enables the network to capture structurally coherent and anatomically consistent representations, while MRI fine-tuning preserves modality-specific spatial characteristics, ensuring that the reconstructed results retain the intrinsic MRI data distribution. Furthermore, DeepWHR introduces an implicit neural representation module, which models continuous spatial fields, enabling multi-scale super-resolution structure reconstruction. Experiments on the CARE2024 WHS dataset validate that our method not only restores the spatial coherence of MRI-derived anatomical structures but also generates high-fidelity label representations suitable for downstream cardiac applications. This study demonstrates that DeepWHR transforms sparse, misaligned 2D label stacks into anatomically coherent, high-resolution 3D models, enhancing their reliability for clinical applications.
(This article belongs to the Special Issue Emerging MRI Techniques for Enhanced Disease Diagnosis and Monitoring)

15 pages, 1404 KB  
Article
A Deep Learning-Based Decision Support System for Cholelithiasis in MRI Data
by Ebru Hasbay, Caglar Cengizler, Mahmut Ucar, Nagihan Durgun, Hayriye Ulkucan Disli and Deniz Bolat
J. Clin. Med. 2026, 15(5), 1891; https://doi.org/10.3390/jcm15051891 - 2 Mar 2026
Abstract
Background: Cholelithiasis can lead to significant complications if not diagnosed and treated promptly. Recent advances in deep learning and the improved ability of computer systems to detect clinically significant textural and morphological patterns in magnetic resonance imaging (MRI) can help reduce the time and resources required for the radiological evaluation of the gallbladder and cholelithiasis. Objective: To detect cholelithiasis, a support system with a graphical user interface for magnetic resonance (MR) images of the gallbladder was implemented to reduce the manual effort and time required to identify gallstones. Method: A commonly used deep learning model for pixel-level mask generation and instance segmentation, the Mask Region-Based Convolutional Neural Network (Mask R-CNN), was modified, trained, and evaluated to provide a robust pipeline for automated analysis. The primary aim was to automatically locate and label the gallbladder in T2-weighted axial MR images to detect gallstones and highlight the visual characteristics of the target region, thereby supporting radiologists. All automation was designed to operate on a single optimal slice instead of the entire volume. While this approach limits generalisability, it offers a practical starting point for method development. This setup reflects a feasibility-oriented design rather than a comprehensive diagnostic capability. The dataset included 788 axial MR images from different patients. Each image was labeled and segmented by an experienced radiologist to train and test the models at the image level. Results: The proposed model with the squeeze-and-excitation (SE) modification improved classification accuracy, and at the image level, stone detection improved in terms of accuracy, precision, and specificity, although recall and F1 scores slightly decreased. Conclusions: The results show that the modified Mask R-CNN model can detect gallstones with up to 0.89 accuracy, supporting the clinical applicability of the proposed method.
(This article belongs to the Topic Machine Learning and Deep Learning in Medical Imaging)
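The squeeze-and-excitation (SE) modification mentioned in this abstract follows a standard pattern: global average pooling per channel, a two-layer bottleneck, then channel-wise rescaling by learned gates in (0, 1). This numpy sketch uses random placeholder weights rather than the trained model's parameters.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def se_block(feat, w1, w2):
    """Squeeze-and-excitation reweighting of a feature map.

    feat: (C, H, W) feature map; w1: (C//r, C) and w2: (C, C//r)
    are the bottleneck weights (reduction ratio r).
    """
    z = feat.mean(axis=(1, 2))               # squeeze: per-channel statistics (C,)
    s = sigmoid(w2 @ np.maximum(w1 @ z, 0))  # excitation: gates in (0, 1)
    return feat * s[:, None, None]           # channel-wise rescaling

rng = np.random.default_rng(1)
feat = rng.normal(size=(8, 5, 5))
out = se_block(feat, rng.normal(size=(2, 8)), rng.normal(size=(8, 2)))
```

Because the gates are bounded in (0, 1), the block can only attenuate channels, which is how it suppresses uninformative feature maps in the modified Mask R-CNN backbone.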

35 pages, 633 KB  
Article
Bi-Objective Optimization for Scalable Resource Scheduling in Dense IoT Deployments via 5G Network Slicing Using NSGA-II
by Francesco Nucci and Gabriele Papadia
Telecom 2026, 7(2), 24; https://doi.org/10.3390/telecom7020024 - 2 Mar 2026
Abstract
The proliferation of Internet of Things (IoT) devices demands efficient resource management in fifth-generation (5G) networks, particularly through network slicing mechanisms supporting massive machine-type communications (mMTCs). This paper addresses IoT connectivity in 5G network slicing by formulating a bi-objective optimization problem that balances operational costs with quality-of-service (QoS) requirements across heterogeneous 5G network slices. The proposed approach employs a tailored Non-dominated Sorting Genetic Algorithm II (NSGA-II) incorporating domain-specific constraints, including device priorities, slicing isolation requirements, radio resource limitations, and battery capacity. Through extensive simulations on scenarios with up to 5000 devices, our method generates diverse Pareto-optimal solutions achieving hypervolume improvements of 8–13% over multi-objective DRL, 15–28% over single-objective DRL baselines, and 22–41% over heuristic approaches, while maintaining computational scalability suitable for real-time network management (sub-2 min execution). Validation with real-world traffic traces from operational deployments confirms algorithm robustness under realistic burstiness and temporal patterns, with 7% performance degradation versus synthetic traffic, within expected simulation–reality gaps. This work provides a practical framework for IoT resource scheduling in current 5G and future Beyond-5G (B5G) telecommunications infrastructures, validated in scenarios of up to 5000 devices.
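At the core of NSGA-II is Pareto dominance between candidate allocations. A minimal sketch for two minimization objectives, say cost and QoS violation, with illustrative solution values:

```python
def dominates(a, b):
    """True if a Pareto-dominates b (minimization on every objective)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Return the non-dominated subset of a list of objective tuples."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# Hypothetical (cost, QoS-violation) values for five slice configurations
sols = [(3, 9), (5, 4), (7, 2), (6, 5), (8, 8)]
front = pareto_front(sols)
```

NSGA-II repeatedly applies this non-dominated sorting (plus crowding-distance ranking, omitted here) to drive the population toward a diverse Pareto front like the one the paper evaluates by hypervolume.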

39 pages, 13134 KB  
Article
Three-Dimensional Digital Model Reconstruction and Seepage Characteristic Analysis of Porous Polyimide
by Zhaoliang Dou, Shuang Li, Wenbin Chen, Ye Yang, Hongjuan Yan, Lina Si, Qianghua Chen, Kang An, Hong Li and Fengbin Liu
Polymers 2026, 18(5), 591; https://doi.org/10.3390/polym18050591 - 27 Feb 2026
Abstract
This study focuses on porous polyimide (PPI) lubricating materials for high-speed aerospace bearings. Based on their real microstructure, three-dimensional digital model reconstruction and mesoscale seepage characteristics were investigated. First, a sequence of two-dimensional slice images of PPI was obtained using micro-focus X-ray computed tomography (CT). Through image filtering, threshold segmentation, and three-dimensional reconstruction, a highly faithful digital model of the pore structure was constructed, and a quantified pore-network model was further extracted. Second, a multiple-relaxation-time lattice Boltzmann model based on the D3Q27 discrete scheme was established, and its accuracy and stability in complex boundaries and pressure-driven flows were verified using classic benchmark cases. Subsequently, the validated numerical model was applied to the reconstructed PPI pore structure to simulate and systematically analyze the single-phase seepage behavior of lubricating oil. The results show that the lubricant seepage exhibits a strong “preferential flow path” effect, with most of the flow transported through a small number of large-size throats. A clear quantitative relationship exists between the microscopic flow field structure—including velocity distribution, flow paths, and pressure gradient—and the pore-topology features, such as throat-size distribution, connectivity, and tortuosity. This verifies the mesoscale mechanism that “structure governs flow.” The complete technical chain established in this work—“real-structure reconstruction–numerical model validation–seepage mechanism analysis”—provides a reliable theoretical and numerical tool for gaining deeper insight into the lubricant transport behavior in porous polyimide and offers guidance for the microstructural design and optimization of this material.
(This article belongs to the Section Polymer Analysis and Characterization)

18 pages, 1234 KB  
Article
STFF-CANet Diagnosis Model of Aero-Engine Surge Based on Spatio-Temporal Feature Fusion
by Chunyan Hu, Yafeng Shen, Qingwen Zeng, Gang Xu, Jiaxian Sun and Keqiang Miao
Aerospace 2026, 13(3), 212; https://doi.org/10.3390/aerospace13030212 - 27 Feb 2026
Abstract
Aero-engine surge diagnosis is a key technology in engine health management, and its diagnostic accuracy is of great significance for ensuring operational safety. Traditional threshold-based diagnostic methods are significantly affected by working conditions, which makes it difficult to achieve full working-condition coverage. Moreover, due to issues such as varying feature thresholds across conditions, weak signal characteristics, and low identifiability, diagnostic accuracy remains limited. To address these challenges, this paper proposes STFF-CANet, a Spatio-Temporal Feature Fusion Cross-Attentional Network for aero-engine surge diagnosis. The model first employs a Convolutional Neural Network (CNN) to extract spatial features from the frequency domain of dynamic signals via the Fast Fourier Transform (FFT). Simultaneously, a Bidirectional Long Short-Term Memory (BiLSTM) network is used to capture temporal features from signals optimized by Variational Mode Decomposition (VMD). A cross-attention mechanism is further introduced to achieve deep fusion of spatio-temporal features, thereby enhancing the capability to identify weak fault characteristics. In addition, a sliding-window slicing method is used to expand the sample size of the small-sample surge fault data. This ensures both informational continuity between slices and statistical stability of features, effectively mitigating the difficulty of diagnosing early and weak surge characteristics under small-sample conditions. Experimental results demonstrate that the model achieves an F1-score, Recall, Precision, and Accuracy of 97.96%, 97.52%, 98.43%, and 99.01%, respectively, in surge fault classification. These outcomes meet the practical requirements for aero-engine surge diagnosis and provide an effective solution for early fault warning in complex industrial equipment.
(This article belongs to the Section Aeronautics)
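The overlapping sliding-window slicing used above for sample expansion can be sketched as follows. This is a minimal illustration on a synthetic 1-D signal; the window and step sizes are hypothetical, not the paper's actual parameters:

```python
import numpy as np

def sliding_window_slices(signal, window, step):
    """Cut a 1-D signal into overlapping fixed-length slices.

    An overlap of (window - step) samples between adjacent slices
    preserves informational continuity, which helps when augmenting
    small fault datasets.
    """
    n = (len(signal) - window) // step + 1
    return np.stack([signal[i * step : i * step + window] for i in range(n)])

# A 1000-sample signal cut into 256-sample windows with 50% overlap
sig = np.sin(np.linspace(0, 20 * np.pi, 1000))
slices = sliding_window_slices(sig, window=256, step=128)
print(slices.shape)  # (6, 256)
```

With a 50% overlap, the second half of each slice reappears as the first half of the next, so each window adds only `step` new samples while multiplying the number of training examples.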
17 pages, 876 KB  
Article
Transformer-Enhanced Localization via Adaptive PDP Representation Under Dynamic Bandwidths
by Lei Cao, Tianqi Xiang, Weiyan Chen, Yicheng Wang, Yuehong Gao and Xin Zhang
Sensors 2026, 26(5), 1486; https://doi.org/10.3390/s26051486 - 27 Feb 2026
Viewed by 223
Abstract
Accurate wireless positioning remains challenging under the dynamic bandwidth conditions and outdoor multipath environments typical of Internet of Things (IoT) and autonomous aerial vehicle (AAV) applications. Conventional learning-based localization methods rely on bandwidth-specific channel state information (CSI) representations, which renders trained models inapplicable or poorly adaptive when the signal bandwidth differs from that used during training. To overcome this limitation, a unified, neural network-oriented framework is proposed that constructs bandwidth-adaptive power delay profile (PDP) representations for learning-based models. A PDP preprocessing scheme based on adaptive zero-padding and an oversampled IFFT of heterogeneous CSI is introduced to generate dimension-consistent, delay-aligned neural network inputs. To enhance model robustness, a sub-band-sliced PDP representation is developed, where each bandwidth is divided into equal-width sub-bands whose PDPs are independently processed and organized as Transformer tokens. A dedicated Transformer is designed to estimate location from the PDPs of multiple access points. Simulation results demonstrate that the proposed preprocessing-PDP-plus-Transformer framework achieves superior cross-bandwidth generalization and localization accuracy compared with analytical and learning-based baselines. Full article
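The zero-padding-plus-IFFT preprocessing described above can be sketched roughly as follows. This is a simplified stand-in under stated assumptions: CSI is given as a complex per-subcarrier vector, and `n_fft` and the flat unit-gain CSI values are illustrative, not taken from the paper:

```python
import numpy as np

def csi_to_pdp(csi, n_fft):
    """Map frequency-domain CSI of any bandwidth to a fixed-length PDP.

    Zero-padding every CSI vector to the same n_fft before the IFFT
    (i.e., oversampling the delay axis) yields dimension-consistent,
    delay-aligned inputs regardless of the original subcarrier count.
    """
    padded = np.zeros(n_fft, dtype=complex)
    padded[: len(csi)] = csi
    h = np.fft.ifft(padded)   # delay-domain channel response
    return np.abs(h) ** 2     # power delay profile

# CSI vectors measured at different bandwidths map to the same input size
pdp_narrow = csi_to_pdp(np.ones(64, dtype=complex), n_fft=1024)
pdp_wide = csi_to_pdp(np.ones(256, dtype=complex), n_fft=1024)
print(pdp_narrow.shape, pdp_wide.shape)  # (1024,) (1024,)
```

Because both outputs share the same delay grid, a single network can consume PDPs from heterogeneous bandwidths without retraining on each one.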
30 pages, 146632 KB  
Article
Form Meets Flow: Linking Historic Corridor Morphology to Multi-Scale Accessibility and Pedestrian Interface on Beishan Street, West Lake
by Dongxuan Li, Jin Yan, Shengbei Zhou, Yingning Shen, Hongjun Peng, Zhuoyuan Du, Xinyue Gao, Yankui Yuan, Ming Du and Jun Wu
Buildings 2026, 16(5), 889; https://doi.org/10.3390/buildings16050889 - 24 Feb 2026
Viewed by 272
Abstract
Historic linear corridors in living-heritage settings concentrate identity, everyday mobility, and visitor experience. Balancing authenticity, adaptability, and publicness therefore benefits from evidence that jointly characterizes long-term physical change, network accessibility, and eye-level interface conditions. Existing assessments often focus on façades or single time slices, leaving limited evidence that relates decades of built-fabric reconfiguration (changes in building footprints, street edges, and open-space fragmentation) to multi-scale accessibility and pedestrian-facing qualities. We propose an integrated and interpretable workflow for the Beishan Street corridor in the West Lake World Heritage core (Hangzhou) over 1929–2024. Scale-sensitive morphological metrics, multi-radius network measures (integration and centrality), and street-view semantic segmentation are aligned at corridor-segment resolution and examined together with segment-level functional intensity derived from POIs using transparent linear models. The results indicate a long-term shift from a lakeshore-led to a road-led spatial logic, followed by post-2000 stabilization near saturation. Average integration increases, while the high-integration tail becomes thinner. In connector-removal scenarios, the eastern segment shows a relative accessibility decline, and a central hinge node emerges as a vulnerability hotspot (bottleneck) where through-movement concentrates. Eye-level profiles differ by segment: the west exhibits maximal canopy and lower sky visibility, the center shows stronger continuous walls around compounds with intermittent forecourt openings, and the east is characterized by compact residential heritage frontage with low vegetation. Segment-level associations suggest that address and wayfinding density tends to co-occur with clearer frontages, wider sky cones, and stronger tree cover. Transportation-related and access/passage facilities tend to co-occur with higher ground-plane legibility, measured as wider and more continuous road and sidewalk surfaces. Medical and government clusters tend to co-occur with lower sky openness. Recommended actions include the following: (1) mesh-aware protection of key connectors and the hinge, (2) segment-specific targets for façade share and ground cues with planned punctuations, (3) tailored interface standards for institutional clusters, (4) scalable address and wayfinding systems, and (5) event staging that preserves effective roadway and sidewalk capacity. Full article
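Multi-radius integration of the kind used above can be illustrated on a toy segment graph. This is a simplified stand-in for the space-syntax measure (inverse mean topological depth within a radius); the adjacency and radius are hypothetical, not the corridor's actual network:

```python
from collections import deque

def integration(adj, node, radius):
    """Local integration of `node`: the inverse of its mean topological
    depth to all nodes reachable within `radius` steps (a simplified
    stand-in for multi-radius space-syntax integration)."""
    depth = {node: 0}
    queue = deque([node])
    while queue:  # breadth-first search bounded by `radius`
        u = queue.popleft()
        if depth[u] == radius:
            continue
        for v in adj[u]:
            if v not in depth:
                depth[v] = depth[u] + 1
                queue.append(v)
    others = [d for d in depth.values() if d > 0]
    return len(others) / sum(others) if others else 0.0

# Toy corridor: a chain of five segments with one side connector at node 2
adj = {0: [1], 1: [0, 2], 2: [1, 3, 5], 3: [2, 4], 4: [3], 5: [2]}
scores = {n: integration(adj, n, radius=2) for n in adj}
print(max(scores, key=scores.get))  # 2 — the hinge segment is most integrated
```

Sweeping `radius` from small to large values reproduces the multi-radius analysis, and removing a connector from `adj` before rescoring mimics the connector-removal scenarios described above.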
(This article belongs to the Special Issue Advanced Study on Urban Environment by Big Data Analytics)
27 pages, 7733 KB  
Article
Deep Fusion of Kinematic Features and Task-Aware Partition Planning for Mold Surface Robotic Polishing
by Miao Yu, Xu Liu, Baowen He and Zhen Pan
Machines 2026, 14(2), 243; https://doi.org/10.3390/machines14020243 - 21 Feb 2026
Viewed by 245
Abstract
Robotic polishing in CAD-free industrial settings relies on point-cloud data, yet noise and non-uniform sampling often compromise kinematic feasibility and finishing quality. This paper proposes an adaptive motion planning approach with explicit kinematic constraints. A downsampling–clustering–mapping-back strategy is first employed for rapid workpiece extraction. Subsequently, an improved supervoxel representation and attributed adjacency graph (AAG) are developed, utilizing a multi-objective energy formulation to partition sub-regions that satisfy geometric consistency and kinematic reachability. To handle point-cloud noise, a lightweight neural network predicts scanning directions and step-distance coefficients, followed by thick-slice serpentine path generation. Finally, closed-loop verification ensures safety through inverse-kinematics and safety-margin checks. Experimental results demonstrate consistent sub-micron finishing quality, with Ra ≈ 0.6 μm on complex mold surfaces. Moreover, the proposed pipeline achieves a 7.5× preprocessing speedup, completing workpiece extraction in 1.14 s for a 237,640-point scan, and improves kinematic feasibility to 100% IK success while reducing the mean TCP normal deviation by ~76% compared with a PCA-based baseline. Full article
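The serpentine path generation mentioned above can be sketched for a flat rectangular patch. Thick-slice handling and real surface geometry are omitted, and all bounds and step values are illustrative assumptions:

```python
def serpentine_path(x_min, x_max, y_min, y_max, step):
    """Generate a boustrophedon (serpentine) tool path over a
    rectangular patch: parallel passes along x, with the travel
    direction reversed on alternate rows spaced `step` apart."""
    n_rows = int(round((y_max - y_min) / step)) + 1
    path = []
    for i in range(n_rows):
        y = y_min + i * step
        # Reverse direction on odd rows so the tool never lifts between passes
        xs = (x_min, x_max) if i % 2 == 0 else (x_max, x_min)
        path.extend((x, y) for x in xs)
    return path

# Illustrative 10 mm x 4 mm patch with 1 mm row spacing
waypoints = serpentine_path(0.0, 10.0, 0.0, 4.0, step=1.0)
print(len(waypoints))  # 10
print(waypoints[:4])   # [(0.0, 0.0), (10.0, 0.0), (10.0, 1.0), (0.0, 1.0)]
```

In the pipeline described above, each waypoint would additionally carry a tool orientation and pass through the inverse-kinematics and safety-margin checks before execution.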
(This article belongs to the Section Advanced Manufacturing)