Search Results (39,561)

Search Parameters:
Keywords = feature-based methods

27 pages, 7263 KB  
Article
LEViM-Net: A Lightweight EfficientViM Network for Earthquake Building Damage Assessment
by Qing Ma, Dongpu Wu, Yichen Zhang, Jiquan Zhang, Jinyuan Xu and Yechi Yao
Remote Sens. 2026, 18(10), 1592; https://doi.org/10.3390/rs18101592 (registering DOI) - 15 May 2026
Abstract
Building damage and collapse are the main sources of serious casualties and financial losses during earthquakes, which are among the most destructive natural disasters that endanger human life and property. Therefore, quick and precise post-earthquake building damage assessment is essential for risk assessment and emergency action. Although deep learning-based techniques enable automated damage identification, convolutional neural networks (CNNs) primarily concentrate on local features and frequently ignore global contextual information within and across buildings. Transformer-based approaches, on the other hand, are good at capturing global dependencies, but their large memory and processing costs restrict their usefulness. As a result, existing networks still struggle to achieve an effective balance between accuracy and efficiency. To address this issue, this study proposes a lightweight and efficient network for post-earthquake building damage assessment. Specifically, we develop a two-stage method based on EfficientViM with an encoder–decoder architecture. In the encoder, Mamba is introduced to extract multi-scale change features with long-range dependencies, leveraging the state space model to preserve global modeling capability while significantly reducing computational complexity. In the decoder, two lightweight modules are designed to further enhance discriminative capability and computational efficiency. The network outputs building localization and pixel-level building damage maps. Experiments were conducted on four earthquake events from the BRIGHT dataset using a cross-event rotation strategy in which three events are used for training and one for testing. The results demonstrate that LEViM-Net requires only 30.94 M parameters and 27.10 G FLOPs. In addition, for the Türkiye earthquake event, the proposed method achieves an F1 score of 80.49%, an overall accuracy (OA) of 88.17%, and a mean intersection over union (mIoU) of 49.73%. 
The proposed model enables efficient remote-sensing-based mapping of macroscopic and image-visible building damage, providing timely support for early-stage emergency response. Full article
(This article belongs to the Special Issue Advances in AI-Driven Remote Sensing for Geohazard Perception)
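The F1 score, overall accuracy (OA), and mean IoU figures quoted above are standard segmentation metrics. As a minimal sketch of how they fall out of a per-class confusion matrix (the matrix values below are illustrative, not data from the paper):

```python
import numpy as np

def segmentation_metrics(cm):
    """Derive OA, per-class F1, and mean IoU from a confusion matrix.

    cm[i, j] = number of pixels of true class i predicted as class j.
    """
    cm = np.asarray(cm, dtype=float)
    tp = np.diag(cm)
    fp = cm.sum(axis=0) - tp          # predicted as class k but wrong
    fn = cm.sum(axis=1) - tp          # true class k but missed
    oa = tp.sum() / cm.sum()          # overall accuracy
    f1 = 2 * tp / (2 * tp + fp + fn)  # per-class F1
    iou = tp / (tp + fp + fn)         # per-class intersection-over-union
    return oa, f1, iou.mean()

# Illustrative 3-class matrix (e.g. background / intact / damaged)
cm = [[90, 5, 5],
      [10, 80, 10],
      [5, 15, 80]]
oa, f1, miou = segmentation_metrics(cm)
```

Note that mIoU penalizes false positives and false negatives jointly per class, which is why it typically sits well below OA on imbalanced damage maps, as in the figures above.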
16 pages, 1433 KB  
Review
A Clinical Decision Support System for Post-Surgical Cardiovascular Remote Monitoring
by Charalampia Pylarinou, Francesk Mulita, Efstratios Koletsis, Vasileios Leivaditis, Elias Liolis, Lefteris Gortzis and Dimosthenis Mavrilas
Clin. Pract. 2026, 16(5), 93; https://doi.org/10.3390/clinpract16050093 (registering DOI) - 15 May 2026
Abstract
Background: Post-surgical cardiovascular monitoring places a heavy information burden on clinical teams, requiring the rapid synthesis of patient history, intraoperative data, monitoring streams, and surgical outcome evidence. Existing clinical decision support systems handle this integration poorly, and most offer little visibility into their reasoning. We present a Retrieval-Augmented Generation (RAG) architecture designed specifically for this domain, with a focus on evidence traceability and practical workflow integration. Methods: We describe a three-layer RAG architecture comprising a retrieval layer that creates 768-dimensional representations of clinical scenarios; an augmentation layer using a stacking ensemble (Random Forest and XGBoost base learners with a logistic-regression meta-learner) to integrate patient-specific data with retrieved evidence and produce calibrated probability estimates; and a generative layer using a fine-tuned BERT classifier together with Gemini 2.5 Pro to synthesise actionable clinical recommendations. Components were prototyped on publicly available, de-identified data from MIMIC-III and the MIMIC-III-Ext-PPG benchmark to verify pipeline integrity. Proposed Evaluation Framework: This paper presents a system architecture rather than a clinically validated implementation. We outline a structured evaluation framework to assess the technical performance and clinical applicability of the RAG architecture, encompassing the technical validation of system components, expert assessment of clinical workflow integration potential, and analysis of interpretability features essential for healthcare deployment. Specific technical targets include retrieval precision >90% for relevant evidence, query response time <3 s, and a clinical appropriateness rating of >85% from expert review. 
Conclusions: We describe a RAG architecture for post-surgical cardiovascular monitoring in which every recommendation is linked to retrievable source documents, making the reasoning visible and challengeable. A structured evaluation framework is proposed to guide the system towards clinical validation. Full article
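The retrieval layer described above ranks evidence by similarity in a 768-dimensional embedding space. A minimal cosine-similarity sketch of that idea, assuming dense embeddings are already available (the `top_k_retrieve` helper and the toy vectors are illustrative, not the authors' code):

```python
import numpy as np

def top_k_retrieve(query_vec, doc_vecs, k=3):
    """Rank evidence documents by cosine similarity to a query embedding.

    query_vec: (d,) embedding of the clinical scenario.
    doc_vecs:  (n, d) embeddings of candidate evidence documents.
    Returns indices of the k most similar documents, best first.
    """
    q = query_vec / np.linalg.norm(query_vec)
    d = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    sims = d @ q                      # cosine similarity per document
    return np.argsort(-sims)[:k]

# Toy 4-dimensional stand-ins for the 768-dimensional embeddings
docs = np.array([
    [1.0, 0.0, 0.0, 0.0],
    [0.0, 1.0, 0.0, 0.0],
    [0.6, 0.8, 0.0, 0.0],
    [0.0, 0.0, 1.0, 0.0],
])
query = np.array([0.59, 0.81, 0.0, 0.0])   # closest in angle to docs[2]
ranked = top_k_retrieve(query, docs, k=2)
```

Returning document indices rather than generated text is what makes each recommendation traceable to retrievable sources, the property the architecture emphasizes.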
16 pages, 379 KB  
Article
Validation and Development of Claims-Based Algorithms for Identifying Thyroid Eye Disease Using the IRIS Registry-Komodo Linked Database
by Junjie Ma, Wendy W. Lee, Maurice Alan Brookhart, Madhura A. Tamhankar, Juan Ayala-Haedo, Fang He and Haridarshan Patel
J. Clin. Med. 2026, 15(10), 3836; https://doi.org/10.3390/jcm15103836 (registering DOI) - 15 May 2026
Abstract
Objectives: To validate claims-based algorithms for identifying thyroid eye disease (TED) cases and assess whether machine learning can improve case identification in a large, linked real-world dataset. Methods: Using a large, linked database from Komodo Health® and Academy IRIS® Registry, we evaluated six rule-based algorithms incorporating Graves’ disease (GD), eye symptoms and signs. The IRIS Registry’s curated data, based on confirmed TED diagnoses from medical notes, served as the reference standard. Additionally, we developed supervised machine learning models using demographic, diagnostic, procedural, and medication data. Feature selection was performed using recursive feature elimination to rank predictive codes and construct a simplified, interpretable model. Cross-validation was used to assess model performance and compare performance with the rule-based algorithms. Results: The rule-based algorithms demonstrated a trade-off between sensitivity and specificity, with some achieving high specificity but limited sensitivity. Algorithm 1 had the highest sensitivity (48.7%) but lower specificity (59.9%) and PPV (75.8%). Algorithms 2–5 demonstrated higher specificity (87.2–93.5%) but lower sensitivity (17.8–27.0%). Algorithm 6 improved sensitivity (33.4%) compared to Algorithms 2–5 while maintaining high specificity (86.8%) and a strong PPV (86.7%). Machine learning models demonstrated similar trade-offs. One model achieved improved specificity (77.2%) with sensitivity of 49.3%, outperforming Algorithm 1 in specificity while matching its sensitivity. Another model maximized specificity (91.7%) and PPV (89.8%) at a reduced sensitivity of 28.5%. These results highlight the flexibility of machine learning models in adjusting performance to address different research objectives. Conclusions: This study evaluated existing rule-based algorithms for identifying TED cases in claims data, revealing trade-offs between sensitivity and specificity. 
Machine learning models provide additional flexibility, allowing performance to be tailored to specific research use cases. While no single method consistently outperformed others across all metrics, both rule-based and machine learning approaches demonstrated value in improving TED case identification using real-world data sources. Full article
(This article belongs to the Section Ophthalmology)
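Sensitivity, specificity, and PPV, the quantities traded off above, all derive from a 2×2 confusion table of algorithm flags against the reference standard. A minimal sketch with illustrative counts (not figures from the study):

```python
def screening_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, and PPV from a 2x2 confusion table."""
    sensitivity = tp / (tp + fn)   # recall on true cases
    specificity = tn / (tn + fp)   # recall on non-cases
    ppv = tp / (tp + fp)           # precision of a positive flag
    return sensitivity, specificity, ppv

# Illustrative counts for a high-specificity, low-sensitivity rule
sens, spec, ppv = screening_metrics(tp=40, fp=10, fn=60, tn=90)
```

Because PPV depends on case prevalence in the cohort as well as on the algorithm, the same rule can show very different PPV across databases even when sensitivity and specificity are unchanged.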
29 pages, 11107 KB  
Article
3D Perception-Based Adaptive Point Cloud Simplification and Slicing for Soil Compaction Pit Volume Calculation
by Chuang Han, Jiayu Wei, Tao Shen and Chengli Guo
Sensors 2026, 26(10), 3150; https://doi.org/10.3390/s26103150 (registering DOI) - 15 May 2026
Abstract
In the field of subgrade compaction quality assessment, accurate volume measurement of excavated pits is hindered by non-uniform point cloud distribution, environmental noise interference, and complex irregular boundary features. To address these challenges, this paper proposes a robust volume detection framework that integrates adaptive point cloud refinement and morphological discrimination. First, a pose normalization method employing RANSAC plane fitting and rigid body transformation corrects the spatial orientation of the raw point clouds. To balance data redundancy removal with feature preservation, a gradient adaptive simplification strategy based on local density feedback and K-nearest neighbor estimation is developed. Subsequently, a cross-sectional area calculation model utilizing piecewise-cubic polynomial fitting is proposed to mitigate boundary noise and accurately reconstruct irregular contours. Furthermore, a dynamic outlier removal mechanism based on the Median Absolute Deviation (MAD) and sliding windows is introduced to eliminate non-physical geometric fluctuations. Finally, the total volume is aggregated using a hybrid strategy of Simpson’s rule and a frustum compensation operator. Experimental results on simulated pits with typical topological defects demonstrate that the proposed algorithm outperforms traditional methods, achieving an average relative volume error of less than 0.8%. This approach significantly improves the robustness and precision of sensor-based automated subgrade compaction quality measurement. Full article
(This article belongs to the Section Industrial Sensors)
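Two of the building blocks named above, MAD-based outlier rejection and Simpson's-rule aggregation of slice areas, can be sketched in a few lines. This is a simplified illustration under the assumption of equally spaced slices, not the authors' implementation; the areas and spacing are made up:

```python
import numpy as np

def mad_filter(values, thresh=3.0):
    """Keep values within `thresh` robust deviations of the median.

    Uses the Median Absolute Deviation scaled to approximate sigma.
    """
    v = np.asarray(values, dtype=float)
    med = np.median(v)
    sigma = 1.4826 * np.median(np.abs(v - med))  # MAD -> sigma (Gaussian)
    if sigma == 0:
        return v
    return v[np.abs(v - med) <= thresh * sigma]

def simpson_volume(areas, dz):
    """Integrate equally spaced slice areas with composite Simpson's rule.

    Requires an odd number of slices (an even number of intervals).
    """
    a = np.asarray(areas, dtype=float)
    if len(a) % 2 == 0:
        raise ValueError("composite Simpson needs an odd slice count")
    return dz / 3 * (a[0] + a[-1] + 4 * a[1:-1:2].sum() + 2 * a[2:-2:2].sum())

# A pit whose cross-sectional area shrinks linearly with depth
areas = np.array([1.0, 0.75, 0.5, 0.25, 0.0])  # m^2 per slice
vol = simpson_volume(areas, dz=0.1)            # slices 0.1 m apart
```

For a linear area profile Simpson's rule is exact, so the toy pit integrates to 0.2 m³; the paper's frustum compensation handles the end slices where this equal-spacing assumption breaks down.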
24 pages, 5438 KB  
Article
An Improved DeepLabV3+-Based Method for Crop Row Segmentation and Navigation Line Extraction in Agricultural Fields
by Letian Wu, Yongzhi Cui, Huifeng Shi, Xiaoli Sun, Jiayan Yang, Xinwei Cao, Ping Zou and Ya Liu
Sensors 2026, 26(10), 3142; https://doi.org/10.3390/s26103142 - 15 May 2026
Abstract
Accurate crop row detection is a critical prerequisite for autonomous agricultural navigation, yet it remains challenging in complex field environments. To achieve a balance between segmentation accuracy, robustness, and real-time performance, an improved crop row segmentation and navigation method based on the DeepLabV3+ framework was developed. MobileNetV2 was adopted as the backbone to minimize computational costs, while feature representation was enhanced through integrated attention mechanisms and multi-scale fusion. Specifically, split-attention convolution was integrated into the backbone, a DenseASPP + SP module was employed for multi-scale contextual capture, and a Convolutional Block Attention Module (CBAM) was added to refine feature responses. Experimental results demonstrated that the proposed method outperformed mainstream models, achieving a mean Intersection over Union (mIoU) of 93.42% and an F1-score of 96.8%. The model maintained a lightweight architecture with 8.35 M parameters and a real-time speed of 32 FPS. Furthermore, crop row anchor points were extracted and processed via DBSCAN clustering and RANSAC fitting to generate high-precision navigation lines. Validation showed that the middle crop row yielded the highest fitting accuracy with minimal angular and lateral errors. This study provides an efficient visual perception solution for intelligent field operations. Full article
(This article belongs to the Section Smart Agriculture)
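The final navigation-line step, RANSAC fitting over clustered anchor points, can be sketched as follows, assuming 2D anchor points in image coordinates. The `ransac_line` helper and the sample points are illustrative, not the paper's code:

```python
import numpy as np

def ransac_line(points, n_iter=200, tol=2.0, seed=0):
    """Fit y = m*x + b to 2D points, robust to outliers, via RANSAC.

    Repeatedly fits a line through two random points and keeps the
    model with the most inliers (points within `tol` pixels vertically),
    then refines it with least squares on the inlier set.
    """
    pts = np.asarray(points, dtype=float)
    rng = np.random.default_rng(seed)
    best_inliers = None
    for _ in range(n_iter):
        i, j = rng.choice(len(pts), size=2, replace=False)
        (x1, y1), (x2, y2) = pts[i], pts[j]
        if x1 == x2:
            continue                        # skip vertical sample pairs
        m = (y2 - y1) / (x2 - x1)
        b = y1 - m * x1
        inliers = np.abs(pts[:, 1] - (m * pts[:, 0] + b)) < tol
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    x, y = pts[best_inliers, 0], pts[best_inliers, 1]
    m, b = np.polyfit(x, y, 1)
    return m, b

# Anchor points on y = 2x + 1, plus two gross outliers (e.g. weeds)
pts = [(0, 1), (1, 3), (2, 5), (3, 7), (4, 9), (2, 40), (3, -20)]
m, b = ransac_line(pts)
```

The consensus step is what lets the fitted navigation line ignore mis-clustered anchor points that would badly skew an ordinary least-squares fit.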
13 pages, 788 KB  
Article
A Lightweight Machine Learning Framework for Post-Stroke Gait Abnormality Classification Using Wearable Gyroscope Features
by Stamatios Orfanos, Thanita Sanghan, Andreas Menychtas, Christos Panagopoulos, Ilias Maglogiannis and Surapong Chatpun
Sensors 2026, 26(10), 3143; https://doi.org/10.3390/s26103143 - 15 May 2026
Abstract
Accurately classifying gait abnormalities is crucial for the effective monitoring and rehabilitation of stroke patients. This study proposed a lightweight machine learning framework for distinguishing healthy from abnormal gait patterns using statistical features extracted from wearable gyroscope data. Statistical z-axis angular velocity values from both limbs were derived and used to evaluate the performance of multiple classifiers, including logistic regression, support vector machines, and ensemble methods. A leave-one-out cross-validation strategy was employed to enhance generalizability across subjects. The results indicated that several classifiers achieved accuracy and area under the curve (AUC) values exceeding 0.95, with random forest and support vector machine-based models demonstrating near-perfect class separability, with an AUC of 0.98. These findings highlighted the effectiveness of using a minimal set of biomechanically relevant gyroscope features for gait classification in real-world healthcare applications. The proposed pipeline is computationally efficient, making it well suited for implementation in wearable and remote monitoring systems. Full article
(This article belongs to the Section Wearables)
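Leave-one-out cross-validation, as used above, refits the model once per held-out subject so that no subject's data appears in its own training fold. A minimal sketch with a nearest-centroid stand-in for the paper's classifiers and synthetic two-feature data (nothing here comes from the study):

```python
import numpy as np

def loocv_accuracy(X, y):
    """Leave-one-out cross-validation with a nearest-centroid classifier.

    Each subject is held out once; class centroids are recomputed
    from the remaining subjects before predicting the held-out one.
    """
    X, y = np.asarray(X, dtype=float), np.asarray(y)
    correct = 0
    for i in range(len(X)):
        mask = np.arange(len(X)) != i          # train on everyone else
        centroids = {c: X[mask & (y == c)].mean(axis=0) for c in np.unique(y)}
        pred = min(centroids, key=lambda c: np.linalg.norm(X[i] - centroids[c]))
        correct += pred == y[i]
    return correct / len(X)

# Synthetic per-subject features, e.g. mean and std of z-axis angular velocity
healthy = [[0.9, 0.10], [1.0, 0.12], [1.1, 0.09], [0.95, 0.11]]
stroke = [[0.4, 0.30], [0.5, 0.35], [0.45, 0.28], [0.55, 0.33]]
X = np.array(healthy + stroke)
y = np.array([0] * 4 + [1] * 4)
acc = loocv_accuracy(X, y)
```

With one fold per subject, LOOCV gives a nearly unbiased estimate on small cohorts, which is why it suits studies like this one where subjects are few.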
25 pages, 2253 KB  
Article
Monocular Visual Pose Estimation Method Based on Spherical Cooperative Target
by Yanyu Ding, Chaoran Zhang, Yongbin Zhang, Fujin Yang, Zhiyuan Tang, Shipeng Li, Xinran Liu and Xiaojun Zhao
Sensors 2026, 26(10), 3139; https://doi.org/10.3390/s26103139 - 15 May 2026
Abstract
In close-range monocular visual measurement and cooperative target pose estimation, conventional planar targets are constrained by viewpoint changes and are prone to perspective distortion. Although spherical targets provide omnidirectional observability, their PnP-based pose estimation may still suffer from large errors under limited fields of view and sparse feature observations. To address this issue, this paper proposes an integrated visual measurement framework covering both high-precision spherical target construction and robust pose estimation. First, a composite marker layout based on adaptively scaled latitude–longitude topology is designed. To suppress cumulative distortion caused by long-sequence multi-view rigid registration, a center-to-pole point-cloud stitching strategy is developed, and multiple observations are fused using geometric-consistency weighting to accurately reconstruct the feature-point coordinate system of the target. Second, a joint optimization method is proposed by combining feature-point reprojection error with a contour center consistency constraint. Specifically, the theoretical contour center is predicted from the analytical projection model of the sphere and constrained to agree with the observed contour center fitted from the image. In addition, an SQPnP-based sequential reinitialization mechanism is introduced to improve robustness under sparse-point observations. Simulation results demonstrate that the proposed method achieves higher accuracy and robustness under continuous pose changes, sparse feature points, and different noise levels, compared with EPnP, EPnP+LM, LM, and SQPnP, while real-image experiments further demonstrate its practical feasibility. Full article
(This article belongs to the Section Sensing and Imaging)
25 pages, 15746 KB  
Article
Modulated Diffusion with Spatial–Spectral Disentangled Guidance for Hyperspectral Image Super-Resolution
by Xinlan Xu, Jiaqing Qiao, Jialin Zhou, Kuo Yuan and Lei Feng
Remote Sens. 2026, 18(10), 1582; https://doi.org/10.3390/rs18101582 - 15 May 2026
Abstract
Fusion-based hyperspectral image super-resolution (HSI-SR) built on diffusion models exhibits promising performance in generating high-quality, realistic features. However, existing methods face two limitations: (1) static conditional guidance is discordant with the dynamic denoising process, and (2) modality conflicts are inadequately addressed by simple concatenation. To address these challenges, we propose a novel Modulated Diffusion Framework with Spatial–Spectral Disentangled Guidance (SSDG). Specifically, it introduces a Dynamic Modulated Residual Network (DMRN), which leverages a time-aware mechanism to dynamically adjust conditional feature injection, ensuring adaptive guidance throughout all denoising stages. Furthermore, we design a training-free SSDG strategy to explicitly decouple spatial and spectral guidance during sampling, allowing for flexible control over the fusion process to mitigate modality conflicts. Extensive experiments on three public datasets demonstrate that the proposed method achieves state-of-the-art performance, exhibiting superior robustness, particularly in challenging noisy scenarios. Full article
11 pages, 707 KB  
Article
Aquaporin-4 and MicroRNA Expression in Meningiomas: A Tissue-Level Exploratory Analysis
by Huseyin Omer Keskin, Emre Ozkara, Ebru Erzurumluoglu, Zuhtu Ozbek, Evrim Yilmaz, Funda Canaz, Didem Arslantas, Sevilhan Artan and Ali Arslantas
Biomedicines 2026, 14(5), 1125; https://doi.org/10.3390/biomedicines14051125 - 15 May 2026
Abstract
Background: Meningiomas exhibit considerable biological heterogeneity that is not fully captured by histopathological grading. Tissue-based molecular markers may provide complementary insight into tumor biology within routine diagnostic settings. Methods: Formalin-fixed paraffin-embedded tissue samples from 65 intracranial meningiomas and 13 non-neoplastic controls were analyzed. Aquaporin-4 (AQP4) expression was assessed using immunohistochemistry, while miR-216a, miR-320a, and LINC00461 levels were quantified by means of RT-qPCR. Expression patterns were compared across groups and evaluated in relation to histological grade. Results: AQP4 expression was significantly reduced in meningiomas compared with controls and showed a further decrease in higher-grade tumors. Although expression of miR-216a and miR-320a was also lower in tumor samples, these differences did not reach statistical significance. Correlation analysis revealed modest but significant associations between AQP4 and miR-216a, as well as between miR-216a and miR-320a. Individual markers demonstrated limited discriminatory performance; however, combined expression patterns suggested underlying molecular variability across tumor grades. Conclusions: Our findings indicate that AQP4 downregulation represents a consistent feature in meningiomas, while associated microRNA alterations may reflect coordinated but context-dependent expression patterns. Although these markers are not sufficient as standalone diagnostic tools, their combined tissue-level assessment may provide complementary information on tumor heterogeneity. These findings should be interpreted as exploratory and highlight the need for further validation in larger and mechanistic studies. Full article
(This article belongs to the Section Cell Biology and Pathology)
37 pages, 1707 KB  
Article
A Consolidated Framework for the Detection of Alzheimer’s Disease Using EEG Signals and Hybrid Models
by Sunil Kumar Prabhakar and Dong-Ok Won
Biomimetics 2026, 11(5), 348; https://doi.org/10.3390/biomimetics11050348 - 15 May 2026
Abstract
Alzheimer’s disease (AD) is a serious neurodegenerative disorder that can severely affect behavior and thinking patterns, and is accompanied by frequent memory loss. The early diagnosis of AD is essential, as this can benefit the patient, but detecting AD is a complex process due to the nature of its associated clinical data. Electroencephalography (EEG) serves as a promising and cost-effective technique for analyzing AD-related brain activity patterns. In this work, a consolidated framework for detecting AD using EEG signals and hybrid models is proposed that uses a dataset that is available online. For the feature extraction module, five efficient techniques—Principal Component Analysis (PCA), Kernel Partial Least Squares (KPLS), Kriging Model, Isomap, and K-means clustering—are used. For feature selection, with the help of biomimetics-based concepts, three efficient algorithms are used: hybrid Cuckoo Search Optimization–Rat Swarm Optimization (CSO-RSO), Zebra Optimization (ZOA), and hybrid Gravitational Search Algorithm–Particle Swarm Optimization (GSA-PSO). Four interesting hybrid classifiers are utilized here to detect AD using EEG signals—hybrid Extreme Learning Machine–Adaboost (ELM–Adaboost), hybrid Classification and Regression Trees–Adaboost (CART–Adaboost), and hybrid weighted broad learning system-based Adaboost (HWBLSA), followed by a hybrid machine learning classification model with a soft voting technique—and, finally, these are compared with other standard machine learning classifiers. The highest classification accuracy of 98.71% is found when the Kriging Model feature extraction concept is combined with the hybrid GSA-PSO feature selection method and classified with the ELM–Adaboost classifier. Full article
(This article belongs to the Section Biological Optimisation and Management)
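The soft-voting step mentioned above combines classifiers by averaging their class-probability outputs before taking the argmax, rather than counting hard votes. A minimal sketch with hypothetical probability matrices (not outputs of the paper's models):

```python
import numpy as np

def soft_vote(prob_list):
    """Combine classifiers by averaging class-probability outputs.

    prob_list: list of (n_samples, n_classes) probability arrays,
    one per base classifier. Returns the predicted class per sample.
    """
    return np.mean(prob_list, axis=0).argmax(axis=1)

# Three hypothetical classifiers, two samples, two classes (AD / control)
p1 = np.array([[0.9, 0.1], [0.4, 0.6]])
p2 = np.array([[0.7, 0.3], [0.45, 0.55]])
p3 = np.array([[0.6, 0.4], [0.7, 0.3]])
preds = soft_vote([p1, p2, p3])
```

Note that soft voting can disagree with a hard majority vote: for the second sample two of the three classifiers lean toward class 1, but the confident dissenter pulls the averaged probabilities to class 0.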
21 pages, 7178 KB  
Article
Thermal Runaway Evolution, Propagation Mechanism and Multi-Dimensional Fire Investigation Methodology for 18650-Type NCA Lithium-Ion Batteries
by Juntao Wu, Yafei Fan, Haojun Zhang, Ziheng Wang, Jianhong Du and Diping Yuan
Batteries 2026, 12(5), 172; https://doi.org/10.3390/batteries12050172 - 15 May 2026
Abstract
To address the critical industry challenges of insufficient thermal safety and reliability in the stacking design of lithium-ion battery (LIB) modules, as well as the lack of accurate traceability methods for LIB fire accidents, this study takes commercial 18650-type lithium nickel cobalt aluminum (NCA) LIBs as the research object. First, we systematically investigated the thermal runaway (TR) behavior of single cells under thermal and electrical abuse conditions, identified the significant discrepancies in TR behavior between the two abuse scenarios, quantitatively revealed a positive correlation between TR risk and state of charge (SOC), and determined that the maximum heat release occurs in the negative electrode of the battery. Subsequently, we quantitatively analyzed the influences of the initial TR trigger position and module stacking structure on the TR propagation characteristics within the module, and obtained the key conclusions that center-triggered TR exhibits a faster propagation rate and that the vertical stacking structure significantly aggravates the TR chain reaction. Finally, based on the TR process, this paper summarizes the burn mark characteristics caused by different triggers of thermal runaway in LIBs. The results of this study provide critical fundamental data for optimizing the thermal safety design of LIB modules, and offer a scientific basis for the formulation of LIB fire rescue schemes and the implementation of fire investigation. Full article
(This article belongs to the Section Energy Storage System Aging, Diagnosis and Safety)
25 pages, 7431 KB  
Article
Node Importance Evaluation Method Based on Fractional-Order Topological Propagation and Local Information Entropy
by Kangzheng Huang, Weibo Li, Shuai Cao, Xianping Zhu and Peng Li
Systems 2026, 14(5), 565; https://doi.org/10.3390/systems14050565 (registering DOI) - 15 May 2026
Abstract
Accurate identification of key nodes in complex networks is vital for optimizing system robustness and controlling information spread. Existing centrality metrics struggle to balance the continuous extraction of global topological features with the fine-grained perception of local structures, while traditional heuristic algorithms also face severe resolution limitations. To address these issues, this paper proposes a node importance evaluation method based on fractional-order topological propagation and local information entropy (FSEC). This method overcomes the limitations of discrete integer-order propagation inherent in traditional graph walks. It constructs a continuous fractional-order topological propagation operator within the spectral graph theory framework. This enables the smooth projection of node degree features into the global topological space, thereby yielding high-order global impact factors. Furthermore, an information theory mechanism is introduced to quantify the probability distribution of a node’s information contribution within its local neighborhood. The local structural information entropy is then calculated to reflect the node’s asymmetric control over micro-level information flow. Deliberate attack simulations were conducted on nine real-world networks and three types of artificial network models. The results show that the proposed FSEC algorithm significantly outperforms baseline algorithms like Autoencoder and Graph Neural Network (AGNN), Degree Centrality, k-shell, PageRank, and Mixed Degree Decomposition (MDD) in degrading the largest connected component (LCC) and global network efficiency (NE). The proposed method also achieves the minimum Area Under the Curve (AUC) values globally. Its monotonicity is slightly lower than that of AGNN but superior to all other baseline algorithms. In addition, SIR simulations further confirm the effectiveness of the FSEC method. 
This approach successfully resolves the ranking tie problem among nodes in the same topological layer. Full article
(This article belongs to the Section Complex Systems and Cybernetics)
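The local-information-entropy idea above can be illustrated by treating each neighbor's degree as its share of local information flow and taking the Shannon entropy of that distribution. This is a generic illustration of the concept, not the FSEC formulation itself:

```python
import math

def local_entropy(adj, node):
    """Shannon entropy of the degree distribution over a node's neighbors.

    adj: dict mapping node -> set of neighbors (undirected graph).
    Higher entropy means local influence is spread evenly across the
    neighborhood; lower entropy means one neighbor dominates the flow.
    """
    degs = [len(adj[v]) for v in adj[node]]
    total = sum(degs)
    probs = [d / total for d in degs]        # neighbor's share of local flow
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Toy graph: a hub whose neighbors a and b also link to each other
adj = {
    "hub": {"a", "b", "c"},
    "a": {"hub", "b"},
    "b": {"hub", "a"},
    "c": {"hub"},
}
h = local_entropy(adj, "hub")
```

Because the distribution depends on neighbor degrees rather than on the node's own degree alone, two nodes with identical degree can receive different scores, which is the kind of fine-grained distinction that helps break ranking ties within a topological layer.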
19 pages, 8217 KB  
Article
A GIN-Based Pre-Identification Method for Dominant Flow Channels in Connection-Element Reservoirs: An Optimized Ant Colony Algorithm Search Scheme
by Zihao Zheng, Siying Chen, Fulin An, Shengquan Yu, Haotong Guo, Ze Du, Hua Xiang and Yunfeng Xu
Processes 2026, 14(10), 1605; https://doi.org/10.3390/pr14101605 - 15 May 2026
Abstract
Dominant flow channels formed during the late stages of waterflooding can severely reduce sweep efficiency and intensify ineffective interwell circulation. Conventional identification approaches, including tracer testing, well testing, and numerical simulation, often suffer from high operational cost, long execution time, or limited adaptability to heterogeneous interwell connectivity. Although ant colony optimization (ACO) is suitable for path-search problems in reservoir networks, its performance depends strongly on hyperparameter settings, and sample-by-sample parameter tuning introduces substantial online computational overhead. This study proposes a structure-informed GIN–ACO framework for adaptive dominant flow channel identification in connection-element reservoir graphs. A physics-constrained benchmark model is first established using Darcy’s law and the connection element method to provide reference flow paths. A geometry-based surrogate model is then developed to approximate flow splitting coefficients efficiently while preserving the main physical trends. Based on graph topology and geometric descriptors, a graph isomorphism network is trained to predict task-specific ACO parameters, replacing iterative online search with direct parameter inference. Experiments on 1000 synthetic reservoir graphs show that the proposed method achieves a 100% success rate with an average online computation time of 143.5 ms, outperforming fixed-parameter ACO, PSO-ACO, and BO-ACO. On 20 semi-realistic SPE10 reservoir models, GIN–ACO achieves a success rate of 92 ± 1% with an average runtime of 160.3 ± 5 ms. Ablation studies further confirm that graph-structure learning, combined topology–geometry features, and GIN-based parameter prediction are essential for robust performance. The proposed framework provides a promising and computationally efficient route for structure-aware dominant channel identification in connection-element reservoir models. Full article
(This article belongs to the Section AI-Enabled Process Engineering)
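The central GIN update the framework relies on can be sketched as follows. The layer widths, random weights, four-node toy graph, and the readout mapping from the graph embedding to ACO hyperparameters (pheromone weight alpha, heuristic weight beta, evaporation rate rho) are all illustrative assumptions, not the authors' trained model.

```python
import numpy as np

rng = np.random.default_rng(0)

def gin_layer(h, adj, W1, W2, eps=0.0):
    """One Graph Isomorphism Network layer (Xu et al. formulation):
    h_i' = MLP((1 + eps) * h_i + sum_{j in N(i)} h_j)."""
    agg = (1.0 + eps) * h + adj @ h          # injective sum aggregation
    return np.maximum(agg @ W1, 0.0) @ W2    # 2-layer MLP with ReLU

# Toy connection-element graph: 4 well nodes, feature = [degree, x, y]
# (combined topology + geometry descriptors, as the abstract describes).
adj = np.array([[0, 1, 1, 0],
                [1, 0, 1, 1],
                [1, 1, 0, 1],
                [0, 1, 1, 0]], dtype=float)
feats = np.column_stack([adj.sum(1), [0, 1, 0, 1], [0, 0, 1, 1]])

W1 = rng.normal(size=(3, 8))
W2 = rng.normal(size=(8, 3))
h = gin_layer(feats, adj, W1, W2)

# Graph-level mean readout -> hypothetical ACO hyperparameters; in the
# paper this mapping is learned, here it is only squashed to valid ranges.
emb = h.mean(axis=0)
alpha = 1.0 + np.exp(0.1 * emb[0])       # pheromone weight > 1
beta = 1.0 + np.exp(0.1 * emb[1])        # heuristic weight > 1
rho = 1.0 / (1.0 + np.exp(-emb[2]))      # evaporation rate in (0, 1)
```

The design point this illustrates is the one the abstract emphasizes: a single forward pass replaces per-sample iterative tuning of the ACO parameters, which is where the reported online speedup comes from.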

22 pages, 2402 KB  
Article
A Two-Stage Transformer Framework for Sparse-Array Direction-of-Arrival Estimation via Correlation Vector Recovery
by Wenchao He, Yiran Shi, Hongxi Zhao, Hongliang Zhu and Chunshan Bao
Sensors 2026, 26(10), 3132; https://doi.org/10.3390/s26103132 - 15 May 2026
Abstract
Accurate direction-of-arrival (DOA) estimation with high resolution is fundamental to many array sensing applications. In practice, however, sparse arrays with missing sensors and snapshot-limited observations often lead to incomplete and noisy second-order statistics, which substantially degrades the performance of conventional eigendecomposition-based estimators. In this paper, we propose a two-stage Transformer framework for sparse-array DOA estimation that explicitly separates correlation recovery from angle inference. The first stage operates in the correlation domain and learns to reconstruct a clean and complete correlation vector from partially observed measurements using masking-aware tokenization and global-context modeling. The recovered representation can be further converted into a structured covariance matrix, providing an interpretable interface to classical signal processing back-ends. Based on the recovered features, the second stage adopts a Transformer regressor to directly predict multi-source DOAs. Extensive simulations on a large-scale dataset with SNRs from −5 to 10 dB and various snapshot numbers demonstrate that the proposed method delivers robust accuracy and improved stability in low-SNR and snapshot-limited regimes, while maintaining competitive performance at higher SNRs. Additional evaluations with an ESPRIT back-end further confirm that the recovery-based covariance yields more reliable DOA estimation than conventional difference–coarray processing, with particularly evident gains under challenging noise conditions. Full article
(This article belongs to the Section Electronic Sensors)
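The first-stage input described above, a correlation vector from a sparse array with a mask marking which lags are observable, can be sketched as below. The 6-element aperture, missing-sensor pattern, single-source scenario, and noise level are hypothetical; the masking-aware tokenization and Transformer recovery themselves are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(1)

def masked_correlation_vector(snapshots, present):
    """Sample covariance of a sparse array, flattened into a real-valued
    correlation vector plus a 0/1 mask of entries observable from the
    physically present sensors."""
    R = snapshots @ snapshots.conj().T / snapshots.shape[1]   # (M, M)
    mask = np.outer(present, present).astype(float)
    r = (R * mask).ravel()
    return np.concatenate([r.real, r.imag]), mask.ravel()

# Hypothetical setup: 6-element half-wavelength ULA aperture with
# sensors 2 and 4 missing, one far-field source at 20 deg, 50 snapshots.
M, T = 6, 50
theta = np.deg2rad(20.0)
steering = np.exp(1j * np.pi * np.arange(M) * np.sin(theta))
sig = rng.normal(size=T) + 1j * rng.normal(size=T)
noise = 0.1 * (rng.normal(size=(M, T)) + 1j * rng.normal(size=(M, T)))
x = np.outer(steering, sig) + noise
present = np.array([1, 1, 0, 1, 0, 1])
x = x * present[:, None]                  # zero out missing sensors
r_vec, mask = masked_correlation_vector(x, present)
```

The recovery network would take `r_vec` together with `mask` and output a completed correlation vector, which can then be reshaped into a structured covariance matrix for a classical back-end such as ESPRIT, matching the interpretable interface the abstract describes.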

19 pages, 760 KB  
Review
Evaluating Cognition Across Aging and Traumatic Brain Injury: Integrating Neurological and Neuropsychological Approaches
by Miguel A. Pappolla, Sean L. Pappolla, Remi Nader, Mohammad K. Hamza, Felix Fang and Xiang Fang
J. Clin. Med. 2026, 15(10), 3822; https://doi.org/10.3390/jcm15103822 - 15 May 2026
Abstract
Background/Objectives: The evaluation of cognition is central to many neurological conditions, including traumatic brain injury, Alzheimer’s disease, Lewy body disease, frontotemporal degeneration, and vascular disorders. In clinical practice, particularly in aging populations, cognitive complaints often arise in the context of mixed neurological processes, requiring careful integration of cognitive and non-cognitive findings. Despite this, there remains limited clarity regarding the respective roles of neurologists and clinical neuropsychologists and the distinction between cognitive and neuropsychological assessments, terms that are often used interchangeably despite important differences in methodology and scope. This lack of a shared framework has practical consequences. Cognitive test results, when interpreted in isolation for diagnosis, may be misconstrued as comprehensive measures of brain function, particularly when non-cognitive neurological features such as motor, cerebellar, or vestibular abnormalities should have been considered but were not. Methods: In this narrative review, we synthesize clinical guidelines, consensus statements, regulatory sources, and representative empirical literature to articulate a competence-based framework in which cognitive assessment is a medically integrated process incorporating history, functional evaluation, neurological examination, and the targeted use of standardized neuropsychological instruments. Results: Neurologists are trained to establish medical diagnoses and integrate cognitive findings into the context of neurological disease, while neuropsychologists contribute detailed psychometric characterization, culturally and demographically informed interpretation, cognitive phenotyping, functional characterization, and validity assessment in complex clinical and medicolegal contexts. Although neuropsychologists are qualified to diagnose neurocognitive disorders using standardized diagnostic criteria, attribution to specific neurological etiologies requires a comprehensive medical evaluation that extends beyond cognitive testing alone. Conclusions: We outline a tiered approach to evaluation that aligns assessment methods with clinical questions and supports accurate diagnosis, interdisciplinary collaboration, and patient-centered care. Full article
(This article belongs to the Section Clinical Neurology)