Search Results (15,826)

Search Parameters:
Keywords = computer-based applications

27 pages, 5645 KB  
Article
A Robust Hybrid Staggered/Collocated Mesh Scheme for CFD on Skewed Meshes
by Raad Issa and Giovanni Giustini
Fluids 2026, 11(2), 53; https://doi.org/10.3390/fluids11020053 (registering DOI) - 14 Feb 2026
Abstract
In this study, a finite-volume computational fluid dynamics (CFD) technique for application on skewed meshes using staggered pressure nodes is proposed. The method is based on the derivation of a momentum equation for the cell face velocities from appropriately discretised momentum equations in the two cells surrounding the cell face with the driving pressure difference pertaining to the staggered adjacent nodes. In this way, a staggered mesh-like method is obtained that would prevent the occurrence of oscillatory behaviour in pressure or velocity fields. The cell-face velocities are then forced to obey continuity via an equation for pressure akin to other standard CFD schemes. This article describes the formulation of the cell-face momentum equation as well as the way the nodal velocity is reconstructed from the surrounding cell-face velocities. The method is demonstrated to recover the advantages of the PISO solution algorithm that were diminished in implementations in collocated schemes. It is also validated on a reference two-dimensional, steady viscous flow case on both rectangular and skewed meshes to verify its accuracy. It is then applied to the case of an unsteady vortex-shedding flow past a square obstacle, on both rectangular and skewed meshes, and the results are compared with a solution obtained from a collocated method as well as with an experimental value of the Strouhal number. Full article
(This article belongs to the Section Mathematical and Computational Fluid Mechanics)
17 pages, 1014 KB  
Article
A Multi-Domain Collaborative Framework for Practical Application of Causal Knowledge Discovery from Public Data in Elite Sports
by Dandan Cui, Zili Jiang, Xiangning Zhang, Wenchao Yang and Zihong He
Appl. Syst. Innov. 2026, 9(2), 43; https://doi.org/10.3390/asi9020043 (registering DOI) - 14 Feb 2026
Abstract
In elite sports, discovering interdisciplinary causal relationships from public data is critical for gaining a competitive edge. However, the causal knowledge required for these practices is difficult to obtain through either existing intervention-based sports science methods or computational techniques focused on statistical association. This paper formalizes a multi-domain collaborative framework, which involves three roles: (1) the elite sports team; (2) the sport science expert; and (3) the causal inference expert. Our nine-step workflow, which processes three core elements of problem, data, and computing, guides these experts through a cycle that systematically transforms practical problems into computational models and, crucially, translates complex analytical outputs back into actionable strategies. The framework also introduces a dual-dimensional “field evaluation” method, encompassing both process and outcome, to quantify the trustworthiness of knowledge in practical settings where a “gold standard” is absent. This framework was applied in an illustrative case study prior to the Paris 2024 Olympics, providing one additional evidence-informed input for the national team. The observed success was interpreted as contextual consistency rather than causal validation. This framework ensures the practical application of causal discovery in elite sports, offering a repeatable and explainable pathway for generating credible, evidence-based insights from public data for elite sports decision-making. Full article
(This article belongs to the Special Issue Recent Developments in Data Science and Knowledge Discovery)
18 pages, 11603 KB  
Article
Learning Robust Node Representations via Graph Neural Network and Multilayer Perceptron Classifier
by Mohammad Abrar Shakil Sejan, Md Habibur Rahman, Md Abdul Aziz, Iqra Hameed, Md Shofiqul Islam, Saifur Rahman Sabuj and Hyoung-Kyu Song
Mathematics 2026, 14(4), 680; https://doi.org/10.3390/math14040680 (registering DOI) - 14 Feb 2026
Abstract
Node classification is a fundamental task in graph-based learning, with applications in social networks, citation networks, and biological systems. Learning node representations for different graph datasets is necessary to find the correlation between different types of nodes. Graph Neural Networks (GNNs) play a critical role in providing revolutionary solutions for graph data structures. In this paper, we analyze the effect of combined GNN and multilayer perceptron (MLP) architecture to solve the node classification problem for different graph datasets. The feature information and network topology are efficiently captured by the GNN layer, and the MLP helps to make accurate decisions. We have selected popular datasets, namely Amazon-computer, Amazon-photo, Citeseer, Cora, Corafull, PubMed, and Wikics, for evaluating the performance of the proposed approach. In addition, in the GNN part, we have used six models to find the best model fit in the proposed architecture. We have conducted extensive simulations to find the node classification accuracy for the proposed model. The results show the proposed architecture can outperform previous studies in terms of test accuracy. In particular, the GNN algorithms SAGEConv, GENConv, and TAGConv show superior performance across different datasets. Full article
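The combined GNN-plus-MLP pipeline described above can be illustrated with a deliberately minimal sketch: a single message-passing layer smooths each node's features over its neighbourhood, and a small MLP head classifies the smoothed representation. This is a toy, pure-Python illustration under assumed data layouts, not the paper's trained architecture (which uses learned layers such as SAGEConv); `gnn_layer` and `mlp_classify` are hypothetical names.

```python
def gnn_layer(features, adjacency):
    """One simplified GNN layer: each node averages its own feature vector
    with those of its neighbours (mean aggregation, no learned weights)."""
    out = {}
    for node, feat in features.items():
        stacked = [feat] + [features[n] for n in adjacency.get(node, [])]
        out[node] = [sum(col) / len(stacked) for col in zip(*stacked)]
    return out

def mlp_classify(x, w1, w2):
    """Tiny two-layer MLP head: ReLU hidden layer, then argmax over logits.
    w1/w2 are lists of weight rows, one row per output unit."""
    hidden = [max(0.0, sum(xi * wij for xi, wij in zip(x, row))) for row in w1]
    logits = [sum(hi * wij for hi, wij in zip(hidden, row)) for row in w2]
    return logits.index(max(logits))
```

In the paper's setting the GNN layer carries learned weights and is stacked before the MLP decision head; the sketch only shows how topology (aggregation) and features (classification) are combined.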
29 pages, 2940 KB  
Article
Influence of EEG Signal Augmentation Methods on Classification Accuracy of Motor Imagery Events
by Bartłomiej Sztyler, Aleksandra Królak and Paweł Strumiłło
Sensors 2026, 26(4), 1258; https://doi.org/10.3390/s26041258 (registering DOI) - 14 Feb 2026
Abstract
This study investigates the impact of various data-augmentation techniques on the performance of neural networks in EEG-based motor imagery three-class event classification. EEG data were obtained from a publicly available open-source database, and a subset of 25 patients was selected for analysis. The classification task focused on detecting two types of motor events: imagined movements of the left hand and imagined movements of the right hand. EEGNet, a convolutional neural network architecture optimized for EEG signal processing, was employed for classification. A comprehensive set of augmentation techniques was evaluated, including five time-domain transformations, three frequency-domain transformations, two spatial-domain transformations and two generative approaches. Each method was tested individually, as well as in selected two- and three-method cascade combinations. The augmentation strategies were tested using three data-splitting methodologies and applying four ratios of original-to-generated data: 1:0.25, 1:0.5, 1:0.75 and 1:1. Our results demonstrate that the augmentation strategies we used significantly influence classification accuracy, particularly when used in combination. These findings underscore the importance of selecting appropriate augmentation techniques to enhance generalization in EEG-based brain–computer interface applications. Full article
(This article belongs to the Special Issue EEG-Based Brain–Computer Interfaces: Research and Applications)
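One of the time-domain transformations mentioned above, jittering with additive Gaussian noise, can be sketched together with the original-to-generated ratio idea (e.g., the 1:0.5 setting). This is an illustrative sketch, not the study's implementation; `augment_with_noise` is a hypothetical helper, and real EEG trials would be multi-channel arrays rather than flat lists.

```python
import random

def augment_with_noise(trials, ratio=0.5, sigma=0.1, seed=0):
    """Jittering: add Gaussian noise to copies of original trials.
    ratio=0.5 generates one synthetic trial per two originals,
    i.e., the 1:0.5 original-to-generated setting."""
    rng = random.Random(seed)
    n_new = int(len(trials) * ratio)
    augmented = []
    for i in range(n_new):
        src = trials[i % len(trials)]          # cycle through originals
        augmented.append([x + rng.gauss(0.0, sigma) for x in src])
    return trials + augmented
```

The same ratio logic applies to frequency-domain, spatial, or generative augmentations; only the per-trial transformation changes.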
46 pages, 1455 KB  
Review
Recent Advances in AI and GenAI for Health Informatics
by Sio Iong Ao, Vasile Palade, Chris Holt, Suzy Araujo, Mike Gourlay and Danina Kapetanovic
Healthcare 2026, 14(4), 495; https://doi.org/10.3390/healthcare14040495 (registering DOI) - 14 Feb 2026
Abstract
The emergence of large language models (LLMs) and generative artificial intelligence (GenAI) has marked a turning point in health informatics. AI has become a valuable tool for health informatics, with numerous applications reported in recent years. The objective of this paper is to synthesize the common concerns and opportunities raised by recent popular reviews on AI and health informatics. The main methodological topics covered in this up-to-date review include traditional AI, GenAI, and LLMs. The literature search was conducted through the popular academic database Scopus, which covers over one hundred million records spanning both computer science and healthcare. Among these popular reviews (measured by the number of citations each received), clinical decision support, patient care, electronic health records, hospital management, and remote patient monitoring are the most frequently mentioned healthcare topics. Unlike the majority of existing reviews, which narrowly cover one or a few healthcare topics, our review is designed to provide broad coverage, so that practitioners may benefit from comprehensive insights into the five popular topics mentioned above. Based on an in-depth analysis of these reviews by human experts, the main AI tools used, their main challenges, and some future directions have been identified. Patient privacy, cybersecurity, ethics, clinical accountability, engaging health professionals, benchmarks and standardization, and lack of explainability are the common concerns identified from the literature covered in this review. Full article
37 pages, 5815 KB  
Review
Current Status and Future Prospects of Simulation Technology in Cleaning Systems for Crop Harvesters
by Peng Chen, Hongguang Yang, Chenxu Zhao, Jiayong Pei, Fengwei Gu, Yurong Wang, Zhaoyang Yu and Feng Wu
Agriculture 2026, 16(4), 446; https://doi.org/10.3390/agriculture16040446 (registering DOI) - 14 Feb 2026
Abstract
The performance of the cleaning system in crop harvesters directly impacts overall operational efficiency and harvest quality. Against the background of traditional design relying on physical experiments—which is costly and provides limited mechanistic insight—Discrete Element Method (DEM), Computational Fluid Dynamics (CFD), and their coupled simulation (CFD-DEM) have become key means for in-depth study of the cleaning process, capable of revealing the complex interactions between particles and between particles and airflow. With the increasingly widespread and deep application of computer simulation technology in agricultural machinery research and development, it is particularly necessary to systematically review its research progress in cleaning systems. Therefore, this study provides a comprehensive and systematic analysis and summary of the key technologies in cleaning system simulation, aiming to address the current gap in systematic reviews of simulation technology in this field. Compared with previous studies that mostly focus on a single method or a specific crop type, this paper systematically reviews the application of three simulation technologies in cleaning systems of various crop harvesters. First, based on the working principle and core operational challenges of cleaning systems, the necessity of applying simulation technology is clarified. Second, the basic principles, modeling processes, and suitable application scenarios and key points for the cleaning simulation of each method are analyzed. Third, typical cases are reviewed to summarize their key achievements in structural innovation, parameter optimization of cleaning devices, and revealing the mechanisms of material separation. Finally, current bottlenecks in simulation applications are pointed out, and future development directions are outlined, including high-precision multi-field coupling, integration with intelligent algorithms, and the construction of digital twin systems. This study aims to provide systematic theoretical reference and methodological support for the innovative design and performance improvement of cleaning systems. Full article
(This article belongs to the Section Agricultural Technology)
34 pages, 9510 KB  
Review
Advances in DNAzyme Selection, Molecular Engineering and Biomedical Applications
by Li Yan, Jingjing Tian, Hongyu Yang, Shuai Liu, Zaihui Du, Chen Li and Hongtao Tian
Int. J. Mol. Sci. 2026, 27(4), 1833; https://doi.org/10.3390/ijms27041833 (registering DOI) - 14 Feb 2026
Abstract
DNAzymes are catalytically active single-stranded DNAs that fold into metal-ion-assisted architectures to mediate diverse reactions. Addressing the performance gap in biological settings, we establish a novel conceptual framework based on a continuous iteration workflow of selection, enhancement, and application. This paradigm integrates selection constraints, molecular engineering, and clinical context into a unified cycle. We summarize the evolution of SELEX toward application-driven selection incorporating functional/environmental constraints, deep-sequencing-enabled high-throughput activity readouts, droplet compartmentalization and structure- and computation-guided design. We further consolidate engineering strategies to improve stability, kinetics and controllability, including 2′-sugar modifications and XNA substitution, backbone and nucleobase functionalization, arm and secondary-structure engineering for switchable or split architectures and multivalent organization on nanocarriers or nucleic acid scaffolds to enhance local concentration, protection and targeted delivery. Finally, we survey applications in ultrasensitive biosensing and portable diagnostics, activatable and multimodal in vivo imaging, and therapies for cancer, inflammatory diseases and airway disorders, and outline translational priorities: data-driven design, next-generation delivery, standardized safety/PK-PD evaluation and scalable manufacturing, ultimately for clinical and point-of-care deployment. Full article
(This article belongs to the Special Issue Whole-Cell System and Synthetic Biology, 2nd Edition)
46 pages, 2169 KB  
Review
Vision Mamba in Remote Sensing: A Comprehensive Survey of Techniques, Applications and Outlook
by Muyi Bao, Shuchang Lyu, Zhaoyang Xu, Huiyu Zhou, Jinchang Ren, Shiming Xiang, Xiangtai Li and Guangliang Cheng
Remote Sens. 2026, 18(4), 594; https://doi.org/10.3390/rs18040594 (registering DOI) - 14 Feb 2026
Abstract
Deep learning has profoundly transformed remote sensing, yet prevailing architectures like Convolutional Neural Networks (CNNs) and Vision Transformers (ViTs) remain constrained by critical trade-offs: CNNs suffer from limited receptive fields, while ViTs grapple with quadratic computational complexity, hindering their scalability for high-resolution remote sensing data. State Space Models (SSMs), particularly the recently proposed Mamba architecture, have emerged as a paradigm-shifting solution, combining linear computational scaling with global context modeling. This survey presents a comprehensive review of Mamba-based methodologies in remote sensing, systematically analyzing about 120 Mamba-based remote sensing studies to construct a holistic taxonomy of innovations and applications. Our contributions are structured across five dimensions: (i) foundational principles of Vision Mamba architectures, (ii) micro-architectural advancements such as adaptive scan strategies and hybrid SSM formulations, (iii) macro-architectural integrations, including CNN–Transformer–Mamba hybrids and frequency-domain adaptations, (iv) rigorous benchmarking against state-of-the-art methods in multiple application tasks, such as object detection, semantic segmentation, change detection, etc. and (v) critical analysis of unresolved challenges with actionable future directions. By bridging the gap between SSM theory and remote sensing practice, this survey establishes Mamba as a transformative framework for remote sensing analysis. To our knowledge, this paper is the first systematic review of Mamba architectures in remote sensing. Our work provides a structured foundation for advancing research in remote sensing systems through SSM-based methods. We curate an open-source GitHub repository to foster community-driven advancements. Full article
24 pages, 16542 KB  
Article
Wampee-YOLO: A High-Precision Detection Model for Dense Clustered Wampee in Natural Orchard Scenario
by Zhiwei Li, Yusha Xie, Jingjie Wang, Guogang Huang, Longzhen Yu, Kai Zhang, Junlong Li and Changyu Liu
Horticulturae 2026, 12(2), 232; https://doi.org/10.3390/horticulturae12020232 (registering DOI) - 14 Feb 2026
Abstract
Wampee (Clausena lansium) harvesting currently relies heavily on manual labor, but automation is significantly hindered by clustered fruit growth patterns, small fruit sizes, and complex orchard backgrounds, which make accurate detection highly challenging. This study proposes Wampee-YOLO, a lightweight and high-precision model based on the YOLO11n architecture, specifically designed for real-time wampee detection in natural orchard environments. The proposed model integrates several architectural enhancements: the RFEMAConv module for expanded receptive fields, an AIFI module for improved small target interaction, and a C2PSA-MSCADYT structure to boost multi-scale adaptability. Additionally, a Triplet Attention mechanism strengthens multi-dimensional feature representation, while an AFPN-Pro2345 neck structure optimizes cross-scale feature fusion. Experimental results demonstrate that Wampee-YOLO achieves an mAP50 of 90.3%, a precision of 92.1%, and F1 score of 87%. This represents a significant 3.4% mAP50 improvement over the YOLO11n baseline, with a slight increase to 3.28 M parameters. Ablation studies further confirm that the AFPN-Pro2345 module provides the most substantial performance gain, increasing mAP50 by 2.4%. The model effectively balances computational efficiency with detection accuracy. These findings indicate that Wampee-YOLO offers a robust and efficient visual detection solution suitable for deployment on resource-constrained edge devices in smart orchard applications. Full article
(This article belongs to the Section Fruit Production Systems)
25 pages, 2045 KB  
Article
A Comparative Analysis of Self-Aware Reinforcement Learning Models for Real-Time Intrusion Detection in Fog Networks
by Nyashadzashe Tamuka, Topside Ehleketani Mathonsi, Thomas Otieno Olwal, Solly Maswikaneng, Tonderai Muchenje and Tshimangadzo Mavin Tshilongamulenzhe
Future Internet 2026, 18(2), 100; https://doi.org/10.3390/fi18020100 (registering DOI) - 14 Feb 2026
Abstract
Fog computing extends cloud services to the network edge, enabling low-latency processing for Internet of Things (IoT) applications. However, this distributed approach is vulnerable to a wide range of attacks, necessitating advanced intrusion detection systems (IDSs) that operate under resource constraints. This study proposes integrating self-awareness (online learning and concept drift adaptation) into a lightweight RL (reinforcement learning)-based IDS for fog networks and quantitatively comparing it with non-RL static thresholds and bandit-based approaches in real time. Novel self-aware reinforcement learning (RL) models, the Hierarchical Adaptive Thompson Sampling–Reinforcement Learning (HATS-RL) model, and the Federated Hierarchical Adaptive Thompson Sampling–Reinforcement Learning (F-HATS-RL), were proposed for real-time intrusion detection in a fog network. These self-aware RL policies integrated online uncertainty estimation and concept-drift detection to adapt to evolving attacks. The RL models were benchmarked against the static threshold (ST) model and a widely adopted linear bandit (Linear Upper Confidence Bound/LinUCB). A realistic fog network simulator with heterogeneous nodes and streaming traffic, including multi-type attack bursts and gradual concept drift, was established. The models’ detection performance was compared using metrics including latency, energy consumption, detection accuracy, and the area under the precision–recall curve (AUPR) and the area under the receiver operating characteristic curve (AUROC). Notably, the federated self-aware agent (F-HATS-RL) achieved the best AUROC (0.933) and AUPR (0.857), with a latency of 0.27 ms and the lowest energy consumption of 0.0137 mJ, indicating its ability to detect intrusions in fog networks with minimal energy. The findings suggest that self-aware RL agents can detect traffic–dynamic attack methods and adapt accordingly, resulting in more stable long-term performance. By contrast, a static model’s accuracy degrades under drift. Full article
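The Thompson-sampling core that HATS-RL builds on can be sketched with Beta posteriors over Bernoulli rewards: each candidate detection policy ("arm") keeps a posterior over its success rate, and the agent plays a posterior sample, which naturally balances exploration against exploitation as traffic drifts. The hierarchy, federation, and explicit drift detectors of the actual models are omitted; `ThompsonDetector` is a hypothetical name.

```python
import random

class ThompsonDetector:
    """Bernoulli Thompson sampling over candidate detection policies.
    Each arm keeps a Beta(wins+1, losses+1) posterior; the arm with the
    highest posterior sample is played, so the policy keeps adapting
    online as the reward stream (the attack mix) changes."""
    def __init__(self, n_arms, seed=0):
        self.wins = [0] * n_arms
        self.losses = [0] * n_arms
        self.rng = random.Random(seed)

    def select(self):
        samples = [self.rng.betavariate(w + 1, l + 1)
                   for w, l in zip(self.wins, self.losses)]
        return samples.index(max(samples))

    def update(self, arm, correct):
        if correct:
            self.wins[arm] += 1
        else:
            self.losses[arm] += 1
```

With feedback on whether each detection decision was correct, the posterior concentrates on the better-performing policy while still occasionally probing the others.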
27 pages, 13706 KB  
Article
A Patch-Based Computational Framework for the Analysis of Structurally Heterogeneous Bioelectrographic Images
by Rodrigo Guedes Pereira Pinheiro and Claudia Lage Rebello da Motta
Appl. Sci. 2026, 16(4), 1907; https://doi.org/10.3390/app16041907 - 13 Feb 2026
Abstract
Image datasets characterized by high intra-image structural heterogeneity pose significant challenges for supervised classification, particularly when local patterns contribute unevenly to image-level decisions. In such scenarios, direct image-level learning may obscure relevant local variability and introduce bias in both training and evaluation. This study proposes a statistically guided, patch-based computational pipeline for the automatic classification of elementary morphological patterns, with application to bioelectrographic imaging data. The pipeline is progressively refined through explicit statistical diagnostics, including image-level data splitting to prevent data leakage, class imbalance handling, and decision threshold calibration based on validation performance. To further control structural bias across images, a continuous image-level descriptor, denoted as pct_point_true , is introduced to quantify the proportion of point-like structures and support dataset stratification and stability analysis. Experimental results demonstrate consistent and robust patch-level performance, together with coherent behavior under complementary image-level aggregation analysis. Rather than emphasizing architectural novelty, the study prioritizes methodological rigor and evaluation validity, providing a transferable framework for patch-based analysis of structurally heterogeneous image datasets in applied computer vision contexts. Full article
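The image-level data splitting the abstract highlights can be sketched as grouping by image identifier before splitting, so that patches from the same image never land on both sides of the split. A minimal sketch under an assumed `(image_id, patch)` record format; `image_level_split` is a hypothetical name, not code from the paper.

```python
import random

def image_level_split(patches, test_fraction=0.3, seed=0):
    """Split patch records into train/test by image id, never by patch,
    preventing leakage of within-image structure across the split."""
    image_ids = sorted({img for img, _ in patches})
    rng = random.Random(seed)
    rng.shuffle(image_ids)
    n_test = max(1, int(len(image_ids) * test_fraction))
    test_ids = set(image_ids[:n_test])
    train = [p for p in patches if p[0] not in test_ids]
    test = [p for p in patches if p[0] in test_ids]
    return train, test
```

A naive patch-level shuffle would place near-duplicate patches from one image in both sets, inflating evaluation scores.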
24 pages, 1480 KB  
Review
Future Perspectives on the Application of Systems Biology and Generative Artificial Intelligence in the Design of Immunogenic Peptides for Vaccines
by José M. Pérez de la Lastra, Isidro Sobrino, Víctor M. Rodríguez Borges and José de la Fuente
Vaccines 2026, 14(2), 177; https://doi.org/10.3390/vaccines14020177 (registering DOI) - 13 Feb 2026
Abstract
Peptide-based vaccines offer a modular and readily manufacturable platform for both prophylactic and therapeutic immunization. However, their broader translation has been constrained by the limited capacity to predict protective immunity directly from sequence-level features. Recent advances in systems vaccinology and high-throughput immune profiling have substantially expanded the experimental evidence, while generative artificial intelligence now enables de novo design of peptide immunogens and multi-epitope antigens under precisely controlled constraints. This review approaches how these complementary developments are transforming peptide vaccine research, moving beyond classical reverse vaccinology and conventional epitope prediction toward integrated, data-driven design frameworks. We discuss key generative model architectures and conditioning strategies aligned with vaccine objectives, including approaches that account for structural presentation, antigen processing and population-level human leukocyte antigen (HLA) diversity. Central to this perspective is the requirement for rigorous experimental validation and for strengthening the computational–experimental feedback loop through iterative in vitro and in vivo testing informed by systems-level immune readouts. We highlight representative applications spanning infectious diseases, cancer immunotherapy and vector-borne vaccinology, and we outline major technical and translational challenges that must be addressed to enable robust real-world deployment. Finally, we propose future directions for precision peptide vaccinology, emphasizing standardized functional benchmarks, the development of richer curated datasets linking sequence space to immune outcomes, and the early incorporation of formulation and delivery constraints into generative design pipelines. Full article
(This article belongs to the Special Issue The Development of Peptide-Based Vaccines)
30 pages, 78159 KB  
Article
SCOPES: Spatially-Constrained Optimization for Efficient Image Selection in Remote Sensing
by Hongmei Fang, Shibin Liu and Wei Liu
Remote Sens. 2026, 18(4), 588; https://doi.org/10.3390/rs18040588 - 13 Feb 2026
Abstract
The rapid growth of remote sensing data offers unprecedented opportunities for global environmental monitoring and resource assessment, yet poses significant challenges for efficient selection of large-scale image datasets. Traditional conditional retrieval methods often return extensive sets with substantial spatial redundancy, imposing heavy selection burdens on users. Existing automated selection methods struggle to balance coverage accuracy, redundancy control, and computational efficiency in large-scale scenarios, making efficient and accurate image selection a critical challenge for large-scale applications. To address this, we propose SCOPES (Spatially-Constrained Optimization for Efficient Image Selection), a novel spatial constraint optimization framework. SCOPES operates directly on actual image footprints in continuous space, thereby circumventing the limitations of traditional discretization-based modeling. We design a unit area cost function aimed at balancing image quality with spatial contribution. To ensure computational efficiency and solution optimization, SCOPES adopts a three-stage “preliminary selection-structural optimization-supplementary selection” strategy: employing lazy greedy for efficient initial selection, spatial Boolean overlay for redundancy control, and supplementary selection for coverage gap repair. Experiments conducted in four regions of different scales demonstrate that compared to baseline methods, SCOPES minimizes the number of selected images and maximizes coverage while achieving a near-universally minimal redundancy ratio. Meanwhile, the introduction of the lazy greedy algorithm significantly improves computational efficiency, achieving up to a 229-fold speedup in the large-scale East Asia region. Overall, SCOPES provides an efficient, accurate, and scalable solution for remote sensing data selection, substantially reducing the manual selection workload for platform users. Full article
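The lazy greedy step of SCOPES can be illustrated on a simplified, discretized version of the coverage problem (the actual framework operates on continuous footprints and adds quality costs, structural optimization, and gap repair). Because coverage gain is submodular, a cached marginal gain can only shrink as the covered set grows, so a stale heap entry is a valid upper bound and most gains never need re-evaluation. `lazy_greedy_cover` is a hypothetical name.

```python
import heapq

def lazy_greedy_cover(footprints, k):
    """Select up to k image footprints (sets of cells) greedily by marginal
    coverage gain, re-evaluating a cached gain only when its entry reaches
    the top of the max-heap (lazy greedy)."""
    covered = set()
    chosen = []
    heap = [(-len(cells), i) for i, cells in enumerate(footprints)]
    heapq.heapify(heap)
    while heap and len(chosen) < k:
        _, i = heapq.heappop(heap)
        gain = len(footprints[i] - covered)   # recompute the true gain
        if not heap or gain >= -heap[0][0]:   # still the best: select it
            if gain == 0:
                break                         # nothing new to cover
            chosen.append(i)
            covered |= footprints[i]
        else:                                 # stale: reinsert updated gain
            heapq.heappush(heap, (-gain, i))
    return chosen, covered
```

This caching is the source of the speedup the abstract reports: plain greedy recomputes every candidate's gain each round, while lazy greedy usually touches only a few heap entries.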
32 pages, 6234 KB  
Article
Beyond Attention: Hierarchical Mamba Models for Scalable Spatiotemporal Traffic Forecasting
by Zineddine Bettouche, Khalid Ali, Andreas Fischer and Andreas Kassler
Network 2026, 6(1), 11; https://doi.org/10.3390/network6010011 - 13 Feb 2026
Abstract
Traffic forecasting in cellular networks is a challenging spatiotemporal prediction problem due to strong temporal dependencies, spatial heterogeneity across cells, and the need for scalability to large network deployments. Traditional cell-specific models incur prohibitive training and maintenance costs, while global models often fail to capture heterogeneous spatial dynamics. Recent spatiotemporal architectures based on attention or graph neural networks improve accuracy but introduce high computational overhead, limiting their applicability in large-scale or real-time settings. We propose HiSTM (Hierarchical SpatioTemporal Mamba), a spatiotemporal forecasting architecture built on state-space modeling. HiSTM combines spatial convolutional encoding for local neighborhood interactions with Mamba-based temporal modeling to capture long-range dependencies, followed by attention-based temporal aggregation for prediction. The hierarchical design enables representation learning with linear computational complexity in sequence length and supports both grid-based and correlation-defined spatial structures. Cluster-aware extensions incorporate spatial regime information to handle heterogeneous traffic patterns. Experimental evaluation on large-scale real-world cellular datasets demonstrates that HiSTM achieves higher accuracy than strong baselines. On the Milan dataset, HiSTM reduces MAE by 29.4% compared to STN, while achieving the lowest RMSE and highest R² score among all evaluated models. In multi-step autoregressive forecasting, HiSTM maintains 36.8% lower MAE than STN and 11.3% lower than STTRE at the 6-step horizon, with a 58% slower error accumulation rate compared to STN. On the unseen Trentino dataset, HiSTM achieves a 47.3% MAE reduction over STN and demonstrates better cross-dataset generalization. A single HiSTM model outperforms 10,000 independently trained cell-specific LSTMs, demonstrating the advantage of joint spatiotemporal learning. HiSTM maintains best-in-class performance with up to 30% missing data, outperforming all baselines under various missing-data scenarios. The model achieves these results while being 45× smaller than PredRNNpp, 18× smaller than xLSTM, and maintaining a competitive inference latency of 1.19 ms, showcasing its effectiveness for scalable 5/6G traffic prediction in resource-constrained environments.
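The linear-in-sequence-length complexity claimed for the state-space backbone comes from the fact that, at its core, an SSM layer is a linear recurrence scanned once over the sequence. A toy sketch with fixed matrices illustrates the recurrence (real Mamba-style blocks make A, B, C input-dependent and use a parallel selective scan; the names here are illustrative):

```python
import numpy as np

def ssm_scan(u, A, B, C):
    """Minimal linear state-space recurrence:
        x_t = A x_{t-1} + B u_t,   y_t = C x_t.
    A single O(T) pass over the input — the property Mamba-style
    models exploit to scale with sequence length T.
    """
    T = len(u)
    d = A.shape[0]
    x = np.zeros(d)            # hidden state, carried across time steps
    y = np.empty(T)
    for t in range(T):
        x = A @ x + B * u[t]   # state update
        y[t] = C @ x           # readout
    return y
```

Each step touches the state once, so cost is linear in T and constant memory in T — in contrast to attention, whose pairwise interactions grow quadratically with sequence length.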
24 pages, 63699 KB  
Article
Optimal Water Resource Allocation Under Policy-Driven Rigid Constraints: A Case Study of the Yellow River Great Bend
by Zhenhua Han, Rui Jiao, Yanfei Zhang and Yaru Feng
Land 2026, 15(2), 318; https://doi.org/10.3390/land15020318 - 13 Feb 2026
Abstract
The “Great Bend” of the Yellow River, a region characterized by the tension between ecological fragility and economic growth, faces dual pressures from physical water scarcity and stringent policy redlines. Traditional allocation models often struggle to operationalize the rigid boundaries of the “Four Determinants” policy (water determines production, city, land, and population) and suffer from computational inefficiencies under high-dimensional non-linear constraints. To address these issues, this study proposes a policy-driven “Four-Determinant, Three-Multiple” (FDTM) rigid constraint optimization framework. First, a multi-level boundary system is constructed based on water-carrying capacity, thereby converting the policy into dynamic interaction constraints among industry, city, land, and population. Second, to overcome potential computational bottlenecks, an Improved Adaptive Cheetah Optimization Algorithm (IA-COA) is developed. By integrating chaos mapping initialization and an adaptive penalty function mechanism, the algorithm exhibits enhanced global search capability and convergence speed within confined search spaces. Using Baotou City as a representative case study, the model simulates scenarios for the 2030 planning horizon. The results indicate that (i) the integration of rigid constraints effectively identifies development bottlenecks, capping projected water demand at 1.075 × 10⁹ m³ and preventing ecological overdraft despite a 5.15% theoretical deficit; and (ii) through IA-COA optimization, a balanced trade-off between economic benefits and ecological security is achieved: the comprehensive water supply guarantee rate increased to over 90%, and satisfaction levels for all sectors exceeded 0.8, demonstrating improved allocation efficiency. This study elucidates the marginal transformation mechanism of the water–economy–ecology nexus under rigid constraints and demonstrates the applicability of IA-COA to complex basin allocation problems constrained by strict boundaries. It provides a methodological reference for sustainable water management in similar resource-stressed arid regions.
(This article belongs to the Section Land, Soil and Water)
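The adaptive penalty mechanism mentioned in the abstract can be illustrated in miniature: infeasible candidates are charged a cost proportional to their total constraint violation, steering the population-based search back toward the feasible region. A sketch with a fixed penalty weight — IA-COA's actual mechanism adapts the weight over iterations, and all names here are illustrative, not from the paper:

```python
def penalized_objective(f, constraints, weight):
    """Wrap objective f with a penalty for violating g(x) <= 0 constraints.

    f: objective to minimize.
    constraints: list of callables g; g(x) <= 0 means the constraint holds.
    weight: penalty coefficient (held constant here; an adaptive scheme
    would grow it as the search converges, tightening feasibility).
    """
    def wrapped(x):
        violation = sum(max(0.0, g(x)) for g in constraints)
        return f(x) + weight * violation
    return wrapped
```

Feasible points are scored by f alone; infeasible ones pay extra in proportion to how badly they breach each constraint, so an unconstrained optimizer can be applied directly to the wrapped objective.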
