Search Results (4,961)

Search Parameters:
Keywords = task distribution

27 pages, 1408 KB  
Article
A Fuzzy Granular K-Means Clustering Method Driven by Gaussian Membership Functions
by Junjie Huang, Biyun Lan, Haibo Huang, Tiancai Huang and Yumin Chen
Mathematics 2026, 14(3), 462; https://doi.org/10.3390/math14030462 - 28 Jan 2026
Abstract
The K-means clustering algorithm is widely applied in various clustering tasks due to its high computational efficiency and simple implementation. However, its performance significantly deteriorates when dealing with non-convex structures, fuzzy boundaries, or noisy data, as it relies on the assumption that clusters are spherical or linearly separable. To address these limitations, this paper proposes a Gaussian membership-driven fuzzy granular K-means clustering method. In this approach, multiple Gaussian membership functions are used for fuzzy granulation at the single-feature level to generate fuzzy granules, while fuzzy granule vectors are constructed in the multi-feature space. A novel distance metric for fuzzy granules is defined along with operational rules, for which an axiomatic proof is provided. This Gaussian-based granulation enables effective modeling of nonlinear separability in complex data structures, leading to the development of a new fuzzy granular K-means clustering framework. Experimental results on multiple public UCI datasets demonstrate that the proposed method significantly outperforms traditional K-means and other baseline methods in clustering tasks involving complex geometric data (e.g., circular and spiral structures), showing improved robustness and adaptability. This offers an effective solution for clustering data with intricate distributions. Full article
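The granulation-plus-clustering idea can be illustrated with a minimal NumPy sketch: each feature value becomes a Gaussian granule (a center and a width), a simple granule distance compares centers and widths, and Lloyd-style iterations cluster the granule vectors. The granulate and granule_distance helpers and the specific metric are illustrative assumptions, not the paper's axiomatized operators.

```python
import numpy as np

def granulate(X):
    """Represent each feature value as a Gaussian granule (center, width).
    Width = per-feature std, a simple stand-in for the paper's granulation."""
    sigma = X.std(axis=0, keepdims=True) + 1e-12
    return X, np.broadcast_to(sigma, X.shape)

def granule_distance(c1, w1, c2, w2):
    # Illustrative granule metric: center gap plus width gap, summed over features.
    return np.abs(c1 - c2).sum(-1) + np.abs(w1 - w2).sum(-1)

def granular_kmeans(X, k, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    C, W = granulate(X)
    idx = rng.choice(len(X), k, replace=False)
    pc, pw = C[idx].copy(), W[idx].copy()      # prototype granules
    for _ in range(iters):
        d = granule_distance(C[:, None], W[:, None], pc[None], pw[None])
        labels = d.argmin(1)
        for j in range(k):                     # update prototypes per cluster
            m = labels == j
            if m.any():
                pc[j], pw[j] = C[m].mean(0), W[m].mean(0)
    return labels, pc, pw
```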
17 pages, 23550 KB  
Article
DSAC-ICM: A Distributional Reinforcement Learning Framework for Path Planning in 3D Uneven Terrains
by Yixin Zhou, Fan Liu, Zhixiao Liu, Xianghan Ji and Guangqiang Yin
Sensors 2026, 26(3), 853; https://doi.org/10.3390/s26030853 - 28 Jan 2026
Abstract
Ground autonomous mobile robots are increasingly critical for reconnaissance, patrol, and resupply tasks in public safety and national defense scenarios, where global path planning in 3D uneven terrains remains a major challenge. Traditional planners struggle with high dimensionality, while Deep Reinforcement Learning (DRL) is hindered by two key issues: (1) systematic overestimation of action values (Q-values) due to function approximation error, which leads to suboptimal policies and training instability; and (2) inefficient exploration under sparse reward signals. To address these limitations, we propose DSAC-ICM: a Distributional Soft Actor–Critic framework integrated with an Intrinsic Curiosity Module (ICM). Our method fundamentally shifts the learning paradigm from estimating scalar Q-values to learning the full probability distribution of state-action returns, which inherently mitigates value overestimation. We further integrate the ICM to generate dense intrinsic rewards, guiding the agent toward novel and unvisited states to tackle the exploration challenge. Comprehensive experiments conducted in a suite of realistic 3D uneven-terrain environments demonstrate that DSAC-ICM successfully enables the agent to learn effective navigation capabilities. Crucially, it achieves a superior trade-off between path quality and computational cost when compared to traditional path planning algorithms. Furthermore, DSAC-ICM significantly outperforms other RL baselines in terms of convergence speed and return. Full article
(This article belongs to the Section Sensors and Robotics)
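The curiosity mechanism the abstract describes follows the generic ICM recipe: encode observations into features, predict the next feature vector from (feature, action), and use the forward-model prediction error as a dense intrinsic reward. A minimal PyTorch sketch, with layer sizes and continuous actions as assumptions (the paper's exact networks are not given here):

```python
import torch
import torch.nn as nn

class ICM(nn.Module):
    """Generic Intrinsic Curiosity Module: the forward model's prediction
    error in a learned feature space serves as a dense intrinsic reward."""
    def __init__(self, obs_dim, act_dim, feat_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(obs_dim, feat_dim), nn.ReLU(),
                                     nn.Linear(feat_dim, feat_dim))
        self.forward_model = nn.Sequential(nn.Linear(feat_dim + act_dim, feat_dim),
                                           nn.ReLU(), nn.Linear(feat_dim, feat_dim))
        self.inverse_model = nn.Sequential(nn.Linear(2 * feat_dim, feat_dim),
                                           nn.ReLU(), nn.Linear(feat_dim, act_dim))

    def forward(self, obs, act, next_obs):
        phi, phi_next = self.encoder(obs), self.encoder(next_obs)
        phi_pred = self.forward_model(torch.cat([phi, act], dim=-1))
        act_pred = self.inverse_model(torch.cat([phi, phi_next], dim=-1))
        r_int = 0.5 * (phi_pred - phi_next.detach()).pow(2).mean(dim=-1)
        fwd_loss = r_int.mean()
        inv_loss = (act_pred - act).pow(2).mean()  # continuous actions assumed
        return r_int.detach(), fwd_loss + inv_loss  # reward + training loss
```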
29 pages, 473 KB  
Article
FedHGPrompt: Privacy-Preserving Federated Prompt Learning for Few-Shot Heterogeneous Graph Learning
by Xijun Wu, Jianjun Shi and Xinming Zhang
Entropy 2026, 28(2), 143; https://doi.org/10.3390/e28020143 - 27 Jan 2026
Abstract
Learning from heterogeneous graphs under the constraints of both data scarcity and data privacy presents a significant challenge. While graph prompt learning offers a pathway for efficient few-shot adaptation, and federated learning provides a paradigm for decentralized training, their direct integration for heterogeneous graphs is non-trivial due to structural complexity and the need for rigorous privacy guarantees. This paper proposes FedHGPrompt, a novel federated framework that bridges this gap through a cohesive architectural design. Our approach introduces a three-layer model: a unification layer employing dual templates to standardize heterogeneous graphs and tasks, an adaptation layer utilizing trainable dual prompts to steer a frozen pre-trained model for few-shot learning, and a privacy layer integrating a cryptographic secure aggregation protocol. This design ensures that the central server only accesses aggregated updates, thereby cryptographically safeguarding individual client data. Extensive evaluations on three real-world heterogeneous graph datasets (ACM, DBLP, and Freebase) demonstrate that FedHGPrompt achieves superior few-shot learning performance compared to existing federated graph learning baselines (including FedGCN, FedGAT, FedHAN, and FedGPL) while maintaining strong privacy assurances and practical communication efficiency. The framework establishes an effective approach for collaborative learning on distributed, heterogeneous graph data where privacy is paramount. Full article
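The privacy layer's guarantee (the server sees only aggregated updates) can be sketched with the classic pairwise-masking idea behind secure aggregation: each pair of clients shares a random mask that one adds and the other subtracts, so all masks cancel in the server's sum. A toy NumPy sketch; real protocols derive shared masks from key agreement and handle dropouts, and the seeding scheme here is purely illustrative:

```python
import numpy as np

def secure_sum(updates, seed=0):
    """Pairwise-mask secure aggregation sketch: client i sends its update
    plus +mask_ij for j > i and -mask_ij for j < i. Individual updates are
    hidden, but the masks cancel so the aggregate is exact."""
    n, shape = len(updates), updates[0].shape
    masks = {(i, j): np.random.default_rng(seed + 1000 * i + j).normal(size=shape)
             for i in range(n) for j in range(i + 1, n)}
    masked = []
    for i, u in enumerate(updates):
        v = u.astype(float).copy()
        for j in range(n):
            if j > i:
                v += masks[(i, j)]
            elif j < i:
                v -= masks[(j, i)]
        masked.append(v)          # what each client actually transmits
    return sum(masked)            # equals sum(updates) up to float error

# Demo: the server recovers the exact sum without seeing any single update.
print(secure_sum([np.ones(4) * k for k in range(3)]))  # ~[3. 3. 3. 3.]
```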
20 pages, 7006 KB  
Article
Possible Role of Dopamine in the Enhancement of Intrahippocampal Arc Protein Expression Induced by Post-Learning Noradrenergic Stimulation of the Basolateral Amygdala
by Bogomil Peshev, Dimitrinka Atanasova, Pavel Rashev, Lidia Kortenska, Milena Mourdjeva, Despina Pupaki, Nikolaos Efstratiou, Nikolay Dimitrov and Jana Tchekalarova
Int. J. Mol. Sci. 2026, 27(3), 1273; https://doi.org/10.3390/ijms27031273 - 27 Jan 2026
Abstract
Extensive research in laboratory rodents has shown that novelty exposure enhances the consolidation of memories for preceding or following low-arousal events by elevating dopamine release in the dorsal hippocampus (dHipp). Additionally, numerous studies have demonstrated that post-encoding noradrenergic activation of the basolateral amygdala (BLA) can also enhance memory consolidation in dHipp. Since the BLA is most active during emotionally arousing or stress-related situations, it was suggested that this nuclear complex mediates the effects of emotional salience on memory consolidation. However, it is presently unknown whether the reinforcement of memory storage in dHipp induced by novel and arousing environmental conditions results from the interaction between these two modulatory systems. To test the hypothesis of a functional interaction between dopaminergic and noradrenergic systems, this study assessed their combined effects on a low-arousal object-location (OL) task. Rats received post-training intra-BLA infusions of vehicle or clenbuterol (Clen), a selective β-adrenoceptor agonist. Novelty-induced dopamine release in the dHipp was enhanced by omitting habituation prior to training, and the contribution of dopamine signaling was further evaluated using pre-infusion administration of the D1/D5 receptor antagonist SCH 23390. The distribution of two important proteins for memory processing, namely the activity-regulated cytoskeleton-associated protein (Arc) and the phosphorylated form of the transcription factor, cAMP-response element-binding protein (pCREB) in the dHipp, was explored in a subset of rats perfused 60 min after the training phase. Stimulation of the BLA significantly increased the number of Arc- and pCREB-positive cells in several dHipp areas. The preceding application of SCH 23390, however, substantially decreased these effects in the same areas, i.e., the dentate gyrus (DG), CA2, and CA1 subregions for pCREB, and the CA3b, CA3c, CA2, and CA1 subregions for Arc. This interaction is considered essential for the initial stages of memory consolidation. The obtained results indicate the presence of a region-specific interaction between BLA modulatory inputs and intrahippocampal dopaminergic transmission, the mechanisms of which remain to be elucidated. Full article
24 pages, 1252 KB  
Protocol
Feasibility of “DiverAcción”: A Web-Based Telerehabilitation System for Executive Functions Training in Children and Adolescents with ADHD—Longitudinal Study Protocol
by Marina Rivas-García, Carmen Vidal-Ramírez, Abel Toledano-González, María del Carmen Rodríguez-Martínez, Esther Molina-Torres, José-Antonio Marín-Marín, José-Matías Triviño-Juárez, Miguel Gea-Mejías and Dulce Romero-Ayuso
Healthcare 2026, 14(3), 323; https://doi.org/10.3390/healthcare14030323 - 27 Jan 2026
Abstract
Background: Attention Deficit Hyperactivity Disorder (ADHD) is associated with executive function deficits—such as planning, organization, and prospective memory—that impair autonomy and daily functioning, increase family stress, and create challenges in educational contexts. These consequences underscore the need for accessible and ecologically valid interventions addressing the cognitive, familial, and educational dimensions. Traditional approaches often lack ecological validity, and pharmacological treatment shows a limited impact on functional cognition. Objectives: This protocol outlines a feasibility study of DiverAcción, a web-based telerehabilitation system designed to enhance functional cognition through interactive and gamified tasks integrated into a comprehensive healthcare programme. Methods: A quasi-experimental before-and-after feasibility study will recruit 30 participants aged 9 to 17 years with ADHD. The study comprises an initial face-to-face session for instructions and baseline assessment (T0), followed by twelve supervised online sessions over six weeks. Therapeutic support is provided via integrated chat, email, and two scheduled videoconference check-ins. Feasibility Outcomes: recruitment, adherence, retention, usability (SUS), acceptability (TAM), satisfaction, user-friendly design, therapeutic alliance (WAI-I), and professionals’ attitudes toward technology (e-TAP-T). Exploratory Measures: parental self-efficacy (BPSES), parenting stress (PSI-4-SF), ADHD symptomatology (SNAP-IV), executive functioning (BRIEF-2), time management (Time-S), emotional regulation (ERQ-CA), prospective memory (PRMQ-C), and health-related quality of life (KIDSCREEN-52). Analyses emphasize descriptive statistics for feasibility metrics (recruitment, adherence, retention, dropout, and fidelity). Assessments are conducted post-intervention (T1) and at three-month follow-up (T2) and analyzed relative to baseline using repeated-measures ANOVA or Friedman tests, depending on data distribution. Conclusions: This feasibility protocol will provide preliminary evidence on the usability, acceptability, and implementation of DiverAcción. Findings will guide refinements and inform the design of a subsequent randomized controlled trial. Full article
35 pages, 2414 KB  
Article
Hierarchical Caching for Agentic Workflows: A Multi-Level Architecture to Reduce Tool Execution Overhead
by Farhana Begum, Craig Scott, Kofi Nyarko, Mansoureh Jeihani and Fahmi Khalifa
Mach. Learn. Knowl. Extr. 2026, 8(2), 30; https://doi.org/10.3390/make8020030 - 27 Jan 2026
Abstract
Large Language Model (LLM) agents depend heavily on multiple external tools such as APIs, databases and computational services to perform complex tasks. However, these tool executions create latency and introduce costs, particularly when agents handle similar queries or workflows. Most current caching methods focus on LLM prompt–response pairs or execution plans and overlook redundancies at the tool level. To address this, we designed a multi-level caching architecture that captures redundancy at both the workflow and tool level. The proposed system integrates four key components: (1) hierarchical caching that operates at both the workflow and tool level to capture coarse and fine-grained redundancies; (2) dependency-aware invalidation using graph-based techniques to maintain consistency when write operations affect cached reads across execution contexts; (3) category-specific time-to-live (TTL) policies tailored to different data types, e.g., weather APIs, user location, database queries and filesystem and computational tasks; and (4) session isolation to ensure multi-tenant cache safety through automatic session scoping. We evaluated the system using synthetic data with 2.25 million queries across ten configurations in fifteen runs. In addition, we conducted four targeted evaluations—write intensity robustness from 4 to 30% writes, personalized memory effects under isolated vs. shared cache modes, workflow-level caching comparison and workload sensitivity across five access distributions—on an additional 2.565 million queries, bringing the total experimental scope to 4.815 million executed queries. The architecture achieved 76.5% caching efficiency, reducing query processing time by 13.3× and lowering estimated costs by 73.3% compared to a no-cache baseline. Multi-tenant testing with fifteen concurrent tenants confirmed robust session isolation and 74.1% efficiency under concurrent workloads. Our evaluation used controlled synthetic workloads following Zipfian distributions, which are commonly used in caching research. While absolute hit rates vary by deployment domain, the architectural principles of hierarchical caching, dependency tracking and session isolation remain broadly applicable. Full article
(This article belongs to the Section Learning)
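The interplay of the four components can be pictured with a small Python sketch: a single store keyed by (session, level, key) gives session isolation for free, category TTLs govern expiry, and a crude substring check stands in for the paper's graph-based dependency invalidation. All names and TTL values are illustrative assumptions:

```python
import time

class HierarchicalCache:
    """Two-level cache sketch: exact workflow-level hits short-circuit whole
    plans; tool-level entries carry category TTLs and are scoped per session."""
    TTL = {"weather": 300, "db": 60, "fs": 30, "compute": 3600}  # illustrative

    def __init__(self):
        self.store = {}  # (session, level, key) -> (value, expires_at)

    def get(self, session, level, key):
        entry = self.store.get((session, level, key))
        if entry and entry[1] > time.time():
            return entry[0]
        self.store.pop((session, level, key), None)  # expired or missing
        return None

    def put(self, session, level, key, value, category="compute"):
        self.store[(session, level, key)] = (value, time.time() + self.TTL[category])

    def invalidate_dependents(self, session, written_key):
        # Dependency-aware invalidation stand-in: drop any cached read in this
        # session whose key mentions the written resource.
        for k in [k for k in self.store if k[0] == session and written_key in k[2]]:
            del self.store[k]
```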
35 pages, 1699 KB  
Review
Will AI Replace Physicians in the Near Future? AI Adoption Barriers in Medicine
by Rafał Obuchowicz, Adam Piórkowski, Karolina Nurzyńska, Barbara Obuchowicz, Michał Strzelecki and Marzena Bielecka
Diagnostics 2026, 16(3), 396; https://doi.org/10.3390/diagnostics16030396 - 26 Jan 2026
Abstract
Objectives: This study aims to evaluate whether contemporary artificial intelligence (AI), including convolutional neural networks (CNNs) for medical imaging and large language models (LLMs) for language processing, could replace physicians in the near future and to identify the principal clinical, technical, and regulatory barriers. Methods: A narrative review is conducted on the scientific literature addressing AI performance and reproducibility in medical imaging, LLM competence in medical knowledge assessment and patient communication, limitations in out-of-distribution generalization, absence of physical examination and sensory inputs, and current regulatory and legal frameworks, particularly within the European Union. Results: AI systems demonstrate high accuracy and reproducibility in narrowly defined tasks, such as image interpretation, lesion measurement, triage, documentation support, and written communication. These capabilities reduce interobserver variability and support workflow efficiency. However, major obstacles to physician replacement persist, including limited generalization beyond training distributions, inability to perform physical examination or procedural tasks, susceptibility of LLMs to hallucinations and overconfidence, unresolved issues of legal liability at higher levels of autonomy, and the continued requirement for clinician oversight. Conclusions: In the foreseeable future, AI will augment rather than replace physicians. The most realistic trajectory involves automation of well-defined tasks under human supervision, while clinical integration, physical examination, procedural performance, ethical judgment, and accountability remain physician-dependent. Future adoption should prioritize robust clinical validation, uncertainty management, escalation pathways to clinicians, and clear regulatory and legal frameworks. Full article
(This article belongs to the Topic Machine Learning and Deep Learning in Medical Imaging)
27 pages, 4342 KB  
Article
Energy–Latency–Accuracy Trade-off in UAV-Assisted VECNs: A Robust Optimization Approach Under Channel Uncertainty
by Tiannuo Liu, Menghan Wu, Hanjun Yu, Yixin He, Dawei Wang, Li Li and Hongbo Zhao
Drones 2026, 10(2), 86; https://doi.org/10.3390/drones10020086 - 26 Jan 2026
Abstract
Federated learning (FL)-based vehicular edge computing networks (VECNs) are emerging as a key enabler of intelligent transportation systems, as their privacy-preserving and distributed architecture can safeguard vehicle data while reducing latency and energy consumption. However, conventional roadside units face processing bottlenecks in dense traffic and at the network edge, motivating the adoption of unmanned aerial vehicle (UAV)-assisted VECNs. To address this challenge, this paper proposes a UAV-assisted VECN framework with FL, aiming to improve model accuracy while minimizing latency and energy consumption during computation and transmission. Specifically, a reputation-based client selection mechanism is introduced to enhance the accuracy and reliability of federated aggregation. Furthermore, to address the channel dynamics induced by high vehicle mobility, we design a robust reinforcement learning-based resource allocation scheme. In particular, an asynchronous parallel deep deterministic policy gradient (APDDPG) algorithm is developed to adaptively allocate computation and communication resources in response to real-time channel states and task demands. To ensure consistency with real vehicular communication environments, field experiments were conducted and the obtained measurements were used as simulation parameters to analyze the proposed algorithm. Compared with state-of-the-art algorithms, the developed APDDPG algorithm achieves 20% faster convergence, 9% lower energy consumption, an FL accuracy of 95.8%, and the lowest standard deviation (i.e., the most robust performance) under varying channel conditions. Full article
(This article belongs to the Special Issue Low-Latency Communication for Real-Time UAV Applications)
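As a rough illustration of reputation-based client selection, the sketch below keeps an exponential moving average of each client's validation score and aggregates only the top-k reputable updates, weighted by reputation. The update rule, decay factor, and top-k cutoff are assumptions; the paper's actual mechanism and the APDDPG allocator are not reproduced here.

```python
import numpy as np

def select_and_aggregate(updates, val_scores, reputations, top_k=5, decay=0.9):
    """Reputation-based selection sketch: reputations track an EMA of each
    client's validation score; only the top-k clients are aggregated,
    weighted by reputation. Illustrative, not the paper's exact rule."""
    reputations = decay * reputations + (1 - decay) * np.asarray(val_scores)
    chosen = np.argsort(reputations)[-top_k:]          # most reputable clients
    w = reputations[chosen] / reputations[chosen].sum()
    global_update = sum(wi * updates[i] for wi, i in zip(w, chosen))
    return global_update, reputations
```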
36 pages, 1772 KB  
Article
Dynamic Allocation of Emergency Medical Resources in Respiratory Infectious Disease Models Considering Vaccine Failure
by Muni Zhuang, Jianping Zhu, Xin Lu, Dongsheng Cheng and Xu Tan
Mathematics 2026, 14(3), 425; https://doi.org/10.3390/math14030425 - 26 Jan 2026
Abstract
Objective: Dynamic allocation of emergency medical resources is a critical task in the prevention and control of respiratory infectious diseases (RIDs). This study aims to address the challenge of a “run on the healthcare system” by proposing an effective resource allocation strategy to curb the spread of RIDs. Methods: Considering the infection severity of RIDs, the capacity of emergency medical resources (hospitalization rate), and vaccination status, we construct an SVInR dynamic model of RIDs that considers vaccine failure. Under the constraint of emergency medical resources and with the goal of minimizing the basic reproduction number, we propose a dynamic allocation strategy for distributing emergency medical resources among different types of infected individuals. Results: Simulation results demonstrate that improving the hospitalization efficiency of emergency medical resources significantly contributes to the effective control of RIDs. The model shows that targeted dynamic allocation helps reduce disease transmission. Conclusions: Validation using real-world data confirms that the model is effective and practical. It offers theoretical guidance for dynamically allocating emergency medical resources and supports informed decision-making in response to major emerging RIDs. Full article
(This article belongs to the Special Issue Applied Mathematical Modelling and Dynamical Systems, 2nd Edition)
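Since the paper's exact SVInR compartments are not spelled out in the abstract, a simplified single-infected-class variant with vaccine failure (waning back to susceptibility) and a resource-controlled hospitalization rate h can still illustrate the mechanism: raising h lowers the reproduction-number term beta/(h + gamma). A minimal Euler-integration sketch, with all parameter values illustrative:

```python
import numpy as np

def svir_step(y, dt, beta, nu, omega, h, gamma):
    """One Euler step of a simplified SVIR model with vaccine failure:
    S -> V (vaccination nu), V -> S (failure/waning omega),
    S -> I (transmission beta), I -> R (hospitalization h + recovery gamma).
    A stand-in for the paper's SVInR model with multiple infected classes."""
    S, V, I, R = y
    N = y.sum()
    dS = -beta * S * I / N - nu * S + omega * V
    dV = nu * S - omega * V
    dI = beta * S * I / N - (h + gamma) * I
    dR = (h + gamma) * I
    return y + dt * np.array([dS, dV, dI, dR])

y = np.array([9900.0, 0.0, 100.0, 0.0])
for _ in range(2000):  # 200 days at dt = 0.1; raising h damps the outbreak
    y = svir_step(y, 0.1, beta=0.3, nu=0.02, omega=0.01, h=0.05, gamma=0.1)
print(y.round(1))
```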
28 pages, 32574 KB  
Article
CauseHSI: Counterfactual-Augmented Domain Generalization for Hyperspectral Image Classification via Causal Disentanglement
by Xin Li, Zongchi Yang and Wenlong Li
J. Imaging 2026, 12(2), 57; https://doi.org/10.3390/jimaging12020057 - 26 Jan 2026
Abstract
Cross-scene hyperspectral image (HSI) classification under single-source domain generalization (DG) is a crucial yet challenging task in remote sensing. The core difficulty lies in generalizing from a limited source domain to unseen target scenes. We formalize this through the causal theory, where different sensing scenes are viewed as distinct interventions on a shared physical system. This perspective reveals two fundamental obstacles: interventional distribution shifts arising from varying acquisition conditions, and confounding biases induced by spurious correlations driven by domain-specific factors. Taking the above considerations into account, we propose CauseHSI, a causality-inspired framework that offers new insights into cross-scene HSI classification. CauseHSI consists of two key components: a Counterfactual Generation Module (CGM) that perturbs domain-specific factors to generate diverse counterfactual variants, simulating cross-domain interventions while preserving semantic consistency, and a Causal Disentanglement Module (CDM) that separates invariant causal semantics from spurious correlations through structured constraints under a structural causal model, ultimately guiding the model to focus on domain-invariant and generalizable representations. By aligning model learning with causal principles, CauseHSI enhances robustness against domain shifts. Extensive experiments on the Pavia, Houston, and HyRANK datasets demonstrate that CauseHSI outperforms existing DG methods. Full article
(This article belongs to the Special Issue Multispectral and Hyperspectral Imaging: Progress and Challenges)
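The Counterfactual Generation Module's role, perturbing domain-specific factors while preserving semantics, can be approximated by a toy augmentation that rescales and shifts each spectral band, mimicking acquisition-condition "interventions". This is a deliberate simplification: the paper's learned CGM is not reproduced, and the perturbation form and strength are assumptions.

```python
import numpy as np

def counterfactual_variants(patch, n_variants=4, strength=0.1, seed=0):
    """Counterfactual-style HSI augmentation sketch: simulate cross-domain
    interventions with per-band gain and offset (acquisition-style factors)
    while leaving spatial semantics untouched. patch: (H, W, bands)."""
    rng = np.random.default_rng(seed)
    bands = patch.shape[-1]
    out = []
    for _ in range(n_variants):
        gain = 1 + strength * rng.standard_normal(bands)
        offset = strength * rng.standard_normal(bands) * patch.std(axis=(0, 1))
        out.append(patch * gain + offset)  # label is assumed unchanged
    return out
```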
26 pages, 4764 KB  
Article
Hybrid ConvLSTM U-Net Deep Neural Network for Land Use and Land Cover Classification from Multi-Temporal Sentinel-2 Images: Application to Yaoundé, Cameroon
by Ange Gabriel Belinga, Stéphane Cédric Tékouabou Koumetio and Mohammed El Haziti
Math. Comput. Appl. 2026, 31(1), 18; https://doi.org/10.3390/mca31010018 - 26 Jan 2026
Abstract
Accurate mapping of land use and land cover (LULC) is crucial for various applications such as urban planning, environmental management, and sustainable development, particularly in rapidly growing urban areas. African cities such as Yaoundé, Cameroon, are particularly affected by this rapid and often uncontrolled urban growth with complex spatio-temporal dynamics. Effective modeling of LULC indicators in such areas requires robust algorithms for high-resolution image segmentation and classification, as well as reliable data with broad spatio-temporal coverage. Among the most suitable data sources for such studies, Sentinel-2 image time series, thanks to their high spatial (10 m) and temporal (5-day) resolution, are particularly valuable for this task. However, for effective LULC modeling in such dynamic areas, many challenges remain, including spectral confusion between certain classes, seasonal variability, and spatial heterogeneity. This study proposes a hybrid deep learning architecture combining U-Net and Convolutional Long Short-Term Memory (ConvLSTM) layers, allowing the spatial structures and temporal dynamics of the Sentinel-2 series to be exploited jointly. Applied to the Yaoundé region (Cameroon) over the period 2018–2025, the hybrid model significantly outperforms the U-Net and ConvLSTM models alone. It achieves a macro-average F1 score of 0.893, an accuracy of 0.912, and an average IoU of 0.811 on the test set. Segmentation performance reached up to 0.948, 0.953, and 0.910 for precision, F1-score, and IoU, respectively, on the built-up areas class. Moreover, despite its better performance, the hybrid model does not significantly penalize evaluation speed in terms of computational complexity. These results demonstrate the relevance of jointly integrating space and time for robust LULC classification from multi-temporal satellite images. Full article
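The hybrid's core building block is a ConvLSTM cell, an LSTM whose gates are computed with convolutions so hidden states remain spatial maps. A drastically reduced PyTorch sketch of the idea; real depths, channel counts, and skip connections are not given in the abstract and are assumptions here:

```python
import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    """Minimal ConvLSTM cell: gates are convolutions, so the hidden state
    keeps its (H, W) spatial layout across time steps."""
    def __init__(self, in_ch, hid_ch, k=3):
        super().__init__()
        self.hid_ch = hid_ch
        self.gates = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, k, padding=k // 2)

    def forward(self, x, state=None):
        if state is None:
            b, _, h, w = x.shape
            state = (x.new_zeros(b, self.hid_ch, h, w),) * 2
        h_t, c_t = state
        i, f, o, g = self.gates(torch.cat([x, h_t], dim=1)).chunk(4, dim=1)
        c_t = torch.sigmoid(f) * c_t + torch.sigmoid(i) * torch.tanh(g)
        h_t = torch.sigmoid(o) * torch.tanh(c_t)
        return h_t, (h_t, c_t)

class TinyConvLSTMUNet(nn.Module):
    """Sketch of the hybrid idea: run a ConvLSTM over the image time series,
    then map its last hidden state through a tiny U-Net-style encoder-decoder
    to per-pixel class logits."""
    def __init__(self, bands=10, n_classes=6, hid=16):
        super().__init__()
        self.cell = ConvLSTMCell(bands, hid)
        self.enc = nn.Sequential(nn.Conv2d(hid, 2 * hid, 3, padding=1),
                                 nn.ReLU(), nn.MaxPool2d(2))
        self.dec = nn.Sequential(nn.ConvTranspose2d(2 * hid, hid, 2, stride=2),
                                 nn.ReLU(), nn.Conv2d(hid, n_classes, 1))

    def forward(self, seq):               # seq: (B, T, bands, H, W), H/W even
        state = None
        for t in range(seq.shape[1]):
            h, state = self.cell(seq[:, t], state)
        return self.dec(self.enc(h))      # (B, n_classes, H, W)
```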
15 pages, 6259 KB  
Article
TopoAD: Resource-Efficient OOD Detection via Multi-Scale Euler Characteristic Curves
by Liqiang Lin, Xueyu Ye, Zhiyu Lin, Yunyu Kang, Shuwu Chen and Xiaolong Liu
Sustainability 2026, 18(3), 1215; https://doi.org/10.3390/su18031215 - 25 Jan 2026
Abstract
Out-of-distribution (OOD) detection is essential for ensuring the reliability of machine learning models deployed in safety-critical applications. Existing methods often rely solely on statistical properties of feature distributions while ignoring the geometric structure of learned representations. We propose TopoAD, a topology-aware OOD detection framework that leverages Euler Characteristic Curves (ECCs) extracted from intermediate convolutional activation maps and fuses them with standardized energy scores. Specifically, we employ a computationally efficient superlevel-set filtration with a local estimator to capture topological invariants, avoiding the high cost of persistent homology. Furthermore, we introduce task-adaptive aggregation strategies to effectively integrate multi-scale topological features based on the complexity of distribution shifts. We evaluate our method on CIFAR-10 against four diverse OOD benchmarks spanning far-OOD (Textures), near-OOD (SVHN), and semantic shift scenarios. Our results demonstrate that TopoAD-Gated achieves superior performance on far-OOD data with 89.98% AUROC on Textures, while the ultra-lightweight TopoAD-Linear provides an efficient alternative for near-OOD detection. Comprehensive ablation studies reveal that cross-layer gating effectively captures multi-scale topological shifts, while threshold-wise attention provides limited benefit and can degrade far-OOD performance. Our analysis demonstrates that topological features are particularly effective for detecting OOD samples with distinct structural characteristics, highlighting TopoAD’s potential as a sustainable solution for resource-constrained applications in texture analysis, medical imaging, and remote sensing. Full article
(This article belongs to the Special Issue Sustainability of Intelligent Detection and New Sensor Technology)
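The topological summary TopoAD relies on is cheap to state in code: threshold an activation map at many levels (superlevel sets) and record the Euler characteristic, components minus holes, at each level. A small NumPy/SciPy sketch under standard 4/8 pixel connectivity; the paper's local estimator and the fusion with energy scores are not reproduced:

```python
import numpy as np
from scipy import ndimage

def euler_characteristic(mask):
    """chi = #components - #holes for a binary 2D mask. Holes are background
    components that do not touch the border (4-connected foreground,
    8-connected background, a common convention)."""
    _, n_comp = ndimage.label(mask)
    padded = np.pad(~mask, 1, constant_values=True)  # border-linked background
    _, n_bg = ndimage.label(padded, structure=np.ones((3, 3), int))
    return n_comp - (n_bg - 1)

def ecc(activation, n_thresholds=32):
    """Euler Characteristic Curve over superlevel sets of one activation map:
    one cheap topological summary per threshold, no persistent homology."""
    ts = np.linspace(activation.min(), activation.max(), n_thresholds)
    return np.array([euler_characteristic(activation >= t) for t in ts])
```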
30 pages, 7439 KB  
Article
Traffic Forecasting for Industrial Internet Gateway Based on Multi-Scale Dependency Integration
by Tingyu Ma, Jiaqi Liu, Panfeng Xu and Yan Song
Sensors 2026, 26(3), 795; https://doi.org/10.3390/s26030795 - 25 Jan 2026
Abstract
Industrial gateways serve as critical data aggregation points within the Industrial Internet of Things (IIoT), enabling seamless data interoperability that empowers enterprises to extract value from equipment data more efficiently. However, their role exposes a fundamental trade-off between computational efficiency and prediction accuracy—a contradiction yet to be fully resolved by existing approaches. The rapid proliferation of IoT devices has led to a corresponding surge in network traffic, posing significant challenges for traffic forecasting methods. While deep learning models like Transformers and GNNs demonstrate high accuracy in traffic prediction, their substantial computational and memory demands hinder effective deployment on resource-constrained industrial gateways. While linear models offer simplicity, they struggle to effectively capture the complex characteristics of IIoT traffic—which often exhibits high nonlinearity, significant burstiness, and a wide distribution of time scales. The inherent time-varying nature of traffic data further complicates achieving high prediction accuracy. To address these interrelated challenges, we propose the lightweight and theoretically grounded DOA-MSDI-CrossLinear framework, redefining traffic forecasting as a hierarchical decomposition–interaction problem. Unlike existing approaches that simply combine components, we recognize that industrial traffic inherently exhibits scale-dependent temporal correlations requiring explicit decomposition prior to interaction modeling. The Multi-Scale Decomposable Mixing (MDM) module implements this concept through adaptive sequence decomposition, while the Dual Dependency Interaction (DDI) module simultaneously captures dependencies across time and channels. Ultimately, decomposed patterns are fed into an enhanced CrossLinear model to predict flow values for specific future time periods. The Dream Optimization Algorithm (DOA) provides bio-inspired hyperparameter tuning that balances exploration and exploitation—particularly suited for the non-convex optimization scenarios typical in industrial forecasting tasks. Extensive experiments on real industrial IoT datasets thoroughly validate the effectiveness of this approach. Full article
(This article belongs to the Section Industrial Sensors)
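The decomposition-before-interaction idea can be illustrated with a moving-average pyramid: peel off trends at coarse-to-fine windows so each component isolates one time scale, then forecast the components separately and sum the predictions. The window sizes below are illustrative assumptions, and the DDI and CrossLinear stages are not reproduced:

```python
import numpy as np

def multiscale_decompose(x, windows=(3, 12, 48)):
    """Multi-scale mixing sketch in the spirit of the MDM module: subtract
    moving-average trends from coarse to fine; the final residual carries
    the bursty, high-frequency traffic."""
    comps, resid = [], x.astype(float).copy()
    for w in sorted(windows, reverse=True):
        trend = np.convolve(resid, np.ones(w) / w, mode="same")
        comps.append(trend)
        resid = resid - trend
    comps.append(resid)
    return comps  # coarse trends first, fine residual last

# Each component can then be forecast by a light linear head (CrossLinear-style)
# and the per-scale predictions summed back into one traffic forecast.
```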
27 pages, 6866 KB  
Article
Recovering Gamma-Ray Burst Redshift Completeness Maps via Spherical Generalized Additive Models
by Zsolt Bagoly and Istvan I. Racz
Universe 2026, 12(2), 31; https://doi.org/10.3390/universe12020031 - 24 Jan 2026
Abstract
We present an advanced statistical framework for estimating the relative intensity of astrophysical event distributions (e.g., Gamma-Ray Bursts, GRBs) on the sky to facilitate population studies and large-scale structure analysis. In contrast to the traditional approach based on the ratio of Kernel Density Estimation (KDE), which is characterized by numerical instability and bandwidth sensitivity, this work applies a logistic regression embedded in a Bayesian framework to directly model selection effects. It reformulates the problem as a logistic regression task within a Generalized Additive Model (GAM) framework, utilizing isotropic Splines on the Sphere (SOS) to map the conditional probability of redshift measurement. The model complexity and smoothness are objectively optimized using Restricted Maximum Likelihood (REML) and the Akaike Information Criterion (AIC), ensuring a data-driven bias-variance trade-off. We benchmark this approach against an Adaptive Kernel Density Estimator (AKDE) using von Mises–Fisher kernels and Abramson’s square root law. The comparative analysis reveals strong statistical evidence in favor of this Preconditioned (Precon) Estimator, yielding a log-likelihood improvement of ΔL ≈ 74.3 (Bayes factor > 10^30) over the adaptive method. We show that this Precon Estimator acts as a spectral bandwidth extender, effectively decoupling the wideband exposure map from the narrowband selection efficiency. This provides a tool for cosmologists to recover high-frequency structural features—such as the sharp cutoffs—that are mathematically irresolvable by direct density estimators due to the bandwidth limitation inherent in sparse samples. The methodology ensures that reconstructions of the cosmic web are stable against Poisson noise and consistent with observational constraints. Full article
(This article belongs to the Section Astroinformatics and Astrostatistics)
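A crude stand-in for the SOS-spline logistic GAM is logistic regression on a smooth spherical basis, here low-order real spherical harmonics, with an ordinary ridge penalty playing the role that REML-tuned smoothing plays in the paper. A sketch under those substitutions; lmax and C are illustrative, and this is not the paper's estimator:

```python
import numpy as np
from scipy.special import sph_harm
from sklearn.linear_model import LogisticRegression

def sphere_basis(lon, colat, lmax=4):
    """Smooth basis on the sphere from real spherical harmonics, a simple
    stand-in for Splines on the Sphere (SOS). lon/colat in radians."""
    feats = []
    for l in range(lmax + 1):
        for m in range(l + 1):
            y = sph_harm(m, l, lon, colat)
            feats.append(y.real)
            if m > 0:
                feats.append(y.imag)
    return np.column_stack(feats)

def fit_completeness(lon, colat, has_z, lmax=4, C=1.0):
    """Map GRB positions and redshift-measured flags (has_z in {0,1}) to a
    smooth completeness probability over the sky; the L2 penalty (C) crudely
    mimics the bias-variance control REML provides in the paper."""
    X = sphere_basis(lon, colat, lmax)
    clf = LogisticRegression(C=C, max_iter=2000).fit(X, has_z)
    return lambda lo, co: clf.predict_proba(sphere_basis(lo, co, lmax))[:, 1]
```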
45 pages, 12136 KB  
Article
GUMM-HMRF: A Fine Point Cloud Segmentation Method for Junction Regions of Hull Structures
by Yuchao Han, Fei Peng, Zhong Wang and Qingxu Meng
J. Mar. Sci. Eng. 2026, 14(3), 246; https://doi.org/10.3390/jmse14030246 - 24 Jan 2026
Abstract
Fine segmentation of point clouds in hull structure junction regions is a key technology for achieving high-precision digital inspection. Conventional hard-segmentation methods frequently yield over- or under-segmentation in junction regions such as welds, compromising the reliability of subsequent inspections. This study presents a computational framework that combines the Gaussian-Uniform Mixture Model (GUMM) with the Hidden Markov Random Field (HMRF) and follows a “coarse segmentation–model construction–fine segmentation” pipeline. The framework jointly optimizes the sampling model, the probabilistic model, and the expectation–maximization (EM) inference procedure. By leveraging model simplification and dimensionality reduction, the algorithm simultaneously addresses initial value estimation, spatial distribution characterization, and continuity constraints. Experiments on representative structures, including wall corner, T-joint weld, groove, and flange, show that the proposed framework outperforms the conventional GMM-EM method by approximately 2.5% in precision and 1.5% in both accuracy and F1 score. In local segmentation tasks of complex hull structures, the method achieves a deviation of less than 0.2 mm relative to manual measurements, validating its practical utility in engineering contexts. Full article
(This article belongs to the Section Ocean Engineering)
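The GUMM core, a Gaussian component for the structure of interest plus a uniform component that absorbs clutter, admits a compact EM sketch. The paper's HMRF continuity constraints and coarse-to-fine pipeline are omitted here:

```python
import numpy as np

def gumm_em(X, iters=50, eps=1e-9):
    """EM for a single Gaussian + uniform outlier component (GUMM sketch):
    the uniform term soaks up clutter points so the Gaussian fits the true
    surface. X: (n, d) point coordinates; returns fitted parameters and
    per-point inlier responsibilities."""
    n, d = X.shape
    vol = np.prod(X.max(0) - X.min(0)) + eps       # support volume of uniform
    mu, cov, pi = X.mean(0), np.cov(X.T) + eps * np.eye(d), 0.9
    for _ in range(iters):
        diff = X - mu
        inv = np.linalg.inv(cov)
        mah = np.einsum("ij,jk,ik->i", diff, inv, diff)
        norm = np.exp(-0.5 * mah) / np.sqrt((2 * np.pi) ** d * np.linalg.det(cov))
        r = pi * norm / (pi * norm + (1 - pi) / vol + eps)  # E-step
        w = r / (r.sum() + eps)                              # M-step
        mu = w @ X
        cov = (X - mu).T @ ((X - mu) * w[:, None]) + eps * np.eye(d)
        pi = r.mean()
    return mu, cov, pi, r
```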