Search Results (332)

Search Parameters:
Keywords = Generalized Uncertainty Principle

21 pages, 1178 KB  
Article
Soft-Community Kernel Rényi Spectrum for Semantic Uncertainty Estimation in Large Language Models
by Zongkai Li and Junliang Du
Entropy 2026, 28(4), 442; https://doi.org/10.3390/e28040442 - 14 Apr 2026
Viewed by 217
Abstract
Uncertainty estimation is critical for deploying large language models (LLMs) in safety-sensitive and decision-critical applications. Recent approaches estimate semantic uncertainty by clustering multiple sampled responses into equivalence classes and measuring their diversity via entropy-based criteria. However, existing methods typically rely on greedy hard clustering and von Neumann entropy, which suffer from sensitivity to clustering order, noise in semantic equivalence judgments, and limited control over spectral contributions. In this work, we propose a principled information-theoretic framework for LLM semantic uncertainty estimation based on soft semantic communities and kernel Rényi entropy. Given multiple generations for a query, we construct a weighted semantic graph using pairwise semantic similarity scores and infer soft community assignments via weighted graph community detection. These soft assignments induce a positive semi-definite semantic kernel that captures the distribution of semantic modes without enforcing hard equivalence relations. Uncertainty is then quantified by the Rényi entropy of the kernel spectrum, yielding a tunable measure that interpolates between sensitivity to dominant semantic modes and long-tail semantic diversity. Compared to prior von Neumann entropy-based estimators, the proposed Rényi spectral uncertainty offers improved robustness to semantic noise, reduced dependence on clustering heuristics, and greater flexibility through its order parameter. Extensive experiments on question answering tasks demonstrate that our method provides more stable and discriminative uncertainty estimates, particularly under limited sampling budgets and noisy semantic judgments. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
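
The kernel-spectrum measure described in this abstract can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: it assumes a symmetric pairwise-similarity matrix as input (the paper instead derives its kernel from soft community assignments) and computes the Rényi entropy of the trace-normalized spectrum; all names are hypothetical.

```python
import numpy as np

def renyi_spectral_uncertainty(similarity, alpha=2.0):
    """Renyi entropy of the spectrum of a trace-normalized semantic kernel.

    `similarity` is a symmetric matrix of pairwise semantic similarity
    scores in [0, 1] for n sampled responses (illustrative input; the
    paper builds its kernel from soft community assignments instead).
    """
    K = (np.asarray(similarity, dtype=float) + np.asarray(similarity, dtype=float).T) / 2
    rho = K / np.trace(K)                       # trace-one, density-like matrix
    eigvals = np.clip(np.linalg.eigvalsh(rho), 0.0, None)
    eigvals = eigvals[eigvals > 1e-12]          # drop numerical zeros
    if np.isclose(alpha, 1.0):                  # alpha -> 1: von Neumann entropy
        return float(-np.sum(eigvals * np.log(eigvals)))
    return float(np.log(np.sum(eigvals ** alpha)) / (1.0 - alpha))

# Two near-identical answers -> entropy near 0;
# two unrelated answers -> entropy near log(2).
agree = np.array([[1.0, 0.99], [0.99, 1.0]])
disagree = np.array([[1.0, 0.01], [0.01, 1.0]])
```

Larger `alpha` emphasizes the dominant semantic mode, while values near 1 weight the long tail more heavily, matching the tunability the abstract describes.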

26 pages, 23804 KB  
Article
Sensorless Admittance Control for Cable-Driven Synchronous Continuum Robot
by Myung-Oh Kim, Jaeuk Cho, Dongwoon Choi, TaeWon Seo and Dong-Wook Lee
Appl. Sci. 2026, 16(8), 3637; https://doi.org/10.3390/app16083637 - 8 Apr 2026
Viewed by 244
Abstract
The synchronous continuum robot (SCR) was developed to emulate biological structures, such as animal tails and elephant trunks, based on continuum robot principles. By synchronizing disk motions, the SCR generates biologically inspired continuous movements while maintaining precise trajectory control. However, its synchronization-based architecture limits adaptability during physical interaction due to rigid trajectory-following characteristics. To address this limitation, this paper proposes a sensorless variable admittance control (VAC)-based compliant motion generation framework for the SCR. A dynamic model-based sensorless disturbance observer is designed to estimate external torques without additional force sensors. To compensate for uncertainties inherent in the cable-driven transmission mechanism, an adaptive term is incorporated into the parameter identification process, improving disturbance estimation accuracy. Based on the estimated external torques, admittance parameters are adaptively modulated according to joint angles, angular velocities, and robot posture, enabling interaction-aware motion speed regulation. Furthermore, the proposed method simultaneously enforces constraints on both joint angles and angular velocities through the adaptive regulation of target positions and velocities, ensuring safe and physically feasible motion. Experimental results under various interaction scenarios demonstrate reliable contact-independent force estimation and effective compliant motion generation. The proposed framework provides an integrated solution for robust force estimation, adaptive compliance control, and simultaneous constraint enforcement in mechanically synchronized continuum robots. Full article
(This article belongs to the Section Robotics and Automation)
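
For context, admittance controllers of the kind this abstract describes are typically built on the standard second-order law below; the symbols and the exact structure used in the paper are assumptions, not taken from the source:

```latex
\[
M\,\Delta\ddot{q} + D\,\Delta\dot{q} + K\,\Delta q = \hat{\tau}_{\mathrm{ext}},
\qquad \Delta q = q_{\mathrm{ref}} - q_{\mathrm{d}},
\]
```

where \(M\), \(D\), \(K\) are virtual inertia, damping, and stiffness, \(\hat{\tau}_{\mathrm{ext}}\) is the observer's torque estimate, and a *variable* admittance scheme modulates \(D\) and \(K\) online, here as functions of joint angles, velocities, and posture.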

23 pages, 399 KB  
Article
Integrating Model Explainability and Uncertainty Quantification for Trustworthy Fraud Detection
by Tebogo Forster Mapaila and Makhamisa Senekane
Technologies 2026, 14(4), 212; https://doi.org/10.3390/technologies14040212 - 3 Apr 2026
Viewed by 383
Abstract
Financial fraud and money laundering continue to challenge financial stability and regulatory oversight, motivating the widespread adoption of machine learning models for transaction monitoring. Although ensemble models such as Random Forest and XGBoost achieve strong predictive performance, their deployment in high-stakes financial environments is constrained by limited interpretability, overconfident predictions, and the absence of principled mechanisms for expressing decision uncertainty. Emerging regulatory expectations increasingly emphasise transparency, accountability, and operational reliability, underscoring the need for evaluation frameworks that extend beyond predictive accuracy. This study proposes the Integrated Transparency and Confidence Framework (ITCF), a deployment-oriented approach that unifies model explainability, statistically valid uncertainty quantification, and operational decision support for fraud detection. ITCF combines instance-level explanations generated via Local Interpretable Model-Agnostic Explanations (LIME) with distribution-free uncertainty estimation using split conformal prediction. The framework incorporates selective explainability, abstention-based routing, and uncertainty-driven triage to support human-in-the-loop review. Using the PaySim dataset of 6,362,620 mobile-money transactions, Random Forest and XGBoost models are evaluated under extreme class imbalance using F1-score, AUC–ROC, and Matthews Correlation Coefficient (MCC). At a target coverage level of 90% (α=0.1), both models achieve empirical coverage close to the target level, with XGBoost producing smaller prediction sets and superior recall, MCC, and latency. ITCF provides transaction-level explanations for uncertain cases and specifies an auditable workflow that is intended to support transparency, traceability, and risk-aware human review, thereby enabling defensible human decision-making in regulated environments. 
Overall, this study illustrates how explainability and uncertainty quantification can be combined in a deployment-oriented evaluation workflow while noting that real-world validation remains a future endeavour. Full article
(This article belongs to the Special Issue Privacy-Preserving and Trustworthy AI for Industrial 4.0 and Beyond)
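
Split conformal prediction, the uncertainty component of ITCF, can be sketched generically as follows. This is an illustration with hypothetical names, not the paper's code; it assumes classifier probability outputs on a held-out calibration set.

```python
import numpy as np

def conformal_prediction_sets(cal_probs, cal_labels, test_probs, alpha=0.1):
    """Split conformal prediction sets from classifier probabilities.

    `cal_probs`/`test_probs` are (n, n_classes) predicted probabilities on
    a held-out calibration set and on test points; `cal_labels` holds the
    calibration ground truth. Names are illustrative, not the paper's API.
    """
    n = len(cal_labels)
    # Nonconformity score: 1 minus the probability of the true class.
    nonconf = 1.0 - cal_probs[np.arange(n), cal_labels]
    # Finite-sample-corrected (1 - alpha) quantile of calibration scores.
    q_level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    qhat = np.quantile(nonconf, q_level, method="higher")
    # A class joins the set when its nonconformity stays below the threshold.
    return [np.where(1.0 - row <= qhat)[0].tolist() for row in test_probs]
```

Under exchangeability the returned sets cover the true label with probability at least 1 − α; singleton sets flag confident predictions, while large (or empty) sets can be routed to human review, as in ITCF's abstention-based triage.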

18 pages, 1010 KB  
Article
Dynamics of a Classical Bi-Metric Cosmology with GUP-Deformed Poisson Brackets
by Diego Castillo and Fernando Méndez
Universe 2026, 12(4), 103; https://doi.org/10.3390/universe12040103 - 2 Apr 2026
Viewed by 402
Abstract
This work analyzes a bi-metric cosmological model where two sectors, characterized by their respective scale factors, interact through a deformed Poisson bracket structure. This deformation is based on the Generalized Uncertainty Principle (GUP). Through a numerical analysis, we study how this interaction affects the expansion dynamics. The results indicate that for positive values of the deformation parameter, the coupling induces an acceleration that leads to a Big Rip singularity in finite time, even in the absence of a cosmological constant. A power-law relation is established between the deformation parameter and the critical time of divergence for the scale factors. Finally, the regime with a negative deformation parameter is also investigated. In this case, the symplectic structure becomes singular, leading to the contraction of one sector and the freezing of the other. Full article
(This article belongs to the Section Cosmology)
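
For readers unfamiliar with the GUP, the one-parameter deformation usually takes the form below; whether the paper uses exactly this bracket is an assumption on my part:

```latex
\[
[\hat{x}, \hat{p}] = i\hbar\,\bigl(1 + \beta \hat{p}^{2}\bigr)
\;\;\longrightarrow\;\;
\{x, p\} = 1 + \beta p^{2},
\qquad
\Delta x\,\Delta p \;\ge\; \frac{\hbar}{2}\bigl(1 + \beta\,(\Delta p)^{2}\bigr),
\]
```

where the classical deformed Poisson bracket (right of the arrow) is the structure the authors impose on the two scale-factor sectors, and \(\beta\) is the deformation parameter whose sign drives the Big Rip versus freezing regimes described above.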

23 pages, 599 KB  
Review
Towards Sustainable Manufacturing in Developing Economies: A Systems-Based Model Linking Industry 5.0, SCE, and Green HRM
by Rubee Singh, Amit Joshi, Hiranya Dissanayake, Akshay Singh, Anuradha Iddagoda, Vikas Kumar and Siwarit Pongsakornrungsilp
Sustainability 2026, 18(7), 3404; https://doi.org/10.3390/su18073404 - 1 Apr 2026
Viewed by 282
Abstract
Manufacturing firms face intensifying pressure to achieve sustainability while remaining competitive under environmental stress, rapid technological change, and institutional uncertainty—challenges that are particularly acute in developing economies. Although Industry 5.0 has emerged as a human-centric and sustainability-oriented industrial paradigm, limited research explains how it can be systematically operationalized to enhance sustainable business performance. This study addresses this gap by developing an integrative conceptual framework linking Industry 5.0, Smart Circular Economy (SCE), and Green Human Resource Management (GHRM) within manufacturing contexts. Drawing on resource-based, dynamic capability, and institutional perspectives, the framework conceptualizes Industry 5.0 as a strategic digital orientation that enables circular resource orchestration and sustainability-aligned human capital systems. SCE and GHRM are positioned as complementary operational mechanisms that translate Industry 5.0 principles into organizational capabilities. Innovation capability is introduced as a mediating dynamic capability explaining how technological and human resource investments generate environmental, social, and economic performance outcomes. Digital maturity and policy support are incorporated as contextual moderators shaping transformation pathways in developing economies. The proposed model advances sustainability-oriented industrial transformation theory by integrating previously fragmented research streams into a coherent socio-technical capability architecture. It also offers actionable insights for managers and policymakers seeking to align digital industrial development with long-term sustainability objectives under conditions of institutional heterogeneity. Full article

51 pages, 2241 KB  
Review
Mathematical Analysis Methods for Quantitative Scenario Generation of Renewable Power Output: A Comprehensive Review
by Tong Ma, Boyu Qin, Shidong Hong and Yiwei Su
Energies 2026, 19(7), 1701; https://doi.org/10.3390/en19071701 - 31 Mar 2026
Viewed by 338
Abstract
As the proportion of renewable power continues to increase, its inherent intermittency and volatility pose serious challenges to the security and stability of power systems. Scenario generation technology serves as a key tool supporting decision-making methods such as stochastic optimization and risk analysis. By generating representative power output scenarios, it can effectively characterize the uncertainty of renewable power output. This paper systematically reviews mainstream methods for the scenario generation of renewable power output, categorizing them into two major classes: sampling-based methods and model-based methods. Among them, sampling-based methods include Monte Carlo sampling, Latin hypercube sampling (LHS), Markov chains (MCs), and Copula functions. Model-based methods encompass artificial neural networks (ANNs), long short-term memory networks (LSTMs), autoregressive moving average models (ARMAs), generative adversarial networks (GANs), variational autoencoders (VAEs), diffusion models and transformer-based models. This paper elaborates on the principles and characteristics of each type of method. Moreover, scenario quality is evaluated from three dimensions: output-based metrics for numerical accuracy, distribution-based metrics for statistical consistency, and event-based metrics for key operational event representation. The current research challenges and future research directions are also summarized to provide a reference for modeling the uncertainty of renewable output. Full article
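
Of the sampling-based methods listed, Latin hypercube sampling is the simplest to illustrate. The sketch below (function name and interface are hypothetical, not from the review) stratifies each dimension into equal bins and pairs the strata at random:

```python
import numpy as np

def latin_hypercube(n_samples, n_dims, rng=None):
    """Basic Latin hypercube sample on the unit hypercube (a sketch).

    Each dimension is cut into n_samples equal strata; exactly one point
    lands in each stratum, and stratum order is shuffled independently
    per dimension.
    """
    rng = np.random.default_rng(rng)
    # One uniform draw inside each stratum, per dimension.
    u = rng.random((n_samples, n_dims))
    points = (np.arange(n_samples)[:, None] + u) / n_samples
    # Decorrelate dimensions by shuffling each column in place.
    for d in range(n_dims):
        rng.shuffle(points[:, d])
    return points
```

In scenario generation these uniform samples are typically pushed through inverse marginal CDFs of the forecast output distribution, often after a Copula imposes the cross-site dependence structure the review discusses.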

20 pages, 2881 KB  
Article
Structural Deformation Prediction and Uncertainty Quantification via Physics-Informed Data-Driven Learning
by Tong Zhang and Shiwei Qin
Appl. Sci. 2026, 16(7), 3194; https://doi.org/10.3390/app16073194 - 26 Mar 2026
Viewed by 277
Abstract
In structural health monitoring, purely data-driven methods for deformation prediction are often susceptible to time-varying boundary conditions under complex operating scenarios, leading to insufficient physical interpretability and limited generalization across different conditions. To address these challenges, this study proposes a Physics-Informed Dual-branch Long Short-Term Memory framework (PINN-DualSHM). The framework employs dual-branch LSTMs to separately extract temporal features of structural mechanical responses and environmental thermal effects. Dynamic decoupling and fusion of these heterogeneous features are achieved through an adaptive cross-attention mechanism. Furthermore, physical priors, including the thermodynamic superposition principle and structural settlement monotonicity, are embedded into the loss function as regularization terms, complemented by a dual uncertainty quantification system based on heteroscedastic regression and MC Dropout. Experimental results based on long-term measured data from an industrial base project in Shenzhen demonstrate that PINN-DualSHM significantly outperforms baseline models such as LSTM, CNN-LSTM, and GAT-LSTM. Specifically, the Root Mean Square Error (RMSE) is reduced by 65.25%, and the coefficient of determination (R2) reaches 0.925. Physical consistency analysis confirms that the introduction of physical constraints effectively suppresses anomalous predictive fluctuations that violate mechanical laws. Uncertainty decomposition reveals that aleatoric uncertainty is dominant (93.7%), objectively indicating that the current system’s accuracy bottleneck lies in sensor noise rather than model capability. By enhancing prediction accuracy while providing credible quantitative assessments and physical interpretability, the proposed method provides a scientific basis for the operation, maintenance optimization, and upgrading decisions of SHM systems. Full article
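
The aleatoric/epistemic split reported in this abstract follows the usual MC Dropout decomposition. A minimal sketch, assuming the network outputs a predictive mean and a heteroscedastic variance on each stochastic forward pass (all names are illustrative stand-ins for the paper's two heads):

```python
import numpy as np

def decompose_uncertainty(mu_samples, var_samples):
    """Aleatoric/epistemic split from MC Dropout forward passes (a sketch).

    `mu_samples` and `var_samples` are (T, n) arrays holding the predicted
    mean and heteroscedastic variance from T stochastic passes over n
    prediction points.
    """
    aleatoric = var_samples.mean(axis=0)   # average predicted noise variance
    epistemic = mu_samples.var(axis=0)     # spread of the means across passes
    return aleatoric, epistemic, aleatoric + epistemic
```

In this decomposition, the abstract's finding that aleatoric uncertainty dominates corresponds to `aleatoric / (aleatoric + epistemic)` being about 0.937, i.e. the error budget is set by sensor noise rather than model capacity.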

32 pages, 4987 KB  
Article
Reinterpreting Le Corbusier’s Concept of Unlimited Growth for University Campus Transformation Under Demographic Decline: A Typo-Morphological and Spatial Adaptation Framework
by Bih-Chuan Lin, Chin-Feng Lin and Xuan-Xi Wang
Sustainability 2026, 18(7), 3226; https://doi.org/10.3390/su18073226 - 25 Mar 2026
Viewed by 466
Abstract
Declining birth rates are reshaping higher education across East Asia, accelerating the large-scale underutilization and, in some contexts, partial abandonment of university campus assets. Although adaptive reuse has been widely discussed, campus transformation is often framed primarily as a programmatic or policy problem, with limited attention to the inherited spatial logic embedded in campus morphology. This study revisits Le Corbusier’s concept of unlimited growth as a generative framework for campus transformation. Rather than treating it as a museum-specific historical typology, the research reinterprets unlimited growth as a scalable spatial logic defined by modular continuity, circulation hierarchy, and open-ended sequencing. To enhance reproducibility and operational clarity, the study formalizes a typo-morphological decoding protocol—modules, circulation, and growth sequence—and applies it through plan-, section-, and diagram-based analysis. Through comparative examination of three museum precedents—Sanskar Kendra Museum, the National Museum of Western Art (Tokyo), and the Chandigarh Museum and Art Gallery—the study extracts a set of transferable spatial mechanisms: modular increment, circulation-centered ordering, directional displacement, and fifth-façade ecological continuity. These mechanisms are then translated into an operational right-sizing model and tested through a design-operational demonstrator on a single anonymized Taiwanese campus experiencing demographic contraction. The findings indicate that unlimited growth functions not merely as a formal principle but as a spatial governance logic that supports phased consolidation, adaptive recomposition, and system-level coherence under long-term uncertainty. 
Importantly, this framework contributes to sustainability by reducing land consumption through spatial consolidation, minimizing unnecessary new construction, enabling adaptive reuse of existing campus assets, and improving long-term resource-use efficiency through phased right-sizing and ecological continuity. This study further advances a reproducible, mechanism-based methodological framework for institutional spatial transformation, providing a transferable approach for large-scale campus restructuring under conditions of long-term demographic and environmental uncertainty. Full article
(This article belongs to the Special Issue Urban Resilience and Sustainable Construction Under Disaster Risk)

26 pages, 621 KB  
Review
Toxicity and Appeal of Flavoured E-Cigarettes and Flavour Ban Outcomes: A Narrative Review
by Stijn Everaert, Filip Lardon, Eric Deconinck, Sophia Barhdadi, Dirk Adang, Nicolas Van Larebeke, Greet Schoeters, Adrien Meunier, Veerle Maes, Suzanne Gabriels, Eline Remue, Katrien Eger, Pieter Goeminne and Frieda Matthys
Int. J. Environ. Res. Public Health 2026, 23(4), 416; https://doi.org/10.3390/ijerph23040416 - 25 Mar 2026
Viewed by 1229
Abstract
Background: E-cigarette use has risen sharply among young never-smokers, largely driven by the availability of several thousand appealing flavours. This narrative review synthesises evidence on the health effects of vaping, flavour toxicology and attractiveness, designs and outcomes of flavour bans, and complementary measures. Methods: Peer-reviewed publications and institutional reports (up to January 2026) were retrieved from PubMed, Web of Science, Google Scholar, and reference lists of included articles. Evidence from about 200 references was synthesised by a multidisciplinary working group. Results: Although flavouring substances are generally considered safe for ingestion, their inhalation toxicity remains uncertain. In vitro and in vivo studies have reported oxidative stress, inflammation, cytotoxicity, impaired ciliary function, transcriptomic changes, genotoxicity, and DNA damage. These findings—along with the strong youth appeal of fruit/sweet flavours, the inconclusive effects of flavours on smoking cessation, and persisting uncertainties—support banning non-tobacco e-cigarette flavours under the precautionary principle. Flavour bans can reduce e-cigarette use and initiation, especially among young adults, although partial substitution towards combustible cigarettes has been reported in some U.S. states. Policy success requires effective enforcement, prevention of industry circumvention, curbing cross-border sales, and closing regulatory loopholes—ideally at the international level (e.g., EU-wide). Conclusions: E-cigarette flavours may increase vaping toxicity and strongly appeal to youth, justifying flavour bans to prioritise youth protection. To maximise effectiveness, accompanying measures and sustained investment in tobacco prevention, youth education, and accessible evidence-based smoking cessation support are essential. Full article

23 pages, 782 KB  
Article
Computational Economics of Circular Construction: Machine Learning and Digital Twins for Optimizing Demolition Waste Recovery and Business Value
by Marta Torres-Polo and Eduardo Guzmán Ortíz
Computation 2026, 14(4), 76; https://doi.org/10.3390/computation14040076 - 25 Mar 2026
Viewed by 423
Abstract
Construction and demolition waste (CDW) represents a critical environmental challenge in the building sector, with global generation exceeding 3.57 billion tonnes annually. The circular economy (CE) framework offers a transformative pathway through selective deconstruction and material recovery, yet implementation faces significant barriers including information asymmetry, supply chain fragmentation, and regulatory uncertainty. This study conducts a systematic literature review using the Context–Mechanism–Outcome (CMO) framework to analyze how computational methods, specifically Digital Twins (DT), Building Information Modeling (BIM), Internet of Things (IoT), blockchain, artificial intelligence, and robotics, act as enablers for resilience in CDW management. Following PRISMA 2020 guidelines and realist synthesis principles, we analyzed 42 high-quality empirical studies from Web of Science and Scopus (2015–2025). Our analysis identifies seven primary mechanisms: traceability (M1), simulation (M2), classification (M3), tracking (M4), collaboration (M5), analytics (M6) and robotics (M7). These mechanisms interact with four critical contexts (information asymmetry, supply chain fragmentation, economic uncertainty, operational risks) to generate outcomes at two levels: resilience capabilities (visibility, monitoring, collaboration, flexibility, anticipation) and performance indicators (recovery rates, cost reduction, CO2 emissions mitigation, occupational safety). Key findings from the CMO analysis reveal that blockchain-enabled traceability increases material recovery rates by 15–25%, DT simulation reduces deconstruction costs by 20–30%, and computer vision automation improves sorting accuracy to 85–95%. 
The study contributes middle-range theories explaining how digital technologies enable circular transitions under specific contextual conditions, offering actionable strategic implications for researchers, project managers, technology developers, and policymakers committed to advancing computational economics in sustainable construction. Full article

19 pages, 635 KB  
Article
Conformal Prediction for Counterfactual Detection in Concept Learning from Synthetic Visual Patterns
by Ulf Norinder, Stephanie Lowry, Heimo Müller and Andreas Holzinger
Electronics 2026, 15(7), 1346; https://doi.org/10.3390/electronics15071346 - 24 Mar 2026
Viewed by 384
Abstract
Reliable detection of previously unseen classes under distributional shift remains a central challenge in concept learning and explainable artificial intelligence. In particular, high-performance deep learning models often lack statistically grounded mechanisms to signal when an instance deviates from learned concepts. This paper addresses this limitation by investigating whether conformal prediction can be effectively combined with a YOLOv5 deep learning classifier to enable principled counterfactual detection without prior exposure to the counterfactual class. As a controlled testbed, we employ Kandinsky patterns, a structured benchmark widely used in explainable AI research due to its rule-based generative transparency and suitability for concept learning studies. The proposed framework first classifies valid and invalid patterns and subsequently applies inductive conformal prediction to obtain calibrated prediction sets at a user-defined significance level. Counterfactual instances are, at start, identified based solely on information from known true and false patterns, without explicit training examples of the counterfactual class. Experimental results demonstrate that the conformalized detector reliably identifies a substantial proportion of previously unseen counterfactual patterns while maintaining statistical validity. In addition, the method flags unlabeled (“empty”) instances, thereby providing a principled signal for the emergence of new concepts. By conformalizing YOLOv5 outputs, the approach establishes a statistically sound mechanism for uncertainty-aware detection of divergent classes, contributing to robust and explainable concept learning in structured visual pattern recognition. Full article

22 pages, 4742 KB  
Article
PromptSeg: An End-to-End Universal Medical Image Segmentation Method via Visual Prompts
by Minfan Zhao, Bingxun Wang, Jun Shi and Hong An
Entropy 2026, 28(3), 342; https://doi.org/10.3390/e28030342 - 18 Mar 2026
Viewed by 351
Abstract
Deep learning has achieved remarkable advancements in medical image segmentation, yet its generalization capability across unseen tasks remains a significant challenge. The variety of task objectives, disease-dependent labeling variations, and multi-center data contribute to the high uncertainty of task-specific models on unseen distributions. In this study, we propose PromptSeg, an innovative Transformer-based unified framework for universal 2D medical image segmentation. From an information-theoretic perspective, PromptSeg formulates the segmentation process as a conditional entropy minimization problem, utilizing visual prompts as side information to reduce the uncertainty of the target task. Guided by the information bottleneck principle, PromptSeg aims to utilize the provided visual prompts to filter out redundant noise and learn contextual representations, thereby breaking the restrictions of the task-specific paradigm. When faced with unseen datasets or segmentation targets, our method only requires a few annotated visual prompt pairs to extract task-specific semantics and segment the query images without retraining. Extensive experiments on CT and MRI datasets demonstrate that PromptSeg not only outperforms state-of-the-art methods but also exhibits strong multi-modality generalization capabilities. Full article

22 pages, 307 KB  
Article
The Awareness-First Theory: A Coherence Principle Underlying Active Inference and Physical Law
by Jason Clarke
Entropy 2026, 28(3), 306; https://doi.org/10.3390/e28030306 - 9 Mar 2026
Viewed by 847
Abstract
The Free Energy Principle (FEP) and Active Inference provide a unifying variational framework for modelling perception, action, learning, and self-organisation across biological systems. While highly successful at explaining how systems maintain organisation under uncertainty, these frameworks remain explicitly neutral with respect to a foundational question: why there is experience at all. This paper argues that this limitation reflects not an empirical gap but a misplaced starting point. The Awareness-First Theory (AFT) inverts the usual explanatory order by beginning from the givenness of awareness itself and asking what must be the case for any world to appear coherently. This requirement is formalised as a Coherence Principle, expressed as a variational stationarity condition, δA=0, which specifies the invariance of coherent awareness across changing appearances. I argue that familiar variational principles, most notably free-energy minimisation (δF=0) and stationary-action physics (δS=0), can be understood as restricted projections of this parent constraint under specific abstractions. Active Inference therefore does not generate awareness but describes how locally bounded systems maintain coherence within awareness under uncertainty. Making this projection structure explicit dissolves the explanatory gap between physical process and phenomenal presence, revealing the gap itself as a category error. Although the Coherence Principle itself is transcendental rather than empirical, the AFT generates testable consequences at the level of its projections, including predicted dissociations between inferential optimisation and phenomenological coherence in dreaming, altered states, meditation, and psychopathology. Full article
(This article belongs to the Special Issue Active Inference in Cognitive Neuroscience)
30 pages, 3865 KB  
Review
Advanced Temperature Prediction for Electric Motors: A Review from Physical Foundations to Physics-Informed Intelligence
by Yaofei Han, Qian Zhang, Yongfeng Liu, Shaofeng Chen, Zhixun Ma, Yawei Li and Jianping Sun
Machines 2026, 14(3), 305; https://doi.org/10.3390/machines14030305 - 7 Mar 2026
Viewed by 576
Abstract
Motor temperature prediction is critical for ensuring the reliability and safe operation of high-power-density electric drives. Since direct measurement of internal temperatures, especially rotor and magnet temperatures, is often impractical, virtual sensing has become an important research direction. This review provides a structured classification of motor temperature prediction technologies. First, the physical foundations of motor thermal behavior are revisited, emphasizing multi-source loss generation, electro-thermal coupling mechanisms, and the dominant influence of time-varying boundary conditions. Second, existing estimation methodologies are systematically categorized into physics-based thermal models, observer- and identification-based approaches, and data-driven machine learning frameworks. Their mathematical principles, information bottlenecks, computational trade-offs, and deployment constraints are comparatively analyzed. Particular attention is given to hybrid and physics-informed methods, where reduced-order thermal networks, parameter adaptation, and learning-based residual correction are integrated to enhance robustness. Future developments should focus on uncertainty-aware estimation, lifecycle-adaptive modeling, and reliable temperature field inference under sparse sensing conditions. Full article
21 pages, 1429 KB  
Review
Nanopore Sequencing in Veterinary Pathogen Detection: A Review of Technologies and Applications
by Lei Xu, Leilei Zhao, Zeyu Tong, Kai Peng, Mianzhi Wang, Runsheng Li, Zhiqiang Wang and Ruichao Li
Vet. Sci. 2026, 13(3), 216; https://doi.org/10.3390/vetsci13030216 - 25 Feb 2026
Viewed by 758
Abstract
Nanopore-based sequencing has emerged as a revolutionary tool for animal pathogen genomics, offering capabilities unattainable with Sanger and next-generation sequencing (NGS). Despite rapid technical progress, routine veterinary deployment still faces uncertainty in study design, sample preparation, and interpretation thresholds across diverse hosts and sample matrices. Accordingly, this review consolidates recent evidence and provides workflow-oriented guidance for veterinary diagnostics and One Health surveillance. The technology's portability, ability to generate real-time long-read data, and minimal infrastructure requirements enable rapid, on-site sequencing for veterinary diagnostics and surveillance. This review examines the principles of nanopore sequencing and its advantages over conventional methods, surveying recent applications across viral, bacterial (including antimicrobial resistance, AMR), and parasitic pathogen detection in animals. In viral diagnostics, it facilitates rapid whole-genome sequencing and outbreak tracing in field settings. For bacterial pathogens, it enables near-complete genome assembly and identification of plasmid-borne AMR genes. Emerging studies also demonstrate its utility in parasitology, from high-resolution species identification to whole-genome assemblies. We compare these advancements with traditional diagnostics, highlighting strengths in speed and comprehensiveness while addressing current limitations in accuracy and host-DNA interference. As the technology matures through improvements in chemistry and adaptive sampling, nanopore sequencing is poised to transform veterinary pathogen detection and bolster One Health surveillance of emerging zoonoses. Full article
