
Search Results (3,240)

Search Parameters:
Keywords = formal methods

17 pages, 811 KB  
Article
Statistical Inference for the Inverted Kumaraswamy Accelerated Model Under Type-I Generalized Hybrid Censoring with Applications
by Gamal M. Ismail, Ohud A. Alqasem, Lamis M. Alamoudi, Maryam Ibrahim Habadi, Meshayil M. Alsolmi, Raga Hassan Ali Shiekh, Md. Mahabubur Rahman and Samah M. Ahmed
Symmetry 2026, 18(3), 446; https://doi.org/10.3390/sym18030446 - 4 Mar 2026
Abstract
This study investigates methodologies for robust parameter estimation of the inverted Kumaraswamy model using data derived from step-stress partially accelerated life testing with Type-I generalized hybrid censoring. We formulate estimation procedures within both frequentist (maximum likelihood) and Bayesian frameworks, including the construction of asymptotic and credible intervals, and provide a formal derivation of the associated asymptotic and bootstrap confidence intervals. To address the analytical intractability of the Bayesian estimators, we employ Markov Chain Monte Carlo techniques. The proposed methods are illustrated through a worked example, an application to real-world precipitation data, and a simulation study. Full article
(This article belongs to the Section Mathematics)
32 pages, 5809 KB  
Article
Ontology-Driven Automatic Scoring of Mechanization Rate in Power Grid Construction Projects Using Large Language Models
by Jiawei Chen, Xin Xu, Jun Liu, Yunyun Gao, Jingjing Guo, Zhuqing Ding, Mao Zhang, Juncheng Zhu and Yifan He
Buildings 2026, 16(5), 1010; https://doi.org/10.3390/buildings16051010 - 4 Mar 2026
Abstract
Driven by the global energy transition, mechanized construction—characterized by enhanced safety, efficiency, and quality—is becoming the mainstream approach in power grid development. Mechanization assessment serves as a critical tool for guiding and optimizing this process, yet current practices remain largely manual, resulting in inefficiency, time-consuming operations, and a lack of real-time insights, which severely limit its practical utility for dynamic project guidance. To address these challenges, this study proposes a novel framework that integrates semantic technology (i.e., ontology) and large language models (LLMs). The framework first constructs a semantic model of the power grid construction domain using ontology. An LLM is then employed to convert multi-source project data into structured ontological instances. Building on this, mechanization assessment criteria are formalized into machine-executable Semantic Web Rule Language (SWRL) rules, which enable automated reasoning and scoring through an ontological reasoner. Furthermore, the LLM is utilized to generate comprehensive and intelligible assessment reports based on the reasoning outputs. To validate the proposed method, 126 real-world project cases were applied to the system. The results demonstrate a 96% accuracy rate in mechanization assessment outcomes compared to expert evaluations. The approach facilitates an objective, standardized, and dynamic evaluation of construction mechanization levels, providing a foundation for intelligent and scalable management models in power grid construction. Full article
(This article belongs to the Special Issue Intelligence and Automation in Construction—2nd Edition)
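The rule-based scoring step this abstract describes can be sketched in miniature. The rule names, project fields, and point values below are entirely hypothetical, and a real deployment would encode the criteria as SWRL rules evaluated by an ontological reasoner over OWL instances rather than as Python lambdas:

```python
# Illustrative sketch only: scoring structured "ontological instances" with
# declarative rules, loosely mirroring the SWRL-rule reasoning step described
# above. Field names, thresholds, and point values are hypothetical.
RULES = [
    ("excavation_mechanized", lambda p: p["excavator_hours"] > 0, 40),
    ("transport_mechanized",  lambda p: p["truck_trips"] >= 10,   30),
    ("hoisting_mechanized",   lambda p: p["crane_used"],          30),
]

def mechanization_score(project: dict) -> int:
    """Sum the points of every rule whose condition holds for the project."""
    return sum(pts for _, cond, pts in RULES if cond(project))

project = {"excavator_hours": 12, "truck_trips": 8, "crane_used": True}
print(mechanization_score(project))  # 40 + 30 = 70
```

The declarative rule table is the point of contact with the paper's approach: assessment criteria live as data, not control flow, so a reasoner (or an LLM generating the report) can inspect which rules fired.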

14 pages, 849 KB  
Review
Eye Lens Radiation Exposure During TAVI: Current Evidence and Imaging-Based Strategies for Dose Reduction
by Chiara Zanon, Alessandro Fiocco, Vincenzo Tarzia and Emilio Quaia
Tomography 2026, 12(3), 36; https://doi.org/10.3390/tomography12030036 - 4 Mar 2026
Abstract
Background: Transcatheter aortic valve implantation (TAVI) is increasingly performed in fluoroscopy-intensive environments, raising concerns about occupational eye lens dose (equivalent dose to the eye lens, Hp(3)) and the risk of radiation-induced cataract, particularly after the reduction of recommended annual eye lens dose limits to 20 mSv. Purpose: To summarize evidence on eye lens radiation exposure during TAVI, identify procedural and occupational determinants, and review strategies to reduce exposure with a focus on imaging optimization. Methods: We performed a narrative review of observational and prospective studies reporting direct eye-level dose measurements or validated surrogate eye lens dose estimates (head-level, chest-level, or DAP-normalized) during TAVI and related structural heart procedures. This approach was chosen to provide a qualitative synthesis of the available evidence rather than a formal systematic review. Results: Reported operator eye lens doses typically ranged from 30 to 110 µSv per procedure, with higher exposure during transapical/transaortic access and among staff working close to the patient (e.g., anesthesiologists and circulating nurses). Additional shielding and lead-free drapes reduced normalized eye dose by approximately 25–40%, and RADPAD® use reduced operator eye-level dose from 24.3 to 14.8 µSv per procedure (p = 0.008). At these levels, cumulative exposure may approach recommended regulatory limits after approximately 150–300 procedures, depending on role, access route, and shielding practices. Conclusion: Occupational eye lens exposure during TAVI is clinically relevant and strongly influenced by access route, staff positioning, and imaging-system use.
Dose reduction should combine routine eye protection and dedicated eye-level dosimetry with imaging optimization (low pulse-rate fluoroscopy, minimized Digital-Subtraction-Angiography (DSA)/cine acquisitions, tight collimation, avoidance of unnecessary magnification, and correct positioning of ceiling-suspended shields and table skirts). Full article
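As a rough plausibility check of the dose figures above (assuming the reported 30–110 µSv per-procedure range and the 20 mSv annual limit; the arithmetic is ours, not the authors'), the number of procedures before the annual limit is reached can be sketched as:

```python
# Back-of-envelope check (not from the paper's methods): how many TAVI
# procedures until cumulative operator eye lens dose reaches the 20 mSv
# annual limit, for the per-procedure doses reported above.
ANNUAL_LIMIT_USV = 20_000  # 20 mSv expressed in microsieverts

def procedures_to_limit(dose_per_procedure_usv: float) -> int:
    """Whole procedures that stay at or under the annual limit."""
    return int(ANNUAL_LIMIT_USV // dose_per_procedure_usv)

for dose in (30, 110):  # reported range, µSv per procedure
    print(dose, "µSv/procedure ->", procedures_to_limit(dose), "procedures")
# With a RADPAD-style reduction from 24.3 to 14.8 µSv, headroom grows:
print(procedures_to_limit(24.3), "->", procedures_to_limit(14.8))
```

The resulting range (roughly 180–670 procedures) brackets the 150–300 figure quoted above, which additionally folds in role and shielding variation.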

18 pages, 1286 KB  
Review
Dietary Caffeine, Cold Exposure, and the Estrogen–TRPM8 Axis: A Nutri-Environmental Model for Lower Urinary Tract Symptoms in the Menopause Transition: A Narrative Review
by Dong Hee Lee and Jeong Jun Park
Nutrients 2026, 18(5), 825; https://doi.org/10.3390/nu18050825 - 3 Mar 2026
Abstract
Background/Objectives: Lower urinary tract symptoms (LUTSs), particularly nocturia and urgency, often intensify during the menopause transition and may worsen with caffeine intake and cold exposure. This review aims to synthesize evidence relevant to a hypothesized caffeine–cold interaction in transitional menopause, focusing on water homeostasis and the estrogen–transient receptor potential melastatin 8 (TRPM8) cold-sensory axis, and to propose potentially actionable, nutrition-centered intervention candidates for future testing. Methods: Structured narrative review of PubMed, Embase, Web of Science, and citation tracking (inception–January 2026). Evidence was mapped into a mechanistic framework distinguishing established from hypothesis-generating links; no formal systematic-review study selection or meta-analysis was performed. Results: Caffeine can increase urine output via renal mechanisms (adenosine receptor antagonism and natriuresis) and may lower bladder sensory thresholds. Because half-life is long and variable, afternoon intake can extend into sleep, potentially increasing awakenings and nocturnal voids. Human studies link colder indoor environments to nocturia/overactive bladder, and passive pre-bedtime heating is associated with fewer nocturnal voids. We propose that repeated nighttime cold may amplify caffeine-related diuresis and may shift urine production toward the night, while estradiol decline may heighten TRPM8-mediated cold sensory gain, potentially contributing to urgency/frequency flares. A testable 2 × 2 cold × caffeine framework can operationalize dose, timing, and metabolism, pairing voiding diaries and bedroom temperature sensing with copeptin profiling. Conclusions: Transitional menopause may represent a susceptibility window in which endocrine instability and estradiol decline could plausibly increase sensitivity to indoor cold exposure and caffeine intake, potentially contributing to nocturia and urgency. 
The hypothesized ‘dual hormone suppression’ (an attenuated nocturnal AVP signal plus estradiol decline) may provide a mechanistic substrate for cold-exacerbated nocturnal polyuria, while an estrogen–TRPM8 axis may amplify cold-evoked urgency. Potentially actionable candidates include chronobiological caffeine timing/management and low-burden thermal strategies. Nevertheless, menopause-stage-specific epidemiologic and clinical evidence for a caffeine × cold interaction remains limited, and several mechanistic links are extrapolated, so prospective diary- and biomarker-enabled studies and controlled trials are needed to validate mechanisms and refine cold-sensitive endotypes. Full article
(This article belongs to the Special Issue Nutrition, Lifestyle and Women’s Health)
27 pages, 3300 KB  
Article
A Methodology for Evaluating User Experience in Human-Centered Extended Reality Applications
by Daniela Quiñones, Luis Felipe Rojas, Renato Olavarría, Claudio Cubillos and Felipe Muñoz-La Rivera
Biomimetics 2026, 11(3), 182; https://doi.org/10.3390/biomimetics11030182 - 3 Mar 2026
Abstract
Extended Reality (XR) technologies are increasingly used to create immersive and interactive systems across domains such as education, training, health, and entertainment. As these systems become more complex and multisensory, evaluating user experience (UX) in XR environments requires approaches that go beyond traditional usability assessments and consider perceptual, cognitive, emotional, and interaction-related factors. However, existing UX evaluation efforts in XR often rely on isolated instruments or domain-specific studies, lacking a systematic and reusable evaluation methodology. This paper proposes a human-centered methodology for evaluating user experience in extended reality applications, integrating UX dimensions and XR-specific characteristics into a structured and coherent evaluation process. The methodology is grounded in a multi-phase research process that includes a comprehensive literature review, expert consultation, correlation analysis between UX dimensions and XR features, and formal specification of evaluation phases and activities. Based on this process, the proposed methodology supports evaluators in selecting appropriate UX evaluation methods and instruments according to the characteristics and experiential goals of XR applications. The methodology defines a set of UX dimensions tailored to immersive environments, capturing perceptual, cognitive, emotional, and interaction aspects that are critical for the design and evaluation of adaptive and human-centered XR systems. An expert-based validation was conducted to assess the clarity, usefulness, and applicability of the methodology, leading to refinements in its structure and descriptions. The methodology promotes a human-centered approach by considering user perception, emotional impact, and contextual experience across XR modalities. 
It additionally contributes to the field by offering a reusable process for UX evaluation in XR, supporting more consistent, transparent, and human-centered assessment practices. It also provides a foundation for future empirical studies and the development of evaluation approaches inspired by natural and adaptive human–environment interactions. Full article
(This article belongs to the Section Locomotion and Bioinspired Robotics)

21 pages, 1479 KB  
Article
Event Patterns Enhancing Causal Reasoning Method Incorporating Category Theory for Stored Grain Pests
by Le Xiao, Yunfei Zhang, Shengtong Wang, Zimin Yang and Qinghui Zhang
AgriEngineering 2026, 8(3), 93; https://doi.org/10.3390/agriengineering8030093 (registering DOI) - 3 Mar 2026
Abstract
Outbreaks of stored grain pests can pose significant threats to food security. In-depth analyses of sudden outbreaks are key to achieving effective prevention and control. To address the issue of models’ insufficient reasoning capability arising from complex causal relationships in stored grain pest events, this study proposes an Event Patterns Enhancing Causal Reasoning (EPECR) method incorporating category theory. Specifically, we focus on common pests such as Sitophilus zeamais (maize weevil) and Sitotroga cerealella (Angoumois grain moth). We formally map the domain ontology—including entities like environmental factors (e.g., temperature, humidity) and control measures (e.g., fumigation)—to categories, and represent their inter-relationships (e.g., inhibition, promotion) as functors. To handle complex scenarios, we model multi-cause events (e.g., high temperature and humidity jointly accelerating pest reproduction) using functor products, and represent multi-hop events (e.g., environmental changes leading to pest outbreak and subsequent grain loss) through functor compositions. This formal expression enables Large Language Models (LLMs) to extract reliable event patterns. Based on these patterns, this study constructed 1440 structured datasets and adopted the Low-Rank Adaptation (LoRA) strategy to fine-tune the LLMs. Experiments on the domain-specific Stored Grain Pest Events Dataset (SGPE) demonstrate that EPECR achieves a reasoning accuracy of 85.9% on in-distribution data and 79.9% on out-of-distribution data, effectively identifying correct causal chains for pest logic. This method significantly outperforms the state-of-the-art domain method, Naive Augmentations (NA), by 4.9%, providing precise decision support for the early warning and control of specific pest incidents. Full article
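The multi-hop "functor composition" idea above can be illustrated with ordinary function composition over a toy state. The state fields, thresholds, and causal maps below are invented for illustration and are not the paper's formalism:

```python
# Toy sketch (not the paper's code): multi-hop causal events modeled as
# function composition, echoing the functor-composition idea described above.
def compose(f, g):
    """Functor-style composition: apply g first, then f (f ∘ g)."""
    return lambda x: f(g(x))

# Hypothetical single-hop causal maps over a toy granary state dict.
heat = lambda s: {**s, "temp_c": s["temp_c"] + 10}
outbreak = lambda s: {**s, "pests": s["pests"] * (3 if s["temp_c"] >= 30 else 1)}
grain_loss = lambda s: {**s, "loss_pct": min(100, s["pests"] // 100)}

# Multi-hop event: heat -> pest outbreak -> grain loss.
chain = compose(grain_loss, compose(outbreak, heat))
print(chain({"temp_c": 25, "pests": 200, "loss_pct": 0}))
# -> {'temp_c': 35, 'pests': 600, 'loss_pct': 6}
```

Composition being associative is what makes such chains well defined regardless of grouping, which is the property the category-theoretic framing buys.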

13 pages, 464 KB  
Systematic Review
Circulating Tumour DNA After Neoadjuvant Therapy in Non-Metastatic Colon Cancer: A Systematic Review and Implications for Surgical Decision-Making
by Mahmoud M. Salama, Charles Eddershaw, Hugo C. Temperley, Arvin Kumar Perthiani, John O. Larkin, Brian J. Mehigan, Dara O. Kavanagh, Paul H. McCormick, David Gallagher, Charles Gillham, Emily Harrold and Michael E. Kelly
Cancers 2026, 18(5), 815; https://doi.org/10.3390/cancers18050815 - 3 Mar 2026
Abstract
Introduction: Neoadjuvant systemic and immunotherapy strategies in non-metastatic colon cancer have demonstrated high pathological response rates, raising interest in surgery-sparing approaches. Circulating tumour DNA (ctDNA) is an emerging biomarker for treatment response and minimal residual disease, but its role in guiding surgical omission in colon cancer remains unclear. This systematic review evaluates the diagnostic and prognostic accuracy of ctDNA in predicting pathological response following neoadjuvant therapy in non-metastatic colon cancer. Methods: A systematic review was conducted in accordance with PRISMA guidelines. PubMed, Embase/MEDLINE, Scopus, and the Cochrane Register were searched from inception to 21 October 2025. Eligible studies included adults with non-metastatic colon cancer treated with neoadjuvant therapy who had serial ctDNA assessment prior to surgery. Results: Three cohort studies comprising 100 patients met inclusion criteria. Baseline ctDNA detection ranged from 42% to 84%. Across studies, ctDNA clearance following neoadjuvant therapy was consistently associated with major pathological response or pathological complete response, whereas persistent ctDNA strongly predicted residual viable tumour at resection. In the largest prospective cohort, 5 of 26 patients (19%) achieved ctDNA clearance prior to surgery; all were pathological responders, while 19 of 26 patients (73%) with persistent ctDNA demonstrated no pathological response. No study reported pathological complete response in the presence of persistently positive ctDNA. No prospective trial formally evaluated ctDNA-guided surgical omission. Conclusions: Current evidence does not support the use of ctDNA alone to guide omission of surgery after neoadjuvant therapy in non-metastatic colon cancer—even in patients who show complete pathological response. 
While persistent ctDNA reliably identifies patients with residual disease, ctDNA clearance lacks sufficient positive predictive value to safely forego surgery. Prospective trials with standardised ctDNA platforms and predefined non-operative management protocols are required before ctDNA-guided organ preservation can be recommended. Full article
(This article belongs to the Section Cancer Biomarkers)

17 pages, 1296 KB  
Article
SSKEM: A Global Pointer Network Model for Joint Entity and Relation Extraction in Storm Surge Texts
by Yebin Chen, Mingjie Xie, Yongli Chen, Zhenduo Dou and Weihong Li
ISPRS Int. J. Geo-Inf. 2026, 15(3), 105; https://doi.org/10.3390/ijgi15030105 - 3 Mar 2026
Abstract
Storm surges are catastrophic marine disasters that pose severe threats to coastal populations, making the rapid extraction of key information from multi-source texts critical for effective emergency response. However, existing extraction methods often struggle with complex linguistic challenges, such as identifying nested entities (e.g., overlapping geographic names), capturing relationships across long texts, and handling the disparity between formal official reports and unstructured social media data. To address these limitations, this study proposes a Storm Surge Knowledge Extraction Model (SSKEM) based on Global Pointer Networks. Using a domain-specific dataset of 4000 records constructed from government bulletins, news reports, and social media, the proposed model utilizes a unified matrix decoding mechanism to treat entity and relation extraction as a holistic task. Experimental results demonstrate that the model achieves an F1-score of 88.4%, outperforming robust baseline models by 5.5%. Notably, it improves the recognition accuracy of complex nested entities by 13.7% and enhances the recall rate for cross-sentence relations by 18.2%. Furthermore, the model exhibits high computational efficiency and a processing speed suitable for real-time applications, and it effectively bridges the performance gap between standardized and fragmented data sources. This research provides a robust technical solution for transforming heterogeneous disaster big data into actionable knowledge for decision-support systems. Full article
(This article belongs to the Special Issue Spatial Data Science and Knowledge Discovery)

26 pages, 446 KB  
Article
A Mathematical Framework for Modeling Global Value Chain Networks
by Georgios Angelidis
Foundations 2026, 6(1), 8; https://doi.org/10.3390/foundations6010008 - 3 Mar 2026
Abstract
Global value chains (GVCs) have evolved into highly interconnected and geographically fragmented production networks, increasing exposure to systemic disruptions and revealing the limitations of static input–output and conventional network approaches. This study develops a unified analytical framework for modeling the structure, dynamics, and resilience of GVCs by integrating input–output economics with network theory, control theory, optimal transport, information theory, and cooperative game theory. The framework represents GVCs as time-varying, multi-level networks and formalizes shock propagation through stochastic normalization and state-space dynamics. Entropy-regularized optimal transport is employed to model friction-dependent substitution and supply chain reconfiguration, while Koopman operator methods approximate nonlinear adjustment dynamics. Cooperative flow-based indices are introduced to assess systemic importance and bargaining power. The analysis produces a coherent set of structural and dynamic indicators capturing vulnerability, adaptability, and controllability across country–sector nodes. Overall, the framework provides an empirically applicable toolkit for diagnosing structural fragilities, comparing resilience across economies, and supporting scenario-based evaluation of industrial and trade policies in complex global production networks. Full article
(This article belongs to the Section Mathematical Sciences)
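The entropy-regularized optimal transport component mentioned above can be sketched with a minimal Sinkhorn iteration. This illustrates the standard algorithm, not the paper's implementation, and the two-sector supply/demand example is invented:

```python
import numpy as np

# Minimal Sinkhorn sketch (an illustration of entropy-regularized optimal
# transport, not the paper's code): reallocate supply a to demand b under a
# cost matrix C, with the entropic regularization eps modeling friction.
def sinkhorn(a, b, C, eps=0.1, iters=500):
    K = np.exp(-C / eps)                 # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(iters):
        v = b / (K.T @ u)                # scale columns toward demand b
        u = a / (K @ v)                  # scale rows toward supply a
    return u[:, None] * K * v[None, :]   # transport plan

a = np.array([0.5, 0.5])                 # e.g. two exporting country-sectors
b = np.array([0.3, 0.7])                 # e.g. two importing country-sectors
C = np.array([[0.0, 1.0], [1.0, 0.0]])  # higher cost = more trade friction
P = sinkhorn(a, b, C)
print(P.round(3), P.sum())               # rows sum to a, columns to b
```

Raising `eps` spreads the plan out (more substitution), while `eps -> 0` recovers the sharp least-cost assignment, which is exactly the "friction-dependent substitution" knob the abstract describes.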

18 pages, 339 KB  
Article
Entropy-Based Portfolio Optimization in Cryptocurrency Markets: A Unified Maximum Entropy Framework
by Silvia Dedu and Florentin Șerban
Entropy 2026, 28(3), 285; https://doi.org/10.3390/e28030285 - 2 Mar 2026
Abstract
Traditional mean–variance portfolio optimization proves inadequate for cryptocurrency markets, where extreme volatility, fat-tailed return distributions, and unstable correlation structures undermine the validity of variance as a comprehensive risk measure. To address these limitations, this paper proposes a unified entropy-based portfolio optimization framework grounded in the Maximum Entropy Principle (MaxEnt). Within this setting, Shannon entropy, Tsallis entropy, and Weighted Shannon Entropy (WSE) are formally derived as particular specifications of a common constrained optimization problem solved via the method of Lagrange multipliers, ensuring analytical coherence and mathematical transparency. Moreover, the proposed MaxEnt formulation provides an information-theoretic interpretation of portfolio diversification as an inference problem under uncertainty, where optimal allocations correspond to the least informative distributions consistent with prescribed moment constraints. In this perspective, entropy acts as a structural regularizer that governs the geometry of diversification rather than as a direct proxy for risk. This interpretation strengthens the conceptual link between entropy, uncertainty quantification, and decision-making in complex financial systems, offering a robust and distribution-free alternative to classical variance-based portfolio optimization. The proposed framework is empirically illustrated using a portfolio composed of major cryptocurrencies—Bitcoin (BTC), Ethereum (ETH), Solana (SOL), and Binance Coin (BNB)—based on weekly return data. The results reveal systematic differences in the diversification behavior induced by each entropy measure: Shannon entropy favors near-uniform allocations, Tsallis entropy imposes stronger penalties on concentration and enhances robustness to tail risk, while WSE enables the incorporation of asset-specific informational weights reflecting heterogeneous market characteristics. 
From a theoretical perspective, the paper contributes a coherent MaxEnt formulation that unifies several entropy measures within a single information-theoretic optimization framework, clarifying the role of entropy as a structural regularizer of diversification. From an applied standpoint, the results indicate that entropy-based criteria yield stable and interpretable allocations across turbulent market regimes, offering a flexible alternative to classical risk-based portfolio construction. The framework naturally extends to dynamic multi-period settings and alternative entropy formulations, providing a foundation for future research on robust portfolio optimization under uncertainty. Full article
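The MaxEnt construction described above can be sketched for the Shannon case: maximizing entropy over weights on the simplex subject to a target mean return gives exponential-family weights w_i ∝ exp(λ r_i), with λ the Lagrange multiplier. The weekly return figures below are hypothetical, and this is a sketch of the principle via bisection on λ, not the paper's procedure:

```python
import math

# Sketch of the MaxEnt idea (illustrative, not the paper's code): maximize
# Shannon entropy over portfolio weights summing to 1, subject to a target
# mean return. The Lagrange solution is w_i ∝ exp(lam * r_i); we solve for
# lam by bisection, since the achieved mean is monotone increasing in lam.
def maxent_weights(returns, target, lo=-500.0, hi=500.0, tol=1e-10):
    def weights(lam):
        z = [math.exp(lam * r) for r in returns]
        s = sum(z)
        return [x / s for x in z]
    def mean(lam):
        return sum(w * r for w, r in zip(weights(lam), returns))
    while hi - lo > tol:
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if mean(mid) < target else (lo, mid)
    return weights((lo + hi) / 2)

# Hypothetical weekly mean returns for BTC, ETH, SOL, BNB.
r = [0.010, 0.008, 0.015, 0.006]
w = maxent_weights(r, target=0.010)
print([round(x, 3) for x in w], round(sum(w), 6))
```

With no moment constraint (λ = 0) the weights are exactly uniform, which is the "Shannon entropy favors near-uniform allocations" behavior noted above; tightening the return target tilts the allocation away from uniformity.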
38 pages, 9716 KB  
Article
Research on Spatial Information Network Vulnerability Analysis Methodology Based on Multi-Layer Hypernetworks
by Xiaolan Yu, Wei Xiong and Yali Liu
Sensors 2026, 26(5), 1570; https://doi.org/10.3390/s26051570 - 2 Mar 2026
Abstract
As the core infrastructure for providing all-weather, full-coverage, high-speed, and diversified information services, spatial information networks (SINs) possess significant social, economic, and military value. However, due to the inherent characteristics of their network architecture, SINs are susceptible to core service paralysis and functional failure under large-scale targeted attacks or random disturbances, posing a critical bottleneck that constrains their stable operation. Current research on SIN vulnerability is predominantly confined to a single network topology perspective, lacking an integrated consideration of the task execution perspective. Consequently, it fails to accommodate the dual requirements of “network topology stability” and “task execution effectiveness”. To address the aforementioned research needs and challenges, this study adopts a “topology-task” dual-perspective fusion approach and proposes a vulnerability analysis framework for SINs that integrates multi-layer networks and hypernetworks. First, a two-layer SIN topology model encompassing the user layer and the satellite layer is constructed. Leveraging hypernetwork theory, information tasks involving multiple network entities are formally defined, and an integrated multi-layer hypernetwork model is established. Second, based on distinct task types, three categories of task efficiency evaluation metrics are defined, and corresponding quantitative methods for calculating SIN vulnerability are derived. Third, during the vulnerability analysis phase, a novel strategy for identifying and removing overlapping nodes in hypernetworks is introduced to enable precise localization of critical nodes within the network. Concurrently, a pre-attack node hardening strategy is designed to minimize the impact of attacks on network performance. 
Finally, systematic analysis of vulnerability and critical-node characteristics under different node removal strategies demonstrates the performance gains of the proposed approach. The effectiveness of the proposed method is validated by comparing the defense performance of the hardening strategy across various attack scenarios. To verify the feasibility and superiority of the proposed method, this study designs 5 × 5 groups of simulation experiments with varying network parameters. The results indicate that, compared with traditional methods, the proposed strategy can more accurately identify core nodes affecting the stable operation of SINs, significantly reducing network vulnerability and improving network survivability. In addition, a comprehensive sensitivity analysis of SIN vulnerability is conducted from three key influencing dimensions—mission scale, satellite count, and constellation configuration—clarifying the impact of each dimension on network invulnerability. Thus, this paper provides a reliable theoretical foundation and technical support for the planning, design, optimal deployment, and operation and maintenance management of SINs. Full article
(This article belongs to the Section Sensor Networks)
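The topology-perspective half of the analysis (remove a critical node, measure degradation) can be sketched on a toy two-layer graph. The network below is invented and far simpler than a real SIN, and largest-component size stands in for the paper's richer vulnerability metrics:

```python
from collections import deque

# Toy sketch of topology-perspective vulnerability (illustrative, not the
# paper's model): remove the highest-degree node and measure how much the
# largest connected component shrinks.
def largest_component(adj, removed=frozenset()):
    seen, best = set(), 0
    for start in adj:
        if start in seen or start in removed:
            continue
        queue, size = deque([start]), 0
        seen.add(start)
        while queue:
            u = queue.popleft()
            size += 1
            for v in adj[u]:
                if v not in seen and v not in removed:
                    seen.add(v)
                    queue.append(v)
        best = max(best, size)
    return best

# Hypothetical two-layer topology: hub satellite S1 serves most users.
adj = {"S1": ["S2", "U1", "U2", "U3"], "S2": ["S1", "U4"],
       "U1": ["S1"], "U2": ["S1"], "U3": ["S1"], "U4": ["S2"]}
hub = max(adj, key=lambda n: len(adj[n]))   # degree-based critical node
print(largest_component(adj), "->", largest_component(adj, {hub}))  # 6 -> 2
```

The sharp drop after removing the hub is the kind of structural fragility the abstract's critical-node identification and pre-attack hardening strategy are designed to detect and mitigate.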

17 pages, 354 KB  
Review
Early Prognostic Factors in Multiple Sclerosis: Clinical and Therapeutic Implications
by Katarzyna Maciejowska-Szydło and Przemysław Puz
Medicina 2026, 62(3), 475; https://doi.org/10.3390/medicina62030475 - 2 Mar 2026
Abstract
Introduction: Multiple sclerosis (MS) is a chronic, inflammatory, demyelinating disease of the central nervous system with a highly heterogeneous clinical course. Early identification of patients at risk of aggressive disease progression is crucial for optimizing therapeutic strategies, including eligibility for highly effective treatments. Objective: The aim of this review was to synthesize current data on prognostic factors in MS, with particular emphasis on their significance in the early stages of the disease and their potential clinical implications. Methods: A structured narrative review of the literature was conducted, covering observational studies, cohort studies, meta-analyses, and systematic reviews on the natural course of MS, prognostic factors, and clinical, neuroimaging, and laboratory biomarkers. The PubMed and Scopus databases were searched comprehensively, focusing on English-language publications. Study selection prioritized longitudinal studies and meta-analyses with clear outcome definitions and sufficient follow-up. Formal quality scoring was not applied because of the narrative design of the review. Results: Key adverse prognostic factors include older age at onset, polysymptomatic onset, high relapse activity in the first years of the disease, incomplete remission after relapses, and the primary progressive form. Magnetic resonance imaging features also have significant predictive value, including the number and location of T2 lesions, contrast-enhancing activity, spinal cord lesions, paramagnetic rim lesions (PRLs), slowly expanding lesions (SELs), and severe brain atrophy. Increasing importance is attached to laboratory biomarkers such as oligoclonal bands, neurofilament light chains, free kappa light chains, and GFAP. Conclusions: An integrated assessment of clinical, neuroimaging, and laboratory factors enables more effective risk stratification in patients with newly diagnosed MS. Early identification of an unfavorable prognostic profile may provide a basis for individualizing treatment and for considering highly effective therapies early in the course of the disease.
(This article belongs to the Section Neurology)
40 pages, 838 KB  
Article
The Role of Promoters in Organizational Learning Within the Digital Transformation of Schools
by Nina Carolin von Grumbkow, Amelie Sprenger, Cornelia Gräsel and Kathrin Fussangel
Systems 2026, 14(3), 266; https://doi.org/10.3390/systems14030266 - 2 Mar 2026
Abstract
Digital transformation demands that schools act as learning organizations in order to rethink and reform their structures and practices. Using a mixed-methods design (quantitative analysis of code co-occurrences within 60 semi-structured group interviews, combined with qualitative structural content analysis), this study examines how teachers who act as promoters of digital transformation facilitate organizational learning (OL) processes, and how those processes can be described. Five OL processes emerge (collective sense-making, knowledge creation and transfer, evaluation and feedback, experimentation and piloting, and external cooperation and knowledge import), each shaped mainly by a distinct promoter activity. The findings reveal that school-wide structural conditions for OL, such as formal evaluation and scheduled collaboration time for the whole teaching staff, are scarce, leaving many learning processes informal and project-based. The study concludes that sustainable digital transformation requires schools to institutionalize adequate structural conditions for OL activities and to empower promoters through both top-down mandates and bottom-up support, so that all OL processes become habituated routines.
21 pages, 4620 KB  
Article
Precision Agriculture Management System and Traceability Architecture in Specialty Coffee Farms in Chiriquí, Panama
by Elia E. Cano, Milva Eileen Justavino-Castillo, Jorge Centeno, Marlín Villamil-Barrios, Aracelly Vega and Carlos Alvino Rovetto
Appl. Sci. 2026, 16(5), 2399; https://doi.org/10.3390/app16052399 - 28 Feb 2026
Abstract
The management of specialty coffee production is a complex dynamical process characterized by highly nonlinear interconnections between environmental variables, agronomic practices, and chemical composition. Traditionally, the classification of specialty coffee relies on sensory evaluations conducted by certified coffee experts known as Q-Graders, following the strict, standardized Specialty Coffee Association (SCA) protocol. Scientific methods that generate spectral fingerprints, however, provide a more reliable guarantee of quality while also ensuring traceability to the farm of origin. Panamanian Geisha coffee is among the world's most expensive award-winning microlots, frequently exceeding USD 1000 per pound, with a record-breaking price of over USD 30,000 per kilogram in 2025. This research presents an integrated framework that combines a Precision Agriculture Management System (PAMS) with a traceability architecture, facilitating the collection of georeferenced coffee bean samples through a mobile application while preserving the coffee variety and geographical origin needed for subsequent identification of the spectral fingerprint by chemical specialists in the laboratory. A mathematical model is introduced to formally characterize the mobile application's behavior, distributed structure, and inherent constraints. Serving as a mathematical blueprint, this model identifies critical influencing factors and establishes strategic assumptions that distill complex real-world variables into a rigorous, manageable framework. Large-scale experiments conducted across more than 820 coffee farms in Chiriquí, Panama, demonstrate that the proposed decentralized architecture effectively coordinates the acquisition and synchronization of georeferenced chemical data. The decentralized architecture of the mobile application utilizes private blockchain technology to support autonomous operation, decoupling the system from central authorities and ensuring functional continuity in environments with intermittent connectivity.
(This article belongs to the Special Issue Intelligent Control of Dynamical Processes and Systems)
40 pages, 11812 KB  
Article
Coastal Flood-Driven Settlement Dynamics and Local Governance Challenges in Chattogram Division of Bangladesh
by Fowzia Gulshana Rashid Lopa, Sajib Sarker and Rizbina Reduan Rayma
Geographies 2026, 6(1), 25; https://doi.org/10.3390/geographies6010025 - 28 Feb 2026
Abstract
Coastal settlements in Bangladesh lie in geographically flood-prone areas, and this exposure erodes the size and shape of settlement boundaries over time. Such changes leave communities vulnerable in securing both a place to live and their livelihoods. Research, however, rarely addresses the long-term transformation of settlements or local governance responses to this vulnerability. To examine this situation, this study explored settlement transformation patterns and governance challenges through a case study of Chattogram Division, Bangladesh, from 2005 to 2025, applying a mixed-methods approach. Analysis of multi-temporal Landsat imagery with Random Forest classification revealed complex settlement trajectories: built-up areas expanded significantly between 2005 and 2015 but shrank by 2025, reflecting both hazard exposure and displacement pressures. Union-level analysis identified 62 coastal unions with high to very high settlement change. Field surveys in the selected Juidandi and Kalamarchhara unions, through focus group discussions with communities and interviews with local officials, highlighted recurring inundation, permanent land loss affecting thousands of households, and persistent disruptions to livelihoods. The study also found moderate emergency responses in the selected unions, but strategic planning for relocation and for community health and well-being is insufficient. Continuing resource constraints and poor coordination with communities and line organizations weaken local implementation and blunt the effectiveness of disaster risk reduction policies. These findings underscore the necessity of building union-level governance capacity, integrating community-based adaptation with formal interventions, and developing spatially differentiated relocation strategies to enhance the resilience of climate-vulnerable coastal settlements.
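The "multi-temporal Landsat imagery with Random Forest classification" workflow named in this abstract is a standard remote-sensing pattern: stack band values from several acquisition dates as per-pixel features, train on labelled reference pixels, and validate on a held-out set. A minimal illustrative sketch of that pattern (not the authors' actual pipeline; the data, feature counts, and labels below are synthetic stand-ins) might look like:

```python
# Illustrative sketch of multi-temporal Random Forest land-cover classification.
# Synthetic data only: real pipelines would read Landsat bands and ground-truth
# labels from georeferenced rasters.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Each "pixel" row stacks band values from two acquisition dates
# (e.g. 6 Landsat bands x 2 dates = 12 features per pixel).
n_pixels, n_features = 500, 12
X = rng.normal(size=(n_pixels, n_features))
# Hypothetical labels: 1 = built-up, 0 = other land cover.
y = (X[:, 0] + X[:, 5] > 0).astype(int)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X[:400], y[:400])               # train on labelled reference pixels
accuracy = clf.score(X[400:], y[400:])  # validate on held-out pixels
print(round(accuracy, 2))
```

In practice the trained classifier is applied independently per epoch (2005, 2015, 2025 here), and change is measured by differencing the resulting built-up masks.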