Search Results (322)

Search Parameters:
Keywords = methods of analytical regularization

26 pages, 446 KB  
Article
A Mathematical Framework for Modeling Global Value Chain Networks
by Georgios Angelidis
Foundations 2026, 6(1), 8; https://doi.org/10.3390/foundations6010008 - 3 Mar 2026
Abstract
Global value chains (GVCs) have evolved into highly interconnected and geographically fragmented production networks, increasing exposure to systemic disruptions and revealing the limitations of static input–output and conventional network approaches. This study develops a unified analytical framework for modeling the structure, dynamics, and resilience of GVCs by integrating input–output economics with network theory, control theory, optimal transport, information theory, and cooperative game theory. The framework represents GVCs as time-varying, multi-level networks and formalizes shock propagation through stochastic normalization and state-space dynamics. Entropy-regularized optimal transport is employed to model friction-dependent substitution and supply chain reconfiguration, while Koopman operator methods approximate nonlinear adjustment dynamics. Cooperative flow-based indices are introduced to assess systemic importance and bargaining power. The analysis produces a coherent set of structural and dynamic indicators capturing vulnerability, adaptability, and controllability across country–sector nodes. Overall, the framework provides an empirically applicable toolkit for diagnosing structural fragilities, comparing resilience across economies, and supporting scenario-based evaluation of industrial and trade policies in complex global production networks.
(This article belongs to the Section Mathematical Sciences)
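
The entropy-regularized optimal transport step described above reduces, computationally, to Sinkhorn iterations. Below is a minimal numpy sketch on invented data: a three-supplier, two-buyer cost matrix standing in for trade frictions between country–sector nodes. The dimensions, costs, and marginals are illustrative assumptions, not the paper's specification.

```python
import numpy as np

def sinkhorn(a, b, C, reg=0.5, n_iter=200):
    """Entropy-regularized OT: find a plan P minimizing <P, C> - reg * H(P)
    with row marginals a (supply) and column marginals b (demand)."""
    K = np.exp(-C / reg)               # Gibbs kernel from the cost matrix
    u = np.ones_like(a)
    for _ in range(n_iter):
        v = b / (K.T @ u)              # rescale columns to hit demand b
        u = a / (K @ v)                # rescale rows to hit supply a
    return u[:, None] * K * v[None, :]

a = np.array([0.5, 0.3, 0.2])                       # supply shares
b = np.array([0.6, 0.4])                            # demand shares
C = np.array([[1.0, 2.0], [1.5, 0.5], [2.0, 1.0]])  # trade friction costs
print(sinkhorn(a, b, C).round(3))                   # transport plan
```

A smaller reg concentrates flow on least-cost routes, while a larger reg spreads it across alternatives; that tunable diffuseness is what makes entropic regularization a natural model of friction-dependent substitution.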

18 pages, 339 KB  
Article
Entropy-Based Portfolio Optimization in Cryptocurrency Markets: A Unified Maximum Entropy Framework
by Silvia Dedu and Florentin Șerban
Entropy 2026, 28(3), 285; https://doi.org/10.3390/e28030285 - 2 Mar 2026
Abstract
Traditional mean–variance portfolio optimization proves inadequate for cryptocurrency markets, where extreme volatility, fat-tailed return distributions, and unstable correlation structures undermine the validity of variance as a comprehensive risk measure. To address these limitations, this paper proposes a unified entropy-based portfolio optimization framework grounded in the Maximum Entropy Principle (MaxEnt). Within this setting, Shannon entropy, Tsallis entropy, and Weighted Shannon Entropy (WSE) are formally derived as particular specifications of a common constrained optimization problem solved via the method of Lagrange multipliers, ensuring analytical coherence and mathematical transparency. Moreover, the proposed MaxEnt formulation provides an information-theoretic interpretation of portfolio diversification as an inference problem under uncertainty, where optimal allocations correspond to the least informative distributions consistent with prescribed moment constraints. In this perspective, entropy acts as a structural regularizer that governs the geometry of diversification rather than as a direct proxy for risk. This interpretation strengthens the conceptual link between entropy, uncertainty quantification, and decision-making in complex financial systems, offering a robust and distribution-free alternative to classical variance-based portfolio optimization. The proposed framework is empirically illustrated using a portfolio composed of major cryptocurrencies—Bitcoin (BTC), Ethereum (ETH), Solana (SOL), and Binance Coin (BNB)—based on weekly return data. The results reveal systematic differences in the diversification behavior induced by each entropy measure: Shannon entropy favors near-uniform allocations, Tsallis entropy imposes stronger penalties on concentration and enhances robustness to tail risk, while WSE enables the incorporation of asset-specific informational weights reflecting heterogeneous market characteristics. From a theoretical perspective, the paper contributes a coherent MaxEnt formulation that unifies several entropy measures within a single information-theoretic optimization framework, clarifying the role of entropy as a structural regularizer of diversification. From an applied standpoint, the results indicate that entropy-based criteria yield stable and interpretable allocations across turbulent market regimes, offering a flexible alternative to classical risk-based portfolio construction. The framework naturally extends to dynamic multi-period settings and alternative entropy formulations, providing a foundation for future research on robust portfolio optimization under uncertainty.
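
As a concrete illustration of the MaxEnt formulation, the following sketch maximizes the Shannon entropy of the weight vector subject to a budget constraint and a single expected-return moment constraint, solving numerically with scipy rather than via the paper's closed-form Lagrange-multiplier treatment; the weekly return figures and the target are invented for the example.

```python
import numpy as np
from scipy.optimize import minimize

mu = np.array([0.012, 0.010, 0.015, 0.011])  # illustrative weekly mean returns (BTC, ETH, SOL, BNB)
target = 0.0115                              # prescribed moment constraint: expected return

def neg_entropy(w, eps=1e-12):
    return np.sum(w * np.log(w + eps))       # minimizing -H(w) maximizes Shannon entropy

cons = [
    {"type": "eq", "fun": lambda w: w.sum() - 1.0},    # budget constraint
    {"type": "eq", "fun": lambda w: w @ mu - target},  # moment constraint
]
res = minimize(neg_entropy, x0=np.full(4, 0.25),
               bounds=[(0, 1)] * 4, constraints=cons)
print(res.x.round(4))  # least-informative (most diversified) weights meeting the constraints
```

Swapping the objective for Tsallis or weighted Shannon entropy reproduces the other two specifications within the same constrained-optimization template.
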
17 pages, 1309 KB  
Article
Path Loss Considering Atmospheric Impact in 5G Networks: A Comparison of Machine Learning Models
by Vasileios P. Rekkas, Leandro dos Santos Coelho, Viviana Cocco Mariani, Adamantini Peratikou and Sotirios K. Goudos
Technologies 2026, 14(3), 151; https://doi.org/10.3390/technologies14030151 - 2 Mar 2026
Abstract
Accurate estimation of wireless propagation characteristics is essential for guiding the design and deployment of fifth-generation (5G) communication systems. As network demand increases and 5G infrastructure is introduced in progressive phases, reliable path loss (PL) prediction models are required to refine deployment strategies and improve network efficiency. Conventional propagation models frequently display limited flexibility when applied to diverse environmental conditions and often entail considerable computational expense, reducing their practicality for large-scale 5G planning. Recent developments in data-centric artificial intelligence (AI) have enabled more adaptive and analytically powerful approaches to propagation modeling, resulting in notable gains in PL prediction accuracy. This study employs a comprehensive dataset produced using the NYUSIM channel simulator, integrating a wide spectrum of atmospheric parameters and seasonal variations within South Asian urban microcell environments, complemented by broad empirical observations. The core objective is to construct, optimize, and evaluate four machine learning (ML) models capable of accurately predicting PL at high-frequency bands critical to 5G performance. A fully automated hyperparameter tuning pipeline, based on the Optuna framework, is applied to twelve regression algorithms, including advanced ensemble methods, regularized linear techniques, and classical baseline models. Performance assessment emphasizes predictive reliability, stability, and cross-model generalization. Furthermore, statistical analysis utilizing bootstrap confidence intervals and paired t-tests indicates that all ML methods perform equivalently (p > 0.4), while SHapley Additive exPlanations (SHAP) analysis across all models shows a consistent feature-importance distribution, corroborating the statistical results. To showcase the superiority of the ML approaches, a comparison with conventional free-space PL modeling methods is presented, with the AI methodology demonstrating robust performance across seasonal variations and a 95.3% improvement.
(This article belongs to the Section Information and Communication Technologies)
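
The automated tuning pipeline the abstract describes is built on Optuna; a minimal sketch of one such study is shown below for a single gradient-boosting regressor, with synthetic stand-in features and search ranges chosen for illustration (the paper tunes twelve algorithms on NYUSIM-generated data).

```python
import optuna
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))    # stand-in features (distance, humidity, etc.)
y = X @ rng.normal(size=6) + rng.normal(scale=0.1, size=500)  # stand-in path loss

def objective(trial):
    model = GradientBoostingRegressor(
        n_estimators=trial.suggest_int("n_estimators", 50, 400),
        learning_rate=trial.suggest_float("learning_rate", 1e-3, 0.3, log=True),
        max_depth=trial.suggest_int("max_depth", 2, 6),
    )
    # minimize RMSE estimated by 3-fold cross-validation
    return -cross_val_score(model, X, y, cv=3,
                            scoring="neg_root_mean_squared_error").mean()

study = optuna.create_study(direction="minimize")
study.optimize(objective, n_trials=30)
print(study.best_params, study.best_value)
```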

16 pages, 1522 KB  
Article
Relationship Between Physical Activity Frequency and Cardiovascular Risk Throughout the Life Cycle
by Oscar Araque, Luz Adriana Sánchez-Echeverri and Ivonne X. Cerón
J. Funct. Morphol. Kinesiol. 2026, 11(1), 91; https://doi.org/10.3390/jfmk11010091 - 25 Feb 2026
Abstract
Objectives: Cardiovascular diseases (CVD) remain a leading cause of premature mortality globally, despite the proven efficacy of physical activity in reducing risks. This research aims to identify risk characteristics and characterise pathologies related to the onset of CVD in relation to physical activity levels. The study tests the hypothesis that adequate physical activity is associated with fewer CVD-related events, while sedentary behaviour is associated with an increased burden of risk factors. Methods: A cross-sectional, observational, descriptive, and analytical study was conducted with 116 participants of both sexes (aged 16 to 77 years) in El Espinal, Tolima. Clinical, anthropometric, and biochemical assessments were performed, including blood pressure, Body Mass Index (BMI), visceral fat, and lipid profiles. Physical activity was self-reported and categorised as weekly, monthly, and occasional exercise. Descriptive and bivariate statistical analyses were performed. Quantitative variables were expressed as means and standard deviations. Qualitative variables were presented as absolute frequencies. Statistical interaction graphs were used to analyse the effects of age and exercise frequency on pulse pressure. Results: Weekly exercise was identified as a key modulator of hemodynamic stability; while BMI and visceral fat increased with age, pulse pressure remained stable (44.17–46.55 mmHg). In contrast, occasional exercise was linked to high cardiovascular vulnerability, with pulse pressure spiking to a critical 75.00 mmHg in elderly participants (77 years) and BMI reaching obesity levels (38.15 kg/m²). Monthly exercise showed high variability and progressive lipid profile deterioration, with total cholesterol reaching 282.00 mg/dL in late maturity. Conclusions: Regular weekly physical activity acts as a physiological buffer that dissociates chronological ageing from vascular damage. While weekly exercise maintains optimal hemodynamic and metabolic ranges, occasional or inconsistent activity fails to prevent critical increases in pulse pressure and arterial stiffness during senescence. These findings underscore the necessity of regular, rather than sporadic, exercise as a vital "medicine" for maintaining arterial integrity across the lifespan.

11 pages, 1864 KB  
Article
Evaluation of a Subsampling Protocol for RapidHIT™ ID V2 Analysis
by Marion Defontaine, Logan Privat, Christian Siatka, Chloé Scherer, Anna Franzoni, Michele Rosso, Sylvain Hubac and Francis Hermitte
Forensic Sci. 2026, 6(1), 19; https://doi.org/10.3390/forensicsci6010019 - 19 Feb 2026
Abstract
Background/Objectives: Rapid DNA systems accelerate STR profiling but often require the consumption of the entire swab, limiting confirmation testing or downstream analyses. We previously validated a simple subsampling protocol for blood swabs on the RapidHIT™ ID, using a rigid subungual mini-swab (Copan Italia S.p.A). A new version of this instrument has recently been released, featuring redesigned software and consumables. The RapidINTEL™ Plus sample cartridge now enables two distinct lysis/extraction protocols, expanding analytical possibilities for rich biological traces. We evaluated subsampling performance using the subungual mini-swab and microFLOQ® swabs (Copan Italia S.p.A), and assessed feasibility for both blood and buccal reference swabs. Methods: Whole blood from four donors was deposited onto regular Copan swabs (10 µL) or microFLOQ® swabs (1 µL). A comparison was performed between the direct analysis of blood swabs using a RapidHIT™ ID V1 (RapidINTEL™ cartridge) and a RapidHIT™ ID V2 (RapidINTEL™ Plus cartridge, GENERAL protocol). Subsequently, both the GENERAL and SPECIALIZED protocols were tested after subsampling from primary blood or buccal swabs dried for 24 h using either a subungual mini-swab or a microFLOQ®. Results: Blood-swab subsampling on the V2 produced usable STR profiles with both the subungual mini-swab and the microFLOQ®. The subungual mini-swab was compatible with both the GENERAL and SPECIALIZED protocols. For blood applications, microFLOQ® fiber treatment showed no inhibitory effects. Reference buccal swabs were successfully analyzed with the RapidINTEL™ Plus cartridge, either directly (regular swab) or via subungual subsampling under both protocols. In contrast, in this feasibility dataset (single analysis per donor per condition), subsampling a reference swab with microFLOQ® did not yield suitable profiles for RapidINTEL™ Plus analysis under the tested conditions. Conclusions: This feasibility study indicates that the subsampling strategy can be applied on the RapidHIT™ ID V2, particularly using subungual mini-swabs, to retain the primary swab for potential downstream testing while maintaining usable STR profile quality for blood and buccal reference workflows under the tested conditions.

18 pages, 2458 KB  
Perspective
From Statistical Mechanics to Nonlinear Dynamics and into Complex Systems
by Alberto Robledo
Complexities 2026, 2(1), 3; https://doi.org/10.3390/complexities2010003 - 13 Feb 2026
Abstract
We detail a procedure to transform the current empirical stage in the study of complex systems into a predictive phenomenological one. Our approach starts with the statistical-mechanical Landau–Ginzburg equation for dissipative processes, such as kinetics of phase change. Then, it imposes discrete time evolution to make the back-feeding explicit, and adopts a power-law driving force to incorporate the onset of chaos or, alternatively, criticality, the guiding principles of complexity. One obtains, in closed analytical form, a nonlinear renormalization-group (RG) fixed-point map descriptive of any of the three known (one-dimensional) transitions to or out of chaos. Furthermore, its Lyapunov function is shown to be the thermodynamic potential in q-statistics, because the regular or multifractal attractors at the transitions to chaos severely impede access to the system's built-in configurations, leaving only a subset of vanishing measure available. To test the pertinence of our approach, we refer to the following complex-systems issues: (i) basic questions, such as demonstrating the equivalence of paradigms, illustrating self-organization, and giving a thermodynamic viewpoint of diversity, biological or other; (ii) derivation of empirical laws, e.g., ranked data distributions (Zipf law), biological regularities (Kleiber law), river and cosmological structures (Hack law); (iii) complex-systems methods, for example, evolutionary game theory, self-similar networks, and central-limit-theorem questions; (iv) complex problems of condensed-matter physics (and their analogs in other disciplines), such as critical fluctuations (catastrophes), glass formation (traffic jams), and the localization transition (foraging, collective motion).

26 pages, 1482 KB  
Article
Multimodal Autoencoder–Based Anomaly Detection Reveals Clinical–Radiologic Heterogeneity in Pulmonary Fibrosis
by Constantin Ghimuș, Călin Gheorghe Buzea, Alin Horațiu Nedelcu, Vlad Florin Oiegar, Ancuța Lupu, Răzvan Tudor Tepordei, Simona Alice Partene Vicoleanu, Ana Maria Dumitrescu, Manuela Ursaru, Gabriel Statescu, Emil Anton, Vasile Valeriu Lupu and Paraschiva Postolache
Med. Sci. 2026, 14(1), 76; https://doi.org/10.3390/medsci14010076 - 10 Feb 2026
Abstract
Background: Pulmonary fibrosis (PF) and post-infectious fibrotic lung disease are characterized by marked heterogeneity in radiologic patterns, physiologic impairment, and clinical presentation. Conventional analytic approaches often fail to capture non-linear and multimodal relationships between structural imaging findings and functional limitation. Integrating imaging-derived representations with clinical and functional data using artificial intelligence (AI) may provide a more comprehensive characterization of disease heterogeneity. Objectives: The objective of this study was to develop and evaluate a multimodal AI framework combining imaging-derived embeddings and structured clinical data to identify atypical clinical–radiologic profiles in patients with pulmonary fibrosis using unsupervised anomaly detection. Methods: A retrospective cohort of 41 patients with radiologically confirmed pulmonary fibrosis or post-infectious fibrotic lung disease was analyzed. Deep imaging embeddings were extracted from baseline thoracic CT examinations using a pretrained convolutional neural network and integrated with standardized clinical and functional variables. A multimodal variational autoencoder (VAE) was trained in an unsupervised manner to learn the distribution of typical patient profiles. Patient-specific anomaly scores were derived from reconstruction error plus latent regularization (β·KL divergence). Associations between anomaly scores, disease severity, and clinical markers were assessed using Spearman rank correlation. Results: Anomaly scores were right-skewed (median 26.91, IQR 22.87–32.11; range 19.75–46.18). Patients above the 85th percentile (anomaly score ≥ 33.85) comprised 7/41 (17.1%) of the cohort and occurred across all clinician-assigned severity categories (mild 3, moderate 1, severe 3). Anomaly scores overlapped substantially across severity groups, with similar medians (mild 26.47, moderate 28.55, severe 28.23). Correlations with conventional severity markers were weak and non-significant, including DLCO (% predicted; ρ = −0.25, p = 0.115) and FEV1 (% predicted; ρ = −0.22, p = 0.165), a pattern consistent with anomaly scores reflecting multimodal deviation rather than severity alone, while acknowledging the exploratory nature of the analysis. Highly anomalous patients frequently exhibited discordant clinical–radiologic profiles, including preserved functional capacity despite marked imaging-derived deviation or disproportionate physiological impairment relative to imaging patterns. Conclusions: This proof-of-concept study demonstrates that multimodal VAE-based anomaly detection integrating imaging-derived embeddings with clinical data can quantify clinical–radiologic heterogeneity in pulmonary fibrosis beyond conventional severity stratification. Unsupervised anomaly detection provides a complementary framework for identifying atypical multimodal profiles and supporting individualized phenotyping and hypothesis generation in fibrotic lung disease. Given the modest cohort size, these findings should be interpreted as illustrative and hypothesis-generating rather than generalizable.
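
The anomaly score defined above (reconstruction error plus latent regularization) has a simple closed form for a diagonal-Gaussian VAE posterior. The sketch below assumes encoder outputs mu and logvar and a decoder reconstruction are already available; the vector lengths and values are placeholders, not the study's data.

```python
import numpy as np

def anomaly_score(x, x_recon, mu, logvar, beta=1.0):
    """Patient-level score = reconstruction error + beta * KL(q(z|x) || N(0, I)),
    matching the 'reconstruction error plus latent regularization' definition."""
    recon = np.mean((x - x_recon) ** 2)
    # closed-form KL between a diagonal Gaussian posterior and a standard normal prior
    kl = 0.5 * np.sum(np.exp(logvar) + mu ** 2 - 1.0 - logvar)
    return recon + beta * kl

# Illustrative values for one patient (fused embedding + clinical vector of length 8):
x = np.ones(8); x_recon = 0.9 * np.ones(8)
mu = 0.1 * np.ones(4); logvar = -0.2 * np.ones(4)
print(anomaly_score(x, x_recon, mu, logvar, beta=0.5))
```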

17 pages, 3363 KB  
Article
Homogenization of Polymer Composite Materials Using Discrete Element Modeling
by Andrey A. Zhuravlev, Karine K. Abgaryan, Alexander Yu. Morozov and Dmitry L. Reviznikov
Symmetry 2026, 18(2), 281; https://doi.org/10.3390/sym18020281 - 3 Feb 2026
Abstract
A multiscale approach to calculating the effective elastic properties of a composite material with a fibrous filler and a polymer matrix is presented. Modeling at various scales is performed using a unified algorithmic scheme, solving Cauchy problems for systems of ordinary differential equations. At the atomic level, molecular dynamics modeling is used to calculate the elastic constant tensor of the polymer material. At the mesoscale, the authors' discrete element method is applied to calculate the effective elastic properties of the composite material. The method was tested on problems with regular, symmetric fiber placement, which have an analytical solution. The capabilities of the proposed approach are demonstrated on homogenization problems for composites with defects, where symmetry is broken. Fiber cracking and matrix–fiber debonding are considered.
(This article belongs to the Section Engineering and Materials)
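
For regular fiber placement, the classical closed-form benchmarks against which discrete element results of this kind are commonly checked are the Voigt and Reuss bounds. The sketch below uses illustrative glass/epoxy moduli; the rule-of-mixtures check is a generic benchmark, not the paper's specific analytical solution.

```python
def voigt_reuss(E_fiber, E_matrix, vf):
    """Upper (Voigt, isostrain) and lower (Reuss, isostress) bounds
    on the effective Young's modulus of a two-phase composite."""
    E_voigt = vf * E_fiber + (1 - vf) * E_matrix
    E_reuss = 1.0 / (vf / E_fiber + (1 - vf) / E_matrix)
    return E_voigt, E_reuss

# Illustrative values: glass fiber (~70 GPa) in epoxy (~3 GPa), 40% fiber volume.
print(voigt_reuss(70.0, 3.0, 0.4))  # any homogenized modulus should fall between these
```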

39 pages, 3699 KB  
Article
Enhancing Decision Intelligence Using Hybrid Machine Learning Framework with Linear Programming for Enterprise Project Selection and Portfolio Optimization
by Abdullah, Nida Hafeez, Carlos Guzmán Sánchez-Mejorada, Miguel Jesús Torres Ruiz, Rolando Quintero Téllez, Eponon Anvi Alex, Grigori Sidorov and Alexander Gelbukh
AI 2026, 7(2), 52; https://doi.org/10.3390/ai7020052 - 1 Feb 2026
Abstract
This study presents a hybrid analytical framework that enhances project selection by achieving reasonable predictive accuracy through the integration of expert judgment and modern artificial intelligence (AI) techniques. Using an enterprise-level dataset of 10,000 completed software projects with verified real-world statistical characteristics, we develop a three-step architecture for intelligent decision support. First, we introduce an extended Analytic Hierarchy Process (AHP) that incorporates organizational learning patterns to compute expert-validated criteria weights with a consistent level of reliability (CR = 0.04), and Linear Programming is used for portfolio optimization. Second, we propose a machine learning architecture that integrates expert knowledge derived from AHP into models such as Transformers, TabNet, and Neural Oblivious Decision Ensembles through mechanisms including attention modulation, split criterion weighting, and differentiable tree regularization. Third, the hybrid AHP-Stacking classifier generates a meta-ensemble that adaptively balances expert-derived information with data-driven patterns. The analysis shows that the model achieves 97.5% accuracy, a 96.9% F1-score, and a 0.989 AUC-ROC, representing a 25% improvement compared to baseline methods. The framework also indicates a projected 68.2% improvement in portfolio value (estimated incremental value of USD 83.5 M) based on post factum financial results from the enterprise's ventures. This study is evaluated retrospectively using data from a single enterprise, and while the results demonstrate strong robustness, generalizability to other organizational contexts requires further validation. This research contributes a structured approach to hybrid intelligent systems and demonstrates that combining expert knowledge with machine learning can provide reliable, transparent, and high-performing decision-support capabilities for project portfolio management.
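
The consistency ratio quoted above (CR = 0.04) comes from the standard AHP computation: principal eigenvalue of the pairwise comparison matrix, consistency index, then division by Saaty's random index. A sketch with a hypothetical 3x3 comparison matrix (the paper's actual criteria and judgments are not reproduced):

```python
import numpy as np

RI = {3: 0.58, 4: 0.90, 5: 1.12}   # Saaty's random consistency indices

def ahp_weights_and_cr(A):
    """Principal-eigenvector weights and consistency ratio for a pairwise matrix A."""
    vals, vecs = np.linalg.eig(A)
    k = np.argmax(vals.real)
    w = np.abs(vecs[:, k].real); w /= w.sum()   # criteria weights
    lam_max = vals[k].real
    n = A.shape[0]
    ci = (lam_max - n) / (n - 1)                # consistency index
    return w, ci / RI[n]                        # CR = CI / RI

# Hypothetical reciprocal comparison matrix on Saaty's 1-9 scale:
A = np.array([[1, 3, 5], [1/3, 1, 2], [1/5, 1/2, 1]], float)
w, cr = ahp_weights_and_cr(A)
print(w.round(3), round(cr, 3))   # CR < 0.1 is conventionally acceptable
```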

21 pages, 1404 KB  
Article
Deep Learning-Enhanced Hybrid Beamforming Design with Regularized SVD Under Imperfect Channel Information
by S. Pourmohammad Azizi, Amirhossein Nafei, Shu-Chuan Chen and Rong-Ho Lin
Mathematics 2026, 14(3), 509; https://doi.org/10.3390/math14030509 - 31 Jan 2026
Abstract
We propose a low-complexity hybrid beamforming method for massive Multiple-Input Multiple-Output (MIMO) systems that is robust to Channel State Information (CSI) estimation errors. These errors stem from hardware impairments, pilot contamination, limited training, and fast fading, causing spectral-efficiency loss. However, existing hybrid beamforming solutions typically either assume near-perfect CSI or rely on greedy/black-box designs without an explicit mechanism to regularize the error-distorted singular modes, leaving a gap in unified, low-complexity, and theoretically grounded robustness. We unfold the Alternating Direction Method of Multipliers (ADMM) into a trainable Deep Learning (DL) network, termed DL-ADMM, to jointly optimize Radio-Frequency (RF) and baseband precoders and combiners. In DL-ADMM, the ADMM update mappings are learned (layer-wise parameters and projections) to amortize the joint RF/baseband optimization, whereas Regularized Singular Value Decomposition (RSVD) acts as an analytical regularizer that reshapes the observed channel's singular values to suppress noise amplification under imperfect CSI. RSVD is integrated to stabilize singular modes and curb noise amplification, yielding a unified and scalable design. For σ_e² = 0.1, the proposed DL-ADMM-Reg achieves approximately 8–11 bits/s/Hz higher spectral efficiency than Orthogonal Matching Pursuit (OMP) at Signal-to-Noise Ratio (SNR) values of 20–40 dB, while remaining within <1 bit/s/Hz of the digital-optimal benchmark across both (N_t, N_r) = (32, 32) and (64, 64) settings. Simulations confirm higher spectral efficiency and robustness than OMP and Adaptive Phase Shifters (APSs).
(This article belongs to the Special Issue Computational Methods in Wireless Communications with Applications)
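
The RSVD idea of reshaping the observed channel's singular values can be sketched in a few lines of numpy. The Tikhonov-style shrinkage rule s³/(s² + lam) used here is my assumption for illustration; the paper's exact regularizer and its integration into the unfolded ADMM layers are not reproduced.

```python
import numpy as np

def rsvd_precoder(H_est, n_streams, lam=0.1):
    """Regularized SVD sketch: shrink singular values of the noisy channel
    estimate to suppress noise-dominated modes, then take the top
    right-singular vectors as a fully digital precoder."""
    U, s, Vh = np.linalg.svd(H_est, full_matrices=False)
    s_reg = s**3 / (s**2 + lam)          # shrinks small (noisy) modes the most
    order = np.argsort(s_reg)[::-1]
    F = Vh[order[:n_streams]].conj().T   # Nt x n_streams precoder
    return F, s_reg

# Toy 8x8 channel with complex Gaussian estimation error of total variance 0.1:
rng = np.random.default_rng(1)
H = rng.normal(size=(8, 8)) + 1j * rng.normal(size=(8, 8))
E = np.sqrt(0.05) * (rng.normal(size=(8, 8)) + 1j * rng.normal(size=(8, 8)))
F, _ = rsvd_precoder(H + E, n_streams=2)
print(np.linalg.norm(F, axis=0))   # unit-norm columns inherited from the SVD
```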

38 pages, 6300 KB  
Article
Fused Unbalanced Gromov–Wasserstein-Based Network Distributional Resilience Analysis for Critical Infrastructure Assessment
by Iman Seyedi, Antonio Candelieri and Francesco Archetti
Mathematics 2026, 14(3), 417; https://doi.org/10.3390/math14030417 - 25 Jan 2026
Abstract
Identifying critical infrastructure in transportation networks requires metrics that can capture both the topological structure and how demand is redistributed during disruptions. Conventional graph-theoretic approaches fail to jointly quantify these vulnerabilities. This study presents a computational framework for edge-criticality assessment based on the Fused Unbalanced Gromov–Wasserstein (FUGW) distance, incorporating both the structural similarity and the demand characteristics of network nodes within an optimal transport formulation. The three hyperparameters that influence FUGW accuracy—fusion weight, entropic regularization, and marginal penalties—were tuned using Bayesian optimization. This ensures the rankings remain accurate, stable, and reproducible under temporal variability and demand shifts. We apply the framework to a benchmark transportation network evaluated across four diurnal periods, capturing dynamic congestion and shifting demand patterns. Systematic variation of the fusion parameter reveals seven consistently critical edges whose rankings remain stable across analytical configurations. Monotonic scaling with increasing feature emphasis, strong cross-hyperparameter correlation, and low temporal variability confirm the robustness of the inferred criticality hierarchy. These edges represent both structural bridges and demand concentration points, offering indicators of network vulnerability. These findings demonstrate that FUGW provides a robust and scalable method for assessing transportation vulnerabilities, supporting clear decisions on maintenance planning, redundancy, and resilience investments.
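
Stripped of the FUGW machinery, the edge-criticality assessment is an outer loop: score each edge by the distance between the intact network and the network with that edge removed, then rank. The sketch below uses a Laplacian-spectrum distance purely as a placeholder for the FUGW distance, and networkx's karate-club graph as a stand-in network; both are assumptions for illustration.

```python
import networkx as nx
import numpy as np

def spectral_distance(G1, G2, k=6):
    """Placeholder network distance (low Laplacian eigenvalues); the paper
    uses the Fused Unbalanced Gromov-Wasserstein distance instead."""
    s1 = np.sort(nx.laplacian_spectrum(G1))[:k]
    s2 = np.sort(nx.laplacian_spectrum(G2))[:k]
    return float(np.linalg.norm(s1 - s2))

def edge_criticality(G):
    scores = {}
    for e in list(G.edges()):
        H = G.copy(); H.remove_edge(*e)       # simulate disruption of edge e
        scores[e] = spectral_distance(G, H)   # larger distance = more critical
    return sorted(scores.items(), key=lambda kv: -kv[1])

G = nx.karate_club_graph()                    # stand-in for a road network
print(edge_criticality(G)[:7])                # seven most critical edges
```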

23 pages, 1503 KB  
Article
Hallucination-Aware Interpretable Sentiment Analysis Model: A Grounded Approach to Reliable Social Media Content Classification
by Abdul Rahaman Wahab Sait and Yazeed Alkhurayyif
Electronics 2026, 15(2), 409; https://doi.org/10.3390/electronics15020409 - 16 Jan 2026
Abstract
Sentiment analysis (SA) has become an essential tool for analyzing social media content in order to monitor public opinion and support digital analytics. Although transformer-based SA models exhibit remarkable performance, they lack mechanisms to mitigate hallucinated sentiment, which refers to the generation of unsupported or overconfident predictions without explicit linguistic evidence. To address this limitation, this study presents a hallucination-aware SA model by incorporating semantic grounding, interpretability-congruent supervision, and neuro-symbolic reasoning within a unified architecture. The proposed model is based on a fine-tuned Open Pre-trained Transformer (OPT) model, using three fundamental mechanisms: a Sentiment Integrity Filter (SIF), a SHapley Additive exPlanations (SHAP)-guided regularization technique, and a confidence-based lexicon-deep fusion module. The experimental analysis was conducted on two multi-class sentiment datasets that contain Twitter (now X) and Reddit posts. In Dataset 1, the suggested model achieved an average accuracy of 97.6% and a hallucination rate of 2.3%, outperforming current transformer-based and hybrid sentiment models. With Dataset 2, the framework demonstrated strong external generalization, with an accuracy of 95.8% and a hallucination rate of 3.4%, significantly lower than state-of-the-art methods. These findings indicate that hallucination mitigation can be incorporated into transformer optimization without performance degradation, offering a deployable, interpretable, and linguistically grounded social media SA framework that enhances the reliability of neural language-understanding systems.
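
On the interpretability side, the SHAP attributions that guide the regularization technique can be obtained with the shap library. The sketch below uses a stand-in random-forest classifier over toy features rather than the fine-tuned OPT model, and the SHAP-guided penalty itself is only indicated in a comment; treat it as a sketch of the attribution step, not the paper's training objective.

```python
import shap
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Stand-in classifier over bag-of-words-style features (the paper fine-tunes OPT).
rng = np.random.default_rng(0)
X = rng.integers(0, 3, size=(300, 20)).astype(float)
y = (X[:, 0] + X[:, 1] > 2).astype(int)
clf = RandomForestClassifier(n_estimators=50).fit(X, y)

explainer = shap.Explainer(clf, X)   # dispatches to a tree explainer here
sv = explainer(X[:10])               # attributions for 10 posts
# A SHAP-guided penalty could compare these attributions against lexicon
# evidence and down-weight predictions that lack supporting features.
print(np.abs(sv.values).mean(axis=0).round(3))
```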

34 pages, 3406 KB  
Article
Reconstructing Spatial Localization Error Maps via Physics-Informed Tensor Completion for Passive Sensor Systems
by Zhaohang Zhang, Zhen Huang, Chunzhe Wang and Qiaowen Jiang
Sensors 2026, 26(2), 597; https://doi.org/10.3390/s26020597 - 15 Jan 2026
Abstract
Accurate mapping of localization error distribution is essential for assessing passive sensor systems and guiding sensor placement. However, conventional analytical methods like the Geometrical Dilution of Precision (GDOP) rely on idealized error models, failing to capture the complex, heterogeneous error distributions typical of real-world environments. To overcome this challenge, we propose a novel data-driven framework that reconstructs high-fidelity localization error maps from sparse observations in TDOA-based systems. Specifically, we model the error distribution as a tensor and formulate the reconstruction as a tensor completion problem. A key innovation is our physics-informed regularization strategy, which incorporates prior knowledge from the analytical error covariance matrix into the tensor factorization process. This allows for robust recovery of the complete error map even from highly incomplete data. Experiments on a real-world dataset validate the superiority of our approach, showing an accuracy improvement of at least 27.96% over state-of-the-art methods.
(This article belongs to the Special Issue Multi-Agent Sensors Systems and Their Applications)
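
A minimal version of the physics-informed completion step can be written as masked low-rank factorization with a quadratic pull toward a prior map. In the sketch below the constant prior, rank, and penalty weight are illustrative assumptions; the paper derives its prior from the analytical error covariance matrix and factorizes a tensor rather than a matrix.

```python
import numpy as np

def complete_map(E_obs, mask, E_prior, rank=3, lam=0.1, lr=0.01, iters=2000):
    """Masked low-rank completion of an error map E ≈ U @ V.T, pulled toward a
    physics-based prior map (a constant prior is used in the toy example)."""
    m, n = E_obs.shape
    rng = np.random.default_rng(0)
    U = 0.1 * rng.normal(size=(m, rank))
    V = 0.1 * rng.normal(size=(n, rank))
    for _ in range(iters):
        E_hat = U @ V.T
        R = mask * (E_hat - E_obs) + lam * (E_hat - E_prior)  # data + prior residuals
        U, V = U - lr * (R @ V), V - lr * (R.T @ U)           # gradient steps
    return U @ V.T

# Toy 20x20 error map (rank-3 radial trend) observed at 30% of grid points:
g = np.linspace(-1.0, 1.0, 20)
E_true = 1.0 + g[:, None] ** 2 + g[None, :] ** 2
mask = (np.random.default_rng(1).random((20, 20)) < 0.3).astype(float)
E_rec = complete_map(E_true * mask, mask, np.full((20, 20), E_true.mean()))
print(np.abs(E_rec - E_true).mean())   # mean absolute reconstruction error
```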

12 pages, 264 KB  
Article
Timelike Thin-Shell Evolution in Gravitational Collapse: Classical Dynamics and Thermodynamic Interpretation
by Axel G. Schubert
Entropy 2026, 28(1), 96; https://doi.org/10.3390/e28010096 - 13 Jan 2026
Abstract
This work explores late-time gravitational collapse using timelike thin-shell methods in classical general relativity. A junction surface separates a regular de Sitter interior from a Schwarzschild or Schwarzschild–de Sitter exterior in a post-transient regime with fixed exterior mass M (ADM for Λ₊ = 0), modelling a vacuum–energy core surrounded by an asymptotically classical spacetime. The configuration admits a natural thermodynamic interpretation based on a geometric area functional S_shell ∝ R² and the Tolman redshift, both derived from classical junction conditions and used as an entropy-like coarse-grained quantity rather than a fundamental statistical entropy. Key results include (i) identification of a deceleration mechanism at the balance radius R_thr = (3M/Λ)^(1/3) for linear surface equations of state p = wσ; (ii) classification of the allowable radial domain V(R) ≤ 0 for outward evolution; (iii) bounded curvature invariants throughout the shell-supported spacetime region; and (iv) a mass-scaled frequency bound f_c R_S ≤ ξ/(3√3 π) for persistent near-shell spectral modes. All predictions follow from standard Israel junction techniques and provide concrete observational tests. The framework offers an analytically tractable example of regular thin-shell collapse dynamics within classical general relativity, with implications for alternative compact object scenarios.
(This article belongs to the Special Issue Coarse and Fine-Grained Aspects of Gravitational Entropy)
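
A quick numeric check of the balance radius is straightforward in geometric units; the mass and cosmological-constant values below are illustrative, not taken from the paper.

```python
# Balance radius R_thr = (3 M / Λ)^(1/3) in geometric units (G = c = 1),
# where shell deceleration sets in for linear equations of state p = w*σ.
M = 1.0          # exterior (ADM) mass, illustrative
Lam = 3e-4       # effective cosmological constant, illustrative
R_thr = (3 * M / Lam) ** (1 / 3)
R_s = 2 * M      # Schwarzschild radius for comparison
print(R_thr, R_thr / R_s)   # ~21.5 here, i.e. ~10.8 Schwarzschild radii
```
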
16 pages, 336 KB  
Article
Bayesian Neural Networks with Regularization for Sparse Zero-Inflated Data Modeling
by Sunghae Jun
Information 2026, 17(1), 81; https://doi.org/10.3390/info17010081 - 13 Jan 2026
Abstract
Zero inflation is pervasive across text mining, event log, and sensor analytics, and it often degrades the predictive performance of analytical models. Classical approaches, most notably the zero-inflated Poisson (ZIP) and zero-inflated negative binomial (ZINB) models, address excess zeros but rely on rigid parametric assumptions and fixed model structures, which can limit flexibility in high-dimensional, sparse settings. We propose a Bayesian neural network (BNN) with regularization for sparse zero-inflated data modeling. The method separately parameterizes the zero-inflation probability and the count intensity under ZIP/ZINB likelihoods, while employing Bayesian regularization to induce sparsity and control overfitting. Posterior inference is performed using variational inference. We evaluate the approach through controlled simulations with varying zero ratios and a real-world dataset, and we compare it against Poisson generalized linear models, ZIP, and ZINB baselines. The present study focuses on predictive performance measured by mean squared error (MSE). Across all settings, the proposed method achieves consistently lower prediction error and improved uncertainty quantification, with ablation studies confirming the contribution of the regularization components. These results demonstrate that a regularized BNN provides a flexible and robust framework for sparse zero-inflated data analysis in information-rich environments.
(This article belongs to the Special Issue Feature Papers in Information in 2024–2025)
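
For reference, the zero-inflated Poisson likelihood that the BNN parameterizes mixes a point mass at zero with a Poisson count model. A short sketch follows, with toy counts and with pi and lam fixed rather than produced by the network's two output heads as in the paper.

```python
import numpy as np
from scipy.stats import poisson

def zip_loglik(y, pi, lam):
    """Zero-inflated Poisson log-likelihood:
    P(y=0) = pi + (1-pi) * exp(-lam);  P(y=k) = (1-pi) * Poisson(k; lam) for k > 0.
    In the proposed BNN, pi and lam are separate per-sample network outputs."""
    y = np.asarray(y)
    ll = np.where(
        y == 0,
        np.log(pi + (1 - pi) * np.exp(-lam)),
        np.log1p(-pi) + poisson.logpmf(y, lam),
    )
    return ll.sum()

# Sparse toy counts: mostly structural zeros on top of Poisson(2.0) events.
y = np.array([0, 0, 0, 0, 3, 0, 1, 0, 0, 2])
print(zip_loglik(y, pi=0.6, lam=2.0))
```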