Search Results (1,264)

Search Parameters:
Keywords = tail approach

21 pages, 3370 KB  
Article
An Innovative Semiparametric Density Model for the Statistical Characterization of Ground-Vehicle Radar Cross Sections
by Zengcan Liu, Shuhao Wen, Houjun Sun and Ming Deng
Sensors 2026, 26(9), 2572; https://doi.org/10.3390/s26092572 - 22 Apr 2026
Abstract
Accurately characterizing the statistical fluctuations of vehicle radar cross sections (RCSs) across polarization states and azimuthal sectors is essential for evaluating detection performance, conducting probabilistic simulations, and analyzing target features in millimeter-wave radar systems. Existing one-dimensional RCS statistical models, including Weibull, Chi-square, Lognormal, Rice, and Gaussian distributions, are often limited by their restricted functional expressiveness, making it difficult to simultaneously capture skewness, tail thickness, and azimuthal dependence under narrow angular-domain conditions. In addition, purely nonparametric approaches tend to produce spurious modes under finite-sample conditions and lack interpretable structural priors. To address these limitations, this paper proposes a Unimodal RCS Semiparametric Density Estimator (URCS-SDE) tailored for ground-vehicle targets. The proposed approach adopts kernel density estimation (KDE) as a data-driven baseline representation and incorporates physically plausible structural constraints through unimodal shape projection. Then a beta-type tail template is further introduced in the normalized amplitude domain to regulate boundary decay behavior. Finally, weighted least-squares calibration is performed on the histogram grid of the empirical probability density function (PDF), achieving a balanced trade-off between fitting accuracy and stability in both the peak and tail regions. Using multi-azimuth RCS measurements of two representative ground vehicles, the URCS-SDE is systematically compared with five classical parametric distributions and a representative regularized mixture density network (MDN) baseline. Performance is evaluated under both full-azimuth and directional-window conditions using the sum of squared errors (SSE), root mean squared error (RMSE), coefficient of determination (R-square) and held-out negative log-likelihood (NLL). 
The results show that the URCS-SDE consistently provides the most accurate and stable density estimates, especially in narrow angular windows. In addition, a threshold-based detection-support example derived from the fitted PDFs demonstrates that the advantage of the URCS-SDE transfers from density reconstruction to a directly engineering-relevant downstream quantity. Full article
(This article belongs to the Section Radar Sensors)
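The KDE-baseline-plus-calibration pipeline described in this abstract can be illustrated loosely as follows. This is not the authors' URCS-SDE: the lognormal test data, the Silverman-style bandwidth rule, and the histogram grid are all invented for demonstration, and the unimodal projection and beta-type tail template are omitted.

```python
import numpy as np

# Sketch only: a Gaussian KDE fit to simulated heavy-tailed amplitudes,
# scored against the empirical histogram with the SSE/RMSE criteria
# mentioned in the abstract. All data and settings are illustrative.
rng = np.random.default_rng(0)
x = rng.lognormal(mean=0.0, sigma=0.5, size=2000)  # stand-in for RCS amplitudes

def gaussian_kde(samples, grid, bandwidth):
    """Plain Gaussian kernel density estimate evaluated on a fixed grid."""
    z = (grid[:, None] - samples[None, :]) / bandwidth
    return np.exp(-0.5 * z**2).sum(axis=1) / (len(samples) * bandwidth * np.sqrt(2 * np.pi))

grid = np.linspace(0.0, x.max(), 200)
bw = 1.06 * x.std() * len(x) ** (-1 / 5)          # Silverman's rule of thumb
f_hat = gaussian_kde(x, grid, bw)

# Empirical PDF on the same bins, then the fit metrics on the bin centers.
hist, edges = np.histogram(x, bins=grid, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
sse = np.sum((hist - gaussian_kde(x, centers, bw)) ** 2)
rmse = np.sqrt(sse / len(hist))
```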
29 pages, 485 KB  
Article
A Sequential Design for Extreme Quantile Estimation Under Binary Sampling
by Michel Broniatowski and Emilie Miranda
Entropy 2026, 28(4), 479; https://doi.org/10.3390/e28040479 - 21 Apr 2026
Abstract
We propose a sequential design method aiming at the estimation of an extreme quantile based on a sample of binary data corresponding to peaks over a given threshold. This study is motivated by an industrial challenge in material reliability and consists of estimating a failure quantile from trials whose outcomes are reduced to indicators of whether the specimen has failed at the tested stress levels. The proposed approach relies on a splitting strategy that decomposes the target extreme probability into a product of higher-order conditional probabilities, enabling a progressive exploration of the tail of the distribution through sampling under truncated laws. We consider GEV and Weibull models for the underlying distribution, and the sequential estimation of their parameters is carried out using an enhanced maximum likelihood procedure specifically adapted to binary data, addressing the substantial uncertainty inherent to such limited information. Full article
(This article belongs to the Special Issue Statistical Inference: Theory and Methods)
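The splitting strategy named in this abstract — decomposing an extreme probability into a product of conditional exceedance probabilities over nested thresholds — can be checked numerically. The Exponential(1) law and threshold values below are illustrative choices, not the paper's GEV/Weibull setting.

```python
import math

# Splitting identity: P(X > t_K) = prod_k P(X > t_{k+1} | X > t_k)
# for nested thresholds t_0 < t_1 < ... < t_K. Demonstrated for an
# Exponential(1) law, where each factor is available in closed form.
def survival(t):            # P(X > t) for Exponential(1)
    return math.exp(-t)

thresholds = [0.0, 2.0, 4.0, 6.0, 8.0]       # illustrative nested levels
product = 1.0
for lo, hi in zip(thresholds, thresholds[1:]):
    product *= survival(hi) / survival(lo)   # conditional exceedance factor

# The product of moderate conditional factors recovers the extreme probability.
assert abs(product - survival(thresholds[-1])) < 1e-12
```

Each factor (here e⁻² ≈ 0.135) is far easier to estimate from binary failure data than the raw tail probability e⁻⁸ ≈ 3.4 × 10⁻⁴, which is the point of the progressive exploration.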
25 pages, 7615 KB  
Article
Regional Copula Modeling of Rainfall Duration and Intensity: Derivation and Validation of IDF Curves in the Kastoria Basin
by Evangelos Leivadiotis, Aris Psilovikos and Silvia Kohnová
Hydrology 2026, 13(4), 117; https://doi.org/10.3390/hydrology13040117 - 20 Apr 2026
Abstract
Intensity–Duration–Frequency (IDF) curves are the cornerstone of hydraulic infrastructure design, yet standard methodologies often fail to account for the complex dependence structure of rainfall characteristics and the non-stationary effects of climate change. This study develops a robust Regional Copula Framework for the Kastoria Lake basin, Greece, utilizing sub-hourly rainfall records from four meteorological stations (2007–2024). We employ a forensic data quality control process to pool 277 independent storm events. Unlike traditional approaches, our analysis demonstrates that the Generalized Extreme Value (GEV) distribution (ξ = 0.348) significantly outperforms the standard Lognormal distribution in modeling heavy-tailed rainfall intensities. The dependence between storm duration and intensity was found to be consistently negative (τ = −0.35), a structure best captured by the Rotated Gumbel (90°) copula, which physically reflects the region’s convective storm dynamics. Trend analysis revealed a statistically significant decrease in peak intensity (τ = −0.14) coupled with an increase in storm duration (τ = 0.22), a hydro-climatic shift that contrasts with increasing intensity trends reported in the wider Balkan region. These findings suggest a regime transition from flash-flood dominance to volume-critical events, necessitating updated design criteria that integrate both multivariate dependence and local climatic non-stationarity. Full article
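The copula-parameter step implied by this abstract can be sketched as follows. For a Gumbel copula, Kendall's tau relates to the parameter by τ = 1 − 1/θ, and a 90° rotation flips the sign of the dependence; this is a generic textbook relation, not the paper's full fitting procedure.

```python
# Map the reported negative duration-intensity dependence (tau = -0.35)
# to the parameter of a rotated Gumbel (90 degrees) copula: the rotation
# carries the magnitude of tau, so theta = 1 / (1 - |tau|).
tau = -0.35
theta = 1.0 / (1.0 - abs(tau))   # rotated-Gumbel parameter, ~1.538
assert theta >= 1.0              # Gumbel family requires theta >= 1
```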
40 pages, 1430 KB  
Article
Optimal Coordination of Distance and Two-Level Directional Overcurrent Relays for Renewable Energy-Integrated Power Networks Using Enhanced Red-Tailed Hawk Algorithm
by Birsen Boylu Ayvaz and Zafer Dogan
Appl. Sci. 2026, 16(8), 3961; https://doi.org/10.3390/app16083961 - 19 Apr 2026
Abstract
Optimal coordination of distance and directional overcurrent relays (DR–DOCR) aims to achieve a fast, selective, and reliable protection scheme for transmission and sub-transmission systems. However, it constitutes a complex, nonlinear, and highly constrained optimization problem. In particular, single-setting DOCR characteristics used in conventional DR-DOCR coordination introduce additional challenges in lowering relay operating times while satisfying the coordination time interval (CTI) constraint. To address this issue, this paper proposes a novel DR-DOCR coordination approach that leverages a two-level DOCR characteristic. The objective is to exploit this characteristic, which partitions the relay curve into primary and backup protection regions in a highly flexible manner, thereby enabling easier avoidance of CTI violations. In addition, an enhanced variant of the red-tailed hawk algorithm, called ERTH, has been newly developed to solve this challenging problem. The proposed method is validated on versions of the 8-bus and 33-kV portion of the 30-bus power networks that have been modified to include renewable energy sources. Results demonstrate that the proposed method achieves total relay operating times of 23.681 s and 70.742 s for the 8-bus and 30-bus power systems, respectively. These values correspond to an 80.4% and 81.2% reduction compared to the conventional coordination scheme optimized by the ERTH algorithm, which yields 120.702 s and 376.757 s, respectively. Moreover, the ERTH algorithm exhibits superior performance in attaining near-global optimal solutions compared to the original RTH and other competitive optimization algorithms. In particular, for the 30-bus system under the conventional coordination scheme, the second-best result after ERTH is obtained by the teaching-learning-based optimization algorithm with a total relay operating time of 415.885 s. 
This indicates a 9.4% improvement achieved by ERTH (376.757 s) and a significantly higher improvement of 83% (70.742 s) achieved by the proposed strategy integrating ERTH with the two-level DOCR-based coordination scheme. Full article
(This article belongs to the Section Electrical, Electronics and Communications Engineering)
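The percentage reductions quoted in this abstract can be reproduced directly from the reported operating times — a quick arithmetic check, not part of the paper:

```python
# Total relay operating times (seconds) from the abstract:
# proposed two-level scheme vs. conventional scheme optimized by ERTH.
pairs = {"8-bus": (23.681, 120.702), "30-bus": (70.742, 376.757)}
for name, (proposed, conventional) in pairs.items():
    reduction = 100.0 * (1.0 - proposed / conventional)
    print(name, round(reduction, 1))   # 80.4 and 81.2, matching the abstract
```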
31 pages, 543 KB  
Article
Frequentist and Bayesian Predictive Inference for the Log-Logistic Distribution Under Progressive Type-II Censoring
by Ziteng Zhang and Wenhao Gui
Entropy 2026, 28(4), 466; https://doi.org/10.3390/e28040466 - 18 Apr 2026
Abstract
This paper investigates the prediction of unobserved future failure times for the heavy-tailed Log-Logistic distribution under Progressive Type-II censoring. We first develop point and interval estimates for the unknown parameters using both frequentist maximum likelihood and Bayesian approaches. For predicting future failures, we derive three distinct point predictors: the Best Unbiased Predictor (BUP), the Conditional Median Predictor (CMP), and the Bayesian Predictor (BP). Corresponding prediction intervals are constructed using frequentist pivotal quantities, Bayesian Equal-Tailed Intervals (ETIs), and Highest Posterior Density (HPD) methods. The Bayesian procedures are implemented via Markov chain Monte Carlo (MCMC) sampling. We evaluate the finite-sample performance of the proposed methodologies through a Monte Carlo simulation study and further validate them using two real-world datasets, namely bladder cancer remission times and guinea pig survival times. The numerical results indicate that the proposed BP, particularly under the empirical prior, provides the most accurate and stable overall performance for point prediction, while the frequentist predictors become less reliable in extreme heavy-tailed settings. For interval prediction, the Bayesian HPD method consistently outperforms the alternatives, substantially reducing interval lengths for right-skewed data while maintaining the nominal coverage probability. Full article
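One reason simulation-based prediction is convenient for this family: the Log-Logistic CDF F(x) = 1/(1 + (x/a)⁻ᵇ) inverts in closed form. The sketch below uses illustrative scale/shape values (a = 2, b = 1.5) and plain inverse-CDF sampling; it is not the paper's BUP/CMP/BP machinery.

```python
import random

# Closed-form quantile and CDF of the Log-Logistic distribution
# (a = scale, b = shape; values here are illustrative).
def loglogistic_quantile(p, a=2.0, b=1.5):
    return a * (p / (1.0 - p)) ** (1.0 / b)

def loglogistic_cdf(x, a=2.0, b=1.5):
    return 1.0 / (1.0 + (x / a) ** (-b))

# Inverse-CDF sampling, e.g. as a building block for Monte Carlo
# prediction intervals.
rng = random.Random(0)
sample = [loglogistic_quantile(rng.random()) for _ in range(10000)]

assert abs(loglogistic_quantile(0.5) - 2.0) < 1e-12   # median equals the scale a
assert abs(loglogistic_cdf(loglogistic_quantile(0.9)) - 0.9) < 1e-9
```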
14 pages, 936 KB  
Article
Cannabidiol Prevents Ovariectomy-Induced Thermoregulatory Dysfunction in Rats: A Preclinical Study on Menopausal Vasomotor Symptoms
by Vitória Leite Lages, Lourdes Fernanda Godinho, Alayanne Santos Guieiro, Thais Trindade, Bruna Oliveira Costa, Joyce Mirlene Moreira Costa, Ramona Ramalho de Souza Pereira, Caíque Olegário Diniz e Magalhães and Kinulpe Honorato-Sampaio
Drugs Drug Candidates 2026, 5(2), 26; https://doi.org/10.3390/ddc5020026 - 18 Apr 2026
Abstract
Background/Objectives: Vasomotor symptoms (hot flashes) affect 70–80% of menopausal women, significantly impairing quality of life. Current treatments include hormone therapy, which is contraindicated for many patients, and non-hormonal alternatives with limited efficacy or adverse effects. Cannabidiol (CBD), a non-psychoactive phytocannabinoid, has emerged as a potential therapeutic candidate due to its interaction with the endocannabinoid system. This study aimed to investigate whether a standardized Cannabis sativa extract containing isolated CBD attenuates heat dissipation in ovariectomized rats, a preclinical model of estrogen deficiency. Methods: Female Wistar rats were randomly assigned to sham-operated vehicle-treated (SHAM-V), ovariectomized vehicle-treated (OVX-V), or ovariectomized CBD-treated (OVX-CBD; 10 mg/kg/day, oral gavage) groups. Treatment began on postoperative day 2 and continued for 21 days. Tail-skin temperature, a surrogate marker of heat dissipation, was assessed by infrared thermography on day 14. Energy metabolism was evaluated by indirect calorimetry on day 21. Uterine weight was measured as a biomarker of estrogen depletion. Results: Ovariectomy significantly increased tail temperature compared to SHAM-V. CBD treatment completely prevented this effect, with OVX-CBD animals exhibiting thermographic profiles similar to SHAM-V. Uterine atrophy was not reversed by CBD. No differences in the calorimetry parameter were observed among groups. Conclusions: This study provides novel preclinical evidence that cannabidiol attenuates ovariectomy-induced heat dissipation in rats, without detectable effects on uterine weight or metabolic parameters. These findings suggest that CBD may represent a potential non-hormonal approach for the management of menopausal vasomotor symptoms; however, further studies are required to elucidate the underlying mechanisms and to determine its translational and clinical relevance. Full article
(This article belongs to the Section Drug Candidates from Natural Sources)
24 pages, 921 KB  
Article
Advanced Insurance Risk Modeling for Pseudo-New Customers Using Balanced Ensembles and Transformer Architectures
by Finn L. Solly, Raquel Soriano-Gonzalez, Angel A. Juan and Antoni Guerrero
Risks 2026, 14(4), 91; https://doi.org/10.3390/risks14040091 - 17 Apr 2026
Abstract
In insurance portfolios, classifying customers without a prior history at a given company is particularly challenging due to the absence of historical behavior, extreme class imbalance, heavy-tailed loss distributions, and strict operational constraints. Traditional machine learning approaches, including the baseline methodology proposed in previous studies, typically optimize global predictive accuracy and therefore fail to capture business-critical outcomes, especially the identification of high-risk clients. This study extends the existing approach by evaluating two complementary business-aware classification strategies: (i) a balanced bagging ensemble specifically designed to handle class imbalance and maximize expected profit under explicit customer-omission constraints, and (ii) a lightweight Transformer-based architecture capable of learning richer feature representations. Both approaches incorporate the asymmetric financial cost structure of insurance and operate under operational selection limits. The empirical analysis is conducted on a proprietary large-scale auto insurance dataset comprising 51,618 customers and is complemented by validation on nine synthetic datasets to assess robustness. Model performance is evaluated using statistical tests (ANOVA, Friedman, and pair-wise comparisons) together with business-oriented metrics. The results show that both proposed approaches consistently outperform the baseline methodology (p < 0.001) in terms of profit, with the ensemble offering a better balance of performance and efficiency, while the Transformer shows stronger robustness and generalization under data perturbations. The balanced ensemble provides the most favourable trade-off between predictive performance, robustness, interpretability, and computational efficiency, making it suitable for deployment in regulated insurance environments, while the Transformer achieves competitive results and exhibits stronger generalization under data perturbations. 
The proposed approach aligns machine learning with actuarial portfolio optimization by explicitly integrating profit-driven objectives and operational constraints, offering two practical and scalable solutions for risk-based decision-making in real-world insurance settings. Full article
(This article belongs to the Special Issue Artificial Intelligence Risk Management)
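The "business-aware" objective this abstract contrasts with plain accuracy can be sketched minimally: choose a decision threshold that maximizes expected profit under an asymmetric cost structure. The gains, losses, labels, and probabilities below are invented for illustration and bear no relation to the paper's proprietary dataset.

```python
# Threshold selection by expected profit rather than accuracy.
# Costs are asymmetric: accepting a high-risk customer is far more
# expensive than the gain from a good one (values are invented).
def expected_profit(y_true, p_risk, threshold, gain_accept=100.0, loss_bad=1500.0):
    profit = 0.0
    for y, p in zip(y_true, p_risk):
        if p < threshold:                 # accept the customer
            profit += -loss_bad if y else gain_accept
    return profit

y = [0, 0, 0, 1, 0, 1]                    # 1 = high-risk customer
p = [0.1, 0.2, 0.3, 0.8, 0.15, 0.6]       # model's risk scores
best = max((expected_profit(y, p, t), t) for t in [0.25, 0.5, 0.75, 1.0])
```

Here the profit-maximizing threshold rejects the two risky customers even though accepting everyone would maximize the number of accepted good customers.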
31 pages, 2156 KB  
Article
Design of Dry Stacking of Filtered Tailings in Extreme Seismic and Mountain Conditions
by Carlos Cacciuttolo, Edison Atencio, Seyedmilad Komarizadehasl and Jose Antonio Lozano-Galant
Appl. Sci. 2026, 16(8), 3911; https://doi.org/10.3390/app16083911 - 17 Apr 2026
Abstract
Tailings management presents a critical challenge for the mining industry, particularly in mountainous regions with high seismicity and steep slopes. This article presents the development and design criteria for dry stacking of filtered tailings as a sustainable and safe alternative to conventional slurry tailings storage facilities (TSFs). The study focuses on the extreme conditions of a mountainous location characterized by complex topography with 10% slopes, space constraints, and significant seismic activity defined by a peak ground acceleration (PGA) of 0.3 g. The design methodology, which incorporates layered compaction of the filtered tailings to achieve a geotechnically stable structure, is detailed for a filtered TSF consisting of 7 terraces, each 10 m high, reaching a total height of 70 m. This approach minimizes the risk of liquefaction and prepares the filtered tailings surface for progressive closure, with unit operating costs (OPEX) of 2.5 USD/t. The results of the physical stability analysis confirm the viability of this solution: pseudo-static stability analysis yielded a safety factor of 1.22, demonstrating a significant reduction in water consumption and potential environmental impact. It is concluded that the dry disposal of filtered tailings is a technically robust option for tailings management in extreme mountainous environments, offering greater long-term safety guarantees and facilitating landscape integration, thus setting a precedent for mining projects in similar geographies. Full article
(This article belongs to the Special Issue Surface and Underground Mining Technology and Sustainability)
13 pages, 744 KB  
Article
Uplink-Centric DUDe for IoT and Industry 4.0
by Charalampos Chatzigeorgiou, Christos Bouras, Vasileios Kokkinos, Apostolos Gkamas and Philippos Pouyioutas
Electronics 2026, 15(8), 1680; https://doi.org/10.3390/electronics15081680 - 16 Apr 2026
Abstract
This study investigates Downlink/Uplink Decoupling (DUDe) in 5G networks, a framework that allows user equipment to select its uplink serving cell independently of the downlink anchor. This approach is designed to alleviate the “macro bias” and pathloss issues that typically degrade performance for Internet of Things (IoT) traffic. We propose a framework managed by Mobile Edge Computing (MEC) that operates on a per-Transmission Time Interval (TTI) basis, incorporating stability mechanisms such as hysteresis and Time to Trigger to prevent frequent, unnecessary handovers. The performance is evaluated using a system-level simulator across two scenarios: a high-density urban IoT deployment and an Industry 4.0 smart factory environment. Our results demonstrate that the proposed framework significantly improves uplink throughput and reduces tail latency compared to traditional coupled association methods. Furthermore, an ablation study confirms that these performance gains are derived from the structural decoupling of links, providing a scalable path for improving connectivity in 5G and beyond. Full article
(This article belongs to the Special Issue Feature Papers in Networks: 2025–2026 Edition)
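The decoupling idea at the heart of this abstract — downlink anchored to the strongest received power (which favors the high-power macro), uplink served by the cell with the least pathloss — can be shown in a few lines. The transmit powers and pathlosses below are invented illustrative numbers, not the paper's simulation parameters.

```python
# Downlink/Uplink Decoupling in miniature: the downlink follows received
# power (tx_power - pathloss), the uplink follows pathloss alone.
cells = [
    {"name": "macro", "tx_power_dbm": 46, "pathloss_db": 110},
    {"name": "small", "tx_power_dbm": 30, "pathloss_db": 100},
]
downlink = max(cells, key=lambda c: c["tx_power_dbm"] - c["pathloss_db"])
uplink = min(cells, key=lambda c: c["pathloss_db"])

# Macro bias: the macro wins the downlink (-64 vs -70 dBm), but the
# small cell is the better uplink server (10 dB less pathloss).
assert downlink["name"] == "macro" and uplink["name"] == "small"
```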
20 pages, 1575 KB  
Article
Topology-Aware Admission Control for Dynamic Load Balancing in NUMA-Based Parallel RTL Simulation
by Xin Huang, Guangrong Li, Fan Yang and Zhaori Bi
Electronics 2026, 15(8), 1672; https://doi.org/10.3390/electronics15081672 - 16 Apr 2026
Abstract
Parallel discrete-event simulation (PDES) of register-transfer-level (RTL) designs on multi-socket NUMA platforms demands dynamic load balancing to mitigate barrier-induced tail latency. However, the ultra-fine event granularity of RTL simulation makes migration cost non-negligible, and the non-uniform memory hierarchy of NUMA turns migration cost into a topology-dependent variable rather than a constant. Existing approaches either ignore this topology dependence or rely on heuristic thresholds that lack theoretical justification. This paper formulates NUMA-aware dynamic load balancing as a constrained optimization problem in which the migration cost is an explicit function of the socket locality between the source and destination cores. We introduce a unified net benefit function G(m, i→j, f) that jointly captures the tail-latency reduction, migration overhead, and cache warm-up penalty for migrating module m from core i to core j at frequency f. We prove that G is jointly concave in migration scale and frequency, yielding two analytical results: (i) a closed-form admission inequality that prescribes when migration is strictly beneficial, and (ii) a conservative fixed-frequency design rule that guides the choice of a global epoch length for the proposed epoch-based controller. We further show that when the initial static partition satisfies a bounded-quality condition, the total migration volume is provably bounded, formalizing the intuition that restraint is optimal, not merely conservative. We implement the proposed topology-aware admission control (TAC) framework in TACVS (Topology-Aware Admission Control Verilog Simulator), our event-driven parallel RTL simulation prototype. Experiments on four open-source RTL designs running on a 2-socket NUMA platform show that TAC reduces the tail-latency ratio by 18.0% on average (up to 28.5%) and improves normalized throughput by 27.1% on average (up to 34.1%) relative to topology-oblivious baselines. An ablation study further shows that admission control and cooldown are critical for performance, with throughput dropping by 15.9% and 22.8% on average (up to 22.4% and 32.5%) when each is removed, respectively. Full article
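The admission-control idea can be sketched generically: migrate a module only when the modeled net benefit (latency gain minus topology-dependent migration cost and cache warm-up penalty) is positive. All coefficients below are invented for illustration; the paper's net benefit function is a calibrated, provably concave model, not this toy.

```python
# Toy admission rule: admit a migration only if its net benefit is positive.
# Cross-socket migrations carry a larger, topology-dependent cost.
def net_benefit(latency_gain, same_socket, warmup_penalty):
    migration_cost = 1.0 if same_socket else 3.0   # NUMA topology dependence
    return latency_gain - migration_cost - warmup_penalty

def admit(latency_gain, same_socket, warmup_penalty):
    return net_benefit(latency_gain, same_socket, warmup_penalty) > 0.0

assert admit(5.0, True, 0.5)        # cheap same-socket migration: admitted
assert not admit(5.0, False, 2.5)   # costly cross-socket migration: rejected
```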
21 pages, 2881 KB  
Article
Risk-Sensitive Reinforcement Learning for Portfolio Optimization Under Stochastic Market Dynamics
by Binod Kumar Mishra, Munish Kumar, Hashmat Fida and Branimir Kalaš
Mathematics 2026, 14(8), 1334; https://doi.org/10.3390/math14081334 - 16 Apr 2026
Abstract
Portfolio optimization is one of the most difficult sequential decision problems, as uncertainty and the non-stationary nature of financial markets hinder the development of robust strategies. Reinforcement learning is an attractive framework for addressing this problem, as it allows agents to learn market-adaptive strategies through data-driven interactions. However, existing risk-neutral reinforcement learning solutions for portfolio management are oblivious to downside risk and are mainly concerned with maximizing returns. To address this limitation, this paper proposes a novel risk-sensitive reinforcement learning framework for risk-aware portfolio optimization based on a conditional value-at-risk-based learning objective that explicitly controls extreme loss events. It formulates the portfolio optimization problem as a Markov decision process and solves it using a linearized actor–critic architecture. It also develops theoretical results to analyze important aspects of the learning process, specifically proving that the convexity of the conditional value-at-risk-based formulation and convergence of learning hold under standard assumptions. The proposed algorithm is applied in a realistic investment setting using NIFTY 50 market data. Quantitative results from a rolling window backtesting methodology show that the proposed model achieves the best risk-adjusted portfolio performance, i.e., a Sharpe ratio (0.610), while significantly reducing tail risk, as measured by the conditional value-at-risk (−0.121) and maximum drawdown (−0.198), compared to classical strategies and risk-neutral reinforcement learning solutions. Overall, the results demonstrate that integrating coherent risk measures into reinforcement learning provides an effective approach for developing robust and risk-aware portfolio optimization strategies in dynamic financial environments. Full article
(This article belongs to the Special Issue Portfolio Optimization and Risk Management in Financial Markets)
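The risk measure central to this abstract, conditional value-at-risk (expected shortfall), is simple to compute empirically: the mean of the worst (1 − α) fraction of returns. The sketch below is a generic estimator, not the paper's learning objective.

```python
import numpy as np

# Empirical CVaR at level alpha: average loss beyond the VaR threshold.
def cvar(returns, alpha=0.95):
    losses = -np.asarray(returns, dtype=float)   # losses are negated returns
    var = np.quantile(losses, alpha)             # value-at-risk threshold
    return losses[losses >= var].mean()          # mean of the tail

# Deterministic check: with alpha = 0.5 the tail is the worst half.
assert abs(cvar([-0.1, 0.0, 0.1, 0.2], alpha=0.5) - 0.05) < 1e-12

rng = np.random.default_rng(1)
r = rng.normal(0.0005, 0.01, size=10000)         # toy daily returns
tail_risk = cvar(r, 0.95)                        # ~2% for these parameters
```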
19 pages, 3886 KB  
Article
Optimization of the Job–Housing Balance in Megacities by Integrating Commuting Behavior Patterns: A Case Study of Shenzhen
by Yuhong Bai, Shuyan Yang, Changfeng Li and Wangshu Mu
ISPRS Int. J. Geo-Inf. 2026, 15(4), 176; https://doi.org/10.3390/ijgi15040176 - 16 Apr 2026
Abstract
Rapid urbanization in megacities has exacerbated the spatial mismatch between employment and housing, necessitating effective spatial optimization strategies. However, classical optimization models often rely on the idealized assumption of “proximity maximization,” failing to account for the complex, nonlinear regularities of actual human mobility. To address this disconnect between theoretical modeling and real-world behavior, this study establishes a job–housing balance optimization framework integrated with empirical commuting patterns. Using Shenzhen as a case study, we analyze citywide commuting big data since 2024 to characterize the power law relationship between commuting population size and distance. We propose a novel optimization model that partitions residential areas into “commuting rings” on the basis of observed distance-decay functions rather than simple Euclidean proximity. We applied the proposed method to current and future planning scenarios and successfully generated spatial regulation schemes that decentralize employment functions to peripheral areas while strategically densifying residential zones. By respecting the “heavy-tailed” nature of commuting distributions, this approach offers urban planners a more robust tool for reducing aggregate commuting burdens without violating the behavioral realities of the workforce. Full article
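The power-law relation between commuting population size and distance described in this abstract is conventionally estimated by least squares on log-transformed data. The synthetic distances, counts, and exponent below are invented for illustration; the paper fits real Shenzhen commuting data.

```python
import numpy as np

# If commuter count scales as N(d) ~ C * d**(-k), then
# log N = log C - k * log d, so k is the negated log-log slope.
d = np.array([1, 2, 4, 8, 16, 32], dtype=float)   # commuting distance (km)
n = 5000.0 * d ** -1.4                            # exact power law, k = 1.4

slope, intercept = np.polyfit(np.log(d), np.log(n), 1)
k_hat = -slope                                    # recovered exponent
```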
16 pages, 3536 KB  
Article
Innovation and Sustainable Tailing Management: Technological and Mineralogical Characterization of Rock Powder from the São Paulo Aggregate Industry for Potential Reuse
by Ana Olivia Barufi Franco-Magalhães, Fabiano Cabañas Navarro, Rogério Pinto Ribeiro and Jacqueline Zanin Lima
Sustainability 2026, 18(8), 3932; https://doi.org/10.3390/su18083932 - 15 Apr 2026
Abstract
Brazilian soils are prone to a gradual decline in fertility due to intensive agricultural activity combined with natural weathering, which increases the demand for chemical fertilizers. Among potential alternatives, soil remineralization using crushed rock is a promising strategy. Silicate agrominerals (SAs) applied as soil remineralizers have attracted attention for their ability to supply plant-available nutrients while reducing dependence on conventional mineral fertilizers. This study evaluated the potential of residues from six quarries in Brazil as soil remineralizers in a regulatory screening assessment. Samples were subjected to mineralogical, petrological, and chemical characterization using an integrated approach that included X-ray diffraction (XRD), Inductively Coupled Plasma Optical Emission Spectrometry (ICP-OES), and leaching experiments. XRD analysis revealed that anorthite and augite were the major minerals in the mining waste. These minerals are less resistant to weathering, which enhances the release of macro- and micronutrients essential for the development of various crops. Chemically, the samples were dominated by SiO2, Fe2O3, and Al2O3, with the sum of bases (K2O + CaO + MgO) ranging from 11.92% to 16.85%, meeting Brazilian standards for use as a soil remineralizer. Leaching results revealed that, for the filler particle-size fraction, pH responses varied significantly among the studied samples, with an alkaline shift reaching values above 9.0 after 72 h. In contrast, the powder particle-size samples showed no significant variation between the different materials tested, maintaining nearly constant pH levels throughout the period. This preliminary evaluation demonstrates that mining tailings from Brazilian quarries have potential as a sustainable soil remineralizer.
This approach not only offers an alternative for soil fertilization but also promotes waste management and circular-economy practices, although further studies are needed to assess long-term effectiveness and safety.

23 pages, 9212 KB  
Article
Study on the Recycling of Phosphate Ore Waste Rock and Its Impact on Mortar Properties
by Ridong Fan and Baiyang Mao
Materials 2026, 19(8), 1568; https://doi.org/10.3390/ma19081568 - 14 Apr 2026
Abstract
To promote the resource recovery of phosphate mine tailings and alleviate the pressure caused by the growing scarcity of river sand, this study combines macroscopic performance analysis with microscopic testing to systematically investigate the effects of three types of recycled sand containing varying proportions of phosphate mine tailings (flint (FS), phosphorite flint (PFS), and dolomitic limestone (DLS)) on the performance of mortar. The study assessed the impact of recycled sand on the workability of mortar, water absorption, mechanical properties, pore structure, cement hydration characteristics, and environmental safety, and evaluated the project's feasibility in conjunction with a cost analysis. Among the three sands, DLS had the most pronounced effect on setting time. Water absorption tests show that when the replacement proportions of FS, PFS, and DLS are each 25%, the mortar's water absorption reaches its minimum value. In terms of mechanical properties, DLS showed a more pronounced increase in early-stage flexural strength, whilst PFS and FS demonstrated a more significant increase in later-stage strength. In terms of compressive strength improvement, PFS outperformed both FS and DLS. XRD and TG-DTA results show that the three kinds of recycled sand have no adverse effect on cement hydration. SEM and MIP results confirmed that, compared with river sand, mortar mixed with FS had lower porosity and a denser pore structure. Environmental safety assessments showed that the heavy-metal leaching concentrations in the mortar made from the three types of recycled sand are all significantly below the national limits, indicating good environmental compatibility. An economic analysis indicates that the "25% river sand + 75% FS" alternative offers the best economic benefits, with cost savings of 93.27 CNY per cubic metre.
In summary, the use of recycled sand derived from phosphate ore tailings as a substitute for river sand in the preparation of mortar is feasible from technical, environmental, and economic perspectives. This approach facilitates the recovery of solid-waste resources, conserves natural resources, reduces the environmental burden, and promotes cost optimisation.

21 pages, 1178 KB  
Article
Soft-Community Kernel Rényi Spectrum for Semantic Uncertainty Estimation in Large Language Models
by Zongkai Li and Junliang Du
Entropy 2026, 28(4), 442; https://doi.org/10.3390/e28040442 - 14 Apr 2026
Viewed by 236
Abstract
Uncertainty estimation is critical for deploying large language models (LLMs) in safety-sensitive and decision-critical applications. Recent approaches estimate semantic uncertainty by clustering multiple sampled responses into equivalence classes and measuring their diversity via entropy-based criteria. However, existing methods typically rely on greedy hard clustering and von Neumann entropy, which suffer from sensitivity to clustering order, noise in semantic equivalence judgments, and limited control over spectral contributions. In this work, we propose a principled information-theoretic framework for LLM semantic uncertainty estimation based on soft semantic communities and kernel Rényi entropy. Given multiple generations for a query, we construct a weighted semantic graph using pairwise semantic similarity scores and infer soft community assignments via weighted graph community detection. These soft assignments induce a positive semi-definite semantic kernel that captures the distribution of semantic modes without enforcing hard equivalence relations. Uncertainty is then quantified by the Rényi entropy of the kernel spectrum, yielding a tunable measure that interpolates between sensitivity to dominant semantic modes and long-tail semantic diversity. Compared to prior von Neumann entropy-based estimators, the proposed Rényi spectral uncertainty offers improved robustness to semantic noise, reduced dependence on clustering heuristics, and greater flexibility through its order parameter. Extensive experiments on question answering tasks demonstrate that our method provides more stable and discriminative uncertainty estimates, particularly under limited sampling budgets and noisy semantic judgments.
(This article belongs to the Section Information Theory, Probability and Statistics)
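The spectral construction described in the abstract can be sketched in a few lines: soft community assignments induce a positive semi-definite kernel whose trace-normalized eigenvalues form a probability distribution, and the Rényi entropy of that spectrum quantifies semantic uncertainty. The sketch below is an illustrative approximation under those stated assumptions, not the authors' code; the function names and the two-mode toy input are invented for the example, and the graph construction and community detection steps are assumed to have already produced the assignment matrix.

```python
import numpy as np

def semantic_kernel(assignments):
    """Trace-normalized PSD kernel from soft community assignments.

    assignments: (n_responses, n_communities) array; each row is a soft
    membership distribution over semantic communities (rows sum to 1).
    K = A A^T is positive semi-definite by construction.
    """
    K = assignments @ assignments.T
    return K / np.trace(K)  # eigenvalues of the result sum to 1

def renyi_spectral_entropy(K, alpha=2.0):
    """Rényi entropy of order alpha over the kernel spectrum.

    alpha -> 1 recovers the von Neumann (Shannon) entropy used by prior
    estimators; larger alpha emphasizes dominant semantic modes, while
    smaller alpha weights long-tail semantic diversity more heavily.
    """
    lam = np.clip(np.linalg.eigvalsh(K), 0.0, None)  # guard tiny negatives
    if abs(alpha - 1.0) < 1e-12:
        lam = lam[lam > 0]
        return float(-np.sum(lam * np.log(lam)))
    return float(np.log(np.sum(lam ** alpha)) / (1.0 - alpha))

# Toy example: 4 sampled responses split evenly across 2 semantic modes.
A = np.array([[1.0, 0.0], [1.0, 0.0], [0.0, 1.0], [0.0, 1.0]])
print(renyi_spectral_entropy(semantic_kernel(A)))                # -> log 2
print(renyi_spectral_entropy(semantic_kernel(np.ones((4, 1)))))  # -> 0.0
```

Because the assignments are soft, a response that partially belongs to two communities (e.g. a row `[0.6, 0.4]`) contributes fractionally to both spectral modes, which is what avoids the order sensitivity of greedy hard clustering.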
