Search Results (1,697)

Search Parameters:
Keywords = random uncertainty

28 pages, 10147 KiB  
Article
Construction of Analogy Indicator System and Machine-Learning-Based Optimization of Analogy Methods for Oilfield Development Projects
by Muzhen Zhang, Zhanxiang Lei, Chengyun Yan, Baoquan Zeng, Fei Huang, Tailai Qu, Bin Wang and Li Fu
Energies 2025, 18(15), 4076; https://doi.org/10.3390/en18154076 - 1 Aug 2025
Abstract
Oil and gas development is characterized by high technical complexity, strong interdisciplinarity, long investment cycles, and significant uncertainty. To meet the need for quick evaluation of overseas oilfield projects with limited data and experience, this study develops an analogy indicator system and tests multiple machine-learning algorithms on two analogy tasks to identify the optimal method. Using an initial set of basic indicators and a database of 1436 oilfield samples, a combined subjective–objective weighting strategy that integrates statistical methods with expert judgment is used to select, classify, and assign weights to the indicators. This process results in 26 key indicators for practical analogy analysis. Single-indicator and whole-asset analogy experiments are then performed with five standard machine-learning algorithms—support vector machine (SVM), random forest (RF), backpropagation neural network (BP), k-nearest neighbor (KNN), and decision tree (DT). Results show that SVM achieves classification accuracies of 86% and 95% on the single-indicator and whole-asset analogy tasks, respectively, for medium-high permeability sandstone oilfields, greatly surpassing the other methods. These results demonstrate the effectiveness of the proposed indicator system and methodology, providing efficient and objective technical support for evaluating and making decisions on overseas oilfield development projects. Full article
(This article belongs to the Section H1: Petroleum Engineering)
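
A minimal sketch of the algorithm-comparison step described above, with a synthetic stand-in for the 26 weighted indicators and the 1436-sample database (the paper's weighting strategy and data are not reproduced):

```python
# Compare the five classifier families named in the abstract on placeholder data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(1436, 26))    # placeholder for 26 weighted analogy indicators
y = rng.integers(0, 3, size=1436)  # placeholder analogy classes

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
models = {
    "SVM": make_pipeline(StandardScaler(), SVC(kernel="rbf")),
    "RF": RandomForestClassifier(n_estimators=200, random_state=0),
    "BP": make_pipeline(StandardScaler(), MLPClassifier(max_iter=1000, random_state=0)),
    "KNN": make_pipeline(StandardScaler(), KNeighborsClassifier()),
    "DT": DecisionTreeClassifier(random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(name, accuracy_score(y_te, model.predict(X_te)))
```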

40 pages, 6841 KiB  
Article
Distributionally Robust Multivariate Stochastic Cone Order Portfolio Optimization: Theory and Evidence from Borsa Istanbul
by Larissa Margerata Batrancea, Mehmet Ali Balcı, Ömer Akgüller and Lucian Gaban
Mathematics 2025, 13(15), 2473; https://doi.org/10.3390/math13152473 - 31 Jul 2025
Abstract
We introduce a novel portfolio optimization framework—Distributionally Robust Multivariate Stochastic Cone Order (DR-MSCO)—which integrates partial orders on random vectors with Wasserstein-metric ambiguity sets and adaptive cone structures to model multivariate investor preferences under distributional uncertainty. Grounded in measure theory and convex analysis, DR-MSCO employs data-driven cone selection calibrated to market regimes, along with coherent tail-risk operators that generalize Conditional Value-at-Risk to the multivariate setting. We derive a tractable second-order cone programming reformulation and demonstrate statistical consistency under empirical ambiguity sets. Empirically, we apply DR-MSCO to 23 Borsa Istanbul equities from 2021–2024, using a rolling estimation window and realistic transaction costs. Compared to classical mean–variance and standard distributionally robust benchmarks, DR-MSCO achieves higher overall and crisis-period Sharpe ratios (2.18 vs. 2.09 full sample; 0.95 vs. 0.69 during crises), reduces maximum drawdown by 10%, and yields endogenous diversification without exogenous constraints. Our results underscore the practical benefits of combining multivariate preference modeling with distributional robustness, offering institutional investors a tractable tool for resilient portfolio construction in volatile emerging markets. Full article
(This article belongs to the Special Issue Modern Trends in Mathematics, Probability and Statistics for Finance)
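
The full DR-MSCO model cannot be reconstructed from the abstract alone; the sketch below only illustrates the flavor of the underlying convex program: a single-objective empirical CVaR portfolio with a Wasserstein-style norm penalty, solved with CVXPY on synthetic returns. Multivariate cone orders and adaptive cone selection are omitted.

```python
# Simplified robust CVaR portfolio on synthetic data; NOT the paper's DR-MSCO model.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(1)
N, n = 500, 23                                 # scenarios, assets (23 equities as in the sample)
R = rng.normal(0.0005, 0.02, size=(N, n))      # synthetic daily returns
alpha, eps = 0.95, 0.01                        # CVaR level and ambiguity radius

w = cp.Variable(n, nonneg=True)                # long-only weights
t = cp.Variable()                              # CVaR auxiliary variable
losses = -R @ w
cvar = t + cp.sum(cp.pos(losses - t)) / ((1 - alpha) * N)
# Norm penalty plays the role of a Wasserstein robustification; the last term
# trades off expected return against risk.
objective = cp.Minimize(cvar + eps * cp.norm(w, 2) - 0.1 * cp.sum(R @ w) / N)
prob = cp.Problem(objective, [cp.sum(w) == 1])
prob.solve()
print("robust CVaR objective:", prob.value)
print("weights:", np.round(w.value, 3))
```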

28 pages, 8732 KiB  
Article
Acceleration Command Tracking via Hierarchical Neural Predictive Control for the Effectiveness of Unknown Control
by Zhengpeng Yang, Chao Ming, Huaiyan Wang and Tongxing Peng
Aerospace 2025, 12(8), 689; https://doi.org/10.3390/aerospace12080689 - 31 Jul 2025
Abstract
This paper presents a flight control framework based on neural network Model Predictive Control (NN-MPC) to tackle the challenges of acceleration command tracking for supersonic vehicles (SVs) in complex flight environments, addressing the shortcomings of traditional methods in managing nonlinearity, random disturbances, and real-time performance requirements. Initially, a dynamic model is developed through a comprehensive analysis of the vehicle’s dynamic characteristics, incorporating strong cross-coupling effects and disturbance influences. Subsequently, a predictive mechanism is employed to forecast future states and generate virtual control commands, effectively resolving the issue of sluggish responses under rapidly changing commands. Furthermore, the approximation capability of neural networks is leveraged to optimize the control strategy in real time, ensuring that rudder deflection commands adapt to disturbance variations, thus overcoming the robustness limitations inherent in fixed-parameter control approaches. Within the proposed framework, the uniformly ultimately bounded stability of the control system is rigorously established using the Lyapunov method. Simulation results demonstrate that the method exhibits exceptional performance under conditions of system state uncertainty and unknown external disturbances, confirming its effectiveness and reliability. Full article
(This article belongs to the Section Aeronautics)
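
As a rough illustration of the predictive-control idea only (not the paper's NN-MPC design), the sketch below uses a placeholder one-step surrogate model and random-shooting optimization of candidate rudder commands over a short horizon; the dynamics, limits, and cost weights are invented for illustration.

```python
# Sampling-based predictive control with a stand-in learned dynamics surrogate.
import numpy as np

rng = np.random.default_rng(0)

def predict_next(state, command):
    """Placeholder surrogate dynamics: first-order lag of acceleration toward the command."""
    accel, accel_rate = state
    new_rate = 0.8 * accel_rate + 0.2 * (command - accel)
    return np.array([accel + 0.05 * new_rate, new_rate])

def nmpc_step(state, accel_ref, horizon=10, n_candidates=256):
    """Pick the first command of the best randomly sampled command sequence."""
    best_cost, best_cmd = np.inf, 0.0
    for _ in range(n_candidates):
        cmds = rng.uniform(-20.0, 20.0, size=horizon)   # candidate rudder commands
        s, cost = state.copy(), 0.0
        for c in cmds:
            s = predict_next(s, c)
            cost += (s[0] - accel_ref) ** 2 + 1e-3 * c ** 2
        if cost < best_cost:
            best_cost, best_cmd = cost, cmds[0]          # receding horizon: apply first command
    return best_cmd

state = np.array([0.0, 0.0])
for _ in range(50):
    u = nmpc_step(state, accel_ref=5.0)
    state = predict_next(state, u)
print("final tracked acceleration:", round(state[0], 3))
```
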
16 pages, 1833 KiB  
Article
Prediction of Waste Generation Using Machine Learning: A Regional Study in Korea
by Jae-Sang Lee and Dong-Chul Shin
Urban Sci. 2025, 9(8), 297; https://doi.org/10.3390/urbansci9080297 - 30 Jul 2025
Abstract
Accurate forecasting of household waste generation is essential for sustainable urban planning and the development of data-driven environmental policies. Conventional statistical models, while simple and interpretable, often fail to capture the nonlinear and multidimensional relationships inherent in waste production patterns. This study proposes a machine learning-based regression framework utilizing Random Forest and XGBoost algorithms to predict annual household waste generation across four metropolitan regions in South Korea (Seoul, Gyeonggi, Incheon, and Jeju) over the period from 2000 to 2023. Independent variables include demographic indicators (total population, working-age population, elderly population), economic indicators (Gross Regional Domestic Product), and regional identifiers encoded using One-Hot Encoding. A derived feature, the elderly ratio, was introduced to reflect population aging. Model performance was evaluated using R², RMSE, and MAE, with artificial noise added to simulate uncertainty. Random Forest demonstrated superior generalization and robustness to data irregularities, especially in data-scarce regions like Jeju. SHAP-based interpretability analysis revealed total population and GRDP as the most influential features. The findings underscore the importance of incorporating economic indicators in waste forecasting models, as demographic variables alone were insufficient for explaining waste dynamics. This approach provides valuable insights for policymakers and supports the development of adaptive, region-specific strategies for waste reduction and infrastructure investment. Full article
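
A minimal sketch of the regression setup described above, with illustrative column names and synthetic data in place of the Korean regional dataset (an XGBoost model would slot into the same pipeline via xgboost.XGBRegressor):

```python
# One-hot encode the region, pass numeric drivers through, fit a Random Forest,
# and report R^2, RMSE, and MAE on a held-out split.
import numpy as np
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder
from sklearn.ensemble import RandomForestRegressor
from sklearn.pipeline import Pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error, mean_absolute_error

rng = np.random.default_rng(0)
n = 96  # e.g., 4 regions x 24 years
df = pd.DataFrame({
    "region": rng.choice(["Seoul", "Gyeonggi", "Incheon", "Jeju"], n),
    "population": rng.uniform(6e5, 1.3e7, n),
    "working_age_pop": rng.uniform(4e5, 9e6, n),
    "elderly_pop": rng.uniform(5e4, 2e6, n),
    "grdp": rng.uniform(1e4, 5e5, n),
})
df["elderly_ratio"] = df["elderly_pop"] / df["population"]
y = 0.4 * df["population"] / 1e4 + 0.002 * df["grdp"] + rng.normal(0, 50, n)

pre = ColumnTransformer([("onehot", OneHotEncoder(), ["region"])], remainder="passthrough")
model = Pipeline([("pre", pre),
                  ("rf", RandomForestRegressor(n_estimators=300, random_state=0))])

X_tr, X_te, y_tr, y_te = train_test_split(df, y, test_size=0.25, random_state=0)
model.fit(X_tr, y_tr)
pred = model.predict(X_te)
print("R2  :", r2_score(y_te, pred))
print("RMSE:", mean_squared_error(y_te, pred) ** 0.5)
print("MAE :", mean_absolute_error(y_te, pred))
```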

18 pages, 1072 KiB  
Article
Complexity of Supply Chains Using Shannon Entropy: Strategic Relationship with Competitive Priorities
by Miguel Afonso Sellitto, Ismael Cristofer Baierle and Marta Rinaldi
Appl. Syst. Innov. 2025, 8(4), 105; https://doi.org/10.3390/asi8040105 - 29 Jul 2025
Abstract
Entropy is a foundational concept across scientific domains, playing a role in understanding disorder, randomness, and uncertainty within systems. This study applies Shannon’s entropy in information theory to evaluate and manage complexity in industrial supply chain (SC) management. The purpose of the study is to propose a quantitative modeling method, employing Shannon’s entropy model as a proxy to assess the complexity of SCs. The underlying assumption is that information entropy serves as a proxy for the complexity of the SC. The research method is quantitative modeling, applied to four focal companies from the agrifood and metalworking industries in Southern Brazil. The results showed that companies prioritizing cost and quality exhibit lower complexity compared to those emphasizing flexibility and dependability. Additionally, information flows related to specially engineered products and deliveries show significant differences in average entropies, indicating that organizational complexities vary according to competitive priorities. These findings suggest that a focus on cost and quality in SC management may lead to lower complexity than a focus on flexibility and dependability, influencing strategic decision-making in industrial contexts. This research introduces the novel application of information entropy to assess and control complexity within industrial SCs. Future studies can explore and validate these insights, contributing to the evolving field of supply chain management. Full article
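
A minimal sketch of the entropy-as-complexity idea, assuming each information flow is summarized by observed outcome frequencies; the state categories and counts below are illustrative, not the surveyed companies' data:

```python
# Shannon entropy (bits) of an information flow's observed state distribution.
import numpy as np

def shannon_entropy(counts):
    p = np.asarray(counts, dtype=float)
    p = p[p > 0] / p.sum()
    return float(-(p * np.log2(p)).sum())

# Example: delivery status observed as {on time, late, very late, cancelled}.
flow_cost_focus = [88, 8, 3, 1]            # concentrated outcomes -> low entropy
flow_flexibility_focus = [40, 30, 20, 10]  # dispersed outcomes -> higher entropy
print("cost/quality-oriented flow:", round(shannon_entropy(flow_cost_focus), 3), "bits")
print("flexibility-oriented flow :", round(shannon_entropy(flow_flexibility_focus), 3), "bits")
```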

24 pages, 10078 KiB  
Article
Satellite Hyperspectral Mapping of Farmland Soil Organic Carbon in Yuncheng Basin Along the Yellow River, China
by Haixia Jin, Rutian Bi, Huiwen Tian, Hongfen Zhu and Yingqiang Jing
Agronomy 2025, 15(8), 1827; https://doi.org/10.3390/agronomy15081827 - 28 Jul 2025
Abstract
This study combined field survey data with Gaofen 5 (GF-5) satellite hyperspectral images of the Yuncheng Basin (China), considering 15 environmental variables. Random forest (RF) was used to select the optimal satellite hyperspectral model, sequentially introducing natural and farmland management factors into the model to analyze the spatial distribution of farmland soil organic carbon (SOC). Furthermore, RF factorial experiments determined the contributions of farmland management, climate, vegetation, soil, and topography to the SOC. Structural equation modeling (SEM) elucidated the driving mechanisms of SOC variations. Integrating satellite hyperspectral data and environmental variables improved the prediction accuracy and SOC-mapping precision of the model. The integration of natural variables significantly improved the RF model performance (R² = 0.78). The prediction accuracy improved further with the introduction of crop phenology (R² = 0.81) and farmland management factors (R² = 0.87). The model that incorporated all 15 variables demonstrated the highest prediction accuracy (R² = 0.89) and greatest spatial SOC variability, with minimal uncertainty. Farmland management activities exerted the strongest influence on SOC (0.38). The proposed method can support future investigations on soil carbon sequestration processes in river basins worldwide. Full article
(This article belongs to the Section Precision and Digital Agriculture)
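
A minimal sketch of the incremental-modeling idea (sequentially adding predictor groups to a random forest and comparing cross-validated R²), using synthetic stand-ins for the GF-5 spectral features and the 15 environmental variables:

```python
# Refit a Random Forest as predictor groups are added and report cumulative R^2.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 400
groups = {
    "spectral": rng.normal(size=(n, 10)),
    "+natural": rng.normal(size=(n, 4)),
    "+phenology": rng.normal(size=(n, 2)),
    "+management": rng.normal(size=(n, 3)),
}
# Synthetic SOC response that depends on one variable from each group.
soc = sum(block[:, 0] for block in groups.values()) + rng.normal(0, 0.5, n)

X = np.empty((n, 0))
for name, block in groups.items():
    X = np.hstack([X, block])
    r2 = cross_val_score(RandomForestRegressor(n_estimators=200, random_state=0),
                         X, soc, cv=5, scoring="r2").mean()
    print(f"{name:12s} cumulative R2 = {r2:.2f}")
```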

20 pages, 1979 KiB  
Article
Energy Storage Configuration Optimization of a Wind–Solar–Thermal Complementary Energy System, Considering Source-Load Uncertainty
by Guangxiu Yu, Ping Zhou, Zhenzhong Zhao, Yiheng Liang and Weijun Wang
Energies 2025, 18(15), 4011; https://doi.org/10.3390/en18154011 - 28 Jul 2025
Abstract
The large-scale integration of new energy is an inevitable trend to achieve the low-carbon transformation of power systems. However, the strong randomness of wind power, photovoltaic power, and loads poses severe challenges to the safe and stable operation of systems. Existing studies demonstrate insufficient integration and handling of source-load bilateral uncertainties in wind–solar–fossil fuel storage complementary systems, resulting in difficulties in balancing economy and low-carbon performance in their energy storage configuration. To address this insufficiency, this study proposes an optimal energy storage configuration method considering source-load uncertainties. Firstly, a deterministic bi-level model is constructed: the upper level aims to minimize the comprehensive cost of the system to determine the energy storage capacity and power, and the lower level aims to minimize the system operation cost to solve the optimal scheduling scheme. Then, wind and solar output, as well as loads, are treated as fuzzy variables based on fuzzy chance constraints, and the uncertainty constraints are transformed into their crisp equivalents to establish a bi-level optimization model that considers uncertainties. A differential evolution algorithm and CPLEX are used for solving the upper and lower levels, respectively. Simulation verification in a certain region shows that the proposed method reduces comprehensive cost by 8.9%, operation cost by 10.3%, the curtailment rate of wind and solar energy by 8.92%, and carbon emissions by 3.51%, which significantly improves the economy and low-carbon performance of the system and provides a reference for the future planning and operation of energy systems. Full article
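
A toy sketch of the bi-level structure only: the outer level searches storage capacity and power with differential evolution, while a crude greedy dispatch stands in for the CPLEX-solved lower level; the fuzzy chance constraints are omitted and all cost coefficients and profiles are invented.

```python
# Outer sizing loop (differential evolution) around a toy inner dispatch cost.
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(0)
net_load = rng.normal(50, 20, size=24)      # hourly net load after renewables (MW), synthetic

def inner_dispatch_cost(capacity_mwh, power_mw):
    """Greedy stand-in for the lower-level scheduling problem."""
    soc, cost = capacity_mwh / 2, 0.0
    for nl in net_load:
        action = np.clip(-nl, -power_mw, power_mw)            # charge on surplus, discharge on deficit
        action = np.clip(action, -soc, capacity_mwh - soc)    # respect state-of-charge limits
        soc += action
        cost += 40.0 * max(nl + action, 0.0)                  # residual thermal generation cost
    return cost

def upper_objective(x):
    capacity_mwh, power_mw = x
    invest = 200.0 * capacity_mwh + 500.0 * power_mw          # annualized storage cost, made up
    return invest + 365 * inner_dispatch_cost(capacity_mwh, power_mw)

result = differential_evolution(upper_objective, bounds=[(0, 200), (0, 50)], seed=0)
print("capacity (MWh), power (MW):", np.round(result.x, 1), " cost:", round(result.fun))
```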

17 pages, 2895 KiB  
Article
Trade-Offs of Plant Biomass by Precipitation Regulation Across the Sanjiangyuan Region of Qinghai–Tibet Plateau
by Mingxue Xiang, Gang Fu, Junxi Wu, Yunqiao Ma, Tao Ma, Kai Zheng, Zhaoqi Wang and Xinquan Zhao
Plants 2025, 14(15), 2325; https://doi.org/10.3390/plants14152325 - 27 Jul 2025
Abstract
Climate change alters plant biomass allocation and aboveground–belowground trade-offs in grassland ecosystems, potentially affecting critical functions such as carbon sequestration. However, uncertainties persist regarding how precipitation gradients regulate (1) responses of aboveground biomass (AGB), belowground biomass (BGB), and total biomass in alpine grasslands, and (2) precipitation-mediated AGB-BGB allocation strategies. To address this, we conducted a large-scale field survey across precipitation gradients (400–700 mm/y) in the Sanjiangyuan alpine grasslands, Qinghai–Tibet Plateau. During the 2024 growing season, a total of 63 sites (including 189 plots and 945 quadrats) were sampled along five aridity classes: <400, 400–500, 500–600, 600–700, and >700 mm/y. Our findings revealed precipitation as the dominant driver of biomass dynamics: AGB exhibited equal growth rates relative to BGB within the 600–700 mm/y range, but accelerated under drier/wetter conditions. This suggests preferential allocation to aboveground parts under most precipitation regimes. Precipitation explained 31.71% of AGB–BGB trade-off variance (random forest IncMSE), surpassing contributions from AGB (17.61%), specific leaf area (SLA, 13.87%), and BGB (12.91%). Structural equation modeling confirmed precipitation’s positive effects on SLA (β = 0.28, p < 0.05), AGB (β = 0.53, p < 0.05), and BGB (β = 0.60, p < 0.05), with AGB-mediated cascades (β = 0.33, p < 0.05) dominating trade-off regulation. These results advance our understanding of mechanistic drivers governing allometric AGB–BGB relationships across climatic gradients in alpine ecosystems of the Sanjiangyuan Region on the Qinghai–Tibet Plateau. Full article
(This article belongs to the Section Plant Ecology)
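
A minimal sketch of the driver-ranking step, using scikit-learn permutation importance as an analogue of the %IncMSE measure quoted above; predictors and the trade-off response are synthetic placeholders for the survey data:

```python
# Rank candidate drivers of an AGB-BGB trade-off index with permutation importance.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
n = 189  # number of plots in the survey design
precip = rng.uniform(350, 750, n)
sla = rng.normal(15, 3, n)
agb = 0.5 * precip / 100 + rng.normal(0, 1, n)
bgb = 0.6 * precip / 100 + rng.normal(0, 1, n)
tradeoff = 0.4 * precip / 100 + 0.2 * agb + rng.normal(0, 0.5, n)  # synthetic response

X = np.column_stack([precip, agb, sla, bgb])
names = ["precipitation", "AGB", "SLA", "BGB"]
rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X, tradeoff)
imp = permutation_importance(rf, X, tradeoff, n_repeats=30, random_state=0)
for name, score in sorted(zip(names, imp.importances_mean), key=lambda t: -t[1]):
    print(f"{name:13s} importance = {score:.3f}")
```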

12 pages, 462 KiB  
Article
AI-Based Classification of Mild Cognitive Impairment and Cognitively Normal Patients
by Rafail Christodoulou, Giorgos Christofi, Rafael Pitsillos, Reina Ibrahim, Platon Papageorgiou, Sokratis G. Papageorgiou, Evros Vassiliou and Michalis F. Georgiou
J. Clin. Med. 2025, 14(15), 5261; https://doi.org/10.3390/jcm14155261 - 25 Jul 2025
Abstract
Background: Mild Cognitive Impairment (MCI) represents an intermediate stage between normal cognitive aging and Alzheimer’s Disease (AD). Early and accurate identification of MCI is crucial for implementing interventions that may delay or prevent further cognitive decline. This study aims to develop a machine learning-based model for differentiating between Cognitively Normal (CN) individuals and MCI patients using data from the Alzheimer’s Disease Neuroimaging Initiative (ADNI). Methods: An ensemble classification approach was designed by integrating Extra Trees, Random Forest, and Light Gradient Boosting Machine (LightGBM) algorithms. Feature selection emphasized clinically relevant biomarkers, including Amyloid-β 42, phosphorylated tau, diastolic blood pressure, age, and gender. The dataset was split into training and held-out test sets. A probability thresholding strategy was employed to flag uncertain predictions for potential deferral, enhancing model reliability in borderline cases. Results: The final ensemble model achieved an accuracy of 83.2%, a recall of 80.2%, and a precision of 86.3% on the independent test set. The probability thresholding mechanism flagged 23.3% of cases as uncertain, allowing the system to abstain from low-confidence predictions. This strategy improved clinical interpretability and minimized the risk of misclassification in ambiguous cases. Conclusions: The proposed AI-driven ensemble model demonstrates strong performance in classifying MCI versus CN individuals using multimodal ADNI data. Incorporating a deferral mechanism through uncertainty estimation further enhances the model’s clinical utility. These findings support the integration of machine learning tools into early screening workflows for cognitive impairment. Full article
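
A minimal sketch of the ensemble-with-deferral idea: a soft-voting ensemble abstains whenever its top class probability falls below a threshold. HistGradientBoosting stands in for LightGBM here, and the features are synthetic rather than ADNI biomarkers.

```python
# Soft-voting ensemble with probability-threshold deferral on low-confidence cases.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import (ExtraTreesClassifier, RandomForestClassifier,
                              HistGradientBoostingClassifier, VotingClassifier)
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=800, n_features=5, n_informative=4,
                           n_redundant=0, random_state=0)  # stand-in for CN (0) vs MCI (1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

ensemble = VotingClassifier(
    estimators=[("et", ExtraTreesClassifier(random_state=0)),
                ("rf", RandomForestClassifier(random_state=0)),
                ("gbm", HistGradientBoostingClassifier(random_state=0))],
    voting="soft").fit(X_tr, y_tr)

proba = ensemble.predict_proba(X_te).max(axis=1)   # confidence of the predicted class
threshold = 0.70                                   # below this, the case is deferred
confident = proba >= threshold
pred = ensemble.predict(X_te)
acc_confident = (pred[confident] == y_te[confident]).mean()
print(f"deferred: {100 * (~confident).mean():.1f}%  "
      f"accuracy on confident cases: {100 * acc_confident:.1f}%")
```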

17 pages, 6827 KiB  
Article
Deep Learning-Based Min-Entropy-Accelerated Evaluation for High-Speed Quantum Random Number Generation
by Xiaomin Guo, Wenhe Zhou, Yue Luo, Xiangyu Meng, Jiamin Li, Yaoxing Bian, Yanqiang Guo and Liantuan Xiao
Entropy 2025, 27(8), 786; https://doi.org/10.3390/e27080786 - 24 Jul 2025
Abstract
Secure communication is critically dependent on high-speed and high-security quantum random number generation (QRNG). In this work, we present a responsive approach to enhance the efficiency and security of QRNG by leveraging polarization-controlled heterodyne detection to simultaneously measure the quadrature amplitude and phase fluctuations of vacuum shot noise. To address the practical non-idealities inherent in QRNG systems, we investigate the critical impacts of imbalanced heterodyne detection, amplitude–phase overlap, finite-size effects, and security parameters on quantum conditional min-entropy derived from the entropy uncertainty principle. It effectively mitigates the overestimation of randomness and fortifies the system against potential eavesdropping attacks. For a high security parameter of 10⁻²⁰, the QRNG achieves a true random bit extraction ratio of 83.16% with a corresponding real-time speed of 37.25 Gbps following a 16-bit analog-to-digital converter quantization and 1.4 GHz bandwidth extraction. Furthermore, we develop a deep convolutional neural network for rapid and accurate entropy evaluation. The entropy evaluation of 13,473 sets of quadrature data is processed in 68.89 s with a mean absolute percentage error of 0.004, achieving an acceleration of two orders of magnitude in evaluation speed. Extracting the shot noise with full detection bandwidth, the generation rate of QRNG using dual-quadrature heterodyne detection exceeds 85 Gbps. The research contributes to advancing the practical deployment of QRNG and expediting rapid entropy assessment. Full article
(This article belongs to the Section Quantum Information)
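
A minimal sketch of an empirical worst-case min-entropy estimate for quantized quadrature samples, H_min = -log2(max_i p_i); the Gaussian samples and assumed ADC range are placeholders, and the paper's conditional min-entropy additionally accounts for detector imperfections, finite-size effects, and the security parameter.

```python
# Empirical min-entropy of 16-bit-quantized Gaussian "quadrature" samples.
import numpy as np

rng = np.random.default_rng(0)
samples = rng.normal(0.0, 0.18, size=1_000_000)   # stand-in vacuum-noise quadratures
bins = 2 ** 16                                    # 16-bit ADC
edges = np.linspace(-1.0, 1.0, bins + 1)          # assumed ADC full-scale range
counts = np.histogram(np.clip(samples, -1.0, 1.0), bins=edges)[0]

p_max = counts.max() / counts.sum()
h_min = -np.log2(p_max)
print(f"empirical min-entropy: {h_min:.2f} bits per 16-bit sample")
print(f"extraction ratio     : {100 * h_min / 16:.1f}%")
```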

19 pages, 842 KiB  
Article
Enhancing Processing Time for Uncertainty Cost Quantification: Demonstration in a Scheduling Approach for Energy Management Systems
by Luis Carlos Pérez Guzmán, Gina Idárraga-Ospina and Sergio Raúl Rivera Rodríguez
Sustainability 2025, 17(15), 6738; https://doi.org/10.3390/su17156738 - 24 Jul 2025
Abstract
This paper calculates the expected cost of uncertainty in solar and wind energy using the uncertainty cost function (UCF), with a primary focus on computational processing time. The comparison of processing time for the uncertainty cost quantification (UCQ) is conducted through three methods: the Monte Carlo (MC) simulation method, the numerical integration method, and the analytical method. The MC method relies on random simulations, while numerical integration employs established numerical formulations. These methods are commonly used for solving cost optimization problems in power systems. However, the analytical method is a less conventional approach. The analytical method for calculating uncertainty costs is closely related to the UCF, as it relies on a mathematical representation of the impact of uncertainty on costs, which is modeled through the UCF. A multi-objective approach was employed for scheduling an energy management system, in this case a thermal–wind–solar energy system, and a simplified method is proposed for modeling controllable renewable generation through the UCF evaluated analytically, instead of through the complex probability distributions typically used in traditional methods. This simplification reduces complexity and computational processing time in optimization problems, offering greater accuracy in approximating real distributions and adaptability to various scenarios. The simulations performed yielded positive results in improving cost estimation and computational efficiency, making it a promising tool for enhancing economic dispatch and grid operability. Full article
(This article belongs to the Special Issue Intelligent Control for Sustainable Energy Management Systems)
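
A minimal sketch of the Monte Carlo versus analytical comparison for an expected uncertainty cost, assuming (purely for illustration) a normally distributed wind forecast error with linear surplus and deficit penalties; the paper's UCFs use the distributions appropriate to each source, but the speed/accuracy trade-off is the same in spirit.

```python
# Compare a Monte Carlo estimate with the closed-form expectation of a
# piecewise-linear uncertainty cost under a normal wind-power distribution.
import numpy as np
from scipy.stats import norm

mu, sigma = 60.0, 15.0          # forecast wind power (MW) and its standard deviation
w_s = 55.0                      # scheduled wind power (MW)
c_surplus, c_deficit = 30.0, 20.0  # illustrative penalty costs per MWh

# Monte Carlo estimate
rng = np.random.default_rng(0)
w = rng.normal(mu, sigma, size=1_000_000)
mc = np.mean(c_surplus * np.maximum(w - w_s, 0) + c_deficit * np.maximum(w_s - w, 0))

# Analytical expectation of the same cost (expected positive parts of a normal)
d = (mu - w_s) / sigma
e_surplus = (mu - w_s) * norm.cdf(d) + sigma * norm.pdf(d)
e_deficit = (w_s - mu) * norm.cdf(-d) + sigma * norm.pdf(d)
analytical = c_surplus * e_surplus + c_deficit * e_deficit

print(f"Monte Carlo : {mc:.3f}")
print(f"Analytical  : {analytical:.3f}")
```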

22 pages, 12767 KiB  
Article
Remote Sensing Evidence of Blue Carbon Stock Increase and Attribution of Its Drivers in Coastal China
by Jie Chen, Yiming Lu, Fangyuan Liu, Guoping Gao and Mengyan Xie
Remote Sens. 2025, 17(15), 2559; https://doi.org/10.3390/rs17152559 - 23 Jul 2025
Abstract
Coastal blue carbon ecosystems (traditional types such as mangroves, salt marshes, and seagrass meadows; emerging types such as tidal flats and mariculture) play pivotal roles in capturing and storing atmospheric carbon dioxide. Reliable assessment of the spatial and temporal variation and the carbon storage potential holds immense promise for mitigating climate change. Although previous field surveys and regional assessments have improved the understanding of individual habitats, most studies remain site-specific and short-term; comprehensive, multi-decadal assessments that integrate all major coastal blue carbon systems at the national scale are still scarce for China. In this study, we integrated 30 m Landsat imagery (1992–2022), processed on Google Earth Engine with a random forest classifier; province-specific, literature-derived carbon density data with quantified uncertainty (mean ± standard deviation); and the InVEST model to track coastal China’s mangroves, salt marshes, tidal flats, and mariculture and to quantify their associated carbon stocks. The GeoDetector method was then applied to distinguish the natural and anthropogenic drivers of carbon stock change. Results showed rapid and divergent land use change over the past three decades: mariculture expanded by 44% and became the dominant blue carbon land use, tidal flats declined by 39%, and mangroves and salt marshes exhibited fluctuating upward trends. National blue carbon stock rose markedly from 74 Mt C in 1992 to 194 Mt C in 2022, with Liaoning, Shandong, and Fujian holding the largest provincial stocks, while Jiangsu and Guangdong showed the strongest increasing trends. The Normalized Difference Vegetation Index (NDVI) was the primary driver of spatial variability in carbon stock change (q = 0.63), followed by precipitation and temperature. Synergistic interactions were also detected (e.g., between NDVI and precipitation), enhancing the effects beyond those of single factors, which indicates that a wetter climate may boost NDVI-related carbon sequestration. These findings highlight the urgency of strengthening ecological red lines, scaling climate-smart restoration of mangroves and salt marshes, and promoting low-impact mariculture. Our workflow and driver diagnostics provide a transferable template for blue carbon monitoring and evidence-based coastal management frameworks. Full article
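
A minimal sketch of the GeoDetector factor detector used for driver attribution, q = 1 - SSW/SST over the strata of a discretized driver; the gridded data below are synthetic:

```python
# GeoDetector q-statistic: share of variance in carbon stock change explained by a
# stratified (discretized) driver such as an NDVI class map.
import numpy as np

def geodetector_q(y, strata):
    y, strata = np.asarray(y, float), np.asarray(strata)
    sst = y.size * y.var()
    ssw = sum(y[strata == s].size * y[strata == s].var() for s in np.unique(strata))
    return 1.0 - ssw / sst

rng = np.random.default_rng(0)
ndvi_class = rng.integers(0, 5, size=5000)                   # discretized NDVI driver
carbon_change = ndvi_class * 2.0 + rng.normal(0, 1.5, 5000)  # synthetic, NDVI-dominated response
precip_class = rng.integers(0, 5, size=5000)                 # unrelated driver for contrast
print("q(NDVI)         :", round(geodetector_q(carbon_change, ndvi_class), 2))
print("q(precipitation):", round(geodetector_q(carbon_change, precip_class), 2))
```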

30 pages, 6810 KiB  
Article
Interpretable Machine Learning Framework for Non-Destructive Concrete Strength Prediction with Physics-Consistent Feature Analysis
by Teerapun Saeheaw
Buildings 2025, 15(15), 2601; https://doi.org/10.3390/buildings15152601 - 23 Jul 2025
Abstract
Non-destructive concrete strength prediction faces limitations in validation scope, methodological comparison, and interpretability that constrain deployment in safety-critical construction applications. This study presents a machine learning framework integrating polynomial feature engineering, AdaBoost ensemble regression, and Bayesian optimization to achieve both predictive accuracy and physics-consistent interpretability. Eight state-of-the-art methods were evaluated across 4420 concrete samples, including statistical significance testing, scenario-based assessment, and robustness analysis under measurement uncertainty. The proposed PolyBayes-ABR methodology achieves R² = 0.9957 (RMSE = 0.643 MPa), showing statistical equivalence to leading ensemble methods, including XGBoost (p = 0.734) and Random Forest (p = 0.888), while outperforming traditional approaches (p < 0.001). Scenario-based validation across four engineering applications confirms robust performance (R² > 0.93 in all cases). SHAP analysis reveals that polynomial features capture physics-consistent interactions, with the Curing_age × Er interaction achieving dominant importance (SHAP value: 4.2337), aligning with established hydration–microstructure relationships. When accuracy differences fall within measurement uncertainty ranges, the framework provides practical advantages through enhanced uncertainty quantification (±1.260 MPa vs. ±1.338 MPa baseline) and actionable engineering insights for quality control and mix design optimization. This approach addresses the interpretability challenge in concrete engineering applications where both predictive performance and scientific understanding are essential for safe deployment. Full article
(This article belongs to the Section Building Materials, and Repair & Renovation)
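
A minimal sketch of the core pipeline stages named above (polynomial feature expansion feeding AdaBoost regression, scored by cross-validated R²); hyperparameter tuning by Bayesian optimization and the SHAP analysis are omitted, and the non-destructive readings below are synthetic.

```python
# Polynomial feature expansion + AdaBoost regression on synthetic NDT readings.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures, StandardScaler
from sklearn.ensemble import AdaBoostRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 600
curing_age = rng.uniform(3, 90, n)   # days
rebound = rng.uniform(20, 55, n)     # rebound hammer reading
upv = rng.uniform(3.5, 5.0, n)       # ultrasonic pulse velocity, km/s
# Synthetic strength with an interaction term, so degree-2 features matter.
strength = 0.3 * rebound + 8 * upv + 0.05 * curing_age * upv + rng.normal(0, 1.5, n)

X = np.column_stack([curing_age, rebound, upv])
model = make_pipeline(
    StandardScaler(),
    PolynomialFeatures(degree=2, include_bias=False),  # captures interaction terms
    AdaBoostRegressor(n_estimators=300, random_state=0))
print("CV R2:", cross_val_score(model, X, strength, cv=5, scoring="r2").mean().round(3))
```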

14 pages, 4599 KiB  
Article
Predictive Flood Uncertainty Associated with the Overtopping Rates of Vertical Seawall on Coral Reef Topography
by Hongqian Zhang, Bin Lu, Yumei Geng and Ye Liu
Water 2025, 17(15), 2186; https://doi.org/10.3390/w17152186 - 22 Jul 2025
Abstract
Accurate prediction of wave overtopping rates is essential for flood risk assessment along coral reef coastlines. This study quantifies the uncertainty sources affecting overtopping rates for vertical seawalls on reef flats, using ensemble simulations with a validated non-hydrostatic SWASH model. By generating extensive random wave sequences, we identify spectral resolution, wave spectral width, and wave groupiness as the dominant controls on the uncertainty. Statistical metrics, including the Coefficient of Variation (CV) and Range Uncertainty Level (RUL), demonstrate that overtopping rates exhibit substantial variability under randomized wave conditions, with CV exceeding 40% for low spectral resolutions (50–100 bins), while achieving statistical convergence (CV around 20%) requires at least 700 frequency bins, far surpassing conventional standards. The RUL, which describes the ratio of extreme to minimal overtopping rates, also decreases markedly as the number of frequency bins increases from 50 to 700. It is found that the overtopping rate follows a normal distribution with 700 frequency bins in wave generation. Simulations further demonstrate that overtopping rates increase by a factor of 2–4 as the JONSWAP spectrum peak enhancement factor (γ) increases from 1 to 7. The wave groupiness factor (GF) emerges as a predictor of overtopping variability, enabling a more efficient experimental design through reduction in groupiness-guided replication. These findings establish practical thresholds for experimental design and highlight the critical role of spectral parameters in hazard assessment. Full article
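
A minimal sketch of the two convergence diagnostics named above, the Coefficient of Variation and the Range Uncertainty Level, computed over an ensemble of overtopping rates; the log-normal samples stand in for SWASH ensemble outputs.

```python
# CV (%) and RUL (max/min ratio) of an ensemble of overtopping rates.
import numpy as np

def cv_and_rul(rates):
    rates = np.asarray(rates, float)
    cv = 100.0 * rates.std(ddof=1) / rates.mean()
    rul = rates.max() / rates.min()
    return cv, rul

rng = np.random.default_rng(0)
coarse = rng.lognormal(mean=-4.0, sigma=0.45, size=50)  # stand-in: coarse spectra, scattered
fine = rng.lognormal(mean=-4.0, sigma=0.18, size=50)    # stand-in: fine spectra, converged
for label, ens in [("coarse spectral resolution", coarse),
                   ("fine spectral resolution  ", fine)]:
    cv, rul = cv_and_rul(ens)
    print(f"{label}: CV = {cv:.1f}%  RUL = {rul:.2f}")
```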

32 pages, 1575 KiB  
Article
Entropy Accumulation Under Post-Quantum Cryptographic Assumptions
by Ilya Merkulov and Rotem Arnon
Entropy 2025, 27(8), 772; https://doi.org/10.3390/e27080772 - 22 Jul 2025
Abstract
In device-independent (DI) quantum protocols, security statements are agnostic to the internal workings of the quantum devices—they rely solely on classical interactions with the devices and specific assumptions. Traditionally, such protocols are set in a non-local scenario, where two non-communicating devices exhibit Bell inequality violations. Recently, a new class of DI protocols has emerged that requires only a single device. In this setting, the assumption of no communication is replaced by a computational one: the device cannot solve certain post-quantum cryptographic problems. Protocols developed in this single-device computational setting—such as for randomness certification—have relied on ad hoc techniques, making their guarantees difficult to compare and generalize. In this work, we introduce a modular proof framework inspired by techniques from the non-local DI literature. Our approach combines tools from quantum information theory, including entropic uncertainty relations and the entropy accumulation theorem, to yield both conceptual clarity and quantitative security guarantees. This framework provides a foundation for systematically analyzing DI protocols in the single-device setting under computational assumptions. It enables the design and security proof of future protocols for DI randomness generation, expansion, amplification, and key distribution, grounded in post-quantum cryptographic hardness. Full article
(This article belongs to the Section Quantum Information)