Search Results (1,374)

Search Parameters:
Keywords = biased estimation

20 pages, 774 KiB  
Article
Robust Variable Selection via Bayesian LASSO-Composite Quantile Regression with Empirical Likelihood: A Hybrid Sampling Approach
by Ruisi Nan, Jingwei Wang, Hanfang Li and Youxi Luo
Mathematics 2025, 13(14), 2287; https://doi.org/10.3390/math13142287 - 16 Jul 2025
Abstract
Since the advent of composite quantile regression (CQR), its inherent robustness has established it as a pivotal methodology for high-dimensional data analysis. High-dimensional outlier contamination refers to data scenarios where the number of observed dimensions (p) is much greater than the sample size (n) and there are extreme outliers in the response variables or covariates (e.g., p/n > 0.1). Traditional penalized regression techniques, however, exhibit notable vulnerability to data outliers during high-dimensional variable selection, often leading to biased parameter estimates and compromised resilience. To address this critical limitation, we propose a novel empirical likelihood (EL)-based variable selection framework that integrates a Bayesian LASSO penalty within the composite quantile regression framework. By constructing a hybrid sampling mechanism that incorporates the Expectation–Maximization (EM) algorithm and Metropolis–Hastings (M-H) algorithm within the Gibbs sampling scheme, this approach effectively tackles variable selection in high-dimensional settings with outlier contamination. This innovative design enables simultaneous optimization of regression coefficients and penalty parameters, circumventing the need for ad hoc selection of optimal penalty parameters—a long-standing challenge in conventional LASSO estimation. Moreover, the proposed method imposes no restrictive assumptions on the distribution of random errors in the model. Through Monte Carlo simulations under outlier interference and empirical analysis of two U.S. house price datasets, we demonstrate that the new approach significantly enhances variable selection accuracy, reduces estimation bias for key regression coefficients, and exhibits robust resistance to data outlier contamination. Full article
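The composite quantile regression objective underlying this approach can be sketched numerically. The quantile levels, penalty weight, and function names below are illustrative stand-ins, not the paper's implementation (which replaces direct minimization with a Bayesian LASSO prior and an EM/M-H-within-Gibbs sampler):

```python
import numpy as np

def check_loss(u, tau):
    # Quantile (pinball) loss: rho_tau(u) = u * (tau - 1{u < 0})
    return u * (tau - (u < 0))

def cqr_lasso_objective(beta, intercepts, X, y, taus, lam):
    # Composite quantile regression: a shared slope vector `beta` is fit
    # across K quantile levels, each with its own intercept b_k, and an
    # L1 (LASSO) penalty shrinks the slopes toward zero.
    total = 0.0
    for b_k, tau in zip(intercepts, taus):
        total += np.sum(check_loss(y - b_k - X @ beta, tau))
    return total + lam * np.sum(np.abs(beta))
```

Averaging the check loss over several quantile levels is what gives CQR its robustness to outliers relative to squared-error loss.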

23 pages, 15163 KiB  
Article
3D Dubins Curve-Based Path Planning for UUV in Unknown Environments Using an Improved RRT* Algorithm
by Feng Pan, Peng Cui, Bo Cui, Weisheng Yan and Shouxu Zhang
J. Mar. Sci. Eng. 2025, 13(7), 1354; https://doi.org/10.3390/jmse13071354 - 16 Jul 2025
Abstract
The autonomous navigation of an Unmanned Underwater Vehicle (UUV) in unknown 3D underwater environments remains a challenging task due to the presence of complex terrain, uncertain obstacles, and strict kinematic constraints. This paper proposes a novel smooth path planning framework that integrates improved Rapidly-exploring Random Tree* (RRT*) with 3D Dubins curves to efficiently generate feasible and collision-free trajectories for nonholonomic UUVs. A fast curve-length estimation approach based on a backpropagation neural network is introduced to reduce computational burden during path evaluation. Furthermore, the improved RRT* algorithm incorporates pseudorandom sampling, terminal node backtracking, and goal-biased exploration strategies to enhance convergence and path quality. Extensive simulation results in unknown underwater scenarios with static and moving obstacles demonstrate that the proposed method significantly outperforms state-of-the-art planning algorithms in terms of smoothness, path length, and computational efficiency. Full article
(This article belongs to the Special Issue Intelligent Measurement and Control System of Marine Robots)

14 pages, 2402 KiB  
Article
Application of Machine Learning Models in the Estimation of Quercus mongolica Stem Profiles
by Chiung Ko, Jintaek Kang, Chaejun Lim, Donggeun Kim and Minwoo Lee
Forests 2025, 16(7), 1138; https://doi.org/10.3390/f16071138 - 10 Jul 2025
Abstract
Accurate estimation of stem profiles is critical for forest management, timber yield prediction, and ecological modeling. However, traditional taper equations often fail to capture species-specific growth variability and exhibit significant biases, particularly in the upper stem regions. Machine learning regression models were applied to estimate Quercus mongolica stem profiles across South Korea, and performance was compared with that of a traditional taper equation. A total of 2503 sample trees were used to train and validate Random Forest (RF), XGBoost (XGB), Artificial Neural Network (ANN), and Support Vector Regression (SVR) models. Predictive performance was evaluated using root mean square error, mean absolute error, and coefficient of determination metrics, and performance differences were validated statistically. The ANN model exhibited the highest predictive accuracy and stability across all diameter classes, maintaining smooth and consistent stem profiles even in the upper stem regions where the traditional taper model exhibited significant errors. RF and XGB models had moderate performance but exhibited localized fluctuations, whereas the Kozak taper equation tended to overestimate basal diameters and underestimate crown-top diameters. Machine learning models, particularly ANN, offer a robust alternative to fixed-form taper equations, contributing substantially to forest resource inventory, carbon stock assessment, and climate-adaptive forest management planning. Full article
(This article belongs to the Section Forest Inventory, Modeling and Remote Sensing)
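The evaluation metrics named in this abstract (root mean square error, mean absolute error, coefficient of determination) are standard and can be computed as, for example:

```python
import numpy as np

def fit_metrics(obs, pred):
    # RMSE, MAE, and coefficient of determination (R^2) for paired
    # observed vs. predicted values (e.g., stem diameters).
    obs = np.asarray(obs, dtype=float)
    pred = np.asarray(pred, dtype=float)
    resid = obs - pred
    rmse = float(np.sqrt(np.mean(resid ** 2)))
    mae = float(np.mean(np.abs(resid)))
    r2 = float(1.0 - np.sum(resid ** 2) / np.sum((obs - obs.mean()) ** 2))
    return rmse, mae, r2
```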

26 pages, 1556 KiB  
Article
Modified Two-Parameter Ridge Estimators for Enhanced Regression Performance in the Presence of Multicollinearity: Simulations and Medical Data Applications
by Muteb Faraj Alharthi and Nadeem Akhtar
Axioms 2025, 14(7), 527; https://doi.org/10.3390/axioms14070527 - 10 Jul 2025
Abstract
Predictive regression models often face a common challenge known as multicollinearity. This phenomenon can distort the results, causing models to overfit and produce unreliable coefficient estimates. Ridge regression is a widely used approach that incorporates a regularization term to stabilize parameter estimates and improve the prediction accuracy. In this study, we introduce four newly modified ridge estimators, referred to as RIRE1, RIRE2, RIRE3, and RIRE4, that are aimed at tackling severe multicollinearity more effectively than ordinary least squares (OLS) and other existing estimators under both normal and non-normal error distributions. The ridge estimators are biased, so their efficiency cannot be judged by variance alone; instead, we use the mean squared error (MSE) to compare their performance. Each new estimator depends on two shrinkage parameters, k and d, making the theoretical analysis complex. To address this, we employ Monte Carlo simulations to rigorously evaluate and compare these new estimators with OLS and other existing ridge estimators. Our simulations show that the proposed estimators consistently minimize the MSE better than OLS and other ridge estimators, particularly in datasets with strong multicollinearity and large error variances. We further validate their practical value through applications using two real-world datasets, demonstrating both their robustness and theoretical alignment. Full article
(This article belongs to the Special Issue Applied Mathematics and Mathematical Modeling)
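The kind of Monte Carlo MSE comparison described here can be illustrated with ordinary single-parameter ridge regression; the paper's RIRE1–RIRE4 estimators use two shrinkage parameters (k, d) whose exact forms are not given in the abstract, so the sketch below is only a stand-in for the general mechanism:

```python
import numpy as np

def ridge(X, y, k):
    # Ridge estimator: (X'X + kI)^(-1) X'y; k = 0 recovers OLS.
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + k * np.eye(p), X.T @ y)

def mc_mse(k, n_rep=500, n=100, seed=0):
    # Monte Carlo MSE of the coefficient estimate under strong
    # multicollinearity (x2 is x1 plus tiny noise).
    rng = np.random.default_rng(seed)
    beta_true = np.array([1.0, 1.0])
    total = 0.0
    for _ in range(n_rep):
        x1 = rng.normal(size=n)
        x2 = x1 + 0.01 * rng.normal(size=n)  # nearly collinear regressor
        X = np.column_stack([x1, x2])
        y = X @ beta_true + rng.normal(size=n)
        b = ridge(X, y, k)
        total += np.sum((b - beta_true) ** 2)
    return total / n_rep
```

In this setup the biased ridge estimate (k = 1) typically attains far lower MSE than OLS (k = 0), which is exactly the bias-variance trade-off the proposed estimators exploit.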

20 pages, 9491 KiB  
Article
A General Model for Converting All-Wave Net Radiation at Instantaneous to Daily Scales Under Clear Sky
by Jiakun Han, Bo Jiang, Yu Zhao, Jianghai Peng, Shaopeng Li, Hui Liang, Xiuwan Yin and Yingping Chen
Remote Sens. 2025, 17(14), 2364; https://doi.org/10.3390/rs17142364 - 9 Jul 2025
Abstract
Surface all-wave net radiation (Rn) is one of the essential parameters describing the surface radiative energy balance, and it is of great significance in scientific research and practical applications. Among various acquisition approaches, the estimation of Rn from satellite data is gaining increasing attention. To obtain daily Rn (Rnd) from instantaneous satellite observations, a parameter Cd, defined as the ratio between daily and instantaneous Rn under clear sky, was proposed and has been widely applied. Inspired by the sinusoidal model, a new model for Cd estimation (New Model) was proposed in this study based on comprehensive clear-sky Rn measurements collected from 105 global sites. Compared with existing models, New Model can estimate Cd at any moment between 9:30 and 14:30, depending only on the length of daytime. New Model was evaluated against the measurements by validating it and comparing it with two popular existing models. The results demonstrated that the Rnd obtained by multiplying by Cd from New Model had the best accuracy, yielding an overall R2 of 0.95, a root mean square error (RMSE) of 14.07 Wm−2, and a Bias of −0.21 Wm−2. Additionally, New Model performed relatively better over vegetated surfaces than over non- or less-vegetated surfaces, with relative RMSEs (rRMSEs) of 11.1% and 17.89%, respectively. Afterwards, the New Model Cd estimate was applied with MODIS data to calculate Rnd. After validation, the Rnd computed from Cd was much better than that from the sinusoidal model, especially when MODIS transited only once in a day, with validated R2 of 0.88 and 0.84, RMSEs of 19.60 and 27.70 Wm−2, and Biases of −0.76 and 8.88 Wm−2. Finally, further analysis confirmed the robustness of New Model under various conditions in terms of moments, land cover types, and geolocations, though the model is suggested to be applied at a time scale of 30 min. In summary, although the new Cd model only works under clear sky, it has strong potential for estimating Rnd from satellite data, especially for data with fine spatial resolution but low temporal resolution. Full article
(This article belongs to the Special Issue Remote Sensing of Solar Radiation Absorbed by Land Surfaces)

22 pages, 20312 KiB  
Review
On the Incompleteness of the Coelacanth Fossil Record
by Zhiwei Yuan, Lionel Cavin and Haijun Song
Foss. Stud. 2025, 3(3), 10; https://doi.org/10.3390/fossils3030010 - 8 Jul 2025
Abstract
This study conducted a spatiotemporal review of the coelacanth fossil record and explored its distribution and diversity patterns. Coelacanth research can be divided into two distinct periods: the first period, which is based solely on the fossil record, and the second period following the discovery of extant taxa, significantly stimulating research interest. The distribution and research intensity of coelacanth fossils exhibit marked spatial heterogeneity, with Europe and North America being the most extensively studied regions. In contrast, Asia, South America, and Oceania offer substantial potential for future research. Temporally, the coelacanth fossil record also demonstrates significant variation across geological periods, revealing three diversity peaks in the Middle Devonian, Early Triassic, and Late Jurassic, with the Early Triassic peak exhibiting the highest diversity. With the exception of the Late Devonian, Carboniferous, and Late Cretaceous, most periods remain understudied, particularly the Permian, Early Jurassic, and Middle Jurassic, where the record is notably scarce. Integrating the fossil record with phylogenetic analyses enables more robust estimations of coelacanth diversity patterns through deep time. The diversity peak observed in the Middle Devonian is consistent with early burst models of diversification, whereas the Early and Middle Triassic peaks are considered robust, and the Late Jurassic peak may be influenced by taphonomic biases. The low population abundance and limited diversity of coelacanths reduce the number of specimens available for fossilization. The absence of a Cenozoic coelacanth fossil record may be linked to their moderately deep-sea habitat. Future research should prioritize addressing gaps in the fossil record, particularly in Africa, Asia, and Latin America; employing multiple metrics to mitigate sampling biases; and integrating a broader range of taxa into phylogenetic analyses. 
In contrast to the widespread distribution of the fossil record, extant coelacanths exhibit a restricted distribution, underscoring the urgent need to increase conservation efforts. Full article
(This article belongs to the Special Issue Continuities and Discontinuities of the Fossil Record)

18 pages, 2395 KiB  
Article
Theoretical Potential of TanSat-2 to Quantify China’s CH4 Emissions
by Sihong Zhu, Dongxu Yang, Liang Feng, Longfei Tian, Yi Liu, Junji Cao, Minqiang Zhou, Zhaonan Cai, Kai Wu and Paul I. Palmer
Remote Sens. 2025, 17(13), 2321; https://doi.org/10.3390/rs17132321 - 7 Jul 2025
Abstract
Satellite-based monitoring of atmospheric column-averaged dry-air mole fraction (XCH4) is essential for quantifying methane (CH4) emissions, yet uncharacterized spatially varying biases in XCH4 observations can cause misattribution in flux estimates. This study assesses the potential of the upcoming TanSat-2 satellite mission to estimate China’s CH4 emission using a series of Observing System Simulation Experiments (OSSEs) based on an Ensemble Kalman Filter (EnKF) inversion framework coupled with GEOS-Chem on a 0.5° × 0.625° grid, alongside an evaluation of current TROPOMI-based products against Total Carbon Column Observing Network (TCCON) observations. Assuming a target precision of 8 ppb, TanSat-2 could achieve an annual national emission estimate accuracy of 2.9% ± 4.2%, reducing prior uncertainty by 84%, with regional deviations below 5.0% across Northeast, Central, East, and Southwest China. In contrast, limited coverage in South China due to persistent cloud cover leads to a 26.1% discrepancy—also evident in pseudo TROPOMI OSSEs—highlighting the need for complementary ground-based monitoring strategies. Sensitivity analyses show that satellite retrieval biases strongly affect inversion robustness, reducing the accuracy in China’s total emission estimates by 5.8% for every 1 ppb increase in bias level across scenarios, particularly in Northeast, Central and East China. We recommend expanding ground-based XCH4 observations in these regions to support the correction of satellite-derived biases and improve the reliability of satellite-constrained inversion results. Full article

23 pages, 31371 KiB  
Article
Evaluations of GPM IMERG-Late Satellite Precipitation Product for Extreme Precipitation Events in Zhejiang Province
by Ruijin Zhu, Zhe Lv, Muzhi Li, Jiaxi Wu, Meiying Dong and Huiyan Xu
Atmosphere 2025, 16(7), 821; https://doi.org/10.3390/atmos16070821 - 6 Jul 2025
Abstract
In recent years, satellite products have played an increasingly significant role in monitoring and estimating global extreme weather events, owing to their excellent spatiotemporal continuity and broad coverage. This study systematically evaluates the Global Precipitation Measurement (GPM) Integrated Multi-Satellite Retrievals for GPM Late Run (IMERG-L) product for regional precipitation events based on observations in Zhejiang Province from 2001 to 2020. Seven typical precipitation indices and seven accuracy evaluation indexes are applied to analyze the performance of IMERG-L from multiple perspectives across the precipitation intensity, frequency, and spatial distribution dimensions. The results show that IMERG-L is capable of capturing spatial distribution trends, especially for the frequency-based precipitation indices (CWD, R10mm and R20mm), which depict regional wetness and precipitation patterns. However, the product suffers from systematic overestimation of heavy precipitation and extreme precipitation intensity, with a high false alarm rate and unstable accuracy; in particular, for events of heavy-rain class and above, the Probability of Detection (POD) drops significantly, indicating a marked reduction in recognition capability and a risk of misclassification. Specifically, IMERG-L failed to reproduce the observed eastward-increasing trends in the annual maximum one-day (RX1day) and five-day (RX5day) precipitation, demonstrating its limitations in accurately capturing extreme precipitation patterns across Zhejiang Province. Overall, further optimization of IMERG-L is needed for the accurate monitoring and quantitative estimation of high-intensity extreme precipitation events: reducing the intensity-dependent biases in heavy rainfall detection, better representing spatial inhomogeneity in trends, and improving false alarm suppression for extreme events. Full article
(This article belongs to the Section Atmospheric Techniques, Instruments, and Modeling)
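The categorical scores mentioned above (Probability of Detection, false alarm rate) come from a standard 2×2 contingency table of satellite detections against gauge observations. A minimal sketch, with the Critical Success Index added as a common companion metric:

```python
def categorical_scores(hits, misses, false_alarms):
    # hits: events detected by both the satellite product and gauges
    # misses: gauge-observed events the satellite product missed
    # false_alarms: satellite detections the gauges did not confirm
    pod = hits / (hits + misses)                 # Probability of Detection
    far = false_alarms / (hits + false_alarms)   # False Alarm Ratio
    csi = hits / (hits + misses + false_alarms)  # Critical Success Index
    return pod, far, csi
```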

23 pages, 943 KiB  
Review
Establishing Best Practices for Clinical GWAS: Tackling Imputation and Data Quality Challenges
by Giorgio Casaburi, Ron McCullough and Valeria D’Argenio
Int. J. Mol. Sci. 2025, 26(13), 6397; https://doi.org/10.3390/ijms26136397 - 3 Jul 2025
Abstract
Genome-wide association studies (GWASs) play a central role in precision medicine, powering a range of clinical applications from pharmacogenomics to disease risk prediction. A critical component of GWASs is genotype imputation, a computational method used to infer untyped genetic variants. While imputation increases variant coverage by estimating genotypes at untyped loci, this expanded coverage can enhance the ability to detect genetic associations in some cases. However, imputation also introduces biases, particularly for rare variants and underrepresented populations, which may compromise clinical accuracy. This review examines the challenges and clinical implications of genotype imputation errors, including their impact on therapeutic decisions and predictive models, like polygenic risk scores (PRSs). In particular, the sources of imputation errors have been deeply explored, emphasizing the disparities in performance across ancestral populations and downstream effects on healthcare equity and addressing ethical considerations surrounding the access to equitable genomic resources. Based on the above, we propose evidence-based best practices for clinical GWAS implementation, including the direct genotyping of clinically actionable variants, the cross-population validation of imputation models, the transparent reporting of imputation quality metrics, and the use of ancestry-matched reference panels. As genomic data becomes increasingly adopted in healthcare systems worldwide, ensuring the accuracy and inclusivity of GWAS-derived insights is paramount. Here, we suggest a framework for the responsible clinical integration of imputed genetic data, paving the way for more reliable and equitable personalized medicine. Full article
(This article belongs to the Section Molecular Genetics and Genomics)

30 pages, 621 KiB  
Article
Digital Transitions and Sustainable Futures: Family Structure’s Impact on Chinese Consumer Saving Choices and Marketing Implications
by Wenxin Fu, Qijun Jiang, Jiahao Ni and Yihong Xue
Sustainability 2025, 17(13), 6070; https://doi.org/10.3390/su17136070 - 2 Jul 2025
Abstract
Family structure has long been regarded as an important determinant of household saving, yet the empirical evidence for developing economies remains limited. Using the 2018–2022 panels of the China Family Panel Studies (CFPS), a nationwide survey that follows 16,519 households across three waves, the present study investigates how family size, the elderly share, and the child share jointly shape saving behavior. A household fixed effects framework is employed to control for time-invariant heterogeneity, followed by a sequential endogeneity strategy: external-shock instruments are tested and rejected, lagged two-stage least squares implement internal instruments, and a dynamic System-GMM model is estimated to capture saving persistence. Robustness checks include province-by-year fixed effects, inverse probability weighting for attrition, balanced-panel replication, alternative variable definitions, lag structures, and sample filters. Family size raises the saving rate by 4.6 percentage points in the preferred dynamic specification (p < 0.01). The elderly ratio remains insignificant throughout, whereas the child ratio exerts a negative but model-sensitive association. A three-path mediation analysis indicates that approximately 26 percent of the total family size effect operates through scale economy savings on quasi-fixed expenses, 19 percent is offset by resource dilution pressure, and less than 1 percent flows through a precautionary saving channel linked to income volatility. These findings extend the resource dilution literature by quantifying the relative strength of competing mechanisms in a middle-income context and showing that cost-sharing economies dominate child-related dilution for most households. Policy discussion highlights the importance of public childcare subsidies and targeted credit access for rural parents, whose saving capacity is the most constrained by additional children. 
The study also demonstrates that fixed effects estimates of family structure can be upward-biased unless dynamic saving behavior and internal instruments are considered. Full article

21 pages, 699 KiB  
Article
Stock Market Hype: An Empirical Investigation of the Impact of Overconfidence on Meme Stock Valuation
by Richard Mawulawoe Ahadzie, Peterson Owusu Junior, John Kingsley Woode and Dan Daugaard
Risks 2025, 13(7), 127; https://doi.org/10.3390/risks13070127 - 1 Jul 2025
Abstract
This study investigates the relationship between overconfidence and meme stock valuation, drawing on panel data from 28 meme stocks listed from 2019 to 2024. The analysis incorporates key financial indicators, including Tobin’s Q ratio, market capitalization, return on assets, leverage, and volatility. A range of overconfidence proxies is employed, including changes in trading volume, turnover rate, changes in outstanding shares, and alternative measures of excessive trading. We observe a significant positive relationship between overconfidence (as measured by changes in trading volume) and firm valuation, suggesting that investor biases contribute to notable pricing distortions. Leverage has a significant negative relationship with firm valuation. In contrast, market capitalization has a significant positive relationship with firm valuation, implying that meme stock investors respond to both speculative sentiment and traditional firm fundamentals. Robustness checks using alternative proxies reveal that turnover rate and changes in the number of shares are negatively related to valuation. This shows the complex dynamics of meme stocks, where psychological factors intersect with firm-specific indicators. However, results from a dynamic panel model estimated using the Dynamic System Generalized Method of Moments (GMM) show that the turnover rate has a significantly positive relationship with firm valuation. These results offer valuable insights into the pricing behavior of meme stocks, revealing how investor sentiment impacts periodic valuation adjustments in speculative markets. Full article
(This article belongs to the Special Issue Theoretical and Empirical Asset Pricing)

16 pages, 3542 KiB  
Article
Calculation Method for Failure Pressure of Oil and Gas Pipelines with Multiple Corrosion Defects
by Jian Chen, Ying Zhen, Yuanyuan Liu, Dekang Zhang, Yuguang Cao, Yaya He and Qiankun Zhao
Coatings 2025, 15(7), 774; https://doi.org/10.3390/coatings15070774 - 30 Jun 2025
Abstract
In the structural integrity assessment of pipelines with multiple defects, conventional methods such as the DNV and MTI approaches demonstrate notable predictive biases. Specifically, the burst capacity estimated by the MTI model frequently exceeds experimental test values, whereas DNV calculations tend to be overly conservative, possibly resulting in unnecessary maintenance expenses. To address these challenges, this study employs nonlinear finite element analysis to reassess the failure pressure of pipelines with multiple corrosion defects. Initially, the interaction limit spacing between adjacent corrosion defects is computed to determine their potential for interaction. For adjacent corrosion defects exhibiting interaction, a merging limit criterion is introduced to evaluate their feasibility for merging. For those adjacent defects that satisfy the merging limit criterion, the DNV method is employed to merge them, followed by recalculating the failure pressure using the relevant formula. Finally, this study proposes a novel approach for assessing the failure pressure of pipelines with multiple corrosion defects. Through comparative analysis and validation, the proposed evaluation method exhibits enhanced accuracy in predicting the failure pressure of pipelines with multiple corrosion defects. The findings of this study provide a more precise methodology for assessing the residual strength of pipelines affected by multiple corrosion defects. Full article
(This article belongs to the Section Corrosion, Wear and Erosion)

13 pages, 1875 KiB  
Article
Quantitative Characterization of Carbonate Mineralogy in Lake Yangzong Sediments Using XRF-Derived Calcium Signatures and Inorganic Carbon Measurements
by Huayong Li, Lizeng Duan, Junhui Mo, Jungang Lin, Huayu Li, Han Wang, Jingwen Wu, Qifa Sun and Hucai Zhang
Water 2025, 17(13), 1949; https://doi.org/10.3390/w17131949 - 29 Jun 2025
Abstract
The carbonate content serves as a fundamental proxy in lacustrine sediments for reconstructing palaeoclimate and environmental changes. Although multiple analytical techniques exist for its quantification, systematic comparisons between different methodologies and the precise identification of carbonate mineralogy are still needed. In this study, a 1020 cm continuous sediment core (YZH-1) from Lake Yangzong in Yunnan Province was employed. Initially, the semi-quantitative calcium (Ca) concentration was obtained via X-ray fluorescence (XRF) core scanning. Subsequently, the total inorganic carbon (TIC) content was determined using both the loss on ignition (LOI) and gasometric (GM) methods to evaluate methodological discrepancies and potential biases. Furthermore, a quantitative regression model was developed to estimate carbonate abundance based on the relationship between XRF-derived Ca data and the analytically determined carbonate content. A comparative analysis revealed a strong positive correlation (r = 0.97) between LOI and GM measurements, though LOI-derived values are systematically elevated by 2.6% on average. This overestimation likely stems from the thermal decomposition of non-carbonate minerals during LOI analysis. Conversely, GM measurements exhibit a ~5% underestimation relative to certified reference materials, attributable to instrumental limitations such as gas leakage. Strong covariation (r = 0.92) between XRF-Ca intensities and the TIC content indicates that carbonate minerals in Lake Yangzong sediments predominantly consist of calcite. A transfer function was established to convert XRF-Ca scanning data into absolute Ca concentrations, leveraging the robust Ca-TIC relationship. The proposed quantification model demonstrates high reliability when applied to standardized XRF-Ca datasets, offering a practical tool for paleolimnological studies in similar geological settings. Full article
(This article belongs to the Section Hydrology)
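The transfer-function step described in the abstract — calibrating semi-quantitative XRF-Ca counts against analytically measured carbonate content — can be sketched as an ordinary least-squares fit. This is an illustrative outline only: the function names and the synthetic calibration values below are placeholders, not data or code from the YZH-1 study.

```python
import numpy as np

def fit_transfer_function(xrf_ca_counts, measured_tic):
    """Fit a linear transfer function TIC = a * counts + b by least squares.

    xrf_ca_counts : semi-quantitative XRF-Ca scanning intensities
    measured_tic  : carbonate (TIC) content measured on discrete samples
    """
    a, b = np.polyfit(xrf_ca_counts, measured_tic, deg=1)
    return a, b

def predict_carbonate(counts, a, b):
    """Convert XRF-Ca intensities into estimated carbonate content."""
    return a * np.asarray(counts) + b

# Synthetic calibration pairs (illustrative, not from the YZH-1 core):
counts = np.array([100.0, 200.0, 300.0, 400.0, 500.0])
tic = np.array([2.1, 4.0, 6.2, 7.9, 10.1])  # e.g. wt% carbonate
a, b = fit_transfer_function(counts, tic)
est = predict_carbonate(350.0, a, b)  # estimate for an unscanned depth
```

In practice the fit would be done on the discrete samples where both XRF counts and LOI/GM measurements exist, and the resulting coefficients applied to the continuous scanning record.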
16 pages, 2103 KiB  
Article
Improving Green Roof Runoff Modeling for Sustainable Cities: The Role of Site-Specific Calibration in SCS-CN Parameters
by Thiago Masaharu Osawa, Fabio Ferreira Nogueira, Brenda Chaves Coelho Leite and José Rodolfo Scarati Martins
Sustainability 2025, 17(13), 5976; https://doi.org/10.3390/su17135976 - 29 Jun 2025
Abstract
Green roofs are increasingly recognized as effective Nature-Based Solutions (NBS) for urban stormwater management, contributing to sustainable and climate-resilient cities. The Soil Conservation Service Curve Number (SCS-CN) model is commonly used to simulate their hydrological performance due to its simplicity and low data requirements. However, the standard assumption of a fixed initial abstraction ratio (Ia/S = 0.2), long debated in hydrology, has been largely overlooked in green roof applications. This study investigates the variability of Ia/S and its impact on runoff simulation accuracy for a green roof under a humid subtropical climate. Event-based analysis across multiple storms revealed Ia/S values ranging from 0.01 to 0.62, with a calibrated optimal value of 0.17. This variability is primarily driven by the physical and biological characteristics of the green roof rather than short-term rainfall conditions. Using the fixed ratio introduced consistent biases in runoff estimation, while intermediate ratios (0.17–0.22) provided higher accuracy, with the optimal ratio yielding a median Curve Number (CN) of 89 and high model performance (NSE = 0.95). Additionally, CN values followed a positively skewed Weibull distribution, highlighting the value of probabilistic modeling. Though limited to one green roof design, the findings underscore the importance of site-specific parameter calibration to improve predictive reliability. By enhancing model accuracy, this research supports better design, evaluation, and management of green roofs, reinforcing their contribution to integrated urban water systems and global sustainability goals. Full article
(This article belongs to the Special Issue Green Roof Benefits, Performances and Challenges)
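The SCS-CN relations the abstract varies — potential retention S from the Curve Number, initial abstraction Ia as a ratio of S, and event runoff Q — can be written out directly. The sketch below uses the standard textbook form of the method; the rainfall depth and the comparison values are illustrative, not the study's event data.

```python
def scs_cn_runoff(p_mm: float, cn: float, ia_ratio: float = 0.2) -> float:
    """SCS-CN direct runoff depth (mm) for a single rainfall event.

    S  = 25400/CN - 254   (potential maximum retention, mm)
    Ia = ia_ratio * S     (initial abstraction, mm)
    Q  = (P - Ia)^2 / (P - Ia + S)  for P > Ia, else 0.
    """
    s = 25400.0 / cn - 254.0
    ia = ia_ratio * s
    if p_mm <= ia:
        return 0.0
    return (p_mm - ia) ** 2 / (p_mm - ia + s)

# Illustrative comparison with the abstract's median CN of 89:
# the standard fixed ratio (0.2) vs. the calibrated ratio (0.17).
q_standard = scs_cn_runoff(40.0, 89, ia_ratio=0.2)
q_calibrated = scs_cn_runoff(40.0, 89, ia_ratio=0.17)
```

A lower Ia/S ratio leaves more of the rainfall available for runoff, so for the same CN and storm depth the calibrated ratio yields a larger simulated runoff — the direction of bias the study attributes to the fixed-ratio assumption.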
15 pages, 4773 KiB  
Article
Relationship Between Effective Dose, Alternative Metrics, and SSDE: Experiences with Two CT Dose-Monitoring Systems
by Lilla Szatmáriné Egeresi, László Urbán, Zsolt Dankó, Ervin Balázs, Ervin Berényi, Mária Marosi, János Kiss, Péter Bágyi, Zita Képes, Miklós Emri and László Balkay
Diagnostics 2025, 15(13), 1654; https://doi.org/10.3390/diagnostics15131654 - 28 Jun 2025
Abstract
Background: We assessed the frequency and causes of discrepancies in CT dose indices such as dose–length product (DLP), size-specific dose estimate (SSDE), and effective dose (ED), as calculated by CT dose-monitoring systems. Our secondary aim was to demonstrate the estimation of the size-specific ED (SED) from patients' dose records. Methods: The retrospective study included dosimetric data from 79,383 consecutive CT exams performed on two CT scanners. The following dose values were recorded from both the locally developed dose-monitoring system (DMS) and a commercial dose-monitoring program (DW™): DLP, SSDE, and ED. Only the DMS provided the body-weight-corrected effective dose (SED_DMS) and the SED based on previously published data. Results: Without body-region-specific analysis, there were no systematic differences between the DLP, ED, or SSDE values obtained from DW™ and DMS. However, the body-region-based correlation revealed substantial differences between ED_DMS and ED_DW, primarily related to inadequate identification of the body region. SSDE showed a strong correlation for each anatomical category and CT device, except for the head region, where inadequate consideration of gantry inclination biased the SSDE_DW values. Furthermore, by analyzing the SED_DMS, SSDE, and SED correlations, we concluded that SED_DMS is a promising metric for estimating the SED value. Conclusions: SED provides suitable supplementary size-specific dose data to SSDE and may be a preferable choice for estimating cumulative doses in routine radiological practice. Full article
(This article belongs to the Section Medical Imaging and Theranostics)
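The two quantities the study compares across monitoring systems follow standard conversion formulas: ED is approximated as a body-region-specific k-factor times DLP, and SSDE is a size-dependent factor (from the patient's water-equivalent diameter) times CTDIvol. The sketch below is a hedged illustration of why region misidentification matters; the k-factor values are illustrative placeholders, not the coefficients used by DMS or DW™ (real factors come from region- and standard-specific tables).

```python
# Illustrative body-region conversion factors, mSv per (mGy*cm).
# Placeholder values for demonstration; actual k-factors are tabulated
# per body region and dosimetry standard.
K_FACTORS = {
    "head": 0.0021,
    "chest": 0.014,
    "abdomen": 0.015,
}

def effective_dose(dlp_mgy_cm: float, body_region: str) -> float:
    """ED ~ k * DLP with a body-region-specific conversion factor.
    Misidentifying the body region selects the wrong k, producing the
    kind of ED discrepancy the study reports between systems."""
    return K_FACTORS[body_region] * dlp_mgy_cm

def ssde(ctdi_vol_mgy: float, size_factor: float) -> float:
    """SSDE = f_size * CTDIvol, where f_size is derived from the
    patient's water-equivalent diameter."""
    return size_factor * ctdi_vol_mgy

# Same scan DLP, two different assumed body regions:
ed_chest = effective_dose(400.0, "chest")
ed_head = effective_dose(400.0, "head")
```

Because the k-factors differ by nearly an order of magnitude between regions, two monitoring systems that classify the same exam under different body regions can report very different ED values even when their DLP values agree.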