Search Results (430)

Search Parameters:
Keywords = penalized models

16 pages, 957 KiB  
Article
The Influence of Blood Transfusion Indexed to Patient Blood Volume on 5-Year Mortality After Coronary Artery Bypass Grafting—An EuroSCORE II Adjusted Spline Regression Analysis
by Joseph Kletzer, Maximilian Kreibich, Martin Czerny, Tim Berger, Albi Fagu, Laurin Micek, Ulrich Franke, Matthias Eschenhagen, Tau S. Hartikainen, Mirjam Wild and Dalibor Bockelmann
J. Cardiovasc. Dev. Dis. 2025, 12(8), 287; https://doi.org/10.3390/jcdd12080287 - 28 Jul 2025
Abstract
Background: While timely blood transfusion is critical for restoring oxygen-carrying capacity after coronary artery bypass grafting (CABG), allogeneic blood product transfusions are independently associated with increased long-term mortality, necessitating a risk-stratified approach to balance oxygen delivery against immunological complications and infection risks. Methods: We retrospectively analyzed 3376 patients undergoing isolated CABG between 2005 and 2023 at a single tertiary center. Patients who died during their perioperative hospital stay within 30 days were excluded. Transfusion burden was assessed both as the absolute number of blood product units (packed red blood cells, platelet transfusion, fresh frozen plasma) and as a percentage of calculated patient blood volume. The primary outcome was all-cause mortality at 5 years. Flexible Cox regression with penalized smoothing splines, adjusted for EuroSCORE II, was used to model dose–response relationships. Results: From our cohort of 3376 patients, a total of 137 patients (4.05%) received >10 units of packed red blood cells (PRBC) perioperatively. These patients were older (median 71 vs. 68 years, p < 0.001), more often female (29% vs. 15%, p < 0.001), and had higher preoperative risk (EuroSCORE II: 2.53 vs. 1.41, p < 0.001). After 5 years, mortality was 42% in the massive transfusion group versus 10% in controls. Spline regression revealed an exponential increase in mortality with transfused units: 14 units yielded a 1.5-fold higher hazard of death (HR 1.46, 95% CI 1.31–1.64), rising to HR 2.71 (95% CI 2.12–3.47) at 30 units. When transfusion was indexed to blood volume, this relationship became linear and more tightly correlated with mortality, with lower maximum hazard ratios and narrower confidence intervals. Conclusion: Indexing transfusion burden to the percentage of patient blood volume replaced provides a more accurate and clinically actionable predictor of 5-year mortality after CABG than absolute unit counts. Our findings support a shift toward individualized, volume-based transfusion strategies to optimize patient outcomes and resource stewardship in a time of limited availability of blood products. Full article
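As a rough illustration of the modelling idea described above (not the authors' exact model), the sketch below fits a Cox model in which the transfusion covariate enters through a hand-built cubic spline basis and all coefficients are shrunk by a ridge penalizer; the lifelines library is assumed, and the column names and simulated data are placeholders.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "prbc_units": rng.poisson(4, n).astype(float),          # transfused PRBC units
    "euroscore2": rng.gamma(2.0, 1.0, n),                   # preoperative risk score
    "time_years": rng.exponential(5.0, n).clip(0.1, 5.0),
    "died": rng.integers(0, 2, n),
})

# Truncated-power cubic spline basis for the transfusion covariate; the ridge
# penalizer below shrinks all coefficients, a rough stand-in for penalized
# smoothing splines.
knots = np.quantile(df["prbc_units"], [0.25, 0.5, 0.75])
for i, k in enumerate(knots):
    df[f"spline_{i}"] = np.clip(df["prbc_units"] - k, 0, None) ** 3

cph = CoxPHFitter(penalizer=0.5)                            # L2 penalty on coefficients
cph.fit(df, duration_col="time_years", event_col="died")
cph.print_summary()
```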
26 pages, 9588 KiB  
Article
Research and Experimental Verification of an Efficient Subframe Lightweighting Method Integrating SIMP Topology and Size Optimization
by Jihui Zhuang and Fan Zeng
Appl. Sci. 2025, 15(15), 8192; https://doi.org/10.3390/app15158192 - 23 Jul 2025
Viewed by 90
Abstract
Under the context of the dual-carbon policy, reducing energy consumption and emissions in automobiles has garnered significant attention, with automotive lightweighting being particularly important. This paper focuses on the lightweight design of automotive subframes, aiming to minimize weight while meeting performance requirements. Research has revealed that the original subframe allows further room for lightweighting and performance optimization. A topology optimization model was established using the Solid Isotropic Material with Penalization (SIMP) method and solved using the Method of Moving Asymptotes (MMA) algorithm. Integration of the SIMP method was achieved on the Abaqus-Matlab (2022x) platform via Python (3.11.0) and Matlab (R2022a) coding, forming an effective optimization framework. The optimization results provided clear load transfer paths, offering a theoretical basis for geometric model conversion. The subframe model was subsequently reconstructed in CATIA. Material redundancy was identified in the reconstructed subframe model, prompting secondary optimization. Multi-objective size optimization was conducted in OptiStruct, reducing the subframe’s mass from 33.73 kg to 17.84 kg, achieving a 47.1% weight reduction. Static stiffness and modal analyses performed in HyperMesh confirmed that results met all relevant standards. Modal testing revealed a minimal deviation of only −2.7% from the simulation results, validating the feasibility and reliability of the optimized design. This research demonstrates that combining topology optimization with size optimization can significantly reduce weight and enhance subframe performance, providing valuable support for future automotive component design. Full article
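For readers unfamiliar with SIMP, the minimal standalone sketch below shows the penalized material interpolation E(rho) = Emin + rho^p (E0 - Emin) and a single optimality-criteria density update; it is a generic textbook-style fragment, not the paper's Abaqus-Matlab/OptiStruct workflow.

```python
import numpy as np

E0, Emin, p = 1.0, 1e-9, 3.0          # solid stiffness, void stiffness, penalization power

def simp_young(rho):
    """Penalized Young's modulus E(rho) = Emin + rho**p * (E0 - Emin)."""
    return Emin + rho**p * (E0 - Emin)

def oc_update(rho, dc, dv, volfrac, move=0.2):
    """One optimality-criteria density update via bisection on the Lagrange multiplier."""
    l1, l2 = 1e-9, 1e9
    while (l2 - l1) / (l1 + l2) > 1e-3:
        lmid = 0.5 * (l1 + l2)
        rho_new = np.clip(rho * np.sqrt(-dc / (dv * lmid)),
                          np.maximum(rho - move, 0.0),
                          np.minimum(rho + move, 1.0))
        if rho_new.mean() > volfrac:
            l1 = lmid
        else:
            l2 = lmid
    return rho_new

rho = np.full(100, 0.5)               # initial element densities
dc = -np.linspace(1.0, 2.0, 100)      # mock compliance sensitivities (non-positive)
dv = np.ones(100)                     # volume sensitivities
rho = oc_update(rho, dc, dv, volfrac=0.5)
print("updated mean density:", round(float(rho.mean()), 3))
```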

15 pages, 2325 KiB  
Article
Research on Quantitative Analysis Method of Infrared Spectroscopy for Coal Mine Gases
by Feng Zhang, Yuchen Zhu, Lin Li, Suping Zhao, Xiaoyan Zhang and Chaobo Chen
Molecules 2025, 30(14), 3040; https://doi.org/10.3390/molecules30143040 - 20 Jul 2025
Viewed by 188
Abstract
Accurate and reliable detection of coal mine gases is key to ensuring safe coal mine production. Fourier Transform Infrared (FTIR) spectroscopy, due to its high sensitivity, non-destructive nature, and potential for online monitoring, has emerged as a key technique in gas detection. However, the complex underground environment often causes baseline drift in IR spectra. Furthermore, the variety of gas species and the uneven distribution of concentrations make it difficult to achieve precise and reliable online analysis using existing quantitative methods. This paper aims to perform a quantitative analysis of coal mine gases by FTIR. It utilized the adaptive smoothness parameter penalized least squares method to correct the drifted spectra. Subsequently, based on the infrared spectral distribution characteristics of coal mine gases, the gases were classified into those with mutually distinct absorption peaks and those with overlapping absorption peaks. For gases with distinct absorption peaks, three spectral lines, including the absorption peak and its adjacent troughs, were selected for quantitative analysis. Spline fitting, polynomial fitting, and other curve fitting methods were used to establish a functional relationship between characteristic parameters and gas concentration. For gases with overlapping absorption peaks, a wavelength selection method based on the impact values of variables and population analysis was applied to select variables from the spectral data. The selected variables were then used as input features for building a model with a backpropagation (BP) neural network. Finally, the proposed method was validated using standard gases. Experimental results show detection limits of 0.5 ppm for CH4, 1 ppm for C2H6, 0.5 ppm for C3H8, 0.5 ppm for n-C4H10, 0.5 ppm for i-C4H10, 0.5 ppm for C2H4, 0.2 ppm for C2H2, 0.5 ppm for C3H6, 1 ppm for CO, 0.5 ppm for CO2, and 0.1 ppm for SF6, with quantification limits below 10 ppm for all gases. The absolute error is less than 0.3% of the full scale (F.S.) and the relative error is within 10%. These results demonstrate that the proposed infrared spectral quantitative analysis method can effectively analyze mine gases and achieve good predictive performance.
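The baseline-correction step can be illustrated with a small asymmetrically weighted penalized least-squares sketch in the spirit of AsLS/airPLS; the smoothness parameter lam, asymmetry p, and the synthetic spectrum below are illustrative choices, not the paper's adaptive scheme.

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import spsolve

def asls_baseline(y, lam=1e5, p=0.01, n_iter=10):
    """Estimate a smooth baseline z minimizing sum_i w_i (y_i - z_i)^2 + lam * sum (d2 z)^2,
    with asymmetric weights that push the baseline under the peaks."""
    L = len(y)
    D = sparse.diags([1, -2, 1], [0, -1, -2], shape=(L, L - 2))
    P = lam * (D @ D.T)                          # second-difference penalty matrix
    w = np.ones(L)
    for _ in range(n_iter):
        W = sparse.diags(w)
        z = spsolve(W + P, w * y)
        w = np.where(y > z, p, 1 - p)            # points above the baseline get small weight
    return z

x = np.linspace(0, 100, 1000)
spectrum = np.exp(-(x - 40) ** 2 / 2) + 0.002 * x + 0.1   # one peak plus a drifting baseline
corrected = spectrum - asls_baseline(spectrum)
print(corrected[:5])
```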

27 pages, 3704 KiB  
Article
Explainable Machine Learning and Predictive Statistics for Sustainable Photovoltaic Power Prediction on Areal Meteorological Variables
by Sajjad Nematzadeh and Vedat Esen
Appl. Sci. 2025, 15(14), 8005; https://doi.org/10.3390/app15148005 - 18 Jul 2025
Viewed by 287
Abstract
Precisely predicting photovoltaic (PV) output is crucial for reliable grid integration; so far, most models rely on site-specific sensor data or treat large meteorological datasets as black boxes. This study proposes an explainable machine-learning framework that simultaneously ranks the most informative weather parameters and reveals their physical relevance to PV generation. Starting from 27 local and plant-level variables recorded at 15 min resolution for a 1 MW array in Çanakkale region, Türkiye (1 August 2022–3 August 2024), we apply a three-stage feature-selection pipeline: (i) variance filtering, (ii) hierarchical correlation clustering with Ward linkage, and (iii) a meta-heuristic optimizer that maximizes a neural-network R2 while penalizing poor or redundant inputs. The resulting subset, dominated by apparent temperature and diffuse, direct, global-tilted, and terrestrial irradiance, reduces dimensionality without significantly degrading accuracy. Feature importance is then quantified through two complementary aspects: (a) tree-based permutation scores extracted from a set of ensemble models and (b) information gain computed over random feature combinations. Both views converge on shortwave, direct, and global-tilted irradiance as the primary drivers of active power. Using only the selected features, the best model attains an average R2 ≅ 0.91 on unseen data. By utilizing transparent feature-reduction techniques and explainable importance metrics, the proposed approach delivers compact, more generalized, and reliable PV forecasts that generalize to sites lacking embedded sensor networks, and it provides actionable insights for plant siting, sensor prioritization, and grid-operation strategies. Full article
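A hedged sketch of the first two feature-selection stages named above (variance filtering and Ward-linkage correlation clustering) is shown below; the weather column names and thresholds are invented for illustration, and the meta-heuristic third stage is omitted.

```python
import numpy as np
import pandas as pd
from scipy.cluster.hierarchy import fcluster, linkage
from sklearn.feature_selection import VarianceThreshold

rng = np.random.default_rng(0)
X = pd.DataFrame(rng.random((1000, 6)),
                 columns=["apparent_temp", "ghi", "dni", "dhi", "terrestrial_irr", "wind"])

# Stage 1: drop near-constant columns
support = VarianceThreshold(threshold=1e-4).fit(X).get_support()
X_var = X.loc[:, support]

# Stage 2: cluster features by correlation distance with Ward linkage and keep
# one representative feature per cluster
corr = X_var.corr().abs()
dist = 1.0 - corr.values
Z = linkage(dist[np.triu_indices_from(dist, k=1)], method="ward")
labels = fcluster(Z, t=0.5, criterion="distance")
keep = [X_var.columns[np.where(labels == c)[0][0]] for c in np.unique(labels)]
print("selected features:", keep)
```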

25 pages, 434 KiB  
Article
The Impact of Digitalization on Carbon Emission Efficiency: An Intrinsic Gaussian Process Regression Approach
by Yongtong Hu, Jiaqi Xu and Tao Liu
Sustainability 2025, 17(14), 6551; https://doi.org/10.3390/su17146551 - 17 Jul 2025
Viewed by 243
Abstract
This study introduces, for the first time, an intrinsic Gaussian Process Regression (iGPR) model that incorporates non-Euclidean spatial covariates via a Gaussian process prior in order to analyze the relationship between digitalization and carbon emission efficiency. The iGPR model’s hierarchical design embeds a Gaussian process as a flexible spatial random effect with a heat-kernel-based covariance function to capture the manifold geometry of spatial features. To enable tractable inference, we employ a penalized maximum-likelihood estimation (PMLE) approach to jointly estimate regression coefficients and covariance hyperparameters. Using a panel dataset linking a national digitalization (modernization) index to carbon emission efficiency, the empirical analysis demonstrates that digitalization has a significantly positive impact on carbon emission efficiency while accounting for spatial heterogeneity. The iGPR model also exhibits superior predictive accuracy compared to state-of-the-art machine learning methods (including XGBoost, random forest, support vector regression, ElasticNet, and a standard Gaussian process regression), achieving the lowest mean squared error (MSE = 0.0047) and an average prediction error near zero. Robustness checks include instrumental-variable GMM estimation to address potential endogeneity across the efficiency distribution and confirm the stability of the estimated positive effect of digitalization.
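As a toy analogue of penalized maximum-likelihood estimation for Gaussian process hyperparameters, the sketch below augments the GP marginal likelihood with a quadratic penalty on the log-hyperparameters; it uses a plain RBF kernel on simulated data rather than the paper's heat-kernel covariance on a manifold.

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve
from scipy.optimize import minimize

rng = np.random.default_rng(1)
X = rng.uniform(0, 5, (60, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(60)

def rbf(X1, X2, length, var):
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return var * np.exp(-0.5 * d2 / length**2)

def penalized_nll(theta, ridge=0.1):
    length, var, noise = np.exp(theta)                      # positive hyperparameters
    K = rbf(X, X, length, var) + (noise + 1e-8) * np.eye(len(X))
    c, low = cho_factor(K)
    alpha = cho_solve((c, low), y)
    nll = 0.5 * y @ alpha + np.log(np.diag(c)).sum()        # negative log marginal likelihood
    return nll + ridge * np.sum(theta**2)                   # penalty on log-hyperparameters

res = minimize(penalized_nll, x0=np.zeros(3), method="L-BFGS-B")
print("estimated (lengthscale, signal var, noise var):", np.round(np.exp(res.x), 3))
```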

20 pages, 774 KiB  
Article
Robust Variable Selection via Bayesian LASSO-Composite Quantile Regression with Empirical Likelihood: A Hybrid Sampling Approach
by Ruisi Nan, Jingwei Wang, Hanfang Li and Youxi Luo
Mathematics 2025, 13(14), 2287; https://doi.org/10.3390/math13142287 - 16 Jul 2025
Viewed by 263
Abstract
Since the advent of composite quantile regression (CQR), its inherent robustness has established it as a pivotal methodology for high-dimensional data analysis. High-dimensional outlier contamination refers to data scenarios where the number of observed dimensions (p) is much greater than the sample size (n) and there are extreme outliers in the response variables or covariates (e.g., p/n > 0.1). Traditional penalized regression techniques, however, exhibit notable vulnerability to data outliers during high-dimensional variable selection, often leading to biased parameter estimates and compromised resilience. To address this critical limitation, we propose a novel empirical likelihood (EL)-based variable selection framework that integrates a Bayesian LASSO penalty within the composite quantile regression framework. By constructing a hybrid sampling mechanism that incorporates the Expectation–Maximization (EM) algorithm and Metropolis–Hastings (M-H) algorithm within the Gibbs sampling scheme, this approach effectively tackles variable selection in high-dimensional settings with outlier contamination. This innovative design enables simultaneous optimization of regression coefficients and penalty parameters, circumventing the need for ad hoc selection of optimal penalty parameters—a long-standing challenge in conventional LASSO estimation. Moreover, the proposed method imposes no restrictive assumptions on the distribution of random errors in the model. Through Monte Carlo simulations under outlier interference and empirical analysis of two U.S. house price datasets, we demonstrate that the new approach significantly enhances variable selection accuracy, reduces estimation bias for key regression coefficients, and exhibits robust resistance to data outlier contamination. Full article
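The Bayesian hybrid sampler itself is beyond a short snippet, but a minimal frequentist stand-in — composite quantile loss over several quantile levels with an L1 penalty — can be posed as a convex program; the sketch below assumes CVXPY and uses made-up dimensions and a single penalty level lam.

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
n, p = 100, 20
X = rng.standard_normal((n, p))
beta_true = np.zeros(p)
beta_true[:3] = [2.0, -1.5, 1.0]
y = X @ beta_true + rng.standard_t(df=3, size=n)        # heavy-tailed errors

taus = [0.25, 0.5, 0.75]                                # quantile levels of the composite loss
beta = cp.Variable(p)
intercepts = cp.Variable(len(taus))                     # one intercept per quantile level
lam = 0.5

loss = 0
for k, tau in enumerate(taus):
    r = y - X @ beta - intercepts[k]
    loss += cp.sum(cp.maximum(tau * r, (tau - 1) * r))  # check (pinball) loss at level tau

prob = cp.Problem(cp.Minimize(loss / len(taus) + lam * cp.norm1(beta)))
prob.solve()
print("indices of (approximately) nonzero coefficients:",
      np.flatnonzero(np.abs(beta.value) > 1e-3))
```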

35 pages, 3495 KiB  
Article
Demographic Capital and the Conditional Validity of SERVPERF: Rethinking Tourist Satisfaction Models in an Emerging Market Destination
by Reyner Pérez-Campdesuñer, Alexander Sánchez-Rodríguez, Gelmar García-Vidal, Rodobaldo Martínez-Vivar, Marcos Eduardo Valdés-Alarcón and Margarita De Miguel-Guzmán
Adm. Sci. 2025, 15(7), 272; https://doi.org/10.3390/admsci15070272 - 11 Jul 2025
Viewed by 433
Abstract
Tourist satisfaction models typically assume that service performance dimensions carry the same weight for all travelers. Drawing on Bourdieu, we reconceptualize age, gender, and region of origin as demographic capital, durable resources that mediate how visitors decode service cues. Using a SERVPERF-based survey of 407 international travelers departing Quito (Ecuador), we test measurement invariance across six sociodemographic strata with multi-group confirmatory factor analysis. The four-factor SERVPERF core (Access, Lodging, Extra-hotel Services, Attractions) holds, yet partial metric invariance emerges: specific loadings flex with demographic capital. Gen-Z travelers penalize transport reliability and safety; female visitors reward cleanliness and empathy; and Latin American guests are the most critical of basic organization. These patterns expose a boundary condition for universalistic satisfaction models and elevate demographic capital from a descriptive tag to a structuring construct. Managerially, we translate the findings into segment-sensitive levers, visible security for youth and regional markets, gender-responsive facility upgrades, and dual eco-luxury versus digital-detox bundles for long-haul segments. By demonstrating when and how SERVPERF fractures across sociodemographic lines, this study intervenes in three theoretical conversations: (1) capital-based readings of consumption, (2) the search for boundary conditions in service-quality measurement, and (3) the shift from segmentation to capital-sensitive interpretation in emerging markets. The results position Ecuador as a critical case and provide a template for destinations facing similar performance–perception mismatches in the Global South. Full article
(This article belongs to the Special Issue Tourism and Hospitality Marketing: Trends and Best Practices)

25 pages, 646 KiB  
Article
Exponential Squared Loss-Based Robust Variable Selection with Prior Information in Linear Regression Models
by Hejun Wei, Tian Jin and Yunquan Song
Axioms 2025, 14(7), 516; https://doi.org/10.3390/axioms14070516 - 4 Jul 2025
Viewed by 180
Abstract
This paper proposes a robust variable selection method that incorporates prior information through linear constraints. For more than a decade, penalized likelihood frameworks have been the predominant approach for variable selection, where appropriate loss and penalty functions are selected to formulate unconstrained optimization problems. However, in many specific applications, some prior information can be obtained. In this paper, we reformulate variable selection by incorporating prior knowledge as linear constraints. In addition, the loss function adopted in this paper is a robust exponential squared loss function, which ensures that the estimates of the model coefficients are not greatly affected when a few outliers are present in the dataset. The designed solution algorithm is used to compute estimates of the coefficients and the remaining parameters, and the method is assessed through numerical simulations and a real-data experiment. Experimental results demonstrate that our model significantly improves estimation robustness compared to existing methods, even in outlier-contaminated scenarios.
(This article belongs to the Special Issue Computational Statistics and Its Applications, 2nd Edition)
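A rough sketch of the core idea — a robust exponential squared loss rho(r) = 1 - exp(-r^2/gamma) minimized subject to a linear equality constraint encoding prior information — is given below; gamma, the L1 weight lam, the constraint, and the simulated outliers are all illustrative assumptions, not the paper's settings.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
n, p = 200, 5
X = rng.standard_normal((n, p))
beta_true = np.array([1.0, -2.0, 0.0, 0.0, 3.0])
y = X @ beta_true + rng.standard_normal(n)
y[:5] += 20.0                                            # a few gross outliers

gamma, lam = 2.0, 0.05

def objective(beta):
    r = y - X @ beta
    robust_loss = np.sum(1.0 - np.exp(-r**2 / gamma))    # exponential squared loss
    return robust_loss + lam * np.sum(np.abs(beta))      # plus an L1 penalty

# Prior information expressed as a linear equality constraint, e.g. beta_3 + beta_4 = 0
A = np.array([[0.0, 0.0, 1.0, 1.0, 0.0]])
constraints = [{"type": "eq", "fun": lambda b: A @ b}]

res = minimize(objective, x0=np.zeros(p), method="SLSQP", constraints=constraints)
print(np.round(res.x, 2))
```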

15 pages, 1978 KiB  
Article
Exposure to Metal Mixtures and Metabolic Syndrome in Residents Living near an Abandoned Lead–Zinc Mine: A Cross-Sectional Study
by Min Zhao, Qi Xu, Lingqiao Qin, Tufeng He, Yifan Zhang, Runlin Chen, Lijun Tao, Ting Chen and Qiuan Zhong
Toxics 2025, 13(7), 565; https://doi.org/10.3390/toxics13070565 - 3 Jul 2025
Viewed by 517
Abstract
Information regarding the impact of polymetallic exposure on metabolic syndrome (MetS) among residents living near abandoned Pb-Zn mines is limited. Our objective was to investigate the impact of co-exposure to metal mixtures on the prevalence of MetS among residents. ICP-MS was used to measure the levels of 24 metals in the urine of 1744 participants, including 723 participants living near abandoned Pb-Zn mines, labeled as the exposed area, and 1021 participants from other towns in the same city, labeled as the reference area. Multivariable generalized linear regression, adaptive LASSO penalized regression, and BKMR were used to assess the associations between metals and MetS. The levels of eleven metals were higher, while those of nine metals were lower, in the exposed area than in the reference area. Mg, Cd, Ti, Tl, Zn, Rb, and Pb were selected as important MetS predictors using LASSO regression. In the exposed area, urinary Zn and Tl were positively associated with MetS, whereas Mg was negatively associated with MetS. In the reference area, urinary Zn was positively associated with MetS, whereas Mg and Ti were negatively associated with MetS. The BKMR model indicates a statistically significant positive overall effect of the seven metal mixtures on MetS in the exposed area. Polymetallic exposure was positively associated with MetS risk in the abandoned Pb-Zn mining areas, suggesting that excessive Zn and Tl may be associated with a higher MetS risk among residents living near abandoned Pb-Zn mines.
(This article belongs to the Special Issue Health Effects of Exposure to Environmental Pollutants—2nd Edition)
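The adaptive LASSO step can be sketched as a two-stage fit (initial ridge estimates supply the adaptive weights, then a weighted LASSO is solved via feature rescaling); the code below uses simulated "metal" columns and a continuous outcome purely for illustration, not the study's data or exact model.

```python
import numpy as np
from sklearn.linear_model import LassoCV, Ridge

rng = np.random.default_rng(3)
n, n_metals = 500, 7
X = rng.lognormal(size=(n, n_metals))                  # simulated urinary metal levels
beta = np.array([0.8, 0.0, -0.5, 0.6, 0.0, 0.0, 0.4])
y = X @ beta + rng.standard_normal(n)                  # continuous outcome proxy

# Step 1: initial ridge fit supplies adaptive weights w_j = 1 / |beta_hat_j|
init = Ridge(alpha=1.0).fit(X, y)
w = 1.0 / (np.abs(init.coef_) + 1e-6)

# Step 2: an ordinary LASSO on rescaled features X_j / w_j is equivalent to the
# adaptive LASSO; transform the coefficients back afterwards
alasso = LassoCV(cv=5).fit(X / w, y)
coef = alasso.coef_ / w
print("selected metal indices:", np.flatnonzero(np.abs(coef) > 1e-8))
```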

16 pages, 492 KiB  
Article
Error-Function-Based Penalized Quantile Regression in the Linear Mixed Model
by Zelin Hang and Xiuli He
Appl. Sci. 2025, 15(13), 7461; https://doi.org/10.3390/app15137461 - 3 Jul 2025
Viewed by 197
Abstract
We study a novel Doubly penalized ERror Function regularized Quantile Regression (DERF-QR) in this paper. This is a variable selection method based on ERror Function (ERF) regularization in the linear mixed model. We introduce a two-stage iterative algorithm combining the iterative reweighted L1 approach and the alternating direction method of multipliers (ADMM) to estimate the model parameters. Numerical simulations show that this method effectively removes redundant variables and yields accurate coefficient estimates. In comparisons under various error conditions, our method outperforms two existing penalized quantile regression methods. Finally, we apply the methodology to a financial dataset and showcase its practicality.
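One common way to approximate a nonconvex penalty such as the ERF penalty is iteratively reweighted L1; the sketch below reuses scikit-learn's L1-penalized QuantileRegressor with feature rescaling to mimic per-coefficient weights. The weight formula, sigma, alpha, and the three-iteration schedule are assumptions for illustration, not the paper's two-stage ADMM algorithm.

```python
import numpy as np
from sklearn.linear_model import QuantileRegressor

rng = np.random.default_rng(4)
n, p = 200, 10
X = rng.standard_normal((n, p))
beta_true = np.r_[2.0, -1.0, np.zeros(p - 2)]
y = X @ beta_true + rng.standard_t(df=3, size=n)

sigma, alpha, tau = 1.0, 0.05, 0.5
w = np.ones(p)                                   # current per-coefficient L1 weights
beta = np.zeros(p)
for _ in range(3):                               # reweighting iterations
    model = QuantileRegressor(quantile=tau, alpha=alpha, solver="highs")
    model.fit(X / w, y)                          # weighted L1 via feature rescaling
    beta = model.coef_ / w
    w = np.exp(-beta**2 / sigma**2) + 1e-8       # ERF-penalty derivative as new weight
print(np.round(beta, 2))
```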

34 pages, 4523 KiB  
Article
Evaluating Prediction Performance: A Simulation Study Comparing Penalized and Classical Variable Selection Methods in Low-Dimensional Data
by Edwin Kipruto and Willi Sauerbrei
Appl. Sci. 2025, 15(13), 7443; https://doi.org/10.3390/app15137443 - 2 Jul 2025
Viewed by 375
Abstract
Variable selection is important for developing accurate and interpretable prediction models. While classical and penalized methods are widely used, few simulation studies provide meaningful comparisons. This study compares their predictive performance and model complexity in low-dimensional data. Three classical methods (best subset selection, backward elimination, and forward selection) and four penalized methods (nonnegative garrote (NNG), lasso, adaptive lasso (ALASSO), and relaxed lasso (RLASSO)) were compared. Tuning parameters were selected using cross-validation (CV), Akaike information criterion (AIC), and Bayesian information criterion (BIC). Classical methods performed similarly and produced worse predictions than penalized methods in limited-information scenarios (small samples, high correlation, and low signal-to-noise ratio (SNR)), but performed comparably or better in sufficient-information scenarios (large samples, low correlation, and high SNR). Lasso was superior under limited information but was less effective in sufficient-information scenarios. NNG, ALASSO, and RLASSO outperformed lasso in sufficient-information scenarios, with no clear winner among them. AIC and CV produced similar results and outperformed BIC, except in sufficient-information settings, where BIC performed better. Our findings suggest that no single method consistently outperforms others, as performance depends on the amount of information in the data. Lasso is preferred in limited-information settings, whereas classical methods are more suitable in sufficient-information settings, as they also tend to select simpler models. Full article
(This article belongs to the Special Issue Machine Learning in Biomedical Sciences)
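A tiny, self-contained version of such a comparison — lasso versus forward selection on one simulated low-dimensional dataset, scored by test-set MSE — is sketched below; the sample size, number of true signals, and noise level are arbitrary and do not reproduce the paper's simulation design.

```python
import numpy as np
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.linear_model import LassoCV, LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)
n, p = 300, 15
X = rng.standard_normal((n, p))
beta = np.r_[np.full(5, 1.0), np.zeros(p - 5)]         # 5 true signals, 10 noise variables
y = X @ beta + rng.standard_normal(n)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

lasso = LassoCV(cv=5).fit(X_tr, y_tr)                  # penalized method, tuned by CV

fwd = SequentialFeatureSelector(LinearRegression(), direction="forward",
                                n_features_to_select=5).fit(X_tr, y_tr)
ols_fwd = LinearRegression().fit(X_tr[:, fwd.get_support()], y_tr)

print("lasso test MSE:", mean_squared_error(y_te, lasso.predict(X_te)))
print("forward-selection test MSE:",
      mean_squared_error(y_te, ols_fwd.predict(X_te[:, fwd.get_support()])))
```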

19 pages, 863 KiB  
Article
Beyond Intentionality: A Latent Class Analysis of Barriers to Prenatal Care in an Explanatory Mixed Methods Study
by John Kwame Duah
Healthcare 2025, 13(13), 1546; https://doi.org/10.3390/healthcare13131546 - 28 Jun 2025
Viewed by 360
Abstract
Objective: Utilizing the Health Care Access Barriers (HCAB) Theoretical Framework, this study examined latent profiles of barriers to prenatal care among pregnant women in Alabama and whether these profiles mediate or moderate the relationship between pregnancy intentionality and early prenatal care initiation. Methods: An explanatory mixed-method design was employed, integrating quantitative analysis of Alabama Pregnancy Risk Assessment Monitoring System (PRAMS) Phase 8 data (2016–2021) with qualitative insights from expert interviews. Latent class analysis (LCA) identified subgroups based on reported barriers. Multivariable logistic regression assessed the association between pregnancy intentionality and early prenatal care initiation, controlling for covariates. A Firth-penalized multivariable logistic regression model tested interaction effects. Results: Planned pregnancy was associated with higher odds of early prenatal care initiation (OR = 0.78, 95% CI [0.49, 1.23], p = 0.286), though this association was not statistically significant. Barrier profiles did not significantly moderate or mediate the relationship. The interaction term was nonsignificant (OR = 5.19, 95% CI [0.22, 828.94], p = 0.309), and the mediation pathway was also not supported (indirect effect = 0.012, p = 0.518). Expert interviews emphasized ongoing systemic and cognitive barriers that hinder timely access. Conclusions: Although pregnancy intentionality was not a statistically significant predictor of early prenatal care initiation, qualitative findings highlighted persistent barriers that continue to constrain access. These results underscore the need for multilevel strategies to address informational and logistical challenges. Future research should evaluate additional pathways that influence care-seeking behaviors. Full article
(This article belongs to the Section Women's Health Care)
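Firth's correction amounts to maximizing the log-likelihood plus half the log-determinant of the Fisher information (a Jeffreys-prior penalty). The sketch below fits such a model by direct numerical optimization on simulated stand-in covariates; it is not the PRAMS analysis, and the variable meanings are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(6)
n = 300
X = np.column_stack([np.ones(n),                     # intercept
                     rng.integers(0, 2, n),          # hypothetical binary exposure
                     rng.standard_normal(n)])        # hypothetical continuous covariate
beta_true = np.array([-0.5, 0.4, 0.3])
y = rng.binomial(1, 1 / (1 + np.exp(-X @ beta_true)))

def neg_penalized_loglik(beta):
    prob = 1 / (1 + np.exp(-(X @ beta)))
    prob = np.clip(prob, 1e-9, 1 - 1e-9)
    loglik = np.sum(y * np.log(prob) + (1 - y) * np.log(1 - prob))
    W = prob * (1 - prob)
    fisher = X.T @ (X * W[:, None])
    penalty = 0.5 * np.linalg.slogdet(fisher)[1]     # Jeffreys-prior (Firth) term
    return -(loglik + penalty)

res = minimize(neg_penalized_loglik, x0=np.zeros(X.shape[1]), method="BFGS")
print("Firth-penalized odds ratios:", np.round(np.exp(res.x), 3))
```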

11 pages, 693 KiB  
Article
Perioperative Predictors of Early Spinal Cord Stimulator Removal: A Retrospective Cohort Study
by Peyton J. Murin, Patrick J. Murin, Sejal V. Jain and Yuri Chaves Martins
Neurol. Int. 2025, 17(7), 100; https://doi.org/10.3390/neurolint17070100 - 27 Jun 2025
Viewed by 532
Abstract
Background: Spinal cord stimulators can offer an effective treatment in chronic pain refractory to conventional medical management. However, with a failure rate of up to 44% and an annual explantation rate of 6–9%, there is a need to better identify patients at high risk for therapeutic failure. The objective of this retrospective cohort study was to determine predictors of early SCS explantation following device placement. Methods: The Medical Informatics Operating room Vitals and Events Repository database was queried for patients with a spinal cord stimulator and at least two years of follow-up (n = 56). A multivariate logistic regression was fitted. Recursive factor elimination with cross-validation and L1 penalization were used to reduce the number of predictors and minimize the risk of overfitting. The model was used to predict risk factors for explantation, odds ratio (OR), 95% confidence interval (CI), and false discovery rate-adjusted p-value. Results: The final model displayed adequate performance with an average precision of 0.769. Sleep disorders were identified as a statistically significant predictor of SCS explantation (OR: 3.88, CI: 1.36–11.04, FDR p-value: 0.0497). Conclusions: While further prospective studies are needed, our study indicates that sleep disorders are a risk factor for spinal cord stimulator explantation and should be considered during pre-operative evaluation. Full article
(This article belongs to the Section Pain Research)
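The modelling recipe described above — an L1-penalized logistic regression wrapped in recursive feature elimination with cross-validation — can be sketched with scikit-learn as follows; the cohort size, predictors, and outcome below are synthetic placeholders rather than the registry data.

```python
import numpy as np
from sklearn.feature_selection import RFECV
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
n, p = 56, 8                                     # small cohort, several candidate predictors
X = rng.standard_normal((n, p))
logits = 1.2 * X[:, 0] - 0.8 * X[:, 3]
y = rng.binomial(1, 1 / (1 + np.exp(-logits)))   # synthetic explantation indicator

clf = LogisticRegression(penalty="l1", solver="liblinear", C=1.0)
selector = RFECV(clf, step=1, cv=5, scoring="average_precision").fit(X, y)

kept = np.flatnonzero(selector.support_)
odds_ratios = np.exp(selector.estimator_.coef_.ravel())
print("retained predictor indices:", kept)
print("odds ratios for retained predictors:", np.round(odds_ratios, 2))
```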

18 pages, 2168 KiB  
Article
A New Approach to Topology Optimization with Genetic Algorithm and Parameterization Level Set Function
by Igor Pehnec, Damir Sedlar, Ivo Marinic-Kragic and Damir Vučina
Computation 2025, 13(7), 153; https://doi.org/10.3390/computation13070153 - 26 Jun 2025
Viewed by 414
Abstract
In this paper, a new approach to topology optimization using the parameterized level set function and genetic algorithm optimization methods is presented. The impact of a number of parameters describing the level set function in the representation of the model was examined. Using the B-spline interpolation function, the number of variables describing the level set function was decreased, enabling the application of evolutionary methods (genetic algorithms) in the topology optimization process. The traditional level set method is performed by using the Hamilton–Jacobi transport equation, which implies the use of gradient optimization methods that are prone to becoming stuck in local minima. Furthermore, the resulting optimal shapes are strongly dependent on the initial solution. The proposed topology optimization procedure, written in MATLAB R2013b, utilizes a genetic algorithm for global optimization, enabling it to locate the global optimum efficiently. To assess the acceleration and convergence capabilities of the proposed topology optimization method, a new genetic algorithm penalty operator was tested. This operator addresses the slow convergence issue typically encountered when the genetic algorithm optimization procedure nears a solution. By penalizing similar individuals within a population, the method aims to enhance convergence speed and overall performance. In complex examples (3D), the method can also function as a generator of good initial solutions for faster topology optimization methods (e.g., level set) that rely on such initial solutions. Both the proposed method and the traditional methods have their own advantages and limitations. The main advantage is that the proposed method is a global search method. This makes it robust against entrapment in local minima and independent of the initial solution. It is important to note that this evolutionary approach does not necessarily perform better in terms of convergence speed compared to gradient-based or other local optimization methods. However, once the global optimum has been found using the genetic algorithm, convergence can be accelerated using a faster local method such as gradient-based optimization. The application and usefulness of the method were tested on typical 2D cantilever beams and Michell beams. Full article
(This article belongs to the Special Issue Advanced Topology Optimization: Methods and Applications)
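The penalty operator for similar individuals can be pictured as a fitness-sharing-style step that down-weights individuals crowding the same region of the search space; the sketch below is a generic toy version with arbitrary radius and strength parameters, not the authors' MATLAB implementation.

```python
import numpy as np

def penalize_similar(population, fitness, radius=0.1, strength=0.5):
    """Reduce the fitness of individuals that crowd the same region of design space."""
    pop = np.asarray(population, dtype=float)
    fit = np.asarray(fitness, dtype=float).copy()
    for i in range(len(pop)):
        dists = np.linalg.norm(pop - pop[i], axis=1)
        n_close = np.sum(dists < radius) - 1     # neighbours within radius, excluding self
        fit[i] *= (1.0 - strength) ** n_close    # each close neighbour shrinks the fitness
    return fit

rng = np.random.default_rng(8)
population = rng.random((20, 5))                 # 20 individuals, 5 design variables each
fitness = rng.random(20)                         # raw (to-be-maximized) fitness values
print(penalize_similar(population, fitness))
```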

23 pages, 6238 KiB  
Article
The Semi-Penalized Updated Properties Model and Its Algorithm to Impose the Volume Fraction
by Amin Alibakhshi and Luis Saucedo-Mora
Materials 2025, 18(13), 2972; https://doi.org/10.3390/ma18132972 - 23 Jun 2025
Viewed by 369
Abstract
Intricate structures with minimal weight and maximum stiffness are demanded in many practical engineering applications. Topology optimization is a method for designing these structures, and the rise of additive manufacturing technologies has opened the door to their production. In a recently published paper, a novel topology optimization algorithm, named the Updated Properties Model (UPM), was developed with the homogenization of the strain level as the objective function and an updated Young’s modulus as the design variable. The UPM method optimizes mechanical structures without applying any constraints. However, including constraints such as volume, mass, and/or stress in topology optimization is prevalent. This paper uses the density-dependent Young’s modulus concept to incorporate the volume fraction into the UPM method. We address the critical problem of constraint-aware design without the complexity of constraint-handling formulations. We show the proposed methodology’s success and functionality by plotting the algorithm’s results for two- and three-dimensional benchmark structures. Key results show that adjusting algorithmic parameters can yield both binary (single-material) and graded-material solutions, offering flexibility for different applications. These findings suggest that the UPM can effectively replicate constraint-driven outcomes without explicitly enforcing constraints. The main novelty of this work lies in extending the constraint-free UPM framework to allow for controlled material distribution using a physically meaningful update rule. This extends the applicability of the UPM beyond previous efforts in the literature. We have also created a Julia package for the proposed method.
(This article belongs to the Section Materials Simulation and Design)
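Purely as a loose illustration of coupling a property-update rule to a target volume fraction through a density-dependent Young's modulus (and not the published UPM algorithm or its Julia package), a toy update step might look like the following; the exponents, rescaling rule, and mock strain energies are assumptions.

```python
import numpy as np

def update_properties(E, strain_energy, E0=1.0, volfrac=0.4, eta=0.5, q=1.0):
    # Elements carrying above-average strain energy are stiffened, the rest softened
    signal = strain_energy / strain_energy.mean()
    E_new = np.clip(E * signal**eta, 1e-6 * E0, E0)

    # Density-dependent Young's modulus: rho = (E / E0)**(1/q); rescale the mean
    # density toward the target volume fraction, then map back to stiffness
    rho = (E_new / E0) ** (1.0 / q)
    rho = np.clip(rho * volfrac / rho.mean(), 1e-6, 1.0)
    return rho**q * E0, rho

rng = np.random.default_rng(9)
E = np.full(64, 1.0)                             # 64 elements, initially solid
strain_energy = rng.random(64) + 0.1             # mock per-element strain energies
E, rho = update_properties(E, strain_energy)
print("mean density after update:", round(float(rho.mean()), 3))
```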