Search Results (73)

Search Parameters:
Keywords = penalty norm

28 pages, 1184 KB  
Article
Adaptive Gradient Penalty for Wasserstein GANs: Theory and Applications
by Joseph Tafataona Mtetwa, Kingsley A. Ogudo and Sameerchand Pudaruth
Mathematics 2025, 13(16), 2651; https://doi.org/10.3390/math13162651 - 18 Aug 2025
Viewed by 215
Abstract
Wasserstein Generative Adversarial Networks (WGANs) have gained significant attention due to their theoretical foundations and effectiveness in generative modeling. However, training stability remains a major challenge, typically addressed through fixed gradient penalty (GP) techniques. In this paper, we propose an Adaptive Gradient Penalty (AGP) framework that employs a Proportional–Integral (PI) controller to adjust the gradient penalty coefficient λ_t based on real-time training feedback. We provide a comprehensive theoretical analysis, including convergence guarantees, stability conditions, and optimal parameter selection. Experimental validation on the MNIST and CIFAR-10 datasets demonstrates that AGP achieves an 11.4% improvement in FID scores on CIFAR-10 while maintaining comparable performance on MNIST. The adaptive mechanism automatically evolves the penalty coefficient from 10.0 to 21.29 for CIFAR-10, appropriately responding to dataset complexity, and achieves superior gradient norm control, with only 7.9% deviation from the target value compared to 18.3% for standard WGAN-GP. This work represents the first comprehensive investigation of adaptive gradient penalty mechanisms for WGANs, providing both theoretical foundations and empirical evidence for their advantages in achieving robust and efficient adversarial training. Full article
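The PI-controlled penalty schedule described in this abstract can be sketched as follows. This is a minimal illustration only: it assumes a proportional–integral update on the gap between the observed critic gradient norm and the WGAN-GP target of 1.0; the gains, bounds, and variable names are placeholders, not the paper's values.

```python
class PIPenaltyController:
    """PI controller that adapts the gradient-penalty coefficient lambda.

    Illustrative sketch: drives the observed critic gradient norm toward
    a target (1.0 for WGAN-GP) by raising lambda when the norm
    overshoots.  Gains and clipping bounds are placeholders.
    """

    def __init__(self, lam0=10.0, kp=0.5, ki=0.05, target=1.0,
                 lam_min=1.0, lam_max=100.0):
        self.lam = lam0
        self.kp, self.ki = kp, ki
        self.target = target
        self.lam_min, self.lam_max = lam_min, lam_max
        self.integral = 0.0

    def update(self, observed_grad_norm):
        # Positive error => gradients too large => increase the penalty.
        error = observed_grad_norm - self.target
        self.integral += error
        self.lam += self.kp * error + self.ki * self.integral
        self.lam = min(max(self.lam, self.lam_min), self.lam_max)
        return self.lam

ctrl = PIPenaltyController()
for norm in [1.8, 1.5, 1.3, 1.1, 1.0]:   # norms drifting toward target
    lam = ctrl.update(norm)
```

In a real training loop, `observed_grad_norm` would be the measured average norm of the critic's gradients on interpolated samples at each iteration.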

58 pages, 949 KB  
Review
Excess Pollution from Vehicles—A Review and Outlook on Emission Controls, Testing, Malfunctions, Tampering, and Cheating
by Robin Smit, Alberto Ayala, Gerrit Kadijk and Pascal Buekenhoudt
Sustainability 2025, 17(12), 5362; https://doi.org/10.3390/su17125362 - 10 Jun 2025
Viewed by 2093
Abstract
Although the transition to electric vehicles (EVs) is well underway and expected to continue in global car markets, most vehicles on the world’s roads will be powered by internal combustion engine vehicles (ICEVs) and fossil fuels for the foreseeable future, possibly well past 2050. Thus, good environmental performance and effective emission control of ICE vehicles will continue to be of paramount importance if the world is to achieve the stated air and climate pollution reduction goals. In this study, we review 228 publications and identify four main issues confronting these objectives: (1) cheating by vehicle manufacturers, (2) tampering by vehicle owners, (3) malfunctioning emission control systems, and (4) inadequate in-service emission programs. With progressively more stringent vehicle emission and fuel quality standards being implemented in all major markets, engine designs and emission control systems have become increasingly complex and sophisticated, creating opportunities for cheating and tampering. This is not a new phenomenon, with the first cases reported in the 1970s and continuing to happen today. Cheating appears not to be restricted to specific manufacturers or vehicle types. Suspicious real-world emissions behavior suggests that the use of defeat devices may be widespread. Defeat devices are primarily a concern with diesel vehicles, where emission control deactivation in real-world driving can lower manufacturing costs, improve fuel economy, reduce engine noise, improve vehicle performance, and extend refill intervals for diesel exhaust fluid, if present. Despite the financial penalties, undesired global attention, damage to brand reputation, a temporary drop in sales and stock value, and forced recalls, cheating may continue. 
Private vehicle owners resort to tampering to (1) improve performance and fuel efficiency; (2) avoid operating costs, including repairs; (3) increase the resale value of the vehicle (i.e., odometer tampering); or (4) simply to rebel against established norms. Tampering and cheating in the commercial freight sector also mean undercutting law-abiding operators, gaining unfair economic advantage, and posing excess harm to the environment and public health. At the individual vehicle level, the impacts of cheating, tampering, or malfunctioning emission control systems can be substantial. The removal or deactivation of emission control systems increases emissions—for instance, typically 70% (NOx and EGR), a factor of 3 or more (NOx and SCR), and a factor of 25–100 (PM and DPF). Our analysis shows significant uncertainty and (geographic) variability regarding the occurrence of cheating and tampering by vehicle owners. The available evidence suggests that fleet-wide impacts of cheating and tampering on emissions are undeniable, substantial, and cannot be ignored. The presence of a relatively small fraction of high-emitters, due to either cheating, tampering, or malfunctioning, causes excess pollution that must be tackled by environmental authorities around the world, in particular in emerging economies, where millions of used ICE vehicles from the US and EU end up. Modernized in-service emission programs designed to efficiently identify and fix large faults are needed to ensure that the benefits of modern vehicle technologies are not lost. Effective programs should address malfunctions, engine problems, incorrect repairs, a lack of servicing and maintenance, poorly retrofitted fuel and emission control systems, the use of improper or low-quality fuels and tampering. Periodic Test and Repair (PTR) is a common in-service program. 
We estimate that PTR generally reduces emissions by 11% (8–14%), 11% (7–15%), and 4% (−1–10%) for carbon monoxide (CO), hydrocarbons (HC), and oxides of nitrogen (NOx), respectively. This is based on the grand mean effect and the associated 95% confidence interval. PTR effectiveness could be significantly higher, but we find that it critically depends on various design factors, including (1) comprehensive fleet coverage, (2) a suitable test procedure, (3) compliance and enforcement, (4) proper technician training, (5) quality control and quality assurance, (6) periodic program evaluation, and (7) minimization of waivers and exemptions. Now that both particulate matter (PM, i.e., DPF) and NOx (i.e., SCR) emission controls are common in all modern new diesel vehicles, and commonly the focus of cheating and tampering, robust measurement approaches for assessing in-use emissions performance are urgently needed to modernize PTR programs. To increase (cost) effectiveness, a modern approach could include screening methods, such as remote sensing and plume chasing. We conclude this study with recommendations and suggestions for future improvements and research, listing a range of potential solutions for the issues identified in new and in-service vehicles. Full article

23 pages, 472 KB  
Article
Variable Selection for Multivariate Failure Time Data via Regularized Sparse-Input Neural Network
by Bin Luo and Susan Halabi
Bioengineering 2025, 12(6), 596; https://doi.org/10.3390/bioengineering12060596 - 31 May 2025
Viewed by 674
Abstract
This study addresses the problem of simultaneous variable selection and model estimation in multivariate failure time data, a common challenge in clinical trials with multiple correlated time-to-event endpoints. We propose a unified framework that identifies predictors shared across outcomes, applicable to both low- and high-dimensional settings. For linear marginal hazard models, we develop a penalized pseudo-partial likelihood approach with a group LASSO-type penalty applied to the ℓ2 norms of coefficients corresponding to the same covariates across marginal hazard functions. To capture potential nonlinear effects, we further extend the approach to a sparse-input neural network model with structured group penalties on input-layer weights. Both methods are optimized using a composite gradient descent algorithm combining standard gradient steps with proximal updates. Simulation studies demonstrate that the proposed methods yield superior variable selection and predictive performance compared to traditional and outcome-specific approaches, while remaining robust to violations of the common predictor assumption. In an application to advanced prostate cancer data, the framework identifies both established clinical factors and potentially novel prognostic single-nucleotide polymorphisms for overall and progression-free survival. This work provides a flexible and robust tool for analyzing complex multivariate survival data, with potential utility in prognostic modeling and personalized medicine. Full article
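The proximal update for a group LASSO-type penalty on per-covariate ℓ2 norms is block soft-thresholding, which can be sketched as below. This is a generic illustration of the shared-selection idea, not the paper's exact pseudo-partial-likelihood algorithm; the example values are made up.

```python
import math

def prox_group_lasso(beta, lam, step):
    """Block soft-thresholding: proximal operator of the group-LASSO
    penalty lam * sum_j ||beta[j]||_2, applied per covariate j.

    beta is a list of groups; beta[j] holds covariate j's coefficients
    across the K marginal hazard models, so a whole group is kept or
    zeroed together (shared variable selection across outcomes).
    """
    out = []
    for group in beta:
        norm = math.sqrt(sum(b * b for b in group))
        scale = max(0.0, 1.0 - step * lam / norm) if norm > 0 else 0.0
        out.append([scale * b for b in group])
    return out

# Covariate 0 is weakly associated with both outcomes, covariate 1 strongly:
# the weak group is zeroed entirely, the strong one is only shrunk.
beta = [[0.1, -0.1], [2.0, 1.5]]
shrunk = prox_group_lasso(beta, lam=1.0, step=0.2)
```

In the composite gradient descent the abstract describes, a step of this form would follow each plain gradient step on the smooth likelihood term.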
(This article belongs to the Section Biosignal Processing)

23 pages, 2319 KB  
Article
Codesign of Transmit Waveform and Receive Filter with Similarity Constraints for FDA-MIMO Radar
by Qiping Zhang, Jinfeng Hu, Xin Tai, Yongfeng Zuo, Huiyong Li, Kai Zhong and Chaohai Li
Remote Sens. 2025, 17(10), 1800; https://doi.org/10.3390/rs17101800 - 21 May 2025
Viewed by 477
Abstract
The codesign of the receive filter and transmit waveform under similarity constraints is one of the key technologies in frequency diverse array multiple-input multiple-output (FDA-MIMO) radar systems. This paper discusses the design of constant modulus waveforms and filters aimed at maximizing the signal-to-interference-and-noise ratio (SINR). The problem’s non-convexity renders it challenging to solve. Existing studies have typically employed relaxation-based methods, which inevitably introduce relaxation errors that degrade system performance. To address these issues, we propose an optimization framework based on the joint complex circle manifold–complex sphere manifold space (JCCM-CSMS). Firstly, the similarity constraint is converted into a penalty term in the objective function using an adaptive penalty strategy. Then, JCCM-CSMS is constructed to satisfy the waveform constant modulus constraint and the filter norm constraint; the problem is projected into this space and transformed into an unconstrained optimization problem. Finally, the Riemannian limited-memory Broyden–Fletcher–Goldfarb–Shanno (RL-BFGS) algorithm is employed to optimize the variables in parallel. Simulation results demonstrate that our method achieves a 0.6 dB improvement in SINR compared to existing methods while maintaining competitive computational efficiency. Waveform similarity is also analyzed. Full article
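The two factors of the JCCM-CSMS search space are simple feasible sets, and their projections can be sketched as follows. This illustrates only the constraint geometry (constant-modulus samples for the waveform, a norm sphere for the filter), not the RL-BFGS solver itself; `amplitude` and `radius` are illustrative placeholders.

```python
def project_constant_modulus(x, amplitude=1.0):
    """Map each waveform sample onto the complex circle |x_k| = amplitude
    (the complex-circle-manifold factor): the phase is kept and the
    modulus is fixed.  A zero sample is sent to amplitude * (1 + 0j)."""
    return [amplitude * (xi / abs(xi)) if xi != 0 else complex(amplitude)
            for xi in x]

def project_sphere(h, radius=1.0):
    """Scale the filter onto the sphere ||h|| = radius
    (the complex-sphere-manifold factor)."""
    norm = sum(abs(hi) ** 2 for hi in h) ** 0.5
    return [hi * (radius / norm) for hi in h] if norm > 0 else h

s = project_constant_modulus([3 + 4j, 0.5j])   # phases kept, moduli set to 1
h = project_sphere([1.0 + 0j, 1.0 + 0j])       # filter energy normalized to 1
```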
(This article belongs to the Special Issue Array Digital Signal Processing for Radar)

13 pages, 345 KB  
Article
Novel Iterative Reweighted ℓ1 Minimization for Sparse Recovery
by Qi An, Li Wang and Nana Zhang
Mathematics 2025, 13(8), 1219; https://doi.org/10.3390/math13081219 - 8 Apr 2025
Viewed by 479
Abstract
Data acquisition and high-dimensional signal processing often require the recovery of sparse representations of signals to minimize the resources needed for data collection. ℓp quasi-norm minimization excels in exactly reconstructing sparse signals from fewer measurements, but it is NP-hard and challenging to solve. In this paper, we propose two distinct Iteratively Re-weighted ℓ1 Minimization (IRℓ1) formulations for solving this non-convex sparse recovery problem by introducing two novel reweighting strategies. These strategies ensure that the ϵ-regularizations adjust dynamically based on the magnitudes of the solution components, leading to more effective approximations of the non-convex sparsity penalty. The resulting IRℓ1 formulations provide first-order approximations of tighter surrogates for the original ℓp quasi-norm objective. We prove that both algorithms converge to the true sparse solution under appropriate conditions on the sensing matrix. Our numerical experiments demonstrate that the proposed IRℓ1 algorithms outperform the conventional approach in enhancing recovery success rate and computational efficiency, especially in cases with small values of p. Full article
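The classical reweighting idea behind IRℓ1 schemes can be sketched on the simplest subproblem (denoising), where the weighted ℓ1 step has a closed form. The weight rule below is the standard linearization of Σ|x_i|^p, shown for orientation only; the paper's two novel dynamic-ϵ strategies are not reproduced here.

```python
def irl1_weights(x, p=0.5, eps=1e-3):
    """One reweighting step for the lp (0 < p < 1) quasi-norm.

    Linearizing sum |x_i|**p at the current iterate gives per-coordinate
    weights w_i = p / (|x_i| + eps)**(1 - p): large components get small
    weights (little shrinkage), small ones get large weights.
    """
    return [p / (abs(xi) + eps) ** (1.0 - p) for xi in x]

def irl1_step_denoise(y, lam, p=0.5, eps=1e-3):
    # Simplest weighted-l1 subproblem: min 0.5*(x - y)^2 + lam*w_i*|x_i|,
    # solved coordinate-wise by soft-thresholding with threshold lam*w_i.
    w = irl1_weights(y, p, eps)
    return [max(abs(yi) - lam * wi, 0.0) * (1 if yi >= 0 else -1)
            for yi, wi in zip(y, w)]

# Large entries survive almost unchanged; near-zero entries are eliminated.
y = [3.0, 0.05, -2.0, 0.01]
x = irl1_step_denoise(y, lam=0.1)
```

In the full compressed-sensing setting, the soft-thresholding step is replaced by a weighted ℓ1 solve under the measurement constraint, iterated until the support stabilizes.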

13 pages, 563 KB  
Article
Stability-Optimized Graph Convolutional Network: A Novel Propagation Rule with Constraints Derived from ODEs
by Liping Chen, Hongji Zhu and Shuguang Han
Mathematics 2025, 13(5), 761; https://doi.org/10.3390/math13050761 - 26 Feb 2025
Cited by 1 | Viewed by 550
Abstract
The node representation learning capability of Graph Convolutional Networks (GCNs) is fundamentally constrained by dynamic instability during feature propagation, yet existing research lacks systematic theoretical analysis of stability control mechanisms. This paper proposes a Stability-Optimized Graph Convolutional Network (SO-GCN) that enhances training stability and feature expressiveness in shallow architectures through continuous–discrete dual-domain stability constraints. By constructing continuous dynamical equations for GCNs and rigorously proving conditional stability under arbitrary parameter dimensions using nonlinear operator theory, we establish theoretical foundations. A Precision Weight Parameter Mechanism is introduced to determine critical Frobenius norm thresholds through feature contraction rates, optimized via differentiable penalty terms. Simultaneously, a Dynamic Step-size Adjustment Mechanism regulates propagation steps based on spectral properties of instantaneous Jacobian matrices and forward Euler discretization. Experimental results demonstrate SO-GCN’s superiority: 1.1–10.7% accuracy improvement on homophilic graphs (Cora/CiteSeer) and 11.22–12.09% enhancement on heterophilic graphs (Texas/Chameleon) compared to conventional GCN. Hilbert–Schmidt Independence Criterion (HSIC) analysis reveals SO-GCN’s superior inter-layer feature independence maintenance across 2–7 layers. This study establishes a novel theoretical paradigm for graph network stability analysis, with practical implications for optimizing shallow architectures in real-world applications. Full article

13 pages, 230 KB  
Article
The New Moral Absolutism in Catholic Moral Teaching: A Critique Based on Veritatis Splendor
by Károly Mike
Religions 2025, 16(2), 149; https://doi.org/10.3390/rel16020149 - 28 Jan 2025
Viewed by 1729
Abstract
This paper examines a recent shift in Catholic moral teaching, characterized by the emergence of a ‘new moral absolutism’, in which certain acts traditionally subject to prudential judgment—such as the death penalty, ecological harm, and restrictive migration policies—are increasingly portrayed as universally and gravely wrong in our age. Simultaneously, traditional moral absolutes, especially in sexual and life ethics, have experienced cautious relativization. Drawing on the framework of Veritatis Splendor (1993), the paper critiques the approach of this new moral absolutism, arguing that it undermines the proper role of individual conscience and situational discernment while failing to provide coherent guidance on complex moral dilemmas. It links its emergence to proportionalist ethics: when traditional moral absolutes are relativized, new types of wrongs take their place. The paper proposes a return to the principles of Veritatis Splendor, advocating for a nuanced approach that preserves the constant and limited set of absolute negative norms and encourages the formation and use of conscience for all other matters. Full article
16 pages, 2439 KB  
Article
Improved RPCA Method via Fractional Function-Based Structure and Its Application
by Yong-Ke Pan and Shuang Peng
Information 2025, 16(1), 69; https://doi.org/10.3390/info16010069 - 20 Jan 2025
Viewed by 908
Abstract
With the advancement of oil logging techniques, vast amounts of data have been generated. However, this data often contains significant redundancy and noise, and logging data must be denoised before it is used for oil logging recognition. Hence, this paper proposes an improved robust principal component analysis algorithm (IRPCA) for logging data denoising, which addresses the problems of various noises in oil logging data acquisition and the limitations of conventional data processing methods. The IRPCA algorithm enhances both the efficiency of the model and the accuracy of low-rank matrix recovery. This improvement is achieved primarily by introducing an approximate ℓ0 norm based on a fractional function structure and by adding a weighted nuclear norm and penalty terms to enhance the model’s capability in handling complex matrices. The efficacy of the proposed IRPCA algorithm has been verified through simulation experiments, demonstrating its superiority over the widely used RPCA algorithm. We then present a denoising method tailored to the characteristics of logging data and based on the IRPCA algorithm. This method first involves the segregation of the original logging data to acquire background and foreground information. The background information is subsequently further separated to isolate the factual background and noise, resulting in the denoised logging data. The results indicate that the IRPCA algorithm is practical and effective when applied to the denoising of actual logging data. Full article
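For orientation, the widely used baseline RPCA algorithm that IRPCA is compared against can be sketched via the inexact augmented Lagrangian method. This is the standard convex formulation (nuclear norm plus ℓ1), not the paper's fractional-function variant; the parameter defaults follow common practice and are assumptions here.

```python
import numpy as np

def rpca_ialm(M, lam=None, iters=200):
    """Baseline robust PCA via the inexact augmented Lagrangian method:
    split M into a low-rank part L (singular-value thresholding) and a
    sparse part S (entrywise soft-thresholding).
    Defaults: lam = 1/sqrt(max(m, n)), growing penalty parameter mu."""
    m, n = M.shape
    lam = lam or 1.0 / np.sqrt(max(m, n))
    mu = 0.25 * m * n / (np.abs(M).sum() + 1e-12)
    Y = np.zeros_like(M)   # dual variable
    S = np.zeros_like(M)
    for _ in range(iters):
        # L-step: shrink the singular values of M - S + Y/mu by 1/mu.
        U, sv, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
        L = (U * np.maximum(sv - 1.0 / mu, 0.0)) @ Vt
        # S-step: soft-threshold the residual entrywise by lam/mu.
        R = M - L + Y / mu
        S = np.sign(R) * np.maximum(np.abs(R) - lam / mu, 0.0)
        # Dual update and penalty growth.
        Y += mu * (M - L - S)
        mu = min(mu * 1.05, 1e7)
    return L, S
```

On an easy synthetic instance (a rank-1 matrix plus a few spikes) this separates the two parts almost exactly; IRPCA replaces the nuclear-norm and ℓ1 surrogates with sharper non-convex approximations.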
(This article belongs to the Section Artificial Intelligence)

10 pages, 479 KB  
Article
The Capped Separable Difference of Two Norms for Signal Recovery
by Zhiyong Zhou and Gui Wang
Mathematics 2024, 12(23), 3717; https://doi.org/10.3390/math12233717 - 27 Nov 2024
Viewed by 638
Abstract
This paper introduces a novel capped separable difference of two norms (CSDTN) method for sparse signal recovery, which generalizes the well-known minimax concave penalty (MCP) method. The CSDTN method incorporates two shape parameters and one scale parameter, with their appropriate selection being crucial for ensuring robustness and achieving superior reconstruction performance. We provide a detailed theoretical analysis of the method and propose an efficient iteratively reweighted ℓ1 (IRL1)-based algorithm for solving the corresponding optimization problem. Extensive numerical experiments, including electrocardiogram (ECG) and synthetic signal recovery tasks, demonstrate the effectiveness of the proposed CSDTN method. Our method outperforms existing methods in terms of recovery accuracy and robustness. These results highlight the potential of CSDTN in various signal-processing applications. Full article
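Since the abstract positions CSDTN as a generalization of MCP, the MCP special case itself is easy to state. The code uses the standard MCP definition; the parameter values are illustrative, and the CSDTN generalization itself is not reproduced here.

```python
def mcp_penalty(t, lam=1.0, gamma=3.0):
    """Minimax concave penalty (MCP): quadratically tapers the l1
    penalty lam*|t| and caps it at the constant gamma*lam**2/2 once
    |t| >= gamma*lam, so large coefficients incur no extra shrinkage
    (unlike the l1 norm, which penalizes them linearly forever)."""
    a = abs(t)
    if a <= gamma * lam:
        return lam * a - a * a / (2.0 * gamma)
    return 0.5 * gamma * lam * lam

# Penalty grows concavely, then saturates at gamma*lam^2/2 = 1.5.
vals = [mcp_penalty(t) for t in (0.0, 1.0, 3.0, 10.0)]
```

The capping behavior is the property CSDTN keeps while adding extra shape parameters to tune the transition region.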

25 pages, 18179 KB  
Article
ES-L2-VGG16 Model for Artificial Intelligent Identification of Ice Avalanche Hidden Danger
by Daojing Guo, Minggao Tang, Qiang Xu, Guangjian Wu, Guang Li, Wei Yang, Zhihang Long, Huanle Zhao and Yu Ren
Remote Sens. 2024, 16(21), 4041; https://doi.org/10.3390/rs16214041 - 30 Oct 2024
Viewed by 1349
Abstract
Ice avalanches (IAs) are highly concealed and sudden and can cause severe disasters, so the early identification of IA hidden danger is of great value for disaster prevention and mitigation. However, such identification is difficult, and site investigation or manual remote sensing is inefficient. An artificial intelligence method for identifying IA hidden dangers using a deep learning model is therefore proposed, with the glacier area of the Yarlung Tsangpo River Gorge in Nyingchi selected for identification and validation. First, through engineering geological investigations, three key identification indices for IA hidden dangers are established: glacier source, slope angle, and cracks. Sentinel-2A satellite data, Google Earth, and ArcGIS are used to extract these indices and construct a feature dataset for the study and validation areas. Next, key performance metrics, such as training accuracy, validation accuracy, test accuracy, and loss rates, are compared to assess the performance of the ResNet50 (Residual Neural Network 50) and VGG16 (Visual Geometry Group 16) models. The VGG16 model (96.09% training accuracy) is selected and optimized using Early Stopping (ES) to prevent overfitting and L2 regularization (L2) to add weight penalties, constraining model complexity and improving generalization, ultimately yielding the ES-L2-VGG16 (Early Stopping—L2 Norm Regularization—Visual Geometry Group 16) model (98.61% training accuracy). Lastly, during the validation phase, the model is applied to the Yarlung Tsangpo River Gorge glacier area on the Tibetan Plateau (TP), identifying a total of 100 IA hidden danger areas with average slopes between 34° and 48°. The ES-L2-VGG16 model achieves 96% accuracy in identifying these areas, ensuring the precise identification of IA dangers.
This study offers a new intelligent technical method for identifying IA hidden danger, with clear advantages and promising application prospects. Full article
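The two regularizers named in this abstract, early stopping and an L2 weight penalty, are generic techniques that can be sketched independently of any particular network. The `patience`, `min_delta`, and `lam` values below are illustrative placeholders, not the study's settings.

```python
class EarlyStopping:
    """Patience-based early stopping: halt training once validation loss
    has not improved by at least min_delta for `patience` consecutive
    epochs, preventing the model from overfitting the training set."""

    def __init__(self, patience=2, min_delta=1e-4):
        self.patience, self.min_delta = patience, min_delta
        self.best = float("inf")
        self.bad_epochs = 0

    def should_stop(self, val_loss):
        if val_loss < self.best - self.min_delta:
            self.best = val_loss
            self.bad_epochs = 0
        else:
            self.bad_epochs += 1
        return self.bad_epochs >= self.patience

def l2_penalty(weights, lam=1e-3):
    # The L2 term added to the training loss: lam * sum of squared weights,
    # which discourages large weights and so constrains model complexity.
    return lam * sum(w * w for w in weights)

es = EarlyStopping(patience=2)
losses = [1.0, 0.8, 0.79, 0.79, 0.79]   # validation loss plateaus
stopped_at = next(i for i, loss in enumerate(losses) if es.should_stop(loss))
```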

14 pages, 783 KB  
Article
Integrating TRA and SET to Influence Food Waste Reduction in Buffet-Style Restaurants: A Gender-Specific Approach
by Qianni Zhu and Pei Liu
Sustainability 2024, 16(20), 8999; https://doi.org/10.3390/su16208999 - 17 Oct 2024
Cited by 1 | Viewed by 2767
Abstract
As one of the major greenhouse gas emission contributors, the food service industry, particularly buffet-style restaurants, is responsible for reducing food waste. This study explores the factors that shape consumer behavior toward food waste reduction in buffet-style restaurants based on the Theory of Reasoned Action (TRA) and Social Exchange theory (SET), as well as analyzing the gender differences in these determinants, offering practical insights for the restaurant industry. This study also uses structural equation modeling and group analysis to examine a total of 547 valid responses gathered through an online survey, including 286 male (52.3%) and 258 female (47.2%) respondents. The findings underscore the attitudes, subjective norms, and establishment policies that emerge as critical drivers of consumer behavior in buffet-style dining settings. Notably, significant gender differences are observed in attitudes and establishment policies. In light of these results, we recommend strategies that include enhancing consumer attitudes and implementing penalty policies within restaurant operations. Restaurants could display visual signs and images related to reducing food waste, provide detailed portion size information, and apply monetary fines for excess waste to reduce consumers’ food waste intentions. These strategies are particularly effective for male consumers, who are more influenced by these factors compared to female consumers. This research contributes valuable guidance for the industry’s efforts to address food waste concerns, emphasizing gender differences and promoting environmentally responsible behavior among consumers. Full article
(This article belongs to the Special Issue Research on Consumer Behaviour and Sustainable Marketing Strategy)

12 pages, 4815 KB  
Article
Approximate Observation Weighted ℓ2/3 SAR Imaging under Compressed Sensing
by Guangtao Li, Dongjin Xin, Weixin Li, Lei Yang, Dong Wang and Yongkang Zhou
Sensors 2024, 24(19), 6418; https://doi.org/10.3390/s24196418 - 3 Oct 2024
Viewed by 1364
Abstract
Compressed sensing SAR imaging is based on an accurate observation matrix; as the observed scene enlarges, the resource consumption of the method increases exponentially. In this paper, we propose a weighted ℓ2/3-norm regularization SAR imaging method based on approximate observation. Initially, to address the issues brought by the precise observation model, we employ an approximate observation operator based on the Chirp Scaling Algorithm as a substitute. Existing approximate observation models typically utilize ℓq (q = 1, 1/2)-norm regularization for sparse constraints in imaging. However, these models are not sufficiently effective in terms of sparsity and imaging detail. To overcome these issues, we apply ℓ2/3 regularization, which aligns with the natural image gradient distribution, and further constrain it using a weighting matrix. This method enhances the sparsity of the algorithm and balances the detail insufficiency caused by the penalty term. Experimental results demonstrate the excellent performance of the proposed method. Full article
(This article belongs to the Section Sensing and Imaging)

17 pages, 411 KB  
Article
Zero-Inflated Binary Classification Model with Elastic Net Regularization
by Hua Xin, Yuhlong Lio, Hsien-Ching Chen and Tzong-Ru Tsai
Mathematics 2024, 12(19), 2990; https://doi.org/10.3390/math12192990 - 25 Sep 2024
Cited by 2 | Viewed by 1713
Abstract
Zero inflation and overfitting can reduce the accuracy of machine learning models for characterizing binary data sets. A zero-inflated Bernoulli (ZIBer) model can be the right model to characterize zero-inflated binary data sets, but when it is used, overcoming the overfitting problem remains an open question. To mitigate overfitting in the ZIBer model, the negative log-likelihood of the ZIBer model with an elastic net regularization penalty is proposed as the loss function. An estimation procedure that minimizes this loss is developed using the gradient descent method (GDM) with a momentum term as the learning rate. The proposed estimation method has two advantages. First, it is a general method that simultaneously uses L1- and L2-norm penalty terms and includes the ridge and least absolute shrinkage and selection operator (LASSO) methods as special cases. Second, the momentum learning rate accelerates the convergence of the GDM and enhances the computational efficiency of the estimation procedure. The parameter selection strategy is studied, and the performance of the proposed method is evaluated using Monte Carlo simulations. A diabetes example is used as an illustration. Full article
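The core update, momentum gradient descent on a loss with an elastic net penalty, can be sketched generically as follows. This is an illustration of the estimation idea only, not the paper's ZIBer procedure: the ℓ1 term is handled with a simple subgradient, and all coefficients are placeholders.

```python
def elastic_net_grad_step(w, grad_loss, velocity, lr=0.1, mu=0.9,
                          l1=0.01, l2=0.01):
    """One momentum-GDM update on loss + l1*||w||_1 + l2*||w||_2^2.

    The l1 term contributes l1*sign(w_i) (subgradient), the l2 term
    contributes 2*l2*w_i, and mu is the momentum coefficient.  Setting
    l1 = 0 recovers ridge regression; l2 = 0 recovers the LASSO case.
    """
    sign = lambda x: (x > 0) - (x < 0)
    new_w, new_v = [], []
    for wi, gi, vi in zip(w, grad_loss, velocity):
        g = gi + l1 * sign(wi) + 2.0 * l2 * wi   # penalized gradient
        v = mu * vi - lr * g                     # momentum accumulation
        new_v.append(v)
        new_w.append(wi + v)
    return new_w, new_v

# With a zero data gradient, the penalties alone pull weights toward zero.
w, v = [1.0, -1.0], [0.0, 0.0]
w, v = elastic_net_grad_step(w, grad_loss=[0.0, 0.0], velocity=v)
```

In practice `grad_loss` would be the gradient of the ZIBer negative log-likelihood at the current parameters, recomputed every iteration.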
(This article belongs to the Special Issue Current Developments in Theoretical and Applied Statistics)

21 pages, 989 KB  
Review
Motherhood Penalty and Labour Market Integration of Immigrant Women: A Review on Evidence from Four OECD Countries
by Samitha Udayanga
Societies 2024, 14(9), 162; https://doi.org/10.3390/soc14090162 - 28 Aug 2024
Cited by 1 | Viewed by 4359
Abstract
Among several reasons preventing the effective labour market integration of immigrant women, the motherhood penalty and unpaid care responsibilities stand out prominently. In line with this, the present scoping review shows how motherhood affects the labour market integration of immigrant women in Australia, Canada, the UK, and the USA. This review shows that parenthood exacerbates the gender pay gap and limits labour market access, favouring men with children over immigrant mothers. Moreover, the effect of the motherhood penalty might be moderated by the level of education, age of the children, and the country of origin/ethnicity of immigrants. In the four countries examined, labour market outcomes for immigrant women are particularly poor. Factors contributing to this include limited language proficiency, traditional gender norms that restrict the full-time employment of certain groups of immigrant women, and institutional barriers like work-permit processing delays. To address these challenges, Australia, Canada, the UK, and the USA have implemented various policies facilitating immigrant mothers’ workforce participation. These measures include language and legal-system education, subsidised childcare, and integration programmes for both mothers and children. Additionally, some programmes in Canada and the USA provide employment assistance and financial support for childcare, while Australia and the UK offer comprehensive integration and settlement services. Full article
(This article belongs to the Special Issue Society and Immigration: Reducing Inequalities)

37 pages, 24321 KB  
Article
Damage Identification of Plate Structures Based on a Non-Convex Approximate Robust Principal Component Analysis
by Dong Liang, Yarong Zhang, Xueping Jiang, Li Yin, Ang Li and Guanyu Shen
Appl. Sci. 2024, 14(16), 7076; https://doi.org/10.3390/app14167076 - 12 Aug 2024
Viewed by 1123
Abstract
Structural damage identification has been one of the key applications in the field of Structural Health Monitoring (SHM). With the development of technology and the growth of demand, methods for identifying damage anomalies in plate structures are increasingly being developed in pursuit of accuracy and high efficiency. Principal Component Analysis (PCA) has long been effective for damage identification in SHM, but because of its sensitivity to outliers and low robustness, it does not work well for complex damage or data. This paper introduces the Robust Principal Component Analysis (RPCA) model framework to address PCA’s oversensitivity to outliers and noise in the data and combines it with Lamb wave wavefield imaging to achieve damage recognition, which greatly improves robustness and reliability. To further improve real-time monitoring efficiency and reduce error, this paper proposes a non-convex approximate RPCA (NCA-RPCA) algorithm model. The algorithm uses a non-convex rank approximation function to approximate the rank of the matrix, a non-convex penalty function to approximate the norm to ensure the uniqueness of the sparse solution, and the alternating direction method of multipliers to solve the problem, which is more efficient. Comparison and analysis with various algorithms through simulation and experiments show that the proposed algorithm improves real-time monitoring efficiency by about ten times, greatly reduces error, and can restore the original data at a lower rank level to achieve more effective damage identification in the field of SHM. Full article
(This article belongs to the Special Issue Advanced Sensing Technology for Structural Health Monitoring)
