Search Results (8)

Search Parameters:
Keywords = roughness penalty

19 pages, 4698 KiB  
Article
Rough-Terrain Path Planning Based on Deep Reinforcement Learning
by Yufeng Yang and Zijie Zhang
Appl. Sci. 2025, 15(11), 6226; https://doi.org/10.3390/app15116226 - 31 May 2025
Cited by 1 | Viewed by 645
Abstract
Road undulations have a significant impact on path length and energy consumption, so rough-terrain path planning for unmanned vehicles is of great research importance for performing more tasks with limited energy. This paper proposes a Deep Q-Network (DQN)-based path-planning method that shapes the reward by introducing a slope penalty function and a terrain penalty function. To address the low exploration efficiency of the ε-greedy strategy, a hybrid exploration strategy combining stochastic exploration with the A* algorithm is proposed, after which the agent is trained on rough terrain. The results show that the algorithm plans energy-saving paths efficiently and converges quickly; compared with the traditional A* and RRT algorithms, it performs better on three-dimensional terrain and chooses paths more rationally.
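
A minimal sketch of the kind of reward shaping the abstract describes: a goal-progress term minus a slope penalty and a terrain penalty. The weights, the grid representation (integer coordinates into a height map), and the function names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def shaped_reward(pos, next_pos, goal, height_map,
                  w_slope=1.0, w_terrain=0.5, step_cost=0.1):
    """Goal-progress reward minus slope and terrain penalties (weights assumed)."""
    progress = np.linalg.norm(goal - next_pos) * -1 + np.linalg.norm(goal - pos)
    dz = height_map[tuple(next_pos)] - height_map[tuple(pos)]        # elevation change
    dxy = max(np.linalg.norm((next_pos - pos).astype(float)), 1e-6)  # horizontal step
    slope_penalty = w_slope * abs(dz) / dxy            # steeper moves cost more energy
    terrain_penalty = w_terrain * height_map[tuple(next_pos)]   # penalize rough cells
    return progress - slope_penalty - terrain_penalty - step_cost
```

Shaping the reward this way lets a standard DQN trade path length against climbing cost without changing the network itself.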

18 pages, 1290 KiB  
Review
A Review of Numerical Techniques for Frictional Contact Analysis
by Govind Vashishtha, Sumika Chauhan, Riya Singh, Manpreet Singh and Ghanshyam G. Tejani
Lubricants 2025, 13(1), 18; https://doi.org/10.3390/lubricants13010018 - 6 Jan 2025
Cited by 1 | Viewed by 2431
Abstract
This review analyzes numerical techniques for frictional contact problems, highlighting their strengths and limitations in addressing inherent nonlinearities and computational demands. Finite element methods (FEM), while dominant due to their versatility, often require computationally expensive iterative solutions. Alternative methods, such as boundary element methods (BEM) and meshless methods, offer potential advantages but require further exploration for broader applicability. The choice of contact algorithm significantly impacts accuracy and efficiency: penalty methods, though computationally efficient, can lack accuracy at high friction coefficients, whereas Lagrange multiplier methods, while more accurate, are computationally more demanding. The selection of an appropriate friction constitutive model is also crucial; while the Coulomb friction law is common, more sophisticated models are needed to represent real-world complexities, including surface roughness and temperature dependence. The review then outlines future research priorities: computationally efficient algorithms and parallel computing strategies, advances in constitutive modelling for improved accuracy, and enhanced contact detection algorithms for complex geometries and large deformations. Integrating experimental data and multiphysics capabilities will further enhance the reliability and applicability of these numerical techniques across engineering applications, ultimately improving the predictive power of simulations in diverse fields.
(This article belongs to the Special Issue Advanced Computational Studies in Frictional Contact)
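
For readers unfamiliar with the penalty approach the review contrasts with Lagrange multipliers, here is a minimal sketch: penetration of a rigid obstacle is not forbidden outright but resisted by a stiff restoring force. The stiffness value is an illustrative assumption.

```python
def penalty_contact_force(gap, k_pen=1e6):
    """Normal contact force for a gap g (negative gap = penetration).

    The penalty method replaces the exact constraint g >= 0 with a stiff
    spring that acts only during penetration: f = -k_pen * g for g < 0.
    """
    return -k_pen * gap if gap < 0.0 else 0.0
```

Raising k_pen reduces the residual penetration but stiffens the system matrix, which is exactly the accuracy/efficiency trade-off the review attributes to penalty methods.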

17 pages, 348 KiB  
Article
Maximum Penalized-Likelihood Structured Covariance Estimation for Imaging Extended Objects, with Application to Radio Astronomy
by Aaron Lanterman
Stats 2024, 7(4), 1496-1512; https://doi.org/10.3390/stats7040088 - 17 Dec 2024
Viewed by 1419
Abstract
Image formation in radio astronomy is often posed as a problem of constructing a nonnegative function from sparse samples of its Fourier transform. We explore an alternative approach that reformulates the problem in terms of estimating the entries of a diagonal covariance matrix from Gaussian data. Maximum-likelihood estimates of the covariance cannot be readily computed analytically; hence, we investigate an iterative algorithm originally proposed by Snyder, O’Sullivan, and Miller in the context of radar imaging. The resulting maximum-likelihood estimates tend to be unacceptably rough due to the ill-posed nature of the maximum-likelihood estimation of functions from limited data, so some kind of regularization is needed. We explore penalized likelihoods based on entropy functionals, a roughness penalty proposed by Silverman, and an information-theoretic formulation of Good’s roughness penalty crafted by O’Sullivan. We also investigate algorithm variations that perform a generic smoothing step at each iteration. The results illustrate that tuning parameters allow for a tradeoff between the noise and blurriness of the reconstruction.
(This article belongs to the Section Computational Statistics)
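
A hedged sketch of the penalized-likelihood idea: the Gaussian negative log-likelihood for the diagonal covariance entries plus a discrete squared-second-difference roughness term, a simple stand-in for the Silverman- and Good-type penalties the paper studies. The penalty form and weight are assumptions.

```python
import numpy as np

def penalized_nll(sigma2, sample_var, n, gamma=1.0):
    """Negative log-likelihood of diagonal covariance entries plus roughness.

    sigma2: candidate diagonal covariance entries (positive array)
    sample_var: per-entry sample variances from n Gaussian observations
    gamma: roughness weight trading noise against blur (assumed)
    """
    nll = 0.5 * n * np.sum(np.log(sigma2) + sample_var / sigma2)
    roughness = np.sum(np.diff(sigma2, 2) ** 2)   # discrete second differences
    return nll + gamma * roughness
```

Larger gamma yields a smoother but blurrier reconstruction, matching the noise/blurriness trade-off the abstract reports.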

22 pages, 3602 KiB  
Article
Optimization Selection Method of Post-Disaster Wireless Location Detection Equipment Based on Semi-Definite Programming
by Aihua Hu, Zhongliang Deng, Jianke Li, Yao Zhang, Yuhui Gao and Di Zhao
Electronics 2022, 11(14), 2170; https://doi.org/10.3390/electronics11142170 - 11 Jul 2022
Cited by 1 | Viewed by 1661
Abstract
Signal propagation attenuation is greater in a post-disaster collapsed environment than indoors or outdoors, and the transmission environment is severely affected by multipath and non-line-of-sight propagation. When signals penetrate the ruins and reach the receiver, their power may become very weak, which greatly reduces the success rate of signal acquisition; the acquisition distance and detection direction of the equipment are likewise limited. An optimization method for post-disaster wireless location detection equipment based on semi-definite programming (SDP) is proposed: a location model of multiple detection devices is constructed, and a decision variable over candidate nodes is generated. The reference node and penalty function are determined by fast rough positioning of the target, and a node selection algorithm based on semi-definite programming finds the optimal node combination to complete precise localization. The performance of this algorithm is better than that of other known SDP optimization algorithms for reference nodes, with a correct node-selection accuracy above 90%. Compared with fixed main nodes, the positioning accuracy of the optimization algorithm is improved by 15.8%.
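
One common convex-relaxation form of anchor selection, sketched here with cvxpy, maximizes the smallest eigenvalue of a Fisher-information-like matrix built from the rough target position. This is a generic illustration under assumed names, not the paper's exact formulation.

```python
import numpy as np
import cvxpy as cp

def select_nodes(anchors, target_rough, k=4):
    """Relax binary node selection to weights in [0, 1] and solve as an SDP."""
    diffs = anchors - target_rough
    units = diffs / np.linalg.norm(diffs, axis=1, keepdims=True)
    w = cp.Variable(len(anchors))
    # Information matrix sum_i w_i * u_i u_i^T; its smallest eigenvalue is
    # concave in w, so maximizing it is a valid convex (SDP) problem.
    info = sum(w[i] * np.outer(units[i], units[i]) for i in range(len(anchors)))
    cp.Problem(cp.Maximize(cp.lambda_min(info)),
               [cp.sum(w) == k, w >= 0, w <= 1]).solve()
    return np.argsort(w.value)[-k:]   # keep the k largest relaxed weights
```

The relaxed weights are rounded to the k best anchors; a fast rough position fix, as in the paper, supplies the geometry the information matrix is built from.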

21 pages, 2359 KiB  
Article
The Effect of Roughness in Absorbing Materials on Solar Air Heater Performance
by Karmveer, Naveen Kumar Gupta, Md Irfanul Haque Siddiqui, Dan Dobrotă, Tabish Alam, Masood Ashraf Ali and Jamel Orfi
Materials 2022, 15(9), 3088; https://doi.org/10.3390/ma15093088 - 24 Apr 2022
Cited by 15 | Viewed by 2760
Abstract
Artificial roughness on the absorber of a solar air heater (SAH) is considered the best passive technique for performance improvement. Roughened SAHs perform better than conventional SAHs under the same operating conditions, at some penalty of higher pumping power. Thermo-hydraulic performance based on effective efficiency is much more appropriate for designing roughened SAHs, as it accounts for both the pumping power requirement and the useful heat gain. The shape, size, and arrangement of the artificial roughness are the most important factors in performance optimization. The roughness parameters and operating parameters, such as the Reynolds number (Re), temperature rise parameter (ΔT/I), and insolation (I), have a combined effect on SAH performance. In this case study, various performance parameters of SAHs have been evaluated to show the effect of distinct, previously investigated artificial roughness geometries. Thermal efficiency, the thermal efficiency improvement factor (TEIF), and the effective efficiency of various roughened absorbers have been predicted. The thermal and effective efficiencies depend strongly on the roughness parameters, Re, and ΔT/I. Staggered, broken-arc hybrid-rib roughness consistently shows higher TEIF, thermal, and effective efficiencies than all other geometries for ascending values of ΔT/I, with a maximum effective efficiency of 74.63% at ΔT/I = 0.01 K·m²/W. The combination of parameters p/e = 10, e/Dh = 0.043, and α = 60° is observed to give the best performance at ΔT/I higher than 0.00789 K·m²/W.
(This article belongs to the Special Issue Ecodesign for Composite Materials and Products)
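
The effective efficiency the abstract relies on credits useful heat gain against pumping power converted to its thermal-energy equivalent. A minimal sketch of the standard definition follows; the conversion factor value is an assumption.

```python
def effective_efficiency(q_useful, p_pump, insolation, area, c_f=0.2):
    """eta_eff = (Q_u - P_m / C_f) / (I * A_c)

    q_useful: useful heat gain (W); p_pump: pumping power (W);
    insolation: I (W/m^2); area: collector area A_c (m^2);
    c_f: mechanical-to-thermal conversion factor (assumed value).
    """
    return (q_useful - p_pump / c_f) / (insolation * area)
```

Because pumping power is divided by c_f, a rib geometry that raises heat transfer but also raises friction can still lose on effective efficiency, which is why the comparison in the paper is made on this metric.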

17 pages, 1597 KiB  
Article
Bi-Smoothed Functional Independent Component Analysis for EEG Artifact Removal
by Marc Vidal, Mattia Rosso and Ana M. Aguilera 
Mathematics 2021, 9(11), 1243; https://doi.org/10.3390/math9111243 - 28 May 2021
Cited by 9 | Viewed by 5126
Abstract
Motivated by mapping adverse artifactual events caused by body movements in electroencephalographic (EEG) signals, we present a functional independent component analysis based on the spectral decomposition of the kurtosis operator of a smoothed principal component expansion. A discrete roughness penalty is introduced in the orthonormality constraint of the covariance eigenfunctions in order to obtain the smoothed basis for the proposed independent component model. To select the tuning parameters, a cross-validation method that incorporates shrinkage is used to enhance the performance on functional representations with a large basis dimension. This method provides an estimation strategy to determine the penalty parameter and the optimal number of components. Our independent component approach is applied to real EEG data to estimate genuine brain potentials from a contaminated signal. As a result, it is possible to control high-frequency remnants of neural origin overlapping artifactual sources to optimize their removal from the signal. An R package implementing our methods is available at CRAN.
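
A minimal sketch of a smoothed principal component expansion in the spirit of the paper: a discrete second-difference roughness penalty enters the orthonormality constraint, turning the covariance eigenproblem into a generalized one. The penalty matrix and alpha are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import eigh

def smoothed_pca(X, alpha=1e-2, n_comp=5):
    """Principal components under the penalized constraint v'(I + alpha*P)v = 1."""
    p = X.shape[1]
    C = np.cov(X, rowvar=False)
    D = np.diff(np.eye(p), 2, axis=0)         # discrete second-difference operator
    P = D.T @ D                                # roughness penalty matrix
    lam, V = eigh(C, np.eye(p) + alpha * P)    # generalized eigenproblem
    order = np.argsort(lam)[::-1]
    return lam[order][:n_comp], V[:, order[:n_comp]]
```

The generalized eigenvectors satisfy the penalized orthonormality condition exactly, so larger alpha trades fidelity to the covariance for smoother eigenfunctions.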

18 pages, 6288 KiB  
Article
Smoothing and Differentiation of Kinematic Data Using Functional Data Analysis Approach: An Application of Automatic and Subjective Methods
by Muhammad Athif Mat Zin, Azmin Sham Rambely, Noratiqah Mohd Ariff and Muhammad Shahimi Ariffin
Appl. Sci. 2020, 10(7), 2493; https://doi.org/10.3390/app10072493 - 5 Apr 2020
Cited by 12 | Viewed by 5580
Abstract
Smoothing is one of the fundamental procedures in functional data analysis (FDA). The smoothing parameter λ controls the trade-off between smoothness and fit, and is chosen either by automatic methods, namely cross-validation (CV) and generalized cross-validation (GCV), or by subjective assessment. However, previous biomechanics research has applied only subjective assessment in choosing the optimal λ, without using any automatic method beforehand, and none of that research demonstrated how the subjective assessment was made. Thus, the goal of this research was to apply the FDA approach to smooth and differentiate kinematic data, specifically the right hip flexion/extension (F/E) angle during the American kettlebell swing (AKS), and to determine the optimal λ. CV and GCV were applied prior to subjective assessment with various values of λ, together with cubic and quintic spline (B-spline) bases. The optimal λ was selected on the basis of smooth, well-fitted first and second derivatives; the chosen value was 1 × 10⁻¹² with a quintic spline (B-spline) basis and a penalized fourth-order derivative. The quintic spline is a better smoothing and differentiation method than the cubic spline, as it does not produce zero acceleration at the endpoints. CV and GCV did not give the optimal λ, so subjective assessment was employed instead.
(This article belongs to the Section Applied Biosciences and Bioengineering)
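
A hedged sketch of the GCV criterion used for automatic λ selection with a roughness-penalized basis smoother; the basis and penalty matrices are assumed inputs, and this is not the authors' pipeline.

```python
import numpy as np

def gcv_score(y, B, P, lam):
    """GCV(lam) = n * RSS / (n - trace(S))^2 for S = B (B'B + lam*P)^{-1} B'."""
    n = len(y)
    S = B @ np.linalg.solve(B.T @ B + lam * P, B.T)   # hat (smoother) matrix
    rss = np.sum((y - S @ y) ** 2)                    # residual sum of squares
    return n * rss / (n - np.trace(S)) ** 2

# Scan lam over a log grid and take the minimizer; the paper reports that
# the automatic choice still had to be checked subjectively against the
# first and second derivatives.
```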

20 pages, 2413 KiB  
Article
Spatial and Temporal Variabilities of PM2.5 Concentrations in China Using Functional Data Analysis
by Deqing Wang, Zhangqi Zhong, Kaixu Bai and Lingyun He
Sustainability 2019, 11(6), 1620; https://doi.org/10.3390/su11061620 - 18 Mar 2019
Cited by 12 | Viewed by 3870
Abstract
As air pollution characterized by fine particulate matter has become one of the most serious environmental issues in China, a critical understanding of the behavior of the major pollutants is increasingly important for air pollution prevention and control. The main concern of this study is to compare, within the framework of functional data analysis, the fluctuation patterns of PM2.5 concentration across provinces in China from 1998 to 2016, both spatially and temporally. By converting the discrete PM2.5 concentration values into smooth curves with a roughness penalty, the continuous PM2.5 concentration process for each province was represented. Variance decomposition via functional principal component analysis indicates that the highest mean and largest variability of PM2.5 concentration occurred from 2003 to 2012, a period during which national environmental protection policies were intensively issued. The beginning and end stages show equal variability, far less than that of the middle stage. Since the PM2.5 concentration curves showed different fluctuation patterns in each province, adaptive clustering analysis combined with functional analysis of variance was adopted to explore the categories of PM2.5 concentration curves. The classification shows that: (1) there are eight patterns of PM2.5 concentration among the 34 provinces, with significant differences among patterns from both static and dynamic perspectives; (2) air pollution in China exhibits high-emission "club" agglomeration. Comparative analysis of the PM2.5 profiles shows that heavy-pollution areas could rapidly adjust their emission levels in response to environmental protection policies, whereas low-pollution areas characterized by the tourism industry tended to develop their economies at the expense of the environment and resources. This study not only introduces an advanced technique to extract additional information from the functional form of PM2.5 concentration, but also provides empirical suggestions for government policies aimed at fundamentally reducing or eliminating haze pollution.
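
A compact sketch of the two steps the abstract chains together: roughness-penalized smoothing of each province's discrete series, then variance decomposition of the fitted curves. B, P, and lam are illustrative assumptions, and PCA on the basis coefficients stands in for FPCA under the simplifying assumption of an orthonormal basis.

```python
import numpy as np

def fit_curves(Y, B, P, lam=10.0):
    """Y: (n_provinces, n_years) discrete values -> penalized basis coefficients."""
    A = np.linalg.solve(B.T @ B + lam * P, B.T)   # penalized smoother matrix
    return Y @ A.T

def variance_shares(coefs):
    """Fraction of total variance explained by each principal component."""
    evals = np.linalg.eigvalsh(np.cov(coefs, rowvar=False))[::-1]
    return evals / evals.sum()
```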
