Symmetry, Volume 16, Issue 9 (September 2024) – 158 articles

Cover Story: The equation of state is a fundamental property of nuclear matter that governs the size and radius of neutron stars and the dynamics of neutron star mergers. In addition to astrophysical observations, experiments with heavy-ion collisions at relativistic beam energies have helped to constrain the parameters of the equation of state for symmetric and neutron-rich matter at the high densities that exist in the cores of neutron stars. The article reviews the status of this experimental research in the laboratory and outlines future prospects.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open it.
25 pages, 13621 KiB  
Article
Exploiting Axisymmetry to Optimize CFD Simulations—Heave Motion and Wave Radiation of a Spherical Buoy
by Josh Davidson, Vincenzo Nava, Jacob Andersen and Morten Bech Kramer
Symmetry 2024, 16(9), 1252; https://doi.org/10.3390/sym16091252 - 23 Sep 2024
Abstract
Simulating the free decay motion and wave radiation from a heaving semi-submerged sphere poses significant computational challenges due to its three-dimensional complexity. By leveraging axisymmetry, we reduce the problem to a two-dimensional simulation, significantly decreasing computational demands while maintaining accuracy. In this paper, we exploit axisymmetry to perform a large ensemble of Computational Fluid Dynamics (CFD) simulations, aiming to evaluate and maximize both accuracy and efficiency, using the Reynolds Averaged Navier–Stokes (RANS) solver interFOAM in the open-source finite volume CFD software OpenFOAM. The simulations are validated against highly accurate experimental data, and extensive parametric studies, previously limited by computational constraints, are conducted to refine the simulation setups. More than 50 iterations of the same heaving sphere simulation are performed, informing efficient trade-offs between computational cost and accuracy across various simulation parameters and mesh configurations. Ultimately, by employing axisymmetry, this research contributes to the development of more accurate and efficient numerical modeling in ocean engineering. Full article
(This article belongs to the Special Issue Symmetry/Asymmetry in Ocean Engineering)

30 pages, 3076 KiB  
Article
Constant Stress-Partially Accelerated Life Tests of Vtub-Shaped Lifetime Distribution under Progressive Type II Censoring
by Aisha Fayomi, Asmaa A. Ahmed, Neama T. AL-Sayed, Sara M. Behairy, Asmaa M. Abd AL-Fattah, Gannat R. AL-Dayian and Abeer A. EL-Helbawy
Symmetry 2024, 16(9), 1251; https://doi.org/10.3390/sym16091251 - 23 Sep 2024
Abstract
In lifetime tests, the waiting time for items to fail may be long under usual use conditions, particularly when the products have high reliability. To reduce the cost of testing without sacrificing the quality of the data obtained, the products are exposed to higher stress levels than normal, which quickly causes early failures. Therefore, accelerated life testing is essential since it saves costs and time. This paper considers constant stress-partially accelerated life tests under progressive Type II censored samples. This is realized under the claim that the lifetime of products under usual use conditions follows Vtub-shaped lifetime distribution, which is also known as log-log distribution. The log–log distribution is highly significant and has several real-world applications since it has distinct shapes of its probability density function and hazard rate function. A graphical description of the log–log distribution is exhibited, including plots of the probability density function and hazard rate. The log–log density has different shapes, such as decreasing, unimodal, and approximately symmetric. Several mathematical properties, such as quantiles, probability weighted moments, incomplete moments, moments of residual life, and reversed residual life functions, and entropy of the log–log distribution, are discussed. In addition, the maximum likelihood and maximum product spacing methods are used to obtain the interval and point estimators of the acceleration factor, as well as the model parameters. A simulation study is employed to assess the implementation of the estimation approaches under censoring schemes and different sample sizes. Finally, to demonstrate the viability of the various approaches, two real data sets are investigated. Full article
(This article belongs to the Section Mathematics)

18 pages, 5895 KiB  
Article
Research on Rock Energy Constitutive Model Based on Functional Principle
by Hongmiao Lv, Xiaochen Yang, Yue Yu and Wenbo Liu
Symmetry 2024, 16(9), 1250; https://doi.org/10.3390/sym16091250 - 23 Sep 2024
Abstract
The essence of rock fracture can be broadly categorized into four processes: energy input, energy accumulation, energy dissipation, and energy release. From the perspective of energy consumption, the failure of rock materials must be accompanied by energy dissipation. Dissipated energy serves as the internal driving force behind rock damage and progressive failure. Given that the process of rock loading and deformation involves energy accumulation and dissipation, the rock constitutive model theory is expanded by incorporating energy principles. By introducing a dynamic energy correction coefficient and applying the law of the conservation of energy, the total energy exerted by external loads on the rock is equated to the energy dissipated as dynamic energy inside the rock. A new type of energy constitutive model is established through the functional principle and momentum principle. To validate the model’s accuracy, a triaxial compression test was conducted on sandstone to examine the stress–strain behavior of the rock during the failure process. A sensitivity analysis of the parameters introduced into the model was conducted by comparing the model results, which helped to clarify the relative significance of these parameters. The results indicated that the energy model more accurately captures the non-linear mechanical behavior of sandstone under high-stress loading conditions. The model curve fits the test data to a high degree: the fitting curve was basically consistent with the changing trend of the test curve, and the correlation coefficients were all above 0.90. Compared with other models, the model based on the energy principle not only accurately reflects the rock’s stress–strain curve, but also reflects the energy change law of the rock. This has reference value for the safety analysis of rock mass engineering under loading conditions and aids in the development of anchoring and support schemes. The research results can help fill gaps in the energy-based analysis of rock deformation and failure and provide a theoretical basis for deep rock engineering. Moreover, this research can further improve and extend the rock mechanics research system based on energy. Full article
(This article belongs to the Section Engineering and Materials)

14 pages, 2496 KiB  
Article
Imaging Analysis Method for a Paraxial Refractive Optical System with a Large Aperture Based on the Wave Aberration Method
by Yiqing Cao, Lijun Lu and Xiaonan Zhao
Symmetry 2024, 16(9), 1249; https://doi.org/10.3390/sym16091249 - 23 Sep 2024
Abstract
Recently developed sixth-order wave aberration expressions for soft X-ray and vacuum ultraviolet optical systems are first extended to plane-symmetric refractive optical systems; then, applying the transformation relations between plane-symmetric and paraxial refractive optical systems, the sixth-order intrinsic and extrinsic wave aberration coefficient expressions of a paraxial refractive optical system are derived. In addition, the corresponding fifth-order aberration expressions are also obtained. Finally, the resultant aberration expressions are applied to calculate the aberration on the image plane of a design example of a paraxial refractive optical system with a large aperture, and the results are compared with those obtained by the ray-tracing software Zemax to demonstrate satisfactory calculation accuracy. Full article
(This article belongs to the Section Physics)

16 pages, 2851 KiB  
Article
Trajectory Privacy-Protection Mechanism Based on Multidimensional Spatial–Temporal Prediction
by Ji Xi, Meiyu Shi, Weiqi Zhang, Zhe Xu and Yanting Liu
Symmetry 2024, 16(9), 1248; https://doi.org/10.3390/sym16091248 - 23 Sep 2024
Abstract
The popularity of global GPS location services and location-enabled personal terminal applications has contributed to the rapid growth of location-based social networks. Users can access social networks at anytime and anywhere to obtain services in the relevant location. While accessing services is convenient, there is a potential risk of leaking users’ private information. In data processing, the discovery of issues and the generation of optimal solutions constitute a symmetrical process. Therefore, this paper proposes a symmetry–trajectory differential privacy-protection mechanism based on multi-dimensional prediction (TPPM-MP). Firstly, the temporal attention mechanism is designed to extract spatiotemporal features of trajectories from different spatiotemporal dimensions and perform trajectory-sensitive prediction. Secondly, class-prevalence-based weights are assigned to sensitive regions. Finally, the privacy budget is assigned based on the sensitive weights, and noise conforming to localized differential privacy is added. Validated on real datasets, the proposed method in this paper enhanced usability by 22% and 37% on the same dataset compared with other methods mentioned, while providing equivalent privacy protection. Full article
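To make the last step above concrete, here is a minimal sketch of localized differential privacy applied to trajectory points with the Laplace mechanism. The per-point budgets, sensitivity value, and weighting are placeholders for illustration; they are not the sensitivity weights or budget allocation of the TPPM-MP scheme itself.

```python
import numpy as np

def laplace_perturb(trajectory, sensitivity, budgets):
    """Perturb each (x, y) trajectory point with Laplace noise.

    trajectory  : (n, 2) array of coordinates
    sensitivity : L1 sensitivity of a single location report (assumed value)
    budgets     : (n,) per-point privacy budgets; a smaller epsilon on a
                  sensitive region gives a larger noise scale b = sensitivity / epsilon
    """
    trajectory = np.asarray(trajectory, dtype=float)
    scales = sensitivity / np.asarray(budgets, dtype=float)
    noise = np.random.laplace(0.0, scales[:, None], size=trajectory.shape)
    return trajectory + noise

# toy usage: the middle point is treated as sensitive (smaller budget, more noise)
traj = [(0.0, 0.0), (1.0, 0.5), (2.0, 1.0)]
eps = np.array([1.0, 0.2, 1.0])
print(laplace_perturb(traj, sensitivity=0.1, budgets=eps))
```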

17 pages, 1602 KiB  
Article
Achieving the Best Symmetry by Finding the Optimal Clustering Filters for Specific Lighting Conditions
by Volodymyr Hrytsyk, Anton Borkivskyi and Taras Oliinyk
Symmetry 2024, 16(9), 1247; https://doi.org/10.3390/sym16091247 - 23 Sep 2024
Abstract
This article explores the efficiency of various clustering methods for image segmentation under different luminosity conditions. Image segmentation plays a crucial role in computer vision applications, and clustering algorithms are commonly used for this purpose. The search for an adaptive clustering mechanism aims to ensure the maximum symmetry of real objects with objects/segments in their digital representations. However, clustering method performances can fluctuate with varying lighting conditions during image capture. Therefore, we assess the performance of several clustering algorithms—including K-Means, K-Medoids, Fuzzy C-Means, Possibilistic C-Means, Gustafson–Kessel, Entropy-based Fuzzy, Ridler–Calvard, Kohonen Self-Organizing Maps and MeanShift—across images captured under different illumination conditions. Additionally, we develop an adaptive image segmentation system utilizing empirical data. The conducted experiments highlight the varied performance of the clustering methods under different luminosity conditions. This research contributes to a better understanding of luminosity’s impact on image segmentation and aids method selection for diverse lighting scenarios. Full article
(This article belongs to the Special Issue Image Processing and Symmetry: Topics and Applications)
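As a concrete entry point to the comparison described above, the sketch below runs one of the evaluated algorithms (K-Means, via scikit-learn) on the same synthetic grayscale image at two brightness levels; it is a generic illustration, not the adaptive selection system developed in the article.

```python
import numpy as np
from sklearn.cluster import KMeans

def kmeans_segment(image, n_segments=3, seed=0):
    """Cluster pixel intensities of a grayscale image into n_segments labels."""
    pixels = image.reshape(-1, 1).astype(float)
    labels = KMeans(n_clusters=n_segments, n_init=10, random_state=seed).fit_predict(pixels)
    return labels.reshape(image.shape)

# synthetic scene: dark background with a brighter square
rng = np.random.default_rng(0)
img = rng.normal(50, 5, size=(64, 64))
img[16:48, 16:48] += 100

for gain in (0.5, 1.5):  # crude stand-ins for under- and over-exposure
    seg = kmeans_segment(np.clip(img * gain, 0, 255))
    print(f"gain={gain}: segment sizes = {np.bincount(seg.ravel())}")
```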

11 pages, 436 KiB  
Article
Stacking Monotone Polytopes
by Hee-Kap Ahn, Seung Joon Lee and Sang Duk Yoon
Symmetry 2024, 16(9), 1246; https://doi.org/10.3390/sym16091246 - 23 Sep 2024
Abstract
This paper addresses the problem of computing the optimal stacking of two monotone polytopes P and Q in R^d. A monotone polytope in R^d is defined as a polytope whose intersection with any line parallel to the last coordinate axis x_d is connected, and the stacking of P and Q is defined as a translation of Q, such that “Q touches P from above”. To evaluate the stack, we use three different scoring criteria: (1) the height of the stack, (2) the maximum pointwise distance along the x_d-axis, and (3) the volume between P and Q. We propose exact algorithms to compute the optimal stacking for each scoring criterion. Full article
(This article belongs to the Section Computer)

28 pages, 7879 KiB  
Article
Research on Pricing and Dynamic Replenishment Planning Strategies for Perishable Vegetables Based on the RF-GWO Model
by Yongjun Pu, Zhonglin Huang, Junjie Wang and Qianrong Zhang
Symmetry 2024, 16(9), 1245; https://doi.org/10.3390/sym16091245 - 22 Sep 2024
Abstract
This paper addresses the challenges of automated pricing and replenishment strategies for perishable products with time-varying deterioration rates, aiming to assist wholesalers and retailers in optimizing their production, transportation, and sales processes to meet market demand while minimizing inventory backlog and losses. The study utilizes an improved convolutional neural network–long short-term memory (CNN-LSTM) hybrid model, autoregressive moving average (ARIMA) model, and random forest–grey wolf optimization (RF-GWO) algorithm. Using fresh vegetables as an example, the cost relationship is analyzed through linear regression, sales volume is predicted using the LSTM recurrent neural network, and pricing is forecasted with a time series analysis. The RF-GWO algorithm is then employed to solve the profit maximization problem, identifying the optimal replenishment quantity, type, and most effective pricing strategy, which involves dynamically adjusting prices based on predicted sales and market conditions. The experimental results indicate a 5.4% reduction in inventory losses and a 6.15% increase in sales profits, confirming the model’s effectiveness. The proposed mathematical model offers a novel approach to automated pricing and replenishment in managing perishable goods, providing valuable insights for dynamic inventory control and profit optimization. Full article
(This article belongs to the Section Computer)

23 pages, 1333 KiB  
Article
Intuitionistic Fuzzy Sequential Three-Way Decision Model in Incomplete Information Systems
by Jie Shi, Qiupeng Liu, Chunlei Shi, Mingming Lv and Wenli Pang
Symmetry 2024, 16(9), 1244; https://doi.org/10.3390/sym16091244 - 22 Sep 2024
Abstract
As an effective method for uncertain knowledge discovery and decision-making, the three-way decisions model has attracted extensive attention from scholars. However, in practice, the existing sequential three-way decision model often faces challenges due to factors such as missing data and unbalanced attribute granularity. To address these issues, we propose an intuitionistic fuzzy sequential three-way decision (IFSTWD) model, which introduces several significant contributions: (1) New intuitionistic fuzzy similarity relations. By integrating possibility theory, our model defines similarity and dissimilarity in incomplete information systems, establishing new intuitionistic fuzzy similarity relations and their cut relations. (2) Granulation method innovation. We propose a density neighborhood-based granulation method to partition decision attributes and introduce a novel criterion for evaluating attribute importance. (3) Enhanced decision process. By incorporating sequential three-way decision theory and developing a multi-level granularity structure, our model replaces the traditional equivalent relation in the decision-theoretic rough sets model, thus advancing the model’s applicability and effectiveness. The practical utility of our model is demonstrated through an example analysis of “Chinese + vocational skills” talent competency and validated through simulation experiments on the UCI dataset, showing superior performance compared to existing methods. Full article
(This article belongs to the Section Computer)

21 pages, 2361 KiB  
Article
Einstein Aggregation Operator Technique in Circular Fermatean Fuzzy Environment for MCDM
by Revathy Aruchsamy, Inthumathi Velusamy, Prasantha Bharathi Dhandapani and Taha Radwan
Symmetry 2024, 16(9), 1243; https://doi.org/10.3390/sym16091243 - 22 Sep 2024
Abstract
An Ethernet cable enables users to connect their electronic devices, such as smartphones, computers, routers, laptops, etc., to a network that permits them to utilize the internet. Additionally, it transfers broadband signals among connected devices. Wi-Fi is tremendously helpful with small, handheld gadgets, but if capacity is required, cable Ethernet connectivity cannot be surpassed. Ethernet connections typically work faster than Wi-Fi connections; they also tend to be more flexible, have fewer interruptions, can handle problems rapidly, and have a cleaner appearance. However, it becomes complicated to decide upon an appropriate Ethernet cable. The circular Fermatean fuzzy set (∘FF), an extension of the interval-valued Fermatean fuzzy set (IVFFS) for two dimensions, provides a comprehensive framework for decision-making under uncertainty, where the concept of symmetry plays a crucial role in ensuring the balanced and unbiased aggregation of criteria. The main objective of this investigation was to select one of the best Ethernet cables using multi-criteria decision-making (MCDM). We employed aggregation operators (AOs), such as the Einstein averaging and geometric AOs, to amalgamate cable choices based on predefined criteria within the ∘FF set environment. Our approach ranks Ethernet cable options by evaluating their proximity to the ideal choice, applying ∘FF cosine and ∘FF Dice similarity measures to the values aggregated by the ∘FF Einstein-weighted averaging and geometric operators. The effectiveness and stability of our suggested method are verified by visualization, comparison, and statistical analysis. Full article
(This article belongs to the Section Computer)

16 pages, 5132 KiB  
Article
Novel Flexible Pressure Sensor with Symmetrical Structure Based on 2-D MoS2 Material on Polydimethylsiloxane Substrate
by Shaoxiong Deng, Feng Li, Mengye Cai and Yanfeng Jiang
Symmetry 2024, 16(9), 1242; https://doi.org/10.3390/sym16091242 - 21 Sep 2024
Abstract
Flexible pressure sensors can be widely utilized in healthcare, human–computer interaction, and the Internet of Things (IoT). There is an increasing demand for high-precision and high-sensitivity flexible pressure sensors. In response to this demand, a novel flexible pressure sensor with a symmetrical structure composed of MoS2 and PDMS is designed in this paper. Simulations are conducted on the designed flexible pressure sensor. Its piezoresistive effect is analyzed, and the influence of the cavity structure on its sensitivity is investigated. Additionally, a fully symmetrical Wheatstone bridge composed of the flexible pressure sensor is designed and simulated. Its symmetrical structure improves the temperature stability and the sensitivity of the sensor. The structure can be used to convert pressure changes into voltage changes conveniently. The results indicate that the sensor achieves a sensitivity of 1.13 kPa^−1 in the micro-pressure range of 0–20 kPa, with an output voltage sensitivity of 3.729 V/kPa. The designed flexible pressure sensor exhibits promising potential for applications in wearable devices and related fields, owing to its high sensitivity and precision. Full article
(This article belongs to the Section Engineering and Materials)
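The bridge read-out described above follows the standard Wheatstone relation, sketched below with placeholder resistance values rather than the actual parameters of the MoS2/PDMS device.

```python
def bridge_output(v_in, r1, r2, r3, r4):
    """Output voltage of a Wheatstone bridge with arms R1/R2 and R3/R4."""
    return v_in * (r2 / (r1 + r2) - r4 / (r3 + r4))

r0 = 1000.0          # nominal arm resistance (placeholder)
delta = 0.01 * r0    # a 1% piezoresistive change under applied pressure
print(bridge_output(5.0, r0, r0, r0, r0))          # balanced bridge: 0 V
print(bridge_output(5.0, r0, r0 + delta, r0, r0))  # unbalanced: about 12 mV
```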

34 pages, 27876 KiB  
Article
Assessment of Measured Mixing Time in a Water Model of Eccentric Gas-Stirred Ladle with a Low Gas Flow Rate: Tendency of Salt Solution Tracer Dispersions
by Xin Tao, Hongyu Qi, Zhijie Guo, Jia Wang, Xiaoge Wang, Jundi Yang, Qi Zhao, Wanming Lin, Kun Yang and Chao Chen
Symmetry 2024, 16(9), 1241; https://doi.org/10.3390/sym16091241 - 21 Sep 2024
Abstract
The measurement of mixing time in a water model of soft-stirred steelmaking ladles suffers in practice from poor repeatability. This uncertainty severely affects both the understanding of transport phenomena in ladles and the measurement accuracy. Scaled down by a ratio of 1:4, a water model based on an industrial 260-ton ladle is used. This paper studies the transport process paths and mixing time of salt solution tracers in the water model of eccentric gas-stirred ladles with a low gas flow rate. Based on a large number of repeated experiments, the different transport paths of the tracer and the error of the mixing time in each transport path are discussed and compared with the numerical simulation results. The repeated experiments on the water model show that there are five transport paths for the tracer in the ladle. In the first path, the tracer is mainly transported by the left-side main circulation flow, which matches the numerical simulation results. In the second and third paths, the tracer is also mainly transported by the left-side circulation flow, but bifurcations occur when the tracer in the middle area is transported downward; in the third path, the portion and intensity of the tracer transferring to the right side from the central region are higher than in the second path. In the fourth path, the tracer is transported downward from the left, middle, and right sides simultaneously with similar intensity. In the fifth path, the tracer is mainly transported on the right side, where it forms a clockwise circulation flow. The mixing times for the first to fifth transport paths are 158.3 s, 149.7 s, 171.7 s, 134 s and 95.7 s, respectively, with the third and fifth paths giving the maximum and minimum values. The error between the mixing time and the averaged mixing time at each monitoring point in the five transport paths of the tracer is between −34.7% and 40.9%. Furthermore, the error between the averaged mixing time of each path and the path-based overall average is between 5.5% and 32.6%. Full article
(This article belongs to the Special Issue Symmetry Fluid Dynamics in Materials and Metallurgical Processes)

16 pages, 533 KiB  
Article
On a Randomly Censoring Scheme for Generalized Logistic Distribution with Applications
by Mustafa M. Hasaballah, Oluwafemi Samson Balogun and Mahmoud E. Bakr
Symmetry 2024, 16(9), 1240; https://doi.org/10.3390/sym16091240 - 21 Sep 2024
Abstract
In this paper, we investigate the inferential procedures within both classical and Bayesian frameworks for the generalized logistic distribution under a random censoring model. For randomly censored data, our main goals were to develop maximum likelihood estimators and construct confidence intervals using the Fisher information matrix for the unknown parameters. Additionally, we developed Bayes estimators with gamma priors, addressing both squared error and general entropy loss functions. We also calculated Bayesian credible intervals for the parameters. These methods were applied to two real datasets with random censoring to provide valuable insights. Finally, we conducted a simulation analysis to assess the effectiveness of the estimated values. Full article
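The classical estimators rest on the standard randomly censored likelihood, in which an observed failure contributes its density and a censored time contributes its survival function. A minimal numerical sketch is given below; the ordinary logistic distribution is used only as a stand-in for the generalized logistic model of the paper, and the data are simulated.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import logistic  # stand-in for the generalized logistic model

def neg_log_lik(params, t, delta):
    """Negative log-likelihood under random censoring.

    t     : observed times (failure or censoring)
    delta : 1 for an observed failure, 0 for a censored observation
    """
    loc, log_scale = params
    scale = np.exp(log_scale)  # keeps the scale parameter positive
    ll = delta * logistic.logpdf(t, loc, scale) + (1 - delta) * logistic.logsf(t, loc, scale)
    return -np.sum(ll)

rng = np.random.default_rng(1)
life = logistic.rvs(loc=10.0, scale=2.0, size=200, random_state=1)
cens = rng.uniform(5.0, 25.0, size=200)          # independent censoring times
t, delta = np.minimum(life, cens), (life <= cens).astype(float)

fit = minimize(neg_log_lik, x0=[np.median(t), 0.0], args=(t, delta))
print("estimated loc, scale:", fit.x[0], np.exp(fit.x[1]))
```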

18 pages, 4581 KiB  
Article
Dynamic Variable Precision Attribute Reduction Algorithm
by Xu Li, Ruibo Dong, Zhanwei Chen and Jiankang Ren
Symmetry 2024, 16(9), 1239; https://doi.org/10.3390/sym16091239 - 21 Sep 2024
Abstract
Dynamic reduction algorithms have become an important part of attribute reduction research because of their ability to perform dynamic updates without the need to retrain the original model. To enhance the efficiency of variable precision reduction algorithms in processing dynamic data, research has been conducted from the perspective of the construction process of the discernibility matrix. By modifying the decision values of some samples through an absolute majority voting strategy, a connection between variable precision reduction and positive region reduction has been established. Considering the increase and decrease of samples, dynamic variable precision reduction algorithms have been proposed. For four cases of sample increase, four corresponding scenarios have been discussed, and judgment conditions for the construction of the discernibility matrix have been proposed, which has led to the development of a dynamic variable precision reduction algorithm for sample increasing (DVPRA-SI). Simultaneously, for the scenario of sample deletion, three corresponding scenarios have been proposed, and the judgment conditions for the construction of the discernibility matrix have been discussed, which has resulted in the development of a dynamic variable precision reduction algorithm for sample deletion (DVPRA-SD). Finally, the proposed two algorithms and existing dynamic variable precision reduction algorithms were compared in terms of the running time and classification precision, and the experiments demonstrated that both algorithms are feasible and effective. Full article
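Because the proposed algorithms hinge on how the discernibility matrix is constructed, the sketch below shows the classical (static, non-variable-precision) construction for a small decision table; the update conditions and majority-voting strategy of DVPRA-SI and DVPRA-SD are not reproduced here.

```python
def discernibility_matrix(objects, decisions, attributes):
    """Classical discernibility matrix of a decision table.

    Entry (i, j) is the set of condition attributes on which objects i and j
    differ, recorded only when their decision labels differ.
    """
    n = len(objects)
    matrix = [[set() for _ in range(n)] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if decisions[i] != decisions[j]:
                diff = {a for a in attributes if objects[i][a] != objects[j][a]}
                matrix[i][j] = matrix[j][i] = diff
    return matrix

objs = [{"headache": 1, "temp": "high"},
        {"headache": 0, "temp": "high"},
        {"headache": 0, "temp": "normal"}]
dec = ["flu", "flu", "healthy"]
for row in discernibility_matrix(objs, dec, ["headache", "temp"]):
    print(row)
```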

34 pages, 700 KiB  
Article
Nuclear Matter and Finite Nuclei: Recent Studies Based on Parity Doublet Model
by Yuk-Kei Kong, Youngman Kim and Masayasu Harada
Symmetry 2024, 16(9), 1238; https://doi.org/10.3390/sym16091238 - 20 Sep 2024
Abstract
In this review, we summarize recent studies on nuclear matter and finite nuclei based on parity doublet models. We first construct a parity doublet model (PDM), which includes the chiral invariant mass m_0 of nucleons together with the mass generated by the spontaneous chiral symmetry breaking. We then study the density dependence of the symmetry energy in the PDM, which shows that the symmetry energy is larger for a smaller chiral invariant mass. Then, we investigate some finite nuclei by applying the Relativistic Continuum Hartree–Bogoliubov (RCHB) theory to the PDM. We present the root-mean-square deviation (RMSD) of the binding energies and charge radii, and show that m_0 = 700 MeV is preferred by the nuclear properties. Finally, we modify the PDM by adding the isovector scalar meson a_0(980), and show that the inclusion of the a_0(980) enlarges the symmetry energy of the infinite nuclear matter. Full article

20 pages, 381 KiB  
Article
New Insights into Rough Set Theory: Transitive Neighborhoods and Approximations
by Sibel Demiralp
Symmetry 2024, 16(9), 1237; https://doi.org/10.3390/sym16091237 - 20 Sep 2024
Abstract
Rough set theory is a methodology that defines the definite or probable membership of an element for exploring data with uncertainty and incompleteness. It classifies data sets using lower and upper approximations to model uncertainty and missing information. To contribute to this goal, this study presents a newer approach to the concept of rough sets by introducing a new type of neighborhood called j-transitive neighborhood or j-TN. Some of the basic properties of j-transitive neighborhoods are studied. Also, approximations are obtained through j-TN, and the relationships between them are investigated. It is proven that these approaches provide almost all the properties provided by the approaches given by Pawlak. This study also defines the concepts of lower and upper approximations from the topological view and compares them with some existing topological structures in the literature. In addition, the applicability of the j-TN framework is demonstrated in a medical scenario. The approach proposed here represents a new view in the design of rough set theory and its practical applications to develop the appropriate strategy to handle uncertainty while performing data analysis. Full article
(This article belongs to the Section Mathematics)
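To make the approximation operators concrete, the fragment below computes neighborhood-based lower and upper approximations of a target set; the `neighborhood` function is a placeholder into which a j-transitive neighborhood, as defined in the paper, could be substituted.

```python
def approximations(universe, neighborhood, target):
    """Neighborhood-based rough lower and upper approximations of `target`."""
    lower = {x for x in universe if neighborhood(x) <= target}   # subset test
    upper = {x for x in universe if neighborhood(x) & target}    # non-empty overlap
    return lower, upper

# toy example: neighbours of an integer are the integers within distance 1
U = set(range(6))
nbh = lambda x: {y for y in U if abs(x - y) <= 1}
X = {1, 2, 3}
low, up = approximations(U, nbh, X)
print("lower:", low)   # {2}
print("upper:", up)    # {0, 1, 2, 3, 4}
```
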
20 pages, 2672 KiB  
Article
Persistence Symmetric Kernels for Classification: A Comparative Study
by Cinzia Bandiziol and Stefano De Marchi
Symmetry 2024, 16(9), 1236; https://doi.org/10.3390/sym16091236 - 20 Sep 2024
Abstract
The aim of the present work is a comparative study of different persistence kernels applied to various classification problems. After some necessary preliminaries on homology and persistence diagrams, we introduce five different kernels that are then used to compare their performances of classification on various datasets. We also provide the Python codes for the reproducibility of results and, thanks to the symmetry of kernels, we can reduce the computational costs of the Gram matrices. Full article
(This article belongs to the Special Issue Algebraic Systems, Models and Applications)
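The cost saving mentioned at the end of the abstract comes from kernel symmetry: since k(a, b) = k(b, a), only the upper triangle of the Gram matrix needs to be evaluated. A generic sketch of this trick, with a placeholder kernel rather than one of the five persistence kernels studied, is shown below.

```python
import numpy as np

def gram_matrix(items, kernel):
    """Gram matrix K[i, j] = kernel(items[i], items[j]), exploiting symmetry."""
    n = len(items)
    K = np.zeros((n, n))
    for i in range(n):
        for j in range(i, n):   # n(n+1)/2 kernel evaluations instead of n^2
            K[i, j] = K[j, i] = kernel(items[i], items[j])
    return K

# placeholder kernel on toy point sets standing in for persistence diagrams
rbf = lambda a, b: np.exp(-np.sum((np.mean(a, axis=0) - np.mean(b, axis=0)) ** 2))
diagrams = [np.random.rand(5, 2) for _ in range(4)]
print(gram_matrix(diagrams, rbf))
```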

15 pages, 296 KiB  
Article
Some Aspects of Differential Topology of Subcartesian Spaces
by Liuyi Chen and Qianqian Xia
Symmetry 2024, 16(9), 1235; https://doi.org/10.3390/sym16091235 - 20 Sep 2024
Abstract
In this paper, we investigate the differential topological properties of a large class of singular spaces: subcartesian spaces. First, a minor further result on the partition of unity for differential spaces is derived. Second, the tubular neighborhood theorem for subcartesian spaces with constant structural dimensions is established. Third, the concept of Morse functions on smooth manifolds is generalized to differential spaces. For subcartesian spaces with constant structural dimension, a class of examples of Morse functions is provided. With the assumption that the subcartesian space can be embedded as a bounded subset of a Euclidean space, it is proved that any smooth bounded function on this space can be approximated by Morse functions. The infinitesimal stability of Morse functions on subcartesian spaces is studied. Classical results on Morse functions on smooth manifolds can be treated directly as corollaries of our results here. Full article
18 pages, 5570 KiB  
Article
Electromagnetic Field and Variable Inertia Analysis of a Dual Mass Flywheel Based on Electromagnetic Control
by Hongen Niu, Liping Zeng, Cuicui Wei and Zihao Wan
Symmetry 2024, 16(9), 1234; https://doi.org/10.3390/sym16091234 - 20 Sep 2024
Abstract
The moment of inertia of the primary flywheel and the secondary flywheel in a dual mass flywheel (DMF) directly affects the vibration damping performance in an automotive driveline. To enable better minimization of vibration and noise by changing the moment of inertia of the DMF to adjust the frequency characteristics of the automotive driveline, a new variable inertia DMF structure is proposed by introducing electromagnetic devices. The finite element simulation model of the electromagnetic field of an electromagnetic device is established, the electromagnetic field characteristics in the structure are analyzed, and the variation in the electromagnetic force under different air gaps and current conditions is obtained. The electromagnetic force test system of the electromagnetic device is constructed, and the validity of the finite element simulation analysis of the electromagnetic field of the electromagnetic device is verified. A mechanical model of the electromagnetic device is established to analyze the characteristics of the displacement of the moving mass in the structure as well as the variation in the moment of inertia of the DMF at different rotational speeds and currents. The maximum adjustable proportion of its moment of inertia can reach 15.07%. A torsional model of the automotive driveline is established to analyze the effect of variable inertia DMF on the resonance frequency of the system under different currents. The results show that the electromagnetic device introduced in the DMF can realize the active adjustment of the moment of inertia and enable the resonance frequency to decrease with increasing rotational speed, which expands the idea of optimizing the vibration damping performance of the DMF and provides a reference for better control of the torsional vibration of the automobile or other mechanical transmission systems. Full article
(This article belongs to the Section Physics)

21 pages, 4736 KiB  
Article
Consistency Analysis of Collaborative Process Data Change Based on a Rule-Driven Method
by Qianqian Wang and Chifeng Shao
Symmetry 2024, 16(9), 1233; https://doi.org/10.3390/sym16091233 - 20 Sep 2024
Abstract
In business process management, business process change analysis is the key link to ensure the flexibility and adaptability of the system. The existing methods mostly focus on the change analysis of a single business process from the perspective of control flow, ignoring the influence of data changes on collaborative processes with information interaction. In order to compensate for this deficiency, this paper proposes a rule-driven consistency analysis method for data changes in collaborative processes. Firstly, it analyzes the influence of data changes on other elements (such as activities, data, roles, and guards) in collaborative processes, and gives the definition of data influence. Secondly, the optimal alignment technology is used to explore how data changes interfere with the expected behavior of deviation activities, and decision rules are integrated into the Petri net model to accurately evaluate and screen out the effective expected behavior that conforms to business logic and established rules. Finally, the initial optimal alignment is repaired according to the screened effective expected behavior, and the consistency of business processes is recalculated. The experimental results show that the introduced rule constraint mechanism can effectively avoid the misjudgment of abnormal behavior. Compared with the traditional method, the average accuracy, recall rate, and F1-score of effective expected behavior are improved by 4%, 4.7%, and 4.3%, respectively. In addition, the repaired optimal alignment significantly enhances the system’s ability to respond quickly and self-adjust to data changes, providing a strong support for the intelligent and automated transformation of business process management. Full article
(This article belongs to the Section Computer)

25 pages, 507 KiB  
Article
Conformable Double Laplace Transform Method (CDLTM) and Homotopy Perturbation Method (HPM) for Solving Conformable Fractional Partial Differential Equations
by Musa Rahamh GadAllah and Hassan Eltayeb Gadain
Symmetry 2024, 16(9), 1232; https://doi.org/10.3390/sym16091232 - 19 Sep 2024
Abstract
In the present article, a method obtained from a combination of the conformable fractional double Laplace transform method (CFDLTM) and the homotopy perturbation method (HPM) is successfully applied to solve linear and nonlinear conformable fractional partial differential equations (CFPDEs). We include three examples to illustrate the presented technique. Moreover, the results show that the proposed method is efficient, dependable, and easy to use for certain problems in PDEs compared with existing methods. The solution graphs show close agreement between the exact and CFDLTM solutions. The outcome obtained by the conformable fractional double Laplace transform method is symmetrical to that obtained using the double Laplace transform. Full article
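For orientation, the conformable fractional derivative underlying the transform is usually defined as follows (the standard definition of Khalil et al., stated here for reference rather than quoted from the article), together with the associated conformable Laplace transform:

```latex
T_\alpha f(t) = \lim_{\varepsilon \to 0}
  \frac{f\left(t + \varepsilon\, t^{\,1-\alpha}\right) - f(t)}{\varepsilon},
  \qquad t > 0,\; 0 < \alpha \le 1,
\qquad
\mathcal{L}_\alpha\{f(t)\}(s) = \int_0^{\infty}
  e^{-s\, t^{\alpha}/\alpha}\, f(t)\, t^{\alpha - 1}\, dt .
```
For a differentiable function, T_\alpha f(t) = t^{1-\alpha} f'(t), which is what makes the conformable calculus compatible with classical Laplace-transform manipulations.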

14 pages, 2078 KiB  
Article
HCAM-CL: A Novel Method Integrating a Hierarchical Cross-Attention Mechanism with CNN-LSTM for Hierarchical Image Classification
by Jing Su, Jianmin Liang, Jiayi Zhu and Yongjiang Li
Symmetry 2024, 16(9), 1231; https://doi.org/10.3390/sym16091231 - 19 Sep 2024
Abstract
Deep learning networks have yielded promising insights in the field of image classification. However, the hierarchical image classification (HIC) task, which involves assigning multiple, hierarchically organized labels to each image, presents a notable challenge. In response to this complexity, we developed a novel framework (HCAM-CL), which integrates a hierarchical cross-attention mechanism with a CNN-LSTM architecture for the HIC task. The HCAM-CL model effectively identifies the relevance between images and their corresponding labels while also being attuned to learning the hierarchical inter-dependencies among labels. Our versatile model is designed to manage both fixed-length and variable-length classification pathways within the hierarchy. In the HCAM-CL model, the CNN module is responsible for the essential task of extracting image features. The hierarchical cross-attention mechanism vertically aligns these features with hierarchical levels, uniformly weighing the importance of different spatial regions. Ultimately, the LSTM module is strategically utilized to generate predictive outcomes by treating HIC as a sequence generation challenge. Extensive experimental evaluations on CIFAR-10, CIFAR-100, and design patent image datasets demonstrate that our HCAM-CL framework consistently outperforms other state-of-the-art methods in hierarchical image classification. Full article
(This article belongs to the Section Computer)

29 pages, 2965 KiB  
Article
The Robust Supervised Learning Framework: Harmonious Integration of Twin Extreme Learning Machine, Squared Fractional Loss, Capped L2,p-norm Metric, and Fisher Regularization
by Zhenxia Xue, Yan Wang, Yuwen Ren and Xinyuan Zhang
Symmetry 2024, 16(9), 1230; https://doi.org/10.3390/sym16091230 - 19 Sep 2024
Abstract
As a novel learning algorithm for feedforward neural networks, the twin extreme learning machine (TELM) boasts advantages such as simple structure, few parameters, low complexity, and excellent generalization performance. However, it employs the squared L2-norm metric and an unbounded hinge loss function, which tends to overstate the influence of outliers and subsequently diminishes the robustness of the model. To address this issue, scholars have proposed the bounded capped L2,p-norm metric, which can be flexibly adjusted by varying the p value to adapt to different data and reduce the impact of noise. Therefore, we substitute the metric in the TELM with the capped L2,p-norm metric in this paper. Furthermore, we propose a bounded, smooth, symmetric, and noise-insensitive squared fractional loss (SF-loss) function to replace the hinge loss function in the TELM. Additionally, the TELM neglects statistical information in the data; thus, we incorporate the Fisher regularization term into our model to fully exploit the statistical characteristics of the data. Drawing upon these merits, a squared fractional loss-based robust supervised twin extreme learning machine (SF-RSTELM) model is proposed by integrating the capped L2,p-norm metric, SF-loss, and Fisher regularization term. The model shows significant effectiveness in decreasing the impacts of noise and outliers. However, the proposed model’s non-convexity poses a formidable challenge in the realm of optimization. We use an efficient iterative algorithm to solve it based on the concave-convex procedure (CCCP) algorithm and demonstrate the convergence of the proposed algorithm. Finally, to verify the algorithm’s effectiveness, we conduct experiments on artificial datasets, UCI datasets, image datasets, and NDC large datasets. The experimental results show that our model is able to achieve higher ACC and F1 scores across most datasets, with improvements ranging from 0.28% to 4.5% compared to other state-of-the-art algorithms. Full article
(This article belongs to the Section Mathematics)
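As a reference for the robustness argument above, a capped L_{2,p}-style loss on a residual vector u typically takes the form below (notation assumed here; the paper's exact formulation of the capped L_{2,p}-norm metric may differ in detail):

```latex
d_{\varepsilon,p}(u) = \min\left( \lVert u \rVert_2^{\,p},\ \varepsilon \right),
\qquad 0 < p \le 2,\ \varepsilon > 0 .
```
Residuals larger than \varepsilon^{1/p} contribute at most \varepsilon, which bounds the influence any single outlier can exert on the objective.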

14 pages, 4923 KiB  
Article
Probabilistic Multi-Robot Task Scheduling for the Antarctic Environments with Crevasses
by Seokjin Kang and Heoncheol Lee
Symmetry 2024, 16(9), 1229; https://doi.org/10.3390/sym16091229 - 19 Sep 2024
Abstract
This paper deals with the problem of multi-robot task scheduling in the Antarctic environments with crevasses. Because the crevasses may cause hazardous situations when robots are operated in the Antarctic environments, robot navigation should be planned to safely avoid the positions of crevasses. However, the positions of the crevasses may be inaccurately measured due to the lack of sensor performance, the asymmetry of sensor data, and the possibility of crevasses drifting irregularly as time passes. To overcome these uncertain and asymmetric problems, this paper proposes a probabilistic multi-robot task scheduling method based on the Nearest Neighbors Test (NNT) algorithm and the probabilistic modeling of the positions of crevasses. The proposed method was tested with a Google map of the Antarctic environments and showed a better performance than the Ant Colony Optimization (ACO) algorithm and the Genetic Algorithm (GA) in the context of total cost and computational time. Full article
(This article belongs to the Special Issue Symmetry/Asymmetry in Operations Research)

32 pages, 6740 KiB  
Review
Magnetohydrodynamic Waves in Asymmetric Waveguides and Their Applications in Solar Physics—A Review
by Robertus Erdélyi and Noémi Kinga Zsámberger
Symmetry 2024, 16(9), 1228; https://doi.org/10.3390/sym16091228 - 18 Sep 2024
Abstract
The solar atmosphere is a complex, coupled, highly dynamic plasma environment, which shows rich structuring due to the presence of gravitational and magnetic fields. Several features of the Sun’s atmosphere can serve as guiding media for magnetohydrodynamic (MHD) waves. At the same time, these waveguides may contain flows of various magnitudes, which can then destabilise the waveguides themselves. MHD waves were found to be ubiquitously present in the solar atmosphere, thanks to the continuous improvement in the spatial, temporal, and spectral resolution of both space-born and ground-based observatories. These detections, coupled with recent theoretical advancements, have been used to obtain diagnostic information about the solar plasma and the magnetic fields that permeate it, by applying the powerful concept of solar magneto-seismology (SMS). The inclusion of asymmetric shear flows in the MHD waveguide models used may considerably affect the seismological results obtained. Further, they also influence the threshold for the onset of the Kelvin–Helmholtz instability, which, at high enough relative flow speeds, can lead to energy dissipation and contribute to the heating of the solar atmosphere—one of the long-standing and most intensely studied questions in solar physics. Full article
(This article belongs to the Special Issue Symmetry in Magnetohydrodynamic Flows and Their Applications)

24 pages, 20112 KiB  
Article
Balance Controller Design for Inverted Pendulum Considering Detail Reward Function and Two-Phase Learning Protocol
by Xiaochen Liu, Sipeng Wang, Xingxing Li and Ze Cui
Symmetry 2024, 16(9), 1227; https://doi.org/10.3390/sym16091227 - 18 Sep 2024
Abstract
As a complex nonlinear system, the inverted pendulum (IP) system has the characteristics of asymmetry and instability. In this paper, the IP system is controlled by a learned deep neural network (DNN) that directly maps the system states to control commands in an end-to-end style. On the basis of deep reinforcement learning (DRL), the detail reward function (DRF) is designed to guide the DNN learning control strategy, which greatly enhances the pertinence and flexibility of the control. Moreover, a two-phase learning protocol (offline learning phase and online learning phase) is proposed to solve the “real gap” problem of the IP system. Firstly, the DNN learns the offline control strategy based on a simplified IP dynamic model and DRF. Then, a security controller is designed and used on the IP platform to optimize the DNN online. The experimental results demonstrate that the DNN has good robustness to model errors after secondary learning on the platform. When the length of the pendulum is reduced by 25% or increased by 25%, the steady-state error of the pendulum angle is less than 0.05 rad. The error is within the allowable range. The DNN is robust to changes in the length of the pendulum. The DRF and the two-phase learning protocol improve the adaptability of the controller to the complex and variable characteristics of the real platform and provide reference for other learning-based robot control problems. Full article
(This article belongs to the Section Engineering and Materials)
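The abstract does not spell out the individual DRF terms, so the snippet below is only a generic example of a densely shaped balancing reward with separate penalties on pole angle, angular rate, cart position, and control effort; the weights are illustrative and are not the DRF proposed in the paper.

```python
def shaped_reward(theta, theta_dot, x, u,
                  w_angle=1.0, w_rate=0.1, w_pos=0.5, w_effort=0.01):
    """Generic dense reward for pendulum balancing (illustrative weights only)."""
    return -(w_angle * theta ** 2        # keep the pole upright
             + w_rate * theta_dot ** 2   # damp the angular velocity
             + w_pos * x ** 2            # keep the cart near the track centre
             + w_effort * u ** 2)        # discourage large control commands

print(shaped_reward(theta=0.05, theta_dot=0.2, x=0.1, u=1.0))
```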

7 pages, 163 KiB  
Editorial
The Optimization of Aviation Technologies and Design Strategies for a Carbon-Neutral Future
by Zheng Xu, Jinze Pei and Yue Song
Symmetry 2024, 16(9), 1226; https://doi.org/10.3390/sym16091226 - 18 Sep 2024
Abstract
This Special Issue systematically reviews and summarizes the latest research into carbon neutrality technology and symmetry principles in power engineering and engineering thermophysics [...] Full article
23 pages, 1008 KiB  
Article
A Channel-Sensing-Based Multipath Multihop Cooperative Transmission Mechanism for UE Aggregation in Asymmetric IoE Scenarios
by Hua-Min Chen, Ruijie Fang, Shoufeng Wang, Zhuwei Wang, Yanhua Sun and Yu Zheng
Symmetry 2024, 16(9), 1225; https://doi.org/10.3390/sym16091225 - 18 Sep 2024
Abstract
With the continuous progress and development of technology, the Internet of Everything (IoE) is gradually becoming a research hotspot. More companies and research institutes are focusing on the connectivity and transmission between multiple devices in asymmetric networks, such as V2X, the Industrial Internet of Things (IIoT), environmental monitoring, disaster management, agriculture, and so on. The number of devices and the business volume of these applications have rapidly increased in recent years, which will lead to a heavy load on terminals and affect the efficiency of IoE data transmission. To deal with this issue, it has been proposed to perform data transmission via multipath cooperative transmission combined with multihop transmission. This approach aims to improve transmission latency, energy consumption, reliability, and throughput. This paper designs a channel-sensing-based cooperative transmission mechanism (CSCTM) with hybrid automatic repeat request (HARQ) for the user equipment (UE) aggregation mechanism in future asymmetric IoE scenarios, which ensures that IoE device data can be transmitted quickly and reliably, and supports real-time data processing and analysis. The main contents of the proposed method include strategies for cooperative transmission and redundancy version (RV) determination, a joint decoding process at the receiving side, and a transmission-priority design based on the ascending offset sort (AOS) algorithm and channel sensing. In addition, multihop technology is designed for the multipath cooperative transmission strategy, which enables cooperative nodes (CN) to help the UE to transmit data. As a result, CSCTM provides significant improvements in latency and energy consumption for the whole system, demonstrating enhanced coverage, improved reliability, and minimized latency. Full article
(This article belongs to the Section Computer)

18 pages, 526 KiB  
Article
A New Multimodal Modification of the Skew Family of Distributions: Properties and Applications to Medical and Environmental Data
by Jimmy Reyes, Mario A. Rojas, Pedro L. Cortés and Jaime Arrué
Symmetry 2024, 16(9), 1224; https://doi.org/10.3390/sym16091224 - 18 Sep 2024
Abstract
The skew distribution has the characteristic of appropriately modeling asymmetric unimodal data. However, in practice, there are several cases in which the data present more than one mode. In the literature, it is possible to find a large number of authors who have studied extensions based on the skew distribution to model this type of data. In this article, a new family is introduced, consisting of a multimodal modification to the family of skew distributions. Using the methodology of the weighted version of a function, we perform the product of the density function of a family of skew distributions with a polynomial of degree 4, thus obtaining a more flexible model that allows modeling data sets, whose distribution contains at most three modes. The density function, some properties, moments, skewness coefficients, and kurtosis of this new family are presented. This study focuses on the particular cases of skew-normal and Laplace distributions, although it can be applied to any other distribution. A simulation study was carried out, to study the behavior of the model parameter estimates. Illustrations with real data, referring to medicine and environmental data, show the practical performance of the proposed model in the two particular cases presented. Full article
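The "weighted version of a function" device mentioned above is the standard weighted-distribution construction: given a base skew density f and a non-negative weight w (here a polynomial of degree 4), the modified density is normalized as below. The specific polynomial and the constraints that guarantee at most three modes are developed in the paper and are not reproduced here.

```latex
f_w(x) = \frac{w(x)\, f(x)}{\displaystyle\int_{-\infty}^{\infty} w(t)\, f(t)\, dt},
\qquad w(x) \ge 0 .
```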

10 pages, 3433 KiB  
Article
Tool-Emitted Sound Signal Decomposition Using Wavelet and Empirical Mode Decomposition Techniques—A Comparison
by Emerson Raja Joseph, Hossen Jakir, Bhuvaneswari Thangavel, Azlina Nor, Thong Leng Lim and Pushpa Rani Mariathangam
Symmetry 2024, 16(9), 1223; https://doi.org/10.3390/sym16091223 - 18 Sep 2024
Abstract
Analysis of non-stationary and nonlinear sound signals obtained from dynamical processes is one of the greatest challenges in signal processing. Turning machine operation is a highly dynamic process influenced by many events, such as dynamical responses, chip formations and the operational conditions of machining. Traditional and widely used fast Fourier transformation and spectrogram are not suitable for processing sound signals acquired from dynamical systems as their results have significant deficiencies because of stationary assumptions and having an a priori basis. A relatively new technique, discrete wavelet transform (DWT), which uses Wavelet decomposition (WD), and the recently developed technique, Hilbert–Huang Transform (HHT), which uses empirical mode decomposition (EMD), have notably better properties in the analysis of nonlinear and non-stationary sound signals. The EMD process helps the HHT to locate the signal’s instantaneous frequencies by forming symmetrical envelopes on the signal. The objective of this paper is to present a comparative study on the decomposition of multi-component sound signals using EMD and WD to highlight the suitability of HHT to analyze tool-emitted sound signals received from turning processes. The methodology used to achieve the objective is recording a tool-emitted sound signal by way of conducting an experiment on a turning machine and comparing the results of decomposing the signal by WD and EMD techniques. Apart from the short mathematical and theoretical foundations of the transformations, this paper demonstrates their decomposition strength using an experimental case study of tool flank wear monitoring in turning. It also concludes HHT is more suitable than DWT to analyze tool-emitted sound signals received from turning processes. Full article
(This article belongs to the Section Computer)
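To reproduce this kind of comparison on a synthetic signal, one can place a discrete wavelet decomposition next to an EMD, as sketched below. The sketch assumes the PyWavelets and PyEMD (EMD-signal) packages and a made-up two-component signal; it is not the experimental turning-sound pipeline of the paper.

```python
import numpy as np
import pywt                  # PyWavelets
from PyEMD import EMD        # from the EMD-signal package

# synthetic non-stationary "tool sound": a slow chirp plus a steady tone
t = np.linspace(0, 1, 2048)
signal = np.sin(2 * np.pi * (20 + 60 * t) * t) + 0.5 * np.sin(2 * np.pi * 200 * t)

# discrete wavelet decomposition: one approximation band + detail bands
coeffs = pywt.wavedec(signal, "db4", level=4)
print("DWT sub-band lengths:", [len(c) for c in coeffs])

# empirical mode decomposition: data-driven intrinsic mode functions (IMFs)
imfs = EMD().emd(signal)
print("EMD produced", imfs.shape[0], "IMFs of length", imfs.shape[1])
```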
