1. Introduction
In recent years, with increasing environmental awareness and technological advancements, the electrification of automobiles is accelerating worldwide. However, due to the absence of the masking effect provided by traditional internal combustion engines, the issue of micro-motor vibration in electric vehicles has become more prominent, directly affecting the comfort of passengers and drivers [
1]. Existing vibration optimization methods mainly rely on physical experiments and numerical models, which not only require professional engineers to operate but also fail to fully explore the optimization space, leading to inefficiencies [
2]. Furthermore, the structural parameters are influenced by the precision of manufacturing equipment, a source of uncertainty that is difficult to account for comprehensively during optimization. Therefore, optimizing the NVH performance of automotive micro-motors under parameter uncertainty is both necessary and important.
In the optimization process of micro-motor NVH characteristics, the primary focus is on electromagnetic forces [
3,
4]. During operation, Lorentz and Ampère forces act on the ferromagnetic components inside the motor, generating unbalanced electromagnetic forces. These forces serve as excitation sources that cause vibrations in the motor’s stator structure [
5]. Traditional methods for analyzing and optimizing the NVH of automotive micro-motors typically involve numerical models, mathematical models, and physical experiments. Consequently, many researchers have established mathematical models of motor electromagnetic forces to calculate the impact of periodic electromagnetic forces on vibration [
6,
7,
8]. Jie Xu [
9] reduced the cogging torque pulsation between the stator and rotor by changing the angle of the motor’s rotor. Other researchers have proposed novel stator and rotor structures to reduce motor vibrations. For instance, Omer Gundogmus [
10] proposed placing rectangular windows in both rotor poles and stator poles to reduce the vibration and noise of switched reluctance motors. Some researchers have proposed adjusting the motor current, frequency, phase, and other related parameters through controllers to reduce radiated vibrations [
11,
12]. However, this approach requires real-time monitoring by dedicated sensors; owing to its complexity, cost, and maintenance requirements, it is difficult to apply widely to automotive micro-motors.
Compared to the aforementioned methods, optimizing NVH performance by adjusting the structural parameters of the motor itself offers natural advantages. In the motor design process, achieving an optimal combination of parameters is crucial. However, traditional methods such as Design of Experiments (DOE), Taguchi experiments, and expert judgment often fail to thoroughly explore the entire optimization space [
13,
14]. Moreover, traditional experimental or simulation-based approaches still face significant challenges of high computational complexity and inefficiency, which make it difficult to find high-quality solution sets in multi-objective optimization. In recent years, with the advancement of data-driven algorithms, data-driven methods have gradually replaced traditional methods, significantly reducing the computational load [
15,
16]. For example, Haibo Huang [
17] proposed a data-driven genetic algorithm and TOPSIS algorithm based on deep learning for multi-objective optimization of processing parameters. Pengcheng Wu [
18] introduced a similar approach for optimizing processing parameters. Song Chi [
19] comprehensively discussed three methods to reduce motor vibration and noise, including pole-slot combinations, slot width, and pole arc coefficient. Feng Liu [
20] proposed an improved optimization model that considers the interaction of multiple variables in the motor. Despite significant advancements in optimizing motor NVH characteristics using data-driven methods, several issues still require further exploration.
In practical applications, the impact of uncertainty on outcomes cannot be overlooked. The optimization process itself faces significant computational demands [
21], and the presence of uncertainty further increases this burden. Therefore, further research on uncertainty is necessary. To address parameter uncertainty in the optimization process, numerous theories and methods have been developed, including probability theory, evidence theory, fuzzy set theory, possibility theory, interval analysis, info-gap decision theory, and hybrid methods [
22]. Early methods for optimizing under uncertainty primarily employed probabilistic and stochastic optimization to enhance system robustness. Bernardo [
23] embedded the Taguchi method into stochastic optimization formulas, utilizing efficient volume techniques to calculate the expected objective function value, thereby determining optimal design and robust operation strategies. Romero VJ [
24] advanced design improvements through ordinal optimization, focusing on relative ranking rather than precise quantification. Hao YD [
25] introduced truncated normal distribution and Monte Carlo processes for the first time in the analysis and optimization of the torsional vibration model of the transmission system and rear axle coupling. Gray A [
26] carried out reliability analysis and optimization by fusing mixed uncertainty features and integrating data into design decisions. D. W. Coit [
27] addressed the uncertainty in component reliability estimates, providing optimal trade-off solutions through multi-objective optimization. Li, ZS [
28,
29] proposed robust topology optimization strategies and parallel robust topology optimization methods by quantifying the uncertainties in non-design domain positions and parameters. Beyond probabilistic or specifically parameterized uncertainty design methods, non-probabilistic approaches are also evolving. Yang, C [
30] proposed an SPS interval dynamic model based on non-probabilistic theory. Masatoshi Shimoda [
31] developed a non-parametric optimization method for robust shape design of solids, shells, and frame structures under uncertain loads. Dilaksan Thillaithevan [
32] considered material uncertainties, simulated geometric variations during manufacturing, and designed structures tolerant of micro-scale geometric changes. Haibo Huang [
33] presented an improved interval analysis method for optimizing road noise quality in pure electric vehicles. The optimization process under uncertainty is complex and costly, especially when dealing with high-dimensional problems and large-scale data. However, the emergence of data-driven methods offers new solutions to these challenges.
Through the above analysis, we identified two main deficiencies in traditional optimization methods for the vibration response of automotive micro-motors: (1) the complexity and high cost of traditional optimization methods hinder full exploration of the optimization space, resulting in low optimization efficiency; (2) the structural parameters of micro-motors are often constrained by the precision of manufacturing equipment, and the resulting uncertainties further increase the complexity of the optimization problem, making them difficult for traditional optimization processes to account for and leading to suboptimal results in practical applications. These deficiencies restrict the optimization of the vibration characteristics of automotive micro-motors and impede the development of more efficient, lower-vibration motor designs.
To address these challenges, this study proposes a Pareto ellipsoid parameter method to transform the uncertainty problem in micro-motors into a deterministic problem, thus enabling the use of traditional optimization models to optimize the vibration response of micro-motors. This method allows for a more comprehensive exploration of the optimization space, significantly improves optimization efficiency, and provides practical technical support for the development of the electric vehicle industry.
The remainder of this paper is organized as follows:
Section 2 introduces the proposed method, including the Pareto ellipsoid parameter method for handling uncertainty issues and optimizing the Inception module in the surrogate model to enhance its feature extraction capability and overall performance. Additionally, the reference point optimization method of NSGA-III is adjusted to improve the quality and diversity of the Pareto front solution set.
Section 3 details the experimental and dataset construction process.
Section 4 outlines the construction process of the prediction and optimization models, along with a comparison and discussion of different models. Finally,
Section 5 presents the conclusions.
2. Pareto Ellipsoid Parameter Method and Optimization Approaches for Uncertainty
The overall framework for optimizing the vibration response of automotive micro-motors is illustrated in
Figure 1. This framework consists of four main parts: data collection and preprocessing, construction of the motor vibration surrogate model based on the Inception module, motor vibration optimization using NSGA-III with adaptive reference point adjustment, and discussion of the optimization results.
Step 1: Conduct micro-motor vibration experiments in a semi-anechoic chamber and perform benchmark analysis through simulations and experiments. Apply the Fast Fourier Transform (FFT) [
34] to the collected time-domain data to convert it into frequency-domain data.
Step 2: Based on the data collected in Step 1, establish a surrogate model for the vibration response of automotive micro-motors using an optimized Inception module, and adjust the model’s hyperparameters to ensure it meets accuracy requirements.
Step 3: Utilize the developed surrogate model to establish an NSGA-III optimization model with adaptive reference point adjustment for the vibration response of micro-motors, in order to calculate the optimal combination of design parameters.
Step 4: Select design parameter combinations that meet the requirements from the Pareto front solution set, analyze their vibration response characteristics, and evaluate their performance within the objective feasible region.
2.1. Pareto Ellipsoid Parameter Method for Dealing with Uncertainty
The traditional multi-objective optimization problem can be formulated as follows:
$$
\begin{aligned}
\min_{\mathbf{x}}\ & \mathbf{F}(\mathbf{x}) = \left[f_1(\mathbf{x}), f_2(\mathbf{x}), \ldots, f_m(\mathbf{x})\right]^{\mathrm{T}} \\
\text{s.t.}\ & g_j(\mathbf{x}) \le 0, \quad j = 1, 2, \ldots, p \\
& h_k(\mathbf{x}) = 0, \quad k = 1, 2, \ldots, q \\
& \mathbf{x} = \left[x_1, x_2, \ldots, x_n\right]^{\mathrm{T}}
\end{aligned}
$$
where $\mathbf{F}(\mathbf{x})$ represents the objective function vector, containing $m$ objective values. “s.t.” stands for “subject to” and indicates the constraints that the solution must satisfy. $g_j(\mathbf{x})$ and $h_k(\mathbf{x})$ denote the inequality constraints and equality constraints, respectively, with $p$ inequality constraints and $q$ equality constraints. $\mathbf{x}$ is the vector of decision variables, containing $n$ decision variables, and represents what needs to be determined in the optimization process. The models for $\mathbf{F}(\mathbf{x})$, $g_j(\mathbf{x})$, and $h_k(\mathbf{x})$ are constructed using simulation models, approximations, and physical experimental tests.
If $\mathbf{x}$ involves uncertainty, meaning it contains an uncertainty interval denoted by $\boldsymbol{\delta}$, then the optimization model considering the uncertain decision variables can be expressed as follows:
$$
\begin{aligned}
\min_{\mathbf{x}}\ & \mathbf{F}(\mathbf{x} + \boldsymbol{\delta}) \\
\text{s.t.}\ & g_j(\mathbf{x} + \boldsymbol{\delta}) \le 0, \quad j = 1, 2, \ldots, p \\
& h_k(\mathbf{x} + \boldsymbol{\delta}) = 0, \quad k = 1, 2, \ldots, q \\
& \forall\, \boldsymbol{\delta} \in U
\end{aligned}
$$
where $\boldsymbol{\delta}$ is a random variable or belongs to a known uncertainty set $U$. The objective function and constraints must hold for all possible values of $\boldsymbol{\delta}$. In the optimization process, the uncertainty in the parameters is propagated through the model equations; this means seeking solutions that satisfy the constraints and optimize the objectives as far as possible under uncertain conditions. Traditional optimization methods are usually based on deterministic processes and struggle to handle uncertainty effectively. To address the uncertainty in the decision variables, the Pareto ellipsoid parameter method is introduced to adjust the optimization objectives under uncertainty, enabling traditional models to deal with uncertainty.
In the manufacturing process of automotive micro-motors, it is often challenging to determine the specific probability distributions of the uncertainties accurately. Therefore, Monte Carlo simulation [35] is employed to address these uncertainties in the optimization parameters. By generating and analyzing a large number of random samples, Monte Carlo simulation can effectively estimate and manage uncertainties even in the absence of explicit probability distributions. Using the Monte Carlo method to simulate the range of the uncertainty interval $\boldsymbol{\delta}$ involves generating $N$ uncertainty sample points. The optimization objective values and constraint values for all points are obtained from the computational models of $\mathbf{F}$, $g_j$, and $h_k$, forming function value vectors $\mathbf{y}_s$, $s = 1, 2, \ldots, N$, for the uncertainty interval $\boldsymbol{\delta}$, which are stacked into a matrix $\mathbf{Y}$ of dimensions $N \times d$. The mean vector $\boldsymbol{\mu}$ can be expressed as follows:
$$
\boldsymbol{\mu} = \frac{1}{N}\sum_{s=1}^{N}\mathbf{y}_s
$$
After decentering the function value matrix $\mathbf{Y}$ through $\boldsymbol{\mu}$, its covariance matrix $\boldsymbol{\Sigma}$ can be expressed as follows:
$$
\boldsymbol{\Sigma} = \frac{1}{N-1}\sum_{s=1}^{N}\left(\mathbf{y}_s - \boldsymbol{\mu}\right)\left(\mathbf{y}_s - \boldsymbol{\mu}\right)^{\mathrm{T}}
$$
Performing eigenvalue decomposition on $\boldsymbol{\Sigma}$ yields eigenvalues $\lambda_i$ and eigenvectors $\mathbf{v}_i$. This process can be represented as follows:
$$
\boldsymbol{\Sigma} = \mathbf{V}\boldsymbol{\Lambda}\mathbf{V}^{\mathrm{T}}, \qquad \boldsymbol{\Lambda} = \mathrm{diag}\left(\lambda_1, \lambda_2, \ldots, \lambda_d\right)
$$
where $\mathbf{V}$ is the eigenvector matrix, $\boldsymbol{\Lambda}$ is the diagonal matrix, $\lambda_i$ is the $i$-th diagonal element of the diagonal matrix, and $\mathbf{v}_i$ is the $i$-th column vector of the eigenvector matrix. The corresponding critical value $\chi^2_{d,\alpha}$ of the chi-square distribution with $d$ degrees of freedom is calculated from the predetermined confidence level $\alpha$:
$$
P\left(\chi^2_d \le \chi^2_{d,\alpha}\right) = \alpha
$$
Therefore, the above process can be summarized as the confidence ellipsoid:
$$
\left(\mathbf{y} - \boldsymbol{\mu}\right)^{\mathrm{T}}\boldsymbol{\Sigma}^{-1}\left(\mathbf{y} - \boldsymbol{\mu}\right) \le \chi^2_{d,\alpha}
$$
Using the above process, we obtain the eigenvalues $\lambda_i$ and eigenvectors $\mathbf{v}_i$ corresponding to the function value matrix $\mathbf{Y}$, as well as the critical value $\chi^2_{d,\alpha}$ of the chi-square distribution. Subsequently, the width and height of the Pareto ellipsoid can be calculated from the eigenvalues and the critical value (the semi-axis lengths are $\sqrt{\lambda_i \chi^2_{d,\alpha}}$), and its orientation angle can be determined from the eigenvectors using the arctangent function. The problem caused by uncertainty can thus be represented by the specific area of the Pareto ellipsoid or by its projection onto the objectives or constraints. Through this process, we transform the parameter uncertainty issue in optimizing automotive micro-motors into a deterministic problem, allowing traditional methods to be used to optimize the vibration response. However, propagating parameter uncertainty via numerical models and experimental tests during the optimization process is computationally expensive. To address this issue, we use a data-driven method to replace the traditional numerical model or physical experimental process. This approach allows the propagation of uncertainty through the objective function $\mathbf{F}$ and the constraint functions $g_j$ and $h_k$, significantly reducing the computational load and meeting the needs of uncertainty optimization. The specific construction of the data-driven surrogate model is detailed in Section 2.2.
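To make the procedure above concrete, the following Python sketch (not the authors' code) estimates the Pareto ellipsoid parameters from Monte Carlo samples of the response values; the stand-in surrogate function, the design point, the ±0.05 tolerance, and the 95% confidence level are all illustrative assumptions.

```python
# Illustrative sketch: Pareto ellipsoid parameters from Monte Carlo samples.
import numpy as np
from scipy.stats import chi2


def surrogate(x):
    # Stand-in for the trained vibration surrogate; returns two responses.
    return np.array([np.sum(x**2), np.prod(np.cos(x))])


def pareto_ellipsoid(samples, alpha=0.95):
    """samples: (N, d) array of objective/constraint values evaluated at
    N Monte Carlo realizations of the uncertain design parameters."""
    mu = samples.mean(axis=0)                    # mean vector
    sigma = np.cov(samples, rowvar=False)        # covariance matrix
    eigvals, eigvecs = np.linalg.eigh(sigma)     # eigen-decomposition (ascending)
    c = chi2.ppf(alpha, df=samples.shape[1])     # chi-square critical value
    half_axes = np.sqrt(eigvals * c)             # semi-axis lengths of the ellipsoid
    v = eigvecs[:, -1]                           # principal direction (largest eigenvalue)
    angle = np.arctan2(v[1], v[0])               # orientation in the first two dimensions
    return mu, half_axes, angle


# Usage sketch: propagate an assumed +/-0.05 tolerance on each design variable.
rng = np.random.default_rng(0)
x_nominal = np.array([0.8, 0.1, 2.0, 0.5, 3.0])          # hypothetical design point
delta = rng.uniform(-0.05, 0.05, size=(500, x_nominal.size))
Y = np.vstack([surrogate(x_nominal + d) for d in delta])  # shape (500, 2)
mu, half_axes, angle = pareto_ellipsoid(Y)
```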
2.2. Optimization Methods under Parameter Uncertainty
2.2.1. Surrogate Models for Vibration Response
Due to the coupling of multiple physical fields in motors, such as the electromagnetic, structural, and vibration fields, their behavior is strongly nonlinear. To address this, the following improvements are made in constructing the data-driven surrogate model in this paper. First, basic features are extracted from the raw data using a feature extraction module. Next, a one-dimensional time-series Inception module [36] is employed to extract deeper and more complex features across multiple scales, with the structure of the Inception module optimized. Specifically, max pooling and average pooling layers are used to extract significant features and smooth feature maps at different scales, preserving the overall trend of the features. The max pooling layers focus on extracting significant local features, while the average pooling layers retain global information; combining both allows the network to capture local and global feature information simultaneously, enriching the feature representation. Residual connections have been added to the original network architecture of the Inception module: the input features are passed through a convolutional layer and normalized before being connected directly to the output features of the Inception module through residual connections. This helps mitigate the gradient vanishing problem and improves training efficiency. Following the Inception module, an attention mechanism is incorporated to adaptively assign appropriate weights to the features at different scales and positions extracted by the multiple branches of the Inception module. The improved Inception module architecture is shown in
Figure 2.
The features extracted by the Inception module vary in importance at different scales and depths and cannot be treated equally. To better adaptively assign weights to the multi-scale features extracted by the Inception module, we combined channel attention and spatial attention mechanisms [
37] to improve the overall performance of the model. The channel attention mechanism identifies the importance of different channels in the feature map, while the spatial attention mechanism captures the significance of different positions within the feature map. The integrated attention mechanism is shown in
Figure 3, where the green sections represent the channel attention mechanism and the blue sections represent the spatial attention mechanism. In
Figure 3, the blue and green feature maps represent the weight-capturing processes of max pooling and average pooling, respectively. The channel attention mechanism applies max pooling and average pooling to the input features to obtain two channel descriptors, which are passed through a multi-layer perceptron (MLP) and a sigmoid activation to generate a weight for each channel. The Inception feature map is then multiplied by these channel weights to obtain the weighted features. The spatial attention mechanism operates similarly, but the pooling is performed across channels to produce spatial descriptors. The color intensity of the features in
Figure 3 indicates the magnitude of the weight. This attention mechanism integrates both channel and spatial attention, weighting the importance of features in the channel and spatial dimensions, respectively. By fusing the original features and weighted features through residual connections, more distinctive features are ultimately produced. This combined attention mechanism allows for more precise capture and processing of multi-scale feature information, enhancing the feature extraction capability of the Inception module and the overall performance of the model.
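As an illustration of the architecture just described, the following PyTorch sketch combines a one-dimensional Inception-style block (with max and average pooling branches), a residual shortcut with a convolution and batch normalization, and channel plus spatial attention; the kernel sizes, channel counts, and reduction ratio are placeholders rather than the paper's actual settings.

```python
# Illustrative sketch of a 1-D Inception block with pooling branches,
# residual connection, and channel + spatial attention.
import torch
import torch.nn as nn


class ChannelSpatialAttention(nn.Module):
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(channels, channels // reduction),
                                 nn.ReLU(),
                                 nn.Linear(channels // reduction, channels))
        self.spatial = nn.Conv1d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):                        # x: (batch, channels, length)
        # Channel attention from max- and average-pooled channel descriptors.
        ca = torch.sigmoid(self.mlp(x.amax(dim=2)) + self.mlp(x.mean(dim=2)))
        x = x * ca.unsqueeze(-1)
        # Spatial attention from per-position max/mean over channels.
        sa = torch.sigmoid(self.spatial(
            torch.cat([x.amax(dim=1, keepdim=True), x.mean(dim=1, keepdim=True)], dim=1)))
        return x * sa


class Inception1D(nn.Module):
    def __init__(self, in_ch, branch_ch=16):
        super().__init__()
        self.b1 = nn.Conv1d(in_ch, branch_ch, kernel_size=1)
        self.b3 = nn.Conv1d(in_ch, branch_ch, kernel_size=3, padding=1)
        self.b5 = nn.Conv1d(in_ch, branch_ch, kernel_size=5, padding=2)
        self.bmax = nn.Sequential(nn.MaxPool1d(3, stride=1, padding=1),
                                  nn.Conv1d(in_ch, branch_ch, kernel_size=1))
        self.bavg = nn.Sequential(nn.AvgPool1d(3, stride=1, padding=1),
                                  nn.Conv1d(in_ch, branch_ch, kernel_size=1))
        out_ch = 5 * branch_ch
        self.shortcut = nn.Sequential(nn.Conv1d(in_ch, out_ch, kernel_size=1),
                                      nn.BatchNorm1d(out_ch))
        self.attn = ChannelSpatialAttention(out_ch)

    def forward(self, x):
        y = torch.cat([self.b1(x), self.b3(x), self.b5(x),
                       self.bmax(x), self.bavg(x)], dim=1)
        y = self.attn(y) + self.shortcut(x)      # residual fusion of weighted features
        return torch.relu(y)


# Usage sketch: a batch of 8 single-channel time series of length 256.
out = Inception1D(in_ch=1)(torch.randn(8, 1, 256))   # -> (8, 80, 256)
```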
The steps for constructing the surrogate model of the vibration response of automotive micro-motors are as follows:
(1) A parameter combination table is designed using the Latin Hypercube Sampling (LHS) method [38]. Specifically, the five design parameters are stratified and sampled to generate multiple parameter combinations. These five parameters are the pole embrace, pole arc offset, slot wedge maximum width, magnetic tile round corner, and maximum magnet thickness; each parameter is illustrated in Figure 4. Subsequently, physical experiments are conducted to obtain the optimization objectives and constraints under the different parameter combinations (a minimal sampling sketch is given after this list).
(2) Using the obtained dataset, a surrogate model for predicting the vibration response of the micro-motor is constructed with the optimized Inception module and attention mechanism.
(3) The hyperparameters of the surrogate model are adjusted to improve its prediction accuracy, ensuring it meets the requirements for subsequent optimization.
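A minimal sketch of step (1) is given below, using SciPy's Latin hypercube sampler; the parameter bounds shown are placeholders standing in for the values listed in Table 1.

```python
# Illustrative LHS sampling of the five design parameters.
import numpy as np
from scipy.stats import qmc

# [pole embrace, pole arc offset, slot wedge max width,
#  magnet round corner, max magnet thickness] -- placeholder bounds
lower = np.array([0.60, 0.0, 1.5, 0.2, 2.5])
upper = np.array([0.95, 0.3, 3.0, 1.0, 3.5])

sampler = qmc.LatinHypercube(d=5, seed=0)
unit_samples = sampler.random(n=100)             # 100 points in the unit hypercube
designs = qmc.scale(unit_samples, lower, upper)  # scaled to the parameter bounds
```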
2.2.2. Optimization Method for Vibration Response
Compared to the genetic algorithm (GA), NSGA-III is better suited to handling many-objective optimization problems [39]. It introduces a selection mechanism based on reference points, which helps find a more evenly distributed Pareto front solution set and thereby maintains good performance in high-dimensional objective spaces. However, the selection of reference points in NSGA-III is particularly crucial: if the reference points are distributed inappropriately, some objectives may be ignored or unevenly weighted during the optimization process, degrading the algorithm's performance. Moreover, the reference points in NSGA-III are predefined and cannot be adjusted dynamically according to changes in the population during the optimization process, which limits the guidance of the search and may reduce the quality of the solution set. To address this issue, this paper proposes a two-stage update strategy for the reference points, consisting of a population exploration stage and a refinement stage, to improve the model's search capability. The specifics are as follows:
In the population exploration stage, predefined, uniformly distributed reference points are used, and the training iterations and optimization begin. The reference points are sorted by the shortest distance between each reference point and the population objective values, forming a reference point set $R$:
$$
R = \left\{ \mathbf{r}_1, \mathbf{r}_2, \ldots, \mathbf{r}_K \right\}, \qquad d\!\left(\mathbf{r}_i\right) = \min_{\mathbf{z} \in Z} \left\| \mathbf{r}_i - \mathbf{z} \right\|, \qquad d\!\left(\mathbf{r}_1\right) \le d\!\left(\mathbf{r}_2\right) \le \cdots \le d\!\left(\mathbf{r}_K\right)
$$
where $K$ is the number of reference points and $Z$ is the set of population objective values; the better-ranked reference points are retained. The K-means method [40] is used to adaptively divide the population objective values into two blocks: the optimal set of reference points is saved, and the worse points are discarded. This ensures that the reference point set always reflects the distribution of the current population and the changes in the objective values during the optimization process. By traversing different cluster numbers and calculating the average silhouette score of each cluster number from the spatial positions of the population objective values and the current reference points, the optimal cluster number is found using the silhouette criterion [41]:
$$
s(i) = \frac{b(i) - a(i)}{\max\left\{ a(i),\, b(i) \right\}}
$$
where $a(i)$ represents the average distance from a point $i$ to all other points within its cluster, while $b(i)$ is the average distance from point $i$ to all data points in the nearest cluster to which it does not belong. By calculating the overall silhouette coefficient $S$ corresponding to different numbers of clusters $k$, the $k$ with the highest $S$ is selected as the optimal number of clusters. The K-means method is then used to adaptively divide the population objective values into the optimal number of blocks, which are enclosed by the smallest convex hull. New reference points are uniformly sampled from outside the convex hull, with the number of new reference points matching the number of discarded reference points, and iterative optimization continues with the new reference points. As the population gradually converges, convergence is assessed by the following criterion:
$$
\varepsilon = \frac{1}{N_{p}} \sum_{i=1}^{N_{p}} \left\| \mathbf{p}_i - \mathbf{q}_i \right\|
$$
where $\mathbf{p}_i$ is a point from the Pareto ellipsoid parameters in the previous iteration, and $\mathbf{q}_i$ is the value closest to $\mathbf{p}_i$ in the current iteration. The matching error $\varepsilon$ is calculated from these nearest-neighbor points. When the matching error remains below a preset threshold for a given number of consecutive iterations, the population is considered to have converged. During the first stage, when the population has not yet converged, sampling new reference points outside the convex hull encourages the model to explore actively, avoiding convergence to only a few local optima. The initial process of the second stage is the same as that of the first stage; the difference is that after the model converges, additional reference points are sampled from inside the convex hull.
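The clustering and convergence logic described above can be sketched as follows (an illustrative implementation, not the authors' code), using scikit-learn's K-means and silhouette score together with a nearest-neighbor matching error; the cluster range and the convergence threshold are assumptions.

```python
# Illustrative sketch: silhouette-based cluster selection and convergence test.
import numpy as np
from scipy.spatial import cKDTree
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score


def optimal_clusters(points, k_range=range(2, 6)):
    """Return the cluster count with the highest mean silhouette score."""
    scores = {k: silhouette_score(points,
                                  KMeans(n_clusters=k, n_init=10,
                                         random_state=0).fit_predict(points))
              for k in k_range}
    return max(scores, key=scores.get)


def matching_error(prev_front, curr_front):
    """Mean distance from each previous-front point to its nearest current point."""
    dists, _ = cKDTree(curr_front).query(prev_front)
    return dists.mean()


# Usage sketch with random stand-in objective values.
rng = np.random.default_rng(0)
front_prev, front_curr = rng.random((50, 2)), rng.random((60, 2))
k_best = optimal_clusters(front_curr)
converged = matching_error(front_prev, front_curr) < 1e-3   # assumed threshold
```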
Figure 5 illustrates the update of reference points in the two stages. In
Figure 5a, the silhouette criterion is used to adaptively divide the population and the retained reference points into three regions, with new points selected outside the convex hull in the first stage. Similarly,
Figure 5b shows the adaptive division of the population and retained reference points into two regions, with new points selected inside the convex hull in the second stage. The purpose of the second stage is to conduct more detailed searches around the multiple local optima already found, further improving the quality of the solution set.
Figure 6 shows the NSGA-III optimization process with adaptive reference point adjustment. Overall, this method enhances the diversity of solutions and the global optimization performance.
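For reference, the sketch below sets up a baseline NSGA-III run with the pymoo library (assuming pymoo is installed); the fixed Das-Dennis reference directions used here are what the proposed two-stage adaptive update would replace, and the problem definition with stand-in objectives and placeholder bounds is purely illustrative.

```python
# Illustrative baseline NSGA-III setup with pymoo.
import numpy as np
from pymoo.algorithms.moo.nsga3 import NSGA3
from pymoo.core.problem import Problem
from pymoo.optimize import minimize
from pymoo.util.ref_dirs import get_reference_directions


class MotorVibrationProblem(Problem):
    def __init__(self):
        super().__init__(n_var=5, n_obj=2,
                         xl=np.array([0.60, 0.0, 1.5, 0.2, 2.5]),
                         xu=np.array([0.95, 0.3, 3.0, 1.0, 3.5]))

    def _evaluate(self, X, out, *args, **kwargs):
        # Stand-in objectives; in practice the trained surrogate model would
        # return the (ellipsoid-adjusted) vibration and flux responses.
        out["F"] = np.column_stack([np.sum(X**2, axis=1),
                                    np.sum((X - 1.0)**2, axis=1)])


ref_dirs = get_reference_directions("das-dennis", 2, n_partitions=12)
algorithm = NSGA3(ref_dirs=ref_dirs, pop_size=92)
res = minimize(MotorVibrationProblem(), algorithm, ("n_gen", 100),
               seed=1, verbose=False)
```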
3. Vehicle Micro-Motor Vibration Response Experiments
In this study, we used a 4-pole 12-slot electric seat motor for the experiment. To minimize external environmental interference, the experiment was conducted in a semi-anechoic chamber. The laboratory temperature was maintained at 25 °C, and the humidity at 50%. A B05Y32 tri-axial accelerometer was used to record the vibration acceleration. Following the Nyquist sampling theorem, the sampling frequency for vibration acceleration was set to 25,600 Hz, enabling the analysis of frequencies up to 12,800 Hz. Data was processed using the Simcenter Testlab 16-channel processing system. The main objective of the experiment was to test the stator vibration response of an automotive micro-motor. First, we employed the LHS method to generate 100 uniformly distributed samples in the parameter space based on the upper and lower limits of the parameters in
Table 1 and processed corresponding micro-motor prototypes.
Figure 7a shows the experimental motor sample.
Figure 7b is a schematic diagram of the motor disassembly. The motor must operate in an unconstrained condition to simulate its free vibration; fixing the motor would introduce additional constraints and alter its vibration characteristics, thereby affecting the experimental results. Therefore, during the experiment, the micro-motor was placed at the center of a sponge, and the accelerometer was mounted on the surface of the micro-motor stator. The motor was powered by a DC power supply set to 14 V, and data acquisition was conducted using an LMS SCADAS Mobile system. The data were processed on a PC, with each sample running for 5 s, yielding a total of 100 data samples.
Figure 7c shows all the settings for the entire experiment.
Because physical experiments usually require a large amount of materials and equipment, and measuring the magnetic flux between the motor stator and rotor is relatively difficult and less accurate, the dataset of magnetic flux changes over time was obtained through simulation. In the simulation, ANSYS and Maxwell were used to calculate the motor's vibration performance. After verifying that the simulation results matched the measured vibration response, the magnetic flux data over time were extracted for each parameter combination.
The collected data were analyzed in the frequency domain using the FFT to obtain the amplitude at the motor's commutation frequency.
Figure 8a shows the motor stator vibration test results. The figure indicates that the vibration response at the commutation frequency of 600 Hz is significantly higher than at other frequencies, verifying the above analysis.
Figure 8b shows the magnetic flux changes over time, indicating that the magnetic flux fluctuates and exhibits a certain level of repeatability. In the optimization process, maximizing the magnetic flux is used as a constraint to ensure that the motor’s vibration response is reduced while achieving the highest possible magnetic flux value.
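The frequency-domain post-processing can be illustrated with the short NumPy sketch below, which reads the amplitude at the 600 Hz commutation frequency from the single-sided FFT spectrum of a stand-in acceleration signal sampled at 25,600 Hz for 5 s; the synthetic signal is an assumption used only to make the example runnable.

```python
# Illustrative FFT post-processing of a vibration acceleration signal.
import numpy as np

fs = 25_600                                   # sampling frequency [Hz]
t = np.arange(0, 5.0, 1.0 / fs)
accel = np.sin(2 * np.pi * 600 * t)           # stand-in for measured acceleration

spectrum = np.fft.rfft(accel)
freqs = np.fft.rfftfreq(accel.size, d=1.0 / fs)
amplitude = 2.0 * np.abs(spectrum) / accel.size   # single-sided amplitude spectrum

idx = np.argmin(np.abs(freqs - 600.0))        # bin closest to the commutation frequency
print(f"amplitude at {freqs[idx]:.1f} Hz: {amplitude[idx]:.3f}")
```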