Article

Objective Evaluation of Motion Cueing Algorithms for Vehicle Driving Simulator Based on Criteria Importance through Intercriteria Correlation (CRITIC) Weight Method Combined with Gray Correlation Analysis

Hebei Provincial Key Laboratory of Heavy Machinery Fluid Power Transmission and Control, School of Mechanical Engineering, Yanshan University, Qinhuangdao 066004, China
*
Author to whom correspondence should be addressed.
Machines 2024, 12(5), 344; https://doi.org/10.3390/machines12050344
Submission received: 11 April 2024 / Revised: 8 May 2024 / Accepted: 9 May 2024 / Published: 16 May 2024
(This article belongs to the Section Automation and Control Systems)

Abstract

Perception-based fidelity evaluation metrics are crucial in driving simulators, as they play a key role in the automatic tuning, assessment, and comparison of motion cueing algorithms. Nevertheless, there is presently no unified and effective evaluation framework for these algorithms. To tackle this challenge, our study initially establishes a model rooted in visual–vestibular interaction and head tilt angle perception systems. We then employ metrics like the Normalized Average Absolute Difference (NAAD), Normalized Pearson Correlation (NPC), and Estimated Delay (ED) to devise an evaluation index system. Furthermore, we use a combined approach incorporating CRITIC and gray relational analysis to ascertain the weights of these indicators. This allows us to consolidate them into a comprehensive evaluation metric that reflects the overall fidelity of motion cueing algorithms. Subjective evaluation experiments validate the reasonableness and efficacy of our proposed Perception Fidelity Evaluation (PFE) method.

1. Introduction

Driving simulators play an increasingly vital role in today’s societal development due to their high safety, reliability, low cost, and diverse training scenarios [1]. However, due to spatial constraints, simulators cannot fully replicate the motion of real vehicles. To address this issue, numerous scholars have long been dedicated to researching and optimizing various motion cueing algorithms, including the classical motion cueing algorithm (CMCA) [2], the adaptive motion cueing algorithm (AMCA) [3], the optimal motion cueing algorithm (OMCA) [4], the fuzzy-tuning-based adaptive washout algorithm (FMCA) [5], and the model predictive control (MPC)-based MCA [6], among others. Table 1 shows the characteristics of each MCA.
Perception-based evaluation metrics are indispensable in driving simulators, playing a crucial role in the automatic tuning, assessment, and comparison of motion cueing algorithms [7]. However, there is currently a lack of a unified and effective evaluation scheme for motion cueing algorithms. Consequently, numerous researchers have conducted studies on perception-based evaluation metrics.
Ellensohn [8] devised a rating system to evaluate the fidelity of motion simulation by having 35 participants rate the simulation continuously during simulated driving. While this method provides a straightforward evaluation, its accuracy and reliability are compromised by the various subjective factors influencing participants’ ratings. Casas proposed Motion Fidelity Evaluation (MFE) and Perception Fidelity Evaluation (PFE) [9]. MFE involves comparing the motion signals generated by the simulator with the expected motion signals, as in the Objective Motion Cueing Test (OMCT) suggested by Advani and Hosman [10] (see Figure 1). The reliability of motion simulation is assessed by comparing the magnitude and phase differences between the vestibular stimulation signals provided by the motion simulation system (output) and the actual motion signals (input) [11].
Hess [12,13] added the pilot structural model and aircraft dynamics models to the evaluation system (see Figure 2) and put forward a Handling Qualities Sensitivity Function (HQSF) to analyze the similarity between the pilot–aircraft system and the pilot–simulator system from the perspective of frequency domain. The higher the similarity, the better the performance of the MCA.
The advantage of the MFE is that it has a definite signal comparison and analysis method and well-defined evaluation indicators. However, the OMCT does not take human perception into account, even though human perception is used to optimize MCAs and is important for MCA evaluation. In the Hess method, the model and the determination of the fidelity metric are complicated, and the method generalizes poorly to other types of driving simulators.
The process of PFE is shown in Figure 3. The input and output signals are denoted as follows:
U0: motion signal of real vehicle.
U1: motion signal of reference point of motion platform.
R0: pose signal of motion platform.
L0: signal of variation in elongation of each actuator.
P0: expected perceptual signal.
P1: simulated perceptual signal.
First, U0 is processed by the MCA and the inverse kinematics solution to obtain L0, which controls the pose of the motion platform and thereby yields R0. Then, U0 and U1 are each passed through the motion perception model, i.e., the vestibular model, to obtain P0 and P1, respectively. Finally, P0 and P1 are compared to complete the evaluation of the MCA [14].
Compared to the MFE, the PFE not only considers perception fidelity but also features a simpler model, making it the most widely used method in MCA evaluation and tuning. However, this method has some deficiencies. Firstly, the mathematical model of motion perception in the PFE falls short of reality. Studies indicate that the human motion perception system mainly comprises the visual system, the vestibular system, and the somatosensory system. The visual system perceives angular and linear velocity, while the vestibular system, consisting of the otoliths and semicircular canals, senses specific force and angular velocity. Zacharias [15] reported that both psychophysical and neurophysiological studies support the theory that visual and vestibular cues are jointly processed to provide a perceived sense of self-motion. Accordingly, Telban [16] proposed a visual–vestibular interaction model. Later, Markkula [17] confirmed that, during driving, drivers integrate visual and vestibular cues rather than relying solely on the vestibular system for perceptual information. Tactile or somatosensory receptors sense changes in force on the body, with proprioceptive and kinesthetic senses providing information about the relative position of body parts and their movement [18]. In the study of proprioceptive and kinesthetic senses in humans, Gum [19] developed an initial model that includes muscles, muscle spindles, and the mechanics of head movement. Later, Pham [20] organized and analyzed this model, proposing a model of head tilt angle perception. Currently, only the vestibular system model is used in the PFE, so the perceptual information obtained by the PFE is clearly incomplete.
Secondly, the evaluation result of the PFE is not clear enough, lacking an indicator of the overall performance of the MCA. Typically, P0 and P1, which include specific force in the transversal, horizontal, and vertical directions as well as angular velocity in roll, pitch, and yaw, are compared using curves, as demonstrated in Aminzadeh’s paper [21]. While this approach is relatively straightforward, when comparing and evaluating two or more MCAs, the results can be ambiguous due to factors such as the complexity of U0 or small differences between the MCAs. In other words, there is no specific evaluation indicator to clarify the results. To address this issue, some scholars have employed statistical methods to compare P0 and P1 and have proposed evaluation indices.
For instance, the authors of [22,23] compared P0 and P1 by calculating the root mean square error (RMSE) and correlation coefficient (CC) between them. Pham [20] proposed an evaluation index based on the standard error formula. Casas [24] also introduced a set of relevant reference indicators, including the Normalized Average Absolute Difference (NAAD), Average Absolute Scale (AAS), Normalized Pearson Correlation (NPC), and Estimated Delay (ED). Higher values of each indicator suggest better MCA performance. These four indicators can be categorized into three types: error analysis indicators, which include NAAD and AAS; the correlation analysis indicator, NPC; and the delay analysis indicator, ED. This set of indicators is currently the most comprehensive and is also the focus of this paper.
However, in existing research, the lack of a comprehensive indicator means the final assessment remains unclear when a large number of signal types, such as perceived forces, angular velocities, and head tilt angles, must be compared and evaluated. To improve the accuracy and efficiency of the PFE, this study adopts a more advanced visual–vestibular interaction model in the evaluation scheme, replacing the vestibular model, and introduces a head tilt angle perception model. The output signals of the improved perception system include not only forces and angular velocities but also linear velocities and head tilt angles, making the perceived information more realistic and enhancing the accuracy and reliability of the assessment results. Additionally, to provide clearer evaluation results, this study selects NAAD, NPC, and ED and combines them with the aforementioned perception types to establish an evaluation index system. Furthermore, the weights of each indicator are determined based on CRITIC combined with gray relational analysis, and a comprehensive performance evaluation index for MCAs is proposed. This objective evaluation method, based on CRITIC weights combined with gray relational analysis, holds significant practical value for the assessment and enhancement of MCAs and for motion simulator-based evaluations.
The remainder of this study is organized as follows. Section 2 introduces the visual–vestibular interaction model and the head tilt angle perception model, and enhances the perceptual fidelity evaluation scheme. Section 3 establishes an evaluation indicator system and determines the weight value of each index based on the CRITIC method combined with gray correlation analysis. Section 4 designs a subjective evaluation experiment to verify the rationality and accuracy of the proposed PFE in this paper. Finally, Section 5 provides the conclusions.

2. Materials and Methods

2.1. Visual–Vestibular Interaction Model and Evaluation Scheme

2.1.1. Visual–Vestibular Interaction Model

As depicted in Figure 4 (adapted from Zacharias [15]), the visual–vestibular interaction model for rotational motion can be utilized to estimate perceived motion for yaw, roll, and pitch stimuli. The model mainly comprises the visual system, semicircular canal, adaptive operator, cosine bell function, visual motion gain, and optokinetic influence.
The parameters in the semicircular canal model are τ1 = 80 s and τa = 5.73 s, respectively. In the visual model, τd = 0.09 s. Additionally, τw = 5 s and τva = 2 s. Based on the model shown in Figure 4, we can obtain the cosine bell function shown in Figure 5. The gain K varies between zero and one. A conflict signal greater than the conflict threshold ε drives the gain to zero, whereas a signal below the threshold drives the gain to a value between zero and one, approaching one as ωerr approaches zero. For ωerr less than zero, the gain remains at one. The conflict threshold ε is chosen to match the vestibular indifference motion threshold.
Angular velocities perceived by the visual system and the vestibular system are represented as ωvis and ωvest, respectively. The visual cue is passed through an internal model of the vestibular dynamics to produce an “expected” vestibular signal, which is then subtracted from the actual vestibular signal. To allow for the long-term resolution of steady-state conflict, the absolute value of this difference e is passed through an adaptation operator to generate a conflict signal ωerr. This signal then sets the visual motion gain K via the cosine bell function. The optokinetic influence simulates the impact of the vestibular system perception on the visual system perception.
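To make the conflict-to-gain mapping concrete, the sketch below implements a raised-cosine ("cosine bell") gain with the qualitative behavior described above. The specific functional form between zero and the threshold ε, the function name visual_gain, and the sample threshold of 0.1 rad/s are illustrative assumptions rather than values taken from the paper.

```python
import numpy as np

def visual_gain(w_err: float, eps: float) -> float:
    """Raised-cosine visual motion gain K driven by the conflict signal.

    Qualitative behavior from the text: K = 1 for w_err <= 0, K falls toward
    zero as the conflict approaches the threshold eps, and K = 0 above it.
    The raised-cosine interpolation between 0 and eps is an assumption; the
    paper only gives the shape of Figure 5 qualitatively.
    """
    if w_err <= 0.0:
        return 1.0
    if w_err >= eps:
        return 0.0
    return 0.5 * (1.0 + np.cos(np.pi * w_err / eps))

# Example sweep with a placeholder conflict threshold of 0.1 rad/s.
for w in (-0.05, 0.0, 0.02, 0.05, 0.08, 0.12):
    print(f"w_err = {w:+.2f} rad/s -> K = {visual_gain(w, eps=0.1):.3f}")
```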
The visual–vestibular interaction model for translational motion is depicted in Figure 6, which is employed to estimate perceived motion of linear velocity in three directions: transversal, horizontal, and vertical. Its structure is similar to the rotational model, but the difference lies in the replacement of the semicircular canal model with the otolith model. Since the input of acceleration in this model is instantaneous, an integral component needs to be added after the otolith and before the visual system. The parameters in the otolith model are K = 0.4, τL = 10 s, τ1 = 5 s, and τ2 = 0.016 s, respectively. Additionally, τw = 0.2 s and τva = 2 s.

2.1.2. Model of Tilting Angle Perception of Head

The model of head tilt angle perception proposed by Pham [20] consists of a dynamic model of the human head and a neck muscle spindle dynamic model. During the rolling motion of the body, the disturbance angular acceleration signal uh is input into the head movement model, whose output is the head tilt angle ωh. Subsequently, the muscle spindle generates the sensation of the head angle, yh. Considering the motion inertia of both the human body and the vehicle, as well as the coordinated tilting motion of the driving simulation platform, the input disturbance angular acceleration in the vehicle for this model is as follows:
u_{hv} = \frac{M_h a_{yv} r}{I_h},
where ayv is the lateral acceleration of the vehicle. In the driving simulation platform, the input disturbance angular acceleration is as follows:
u_{hp} = \frac{\varphi_p M_h a_{yp} r}{I_h} + \frac{M_h g r \varphi_p}{I_h},
where φp is the roll angle of the platform, and ayp is the lateral acceleration output of the platform. Here, Mh = 4.6 kg, r = 0.05 m, and Ih = 0.0304 kg·m².
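As a quick numerical illustration, the snippet below evaluates the two disturbance inputs using the constants given above (Mh = 4.6 kg, r = 0.05 m, Ih = 0.0304 kg·m²). The equations are used as reconstructed above, and the sample values of ayv, ayp, and φp are hypothetical.

```python
# Disturbance angular accelerations for the head-tilt perception model.
# Constants are taken from the text; the sample inputs are illustrative only.
M_h, r, I_h, g = 4.6, 0.05, 0.0304, 9.81

def u_h_vehicle(a_yv: float) -> float:
    """Disturbance input in the real vehicle: u_hv = M_h * a_yv * r / I_h."""
    return M_h * a_yv * r / I_h

def u_h_platform(a_yp: float, phi_p: float) -> float:
    """Disturbance input on the platform, following the reconstructed u_hp."""
    return (phi_p * M_h * a_yp * r + M_h * g * r * phi_p) / I_h

print(u_h_vehicle(a_yv=2.0))               # ~15.13 rad/s^2 for 2 m/s^2 lateral acceleration
print(u_h_platform(a_yp=0.5, phi_p=0.05))  # ~3.90 rad/s^2 for a small coordinated tilt
```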

2.1.3. Improved Scheme of MCA Perceptual Fidelity Evaluation

This paper establishes a comprehensive perception system model for humans, which is more advanced than the vestibular model. It includes a visual–vestibular interaction model for rotational and translational motion, a model for perceiving tilting angles of the head, and an otolith model used to perceive specific forces. These models are applied within the PFE scheme. It is important to note that the otolith model mentioned here is the same as the one used in the visual–vestibular interaction model for translational motion. The improved PFE scheme is illustrated in Figure 7.
Since U1 can only be sensed by the vestibular system and somatosensory system, it is necessary to input U0 as visual perception signals into the visual–vestibular interaction model. This further underscores the importance of the motion perception system in evaluating the MCAs. The sensing signals generated by the comprehensive perception system not only include specific force and angular velocity but also linear velocity and head tilt angle. Finally, the closer P0 and P1 are, the better the performance of the MCA.

2.2. Establishment of the Evaluation Indicator System

2.2.1. Key Performance Indicators

As mentioned above, due to the variety of perception and evaluation indicators, some indicators of PFE may perform well while others may perform poorly. This implies that the overall level of the MCAs cannot be adequately reflected, leaving the final result of the PFE somewhat unclear. Therefore, this paper combines the aforementioned indicators through a weighted sum to create a more sophisticated indicator capable of reflecting the overall performance of the MCAs. Prior to this, a system of indicators should be established, as depicted in Figure 8 and Figure 9. The system is divided into three levels.
The formulation of each indicator at the first level is as follows [24]:
\mathrm{NAAD}(x, y) = 1 - \frac{1}{n \cdot x_p} \sum_{i=1}^{n} \left| x_i - y_i \right|,
where x and y represent the data of P0 and P1, respectively. xp represents the peak value of the x signal.
\mathrm{NPC}(x, y) = \frac{PC(x, y) + 1}{2},
with
PC(x, y) = \frac{\sum_{i=1}^{n} (x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\sum_{i=1}^{n} (x_i - \bar{x})^2} \sqrt{\sum_{i=1}^{n} (y_i - \bar{y})^2}},
\mathrm{ED}(x, y) = 1 + \partial(x, y),
where ∂(x,y) represents the offset that needs to be applied to the input signal of the MCA to maximize the Pearson Correlation with respect to the motion generated by the motion platform.
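For reference, the sketch below implements the three first-level indicators as reconstructed above. The function names and the cross-correlation search used to estimate the offset ∂(x, y) are assumptions; in particular, the sign and normalization conventions of the delay term are not spelled out in the text and are chosen here only so that a larger delay yields a smaller ED value.

```python
import numpy as np

def naad(x: np.ndarray, y: np.ndarray) -> float:
    """Normalized Average Absolute Difference between expected (x) and
    simulated (y) perception: 1 - mean(|x - y|) / peak(x), as reconstructed."""
    x_p = np.max(np.abs(x))
    return 1.0 - np.mean(np.abs(x - y)) / x_p

def npc(x: np.ndarray, y: np.ndarray) -> float:
    """Normalized Pearson Correlation, mapping PC in [-1, 1] to [0, 1]."""
    pc = np.corrcoef(x, y)[0, 1]
    return (pc + 1.0) / 2.0

def ed(x: np.ndarray, y: np.ndarray, dt: float) -> float:
    """Estimated Delay indicator.  The offset is estimated as the lag at which
    the cross-correlation of y against x peaks; ED is then 1 minus the
    absolute lag in seconds, an assumed convention equivalent to ED = 1 + d
    with d = -|lag| * dt."""
    x0, y0 = x - x.mean(), y - y.mean()
    corr = np.correlate(y0, x0, mode="full")
    lags = np.arange(-(len(x) - 1), len(y))
    lag = lags[np.argmax(corr)]          # positive when y lags behind x
    return 1.0 - abs(lag) * dt

# Example with synthetic signals: y is a delayed, attenuated copy of x.
t = np.arange(0.0, 10.0, 0.01)
x = np.sin(t)
y = 0.8 * np.sin(t - 0.2)
print(naad(x, y), npc(x, y), ed(x, y, dt=0.01))
```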
Figure 8 includes the overall performance indicator and indicators at the first level. Indicators of the second and third levels under NAAD are shown in Figure 9, where R, P, and Y represent roll, pitch, and yaw, respectively. T, H, and V denote transversal, horizontal, and vertical directions, respectively. The indicator sub-system of NPC and ED is similar to that shown in Figure 9. Subsequently, this paper will determine the weights of each indicator in the entire system.

2.2.2. Weights of Indicators at the First Level

Casas [24] analyzed the correlation between NAAD, NPC, and ED and participants’ perception by combining subjective evaluation with objective evaluation in a subjective experiment and calculated the corresponding correlation coefficients. The higher the correlation between an indicator and participants’ perception, the better the indicator reflects the performance of the MCA in the objective evaluation, and the greater the weight value allocated to it in the indicator system. Therefore, the weight values of the three indicators at the first level are obtained by normalizing these correlation coefficients, as shown in Table 2.
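The normalization in Table 2 can be reproduced directly from the listed correlation coefficients, as the short sketch below shows; the dictionary layout is purely illustrative.

```python
# Reproducing the first-level weights of Table 2: the two correlation
# coefficients per indicator are averaged and the averages are normalized.
corr = {"NAAD": (0.123, 0.218), "NPC": (0.425, 0.495), "ED": (0.347, 0.357)}
avg = {k: sum(v) / len(v) for k, v in corr.items()}
total = sum(avg.values())
weights = {k: round(a / total, 2) for k, a in avg.items()}
print(avg)      # averages ~0.1705, 0.460, 0.352 (the "Average" column)
print(weights)  # {'NAAD': 0.17, 'NPC': 0.47, 'ED': 0.36} (the "Weight" column)
```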
Regarding the determination method of the weight values for all indicators in the second and third levels, in consideration of the objectivity of the PFE, this paper adopts a combination of CRITIC and gray correlation analysis.

2.2.3. Criteria Importance through Intercriteria Correlation (CRITIC)

The objective weight of each indicator in the CRITIC method is calculated based on the information contained in the indicator data. The amount of information is expressed by the standard deviation and correlation coefficient between indicators. Standard deviation represents the variance among indicators, while correlation represents the interrelationship between indicators. As an enhancement of the entropy weight method, it effectively captures the differences and conflicts among indicators and demonstrates strong practical utility. The calculation steps for determining the weights of the proposed indicator system based on CRITIC are listed [25].
Step 1: Suppose there are n evaluation objects and m evaluation indicators. Since the units of the indicators differ, the data are first normalized to remove dimensional effects. The standardization formulas for positive and negative indicators are as follows:
x_{ij} = \frac{x_{ij} - \min_i x_{ij}}{\max_i x_{ij} - \min_i x_{ij}}, \quad \text{for positive indicators},
x_{ij} = \frac{\max_i x_{ij} - x_{ij}}{\max_i x_{ij} - \min_i x_{ij}}, \quad \text{for negative indicators}.
Step 2: Determine the indicator standard deviations using Equation (9):
\sigma_j = \sqrt{\frac{\sum_{i=1}^{n} (x_{ij} - \bar{x}_j)^2}{n}},
where
\bar{x}_j = \frac{1}{n} \sum_{i=1}^{n} x_{ij}.
Step 3: Calculate the correlation coefficient r_{jh} between each pair of indicators:
r_{jh} = \frac{\sum_{i=1}^{n} (x_{ij} - \bar{x}_j)(x_{ih} - \bar{x}_h)}{\sqrt{\sum_{i=1}^{n} (x_{ij} - \bar{x}_j)^2} \sqrt{\sum_{i=1}^{n} (x_{ih} - \bar{x}_h)^2}}.
Step 4: Determine the quantity of information of each indicator as follows:
C_j = \sigma_j \sum_{h=1}^{m} (1 - r_{jh}).
Step 5: Calculate the objective weight of each indicator:
W_j = \frac{C_j}{\sum_{j=1}^{m} C_j}.
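A compact sketch of Steps 1 to 5 is given below. The function name, the example data, and the treatment of all indicators as benefit-type are assumptions for illustration; this is not the authors' implementation.

```python
import numpy as np

def critic_weights(X: np.ndarray, positive: np.ndarray) -> np.ndarray:
    """CRITIC weights for a data matrix X of shape (n objects, m indicators).

    positive[j] is True for benefit-type indicators, False for cost-type ones.
    Follows Steps 1-5 above: min-max normalization, per-indicator standard
    deviation, inter-indicator correlation, information content, normalization.
    """
    X = X.astype(float)
    rng = X.max(axis=0) - X.min(axis=0)
    Z = np.where(positive,
                 (X - X.min(axis=0)) / rng,
                 (X.max(axis=0) - X) / rng)
    sigma = Z.std(axis=0)                 # population standard deviation sigma_j
    R = np.corrcoef(Z, rowvar=False)      # correlation between indicators r_jh
    C = sigma * (1.0 - R).sum(axis=0)     # information content C_j
    return C / C.sum()

# Example with 4 hypothetical evaluation objects and 3 benefit-type indicators.
X = np.array([[0.8, 0.6, 0.9],
              [0.7, 0.7, 0.4],
              [0.9, 0.5, 0.6],
              [0.6, 0.8, 0.7]])
print(critic_weights(X, positive=np.array([True, True, True])))
```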

2.2.4. The Method of Gray Relational Analysis

The gray relational analysis method is an essential component of gray system theory. It compares the correlation among factors based on the similarity of each sequence and the optimal sequence, thus reflecting the relative importance of evaluation attributes in the evaluation sequence to a certain extent. Its advantages lie in its lack of requirements for the number of evaluation attributes and evaluation objects, its simplicity in calculation, its high precision in calculation results, and its wide applicability in multi-attribute evaluation and multi-index comprehensive evaluation in uncertain environments. Below is the basic process of the gray correlation analysis method [26].
The optimal set of each indicator is Y0, that is, Y0 = (Y01, Y02, ⋯, Y0n), and the corresponding actual data are as follows:
Y = \begin{bmatrix} y_{11} & y_{12} & \cdots & y_{1n} \\ y_{21} & y_{22} & \cdots & y_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ y_{t1} & y_{t2} & \cdots & y_{tn} \end{bmatrix},
where yij represents the corresponding data of each indicator.
Standardize the optimal set and original data according to Equation (15), and obtain the following:
M_{ij} = \frac{Y_{ij}}{Y_{0j}},
M = \begin{bmatrix} m_{11} & m_{12} & \cdots & m_{1n} \\ m_{21} & m_{22} & \cdots & m_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ m_{t1} & m_{t2} & \cdots & m_{tn} \end{bmatrix},
M_0 = \begin{bmatrix} m_{01} & m_{02} & \cdots & m_{0n} \end{bmatrix}.
The correlation coefficients of the corresponding elements of each comparison sequence and reference sequence are calculated, respectively, and the formula is as follows:
\xi_{ij} = \frac{\min_i \min_j \left| M_{0j} - M_{ij} \right| + \rho \max_i \max_j \left| M_{0j} - M_{ij} \right|}{\left| M_{0j} - M_{ij} \right| + \rho \max_i \max_j \left| M_{0j} - M_{ij} \right|},
where ρ is the distinguishing coefficient, usually ρ = 0.5.
The correlation degree of each indicator is calculated by Equation (19):
\tau_j = \frac{1}{t} \sum_{i=1}^{t} \xi_{ij}.
According to the correlation degree, the weight of each indicator is as follows:
W_i = \frac{\tau_i}{\sum_{j=1}^{n} \tau_j}.
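The procedure can be sketched as follows; the example matrix, the choice of an all-ones optimal set, and the function name are illustrative assumptions.

```python
import numpy as np

def gra_weights(Y: np.ndarray, Y0: np.ndarray, rho: float = 0.5) -> np.ndarray:
    """Gray-relational-analysis weights for Y of shape (t objects, n indicators),
    with Y0 the optimal value of each indicator.  A minimal sketch of the
    procedure above (standardization, relational coefficients, correlation
    degree, normalization)."""
    M = Y / Y0                          # standardize against the optimal set
    M0 = np.ones(Y.shape[1])            # the optimal sequence maps to ones
    diff = np.abs(M0 - M)
    xi = (diff.min() + rho * diff.max()) / (diff + rho * diff.max())
    tau = xi.mean(axis=0)               # correlation degree per indicator
    return tau / tau.sum()

# Hypothetical example: 3 evaluated MCAs, 4 indicators, optimal value 1.0 each.
Y = np.array([[0.82, 0.74, 0.91, 0.66],
              [0.79, 0.80, 0.85, 0.70],
              [0.88, 0.77, 0.83, 0.72]])
print(gra_weights(Y, Y0=np.ones(4)))
```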

2.2.5. Determine Combined Weights

Once the comprehensive performance evaluation index system for motion cueing algorithms is determined, the quantitative evaluation of an algorithm’s performance first requires the weights of the evaluation indicators to be determined. The methods for determining the weights of evaluation indicators can be divided into subjective weighting and objective weighting. Subjective weighting involves experts directly judging the weights of indicators based on their experience, or experienced personnel scoring the indicators according to certain criteria and then calculating the weights using certain methods. Objective weighting is a method of analyzing and processing the actual attribute values of evaluation indicators to obtain the weights. Common objective weighting methods include the coefficient of variation method, the CRITIC method, principal component analysis, the entropy method, and the gray relational analysis method. The entropy method cannot obtain the weights of indicators directly from the volatility or correlation of the data; thus, it is not suitable for the objective evaluation of motion cueing algorithms. Although the coefficient of variation method considers the volatility of the data and principal component analysis considers the correlation between data, they still lack comprehensive consideration of both.
Combining gray relational analysis with CRITIC allows for a comprehensive and objective consideration of multiple evaluation factors. Firstly, gray relational analysis determines the correlation between factors, and then, CRITIC is used to determine the weights of each factor, thereby ensuring a more comprehensive decision-making process. Moreover, CRITIC objectively determines the weights of each factor, while gray relational analysis helps reduce the influence of subjective factors on the correlation, further enhancing the objectivity of the decision-making process.
In the CRITIC method, the objective weight of each indicator is calculated based on the information contained in the indicator data. Information is represented by the standard deviation and correlation coefficient between indicators. In contrast, the gray relational analysis determines the weights by comparing the correlation between each indicator [27].
In order to leverage the advantages of both the CRITIC and gray correlation analysis methods to enhance the accuracy of weight values, this paper integrates the weight values obtained from both methods. The weights obtained from the CRITIC and gray correlation analysis methods are denoted as W1 = (w11, w12, ⋯, w1t) and W2 = (w21, w22, ⋯, w2t), respectively. Then, the combined weight can be defined as follows:
W = \alpha W_1 + \beta W_2,
where α + β = 1, and
\alpha = \frac{\sum_{i=1}^{t} \alpha_i}{\sum_{i=1}^{t} \alpha_i + \sum_{i=1}^{t} \beta_i}, \quad \beta = \frac{\sum_{i=1}^{t} \beta_i}{\sum_{i=1}^{t} \alpha_i + \sum_{i=1}^{t} \beta_i},
with
\alpha_i = \frac{E(w_{1i})}{E(w_{1i}) + E(w_{2i})}, \quad \beta_i = \frac{E(w_{2i})}{E(w_{1i}) + E(w_{2i})}.
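The combination can be sketched as below, reading E(w) simply as the weight value itself, since E is not defined explicitly in the text; that reading is an assumption. Under it, the result for the second-level ED weights of Table 3 is close to, but not identical to, the published Combined Weight column, so the authors may define E differently.

```python
import numpy as np

def combined_weights(w1: np.ndarray, w2: np.ndarray) -> np.ndarray:
    """Combine CRITIC weights w1 and GRA weights w2 as W = alpha*w1 + beta*w2.

    alpha and beta follow the equations above, with E(w) taken as the weight
    value itself (an assumption; E is not defined in the text)."""
    a_i = w1 / (w1 + w2)
    b_i = w2 / (w1 + w2)
    alpha = a_i.sum() / (a_i.sum() + b_i.sum())
    beta = b_i.sum() / (a_i.sum() + b_i.sum())
    return alpha * w1 + beta * w2

# Second-level ED weights from Table 3 (CRITIC and GRA columns).
w_critic = np.array([0.3784, 0.0591, 0.2775, 0.2850])
w_gra    = np.array([0.2277, 0.3415, 0.2414, 0.1894])
print(combined_weights(w_critic, w_gra))
# ~[0.300, 0.207, 0.259, 0.235] vs. the published [0.3031, 0.2003, 0.2594, 0.2372]
```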
Finally, this paper evaluates 15 MCAs according to the process shown in Figure 7, with the same input signal U0 in each evaluation. The evaluation results, combined with the aforementioned weight calculation method, are used to compute the weights of the indicators at the second and third levels of the indicator system. The final results are presented in Table 3, Table 4 and Table 5, where GRA denotes gray relational analysis.

2.3. Example Analysis

In order to illustrate the detailed process of the objective evaluation method for the MCAs mentioned in this paper, five CMCAs (a, b, c, d, and e) with different parameters are selected as the evaluation objects. Figure 10 displays the vehicle motion signal U0 input in each evaluation process, encompassing the acceleration signal and angular velocity signal, which are collected from the Unity simulation scene. Figure 11 and Figure 12 show the comparison between sensed specific force, sensed angular velocity, sensed linear velocity, and sensed head tilt angle obtained by these five CMCAs and actual perception. Figure 13 shows the calculation results of various perception data using NAAD, NPC, and ED, respectively, where ED_Wx, ED_Fx, ED_Vx, and ED_Hta, respectively, represent the Estimated Delay of sensed rolling angular velocity, sensed specific force, sensed linear velocity, and sensed head tilt angle. Similarly, other abbreviations in the diagram follow the same pattern. It can be observed that the overall performance of each CMCA cannot be accurately judged due to a large number of perception types and evaluation indexes, which is also the main problem to be addressed in this paper. The comprehensive performance of the five CMCAs is depicted in Figure 13d.
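The paper does not write out the rule for aggregating the weighted indicators into the single comprehensive score of Figure 13d; a plausible reading is a nested weighted sum over the three levels, sketched below. The weight values used here come from Tables 2 through 5, but the per-channel indicator scores are placeholders, and the aggregation rule itself is an assumption.

```python
# Plausible comprehensive-score aggregation: third-level scores roll up into
# second-level results, which roll up via the sub-system weights, and the
# NAAD/NPC/ED results are finally combined with the Table 2 weights.
first_level_weights = {"NAAD": 0.17, "NPC": 0.47, "ED": 0.36}  # Table 2

def weighted_sum(scores, weights):
    return sum(s * w for s, w in zip(scores, weights))

# Hypothetical second-level scores for one CMCA (angular velocity, specific
# force, linear velocity, head angle), paired with the published combined
# second-level weights of Tables 4, 5, and 3, respectively.
second_level = {
    "NAAD": weighted_sum([0.82, 0.74, 0.78, 0.80], [0.2729, 0.3322, 0.2209, 0.1740]),
    "NPC":  weighted_sum([0.88, 0.81, 0.79, 0.76], [0.1774, 0.2578, 0.2414, 0.3234]),
    "ED":   weighted_sum([0.90, 0.85, 0.88, 0.92], [0.3031, 0.2003, 0.2594, 0.2372]),
}
comprehensive = sum(second_level[k] * w for k, w in first_level_weights.items())
print(round(comprehensive, 4))
```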

3. Results

To validate the rationality of the PFE and the accuracy of the overall performance index of the MCAs proposed in this paper, a subjective evaluation experiment was designed. The results of this subjective evaluation experiment are then compared with the objective evaluation results.

3.1. Apparatus and the MCAs

The prototype of the motion simulation platform used in this experiment is depicted in Figure 14. The entire set of equipment mainly comprises a visual system, motion system, control system, and driving operating system. The visual system consists of three 32-inch LCD monitors, and a simulation scene based on engineering vehicle kinematics is created using Unity3D software 2021, as illustrated in Figure 15. The scene features various road conditions such as inclines, bends, and straight lanes.
The motion system is a 6-DOF parallel mechanism consisting of an upper platform, a lower platform, six moving branches, and hook hinges. The moving branches are actuated by electric cylinders. The control system comprises a Speedgoat controller master station and a Delta server slave station. The driving operating system includes a game steering wheel, accelerator, and brake pedals. Additionally, the five CMCAs utilized in this experiment are identical to those employed in the example analysis.

3.2. Experimental Procedure

A total of 10 participants (3 female, 7 male) took part in the experiment. All had experience driving ordinary cars but little or no experience with driving simulators.
Firstly, one of the five CMCAs was randomly selected, and each participant was allowed to operate freely on the motion simulation platform for 5 min to familiarize themselves with the operation process.
Then, we commenced the actual trial. Each participant was instructed to follow the same route in the virtual scenario and was given a time limit of 10 min. Upon completion of the operation, participants were asked to rate their experience, with 0 being the lowest and 10 being the highest. A higher score indicates a more realistic experience and better performance of the CMCA. The same participants repeated this process until they completed the experience and evaluation for all five CMCAs.
Finally, the average of the 10 scores for each CMCA was calculated to derive its final subjective evaluation result, as illustrated in Table 6.

4. Discussion

Based on the objective evaluation (Figure 13) and subjective evaluation (Table 6), it can be observed that, in the longitudinal, lateral, vertical, pitch, roll, and yaw channels, the NPC index generally follows a similar trend as the subjective evaluation. In the pitch, roll, and yaw directions, the ED index also aligns with the subjective evaluation trend. The overall trend of the objective and subjective evaluations is consistent.
The correlation coefficient ρ between objective evaluation results and subjective evaluation results is calculated. When the objective evaluation index value and subjective evaluation score are closer to 1 and higher, respectively, it indicates better overall performance of the MCA. Therefore, the closer the correlation coefficient is to −1, the more consistent the two evaluation results are. As shown in Table 7, the objective evaluation results and subjective evaluation results are normalized, respectively. The value of ρ = −0.9786 indicates that the PFE and the overall performance index of the MCA proposed in this paper are reasonable and effective.
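The reported correlation can be checked directly from the normalized results in Table 7:

```python
import numpy as np

# Normalized evaluation results from Table 7, for CMCA (a) through (e).
subjective = np.array([0.165, 0.190, 0.194, 0.214, 0.237])
objective  = np.array([0.207, 0.202, 0.200, 0.199, 0.192])

rho = np.corrcoef(subjective, objective)[0, 1]
print(round(rho, 4))  # -0.9786, matching the value reported in the text
```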

5. Conclusions

The paper proposes an objective evaluation method for the perception fidelity of MCAs based on a visual–vestibular interaction system and head tilt angle perception system model. Compared with the existing PFE method, it not only considers acceleration and angular velocity perception but also integrates linear velocity perception and tilt angle perception, effectively improving the accuracy and reliability of the evaluation results. Experimental results demonstrate that human drivers’ responses to the MCA are generally consistent with the NPC index in the longitudinal, lateral, vertical, pitch, roll, and yaw channels. In the pitch, roll, and yaw directions, the ED index trend is also generally consistent with subjective evaluation. Furthermore, the overall performance of MCA perception fidelity, as reflected by the combination of CRITIC and gray correlation analysis, aligns with the trend of human drivers’ responses to the MCA.
Admittedly, the evaluation indicator system and overall performance indicator proposed in this article are primarily designed for MCAs in automotive simulators. The objective weight of each indicator is determined by the information contained in the indicator data (standard deviation and correlation coefficient). The indicators listed in the article are those commonly used for current driving simulators; further analysis will be needed to incorporate new evaluation indicators and obtain better overall performance indicators.
Future work will further research and validate objective evaluation methods for the motion fidelity of different types of vehicle simulators, as well as other types of simulators (e.g., flight and maritime).

Author Contributions

Methodology, X.J.; Software, X.C.; Resources, Y.J.; Data curation, Y.J.; Writing—original draft, X.J.; Writing—review & editing, L.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by National Natural Science Foundation of China grant number 51875499.

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors on request.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Kwon, S.-J.; Kim, M.-S. A Study on Improvement of Motion Sensation for a Vehicle Driving Simulator Based on Specific Force Gain and Tilt Angle Scale Method. Appl. Sci. 2022, 12, 9473.
2. Conrad, B.; Schmidt, S.F. Motion Drive Signals for Piloted Flight Simulators; No. NASA-CR-1601; NASA: Washington, DC, USA, 1970.
3. Parrish, R.V.; Dieudonne, J.E.; Bowles, R.L.; Martin, D.J. Coordinated adaptive filters for motion simulators. J. Aircr. 1975, 12, 44–50.
4. Sivan, R.; Ish-Shalom, J.; Huang, J.K. An optimal control approach to the design of moving flight simulators. IEEE Trans. Syst. Man Cybern. 1982, 12, 818–827.
5. Asadi, H.; Mohamed, S.; Nahavandi, S. Incorporating human perception with the motion washout filter using fuzzy logic control. IEEE/ASME Trans. Mechatron. 2015, 20, 3276–3284.
6. Mehmet, D.; Gilles, R.; Andras, K.; Marc, B.; Nadia, M. Model-based predictive motion cueing strategy for vehicle driving simulators. Control Eng. Pract. 2009, 17, 995–1003.
7. Zhu, D.; Duan, S.; Fang, D. Development of Cueing Algorithm Based on “Closed-Loop” Control for Flight Simulator Motion System. Wuhan Univ. J. Nat. Sci. 2019, 24, 376–382.
8. Ellensohn, F.; Venrooij, J.; Schwienbacher, M.; Rixen, D. Experimental evaluation of an optimization-based motion cueing algorithm. Transp. Res. Part F Traffic Psychol. Behav. 2019, 62, 115–125.
9. Casas-Yrurzum, S.; Portalés-Ricart, C.; Morillo-Tena, P. On the objective evaluation of motion cueing in vehicle simulations. IEEE Trans. Intell. Transp. Syst. 2021, 22, 3001–3012.
10. Advani, S.; Hosman, R.; Potter, M. Objective motion fidelity qualification in flight training simulators. In Proceedings of the AIAA Modeling and Simulation Technologies Conference and Exhibit, Hilton Head, SC, USA, 20–23 August 2007; pp. 1018–1030.
11. Wei, C.; Wang, J.; Ying, X.; Shang, H.; Chao, J. Credibility evaluation of motion simulation based on visual-vestibular perception coherence. Space Med. Med. Eng. 2017, 30, 66–72.
12. Hess, R.A.; Siwakosit, W. Assessment of flight simulator fidelity in multiaxis tasks including visual cue quality. J. Aircr. 2001, 38, 607–614.
13. Zeyada, Y.; Hess, R.A. Modeling human pilot cue utilization with applications to simulator fidelity assessment. J. Aircr. 2000, 37, 588–597.
14. Casas, S.; Coma, I.; Portalés, C.; Fernández, M. Towards a simulation-based tuning of motion cueing algorithms. Simul. Model. Pract. Theory 2016, 67, 137–154.
15. Zacharias, G. Motion Sensation Dependence on Visual and Vestibular Cues. Ph.D. Thesis, Massachusetts Institute of Technology, Cambridge, MA, USA, 1977.
16. Telban, R.; Cardullo, F. An integrated model of human motion perception with visual-vestibular interaction. In Proceedings of the AIAA Modeling and Simulation Technologies Conference, Montreal, QC, Canada, 9 August 2001.
17. Markkula, G.; Romano, R.; Waldram, R.; Giles, O.; Mole, C.; Wilkie, R. Modelling visual-vestibular integration and behavioural adaptation in the driving simulator. Transp. Res. Part F Traffic Psychol. Behav. 2019, 66, 310–323.
18. Dehouck, T.L.; Mulder, M.; van Paassen, M.M. The effects of simulator motion filter settings on pilot manual control behaviour. In Proceedings of the AIAA Modeling and Simulation Technologies Conference and Exhibit, Keystone, CO, USA, 21–24 August 2006.
19. Gum, D.R. Modeling of the Human Force and Motion-Sensing Mechanisms; Technical Report AFHRL-TR-72-54; Air Force Human Resources Lab: Dayton, OH, USA, 1973.
20. Duc-An, P.; Duc-Toan, N. A novel motion cueing algorithm integrated multi-sensory system–vestibular and proprioceptive system. Proc. Inst. Mech. Eng. Part K J. Multi-Body Dyn. 2020, 234, 256–271.
21. Aminzadeh, M.; Mahmoodi, A.; Sabzehparvar, M. Optimal motion-cueing algorithm using motion system kinematics. Eur. J. Control 2012, 18, 363–375.
22. Asadi, H.; Mohamed, S.; Zadeh, D.R.; Nahavandi, S. Optimisation of nonlinear motion cueing algorithm based on genetic algorithm. Veh. Syst. Dyn. 2015, 53, 526–545.
23. Qazani, M.R.C.; Asadi, H.; Nahavandi, S. A motion cueing algorithm based on model predictive control using terminal conditions in urban driving scenario. IEEE Syst. J. 2021, 15, 445–453.
24. Casas, S.; Coma, I.; Riera, J.V.; Fernández, M. Motion-cuing algorithms: Characterization of users’ perception. Hum. Factors 2015, 57, 144–162.
25. Luo, N.; He, M.; Gao, H. Comprehensive evaluation method for a distribution network based on improved AHP-CRITIC combination weighting and an extension evaluation model. Power Syst. Prot. Control 2021, 49, 86–96.
26. Guan, L.; Gao, Z.; Tu, M.; Li, X.; Jiang, J. An evaluation method of power market operation efficiency based on Grey Relational Analysis of entropy weight method. In Proceedings of the IEEE Sustainable Power and Energy Conference: Energy Transition and Energy Internet (iSPEC 2020), Chengdu, China, 23–25 November 2020; pp. 1114–1120.
27. Peng, X.; Garg, H. Intuitionistic fuzzy soft decision making method based on CoCoSo and CRITIC for CCN cache placement strategy selection. Artif. Intell. Rev. 2022, 55, 1567–1604.
Figure 1. OMCT working scheme.
Figure 2. (a) Pilot–simulator loop and (b) pilot–aircraft loop.
Figure 3. MCA perceptual fidelity evaluation scheme.
Figure 4. Visual–vestibular interaction model for rotational motion.
Figure 5. Modified cosine bell operator for optokinetic influence gain.
Figure 6. The visual–vestibular interaction model for translational motion.
Figure 7. Improved scheme of the MCA perceptual fidelity evaluation.
Figure 8. The overall performance indicator and indicators at the first level.
Figure 9. Indicators of the second level and the third level under NAAD.
Figure 10. Motion signal of real vehicle: (a) longitudinal, lateral, and vertical accelerations and (b) roll, pitch, and yaw angular velocities.
Figure 11. Sensed angular velocity and sensed specific force contrast curve: (a) roll; (b) pitch; (c) yaw; (d) longitudinal; (e) lateral; and (f) vertical.
Figure 12. Sensed linear velocity and sensed head tilt angle contrast curve: (a) longitudinal; (b) lateral; (c) vertical; and (d) roll head tilt.
Figure 13. The results of objective evaluation: (a) ED; (b) NAAD; (c) NPC; and (d) the comprehensive performance.
Figure 14. The motion simulation platform.
Figure 15. The scene features of various roads: (a) the map of the virtual scene; (b) the uphill road; (c) the downhill road; and (d) the bend.
Table 1. Comparison of algorithm characteristics.

Type | Advantages | Disadvantages
CMCA | Fewer parameters, fast calculation. | Fixed parameters, poor applicability.
AMCA | Filter parameters can be adjusted in real time. | The stability is poor, and the optimization efficiency is low.
OMCA | Human perception error is considered for the first time. | The parameters are fixed, and the calculation is complicated.
FMCA | Regulators are plentiful and flexible. | The algorithm structure is complex.
MPC-MCA | Parameters and constraints have a more intuitive relationship. | It is highly dependent on the human perception model, which is not perfect.
Table 2. Weights of each indicator at the first level.

Indicator | Correlation Coefficients | Average | Weight
NAAD | 0.123, 0.218 | 0.1705 | 0.17
NPC | 0.425, 0.495 | 0.460 | 0.47
ED | 0.347, 0.357 | 0.352 | 0.36
Table 3. Weights of indicators of ED sub-system.

Second Level | CRITIC Weight | GRA Weight | Combined Weight | Third Level | CRITIC Weight | GRA Weight | Combined Weight
Angular Velocity | 0.3784 | 0.2277 | 0.3031 | R | 0.2858 | 0.4074 | 0.3466
 | | | | P | 0.2826 | 0.2831 | 0.2829
 | | | | Y | 0.4315 | 0.3095 | 0.3705
Specific Force | 0.0591 | 0.3415 | 0.2003 | H | 0.5138 | 0.2867 | 0.4002
 | | | | T | 0.2768 | 0.2873 | 0.2821
 | | | | V | 0.2095 | 0.4260 | 0.3177
Linear Velocity | 0.2775 | 0.2414 | 0.2594 | H | 0.2627 | 0.3333 | 0.2980
 | | | | T | 0.3208 | 0.2760 | 0.2984
 | | | | V | 0.4166 | 0.3907 | 0.4036
Head Angle | 0.2850 | 0.1894 | 0.2372 | | | |
Table 4. Weights of indicators of NAAD sub-system.

Second Level | CRITIC Weight | GRA Weight | Combined Weight | Third Level | CRITIC Weight | GRA Weight | Combined Weight
Angular Velocity | 0.3127 | 0.2330 | 0.2729 | R | 0.7947 | 0.3048 | 0.5498
 | | | | P | 0.0931 | 0.3603 | 0.2267
 | | | | Y | 0.1122 | 0.3349 | 0.2235
Specific Force | 0.4044 | 0.2600 | 0.3322 | H | 0.9067 | 0.2877 | 0.5972
 | | | | T | 0.0501 | 0.3439 | 0.1969
 | | | | V | 0.0433 | 0.3685 | 0.2059
Linear Velocity | 0.1626 | 0.2791 | 0.2209 | H | 0.6965 | 0.3262 | 0.5114
 | | | | T | 0.2202 | 0.2980 | 0.2591
 | | | | V | 0.0833 | 0.3758 | 0.2295
Head Angle | 0.1203 | 0.2278 | 0.1740 | | | |
Table 5. Weights of indicators of NPC sub-system.

Second Level | CRITIC Weight | GRA Weight | Combined Weight | Third Level | CRITIC Weight | GRA Weight | Combined Weight
Angular Velocity | 0.0974 | 0.2572 | 0.1774 | R | 0.5154 | 0.3067 | 0.4110
 | | | | P | 0.3879 | 0.2904 | 0.3392
 | | | | Y | 0.0967 | 0.4030 | 0.2498
Specific Force | 0.2400 | 0.2756 | 0.2578 | H | 0.7089 | 0.2977 | 0.5033
 | | | | T | 0.2549 | 0.3135 | 0.2842
 | | | | V | 0.0363 | 0.3888 | 0.2125
Linear Velocity | 0.2092 | 0.2737 | 0.2414 | H | 0.2094 | 0.3833 | 0.2963
 | | | | T | 0.0667 | 0.3744 | 0.2206
 | | | | V | 0.7239 | 0.2423 | 0.4831
Head Angle | 0.4535 | 0.1934 | 0.3234 | | | |
Table 6. Results of subjective evaluation.

Participant | CMCA (a) | CMCA (b) | CMCA (c) | CMCA (d) | CMCA (e)
Participant 1 | 7 | 6.5 | 7.5 | 8 | 8
Participant 2 | 6 | 5 | 6 | 7 | 8
Participant 3 | 6 | 5 | 7 | 8 | 9
Participant 4 | 5 | 6.5 | 7 | 8 | 8.5
Participant 5 | 4 | 7 | 5 | 6 | 7
Participant 6 | 4 | 6 | 5 | 5.5 | 6
Participant 7 | 5.5 | 6 | 6 | 7 | 8
Participant 8 | 6 | 6.5 | 7 | 8 | 8.5
Participant 9 | 5 | 5.5 | 7 | 6 | 7.5
Participant 10 | 6 | 8.5 | 6.5 | 7 | 7.5
Average | 5.45 | 6.25 | 6.40 | 7.05 | 7.80
Table 7. Normalization of evaluation results of the CMCAs.

The MCAs | CMCA (a) | CMCA (b) | CMCA (c) | CMCA (d) | CMCA (e)
Subjective evaluation results | 0.165 | 0.190 | 0.194 | 0.214 | 0.237
Objective evaluation results | 0.207 | 0.202 | 0.200 | 0.199 | 0.192