# Testing Scenario Identification for Automated Vehicles Based on Deep Unsupervised Learning


## Abstract


## 1. Introduction

## 2. Methodology

#### 2.1. Typical–Extreme Scenario Data Segmentation Based on Isolation Forest

- (1) Building the isolation forest: an isolation forest is composed of multiple randomly partitioned binary trees.
- (2) Calculating the path length $h(x)$ of a sample in a binary tree: $$h(x)=e+C(T.size)$$ where $e$ is the number of edges traversed from the root to the external node that isolates $x$, and $C(T.size)$ is a normalization term equal to the average path length of an unsuccessful binary search over $T.size$ samples.
- (3) Measuring the deviation of extreme points: calculate the expected value $E(h(x))$ and variance $S(h(x))$ of the outlying degree of all samples, and then extract as extreme data the samples that deviate from the expected value and variance.
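The steps above can be sketched in miniature. The following is not the paper's implementation, only an illustrative single-feature isolation forest (real implementations, such as scikit-learn's `IsolationForest`, work on subsamples and multivariate splits); the function names and the standard anomaly score $s(x,n)=2^{-E(h(x))/C(n)}$ are assumptions drawn from the original isolation-forest formulation:

```python
import math
import random

def c(n):
    """Average path length C(n) of an unsuccessful binary search over n
    samples, using the harmonic-number approximation H(i) ~ ln(i) + 0.5772."""
    if n <= 1:
        return 0.0
    return 2.0 * (math.log(n - 1) + 0.5772156649) - 2.0 * (n - 1) / n

def path_length(x, data, depth=0, max_depth=30):
    """Follow the branch that x would take under random splits; the return
    value plays the role of h(x) = e + C(T.size) for one isolation tree."""
    if len(data) <= 1 or depth >= max_depth:
        return depth + c(len(data))
    lo, hi = min(data), max(data)
    if lo == hi:
        return depth + c(len(data))
    split = random.uniform(lo, hi)
    # Keep only the side of the split that contains x.
    side = [v for v in data if (v < split) == (x < split)]
    return path_length(x, side, depth + 1, max_depth)

def anomaly_score(x, data, n_trees=100):
    """s(x, n) = 2^(-E[h(x)] / C(n)); scores close to 1 flag extreme points,
    scores well below 0.5 indicate typical points."""
    e_h = sum(path_length(x, data) for _ in range(n_trees)) / n_trees
    return 2.0 ** (-e_h / c(len(data)))
```

Extreme samples isolate after few splits, so their expected path length is short and their score is high, which is exactly the deviation measurement of step (3).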

#### 2.2. Scene Feature Extraction Based on One-Dimensional Residual Convolutional Autoencoder
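The 1D-RCAE combines one-dimensional convolutions with residual shortcuts. The sketch below is not the paper's network (its layer configuration appears in the parameter table later); it only illustrates, on a single-channel NumPy signal and with illustrative function names, why a residual block preserves sequence length and lets the layer learn a correction on top of an identity path:

```python
import numpy as np

def conv1d_same(x, kernel):
    """1-D convolution with 'same' padding (single channel), so the
    output has the same length as the input sequence."""
    return np.convolve(x, kernel, mode="same")

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, kernel):
    """y = ReLU(conv(x) + x): the shortcut carries the input unchanged,
    so the convolution only has to learn a residual correction, which
    eases the optimisation of deep encoders."""
    return relu(conv1d_same(x, kernel) + x)
```

With an all-zero kernel the block degenerates to `relu(x)`, making the identity shortcut explicit.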

#### 2.3. K-Means Algorithm Based on Information Entropy Optimization

- (1) Standardize ${x}_{ij}$, the value of feature $j$ for sample $i$; the min–max method is usually applied for data standardization and can be described as $${x}_{ij}^{\prime}=\frac{{x}_{ij}-{\mathrm{min}}_{i}({x}_{ij})}{{\mathrm{max}}_{i}({x}_{ij})-{\mathrm{min}}_{i}({x}_{ij})}$$
- (2) Calculate the proportion of the $i$-th sample in feature $j$: $${p}_{ij}={x}_{ij}^{\prime}/{\displaystyle {\sum}_{i=1}^{n}{x}_{ij}^{\prime}}$$
- (3) Calculate the IE of the $n$ samples in feature $j$ (taking ${p}_{ij}\mathrm{ln}\,{p}_{ij}=0$ when ${p}_{ij}=0$): $${E}_{j}=-\frac{1}{\mathrm{ln}n}{\displaystyle {\sum}_{i=1}^{n}{p}_{ij}\mathrm{ln}{p}_{ij}}$$
- (4) Calculate the weight of the $j$-th feature: $${w}_{j}=(1-{E}_{j})/{\displaystyle {\sum}_{k=1}^{p}(1-{E}_{k})}$$ A feature with lower entropy varies more across samples and therefore receives a larger weight.
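Steps (1)–(4) translate directly into a few lines of NumPy. This is a minimal sketch, assuming non-constant feature columns; the function name `entropy_weights` is illustrative:

```python
import numpy as np

def entropy_weights(X):
    """Entropy-weight method matching steps (1)-(4): min-max
    standardisation, sample proportions p_ij, information entropy E_j,
    and weights w_j = (1 - E_j) / sum_k (1 - E_k)."""
    X = np.asarray(X, dtype=float)
    n = X.shape[0]
    # (1) min-max standardisation per feature (assumes no constant column)
    Xs = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))
    # (2) proportion of each sample within a feature
    p = Xs / Xs.sum(axis=0)
    # (3) information entropy, with the convention 0 * ln(0) = 0
    with np.errstate(divide="ignore", invalid="ignore"):
        plogp = np.where(p > 0, p * np.log(p), 0.0)
    E = -plogp.sum(axis=0) / np.log(n)
    # (4) normalised weights: low-entropy (more discriminative)
    # features receive larger weights
    return (1.0 - E) / (1.0 - E).sum()
```

One common way to plug such weights into k-means (an assumption here, not necessarily the paper's exact formulation) is to scale each feature by $\sqrt{w_j}$ before clustering, so that Euclidean distances become entropy-weighted.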

## 3. Data Collection and Experiments

#### 3.1. Datasets and Data Processing

#### 3.2. Typical–Extreme Scenario Data Segmentation

#### 3.3. Scenario Feature Extraction

#### 3.4. Scenario Clustering

## 4. Results and Analysis

#### 4.1. Performance Comparison of Different Feature Extraction Networks

#### 4.2. Performance Comparison of Different Clustering Algorithms

#### 4.3. Analysis of Scene Identification Results

Acceleration is graded by absolute value into four levels: level one exceeds 2.78 m/s², level two falls between 2.22 m/s² and 2.78 m/s², level three is between 1.67 m/s² and 2.22 m/s², and level four is less than 1.67 m/s². Levels one and two represent dangerous driving with sudden braking and acceleration, which may lead to safety accidents. Level three represents normal driving and braking with large amplitudes, and poses some risk. Level four represents normal driving with higher safety. It should be noted that both level-one and level-two accelerations rarely appear in typical scenarios; therefore, they are denoted as ‘Others’.
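The grading above amounts to a simple threshold lookup. A minimal helper, assuming half-open intervals at the boundaries (the paper does not state how exact boundary values are assigned) and an illustrative function name:

```python
def acceleration_level(abs_acc):
    """Grade the absolute acceleration (m/s^2) into the four levels of
    Section 4.3; levels one and two flag sudden braking/acceleration and
    are grouped as 'Others' in typical scenarios."""
    if abs_acc > 2.78:
        return "level one"
    if abs_acc > 2.22:
        return "level two"
    if abs_acc > 1.67:
        return "level three"
    return "level four"
```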

#### 4.4. Comparative Analysis of Typical–Extreme Scenarios

## 5. Conclusions

## Author Contributions

## Funding

## Data Availability Statement

## Conflicts of Interest

## Abbreviations

Abbreviation | Definition
---|---
NDD | naturalistic driving data
1D-RCAE | one-dimensional residual convolutional autoencoder
IF | isolation forest
IE | information entropy
CNN | convolutional neural network
LSTM | long short-term memory network
CMTS | conditional multiple-trajectory synthesizer
AE | autoencoder
mDAE | marginalized denoising autoencoder
CAE | convolutional autoencoder
DRAE | deep regularized autoencoder
BN | batch normalization
OCSVM | one-class support vector machine
LOF | local outlier factor
MSE | mean-squared error
MAE | mean absolute error
RMSE | root-mean-squared error
SC | silhouette coefficient
CH | Calinski–Harabasz score
DB | Davies–Bouldin index


**Figure 6.** Segmentation results of velocity and acceleration. (**a**) Velocity segmentation result; (**b**) acceleration segmentation result.

**Figure 8.** Loss values. (**a**) Training loss and validation loss of the 1D-RCAE; (**b**) comparison of training losses between CAE and 1D-RCAE.

**Figure 9.** Elbow plots of typical and extreme scenarios. (**a**) Typical scenarios; (**b**) extreme scenarios.

**Figure 13.** Distribution of acceleration levels in typical scenarios and extreme scenarios. (**a**) Typical scenarios; (**b**) extreme scenarios.

Module | Layer | Parameters
---|---|---
Encoder | convolutional layer 1 | kernel_size: 1 × 4, strides: 1
 | max pooling layer 1 | size = 2
 | convolutional layer 2 | kernel_size: 1 × 1, strides: 1
 | convolutional layer 3 | kernel_size: 1 × 6, strides: 1
 | max pooling layer 2 | size = 4
 | bottleneck layer 1 | kernel_size: 1 × 1, strides: 1
Decoder | bottleneck layer 2 | kernel_size: 1 × 1, strides: 1
 | upsampling layer 1 | size = 4
 | deconvolution layer 1 | kernel_size: 1 × 1, strides: 1
 | deconvolution layer 2 | kernel_size: 1 × 6, strides: 1
 | upsampling layer 2 | size = 2
 | deconvolution layer 3 | kernel_size: 1 × 4, strides: 1

Parameters | Values
---|---
Network structure of encoder layers | conv-pooling-conv-conv-pooling-conv
Initial learning rate | 0.001
Optimizer | Adam
Activation function | ReLU
Loss function | MSE
$\lambda $ | 0.05
Momentum | 0.9
Epochs | 400
Batch size | 32

Sample | Feature 1 | Feature 2 | Feature 3
---|---|---|---
0 | 0.19868 | 1.28456 | 2.54810
1 | 0.19664 | 1.29419 | 2.53582
2 | 0.19499 | 1.02378 | 2.44288
3 | 0.19685 | 0.98529 | 2.46010
4 | 0.19582 | 1.03302 | 2.47379
… | … | … | …

Scenarios | Features | Weights
---|---|---
Typical scenarios | Feature 1 | 0.1371
 | Feature 2 | 0.1665
 | Feature 3 | 0.6964
Extreme scenarios | Feature 1 | 0.2155
 | Feature 2 | 0.2251
 | Feature 3 | 0.5594

Scenarios | Algorithms | SC | CH | DB
---|---|---|---|---
Typical scenarios | DBSCAN | 0.314 | 3266.536 | 1.282
 | Mini-batch K-means | 0.583 | 17,829.595 | 0.538
 | Hierarchical clustering | 0.417 | 10,321.479 | 0.719
 | K-means | 0.429 | 4930.992 | 0.902
 | Ours | 0.585 | 17,849.535 | 0.536
Extreme scenarios | DBSCAN | 0.474 | 2672.742 | 1.352
 | Mini-batch K-means | 0.340 | 721.814 | 1.072
 | Hierarchical clustering | 0.206 | 359.014 | 0.902
 | K-means | 0.347 | 732.298 | 0.898
 | Ours | 0.435 | 1547.784 | 0.832

Parameters | Classification | Scenario 1 | Scenario 2 | Scenario 3 | Scenario 4
---|---|---|---|---|---
Speed (m/s) | Low speed | 18.7% | 30.4% | 54.9% | 16.8%
 | Medium speed | 24.8% | 45.9% | 35.3% | 24.5%
 | High speed | 56.5% | 19.7% | 9.8% | 58.7%
Acceleration (m/s²) | Level three | 0.7% | 0.3% | 0.4% | 0.4%
 | Level four | 99.1% | 99.7% | 99.6% | 99.5%
 | Others | 0.2% | 0 | 0 | 0.1%
 | Average | −0.0597 | −0.33 | −0.3682 | 0.1243
 | Standard deviation | 0.3812 | 0.4521 | 0.3456 | 0.3449
Steering wheel angle (°) | Minimum average steering wheel angle | −0.7614 | −5.1246 | −17.9846 | −2.8176
 | Maximum average steering wheel angle | 3.916 | 6.8848 | 27.9020 | 3.7130
Behavior of ego vehicle | Left lane changing (turn left) | 87.03% | 73.55% | 78.57% | 88.25%
 | Right lane changing (turn right) | 6.87% | 15.79% | 16.52% | 5.25%
 | Straight ahead | 6.10% | 10.66% | 4.91% | 6.50%
Target motion state | Uniform speed | 35.5% | 7.29% | 4.83% | 4.32%
 | Deceleration | 32.2% | 81.51% | 53.23% | 48.87%
 | Acceleration | 32.75% | 11.2% | 41.94% | 46.77%
Target position | Straight ahead | 71.81% | 82.05% | 87.5% | 76.67%
 | Left cut in | 16.70% | 7.42% | 4.46% | 10.90%
 | Right cut in | 11.48% | 10.53% | 8.04% | 12.43%
Types of surrounding traffic participants | Pedestrian | 0.1% | 0.5% | 0.1% | 1.6%
 | Bicycle | 0 | 0 | 0 | 0
 | Light vehicle | 95.7% | 90.2% | 95.4% | 94.8%
 | Heavy vehicle | 2.7% | 1.2% | 0.5% | 1.7%
 | Tractor | 0.1% | 0.1% | 0.1% | 0.2%
Total | Numbers | 1820 | 741 | 224 | 2799
 | Proportion | 32.6% | 13.3% | 4.0% | 50.1%

Parameters | Scenario 1 | Scenario 2 | Scenario 3 | Scenario 4
---|---|---|---|---
Regional distribution | Expressway | City | Suburb | Expressway
Ego vehicle behavior | Straight ahead | Straight ahead | Driving around a curve | Straight ahead
Ego vehicle state | Uniform speed | Deceleration | Deceleration | Acceleration
Target type | Light vehicle | Light vehicle | Light vehicle | Light vehicle
Target state | Uniform speed | Deceleration | Deceleration | Acceleration
Target position | Straight ahead | Straight ahead | Driving around a curve | Straight ahead
Traffic flow | Vehicles on the right | Vehicles on both sides | Vehicles on the left | Vehicles on the left
Number of surrounding traffic participants | 1–4 | 1–10 | 1–5 | 1–5
Intersection shape | Non-intersection | Non-intersection | Non-intersection | Non-intersection
Straight road/curved road | Straight road | Straight road | Curved road | Straight road

Parameters | Classification | Scenario 1 | Scenario 2 | Scenario 3 | Scenario 4 | Scenario 5
---|---|---|---|---|---|---
Speed (m/s) | Low speed | 22.4% | 62.6% | 53.6% | 30.9% | 66.7%
 | Medium speed | 25.2% | 23.0% | 19.9% | 33.3% | 29.4%
 | High speed | 52.4% | 11.4% | 27.5% | 35.8% | 3.9%
Acceleration (m/s²) | Level one | 1.8% | 2.9% | 5.0% | 3.6% | 2.0%
 | Level two | 4.2% | 6.3% | 9.5% | 8.5% | 3.9%
 | Level three | 8.9% | 10.9% | 12.3% | 11.7% | 5.9%
 | Level four | 85.1% | 79.9% | 73.2% | 76.2% | 88.2%
 | Average | −0.026 | −0.046 | 0.065 | −0.0223 | −0.146
 | Standard deviation | 0.5836 | 0.8203 | 0.7697 | 0.7015 | 0.8583
Steering wheel angle (°) | Minimum average steering wheel angle | −13.179 | −47.258 | −42.187 | −16.275 | −66.347
 | Maximum average steering wheel angle | 21.133 | 53.7619 | 39.187 | 21.405 | 34.264
Behavior of ego vehicle | Left lane changing (turn left) | 11.90% | 25.29% | 20.67% | 9.02% | 64.71%
 | Right lane changing (turn right) | 16.87% | 41.95% | 16.20% | 18.03% | 29.41%
 | Straight ahead | 71.23% | 32.76% | 63.13% | 72.95% | 5.88%
Target motion state | Uniform speed | 35.91% | 10.92% | 62.01% | 31.97% | 41.18%
 | Deceleration | 62.30% | 61.49% | 19.55% | 63.93% | 54.90%
 | Acceleration | 1.79% | 27.59% | 18.44% | 4.10% | 3.92%
Target position | Straight ahead | 69.64% | 38.51% | 54.19% | 52.73% | 100%
 | Left cut in | 15.48% | 49.43% | 30.73% | 18.58% | 0
 | Right cut in | 14.88% | 12.06% | 15.08% | 28.69% | 0
Types of surrounding traffic participants | Pedestrian | 0.1% | 0.04% | 0.4% | 0.2% | 0.7%
 | Bicycle | 0 | 0 | 0 | 0 | 0
 | Light vehicle | 84.9% | 72.1% | 81.6% | 83.6% | 72.8%
 | Heavy vehicle | 2.5% | 0.8% | 2.4% | 2.4% | 1.1%
 | Tractor | 0.4% | 0 | 0.5% | 0.9% | 0
Total | Numbers | 504 | 174 | 179 | 366 | 51
 | Proportion | 39.5% | 13.7% | 14.1% | 28.7% | 4%

Parameters | Scenario 1 | Scenario 2 | Scenario 3 | Scenario 4 | Scenario 5
---|---|---|---|---|---
Regional distribution | Expressway | City | City | Expressway | City
Ego vehicle behavior | Driving around a curve | Turn right | Turn left | Driving around a curve | Turn left
Ego vehicle state | Uniform speed | Uniform speed | Uniform speed | Uniform speed | Deceleration
Target type | Light vehicle | Light vehicle | Light vehicle | Light vehicle | Light vehicle
Target state | Deceleration | Deceleration | Deceleration | Deceleration | Deceleration
Target position | Driving around a curve | Left cut in | Straight ahead | Driving around a curve | Straight ahead
Traffic flow | Vehicles on the left | Vehicles on both sides | Vehicles on both sides | Vehicles on the left | Vehicles on both sides
Number of surrounding traffic participants | 1–9 | 1–12 | 1–11 | 1–10 | 1–8
Intersection shape | Non-intersection | Intersection | Intersection | Non-intersection | Intersection
Straight road/curved road | Curved road | Straight road | Straight road | Curved road | Straight road

Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Liu, S.; Ren, F.; Li, P.; Li, Z.; Lv, H.; Liu, Y.
Testing Scenario Identification for Automated Vehicles Based on Deep Unsupervised Learning. *World Electr. Veh. J.* **2023**, *14*, 208.
https://doi.org/10.3390/wevj14080208
