# Outlier Vehicle Trajectory Detection Using Deep Autoencoders in Santiago, Chile


## Abstract


## 1. Introduction

- Proposal of a stacked autoencoder neural network model for detecting anomalous sequences of GPS measurements from urban vehicles.
- Comparison of fully connected and convolutional autoencoder architectures.
- Evaluation of model results regarding anomalies detected by human users.

## 2. Proposed Model for Detection of Outlier Trajectories

**Encoder network:** Formed by the input layer and a set of hidden layers, this subnet compresses the input data x into a latent space; it is trained one layer at a time. Each layer is trained as an encoder with a reduced number of neurons, receiving as input the latent representation of the previous layer, until reaching the hidden layer h that describes the latent space of the network. The encoder network is represented by Equation (1).

**Decoder network:** Formed by a set of hidden layers and the output layer, this subnet reconstructs the original input $\widehat{x}$ from the latent space h; it is trained one layer at a time. Each layer is trained as a decoder with an increased number of neurons, receiving as input the reconstruction of the previous layer, until reaching the output layer, which has as many neurons as the input layer. The decoder network is represented by Equation (2).
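The encoder–decoder pair can be sketched numerically. Below is a minimal single-layer illustration in NumPy (the model in the paper stacks several such layers; all sizes and the synthetic data here are hypothetical), trained by gradient descent on the reconstruction error:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

d, k = 8, 3                                         # input and latent dimensions (hypothetical)
W1 = rng.normal(0, 0.1, (k, d)); b1 = np.zeros(k)   # encoder parameters
W2 = rng.normal(0, 0.1, (d, k)); b2 = np.zeros(d)   # decoder parameters

def encode(x):                                      # Equation (1): h = f(x)
    return sigmoid(W1 @ x + b1)

def decode(h):                                      # Equation (2): x_hat = g(h)
    return sigmoid(W2 @ h + b2)

X = rng.uniform(0.2, 0.8, (200, d))                 # synthetic input vectors
lr = 0.1

def reconstruction_mse(X):
    return float(np.mean([(x - decode(encode(x))) ** 2 for x in X]))

err_before = reconstruction_mse(X)
for _ in range(50):                                 # a few epochs of per-sample SGD
    for x in X:
        h = encode(x)
        x_hat = decode(h)
        # Backpropagate the squared reconstruction error through both sigmoid layers.
        g_out = 2 * (x_hat - x) * x_hat * (1 - x_hat)
        g_h = (W2.T @ g_out) * h * (1 - h)
        W2 -= lr * np.outer(g_out, h); b2 -= lr * g_out
        W1 -= lr * np.outer(g_h, x);   b1 -= lr * g_h
err_after = reconstruction_mse(X)
```

After training, the reconstruction error of the training set decreases; in the full model this per-trajectory error is what flags outliers.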

#### 2.1. Densely Connected Stacked Autoencoder Network

#### 2.2. Stacked Convolutional Autoencoder Network

#### 2.2.1. Step 1

#### 2.2.2. Step 2

#### 2.2.3. Step 3

#### 2.3. Models Implementation

## 3. Preprocessing and Data Analysis

#### 3.1. Data Preprocessing

#### 3.1.1. Data Description

#### 3.1.2. Data Cleaning and Normalization

#### 3.1.3. Transformation of Data into Time Series

**(i).** The stacked autoencoder network receives multidimensional time series as input. The time series T is a sequence of observations $T=\{{S}_{1},{S}_{2},{S}_{3},\dots ,{S}_{K}\}$, where ${S}_{i}=\{{s}_{i}^{1},{s}_{i}^{2},{s}_{i}^{3},\dots ,{s}_{i}^{d}\}$ is a multidimensional observation ${S}_{i}\in {R}^{d}$, and K is the size of the window of observations considered. The set of time series was obtained by sliding this window over each sequence in steps of b observations; in our case, we set b equal to $0.5K$. The procedure is illustrated in Figure 3.

**(ii).** The convolutional autoencoder network also receives time series as input, but arranged in a two-dimensional grid. We still use the sliding window, but each resulting time series T is now transformed into a 2-dimensional grid; Figure 4 illustrates the procedure.
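The two input transformations above can be sketched in NumPy (the function name `sliding_windows` and the toy series are our illustration, not from the paper):

```python
import numpy as np

# Sliding windows of K observations with stride b = K // 2 over a
# d-dimensional series T (shape: number of observations x d).
def sliding_windows(T, K):
    b = K // 2                      # b = 0.5 K, as in the text
    starts = range(0, len(T) - K + 1, b)
    return np.stack([T[s:s + K] for s in starts])

T = np.arange(40).reshape(20, 2)    # toy series: 20 observations, d = 2
W = sliding_windows(T, K=8)         # each window is a K x d block

# (i)  Dense autoencoder input: flatten each window to a K*d vector.
flat = W.reshape(len(W), -1)
# (ii) Convolutional autoencoder input: keep the K x d grid, add a channel axis.
grid = W[..., np.newaxis]
```

Consecutive windows overlap by half their length, so every observation (except at the borders) appears in two windows.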

#### 3.1.4. Datasets

#### 3.2. Exploratory Data Analysis

## 4. Experiments

#### 4.1. Quantitative Results

- D = Dense or fully connected layers
- C = Convolutional layers
- TC = Transposed or deconvolutional layers

#### 4.1.1. Dataset 1

#### 4.1.2. Dataset 2

#### 4.1.3. Dataset 3

#### 4.1.4. Dataset 4

#### 4.2. Qualitative Network Results

#### 4.2.1. Results of the Time Series Visualized on the Map

#### Examples of Correctly Detected Outlier Paths

#### Example of a Normal Path Detected as an Outlier

#### 4.3. LOF Classical Model
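As a hedged sketch of how the classical Local Outlier Factor baseline flags a trajectory with abrupt speed changes (the synthetic data and feature layout are our illustration, not the paper's exact pipeline), using scikit-learn's `LocalOutlierFactor`:

```python
import numpy as np
from sklearn.neighbors import LocalOutlierFactor

rng = np.random.default_rng(1)
# Hypothetical trajectory features: windows of 8 speed measurements. Normal
# trajectories cluster around 40 km/h; one synthetic outlier is injected.
X = rng.normal(40.0, 3.0, size=(100, 8))
X[0] = [40, 42, 0, 0, 150, 155, 5, 148]        # abrupt speed changes

lof = LocalOutlierFactor(n_neighbors=20)       # density-based outlier detection
labels = lof.fit_predict(X)                    # -1 = outlier, 1 = inlier
scores = -lof.negative_outlier_factor_         # larger = more anomalous
```

Unlike the autoencoder, LOF needs no training phase: it compares each point's local density to that of its neighbors.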

#### 4.4. Explainable Model

**Outlier example:** Table 13 shows the decision rules. The speeds at observations 6 and 8 are on average less than or equal to 41.5 km/h, whereas the speeds at observations 13 and 15 are on average greater than or equal to 79.5 km/h, indicating a sudden change in speed. Likewise, the speeds at observations 17, 19, 21, 26, 29 and 31 are on average less than or equal to 29 km/h, showing another sudden change in speed with respect to observations 13 and 15.

**No outlier example:** Table 14 shows the decision rules. The speeds covered by rules 1 to 19 are on average less than or equal to 45 km/h, which corresponds to a normal average travel speed for the vehicles.
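Decision rules of this kind can be checked mechanically against a trajectory's features. A minimal sketch (the rule encoding and helper `satisfies` are ours), using the first four rules of Table 13:

```python
# Each rule constrains one feature of the trajectory window, e.g.
# ("vel_6", "<=", 44.5) means: the speed at observation 6 is at most 44.5 km/h.
def satisfies(features, rules):
    ops = {"<=": lambda a, b: a <= b, ">=": lambda a, b: a >= b}
    return all(ops[op](features[name], thr) for name, op, thr in rules)

# First rules of Table 13 (outlier example): low speeds at observations 6 and 8,
# then high speeds at observations 13 and 15 -- a sudden change in speed.
outlier_rules = [
    ("vel_6", "<=", 44.5), ("vel_8", "<=", 38.5),
    ("vel_13", ">=", 78.5), ("vel_15", ">=", 80.5),
]

trajectory = {"vel_6": 30.0, "vel_8": 25.0, "vel_13": 95.0, "vel_15": 101.0}
is_outlier_pattern = satisfies(trajectory, outlier_rules)
```

A trajectory matching all the conditions follows the explanation extracted for the outlier class; a trajectory with uniformly moderate speeds does not.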

## 5. Discussion

## 6. Conclusions and Future Work

## Author Contributions

## Funding

## Institutional Review Board Statement

## Informed Consent Statement

## Data Availability Statement

## Acknowledgments

## Conflicts of Interest

## References


**Figure 1.** Example of a stacked autoencoder network architecture. This network considers five dense layers for both the encoder and decoder subnets made up of 240, 200, 160, 120, 80 and 40 neurons.

**Figure 5.** Evolution of the training cost function. The function decreases by minimizing the reconstruction error of the trajectories in the training set.

**Figure 6.** Evolution of the validation cost function. The function decreases by minimizing the reconstruction error of the trajectories in the validation set.

**Figure 7.** Example 1 of a correctly detected outlier trajectory. The light blue points indicate GPS measurements of the complete trajectory, while the red points indicate the points with the highest reconstruction error according to the neural network.

**Figure 8.** Example 2 of a correctly detected outlier trajectory. The light blue points indicate GPS measurements of the complete trajectory, while the red points indicate the points with the highest reconstruction error according to the neural network. See detail in text.

**Figure 9.** Example of a trajectory considered normal by a human expert but detected as an outlier by the neural network. The light blue points indicate GPS measurements of the complete trajectory, while the red points indicate the points with the highest reconstruction error according to the neural network. See detail in text.

| Variable | Description |
|---|---|
| Time | UTC date and time of GPS transmission in milliseconds (ms). |
| Latitude | Decimal coordinates for GPS latitude. |
| Longitude | Decimal coordinates for GPS longitude. |
| Altitude | Altitude above the sea level surface. |
| Speed | Speed calculated by GPS equipment in km/h. |
| Nsat | Number of satellites viewed by the GPS equipment. |

| Dataset | Input Data Type | Trajectory Type |
|---|---|---|
| Dataset 1 | Spatial variables | Short trajectories |
| Dataset 2 | Spatial variables | Long trajectories |
| Dataset 3 | Total variables | Short trajectories |
| Dataset 4 | Total variables | Long trajectories |

**Table 3.** Quantitative description of the variables. Note that we consider the original data, which is why the measurement errors of the GPS devices are included.

| Variable | Type | Minimum | Maximum | Mean | Median |
|---|---|---|---|---|---|
| Latitude | Continuous | −53.67 | 22.44 | −33.40 | −33.59 |
| Longitude | Continuous | −75.75 | 0 | −71.36 | −71.18 |
| Altitude | Continuous | −1044 | 5278 | 445.45 | 207 |
| Speed | Continuous | −9 | 201 | 11.02 | 0 |
| Number of satellites | Discrete | 0 | 315 | 9.88 | 9 |

| Evaluation | Degree |
|---|---|
| Outlier | 1 |
| Dubious | 0.5 |
| No outlier | 0 |

| Model | MSE (Train) | MSE (Test) | ${R}^{2}$ (Train) | ${R}^{2}$ (Test) |
|---|---|---|---|---|
| AFC-A | $3.888\times 10^{-7}$ | $1.3088\times 10^{-7}$ | 0.99997 | 0.999985 |
| AFC-B | $3.6265\times 10^{-7}$ | $1.234\times 10^{-7}$ | 0.99997 | 0.999987 |
| AFC-C | $2.9139\times 10^{-7}$ | $1.0942\times 10^{-7}$ | 0.99998 | 0.999987 |
| ACONV-A | $2.7117\times 10^{-7}$ | $1.0446\times 10^{-7}$ | 0.999983 | 0.9999879 |
| ACONV-B | $2.4559\times 10^{-7}$ | $9.1347\times 10^{-8}$ | 0.999985 | 0.9999897 |
| ACONV-C | $2.5914\times 10^{-7}$ | $9.4499\times 10^{-8}$ | 0.999984 | 0.9999894 |

| Model | MSE (Train) | MSE (Test) | ${R}^{2}$ (Train) | ${R}^{2}$ (Test) |
|---|---|---|---|---|
| AFC-A | $1.335\times 10^{-6}$ | $8.3177\times 10^{-7}$ | 0.999915 | 0.99990 |
| AFC-B | $8.633\times 10^{-7}$ | $4.099\times 10^{-7}$ | 0.999945 | 0.99995 |
| AFC-C | $8.8594\times 10^{-7}$ | $4.56\times 10^{-7}$ | 0.999943 | 0.99994 |
| ACONV-A | $1.7698\times 10^{-6}$ | $1.3554\times 10^{-6}$ | 0.9999 | 0.9998 |
| ACONV-B | $2.9682\times 10^{-6}$ | $2.3015\times 10^{-6}$ | 0.9998 | 0.9997 |
| ACONV-C | $5.4754\times 10^{-6}$ | $4.3883\times 10^{-6}$ | 0.9997 | 0.9995 |

| Model | MSE (Train) | MSE (Test) | ${R}^{2}$ (Train) | ${R}^{2}$ (Test) |
|---|---|---|---|---|
| AFC-A | $1.2654\times 10^{-4}$ | $1.1692\times 10^{-4}$ | 0.9976 | 0.9959 |
| AFC-B | $4.9402\times 10^{-6}$ | $4.2375\times 10^{-6}$ | 0.9999 | 0.9999 |
| AFC-C | $8.8641\times 10^{-5}$ | $7.4975\times 10^{-5}$ | 0.9983 | 0.9975 |
| ACONV-A | $1.6204\times 10^{-6}$ | $1.5368\times 10^{-6}$ | 0.99996 | 0.99995 |
| ACONV-B | $1.2402\times 10^{-6}$ | $1.1958\times 10^{-6}$ | 0.99997 | 0.99996 |
| ACONV-C | $1.2624\times 10^{-5}$ | $1.1092\times 10^{-5}$ | 0.99980 | 0.99960 |

| Model | MSE (Train) | MSE (Test) | ${R}^{2}$ (Train) | ${R}^{2}$ (Test) |
|---|---|---|---|---|
| AFC-A | $2.255\times 10^{-4}$ | $2.1979\times 10^{-4}$ | 0.9957 | 0.9922 |
| AFC-B | $2.9738\times 10^{-4}$ | $2.8812\times 10^{-4}$ | 0.9943 | 0.9898 |
| AFC-C | $1.3529\times 10^{-4}$ | $1.278\times 10^{-4}$ | 0.9974 | 0.9955 |
| ACONV-A | $1.0563\times 10^{-4}$ | $9.2084\times 10^{-5}$ | 0.9980 | 0.9980 |
| ACONV-B | $8.2154\times 10^{-5}$ | $6.8964\times 10^{-5}$ | 0.9984 | 0.9976 |
| ACONV-C | $8.7996\times 10^{-5}$ | $7.5401\times 10^{-5}$ | 0.9983 | 0.9974 |

**Table 9.** Percentage of outliers detected among the top 100 candidates over 1000 random samples: the mean and standard deviation (in parentheses) are reported in the last column. Boldface indicates the best result per dataset.

| Dataset | Neural Network | % Detected Outliers |
|---|---|---|
| Dataset 1 | Stacked Autoencoder (6F) | **51.7 (5.0)** |
| | Stacked Convolutional Autoencoder (2C-2F-2D) | 39.1 (5.0) |
| Dataset 2 | Stacked Autoencoder (8F) | **46.8 (5.2)** |
| | Stacked Convolutional Autoencoder (4C-2F-4D) | 4.0 (2.0) |
| Dataset 3 | Stacked Autoencoder (4F) | **20.0 (3.7)** |
| | Stacked Convolutional Autoencoder (3C-2F-3D) | 15.0 (3.3) |
| Dataset 4 | Stacked Autoencoder (10D) | 79.5 (4.1) |
| | Stacked Convolutional Autoencoder (4C-2F-4D) | **82.1 (3.8)** |
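The evaluation behind Table 9 ranks trajectories by reconstruction error and inspects the top 100 candidates of a 1000-trajectory sample. A minimal NumPy sketch of that ranking (all error distributions and counts here are synthetic, for illustration only):

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic reconstruction errors for 1000 trajectories: normal trajectories
# reconstruct well (small errors); 50 injected outliers reconstruct poorly.
errors = np.concatenate([
    rng.exponential(1e-6, 950),                # normal trajectories
    rng.exponential(1e-4, 50) + 1e-5,          # outlier trajectories
])
is_outlier = np.zeros(1000, dtype=bool)
is_outlier[950:] = True

# Rank by reconstruction error (descending) and take the top 100 candidates.
top100 = np.argsort(errors)[::-1][:100]
pct_detected = 100.0 * is_outlier[top100].sum() / len(top100)
```

Repeating this over many bootstrap samples yields the mean and standard deviation reported per model.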

**Table 10.** Detail of GPS measurements with the greatest error, including contiguous ones, of the trajectory of Figure 7. The measurements that most contribute to the trajectory error are indicated in boldface in Id.

| Id | Time | Latitude | Longitude | Altitude | Speed |
|---|---|---|---|---|---|
| 21 | 08:38:24 | −32.937323 | −71.287701 | 87.0 | 95.0 |
| 22 | 08:39:04 | −32.934893 | −71.288131 | 83.0 | 0.0 |
| 23 | 08:39:13 | −32.934893 | −71.288131 | 83.0 | 0.0 |
| 24 | 08:39:44 | −32.931070 | −71.288090 | 81.0 | 107.0 |
| 35 | 08:43:14 | −32.893170 | −71.233126 | 119.0 | 113.0 |
| 36 | 08:43:44 | −32.883116 | −71.231621 | 123.0 | 151.0 |
| 37 | 08:44:14 | −32.876997 | −71.231075 | 130.0 | 33.0 |
| 38 | 08:44:15 | −32.876929 | −71.231116 | 130.0 | 33.0 |

**Table 11.** Detail of GPS measurements with the greatest error, including contiguous ones, of the trajectory of Figure 8. The measurements that contribute the most to the trajectory error are indicated in boldface in Id.

| Id | Time | Latitude | Longitude | Altitude | Speed |
|---|---|---|---|---|---|
| 3 | 13:06:04 | −37.486372 | −73.393762 | 178.0 | 84.0 |
| 4 | 13:06:44 | −37.473855 | −73.387041 | 170.0 | 161.0 |
| 5 | 13:06:59 | −37.469131 | −73.383172 | 169.0 | 140.0 |
| 18 | 13:10:44 | −37.404656 | −73.345360 | 137.0 | 146.0 |
| 19 | 13:11:24 | −37.390788 | −73.341608 | 138.0 | 150.0 |
| 20 | 13:12:04 | −37.384083 | −73.339738 | 143.0 | 5.0 |
| 21 | 13:12:44 | −37.374742 | −73.337310 | 141.0 | 143.0 |
| 22 | 13:13:24 | −37.361657 | −73.329252 | 131.0 | 167.0 |
| 23 | 13:14:04 | −37.349851 | −73.318316 | 114.0 | 131.0 |

**Table 12.** Detail of GPS measurements with the greatest error, including contiguous ones, of the trajectory of Figure 9. The measurements that contribute the most to the trajectory error are indicated in boldface in Id.

| Id | Time | Latitude | Longitude | Altitude | Speed |
|---|---|---|---|---|---|
| 5 | 14:47:11 | −33.066468 | −71.394259 | 177.0 | 125.0 |
| 6 | 14:47:51 | −33.065397 | −71.411271 | 162.0 | 154.0 |
| 7 | 14:48:01 | −33.064933 | −71.415800 | 169.0 | 154.0 |
| 8 | 14:48:31 | −33.063733 | −71.429321 | 166.0 | 112.0 |
| 9 | 14:49:01 | −33.062845 | −71.433879 | 165.0 | 20.0 |
| 10 | 14:49:11 | −33.062808 | −71.434275 | 164.0 | 19.0 |
| 11 | 14:49:51 | −33.062838 | −71.444221 | 173.0 | 120.0 |

**Table 13.** Decision rules for the outlier example.

| Rule | Condition |
|---|---|
| 1 | vel_6 ≤ 44.5 |
| 2 | vel_8 ≤ 38.5 |
| 3 | vel_13 ≥ 78.5 |
| 4 | vel_15 ≥ 80.5 |
| 5 | vel_17 ≤ 26.5 |
| 6 | vel_19 ≤ 13.5 |
| 7 | vel_21 ≤ 34.5 |
| 8 | vel_26 ≤ 32.5 |
| 9 | vel_29 ≤ 23.5 |
| 10 | vel_31 ≤ 43.5 |

**Table 14.** Decision rules for the no outlier example.

| Rule | Condition |
|---|---|
| 1 | vel_6 ≤ 44.5 |
| 2 | vel_7 ≤ 42.5 |
| 3 | vel_8 ≤ 38.5 |
| 4 | vel_9 ≤ 43.5 |
| 5 | vel_11 ≤ 72.5 |
| 6 | vel_13 ≤ 43.5 |
| 7 | vel_15 ≤ 36.5 |
| 8 | vel_17 ≤ 26.5 |
| 9 | vel_19 ≤ 13.5 |
| 10 | vel_20 ≤ 26.5 |
| 11 | vel_21 ≤ 34.5 |
| 12 | vel_22 ≤ 42.5 |
| 13 | vel_26 ≤ 32.5 |
| 14 | vel_28 ≤ 43.5 |
| 15 | vel_29 ≤ 23.5 |
| 16 | vel_30 ≤ 42.5 |
| 17 | vel_31 ≤ 43.5 |
| 18 | vel_33 ≤ 53.5 |
| 19 | vel_34 ≤ 47.5 |
| 20 | lon_37 ≤ −64.114 |


© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Peralta, B.; Soria, R.; Nicolis, O.; Ruggeri, F.; Caro, L.; Bronfman, A.
Outlier Vehicle Trajectory Detection Using Deep Autoencoders in Santiago, Chile. *Sensors* **2023**, *23*, 1440.
https://doi.org/10.3390/s23031440
