Article

Detecting Traffic Incidents Using Persistence Diagrams

Department of Mathematics, Iowa State University, Ames, IA 50011, USA
*
Author to whom correspondence should be addressed.
Algorithms 2020, 13(9), 222; https://doi.org/10.3390/a13090222
Submission received: 30 June 2020 / Revised: 28 August 2020 / Accepted: 29 August 2020 / Published: 5 September 2020
(This article belongs to the Special Issue Topological Data Analysis)

Abstract
We introduce a novel methodology for anomaly detection in time-series data. The method uses persistence diagrams and bottleneck distances to identify anomalies. Specifically, we generate multiple predictors by randomly bagging the data (reference bags), then, for each data point, replacing a randomly chosen point in each bag with that data point (modified bags). The predictors are then the set of bottleneck distances for the reference/modified bag pairs. We prove the stability of the predictors as the number of bags increases. We apply our methodology to traffic data and measure its performance in identifying known incidents.

1. Introduction

Traffic incidents impose severe financial costs on society, as estimated for the U.S. by the National Highway Traffic Safety Administration [1]. Consequently, an important focus of data analysis concerns detecting incidents from traffic data, since well-managed incident response can have significant benefits for society [2]. The type of data we consider is a time series of volumetric traffic counts, and we propose a novel methodology for analyzing this data to identify traffic incidents. Our approach is to view the identification of incidents as a problem of anomaly detection within the time-series data. Our method combines a tool from statistical analysis (bootstrap aggregation, or bagging) with one from topological data analysis, TDA (persistence diagrams), to form an ensemble of predictors that determine whether data points are normal or anomalous. To each data point and random bag, we associate two persistence diagrams, one reference and one modified. The predictors consist of a score (the bottleneck distance between the two diagrams), and, for each data point, the set of scores is aggregated into several summary statistics. We then identify data points as incidents, or anomalies, by percentile scores of the summary statistics.
Our algorithm using randomized bagging of the data and resultant persistence diagrams as a feature extractor for anomaly detection can be viewed as a semi-supervised learning algorithm. Indeed, our method trains the summary statistics on a small amount of labeled data. Moreover, our algorithm can be applied to any vectorized data and, thus, can be adapted for many other data analytic problems.

1.1. Description of the Data and Challenge Problem

The problem we address is to identify incidents in a data set of volumetric traffic counts obtained from multiple inductive loop road sensors [3], supplied by the California Department of Transportation. The counts were aggregated over 5-min intervals, and we had access to one full calendar year of traffic counts for each sensor. See Table 1 for a sample of the data. We apply our method to this data set in Section 4.
Our task to identify incidents was part of an anomaly detection problem hosted by the Joint Algorithms for Threat Detection (ATD) and Algorithms for Modern Power Systems (AMPS) Annual Workshop; the challenge problem that we specifically addressed was curated for the ATD program [4]. The challenge problem consisted of two phases: in Phase 1, we were asked to detect incidents within data provided by 10 road sensors spatially located so as to be independent of one another (we were not informed of the locations); in Phase 2, we were asked to repeat Phase 1 given additional information on the locations of the sensors (see Figure 1). For each phase, we were supplied with a training data set and a testing data set with labeled incidents that were hand-selected by the ATD program coordinators. The training data set for Phase 1 contained 9 labeled incidents, and the training data set for Phase 2 contained 10 labeled incidents. There were numerous unlabeled incidents in the training data sets.
The timeline of the challenge problem was as follows:
  • Phase 1 training data posted: 15 January 2019
  • Phase 1 testing data posted: 22 March 2019
  • Phase 1 solution deadline: 2 May 2019
  • Phase 2 training data posted: 24 May 2019
  • Phase 2 testing data posted: 15 July 2019
  • Phase 2 solution deadline: 27 September 2019
Supplementary materials in the form of data sets and code are available at the following repository: https://bitbucket.org/esweber/randomized-persistence-landscapes/src/master/.
Data sets for the challenge problem were originally released as MATLAB files. We have converted the data to comma-separated values (CSV) files, and all experiments discussed in the paper can be reproduced by executing our code in RStudio. Our solutions were submitted as a binary output for each 5-min interval, with 1 indicating an incident and 0 otherwise. Three evaluation metrics were then used to evaluate the performance of our algorithm: precision, recall, and an F-score.
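Given binary labels at 5-min resolution, the three evaluation metrics follow directly from the true-positive, false-positive, and false-negative counts. A minimal sketch in Python (assuming the F-score is the balanced F1; the challenge may have used a weighted variant):

```python
def evaluation_metrics(y_true, y_pred):
    """Precision, recall, and F1 for binary incident labels (1 = incident)."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1
```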

1.2. Survey of Literature

Detecting incidents in traffic data is important for the safety and welfare of society [2]. Significant efforts have been made to automate incident detection [5,6,7,8]. Traditional automatic incident detection data sources include radar sensors [9], video cameras [10], and probes [11]. Loop detectors are also a very common source of traffic data [3,12] and are the source of the traffic data analyzed by our method.
Detecting incidents in traffic data can be formulated as an anomaly detection problem. Approaches in this formulation include: dynamic clustering [13] or Support Vector Machines [14] to detect anomalous trajectories, autoencoders to learn regularity of video sequences [15], motion reconstruction [16], and convolutional neural networks to predict congestion in traffic networks [17]. In this paper, we also view incident detection as an anomaly detection problem, with our approach using TDA to identify outliers. For an overview of anomaly detection, see [18].
Topological data analysis has been used previously in the context of traffic data: [19] uses TDA as a model for tracking vehicles and [20] uses TDA to understand individual travel behaviors. TDA has also been used previously for anomaly detection in [21,22]. TDA provides a framework for studying the “shape” of data in geometric and topological terms rather than a solely statistical framework [23,24,25].
TDA has been an extremely successful tool in applied algebraic topology across a multitude of applications, including analysis of neural networks [26,27], data dimension reduction techniques [28,29], anomaly detection [21,22], biological studies [30], viral evolution [31], the study of blood flow through brain arteries [32], and tracking vehicles [19]. TDA has been applied previously to study time-series data in a variety of settings [33,34]. Existing applications include financial data [35,36,37], medical signal (EEG) analysis [38,39], and general techniques including sliding-window methods [40,41,42,43]. Most of these methods rely on a Takens-style embedding to transform time-series data into point-cloud data [44,45]. In contrast, our method does not require such an embedding.
Our algorithm is motivated by the intuition that if we take a data set of vectors that represent typical behavior and one of the vectors were replaced with an anomalous observation, the topology of the data set would be significantly altered. This intuition is supported by work on randomized persistence diagrams [46,47]. For example, [48] finds that the persistence diagram of samples from a manifold corrupted by Gaussian noise will not, in general, deviate significantly from the persistence diagram of the underlying manifold, but that noise from other distributions could cause such deviations. Further results in [49] emphasize this by showing that extending the stability results for Gaussian noise in [48] to other noise distributions should not “be expected”.

1.3. Main Contributions

Our main contributions in this paper are two-fold. First, we introduce a novel machine learning algorithm that uses TDA to detect anomalies in time-series data. The algorithm establishes a baseline of typical behavior, serving as a reference, through the method of bagging, and measures the deviation of each data point from this reference using the bottleneck distance from TDA. This procedure is repeated multiple times to reduce the variation that naturally arises from random sampling. The algorithm requires the selection of several hyperparameters. Second, we address the problem of identifying incidents in traffic data that consist only of traffic counts. Our data set does not include any functional data, such as velocity, nor does it include video feeds, which are among the most common data sources used in traffic incident detection problems.

1.4. Outline

The rest of the paper is organized as follows. In Section 2 we present the necessary background for understanding persistence diagrams, the fundamental tool from TDA that our algorithm implements. In Section 3 we present the algorithm. In Section 4 we present the results of our algorithm when applied to the traffic data set from the ATD challenge problem.

2. Topological Data Analysis (TDA)

Here, we provide the necessary background for understanding the TDA that our algorithm implements. For more details on the subject, see [50,51]. The appeal of TDA is that it offers techniques from topology to ascertain the “shape” of high-dimensional data sets in ways that are not accessible by other methods. In particular, persistent homology is ubiquitous in TDA for measuring certain topological features of a space (e.g., connectivity, holes, voids). These features are often summarized using Betti numbers or a persistence diagram [23]. Our focus will be on the latter and on the bottleneck distance, which serves as a metric on a set of such diagrams.

2.1. Persistence Diagrams

Persistence diagrams are computed based on the notion of a simplicial complex. It is natural to think of a simplicial complex in a Euclidean space, but a more abstract definition adequately serves our purposes.
Definition 1
(Abstract Simplicial Complex). A finite collection of sets A is called an abstract simplicial complex if A is closed under subset inclusion, i.e., if β ∈ A whenever α ∈ A and β ⊆ α. Furthermore, we call α ∈ A a combinatorial p-simplex if |α| = p + 1.
In our consideration, each simplex is a subset of data points. There are several useful methods by which to construct simplicial complexes from data. We have chosen the Vietoris–Rips (VR) complex for its computational convenience.
Definition 2
(Vietoris–Rips Complex). Let Y be a finite subset of data points contained in some metric space X. For every r ≥ 0, the Vietoris–Rips complex of Y at scale r is defined as
VR_r(Y) := {α ⊆ Y : diam(α) ≤ 2r}.
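For small point sets, this definition can be implemented by direct enumeration: a candidate simplex is kept exactly when its diameter, the largest pairwise distance among its vertices, is at most 2r. A brute-force Python sketch, for illustration only (the `max_dim` cap on simplex dimension is our addition):

```python
import math
from itertools import combinations

def vietoris_rips(points, r, max_dim=2):
    """All simplices of VR_r(points), as tuples of point indices.

    A subset alpha is included when diam(alpha) <= 2r, i.e., when every
    pairwise Euclidean distance among its vertices is at most 2r.
    """
    n = len(points)
    simplices = [(i,) for i in range(n)]  # every point is a 0-simplex
    for p in range(1, max_dim + 1):
        for alpha in combinations(range(n), p + 1):
            diam = max(math.dist(points[i], points[j])
                       for i, j in combinations(alpha, 2))
            if diam <= 2 * r:
                simplices.append(alpha)
    return simplices
```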
We note that the VR complex is ascending in the sense that VR_{r₀}(Y) ⊆ VR_r(Y) for r₀ ≤ r. Thus, as the radius r increases, we produce a filtration of simplicial complexes on top of the data points in Y. This filtration yields a persistent homology that quantifies the apparent topological features for different dimensions. 0-, 1-, and 2-dimensional features refer to connected components, holes, and voids, respectively. Higher-dimensional features would be the analogues of these features. The persistence diagram quantifies the importance of these features in terms of their persistence over various radii. Persistence gives us a sequence of homology groups for each dimension of the ambient metric space containing the data. By identifying the radii in the sequence at which the topological features appear and disappear, we obtain a collection of birth and death times for each feature of each dimension.
The birth-death pairs, as a multiset in ℝ², make up the persistence diagram corresponding to Y. The persistence diagrams in our algorithm only contain information on connected components, the zero-dimensional features. We were unable to find a meaningful use for persistence diagrams computed using higher-dimensional features.
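Since only zero-dimensional features are used, the diagram admits a simple computation: every component is born at r = 0, and two components merge precisely when an edge of the minimum spanning tree of the pairwise distances enters the filtration, at scale r = d/2 for an edge of length d under the diam(α) ≤ 2r convention above. A sketch of this shortcut (assuming distinct points; the one component that never dies is omitted):

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial.distance import pdist, squareform

def h0_diagram(points):
    """0-dimensional VR persistence diagram as (birth, death) pairs.

    Each minimum-spanning-tree edge of length d merges two components at
    scale r = d / 2, since the edge {x, y} enters VR_r once d(x, y) <= 2r.
    All births are 0; the longest-lived component never dies and is dropped.
    """
    D = squareform(pdist(np.asarray(points, dtype=float)))
    mst = minimum_spanning_tree(D)          # n - 1 edges for distinct points
    deaths = np.sort(mst.data) / 2.0
    return [(0.0, float(d)) for d in deaths]
```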

2.2. Bottleneck Distance

We use the bottleneck distance to make comparisons between two persistence diagrams. The bottleneck distance can be thought of more generally as a pseudometric on multisets P and Q that quantifies the cost of mapping P to Q. To be precise, suppose P and Q are multisets of points in the extended plane ℝ̄². We say M ⊆ P × Q is a partial matching between P and Q if it satisfies the following:
  • For each q ∈ Q, there exists at most one p ∈ P such that (p, q) ∈ M.
  • For each p ∈ P, there exists at most one q ∈ Q such that (p, q) ∈ M.
We say that s ∈ P ⊔ Q is unmatched if there does not exist p ∈ P with (p, s) ∈ M or q ∈ Q such that (s, q) ∈ M. Let Δ := {(x, x) : x ∈ ℝ̄}. Note that if s = (s_x, s_y) and z ∈ Δ satisfies z = argmin{‖s − x‖_∞ : x ∈ Δ}, then
‖s − z‖_∞ = |s_x − s_y| / 2.
We define the cost function for each matching M to be
c(M) = max{ sup{‖p − q‖_∞ : (p, q) ∈ M}, sup{|s_x − s_y| / 2 : s ∈ P ⊔ Q unmatched} }.
Definition 3
(Bottleneck Distance). Let 𝓜(P, Q) be the set of all partial matchings between P and Q. We define the bottleneck distance between P and Q as
W(P, Q) := inf{c(M) : M ∈ 𝓜(P, Q)}.
Remark 1.
We refer to the bottleneck distance as a pseudometric because there are multisets P and Q such that P ≠ Q yet W(P, Q) = 0. Take, for example, P = ℚ² ∩ Δ and Q = (ℚ + √2)² ∩ Δ. Although c(M) > 0 for every M ∈ 𝓜(P, Q), we find W(P, Q) = 0 by taking the infimum over 𝓜(P, Q). It has been shown that, in the case where P and Q are finite multisets in ℝ̄² ∖ Δ, W(P, Q) = 0 implies P = Q [52].
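For very small diagrams, the infimum in Definition 3 can be evaluated by brute force: pad each diagram with enough diagonal slots that every partial matching extends to a perfect one, then take the minimax cost over all pairings. A factorial-time Python sketch, for illustration only (a practical implementation would use binary search with bipartite matching):

```python
from itertools import permutations

def bottleneck(P, Q):
    """Bottleneck distance between two small finite diagrams.

    P and Q are lists of (birth, death) points.  A point matched to the
    diagonal costs |death - birth| / 2; matched pairs cost their
    l-infinity distance.  None entries stand for diagonal slots.
    """
    linf = lambda p, q: max(abs(p[0] - q[0]), abs(p[1] - q[1]))
    diag = lambda s: abs(s[1] - s[0]) / 2.0
    A = list(P) + [None] * len(Q)   # pad so a perfect matching exists
    B = list(Q) + [None] * len(P)
    best = float("inf")
    for perm in permutations(B):
        cost = 0.0
        for p, q in zip(A, perm):
            if p is None and q is None:
                c = 0.0             # diagonal slot matched to diagonal slot: free
            elif p is None:
                c = diag(q)
            elif q is None:
                c = diag(p)
            else:
                c = linf(p, q)
            cost = max(cost, c)
        best = min(best, cost)
    return best
```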

3. Description of the Algorithm

Our algorithm is motivated by the intuition that if we take a data set of vectors that represent typical behavior and one of the vectors were replaced with an anomalous observation, the topology of the data set would be significantly altered. The topology of the set is understood by computing persistence diagrams of randomly chosen subsets β ⊆ {x_i}_{i=1}^m. When we modify a bag β by randomly replacing one of its vectors with x_j, we denote the resulting modified bag by β^{(j)}. Figure 2 depicts the data points in the reference and modified bags. We then let D and D^{(j)} denote the persistence diagrams for the vectors in β and β^{(j)}, respectively. Figure 3 depicts the persistence diagrams for the reference and modified bags for a specific data point.
The bottleneck distance quantifies how much the topology of a set has changed. We say there is evidence that x_j is anomalous if W(D, D^{(j)}) is relatively large compared to the values in {W(D, D^{(i)})}_{i=1}^m. There are some undesirable cases that may occur due to the random sampling used to form β. It may be that the observations contained in β do not adequately represent the entire data set; or, if the reference bag contains an anomaly, we might replace that anomaly with a non-anomalous vector when forming the modified bag. To mitigate such issues, the algorithm uses multiple bags of the same size. We select hyperparameters S, N ∈ ℕ and repeat N times the process of randomly choosing S vectors to form β. This creates a collection of reference bags {β_k}_{k=1}^N. For each of the N bags, we form m distinct modified bags {β_k^{(j)}}_{j=1}^m by randomly choosing y ∈ β_k and replacing it with x_j. We calculate the summary statistics (mean, median, and standard deviation) of the bottleneck distances and choose a function f : ℝ³ → {0, 1} that identifies anomalies from the summary statistics. The entire algorithm is presented in Algorithm 1.
Algorithm 1: Anomaly Detection using Persistence Diagrams of Random Subsamples
[Algorithm 1 pseudocode figure]
The summary statistics of the bottleneck distances {d_{j,k}}_{k=1}^N can be used in various ways depending on the application. In our application to traffic data, we set thresholds for each summary statistic based on training data. Thus, we set thresholds τ₁, τ₂, τ₃ for the three summary statistics, and we defined f(d̄_j, h_j, s_j) = 1 if and only if d̄_j ≥ τ₁, h_j ≥ τ₂, and s_j ≥ τ₃.
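The bagging loop of Algorithm 1 and the thresholding function f can be sketched as follows; the persistence-diagram and bottleneck-distance computations are abstracted into user-supplied callables `diagram` and `dist` (any TDA backend could provide them), and all names and defaults here are illustrative:

```python
import random
import statistics

def anomaly_statistics(X, S, N, diagram, dist, seed=0):
    """For each x_j, return (mean, median, std) of its N bottleneck distances.

    For each of N reference bags of S randomly chosen vectors, a modified
    bag replaces one random member with x_j; the score d_{j,k} is the
    distance between the two bags' diagrams.
    """
    rng = random.Random(seed)
    m = len(X)
    d = [[0.0] * N for _ in range(m)]
    for k in range(N):
        bag = rng.sample(range(m), S)              # reference bag beta_k
        D_ref = diagram([X[i] for i in bag])
        for j in range(m):
            mod = list(bag)
            mod[rng.randrange(S)] = j              # modified bag beta_k^(j)
            d[j][k] = dist(D_ref, diagram([X[i] for i in mod]))
    return [(statistics.mean(row), statistics.median(row), statistics.stdev(row))
            for row in d]

def f(stats, tau):
    """Flag a point as anomalous iff every summary statistic meets its threshold."""
    return 1 if all(s >= t for s, t in zip(stats, tau)) else 0
```

With S = N = 30, as used in Section 4, each data point receives 30 bottleneck distances, and the returned triples feed directly into f.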
In addition to the issues previously mentioned, we may also encounter the case in which the modified bag β^{(j)} is formed by replacing the vector x_j itself, i.e., x_j ∈ β and y = x_j. This would make an anomaly look much less anomalous based on our bottleneck distance measurement. Fortunately, if N is chosen sufficiently large, the effect is small, since the summary statistics converge in probability. Let us formalize the probability distribution here. The sample space consists of ordered pairs
Ω = {(β, y) : β ⊆ D, |β| = S, y ∈ β}.
We place the uniform distribution on Ω and denote it by ℙ.
Theorem 1.
Let d̄_j, s_j, and h_j denote the mean, standard deviation, and median, respectively, of the bottleneck distances associated with x_j as computed in Algorithm 1. Then there exist μ_j, σ_j, η_j⁺, and η_j⁻ such that
d̄_j → μ_j and s_j → σ_j in probability
as N → ∞, and, for all ε > 0,
lim_{N→∞} ℙ[h_j ∈ (η_j⁻ − ε, η_j⁺ + ε)] = 1.
Proof. 
Define the random variable d_j on Ω by
d_j(β, y) = W(β, (β ∖ {y}) ∪ {x_j}).
We have that d_{j,1}, d_{j,2}, …, d_{j,N} are i.i.d. random variables, each equal in distribution to d_j. Since |Ω| < ∞ and no persistence diagram involved contains points at infinity, d_j is bounded. The three limits are consequences of the Weak Law of Large Numbers, which requires a sequence of i.i.d. random variables with finite expectation. In particular, let μ_j := E[d_j] < ∞ and σ_j² := Var[d_j] < ∞. Since the {d_{j,k}}_{k=1}^N are i.i.d. with finite mean μ_j, by the Weak Law of Large Numbers,
d̄_j = (1/N) Σ_{k=1}^N d_{j,k} → μ_j in probability
as N → ∞. To prove the second limit, we start by writing
s_j² = (1/(N−1)) Σ_{k=1}^N (d_{j,k} − d̄_j)² = (N/(N−1)) [ (1/N) Σ_{k=1}^N d_{j,k}² − d̄_j² ].   (1)
Since {d_{j,k}²}_{k=1}^N are i.i.d. with E[d_{j,k}²] = σ_j² + μ_j² < ∞, it follows from the Weak Law of Large Numbers that (1/N) Σ_{k=1}^N d_{j,k}² → σ_j² + μ_j² in probability as N tends to infinity. Since we have established d̄_j → μ_j in probability, it follows that d̄_j² → μ_j² in probability as N tends to infinity. Combining the terms in (1), this means s_j → σ_j in probability. For the third limit, first define
η_j⁻ := sup{η : ℙ[d_j ≤ η] < 1/2},  η_j⁺ := inf{η : ℙ[d_j ≤ η] > 1/2}.
For any ε > 0, we have ℙ[d_{j,k} > η_j⁺ + ε] = α < 1/2. Define b_{j,k} = 1 if d_{j,k} > η_j⁺ + ε, and b_{j,k} = 0 otherwise. It follows that b_{j,1}, b_{j,2}, …, b_{j,N} are i.i.d. Bernoulli(α). Since h_j is the sample median of {d_{j,k}}_{k=1}^N, it follows that
ℙ[h_j > η_j⁺ + ε] ≤ ℙ[Σ_{k=1}^N b_{j,k} > N/2] = ℙ[Σ_{k=1}^N b_{j,k} − Nα > N/2 − Nα] = ℙ[(1/N) Σ_{k=1}^N b_{j,k} − α > 1/2 − α] ≤ ℙ[|(1/N) Σ_{k=1}^N b_{j,k} − α| > 1/2 − α].   (2)
Since 1/2 − α > 0, we apply the Weak Law of Large Numbers to b_{j,1}, …, b_{j,N} to see that lim_{N→∞} ℙ[h_j > η_j⁺ + ε] = 0. Similarly, we have ℙ[d_{j,k} < η_j⁻ − ε] = δ < 1/2. Thus, if we instead define b_{j,k} = 1 if d_{j,k} < η_j⁻ − ε, we have b_{j,1}, b_{j,2}, …, b_{j,N} i.i.d. Bernoulli(δ). Much as in (2), we find
ℙ[h_j < η_j⁻ − ε] ≤ ℙ[Σ_{k=1}^N b_{j,k} > N/2] ≤ ℙ[|(1/N) Σ_{k=1}^N b_{j,k} − δ| > 1/2 − δ].
Since 1/2 − δ > 0, we again have from the Weak Law of Large Numbers applied to b_{j,1}, …, b_{j,N} that lim_{N→∞} ℙ[h_j < η_j⁻ − ε] = 0. □
Remark 2.
Since we are sampling from a discrete probability distribution, it is not guaranteed that the distribution producing the {d_{j,k}} has a true median. Hence we must settle for convergence of the sample median to an interval. In the case where there exists η such that ℙ[d_{j,k} ≤ η] ≥ 1/2 and ℙ[d_{j,k} ≥ η] ≥ 1/2, then η_j⁺ = η_j⁻ = η and the sample median converges in probability to η. We define the sample median in the usual way: if X_1 ≤ X_2 ≤ ⋯ ≤ X_N, the sample median is X_{(N+1)/2} if N is odd or ½(X_{N/2} + X_{N/2+1}) if N is even.
Remark 3.
We note that the values guaranteed in Theorem 1 are data dependent, and that for the purposes of Theorem 1 as well as Algorithm 1, the data are fixed.

4. Results

In this section, we present our contribution to the ATD challenge problem through the application of our method to traffic data collected from major highways in the Sacramento area. The problem was divided into two phases (depending on whether the location information of the sensors was provided), where each phase consisted of a training and a testing data set. The objective of each phase was to predict the time and location of hand-selected traffic incidents in the testing data using the training data, which had very few labeled incidents. The data were collected as a count of cars that passed a given sensor during each 5-min interval throughout the 2017 calendar year. An example can be seen in Table 1.
The training sets included details on certain incidents reported during the year, including the nearest sensor, the timestamp, and the duration of each incident. In each data set, there are a few instances in which a particular sensor was not operating, so no counts are reported during those five-minute intervals. Table 2 contains additional information on the data sets that were provided.
To apply the algorithm to the volumetric data, we considered the embedding of the data in ℝ¹² as sequences of 12 consecutive 5-min counts from a sensor. This means each vector represents 1 hour's worth of counts. We index each vector by a 3-tuple (p, t, w). We use p to denote the sensor ID from which the counts were collected. We let t = (t, d), where t denotes the starting time of the five-minute window corresponding to the first of the 12 counts and d ∈ {1, …, 7} indicates the day of the week, with d = 1 corresponding to Sunday, d = 2 corresponding to Monday, etc. We let w ∈ {1, …, 52} denote the week in which these counts were collected. (We note that in 2017 there were 53 Sundays, and so for d = 1, w ∈ {1, …, 53}.) The vectors were then sorted by sensor, starting time, and day of week, meaning the set of all vectors was partitioned into smaller data sets of 52 vectors. We let D_{p,t} denote the collection of count vectors from sensor p collected at the time (hour and day of the week) corresponding to t. For two different starting times from the same weekday, say t = (t, d) and t* = (t*, d), it is possible that the components of vectors in D_{p,t} overlap with those of vectors in D_{p,t*}. For example, if t = 8:00 AM and t* = 8:05 AM, then the vectors in D_{p,t*} are nearly the same as those in D_{p,t}, but the entries are shifted left by one component and the final entry is the count from 9:00–9:05 AM. Since there are 288 five-minute intervals throughout the day and we require 12 consecutive counts to form each vector, there are 277 possible vectors of counts each day.
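The embedding into ℝ¹² is a plain sliding window over a day's counts: with 288 five-minute intervals and 12 consecutive counts per vector, 277 windows start each day. A sketch (sensor and week bookkeeping omitted):

```python
def hourly_windows(day_counts):
    """Slide a one-hour (12-count) window across one day's 288 counts.

    Returns the 277 possible count vectors; window i covers the five-minute
    intervals i through i + 11.
    """
    assert len(day_counts) == 288, "one count per 5-min interval"
    return [day_counts[i:i + 12] for i in range(288 - 12 + 1)]
```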
After sorting the vectors, we applied Algorithm 1 to each collection of counts D_{p,t} with S = 30 and N = 30. Thus, each vector in D_{p,t} was assigned 30 bottleneck distances. For each time window (p, t, w), we recorded the mean, median, and standard deviation of these bottleneck distances, which we denote by d̄_{p,t,w}, h_{p,t,w}, and s_{p,t,w}, respectively.
As in [53], we expected different periods of the day to exhibit different behavior. Therefore, after obtaining the summary statistics from Algorithm 1, our vectors were classified again according to three factors: day of week, sensor, and time of day. Time of day had 5 levels based on the time of the vector’s first count. The levels were Early Morning, Morning, Mid Day, Evening, and Late Evening, and they corresponded to one-hour windows with start times in the ranges:
  • Early Morning 12:00 AM–4:50 AM
  • Morning 4:55 AM–7:50 AM
  • Mid Day 7:55 AM–3:50 PM
  • Evening 3:55 PM–5:50 PM
  • Late Evening 5:55 PM–11:55 PM
The windows were ranked by each of their summary statistics within their classification. These rankings were given in the form of a percentile and used as anomaly scores.
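The classification and ranking steps amount to binning each window's start time into one of the five levels and computing within-group percentiles. A sketch (the strictly-below percentile convention is our assumption):

```python
def time_of_day(start_minute):
    """Map a window's start time (minutes after midnight) to its level."""
    if start_minute <= 4 * 60 + 50:    # 12:00 AM-4:50 AM
        return "Early Morning"
    if start_minute <= 7 * 60 + 50:    # 4:55 AM-7:50 AM
        return "Morning"
    if start_minute <= 15 * 60 + 50:   # 7:55 AM-3:50 PM
        return "Mid Day"
    if start_minute <= 17 * 60 + 50:   # 3:55 PM-5:50 PM
        return "Evening"
    return "Late Evening"              # 5:55 PM-11:55 PM

def percentile_rank(group, value):
    """Fraction of the treatment group ranked strictly below the value."""
    return sum(x < value for x in group) / len(group)
```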
We expected that traffic patterns throughout the year should be fairly consistent given a particular sensor, day of the week, and period of the day; however, it is possible that traffic behavior could be unusual for an entire day due to a holiday, road construction, or a weather event. Then, according to this window classification, it cannot be readily ascertained if a particular window with a relatively large mean bottleneck distance is due to something acute, such as a traffic incident, or to something predictable, like Labor Day or a major sporting event. We took this into account by reclassifying the windows by sensor, time of day, and day of the year. We then ranked the windows a second time within each of these treatment groups. Each window was assigned 6 percentile scores, based on three different summary statistics each ranked two different ways.
We describe our procedure using an example from the Phase 2-Train data. To measure the likelihood that there was an incident occurring near sensor p₀ := S314402 on Monday, 6 February 2017, during the window from 7:55 AM to 8:55 AM, we set t₀ = (7:55 AM, 2). We apply our algorithm to D_{p₀,t₀}. See Figure 3 for an example of the persistence diagrams related to this particular timestamp. If an incident occurred on February 6th, the sixth Monday of 2017 in the time window classification, we would expect d̄_{p₀,t₀,6} to be large compared to the rest of the collection {d̄_{p₀,t₀,w}}_{w=1}^{52}. Indeed, we find that d̄_{p₀,t₀,6} = 41.95, which is the second largest of the 52 average bottleneck distances in {d̄_{p₀,t₀,w}}_{w=1}^{52}. Even though this was not the largest average bottleneck distance observed for this sensor, day of week, and time of day, it ranked above the mid-98th percentile among all Mid Day windows according to average bottleneck distance for S314402 on a Monday. Similarly, s_{p₀,t₀,6} = 27.56 ranked above the 96th percentile of standard deviations of bottleneck distances, and h_{p₀,t₀,6} = 31.16 ranked just above the 99th percentile of median bottleneck distances observed from Mid Day, Monday windows at sensor S314402. If we consider this window among only the observations from Mid Day on February 6th at sensor S314402, then d̄_{p₀,t₀,6} ranks just above the mid-89th percentile, s_{p₀,t₀,6} ranks above the mid-90th percentile, and h_{p₀,t₀,6} is at least tied as the highest-ranking sample median of its class.
After the six rankings are determined for each window in our data set, we are ready to apply selection criteria to determine which windows overlapped with traffic incidents. The selection consists of six thresholds for the six rankings. The thresholds are determined by the rankings of windows near the starting times of the labeled incidents in the training data using the procedure outlined below:
  • Identify all windows starting within 30 min of the timestamp of a reported incident at the sensor where the incident was located.
  • For each of the 6 types of rankings, identify the minimum ranking needed to include at least one window corresponding to each labeled incident. If all 6 minimum rankings seem too small, the minimum ranking needed to include one window from all but one incident can be used instead.
  • Set the first threshold, τ₁, as the largest of the six minimums.
  • Identify which of the windows found in step 1 satisfy the threshold τ₁.
  • For each of the 5 types of rankings that were not used to determine τ₁, identify the minimum ranking met by all the windows identified in step 4.
  • Set the other 5 thresholds, τ₂, …, τ₆, according to the 5 minimums identified in step 5.
Once the thresholds τ₁, …, τ₆ are set, we classify as incidents all windows whose six rankings are each above their respective thresholds. In the rest of this section, we present the results when this procedure is applied to each of the four data sets. We present the rankings of windows near each incident in the data sets and the thresholds we determined from the training data sets. In each phase, we use the thresholds determined from the training data to classify the windows in the corresponding testing data. We measure the performance of these predictions using precision, recall, and an F-score.
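One possible reading of the six-step fitting procedure above, with each labeled incident represented by the 6-tuples of rankings of its nearby windows (step 2's "all but one incident" relaxation and any tie-breaking are omitted):

```python
def fit_thresholds(incident_windows):
    """Fit the six thresholds from labeled training incidents.

    incident_windows: one list per labeled incident, each containing the
    6-tuples of rankings of that incident's nearby windows.  Returns tau,
    where tau[r] is the threshold for ranking type r (tau_1 sits at the
    index of the ranking type attaining the largest minimum).
    """
    R = 6
    # Step 2: per ranking type, the largest threshold that still keeps at
    # least one window from every incident.
    mins = [min(max(w[r] for w in wins) for wins in incident_windows)
            for r in range(R)]
    # Step 3: tau_1 is the largest of the six minimums.
    r_star = max(range(R), key=lambda r: mins[r])
    tau = [0.0] * R
    tau[r_star] = mins[r_star]
    # Step 4: windows meeting tau_1 in the chosen ranking type.
    kept = [w for wins in incident_windows for w in wins
            if w[r_star] >= tau[r_star]]
    # Steps 5-6: remaining thresholds are the minimum rankings met by all
    # of the kept windows.
    for r in range(R):
        if r != r_star:
            tau[r] = min(w[r] for w in kept)
    return tau
```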
Remark 4.
Our performance evaluations using Algorithm 1 exclude any incidents detected on Sundays, since the training data did not include any reported incidents on Sundays.
To provide some comparison, we also apply a standard normal deviates (SND) algorithm to detect incidents in the Phase 1 data. The SND algorithm detects an incident in the kth 5-min time window if the count for that window, x_k, satisfies
|x_k − x̄| > τ σ̂.
We let x̄ and σ̂ respectively denote the mean count and standard deviation of counts from the same treatment group, and τ denotes a threshold that applies across all treatment groups. In this case, a treatment group is made up of all counts belonging to the same sensor, on the same day of week, at the same time of day. For example, in Phase 1, there were 52 counts {x_k}_{k=1}^{52} taken at sensor S312425 on Monday mornings at 8:00 AM. We can compute the mean and standard deviation of this treatment group:
x̄ = (1/52) Σ_{k=1}^{52} x_k,  σ̂ = √( (1/(52−1)) Σ_{k=1}^{52} (x_k − x̄)² ).
For a better description of the SND algorithm and its application to incident detection, see [11], where incident detection was performed using vehicle velocity rather than traffic flow. The threshold τ was determined using the Phase 1-Train data by computing the deviates for each window occurring during a labeled incident. For each of the 15 incidents, we computed the 85th percentile of the deviates that took place during the incident. We chose τ to be the second smallest of the 15 percentiles. Thus, for the Phase 1 data, we found τ = 1.06, meaning any 5-min window where the count deviated from the mean by more than 1.06 standard deviations was detected as an incident.
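The SND detector for a single treatment group then reduces to a few lines; here `tau` is the threshold fitted above (1.06 for the Phase 1 data):

```python
import statistics

def snd_detect(counts, tau):
    """Flag window k when |x_k - mean| > tau * sample std within a
    treatment group (same sensor, weekday, and time of day)."""
    x_bar = statistics.mean(counts)
    s = statistics.stdev(counts)   # n - 1 denominator, as in the paper
    return [1 if abs(x - x_bar) > tau * s else 0 for x in counts]
```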

4.1. Data without Sensor Locations

In the Phase 1-Train data set, we were able to find windows near the incidents that had very large average bottleneck distances. When sorted by sensor, day of week, and time of day, each of the 15 labeled incidents overlapped with a one-hour window starting within half an hour of some window ranked in the 85th percentile or above. When sorting these windows further by day of year, the start times of six of the incidents fell within half an hour of the window with the highest average bottleneck distance in its category. Table 3 and Table 4 present the percentile scores of the incidents in the Phase 1-Train data set.
Table 5 contains the 6 thresholds determined using the rankings of windows near the labeled incidents. The quality of the fit is given in Table 6; as a comparison, the quality of fit using the standard normal deviates is given in Table 7. To better describe the quality of the anomaly scores, we provide the receiver operating characteristic (ROC) curve based on the three rankings in Figure 4. We also provide the ROC curve based on the standard normal deviates.
Of course, these performance scores say very little about the method since the thresholds were determined using the incidents in the data set. Rather, we apply the thresholds in Table 5 to classify the Phase 1-Test data. Table 8 has the performance scores from this experiment, whereas Table 9 has the performance for the standard normal deviates on this dataset.

4.2. Data with Sensor Locations

In Phase 2 we were given the locations of the sensors in terms of latitude and longitude. For each of the nine incidents reported in Phase 2-Train, Table 10 and Table 11 display the largest percentile recorded for each summary statistic among windows starting within half an hour of the incident's start time at the sensor where the incident was reported.
Depending on the severity of the incident, it is possible that a traffic incident occurring near one sensor will cause abnormal behavior in the counts of adjacent sensors. For example, in the network of sensors used in the Phase 2-Train data, traffic flows directly from S 314402 to S 318593; a map of all the sensor locations in this data set is provided in Figure 1. If one of the lanes is obstructed near S 318593, traffic is likely to slow between the two sensors, which might cause the counts from S 314402 to look large compared to those from S 318593. On the other hand, if motorists were able to anticipate the problem a mile ahead, they might divert their routes before even passing S 314402. If enough cars did this, the count at the first sensor would drop, while the count at S 318593 might still be high because of the unfortunate cars caught between the sensors at the time of the incident. In either scenario, we would expect unusual behavior when comparing the counts of the two sensors together. Not knowing whether the difference should be large or small, we apply the random bagging algorithm to the sequence of differences between the counts, with the intuition that if an incident occurred during a particular hour, especially one with heavy traffic, the mean bottleneck distance for that time window would also be large. We refer to the summary statistics obtained from this procedure as adjacency statistics. In Table 12 and Table 13 we provide the highest rankings near the reported incidents based on the adjacency statistics.
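The adjacency statistics start from the element-wise differences of the two sensors' time-aligned counts. A sketch of forming that difference series, with made-up counts, which is then fed to the same random-bagging procedure as the raw counts:

```python
# Hypothetical sketch: form the difference series between an upstream sensor
# (e.g. S 314402) and the adjacent downstream sensor (S 318593); the resulting
# sequence replaces the raw counts in the random bagging algorithm.

def adjacency_series(upstream, downstream):
    """Element-wise differences between time-aligned 5-min count series."""
    if len(upstream) != len(downstream):
        raise ValueError("count series must cover the same 5-min windows")
    return [u - d for u, d in zip(upstream, downstream)]

diffs = adjacency_series([104, 160, 148], [100, 141, 161])
```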
The most noticeable difference made by the adjacency statistics appears in the percentile of the July incident when windows are sorted by sensor, day of year, and time of day. By average bottleneck distance of the raw counts, the highest that any window near that time ranks is in the 66th percentile of Mid Day counts for that day; but if we consider the differences in counts between the two adjacent sensors, the highest-ranking Mid Day window for July 24th starts within half an hour of the July incident.
With the addition of the adjacency statistics, we had 12 percentiles to consider for each window. Table 14 presents the 12 thresholds determined from the incidents in the data for Phase 2-Train. The quality of the fit is given in Table 15. We only report the performance statistics for sensors S 314402 and S 318593 since those were the only sensors in the data set where incidents were labeled.
As in Phase 1, we use the thresholds learned from the training data to classify the windows in the Phase 2-Test data. This data set contained 8 sensors, which formed 8 pairs of adjacent sensors, so we were able to compute adjacency scores for the windows of every sensor. In Table 16 we present the performance scores for this classification. The Phase 2-Test data were very different from the other data sets: there were 1409 incidents with an average duration of 42 min. Surprisingly, no true incidents were reported at sensor S 313386.

5. Conclusions

Detecting traffic incidents using volumetric data is challenging, as reflected in our performance scores, even though our method performed best among all ATD challenge problem participants [54]. Compared to the SND algorithm, a commonly used method for anomaly detection in traffic data, our TDA approach performed slightly better based on the area under the ROC curve. We believe there is much room for improvement in using TDA for traffic incident detection. One possibility is to apply unsupervised learning techniques to the collection of bottleneck distances produced for each vector in Algorithm 1. Another is to use persistence landscapes rather than bottleneck distances.

Supplementary Materials

Data and code are available at the following repository: https://bitbucket.org/esweber/randomized-persistence-landscapes/src/master/.

Author Contributions

Conceptualization, E.S.W.; Methodology, S.N.H., L.P. and E.S.W.; Software, L.P.; Validation, S.N.H., L.P., E.S.W.; Formal Analysis, S.N.H., L.P. and E.S.W.; Investigation, S.N.H., L.P. and E.S.W.; Resources, L.P.; Data Curation, L.P.; Writing—Original Draft Preparation, L.P.; Writing—Review & Editing, S.N.H., E.S.W.; Visualization, L.P.; Supervision, E.S.W.; Project Administration, E.S.W.; Funding Acquisition, E.S.W. All authors have read and agreed to the published version of the manuscript.

Funding

All three authors were supported by the National Science Foundation and the National Geospatial-Intelligence Agency under award #1830254.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Blincoe, L.; Seay, A.; Zoloshnja, E.; Miller, T.; Romano, E.; Luchter, S.; Spicer, R. Economic impact of U.S. motor vehicle crashes reaches $230.6 billion, NHTSA reports. Prof. Saf. 2002, 47, 12. [Google Scholar]
  2. Schrank, D.; Lomax, T. The 2007 Urban Mobility Report. 2007. Available online: https://static.tti.tamu.edu/tti.tamu.edu/documents/umr/archive/mobility-report-2007-wappx.pdf (accessed on 1 June 2020).
  3. Chen, C.; Petty, K.; Skabardonis, A.; Varaiya, P.; Jia, Z. Freeway Performance Measurement System, Mining Loop Detector Data. Transp. Res. Rec. 2001, 1748, 96–102. [Google Scholar] [CrossRef] [Green Version]
  4. Truslow, E.; Tsitsopoulos, G.; Manolakis, D. Event Detection in Time Series: A Traffic Data Challenge [Conference Presentation]. In Proceedings of the Algorithms for Threat Detection PI Workshop, Washington, DC, USA, 10–11 October 2018; National Science Foundation: Alexandria, VA, USA, 2018. [Google Scholar]
  5. Sadeky, S.; Al-Hamadiy, A.; Michaelisy, B.; Sayed, U. Real-time automatic traffic accident recognition using hfg. In Proceedings of the 2010 20th International Conference on Pattern Recognition, Istanbul, Turkey, 23–26 August 2010; pp. 3348–3351. [Google Scholar]
  6. Jiansheng, F.; Hui, Z.; Yaohua, M. Vision-based real-time traffic accident detection. In Proceedings of the 11th World Congress on Intelligent Control and Automation, Shenyang, China, 29 June–4 July 2014; pp. 1035–1038. [Google Scholar]
  7. Chakraborty, P.; Hegde, C.; Sharma, A. Trend filtering in network time series with applications to traffic incident detection. In Proceedings of the Time Series Workshop, 31st Conference on Neural Information Processing Systems (NIPS), Long Beach, CA, USA, 8 December 2017. [Google Scholar]
  8. Maaloul, B.; Taleb-Ahmed, A.; Niar, S.; Harb, N.; Valderrama, C. Adaptive video-based algorithm for accident detection on highways. In Proceedings of the 2017 12th IEEE International Symposium on Industrial Embedded Systems (SIES), Toulouse, France, 14–16 June 2017; pp. 1–6. [Google Scholar]
  9. Shi, Q.; Abdel-Aty, M. Big data applications in real-time traffic operation and safety monitoring and improvement on urban expressways. Transp. Res. Part C Emerg. Technol. 2015, 58, 380–394. [Google Scholar] [CrossRef]
  10. Zhao, B.; Li, F.-F.; Xing, E.P. Online detection of unusual events in videos via dynamic sparse coding. In Proceedings of the 2011 IEEE Conference on Computer Vision and Pattern Recognition, Providence, RI, USA, 20–25 June 2011; pp. 3313–3320. [Google Scholar]
  11. Chakraborty, P.; Hess, J.R.; Sharma, A.; Knickerbocker, S. Outlier mining based traffic incident detection using big data analytics. In Proceedings of the Transportation Research Board 96th Annual Meeting Compendium of Papers, Washington, DC, USA, 8–12 January 2017; pp. 8–12. [Google Scholar]
  12. Xu, C.; Liu, P.; Yang, B.; Wang, W. Real-time estimation of secondary crash likelihood on freeways using high-resolution loop detector data. Transp. Res. Part C Emerg. Technol. 2016, 71, 406–418. [Google Scholar] [CrossRef]
  13. Lou, J.; Liu, Q.; Tan, T.; Hu, W. Semantic interpretation of object activities in a surveillance system. In Proceedings of the Object Recognition Supported by User Interaction for Service Robots, Quebec, Canada, 11–15 August 2002; Volume 3, pp. 777–780. [Google Scholar]
  14. Piciarelli, C.; Micheloni, C.; Foresti, G.L. Trajectory-based anomalous event detection. IEEE Trans. Circuits Syst. Video Technol. 2008, 18, 1544–1554. [Google Scholar] [CrossRef]
  15. Hasan, M.; Choi, J.; Neumann, J.; Roy-Chowdhury, A.K.; Davis, L.S. Learning temporal regularity in video sequences. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 26 June–1 July 2016; pp. 733–742. [Google Scholar]
  16. Yuan, Y.; Wang, D.; Wang, Q. Anomaly detection in traffic scenes via spatial-aware motion reconstruction. IEEE Trans. Intell. Transp. Syst. 2016, 18, 1198–1209. [Google Scholar] [CrossRef] [Green Version]
  17. Zhang, S.; Li, S.; Li, X.; Yao, Y. Representation of Traffic Congestion Data for Urban Road Traffic Networks Based on Pooling Operations. Algorithms 2020, 13, 84. [Google Scholar] [CrossRef] [Green Version]
  18. Chandola, V.; Banerjee, A.; Kumar, V. Anomaly Detection: A Survey. ACM Comput. Surv. 2009, 41, 1–58. [Google Scholar] [CrossRef]
  19. Bendich, P.; Chin, S.P.; Clark, J.; Desena, J.; Harer, J.; Munch, E.; Newman, A.; Porter, D.; Rouse, D.; Strawn, N.; et al. Topological and statistical behavior classifiers for tracking applications. IEEE Trans. Aerosp. Electron. Syst. 2016, 52, 2644–2661. [Google Scholar] [CrossRef] [Green Version]
  20. Chen, R.; Zhang, J.; Ravishanker, N.; Konduri, K. Clustering Activity–Travel Behavior Time Series using Topological Data Analysis. J. Big Data Anal. Transp. 2019, 1, 109–121. [Google Scholar] [CrossRef] [Green Version]
  21. Islambekov, U.; Yuvaraj, M.; Gel, Y.R. Harnessing the power of topological data analysis to detect change points. Environmetrics 2019, 31, e2612. [Google Scholar] [CrossRef] [Green Version]
  22. Li, Y.; Islambekov, U.; Akcora, C.; Smirnova, E.; Gel, Y.R.; Kantarcioglu, M. Dissecting Ethereum Blockchain Analytics: What We Learn from Topology and Geometry of the Ethereum Graph? In Proceedings of the 2020 SIAM International Conference on Data Mining, Cincinnati, OH, USA, 7–9 May 2020; pp. 523–531. [Google Scholar]
  23. Carlsson, G. Topology and data. Bull. Amer. Math. Soc. 2009, 46, 255–308. [Google Scholar] [CrossRef] [Green Version]
  24. Ghrist, R. Barcodes: The persistent topology of data. Bull. Am. Math. Soc. 2008, 45, 61–75. [Google Scholar] [CrossRef] [Green Version]
  25. Munch, E. A User’s Guide to Topological Data Analysis. J. Learn. Anal. 2017, 4, 47–61. [Google Scholar] [CrossRef]
  26. Rieck, B.; Togninalli, M.; Bock, C.; Moor, M.; Horn, M.; Gumbsch, T.; Borgwardt, K. Neural Persistence: A Complexity Measure for Deep Neural Networks Using Algebraic Topology. arXiv 2018, arXiv:1812.09764. [Google Scholar]
  27. Guss, W.H.; Salakhutdinov, R. On Characterizing the Capacity of Neural Networks using Algebraic Topology. arXiv 2018, arXiv:1802.04443. [Google Scholar]
  28. Biasotti, S.; Falcidieno, B.; Spagnuolo, M. Extended Reeb Graphs for Surface Understanding and Description. In International Conference on Discrete Geometry for Computer Imagery; Discrete Geometry for Computer Imagery; Borgefors, G., Nystrom, I., di Baja, G.S., Eds.; Springer: Berlin/Heidelberg, Germany, 2000; pp. 185–197. [Google Scholar]
  29. Zhang, E.; Mischaikow, K.; Turk, G. Feature-based Surface Parameterization and Texture Mapping. ACM Trans. Graph. 2005, 24, 1–27. [Google Scholar] [CrossRef]
  30. Nicolau, M.; Levine, A.J.; Carlsson, G. Topology based data analysis identifies a subgroup of breast cancers with a unique mutational profile and excellent survival. Proc. Natl. Acad. Sci. USA 2011, 108, 7265–7270. [Google Scholar] [CrossRef] [Green Version]
  31. Chan, J.M.; Carlsson, G.; Rabadan, R. Topology of viral evolution. Proc. Natl. Acad. Sci. USA 2013, 110, 18566–18571. [Google Scholar] [CrossRef] [Green Version]
  32. Bendich, P.; Marron, J.S.; Miller, E.; Pieloch, A.; Skwerer, S. Persistent Homology Analysis of Brain Artery Trees. Ann. Appl. Stat. 2016, 10, 198–218. [Google Scholar] [CrossRef] [Green Version]
  33. Ravishanker, N.; Chen, R. Topological Data Analysis (TDA) for Time Series. arXiv 2019, arXiv:1909.10604. [Google Scholar]
  34. Robinson, M. Topological Signal Processing; Mathematical Engineering; Springer: Berlin/Heidelberg, Germany, 2014. [Google Scholar] [CrossRef]
  35. Truong, P. An Exploration of Topological Properties of High-Frequency One-Dimensional Financial Time Series Data Using TDA. Ph.D. Thesis, KTH Royal Institute of Technology, Stockholm, Sweden, December 2019. [Google Scholar]
  36. Gidea, M. Topological Data Analysis of Critical Transitions in Financial Networks. In NetSci-X 2017. Springer Proceedings in Complexity, Proceedings of the 3rd International Winter School and Conference on Network Science, Indianapolis, IN, USA, 19–23 June 2017; Shmueli, E., Barzel, B., Puzis, R., Eds.; Springer: Cham, Switzerland, 2017; pp. 47–59. [Google Scholar]
  37. Gidea, M.; Katz, Y. Topological data analysis of financial time series: Landscapes of crashes. Phys. A Stat. Mech. Appl. 2018, 491, 820–834. [Google Scholar] [CrossRef] [Green Version]
  38. Wang, Y.; Ombao, H.; Chung, M.K. Topological data analysis of single-trial electroencephalographic signals. Ann. Appl. Stat. 2018, 12, 1506–1534. [Google Scholar] [CrossRef] [PubMed]
  39. Stolz, B.J.; Harrington, H.A.; Porter, M.A. Persistent homology of time-dependent functional networks constructed from coupled time series. Chaos An Interdiscip. J. Nonlinear Sci. 2017, 27, 047410. [Google Scholar] [CrossRef] [PubMed]
  40. Perea, J.A. Persistent homology of toroidal sliding window embeddings. In Proceedings of the 2016 IEEE International Conference on Acoustics, Speech and Signal, Shanghai, China, 20–25 March 2016; pp. 6435–6439. [Google Scholar] [CrossRef]
  41. Tralie, C.J.; Perea, J.A. (Quasi)Periodicity Quantification in Video Data, Using Topology. arXiv 2018, arXiv:1704.08382. [Google Scholar] [CrossRef] [Green Version]
  42. Perea, J.A.; Harer, J. Sliding Windows and Persistence: An Application of Topological Methods to Signal Analysis. Found. Comput. Math. 2015, 15, 799–838. [Google Scholar] [CrossRef] [Green Version]
  43. Perea, J.A.; Deckard, A.; Haase, S.B.; Harer, J. SW1PerS: Sliding windows and 1-persistence scoring; discovering periodicity in gene expression time series data. BMC Bioinform. 2015, 16, 257. [Google Scholar] [CrossRef] [Green Version]
  44. Takens, F. Detecting strange attractors in turbulence. In Dynamical Systems and Turbulence, Warwick 1980; Rand, D., Young, L.S., Eds.; Lecture Notes in Mathematics; Springer: Berlin/Heidelberg, Germany, 1981; pp. 366–381. [Google Scholar]
  45. Khasawneh, F.A.; Munch, E. Chatter detection in turning using persistent homology. Mech. Syst. Signal Process. 2016, 70–71, 527–541. [Google Scholar] [CrossRef]
  46. Kahle, M. Random geometric complexes. Discrete Comput. Geom. 2011, 45, 553–573. [Google Scholar] [CrossRef] [Green Version]
  47. Bobrowski, O.; Kahle, M. Topology of random geometric complexes: A survey. J. Appl. Comput. Topol. 2018, 1, 331–364. [Google Scholar] [CrossRef] [Green Version]
  48. Niyogi, P.; Smale, S.; Weinberger, S. A topological view of unsupervised learning from noisy data. SIAM J. Comput. 2011, 40, 646–663. [Google Scholar] [CrossRef]
  49. Adler, R.J.; Bobrowski, O.; Weinberger, S. Crackle: The homology of noise. Discrete Comput. Geom. 2014, 52, 680–704. [Google Scholar] [CrossRef]
  50. Bubenik, P. Statistical topological data analysis using persistence landscapes. J. Mach. Learn. Res. 2015, 16, 77–102. [Google Scholar]
  51. Edelsbrunner, H.; Harer, J. Computational Topology: An Introduction; American Mathematical Society: Providence, RI, USA, 2010. [Google Scholar]
  52. Oudot, S.Y. Persistence Theory: From Quiver Representations to Data Analysis; American Mathematical Society: Providence, RI, USA, 2015. [Google Scholar]
  53. Laflamme, E.M.; Ossenbruggen, P.J. Effect of time-of-day and day-of-the-week on congestion duration and breakdown: A case study at a bottleneck in Salem, NH. J. Traff. Transp. Eng. 2017, 4, 31–40. [Google Scholar] [CrossRef]
  54. Truslow, E.; Tsitsopoulos, G.; Manolakis, D. Traffic Data Challenge Problem Results [Conference Presentation]. In Proceedings of the Algorithms for Threat Detection PI Workshop, Washington, DC, USA, 21–23 October 2019; The George Washington University: Washington, DC, USA. [Google Scholar]
Figure 1. A map of sensor locations from Phase 2-Train.
Figure 2. The left side represents one of the reference bags, while the right side represents the corresponding modified bag for week 6.
Figure 3. A plot of the 20th reference persistence diagram (left) and the modified persistence diagram (right) when week 6 replaces one of the randomly chosen observations in the 7th reference bag. This corresponds to the traffic behavior at sensor S 314402 on Monday, 6 February 2017 from 7:55–8:55 AM.
Figure 4. ROC curves for 4 different anomaly scores: (a) uses the average bottleneck distance percentile when sorted by day of week; AUC = 0.72526. (b) uses the standard deviation bottleneck distance percentile when sorted by day of week; AUC = 0.65658. (c) uses the median bottleneck distance percentile when sorted by day of week; AUC = 0.71332. (d) uses the standard normal deviates; AUC = 0.66754.
Table 1. A sample of the traffic counts from Phase 1-Train.
S 312425 | S 312520 | S 312694 | S 312942 | S 314147 | Timestamp
45 | 82 | 62 | 102 | 50 | 1/1/2017 0:00
39 | 66 | 60 | 109 | 54 | 1/1/2017 0:05
35 | 92 | 87 | 99 | 80 | 1/1/2017 0:10
67 | 136 | 120 | 111 | 136 | 1/1/2017 0:15
104 | 160 | 148 | 103 | 122 | 1/1/2017 0:20
100 | 141 | 161 | 137 | 159 | 1/1/2017 0:25
110 | 176 | 173 | 150 | 149 | 1/1/2017 0:30
76 | 159 | 157 | 161 | 175 | 1/1/2017 0:35
105 | 166 | 166 | 146 | 188 | 1/1/2017 0:40
93 | 176 | 143 | 172 | 157 | 1/1/2017 0:45
127 | 168 | 152 | 194 | 133 | 1/1/2017 0:50
137 | 171 | 174 | 187 | 169 | 1/1/2017 0:55
Table 2. A summary of the four traffic data sets to which we applied the incident detection algorithm.
Data Set | # Sensors | Sensor Locations | # Inc Reported | Avg Inc Duration (Min)
Phase 1-Train | 10 | No | 15 | 142
Phase 1-Test | 11 | No | 17 | 159
Phase 2-Train | 10 | Yes | 9 | 96
Phase 2-Test | 8 | Yes | 1409 | 42
Table 3. Percentiles of incidents from the Phase 1-Train data when sorted by sensor, day of week, and time of day.
Incident Timestamp | Percentile by Avg | Percentile by SD | Percentile by Median
06-Feb-2017 11:42 | 0.99199 | 0.99479 | 0.99359
12-Jun-2017 16:47 | 0.91667 | 0.97276 | 0.82772
26-Sep-2017 00:17 | 0.85626 | 0.72556 | 0.82285
05-Sep-2017 01:45 | 0.85952 | 0.87190 | 0.66688
30-May-2017 05:51 | 0.97543 | 0.94444 | 0.99092
18-Jul-2017 12:08 | 0.87660 | 0.88842 | 0.87210
05-Sep-2017 16:56 | 0.99119 | 0.97757 | 1
26-Jul-2017 05:23 | 0.98077 | 0.97489 | 0.98745
22-Mar-2017 06:31 | 0.99947 | 1 | 0.98771
05-Apr-2017 10:45 | 0.89543 | 0.72476 | 0.89764
12-Jan-2017 11:05 | 0.99058 | 0.96494 | 0.98938
01-Dec-2017 08:30 | 0.99379 | 0.98818 | 0.95212
05-May-2017 09:21 | 0.98017 | 0.88802 | 0.99249
10-Feb-2017 12:44 | 0.99900 | 0.99820 | 0.99760
28-Oct-2017 16:14 | 0.95513 | 0.64984 | 0.96034
Table 4. Percentiles of incidents from the Phase 1-Train data when sorted by sensor, day of year, and time of day.
Incident Timestamp | Percentile by Avg | Percentile by SD | Percentile by Median
06-Feb-2017 11:42 | 0.875 | 0.89583 | 0.95833
12-Jun-2017 16:47 | 1 | 1 | 0.91667
26-Sep-2017 00:17 | 0.89831 | 0.72881 | 0.83051
05-Sep-2017 01:45 | 0.86441 | 0.86441 | 0.64407
30-May-2017 05:51 | 1 | 0.97222 | 0.94444
18-Jul-2017 12:08 | 0.86458 | 0.91667 | 0.875
05-Sep-2017 16:56 | 1 | 1 | 1
26-Jul-2017 05:23 | 0.80556 | 0.80556 | 1
22-Mar-2017 06:31 | 0.97222 | 1 | 1
05-Apr-2017 10:45 | 0.97917 | 0.72917 | 0.94792
12-Jan-2017 11:05 | 0.79167 | 0.82292 | 0.78125
01-Dec-2017 08:30 | 1 | 1 | 0.95833
05-May-2017 09:21 | 1 | 0.90625 | 1
10-Feb-2017 12:44 | 1 | 1 | 1
28-Oct-2017 16:14 | 0.5 | 0.5 | 0.83333
Table 5. Thresholds used for classifying windows by percentile score in Phase 1. * indicates the largest threshold, τ1.
Sorted by: | Min Perc by Avg | Min Perc by SD | Min Perc by Median
sensor, day of week, time of day | 0.8595176 * | 0.3808761 | 0.4663462
sensor, day of year, time of day | 0.25 | 0.1666667 | 0.08333333
Table 6. Quality of fit for the Phase 1-Train data based on the thresholds determined by the labeled incidents using Algorithm 1.
Sensor | Precision | Recall | F-Score
S 312425 | 0.00030 | 0.22857 | 0.00059
S 312520 | 0 | 0 | NaN
S 312694 | 0.00074 | 1 | 0.00148
S 312942 | 0.00094 | 1 | 0.00187
S 314147 | 0.00264 | 0.75510 | 0.00526
S 315017 | 0.00173 | 0.79688 | 0.00348
S 315938 | 0.00146 | 0.72881 | 0.00291
S 317814 | 0.00109 | 1 | 0.00219
S 318180 | 0.00156 | 0.89130 | 0.00312
S 318566 | 0.00047 | 0.5 | 0.00093
Total | 0.00110 | 0.772390 | 0.00220
Table 7. Quality of fit for the Phase 1-Train data using standard normal deviates.
Sensor | Precision | Recall | F-Score
S 312425 | 0.00022 | 0.17143 | 0.00043
S 312520 | 0.00022 | 0.23810 | 0.00043
S 312694 | 0.00076 | 0.90476 | 0.00152
S 312942 | 0.00030 | 0.32 | 0.00060
S 314147 | 0.00218 | 0.56122 | 0.00435
S 315017 | 0.00217 | 0.78125 | 0.00434
S 315938 | 0.00078 | 0.35593 | 0.00155
S 317814 | 0.00184 | 0.88889 | 0.00376
S 318180 | 0.00078 | 0.43478 | 0.00156
S 318566 | 0 | 0 | NaN
Total | 0.00088 | 0.50116 | 0.00175
Table 8. Quality of fit for the Phase 1-Test data based on the thresholds determined by the Phase 1-Train incidents using Algorithm 1.
Sensor | Precision | Recall | F-Score
S 312425 | 0.00116 | 0.62 | 0.00232
S 312527 | 0.00217 | 0.61856 | 0.00432
S 312694 | 0.00079 | 0.81481 | 0.00157
S 312771 | 0.00133 | 0.57813 | 0.00264
S 313172 | 0 | 0 | NA
S 314147 | 0.00192 | 0.76056 | 0.00383
S 314899 | 0.00130 | 0.70833 | 0.00259
S 314982 | 0.00168 | 1 | 0.00335
S 315017 | 0.00081 | 0.4 | 0.00162
S 318721 | 0.00076 | 0.95652 | 0.00153
S 3188859 | 0.00049 | 0.60870 | 0.00098
Total | 0.00112 | 0.62706 | 0.00223
Table 9. Quality of fit for the Phase 1-Test data using standard normal deviates.
Sensor | Precision | Recall | F-Score
S 312425 | 0.00100 | 0.56 | 0.00201
S 312527 | 0.00189 | 0.46392 | 0.00377
S 312694 | 0.00056 | 0.51852 | 0.00112
S 312771 | 0.00117 | 0.42188 | 0.00234
S 313172 | 0.00125 | 0.76923 | 0.00250
S 314147 | 0.00179 | 0.63380 | 0.00356
S 314899 | 0.00130 | 0.75 | 0.00259
S 314982 | 0.00175 | 0.97778 | 0.00350
S 315017 | 0.00209 | 0.8 | 0.00417
S 318721 | 0.00036 | 0.39130 | 0.00072
S 3188859 | 0.00044 | 0.60609 | 0.00245
Total | 0.00123 | 0.61609 | 0.00245
Table 10. Percentiles of Phase 2-Train incidents when sorted by sensor, day of week, and time of day.
Incident Timestamp | Percentile by Avg | Percentile by SD | Percentile by Median
06-Feb-2017 08:10 | 0.98538 | 0.96114 | 0.99159
24-Jul-2017 12:12 | 0.61599 | 0.76042 | 0.73127
29-Mar-2017 08:25 | 0.98438 | 0.98438 | 0.99319
18-Jan-2017 14:23 | 0.99359 | 0.97716 | 0.89433
15-Nov-2017 17:15 | 0.08974 | 0.86058 | 0.06971
06-Apr-2017 20:34 | 0.84856 | 0.89423 | 0.84816
16-Jun-2017 06:34 | 0.94124 | 0.97276 | 0.93884
27-Jan-2017 20:34 | 0.91594 | 0.98883 | 0.79404
07-Oct-2017 17:47 | 0.84491 | 0.84739 | 0.93145
Table 11. Percentiles of Phase 2-Train incidents when sorted by sensor, day of year, and time of day.
Incident Timestamp | Percentile by Avg | Percentile by SD | Percentile by Median
06-Feb-2017 08:10 | 1 | 0.94792 | 1
24-Jul-2017 12:12 | 0.66667 | 0.77083 | 0.875
29-Mar-2017 08:25 | 1 | 1 | 1
18-Jan-2017 14:23 | 0.95833 | 0.95833 | 0.79167
15-Nov-2017 17:15 | 0.79167 | 0.83333 | 0.91667
06-Apr-2017 20:34 | 0.875 | 1 | 0.95833
16-Jun-2017 06:34 | 1 | 1 | 1
27-Jan-2017 20:34 | 1 | 0.98387 | 0.91935
07-Oct-2017 17:47 | 0.98387 | 0.79032 | 0.98387
Table 12. Percentiles of Phase 2-Train incidents when sorted by sensor, day of week, and time of day based on their adjacency statistics.
Incident Timestamp | Percentile by Adj Avg | Percentile by Adj SD | Percentile by Adj Median
06-Feb-2017 08:10 | 0.96454 | 0.83173 | 0.98307
24-Jul-2017 12:12 | 0.96815 | 0.98518 | 0.77825
29-Mar-2017 08:25 | 0.99319 | 0.98658 | 0.99800
18-Jan-2017 14:23 | 0.83934 | 0.78846 | 0.87320
15-Nov-2017 17:15 | 0.10497 | 0.9375 | 0.07692
06-Apr-2017 20:34 | 0.86619 | 0.96955 | 0.97516
16-Jun-2017 06:34 | 0.92575 | 0.94979 | 0.76549
27-Jan-2017 20:34 | 0.90167 | 0.98263 | 0.47689
07-Oct-2017 17:47 | 0.85019 | 0.97208 | 0.92067
Table 13. Percentiles of Phase 2-Train incidents when sorted by sensor, day of year, and time of day based on their adjacency statistics.
Incident Timestamp | Percentile by Adj Avg | Percentile by Adj SD | Percentile by Adj Median
06-Feb-2017 08:10 | 0.97222 | 0.77778 | 0.97222
24-Jul-2017 12:12 | 1 | 1 | 0.92708
29-Mar-2017 08:25 | 1 | 0.98958 | 1
18-Jan-2017 14:23 | 0.85417 | 0.79167 | 0.94792
15-Nov-2017 17:15 | 0.875 | 0.95833 | 0.95833
06-Apr-2017 20:34 | 0.95833 | 0.95833 | 1
16-Jun-2017 06:34 | 1 | 1 | 0.86111
27-Jan-2017 20:34 | 0.96774 | 0.98387 | 0.67742
07-Oct-2017 17:47 | 1 | 0.93548 | 0.98387
Table 14. Thresholds used for classifying windows by percentile scores in Phase 2. * indicates the largest threshold, τ1.
Sorted by: | Min Perc by Avg | Min Perc by SD | Min Perc by Median
sensor, day of week, time of day | 0.08974359 | 0.04026442 | 0.06971154
sensor, day of year, time of day | 0.6129032 | 0.01041667 | 0.875 *
Adjacency Thresholds
sensor, day of week, time of day | 0.02483974 | 0.007612179 | 0.01682692
sensor, day of year, time of day | 0.07291667 | 0.01041667 | 0.00677419
Table 15. The quality of fit for the Phase 2-Train data based on the thresholds determined by the labeled incidents.
Sensor | Precision | Recall | F-Score
S 314402 | 0.00185 | 0.67442 | 0.00369
S 318593 | 0.00153 | 0.49462 | 0.00305
Total | 0.00169 | 0.58101 | 0.00338
Table 16. The quality of fit for the Phase 2-Test data based on the thresholds determined by the Phase 2-Train incidents.
Sensor | Precision | Recall | F-Score
S 312564 | 0.01740 | 0.41598 | 0.03340
S 312566 | 0.02432 | 0.36706 | 0.04562
S 313386 | 0 | NA | NA
S 313393 | 0.04704 | 0.44157 | 0.08502
S 313405 | 0.03479 | 0.42265 | 0.06429
S 313406 | 0.00024 | 0.46667 | 0.00048
S 318566 | 0.02870 | 0.45504 | 0.05400
S 318575 | 0.00748 | 0.33098 | 0.01462
Total | 0.01985 | 0.41685 | 0.03789

Share and Cite

MDPI and ACS Style

Weber, E.S.; Harding, S.N.; Przybylski, L. Detecting Traffic Incidents Using Persistence Diagrams. Algorithms 2020, 13, 222. https://doi.org/10.3390/a13090222

