Article

SORT and Deep-SORT Based Multi-Object Tracking for Mobile Robotics: Evaluation with New Data Association Metrics

Department of Electrical and Computer Engineering, Institute of Systems and Robotics, University of Coimbra, 3030-290 Coimbra, Portugal
* Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Appl. Sci. 2022, 12(3), 1319; https://doi.org/10.3390/app12031319
Submission received: 22 December 2021 / Revised: 18 January 2022 / Accepted: 22 January 2022 / Published: 26 January 2022
(This article belongs to the Special Issue Robotic Wheelchairs)

Abstract
Multi-Object Tracking (MOT) techniques have been under continuous research and are increasingly applied in a diverse range of tasks. One area in particular concerns their application in navigation tasks of assistive mobile robots, with the aim of increasing the mobility and autonomy of people suffering from reduced mobility or severe motor impairments due to muscular, neurological, or osteoarticular conditions. Therefore, in this work, having in view navigation tasks for assistive mobile robots, an evaluation study of two tracking-by-detection MOT algorithms, SORT and Deep-SORT, is presented. To improve the data association of both methods, which is solved as a linear assignment problem over a generated cost matrix, a set of new object tracking data association cost matrices based on intersection over union, Euclidean distances, and bounding box metrics is proposed. To evaluate the tracking-by-detection methods in a real-time pipeline, YOLOv3 is used to detect and classify the objects in the images. In addition, to perform the proposed evaluation with assistive platforms in mind, the ISR Tracking dataset, which represents the object conditions under which real robotic platforms may navigate, is presented. Experimental evaluations were also carried out on the MOT17 dataset. Promising results were achieved by the proposed object tracking data association cost matrices, showing an improvement in the majority of the MOT evaluation metrics compared to the default data association cost matrix. Promising frame rates were also attained by the pipeline composed of the detector and the tracking module.

1. Introduction

Vision-based Multi-Object Tracking (MOT) methods analyze image sequences to establish object correspondences across frames [1,2]. Multiple MOT methods have been proposed over the years and have been widely used in applications such as surveillance [3], traffic monitoring [4], autonomous driving [5], and mobile robot navigation, including object collision avoidance [6] and target following [7]. However, MOT results may be affected by difficult problem configurations, such as crowded environments or occluded objects, which limit performance in such scenarios. Moreover, given the large number of applications where MOT methods can be applied, MOT remains an important and challenging topic in the research community [1,2,8].
Throughout the years, MOT tasks have mainly been performed under the tracking-by-detection paradigm [9], where objects are detected by an object detector and fed to the object tracking method, which then handles the object association between previous frames and the present one. Most proposed methods [10,11,12] use a Kalman Filter (KF) as a motion module to predict the position of objects of interest in the current frame. On the other hand, with the emergence of Deep Neural Networks (DNNs) [13,14], new state-of-the-art methods have been proposed for vision-based object tasks such as classification [15], recognition [16], and tracking [11,17,18]. Therefore, to improve the object association step of tracking algorithms, Convolutional Neural Networks (CNNs) have been applied to extract object appearance features, which are used to compute similarity values between two objects’ feature maps extracted over two consecutive images. CNNs have also been used to locate and track objects across consecutive images [19,20].
MOT techniques can be employed to improve motion planning behavior and safety in the navigation tasks of mobile robot platforms [6]. They can also be an asset on assistive platforms for target-following tasks, where the platform follows a specific target (e.g., following a caregiver, reaching an object).
Due to several types of impairments, a significant number of people are unable to perform daily tasks. Hence, a particular type of assistive mobile robot, the robotic wheelchair platform, has been researched with the aim of increasing the autonomy and mobility of such users [21,22]. Brain-actuated wheelchairs [21,23,24] have received particular focus in research, with several promising techniques for severely motor-disabled people who are unable to control a robotic platform through conventional interfaces such as a joystick [21,25]. With the advances in Brain–Computer Interfaces (BCI) and shared control methods, new brain–computer interaction paradigms that allow the user to choose their navigation target have been proposed. These paradigms can represent potential goals of interest for the user’s navigation (e.g., objects) and can be empowered by considering the tracked objects from MOT methods. Once the user selects a navigation target, a MOT method is required to ensure that the robotic wheelchair navigates towards that specific target. However, to enable a mobile robot to pursue an object as its navigation target, a robust visual perception module, including an object tracking method, is required. Moreover, to ensure robust object tracking performance, detection and tracking should be performed frame by frame, which is time-consuming and can prevent MOT from running in real time [9].
In this work, considering navigation tasks in assistive platforms, an evaluation study of two multi-object tracking-by-detection algorithms, SORT [10] and Deep-SORT [11], using new data association metrics [26], is proposed. SORT and Deep-SORT were proposed with a focus on real-time object tracking, both achieving state-of-the-art results at a high frame rate. The two methods share the same overall architecture, divided into three main modules, as shown in Figure 1: KF-based estimation, data association, and track management. To detect objects in the images, the YOLOv3 [16] network is used. Both methods use the KF algorithm to predict the position of the objects in the current frame; these predictions, together with the object detections provided by YOLOv3, are the inputs of the data association module, which solves a linear assignment problem over a cost matrix. The SORT method matches detected bounding boxes with predicted tracks using the overlap between bounding boxes. To improve this association step, Deep-SORT additionally uses a CNN to extract appearance features from the object bounding box images. For a detailed evaluation of the object tracking methods, a set of different data association cost matrices based on bounding box intersection over union, Euclidean distances, and bounding box ratios is proposed. To evaluate both tracking methods with the proposed cost matrices in an assistive robotics context, the ISR Tracking dataset is proposed. The dataset represents the object conditions from an assistive mobile robot’s point of view and contains 329 object sequences of 9 different object classes. To complement the validation of SORT and Deep-SORT with the proposed cost matrices, evaluation was also performed on the MOT17 [27] dataset.
The main contributions of this work can be summarized as follows:
  • Eight new object tracking data association cost matrix formulations based on intersection over union, Euclidean distances, and bounding box ratios are proposed.
  • The ISR Tracking dataset, capturing a mission performed by a mobile robot in a lab setting, represents the object conditions under which robotic platforms may navigate. It is a rearrangement of the ISR RGB-D dataset [28] with object tracking labels for multi-object tracking tasks.
  • An evaluation, having in view navigation tasks for assistive mobile robot platforms, of the two tracking-by-detection algorithms, SORT and Deep-SORT, is presented. The proposed data association cost matrices were integrated and evaluated in both tracking methods.

2. Related Work

2.1. Object Tracking

Object tracking has become a fundamental task in real-time video-based applications that require establishing object correspondences between frames [8]. In the literature, tracking techniques fall into two main categories [29]: Single-Object Tracking (SOT) and MOT. In SOT approaches, the appearance of the single target is known a priori, while MOT techniques aim to estimate the trajectories of multiple objects of one or more categories without any prior knowledge about their appearance or location. For MOT, an object detection step is required across frames [1]. According to [1], applying multiple SOT models to perform MOT tasks generally leads to poor performance, often caused by similar-looking intra-class objects.
Recent advances in the MOT literature have focused on two different approaches: tracking by detection and joint tracking and detection. Tracking by detection [10,11,12,30], as presented in Figure 1, uses object detection algorithms to detect and classify objects before performing the object association. This approach reduces the tracking task to an object association task over consecutive frames: methods receive an array of measurements and output bounding boxes with their respective tracking IDs. On the other hand, joint tracking and detection methods [9,17,19,20] detect and track objects in a single model. Generally, this approach uses the visual appearance features of an object to track and locate it in the frames of interest. Joint tracking and detection techniques have become widely popular due to the emergence of deep learning-based Siamese Networks [18,31].
Despite the promising results achieved by joint tracking and detection approaches, in navigation tasks for assistive mobile robot platforms an object detector may already be available to provide knowledge of the surrounding environment to motion planning or localization methods. Hence, for the purpose of this work, tracking-by-detection methods are more suitable.

Tracking by Detection

With the emergence of deep learning-based object detectors, tracking by detection has become the most popular approach in the MOT research community [2]. This approach benefits from object location knowledge to generate an association model able to associate objects over time. One of the first MOT methods found in the literature is Multiple Hypothesis Tracking [32], which evaluates hypotheses over measurements to decide whether a measurement should be associated with an existing track, start a new track, or be discarded as a spurious measurement. It uses the KF algorithm to estimate the objects’ states and a probability distribution over hypotheses to associate measurements with tracks.
Recent works also employ the KF algorithm as a motion model to improve the association of objects over time [10,11,12,33]. Bewley et al. [10] proposed SORT, which combines a KF to estimate object states with the Hungarian algorithm [34] to associate the KF predictions with new object detections. A year later, Wojke et al. proposed an improvement of SORT, Deep-SORT [11], by including a novel cascading association step that uses CNN-based object appearance features. Its data association algorithm combines the similarity of object appearance features with the Mahalanobis distance between object states and, at a later stage for unmatched states, falls back on SORT’s data association. Despite the use of a CNN, Deep-SORT achieved a promising frame rate on the object tracking benchmarks. A method similar to Deep-SORT, MOTDT [12], was proposed by Chen et al. MOTDT uses a fully CNN-based scoring function for an optimal selection of candidates; Euclidean distances between extracted object appearance features are also used to improve the association step. Recently, He et al. [33] proposed the GMT-CT algorithm, which incorporates graph partitioning with deep feature learning. The graph is constructed from the extracted object appearance features and is used in the association step to model the relationship between measurements and tracks with higher accuracy.
With the growth of deep learning-based Siamese networks in the object tracking community, a new paradigm has been proposed [1]. Lee et al. [35] introduced FPSN-MOT, which integrates a Siamese architecture with a feature pyramid network [36]. It computes a similarity vector between features from two different inputs and then updates tracks using an iterative selection of the maximum-scored pairs of tracks and measurements. FPSN-MOT outperformed the aforementioned methods on the MOT challenge benchmarks [27] with an inference time of 10 Hz. Jin et al. [37] enhanced the performance of the Deep-SORT [11] object feature extractor with a Siamese architecture; they also introduced optical flow [38] in the motion module, improving the object association accuracy.
In summary, Table 1 presents the main characteristics of the aforementioned tracking by detection MOT methods.

2.2. Tracking Applied in Mobile Robots

Object tracking techniques have been widely applied in navigation tasks of indoor mobile robot platforms, such as object collision avoidance [6], target following [7], and autonomous navigation [5]. Target detection and tracking have also been applied in robotic wheelchair platforms [39,40], which have been proposed to increase the mobility of people with motor impairments. Xiao et al. [39] proposed a visual target detection and tracking method to detect and track people in the surroundings of an intelligent wheelchair. The visual tracking was implemented as a binary classification between the object and the background, and a semi-supervised online boosting approach was applied to solve the object drift problem. On the other hand, Lecrosnier et al. [40] proposed an advanced driver assistance system for a robotic wheelchair, composed of the YOLOv3 [16] object detection algorithm and a 3D object tracking approach based on SORT [10], to detect and track doors and door handles.

3. Methodology

In this section, a brief review of the SORT and Deep-SORT methods is presented. The proposed cost matrix formulations, which are part of the data association’s linear assignment problem, inside the Cost Matrix Matching module (see Data Association—Cost Matrix Matching in Figure 2 and Figure 3), are also presented.

3.1. SORT

SORT [10] iteratively computes the state of the objects being tracked through a KF. The method uses the Hungarian algorithm [34] to accurately associate detected objects (by an object detector) with objects that are being tracked. A detailed overview of the SORT algorithm is represented in Figure 2.
The SORT Data Association module, which is of particular interest in this work, is responsible for matching the KF’s predicted bounding boxes with the measured bounding boxes on the image given by the object detector. This module receives, as input, $N$ detected bounding boxes and $M$ predicted bounding boxes (acquired from their respective KFs). The module formulates a linear assignment problem by computing a cost matrix between each detected bounding box $D_i$, $i \in \{1, \dots, N\}$, and each predicted bounding box $P_j$, $j \in \{1, \dots, M\}$, with the Intersection over Union (IoU) as metric:
$$IoU(D,P) = \begin{bmatrix} iou(D_1, P_1) & \cdots & iou(D_1, P_M) \\ iou(D_2, P_1) & \cdots & iou(D_2, P_M) \\ \vdots & \ddots & \vdots \\ iou(D_N, P_1) & \cdots & iou(D_N, P_M) \end{bmatrix}$$
where the IoU between a detected bounding box and a predicted bounding box is given by
$$iou(D_i, P_j) = \frac{|D_i \cap P_j|}{|D_i \cup P_j|}.$$
After computing the cost matrix, the Hungarian algorithm [34] is used to associate the bounding boxes. The obtained associations are represented in an $N \times 2$ array, representing $N$ measurements associated with $N$ tracks. Associations are also filtered by a minimum IoU threshold, discarding associations whose IoU is lower than the threshold.
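As an illustration, a minimal sketch of this association step is shown below, assuming corner-format boxes ([x1, y1, x2, y2]) and SciPy's implementation of the Hungarian algorithm; it is not the authors' original code.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def iou(box_a, box_b):
    # Intersection over union of two axis-aligned boxes [x1, y1, x2, y2].
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def associate(detections, predictions, iou_min=0.3):
    # Cost matrix IoU(D, P): one row per detection, one column per prediction.
    cost = np.array([[iou(d, p) for p in predictions] for d in detections])
    # linear_sum_assignment minimizes, so negate the matrix to maximize IoU.
    rows, cols = linear_sum_assignment(-cost)
    # Keep only matches at or above the minimum IoU threshold.
    return [(r, c) for r, c in zip(rows, cols) if cost[r, c] >= iou_min]
```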
The KF Estimation module uses a linear constant-velocity model to represent each object’s motion. When a detection is associated with a tracked object (track), its bounding box is used to update the track state; if no detection is associated with the track, the track’s state is only predicted. The Track Management module is responsible for the creation and deletion of tracks. New tracks are created when detections do not overlap, or overlap with existing tracks below the minimum IoU threshold; the bounding box of the detection is used to initialize the KF state. Since the only data available are the object’s bounding box, the object’s velocity in the KF is set to zero and its covariance is set high to signal the uncertainty of the state. If a new track never receives associations, or if an existing track stops receiving them, the track is deleted, avoiding the accumulation of tracks corresponding to false positives or to objects that have left the scene, respectively.
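A sketch of such a constant-velocity KF is given below, using the filterpy library and the seven-dimensional state $[u, v, s, r, \dot{u}, \dot{v}, \dot{s}]$ (box center, area, aspect ratio, and their velocities) used in the original SORT; the numeric values are illustrative assumptions.

```python
import numpy as np
from filterpy.kalman import KalmanFilter

kf = KalmanFilter(dim_x=7, dim_z=4)
kf.F = np.eye(7)                             # constant-velocity transition:
kf.F[0, 4] = kf.F[1, 5] = kf.F[2, 6] = 1.0   # u += du, v += dv, s += ds
kf.H = np.eye(4, 7)                          # only [u, v, s, r] is measured
kf.P[4:, 4:] *= 1000.0                       # high uncertainty on velocities
kf.x[:4] = np.array([[320.0], [240.0], [9000.0], [0.75]])  # initial detection

kf.predict()                                       # track with no association
kf.update(np.array([322.0, 243.0, 9100.0, 0.74]))  # correct with a matched box
```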

3.2. Deep-SORT

Deep-SORT [11] is an improvement of the SORT algorithm that integrates appearance information of objects to enhance associations. Its data association adds an appearance metric based on a pre-trained CNN, allowing the re-identification of tracks after long periods of occlusion. The KF Estimation and Track Management modules are similar to the corresponding SORT modules. An overview of the method is presented in Figure 3.
As in SORT, the association of detected bounding boxes to tracks is solved by the Hungarian algorithm, here using a two-part matching cascade. In the first part, the Deep-SORT method uses motion and appearance metrics to associate valid tracks. The second part uses the same data association strategy as SORT to associate unmatched and tentative (recently created) tracks with unmatched detections. Motion information is incorporated through the (squared) Mahalanobis distance between predicted states and detections. In addition, a second metric, based on the smallest cosine distance, measures the distance between the appearance features of each track and each measurement. The appearance features are computed by a pre-trained CNN model; the CNN in the Deep-SORT method was trained on a large-scale person re-identification dataset [41] using deep cosine metric learning [42]. A pre-trained model is provided by the authors in their repository (https://github.com/nwojke/deep_SORT (accessed on 15 October 2021)).
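The two cues can be sketched as follows, assuming L2-normalized appearance features; this is an illustrative formulation rather than the original implementation.

```python
import numpy as np

def smallest_cosine_distance(track_feats, det_feat):
    # Smallest cosine distance between a detection's feature vector and a
    # track's gallery of past appearance features (rows of track_feats);
    # all vectors are assumed L2-normalized.
    return float(np.min(1.0 - track_feats @ det_feat))

def squared_mahalanobis(mean, cov, det):
    # Squared Mahalanobis distance between a KF-predicted state (mean, cov)
    # and a detection, used to gate away implausible motion.
    diff = det - mean
    return float(diff @ np.linalg.solve(cov, diff))
```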

3.3. Data Association—Cost Matrix Matching

In this work, eight cost matrix formulations (see Cost Matrix Matching in Figure 2 and Figure 3) are proposed. As mentioned above, data association in SORT, and in the second stage of Deep-SORT, is formulated as a linear assignment problem represented by a cost matrix. Hence, the different approaches to building the cost matrices of the linear assignment problem, using different bounding box metrics, are presented below.
Intersection over union quantitatively represents the overlap between objects’ bounding boxes and thus indirectly encodes other types of information, such as the Euclidean distance between two bounding boxes and their size ratio. In MOT problems, making this information explicit can improve data association, since between two consecutive frames an object is expected to have similar bounding box dimensions and a small displacement. Therefore, object tracking data association cost matrix formulations based on intersection over union, Euclidean distances, and bounding box ratios are proposed. Let us consider a bounding box represented by the image coordinates of its center $(u_{BB}, v_{BB})$ and its width and height $(w_{BB}, h_{BB})$, the detection set $D$ (with $N$ bounding boxes), and the prediction set $P$ (with $M$ bounding boxes). The following cost matrix formulations are proposed:
  • Euclidean distance based cost matrix ($D_E(D,P)$):
    $$D_E(D,P) = \begin{bmatrix} d(D_1, P_1) & \cdots & d(D_1, P_M) \\ d(D_2, P_1) & \cdots & d(D_2, P_M) \\ \vdots & \ddots & \vdots \\ d(D_N, P_1) & \cdots & d(D_N, P_M) \end{bmatrix}$$
    which encodes the distance between bounding box centers, normalized by half of the image diagonal. To turn the assignment into a maximization problem, to be solved with the Hungarian algorithm, each entry is obtained as the difference between 1 and the normalized Euclidean distance:
    $$d(D_i, P_j) = 1 - \frac{\sqrt{(u_{D_i} - u_{P_j})^2 + (v_{D_i} - v_{P_j})^2}}{\frac{1}{2}\sqrt{h^2 + w^2}}$$
    where $(h, w)$ are the height and width of the input image, $D_i$ is a bounding box from the detection set, and $P_j$ is a bounding box from the prediction set.
  • Bounding box ratio based cost matrix ($R(D,P)$), implemented as a ratio between the products of each box’s width and height:
    $$R(D,P) = \begin{bmatrix} r(D_1, P_1) & \cdots & r(D_1, P_M) \\ r(D_2, P_1) & \cdots & r(D_2, P_M) \\ \vdots & \ddots & \vdots \\ r(D_N, P_1) & \cdots & r(D_N, P_M) \end{bmatrix}$$
    $$r(D_i, P_j) = \min\!\left(\frac{w_{D_i} h_{D_i}}{w_{P_j} h_{P_j}}, \frac{w_{P_j} h_{P_j}}{w_{D_i} h_{D_i}}\right)$$
    For boxes with similar dimensions the raw ratio is close to 1, whereas dissimilar boxes yield values close to 0 or much greater than 1. Taking the minimum between the ratio and its inverse therefore keeps the metric within the $[0, 1]$ range.
  • SORT’s IoU cost matrix combined with the Euclidean distance cost matrix:
    $$ED_{IoU}(D,P) = IoU(D,P) \circ D_E(D,P)$$
    where ∘ represents the Hadamard product (element-wise product) between two matrices.
  • SORT’s IoU cost matrix combined with the box ratio based cost matrix:
    $$R_{IoU}(D,P) = IoU(D,P) \circ R(D,P)$$
  • Euclidean distance cost matrix combined with the box ratio based cost matrix:
    $$R_{D_E}(D,P) = D_E(D,P) \circ R(D,P)$$
  • SORT’s IoU cost matrix combined with the Euclidean distance cost matrix and the box ratio based cost matrix:
    $$M(D,P) = IoU(D,P) \circ D_E(D,P) \circ R(D,P)$$
  • Element-wise average of every cost matrix ($A(D,P)$):
    $$A(D_i, P_j) = \frac{IoU(D_i, P_j) + D_E(D_i, P_j) + R(D_i, P_j)}{3}, \quad i \in D,\ j \in P$$
  • Element-wise weighted mean of every cost matrix ($WM(D,P)$):
    $$WM(D_i, P_j) = \lambda_{IoU} \cdot IoU(D_i, P_j) + \lambda_{D_E} \cdot D_E(D_i, P_j) + \lambda_R \cdot R(D_i, P_j), \quad i \in D,\ j \in P,$$
    $$\lambda_{IoU} + \lambda_{D_E} + \lambda_R = 1$$
To improve tracking performance in multi-class environments, any cost matrix $C$ can be updated based on the match between the predicted and detected object classes (class gate):
$$C^*(C_{i,j}, D_i, P_j) = \begin{cases} C_{i,j} & \text{if } \mathrm{class}(D_i) = \mathrm{class}(P_j) \\ 0 & \text{otherwise} \end{cases}, \quad i \in D,\ j \in P$$
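A compact sketch of these formulations is given below, assuming boxes represented as (u, v, w, h) center/size tuples on an image of size (img_w, img_h); the IoU matrix is computed as in SORT and passed in, and the default weights are the best-performing combination reported in Section 4.

```python
import numpy as np

def d_euclidean(det, pred, img_w, img_h):
    # 1 minus the center distance normalized by half of the image diagonal.
    dist = np.hypot(det[0] - pred[0], det[1] - pred[1])
    return 1.0 - dist / (0.5 * np.hypot(img_h, img_w))

def r_ratio(det, pred):
    # Minimum of the box-area ratio and its inverse, always within [0, 1].
    area_d, area_p = det[2] * det[3], pred[2] * pred[3]
    return min(area_d / area_p, area_p / area_d)

def proposed_cost_matrices(dets, preds, img_w, img_h, iou_mat,
                           weights=(0.7, 0.2, 0.1)):
    # Build D_E and R element-wise, then the Hadamard combinations and means.
    DE = np.array([[d_euclidean(d, p, img_w, img_h) for p in preds] for d in dets])
    R = np.array([[r_ratio(d, p) for p in preds] for d in dets])
    lam_iou, lam_de, lam_r = weights
    return {
        "ED_IoU": iou_mat * DE,
        "R_IoU": iou_mat * R,
        "R_DE": DE * R,
        "M": iou_mat * DE * R,
        "A": (iou_mat + DE + R) / 3.0,
        "WM": lam_iou * iou_mat + lam_de * DE + lam_r * R,
    }

def class_gate(cost, det_classes, pred_classes):
    # Zero out entries whose detected and predicted classes differ
    # (class labels assumed to be integer IDs).
    same = np.equal.outer(np.asarray(det_classes), np.asarray(pred_classes))
    return np.where(same, cost, 0.0)
```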

3.4. ISR Tracking Dataset

The ISR RGB-D Dataset [28] is a non-object centric RGB-D dataset, recorded at the Institute of Systems and Robotics (ISR-UC) facilities using a camera sensor onboard the ISR-InterBot [43] mobile platform. The dataset presents a mission performed by the platform in a real scenario setting, representing object conditions under which mobile robot platforms may navigate. The ISR RGB-D dataset contains a total of 10,000 RGB-D raw images captured at 30 FPS with a resolution of 640 × 480. Moreover, ten object classes (unknown, person, laptop, tvmonitor, chair, toilet, sink, desk, door-open, and door-closed) were annotated at every fourth frame, reaching a total of 7832 object-centric images.
As mentioned above, the main goal of this work is to study and compare the KF-based SORT and Deep-SORT object tracking methods for real-time mobile robot applications. To pursue that goal, a dataset representing the object conditions from a mobile robot platform’s point of view during navigation tasks is required. Due to the lack of publicly available datasets meeting these requirements, the labels of the ISR RGB-D Dataset (https://github.com/rmca16/ISR_RGB-D_Dataset (accessed on 15 October 2021)) were rearranged into a multi-object tracking dataset, the ISR Tracking Dataset. First, the previously unannotated images were labeled for the ten object classes described above. Then, a unique tracking ID was associated with the same object throughout the images, except for the “unknown” object class, which was not considered for tracking tasks. If an object disappeared or was occluded for more than 15 frames, it was considered a new object and received a new tracking ID. Each image has an associated “.txt” file containing all object labels for that image, and each object label is organized as follows: <object class>, <tracking ID>, <bounding box center x>, <bounding box center y>, <bounding box width>, and <bounding box height>. In total, the ISR Tracking dataset has 32,635 object bounding boxes and 329 object sequences.
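A hypothetical loader for one such label file is sketched below; the six-field layout follows the description above, while the exact delimiter is an assumption.

```python
def load_labels(path):
    # Parse one per-image ".txt" label file into a list of object dicts.
    objects = []
    with open(path) as f:
        for line in f:
            fields = line.replace(",", " ").split()
            if len(fields) != 6:
                continue  # skip blank or malformed lines
            cls, track_id = fields[0], int(fields[1])
            u, v, w, h = map(float, fields[2:])  # center x/y, width, height
            objects.append({"class": cls, "id": track_id, "box": (u, v, w, h)})
    return objects
```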

4. Experiments

The proposed study was evaluated on the MOT17 [27] dataset and on the proposed ISR Tracking dataset. Moreover, to evaluate the proposed approaches on the KF-based algorithms, the following standard evaluation metrics [1] were used: Multi-Object Tracking Accuracy (MOTA), Multi-Object Tracking Precision (MOTP), True Positives (TP), False Positives (FP), False Negatives (FN), Identity Switches (IDs), Mostly Tracked (MT), Mostly Lost (ML), Fragmentation (FM), and Frames Per Second (FPS).

4.1. Datasets

(1) MOT17 Dataset: A multi-person tracking benchmark divided into 14 sequences with highly crowded scenarios, different viewpoints, weather conditions, camera motions, and indoor/outdoor environments. The dataset has a public training/test split, where the training sequences come with ground-truth files and detection files produced by three state-of-the-art object detectors, while the test sequences only include the detection files. Hence, due to the scope of the performed experiments, and also due to the submission constraints for obtaining results on the test sequences, only the training sequences were used in this study. Since the multi-object tracking methods evaluated in this work do not require a training process, the training sequences were used for evaluation.
(2) ISR Tracking Dataset: Composed of 10,000 RGB-D raw images acquired by an Intel RealSense D435 sensor onboard a mobile robot platform [43], representing the object conditions under which robotic platforms may navigate. Nine object classes were annotated for multi-object tracking tasks, totaling 32,635 object bounding boxes and 329 object sequences. For evaluation, the ISR Tracking dataset was reorganized into two sub-datasets: ISR500 and ISR200. In ISR500, the dataset was divided into sequences of 500 frames, giving a total of 20 image sequences; ISR200 contains 50 image sequences, the result of partitioning the dataset into sequences of 200 images. In both sub-datasets, the train/test split was performed by interleaving the sequences, i.e., the first sequence was used for training, the second for testing, the third for training, and so on.
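The interleaved split can be written in one line; this is a trivial sketch of the scheme just described.

```python
def interleave_split(sequences):
    # Odd-positioned sequences (1st, 3rd, ...) train; even-positioned test.
    return sequences[0::2], sequences[1::2]
```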

4.2. Implementation Details

All modules were implemented in Python 3.8.5, and the deep learning networks were implemented with the PyTorch framework (version 1.8.0). The YOLOv3 network was trained using an image size of $416 \times 416$, a fixed learning rate of $10^{-4}$ over 50 iterations, a mini-batch of 6 images, and the ADAM optimizer; the YOLOv3 weights were initialized from the COCO pre-trained model. For the SORT method [10], the number of frames a track is held without associations before being deleted was set to $T_{Lost} = 1$, the minimum number of object detections to start a new track was set to $hit_{min} = 3$, and the minimum threshold value for bounding box association was set to $th_{cost} = 0.3$. For Deep-SORT, the following constant values were used: $\lambda = 0$ (hyperparameter controlling the influence of each metric on the association cost), $T_{Lost} = 30$, and an association gating threshold $dist_{max1} = 0.2$. All experiments were performed on an Nvidia RTX 2060 Super GPU, 32 GB of RAM, and an AMD Ryzen 5 3600 CPU.
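For reference, the tracker settings above can be gathered into plain dicts, as in the sketch below; the key names are illustrative, not an actual configuration API.

```python
SORT_PARAMS = {
    "t_lost": 1,       # frames a track is kept alive without associations
    "hit_min": 3,      # detections required before a new track is confirmed
    "th_cost": 0.3,    # minimum cost-matrix value to accept an association
}
DEEP_SORT_PARAMS = {
    "lambda": 0.0,     # balance between Mahalanobis and appearance metrics
    "t_lost": 30,
    "dist_max1": 0.2,  # association gating threshold
}
```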

4.3. Results

The evaluation of the proposed work was divided as follows: evaluation of SORT and Deep-SORT on both the MOT17 and ISR Tracking datasets using all the available frames (ideal conditions); evaluation of SORT and Deep-SORT on the ISR Tracking dataset while skipping frames, representing real conditions in which the default 30 FPS cannot be reached; and evaluation of the whole pipeline, YOLOv3 + object tracking method, which also assesses the influence of YOLOv3's detection performance on the tracking methods.

4.3.1. SORT and Deep-SORT Evaluation

The proposed $WM$ data association cost matrix formulation requires the selection of three constant values (weights) to control the influence of each data association cost matrix. Table 2 shows the evaluation performed on the MOT17 dataset with different weight combinations of the $WM$ cost matrix. Based on the achieved results, the highest MOTA value was attained using $\lambda_{IoU} = 7/10$, $\lambda_{D_E} = 2/10$, and $\lambda_R = 1/10$. This weight configuration yields the minimum number of FPs and IDs, despite a higher number of FNs, and has the minimum number of FMs by a large margin. Therefore, these values were used for the $WM$ cost matrix throughout the following evaluations.
Table 3 shows the results achieved on the MOT17 dataset. Regarding SORT's results, the highest MOTA was obtained using the default $IoU$ cost matrix, with similar results achieved by the $ED_{IoU}$, $R_{IoU}$, $M$, and $WM$ cost matrices. However, on the remaining evaluation metrics, the default $IoU$ cost matrix was outperformed by the proposed cost matrices. The $M$ cost matrix had the lowest numbers of FPs, IDs, and FMs, representing the most accurate tracking sequences generated by SORT. The $A$ cost matrix had the highest number of TPs and the lowest number of FNs, which is reflected in the percentage of MT sequences. For this work, which has mobile robot navigation tasks in view, those metrics can impact performance, as they help ensure that an object is successfully tracked until it leaves the scene. Regarding the Deep-SORT results, the best MOTA was achieved by the proposed $WM$ cost matrix with 45.67%, which also reached the best TP and MT results. The default $IoU$ cost matrix achieved the best MOTP, FP, IDs, and FM results, which are very similar to those attained by the proposed $WM$ cost matrix. Overall, the proposed cost matrices achieved promising results, outperforming the default $IoU$ cost matrix, and Deep-SORT with the $WM$ cost matrix obtained the highest MOTA and MT. The attained results show similar overall performance between SORT and Deep-SORT; however, as expected, SORT is much faster than Deep-SORT.
An evaluation of SORT and Deep-SORT with varying data association thresholds was also performed on the MOT17 dataset, with results presented in Figure 4. As expected, as the threshold value increased, the MOTA score decreased for the majority of the cost matrices, and no single threshold value was found to be suitable for all evaluated cost matrices. Overall, the best results were obtained using a threshold value of 0.3, which was thereafter used for all evaluations.
Table 4 presents the results attained on the ISR Tracking dataset. Since the ISR Tracking dataset is multi-class, SORT was evaluated both with and without the class gate metric, which discards associations between objects of different classes. Regarding SORT's results, similar to those reported on the MOT17 dataset, the proposed data association cost matrices outperformed the default $IoU$ cost matrix. Moreover, the results of all evaluated cost matrices were slightly improved by using the class gate formulation, reaching the highest MOTA of 91.02%. The $A$ cost matrix with the class gate formulation achieved the best TP, FN, IDs, and MT results with 29,785, 2799, 51, and 69.3%, respectively. The $A$ cost matrix presents a significant improvement in the MT metric compared with the $IoU$ cost matrix (61.7% to 69.3%), which can impact the performance of a mobile robot platform during navigation tasks. Regarding Deep-SORT's results, once again the proposed cost matrices outperformed the default $IoU$ cost matrix: the $A$ cost matrix achieved the highest MOTA and MT values, while $ED_{IoU}$ achieved the best TP, FN, IDs, and ML results, and a significant improvement in MT was attained with the $A$ cost matrix. Overall, in both SORT and Deep-SORT, the proposed data association cost matrices outperformed the default $IoU$ cost matrix, and the $A$ cost matrix achieved the highest MOTA and MT values, suggesting that it may be the most suitable data association cost matrix. On those metrics, Deep-SORT outperformed SORT, most notably on MT (69.3% to 78.7%). Promising results were reached by both methods on the ISR Tracking dataset.
Based on the reported results, only the following data association cost matrices, all using the class gate metric, were used in the subsequent evaluations: $IoU$, $A$, and $WM$ for the SORT algorithm, and $IoU$, $ED_{IoU}$, and $A$ for the Deep-SORT algorithm.

4.3.2. SORT and Deep-SORT on Skipped Frames

In real scenarios, due to hardware constraints, it is not always possible (or needed) to run the algorithms at 30 FPS, a standard value for image acquisition from cameras. Hence, to evaluate the tracking performance under such conditions, experiments skipping 1, 2, and 3 images, representing image acquisition at 15, 10, and 7.5 FPS, respectively, were performed.
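The frame-skipping scheme is simple to express in code; the sketch below keeps every (gap + 1)-th frame, so a gap of 1 turns a 30 FPS stream into 15 FPS.

```python
def skip_frames(frames, gap):
    # Keep every (gap + 1)-th frame: gap = 1, 2, 3 simulates 15, 10, 7.5 FPS
    # acquisition from an original 30 FPS stream.
    return frames[::gap + 1]
```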
Table 5 shows the SORT and Deep-SORT results attained on the ISR Tracking dataset using non-consecutive frames. As expected, as the image gap increased, the object tracking performance decreased. This happens due to the greater displacement of the objects, which makes predicting and associating objects more difficult. Nevertheless, promising results were achieved by the proposed $A$ data association metric on both tracking methods, outperforming the default $IoU$ metric, especially on the MOTA, IDs, and MT evaluation metrics. The best overall performance was reached by the SORT method with the proposed $A$ cost matrix, with an accuracy of 86.43% and 58.2% of mostly tracked object sequences. SORT with the $IoU$ metric attained the best MOTP and FP results, while Deep-SORT with the proposed $A$ metric achieved the best TP and FN results. Note that, under these conditions, a significant improvement was achieved by the $A$ cost matrix compared to the $IoU$ metric, showing its capacity to maintain object tracks.

4.3.3. Detection-Based MOT Pipeline

To evaluate the performance of the SORT and Deep-SORT object tracking methods in real scenarios, an evaluation using the YOLOv3 object detector feeding the tracking methods was performed. Moreover, to also evaluate the influence that the object detector's performance may have on the object tracking performance, four YOLOv3 models with different performance levels were used. The four models were trained on the same data (ISR RGB-D Dataset) and under the same conditions, varying only the number of training epochs, yielding the following mean average precisions: $Y_{M_1,\dots,M_4} = \{38\%, 60\%, 80\%, 90\%\}$.
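The resulting detection-based pipeline is a simple per-frame loop, sketched below; `detector` and `tracker` are hypothetical objects standing in for YOLOv3 and SORT/Deep-SORT, and their interfaces are assumptions made for illustration.

```python
def run_pipeline(frames, detector, tracker):
    # Frame-by-frame tracking-by-detection: detect, then associate.
    results = []
    for frame in frames:
        detections = detector.detect(frame)  # boxes, classes, and scores
        tracks = tracker.update(detections)  # boxes with persistent track IDs
        results.append(tracks)
    return results
```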
Table 6 presents the detection-based MOT pipeline results achieved on the ISR200 sub-dataset. As expected, YOLOv3's performance played a significant role in the overall pipeline: as it increased, the object tracking performance also increased. With the weakest YOLOv3 model, the number of FNs was very high, especially for the Deep-SORT method, which reached a negative accuracy (MOTA). Regardless of YOLOv3's performance, under these conditions SORT outperformed Deep-SORT. Moreover, the three data association cost matrices used with SORT reached similar results, with the default $IoU$ cost matrix achieving the best MOTA and FP results, while the $A$ metric obtained the best MT values. Note that using an object detector may introduce additional errors into the tracking pipeline, such as incorrect detections, shifted detections, missed detections, and wrong object classifications. This can be observed in the obtained TP, FP, FN, and IDs values, which directly influence the remaining evaluation metrics. As shown in Table 6, the tracking performance increases with YOLOv3's performance, due to a large decrease in the FP and IDs values as object detection improves. Regarding the frame rate results, as expected, SORT was faster than Deep-SORT, since SORT does not have to extract visual features through a CNN.
Table 7 presents the detection-based MOT pipeline results achieved on the ISR500 sub-dataset. Once again, the performance of YOLOv3 is crucial for a promising tracking performance. Overall, similar to the results attained on the ISR200 sub-dataset, the SORT method obtained the best results. However, the Deep-SORT method reached the best FN, MT, and ML values, suggesting that Deep-SORT may be more suitable for tracking longer object sequences. This is due to Deep-SORT's greater capability to re-identify lost object sequences compared with SORT, which struggles to predict the position of an object once its track starts to miss associations. As observed in the previous evaluations, the $A$ cost matrix, in both SORT and Deep-SORT, achieved the best MT result, meaning that an object sequence is tracked over at least 80% of its life span, which is very important for successfully performing mobile robot navigation tasks.

5. Conclusions

In this paper, having in view navigation tasks in assistive mobile robot platforms, an evaluation study of two tracking-by-detection MOT algorithms, SORT and Deep-SORT, was presented. Moreover, eight new tracking data association metrics based on intersection over union, Euclidean distances, and bounding box ratios were proposed. To evaluate both tracking methods with the proposed data association metrics, the ISR Tracking dataset, which represents the object conditions from an assistive mobile robot's point of view, was also proposed. The presented pipeline uses the YOLOv3 network to detect and classify the objects in RGB images, feeding the tracking algorithm. Promising results were attained by the majority of the proposed tracking data association metrics on SORT and also on Deep-SORT. Overall, based on the performed experiments, the SORT method achieved higher accuracy and precision, while the Deep-SORT method obtained the best FN, IDs, and MT values. Moreover, the proposed $A$ data association metric achieved the best performance on both evaluated object tracking methods and showed a significant improvement on the MT evaluation metric, which can be crucial for successful navigation tasks on robotic platforms. The results also showed, as expected, that the overall object tracking performance depends strongly on the object detector performance. SORT is faster than Deep-SORT, reaching 50 FPS in the overall pipeline (YOLOv3 + SORT). Therefore, considering navigation tasks in assistive platforms, and also considering the issues associated with an object detector algorithm, the SORT method using the $A$ data association metric obtained more robust results and, as such, can be the more suitable approach.
As future work, it is intended to integrate the presented pipeline on the RobChair [21] platform for assistive navigation tasks.

Author Contributions

Conceptualization, methodology, software, validation, investigation: R.P. and G.C.; formal analysis, R.P., G.C. and L.G.; writing, R.P., G.C., L.G. and U.J.N.; supervision, funding acquisition: U.J.N. All authors have read and agreed to the published version of the manuscript.

Funding

Ricardo Pereira has been supported by the Portuguese Foundation for Science and Technology (FCT) under a PhD grant with reference SFRH/BD/148779/2019. This work has been also supported by the projects B-RELIABLE with reference SAICT/30935/2017 and MATIS-CENTRO-01-0145-FEDER-000014. It was also partially supported by ISR-UC through FCT grant UIDB/00048/2020.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data available in a publicly accessible repository. Publicly available datasets were analyzed in this study. Data can be found here: https://motchallenge.net/data/MOT17/ (accessed on 17 October 2021) and https://github.com/rmca16/ISR_RGB-D_Dataset (accessed on 20 December 2021).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Ciaparrone, G.; Sánchez, F.L.; Tabik, S.; Troiano, L.; Tagliaferri, R.; Herrera, F. Deep learning in video multi-object tracking: A survey. Neurocomputing 2020, 381, 61–88. [Google Scholar] [CrossRef] [Green Version]
  2. Xu, Y.; Osep, A.; Ban, Y.; Horaud, R.; Leal-Taixe, L.; Alameda-Pineda, X. How To Train Your Deep Multi-Object Tracker. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 14–19 June 2020. [Google Scholar]
  3. Kamal, R.; Chemmanam, A.J.; Jose, B.; Mathews, S.; Varghese, E. Construction Safety Surveillance Using Machine Learning. In Proceedings of the International Symposium on Networks, Computers and Communications (ISNCC), Montreal, QC, Canada, 20–22 October 2020. [Google Scholar]
  4. Behrendt, K.; Novak, L.; Botros, R. A Deep Learning Approach to Traffic Lights: Detection, Tracking, and Classification. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Singapore, 29 May–3 June 2017. [Google Scholar]
  5. Ess, A.; Schindler, K.; Leibe, B.; Gool, L.V. Object Detection and Tracking for Autonomous Navigation in Dynamic Environments. Int. J. Robot. Res. 2010, 29, 1707–1725. [Google Scholar] [CrossRef] [Green Version]
  6. Lo, S.; Yamane, K.; Sugiyama, K. Perception of Pedestrian Avoidance Strategies of a Self-Balancing Mobile Robot. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Macau, China, 4–8 November 2019. [Google Scholar]
  7. Islam, M.; Hong, J.; Sattar, J. Person-following by autonomous robots: A categorical overview. Int. J. Robot. Res. 2019, 38, 1581–1618. [Google Scholar] [CrossRef] [Green Version]
  8. Wang, Q.; Zhang, L.; Bertinetto, L.; Hu, W.; Torr, P.H. Fast Online Object Tracking and Segmentation: A Unifying Approach. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 16–20 June 2019. [Google Scholar]
  9. Liu, Q.; Liu, B.; Wu, Y.; Li, W.; Yu, N. Real-Time Online Multi-Object Tracking in Compressed Domain. IEEE Access 2019, 7, 76489–76499. [Google Scholar] [CrossRef]
  10. Bewley, A.; Ge, Z.; Ott, L.; Ramos, F.; Upcroft, B. Simple online and realtime tracking. In Proceedings of the IEEE International Conference on Image Processing (ICIP), Phoenix, AZ, USA, 25–28 September 2016. [Google Scholar]
  11. Wojke, N.; Bewley, A.; Paulus, D. Simple Online and Realtime Tracking with a Deep Association Metric. In Proceedings of the IEEE International Conference on Image Processing (ICIP), Beijing, China, 17–20 September 2017. [Google Scholar]
  12. Chen, L.; Ai, H.; Zhuang, Z.; Shang, C. Real-Time Multiple People Tracking with Deeply Learned Candidate Selection and Person Re-Identification. In Proceedings of the IEEE International Conference on Multimedia and Expo (ICME), San Diego, CA, USA, 23–27 July 2018. [Google Scholar]
  13. Pereira, R.; Gonçalves, N.; Garrote, L.; Barros, T.; Lopes, A.; Nunes, U.J. Deep-Learning based Global and Semantic Feature Fusion for Indoor Scene Classification. In Proceedings of the IEEE International Conference on Autonomous Robot Systems and Competitions (ICARSC), Ponta Delgada, Portugal, 15–17 April 2020. [Google Scholar]
  14. Pereira, R.; Garrote, L.; Barros, T.; Lopes, A.; Nunes, U.J. A Deep Learning-based Indoor Scene Classification Approach Enhanced with Inter-Object Distance Semantic Features. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Prague, Czech Republic, 27 September–1 October 2021. [Google Scholar]
  15. Tan, M.; Le, Q.V. EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks. In Proceedings of the International Conference on Machine Learning, Long Beach, CA, USA, 9–15 June 2019. [Google Scholar]
  16. Redmon, J.; Farhadi, A. YOLOv3: An Incremental Improvement. arXiv 2018, arXiv:1804.02767. [Google Scholar]
  17. Zhang, Y.; Wang, C.; Wang, X.; Zeng, W.; Liu, W. A Simple Baseline for Multi-Object Tracking. arXiv 2020, arXiv:2004.01888. [Google Scholar]
  18. Bertinetto, L.; Valmadre, J.; Henriques, J.F.; Vedaldi, A.; Torr, P.H. Fully-convolutional siamese networks for object tracking. In Proceedings of the European Conference on Computer Vision, Amsterdam, The Netherlands, 11–14 October 2016. [Google Scholar]
  19. Wu, J.; Cao, J.; Song, L.; Wang, Y.; Yang, M.; Yuan, J. Track to Detect and Segment: An Online Multi-Object Tracker. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, USA, 20–25 June 2021. [Google Scholar]
  20. Bergmann, P.; Meinhardt, T.; Leal-Taixé, L. Tracking without bells and whistles. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 16–20 June 2019. [Google Scholar]
  21. Lopes, A.; Rodrigues, J.; Perdigao, J.; Pires, G.; Nunes, U.J. A New Hybrid Motion Planner: Applied in a Brain-Actuated Robotic Wheelchair. IEEE Robot. Autom. Mag. 2016, 23, 82–93. [Google Scholar] [CrossRef]
  22. Iturrate, I.; Antelis, J.M.; Kubler, A.; Minguez, J. A Noninvasive Brain-Actuated Wheelchair Based on a P300 Neurophysiological Protocol and Automated Navigation. IEEE Trans. Robot. 2009, 25, 614–627. [Google Scholar] [CrossRef] [Green Version]
  23. Cruz, A.; Pires, G.; Lopes, A.; Carona, C.; Nunes, U.J. A Self-Paced BCI With a Collaborative Controller for Highly Reliable Wheelchair Driving: Experimental Tests with Physically Disabled Individuals. IEEE Trans. Hum. Mach. Syst. 2021, 51, 109–119. [Google Scholar] [CrossRef]
  24. Lopes, A.; Pires, G.; Nunes, U.J. Assisted navigation for a brain-actuated intelligent wheelchair. Robot. Auton. Syst. 2013, 61, 245–258. [Google Scholar] [CrossRef]
  25. Sinyukov, D.A.; Padir, T. A Novel Shared Position Control Method for Robot Navigation Via Low Throughput Human-Machine Interfaces. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, 1–5 October 2018. [Google Scholar]
  26. Carvalho, G. Kalman Filter-Based Object Tracking Techniques for Indoor Robotic Applications. Master’s Dissertation, University of Coimbra, Coimbra, Portugal, 2021. [Google Scholar]
  27. Milan, A.; Leal-Taixé, L.; Reid, I.D.; Roth, S.; Schindler, K. MOT16: A benchmark for multi-object tracking. arXiv 2016, arXiv:1603.00831. [Google Scholar]
  28. Pereira, R.; Garrote, L.; Barros, T.; Lopes, A.; Nunes, U.J. An Experimental Study of the Accuracy vs Inference Speed of RGB-D Object Recognition. In Proceedings of the IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), Naples, Italy, 31 August–4 September 2020. [Google Scholar]
  29. Fiaz, M.; Mahmood, A.; Javed, S.; Jung, S.K. Handcrafted and Deep Trackers: Recent Visual Object Tracking Approaches and Trends. ACM Comput. Surv. 2019, 52, 1–44. [Google Scholar] [CrossRef]
  30. Zhang, X.; Wang, X.; Gu, C. Online multi-object tracking with pedestrian re-identification and occlusion processing. Vis. Comput. 2021, 37, 1089–1099. [Google Scholar] [CrossRef]
  31. Guo, Q.; Feng, W.; Zhou, C.; Huang, R.; Wan, L.; Wang, S. Learning Dynamic Siamese Network for Visual Object Tracking. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017. [Google Scholar]
  32. Reid, D. An algorithm for tracking multiple targets. IEEE Trans. Autom. Control. 1979, 24, 843–854. [Google Scholar] [CrossRef]
  33. He, J.; Huang, Z.; Wang, N.; Zhang, Z. Learnable Graph Matching: Incorporating Graph Partitioning with Deep Feature Learning for Multiple Object Tracking. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, USA, 20–25 June 2021. [Google Scholar]
  34. Kuhn, H.W. The Hungarian method for the assignment problem. Nav. Res. Logist. Q. 1955, 2, 83–97. [Google Scholar] [CrossRef] [Green Version]
  35. Lee, S.; Kim, E. Multiple Object Tracking via Feature Pyramid Siamese Networks. IEEE Access 2019, 7, 8181–8194. [Google Scholar] [CrossRef]
  36. Lin, T.; Dollár, P.; Girshick, R.B.; He, K.; Hariharan, B.; Belongie, S. Feature Pyramid Networks for Object Detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017. [Google Scholar]
  37. Jin, J.; Li, X.; Li, X.; Guan, S. Online Multi-object Tracking with Siamese Network and Optical Flow. In Proceedings of the IEEE 5th International Conference on Image, Vision and Computing (ICIVC), Beijing, China, 10–12 July 2020. [Google Scholar]
  38. Lucas, B.; Kanade, T. An Iterative Image Registration Technique with an Application to Stereo Vision. In Proceedings of the 7th International Joint Conference on Artificial Intelligence (IJCAI), Vancouver, BC, Canada, 24–28 August 1981. [Google Scholar]
  39. Xiao, H.; Li, Z.; Yang, C.; Yuan, W.; Wang, L. RGB-D Sensor-based Visual Target Detection and Tracking for an Intelligent Wheelchair Robot in Indoors Environments. Int. J. Control Autom. Syst. 2015, 13, 521–529. [Google Scholar] [CrossRef]
  40. Lecrosnier, L.; Khemmar, R.; Ragot, N.; Decoux, B.; Rossi, R.; Kefi, N.; Ertaud, J.Y. Deep Learning-Based Object Detection, Localisation and Tracking for Smart Wheelchair Healthcare Mobility. Int. J. Environ. Res. Public Health 2021, 18, 91. [Google Scholar] [CrossRef] [PubMed]
  41. Zheng, L.; Bie, Z.; Sun, Y.; Wang, J.; Su, C.; Wang, S.; Tian, Q. MARS: A Video Benchmark for Large-Scale Person Re-identification. In Proceedings of the European Conference on Computer Vision (ECCV), Amsterdam, The Netherlands, 11–14 October 2016. [Google Scholar]
  42. Wojke, N.; Bewley, A. Deep Cosine Metric Learning for Person Re-identification. In Proceedings of the IEEE Winter Conference on Applications of Computer Vision (WACV), Lake Tahoe, NV, USA, 12–15 March 2018. [Google Scholar]
  43. Cruz, R.; Garrote, L.; Lopes, A.; Nunes, U.J. Modular software architecture for human-robot interaction applied to the InterBot mobile robot. In Proceedings of the IEEE International Conference on Autonomous Robot Systems and Competitions (ICARSC), Torres Vedras, Portugal, 25–27 April 2018. [Google Scholar]
Figure 1. Overview of the Kalman Filter based object tracking algorithms used in this work.
Figure 2. Overview of the object tracking SORT algorithm.
Figure 3. Overview of the object tracking Deep-SORT algorithm.
Figure 4. Tracking MOTA variation according to different data association thresholds on the MOT17 dataset.
Table 1. Review of state-of-the-art tracking by detection. (DL = Deep Learning-based; KF = Kalman Filter-based.)

| Method | Year | DL | KF | Description |
| --- | --- | --- | --- | --- |
| SORT [10] | 2016 | | × | Simple and fast KF-based algorithm that associates objects based on their bounding box appearance. |
| Deep-SORT [11] | 2017 | × | × | KF-based algorithm that associates objects based on their appearance description extracted by a CNN re-identification network. |
| MOTDT [12] | 2018 | × | × | Deep-SORT related algorithm that uses predicted bounding boxes as candidates for association, in an attempt to solve the occlusion problem. |
| GMT-CT [33] | 2021 | × | × | Deep-SORT related algorithm that solves association problems using graph partitioning based on appearance features. |
| DROP [30] | 2020 | × | × | Associates objects using a confidence-based cost to construct the Hungarian algorithm solver. Furthermore, it uses appearance features to determine occlusions in the environment. |
| FPSN-MOT [35] | 2019 | × | | Uses Siamese and Feature Pyramid-based Networks addressing appearance and motion features in the association stage. |
| Jin et al. [37] | 2020 | × | × | Deep-SORT related algorithm that uses a Siamese network to process association tasks and also introduces optical flow information in the motion model, in order to improve accuracy. |
Table 2. Evaluation of the $WM$ data association cost matrix using different weight combinations on the MOT17 dataset.

| $\lambda_{IoU}$ | $\lambda_{D_E}$ | $\lambda_R$ | %MOTA | %MOTP | TP | FP | FN | IDs | %MT | %ML | FM |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 5/10 | 4/10 | 1/10 | 44.93 | 87.84 | 56,677 | 6223 | 54,772 | 848 | 12.3 | 33.0 | 946 |
| 5/10 | 3/10 | 2/10 | 44.99 | 87.84 | 56,713 | 6196 | 54,757 | 827 | 12.3 | 33.3 | 932 |
| 4/10 | 3/10 | 3/10 | 44.85 | 87.76 | 56,688 | 6323 | 54,776 | 833 | 13.0 | 32.8 | 953 |
| 3/10 | 4/10 | 3/10 | 44.64 | 87.71 | 56,613 | 6479 | 54,832 | 852 | 12.8 | 33.0 | 948 |
| 3/10 | 3/10 | 4/10 | 44.58 | 87.70 | 56,558 | 6492 | 54,858 | 881 | 12.8 | 33.0 | 967 |
| 4/10 | 5/10 | 1/10 | 44.75 | 87.75 | 56,627 | 6379 | 54,793 | 877 | 12.6 | 33.3 | 961 |
| 6/10 | 3/10 | 1/10 | 45.25 | 87.90 | 56,803 | 5984 | 54,695 | 799 | 12.1 | 33.0 | 912 |
| 6/10 | 2/10 | 2/10 | 45.25 | 87.92 | 56,801 | 5990 | 54,705 | 791 | 12.1 | 33.0 | 907 |
| 7/10 | 2/10 | 1/10 | 45.53 | 88.09 | 56,552 | 5426 | 54,996 | 749 | 12.6 | 33.5 | 853 |
The bold value highlights the best value on each column (in this case, each MOT evaluation metric).
Table 3. Evaluation of the SORT, Deep-SORT, and proposed data association cost matrices on the MOT17 dataset.

| Cost Matrix | %MOTA↑ | %MOTP↑ | TP↑ | FP↓ | FN↓ | IDs↓ | %MT↑ | %ML↓ | FM↓ | FPS↑ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| SORT | | | | | | | | | | |
| $IoU$ | 45.56 | 88.19 | 56,298 | 5136 | 55,281 | 718 | 11.5 | 35.3 | 798 | 516 |
| $D_E$ | 41.24 | 86.99 | 54,292 | 7977 | 56,271 | 1734 | 7.9 | 33.7 | 1915 | 500 |
| $R$ | 14.15 | 83.06 | 37,236 | 21,351 | 70,281 | 4780 | 4.9 | 37.5 | 4730 | 510 |
| $ED_{IoU}$ | 45.55 | 88.20 | 56,275 | 5126 | 55,305 | 717 | 11.5 | 35.3 | 799 | 486 |
| $R_{IoU}$ | 45.55 | 88.21 | 56,263 | 5111 | 55,324 | 710 | 11.5 | 35.5 | 797 | 499 |
| $R_{D_E}$ | 44.40 | 87.79 | 56,329 | 6470 | 55,028 | 940 | 11.7 | 32.4 | 1090 | 480 |
| $M$ | 45.54 | 88.21 | 56,245 | 5107 | 55,344 | 708 | 11.5 | 35.7 | 797 | 469 |
| $A$ | 44.72 | 87.72 | 56,636 | 6417 | 54,811 | 850 | 13.0 | 33.0 | 958 | 472 |
| $WM$ | 45.53 | 88.09 | 56,552 | 5426 | 54,996 | 749 | 12.6 | 33.5 | 853 | 473 |
| Deep-SORT | | | | | | | | | | |
| $IoU$ | 45.53 | 88.26 | 55,641 | 4510 | 56,187 | 469 | 13.0 | 35.7 | 666 | 57 |
| $D_E$ | 45.49 | 88.13 | 55,768 | 4689 | 55,988 | 541 | 13.6 | 34.6 | 736 | 57 |
| $R$ | 42.20 | 87.76 | 53,722 | 6334 | 57,604 | 971 | 9.7 | 37.2 | 1126 | 57 |
| $ED_{IoU}$ | 45.49 | 88.12 | 55,781 | 4702 | 55,995 | 521 | 13.9 | 34.2 | 724 | 57 |
| $R_{IoU}$ | 44.35 | 88.06 | 55,106 | 5300 | 56,532 | 659 | 12.6 | 35.3 | 845 | 57 |
| $R_{D_E}$ | 44.83 | 87.99 | 55,383 | 5038 | 56,278 | 636 | 12.3 | 34.4 | 821 | 57 |
| $M$ | 44.87 | 87.97 | 55,408 | 5019 | 56,263 | 626 | 12.6 | 34.4 | 814 | 57 |
| $A$ | 45.49 | 88.15 | 55,788 | 4701 | 56,001 | 508 | 13.9 | 34.6 | 712 | 57 |
| $WM$ | 45.67 | 88.23 | 55,834 | 4547 | 55,991 | 472 | 13.9 | 34.8 | 667 | 57 |
Bold highlights the best value in each column (in this case, each MOT evaluation metric).
Table 4. Evaluation of the SORT, Deep-SORT, and proposed data association cost matrices on the ISR Tracking dataset.

| Cost Matrix | %MOTA↑ | %MOTP↑ | TP↑ | FP↓ | FN↓ | IDs↓ | %MT↑ | %ML↓ | FM↓ | FPS↑ |
|---|---|---|---|---|---|---|---|---|---|---|
| **SORT** | | | | | | | | | | |
| IoU | 90.57 | 92.21 | 29,589 | 31 | 2932 | 114 | 60.5 | 1.5 | 556 | 1368 |
| DE | 88.85 | 91.48 | 29,307 | 310 | 3031 | 297 | 59.0 | 0.9 | 618 | 1389 |
| R | 68.23 | 88.45 | 25,015 | 2748 | 5550 | 2070 | 30.7 | 3.6 | 1179 | 1380 |
| EDIoU | 90.44 | 92.27 | 29,538 | 22 | 2974 | 123 | 59.0 | 1.5 | 561 | 1317 |
| RIoU | 90.34 | 92.30 | 29,491 | 9 | 3008 | 136 | 58.7 | 1.8 | 566 | 1377 |
| RDE | 90.77 | 92.00 | 29,713 | 91 | 2827 | 95 | 65.0 | 0.9 | 562 | 1368 |
| M | 90.21 | 92.35 | 29,445 | 5 | 3053 | 137 | 57.4 | 1.8 | 564 | 1311 |
| A | 90.87 | 91.96 | 29,756 | 101 | 2799 | 80 | 67.5 | 1.2 | 558 | 1288 |
| WM | 90.90 | 92.10 | 29,715 | 50 | 2832 | 88 | 64.4 | 1.2 | 558 | 1298 |
| **SORT with Class Gate Metric** | | | | | | | | | | |
| IoU | 90.63 | 92.20 | 29,611 | 33 | 2926 | 98 | 61.7 | 1.5 | 550 | 1404 |
| DE | 90.82 | 92.00 | 29,739 | 100 | 2830 | 66 | 66.9 | 1.2 | 566 | 1408 |
| R | 87.74 | 91.56 | 29,134 | 500 | 3216 | 285 | 63.2 | 1.2 | 642 | 1425 |
| EDIoU | 90.49 | 92.27 | 29,553 | 22 | 2969 | 113 | 59.6 | 1.5 | 556 | 1337 |
| RIoU | 90.36 | 92.32 | 29,497 | 8 | 3015 | 123 | 59.3 | 1.8 | 559 | 1392 |
| RDE | 90.98 | 92.05 | 29,767 | 77 | 2813 | 55 | 68.1 | 0.9 | 555 | 1375 |
| M | 90.24 | 92.36 | 29,456 | 5 | 3050 | 129 | 58.1 | 1.8 | 559 | 1307 |
| A | 91.02 | 92.02 | 29,785 | 81 | 2799 | 51 | 69.3 | 1.2 | 554 | 1292 |
| WM | 90.93 | 92.10 | 29,727 | 53 | 2837 | 71 | 65.3 | 1.2 | 552 | 1305 |
| **Deep-SORT with Class Gate Metric** | | | | | | | | | | |
| IoU | 90.80 | 89.66 | 30,989 | 1357 | 1447 | 199 | 72.3 | 0.3 | 142 | 163 |
| DE | 91.09 | 89.53 | 31,100 | 1372 | 1367 | 168 | 76.3 | 0.3 | 131 | 167 |
| R | 89.12 | 90.27 | 30,467 | 1384 | 1783 | 385 | 62.3 | 0.3 | 292 | 166 |
| EDIoU | 91.15 | 89.52 | 31,124 | 1376 | 1350 | 161 | 78.4 | 0.3 | 130 | 165 |
| RIoU | 90.90 | 89.86 | 30,994 | 1328 | 1401 | 240 | 75.7 | 0.3 | 160 | 166 |
| RDE | 91.07 | 89.54 | 31,087 | 1367 | 1381 | 167 | 76.3 | 0.6 | 134 | 169 |
| M | 91.15 | 89.55 | 31,116 | 1370 | 1354 | 165 | 77.8 | 0.6 | 125 | 163 |
| A | 91.23 | 89.55 | 31,123 | 1350 | 1350 | 162 | 78.7 | 0.3 | 126 | 168 |
| WM | 91.09 | 89.56 | 31,103 | 1376 | 1363 | 169 | 76.6 | 0.3 | 119 | 166 |
Bold highlights the best value in each column (in this case, each MOT evaluation metric).
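The class gate rows of Table 4 restrict association to detections that share the tracked object's class label. One plausible way to realize such a gate, sketched here under our own naming and with an illustrative sentinel cost, is to overwrite cross-class entries of the cost matrix with a prohibitively large value before running the assignment solver:

```python
import numpy as np

LARGE_COST = 1e6  # sentinel that effectively forbids an assignment

def class_gate(cost, track_classes, detection_classes):
    """Block cross-class matches before solving the assignment problem.

    cost: (tracks x detections) data association cost matrix.
    track_classes / detection_classes: per-object class labels
    (e.g., produced by the YOLOv3 detector).
    """
    gated = np.array(cost, dtype=float, copy=True)
    tc = np.asarray(track_classes).reshape(-1, 1)
    dc = np.asarray(detection_classes).reshape(1, -1)
    gated[tc != dc] = LARGE_COST  # only same-class pairs stay matchable
    return gated
```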
Table 5. Evaluation of the SORT and the Deep-SORT on the ISR Tracking dataset using non-consecutive images. All data association cost matrices used the class gate formulation.

| Gap | Tracking Method | %MOTA↑ | %MOTP↑ | TP↑ | FP↓ | FN↓ | IDs↓ | %MT↑ | %ML↓ | FM↓ | FPS↑ |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | SORT (IoU) | 84.92 | 88.89 | 13,514 | 47 | 2247 | 97 | 48.3 | 9.2 | 178 | 1444 |
| | SORT (A) | 86.43 | 88.20 | 13,819 | 113 | 2012 | 27 | 58.2 | 8.3 | 160 | 1375 |
| | SORT (WM) | 86.11 | 88.43 | 13,732 | 77 | 2073 | 53 | 53.2 | 8.9 | 154 | 1426 |
| | Deep-SORT (IoU) | 81.28 | 86.02 | 13,893 | 1003 | 1761 | 204 | 48.9 | 5.8 | 128 | 155 |
| | Deep-SORT (EDIoU) | 82.02 | 85.56 | 14,077 | 1070 | 1590 | 191 | 55.7 | 4.0 | 111 | 158 |
| | Deep-SORT (A) | 82.49 | 85.61 | 14,109 | 1027 | 1587 | 162 | 57.2 | 4.6 | 103 | 158 |
| 2 | SORT (IoU) | 79.73 | 87.75 | 8718 | 53 | 2022 | 128 | 35.8 | 12.3 | 151 | 1407 |
| | SORT (A) | 83.37 | 86.09 | 9239 | 178 | 1593 | 36 | 49.4 | 8.6 | 130 | 1413 |
| | SORT (WM) | 83.08 | 86.61 | 9117 | 88 | 1674 | 77 | 43.5 | 9.0 | 127 | 1480 |
| | Deep-SORT (IoU) | 75.28 | 84.16 | 8975 | 794 | 1697 | 196 | 38.0 | 8.3 | 126 | 152 |
| | Deep-SORT (EDIoU) | 78.65 | 82.86 | 9414 | 866 | 1317 | 137 | 51.2 | 4.0 | 89 | 153 |
| | Deep-SORT (A) | 79.41 | 82.98 | 9470 | 840 | 1279 | 119 | 51.9 | 4.9 | 83 | 153 |
| 3 | SORT (IoU) | 75.50 | 87.23 | 6406 | 38 | 1897 | 131 | 23.4 | 13.1 | 84 | 1272 |
| | SORT (A) | 81.02 | 84.52 | 7094 | 261 | 1292 | 48 | 39.9 | 6.2 | 43 | 1338 |
| | SORT (WM) | 81.28 | 85.44 | 6941 | 86 | 1414 | 79 | 33.0 | 8.4 | 39 | 1355 |
| | Deep-SORT (IoU) | 69.24 | 83.37 | 6528 | 688 | 1722 | 184 | 31.5 | 10.0 | 107 | 138 |
| | Deep-SORT (EDIoU) | 73.88 | 82.33 | 6947 | 716 | 1295 | 192 | 34.0 | 4.0 | 104 | 145 |
| | Deep-SORT (A) | 75.71 | 80.90 | 7185 | 800 | 1192 | 57 | 52.0 | 3.4 | 53 | 148 |
Bold highlights the best value in each column (in this case, each MOT evaluation metric).
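Reading the Gap column as the number of frames dropped between consecutively processed images (our interpretation of "non-consecutive images"), the evaluation input can be emulated by simple subsampling:

```python
def subsample(frames, gap):
    """Keep one frame, then skip `gap` frames, emulating a platform that
    cannot process every camera image in time (interpretation assumed)."""
    return frames[::gap + 1]

# e.g., subsample(list(range(10)), 2) -> [0, 3, 6, 9]
```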
Table 6. Evaluation of the detection-based MOT pipeline on the ISR200 sub-dataset. All data association cost matrices used the class gate formulation.

| YOLO | Tracking Method | %MOTA↑ | %MOTP↑ | TP↑ | FP↓ | FN↓ | IDs↓ | %MT↑ | %ML↓ | FM↓ | FPS↑ |
|---|---|---|---|---|---|---|---|---|---|---|---|
| YM1 | SORT (IoU) | 23.30 | 78.65 | 13,345 | 9665 | 1373 | 1078 | 42.6 | 6.6 | 167 | 47 |
| | SORT (A) | 22.39 | 78.61 | 13,384 | 9847 | 1350 | 1062 | 44.6 | 6.6 | 168 | 48 |
| | SORT (WM) | 22.66 | 78.61 | 13,358 | 9778 | 1348 | 1090 | 44.6 | 6.6 | 166 | 49 |
| | Deep-SORT (IoU) | −2.27 | 78.12 | 13,029 | 13,387 | 1338 | 1429 | 39.1 | 7.0 | 157 | 27 |
| | Deep-SORT (EDIoU) | −10.16 | 78.08 | 13,090 | 14,695 | 1236 | 1470 | 40.3 | 7.4 | 109 | 28 |
| | Deep-SORT (A) | −10.66 | 78.20 | 13,100 | 14,784 | 1218 | 1478 | 43.0 | 7.0 | 114 | 28 |
| YM2 | SORT (IoU) | 41.93 | 81.03 | 14,191 | 7567 | 926 | 679 | 58.9 | 3.9 | 123 | 49 |
| | SORT (A) | 41.59 | 80.99 | 14,222 | 7652 | 895 | 679 | 58.5 | 3.9 | 114 | 49 |
| | SORT (WM) | 41.54 | 81.00 | 14,203 | 7642 | 906 | 687 | 58.9 | 3.9 | 120 | 49 |
| | Deep-SORT (IoU) | 21.50 | 81.14 | 13,942 | 10,546 | 991 | 863 | 55.8 | 4.3 | 124 | 28 |
| | Deep-SORT (EDIoU) | 14.90 | 81.04 | 14,043 | 11,689 | 899 | 854 | 55.8 | 4.7 | 81 | 28 |
| | Deep-SORT (A) | 15.65 | 81.20 | 14,032 | 11,560 | 917 | 847 | 56.6 | 4.3 | 87 | 28 |
| YM3 | SORT (IoU) | 65.43 | 81.64 | 14,623 | 4288 | 858 | 315 | 66.7 | 5.4 | 64 | 50 |
| | SORT (A) | 65.21 | 81.64 | 14,633 | 4333 | 833 | 330 | 66.7 | 5.4 | 71 | 50 |
| | SORT (WM) | 65.40 | 81.64 | 14,644 | 4313 | 835 | 317 | 66.3 | 5.4 | 63 | 50 |
| | Deep-SORT (IoU) | 53.75 | 80.33 | 14,406 | 5916 | 975 | 415 | 61.2 | 5.8 | 101 | 30 |
| | Deep-SORT (EDIoU) | 49.28 | 80.32 | 14,430 | 6646 | 897 | 469 | 60.5 | 6.2 | 83 | 30 |
| | Deep-SORT (A) | 50.01 | 80.44 | 14,450 | 6551 | 890 | 456 | 62.4 | 6.2 | 75 | 30 |
| YM4 | SORT (IoU) | 73.86 | 83.41 | 14,446 | 2779 | 1141 | 209 | 64.7 | 5.0 | 84 | 51 |
| | SORT (A) | 73.75 | 83.39 | 14,455 | 2806 | 1132 | 209 | 65.9 | 4.7 | 86 | 51 |
| | SORT (WM) | 73.83 | 83.42 | 14,456 | 2794 | 1132 | 208 | 65.1 | 5.0 | 80 | 51 |
| | Deep-SORT (IoU) | 65.86 | 81.64 | 14,287 | 3884 | 1205 | 304 | 60.5 | 5.8 | 104 | 31 |
| | Deep-SORT (EDIoU) | 64.32 | 81.62 | 14,392 | 4232 | 1133 | 271 | 64.3 | 5.8 | 90 | 31 |
| | Deep-SORT (A) | 64.69 | 81.61 | 14,404 | 4186 | 1133 | 259 | 65.5 | 5.4 | 90 | 31 |
Bold highlights the best value in each column (in this case, each MOT evaluation metric).
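The FPS values in Tables 6 and 7 cover the complete detection-plus-tracking loop rather than the tracker alone. The sketch below shows one way such a pipeline can be timed; the `detector.detect` and `tracker.update` interfaces are hypothetical stand-ins, not the authors' actual API.

```python
import time

def run_pipeline(frames, detector, tracker):
    """Hypothetical glue loop: per-frame YOLO detections feed the tracker.

    Assumed interfaces: detector.detect(frame) -> (boxes, classes, scores)
    and tracker.update(boxes, classes, scores) -> current tracks.
    The reported FPS covers detection plus tracking, as in the tables.
    """
    start = time.perf_counter()
    results = []
    for frame in frames:
        boxes, classes, scores = detector.detect(frame)   # assumed signature
        results.append(tracker.update(boxes, classes, scores))  # assumed
    fps = len(frames) / (time.perf_counter() - start)
    return results, fps
```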
Table 7. Evaluation of the detection-based MOT pipeline on the ISR500 sub-dataset. All data association cost matrices used the class gate formulation.

| YOLO | Tracking Method | %MOTA↑ | %MOTP↑ | TP↑ | FP↓ | FN↓ | IDs↓ | %MT↑ | %ML↓ | FM↓ | FPS↑ |
|---|---|---|---|---|---|---|---|---|---|---|---|
| YM1 | SORT (IoU) | 24.86 | 78.88 | 12,485 | 8798 | 1288 | 1056 | 38.3 | 2.3 | 156 | 50 |
| | SORT (A) | 23.68 | 78.82 | 12,497 | 8986 | 1255 | 1077 | 40.0 | 2.3 | 163 | 49 |
| | SORT (WM) | 24.30 | 78.82 | 12,495 | 8891 | 1253 | 1081 | 39.4 | 2.3 | 153 | 49 |
| | Deep-SORT (IoU) | −0.45 | 78.33 | 12,266 | 12,332 | 1178 | 1385 | 33.1 | 2.9 | 173 | 28 |
| | Deep-SORT (EDIoU) | −8.75 | 78.31 | 12,334 | 13,631 | 1093 | 1402 | 38.9 | 2.3 | 125 | 28 |
| | Deep-SORT (A) | −8.15 | 78.38 | 12,364 | 13,573 | 1060 | 1405 | 41.1 | 2.3 | 122 | 28 |
| YM2 | SORT (IoU) | 43.49 | 81.20 | 13,242 | 6793 | 1001 | 586 | 53.1 | 2.9 | 108 | 50 |
| | SORT (A) | 43.16 | 81.19 | 13,250 | 6850 | 986 | 593 | 54.3 | 2.9 | 102 | 50 |
| | SORT (WM) | 43.30 | 81.19 | 13,252 | 6831 | 980 | 597 | 54.9 | 2.9 | 104 | 50 |
| | Deep-SORT (IoU) | 23.61 | 81.36 | 13,041 | 9540 | 976 | 812 | 49.7 | 2.9 | 124 | 28 |
| | Deep-SORT (EDIoU) | 16.07 | 81.32 | 13,147 | 10,764 | 870 | 812 | 56.6 | 2.9 | 93 | 29 |
| | Deep-SORT (A) | 15.49 | 81.40 | 13,099 | 10,802 | 896 | 834 | 56.0 | 2.9 | 95 | 29 |
| YM3 | SORT (IoU) | 65.25 | 81.34 | 13,637 | 3961 | 912 | 280 | 64.0 | 2.3 | 72 | 51 |
| | SORT (A) | 65.03 | 81.32 | 13,653 | 4009 | 887 | 289 | 63.4 | 2.3 | 76 | 51 |
| | SORT (WM) | 65.20 | 81.33 | 13,652 | 3983 | 896 | 281 | 64.0 | 2.3 | 71 | 51 |
| | Deep-SORT (IoU) | 53.11 | 80.02 | 13,525 | 5649 | 932 | 372 | 58.9 | 1.1 | 124 | 30 |
| | Deep-SORT (EDIoU) | 49.77 | 80.01 | 13,628 | 6248 | 815 | 386 | 64.0 | 1.1 | 99 | 30 |
| | Deep-SORT (A) | 49.91 | 80.06 | 13,605 | 6204 | 825 | 399 | 66.3 | 1.1 | 109 | 30 |
| YM4 | SORT (IoU) | 75.28 | 83.54 | 13,570 | 2406 | 1068 | 191 | 56.6 | 3.4 | 70 | 52 |
| | SORT (A) | 75.30 | 83.54 | 13,584 | 2418 | 1058 | 187 | 57.1 | 2.9 | 69 | 51 |
| | SORT (WM) | 75.34 | 83.54 | 13,585 | 2413 | 1057 | 187 | 57.1 | 3.4 | 68 | 51 |
| | Deep-SORT (IoU) | 68.70 | 81.62 | 13,543 | 3355 | 1024 | 262 | 58.3 | 1.7 | 107 | 31 |
| | Deep-SORT (EDIoU) | 66.95 | 81.56 | 13,629 | 3701 | 950 | 250 | 66.9 | 2.3 | 93 | 31 |
| | Deep-SORT (A) | 66.82 | 81.58 | 13,629 | 3721 | 948 | 252 | 67.4 | 1.1 | 95 | 31 |
Bold highlights the best value in each column (in this case, each MOT evaluation metric).