Article

Confidence-Guided Frame Skipping to Enhance Object Tracking Speed

School of Software, Kwangwoon University, Kwangwoon-ro 20, Nowon-gu, Seoul 01897, Republic of Korea
Sensors 2024, 24(24), 8120; https://doi.org/10.3390/s24248120
Submission received: 22 October 2024 / Revised: 7 December 2024 / Accepted: 17 December 2024 / Published: 19 December 2024
(This article belongs to the Section Sensing and Imaging)

Abstract

Object tracking is a challenging task in computer vision. While simple tracking methods offer fast speeds, they often fail to track targets; to address this issue, traditional methods typically rely on complex algorithms. This study presents a novel approach that enhances object tracking speed via confidence-guided frame skipping. The proposed method is designed to complement existing methods rather than replace them. A lightweight tracker is first used to follow the target, and an existing, robust but complex algorithm is invoked only when the lightweight tracker fails. The contribution of this study lies in the proposed confidence assessment of the lightweight tracker’s results: the method determines the need for intervention by the robust algorithm based on the predicted confidence level. This two-tiered approach significantly enhances tracking speed by leveraging the lightweight method for straightforward situations and the robust algorithm for challenging scenarios. Experimental results demonstrate the effectiveness of the proposed approach in enhancing tracking speed.

1. Introduction

Object tracking is an important task in computer vision with diverse applications, including autonomous vehicle driving [1], surveillance [2], sports video analysis [3], and human–computer interaction [4]. Despite significant recent advancements, object tracking remains challenging owing to various obstacles, including illumination variation, occlusion, background clutter, target deformation, similar objects, scale transformation, low resolution, and fast motion [5].
In single-object tracking, the initial target is provided in the first frame, and the method’s objective is to locate the specific target and trace its trajectory as it moves through a sequence of frames within a video. Traditional methods for object tracking often relied on hand-crafted features, such as the histogram of oriented gradients (HoG) [6], to estimate the target’s position across frames. However, these approaches may struggle to interpret semantic target information and effectively handle significant changes in appearance [7]. Recently, deep learning-based methods have gained increasing attention as more robust and accurate solutions in the field of object tracking. Numerous object tracking architectures have been developed based on convolutional neural networks (CNNs) [8,9,10,11,12,13,14,15], Siamese neural networks (SNNs) [16,17,18,19], recurrent neural networks (RNNs) [20], generative adversarial networks (GANs) [21], and transformer-based designs such as MixFormer [22,23].
Given that real-time object tracking is crucial for several practical applications, numerous object tracking methods have been proposed [24,25,26,27]. Although these methods enable real-time tracking, their processing speeds still need to be improved. This is particularly crucial when various computer vision algorithms, including object tracking, coexist and run on hardware with limited computing resources. Efficient processing is necessary to prevent object tracking methods from monopolizing the available computing resources. Additionally, some algorithms achieve real-time execution only on expensive high-end GPUs. Consequently, achieving higher processing speeds in object tracking is imperative as it contributes to the seamless and concurrent operation of diverse algorithms in resource-constrained environments.
Object tracking is challenging owing to factors such as occlusion, illumination variation, and target deformation, as mentioned previously. To tackle these issues, researchers have developed robust algorithms that are inevitably complex. Moreover, to maintain high tracking accuracy in real-time object tracking, these challenging scenarios must be considered. Hence, real-time tracking methods also tend to be complex and resource-intensive. However, not every frame presents such difficult conditions. In many cases, the target changes only slightly between consecutive frames. When the camera motion remains minimal, targets within successive frames remain relatively stationary. Thus, applying a complex, resource-heavy algorithm to such straightforward scenarios results in the inefficient use of computing resources. A more efficient approach involves selectively deploying a straightforward algorithm for such cases, essentially reserving the use of complex algorithms exclusively for frames encompassing difficult situations. This approach can effectively enhance the tracking speed while preserving accuracy. The key challenge in this approach lies in discriminating between easy and difficult situations with minimal computational cost.
Therefore, this study introduces a novel approach aimed at accelerating the object tracking speed by selectively applying complex object tracking to only frames containing difficult targets. The proposed method is not intended for a standalone operation; rather, it is designed to complement existing methods synergistically. The proposed method initially attempts to track a target using a lightweight object tracking method with an extremely small computational load, which is based on the block-matching algorithm [28]. Subsequently, the proposed method evaluates the tracking results using a newly introduced confidence level. For cases where the tracking results of the lightweight tracking are deemed unsuccessful, a robust algorithm (an existing technique) intervenes to track the target. The proposed method is designed to be easily integrated with existing methods, and this study provides an integration example in detail. This two-tiered approach effectively enhances tracking speed.
The remainder of this paper is structured as follows: Section 2 provides a concise summary of related work. Section 3 discusses the proposed algorithm in detail. Section 3.3 discusses the integration of the proposed method with an existing technique. Section 4 presents the experimental results. Finally, Section 5 presents the conclusions of the study.

2. Related Works

The block-matching algorithm [28] stands as a fundamental tool extensively employed for estimating motion between successive frames of video sequences. This technique finds applications in object tracking, as demonstrated in [29,30]. El-Azim et al. tracked a single moving object within a frame under the assumption that the object is a rigid body [29]. Hariharakrishnan and Schonfeld introduced a fast object tracking algorithm using adaptive block matching [31]. One of the key advantages of object tracking based on the block-matching algorithm is its simplicity and computational efficiency. However, despite its high processing speed, this technique has not seen widespread adoption in recent research owing to its limited performance.
The advent of the deep learning revolution [32] has not only transformed object recognition [33] but has also generated considerable interest in applying deep networks to object tracking. This evolution has led to the emergence of numerous tracking methodologies based on CNNs [8,9,10,11,12,13,14,15]. By leveraging the breakthroughs in CNN architectures, these trackers capitalize on their inherent advantages. They effectively capture and encode the distinctive characteristics of objects as high-dimensional features by harnessing the potent representational capabilities of CNNs. Numerous studies [8,9,10,11,12,13,14,15] have demonstrated that these feature representations can be efficiently used for object tracking.
Although CNN-based trackers are widely employed, they have certain limitations [7]. To address these limitations, recent studies have focused on Siamese neural networks [16,17,18,19]. Siamese-based trackers conceptualize object tracking as the learning of a similarity map between the target template and candidate search regions in subsequent frames [34] while harnessing the advantages of deep networks for end-to-end learning. Siamese neural network-based trackers have garnered considerable attention because of their balanced accuracy and computational effectiveness [35]. Thus, they are considered among the most promising architectures for object tracking [36]. The Siamese Region Proposal Network (SiamRPN) [17] employs the concept of the region proposal network from [37]. Li et al. [34] introduced the Siamese Region Proposal Network++ (SiamRPN++), which employs ResNet [38] as its backbone network.

3. Proposed Algorithm

Figure 1 provides an overview of the proposed algorithm. Initially, the lightweight object tracking method, based on the block-matching algorithm [28], is used to track a target within the current frame, outputting a bounding box of the target ( B L ) along with its matching cost ( SAD MIN ). Although the computational complexity of lightweight object tracking is minimal, the predicted B L is not highly reliable. To address this limitation, the proposed confidence level associated with the predicted bounding box ( B L ) is calculated using the pixels of the target in the previous frame and the matching cost obtained from lightweight object tracking. Subsequently, the proposed algorithm assesses whether the confidence level surpasses a specified threshold. If it does, the bounding box predicted by the lightweight object tracking method is considered the final output. Otherwise, the proposed algorithm invokes an existing method that ensures reliable results in challenging scenarios. When the tracked object disappears, the confidence level typically drops owing to inconsistencies in the tracking results of the lightweight object tracking, which triggers the robust tracker to reinitialize tracking when the object reappears. If the lightweight tracking mistakenly locks onto another object, the confidence evaluation mechanism detects the mismatch based on the predefined threshold, prompting corrective action by the robust tracker. It should also be noted that when the robust tracker fails to track the target owing to object disappearance, the next frame is again tracked using the robust tracker. Additionally, after the lightweight object tracker processes S N consecutive frames, the robust tracker is forcibly invoked to ensure long-term reliability. Here, S N denotes the maximum number of consecutive frames processed without invoking complex but robust object tracking, and S represents the number of consecutive frames processed following the activation of complex but robust object tracking.
Two essential factors must be considered for achieving this. First, lightweight object tracking should be significantly faster than existing methods to ensure that its computational cost can be neglected. If the complexity of lightweight object tracking becomes comparable to that of existing methods, the proposed approach cannot effectively accelerate the object tracking speed. The second factor is the reliability of the calculated confidence level. Ensuring a reliable confidence calculation is crucial because high confidence in incorrect tracking results of lightweight object tracking can lead to significantly poor accuracy in the final object tracking results.
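To make the overall control flow concrete, the following Python sketch outlines one possible per-frame decision loop. The callables lightweight_track, evaluate_confidence, and robust_track are illustrative placeholders for the block-matching tracker (Section 3.1), the confidence evaluation (Section 3.2), and an existing robust tracker such as SiamRPN++; the names and signatures are assumptions for this sketch, not the paper's actual implementation.

```python
# Minimal sketch of the confidence-guided frame skipping loop (not the paper's code).
def track_video(frames, init_bbox, lightweight_track, evaluate_confidence,
                robust_track, T=0.5, S_N=1):
    bbox = init_bbox
    results = [init_bbox]
    skipped = 0  # S: consecutive frames handled by the lightweight tracker
    for prev_frame, cur_frame in zip(frames, frames[1:]):
        bbox_l, sad_min = lightweight_track(prev_frame, cur_frame, bbox)
        c_l = evaluate_confidence(prev_frame, bbox, sad_min)
        if c_l >= T and skipped < S_N:
            bbox = bbox_l                    # lightweight result accepted as B_F
            skipped += 1
        else:
            bbox = robust_track(cur_frame)   # invoke the complex but robust tracker
            skipped = 0
        results.append(bbox)
    return results
```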

3.1. Lightweight Object Tracking

Not all target objects present challenging situations. In several cases, the changes in a target between successive frames are relatively small. Moreover, when the camera is static and the target is motionless, the target in successive frames also remains static. Hence, lightweight object tracking focuses on accurately tracking targets in simpler scenarios with fewer changes between successive frames. One effective method for achieving this is the block-matching algorithm [28]. This work employs a simple block-matching approach without considering advanced techniques such as the adaptive block-matching method described in [31].
Figure 2a illustrates the lightweight object tracking method, which employs a block-matching algorithm. The best match for the bounding box in the k-th frame, B F ( k ) , is found in a search area in the (k + 1)-th frame, and this position is set as the position of the new bounding box, B L ( k + 1 ) in the (k + 1)-th frame. The size of the new bounding box B L ( k + 1 ) remains unchanged from B F ( k ) in the k-th frame.
$( d_x, d_y ) = \arg\min_{(m, n) \in SR} \mathrm{SAD}(m, n)$  (1)
$\mathrm{SAD}(m, n) = \sum_{j = b_y}^{b_y + B_H} \sum_{i = b_x}^{b_x + B_W} \left| I_k(i, j) - I_{k+1}(i + m, j + n) \right|$  (2)
Here, I k ( i , j ) is the pixel value at ( i , j ) in the k-th frame. ( d x , d y ) is the displacement of the target from the k-th frame to (k + 1)-th frame, and SR is the search range. Equation (1) represents the full search method, which examines all the positions in the search area. However, fast block-matching algorithms, which examine a limited set of search points, can be considered to reduce the computational burden of lightweight object tracking. These fast algorithms are discussed in the Experimental Results section. ( b x , b y ) is the coordinate of the bounding box at the upper left corner. B W and B H are the width and height of the bounding box, respectively. If ( b x , b y , B W , B H ) is a bounding box for B F ( k ) , the bounding box for the (k + 1)-th frame, B L ( k + 1 ) is ( b x + d x , b y + d y , B W , B H ) . The lightweight object tracking method predicts only the displacement. The value of the sum of absolute difference (SAD) at ( d x , d y ) , SAD MIN , is used in confidence level evaluation, as depicted in Figure 1.
RGB color space is utilized to achieve more accurate prediction results. Then, the pixel value difference in Equation (2) is defined as follows:
$\left| I_k - I_{k+1} \right| = \left| I_{R,k} - I_{R,k+1} \right| + \left| I_{G,k} - I_{G,k+1} \right| + \left| I_{B,k} - I_{B,k+1} \right|$  (3)
Here, I R , I G , and I B represent the red, green, and blue pixel components, respectively. For convenience, the pixel coordinates ( i , j ) are omitted.
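As a concrete illustration, a minimal full-search implementation of Equations (1)-(3) might look like the Python/NumPy sketch below. The function and variable names are illustrative assumptions; the paper's actual implementation is written in C (see Section 4.1).

```python
import numpy as np

def sad_rgb(block_a, block_b):
    # Sum of absolute differences over all pixels and the R, G, B channels (Eqs. (2) and (3)).
    return int(np.abs(block_a.astype(np.int32) - block_b.astype(np.int32)).sum())

def block_match(frame_k, frame_k1, bbox, search_range=16):
    # frame_k, frame_k1: H x W x 3 uint8 arrays; bbox = (bx, by, BW, BH) of B_F(k).
    bx, by, bw, bh = bbox
    template = frame_k[by:by + bh, bx:bx + bw]
    height, width = frame_k1.shape[:2]
    best_disp, sad_min = (0, 0), None
    for dy in range(-search_range, search_range + 1):
        for dx in range(-search_range, search_range + 1):
            x, y = bx + dx, by + dy
            if x < 0 or y < 0 or x + bw > width or y + bh > height:
                continue  # candidate box falls outside the (k+1)-th frame
            cost = sad_rgb(template, frame_k1[y:y + bh, x:x + bw])
            if sad_min is None or cost < sad_min:
                sad_min, best_disp = cost, (dx, dy)
    dx, dy = best_disp
    # Only the displacement (dx, dy) is predicted; the box size is kept unchanged.
    return (bx + dx, by + dy, bw, bh), sad_min
```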
As described in Section 2, object tracking based on the block-matching algorithm offers limited performance. Therefore, its accuracy was evaluated through an experiment. Frames with even indices ( 2 i ) were tracked using SiamRPN++ [34], whereas frames with odd indices ( 2 i + 1 ) were tracked using the lightweight tracking method. The tracking process is as follows: The ground-truth bounding box for the target in the first frame is given as B G T ( 1 ) . Applying SiamRPN++ to the second frame (or 2i-th frame) results in the bounding box for the second frame, B F ( 2 ) . The lightweight tracking finds the best matching bounding box within the third frame (or (2i + 1)-th frame), B L ( 3 ) . The size of the bounding box in the lightweight tracking method remains the same as that used for the second frame.
In this experiment, the VOT2018 [39] dataset, comprising 60 videos with a total of 21,356 frames, was used for short-term single-object tracking. The search range for the block-matching algorithm in the lightweight tracking method was set to ± 16 . Table 1 presents the performance of the lightweight tracking method integrated with SiamRPN++ (LTS). Owing to its limited capability in handling the various challenging scenarios mentioned earlier, the LTS performs significantly worse than SiamRPN++ in terms of accuracy, robustness, lost frame number, and expected average overlap (EAO). Nevertheless, the target was lost in only 95 of the 21,356 frames. These results indicate the effectiveness of lightweight tracking based on the block-matching algorithm in various cases. The subsequent subsection proposes a method to evaluate the accuracy of the tracking results of lightweight tracking. Hence, frames that fail to track the target or degrade the tracking performance can be selectively retracked using SiamRPN++. This approach can improve processing speed while preserving tracking performance.

3.2. Evaluation of Lightweight Object Tracking Result

The movement of a target in a video can include various challenging scenarios, as well as scenarios in which the target is either static or undergoes subtle changes in position or appearance. The lightweight object tracking is responsible for accurately and rapidly tracking targets in simpler scenarios with minimal changes between successive frames. In challenging scenarios, lightweight object tracking may encounter difficulties and fail to track the target, thus necessitating the evaluation of tracking results to ensure reliability. To evaluate the accuracy of the tracked bounding box ( B L ) in the current frame, the pixel-wise similarity between B L in the current frame and ( B F ) in the previous frame must be measured. Although numerous studies have attempted to estimate the similarity of two blocks or objects, incorporating several of these methods introduces additional complexity, which is undesirable.
Instead, a straightforward approach for examining the pixel-wise similarity involves comparing the pixel differences between the two bounding boxes, which are already calculated during lightweight object tracking. This difference, denoted as SAD MIN , is the output of the lightweight object tracking (as shown in Figure 1) and corresponds to the value of the sum of absolute differences (SAD) from Equation (2) at B L , i.e., ( d x , d y ) . A small value of SAD MIN indicates that the bounding boxes are well matched, the tracking results from the lightweight object tracking are reliable, and B L can be considered the final bounding box for the current frame.
However, relying solely on the SAD value to determine accuracy may result in poor performance. For example, if the target includes complex textures, the SAD value may be large even if the matching is accurate, whereas a homogeneous target with fewer textures may have a small SAD value, even in an unmatched case. Therefore, texture must be considered when determining accuracy. The amount of texture is concurrently considered by analyzing the gradients of the bounding boxes. This study proposes a method to evaluate the accuracy of B L using a confidence level ( C L ) as follows:
$C_L = \min\!\left( \frac{G_X}{\mathrm{SAD}_{\mathrm{MIN}}}, \frac{G_Y}{\mathrm{SAD}_{\mathrm{MIN}}} \right)$  (4)
Here, G X and G Y are gradient values calculated using pixels in B F as follows:
$G_X = \sum_{(i, j) \in B_F} \left| I(i + 1, j) - I(i, j) \right|$  (5)
$G_Y = \sum_{(i, j) \in B_F} \left| I(i, j + 1) - I(i, j) \right|$  (6)
As the lightweight object tracking calculates SAD in RGB color space, as shown in Equations (1) and (3), the two aforementioned gradient values, G X and G Y , are also calculated in RGB color space.
Objects with complex textures have a larger gradient value, whereas objects with simple textures have a smaller gradient value. Hence, for the same SAD value, the confidence level increases as the gradient values increase. Herein, this reliability was assessed separately along the x and y directions, and the confidence level was determined as the minimum of these two values. Finally, as shown in Figure 1, if the confidence level exceeds the threshold value, the bounding box B L is considered final. Otherwise, the tracking result of the lightweight object tracking is considered unsuccessful, and a more accurate and complex method is employed for target tracking. The performance in terms of accuracy and speed gain, depending on the threshold value, is provided in the Experimental Results section.
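A minimal sketch of this confidence computation, following Equations (4)-(6), is shown below. It assumes the same NumPy conventions as the block-matching sketch in Section 3.1; the small epsilon guarding against division by zero is an implementation detail added here, not part of the paper's formulation.

```python
import numpy as np

def confidence_level(prev_target, sad_min, eps=1e-6):
    # prev_target: pixels of B_F in the previous frame, shape (BH, BW, 3).
    p = prev_target.astype(np.int32)
    # Horizontal and vertical gradients, summed over all pixels and RGB channels (Eqs. (5)-(6)).
    g_x = int(np.abs(p[:, 1:, :] - p[:, :-1, :]).sum())
    g_y = int(np.abs(p[1:, :, :] - p[:-1, :, :]).sum())
    # Textured targets (large gradients) tolerate a larger matching cost (Eq. (4)).
    return min(g_x, g_y) / (sad_min + eps)

# Usage: fall back to the robust tracker whenever confidence_level(...) < T.
```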
Let B F ( k ) and B L ( k + 1 ) be the final bounding box for the k-th frame and the bounding box predicted from lightweight object tracking, respectively. If B L ( k + 1 ) is determined as the final bounding box according to the confidence level, then B F ( k + 1 ) is set to B L ( k + 1 ) . The block-matching algorithm used in the lightweight object tracking method finds the best matching bounding box of B F ( k + 1 ) in the (k + 2)-th frame as depicted in Figure 2b.
During the evaluation of lightweight object tracking results, two gradients, G X and G Y , must be calculated. However, the computational cost for calculating these gradients is approximately equal to only two-point checks in the block-matching algorithm used in lightweight object tracking. Considering that the number of points to be examined in lightweight object tracking is significantly larger than just two points, the computational burden of calculating the two gradient values can be considered negligible compared to the overall computational complexity of lightweight object tracking.

3.3. Integration

This section explains the integration of the proposed method with SiamRPN++ [34], one of the state-of-the-art object tracking algorithms.
Figure 3a,b illustrate the difference before and after applying the proposed method to SiamRPN++. The figures depict only those components that are involved in integration with the proposed method. In Figure 3a, SiamRPN++ crops a search region from the k-th frame, I k , according to C ( k 1 ) (the center position of the bounding box B F ( k 1 ) ). As the SiamRPN++ core tracks the target within the cropped search region, the new center position, denoted as C ( k ) , is updated in buffer 1 . The proposed method can be integrated with SiamRPN++, as shown in Figure 3b. The lightweight object tracking tracks the target using I k 1 , I k , and B F ( k 1 ) and outputs the confidence level, C L , and the predicted bounding box B L ( k ) . Based on the confidence level, the evaluator determines the success of tracking. When the tracking is successful, B L ( k ) is considered the final bounding box, B F ( k ) , and the method updates buffer 2 with the final bounding box. Furthermore, the evaluator should update buffer 1 with C ( k ) , which represents the center position of B F ( k ) . As SiamRPN++ crops the search region according to the center of the tracked bounding box, buffer 1 maintains the center position of the bounding box for the most recently tracked target. If the lightweight object tracking method proves to be ineffective, SiamRPN++ is activated to accurately track the target. In this case, the bounding box from SiamRPN++ is updated to buffer 2 as the lightweight object tracking requires the bounding box for tracking the next frame.
The parameter S N specifies the maximum number of consecutive frames for which lightweight object tracking is employed without performing SiamRPN++. If the tracking results from the lightweight object tracking are used as the final bounding boxes for S N consecutive frames, then the next frame is forcibly tracked using SiamRPN++.
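The following sketch shows one way this integration could be organized around a tracker object. The robust tracker and its track(frame, center) interface are hypothetical stand-ins for a SiamRPN++ wrapper that crops its search region around the supplied center; block_match and confidence_level refer to the sketches in Sections 3.1 and 3.2, and all names here are assumptions rather than the paper's code.

```python
def bbox_center(bbox):
    bx, by, bw, bh = bbox
    return (bx + bw / 2.0, by + bh / 2.0)

def crop(frame, bbox):
    bx, by, bw, bh = bbox
    return frame[by:by + bh, bx:bx + bw]

class ConfidenceGuidedTracker:
    def __init__(self, robust, T=0.5, S_N=1):
        self.robust = robust      # hypothetical SiamRPN++ wrapper
        self.T, self.S_N = T, S_N
        self.bbox = None          # buffer 2: last final bounding box B_F(k)
        self.center = None        # buffer 1: center C(k) used to crop the search region
        self.skipped = 0
        self.prev_frame = None

    def init(self, frame, bbox):
        self.bbox, self.center = bbox, bbox_center(bbox)
        self.prev_frame, self.skipped = frame, 0

    def track(self, frame):
        bbox_l, sad_min = block_match(self.prev_frame, frame, self.bbox)
        c_l = confidence_level(crop(self.prev_frame, self.bbox), sad_min)
        if c_l >= self.T and self.skipped < self.S_N:
            self.bbox = bbox_l                    # lightweight result accepted
            self.skipped += 1
        else:
            self.bbox = self.robust.track(frame, self.center)
            self.skipped = 0
        self.center = bbox_center(self.bbox)      # keep buffer 1 consistent
        self.prev_frame = frame
        return self.bbox
```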

4. Experimental Results

The proposed method was developed to enhance the speed of object tracking by combining it with existing algorithms, which is a novel approach in this field. Comparing the proposed method directly with existing techniques is difficult because it cannot be used independently and must be combined with an existing algorithm. Hence, this study evaluated the performance of the proposed algorithm by comparing the tracking speed and accuracy before and after applying it to existing methods. The processing speed was evaluated using a personal computer with an AMD Ryzen Threadripper 1900X (3.8 GHz), an NVIDIA GeForce RTX 2080 Ti (11 GB VRAM), and 96 GB of RAM. The lightweight object tracking method was executed on the CPU, while SiamRPN++ and MixFormer [40] were executed on the GPU.

4.1. Implementation of Lightweight Object Tracking

The purpose of lightweight object tracking is to track the target in scenarios with minimal changes between successive frames. Hence, for lightweight object tracking, a small search area is sufficient to track the target. In the experiments, the search range for object tracking was set to ± 16 to examine only a small search area. The total number of search points within the search area was (16 × 2 + 1)² = 1089.
Single-object tracking involves tracking only one object in each frame. Hence, the total number of search points per frame is only (16 × 2 + 1)² = 1089. The computational burden of the full-search block-matching algorithm expressed in Equation (1), in which all search points are exhaustively examined, is negligible on modern computing devices. The computational burden of lightweight object tracking can be reduced further by employing fast block-matching algorithms. To further reduce computing costs, Intel Advanced Vector Extensions 2 (Intel AVX2) can be utilized to implement the SAD function in Equation (2).
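For larger search ranges, a reduced-point search can replace the exhaustive scan. The sketch below shows a generic logarithmic (three-step-style) search as one example of such a fast block-matching algorithm; it reuses the sad_rgb cost from the earlier sketch and is not necessarily the fast search or the AVX2 kernel used in the paper's implementation.

```python
def fast_block_match(frame_k, frame_k1, bbox, search_range=16):
    # Logarithmic search: examine the centre and its 8 neighbours at a step size
    # that is halved each stage, instead of all (2 * search_range + 1)^2 points.
    bx, by, bw, bh = bbox
    template = frame_k[by:by + bh, bx:bx + bw]
    height, width = frame_k1.shape[:2]

    def cost(dx, dy):
        x, y = bx + dx, by + dy
        if x < 0 or y < 0 or x + bw > width or y + bh > height:
            return float("inf")
        return sad_rgb(template, frame_k1[y:y + bh, x:x + bw])

    cx, cy = 0, 0
    best_cost = cost(0, 0)
    step = max(1, search_range // 2)      # e.g., 8 for a +/-16 search range
    while step >= 1:
        best_pt = (cx, cy)
        for dy in (-step, 0, step):
            for dx in (-step, 0, step):
                c = cost(cx + dx, cy + dy)
                if c < best_cost:
                    best_cost, best_pt = c, (cx + dx, cy + dy)
        cx, cy = best_pt                  # move the search centre to the best point
        step //= 2                        # halve the step size each stage
    return (bx + cx, by + cy, bw, bh), best_cost
```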

4.2. Proposed Method with SiamRPN++

To evaluate the proposed approach, SiamRPN++ was chosen as the foundational algorithm. The integration of the proposed algorithm with SiamRPN++ is outlined in detail in Section 3.3. The VOT2018 [39] dataset containing 60 videos with 21,296 frames was used in the experiments for short-term single-object tracking. To merge the proposed method with SiamRPN++, the foundational code for SiamRPN++ was sourced from [41]. SiamRPN++ and the proposed method were executed on the GPU and CPU, respectively.
Table 2 presents a comparison of the performance metrics among different methods: M BASE (SiamRPN++ using the ResNet backbone), frame skipping ( M SKIP ), and the proposed method without confidence level evaluation ( M P 1 ) and with confidence level evaluation ( M P 2 ). S N denotes the maximum number of frames skipped consecutively without performing SiamRPN++. Abbreviations A, R, L, EAO, S, and FPS correspond to accuracy, robustness, lost count, expected average overlap, skip, and frames per second, respectively. For M SKIP , the initial frame was tracked using SiamRPN++. The subsequent S N frames were skipped without tracking, and their bounding boxes were replicated from the bounding box of the initial frame. For instance, with S N = 3, only frames indexed as 1, 5, 9, 13, etc., underwent tracking using SiamRPN++, whereas frames (2, 3, 4), (6, 7, 8), etc., were skipped. The table indicates that increasing S N significantly enhances the processing speed. While SiamRPN++ ( M BASE ) operated at 58.3 FPS, M SKIP with S N = 10 achieved 428.2 FPS. However, as S N increased, the accuracy and EAO decreased and the robustness deteriorated, indicating that the bounding box of the previous frame is often not applicable to future frames. By contrast, M P 1 tracks these frames using the lightweight object tracking method, which predicts the position of the bounding box. This prediction improves the accuracy, robustness, and EAO over M SKIP . Although this method outperforms M SKIP , inaccurately predicted bounding boxes can still degrade performance compared with SiamRPN++.
In M P 2 , SiamRPN++ is applied to frames with a confidence level below a given threshold. Hence, the value of S (%) for M P 2 was lower than that for M P 1 , as presented in Table 2. For instance, with S N = 1 and T = 0.5, the S (%) values for M P 1 and M P 2 were 49.8 and 34.1, respectively. This indicates that approximately 15.7% of the total frames had a confidence level below the threshold and employed SiamRPN++ to accurately track the target. This process significantly improved the performance of M P 2 in terms of accuracy, robustness, and EAO compared with M P 1 , demonstrating that the proposed confidence level is an effective indicator of the accuracy of the predicted bounding boxes. Comparing M P 2 (T = 0.5 and S N = 1) with SiamRPN++ ( M BASE ), the proposed M P 2 accelerated the processing speed by approximately 1.5 times, whereas their accuracies were almost identical. However, the robustness and EAO of M P 2 (T = 0.5 and S N = 1) were slightly worse than those of SiamRPN++. This slight degradation may stem from the seven additional lost frames in M P 2 and the absence of bounding box size updates in the lightweight object tracking method. In M P 1 , as S N increased, the accuracy, robustness, lost count, and EAO degraded significantly. However, in M P 2 , even as S N increased, the decline in performance metrics was relatively minimal. For example, in M P 1 , when S N was 1, the lost count was 87; when S N increased to 10, the lost count increased by approximately 2.6 times to 225. This indicates that indiscriminate frame skipping negatively impacted tracking performance. By contrast, for M P 2 , when S N was 1, the lost count was 59, and when S N increased to 10, it only increased to 84, approximately 1.42 times higher. This demonstrates that the confidence evaluation effectively identifies unreliable frames and triggers robust tracking to prevent tracking failures.
Table 3 lists the performance variation with respect to different threshold values. As the threshold value decreased, the S (%) value increased. An important observation is that M P 2 with S N = 1 and T = 0.67 matched or slightly outperformed SiamRPN++ ( M BASE ) in terms of accuracy, robustness, lost count, and EAO. Although SiamRPN++ generally yielded better tracking results than the lightweight object tracking method, there were instances in which the latter performed better. A threshold value of 0.67 with S N = 1 corresponded to a scenario where M P 2 matched or slightly surpassed the performance of SiamRPN++ in terms of accuracy, robustness, lost count, and EAO. (Note that the processing speed of M P 2 is significantly higher than that of SiamRPN++.) Despite extensive experiments, determining the optimal threshold value remains a challenge. Higher threshold values do not guarantee better results and can yield somewhat unpredictable performance owing to the factors mentioned earlier. For instance, the threshold value of 0.33 demonstrated superior performance not only in terms of speed but also in terms of robustness, lost count, and EAO compared with the threshold values of 0.4 and 0.5 (Table 3). However, as the threshold value decreased further, the accuracy, robustness, lost count, and EAO deteriorated.
Table 4 presents a comparison of M P 2 with M BASE in terms of FPS for some sequences. For M BASE (SiamRPN++), the difference in FPS across the sequences was not pronounced. The highest speed, 63.7 FPS, was achieved in the ants3 sequence, whereas the lowest speed, 51.9 FPS, was observed in the ants1 sequence. By contrast, for M P 2 , the variance in FPS across the sequences was significant. The highest speed, 117 FPS, was reached in the handball1 sequence, whereas the lowest speed, 58.9 FPS, was recorded in the book sequence. Here, S N and T were set to 1 and 0.5, respectively. The speed of the proposed method depends on the degree of change between successive frames within a sequence. When the target’s motion is relatively small, the lightweight tracking method accurately tracks the target and achieves significant speed gains. However, when the target’s motion is substantial, the proposed method incurs processing overhead without corresponding improvements in speed. For example, in the book sequence, the target features highly complex motion. As lightweight object tracking relies on a block-matching algorithm that solely considers translational motion, accurately tracking a rotating book becomes challenging. Owing to the inaccurate tracking results, the confidence levels dropped below the threshold, and the tracking speed did not improve. By contrast, in the fish1 sequence, the target (a fish) exhibited minimal motion. The confidence levels surpassed the threshold, and the proposed method skipped executing SiamRPN++ for most frames, thus achieving maximum acceleration.
Li et al. [34] proposed a fast variant of SiamRPN++ using a MobileNet [42] backbone, denoted as M Mobile in Table 5. This SiamRPN++ variant ( M Mobile ) enhances processing speed compared with M BASE (SiamRPN++). In M P 3 , the proposed method is combined with M Mobile to further increase the tracking speed. Table 5 lists the tracking speed enhancement of M P 3 using the proposed method at various threshold values. As presented in the table, M Mobile improved the tracking speed over M BASE while providing comparable performance. M P 3 further improved the tracking speed of M Mobile while minimizing performance degradation. Notably, when S N = 1 and T = 0.33, M P 3 achieved accuracy comparable to M Mobile while improving the tracking speed by approximately 1.64 times.
Given the compact size of the target objects in VOT2018, the 3% overhead incurred by lightweight object tracking, which employs a full-search block-matching algorithm implemented using C code, was practically negligible. However, when dealing with larger target sizes, the search range may need to be expanded. In such cases, utilizing fast search techniques is essential.

4.3. Proposed Method Combined with Other Methods Including MixFormer

As depicted in Figure 1, the lightweight object tracking method was used to find the best candidate for a target from the previous frame within the current frame. Subsequently, the confidence level, C L , was calculated and used to determine whether to invoke a robust but complex existing method. Therefore, for a given target, the existing method did not influence the skipping decision. The only impact of the existing method was that if it failed to detect the target object, an incorrect target object was fed into the lightweight object tracking model. Consequently, the performance of the proposed method is not dependent on the specific existing method combined with it. Instead, the major factor affecting the performance of the proposed method is the characteristics of the input video, such as texture, motion amount, object deformation, and other factors.
To validate this, two experiments were designed as follows: In the first experiment, we assumed the existence of an ultimate tracking algorithm, U BASE , that always provides the ground truth. The proposed method combined with U BASE is denoted as U P . Given the ground truth for the dataset, U BASE consistently provides the correct tracking results for the corresponding frames. Since recent tracking methods generally outperform M BASE (SiamRPN++), their performance will likely fall between that of M BASE and U BASE . In the second experiment, the proposed method was combined with a real tracking algorithm, MixFormer [22,23]; the source code for MixFormer is available in [40]. MixFormer is denoted as MF BASE , and the proposed method combined with MixFormer is denoted as M F P . Since the optimal parameters for MixFormer on the VOT2018 dataset were not specified, default values were used in this experiment.
Table 6 shows the comparison results of M P 2 , U P , and M F P in terms of skipping rate (%) for S N = 1, 2, and 5. The threshold value was set to 0.5. The table shows the average skipping rates across the entire VOT2018 dataset and presents the skipping rates for selected video sequences from the dataset. As shown in the table, the average skipping rates for S N = 1, 2, and 5 were approximately 0.34, 0.45, and 0.55, respectively, regardless of the algorithms used. Despite the significantly superior tracking performance of U B A S E compared to M B A S E and M F B A S E , the average skipping rates across the three methods were similar. This observation underscores that the choice of algorithm has minimal impact on the skipping decision. Instead, the skipping rate is highly influenced by the characteristics of the video sequence. For example, the skipping rates for the fish1 sequence at S N = 2 were 0.63, 0.65, and 0.65 for M P 2 , U P , and M F P , respectively. On the other hand, the skipping rates for the book sequence for S N = 2 were 0.05, 0.06, and 0.03 for the same methods, respectively.
Table 7 presents the accuracy, robustness, and EAO for M P 2 , U P , and M F P . Since U BASE always provided the correct tracking results, there were no lost frames, resulting in a robustness value of 0 for U P . Although the proposed algorithm accelerated the tracking speed by frame skipping, the performance degradation remained minimal.
The ground truth represents the theoretical upper bound of tracking performance, as no tracking algorithm can outperform it. By combining the proposed method with ground truth, the experiments in Table 6 and Table 7 demonstrate the maximum potential performance when integrated with an ideal tracker. These experimental results show that the proposed method provides consistent performance regardless of the combined existing methods, both in terms of accuracy and acceleration.
To further validate the proposed method, experiments were conducted on two additional widely used datasets, namely OTB100 [5] and UAV123 [43], using MixFormer as the baseline tracker, as presented in Table 8. Compared with MF BASE , MixFormer combined with the proposed method ( M F P ) significantly accelerated the processing speed with only minimal degradation in AUC score and precision on the OTB100 dataset. Interestingly, for the UAV123 dataset, the proposed method ( M F P ) not only accelerated the processing speed but also slightly improved the tracking performance (AUC and precision). This improvement may be attributed to the characteristics of UAV123, where objects exhibit relatively predictable motion and minimal background changes, allowing the lightweight tracker to perform effectively even with frame skipping. These results confirm that the proposed method is not limited to specific datasets but is effective across various tracking scenarios, highlighting its generalizability and robustness.
Figure 4 illustrates examples of successful and unsuccessful tracking. In the fish sequence, MixFormer ( MF BASE ) slightly missed the target across the frames, while MixFormer combined with the proposed method ( M F P ) accurately tracked the target throughout the sequence. This highlights how the lightweight tracker can enhance tracking accuracy in some scenarios. Conversely, in the ants1 sequence, where the target underwent slight rotation, M F P failed to track the target, while MF BASE successfully maintained the target’s position, though with a less accurate bounding box. In the ball2 sequence, MF BASE successfully tracked the target, whereas M F P failed. These examples illustrate specific cases where the performance of M F P and MF BASE diverge. It is worth noting, however, that in the majority of the cases, the results of M F P and MF BASE were nearly identical.

4.4. Discussion

The proposed method introduces variability in computation times due to the confidence evaluation mechanism, which dynamically determines whether lightweight tracking or a robust algorithm is applied for each frame. While this adaptive approach improves tracking performance, the variability in processing times can pose challenges in real-time control systems where consistent cycle times are critical. To address this limitation, the following potential solutions can be explored: (a) parameter optimization and (b) buffering or pipelining. In parameter optimization, adjusting the confidence threshold and fine-tuning the frequency of robust algorithm invocation could reduce processing time variability while maintaining performance. Implementing a buffering or pipelining strategy could smooth out fluctuations in computation time, ensuring more stable processing cycles in real-time systems. On the other hand, this variability offers a practical advantage in mobile and battery-powered applications. By primarily relying on lightweight tracking and invoking the robust algorithm only when necessary, the proposed method significantly reduces computational overhead and power consumption. This trade-off makes the method particularly suitable for devices such as drones, smartphones, and other mobile platforms where energy efficiency is critical. Further research could focus on optimizing the proposed method to reduce computation time variability while preserving its power-saving benefits, enabling broader applicability in real-time and resource-constrained environments.
The lightweight object tracking method employs the block-matching algorithm, and its tracking accuracy may degrade in challenging scenarios, such as severe occlusion, rapid changes in object appearance, object disappearance, or highly cluttered backgrounds. In these situations, the confidence level typically drops below the predefined threshold, invoking the complex but robust tracker to ensure reliable tracking. However, this process increases the overall processing time, particularly in sequences where the robust tracker is frequently activated. For example, although the proposed method improved the processing speed in most test sequences, it slightly reduced the processing speed for the book sequence, as shown in Table 4.

5. Conclusions

This study presented a method to accelerate object tracking speed by proposing a novel approach that combines a lightweight tracking algorithm with existing robust yet complex algorithms. Our approach intelligently applies the robust algorithm only when necessary. Thus, the proposed method significantly improves the tracking speed with a minor degradation in tracking accuracy. The proposed confidence level evaluation plays a crucial role in determining the tracking strategy for each frame, essentially ensuring that the robust algorithm intervenes only when the lightweight tracking method is unsuccessful. This innovation strikes a balance between computational efficiency and tracking quality. Our experiments validated the effectiveness of our approach by showing remarkable improvements in tracking speed while preserving accuracy. Moreover, the proposed approach’s flexibility in integration with existing methods demonstrates its potential for practical implementation. This study further presented integration examples highlighting the adaptability of this approach across various tracking scenarios. The proposed methodology not only contributes to the advancement of real-time tracking capabilities but also paves the way for more efficient utilization of computing resources in complex tracking environments. Further research could focus on optimizing the confidence level evaluation parameters and exploring additional ways to enhance the synergy between lightweight and robust tracking methods.

Funding

The present research was conducted with the support of a Research Grant from Kwangwoon University in 2024.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

All data derived from this study are presented in the article.

Conflicts of Interest

The author declares no conflicts of interest.

References

  1. Lee, K.-H.; Hwang, J.-N. On-road pedestrian tracking across multiple driving recorders. IEEE Trans. Multimed. 2015, 17, 1429–1438. [Google Scholar] [CrossRef]
  2. Lee, K.-H.; Hwang, J.-N.; Okopal, G.; Pitton, J. Ground-movingplatform-based human tracking using visual slam and constrained multiple kernels. IEEE Trans. Intell. Transp. Syst. 2016, 17, 3602–3612. [Google Scholar] [CrossRef]
  3. Lu, X.; Ma, C.; Ni, B.; Yang, X. Adaptive region proposal with channel regularization for robust object tracking. IEEE Trans. Circuits Syst. Video Technol. 2021, 31, 1268–1282. [Google Scholar] [CrossRef]
  4. Liu, L.; Xing, J.; Ai, H.; Ruan, X. Hand posture recognition using finger geometric feature. In Proceedings of the 21st International Conference on Pattern Recognition, Tsukuba, Japan, 11–15 November 2012; pp. 565–568. [Google Scholar]
  5. Wu, Y.; Lim, J.; Yang, M.-H. Object tracking benchmark. IEEE Trans. Pattern Anal. Mach. Intell. 2015, 37, 1834–1848. [Google Scholar] [CrossRef]
  6. Dalal, N.; Triggs, B. Histograms of oriented gradients for human detection. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), San Diego, CA, USA, 20–25 June 2005; pp. 886–893. [Google Scholar]
  7. Marvasti-Zadeh, S.M.; Cheng, L.; Ghanei-Yakhdan, H.; Kasaei, S. Deep learning for visual tracking: A comprehensive survey. IEEE Trans. Intell. Transp. Syst. 2022, 23, 3943–3968. [Google Scholar] [CrossRef]
  8. Han, Z.; Wang, P.; Ye, Q. Adaptive discriminative deep correlation filter for visual object tracking. IEEE Trans. Circuits Syst. Video Technol. 2020, 30, 155–166. [Google Scholar] [CrossRef]
  9. Chen, K.; Tao, W. Once for all: A two-flow convolutional neural network for visual tracking. IEEE Trans. Circuits Syst. Video Technol. 2018, 28, 3377–3386. [Google Scholar] [CrossRef]
  10. Li, S.; Zhao, S.; Cheng, B.; Zhao, E.; Chen, J. Robust visual tracking via hierarchical particle filter and ensemble deep features. IEEE Trans. Circuits Syst. Video Technol. 2020, 30, 179–191. [Google Scholar] [CrossRef]
  11. Nam, H.; Han, B. Learning multi-domain convolutional neural networks for visual tracking. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 4293–4302. [Google Scholar]
  12. Zhu, Z.; Huang, G.; Zou, W.; Du, D.; Huang, C. Uct: Learning unified convolutional networks for real-time visual tracking. In Proceedings of the IEEE International Conference on Computer Vision Workshops, Venice, Italy, 22–29 October 2017; pp. 1973–1982. [Google Scholar]
  13. Han, B.; Sim, J.; Adam, H. Branchout: Regularization for online ensemble tracking with convolutional neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 521–530. [Google Scholar]
  14. Wang, M.; Liu, Y.; Huang, Z. Large margin object tracking with circulant feature maps. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 4800–4808. [Google Scholar]
  15. Pu, S.; Song, Y.; Ma, C.; Zhang, H.; Yang, M.-H. Deep attentive tracking via reciprocative learning. Adv. Neural Inf. Process. Syst. 2018, 31, 1931–1941. [Google Scholar]
  16. Guo, Q.; Feng, W.; Zhou, C.; Huang, R.; Wan, L.; Wang, S. Learning dynamic siamese network for visual object tracking. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017.
  17. Li, B.; Yan, J.; Wu, W.; Zhu, Z.; Hu, X. High performance visual tracking with siamese region proposal network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018. [Google Scholar]
  18. Zhu, Z.; Wang, Q.; Li, B.; Wu, W.; Yan, J.; Hu, W. Distractor-aware siamese networks for visual object tracking. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018. [Google Scholar]
  19. Shan, Y.; Zhou, X.; Liu, S.; Zhang, Y.; Huang, K. Siamfpn: A deep learning method for accurate and real-time maritime ship tracking. IEEE Trans. Circuits Syst. Video Technol. 2020, 31, 315–325. [Google Scholar] [CrossRef]
  20. Yang, T.; Chan, A.B. Recurrent filter learning for visual tracking. In Proceedings of the IEEE International Conference on Computer Vision Workshops, Venice, Italy, 22–29 October 2017; pp. 2010–2019. [Google Scholar]
  21. Zhao, F.; Wang, J.; Wu, Y.; Tang, M. Adversarial deep tracking. IEEE Trans. Circuits Syst. Video Technol. 2019, 29, 1998–2011. [Google Scholar] [CrossRef]
  22. Cui, Y.; Jiang, C.; Wu, G. Mixformer: End-to-end tracking with iterative mixed attention. IEEE Trans. Pattern Anal. Mach. Intell. 2024, 46, 4120–4146. [Google Scholar] [CrossRef] [PubMed]
  23. Cui, Y.; Jiang, C.; Wang, L.; Wu, G. Mixformer: End-to-end tracking with iterative mixed attention. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 13608–13618. [Google Scholar]
  24. Li, H.; Wang, X.; Shen, F.; Li, Y.; Porikli, F.; Wang, M. Real-time deep tracking via corrective domain adaptation. IEEE Trans. Circuits Syst. Video Technol. 2019, 29, 2600–2612. [Google Scholar] [CrossRef]
  25. Hong, S.; You, T.; Kwak, S.; Han, B. Online tracking by learning discriminative saliency map with convolutional neural network. In Proceedings of the International Conference on Machine Learning. PMLR, Lille, France, 6–11 July 2015; pp. 597–606. [Google Scholar]
  26. Wang, Q.; Teng, Z.; Xing, J.; Gao, J.; Hu, W.; Maybank, S. Learning attentions: Residual attentional siamese network for high performance online visual tracking. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 4854–4863. [Google Scholar]
  27. Zhang, P.; Zhuo, T.; Huang, W.; Chen, K.; Kankanhalli, M. Online object tracking based on cnn with spatial-temporal saliency guided sampling. Neurocomputing 2017, 257, 115–127. [Google Scholar] [CrossRef]
  28. Cheng, K.W.; Chan, S.C. Fast block matching algorithms for motion estimation, In Proceedings of the 1996 IEEE International Conference on Acoustics, Speech, and Signal Processing Conference Proceedings, Atlanta, GA, USA, 9 May 1996.
  29. El-Azim, S.A.; Ismail, I.; El-Latiff, H.A. An efficient object tracking technique using block-matching algorithm. In Proceedings of the Nineteenth National Radio Science Conference, Alexandria, Egypt, 19–21 March 2002. [Google Scholar]
  30. Gyaourova, A.; Kamath, C.; Cheung, S. Block Matching for Object Tracking; Technical Report; Lawrence Livermore National Lab: Livermore, CA, USA, 2003. [Google Scholar]
  31. Hariharakrishnan, K.; Schonfeld, D. Fast object tracking using adaptive block matching. IEEE Trans. Multimed. 2005, 7, 853–859. [Google Scholar] [CrossRef]
  32. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. Imagenet classification with deep convolutional neural networks. Adv. Neural Inf. Process. Syst. 2012, 25, 1097–1105. [Google Scholar] [CrossRef]
  33. Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv 2014, arXiv:1409.1556. [Google Scholar]
  34. Li, B.; Wu, W.; Wang, Q.; Zhang, F.; Xing, J.; Yan, J. Siamrpn++: Evolution of siamese visual tracking with very deep networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 4282–4291. [Google Scholar]
  35. Fu, C.; Lu, K.; Zheng, G.; Ye, J.; Cao, Z.; Li, B.; Lu, G. Siamese object tracking for unmanned aerial vehicle: A review and comprehensive analysis. Artif. Intell. Rev. 2023, 56, 1417–1477. [Google Scholar] [CrossRef]
  36. Ondrasovic, M.; Tarabek, P. Siamese visual object tracking: A survey. IEEE Access 2021, 9, 110149–110172. [Google Scholar] [CrossRef]
  37. Ren, S.; He, K.; Girshick, R.; Sun, J. Faster r-cnn: Towards real-time object detection with region proposal networks. arXiv 2015, arXiv:1506.01497. [Google Scholar] [CrossRef]
  38. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar]
  39. Kristan, M.; Leonardis, A.; Matas, J.; Felsberg, M.; Pflugfelder, R.; Zajc, L.C.; Vojir, T.; Bhat, G.; Lukezic, A.; Eldesokey, A.; et al. The sixth visual object tracking vot2018 challenge results. In Proceedings of the European Conference on Computer Vision (ECCV) Workshops, Munich, Germany, 8–14 September 2018; pp. 3–53. [Google Scholar]
  40. MixFormer. Available online: https://github.com/MCG-NJU/MixFormer (accessed on 13 January 2024).
  41. PySOT. Available online: https://github.com/STVIR/pysot (accessed on 13 January 2024).
  42. Howard, A.G.; Zhu, M.; Chen, B.; Kalenichenko, D.; Wang, W.; Weyand, T.; Andreetto, M.; Adam, H. Mobilenets: Efficient convolutional neural networks for mobile vision applications. arXiv 2017, arXiv:1704.04861. [Google Scholar]
  43. Mueller, M.; Smith, N.; Ghanem, B. A benchmark and simulator for UAV tracking. In Proceedings of the European Conference on Computer Vision, Amsterdam, The Netherlands, 11–14 October 2016. [Google Scholar]
Figure 1. Overview of the proposed method. T is a threshold value. S N denotes the maximum number of consecutive frames processed without invoking complex but robust object tracking, while S indicates the number of consecutive frames processed following the activation of complex but robust object tracking.
Figure 2. Lightweight object tracker using a block-matching algorithm: (a) The method successfully finds the target in the (k + 1)-th frame. (b) If B L ( k + 1 ) is determined as a final bounding box for the (k + 1)-th frame ( B F ( k + 1 ) = B L ( k + 1 ) ), the method finds the target in the (k + 2)-th frame using the target predicted in the (k + 1)-th frame.
Figure 3. (a) Before applying the proposed algorithm to SiamRPN++ and (b) after applying the proposed algorithm to SiamRPN++. C k is the center position of the bounding box B F ( k ) .
Figure 4. Examples of successful and unsuccessful tracking. Yellow and green rectangles represent results from MixFormer and lightweight trackers, respectively. Red rectangles show unsuccessful tracking results.
Table 1. Performance of SiamRPN++ and LTS. EAO and LTS stand for expected average overlap and lightweight tracking integrated with SiamRPN++, respectively.
Method      Accuracy  Robustness  Lost Count  EAO    SKIP (%)
SiamRPN++   0.604     0.243       52          0.413  0
LTS         0.598     0.445       95          0.277  48.7
Table 2. Comparisons of M BASE (SiamRPN++ using ResNet backbone), frame skipping ( M SKIP ), and proposed method without the confidence level evaluation ( M P 1 ) and with the confidence level evaluation ( M P 2 ).
Method           S_N  Accuracy  Robustness  Lost Count  EAO    S (%)  FPS
M_BASE           -    0.604     0.243       52          0.413  0      58.3
M_SKIP           1    0.580     0.524       112         0.228  49.8   118.4
                 2    0.564     1.025       219         0.124  66.3   170.8
                 3    0.552     1.353       289         0.087  74.5   215.2
                 4    0.540     1.573       336         0.082  79.4   253.3
                 5    0.523     1.700       363         0.072  82.6   288.9
                 10   0.485     2.154       460         0.056  90.0   428.2
M_P1             1    0.595     0.407       87          0.298  49.8   114.8
                 2    0.582     0.632       135         0.194  66.4   161.0
                 3    0.585     0.791       169         0.175  74.6   199.7
                 4    0.579     0.791       169         0.173  79.6   232.7
                 5    0.571     0.819       175         0.160  82.9   264.3
                 10   0.549     1.054       225         0.133  90.4   392.0
M_P2 (T = 0.5)   1    0.603     0.276       59          0.389  34.1   88.2
                 2    0.597     0.290       62          0.358  45.1   101.6
                 3    0.595     0.304       65          0.356  50.2   111.2
                 4    0.595     0.328       70          0.353  53.7   118.3
                 5    0.588     0.332       71          0.341  55.6   123.9
                 10   0.578     0.393       84          0.304  59.8   135.8
Table 3. Performance variation with respect to different threshold values. S N for M P 1 was set to 1.
Method           Thres.  Accuracy  Robustness  Lost Count  EAO    S (%)  FPS
M_BASE           -       0.604     0.243       52          0.413  0      58.3
M_P1             -       0.595     0.407       87          0.298  49.8   114.8
M_P2 (S_N = 1)   1.00    0.609     0.272       58          0.376  14.0   68.0
                 0.67    0.605     0.243       52          0.415  26.8   76.6
                 0.50    0.603     0.276       59          0.389  34.1   88.2
                 0.40    0.601     0.276       59          0.380  38.5   91.0
                 0.33    0.598     0.262       56          0.393  41.6   95.8
                 0.25    0.597     0.332       71          0.331  45.2   102.1
                 0.20    0.596     0.314       67          0.355  47.2   105.9
Table 4. Comparison of M P 2 with M BASE in terms of FPS for each sequence. S N and T were 1 and 0.5, respectively.
Seq.           M_BASE  M_P2    Seq.           M_BASE  M_P2
ants1          51.9    59.7    graduate       57.1    86.5
ants3          63.7    76.8    gymnastics1    58      100.8
ball2          61      104.8   hand           59.3    85.7
basketball     63.3    88      handball1      59.7    117
birds1         61.6    60.1    handball2      59.2    101.3
bolt1          60      104.7   iceskater2     56.1    80.3
book           60.7    58.9    matrix         56.5    67.4
butterfly      55.9    62.3    motocross1     58.1    81.2
conduction1    59.7    109.2   nature         55.7    84.2
drone1         60.4    115.5   road           58.1    112.5
drone across   57.4    89.3    shaking        57.2    81.8
drone flip     61.9    98.1    sheep          57.8    113.6
fernando       57.2    69.9    singer2        54.5    82
fish1          59.1    115.3   singer3        55.5    68
fish3          58      114.2   soccer2        60.6    70.4
flamingo1      59.1    104.1   soldier        58.2    76.7
girl           58.8    109.9   traffic        58.4    114.8
glove          60.7    84.8    wiper          58.4    96.8
Table 5. Variation in performance across different threshold values. Here, M Mobile denotes SiamRPN++ utilizing the MobileNet backbone. In M P 3 , the lightweight object tracking method with confidence level evaluation is integrated with M Mobile .
Method           Thres.  Accuracy  Robustness  Lost Count  EAO    S (%)  FPS
M_BASE           -       0.604     0.243       52          0.413  0      58.3
M_Mobile         -       0.587     0.234       50          0.411  0      85.5
M_P3 (S_N = 1)   1.0     0.580     0.304       65          0.349  14.0   95.7
                 0.67    0.585     0.267       57          0.371  26.8   109.6
                 0.50    0.586     0.290       62          0.352  34.2   122.0
                 0.40    0.574     0.253       54          0.370  38.5   129.4
                 0.33    0.587     0.267       57          0.384  41.5   140.4
                 0.25    0.584     0.295       63          0.354  45.2   146.0
                 0.20    0.583     0.309       66          0.339  47.1   151.0
Table 6. Comparisons of M P 2 (the proposed method with SiamRPN++), M F P (the proposed method with MixFormer [22]), and U P (the proposed method with the ground truth) in terms of the skipping rate (%). The threshold was set to 0.5.
Skipping Rate, S (%)
             S_N = 1                S_N = 2                S_N = 5
Seq.         M_P2   MF_P   U_P     M_P2   MF_P   U_P      M_P2   MF_P   U_P
average      0.34   0.34   0.34    0.44   0.45   0.45     0.55   0.55   0.56
bag          0.39   0.40   0.40    0.51   0.52   0.53     0.63   0.65   0.65
basketball   0.35   0.33   0.38    0.48   0.45   0.51     0.59   0.56   0.65
bolt1        0.44   0.37   0.41    0.58   0.49   0.55     0.70   0.59   0.68
book         0.03   0.04   0.02    0.05   0.06   0.03     0.05   0.06   0.03
conduction1  0.42   0.34   0.46    0.53   0.43   0.60     0.71   0.54   0.76
fish1        0.49   0.49   0.49    0.63   0.65   0.65     0.78   0.81   0.82
girl         0.47   0.47   0.47    0.63   0.64   0.64     0.78   0.78   0.79
helicopter   0.49   0.49   0.49    0.64   0.66   0.66     0.78   0.82   0.82
leaves       0.02   0.00   0.02    0.02   0.00   0.03     0.02   0.00   0.08
singer2      0.39   0.39   0.37    0.51   0.52   0.49     0.64   0.67   0.61
tiger        0.12   0.15   0.11    0.13   0.18   0.15     0.16   0.22   0.17
traffic      0.50   0.49   0.50    0.66   0.66   0.66     0.83   0.83   0.83
Table 7. Comparisons of M P 2 (the proposed method with SiamRPN++), M F P (the proposed method with MixFormer), and U P (the proposed method with the ground truth) in terms of accuracy, robustness, and EAO. The threshold was set to 0.5.
Method     S_N  Accuracy  Robustness  EAO
M_BASE     0    0.604     0.243       0.413
M_P2       1    0.603     0.276       0.389
           2    0.597     0.29        0.358
           5    0.588     0.332       0.341
MF_BASE    0    0.545     0.229       0.341
MF_P       1    0.577     0.215       0.374
           2    0.572     0.215       0.381
           5    0.589     0.281       0.340
U_BASE     0    0.788     0           0.795
U_P        1    0.775     0           0.780
           2    0.767     0           0.771
           5    0.736     0           0.737
Table 8. Comparisons of M F B A S E and M F P (the proposed method with MixFormer) on OTB100 and UAV123 datasets. The threshold was set to 0.5. P stands for precision.
                OTB100                        UAV123
Method     S_N  AUC (%)  P (%)   FPS          AUC (%)  P (%)   FPS
MF_BASE    0    71.61    94.21   48.04        67.27    89.73   49.55
MF_P       1    70.93    92.70   80.12        67.89    89.89   92.50
           2    71.20    92.75   107.21       68.77    91.00   131.32
           5    69.31    90.70   174.79       67.92    90.10   207.91