Special Issue "Visual Object Tracking: Challenges and Applications"

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Computer Science & Engineering".

Deadline for manuscript submissions: 30 June 2021.

Special Issue Editors

Prof. Dr. Soon Ki Jung
Guest Editor
School of Computer Science and Engineering, Kyungpook National University, Daegu 41566, Korea
Interests: computer vision; computer graphics; virtual reality; HCI
Dr. Sajid Javed
Guest Editor
Khalifa University of Science and Technology, Abu Dhabi, UAE
Interests: computer vision; image processing; machine learning; deep learning research problems

Special Issue Information

Dear Colleagues,

Visual tracking is an essential component of perception and has been an active research topic in the computer vision community for decades. Visual tracking algorithms have developed rapidly thanks to the massive amount of available video data, which in turn creates high demand for speed and accuracy in tracking algorithms. Researchers are motivated to design faster and better methods despite the challenges inherent in visual tracking, especially robustness to heavy occlusion, drastic scale change, accurate localization, multi-object tracking, and recovery from failure. Despite successes in addressing numerous challenges under a wide range of circumstances, the core problems remain complex and challenging.

The main aim of this Special Issue is to focus on the most recent advances and trends in visual object tracking (VOT). Methods such as those based on correlation filters and Siamese networks can be further explored to improve VOT performance. We invite original research involving novel techniques, innovative methods, and useful applications that lead to significant advances in VOT. We also welcome reviews and surveys of state-of-the-art methods.

Our Special Issue includes, but is not limited to, the following topics of interest:

  • Detection, identification, recognition, and tracking of objects using various sensors;
  • Multiple camera networks or associations for very wide-range surveillance;
  • Development of non-visual sensors, such as time-of-flight sensors, RGB-D cameras, IR sensors, RADAR, LIDAR, motion sensors, and acoustic wave sensors, and their applications to video analysis and tracking;
  • Image and video enhancement algorithms to improve the quality of visual sensors for video tracking;
  • Computational photography and imaging for advanced object detection and tracking;
  • Depth estimation and three-dimensional reconstruction for augmented reality (AR) and/or advanced driver assistance systems (ADAS);
  • Learning data representation from video based on supervised/unsupervised/semi-supervised learning;
  • Dataset and performance evaluation;
  • Person re-identification, vehicle re-identification;
  • Human behavior detection, human pose estimation, and tracking.

Prof. Dr. Soon Ki Jung
Dr. Sajid Javed
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (3 papers)


Research

Open Access Article
Context-Aware and Occlusion Handling Mechanism for Online Visual Object Tracking
Electronics 2021, 10(1), 43; https://doi.org/10.3390/electronics10010043 - 29 Dec 2020
Abstract
Object tracking remains an intriguing task, as the target undergoes significant appearance changes due to illumination, fast motion, occlusion, and shape deformation. Background clutter and numerous other environmental factors are further constraints that make developing a robust and effective tracking algorithm a persistent challenge. In the present study, an adaptive spatio-temporal context (STC)-based algorithm for online tracking is proposed by combining a context-aware formulation, a Kalman filter, and an adaptive model learning rate. The proposed study makes several contributions to enhance the performance of seminal STC-based tracking. First, a context-aware formulation is incorporated into the STC framework to make it computationally less expensive while achieving better performance. Second, the Kalman filter is employed to maintain accurate tracking when the target undergoes occlusion. Finally, an adaptive update scheme is incorporated into the model to make it more robust to environmental changes. The state of the object during tracking depends on the maximum value of the response map between consecutive frames, and the Kalman filter prediction is then used as the object position in the next frame. The average difference between consecutive frames is used to update the target model adaptively. Experimental results on image sequences taken from the Temple Color-128 (TC-128), OTB2013, and OTB2015 datasets indicate that the proposed algorithm performs better than various competing algorithms, both qualitatively and quantitatively.
(This article belongs to the Special Issue Visual Object Tracking: Challenges and Applications)
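The occlusion-handling idea in the abstract above — trust the detector while the response-map peak is confident, otherwise coast on a Kalman prediction — can be sketched as follows. This is a hypothetical illustration, not the authors' implementation: the `Kalman1D` class, `track_step` helper, and the `occlusion_threshold` value are all illustrative simplifications (one coordinate, scalar variance).

```python
# Hypothetical sketch of a Kalman-assisted tracking update: a constant-velocity
# filter smooths the target position, and the response-map peak decides whether
# to trust the detection or fall back on the prediction (assumed occlusion).

class Kalman1D:
    """Minimal constant-velocity Kalman filter for one coordinate."""
    def __init__(self, x0, q=1e-2, r=1.0):
        self.x, self.v = float(x0), 0.0   # position, velocity
        self.p = 1.0                      # state variance (scalar simplification)
        self.q, self.r = q, r             # process / measurement noise

    def predict(self):
        self.x += self.v                  # propagate position by velocity
        self.p += self.q                  # grow uncertainty
        return self.x

    def correct(self, z):
        k = self.p / (self.p + self.r)    # Kalman gain
        innovation = z - self.x
        self.x += k * innovation          # pull state toward the measurement
        self.v += k * innovation * 0.5    # crude velocity update
        self.p *= (1.0 - k)               # shrink uncertainty
        return self.x

def track_step(kf, detected_x, peak_value, occlusion_threshold=0.3):
    """One frame: use the detection when the response peak is confident,
    otherwise keep the Kalman prediction (occlusion assumed)."""
    predicted = kf.predict()
    if peak_value >= occlusion_threshold:
        return kf.correct(detected_x)     # confident detection: correct state
    return predicted                      # occlusion: coast on the prediction
```

In this sketch, a low response peak simply suppresses the measurement update, so a spurious detection during occlusion cannot drag the state away from the predicted trajectory.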

Open Access Feature Paper Article
Siamese High-Level Feature Refine Network for Visual Object Tracking
Electronics 2020, 9(11), 1918; https://doi.org/10.3390/electronics9111918 - 14 Nov 2020
Cited by 1
Abstract
Siamese network-based trackers are broadly applied to visual tracking problems due to their balanced performance in terms of speed and accuracy. Tracking desired objects in challenging scenarios is still one of the fundamental concerns in visual tracking. This paper proposes a feature-refined end-to-end tracking framework with real-time tracking speed and considerable performance. A feature refine network is incorporated to enhance the representational power of target features by utilizing high-level semantic information. It also allows the network to capture salient information for locating the target and learns to represent target features in a more generalized way, advancing overall tracking performance, particularly on challenging sequences. However, the feature refine module alone cannot handle such challenges because of its limited discriminative ability. To overcome this difficulty, we employ an attention module inside the feature refine network that strengthens the tracker's ability to discriminate between the target and the background. Furthermore, we conduct extensive experiments on several popular tracking benchmarks to verify the proposed tracker's effectiveness, demonstrating that our model achieves state-of-the-art performance compared with other trackers.
(This article belongs to the Special Issue Visual Object Tracking: Challenges and Applications)
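The attention module mentioned in the abstract above can be illustrated with a minimal channel-attention sketch: squeeze each channel to a global descriptor, gate it through a sigmoid, and rescale the channel, so discriminative channels are emphasized and weak ones suppressed. The function name and the single `weight`/`bias` gate are assumptions for illustration, not the paper's network.

```python
import math

# Minimal channel-attention sketch (hypothetical, not the paper's module):
# each channel is squeezed to its global average, gated with a sigmoid,
# and rescaled by the resulting attention weight.

def channel_attention(feature_map, weight=1.0, bias=0.0):
    """feature_map: list of channels, each a 2-D list (H x W).
    Returns the channels rescaled by their sigmoid-gated global averages."""
    out = []
    for ch in feature_map:
        # Squeeze: global average over the channel's spatial extent.
        pooled = sum(sum(row) for row in ch) / (len(ch) * len(ch[0]))
        # Gate: a tiny linear transform followed by a sigmoid.
        gate = 1.0 / (1.0 + math.exp(-(weight * pooled + bias)))
        # Excite: rescale every spatial location of the channel.
        out.append([[v * gate for v in row] for row in ch])
    return out
```

A channel whose responses are strongly positive receives a gate near 1 and passes almost unchanged, while a weakly responding channel is attenuated toward zero — the discrimination effect the attention module is meant to provide.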

Open Access Article
ACSiamRPN: Adaptive Context Sampling for Visual Object Tracking
Electronics 2020, 9(9), 1528; https://doi.org/10.3390/electronics9091528 - 18 Sep 2020
Abstract
In visual object tracking, the Siamese network tracker based on the region proposal network (SiamRPN) has achieved promising results in both speed and accuracy. However, it does not consider the relationships and differences among the long-range context information of various objects. In this paper, we add a global context block (GC block), which is lightweight and can effectively model long-range dependencies, to the Siamese network part of SiamRPN so that the tracker can better understand the tracking scene. At the same time, we propose a novel convolution module, called the cropping-inside selective kernel block (CiSK block), based on selective kernel convolution (SK convolution, a module proposed in selective kernel networks), and use it in the region proposal network (RPN) part of SiamRPN, which can adaptively adjust the size of the receptive field for different types of objects. We make two improvements to SK convolution in the CiSK block. First, in the fusion step of SK convolution, we use both global average pooling (GAP) and global maximum pooling (GMP) to enhance global information embedding. Second, after the selection step of SK convolution, we crop out the outermost pixels of the features to reduce the impact of padding operations. Experimental results show that on the OTB100 benchmark we achieve an accuracy of 0.857 and a success rate of 0.643, and on the VOT2016 and VOT2019 benchmarks we achieve expected average overlap (EAO) scores of 0.394 and 0.240, respectively.
(This article belongs to the Special Issue Visual Object Tracking: Challenges and Applications)
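The GAP + GMP fusion and branch selection described in the abstract above can be sketched as follows. This is an illustrative simplification of SK-style selection, not the paper's code: the two branches are fused, the fused map is squeezed with both average and max pooling, and a softmax over two tiny scalar gates (`gate_a`, `gate_b`, both hypothetical) weights the branches per channel.

```python
import math

def gap(ch):
    """Global average pooling over one H x W channel (list of rows)."""
    return sum(sum(row) for row in ch) / (len(ch) * len(ch[0]))

def gmp(ch):
    """Global maximum pooling over one H x W channel."""
    return max(max(row) for row in ch)

def select_branches(branch_a, branch_b, gate_a=(1.0, 0.0), gate_b=(0.5, 0.0)):
    """branch_a/branch_b: per-channel 2-D maps from two receptive-field sizes.
    Fuse the branches, build a GAP + GMP descriptor, then softmax-weight
    the branches per channel (SK-style selection, simplified)."""
    out = []
    for ca, cb in zip(branch_a, branch_b):
        # Fuse: element-wise sum of the two branches.
        fused = [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(ca, cb)]
        # GAP + GMP descriptor of the fused map (the abstract's improvement).
        s = gap(fused) + gmp(fused)
        # Two scalar gates stand in for the per-branch fc layers.
        za = gate_a[0] * s + gate_a[1]
        zb = gate_b[0] * s + gate_b[1]
        wa = math.exp(za) / (math.exp(za) + math.exp(zb))  # softmax weight
        # Select: convex combination of the branches.
        out.append([[wa * a + (1.0 - wa) * b for a, b in zip(ra, rb)]
                    for ra, rb in zip(ca, cb)])
    return out
```

Adding GMP to the descriptor lets a single strong peak influence the selection even when the channel's average response is low, which is the "enhanced global information embedding" the abstract refers to.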
