
Special Issue "Visual Sensors for Object Tracking and Recognition"

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Sensor Networks".

Deadline for manuscript submissions: 31 December 2020.

Special Issue Editors

Dr. Filiz Bunyak
Guest Editor
Department of Electrical Engineering & Computer Science, University of Missouri, Columbia, MO, USA
Interests: computer vision; biomedical image analysis; visual surveillance and monitoring; motion detection; visual tracking; deep learning methods; level set methods
Dr. Hadi Ali Akbarpour
Guest Editor
Department of Electrical Engineering & Computer Science, University of Missouri, Columbia, MO, USA
Interests: computer vision; video analytics; robotics
Dr. Ilker Ersoy
Guest Editor
Institute for Data Science and Informatics, University of Missouri, Columbia, MO, USA
Interests: computer vision; bioimage informatics; microscopy image analysis; deep learning; visualization

Special Issue Information

Dear Colleagues,

Visual object recognition and tracking are fundamental computer vision tasks, essential in a wide range of applications including visual surveillance and monitoring, autonomous vehicles, human–computer interaction, and biomedical image informatics. We are witnessing a growing need for, and renewed interest in, robust visual object tracking and recognition capabilities, driven by recent advances in sensor technologies and the emergence of new applications associated with these technologies.

Although visual object detection, tracking, and recognition in challenging real-world environments are relatively effortless for humans, they remain very challenging for a computational video analytics pipeline. Advances in technology, combining more powerful and lower-cost computing platforms with novel methods, particularly those relying on deep learning, are revolutionizing the computer vision field and providing new opportunities for research with larger and more diverse datasets. In addition to visual information, other sensors such as GPS, IMU, and lidar can be used synergistically to build more robust approaches in diverse fields, from aerial surveillance and wildlife tracking to mobile and wearable technologies, automated driving, and robotics.

The aim of this Special Issue is to solicit original and innovative work from academia and industry on all aspects of visual object recognition and tracking, addressing the needs of a diverse set of application fields. Contributions that review and report on the state of the art, highlight challenges, point to future directions, and propose novel solutions are also welcome.

Topics of interest include but are not limited to:

  • visual recognition and/or tracking for video surveillance and monitoring (ground and aerial platforms);
  • visual recognition and/or tracking for robotics and autonomous vehicles;
  • visual recognition and/or tracking in biomedical modalities (endoscopy, videofluoroscopy, microscopy, etc.);
  • embedded solutions for visual recognition and/or tracking;
  • recognition and tracking for computational human behavior analysis, assistive robots, and human–robot interaction;
  • heterogeneous sensor fusion for robust tracking and video analytics.

Dr. Filiz Bunyak
Dr. Hadi Ali Akbarpour
Dr. Ilker Ersoy
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2000 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • visual object tracking
  • visual object recognition
  • video analytics
  • visual surveillance and monitoring
  • bioimage informatics
  • data fusion
  • sensor fusion

Published Papers (3 papers)


Research

Open Access Article
Learning Soft Mask Based Feature Fusion with Channel and Spatial Attention for Robust Visual Object Tracking
Sensors 2020, 20(14), 4021; https://doi.org/10.3390/s20144021 - 20 Jul 2020
Abstract
We propose to improve visual object tracking by introducing a soft-mask-based low-level feature fusion technique, further strengthened by integrating channel and spatial attention mechanisms. The proposed approach is integrated within a Siamese framework to demonstrate its effectiveness for visual object tracking. The soft mask gives more importance to target regions than to other regions, enabling effective target feature representation and increasing discriminative power. The low-level feature fusion improves the tracker's robustness against distractors. The channel attention identifies more discriminative channels for better target representation, while the spatial attention complements the soft-mask-based approach to better localize target objects in challenging tracking scenarios. We evaluated the proposed approach on five publicly available benchmark datasets and performed extensive comparisons with 39 state-of-the-art tracking algorithms. The proposed tracker demonstrates excellent performance compared to existing state-of-the-art trackers.
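The channel and spatial attention mechanisms mentioned in the abstract can be illustrated with a generic sketch. The NumPy snippet below shows squeeze-and-excitation-style channel attention and a simple energy-based spatial weighting; it is an illustrative sketch, not the authors' actual architecture, and the projection matrices `w1` and `w2` are random placeholders for weights that would be learned in a real tracker.

```python
import numpy as np

def channel_attention(feat, reduction=2):
    """Squeeze-and-excitation-style channel attention (illustrative sketch).

    feat: feature map of shape (C, H, W). Returns the reweighted map.
    The projections w1/w2 are random here; in a tracker they are learned.
    """
    c, h, w = feat.shape
    squeezed = feat.reshape(c, -1).mean(axis=1)        # global average pool -> (C,)
    rng = np.random.default_rng(0)
    w1 = rng.standard_normal((c // reduction, c)) * 0.1
    w2 = rng.standard_normal((c, c // reduction)) * 0.1
    hidden = np.maximum(w1 @ squeezed, 0.0)            # ReLU bottleneck
    weights = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))     # sigmoid gate in (0, 1)
    return feat * weights[:, None, None]               # rescale each channel

def spatial_attention(feat):
    """Spatial attention: weight each location by its channel-wise energy."""
    energy = np.abs(feat).mean(axis=0)                 # (H, W) energy map
    weights = energy / (energy.max() + 1e-8)           # normalize to [0, 1]
    return feat * weights[None, :, :]
```

In the paper's setting, these reweighted features would feed the Siamese matching branch; here they simply demonstrate how per-channel and per-location gating reshapes a feature map.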
(This article belongs to the Special Issue Visual Sensors for Object Tracking and Recognition)

Open Access Article
Multiple Object Tracking for Dense Pedestrians by Markov Random Field Model with Improvement on Potentials
Sensors 2020, 20(3), 628; https://doi.org/10.3390/s20030628 - 22 Jan 2020
Abstract
Pedestrian tracking in dense crowds is a challenging task, even when using a multi-camera system. In this paper, a new Markov random field (MRF) model is proposed for the association of tracklet couplings. Equipped with a new potential function improvement method, this model can associate the small tracklet coupling segments caused by dense pedestrian crowds. The tracklet couplings in this paper are obtained through a data fusion method based on image mutual information. This method calculates the spatial relationships of tracklet pairs by integrating position and motion information, and adopts a human key point detection method to correct the position data of incomplete and deviated detections in dense crowds. The MRF potential function improvement method for dense pedestrian scenes includes assimilation and extension processing, as well as a message-selective belief propagation algorithm. The former enhances the information of fragmented tracklets by soft-linking them with longer tracklets and sharing the result to improve the potentials of adjacent nodes, whereas the latter uses a message selection rule to prevent unreliable messages from fragmented tracklet couplings from spreading through the MRF network. With the help of the iterative belief propagation algorithm, the potentials of the model are improved to achieve valid association of the tracklet coupling fragments, such that dense pedestrians can be tracked more robustly. Modular and system-level experiments are conducted on the PETS2009 dataset; the results reveal that the proposed method has superior tracking performance.
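The appearance cue behind the mutual-information-based fusion can be sketched compactly. The function below estimates mutual information between two grayscale patches from their joint intensity histogram; it is a generic illustration with an arbitrary bin count, and it omits the position and motion terms that the paper's fusion method also integrates.

```python
import numpy as np

def mutual_information(a, b, bins=16):
    """Mutual information (nats) between two equally sized grayscale patches.

    Estimated from the empirical joint histogram; higher values mean the
    two patches' intensities are more statistically dependent.
    """
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()                       # joint distribution
    px = pxy.sum(axis=1, keepdims=True)           # marginal of a
    py = pxy.sum(axis=0, keepdims=True)           # marginal of b
    nz = pxy > 0                                  # avoid log(0)
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())
```

A patch compared against itself should score higher than against an unrelated patch, which is the property that makes this usable as a tracklet-pair similarity score.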
(This article belongs to the Special Issue Visual Sensors for Object Tracking and Recognition)

Open Access Article
Global Motion-Aware Robust Visual Object Tracking for Electro Optical Targeting Systems
Sensors 2020, 20(2), 566; https://doi.org/10.3390/s20020566 - 20 Jan 2020
Cited by 1
Abstract
Although recently developed trackers have shown excellent performance even when tracking fast-moving and shape-changing objects with variable scale and orientation, trackers for electro-optical targeting systems (EOTS) still suffer from abrupt scene changes due to frequent and fast camera motions caused by pan-tilt motor control or dynamic distortions in field environments. Conventional context-aware (CA) and deep-learning-based trackers have been studied to tackle these problems, but they neither fully overcome them nor avoid a heavy computational burden. In this paper, a global motion-aware method is proposed to address the fast camera motion issue. The proposed method consists of two modules: (i) a motion detection module based on the change in image entropy, and (ii) a background tracking module that tracks a set of features across consecutive images to find correspondences and estimate global camera movement. A series of experiments is conducted on thermal infrared images, and the results show that the proposed method can significantly improve the robustness of all trackers with minimal computational overhead. We show that the proposed method can be easily integrated into any visual tracking framework and can be applied to improve the performance of EOTS applications.
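The two modules can be sketched in simplified form: an entropy-based abrupt-change detector and a whole-image global-motion estimate. In the sketch below, phase correlation stands in for the paper's feature-correspondence background tracker, and the entropy threshold is an arbitrary placeholder; this is an illustration of the idea, not the authors' implementation.

```python
import numpy as np

def image_entropy(img, bins=32):
    """Shannon entropy (bits) of an image's intensity histogram, values in [0, 1]."""
    hist, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def abrupt_change(prev, curr, thresh=0.5):
    """Flag an abrupt scene change when the entropy jump exceeds thresh (arbitrary)."""
    return abs(image_entropy(curr) - image_entropy(prev)) > thresh

def global_shift(prev, curr):
    """Estimate global integer translation between two frames via phase correlation."""
    f = np.conj(np.fft.fft2(prev)) * np.fft.fft2(curr)   # cross-power spectrum
    corr = np.fft.ifft2(f / (np.abs(f) + 1e-8)).real     # normalized -> delta at shift
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = prev.shape
    if dy > h // 2:                                      # unwrap circular shifts
        dy -= h
    if dx > w // 2:
        dx -= w
    return int(dy), int(dx)
```

A tracker could subtract the estimated global shift from its search window before matching, so that fast pan-tilt motion does not appear as target motion.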
(This article belongs to the Special Issue Visual Sensors for Object Tracking and Recognition)
