
Hyperspectral Object Tracking

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Remote Sensing Image Processing".

Deadline for manuscript submissions: closed (13 August 2023)

Special Issue Editors


Guest Editor
School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing 210094, China
Interests: hyperspectral imaging; pattern recognition; computer vision; remote sensing
School of Information and Communication Technology, Griffith University, Nathan, QLD 4111, Australia
Interests: pattern recognition; computer vision and spectral imaging with their applications to remote sensing and environmental informatics

Guest Editor
State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing, Wuhan University, 129 Luoyu Road, Wuhan 430079, China
Interests: hyperspectral remote-sensing information processing; high-resolution remote-sensing image understanding; multi-source remote-sensing data geological interpretation

Guest Editor
1. Helmholtz Institute Freiberg for Resource Technology, Helmholtz-Zentrum Dresden-Rossendorf (HZDR), D-09599 Freiberg, Germany
2. Institute of Advanced Research in Artificial Intelligence (IARAI), 1030 Wien, Austria
Interests: hyperspectral image interpretation; multisensor and multitemporal data fusion

Guest Editor
Gipsa-Lab, Grenoble Institute of Technology, 38031 Grenoble, France
Interests: image analysis; hyperspectral remote sensing; data fusion; machine learning; artificial intelligence

Special Issue Information

Dear Colleagues,

Object tracking is an active research topic in computer vision, pattern recognition and remote sensing. We have witnessed significant progress on this topic over the past several years, with approaches moving from hand-crafted features to deep learning families. Nevertheless, tracking in grayscale or color videos has intrinsic limitations in depicting the physical properties of targets, especially the reflectance of materials. This makes trackers vulnerable in complex scenarios with cluttered backgrounds and significant changes in object shape. The problem can be effectively addressed by object tracking in hyperspectral videos, which provide joint spectral, spatial, and temporal information, enabling computer vision systems to perceive the materials of objects in addition to their shape, texture, and semantic relationships.

The Second Hyperspectral Object Tracking Challenge will be held with the 12th Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing (WHISPERS). The goal of this contest is to boost research on hyperspectral object tracking. This Special Issue provides a further opportunity to bring together the global research community actively involved in this field and to highlight ongoing investigations and new applications of hyperspectral video data and tools.

Authors are not required to take part in the WHISPERS 2022 Hyperspectral Object Tracking Challenge. Articles may address, but are not limited to, the following topics:

  • Hyperspectral Video Generation
  • Hyperspectral Video Processing
  • Hyperspectral Tracking
  • Hyperspectral/Multispectral Object Detection
  • Hyperspectral Snapshot Compressive Imaging
  • Illumination Estimation

Dr. Fengchao Xiong
Dr. Jun Zhou
Prof. Dr. Yanfei Zhong
Dr. Pedram Ghamisi
Prof. Dr. Jocelyn Chanussot
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • hyperspectral imaging
  • hyperspectral video processing
  • object tracking
  • deep learning
  • feature extraction
  • convolutional neural network
  • computer vision
  • pattern recognition
  • remote sensing

Published Papers (8 papers)


Research

24 pages, 38293 KiB  
Article
A Spectral–Spatial Transformer Fusion Method for Hyperspectral Video Tracking
by Ye Wang, Yuheng Liu, Mingyang Ma and Shaohui Mei
Remote Sens. 2023, 15(7), 1735; https://doi.org/10.3390/rs15071735 - 23 Mar 2023
Cited by 5
Abstract
Hyperspectral videos (HSVs) can record more detailed clues than other videos, which is especially beneficial when abundant spectral information is available. Although traditional methods based on correlation filters (CFs), employed to explore spectral information locally, achieve promising results, their performance is limited by ignoring global information. In this paper, a joint spectral–spatial information method, named the spectral–spatial transformer-based feature fusion tracker (SSTFT), is proposed for hyperspectral video tracking; it is capable of utilizing spectral–spatial features and considering global interactions. Specifically, the feature extraction module employs two parallel branches to extract multi-level coarse-grained and fine-grained spectral–spatial features, which are fused with adaptive weights. The extracted features are further combined in the context fusion module, based on a transformer with hyperspectral self-attention (HSA) and hyperspectral cross-attention (HCA), which are designed to capture self-context and cross-context feature interactions, respectively. Furthermore, an adaptive dynamic template updating strategy is used to update the template bounding box based on the prediction score. Extensive experimental results on benchmark hyperspectral video tracking datasets demonstrate that the proposed SSTFT outperforms state-of-the-art methods in both precision and speed.
(This article belongs to the Special Issue Hyperspectral Object Tracking)
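The score-gated template update the abstract mentions can be sketched in a few lines. This is a hedged illustration only: the paper's actual strategy, threshold, and learning rate are not given in the abstract, so the values and the linear-blend rule below are assumptions.

```python
import numpy as np

# Hedged sketch of a confidence-gated template update: refresh the template
# only when the tracker's prediction score is high, blending the new crop in
# with a small learning rate.  The threshold and rate are illustrative
# assumptions, not the paper's settings.
def update_template(template, candidate, score, threshold=0.9, rate=0.1):
    """Return the template after a confidence-gated linear blend."""
    if score >= threshold:
        # Confident prediction: fold the new crop into the template.
        return (1.0 - rate) * template + rate * candidate
    # Low confidence: keep the old template to avoid drift.
    return template
```

Gating on the score is a common way to avoid corrupting the template when the tracker is uncertain; the blend keeps a long memory of earlier appearances.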

20 pages, 12448 KiB  
Article
AD-SiamRPN: Anti-Deformation Object Tracking via an Improved Siamese Region Proposal Network on Hyperspectral Videos
by Shiqing Wang, Kun Qian, Jianlu Shen, Hongyu Ma and Peng Chen
Remote Sens. 2023, 15(7), 1731; https://doi.org/10.3390/rs15071731 - 23 Mar 2023
Cited by 3
Abstract
Object tracking using hyperspectral images (HSIs) obtains satisfactory results in distinguishing objects with similar colors. Yet, tracking algorithms tend to fail when the target undergoes deformation. In this paper, a SiamRPN-based hyperspectral tracker is proposed to deal with this problem. Firstly, a band selection method based on genetic optimization is designed to rapidly reduce the redundancy of information in HSIs. Specifically, the three bands with the highest joint entropy are selected. To solve the problem that the information in the SiamRPN template decays over time, an update network is trained on a general object tracking benchmark dataset, which yields effective cumulative templates. The use of cumulative templates with spectral information makes it easier to track the deformed target. In addition, transfer learning of the pre-trained SiamRPN is designed to obtain a better model for HSIs. The experimental results show that the proposed tracker obtains good tracking results over the entire public dataset and outperforms other popular trackers when target deformation is compared qualitatively and quantitatively, achieving an overall success rate of 57.5% and a deformation-challenge success rate of 70.8%.
(This article belongs to the Special Issue Hyperspectral Object Tracking)
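The band-selection criterion named in the abstract — keep the three bands with the highest joint entropy — can be sketched directly. The paper searches with a genetic optimizer; for clarity this sketch simply scores every triplet on a small cube, and the histogram bin count is an assumption.

```python
import numpy as np
from itertools import combinations

# Illustrative sketch: keep the band triplet whose joint histogram has the
# highest Shannon entropy.  Exhaustive scoring stands in for the paper's
# genetic search; the bin count is an assumption.
def joint_entropy(cube, bands, bins=8):
    """Shannon entropy (bits) of the joint histogram of the chosen bands."""
    samples = np.stack([cube[..., b].ravel() for b in bands], axis=1)
    hist, _ = np.histogramdd(samples, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]  # drop empty bins before taking the log
    return float(-np.sum(p * np.log2(p)))

def select_bands(cube, k=3, bins=8):
    """Exhaustively pick the k-band subset maximising joint entropy."""
    n_bands = cube.shape[-1]
    return max(combinations(range(n_bands), k),
               key=lambda bands: joint_entropy(cube, bands, bins))
```

Exhaustive search is only feasible for small band counts; for a full hyperspectral cube a stochastic search such as the paper's genetic optimizer becomes necessary.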

30 pages, 2611 KiB  
Article
Hyperspectral Video Tracker Based on Spectral Deviation Reduction and a Double Siamese Network
by Zhe Zhang, Bin Hu, Mengyuan Wang, Pattathal V. Arun, Dong Zhao, Xuguang Zhu, Jianling Hu, Huan Li, Huixin Zhou and Kun Qian
Remote Sens. 2023, 15(6), 1579; https://doi.org/10.3390/rs15061579 - 14 Mar 2023
Cited by 5
Abstract
The advent of hyperspectral cameras has popularized the study of hyperspectral video trackers. Although hyperspectral images can better distinguish targets than their RGB counterparts, occlusion and rotation of the target affect tracking effectiveness. For instance, occlusion obscures the target, reducing tracking accuracy and even causing tracking failure. In this regard, this paper proposes a novel hyperspectral video tracker in which a double Siamese network (D-Siam) forms the basis of the framework, with AlexNet as its backbone. The current study also adopts a novel spectral-deviation-based dimensionality reduction approach on the learned features to match the input requirements of AlexNet. It should be noted that the proposed dimensionality reduction method increases the distinction between target and background. The two response maps, namely the initial response map and the adjacent response map, obtained using the D-Siam network, are fused using an adaptive weight estimation strategy. Finally, a confidence judgment module is proposed to regulate the update of the whole framework. A comparative analysis of the proposed approach with state-of-the-art trackers and an extensive ablation study were conducted on a publicly available benchmark hyperspectral dataset. The results show that the proposed tracker outperforms existing state-of-the-art approaches against most of the challenges.
(This article belongs to the Special Issue Hyperspectral Object Tracking)

20 pages, 17506 KiB  
Article
A Fast Hyperspectral Tracking Method via Channel Selection
by Yifan Zhang, Xu Li, Baoguo Wei, Lixin Li and Shigang Yue
Remote Sens. 2023, 15(6), 1557; https://doi.org/10.3390/rs15061557 - 12 Mar 2023
Cited by 4
Abstract
With the rapid development of hyperspectral imaging technology, object tracking in hyperspectral video has become a research hotspot. Real-time object tracking for hyperspectral video is a great challenge. We propose a fast hyperspectral object tracking method via a channel selection strategy that improves tracking speed significantly. First, we design a channel selection strategy to pick a few candidate channels from the many hyperspectral video channels, and then send the candidates to the subsequent background-aware correlation filter (BACF) tracking framework. In addition, we consider the importance of local and global spectral information in feature extraction and further improve the BACF tracker to ensure high tracking accuracy. In the experiments carried out in this study, the proposed method was verified and achieved the best performance on the publicly available hyperspectral dataset of the WHISPERS Hyperspectral Object Tracking Challenge. Our method was superior to state-of-the-art RGB-based and hyperspectral trackers in terms of both the area under the curve (AUC) and DP@20pixels. The tracking speed of our method reached 21.9 FPS, which is much faster than that of the current most advanced hyperspectral trackers.
(This article belongs to the Special Issue Hyperspectral Object Tracking)

16 pages, 4631 KiB  
Article
MMCAN: Multi-Modal Cross-Attention Network for Free-Space Detection with Uncalibrated Hyperspectral Sensors
by Feiyi Fang, Tao Zhou, Zhenbo Song and Jianfeng Lu
Remote Sens. 2023, 15(4), 1142; https://doi.org/10.3390/rs15041142 - 20 Feb 2023
Cited by 2
Abstract
Free-space detection plays a pivotal role in autonomous vehicle applications, and its state-of-the-art algorithms are typically based on semantic segmentation of road areas. Recently, hyperspectral images have proven to be useful supplementary information in multi-modal segmentation, providing more texture details to the RGB representations and thus performing well in road segmentation tasks. Existing multi-modal segmentation methods assume that all inputs are well aligned, so the problem reduces to fusing feature maps from different modalities. However, there are cases where sensors cannot be well calibrated. In this paper, we propose a novel network, the multi-modal cross-attention network (MMCAN), for multi-modal free-space detection with uncalibrated hyperspectral sensors. We first introduce a cross-modality transformer that uses hyperspectral data to enhance RGB features, then aggregate these representations alternately over multiple stages. This transformer promotes the spread and fusion of information between modalities that cannot be aligned at the pixel level. Furthermore, we propose a triplet gate fusion strategy, which can increase the proportion of RGB in the multiple spectral fusion processes while maintaining the specificity of each modality. The experimental results on a multi-spectral dataset demonstrate that our MMCAN model achieves state-of-the-art performance. The method can be used directly on pictures taken in the field without complex preprocessing. Our future goal is to adapt the algorithm to multi-object segmentation and generalize it to other multi-modal combinations.
(This article belongs to the Special Issue Hyperspectral Object Tracking)

23 pages, 1762 KiB  
Article
TMTNet: A Transformer-Based Multimodality Information Transfer Network for Hyperspectral Object Tracking
by Chunhui Zhao, Hongjiao Liu, Nan Su, Congan Xu, Yiming Yan and Shou Feng
Remote Sens. 2023, 15(4), 1107; https://doi.org/10.3390/rs15041107 - 17 Feb 2023
Cited by 7
Abstract
Hyperspectral video, with its spatial and spectral information, has great potential to improve object tracking performance. However, limited hyperspectral training samples hinder the development of hyperspectral object tracking. Since hyperspectral data have multiple bands, from which any three can be extracted to form pseudocolor images, we propose a Transformer-based multimodality information transfer network (TMTNet), aiming to improve tracking performance by efficiently transferring the information of multimodality data composed of RGB and hyperspectral in the hyperspectral tracking process. The multimodality information to be transferred mainly includes the fused RGB–hyperspectral multimodality information and the RGB modality information. Specifically, we construct two subnetworks to transfer the multimodality fusion information and the robust RGB visual information, respectively. Among them, the multimodality fusion information transfer subnetwork is designed based on a dual Siamese branch structure. The subnetwork employs a pretrained RGB tracking model as the RGB branch to guide the training of the hyperspectral branch with few training samples. The RGB modality information transfer subnetwork is designed based on a well-performing pretrained RGB tracking model to improve the tracking network's generalization and accuracy in unknown complex scenes. In addition, we design a Transformer-based information interaction module in the multimodality fusion information transfer subnetwork. The module can fuse multimodality information by capturing the potential interaction between different modalities. We also add a spatial optimization module to TMTNet, which further optimizes the object position predicted by the subject network by fully retaining and utilizing detailed spatial information. Experimental results on the only available hyperspectral tracking benchmark dataset show that the proposed TMTNet tracker outperforms advanced trackers, demonstrating the effectiveness of the method.
(This article belongs to the Special Issue Hyperspectral Object Tracking)

21 pages, 1636 KiB  
Article
Hyperspectral Video Target Tracking Based on Deep Edge Convolution Feature and Improved Context Filter
by Dong Zhao, Jialu Cao, Xuguang Zhu, Zhe Zhang, Pattathal V. Arun, Yecai Guo, Kun Qian, Like Zhang, Huixin Zhou and Jianling Hu
Remote Sens. 2022, 14(24), 6219; https://doi.org/10.3390/rs14246219 - 8 Dec 2022
Cited by 9
Abstract
To address the degraded performance of hyperspectral target tracking in the presence of background clutter, this paper proposes a novel hyperspectral target tracking algorithm based on a deep edge convolution feature (DECF) and an improved context filter (ICF). DECF is a fusion feature obtained by convolving deep features with 3D edge features, which makes targets easier to distinguish against complex backgrounds. To reduce background clutter interference, an ICF is proposed. The ICF selects the eight neighborhoods around the target as context areas; the four areas causing the greatest interference are then treated as negative samples to train the ICF. To reduce the tracking drift caused by target deformation, an adaptive scale estimation module, named the region proposal module, is proposed for adaptive estimation of the target box. Experimental results show that the proposed algorithm has satisfactory tracking performance against background clutter challenges.
(This article belongs to the Special Issue Hyperspectral Object Tracking)
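The context-selection step described in the abstract — score the eight neighborhoods around the target box and keep the four with the strongest interference as negative samples — can be sketched as below. This is a hypothetical illustration: the abstract does not define the interference score, so correlation with the target patch is used here as a stand-in.

```python
import numpy as np

# Hypothetical sketch of the ICF context-selection step.  "Interference" is
# stood in for by correlation with the target patch; the paper's actual
# score may differ.
def pick_context_negatives(frame, box, n_keep=4):
    x, y, w, h = box  # top-left corner plus width/height, in pixels
    offsets = [(-w, -h), (0, -h), (w, -h),
               (-w, 0),           (w, 0),
               (-w, h), (0, h), (w, h)]
    target = frame[y:y + h, x:x + w]
    scored = []
    for dx, dy in offsets:
        cx, cy = x + dx, y + dy
        if cx < 0 or cy < 0:
            continue  # neighbourhood falls outside the frame
        patch = frame[cy:cy + h, cx:cx + w]
        if patch.shape != target.shape:
            continue  # clipped at the right/bottom border
        score = float(np.sum(patch * target))  # similarity as a clutter proxy
        scored.append((score, (cx, cy, w, h)))
    scored.sort(key=lambda s: s[0], reverse=True)
    return [b for _, b in scored[:n_keep]]
```

The returned boxes would then serve as negative training samples for the filter, teaching it to suppress the most distracting surrounding regions.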

24 pages, 4370 KiB  
Article
Hyperspectral Video Target Tracking Based on Deep Features with Spectral Matching Reduction and Adaptive Scale 3D Hog Features
by Zhe Zhang, Xuguang Zhu, Dong Zhao, Pattathal V. Arun, Huixin Zhou, Kun Qian and Jianling Hu
Remote Sens. 2022, 14(23), 5958; https://doi.org/10.3390/rs14235958 - 24 Nov 2022
Cited by 7
Abstract
Hyperspectral video target tracking is generally challenging when the scale of the target varies. In this paper, a novel algorithm is proposed to address the challenges prevalent in existing hyperspectral video target tracking approaches. The proposed approach employs deep features along with spectral matching reduction and adaptive-scale 3D HOG features to track objects even when the scale varies. Spectral matching reduction is adopted to estimate the spectral curve of the selected target region using a weighted combination of the global and local spectral curves. In addition to the deep features, adaptive-scale 3D HOG features are extracted using cube-level features at three different scales. The four weak response maps thus obtained are then combined using adaptive weights to yield a strong response map. Finally, the region proposal module is utilized to estimate the target box. The proposed strategies make the approach robust against scale variations of the target. A comparative study on different hyperspectral video sequences illustrates the superior performance of the proposed algorithm compared to state-of-the-art approaches.
(This article belongs to the Special Issue Hyperspectral Object Tracking)
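Combining several weak response maps into one strong map with adaptive weights, as the abstract describes, can be sketched in a few lines. The weighting rule below (normalized peak response) is an assumption made for illustration; the paper's adaptive weights may be computed differently.

```python
import numpy as np

# Minimal sketch of adaptive response-map fusion: weight each weak map by
# its normalised peak response, then sum.  The weighting rule is an
# illustrative assumption, not the paper's formula.
def fuse_response_maps(maps):
    """Weight each map by its peak response, then sum into a strong map."""
    peaks = np.array([m.max() for m in maps])
    weights = peaks / peaks.sum()  # stronger peaks contribute more
    return sum(w * m for w, m in zip(weights, maps))
```

A map with a sharp, confident peak thus dominates the fused result, while flat, ambiguous maps contribute little.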
