Special Issue "Image Processing and Spatial Neighbourhoods for Remote Sensing Data Analysis"

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Remote Sensing Image Processing".

Deadline for manuscript submissions: 20 August 2020.

Special Issue Editors

Dr. Wenzhi Liao
Guest Editor
Lecturer, Department of Electronic and Electrical Engineering, University of Strathclyde, UK
Interests: remote sensing; hyperspectral imaging; image processing; machine learning; data fusion
Dr. Pedram Ghamisi
Guest Editor
1: Helmholtz-Zentrum Dresden-Rossendorf, Helmholtz Institute Freiberg for Resource Technology, Chemnitzer Str. 40, D-09599 Freiberg, Germany
2: CTO and co-founder at VasoGnosis, 313 N Plankinton Ave, Suite 211, Milwaukee, WI 53203, USA
Interests: multisensor data fusion; machine and deep learning; image and signal processing; hyperspectral image analysis
Dr. Lianru Gao
Guest Editor
Key Laboratory of Digital Earth Science, Institute of Remote Sensing and Digital Earth, Chinese Academy of Sciences, China
Interests: remote sensing; Earth observation; hyperspectral image processing; target detection
Prof. Jocelyn Chanussot
Guest Editor
GIPSA-lab, 11 rue des Mathématiques, Grenoble Campus BP46, F-38402 SAINT MARTIN D'HERES CEDEX, France
Interests: image processing; machine learning; mathematical morphology; hyperspectral imaging; data fusion

Special Issue Information

Dear Colleagues,

Recent advances in remote sensing technologies have led to the increased availability of a multitude of satellite and airborne data sources with ever-higher spatial, spectral, and temporal resolutions. Additionally, at lower altitudes, airplanes and Unmanned Aerial Vehicles (UAVs) can deliver very high-resolution data from targeted locations. Remote sensing images of very high geometrical resolution provide a precise and detailed representation of the surveyed scene. The spatial information contained in these images is therefore fundamental for any application requiring detailed image analysis.

In this Special Issue, we welcome methodological contributions in terms of novel spatial information extraction and modeling algorithms, as well as their recent applications to relevant remote sensing scenarios. We invite submissions presenting the most recent advances in (but not limited to) the following topics:

  • Mathematical morphology (e.g., morphological filters, attribute filters) for the analysis of high-resolution remote sensing images (see the illustrative sketch after this list);
  • Image operations based on spatial neighbourhoods;
  • Textural, structural, and semantic feature extraction;
  • Operational methods for incorporating spatial information of high-resolution data;
  • Object-based image processing;
  • Semantic understanding and analysis.
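As a concrete illustration of the first topic above, the sketch below builds a simple morphological profile by stacking greyscale openings and closings with structuring elements of increasing size, one common way of encoding spatial-neighbourhood information for pixel-wise analysis. The function name, the radii, and the use of flat disk structuring elements are illustrative assumptions, not a prescribed method.

```python
# Minimal sketch of a morphological profile for one image band.
# Illustrative only; function name and parameters are assumptions.
import numpy as np
from scipy.ndimage import grey_opening, grey_closing
from skimage.morphology import disk

def morphological_profile(band: np.ndarray, radii=(1, 2, 4)) -> np.ndarray:
    """Stack openings and closings with disks of increasing radius.

    `band` is a single 2-D image band; the output has one feature plane
    per opening/closing plus the original band.
    """
    features = [band]
    for r in radii:
        se = disk(r)                                        # flat structuring element
        features.append(grey_opening(band, footprint=se))   # removes bright details smaller than se
        features.append(grey_closing(band, footprint=se))   # removes dark details smaller than se
    return np.stack(features, axis=0)                       # shape: (2*len(radii)+1, H, W)
```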

Dr. Wenzhi Liao
Dr. Pedram Ghamisi
Dr. Lianru Gao
Prof. Jocelyn Chanussot
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2000 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • very high resolution
  • remote sensing
  • spatial information extraction
  • mathematical morphology
  • image processing

Published Papers (7 papers)


Research

Open Access Article
Object Detection in Remote Sensing Images Based on Improved Bounding Box Regression and Multi-Level Features Fusion
Remote Sens. 2020, 12(1), 143; https://doi.org/10.3390/rs12010143 - 01 Jan 2020
Cited by 2
Abstract
The objective of detection in remote sensing images is to determine the location and category of all targets in these images. Anchor-based methods are the most prevalent deep-learning-based methods, but they still have several problems that need to be addressed. First, the existing metric (i.e., intersection over union (IoU)) cannot measure the distance between two bounding boxes when they are non-overlapping. Second, the existing bounding box regression loss cannot directly optimize the metric during training. Third, existing methods that adopt a hierarchical deep network choose only a single feature level for the feature extraction of region proposals, and therefore do not make full use of multi-level features. To resolve the above problems, a novel object detection method for remote sensing images based on improved bounding box regression and multi-level features fusion is proposed in this paper. First, a new metric named generalized IoU is applied, which can quantify the distance between two bounding boxes regardless of whether they overlap. Second, a novel bounding box regression loss is proposed, which can not only optimize the new metric (i.e., generalized IoU) directly but also overcome the problem that existing bounding box regression losses based on the new metric cannot adaptively change the gradient according to the metric value. Finally, a multi-level features fusion module is proposed and incorporated into the existing hierarchical deep network, which makes full use of the multi-level features of each region proposal. Quantitative comparisons between the proposed method and the baseline method on the large-scale DIOR dataset demonstrate that incorporating the proposed bounding box regression loss, the multi-level features fusion module, and the combination of both into the baseline method yields absolute gains of approximately 0.7%, 1.4%, and 2.2% in mAP, respectively. Comparison with the state-of-the-art methods demonstrates that the proposed method achieves state-of-the-art performance. The curves of average precision at different thresholds show that the advantage of the proposed method is more evident when the threshold of generalized IoU (or IoU) is relatively high, which means that the proposed method improves the precision of object localization. Similar conclusions are obtained on the NWPU VHR-10 dataset.
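For readers unfamiliar with the metric discussed above, the following is a minimal sketch of the standard generalized IoU (GIoU) definition and the plain 1 - GIoU regression loss; it is not the authors' implementation, and the adaptive-gradient loss proposed in the paper is not reproduced here.

```python
# Standard GIoU between two axis-aligned boxes given as (x1, y1, x2, y2).
def generalized_iou(box_a, box_b, eps=1e-9):
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # intersection area
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    iou = inter / (union + eps)
    # smallest box C enclosing both boxes
    area_c = (max(ax2, bx2) - min(ax1, bx1)) * (max(ay2, by2) - min(ay1, by1))
    # GIoU stays informative (and negative) even when the boxes do not overlap
    return iou - (area_c - union) / (area_c + eps)

def giou_loss(box_pred, box_gt):
    return 1.0 - generalized_iou(box_pred, box_gt)

# Example: two disjoint boxes have IoU = 0 but a GIoU of about -0.78.
# generalized_iou((0, 0, 1, 1), (2, 2, 3, 3))
```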

Open Access Article
An Object-Based Markov Random Field Model with Anisotropic Penalty for Semantic Segmentation of High Spatial Resolution Remote Sensing Imagery
Remote Sens. 2019, 11(23), 2878; https://doi.org/10.3390/rs11232878 - 03 Dec 2019
Abstract
The Markov random field (MRF) model has attracted a lot of attention in the field of remote sensing semantic segmentation. However, most MRF-based methods fail to capture the various interactions between different land classes because they use an isotropic potential function. To solve this problem, this paper proposes a new generalized probability inference with an anisotropic penalty for the object-based MRF model (OMRF-AP) that can distinguish the differences in the interactions between any two land classes. Specifically, an anisotropic penalty matrix is first developed to describe the relationships between different classes. Then, an expected value of the penalty information (EVPI) is developed in this inference criterion to integrate the anisotropic class-interaction information and the posterior distribution information of the OMRF model. Finally, by iteratively updating the EVPI terms of different classes, segmentation results are obtained when the iteration converges. Experiments on texture images and different remote sensing images demonstrate that our method performs better than other state-of-the-art MRF-based methods, and a post-processing scheme for the OMRF-AP model is also discussed in the experiments.
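To make the core idea concrete, the toy example below contrasts an isotropic Potts-style penalty, which charges every pair of different neighbouring labels equally, with a class-pair-specific (anisotropic) penalty matrix. The matrix values and the simple 4-connected energy are illustrative assumptions and do not reproduce the OMRF-AP inference itself.

```python
# Toy pairwise energy over a label map with a class-pair penalty matrix.
import numpy as np

def pairwise_energy(labels: np.ndarray, penalty: np.ndarray) -> float:
    """Sum the penalty over 4-connected neighbouring label pairs."""
    energy = 0.0
    for axis in (0, 1):
        a = labels
        b = np.roll(labels, -1, axis=axis)
        valid = np.ones_like(labels, dtype=bool)
        if axis == 0:
            valid[-1, :] = False     # drop the wrap-around row from np.roll
        else:
            valid[:, -1] = False     # drop the wrap-around column
        energy += penalty[a[valid], b[valid]].sum()
    return energy

# Isotropic Potts: every pair of different classes costs the same.
potts = 1.0 - np.eye(3)
# Anisotropic: some class adjacencies are penalised more strongly than others.
aniso = np.array([[0.0, 0.5, 2.0],
                  [0.5, 0.0, 1.0],
                  [2.0, 1.0, 0.0]])

# labels = np.random.randint(0, 3, size=(64, 64))
# pairwise_energy(labels, potts); pairwise_energy(labels, aniso)
```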

Open Access Article
Multiscale Spatial-Spectral Convolutional Network with Image-Based Framework for Hyperspectral Imagery Classification
Remote Sens. 2019, 11(19), 2220; https://doi.org/10.3390/rs11192220 - 23 Sep 2019
Cited by 2
Abstract
Jointly using spatial and spectral information has been widely applied to hyperspectral image (HSI) classification. In particular, convolutional neural networks (CNNs) have gained attention in recent years due to their detailed representation of features. However, most CNN-based HSI classification methods use patches as the classifier input, which limits the use of spatial neighborhood information and reduces processing efficiency in training and testing. To overcome this problem, we propose an image-based classification framework that is efficient and straightforward. Based on this framework, we propose a multiscale spatial-spectral CNN for HSIs (HyMSCN) that integrates both fused multiple-receptive-field features and multiscale spatial features at different levels. The fused features are extracted using a lightweight block called the multiple receptive field feature block (MRFF), which contains various types of dilated convolution. By fusing multiple receptive field features and multiscale spatial features, HyMSCN provides a comprehensive feature representation for classification. Experimental results on three real hyperspectral images prove the efficiency of the proposed framework, and the proposed method achieves superior performance for HSI classification.
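A PyTorch-style sketch of the multiple-receptive-field idea is given below: parallel dilated convolutions with different dilation rates are fused by concatenation and a 1x1 convolution. The channel sizes, dilation rates, and fusion choice are assumptions for illustration and are not taken from the HyMSCN architecture.

```python
# Fusing several receptive fields with dilated convolutions (illustrative).
import torch
import torch.nn as nn

class MultiReceptiveField(nn.Module):
    def __init__(self, in_ch: int, branch_ch: int = 32, dilations=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList([
            # padding = dilation keeps the spatial size identical across branches
            nn.Conv2d(in_ch, branch_ch, kernel_size=3, padding=d, dilation=d)
            for d in dilations
        ])
        self.fuse = nn.Conv2d(branch_ch * len(dilations), branch_ch, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = [torch.relu(b(x)) for b in self.branches]   # one feature map per receptive field
        return self.fuse(torch.cat(feats, dim=1))           # concatenate and fuse

# x = torch.randn(1, 103, 64, 64)   # e.g. a hyperspectral image with 103 bands
# out = MultiReceptiveField(103)(x)
```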

Open Access Article
Adaptive Contrast Enhancement for Infrared Images Based on the Neighborhood Conditional Histogram
Remote Sens. 2019, 11(11), 1381; https://doi.org/10.3390/rs11111381 - 10 Jun 2019
Abstract
In this paper, an adaptive contrast enhancement method based on the neighborhood conditional histogram is proposed to improve the visual quality of thermal infrared images. Existing block-based local contrast enhancement methods usually suffer from over-enhancement of smooth regions or the loss of some details. To address these drawbacks, we first introduce a neighborhood conditional histogram to adaptively enhance the contrast and avoid the over-enhancement caused by the original histogram. The clip-redistributed histogram of contrast-limited adaptive histogram equalization (CLAHE) is then replaced by the neighborhood conditional histogram. In addition, the local mapping function of each sub-block is updated based on the global mapping function to further eliminate block artifacts. Lastly, an optimized local contrast enhancement process, which combines both global and local enhanced results, is employed to obtain the desired enhanced result. Experiments are conducted to evaluate the performance of the proposed method, with five other methods included for comparison. Qualitative and quantitative evaluation results demonstrate that the proposed method outperforms the other block-based methods in local contrast enhancement, visual quality improvement, and noise suppression.
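Since the method above starts from CLAHE and replaces its clip-redistributed histogram, the snippet below sketches only that standard CLAHE step (clipping a sub-block histogram and redistributing the excess uniformly, then building the mapping function from the cumulative distribution) for reference; the neighborhood conditional histogram itself is not reproduced here.

```python
# Standard CLAHE clipping and mapping steps (illustrative reference only).
import numpy as np

def clip_and_redistribute(hist: np.ndarray, clip_limit: float) -> np.ndarray:
    """Cut off counts above the clip limit and spread the excess over all bins."""
    clipped = np.minimum(hist, clip_limit)
    excess = hist.sum() - clipped.sum()
    return clipped + excess / hist.size

def mapping_from_hist(hist: np.ndarray, levels: int = 256) -> np.ndarray:
    """Gray-level mapping of one sub-block: scaled cumulative distribution."""
    cdf = np.cumsum(hist) / hist.sum()
    return np.round(cdf * (levels - 1)).astype(np.uint16)
```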

Open Access Article
A Novel Effectively Optimized One-Stage Network for Object Detection in Remote Sensing Imagery
Remote Sens. 2019, 11(11), 1376; https://doi.org/10.3390/rs11111376 - 09 Jun 2019
Cited by 2
Abstract
Detecting small and densely arranged objects in wide-scale remote sensing imagery is of great significance in military and civilian applications, yet it remains challenging. To solve this problem, we propose a novel effectively optimized one-stage network (NEOON). As a fully convolutional network, NEOON consists of four parts: feature extraction, feature fusion, feature enhancement, and multi-scale detection. To extract effective features, the first part implements bottom-up and top-down coherent processing by taking successive down-sampling and up-sampling operations in conjunction with residual modules. The second part consolidates high-level and low-level features by adopting concatenation operations followed by convolutional operations to explicitly yield strong feature representation and semantic information. The third part constructs a receptive field enhancement (RFE) module and incorporates it into the front part of the network, where the information of small objects resides. The final part consists of four detectors with different sensitivities that access the fused features in parallel, enabling the network to make full use of information about objects at different scales. In addition, the focal loss is adopted for the classification branch to address the severe class imbalance inherent in one-stage methods, and Soft-NMS is introduced to preserve accurate bounding boxes in the post-processing stage, especially for densely arranged objects. A split-and-merge strategy and a multi-scale training strategy are employed in training. Thorough experiments are performed on the ACS dataset constructed by us and on the NWPU VHR-10 dataset to evaluate the performance of NEOON. Specifically, improvements of 4.77% in mAP and 5.50% in recall on the ACS dataset compared with YOLOv3 demonstrate that NEOON can effectively improve the detection accuracy of small objects in remote sensing imagery. In addition, extensive experiments and comprehensive evaluations on the NWPU VHR-10 dataset with 10 classes illustrate the superiority of NEOON in extracting spatial information from high-resolution remote sensing images.
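For reference, the snippet below is a minimal sketch of the focal loss mentioned above, which down-weights easy examples so that the many easy negatives of a one-stage detector do not dominate training; the alpha and gamma values are the commonly used defaults, not values taken from the paper.

```python
# Binary focal loss (illustrative sketch, not the NEOON implementation).
import torch
import torch.nn.functional as F

def focal_loss(logits: torch.Tensor, targets: torch.Tensor,
               alpha: float = 0.25, gamma: float = 2.0) -> torch.Tensor:
    """logits: raw scores, targets: 0/1 labels of the same shape (float)."""
    p = torch.sigmoid(logits)
    ce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p_t = p * targets + (1 - p) * (1 - targets)              # probability of the true class
    alpha_t = alpha * targets + (1 - alpha) * (1 - targets)  # class-balancing factor
    return (alpha_t * (1 - p_t) ** gamma * ce).mean()        # (1 - p_t)^gamma damps easy examples
```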

Open Access Article
Hyperspectral Image Super-Resolution by Deep Spatial-Spectral Exploitation
Remote Sens. 2019, 11(10), 1229; https://doi.org/10.3390/rs11101229 - 23 May 2019
Cited by 2 | Correction
Abstract
Limited by existing imaging sensors, hyperspectral images are characterized by high spectral resolution but low spatial resolution. The super-resolution (SR) technique, which aims to enhance the spatial resolution of the input image, is a hot topic in computer vision. In this paper, we present a hyperspectral image (HSI) SR method based on a deep information distillation network (IDN) and an intra-fusion operation. Specifically, bands are first selected at a fixed spectral interval and super-resolved by an IDN. The IDN employs distillation blocks to gradually extract abundant and efficient features for reconstructing the selected bands. Second, the unselected bands are obtained via spectral correlation, yielding a coarse high-resolution (HR) HSI. Finally, the spectrally interpolated coarse HR HSI is intra-fused with the input HSI to achieve a finer HR HSI, making further use of the spatial-spectral information that the unselected bands convey. Unlike most existing fusion-based HSI SR methods, the proposed intra-fusion operation does not require any auxiliary co-registered image as input, which makes the method more practical. Moreover, in contrast to most single-image HSI SR methods, whose performance decreases significantly as the image quality worsens, the proposed method deeply exploits the spatial-spectral information and the mapping knowledge provided by the IDN, achieving more robust performance. Experimental data and comparative analysis demonstrate the effectiveness of this method.
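The band-selection and spectral-interpolation stage described above can be sketched roughly as follows: bands sampled at a fixed spectral stride are super-resolved and the remaining bands are filled in from their spectral neighbours. The `super_resolve` callable is a placeholder standing in for the IDN, and the linear interpolation and stride value are illustrative assumptions.

```python
# Rough sketch of band selection plus spectral interpolation (illustrative).
import numpy as np
from scipy.interpolate import interp1d

def coarse_hr_hsi(lr_hsi: np.ndarray, super_resolve, stride: int = 4) -> np.ndarray:
    """lr_hsi: (bands, H, W) low-resolution cube -> coarse high-resolution cube."""
    n_bands = lr_hsi.shape[0]
    selected = np.arange(0, n_bands, stride)
    # super-resolve only the selected bands (placeholder for the IDN)
    sr = np.stack([super_resolve(lr_hsi[b]) for b in selected], axis=0)
    # reconstruct the unselected bands by interpolating along the spectral axis
    f = interp1d(selected, sr, axis=0, kind="linear", fill_value="extrapolate")
    return f(np.arange(n_bands))
```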

Open Access Article
Kernel Joint Sparse Representation Based on Self-Paced Learning for Hyperspectral Image Classification
Remote Sens. 2019, 11(9), 1114; https://doi.org/10.3390/rs11091114 - 09 May 2019
Abstract
By means of joint sparse representation (JSR) and kernel representation, kernel joint sparse representation (KJSR) models can effectively model the intrinsic nonlinear relations of hyperspectral data and better exploit the spatial neighborhood structure to improve the classification performance of hyperspectral images. However, the performance of KJSR is greatly affected by noisy or inhomogeneous pixels around the central test pixel in the spatial domain. Motivated by the idea of self-paced learning (SPL), this paper proposes a self-paced KJSR (SPKJSR) model to adaptively learn weights and sparse coefficient vectors for different neighboring pixels in the kernel-based feature space. The SPL strategy learns a weight that indicates the difficulty of each pixel within a spatial neighborhood. By assigning small weights to unimportant or complex pixels, the negative effect of inhomogeneous or noisy neighboring pixels can be suppressed; hence, SPKJSR is much more robust. Experimental results on the Indian Pines and Salinas hyperspectral data sets demonstrate that SPKJSR is much more effective than traditional JSR and KJSR models.
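The self-paced weighting idea can be illustrated with the small sketch below: neighbouring pixels with a large representation residual are treated as hard (noisy or inhomogeneous) and down-weighted. The hard and soft weighting rules shown are generic SPL formulations under an assumed age parameter, not the SPKJSR optimisation itself.

```python
# Generic self-paced learning weights over per-neighbour residuals (illustrative).
import numpy as np

def self_paced_weights(residuals: np.ndarray, age: float) -> np.ndarray:
    """Hard SPL: keep a neighbour (weight 1) only if its residual is below the
    current age parameter; the age grows over iterations, so harder samples are
    admitted gradually."""
    return (residuals < age).astype(float)

def soft_self_paced_weights(residuals: np.ndarray, age: float) -> np.ndarray:
    """Soft variant: smoothly down-weight hard neighbours instead of discarding them."""
    return np.clip(1.0 - residuals / age, 0.0, 1.0)
```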
