Special Issue "Classification and Feature Extraction Based on Remote Sensing Imagery"

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Remote Sensing Image Processing".

Deadline for manuscript submissions: closed (31 March 2021).

Special Issue Editors

Dr. Bryan Gardiner
Guest Editor
Lecturer/Course Director MSc Data Science, Ulster University, Northland Rd, Derry, Northern Ireland, BT48 8HE
Interests: digital image processing; computer vision; feature extraction; pattern recognition; object classification; machine learning; cognitive robotics; multi-modal sensing
Dr. Chris McGonigle
Guest Editor
Senior Lecturer, Ulster University, Cromore Road, Coleraine, Northern Ireland, BT52 1SA
Interests: benthic habitat mapping; benthic ecology; marine geophysics; landscape ecology

Special Issue Information

Dear Colleagues,

Classification and feature extraction for remote sensing image analysis are applicable to a wide range of environments and ecological systems, at a range of spatial and temporal scales. Emerging methodological approaches include big data analytics, deep learning, machine learning, and object-based image analysis (OBIA), many of which are now commonplace in contexts ranging from geomorphological time-lapse analysis to the broad-scale characterization of terrestrial and aquatic ecosystems. These approaches are enabling environmental, earth, and marine scientists to pursue research into vitally important areas such as climate change, susceptibility to geohazards, biodiversity loss, and habitat fragmentation.

For remote sensing image analysis, feature extraction and classification are applicable at the scale of the landscape (e.g., geomorphometry) and also in ground validation where this is achieved by optical means (e.g., photoquadrats). Boundaries between these spatial scales of observation and analysis are increasingly blurred by developments in sensors and computing power, allowing larger areas to be mapped at higher resolutions. Independent of spatial scale, feature extraction from landscape-level features and from ground validation imagery are united by their potential for automation in the analytical process.

In spite of recent technological advances, a great challenge remains in the development of new computational procedures for gaining a more accurate representation of complex environments. Recent breakthroughs in computer vision methods and deep learning models for image fusion, image classification, and object detection assist with obtaining a much more accurate model of environmental features than could be achieved previously; however, further investigation is required on the development of new algorithms for automatic feature extraction, monitoring, and integration of high-quality multi-modal data.

This Special Issue focuses on feature extraction and classification using remote sensing data and novel machine learning techniques. It aims to explore the potential of new ideas and technologies from the field of machine learning and pattern recognition in remote sensing applications in a variety of different environments and spatial scales (from landscape geomorphometry to ground validation) and to further investigate the overlap between remote sensing and computer vision/image analysis.

This Special Issue will include, but not be limited to, the following topics:

  • Feature extraction approaches related to the characterization of terrestrial and marine ecosystems;
  • Novel technologies or procedures for dynamic acquisition and processing of 3D point clouds, from a variety of sensors (e.g., LiDAR, laser line scanner, multibeam echosounder, photogrammetry);
  • Pattern recognition/machine learning/deep learning for remote sensing;
  • Innovative approaches to the classification of remote sensing data, from the scale of landscapes to ground validation data;
  • Novel approaches for the quantification of biodiversity from remote sensing data;
  • Automated approaches to analysis of ecological information from photographs.

Contributions with an emphasis on open-source code and data sharing are particularly welcome.

Dr. Bryan Gardiner
Dr. Chris McGonigle
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website, then proceeding to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • feature extraction
  • object detection
  • 3D point clouds
  • machine learning
  • remote sensing
  • deep learning
  • laser line scanner (LLS)
  • benthic mapping

Published Papers (7 papers)


Research


Article
Horizon Line Detection in Historical Terrestrial Images in Mountainous Terrain Based on the Region Covariance
Remote Sens. 2021, 13(9), 1705; https://doi.org/10.3390/rs13091705 - 28 Apr 2021
Abstract
Horizon line detection is an important prerequisite for numerous tasks, including the automatic estimation of unknown camera parameters for images taken in mountainous terrain. In contrast to modern images, historical photographs contain no color information and have reduced image quality. In particular, missing color information in combination with high alpine terrain, partly covered with snow or glaciers, poses a challenge for automatic horizon detection. Therefore, a robust and accurate approach for horizon line detection in historical monochrome images of mountainous terrain was developed. For the detection of potential horizon pixels, an edge detector is learned based on the region covariance as a texture descriptor. In combination with a shortest-path search, the horizon in monochrome images is accurately detected. We evaluated our approach on 250 selected historical monochrome images dating back, on average, to 1950. In 85% of the images, the horizon was detected with an error of less than 10 pixels. To further evaluate performance, an additional dataset consisting of modern color images was used. Our method, using only grayscale information, achieves results comparable with methods based on color information. In comparison with other methods using only grayscale information, the accuracy of the detected horizons is significantly improved. Furthermore, the influence of color, the choice of neighborhood for the shortest-path calculation, and the patch size for the calculation of the region covariance were investigated. The results show that both the availability of color information and increasing the patch size for the calculation of the region covariance improve the accuracy of the detected horizons.
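As a minimal illustration of the region covariance texture descriptor mentioned in the abstract (a sketch, not the authors' implementation; the three feature channels and the patch size here are illustrative choices), the descriptor for a patch is the covariance matrix of per-pixel feature vectors:

```python
import numpy as np

def region_covariance(img, cx, cy, half=7):
    """Covariance of per-pixel features (intensity, |dI/dx|, |dI/dy|)
    over a (2*half+1)^2 patch centred on (cx, cy)."""
    patch = img[cy - half:cy + half + 1, cx - half:cx + half + 1].astype(float)
    gy, gx = np.gradient(patch)  # row- and column-direction gradients
    feats = np.stack([patch.ravel(),
                      np.abs(gx).ravel(),
                      np.abs(gy).ravel()], axis=1)
    return np.cov(feats, rowvar=False)  # 3x3 covariance descriptor

rng = np.random.default_rng(0)
img = rng.random((64, 64))        # stand-in for a monochrome photograph
C = region_covariance(img, 32, 32)
print(C.shape)  # (3, 3)
```

An edge classifier can then be trained on such descriptors to score candidate horizon pixels before the shortest-path search.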
(This article belongs to the Special Issue Classification and Feature Extraction Based on Remote Sensing Imagery)

Article
Examining the Links between Multi-Frequency Multibeam Backscatter Data and Sediment Grain Size
Remote Sens. 2021, 13(8), 1539; https://doi.org/10.3390/rs13081539 - 15 Apr 2021
Abstract
Acoustic methods are routinely used to provide broad-scale information on the geographical distribution of benthic marine habitats and sedimentary environments. Although single-frequency multibeam echosounder surveys have dominated seabed characterisation for decades, multifrequency approaches are now gaining favour as a means of capturing different frequency responses from the same seabed type. The aim of this study is to develop a robust modelling framework for testing the potential application and value of multifrequency (30, 95, and 300 kHz) multibeam backscatter responses for characterizing sediment grain size in an area with strong geomorphological gradients and benthic ecological variability. We fit generalized linear models to the multibeam backscatter and its derivatives to examine the explanatory power of single-frequency and multifrequency models with respect to the mean sediment grain size obtained from grab samples. A strong and statistically significant (p < 0.05) correlation between the mean backscatter and the absolute values of the mean sediment grain size was noted. The root mean squared error (RMSE) values identified the 30 kHz model as the best-performing model, explaining the most variation (84.3%) in the mean grain size (p < 0.05, adjusted r2 = 0.82). Overall, the single low-frequency sources showed a marginal gain over the multifrequency model, with the 30 kHz model driving the significance of the multifrequency model, and the inclusion of the higher frequencies diminished the level of agreement. We recommend further detailed and sufficient ground-truth data to better predict sediment properties, discriminate benthic habitats, and enhance the reliability of multifrequency backscatter data for the monitoring and management of marine protected areas.
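The single-frequency modelling step can be sketched as an ordinary least squares fit of mean grain size against mean backscatter, with RMSE and explained variance as the comparison metrics. The data below are synthetic and purely illustrative of the workflow, not the study's measurements:

```python
import numpy as np

# Synthetic example: mean backscatter (dB) per grab-sample station and a
# corresponding mean grain size. All values are illustrative.
rng = np.random.default_rng(1)
backscatter = rng.uniform(-40, -15, 50)                      # e.g. 30 kHz mean backscatter, dB
grain_size = 0.12 * backscatter + 2.0 + rng.normal(0, 0.3, 50)

# Ordinary least squares: grain_size ~ intercept + backscatter
X = np.column_stack([np.ones_like(backscatter), backscatter])
coef, *_ = np.linalg.lstsq(X, grain_size, rcond=None)
pred = X @ coef

ss_res = np.sum((grain_size - pred) ** 2)
ss_tot = np.sum((grain_size - grain_size.mean()) ** 2)
r2 = 1 - ss_res / ss_tot                       # variance explained
rmse = np.sqrt(np.mean((grain_size - pred) ** 2))
print(f"slope={coef[1]:.3f}, r2={r2:.2f}, RMSE={rmse:.2f}")
```

A multifrequency model would simply add further backscatter columns (95 and 300 kHz) to `X` and compare RMSE across candidate models.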

Article
Development of a Parcel-Level Land Boundary Extraction Algorithm for Aerial Imagery of Regularly Arranged Agricultural Areas
Remote Sens. 2021, 13(6), 1167; https://doi.org/10.3390/rs13061167 - 18 Mar 2021
Abstract
The boundary extraction of objects from remote sensing imagery has long been an important research issue. The automation of farmland boundary extraction is particularly in demand for rapid updates of the digital farm maps in Korea. This study aimed to develop a boundary extraction algorithm by systematically combining a series of computational and mathematical methods, including the Suzuki85 contour-tracing algorithm, Canny edge detection, and the Hough transform. Since most irregular farmlands in Korea have been consolidated into large rectangular arrangements for agricultural productivity, the boundary between two adjacent land parcels was assumed to be a straight line. The developed algorithm was applied to six different study sites to evaluate its performance at the boundary level and the sectional-area level. The correctness, completeness, and quality of the extracted boundaries were approximately 80.7%, 79.7%, and 67.0% at the boundary level, and 89.7%, 90.0%, and 81.6% at the area-based level, respectively. These performances are comparable with the results of previous studies on similar subjects; thus, the algorithm can be used for land parcel boundary extraction. The developed algorithm tended to subdivide land parcels with distinctive features, such as greenhouse structures or isolated irregular land parcels within the land blocks. The developed algorithm is currently applicable only to regularly arranged land parcels, and further study coupled with a decision tree or artificial intelligence may allow boundary extraction from irregularly shaped land parcels.
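The straight-line assumption is what makes the Hough transform suitable here. The toy sketch below implements only the Hough voting step in NumPy (the actual pipeline also uses Suzuki85 contour tracing and Canny edge detection, typically via OpenCV); the edge map is synthetic:

```python
import numpy as np

def hough_lines(edges, n_theta=180):
    """Minimal Hough transform: vote each edge pixel into (rho, theta)
    bins and return the strongest line. A toy stand-in for cv2.HoughLines."""
    h, w = edges.shape
    thetas = np.deg2rad(np.arange(n_theta))
    diag = int(np.ceil(np.hypot(h, w)))          # max possible |rho|
    acc = np.zeros((2 * diag, n_theta), dtype=int)
    ys, xs = np.nonzero(edges)
    for theta_idx, t in enumerate(thetas):
        # rho = x*cos(theta) + y*sin(theta), shifted so indices are >= 0
        rhos = (xs * np.cos(t) + ys * np.sin(t)).round().astype(int) + diag
        np.add.at(acc[:, theta_idx], rhos, 1)
    rho_idx, theta_idx = np.unravel_index(acc.argmax(), acc.shape)
    return rho_idx - diag, np.rad2deg(thetas[theta_idx])

# A synthetic edge map with one vertical parcel boundary at x = 20.
edges = np.zeros((50, 50), dtype=bool)
edges[:, 20] = True
rho, theta_deg = hough_lines(edges)
print(rho, theta_deg)  # expect rho = 20, theta = 0 (a vertical line)
```

The detected (rho, theta) pairs then become candidate parcel boundaries to be intersected into rectangular land parcels.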

Article
MFANet: A Multi-Level Feature Aggregation Network for Semantic Segmentation of Land Cover
Remote Sens. 2021, 13(4), 731; https://doi.org/10.3390/rs13040731 - 17 Feb 2021
Abstract
Detailed information regarding land utilization/cover is a valuable resource in various fields. In recent years, remote sensing images, especially aerial images, have increased in resolution and in temporal and spatial coverage, and because objects of the same category may yield different spectra, relying on spectral features alone is often insufficient to accurately segment the target objects. In convolutional neural networks, down-sampling operations are usually used to extract abstract semantic features, which leads to loss of detail and fuzzy edges. To solve these problems, this paper proposes a Multi-level Feature Aggregation Network (MFANet), which improves on two aspects: deep feature extraction and up-sampling feature fusion. First, the proposed Channel Feature Compression module extracts deep features and filters redundant channel information from the backbone to optimize the learned context. Second, the proposed Multi-level Feature Aggregation Upsample module exploits, in a nested fashion, the idea that high-level features provide guidance information for low-level features, which is of great significance for restoring spatial detail in high-resolution remote sensing images. Finally, the proposed Channel Ladder Refinement module is used to refine the restored high-resolution feature maps. Experimental results show that the proposed method achieves state-of-the-art performance (86.45% mean IoU) on the LandCover dataset.

Article
Dual-Weighted Kernel Extreme Learning Machine for Hyperspectral Imagery Classification
Remote Sens. 2021, 13(3), 508; https://doi.org/10.3390/rs13030508 - 01 Feb 2021
Abstract
Due to its excellent performance in high-dimensional spaces, the kernel extreme learning machine has been widely used in pattern recognition and machine learning. In this paper, we propose a dual-weighted kernel extreme learning machine for hyperspectral imagery classification. First, diverse spatial features are extracted by guided filtering. Then, the spatial and spectral features are combined in a weighted kernel summation form. Finally, the weighted extreme learning machine is employed for the hyperspectral imagery classification task. This dual-weighted framework guarantees that subtle spatial features are extracted, while the importance of minority samples is emphasized. Experiments carried out on three public data sets demonstrate that the proposed dual-weighted kernel extreme learning machine (DW-KELM) performs better than other kernel methods in terms of classification accuracy and can achieve satisfactory results.
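A rough sketch of the weighted kernel summation and the standard kernel ELM solution (output weights beta = (I/C + K)^-1 Y). This is illustrative only: the sample-level weighting of minority classes in the full dual-weighted method is omitted, and the toy features stand in for spectral bands and guided-filter outputs:

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    """Gaussian RBF kernel between row-vector sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kelm_fit_predict(K_train, Y, K_test, C=100.0):
    """Kernel ELM: solve (I/C + K) beta = Y, then score with K_test @ beta."""
    n = K_train.shape[0]
    beta = np.linalg.solve(np.eye(n) / C + K_train, Y)
    return K_test @ beta

# Toy two-class problem with separate "spectral" and "spatial" features.
rng = np.random.default_rng(2)
n = 40
labels = np.repeat([0, 1], n // 2)
spec = rng.normal(labels[:, None] * 2.0, 0.5, (n, 5))  # stand-in spectral features
spat = rng.normal(labels[:, None] * 2.0, 0.5, (n, 3))  # stand-in spatial features
Y = np.eye(2)[labels]                                  # one-hot targets

mu = 0.6  # kernel weight balancing the two feature sources
K = mu * rbf_kernel(spec, spec) + (1 - mu) * rbf_kernel(spat, spat)
scores = kelm_fit_predict(K, Y, K)
acc = (scores.argmax(1) == labels).mean()
print(f"training accuracy: {acc:.2f}")
```

The weighted sum of positive semi-definite kernels is itself a valid kernel, which is what lets spectral and spatial similarity be fused before the single linear solve.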

Article
PolSAR Image Classification Using a Superpixel-Based Composite Kernel and Elastic Net
Remote Sens. 2021, 13(3), 380; https://doi.org/10.3390/rs13030380 - 22 Jan 2021
Abstract
The presence of speckle and the absence of discriminative features make it difficult for pixel-level polarimetric synthetic aperture radar (PolSAR) image classification to achieve accurate and coherent interpretation results, especially when available training samples are limited. To this end, this paper presents a composite kernel-based elastic net classifier (CK-ENC) for better PolSAR image classification. First, based on superpixel segmentation at different scales, three types of features are extracted to capture more discriminative information, thereby effectively suppressing the interference of speckle and achieving better target contour preservation. Then, a composite kernel (CK) is constructed to map these features and effectively implement feature fusion under the kernel framework. The CK exploits the correlation and diversity between different features to improve their representation and discrimination capabilities. Finally, an ENC integrated with the CK (CK-ENC) is proposed to achieve better PolSAR image classification performance with limited training samples. Experimental results on airborne and spaceborne PolSAR datasets demonstrate that the proposed CK-ENC achieves better visual coherence and yields higher classification accuracies than other state-of-the-art methods, especially in the case of limited training samples.

Other


Letter
JL-GFDN: A Novel Gabor Filter-Based Deep Network Using Joint Spectral-Spatial Local Binary Pattern for Hyperspectral Image Classification
Remote Sens. 2020, 12(12), 2016; https://doi.org/10.3390/rs12122016 - 23 Jun 2020
Abstract
The traditional local binary pattern (LBP; hereinafter also called the two-dimensional local binary pattern, 2D-LBP) is unable to depict the spectral characteristics of a hyperspectral image (HSI). To address this deficiency, this paper develops a joint spectral-spatial 2D-LBP feature (J2D-LBP) by averaging three different 2D-LBP features in a three-dimensional hyperspectral data cube. Subsequently, J2D-LBP is added into the Gabor filter-based deep network (GFDN), and a novel classification method, JL-GFDN, is proposed. Different from the original GFDN framework, JL-GFDN further fuses the spectral and spatial features together for HSI classification. Three real data sets are adopted to evaluate the effectiveness of JL-GFDN, and the experimental results verify that (i) JL-GFDN has better classification accuracy than the original GFDN, and (ii) J2D-LBP is more effective for HSI classification than the traditional 2D-LBP.
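A minimal 8-neighbour 2D-LBP, the building block that J2D-LBP averages over slices of the hyperspectral cube. This is an illustrative sketch of the basic operator, not the paper's implementation:

```python
import numpy as np

def lbp_2d(img):
    """Basic 8-neighbour 2D-LBP: threshold each neighbour against the
    centre pixel and pack the results into an 8-bit code."""
    c = img[1:-1, 1:-1]  # interior pixels (centres)
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(shifts):
        nb = img[1 + dy:img.shape[0] - 1 + dy,
                 1 + dx:img.shape[1] - 1 + dx]
        code |= ((nb >= c).astype(np.uint8) << bit)
    return code

# A 3x3 toy patch: corners (9) exceed the centre (5), edges (1) do not,
# so the corner bits (0, 2, 4, 6) are set in the single output code.
img = np.array([[9., 1., 9.],
                [1., 5., 1.],
                [9., 1., 9.]])
print(lbp_2d(img))  # one code for the single interior pixel
```

Computing such codes band-wise through the spectral dimension (and averaging, as J2D-LBP does) is what lets the descriptor carry spectral as well as spatial texture.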
