Search Results (3)

Search Parameters:
Keywords = extinction profile (EP)

16 pages, 11996 KiB  
Article
Deep Learning Spatial-Spectral Classification of Remote Sensing Images by Applying Morphology-Based Differential Extinction Profile (DEP)
by Nafiseh Kakhani, Mehdi Mokhtarzade and Mohammad Javad Valadan Zoej
Electronics 2021, 10(23), 2893; https://doi.org/10.3390/electronics10232893 - 23 Nov 2021
Cited by 3 | Viewed by 2657
Abstract
As remote sensing technology has improved in recent years, the spatial resolution of satellite images has become finer, enabling precise analysis of small, complex objects in a scene. Thus, the need for new, efficient algorithms such as spatial-spectral classification methods is growing. One of the most successful approaches is based on the extinction profile (EP), which can extract contextual information from remote sensing data. Moreover, deep learning classifiers have drawn attention in the remote sensing community in the past few years, and recent progress has shown the effectiveness of deep learning at solving different problems, particularly segmentation tasks. This paper proposes a novel approach based on a new concept, the differential extinction profile (DEP). DEP makes it possible to build an input feature vector that carries both spectral and spatial information. The input vector is then fed into a proposed straightforward deep-learning-based classifier to produce a thematic map. The approach is evaluated on two urban datasets from the Pleiades and WorldView-2 satellites. To demonstrate the capabilities of the suggested approach, the final results are compared against other classification strategies with different input vectors and various common classifiers, such as support vector machines (SVM) and random forests (RF). The proposed approach yields significant improvement on three criteria: overall accuracy, Kappa coefficient, and total disagreement.
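The paper's own EP and classifier implementations are not reproduced here, but the differential step is analogous to the differential morphological profile: adjacent levels of a precomputed extinction-profile stack are differenced, and the result is stacked with the spectral bands. A minimal NumPy sketch, assuming a hypothetical (levels, H, W) EP stack and a (bands, H, W) spectral cube (the extinction filtering itself, built on max-/min-trees, is omitted):

```python
import numpy as np

def differential_profile(ep_stack: np.ndarray) -> np.ndarray:
    """Differences between adjacent extinction-profile levels.

    ep_stack: (levels, H, W) array of extinction-filtered images,
    ordered from least to most aggressive filtering.
    Returns a (levels - 1, H, W) differential stack.
    """
    return np.abs(np.diff(ep_stack, axis=0))

def build_feature_vector(spectral: np.ndarray, dep: np.ndarray) -> np.ndarray:
    """Stack spectral bands (B, H, W) with DEP features (D, H, W)
    into per-pixel vectors of shape (H * W, B + D)."""
    feats = np.concatenate([spectral, dep], axis=0)
    return feats.reshape(feats.shape[0], -1).T

# Hypothetical shapes: 4 spectral bands, an EP with 6 levels.
spectral = np.random.rand(4, 128, 128)
ep_stack = np.random.rand(6, 128, 128)
dep = differential_profile(ep_stack)     # (5, 128, 128)
X = build_feature_vector(spectral, dep)  # (16384, 9) -> classifier input
```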

17 pages, 3797 KiB  
Article
Hyperspectral and LiDAR Data Fusion Classification Using Superpixel Segmentation-Based Local Pixel Neighborhood Preserving Embedding
by Yunsong Li, Chiru Ge, Weiwei Sun, Jiangtao Peng, Qian Du and Keyan Wang
Remote Sens. 2019, 11(5), 550; https://doi.org/10.3390/rs11050550 - 6 Mar 2019
Cited by 13 | Viewed by 5203
Abstract
A new method, superpixel segmentation-based local pixel neighborhood preserving embedding (SSLPNPE), is proposed for the fusion of hyperspectral and light detection and ranging (LiDAR) data, based on extinction profiles (EPs), superpixel segmentation, and local pixel neighborhood preserving embedding (LPNPE). A new workflow is proposed to calibrate Goddard's LiDAR, Hyperspectral and Thermal (G-LiHT) data, which allows the method to be applied to actual data. Specifically, EP features are extracted from both sources. Then, the derived features of each source are fused by SSLPNPE. Using the labeled samples, the final label assignment is produced by a classifier. For both the open standard experimental data and the actual data, experimental results show that the proposed method is fast and effective for hyperspectral and LiDAR data fusion.
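LPNPE has no off-the-shelf implementation, so the sketch below covers only the superpixel stage of such a pipeline: segmenting the feature stack with scikit-image's SLIC and averaging features within each superpixel. The segment count and compactness value are assumptions for illustration, not values from the paper (scikit-image >= 0.19 is assumed for the channel_axis argument):

```python
import numpy as np
from skimage.segmentation import slic

def superpixel_mean_features(features: np.ndarray,
                             n_segments: int = 500) -> np.ndarray:
    """Replace each pixel's feature vector with the mean over its superpixel.

    features: (H, W, D) stack, e.g., EP features from HSI or LiDAR.
    Returns an (H, W, D) spatially smoothed stack.
    """
    labels = slic(features, n_segments=n_segments,
                  compactness=0.1,     # low value: hypothetical, tuned for feature data
                  channel_axis=-1)     # one superpixel id per pixel
    out = np.empty_like(features)
    for sp in np.unique(labels):
        mask = labels == sp
        out[mask] = features[mask].mean(axis=0)
    return out

# Hypothetical 64x64 scene with 10 EP features per pixel.
feats = np.random.rand(64, 64, 10)
smoothed = superpixel_mean_features(feats)
```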

20 pages, 22826 KiB  
Article
Hyperspectral and LiDAR Fusion Using Deep Three-Stream Convolutional Neural Networks
by Hao Li, Pedram Ghamisi, Uwe Soergel and Xiao Xiang Zhu
Remote Sens. 2018, 10(10), 1649; https://doi.org/10.3390/rs10101649 - 16 Oct 2018
Cited by 89 | Viewed by 9439
Abstract
Recently, convolutional neural networks (CNNs) have been intensively investigated for the classification of remote sensing data, since they extract invariant, abstract features well suited to classification. In this paper, a novel framework is proposed for the fusion of hyperspectral images and LiDAR-derived elevation data based on CNNs and composite kernels. First, extinction profiles are applied to both data sources in order to extract spatial and elevation features from the hyperspectral and LiDAR-derived data, respectively. Second, a three-stream CNN is designed to extract informative spectral, spatial, and elevation features individually from both available sources. The combination of extinction profiles and CNN features makes it possible to benefit jointly from low-level and high-level features and thereby improve classification performance. To fuse the heterogeneous spectral, spatial, and elevation features extracted by the CNN, a multi-sensor composite kernels (MCK) scheme is designed instead of a simple stacking strategy. This scheme achieves higher spectral, spatial, and elevation separability of the extracted features and effectively performs multi-sensor data fusion in kernel space. In this context, a support vector machine and an extreme learning machine, each with its composite kernel version, are employed to produce the final classification result. The proposed framework is evaluated on two widely used data sets with different characteristics: an urban data set captured over Houston, USA, and a rural data set captured over Trento, Italy. The framework yields the highest overall accuracies of 92.57% and 97.91% for the Houston and Trento data sets, respectively. Experimental results confirm that the proposed fusion framework produces competitive classification accuracy in both urban and rural areas, and significantly mitigates salt-and-pepper noise in classification maps.
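The authors' exact MCK formulation is not given here; a standard weighted-sum composite kernel over per-stream RBF kernels, fed to scikit-learn's precomputed-kernel SVM, illustrates the general idea. The stream weights, feature sizes, and gamma value below are hypothetical:

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVC

def composite_kernel(blocks_a, blocks_b, weights, gamma=1.0):
    """Weighted sum of RBF kernels over heterogeneous feature blocks.

    blocks_a / blocks_b: lists of (n_a, d_i) / (n_b, d_i) arrays, one per
    stream (e.g., spectral, spatial, and elevation CNN features).
    """
    return sum(w * rbf_kernel(a, b, gamma=gamma)
               for w, a, b in zip(weights, blocks_a, blocks_b))

# Hypothetical CNN features for 3 streams (spectral, spatial, elevation).
rng = np.random.default_rng(0)
train = [rng.normal(size=(100, 64)) for _ in range(3)]
test = [rng.normal(size=(20, 64)) for _ in range(3)]
y = rng.integers(0, 4, size=100)
w = [0.5, 0.3, 0.2]                        # per-stream weights, sum to 1

K_train = composite_kernel(train, train, w)  # (n_train, n_train) Gram matrix
K_test = composite_kernel(test, train, w)    # (n_test, n_train)

clf = SVC(kernel="precomputed").fit(K_train, y)
pred = clf.predict(K_test)
```

With a precomputed kernel, the SVM only ever sees Gram matrices, so each stream can be weighted (or given its own kernel) without restacking the raw features, which is what lets a composite-kernel scheme fuse heterogeneous sources in kernel space.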
