Special Issue "Urban Features Extraction from UAV Remote Sensing Data and Images"

A special issue of Drones (ISSN 2504-446X).

Deadline for manuscript submissions: 31 July 2023

Special Issue Editors

Dr. Kate Saenko
Department of Computer Science, Boston University, Boston, MA 02215, USA
Interests: computational intelligence; image processing; machine learning; artificial intelligence

Dr. Kimhung Pho
Faculty of Mathematics and Statistics, Ton Duc Thang University, Ho Chi Minh City, Vietnam
Interests: machine learning; data mining; artificial intelligence

Prof. Dr. Jie Yuan
School of Information Engineering, Minzu University of China, Beijing 100081, China
Interests: computer simulation; data mining

Special Issue Information

Dear Colleagues,

For urban planners and decision-makers, the built-up urban area is an important reference for assessing a city’s level of development and planning future changes. For this reason, remote sensing technologies play an important role in efficiently extracting multiple urban characteristics. Urban feature extraction covers a wide range of infrastructure, such as roads, building footprints, bridges, railroads, and airports. Remote sensing encompasses a large spectrum of platforms for acquiring imagery, including satellites, unmanned aerial vehicles (UAVs), and survey aircraft. This Special Issue aims to capture the latest developments in remote sensing platforms, algorithms, and methodologies for acquiring and processing images and data to extract and model urban features. We welcome submissions that provide the scientific community with the most recent advancements in urban feature extraction and modeling from remote sensing data and images.

Potential topics include but are not limited to:

  • Fusion and integration of multisensor data;
  • Image segmentation and classification algorithms to extract urban features;
  • Machine and deep learning methods for extracting urban features;
  • Image acquisition using UAV platforms;
  • Big data handling to extract urban features;
  • Image enhancement and correction.

Dr. Kate Saenko
Dr. Kimhung Pho
Prof. Dr. Jie Yuan
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Drones is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2000 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.


Keywords

  • multisensor data
  • image segmentation and classification algorithms
  • machine and deep learning methods
  • image acquisition using UAV platforms
  • big data handling
  • image enhancement and correction

Published Papers (1 paper)

Article
Spectral-Spatial Attention Rotation-Invariant Classification Network for Airborne Hyperspectral Images
Drones 2023, 7(4), 240; https://doi.org/10.3390/drones7040240 - 29 Mar 2023
Abstract
An airborne hyperspectral imaging system is typically mounted on an aircraft or unmanned aerial vehicle (UAV) to capture ground scenes from an overhead perspective. Because the aircraft or UAV rotates, the same region of land cover may be imaged from different viewing angles. While humans can accurately recognize the same objects from different viewing angles, classification methods based on spectral-spatial features for airborne hyperspectral images exhibit significant errors. Existing methods primarily incorporate image or feature rotation angles into the network to improve its accuracy in classifying rotated images. However, these methods introduce additional parameters that must be set manually, which may not be optimal for all applications. This paper presents a spectral-spatial attention rotation-invariant classification network for airborne hyperspectral images that addresses this issue without requiring additional rotation-angle parameters. The proposed framework comprises three modules: a band selection module, a local spatial feature enhancement module, and a lightweight feature enhancement module. The band selection module suppresses redundant spectral channels, while the local spatial feature enhancement module generates a multi-angle parallel feature encoding network to improve the discrimination of the center pixel. The multi-angle parallel feature encoding network also learns the positional relationships between pixels, thereby maintaining rotation invariance. The lightweight feature enhancement module, the last layer of the framework, enhances important features and suppresses insignificant ones. A dynamically weighted cross-entropy loss is used as the loss function; it adjusts the model’s sensitivity to samples of different categories according to the model’s output in each training epoch. The proposed method is evaluated on five airborne hyperspectral image datasets covering urban and agricultural regions. Compared with other state-of-the-art classification algorithms, the method achieves the best classification accuracy and effectively extracts rotation-invariant features for urban and rural areas.
(This article belongs to the Special Issue Urban Features Extraction from UAV Remote Sensing Data and Images)
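The abstract describes the dynamically weighted cross-entropy loss only at a high level. The following PyTorch sketch shows one plausible reading, under stated assumptions: per-class weights are recomputed after each epoch from the model's mean softmax confidence on the true class, so poorly recognized categories are up-weighted in the next epoch. The class name DynamicallyWeightedCE and the specific re-weighting rule are illustrative assumptions, not the authors' exact formulation.

    import torch
    import torch.nn.functional as F

    class DynamicallyWeightedCE:
        """Hypothetical sketch: cross-entropy whose per-class weights are
        updated after every epoch from the model's true-class confidence."""

        def __init__(self, num_classes):
            self.num_classes = num_classes
            self.weights = torch.ones(num_classes)      # uniform at the start
            self._conf_sum = torch.zeros(num_classes)   # summed true-class confidence
            self._count = torch.zeros(num_classes)      # samples seen per class

        def __call__(self, logits, targets):
            # Record the softmax probability the model assigns to the true class.
            probs = logits.softmax(dim=1).detach().cpu()
            idx = targets.cpu()
            true_conf = probs.gather(1, idx.unsqueeze(1)).squeeze(1)
            self._conf_sum.index_add_(0, idx, true_conf)
            self._count.index_add_(0, idx, torch.ones_like(true_conf))
            return F.cross_entropy(logits, targets,
                                   weight=self.weights.to(logits.device))

        def end_epoch(self):
            # Up-weight the classes the model was least confident about,
            # then reset the accumulators for the next epoch.
            mean_conf = self._conf_sum / self._count.clamp(min=1.0)
            w = 1.0 - mean_conf + 1e-3
            self.weights = w * self.num_classes / w.sum()  # keep mean weight at 1
            self._conf_sum.zero_()
            self._count.zero_()

In a training loop, the loss object would be called once per batch, and end_epoch() once per epoch, so that the next epoch's batches are weighted by the updated vector. This matches the abstract's statement that sensitivity is adjusted "according to the output in the training epoch", but the exact weighting formula in the paper may differ.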