Urban Features Extraction from UAV Remote Sensing Data and Images

A special issue of Drones (ISSN 2504-446X).

Deadline for manuscript submissions: closed (31 July 2023)

Special Issue Editors


Guest Editor
Department of Computer Science, Boston University, Boston, MA 02215, USA
Interests: computational intelligence; image processing; machine learning; artificial intelligence

Guest Editor
Faculty of Mathematics and Statistics, Ton Duc Thang University, Ho Chi Minh City, Vietnam
Interests: machine learning; data mining; artificial intelligence

Guest Editor
School of Information Engineering, Minzu University of China, Beijing 100081, China
Interests: computer simulation; data mining

Special Issue Information

Dear Colleagues,

For urban planners and decision-makers, the built-up urban area is an important reference for assessing a city’s level of development and planning future changes. For this reason, remote sensing technologies play an important role in efficiently extracting multiple urban characteristics. Urban feature extraction covers a wide range of infrastructure, such as roads, building footprints, bridges, railroads, and airports. Remote sensing encompasses a large spectrum of platforms for acquiring imagery, including satellites, unmanned aerial vehicles (UAVs), and survey aircraft. This Special Issue aims to capture the latest developments in remote sensing platforms, algorithms, and methodologies for acquiring and processing images/data to extract and model urban features. We welcome submissions that provide the scientific community with the most recent advancements in urban feature extraction and modeling from remote sensing data and images.

Potential topics include but are not limited to:

  • Fusion and integration of multisensor data;
  • Image segmentation and classification algorithms to extract urban features;
  • Machine and deep learning methods for extracting urban features;
  • Image acquisition using UAV platforms;
  • Big data handling to extract urban features;
  • Image enhancement and correction.

Dr. Kate Saenko
Dr. Kimhung Pho
Prof. Dr. Jie Yuan
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website, then completing the online submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Drones is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

 

Keywords

  • multisensor data
  • image segmentation and classification algorithms
  • machine and deep learning methods
  • image acquisition using UAV platforms
  • big data handling
  • image enhancement and correction

Published Papers (3 papers)


Research

16 pages, 11394 KiB  
Article
Assessment of Spatial Patterns of Backyard Shacks Using Landscape Metrics
by Naledzani Mudau and Paidamwoyo Mhangara
Drones 2023, 7(9), 561; https://doi.org/10.3390/drones7090561 - 1 Sep 2023
Abstract
Abstract
Urban informality in developing economies like South Africa takes two forms: freestanding shacks built in informal settlements, and backyard shacks built in the yard of a formal house. The latter is evident in established townships around South African cities. In contrast to freestanding shacks, the number of backyard shacks has increased significantly in recent years. This study assessed the spatial patterns of backyard shacks in a formal settlement containing low-cost government houses (LCHs) using Unmanned Aerial Vehicle (UAV) products and landscape metrics. The backyard shacks were mapped using Object-Based Image Analysis (OBIA), which uses height information, a vegetation index, and radiometric values. We assessed the effectiveness of rule-based and Random Forest (RF) OBIA techniques in detecting formal and informal structures. Informal structures were further classified as backyard shacks using spatial analysis. The spatial patterns of backyard shacks were assessed using eight shape and aggregation landscape metrics. The shape metrics show that the backyard shacks are primarily square, as confirmed by a higher shape index value and a lower fractal dimension index value. The contiguity index of backyard shack patches is 0.6. The shape metric values of backyard shacks were almost the same as those of formal and informal dwelling structures. The aggregation metric values of backyard shacks distinguished them from formal and informal structures more clearly than the shape metrics did. The aggregation metrics show that the backyard shacks are less connected, less dense, and more isolated from each other than formal houses and freestanding shacks. The Shannon’s Diversity Index and Simpson’s Evenness Index values of informal settlements and of formal areas with backyard shacks are almost the same. The results achieved in this study can be used to understand and manage informality in formal settlements.
(This article belongs to the Special Issue Urban Features Extraction from UAV Remote Sensing Data and Images)
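The metrics named in the abstract (shape index, fractal dimension index, Shannon's Diversity Index, Simpson's Evenness Index) follow standard FRAGSTATS-style definitions. As a minimal sketch, using the conventional formulas rather than the authors' own implementation, the raster shape index and the two diversity metrics can be computed as:

```python
import numpy as np

def shape_index(perimeter, area):
    # FRAGSTATS-style raster shape index: 1.0 for a square patch,
    # increasing as the patch shape grows more irregular.
    return 0.25 * perimeter / np.sqrt(area)

def shannon_diversity(class_proportions):
    # Shannon's Diversity Index: -sum(p_i * ln(p_i)) over landscape
    # class proportions (zero-proportion classes are skipped).
    p = np.asarray(class_proportions, dtype=float)
    p = p[p > 0]
    return float(-np.sum(p * np.log(p)))

def simpson_evenness(class_proportions):
    # Simpson's Evenness Index: (1 - sum(p_i^2)) / (1 - 1/m) for m classes;
    # equals 1.0 when all classes cover equal proportions of the landscape.
    p = np.asarray(class_proportions, dtype=float)
    m = len(p)
    return float((1.0 - np.sum(p ** 2)) / (1.0 - 1.0 / m))
```

A 10 × 10-cell square patch (perimeter 40, area 100) yields a shape index of exactly 1.0, which is why near-square backyard shacks produce values close to the minimum.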

22 pages, 12221 KiB  
Article
AGCosPlace: A UAV Visual Positioning Algorithm Based on Transformer
by Ya Guo, Yatong Zhou and Fan Yang
Drones 2023, 7(8), 498; https://doi.org/10.3390/drones7080498 - 28 Jul 2023
Cited by 3
Abstract
Abstract
To obtain the position of a drone even when the relative poses and intrinsics of the drone camera are unknown, a visual positioning algorithm based on image retrieval, called AGCosPlace, is proposed; it leverages the Transformer architecture to achieve improved performance. Our approach subjects the feature map of the backbone to an encoding operation that incorporates attention mechanisms, multi-layer perceptron coding, and a graph network module. This encoding operation allows for better aggregation of the context information present in the image. Subsequently, an aggregation module with dynamic adaptive pooling produces a descriptor of appropriate dimensionality, which is passed to a classifier to recognize the position. Given the complexity of labeling visual positioning labels for UAV images, the visual positioning network is trained on the publicly available Google Street View SF-XL dataset, and the performance of the trained model is evaluated on a custom UAV-perspective test set. The experimental results demonstrate that the proposed algorithm, which improves upon ResNet backbone networks on the SF-XL test set, also performs well on the UAV test set, achieving notable improvements in four evaluation metrics: R@1, R@5, R@10, and R@20. These results confirm that the trained visual positioning network can be effectively employed in UAV visual positioning tasks.
(This article belongs to the Special Issue Urban Features Extraction from UAV Remote Sensing Data and Images)
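The R@K metrics reported above are the standard place-recognition recall: a query counts as a hit at K if any of its top-K retrieved database images is a true match. A minimal sketch of this evaluation, with illustrative array names that are not taken from the paper:

```python
import numpy as np

def recall_at_k(similarity, is_match, ks=(1, 5, 10, 20)):
    """Compute Recall@K for descriptor-based image retrieval.

    similarity: (n_queries, n_db) descriptor similarity scores.
    is_match:   (n_queries, n_db) boolean ground-truth relevance.
    Returns a dict {k: fraction of queries with a true match in the top k}.
    """
    # Rank database entries best-first for each query.
    order = np.argsort(-similarity, axis=1)
    # Reorder the ground-truth matrix into ranked order per query.
    hits = np.take_along_axis(is_match, order, axis=1)
    return {k: float(hits[:, :k].any(axis=1).mean()) for k in ks}
```

In place-recognition benchmarks such as SF-XL, `is_match` is typically derived by flagging database images whose ground-truth positions lie within a fixed distance threshold of the query's position.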

29 pages, 20466 KiB  
Article
Spectral-Spatial Attention Rotation-Invariant Classification Network for Airborne Hyperspectral Images
by Yuetian Shi, Bin Fu, Nan Wang, Yinzhu Cheng, Jie Fang, Xuebin Liu and Geng Zhang
Drones 2023, 7(4), 240; https://doi.org/10.3390/drones7040240 - 29 Mar 2023
Cited by 6
Abstract
Abstract
An airborne hyperspectral imaging system is typically mounted on an aircraft or unmanned aerial vehicle (UAV) to capture ground scenes from an overhead perspective. Due to the rotation of the aircraft or UAV, the same region of land cover may be imaged from different viewing angles. While humans can accurately recognize the same objects from different viewing angles, classification methods based on spectral-spatial features for airborne hyperspectral images exhibit significant errors. Existing methods primarily incorporate image or feature rotation angles into the network to improve its accuracy in classifying rotated images. However, these methods introduce additional parameters that must be determined manually, which may not be optimal for all applications. This paper presents a spectral-spatial attention rotation-invariant classification network for airborne hyperspectral images to address this issue. The proposed method does not require additional rotation angle parameters. The framework comprises three modules: a band selection module, a local spatial feature enhancement module, and a lightweight feature enhancement module. The band selection module suppresses redundant spectral channels, while the local spatial feature enhancement module generates a multi-angle parallel feature encoding network to improve the discrimination of the center pixel. The multi-angle parallel feature encoding network also learns the positional relationship between pixels, thus maintaining rotation invariance. The lightweight feature enhancement module, the last layer of the framework, enhances important features and suppresses insignificant ones. A dynamically weighted cross-entropy loss is used as the loss function; it adjusts the model’s sensitivity to samples of different categories according to the output in each training epoch. The proposed method is evaluated on five airborne hyperspectral image datasets covering urban and agricultural regions. Compared with other state-of-the-art classification algorithms, it achieves the best classification accuracy and effectively extracts rotation-invariant features for urban and rural areas.
(This article belongs to the Special Issue Urban Features Extraction from UAV Remote Sensing Data and Images)
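The dynamically weighted cross-entropy is described only at a high level in the abstract. One plausible reading, sketched below under that assumption, scales each sample's softmax cross-entropy by a per-class weight re-derived each epoch from the model's current per-class accuracy; the `update_class_weights` rule here is hypothetical, not the authors' exact formula:

```python
import numpy as np

def weighted_cross_entropy(logits, labels, class_weights):
    # Softmax cross-entropy where each sample's loss is scaled by the
    # weight of its ground-truth class.
    z = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    per_sample = -log_probs[np.arange(len(labels)), labels]
    return float(np.mean(class_weights[labels] * per_sample))

def update_class_weights(per_class_accuracy, eps=0.05):
    # Hypothetical "dynamic" scheme: classes the model currently
    # classifies poorly receive proportionally larger weights.
    w = 1.0 / (np.asarray(per_class_accuracy, dtype=float) + eps)
    return w / w.mean()  # normalise so the average weight stays 1
```

Re-estimating `per_class_accuracy` on the training set after each epoch makes the loss progressively emphasize the categories the model is still getting wrong, which matches the behavior the abstract describes.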
