Topic Editors

Optical Communications Laboratory, Ocean College, Zhejiang University, Zheda Road 1, Zhoushan 316021, China
Network and Telecommunication Research Group, University of Haute-Alsace, 68008 Colmar, France
Department of Engineering, Manchester Metropolitan University, Manchester M1 5GD, UK
Hamdard Institute of Engineering & Technology, Islamabad 44000, Pakistan
Cyberspace Institute of Advanced Technology, Guangzhou University, Guangzhou 510006, China

Advances, Innovations and Applications of UAV Technology for Remote Sensing

Abstract submission deadline
closed (30 May 2023)
Manuscript submission deadline
closed (31 August 2023)
Viewed by
47797

Topic Information

Dear Colleagues,

Nowadays, a variety of Unmanned Aerial Vehicles (UAVs) are commercially available and widely used for real-world tasks such as environmental monitoring, construction site surveys, remote sensing data collection, vertical structure inspection, glaciology, smart agriculture, forestry, atmospheric research, disaster prevention, humanitarian observation, biological sensing, reef monitoring, fire monitoring, volcanic gas sampling, gas pipeline monitoring, hydrology, ecology, and archaeology. These minimally invasive aerial robots require little user intervention, offer high-level autonomous functionality, and can carry mission-specific payloads. UAV operations are highly effective in collecting the quantitative and qualitative data required to monitor isolated and distant regions. The integration of UAVs in such regions has substantially enhanced environmental monitoring by saving time, increasing precision, minimizing the human footprint, improving safety, and extending the reach of studies into hard-to-access areas. Moreover, we have seen notable growth in emerging technologies such as artificial intelligence (AI), machine learning (ML), deep learning (DL), computer vision, the Internet of Things (IoT), laser scanning, sensing, oblique photogrammetry, aerial imaging, efficient perception, and 3D mapping, all of which assist UAVs in their operations. These promising technologies can outperform humans in sophisticated tasks such as medical image analysis, 3D mapping, aerial photography, and autonomous driving. There is therefore growing interest in applying these cutting-edge technologies to enhance UAVs' level of autonomy and other capabilities. We have also witnessed tremendous growth in the use of UAVs for remotely sensing rural, urban, suburban, and remote regions.
The extensive applicability and popularity of UAVs are not only strengthening the development of advanced UAV sensors, including RGB cameras, LiDAR, laser scanners, thermal cameras, and hyperspectral and multispectral sensors, but also driving pragmatic, innovative problem-solving features and intelligent decision-making strategies in diverse domains. Autonomous flight, collision avoidance, strong mobility for acquiring images at high temporal and spatial resolutions, environmental awareness, communication, precise control, dynamic data collection, 3D information acquisition, and intelligent algorithms further support UAV-based remote sensing technology across applications. The growing advancement and innovation of UAVs as a remote sensing platform, together with progress in the miniaturization of instrumentation, have resulted in an expanding uptake of this technology across the disciplines of remote sensing science.

This Topic aims to provide a modern viewpoint on recent developments, emerging patterns, and applications in the field. Our objective is to gather the latest research contributions from academics and practitioners with diverse interests in order to fill the gaps in the aforementioned research areas. We invite researchers to contribute high-quality scientific articles that bridge the gap between theory, design practice, and applications. We seek reviews, surveys, and original research articles on, but not limited to, the topics given below:

  • Real-time AI for UAV motion planning, trajectory planning and control, and data gathering and analysis;
  • Image/LiDAR feature extraction;
  • Processing algorithms for UAV-aided imagery datasets;
  • Semantic/instance segmentation, classification, and object detection and tracking with UAV data using data mining, AI, ML, and DL algorithms;
  • Cooperative perception and mapping utilizing UAV swarms;
  • UAV image/point-cloud processing in the power, oil, and industrial sectors, hydraulics, agriculture, ecology, emergency response, and smart cities;
  • UAV-borne hyperspectral remote sensing;
  • Collaborative strategies and mechanisms between UAVs and other systems, including hardware/software architectures, multi-agent systems, protocols, and cooperation strategies;
  • UAV onboard remote sensing data storage, transmission, and retrieval;
  • Advances in the applications of UAVs in archeology, precision agriculture, yield protection, atmospheric research, area management, photogrammetry, 3D modeling, object reconstruction, Earth observation, climate change, sensing and imaging for coastal and environment monitoring, construction, mining, pollution monitoring, target tracking, humanitarian localization, security and surveillance, and ecological applications;
  • Use of optical laser, hyperspectral, multi-spectral, and SAR technologies for UAV-based remote sensing.

Dr. Syed Agha Hassnain Mohsan
Prof. Dr. Pascal Lorenz
Dr. Khaled Rabie
Dr. Muhammad Asghar Khan
Dr. Muhammad Shafiq
Topic Editors

Keywords

  • drones
  • UAVs
  • aerial robots
  • remote sensing
  • aerial imagery
  • LiDAR
  • machine learning
  • atmospheric research
  • sensing and imaging
  • processing algorithms

Participating Journals

Journal Name | Impact Factor | CiteScore | Launched | First Decision (median) | APC
AI (ai) | 3.1 | 7.2 | 2020 | 20.8 days | CHF 1600
Drones (drones) | 4.4 | 5.6 | 2017 | 17.9 days | CHF 2600
Inventions (inventions) | 2.1 | 4.8 | 2016 | 17.4 days | CHF 1800
Machine Learning and Knowledge Extraction (make) | 4.0 | 6.3 | 2019 | 19.9 days | CHF 1800
Remote Sensing (remotesensing) | 4.2 | 8.3 | 2009 | 23 days | CHF 2700
Sensors (sensors) | 3.4 | 7.3 | 2001 | 17 days | CHF 2600

Preprints.org is a multidisciplinary platform providing a preprint service dedicated to sharing your research from the start and empowering your research journey.

MDPI Topics is cooperating with Preprints.org and has built a direct connection between MDPI journals and Preprints.org. Authors are encouraged to enjoy the benefits by posting a preprint at Preprints.org prior to publication:

  1. Immediately share your ideas ahead of publication and establish your research priority;
  2. Protect your priority with a time-stamped preprint of your article;
  3. Enhance the exposure and impact of your research;
  4. Receive feedback from your peers in advance;
  5. Have it indexed in Web of Science (Preprint Citation Index), Google Scholar, Crossref, SHARE, PrePubMed, Scilit and Europe PMC.

Published Papers (23 papers)

9 pages, 957 KiB  
Editorial
Editorial on the Advances, Innovations and Applications of UAV Technology for Remote Sensing
by Syed Agha Hassnain Mohsan, Muhammad Asghar Khan and Yazeed Yasin Ghadi
Remote Sens. 2023, 15(21), 5087; https://doi.org/10.3390/rs15215087 - 24 Oct 2023
Cited by 2 | Viewed by 1396
Abstract
Currently, several kinds of Unmanned Aerial Vehicles (UAVs) or drones [...] Full article

18 pages, 13901 KiB  
Article
The Method of Multi-Angle Remote Sensing Observation Based on Unmanned Aerial Vehicles and the Validation of BRDF
by Hongtao Cao, Dongqin You, Dabin Ji, Xingfa Gu, Jianguang Wen, Jianjun Wu, Yong Li, Yongqiang Cao, Tiejun Cui and Hu Zhang
Remote Sens. 2023, 15(20), 5000; https://doi.org/10.3390/rs15205000 - 18 Oct 2023
Cited by 1 | Viewed by 4906
Abstract
The measurement of bidirectional reflectivity for ground-based objects is a highly intricate task, with significant limitations in the capabilities of both ground-based and satellite-based observations from multiple viewpoints. In recent years, unmanned aerial vehicles (UAVs) have emerged as a novel remote sensing method, offering convenience and cost-effectiveness while enabling multi-view observations. This study devised a polygonal flight path along the hemisphere to achieve bidirectional reflectance distribution function (BRDF) measurements for large zenith angles and all azimuth angles. By employing photogrammetry’s principle of aerial triangulation, accurate observation angles were restored, and the geometric structure of “sun-object-view” was constructed. Furthermore, three BRDF models (M_Walthall, RPV, RTLSR) were compared and evaluated at the UAV scale in terms of fitting quality, shape structure, and reflectance errors to assess their inversion performance. The results demonstrated that the RPV model exhibited superior inversion performance, followed by M_Walthall; however, RTLSR performed comparatively poorly. Notably, the M_Walthall model excelled in capturing the characteristics of smooth terrain objects, while RPV proved applicable to various types of rough terrain objects, with multi-scale applicability for both UAVs and satellites. These methods and findings are crucial for extensive exploration of the bidirectional reflectivity properties of ground-based objects, and they provide an essential technical procedure for studying the in-plane reflection properties of various ground-based objects. Full article
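A modified Walthall-style BRDF model is linear in its coefficients, so fitting it to multi-angle reflectance samples reduces to ordinary least squares. The sketch below is a toy illustration, not the paper's pipeline: the model form (rho = a·θv² + b·θv·cos φ + c), the synthetic observation angles, and the coefficient values are all assumptions.

```python
from math import cos, radians

def walthall_design_row(theta_v, phi):
    """Regressors of a modified Walthall BRDF model:
    rho = a*theta_v^2 + b*theta_v*cos(phi) + c  (angles in radians)."""
    return [theta_v ** 2, theta_v * cos(phi), 1.0]

def lstsq_3(rows, y):
    """Solve the 3-parameter normal equations A^T A p = A^T y by elimination."""
    ata = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
    aty = [sum(r[i] * yi for r, yi in zip(rows, y)) for i in range(3)]
    m = [ata[i] + [aty[i]] for i in range(3)]
    for col in range(3):                       # forward elimination with partial pivoting
        piv = max(range(col, 3), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, 3):
            f = m[r][col] / m[col][col]
            m[r] = [mr - f * mc for mr, mc in zip(m[r], m[col])]
    p = [0.0] * 3
    for i in (2, 1, 0):                        # back substitution
        p[i] = (m[i][3] - sum(m[i][j] * p[j] for j in range(i + 1, 3))) / m[i][i]
    return p

# Synthetic noiseless multi-angle observations from assumed coefficients:
true = (0.12, -0.05, 0.30)
obs = [(radians(tv), radians(ph)) for tv in (0, 20, 40, 60) for ph in (0, 90, 180, 270)]
y = [true[0] * t ** 2 + true[1] * t * cos(p) + true[2] for t, p in obs]
fit = lstsq_3([walthall_design_row(t, p) for t, p in obs], y)
```

With noiseless data the fit recovers the generating coefficients; fitting quality on real UAV observations is what separates the three models compared in the paper.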

25 pages, 6253 KiB  
Article
Using Schlieren Imaging and a Radar Acoustic Sounding System for the Detection of Close-in Air Turbulence
by Samantha Gordon and Graham Brooker
Sensors 2023, 23(19), 8255; https://doi.org/10.3390/s23198255 - 5 Oct 2023
Viewed by 1153
Abstract
This paper presents a novel sensor for the detection and characterization of regions of air turbulence. As part of the ground truth process, it consists of a combined Schlieren imager and a Radar Acoustic Sounding System (RASS) to produce dual-modality “images” of air movement within the measurement volume. The ultrasound-modulated Schlieren imager consists of a strobed point light source, parabolic mirror, light block, and camera, which are controlled by two laptops. It provides a fine-scale projection of the acoustic pulse-modulated air turbulence through the measurement volume. The narrow beam 40 kHz/17 GHz RASS produces spectra based on Bragg-enhanced Doppler radar reflections from the acoustic pulse as it travels. Tests using artificially generated air vortices showed some disruption of the Schlieren image and of the RASS spectrogram. This should allow the higher-resolution Schlieren images to identify the turbulence mechanisms that are disrupting the RASS spectra. The objective of this combined sensor is to have the Schlieren component inform the interpretation of RASS spectra to allow the latter to be used as a stand-alone sensor on a UAV. Full article

20 pages, 46373 KiB  
Article
HAM-Transformer: A Hybrid Adaptive Multi-Scaled Transformer Net for Remote Sensing in Complex Scenes
by Keying Ren, Xiaoyan Chen, Zichen Wang, Xiwen Liang, Zhihui Chen and Xia Miao
Remote Sens. 2023, 15(19), 4817; https://doi.org/10.3390/rs15194817 - 3 Oct 2023
Cited by 2 | Viewed by 1178
Abstract
The quality of remote sensing images has been greatly improved by the rapid improvement of unmanned aerial vehicles (UAVs), which has made it possible to detect small objects in the most complex scenes. Recently, learning-based object detection has been introduced and has gained popularity in remote sensing image processing. To improve the detection accuracy of small, weak objects in complex scenes, this work proposes a novel hybrid backbone composed of a convolutional neural network and an adaptive multi-scaled transformer, referred to as HAM-Transformer Net. HAM-Transformer Net first extracts the details of feature maps using convolutional local feature extraction blocks. Second, hierarchical information is extracted using multi-scale location coding. Finally, an adaptive multi-scale transformer block is used to extract further features in different receptive fields and to fuse them adaptively. We implemented comparison experiments on a self-constructed dataset. The experiments showed the method to be a significant improvement over state-of-the-art object detection algorithms, and a large number of additional comparative experiments further demonstrated its effectiveness. Full article

19 pages, 12731 KiB  
Article
Enhancing UAV-SfM Photogrammetry for Terrain Modeling from the Perspective of Spatial Structure of Errors
by Wen Dai, Ruibo Qiu, Bo Wang, Wangda Lu, Guanghui Zheng, Solomon Obiri Yeboah Amankwah and Guojie Wang
Remote Sens. 2023, 15(17), 4305; https://doi.org/10.3390/rs15174305 - 31 Aug 2023
Viewed by 1120
Abstract
UAV-SfM photogrammetry is widely used in the remote sensing and geoscience communities. Scholars have tried to optimize UAV-SfM for terrain modeling based on analysis of error statistics like root mean squared error (RMSE), mean error (ME), and standard deviation (STD). However, the errors of terrain modeling tend to be spatially distributed. Although error statistics can represent the magnitude of errors, revealing the spatial structure of errors is still challenging, and a “best practice” for UAV-SfM from this perspective is lacking in the research community. Thus, this study designed various UAV-SfM photogrammetric scenarios and investigated the effects of image collection strategies and GCPs on terrain modeling. The error maps of different photogrammetric scenarios were calculated and quantitatively analyzed by ME, STD, and Moran’s I. The results show that: (1) A high camera inclination (20–40°) enhances UAV-SfM photogrammetry. This not only decreases the magnitude of errors, but also mitigates their spatial correlation (Moran’s I). Supplementing convergent images is valuable for reducing errors in a nadir camera block, but it is unnecessary when the image block has a high camera angle. (2) Flying height increases the magnitude of errors (ME and STD) but does not affect the spatial structure (Moran’s I). By contrast, the camera angle is more important than the flying height for improving the spatial structure of errors. (3) A small number of GCPs rapidly reduces the magnitude of errors (ME and STD), and a further increase in GCPs has a marginal effect. However, the structure of errors (Moran’s I) can be further improved with increasing GCPs. (4) For the same number of GCPs, their distribution is critical for UAV-SfM photogrammetry. The edge distribution should be considered first, followed by the even distribution.
The research findings contribute to understanding how different image collection scenarios and GCPs can influence subsequent terrain modeling accuracy, precision, and spatial structure of errors. The latter (spatial structure of errors) should be routinely assessed in evaluations of the quality of UAV-SfM photogrammetry. Full article
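Moran's I, the statistic used above to quantify the spatial structure of DEM errors, can be computed for a gridded error map as in this minimal sketch. Rook-contiguity (shared-edge) binary weights and the toy error maps are illustrative assumptions, not the authors' implementation:

```python
def morans_i(grid):
    """Moran's I for a 2-D error map with rook (shared-edge) contiguity weights:
    I = (n / W) * sum_ij w_ij (x_i - xbar)(x_j - xbar) / sum_i (x_i - xbar)^2."""
    rows, cols = len(grid), len(grid[0])
    vals = [v for row in grid for v in row]
    n = len(vals)
    mean = sum(vals) / n
    dev = [[v - mean for v in row] for row in grid]
    num = 0.0   # cross-products over neighbouring cells
    w_sum = 0   # total weight W (directed adjacencies)
    for i in range(rows):
        for j in range(cols):
            for di, dj in ((0, 1), (1, 0), (0, -1), (-1, 0)):
                ni, nj = i + di, j + dj
                if 0 <= ni < rows and 0 <= nj < cols:
                    num += dev[i][j] * dev[ni][nj]
                    w_sum += 1
    den = sum(d * d for row in dev for d in row)
    return (n / w_sum) * (num / den)

# A spatially clustered error map gives I > 0; a checkerboard gives I < 0:
clustered = [[1, 1, 0], [1, 1, 0], [0, 0, 0]]
checker   = [[1, 0, 1], [0, 1, 0], [1, 0, 1]]
print(morans_i(clustered), morans_i(checker))
```

Values near zero indicate spatially random errors, which is the desirable outcome the paper argues should be assessed routinely alongside ME and STD.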

16 pages, 2398 KiB  
Article
Dynamic Repositioning of Aerial Base Stations for Enhanced User Experience in 5G and Beyond
by Shams Ur Rahman, Ajmal Khan, Muhammad Usman, Muhammad Bilal, You-Ze Cho and Hesham El-Sayed
Sensors 2023, 23(16), 7098; https://doi.org/10.3390/s23167098 - 11 Aug 2023
Cited by 1 | Viewed by 809
Abstract
The ultra-dense deployment (UDD) of small cells in 5G and beyond to enhance capacity and data rate is promising, but since user densities continually change, the static deployment of small cells can lead to wasted capital, the underutilization of resources, and user dissatisfaction. This work proposes the use of Aerial Base Stations (ABSs), wherein small cells are mounted on Unmanned Aerial Vehicles (UAVs) that can be deployed to a set of candidate locations. Furthermore, based on the current user densities, this work studies the optimal placement of the ABSs at a subset of potential candidate positions to maximize the total received power and signal-to-interference ratio. The problems of optimal placement for increasing received power and signal-to-interference ratio are formulated, and optimal placement solutions are designed. The proposed solutions compute the optimal candidate locations for the ABSs based on the current user densities; when the user densities change significantly, the solutions can be re-executed to re-compute the optimal candidate locations, and the ABSs can be moved accordingly. Simulation results show that a 22% or greater increase in the total received power can be achieved through the optimal placement of the ABSs, and that more than 60% of users have a greater than 80% chance of seeing their individual received power increase. Full article
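The placement problem described above can be illustrated by brute force: choose k of n candidate positions to maximize total received power, each user served by its strongest ABS. The inverse-square path-gain model, hover height, and coordinates below are assumptions for illustration, not the paper's formulation:

```python
from itertools import combinations

def received_power(user, abs_pos, p_tx=1.0, h=100.0):
    """Toy free-space-like received power from an ABS hovering at height h (m)."""
    dx, dy = user[0] - abs_pos[0], user[1] - abs_pos[1]
    return p_tx / (dx * dx + dy * dy + h * h)

def best_placement(users, candidates, k):
    """Exhaustively pick the k candidate positions maximizing total received
    power, with each user served by its strongest ABS."""
    def total(subset):
        return sum(max(received_power(u, c) for c in subset) for u in users)
    return max(combinations(candidates, k), key=total)

# Two user clusters; one candidate over each cluster and one in between:
users = [(0, 0), (10, 5), (0, 10), (500, 500), (510, 495)]
candidates = [(5, 5), (250, 250), (505, 500)]
print(best_placement(users, candidates, 2))
```

Exhaustive search is exponential in the number of candidates; it only demonstrates the objective, whereas the paper designs dedicated placement solutions that can be re-executed as user densities shift.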

20 pages, 12513 KiB  
Article
UAV-Based Terrain Modeling in Low-Vegetation Areas: A Framework Based on Multiscale Elevation Variation Coefficients
by Jiaxin Fan, Wen Dai, Bo Wang, Jingliang Li, Jiahui Yao and Kai Chen
Remote Sens. 2023, 15(14), 3569; https://doi.org/10.3390/rs15143569 - 16 Jul 2023
Cited by 4 | Viewed by 1484
Abstract
The removal of low vegetation is still challenging in UAV photogrammetry. Exploiting the different topographic features expressed by point-cloud data at different scales, a vegetation-filtering method based on multiscale elevation-variation coefficients is proposed for terrain modeling. First, virtual grids are constructed at different scales, and the average elevation values of the corresponding point clouds are obtained. Second, the amount of elevation change between any two scales in each virtual grid is calculated to obtain the difference in surface characteristics (degree of elevation change) at the corresponding two scales. Third, the elevation-variation coefficient of the virtual grid that corresponds to the largest elevation-variation degree is calculated, and threshold segmentation is performed based on the observation that the elevation-variation coefficients of vegetated regions are much larger than those of terrain regions. Finally, the optimal calculation neighborhood radius of the elevation-variation coefficients is analyzed, and the optimal segmentation threshold is discussed. The experimental results show that the multiscale elevation-variation coefficient method can accurately remove vegetation points and retain ground points in low- and densely vegetated areas. The type I error, type II error, and total error in the study areas range from 1.93 to 9.20%, 5.83 to 5.84%, and 2.28 to 7.68%, respectively. The total error of the proposed method is 2.43–2.54% lower than that of the CSF, TIN, and PMF algorithms in the study areas. This study provides a foundation for the rapid establishment of high-precision DEMs based on UAV photogrammetry. Full article
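The multiscale idea can be sketched for a single pair of scales: grid the point cloud coarsely and finely, compute a per-cell elevation-variation coefficient between the two, and keep cells below a threshold. The cell sizes, threshold, and single-scale-pair simplification below are illustrative assumptions, not the paper's tuned procedure:

```python
from collections import defaultdict

def grid_mean_elevation(points, cell):
    """Mean elevation of the points falling in each virtual grid cell of size `cell`."""
    cells = defaultdict(list)
    for x, y, z in points:
        cells[(int(x // cell), int(y // cell))].append(z)
    return {k: sum(v) / len(v) for k, v in cells.items()}

def elevation_variation_coefficient(points, coarse, fine):
    """Per coarse cell: largest elevation change between the two scales divided
    by the coarse-cell mean. Vegetated cells score far higher than bare terrain."""
    coarse_mean = grid_mean_elevation(points, coarse)
    fine_mean = grid_mean_elevation(points, fine)
    ratio = int(coarse // fine)
    coeffs = {}
    for (cx, cy), cz in coarse_mean.items():
        # fine cells nested inside this coarse cell
        subs = [fine_mean[(cx * ratio + i, cy * ratio + j)]
                for i in range(ratio) for j in range(ratio)
                if (cx * ratio + i, cy * ratio + j) in fine_mean]
        diff = max(abs(s - cz) for s in subs)
        coeffs[(cx, cy)] = diff / abs(cz) if cz else diff
    return coeffs

def filter_vegetation(points, coarse=4.0, fine=1.0, threshold=0.05):
    """Keep points in cells whose elevation-variation coefficient is below the threshold."""
    coeffs = elevation_variation_coefficient(points, coarse, fine)
    return [(x, y, z) for x, y, z in points
            if coeffs[(int(x // coarse), int(y // coarse))] < threshold]

# Toy scene: flat ground at z = 10 m with a small bush (z = 12 m) near the origin
pts = [(x + 0.5, y + 0.5, 10.0) for x in range(8) for y in range(8)]
pts += [(0.1, 0.1, 12.0), (0.2, 0.3, 12.0), (0.3, 0.2, 12.0)]
bare = filter_vegetation(pts)   # the bush's coarse cell is removed, ground is kept
```

Note the toy filter removes the whole coarse cell containing the bush; the paper's method works at multiple scale pairs and analyzes the optimal neighborhood radius and threshold rather than fixing them.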

14 pages, 22675 KiB  
Article
Crack Detection of Bridge Concrete Components Based on Large-Scene Images Using an Unmanned Aerial Vehicle
by Zhen Xu, Yingwang Wang, Xintian Hao and Jingjing Fan
Sensors 2023, 23(14), 6271; https://doi.org/10.3390/s23146271 - 10 Jul 2023
Cited by 3 | Viewed by 1333
Abstract
The current method of crack detection in bridges using unmanned aerial vehicles (UAVs) relies heavily on acquiring local images of bridge concrete components, making image acquisition inefficient. To address this, we propose a crack detection method that utilizes large-scene images acquired by a UAV. First, we design a UAV-based scheme for acquiring large-scene images of bridges, and then process these images using a background denoising algorithm. Subsequently, we apply a maximum crack width calculation algorithm based on the region of interest and the maximum inscribed circle. Finally, we applied the method to a typical reinforced concrete bridge. The results show that the number of large-scene images is only 1/9–1/22 that of the local images for this bridge, which significantly improves detection efficiency. Moreover, the accuracy of the crack detection can reach up to 93.4%. Full article
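The maximum-inscribed-circle step can be sketched with a distance transform: the crack pixel farthest from the background bounds the widest crack section. This brute-force toy (binary mask, assumed pixel scale, pixel-centre distance convention) is an illustration, not the authors' ROI pipeline:

```python
def max_crack_width(mask, mm_per_px=1.0):
    """Maximum crack width from a binary mask (1 = crack) via the maximum
    inscribed circle. Brute-force Euclidean distances; fine for small ROIs."""
    h, w = len(mask), len(mask[0])
    bg = [(i, j) for i in range(h) for j in range(w) if not mask[i][j]]
    best = 0.0
    for i in range(h):
        for j in range(w):
            if mask[i][j]:
                # distance from this crack pixel to the nearest background pixel centre
                d = min(((i - bi) ** 2 + (j - bj) ** 2) ** 0.5 for bi, bj in bg)
                best = max(best, d)
    # distances run to background pixel *centres*, so the inscribed
    # diameter in pixels is 2*d - 1 under this convention
    return (2 * best - 1) * mm_per_px

# A 3-pixel-wide horizontal crack in a 5x10 patch, at an assumed 0.5 mm/px:
mask = [[1 if 1 <= i <= 3 else 0 for _ in range(10)] for i in range(5)]
print(max_crack_width(mask, mm_per_px=0.5))
```

In practice the distance transform would be computed with an efficient library routine and the millimetre-per-pixel scale derived from the UAV's imaging geometry.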

21 pages, 7952 KiB  
Article
Research of an Unmanned Aerial Vehicle Autonomous Aerial Refueling Docking Method Based on Binocular Vision
by Kun Gong, Bo Liu, Xin Xu, Yuelei Xu, Yakun He, Zhaoxiang Zhang and Jarhinbek Rasol
Drones 2023, 7(7), 433; https://doi.org/10.3390/drones7070433 - 30 Jun 2023
Viewed by 1592
Abstract
In this paper, a visual navigation method based on binocular vision and deep learning is proposed to solve the navigation problem of the unmanned aerial vehicle autonomous aerial refueling docking process. First, to meet the requirements of high accuracy and high frame rate in aerial refueling tasks, this paper proposes a single-stage lightweight drogue detection model, which greatly increases the inference speed on binocular images by introducing image alignment and depthwise-separable convolution, and improves the feature extraction capability and scale adaptation performance of the model by using an efficient channel attention mechanism (ECA) and an adaptive spatial feature fusion method (ASFF). Second, this paper proposes a novel method for estimating the pose of the drogue by spatial geometric modeling using optical markers, and further improves the accuracy and robustness of the algorithm by using visual reprojection. Moreover, this paper constructs vision-simulation and semi-physical simulation experiments for the autonomous aerial refueling task, and the experimental results show the following: (1) the proposed drogue detection model has high accuracy and real-time performance, with a mean average precision (mAP) of 98.23% and a detection speed of 41.11 FPS on the embedded module; (2) the position estimation error of the proposed visual navigation algorithm is less than ±0.1 m, and the attitude estimation error of the pitch and yaw angles is less than ±0.5°; and (3) in comparison experiments, the positioning accuracy of this method is improved by 1.18% compared with current advanced methods. Full article

19 pages, 87282 KiB  
Article
A Drone-Powered Deep Learning Methodology for High Precision Remote Sensing in California’s Coastal Shrubs
by Jon Detka, Hayley Coyle, Marcella Gomez and Gregory S. Gilbert
Drones 2023, 7(7), 421; https://doi.org/10.3390/drones7070421 - 25 Jun 2023
Cited by 6 | Viewed by 2248
Abstract
Wildland conservation efforts require accurate maps of plant species distribution across large spatial scales. High-resolution species mapping is difficult in diverse, dense plant communities, where extensive ground-based surveys are labor-intensive and risk damaging sensitive flora. High-resolution satellite imagery is available at scales needed for plant community conservation across large areas, but can be cost prohibitive and lack resolution to identify species. Deep learning analysis of drone-based imagery can aid in accurate classification of plant species in these communities across large regions. This study assessed whether drone-based imagery and deep learning modeling approaches could be used to map species in complex chaparral, coastal sage scrub, and oak woodland communities. We tested the effectiveness of random forest, support vector machine, and convolutional neural network (CNN) coupled with object-based image analysis (OBIA) for mapping in diverse shrublands. Our CNN + OBIA approach outperformed random forest and support vector machine methods to accurately identify tree and shrub species, vegetation gaps, and communities, even distinguishing two congeneric shrub species with similar morphological characteristics. Similar accuracies were attained when applied to neighboring sites. This work is key to the accurate species identification and large scale mapping needed for conservation research and monitoring in chaparral and other wildland plant communities. Uncertainty in model application is associated with less common species and intermixed canopies. Full article

22 pages, 3674 KiB  
Article
A UAV-Assisted Stackelberg Game Model for Securing IoMT Healthcare Networks
by Jamshed Ali Shaikh, Chengliang Wang, Muhammad Asghar Khan, Syed Agha Hassnain Mohsan, Saif Ullah, Samia Allaoua Chelloug, Mohammed Saleh Ali Muthanna and Ammar Muthanna
Drones 2023, 7(7), 415; https://doi.org/10.3390/drones7070415 - 23 Jun 2023
Cited by 2 | Viewed by 1465
Abstract
On the one hand, the Internet of Medical Things (IoMT) in healthcare systems has emerged as a promising technology to monitor patients’ health and provide reliable medical services, especially in remote and underserved areas. On the other hand, in disaster scenarios, the loss of communication infrastructure can make it challenging to establish reliable communication and to provide timely first aid services. To address this challenge, unmanned aerial vehicles (UAVs) have been adopted to assist hospital centers in delivering medical care to hard-to-reach areas. Despite the potential of UAVs to improve medical services in emergency scenarios, their limited resources make their security critical. Therefore, developing secure and efficient communication protocols for IoMT networks using UAVs is a vital research area that can help ensure reliable and timely medical services. In this paper, we introduce a novel Stackelberg security-based game theory algorithm, named Stackelberg ad hoc on-demand distance vector (SBAODV), to detect and recover data affected by black hole attacks in IoMT networks using UAVs. Our proposed scheme utilizes the strong Stackelberg equilibrium (SSE) to formulate strategies that protect the system against attacks. We evaluate the performance of our proposed SBAODV scheme and compare it with existing routing schemes. Our results demonstrate that our proposed scheme outperforms existing schemes regarding packet delivery ratio (PDR), networking load, throughput, detection ratio, and end-to-end delay. Specifically, our proposed SBAODV protocol achieves a PDR of 97%, throughput ranging from 77.7 kbps to 87.3 kbps, and up to a 95% malicious-detection rate at the highest number of nodes. Furthermore, our proposed SBAODV scheme offers significantly lower networking load (7% to 30%) and end-to-end delay (up to 30%) compared to existing routing schemes.
These results demonstrate the efficiency and effectiveness of our proposed scheme in ensuring reliable and secure communication in IoMT emergency scenarios using UAVs. Full article
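As background, a Stackelberg security game in its simplest form has a defender (leader) commit to a strategy and an attacker (follower) best-respond. The pure-strategy solver and toy payoff matrices below are a generic illustration of that leader-follower structure, not the SBAODV model itself:

```python
def stackelberg_pure(leader_payoff, follower_payoff):
    """Leader commits to a pure strategy i; the follower best-responds with j;
    the leader picks the commitment maximizing her payoff given that response.
    Ties in the follower's response break in the leader's favour, as in the
    strong Stackelberg equilibrium convention."""
    best = None
    for i, (lrow, frow) in enumerate(zip(leader_payoff, follower_payoff)):
        fmax = max(frow)
        # follower best responses to commitment i; tie-break favours the leader
        j = max((j for j, v in enumerate(frow) if v == fmax), key=lambda j: lrow[j])
        if best is None or lrow[j] > best[2]:
            best = (i, j, lrow[j])
    return best  # (leader action, follower response, leader payoff)

# Toy two-target defence game (assumed payoffs): row = defended target,
# column = attacked target.
defender = [[3, -2], [-1, 2]]
attacker = [[-2, 2], [1, -1]]
print(stackelberg_pure(defender, attacker))
```

Real security games, including the one underlying SBAODV, typically allow mixed (randomized) leader commitments, which require solving small linear programs rather than this enumeration.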

24 pages, 2642 KiB  
Article
Influence of On-Site Camera Calibration with Sub-Block of Images on the Accuracy of Spatial Data Obtained by PPK-Based UAS Photogrammetry
by Kalima Pitombeira and Edson Mitishita
Remote Sens. 2023, 15(12), 3126; https://doi.org/10.3390/rs15123126 - 15 Jun 2023
Viewed by 1111
Abstract
Unmanned Aerial Systems (UAS) Photogrammetry has become widely used for spatial data acquisition. Nowadays, RTK (Real Time Kinematic) and PPK (Post Processed Kinematic) are the main correction methods for accurate positioning used for direct measurements of camera station coordinates in UAS imagery. Thus, 3D camera coordinates are commonly used as additional observations in Bundle Block Adjustment to perform Global Navigation Satellite System-Assisted Aerial Triangulation (GNSS-AAT). This process requires accurate Interior Orientation Parameters to ensure the quality of photogrammetric intersection. Therefore, this study investigates the influence of on-site camera calibration with a sub-block of images on the accuracy of spatial data obtained by PPK-based UAS Photogrammetry. For this purpose, experiments of on-the-job camera self-calibration in the Metashape software with the SfM approach were performed. Afterward, experiments of GNSS-Assisted Aerial Triangulation with on-site calibration in the Erdas Imagine software were performed. The outcomes show that only the experiment of GNSS-AAT with three Ground Control Points yielded horizontal and vertical accuracies close to nominal precisions of the camera station positions by GNSS-PPK measurements adopted in this study, showing horizontal RMSE (Root-Mean Square Error) of 0.222 m and vertical RMSE of 0.154 m. Furthermore, the on-site camera calibration with a sub-block of images significantly improved the vertical accuracy of the spatial information extraction. Full article
22 pages, 6807 KiB  
Article
IRSDD-YOLOv5: Focusing on the Infrared Detection of Small Drones
by Shudong Yuan, Bei Sun, Zhen Zuo, Honghe Huang, Peng Wu, Can Li, Zhaoyang Dang and Zongqing Zhao
Drones 2023, 7(6), 393; https://doi.org/10.3390/drones7060393 - 14 Jun 2023
Cited by 4 | Viewed by 2205
Abstract
With the rapid growth of the global drone market, a variety of small drones have come to pose a threat to public safety, so small drones must be detected in a timely manner to enable effective countermeasures. Deep-learning-based methods have made great breakthroughs in target detection but still perform poorly on small drones. To address this problem, we propose IRSDD-YOLOv5, a model built on the advanced YOLOv5 detector. First, in the feature extraction stage, we designed an infrared small target detection module (IRSTDM) suited to the infrared recognition of small drones, which extracts and retains target details so that IRSDD-YOLOv5 can detect small targets effectively. Second, in the prediction stage, a small target prediction head (PH) completes the prediction from the prior information output by the IRSTDM. We also optimized the loss function, computed from the distance between the ground-truth box and the predicted box, to improve detection performance. In addition, we constructed and publicly released a single-frame infrared drone detection dataset (SIDD), annotated at the pixel level. Reflecting real drone-intrusion scenarios, the dataset is divided into four scenes: city, sky, mountain and sea. We trained and evaluated mainstream instance segmentation algorithms (BlendMask, BoxInst, etc.) on each of the four parts of the dataset. The experimental results show that the proposed algorithm performs well: the AP50 of IRSDD-YOLOv5 reaches peaks of 79.8% in the mountain scene and 93.4% in the ocean scene, increases of 3.8% and 4% over YOLOv5. We also provide a theoretical analysis of the detection accuracy in the different scenarios of the dataset. Full article
21 pages, 656 KiB  
Article
A Cognitive Electronic Jamming Decision-Making Method Based on Q-Learning and Ant Colony Fusion Algorithm
by Chudi Zhang, Yunqi Song, Rundong Jiang, Jun Hu and Shiyou Xu
Remote Sens. 2023, 15(12), 3108; https://doi.org/10.3390/rs15123108 - 14 Jun 2023
Cited by 2 | Viewed by 1827
Abstract
In order to improve the efficiency and adaptability of cognitive radar jamming decision-making, a fusion algorithm (Ant-QL) based on ant colony optimization and Q-Learning is proposed in this paper. The algorithm does not rely on a priori information and enhances adaptability through real-time interaction between the jammer and the target radar; it can be applied to both single-jammer and multi-jammer countermeasure scenarios with strong jamming effect. First, traditional Q-Learning and DQN algorithms are discussed, and a radar jamming decision-making model is built for the simulation verification of each algorithm. Then, an improved Q-Learning algorithm is proposed to address the shortcomings of both. By introducing the pheromone mechanism of ant colony algorithms into Q-Learning and using the ε-greedy policy to balance the trade-off between exploration and exploitation, the algorithm largely avoids falling into local optima, accelerating convergence while remaining stable and robust during the convergence process. To better adapt to the clustered countermeasure environments of future battlefields, the algorithm and model are extended to cooperative cluster jamming decision-making: each jammer in the cluster is mapped to an intelligent ant searching for the optimal path, and the jammers exchange information with one another. During the confrontation, this method greatly improves convergence speed and stability and reduces the jammer's hardware and power requirements. With three jammers, simulation results show that the convergence speed of the Ant-QL algorithm improves by 85.4%, 80.56% and 72% over the Q-Learning, DQN and improved Q-Learning algorithms, respectively, while remaining stable and efficient with low algorithmic complexity. After convergence, the average response times of the four algorithms are 6.99 × 10⁻⁴ s, 2.234 × 10⁻³ s, 2.21 × 10⁻⁴ s and 1.7 × 10⁻⁴ s, respectively, showing that the improved Q-Learning and Ant-QL algorithms also hold the advantage in average response time after convergence. Full article
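The fusion idea described in this abstract — a Q-Learning value update combined with an ant-colony pheromone trail and ε-greedy action selection — can be sketched as follows. The specific update rules, reward model and pheromone weighting below are illustrative assumptions, not the paper's exact formulation:

```python
import random

def ant_q_update(Q, tau, s, a, r, s_next, lr=0.1, gamma=0.9, rho=0.1):
    """One step: standard Q-Learning update plus pheromone evaporate/deposit."""
    Q[s][a] += lr * (r + gamma * max(Q[s_next]) - Q[s][a])
    tau[s][a] = (1 - rho) * tau[s][a] + r   # evaporate old trail, deposit reward

def select_action(Q, tau, s, epsilon, w=0.5):
    """epsilon-greedy selection over a pheromone-weighted score."""
    if random.random() < epsilon:
        return random.randrange(len(Q[s]))                    # explore
    scores = [Q[s][a] + w * tau[s][a] for a in range(len(Q[s]))]
    return scores.index(max(scores))                          # exploit

# Tiny 2-state, 2-action demo with a hypothetical reward model
Q = [[0.0, 0.0], [0.0, 0.0]]
tau = [[1.0, 1.0], [1.0, 1.0]]
for _ in range(100):
    a = select_action(Q, tau, 0, epsilon=0.2)
    r = 1.0 if a == 1 else 0.0     # pretend jamming action 1 is effective
    ant_q_update(Q, tau, 0, a, r, 1)
```

In a multi-jammer setting, each "ant" would run this loop and share its pheromone table with the others, which is what lets the cluster converge faster than independent learners.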
22 pages, 14906 KiB  
Article
UAV-Based Low Altitude Remote Sensing for Concrete Bridge Multi-Category Damage Automatic Detection System
by Han Liang, Seong-Cheol Lee and Suyoung Seo
Drones 2023, 7(6), 386; https://doi.org/10.3390/drones7060386 - 8 Jun 2023
Cited by 3 | Viewed by 1646
Abstract
Detecting damage in bridges can be an arduous task, fraught with challenges stemming from the limitations of the inspection environment and the considerable time and resources required for manual acquisition. Moreover, prevalent damage detection methods rely heavily on pixel-level segmentation, making it infeasible to classify and locate different damage types accurately. To address these issues, the present study proposes a novel, fully automated concrete bridge damage detection system that harnesses unmanned aerial vehicle (UAV) remote sensing technology. The proposed system employs a Swin Transformer-based backbone network coupled with a multi-scale attention pyramid network featuring a lightweight residual global attention network (LRGA-Net), achieving marked gains in both speed and accuracy. Comparative analyses show that the proposed system outperforms commonly used target detection models, including YOLOv5-L and YOLOX-L. The system's robust visual inspection results in real-world conditions reinforce its efficacy, pointing toward a new paradigm for bridge inspection and maintenance. The findings underscore the potential of UAV-based inspection to bolster the efficiency and accuracy of bridge damage detection, highlighting its pivotal role in ensuring the safety and longevity of vital infrastructure. Full article
15 pages, 5430 KiB  
Article
Measurements of the Thickness and Area of Thick Oil Slicks Using Ultrasonic and Image Processing Methods
by Hualong Du, Huijie Fan, Qifeng Zhang and Shuo Li
Remote Sens. 2023, 15(12), 2977; https://doi.org/10.3390/rs15122977 - 7 Jun 2023
Cited by 1 | Viewed by 1497
Abstract
The in situ measurement of thick oil slick thickness (>0.5 mm) and area in real time, in order to estimate the volume of an oil spill, is very important for determining the oil spill response strategy and evaluating the oil spill disposal efficiency. In this article, a method is proposed to assess the volume of oil slicks by simultaneously measuring the thick oil slick's thickness and area using ultrasonic inspection and image processing, respectively. A remotely operated vehicle (ROV) integrating two ultrasonic immersion transducers was used as a platform to receive ultrasonic reflections from an oil slick. The oil slick thickness was determined by multiplying the speed of sound by the ultrasonic traveling time within the oil slick, which was calculated using the cross-correlation method. Images of the oil slick were captured by an optical camera carried on an airborne drone, and the oil slick area was calculated by applying the proposed image processing algorithms to these images. Multiple measurements were performed to verify the proposed method in laboratory experiments. The results show that the thickness, area and volume of a thick oil slick can be accurately measured with the proposed method, which could potentially serve as an applicable tool for measuring the volume of an oil slick during an oil spill response. Full article
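The cross-correlation step described above can be sketched in a few lines: correlate the transmitted pulse with the received signal, take the lag at the correlation peak, and convert the travel time to thickness using the speed of sound in oil. The signals, sampling rate, sound speed and the pulse-echo (round-trip, hence the division by two) assumption are all illustrative here, not the paper's configuration:

```python
import numpy as np

def time_of_flight(pulse, echo, fs):
    """Lag (s) of the echo relative to the pulse via cross-correlation."""
    corr = np.correlate(echo, pulse, mode="full")
    lag = int(np.argmax(corr)) - (len(pulse) - 1)
    return lag / fs

def slick_thickness(pulse, echo, fs, c_oil):
    """Thickness (m), assuming a pulse-echo round trip through the slick."""
    return c_oil * time_of_flight(pulse, echo, fs) / 2.0

# Hypothetical 1 MHz pulse sampled at 10 MHz, echo delayed by 40 samples
fs = 10e6
pulse = np.sin(2 * np.pi * 1e6 * np.arange(50) / fs)
echo = np.concatenate([np.zeros(40), pulse])
print(slick_thickness(pulse, echo, fs, c_oil=1400.0))
```

With real transducer data the echo would be attenuated and noisy, but the correlation peak remains a robust estimator of the delay, which is why the authors prefer it to simple thresholding.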
22 pages, 10248 KiB  
Article
Identifying the Optimal Radiometric Calibration Method for UAV-Based Multispectral Imaging
by Louis Daniels, Eline Eeckhout, Jana Wieme, Yves Dejaegher, Kris Audenaert and Wouter H. Maes
Remote Sens. 2023, 15(11), 2909; https://doi.org/10.3390/rs15112909 - 2 Jun 2023
Cited by 13 | Viewed by 3251
Abstract
The development of UAVs and multispectral cameras has led to remote sensing applications with unprecedented spatial resolution. However, uncertainty remains about the radiometric calibration process for converting raw images to surface reflectance. Several calibration methods exist, but the advantages and disadvantages of each are not well understood. We performed an empirical analysis of five different methods for calibrating a 10-band multispectral camera, the MicaSense RedEdge MX Dual Camera System, by comparing multispectral images with spectrometer measurements taken in the field on the same day. Two datasets were collected on the same field, one in clear-sky and one in overcast conditions. We found that the empirical line method (ELM), using multiple radiometric reference targets imaged at mission altitude, performed best in terms of bias and RMSE. However, two user-friendly commercial solutions relying on a single grey reference panel were only slightly less accurate and produced sufficiently accurate reflectance maps for most applications, particularly in clear-sky conditions; in overcast conditions, the accuracy gain from the more elaborate methods was larger. Incorporating measurements from an integrated downwelling light sensor (DLS2) improved neither the bias nor the RMSE, even in overcast conditions. Ultimately, the choice of calibration method depends on the required accuracy, time constraints and flight conditions. When the more accurate ELM is not possible, commercial, user-friendly solutions like those offered by Agisoft Metashape and Pix4D can be good enough. Full article
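The empirical line method that performed best reduces, per band, to a linear fit between raw digital numbers and the known reflectance of the reference targets. A minimal sketch (the panel DNs and reflectances are hypothetical):

```python
import numpy as np

def empirical_line(dn, reflectance):
    """Per-band linear fit reflectance = a*DN + b from reference targets."""
    a, b = np.polyfit(dn, reflectance, deg=1)
    return a, b

# Hypothetical dark/grey/bright panels in one band (not measured values)
dn = np.array([5000.0, 20000.0, 45000.0])      # raw digital numbers
rho = np.array([0.03, 0.22, 0.54])             # known panel reflectance
a, b = empirical_line(dn, rho)
print(a * 30000.0 + b)                          # reflectance of a raw pixel
```

The single-panel commercial workflows the paper compares are effectively the same fit constrained through one point, which is why they trail the multi-target ELM slightly, especially under overcast skies.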
35 pages, 8889 KiB  
Article
JO-TADP: Learning-Based Cooperative Dynamic Resource Allocation for MEC–UAV-Enabled Wireless Network
by Shabeer Ahmad, Jinling Zhang, Adil Khan, Umar Ajaib Khan and Babar Hayat
Drones 2023, 7(5), 303; https://doi.org/10.3390/drones7050303 - 4 May 2023
Cited by 4 | Viewed by 1843
Abstract
Providing robust communication services to mobile users (MUs) is a challenging task due to the dynamicity of MUs. Unmanned aerial vehicles (UAVs) and mobile edge computing (MEC) are used to improve connectivity by allocating resources to MUs more efficiently in a dynamic environment. However, energy consumption and lifetime issues in UAVs severely limit the resources and communication services. In this paper, we propose a dynamic cooperative resource allocation scheme for MEC–UAV-enabled wireless networks called joint optimization of trajectory, altitude, delay, and power (JO-TADP) using anarchic federated learning (AFL) and other learning algorithms to enhance data rate, use rate, and resource allocation efficiency. Initially, the MEC–UAVs are optimally positioned based on the MU density using the beluga whale optimization (BLWO) algorithm. Optimal clustering is performed in terms of splitting and merging using the triple-mode density peak clustering (TM-DPC) algorithm based on user mobility. Moreover, the trajectory, altitude, and hovering time of MEC–UAVs are predicted and optimized using the self-simulated inner attention long short-term memory (SSIA-LSTM) algorithm. Finally, the MUs and MEC–UAVs play auction games based on the classified requests, using an AFL-based cross-scale attention feature pyramid network (CSAFPN) and enhanced deep Q-learning (EDQN) algorithms for dynamic resource allocation. To validate the proposed approach, our system model has been simulated in Network Simulator 3.26 (NS-3.26). The results demonstrate that the proposed work outperforms the existing works in terms of connectivity, energy efficiency, resource allocation, and data rate. Full article
25 pages, 20160 KiB  
Article
The Development of Copper Clad Laminate Horn Antennas for Drone Interferometric Synthetic Aperture Radar
by Anthony Carpenter, James A. Lawrence, Richard Ghail and Philippa J. Mason
Drones 2023, 7(3), 215; https://doi.org/10.3390/drones7030215 - 20 Mar 2023
Cited by 3 | Viewed by 3207
Abstract
Interferometric synthetic aperture radar (InSAR) is an active remote sensing technique that typically utilises satellite data to quantify Earth surface and structural deformation. Drone InSAR should provide improved spatial-temporal data resolutions and operational flexibility. This necessitates the development of custom radar hardware for drone deployment, including antennas for the transmission and reception of microwave electromagnetic signals. We present the design, simulation, fabrication, and testing of two lightweight and inexpensive copper clad laminate (CCL)/printed circuit board (PCB) horn antennas for C-band radar deployed on the DJI Matrice 600 Pro drone. This is the first demonstration of horn antennas fabricated from CCL, and the first complete overview of antenna development for drone radar applications. The dimensions are optimised for the desired gain and centre frequency of 19 dBi and 5.4 GHz, respectively. The S11, directivity/gain, and half power beam widths (HPBW) are simulated in MATLAB, with the antennas tested in a radio frequency (RF) electromagnetic anechoic chamber using a calibrated vector network analyser (VNA) for comparison. The antennas are highly directive with gains of 15.80 and 16.25 dBi, respectively. The reduction in gain compared to the simulated value is attributed to a resonant frequency shift caused by the brass input feed increasing the electrical dimensions. The measured S11 and azimuth HPBW either meet or exceed the simulated results. A slight performance disparity between the two antennas is attributed to minor artefacts of the manufacturing and testing processes. The incorporation of the antennas into the drone payload is presented. Overall, both antennas satisfy our performance criteria and highlight the potential for CCL/PCB/FR-4 as a lightweight and inexpensive material for custom antenna production in drone radar and other antenna applications. Full article
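The 19 dBi design gain quoted above is tied to the horn's aperture area and the 5.4 GHz wavelength: a pyramidal horn's gain can be estimated as G = ε_ap·4πA/λ², with ε_ap ≈ 0.51 for an optimum-gain horn. A sketch (the aperture dimensions below are illustrative, not the paper's design):

```python
import math

def horn_gain_dbi(width_m, height_m, freq_hz, efficiency=0.51):
    """Approximate horn gain from aperture area: G = eff * 4*pi*A / lambda^2."""
    lam = 3e8 / freq_hz                 # free-space wavelength (m)
    g = efficiency * 4 * math.pi * width_m * height_m / lam ** 2
    return 10 * math.log10(g)

# Illustrative aperture at the 5.4 GHz design frequency (not the paper's dimensions)
print(round(horn_gain_dbi(0.20, 0.15, 5.4e9), 1))
```

This aperture-area scaling also explains the measured shortfall the authors report: any shift in effective electrical dimensions (here, from the brass input feed) moves the resonance and lowers the realized gain below the simulated value.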
22 pages, 9548 KiB  
Article
Mine Pit Wall Geological Mapping Using UAV-Based RGB Imaging and Unsupervised Learning
by Peng Yang, Kamran Esmaeili, Sebastian Goodfellow and Juan Carlos Ordóñez Calderón
Remote Sens. 2023, 15(6), 1641; https://doi.org/10.3390/rs15061641 - 18 Mar 2023
Cited by 6 | Viewed by 2512
Abstract
In surface mining operations, geological pit wall mapping is important since it provides significant information on the surficial geological features throughout the pit wall faces, thereby improving geological certainty and operational planning. Conventional pit wall geological mapping techniques generally rely on close visual observations and laboratory testing results, which can be both time- and labour-intensive and can expose the technical staff to different safety hazards on the ground. In this work, a case study was conducted by investigating the use of drone-acquired RGB images for pit wall mapping. High spatial resolution RGB image data were collected using a commercially available unmanned aerial vehicle (UAV) at two gold mines in Nevada, USA. Cluster maps were produced using unsupervised learning algorithms, including the implementation of convolutional autoencoders, to explore the use of unlabelled image data for pit wall geological mapping purposes. While the results are promising for simple geological settings, they deviate from human-labelled ground truth maps in more complex geological conditions. This indicates the need to further optimize and explore the algorithms to increase robustness for more complex geological cases. Full article
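The unsupervised pipeline above — compress each image patch to a feature vector with a convolutional autoencoder, then cluster the vectors to produce a cluster map — can be illustrated with a plain k-means step on precomputed feature codes. The feature vectors below are stand-ins for autoencoder outputs, and the paper's actual clustering setup may differ:

```python
import numpy as np

def kmeans(features, k, iters=20, seed=0):
    """Plain k-means on an (N, D) array of feature vectors."""
    rng = np.random.default_rng(seed)
    centers = features[rng.choice(len(features), size=k, replace=False)]
    for _ in range(iters):
        # distance of every feature vector to every cluster centre
        d = np.linalg.norm(features[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = features[labels == j].mean(axis=0)
    return labels, centers

# Stand-ins for autoencoder codes of pit-wall image patches
codes = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
labels, centers = kmeans(codes, k=2)
print(labels)
```

Because no labels are used, the resulting clusters must still be interpreted against geological ground truth, which is where the paper reports deviations in complex settings.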
19 pages, 2928 KiB  
Article
Estimating Black Oat Biomass Using Digital Surface Models and a Vegetation Index Derived from RGB-Based Aerial Images
by Lucas Renato Trevisan, Lisiane Brichi, Tamara Maria Gomes and Fabrício Rossi
Remote Sens. 2023, 15(5), 1363; https://doi.org/10.3390/rs15051363 - 28 Feb 2023
Cited by 3 | Viewed by 1690
Abstract
Responsible for food production and industry inputs, agriculture needs to adapt to increasing worldwide demands and environmental requirements. In this scenario, black oat has gained environmental and economic importance, since it can be used in no-tillage systems, as green manure, or for animal feed supplementation. Despite its importance, few studies have introduced more accurate and technological applications. Plant height (H) correlates with biomass production, which is related to yield; similarly, productivity status can be estimated from vegetation indices (VIs). The use of unmanned aerial vehicles (UAVs) for imaging provides the high spatial and temporal resolutions from which to derive information such as H and VIs, but faster and more accurate methodologies are necessary for applying this technology. This study aimed to obtain high-quality digital surface models (DSMs) and orthoimages from UAV-based RGB images directly, that is, without the use of ground control points or image pre-processing. DSMs and orthoimages were used to derive H (HDSM) and VIs (VIRGB), which in turn were used to model H and dry biomass (DB). Results showed that HDSM presented a strong correlation with actual plant height (HREF) (R2 = 0.85). Biomass modeling based on HDSM performed better for data collected up to and including the grain filling (R2 = 0.84) and flowering (R2 = 0.82) stages, while modeling based on VIRGB performed better for data collected up to and including the booting stage (R2 = 0.80). The best results for biomass estimation were obtained by combining HDSM and VIRGB with data collected up to and including the grain filling stage (R2 = 0.86). The presented methodology therefore permits the generation of trustworthy models for estimating the H and DB of black oats. Full article
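Combining HDSM and VIRGB for biomass, as in the best model above, amounts to a multiple linear regression DB = b0 + b1·H + b2·VI scored by R². A sketch with made-up plot data (the paper's model form may include other terms):

```python
import numpy as np

def fit_biomass(h, vi, biomass):
    """Least-squares fit of DB = b0 + b1*H + b2*VI, returning (coef, R^2)."""
    X = np.column_stack([np.ones_like(h), h, vi])
    coef, *_ = np.linalg.lstsq(X, biomass, rcond=None)
    pred = X @ coef
    ss_res = np.sum((biomass - pred) ** 2)
    ss_tot = np.sum((biomass - biomass.mean()) ** 2)
    return coef, 1.0 - ss_res / ss_tot

# Made-up plot data: height (m), RGB vegetation index, dry biomass (t/ha)
h = np.array([0.4, 0.6, 0.8, 1.0, 1.2])
vi = np.array([0.30, 0.42, 0.50, 0.55, 0.60])
db = np.array([1.1, 2.0, 2.9, 3.8, 4.9])
coef, r2 = fit_biomass(h, vi, db)
print(round(r2, 3))
```

The reported stage-dependent R² values reflect refitting this kind of model on data accumulated up to each growth stage.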
16 pages, 9064 KiB  
Article
Fine Classification of UAV Urban Nighttime Light Images Based on Object-Oriented Approach
by Daoquan Zhang, Deping Li, Liang Zhou and Jiejie Wu
Sensors 2023, 23(4), 2180; https://doi.org/10.3390/s23042180 - 15 Feb 2023
Cited by 5 | Viewed by 2029
Abstract
Fine classification of urban nighttime lighting is a key prerequisite for small-scale nighttime urban research. To help fill the gap in high-resolution urban nighttime light image classification and recognition research, this paper uses a small rotary-wing UAV platform to acquire static nighttime monocular tilted light images of communities near Meixi Lake in Changsha City as research data. Using an object-oriented classification method to fully exploit the spectral, textural and geometric features of urban nighttime lights, we build four classification models based on random forest (RF), support vector machine (SVM), K-nearest neighbor (KNN) and decision tree (DT), respectively, to finely extract five types of nighttime light: window light, neon light, road reflective light, building reflective light and background. The main conclusions are as follows: (i) dividing the image into three equal regions along the viewing direction alleviates the variable-scale problem of monocular tilted images, and multiresolution segmentation combined with Canny edge detection is better suited to urban nighttime lighting images; (ii) RF achieves the highest classification accuracy of the four algorithms, with an overall accuracy of 95.36% and a kappa coefficient of 0.9381 in the far-view region, followed by SVM and KNN, with DT the worst; (iii) among the fine classification results, window light and background have the highest accuracy, with both UA and PA above 93% in the RF model, while road reflective light has the lowest; (iv) among the selected features, the spectral features contribute most, above 59% in all three regions, followed by the textural features, with the geometric features contributing least. This paper demonstrates the feasibility of static monocular tilted UAV images for the fine classification of urban light types using an object-oriented approach, and provides data and technical support for small-scale urban nighttime research such as community building identification and nighttime human activity perception. Full article
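The overall accuracy and kappa coefficient in conclusion (ii) both derive from the classification confusion matrix; kappa additionally discounts the agreement expected by chance. A sketch with a hypothetical 3-class matrix (not the paper's results):

```python
def cohens_kappa(confusion):
    """Kappa from a square confusion matrix (rows: reference, cols: predicted)."""
    n = sum(sum(row) for row in confusion)
    observed = sum(confusion[i][i] for i in range(len(confusion))) / n
    expected = sum(
        (sum(confusion[i]) / n) * (sum(row[i] for row in confusion) / n)
        for i in range(len(confusion))
    )
    return (observed - expected) / (1 - expected)

# Hypothetical 3-class confusion matrix (not the paper's data)
cm = [[50, 2, 3],
      [4, 40, 1],
      [2, 3, 45]]
print(round(cohens_kappa(cm), 4))
```

The per-class UA and PA figures in conclusion (iii) are the column-wise and row-wise diagonal fractions of the same matrix.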
19 pages, 44461 KiB  
Article
AFL-Net: Attentional Feature Learning Network for Building Extraction from Remote Sensing Images
by Yue Qiu, Fang Wu, Haizhong Qian, Renjian Zhai, Xianyong Gong, Jichong Yin, Chengyi Liu and Andong Wang
Remote Sens. 2023, 15(1), 95; https://doi.org/10.3390/rs15010095 - 24 Dec 2022
Cited by 4 | Viewed by 1856
Abstract
Convolutional neural networks (CNNs) perform well in tasks of segmenting buildings from remote sensing images. However, the intraclass heterogeneity of buildings in images is high, while the interclass homogeneity between buildings and other nonbuilding objects is low, which leads to an inaccurate distinction between buildings and complex backgrounds. To overcome this challenge, we propose an Attentional Feature Learning Network (AFL-Net) that can accurately extract buildings from remote sensing images. We designed an attentional multiscale feature fusion (AMFF) module and a shape feature refinement (SFR) module to improve building recognition accuracy in complex environments. The AMFF module adaptively adjusts the weights of multi-scale features through the attention mechanism, which enhances global perception and ensures the integrity of building segmentation results. The SFR module captures the shape features of the buildings, which enhances the network's ability to distinguish the area between building edges and surrounding nonbuilding objects and reduces the over-segmentation of buildings. An ablation study with both qualitative and quantitative analyses verified the effectiveness of the AMFF and SFR modules. The proposed AFL-Net achieved intersection over union (IoU) values of 91.37, 82.10, 73.27 and 79.81% on the WHU Building Aerial Imagery, Inria Aerial Image Labeling, Massachusetts Buildings, and Building Instances of Typical Cities in China datasets, respectively. Thus, AFL-Net is a promising approach for extracting buildings from remote sensing images. Full article
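The IoU metric used to score AFL-Net measures the overlap between the predicted and reference building masks: the intersection of the two pixel sets divided by their union. A minimal sketch on toy binary masks:

```python
def iou(pred, truth):
    """Intersection over union of two flattened binary masks."""
    inter = sum(1 for p, t in zip(pred, truth) if p and t)
    union = sum(1 for p, t in zip(pred, truth) if p or t)
    return inter / union if union else 1.0

# Toy 1-D "building mask" example
pred  = [0, 1, 1, 1, 0, 0]
truth = [0, 0, 1, 1, 1, 0]
print(iou(pred, truth))   # → 0.5
```

Because IoU penalizes both missed building pixels and false detections, it is a stricter summary than per-pixel accuracy for segmentation benchmarks like those listed above.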