Search Results (47)

Search Parameters:
Keywords = UAV high-resolution optical image

26 pages, 9389 KB  
Article
Unravelling the Characteristics of Microhabitat Alterations in Floodplain Inundated Areas Based on High-Resolution UAV Imagery and Remote Sensing: A Case Study in Jingjiang, Yangtze River
by Yichen Zheng, Dongshuo Lu, Zongrui Yang and Jianbo Chang
Drones 2025, 9(4), 315; https://doi.org/10.3390/drones9040315 - 18 Apr 2025
Viewed by 897
Abstract
The floodplain of a large river plays a crucial role in the river’s ecosystem and serves as an essential microhabitat for river fish to complete their life history events. Over the past four decades, the floodplain represented by the Jingjiang section in the middle reaches of the Yangtze River has experienced a significant reduction in the area, complexity, and diversity of fish microhabitats. This study quantitatively analyzed the dynamic changes and geomorphological structure of the floodplain in the Jingjiang reach (JJR) of the Yangtze River using satellite remote sensing images and high-resolution unmanned aerial vehicle (UAV) optical images. We built an enhanced U-Net model incorporating both the CBAM and SE parallel attention mechanisms to classify these images and identify environmental structural units. The accuracy of the enhanced model was 16.39% higher than that of the original U-Net model. In parallel, the modified normalized difference water index (mNDWI), enhanced vegetation index (EVI), and normalized difference vegetation index (NDVI) were utilized to extract the flood frequency of the floodplain and analyze the area changes of the floodplain in the JJR. The flood-season inundated area of the JJR was consistent with the overall trend in flood areas, which generally exhibited a downward tendency. In 2022, the floodplain of the JJR underwent substantial anthropogenic disturbance, with 40% of its area comprising anthropogenic environmental units. Compared to historical periods, the impervious surface within the floodplain has increased annually, while ecological units such as riparian forests and trees have gradually diminished or even disappeared, leading to a simplification of structural complexity. These findings provide a critical background and robust data foundation for the protection and restoration of fish habitats and the formulation of strategies for fish population reconstruction in the Yangtze River.
Full article
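The flood-frequency extraction in the abstract above rests on standard band-ratio index formulas (mNDWI, EVI, NDVI). A minimal numpy sketch, with purely illustrative band arrays and thresholds rather than anything from the paper:

```python
import numpy as np

def ndvi(nir, red):
    # NDVI = (NIR - Red) / (NIR + Red)
    return (nir - red) / (nir + red + 1e-10)

def mndwi(green, swir):
    # modified NDWI = (Green - SWIR) / (Green + SWIR)
    return (green - swir) / (green + swir + 1e-10)

def evi(nir, red, blue):
    # EVI = 2.5 * (NIR - Red) / (NIR + 6*Red - 7.5*Blue + 1)
    return 2.5 * (nir - red) / (nir + 6.0 * red - 7.5 * blue + 1.0)

# toy 2x2 reflectance "images"
nir   = np.array([[0.5, 0.4], [0.3, 0.05]])
red   = np.array([[0.1, 0.1], [0.1, 0.04]])
green = np.array([[0.1, 0.1], [0.1, 0.06]])
swir  = np.array([[0.2, 0.2], [0.2, 0.02]])

water_mask = mndwi(green, swir) > 0   # pixels where green exceeds SWIR behave like water
veg_mask   = ndvi(nir, red) > 0.3     # strongly vegetated pixels
```

Counting how often each pixel falls in `water_mask` across a time series of such scenes gives a per-pixel flood frequency.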

17 pages, 9384 KB  
Article
Multi-Spectral Point Cloud Constructed with Advanced UAV Technique for Anisotropic Reflectance Analysis of Maize Leaves
by Kaiyi Bi, Yifang Niu, Hao Yang, Zheng Niu, Yishuo Hao and Li Wang
Remote Sens. 2025, 17(1), 93; https://doi.org/10.3390/rs17010093 - 30 Dec 2024
Viewed by 1380
Abstract
Reflectance anisotropy in remote sensing images can complicate the interpretation of spectral signatures, and extracting precise structural information for the affected pixels is a promising way to address this. Low-altitude unmanned aerial vehicle (UAV) systems can capture high-resolution imagery down to centimeter-level detail, potentially simplifying the characterization of leaf anisotropic reflectance. We proposed a novel maize point cloud generation method that combines an advanced UAV cross-circling oblique (CCO) photography route with the Structure from Motion–Multi-View Stereo (SfM-MVS) algorithm. A multi-spectral point cloud was then generated by fusing multi-spectral imagery with the point cloud using a DSM-based approach. The Rahman–Pinty–Verstraete (RPV) model was finally applied to establish maize leaf-level anisotropic reflectance models. Our results indicated a high degree of similarity between measured and estimated maize structural parameters (R2 = 0.89 for leaf length and 0.96 for plant height) based on accurate point cloud data obtained from the CCO route. Most data points clustered around the principal plane due to a constant angle between the sun and view vectors, resulting in a limited range of view azimuths. Leaf reflectance anisotropy was characterized by the RPV model with R2 ranging from 0.38 to 0.75 for five wavelength bands. These findings hold significant promise for promoting the decoupling of plant structural information and leaf optical characteristics within remote sensing data. Full article
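The RPV model used above for leaf-level anisotropy has a compact closed form. Below is a sketch of one common formulation; sign conventions for the Henyey–Greenstein term vary between implementations, so treat this as indicative rather than as the paper's exact parameterization:

```python
import numpy as np

def rpv(theta_s, theta_v, phi, rho0, k, g):
    """Rahman-Pinty-Verstraete BRF (one common formulation; angles in radians)."""
    mu_s, mu_v = np.cos(theta_s), np.cos(theta_v)
    # Minnaert-like angular term
    m = (mu_s * mu_v * (mu_s + mu_v)) ** (k - 1.0)
    # phase angle between sun and view directions
    cos_xi = mu_s * mu_v + np.sin(theta_s) * np.sin(theta_v) * np.cos(phi)
    # Henyey-Greenstein scattering phase function
    f = (1.0 - g ** 2) / (1.0 + 2.0 * g * cos_xi + g ** 2) ** 1.5
    # hot-spot correction
    G = np.sqrt(np.tan(theta_s) ** 2 + np.tan(theta_v) ** 2
                - 2.0 * np.tan(theta_s) * np.tan(theta_v) * np.cos(phi))
    h = 1.0 + (1.0 - rho0) / (1.0 + G)
    return rho0 * m * f * h
```

Fitting rho0, k, and g to the per-angle reflectances sampled from the multi-spectral point cloud is then a routine nonlinear least-squares problem.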

13 pages, 5286 KB  
Article
Eye-Inspired Single-Pixel Imaging with Lateral Inhibition and Variable Resolution for Special Unmanned Vehicle Applications in Tunnel Inspection
by Bin Han, Quanchao Zhao, Moudan Shi, Kexin Wang, Yunan Shen, Jie Cao and Qun Hao
Biomimetics 2024, 9(12), 768; https://doi.org/10.3390/biomimetics9120768 - 18 Dec 2024
Viewed by 1310
Abstract
This study presents a cutting-edge imaging technique for special unmanned vehicles designed to enhance tunnel inspection capabilities. This technique integrates ghost imaging inspired by the human visual system with lateral inhibition and variable resolution to improve environmental perception in challenging conditions, such as poor lighting and dust. By emulating the high-resolution foveal vision of the human eye, this method significantly enhances the efficiency and quality of image reconstruction for fine targets within the region of interest (ROI). This method utilizes non-uniform speckle patterns coupled with lateral inhibition to augment optical nonlinearity, leading to superior image quality and contrast. Lateral inhibition effectively suppresses background noise, thereby improving the imaging efficiency and substantially increasing the signal-to-noise ratio (SNR) in noisy environments. Extensive indoor experiments and field tests in actual tunnel settings validated the performance of this method. Variable-resolution sampling reduced the number of samples required by 50%, enhancing the reconstruction efficiency without compromising image quality. Field tests demonstrated the system’s ability to successfully image fine targets, such as cables, under dim and dusty conditions, achieving SNRs from 13.5 dB at 10% sampling to 27.7 dB at full sampling. The results underscore the potential of this technique for enhancing environmental perception in special unmanned vehicles, especially in GPS-denied environments with poor lighting and dust. Full article
(This article belongs to the Special Issue Advanced Biologically Inspired Vision and Its Application)
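The correlation-based reconstruction underlying single-pixel/ghost imaging can be sketched quickly. Note the paper's non-uniform (foveated) speckle patterns and lateral inhibition are replaced here by plain uniform random patterns, so this shows only the baseline scheme:

```python
import numpy as np

rng = np.random.default_rng(0)
H = W = 8
n_patterns = 4000

# ground-truth scene: a bright 2x2 "target" on a dark background
scene = np.zeros((H, W))
scene[3:5, 3:5] = 1.0

# random speckle patterns (the paper shapes these non-uniformly; plain noise here)
patterns = rng.random((n_patterns, H, W))

# single-pixel "bucket" measurements: total light collected per pattern
bucket = (patterns * scene).sum(axis=(1, 2))

# correlation reconstruction: <I*S> - <I><S>
recon = (bucket[:, None, None] * patterns).mean(axis=0) \
        - bucket.mean() * patterns.mean(axis=0)

# the brightest reconstructed pixels should coincide with the target
rows, cols = np.unravel_index(np.argsort(recon, axis=None)[-4:], recon.shape)
```

Variable-resolution sampling in the paper concentrates pattern detail in the ROI, which is what lets it halve the sample count at equal quality.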

19 pages, 53371 KB  
Article
Efficient UAV-Based Automatic Classification of Cassava Fields Using K-Means and Spectral Trend Analysis
by Apinya Boonrang, Pantip Piyatadsananon and Tanakorn Sritarapipat
AgriEngineering 2024, 6(4), 4406-4424; https://doi.org/10.3390/agriengineering6040250 - 22 Nov 2024
Cited by 1 | Viewed by 1496
Abstract
High-resolution images captured by Unmanned Aerial Vehicles (UAVs) play a vital role in precision agriculture, particularly in evaluating crop health and detecting weeds. However, the detailed pixel information in these images makes classification a time-consuming and resource-intensive process. Despite these challenges, UAV imagery is increasingly utilized for various agricultural classification tasks. This study introduces an automatic classification method designed to streamline the process, specifically targeting cassava plants, weeds, and soil classification. The approach combines K-means unsupervised classification with spectral trend-based labeling, significantly reducing the need for manual intervention. The method ensures reliable and accurate classification results by leveraging color indices derived from RGB data and applying mean-shift filtering parameters. Key findings reveal that the combination of the blue (B) channel, Visible Atmospherically Resistant Index (VARI), and color index (CI) with filtering parameters, including a spatial radius (sp) = 5 and a color radius (sr) = 10, effectively differentiates soil from vegetation. Notably, using the green (G) channel, excess red (ExR), and excess green (ExG) with filtering parameters (sp = 10, sr = 20) successfully distinguishes cassava from weeds. The classification maps generated by this method achieved high kappa coefficients of 0.96, with accuracy levels comparable to supervised methods like Random Forest classification. This technique offers significant reductions in processing time compared to traditional methods and does not require training data, making it adaptable to different cassava fields captured by various UAV-mounted optical sensors. Ultimately, the proposed classification process minimizes manual intervention by incorporating efficient pre-processing steps into the classification workflow, making it a valuable tool for precision agriculture. Full article
(This article belongs to the Special Issue Computer Vision for Agriculture and Smart Farming)
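The unsupervised step above (color indices fed to K-means) can be sketched with a toy version. The index definitions are the standard ones (ExG = 2g − r − b, ExR = 1.4r − g, VARI = (G − R)/(G + R − B)), but the pixel values, the chromaticity-based VARI, and the bare-bones Lloyd's loop are illustrative stand-ins, not the paper's pipeline (which also applies mean-shift filtering and spectral trend-based labeling):

```python
import numpy as np

def color_indices(rgb):
    # rgb: (N, 3); use normalized chromatic coordinates so r + g + b = 1
    s = rgb.sum(axis=1, keepdims=True) + 1e-10
    r, g, b = (rgb / s).T
    exg = 2.0 * g - r - b                     # excess green
    exr = 1.4 * r - g                         # excess red
    vari = (g - r) / (g + r - b + 1e-10)      # VARI (here on chromaticities)
    return np.column_stack([exg, exr, vari])

def kmeans(X, k, iters=20):
    # minimal Lloyd's algorithm; centers seeded at the ExG extremes
    centers = X[[np.argmin(X[:, 0]), np.argmax(X[:, 0])]]
    for _ in range(iters):
        labels = ((X[:, None] - centers) ** 2).sum(-1).argmin(axis=1)
        centers = np.array([X[labels == j].mean(axis=0) for j in range(k)])
    return labels

rng = np.random.default_rng(1)
veg  = rng.normal([0.2, 0.6, 0.2], 0.02, (20, 3))   # greenish vegetation pixels
soil = rng.normal([0.5, 0.4, 0.3], 0.02, (20, 3))   # brownish soil pixels
labels = kmeans(color_indices(np.vstack([veg, soil])), 2)
```

The spectral-trend labeling in the paper then assigns cluster identities (soil, cassava, weed) from the clusters' mean index values rather than from training data.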

21 pages, 12271 KB  
Article
Detection of Marine Oil Spill from PlanetScope Images Using CNN and Transformer Models
by Jonggu Kang, Chansu Yang, Jonghyuk Yi and Yangwon Lee
J. Mar. Sci. Eng. 2024, 12(11), 2095; https://doi.org/10.3390/jmse12112095 - 19 Nov 2024
Cited by 9 | Viewed by 2959
Abstract
The contamination of marine ecosystems by oil spills poses a significant threat to the marine environment, necessitating the prompt and effective implementation of measures to mitigate the associated damage. Satellites offer a spatial and temporal advantage over aircraft and unmanned aerial vehicles (UAVs) in oil spill detection due to their wide-area monitoring capabilities. While oil spill detection has traditionally relied on synthetic aperture radar (SAR) images, the combined use of optical satellite sensors alongside SAR can significantly enhance monitoring capabilities, providing improved spatial and temporal coverage. The advent of deep learning methodologies, particularly convolutional neural networks (CNNs) and Transformer models, has generated considerable interest in their potential for oil spill detection. In this study, we conducted a comprehensive and objective comparison to evaluate the suitability of CNN and Transformer models for marine oil spill detection. High-resolution optical satellite images were used to optimize DeepLabV3+, a widely utilized CNN model; Swin-UPerNet, a representative Transformer model; and Mask2Former, which employs a Transformer-based architecture for both encoding and decoding. The results of cross-validation demonstrate a mean Intersection over Union (mIoU) of 0.740, 0.840, and 0.804 for the three models, respectively, indicating their potential for detecting oil spills in the ocean. Additionally, we performed a histogram analysis on the predicted oil spill pixels, which allowed us to classify the types of oil. These findings highlight the considerable promise of the Swin Transformer models for oil spill detection in the context of future marine disaster monitoring. Full article
(This article belongs to the Special Issue Remote Sensing Applications in Marine Environmental Monitoring)
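The mIoU metric reported above is straightforward to compute from predicted masks. A minimal implementation, with toy labels rather than the paper's data:

```python
import numpy as np

def miou(y_true, y_pred, n_classes):
    """Mean Intersection over Union from flat label arrays."""
    ious = []
    for c in range(n_classes):
        inter = np.sum((y_true == c) & (y_pred == c))
        union = np.sum((y_true == c) | (y_pred == c))
        if union > 0:                 # skip classes absent from both masks
            ious.append(inter / union)
    return float(np.mean(ious))

# toy 'water' (0) vs 'oil' (1) pixel labels
truth = np.array([0, 0, 1, 1, 1, 0])
pred  = np.array([0, 1, 1, 1, 0, 0])
score = miou(truth, pred, 2)
```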

18 pages, 2655 KB  
Article
Advanced Image Preprocessing and Integrated Modeling for UAV Plant Image Classification
by Girma Tariku, Isabella Ghiglieno, Anna Simonetto, Fulvio Gentilin, Stefano Armiraglio, Gianni Gilioli and Ivan Serina
Drones 2024, 8(11), 645; https://doi.org/10.3390/drones8110645 - 6 Nov 2024
Cited by 5 | Viewed by 3021
Abstract
The automatic identification of plant species using unmanned aerial vehicles (UAVs) is a valuable tool for ecological research. However, challenges such as reduced spatial resolution due to high-altitude operations, image degradation from camera optics and sensor limitations, and information loss caused by terrain shadows hinder the accurate classification of plant species from UAV imagery. This study addresses these issues by proposing a novel image preprocessing pipeline and evaluating its impact on model performance. Our approach improves image quality through a multi-step pipeline that includes Enhanced Super-Resolution Generative Adversarial Networks (ESRGAN) for resolution enhancement, Contrast-Limited Adaptive Histogram Equalization (CLAHE) for contrast improvement, and white balance adjustments for accurate color representation. These preprocessing steps ensure high-quality input data, leading to better model performance. For feature extraction and classification, we employ a pre-trained VGG-16 deep convolutional neural network, followed by machine learning classifiers, including Support Vector Machine (SVM), random forest (RF), and Extreme Gradient Boosting (XGBoost). This hybrid approach, combining deep learning for feature extraction with machine learning for classification, not only enhances classification accuracy but also reduces computational resource requirements compared to relying solely on deep learning models. Notably, the VGG-16 + SVM model achieved an outstanding accuracy of 97.88% on a dataset preprocessed with ESRGAN and white balance adjustments, with a precision of 97.9%, a recall of 97.8%, and an F1 score of 0.978. Through a comprehensive comparative study, we demonstrate that the proposed framework, utilizing VGG-16 for feature extraction, SVM for classification, and preprocessed images with ESRGAN and white balance adjustments, achieves superior performance in plant species identification from UAV imagery. Full article
(This article belongs to the Section Drones in Ecology)
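Of the preprocessing steps above, the white-balance adjustment is the simplest to illustrate. The paper does not specify which scheme it used, so the common gray-world assumption serves here as a stand-in:

```python
import numpy as np

def gray_world_white_balance(img):
    """Gray-world white balance: scale each channel so its mean
    matches the global mean intensity."""
    img = img.astype(float)
    channel_means = img.reshape(-1, 3).mean(axis=0)
    gain = channel_means.mean() / (channel_means + 1e-10)
    return np.clip(img * gain, 0.0, 255.0)

# toy image with a greenish color cast
img = np.full((4, 4, 3), [100.0, 140.0, 120.0])
balanced = gray_world_white_balance(img)
```

After balancing, the three channel means coincide, removing the cast before the image is passed to the feature extractor.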

24 pages, 16499 KB  
Article
Estimating Maize Crop Height and Aboveground Biomass Using Multi-Source Unmanned Aerial Vehicle Remote Sensing and Optuna-Optimized Ensemble Learning Algorithms
by Yafeng Li, Changchun Li, Qian Cheng, Fuyi Duan, Weiguang Zhai, Zongpeng Li, Bohan Mao, Fan Ding, Xiaohui Kuang and Zhen Chen
Remote Sens. 2024, 16(17), 3176; https://doi.org/10.3390/rs16173176 - 28 Aug 2024
Cited by 16 | Viewed by 3731
Abstract
Accurately assessing maize crop height (CH) and aboveground biomass (AGB) is crucial for understanding crop growth and light-use efficiency. Unmanned aerial vehicle (UAV) remote sensing, with its flexibility and high spatiotemporal resolution, has been widely applied in crop phenotyping studies. Traditional canopy height models (CHMs) are significantly influenced by image resolution and meteorological factors. In contrast, the accumulated incremental height (AIH) extracted from point cloud data offers a more accurate estimation of CH. In this study, vegetation indices and structural features were extracted from optical imagery, nadir and oblique photography, and LiDAR point cloud data. Optuna-optimized models, including random forest regression (RFR), light gradient boosting machine (LightGBM), gradient boosting decision tree (GBDT), and support vector regression (SVR), were employed to estimate maize AGB. Results show that AIH99 has higher accuracy in estimating CH. LiDAR demonstrated the highest accuracy, while oblique photography and nadir photography point clouds were slightly less accurate. Fusion of multi-source data achieved higher estimation accuracy than single-sensor data. Embedding structural features can mitigate spectral saturation, with R2 ranging from 0.704 to 0.939 and RMSE ranging from 0.338 to 1.899 t/hm2. During the entire growth cycle, the R2 for LightGBM and RFR were 0.887 and 0.878, with RMSEs of 1.75 and 1.76 t/hm2. LightGBM and RFR also performed well across different growth stages, while SVR showed the poorest performance. As nitrogen application gradually decreased, both the accumulation and the accumulation rate of AGB decreased. This high-throughput crop-phenotyping analysis method offers advantages, such as speed and high accuracy, providing valuable references for precision agriculture management in maize fields. Full article
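The R2 and RMSE figures quoted above are the usual regression accuracy metrics. For reference, with made-up biomass values rather than the paper's data:

```python
import numpy as np

def r2_rmse(y_obs, y_est):
    """Coefficient of determination and root-mean-square error."""
    resid = y_obs - y_est
    rmse = float(np.sqrt(np.mean(resid ** 2)))
    ss_res = float(np.sum(resid ** 2))
    ss_tot = float(np.sum((y_obs - y_obs.mean()) ** 2))
    return 1.0 - ss_res / ss_tot, rmse

# illustrative biomass values in t/hm2
obs = np.array([4.0, 6.0, 8.0, 10.0])
est = np.array([4.5, 5.5, 8.5, 9.5])
r2, rmse = r2_rmse(obs, est)
```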

20 pages, 7943 KB  
Article
Decomposition of Submesoscale Ocean Wave and Current Derived from UAV-Based Observation
by Sin-Young Kim, Jong-Seok Lee, Youchul Jeong and Young-Heon Jo
Remote Sens. 2024, 16(13), 2275; https://doi.org/10.3390/rs16132275 - 21 Jun 2024
Cited by 1 | Viewed by 2240
Abstract
Consecutive submesoscale sea surface processes observed by an unmanned aerial vehicle (UAV) were decomposed into spatial wave and current features. For the image decomposition, the Fast and Adaptive Multidimensional Empirical Mode Decomposition (FA-MEMD) method was employed to disintegrate multicomponent signals identified in sea surface optical images into modulated signals characterized by their amplitudes and frequencies. These signals, referred to as Bidimensional Intrinsic Mode Functions (BIMFs), represent the inherent two-dimensional oscillatory patterns within sea surface optical data. The BIMFs, separated into seven modes and a residual component, were subsequently reconstructed based on the physical frequencies. A two-dimensional Fast Fourier Transform (2D FFT) for each high-frequency mode was used for surface wave analysis to illustrate the wave characteristics. Wavenumbers (Kx, Ky) ranging between 0.01–0.1 rad m−1 and wave directions predominantly in the northeastward direction were identified from the spectral peak ranges. The Optical Flow (OF) algorithm was applied to the remaining consecutive low-frequency modes as the current signal under 0.1 Hz for surface current analysis and to estimate a current field with a 1 m spatial resolution. The accuracy of currents in the overall region was validated with in situ drifter measurements, showing an R-squared (R2) value of 0.80 and an average root-mean-square error (RMSE) of 0.03 m s−1. This study proposes a novel framework for analyzing individual sea surface dynamical processes acquired from high-resolution UAV imagery using a multidimensional signal decomposition method specialized in nonlinear and nonstationary data analysis. Full article
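The 2D-FFT wave-analysis step can be illustrated on a synthetic plane wave: recover the dominant wavenumber and direction from the spectral peak. The grid spacing and wave parameters are illustrative; the paper applies this to the decomposed high-frequency image modes instead:

```python
import numpy as np

# synthetic sea-surface snapshot: one plane wave on a 1 m grid
nx = ny = 128
x, y = np.meshgrid(np.arange(nx), np.arange(ny))
kx_true = ky_true = 0.05                      # rad/m, inside the 0.01-0.1 range
surface = np.sin(kx_true * x + ky_true * y)

# 2D FFT power spectrum; fftfreq returns cycles/m, convert to rad/m
spec = np.abs(np.fft.fft2(surface)) ** 2
kx = 2.0 * np.pi * np.fft.fftfreq(nx)         # 1 m grid spacing
ky = 2.0 * np.pi * np.fft.fftfreq(ny)

# dominant wavenumber = spectral peak in the positive quadrant
quad = spec[:ny // 2, :nx // 2]
iy, ix = np.unravel_index(np.argmax(quad), quad.shape)
kx_est, ky_est = kx[ix], ky[iy]
direction_deg = np.degrees(np.arctan2(ky_est, kx_est))  # 45 deg here
```

The estimate lands on the nearest FFT bin (about 0.049 rad/m for a 128 m domain), which is why longer transects give finer wavenumber resolution.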

23 pages, 16794 KB  
Article
Discontinuous Surface Ruptures and Slip Distributions in the Epicentral Region of the 2021 Mw7.4 Maduo Earthquake, China
by Longfei Han, Jing Liu-Zeng, Wenqian Yao, Wenxin Wang, Yanxiu Shao, Xiaoli Liu, Xianyang Zeng, Yunpeng Gao and Hongwei Tu
Remote Sens. 2024, 16(7), 1250; https://doi.org/10.3390/rs16071250 - 1 Apr 2024
Cited by 1 | Viewed by 2276
Abstract
Geometric complexities play an important role in the nucleation, propagation, and termination of strike-slip earthquake ruptures. The 2021 Mw7.4 Maduo earthquake rupture initiated at a large releasing stepover with a complex fault intersection. In the epicentral region, we conducted detailed mapping and classification of the surface ruptures and slip measurements associated with the earthquake, combining high-resolution uncrewed aerial vehicle (UAV) images and optical image correlation with field investigations. Our findings indicate that the coseismic ruptures present discontinuous patterns mixed with numerous lateral spreadings due to strong ground shaking. The slip along the discontinuous surface ruptures is insufficient to account for the large and clear displacements of offset landforms in the epicentral region. Within the releasing stepovers, the deformation zone revealed from the optical image correlation map indicates that a fault may cut diagonally across the pull-apart basin at depth. The left-lateral horizontal coseismic displacements from field measurements are typically ≤0.6 m, significantly lower than the 1–2.7 m measured from the optical image correlation map. Such a discrepancy indicates a significant proportion of off-fault deformation or the possibility that the rupture stopped at a shallow depth during its initiation phase instead of extending to the surface. The fault network and multi-fault junctions west and south of the epicenter suggest a possible complex path, which retarded the westward propagation at the initial phase of rupture growth. A hampered initiation might enhance the seismic ground motion and the complex ground deformation features at the surface, including widespread shaking-related fissures. Full article

19 pages, 5108 KB  
Article
Individual Tree Species Identification and Crown Parameters Extraction Based on Mask R-CNN: Assessing the Applicability of Unmanned Aerial Vehicle Optical Images
by Zongqi Yao, Guoqi Chai, Lingting Lei, Xiang Jia and Xiaoli Zhang
Remote Sens. 2023, 15(21), 5164; https://doi.org/10.3390/rs15215164 - 29 Oct 2023
Cited by 11 | Viewed by 3210
Abstract
Automatic, efficient, and accurate individual tree species identification and crown parameters extraction is of great significance for biodiversity conservation and ecosystem function assessment. UAV multispectral data have the advantage of low cost and easy access, and hyperspectral data can finely characterize spatial and spectral features. As such, they have attracted extensive attention in the field of forest resource investigation, but their applicability for end-to-end individual tree species identification is unclear. Based on the Mask R-CNN instance segmentation model, this study utilized UAV hyperspectral images to generate spectral thinning data, spectral dimensionality reduction data, and simulated multispectral data, thereby evaluating the importance of high-resolution spectral information, the effectiveness of PCA dimensionality reduction processing of hyperspectral data, and the feasibility of multispectral data for individual tree identification. The results showed that the individual tree species identification accuracy of spectral thinning data was positively correlated with the number of bands, and full-band hyperspectral data were better than other hyperspectral thinning data and PCA dimensionality reduction data, with Precision, Recall, and F1-score of 0.785, 0.825, and 0.802, respectively. The simulated multispectral data are also effective in identifying individual tree species, among which the best result is realized through the combination of Green, Red, and NIR bands, with Precision, Recall, and F1-score of 0.797, 0.836, and 0.814, respectively. Furthermore, by using Green–Red–NIR data as input, the tree crown area and width are predicted with an RMSE of 3.16 m2 and 0.51 m, respectively, along with an rRMSE of 0.26 and 0.12.
This study indicates that the Mask R-CNN model with UAV optical images is a novel solution for identifying individual tree species and extracting crown parameters, which can provide practical technical support for sustainable forest management and ecological diversity monitoring. Full article
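The PCA dimensionality reduction evaluated above can be sketched with an SVD. The cube, latent mixing, and component count below are illustrative; the paper's hyperspectral data has far more bands:

```python
import numpy as np

def pca_reduce(cube, n_components):
    """Project an (H, W, B) hyperspectral cube onto its first
    principal components (SVD-based sketch)."""
    h, w, b = cube.shape
    X = cube.reshape(-1, b)
    Xc = X - X.mean(axis=0)
    # right singular vectors = principal axes of the band space
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    return (Xc @ vt[:n_components].T).reshape(h, w, n_components)

# toy cube: 10 correlated "bands" driven by 2 latent signals plus noise
rng = np.random.default_rng(0)
latent = rng.normal(size=(6, 6, 2))
mix = rng.normal(size=(2, 10))
cube = latent @ mix + 0.01 * rng.normal(size=(6, 6, 10))
reduced = pca_reduce(cube, 3)
```

The leading components carry nearly all the variance here, mirroring why PCA-reduced hyperspectral input can stand in for the full band set at lower cost.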

43 pages, 1140 KB  
Review
Measurement of Total Dissolved Solids and Total Suspended Solids in Water Systems: A Review of the Issues, Conventional, and Remote Sensing Techniques
by Godson Ebenezer Adjovu, Haroon Stephen, David James and Sajjad Ahmad
Remote Sens. 2023, 15(14), 3534; https://doi.org/10.3390/rs15143534 - 13 Jul 2023
Cited by 186 | Viewed by 58207
Abstract
This study provides a comprehensive review of the efforts utilized in the measurement of water quality parameters (WQPs) with a focus on total dissolved solids (TDS) and total suspended solids (TSS). The current methods used in the measurement of TDS and TSS include conventional field and gravimetric approaches. These methods are limited by their associated cost and labor and by their limited spatial coverage. Remote Sensing (RS) applications have, however, been used over the past few decades as an alternative to overcome these limitations, although they present their own issues, including atmospheric interference in images and radiometric and spectral resolution constraints. Studies of these WQPs with RS, therefore, require the knowledge and utilization of the best mechanisms. The use of RS for retrieval of TDS, TSS, and their forms has been explored in many studies using images from airborne sensors onboard unmanned aerial vehicles (UAVs) and satellite sensors such as those onboard the Landsat, Sentinel-2, Aqua, and Terra platforms. The images and their spectral properties serve as inputs for deep learning analysis and statistical and machine learning models. Methods used to retrieve these WQP measurements depend on the optical properties of the inland water bodies. While TSS is an optically active parameter, TDS is optically inactive with a low signal-to-noise ratio. TDS can nevertheless be detected in the visible, near-infrared, and infrared bands because changes in TDS usually co-occur with changes in an optically active WQP. This study revealed significant improvements from incorporating RS alongside conventional approaches in estimating WQPs. The findings reveal that improved spatiotemporal resolution has the potential to effectively detect changes in the WQPs.
For effective monitoring of TDS and TSS using RS, we recommend employing atmospheric correction mechanisms to reduce atmospheric interference in images, exploring the fusion of optical and microwave bands, using high-resolution hyperspectral images, utilizing ML and deep learning models, and calibrating and validating against data measured with conventional methods. Further studies could focus on the development of new technology and sensors using UAVs and satellite images to produce real-time in situ monitoring of TDS and TSS. The findings presented in this review aid in consolidating the understanding and advancement of TDS and TSS measurements in a single repository, thereby offering stakeholders, researchers, decision-makers, and regulatory bodies a go-to information resource to enhance their monitoring efforts and the mitigation of water quality impairments. Full article

26 pages, 18907 KB  
Article
Comparing Unmanned Aerial Multispectral and Hyperspectral Imagery for Harmful Algal Bloom Monitoring in Artificial Ponds Used for Fish Farming
by Diogo Olivetti, Rejane Cicerelli, Jean-Michel Martinez, Tati Almeida, Raphael Casari, Henrique Borges and Henrique Roig
Drones 2023, 7(7), 410; https://doi.org/10.3390/drones7070410 - 21 Jun 2023
Cited by 20 | Viewed by 5685
Abstract
This work aimed to assess the potential of unmanned aerial vehicle (UAV) multi- and hyper-spectral platforms to estimate chlorophyll-a (Chl-a) and cyanobacteria in experimental fishponds in Brazil. In addition to spectral resolutions, the tested platforms differ in the price, payload, imaging system, and processing. Hyperspectral airborne surveys were conducted using a 276-band push-broom Headwall Nano-Hyperspec camera onboard a DJI Matrice 600 UAV. Multispectral airborne surveys were conducted using a 4-band global-shutter Parrot Sequoia frame camera onboard a DJI Phantom 4 UAV. Water quality field measurements were acquired using a portable fluorometer and laboratory analysis. Concentrations ranged from 14.3 to 290.7 µg/L for Chl-a and from 0 to 112.5 µg/L for cyanobacteria. Forty-one Chl-a and cyanobacteria bio-optical retrieval models were tested. The UAV hyperspectral image achieved robust Chl-a and cyanobacteria assessments, with RMSE values of 32.8 and 12.1 µg/L, respectively. Multispectral images achieved Chl-a and cyanobacteria retrieval with RMSE values of 47.6 and 35.1 µg/L, respectively, efficiently mapping the broad Chl-a concentration classes. Hyperspectral platforms are ideal for the robust monitoring of Chl-a and CyanoHABs; however, the integrated platform has a high cost. More accessible multispectral platforms may represent a trade-off between the mapping efficiency and the deployment costs, provided that the multispectral cameras offer narrow spectral bands in the 660–690 nm and 700–730 nm ranges for Chl-a and in the 600–625 nm and 700–730 nm spectral ranges for cyanobacteria. Full article
(This article belongs to the Special Issue Advances of Unmanned Aerial Vehicles in Hydrology)
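The band ranges recommended in the abstract (660–690 nm and 700–730 nm for Chl-a) point at the classic two-band red/red-edge ratio family of bio-optical models. A minimal sketch of such a model is below; the reflectance values and the calibration coefficients `a` and `b` are hypothetical placeholders, not the fitted models from the study (those would come from regression against the field fluorometer data):

```python
import numpy as np

def two_band_ratio_index(r_red, r_rededge):
    """Two-band ratio used by many Chl-a bio-optical models:
    reflectance near 700-730 nm divided by reflectance near 660-690 nm."""
    return r_rededge / r_red

def chl_a_from_ratio(ratio, a=61.3, b=-37.7):
    """Linear calibration Chl-a = a * ratio + b (ug/L).
    The coefficients here are illustrative placeholders; real values
    are fitted against in situ Chl-a measurements."""
    return a * ratio + b

# Hypothetical per-pixel reflectances from a hyperspectral cube
r_red = np.array([0.020, 0.015, 0.030])      # band near 675 nm
r_rededge = np.array([0.035, 0.040, 0.033])  # band near 710 nm

ratio = two_band_ratio_index(r_red, r_rededge)
chl = chl_a_from_ratio(ratio)
```

With narrow multispectral bands placed in those two ranges, the same ratio can be computed per pixel from a 4-band camera, which is why the abstract frames band placement as the condition for the cheaper platform to remain useful.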

42 pages, 38264 KB  
Review
A Review on Unmanned Aerial Vehicle Remote Sensing: Platforms, Sensors, Data Processing Methods, and Applications
by Zhengxin Zhang and Lixue Zhu
Drones 2023, 7(6), 398; https://doi.org/10.3390/drones7060398 - 15 Jun 2023
Cited by 285 | Viewed by 48815
Abstract
In recent years, UAV remote sensing has gradually attracted the attention of scientific researchers and industry due to its broad application prospects. It has been widely used in agriculture, forestry, mining, and other industries. UAVs can be flexibly equipped with various sensors, such as optical, infrared, and LiDAR, and have become an essential remote sensing observation platform. Based on UAV remote sensing, researchers can obtain many high-resolution images, with ground sampling distances at the centimeter or even millimeter scale. The purpose of this paper is to investigate the current applications of UAV remote sensing, including the aircraft platforms, data types, and data processing methods used in each application category, and to study the advantages and limitations of current UAV remote sensing technology, as well as promising directions that still lack applications. By reviewing the papers published in this field in recent years, we found that current UAV remote sensing application research can be classified into four categories by application field: (1) precision agriculture, including crop disease observation, crop yield estimation, and crop environmental observation; (2) forestry remote sensing, including forest disease identification, forest disaster observation, etc.; (3) remote sensing of power systems; and (4) artificial facilities and the natural environment.
We found that, in papers published in recent years, image data (RGB, multispectral, and hyperspectral) are processed mainly with neural network methods; in crop disease monitoring, multispectral data are the most studied data type; and for LiDAR data, current applications still lack an end-to-end neural network processing method. This review examines UAV platforms, sensors, and data processing methods, and, based on the development of particular application fields and current implementation limitations, makes some predictions about possible future development directions.

16 pages, 16682 KB  
Article
Forest Vertical Structure Mapping Using Multi-Seasonal UAV Images and Lidar Data via Modified U-Net Approaches
by Jin-Woo Yu and Hyung-Sup Jung
Remote Sens. 2023, 15(11), 2833; https://doi.org/10.3390/rs15112833 - 29 May 2023
Cited by 7 | Viewed by 2618
Abstract
With the acceleration of global warming, research on forests has become important. Vertical forest structure is an indicator of forest vitality and diversity. Therefore, further studies are essential. The investigation of forest structures has traditionally been conducted through in situ surveys, which require substantial time and money. To overcome these drawbacks, in our previous study, vertical forest structure was mapped through machine learning techniques and multi-seasonal remote sensing data, and the classification performance was improved to a 0.92 F1-score. However, the use of multi-seasonal images introduces tree location errors owing to changes in the timing and location of acquisition between images. This error can be reduced by using a modified U-Net model that generates a low-resolution output map from high-resolution input data. Therefore, we mapped vertical forest structures from multi-seasonal unmanned aerial vehicle (UAV) optical and LiDAR data using three modified U-Net models to improve mapping performance. Spectral index maps related to forests were calculated from the optical images, and canopy height maps were produced using the LiDAR-derived digital surface model (DSM) and digital terrain model (DTM). Spectral index maps and filtered canopy height maps were then used as input data and applied to the following three models: (1) a model that modified only the structure of the decoder, (2) a model that modified the structures of both the encoder and decoder, and (3) a model that modified the encoder, the decoder, and the part that concatenates the encoder and decoder. Model 1 had the best performance, with an F1-score of 0.97. The F1-score was higher than 0.9 for both Model 2 and Model 3. Model 1 improved the performance by 5% compared to our previous research. This implies that model performance is enhanced by reducing the influence of position error.
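The canopy height maps described above come from subtracting the LiDAR-derived DTM (bare terrain) from the DSM (top of canopy). A minimal sketch of that step, using hypothetical elevation grids and an assumed clipping range (the paper's exact filtering of the canopy height maps is not specified here):

```python
import numpy as np

def canopy_height_model(dsm, dtm, max_height=60.0):
    """Canopy height = surface elevation minus terrain elevation.
    Small negative values (sensor/interpolation noise) are clipped to 0;
    implausibly tall returns are clipped to max_height (assumed 60 m)."""
    chm = dsm - dtm
    return np.clip(chm, 0.0, max_height)

# Hypothetical 2x3 elevation grids in meters
dsm = np.array([[312.4, 318.9, 305.1],
                [310.0, 322.5, 304.8]])
dtm = np.array([[301.2, 300.5, 305.6],
                [302.1, 299.9, 304.8]])

chm = canopy_height_model(dsm, dtm)
```

In practice the DSM and DTM would be co-registered rasters of identical extent and resolution; the resulting canopy height map is then stacked with the spectral index maps as input channels to the U-Net variants.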

20 pages, 17944 KB  
Article
Repeated UAV Observations and Digital Modeling for Surface Change Detection in Ring Structure Crater Margin in Plateau
by Weidong Luo, Shu Gan, Xiping Yuan, Sha Gao, Rui Bi, Cheng Chen, Wenbin He and Lin Hu
Drones 2023, 7(5), 298; https://doi.org/10.3390/drones7050298 - 30 Apr 2023
Cited by 1 | Viewed by 2317
Abstract
As UAV technology has been leaping forward, small consumer-grade UAVs equipped with optical sensors can easily acquire high-resolution images, which show bright prospects across a wide variety of terrains and fields. First, the crater rim landscape of the Dinosaur Valley ring structure on the central Yunnan Plateau served as the object of the surface change detection experiment, and two repeated UAV observations of the study area were performed at the same altitude of 180 m with a DJI Phantom 4 RTK in the rainy season (P1) and the dry season (P2). Subsequently, the UAV-SfM digital three-dimensional (3D) modeling method was adopted to build digital models of the study area at the two epochs, comprising the Digital Surface Model (DSM), Digital Orthomosaic Model (DOM), and Dense Image Matching (DIM) point cloud. Lastly, a quantitative analysis of the surface changes at the crater rim was performed using a point-surface-body morphological characterization method based on the digital models. The results indicate that: (1) elevation checks at corresponding check points of the two DSM epochs yielded a maximum positive difference of 0.2650 m and a maximum negative difference of −0.2279 m in the first period, and a maximum positive difference of 0.2470 m and a maximum negative difference of −0.2589 m in the second period; (2) in the change detection between the two DOM epochs, vegetation coverage was 9.99% higher in the rainy season than in the dry season, whereas bare-soil coverage was 10.54% higher in the dry season; (3) overall, the M3C2-PM distances between the P1 and P2 point clouds were concentrated in the interval (−0.2, 0.2), with the interval (−0.1, 0) accounting for 26.69% of all intervals. The UAV-SfM digital models were thus employed for a comprehensive change detection analysis.
The point elevation differences in the stable area show that the technique can meet Earth observation requirements with reasonable accuracy. The changed areas suggest that the test site is affected by natural conditions to a certain extent, so multi-source data could be integrated for a more comprehensive detection analysis.
(This article belongs to the Topic Advances in Earth Observation and Geosciences)
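At its simplest, elevation change detection between two survey epochs amounts to differencing the two DSMs against a stability threshold. A minimal sketch with hypothetical values follows; the 0.1 m threshold is an assumed stand-in for the survey's vertical uncertainty, and this plain DSM-of-difference is a simplification, not the M3C2-PM point cloud comparison used in the paper:

```python
import numpy as np

def dsm_difference(dsm_t1, dsm_t2, threshold=0.1):
    """DSM of difference (DoD) between two survey epochs.
    Cells whose absolute change is below the threshold (an assumed
    proxy for vertical uncertainty) are flagged as stable."""
    dod = dsm_t2 - dsm_t1
    stable = np.abs(dod) < threshold
    return dod, stable

# Hypothetical elevations (m) from rainy-season (P1) and dry-season (P2) surveys
p1 = np.array([100.00, 100.30, 99.85, 101.20])
p2 = np.array([100.05, 100.10, 99.90, 101.45])

dod, stable = dsm_difference(p1, p2)
stable_fraction = stable.mean()
```

Methods such as M3C2 improve on this by measuring distances along local surface normals with a spatially varying uncertainty estimate, which is why the paper reports distance distributions over intervals rather than a single raster difference.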
