Special Issue "Systems and Technologies for Remote Sensing Application through Unmanned Aerial Systems (STRATUS)"

A special issue of Remote Sensing (ISSN 2072-4292).

Deadline for manuscript submissions: closed (1 September 2021).

Special Issue Editors

Dr. Bahram Salehi
Guest Editor
College of Environmental Science and Forestry (SUNY-ESF), State University of New York, Syracuse, NY 13210, USA
Interests: remote sensing of environment (wetland, permafrost, forest, oil spill, land cover, harmful algal bloom, etc.); SAR (PolSAR and InSAR) remote sensing; photogrammetry and image processing of UAVs; machine learning and image processing; nanosatellite data processing
Dr. Emmett Ientilucci
Guest Editor
Chester F. Carlson Center for Imaging Science, Rochester Institute of Technology, Rochester, NY 14623, USA
Interests: general remote sensing; spectral image processing and exploitation; hyperspectral target and shadow detection; radiative transfer; radiometric hardware and calibration; atmospheric compensation
Prof. Dr. Chris S. Renschler
Guest Editor
Professor, Dept. of Geography; Director, Geographic Information and Analysis Lab (GIAL); Director, Landscape-based Environmental System Analysis & Modeling Laboratory (LESAM) at the University at Buffalo (SUNY), 116 Wilkeson Quad, Buffalo, NY 14226, USA
Interests: remote sensing; GIScience; GIS; spatial analysis; environmental modeling, hydrology; geomorphology; natural hazards; geohazards; land use/land cover change; climate change; soil erosion; watershed; geoecology; integrated management; resilience; sustainability
Dr. Peter J. Spacher
Guest Editor
Physics, Hobart and William Smith Colleges, 4095 Scandling Center, Geneva, NY 14456, USA
Interests: biophysical modeling; remote sensing of the environment (harmful algal bloom; agricultural analysis; sea turtle monitoring); AI; image processing/analysis; radiation detection/medical physics; creating scientific payloads/satellites (radiation monitoring; geophysical monitoring; spectral analysis)
Dr. Souma Chowdhury
Guest Editor
Assistant Professor, Department of Mechanical and Aerospace Engineering, University at Buffalo (SUNY), 246 Bell Hall, Buffalo, NY 14260, USA
Interests: design optimization; physics-infused machine learning; neuroevolution; swarm robotics; multi-Unmanned Aerial Vehicle (UAV) operations; hybrid UAVs; collision avoidance in UAVs

Special Issue Information

Dear Colleagues,

The emergence of low-cost and easy-to-use unmanned aerial systems or vehicles (UAS or UAV), commonly referred to as drones, has led to an explosion of their use for numerous applications. In particular, these new platforms have enabled imaging and remote sensing technologies and applications previously unavailable due to the high cost and operational limitations of manned aircraft or satellites. UAVs have gained increasing attention within the remote sensing community in many areas, including the environment, natural resources, agriculture, and water, to name but a few. The ever-increasing development and advancement of UAS hardware, algorithms/data processing, and applications necessitate more frequent scientific publications, gatherings, and conferences to exchange the latest advances in UAV remote sensing.

The 4th Systems and Technologies for Remote Sensing Application through Unmanned Aerial Systems (STRATUS) Conference was planned for May 18–20, 2020 at the University at Buffalo, NY, USA (stratus-conference.com). However, due to the COVID-19 crisis, the organizing committee decided to postpone the 2020 STRATUS Conference to May 2021. The 2021 STRATUS Conference will be held at the University at Buffalo, NY, USA on May 17–19, 2021 as a mix of in-person and virtual meetings and talks; please check the conference website for more details. Accordingly, the deadline for full paper submissions to this Special Issue of the open access journal Remote Sensing has been extended to September 1, 2021.

The conference offers an opportunity for US and international researchers, industry representatives, and domain specialists to discuss and present their research related to UAV hardware, software/data processing, and application development. In addition to oral presentations and posters, conference presenters are welcome to submit scientific papers for publication in this Special Issue of the open access journal Remote Sensing. All paper submissions will go through the normal peer-review process of the journal. Up to five high-quality submissions will be selected as Feature Papers and published free of charge!

The Special Issue is open to all abstracts submitted to the 2020 and 2021 STRATUS Conferences, as well as outside papers focused on software and algorithm development, data processing, photogrammetry, and UAV applications.

We would like to invite articles on studies using state-of-the-art UAV systems and data. Submissions are encouraged to cover a broad range of topics, which may include, without being limited to, the following UAV-related subjects:

  • Remote sensing
  • 3D and multi-view imaging
  • Geospatial analysis and mapping
  • Artificial intelligence, machine learning, and deep learning of UAV data
  • Applications in forestry, wetland, agriculture, construction, surveying, transportation, disaster management, emergency response, and more
  • Environmental monitoring and modeling
  • Platforms, calibration, sensor, and imaging systems
  • Technologies and applications
  • Algorithm and image processing
  • Photogrammetry and image matching
  • Geohazards and extreme events

Dr. Bahram Salehi
Dr. Emmett Ientilucci
Prof. Dr. Chris S. Renschler
Dr. Peter J. Spacher
Dr. Souma Chowdhury
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2500 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • UAV
  • drone
  • remote sensing
  • environmental monitoring
  • UAV data processing
  • UAV hardware
  • UAV software
  • UAV applications
  • STRATUS

Published Papers (5 papers)


Research


Article
Road Traffic Monitoring from UAV Images Using Deep Learning Networks
Remote Sens. 2021, 13(20), 4027; https://doi.org/10.3390/rs13204027 - 09 Oct 2021
Abstract
In this paper, we propose a deep neural network-based method for automatically estimating the speed of vehicles on roads from videos recorded by an unmanned aerial vehicle (UAV). The proposed method includes the following steps: (1) detecting and tracking vehicles by analyzing the videos, (2) calculating the image scale using the distances between lanes on the road, and (3) estimating the speeds of the vehicles. Our method can measure vehicle speed automatically from the videos alone, without additional information, in both directions of the road simultaneously. In our experiments, we evaluate the performance of the proposed method on visual data from four different locations. The proposed method achieves a 97.6% recall rate and a 94.7% precision rate in detecting vehicles, and a root mean squared error of 5.27 km/h in estimating vehicle speeds.
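As a rough sketch of steps (2) and (3) above (not the authors' implementation; the lane width, frame rate, and function names are illustrative assumptions), speed estimation reduces to converting per-frame pixel displacement into metres using a scale inferred from the known lane spacing:

```python
def image_scale_m_per_px(lane_width_m: float, lane_width_px: float) -> float:
    """Metres per pixel, inferred from the known spacing between road lanes."""
    return lane_width_m / lane_width_px

def speed_kmh(track_px, fps: float, scale_m_per_px: float) -> float:
    """Average speed of one tracked vehicle from its per-frame pixel centroids."""
    dist_px = sum(
        ((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5
        for (x1, y1), (x2, y2) in zip(track_px, track_px[1:])
    )
    seconds = (len(track_px) - 1) / fps
    return dist_px * scale_m_per_px / seconds * 3.6

# Example: 3.5 m lanes spanning 70 px gives 0.05 m/px; a vehicle moving
# 10 px/frame in a 30 fps video then travels 15 m/s, i.e. 54 km/h.
scale = image_scale_m_per_px(3.5, 70.0)
track = [(0.0, 0.0), (10.0, 0.0), (20.0, 0.0)]  # centroids in 3 consecutive frames
print(round(speed_kmh(track, fps=30.0, scale_m_per_px=scale), 1))  # 54.0
```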

Article
Comparison of UAS-Based Structure-from-Motion and LiDAR for Structural Characterization of Short Broadacre Crops
Remote Sens. 2021, 13(19), 3975; https://doi.org/10.3390/rs13193975 - 04 Oct 2021
Abstract
The use of small unmanned aerial system (UAS)-based structure-from-motion (SfM; photogrammetry) and LiDAR point clouds has been widely discussed in the remote sensing community. Here, we compared multiple aspects of the SfM and LiDAR point clouds, collected concurrently in five UAS flights over experimental fields of a short crop (snap bean), in order to explore how well the SfM approach performs compared with LiDAR for crop phenotyping. The main methods include calculating cloud-to-mesh (C2M) distance maps between the preprocessed point clouds, as well as computing multiscale model-to-model cloud comparison (M3C2) distance maps between the derived digital elevation models (DEMs) and crop height models (CHMs). We also evaluated the crop height and row width from the CHMs and compared them with field measurements for one of the data sets. Both SfM and LiDAR point clouds achieved an average RMSE of ~0.02 m for crop height and an average RMSE of ~0.05 m for row width. The qualitative and quantitative analyses show that the SfM approach is comparable to LiDAR under the same UAS flight settings. However, its altimetric accuracy relied largely on the number and distribution of the ground control points.
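The validation against field measurements reported above is conceptually a plain RMSE comparison between CHM-derived values and ground truth. A minimal sketch with made-up numbers (not the authors' data or code):

```python
def rmse(estimated, measured):
    """Root-mean-squared error between model-derived and field-measured values."""
    n = len(estimated)
    return (sum((e - m) ** 2 for e, m in zip(estimated, measured)) / n) ** 0.5

# CHM-derived crop heights (m) vs. tape-measured heights (m) -- illustrative values
chm_heights = [0.31, 0.28, 0.35, 0.30]
field_heights = [0.30, 0.30, 0.33, 0.31]
print(round(rmse(chm_heights, field_heights), 3))  # 0.016
```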

Article
Corn Grain Yield Prediction and Mapping from Unmanned Aerial System (UAS) Multispectral Imagery
Remote Sens. 2021, 13(19), 3948; https://doi.org/10.3390/rs13193948 - 02 Oct 2021
Abstract
Harvester-mounted yield monitor sensors are expensive and require calibration and data cleaning. Therefore, we evaluated six vegetation indices (VI) from unmanned aerial system (Quantix™ Mapper) imagery for corn (Zea mays L.) yield prediction. A field trial was conducted with N sidedress treatments applied at four growth stages (V4, V6, V8, or V10) compared against zero-N and N-rich controls. Normalized difference vegetation index (NDVI) and enhanced vegetation index 2 (EVI2), based on flights at R4, resulted in the most accurate yield estimations, as long as sidedressing was performed before V6. Yield estimations based on earlier flights were less accurate. Estimations were most accurate when imagery from both N-rich and zero-N control plots was included, but eliminating the zero-N data only slightly reduced the accuracy. Use of a ratio approach (VI_Trt/VI_N-rich and Yield_Trt/Yield_N-rich) enables the extension of findings across fields and only slightly reduced model performance. Finally, a smaller plot size (9 or 75 m2 compared to 150 m2) resulted in slightly reduced model performance. We concluded that accurate yield estimates can be obtained using NDVI and EVI2, as long as there is an N-rich strip in the field, sidedressing is performed prior to V6, and sensing takes place at R3 or R4.
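For reference, the two indices named above have standard formulations, and the ratio normalization described in the abstract can be sketched as follows (a hedged illustration; function names are ours, and the paper's exact band processing may differ):

```python
def ndvi(nir: float, red: float) -> float:
    """Normalized difference vegetation index from NIR and red reflectance."""
    return (nir - red) / (nir + red)

def evi2(nir: float, red: float) -> float:
    """Two-band enhanced vegetation index (standard EVI2 formulation)."""
    return 2.5 * (nir - red) / (nir + 2.4 * red + 1.0)

def vi_ratio(vi_treatment: float, vi_n_rich: float) -> float:
    """Treatment-plot VI normalized by the N-rich reference strip,
    allowing findings to be extended across fields."""
    return vi_treatment / vi_n_rich

# Healthy canopy reflects strongly in NIR and absorbs red light
print(round(ndvi(0.45, 0.05), 2))       # 0.8
print(round(vi_ratio(0.60, 0.80), 2))   # 0.75
```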

Article
An Automated Framework for Plant Detection Based on Deep Simulated Learning from Drone Imagery
Remote Sens. 2020, 12(21), 3521; https://doi.org/10.3390/rs12213521 - 27 Oct 2020
Abstract
Traditional mapping and monitoring of agricultural fields are expensive, laborious, and prone to human error. Technological advances in platforms and sensors, followed by artificial intelligence (AI) and deep learning (DL) breakthroughs in intelligent data processing, have improved remote sensing applications for precision agriculture (PA). However, providing ground truth data for model training is a time-consuming and tedious task that may contain multiple human errors. This paper proposes an automated and fully unsupervised framework based on image processing and DL methods for plant detection in agricultural lands from very high-resolution drone remote sensing imagery. The framework's main idea is to automatically generate an unlimited amount of simulated training data from the input image. This capability is advantageous for DL methods and addresses their biggest drawback, i.e., requiring a considerable amount of training data. The framework's core is a Faster Region-based Convolutional Neural Network (Faster R-CNN) with a ResNet-101 backbone for object detection. The framework's efficiency was evaluated on two image sets from two cornfields, acquired by an RGB camera mounted on a drone. The results show that the proposed method achieves an average counting accuracy of 90.9%. Furthermore, based on the average Hausdorff distance (AHD), an average object detection localization error of 11 pixels was obtained. Additionally, the resulting mean precision, recall, and F1 score for plant detection were 0.868, 0.849, and 0.855, respectively, which is promising for an unsupervised plant detection method.
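The simulated-training-data idea can be caricatured in a few lines (purely illustrative; the actual framework composites plant patches derived from the input image and is far more sophisticated than this random-placement sketch):

```python
import random

def simulate_boxes(img_w, img_h, patch_w, patch_h, n_plants, seed=0):
    """Generate bounding boxes (x_min, y_min, x_max, y_max) for simulated
    'plants' scattered over a blank field image. A real pipeline would also
    paste plant pixels at each box and vary their scale and rotation."""
    rng = random.Random(seed)
    boxes = []
    for _ in range(n_plants):
        x = rng.randint(0, img_w - patch_w)
        y = rng.randint(0, img_h - patch_h)
        boxes.append((x, y, x + patch_w, y + patch_h))
    return boxes

# An effectively unlimited amount of labeled data follows from varying the seed
boxes = simulate_boxes(1024, 1024, 32, 32, n_plants=100, seed=42)
print(len(boxes))  # 100
```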

Review


Review
Meta-analysis of Unmanned Aerial Vehicle (UAV) Imagery for Agro-environmental Monitoring Using Machine Learning and Statistical Models
Remote Sens. 2020, 12(21), 3511; https://doi.org/10.3390/rs12213511 - 26 Oct 2020
Abstract
Unmanned Aerial Vehicle (UAV) imaging systems have recently gained significant attention from researchers and practitioners as a cost-effective means for agro-environmental applications. In particular, machine learning algorithms have been applied to UAV-based remote sensing data to enhance UAV capabilities across various applications. This systematic review performed a statistical meta-analysis of UAV applications along with machine learning algorithms in agro-environmental monitoring. For this purpose, a total of 163 peer-reviewed articles published in 13 high-impact remote sensing journals over the past 20 years were reviewed, focusing on several features, including study area, application, sensor type, platform type, and spatial resolution. The meta-analysis revealed that 62% and 38% of the studies applied regression and classification models, respectively. Visible-band sensors were the most frequently used sensor type and yielded the highest overall accuracy among classification articles. Regarding regression models, linear regression and random forest were the most frequently applied models in UAV remote sensing imagery processing. Finally, the results of this study confirm that applying machine learning approaches to UAV imagery produces fast and reliable results. Agriculture, forestry, and grassland mapping were the top three UAV applications in this review, appearing in 42%, 22%, and 8% of the studies, respectively.
