Search Results (73)

Search Parameters:
Keywords = digital multispectral camera

18 pages, 8486 KiB  
Article
An Efficient Downwelling Light Sensor Data Correction Model for UAV Multi-Spectral Image DOM Generation
by Siyao Wu, Yanan Lu, Wei Fan, Shengmao Zhang, Zuli Wu and Fei Wang
Drones 2025, 9(7), 491; https://doi.org/10.3390/drones9070491 - 11 Jul 2025
Viewed by 207
Abstract
The downwelling light sensor (DLS) is the industry-standard solution for generating UAV-based digital orthophoto maps (DOMs). Current mainstream DLS correction methods primarily rely on angle compensation. However, due to the temporal mismatch between the DLS sampling intervals and the exposure times of multispectral cameras, as well as external disturbances such as strong wind gusts and abrupt changes in flight attitude, DLS data often become unreliable, particularly at UAV turning points. Building upon traditional angle compensation methods, this study proposes an improved correction approach, FIM-DC (Fitting and Interpolation Model-based Data Correction), specifically designed for data collection under clear-sky conditions and stable atmospheric illumination, with the goal of significantly enhancing the accuracy of reflectance retrieval. Three key findings support the method: (1) field tests conducted in the Qingpu region show that FIM-DC markedly reduces the standard deviation of reflectance at tie points across multiple spectral bands and flight sessions, with the largest reduction being from 15.07% to 0.58%; (2) it effectively mitigates inconsistencies in reflectance within image mosaics caused by anomalous DLS readings, thereby improving the uniformity of DOMs; and (3) FIM-DC accurately corrects the spectral curves of six land cover types in anomalous images, making them consistent with those from non-anomalous images. In summary, this study demonstrates that integrating FIM-DC into DLS data correction workflows for UAV-based multispectral imagery significantly enhances reflectance calculation accuracy and provides a robust solution for improving image quality under stable illumination conditions. Full article
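The abstract gives the idea of FIM-DC but not its equations; a minimal sketch of the general fit-and-interpolate step (fit a smooth irradiance model to reliable DLS samples, interpolate it at each exposure time, then compute reflectance) is shown below. The polynomial form, outlier threshold, and function names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def correct_dls_irradiance(dls_time, dls_irradiance, exposure_time, max_dev=0.15, degree=3):
    """Sketch of a fit-and-interpolate correction for one DLS band.

    dls_time, dls_irradiance : raw DLS sample times and irradiance readings
    exposure_time            : timestamps of the multispectral camera exposures
    Assumes clear-sky, slowly varying illumination, as in the paper's scenario.
    """
    t = np.asarray(dls_time, dtype=float)
    e = np.asarray(dls_irradiance, dtype=float)

    # 1. Flag anomalous samples (e.g., at turning points) by their relative
    #    deviation from a first smooth fit of irradiance against time.
    coarse = np.polynomial.Polynomial.fit(t, e, degree)
    good = np.abs(e - coarse(t)) / coarse(t) < max_dev

    # 2. Re-fit using only the reliable samples, then interpolate the fitted
    #    irradiance at each image exposure time.
    smooth = np.polynomial.Polynomial.fit(t[good], e[good], degree)
    return smooth(np.asarray(exposure_time, dtype=float))

def band_reflectance(at_sensor_radiance, corrected_irradiance):
    # Per-band reflectance from the corrected downwelling irradiance.
    return np.pi * at_sensor_radiance / corrected_irradiance
```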

15 pages, 1887 KiB  
Article
Multispectral Reconstruction in Open Environments Based on Image Color Correction
by Jinxing Liang, Xin Hu, Yifan Li and Kaida Xiao
Electronics 2025, 14(13), 2632; https://doi.org/10.3390/electronics14132632 - 29 Jun 2025
Viewed by 198
Abstract
Spectral reconstruction based on digital imaging has become an important way to obtain spectral images with high spatial resolution. The current research has made great strides in the laboratory; however, dealing with rapidly changing light sources, illumination, and imaging parameters in an open environment presents significant challenges for spectral reconstruction. This is because a spectral reconstruction model established under one set of imaging conditions is not suitable for use under different imaging conditions. In this study, considering the principle of multispectral reconstruction, we proposed a method of multispectral reconstruction in open environments based on image color correction. In the proposed method, a whiteboard is used as a medium to calculate the color correction matrices from an open environment and transfer them to the laboratory. After the digital image is corrected, its multispectral image can be reconstructed using the pre-established multispectral reconstruction model in the laboratory. The proposed method was tested in simulations and practical experiments using different datasets and illuminations. The results show that the root-mean-square error of the color chart is below 2.6% in the simulation experiment and below 6.0% in the practical experiment, which illustrates the efficiency of the proposed method. Full article
(This article belongs to the Special Issue Image Fusion and Image Processing)
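A rough illustration of the whiteboard-based transfer described above, assuming a simple linear (pseudo-inverse style) reconstruction model trained in the laboratory; the 3x3 least-squares correction matrix and all variable names are assumptions for the sketch, not the paper's exact formulation.

```python
import numpy as np

def color_correction_matrix(rgb_open, rgb_lab):
    """Least-squares 3x3 matrix M such that rgb_open @ M approximates rgb_lab.

    rgb_open, rgb_lab : (N, 3) RGB values of the same whiteboard (or chart)
    patches captured in the open environment and in the laboratory.
    """
    M, *_ = np.linalg.lstsq(rgb_open, rgb_lab, rcond=None)
    return M

def reconstruct_spectra(image_rgb, M, recon_matrix):
    """Correct an open-environment image to laboratory conditions, then
    reconstruct its spectra with a pre-trained linear model.

    image_rgb    : (H, W, 3) open-environment image
    recon_matrix : (3, n_wavelengths) laboratory reconstruction matrix
    """
    corrected = image_rgb.reshape(-1, 3) @ M        # transfer to lab imaging conditions
    spectra = corrected @ recon_matrix              # linear spectral reconstruction
    return spectra.reshape(image_rgb.shape[0], image_rgb.shape[1], -1)
```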

21 pages, 23619 KiB  
Article
Optimizing Data Consistency in UAV Multispectral Imaging for Radiometric Correction and Sensor Conversion Models
by Weiguang Yang, Huaiyuan Fu, Weicheng Xu, Jinhao Wu, Shiyuan Liu, Xi Li, Jiangtao Tan, Yubin Lan and Lei Zhang
Remote Sens. 2025, 17(12), 2001; https://doi.org/10.3390/rs17122001 - 10 Jun 2025
Viewed by 396
Abstract
Recent advancements in precision agriculture have been significantly bolstered by Uncrewed Aerial Vehicles (UAVs) equipped with multispectral sensors. These systems are pivotal in transforming sensor-recorded Digital Number (DN) values into universal reflectance, crucial for ensuring data consistency irrespective of collection time, region, and illumination. This study, conducted across three regions in China using Sequoia and Phantom 4 Multispectral cameras, focused on examining the effects of radiometric correction on data consistency and accuracy, and developing a conversion model for data from these two sensors. Our findings revealed that radiometric correction substantially enhances data consistency in vegetated areas for both sensors, though its impact on non-vegetated areas is limited. Recalibrating reflectance for calibration plates significantly improved the consistency of band values and the accuracy of vegetation index calculations for both cameras. Decision tree and random forest models emerged as more effective for data conversion between the sensors, achieving R2 values up to 0.91. Additionally, the Phantom 4 Multispectral (P4M) generally outperformed the Sequoia in accuracy, particularly with standard reflectance calibration. These insights emphasize the critical role of radiometric correction in UAV remote sensing for precision agriculture, underscoring the complexities of sensor data consistency and the potential for generalization of models across multi-sensor platforms. Full article
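A minimal sketch of the two steps examined above: an empirical-line correction of DN values against calibration plates, and a random-forest model converting one sensor's reflectance bands to the other's. Band counts, plate handling, and hyperparameters are illustrative assumptions, not the study's configuration.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def empirical_line_correction(dn_band, panel_dn, panel_reflectance):
    """Convert DN to reflectance with a one-band empirical line fit.

    panel_dn, panel_reflectance : DN readings and known reflectances of the
    calibration plates imaged during the same flight (at least two plates).
    """
    gain, offset = np.polyfit(panel_dn, panel_reflectance, 1)
    return gain * dn_band + offset

def fit_sensor_conversion(sequoia_bands, p4m_bands):
    """Learn a per-pixel mapping from Sequoia reflectance bands to P4M bands.

    sequoia_bands : (N, 4) reflectance samples (green, red, red edge, NIR)
    p4m_bands     : (N, 5) co-located P4M reflectance samples
    """
    model = RandomForestRegressor(n_estimators=200, random_state=0)
    model.fit(sequoia_bands, p4m_bands)   # multi-output regression
    return model
```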

27 pages, 14459 KiB  
Review
Disease Detection on Cocoa Crops Based on Computer-Vision Techniques: A Systematic Literature Review
by Joan Alvarado, Juan Felipe Restrepo-Arias, David Velásquez and Mikel Maiza
Agriculture 2025, 15(10), 1032; https://doi.org/10.3390/agriculture15101032 - 10 May 2025
Viewed by 2098
Abstract
Computer vision in agriculture aims to provide solutions that help farmers assure the quality of their products, and studies that use computer vision to diagnose diseases and detect anomalies in crops have grown in recent years. However, crops such as cocoa require further attention before these advances can be applied to disease detection. This paper therefore explores the computer vision methods used to diagnose crop diseases, especially in cocoa, by answering the following research questions: (Q1) What are the diseases affecting cocoa crop production? (Q2) What are the main Machine Learning algorithms and techniques used to detect and classify diseases in cocoa? (Q3) What types of imaging technologies (e.g., RGB, hyperspectral, or multispectral cameras) are commonly used in these applications? (Q4) What are the main Machine Learning algorithms used in mobile applications and other platforms for cocoa disease detection? The paper follows a Systematic Literature Review approach, searching the Scopus, ScienceDirect, Springer Link, and IEEE Xplore databases for publications from January 2019 to August 2024. The review identifies the main diseases that affect cocoa crops and their production, and finds that Machine Learning algorithms based on computer vision are the techniques most often employed to detect anomalies in cocoa. It also surveys the main sensors, such as RGB and hyperspectral cameras, used to build datasets and to diagnose or detect diseases. Finally, it examines Machine Learning algorithms deployed in mobile and Internet of Things applications for detecting diseases in cocoa crops. Full article
(This article belongs to the Section Artificial Intelligence and Digital Agriculture)

36 pages, 68826 KiB  
Article
A Holistic High-Resolution Remote Sensing Approach for Mapping Coastal Geomorphology and Marine Habitats
by Evagoras Evagorou, Thomas Hasiotis, Ivan Theophilos Petsimeris, Isavela N. Monioudi, Olympos P. Andreadis, Antonis Chatzipavlis, Demetris Christofi, Josephine Kountouri, Neophytos Stylianou, Christodoulos Mettas, Adonis Velegrakis and Diofantos Hadjimitsis
Remote Sens. 2025, 17(8), 1437; https://doi.org/10.3390/rs17081437 - 17 Apr 2025
Cited by 4 | Viewed by 1135
Abstract
Coastal areas have been the target of interdisciplinary research aiming to support studies related to their socio-economic and ecological value and their role in protecting backshore ecosystems and assets from coastal erosion and flooding. Some of these studies focus on either onshore or inshore areas, using sensors to collect valuable information that remains unknown to and untapped by other researchers. This research demonstrates how satellite, aerial, terrestrial and marine remote sensing techniques can be integrated and inter-validated to produce accurate information, bridging methodologies with different scopes. High-resolution data from Unmanned Aerial Vehicles (UAVs) and multispectral satellite imagery, capturing the onshore environment, were utilized to extract underwater information in Coral Bay (Cyprus). These data were systematically integrated with hydroacoustic measurements, including bathymetric and side-scan sonar surveys, as well as ground-truthing methods such as drop-camera surveys and sample collection. Onshore, digital elevation models derived from UAV observations revealed significant elevation and shoreline changes over a one-year period, demonstrating clear evidence of beach modifications and highlighting coastal zone dynamics. Temporal comparisons and cross-section analyses displayed elevation variations reaching up to 0.60 m. Terrestrial laser scanning along a restricted sea cliff at the edge of the beach captured fine-scale geomorphological changes that raise concerns about the stability of residential properties at the top of the cliff. Bathymetric estimations derived from PlanetScope and Sentinel 2 imagery returned accuracies ranging from 0.92 to 1.52 m, whilst UAV-derived bathymetry reached 1.02 m. Habitat classification revealed diverse substrates, providing detailed geoinformation on the existing sediment type distribution. UAV data achieved 89% accuracy in habitat mapping, outperforming the 83% accuracy of satellite imagery and underscoring the value of high-resolution remote sensing for fine-scale assessments. This study emphasizes the necessity of extracting and integrating information from all available sensors for complete geomorphological and marine habitat mapping that would support sustainable coastal management strategies. Full article
(This article belongs to the Special Issue Remote Sensing in Geomatics (Second Edition))
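The abstract reports satellite-derived bathymetry accuracies but does not name the retrieval algorithm; a commonly used option for optically shallow water is the Stumpf log-ratio model, sketched below with blue and green reflectance and sonar-depth calibration points. Treat the band choice and the linear calibration as assumptions for illustration, not the authors' workflow.

```python
import numpy as np

def log_ratio_bathymetry(blue, green, calib_blue, calib_green, calib_depth, n=1000.0):
    """Log-ratio satellite-derived bathymetry sketch (Stumpf-style model).

    blue, green              : reflectance bands of the scene
    calib_blue, calib_green  : band values at points with known depth
    calib_depth              : depths at those points (e.g., from sonar)
    Returns an estimated depth raster in the same units as calib_depth.
    """
    calib_ratio = np.log(n * calib_blue) / np.log(n * calib_green)
    m1, m0 = np.polyfit(calib_ratio, calib_depth, 1)     # linear calibration
    scene_ratio = np.log(n * blue) / np.log(n * green)
    return m1 * scene_ratio + m0
```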

18 pages, 22866 KiB  
Article
Real-Time Compensation for Unknown Image Displacement and Rotation in Infrared Multispectral Camera Push-Broom Imaging
by Tongxu Zhang, Guoliang Tang, Shouzheng Zhu, Fang Ding, Wenli Wu, Jindong Bai, Chunlai Li and Jianyu Wang
Remote Sens. 2025, 17(7), 1113; https://doi.org/10.3390/rs17071113 - 21 Mar 2025
Viewed by 637
Abstract
Digital time-delay integration (TDI) enhances the signal-to-noise ratio (SNR) in infrared (IR) imaging, but its effectiveness in push-broom scanning is contingent upon maintaining a stable image shift velocity. Unpredictable image shifts and rotations, caused by carrier or scene movement, can affect the imaging process. This paper proposes an advanced technical approach for infrared multispectral TDI imaging. The methodology concurrently estimates the image shift and rotation between frames by utilizing a high-resolution visible camera aligned parallel to the optical axis of the IR camera. Subsequently, parameter prediction is conducted using a Kalman model, and real-time compensation is achieved by dynamically adjusting the infrared TDI integration unit based on the predicted parameters. Simulation and experimental results demonstrate that the proposed algorithm improves the BRISQUE score of the TDI images by 21.37%, thereby validating its efficacy in push-broom imaging systems characterized by velocity-to-height ratio instability and varying camera attitudes. This research constitutes a significant contribution to the advancement of high-precision real-time compensation for image shift and rotation in infrared remote sensing and industrial inspection applications. Full article
(This article belongs to the Section Remote Sensing Image Processing)
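As an illustration of the prediction step described above, the sketch below runs a constant-velocity Kalman filter over per-frame shift and rotation measurements (e.g., from the auxiliary visible camera) so that the next frame's values can be predicted and used to adjust the TDI integration unit. The state layout and noise settings are assumptions, not the paper's tuning.

```python
import numpy as np

class ShiftRotationKalman:
    """Constant-velocity Kalman filter for image shift (x, y) and rotation."""

    def __init__(self, dt, process_var=1e-3, meas_var=1e-2):
        # State: [shift_x, shift_y, theta, vx, vy, omega]
        self.x = np.zeros(6)
        self.P = np.eye(6)
        self.F = np.eye(6)
        self.F[:3, 3:] = dt * np.eye(3)                     # position += velocity * dt
        self.H = np.hstack([np.eye(3), np.zeros((3, 3))])   # we observe shift_x, shift_y, theta
        self.Q = process_var * np.eye(6)
        self.R = meas_var * np.eye(3)

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:3]          # predicted shift/rotation used for TDI compensation

    def update(self, z):
        """z: measured (shift_x, shift_y, rotation) between the last two frames."""
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (np.asarray(z) - self.H @ self.x)
        self.P = (np.eye(6) - K @ self.H) @ self.P
```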

18 pages, 7292 KiB  
Article
Concurrent Viewing of H&E and Multiplex Immunohistochemistry in Clinical Specimens
by Larry E. Morrison, Tania M. Larrinaga, Brian D. Kelly, Mark R. Lefever, Rachel C. Beck and Daniel R. Bauer
Diagnostics 2025, 15(2), 164; https://doi.org/10.3390/diagnostics15020164 - 13 Jan 2025
Viewed by 1280
Abstract
Background/Objectives: Performing hematoxylin and eosin (H&E) staining and immunohistochemistry (IHC) on the same specimen slide provides advantages that include specimen conservation and the ability to combine the H&E context with biomarker expression at the individual cell level. We previously used invisible deposited chromogens and dual-camera imaging, including monochrome and color cameras, to implement simultaneous H&E and IHC. Using this approach, conventional H&E staining could be simultaneously viewed in color on a computer monitor alongside a monochrome video of the invisible IHC staining, while manually scanning the specimen. Methods: We have now simplified the microscope system to a single camera and increased the IHC multiplexing to four biomarkers using translational assays. The color camera used in this approach also enabled multispectral imaging, similar to monochrome cameras. Results: Application is made to several clinically relevant specimens, including breast cancer (HER2, ER, and PR), prostate cancer (PSMA, P504S, basal cell, and CD8), Hodgkin’s lymphoma (CD15 and CD30), and melanoma (LAG3). Additionally, invisible chromogenic IHC was combined with conventional DAB IHC to present a multiplex IHC assay with unobscured DAB staining, suitable for visual interrogation. Conclusions: Simultaneous staining and detection, as described here, provides the pathologist a means to evaluate complex multiplexed assays, while seated at the microscope, with the added multispectral imaging capability to support digital pathology and artificial intelligence workflows of the future. Full article
(This article belongs to the Special Issue New Promising Diagnostic Signatures in Histopathological Diagnosis)

18 pages, 4434 KiB  
Article
Monitoring of Heracleum sosnowskyi Manden Using UAV Multisensors: Case Study in Moscow Region, Russia
by Rashid K. Kurbanov, Arkady N. Dalevich, Alexey S. Dorokhov, Natalia I. Zakharova, Nazih Y. Rebouh, Dmitry E. Kucher, Maxim A. Litvinov and Abdelraouf M. Ali
Agronomy 2024, 14(10), 2451; https://doi.org/10.3390/agronomy14102451 - 21 Oct 2024
Viewed by 1611
Abstract
Detection and mapping of Sosnowsky’s hogweed (HS) using remote sensing data have proven effective, yet challenges remain in identifying, localizing, and eliminating HS in urban districts and regions. Reliable data on HS growth areas are essential for monitoring, eradication, and control measures. Satellite data alone are insufficient for mapping the dynamics of HS distribution. Unmanned aerial vehicles (UAVs) with high-resolution spatial data offer a promising solution for HS detection and mapping. This study aimed to develop a method for detecting and mapping HS growth areas using a proposed algorithm for thematic processing of multispectral aerial imagery data. Multispectral data were collected using a DJI Matrice 200 v2 UAV (Dajiang Innovation Technology Co., Shenzhen, China) and a MicaSense Altum multispectral camera (MicaSense Inc., Seattle, WA, USA). Between 2020 and 2022, 146 sites in the Moscow region of the Russian Federation, covering 304,631 hectares, were monitored. Digital maps of all sites were created, including 19 digital maps (orthophoto, 5 spectral maps, and 13 vegetation indices) for four experimental sites. The collected samples included 1080 points categorized into HS, grass cover, and trees. Student’s t-test showed significant differences in vegetation indices between HS, grass, and trees. A method was developed to determine and map HS-growing areas using the selected vegetation indices NDVI > 0.3, MCARI > 0.76, user index BS1 > 0.10, and spectral channel green > 0.14. This algorithm detected HS in an area of 146.664 hectares. This method can be used to monitor and map the dynamics of HS distribution in the central region of the Russian Federation and to plan the required volume of pesticides for its eradication. Full article
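The decision rule reported above (NDVI > 0.3, MCARI > 0.76, BS1 > 0.10, green > 0.14) can be applied to a multispectral orthomosaic as simple band math. The NDVI and MCARI formulas below follow their standard definitions and may differ in detail from the authors' implementation; BS1 is the authors' user-defined index and is passed in precomputed here.

```python
import numpy as np

def hogweed_mask(green, red, red_edge, nir, bs1):
    """Boolean mask of likely Heracleum sosnowskyi pixels.

    Inputs are reflectance rasters of the same shape. Thresholds follow the
    values reported in the abstract; index formulas are standard definitions.
    """
    ndvi = (nir - red) / (nir + red + 1e-9)
    mcari = ((red_edge - red) - 0.2 * (red_edge - green)) * (red_edge / (red + 1e-9))
    return (ndvi > 0.3) & (mcari > 0.76) & (bs1 > 0.10) & (green > 0.14)
```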

25 pages, 14077 KiB  
Article
Estimating Leaf Area Index in Apple Orchard by UAV Multispectral Images with Spectral and Texture Information
by Junru Yu, Yu Zhang, Zhenghua Song, Danyao Jiang, Yiming Guo, Yanfu Liu and Qingrui Chang
Remote Sens. 2024, 16(17), 3237; https://doi.org/10.3390/rs16173237 - 31 Aug 2024
Cited by 4 | Viewed by 5285
Abstract
The Leaf Area Index (LAI) strongly influences vegetation evapotranspiration and photosynthesis rates. Timely and accurately estimating the LAI is crucial for monitoring vegetation growth. The unmanned aerial vehicle (UAV) multispectral digital camera platform has been proven to be an effective tool for this purpose. Currently, most remote sensing estimations of LAIs focus on cereal crops, with limited research on economic crops such as apples. In this study, a method for estimating the LAI of an apple orchard by extracting spectral and texture information from UAV multispectral images was proposed. Specifically, field measurements were conducted to collect LAI data for 108 sample points during the final flowering (FF), fruit setting (FS), and fruit expansion (FE) stages of apple growth in 2023. Concurrently, UAV multispectral images were obtained to extract spectral and texture information (Gabor transform). Support Vector Regression Recursive Feature Elimination (SVR-RFE) was employed to select optimal features as inputs for constructing models to estimate the LAI. Finally, the optimal model was used for LAI mapping. The results indicate that integrating spectral and texture information effectively enhances the accuracy of LAI estimation, with the relative prediction deviation (RPD) for all models being greater than 2. The Categorical Boosting (CatBoost) model established for FF exhibits the highest accuracy, with a validation set R2, root mean square error (RMSE), and RPD of 0.867, 0.203, and 2.482, respectively. UAV multispectral imagery proves to be valuable in estimating apple orchard LAIs, offering real-time monitoring of apple growth and providing a scientific basis for orchard management. Full article
(This article belongs to the Special Issue Application of Satellite and UAV Data in Precision Agriculture)
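A minimal sketch of the feature-selection-plus-regression pipeline described above: recursive feature elimination with a linear-kernel SVR ranks the spectral and texture features, and a CatBoost regressor is trained on the retained ones. Feature counts and hyperparameters are placeholders; the abstract does not give the study's settings.

```python
import numpy as np
from sklearn.feature_selection import RFE
from sklearn.svm import SVR
from catboost import CatBoostRegressor

def estimate_lai(features, lai, n_keep=15):
    """features: (N, F) spectral + Gabor texture features; lai: (N,) field-measured LAI."""
    # SVR-RFE: rank features with a linear SVR and keep the top n_keep.
    selector = RFE(SVR(kernel="linear"), n_features_to_select=n_keep)
    selector.fit(features, lai)
    selected = features[:, selector.support_]

    # CatBoost regression on the selected features.
    model = CatBoostRegressor(iterations=500, depth=6, learning_rate=0.05, verbose=False)
    model.fit(selected, lai)
    return model, selector.support_
```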

23 pages, 19881 KiB  
Article
Identification of Damaged Canopies in Farmland Artificial Shelterbelts Based on Fusion of Unmanned Aerial Vehicle LiDAR and Multispectral Features
by Zequn Xiang, Tianlan Li, Yu Lv, Rong Wang, Ting Sun, Yuekun Gao and Hongqi Wu
Forests 2024, 15(5), 891; https://doi.org/10.3390/f15050891 - 20 May 2024
Cited by 1 | Viewed by 1703
Abstract
With the decline in the protective function for agricultural ecosystems of farmland shelterbelts due to tree withering and dying caused by pest and disease, quickly and accurately identifying the distribution of canopy damage is of great significance for forestry management departments to implement dynamic monitoring. This study focused on Populus bolleana and utilized an unmanned aerial vehicle (UAV) multispectral camera to acquire red–green–blue (RGB) images and multispectral images (MSIs), which were fused with a digital surface model (DSM) generated by UAV LiDAR for feature fusion to obtain DSM + RGB and DSM + MSI images, and random forest (RF), support vector machine (SVM), maximum likelihood classification (MLC), and a deep learning U-Net model were employed to build classification models for forest stand canopy recognition for the four image types. The model results indicate that the recognition performance of RF is superior to that of U-Net, and U-Net performs better overall than SVM and MLC. The classification accuracy of different feature fusion images shows a trend of DSM + MSI images (Kappa = 0.8656, OA = 91.55%) > MSI images > DSM + RGB images > RGB images. DSM + MSI images exhibit the highest producer’s accuracy for identifying healthy and withered canopies, with values of 95.91% and 91.15%, respectively, while RGB images show the lowest accuracy, with producer’s accuracy values of 79.3% and 78.91% for healthy and withered canopies, respectively. This study presents a method for identifying the distribution of Populus bolleana canopies damaged by Anoplophora glabripennis and healthy canopies using the feature fusion of multi-source remote sensing data, providing a valuable data reference for the precise monitoring and management of farmland shelterbelts. Full article
(This article belongs to the Special Issue UAV Application in Forestry)
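The DSM + MSI feature fusion above can be approximated by stacking the DSM with the multispectral bands as per-pixel features and training a random forest, as sketched below; the class labels, band count, and hyperparameters are assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def classify_canopy(msi, dsm, labels, train_mask):
    """msi: (H, W, B) multispectral bands; dsm: (H, W) LiDAR-derived surface model;
    labels: (H, W) integer classes (e.g., 0 background, 1 healthy, 2 withered);
    train_mask: (H, W) boolean mask of labelled training pixels."""
    features = np.dstack([msi, dsm[..., None]])            # DSM + MSI feature fusion
    X = features.reshape(-1, features.shape[-1])
    y = labels.reshape(-1)
    m = train_mask.reshape(-1)

    clf = RandomForestClassifier(n_estimators=300, n_jobs=-1, random_state=0)
    clf.fit(X[m], y[m])
    return clf.predict(X).reshape(labels.shape)            # full-scene canopy class map
```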

20 pages, 6947 KiB  
Article
Fusion of Multimodal Imaging and 3D Digitization Using Photogrammetry
by Roland Ramm, Pedro de Dios Cruz, Stefan Heist, Peter Kühmstedt and Gunther Notni
Sensors 2024, 24(7), 2290; https://doi.org/10.3390/s24072290 - 3 Apr 2024
Cited by 3 | Viewed by 2426
Abstract
Multimodal sensors capture and integrate diverse characteristics of a scene to maximize information gain. In optics, this may involve capturing intensity in specific spectra or polarization states to determine factors such as material properties or an individual’s health conditions. Combining multimodal camera data with shape data from 3D sensors is a challenging issue. Multimodal cameras, e.g., hyperspectral cameras, or cameras outside the visible light spectrum, e.g., thermal cameras, lack strongly in terms of resolution and image quality compared with state-of-the-art photo cameras. In this article, a new method is demonstrated to superimpose multimodal image data onto a 3D model created by multi-view photogrammetry. While a high-resolution photo camera captures a set of images from varying view angles to reconstruct a detailed 3D model of the scene, low-resolution multimodal camera(s) simultaneously record the scene. All cameras are pre-calibrated and rigidly mounted on a rig, i.e., their imaging properties and relative positions are known. The method was realized in a laboratory setup consisting of a professional photo camera, a thermal camera, and a 12-channel multispectral camera. In our experiments, an accuracy better than one pixel was achieved for the data fusion using multimodal superimposition. Finally, application examples of multimodal 3D digitization are demonstrated, and further steps to system realization are discussed. Full article
(This article belongs to the Special Issue Multi-Modal Image Processing Methods, Systems, and Applications)
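A sketch of the superimposition step: because the multimodal camera's intrinsics and its pose relative to the photogrammetric model are known from pre-calibration, each 3D vertex can be projected into the low-resolution multimodal image and assigned that pixel's value. The undistorted pinhole model below is a simplification of the calibrated setup, and all names are illustrative.

```python
import numpy as np

def project_texture(vertices, K, R, t, modal_image):
    """Assign multimodal image values to 3D model vertices.

    vertices    : (N, 3) points of the photogrammetric model (world frame)
    K           : (3, 3) intrinsic matrix of the multimodal camera
    R, t        : rotation (3, 3) and translation (3,) from world to camera frame
    modal_image : (H, W) or (H, W, C) multimodal image (e.g., a thermal band)
    """
    cam = vertices @ R.T + t                       # world -> camera coordinates
    uvw = cam @ K.T                                # pinhole projection (no distortion)
    u = np.round(uvw[:, 0] / uvw[:, 2]).astype(int)
    v = np.round(uvw[:, 1] / uvw[:, 2]).astype(int)

    h, w = modal_image.shape[:2]
    visible = (uvw[:, 2] > 0) & (u >= 0) & (u < w) & (v >= 0) & (v < h)

    values = np.full((len(vertices),) + modal_image.shape[2:], np.nan)
    values[visible] = modal_image[v[visible], u[visible]]
    return values                                  # per-vertex multimodal texture
```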

20 pages, 4387 KiB  
Technical Note
Methods to Calibrate a Digital Colour Camera as a Multispectral Imaging Sensor in Low Light Conditions
by Alexandre Simoneau and Martin Aubé
Remote Sens. 2023, 15(14), 3634; https://doi.org/10.3390/rs15143634 - 21 Jul 2023
Cited by 2 | Viewed by 3445
Abstract
High-sensitivity multispectral imaging sensors for scientific use are expensive and consequently not available to scientific teams with limited financial resources. Such sensors are used in applications such as nighttime remote sensing, astronomy, and nighttime studies in general. In this paper, we present a method aiming to transform non-scientific multispectral imaging sensors into science-friendly ones. The method consists of developing a calibration procedure applied to digital colour cameras not initially designed for scientific purposes. One of our targets for this project was that the procedure would not require any complex or costly equipment. The development of this project was motivated by a need to analyze airborne and spaceborne pictures of the Earth's surface at night, as a way to determine the optical properties (e.g., light flux, spectrum type and angular emission function) of artificial light sources. This kind of information is an essential part of the input data for radiative transfer models used to simulate light pollution and its effect on the natural environment. Examples of applications of the calibration method are given for that specific field. Full article
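The abstract states the calibration goal rather than its individual steps; the sketch below shows the kind of low-cost pipeline such a procedure typically involves (dark-frame subtraction, flat-fielding, exposure normalization, and a per-channel scale factor fitted against a reference source of known flux). Every step and name here is a generic assumption, not the authors' exact procedure.

```python
import numpy as np

def calibrate_frame(raw, dark, flat, exposure_s, channel_gain):
    """Convert a raw colour-camera frame into per-channel calibrated values.

    raw, dark, flat : (H, W, 3) raw frame, dark frame (same exposure settings),
                      and normalized flat-field frame
    channel_gain    : (3,) scale factors obtained from a reference source
    """
    signal = (raw.astype(float) - dark) / np.clip(flat, 1e-6, None)   # dark + flat correction
    signal /= exposure_s                                              # DN rate per second
    return signal * channel_gain                                      # calibrated per-channel flux

def fit_channel_gain(dn_rate, reference_flux):
    """Per-channel scale factor (flux per DN rate) from reference-source exposures.

    dn_rate        : (M, 3) dark/flat-corrected DN rates of M reference exposures
    reference_flux : (M,) known flux of the reference source in each exposure
    """
    return np.array([np.polyfit(dn_rate[:, c], reference_flux, 1)[0] for c in range(3)])
```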

21 pages, 11097 KiB  
Article
Implementing Cloud Computing for the Digital Mapping of Agricultural Soil Properties from High Resolution UAV Multispectral Imagery
by Samuel Pizarro, Narcisa G. Pricope, Deyanira Figueroa, Carlos Carbajal, Miriam Quispe, Jesús Vera, Lidiana Alejandro, Lino Achallma, Izamar Gonzalez, Wilian Salazar, Hildo Loayza, Juancarlos Cruz and Carlos I. Arbizu
Remote Sens. 2023, 15(12), 3203; https://doi.org/10.3390/rs15123203 - 20 Jun 2023
Cited by 14 | Viewed by 5894
Abstract
The spatial heterogeneity of soil properties has a significant impact on crop growth, making it difficult to adopt site-specific crop management practices. Traditional laboratory-based analyses are costly, and data extrapolation for mapping soil properties using high-resolution imagery becomes a computationally expensive procedure, taking days or weeks to obtain accurate results using a desktop workstation. To overcome these challenges, cloud-based solutions such as Google Earth Engine (GEE) have been used to analyze complex data with machine learning algorithms. In this study, we explored the feasibility of designing and implementing a digital soil mapping approach in the GEE platform using high-resolution reflectance imagery derived from a thermal infrared and multispectral camera Altum (MicaSense, Seattle, WA, USA). We compared a suite of multispectral-derived soil and vegetation indices with in situ measurements of physical-chemical soil properties in agricultural lands in the Peruvian Mantaro Valley. The prediction ability of several machine learning algorithms (CART, XGBoost, and Random Forest) was evaluated using R2, to select the best predicted maps (R2 > 0.80), for ten soil properties, including Lime, Clay, Sand, N, P, K, OM, Al, EC, and pH, using multispectral imagery and derived products such as spectral indices and a digital surface model (DSM). Our results indicate that the predictions based on spectral indices, most notably, SRI, GNDWI, NDWI, and ExG, in combination with CART and RF algorithms are superior to those based on individual spectral bands. Additionally, the DSM improves the model prediction accuracy, especially for K and Al. We demonstrate that high-resolution multispectral imagery processed in the GEE platform has the potential to develop soil properties prediction models essential in establishing adaptive soil monitoring programs for agricultural regions. Full article
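A minimal Google Earth Engine Python API sketch of the workflow described above: derive spectral indices from the UAV reflectance mosaic (ingested as an ee.Image), sample them at soil observation points, and train a random forest in regression mode for one property (K is used here). The asset paths, band names, index choices, and sampling scale are placeholders.

```python
import ee

ee.Initialize()

# Placeholders: a UAV reflectance mosaic and soil sample points uploaded as assets.
image = ee.Image("users/example/mantaro_uav_mosaic")          # bands: "blue".."nir" plus "dsm"
samples = ee.FeatureCollection("users/example/soil_points")   # property "K" holds lab values

# Example spectral indices used as predictors (illustrative definitions).
ndvi = image.normalizedDifference(["nir", "red"]).rename("NDVI")
ndwi = image.normalizedDifference(["green", "nir"]).rename("NDWI")
predictors = image.addBands(ndvi).addBands(ndwi).select(["NDVI", "NDWI", "dsm"])

# Sample the predictors at the soil observation points.
training = predictors.sampleRegions(collection=samples, properties=["K"], scale=0.1)

# Random forest in regression mode, then map the prediction over the mosaic.
rf = ee.Classifier.smileRandomForest(200).setOutputMode("REGRESSION") \
    .train(features=training, classProperty="K", inputProperties=predictors.bandNames())
k_map = predictors.classify(rf).rename("K_predicted")
```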

26 pages, 17184 KiB  
Article
Growth Monitoring and Yield Estimation of Maize Plant Using Unmanned Aerial Vehicle (UAV) in a Hilly Region
by Sujan Sapkota and Dev Raj Paudyal
Sensors 2023, 23(12), 5432; https://doi.org/10.3390/s23125432 - 8 Jun 2023
Cited by 10 | Viewed by 3552
Abstract
More than 66% of the Nepalese population has been actively dependent on agriculture for their day-to-day living. Maize is the largest cereal crop in Nepal, both in terms of production and cultivated area in the hilly and mountainous regions of Nepal. The traditional ground-based method for growth monitoring and yield estimation of maize plant is time consuming, especially when measuring large areas, and may not provide a comprehensive view of the entire crop. Estimation of yield can be performed using remote sensing technology such as Unmanned Aerial Vehicles (UAVs), which is a rapid method for large area examination, providing detailed data on plant growth and yield estimation. This research paper aims to explore the capability of UAVs for plant growth monitoring and yield estimation in mountainous terrain. A multi-rotor UAV with a multi-spectral camera was used to obtain canopy spectral information of maize in five different stages of the maize plant life cycle. The images taken from the UAV were processed to obtain the result of the orthomosaic and the Digital Surface Model (DSM). The crop yield was estimated using different parameters such as Plant Height, Vegetation Indices, and biomass. A relationship was established in each sub-plot which was further used to calculate the yield of an individual plot. The estimated yield obtained from the model was validated against the ground-measured yield through statistical tests. A comparison of the Normalized Difference Vegetation Index (NDVI) and the Green–Red Vegetation Index (GRVI) indicators of a Sentinel image was performed. GRVI was found to be the most important parameter and NDVI was found to be the least important parameter for yield determination besides their spatial resolution in a hilly region. Full article
(This article belongs to the Special Issue Sensors for Aerial Unmanned Systems 2021-2023)
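As a small illustration of the yield-modelling step, the sketch below computes NDVI and GRVI from per-plot mean reflectances, combines them with UAV-derived plant height, and fits a linear yield model; the predictor set and linear form are assumptions, since the abstract does not state the exact model.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def plot_indices(red, green, nir):
    """Per-plot NDVI and GRVI from mean band reflectances."""
    ndvi = (nir - red) / (nir + red)
    grvi = (green - red) / (green + red)
    return ndvi, grvi

def fit_yield_model(plant_height, ndvi, grvi, measured_yield):
    """Linear yield model from UAV-derived predictors (one row per sub-plot)."""
    X = np.column_stack([plant_height, ndvi, grvi])
    model = LinearRegression().fit(X, measured_yield)
    return model  # model.predict(X_new) estimates yield for unmeasured plots
```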

14 pages, 3211 KiB  
Article
Estimation of Productivity and Above-Ground Biomass for Corn (Zea mays) via Vegetation Indices in Madeira Island
by Fabrício Lopes Macedo, Humberto Nóbrega, José G. R. de Freitas, Carla Ragonezi, Lino Pinto, Joana Rosa and Miguel A. A. Pinheiro de Carvalho
Agriculture 2023, 13(6), 1115; https://doi.org/10.3390/agriculture13061115 - 24 May 2023
Cited by 12 | Viewed by 4900
Abstract
The advancement of technology associated with the field, especially the use of unmanned aerial vehicles (UAV) coupled with multispectral cameras, allows us to monitor the condition of crops in real time and contribute to the field of machine learning. The objective of this study was to estimate both productivity and above-ground biomass (AGB) for the corn crop by applying different vegetation indices (VIs) via high-resolution aerial imagery. Among the indices tested, strong correlations were obtained between productivity and the normalized difference vegetation index (NDVI) with a significance level of p < 0.05 (0.719), as well as for the normalized difference red edge (NDRE), or green normalized difference vegetation index (GNDVI) with crop productivity (p < 0.01), respectively 0.809 and 0.859. The AGB results align with those obtained previously; GNDVI and NDRE showed high correlations, but now with a significance level of p < 0.05 (0.758 and 0.695). Both GNDVI and NDRE indices showed coefficients of determination for productivity and AGB estimation with 0.738 and 0.654, and 0.701 and 0.632, respectively. The use of the GNDVI and NDRE indices shows excellent results for estimating productivity as well as AGB for the corn crop, both at the spatial and numerical levels. The possibility of predicting crop productivity is an essential tool for producers, since it allows them to make timely decisions to correct any deficit present in their agricultural plots, and further contributes to AI integration for drone digital optimization. Full article
(This article belongs to the Special Issue Remote Sensing Technologies in Agricultural Crop and Soil Monitoring)
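The reported correlations can be reproduced on plot-level data with a few lines of index math and a Pearson test, as sketched below; the band names and per-plot input arrays are illustrative.

```python
import numpy as np
from scipy.stats import pearsonr

def gndvi(green, nir):
    return (nir - green) / (nir + green)

def ndre(red_edge, nir):
    return (nir - red_edge) / (nir + red_edge)

def index_correlations(green, red_edge, nir, productivity):
    """Pearson r and p-value of GNDVI and NDRE against measured productivity.

    All inputs are per-plot arrays (mean band reflectance, yield or AGB).
    """
    return {
        "GNDVI": pearsonr(gndvi(green, nir), productivity),
        "NDRE": pearsonr(ndre(red_edge, nir), productivity),
    }
```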
