Article

Smart Agriculture Drone for Crop Spraying Using Image-Processing and Machine Learning Techniques: Experimental Validation

Edward Singh, Aashutosh Pratap, Utkal Mehta and Sheikh Izzal Azid
1 School of Information Technology, Engineering, Mathematics and Physics, University of the South Pacific, Laucala Campus, Suva 1168, Fiji
2 Murdoch University, Perth, WA 6150, Australia
* Author to whom correspondence should be addressed.
These authors contributed equally to this work.
IoT 2024, 5(2), 250-270; https://doi.org/10.3390/iot5020013
Submission received: 8 April 2024 / Revised: 6 May 2024 / Accepted: 16 May 2024 / Published: 22 May 2024

Abstract

Smart agricultural drones for crop spraying are becoming popular worldwide. Research institutions, commercial companies, and government agencies are investigating and promoting the use of these technologies in the agricultural industry. This study presents a smart agriculture drone integrated with Internet of Things (IoT) technologies that uses a TensorFlow Lite EfficientDetLite1 model to identify objects from a custom dataset trained on three crop classes, namely, pineapple, papaya, and cabbage, achieving an inference time of 91 ms. The system’s operation is characterised by its adaptability, offering two spray modes, A and B, corresponding to 100% and 50% spray capacity and selected from real-time detections, embodying the potential of the IoT for real-time monitoring and autonomous decision-making. The drone is built on an X500 development kit, carries a payload of 1.5 kg, and achieves a flight time of 25 min while travelling at a velocity of 7.5 m/s at a height of 2.5 m. The drone system aims to improve sustainable farming practices by optimising pesticide application and improving crop health monitoring.

1. Introduction

Agriculture today not only faces the challenge of feeding a rapidly growing global population but also faces the compounded pressures of climate change, environmental sustainability, and the need for technological integration. In Pacific Island Countries (PICs), where between 50% and 70% of the population relies on agriculture and fishing for their livelihoods, these challenges are particularly acute [1]. The region’s vulnerability to climate-induced disasters such as rising sea levels and increased cyclone activity, compounded by economic shocks such as the COVID-19 pandemic and global trade disruptions, underscores the urgent need for resilient food systems.
Recent disruptions have highlighted the fragility of PICs’ food systems, which are heavily reliant on imports due to limited arable land; the high cost of imported food strains already-limited resources. The pandemic and subsequent global crises have exacerbated these vulnerabilities, making a stable, reliable food supply a critical concern for these nations [2].
To counter these threats, there is a growing recognition of the potential of smart agricultural technologies. The integration of Internet of Things (IoT) devices into agriculture offers promising solutions to enhance food security by enabling more precise farming practices that optimise resource use and reduce environmental impacts [3]. For instance, smart agricultural drones equipped with advanced sensors can help monitor crop health, optimise pesticide use, and reduce dependence on imported chemical inputs, which is crucial for maintaining the economic and environmental health of the region. Drone-based image recognition has been applied not only on land but also in aquaculture, for example, for fish recognition using underwater drones [4].
Thus, this research aims to explore the application of a low-cost, autonomous UAV system tailored to the agricultural context in the Pacific region. It is important to note that the terms “drone”, “UAV”, and “quadrotor” are used interchangeably in this article and refer to the same category of aircraft. Firstly, the areas integrating IoT, object detection, edge computing, autonomous drones, and agricultural technology were studied to gain a comprehensive understanding of the work carried out in these domains. Based on these reviewed works, a gap analysis was performed, and the methodology was developed. The autonomous UAV is designed to perform multiple functions: crop detection through image processing, targeted crop spraying, and autonomous navigation. These functions are intended to operate under the constraints of limited technical resources and the need for high adaptability in varied geographical and climatic conditions.

2. Literature Review

In precision agriculture (PA), real-time data processing is crucial for efficient decision-making. This is facilitated by technologies such as edge computing and image processing, which have applications beyond their initial domains. Edge computing optimises deep learning model deployment and execution to make the most of the limited resources on edge devices. This approach improves object recognition and analytics by reducing the need to transfer data to data centres and the cloud [5]. TensorFlow (TF) Lite is a deep learning framework developed by Google for deploying ML models on edge and mobile devices; it reduces model size and speeds up inference through model quantisation, operator fusion, and pruning [6]. EfficientDet is a family of object detection models that prioritise scalability and efficiency. It uses a compound scaling method that uniformly scales the network’s resolution, depth, and width, together with a bi-directional feature pyramid network (BiFPN) for accurate multi-scale feature fusion. EfficientDetLite is a variant of EfficientDet optimised for TF Lite and aimed at mobile and edge devices, and TF Lite provides the runtime and tools to execute these models on devices with limited resources.
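As a brief, hedged illustration of the optimisation TF Lite applies (not a pipeline reported in this study), the sketch below converts a trained detector saved as a TensorFlow SavedModel into a quantised .tflite file using the standard converter; the saved_model/ path and output filename are placeholders.

```python
import tensorflow as tf

# Load a trained detector exported as a TensorFlow SavedModel (path is a placeholder).
converter = tf.lite.TFLiteConverter.from_saved_model("saved_model/")

# Enable default post-training optimisations (weight quantisation), which is what
# shrinks the model and speeds up inference on edge devices.
converter.optimizations = [tf.lite.Optimize.DEFAULT]

tflite_model = converter.convert()
with open("detector_quantised.tflite", "wb") as f:
    f.write(tflite_model)
```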
A smart multipurpose agricultural robot with simple image processing was developed to handle pesticide and water sprinkling on the ground surface [7]. This direct application of proven methods to agriculture illustrates how technologies across domains can improve agricultural productivity by enabling timely interventions.
Beyond ground-based platforms, technology is also finding applications in PA, where Unmanned Aerial Vehicles (UAVs) and drones are transforming traditional farming practices. Bouguettaya et al. [8] used UAV-based convolutional neural networks (CNNs) for crop categorisation to advance precision agriculture. The Faster R-CNN network demonstrated a 97.1% accuracy in recognising apple trees, while U-Net architectures could classify fully mature trees with 98.8% accuracy. However, the accuracy dropped to 56.6% for seedling detection due to the challenges associated with high-altitude imaging. The research evaluated CNN models and datasets, including the VOAI dataset, containing images of 12 tree species, and the Weedmap dataset, with more than 10,000 multispectral (MS) images. While demonstrating the promising potential of UAVs and deep learning for agricultural surveillance and high categorisation accuracy, the authors acknowledged several challenges, such as the computational demands of CNNs, the need for large datasets, and performance variability depending on crop and UAV attributes.
Building on the theme of precision and efficiency, the study by Renella et al. [9] focused on using the TF Lite Model Maker to detect and separate strawberry plants and weeds. The models, based on the EfficientDetLite1 to Lite4 architectures, were trained on 96 labelled images containing more than 1000 annotations and were then tested on ten previously unseen image sets. The models showed different true positive rates for strawberries and weeds, with EfficientDetLite4 being the most precise; key results include strawberry true positive rates of up to 78%, small-weed true positive rates of up to 94%, and precision scores of up to 0.9873 for strawberry identification.
Furthermore, Yallappa et al. [10] developed a pesticide sprayer that could be attached to a drone. The sprayer used a hexacopter frame, six BLDC motors, and two 8000 mAh LiPo batteries, and could lift a 5 L pesticide tank. At a forward speed of 3.6 km/h and a spray height of 1 m, the drone covered an average of 1.15 ha/h of groundnut and 1.08 ha/h of wheat crops, a substantial gain in efficiency. The operating cost was found to be 345 Rs/ha for groundnut and 367 Rs/ha for wheat. Spray uniformity improved as the working pressure and spray height were increased. The volume median diameter (VMD) and number median diameter (NMD) of the spray droplets were measured in the laboratory as 345 μm and 270 μm, respectively. This drone sprayer can reach areas that are difficult to access while reducing pesticide costs and environmental damage.
Reflecting these advancements, the study by Hafeez et al. [11] took a thorough look at the development and usefulness of drone technology in farming. Significant findings include ML methods achieving 99.97% accuracy in disease detection and pesticide testing, a Crop Water Stress Index (CWSI) between 0.28 and 0.69 for assessing crop health, and a requirement for less than 20% error in droplet formation when pesticides are applied. Using drones requires 20% to 90% less chemical input, water, and labour. A 6-litre-capacity octocopter was developed to improve the efficiency of farm tasks, showing how UAV technology can significantly simplify farming processes. A pumping rate of 0.10 to 0.22 litres per minute was used to spray pesticides, saving approximately 9 USD per hectare in costs.
In addition to making great strides in agriculture, UAVs have also come a long way in improving operational efficiency and tracking. A case in point is the research by Jubair et al. [12], which describes a drone that can carry 2 kg of seeds and fly at 2 m above ground at 15 km/h for up to 20 min on a single charge, with a maximum altitude of 500 m and a maximum flying speed of 14.76 km/h. Its sowing efficiency is at least seven times higher than sowing by hand. Its usefulness is limited by the maximum 15 min flight time with a payload, which necessitates multiple battery changes on large farms, and by a fixed operational altitude, which could make it difficult to use on uneven ground or near tall plants.
In addition to emphasising the importance of operational efficiency, Piriyasupakij et al. [13] shed light on how a drone’s detection accuracy is affected in various testing situations. In manual flight tests, the flight path accuracy was 97.50% in Stabilise mode and 95.00% in Loiter mode; Stabilise mode was chosen because it was faster and more responsive. In automatic flight tests, the accuracy changed with altitude, going from 98.50% at 3 m to 90.00% at 200 m. Human detection accuracy was highest at 3 m (97.00%) and lowest at 5 m (90.25%). The drone’s ability to detect intruders thus varies with altitude and flight mode, showing how important it is to fine-tune operating parameters for monitoring purposes.
Advancing these operational efficiencies, Naufal et al. [14] presented a vision-based automated landing system for quadcopter drones that uses OpenMV cameras to track AprilTags, achieving a landing accuracy of up to 11.7 cm indoors and 8.94 cm outdoors. The system combines Pixhawk and Mission Planner (MP) and works across a range of conditions, with a detection range of 20 cm to 450 cm depending on the tag size. A distance-measurement error of only 4.35% further demonstrates the system’s reliability. Planned improvements involve increasing camera quality and adding a gimbal for better detection and image stability, showing that drone landing technologies are moving towards improved accuracy and reliability [14].
Lastly, using a CNN with transfer learning from the Inception V3 architecture, Ukaegbu et al. [15] created a system for detecting plants and applying herbicides that achieved 89.6% training accuracy and 90.6% validation accuracy. This Raspberry Pi-based device reduces the amount of pesticide required. The researchers demonstrated the model’s resilience by using a collection of 15,336 images together with dropout and data augmentation to prevent overfitting. Beyond its accuracy and practical value, the study suggests further improvements to detection performance and farming efficiency, including expanding the dataset, using a higher-quality camera, and adding deep learning engines.
Using UAVs and machine learning (ML) methods to monitor saffron plantations, Kiropoulos et al. [16] demonstrate significant success. The research emphasises RF and MLP models along with RGB and MS cameras. Remarkably, the RF model achieves 100% accuracy in recognising saffron flowers and vegetation, and the MLP model scores 95%, illustrating the potential of ML algorithms and UAV technology in improving agricultural monitoring and management.
A crop-monitoring system designed to enhance agricultural output was developed by Vijayalakshmi et al. [17], integrating a CNN, data fusion, UAVs, and IoT. This system focuses on improving conditions in low-yield locations and achieves 93% accuracy in data fusion, 84% in CMS, 87% in FANET, and 91% in the CNN. By distinguishing between weeds and crops under various weather conditions, this research by Vijayalakshmi et al. underscores the transformative potential of ML, IoT, and UAV technology to advance agricultural practices.
The application of UAVs for bridge inspection was explored in a study by Aliyari et al. [18], using data from a spectroradiometer and a multispectral (MS) camera alongside deep learning models such as VGG16, ResNet50, ResNet152v2, Xception, InceptionV3, and DenseNet201. Focusing on the Skodsberg Bridge in Norway with 19,023 photos, the study found that CNN models must balance trainable and non-trainable features for optimal performance, achieving 83% accuracy in identifying fractured and undamaged components through image cropping.
Focusing on disaster management, specifically flood detection, Munawar et al. [19] examined the use of CNNs and UAVs. Analysing a segment of the Pakistani Indus River, the study used UAV footage, spectroradiometers, and multispectral (MS) cameras and trained a CNN model to identify flood-affected areas with 91% accuracy. This demonstrates the model’s capability to assess infrastructure damage and enhance flood response efforts, highlighting the importance of CNN models in disaster management.
Investigating the detection of diseases in various crops using UAV-mounted sensors analysed with ML and deep learning (DL) methods, including CNNs, Shahi et al. [20] found that ML methods categorised agricultural diseases with 72–98% accuracy, while DL methods achieved 85–100%. This highlights the potential of UAVs in early disease detection to mitigate crop loss, despite challenges related to high-resolution image usage and UAV imaging variability.
The evolution of remote sensing in sugarcane cultivation from 1981 to 2020 was reviewed by Som-ard et al. [21], who showcased the effectiveness of Landsat and Sentinel-2 MSI data. With satellite sensors offering 10-metre spatial resolution for precision monitoring, machine learning methods such as Random Forest (RF) and convolutional neural networks (CNNs) have significantly enhanced sugarcane growth, health, and yield analysis. This review identified data preprocessing and environmental variability as challenges affecting monitoring precision.
In a comprehensive review of more than 70 articles, Zualkernan et al. [22] assessed the use of machine learning and UAV images in addressing agricultural challenges such as crop classification, vegetation identification, and disease diagnosis. The evaluation of SVM and CNN machine learning models highlights CNNs’ ability to classify diseases with 99.04% accuracy, showcasing the effectiveness of using various sensors for identifying crops and weeds.
A study on the hyperspectral response of soybean cultivars to target spot disease conducted by Otone et al. [23] used machine learning for severity classification. Using data collected with fungicides for various diseases during the 2022–2023 agricultural season, the research found that SVM and RF were effective in classifying disease severity with 85% and 76% accuracy, respectively. This underscores the potential of hyperspectral scanning and machine learning in the early detection and monitoring of soybean target spots.
Research by Mohidem et al. [24] on the integration of machine learning algorithms with UAVs equipped with RGB, multispectral, hyperspectral, and thermal cameras for weed detection in agricultural fields highlights the critical role of UAV technology and machine learning in minimising crop loss. With SVM achieving 94% accuracy and CNNs reaching 99.8%, the study underscores the importance of improving UAV-based remote sensing in agricultural settings.
Almasoud et al. [25] present a novel approach to identifying maize, banana, forest, legume, and structural crops from UAV remote sensing imagery using the RSMPA-DLFCC method. Incorporating deep learning with the Marine Predators Algorithm, this method achieved a high accuracy rate of 98.22% on a UAV dataset, marking a significant improvement over previous crop classification models.
Phenotyping citrus production with UAV-based multispectral imaging and deep learning convolutional neural networks is addressed by Ampatzidis et al. [26]. The success of this method in tree detection, gap identification, and health assessment demonstrates significant accuracy improvements, achieving an overall accuracy of 99.8%, highlighting the application of CNNs and multispectral cameras in agriculture.
RF-OBIA, an approach to early weed mapping in crop rows developed by Ana I. de Castro et al. [27], utilises high-resolution UAV data to differentiate crops and weeds using the Random Forest (RF) classifier. Achieving 81% and 84% accuracy in sunflower and cotton agriculture, respectively, this method emphasises the critical role of precision imaging in agriculture.
Investigating tea plant cold stress, Mao et al. [28] used UAVs equipped with MS, TIR, and RGB sensors, employing the CNN-GRU model in conjunction with the GRU, PLSR, SVR, and RFR models. This method, which incorporates MS and RGB data, significantly exceeds traditional models in predictive accuracy, highlighting the application of machine learning and UAV technology in precision agriculture to assess cold stress in tea plantations.
Another study [29] presents the development of a UAV-based automated aerial spraying system in China that is designed to improve plant protection operations. Utilising a highly integrated MSP430 single-chip microcomputer, the system features independent functional modules for route planning and GPS navigation, ensuring precise spray area targeting. The test results demonstrate low route deviations in crosswinds and low variations in spray uniformity tests. Table 1 provides a comprehensive summary of UAV-based developments in various areas.
The literature reviewed encompasses works from diverse focus areas within the domain of agricultural technology. This review aimed to underscore the limitations of existing prototypes, which are often costly or lack the versatility to be easily repurposed for other agricultural applications. Furthermore, a significant portion of the developed UAV systems prioritise PA, with a primary emphasis on the accuracy of crop identification. Consequently, the focus on detection and monitoring overshadows the development of prototypes that could potentially minimise human labour in pesticide application. Therefore, there is a pressing need for cost-effective solutions that enable the development of real-time, adaptable, UAV-based crop-spraying applications.

3. Methodology

Stemming from the gap identified in the literature, the methodology for developing an autonomous UAV system for real-time crop detection and crop spraying was divided into three main components, as shown in Figure 1. The image-processing component (Figure 1a) and the crop-spraying component (Figure 1b) are interdependent; the object detection algorithm determines the amount of pesticide to spray on each crop and then automatically activates the sprayer. On the other hand, the autonomous flight-mapping and navigation component (Figure 1c) operates independently of the other two functions. This approach distributes the processing power, enabling the drone to function in real time without overburdening the involved microprocessors and microcontrollers.

3.1. Image-Processing System

The steps involved in developing the image-processing system are described in Figure 1a. These steps are further elaborated on in the following subsections.

3.1.1. Image Capture and Preprocessing

The quality, diversity, and volume of the dataset directly influence the model’s ability to learn, generalise, and accurately detect objects under varied conditions. Due to logistical constraints and the lack of access to a traditional farm, the dataset was compiled using images of crops planted in pots at the University of the South Pacific (USP) Engineering Block, located at 18°08′57.1″ S 178°26′52.3″ E. This controlled setting was chosen to ensure a variety of lighting conditions, backgrounds, and angles, which is essential for improving model performance and preventing overfitting. Overfitting occurs when a model trained on a dataset with limited variability memorises specific patterns rather than generalising to new data.
All images were RGB images that were resized to 800 × 600 pixels and converted to JPEG format. Preprocessing was conducted using Python scripts with OpenCV in Google Colab, a cloud-based, IoT-enabled approach that helps maintain training efficiency and manage dataset storage demands.
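A minimal sketch of this preprocessing step is shown below; the folder names and the JPEG quality setting are assumptions for illustration, not values taken from the paper.

```python
import glob
import os

import cv2

SRC_DIR, DST_DIR = "raw_images", "dataset"    # placeholder folder names
os.makedirs(DST_DIR, exist_ok=True)

for path in glob.glob(os.path.join(SRC_DIR, "*")):
    img = cv2.imread(path)                     # BGR image, or None if unreadable
    if img is None:
        continue
    img = cv2.resize(img, (800, 600))          # resize to 800 x 600 pixels as in the paper
    name = os.path.splitext(os.path.basename(path))[0] + ".jpg"
    # Convert to JPEG; a quality of 90 is an assumed setting.
    cv2.imwrite(os.path.join(DST_DIR, name), img, [cv2.IMWRITE_JPEG_QUALITY, 90])
```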
Machine learning models are trained to learn from any dataset provided during the training phase. Although the initial dataset comprised pot-planted crops to begin development and testing, the models are adaptable and can learn from datasets that include crops planted in typical farm conditions. Future work will expand the dataset with images from actual agricultural fields to ensure that the models are well adapted to real-world farming environments. Figure 2 shows sample images from the custom dataset, highlighting the diverse conditions under which the data were collected.

3.1.2. Annotation and Data Setup

For the annotation of the dataset, the project used makesense.ai, a web-based tool that leverages cloud connectivity and is designed to facilitate the remote annotation of images. The tool allows users to outline objects within images by drawing bounding boxes, making it an ideal platform for labelling the specific crop classes, namely, pineapple, papaya, and cabbage, within the dataset. Through this platform, 531 annotations were made across 177 images, ensuring a comprehensive description of each crop type for model training. The precision of the bounding boxes directly influences the model’s ability to recognise and classify the targeted crops.

3.1.3. Model Training and Evaluation

Google Colab was utilised for training the EfficientDetLite models; it was selected for its cloud-based capabilities, which meet the computational demands of machine learning without the need for local hardware. This choice was specifically due to Google Colab’s compatibility with TensorFlow Lite, facilitating the training of lightweight models that are efficient for deployment on edge devices.
The project employed transfer learning techniques utilising pre-trained models on TensorFlow Lite, which were further fine-tuned to suit the specific requirements of precision agriculture. This approach leveraged existing neural networks trained on extensive datasets, thus enabling improved model performance on a specialised, smaller dataset, which is characteristic of the agricultural domain. The training involved adjusting model parameters like the learning rate and batch size to optimise detection accuracy for various types of crops under diverse environmental conditions.
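The training pipeline described above can be approximated with the TFLite Model Maker object-detection API, as sketched below; the directory layout, validation split, and label-map indices are assumptions, while the 300 epochs and batch size of 8 follow the values reported later in Section 4.

```python
from tflite_model_maker import model_spec, object_detector

# EfficientDet-Lite1 backbone with pre-trained weights (transfer learning).
spec = model_spec.get("efficientdet_lite1")

# Pascal VOC-style images and XML annotations exported from the labelling tool
# (directory names and label indices are assumptions).
label_map = {1: "pineapple", 2: "papaya", 3: "cabbage"}
train_data = object_detector.DataLoader.from_pascal_voc("images/train", "annotations/train", label_map)
val_data = object_detector.DataLoader.from_pascal_voc("images/val", "annotations/val", label_map)

# Fine-tune the whole network; 300 epochs and a batch size of 8 follow the paper.
model = object_detector.create(
    train_data,
    model_spec=spec,
    validation_data=val_data,
    epochs=300,
    batch_size=8,
    train_whole_model=True,
)

print(model.evaluate(val_data))   # COCO-style AP/AR metrics
model.export(export_dir=".")      # writes a .tflite file for on-device use
```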
The models’ performance was evaluated using a range of metrics from the COCO benchmark, a standard in the field of object detection. These metrics include Average Precision (AP) at various Intersection over Union (IoU) thresholds (AP50, AP75), providing insight into the models’ accuracy at different levels of detection precision. Additionally, Average Recall (AR) metrics (AR1, ARm, ARmax1, ARmax10, ARmax100, ARs) offer a comprehensive view of the models’ ability to detect all relevant objects within the images. Overall Average Precision was adopted as the primary metric for how well each model identifies and classifies the three crop types, and it guided the selection of the model deployed.

3.1.4. Deployment and Testing

For deployment, the project used a Raspberry Pi 4B with 4 GB RAM, a compact single-board computer capable of performing real-time image-processing tasks while operating on approximately 15 W of power. The widespread use of the Raspberry Pi in DIY projects and research, combined with its affordability, made it an ideal choice for deploying the trained model. This setup enabled the system to process images directly in the field, testing the model’s detection abilities at heights ranging from 0.5 to 2.5 m. However, for intensive tasks such as image processing, the Raspberry Pi hardware can be quite limiting. To further increase the processing speed and ensure real-time performance, a Coral USB Accelerator was integrated into the system. This portable device offloads TF Lite model inference from the Raspberry Pi, demonstrating the practical application of IoT technologies to reduce the computational load and enable faster image processing. The overall image-processing system and its components are shown in Figure 3.
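A minimal on-device inference loop of this kind might look like the following sketch, which assumes the Edge TPU runtime and an Edge TPU-compiled model are installed; the model filename and camera index are placeholders, not artefacts from this study.

```python
import cv2
import numpy as np
from tflite_runtime.interpreter import Interpreter, load_delegate

# Load the Edge TPU-compiled model through the libedgetpu delegate.
interpreter = Interpreter(
    model_path="efficientdet_lite1_edgetpu.tflite",            # placeholder filename
    experimental_delegates=[load_delegate("libedgetpu.so.1")],
)
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
height, width = inp["shape"][1], inp["shape"][2]

cap = cv2.VideoCapture(0)                                      # camera index assumed
ok, frame = cap.read()
if ok:
    rgb = cv2.cvtColor(cv2.resize(frame, (width, height)), cv2.COLOR_BGR2RGB)
    interpreter.set_tensor(inp["index"], np.expand_dims(rgb, 0).astype(np.uint8))
    interpreter.invoke()
    # Print the output tensors (boxes, classes, scores, count); their exact order
    # depends on how the model was exported.
    for out in interpreter.get_output_details():
        print(out["name"], interpreter.get_tensor(out["index"]).shape)
cap.release()
```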

3.2. Crop-Spraying System

The steps involved in developing the crop-spraying system are outlined as follows.

3.2.1. Hardware Setup and Calibration

The development of the crop-spraying system focused on minimising weight to meet the unique needs of the drone. The system features a variable 5 V to 12 V self-priming pump with a 1/4-inch outlet, a compact 220 mL tank, and 360-degree nozzles, all connected by lightweight 1/4-inch flexible PVC piping. To control the pump motor, an L298N motor driver was interfaced with the Raspberry Pi. Calibration focused on determining the optimal operating voltage of the pump and then assessing how parameters such as the spray width, spray flow rate, and coefficient of variation (CV) of the two spray modes (mode A being 100% and mode B being 50%) vary at heights ranging from 0.5 m to 2.5 m.
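Pump speed control of the kind used for the two spray modes can be sketched with software PWM on the Raspberry Pi driving the L298N enable pin; the GPIO pin numbers and PWM frequency below are assumptions, not the wiring used in the prototype.

```python
import time

import RPi.GPIO as GPIO

ENA, IN1, IN2 = 18, 23, 24          # assumed BCM pins for the L298N ENA/IN1/IN2 lines

GPIO.setmode(GPIO.BCM)
GPIO.setup([ENA, IN1, IN2], GPIO.OUT)
GPIO.output(IN1, GPIO.HIGH)         # fixed pump direction
GPIO.output(IN2, GPIO.LOW)

pwm = GPIO.PWM(ENA, 1000)           # 1 kHz software PWM (assumed frequency)
pwm.start(0)

def spray(duty_percent, seconds):
    """Run the pump at the given duty cycle (100% = mode A, 50% = mode B)."""
    pwm.ChangeDutyCycle(duty_percent)
    time.sleep(seconds)
    pwm.ChangeDutyCycle(0)

spray(100, 3)                       # one mode A burst of 3 s, as in the integration step
GPIO.cleanup()
```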

3.2.2. Spray Integration

The crop-spraying system was then integrated with the image-processing system, with each crop class assigned to a spray mode. The block diagram of the crop-spraying system is shown in Figure 4. Pineapple was assigned to spray mode A at 100% of the pump capacity, while papaya was assigned to spray mode B at 50% of the pump capacity for a gentler spray. Cabbage was assigned 0% of the pump capacity, resulting in no spraying action. A confidence threshold of 98% was added to avoid incorrect activation, and each activation sprayed for a 3 s duration.
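The detection-to-spray mapping described above reduces to a small decision rule; the sketch below assumes a hypothetical spray(duty, seconds) helper like the one in the previous subsection, together with the 98% confidence threshold and 3 s duration reported here.

```python
# Pump duty cycle per detected class: mode A (100%), mode B (50%), no spray (0%).
SPRAY_MODE = {"pineapple": 100, "papaya": 50, "cabbage": 0}
CONF_THRESHOLD = 0.98
SPRAY_SECONDS = 3

def handle_detection(label, score, spray):
    """Activate the sprayer only for confident detections of sprayable classes."""
    duty = SPRAY_MODE.get(label, 0)
    if score >= CONF_THRESHOLD and duty > 0:
        print(f"Detected {label}, turn on the motor at {duty}%")
        spray(duty, SPRAY_SECONDS)
```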

3.3. Drone Flight System

The steps involved in developing the drone flight system are outlined as follows:

3.3.1. Hardware Setup and Flight Control System

The drone system utilised the PX4 X500 development kit (see Figure 5) and was assembled according to its documentation. The parameters of the quadrotor are presented in Table 2. A PID controller is used to stabilise and manoeuvre the quadrotor precisely along the desired trajectory, and the quadrotor mathematical model is taken from [31]. The aircraft position $(x, y, z)$, measured from the centre of the axes of the aircraft, is obtained via onboard GPS sensors. The orientations are obtained from an Inertial Measurement Unit, which consists of a gyroscope and accelerometer; these angular poses are given as $(\phi, \theta, \psi)$, also known as roll, pitch, and yaw, respectively. Hence, the final state vector of the quadrotor consists of the following 12 variables: the positions $(x, y, z, \phi, \theta, \psi)$ and their respective velocities $(u, v, w, p, q, r)$. The onboard system architecture is presented in Figure 6. Since the quadrotor has only four rotor speeds available for manoeuvring, to ensure a fully controllable system, the $\phi$ and $\theta$ angles are maintained close to zero. This allows the quadrotor to achieve the four desired outputs $(x, y, z, \psi)$. Additionally, maintaining $\phi$ and $\theta$ close to zero during spraying ensures that the centre of gravity remains in position. The spraying mechanism is also attached in such a way that the centre of gravity of the entire system does not change.
Moreover, the four rotors of the quadrotor generate thrust through their spin, supporting the weight and payload of the aircraft while also producing additional acceleration. The relationship between each rotor’s rotation and the resulting thrust force $F_i$ is approximately quadratic and is given by $F_i = k_F \omega_i^2$, where $\omega_i$ represents the speed of rotor $i$, $k_F$ is the force constant, and $i = 1, 2, 3, 4$ corresponds to the four rotors. To spin at a desired velocity, each rotor must overcome a drag moment, which is also quadratic in the velocity, given by $M_i = k_M \omega_i^2$, where $k_M$ is the torque constant.
Furthermore, with the speed–torque characteristic, the motor’s size can be computed relative to the payload or vice versa. During vertical takeoff or hovering, each rotor must support a quarter of the vehicle’s weight plus the payload, represented as $W_0 = \frac{1}{4}(mg + p)$, where $g$ is the gravitational acceleration and $p$ is the payload weight in N. Therefore, the nominal rotor speed during hovering or vertical takeoff is calculated as
$$\omega_0 = \sqrt{\frac{mg + p}{4 k_F}}.$$
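As a worked example of the hover equation, the sketch below uses the 1 kg airframe mass from Table 2 and the 1.5 kg payload reported in the abstract, together with an assumed force constant $k_F$ that is not given in the paper.

```python
import math

m, payload_mass, g = 1.0, 1.5, 9.8     # kg, kg, m/s^2 (Table 2 and abstract)
k_F = 6.5e-6                           # N/(rad/s)^2 -- assumed value, not from the paper

p = payload_mass * g                   # payload weight in N, as used in the equation
omega_0 = math.sqrt((m * g + p) / (4 * k_F))
print(f"nominal hover rotor speed ~ {omega_0:.0f} rad/s")   # ~971 rad/s for the assumed k_F
```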

3.3.2. Software Setup and Flight Control

The ground station setup involved installing the Mission Planner (MP) software, Ver. 1.3.80, on the ground control station laptop. MP is an application used to configure and control the drone through the ArduPilot platform. This stage included flashing the drone’s flight controller with the latest firmware to ensure that the required flight modes and autonomous mission operations were fully supported. Using MP at the ground station to manage and set up the drone shows how the IoT can be used to directly control and monitor the drone, making operations more flexible and effective.

3.3.3. Calibration and Pre-Flight Testing

This process involved calibrating onboard sensors such as the compass, accelerometer, gyroscope, and GPS, ensuring that the drone could accurately sense its orientation and position during flight. The LiPo batteries used to power the system (4S, 14.8 V, 5500 mAh) were those recommended by the development kit. Furthermore, a TX-12 radio controller operating at 2.4 GHz and radio telemetry at 433 MHz were set up, ensuring communication within the acceptable band ranges for the Fiji Islands. This setup allowed manual control and remote monitoring of the drone from the ground station. The drone flight system is summarised in Figure 7.

3.3.4. Flight and Mission Testing

The drone began flight tests under manual control in three primary modes: Stabilise, Alt Hold, and Loiter. These tests were used to observe the stability of basic manoeuvres. Stabilise mode allowed full manual control of the drone parameters; Alt Hold mode allowed the pilot to control horizontal movement while the drone maintained a constant altitude; and Loiter mode allowed the drone to hover in its current position in space. Following the manual flight tests, the focus shifted to autonomous mission execution. Using MP, a mission was designed with specific waypoints detailing the path, altitude, and speed for the drone to follow. Once the drone was in Auto mode, it navigated the predefined waypoints without manual input. Post-flight, the flight log data were analysed, focusing on GPS accuracy, to evaluate the drone’s adherence to the planned missions.
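Mission Planner was used for waypoint design in this work; purely as an illustration of the same idea in script form, the hedged sketch below uploads a short waypoint list and switches to Auto mode over a MAVLink link using the DroneKit-Python library. The connection string and coordinates are placeholders; only the 2.5 m altitude reflects the tests reported here.

```python
from dronekit import Command, VehicleMode, connect
from pymavlink import mavutil

vehicle = connect("udp:127.0.0.1:14550", wait_ready=True)       # placeholder telemetry link

cmds = vehicle.commands
cmds.clear()
for lat, lon in [(-18.1492, 178.4479), (-18.1493, 178.4480)]:   # placeholder waypoints
    cmds.add(Command(
        0, 0, 0,
        mavutil.mavlink.MAV_FRAME_GLOBAL_RELATIVE_ALT,
        mavutil.mavlink.MAV_CMD_NAV_WAYPOINT,
        0, 0, 0, 0, 0, 0,
        lat, lon, 2.5,                                           # 2.5 m altitude as flown in the tests
    ))
cmds.upload()

vehicle.mode = VehicleMode("AUTO")   # start the uploaded mission (arming handled elsewhere)
```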

3.4. Overall System

The steps involved in developing the overall system are outlined as follows.

3.4.1. System Integration Process

The final stage was integrating the image-processing, crop-spraying, and drone flight systems into a cohesive unit. Due to the additional weight, the drone’s flight dynamics were altered. To address this and ensure stable and responsive flight behaviour, the drone’s PID (Proportional, Integral, Derivative) parameters for the positions and attitudes had to be tuned as presented in [31]. Ref. [31] also shows how the states of the quadrotor can be decoupled into individual transfer functions that are easily stabilised using a PID or even a fractional-order PI-D controller.
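The position and attitude loops themselves run inside the flight controller firmware; purely to illustrate the PID structure that was tuned, a minimal discrete PID update using the altitude (Z) gains from Table 5 might look as follows.

```python
class PID:
    """Minimal discrete PID controller (illustrative only; the real loops run in the autopilot)."""

    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def update(self, error, dt):
        self.integral += error * dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Altitude (Z) gains from Table 5.
altitude_pid = PID(kp=3.99, ki=1.56, kd=3.10)
command = altitude_pid.update(error=2.5 - 2.3, dt=0.02)   # 2.5 m setpoint, 2.3 m measured, 50 Hz loop
print(command)
```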

3.4.2. System Testing

After tuning the PID parameters of the drone, manual flight tests were performed to verify the stability and responsiveness of the system in different flight modes. This was followed by autonomous flight tests in a 3 × 3 plant matrix setup, which included three specimens from each crop class organised in random order. These tests were conducted on USP grounds, on controlled premises under natural weather conditions, at coordinates 18°08′57.1″ S 178°26′52.3″ E, 18°09′05.8″ S 178°26′26.6″ E, and 18°09′17.5″ S 178°26′40.8″ E. Waypoints in MP were created to match the plant matrix setup, and tests for any potential deviations or drifts during flight were conducted. Once the flight path was verified, the spraying system was activated.

3.4.3. Evaluation and Optimisation

Following the testing phases, the system was evaluated over iterative trials, and the flight logs were analysed to confirm that the drone kept its course even in strong winds. The system was then optimised and fine-tuned until it produced satisfactory output.

4. Results and Discussion

In pursuit of deploying AI for drone-based PA, evaluating the model architectures is a step towards ensuring the effective real-time monitoring of crop health. Table 3 provides a comparative look at the Average Precision scores of the EfficientDetLite models for the three crop classes: pineapple, papaya, and cabbage. These data are essential for understanding each model’s ability to accurately detect and classify crop types from aerial images. The data indicate that the EfficientDetLite3 model achieves the highest precision across all crops, with an AP score of 92.30% for cabbage, while the EfficientDetLite0 model had the lowest AP score of 88.53% for cabbage. Each model was trained for 300 epochs with a batch size of 8, a value dictated by the available computational resources.
To manage the power needs of the drone, whose four motors dominate consumption, the Coral USB Accelerator, which features Google’s Edge TPU, was utilised to enable quick on-device processing, improving efficiency and reducing overall power use. The Edge TPU allows the data feed from the drone’s camera to be processed in real time on the drone itself, reducing the need to send data to a server for processing and thereby lowering latency and power consumption. Inference time is an indicative metric of the model’s responsiveness. Without the Edge TPU, Table 4 indicates a progressive increase in inference time with model complexity. EfficientDetLite0, the least complex model, processes images in 125 ms, consistent with its simple design. In contrast, EfficientDetLite3 registers the longest inference time of 1250 ms, potentially introducing delays in time-sensitive tasks. With Edge TPU support, inference times are notably reduced across all models; the most noticeable improvements are in EfficientDetLite0, which drops to 56 ms, and EfficientDetLite1, which drops to 91 ms.
The model storage requirement is an important factor for edge devices, where memory is at a premium. Table 4 shows that the models compiled for the Edge TPU are only slightly larger than their CPU-only counterparts; for instance, the increase from 7.38 MB to 7.65 MB for EfficientDetLite1 is a minor trade-off for the benefit of accelerated inference. There is also a clear trade-off between precision (accuracy) and speed (inference time), with higher accuracy coming at the cost of slower processing. The EfficientDetLite1 model emerged as the preferred choice, striking an optimal balance between these factors. The Coral USB Accelerator reduces inference times across all models, as illustrated in Table 4. This acceleration is particularly beneficial in our drone’s operational context, where the four motors significantly increase power consumption. By processing data on the edge, we considerably reduce the power required for communication and data processing, which directly contributes to extending the drone’s flight time. These enhancements are critical for maintaining the longer operational periods essential for thorough crop-monitoring and treatment applications. Moreover, while the Edge TPU improves the Raspberry Pi’s performance and inference times, its limitations remain apparent, and future work could explore the Coral Dev Board or the Nvidia Jetson Nano, which contain more powerful processors designed to handle intense computational tasks.
Figure 8 shows the object detection interface and demonstrates the EfficientDetLite1 model’s precision in identifying different crop types, labelling each within a bounding box. At an altitude of 2.5 m, the model confidently detects and annotates pineapples, papayas, and cabbages with a success rate of 100%. This single-detection focus is by design: imposing a one-object-per-frame limit reduces the chance of misclassification and ensures that the drone’s actions are based on the most reliable data.
As observed in Figure 9, there is a linear relationship between the spray width and height. Spray mode A achieves a 25% wider spray than spray mode B across all heights.
Figure 10 shows the spray flow rate at different heights. Mode A’s flow rate fluctuates but generally increases with height, peaking at 1883 mL/min at 2.5 m, while Mode B exhibits a less consistent pattern, with a peak flow rate of 915 mL/min at 2.5 m.
The CV is a relative measure of the variability in relation to the mean of the dataset, as shown in Figure 11. A lower CV value indicates a more uniform spray distribution. In this dataset, in spray mode A, the most uniform spray distribution occurs at a 2.5 m height with a CV of 0.61. In spray mode B, the most uniform distribution is at a 2.5 m height with a CV of 0.74. This suggests that, of the scenarios tested, raising the drone to a height of 2.5 m results in the most uniform spray distribution in both spray modes.
The MP waypoint interface, as shown in Figure 12, provides a visual representation of the planning that precedes a drone mission, illustrating the detailed path intended for the drone to follow over simulated agricultural fields. This 3-by-3 matrix, as shown in Figure 13, consists of pineapples, papayas, and cabbages arranged in a randomised order and serves as a controlled environment to evaluate the drone’s performance. The precision and clarity of the system’s visual outputs show how important IoT is for turning data into useful agricultural information. In line with this, the GPS flight log reveals the drone’s adherence to the planned path, as shown in Figure 14. Operating at a speed of 7.5 m/s and an altitude of 2.5 m, the drone has a flight time of 25 min with a payload of 1.5 kg. The drone strikes a balance between plant object detection and efficient field coverage. Despite encountering environmental variables such as strong winds, the drone’s flight control system, calibrated through PID tuning, enables it to correct course deviations. The values of the tuned PID parameters are shown in Table 5.
Figure 15 presents the Smart Agri-Drone viewed from the side, top, and end, showcasing the organised arrangement of components and the overall structure of the drone. The top view is a top-down perspective of the Smart Agri-Drone, showcasing the symmetry and placement of components within the overall system.
Figure 16 captures the Smart Agri-Drone in action, demonstrating its application in a simulated agricultural setting. The image shows the drone during a precision spraying task, highlighting the object detection interface alongside the Raspberry Pi terminal’s feedback. In this instance, a papaya plant is detected with a confidence of 99% (0.99), showing the drone’s ability to perform real-time object detection. The terminal message, “Detected Papaya, Turn on the motor at 50%”, signifies the drone’s response to the detection, with the spraying system operating in spray mode B.
Our experiments focused on how the drone’s speed affects image processing and crop detection. The drone’s speed was kept at 7.5 m/s, matching the image-processing time of 91 ms. This speed ensures that the drone captures and processes images effectively, leading to accurate and timely spraying. Changes in lighting can affect image quality. Using a camera with adjustable settings can help maintain image quality in different lighting conditions. High winds or quick drone manoeuvres can cause blur. A gimbal can stabilise the camera to keep images clear. Wind can make the drone unstable, and therefore, tuning the flight controller’s settings, such as roll, pitch, and yaw response rates, helps stabilise the drone in high winds, maintaining image quality.
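As a rough check of this pairing (our arithmetic, not a figure reported in the paper), the ground distance covered during a single 91 ms inference at 7.5 m/s is
$$ d = v \, t_{\text{inf}} = 7.5\ \text{m s}^{-1} \times 0.091\ \text{s} \approx 0.68\ \text{m}, $$
i.e., the drone advances roughly 0.7 m between successive detections at this speed.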

5. Conclusions and Future Works

In conclusion, the integration of smart agriculture drones with precision agriculture methods, through image detection and autonomous spraying systems, offers a further advancement to contemporary farming practices. An affordable system that allows the creation of flexible, real-time UAV-based crop-spraying applications with real-time analysis was developed. This study applies IoT principles to demonstrate the development and evaluation of a system that uses TF Lite models deployed on a Raspberry Pi and accelerated with the Coral Edge TPU for the custom object detection of pineapple, papaya, and cabbage crop classes. These components are integrated with an autonomous drone that flies according to waypoints set in MP and a crop-spraying system that selects one of two spray modes based on the detected class. Together, they show how the IoT is improving farming with systems that work in real time and adjust on their own. The reported object detection accuracy was obtained through training on a small dataset with lower-quality images; a larger dataset and higher-quality images could improve performance further.
Future work in this field could enhance system performance through more powerful hardware, such as the Coral Dev Board or the Jetson Nano development board. The current setup shows the need for a more robust system design to adapt to different agricultural environments. Future improvements could include advanced stabilisation technologies, better camera sensors for fast-moving scenes and diverse lighting, and optimised flight controller settings for stability in bad weather. Developing algorithms for variable image conditions could also improve detection accuracy and reliability. Another feature could include the dispersion of plant seeds in addition to crop spraying. Implementing advanced imaging through MS image techniques will also enable farmers to gather detailed plant health data and optimise productivity through sustainable practices.

Author Contributions

Conceptualization, E.S. and A.P.; methodology, E.S., A.P. and U.M.; software, E.S.; validation, E.S. and A.P.; formal analysis, E.S.; investigation, E.S.; resources, U.M. and S.I.A.; data curation, E.S. and A.P.; writing—original draft preparation, E.S. and A.P.; writing—review and editing, U.M. and S.I.A.; visualization, E.S. and U.M.; supervision, U.M. and S.I.A.; project administration, U.M.; funding acquisition, U.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

No new data were created other than those presented in this paper.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
ANN: Artificial Neural Network
BLDC: Brushless DC motor
CNN: Convolutional neural network
CNN-GRU: Convolutional Neural Network-Gated Recurrent Unit
CMS: Crop-monitoring system
CWSI: Crop Water Stress Index
DJI: Da-Jiang Innovations
DL: Deep learning
DNN: Deep Neural Network
FANET: Flying Ad Hoc Network
FPS: Frames Per Second
GRU: Gated Recurrent Unit
IMU: Inertial Measurement Unit
IoT: Internet of Things
LiPo: Lithium Polymer
LR: Logistic Regression
ML: Machine learning
MLP: Multilayer Perceptron
MP: Mission Planner
MS: Multispectral
MSI: Multispectral imaging
NMD: Number median diameter
PA: Precision agriculture
PID: Proportional Integral Derivative
PLSR: Partial Least Squares Regression
RF: Random Forest
RFR: Random Forest Regression
RGB: Red Green Blue
RSMPA-DLFCC: Robust Softmax with Maximum Probability Analysis-Deep Local Feature Cross-Correlation
SBODL-FCC: Sigmoid-Based Objective Detection Layer-Feature Cross-Correlation
SVM: Support Vector Machine
TF: TensorFlow
TIR: Thermal Infrared
UAV: Unmanned Aerial Vehicle
VGG: Visual Geometry Group
VMD: Volume median diameter

References

  1. Pajon, C. Food Systems in the Pacific: Addressing Challenges in Cooperation with Europe; Briefings de l’Ifri: Paris, France, 2023. [Google Scholar]
  2. Berry, F.; Bless, A.; Davila Cisneros, F. Evidence Brief UNFSS Pacific Country Food System Pathways Analysis. In Proceedings of the United Nations Food Systems Summit (UNFSS), New York, NY, USA, 23 September 2021. [Google Scholar]
  3. Karunathilake, E.M.B.M.; Le, A.T.; Heo, S.; Chung, Y.S.; Mansoor, S. The Path to Smart Farming: Innovations and Opportunities in Precision Agriculture. Agriculture 2023, 13, 1593. [Google Scholar] [CrossRef]
  4. Kumar, S.; Lal, R.; Ali, A.; Rollings, N.; Mehta, U.; Assaf, M. A Real-Time Fish Recognition Using Deep Learning Algorithms for Low-Quality Images on an Underwater Drone. In Artificial Intelligence and Sustainable Computing, Proceedings of the ICSISCET 2023, Gwalior, India, 21–22 October 2023; Pandit, M., Gaur, M.K., Kumar, S., Eds.; Springer: Singapore; Berlin/Heidelberg, Germany, 2024; pp. 67–79. [Google Scholar]
  5. Verma, G.; Gupta, Y.; Malik, A.M.; Chapman, B. Performance Evaluation of Deep Learning Compilers for Edge Inference. In Proceedings of the 2021 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW), New Orleans, LA, USA, 18–22 May 2020; pp. 858–865. [Google Scholar] [CrossRef]
  6. Tan, M.; Pang, R.; Le, Q.V. EfficientDet: Scalable and Efficient Object Detection. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 14–19 June 2020; pp. 10778–10787. [Google Scholar] [CrossRef]
  7. Chand, A.A.; Prasad, K.A.; Mar, E.; Dakai, S.; Mamun, K.A.; Islam, F.R.; Mehta, U.; Kumar, N.M. Design and Analysis of Photovoltaic Powered Battery-Operated Computer Vision-Based Multi-Purpose Smart Farming Robot. Agronomy 2021, 11, 530. [Google Scholar] [CrossRef]
  8. Bouguettaya, A.; Zarzour, H.; Kechida, A.; Taberkit, A.M. Design and Development of Object Detection System in Augmented Reality Based Indoor Navigation Application. In Proceedings of the 2022 International Conference on Electrical Engineering and Informatics, Banda Aceh, Indonesia, 27–28 September 2022; pp. 89–94. [Google Scholar] [CrossRef]
  9. Renella, N.T.; Bhandari, S.; Raheja, A. Machine learning models for detecting and isolating weeds from strawberry plants using UAVs. In Proceedings of the Autonomous Air and Ground Sensing Systems for Agricultural Optimization and Phenotyping VIII, Orlando, FL, USA, 30 April–5 May 2023; Bauer, C., Thomasson, J.A., Eds.; SPIE: Orlando, FL, USA, 2023; p. 8. [Google Scholar] [CrossRef]
  10. Yallappa, D.; Veerangouda, M.; Maski, D.; Palled, V.; Bheemanna, M. Development and evaluation of drone mounted sprayer for pesticide applications to crops. In Proceedings of the 2017 IEEE Global Humanitarian Technology Conference (GHTC), San Jose, CA, USA, 19–22 October 2017; pp. 1–7. [Google Scholar] [CrossRef]
  11. Hafeez, A.; Husain, M.A.; Singh, S.; Chauhan, A.; Khan, M.T.; Kumar, N.; Chauhan, A.; Soni, S. Implementation of drone technology for farm monitoring & pesticide spraying: A review. Inf. Process. Agric. 2023, 10, 192–203. [Google Scholar] [CrossRef]
  12. Jubair, M.A.; Hossain, S.; Masud, M.A.A.; Hasan, K.M.; Newaz, S.H.S.; Ahsan, M.S. Design and development of an autonomous agricultural drone for sowing seeds. In Proceedings of the 7th Brunei International Conference on Engineering and Technology 2018 (BICET 2018), Bandar Seri Begawan, Brunei, 12–14 November 2018; p. 101. [Google Scholar] [CrossRef]
  13. Piriyasupakij, J.; Prasitphan, R. Design and Development of Autonomous Drone to Detect and Alert Intruders in Surveillance Areas. In Proceedings of the 2023 8th International Conference on Business and Industrial Research (ICBIR), Bangkok, Thailand, 18–19 May 2023; pp. 1094–1099. [Google Scholar] [CrossRef]
  14. Naufal, R.I.; Karna, N.; Shin, S.Y. Vision-based Autonomous Landing System for Quadcopter Drone Using OpenMV. In Proceedings of the 2022 13th International Conference on Information and Communication Technology Convergence (ICTC), Jeju Island, Republic of Korea, 19–21 October 2022; pp. 1233–1237. [Google Scholar] [CrossRef]
  15. Ukaegbu, U.; Mathipa, L.; Malapane, M.; Tartibu, L.K.; Olayode, I.O. Deep Learning for Smart Plant Weed Applications Employing an Unmanned Aerial Vehicle. In Proceedings of the 2022 IEEE 13th International Conference on Mechanical and Intelligent Manufacturing Technologies (ICMIMT), Cape Town, South Africa, 25–27 May 2022; pp. 321–325. [Google Scholar] [CrossRef]
  16. Kiropoulos, K.; Tsouros, D.C.; Dimaraki, F.; Triantafyllou, A.; Bibi, S.; Sarigiannidis, P.; Angelidis, P. Monitoring Saffron Crops with UAVs. Telecom 2022, 3, 301–321. [Google Scholar] [CrossRef]
  17. Vijayalakshmi, K.; Al-Otaibi, S.; Arya, L.; Almaiah, M.A.; Anithaashri, T.P.; Karthik, S.S.; Shishakly, R. Smart Agricultural–Industrial Crop-Monitoring System Using Unmanned Aerial Vehicle–Internet of Things Classification Techniques. Sustainability 2023, 15, 11242. [Google Scholar] [CrossRef]
  18. Aliyari, M.; Droguett, E.L.; Ayele, Y.Z. UAV-Based Bridge Inspection via Transfer Learning. Sustainability 2021, 13, 11359. [Google Scholar] [CrossRef]
  19. Munawar, H.S.; Ullah, F.; Qayyum, S.; Khan, S.I.; Mojtahedi, M. UAVs in Disaster Management: Application of Integrated Aerial Imagery and Convolutional Neural Network for Flood Detection. Sustainability 2021, 13, 7547. [Google Scholar] [CrossRef]
  20. Shahi, T.B.; Xu, C.Y.; Neupane, A.; Guo, W. Recent Advances in Crop Disease Detection Using UAV and Deep Learning Techniques. Remote Sens. 2023, 15, 2450. [Google Scholar] [CrossRef]
  21. Som-ard, J.; Atzberger, C.; Izquierdo-Verdiguier, E.; Vuolo, F.; Immitzer, M. Remote Sensing Applications in Sugarcane Cultivation: A Review. Remote Sens. 2021, 13, 4040. [Google Scholar] [CrossRef]
  22. Zualkernan, I.; Abuhani, D.A.; Hussain, M.H.; Khan, J.; ElMohandes, M. Machine Learning for Precision Agriculture Using Imagery from Unmanned Aerial Vehicles (UAVs): A Survey. Drones 2023, 7, 382. [Google Scholar] [CrossRef]
  23. de Queiroz Otone, J.D.; Theodoro, G.d.F.; Santana, D.C.; Teodoro, L.P.R.; de Oliveira, J.T.; de Oliveira, I.C.; da Silva Junior, C.A.; Teodoro, P.E.; Baio, F.H.R. Hyperspectral Response of the Soybean Crop as a Function of Target Spot (Corynespora cassiicola) Using Machine Learning to Classify Severity Levels. AgriEngineering 2024, 6, 330–343. [Google Scholar] [CrossRef]
  24. Mohidem, N.A.; Che’Ya, N.N.; Juraimi, A.S.; Fazlil Ilahi, W.F.; Mohd Roslim, M.H.; Sulaiman, N.; Saberioon, M.; Mohd Noor, N. How Can Unmanned Aerial Vehicles Be Used for Detecting Weeds in Agricultural Fields? Agriculture 2021, 11, 1004. [Google Scholar] [CrossRef]
  25. Almasoud, A.S.; Mengash, H.A.; Saeed, M.K.; Alotaibi, F.A.; Othman, K.M.; Mahmud, A. Remote Sensing Imagery Data Analysis Using Marine Predators Algorithm with Deep Learning for Food Crop Classification. Biomimetics 2023, 8, 535. [Google Scholar] [CrossRef] [PubMed]
  26. Ampatzidis, Y.; Partel, V. UAV-Based High Throughput Phenotyping in Citrus Utilizing Multispectral Imaging and Artificial Intelligence. Remote Sens. 2019, 11, 410. [Google Scholar] [CrossRef]
  27. De Castro, A.I.; Torres-Sánchez, J.; Peña, J.M.; Jiménez-Brenes, F.M.; Csillik, O.; López-Granados, F. An Automatic Random Forest-OBIA Algorithm for Early Weed Mapping between and within Crop Rows Using UAV Imagery. Remote Sens. 2018, 10, 285. [Google Scholar] [CrossRef]
  28. Mao, Y.; Li, H.; Wang, Y.; Wang, H.; Shen, J.; Xu, Y.; Ding, S.; Wang, H.; Ding, Z.; Fan, K. Rapid monitoring of tea plants under cold stress based on UAV multi-sensor data. Comput. Electron. Agric. 2023, 213, 108176. [Google Scholar] [CrossRef]
  29. Xue, X.; Lan, Y.; Sun, Z.; Chang, C.; Hoffmann, W.C. Develop an unmanned aerial vehicle based automatic aerial spraying system. Comput. Electron. Agric. 2016, 128, 58–66. [Google Scholar] [CrossRef]
  30. Primicerio, J.; Di Gennaro, S.F.; Fiorillo, E.; Genesio, L.; Lugato, E.; Matese, A.; Vaccari, F.P. A flexible unmanned aerial vehicle for precision agriculture. Precis. Agric. 2012, 13, 517–523. [Google Scholar] [CrossRef]
  31. Shankaran, V.P.; Azid, S.I.; Mehta, U.; Fagiolini, A. Improved Performance in Quadrotor Trajectory Tracking Using MIMO PIμ-D Control. IEEE Access 2022, 10, 110646–110660. [Google Scholar] [CrossRef]
Figure 1. Methodology showing (a) image-processing system, (b) crop-spraying system, (c) drone flight system.
Figure 2. Sample images from custom dataset.
Figure 3. Image processing system.
Figure 4. Crop-spraying system.
Figure 5. Hardware setup of X500 development kit.
Figure 6. Closed-loop control system.
Figure 7. Drone system and ground station.
Figure 8. Object detection at an altitude of 2.5 m with Pi Camera.
Figure 9. Comparing spray width in modes A and B.
Figure 10. Comparing spray flow rate in modes A and B.
Figure 11. Spray coefficient of variation vs. height.
Figure 12. Mission Planner waypoint interface over simulated agricultural fields.
Figure 13. Plant matrix setup.
Figure 14. GPS flight log.
Figure 15. Drone setup.
Figure 16. Drone at the test site with a remote terminal message and detection at height 2.5 m.
Table 1. Summary of UAV-based ML applications in agriculture.

| Ref. | Focus Area | ML Method | Equipment | Performance |
|---|---|---|---|---|
| [16] | Saffron | RF, MLP | RGB, MS cameras | Accuracy: RF 100%, MLP 95% |
| [17] | Agricultural weed, soil, and crop analysis | CMS, FANET, CNN, data fusion | Electrochemical sensors, DJI Inspire 2 | Accuracy: CMS 84%, FANET 87%, CNN 91%, data fusion 94% |
| [18] | Bridge inspection (non-agricultural) | VGG16, ResNet50, ResNet152v2, Xception, InceptionV3, DenseNet201 | Spectroradiometer, MS camera | Accuracy: up to 83% |
| [19] | Flood detection (environmental monitoring) | CNN | Spectroradiometer, MS camera | Accuracy: 91% |
| [20] | Multiple crops (wheat, maize, potato, grape, banana, coffee, radish, sugar, cotton, soybean, tea) | RF, SVM, ANN, CNN | RGB, MS, hyperspectral, thermal | Accuracy: ML 72–98%, DL 85–100% |
| [21] | Sugarcane cultivation analysis | RF, CNN | Landsat, Sentinel-2 MSI | Accuracy: RF 80–98%, CNN 95% |
| [22] | General crop and weed detection | SVM, CNN | Imaging sensors, RGB, MS, hyperspectral, thermal | Accuracy: SVM 88.01%, CNN 96.3% |
| [23] | Soybean cultivation and health analysis | LR, SVM | Spectroradiometer, MS camera | Accuracy: RF 76%, SVM 85% |
| [24] | Weed identification | AlexNet, CNN, SVM | RGB, MS, hyperspectral, thermal | Accuracy: AlexNet 99.8%, CNN 98.8%, SVM 94% |
| [25] | Various crops (maize, banana, forest, legume, structural, others) analysis | RSMPA-DLFCC, SBODL-FCC, DNN, AlexNet, VGG-16, ResNet, SVM | UAV dataset | Accuracy: RSMPA-DLFCC 98.22%, SBODL-FCC 97.43%, DNN 86.23%, AlexNet 90.49%, VGG-16 90.35%, ResNet 87.7%, SVM 86.69% |
| [26] | Citrus tree health and growth monitoring | CNNs | MS camera (RedEdge-M) | Accuracy: overall 99.8%, tree gaps 94.2%, canopy 85.5% |
| [27] | Sunflower and cotton cultivation analysis | RF | RGB camera (Sony ILCE-6000) | Accuracy: sunflower 81%, cotton 84% |
| [28] | Tea plant cold injury assessment | CNN-GRU, GRU, PLSR, SVR, RFR | MS, TIR, RGB | Accuracy: R² = 0.862; RMSEP = 0.138; RPD = 2.220 |
| [29] | Automatic aerial spraying system | MSP430 single-chip microcomputer | GPS, spray uniformity tests | Route deviation: 0.2 m; coefficient of variation: 25% |
| [30] | Vineyard management | NDVI-based classification | Six-rotor UAV, pitch-and-roll-compensated multispectral camera | Good agreement with ground-based observations |
Table 2. Drone parameters.

| Parameter | Value | Unit |
|---|---|---|
| Drone mass | 1 | kg |
| Arm length | 0.25 | m |
| Inertial moment about x | 0.063 | kg·m² |
| Inertial moment about y | 0.063 | kg·m² |
| Inertial moment about z | 0.063 | kg·m² |
| Motor thrust (at 100% throttle) | 1293 | kV |
| Motor torque (at 100% throttle) | 0.18 | Nm |
| Gravitational acceleration | 9.8 | m/s² |
Table 3. Average Precision of models for different classes (%).

| Model Architecture | Pineapple | Papaya | Cabbage |
|---|---|---|---|
| EfficientDetLite0 | 70.74 | 76.39 | 88.53 |
| EfficientDetLite1 | 75.34 | 79.11 | 89.71 |
| EfficientDetLite2 | 76.89 | 83.13 | 91.86 |
| EfficientDetLite3 | 78.07 | 84.95 | 92.30 |
Table 4. Comparison of model performance with and without Edge TPU.

| Model Architecture | Inference Time (ms), without Edge TPU | File Size (MB), without Edge TPU | Inference Time (ms), with Edge TPU | File Size (MB), with Edge TPU |
|---|---|---|---|---|
| EfficientDetLite0 | 125 | 5.12 | 56 | 5.57 |
| EfficientDetLite1 | 185 | 7.38 | 91 | 7.65 |
| EfficientDetLite2 | 370 | 9.41 | 250 | 9.74 |
| EfficientDetLite3 | 1250 | 11.95 | 500 | 12.32 |
Table 5. Tuned PID parameters.

| Parameter | X | Y | Z | Roll | Pitch | Yaw |
|---|---|---|---|---|---|---|
| Proportional | 1.5 | 1.5 | 3.99 | 0.225 | 0.329 | 1.036 |
| Integral | 0.0001 | 0.0001 | 1.56 | 0.225 | 0.329 | 0.104 |
| Derivative | 0.05 | 0.05 | 3.10 | 0.009 | 0.011 | 0 |