Review

A Review of Environmental Sensing Technologies for Targeted Spraying in Orchards

1 School of Agricultural Engineering, Jiangsu University, Zhenjiang 212013, China
2 Key Laboratory of Plant Protection Engineering, Ministry of Agriculture and Rural Affairs, Jiangsu University, Zhenjiang 212013, China
* Author to whom correspondence should be addressed.
Horticulturae 2025, 11(5), 551; https://doi.org/10.3390/horticulturae11050551
Submission received: 8 April 2025 / Revised: 16 May 2025 / Accepted: 16 May 2025 / Published: 20 May 2025
(This article belongs to the Section Fruit Production Systems)

Abstract

Precision pesticide application is a key focus in orchard management, with targeted spraying serving as a core technology to optimize pesticide delivery and reduce environmental pollution. However, its accurate implementation relies on high-precision environmental sensing technologies to enable the precise identification of target objects and dynamic regulation of spraying strategies. This paper systematically reviews the application of orchard environmental sensing technologies in targeted spraying. It first focuses on key sensors used in environmental sensing, providing an in-depth analysis of their operational mechanisms and advantages in orchard environmental perception. Subsequently, this paper discusses the role of multi-source data fusion and artificial intelligence analysis techniques in improving the accuracy and stability of orchard environmental sensing, supporting crown structure modeling, pest and disease monitoring, and weed recognition. Additionally, this paper reviews the practical paths of environmental sensing-driven targeted spraying technologies, including variable spraying strategies based on canopy structure perception, precise pesticide application methods combined with intelligent pest and disease recognition, and targeted weed control technologies relying on weed and non-target area detection. Finally, this paper summarizes the challenges faced by multi-source sensing and targeted spraying technologies in light of current research progress and industry needs, and explores potential future developments in low-cost sensors, real-time data processing, intelligent decision making, and unmanned agricultural machinery.

1. Introduction

With population growth and the upgrading of consumption structures, fruit tree cultivation is becoming increasingly important in modern agricultural systems. Compared to field crops, orchard environments are more complex and characterized by diverse topography and rich canopy layers, with the targets of operations extending from the entire plant to the finer structural levels such as branches, leaves, and fruits. At present, orchard pest and disease management continues to depend primarily on chemical pesticide applications [1,2]. The traditional pesticide application mode usually involves large-area, uniform dosage spraying, which leads to ineffective and excessive spraying within the orchard space [3].
Specifically, traditional pesticide application faces several critical challenges. First, pesticide droplets often fail to penetrate dense tree canopies effectively and reach target areas, leading to significant liquid loss and reduced spraying efficiency [4,5]. Second, the spatial heterogeneity of pests, diseases, and weeds results in uneven infestation, where excessive pesticide use not only raises costs but also causes phytotoxicity and environmental pollution [6,7,8]. Third, topographical variability and the structural complexity of canopies increase the difficulty of spray control, making it challenging to achieve accurate and efficient applications based solely on manual experience [9,10]. To address these issues, the integration of advanced sensors and intelligent algorithms enables real-time acquisition of multidimensional environmental data [11,12], supporting variable-rate application and data-driven decision making for precise, targeted spraying [13,14,15].
From a technical perspective, orchard environmental sensing involves the detection and identification of natural environmental factors (such as temperature, humidity, pest and disease distribution, and soil composition) [16], target objects (such as tree morphology, canopy structure, and fruit distribution) [17,18,19], and operational environments (such as topography and weather conditions). Visual sensors, LiDAR, multispectral/hyperspectral cameras, and other technologies have significant application value in orchard scenarios [20,21,22]. However, in practical deployment, they are still constrained by factors such as cost, weather resistance, occlusion, and field of view. To improve the accuracy and stability of environmental sensing, multi-source data fusion and deep learning algorithms have become key research directions. Researchers have integrated RGB images, 3D point clouds, and spectral reflectance information to achieve comprehensive sensing of orchard pest detection, canopy structure analysis [23,24], and weed distribution recognition [25,26].
Targeted spraying, driven by environmental perception, emphasizes the real-time integration of sensing data with spraying strategies. Information collected, such as the orchard canopy volume [27], leaf distribution, and pest location [28], will guide the nozzle control and flow rate adjustment of spraying robots. Compared to fixed-dose pesticide application modes, environmental perception-driven targeted spraying can effectively reduce pesticide use while improving pest and disease control efficacy.
Based on the above background, this paper investigates several key aspects of orchard environmental sensing and targeted spraying. First, it introduces the definitions and current development status of relevant sensing technologies, with a focus on the working principles and application scenarios of core sensors such as vision systems, LiDAR, and multispectral imaging. Second, it highlights the crucial role of multi-source data fusion and deep learning algorithms in enabling accurate environmental perception. It further analyzes practical approaches for implementing perception-driven targeted spraying, including variable spraying strategies based on canopy structure sensing, precise pesticide application guided by intelligent pest and disease identification, and targeted weed control using weed and non-target area detection.
Subsequently, this paper discusses the main challenges currently faced by these technologies. These include high hardware costs, low data processing efficiency, and limited adaptability in complex orchard environments. Finally, considering industry needs and emerging technological trends, this paper outlines future development directions for precision pesticide application in orchards. The aim is to provide both theoretical insights and practical guidance for researchers, system developers, and orchard practitioners.

2. Environmental Sensing Technology

2.1. Definition and Principles of Environmental Sensing Technology

Environmental sensing technology refers to a technical system that integrates sensors, data processing techniques, and intelligent algorithms [29,30] to collect and analyze multidimensional environmental data in real time. It enables informed decision-making and intelligent task execution. In the context of precision agriculture and orchard management, environmental sensing encompasses not only the monitoring of natural environmental parameters—including temperature, humidity, soil composition, and pest or disease presence—but also the real-time perception of operational targets such as tree morphology, growth status, and fruit distribution, as well as external conditions like terrain and weather. The core objective of this technology is to capture dynamic changes in both the environment and crops, thereby supporting precise fertilization, targeted spraying, and optimized resource allocation. Ultimately, it contributes to the advancement of intelligent and efficient agricultural production systems.

2.2. Environmental Sensing Technology for Orchards

The orchard environment is complex, featuring a three-dimensional spatial structure and highly heterogeneous crop varieties, while also being constrained by complex topography [31]. It is further influenced by environmental disturbances such as variations in wind speed, uneven light exposure, and microclimatic effects [32,33]. The interactions among various environmental factors make the dynamic changes in orchard environments more pronounced. To meet the demands of precision agriculture, orchard environmental sensing technology has continuously evolved in areas such as sensor hardware, data fusion analysis, and intelligent decision-making, gradually constructing an intelligent perception system tailored to the complex environmental characteristics of orchards.
In recent years, driven by advances in optical sensing, LiDAR, multispectral/hyperspectral imaging, and embedded computing, orchard sensing systems have notably improved in accuracy, robustness, and real-time responsiveness [34,35]. LiDAR in orchard environmental sensing needs to work in synergy with visual sensors and incorporate related algorithms such as deep learning and point cloud processing [36,37], to achieve precise modeling of tree structures, quantitative assessment of canopy density, and effective segmentation of non-target areas [38,39]. Multispectral and hyperspectral imaging technologies are used for pest and disease detection, tree health monitoring, and precise variable pesticide application, enhancing the precision of orchard management. The relevant sensors are shown in Figure 1.
The orchard environment is highly complex and characterized by dynamic changes, making it difficult for single sensors to meet the demands of detailed management [40]. Therefore, multi-source data fusion [41] and artificial intelligence (AI) analysis have become important development directions for orchard environmental sensing technologies [42,43]. Multi-sensor data fusion technology combines various sensors, such as RGB cameras, LiDAR, and hyperspectral imaging [44,45,46], and uses fusion strategies at the data layer, feature layer, or decision layer to achieve more comprehensive information extraction and optimization.
By integrating deep learning algorithms (such as Convolutional Neural Networks (CNN) and Transformers), tasks such as canopy detection [47], pest and disease recognition [48], and monitoring of tree growth status can be performed. At the same time, AI technologies can optimize the data fusion process, enhancing the stability, environmental adaptability, and decision-making accuracy of the sensing system. This, in turn, improves the level of intelligence in orchard management and supports the development of precision agriculture, as illustrated in Figure 2.
With the development of smart agriculture, the application of environmental sensing technologies in orchard management has become increasingly widespread. In precision-targeted spraying, the sensing system can combine the tree canopy structure, pest and disease distribution, weed presence, and non-target areas to dynamically adjust the spraying volume, angle, and path in real-time. This optimizes pesticide use, reduces waste, and minimizes environmental pollution.

3. Key Sensing Technologies for Orchards

3.1. Visual Sensors: Imaging Technologies and Orchard Applications

Visual sensors are essential components for achieving orchard environmental perception. These sensors capture image data of crop growth status, tree canopy structure, and surrounding obstacles based on visible or infrared spectral information. Compared to other sensors, visual sensors offer advantages such as easy installation, small size, and low energy consumption. They can be deployed on various platforms for orchard information collection. Properly matching visual sensors with appropriate imaging technologies is fundamental to realizing orchard environmental perception and a key factor in implementing precision targeted spraying.

3.1.1. Monocular and Binocular (Multi-Camera) Visual Sensors

Monocular RGB sensors are widely used in orchard environmental perception and precision targeted spraying due to their simple structure, low cost, and mature algorithms. These sensors, based on the pinhole imaging principle, project three-dimensional spatial information onto a two-dimensional plane. This allows for the real-time capture of environmental data related to the canopy, pests and diseases [49], fruits [50,51,52], and weeds [53,54], providing a data foundation for subsequent analysis. The relevant sensors are shown in Figure 3.
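To make the projection model concrete, the following minimal sketch (with purely illustrative intrinsic parameters) shows how the pinhole model maps a 3D point in camera coordinates to a 2D pixel; because depth Z is divided out, a single monocular image cannot recover the original distance of the point.

```python
import numpy as np

def project_pinhole(point_xyz, fx, fy, cx, cy):
    """Project a 3D point (camera coordinates, metres) to pixel coordinates.

    Illustrative pinhole model: depth Z is divided out, so a single image
    cannot recover the original distance of the point.
    """
    X, Y, Z = point_xyz
    u = fx * X / Z + cx
    v = fy * Y / Z + cy
    return u, v

# Hypothetical intrinsics and a canopy point 3 m in front of the camera
u, v = project_pinhole((0.5, -0.2, 3.0), fx=900.0, fy=900.0, cx=640.0, cy=360.0)
print(f"pixel: ({u:.1f}, {v:.1f})")
```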
Liu et al. [55] used the RealSense D435i for target perception, capturing RGB images of Qingpingle plums in real-time. This provided crucial data support for subsequent fruit detection, segmentation, and related technical analysis. For tree canopies, researchers equipped drones with RGB cameras to capture detailed information about the canopy, branch extensions, and shadows, facilitating further analysis of the tree canopy and the extraction of its refined features [56]. The applications of monocular and binocular vision sensors in orchard environmental perception are summarized in Table 1 and Table 2.
In orchard environmental perception, two-dimensional image data provides information about crop appearance but has limitations in capturing the spatial location of fruits, parsing canopy layers, and reconstructing overall structure, making it difficult to represent the three-dimensional features of the orchard environment. Monocular imaging is further constrained by its field of view and depth estimation accuracy, and methods based on image features struggle to accurately reconstruct the three-dimensional structure of objects. In practical orchard environmental sensing applications, multi-camera (or multi-view) systems are therefore commonly used to capture spatial features.
Huang et al. [63] used a heterogeneous binocular system to capture RGB images and depth information of fruit trees in real time, enabling subsequent tree analysis. Mirbod et al. [64] used a BFS-U3-88S6C-C camera system to capture image and depth data of apples to analyze fruit dimensions, including diameter, surface area, and other parameters.
Table 2. Application of binocular vision sensors and systems in orchard environmental sensing.
Researcher | Collection Equipment | Image Data | Data Usage | Ref.
Zhang et al. | Binocular camera | Apple fruit | Fruit detection and localization | [65]
Wang et al. | MV-VD120SC | Litchi fruit | Localization of overlapping fruits | [66]
Zheng et al. | MV-SUA134GC | Tomato fruit | Localization of overlapping fruits | [67]
Liu et al. | MV-CA060-10GC | Pineapple fruit | Localization of overlapping fruits | [68]
Pan et al. | ZED 2i | Pear fruit | Segmentation of overlapping fruits | [69]
Sun et al. | ZED 2i | Pear tree trunk | Trunk detection and distance measurement | [70]
Tang et al. | ZED 2i | Oil-seed camellia fruit | Localization of overlapping fruits | [71]
Zhang et al. | ZED 2i | Fruit tree branches | Three-dimensional reconstruction of branches | [72]
Compared to monocular vision, binocular (or multi-camera) vision uses disparity information to calculate depth, overcoming the limitations of monocular imaging in three-dimensional reconstruction. It performs depth estimation based on passive vision, not relying on active light sources. As a result, it is less affected by ambient light and is suitable for the complex lighting conditions in orchards, including sunny days, cloudy days, and nighttime lighting.
However, the depth calculation in binocular vision relies on the stereo matching process, which requires precise camera calibration and algorithm optimization. Additionally, low-contrast lighting conditions, such as weak lighting, direct strong light, or shadowed areas, can further affect the accuracy and stability of stereo matching.
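The depth calculation described above follows the standard triangulation relation Z = f·B/d, where f is the focal length in pixels, B the baseline, and d the disparity. The sketch below, with assumed calibration values, converts a disparity map into metric depth and masks pixels where stereo matching failed, as can occur under the low-contrast conditions noted above.

```python
import numpy as np

def disparity_to_depth(disparity_px, focal_px, baseline_m):
    """Convert a stereo disparity map (pixels) to metric depth (metres).

    Z = f * B / d; pixels with non-positive disparity (failed matches)
    are returned as NaN so later processing can ignore them.
    """
    disparity = np.asarray(disparity_px, dtype=np.float32)
    depth = np.full_like(disparity, np.nan)
    valid = disparity > 0
    depth[valid] = focal_px * baseline_m / disparity[valid]
    return depth

# Assumed calibration: 1000 px focal length, 12 cm baseline
disp = np.array([[64.0, 32.0], [0.0, 16.0]])   # toy disparity map
print(disparity_to_depth(disp, focal_px=1000.0, baseline_m=0.12))
```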

3.1.2. RGB-D and Dynamic Visual Sensors

In contrast, RGB-D cameras have the ability to actively perceive depth, primarily relying on infrared structured light projection or time-of-flight (TOF) ranging principles [73] to capture depth information. The structured light method calculates depth by projecting specific infrared light spots onto the target and analyzing the degree of distortion at different distances. TOF sensing, on the other hand, determines the target’s distance based on the time difference between the emission and return of infrared pulses. The relevant sensors are shown in Figure 4.
Therefore, RGB-D cameras do not rely on natural light, allowing for stable depth data acquisition even in overcast, shaded, or low-light environments at night. This enables tasks such as canopy recognition [74], leaf detail perception [75], and fruit detection. Xiao et al. [76] established a Kinect system to capture real-time images and depth information of fruit trees and the spaces between them, enabling real-time analysis of parameters such as tree spacing, leaf area, tree height, canopy shape, and density, as shown in Figure 5a. Yu et al. [77] used an RGB-D camera to capture real-time images and corresponding depth information of winter jujube, achieving detection and localization of the winter jujube, as shown in Figure 5b.
In comparison to LiDAR, RGB-D cameras offer distinct advantages in capturing image texture and facilitating semantic understanding of targets. By simultaneously acquiring both color and depth information, RGB-D systems enable the integration of multiple features, such as color, shape, and texture, for accurate object segmentation and recognition. This capability is particularly valuable for tasks like fruit detection and structural analysis of foliage. Moreover, the compact design and lower cost of RGB-D cameras make them well-suited for deployment on orchard platforms where space and weight constraints are critical. These systems demonstrate robust overall performance, especially in short- to medium-range sensing applications. The applications of RGB-D vision sensors in orchard environmental perception are summarized in Table 3.
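As an illustration of how RGB-D output feeds tasks such as canopy recognition, the following sketch back-projects a depth image into a 3D point cloud using assumed pinhole intrinsics; a real deployment would use the intrinsics reported by the specific camera's SDK.

```python
import numpy as np

def depth_to_points(depth_m, fx, fy, cx, cy):
    """Back-project a depth image (metres) to an N x 3 point cloud.

    Inverse of the pinhole projection: X = (u - cx) * Z / fx, etc.
    Zero-depth pixels (no return) are dropped.
    """
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_m
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]

# Toy 2 x 2 depth image with assumed intrinsics
depth = np.array([[1.5, 0.0], [2.0, 2.2]])
print(depth_to_points(depth, fx=580.0, fy=580.0, cx=1.0, cy=1.0))
```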
In addition, orchard environmental perception not only relies on static structural information but also requires dynamic data collection, which is a key component in implementing precision-targeted spraying technology. Therefore, Dynamic Vision Sensors (DVSs), as an event-driven imaging technology, can achieve real-time dynamic information capture due to their high spatiotemporal resolution and low latency.
Zhang et al. used the Sony IMX596 camera to capture dynamic videos of citrus fruits, enabling real-time dynamic tracking and localization of the fruits [84]. For apples in dynamic environments, Abeyrathna et al. [85] used the GoPro Hero 10 to capture dynamic video data in real time, combined with object detection and tracking methods, achieving fruit localization and tracking. However, because DVS event triggering depends on changes in light intensity, detection of slow-moving or low-contrast targets is weaker, which may lead to the loss of perception information.
Orchard environmental perception involves the acquisition of multidimensional information. Static structural perception (e.g., canopy shape) can be achieved using binocular vision or RGB-D sensors, with the former suitable for complex lighting environments and the latter performing stably under low-light conditions. Target recognition and detection (e.g., pest and disease monitoring and fruit recognition) are primarily conducted using monocular RGB sensors, which offer high resolution and mature computer vision algorithm support. Dynamic information perception requires the use of Dynamic Vision Sensors (DVSs) for high spatiotemporal resolution and low-latency detection. In integrated applications, multi-sensor fusion can be used for orchard environmental perception to meet the accuracy and adaptability requirements.

3.2. LiDAR: Orchard Environmental Perception

3.2.1. Working Principle and Composition

LiDAR (Light Detection and Ranging) is widely adopted for 3D reconstruction in orchard environmental perception due to its high spatial resolution, strong resistance to lighting interference, and large detection range [86,87]. It operates by emitting laser beams toward a target and capturing the reflected signals. By calculating the time-of-flight (ToF) or phase shift, it accurately determines the distance between the object and the sensor, resulting in dense point cloud data [88]. The relevant sensors and their performance parameters are shown in Figure 6 and Table 4.
A typical LiDAR system consists of a laser emitter, receiver, and a scanning module. The ToF approach measures the round-trip time of a laser pulse, providing a wide detection range and robust environmental adaptability. In contrast, the phase-based method calculates distance from the phase shift of the light wave, offering higher accuracy but being more susceptible to noise.
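Both ranging principles reduce to simple relations: pulsed ToF gives d = c·Δt/2, while the phase method gives d = c·Δφ/(4π·f_mod), valid within one ambiguity interval. A minimal sketch with illustrative numbers:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def tof_distance(round_trip_s):
    """Pulsed ToF: distance from the round-trip time of a laser pulse (d = c*t/2)."""
    return C * round_trip_s / 2.0

def phase_distance(phase_shift_rad, mod_freq_hz):
    """Phase-based ranging: distance from the phase shift of an amplitude-
    modulated beam, valid within one ambiguity interval."""
    return C * phase_shift_rad / (4.0 * math.pi * mod_freq_hz)

print(tof_distance(33.4e-9))        # ~5 m target
print(phase_distance(2.1, 10e6))    # ~5 m at 10 MHz modulation
```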
Depending on the application, LiDAR systems employ either single-line or multi-line scanning. Single-line LiDAR uses one laser channel to scan point-by-point and is preferred for localized tasks such as tree structure analysis due to its lower cost and energy consumption [89]. Multi-line LiDAR employs multiple beams to capture high-density data across wider fields of view, making it ideal for large-scale orchard modeling, albeit with increased hardware complexity and power usage [90].

3.2.2. Application of LiDAR in Orchard Environmental Perception

In orchard production, achieving efficient targeted spraying relies on precise environmental perception technologies. Factors such as tree height, canopy shape, and leaf distribution directly affect spray coverage and pesticide deposition efficiency. Therefore, efficiently collecting orchard environmental data can significantly enhance the accuracy of targeted pesticide application.
LiDAR, as a high-precision 3D reconstruction technology, can capture key parameters such as tree canopy volume, trunk distribution, and leaf layers in real-time, reconstructing the spatial structure and geometric form of the target. It generates high-precision point cloud data [91,92,93], providing reliable data support for precise pesticide application. Karolina et al. [94] used a low-altitude platform equipped with a full-waveform LiDAR system (Riegl LMS-Q560) to conduct 3D environmental perception in an orange orchard. They obtained high-precision point cloud data, including information on orange trees, weeds, and the ground surface, and achieved automatic classification of various ground objects based on waveform parameters. This demonstrated the efficiency and applicability of LiDAR in orchard environmental perception in complex environments, as shown in Figure 7a. Joan et al. [95] deployed a 2D LiDAR sensor (Sick LMS200) on an agricultural tractor and conducted point cloud data collection experiments in a pear orchard. They achieved 3D structural reconstruction of fruit trees, crops, and environmental objects. The research showed that LiDAR can quickly and non-destructively obtain geometric parameters of orchard vegetation, including height, width, volume, leaf area index, and leaf area density, generating 3D digital models of plants that match real plant heights, as shown in Figure 7b. The applications of LiDAR in orchard environmental perception are summarized in Table 5.
The point cloud data generated by LiDAR provides the three-dimensional structural characteristics of the orchard environment, including tree spacing, canopy density, and fruit distribution. During targeted spraying, using multi-temporal point cloud data allows for dynamic adjustments of spray volume and spraying angle [102,103], ensuring that droplets accurately cover the target area. Compared to stereo and RGB-D sensors, LiDAR exhibits superior robustness and adaptability in orchard environment perception. It can reliably capture high-precision 3D point cloud data under challenging outdoor conditions such as strong sunlight, shadows, and fog. In contrast, the depth accuracy of stereo and RGB-D sensors is more susceptible to fluctuations in ambient lighting and variations in surface reflectance. Additionally, LiDAR offers extended sensing range and stronger penetration capability, making it particularly suitable for scenarios with significant canopy height variation. These advantages allow LiDAR to acquire more comprehensive spatial structure information, thereby enhancing the spatial alignment accuracy of spray decision-making in precision orchard applications.
Although LiDAR technology offers significant advantages for 3D reconstruction of orchard environments, laser signals can undergo multiple reflections when point cloud data is collected in densely vegetated areas, affecting the completeness and accuracy of the point cloud. Additionally, LiDAR data processing is complex, and the large volume of point cloud data requires efficient algorithms for noise removal, classification, and analysis to ensure the accuracy of environmental information extraction. Therefore, in orchard environmental perception applications, LiDAR technology needs to be combined with other sensing methods (such as RGB cameras, multispectral imaging, or deep learning models) to enhance its comprehensiveness and reliability in environmental perception.

3.3. Multispectral and Hyperspectral Sensors: Crop Health Monitoring and Pest Detection

Spectral imaging technologies—including both multispectral and hyperspectral imaging—are widely applied in orchard production for crop health monitoring and pest detection due to their sensitivity to vegetation spectral reflectance characteristics [104,105]. These sensors capture the reflectance data of crops across multiple spectral bands, ranging from visible to near-infrared wavelengths. Through optical filtering or spectral dispersion mechanisms, incoming light is decomposed into discrete spectral channels, generating images with distinct spectral features that reflect the physiological or pathological states of crops [106]. The relevant sensors are shown in Figure 8.
Multispectral sensors typically consist of several optical channels, each targeting specific wavelengths such as red, green, and near-infrared. They are well-suited for rapid assessment of crop attributes such as leaf moisture content, chlorophyll concentration, and visible disease symptoms like spots or discoloration [107,108]. For instance, Johansen et al. [109] deployed the Parrot Sequoia multispectral imaging sensor on a 3DR Solo quadcopter to collect canopy data of lychee orchards at varying flight altitudes (30/50/70 m), enabling the extraction of structural parameters such as canopy width, height, and plant projection coverage (PPC), as illustrated in Figure 9a. Similarly, Chandel et al. [110] utilized the RedEdge 3 multispectral sensor to acquire spectral imagery of apple orchards, effectively detecting and mapping the spatial distribution of powdery mildew (PM), as shown in Figure 9b. The applications of multispectral sensors in orchard environmental perception are summarized in Table 6.
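As a concrete example of how such band data is converted into a health indicator, the sketch below computes NDVI = (NIR − Red)/(NIR + Red) from co-registered red and near-infrared reflectance arrays; the values and the stress threshold are illustrative only.

```python
import numpy as np

def ndvi(nir, red, eps=1e-6):
    """Normalised Difference Vegetation Index from reflectance arrays (0-1)."""
    nir = np.asarray(nir, dtype=np.float32)
    red = np.asarray(red, dtype=np.float32)
    return (nir - red) / (nir + red + eps)

# Toy reflectance values: healthy canopy pixels show high NIR and low red
nir = np.array([[0.60, 0.55], [0.20, 0.58]])
red = np.array([[0.08, 0.10], [0.18, 0.09]])
index = ndvi(nir, red)
stressed = index < 0.4        # illustrative threshold for flagging weak vigour
print(index.round(2), stressed)
```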
In contrast, hyperspectral sensors offer significantly higher spectral resolution and band density, capturing continuous reflectance data across tens to hundreds of narrow, contiguous bands. This allows for finer discrimination of subtle spectral variations in vegetation, making them particularly suitable for early-stage disease detection, severity classification, and functional leaf status assessment. In orchard environments where pests and diseases often manifest with minimal visual cues, hyperspectral imaging provides a more robust solution for quantitative identification and monitoring, thus demonstrating broader application potential.
Despite these advantages, spectral imaging technologies still face several practical challenges. High hardware costs, complex data processing workflows, and susceptibility to environmental conditions can hinder performance. For example, strong sunlight, shadowing, and sensor noise can degrade the signal-to-noise ratio of the captured imagery, while canopy layering, leaf overlap, and occlusion can compromise reflectance consistency, thereby increasing the difficulty of accurate image interpretation.

3.4. Sensor Data Fusion: Enhancing the Accuracy and Real-Time Performance of Orchard Environmental Perception

The complexity of orchard environments makes it difficult for a single sensor to comprehensively and accurately perceive the target information. Therefore, sensor data fusion technology has become an important research direction for improving the accuracy and real-time performance of orchard environmental perception. Each type of sensor offers distinct advantages based on its sensing modality. For example, LiDAR can provide high-precision spatial information but lacks color features, while RGB cameras can capture rich visible light information but are significantly affected by lighting conditions. By using multi-sensor data fusion technologies and strategies at the data, feature, or decision layers, multiple sources of information can be effectively integrated, improving the stability, accuracy, and robustness of the perception system.

3.4.1. Principles and Methods of Sensor Data Fusion

Sensor data fusion exploits sensor complementarity [115,116] and adopts multi-level fusion strategies to enhance the accuracy of orchard environmental perception [117]. Based on the level of data processing, sensor data fusion is typically divided into three main methods: data-level fusion, feature-level fusion, and decision-level fusion.
Data-level Fusion: This method involves the registration or stitching of raw data to obtain complete environmental information. It provides a rich amount of information but involves a larger computational workload, making it suitable for high-precision orchard modeling and target detection tasks. Jiang et al. [118] used data-level fusion techniques to synchronize and integrate 3D LiDAR, IMU, and encoder data for SLAM mapping and autonomous localization tasks. The results showed that multi-source sensor fusion significantly improved the accuracy of orchard spraying robots’ localization and navigation, as shown in Figure 10.
Feature-level Fusion: This method first extracts key features (such as edges, textures, spectral reflectance, etc.) from the data of each sensor and then performs feature fusion to reduce data size and improve processing efficiency. Lin et al. [119] used a feature-level fusion approach by incorporating a self-attention mechanism to weight and fuse multimodal features, constructing a CNN-SA network that integrates point cloud, control points, RGB images, and depth map features. The results demonstrated that this method could effectively fuse multi-sensor features, significantly improving the accuracy and robustness of 3D reconstruction, as shown in Figure 11.
Decision-level Fusion: After each sensor independently completes detection and recognition tasks, the detection results or confidence levels are then combined and evaluated to improve the accuracy of the final judgment. Basir et al. [120] used a decision-level fusion method to perform fault diagnosis within the feature space of multiple sensors and applied Dempster–Shafer evidence theory to merge belief functions, resulting in more robust fault recognition outcomes. The results showed that this method effectively enhanced the accuracy and reliability of fault identification.
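To make the decision-level idea concrete, the following minimal sketch combines the belief assignments of two hypothetical detectors (e.g., an RGB detector and a spectral classifier) over a two-hypothesis frame using Dempster's rule; real systems typically fuse richer hypothesis sets and more than two sources.

```python
def combine_dempster(m1, m2):
    """Dempster's rule for two belief assignments over the frame
    {'D' (diseased), 'H' (healthy)}, with 'T' as the uncertainty mass.

    Minimal two-hypothesis form of decision-level fusion.
    """
    conflict = m1['D'] * m2['H'] + m1['H'] * m2['D']
    norm = 1.0 - conflict
    return {
        'D': (m1['D'] * m2['D'] + m1['D'] * m2['T'] + m1['T'] * m2['D']) / norm,
        'H': (m1['H'] * m2['H'] + m1['H'] * m2['T'] + m1['T'] * m2['H']) / norm,
        'T': (m1['T'] * m2['T']) / norm,
    }

# Hypothetical per-sensor beliefs (e.g., RGB detector vs. spectral classifier)
rgb_belief = {'D': 0.7, 'H': 0.1, 'T': 0.2}
spec_belief = {'D': 0.6, 'H': 0.2, 'T': 0.2}
print(combine_dempster(rgb_belief, spec_belief))   # fused mass favours 'D'
```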
In orchard environments, real-time performance is a key factor in achieving effective environmental perception. Through parallel computing and distributed sensor networks, data processing efficiency can be significantly improved. Additionally, combining deep learning frameworks (such as Convolutional Neural Networks and Graph Neural Networks) can enhance the accuracy of multi-source data fusion while also increasing the speed of data fusion, thus meeting the high-efficiency environmental perception requirements of orchard operations.

3.4.2. Sensor Data Fusion Combinations and Methods in Orchard Perception

Orchard environmental perception relies on the collaborative work of multiple sensors to overcome the limitations of individual sensors and improve the accuracy and real-time performance of data acquisition. Different types of sensors can be combined based on their characteristics to achieve a comprehensive perception of the orchard environment.
LiDAR (Light Detection and Ranging) can perform point cloud reconstruction of the orchard environment, providing point cloud data [121] that includes spatial structural information of fruit trees, such as canopy volume, leaf density, and branch distribution. Meanwhile, multispectral sensors can capture the spectral characteristics of fruit trees across different bands, reflecting the health status of the crops, such as chlorophyll content and early-stage disease detection. By registering point cloud data and spectral information, both the morphology and health status of fruit trees can be analyzed simultaneously. Chen et al. [122] used a feature-level fusion method to merge canopy structure point cloud features and multispectral vegetation indices at the feature level, employing SVM and KNN to construct a single-tree yield prediction model for apple trees, achieving spatial yield prediction for orchards. The results showed that the fusion model significantly outperformed models based on a single data source in terms of prediction accuracy, effectively improving the accuracy and stability of orchard yield monitoring, as shown in Figure 12.
Visual sensors can capture visible light information such as color and texture, making them suitable for object recognition and localization [123], but they struggle to reflect the physiological state of crops. Multispectral sensors, on the other hand, can perceive information related to vegetation health, moisture, and pest/disease presence across multiple bands [124], but they have lower spatial resolution and less clear structural features. The fusion of both sensor types combines the detailed representation of visual images with the spectral sensitivity of multispectral data, enhancing the accuracy and comprehensiveness of target recognition and crop status perception in orchards.
Li et al. [48] used a data-level fusion method to concatenate RGB images with multispectral vegetation indices at the channel level, incorporating the ReliefF algorithm and channel attention mechanism to enhance the model’s sensitivity to key pest and disease features. The results showed that the fusion model, AMMFNet, significantly improved the accuracy and stability of fruit tree pest and disease diagnosis, as shown in Figure 13a. Li et al. [125] employed a feature-level fusion method, performing spatial registration and feature extraction of RGB and multispectral images, merging color indices, texture features, and multispectral vegetation indices, and combining LightGBM and random forest models to build a leaf area index (LAI) estimation model. The study found that the model integrating multi-source features outperformed models based on single feature sources in both estimation accuracy and stability, as shown in Figure 13b.
Although visual sensors can capture color, texture, and shape information, they lack depth perception capabilities. While depth cameras have some distance measuring ability, their accuracy is limited due to factors such as lighting and occlusion [126]. LiDAR, on the other hand, offers high-precision, interference-resistant 3D ranging capabilities and can accurately reconstruct orchard structures, but it lacks color and texture information, limiting its target recognition performance. Therefore, combining visual sensors with LiDAR allows for complementary strengths between visual information and spatial measurements, significantly enhancing the accuracy and stability of orchard environmental perception [127].
In summary, multi-source sensor data fusion technology can compensate for the limitations of individual sensors, significantly improving the accuracy and real-time responsiveness of orchard environmental perception. With the ongoing iteration of sensor hardware and the deepening development of big data analysis methods, multi-source sensor fusion technology will play a critical role in orchard environmental perception.

4. Environmental Perception for Orchard Targeted Spraying Technology

4.1. Canopy Perception and Targeted Spraying

In orchard operations, accurate identification of the canopy area is a key step in achieving targeted spraying, as it directly affects the spatial distribution of pesticides, deposition efficiency, and spray uniformity. Precise sensing of canopy structures enhances spraying strategy optimization, reduces pesticide waste, improves treatment efficacy, and mitigates environmental impact. Early canopy perception primarily relied on visual sensing technologies, using aerial photography or ground-based visual sensors to capture features such as fruit tree outlines, leaf color, and canopy density.
However, two-dimensional visual perception is limited by the lack of precise depth information, making it difficult to accurately represent the spatial layering of the canopy, branch-leaf occlusion relationships, and volume distribution characteristics. Recently, LiDAR [128] and depth cameras [129], integrated with 3D point cloud processing, have been employed to reconstruct canopy structures with high precision, supplying critical data for targeted spraying. Mahmud et al. [130] used the VLP-16 LiDAR to collect 3D point cloud data of fruit trees. By applying point cloud processing algorithms, they accurately removed trunks, supports, and ground backgrounds, and estimated canopy density and volume. The results showed that this method can accurately reconstruct canopy structures, providing spraying systems with partitioned density data to support multi-nozzle precise control, thus achieving accurate targeted spraying operations, as shown in Figure 14.
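The general idea behind such point-cloud-based canopy estimates can be sketched by voxelizing the foliage points and using occupied-voxel statistics as proxies for canopy volume and density; the voxel size and the synthetic input below are illustrative, and the published pipeline [130] involves additional filtering of trunks, supports, and ground points.

```python
import numpy as np

def canopy_voxel_stats(points_xyz, voxel_m=0.25):
    """Estimate canopy volume and a simple density proxy from foliage points.

    Volume ~= number of occupied voxels * voxel volume;
    density proxy = mean number of returns per occupied voxel.
    """
    pts = np.asarray(points_xyz, dtype=np.float32)
    idx = np.floor(pts / voxel_m).astype(np.int64)          # voxel index per point
    occupied, counts = np.unique(idx, axis=0, return_counts=True)
    volume_m3 = occupied.shape[0] * voxel_m ** 3
    density = counts.mean()
    return volume_m3, density

# Toy foliage cloud: random points inside a 2 m x 2 m x 3 m crown envelope
rng = np.random.default_rng(0)
cloud = rng.uniform([0, 0, 1], [2, 2, 4], size=(5000, 3))
print(canopy_voxel_stats(cloud))
```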
With the development of machine vision and deep learning technologies, deep learning models (such as CNN and Transformer) [131,132,133] have been widely applied in canopy image analysis. Semantic segmentation or object detection methods based on deep learning [134,135] can achieve precise segmentation and extraction of canopy areas, effectively removing non-target backgrounds. Khan et al. [136] used a DS-E24S camera to collect orchard images in real-time and combined the improved YOLOv8 model to build a canopy instance segmentation system, enabling high-precision real-time recognition of fruit tree canopy areas. As shown in Figure 15a, the system was demonstrated to seamlessly integrate with agricultural spraying robots, supporting accurate canopy identification and precise spraying control. Therefore, the study verified the key role of canopy perception in targeted spraying, with its accurate recognition capability significantly reducing the pesticide coverage on non-target areas, thus improving the precision of application and resource utilization efficiency.
Yu et al. [73] used a KinectV2 camera to capture RGB-D images and combined the improved DeepLabv3+ model to achieve precise recognition of orchard tree canopies. By incorporating depth-separable convolution and attention mechanisms and integrating RGB and depth information, the model’s perception ability was significantly enhanced. The model achieved real-time detection at 32.69 FPS and 95.62% mIoU accuracy on the Jetson Xavier NX. The results showed that the inclusion of depth information significantly improved the accuracy of canopy perception, providing reliable targeted spraying data for agricultural spraying robots, as shown in Figure 15b.
Multisensor fusion technology is gradually becoming an important direction for enhancing the capability of canopy structure analysis. Through spatial alignment and joint processing of data from multiple sensing modalities, both spatial and semantic integration can be achieved. At the same time, deep learning facilitates the efficient fusion of multimodal information and the extraction of deep features, enabling models to more comprehensively perceive and represent the structural and morphological features of the canopy.
Wang et al. [127] used a feature-level fusion method, where boundary and texture information extracted from RGB images were projected as initial masks into the LiDAR point cloud space. As shown in Figure 16, kernel density estimation was then combined with an improved Gaussian mixture model (GMM) for precise 3D segmentation of individual tree canopies. The results demonstrated that the multisensor perception strategy significantly improved the accuracy of canopy detection and structural parameter extraction. The application examples of canopy sensing technology for targeted spraying are summarized in Table 7.
Subsequently, the canopy’s morphology, volume, and density are analyzed to precisely characterize the canopy structure. Based on the canopy structural characteristics, variable spraying technology is employed to dynamically adjust the spray volume. Coupled with real-time canopy detection, the nozzle flow rate and spray pattern are adjusted accordingly. In dense canopy regions, where foliage obstructs penetration, pesticide coverage becomes less uniform. Therefore, the spray volume needs to be increased, and the spray angle optimized, ensuring that the pesticide can penetrate the canopy and reach the inner leaves and fruit surfaces. In sparse canopy areas, where there are fewer leaves and higher pest exposure, reducing the spray volume can prevent pesticide waste while minimizing the environmental impact of pesticide residues.
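A minimal sketch of this dose-adjustment logic is given below, mapping a per-zone canopy density estimate to a nozzle flow rate between assumed minimum and maximum values; the thresholds and flow limits are illustrative rather than taken from any specific sprayer.

```python
def flow_rate_for_density(density, d_min=0.1, d_max=0.9,
                          q_min_lpm=0.3, q_max_lpm=1.2):
    """Linearly interpolate nozzle flow (L/min) from a normalised canopy
    density in [0, 1]; below d_min the zone is treated as a gap (no spray)."""
    if density <= d_min:
        return 0.0                      # canopy gap: shut the nozzle
    d = min(density, d_max)
    frac = (d - d_min) / (d_max - d_min)
    return q_min_lpm + frac * (q_max_lpm - q_min_lpm)

for d in (0.05, 0.3, 0.6, 0.95):        # sparse -> dense canopy zones
    print(d, round(flow_rate_for_density(d), 2), "L/min")
```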
In the targeted spraying task, multimodal sensor fusion technology, combined with deep learning models and point cloud data processing techniques, enables high-precision canopy detection, 3D reconstruction, and structural analysis. This allows for the precise characterization of canopy morphology, volume, and density features, providing data support for variable spraying decisions.

4.2. Pest and Disease Area Detection and Precision Application

In orchard management, pests and diseases are key factors affecting crop yield and quality, and the precise identification of pest and disease areas is essential for implementing targeted spraying. Early pest and disease detection relied mainly on manual inspections, but this method is dependent on human experience, with low efficiency and high labor intensity. The advancement of computer vision and AI has enabled the widespread use of aerial and ground-based imagery to capture pathological symptoms, such as leaf discoloration, yellowing, and spotting. These inputs, combined with image processing and deep learning algorithms [142,143], have become the dominant paradigm for pest and disease detection [144].
Torey et al. [145] used RGB images of apple leaves and applied Mask R-CNN combined with the ResNet-50 backbone network to achieve precise detection and segmentation of leaf and rust disease, as shown in Figure 17a. This method provides accurate information, such as disease spot location and area, for the targeted spraying system. Apacionado et al. [146] employed low-cost night vision cameras combined with the YOLOv7 deep learning algorithm to perform high-precision detection of citrus tree canopy sooty mold at night (with an mAP of 74.4%), demonstrating the feasibility of targeted perception with simplified equipment, as shown in Figure 17b.
However, in orchard environments, factors such as lighting changes, shadows, and weather conditions (e.g., haze and strong light reflection) can affect the visibility of disease features in captured leaf images, thereby reducing the accuracy and stability of detection algorithms. Additionally, in high-density vegetation or overlapping leaf areas, disease features may be obstructed, leading to difficulties in accurate recognition and resulting in false positives or missed detections.
Some early-stage diseases (such as fungal infections or viral diseases) lack obvious visual features, making visual detection challenging. Following infection, leaf physiology undergoes measurable changes, such as chlorophyll degradation, water loss, and cellular damage [147]. These changes alter spectral reflectance patterns across multiple wavelengths. Therefore, spectral imaging technology can be used to quantify the pathological changes in leaves, enabling early detection and warning of diseases [148]. Furthermore, hyperspectral imaging technology can capture spectral anomalies in leaves at different levels of the canopy, accurately mapping disease areas and enhancing the deep perception and spatial resolution of disease detection.
Di Nisio et al. [149] used a multispectral camera to capture multispectral information from olive tree canopies and combined it with 3D reconstruction technology. By integrating NDVI, NIR, and other multi-source features and applying the LDA classification algorithm, they achieved precise identification of diseased trees (sensitivity of 98% and accuracy of 100%). This method provides technical support for the precise localization of disease areas and targeted spraying in orchards, as shown in Figure 18a. Zhang et al. [150] used hyperspectral imaging to build a multi-source fusion model integrating spectral features, vegetation indices, and texture features, and combined it with BPNN for early identification of pear leaf anthracnose. Additionally, SMACC and SID methods were introduced to visualize and locate disease spots in the asymptomatic stage. The results showed that this method could accurately locate disease areas, providing data support for targeted spraying, as shown in Figure 18b. The application examples of pest and disease area sensing technology for targeted spraying are summarized in Table 8.
To address the limitations of single-sensor approaches, researchers have adopted multi-sensor fusion strategies. By integrating multimodal data, such as RGB and hyperspectral imagery, these systems have improved the accuracy and stability of pest and disease detection. Additionally, by combining deep learning algorithms, the system can automatically learn pest and disease features, enhancing the ability to precisely identify disease areas. Furthermore, multimodal data can be fused at the feature level to integrate data from different sensors, enhancing the discernibility of disease features, or at the decision level to combine multiple detection results, improving disease classification accuracy.
Yang et al. [151] proposed a multi-source data fusion decision method based on ShuffleNet V2, which integrates visible light (RGB) and multispectral (MSI) data for feature extraction and decision optimization, achieving precise recognition of grape leaves. This method combines discriminative features from different modalities, addressing the issue of insufficient adaptability of a single sensor in complex field environments. As shown in Figure 19, experimental results confirmed that the fusion model significantly outperformed single-modality baselines in accuracy, stability, and precision of disease recognition.
Table 8. Application examples of pest and disease area sensing technology for targeted spraying.
Researcher | Research Object | Research Method | Research Objective | Ref.
Zhang et al. | Orchard pests | YOLOv5 + GhostNet | Real-time pest detection | [152]
Luo et al. | Citrus pests and diseases | Light-SA YOLOv8 | Real-time pest and disease recognition | [153]
Chao et al. | Apple tree leaf diseases | XDNet | Disease identification | [154]
Sun et al. | Apple tree leaf diseases | MEAN-SSD | Real-time disease detection | [155]
Building on accurate disease detection, variable application technology employs an on-demand spraying strategy that targets disease areas for precise pesticide application, avoiding large-area spraying. Meanwhile, multi-nozzle control allows for adjustments in the spray volume based on disease distribution, achieving precise coverage. Unlike conventional uniform-dosage spraying, precision application differentiates treatment intensity based on outbreak severity and spatial location. This approach reduces pesticide overuse and minimizes environmental contamination and phytotoxicity risks.
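To make the multi-nozzle idea concrete, the sketch below divides a binary disease mask into vertical zones aligned with a hypothetical nozzle bank and opens only the nozzles whose zone exceeds an assumed infection-fraction threshold.

```python
import numpy as np

def nozzle_commands(disease_mask, n_nozzles=4, min_fraction=0.02):
    """Map a binary disease mask (H x W) to per-nozzle on/off commands.

    The image width is split into n_nozzles vertical zones (one per nozzle
    of a hypothetical boom); a nozzle opens if the diseased-pixel fraction
    in its zone exceeds min_fraction.
    """
    mask = np.asarray(disease_mask, dtype=bool)
    zones = np.array_split(mask, n_nozzles, axis=1)
    return [float(z.mean()) > min_fraction for z in zones]

# Toy mask: disease concentrated on the left side of the image
mask = np.zeros((120, 160), dtype=bool)
mask[40:80, 10:50] = True
print(nozzle_commands(mask))   # -> [True, True, False, False]
```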
In summary, the core of pest and disease area detection and precise application lies in the integration of multimodal perception and intelligent decision-making. Leveraging cutting-edge algorithms such as computer vision and deep learning can significantly enhance the accuracy and stability of pest and disease early warning systems and precise control measures.

4.3. Weed and Non-Target Area Recognition and Targeted Weed Control

In orchard management, weeds compete with fruit trees for nutrients and water and are often hosts for pests and diseases, which accelerates the spread of diseases and severely impacts tree health and yield. Traditional weed control methods rely on large-area herbicide spraying or mechanical cutting. Although these methods are simple to implement, they often lead to herbicide waste, environmental pollution, and potential pesticide damage to the fruit trees. Therefore, adopting environmental sensing to accurately distinguish weeds from non-target areas is crucial for enhancing control efficiency, reducing chemical usage, and mitigating ecological impact.
Early weed detection primarily relied on visual sensors, classifying weeds based on color and texture features in images. RGB cameras were used to capture orchard ground images [156], and traditional image processing methods such as threshold segmentation and edge detection were employed to differentiate weeds from tree vegetation. Chen et al. [157] proposed a weed detection method based on RGB image color features. By enhancing contrast using the super red and super green channels and applying the OTSU method to extract vegetation areas, they identified major weeds (e.g., lambsquarters) using color distribution and RGB component differences. The identification accuracy reached 82.1%, as shown in Figure 20. The results indicate that this method is suitable for weed detection in complex environments and provides technical support for targeted weed control.
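The color-index principle described above can be illustrated with the widely used excess-green index (ExG = 2G − R − B) followed by Otsu thresholding; this generic sketch is in the spirit of the super-green approach but is not the exact pipeline of [157].

```python
import cv2
import numpy as np

def vegetation_mask_exg(bgr_image):
    """Segment green vegetation with the excess-green index and Otsu's method.

    ExG = 2*G - R - B on normalised channels; a generic colour-index
    baseline for separating vegetation from soil background.
    """
    img = bgr_image.astype(np.float32) / 255.0
    b, g, r = cv2.split(img)
    exg = 2.0 * g - r - b
    exg8 = cv2.normalize(exg, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    _, mask = cv2.threshold(exg8, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return mask                         # 255 = vegetation (candidate weed) pixels

image = cv2.imread("orchard_ground.jpg")          # hypothetical inter-row image
if image is not None:
    weed_mask = vegetation_mask_exg(image)
    cv2.imwrite("weed_mask.png", weed_mask)
```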
However, this method is significantly affected by lighting variations, vegetation density, and the complexity of the ground surface, making it difficult to meet the precise recognition requirements in large-scale and complex environments.
To improve detection stability, researchers have introduced multispectral and hyperspectral imaging technologies, which use reflectance features from different spectral bands to achieve precise weed identification. Due to notable differences in near-infrared reflectance between healthy fruit trees and weeds, hyperspectral imaging improves classification by extracting vegetation indices such as NDVI and SAVI. Su et al. [158] used a five-band multispectral camera to capture ryegrass multispectral data, combining spectral vegetation indices and machine learning algorithms to achieve accurate weed identification and mapping. The results showed that the RedEdge and near-infrared bands significantly enhanced the recognition of ryegrass (with average precision and recall rates exceeding 93%), providing technical support for precise weed monitoring and targeted weed control management, as shown in Figure 21a. Zisi et al. [159] used a multispectral camera to capture vegetation images, merging texture features and vegetation height information to construct a multi-source data fusion model for high-precision weed recognition. The study showed that the multispectral band fusion strategy, combined with texture or height auxiliary information, significantly improved recognition performance, achieving an accuracy rate of 95.5%. Compared to traditional spectral feature-based classification methods, this fusion approach enhanced the ability to conduct detailed weed recognition in the field, as shown in Figure 21b.
However, hyperspectral data processing comes with high costs and large computational demands, limiting its widespread use in real-time field applications.
With the development of machine vision and artificial intelligence, researchers have begun to integrate visual sensors with deep learning algorithms [160,161]. Through end-to-end learning, they optimize feature extraction and improve the ability to distinguish between weeds and fruit tree vegetation [162,163]. Compared to traditional methods, deep learning algorithms exhibit greater robustness under complex lighting conditions, effectively mitigating the impact of lighting variations on recognition accuracy in natural environments. Sampurno et al. [164] used the YOLOv5/YOLOv8 instance segmentation algorithm to achieve between-row weed recognition in orchards, as shown in Figure 22. Under conditions of complex occlusion and uneven lighting, the model achieved an mAP@0.5 exceeding 80%. The results demonstrate that the study successfully enabled multi-object detection of weeds, tree trunks, and support poles, while also exhibiting good lighting adaptability. The application examples of weed sensing technology for targeted spraying are summarized in Table 9.
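For orientation, a minimal inference sketch using the Ultralytics YOLO segmentation API is shown below; the weight file and image path are placeholders, and the cited study trained its own models, so this only illustrates the general usage pattern.

```python
from ultralytics import YOLO

# Hypothetical weights fine-tuned on orchard weed/trunk/support-pole classes
model = YOLO("orchard_weed_seg.pt")

# Run instance segmentation on an inter-row image (path is a placeholder)
results = model.predict("inter_row.jpg", conf=0.25)

for r in results:
    if r.masks is None:
        continue
    for box, cls in zip(r.boxes, r.boxes.cls):
        name = model.names[int(cls)]
        print(name, box.xyxy[0].tolist())   # class label and bounding box
```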
Based on precise weed detection, targeted weed control can be carried out using variable spraying technology. The spraying rate is adjusted according to spatial weed distribution, confining herbicide exposure to target areas. This minimizes unintended effects on fruit tree root systems and soil microbial communities. Additionally, for non-target areas such as working pathways and field embankments, low-dose spraying or physical removal strategies can be employed to further reduce pesticide waste.
In summary, the precise recognition of weeds and non-target areas, along with targeted weed control technology, is undergoing an evolution—from traditional visual perception to multispectral detection, and now to 3D perception and deep learning—continuously improving the precision and intelligence of orchard weed management. In the future, upgrades in sensor hardware, optimization of intelligent algorithms, and the development of automated equipment will further enhance the precision and sustainability of orchard weed management, promoting the development of green smart agriculture.

5. Challenges and Future Development Directions

5.1. Technical Adaptability in Orchard Complex Environments

The complex canopy architecture and fluctuating environmental conditions in orchards significantly hinder the accuracy and adaptability of sensing and targeted spraying systems. In addition, fruit tree varieties differ in environmental sensitivity, resulting in spatially heterogeneous pest and disease outbreaks. These variations further complicate the precision detection necessary for effective targeted spraying.
Environmental Interference and Data Reliability: Factors such as dust, humidity changes, and branch-leaf occlusion in orchards can interfere with visual and LiDAR data collection, creating perception blind spots or errors. Additionally, strong sunlight or shadow conditions may cause multispectral distortion, which requires corresponding optical compensation or processing algorithms for correction.
Hardware Durability and Versatility: Some high-precision sensors, such as LiDAR and multispectral imaging equipment, are expensive (as shown in Table 10) and have high requirements for working environment conditions, limiting their application under the complex conditions of orchards. Orchard work environments often feature high humidity, rain, fog, and muddy conditions, so the sensors used need to have high durability and adaptability.
Platform and Sensor Coupling: UAV platforms must balance flight time and payload capacity, while ground agricultural machines often face obstacles from uneven terrain. Therefore, the way sensors are mounted must be compatible with the platform to ensure stable data collection, sufficient coverage, and overall reliability of the system.
To address these issues, future development should focus on optimizing hardware design to resist environmental interference, improving multi-source sensor data fusion algorithms, and enhancing the adaptability of work platforms. This will ensure the stable operation of orchard environmental perception and targeted spraying systems under complex environmental conditions.

5.2. Real-Time Data Processing and Big Data Analysis Challenges

As the scope of orchard environmental perception expands, the number and variety of sensors continuously increase, leading to significant growth in data volume and dimensionality. Effectively preprocessing, fusing, and conducting deep analysis on multi-source data under limited computational power has become a critical factor restricting real-time decision-making for targeted spraying.
Data Throughput and Latency: The acquisition and transmission of high-resolution video data, 3D point clouds, and spectral data require high bandwidth and large-scale storage resources [169]. In remote areas like orchards, limited network connectivity can result in high latency or data loss during real-time cloud processing of sensor data, which in turn affects the system’s real-time performance and stability.
Algorithm Complexity and Model Deployment: Targeted spraying systems typically integrate deep learning modules for object detection and data analysis, resulting in high computational complexity for the models. Relying entirely on cloud computing could lead to response delays and increased data security risks. Therefore, edge computing technology should be utilized to process some real-time data locally, enhancing the system’s responsiveness and reliability.
Data Fusion and Fault Tolerance: Multi-source sensor data may suffer from temporal and spatial synchronization errors, as well as measurement deviations caused by hardware differences, which necessitates high-precision registration algorithms and fault-tolerance mechanisms. In dynamic orchard work environments, sensor dropout, outliers, and data loss are inevitable, so robust algorithms are required to ensure continuous and reliable system operation (a minimal synchronization sketch follows this subsection).
To address these challenges, orchard environmental perception systems need to optimize edge computing architectures, communication protocols, and data compression and processing algorithms. By integrating artificial intelligence and distributed learning technologies [170], these systems can enhance real-time processing capabilities while ensuring the accuracy of data fusion and the stability of system operation.
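To illustrate the synchronization and fault-tolerance issue noted under Data Fusion and Fault Tolerance, the sketch below pairs two asynchronous sensor streams by nearest timestamp and discards pairs whose time offset exceeds a tolerance, so that dropped frames in one stream do not corrupt the fused result; the stream rates, the simulated dropout, and the 50 ms tolerance are illustrative assumptions.

```python
import numpy as np

def pair_by_timestamp(t_lidar: np.ndarray,
                      t_camera: np.ndarray,
                      max_offset: float = 0.05) -> list:
    """Pair each LiDAR frame with the nearest camera frame in time.

    Pairs whose absolute offset exceeds max_offset (seconds) are discarded,
    which tolerates frame drops in either stream instead of propagating them.
    """
    pairs = []
    for i, t in enumerate(t_lidar):
        j = int(np.argmin(np.abs(t_camera - t)))
        if abs(t_camera[j] - t) <= max_offset:
            pairs.append((i, j))
    return pairs


# Illustrative streams: 10 Hz LiDAR and 30 Hz camera with a dropped burst.
t_lidar = np.arange(0.0, 1.0, 0.1)
t_camera = np.delete(np.arange(0.0, 1.0, 1.0 / 30.0), slice(12, 18))
print(pair_by_timestamp(t_lidar, t_camera))
```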

5.3. Technological Innovation and Future Outlook in Sustainable Agricultural Development

The integration of environmental perception and targeted spraying technologies has become essential for optimizing resource use, reducing chemical inputs, and improving crop quality. To meet the goals of sustainable agriculture, future developments should focus on the following key directions:
Green and Low-Carbon Development: Environmentally driven spraying technologies can precisely identify target areas and dynamically optimize application strategies, thereby enhancing pesticide use efficiency, minimizing chemical overuse, and reducing non-point source pollution. Promoting renewable energy in agricultural machinery will further lower carbon emissions and support low-carbon development.
Intelligent and Autonomous Operations: The fusion of robotics and autonomous navigation enables coordinated aerial–ground operations in orchards. This reduces labor dependence, lowers costs, and improves operational precision, supporting intelligent orchard management.
Data-Driven and Platform-Based Management: A unified data platform with standardized interfaces can facilitate interoperability across sensors and equipment. Leveraging big data analytics and cloud computing, such platforms support large-scale monitoring, remote diagnostics, and collaborative decision-making (a minimal interface sketch is shown below).
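As a minimal illustration of what such a standardized interface could look like, the sketch below defines a simple, serializable observation record that heterogeneous sensing modules could publish to a shared platform; the field names and units are assumptions for illustration and do not correspond to an existing standard.

```python
from dataclasses import dataclass, asdict
import json
import time

@dataclass
class SensorObservation:
    """A minimal, platform-neutral record a sensing module could publish."""
    sensor_id: str      # e.g., "lidar_front" or "ms_cam_1" (hypothetical IDs)
    sensor_type: str    # "lidar" | "rgb" | "multispectral"
    timestamp: float    # UNIX time in seconds
    target: str         # "canopy" | "weed" | "disease"
    value: float        # task-specific scalar, e.g., canopy volume
    unit: str           # unit of the value, e.g., "m^3"

obs = SensorObservation("lidar_front", "lidar", time.time(), "canopy", 12.7, "m^3")
print(json.dumps(asdict(obs)))  # payload ready for an MQTT/HTTP interface
```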
Looking ahead, orchard management technologies will continue evolving toward green, efficient, and intelligent paradigms. With ongoing advances in sensor fusion, edge computing, and AI, orchard operations are poised to become increasingly autonomous and data driven.
In addition, stronger integration between sensing technologies and commercially available smart sprayers is essential. Systems such as GUSS autonomous sprayers, Pulverizadores Fede’s Smartomizer H3O, and Bosch–BASF’s Smart Spray have demonstrated the feasibility of perception-based variable spraying in orchard settings. These platforms combine real-time sensing, automated control, and precision decision-making. However, current research largely focuses on isolated modules under experimental conditions. Future efforts should prioritize field validation, evaluate robustness across orchard types, and establish standardized interfaces to bridge perception modules with commercial spraying platforms. Such integration will enhance practical applicability and accelerate the adoption of intelligent spraying systems in modern orchard production.

6. Conclusions

This paper focuses on orchard environmental perception-driven targeted spraying technology, systematically reviewing the core application scenarios of environmental perception technologies and discussing key sensors (such as visual sensors, LiDAR, and multispectral sensors) and their data fusion strategies. The review shows that multi-source perception methods, which integrate RGB images, 3D point clouds, and spectral information into real-time environmental data, can significantly improve the accuracy of pest and disease detection, canopy structure analysis, and weed recognition, providing precise decision support for targeted spraying operations.
Currently, technological developments mainly exhibit the following characteristics:
Sensor Diversification and Integration: High-precision sensors, such as visual sensors, LiDAR, and multispectral sensors, have been widely applied in orchards. Multi-source data fusion technologies continue to improve, enhancing the environmental adaptability and data reliability of operational systems.
Maturing Intelligent Algorithms: Deep learning-based target recognition and data analysis algorithms have demonstrated high accuracy and efficiency in orchard environmental perception. However, such algorithms still face practical issues such as high computational load, large data requirements, and limited model generalization capabilities.
Expanding Targeted Spraying Applications: Practical achievements have been made in areas such as variable spraying, precise pest and disease control, and targeted weed control. However, further optimization is needed in terms of adaptability in complex environments and large-scale implementation.
In summary, while orchard environmental perception and targeted spraying technologies have made significant progress, achieving large-scale, full-process intelligent orchard management still requires ongoing optimization and breakthroughs in hardware durability, algorithm real-time performance, and system integration.

Author Contributions

Conceptualization, Y.W. and W.J.; methodology, Y.W.; software, X.D.; validation, S.D., Y.W., and Z.Z.; formal analysis, Y.W.; investigation, M.O.; resources, W.J.; data curation, Y.W.; writing—original draft preparation, Y.W.; writing—review and editing, Y.W.; visualization, Z.Z.; supervision, X.D.; project administration, M.O.; funding acquisition, W.J. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Priority Academic Program Development of Jiangsu Higher Education Institutions (grant number: PAPD-2023-87); the project "Research and Development of Mechanized Technology and Equipment for Key Stages in the Production of Wine Grapes and Daylilies, and Demonstration" (8331203012); and the 2021 Provincial Key Laboratory project "Research and Development of Key Technologies for Rice-Wheat Water, Fertilizer, and Pesticide Compound Operations and Integration of Intelligent Equipment" (8261200003).

Data Availability Statement

The data presented in this study are available on request from the corresponding author. The data are not publicly available due to confidentiality and privacy restrictions.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Endalew, A.M.; Debaer, C.; Rutten, N.; Vercammen, J.; Delele, M.A.; Ramon, H.; Verboven, P. Modelling pesticide flow and deposition from air-assisted orchard spraying in orchards: A new integrated CFD approach. Agric. For. Meteorol. 2010, 150, 1383–1392. [Google Scholar] [CrossRef]
  2. Xun, L.; Campos, J.; Salas, B.; Fabregas, F.X.; Zhu, H.; Gil, E. Advanced spraying systems to improve pesticide saving and reduce spray drift for apple orchards. Precis. Agric. 2023, 24, 1526–1546. [Google Scholar] [CrossRef]
  3. Brown, D.L.; Giles, D.K.; Oliver, M.N.; Klassen, P. Targeted spray technology to reduce pesticide in runoff from dormant orchards. Crop Prot. 2008, 27, 545–552. [Google Scholar] [CrossRef]
  4. Lin, J.; Cai, J.; Ouyang, J.; Xiao, L.; Qiu, B. The Influence of Electrostatic Spraying with Waist-Shaped Charging Devices on the Distribution of Long-Range Air-Assisted Spray in Greenhouses. Agronomy 2024, 14, 2278. [Google Scholar] [CrossRef]
  5. Giles, D.K.; Klassen, P.; Niederholzer, F.J.; Downey, D. “Smart” sprayer technology provides environmental and economic benefits in California orchards. Calif. Agric. 2011, 65, 85–89. [Google Scholar] [CrossRef]
  6. Wang, G.; Lan, Y.; Qi, H.; Chen, P.; Hewitt, A.; Han, Y. Field evaluation of an unmanned aerial vehicle (UAV) sprayer: Effect of spray volume on deposition and the control of pests and disease in wheat. Pest Manag. Sci. 2019, 75, 1546–1555. [Google Scholar] [CrossRef]
  7. Schatke, M.; Ulber, L.; Kämpfer, C.; Redwitz, C. Estimation of weed distribution for site-specific weed management—Can Gaussian copula reduce the smoothing effect? Precis. Agric. 2025, 26, 37. [Google Scholar] [CrossRef]
  8. Xie, Q.; Song, M.; Wen, T.; Cao, W.; Zhu, Y.; Ni, J. An intelligent spraying system for weeds in wheat fields based on a dynamic model of droplets impacting wheat leaves. Front. Plant Sci. 2024, 15, 1420649. [Google Scholar] [CrossRef]
  9. Owen-Smith, P.; Perry, R.; Wise, J.; Jamil, R.Z.R.; Gut, L.; Sundin, G.; Grieshop, M. Spray coverage and pest management efficacy of a solid set canopy delivery system in high density apples. Pest Manag. Sci. 2019, 75, 3050–3059. [Google Scholar] [CrossRef]
  10. Grella, M.; Gallart, M.; Marucco, P.; Balsari, P.; Gil, E. Ground Deposition and Airborne Spray Drift Assessment in Vineyard and Orchard: The Influence of Environmental Variables and Sprayer Settings. Sustainability 2017, 9, 728. [Google Scholar] [CrossRef]
  11. Wang, S.; Song, J.; Qi, P.; Yuan, C.; Wu, H.; Zhang, L.; He, X. Design and development of orchard autonomous navigation spray system. Front. Plant Sci. 2022, 13, 960686. [Google Scholar] [CrossRef] [PubMed]
  12. Seol, J.; Kim, J.; Son, H.I. Field evaluations of a deep learning-based intelligent spraying robot with flow control for pear orchards. Precis. Agric. 2022, 23, 712–732. [Google Scholar] [CrossRef]
  13. Osterman, A.; Godeša, T.; Hočevar, M.; Širok, B.; Stopar, M. Real-time positioning algorithm for variable-geometry air-assisted orchard sprayer. Comput. Electron. Agric. 2013, 98, 175–182. [Google Scholar] [CrossRef]
  14. Zheng, K.; Zhao, X.; Han, C.; He, Y.; Zhai, C.; Zhao, C. Design and Experiment of an Automatic Row-Oriented Spraying System Based on Machine Vision for Early-Stage Maize Corps. Agriculture 2023, 13, 691. [Google Scholar] [CrossRef]
  15. Zhu, C.; Hao, S.; Liu, C.; Wang, Y.; Jia, X.; Xu, J.; Guo, S.; Huo, J.; Wang, W. An Efficient Computer Vision-Based Dual-Face Target Precision Variable Spraying Robotic System for Foliar Fertilisers. Agronomy 2024, 14, 2770. [Google Scholar] [CrossRef]
  16. Román, C.; Peris, M.; Esteve, J.; Tejerina, M.; Cambray, J.; Vilardell, P.; Planas, S. Pesticide dose adjustment in fruit and grapevine orchards by DOSA3D: Fundamentals of the system and on-farm validation. Sci. Total Environ. 2022, 808, 152158. [Google Scholar] [CrossRef]
  17. Wang, S.; Wang, W.; Lei, X.; Wang, S.; Li, X.; Norton, T. Canopy Segmentation Method for Determining the Spray Deposition Rate in Orchards. Agronomy 2022, 12, 1195. [Google Scholar] [CrossRef]
  18. Holterman, H.J.; Zande, J.C.; Huijsmans, J.F.; Wenneker, M. An empirical model based on phenological growth stage for predicting pesticide spray drift in pome fruit orchards. Biosyst. Eng. 2017, 154, 46–61. [Google Scholar] [CrossRef]
  19. Ma, J.; Liu, K.; Dong, X.; Huang, X.; Ahmad, F.; Qiu, B. Force and motion behaviour of crop leaves during spraying. Biosyst. Eng. 2023, 235, 83–99. [Google Scholar] [CrossRef]
  20. Wang, A.; Li, W.; Men, X.; Gao, B.; Xu, Y.; Wei, X. Vegetation detection based on spectral information and development of a low-cost vegetation sensor for selective spraying. Pest Manag. Sci. 2022, 78, 2467–2476. [Google Scholar] [CrossRef]
  21. Liu, L.; Liu, Y.; He, X.; Liu, W. Precision Variable-Rate Spraying Robot by Using Single 3D LIDAR in Orchards. Agronomy 2022, 12, 2509. [Google Scholar] [CrossRef]
  22. Li, L.; He, X.; Song, J.; Liu, Y.; Zeng, A.; Liu, Y.; Liu, Z. Design and experiment of variable rate orchard sprayer based on laser scanning sensor. Int. J. Agric. Biol. Eng. 2018, 11, 101–108. [Google Scholar] [CrossRef]
  23. Salas, B.; Salcedo, R.; Garcia-Ruiz, F.; Gil, E. Design, implementation and validation of a sensor-based precise airblast sprayer to improve pesticide applications in orchards. Precis. Agric. 2024, 25, 865–888. [Google Scholar] [CrossRef]
  24. Berk, P.; Hocevar, M.; Stajnko, D.; Belsak, A. Development of alternative plant protection product application techniques in orchards, based on measurement sensing systems: A review. Comput. Electron. Agric. 2016, 124, 273–288. [Google Scholar] [CrossRef]
  25. Dang, F.; Chen, D.; Lu, Y.; Li, Z. YOLOWeeds: A novel benchmark of YOLO object detectors for multi-class weed detection in cotton production systems. Comput. Electron. Agric. 2023, 205, 107655. [Google Scholar] [CrossRef]
  26. Pantazi, X.E.; Moshou, D.; Tamouridou, A.A. Automated leaf disease detection in different crop species through image features analysis and One Class Classifiers. Comput. Electron. Agric. 2019, 156, 96–104. [Google Scholar] [CrossRef]
  27. Gu, C.; Zou, W.; Wang, X.; Chen, L.; Zhai, C. Wind loss model for the thick canopies of orchard trees based on accurate variable spraying. Front. Plant Sci. 2022, 13, 1010540. [Google Scholar] [CrossRef]
  28. Liu, H.; Du, Z.; Shen, Y.; Du, W.; Zhang, X. Development and evaluation of an intelligent multivariable spraying robot for orchards and nurseries. Comput. Electron. Agric. 2024, 222, 109056. [Google Scholar] [CrossRef]
  29. Chen, Q.; Xie, Y.; Guo, S.; Bai, J.; Shu, Q. Sensing system of environmental perception technologies for driverless vehicle: A review of state of the art and challenges. Sens. Actuators A Phys. 2021, 319, 112566. [Google Scholar] [CrossRef]
  30. Wang, D.; Li, W.; Liu, X.; Li, N.; Zhang, C. UAV environmental perception and autonomous obstacle avoidance: A deep learning and depth camera combined solution. Comput. Electron. Agric. 2020, 175, 105523. [Google Scholar] [CrossRef]
  31. Ye, L.; Wu, F.; Zou, X.; Li, J. Path planning for mobile robots in unstructured orchard environments: An improved kinematically constrained bi-directional RRT approach. Comput. Electron. Agric. 2023, 215, 108453. [Google Scholar] [CrossRef]
  32. Tang, Y.; Qiu, J.; Zhang, Y.; Wu, D.; Cao, Y.; Zhao, K.; Zhu, L. Optimization strategies of fruit detection to overcome the challenge of unstructured background in field orchard environment: A review. Precis. Agric. 2023, 24, 1183–1219. [Google Scholar] [CrossRef]
  33. Chen, W.; Lu, S.; Liu, B.; Li, G.; Qian, T. Detecting citrus in orchard environment by using improved YOLOv4. Sci. Program. 2020, 2020, 8859237. [Google Scholar] [CrossRef]
  34. Kang, H.; Chen, C. Fruit Detection and Segmentation for Apple Harvesting Using Visual Sensor in Orchards. Sensors 2019, 19, 4599. [Google Scholar] [CrossRef]
  35. Jiang, A.; Ahamed, T. Navigation of an Autonomous Spraying Robot for Orchard Operations Using LiDAR for Tree Trunk Detection. Sensors 2023, 23, 4808. [Google Scholar] [CrossRef] [PubMed]
  36. Zheng, C.; Abd-Elrahman, A.; Whitaker, V.M.; Dalid, C. Deep learning for strawberry canopy delineation and biomass prediction from high-resolution images. Plant Phenomics 2022, 2022, 9850486. [Google Scholar] [CrossRef]
  37. Sun, G.; Wang, X.; Ding, Y.; Lu, W.; Sun, Y. Remote Measurement of Apple Orchard Canopy Information Using Unmanned Aerial Vehicle Photogrammetry. Agronomy 2019, 9, 774. [Google Scholar] [CrossRef]
  38. Chen, X.; Jiang, K.; Zhu, Y.; Wang, X.; Yun, T. Individual Tree Crown Segmentation Directly from UAV-Borne LiDAR Data Using the PointNet of Deep Learning. Forests 2021, 12, 131. [Google Scholar] [CrossRef]
  39. Liu, Y.; You, H.; Tang, X.; You, Q.; Huang, Y.; Chen, J. Study on Individual Tree Segmentation of Different Tree Species Using Different Segmentation Algorithms Based on 3D UAV Data. Forests 2023, 14, 1327. [Google Scholar] [CrossRef]
  40. Zhang, Y.; Zhang, B.; Shen, C.; Liu, H.; Huang, J.; Tian, K.; Tang, Z. Review of the field environmental sensing methods based on multi-sensor information fusion technology. Int. J. Agric. Biol. Eng. 2024, 17, 1–13. [Google Scholar]
  41. Ren, Y.; Huang, X.; Aheto, J.H.; Wang, C.; Ernest, B.; Tian, X.; Wang, C. Application of volatile and spectral profiling together with multimode data fusion strategy for the discrimination of preserved eggs. Food Chem. 2021, 343, 128515. [Google Scholar] [CrossRef] [PubMed]
  42. Li, L.; Xie, S.; Ning, J.; Chen, Q.; Zhang, Z. Evaluating green tea quality based on multisensor data fusion combining hyperspectral imaging and olfactory visualization systems. J. Sci. Food Agric. 2019, 99, 1787–1794. [Google Scholar] [CrossRef] [PubMed]
  43. Zhou, X.; Sun, J.; Tian, Y.; Wu, X.; Dai, C.; Li, B. Spectral classification of lettuce cadmium stress based on information fusion and VISSA-GOA-SVM algorithm. J. Food Process Eng. 2019, 42, e13085. [Google Scholar] [CrossRef]
  44. Zhu, W.; Feng, Z.; Dai, S.; Zhang, P.; Wei, X. Using UAV Multispectral Remote Sensing with Appropriate Spatial Resolution and Machine Learning to Monitor Wheat Scab. Agriculture 2022, 12, 1785. [Google Scholar] [CrossRef]
  45. Qin, H.; Zhou, W.; Yao, Y.; Wang, W. Individual tree segmentation and tree species classification in subtropical broadleaf forests using UAV-based LiDAR, hyperspectral, and ultrahigh-resolution RGB data. Remote Sens. Environ. 2022, 280, 113143. [Google Scholar] [CrossRef]
  46. Zhu, W.; Li, J.; Li, L.; Wang, A.; Wei, X.; Mao, H. Nondestructive diagnostics of soluble sugar, total nitrogen and their ratio of tomato leaves in greenhouse by polarized spectra–hyperspectral data fusion. Int. J. Agric. Biol. Eng. 2020, 13, 189–197. [Google Scholar] [CrossRef]
  47. Guan, H.; Deng, H.; Ma, X.; Zhang, T.; Zhang, Y.; Zhu, T.; Lu, Y. A corn canopy organs detection method based on improved DBi-YOLOv8 network. Eur. J. Agron. 2024, 154, 127076. [Google Scholar] [CrossRef]
  48. Li, H.; Tan, B.; Sun, L.; Liu, H.; Zhang, H.; Liu, B. Multi-Source Image Fusion Based Regional Classification Method for Apple Diseases and Pests. Appl. Sci. 2024, 14, 7695. [Google Scholar] [CrossRef]
  49. Duan, Y.; Han, W.; Guo, P.; Wei, X. YOLOv8-GDCI: Research on the Phytophthora Blight Detection Method of Different Parts of Chili Based on Improved YOLOv8 Model. Agronomy 2024, 14, 2734. [Google Scholar] [CrossRef]
  50. Ji, W.; Gao, X.; Xu, B.; Pan, Y.; Zhang, Z.; Zhao, D. Apple target recognition method in complex environment based on improved YOLOv4. J. Food Process Eng. 2021, 44, e13866. [Google Scholar] [CrossRef]
  51. Ji, W.; Pan, Y.; Xu, B.; Wang, J. A real-time apple targets detection method for picking robot based on ShufflenetV2-YOLOX. Agriculture 2022, 12, 856. [Google Scholar] [CrossRef]
  52. Zhang, F.; Chen, Z.; Ali, S.; Yang, N.; Fu, S.; Zhang, Y. Multi-class detection of cherry tomatoes using improved Yolov4-tiny model. Int. J. Agric. Biol. Eng. 2023, 16, 225–231. [Google Scholar]
  53. Pei, H.; Sun, Y.; Huang, H.; Zhang, W.; Sheng, J.; Zhang, Z. Weed Detection in Maize Fields by UAV Images Based on Crop Row Preprocessing and Improved YOLOv4. Agriculture 2022, 12, 975. [Google Scholar] [CrossRef]
  54. Deng, L.; Miao, Z.; Zhao, X.; Yang, S.; Gao, Y.; Zhai, C.; Zhao, C. HAD-YOLO: An Accurate and Effective Weed Detection Model Based on Improved YOLOV5 Network. Agronomy 2025, 15, 57. [Google Scholar] [CrossRef]
  55. Liu, Q.; Lv, J.; Zhang, C. MAE-YOLOv8-based small object detection of green crisp plum in real complex orchard environments. Comput. Electron. Agric. 2024, 226, 109458. [Google Scholar] [CrossRef]
  56. Zhang, W.; Chen, X.; Qi, J.; Yang, S. Automatic instance segmentation of orchard canopy in unmanned aerial vehicle imagery using deep learning. Front. Plant Sci. 2022, 13, 1041791. [Google Scholar] [CrossRef]
  57. Cheng, Z.; Cheng, Y.; Li, M.; Dong, X.; Gong, S.; Min, X. Detection of Cherry Tree Crown Based on Improved LA-dpv3+ Algorithm. Forests 2023, 14, 2404. [Google Scholar] [CrossRef]
  58. Mahmud, M.S.; He, L.; Zahid, A.; Heinemann, P.; Choi, D.; Krawczyk, G.; Zhu, H. Detection and infected area segmentation of apple fire blight using image processing and deep transfer learning for site-specific management. Comput. Electron. Agric. 2023, 209, 107862. [Google Scholar] [CrossRef]
  59. Anagnostis, A.; Tagarakis, A.C.; Asiminari, G.; Papageorgiou, E.; Kateris, D.; Moshou, D.; Bochtis, D. A deep learning approach for anthracnose infected trees classification in walnut orchards. Comput. Electron. Agric. 2021, 182, 105998. [Google Scholar] [CrossRef]
  60. Khan, S.; Tufail, M.; Khan, M.T.; Khan, Z.A.; Anwar, S. Deep learning-based identification system of weeds and crops in strawberry and pea fields for a precision agriculture sprayer. Precis. Agric. 2021, 22, 1711–1727. [Google Scholar] [CrossRef]
  61. Zhang, X.; Xun, Y.; Chen, Y. Automated identification of citrus diseases in orchards using deep learning. Biosyst. Eng. 2022, 223, 249–258. [Google Scholar] [CrossRef]
  62. Liu, J.; Abbas, I.; Noor, R.S. Development of Deep Learning-Based Variable Rate Agrochemical Spraying System for Targeted Weeds Control in Strawberry Crop. Agronomy 2021, 11, 1480. [Google Scholar] [CrossRef]
  63. Zhao, G.; Yang, R.; Jing, X.; Zhang, H.; Wu, Z.; Sun, X.; Fu, L. Phenotyping of individual apple tree in modern orchard with novel smartphone-based heterogeneous binocular vision and YOLOv5s. Comput. Electron. Agric. 2023, 209, 107814. [Google Scholar] [CrossRef]
  64. Mirbod, O.; Choi, D.; Heinemann, P.H.; Marini, R.P.; He, L. On-tree apple fruit size estimation using stereo vision with deep learning-based occlusion handling. Biosyst. Eng. 2023, 226, 27–42. [Google Scholar] [CrossRef]
  65. Zhang, L.; Hao, Q.; Mao, Y.; Su, J.; Cao, J. Beyond Trade-Off: An Optimized Binocular Stereo Vision Based Depth Estimation Algorithm for Designing Harvesting Robot in Orchards. Agriculture 2023, 13, 1117. [Google Scholar] [CrossRef]
  66. Wang, C.; Zou, X.; Tang, Y.; Luo, L.; Feng, W. Localisation of litchi in an unstructured environment using binocular stereo vision. Biosyst. Eng. 2016, 145, 39–51. [Google Scholar] [CrossRef]
  67. Zheng, S.; Liu, Y.; Weng, W.; Jia, X.; Yu, S.; Wu, Z. Tomato Recognition and Localization Method Based on Improved YOLOv5n-seg Model and Binocular Stereo Vision. Agronomy 2023, 13, 2339. [Google Scholar] [CrossRef]
  68. Liu, T.H.; Nie, X.N.; Wu, J.M.; Zhang, D.; Liu, W.; Cheng, Y.F.; Qi, L. Pineapple (Ananas comosus) fruit detection and localization in natural environment based on binocular stereo vision and improved YOLOv3 model. Precis. Agric. 2023, 24, 139–160. [Google Scholar] [CrossRef]
  69. Pan, S.; Ahamed, T. Pear Recognition in an Orchard from 3D Stereo Camera Datasets to Develop a Fruit Picking Mechanism Using Mask R-CNN. Sensors 2022, 22, 4187. [Google Scholar] [CrossRef]
  70. Sun, H.; Xue, J.; Zhang, Y.; Li, H.; Liu, R.; Song, Y.; Liu, S. Novel method of rapid and accurate tree trunk location in pear orchard combining stereo vision and semantic segmentation. Measurement 2025, 242, 116127. [Google Scholar] [CrossRef]
  71. Tang, Y.; Zhou, H.; Wang, H.; Zhang, Y. Fruit detection and positioning technology for a Camellia oleifera C. Abel orchard based on improved YOLOv4-tiny model and binocular stereo vision. Expert Syst. Appl. 2023, 211, 118573. [Google Scholar] [CrossRef]
  72. Zeng, X.; Wan, H.; Fan, Z.; Yu, X.; Guo, H. MT-MVSNet: A lightweight and highly accurate convolutional neural network based on mobile transformer for 3D reconstruction of orchard fruit tree branches. Expert Syst. Appl. 2025, 268, 126220. [Google Scholar] [CrossRef]
  73. Yu, T.; Hu, C.; Xie, Y.; Liu, J.; Li, P. Mature pomegranate fruit detection and location combining improved F-PointNet with 3D point cloud clustering in orchard. Comput. Electron. Agric. 2022, 200, 107233. [Google Scholar] [CrossRef]
  74. Xue, X.; Luo, Q.; Bu, M.; Li, Z.; Lyu, S.; Song, S. Citrus Tree Canopy Segmentation of Orchard Spraying Robot Based on RGB-D Image and the Improved DeepLabv3+. Agronomy 2023, 13, 2059. [Google Scholar] [CrossRef]
  75. Zhang, J.; He, L.; Karkee, M.; Zhang, Q.; Zhang, X.; Gao, Z. Branch detection for apple trees trained in fruiting wall architecture using depth features and Regions-Convolutional Neural Network (R-CNN). Comput. Electron. Agric. 2018, 155, 386–393. [Google Scholar] [CrossRef]
  76. Xiao, K.; Ma, Y.; Gao, G. An intelligent precision orchard pesticide spray technique based on the depth-of-field extraction algorithm. Comput. Electron. Agric. 2017, 133, 30–36. [Google Scholar] [CrossRef]
  77. Yu, C.; Shi, X.; Luo, W.; Feng, J.; Zheng, Z.; Yorozu, A.; Guo, J. MLG-YOLO: A Model for Real-Time Accurate Detection and Localization of Winter Jujube in Complex Structured Orchard Environments. Plant Phenomics 2024, 6, 0258. [Google Scholar] [CrossRef]
  78. Sun, X.; Fang, W.; Gao, C.; Fu, L.; Majeed, Y.; Liu, X.; Li, R. Remote estimation of grafted apple tree trunk diameter in modern orchard with RGB and point cloud based on SOLOv2. Comput. Electron. Agric. 2022, 199, 107209. [Google Scholar] [CrossRef]
  79. Tong, S.; Zhang, J.; Li, W.; Wang, Y.; Kang, F. An image-based system for locating pruning points in apple trees using instance segmentation and RGB-D images. Biosyst. Eng. 2023, 236, 277–286. [Google Scholar] [CrossRef]
  80. Lin, G.; Tang, Y.; Zou, X.; Xiong, J.; Li, J. Guava Detection and Pose Estimation Using a Low-Cost RGB-D Sensor in the Field. Sensors 2019, 19, 428. [Google Scholar] [CrossRef]
  81. Wang, X.; Kang, H.; Zhou, H.; Au, W.; Chen, C. Geometry-aware fruit grasping estimation for robotic harvesting in apple orchards. Comput. Electron. Agric. 2022, 193, 106716. [Google Scholar] [CrossRef]
  82. Qi, Z.; Hua, W.; Zhang, Z.; Deng, X.; Yuan, T.; Zhang, W. A novel method for tomato stem diameter measurement based on improved YOLOv8-seg and RGB-D data. Comput. Electron. Agric. 2024, 226, 109387. [Google Scholar] [CrossRef]
  83. Ahmed, D.; Sapkota, R.; Churuvija, M.; Karkee, M. Estimating optimal crop-load for individual branches in apple tree canopies using YOLOv8. Comput. Electron. Agric. 2025, 229, 109697. [Google Scholar] [CrossRef]
  84. Feng, Y.; Ma, W.; Tan, Y.; Yan, H.; Qian, J.; Tian, Z.; Gao, A. Approach of Dynamic Tracking and Counting for Obscured Citrus in Smart Orchard Based on Machine Vision. Appl. Sci. 2024, 14, 1136. [Google Scholar] [CrossRef]
  85. Abeyrathna, R.M.R.D.; Nakaguchi, V.M.; Minn, A.; Ahamed, T. Recognition and Counting of Apples in a Dynamic State Using a 3D Camera and Deep Learning Algorithms for Robotic Harvesting Systems. Sensors 2023, 23, 3810. [Google Scholar] [CrossRef]
  86. Underwood, J.P.; Jagbrant, G.; Nieto, J.I.; Sukkarieh, S. Lidar-based tree recognition and platform localization in orchards. J. Field robotics 2015, 32, 1056–1074. [Google Scholar] [CrossRef]
  87. Shen, Y.; Zhu, H.; Liu, H.; Chen, Y.; Ozkan, E. Development of a laser-guided, embedded-computer-controlled, air-assisted precision sprayer. Trans. ASABE 2017, 60, 1827–1838. [Google Scholar] [CrossRef]
  88. Berk, P.; Stajnko, D.; Belsak, A.; Hocevar, M. Digital evaluation of leaf area of an individual tree canopy in the apple orchard using the LIDAR measurement system. Comput. Electron. Agric. 2020, 169, 105158. [Google Scholar] [CrossRef]
  89. Lu, H.; Xu, S.; Cao, S. SGTBN: Generating dense depth maps from single-line LiDAR. IEEE Sens. J. 2021, 21, 19091–19100. [Google Scholar] [CrossRef]
  90. Liu, J.; Liang, H.; Wang, Z.; Chen, X. A Framework for Applying Point Clouds Grabbed by Multi-Beam LIDAR in Perceiving the Driving Environment. Sensors 2015, 15, 21931–21956. [Google Scholar] [CrossRef]
  91. Yan, T.; Zhu, H.; Sun, L.; Wang, X.; Ling, P. Investigation of an experimental laser sensor-guided spray control system for greenhouse variable-rate applications. Trans. ASABE 2019, 62, 899–911. [Google Scholar] [CrossRef]
  92. Gu, W.; Wen, W.; Wu, S.; Zheng, C.; Lu, X.; Chang, W.; Xiao, P.; Guo, X. 3D Reconstruction of Wheat Plants by Integrating Point Cloud Data and Virtual Design Optimization. Agriculture 2024, 14, 391. [Google Scholar] [CrossRef]
  93. Sun, Y.; Luo, Y.; Zhang, Q.; Xu, L.; Wang, L.; Zhang, P. Estimation of Crop Height Distribution for Mature Rice Based on a Moving Surface and 3D Point Cloud Elevation. Agronomy 2022, 12, 836. [Google Scholar] [CrossRef]
  94. Fieber, K.D.; Davenport, I.J.; Ferryman, J.M.; Gurney, R.J.; Walker, J.P.; Hacker, J.M. Analysis of full-waveform LiDAR data for classification of an orange orchard scene. ISPRS J. Photogramm. Remote Sens. 2013, 82, 63–82. [Google Scholar] [CrossRef]
  95. Rosell, J.R.; Llorens, J.; Sanz, R.; Arnó, J.; Ribes-Dasi, M.; Masip, J.; Palacín, J. Obtaining the three-dimensional structure of tree orchards from remote 2D terrestrial LIDAR scanning. Agric. For. Meteorol. 2009, 149, 1505–1515. [Google Scholar] [CrossRef]
  96. Underwood, J.P.; Hung, C.; Whelan, B.; Sukkarieh, S. Mapping almond orchard canopy volume, flowers, fruit and yield using lidar and vision sensors. Comput. Electron. Agric. 2016, 130, 83–96. [Google Scholar] [CrossRef]
  97. Sanz, R.; Rosell, J.R.; Llorens, J.; Gil, E.; Planas, S. Relationship between tree row LIDAR-volume and leaf area density for fruit orchards and vineyards obtained with a LIDAR 3D Dynamic Measurement System. Agric. For. Meteorol. 2013, 171, 153–162. [Google Scholar] [CrossRef]
  98. Gu, C.; Zhao, C.; Zou, W.; Yang, S.; Dou, H.; Zhai, C. Innovative Leaf Area Detection Models for Orchard Tree Thick Canopy Based on LiDAR Point Cloud Data. Agriculture 2022, 12, 1241. [Google Scholar] [CrossRef]
  99. Wang, K.; Zhou, J.; Zhang, W.; Zhang, B. Mobile LiDAR Scanning System Combined with Canopy Morphology Extracting Methods for Tree Crown Parameters Evaluation in Orchards. Sensors 2021, 21, 339. [Google Scholar] [CrossRef]
  100. Murray, J.; Fennell, J.T.; Blackburn, G.A.; Whyatt, J.D.; Li, B. The novel use of proximal photogrammetry and terrestrial LiDAR to quantify the structural complexity of orchard trees. Precis. Agric. 2020, 2, 473–483. [Google Scholar] [CrossRef]
  101. Wang, M.; Dou, H.; Sun, H.; Zhai, C.; Zhang, Y.; Yuan, F. Calculation Method of Canopy Dynamic Meshing Division Volumes for Precision Pesticide Application in Orchards Based on LiDAR. Agronomy 2023, 13, 1077. [Google Scholar] [CrossRef]
  102. Luo, S.; Wen, S.; Zhang, L.; Lan, Y.; Chen, X. Extraction of crop canopy features and decision-making for variable spraying based on unmanned aerial vehicle LiDAR data. Comput. Electron. Agric. 2024, 224, 109197. [Google Scholar] [CrossRef]
  103. Mahmud, M.S.; Zahid, A.; He, L.; Choi, D.; Krawczyk, G.; Zhu, H. LiDAR-sensed tree canopy correction in uneven terrain conditions using a sensor fusion approach for precision sprayers. Comput. Electron. Agric. 2021, 191, 106565. [Google Scholar] [CrossRef]
  104. Chen, C.; Cao, G.Q.; Li, Y.B.; Liu, D.; Ma, B.; Zhang, J.L.; Li, L.; Hu, J.P. Research on monitoring methods for the appropriate rice harvest period based on multispectral remote sensing. Discret. Dyn. Nat. Soc. 2022, 2022, 1519667. [Google Scholar]
  105. Zhang, S.; Xue, X.; Chen, C.; Sun, Z.; Sun, T. Development of a low-cost quadrotor UAV based on ADRC for agricultural remote sensing. Int. J. Agric. Biol. Eng. 2019, 12, 82–87. [Google Scholar] [CrossRef]
  106. Meng, L.; Audenaert, K.; Van Labeke, M.C.; Höfte, M. Detection of Botrytis cinerea on strawberry leaves upon mycelial infection through imaging technique. Sci. Hortic. 2024, 330, 113071. [Google Scholar] [CrossRef]
  107. Wei, L.; Yang, H.; Niu, Y.; Zhang, Y.; Xu, L.; Chai, X. Wheat biomass, yield, and straw-grain ratio estimation from multi-temporal UAV-based RGB and multispectral images. Biosyst. Eng. 2023, 234, 187–205. [Google Scholar] [CrossRef]
  108. Zhang, L.; Wang, A.; Zhang, H.; Zhu, Q.; Zhang, H.; Sun, W.; Niu, Y. Estimating Leaf Chlorophyll Content of Winter Wheat from UAV Multispectral Images Using Machine Learning Algorithms under Different Species, Growth Stages, and Nitrogen Stress Conditions. Agriculture 2024, 14, 1064. [Google Scholar] [CrossRef]
  109. Johansen, K.; Raharjo, T.; McCabe, M.F. Using multi-spectral UAV imagery to extract tree crop structural properties and assess pruning effects. Remote Sens. 2018, 10, 854. [Google Scholar] [CrossRef]
  110. Chandel, A.K.; Khot, L.R.; Sallato, B. Apple powdery mildew infestation detection and mapping using high-resolution visible and multispectral aerial imaging technique. Sci. Hortic. 2021, 287, 110228. [Google Scholar] [CrossRef]
  111. Yu, J.; Zhang, Y.; Song, Z.; Jiang, D.; Guo, Y.; Liu, Y.; Chang, Q. Estimating Leaf Area Index in Apple Orchard by UAV Multispectral Images with Spectral and Texture Information. Remote Sens. 2024, 16, 3237. [Google Scholar] [CrossRef]
  112. Noguera, M.; Aquino, A.; Ponce, J.M.; Cordeiro, A.; Silvestre, J.; Arias-Calderón, R.; Andújar, J.M. Nutritional status assessment of olive crops by means of the analysis and modelling of multispectral images taken with UAVs. Biosyst. Eng. 2021, 211, 1–18. [Google Scholar] [CrossRef]
  113. Zhao, X.; Zhao, Z.; Zhao, F.; Liu, J.; Li, Z.; Wang, X.; Gao, Y. An Estimation of the Leaf Nitrogen Content of Apple Tree Canopies Based on Multispectral Unmanned Aerial Vehicle Imagery and Machine Learning Methods. Agronomy 2024, 14, 552. [Google Scholar] [CrossRef]
  114. Sarabia, R.; Aquino, A.; Ponce, J.M.; López, G.; Andújar, J.M. Automated Identification of Crop Tree Crowns from UAV Multispectral Imagery by Means of Morphological Image Analysis. Remote Sens. 2020, 12, 748. [Google Scholar] [CrossRef]
  115. Adade, S.Y.S.S.; Lin, H.; Johnson, N.A.N.; Nunekpeku, X.; Aheto, J.H.; Ekumah, J.N.; Chen, Q. Advanced Food Contaminant Detection through Multi-Source Data Fusion: Strategies, Applications, and Future Perspectives. Trends Food Sci. Technol. 2024, 156, 104851. [Google Scholar] [CrossRef]
  116. Cheng, J.; Sun, J.; Shi, L.; Dai, C. An effective method fusing electronic nose and fluorescence hyperspectral imaging for the detection of pork freshness. Food Biosci. 2024, 59, 103880. [Google Scholar] [CrossRef]
  117. Xu, S.; Xu, X.; Zhu, Q.; Meng, Y.; Yang, G.; Feng, H.; Wang, B. Monitoring leaf nitrogen content in rice based on information fusion of multi-sensor imagery from UAV. Precis. Agric. 2023, 24, 2327–2349. [Google Scholar] [CrossRef]
  118. Jiang, S.; Qi, P.; Han, L.; Liu, L.; Li, Y.; Huang, Z.; He, X. Navigation system for orchard spraying robot based on 3D LiDAR SLAM with NDT_ICP point cloud registration. Comput. Electron. Agric. 2024, 220, 108870. [Google Scholar] [CrossRef]
  119. Lin, X.; Chao, S.; Yan, D.; Guo, L.; Liu, Y.; Li, L. Multi-Sensor Data Fusion Method Based on Self-Attention Mechanism. Appl. Sci. 2023, 13, 11992. [Google Scholar] [CrossRef]
  120. Basir, O.; Yuan, X. Engine fault diagnosis based on multi-sensor information fusion using Dempster–Shafer evidence theory. Inf. Fusion 2007, 8, 379–386. [Google Scholar] [CrossRef]
  121. Ahmed, S.; Qiu, B.; Ahmad, F.; Kong, C.W.; Xin, H. A state-of-the-art analysis of obstacle avoidance methods from the perspective of an agricultural sprayer UAV’s operation scenario. Agronomy 2021, 11, 1069. [Google Scholar] [CrossRef]
  122. Chen, R.; Zhang, C.; Xu, B.; Zhu, Y.; Zhao, F.; Han, S.; Yang, H. Predicting individual apple tree yield using UAV multi-source remote sensing data and ensemble learning. Comput. Electron. Agric. 2022, 201, 107275. [Google Scholar] [CrossRef]
  123. Tang, S.; Xia, Z.; Gu, J.; Wang, W.; Huang, Z.; Zhang, W. High-precision apple recognition and localization method based on RGB-D and improved SOLOv2 instance segmentation. Front. Sustain. Food Syst. 2024, 8, 1403872. [Google Scholar] [CrossRef]
  124. Sun, J.; Zhang, L.; Zhou, X.; Yao, K.; Tian, Y.; Nirere, A. A method of information fusion for identification of rice seed varieties based on hyperspectral imaging technology. J. Food Process Eng. 2021, 44, e13797. [Google Scholar] [CrossRef]
  125. Li, Y.; Qi, X.; Cai, Y.; Tian, Y.; Zhu, Y.; Cao, W.; Zhang, X. A Rice Leaf Area Index Monitoring Method Based on the Fusion of Data from RGB Camera and Multi-Spectral Camera on an Inspection Robot. Remote Sens. 2024, 16, 4725. [Google Scholar] [CrossRef]
  126. Wang, J.; Gao, Z.; Zhang, Y.; Zhou, J.; Wu, J.; Li, P. Real-Time Detection and Location of Potted Flowers Based on a ZED Camera and a YOLO V4-Tiny Deep Learning Algorithm. Horticulturae 2022, 8, 21. [Google Scholar] [CrossRef]
  127. Wang, R.; Hu, C.; Han, J.; Hu, X.; Zhao, Y.; Wang, Q.; Xie, Y. A Hierarchic Method of Individual Tree Canopy Segmentation Combing UAV Image and LiDAR. Arab. J. Sci. Eng. 2025, 50, 7567–7585. [Google Scholar] [CrossRef]
  128. Liu, H.; Zhu, H. Evaluation of a laser scanning sensor in detection of complex-shaped targets for variable-rate sprayer development. Trans. ASABE 2016, 59, 1181–1192. [Google Scholar]
  129. Wang, J.; Zhang, Y.; Gu, R. Research status and prospects on plant canopy structure measurement using visual sensors based on three-dimensional reconstruction. Agriculture 2020, 10, 462. [Google Scholar] [CrossRef]
  130. Mahmud, M.S.; Zahid, A.; He, L.; Choi, D.; Krawczyk, G.; Zhu, H.; Heinemann, P. Development of a LiDAR-guided section-based tree canopy density measurement system for precision spray applications. Comput. Electron. Agric. 2021, 182, 106053. [Google Scholar] [CrossRef]
  131. Zhang, Z.; Yang, M.; Pan, Q.; Jin, X.; Wang, G.; Zhao, Y.; Hu, Y. Identification of tea plant cultivars based on canopy images using deep learning methods. Sci. Hortic. 2025, 339, 113908. [Google Scholar] [CrossRef]
  132. Zhang, Z.; Lu, Y.; Zhao, Y.; Pan, Q.; Jin, K.; Xu, G.; Hu, Y. Ts-yolo: An all-day and lightweight tea canopy shoots detection model. Agronomy 2023, 13, 1411. [Google Scholar] [CrossRef]
  133. You, J.; Li, D.; Wang, Z.; Chen, Q.; Ouyang, Q. Prediction and visualization of moisture content in Tencha drying processes by computer vision and deep learning. J. Sci. Food Agric. 2024, 104, 5486–5494. [Google Scholar] [CrossRef]
  134. Zhang, T.; Zhou, J.; Liu, W.; Yue, R.; Yao, M.; Shi, J.; Hu, J. Seedling-YOLO: High-Efficiency Target Detection Algorithm for Field Broccoli Seedling Transplanting Quality Based on YOLOv7-Tiny. Agronomy 2024, 14, 931. [Google Scholar] [CrossRef]
  135. Ma, J.; Zhao, Y.; Fan, W.; Liu, J. An Improved YOLOv8 Model for Lotus Seedpod Instance Segmentation in the Lotus Pond Environment. Agronomy 2024, 14, 1325. [Google Scholar] [CrossRef]
  136. Khan, Z.; Liu, H.; Shen, Y.; Zeng, X. Deep learning improved YOLOv8 algorithm: Real-time precise instance segmentation of crown region orchard canopies in natural environment. Comput. Electron. Agric. 2024, 224, 109168. [Google Scholar] [CrossRef]
  137. Churuvija, M.; Sapkota, R.; Ahmed, D.; Karkee, M. A pose-versatile imaging system for comprehensive 3D modeling of planar-canopy fruit trees for automated orchard operations. Comput. Electron. Agric. 2025, 230, 109899. [Google Scholar] [CrossRef]
  138. Xu, S.; Zheng, S.; Rai, R. Dense object detection based canopy characteristics encoding for precise spraying in peach orchards. Comput. Electron. Agric. 2025, 232, 110097. [Google Scholar] [CrossRef]
  139. Vinci, A.; Brigante, R.; Traini, C.; Farinelli, D. Geometrical Characterization of Hazelnut Trees in an Intensive Orchard by an Unmanned Aerial Vehicle (UAV) for Precision Agriculture Applications. Remote Sens. 2023, 15, 541. [Google Scholar] [CrossRef]
  140. Zhu, Y.; Zhou, J.; Yang, Y.; Liu, L.; Liu, F.; Kong, W. Rapid Target Detection of Fruit Trees Using UAV Imaging and Improved Light YOLOv4 Algorithm. Remote Sens. 2022, 14, 4324. [Google Scholar] [CrossRef]
  141. Bing, Q.; Zhang, R.; Zhang, L.; Li, L.; Chen, L. UAV-SfM Photogrammetry for Canopy Characterization Toward Unmanned Aerial Spraying Systems Precision Pesticide Application in an Orchard. Drones 2025, 9, 151. [Google Scholar] [CrossRef]
  142. Guo, Y.; Gao, J.; Tunio, M.H.; Wang, L. Study on the identification of mildew disease of cuttings at the base of mulberry cuttings by aeroponics rapid propagation based on a BP neural network. Agronomy 2022, 13, 106. [Google Scholar] [CrossRef]
  143. Zuo, Z.; Gao, S.; Peng, H.; Xue, Y.; Han, L.; Ma, G.; Mao, H. Lightweight Detection of Broccoli Heads in Complex Field Environments Based on LBDC-YOLO. Agronomy 2024, 14, 2359. [Google Scholar] [CrossRef]
  144. Zhou, X.; Chen, W.; Wei, X. Improved Field Obstacle Detection Algorithm Based on YOLOv8. Agriculture 2024, 14, 2263. [Google Scholar] [CrossRef]
  145. Storey, G.; Meng, Q.; Li, B. Leaf Disease Segmentation and Detection in Apple Orchards for Precise Smart Spraying in Sustainable Agriculture. Sustainability 2022, 14, 1458. [Google Scholar] [CrossRef]
  146. Apacionado, B.V.; Ahamed, T. Sooty Mold Detection on Citrus Tree Canopy Using Deep Learning Algorithms. Sensors 2023, 23, 8519. [Google Scholar] [CrossRef]
  147. Zhou, X.; Sun, J.; Mao, H.; Wu, X.; Zhang, X.; Yang, N. Visualization research of moisture content in leaf lettuce leaves based on WT-PLSR and hyperspectral imaging technology. J. Food Process Eng. 2018, 41, e12647. [Google Scholar] [CrossRef]
  148. Wang, A.; Song, Z.; Xie, Y.; Hu, J.; Zhang, L.; Zhu, Q. Detection of Rice Leaf SPAD and Blast Disease Using Integrated Aerial and Ground Multiscale Canopy Reflectance Spectroscopy. Agriculture 2024, 14, 1471. [Google Scholar] [CrossRef]
  149. Di Nisio, A.; Adamo, F.; Acciani, G.; Attivissimo, F. Fast Detection of Olive Trees Affected by Xylella Fastidiosa from UAVs Using Multispectral Imaging. Sensors 2020, 20, 4915. [Google Scholar] [CrossRef]
  150. Zhang, Y.; Li, X.; Wang, M.; Xu, T.; Huang, K.; Sun, Y.; Lv, X. Early detection and lesion visualization of pear leaf anthracnose based on multi-source feature fusion of hyperspectral imaging. Front. Plant Sci. 2024, 15, 1461855. [Google Scholar] [CrossRef]
  151. Yang, R.; Lu, X.; Huang, J.; Zhou, J.; Jiao, J.; Liu, Y.; Liu, F.; Su, B.; Gu, P. A Multi-Source Data Fusion Decision-Making Method for Disease and Pest Detection of Grape Foliage Based on ShuffleNet V2. Remote Sens. 2021, 13, 5102. [Google Scholar] [CrossRef]
  152. Zhang, Y.; Cai, W.; Fan, S.; Song, R.; Jin, J. Object Detection Based on YOLOv5 and GhostNet for Orchard Pests. Information 2022, 13, 548. [Google Scholar] [CrossRef]
  153. Luo, D.; Xue, Y.; Deng, X.; Yang, B.; Chen, H.; Mo, Z. Citrus diseases and pests detection model based on self-attention YOLOV8. IEEE Access 2023, 11, 139872–139881. [Google Scholar] [CrossRef]
  154. Chao, X.; Sun, G.; Zhao, H.; Li, M.; He, D. Identification of apple tree leaf diseases based on deep learning models. Symmetry 2020, 12, 1065. [Google Scholar] [CrossRef]
  155. Sun, H.; Xu, H.; Liu, B.; He, D.; He, J.; Zhang, H.; Geng, N. MEAN-SSD: A novel real-time detector for apple leaf diseases using improved light-weight convolutional neural networks. Comput. Electron. Agric. 2021, 189, 106379. [Google Scholar] [CrossRef]
  156. Memon, M.S.; Chen, S.; Shen, B.; Liang, R.; Tang, Z.; Wang, S.; Memon, N. Automatic visual recognition, detection and classification of weeds in cotton fields based on machine vision. Crop Prot. 2025, 187, 106966. [Google Scholar] [CrossRef]
  157. Chen, S.; Memon, M.S.; Shen, B.; Guo, J.; Du, Z.; Tang, Z.; Memon, H. Identification of weeds in cotton fields at various growth stages using color feature techniques. Ital. J. Agron. 2024, 19, 100021. [Google Scholar] [CrossRef]
  158. Su, J.; Yi, D.; Coombes, M.; Liu, C.; Zhai, X.; McDonald-Maier, K.; Chen, W.H. Spectral analysis and mapping of blackgrass weed by leveraging machine learning and UAV multispectral imagery. Comput. Electron. Agric. 2022, 192, 106621. [Google Scholar] [CrossRef]
  159. Zisi, T.; Alexandridis, T.K.; Kaplanis, S.; Navrozidis, I.; Tamouridou, A.-A.; Lagopodi, A.; Moshou, D.; Polychronos, V. Incorporating Surface Elevation Information in UAV Multispectral Images for Mapping Weed Patches. J. Imaging 2018, 4, 132. [Google Scholar] [CrossRef]
  160. Tao, T.; Wei, X. STBNA-YOLOv5: An Improved YOLOv5 Network for Weed Detection in Rapeseed Field. Agriculture 2025, 15, 22. [Google Scholar] [CrossRef]
  161. Zhao, Y.; Zhang, X.; Sun, J.; Yu, T.; Cai, Z.; Zhang, Z.; Mao, H. Low-cost lettuce height measurement based on depth vision and lightweight instance segmentation model. Agriculture 2024, 14, 1596. [Google Scholar] [CrossRef]
  162. Wang, Y.; Zhang, X.; Ma, G.; Du, X.; Shaheen, N.; Mao, H. Recognition of weeds at asparagus fields using multi-feature fusion and backpropagation neural network. Int. J. Agric. Biol. Eng. 2021, 14, 190–198. [Google Scholar] [CrossRef]
  163. Zhu, H.; Zhang, Y.; Mu, D.; Bai, L.; Wu, X.; Zhuang, H.; Li, H. Research on improved YOLOx weed detection based on lightweight attention module. Crop Prot. 2024, 177, 106563. [Google Scholar] [CrossRef]
  164. Sampurno, R.M.; Liu, Z.; Abeyrathna, R.M.R.D.; Ahamed, T. Intrarow Uncut Weed Detection Using You-Only-Look-Once Instance Segmentation for Orchard Plantations. Sensors 2024, 24, 893. [Google Scholar] [CrossRef]
  165. Jin, T.; Liang, K.; Lu, M.; Zhao, Y.; Xu, Y. WeedsSORT: A weed tracking-by-detection framework for laser weeding applications within precision agriculture. Smart Agric. Technol. 2025, 11, 100883. [Google Scholar] [CrossRef]
  166. Fan, X.; Chai, X.; Zhou, J.; Sun, T. Deep learning based weed detection and target spraying robot system at seedling stage of cotton field. Comput. Electron. Agric. 2023, 214, 108317. [Google Scholar] [CrossRef]
  167. Xu, Y.; Gao, Z.; Khot, L.; Meng, X.; Zhang, Q. A real-time weed mapping and precision herbicide spraying system for row crops. Sensors 2018, 18, 4245. [Google Scholar] [CrossRef] [PubMed]
  168. Yang, S.; Cui, Z.; Li, M.; Li, J.; Gao, D.; Ma, F.; Wang, Y. A grapevine trunks and intra-plant weeds segmentation method based on improved Deeplabv3 Plus. Comput. Electron. Agric. 2024, 227, 109568. [Google Scholar] [CrossRef]
  169. Sabarina, K.; Priya, N. Lowering data dimensionality in big data for the benefit of precision agriculture. Procedia Comput. Sci. 2015, 48, 548–554. [Google Scholar] [CrossRef]
  170. San Emeterio de la Parte, M.; Lana Serrano, S.; Muriel Elduayen, M.; Martínez-Ortega, J.-F. Spatio-Temporal Semantic Data Model for Precision Agriculture IoT Networks. Agriculture 2023, 13, 360. [Google Scholar] [CrossRef]
Figure 1. Environmental sensing sensors: Visual, LiDAR, Multispectral.
Figure 2. Canopy detection model architecture: adapted from Guan et al. [47], who constructed it based on the improved DBi-YOLOv8. This structure enhances detection performance for complex canopy regions under variable lighting conditions.
Figure 3. Monocular visual sensor.
Figure 4. RGB-D visual sensor.
Figure 5. RGB-D camera-based image data acquisition system: (a) Winter Jujube RGB and Depth image acquisition system; (b) This image acquisition system is derived from the literature and is used for the RGB and depth image acquisition of citrus tree canopies, providing data support for canopy detection. These systems enhance spatial understanding of canopy layers, supporting real-time spraying decisions.
Figure 6. LiDAR.
Figure 7. LiDAR-based orchard environmental information examples: (a) Orange orchard scanned using full-waveform LiDAR to differentiate canopy and ground objects; (b) Vineyard structure reconstructed via 2D LiDAR mounted on a tractor, enabling volumetric analysis and spray planning.
Figure 8. Multispectral camera.
Figure 9. Multispectral data acquisition system for pest and disease detection: (a) Lychee canopy multispectral data acquisition; (b) Apple orchard multispectral imagery analyzed for powdery mildew detection using RedEdge3 sensor.
Figure 10. Data fusion-based orchard environmental multi-source information integration technology framework: LiDAR, IMU, and encoder data are synchronized to build high-accuracy SLAM maps and localization systems, supporting autonomous navigation and precision spraying.
Figure 11. Feature fusion-based multi-sensor data fusion technology framework: A CNN-SA network integrates point cloud, RGB images, depth maps, and control points using self-attention mechanisms, improving 3D reconstruction accuracy and robustness.
Figure 12. Application example of LiDAR and multispectral sensor data fusion technology: Structural features from LiDAR point clouds are fused with multispectral vegetation indices to predict single-tree yield using SVM and KNN models.
Figure 13. Application example of visual sensor and multispectral sensor data fusion: (a) RGB image and multispectral data fusion for pest and disease diagnosis. A channel-level fusion approach with attention enhancement improves detection accuracy in fruit tree disease identification; (b) RGB and multispectral image fusion for leaf area index (LAI) estimation. Features including color, texture, and vegetation indices are integrated to improve model prediction performance.
Figure 14. Three-dimensional canopy structure reconstruction based on VLP-16 LiDAR data. The reconstructed point cloud supports density-based zonal pesticide application, demonstrating enhanced precision and control in orchard spraying.
Figure 15. Canopy perception technology combining sensor technology and deep learning: (a) The DS-E24S camera combined with the improved YOLOv8 model achieves canopy segmentation, providing target localization for precision spraying; (b) The KinectV2 camera combined with the improved DeepLabv3+ model enables accurate identification of orchard tree canopies, providing data support for the precision spraying system.
Figure 16. Canopy perception framework based on multi-sensor data fusion: RGB image features are projected into LiDAR space, and canopy segmentation is performed using kernel density estimation and Gaussian mixture modeling.
Figure 17. Environmental sensing application examples for pest and disease area detection: (a) RGB image combined with the improved Mask R-CNN model achieves accurate detection of leaf rust disease, providing disease spot localization information for the spraying system; (b) Night vision camera combined with YOLOv7 achieves accurate detection of sooty mold disease in citrus tree canopies, providing disease spot perception information for precision spraying.
Figure 18. Application examples of pest and disease detection based on hyperspectral sensors: (a) Multispectral camera combined with 3D reconstruction and LDA algorithm achieves disease tree detection and localization; (b) Hyperspectral imaging combined with multi-source fusion model enables early identification of anthracnose disease in pear leaves.
Figure 19. Multi-sensor fusion technology framework for disease recognition: The multi-source fusion model based on RGB images and multispectral data achieves disease detection.
Figure 20. Weed detection using RGB color feature enhancement: Contrast-enhanced super-red and super-green channels are combined with OTSU thresholding for effective segmentation of orchard weeds.
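The colour-index approach in Figure 20 can be sketched in a few lines of OpenCV: an excess-green ("super-green") channel is contrast-enhanced and then binarised with Otsu's method. The image path and the exact enhancement used in the cited work are assumptions.

```python
# Sketch: excess-green channel + Otsu thresholding for weed segmentation (image path is a placeholder).
import cv2
import numpy as np

img = cv2.imread("orchard_ground.jpg")                # BGR image
b, g, r = cv2.split(img.astype(np.float32))

exg = 2 * g - r - b                                   # excess-green index
exg = cv2.normalize(exg, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
exg = cv2.equalizeHist(exg)                           # simple contrast enhancement

# Otsu picks the threshold separating green vegetation (weeds) from soil/background
_, weed_mask = cv2.threshold(exg, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
coverage = weed_mask.mean() / 255.0
print(f"estimated weed coverage: {coverage:.1%}")
```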
Figure 21. Application examples of weed detection based on multispectral sensing: (a) Multispectral sensor combined with a machine learning algorithm achieves ryegrass detection, providing data foundation for the spraying system; (b) Multispectral sensor combined with a multi-source fusion model achieves weed detection, providing target information for drone precision spraying.
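One plausible reading of the "multispectral sensor plus machine learning" pipeline in Figure 21a is a pixel-wise classifier over band reflectance and a vegetation index. The sketch below uses a random forest on synthetic red/NIR values purely for illustration; the cited works may use different features and classifiers.

```python
# Sketch: pixel-wise weed classification from multispectral bands (synthetic placeholders).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
red = rng.random(5000)
nir = rng.random(5000)
ndvi = (nir - red) / (nir + red + 1e-6)

X = np.column_stack([red, nir, ndvi])                 # per-pixel spectral features
y = (ndvi > 0.4).astype(int)                          # toy labels: vegetated pixels

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print("training accuracy:", clf.score(X, y))
```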
Figure 22. Weed detection and segmentation framework combining visual sensors and YOLOv5/YOLOv8: The model enables instance segmentation of weeds, trunks, and supports under complex lighting and occlusion conditions.
Table 1. Application of monocular vision sensors in orchard environmental sensing.

| Researcher | Collection Equipment | Image Data | Data Usage | Ref. |
| --- | --- | --- | --- | --- |
| Cheng et al. | Sony EXMOR 1/2.3 | Cherry tree crown | Canopy detection | [57] |
| Mahmud et al. | Logitech C920 | Apple tree crown | Canopy segmentation | [58] |
| Anagnostis et al. | RX100 II | Leaf diseases and pests | Disease and pest area segmentation | [59] |
| Khan et al. | DJI Spark camera | Weeds in the strawberry orchard | Weed detection | [60] |
| Zhang et al. | DSC-W170 | Citrus diseases | Disease classification | [61] |
| Liu et al. | Aluratek AWC01F | Weeds in the strawberry orchard | Weed detection | [62] |
Table 3. Application of RGB-D vision sensors in orchard environmental sensing.

| Researcher | Collection Equipment | Image Data | Data Usage | Ref. |
| --- | --- | --- | --- | --- |
| Sun et al. | Kinect V2 | RGB and depth images of apple trees | Phenotypic analysis of fruit trees | [78] |
| Tong et al. | RealSense D435i | RGB and depth images of apple trees | Pruning point localization of fruit trees | [79] |
| Zhang et al. | Kinect V2 | RGB and depth images of guava | Detection and localization of guava | [80] |
| Wang et al. | RealSense D435 | RGB and depth images of apple trees | Fruit localization and pose detection | [81] |
| Qiu et al. | Microsoft Azure DK | RGB and depth images of tomato plants | Branch and trunk dimension analysis | [82] |
| Sun et al. | Azure Kinect DK AI | RGB and depth images of apple trees | Branch diameter estimation | [83] |
Table 4. LiDAR performance parameters.

| Product Name | Line Count | Range Capability | Accuracy | Frame Rate |
| --- | --- | --- | --- | --- |
| Sick LMS111-10100 | 1 | 0.5 m~20 m | ±30 mm | 25 Hz/50 Hz |
| RS-16 LiDAR | 16 | 0.4 m~150 m | ±2 cm | 5 Hz/10 Hz/20 Hz |
| Sick LMS511-20100 PRO | 1 | 0.2 m~80 m | ±12 mm | 25 Hz/35 Hz/50 Hz/75 Hz/100 Hz |
| Helios 16 | 16 | 0.2 m~150 m | 1 cm | 5 Hz/10 Hz/20 Hz |
Table 5. Application of LiDAR in orchard environmental sensing.

| Researcher | Collection Equipment | Collected Data | Data Usage | Ref. |
| --- | --- | --- | --- | --- |
| Underwood et al. | SICK LMS-291 | Apricot tree point cloud data | Orchard yield assessment | [96] |
| Sanz et al. | Sick LMS200 | Pear tree point cloud data | Leaf area density (LAD) calculation | [97] |
| Gu et al. | Sick LMS111-10100 | Apple tree canopy point cloud data | Canopy leaf area calculation | [98] |
| Wang et al. | RoboSense RS-16 | Fruit tree canopy point cloud data | Canopy morphological parameter measurement | [99] |
| Qiu et al. | Riegl LMS Z210ii | Fruit tree point cloud data | Tree structure quantification | [100] |
| Sun et al. | Sick LMS111-10100 | Fruit tree canopy point cloud data | Canopy volume calculation | [101] |
Table 6. Application of multispectral sensors in orchard environmental sensing.

| Researcher | Collection Equipment | Image Data | Data Usage | Ref. |
| --- | --- | --- | --- | --- |
| Yu et al. | CI-110 | Apple canopy LAI image | Apple orchard leaf area index calculation | [111] |
| Noguera et al. | MicaSense RedEdge-M | Olive tree multispectral data | Nutritional status assessment of olive crops | [112] |
| Zhao et al. | Micro-MCA Snap | Multispectral images of apple tree canopies | Leaf nitrogen content estimation (LNCE) | [113] |
| Sarabia et al. | FLIR C3 | Multispectral images of apple trees | Canopy detection | [114] |
Table 7. Application examples of canopy sensing technology for targeted spraying.

| Researcher | Research Object | Research Method | Research Objective | Ref. |
| --- | --- | --- | --- | --- |
| Churuvija et al. | Cherry tree canopy | Multi-functional posture imaging | Analysis of canopy structural parameters | [137] |
| Xu et al. | Peach tree canopy | Depth-enhanced object detection algorithm | Canopy detection and precision spraying | [138] |
| Vinci et al. | Hazelnut tree canopy | Point cloud reconstruction + DSM analysis + NDVI fusion | Canopy identification | [139] |
| Zhu et al. | Fruit tree canopy | YOLOv4 | Canopy detection and counting | [140] |
| Bing et al. | Mango tree canopy | SfM modeling + LiDAR point cloud comparative analysis | Canopy parameter extraction and variable spray map generation | [141] |
Table 9. Application examples of weed sensing technology for targeted spraying.

| Researcher | Research Object | Research Method | Research Objective | Ref. |
| --- | --- | --- | --- | --- |
| Jin et al. | Field weeds | YOLOv11-TA + SuperPoint + SuperGlue + adaptive EKF | Weed detection and laser weeding | [165] |
| Fan et al. | Field weeds | Improved Faster R-CNN (CBAM + BiFPN) | Weed detection and precision spraying | [166] |
| Xu et al. | Field weeds | IPSO algorithm + horizontal histogram | Weed detection and variable spraying | [167] |
| Yang et al. | Vineyard weeds | Improved DeepLabv3 | Weed detection | [168] |
Table 10. Cost comparison of environmental sensing technologies (low-cost vs. high-cost specifications in parentheses).

| Technology Type | Low-Cost Solution | High-Cost Solution | Core Factors Affecting Price Difference |
| --- | --- | --- | --- |
| Monocular vision | USD 70–USD 420 | USD 1120–USD 4200 | Frame rate (60 fps vs. 200 fps), low-light performance (20 dB vs. 50 dB) |
| Binocular vision | USD 350–USD 1120 | USD 2800–USD 7000 | Depth calculation accuracy (5% vs. 1%), effective coverage (3 m vs. 20 m) |
| RGB-D | USD 252–USD 840 | USD 2100–USD 5600 | Point cloud density (50 K points vs. 1 M points) |
| LiDAR | USD 210–USD 1680 | USD 7000–USD 42,000 | Angular resolution (1° vs. 0.1°), penetration rate (30% vs. 90%) |
| Multispectral | USD 1120–USD 4900 | USD 8400–USD 35,000 | Number of bands (5–12), accuracy (±8% to ±1%) |
| Hyperspectral | USD 1400–USD 11,200 | USD 28,000–USD 140,000 | Spectral channels (50–500), sampling speed (1 fps–100 fps) |