Article

Enhanced Visual Detection and Path Planning for Robotic Arms Using Yolov10n-SSE and Hybrid Algorithms

1 College of Engineering, South China Agricultural University, Guangzhou 510642, China
2 School of Modern Information Industry, Guangzhou College of Commerce, Guangzhou 511363, China
3 School of Mechanical Engineering, Xinjiang University, Urumqi 830000, China
* Authors to whom correspondence should be addressed.
Agronomy 2025, 15(8), 1924; https://doi.org/10.3390/agronomy15081924
Submission received: 1 July 2025 / Revised: 31 July 2025 / Accepted: 5 August 2025 / Published: 9 August 2025

Abstract

Pineapple harvesting in natural orchard environments faces challenges such as high occlusion rates caused by foliage and the need for complex spatial planning to guide robotic arm movement in cluttered terrains. This study proposes an innovative visual detection model, Yolov10n-SSE, which integrates split convolution (SPConv), squeeze-and-excitation (SE) attention, and efficient multi-scale attention (EMA) modules. These improvements enhance detection accuracy while reducing computational complexity. The proposed model achieves notable performance gains in precision (93.8%), recall (84.9%), and mAP (91.8%). Additionally, a dimensionality-reduction strategy transforms 3D path planning into a more efficient 2D image-space task using point clouds from a depth camera. Combining the artificial potential field (APF) method with an improved RRT* algorithm mitigates randomness, ensures obstacle avoidance, and reduces computation time. Experimental validation demonstrates the superior stability of this approach and its generation of collision-free paths, while robotic arm simulation in ROS confirms real-world feasibility. This integrated approach to detection and path planning provides a scalable technical solution for automated pineapple harvesting, addressing key bottlenecks in agricultural robotics and fostering advancements in fruit-picking automation.

1. Introduction

Pineapple is a commercially significant tropical crop predominantly cultivated in tropical and subtropical agro-ecological zones [1]. Commercial orchards are typically established on undulating or mountainous terrain with irregular topography, and characteristic planting patterns are high-density with substantial inter-plant variability in height [2]. These environmental and agronomic factors markedly constrain the operational efficacy of conventional agricultural machinery. Moreover, mechanical transit across sloped surfaces frequently damages immature inflorescences, thereby compromising subsequent yield cycles. Consequently, despite progressive advances in automation technologies, pineapple harvesting remains overwhelmingly dependent on manual labor. Manual operations are not only labor- and skill-intensive but also entail significant occupational hazards, including lacerations from foliar spines, pericarp abrasion, and puncture injuries; these obstacles collectively elevate production costs and diminish harvesting efficiency [3,4]. Empirical surveys [5] indicate that labor costs associated with pineapple harvesting account for 42.71% of total production expenditures. Developing mechanized harvesting technologies specifically tailored to pineapple production is therefore imperative.
Rapid advancements in machine vision and robot-control technologies have driven innovations in agricultural automation, including robotic harvesting systems for various fruits such as apples, citrus, and bananas [6,7,8]. For pineapples, however, robotic harvesting solutions remain comparatively scarce. Previous research has explored diverse designs, such as flexible finger rollers for manual-like picking [9,10], three-degree-of-freedom robotic arms for continuous harvesting [11], and double-gantry frameworks for specialized equipment [12]. These robots typically integrate key components, including robotic arms, end-effectors, vision systems, and mobility devices. Roller-type harvesters and gantry-based mobile frameworks encounter significant operational constraints in hilly or mountainous terrains; furthermore, software-level picking solutions tailored to the specific growth environment of pineapple remain scarce. This study employs a tracked mountain chassis as the mobile platform and focuses on the development of vision-based recognition and robotic arm path-planning algorithms for harvesting.
Early-stage fruit-picking robots predominantly relied on traditional image-processing algorithms, utilizing manually crafted features such as color, shape, and texture for detection [13]. Despite achieving reasonable recognition rates, these methods often suffered from slow processing speeds and poor generalization, rendering them unsuitable for real-time applications in dynamic orchard environments. Modern deep learning-based detectors, such as the YOLO series and SSD models, have significantly improved detection efficiency and robustness, especially under challenging field conditions such as variable lighting and heavy occlusion. Recent research on deep learning for pineapple detection has produced various model enhancements. Liu [14] enhanced YOLOv5s by incorporating CBAM and Ghost-CBAM attention mechanisms along with GhostConv to reduce model parameters, achieving a recognition accuracy of 91.89%. Li [15] adapted YOLOv7 using a MobileOne backbone and a slender-neck architecture, greatly increasing detection speed (to 17.52 FPS) in real-time applications. Similarly, Lai [16] refined YOLOv7 by integrating SimAM attention mechanisms, MPConv, and Soft-NMS, resulting in a mAP of 95.82% and a recall of 89.83%. Chen [17] further embedded the ECA attention mechanism into RetinaNet, achieving a recognition rate of 94.75% even under heavily occluded conditions. These models demonstrate the capacity of deep learning-based methods to address challenges in agricultural environments, particularly with respect to robustness and precision.
Robotic arms have become essential for fruit harvesting, including the harvesting of apples, citrus, and tomatoes. A critical aspect of their functionality is path planning, which aims to create safe, collision-free trajectories from a start point to a goal point within a constrained workspace, optimizing for both distance and time [18,19]. Widely employed methods for path planning include Ant Colony Optimization [20], the Artificial Potential Field (APF) [21], A* [22], and the Rapidly-exploring Random Tree (RRT). However, the inherent randomness of RRT can often lead to suboptimal solutions. To mitigate this, researchers have proposed various improvements tailored to agricultural scenarios. Zhang [23] introduced a Cauchy goal-biased bidirectional RRT algorithm, which leverages a Cauchy distribution to enhance sampling efficiency and integrates a goal-attraction mechanism to guide tree expansion; this algorithm was specifically designed for multi-DOF manipulators. Ma [24] developed an improved RRT-Connect algorithm by incorporating guidance points derived from prior knowledge of the configuration space, significantly enhancing its performance in obstacle avoidance. While the aforementioned approaches refine stochasticity within the original algorithmic framework, several investigators have sought to attain superior planning performance through hybrid strategies. Yin [25] introduced GI-RRT, which synergizes deep learning-based grasp-pose estimation with the RRT algorithm to attain precise and efficient grasping in complex agricultural scenarios. Xiong [26] fused the artificial potential-field method with reinforcement learning, substantially enhancing convergence efficiency. Hybrid planning paradigms consistently demonstrate heightened robustness and adaptability in intricate environments. In the highly unstructured settings characteristic of agriculture, traditional robotic-arm path-planning algorithms alone prove insufficient to satisfy harvesting requirements.
In summary, this study addresses two primary challenges in the deployment of pineapple-harvesting robots: robust visual detection and efficient robotic arm motion planning. To enhance detection reliability in complex field environments, an improved YOLOv10n model is proposed that incorporates advanced feature extraction and attention mechanisms for higher accuracy and computational efficiency. For motion planning, the 3D pathfinding problem is simplified to a 2D plane through preprocessing of point cloud data, enabling more efficient trajectory generation, and a hybrid algorithm combining the APF method with an improved RRT* algorithm is employed to ensure optimal collision-free path planning. This integrated approach provides a scalable solution for automated pineapple harvesting and mitigates key challenges in the deployment of agricultural robotics. A detailed workflow for this methodology is illustrated in Figure 1.

2. Materials and Methods

2.1. Data Collection

The pineapple image dataset employed in this study was acquired at a plantation in Huangpu District, Guangzhou, using an Azure Kinect DK (resolution 1920 × 1080, 30 fps, operating range 50–150 cm). The pineapples were in the green-mature stage. Because pineapple leaves are rosette-shaped, narrow, and sharp, and thus easily obstruct the fruit, the robotic arm approaches each fruit either diagonally from above or directly in front so as to avoid leaf obstruction to the greatest extent possible. The data were therefore captured from both overhead and straight-on viewpoints and collected under different lighting and shading conditions to enhance sample diversity and suppress overfitting, as shown in Figure 2. After overexposed and blurred images had been filtered out, a total of 1225 samples were divided into training, validation, and test sets in a 7:1:2 ratio, and LabelImg (v1.8.6) was used to label the categories and bounding boxes.

2.2. Yolov10n-SSE

To improve pineapple detection in natural environments, this study proposes the Yolov10n-SSE network, an enhanced version of Yolov10n designed for agricultural applications. The backbone of Yolov10n-SSE is augmented with SPConv [27] in the first and third convolutional layers, which expands the receptive field and enhances the extraction of features from occluded pineapple fruits while simultaneously reducing computational complexity. Additionally, an SE [28] attention mechanism is integrated into the ninth layer to refine features by recalibrating feature weights after downsampling through SCDown and C2F transformations; this mechanism suppresses irrelevant features and enriches contextual information. At the end of the backbone, an EMA [29] mechanism is implemented to strengthen multi-scale feature representation, further improving detection in diverse field conditions. The improved architecture derived from Yolov10n is illustrated in Figure 3 and is referred to as Yolov10n-SSE in this study.
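Of the three modules, SE is the simplest; a minimal PyTorch sketch of the standard squeeze-and-excitation operation from [28] is shown below. The channel count and reduction ratio are illustrative assumptions, not the exact configuration of the ninth layer.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Standard squeeze-and-excitation block (Hu et al. [28])."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.squeeze = nn.AdaptiveAvgPool2d(1)           # global spatial average
        self.excite = nn.Sequential(
            nn.Linear(channels, channels // reduction),  # channel bottleneck
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),                                # per-channel weights in (0, 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.excite(self.squeeze(x).view(b, c)).view(b, c, 1, 1)
        return x * w                                     # recalibrate channel responses

# usage: recalibrate a feature map after downsampling, e.g. SEBlock(256)(features)
```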
SPConv addresses the issue of redundant computations within convolutional layers by categorizing input features into two distinct types, Representative and Redundant, as depicted in Figure 4. In the lower-dimensional layers of the network, representative features of the pineapple, such as color and texture, are extracted via grouped convolutions to capture critical information, which is then integrated with the outputs of standard convolutions and subsequently subjected to global average pooling to encapsulate global information. In contrast, redundant features are processed through standard convolutions and then through global average pooling. The representative and redundant features are concatenated and combined, and these steps are followed by a softmax weighting, which produces the output. This strategy significantly enhances the model’s capability to capture characteristic features of pineapples in lower-dimensional layers while maintaining computational efficiency.
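A simplified PyTorch sketch of this split-and-fuse scheme, following the structure in [27], is given below; the split ratio, group count, and fusion details are illustrative assumptions rather than the paper's exact configuration.

```python
import torch
import torch.nn as nn

class SPConv(nn.Module):
    """Simplified split convolution (Zhang et al. [27]): split channels into
    representative and redundant parts, process them at different costs, and
    fuse the branches with a parameter-free softmax weighting."""
    def __init__(self, in_ch: int, out_ch: int, ratio: float = 0.5, groups: int = 2):
        super().__init__()
        self.rep_ch = int(in_ch * ratio)                # "representative" channels
        self.red_ch = in_ch - self.rep_ch               # "redundant" channels
        self.gwc = nn.Conv2d(self.rep_ch, out_ch, 3, padding=1, groups=groups)
        self.pwc = nn.Conv2d(self.rep_ch, out_ch, 1)    # pointwise complement
        self.cheap = nn.Conv2d(self.red_ch, out_ch, 1)  # minimal processing
        self.gap = nn.AdaptiveAvgPool2d(1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x_rep, x_red = torch.split(x, [self.rep_ch, self.red_ch], dim=1)
        y_rep = self.gwc(x_rep) + self.pwc(x_rep)       # rich features via grouped conv
        y_red = self.cheap(x_red)
        # parameter-free fusion: softmax over the branches' global statistics
        s = torch.stack([self.gap(y_rep), self.gap(y_red)], dim=0)  # (2, b, c, 1, 1)
        w = torch.softmax(s, dim=0)
        return y_rep * w[0] + y_red * w[1]
```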
The EMA mechanism incorporates spatial information into channel attention, drawing inspiration from the CA [30] mechanism. As illustrated in Figure 5, the EMA mechanism extracts attention weights through three parallel branches. After spatial information has been captured through horizontal and vertical average pooling of the feature maps, feature concatenation is followed by a 1 × 1 convolution to integrate features. Subsequently, the features are adjusted using a sigmoid function, re-weighted, normalized with group normalization, and then subjected to global average pooling to further condense information. A 3 × 3 convolutional branch is utilized to capture multi-scale spatial features, which are then fed into global average pooling. The combined feature information from both branches is adjusted for feature weighting to produce the output feature map. This operation captures pixel-level relational pairings, highlighting contextual information across all pixels and thereby enhancing feature representation.
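A condensed PyTorch sketch of this computation, following the published structure in [29], is shown below; the group count and tensor bookkeeping are assumptions, and the channel count must be divisible by the number of groups.

```python
import torch
import torch.nn as nn

class EMA(nn.Module):
    """Condensed efficient multi-scale attention (Ouyang et al. [29])."""
    def __init__(self, channels: int, groups: int = 8):
        super().__init__()
        self.g = groups
        c = channels // groups                           # per-group channels
        self.pool_h = nn.AdaptiveAvgPool2d((None, 1))    # average over width
        self.pool_w = nn.AdaptiveAvgPool2d((1, None))    # average over height
        self.conv1x1 = nn.Conv2d(c, c, 1)
        self.conv3x3 = nn.Conv2d(c, c, 3, padding=1)
        self.gn = nn.GroupNorm(c, c)
        self.agp = nn.AdaptiveAvgPool2d(1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        xg = x.view(b * self.g, c // self.g, h, w)
        # branch 1: directional pooling -> 1x1 conv -> sigmoid re-weighting
        xh = self.pool_h(xg)                             # (bg, c', h, 1)
        xw = self.pool_w(xg).permute(0, 1, 3, 2)         # (bg, c', w, 1)
        hw = self.conv1x1(torch.cat([xh, xw], dim=2))
        xh, xw = torch.split(hw, [h, w], dim=2)
        x1 = self.gn(xg * xh.sigmoid() * xw.permute(0, 1, 3, 2).sigmoid())
        # branch 2: 3x3 conv for multi-scale local context
        x2 = self.conv3x3(xg)
        # cross-spatial learning: each branch re-weights the other's pixels
        y1 = torch.softmax(self.agp(x1).reshape(b * self.g, -1, 1).permute(0, 2, 1), dim=-1)
        y2 = torch.softmax(self.agp(x2).reshape(b * self.g, -1, 1).permute(0, 2, 1), dim=-1)
        t1 = torch.matmul(y1, x2.reshape(b * self.g, c // self.g, -1))
        t2 = torch.matmul(y2, x1.reshape(b * self.g, c // self.g, -1))
        weights = (t1 + t2).reshape(b * self.g, 1, h, w).sigmoid()
        return (xg * weights).view(b, c, h, w)
```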

2.3. 2D Map Construction

Following the application of statistical filtering and moving least squares smoothing to the pineapple plant point cloud, color-based segmentation is employed to separate the fruits into distinct clusters. The bounding box coordinates, as predicted by YOLOv10n-SSE, facilitate the alignment of the 2D image from the depth camera with the depth image, thereby enabling the accurate localization of fruit clusters within the depth image. This process allows for the extraction and further refinement of the pineapple fruit point cloud, as well as of the surrounding plant point cloud.
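The filtering and segmentation steps can be illustrated with Open3D; the file name, filter parameters, and color thresholds below are placeholders, and the moving least squares smoothing (commonly done with a separate library such as PCL) is omitted.

```python
import numpy as np
import open3d as o3d

# Load the plant cloud and remove statistical outliers (parameters illustrative).
pcd = o3d.io.read_point_cloud("pineapple_plant.pcd")
pcd, _ = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)

# Crude color-based gate separating fruit-colored points from foliage;
# the RGB thresholds are placeholders, not the paper's segmentation rule.
colors = np.asarray(pcd.colors)
fruit_idx = np.where((colors[:, 0] > 0.35) & (colors[:, 2] < 0.3))[0]
fruit = pcd.select_by_index(fruit_idx)
plant = pcd.select_by_index(fruit_idx, invert=True)
```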
For the construction of 2D maps, point cloud data within a 0.3-m radius of the fruit centroid are selected. Alpha-shape surface reconstruction is then applied to both the refined fruit and the remaining plant point clouds, yielding distinct surfaces, as depicted in Figure 6a,b. The X–Z plane cross-sections at the base and middle of the pineapple are extracted from surfaces (a) and (b), respectively. Subtraction of these images reveals the fruit’s outline, which is then dilated to generate the path map necessary for the APF algorithm.
To construct X–Y plane maps, cross-section images are extracted at 0.05-m intervals along the Z-axis, from the fruit centroid to the zero point. Pixel-subtraction methods are utilized to pinpoint the fruit’s critical pixels, and adjacent cross-sections are integrated to form a detailed map suitable for use by the RRT* algorithm. This approach ensures the precision needed for motion planning while capturing the spatial intricacies of both fruit and plant structures.
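The cross-section subtraction and dilation used to build the APF path map can be sketched with OpenCV; the slice images, binarization threshold, and kernel size are illustrative assumptions.

```python
import cv2
import numpy as np

# Cross-sections rendered from surfaces (a) and (b) as grayscale masks.
with_fruit = cv2.imread("slice_with_fruit.png", cv2.IMREAD_GRAYSCALE)
without_fruit = cv2.imread("slice_without_fruit.png", cv2.IMREAD_GRAYSCALE)

# Pixel subtraction leaves only the fruit's outline.
outline = cv2.subtract(with_fruit, without_fruit)
_, outline = cv2.threshold(outline, 10, 255, cv2.THRESH_BINARY)

# Dilation adds a safety margin around the fruit for path planning.
kernel = np.ones((9, 9), np.uint8)
path_map = cv2.dilate(outline, kernel)
```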

2.4. Enhanced Robotic Arm Path Planning for Pineapple Picking via APF and RRT* (AIR)

The three-dimensional (3D) path planning for the robotic arm's picking operation is executed using the 2D maps detailed in Section 2.3. The artificial potential field (APF) algorithm is initially applied to achieve a preliminary positioning in the X–Z plane. Following this, an enhanced RRT* algorithm is employed to conduct segmented path searches in the X–Y plane, leveraging the initial positioning along the X–Z axes. When the algorithm reaches the final map, the X–Y coordinates for the robotic arm's end-effector are ascertained. At this stage, the improved RRT* algorithm transitions to a bidirectional RRT-Connect algorithm, generating nodes concurrently from both the start and end points to bolster the efficiency and success rate of pathfinding. The path points from the distinct plane axes are amalgamated to construct the 3D spatial path.
Given the probabilistic nature of RRT-based algorithms, there exists a risk that the generated paths may encroach upon obstacles. To counteract this, the RRT* algorithm incorporates an obstacle-range threshold detection mechanism. If a generated node’s proximity to an obstacle surpasses the predefined threshold, the algorithm reverts to the parent node and initiates the generation of a new path. The program’s flowchart is illustrated in Figure 7. The corresponding pseudo-code, labeled as Algorithm 1, is presented for clarity.
Algorithm 1 Algorithm for AIR
Input: 2D maps in the X–Z and X–Y planes
  • Main parameters of APF: gravitational constant = 100; repulsive constant = 50; range of repulsive force = 100 pixels
  • Main parameters of RRT*: expansion step size = 7 pixels; maximum number of iterations = 10,000; obstacle range threshold = 20 pixels
  • Starting coordinates of the robot arm in the 2D plane
Output: Three-dimensional motion path of the robotic arm picking pineapples
 1: Use the APF algorithm in the X–Z plane to obtain the trajectory of the robot arm and its control points (X, Z).
 2: Extract the X-axis coordinates from the APF control points, constructing a set {X_0, X_1, ..., X_n}, where X_0 is the pineapple location and X_n is the robot arm's end position.
 3: Calculate the initial Y-coordinate Y_0 from the pineapple's center of mass in the X–Y plane.
 4: n ← number of X–Y plane maps   ▷ determines the number of cycles
 5: for k = 0 to n − 1 do
 6:   if k < n − 1 then
 7:     On the k-th X–Y map, use the improved RRT* algorithm to find a path from (X_{n−k}, Y_{n−k}) to the vertical line at X_{n−(k+1)}:
 8:       (a) Generate a random point x_rand.
 9:       (b) Find a new node towards the goal line X_{n−(k+1)}.
10:       (c) Calculate the path cost; if unreasonable, return to step (b).
11:       (d) Perform obstacle detection; if the path enters the threshold, return to step (b).
12:       (e) If the goal line X_{n−(k+1)} is reached, record the endpoint (X_{n−(k+1)}, Y_{n−(k+1)}) and switch to the next map.
13:   else if k = n − 1 then
14:     On the final X–Y map, use the RRT-Connect algorithm to pathfind from the previous endpoint to the robot arm's final X–Y coordinates.
15:   end if
16: end for
17: Combine the coordinates to get the full set of 3D path points (X_n, Y_n, Z_n).
18: According to the coordinate-axis mapping relationship, project the path points (X_n, Y_n, Z_n) back into the pineapple point cloud to obtain the final 3D picking path.
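A minimal sketch of the obstacle-range threshold check in step (d), using a distance transform as the clearance test, is shown below; the `Tree` interface and the distance-transform shortcut are assumptions, not the paper's implementation.

```python
import cv2
import numpy as np

def build_clearance_map(obstacle_mask: np.ndarray) -> np.ndarray:
    """Distance (in pixels) from every free pixel to the nearest obstacle pixel."""
    free = (obstacle_mask == 0).astype(np.uint8)
    return cv2.distanceTransform(free, cv2.DIST_L2, 5)

def try_extend(tree, clearance: np.ndarray, x_new, parent, threshold: float = 20.0):
    """Reject candidate nodes closer than `threshold` pixels to any obstacle,
    reverting to the parent node so the caller can sample a new branch."""
    col, row = int(round(x_new[0])), int(round(x_new[1]))
    if clearance[row, col] < threshold:
        return parent                 # back off: regenerate the path from the parent
    tree.add_node(x_new, parent)      # hypothetical Tree API
    return x_new
```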

3. Results and Discussion

3.1. Experimental Results of Yolov10n-SSE

3.1.1. Deep Learning Trial Configuration Environment and Evaluation Metrics

Deep learning experiments were conducted on a Windows 10 operating system. In terms of hardware, the system was equipped with an Intel Core i7-10700K CPU @ 3.80 GHz with 16 GB of RAM and an NVIDIA GeForce RTX 2080 Super graphics card. On the software side, the experiments employed the PyTorch deep learning framework, version 2.1.2+cu118, along with Python 3.9, complemented by CUDA 11.8 and cuDNN 8.9.7 for GPU computation acceleration. The training parameters used in the experiment are detailed in Table 1.
In this study, precision (P), recall (R), F1 score, and average precision (AP) were employed as metrics to evaluate the performance and effectiveness of the algorithm. The calculation formulas are presented as follows.
$$P = \frac{TP}{TP + FP} \times 100\%, \qquad R = \frac{TP}{TP + FN} \times 100\%,$$
$$F1\ score = \frac{2 \times P \times R}{P + R} \times 100\%, \qquad AP = \int_{0}^{1} P(R)\, dR$$
In the formula, TP denotes the number of true positives, that is, positives correctly predicted as positive; FN represents the number of false negatives, where the actual positives were incorrectly predicted as negative; FP refers to the number of false positives, where the actual negatives were incorrectly predicted as positive; and TN is the number of true negatives, negatives correctly predicted as negative. P is the proportion of actual positives among those predicted as positive. R is the ratio of the number of true positives, positives correctly predicted as positive, to the total number of actual positives. The F1 score is the harmonic mean of P and R. AP is the area under the PR curve.
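For reference, these quantities can be computed directly from detection counts; the trapezoidal AP integration below is one common numerical approximation, not necessarily the evaluation code used in this study.

```python
import numpy as np

def precision_recall_f1(tp: int, fp: int, fn: int):
    """Point metrics from raw detection counts, as defined above."""
    p = tp / (tp + fp)
    r = tp / (tp + fn)
    f1 = 2 * p * r / (p + r)
    return p, r, f1

def average_precision(precisions, recalls) -> float:
    """AP as the area under the P-R curve (trapezoidal rule)."""
    order = np.argsort(recalls)
    return float(np.trapz(np.asarray(precisions)[order],
                          np.asarray(recalls)[order]))

print(precision_recall_f1(tp=84, fp=6, fn=15))  # illustrative counts
```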

3.1.2. Training Performance of the Yolov10n-SSE Model and Comparison with Other Yolo Models

The variation in the loss curve during the training process of the Yolov10n-SSE model, as well as changes in Precision, Recall, and mAP, are illustrated in Figure 8. The steady decline of the loss curve during training indicates that the training process is stable and that the settings of hyperparameters such as the learning rate are relatively appropriate; the convergence of the loss curve at a lower value suggests that the model has achieved a good fit with the data, with no signs of overfitting or underfitting issues.
To illustrate the detection performance of the proposed Yolov10n-SSE model, this paper conducts comparative experiments with several mainstream object detection networks, including Yolov10n, Yolov10s, and the lightweight model Yolov8-ghost. The final performance comparison is based on metrics such as Precision, Recall, AP, and F1 score, with the means and standard deviations of these metrics calculated from the results of the last 20 training epochs. The results are presented in Table 2.
As shown in Table 2, the proposed Yolov10n-SSE model has been compared with several mainstream network models, demonstrating its superior performance. Compared to Yolov10n, Yolov10n-SSE achieves a 1.2% increase in Precision, a 3.1% improvement in Recall, and a 3% enhancement in AP. When compared with Yolov10s, Yolov10n-SSE reduces FLOPs and the parameter count by 16.6 G and 5.31 M, respectively, while maintaining comparable levels of Precision, Recall, and AP. In comparison with the lightweight model Yolov8-ghost, although Yolov10n-SSE's Precision is 2.2% lower, it increases Recall and AP by 5.8% and 5.1%, respectively. Regarding image-processing speed in practical applications, the four models exhibit similar performance, with no significant differences among them. This paper also presents the F1 score results obtained from 150 training epochs, as illustrated in Figure 9.
As illustrated in Figure 10, the four models exhibited varying performance when detecting pineapples occluded by foliage. Among them, the Yolov10n-SSE model achieved a detection confidence score of 95% on occluded pineapple targets. The Yolov10s model was the only other model to achieve comparably high performance, with a confidence of 93%. In contrast, the performance of both the Yolov10n and Yolov8-ghost models was negatively impacted, with their confidence scores dropping to 73% and 83%, respectively.

3.1.3. Incremental Ablation Experiments and Grad-CAM Visualization

To evaluate the contribution of the proposed SPConv, SE, and EMA modules, an incremental ablation study was performed on the Yolov10n model. Table 3 presents the results.
As shown in Table 3, after its low-level convolutional layers were replaced with SPConv, the Yolov10n model achieved a 1.6% increase in Recall and a 1.6% increase in AP compared to the baseline. Its FLOPs were reduced by 0.8 G, although the parameter count increased by 0.36 M (to 3.05 M). Next, after the SE attention mechanism was introduced into the middle of the model's backbone to recalibrate features from the downsampling and feature-fusion layers, this version showed a 3.6% increase in Precision; although FLOPs increased by 0.4 G, the parameter count was reduced by 0.33 M (to 2.72 M). Finally, with the addition of the EMA attention module at the tail of the backbone, the complete Yolov10n-SSE model demonstrated a 3.1% improvement in Recall and a 2.3% improvement in AP over the previous version, though Precision dropped slightly by 0.3%; FLOPs and the parameter count showed no significant changes. This paper also presents the F1 score results for each model in the ablation study after 150 training epochs, as illustrated in Figure 11. The results of the ablation test indicate that the proposed model effectively enhances performance across various metrics.
To further elucidate the specific impact of the SPConv, SE, and EMA modules on model performance, this paper adopts the Gradient-weighted Class Activation Mapping (Grad-CAM) [31] technique for visualization analysis. As an effective visualization tool, Grad-CAM can reveal the decision-making basis of a model during prediction. Grad-CAM heatmaps use a color gradient from blue to red to show the model’s degree of attention to different image regions, where red or warm tones indicate positive contributions and blue or cool tones represent negative contributions. This intuitive color coding allows for the clear identification of the image parts that the model prioritizes during the prediction process.
Through a comparative analysis, Figure 12 presents the Grad-CAM results before and after optimization with each module. The results indicate that after traditional convolutional layers have been replaced with SPConv, the model’s attention becomes significantly more focused on the main body of the pineapple while reducing its focus on non-target areas. Furthermore, by introducing the SE attention mechanism before the SPPF pyramid pooling, the improved model markedly reduces its attention on the pineapple’s leaves and stems—both near and far—compared to the original Yolov10n. Regarding the PSA layer, the original Yolov10n model’s attention tends to concentrate on the region surrounding the pineapple. In contrast, the model incorporating the EMA module more precisely focuses its attention on the pineapple itself, effectively suppressing excessive attention on distant leaves and stems.

3.2. AIR for Path Planning for Robotic Arms in Pineapple Harvesting

3.2.1. AIR Experimental Results

Path planning for the robotic arm was performed in the X–Z plane cross-section using the Artificial Potential Field algorithm, with the results shown in Figure 13. The green line represents the calculated path, while the blue line indicates the sum of attractive and repulsive forces. The green pixel path is then decomposed by inserting a control point every five steps. This process yields the X-coordinate control points that are used for the subsequent step-by-step path planning in the X–Y plane.
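A minimal sketch of such an APF run in the X–Z image plane is shown below, using the gains from Algorithm 1; the single nearest obstacle, unit step size, and termination radius are simplifying assumptions.

```python
import numpy as np

K_ATT, K_REP, RHO0 = 100.0, 50.0, 100.0       # Algorithm 1 gains; range in pixels

def apf_force(q, goal, obstacle):
    """Resultant of attractive pull toward the goal and Khatib-style repulsion."""
    f_att = K_ATT * (goal - q) / (np.linalg.norm(goal - q) + 1e-9)
    d = np.linalg.norm(q - obstacle)
    f_rep = np.zeros(2)
    if d < RHO0:
        f_rep = K_REP * (1.0 / d - 1.0 / RHO0) / d**2 * (q - obstacle) / d
    return f_att + f_rep

q = np.array([700.0, 650.0])                   # start (pixels, illustrative)
goal = np.array([120.0, 300.0])
obstacle = np.array([400.0, 420.0])            # nearest obstacle pixel
control_points = []
for step in range(2000):
    f = apf_force(q, goal, obstacle)
    q = q + f / (np.linalg.norm(f) + 1e-9)     # unit step along the resultant force
    if step % 5 == 0:
        control_points.append(q.copy())        # keep a control point every 5 steps
    if np.linalg.norm(q - goal) < 5.0:
        break
```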
After obtaining the X-coordinate control points, the operation transitions to the X–Y plane projection. Using the X–Y plane maps constructed in Section 2.3, the X-coordinate control points are proportionally allocated to each map segment. The starting point for the first map is the coordinate of the pineapple's centroid projected from the 3D point cloud onto the X–Y plane, while the endpoint for the final map is the projected position of the robotic arm. For all intermediate maps, the starting point is the endpoint calculated from the previous map's path, and the goal is a red finish line defined by the last of the allocated X-coordinate control points. The path is computed using the improved RRT* algorithm, with the results presented in Figure 14. Notably, subfigures (c) and (d) demonstrate that when the search path approaches a leaf obstacle, the algorithm backtracks to a parent node to find a new path. This illustrates how the Artificial Potential Field algorithm, in conjunction with obstacle threshold detection, effectively constrains the randomness of the RRT* algorithm.

3.2.2. Comparative Experiments on Path Planning Performance in 2D Space

To validate the obstacle avoidance effectiveness of the improved RRT* algorithm, a comparative experiment was conducted by altering its core component. The core was replaced with three alternatives (standard RRT, RRT*, and Informed-RRT*), and the performance of each variant was evaluated on three key metrics: runtime, the number of generated nodes, and the number of nodes in the final path. As shown in Figure 15, without supplementary obstacle threshold detection, the randomness of the standard RRT algorithm can lead to paths that grow very close to obstacles, which in practice would pose a high risk of the robotic arm colliding with pineapple leaves. The Informed-RRT* algorithm also proved unsuitable for this specific task. Its elliptical search optimization, while powerful in static scenarios, does not adapt to the frequent changes in the start-to-goal distance in this application. This rigidity resulted in an excessive number of generated nodes and a substantial computational burden, rendering the algorithm impractical.
Table 4 shows the runtime, number of generated nodes and path node count for different algorithm cores.
As indicated by the table, among the baseline algorithms, RRT* demonstrates superior performance over RRT and Informed-RRT* in terms of runtime, the number of generated nodes, and the path node count. Therefore, RRT* was selected as the basis for the improvements proposed in this paper. Compared to the standard RRT* algorithm, our proposed method has an advantage in node generation: it reduces the number of exploration nodes within the vicinity of obstacles to generate a collision-free path.
Finally, by storing the Y-axis pixel coordinates that correspond to the X-coordinate control points and combining them with the coordinates obtained from the aforementioned Artificial Potential Field algorithm, the control points for motion in 3D space are obtained after applying a projection transformation.

3.2.3. Comparison of Path Planning Performance in 3D Space

To validate the feasibility of the path planning algorithm proposed in this paper, comparative experiments were conducted against three baseline algorithms in 3D space: RRT, RRT*, and Informed-RRT*. Three different pineapple point cloud maps were used for the evaluation. The advantages of the proposed algorithm are demonstrated by comparing the runtime, the number of generated nodes, and the path node count among the algorithms. The main parameters for the 3D path planning algorithms were set as follows: the expansion step size was 0.03, the maximum number of iterations was 10,000, and the local search range was 0.05. The experimental results are visualized in Figure 16, where the red lines represent the generated paths and the blue dots represent the nodes explored by the 3D path planning algorithms. A detailed comparison of the specific parameters for the four algorithms is provided in Table 5. The unit of the expansion step size in the 3D point cloud corresponds to the world coordinate system native to the point cloud data acquisition, so an expansion step of 0.03 corresponds to 3 cm in the world coordinate system. The 2D image canvas size is 800 × 800 pixels, and the captured point cloud covers a practical range of 0.3 m around the pineapple; the 2D expansion step of 7 pixels therefore translates to an approximate step length of 0.00525 m in the world coordinate system. The number of nodes generated by our proposed algorithm cannot be directly compared to that of the 3D algorithms because the 3D path expansion step is approximately 5.7 times longer than the 2D path expansion step. To facilitate a fair comparison with the 3D path planning algorithms, a compensation factor is therefore applied to the path node count of our algorithm; in Table 5, the numbers in parentheses for our algorithm's path node count represent the compensated results. The rationale for not instead running the 3D path planning algorithms with an expansion step size of 0.00525 m is that, at such a scale, the 3D algorithms tend to generate an excessive number of search nodes, leading to a significantly reduced path success rate.
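For concreteness, the scale conversion and node compensation quoted above work out as follows (the 0.6 m canvas span is inferred from the 0.3 m radius given in Section 2.3, and Map 1 is used as the example):

$$7\ \text{px} \times \frac{0.6\ \text{m}}{800\ \text{px}} = 0.00525\ \text{m}, \qquad \frac{0.03\ \text{m}}{0.00525\ \text{m}} \approx 5.7, \qquad \frac{78\ \text{path nodes}}{5.7} \approx 14.$$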
The experimental results reveal that the performance of RRT, RRT*, and Informed-RRT* is inconsistent in 3D environments, a consequence of their inherent reliance on random sampling. Their effectiveness fluctuates across the three test maps, with each algorithm showing situational strengths and weaknesses. A clear example is shown for Map 3, where Informed-RRT*, despite its superior efficiency in runtime and node generation, produces a suboptimal path with significant detours, rendering it less effective than RRT* in terms of path quality. Conversely, the AIR algorithm achieves greater stability by leveraging an Artificial Potential Field to provide heuristic guidance. By predefining the general trajectory in the X–Z plane, it effectively constrains the random exploration of the RRT* component in the X–Y plane. As a result, the AIR algorithm exhibits demonstrably superior stability and reliability compared to the other three 3D path planning methods.

3.2.4. Robotic Arm Motion Simulation

To validate the effectiveness and feasibility of the generated paths, the robotic arm was modeled and simulated, and its path planning verified, using the Robot Operating System (ROS) and MoveIt. The robotic arm was simplified using the Denavit-Hartenberg (D-H) model, with the parameters listed in Table 6. The arm used in this study is the AUBO-i5, which, combined with components such as the tracked chassis base and the control hub, was used to reproduce the real-world scene from Figure 17a; the result is shown in Figure 17b. In the simulation figure, the light blue areas represent obstacles, namely the pineapple plant, the tracked chassis base, and the control hub, while the target pineapple is represented by a light green sphere. As the robotic arm is equipped with an end-effector, a certain degree of compensation is required for the calculated picking path. The bounding box of the end-effector was set to 300 mm in length, 140 mm in width, and 150 mm in height. To verify the feasibility of the end-effector motion paths calculated by the proposed picking-path planning algorithm, 15 different starting positions for the robotic arm were selected for testing.
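As an illustration of how the Table 6 parameters define the arm model, the sketch below chains modified D-H transforms into a forward-kinematics pose; the convention (a_{i-1}, α_{i-1}) follows the table headers, and the zero-pose query is illustrative.

```python
import numpy as np

# (a_{i-1} mm, d_i mm, alpha_{i-1} deg, theta offset deg) from Table 6 (AUBO-i5).
DH = [
    (0.0,   122.0,   0.0, -180.0),
    (0.0,   121.5, -90.0,  -90.0),
    (408.0,   0.0, 180.0,    0.0),
    (376.0,   0.0, 180.0,  -90.0),
    (0.0,   102.5, -90.0,    0.0),
    (0.0,    94.0,  90.0,    0.0),
]

def dh_matrix(a, d, alpha, theta):
    """Modified D-H link transform T_{i-1,i}."""
    al, th = np.radians(alpha), np.radians(theta)
    return np.array([
        [np.cos(th),              -np.sin(th),              0.0,         a],
        [np.sin(th) * np.cos(al),  np.cos(th) * np.cos(al), -np.sin(al), -d * np.sin(al)],
        [np.sin(th) * np.sin(al),  np.cos(th) * np.sin(al),  np.cos(al),  d * np.cos(al)],
        [0.0, 0.0, 0.0, 1.0],
    ])

def forward_kinematics(joints_deg):
    """Base-to-flange pose for six joint angles (degrees); translation in mm."""
    T = np.eye(4)
    for (a, d, alpha, offset), q in zip(DH, joints_deg):
        T = T @ dh_matrix(a, d, alpha, q + offset)
    return T

print(forward_kinematics([0, 0, 0, 0, 0, 0])[:3, 3])  # end-effector position at zero pose
```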
Out of the fifteen simulation trials, the inverse kinematics solver failed to find a corresponding joint configuration in two cases. The simulation results for three of the successful picking paths are presented in Figure 17d. This figure includes images of the robotic arm’s initial pose and the trajectory line to the destination, which is computed after the arm’s joints have moved.
Using the 13 successfully computed inverse kinematics paths, the robotic arm was controlled to its target position, and the approximate deviation between the end-effector's base and the pineapple's center was recorded. The measurement process is shown schematically in Figure 18. Table 7 was generated by subtracting the Y-axis and Z-axis deviations from the maximum opening of the soft gripper in those respective directions; a positive value indicates that the pineapple's center lies within the gripper's grasping range. Figure 19 shows the box plot for the data in Table 7, and this analysis is combined with the end-effector compensation data from the algorithm described previously.
Firstly, an analysis of the X-axis (lateral) positioning error reveals a high degree of precision. The lateral deviation was consistently maintained within a ±15 mm range, with an absolute mean error of just 4.13 mm. Secondly, the feasibility of the grasp is confirmed by the Y-axis and Z-axis margins. The vertical grasping margin (Y-axis) varied between 16.08 mm and 58.37 mm, confirming that the pineapple was successfully captured within the gripper’s vertical workspace in every trial. Likewise, the depth grasping margin (Z-axis) ranged from 34.33 mm to 56.85 mm. Being consistently positive, this value indicates exceptional fault tolerance along the approach axis, effectively preventing failures from depth-perception errors and ensuring a secure grasp on the fruit.

4. Conclusions

This study addresses critical challenges in automated pineapple harvesting posed by high occlusion rates and complex terrains in natural orchard environments. By integrating enhanced visual detection and efficient path planning methodologies, the proposed approach demonstrates significant improvements in both detection accuracy and trajectory generation for the robotic arm.
The visual detection model, Yolov10n-SSE, achieves precision, recall, and mAP metrics of 93.8%, 84.9%, and 91.8%, respectively, outperforming several baseline models in occlusion-heavy conditions. These results highlight its applicability for robust fruit detection in real-world field environments, where traditional methods often struggle under conditions of fluctuating lighting and dense foliage. The integration of split convolution, squeeze-and-excitation attention, and multi-scale attention modules further reduces computational complexity, enabling real-time deployment on resource-constrained systems.
A novel dimensionality-reduction strategy was introduced to simplify 3D path planning into a tractable 2D image-space task using depth camera point clouds. By combining the APF method with an improved RRT* algorithm, the proposed approach ensures the generation of collision-free paths while achieving faster convergence and stable trajectory planning. Experimental results show an 18.7% reduction in computation time compared to standard RRT algorithms and a high success rate in generating obstacle-free paths. Validation in ROS-based robotic arm simulations confirms the feasibility of the methodology for real-world applications, demonstrating scalability across different types of agricultural tasks.
The integration of advanced visual detection and hybrid path planning algorithms offers a viable technical solution for automated pineapple harvesting. Beyond increasing operational efficiency, this study establishes foundational methodologies that can be adapted for harvesting other fruits or operations in highly cluttered agricultural environments. The findings support a broader transformation of traditional agricultural practices into sustainable, automated systems, addressing labor shortages, reducing costs, and enhancing productivity in precision farming.
Despite the promising results achieved in this study, several limitations remain that warrant further investigation and improvement. Specifically, scenarios where pineapples are partially outside the camera’s field of view or subject to severe occlusion pose challenges that require additional solutions. Additionally, depth estimation inaccuracies arising from environmental influences on the depth camera also necessitate further refinement. These issues will be addressed in future work to enhance the robustness and applicability of the proposed methodologies.

Author Contributions

Conceptualization, H.W.; methodology, A.Z.; software, A.Z.; validation, Y.Z. and G.Z.; investigation, Y.Z. and G.Z.; writing—original draft preparation, A.Z.; writing—review and editing, H.W., F.W. and X.Z.; supervision, H.W., F.W. and X.Z.; project administration, H.W.; funding acquisition, H.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported in part by the National Natural Science Foundation of China (Grant No. 32372001), the Guangzhou Science and Technology Project (Grant No. 2023B01J0046), the Basic and Applied Basic Research Foundation of Guangdong Province (Grant No. 2023A1515110586), the Guangdong Provincial Education Department Characteristic Innovation Project (Grant No. 2024KTSCX132), and the University-Industry Collaborative Education Program of the Ministry of Education (Grant No. 2024XTYR08).

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Li, D.; Jing, M.; Dai, X.; Chen, Z.; Ma, C.; Chen, J. Current status of pineapple breeding, industrial development, and genetics in China. Euphytica 2022, 218, 85. [Google Scholar] [CrossRef]
  2. Li, M.T.; He, L.; Yue, D.D.; Wang, B.B.; Li, J.L. Fracture mechanism and separation conditions of pineapple fruit-stem and calibration of physical characteristic parameters. Int. J. Agric. Biol. Eng. 2023, 16, 248–259. [Google Scholar] [CrossRef]
  3. Salleh, N.F.M.; Sukadarin, E.H.; Khamis, N.K.; Ramli, R. Pattern of muscle contraction in different postures among Malaysia pineapple plantation workers. IOP Conf. Ser. Mater. Sci. Eng. 2019, 469, 012088. [Google Scholar] [CrossRef]
  4. Singh, H.J.; Chauhan, J.S.; Karmakar, S. Ergonomic Risk Factors Associated with Pineapple Harvesting Task in Northeast India. In Ergonomics for Design and Innovation. HWWE 2021; Chakrabarti, D., Karmakar, S., Salve, U.R., Eds.; Lecture Notes in Networks and Systems; Springer: Cham, Switzerland, 2022; Volume 391. [Google Scholar] [CrossRef]
  5. He, F.; Zhang, Q.; Deng, G.; Li, G.; Yan, B.; Pan, D.; Luo, X.; Li, J. Research Status and Development Trend of Key Technologies for Pineapple Harvesting Equipment: A Review. Agriculture 2024, 14, 975. [Google Scholar] [CrossRef]
  6. Lammers, K.; Zhang, K.; Zhu, K.; Chu, P.; Li, Z.; Lu, R. Development and evaluation of a dual-arm robotic apple harvesting system. Comput. Electron. Agric. 2024, 227 Pt 2, 109586. [Google Scholar] [CrossRef]
  7. Mehta, S.S.; Burks, T.F. Vision-based control of robotic manipulator for citrus harvesting. Comput. Electron. Agric. 2014, 102, 146–158. [Google Scholar] [CrossRef]
  8. Huang, P.; Zhu, L.; Zhang, Z.; Yang, C. Row End Detection and Headland Turning Control for an Autonomous Banana-Picking Robot. Machines 2021, 9, 103. [Google Scholar] [CrossRef]
  9. Liu, T.; Liu, W.; Zeng, T.; Cheng, Y.; Zheng, Y.; Qiu, J. A Multi-Flexible-Fingered Roller Pineapple Harvesting Mechanism. Agriculture 2022, 12, 1175. [Google Scholar] [CrossRef]
  10. Liu, T.; Cheng, Y.; Li, J.; Chen, S.Y.; Lai, J.S.; Liu, Y.; Qi, L.; Yang, X. Feeding-type harvesting mechanism with the rotational lever for pineapple fruit. Trans. Chin. Soc. Agric. Eng. (Trans. CSAE) 2023, 39, 27–38. [Google Scholar] [CrossRef]
  11. Guo, A.F.; Li, J.; Guo, L.Q.; Jiang, T.; Zhao, Y.P. Structural design and analysis of an automatic pineapple picking and collecting straddle machine. J. Phys. Conf. Ser. 2021, 1777, 012029. [Google Scholar] [CrossRef]
  12. Bui, L.C.Q.; Hoang, S.; Nguyen, P.T.A.; Tran, C.C. Dynamics and Motion Control of a Pineapple Harvesting Robotic System. In Proceedings of the 2022 6th International Conference on Robotics and Automation Sciences (ICRAS), Wuhan, China, 9–11 June 2022; pp. 132–137. [Google Scholar] [CrossRef]
  13. Chaivivatrakul, S.; Dailey, M.N. Texture-based fruit detection. Precis. Agric. 2014, 15, 662–683. [Google Scholar] [CrossRef]
  14. Liu, T.H.; Nie, X.N.; Wu, J.M.; Zhang, D.; Liu, W.; Cheng, Y.F.; Zheng, Y.; Qiu, J.; Qi, L. Pineapple (Ananas comosus) fruit detection and localization in natural environment based on binocular stereo vision and improved YOLOv3 model. Precis. Agric. 2023, 24, 139–160. [Google Scholar] [CrossRef]
  15. Li, J.; Li, C.; Zeng, S.; Luo, X.; Chen, C.L.P.; Yang, C. A lightweight pineapple detection network based on YOLOv7-tiny for agricultural robot system. Comput. Electron. Agric. 2025, 231, 109944. [Google Scholar] [CrossRef]
  16. Lai, Y.; Ma, R.; Chen, Y.; Wan, T.; Jiao, R.; He, H. A Pineapple Target Detection Method in a Field Environment Based on Improved YOLOv7. Appl. Sci. 2023, 13, 2691. [Google Scholar] [CrossRef]
  17. Chen, Y.; Zheng, L.; Peng, H. Assessing pineapple maturity in complex scenarios using an improved RetinaNet algorithm. Eng. Agrícola 2023, 43, e20220180. [Google Scholar] [CrossRef]
  18. Liang, Z.; Li, X.; Wang, G.; Wu, F.; Zou, X. Palm vision and servo control strategy of tomato picking robot based on global positioning. Comput. Electron. Agric. 2025, 237, 110668. [Google Scholar] [CrossRef]
  19. Wang, H.; Zhang, G.; Cao, H.; Hu, K.; Wang, Q.; Deng, Y.; Gao, J.; Tang, Y. Geometry-Aware 3D Point Cloud Learning for Precise Cutting-Point Detection in Unstructured Field Environments. J. Field Robot. 2025, e20250421. [Google Scholar] [CrossRef]
  20. Dorigo, M.; Maniezzo, V.; Colorni, A. Ant system: Optimization by a colony of cooperating agents. IEEE Trans. Syst. Man Cybern. Part B (Cybern.) 1996, 26, 29–41. [Google Scholar] [CrossRef]
  21. Khatib, O. Real-time obstacle avoidance for manipulators and mobile robots. Int. J. Robot. Res. 1986, 5, 90–98. [Google Scholar] [CrossRef]
  22. Hart, P.E.; Nilsson, N.J.; Raphael, B. A Formal Basis for the Heuristic Determination of Minimum Cost Paths. IEEE Trans. Syst. Sci. Cybern. 1968, 4, 100–107. [Google Scholar] [CrossRef]
  23. Zhang, Q.; Yue, X.; Li, B.; Jiang, X.; Xiong, Z.; Xu, C. Motion planning of picking manipulator based on CTB-RRT* algorithm. Trans. Chin. Soc. Agric. Mach. 2021, 52, 129–136. [Google Scholar] [CrossRef]
  24. Ma, J.; Wang, Y.; He, Y.; Wang, K.; Zhang, Y. Motion planning of citrus harvesting manipulator based on informed guidance point of configuration space. Trans. Chin. Soc. Agric. Eng. 2019, 35, 100–108. [Google Scholar] [CrossRef]
  25. Yin, X.; Chen, Y.; Guo, W.; Yang, Z.; Chen, H.; Liao, A.; Yao, D. Flexible grasping of robot arm based on improved Informed-RRT star. Chin. J. Eng. 2025, 47, 113–120. [Google Scholar] [CrossRef]
  26. Xiong, C.; Xiong, J.; Yang, Z.; Hu, W. Path Planning for Citrus Picking Robotic Arms Based on Deep Reinforcement Learning. J. South China Agric. Univ. 2023, 44, 473–483. [Google Scholar]
  27. Zhang, Q.; Jiang, Z.; Lu, Q.; Han, J.N.; Zeng, Z.; Gao, S.H.; Men, A. Split to Be Slim: An Overlooked Redundancy in Vanilla Convolution. In Proceedings of the 29th International Joint Conference on Artificial Intelligence (IJCAI-20), Yokohama, Japan, 7–15 January 2021; pp. 3195–3201. [Google Scholar] [CrossRef]
  28. Hu, J.; Shen, L.; Albanie, S.; Sun, G.; Wu, E. Squeeze-and-Excitation Networks. IEEE Trans. Pattern Anal. Mach. Intell. 2020, 42, 2011–2023. [Google Scholar] [CrossRef]
  29. Ouyang, D.; He, S.; Zhang, G.; Luo, M.; Guo, H.; Zhan, J.; Huang, Z. Efficient Multi-Scale Attention Module with Cross-Spatial Learning. In Proceedings of the 2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Rhodes Island, Greece, 4–10 June 2023; pp. 1–5. [Google Scholar] [CrossRef]
  30. Hou, Q.; Zhou, D.; Feng, J. Coordinate Attention for Efficient Mobile Network Design. In Proceedings of the 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, USA, 20–25 June 2021; pp. 13708–13717. [Google Scholar] [CrossRef]
  31. Selvaraju, R.R.; Cogswell, M.; Das, A.; Vedantam, R.; Parikh, D.; Batra, D. Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization. In Proceedings of the 2017 IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017; pp. 618–626. [Google Scholar] [CrossRef]
Figure 1. Overall workflow.
Figure 2. Pineapple dataset: (a) top view, smooth light, 183 images; (b) straight view, smooth light, 263 images; (c) occlusion, smooth light, 171 images; (d) top view, shadows, 211 images; (e) straight view, shadows, 198 images; (f) occlusion, shadows, 199 images.
Figure 3. Structure of Yolov10n-SSE.
Figure 4. Structure of SPConv.
Figure 5. Structure of EMA.
Figure 6. 2D map construction process: (a) plant surface with pineapple fruit; (b) plant surface without pineapple fruit.
Figure 7. AIR procedure.
Figure 8. Yolov10n-SSE training results.
Figure 9. F1 scores of the Yolov10n-SSE, Yolov10n, Yolov10s, and Yolov8-ghost models after 150 training epochs.
Figure 10. Comparison of model performance in detecting occluded pineapples.
Figure 11. F1 scores of different models in the incremental ablation test.
Figure 12. Comparative Grad-CAM visualizations.
Figure 13. Results of the artificial potential field algorithm.
Figure 14. Results of the improved RRT* algorithm; (a–e) are cross-sectional composite maps of the pineapple surface in the X–Y plane.
Figure 15. Path search performance with different algorithm cores.
Figure 16. Four algorithms for path planning in 3D space.
Figure 17. Robotic arm simulation test.
Figure 18. XYZ axis-measurement diagram.
Figure 19. Calculated deviations between the pineapple's center and the gripper's grasping envelope.
Table 1. Model training parameters.

Parameter                Configuration
Epoch                    150
Initial learning rate    0.01
Learning decline factor  0.01
Batch size               8
Momentum                 0.937
Table 2. Comparison of test results from different models.

Metric                          Yolov10n    Yolov10s    Yolov8-ghost  Yolov10n-SSE
Precision (%)                   92.6 ± 1.6  94.7 ± 1.6  96.0 ± 1.2    93.8 ± 1.4
Recall (%)                      81.8 ± 1.3  85.7 ± 1.0  79.1 ± 1.4    84.9 ± 1.3
AP (%)                          88.8 ± 0.6  93.0 ± 0.4  86.7 ± 0.9    91.8 ± 0.6
F1 score                        86.7 ± 0.7  89.9 ± 0.7  86.7 ± 0.8    89.1 ± 0.6
FLOPs (G)                       8.2         24.4        5.0           7.8
Param (M)                       2.69        8.04        1.71          2.73
Processing time per photo (ms)  85.5        89.9        77.6          87.0
Table 3. Module ablation test.

Metric         Yolov10n    +SPConv     +SPConv + SE  Yolov10n-SSE
Precision (%)  92.6 ± 1.6  90.5 ± 1.7  94.1 ± 1.7    93.8 ± 1.4
Recall (%)     81.8 ± 1.3  83.4 ± 1.4  81.8 ± 1.7    84.9 ± 1.3
AP (%)         88.8 ± 0.6  90.4 ± 0.3  89.5 ± 1.0    91.8 ± 0.6
F1 score       86.7 ± 0.7  86.8 ± 0.5  87.5 ± 0.7    89.1 ± 0.6
FLOPs (G)      8.2         7.4         7.8           7.8
Param (M)      2.69        3.05        2.72          2.73
Table 4. Performance comparison of different algorithm cores.

Algorithm      Runtime (s)  Generated Nodes  Path Nodes
RRT            0.136        914              121
RRT*           0.058        666              72
Informed-RRT*  2.303        33,035           80
Ours           0.058        500              78
Table 5. Performance comparison of path planning algorithms on different maps.

Map    Algorithm      Runtime (s)  Generated Nodes  Path Nodes
Map 1  RRT            2.073        114              27
       RRT*           3.134        104              14
       Informed-RRT*  1.362        550              28
       Ours           2.37         500              78 (14)
Map 2  RRT            0.352        943              21
       RRT*           0.213        433              15
       Informed-RRT*  0.647        1252             22
       Ours           1.46         305              188 (33)
Map 3  RRT            4.547        517              19
       RRT*           1.361        652              11
       Informed-RRT*  0.591        1072             20
       Ours           1.34         263              167 (29)
Table 6. D-H parameters of the robotic arm.

Joint i  a_{i-1} (mm)  d_i (mm)  α_{i-1} (°)  θ_i (°)
1        0             122       0            −180
2        0             121.5     −90          −90
3        408           0         180          0
4        376           0         180          −90
5        0             102.5     −90          0
6        0             94        90           0
Table 7. Deviation data of pineapple centers from the soft grasping end-effector for 13 sets of experiments.

Trial   1      2      3      4      5      6      7      8      9      10     11     12     13
X (mm)  6.63   6.36   0.86   −6.79  2.11   −9.03  9.73   0.33   1.57   −0.80  −4.62  −2.48  10.68
Y (mm)  28.33  38.62  41.22  23.80  53.70  16.08  58.17  58.37  16.89  32.41  20.41  21.66  27.29
Z (mm)  43.69  45.50  56.03  34.33  38.03  56.60  47.98  36.50  56.85  55.79  51.30  39.63  45.49