Review

Key Technologies of Robotic Arms in Unmanned Greenhouse

1 Nanjing Institute of Agricultural Mechanization, Ministry of Agriculture and Rural Affairs, Nanjing 210014, China
2 Jiangsu Agricultural Mechanization Service Station, Nanjing 210017, China
* Authors to whom correspondence should be addressed.
These authors contributed equally to this work.
Agronomy 2025, 15(11), 2498; https://doi.org/10.3390/agronomy15112498
Submission received: 23 September 2025 / Revised: 22 October 2025 / Accepted: 25 October 2025 / Published: 28 October 2025
(This article belongs to the Section Precision and Digital Agriculture)

Abstract

As a pioneering solution for precision agriculture, unmanned, robotics-centered greenhouse farms have become a key technological pathway for intelligent upgrades. The robotic arm is the core unit responsible for achieving full automation, and its level of technological development directly affects the productivity and intelligence of these farms. This review systematically analyzes the current applications, challenges, and future trends of robotic arms and their key technologies within unmanned greenhouses. The paper classifies and compares the common types of robotic arms and their mobile platforms used in greenhouses. It provides an in-depth exploration of the core technologies that support efficient manipulator operation, focusing on the design evolution of end-effectors and the perception algorithms for plants and fruit. Furthermore, it elaborates on the framework for integrating individual robots into collaborative systems, analyzing typical application cases in areas such as plant protection and fruit and vegetable harvesting. The review concludes that greenhouse robotic arm technology is undergoing a profound transformation, evolving from single-function automation towards system-level intelligent integration. Finally, it discusses future development directions, highlighting the importance of multi-robot systems, swarm intelligence, and air-ground collaborative frameworks incorporating unmanned aerial vehicles (UAVs) in overcoming current limitations and achieving fully autonomous greenhouses.

1. Introduction

In response to the dual pressures of a growing global population and diminishing arable land, modern agriculture is undergoing a profound transformation towards more intensive, efficient, and sustainable models [1,2]. Against this backdrop, unmanned greenhouse farms, guided by the principles of precision agriculture and integrating controlled environment agriculture technologies, are emerging as a forward-looking production system. They demonstrate immense potential to maximize crop yields and resource efficiency [3,4]. However, the traditional model of greenhouse agriculture has increasingly revealed several development bottlenecks. Firstly, its heavy reliance on manual labor not only creates pressure from labor shortages and rising costs but also poses health risks to workers through repetitive physical tasks and potential pesticide exposure. Secondly, manual operations struggle to meet the precision and efficiency demands of large-scale, standardized production and are susceptible to subjective factors, leading to inconsistencies and a lack of predictability in the production process. Furthermore, certain greenhouse environments, such as those with high temperatures and humidity, are uncomfortable for human workers, limiting working hours.
Together, these challenges constitute the primary obstacles to the upgrading of the traditional greenhouse industry. In order to overcome these challenges and advance greenhouse agriculture towards higher levels of automation and intelligence, the development and integration of robotic arms have become a key area of research and a critical breakthrough in the field [5,6]. As a highly flexible automated execution unit, the robotic arm, through its ability to simulate or even surpass human arm capabilities in complex environments, demonstrates immense potential to replace manual labor throughout the entire greenhouse production process when combined with advanced machine vision [7,8,9,10,11,12] and multi-sensor fusion control strategies [13]. From delicate seedling cultivation [14] and transplanting, to the identification of pests and diseases and the targeted spraying of crops [15], and finally the non-destructive harvesting of mature fruit and vegetables [16], robotic arms can cover virtually every key stage of greenhouse production. They not only promise to significantly reduce reliance on labor and enhance operational precision and consistency but can also carry various sensors to monitor the crop’s environment and status in real time. This provides crucial support for precision decision-making, ultimately leading to optimized management and efficient resource utilization in the greenhouse production process. Therefore, as an indispensable core technology for precision agriculture and greenhouse automation, the deep integration and innovative application of robotic arms are revolutionizing the mode of greenhouse agricultural production, laying a solid foundation for building a sustainable, high-yield, and intelligent modern agricultural system.
To clarify the uniqueness and academic value of this review, we directly compared it with representative reviews published in related fields in recent years (Table 1). This comparison shows that although existing review articles provide valuable macroscopic perspectives on agricultural robotics, their breadth often comes at the expense of depth in specific scenarios, or they focus on a single technical module. For instance, the recent review by Jin & Han [17] treats greenhouses as one of many scenarios for general discussion, while the classic review by Bac et al. [18] focuses on the high-value “harvesting” stage. Other works, such as Zhao et al. [11] and Zhang et al. [16], conducted extremely in-depth analyses of specific technical components such as visual control and end-effectors. In contrast, the core contribution of this review lies in its unique positioning and perspective: we do not simply list technologies but treat the “unmanned greenhouse” as an independent, complete operational ecosystem for overall analysis. This positioning enables us to focus specifically on the greenhouse environment and to analyze in depth the specific challenges and customized solutions arising from its unique structure and crop growth patterns, thereby providing a depth that general reviews cannot match. At the same time, our perspective goes beyond a single technology or task stage, emphasizing a system integration framework and highlighting the collaborative working of the perception, decision-making, and execution modules in the greenhouse closed-loop system. Finally, based on a systematic analysis of the latest literature, this review constructs a forward-looking development blueprint, outlining a clear path for the next generation of automated greenhouses, with particular attention to emerging paradigms such as multi-robot collaboration and air-ground integration. This work therefore aims to provide researchers and practitioners in controlled environment agriculture automation with an in-depth, systematic, and forward-looking reference.
Table 1. Comparison with Representative Review Articles in Agricultural Robotics.

| Reference Paper | Year | Primary Focus | Scope and Limitations | Distinction & Our Contribution |
| --- | --- | --- | --- | --- |
| Jin & Han [17] | 2024 | Precision agriculture (general) | Broadest scope, covering greenhouses, open fields, and orchards; analysis of greenhouse-specific challenges is not centralized. | Offers breadth but lacks depth on unique greenhouse issues. Our work is exclusively focused on the greenhouse ecosystem for a more in-depth analysis. |
| Bac et al. [18] | 2014 | High-value crop harvesting | A classic review, but focuses on the single task of ‘harvesting’ and lacks coverage of the last decade’s advancements. | Single-task focus. Our work covers the entire workflow from monitoring to operation and integrates the latest progress. |
| Zhao et al. [11] | 2016 | Vision-based control for harvesting robots | Technology-specific focus, providing a deep dive into the ‘vision’ module. | Perspective is limited to a single technology. Our work emphasizes the systemic integration of modules like vision, control, and mobility. |
| Zhang et al. [16] | 2020 | End-effectors for agricultural robots | Component-specific focus, offering a comprehensive overview of ‘gripper’ design and control. | Perspective is limited to a single hardware component. Our work discusses the gripper as part of a larger, integrated system. |
| This Review | 2025 | The unmanned greenhouse as a unique, self-contained operational ecosystem | A focused, in-depth, and systematic exploration of robotic technologies, integration frameworks, and future paradigms specifically for the greenhouse context. | 1. Exclusive focus; 2. Systemic perspective; 3. Forward-looking blueprint. |

2. Robotic Arms Used in Greenhouses

2.1. Types of Robotic Arms Used in Greenhouses

In greenhouse automation, the choice of robotic arm is crucial for operational efficiency, cost, and the range of tasks that can be performed. The robotic arms currently used in greenhouses can be categorized into several main types: SCARA (Selective Compliance Assembly Robot Arm), articulated, and Cartesian arms, as well as more advanced multi-arm collaborative systems derived from them. Each type has distinct advantages and disadvantages in terms of degrees of freedom, workspace, motion characteristics, and cost, as shown in Table 2.
The core advantage of SCARA robotic arms lies in their high speed and precision within a planar workspace [17], enabling efficient execution of rapid seedling sorting and transplanting tasks in automated seedling cultivation processes [5,19]. The SCARA structure exhibits excellent compliance in the horizontal plane while maintaining high rigidity in the vertical direction. Although its motion flexibility and three-dimensional spatial capabilities are limited, this configuration retains application value in specific scenarios with low posture requirements, such as harvesting strawberries in elevated cultivation systems [20,21].
Articulated robotic arms possess unparalleled flexibility due to their human-like multi-degree-of-freedom structure (typically 4 to 7 degrees of freedom) [17]. This characteristic enables them to perform complex tasks within unstructured environments such as deep within crop canopies [14,18]. In fruit and vegetable harvesting, their multidimensional motion capabilities support intricate obstacle-avoidance path planning to grasp fruits obscured by foliage [22]. Simultaneously, they have been extensively adapted for tasks requiring precise posture control, including plant pruning, assisted pollination [23], and targeted pesticide spraying [24,25].
Unlike the rotational motion of articulated arms, Cartesian coordinate robotic arms (or gantry-type robotic arms) operate based on three orthogonal linear axes (X, Y, Z) [26]. This design delivers high rigidity, high positioning accuracy, and a well-defined, large working space, making it an ideal platform for standardized processes such as automated seedling production lines, large-scale environmental monitoring, and crop phenotyping data collection [27]. For instance, they perform precise seeding and batch transplanting on automated seedling production lines, while sensor-equipped variants enable large-scale environmental monitoring and crop imaging. However, their limited motion flexibility makes them ill-suited for unstructured tasks requiring posture adjustments, such as fruit harvesting [28,29].
To address complex operations beyond the capabilities of single-arm robots, multi-arm collaborative robotic systems have emerged, aiming to mimic or even surpass the bilateral dexterity of human hands [30]. For instance, in fruit harvesting, one arm can stabilize the fruit or clear obstructing foliage while another performs the cut, significantly improving operational success rates and fruit quality [31,32]. Through coordinated motion, these systems can also enhance overall efficiency via parallel operations or optimized motion sequencing [33,34]. However, due to high technical complexity in areas such as coordinated control, shared perception, and task planning, they remain primarily confined to laboratory research and prototype validation stages [35,36,37].

2.2. Mobile Platforms for Robotic Arms

To expand the application range of robotic arms from fixed workstations to the entire greenhouse, they must be equipped with suitable deployment platforms. These platforms can be categorized into two main types: fixed and mobile. Fixed platforms mount the robotic arm in a stationary position and are suitable for scenarios where materials are actively transported to the robot for processing, such as at automated sorting or grafting workstations. However, to achieve autonomous management of dispersed crops within the greenhouse, mobile platforms are essential. Mobile platforms primarily include rail-mounted mobile platforms and Unmanned Ground Vehicles (UGVs). Rail-mounted platforms move along pre-set tracks, are stable and precise, and are well-suited for structured, linear planting environments. UGVs, on the other hand, possess autonomous navigation capabilities, allowing them to move freely in unstructured environments and offering extremely high flexibility. Table 3 provides a quantitative comparison of the characteristics of these three deployment methods.

2.2.1. Rail-Mounted Mobile Platforms

Rail-mounted mobile platforms are a crucial and widely adopted solution within unmanned greenhouse farms for achieving large-scale, precise navigation and positioning of robotic arms. These platforms typically consist of a robotic arm or its operational unit mounted on a cart, trolley, or gantry frame that moves along predefined tracks. These tracks are generally laid parallel to the crop rows or the main structure of the greenhouse and can include ground-level rails, overhead tracks suspended from the greenhouse roof, or guide systems integrated into the edges of cultivation beds and troughs [38].
In a greenhouse setting, rail-mounted platforms are well-suited for linear planting patterns commonly used for crops such as tomatoes, cucumbers, sweet peppers, and strawberries. They are also ideal for tasks that require precise movements along a fixed path. For instance, Hemming et al. [6] developed a sweet pepper harvesting robot, Feng et al. [39] designed a tomato harvesting robot, Tian et al. [38] created a multi-purpose tracked vehicle for tasks like tying and de-leafing, and Li et al. [40] designed a rail-based spraying robot, all of which utilize a carrier that moves on greenhouse rails. The main advantage of this mode of transportation is its extremely high positioning accuracy and repeatability, allowing the robotic arm to reach any predetermined point along the rail with great precision and stability, which greatly simplifies navigation and path planning. Qi et al. [41] also indirectly demonstrated the potential of rail systems for ensuring uniform and precise operations through their research on the airflow velocity field of a rail-based fogger machine. Furthermore, the rail system provides a stable work base for the robotic arm, minimizing positioning errors caused by uneven ground or platform instability [38]. The rails themselves can also integrate power and data communication lines, providing a continuous energy supply and reliable data transmission for the robotic arm.
Rail-mounted mobile platforms also have inherent limitations, the most significant of which is their limited flexibility. The range of motion for the robotic arm is strictly confined to the area covered by the tracks, making it difficult to perform tasks across rows or in non-railed areas without complex track switching or transfer mechanisms [38]. Additionally, the initial installation cost of the tracks and the potential need for structural modifications to the greenhouse can be high. The rails themselves can also create spatial obstructions for other manual or mechanized operations within the greenhouse. Fixed rails also adapt poorly to situations where the greenhouse layout must change frequently or different crops must be cultivated.

2.2.2. UGV Mobile Platforms

Unlike platforms that rely on fixed paths, the UGV is a trackless, autonomously navigating mobile robot. It integrates a robotic arm and auxiliary equipment, such as controllers, power supplies, and sensors, onto a compact chassis. This allows the UGV to move freely within the greenhouse, serving multiple work areas and performing various tasks. This greatly enhances the system’s flexibility and scalability [42,43,44]. UGVs use diverse navigation technologies, resulting in varying levels of autonomy. In scenarios that require high path accuracy and repeatability, the functionality and navigation methods of UGVs are similar to those of traditional AGV (Automated Guided Vehicle) platforms. These systems rely on external physical markers for high-precision guidance [45,46,47,48,49]. For example, the RoAD phenotyping platform developed by Xiang et al. [50] utilizes a chassis that navigates precisely by following magnetic tape laid on the ground, ensuring the robot strictly adheres to a predefined path when moving between different experimental pots. However, to cope with the dynamic changes (such as plant growth and temporary obstacles) and complex layouts of the indoor greenhouse environment, more advanced UGVs typically adopt autonomous navigation techniques that do not require physical markers.
Equipped with sensors such as LiDAR or cameras, these UGVs are capable of performing Simultaneous Localization and Mapping (SLAM), building a map of the environment in real time while simultaneously determining their own position within it, which makes them highly adaptable to dynamic and unstructured settings. Roure et al. [51] tested various SLAM algorithms, such as Gmapping and KartoSLAM [52], in the GRAPE project, using 2D/3D LiDAR and stereo cameras to map and navigate a real vineyard. Similarly, Kim et al. [53] developed the P-AgBot, which utilizes a 2D LiDAR and the Adaptive Monte Carlo Localization (AMCL) algorithm to achieve reliable autonomous movement between crop rows.
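As an illustration of the Monte Carlo localization family that AMCL belongs to, the sketch below implements a minimal particle filter for a 2D robot pose. The map (a few landmark points standing in for crop-row features), the range-sensor model, and all noise values are simplified assumptions for illustration, not the P-AgBot implementation.

```python
import numpy as np

# Minimal Monte Carlo localization sketch (fixed particle count, unlike the
# adaptive variant in AMCL). Assumes a differential-drive UGV, a known map of
# landmark (x, y) points, and a sensor returning range to the nearest landmark.

N = 500                                   # number of pose hypotheses (particles)
particles = np.random.uniform([0, 0, -np.pi], [10, 10, np.pi], size=(N, 3))
weights = np.ones(N) / N
landmarks = np.array([[2.0, 1.0], [2.0, 3.0], [6.0, 1.0], [6.0, 3.0]])

def motion_update(particles, v, w, dt, noise=(0.02, 0.01)):
    """Propagate each particle with the commanded velocities plus noise."""
    theta = particles[:, 2]
    particles[:, 0] += (v + np.random.randn(N) * noise[0]) * np.cos(theta) * dt
    particles[:, 1] += (v + np.random.randn(N) * noise[0]) * np.sin(theta) * dt
    particles[:, 2] += (w + np.random.randn(N) * noise[1]) * dt
    return particles

def measurement_update(particles, weights, measured_range, sigma=0.1):
    """Re-weight particles by how well they explain the range reading."""
    d = np.linalg.norm(particles[:, None, :2] - landmarks[None], axis=2).min(axis=1)
    weights *= np.exp(-0.5 * ((d - measured_range) / sigma) ** 2) + 1e-300
    weights /= weights.sum()
    return weights

def resample(particles, weights):
    """Systematic resampling to concentrate particles on likely poses."""
    idx = np.searchsorted(np.cumsum(weights),
                          (np.arange(N) + np.random.rand()) / N)
    return particles[idx].copy(), np.ones(N) / N

# One predict-update cycle: drive forward, read the sensor, correct the belief.
particles = motion_update(particles, v=0.5, w=0.0, dt=0.1)
weights = measurement_update(particles, weights, measured_range=1.2)
particles, weights = resample(particles, weights)
pose_estimate = np.average(particles, axis=0, weights=weights)  # naive heading mean
```

Repeating this predict-update cycle at each control step lets the belief converge on the true pose even without GPS, which is why such methods suit the enclosed greenhouse setting.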
Although UGVs offer a great deal of flexibility, their application in greenhouses also presents unique challenges. The unstructured nature of the greenhouse environment, characterized by continuously growing plants and mobile personnel or equipment, places high demands on the UGV’s perception and obstacle avoidance capabilities. Notably, under the dense canopies of high-density plantings, GPS signals are often weak or entirely unavailable, making SLAM-based autonomous navigation an essential requirement. Furthermore, the UGV’s endurance and autonomous charging management are practical issues that must be carefully considered during deployment. Nevertheless, with ongoing advancements in navigation algorithms, sensor technology, and battery technology, mobile robotic arm systems based on UGV/AGV platforms are regarded as a key enabling technology for achieving fully unmanned and efficient production throughout the greenhouse, owing to their unparalleled flexibility and autonomy.

3. Key Technologies of the Robotic Arm

Any discussion of the core technologies of greenhouse robotic arms must begin from a basic distinction: the environment they face is completely different from the precise, controllable settings in which industrial robotic arms operate. Industrial robotic arms are products of a deterministic world, performing repetitive tasks with near-perfect precision in strictly planned environments; their technical development is devoted almost entirely to optimizing speed and stability. In contrast, agricultural robotic arms are situated in a domain full of uncertainties. Here, light changes are unpredictable, crops grow freely, and the target fruits are often hidden among tangled branches and leaves. This visual chaos forces robotic arms to possess strong perception and cognitive abilities; merely “seeing” is not enough, they must “understand”. At the same time, interacting with fragile organisms imposes exacting requirements on physical dexterity and force control. Any error could directly damage the crops, so movement trajectories are no longer pre-set rigid scripts but dynamic decision-making processes based on real-time sensory input. This leads to the most essential difference between the two: industrial robotic arms aim to complete known tasks faster and more precisely, while agricultural robotic arms must reliably complete new tasks in an unknown world. Table 4 summarizes the differences between industrial and agricultural robotic arms.

3.1. End-Effector for Specific Tasks

The end-effector is a crucial component that allows a robotic arm to directly interact with its environment. The design of this component has a direct impact on the efficiency, success rate, and final quality of agricultural products, making it one of the most critical and challenging aspects of greenhouse robotics technology [16]. Greenhouse crops are typically characterized by diverse morphologies, fragile textures, and complex growing environments, which require the end-effector to be adaptable, dexterous, and safe, surpassing the requirements of industrial applications. One of the most complex tasks in automation is harvesting fruits and vegetables, leading to the development of various end-effector designs. These designs often involve the integration of advanced grasping and detaching capabilities.
One common non-invasive grasping method uses a suction-cup-based end-effector, which creates negative pressure to adhere to the target. This approach is particularly effective for fruits with smooth surfaces, as it prevents compression damage. For instance, the harvesting robot developed by Lehnert et al. [54] for sweet peppers in greenhouses features a suction cup as its core component, which stabilizes and secures the pepper by adhering to it before cutting. In contrast, the more prevalent gripper-style end-effectors use mechanical jaws to hold the target. While traditional rigid grippers are simple in structure, they can cause damage when handling delicate fruits, and have therefore mainly been applied to harder fruits such as apples and pears, as demonstrated by Yoshida et al. [33] (Figure 1a). To address this limitation, soft grippers have become a primary focus of current research. These grippers, often made from compliant materials like silicone, can adaptively conform to objects of various shapes and distribute pressure by increasing the contact area, resulting in low-damage grasping [55]. The field trials by Brown and Sukkarieh [56] confirmed that when harvesting plums, soft grippers achieved significantly higher success rates and preserved fruit integrity better than rigid grippers.
Successful harvesting also requires an efficient separation mechanism. Therefore, modern end-effectors often have an integrated grasping and cutting design, and there have been numerous innovations in this area. For example, Hemming et al. [6] and Arad et al. [57] developed a gripper with an integrated blade and a high-frequency vibrating knife, respectively, for sweet pepper harvesting. Meanwhile, the strawberry harvesting robot by Feng et al. [58] utilizes a unique “suction cup + heated knife” combination to perform the dual function of cutting and sterilizing the incision. In addition to complex harvesting tasks, specialized end-effectors are also needed for other operations within the greenhouse. For example, Vatavuk et al. [59] created a robotic arm with a precisely controlled nozzle for a vineyard spraying system (Figure 1b), and Ming et al. [23] developed a pollen brush as the end-effector for a forsythia pollination robot. Table 5 summarizes the common types of end-effectors. The current trend in end-effector development for unmanned greenhouse farms is towards softer, more intelligent solutions with greater biomimetic properties and multifunctional integration, which is necessary to meet the demands of increasingly complex automation tasks.
Figure 1. (a) Rigid gripper [33]. (b) Nozzle end-effector for spraying [59].
Table 5. Summary of Common End-Effector Types.

| End-Effector Type | Key Features | Application Cases | Developer(s) |
| --- | --- | --- | --- |
| Suction cup | Utilizes negative pressure for adhesion; non-contact, no compression. | Sweet pepper harvesting | Lehnert et al. [54] |
| Gripper (rigid) | Mechanical grasping with rigid fingers; simple structure. | Harvesting of harder fruits like apples and pears | Yoshida et al. [33] |
| Gripper (soft) | Uses compliant materials; adaptively envelops the target, distributes pressure. | Low-damage harvesting of various fruits and vegetables like plums | Brown and Sukkarieh [56] |
| Integrated grasping & cutting | Combines the grasping function with a cutting tool. | Harvesting of sweet peppers, strawberries, tomatoes | Arad et al. [57] |
| Task-specific | The end-effector is a specialized tool, e.g., a nozzle or a brush. | Precision spraying, flower pollination | Vatavuk et al. [59]; Ming et al. [23] |

3.2. Perception Algorithms for Plants and Fruits

Perception algorithms for plants and fruits serve as the bridge connecting the robot’s vision sensors with the physical world. Their core task is to accurately identify, locate, and understand operational targets within the complex, unstructured greenhouse environment, providing the decision-making basis for precise operation of the robotic arm [11,17]. The evolution of these algorithms reflects a progression from simple 2D image analysis to the complex understanding of 3D scenes and multi-modal perception.

3.2.1. Two-Dimensional Target Identification and Localization

The primary task of perception is to accurately identify and isolate the target in a 2D image, including not only the fruit itself but often finer structures such as stems, peduncles, or pruning points. Early research primarily relied on traditional computer vision techniques, combining hand-crafted features such as color, shape, and texture to segment the image. An example is the system developed by Benavides et al. [60], which utilized classic image processing steps such as color space transformation, edge detection, and morphological operations to identify tomato fruits and their peduncles. To address overlapping apples, Lv et al. [61] analyzed the Euclidean distance map of the image and the peaks of its projection curves to determine the state of overlap. While these methods are computationally efficient, they are sensitive to changes in lighting and background interference, resulting in limited robustness.
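The sketch below illustrates what such a hand-crafted pipeline can look like in practice, using OpenCV to combine color-space segmentation, morphological filtering, and contour analysis. It is a simplified illustration under assumed thresholds and a placeholder image name, not the exact pipeline of Benavides et al. [60].

```python
import cv2
import numpy as np

# Hand-crafted color/shape pipeline for a red fruit (illustrative only).
img = cv2.imread("tomato.jpg")
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# 1. Color-space segmentation: red hue wraps around 0, so combine two bands.
mask = cv2.inRange(hsv, (0, 80, 60), (10, 255, 255)) | \
       cv2.inRange(hsv, (170, 80, 60), (180, 255, 255))

# 2. Morphological opening/closing to remove speckle and fill small holes.
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)

# 3. Contour analysis: keep large, roughly circular blobs as fruit candidates.
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for c in contours:
    area = cv2.contourArea(c)
    perimeter = cv2.arcLength(c, True)
    if area > 500 and 4 * np.pi * area / perimeter**2 > 0.6:  # circularity test
        x, y, w, h = cv2.boundingRect(c)
        cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
```

The fixed HSV thresholds in step 1 are exactly where such pipelines break down under changing greenhouse illumination, which motivates the learned detectors discussed next.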
The emergence of deep learning has greatly enhanced the effectiveness of object detection. Models based on Convolutional Neural Networks (CNNs), such as YOLO and Faster R-CNN, can learn robust features, showing a clear advantage over traditional methods. To address the challenges of nighttime operation, He et al. [62] proposed an improved version of the YOLOv5 model. Through optimization of the loss function and adaptive anchor boxes, their model achieved a recognition accuracy of up to 96.8% for tomatoes in low-light and occluded conditions. For more intricate tasks, such as locating pruning points, Feng et al. [63] utilized Mask R-CNN to perform pixel-level segmentation of tomato main stems and lateral branches, then calculated the intersection of their centerlines via image moments to determine the precise operation point.
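For context, a minimal detector-inference sketch using the open-source ultralytics package is shown below. The weight file and image name are hypothetical placeholders for a model fine-tuned on greenhouse tomato data; this is not the modified YOLOv5 of He et al. [62].

```python
from ultralytics import YOLO

# Generic YOLO inference sketch. 'tomato_best.pt' stands in for hypothetical
# fine-tuned weights; 'night_row.jpg' is a placeholder greenhouse image.
model = YOLO("tomato_best.pt")
results = model.predict("night_row.jpg", conf=0.4, imgsz=640)

for box in results[0].boxes:
    x1, y1, x2, y2 = box.xyxy[0].tolist()   # pixel bounding box
    cls = int(box.cls[0])                    # class index (e.g., ripe/unripe)
    score = float(box.conf[0])
    print(f"class={cls} conf={score:.2f} box=({x1:.0f},{y1:.0f},{x2:.0f},{y2:.0f})")
```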

3.2.2. Three-Dimensional Spatial Perception and Occlusion Handling

To ensure that the robotic arm can perform precise physical interactions, such as grasping and pruning, in three-dimensional space, relying solely on 2D localization is not enough. It is crucial to accurately obtain the target’s 3D spatial coordinates and pose, and to effectively address the common problem of occlusion.
The key to solving this challenge lies in multi-view 3D reconstruction. By deploying an “eye-in-hand” RGB-D camera at the end of the manipulator, the robot can observe the target from different angles and fuse the multi-viewpoint information to construct a more complete and accurate 3D model of the target than a single snapshot could provide [64]. The four-arm harvesting robot system developed by Li et al. [34] is equipped with multiple stereo cameras, which build a global map of the apples by fusing information from each viewpoint, effectively reducing localization errors caused by occlusion. To further improve the accuracy of the target’s spatial orientation (pose), researchers have employed geometric model fitting methods. Lehnert et al. [64] fitted a superellipsoid geometric model to a fused point cloud of a sweet pepper, allowing precise calculation of the pepper’s 6-degree-of-freedom (6DOF) pose, which is crucial for ensuring the end-effector can cut at the correct angle.
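The geometric core of RGB-D localization is the standard pinhole back-projection from a pixel and its depth value to a 3D point, followed by a hand-eye transform into the robot’s base frame. The sketch below shows this computation; the intrinsic parameters and the transform values are illustrative placeholders that in practice come from camera and hand-eye calibration.

```python
import numpy as np

# Pinhole back-projection: pixel (u, v) with depth z -> 3D camera-frame point,
# then into the robot base frame via a hand-eye transform.
fx, fy, cx, cy = 615.0, 615.0, 320.0, 240.0   # example intrinsics (pixels)

def deproject(u, v, z):
    """Pixel coordinates + depth (m) -> homogeneous 3D point, camera frame."""
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.array([x, y, z, 1.0])

# Example 4x4 homogeneous camera-to-base transform (identity rotation plus a
# translation, purely for illustration; a real value comes from calibration).
T_base_cam = np.array([[1, 0, 0, 0.05],
                       [0, 1, 0, 0.00],
                       [0, 0, 1, 0.40],
                       [0, 0, 0, 1.00]])

p_cam = deproject(u=350, v=260, z=0.62)      # fruit center seen at 0.62 m
p_base = T_base_cam @ p_cam                  # grasp target in the base frame
print(p_base[:3])
```

Fusing several such points from different viewpoints, each transformed through the arm’s forward kinematics, is what yields the more complete target models described above.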
A more advanced solution for handling dynamic occlusion during continuous operation is the dynamic world model, which utilizes temporal information fusion. The method proposed by Rapado-Rincón et al. [12] utilizes 3D Multi-Object Tracking (MOT) technology to continuously integrate information from different viewpoints over time. This allows the system to establish and maintain a dynamic “world model,” meaning that even if a tomato is completely occluded at a given moment, the system can still track its position based on its historical trajectory and model predictions. This greatly improves the robustness and continuity of perception in cluttered environments.

3.2.3. Multi-Modal Perception and Biomimetic Interaction

The most advanced perception systems are now moving beyond relying solely on visual information and are instead incorporating multiple sensing modalities to achieve a more “human-like” dexterity. A notable example of this approach is the bionic manipulator developed by Qian et al. [9], which integrates an array of sensors based on PVDF piezoelectric film into its flexible fingertips. This allows the manipulator to perceive and combine various types of information, such as touch, slippage, and temperature. This multi-modal perception capability enables the manipulator to handle tasks that would be challenging to solve with vision alone. For instance, it can delicately grasp fragile or irregularly shaped objects, like eggs, by using tactile feedback. It can also automatically retract when in contact with high-temperature objects, showcasing an environmental interaction capability that far surpasses that of single-modality visual perception.

4. Framework for Integrating Robotic Arms into Unmanned Greenhouses

Transforming standalone robotic technologies into an efficient, autonomous unmanned greenhouse hinges on the construction of an integrated, multi-layered system framework designed to coordinate and manage the farm’s entire range of monitoring and operational tasks [65]. The framework for an unmanned greenhouse relies on a system that tightly integrates data acquisition, intelligent decision-making, and physical execution (Figure 2). The core concept of this framework is to use a central intelligent dispatching system to command a heterogeneous team of specialized robots. This team works collaboratively, supported by the Internet of Things (IoT), to ultimately achieve full-process automation of greenhouse production [66,67].
The operational flow of this integrated framework begins at a central intelligent dispatching and management center, which serves as the “brain” of the entire unmanned farm. This center receives high-level production commands from the Greenhouse Management System (GMS) or the user, such as “conduct pest inspection in Area A” or “harvest mature tomatoes” [66]. These commands are then broken down into specific, executable sub-tasks by a task planning module. Advanced planning algorithms, such as distributed multi-agent planning, are used to assign these tasks to the heterogeneous robot team and create an optimized operational schedule [68]. This decision-making process relies on real-time feedback from the perception layer, which includes crop status, environmental data, and the robots’ own positions and statuses.
At the physical execution layer, a heterogeneous team of robots is tasked with carrying out the commands from the dispatching center. This team follows the principle of task specialization, with each robot having a specific role. Unmanned Ground Vehicles (UGVs) are responsible for navigating through the rows in the greenhouse and act as mobile operational platforms. These platforms can be equipped with various professional robotic arms and end-effectors to perform precise tasks such as targeted spraying using manipulators equipped with multispectral cameras and specialized end-effectors, or the harvesting and pruning of crops [69,70]. Additionally, the system may include a suspended cable robot (Cablebot) for aerial inspections, as well as stationary robotic arms at fixed workstations for batch processing. Transport robots are also included in the system to move crops into the working range of the stationary robotic arms [67].
The successful operation of the entire framework relies heavily on a stable and dependable communication and data network, typically utilizing an Internet of Things (IoT) architecture. Each robot serves as an IoT node, connecting wirelessly to a central server. During task execution, the robots’ sensors gather large amounts of crop and environmental data, which is transmitted in real time to the central system for processing, analysis, and storage. It is used to update crop growth maps, identify pests and diseases, and assess nutritional status [71]. The structured information obtained from this processing is then fed back into the dispatching system to optimize future decision-making. Additionally, it is presented to the farm manager through a user interface (UI), allowing remote monitoring of the greenhouse’s status and high-level management. This data-driven, closed-loop control process enables the unmanned greenhouse system to operate autonomously and self-optimize, ultimately achieving precise, efficient, and sustainable agricultural production.
Figure 2 presents the hierarchical architecture, aiming to clearly illustrate the functional division and interrelationships of the various technical modules in the unmanned greenhouse. The foundation of this architecture is the perception layer, which serves as the interface between the system and the outside world. Its core responsibility is data collection: continuously acquiring raw information on crop physiology and environmental parameters, providing the basis for the operation of the entire system. These raw data are transmitted to the decision layer, which processes, analyzes, and integrates the information, completing intelligent planning and optimal scheduling of tasks; in essence, the information supplied by the perception layer is transformed into clear, executable strategic instructions. Finally, these instructions are dispatched to the execution layer, composed of all the physical execution units in the greenhouse, whose function is to carry out the decisions and precisely complete specific operations in the real environment. These three layers are closely connected through a real-time feedback loop: the status and results of the execution layer are transmitted back, enabling the system to dynamically adjust its decisions and behaviors according to the actual situation, forming a closed-loop control system with adaptive and self-optimizing capabilities.
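To make this three-layer feedback structure concrete, the following minimal Python skeleton sketches the sense-plan-act cycle. All class and method names are hypothetical illustrations of the structure, not taken from any cited system.

```python
import time
from dataclasses import dataclass

# Skeleton of the perception -> decision -> execution closed loop.
@dataclass
class Observation:
    crop_states: dict      # e.g., {"row3/plant12": "ripe"}
    robot_states: dict     # e.g., {"ugv1": "idle"}

class PerceptionLayer:
    def sense(self) -> Observation:
        # Would poll IoT sensor nodes and robot telemetry; stubbed here.
        return Observation({"row3/plant12": "ripe"}, {"ugv1": "idle"})

class DecisionLayer:
    def plan(self, obs: Observation) -> list:
        # Turn observations into (robot, task) assignments.
        return [(robot, f"harvest {plant}")
                for plant, state in obs.crop_states.items() if state == "ripe"
                for robot, status in obs.robot_states.items() if status == "idle"]

class ExecutionLayer:
    def act(self, tasks) -> dict:
        # Would dispatch commands to UGVs/arms and report outcomes; stubbed.
        return {robot: f"done: {task}" for robot, task in tasks}

# The feedback loop: execution results change the world that is re-sensed.
perception, decision, execution = PerceptionLayer(), DecisionLayer(), ExecutionLayer()
for _ in range(3):
    obs = perception.sense()
    feedback = execution.act(decision.plan(obs))
    print(feedback)
    time.sleep(0.1)   # next cycle senses the updated greenhouse state
```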

5. Applications of Robotic Arms in Unmanned Greenhouses

5.1. Robotic Arms for Plant Protection

5.1.1. Robotic Arms for Plant Status Monitoring

To achieve unmanned greenhouse operations, automated crop care and monitoring technologies are crucial. These technologies use robots in place of manual labor to accurately assess crop health and development throughout the entire life cycle, allowing for early intervention. The scope of these tasks is extensive, including early detection of pests and diseases, diagnosis of nutritional stress, and quantitative measurement of key growth indicators. For example, robots equipped with high-resolution RGB cameras and algorithms such as Principal Component Analysis (PCA) [72] can identify powdery mildew (PM) on sweet peppers with up to 95% accuracy. Additionally, for smaller targets like thrips on strawberry flowers, the system can utilize a Support Vector Machine (SVM) for classification, achieving a detection error rate of less than 2.5% [73].
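As an illustration of this classification approach, the sketch below shows a generic PCA-plus-SVM pipeline in scikit-learn. It is not the implementation of the cited systems; the random feature matrix stands in for real per-patch leaf features (e.g., color statistics), and the labels are placeholders.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Generic PCA + SVM disease-classification sketch. X holds per-patch feature
# vectors; y holds labels such as 0 = healthy, 1 = powdery mildew.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 64))          # 200 image patches, 64 features each
y = rng.integers(0, 2, size=200)        # placeholder ground-truth labels

clf = make_pipeline(
    StandardScaler(),                   # normalize feature scales
    PCA(n_components=10),               # compress to dominant components
    SVC(kernel="rbf", C=1.0),           # separate healthy vs. infected
)
clf.fit(X[:150], y[:150])
print("held-out accuracy:", clf.score(X[150:], y[150:]))
```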
While visual diagnostic techniques based on standard RGB images have effectively solved identification problems, advancing to more complex interactive tasks requires an upgrade from 2D image analysis to 3D spatial perception. To this end, the current research frontier is the integration of advanced 3D sensors like LiDAR and RGB-D cameras into robotic systems. This enables non-contact measurement of key phenotypic parameters, sometimes even supplemented by visually guided physical contact.
LiDAR technology plays a crucial role in high-precision 3D perception tasks, valued for its exceptional accuracy and ability to function in varying lighting conditions. By capturing high-density point clouds, robots are able to create detailed 3D models of crops and their surroundings. For instance, the P-AgBot robot, developed by Kim et al. [53], cleverly combines a vertically mounted 3D LiDAR with a horizontally mounted 2D LiDAR to scan the plant canopy and base, respectively. This allows for precise measurements of maize height and stem radius, with an error margin of less than 10%. As a more cost-effective 3D sensing solution, RGB-D cameras have also been widely adopted for growth monitoring. The key to their technical success lies in effectively fusing color (RGB) and depth information to overcome complex environmental challenges. To address the difficulties posed by similar appearances and mutual occlusions between targets and the background in high-density planting environments, Cho et al. [74] proposed a novel multi-modal image fusion strategy. They skillfully utilized depth information to assist in processing RGB images, creating a training dataset that focuses on foreground targets by blurring or removing the background. The YOLOR detector, trained on this dataset, is able to accurately locate growth points and branch points even in cluttered backgrounds, resulting in a 3.8% increase in mean average precision (mAP) and providing a reliable region of interest (ROI) for subsequent calculations of stem diameter and plant height based on the depth map.
Some advanced techniques go beyond non-contact perception by combining visual guidance with physical contact measurement to gather data that optical sensors cannot directly obtain. Atefi et al.’s system [8] serves as a prime example of this “see then touch” approach. It utilizes a ToF camera and a deep learning network to accurately identify and locate the stems of maize or sorghum in 3D space. Then, guided by the vision system, the robotic arm precisely manipulates a custom gripper integrated with a linear potentiometer to physically grasp the stem and directly measure its diameter (Figure 3). This method combines the flexibility of non-contact visual positioning with the accuracy of contact measurement, resulting in fully automated, high-precision measurement of stem thickness; the results show a strong correlation with manual caliper measurements (R² > 0.98).
Figure 3. A robotic arm combining vision with physical measurement [8].

5.1.2. Robotic Arms for Targeted Spraying

After the monitoring robotic arm accurately diagnoses the condition of the crop, the next crucial step is to translate this information into precise physical intervention. Targeted spraying is the primary application for closing this “perception-to-action” loop. Its goal is to use robotic technology to apply substances such as pesticides, pheromones, or nutrient solutions precisely to specific parts of the crop, significantly reducing chemical usage, lowering production costs, protecting the environment, and ensuring personnel safety [51,72]. The prerequisite for targeted spraying is an accurate perception of the crop’s status: the robotic system must first identify the location of pests and diseases or the plant’s nutritional stress status through its vision sensors before it can perform precise follow-up actions [73].
A typical targeted spraying robot system is an integrated mobile manipulation platform that combines various complex technologies, including autonomous navigation, environmental perception, path planning, and precise manipulation. Martin [15] developed a generic, ROS-based control architecture for this purpose. The system consists of a mobile base, a multi-DOF robotic arm, and a perception and execution module mounted at the end-effector, such as a 3D camera and a nozzle (Figure 4). Its standard operational workflow is as follows: the robot first navigates autonomously to the target plant; then uses its vision system to identify the leaves or areas that require treatment; and finally, the robotic arm plans and executes a collision-free path to precisely move the nozzle to the target location for spot spraying. This modular architecture greatly simplifies the development of new applications, allowing the robot to flexibly perform the complete closed-loop task from disease detection to precision spraying.
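The workflow just described can be summarized as a simple state machine. The sketch below mirrors its navigate-detect-plan-spray loop in plain Python; all function names are hypothetical stand-ins, stubbed for illustration, rather than the actual ROS interfaces of Martin [15].

```python
# Navigate -> detect -> plan -> spray loop, with stubbed subsystem calls.
def navigate_to(waypoint):            # mobile base autonomous navigation
    print(f"driving to {waypoint}"); return True

def detect_infected_regions():        # 3D camera + disease detector
    return [("leaf", 0.4, 0.1, 0.9)]  # one mock target (label, x, y, z)

def plan_collision_free_path(pose):   # arm motion planning (e.g., via MoveIt)
    return ["q_start", "q_goal"]      # mock joint-space path

def execute_arm_path(path):
    print(f"executing {len(path)}-point arm path")

def trigger_nozzle(duration_s):
    print(f"spraying for {duration_s} s")

def run_spraying_mission(plant_waypoints):
    for waypoint in plant_waypoints:
        if not navigate_to(waypoint):
            continue                              # skip unreachable plants
        for target_pose in detect_infected_regions():
            path = plan_collision_free_path(target_pose)
            if path is not None:
                execute_arm_path(path)            # move nozzle to the leaf
                trigger_nozzle(duration_s=0.5)    # spot-spray the region

run_spraying_mission(["row1/plant4", "row1/plant5"])
```

The value of the modular architecture is visible even in this skeleton: swapping the detector or the end-of-arm tool changes one function, not the mission logic.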
Based on the EU’s CROPS project, Oberti et al. [24,25] provided a detailed description of a modular robotic system designed specifically for controlling powdery mildew in vineyards. The core of this system lies in its advanced multispectral perception and precision execution capabilities. At the perception end, the robot is equipped with an R-G-NIR (Red–Green–Near-Infrared) multispectral camera, which identifies leaf areas infected with powdery mildew by analyzing specific spectral indices. To enhance detection robustness, the algorithm also incorporates local gradient information from the image, effectively distinguishing real disease spots from specular reflections on leaves. At the execution end, a custom precision sprayer is mounted on the end-effector of a six-degrees-of-freedom robotic arm. This sprayer features an axial fan that generates an airflow to carry the pesticide, creating a precise circular spray pattern with a diameter of about 15 cm, ensuring the liquid is applied only to the infected areas. The entire system was tested in a simulated vineyard environment within a greenhouse. The results showed that the robot could automatically detect and treat 85% to 100% of the diseased areas, reducing pesticide use by 65% to 85% compared to traditional uniform spraying, fully demonstrating the immense potential of targeted spraying in precision agriculture.
In row-crop scenarios, it is crucial to optimize the spray trajectory and efficiency. Vatavuk et al. [59] have developed an advanced algorithm, based on Model Predictive Control (MPC), for this specific purpose. This system is capable of generating an optimized “lawnmower” pattern spray path in real-time, taking into account the actual shape of the grapevine canopy. Additionally, it coordinates the motion of the mobile base and the robotic arm. This allows the robot to accelerate in areas without vines and slow down in densely foliated areas, maximizing pesticide reduction while ensuring full coverage.
In addition to traditional methods of pest and disease control, targeted spraying robotic arms have the potential to improve nutrient management through a closed-loop system. Nadafzadeh’s system utilizes an onboard camera to capture images of spinach, and then uses artificial intelligence algorithms (ANN and SVR) to automatically diagnose iron deficiency and determine its severity. Upon confirmation of a deficiency, the robot immediately activates its spraying device to apply a precise foliar fertilizer to the affected plant. This system achieves a fully automated closed loop from diagnosis to treatment, with a diagnostic accuracy of 83%, comparable to that of an expert. Furthermore, it has the potential to reduce nutrient solution consumption by over 50%, showcasing the intelligence of precision agriculture.
Moreover, the concept of “targeted application” has evolved beyond just liquid spraying and now includes precise deployment of solids or devices. The GRAPE project, developed by Roure et al. [51], focuses on using a robotic arm to accurately place pheromone dispensers for pest control on the main branches of grapevines. To achieve this, the Jaco2 robotic arm is equipped with a 2D laser scanner at its end-effector. The scanner scans the target grapevine as the arm moves, creating a real-time 3D model of its structure. Using this model, the arm can plan a collision-free path to precisely place the dispenser at the designated location. This innovative project expands the potential applications of agricultural robots, particularly in the area of precise physical manipulation.

5.2. Robotic Arms for Fruit and Vegetable Harvesting

Crops grown in greenhouses typically have high economic value, and their dense planting and stable environment make greenhouses an ideal setting for harvesting robotic arms, which have seen significant development in recent years. The current mainstream approach uses machine vision: pixel distribution, brightness, and color information from images are converted into digital signals, and algorithmic processing allows the robotic arm to isolate and identify target features for precise fruit recognition. The end-effector of the robotic arm then performs the harvesting operation. This section summarizes harvesting robotic arms for strawberries (Table 6), tomatoes (Table 7), and other greenhouse crops (Table 8).
Table 6. Summary of Strawberry Harvesting Robotic Arms.

| Crop | Robotic Arm Used | DOF | End-Effector | Test Results | Developer(s) |
| --- | --- | --- | --- | --- | --- |
| Strawberry | UR3e | 6 | Pneumatic soft gripper | Success rate: 78%; damage rate: 23% | Ren et al. [22] |
| Strawberry | Denso VS-6556G | 6 | Suction cup + thermal cutter | Success rate: 86%; average time: 31.3 s | Feng Qingchun et al. [58] |
| Strawberry | Mitsubishi RV-2AJ | 5 | Enclosing gripper | Success rate: 53.6%; average time: 7.5 s | Xiong et al. [75] |
| Strawberry | Noronn | 3 | Enclosing gripper | High-precision mode: 89.2–89.9%; high-density mode: 98.9% | Ge et al. [20] |
| Strawberry | Octinion | 6 | Soft gripper | Average time: 4 s | Octinion Company [76] |
Table 7. Summary of Tomato Harvesting Robotic Arms.

| Crop | Robotic Arm Used | DOF | End-Effector | Test Results | Developer(s) |
| --- | --- | --- | --- | --- | --- |
| Tomato | AUBO i5 | 6 | Pneumatically controlled nylon fingers | Average time: 6.4 s; highest success rate: 84% (right); lowest success rate: 69.4% (front) | Gao et al. [77] |
| Tomato | UR5 | 6 | Three-jaw gripper | Average time: 23 s | Yaguchi et al. [78] |
| Tomato | HRP2W | 7 | Custom shears | Demonstrated the feasibility of humanoid robot harvesting | Chen et al. [32] |
| Tomato | Motoman | 6 | Suction cup + mechanical claw + air-puff solenoid valve | Alternating mode: 70%; composite mode: 83.3% | Liu Jizhan et al. [79] |
| Tomato | Denso VS-6556G | 6 | Integrated clamping-shearing | Success rate: 83%; average time: 8 s | Feng Qingchun et al. [80] |
| Tomato | Not specified | 6 | Soft gripper | First-attempt success rate: 86%; second-attempt success rate: 96% | Yu Fenghua et al. [81] |
Table 8. Harvesting Robotic Arms for Other Crops in Greenhouses.

| Crop | Robotic Arm Used | DOF | End-Effector | Test Results | Developer(s) |
| --- | --- | --- | --- | --- | --- |
| Sweet pepper | UR5 | 6 | Suction cup + vibrating blade | Success rate: 76.5% | Lehnert et al. [54] |
| Sweet pepper | Fanuc LR Mate 200iD | 6 | Mechanical claw + vibrating blade | Average time: 24 s; success rate: 61% | Arad et al. [57] |
| Cucumber | Mitsubishi RV-E2 | 7 | Gripper and suction cup + thermal cutting device | Success rate: 80%; average time: 45 s | Van Henten et al. [82] |
| Cucumber | Not specified | 4 | Soft gripper + cutter | Success rate: 85%; average time: 28.6 s | Ji Chao et al. [83] |
| Raspberry | UR5 | 6 | Silicone gripper | Harvesting success rate: 80% | Junge et al. [84] |
| Eggplant | Not specified | 4 | Mechanical claw | Success rate: 89%; average time: 37.4 s | Song Jian et al. [85] |

5.2.1. Strawberry Harvesting Robotic Arms

Ren et al. [22] designed a mobile robotic platform (MRP) for precision indoor farming systems. This platform is capable of monitoring strawberry growth status and non-destructively harvesting ripe fruit. In addition, they developed a rule-based algorithm to classify strawberry growth scenes, which improves harvesting efficiency. The system utilizes the YOLOv4-tiny model for ripeness detection. By combining RGB and depth images with a deep learning algorithm, it accurately calculates the 3D position of each strawberry with an average detection accuracy of 96.4%. To pick the strawberries, a pneumatic soft gripper is used, which employs dragging and rotating motions. This method has a success rate of 78% and a damage rate of 23%.
In 2012, Feng Qingchun et al. [58,86,87] designed a robotic arm specifically for harvesting strawberries in elevated greenhouse cultivation. The system was based on a four-wheeled chassis with sonar navigation and a 6-DOF binocular vision robotic arm. They developed a non-destructive end-effector that could adsorb the fruit, grasp it, and cut the peduncle with a thermal wire. In trials, the system automatically identified 100 ripe strawberries with a successful picking rate of 86%, but the average picking time was long, at 31.3 s per operation. Addressing the shortcomings of this robot, Feng Qingchun et al. improved the system in 2019 by adopting a combined far-and-near-field vision technique. A far-view camera identifies ripe strawberries and acquires their positions, while a near-view camera is used to precisely locate the peduncle’s cutting point. They also developed a new two-fingered gripper for the peduncle, which is then severed by an overhead cutter. This entire process avoids contact with the fruit, maximally preserving its integrity. In trials, the improved robot’s successful picking rate was 84%, and the average picking time per strawberry was reduced to 10.7 s.
Xiong et al. [28,75] designed a 5-DOF robotic arm for harvesting strawberries, using the Thorvald II platform. The arm is equipped with an RGB-D camera and a color threshold algorithm, allowing for the quick detection and location of ripe strawberries. The end-effector has six enveloping fingers that can open simultaneously to form a closed ring, allowing the arm to “swallow” the strawberry from below and then push the stem into a cutting area to sever the peduncle. In their experiment, the average harvesting cycle for a single strawberry was 7.5 s, with a success rate of 53.6% and a damage rate of 59.0%.
Ge et al. [20,21] used deep learning and computer vision techniques in the greenhouse to precisely determine the 3D location of fruits for the robotic arm and gripper. Among various camera types, the Time-of-Flight (ToF) camera performed better in terms of accurate 3D representation. The researchers investigated seven fruit localization methods and found that deriving 3D boxes from 2D images and depth information was faster and more effective, providing a basis for selecting suitable cameras and end-effectors for harvesting small-sized samples.
The Belgian company Octinion [76] has developed a 6-DOF robotic arm specifically for harvesting strawberries in elevated systems. An RGB camera determines the strawberry’s position by analyzing its color, and a soft gripper is used to pick the high-hanging strawberries from below. The prototype of this robotic arm can pick a strawberry within 4 s.

5.2.2. Tomato Harvesting Robotic Arms

Gao et al. [77] designed a 6-DOF cherry tomato harvesting robotic arm installed on a track. They also developed a pneumatically controlled nylon finger end-effector, which incorporates a finger-like gripper, a rotating and extending pneumatic cylinder, and an RGB-D camera. By comparing two harvesting methods—telescopic and rotational—they found that the rotational method was superior in terms of applied force and interference, providing valuable insights for end-effector design. Experiments revealed that the average cycle time for harvesting a single cherry tomato was 6.4 s. Harvest success rates in different directions were 84% (right), 83.3% (rear), 79.8% (left) and 69.4% (front). Failures were mainly attributed to collisions and positioning errors.
Yaguchi et al. [78] used a UR5 robotic arm, a PS4 stereo camera and a three-jaw gripper as the end-effector to harvest fruit by grasping, rotating and extending. The final harvesting speed was 23 s per fruit. Chen et al. [32], who were also part of the same research group, used an HRP2W humanoid robot, equipping both its head and hands with RGB-D cameras. Both arms were 7-DOF and fitted with custom shears, enabling the robot to cut and grasp tomatoes simultaneously. However, the system cannot yet achieve full automation. This study verified the feasibility of using a dual-arm humanoid robot for harvesting.
Feng Qingchun et al. [39,80] used a tracked vehicle capable of moving both on the ground and on rails as a carrier, integrating a 5-DOF robotic arm. They employed a visual servo unit to identify and locate ripe fruit, designing an integrated grasping-shearing end-effector based on the fruit’s mechanical properties to facilitate grasping and detachment. Field trials were conducted to evaluate the performance of the newly developed robot, achieving a successful harvesting rate of 83% with a cycle time of 8 s.
Liu Jizhan et al. [79,88,89] achieved coordinated control of the arm of a tomato harvesting robot by integrating a self-designed end-effector with a Motoman commercial robotic arm. The end-effector uses a suction cup and a mechanical claw to detach and secure the target tomato; then, an air-puff solenoid valve separates the fruit. Experimental results showed that the successful harvesting rates for the alternating and composite modes were 70.0% and 83.3%, respectively. Team member Li [90] conducted a kinematic analysis of the Motoman-sv3x robotic arm and the 3-DOF claw-type end effector and determined that such robotic arms can efficiently perform picking tasks in greenhouses.
Yu Fenghua et al. [81] used a Mecanum-wheeled omnidirectional mobile platform as the robot’s mobile base. They used a depth camera driven by a Raspberry Pi 4B controller to identify ripe tomatoes and installed a fan on the platform to improve the identification rate of tomatoes occluded by leaves. They designed a soft gripper equipped with a thin-film pressure sensor to enable precise control of the picking force and prevent damage to the tomatoes. In the experiment, the success rates for the first and second attempts were 86% and 96%, respectively.

5.2.3. Harvesting Robotic Arms for Other Greenhouse Crops

Lehnert et al. [54] used a UR5 robotic arm in conjunction with a bespoke vacuum gripper, employing a vibrating blade to cut pepper stems. This study introduced a new magnetic decoupling mechanism that enables grasping and cutting operations to be performed independently, thereby improving the system’s operational flexibility. The experiment achieved a success rate of 76.5%.
Arad et al. [57] designed a harvesting robot with a lifting platform to position the arm in the working area. Its end-effector integrates an RGB-D camera, LED lighting, a six-fingered mechanical claw, and a vibrating blade. In experiments, the average harvesting time was 24 s and the harvesting success rate was 61%.
Earlier research by Van Henten et al. [82,91] in the Netherlands produced a cucumber harvesting robot consisting of an autonomous mobile platform, a 7-DOF robotic arm, an end-effector, and two vision systems. The end-effector severed the cucumber stem by thermal cutting with a high-frequency current, preventing the spread of viruses between plants. In greenhouse trials, the robot achieved an 80% success rate, harvesting one cucumber in an average of 45 s. Later, Ji Chao et al. [83,92] in China designed a 4-DOF cucumber-picking robotic arm with auxiliary lighting, whose end-effector combined a flexible gripper with a stainless-steel cutter; trials reached an 85% success rate with an average picking time of 28.6 s.
Junge et al. [84] proposed a sensorized physical twin for training robots to harvest raspberries in a laboratory setting, eliminating the need for seasonal field tests. The system consists of a 6-DOF robotic arm and a silicone gripper driven by a Dynamixel motor. Training on the simulated interaction between the robot and the fragile fruit improved grasping performance, yielding an 80% success rate in greenhouse raspberry trials. This approach reduces reliance on costly field experiments and offers a new way to train harvesting robots.
In earlier work, Song Jian et al. [85,93] developed a 4-DOF robotic arm for eggplant harvesting. They used a fixed double-threshold method based on histograms to segment the G-B grayscale image and extract fruit features such as contour, area, and centroid. In experiments, the single-camera measurement error was within 18 mm, the grasping success rate was 89%, and the average picking time was 37.4 s.
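For readers unfamiliar with this classical pipeline, the following OpenCV sketch reproduces the idea of fixed double-threshold segmentation of a G-B grayscale image followed by contour, area, and centroid extraction. The threshold band is an arbitrary placeholder rather than the histogram-derived values of the original work.

```python
# Sketch of fixed double-threshold G-B segmentation with feature extraction.
# T_LOW/T_HIGH are illustrative, not the cited study's histogram-derived values.

import cv2
import numpy as np

T_LOW, T_HIGH = 20, 120   # assumed gray-level band for fruit pixels

def segment_fruit(bgr: np.ndarray):
    """Return (mask, area, centroid) of the largest fruit-like region."""
    b = bgr[:, :, 0].astype(np.int16)
    g = bgr[:, :, 1].astype(np.int16)
    gb = np.clip(g - b, 0, 255).astype(np.uint8)   # G-B grayscale image
    mask = cv2.inRange(gb, T_LOW, T_HIGH)          # keep pixels inside the band
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return mask, 0.0, None
    largest = max(contours, key=cv2.contourArea)   # assume one dominant target
    m = cv2.moments(largest)
    if m["m00"] == 0:
        return mask, 0.0, None
    centroid = (m["m10"] / m["m00"], m["m01"] / m["m00"])
    return mask, cv2.contourArea(largest), centroid

# Usage: mask, area, centroid = segment_fruit(cv2.imread("eggplant.jpg"))
```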

6. Discussion

The development of greenhouse robotic arm technology has reached a relatively mature stage. However, formidable challenges remain in making the leap from the laboratory to large-scale application. The key to overcoming them lies in shifting the research focus from optimizing individual robots in isolation to systematically constructing an unmanned operation system with collaborative capabilities and group intelligence.

6.1. Key Challenges in the Application of Robotic Arms

Despite the promising outlook, transitioning greenhouse robots from controlled laboratory environments to commercial, in-field deployment requires overcoming several critical technological and economic hurdles.
Foremost among these is the test that the greenhouse environment itself poses to a robot's perception systems. Far from being a static, orderly factory floor, a greenhouse is a quintessentially unstructured and dynamic domain: fluctuating light conditions, continuous morphological change as plants grow, and frequent occlusion of target fruits by dense foliage collectively create immense challenges for visual recognition and localization. Concurrently, high humidity and dust present a persistent challenge to the long-term stability and reliability of sensors. The enclosed architecture also renders GPS signals ineffective, forcing reliance on autonomous navigation techniques such as SLAM, whose robustness in the feature-sparse or highly repetitive scenes typical of greenhouses has yet to be fully proven.
Once the challenge of “seeing and finding” is addressed, the bottleneck shifts to “doing it well and doing it fast”: the inherent conflict between operational dexterity and efficiency. Agricultural tasks, whether harvesting delicate berries or pruning tender shoots, demand a high degree of compliance and precise force control to prevent irreversible damage to the crops. Most current end-effector designs still fall short of the adaptive grasping capabilities of the human hand. More critically, a robot's cycle time for a single pick remains significantly slower than that of an experienced human worker, an efficiency gap that directly affects the economic viability of replacing manual labor in real-world production.
Finally, the sheer complexity of system-level integration and prohibitive costs constitute the primary barriers to widespread adoption. A reliable greenhouse robot requires the seamless integration of a mobile platform, a manipulator arm, a perception suite, and complex control algorithms into a single, cohesive whole—a daunting systems engineering task in itself. The substantial upfront investment in hardware, coupled with ongoing maintenance costs and the need for skilled operators, makes the technology inaccessible for many growers. For mobile platforms, energy management strategies, encompassing both battery endurance and autonomous charging capabilities, are another crucial factor determining their capacity for sustained, uninterrupted operation.

6.2. Development of Multi-Robot Systems and Swarm Robotics

The unmanned greenhouses of the future will not be places where single robots work in isolation. Instead, they will be intelligent farms where multiple robots form a multi-robot system (MRS) and collaborate efficiently [94,95]. The core concept of this model is to break complex greenhouse production workflows down into specialized sub-tasks and assign them to robots with different functions. Advanced task allocation and scheduling algorithms are key to achieving this high-efficiency collaboration. By modeling task planning as a multi-objective optimization problem and applying methods such as distributed genetic algorithms, market-based auction mechanisms, or multi-agent reinforcement learning, a near-optimal work schedule can be generated for the robot team [69]. This collaborative approach significantly increases overall efficiency through parallel operations and enhances the robustness of the entire system through functional complementarity: if an individual robot fails, its tasks can be taken over by other members of the team. As the number of robots increases, the system evolves into swarm robotics. Driven by swarm intelligence algorithms, such swarms exhibit a high degree of self-organization and environmental adaptability, offering a highly promising approach to managing the ultra-large-scale, high-density greenhouses of the future.
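As a concrete illustration of the market-based mechanisms mentioned above, the sketch below implements a minimal sequential auction: in each round, every robot bids its marginal travel cost for every open task, and the cheapest robot-task pair wins. It is a deliberately simplified, centralized stand-in for the distributed schedulers cited, with made-up robot and task names.

```python
# Minimal auction-based task allocation for a greenhouse robot team.
# Each round auctions all open tasks; the lowest marginal-travel-cost bid wins.

import math

def dist(a, b):
    """Euclidean distance between two (x, y) points in metres."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def auction_allocate(robots: dict, tasks: dict) -> dict:
    """robots: name -> (x, y) start pose; tasks: name -> (x, y) location.
    Returns name -> ordered list of assigned tasks."""
    routes = {r: [] for r in robots}
    position = dict(robots)            # each robot's current route endpoint
    unassigned = set(tasks)
    while unassigned:
        # every robot bids on every open task; cheapest bid wins the round
        cost, winner, task = min(
            (dist(position[r], tasks[t]), r, t)
            for r in robots for t in unassigned
        )
        routes[winner].append(task)    # winner takes the task...
        position[winner] = tasks[task] # ...and its endpoint moves there
        unassigned.remove(task)
    return routes

print(auction_allocate(
    {"sprayer": (0.0, 0.0), "picker": (10.0, 0.0)},
    {"row1": (1.0, 1.0), "row3": (2.0, 5.0), "row7": (9.0, 4.0)},
))
```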

6.3. Integrated Air–Ground Framework with Unmanned Aerial Vehicles (UAVs)

Beyond collaboration among ground robots, a more disruptive future model lies in the deep integration of ground units with unmanned aerial vehicles (UAVs), constructing a heterogeneous air–ground collaborative robot system [43,96]. The appeal of this model is that the wide-area rapid reconnaissance capability of drones complements the high endurance and precise physical operation capabilities of ground robots, forming an efficient “air reconnaissance, ground execution” workflow: the drone quickly identifies areas needing attention from above and transmits target coordinates in real time to ground robots, guiding them to complete precise, targeted operations [97]. Realizing this vision, however, means confronting the harsh reality of commercial greenhouse interiors. Dense metal beams, columns, and pipes, combined with the dense canopies formed by vertically grown crops such as tomatoes, create an obstacle-filled airspace that is extremely unfavorable for autonomous flight. This not only brings high risks of collision and entanglement but also subjects the drone's perception system to persistently noisy signals, posing a fundamental challenge to existing obstacle-avoidance algorithms. Only by first overcoming these basic navigation problems can more advanced collaboration, such as having the drone act as an “air guide” that scouts the path ahead for ground robots and dynamically plans obstacle-avoidance routes, become reality [98]. Developing miniature drones that can fly stably in enclosed, cluttered spaces, together with the accompanying navigation algorithms, will therefore be a key step in unlocking the potential of air–ground collaboration and ultimately promoting the leapfrog development of intelligent agricultural production.
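A minimal version of the coordinate handoff at the heart of this workflow can be sketched as a frame transformation: a target detected in the UAV's body frame is mapped into the shared greenhouse frame and passed to the ground robot as a waypoint. All poses and offsets below are illustrative; a real system would take them from the UAV's onboard state estimator.

```python
# "Air reconnaissance, ground execution" handoff sketch: transform a target
# detected in the UAV frame into the shared greenhouse (world) frame, then
# hand the resulting 2-D waypoint to the ground robot's planner.

import numpy as np

def pose_to_matrix(yaw: float, position) -> np.ndarray:
    """Homogeneous transform for a planar yaw rotation plus translation."""
    c, s = np.cos(yaw), np.sin(yaw)
    T = np.eye(4)
    T[:3, :3] = [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]
    T[:3, 3] = position
    return T

# Illustrative UAV pose in the world frame (e.g. from onboard SLAM), and a
# fruit cluster detected 0.4 m ahead of and 1.2 m below the camera.
T_world_uav = pose_to_matrix(yaw=np.pi / 2, position=[4.0, 12.0, 3.0])
target_uav = np.array([0.4, 0.0, -1.2, 1.0])   # homogeneous coordinates

target_world = T_world_uav @ target_uav
goal_xy = target_world[:2]                     # 2-D waypoint for the UGV planner
print(f"UGV goal in world frame: x={goal_xy[0]:.2f} m, y={goal_xy[1]:.2f} m")
```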

6.4. The Path to Commercialization: From Research to Application

Although the academic literature describes numerous new robotic systems, the number of agricultural robots that have been field-tested and deployed commercially remains very limited. In this difficult transition, a few pioneering enterprises have become important models: the Belgian company Octinion, whose strawberry-picking robot is known for its patented soft-contact gripper [78]; the American company Advanced Farm Technology (AFT), which offers commercial services based on its multi-arm strawberry-picking system, billed per volume picked; Root AI, which focuses on tomato picking; and the Dutch company Priva, which addresses tomato leaf removal with robots. All of these demonstrate feasible solutions for specific high-value tasks. However, the scarcity of such commercial successes contrasts sharply with the abundance of academic publications, revealing a profound “laboratory-to-field” gap. This gap stems first from differences in environment and stability: academic prototypes are developed in idealized, controlled settings, whereas commercial systems must withstand the fluctuating light, dust, and inherent biological variability of real greenhouses, where minor failures that are tolerable in the laboratory become unaffordable economic losses. Second, the gap in performance and efficiency is significant: research prototypes operate far more slowly than skilled workers and struggle with the “edge cases” of real scenarios, such as tangled stems and densely clustered fruit, which are precisely where commercial success is decided. Ultimately, the most fundamental gap may be economic: high costs and an unclear return on investment (ROI) are the main obstacles to widespread adoption, and a system that requires expert maintenance rather than intuitive operation by farm workers is unlikely to survive in practice. Bridging this gap requires shifting the research focus from proving “possibility” in basic research to delivering “reliability” as an engineering discipline.

7. Conclusions

In summary, greenhouse robotic arms and their key technologies are undergoing a profound transition from single-function automation to system-level intelligent integration. While many challenges remain in perception accuracy, operational efficiency, and human–robot interaction, emerging trends such as multi-robot collaboration and air–ground integration provide a clear blueprint for the future of unmanned farms. Building an intelligent swarm of ground and aerial robots that can collaborate autonomously, plan dynamically, and execute tasks efficiently will revolutionize traditional greenhouse production models. Future research should deepen collaboration strategies between robots, optimize the efficiency and safety of human–robot interaction, and drive these advanced integrated systems toward commercial application, ultimately establishing a robust technological basis for a sustainable, high-yield, and intelligent modern agricultural system.

Author Contributions

Conceptualization, S.Z. and T.L.; methodology, S.Z., T.L. and X.L.; software, S.Z., T.L. and X.L.; validation, S.Z., T.L. and C.C. (Chen Cai); formal analysis, S.Z. and T.L.; investigation, S.Z. and T.L., C.C. (Chen Cai) and C.C. (Chun Chang); resources, S.Z., T.L. and C.C. (Chun Chang); data curation, S.Z., T.L. and X.L.; writing—original draft preparation, S.Z. and T.L.; writing—review and editing, S.Z., T.L. and X.X.; visualization, S.Z. and T.L., C.C. (Chen Cai) and C.C. (Chun Chang); supervision, X.L. and X.X.; project administration, S.Z. and T.L.; funding acquisition, X.X. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Agricultural Science and Technology Innovation Project of the Chinese Academy of Agricultural Sciences, Crop Protection Machinery Team (grant no. CAAS-ASTIP-CPMT), the Jiangsu Province ‘333 High-Level Talent Cultivation Project’ (BRA2024-2029), and the China Agriculture Research System of MOF and MARA (grant no. CARS-12).

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Wang, T.; Xu, X.; Wang, C.; Li, Z.; Li, D. From Smart Farming towards Unmanned Farms: A New Mode of Agricultural Production. Agriculture 2021, 11, 145. [Google Scholar] [CrossRef]
  2. Gnauer, C.; Pichler, H.; Tauber, M.; Schmittner, C.; Christl, K.; Knapitsch, J.; Parapatits, M. Towards a Secure and Self-Adapting Smart Indoor Farming Framework. Elektrotechnik Informationstechnik 2019, 136, 341–344. [Google Scholar] [CrossRef]
  3. Li, D.; Li, Z. System Analysis and Development Prospect of Unmanned Farming. Trans. Chin. Soc. Agric. Mach. 2020, 51, 1–12. [Google Scholar]
  4. Luo, X.; Liao, J.; Hu, L.; Zhou, Z.; Zhang, Z.; Zang, Y.; Wang, P.; He, J. Research progress of intelligent agricultural machinery and practice of unmanned farm in China. J. South China Agric. Univ. 2021, 42, 8–17. [Google Scholar]
  5. Shamshiri, R.R.; Kalantari, F.; Ting, K.C.; Thorp, K.R.; Hameed, I.A.; Weltzien, C.; Ahmad, D.; Shad, Z.M. Advances in Greenhouse Automation and Controlled Environment Agriculture: A Transition to Plant Factories and Urban Agriculture. Int. J. Agric. Biol. Eng. 2018, 11, 1–22. [Google Scholar] [CrossRef]
  6. Hemming, J.; Bac, C.W.; van Tuijl, B.A.J.; Barth, R.; Bontsema, J.; Pekkeriet, E. A Robot for Harvesting Sweet-Pepper in Greenhouses. In Proceedings of the International Conference on Agricultural Engineering 2014, Zurich, Switzerland, 6–10 July 2014; pp. 1–8. [Google Scholar]
  7. Kang, H.; Chen, C. Fruit Detection, Segmentation and 3D Visualisation of Environments in Apple Orchards. Comput. Electron. Agric. 2020, 171, 105302. [Google Scholar] [CrossRef]
  8. Atefi, A.; Ge, Y.; Pitla, S.; Schnable, J. Robotic Detection and Grasp of Maize and Sorghum: Stem Measurement with Contact. Robotics 2020, 9, 58. [Google Scholar] [CrossRef]
  9. Qian, C.; Li, X.; Zhu, J.; Liu, T.; Li, R.; Li, B.; Hu, M.; Xin, Y.; Xu, Y. A Bionic Manipulator Based on Multi-Sensor Data Fusion. Integr. Ferroelectr. 2018, 192, 10–15. [Google Scholar] [CrossRef]
  10. Yang, Y.; Han, Y.; Li, S.; Yang, Y.; Zhang, M.; Li, H. Vision Based Fruit Recognition and Positioning Technology for Harvesting Robots. Comput. Electron. Agric. 2023, 213, 108258. [Google Scholar] [CrossRef]
  11. Zhao, Y.; Gong, L.; Huang, Y.; Liu, C. A Review of Key Techniques of Vision-Based Control for Harvesting Robot. Comput. Electron. Agric. 2016, 127, 311–323. [Google Scholar] [CrossRef]
  12. Rapado-Rincón, D.; van Henten, E.J.; Kootstra, G. Development and Evaluation of Automated Localisation and Reconstruction of All Fruits on Tomato Plants in a Greenhouse Based on Multi-View Perception and 3D Multi-Object Tracking. Biosyst. Eng. 2023, 231, 78–91. [Google Scholar] [CrossRef]
  13. Oliveira, L.F.P.; Moreira, A.P.; Silva, M.F. Advances in Agriculture Robotics: A State-of-the-Art Review and Challenges Ahead. Robotics 2021, 10, 52. [Google Scholar] [CrossRef]
  14. Kurtser, P.; Castro-Alves, V.; Arunachalam, A.; Sjöberg, V.; Hanell, U.; Hyötyläinen, T.; Andreasson, H. Development of Novel Robotic Platforms for Mechanical Stress Induction, and Their Effects on Plant Morphology, Elements, and Metabolism. Sci. Rep. 2021, 11, 23876. [Google Scholar] [CrossRef] [PubMed]
  15. Martin, J.; Ansuategi, A.; Maurtua, I.; Gutierrez, A.; Obregon, D.; Casquero, O.; Marcos, M. A Generic ROS-Based Control Architecture for Pest Inspection and Treatment in Greenhouses Using a Mobile Manipulator. IEEE Access 2021, 9, 94981–94995. [Google Scholar] [CrossRef]
  16. Zhang, B.; Xie, Y.; Zhou, J.; Wang, K.; Zhang, Z. State-of-the-Art Robotic Grippers, Grasping and Control Strategies, as Well as Their Applications in Agricultural Robots: A Review. Comput. Electron. Agric. 2020, 177, 105694. [Google Scholar] [CrossRef]
  17. Jin, T.; Han, X. Robotic Arms in Precision Agriculture: A Comprehensive Review of the Technologies, Applications, Challenges, and Future Prospects. Comput. Electron. Agric. 2024, 221, 108938. [Google Scholar] [CrossRef]
  18. Bac, C.W.; van Henten, E.J.; Hemming, J.; Edan, Y. Harvesting Robots for High-value Crops: State-of-the-art Review and Challenges Ahead. J. Field Robot. 2014, 31, 888–911. [Google Scholar] [CrossRef]
  19. Roshanianfard, A.; Mengmeng, D.; Nematzadeh, S. 4-DOF SCARA Robotic Arm for Various Farm Applications: Designing, Kinematic Modelling, and Parameterization. Acta Technol. Agric. 2021, 24, 61–66. [Google Scholar] [CrossRef]
  20. Ge, Y.; Xiong, Y.; From, P.J. Three-Dimensional Location Methods for the Vision System of Strawberry-Harvesting Robots: Development and Comparison. Precis. Agric. 2022, 24, 764–782. [Google Scholar] [CrossRef]
  21. Ge, Y.; Xiong, Y.; Tenorio, G.L.; From, P.J. Fruit Localization and Environment Perception for Strawberry Harvesting Robots. IEEE Access 2019, 7, 147642–147652. [Google Scholar] [CrossRef]
  22. Ren, G.; Wu, T.; Lin, T.; Yang, L.; Chowdhary, G.; Ting, K.C.; Ying, Y. Mobile Robotics Platform for Strawberry Sensing and Harvesting within Precision Indoor Farming Systems. J. Field Robot. 2024, 41, 2047–2065. [Google Scholar] [CrossRef]
  23. Yang, M.; Lyu, H.; Zhao, Y.; Sun, Y.; Pan, H.; Sun, Q.; Chen, J.; Qiang, B.; Yang, H. Delivery of Pollen to Forsythia Flower Pistils Autonomously and Precisely Using a Robot Arm. Comput. Electron. Agric. 2023, 214, 108274. [Google Scholar] [CrossRef]
  24. Oberti, R.; Marchi, M.; Tirelli, P.; Calcante, A.; Iriti, M.; Tona, E.; Hočevar, M.; Baur, J.; Pfaff, J.; Schütz, C.; et al. Selective Spraying of Grapevines for Disease Control Using a Modular Agricultural Robot. Biosyst. Eng. 2016, 146, 203–215. [Google Scholar] [CrossRef]
  25. Oberti, R.; Marchi, M.; Tirelli, P.; Calcante, A.; Iriti, M.; Hočevar, M.; Baur, J.; Pfaff, J.; Schütz, C.; Ulbrich, H. Selective Spraying of Grapevine’s Diseases by a Modular Agricultural Robot. J. Agric. Eng. 2013, 44, 1–5. [Google Scholar] [CrossRef]
  26. Chen, Z.; Wang, J.; Wang, T.; Song, Z.; Li, Y.; Huang, Y.; Wang, L.; Jin, J. Automated In-Field Leaf-Level Hyperspectral Imaging of Corn Plants Using a Cartesian Robotic Platform. Comput. Electron. Agric. 2021, 183, 105996. [Google Scholar] [CrossRef]
  27. Erick, M.; Fiestas, S.; Sixto, R.; Prado, G. Modeling and Simulation of Kinematics and Trajectory Planning of a Farmbot Cartesian Robot. In Proceedings of the 2018 IEEE XXV International Conference on Electronics, Electrical Engineering and Computing (INTERCON), Lima, Peru, 8–10 August 2018; pp. 1–4. [Google Scholar]
  28. Xiong, Y.; Ge, Y.; Grimstad, L.; From, P.J. An Autonomous Strawberry-harvesting Robot: Design, Development, Integration, and Field Evaluation. J. Field Robot. 2019, 37, 202–224. [Google Scholar] [CrossRef]
  29. Au, C.; Barnett, J.; Lim, S.H.; Duke, M. Workspace Analysis of Cartesian Robot System for Kiwifruit Harvesting. Ind. Robot Int. J. Robot. Res. Appl. 2020, 47, 503–510. [Google Scholar] [CrossRef]
  30. Barnett, J.; Duke, M.; Au, C.K.; Lim, S.H. Work Distribution of Multiple Cartesian Robot Arms for Kiwifruit Harvesting. Comput. Electron. Agric. 2020, 169, 105202. [Google Scholar] [CrossRef]
  31. Ling, X.; Zhao, Y.; Gong, L.; Liu, C.; Wang, T. Dual-Arm Cooperation and Implementing for Robotic Harvesting Tomato Using Binocular Vision. Robot. Auton. Syst. 2019, 114, 134–143. [Google Scholar] [CrossRef]
  32. Chen, X.; Chaudhary, K.; Tanaka, Y. Reasoning-Based Vision Recognition for Agricultural Humanoid Robot toward Tomato Harvesting. In Proceedings of the International Conference on Intelligent Robots and Systems (IROS), Hamburg, Germany, 28 September–2 October 2015; pp. 1–8. [Google Scholar]
  33. Yoshida, T.; Onishi, Y.; Kawahara, T.; Fukao, T. Automated Harvesting by a Dual-Arm Fruit Harvesting Robot. ROBOMECH J. 2022, 9, 19. [Google Scholar] [CrossRef]
  34. Li, T.; Xie, F.; Zhao, Z.; Zhao, H.; Guo, X.; Feng, Q. A Multi-Arm Robot System for Efficient Apple Harvesting: Perception, Task Plan and Control. Comput. Electron. Agric. 2023, 211, 107979. [Google Scholar] [CrossRef]
  35. Wang, F.; Lever, P. Cell Mapping Method for General Optimum Trajectory Planning of Multiple Robotic Arms. Robot. Auton. Syst. 1994, 12, 15–27. [Google Scholar] [CrossRef]
  36. Liu, C.; Gao, J.; Bi, Y.; Shi, X.; Tian, D. A Multitasking-Oriented Robot Arm Motion Planning Scheme Based on Deep Reinforcement Learning and Twin Synchro-Control. Sensors 2020, 20, 3515. [Google Scholar] [CrossRef]
  37. Kawasaki, H.; Ueki, S.; Ito, S. Decentralized Adaptive Coordinated Control of Multiple Robot Arms without Using a Force Sensor. Automatica 2006, 42, 481–488. [Google Scholar] [CrossRef]
  38. Tian, S.; Liu, G.; Xing, D.; Sun, Z. Design and Experiment on Greenhouse Multi-function Rail Vehicle. J. Agric. Mech. Res. 2017, 39, 116–121. [Google Scholar]
  39. Feng, Q.; Wang, X.; Wang, G.; Li, Z. Design and test of tomatoes harvesting robot. In Proceedings of the 2015 IEEE International Conference on Information and Automation, Lijiang, China, 8–10 August 2015. [Google Scholar]
  40. Liang, L.; Wenai, Z.; Qingchun, F.; Xiu, W. System Design for Rail Spraying Robot in Greenhouse. J. Agric. Mech. Res. 2016, 38, 109–112. [Google Scholar]
  41. Qi, L.; Wang, H.; Zhang, J.; Ji, R.; Wang, J. 3D Numerical Simulation and Experiment of Air-velocity Distribution of Greenhouse Air-assisted Sprayer. Trans. Chin. Soc. Agric. Mach. 2013, 44, 69–74. [Google Scholar]
  42. Alinezhad, E.; Gan, V.; Chang, V.W.-C.; Zhou, J. Unmanned Ground Vehicles (UGVs)-Based Mobile Sensing for Indoor Environmental Quality (IEQ) Monitoring: Current Challenges and Future Directions. J. Build. Eng. 2024, 88, 109169. [Google Scholar] [CrossRef]
  43. Munasinghe, I.; Perera, A.; Deo, R.C. A Comprehensive Review of UAV-UGV Collaboration: Advancements and Challenges. J. Sens. Actuator Netw. 2024, 13, 81. [Google Scholar] [CrossRef]
  44. Farella, A.; Paciolla, F.; Quartarella, T.; Pascuzzi, S. Agricultural Unmanned Ground Vehicle (UGV): A Brief Overview; Springer Nature: Cham, Switzerland, 2024; pp. 137–146. [Google Scholar]
  45. Zhang, T.; Zhou, W.; Meng, F.; Li, Z. Efficiency Analysis and Improvement of an Intelligent Transportation System for the Application in Greenhouse. Electronics 2019, 8, 946. [Google Scholar] [CrossRef]
  46. Yang, Q. Design and Development of AGV-Based IoT Solution for Greenhouse Environmental Monitoring. Master’s Thesis, Swinburne University of Technology, Sarawak, Malaysia, 2025. [Google Scholar]
  47. Zimmer, D.; Šumanovac, L.; Jurišić, M.; Čosić, A.; Lucić, P. Automatically Guided Vehicles (AGV) in Agriculture. Teh. Glas. 2024, 18, 666–672. [Google Scholar] [CrossRef]
  48. Lynch, L.; Newe, T.; Clifford, J.; Coleman, J.; Walsh, J.; Toal, D. Automated Ground Vehicle (AGV) and Sensor Technologies—A Review. In Proceedings of the 2018 12th International Conference on Sensing Technology (ICST), Limerick, Ireland, 4–6 December 2018; pp. 347–352. [Google Scholar]
  49. Saike, J.; Meina, Z.; Xue, L.; Yannan, Q.; Xiaolan, L. Development of navigation and control technology for autonomous mobile equipment in greenhouse. J. Chin. Agric. Mech. 2022, 43, 159–169. [Google Scholar]
  50. Xiang, L.; Nolan, M.; Bao, Y.; Elmore, M.; Tuel, T.; Gai, J.; Shah, D.; Wang, P.; Huser, M.; Hurd, M. Robotic Assay for Drought (RoAD): An Automated Phenotyping System for Brassinosteroid and Drought Responses. Plant J. 2021, 107, 1837–1853. [Google Scholar] [CrossRef]
  51. Roure, F.; Moreno, G.; Soler, M.; Faconti, D.; Serrano, D.; Astolfi, P.; Bardaro, G.; Gabrielli, A.; Bascetta, L.; Matteucci, M. GRAPE: Ground Robot for Vineyard Monitoring and Protection. In Iberian Robotics Conference; Springer: Berlin/Heidelberg, Germany, 2017; pp. 249–260. [Google Scholar]
  52. Mohamed, A.; El-Gindy, M.; Ren, J. Advanced Control Techniques for Unmanned Ground Vehicle: Literature Survey. Int. J. Veh. Perform. 2018, 4, 46–73. [Google Scholar] [CrossRef]
  53. Kim, K.; Deb, A.; Cappelleri, J. P-AgBot: In-Row & under-Canopy Agricultural Robot for Monitoring and Physical Sampling. IEEE Robot. Autom. Lett. 2022, 7, 7942–7949. [Google Scholar]
  54. Lehnert, C.; McCool, C.; Sa, I.; Perez, T. Performance Improvements of a Sweet Pepper Harvesting Robot in Protected Cropping Environments. J. Field Robot. 2020, 37, 1197–1223. [Google Scholar] [CrossRef]
  55. Zhuang, Y.; Guo, Y.; Li, J.; Shen, L.; Wang, Z.; Sun, M.; Wang, J. Analysis of Mechanical Characteristics of Stereolithography Soft-Picking Manipulator and Its Application in Grasping Fruits and Vegetables. Agronomy 2023, 13, 2481. [Google Scholar] [CrossRef]
  56. Brown, J.; Sukkarieh, S. Design and Evaluation of a Modular Robotic Plum Harvesting System Utilizing Soft Components. J. Field Robot. 2020, 38, 289–306. [Google Scholar] [CrossRef]
  57. Arad, B.; Balendonck, J.; Barth, R.; Ben-Shahar, O.; Edan, Y.; Hellström, T.; Hemming, J.; Kurtser, P.; Ringdahl, O.; Tielen, T.; et al. Development of a Sweet Pepper Harvesting Robot. J. Field Robot. 2020, 37, 1027–1039. [Google Scholar] [CrossRef]
  58. Feng, Q.C.; Wang, X.; Zheng, W.G.; Qiu, Q.; Jiang, K. New Strawberry Harvesting Robot for Elevated-Trough Culture. Int. J. Agric. Biol. Eng. 2011, 5, 1–8. [Google Scholar]
  59. Vatavuk, I.; Vasiljević, G.; Kovačić, Z. Task Space Model Predictive Control for Vineyard Spraying with a Mobile Manipulator. Agriculture 2022, 12, 381. [Google Scholar] [CrossRef]
  60. Benavides, M.; Cantón-Garbín, M.; Sánchez-Molina, J.A.; Rodríguez, F. Automatic Tomato and Peduncle Location System Based on Computer Vision for Use in Robotized Harvesting. Appl. Sci. 2020, 10, 5887. [Google Scholar] [CrossRef]
  61. Lv, J.; Wang, Y.; Ni, H.; Wang, Q.; Rong, H.; Ma, Z.; Yang, B.; Xu, L. Method for Discriminating of the Shape of Overlapped Apple Fruit Images. Biosyst. Eng. 2019, 186, 118–129. [Google Scholar] [CrossRef]
  62. He, B.; Zhang, Y.; Gong, J.; Fu, G.; Zhao, Y.; Wu, R. Fast Recognition of Tomato Fruit in Greenhouse at Night Based on Improved YOLO v5. Trans. Chin. Soc. Agric. Mach. 2022, 53, 201–208. [Google Scholar]
  63. Feng, Q.; Cheng, W.; Li, Y.; Wang, B.; Chen, L. Method for identifying tomato plants pruning point using Mask R-CNN. Trans. Chin. Soc. Agric. Eng. 2022, 38, 128–135. [Google Scholar]
  64. Lehnert, C.; Sa, I.; McCool, C.; Upcroft, B.; Perez, T. Sweet pepper pose detection and grasping for automated crop harvesting. In Proceedings of the 2016 IEEE International Conference on Robotics and Automation (ICRA), Stockholm, Sweden, 16–21 May 2016. [Google Scholar]
  65. Ubina, N.; Cheng, S. A Review of Unmanned System Technologies with Its Application to Aquaculture Farm Monitoring and Management. Drones 2022, 6, 12. [Google Scholar] [CrossRef]
  66. Chu, L. Study the Operation Process of Factory Greenhouse Robot Based on Intelligent Dispatching Method. In Proceedings of the 2022 IEEE International Conference on Electrical Engineering, Big Data and Algorithms (EEBDA), Changchun, China, 25–27 February 2022; pp. 291–293. [Google Scholar]
  67. Thomopoulos, V.; Bitas, D.; Papastavros, K.; Tsipianitis, D.; Kavga, A. Development of an Integrated IoT-Based Greenhouse Control Three-Device Robotic System. Agronomy 2021, 11, 405. [Google Scholar] [CrossRef]
  68. Ferreira, B.; Petrović, T.; Bogdan, S. Distributed Mission Planning of Complex Tasks for Heterogeneous Multi-Robot Teams. arXiv 2021, arXiv:2109.10106. [Google Scholar] [CrossRef]
  69. Ma, Y.; Feng, Q.; Sun, Y.; Guo, X.; Zhang, W.; Wang, B.; Chen, L. Optimized Design of Robotic Arm for Tomato Branch Pruning in Greenhouses. Agriculture 2024, 14, 359. [Google Scholar] [CrossRef]
  70. Kaljaca, D.; Vroegindeweij, B.; Henten, E.J. Coverage Trajectory Planning for a Bush Trimming Robot Arm. J. Field Robot. 2019, 37, 283–308. [Google Scholar] [CrossRef]
  71. Nadafzadeh, M.; Banakar, A.; Mehdizadeh, S.A.; Bavani, M.Z.; Minaei, S.; Hoogenboom, G. Design, Fabrication and Evaluation of a Robot for Plant Nutrient Monitoring in Greenhouse (Case Study: Iron Nutrient in Spinach). Comput. Electron. Agric. 2024, 217, 108579. [Google Scholar] [CrossRef]
  72. Schor, N.; Berman, S.; Dombrovsky, A.; Elad, Y.; Ignat, T.; Bechar, A. Development of a Robotic Detection System for Greenhouse Pepper Plant Diseases. Precis. Agric. 2017, 18, 394–409. [Google Scholar] [CrossRef]
  73. Ebrahimi, M.A.; Khoshtaghaza, M.H.; Minaei, S.; Jamshidi, B. Vision-Based Pest Detection Based on SVM Classification Method. Comput. Electron. Agric. 2017, 137, 52–58. [Google Scholar] [CrossRef]
  74. Cho, S.; Kim, T.; Jung, D.H.; Park, S.H.; Na, Y.; Ihn, Y.S.; Kim, K. Plant Growth Information Measurement Based on Object Detection and Image Fusion Using a Smart Farm Robot. Comput. Electron. Agric. 2023, 207, 107703. [Google Scholar] [CrossRef]
  75. Xiong, Y.; Peng, C.; Grimstad, L.; From, P.J.; Isler, V. Development and Field Evaluation of a Strawberry Harvesting Robot with a Cable-Driven Gripper. Comput. Electron. Agric. 2019, 157, 392–402. [Google Scholar] [CrossRef]
  76. Preter, A.D.; Anthonis, J.; Baerdemaeker, J.D. Development of a Robot for Harvesting Strawberries. IFAC-Pap. 2018, 51, 14–19. [Google Scholar] [CrossRef]
  77. Gao, J.; Zhang, F.; Zhang, J.; Yuan, T.; Yin, J.; Guo, H.; Yang, C. Development and Evaluation of a Pneumatic Finger-like End-Effector for Cherry Tomato Harvesting Robot in Greenhouse. Comput. Electron. Agric. 2022, 197, 106879. [Google Scholar] [CrossRef]
  78. Yaguchi, H.; Nagahama, K.; Hasegawa, T.; Inaba, M. Development of An Autonomous Tomato Harvesting Robot with Rotational Plucking Gripper. In Proceedings of the 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Daejeon, Republic of Korea, 9–14 October 2016; pp. 1–6. [Google Scholar]
  79. Liu, J.; Li, Z.; Wang, F.; Li, P.; Xi, N. Hand-Arm Coordination for a Tomato Harvesting Robot Based on Commercial Manipulator. In Proceedings of the 2013 IEEE International Conference on Robotics and Biomimetics (ROBIO), Shenzhen, China, 12–14 December 2013; pp. 2715–2720. [Google Scholar]
  80. Feng, Q.; Zou, W.; Fan, P.; Zhang, C.; Wang, X. Design and Test of Robotic Harvesting System for Cherry Tomato. Int. J. Agric. Biol. Eng. 2016, 11, 96–100. [Google Scholar] [CrossRef]
  81. Yu, F.; Zhou, C.; Yang, X.; Guo, Z.; Chen, C. Design and Experiment of Tomato Picking Robot in Solar Greenhouse. Trans. Chin. Soc. Agric. Mach. 2022, 53, 41–49. [Google Scholar]
  82. Van Henten, E.J.; Hemming, J.; Van Tuijl, B.A.J.; Kornet, J.G.; Meuleman, J.; Bontsema, J.; Van Os, E.A. An Autonomous Robot for Harvesting Cucumbers in Greenhouses. Auton. Robot. 2002, 13, 241–258. [Google Scholar] [CrossRef]
  83. Ji, C.; Feng, Q.; Yuan, T.; Tan, Y.; Li, W. Development and Performance Analysis on Cucumber Harvesting Robot System in Greenhouse. Robot 2011, 33, 726–730. [Google Scholar]
  84. Junge, K.; Pires, C.; Hughes, J. Lab2Field Transfer of a Robotic Raspberry Harvester Enabled by a Soft Sensorized Physical Twin. Commun. Eng. 2023, 2, 40. [Google Scholar] [CrossRef]
  85. Song, J.; Sun, X.; Zhang, T.; Zhang, B.; Xu, L. Design and Experiment of Opening Picking Robot for Eggplant. Trans. Chin. Soc. Agric. Mach. 2009, 40, 143–147. [Google Scholar]
  86. Feng, Q.; Zheng, W.; Jiang, K.; Qiu, Q.; Guo, R. Design of Strawberry Harvesting Robot on Table-top Culture. J. Agric. Mech. Res. 2012, 34, 122–126. [Google Scholar]
  87. Feng, Q.; Chen, J.; Zhang, M.; Wang, X. Design and Test of Harvesting Robot for Table-top Cultivated Strawberry. In Proceedings of the 2019 WRC Symposium on Advanced Robotics and Automation (WRC SARA), Beijing, China, 21–22 August 2019. [Google Scholar]
  88. Liu, J. Research Progress Analysis of Robotic Harvesting Technologies in Greenhouse. Trans. Chin. Soc. Agric. Mach. 2017, 48, 1–18. [Google Scholar]
  89. Liu, J. Analysis and Optimal Control of Vacuum Suction System for Tomato Harvesting Robots. Ph.D. Thesis, Jiangsu University, Zhenjiang, China, 2010. [Google Scholar]
  90. Li, Z.; Liu, J.; Li, P.; Li, W. Analysis of Workspace and Kinematics for a Tomato Harvesting Robot. In Proceedings of the 2008 International Conference on Intelligent Computation Technology and Automation (ICICTA), Changsha, China, 20–22 October 2008; pp. 823–827. [Google Scholar] [CrossRef]
  91. Van Henten, E.J.; Van Tuijl, B.A.J.; Hemming, J.; Kornet, J.G.; Bontsema, J.; Van Os, E.A. Field Test of an Autonomous Cucumber Picking Robot. Biosyst. Eng. 2003, 86, 305–313. [Google Scholar] [CrossRef]
  92. Feng, Q.; Ji, C.; Zhang, J.; Li, W. Optimization Design and Kinematic Analysis of Cucumber Harvesting Robot Manipulator. Trans. Chin. Soc. Agric. Mach. 2010, 41, 244–248. [Google Scholar]
  93. Song, J. Optimization design and simulation on structure parameter of eggplant picking robot. Mach. Des. Manuf. 2008, 6, 166–168. [Google Scholar]
  94. Ju, C.; Kim, J.; Seol, J.; Son, H.I. A Review on Multirobot Systems in Agriculture. Comput. Electron. Agric. 2022, 202, 107336. [Google Scholar] [CrossRef]
  95. Rizk, Y.; Awad, M.; Tunstel, E.W. Cooperative Heterogeneous Multi-Robot Systems: A Survey. ACM Comput. Surv. 2019, 52, 1–31. [Google Scholar] [CrossRef]
  96. Ren, Z.; Zheng, H.; Chen, J.; Chen, T.; Xie, P.; Xu, Y.; Deng, J.; Wang, H.; Sun, M.; Jiao, W. Integrating UAV, UGV and UAV-UGV Collaboration in Future Industrialized Agriculture: Analysis, Opportunities and Challenges. Comput. Electron. Agric. 2024, 227, 109631. [Google Scholar] [CrossRef]
  97. Polic, M.; Ivanovic, A.; Maric, B.; Arbanas, B.; Tabak, J.; Orsag, M. Structured Ecological Cultivation with Autonomous Robots in Indoor Agriculture. In Proceedings of the 2021 16th International Conference on Telecommunications (ConTEL), Zagreb, Croatia, 30 June–2 July 2021; pp. 189–195. [Google Scholar]
  98. Bhadoriya, A.S.; Rathinam, S.; Darbha, S.; Casbeer, D.; Manyam, S. Assisted Path Planning for a UGV–UAV Team Through a Stochastic Network. J. Indian Inst. Sci. 2024, 104, 691–710. [Google Scholar] [CrossRef]
Figure 2. Architecture for the integration of robotic arms into an unmanned farm.
Figure 4. A precision spraying robotic arm based on autonomous navigation and 3D vision [15].
Table 2. Comparative Analysis of Common Robotic Arm Types in Greenhouses.

| Robotic Arm Type | Key Technical Features | Suitable Applications | Advantages | Disadvantages | Relative Cost |
|---|---|---|---|---|---|
| SCARA | 4 DOF; fast horizontal motion; high vertical rigidity | Seedling tasks, sorting, simple picking | Fast, precise, simple structure, lower cost | Limited flexibility; poor at complex 3D tasks | Low to Medium |
| Articulated | 4- to 7-DOF; human-like motion; spherical workspace | Complex harvesting, pruning, pollination, targeted spraying | Highly flexible, obstacle avoidance, large reach | Complex control; high cost and integration difficulty | High |
| Cartesian (Gantry) | 3 linear axes (X, Y, Z); rectangular workspace; high rigidity | Large-area tasks, monitoring, imaging | High accuracy and rigidity, large work area, simple control | Inflexible, limited speed, large footprint | Medium to High |
| Multi-arm Collaborative | Dual/multi-arm coordination; shared workspace | Harvesting large/fragile items; tasks needing two hands | High dexterity for complex tasks | Highly complex control; very high cost; mainly research-stage | Very High |
Table 3. Comparative Analysis of Greenhouse Robotic Arm Deployment Platform Types.

| Platform Type | Navigation/Movement Method | Suitable Environment | Advantages | Disadvantages | Relative Complexity/Cost |
|---|---|---|---|---|---|
| Fixed Platform | None; fixed position | Fixed workstations for processing conveyed items | High stability and accuracy; simple; low cost | Limited workspace; inflexible; needs material transport | Low |
| Rail-Mounted Platform | Moves along pre-set physical rails | Structured, row-based environments | High accuracy and repeatability; simple, stable navigation | Inflexible; limited to track; high installation cost | Medium |
| UGV | Autonomous or marker-guided | Complex, unstructured environments; cross-row tasks | Highly flexible and adaptable; full greenhouse coverage | Complex navigation; accuracy is environment-dependent; needs flat ground | High |
Table 4. Comparison between industrial robotic arms and agricultural robotic arms.

| Aspect | Industrial Robotic Arm | Agricultural Robotic Arm |
|---|---|---|
| Environment | Fixed layout, constant lighting, clean, predictable | Variable lighting, changing plant growth, dust, humidity, unpredictable |
| Task Nature | High-speed, high-precision repetition of the same motion | Each task is unique; requires real-time perception and planning |
| Target Object | Standardized parts with known geometry and properties | Living organisms with varied shapes, sizes, ripeness, and fragility |
| Primary Technical Challenge | Maximizing speed, precision, and repeatability; minimizing cycle time | Robust object detection in clutter, gentle handling, real-time decision-making |
| End-Effector | Simple, task-specific, often rigid grippers designed for one object | Complex, adaptive, often “soft” grippers with force/tactile sensing to avoid damage |
| Mobility | Typically stationary, bolted to the floor | Often mobile; requires integration with a platform and robust navigation capabilities |