Mobile robots equipped with sensors for localization and path planning can navigate fields autonomously. This section covers critical components of such systems for sustainable agriculture: grasping mechanisms, mobile robotic platforms, sensing, and navigation.
3.1. Robotic Grasping and Cutting Mechanisms
In recent robotic harvesting studies, various crops have been targeted with different end effectors and mechanisms for grasping and cutting. For tomatoes, Qingchun et al. [
26] proposed a system comprising a vacuum cup and cutting gripper, operated by a dual-arm manipulator. This system features a double cutter and a grasping attachment and employs an open-loop control system with 3D scene reconstruction to assist in the harvesting process. For strawberries, multiple researchers developed harvesting arms utilizing a gripper with an internal container and three active fingers that work alongside three passive ones. Cutting is performed using two curved blades. The proposed systems also included special features such as an inclined dropping board, a soft sponge inside the gripper, and a cover for the fingers to protect the plant during harvesting [
27,
28,
29]. In contrast, sweet pepper harvesting uses a mechanism with six metal fingers covered in plastic. The fingers are spring-loaded, allowing them to move around the crop, while the cutting mechanism combines a plant stem fixing mechanism with a shaking blade. Grasp positions are recognized from a segmented 3D point cloud, with several grasping poses chosen from the point cloud data to ensure the accuracy of the harvesting process [
30,
31]. For lettuce, pneumatic actuators are employed for both grasping and cutting, with a blade that operates using a timing belt system. The linear action of the system is transferred to each side of the blade to ensure efficient cutting [
32,
33]. Lastly, eggplant harvesting relies on a simple design, with arms positioned parallel to the y-axis and the gripper closed, which avoids the need for a complex gripper mechanism to grab leaves and thus reduces the complexity of the system [
26,
34].
In addition, other crops have been targeted by robotic harvesting systems, each utilizing specialized end effectors and cutting mechanisms. For asparagus, a robotic arm uses two fingers and a slicing blade driven by a cylindrical cam mechanism, enabling fast arm movements. A tilt adjustment function allows the arm to tilt up to 15 degrees, making it adaptable to agricultural fields [
35]. Citrus harvesting employs a biting-mode end effector modeled on a snake's head. The system uses scissors for slicing and adopts harvest postures optimized according to bionic principles to speed up the harvesting process [
36,
37]. In ref. [
38], a robotic gripper combining scissors and fingers is proposed for grapes. The fingers are coated with rubber to assist in defoliation, and the scissors make the slicing process highly precise. The system accommodates a wide range of finger diameters (from 76.2 mm to 265 mm), allowing it to handle grapes of different sizes and shapes. Lastly, pumpkin harvesting uses an end effector with five fingers and seven different mechanisms, as presented in ref. [
39]. Each finger has rollers and stabilizers to avoid damaging the plant, and the design includes a 60° slicing blade whose razor edge cuts through the stem with minimal force and time, allowing effective harvesting of pumpkins of different sizes and shapes. The rotating end effector further helps to cut the stem efficiently.
Figure 1 shows some examples of grasping and cutting end effectors, while
Table 1 shows the performance of these grippers for each crop.
3.2. Mobile Robotic Platforms
In recent years, advancements in agricultural robotics have shifted the focus from retrofitting existing commercial tractors to developing specialized mobile platforms tailored for robotic applications. These platforms are broadly categorized into four-wheel platforms, featuring two- or four-wheel drive with steering, and tracked or six-wheel drive platforms [
40]. Grimstad et al. proposed a robotic platform designed around three key considerations: (1) operation in wet conditions without harming the soil; (2) affordability; and (3) flexibility, achieved through adaptable frames that allow all wheels to maintain ground contact while minimizing mechanical complexity [
41].
Many other studies have investigated mobile platforms for various agricultural applications. For example, researchers in refs. [
42,
43,
44] have explored self-propelled orchard platforms powered by four-cylinder diesel engines and equipped with hydraulic four-wheel drive for shake-and-catch apple harvesting. Further modifications have been proposed including relocating the control panel and removing worker platforms to accommodate robotic operations [
42,
43,
44]. Articulated steer tractors have also been adapted for cotton-harvesting robots [
45,
46,
47], while autonomous tractors have been used for heavy-duty harvesting of pumpkins and watermelons [
39,
48].
Among the innovative developments, a robotic mock-up model for apple harvesting was built on a Segway platform, featuring modules such as an Intel RealSense camera, a 3-DOF manipulator, and a vacuum-based end effector [
49]. Another project utilized the Husky A200 platform, whose 68 cm width suits standard cotton row spacing. This platform is lightweight, minimizing soil compaction, and can carry payloads of up to 75 kg while running on a 24 V DC battery for two to three hours under moderate duty cycles; field coverage varies with terrain and crop density. Additionally, it utilizes both LiDAR and GPS for row centering [
50,
51]. A custom-built platform by Superdroid, Inc. supported sugar snap pea harvesting with differential steering and a 30 kg capacity, moving at speeds up to 6 km/h [
52].
For greenhouse applications, railed vehicle platforms have been employed for harvesting tomatoes [
53,
54], cherry tomatoes [
26], strawberries [
55,
56], and sweet peppers [
30]. These systems operate along guided rails, enhancing precision in confined environments. Furthermore, the TERRA-MEPP robotic platform, a tracked system, was developed for biofuel crop phenotyping, offering autonomy and multi-angle crop imaging [
57]. Independently steered devices like Octinion, designed for tabletop strawberry harvesting, provide significant mobility and adaptability [
58].
These platforms, summarized in
Figure 2, represent a range of solutions designed to balance mobility, crop adaptability, and terrain compatibility. While progress has been made in developing semicommercial systems with advanced steering and mobility features, more research is needed to compare the suitability of different platform types for diverse agricultural tasks.
Table 2 provides an overview of mobile platforms, and
Table 3 lists the suitable terrains for each platform.
Agricultural fields are traversed not only by unmanned ground vehicles (UGVs) but also by unmanned aerial vehicles (UAVs) such as drones. The advancement of sensor technology in the early 2000s allowed for a broader use of drones across various fields, including precision agriculture, whereas in the 1980s their use was exclusive to military and civil surveillance [
59]. Drones can be utilized in many ways. When equipped with spraying systems, they can be used for the targeted application of pesticides, herbicides, or fertilizers. If environmental sensors are integrated, they can collect data that can later be fed into deep learning models for yield prediction, for example. Lastly, when pneumatic projection systems are mounted on drones, they can be used for seeding by propelling seeds into the soil at speeds of 200 km/h to 300 km/h, which is very useful in terrains that are difficult to traverse.
Table 4 lists different types of drones along with their advantages and disadvantages in the field of precision agriculture.
Table 2.
Overview of mobile platforms for agricultural robots [
60].
| Mobile Platform Type | Characteristics | Applications |
|---|---|---|
| Four-wheel platform with two- or four-wheel drive and two- or four-wheel steering | Lightweight, flexible frame, and suitable for wet conditions without damaging the soil structure. | Cotton harvesting, pumpkin and watermelon harvesting, apple harvesting |
| Tracked platform or six-wheel drives | Minimize physical effect on soil, suitable for various environmental situations. | Energy sorghum phenotyping, apple harvesting |
| Railed vehicle robot platform | Guided rail system for greenhouse harvesting. | Tomato harvesting, cherry tomato harvesting, strawberry harvesting, sweet pepper harvesting |
| Independent steering devices | Superior mobility in sloped or irregular terrain. | Strawberry picking, tomato harvesting, sugar snap pea harvesting, kiwi harvesting |
Table 3.
Task–terrain matrix for agricultural robot platforms.
| Platform Type | Suitable Terrain | Crop Row Spacing | Canopy Height Limit | Typical Tasks |
|---|---|---|---|---|
| Railed platform | Flat and uniform surfaces (e.g., greenhouses) | Any | Low canopy | Greenhouse monitoring, fixed-path spraying |
| Tracked platform | Muddy or uneven ground, soft soil | ≥50 cm | Medium canopy | Field navigation, harvesting in wet soil |
| Four-wheel platform | Firm terrain, moderate slopes | ≥70 cm | High canopy | Fruit picking, transport, field inspection |
| Independent steering platform | Flat to moderately rough terrain | <70 cm | Low to medium canopy | Precision spraying, intra-row navigation, obstacle avoidance |
Table 4.
Comparison of different drone types in precision agriculture [
59].
| Drone Type | Advantages | Disadvantages |
|---|---|---|
| Fixed-wing drones | Large surface coverage (up to 1000 ha/day); high-altitude flight (up to 120 m legally); efficient for large-scale mapping; long flight autonomy (1–2 h) | Limited maneuverability; requires a clear area for takeoff and landing; less suitable for detailed inspections; minimum speed required for flight |
| Multirotor drones | | Limited surface coverage (50–100 ha/day); shorter flight autonomy (20–30 min); less efficient for mapping large areas; more sensitive to strong winds |
| Hybrid and eVTOL drones | Combines advantages of fixed-wing and multirotor; vertical takeoff and landing; good flight autonomy (up to 1 h); suitable for various missions | |
| Foldable-wing drones | Increased portability; easy transport and deployment; performance like fixed-wing drones; suitable for small and medium-sized farms | Potentially reduced durability due to folding mechanism; may have limited payload capacity; potentially higher cost than standard models |
3.3. Sensing and Localization
Sensing and localization are integral to the functioning of agricultural robots, allowing them to perform critical tasks such as trajectory tracking, target localization, collision avoidance, and mapping their environment [
8]. Sensors are crucial for enhancing actuation, world modeling, and decision-making, as they deliver real-time data that can be integrated and processed to support robot operations [
61]. However, selecting reliable, accurate, and cost-effective sensors remains a challenge, particularly for precise localization [
62,
63]. To address this, many agricultural robots utilize sensor fusion techniques combining Global Navigation Satellite System (GNSS), vision-based, and Inertial Measurement Unit (IMU) data to maintain accurate positioning. While GNSS provides accurate global localization in open fields, its accuracy degrades under canopy cover or dense vegetation. Vision-based methods and IMU integration compensate for this degradation, ensuring robust localization [
38,
39,
48,
56,
57,
64]. Despite the variety of available sensors, their effectiveness varies across agricultural tasks. Stereo vision systems provide dense depth maps well suited to row-following and plant localization, but their performance degrades under variable lighting and dense canopy conditions. LiDAR, on the other hand, offers higher range accuracy and robustness to illumination changes, but at a higher cost and power consumption, making it less suitable for small-scale robots. Combining LiDAR's geometric precision with stereo vision's texture information improves row orientation accuracy by 15% to 20% compared to using either sensor alone [
65,
66]. Similarly, low-cost ultrasonic or infrared sensors are suitable for obstacle avoidance, but their short range and limited angular resolution reduce reliability in cluttered environments. Hence, sensor selection often involves a trade-off between accuracy and cost, emphasizing the importance of hybrid and multimodal sensing strategies in precision agriculture.
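As a concrete illustration of the fusion idea, the sketch below blends a high-rate IMU dead-reckoning estimate with intermittent absolute GNSS fixes using a simple one-dimensional complementary filter. The rates, accelerations, and blend gain are illustrative assumptions, not values from the cited systems.

```python
# Minimal 1-D complementary-filter sketch of the GNSS + IMU fusion idea:
# high-rate IMU acceleration is integrated for smooth short-term motion,
# while low-rate GNSS fixes correct the accumulated drift.
# ALPHA and all numbers below are illustrative assumptions.

ALPHA = 0.9  # weight on the IMU prediction; (1 - ALPHA) pulls toward GNSS

def fuse(pos, vel, accel, dt, gnss_pos=None):
    """One filter step: integrate the IMU, then blend in GNSS if available."""
    vel = vel + accel * dt          # integrate acceleration -> velocity
    pred = pos + vel * dt           # integrate velocity -> predicted position
    if gnss_pos is None:            # no fix (e.g., under canopy): dead-reckon
        return pred, vel
    fused = ALPHA * pred + (1.0 - ALPHA) * gnss_pos
    return fused, vel

# Robot accelerates at 0.5 m/s^2; GNSS reports every 5th step (1 Hz vs. 5 Hz IMU).
pos, vel = 0.0, 0.0
for k in range(10):
    # True position of a uniformly accelerating robot: 0.5 * a * t^2
    gnss = 0.5 * 0.5 * ((k + 1) * 0.2) ** 2 if (k + 1) % 5 == 0 else None
    pos, vel = fuse(pos, vel, accel=0.5, dt=0.2, gnss_pos=gnss)
```

In practice, a Kalman filter replaces the fixed blend gain with a gain derived from the sensors' noise statistics, but the structure of the correction is the same.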
Table 5 illustrates sensors used in the positioning of agricultural systems.
Table 5.
The recent major upgrades of individual GNSS components and their impact on precision agriculture [
67].
| GNSS | Recent Upgrades | Impact on Precision Agriculture |
|---|---|---|
| GPS | GPS III satellites [68] | Improved signal strength |
| | L5 civil signal [69,70] | Increased resistance to multipath interference |
| GLONASS | GLONASS-K satellites [71,72] | Increased satellite availability and signal strength |
| Galileo | Full operational capability [73] | Global coverage and reliable signal reception |
| | High Accuracy Service [74] | Centimeter-level positioning accuracy |
| BeiDou | BeiDou-3 satellites [75,76] | Global coverage and reliable signal reception |
| | New signals (B1C, B2a, B2b) [77,78] | Improved positioning accuracy |
In agricultural robotics, a wide array of sensors is employed, including force-torque, tactile, encoders, infrared, sonar, ultrasonic, gyroscopes, accelerometers, active beacons, laser range finders, and vision-based sensors such as color tracking, proximity, contact, pressure, and depth sensors [
79]. Stereo cameras, which use multiple lenses and image sensors, are especially common for plant localization, allowing robots to accurately map the position of crops in real time [
26,
31,
49,
53,
56,
65,
66,
80,
81]. Furthermore, object-detection AI models, such as YOLOv3 (You Only Look Once, Version 3), play a pivotal role in real-time object recognition. YOLOv3 utilizes deep convolutional neural networks (CNNs) to identify objects in videos, live streams, or still images, enabling agricultural robots to localize plants and other objects with high precision [
66,
82,
83].
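One small but essential piece of such detection pipelines is non-maximum suppression (NMS), which collapses overlapping candidate boxes into a single detection per object, e.g., per fruit. The sketch below shows the standard greedy NMS procedure in plain Python; the boxes, scores, and threshold are invented for illustration.

```python
# Greedy non-maximum suppression, the post-processing step YOLO-family
# detectors use to keep one box per detected object.
# Boxes are (x1, y1, x2, y2); scores and the IoU threshold are illustrative.

def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def nms(boxes, scores, iou_thresh=0.5):
    """Keep the highest-scoring box, drop boxes overlapping it, repeat."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) < iou_thresh]
    return keep

# Two overlapping candidates on one fruit plus one distinct detection.
boxes = [(10, 10, 50, 50), (12, 12, 52, 52), (100, 100, 140, 140)]
scores = [0.9, 0.6, 0.8]
print(nms(boxes, scores))  # -> [0, 2]
```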
A central element of precision agriculture (PA) is the wireless sensor network (WSN), which consists of multiple wireless nodes that collect environmental data through various sensors. These nodes, which include a micro-controller, radio transceiver, sensors, and antenna, are connected to one another and transmit data to a central system for processing and analysis [
84]. The ability to monitor soil conditions, crop health, and environmental variables in real time has become possible due to advancements in WSN technologies, leading to a reduction in the size and cost of sensors. This has made sensor deployment feasible in diverse agricultural applications, as illustrated in
Table 6, which lists common sensors used for aiding precision agriculture [
85].
Wireless sensor nodes are typically categorized into source nodes, which gather the data, and sink nodes, which aggregate and transmit the data to the central system. Sink nodes are more powerful, offering enhanced computational and processing capabilities compared to source nodes. However, choosing the right wireless node depends on several aspects, such as power, memory, size, data rate, and cost.
Table 7 shows a comparison of various wireless nodes and their specifications, highlighting their suitability for agricultural sensing and localization applications. Among these, the MICA2 wireless node is particularly notable due to its numerous expansion connectors, making it an ideal choice for connecting to multiple sensors and supporting complex monitoring tasks in agriculture [
85].
Table 7.
Wireless nodes and their sensors [
86].
| No. | Wireless Node (WN) | Microcontroller (MC) | Available Sensors | Data Rate |
|---|---|---|---|---|
| 1 | MICA2DOT | ATmega128L | GPS, Light, Humidity, Barometric pressure, Temperature, Accelerometer, Acoustic, RH | 38.4 K Baud |
| 2 | Imote2 | Marvell/XScalePXA271 | Light, Temperature, Humidity, Accelerometer | 250 Kbps |
| 3 | IRIS | ATmega128L | Light, Barometric pressure, RH, Acoustic, Acceleration/seismic, Magnetic and video | 250 Kbps |
| 4 | MICAz | ATmega128L | Light, Humidity, Temperature, Barometric pressure, GPS, RH, Accelerometer, Acoustic, Video sensor, Sounder, Magnetometer, Microphone | 250 Kbps |
| 5 | TelosB | TIMSP430 | Light, Temperature, Humidity | 250 Kbps |
| 6 | Cricket | ATmega128L | Accelerometer, Light, Temperature, Humidity, GPS, RH, Acoustic, Barometric pressure, Ultrasonic, Video sensor, Microphone, Magnetometer, Sounder | 38.4 K Baud |
| 7 | MICA2 | ATmega128L | Temperature, Light, Humidity, Accelerometer, GPS, Barometric pressure, RH, Acoustic, Sounder, Video, Magnetometer | 38.4 K Baud |
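The source-node/sink-node split described above can be sketched as follows. The node classes, readings, and aggregation below are illustrative assumptions, not an implementation of any specific node from Table 7.

```python
# Conceptual sketch of a WSN's source/sink division of labor: lightweight
# source nodes report raw readings, and a more capable sink node aggregates
# them before forwarding to the central system. All values are invented.

class SourceNode:
    def __init__(self, node_id):
        self.node_id = node_id

    def read(self, temperature_c, soil_moisture_pct):
        # A real node would sample hardware sensors here.
        return {"node": self.node_id,
                "temperature_c": temperature_c,
                "soil_moisture_pct": soil_moisture_pct}

class SinkNode:
    """Aggregates source readings, as a sink's extra compute allows."""
    def __init__(self):
        self.buffer = []

    def receive(self, packet):
        self.buffer.append(packet)

    def summarize(self):
        n = len(self.buffer)
        return {"nodes": n,
                "mean_temperature_c":
                    sum(p["temperature_c"] for p in self.buffer) / n,
                "mean_soil_moisture_pct":
                    sum(p["soil_moisture_pct"] for p in self.buffer) / n}

sink = SinkNode()
sink.receive(SourceNode("A").read(21.0, 34.0))
sink.receive(SourceNode("B").read(23.0, 30.0))
print(sink.summarize())  # mean temperature 22.0 C, mean moisture 32.0 %
```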
Moreover, the Robot Operating System (ROS) is extensively used in agricultural robotics to facilitate communication between hardware and software components. ROS is an open-source framework that has significantly advanced robotics applications in agriculture; researchers commonly program ROS nodes in Python or C++ [
31,
48,
49]. In agricultural robotics, the ROS is organized around core stacks such as perception, localization and mapping, planning, control, and navigation. Most implementations rely on ROS 1, which is more widely supported by open-source libraries and has a larger community for debugging [
38]. The adoption of precision agricultural mobile robots is expected to grow as sensor prices fall and open-source platforms become more widely available.
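ROS itself is not reproduced here, but the topic-based publish/subscribe pattern at its core can be mimicked in a few lines. The sketch below is a conceptual, in-process stand-in (topic names and messages are invented), not the actual rospy/rclpy API.

```python
# In-process mimic of the ROS 1 topic pattern: nodes communicate by
# publishing messages to named topics and subscribing callbacks to them.
# This illustrates the architecture only; real ROS adds networking,
# message types, and a node lifecycle.

class TopicBus:
    def __init__(self):
        self.subscribers = {}

    def subscribe(self, topic, callback):
        self.subscribers.setdefault(topic, []).append(callback)

    def publish(self, topic, message):
        for cb in self.subscribers.get(topic, []):
            cb(message)

bus = TopicBus()
log = []

# A "control" node reacts to poses published by a "localization" node.
bus.subscribe("/robot/pose", lambda pose: log.append(f"steer toward {pose}"))
bus.publish("/robot/pose", (1.5, 2.0))
print(log)  # -> ["steer toward (1.5, 2.0)"]
```

This decoupling is why perception, localization, planning, and control stacks can be developed and swapped independently.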
Figure 3 illustrates some of the basic localization and sensing components used in agricultural robots.
3.4. Path Planning and Navigation
Path planning is the process of calculating a robot’s continuous journey from an initial state to a goal state or configuration [
46]. This method is based on a preexisting map of the surroundings stored in the robot’s memory. The state or configuration describes the robot’s potential position in the environment, and transitions between states are accomplished by particular actions [
8]. Effective path planning is essential for robotic control and must satisfy several criteria, including collision avoidance, reachability, smooth movement, minimized travel time, optimal distance from obstacles, and minimal abrupt turns or movement constraints [
87]. In agricultural applications, such as fruit harvesting, path planning is influenced by the type of manipulator, end effector, and crop being harvested. Path planning becomes computationally intensive with manipulators that have many degrees of freedom (DOF), though efficiency improves significantly when the DOF is limited to the requirements of the task [
81].
Path planning algorithms are employed across various applications, including autonomous vehicles, unmanned aerial vehicles, and mobile robots, to determine safe, efficient, collision-free, and cost-effective paths from a starting point to a destination [
88]. Depending on the environment, there may be multiple viable paths—or none—connecting the start and target configurations. Additional optimization criteria, such as minimizing path length, are often introduced to achieve specific objectives.
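A common way to compute such a collision-free, cost-minimizing path on a gridded field map is A* search. The sketch below is a minimal illustration; the grid, start, and goal are toy assumptions, with 1 marking an obstacle cell.

```python
# Minimal grid-based A* search with a Manhattan-distance heuristic.
# Returns the list of cells from start to goal, or None if no
# collision-free path exists.
import heapq

def astar(grid, start, goal):
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # admissible heuristic
    frontier = [(h(start), 0, start, [start])]  # (f = cost + h, cost, cell, path)
    seen = set()
    while frontier:
        _, cost, cell, path = heapq.heappop(frontier)
        if cell == goal:
            return path
        if cell in seen:
            continue
        seen.add(cell)
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and not grid[nr][nc]:
                heapq.heappush(frontier, (cost + 1 + h((nr, nc)), cost + 1,
                                          (nr, nc), path + [(nr, nc)]))
    return None  # start and goal are not connected

field = [[0, 0, 0],
         [1, 1, 0],   # a row of obstacles forces a detour
         [0, 0, 0]]
path = astar(field, (0, 0), (2, 0))  # detours right around the obstacles
```

Because the Manhattan heuristic never overestimates the remaining cost on a 4-connected grid, the first path popped at the goal is guaranteed to be optimal.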
Robot navigation pertains to the robot’s ability to determine its location within the environment and plan a route to a specified destination. This requires both a map of the environment and the capacity to interpret it. In open fields, robots often utilize GPS and cameras for navigation, employing path-tracking algorithms. In contrast, robots in greenhouses are typically guided by tracks; therefore, they need position-control algorithms instead of full navigation algorithms [
38,
39,
48,
53,
54,
55,
56,
57]. Some research has focused on motion planning for robotic arms without incorporating obstacle avoidance or traditional path-planning mechanisms [
64,
65,
89]. Recent advancements have introduced sophisticated path-planning approaches in agriculture, such as leveraging convolutional neural networks (CNNs) [
49,
80] and navigating along predefined paths mapped in advance [
38,
48].
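As an example of the path-tracking algorithms mentioned above, the sketch below implements the classic pure pursuit steering law, which aims the robot at a lookahead point on the reference path. The path, pose, wheelbase, and lookahead distance are illustrative assumptions, not parameters from the cited systems.

```python
# Pure pursuit steering: pick the first path point at least `lookahead`
# meters away and compute the bicycle-model steering angle toward it.
import math

def pure_pursuit_steer(pose, heading, path, lookahead=1.0, wheelbase=0.5):
    """Return a steering angle (rad) aiming at the lookahead point."""
    x, y = pose
    target = next((p for p in path
                   if math.hypot(p[0] - x, p[1] - y) >= lookahead), path[-1])
    # Angle to the target expressed in the robot's own frame.
    alpha = math.atan2(target[1] - y, target[0] - x) - heading
    # Classic pure-pursuit law for a bicycle-model vehicle.
    return math.atan2(2.0 * wheelbase * math.sin(alpha), lookahead)

# Robot at the origin facing +x, tracking a straight crop row along y = 0.
row = [(i * 0.5, 0.0) for i in range(1, 10)]
steer = pure_pursuit_steer((0.0, 0.0), 0.0, row)  # straight row -> ~0 steering
```

A robot displaced below the row (e.g., at y = -0.5) gets a positive steering angle, turning it back toward the row, which is the self-correcting behavior that makes the method popular for row following.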