Review

Fusion of Robotics, AI, and Thermal Imaging Technologies for Intelligent Precision Agriculture Systems

1 Artificial Intelligence Research Center (AIRC), College of Engineering and Information Technology, Ajman University, Ajman P.O. Box 346, United Arab Emirates
2 College of Artificial Intelligence, Arab Academy for Science Technology and Maritime Transport, Alamein 51718, Egypt
3 School of Mathematical and Computer Sciences, Heriot Watt University, Dubai P.O. Box 501745, United Arab Emirates
* Author to whom correspondence should be addressed.
Sensors 2025, 25(22), 6844; https://doi.org/10.3390/s25226844
Submission received: 15 September 2025 / Revised: 27 October 2025 / Accepted: 3 November 2025 / Published: 8 November 2025

Abstract

The world population is expected to exceed 10 billion by 2050, placing further stress on food production. Precision agriculture has become the principal approach for enhancing productivity and sustainability in agricultural production. This paper presents a technical review of how robotics, artificial intelligence (AI), and thermal imaging (TI) technologies transform precision agriculture operations, focusing on sensing, automation, and farm decision making. Agricultural robots alleviate labor shortages and improve efficiency by applying their sensing devices and kinematics to planting, spraying, and harvesting. AI and TI bring efficiency to crop monitoring through accurate assessment of pests and diseases and quality assurance of harvested crops. Different deep learning models are employed for plant disease diagnosis and resource management, namely VGG16, InceptionV3, and MobileNet, trained and evaluated on the PlantVillage, PlantDoc, and FieldPlant datasets. To reduce crop losses, AI–TI integration enables early recognition of fluctuations caused by pests or diseases, allowing timely control and mitigation. While issues of cost and environmental variability (illumination, canopy moisture, and microclimate instability) remain, continued advances in artificial intelligence, robotics, and their integration will offer sustainable solutions to the existing gaps.

1. Introduction

As the global population is projected to exceed 10 billion by 2050 [1], the demand for innovative and efficient methods of food production is more pressing than ever. Precision agriculture, an interdisciplinary approach that combines advanced technologies with traditional agricultural practices, has emerged as a cornerstone for addressing this challenge. By integrating machines, sensors, and data-driven management strategies, precision agriculture enhances productivity and resource efficiency while addressing the variability and uncertainties inherent in agricultural systems [2].
This aligns with global initiatives such as the United Nations’ SDG 2 (Zero Hunger) and the 50 × 2030 Initiative [3], which emphasize data-driven agricultural transformation through improved farm-level information and precision technologies. Emerging technologies associated with Industrial Revolution 4.0 [4], in particular, contribute substantially to sustainable development and improved quality of life through autonomous systems, especially in agriculture [5]. For more than three decades [6], both laboratory prototypes and field-validated agricultural robots have been developed to support operations such as planting, disease detection, spraying, and harvesting [7,8]. These robots compensate for labor shortages, enhance efficiency, and support environmentally friendly practices, addressing some of the major problems that characterize the agricultural industry.
Recent breakthroughs in AI [9,10,11,12] have transformed precision agriculture in the detection of crop pests and diseases, crop monitoring, and resource management. Among the AI-driven technologies that have been highly transformative in the agricultural landscape are computer vision and thermal imaging (TI). TI can reveal subtle temperature variations linked to early-stage infestations and infections that are generally not visible to the human eye [13]. When employing thermal imaging, emissivity must be standardized (typically 0.95 for vegetation), and image capture should occur under consistent time-of-day, wind speed, and vapor pressure-deficit conditions to isolate disease- or stress-induced thermal signatures from general water stress effects. Therefore, AI and TI, together with deep learning (DL) algorithms such as convolutional neural networks (CNNs), provide scalable solutions for large agricultural environments and significantly improve the efficiency of pest and disease management.
Furthermore, AI automates data analytics for plant health monitoring and water stress detection in crop fields, allowing irrigation practices to be optimized accordingly. For several years, DL models trained on large datasets have led the way in detecting crop rows and improving the accuracy of identifying plant diseases, pest attacks, and crop stress. Training of AI models for precision agriculture has been facilitated by important open-source plant disease datasets such as PlantVillage [14,15], PlantDoc [16], and FieldPlant [17]. These datasets have different characteristics that support training models such as VGG16, InceptionV3, and MobileNet for image classification tasks, particularly for capturing minute changes in plant health [18,19,20]. These models demonstrate the capability of AI to revolutionize agricultural practices by enabling early intervention and thereby reducing crop losses.
The power of precision agriculture is in treating artificial intelligence (AI), robotics, and thermal imaging (TI) as complementary technologies to create autonomous intelligent farming systems. Agricultural robots that use thermal and visual sensors can capture high-resolution data on crop health and environmental conditions, while AI algorithms analyze these datasets to guide decisions such as targeted irrigation, precision spraying, or autonomous navigation [21,22,23]. Thermal imaging boosts robotic perception by detecting small canopy temperature variations that are linked to plant stress, allowing AI models to differentiate between biotic and abiotic factors with better accuracy [24,25]. This triad creates a self-optimizing ecosystem where thermal data improve AI inference, AI enhances robotic autonomy, and robotics enables scalable, real-time deployment of intelligent sensing systems.
The motivation behind this paper stems from the urgent need to address the global challenge of feeding an estimated population of over 10 billion by 2050. Traditional agricultural methods alone are insufficient to sustainably meet this rising demand for food. Labor shortages, resource inefficiencies, and climate-related uncertainties further exacerbate this challenge. By critically reviewing the latest advancements in agricultural robotics, artificial intelligence, and thermal imaging, this paper aims to highlight how integrating these technologies can revolutionize precision agriculture. The objective is to provide a review of recent studies on how smart systems can enhance crop productivity, reduce losses due to pests and diseases through early detection, and optimize resource utilization while maintaining sustainability. This comprehensive analysis will guide researchers, practitioners, and policymakers in adopting innovative, data-driven solutions to bridge existing gaps and build resilient food systems for the future. It also discusses future work, including domain adaptation, that can further improve precision agriculture.

2. Review Paper Structure

This review paper was structured systematically based on thematic areas identified through an extensive literature search. In total, approximately 62 papers related to the field of artificial intelligence (AI) and about 45 papers focusing on agricultural robotics were reviewed, covering studies published between 2015 and 2025. Major sections were defined by grouping studies using relevant keywords in online databases such as IEEE Xplore, ScienceDirect, and Google Scholar.
  • For the Agricultural Robots Design and System Components section, keywords such as “agricultural robot design,” “robotic end effectors,” and “robot localization in agriculture” were used.
  • The AI for Precision Agriculture section was informed by searches using “plant disease detection deep learning,” “crop monitoring CNN models,” “PlantVillage dataset,” “PlantDoc dataset,” and related terms.
  • For AI and Thermal Imaging, keywords included “thermal imaging plant stress,” “AI thermal pest detection,” and “post-harvest fruit quality.”

3. Agricultural Robots Design and System Components

Mobile robots with sensors for localization and improved path planning can navigate fields automatically. This section covers critical components for sustainable agriculture, such as gripping mechanisms, mobile robots, sensing, and navigation.

3.1. Robotic Grasping and Cutting Mechanisms

In recent robotic harvesting studies, various crops have been targeted with different end effectors and mechanisms for grasping and cutting. For tomatoes, Qingchun et al. [26] proposed a system comprising a vacuum cup and cutting gripper, operated by a dual-arm manipulator. This system features a double cutter and a grasping attachment and employs an open-loop control system with 3D scene reconstruction to assist in the harvesting process. For strawberries, multiple researchers developed harvesting arms utilizing a gripper with an internal container and three active fingers that work alongside three passive ones. Cutting is performed using two curved blades. The proposed systems also included special features such as an inclined dropping board, a soft sponge inside the gripper, and a cover for the fingers to protect the plant during harvesting [27,28,29]. Sweet pepper harvesting, in contrast, uses a mechanism with six plastic-covered metal fingers. The fingers are spring-loaded, allowing them to move around the crop, while the cutting mechanism combines a plant stem fixing mechanism with a shaking blade. Grasp positions are recognized from a segmented 3D point cloud, with several grasping poses chosen from the point cloud data to ensure the accuracy of the harvesting process [30,31]. For lettuce, pneumatic actuators are employed for both grasping and cutting, with a blade that operates using a timing belt system; the linear action of the system is transferred to each side of the blade to ensure efficient cutting [32,33]. Lastly, eggplant harvesting relies on a simple design, with arms positioned parallel to the y-axis and the gripper closed, avoiding the need for a complex gripper mechanism to grab leaves and thus reducing system complexity [26,34].
In addition, other crops have been targeted by robotic harvesting systems, each utilizing specialized end effectors and cutting mechanisms. For asparagus, a robotic arm uses two fingers and a slicing blade driven by a cylindrical cam mechanism that enables fast arm movements. A tilt adjustment function allows the arm to tilt up to 15 degrees, making it adaptable to agricultural fields [35]. Citrus harvesting uses a snake-like end effector with a biting mode, modeled after the head of a snake. The system uses scissors for slicing and harvest postures optimized according to bionic principles to speed up harvesting [36,37]. In ref. [38], a robotic gripper combining scissors and fingers is proposed for grapes. The fingers are coated with rubber to help in defoliation, and the scissors make the slicing process very precise. The system accommodates a wide range of finger diameters (from 76.2 mm to 265 mm), so it can handle grapes of different sizes and shapes, adding versatility. Lastly, pumpkin harvesting uses an end effector with five fingers and seven different mechanisms, as presented in ref. [39]. Each finger has rollers and stabilizers to avoid damaging the plant, and the design also includes a 60° slicing blade. This allows for effective harvesting of pumpkins of different sizes and shapes, with a razor-edge blade that cuts through the stem with minimal force and time. The rotating end effector further helps to cut the stem efficiently. Figure 1 shows some examples of grasping and cutting end effectors, while Table 1 shows the performance of these grippers for each crop.

3.2. Mobile Robotic Platforms

In recent years, advancements in agricultural robotics have shifted the focus from retrofitting existing commercial tractors to developing specialized mobile platforms tailored for robotic applications. These platforms are broadly categorized into four-wheel platforms, featuring two- or four-wheel drive with steering, and tracked or six-wheel drive platforms [40]. Grimstad et al. proposed a robotic platform with the following key considerations: (1) the ability to operate in wet conditions without harming the soil; (2) affordability; and (3) flexibility through adaptable frames that allow all wheels to maintain ground contact while minimizing mechanical complexity [41].
Many other studies have investigated mobile platforms for various agricultural applications. For example, researchers in refs. [42,43,44] have explored self-propelled orchard platforms powered by four-cylinder diesel engines and equipped with hydraulic four-wheel drive for shake-and-catch apple harvesting. Further modifications have been proposed including relocating the control panel and removing worker platforms to accommodate robotic operations [42,43,44]. Articulated steer tractors have also been adapted for cotton-harvesting robots [45,46,47], while autonomous tractors have been used for heavy-duty harvesting of pumpkins and watermelons [39,48].
Among the innovative developments, a robotic mock-up model for apple harvesting was built on a Segway platform, featuring modules such as an Intel RealSense camera, a 3-DOF manipulator, and a vacuum-based end effector [49]. Another project utilized the Husky A200 platform, which, with its 68 cm width, is suitable for standard cotton row spacing. This platform is lightweight, minimizing soil compaction, and can carry loads of up to 75 kg while running on a 24 V DC battery for two to three hours under moderate duty cycles; field coverage can vary depending on terrain and crop density. Additionally, it utilizes both LiDAR and GPS for row centering [50,51]. A custom-built platform by Superdroid, Inc. supported sugar snap pea harvesting with differential steering and a 30 kg capacity, moving at speeds up to 6 km/h [52].
For greenhouse applications, railed vehicle platforms have been employed for harvesting tomatoes [53,54], cherry tomatoes [26], strawberries [55,56], and sweet peppers [30]. These systems operate along guided rails, enhancing precision in confined environments. Furthermore, the TERRA-MEPP robotic platform, a tracked system, was developed for biofuel crop phenotyping, offering autonomy and multi-angle crop imaging [57]. Independently steered devices like Octinion, designed for tabletop strawberry harvesting, provide significant mobility and adaptability [58].
These platforms, summarized in Figure 2, represent a range of solutions designed to balance mobility, crop adaptability, and terrain compatibility. While progress has been made in developing semicommercial systems with advanced steering and mobility features, more research is needed to compare the suitability of different platform types for diverse agricultural tasks. Table 2 also represents an abstract overview of mobile platforms, and Table 3 shows which are the suitable terrains for each platform.
Agricultural fields are traversed not only by unmanned ground vehicles (UGVs) but also by unmanned aerial vehicles (UAVs) such as drones. The advancement of sensor technology in the early 2000s allowed for a broader use of drones across various fields, including precision agriculture, unlike in the 1980s, when their use was exclusive to military and civil surveillance [59]. Drones can be utilized in many ways. When equipped with spraying systems, they can be used for the targeted application of pesticides, herbicides, or fertilizers. If environmental sensors are integrated, they can collect data that can later be fed into deep learning models for yield prediction, for example. Lastly, when pneumatic projection systems are mounted on drones, they can be used for seeding by propelling seeds into the soil at speeds of 200 km/h to 300 km/h, which is very useful in terrains that are difficult to traverse. Table 4 lists different types of drones along with their advantages and disadvantages in the field of precision agriculture.
Table 2. Overview of mobile platforms for agricultural robots [60].
Mobile Platform Type | Characteristics | Applications
Four-wheel platform with two- or four-wheel drive and two- or four-wheel steering | Lightweight, flexible frame, and suitable for wet conditions without damaging the soil structure. | Cotton harvesting, pumpkin and watermelon harvesting, apple harvesting
Tracked platform or six-wheel drives | Minimize physical effect on soil, suitable for various environmental situations. | Energy sorghum phenotyping, apple harvesting
Railed vehicle robot platform | Guided rail system for greenhouse harvesting. | Tomato harvesting, cherry tomato harvesting, strawberry harvesting, sweet pepper harvesting
Independent steering devices | Finest mobility in sloped or irregular terrain. | Strawberry picking, tomato harvesting, sugar snap pea harvesting, kiwi harvesting
Table 3. Task–terrain matrix for agricultural robot platforms.
Platform Type | Suitable Terrain | Crop Row Spacing | Canopy Height Limit | Typical Tasks
Railed platform | Flat and uniform surfaces (e.g., greenhouses) | Any | Low canopy | Greenhouse monitoring, fixed-path spraying
Tracked platform | Muddy or uneven ground, soft soil | ≥50 cm | Medium canopy | Field navigation, harvesting in wet soil
Four-wheel platform | Firm terrain, moderate slopes | ≥70 cm | High canopy | Fruit picking, transport, field inspection
Independent steering platform | Flat to moderately rough terrain | <70 cm | Low to medium canopy | Precision spraying, intra-row navigation, obstacle avoidance
Table 4. Comparison of different drone types in precision agriculture [59].
Fixed-wing drones
  Advantages:
  • Large surface coverage (up to 1000 ha/day)
  • High-altitude flight (up to 120 m legally)
  • Efficient for large-scale mapping
  • Long flight autonomy (1–2 h)
  Disadvantages:
  • Limited maneuverability
  • Requires a clear area for takeoff and landing
  • Less suitable for detailed inspections
  • Minimum speed required for flight
Multirotor drones
  Advantages:
  • High maneuverability
  • Hovering capability
  • Vertical takeoff and landing
  • Ideal for detailed inspections and targeted spraying
  Disadvantages:
  • Limited surface coverage (50–100 ha/day)
  • Shorter flight autonomy (20–30 min)
  • Less efficient for mapping large areas
  • More sensitive to strong winds
Hybrid and eVTOL drones
  Advantages:
  • Combines advantages of fixed-wing and multirotor
  • Vertical takeoff and landing
  • Good flight autonomy (up to 1 h)
  • Suitable for various missions
  Disadvantages:
  • Increased mechanical complexity
  • Higher cost
  • May require specific training for use
Foldable-wing drones
  Advantages:
  • Increased portability
  • Easy transport and deployment
  • Performance like fixed-wing drones
  • Suitable for small and medium-sized farms
  Disadvantages:
  • Potentially reduced durability due to folding mechanism
  • May have limited payload capacity
  • Potentially higher cost than standard models

3.3. Sensing and Localization

Sensing and localization are integral to the functioning of agricultural robots, allowing them to achieve critical tasks such as trajectory tracking, target localization, collision avoidance, and mapping their environment [8]. Sensors are crucial for enhancing actuation, world modeling, and decision-making, as they deliver real-time data that can be integrated and processed to support robot operations [61]. However, selecting reliable, accurate, and cost-effective sensors remains a challenge, particularly for precise localization [62,63]. To address this, many agricultural robots utilize sensor fusion techniques combining Global Navigation Satellite System (GNSS), vision-based, and Inertial Measurement Unit (IMU) data to maintain accurate positioning. While GNSS provides accurate global localization in open fields, its accuracy degrades under canopy cover or dense vegetation. Vision-based methods and IMU integration compensate for this degradation, ensuring robust localization [38,39,48,56,57,64]. Despite the variety of available sensors, their effectiveness varies across agricultural tasks. Stereo vision systems provide dense depth maps well suited to row-following and plant localization, but their performance degrades under variable lighting and dense canopy conditions. LiDAR, on the other hand, provides higher range accuracy and robustness to illumination changes, but at a higher cost and power consumption, which can make it unsuitable for small-scale robots. Combining LiDAR's geometric precision with stereo vision's texture information improves row orientation accuracy by 15% to 20% compared to using either sensor alone [65,66]. Similarly, low-cost ultrasonic or infrared sensors are suitable for obstacle avoidance, but their short range and limited angular resolution reduce reliability in cluttered environments. Hence, sensor selection often involves a trade-off between accuracy and cost, emphasizing the importance of hybrid and multimodal sensing strategies in precision agriculture. Table 5 illustrates sensors used in the positioning of agricultural systems.
Table 5. The recent major upgrades of individual GNSS components and their impact on precision agriculture [67].
GNSS | Recent Upgrades | Impact on Precision Agriculture
GPS | GPS III satellites [68] | Improved signal strength
GPS | L5 civil signal [69,70] | Increased resistance to multipath interference
GLONASS | GLONASS-K satellites [71,72] | Increased satellite availability and signal strength
Galileo | Full operational capability [73] | Global coverage and reliable signal reception
Galileo | High Accuracy Service [74] | Centimeter-level positioning accuracy
BeiDou | BeiDou-3 satellites [75,76] | Global coverage and reliable signal reception
BeiDou | New signals (B1C, B2a, B2b) [77,78] | Improved positioning accuracy
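To make the GNSS–IMU–vision fusion strategy described above more concrete, the short Python sketch below blends dead-reckoned odometry with intermittent GNSS fixes using a simple complementary filter; the gain, speeds, and update pattern are illustrative assumptions rather than parameters from any of the cited systems.

```python
# Minimal sketch (not from the reviewed robots): a 1-D complementary filter that
# blends dead-reckoned position from wheel/IMU odometry with intermittent GNSS
# fixes. The gain and rates below are illustrative assumptions only.
from dataclasses import dataclass


@dataclass
class FusedPosition:
    x: float = 0.0          # fused along-row position (m)
    gnss_gain: float = 0.2  # how strongly a GNSS fix corrects drift (0..1)

    def predict(self, velocity: float, dt: float) -> None:
        """Dead-reckoning step from IMU/odometry velocity (drifts over time)."""
        self.x += velocity * dt

    def correct(self, gnss_x: float) -> None:
        """Blend in a GNSS fix; under canopy this step is simply skipped."""
        self.x += self.gnss_gain * (gnss_x - self.x)


if __name__ == "__main__":
    estimator = FusedPosition()
    for step in range(10):
        estimator.predict(velocity=0.55, dt=1.0)   # odometry slightly overestimates speed
        if step % 5 == 4:                          # GNSS fix available only occasionally
            estimator.correct(gnss_x=0.5 * (step + 1))
        print(f"t={step + 1:2d}s  fused x = {estimator.x:.2f} m")
```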
In agricultural robotics, a wide array of sensors is employed, including force-torque, tactile, encoders, infrared, sonar, ultrasonic, gyroscopes, accelerometers, active beacons, laser range finders, and vision-based sensors such as color tracking, proximity, contact, pressure, and depth sensors [79]. Stereo cameras, which use multiple lenses and image sensors, are especially common for plant localization, allowing robots to accurately map the position of crops in real time [26,31,49,53,56,65,66,80,81]. Furthermore, object-detection AI models, such as YOLOv3 (You Only Look Once, Version 3), play a pivotal role in real-time object recognition. YOLOv3 utilizes deep convolutional neural networks (CNNs) to identify objects in videos, live streams, or still images, enabling agricultural robots to localize plants and other objects with high precision [66,82,83].
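As an illustration of how such a detector can be wired into a robot's perception pipeline, the following hedged sketch runs a YOLOv3 network through OpenCV's DNN module; the configuration, weight, and image file names are placeholders, and the thresholds are common defaults rather than values from the cited studies.

```python
# Hedged sketch of YOLOv3-style plant/fruit detection with OpenCV's DNN module.
# "yolov3.cfg", "yolov3.weights", and "crop.jpg" are placeholder file names.
import cv2
import numpy as np

net = cv2.dnn.readNetFromDarknet("yolov3.cfg", "yolov3.weights")
image = cv2.imread("crop.jpg")
h, w = image.shape[:2]

# YOLOv3 expects a square, normalized RGB blob (416x416 is a common setting).
blob = cv2.dnn.blobFromImage(image, 1 / 255.0, (416, 416), swapRB=True, crop=False)
net.setInput(blob)
outputs = net.forward(net.getUnconnectedOutLayersNames())

boxes, confidences = [], []
for output in outputs:
    for det in output:                  # det = [cx, cy, bw, bh, objectness, class scores...]
        confidence = float(det[4]) * float(det[5:].max())
        if confidence > 0.5:
            cx, cy, bw, bh = det[0] * w, det[1] * h, det[2] * w, det[3] * h
            boxes.append([int(cx - bw / 2), int(cy - bh / 2), int(bw), int(bh)])
            confidences.append(confidence)

# Non-maximum suppression removes overlapping detections of the same plant/fruit.
keep = cv2.dnn.NMSBoxes(boxes, confidences, 0.5, 0.4)
for i in np.array(keep).flatten():
    print("detected box:", boxes[i], "confidence:", round(confidences[i], 2))
```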
A central element of precision agriculture (PA) is the wireless sensor network (WSN), which consists of multiple wireless nodes that collect environmental data through various sensors. These nodes, which include a micro-controller, radio transceiver, sensors, and antenna, are connected to one another and transmit data to a central system for processing and analysis [84]. The ability to monitor soil conditions, crop health, and environmental variables in real time has become possible due to advancements in WSN technologies, leading to a reduction in the size and cost of sensors. This has made sensor deployment feasible in diverse agricultural applications, as illustrated in Table 6, which lists common sensors used for aiding precision agriculture [85].
Wireless sensor nodes are typically categorized into source nodes, which gather the data, and sink nodes, which aggregate and transmit the data to the central system. Sink nodes are more powerful, offering enhanced computational and processing capabilities compared to source nodes. However, choosing the right wireless node depends on several aspects, such as power, memory, size, data rate, and cost. Table 7 shows a comparison of various wireless nodes and their specifications, highlighting their suitability for agricultural sensing and localization applications. Among these, the MICA2 wireless node is particularly notable due to its numerous expansion connectors, making it an ideal choice for connecting to multiple sensors and supporting complex monitoring tasks in agriculture [85].
Table 7. Wireless nodes and their sensors [86].
No. | WN | MC | Expansion Connector | Available Sensors | Data Rate
1 | MICA2DOT | ATmega128L | | GPS, Light, Humidity, Barometric pressure, Temperature, Accelerometer, Acoustic, RH | 38.4 K Baud
2 | Imote2 | Marvell/XScale PXA271 | | Light, Temperature, Humidity, Accelerometer | 250 Kbps
3 | IRIS | ATmega128L | | Light, Barometric pressure, RH, Acoustic, Acceleration/seismic, Magnetic and video | 250 Kbps
4 | MICAz | ATmega128L | | Light, Humidity, Temperature, Barometric pressure, GPS, RH, Accelerometer, Acoustic, Video sensor, Sounder, Magnetometer, Microphone | 250 Kbps
5 | TelosB | TI MSP430 | | Light, Temperature, Humidity | 250 Kbps
6 | Cricket | ATmega128L | | Accelerometer, Light, Temperature, Humidity, GPS, RH, Acoustic, Barometric pressure, Ultrasonic, Video sensor, Microphone, Magnetometer, Sounder | 38.4 K Baud
7 | MICA2 | ATmega128L | | Temperature, Light, Humidity, Accelerometer, GPS, Barometric pressure, RH, Acoustic, Sounder, Video, Magnetometer | 38.4 K Baud
WN: Wireless node, MC: Micro-controller.
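A minimal sketch of the source-node/sink-node data flow described above is given below; it assumes an IP-based link (UDP) and hypothetical sensor-reading helpers purely for illustration, whereas the nodes in Table 7 use their own low-power radio stacks.

```python
# Hedged sketch of a WSN "source node" sending periodic readings to a "sink node"
# over UDP. The sink address and the read_* helpers are illustrative assumptions.
import json
import random
import socket
import time

SINK_ADDRESS = ("192.168.1.10", 5005)       # assumed sink-node IP and port


def read_soil_moisture() -> float:          # placeholder for a real ADC/sensor driver
    return round(random.uniform(18.0, 32.0), 1)   # volumetric water content (%)


def read_canopy_temperature() -> float:     # placeholder for a real temperature probe
    return round(random.uniform(20.0, 35.0), 1)   # degrees Celsius


sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
for _ in range(3):                          # a few cycles for demonstration
    reading = {
        "node": "source-node-01",
        "soil_moisture_pct": read_soil_moisture(),
        "canopy_temp_c": read_canopy_temperature(),
        "timestamp": time.time(),
    }
    sock.sendto(json.dumps(reading).encode("utf-8"), SINK_ADDRESS)
    time.sleep(1)                           # real nodes sleep far longer to save battery
```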
Moreover, the Robot Operating System (ROS) is extensively utilized in agricultural robotics for facilitating communication between hardware and software components. The ROS is an open-source framework that has significantly advanced robotics applications in agriculture. Researchers commonly use Python or C++ for ROS programming [31,48,49]. In agricultural robotics, the ROS is organized around core stacks such as perception, localization and mapping, planning, control, and navigation. Most implementations rely on ROS 1, which is more widely supported by open-source libraries and has a larger community for debugging [38]. The adoption of precision agricultural mobile robots is expected to grow based on the decreasing prices of sensors and the availability of open-source platforms. Figure 3 illustrates some of the basic localization and sensing components used in agricultural robots.
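For readers unfamiliar with the ROS, the following minimal ROS 1 (rospy) node illustrates the publish/subscribe pattern that underpins the stacks listed above; the topic name and message type are illustrative assumptions, not those of a specific cited robot.

```python
#!/usr/bin/env python
# Hedged ROS 1 (rospy) sketch: a node that publishes a target ground speed for a
# drive controller. "/agribot/target_speed" is a hypothetical topic name.
import rospy
from std_msgs.msg import Float32


def main() -> None:
    rospy.init_node("speed_commander")
    pub = rospy.Publisher("/agribot/target_speed", Float32, queue_size=10)
    rate = rospy.Rate(10)                      # 10 Hz control loop
    while not rospy.is_shutdown():
        pub.publish(Float32(data=0.5))         # cruise at 0.5 m/s between rows
        rate.sleep()


if __name__ == "__main__":
    main()
```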

3.4. Path Planning and Navigation

Path planning is the process of calculating a robot’s continuous journey from an initial state to a goal state or configuration [46]. This method is based on a preexisting map of the surroundings stored in the robot’s memory. The state or configuration describes the robot’s potential position in the environment, and transitions between states are accomplished by particular actions [8]. Effective path planning is essential for robotic control and must satisfy several criteria, including collision avoidance, reachability, smooth movement, minimized travel time, optimal distance from obstacles, and minimal abrupt turns or movement constraints [87]. In agricultural applications, like fruit harvesting, path planning is influenced by the type of manipulator, end effector, and crop being harvested. Path planning becomes computationally intensive with manipulators that have many degrees of freedom (DOF), though efficiency improves significantly when the DOF is limited to the requirements of the task [81].
Path planning algorithms are employed across various applications, including autonomous vehicles, unmanned aerial vehicles, and mobile robots, to determine safe, efficient, collision-free, and cost-effective paths from a starting point to a destination [88]. Depending on the environment, there may be multiple viable paths—or none—connecting the start and target configurations. Additional optimization criteria, such as minimizing path length, are often introduced to achieve specific objectives.
Robot navigation pertains to the robot’s ability to determine its location within the environment and plan a route to a specified destination. This requires both a map of the environment and the capacity to interpret it. In open fields, robots often utilize GPS and cameras for navigation, employing path-tracking algorithms. In contrast, robots in greenhouses are typically guided by tracks; therefore, they need position-control algorithms instead of full navigation algorithms [38,39,48,53,54,55,56,57]. Some research has focused on motion planning for robotic arms without incorporating obstacle avoidance or traditional path-planning mechanisms [64,65,89]. Recent advancements have introduced sophisticated path-planning approaches in agriculture, such as leveraging convolutional neural networks (CNNs) [49,80] and navigating along predefined paths mapped in advance [38,48].
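As a concrete, simplified example of the path-planning criteria discussed in this subsection, the sketch below runs A* over a small occupancy grid; the binary grid, 4-connected motion model, and Manhattan heuristic are simplifying assumptions compared with the planners used on real agricultural robots.

```python
# Minimal sketch of grid-based A* path planning over a field occupancy map.
import heapq


def astar(grid, start, goal):
    """Return a list of (row, col) cells from start to goal, or [] if unreachable."""
    rows, cols = len(grid), len(grid[0])
    open_set = [(0, start)]                       # (f = g + h, cell)
    g = {start: 0}
    parent = {}
    while open_set:
        _, cell = heapq.heappop(open_set)
        if cell == goal:
            path = [cell]
            while cell in parent:                 # walk back to the start
                cell = parent[cell]
                path.append(cell)
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                tentative = g[cell] + 1
                if tentative < g.get((nr, nc), float("inf")):
                    g[(nr, nc)] = tentative
                    parent[(nr, nc)] = cell
                    h = abs(nr - goal[0]) + abs(nc - goal[1])   # Manhattan heuristic
                    heapq.heappush(open_set, (tentative + h, (nr, nc)))
    return []                                     # no collision-free path exists


field = [[0, 0, 0, 0],        # 0 = free ground, 1 = obstacle (e.g., crop bed)
         [1, 1, 0, 1],
         [0, 0, 0, 0],
         [0, 1, 1, 0]]
print(astar(field, (0, 0), (3, 3)))
```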

4. AI for Precision Agriculture

4.1. Available Datasets

4.1.1. Multispectral Dataset for Tomato Disease Detection

In 2022, a multispectral dataset was presented by Georgantopoulos et al. [90], intended for identifying plant diseases and pests in tomato crops. The dataset includes 314 multispectral images that combine three RGB channels with two Near-Infrared (NIR) channels (850 nm and 980 nm), recorded in real greenhouse settings using a MUSES9-MS-PL multispectral camera. It focuses on two major tomato diseases: Tuta absoluta (tomato leaf miner) and Leveillula taurica (powdery mildew). A visual representation of the dataset’s multispectral cube, highlighting the captured NIR and visible spectrum channels [90], is shown in Figure 4.
Figure 4. A multispectral cube of the dataset. Top: the NIR channels at 850 nm and 980 nm. Bottom: channels of the visible spectrum in the wavelength regions of red, green, and blue (see Table 8). (For interpretation of the references to color in this figure legend, the reader is referred to the web version of this article [90]).
Table 8. Wavelength regions supported by the MUSES9-MS-PL camera and those used in the current study [90].
Spectral Band | Lower Limit (nm) | Upper Limit (nm) | Current Study (nm) | FWHM * (nm)
Infrared | 800 | 1000 | 980, 850 | 50
Red | 600 | 700 | 630 | 40
Green | 500 | 600 | 540 | 30
Blue | 400 | 500 | 460 | 30
Ultraviolet | 365 | 385 | – | –
* FWHM: Full Width at Half Maximum.
The dataset was created in a greenhouse environment. The infected tomato plants were classified based on various stages of disease development. The images were then annotated with bounding boxes for lesions, and the levels of lesion stage progression were tagged to show the disease severity. The addition of NIR channels improves disease detection by offering more spectral data for separating backgrounds and locating lesions. This dataset is an important tool for training machine learning models aimed at automated disease identification. As reported by Georgantopoulos et al. [90], a Faster R-CNN baseline (with a ResNet-50 backbone, default anchor configuration, and 512 × 512 input resolution) was trained and evaluated on this dataset. Using an 80/10/10 train–validation–test split and an IoU threshold of 0.5, the model achieved a mean Average Precision (mAP) of 20.2 across the target classes, as detailed in their evaluation protocol. The open-access nature of the dataset promotes additional studies in multispectral imaging for precision agriculture.
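For orientation, the hedged sketch below instantiates a Faster R-CNN detector with a ResNet-50 FPN backbone of the kind reported as the baseline above; it is not the authors' code, and the three-channel input, class count, and untrained weights are simplifying assumptions (the actual dataset adds two NIR channels and follows the splits described above).

```python
# Hedged sketch of a Faster R-CNN (ResNet-50 FPN) detection baseline. With the
# untrained weights used here the outputs are arbitrary; this only shows the API.
import torch
import torchvision

# 2 lesion classes (Tuta absoluta, Leveillula taurica) + background.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(num_classes=3)
model.eval()

image = torch.rand(3, 512, 512)           # stand-in for a 512 x 512 greenhouse image
with torch.no_grad():
    prediction = model([image])[0]        # dict with "boxes", "labels", "scores"

for box, label, score in zip(prediction["boxes"], prediction["labels"], prediction["scores"]):
    if score > 0.5:                       # score cut for display; mAP uses IoU 0.5 matching
        print(label.item(), [round(v, 1) for v in box.tolist()], round(score.item(), 2))
```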

4.1.2. PlantVillage Dataset

As of 2024, the PlantVillage dataset [14,15] contains over 54,000 RGB images. These images are categorized into 38 distinct classes, each representing a specific plant disease, and are further divided into two main groups: healthy and infected (sick) crop leaves. Each original RGB image is accompanied by a grayscale version and a segmented version, providing additional modalities for model training and evaluation. The inclusion of these variants allows for the exploration of advanced preprocessing and feature extraction techniques, enhancing the versatility of the dataset. Additionally, the dataset is organized into training and validation sets in an 80/20 ratio while maintaining the original directory structure. The dataset contains a variety of species, as shown in Table 9 and Figure 5. Some sample images from the dataset can be seen in Figure 6 [91].
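The sketch below shows one common way to load such a class-per-folder dataset and create the 80/20 split described above; the root path is a placeholder and the preprocessing values are standard ImageNet defaults rather than a prescribed recipe.

```python
# Hedged sketch: loading PlantVillage-style class folders with an 80/20 split.
# "plantvillage/color" is a placeholder path to the RGB variant of the dataset.
import torch
from torchvision import datasets, transforms

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),                     # match VGG16/MobileNet input size
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],   # ImageNet statistics
                         std=[0.229, 0.224, 0.225]),
])

# Expects one sub-folder per class (38 classes of healthy/diseased leaves).
dataset = datasets.ImageFolder("plantvillage/color", transform=preprocess)
n_train = int(0.8 * len(dataset))
train_set, val_set = torch.utils.data.random_split(
    dataset, [n_train, len(dataset) - n_train],
    generator=torch.Generator().manual_seed(42))       # reproducible split

train_loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)
val_loader = torch.utils.data.DataLoader(val_set, batch_size=32)
print(f"{len(dataset.classes)} classes, {len(train_set)} train / {len(val_set)} val images")
```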
Figure 6. Some sample images from the PlantVillage dataset [92]. (a) Pepper bell—bacterial spot, (b) Pepper bell—healthy, (c) Potato—Early blight, (d) Potato—healthy, (e) Potato—late blight, (f) Tomato—target spot, (g) Tomato—tomato mosaic virus, (h) Tomato—tomato yellow leaf curl virus, (i) Tomato—bacterial spot, (j) Tomato—early blight, (k) Tomato—healthy, (l) Tomato—late blight.
Table 9. Summary of the PlantVillage dataset [93].
Species | Diseases | Healthy Categories | Number of Images
Apple | 3 | 1 | 3,172
Blueberry | 0 | 1 | 1502
Cherry | 1 | 1 | 1906
Corn | 3 | 1 | 3852
Grape | 3 | 1 | 4063
Orange | 1 | 0 | 5507
Peach | 1 | 1 | 2657
Bell Pepper | 1 | 1 | 2475
Potato | 2 | 1 | 2152
Raspberry | 0 | 1 | 371
Soybean | 0 | 1 | 5090
Squash | 2 | 0 | 1835
Strawberry | 1 | 1 | 1565
Tomato | 9 | 1 | 18,162

4.1.3. PlantDoc Dataset

In 2020, the PlantDoc dataset [16] was introduced to overcome the limitations of the PlantVillage dataset, which primarily features images captured under controlled conditions. While the controlled environment of the PlantVillage dataset ensures consistency, it limits the dataset’s effectiveness for real-world disease detection, where plant images often include multiple leaves amidst diverse background conditions and varying lighting. In contrast, the PlantDoc dataset was specifically designed to address these challenges, consisting of 2598 images spanning 13 plant species and 27 classes, of which 17 represent diseases and 10 represent healthy leaves. Notably, PlantDoc is the first dataset to provide data captured in non-controlled environments, enhancing its utility for detecting plant diseases in realistic agricultural settings.
The dataset not only provides a benchmark for disease detection but also demonstrates the complexities of natural settings. For instance, Figure 7 illustrates the statistical breakdown of leaf diseases within the PlantDoc dataset, providing a comprehensive overview of its class distribution. Additionally, Figure 8 compares sample images captured under laboratory and field conditions, highlighting the dataset’s capacity to reflect real-world scenarios, such as diverse backgrounds and fluctuating lighting. These features make PlantDoc an invaluable resource for practical applications in agricultural disease detection.

4.1.4. FieldPlant Dataset

In 2023, the FieldPlant dataset [17] was introduced to address the limitations of the PlantVillage [14,15] and PlantDoc [16] datasets. The PlantVillage dataset, which is almost always used in plant disease detection research, is mostly composed of leaf images taken under controlled conditions. While this maintains consistency, it limits the dataset’s applicability in real-world scenarios. The PlantDoc dataset, on the other hand, combines both field-captured and web-sourced images collected from diverse environments to reflect real-world variability in lighting, background, and leaf conditions [16]. While some images were gathered from online sources, the original PlantDoc paper emphasizes that many were contributed directly by agricultural researchers and practitioners to better represent natural farm settings.
The FieldPlant dataset does not have these limitations, as it consists of photos captured directly on farms. The dataset contains 5170 annotated images of leaves collected from Cameroonian plantations, with a significant focus on three tropical crops: maize, cassava, and tomatoes. These images were precisely annotated using the Roboflow platform and classified into 27 disease classes by a group of plant pathologists. In total, the dataset has 8629 different leaf annotations, creating a robust source for training and evaluating machine learning models.
Images were captured by smartphone cameras with a resolution of 4608 × 3456 pixels (4:3 aspect ratio). Figure 9 shows the areas in Cameroon where the FieldPlant images were taken, along with the environmental diversity and conditions of those areas. Figure 10 shows the statistics of the dataset class distribution. Sample images include a corn single-leaf image depicted in Figure 11 and a tomato multiple-leaf image shown in Figure 12, showing that the dataset is relevant to both single-leaf and multi-leaf analysis. All of this makes the FieldPlant dataset well suited for advancing plant disease detection in tropical crop systems.

4.2. Deep Learning Models

Deep learning (DL) algorithms utilize deep neural networks with multiple hidden layers to imitate the operations of the human cortex [94]. Convolutional neural networks (CNNs), a class of deep learning algorithms, are well suited to large datasets and to extracting complex features from 2D images [95]. Deep learning architectures have been instrumental in automating plant disease detection, weed detection, and crop classification. Their design characteristics influence how they perform in agricultural imaging.

4.2.1. VGG16

In 2015, VGG-16 [18], a deep CNN, was applied to small datasets such as CIFAR-10, showing its ability to handle small-scale image classification tasks. The research shows how models built for large datasets can be modified to handle smaller ones, stressing the challenges that small datasets pose, such as overfitting and feature vanishing, while leveraging the power of the VGG-16 architecture.
The input size was changed to match the CIFAR-10 dataset, whose images are 32 × 32 pixels. VGG-16 consists of 13 convolution layers grouped into five sets, followed by 3 fully connected layers, but in this study the architecture was modified. The two 4096-dimensional fully connected layers were shrunk to one 100-dimensional layer, which helped decrease the number of parameters and avoid overfitting. A filter size of 3 × 3, with a stride of 1 and a pooling region of 2 × 2, was applied without overlap throughout the network.
To improve performance, batch normalization was added before each non-linearity layer, as it mitigates internal covariate shift, which otherwise slows convergence. This stabilized training, resulting in faster convergence, lower error rates, and reduced overfitting. Another modification was the use of strong dropout settings in both the fully connected layers and the convolution layers; the dropout rate varied across convolution layers, with deeper layers assigned higher rates. Dropout and batch normalization, when used together, are effective in preventing overfitting on small datasets.
Various activation functions were explored, but no significant performance impacts were observed. The authors suggested that fine-tuning the negative slope of leaky ReLU could lead to further optimization.
When it comes to acceleration techniques, the authors tried replacing the 3 × 3 convolution layers with 5 × 5 layers. This modification decreased computational time but increased the error rate, highlighting the trade-off between speed and accuracy.
The study successfully shows that VGG-16 can be modified to perform well with small datasets like CIFAR-10. Modifications such as batch normalization and strong dropout settings help avoid challenges such as overfitting and feature vanishing. See Figure 13 for the full architecture.
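The hedged sketch below illustrates the reported modifications (batch normalization before each non-linearity, dropout inside the convolutional blocks, and a single reduced fully connected layer) in PyTorch; the channel widths and dropout rates are illustrative, and only a subset of the 13 convolution layers is shown.

```python
# Hedged sketch of small-dataset VGG-style blocks with BN + dropout and one
# reduced 100-dimensional FC layer; not the authors' exact configuration.
import torch.nn as nn


def vgg_block(in_ch: int, out_ch: int, n_convs: int, dropout: float) -> nn.Sequential:
    layers = []
    for i in range(n_convs):
        layers += [
            nn.Conv2d(in_ch if i == 0 else out_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_ch),       # BN placed before the non-linearity
            nn.ReLU(inplace=True),
            nn.Dropout2d(dropout),        # stronger dropout in deeper blocks
        ]
    layers.append(nn.MaxPool2d(kernel_size=2, stride=2))   # non-overlapping 2x2 pooling
    return nn.Sequential(*layers)


model = nn.Sequential(
    vgg_block(3, 64, n_convs=2, dropout=0.3),    # 32x32 -> 16x16 (CIFAR-10 input)
    vgg_block(64, 128, n_convs=2, dropout=0.4),  # 16x16 -> 8x8
    vgg_block(128, 256, n_convs=3, dropout=0.5), # 8x8  -> 4x4
    nn.Flatten(),
    nn.Linear(256 * 4 * 4, 100),                 # single reduced FC layer (was 2 x 4096)
    nn.ReLU(inplace=True),
    nn.Dropout(0.5),
    nn.Linear(100, 10),                          # CIFAR-10 classes
)
```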
Figure 13. Architecture of VGGNet [96].
On plant disease datasets, VGG-16 achieved a classification accuracy of 12.75% when trained on the full PlantVillage dataset and tested on the full PlantDoc dataset, 40.3% when trained and evaluated on PlantDoc (80% training and 20% testing), and 80.54% on FieldPlant (80% training and 20% testing). These results illustrate the model’s behavior across different dataset complexities, with particularly strong performance on datasets that reflect real-world conditions, such as FieldPlant, as shown in Table 10.
VGG16 helps advance precision agriculture in disease classification tasks where high accuracy is needed and image complexity is moderate. Its many convolution layers make it good at capturing fine spatial details; for example, it achieved 95.62% accuracy in detecting rice plant diseases under transfer learning [97]. However, those same layers lead to slower inference and high memory usage, making deployment on edge devices more challenging [98]. Due to its deep structure and large number of parameters, it performs best on high-quality datasets like PlantVillage but struggles with generalization to real-field conditions.

4.2.2. InceptionV3

In 2017, InceptionV3 [20], a deep and efficient convolutional neural network, was introduced specifically for image classification tasks. The architecture builds upon earlier Inception models, integrating advanced design principles and optimizations to enhance computational efficiency and predictive performance across diverse applications. Notable innovations include the factorization of convolutions, where larger spatial filters (e.g., 5 × 5 or 7 × 7) are replaced by sequences of smaller 3 × 3 convolutions, reducing computational costs without compromising the network’s representational power. Additionally, asymmetric convolutions replace traditional n × n convolutions with a 1 × n convolution followed by an n × 1 convolution, significantly lowering computational expense, particularly for medium grid sizes (e.g., 12 × 12 to 20 × 20).
Auxiliary classifiers, initially introduced in earlier Inception models to address vanishing gradient problems, serve as regularizers in InceptionV3, improving overall network performance during training. Furthermore, efficient grid size-reduction techniques replace traditional methods, such as pooling followed by convolution, with parallel stride-2 blocks: one employing a pooling layer and the other a convolutional layer. By concatenating their outputs, the network reduces grid size without introducing representational bottlenecks, effectively balancing computational efficiency and performance.
The architecture of InceptionV3 shown in Figure 14 represents a significant improvement over previous benchmarks. It replaces 7 × 7 convolutions with three sequential 3 × 3 convolutions and includes three traditional Inception modules operating on a 35 × 35 grid with 288 filters each. Using grid reduction techniques, it transitions to a 17 × 17 grid with 768 filters, followed by five factorized inception modules that reduce the dimensions further, to an 8 × 8 × 1280 grid. At the coarsest 8 × 8 level, the architecture employs two additional Inception modules, culminating in a concatenated output filter bank size of 2048. Despite being 42 layers deep, InceptionV3 maintains a computational cost only 2.5 times that of GoogLeNet [99] while being significantly more efficient than VGGNet [18].
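The following short sketch illustrates the asymmetric factorization idea described above, replacing an n × n convolution with a 1 × n convolution followed by an n × 1 convolution; the channel count and n = 7 are illustrative choices rather than InceptionV3's exact configuration.

```python
# Hedged sketch of asymmetric convolution factorization (1 x n followed by n x 1).
import torch
import torch.nn as nn


class AsymmetricConv(nn.Module):
    """Approximates an n x n convolution with a 1 x n followed by an n x 1."""

    def __init__(self, channels: int, n: int = 7) -> None:
        super().__init__()
        self.conv_1xn = nn.Conv2d(channels, channels, kernel_size=(1, n),
                                  padding=(0, n // 2))
        self.conv_nx1 = nn.Conv2d(channels, channels, kernel_size=(n, 1),
                                  padding=(n // 2, 0))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.conv_nx1(self.conv_1xn(x))


x = torch.rand(1, 192, 17, 17)                 # feature map at the 17 x 17 stage
print(AsymmetricConv(192, n=7)(x).shape)       # spatial size preserved: (1, 192, 17, 17)
# Parameter comparison: a 7x7 conv needs 49*C*C weights; the factorized pair needs 2*7*C*C.
```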
When applied to plant disease datasets, InceptionV3 demonstrated its versatility and effectiveness. It achieved a classification accuracy of 14.25% when trained on the full PlantVillage dataset and tested on the full PlantDoc dataset, 51.27% when trained and evaluated on PlantDoc (80% training and 20% testing), and 82.54% on FieldPlant (80% training and 20% testing) [17]. These results underscore the model’s ability to adapt to varying dataset complexities, excelling in datasets with realistic field conditions, such as FieldPlant, as shown in Table 11.
InceptionV3 is more suitable for sophisticated feature extraction and handles scale variation and differing feature sizes better. Its architecture helps capture both broad context and minute details [100]. In recent cassava disease classification tasks, as well as in edge computing settings for thermal imaging, it was among the models that achieved strong trade-offs between accuracy and inference performance when deployed on edge devices [98]. InceptionV3 is still more computationally expensive than MobileNet, but less so than very large and deep models. Its multi-scale convolutional filters allow it to capture complex plant structures under different natural lighting conditions (see Figure 14).
Figure 14. InceptionV3 architecture [101].

4.2.3. MobileNet

In 2017, MobileNet [19], a lightweight and efficient deep learning model, made its debut specifically for image classification tasks on resource-constrained devices.
The input size for the baseline MobileNet model is 224 × 224 pixels. However, the resolution can be adjusted using the resolution multiplier (ρ) hyperparameter, enabling reduced-computation MobileNet variants with input resolutions of 192 × 192, 160 × 160, or 128 × 128. This adaptability allows MobileNet to be tailored to various computational constraints and application requirements.
The MobileNet architecture shown in Figure 15 is based mainly on depthwise separable convolutions, a factorization technique that reduces both computation and model size. The standard convolution operation is separated into two steps: a depthwise convolution that filters each input channel, and a pointwise (1 × 1) convolution that combines the filtered outputs. This factorization enables a substantial reduction in computation, achieving an efficiency improvement of 8 to 9 times compared to standard convolutions, with minimal loss in accuracy.
Figure 15. MobileNet architecture [102].
The architecture comprises 28 layers, treating depthwise and pointwise convolutions as separate layers. The first layer performs a full convolution, while all subsequent layers (except the final fully connected layer) are followed by batch normalization and ReLU nonlinearity. Down-sampling is achieved through strided depthwise convolutions and the first convolutional layer. An average pooling layer reduces spatial resolution before the fully connected layer, further enhancing computational efficiency.
MobileNet incorporates additional enhancements over traditional architectures. Its 1 × 1 pointwise convolutions map directly onto highly optimized general matrix multiplication (GEMM) routines, further improving efficiency. The introduction of the width multiplier (α) and resolution multiplier (ρ) hyperparameters offers flexibility, allowing users to create smaller and faster MobileNet versions without designing a new architecture. This adaptability makes MobileNet highly suitable for a range of tasks, including image classification, object detection, and fine-grained recognition, while maintaining a significantly smaller size and computational cost compared to models such as VGG16 [18], GoogleNet [99], and AlexNet [103].
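A minimal PyTorch sketch of the depthwise separable building block described above is shown below; the channel sizes and stride are examples, not the exact MobileNet configuration.

```python
# Hedged sketch of a depthwise separable convolution block: depthwise (groups =
# in_channels) 3x3 convolution followed by a 1x1 pointwise convolution.
import torch
import torch.nn as nn


def depthwise_separable(in_ch: int, out_ch: int, stride: int = 1) -> nn.Sequential:
    return nn.Sequential(
        # Depthwise: one 3x3 filter per input channel.
        nn.Conv2d(in_ch, in_ch, kernel_size=3, stride=stride, padding=1,
                  groups=in_ch, bias=False),
        nn.BatchNorm2d(in_ch),
        nn.ReLU(inplace=True),
        # Pointwise: 1x1 convolution mixes channels and sets the output width.
        nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )


block = depthwise_separable(32, 64, stride=2)   # strided depthwise does the down-sampling
x = torch.rand(1, 32, 112, 112)
print(block(x).shape)                            # torch.Size([1, 64, 56, 56])
```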
When applied to plant disease datasets, MobileNet demonstrated notable performance. It achieved a classification accuracy of 16.75% when trained on the full PlantVillage dataset and tested on the full PlantDoc dataset, 60.14% on PlantDoc (80% training and 20% testing), and 82.9% on FieldPlant (80% training and 20% testing). These results illustrate MobileNet’s ability to balance computational efficiency and performance, particularly in datasets with realistic field conditions, such as FieldPlant, as shown in Table 12.

4.3. Applied Analysis of AI Models in Agriculture

Deep learning models have demonstrated wide applicability across agricultural domains beyond disease detection. Table 13 shows some of those uses.
Table 13. Comparison of deep learning models in precision agriculture.
Model | Strengths | Applications | Cited Work
VGG16 | High accuracy and robust feature extraction; strong transfer learning capabilities. | Disease classification; soil fertility mapping. | [15,104]
InceptionV3 | Multi-scale feature extraction; good accuracy–efficiency trade-off. | Weed detection; yield prediction. | [105,106]
MobileNet | Lightweight and fast, suitable for edge devices with limited compute power. | Pest detection; real-time field monitoring. | [107,108]

5. AI and Thermal Imaging in Precision Agriculture

Thermal imaging serves as an essential tool in scientific research due to its ability to provide unique thermal information that is not available through standard RGB images [109]. Unlike RGB images, where pixel intensity represents color values (red, green, or blue), thermal images encode temperature as pixel intensity, offering a detailed thermal perspective of objects [110,111,112]. This capability is particularly valuable in agriculture, where leaf temperature is a standard indicator of plant water stress. By analyzing thermal data, it becomes possible to distinguish between stressed and unstressed plants, making thermal imaging a vital resource in monitoring plant health. Table 14 shows how thermal imaging outperforms RGB, which is a traditional monitoring method for precision agriculture [113,114].
Thermal images, paired with their accompanying colorbars, deliver abundant and precise thermal details that are crucial for accurate analysis; see Figure 16.
In contrast, RGB images provide visual representations but lack the temperature-based insights that thermal imaging offers; see Figure 17. These unique advantages also make thermal imaging indispensable for broader applications, such as object detection [115], object classification [116], and scene reconstruction [117].
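As a simple illustration of how temperature-encoded pixels can be turned into a stress indicator, the hedged sketch below computes the Crop Water Stress Index (CWSI) from a radiometric frame; the 16-bit count-to-Celsius scaling, the wet/dry reference temperatures, and the 0.5 threshold are assumptions that depend on the camera, crop, and conditions.

```python
# Hedged sketch: flagging water-stressed canopy pixels from a radiometric thermal
# frame using CWSI = (Tc - Twet) / (Tdry - Twet). All constants are illustrative.
import numpy as np

# Stand-in for a 16-bit radiometric frame (many cameras store centikelvin counts).
raw = np.random.randint(29500, 31500, size=(240, 320), dtype=np.uint16)
canopy_temp_c = raw / 100.0 - 273.15          # convert counts -> degrees Celsius

T_WET, T_DRY = 22.0, 38.0                     # wet/dry reference temperatures for the day
cwsi = np.clip((canopy_temp_c - T_WET) / (T_DRY - T_WET), 0.0, 1.0)

stressed_fraction = float((cwsi > 0.5).mean())  # share of pixels above the threshold
print(f"mean CWSI = {cwsi.mean():.2f}, stressed canopy fraction = {stressed_fraction:.1%}")
```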
Table 14. Comparison of thermal imaging (TI) and RGB imaging in precision agriculture.
Aspect | Thermal Imaging (TI) | RGB Imaging | Cited Work
Water Stress Detection (Wheat) | Achieved 98.4% accuracy with ResNet50 under varied irrigation. | Reached 96.9%, less reliable under light variation. | [118]
Water Stress Detection (Okra) | 84–88% accuracy using TI + DL models. | 74–79% using RGB data only. | [119]
Pest Infestation (Maize) | Detected +3.3 °C canopy rise in FAW-infested crops before visible symptoms. | Detected only post-damage discoloration. | [120]
Stored Grain Infestation | Identified internal heat from insect activity before surface damage. | Failed to detect internal infestations. | [121]

5.1. The Role of AI and Thermal Imaging in Precision Agriculture

As of 2024, the integration of artificial intelligence (AI) and machine learning (ML) into agriculture has revolutionized the detection and management of plant pests and diseases [122,123]. Mirzaev and Kiourt et al. highlighted the transformative role of thermal imaging (TI) in precision agriculture, particularly when paired with advanced AI models. TI captures subtle variations in plant temperature that are imperceptible to the human eye, enabling the identification of early-stage infestations and infections. For example, AI-driven thermal analysis is used to detect aphid infestations, where pests drain sap and endanger crop health [124]. Likewise, spider mites in crops such as cotton are recognized by the temperature changes caused by their feeding activity [125]. These early detections significantly reduce crop losses [126]. AI alone, or even when combined with traditional perception methods such as RGB imaging, cannot detect certain diseases that show no visible symptoms; thermal imaging can capture these cases and often detects them earlier.
Thermal imaging also helps in detecting fungal diseases, which can develop in crops such as wheat, by revealing small variations in temperature even before visual symptoms appear. AI and deep learning (DL) algorithms process these data to locate the affected areas precisely, giving farmers time to take preventive measures. Figure 18 shows an example of how thermal imaging works, converting a normal RGB image into a thermal image.
Figure 18. Description of thermal imaging conversion [127].
AI advances the abilities of thermal imaging by automating the analysis and classification of data, delivering solutions specific to different crop and pest types. Convolutional neural networks have shown great accuracy in understanding thermal data for pest and disease identification.
AI-driven thermal imaging systems also help in optimizing irrigation by detecting water stress in crops, assessing plant growth stages, and predicting harvest readiness.
Together, AI and TI represent a paradigm shift in agricultural diagnostics. They address the increasing challenges of pest and disease control in modern farming and also advance precision agriculture by improving efficiency and promoting environmental sustainability.

5.2. AI and Thermal Imaging in Post-Harvest Fruit Quality Assessment and Pest Detection

Furthermore, Pugazhendi et al. emphasized the transformative role of artificial intelligence (AI) in conjunction with thermal imaging (TI) for fruit quality assessment and pest detection. Thermal imaging, as a non-destructive and non-invasive tool, leverages the unique thermal signatures of fruits affected by pests or diseases, which result from alterations in their metabolic activity. By detecting subtle temperature variations, thermal cameras generate precise images that aid in the early identification of infested or infected produce. For instance, TI has been employed to monitor the surface temperatures of apples stored in plastic and cardboard containers [128], achieving exceptionally low root mean square error (RMSE) values of 0.410 °C and 0.086 °C, respectively, for batches stored under controlled ambient conditions (22 ± 1 °C). This demonstrates TI’s effectiveness in maintaining optimal storage conditions and preventing post-harvest losses [129].
Further applications include optimizing post-harvest treatments, as shown in a study on Opuntia ficus-indica (cactus pear) [130]. The experiment involved brief cauterization of harvested cladodes at 200 °C for a few seconds to seal wounds and reduce microbial infection, significantly extending shelf life. This controlled thermal treatment highlights TI’s potential for guiding post-harvest process optimization in a safe and species-specific context. Additionally, TI has enhanced fruit detection within orange canopies by combining thermocouples with infrared cameras. A novel fusion of thermal and visible imaging, using fuzzy logic, outperformed traditional Local Pattern Texture (LPT)-based detection methods on an orange canopy dataset, achieving a detection accuracy of 94.3% compared to 88.6% for LPT. This quantitative improvement underscores the effectiveness of multimodal imaging for robust fruit detection and identification [129].
Incorporating AI into thermal imaging workflows expands its utility even further. Machine learning (ML) algorithms analyze large spectral datasets to identify patterns and attributes indicative of fruit quality [131]. Computer vision techniques use captured images to assess quality and detect surface defects like bruises and decay. However, both spectral imaging and computer vision face challenges such as high equipment costs, environmental sensitivity, and computational demands, which can hinder real-time applications. In contrast, TI offers distinct advantages, including its ability to provide consistent and objective data about temperature, moisture content, and other critical fruit characteristics [129].
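As a minimal, hypothetical example of such an ML pipeline, the sketch below trains a classical classifier on simple per-fruit descriptors (e.g., mean surface temperature, temperature variance, mean reflectance); the data are synthetic and the feature set is an assumption, not a cited protocol.

```python
# Minimal sketch: classical ML classifier of fruit quality from tabular features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
# Hypothetical features per fruit: mean surface temperature, temperature variance, mean reflectance.
X = rng.normal(size=(300, 3))
y = (X[:, 1] + 0.5 * X[:, 0] > 0).astype(int)   # synthetic "defective" label for illustration

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("Held-out accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```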
Despite these advancements, Pugazhendi et al. note the need for further research to develop standardized protocols and algorithms for precise pest and disease detection using TI. Validating thermal signatures, optimizing imaging parameters, and integrating additional detection methods remain critical areas for investigation. Nonetheless, TI, when coupled with AI, holds immense promise as a rapid and reliable method to reduce post-harvest losses, ensure produce quality, and revolutionize agricultural practices [129].
Table 15 accompanies this discussion, comparing representative thermal cameras in terms of type, sensitivity, resolution, temperature range, and price.

6. Synthesis and Implications

The integration of artificial intelligence with thermal imaging for precision agriculture has advanced rapidly, delivering substantial gains in pest and disease detection, crop monitoring, and post-harvest management. As the preceding discussion shows, AI models, and deep learning algorithms such as CNNs in particular, have demonstrated strong performance in classifying plant health and identifying subtle plant stress signals.

6.1. Challenges and Limitations

Despite the promising results, several challenges must still be overcome before AI and TI systems can be deployed in large-scale agricultural settings. The high cost of thermal cameras and the need for specialized equipment continue to hinder widespread adoption. In addition, environmental factors such as ambient temperature and humidity can degrade the accuracy of thermal imaging systems. These challenges can be mitigated through further technological advances and the development of cost-effective solutions.
Furthermore, deep learning models, particularly in real-time applications, require high processing power that is not always available in remote agricultural settings. More research on algorithmic efficiency and hardware solutions is needed to overcome these limitations and make AI and TI technologies more accessible to farmers. The domain gap problem is one example: despite high accuracy on controlled datasets, some deep learning models underperform in real-life scenarios because of domain shifts caused by differences in lighting, background, and occlusion. For instance, models trained on PlantVillage achieve 12% to 16% lower accuracy when tested on PlantDoc, underscoring the need for domain adaptation and data augmentation to enhance robustness [15,104].
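One widely used way to narrow this gap is to augment lab-style training images so that they resemble field conditions. The sketch below shows an illustrative augmentation pipeline; the specific transforms and magnitudes are assumptions rather than the recipe of any cited study.

```python
# Illustrative augmentation pipeline for narrowing lab-to-field domain gaps.
from torchvision import transforms

train_transform = transforms.Compose([
    transforms.RandomResizedCrop(224, scale=(0.6, 1.0)),   # vary framing and scale
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(brightness=0.4, contrast=0.4,   # simulate field lighting variation
                           saturation=0.4, hue=0.05),
    transforms.RandomRotation(15),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],       # ImageNet statistics
                         std=[0.229, 0.224, 0.225]),
])
# Applied to lab-style training images so the model sees field-like variation in
# lighting, background, and pose before being evaluated on field datasets.
```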

6.2. Future Work

As AI and thermal imaging become increasingly sophisticated, so too will their uses in precision agriculture. Integrating AI models with complementary sensing technologies, such as multispectral and hyperspectral imaging, is a key direction for retrieving more comprehensive plant-health data in future studies. Establishing standardized protocols for validating thermal signatures and optimizing imaging parameters will also be critical to enhancing system reliability.
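A simple way to realize such multi-sensor integration is feature-level fusion, where separate encoders process thermal and multispectral inputs and their features are concatenated before classification. The sketch below is a minimal illustration with assumed channel counts and layer sizes, not a published architecture.

```python
# Minimal sketch: feature-level fusion of thermal and multispectral inputs.
import torch
import torch.nn as nn

class DualStreamNet(nn.Module):
    def __init__(self, num_classes: int = 3):
        super().__init__()
        def encoder(in_ch: int) -> nn.Sequential:
            return nn.Sequential(
                nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
            )
        self.thermal = encoder(1)     # single-band thermal stream
        self.spectral = encoder(5)    # e.g. five multispectral bands (assumed)
        self.head = nn.Linear(64, num_classes)

    def forward(self, thermal: torch.Tensor, spectral: torch.Tensor) -> torch.Tensor:
        z = torch.cat([self.thermal(thermal).flatten(1),
                       self.spectral(spectral).flatten(1)], dim=1)
        return self.head(z)

model = DualStreamNet()
out = model(torch.randn(4, 1, 64, 64), torch.randn(4, 5, 64, 64))
print(out.shape)  # torch.Size([4, 3])
```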
Most important in this development pipeline, however, is the scalability of these AI-powered systems. Current models tend to succeed in small-scale, tightly controlled environments, and extending them to larger and more varied farming landscapes will require further adaptation. By focusing on these areas, AI and TI systems can become more reliable, economical, and transformative for modern agriculture.

6.3. Conclusions

The integration of artificial intelligence, robotics, and thermal imaging forms a cohesive ecosystem rather than three separate technologies. AI algorithms help robots interpret complex sensory data from thermal and visual modalities, allowing autonomous systems to make decisions for tasks such as disease detection, targeted irrigation, and precision spraying. Thermal imaging specifically boosts robotic perception by detecting early physiological stress responses that are invisible to traditional RGB sensors, providing vital inputs for AI models to analyze plant health. When integrated with robotic platforms, these AI–thermal systems create closed-loop feedback mechanisms in which perception, reasoning, and action work together seamlessly to increase crop productivity.
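A schematic sketch of this closed loop is given below, with perception, reasoning, and action reduced to placeholder functions; all values, thresholds, and actuator calls are hypothetical and serve only to illustrate the control flow, not any cited system.

```python
# Hedged sketch of a perception-reasoning-action loop on a field robot.
import random

def capture_thermal_frame():
    # Placeholder: fake 8x8 grid of canopy temperatures (deg C).
    return [[random.gauss(24.0, 1.5) for _ in range(8)] for _ in range(8)]

def detect_stress(frame, threshold=27.0):
    # Placeholder reasoning step: flag cells warmer than an assumed threshold.
    return [(r, c) for r, row in enumerate(frame)
            for c, temp in enumerate(row) if temp > threshold]

def actuate_sprayer(targets):
    # Placeholder action step: spot-treat the flagged canopy cells.
    for r, c in targets:
        print(f"spot-treating canopy cell ({r}, {c})")

for cycle in range(3):                  # e.g. three passes of a field robot
    frame = capture_thermal_frame()     # perception
    hotspots = detect_stress(frame)     # reasoning
    actuate_sprayer(hotspots)           # action, feeding back into the next pass
```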

Author Contributions

Conceptualization, O.S., A.E., F.F., A.A., M.S., E.N., M.E.-S. and E.K.; Methodology, O.S. and E.K.; Validation, O.S.; Formal analysis, O.S. and E.K.; Data curation, O.S.; Writing—original draft, O.S., A.E., F.F., A.A., M.S., E.N. and E.K.; Visualization, O.S.; Supervision, O.S.; Project administration, O.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Acknowledgments

The researchers acknowledge Ajman University for its support in this research.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. O’Sullivan, J.N. Demographic Delusions: World Population Growth Is Exceeding Most Projections and Jeopardising Scenarios for Sustainable Futures. World 2023, 4, 545–568. [Google Scholar] [CrossRef]
  2. Gebbers, R.; Adamchuk, V.I. Precision Agriculture and Food Security. Science 2010, 327, 828–831. [Google Scholar] [CrossRef]
  3. Villarino, M.; Tejada, M.; Patterson, S. From agricultural statistics to zero hunger: How the 50x2030 Initiative is closing data gaps for SDG2 and beyond. Stat. J. IAOS 2022, 38, 63–73. [Google Scholar] [CrossRef]
  4. Willekens, A.; Temmerman, S.; Wyffels, F.; Pieters, J.G.; Cool, S.R. Development of an Agricultural Robot Taskmap Operation Framework. J. Field Robot. 2025, 42, 1633–1648. [Google Scholar] [CrossRef]
  5. Elkholy, M.; Shalash, O.; Hamad, M.S.; Saraya, M.S. Empowering the grid: A comprehensive review of artificial intelligence techniques in smart grids. In Proceedings of the 2024 International Telecommunications Conference (ITC-Egypt), Cairo, Egypt, 22–25 July 2024; pp. 513–518. [Google Scholar]
  6. Sistler, F. Robotics and intelligent machines in agriculture. IEEE J. Robot. Autom. 1987, 3, 3–6. [Google Scholar] [CrossRef]
  7. Fountas, S.; Mylonas, N.; Malounas, I.; Rodias, E.; Hellmann Santos, C.; Pekkeriet, E. Agricultural Robotics for Field Operations. Sensors 2020, 20, 2672. [Google Scholar] [CrossRef]
  8. Oliveira, L.F.P.; Moreira, A.P.; Silva, M.F. Advances in Agriculture Robotics: A State-of-the-Art Review and Challenges Ahead. Robotics 2021, 10, 52. [Google Scholar] [CrossRef]
  9. Salah, Y.; Shalash, O.; Khatab, E.; Hamad, M.; Imam, S. AI-Driven Digital Twin for Optimizing Solar Submersible Pumping Systems. Inventions 2025, 10, 93. [Google Scholar] [CrossRef]
  10. Sallam, M.; Salah, Y.; Osman, Y.; Hegazy, A.; Khatab, E.; Shalash, O. Intelligent Dental Handpiece: Real-Time Motion Analysis for Skill Development. Sensors 2025, 25, 6489. [Google Scholar] [CrossRef]
  11. Shalash, O. Design and Development of Autonomous Robotic Machine for Knee Arthroplasty. Ph.D. Thesis, University of Strathclyde, Glasgow, UK, 2018. Available online: https://stax.strath.ac.uk/concern/theses/0k225b04p (accessed on 1 November 2025).
  12. Salah, Y.; Shalash, O.; Khatab, E. A lightweight speaker verification approach for autonomous vehicles. Robot. Integr. Manuf. Control. 2024, 1, 15–30. [Google Scholar] [CrossRef]
  13. Singh, D.; Agrawal, N.; Saini, J.; Kumar, M. Foundations of Agricultural AI. In Emerging Smart Agricultural Practices Using Artificial Intelligence; John Wiley & Sons, Ltd.: Hoboken, NJ, USA, 2025; Chapter 5; pp. 87–104. [Google Scholar] [CrossRef]
  14. Ali, A.H.; Youssef, A.; Abdelal, M.; Raja, M.A. An ensemble of deep learning architectures for accurate plant disease classification. Ecol. Inform. 2024, 81, 102618. [Google Scholar] [CrossRef]
  15. Mohanty, S.P.; Hughes, D.P.; Salathe, M. Using deep learning for image-based plant disease detection. Front. Plant Sci. 2016, 7, 215232. [Google Scholar] [CrossRef]
  16. Singh, D.; Jain, N.; Jain, P.; Kayal, P.; Kumawat, S.; Batra, N. PlantDoc: A Dataset for Visual Plant Disease Detection. In Proceedings of the CoDS COMAD 2020: Proceedings of the 7th ACM IKDD CoDS and 25th COMAD, Hyderabad, India, 5–7 January 2020; pp. 249–253. [Google Scholar] [CrossRef]
  17. Moupojou, E.; Tagne, A.; Retraint, F.; Tadonkemwa, A.; Wilfried, D.; Tapamo, H.; Nkenlifack, M. FieldPlant: A Dataset of Field Plant Images for Plant Disease Detection and Classification With Deep Learning. IEEE Access 2023, 11, 35398–35410. [Google Scholar] [CrossRef]
  18. Liu, S.; Deng, W. Very deep convolutional neural network based image classification using small training sample size. In Proceedings of the 2015 3rd IAPR Asian Conference on Pattern Recognition (ACPR), Kuala Lumpur, Malaysia, 3–6 November 2015; pp. 730–734. [Google Scholar] [CrossRef]
  19. Howard, A.G.; Zhu, M.; Chen, B.; Kalenichenko, D.; Wang, W.; Weyand, T.; Andreetto, M.; Adam, H. MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications. arXiv 2017, arXiv:1704.04861. [Google Scholar] [CrossRef]
  20. Szegedy, C.; Vanhoucke, V.; Ioffe, S.; Shlens, J.; Wojna, Z. Rethinking the Inception Architecture for Computer Vision. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 2818–2826. [Google Scholar] [CrossRef]
  21. Łągiewska, M.; Panek-Chwastyk, E. Integrating Remote Sensing and Autonomous Robotics in Precision Agriculture: Current Applications and Workflow Challenges. Agronomy 2025, 15, 2314. [Google Scholar] [CrossRef]
  22. Atefi, A.; Ge, Y.; Pitla, S.; Schnable, J. Robotic Technologies for High-Throughput Plant Phenotyping: Contemporary Reviews and Future Perspectives. Front. Plant Sci. 2021, 12, 611940. [Google Scholar] [CrossRef]
  23. Zhu, H.; Lin, C.; Liu, G.; Wang, D.; Qin, S.; Li, A.; Xu, J.L.; He, Y. Intelligent agriculture: Deep learning in UAV-based remote sensing imagery for crop diseases and pests detection. Front. Plant Sci. 2024, 15, 1435016. [Google Scholar] [CrossRef]
  24. Cho, S.B.; Soleh, H.M.; Choi, J.W.; Hwang, W.H.; Lee, H.; Cho, Y.S.; Cho, B.K.; Kim, M.S.; Baek, I.; Kim, G. Recent Methods for Evaluating Crop Water Stress Using AI Techniques: A Review. Sensors 2024, 24, 6313. [Google Scholar] [CrossRef] [PubMed]
  25. Wang, Y.; Sun, J.; Wu, Z.; Jia, Y.; Dai, C. Application of Non-Destructive Technology in Plant Disease Detection: Review. Agriculture 2025, 15, 1670. [Google Scholar] [CrossRef]
  26. Qingchun, F.; Zou, W.; Fan, P.; Zhang, C.; Wang, X. Design and test of robotic harvesting system for cherry tomato. Int. J. Agric. Biol. Eng. 2018, 11, 96–100. [Google Scholar] [CrossRef]
  27. Xiong, Y.; Peng, C.; Grimstad, L.; From, P.; Isler, V. Development and field evaluation of a strawberry harvesting robot with a cable-driven gripper. Comput. Electron. Agric. 2019, 157, 392–402. [Google Scholar] [CrossRef]
  28. Hayashi, S.; Yamamoto, S.; Tsubota, S.; Ochiai, Y.; Kobayashi, K.; Kamata, J.; Kurita, M.; Inazumi, H.; Peter, R. Automation technologies for strawberry harvesting and packing operations in Japan. J. Berry Res. 2014, 4, 19–27. [Google Scholar] [CrossRef]
  29. Hayashi, S.; Yamamoto, S.; Saito, S.; Ochiai, Y.; Kamata, J.; Kurita, M.; Yamamoto, K. Field Operation of a Movable Strawberry-harvesting Robot using a Travel Platform. Jpn. Agric. Res. Q. JARQ 2014, 48, 307–316. [Google Scholar] [CrossRef]
  30. Arad, B.; Balendonck, J.; Barth, R.; Ben-Shahar, O.; Edan, Y.; Hellström, T.; Hemming, J.; Kurtser, P.; Ringdahl, O.; Tielen, T.; et al. Development of a sweet pepper harvesting robot. J. Field Robot. 2020, 37, 1027–1039. [Google Scholar] [CrossRef]
  31. Lehnert, C.; English, A.; Mccool, C.; Tow, A.; Perez, T. Autonomous Sweet Pepper Harvesting for Protected Cropping Systems. IEEE Robot. Autom. Lett. 2017, 2, 872–879. [Google Scholar] [CrossRef]
  32. Birrell, S.; Hughes, J.; Cai, J.Y.; Iida, F. A field-tested robotic harvesting system for iceberg lettuce. J. Field Robot. 2020, 37, 225–245. [Google Scholar] [CrossRef] [PubMed]
  33. Said, H.; Mohamed, S.; Shalash, O.; Khatab, E.; Aman, O.; Shaaban, R.; Hesham, M. Forearm Intravenous Detection and Localization for Autonomous Vein Injection Using Contrast-Limited Adaptive Histogram Equalization Algorithm. Appl. Sci. 2024, 14, 7115. [Google Scholar] [CrossRef]
  34. Sepulveda, D.; Fernandez, R.; Navas, E.; Armada, M.; Gonzalez-de Santos, P. Robotic Aubergine Harvesting Using Dual-Arm Manipulation. IEEE Access 2020, 8, 121889–121904. [Google Scholar] [CrossRef]
  35. Irie, N.; Taguchi, N.; Horie, T.; Ishimatsu, T. Asparagus harvesting robot coordinated with 3-D vision sensor. In Proceedings of the 2009 IEEE International Conference on Industrial Technology, Churchill, VIC, Australia, 10–13 February 2009. [Google Scholar] [CrossRef]
  36. Wang, Y.; Yang, Y.; Yang, C.; Zhao, H.; Chen, G.; Zhang, Z.; Fu, S.; Zhang, M.; Xu, H. End-effector with a bite mode for harvesting citrus fruit in random stalk orientation environment. Comput. Electron. Agric. 2019, 157, 454–470. [Google Scholar] [CrossRef]
  37. Shalash, O.; Rowe, P. Computer-assisted robotic system for autonomous unicompartmental knee arthroplasty. Alex. Eng. J. 2023, 70, 441–451. [Google Scholar] [CrossRef]
  38. Vrochidou, E.; Tziridis, K.; Nikolaou, A.; Kalampokas, T.; Papakostas, G.A.; Pachidis, T.P.; Mamalis, S.; Koundouras, S.; Kaburlasos, V.G. An Autonomous Grape-Harvester Robot: Integrated System Architecture. Electronics 2021, 10, 1056. [Google Scholar] [CrossRef]
  39. Roshanianfard, A.; Noguchi, N. Pumpkin harvesting robotic end-effector. Comput. Electron. Agric. 2020, 174, 105503. [Google Scholar] [CrossRef]
  40. Yasser, M.; Shalash, O.; Ismail, O. Optimized Decentralized Swarm Communication Algorithms for Efficient Task Allocation and Power Consumption in Swarm Robotics. Robotics 2024, 13, 66. [Google Scholar] [CrossRef]
  41. Grimstad, L.; Pham, C.D.; Phan, H.T.; From, P.J. On the design of a low-cost, light-weight, and highly versatile agricultural robot. In Proceedings of the 2015 IEEE International Workshop on Advanced Robotics and its Social Impacts (ARSO), Lyon, France, 30 June–2 July 2015. [Google Scholar] [CrossRef]
  42. Zhang, X.; He, L.; Karkee, M.; Whiting, M.; Zhang, Q. Field Evaluation of Targeted Shake-and-Catch Harvesting Technologies for Fresh Market Apple. Trans. ASABE 2020, 63, 1759–1771. [Google Scholar] [CrossRef]
  43. Mu, L.; Cui, G.; Liu, Y.; Cui, Y.; Fu, L.; Gejima, Y. Design and simulation of an integrated end-effector for picking kiwifruit by robot. Inf. Process. Agric. 2020, 7, 58–71. [Google Scholar] [CrossRef]
  44. He, L.; Zhang, X.; Ye, Y.; Karkee, M.; Zhang, Q. Effect of Shaking Location and Duration on Mechanical Harvesting of Fresh Market Apples. Appl. Eng. Agric. 2019, 35, 175–183. [Google Scholar] [CrossRef]
  45. Fue, K.; Porter, W.; Barnes, E.; Rains, G. Visual row detection using pixel-based algorithm and stereo camera for cotton-picking robot. In Proceedings of the 2019 Beltwide Cotton Conferences, New Orleans, LA, USA, 8–10 January 2019. [Google Scholar]
  46. Fue, K.; Rains, G.; Porter, W. Real-Time 3D Measurement of Cotton Boll Positions Using Machine Vision under Field Conditions. In Proceedings of the 2018 Beltwide Cotton Conferences, San Antonio, TX, USA, 3–5 January 2018; pp. 43–54. [Google Scholar]
  47. Rains, G.; Faircloth, A.; Thai, C.; Raper, R. Evaluation of a simple pure pursuit path-following algorithm for an autonomous, articulated-steer vehicle. Appl. Eng. Agric. 2014, 30, 367–374. [Google Scholar] [CrossRef]
  48. Roshanianfard, A.; Noguchi, N.; Kamata, T. Design and performance of a robotic arm for farm use. Int. J. Agric. Biol. Eng. 2019, 12, 146–158. [Google Scholar] [CrossRef]
  49. Zhang, K.; Lammers, K.; Chu, P.; Li, Z.; Lu, R. System design and control of an apple harvesting robot. Mechatronics 2021, 79, 102644. [Google Scholar] [CrossRef]
  50. Maja, J.M.; Polak, M.; Burce, M.E.; Barnes, E. CHAP: Cotton-Harvesting Autonomous Platform. AgriEngineering 2021, 3, 199–217. [Google Scholar] [CrossRef]
  51. Abouelfarag, A.; Elshenawy, M.A.; Khattab, E.A. Accelerating sobel edge detection using compressor cells over FPGAs. In Smart Technology Applications in Business Environments; IGI Global Scientific Publishing: Hershey, PA, USA, 2017; pp. 1–21. [Google Scholar]
  52. Tejada, V.F.; Stoelen, M.; Kusnierek, K.; Heiberg, N.; Korsaeth, A. Proof-of-concept robot platform for exploring automated harvesting of sugar snap peas. Precis. Agric. 2017, 18, 952–972. [Google Scholar] [CrossRef]
  53. Zhao, Y.; Gong, L.; Liu, C.; Huang, Y. Dual-arm Robot Design and Testing for Harvesting Tomato in Greenhouse. IFAC-PapersOnLine 2016, 49, 161–165. [Google Scholar] [CrossRef]
  54. Ling, X.; Zhao, Y.; Gong, L.; Liu, C.; Wang, T. Dual-arm cooperation and implementing for robotic harvesting tomato using binocular vision. Robot. Auton. Syst. 2019, 114, 134–143. [Google Scholar] [CrossRef]
  55. Grimstad, L.; Zakaria, R.; Dung Le, T.; From, P.J. A Novel Autonomous Robot for Greenhouse Applications. In Proceedings of the 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, 1–5 October 2018. [Google Scholar] [CrossRef]
  56. Xiong, Y.; Ge, Y.; Grimstad, L.; From, P. An autonomous strawberry-harvesting robot: Design, development, integration, and field evaluation. J. Field Robot. 2019, 37, 202–224. [Google Scholar] [CrossRef]
  57. Young, S.; Kayacan, E.; Peschel, J. Design and field evaluation of a ground robot for high-throughput phenotyping of energy sorghum. Precis. Agric. 2019, 20, 697–722. [Google Scholar] [CrossRef]
  58. De Preter, A.; Anthonis, J.; De Baerdemaeker, J. Development of a Robot for Harvesting Strawberries. IFAC-PapersOnLine 2018, 51, 14–19. [Google Scholar] [CrossRef]
  59. Guebsi, R.; Mami, S.; Chokmani, K. Drones in Precision Agriculture: A Comprehensive Review of Applications, Technologies, and Challenges. Drones 2024, 8, 686. [Google Scholar] [CrossRef]
  60. Mail, M.F.; Maja, J.M.; Marshall, M.; Cutulle, M.; Miller, G.; Barnes, E. Agricultural Harvesting Robot Concept Design and System Components: A Review. AgriEngineering 2023, 5, 777–800. [Google Scholar] [CrossRef]
  61. Bac, C.W.; Van Henten, E.; Hemming, J.; Edan, Y. Harvesting Robots for High-Value Crops: State-of-the-Art Review and Challenges Ahead. J. Field Robot. 2014, 31, 888–911. [Google Scholar] [CrossRef]
  62. Shalash, O.; Sakr, A.; Salem, Y.; Abdelhadi, A.; Elsayed, H.; El-Shaer, A. Position and orientation analysis of Jupiter robot arm for navigation stability. IAES Int. J. Robot. Autom. (IJRA) 2025, 14, 1–10. [Google Scholar] [CrossRef]
  63. Abouelfarag, A.; El-Shenawy, M.; Khatab, E. High speed edge detection implementation using compressor cells over rsda. In Proceedings of the International Conference on Interfaces and Human Computer Interaction 2016, Game and Entertainment Technologies 2016 and Computer Graphics, Visualization, Computer Vision and Image Processing 2016-Part of the Multi Conference on Computer Science and Information Systems 2016, Madeira, Portugal, 2–4 July 2016; pp. 206–214. [Google Scholar]
  64. Jones, M.H.; Bell, J.; Dredge, D.; Seabright, M.; Scarfe, A.; Duke, M.; MacDonald, B. Design and testing of a heavy-duty platform for autonomous navigation in kiwifruit orchards. Biosyst. Eng. 2019, 187, 129–146. [Google Scholar] [CrossRef]
  65. Khare, D.; Cherussery, S.; Mohan, S. Investigation on design and control aspects of a new autonomous mobile agricultural fruit harvesting robot. Proc. Inst. Mech. Eng. Part C J. Mech. Eng. Sci. 2022, 236, 9966–9977. [Google Scholar] [CrossRef]
  66. Jun, J.; Kim, J.; Seol, J.; Kim, J.; Son, H.I. Towards an Efficient Tomato Harvesting Robot: 3D Perception, Manipulation, and End-Effector. IEEE Access 2021, 9, 17631–17640. [Google Scholar] [CrossRef]
  67. Radočaj, D.; Plaščak, I.; Jurišić, M. Global Navigation Satellite Systems as State-of-the-Art Solutions in Precision Agriculture: A Review of Studies Indexed in the Web of Science. Agriculture 2023, 13, 1417. [Google Scholar] [CrossRef]
  68. Thölert, S.; Steigenberger, P.; Montenbruck, O.; Meurer, M. Signal analysis of the first GPS III satellite. GPS Solutions 2019, 23, 92. [Google Scholar] [CrossRef]
  69. Hein, G. Status, perspectives and trends of satellite navigation. Satell. Navig. 2020, 1, 22. [Google Scholar] [CrossRef] [PubMed]
  70. Wang, M.; Lu, X.; Rao, Y. GNSS Signal Distortion Estimation: A Comparative Analysis of L5 Signal from GPS II and GPS III. Appl. Sci. 2022, 12, 3791. [Google Scholar] [CrossRef]
  71. Duan, B.; Hugentobler, U.; Hofacker, M.; Selmke, I. Improving solar radiation pressure modeling for GLONASS satellites. J. Geod. 2020, 94, 72. [Google Scholar] [CrossRef]
  72. Wu, J.; Li, X.; Yuan, Y.; Li, X.; Zheng, H.; Wei, Z. Estimation of GLONASS inter-frequency clock bias considering the phase center offset differences on the L3 signal. GPS Solut. 2023, 27, 130. [Google Scholar] [CrossRef]
  73. Ogutcu, S. Assessing the contribution of Galileo to GPS+GLONASS PPP: Towards full operational capability. Measurement 2019, 151, 107143. [Google Scholar] [CrossRef]
  74. Fernandez-Hernandez, I.; Chamorro-Moreno, A.; Cancela-Diaz, S.; Calle-Calle, J.D.; Zoccarato, P.; Blonski, D.; Senni, T.; de Blas, F.J.; Hernández, C.; Simón, J.; et al. Galileo high accuracy service: Initial definition and performance. GPS Solut. 2022, 26, 65. [Google Scholar] [CrossRef]
  75. Wang, N.; Li, Z.; Montenbruck, O.; Tang, C. Quality assessment of GPS, Galileo and BeiDou-2/3 satellite broadcast group delays. Adv. Space Res. 2019, 64, 1764–1779. [Google Scholar] [CrossRef]
  76. Wang, M.; Wang, J.; Dong, D.; Meng, L.; Chen, J.; Wang, A.; Cui, H. Performance of BDS-3: Satellite visibility and dilution of precision. GPS Solut. 2019, 23, 56. [Google Scholar] [CrossRef]
  77. Yang, Y.; Gao, W.; Guo, S.; Mao, Y.; Yang, Y. Introduction to BeiDou-3 navigation satellite system. Navigation 2019, 66, 7–18. [Google Scholar] [CrossRef]
  78. Liu, T.; Chen, H.; Chuanfeng, S.; Wang, Y.; Yuan, P.; Geng, T.; Jiang, W. Beidou-3 precise point positioning ambiguity resolution with B1I/B3I/B1C/B2a/B2b phase observable-specific signal bias and satellite B1I/B3I legacy clock. Adv. Space Res. 2023, 72, 488–502. [Google Scholar] [CrossRef]
  79. Rubio, F.; Valero, F.; Llopis-Albert, C. A review of mobile robots: Concepts, methods, theoretical framework, and applications. Int. J. Adv. Robot. Syst. 2019, 16, 1729881419839596. [Google Scholar] [CrossRef]
  80. Onishi, Y.; Yoshida, T.; Kurita, H.; Fukao, T.; Arihara, H.; Iwai, A. An automated fruit harvesting robot by using deep learning. Robomech J. 2019, 6, 13. [Google Scholar] [CrossRef]
  81. Fue, K.G.; Porter, W.M.; Barnes, E.M.; Rains, G.C. An Extensive Review of Mobile Agricultural Robotics for Field Operations: Focus on Cotton Harvesting. AgriEngineering 2020, 2, 150–174. [Google Scholar] [CrossRef]
  82. Gharakhani, H.; Thomasson, J.A.; Lu, Y. An end-effector for robotic cotton harvesting. Smart Agric. Technol. 2022, 2, 100043. [Google Scholar] [CrossRef]
  83. Kuznetsova, A.; Maleva, T.; Soloviev, V. Using YOLOv3 Algorithm with Pre- and Post-Processing for Apple Detection in Fruit-Harvesting Robot. Agronomy 2020, 10, 1016. [Google Scholar] [CrossRef]
  84. Wang, N.; Zhang, N.; Wang, M. Wireless sensors in agriculture and food industry—Recent development and future perspective. Comput. Electron. Agric. 2006, 50, 1–14. [Google Scholar] [CrossRef]
  85. ur Rehman, A.; Abbasi, A.Z.; Islam, N.; Shaikh, Z.A. A review of wireless sensors and networks’ applications in agriculture. Comput. Stand. Interfaces 2014, 36, 263–270. [Google Scholar] [CrossRef]
  86. Shafi, U.; Mumtaz, R.; García-Nieto, J.; Hassan, S.A.; Zaidi, S.A.R.; Iqbal, N. Precision Agriculture Techniques and Practices: From Considerations to Applications. Sensors 2019, 19, 3796. [Google Scholar] [CrossRef] [PubMed]
  87. Huang, P.; Zhang, Z.; Luo, X.; Zhang, J.; Huang, P. Path Tracking Control of a Differential-Drive Tracked Robot Based on Look-ahead Distance. IFAC-PapersOnLine 2018, 51, 112–117. [Google Scholar] [CrossRef]
  88. Karur, K.; Sharma, N.; Dharmatti, C.; Siegel, J.E. A Survey of Path Planning Algorithms for Mobile Robots. Vehicles 2021, 3, 448–468. [Google Scholar] [CrossRef]
  89. Li, T.; Xie, F.; Feng, Q.; Qiu, Q. Multi-vision-based Localization and Pose Estimation of Occluded Apple Fruits for Harvesting Robots. In Proceedings of the 2022 37th Youth Academic Annual Conference of Chinese Association of Automation (YAC), Beijing, China, 19–20 November 2022; pp. 767–772. [Google Scholar]
  90. Georgantopoulos, P.; Papadimitriou, D.; Constantinopoulos, C.; Manios, T.; Daliakopoulos, I.; Kosmopoulos, D. A Multispectral Dataset for the Detection of Tuta Absoluta and Leveillula Taurica in Tomato Plants. Smart Agric. Technol. 2023, 4, 100146. [Google Scholar] [CrossRef]
  91. Ahmad, M.; Abdullah, M.; Moon, H.; Han, D. Plant Disease Detection in Imbalanced Datasets Using Efficient Convolutional Neural Networks With Stepwise Transfer Learning. IEEE Access 2021, 9, 140565–140580. [Google Scholar] [CrossRef]
  92. Salman, Z.; Muhammad, A.; Piran, M.J.; Han, D. Crop-saving with AI: Latest trends in deep learning techniques for plant pathology. Front. Plant Sci. 2023, 14, 1224709. [Google Scholar] [CrossRef]
  93. Faye, M.; Bingcai, C.; Amath Sada, K. Plant Disease Detection with Deep Learning and Feature Extraction Using Plant Village. J. Comput. Commun. 2020, 8, 10–22. [Google Scholar] [CrossRef]
  94. Métwalli, A.; Shalash, O.; Elhefny, A.; Rezk, N.; Gohary, F.E.; Hennawy, O.E.; Akrab, F.; Shawky, A.; Mohamed, Z.; Hassan, N.; et al. Enhancing hydroponic farming with Machine Learning: Growth prediction and anomaly detection. Eng. Appl. Artif. Intell. 2025, 157, 111214. [Google Scholar] [CrossRef]
  95. Khatab, E.; Onsy, A.; Varley, M.; Abouelfarag, A. A lightweight network for real-time rain streaks and rain accumulation removal from single images captured by AVS. Appl. Sci. 2022, 13, 219. [Google Scholar] [CrossRef]
  96. Prakash, N.; Rajesh, V.; Inthiyaz, S.; Pande, S.; Shaik, H. A DenseNet CNN-based Liver Lesion Prediction and Classification for Future Medical Diagnosis. Sci. Afr. 2023, 20, e01629. [Google Scholar] [CrossRef]
  97. Latif, G.; Abdelhamid, S.E.; Mallouhy, R.E.; Alghazo, J.; Kazimi, Z.A. Deep Learning Utilization in Agriculture: Detection of Rice Plant Diseases Using an Improved CNN Model. Plants 2022, 11, 2230. [Google Scholar] [CrossRef] [PubMed]
  98. Correa da Silva, P.E.; Almeida, J. An Edge Computing-Based Solution for Real-Time Leaf Disease Classification Using Thermal Imaging. IEEE Geosci. Remote Sens. Lett. 2025, 22, 7000105. [Google Scholar] [CrossRef]
  99. Szegedy, C.; Liu, W.; Jia, Y.; Sermanet, P.; Reed, S.; Anguelov, D.; Erhan, D.; Vanhoucke, V.; Rabinovich, A. Going deeper with convolutions. In Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 7–12 June 2015. [Google Scholar] [CrossRef]
  100. Costa Junior, A.G.; da Silva, F.S.; Rios, R. Deep Learning-Based Transfer Learning for Classification of Cassava Disease. arXiv 2025, arXiv:2502.19351. [Google Scholar] [CrossRef]
  101. Chulu, F.; Phiri, J.; Nkunika, P.O.; Nyirenda, M.; Kabemba, M.M.; Sohati, P.H. A Convolutional Neural Network for Automatic Identification and Classification of Fall Army Worm Moth. Int. J. Adv. Comput. Sci. Appl. 2019, 10, 112–118. [Google Scholar] [CrossRef]
  102. Han, B.; Hu, M.; Wang, X.; Ren, F. A Triple-Structure Network Model Based upon MobileNet V1 and Multi-Loss Function for Facial Expression Recognition. Symmetry 2022, 14, 2055. [Google Scholar] [CrossRef]
  103. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet Classification with Deep Convolutional Neural Networks. Adv. Neural Inf. Process. Syst. 2012, 25. [Google Scholar] [CrossRef]
  104. Ferentinos, K.P. Deep learning models for plant disease detection and diagnosis. Comput. Electron. Agric. 2018, 145, 311–318. [Google Scholar] [CrossRef]
  105. dos Santos Ferreira, A.; Matte Freitas, D.; Gonçalves da Silva, G.; Pistori, H.; Theophilo Folhes, M. Weed detection in soybean crops using ConvNets. Comput. Electron. Agric. 2017, 143, 314–324. [Google Scholar] [CrossRef]
  106. Kamilaris, A.; Prenafeta-Boldú, F.X. Deep learning in agriculture: A survey. Comput. Electron. Agric. 2018, 147, 70–90. [Google Scholar] [CrossRef]
  107. Too, E.C.; Yujian, L.; Njuki, S.; Yingchun, L. A comparative study of fine-tuning deep learning models for plant disease identification. Comput. Electron. Agric. 2019, 161, 272–279. [Google Scholar] [CrossRef]
  108. Jiang, H.; Zhang, C.; Qiao, Y.; Zhang, Z.; Zhang, W.; Song, C. CNN feature based graph convolutional network for weed and crop recognition in smart farming. Comput. Electron. Agric. 2020, 174, 105450. [Google Scholar] [CrossRef]
  109. Li, Y.; Ko, Y.; Lee, W. A Feasibility Study on Translation of RGB Images to Thermal Images: Development of a Machine Learning Algorithm. SN Comput. Sci. 2023, 4, 555. [Google Scholar] [CrossRef]
  110. Optris. Thermal Image. 2024. Available online: https://www.optris.global/thermal-image (accessed on 28 December 2024).
  111. Mozaffari, M.H.; Li, Y.; Ko, Y. Detecting Flashover in a Room Fire based on the Sequence of Thermal Infrared Images using Convolutional Neural Networks. In Proceedings of the Canadian Conference on Artificial Intelligence, Toronto, Canada, 30 May–3 June 2022. [Google Scholar] [CrossRef]
  112. Mozaffari, M.H.; Li, Y.; Ko, Y. Real-time detection and forecast of flashovers by the visual room fire features using deep convolutional neural networks. J. Build. Eng. 2022, 64, 105674. [Google Scholar] [CrossRef]
  113. Ismail, I. The Smart Agriculture based on Reconstructed Thermal Image. JITCE (J. Inf. Technol. Comput. Eng.) 2022, 6, 8–13. [Google Scholar] [CrossRef]
  114. Grant, O.M.; Tronina, Ł.; Jones, H.G.; Chaves, M.M. Exploring thermal imaging variables for the detection of stress responses in grapevine under different irrigation regimes. J. Exp. Bot. 2006, 58, 815–825. [Google Scholar] [CrossRef] [PubMed]
  115. Krišto, M.; Ivasic-Kos, M.; Pobar, M. Thermal Object Detection in Difficult Weather Conditions Using YOLO. IEEE Access 2020, 8, 125459–125476. [Google Scholar] [CrossRef]
  116. Chen, C.; Chandra, S.; Han, Y.; Seo, H. Deep Learning-Based Thermal Image Analysis for Pavement Defect Detection and Classification Considering Complex Pavement Conditions. Remote Sens. 2022, 14, 106. [Google Scholar] [CrossRef]
  117. Yang, M.D.; Su, T.C.; Lin, H.Y. Fusion of Infrared Thermal Image and Visible Image for 3D Thermal Model Reconstruction Using Smartphone Sensors. Sensors 2018, 18, 2003. [Google Scholar] [CrossRef]
  118. Chandel, N.S.; Rajwade, Y.A.; Dubey, K.; Chandel, A.K.; Subeesh, A.; Tiwari, M.K. Water Stress Identification of Winter Wheat Crop with State-of-the-Art AI Techniques and High-Resolution Thermal-RGB Imagery. Plants 2022, 11, 3344. [Google Scholar] [CrossRef] [PubMed]
  119. Rajwade, Y.A.; Chandel, N.S.; Chandel, A.K.; Singh, S.K.; Dubey, K.; Subeesh, A.; Chaudhary, V.P.; Ramanna Rao, K.V.; Manjhi, M. Thermal—RGB Imagery and Computer Vision for Water Stress Identification of Okra (Abelmoschus esculentus L.). Appl. Sci. 2024, 14, 5623. [Google Scholar] [CrossRef]
  120. Yones, M.; Khdery, G.A.; Kadah, T.; Aboelghar, M. Exploring the Potential of Thermal Imaging for Pre-Symptomatic Diagnosis of Fall Armyworm Infestation in Maize: A Case Study from Ismailia Governorate, Egypt. Sci. J. Agric. Sci. 2024, 6, 103–110. [Google Scholar] [CrossRef]
  121. Ibrahim, A.; Yousry, M.; Saad, M.; Farag Mahmoud, M.; Said, M.; Abdullah, A. Infrared Thermal Imaging as an Innovative Approach for Early Detection of Infestation of Stored Product Insects in Certain Stored Grains. Cercet. Agron. Mold. (Agronomic Res. Mold.) 2020, LII, 321–331. [Google Scholar] [CrossRef]
  122. Khaled, A.; Shalash, O.; Ismaeil, O. Multiple Objects Detection and Localization using Data Fusion. In Proceedings of the 2023 2nd International Conference on Automation, Robotics and Computer Engineering (ICARCE), Wuhan, China, 14–16 December 2023. [Google Scholar]
  123. Khattab, Y.; Pott, P.P. Active/robotic capsule endoscopy-A review. Alex. Eng. J. 2025, 127, 431–451. [Google Scholar] [CrossRef]
  124. Anton, S.R.; Martínez-Ojeda, R.M.; Hristu, R.; Stanciu, G.A.; Toma, A.; Banica, C.K.; Fernández, E.J.; Huttunen, M.; Bueno, J.M.; Stanciu, S.G. On the Automated Detection of Corneal Edema with Second Harmonic Generation Microscopy and Deep Learning. arXiv 2022, arXiv:2210.00332. [Google Scholar] [CrossRef]
  125. Jia, W.; Zhang, Z.; Shao, W.; Ji, Z.; Hou, S. RS-Net: Robust segmentation of green overlapped apples. Precis. Agric. 2022, 23, 492–513. [Google Scholar] [CrossRef]
  126. Mirzaev, K.G.; Kiourt, C. Machine Learning and Thermal Imaging in Precision Agriculture. In Proceedings of the International Conference on Information, Intelligence, Systems, and Applications, Chania Crete, Greece, 17–19 July 2024; pp. 168–187. [Google Scholar]
  127. John, M.; Bankole, I.; Ajayi-Moses, O.; Ijila, T.; Jeje, T.; Patil, L. Relevance of Advanced Plant Disease Detection Techniques in Disease and Pest Management for Ensuring Food Security and Their Implication: A Review. Am. J. Plant Sci. 2023, 14, 1260–1295. [Google Scholar] [CrossRef]
  128. Badia-Melis, R.; Emond, J.P.; Ruiz-Garcia, L.; Hierro, J.; Villalba, J. Explorative study of using Infrared imaging for temperature measurement of pallet of fresh produce. Food Control 2016, 75, 211–219. [Google Scholar] [CrossRef]
  129. Pugazhendi, P.; Gnanavel, B.K.; Sundaram, A.; Sathiyamurthy, S. Advancing post-harvest fruit handling through AI-based thermal imaging: Applications, challenges, and future trends. Discov. Food 2023, 3, 27. [Google Scholar] [CrossRef]
  130. Hahn, F.; Cruz, J.; Barrientos, A.; Perez, R.; Valle, S. Optimal pressure and temperature parameters for prickly pear cauterization and infrared imaging detection for proper sealing. J. Food Eng. 2016, 191, 131–138. [Google Scholar] [CrossRef]
  131. Elsayed, H.; Tawfik, N.S.; Shalash, O.; Ismail, O. Enhancing human emotion classification in human-robot interaction. In Proceedings of the 2024 International Conference on Machine Intelligence and Smart Innovation (ICMISI), Alexandria, Egypt, 12–14 May 2024. [Google Scholar]
Figure 1. Robots with grasping and cutting concepts: (a) harvesting system for cherry tomato [26]; (b) sweet pepper harvesting robot [30]; (c) pumpkin harvesting robotic end effector [39].
Figure 2. Robot mobile platform types: (a) railed platform [26]; (b) tracked platform (TERRA_MEPP) [57]; (c) independent steering devices (Octinion) [43]; (d) four-wheel platform (Segway) [49]; (e) four-wheel platform (Husky A200) [50].
Figure 3. Localization and sensing components in agricultural robot systems [60].
Figure 5. Statistics of PlantVillage dataset [17].
Figure 7. Statistics of PlantDoc dataset leaves with diseases [17].
Figure 8. Some plant disease images under laboratory and field conditions [17].
Figure 9. Agro-ecological zones of Cameroon [17].
Figure 10. Statistics of FieldPlant dataset leaves with diseases [17].
Figure 11. Corn single leaf image [17].
Figure 12. Tomato multiple leaves image [17].
Figure 16. Sample of RGB images [113].
Figure 17. Sample of IR images [113].
Table 1. Performance of different crops in robotic harvesting.

| Crop | Success Rate (%) | Damage Rate (%) | Speed (s/Cycle) | Cited Work |
|---|---|---|---|---|
| Sweet Pepper | 61 (modified crop) and 18 (unmodified crop) | N/A | 24 | [30] |
| Cherry Tomato | 83 | N/A | 8 | [26] |
| Pumpkin | 79 | 21 | N/A | [39] |
Table 6. Main sensors used in precision agriculture.

| Sensor Type | Examples | Parameters | Advantages |
|---|---|---|---|
| Soil | Soil moisture, pH, EC sensors | Monitoring soil conditions (moisture, salinity) | Real-time, easy to deploy |
| Weather | Anemometers, rain gauges | Monitoring local weather (wind, rain, temperature) | Optimize irrigation, pesticide use |
| Optical | NDVI, RGB cameras | Measuring plant health, canopy cover | Non-invasive, covers large area |
| Remote Sensing | LiDAR, multispectral, thermal, satellite cameras | Topography mapping, vegetation analysis, crop stress detection | High accuracy, covers large area |
| Proximity | Ultrasonic, capacitive, inductive sensors | Estimating plant height, density, detecting growth stages | Affordable, simple to deploy |
| Nutrient | NPK sensors | Measuring soil nutrient levels | Direct nutrient management |
Table 10. VGG-16 classification accuracy on different plant disease datasets [17].

| CNN Model | Training (%) | Testing (%) | Accuracy (%) |
|---|---|---|---|
| VGG-16 | PV (100%) | PD (100%) | 12.75 |
| VGG-16 | PD (80%) | PD (20%) | 40.3 |
| VGG-16 | FP (80%) | FP (20%) | 80.54 |

Note: PV, PD, and FP represent the PlantVillage, PlantDoc, and FieldPlant datasets, respectively.
Table 11. InceptionV3 classification accuracy on different plant disease datasets [17].

| CNN Model | Training (%) | Testing (%) | Accuracy (%) |
|---|---|---|---|
| InceptionV3 | PV (100%) | PD (100%) | 14.25 |
| InceptionV3 | PD (80%) | PD (20%) | 51.27 |
| InceptionV3 | FP (80%) | FP (20%) | 82.54 |

Note: PV, PD, and FP represent the PlantVillage, PlantDoc, and FieldPlant datasets, respectively.
Table 12. MobileNet classification accuracy on different plant disease datasets [17].

| CNN Model | Training (%) | Testing (%) | Accuracy (%) |
|---|---|---|---|
| MobileNet | PV (100%) | PD (100%) | 16.75 |
| MobileNet | PD (80%) | PD (20%) | 60.14 |
| MobileNet | FP (80%) | FP (20%) | 82.9 |

Note: PV, PD, and FP represent the PlantVillage, PlantDoc, and FieldPlant datasets, respectively.
Table 15. Comparison of thermal cameras [129].

| Model | Type | Sens. (mK) | Res. | Range (°C) | Price ($) |
|---|---|---|---|---|---|
| FLIR ONE edge Pro | Smart | 70 | 160 × 120 | −20 to 400 | 550 |
| FLIR 5C | Handheld | 70 | 160 × 120 | −20 to 400 | 799 |
| RevealPRO | Handheld | 70 | 320 × 240 | −40 to 330 | 699 |
| Flir One Gen 3 | Smart | 150 | 80 × 60 | −20 to 120 | 229 |
| GTC400C | Handheld | 50 | 320 × 240 | −40 to 400 | 966 |
| Compact Pro | Smart | 70 | 160 × 120 | −40 to 330 | 499 |
| FLIR TG267 | Handheld | 70 | 320 × 240 | −25 to 380 | 549 |
| ShotPro | Handheld | 70 | 160 × 120 | −40 to 330 | 699 |
| Scout TK | Handheld | 50 | 160 × 120 | −20 to 40 | 649 |
| CAT S62 Pro | Smart | 150 | 160 × 120 | −20 to 400 | 579 |
| E8-XT | Handheld | 50 | 320 × 240 | −20 to 550 | 619 |
| IR202 | Smart | 150 | 80 × 60 | −40 to 400 | 140 |

Note: Sens. and Res. represent Sensitivity and Resolution, respectively.