Review

Safety of Automated Agricultural Machineries: A Systematic Literature Review

Department of Agricultural and Biological Engineering, University of Illinois at Urbana-Champaign, Urbana, IL 61801, USA
* Author to whom correspondence should be addressed.
Safety 2023, 9(1), 13; https://doi.org/10.3390/safety9010013
Submission received: 19 January 2023 / Revised: 18 February 2023 / Accepted: 28 February 2023 / Published: 6 March 2023

Abstract
Automated agricultural machinery has advanced significantly over the past decade; however, the ability of such machines to operate safely will be critical to their commercialization. This study provides a holistic evaluation of the work carried out so far in the field of automated agricultural machines’ safety, as well as a framework for future research considerations. Previous safety-related studies of automated agricultural machines are analyzed and grouped into three categories: (1) environmental perception, (2) risk assessment as well as risk mitigation, and (3) human factors as well as ergonomics. The key findings are as follows: (1) Single perception sensors, multiple perception sensors, datasets of agricultural environments, different algorithms, and external solutions for improving sensor performance have all been explored as options for improving the safety of automated agricultural machines. (2) Current risk assessment methods cannot be effective when dealing with new technology, such as automated agricultural machines, because of a lack of pre-existing knowledge, and full compliance with the guidelines provided by the current International Organization for Standardization standard (ISO 18497) cannot ensure the safety of automated agricultural machines. A regulatory framework and the ability to test the functionalities of automated agricultural machines within a reliable software environment are efficient ways to mitigate risks. (3) Knowing foreseeable human activity is critical to ensuring safe human–robot interaction.

1. Introduction

The world human population will reach 8.5 billion in 2030, 9.7 billion in 2050, and 10.9 billion in 2100 [1]. In addition, about 9% of the global population (770 million in 2021) is undernourished [2]. Meeting the food demands of an increasing population, in addition to tackling the issue of undernourishment, will require new innovative strategies in agriculture. Moreover, farm workers are migrating from rural to urban areas due to poverty, limited access to social protection, inequality, and environmental degradation [3]. To meet the increased demand for agricultural products, with limited inputs, modern farmers will turn to automation, using technologies such as AI, machine learning, sensors, and other digital hardware as well as software to boost productivity and meet rising food demands in a sustainable way that is less dependent on inputs and the labor force.
There has been considerable development of AI-based automation in the areas of pruning, spraying fertilizer, autonomous weed removal, and harvesting. Ramin et al. [4] reviewed 152 research papers that focused on the research and development of autonomous weed control, field scouting, and harvesting equipment. Similarly, Lytridis et al. [5] reviewed 77 papers that focused exclusively on agricultural cooperative robots that were split into five different topics, including (a) human–robot cooperation, (b) cooperative multiple unmanned aerial vehicles (UAVs), (c) cooperative multiple unmanned ground vehicles (UGVs), (d) hybrid teams of UAVs as well as UGVs (UAVs/UGVs), and (e) cooperative manipulation by multi-arm systems.
Despite tremendous development in agriculture automation, automated agricultural machines have not yet reached a commercial scale [4] owing to several obstacles, including machine system safety, data privacy, initial investment costs, associated expenses, and a lack of knowledge of their potential benefits [6,7]. One of the major obstacles to the full deployment of automated agricultural machines is safety. Automated agricultural machines must be designed to work safely, their safety record must be confirmed, and, more importantly, they must be viewed as safe by end users. Therefore, investigating and ensuring the safety of automated agricultural machines will be critical for their adoption.
The main objective of this work is to survey, organize, review, and summarize the different bodies of published research related to the safety of automated agricultural machines, including discussing potential future work.

2. Materials and Methods

A systematic literature review method that utilized the identification, screening, and inclusion procedures outlined by the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement (https://prisma-statement.org/, accessed on 3 October 2022) was used in this study [8]. A systematic literature review was chosen because it is method-driven, replicable, and comprehensive. In addition, this method aims to reduce bias through exhaustive literature searches, a scientific as well as transparent process, and an audit record of the reviewers’ procedures [9].

2.1. Identification

The systematic literature review was conducted by using the Scopus database, as it was identified as being the most relevant for publications in the areas of engineering and manufacturing [10]. The search was conducted in October 2022. Titles, abstracts, and keywords were used as the search fields in the Scopus electronic database. The query used in the search was “object detection agriculture” OR “human–robot interaction in agriculture” OR “robot perception agriculture” OR “robotic safety agriculture” OR “human factor ergonomic agriculture” OR “autonomous agricultural vehicles safety” OR “risk assessment agriculture robotics”. These keywords were selected based on a preliminary search that found them effective in finding an initial list of known relevant articles. Furthermore, the automated search options in the Scopus electronic database were used to include the following criteria: (1) years (2012–2022), (2) subject areas (agricultural and biological sciences, engineering, and computer science), (3) keywords (agriculture, agriculture robots, object detection, deep learning, robotics, computer vision, object recognition, precision agriculture, and remote sensing), and (4) language (English). In total, 1209 matching articles were identified and exported into an Excel spreadsheet for further analyses.

2.2. Screening

The authors examined all of the 1209 articles by reading the titles and abstracts. The criteria for inclusion in the next stage were as follows:
  • Article is about robots and/or autonomous systems;
  • Article is about agriculture applications;
  • Article is about safety.
All of the 1209 articles were reviewed at this stage, and only 40 articles were found to meet all of the inclusion criteria.

2.3. Eligibility

The authors examined all of the 40 articles by reading the full texts. The criteria for inclusion in the next stage were as follows:
  • The full text of the article is available;
  • Article is about automated and autonomous systems;
  • Article is about agriculture applications;
  • Article is about safety;
  • Article is not a review paper;
  • Article is not a book.
Review papers and books were not included in this systematic review, as the goal of this paper was to review current primary research findings. Including review articles and books might skew the data on the amount and type of research conducted. A total of 40 articles were processed in this stage; 9 articles were excluded as they did not meet the criteria, leading to 31 articles being left for the next stage. Figure 1 shows a flow diagram of the review’s methodology.

2.4. Additional Review

To identify additional articles related to the safety of automated agricultural machines that might have been missed by the keywords, each of the 31 articles was located in Scopus and saved as an article list. The automated search options in the Scopus electronic database were then used to obtain all of the articles that either cited these 31 articles or were referenced by them. In total, 1444 articles were identified and exported into an Excel spreadsheet for further analyses. The Excel data filter option was then used to include only articles from 2012 to 2022, which narrowed the total down to 1114 articles. The same screening process described in Section 2.2 was applied to the 1114 articles, which narrowed the total down to 86 articles. The remaining articles were read and included if they met the criteria stated in Section 2.3. A total of 51 articles was obtained; however, 20 of these 51 articles were already among the initial 31 articles, as several articles cited works that had been identified in the initial examination. These duplicates were excluded, and the remaining 31 articles were retained. Overall, a total of 62 articles were found and reviewed. Figure 2 shows a flow diagram of the additional review’s methodology.

3. Results

The reviewed papers were organized into three categories of approaches for improving the safety of automated agricultural machines: (1) environmental perception, (2) risk assessment as well as risk mitigation, and (3) human factors as well as ergonomics, as illustrated in Figure 3. A total of 48 articles are related to the perception of automated agricultural machines, 9 articles focused on risk assessment as well as risk mitigation, and 5 articles involved human factors as well as ergonomics. Overall, there were 43 journal articles, 17 conference articles, 1 standard, and 1 thesis.
The geographical region of the first author was considered to be the geographical location of the publication. Over half of these publications came from just four countries: Denmark, with 10 papers, and the USA, Italy, and China, with 8 each. The next tier comprised Germany, Australia, Japan, and Canada, with three to four publications each. Korea, Greece, the United Kingdom, Brazil, Taiwan, Norway, Portugal, Finland, Israel, Spain, and Poland each had one to two publications (Figure 4). On average, five to six publications were published per year, with a minimum of two and a maximum of nine articles. There was no clear trend in recent publications related to the safety of autonomous systems in agriculture (Figure 5). This contrasts with articles published on AI or autonomous systems in occupational safety and health (OSH), which showed a rapid increase over the last 10 years [11].

3.1. Environmental Perception

Robotic technology has been extensively developed for agricultural tasks in recent years to produce intelligent vehicles that can boost productivity and competitiveness. Safety is a significant concern with regard to autonomous vehicles, as unforeseen obstructions and highly deformable terrain in working areas must be robustly identified and avoided to prevent accidents. Failure to recognize and avoid obstacles and dangerous terrain in a timely manner can cause serious problems, including injury or death to humans or animals and/or damage to the robots involved. As a result, accurate and reliable environmental perception is vital [12].
The perception of agricultural environments is achieved using perception sensors and machine learning or artificial intelligence. Perception sensors, positioning and guidance systems, and associated algorithms are used to identify and classify significant objects, and a safety-related control system is used to control machines without human intervention. (1) Perception sensors are used to collect and record images (including all types of obstacles and hazardous terrain) in agricultural environments. Examples of perception sensors used with automated agricultural machines include stereo vision cameras, LiDAR, radars, thermal cameras, RGB cameras, and laser scanners. Table 1 provides a description of some of these perception sensors. (2) Machine learning algorithms analyze the collected data. The algorithms are trained on datasets to identify and classify significant objects; examples of machine learning algorithms used with automated agricultural machines are convolutional neural networks and hidden Markov models. The goal of training is to allow the algorithms to robustly identify and classify images across various agricultural environment settings. (3) Finally, once the algorithms are well trained and a hazard is identified during a robot’s operation, a command is sent to the robot to either stop, change direction to avoid the obstacle, or reduce its speed [11,13], as shown in Figure 6.
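As a minimal illustration of this sense–detect–act chain, the sketch below maps classified hazards to the three reactions cited above (stop, change direction, or reduce speed). The detection structure, distance thresholds, and decision rules are hypothetical placeholders for illustration, not taken from any of the reviewed systems.

```python
# Minimal sketch of the sense-detect-act loop; all thresholds are assumed values.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str         # e.g., "human", "tree", "ditch"
    distance_m: float  # estimated range from the machine

def decide(detections: list[Detection], stop_m: float = 3.0, slow_m: float = 10.0) -> str:
    """Map classified hazards to one of the reactions cited in [11,13]:
    stop, reduce speed, or change direction to avoid the obstacle."""
    if not detections:
        return "continue"
    nearest = min(d.distance_m for d in detections)
    if nearest <= stop_m:
        return "stop"
    if nearest <= slow_m:
        return "reduce_speed"
    return "change_direction"

# Example: a human detected 8 m ahead triggers a speed reduction.
print(decide([Detection("human", 8.0)]))  # -> reduce_speed
```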
Robust obstacle detection in agricultural settings is challenging due to the complex and unstructured nature of such environments. Furthermore, training algorithms on all possible obstacles is difficult because many hazards are diverse and have camouflaged appearances or structures. There are four main categories of hazards encountered in agriculture: positive, negative, and moving obstacles, as well as deformable terrain. A positive obstacle is any object higher than ground level, whereas a negative obstacle is lower than ground level. Examples of positive obstacles include trees, metallic poles, and buildings; examples of negative obstacles include holes and ditches. In the case of a positive obstacle, an automated agricultural machine risks a collision; conversely, a negative obstacle can cause a crash. A moving (dynamic) obstacle is one that suddenly appears in front of a robot, generally represented by human operators, other moving machinery, animals, or even children. Moreover, obstacles can also vary widely based on the type of crop, fruit, or vegetable, as well as the curvature of the landscape. Figure 7 depicts examples of positive, negative, and moving obstacles commonly seen in agricultural settings [12].
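The positive/negative distinction reduces to a sign test on elevation relative to the local ground plane. The sketch below labels a grid cell of point-cloud heights accordingly; the ground estimate and tolerance are assumed values chosen only for illustration.

```python
import numpy as np

def classify_cell(heights_m: np.ndarray, ground_m: float = 0.0, tol_m: float = 0.15) -> str:
    """Label a cell of point-cloud heights as a positive obstacle (above ground),
    a negative obstacle (below ground), or traversable terrain."""
    h = float(np.median(heights_m))      # median is robust to stray returns
    if h > ground_m + tol_m:
        return "positive"                # collision risk (tree, pole, building)
    if h < ground_m - tol_m:
        return "negative"                # crash risk (hole, ditch)
    return "traversable"

print(classify_cell(np.array([0.9, 1.1, 1.0])))     # -> positive
print(classify_cell(np.array([-0.6, -0.5, -0.7])))  # -> negative
```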
Furthermore, because an autonomous vehicle’s stability can be jeopardized by highly deformable terrain, deformable terrains remain another potential hazard faced by automated agricultural machines. As a result, effective terrain identification and classification, as well as subsequent appropriate automated machine reactions, are crucial aspects of improving safety. Figure 8 displays rough terrains that are frequently found in agricultural settings.

3.1.1. Single Perception Sensor

Given the challenging environments in which automated agricultural machines operate, numerous works have been developed regarding robust obstacle detection and terrain classification systems to improve their safety. For example, Freitas et al. [14] introduced the use of 3D laser scanners to detect obstacles in agricultural settings. An autonomous orchard vehicle was equipped with a 3D laser scanner and tested in a field. As the vehicle moves forward, the laser measures different lines on the ground, allowing the creation of 3D point clouds representing the terrain surface as well as the detected obstacles. In the same year, Kohanbash et al. [15] proposed a safety architecture with which to guide the development of safe automated agricultural machines, with an initial focus on autonomous agricultural vehicles. They developed a new hierarchy of agricultural robotic safety based on three elements: (1) sensing/computing/actuation technologies, (2) the user interface, and (3) establishing standards based on the level of automation implemented in machines. A John Deere tractor was retrofitted to make it autonomous by following their hierarchy and was equipped with a laser scanner in addition to wheel and steering encoders. Based on their hierarchy, the retrofitted tractor was able to detect and stop in front of obstacles, but could not drive around them. Additionally, Bellone et al. [16] looked at autonomous vehicles safely navigating through challenging terrain and detecting positive, negative, and dynamic obstacles by using stereo vision techniques. All types of obstacles and terrain imperfections were correctly detected by the system; however, the results were less precise because the stereo vision techniques could not produce point clouds of equal density and quality. In line with this issue, Reina et al. [17] proposed the use of machine learning to improve radar classification in outdoor settings. A self-trained radar classifier was developed, in which the ground model was automatically learned during a bootstrapping stage and continuously updated based on the most recent ground labels to forecast ground instances in subsequent scans.
Dvorak et al. [18] looked at the potential of an ultrasonic sensor to detect objects that are often seen in an outdoor farming environment. The technology correctly identified a water jug and oriented strand board but underperformed in recognizing animals, a mannequin (human model), and a fence. Moreover, Yang et al. [19] presented human detection from an automated tractor using stereo vision in an agricultural environment. The method detected a human in the range of 4 to 11 m with an error distance of about 0.5 m in both stationary and motion conditions during the daytime. Furthermore, Ross et al. [20] used a novelty detector to inform stereo vision matching to improve obstacle detection systems for autonomous robots in agricultural field scenarios. This approach has no lighting dependence and requires no pretraining because the novelty is based solely on current image data. While the system correctly detected all of the obstacles during the day, there were some problems when detecting obstacles at night. More recently, Fleischmann et al. [21] created a stereo-vision-camera-based obstacle detection system in which the entire categorization process was based on dividing the created point cloud into cells and analyzing the cells further. Within the same year, a sophisticated sensor camera was introduced by Campos et al. [22], who explored the safety of automated agricultural machines through robust obstacle detection with a unique CCD high-resolution camera operating in the visible RGB spectral range. Automatic video analyses were performed to detect static and dynamic obstacles in agricultural environments via spatial–temporal analyses; a key feature of this method is that it does not require any training process. Yan et al. [23] used a stereo vision camera combined with fuzzy logic and a neural network to successfully detect vehicles, humans, trees, plastic bars, and other common obstacles in an agricultural environment.
Researchers have also employed a single 2D laser scanner to detect obstacles in an agricultural environment [24]. The sensor was placed in front of an automated agricultural tractor and was designed to find a man-made round pole in a farm setting. The system was able to accurately identify the obstacles and stop the vehicle; however, in the field’s corners, where the sensor was sensing beyond the field plot’s margins, the system produced false positives. To recognize an environment and identify as well as avoid obstacles in agricultural settings, Inoue et al. [25] used an object detection system based on a stereo camera and a convolutional neural network. The study tested the positions of obstacles and landmarks, as well as the position of detected objects in relation to the robot, rather than the system’s ability to detect and avoid obstacles. The outcomes, however, revealed that the robot’s final position inaccuracy was 5.83 m: rotating the robot caused error to accumulate, resulting in an inaccurate self-position estimate. Consequently, a single sensor seems to be unable to reliably detect all obstacles in agricultural settings, which is not surprising considering the variety and complexity of common field obstacles; however, since each perception sensor has its strengths and shortcomings, combining multiple sensors may be a better method for improving obstacle detection in agricultural settings [26]. Table 1 shows the functionalities, as well as some advantages and disadvantages, of different sensor modalities for outdoor perception.
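One pragmatic fix for the corner false positives reported in [24] is to discard laser returns whose world coordinates fall outside the known plot boundary. The sketch below shows this gating step as a minimal example; the field polygon and detection coordinates are hypothetical, and the cited study did not describe this exact remedy.

```python
import numpy as np
from matplotlib.path import Path

# Hypothetical rectangular field plot in world coordinates (metres).
field = Path([(0, 0), (100, 0), (100, 60), (0, 60)], closed=True)

def keep_in_field(detections_xy: np.ndarray) -> np.ndarray:
    """Discard laser returns outside the plot boundary, suppressing the
    false positives produced when the sensor sees beyond the field margins."""
    mask = field.contains_points(detections_xy)
    return detections_xy[mask]

hits = np.array([[50.0, 30.0], [120.0, 70.0]])  # second hit lies beyond the margin
print(keep_in_field(hits))  # only the in-field detection survives
```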

3.1.2. Multiple Perception Sensors

Considering that one sensing technology cannot be effective in all situations encountered in agricultural environments, a set of sensor technologies was evaluated in [27]. The authors used a stereo vision camera, LADAR (Laser Detection And Ranging), a thermal camera, and radar. The stereo vision camera effectively identified the obstacles. The thermal camera revealed high temperatures on living beings, automobiles, and water. The radar was able to create a panoramic image of the surrounding area; however, the LADAR was unable to recognize moving obstacles in time when the tractor was moving. Recently, the authors of [28] used three imaging sensors (RGB, thermal, and stereo cameras) and two active sensors (LiDAR and radar). The RGB camera was able to recognize upright pedestrians, but its performance degraded for more complicated postures. Detecting items that extend significantly above the ground was easy with the LiDAR sensor and stereo camera. The thermal camera demonstrated excellent capabilities by detecting objects of different temperatures, such as humans and living obstacles. More recently, Christiansen et al. [29] conducted the same research as above but with fewer sensors: an RGB camera, stereo camera, thermal camera, and LiDAR. Additionally, different obstacles were considered, including standing/lying adult and child mannequins, two real humans, and an ISO 18497 barrel. The authors obtained results similar to those stated previously. Ross et al. [30] explored the usage of multiple stereo vision cameras to detect obstacles in agricultural environments during both the daytime and nighttime. The system successfully detected all obstacles during the daytime, but humans and tires were not detected during the nighttime.
Nissimov et al. [31] suggested that a Kinect sensor could be combined with an infrared laser emitter, infrared camera, and RGB camera to develop an obstacle detection system for greenhouse environments; however, problems with smooth and shiny surfaces, misalignment between RGB and depth images, a time delay (30 s) for a stable depth measurement after a quick rotation, synchronization, and mismatch between the RGB and depth images’ fields of view and points of view were all mentioned as potential sources of error. In Ball et al. [32], a vision-based obstacle detection and navigation system for a crop field robot was created. To follow the crop rows, the robot used a combination of GPS, inertial sensors, and a stereovision camera sensor to navigate the terrain. An extra set of stereoscopic wide-angle cameras was utilized to observe and detect obstacles in the area 10 m ahead of the robot. More recently, Reina et al. [11] looked into combining stereovision, LiDAR, radar, and thermography to improve agricultural robot environmental awareness. Stereovision and LiDAR were coupled first, followed by radar and stereovision, and lastly stereovision and thermography. Additionally, to detect and avoid obstacles in an agricultural area, Franzius et al. [33] investigated a low-cost compact module made up of color cameras and an ARM-based processor board installed on an autonomous lawn mower. The results showed that the technology maintains good mowing performance while reducing collision incidents. Moreover, a new method for multimodal obstacle detection by fusing RGB cameras and LiDAR sensing with a conditional random field (CRF) was explored [34]. The proposed method was evaluated on a dairy paddock and different orchards with a perception research robot in Australia. The results have shown a better performance of the proposed method compared to when a single perception sensor was used.
Using four 120-degree viewing angle cameras in conjunction with the YOLO-v3 object identification system, Jung et al. [35] recently investigated the potential to improve obstacle detection capabilities. For testing, the cameras were mounted on a tractor, and the system achieved an accuracy of 88.43% in identifying obstacles. In a more contemporary work, Skoczeń et al. [36] explored an obstacle detection system for agricultural mobile robot applications that used four RGB-D cameras and two LiDAR sensors. The system was able to correctly detect obstacles with a high accuracy of 95.2%. A key aspect of this system is that it generates maps that contain information on the working area, unknown places, and obstacle positions. Moreover, multiple sensors have also been investigated for robust outdoor terrain classification. Reina et al. [37] conducted outdoor terrain classification with LiDAR and two stereovision cameras; their system successfully detected the obstacles and classified the outdoor terrain. Another study on statistical ground categorization using a combination of radar, monocular vision, four 2D laser scanners, a thermal infrared camera, and real-time kinematic (RTK) positioning was conducted by [38]. With an average classification accuracy of roughly 80%, the system was successful in detecting drivable surfaces. This exemplifies the potential value of combining radar and vision, in which the radar provides range information and the vision provides color-based sub-ground separation for an augmented map of the environment.
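The fusion schemes above range from conditional random fields [34] to map-level combination [36]. As a much simpler illustration of why fusing sensors helps, the sketch below combines per-cell obstacle probabilities from two sensors under an independence assumption; it is a generic stand-in, not the method of any cited study, and the probability values are invented.

```python
import numpy as np

def fuse_grids(p_camera: np.ndarray, p_lidar: np.ndarray) -> np.ndarray:
    """Naive independent-sensor fusion of per-cell obstacle probabilities:
    P(obstacle) = 1 - (1 - p_camera) * (1 - p_lidar)."""
    return 1.0 - (1.0 - p_camera) * (1.0 - p_lidar)

p_cam = np.array([0.2, 0.7, 0.1])
p_lid = np.array([0.3, 0.6, 0.05])
print(fuse_grids(p_cam, p_lid))  # cells flagged by either sensor score higher
```

A cell that either sensor flags with high confidence remains flagged after fusion, which is the intuition behind the improved detection rates reported when multiple modalities are combined.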
Reina et al. [39] explored a supervised learning method for terrain classification to recognize the terrain surface for a vehicle operating in rural settings, using a stereo vision camera together with electrical current and voltage sensors. Their system successfully predicted the terrain classifications with an accuracy of 89.1%. Additionally, Kragh et al. [40] investigated obstacle detection and terrain classification in agricultural fields by using a single 3D LiDAR as well as several visual and pose sensors. The system successfully classified the terrain and detected objects, such as humans, animals, cars, and buildings, with accuracies of 91.6% and 81.1%, respectively. Finally, Christiansen et al. [41] developed an anomaly detection method for obstacle perception by using a stereovision camera composed of two Flea 3 GigE color cameras. The approach benefits from integrating deep convolutional neural networks (CNNs) with background subtraction or anomaly detection to overcome the problem of recognizing distant and obstructed objects that are not recognized as obstacles by traditional CNNs. The studies reveal that the proposed technique can detect a human at ranges of up to 90 m while maintaining real-time operation in agricultural settings.
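The background-subtraction component of such anomaly-based pipelines can be sketched with an off-the-shelf subtractor, as below. This uses OpenCV’s MOG2 model as a generic stand-in for the method in [41] (the CNN stage is omitted), and the video file name and foreground-pixel threshold are hypothetical.

```python
import cv2

# MOG2 background subtractor: pixels deviating from the learned background
# model are marked as foreground. Parameters are illustrative defaults.
subtractor = cv2.createBackgroundSubtractorMOG2(history=300, varThreshold=16)

cap = cv2.VideoCapture("field_drive.mp4")  # hypothetical field recording
while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)  # nonzero pixels deviate from the background
    # Count foreground pixels as a crude per-frame anomaly score.
    if cv2.countNonZero(mask) > 5000:
        print("possible obstacle entering the scene")
cap.release()
```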
Figure 9 displays the total number of each sensor/device used in all of the articles to improve the safety of automated agricultural machines. The most used sensor/device was stereo vision cameras, which were used 23 times. Another commonly used sensor was the 3D LiDAR (16 times), as well as thermal cameras, radars, and 3D laser scanners (nine times each). RGB cameras were used eight times, 120-degree cameras were used four times, and monocular vision cameras and ultrasonic sensors were used two times. All other sensors/devices (i.e., trinocular stereo vision cameras, HD webcams, ToF cameras, 2D laser scanners, 3D cameras, web cameras, 360-degree cameras, omnidirectional stereo vision cameras, and high-resolution CCD cameras) were only used once.

3.1.3. Datasets and Algorithms

With the introduction of numerous high-quality datasets of urban and highway driving, person detection from automobiles has advanced quickly in recent years; however, no large-scale benchmark is available for the same problem in off-road or agricultural contexts. To enhance research in agricultural settings, Pezzementi et al. [42] proposed the National Robotics Engineering Center (NREC) Agricultural Person-Detection Dataset. Two perception platforms (a tractor and a pickup truck) were used to collect the data. The dataset consists of 19,000 sampled person-free photos and 76,000 labeled person images. The dataset highlights several key challenges of the domain, including varying environments, substantial occlusion by vegetation, people in motion and nonstandard poses, and people seen from several different distances. Furthermore, Kragh et al. [43] presented a multimodal dataset for recognizing agricultural obstacles. A platform of sensors/cameras, including a stereo camera, thermal camera, web camera, 360-degree camera, LiDAR, and radar, was mounted on a tractor. The tractor was used to mow the grass on a 3.3-hectare field featuring several static and moving obstacles. A drone was utilized to capture the positions of all of the obstacles, which were then manually labeled and synchronized with all sensor data. The whole dataset is available at https://vision.eng.au.dk/fieldsafe/ (accessed on 3 November 2022). Moreover, the lack of public datasets that address the recognition of 3D pedestrians in various agricultural situations has also held back relevant research in this area. To address this, the authors of [44] created datasets for 3D pedestrian recognition in agriculture, called “FieldSafePedestrian”, which contain field photographs taken during both the day and at night. The data can be downloaded at https://github.com/tjiiv-cprg/3D-Pedestrian-Detection-in-Farmland (accessed on 1 November 2022). All of these datasets can be used to support agricultural robot object detection research.
The type of algorithm utilized might be another way to make obstacle detection systems more effective. Doerr et al. [45] claim that effective and robust computational methods are essential for detecting image features via image processing or dealing with sensor data fusion to provide the basic information required by agricultural vehicle autonomous guidance systems. Therefore, the choice and implementation of methods and signal-processing algorithms are critical. Many algorithms have been proposed in the literature for the detection of foreign objects or obstacles in the operation of autonomous vehicles. Yin et al. [46] evaluated the effectiveness of different feature recognition algorithms (average height, density, connectivity, and discontinuity methods) for a specific LiDAR sensor in obstacle detection. Additionally, in the same year, a 3D camera was mounted on a field robot to detect obstructions in an agricultural setting; obstacles were distinguished from the noise and background data gathered with the 3D camera by using noise and background reduction techniques. After segmenting and extracting information about the obstacles, a clustering algorithm was utilized to determine when the field robot should slow down or halt during autonomous runs in the field. Recently, an information-processing architecture for a multimodal obstacle and environment detection and recognition approach for process evaluation in agricultural areas was created by the authors of [47]. A sensor platform, an inverse sensor model, fusion and mapping, and process assessment comprise the proposed architecture’s four components. Furthermore, for the safety of automated agricultural machines, especially those moving at reasonably high speeds, the capacity to identify obstacles quickly and respond appropriately is crucial. Sadgrove et al. [48] developed a quick obstacle detection technique to be used in complex agricultural settings, called the multiple-expert color feature extreme learning machine (MEC-ELM). The MEC-ELM can locate and categorize items quickly, achieving 84% precision and 91% recall in weed detection at 0.5 s per frame, thanks to the color implementation employed with the summed-area table (SAT).
More recently, to decrease the rates of missed detection and erroneous identification of multiple farming obstacles, and thus improve real-time detection performance, an enhanced YOLOv5 algorithm based on the k-means clustering algorithm and CIoU loss function was developed [49]. The upgraded YOLOv5 algorithm outperformed the region-based convolutional neural network (R-CNN) in terms of small-target-obstacle identification. Moreover, it is challenging to detect obstacles precisely and effectively in orchard environments since they are complex and unstructured. Therefore, to recognize common obstacles in orchards, such as people, cement columns, and utility poles, Li et al. [50] enhanced a lightweight object detection approach based on the YOLOv3 object recognition algorithm. In the model, a Gaussian model was added to enhance the detection effect, while the MobileNetV2 network was employed to decrease the running time when extracting picture features. In terms of accuracy and speed, the system surpasses previous models, such as the single-shot detector (SSD) and R-CNN. Furthermore, agricultural robots currently utilize vision-based algorithms that detect obstacles based on their classification. As a result, it is difficult to detect new classes of objects, which are frequent in agricultural settings given the variety of objects present. Convolutional autoencoders are used to recognize any items deviating from the regular pattern in the strategy suggested by Mujkic et al. [51]. In terms of object detection, the system has demonstrated superior performance to the most recent vision-based algorithms.
Contemporary research also suggests using deep learning algorithms to locate electricity lines in paddy fields to assure the safe operation of agricultural drone sprayers. The tiny-YOLOv3 model was trained on a dataset of electricity lines collected in an agricultural setting, with power lines labeled using bounding boxes. The method detected electricity lines in real time at an average of 12.5 frames per second (FPS). The weakness of this method is that when the rice is not sufficiently grown, the algorithm may mistake ridges for electricity lines [52]. To improve a UAV’s perception of its surroundings and its capacity for autonomous obstacle recognition and avoidance in an agricultural setting, Wang et al. [53] integrated a stereo vision camera with a deep-learning-based object detection system. A convolutional neural network (CNN) model with the YOLOv3 object detection algorithm was trained and evaluated using a dataset, and the findings indicate that the CNN model has an average obstacle detection accuracy of 75.4%. Moreover, another study improved the classic A* algorithm used in the obstacle detection systems of plant protection UAVs by means of dynamic heuristic functions, search point optimization, and inflection point optimization based on the fusion of data from monocular cameras and millimeter wave radars. The improved algorithm increased the capability of the obstacle avoidance system by decreasing the number of grid searches, the number of turning points, and the data processing time [54].
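Several of the studies above build on YOLO-family detectors [35,49,50,52,53]. For orientation, the sketch below runs a generic pretrained YOLO model over a single frame using the Ultralytics API; the weights file and image name are assumptions, and a deployed system would use a model fine-tuned on farm obstacles as in the cited works rather than these off-the-shelf weights.

```python
from ultralytics import YOLO

# Generic pretrained weights as a stand-in for the custom farm-obstacle
# models in the cited studies; 'field_scene.jpg' is a hypothetical image.
model = YOLO("yolov8n.pt")
results = model("field_scene.jpg", conf=0.4)  # confidence threshold is illustrative

for r in results:
    for box in r.boxes:
        label = model.names[int(box.cls)]           # class name, e.g., "person"
        print(label, float(box.conf), box.xyxy.tolist())  # score and bounding box
```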

3.1.4. Sensing Strategies

The perception sensors fitted on automated agricultural machines may become less accurate due to vibration. Accordingly, a study by Periu et al. [55] found that, by reducing vibrations, a purpose-designed mounting and stabilizing system can improve the accuracy of a LiDAR sensor; however, the mean error distance between the real and detected locations of the obstacles rose as speed increased. Furthermore, for agricultural robotic equipment to operate safely, sensor robustness is essential. To determine with confidence whether a specific sensor is reliable, adequate testing must be performed; however, because of the expense and the high level of unpredictability of the agricultural environment, conducting sufficient tests is not always practical. To compare and evaluate the autonomous human detection of sensor systems for the functional safety of autonomous agricultural robots, Meltebrink et al. [56] developed a dynamic test stand method that has real environment detection areas (REDAs) for each sensor system. This test method allows for the continuous, long-term testing of commercially available sensor systems throughout the year on a dynamic test stand.
Xue et al. [57] demonstrated the potential of a velocity control approach for automated agricultural machines to prevent collisions. The control technique includes two steps: (1) a velocity generator for collision avoidance based on a cloud model and (2) collision prediction in dynamic situations with an upgraded obstacle space–time grid map. The outcomes of the field tests demonstrate the strong viability and efficiency of the suggested velocity control approach. Finally, Santos et al. [58] suggested a novel open source solution called AgRobPP-CA to autonomously perform obstacle avoidance during robot navigation. AgRobPP is an open source path planning framework based on the Robot Operating System (ROS). AgRobPP considers the robot’s center of mass and continuously checks the robot’s present trajectory for predictable collisions or potentially hazardous inclined zones. AgRobPP operates in real time to avoid nearby obstacles, allowing deviations, avoiding unforeseen obstacles, and avoiding steep slope zones that can cause the robot to roll over, as has been the case for several tractors.
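A minimal form of such velocity control is a governor that scales the commanded speed with the distance to the nearest obstacle. The sketch below is a linear stand-in for illustration only; it is not the cloud-model generator of [57], and all distances and speed limits are assumed values.

```python
def safe_velocity(d_obstacle_m: float, v_max: float = 2.5,
                  d_stop: float = 2.0, d_free: float = 15.0) -> float:
    """Scale the commanded speed linearly between a full-stop distance
    and a distance at which the path is considered free."""
    if d_obstacle_m <= d_stop:
        return 0.0                      # inside the stop zone: halt
    if d_obstacle_m >= d_free:
        return v_max                    # path clear: full working speed
    return v_max * (d_obstacle_m - d_stop) / (d_free - d_stop)

for d in (1.0, 5.0, 20.0):
    print(d, round(safe_velocity(d), 2))  # 0.0, then reduced, then full speed
```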
In conclusion, in this first sub-section of the review, existing research exploring the usage of automated agricultural machines’ perception systems to improve their safety was presented. The studies have focused on four main research areas: (1) The use of a single perception sensor to detect and avoid obstacles. Given the complexity of agricultural settings, it is challenging for a single perception sensor to effectively identify all types of obstacles. (2) The use of multiple perception sensors. Studies have shown that they result in considerable improvements in obstacle detection and avoidance in comparison to single perception sensors. (3) The availability of datasets of agricultural environments (including all types of obstacles) to help automated agricultural machines’ perception systems be sufficiently trained and subsequently perform well in detecting obstacles. Studies have started creating and making such datasets available. Likewise, studies have also focused on improving the types of algorithms used to manipulate data. (4) Finally, external methods intended to improve automated agricultural machines’ perception systems, such as by reducing vibrations and velocity controls.

3.2. Risk Assessment and Risk Mitigation

Another way to improve the safety of automated agricultural machines is through risk assessment and risk mitigation. There are several types of risk assessment and safety analysis methods, including fault tree analysis (FTA), failure mode and effects analysis (FMEA), human factors analysis and classification system (HFACS), and hazard analysis and risk assessment (HARA) [59].
While FMEA is a commonly used and recognized tool in engineering, it has various limitations, including subjectivity, time, cost, and information management. It also relies strongly on prior knowledge, such as experience with design engineering and risk assessment procedures, potential applications or misuses of machines, previous “accidents” and other historical occurrences affecting the equipment, and common design concerns [60]. This reliance on pre-existing knowledge poses a substantial barrier to the efficacy and usability of FMEA when dealing with new and unique technologies, such as automated agricultural machines. In line with this issue, the studies [59,61] explored whether historical data involving non-automated machinery could be used to validate the numerical values assigned during an FMEA assessment of automated agricultural machines. The authors concluded that using existing historical data to inform the values assigned in an FMEA of automated agricultural machines would not be appropriate. Additionally, Sandner [61] investigated the possibility of using historical fatality data from non-automated machinery to improve the efficacy of hazard analysis and risk assessment (HARA) when conducting a risk assessment of automated agricultural technology, determining that using an operating scenario generator based on historical data to examine automated agricultural machines improves the efficacy of HARA.
International Organization for Standardization (ISO) standards have been used worldwide for several decades to manage risk and safety. In general, types A, B, and C make up the majority of safety standards in the machinery industry. Type A standards are basic safety standards covering basic concepts, design principles, and general aspects that can be applied to all machinery. Type B standards are generic safety standards that cover safety aspects or one type of safeguard that can be used across a wide range of machinery. Type C standards are machine safety standards dealing with detailed safety requirements for a particular machine or group of machines. Shutske et al. [62] studied which standards and risk assessment methods are commonly referenced by professionals working with automated agricultural machines in industry. In their investigation, eight standards were taken into account: ISO 12100:2010 [63], ISO 25119-1:2018 [64], ISO 25119-2:2018 [65], ISO 25119-3:2018 [66], ISO 25119-4:2018 [67], ISO 18497 [68], ANSI S318.18 [69], and ANSI S354.7 [70]. Overall, most professionals used ISO 12100:2010 when working with automated agricultural machinery. This is not surprising, since ISO 12100:2010 is a type A standard that covers a general overview of machinery safety. In addition, most individuals use fewer risk analysis methods and techniques with highly automated machinery than they would when analyzing non-automated machinery.
Risk assessment is pertinent to the application of type C standards, which convert the provided design principles into specific, detailed machine requirements; type C standards contain precise safety criteria for a single machine or group of machines [12]. One type C standard for highly automated agricultural machines is ISO 18497. To achieve safe operation, this document outlines design guidelines for highly automated agricultural equipment operations. It covers all significant risks, dangerous circumstances, and occurrences, including those that call for a particular response and are mechanical, electrical, or mobility-related. The ISO 18497 standard contains guidelines for ensuring the safety of automated agricultural machinery. The recommendations are divided into two groups: (1) safety requirements as well as protective or risk reduction measures, and (2) the verification and validation of safety requirements as well as protective or risk reduction measures. Moreover, all safety requirements can be divided into four safety systems: perception, safeguarding, control, and supervisory systems. Examples of safety requirements include measures for machine enabling operations, operational processes, machine operational status, machine operational speeds, perception systems, the verification of minimal system perception as well as safety performance, and remote halting.
However, several factors were missed in the standard. The requirements, for example, only apply to “field operations”, not to scenarios or settings in which human operators, service staff, or others might carry out repairs, travel on roadways, or mount/dismount equipment; injuries sustained during these operations have been recorded and confirmed in previous studies [71,72]. Furthermore, the standard does not cover operations in farmyards or barns, nor on public roads. Additionally, it makes no provision for maintenance programs, whether executed in the field or in a traditional maintenance shop. These limitations were also mentioned in a more recent study [73]. Nevertheless, this standard is currently under revision to provide substantial improvements.
Under the general requirement of operational procedures, it is written that “It shall not be possible to enable highly automated operation without the perception system confirming that the hazard zone is obstacle-free”. The perception system refers to the perception sensors (LiDAR, radar, stereovision, etc.) explored above in the section on obstacle detection; the perception sensors represent the eyes of the automated agricultural machine. The obstacle is defined by the standard as a barrel-shaped object with a height of 80 cm and a width of 38 cm that is intended to simulate a human in a seated position (Figure 10). Steen et al. [74] questioned the validity of the standard for ensuring the safety of highly automated agricultural machinery from an obstacle detection perspective. To detect the barrel as specified by the standard, the authors first trained a deep convolutional network. Second, the algorithm’s detectability of the ISO obstacle in row crops and grass mowing was assessed. The results indicated that the program could detect the ISO obstacle with nearly 100% precision in row crops and 90.8% precision in grass mowing, while not detecting people and other very distinct obstacles such as animals. The key outcome of this research is that the obstacle presented in the standard is not fully adequate for ensuring the safe operation of automated agricultural machinery.
In addition, obstacles are not the only hazards present during field operations. Uneven terrain, such as slopes, and avalanches could lead to autonomous vehicle rollovers. Unfortunately, these critical aspects have not been addressed within the standard.
Another study, conducted by Basu et al. [75], suggested a conceptual regulatory framework to reduce the risks associated with small automated agricultural machines from the perspective of a practical and self-contained engineering guide while simultaneously fostering innovation. The main features of the proposed legal framework are as follows: (1) Multiple parties may be held jointly and severally liable for the use or operation of an automated agricultural machine. (2) Every law that establishes obligations contains equivalent defenses for avoiding obligations or minimizing damages. (3) A few defenses specifically address the peculiarities of an automated system. (4) Unless the law specifically prohibits it, contracts can be used to define the obligations and rights of individual parties and shield them from liability. (5) The utility of the activity and its social as well as economic value must be considered when a court decides to award damages for loss and injury.
From a practical standpoint, because the machines must be taken out to the field to be checked, testing every piece of software and hardware that goes into automated agricultural machines is difficult, risky, and time-consuming. To address these issues, Kelber et al. [76] developed a method called hardware-in-the-loop (HIL) validation that makes it possible to verify and validate automated agricultural machinery’s software in a reliable, safe, and adequate manner. Two physical devices are required for hardware-in-the-loop validation: a machine simulator that executes a mathematical model of the automated agricultural machine in real time and an HIL rack that houses all of the machine hardware as well as software. Automated agricultural machines underwent hardware-in-the-loop validation prior to field testing, and the results showed good consistency between the two test techniques. Repeatability, test coverage assurance, economic viability, testing time reduction, risk minimization, and anomalous behavior identification are the main benefits of hardware-in-the-loop approaches.
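A toy illustration of the closed HIL loop, in which the software under test exchanges commands and states with a real-time plant model, is sketched below. Both the simulator dynamics and the controller are hypothetical stand-ins, far simpler than the machine models described in [76].

```python
import time

class MachineSimulator:
    """Toy real-time plant model standing in for the machine simulator of an HIL rig."""
    def __init__(self):
        self.position_m = 0.0
    def step(self, velocity_cmd: float, dt: float) -> float:
        self.position_m += velocity_cmd * dt  # integrate commanded velocity
        return self.position_m

def controller(position_m: float, target_m: float = 50.0) -> float:
    """Software under test: proportional speed command toward a target, clamped."""
    return max(0.0, min(2.0, 0.1 * (target_m - position_m)))

sim, dt = MachineSimulator(), 0.1
for _ in range(20):                      # a few control cycles of the closed loop
    pos = sim.step(controller(sim.position_m), dt)
    time.sleep(dt)                       # pace the loop in (approximate) real time
print(f"final position: {pos:.2f} m")
```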
In conclusion, in this second sub-section of the review, existing research exploring risk assessment and hazard analysis efforts to improve the safety of automated agricultural machines was presented. Overall, since the current risk assessment and hazard analysis methods rely on pre-existing knowledge of the equipment, they cannot be effective when dealing with new and revolutionary technologies for which pre-existing data are not available, such as automated agricultural machines. Moreover, the guidelines provided by the International Organization for Standardization (ISO 18497) and existing research exploring these guidelines to improve the safety of automated agricultural machines were presented. While the recommendations provided by the standard are valuable, they only target field operations, overlooking other operations, including repairs, travel on roadways, mounting/dismounting equipment, and work in farmyards and barns, and they lack guidelines for dealing with hazardous deformable terrain. Overall, full compliance with the current requirements provided in ISO 18497 cannot guarantee the safety of automated agricultural machines. Finally, a regulatory framework and the ability to test the functionalities of automated agricultural machines within a reliable software environment are efficient ways to mitigate risks.

3.3. Human Factors and Ergonomics

Understanding the human factor perspective during human–robot interaction remains a potential method with which to improve the safety of automated agricultural machines. To evaluate and improve the human factor design of a previously developed grafting robot, ergonomic analysis software was used to create computer-simulated human models [77]. The models were then integrated with a real working environment during grafting operations under various working conditions. The studies revealed that the robot needs to be redesigned to include leg space and to adjust both the distance between the rootstock and scion and the location of the operator seat; these changes will significantly reduce the lower back and upper limb stresses of the potential operator [77]. Recently, Bashiri et al. [78] used a tractor driving simulator to assess operators’ situational awareness when working with agricultural semiautonomous vehicles. Overall, the authors found that as the automation level increases, the level of operator situation awareness decreases. Thus, considering the level of human situation awareness and the robot’s level of automation could improve safety and work efficiency during human–agricultural robot collaborative and cooperative operations.
More recently, researchers conducted field studies in both open and enclosed conditions with strawberry harvesters, evaluating their work alongside that of a Thorvald robot. The research characterized interactions between the robot performing an in-field transportation task and human fruit pickers. From a safety and ergonomic perspective, the information gathered in this study can be used to improve the design of automated agricultural machinery, allowing safer human–robot collaboration [79]. In addition, Benos et al. [80] looked at the safety and ergonomics of human–robot interaction in agricultural operations. The authors highlighted various hazards that could jeopardize human safety, measures for reducing the risk of injury, and methods for safe collaboration. Moreover, given that a crucial element in accomplishing safe human–robot interaction is human awareness, Anagnostis et al. [81] believe that providing the activity “signatures” of the workers has the potential to increase human awareness during human–robot interaction, thus contributing toward establishing safe human–robot interaction. Therefore, the authors investigated human activity variations during human–robot collaboration in an agricultural setting by using a variety of wearable sensors. The dataset was made publicly available (https://ibo.certh.gr/open-datasets/ (accessed on 7 November 2022)) for future use in ergonomic analyses and machine learning models for human–robot interaction in agricultural environments.
In conclusion, in this third sub-section of the review, existing research exploring human–robot interaction from an ergonomic perspective to improve the safety of automated agricultural machines was presented. Overall, (1) testing the interaction between an automated agricultural machine and a human within a simulated environment helps identify safety concerns associated with the design of the machine in advance, and subsequently provides improvement before the machine reaches the users. (2) Knowing the foreseeable human activity related to a particular task when interacting with an automated agricultural machine is essential to improve the design of the machine and subsequently ensure safe human–robot interaction.

4. Discussion

The main objective of this study was to survey, organize, review, and summarize eleven years of work addressing the safety of automated agricultural machinery and to set out directions for potential future research opportunities. In this paper, previous efforts were reviewed and organized into three categories: (1) environmental perception, (2) risk assessment as well as risk mitigation, and (3) human factors as well as ergonomics.
  • Environmental perception: The studies have focused on four primary methods: (1) The use of a single perception sensor to detect and avoid obstacles. (2) The use of multiple sensors for obstacle detection and avoidance. (3) The availability of datasets of agricultural environments (including all types of obstacles) to help automated agricultural machines’ perception systems be sufficiently trained and subsequently perform well in detecting obstacles. (4) External methods of improving sensor performance, such as vibration reduction and velocity control. Finally, given that most of these prior research efforts focused exclusively on obstacle detection while overlooking the subsequent reaction of the automated machine, future research should expand on how automated agricultural machines should robustly avoid obstacles upon detection and continue performing their tasks without downtime.
  • Risk assessment, hazard analysis, and standards: Overall, since the current risk assessment and hazard analysis methods rely on pre-existing knowledge of equipment, they cannot be effective when dealing with new and revolutionary technologies for which pre-existing data are not available, such as automated agricultural machines. A promising future research direction is the development of real-time situational safety risk assessment methods for automated agricultural machines. While the recommendations provided by the ISO 18497 standard are valuable, they only apply to field operations, overlooking other operations, including repairs, travel on roadways, mounting/dismounting equipment, and work in farmyards and barns. In addition, the standard is missing guidelines with which to deal with hazardous deformable terrain. Overall, full compliance with the current requirements provided in ISO 18497 cannot guarantee the safety of automated agricultural machines. For future directions, the requirements provided in the ISO 18497 standard should be (1) improved to make sure that complying with the guidelines effectively ensures the safety of machines and (2) expanded to cover all other potential areas where automated agricultural machines are likely to be used. ISO 18497 is currently under revision, and a future version is expected to be released. Regulatory frameworks and the ability to test the functionalities of automated agricultural machines within a reliable software environment are efficient ways to mitigate risks.
  • Human factors and ergonomics: Overall, (1) testing the interaction between an automated agricultural machine and a human within a simulated environment helps identify safety concerns associated with the design of the machine in advance, and subsequently enables improvement before the machine reaches users. (2) Knowing the foreseeable human activity related to a particular task when interacting with an automated agricultural machine is essential to improve the design of the machine and subsequently ensure safe human–robot interaction. For future directions, given the wide variety of human body sizes, a representative range of human dimensions should be considered in future simulated environment experiments.

5. Conclusions

Undoubtedly, automated agricultural machines will become inseparable parts of modern farms; however, without robust safety systems, the full deployment of automated agricultural machines will be delayed or even jeopardized. As a result, diverse and significant research efforts have been carried out towards this end. This review provides a holistic, synthesized overview of previous research efforts and sets out future research directions. The outcome is that the perception systems of automated agricultural machines, risk assessment as well as risk mitigation techniques, and human factors as well as ergonomics have all been explored to make automated agricultural machines safer. Overall, ergonomists, safety engineers, physicians, manufacturers, Internet of Things developers, governmental officials, and international organizations must work together to make automated agricultural machines safer.

Author Contributions

Conceptualization, all authors; methodology, all authors; software, all authors; validation, all authors; formal analysis, all authors; investigation, all authors; resources, all authors; writing—original draft preparation, G.R.A.; writing—review and editing, all authors; visualization, all authors; supervision, S.F.I.; project administration, S.F.I. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. United Nations (UN). World Population Prospects 2019. 2019. Available online: http://www.ncbi.nlm.nih.gov/pubmed/12283219 (accessed on 3 December 2022).
  2. Food and Agriculture Organization of the United Nations. Sustainable Development Goals. Available online: https://www.fao.org/sustainable-development-goals/indicators/211/en/ (accessed on 1 December 2022).
  3. FAO. Migration, Agriculture and Rural Development. Addressing the Root Causes of Migration and Harnessing its Potential for Development. 2016. Available online: http://www.fao.org/3/a-i6064e.pdf (accessed on 3 December 2022).
  4. Ramin, S.R.; Weltzien, C.A.; Hameed, I. Research and development in agricultural robotics: A perspective of digital farming. Int. J. Agric. Biol. Eng. 2018, 11, 4278. [Google Scholar] [CrossRef]
  5. Lytridis, C.; Kaburlasos, V.G.; Pachidis, T. An overview of cooperative robotics in agriculture. Agronomy 2021, 11, 1818. [Google Scholar] [CrossRef]
  6. Drewry, J.L.; Shutske, J.M.; Trechter, D.; Luck, B.D.; Pitman, L. Assessment of digital technology adoption and access barriers among crop, dairy and livestock producers in Wisconsin. Comput. Electron. Agric. 2019, 165, 104960. [Google Scholar] [CrossRef]
  7. Rial-Lovera, R. Agricultural Robots: Drivers, barriers and opportunities for adoption. In Proceedings of the 14th International Conference on Precision Agriculture, Montreal, QC, Canada, 24–27 June 2018; pp. 1–5. [Google Scholar]
  8. Page, M.J.; McKenzie, J.E.; Bossuyt, P.M.; Boutron, I.; Hoffmann, T.C.; Mulrow, C.D. The PRISMA 2020 statement: An updated guideline for reporting systematic reviews. BMJ 2021, 372, n71. [Google Scholar] [CrossRef] [PubMed]
  9. Tranfield, D.; Denyer, D.; Smart, P. Towards a Methodology for Developing Evidence-Informed Management Knowledge by Means of Systematic Review. Br. J. Manag. 2003, 14, 207–222. [Google Scholar] [CrossRef]
  10. Gualtieri, L.; Rauch, E.; Vidoni, R. Emerging research fields in safety and ergonomics in industrial collaborative robotics: A systematic literature review. Robot. Comput. Integr. Manuf. 2020, 67, 101998. [Google Scholar] [CrossRef]
  11. Pishgar, M.; Issa, S.F.; Sietsema, M.; Pratap, P.; Darabi, H. Redeca: A novel framework to review artificial intelligence and its applications in occupational safety and health. Int. J. Environ. Res. Public Health 2021, 18, 6705. [Google Scholar] [CrossRef]
  12. Reina, G.; Milella, A.; Rouveure, R.; Nielsen, M.; Worst, R.; Blas, M.R. Ambient awareness for agricultural robotic vehicles. Biosyst. Eng. 2016, 146, 114–132. [Google Scholar] [CrossRef]
  13. Lee, C.; Stefan Lang, A.; Heinz, B. Designing a Perception System for Safe Autonomous Operations in Agriculture. In Proceedings of the American Society of Agricultural and Biological Engineers Annual International Meeting 2019, Boston, MA, USA, 7–10 July 2019. [Google Scholar] [CrossRef]
  14. Freitas, G.; Hamner, B.; Bergerman, M.; Singh, S. A practical obstacle detection system for autonomous orchard vehicles. In Proceedings of the IEEE International Conference on Intelligent Robots and Systems, Vilamoura-Algarve, Portugal, 7–12 October 2012; pp. 3391–3398. [Google Scholar] [CrossRef]
  15. Kohanbash, D.; Bergerman, M.; Lewis, K.M.; Moorehead, S.J. A safety architecture for autonomous agricultural vehicles. In Proceedings of the American Society of Agricultural and Biological Engineers Annual International Meeting 2012, Dallas, TX, USA, 29 July–1 August 2012; pp. 686–694. [Google Scholar] [CrossRef]
  16. Bellone, M.; Reina, G.; Giannoccaro, N.I.; Spedicato, L. 3D traversability awareness for rough terrain mobile robots. Sens. Rev. 2014, 34, 220–232. [Google Scholar] [CrossRef]
  17. Reina, G.; Milella, A.; Underwood, J. Self-learning classification of radar features for scene understanding. Robot. Auton. Syst. 2012, 60, 1377–1388. [Google Scholar] [CrossRef]
  18. Dvorak, J.S.; Stone, M.L.; Self, K.P. Object detection for agricultural and construction environments using an ultrasonic sensor. J. Agric. Saf. Health 2016, 22, 107–119. [Google Scholar] [CrossRef] [PubMed]
  19. Yang, L.; Noguchi, N. Human detection for a robot tractor using omni-directional stereo vision. Comput. Electron. Agric. 2012, 89, 116–125. [Google Scholar] [CrossRef]
  20. Ross, P.; English, A.; Ball, D.; Upcroft, B.; Corke, P. Online novelty-based visual obstacle detection for field robotics. In Proceedings of the IEEE International Conference on Robotics and Automation, Seattle, WA, USA, 26–30 May 2015; pp. 3935–3940. [Google Scholar] [CrossRef]
  21. Fleischmann, P.; Berns, K. A stereo vision based obstacle detection system for agricultural applications. In Field and Service Robotics: Results of the 10th International Conference; Springer: Cham, Switzerland, 2016. [Google Scholar] [CrossRef]
  22. Campos, Y.; Sossa, H.; Pajares, G. Spatio-temporal analysis for obstacle detection in agricultural videos. Appl. Soft Comput. J. 2016, 45, 86–97. [Google Scholar] [CrossRef]
  23. Yan, J.; Liu, Y. A Stereo Visual Obstacle Detection Approach Using Fuzzy Logic and Neural Network in Agriculture. In Proceedings of the 2018 IEEE International Conference on Robotics and Biomimetics, Kuala Lumpur, Malaysia, 12–15 December 2018; pp. 1539–1544. [Google Scholar] [CrossRef]
  24. Oksanen, T. Laser scanner based collision prevention system for autonomous agricultural tractor. Agron. Res. 2015, 13, 167–172. [Google Scholar]
  25. Inoue, K.; Kaizu, Y.; Igarashi, S.; Imou, K. The development of autonomous navigation and obstacle avoidance for a robotic mower using machine vision technique. IFAC Pap. 2019, 52, 173–177. [Google Scholar] [CrossRef]
  26. Discant, A.; Rogozan, A.; Rusu, C.; Bensrhair, A. Sensors for obstacle detection—A survey. In Proceedings of the ISSE 2007—30th International Spring Seminar on Electronics Technology 2007, Cluj-Napoca, Romania, 9–13 May 2007; pp. 100–105. [Google Scholar] [CrossRef]
  27. Rouveure, R.; Nielsen, M.; Petersen, A. The QUAD-AV Project: Multi-sensory approach for obstacle detection in agricultural autonomous robotics. In Proceedings of the International Conference of Agricultural Engineering CIGR-AgEng 2012, Valencia, Spain, 8–12 July 2012; pp. 1–6. [Google Scholar]
  28. Christiansen, P.; Hansen, M.K.; Steen, K.A.; Karstoft, H.; Jorgensen, R.N. Advanced sensor platform for human detection and protection in autonomous farming. Precis. Agric. 2015, 15, 291–297. [Google Scholar] [CrossRef]
  29. Christiansen, P.; Kragh, M.; Steen, K.A.; Karstoft, H.; Jørgensen, R.N. Platform for evaluating sensors and human detection in autonomous mowing operations. Precis. Agric. 2017, 18, 350–365. [Google Scholar] [CrossRef]
  30. Ross, P.; English, A.; Ball, D.; Upcroft, B.; Wyeth, G.; Corke, P. Novelty-based visual obstacle detection in agriculture. In Proceedings of the IEEE International Conference on Robotics and Automation, Hong Kong, China, 31 May–7 June 2014; pp. 1699–1705. [Google Scholar] [CrossRef]
  31. Nissimov, S.; Goldberger, J.; Alchanatis, V. Obstacle detection in a greenhouse environment using the Kinect sensor. Comput. Electron. Agric. 2015, 113, 104–115. [Google Scholar] [CrossRef]
  32. Ball, D.; Ross, P.; English, A.; Patten, T.; Upcroft, B.; Fitch, R.; Sukkarieh, S.; Wyeth, G.; Corke, P. Robotics for sustainable broad-acre agriculture. In Field and Service Robotics: Results of the 9th International Conference, Brisbane, Australia, 9–11 December 2013; Springer Tracts in Advanced Robotics; Springer: Berlin, Germany, 2015; Volume 105. [Google Scholar] [CrossRef]
  33. Franzius, M.; Dunn, M.; Einecke, N.; Dirnberger, R. Embedded Robust Visual Obstacle Detection on Autonomous Lawn Mowers. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, Honolulu, HI, USA, 21–26 July 2017; pp. 361–369. [Google Scholar] [CrossRef]
  34. Kragh, M.; Underwood, J. Multimodal obstacle detection in unstructured environments with conditional random fields. J. Field Robot. 2020, 37, 53–72. [Google Scholar] [CrossRef]
  35. Jung, T.H.; Cates, B.; Choi, I.K.; Lee, S.H.; Choi, J.M. Multi-camera-based person recognition system for autonomous tractors. Designs 2020, 4, 54. [Google Scholar] [CrossRef]
  36. Skoczeń, M.; Ochman, M.; Spyra, K. Obstacle detection system for agricultural mobile robot application using rgb-d cameras. Sensors 2021, 21, 5292. [Google Scholar] [CrossRef]
  37. Reina, G.; Milella, A.; Halft, W.; Worst, R. LIDAR and stereo imagery integration for safe navigation in outdoor settings. In Proceedings of the 2013 IEEE International Symposium on Safety, Security, and Rescue Robotics, Linköping, Sweden, 21–26 October 2013. [Google Scholar] [CrossRef]
  38. Milella, A.; Reina, G.; Underwood, J. A Self-learning Framework for Statistical Ground Classification using Radar and Monocular Vision. J. Field Robot. 2014, 33, 20–41. [Google Scholar] [CrossRef]
  39. Reina, G.; Milella, A.; Galati, R. Terrain assessment for precision agriculture using vehicle dynamic modelling. Biosyst. Eng. 2017, 162, 124–139. [Google Scholar] [CrossRef]
  40. Kragh, M.; Jørgensen, R.N.; Pedersen, H. Object detection and terrain classification in agricultural fields using 3d lidar data. Lect. Notes Comput. Sci. 2015, 9163, 188–197. [Google Scholar] [CrossRef]
  41. Christiansen, P.; Nielsen, L.N.; Steen, K.A.; Jørgensen, R.N.; Karstoft, H. DeepAnomaly: Combining background subtraction and deep learning for detecting obstacles and anomalies in an agricultural field. Sensors 2016, 16, 1904. [Google Scholar] [CrossRef]
  42. Pezzementi, Z.; Tabor, T.; Hu, P. Comparing apples and oranges: Off-road pedestrian detection on the National Robotics Engineering Center agricultural person-detection dataset. J. Field Robot. 2018, 35, 545–563. [Google Scholar] [CrossRef]
  43. Kragh, M.F.; Christiansen, P.; Laursen, M.S. FieldSAFE: Dataset for obstacle detection in agriculture. Sensors 2017, 17, 2579. [Google Scholar] [CrossRef]
  44. Tian, W.; Deng, Z.; Yin, D.; Zheng, Z.; Huang, Y.; Bi, X. 3D Pedestrian Detection in Farmland By Monocular Rgb Image and Far-Infrared Sensing. Remote Sens. 2021, 13, 2896. [Google Scholar] [CrossRef]
  45. Doerr, Z.; Mohsenimanesh, A.; Laguë, C.; McLaughlin, N.B. Application of the LIDAR technology for obstacle detection during the operation of agricultural vehicles. Can. Biosyst. Eng. 2013, 55, 9–17. [Google Scholar] [CrossRef]
  46. Yin, X.; Noguchi, N.; Ishii, K. Development of an obstacle avoidance system for a field robot using a 3D camera. Eng. Agric. Environ. Food 2013, 6, 41–47. [Google Scholar] [CrossRef]
  47. Korthals, T.; Kragh, M.; Christiansen, P.; Karstoft, H.; Jørgensen, R.N.; Rückert, U. Multi-modal detection and mapping of static and dynamic obstacles in agriculture for process evaluation. Front. Robot. AI 2018, 5, 28. [Google Scholar] [CrossRef] [PubMed]
  48. Sadgrove, E.J.; Falzon, G.; Miron, D.; Lamb, D.W. Real-time object detection in agricultural/remote environments using the multiple-expert colour feature extreme learning machine (MEC-ELM). Comput. Ind. 2018, 98, 183–191. [Google Scholar] [CrossRef]
  49. Xue, J.; Cheng, F.; Li, Y.; Song, Y.; Mao, T. Detection of Farmland Obstacles Based on an Improved YOLOv5s Algorithm by Using CIoU and Anchor Box Scale Clustering. Sensors 2022, 22, 1790. [Google Scholar] [CrossRef] [PubMed]
  50. Li, Y.; Li, M.; Qi, J.; Zhou, D.; Zou, Z.; Liu, K. Detection of typical obstacles in orchards based on deep convolutional neural network. Comput. Electron. Agric. 2021, 181, 105932. [Google Scholar] [CrossRef]
  51. Mujkic, E.; Philipsen, M.P.; Moeslund, T.B.; Christiansen, M.P.; Ravn, O. Anomaly Detection for Agricultural Vehicles Using Autoencoders. Sensors 2022, 22, 3608. [Google Scholar] [CrossRef] [PubMed]
  52. Son, H.S.; Kim, D.K.; Yang, S.H.; Choi, Y.K. Real-Time Power Line Detection for Safe Flight of Agricultural Spraying Drones Using Embedded Systems and Deep Learning. IEEE Access 2022, 10, 54947–54956. [Google Scholar] [CrossRef]
  53. Wang, D.; Li, W.; Liu, X.; Li, N.; Zhang, C. UAV environmental perception and autonomous obstacle avoidance: A deep learning and depth camera combined solution. Comput. Electron. Agric. 2020, 175, 105523. [Google Scholar] [CrossRef]
  54. Huang, X.; Dong, X.; Ma, J. The improved A* Obstacle avoidance algorithm for the plant protection UAV with millimeter wave radar and monocular camera data fusion. Remote Sens. 2021, 13, 3364. [Google Scholar] [CrossRef]
  55. Periu, C.F.; Mohsenimanesh, A.; Laguë, C.; McLaughlin, N.B. Isolation of vibrations transmitted to a LIDAR sensor mounted on an agricultural vehicle to improve obstacle detection. Can. Biosyst. Eng. 2013, 55, 2.33–2.42. [Google Scholar] [CrossRef]
  56. Meltebrink, C.; Ströer, T.; Wegmann, B.; Weltzien, C.; Ruckelshausen, A. Concept and realization of a novel test method using a dynamic test stand for detecting persons by sensor systems on autonomous agricultural robotics. Sensors 2021, 21, 2315. [Google Scholar] [CrossRef]
  57. Xue, J.; Xia, C.; Zou, J. A velocity control strategy for collision avoidance of autonomous agricultural vehicles. Auton. Robot. 2020, 44, 1047–1063. [Google Scholar] [CrossRef]
  58. Santos, L.C.; Santos, F.N.; Valente, A.; Sobreira, H.; Sarmento, J.; Petry, M. Collision Avoidance Considering Iterative Bézier Based Approach for Steep Slope Terrains. IEEE Access 2022, 10, 25005–25015. [Google Scholar] [CrossRef]
  59. Shutske, J.M. Risk Assessment of Autonomous Agricultural Machinery by Industry Designers; College of Agricultural and Life Sciences, University of Wisconsin–Madison: Madison, WI, USA, 2021. [Google Scholar]
  60. Spreafico, C.; Russo, D.; Rizzi, C. A state-of-the-art review of FMEA/FMECA including patents. Comput. Sci. Rev. 2017, 25, 19–28. [Google Scholar] [CrossRef]
  61. Sandner, K.J. The Efficacy of Risk Assessment Tools and Standards Pertaining to Highly Automated Agricultural Machinery. Master’s Thesis, Department of Biological Systems Engineering, University of Wisconsin, Madison, WI, USA, 2021. [Google Scholar]
  62. Shutske, J.M.; Sandner, K.; Jamieson, Z. Risk Assessment Methods for Autonomous Agricultural Machines: Review of Current Practices and Future Needs. Appl. Eng. Agric. 2023, 39, 109–120. [Google Scholar] [CrossRef]
  63. ISO 12100; Safety of Machinery—General Principles for Design—Risk Assessment and Risk Reduction. International Organization for Standardization: Geneva, Switzerland, 2010. Available online: https://www.iso.org/standard/51528.html (accessed on 10 December 2022).
  64. ISO 25119-1; Tractors and Machinery for Agriculture and Forestry—Safety-Related Parts of Control Systems—Part 1, General Principles for Design and Development. International Organization for Standardization: Geneva, Switzerland, 2018. Available online: https://www.iso.org/standard/69025.html (accessed on 10 December 2022).
  65. ISO 25119-2; Tractors and Machinery for Agriculture and Forestry—Safety-Related Parts of Control Systems—Part 2, Concept Phase. International Organization for Standardization: Geneva, Switzerland, 2018. Available online: https://www.iso.org/standard/69026.html (accessed on 10 December 2022).
  66. ISO 25119-3; Tractors and Machinery for Agriculture and Forestry—Safety-Related Parts of Control Systems—Part 3, Series Development; Hardware and Software. International Organization for Standardization: Geneva, Switzerland, 2018. Available online: https://www.iso.org/standard/69027.html (accessed on 10 December 2022).
  67. ISO 25119-4; Tractors and Machinery for Agriculture and Forestry—Safety-Related Parts of Control Systems—Part 4, Production; Operation; Modification and Supporting Processes. International Organization for Standardization: Geneva, Switzerland, 2018. Available online: https://www.iso.org/standard/69028.html (accessed on 10 December 2022).
  68. ISO 18497; Agricultural Machinery and Tractors—Safety of Highly Automated Agricultural Machines—Principles for Design. International Organization for Standardization: Geneva, Switzerland, 2018. Available online: https://www.iso.org/standard/62659.html (accessed on 10 December 2022).
  69. ANSI S318.18; Safety for Agricultural Field Equipment. ANSI: Washington, DC, USA, 2017.
  70. ANSI S354.7; Safety for Farmstead Equipment. ANSI: Washington, DC, USA, 2018.
  71. Shutske, J.M. Agricultural automation & autonomy: Safety and risk assessment must be at the forefront. J. Agromed. 2023, 28, 5–10. [Google Scholar]
  72. Lee, T.Y.; Gerberich, S.G.; Gibson, R.W.; Carr, W.P.; Shutske, J.; Renier, C.M. A population-based study of tractor-related injuries: Regional rural injury study-I (RRIS-I). J. Occup. Environ. Med. 1996, 38, 782–793. [Google Scholar] [CrossRef] [PubMed]
  73. Aby, G.R.; Issa, S.F.; Chowdhary, G. Safety Risk Assessment of a Highly Automated Agricultural Machine. In Proceedings of the 2022 ASABE Annual International Meeting, Houston, TX, USA, 19–20 July 2022; pp. 2–12. [Google Scholar]
  74. Steen, K.A.; Christiansen, P.; Karstoft, H.; Jørgensen, R.N. Using deep learning to challenge safety standard for highly autonomous machines in agriculture. J. Imaging 2016, 2, 6. [Google Scholar] [CrossRef]
  75. Basu, S.; Omotubora, A.; Beeson, M.; Fox, C. Legal framework for small autonomous agricultural robots. AI Soc. 2020, 35, 113–134. [Google Scholar] [CrossRef]
  76. Kelber, C.R.; Reis, B.R.R.; Figueiredo, R.M. Improving functional safety in autonomous guided agricultural self-propelled machines using hardware-in-the-loop (HIL) systems for software validation. In Proceedings of the IEEE Conference on Intelligent Transportation Systems, Rio de Janeiro, Brazil, 1–4 November 2016; pp. 1438–1444. [Google Scholar] [CrossRef]
  77. Chiu, Y.C.; Chen, S.; Wu, G.J.; Lin, Y.H. Three-dimensional computer-aided human factors engineering analysis of a grafting robot. J. Agric. Saf. Health 2012, 18, 181–194. [Google Scholar] [CrossRef]
  78. Bashiri, B. Automation and the situation awareness of drivers in agricultural semi-autonomous vehicles. Biosyst. Eng. 2014, 124, 8–15. [Google Scholar] [CrossRef]
  79. Baxter, P.; Cielniak, G.; Hanheide, M.; From, P. Safe Human-Robot Interaction in Agriculture. In Proceedings of the ACM/IEEE International Conference on Human-Robot Interaction, Chicago, IL, USA, 5–8 March 2018; pp. 59–60. [Google Scholar] [CrossRef]
  80. Benos, L.; Bechar, A.; Bochtis, D. Safety and ergonomics in human-robot interactive agricultural operations. Biosyst. Eng. 2020, 200, 55–72. [Google Scholar] [CrossRef]
  81. Anagnostis, A.; Benos, L.; Tsaopoulos, D.; Tagarakis, A.; Tsolakis, N.; Bochtis, D. Human activity recognition through recurrent neural networks for human–robot interaction in agriculture. Appl. Sci. 2021, 11, 2188. [Google Scholar] [CrossRef]
Figure 1. Flow diagram overview of consecutive stages and results at stage 1 of the systematic literature review. The symbol * indicates that this step should not be skipped.
Figure 2. Flow diagram overview of consecutive stages and results at stage 2 of the systematic literature review. The symbol * indicates that this step should not be skipped.
Figure 3. Diagram depicting a taxonomy of the existing works on the safety of automated agricultural machines with their respective number of articles.
Figure 4. Geographical distribution of articles per country per topic.
Figure 5. Distribution per year of articles from 2012 to 2022 included in the review.
Figure 6. Artificial intelligence components of automated agricultural machines. Sensors collect data from their surroundings and transmit them to an artificial intelligence agent. Machine learning algorithms convert and analyze these data before instructing actuators to perform specific actions on the environment.
Figure 7. Example of obstacles observed in agricultural environments. Positive obstacle (electrical pole, left), negative obstacle (ditch containing water, right), and moving obstacle (human, bottom). Photo credit: Images provided by Storyblocks.
Figure 8. Example of deformable terrains found in agricultural environments. Photo credit: Images provided by Storyblocks.
Figure 9. Total number of each type of sensor/device mentioned in research articles to improve the safety of automated agricultural machines based on this literature review.
Figure 10. Barrel-shaped object that is defined as an obstacle in ISO 18497 [74].
Table 1. Advantages and disadvantages of different sensor modalities for outdoor perception [12,13].

| Sensor Modality | Functional Method | Advantages | Disadvantages |
|---|---|---|---|
| Stereo vision camera | Stereo vision cameras capture wavelengths in the visible range; the images they produce recreate almost exactly what our eyes see. | Natural interpretation for humans; relatively high resolution; relatively high sampling rate | Risk of occlusions; sensitive to lighting conditions; poor performance in low-visibility conditions (rain, fog, smoke, etc.) |
| LiDAR | LiDAR is an optical remote sensing technology that measures the distance or other properties of a target by illuminating the target with light. | High accuracy and density; narrow beam spread; fast operation | High costs related to accuracy and range; some risk of occlusion; no color or texture information |
| Radar | Radars measure the distance between an emitter and an object by calculating the time of flight (ToF) of an emitted signal and its received echo. | Long range (up to 100–150 m); panoramic perception (360°); multiple targets | Three-dimensional mapping limited to pencil-beam radar; difficulty in signal processing and interpretation; no detection of small objects |
| Ultrasonic sensor | An ultrasonic sensor uses sonic waves, in the range of 20 kHz to 40 kHz, to measure the distance to an object. | Reliable in any lighting environment; little impact on the environment | Poor detection range and resolution; sensitive to the smoothness of, and angle to, obstacles |
| ToF (time-of-flight) camera | ToF cameras measure distances with modulated light based on the ToF principle. | Provides direct 3D measurements; invariant to illumination and temperature; no moving parts | Low resolution; limited field of view (FoV); relatively short detection range (depends on illumination power) |
| Thermal camera | A thermal camera forms a heat image from the heat radiation emitted by all living things. | Invariant to illumination; robust against dust and rain; detects humans and animals | Relatively low resolution; some risk of occlusion; difficulty in calibration |
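The complementary strengths and weaknesses in Table 1 are what motivate the multi-sensor perception systems discussed above. The sketch below shows one minimal way such complementarity can be exploited: a reliability-weighted fusion of per-sensor detection confidences. The sensor names, confidence values, and fog weights are illustrative assumptions loosely patterned on Table 1, not values from any reviewed system.

```python
# Hypothetical per-sensor confidences that the same candidate obstacle is real.
detections = {"stereo_camera": 0.20, "lidar": 0.70, "thermal": 0.90}

# Illustrative reliability weights under heavy fog, loosely following Table 1:
# vision degrades badly, LiDAR partially, and thermal imaging holds up.
fog_weights = {"stereo_camera": 0.2, "lidar": 0.6, "thermal": 1.0}

def fused_confidence(confidences, weights):
    """Reliability-weighted average of per-sensor detection confidences."""
    total_weight = sum(weights[s] for s in confidences)
    return sum(confidences[s] * weights[s] for s in confidences) / total_weight

print(f"fused obstacle confidence: {fused_confidence(detections, fog_weights):.2f}")
# With these numbers the fused confidence is ~0.76, dominated by the
# thermal and LiDAR returns rather than the fog-blinded camera.
```

More principled fusion schemes, such as the conditional random fields of Kragh and Underwood [34] or the multimodal mapping of Korthals et al. [47], replace such fixed weights with learned or probabilistic models.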