Automatic field phenotyping has been investigated for many years, and examples abound in the literature [8]. Looking back at some of this early literature, tractors should be regarded as the first ground vehicles to operate automatically in the field [10], even if still in a rudimentary way. In recent years, given the world's requirements for high-yielding crops, the demand for fully automated systems has been increasing. These systems include both manned and unmanned, as well as ground and aerial vehicles, all equipped with advanced sensors and sophisticated control algorithms. From those early systems to today's field robotics, the development of reliable sensing technologies, from dependable odometry to sophisticated machine vision, has played a key role in advancing agricultural automation. Thus, in the next subsections, we discuss the challenges faced by past and future research in developing a successful high-throughput plant phenotyping (HTPP) platform.
2.1. Platforms for Phenotyping
Based on the literature, platforms designed for HTPP can be grouped into two categories: those developed for indoor environments (greenhouse or laboratory) [13] and those designed for outdoor environments (field) [14]. Alternatively, platforms may be classified based on whether they collect data from a group of plants or from individual plants.
The majority of efforts in recent years have been placed on the development of HTPP platforms for plants grown in controlled environments (i.e., greenhouses). For instance, in [13], the authors introduced an automated HTPP platform, Phenoscope, to collect data from 735 individual pots brought into imaging stations by conveyor belts. Like other indoor platforms, Phenoscope is able to water, weigh, and image plants continuously, allowing for homogenized conditions for all plants. The same is true for the platform made by LemnaTec and reported in [16]. However, this indoor platform also has the ability to capture images at different wavelengths, from far infrared (FIR) to ultraviolet (UV). The platform is equipped with a system of conveyor belts capable of carrying 600 individual plants, and its imaging systems can capture 9000 images per day [16]. Each plant carries a radio frequency identification (RFID) tag for individualized data management, watering, and nutrient supplementation. Although greenhouses allow the extraction of a large number of features throughout the year, the correlation of these features with those of plants grown in the field [17] is often unsatisfactory.
Therefore, platforms for field phenotyping with various degrees of automation have emerged over the years [11]. In [20], for example, the authors introduced a tractor-pulled platform, BreedVision, consisting of various optical sensors, such as light curtains, 3D Time-of-Flight (ToF) cameras, laser range sensors, and multispectral cameras, including RGB. A similar platform was recently presented in [15], with a series of sensors including an infrared thermometer (IRT), an ultrasonic sensor, and crop canopy sensors (multispectral, NDVI, etc.). This combination of sensors allowed the system to collect both architectural and morphological information from plants. However, the vehicle employed had limited clearance, and hence, limitations on the types of plants that could be analyzed were to be expected. Also, while the system could cover a reasonably large area on each passage, its limited maneuverability and the fact that navigation was not fully autonomous constrained its throughput and its ability to move freely in the field. In that sense, a faster and more flexible approach is found in [11], thanks to its more autonomous field navigation and its individual plant phenotyping capabilities. Adapting existing equipment is not always a limiting factor, either. In [14], for example, a field-based HTPP platform was developed using a high-clearance tractor, later referred to as a Phenomobile [22]. Three types of sensors were grouped in four sets and mounted on the front boom of the tractor, enabling the simultaneous collection of data from four adjacent rows of cotton plants. Although most phenomobile platforms have a high coverage rate (in this case, 0.84 ha/h), soil compaction is often a main concern [23].
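For a boom-mounted platform, the coverage rate follows directly from the swath width and the driving speed. The sketch below illustrates the arithmetic; the swath and speed used are hypothetical round numbers, not values reported for the platform in [14]:

```python
# Hypothetical figures: a boom spanning four rows (about 4 m of swath)
# driven at roughly walking pace (0.6 m/s). These are assumptions for
# illustration only, not parameters of any surveyed platform.
def coverage_ha_per_h(swath_m: float, speed_m_s: float) -> float:
    """Area covered per hour, in hectares (1 ha = 10,000 m^2)."""
    return swath_m * speed_m_s * 3600.0 / 10000.0

rate = coverage_ha_per_h(swath_m=4.0, speed_m_s=0.6)
print(round(rate, 2))  # 0.86
```

With these assumed figures, the result lands in the same ballpark as the 0.84 ha/h cited above, showing how modest ground speeds already yield useful throughput when several rows are covered per pass.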
One way of coping with soil compaction, and of further increasing throughput, is to phenotype large groups of plants at once, even if at a lower level of detail, e.g., pixel resolution per plant. In that case, Unmanned Aerial Vehicles (UAVs) are the platform of choice for field phenotyping [24]. However, due to FAA regulations and requirements for pre-filing of flight plans, as well as UAVs' limited payload and availability, the use of UAVs for field phenotyping is still quite restricted compared to other approaches [28]. One such approach is the track-mounted platform by LemnaTec [7]. This platform consists of two 125 m long tracks placed up to 10 m apart, with a suspended structure that can be lifted up to 4.5 m in the air. The structure houses sensors with a maximum payload of 500 kg and may include RGB, near-IR, IR, VNIR, multispectral, and fluorescence intensity cameras, as well as NDVI, carbon dioxide, temperature, light intensity, and wind speed sensors. However, it can only cover a small area (0.12 ha) and is not easily transferable to other sites, the installation of the tracks being the major financial and logistic undertaking in the use of this system [9].
It was exactly with these criteria in mind (speed, cost, availability, and payload) that we developed our architecture. In Table 1, we summarize the costs involved in the use of a UAV-based solution for phenotyping versus our proposed solution. The commercial UAVs surveyed for Table 1 were: Draganfly X4-P and Commander; Allied Drones HL11 Atlas; Steadi Drone Vader HL and Vader X8; SciAero 4Scight; Xactsense Max-8; and AEE F100. Unfortunately, we did not have access to the cost of the LemnaTec system, but it is expected to be more expensive, especially when considered in dollars per acre covered. In Section 3.2, we expand on this comparison by showing the advantages of the proposed architecture in terms of the other criteria, i.e., throughput, payload, and sensor capabilities.
2.2. Navigation in the Field
Autonomous navigation refers to the ability of a robot to move within its habitat automatically. In order to achieve that, motion control and localization algorithms are required to accurately determine the robot's position in the environment and to compute a path through obstacles (in our case, the field and the plants, respectively). This is a quite difficult type of navigation, referred to as outdoor navigation in unstructured environments. Differential GPS is frequently used to mitigate some of these challenges. However, vision-based guidance is receiving more attention, as it can potentially reduce costs, handle dynamic situations, and simplify installation, while achieving precision comparable to, or even better than, that of Global Navigation Satellite Systems (GNSS). In that sense, sensors such as LiDAR and RGB cameras, along with new algorithms for either 2D or 3D dynamic navigation, indeed provide increased flexibility in such an unpredictable environment [21]. For instance, in [33], the authors developed an algorithm for detecting tree trunks using data fusion from a camera and a laser scanner. The resulting system could navigate throughout a fruit orchard.
In another system [35], the autonomous vehicle tracked crop rows using the surrounding texture of the agricultural field. The method worked even under extreme variations in the appearance of the field, such as day and night lighting conditions. As data collected from the field is noisy due to uncertainty in the environment, proper treatment of the data was required. Finally, Hiremath et al. proposed a novel probabilistic sensor model for a 2D range finder (LiDAR) and an RGB camera for robot navigation in a maize field [12]. In this research, most of the challenges in navigation were mitigated by the use of a semi-autonomous approach. In the future, a completely autonomous method relying on 3D imaging, GPS, and LiDAR will be employed.
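At the core of most row-following approaches, whatever the sensor, lies the recovery of the robot's heading error and lateral offset with respect to the crop row. The sketch below is a generic least-squares illustration of that step, not the texture-based method of [35] or the probabilistic sensor model of [12]; it assumes row detections (from LiDAR or vision) have already been expressed as 2D points in the robot frame, with x pointing forward and y to the left:

```python
import math

def row_fit(points):
    """Fit a line y = a*x + b to 2D row detections by ordinary least
    squares; the slope gives the heading error (rad) and the intercept
    the lateral offset (m) of the robot relative to the row."""
    n = len(points)
    sx = sum(p[0] for p in points)
    sy = sum(p[1] for p in points)
    sxx = sum(p[0] * p[0] for p in points)
    sxy = sum(p[0] * p[1] for p in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)  # slope
    b = (sy - a * sx) / n                          # intercept
    return math.atan(a), b

# Synthetic detections: a straight row 0.3 m to the robot's left, no skew
pts = [(1.0, 0.3), (2.0, 0.3), (3.0, 0.3), (4.0, 0.3)]
heading, offset = row_fit(pts)
print(round(heading, 3), round(offset, 3))  # 0.0 0.3
```

A steering controller would then drive both quantities toward zero; in practice a robust estimator (e.g., RANSAC) replaces the plain least-squares fit to cope with the noisy detections mentioned above.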
2.3. Computer Vision in Plant Phenotyping
In order to understand plant adaptation to the environment and to management practices, architectural phenotypes such as height, width, leaf area, leaf angle, leaf color, etc., are very important [37]. Traditionally, these traits are measured by hand, consuming an enormous amount of time. To date, computer vision has already made an impact on the speed and volume of plants phenotyped (i.e., high throughput), especially when it comes to phenotyping in growth chambers and greenhouses [13]. Also, 3D imaging of plant shoots and roots [43] is becoming the standard for storing all possible details from plants, i.e., details of hitherto unknown value that may prove useful in the future. Indeed, while 3D imaging in the field may still have a long way to go vis-à-vis its greenhouse counterpart, some systems, such as that in [46], are already having a great impact. In that work, the system relied on a structure-from-motion algorithm [47] applied to a sequence of images of the crop rows to build 3D models and estimate plant height and leaf area.
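Once such a 3D model is available, simple traits can be read directly off the point cloud. The sketch below illustrates the idea under simplifying assumptions (a single plant, with the ground at the lowest points); it is not the actual pipeline of [46]. Height is taken as a high percentile of the vertical coordinate, for robustness against stray points, and projected canopy area is approximated by an occupancy grid in the horizontal plane:

```python
def canopy_stats(cloud, cell=0.01):
    """Plant height and projected (top-view) area from an (x, y, z)
    point cloud. Height: 95th-percentile z minus minimum z. Area:
    number of occupied xy grid cells times the cell area (m^2)."""
    zs = sorted(p[2] for p in cloud)
    ground = zs[0]
    top = zs[int(0.95 * (len(zs) - 1))]
    # Bin points into square cells of side `cell` metres (nearest-cell binning)
    cells = {(round(p[0] / cell), round(p[1] / cell)) for p in cloud}
    return top - ground, len(cells) * cell * cell

# Toy cloud: a 10 cm x 10 cm canopy at 0.5 m, sampled on a 1 cm grid,
# plus one ground return at the origin
cloud = [(x / 100, y / 100, 0.5) for x in range(10) for y in range(10)]
cloud.append((0.0, 0.0, 0.0))
height, area = canopy_stats(cloud)
print(height, round(area, 4))  # 0.5 0.01
```

Real pipelines add ground-plane fitting and per-plant segmentation before this step, but the trait extraction itself often reduces to such order statistics over the reconstructed points.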
2.4. Plant Canopy Characterization
Crop canopy characteristics critically influence yield formation [48]. In fact, important traits such as plant height, weight, volume, biomass, shape, color, temperature, light absorption, and potentially many others can be obtained from the simple observation of the canopy as a whole. In that case, thermal, multispectral, and hyperspectral imaging from either airborne or satellite remote sensing can play a greater role in plant phenotyping in the field. Unfortunately, the cost and availability of these systems may pose a large burden on research, and they still lack the necessary resolution. In that sense, while some researchers invest in larger vehicles to increase payload and to optimize the volume and type of data acquired on a single flight [49], other systems rely on multiple micro-UAVs to achieve higher availability [50] and still perform canopy characterization. In [51], for example, a simple but ingenious technique for measuring the height of plants using micro-UAVs was proposed. In that system, a laser scanner was mounted onto the micro-UAV to estimate plant height by measuring the difference between the ground and the top of the canopy. Indeed, despite the persisting issues with cost and availability, successful aerial systems for canopy characterization abound, and it would be hard to survey them all in this paper.
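The measurement principle behind such a system reduces to simple arithmetic on the range readings. The sketch below assumes a downward-looking scanner at a fixed hover altitude and a hypothetical 0.2 m gap threshold for separating ground from canopy returns; it illustrates the idea rather than reproducing the processing of [51]:

```python
from statistics import median

def plant_height(ranges, gap=0.2):
    """Estimate plant height from downward-looking range readings (m).
    Readings within `gap` metres of the longest range are treated as
    ground hits, the rest as canopy hits; height is the difference
    between the median ground range and the median canopy range."""
    longest = max(ranges)
    ground = [r for r in ranges if r > longest - gap]
    canopy = [r for r in ranges if r <= longest - gap]
    return median(ground) - median(canopy) if canopy else 0.0

# Hypothetical scan: ground about 3.0 m below the UAV, canopy tops near 2.2 m
scan = [3.00, 2.98, 3.02, 2.21, 2.19, 2.20, 2.22, 3.01]
print(round(plant_height(scan), 2))  # 0.8
```

Using medians makes the estimate tolerant of a few noisy returns; a real implementation would also compensate for UAV attitude and altitude drift between readings.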