Review

A Systematic Review of Effective Hardware and Software Factors Affecting High-Throughput Plant Phenotyping

1 Institute of Intelligent Industrial Technologies and Systems for Advanced Manufacturing, National Research Council of Italy, Via Amendola 122 D/O, 70126 Bari, Italy
2 Faculty of Civil Engineering, Polytechnic University of Tirana, Bulevardi Dëshmorët e Kombit Nr. 4, 1000 Tiranë, Albania
3 Department of Computer Science, University of Bari, Via E. Orabona 4, 70125 Bari, Italy
* Author to whom correspondence should be addressed.
Information 2023, 14(4), 214; https://doi.org/10.3390/info14040214
Submission received: 16 February 2023 / Revised: 28 March 2023 / Accepted: 29 March 2023 / Published: 1 April 2023
(This article belongs to the Special Issue Computer Vision, Pattern Recognition and Machine Learning in Italy)

Abstract

Plant phenotyping studies the complex characteristics of plants with the aim of evaluating and assessing their condition and finding better exemplars. Recently, a new branch emerged in the phenotyping field, namely, high-throughput phenotyping (HTP). Specifically, HTP exploits modern data sampling techniques to gather a high amount of data that can be used to improve the effectiveness of phenotyping. Hence, HTP combines the knowledge derived from the phenotyping domain with computer science, engineering, and data analysis techniques. In this scenario, machine learning (ML) and deep learning (DL) algorithms have been successfully integrated with noninvasive imaging techniques, playing a key role in automation, standardization, and quantitative data analysis. This study aims to systematically review two main areas of interest for HTP: hardware and software. For each of these areas, two influential factors were identified: for hardware, platforms and sensing equipment were analyzed; for software, the focus was on algorithms and new trends. The study was conducted following the PRISMA protocol, which allowed a wide selection of papers to be refined into a meaningful dataset of 32 articles of interest. The analysis highlighted the diffusion of ground platforms, used in about 47% of the reviewed methods, and of RGB sensors, mainly due to their competitive costs, high compatibility, and versatility. Furthermore, DL-based algorithms accounted for the largest share (about 69%) of the reviewed approaches, mainly due to their effectiveness and the attention they have received from the scientific community over the last few years. Future research will focus on improving DL models to better handle hardware-generated data. The final aim is to create integrated, user-friendly, and scalable tools that can be deployed and used directly in the field to improve the overall crop yield.

1. Introduction

The exponential population growth experienced over the last century has increased the need for a sustainable food supply. However, adverse factors, ranging from climate change to disturbances in plant growth, development, tolerance, resistance, architecture, physiology, and ecology, have reduced crop yields, causing relevant losses in agricultural production. In addition, extensive agriculture has worsened the environmental crisis and, with it, the destruction of natural resources. For example, a significant portion of the global population is affected by drought [1], which can disrupt plant production, leading to changes in nutrient uptake and plant performance [2]. Using biological stimulants at appropriate plant growth stages can help restore plant productivity [3,4]; to this end, the development and diffusion of advanced sensing technologies have opened a new horizon for dealing with challenges related to monitoring unpredictable agricultural ecosystems [5]. Precision agriculture has been promoted to increase production, improve yield quality and quantity, protect the environment, and reduce costs [6].
However, such systems require proper amounts of data to be effectively used. Genomic tools can usually provide enough data to characterize the genome of a plant; in contrast, the systematic quantification of phenotypic traits poses unique challenges [7]. The scientific community has dealt with these challenges through a branch named plant phenotyping which, used in association with genomics, can provide a useful tool for improving plant breeding.
Plant phenotyping mainly deals with both the morphological and the physiological characteristics of the plant. Specifically, morphological traits, such as leaves, stems, flowers, fruits, and roots, describe the vegetative and reproductive characteristics of the plant [8]. Physiological characteristics, such as vigor, leaf surface, biomass, and inflorescence structure, are instead used to quantitatively assess the status of the plant during its life cycle. Hence, these traits are assessed in depth and accurately by plant phenotyping methods to improve the accuracy of crop monitoring, evaluate the stress level of each plant, and lay the basis for the development of new and more efficient agricultural practices. However, plant phenotyping has been fraught with several constraints over the past decades, being in some cases destructive, time-consuming, or too expensive.
To overcome these limitations, specific hardware tools for improving the throughput of plant phenotyping have been developed [9]. These tools aim to reduce the costs related to plant phenotyping and allow for nondestructive characterization of the traits of interest, contributing to the development of a new field called high-throughput phenotyping (HTP).
HTP is enabled by HTP platforms, which allow the automatic extraction of plant traits [10]. HTP platforms enable the characterization of plant traits to be automated, reducing time and effort while preventing destructive damage to plants. Each platform embeds several pieces of sensing equipment and can take different forms, including unmanned ground vehicles, drones, and satellites [11]. The selection of such sensors is strictly related to the application scenario in which the specific HTP platform should operate. For example, aerial platforms are generally more flexible and efficient than ground platforms and can be extensively used in open areas [12], whereas ground-based robotic platforms may be better suited for monitoring plant traits over time in closed environments [13]. Another example is satellite-based platforms, which can be effective when data must be collected over a large geographic area [14].
Although these platforms are a first step toward improving the plant phenotyping process, it is important to remember that they produce huge amounts of data, which must be used to identify and classify plant traits in a reasonable amount of time while keeping costs down [15]. Combining biological expertise with computer science and engineering knowledge is therefore necessary to automatically analyze the huge amounts of data generated by platforms and to improve plant phenotyping activities with noninvasive imaging techniques.
In this scenario, advanced technologies and new sensors on the market play an important role in data gathering. For example, visible RGB sensors are suited to capturing images of plant morphology, while near-infrared (NIR) sensors can assess plant chlorophyll, water content, and temperature [16,17,18]. Hyperspectral imaging has also been used to investigate leaf tissue structure and pigments [19]. Other available sensors include multispectral imaging, Raman spectroscopy, magnetic resonance imaging (MRI), and X-ray imaging, which have been exploited to study the internal anatomy of fruits [20] and seeds [17,21,22], as well as the overall internal morphology of plants [23]. X-ray imaging is also useful when studying root development [24,25], as visually measuring root system architectures poses unique challenges due to the presence of the surrounding soil [26]. In addition to specific imaging systems, rhizotubes or minirhizotrons buried underground have been investigated to capture high-resolution images of roots by installing guided scanners and camera systems [27].
Using sensors such as those introduced above, a huge amount of data can be collected; these data must be labeled (usually by domain experts) before being ready for further analyses. Several successful approaches have been implemented to analyze these data using traditional ML and computer vision methods [28]. For example, combining a computer vision algorithm with ML (support vector machines, SVMs) to classify leaves showed that such an automatic system helped achieve better accuracy [29]. The development of a pipeline combining computer vision, ML, and robotics showed that image segmentation tasks can be performed with a relatively high level of overall accuracy (86%) [30]. However, the problem with ML approaches is that these analysis techniques cannot work well with low-quality images or generalize when applied to data showing a significant change in imaging conditions [31]. Images should be acquired in the best possible conditions; otherwise, the possibility of introducing noise in the output increases.

A compelling reason for the increasing interest in DL is that it learns features from the input data fully automatically and generalizes well despite differences in the data [32]. However, DL models require a large amount of training data to perform, e.g., segmentation and recognition tasks well and to transfer and generalize to new settings [33]. Although both ML and DL approaches can process image data, researchers are trying to determine which is the most useful for evaluating the plant traits of interest. For example, ML still performs well in regression evaluations, while, for classification, DL can often work more quickly and accurately, likely due to its better capabilities in identifying plant features.

Since many scientists have studied plant phenotyping only from a one-dimensional perspective, no comprehensive study has been published to examine all four influencing factors of HTP examined in this paper. Therefore, this study provides a systematic review of HTP on the morphology of plants (i.e., the aerial part of the plants and their roots) to evaluate four factors: platforms, sensors, algorithms, and new trends in data processing. These factors were selected as they refer to two main categories: hardware (platforms and sensors) and software (algorithms and new trends), as reported in Figure 1. To the best of the knowledge of the authors, this is the first study focused on these factors in HTP.

2. Background

Considering the relevance of plant phenotyping, several contributions have been provided by the scientific community over recent years. Usually, these studies are focused on a single factor.

2.1. First Factor: Platforms

As for platforms, researchers have shown that aerial platforms (i.e., UAVs) can provide accurate information on the physiological parameters and geometric characteristics of plants in real time over large-scale areas and can support a large variety of sensors [34,35]. As such, the flexibility of these platforms makes them suitable for widespread applications, and users can easily work with them without complicated training [34]. Nevertheless, environmental factors such as temperature, wind, humidity, and rain may affect UAV imagery, negatively impacting data quality [34]. Furthermore, UAVs must deal with constraints related to battery life, which leads to the need to optimize their flights to gather as much data as possible.
Another important point, made by the authors in [36], is that minimizing the cost of phenotyping products with an automated platform might be a good idea if the robots are designed to be multifunctional and usable across different crops (considering, for example, the different appearance and morphology of the plants, their growth stages, their different sizes, etc.). The authors noted that robotic platforms can measure many traits related to morphology, structure, development, and physiology, but may increase costs when their design is limited to specific species.

2.2. Second Factor: Sensing Equipment

On the other hand, for image acquisition from platforms, the type of sensor may vary depending on the nature of the features to be extracted and analyzed (i.e., the plant traits).
For example, the use of X-ray sensors for plant seed evaluation has been considered by some researchers. X-ray radiography has been used to classify Jatropha seeds [37], forage grass seeds [38], and non-viable watermelon seeds [39], and to assess seed morphology [40]. Nevertheless, more research should be conducted to increase the performance of such sensors, e.g., by reducing the computed tomography (CT) reconstruction time or scanning more objects in one pass [23].
Some other researchers have performed neural network modeling with RGB sensors for implementing leaf counting [41] and segmentation [42]. Therefore, the sensor type should be selected according to the assessable traits to obtain the most relevant data to be analyzed.

2.3. Third Factor: Algorithms

Despite the constraints highlighted in Section 2.2, recent advances in sensing equipment have allowed the scientific community to obtain high-quality data from HTP platforms. This has shifted the discussion from equipment (whose careful choice remains a mandatory point) to algorithms for data processing and image understanding.
Let us underline that such data must first be properly managed; however, due to their size, it is not straightforward to use manual methods to label and manage such amounts of data [43]. As such, recent approaches have tried to exploit image processing techniques to automate these processes as much as possible [44]. Furthermore, the advances in computational capabilities due to the development of graphics processing units (GPUs) have led to the wide use of highly accurate methods, even if computationally expensive, using ML, DL, or a combination of both in reasonable time and at acceptable costs [45]. For example, the authors of [46] proposed a method for automatic plant segmentation and leaf counting, which uses an SVM and a probabilistic active contour model to label plant leaves automatically. The results showed that the method could achieve 70% accuracy, reducing the required effort in terms of time by at least 90%. Another approach was proposed in [47], where the authors used a Gaussian mixture model (GMM) for automatic root tip detection, achieving an accuracy of 97% in predicting the primary root.
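As a concrete illustration of this family of statistical approaches, the following minimal sketch (Python with scikit-learn; a generic two-component mixture on pixel intensities, not the actual model of [47]) separates root pixels from the background as a typical first step before locating root tips:

```python
# Minimal sketch, not the model of [47]: a two-component Gaussian mixture
# on grayscale intensities separates root pixels from the background.
import numpy as np
from sklearn.mixture import GaussianMixture

def segment_root_pixels(gray: np.ndarray) -> np.ndarray:
    """gray: (h, w) grayscale root image; returns a boolean root mask."""
    X = gray.reshape(-1, 1).astype(np.float64)
    gmm = GaussianMixture(n_components=2, random_state=0).fit(X)
    labels = gmm.predict(X).reshape(gray.shape)
    # Assumption: roots appear brighter than the surrounding background.
    root_component = int(np.argmax(gmm.means_.ravel()))
    return labels == root_component
```

Root tip candidates can then be extracted from the resulting mask, e.g., by skeletonizing it and keeping skeleton endpoints.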
Although traditional ML approaches have been successful in phenotyping plants [46,47], recent studies have shown that DL can improve on these results [45]. As an example, in [39], the authors compared the results achieved by several ML approaches (i.e., linear discriminant analysis, quadratic discriminant analysis, and k-nearest neighbors) to those achieved by deep neural networks (i.e., AlexNet [48], VGG-19 [49], ResNet-50, and ResNet-101 [50]) in classifying watermelon seeds. The results showed that the best-performing DL model (ResNet-50) achieved 87.3% accuracy, while the best ML model (linear discriminant analysis) stopped at 83.6%.
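The DL side of such a comparison typically relies on transfer learning. The following sketch (PyTorch/torchvision) shows a generic fine-tuning recipe under the assumption of an ImageNet-pretrained backbone; it is not the exact setup of [39]:

```python
# Generic transfer-learning sketch, not the exact setup of [39]: an
# ImageNet-pretrained ResNet-50 is adapted to seed classification by
# freezing the backbone and training a new classification head.
import torch
import torch.nn as nn
from torchvision import models

def build_seed_classifier(num_classes: int = 2) -> nn.Module:
    model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
    for p in model.parameters():
        p.requires_grad = False            # freeze pretrained backbone
    model.fc = nn.Linear(model.fc.in_features, num_classes)  # new head
    return model

model = build_seed_classifier()
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()
# Training then iterates over (images, labels) batches from a DataLoader:
#   loss = criterion(model(images), labels); loss.backward(); optimizer.step()
```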

2.4. Fourth Factor: New Trends

DL approaches have shown great generalization capabilities and have been used in several scenarios. As an example, the authors in [51] proposed an approach called TasselNet to identify corn tassels by combining convolutional neural networks (CNNs) with local counting regression. The authors reported an overall average absolute error of 6.6% and an average squared error of 9.6% over eight test sequences. Other approaches (e.g., [52]) used DL to detect roots and branches, achieving an accuracy above 97% on a custom dataset.
Deep neural networks have also been extensively used to solve image-related tasks such as image segmentation. For example, SpikeSegNet [53] combines two networks, i.e., a local network for spot extraction and a global network for mask refinement, achieving an overall accuracy of 99.91% in wheat ear detection on a proprietary dataset. DL can also be used when a three-dimensional representation of a plant is available. For example, in [54], the authors proposed a model named 3D-BoNet, which achieved a segmentation accuracy of 87.5% on a proprietary dataset containing three-dimensional representations of plants. Lastly, the success of single-stage detectors, such as those based on the YOLO family [55], has led the research community to widely adopt variants of these tools to deal with other detection and segmentation problems. For example, the authors in [56] used YOLO variants to count sorghum heads starting from drone imagery, achieving an average accuracy of 95%. Similarly, the authors in [57] used the fifth revision of the YOLO family, YOLOv5, to ensemble models of different sizes trained via transfer learning on a challenging dataset containing more than 4000 standard-quality images of tomatoes, achieving an average precision of about 80% in the best case.
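For counting tasks such as those in [56], a single-stage detector reduces to a "detect, then count" loop. The sketch below (Python) assumes the ultralytics/yolov5 torch.hub interface; "sorghum_heads.pt" is a hypothetical fine-tuned checkpoint name, not an artifact of the cited work:

```python
# Counting sketch under stated assumptions: a YOLOv5 model loaded through
# torch.hub with a hypothetical checkpoint; detections are simply counted.
import torch

model = torch.hub.load("ultralytics/yolov5", "custom", path="sorghum_heads.pt")
model.conf = 0.25  # confidence threshold (an assumption, not from [56])

def count_heads(image_path: str) -> int:
    results = model(image_path)
    detections = results.xyxy[0]     # (n, 6): x1, y1, x2, y2, confidence, class
    return int(detections.shape[0])  # one detection per counted head
```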
This analysis showed that HTP cannot be considered a compartmentalized topic, where several monolithic “blocks” deal with different, noncoupled aspects of the problem. Instead, the factors previously described are all highly dependent on each other. As such, this work aims to investigate these factors, highlight the interdependencies between hardware and software, and provide an overview of the relationships among platforms, algorithms, sensors, and new trends in processing gathered data.

3. Methodology

This section describes the steps of the research method used in this systematic review.

3.1. Review Questions

As this study focused on investigating four relevant factors that influence the research related to HTP, the following research questions (RQs) were used as the basis for the systematic review:
- About platforms—RQ1: What platforms are used in HTP for aerial parts of plants and roots, and what are their strengths and challenges?
- About sensing equipment—RQ2: What sensors do experts use to capture plant traits, and what data do these sensors collect for analysis?
- About algorithms—RQ3: What algorithms can better extract and predict traits obtained from specific phenotypic data?
- About trends—RQ4: What are the main trends toward which research in the HTP field is moving?

3.2. Literature Search Strategy

The analysis was based on the PRISMA protocol and structured around four steps: a literature search strategy, guided by a series of inclusion and exclusion criteria, followed by a quality assessment leading to data extraction [58].
Hence, the first step was to gather studies on HTP by performing an extensive literature search. As such, the Scopus database was used for two main reasons:
  • According to [59], using more than one database for a literature search does not guarantee a positive impact on the research outcome.
  • The high degree of reliability of Scopus guarantees the evaluation of high-quality papers published in qualified journals.
The research considered only papers published between 2019 and 2022, aiming to include only recent relevant publications. The following keywords were combined using AND/OR operators to search for articles (see the sketch after the list):
  • “Sensor” AND “high-throughput plant phenotyping”.
  • “Machine learning” OR “deep learning” AND “high-throughput plant phenotyping”.
  • “Platform” AND “high-throughput plant phenotyping”.
  • “Image acquisition technique” AND “high-throughput plant phenotyping”.
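The exact query syntax is not reported, so the following sketch renders the four keyword combinations as plausible Scopus advanced-search strings; the TITLE-ABS-KEY and PUBYEAR field codes and their combination are assumptions:

```python
# Plausible rendering of the four keyword combinations as Scopus
# advanced-search strings; the field codes are assumptions, since the
# exact syntax used by the authors is not reported.
BASE = '"high-throughput plant phenotyping"'
TERMS = [
    '"sensor"',
    '"machine learning" OR "deep learning"',
    '"platform"',
    '"image acquisition technique"',
]
queries = [
    f"TITLE-ABS-KEY(({term}) AND {BASE}) AND PUBYEAR > 2018 AND PUBYEAR < 2023"
    for term in TERMS
]
```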
About 1000 published and in-progress articles were found at the first stage, before applying filter restrictions (based on language, abstracts, articles not published in full, and articles not related to the investigated topic). These restrictions reduced the pool to about 500 of the initial 1000 papers. Figure 2 shows a PRISMA flowchart detailing the extraction of articles relevant to the study and the subsequent filtering stages that were applied.

3.3. Inclusion and Exclusion Criteria

The papers were selected through a combined inclusion/exclusion test conducted according to the PRISMA protocol. Specifically, Table 1 highlights the exclusion criteria, while Table 2 describes the inclusion criteria. These criteria were used to filter the 500 papers acquired after the screening step, resulting in 32 relevant items that represent the final database used for the review analysis.
The collected and filtered items were carefully ranked according to their merits and categorized with respect to the research questions identified beforehand. The data extraction procedure is summarized in Table 3.

4. Results

This section reports the results of the review conducted on the selected papers, which are summarized in Table 4. First, a quantitative description of the studies based on different grouping strategies is reported; then, they are grouped to highlight their distribution with respect to the research questions that guided the systematic review process.
The trend of publications based on data from this study over the years from 2019 to 2022 is shown in Figure 3. The graph shows that the science of high-throughput plant phenotyping was still in its infancy in 2019, accounting for only 6% of the analyzed papers. The share then grew over time, reaching 47% in 2022. This growth reflects the importance of, and the interest of researchers in, plant phenotyping aimed at improving accuracy in evaluating plants and agricultural products.
Figure 4 shows the five high-impact-factor journals considered as references for this review article. Most of the 32 selected papers (about 31%) came from the journal with an impact factor of 6.627.
Figure 5 shows the predominant approaches in the 32 reviewed studies from 2019 to 2022. The discussion about data analysis and the use of algorithms is currently attracting the most attention: according to the articles selected for this review, 72% discussed data analytics and algorithms. This could be because, with the increasing amount of image data, there is an increasing need to develop powerful analysis tools capable of accurately and quickly assessing phenotypic traits. Thus, scientists are trying to find ways to experiment with and analyze big data by integrating computer vision, ML, DL, and artificial intelligence approaches in general.
Figure 6 describes the distribution of the studies according to the part of the plant under analysis. Specifically, 24 studies (i.e., 75% of the total number of papers) focused on the aerial parts of plants, including leaves, flowers, and fruit. Meanwhile, 19% of reviewed papers concerned root systems, and only two studies (i.e., 6% of the total papers) were conducted on seeds.
To better understand the strengths and weaknesses of different high-throughput plant phenotyping platforms, the papers were grouped by platform type, as shown in Figure 7. Five different platform types were categorized: ground platforms, aerial platforms, root platforms, vehicles, and microscopic platforms, showing that almost one paper out of two focused on ground platforms.
The papers in the review pool used four main types of sensors, which are highly dependent on the research problem and the required quality of gathered data. Thus, there is a need for a better understanding of the kinds of data (and subsequent plant traits) usually captured and studied by the domain experts. The proposed categorization includes RGB, multispectral, and hyperspectral cameras, and X-ray CT. Figure 8 shows the distribution of sensor types within the reviewed studies.
As can be seen from Figure 8, most researchers used RGB images; hyperspectral [61,63,79] and multispectral [72,77,78] sensors were used in about 9% of the reviewed papers. Only one article used X-rays for seed evaluation [23].
Lastly, it is worth analyzing whether specific families of algorithms are used by researchers to evaluate specific phenotypic traits. Figure 9 shows the distribution of studies according to the type of algorithm used by the paper authors. It can be seen that most researchers used DL to perform plant phenotyping activities.

5. Discussion

This section is organized into four subsections, each mapping to one of the research questions introduced in Section 3.1.

5.1. Platforms

The continuous effort by researchers in creating automated pipelines for image acquisition and processing for plant assessment has led to the development and improvement of HTP platforms.
The plant characteristics to be evaluated suggest the use of one specific HTP platform over another. For example, some platforms are specifically tailored to acquire images of the aerial parts of the plant, while others can target the roots; furthermore, some can be used in laboratories and greenhouses under controlled conditions, while others are specifically designed to be deployed directly in the field. Hence, to answer RQ1, a taxonomy of platforms was developed from the reviewed studies, examining their strengths and weaknesses and highlighting that different types of platforms have been effectively used for data collection.
Ground platforms were the most used due to their flexibility, ease of use, and relatively low cost. Among the studies analyzed, it is worth mentioning PhenoTrack3D [62], a pipeline able to reconstruct the 3D architectural development of a maize plant at the organ level throughout its entire lifecycle, as well as the method described in [80], where the authors proposed an HTP platform based on CNNs to automatically detect pots and segment lettuces in the greenhouse environment.
Although ground platforms are flexible and easy to use, they may not be well suited for operating in open fields. Consequently, some researchers shifted their focus toward aerial platforms, which encompassed 19% of the reviewed studies. Intuitively, unmanned aerial systems could provide an improved assessment of plant phenotype when used in large-scale environments. For example, the authors in [66] showed that aerial platforms have a better overview of the crops than ground platforms, providing data captured from points of view not reachable by ground robots. Aerial platforms can also incorporate spectral sensors, exploiting the information on spectral wavelengths to improve the identification of plant organs [72]. Despite these advantages, however, aerial platforms must deal with constraints imposed by adverse weather conditions [32,66]. The introduction of vehicles and robots to perform HTP activities has greatly accelerated the whole process. Among the reviewed papers, two studies focused on agricultural vehicles, achieving promising results [82,85]. Specifically, in [82], vehicles that simultaneously capture a plant from four different angles were used, allowing a complete characterization of a single plant per run.

The discussion now moves to the platforms aimed at studying the root system architecture (RSA) of a plant, covered by about 22% of the reviewed papers. The RSA contains growth information that reveals the health status of the plant [43]. As such, phenotyping of root systems provides researchers with a useful tool to identify which plants can perform better [87]. However, as roots are usually covered by soil, which does not allow a direct visual assessment, different tools are required to extract visual information from these systems (e.g., X-ray cameras) and, consequently, to process these data. This highlights both the challenges posed by this topic and the focus of the scientific community on overcoming these issues, given the importance of developing proper automated RSA monitoring and assessment tools. Some examples are given below. The authors in [83] proposed a low-cost hardware system with a controlled chamber to inspect root characteristics, automatically performing image preprocessing, feature extraction, and segmentation on gathered data. The authors of [70] proposed a mobile tool that automatically monitors root growth in a laboratory environment. Let us point out that neither of these platforms handles the challenges related to acquisition settings, such as illumination and occlusions. As such, the authors in [67] proposed a fully automated, customizable, embedded platform that deals with each stage of the root development cycle. This platform allows a rapid assessment of root morphology and growth rate, improving the overall effectiveness of root HTP. To answer RQ1 with respect to root platforms, it must be noted that the platforms analyzed for RSA are suitable only for laboratory environments.
Lastly, 6% of the reviewed papers studied plant leaves using a microscopic platform. Phenotyping of leaves, including assessment of their morphological characteristics, allows a better understanding of their operation and function [88]. Microscopic platforms are a good fit for studying leaf morphology: they can image both sides of a leaf to automatically determine the stomatal index [75] or phenotype hairy leaves, providing a simple, powerful, and inexpensive imaging method [65]. However, when phenotyping leaves, attention must be paid to the magnification settings of these platforms, as this setting can affect the detection of specific leaf characteristics.

5.2. Sensors

Plant imaging can be performed using various sensors based on different working principles and technologies that can capture, e.g., visible spectrum data (RGB), fluorescence, or other specific wavelength information (IR, etc.), which can be used to evaluate different plant traits. About 63% of the reviewed studies used RGB sensors to capture plant images. These sensors are the most popular for capturing the morphological information of plants due to their inexpensiveness and ease of use. RGB sensors operate within the range of human vision, with wavelengths ranging from ~400 to ~700 nm. For example, low-cost RGB sensors have been used to detect grapes and estimate their volume [85] or to count the aerial traits of plants, including ears and grains of cereals [84]. Systems composed of multiple RGB sensors have also been developed. For example, the authors in [82] used an array of four RGB sensors to improve the accuracy of a system used to count flowers. Let us note that this configuration poses unique challenges; for example, the authors highlighted that the sensor array can inadvertently capture neighboring plants. To overcome this issue, the authors proposed an aerial acquisition setting, where the array is used to look at a set of plants instead of a single one. Aerial gathering vehicles equipped with RGB sensors were also used in [32] to gather data for plant localization.
Multispectral sensors can also be used to gather images to evaluate the characteristics of plants due to the use of multiple spectral bands, which can provide more information if compared to RGB sensors [76]. The effectiveness of multispectral sensors was demonstrated in [72], where the authors showed improved accuracy in detecting plant organs. Drones can also be equipped with multispectral sensors, achieving improved results in detecting dynamic changes in radiation in low-light environmental conditions and automatically determining the optimal flight altitude for crop estimation [78].
Another type of sensor emerging in HTP is the hyperspectral sensor, which is able to capture images with a wavelength resolution of a few nanometers, ranging from ultraviolet radiation to infrared. Among the reviewed papers, only three used hyperspectral sensors, as this is quite an expensive sensing technology that has emerged on the market only in recent years. Moreover, since a hyperspectral image is a collection of many images (depending on the spectral resolution of the sensor), hyperspectral data have huge dimensions and are complex to manage and process. The three papers showed promising results, especially when the data were analyzed using deep neural networks [61,63,79].
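A quick back-of-the-envelope sketch (Python/NumPy, with hypothetical sensor dimensions) illustrates why hyperspectral cubes are heavy and how they are typically flattened into per-pixel spectra for analysis:

```python
# Hypothetical cube size: even a modest capture reaches gigabyte scale,
# and per-pixel ML works on the cube reshaped into (n_pixels, n_bands).
import numpy as np

h, w, bands = 1024, 1024, 250                    # assumed spatial/spectral size
cube = np.zeros((h, w, bands), dtype=np.float32)
print(f"{cube.nbytes / 1e9:.2f} GB per capture")  # ~1.05 GB
spectra = cube.reshape(-1, bands)                # one spectrum per pixel
```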
X-ray sensors have also been used for HTP to assess internal fruit morphology [23]. Therefore, it is fair to say that the only sensors capable of assessing the internal morphology of fruits and seeds to date are X-ray sensors performing CT scans. Lastly, microscopic sensors are suitable for visualizing microscopic leaf characteristics, such as stomatal leaf index and leaf fluffiness, achieving good accuracy [65,75].

5.3. Algorithms

ML and DL algorithms have been effectively used to analyze data from HTP platforms. ML algorithms have been used in several applications, such as plant segmentation by K-means clustering [76], crop estimation via regression analysis using Gaussian processes and random forests [72,77,78], and the development of processing pipelines for the extraction and analysis of plant characteristics [72,73].
Specifically, the latter applications were developed to ease the burden on manual labelers, introducing automated or semi-automated pipelines to estimate the number of plants from existing imagery, e.g., by separating target plants from a noisy background in field images. As an example, a self-training pipeline called KAT4IA was proposed in [73] to extract pixels belonging to plants and estimate plant height by combining ML models and neural networks. Another approach for measuring root nodules was proposed in [83], where the authors created a pipeline based on DL architectures, such as RetinaNet and U-Net [71], to automatically identify the location of roots in images. Let us underline that the idea of automatic pipelines has been applied to all data analysis steps. Generally speaking, pipelines automating the analysis of the plant were able to reduce processing time while delivering higher accuracy when compared with manual methods [27,62,69,75,80].
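To make the classical segmentation step concrete, the following minimal sketch (Python with scikit-learn; a generic K-means color segmentation recipe, not the specific method of [76] or [73]) separates plant pixels from the background in an RGB field image:

```python
# Generic K-means color segmentation sketch: pixels are clustered in RGB
# space, and the cluster with the highest green-to-red ratio is assumed
# to be the plant (an assumption, not a rule from the cited works).
import numpy as np
from sklearn.cluster import KMeans

def kmeans_plant_mask(rgb: np.ndarray, k: int = 2) -> np.ndarray:
    """rgb: (h, w, 3) field image; returns a boolean plant mask."""
    X = rgb.reshape(-1, 3).astype(np.float64)
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
    ratios = km.cluster_centers_[:, 1] / (km.cluster_centers_[:, 0] + 1e-6)
    plant_cluster = int(np.argmax(ratios))   # assumption: plants are greenest
    return (km.labels_ == plant_cluster).reshape(rgb.shape[:2])
```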
As for DL, interesting developments have been achieved via deep neural networks aimed at object detection applied to HTP. For example, existing architectures, such as AlexNet, VGG16 and VGG19, and Inception, were used to accurately detect fruit in low-quality images [85]. However, new architectures have also been explored to deal with the specific challenges related to evaluating and identifying plant phenotypic traits. For example, the authors in [70] proposed a modification of the traditional U-Net model that can be trained on few images, achieving precise segmentation results in seed recognition and plant root evaluation.
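The encoder-decoder idea behind these segmentation models can be summarized in a few lines. The following sketch (PyTorch) implements a deliberately tiny U-Net-style network with a single down/upsampling level and one skip connection; the cited works use deeper, modified variants, so this is only an illustration of the skip-connection principle:

```python
# Tiny U-Net-style sketch: one encoder stage, one decoder stage, and a
# skip connection that concatenates encoder features with upsampled ones.
import torch
import torch.nn as nn

def conv_block(c_in: int, c_out: int) -> nn.Sequential:
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True))

class TinyUNet(nn.Module):
    def __init__(self, in_ch: int = 3, out_ch: int = 1):
        super().__init__()
        self.enc = conv_block(in_ch, 32)
        self.down = nn.MaxPool2d(2)
        self.mid = conv_block(32, 64)
        self.up = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec = conv_block(64, 32)         # 64 = 32 (skip) + 32 (upsampled)
        self.head = nn.Conv2d(32, out_ch, 1)  # per-pixel segmentation logits

    def forward(self, x):
        e = self.enc(x)
        m = self.mid(self.down(e))
        u = self.up(m)
        return self.head(self.dec(torch.cat([e, u], dim=1)))

mask_logits = TinyUNet()(torch.randn(1, 3, 128, 128))  # -> (1, 1, 128, 128)
```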
Object detection has also been performed using two-stage detectors, such as R-CNN and Fast R-CNN [88], providing high detection accuracy at the cost of high computational complexity [89]. In the HTP field, for example, a model called SlypNet [8] combines variants of such architectures, i.e., Mask R-CNN and U-Net, to detect wheat spikelets.
To overcome the limitations of two-stage detectors, faster (and, in some settings, more accurate) single-stage detectors, such as YOLO and its successors [55,57], have also been widely employed over recent years. These models improve the ability to extract local features from images and reduce the background detection error rate [90]. For example, FlowerPhenoNet [91] was used to identify flowers and investigate their location within images. Moreover, the authors in [57] compared the effectiveness of YOLOv5 architectures in performing node, fruit, and flower detection on tomato plants.
Lastly, the results achieved by integrating ML and DL have also led to direct comparisons between these types of approaches. As an example, the authors in [60] compared image-based RSA phenotyping methods using several ML and DL algorithms, specifically k-means, naïve Bayes, random forest, shallow neural networks, and deep neural networks, showing that the latter achieved the highest accuracy (about 86%) on a dataset of 617 root images from mature alfalfa plants.
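Such comparisons are commonly run with cross-validation over a fixed feature set. The sketch below (Python with scikit-learn) mirrors some of the model families named in [60]; the features X (extracted from root images) and labels y are assumed to be precomputed, and the hyperparameters are illustrative:

```python
# Cross-validated comparison sketch across classifier families; not the
# experimental protocol of [60], whose features and settings differ.
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier

MODELS = {
    "naive_bayes": GaussianNB(),
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "shallow_nn": MLPClassifier(hidden_layer_sizes=(64,), max_iter=500,
                                random_state=0),
}

def compare(X, y):
    """Returns mean 5-fold accuracy per model family."""
    return {name: cross_val_score(est, X, y, cv=5).mean()
            for name, est in MODELS.items()}
```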

5.4. New HTP Research Ideas and Proposals

The purpose of RQ4 was to identify new methods and research trends for plant phenotyping activities currently hypothesized or implemented by researchers, allowing new ideas to open up further horizons. Only one study [67] focused on developing a new, innovative phenotyping method that simulates a farm by installing scanners and plant-growing containers, specifically acrylic plates. This method fully automated the steps of image acquisition and data analysis, avoiding wasted time thanks to the implemented automatisms and reducing costs. This creative work aimed to develop a laboratory platform for capturing the phenotypic characteristics of plant roots while automating the entire process from image capture to data analysis.

6. Conclusions and Future Work

In this paper, a systematic review based on the PRISMA protocol was presented to investigate how specific factors affect high-throughput plant phenotyping activities from the aspects of both hardware and software. A rigorous paper selection led to the identification of 32 relevant scientific articles in this review. The results were discussed, leading to the following conclusions:
  • Ground platforms were among the most commonly used platforms for the aerial part of plants. They were used in laboratories due to their cost and time effectiveness, simplicity, and compatibility for data collection.
  • Researchers widely used digital RGB cameras because of their compatibility and ease of integration with all plant phenotype platforms; moreover, using RGB cameras, it is possible to capture images of both the aerial part of the plants and the root system architecture, thus lowering the costs and achieving relatively good quality images in terms of resolution and general appearance.
  • Deep learning models were among the most widely used methods in plant phenotyping in the last few years. These models can detect and accurately measure, for example, specific parts of the plant (fruit, flowers, roots, etc.).
Using deep neural network-based pipelines gave improved results with respect to classical machine learning approaches. However, particular attention should be given to the adaptation of custom models developed by researchers, as well as to the generalization capabilities of the developed models. In this sense, future research trends will be devoted to evaluating and applying deep learning models on continuously changing data and to developing and sharing algorithms for data processing. It has been observed that even commonly used deep learning architectures and models, combined with optimization and adaptation techniques, can provide relevant accuracy and boost the performance of plant trait extraction and evaluation, despite the complexity of the processed dataset.

Author Contributions

Conceptualization, F.S. and A.C.; methodology, F.S., G.D. and V.R.; writing—original draft preparation, F.S.; writing—review and editing, F.S., A.C., G.D., M.N., A.L. and V.R.; supervision, G.D. and V.R.; project administration, V.R.; funding acquisition, V.R. All authors have read and agreed to the published version of the manuscript.

Funding

The activities described in this study were within the research project PHENO—Accordo di collaborazione tra ALSIA e CNR STIIMA. Funder: ALSIA. Funding number: prot. CNR STIIMA 3621/2020.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data sharing is not applicable.

Acknowledgments

The authors thank Michele Attolico for his support.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Coping with Water Scarcity: An Action Framework for Agriculture and Food Security; FAO Water Reports; Steduto, P.; Faurès, J.-M.; Hoogeveen, J.; Winpenny, J.T.; Burke, J.J. (Eds.) Food and Agriculture Organization of the United Nations: Rome, Italy, 2012; ISBN 978-92-5-107304-9.
  2. Danzi, D.; Briglia, N.; Petrozza, A.; Summerer, S.; Povero, G.; Stivaletta, A.; Cellini, F.; Pignone, D.; De Paola, D.; Janni, M. Can High Throughput Phenotyping Help Food Security in the Mediterranean Area? Front. Plant Sci. 2019, 10, 15.
  3. Colla, G.; Rouphael, Y. Special Issue: Biostimulants in Horticulture. Sci. Hortic. 2015, 196, 1–134.
  4. Rouphael, Y.; Spíchal, L.; Panzarová, K.; Casa, R.; Colla, G. High-Throughput Plant Phenotyping for Developing Novel Biostimulants: From Lab to Field or From Field to Lab? Front. Plant Sci. 2018, 9, 1197.
  5. Vasconez, J.P.; Delpiano, J.; Vougioukas, S.; Auat Cheein, F. Comparison of Convolutional Neural Networks in Fruit Detection and Counting: A Comprehensive Evaluation. Comput. Electron. Agric. 2020, 173, 105348.
  6. Paustian, M.; Theuvsen, L. Adoption of Precision Agriculture Technologies by German Crop Farmers. Precis. Agric. 2017, 18, 701–716.
  7. Chen, D.; Neumann, K.; Friedel, S.; Kilian, B.; Chen, M.; Altmann, T.; Klukas, C. Dissecting the Phenotypic Components of Crop Plant Growth and Drought Responses Based on High-Throughput Image Analysis. Plant Cell 2014, 26, 4636–4655.
  8. Maji, A.K.; Marwaha, S.; Kumar, S.; Arora, A.; Chinnusamy, V.; Islam, S. SlypNet: Spikelet-Based Yield Prediction of Wheat Using Advanced Plant Phenotyping and Computer Vision Techniques. Front. Plant Sci. 2022, 13, 2552.
  9. Yang, W.; Duan, L.; Chen, G.; Xiong, L.; Liu, Q. Plant Phenomics and High-Throughput Phenotyping: Accelerating Rice Functional Genomics Using Multidisciplinary Technologies. Curr. Opin. Plant Biol. 2013, 16, 180–187.
  10. Mazis, A.; Choudhury, S.D.; Morgan, P.B.; Stoerger, V.; Hiller, J.; Ge, Y.; Awada, T. Application of High-Throughput Plant Phenotyping for Assessing Biophysical Traits and Drought Response in Two Oak Species under Controlled Environment. For. Ecol. Manag. 2020, 465, 118101.
  11. Fan, J.; Zhang, Y.; Wen, W.; Gu, S.; Lu, X.; Guo, X. The Future of Internet of Things in Agriculture: Plant High-Throughput Phenotypic Platform. J. Clean. Prod. 2021, 280, 123651.
  12. Hu, P.; Chapman, S.C.; Zheng, B. Coupling of Machine Learning Methods to Improve Estimation of Ground Coverage from Unmanned Aerial Vehicle (UAV) Imagery for High-Throughput Phenotyping of Crops. Funct. Plant Biol. 2021, 48, 766–779.
  13. Atefi, A.; Ge, Y.; Pitla, S.; Schnable, J. Robotic Technologies for High-Throughput Plant Phenotyping: Contemporary Reviews and Future Perspectives. Front. Plant Sci. 2021, 12, 611940.
  14. Zhang, C.; Marzougui, A.; Sankaran, S. High-Resolution Satellite Imagery Applications in Crop Phenotyping: An Overview. Comput. Electron. Agric. 2020, 175, 105584.
  15. Arunachalam, A.; Andreasson, H. Real-Time Plant Phenomics under Robotic Farming Setup: A Vision-Based Platform for Complex Plant Phenotyping Tasks. Comput. Electr. Eng. 2021, 92, 107098.
  16. Das Choudhury, S.; Samal, A.; Awada, T. Leveraging Image Analysis for High-Throughput Plant Phenotyping. Front. Plant Sci. 2019, 10, 508.
  17. Li, D.; Li, C.; Yao, Y.; Li, M.; Liu, L. Modern Imaging Techniques in Plant Nutrition Analysis: A Review. Comput. Electron. Agric. 2020, 174, 105459.
  18. Shibayama, M.; Sakamoto, T.; Takada, E.; Inoue, A.; Morita, K.; Takahashi, W.; Kimura, A. Continuous Monitoring of Visible and Near-Infrared Band Reflectance from a Rice Paddy for Determining Nitrogen Uptake Using Digital Cameras. Plant Prod. Sci. 2009, 12, 293–306.
  19. Mahlein, A.-K.; Oerke, E.-C.; Steiner, U.; Dehne, H.-W. Recent Advances in Sensing Plant Diseases for Precision Crop Protection. Eur. J. Plant Pathol. 2012, 133, 197–209.
  20. Hernández-Sánchez, N.; Hills, B.P.; Barreiro, P.; Marigheto, N. An NMR Study on Internal Browning in Pears. Postharvest Biol. Technol. 2007, 44, 260–270.
  21. da Silva, C.B.; Bianchini, V.D.J.M.; de Medeiros, A.D.; de Moraes, M.H.D.; Marassi, A.G.; Tannús, A. A Novel Approach for Jatropha Curcas Seed Health Analysis Based on Multispectral and Resonance Imaging Techniques. Ind. Crops Prod. 2021, 161, 113186.
  22. Köckenberger, W.; De Panfilis, C.; Santoro, D.; Dahiya, P.; Rawsthorne, S. High Resolution NMR Microscopy of Plants and Fungi. J. Microsc. 2004, 214, 182–189.
  23. Van De Looverbosch, T.; Vandenbussche, B.; Verboven, P.; Nicolaï, B. Nondestructive High-Throughput Sugar Beet Fruit Analysis Using X-ray CT and Deep Learning. Comput. Electron. Agric. 2022, 200, 107228.
  24. Flavel, R.J.; Guppy, C.N.; Tighe, M.; Watt, M.; McNeill, A.; Young, I.M. Non-Destructive Quantification of Cereal Roots in Soil Using High-Resolution X-ray Tomography. J. Exp. Bot. 2012, 63, 2503–2511.
  25. Helliwell, J.R.; Sturrock, C.J.; Mairhofer, S.; Craigon, J.; Ashton, R.W.; Miller, A.J.; Whalley, W.R.; Mooney, S.J. The Emergent Rhizosphere: Imaging the Development of the Porous Architecture at the Root-Soil Interface. Sci. Rep. 2017, 7, 14875.
  26. Atkinson, J.A.; Pound, M.P.; Bennett, M.J.; Wells, D.M. Uncovering the Hidden Half of Plants Using New Advances in Root Phenotyping. Curr. Opin. Biotechnol. 2019, 55, 1–8.
  27. Bauer, F.M.; Lärm, L.; Morandage, S.; Lobet, G.; Vanderborght, J.; Vereecken, H.; Schnepf, A. Development and Validation of a Deep Learning Based Automated Minirhizotron Image Analysis Pipeline. Plant Phenomics 2022, 2022, 9758532.
  28. Mochida, K.; Koda, S.; Inoue, K.; Hirayama, T.; Tanaka, S.; Nishii, R.; Melgani, F. Computer Vision-Based Phenotyping for Improvement of Plant Productivity: A Machine Learning Perspective. GigaScience 2019, 8, giy153.
  29. Wilf, P.; Zhang, S.; Chikkerur, S.; Little, S.A.; Wing, S.L.; Serre, T. Computer Vision Cracks the Leaf Code. Proc. Natl. Acad. Sci. USA 2016, 113, 3305–3310.
  30. Brichet, N.; Fournier, C.; Turc, O.; Strauss, O.; Artzet, S.; Pradal, C.; Welcker, C.; Tardieu, F.; Cabrera-Bosquet, L. A Robot-Assisted Imaging Pipeline for Tracking the Growths of Maize Ear and Silks in a High-Throughput Phenotyping Platform. Plant Methods 2017, 13, 1–12.
  31. Jiang, Y.; Li, C. Convolutional Neural Networks for Image-Based High-Throughput Plant Phenotyping: A Review. Plant Phenomics 2020, 2020, 4152816.
  32. Koh, J.C.O.; Spangenberg, G.; Kant, S. Automated Machine Learning for High-Throughput Image-Based Plant Phenotyping. Remote Sens. 2021, 13, 858.
  33. Abade, A.; Ferreira, P.A.; de Barros Vidal, F. Plant Diseases Recognition on Images Using Convolutional Neural Networks: A Systematic Review. Comput. Electron. Agric. 2021, 185, 106125.
  34. Feng, L.; Chen, S.; Zhang, C.; Zhang, Y.; He, Y. A Comprehensive Review on Recent Applications of Unmanned Aerial Vehicle Remote Sensing with Various Sensors for High-Throughput Plant Phenotyping. Comput. Electron. Agric. 2021, 182, 106033.
  35. Näsi, R.; Viljanen, N.; Kaivosoja, J.; Alhonoja, K.; Hakala, T.; Markelin, L.; Honkavaara, E. Estimating Biomass and Nitrogen Amount of Barley and Grass Using UAV and Aircraft Based Spectral and Photogrammetric 3D Features. Remote Sens. 2018, 10, 1082.
  36. Xu, R.; Li, C. A Review of High-Throughput Field Phenotyping Systems: Focusing on Ground Robots. Plant Phenomics 2022, 2022, 9760269.
  37. de Medeiros, A.D.; da Silva, L.J.; Ribeiro, J.P.O.; Ferreira, K.C.; Rosas, J.T.F.; Santos, A.A.; da Silva, C.B. Machine Learning for Seed Quality Classification: An Advanced Approach Using Merger Data from FT-NIR Spectroscopy and X-ray Imaging. Sensors 2020, 20, 4319.
  38. Medeiros, A.D.D.; Silva, L.J.D.; Pereira, M.D.; Oliveira, A.M.S.; Dias, D.C.F.S. High-Throughput Phenotyping of Brachiaria Grass Seeds Using Free Access Tool for Analyzing X-ray Images. An. Acad. Bras. Ciênc. 2020, 92, e20190209.
  39. Ahmed, M.R.; Yasmin, J.; Park, E.; Kim, G.; Kim, M.S.; Wakholi, C.; Mo, C.; Cho, B.-K. Classification of Watermelon Seeds Using Morphological Patterns of X-ray Imaging: A Comparison of Conventional Machine Learning and Deep Learning. Sensors 2020, 20, 6753.
  40. Liu, W.; Liu, C.; Jin, J.; Li, D.; Fu, Y.; Yuan, X. High-Throughput Phenotyping of Morphological Seed and Fruit Characteristics Using X-ray Computed Tomography. Front. Plant Sci. 2020, 11, 601475.
  41. Aich, S.; Stavness, I. Leaf Counting With Deep Convolutional and Deconvolutional Networks. In Proceedings of the IEEE International Conference on Computer Vision Workshops, Venice, Italy, 22–29 October 2017; pp. 2080–2089.
  42. Wang, T.; Rostamza, M.; Song, Z.; Wang, L.; McNickle, G.; Iyer-Pascuzzi, A.S.; Qiu, Z.; Jin, J. SegRoot: A High Throughput Segmentation Method for Root Image Analysis. Comput. Electron. Agric. 2019, 162, 845–854.
  43. Gong, L.; Du, X.; Zhu, K.; Lin, C.; Lin, K.; Wang, T.; Lou, Q.; Yuan, Z.; Huang, G.; Liu, C. Pixel Level Segmentation of Early-Stage in-Bag Rice Root for Its Architecture Analysis. Comput. Electron. Agric. 2021, 186, 106197.
  44. Chavarría-Krauser, A.; Nagel, K.A.; Palme, K.; Schurr, U.; Walter, A.; Scharr, H. Spatio-Temporal Quantification of Differential Growth Processes in Root Growth Zones Based on a Novel Combination of Image Sequence Processing and Refined Concepts Describing Curvature Production. New Phytol. 2008, 177, 811–821.
  45. Araus, J.L.; Cairns, J.E. Field High-Throughput Phenotyping: The New Crop Breeding Frontier. Trends Plant Sci. 2014, 19, 52–61.
  46. Minervini, M.; Giuffrida, M.V.; Perata, P.; Tsaftaris, S.A. Phenotiki: An Open Software and Hardware Platform for Affordable and Easy Image-Based Phenotyping of Rosette-Shaped Plants. Plant J. 2017, 90, 204–216.
  47. Kumar, P.; Huang, C.; Cai, J.; Miklavcic, S.J. Root Phenotyping by Root Tip Detection and Classification through Statistical Learning. Plant Soil 2014, 380, 193–209.
  48. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet Classification with Deep Convolutional Neural Networks. Commun. ACM 2017, 60, 84–90.
  49. Simonyan, K.; Zisserman, A. Very Deep Convolutional Networks for Large-Scale Image Recognition. arXiv 2015, arXiv:1409.1556.
  50. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778.
  51. Lu, H.; Cao, Z.; Xiao, Y.; Zhuang, B.; Shen, C. TasselNet: Counting Maize Tassels in the Wild via Local Counts Regression Network. Plant Methods 2017, 13, 1–17.
  52. Pound, M.P.; Atkinson, J.A.; Townsend, A.J.; Wilson, M.H.; Griffiths, M.; Jackson, A.S.; Bulat, A.; Tzimiropoulos, G.; Wells, D.M.; Murchie, E.H.; et al. Deep Machine Learning Provides State-of-the-Art Performance in Image-Based Plant Phenotyping. GigaScience 2017, 6, gix083.
  53. Misra, T.; Arora, A.; Marwaha, S.; Chinnusamy, V.; Rao, A.R.; Jain, R.; Sahoo, R.N.; Ray, M.; Kumar, S.; Raju, D.; et al. SpikeSegNet-a Deep Learning Approach Utilizing Encoder-Decoder Network with Hourglass for Spike Segmentation and Counting in Wheat Plant from Visual Imaging. Plant Methods 2020, 16, 1–20.
  54. Yang, B.; Wang, J.; Clark, R.; Hu, Q.; Wang, S.; Markham, A.; Trigoni, N. Learning Object Bounding Boxes for 3D Instance Segmentation on Point Clouds. In Proceedings of the 33rd International Conference on Neural Information Processing Systems, Vancouver, BC, Canada, 8–14 December 2019.
  55. Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You Only Look Once: Unified, Real-Time Object Detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 779–788.
  56. Mosley, L.; Pham, H.; Bansal, Y.; Hare, E. Image-Based Sorghum Head Counting When You Only Look Once. In Proceedings of the Hawaii International Conference on System Sciences, Maui, HI, USA, 4–7 January 2022.
  57. Cardellicchio, A.; Solimani, F.; Dimauro, G.; Petrozza, A.; Summerer, S.; Cellini, F.; Renò, V. Detection of Tomato Plant Phenotyping Traits Using YOLOv5-Based Single Stage Detectors. Comput. Electron. Agric. 2023, 207, 107757.
  58. Moher, D.; Liberati, A.; Tetzlaff, J.; Altman, D.G. Preferred Reporting Items for Systematic Reviews and Meta-Analyses: The PRISMA Statement. Ann. Intern. Med. 2009, 151, 264–269.
  59. Rice, D.B.; Kloda, L.A.; Levis, B.; Qi, B.; Kingsland, E.; Thombs, B.D. Are MEDLINE Searches Sufficient for Systematic Reviews and Meta-Analyses of the Diagnostic Accuracy of Depression Screening Tools? A Review of Meta-Analyses. J. Psychosom. Res. 2016, 87, 7–13.
  60. Xu, Z.; York, L.M.; Seethepalli, A.; Bucciarelli, B.; Cheng, H.; Samac, D.A. Objective Phenotyping of Root System Architecture Using Image Augmentation and Machine Learning in Alfalfa (Medicago sativa L.). Plant Phenomics 2022, 2022, 9879610.
  61. Islam ElManawy, A.; Sun, D.; Abdalla, A.; Zhu, Y.; Cen, H. HSI-PP: A Flexible Open-Source Software for Hyperspectral Imaging-Based Plant Phenotyping. Comput. Electron. Agric. 2022, 200, 107248.
  62. Daviet, B.; Fernandez, R.; Cabrera-Bosquet, L.; Pradal, C.; Fournier, C. PhenoTrack3D: An Automatic High-Throughput Phenotyping Pipeline to Track Maize Organs over Time. Plant Methods 2022, 18, 130.
  63. Yu, S.; Fan, J.; Xianju, L.; Wen, W.; Shao, S.; Guo, X.; Zhao, C. Hyperspectral Technique Combined with Deep Learning Algorithm for Prediction of Phenotyping Traits in Lettuce. Front. Plant Sci. 2022, 13, 927832.
  64. Oury, V.; Leroux, T.; Turc, O.; Chapuis, R.; Palaffre, C.; Tardieu, F.; Prado, S.A.; Welcker, C.; Lacube, S. Earbox, an Open Tool for High-Throughput Measurement of the Spatial Organization of Maize Ears and Inference of Novel Traits. bioRxiv 2021.
  65. Rolland, V.; Farazi, M.R.; Conaty, W.C.; Cameron, D.; Liu, S.; Petersson, L.; Stiller, W.N. HairNet: A Deep Learning Model to Score Leaf Hairiness, a Key Phenotype for Cotton Fibre Yield, Value and Insect Resistance. Plant Methods 2022, 18, 8.
  66. Petti, D.; Li, C. Weakly-Supervised Learning to Automatically Count Cotton Flowers from Aerial Imagery. Comput. Electron. Agric. 2022, 194, 106734.
  67. Zhao, H.; Wang, N.; Sun, H.; Zhu, L.; Zhang, K.; Zhang, Y.; Zhu, J.; Li, A.; Bai, Z.; Liu, X.; et al. RhizoPot Platform: A High-Throughput in Situ Root Phenotyping Platform with Integrated Hardware and Software. Front. Plant Sci. 2022, 13, 1004904.
  68. Narisetti, N.; Henke, M.; Neumann, K.; Stolzenburg, F.; Altmann, T.; Gladilin, E. Deep Learning Based Greenhouse Image Segmentation and Shoot Phenotyping (DeepShoot). Front. Plant Sci. 2022, 13, 906410.
  69. Zenkl, R.; Timofte, R.; Kirchgessner, N.; Roth, L.; Hund, A.; Van Gool, L.; Walter, A.; Aasen, H. Outdoor Plant Segmentation with Deep Learning for High-Throughput Field Phenotyping on a Diverse Wheat Dataset. Front. Plant Sci. 2022, 12, 774068.
  70. Lube, V.; Noyan, M.A.; Przybysz, A.; Salama, K.; Blilou, I. MultipleXLab: A High-Throughput Portable Live-Imaging Root Phenotyping Platform Using Deep Learning and Computer Vision. Plant Methods 2022, 18, 38.
  71. Jubery, T.Z.; Carley, C.N.; Singh, A.; Sarkar, S.; Ganapathysubramanian, B.; Singh, A.K. Using Machine Learning to Develop a Fully Automated Soybean Nodule Acquisition Pipeline (SNAP). Plant Phenomics 2021, 2021, 9834746.
  72. Zhao, Y.; Zheng, B.; Chapman, S.C.; Laws, K.; George-Jaeggli, B.; Hammer, G.L.; Jordan, D.R.; Potgieter, A.B. Detecting Sorghum Plant and Head Features from Multispectral UAV Imagery. Plant Phenomics 2021, 2021, 9874650.
  73. Guo, X.; Qiu, Y.; Nettleton, D.; Yeh, C.-T.; Zheng, Z.; Hey, S.; Schnable, P.S. KAT4IA: K-Means Assisted Training for Image Analysis of Field-Grown Plant Phenotypes. Plant Phenomics 2021, 2021, 9805489.
  74. Chang, S.; Lee, U.; Hong, M.J.; Jo, Y.D.; Kim, J.-B. Time-Series Growth Prediction Model Based on U-Net and Machine Learning in Arabidopsis. Front. Plant Sci. 2021, 12, 721512.
  75. Zhu, C.; Hu, Y.; Mao, H.; Li, S.; Li, F.; Zhao, C.; Luo, L.; Liu, W.; Yuan, X. A Deep Learning-Based Method for Automatic Assessment of Stomatal Index in Wheat Microscopic Images of Leaf Epidermis. Front. Plant Sci. 2021, 12, 716784.
  76. Zhou, S.; Chai, X.; Yang, Z.; Wang, H.; Yang, C.; Sun, T. Maize-IAS: A Maize Image Analysis Software Using Deep Learning for High-Throughput Plant Phenotyping. Plant Methods 2021, 17, 48.
  77. Pranga, J.; Borra-Serrano, I.; Aper, J.; De Swaef, T.; Ghesquiere, A.; Quataert, P.; Roldán-Ruiz, I.; Janssens, I.A.; Ruysschaert, G.; Lootens, P. Improving Accuracy of Herbage Yield Predictions in Perennial Ryegrass with UAV-Based Structural and Spectral Data Fusion and Machine Learning. Remote Sens. 2021, 13, 3459.
  78. Banerjee, B.P.; Sharma, V.; Spangenberg, G.; Kant, S. Machine Learning Regression Analysis for Estimation of Crop Emergence Using Multispectral UAV Imagery. Remote Sens. 2021, 13, 2918.
  79. Rehman, T.U.; Ma, D.; Wang, L.; Zhang, L.; Jin, J. Predictive Spectral Analysis Using an End-to-End Deep Model from Hyperspectral Images for High-Throughput Plant Phenotyping. Comput. Electron. Agric. 2020, 177, 105713.
  80. Du, J.; Lu, X.; Fan, J.; Qin, Y.; Yang, X.; Guo, X. Image-Based High-Throughput Detection and Phenotype Evaluation Method for Multiple Lettuce Varieties. Front. Plant Sci. 2020, 11, 563386.
  81. Lin, Z.; Guo, W. Sorghum Panicle Detection and Counting Using Unmanned Aerial System Images and Deep Learning. Front. Plant Sci. 2020, 11, 534853.
  82. Jiang, Y.; Li, C.; Xu, R.; Sun, S.; Robertson, J.S.; Paterson, A.H. DeepFlower: A Deep Learning-Based Approach to Characterize Flowering Patterns of Cotton Plants in the Field. Plant Methods 2020, 16, 1–17.
  83. Falk, K.G.; Jubery, T.Z.; Mirnezami, S.V.; Parmley, K.A.; Sarkar, S.; Singh, A.; Ganapathysubramanian, B.; Singh, A.K. Computer Vision and Machine Learning Enabled Soybean Root Phenotyping Pipeline. Plant Methods 2020, 16, 1–19.
  84. Lu, H.; Cao, Z. TasselNetV2+: A Fast Implementation for High-Throughput Plant Counting From High-Resolution RGB Imagery. Front. Plant Sci. 2020, 11, 541960. [Google Scholar] [CrossRef] [PubMed]
  85. Milella, A.; Marani, R.; Petitti, A.; Reina, G. In-Field High Throughput Grapevine Phenotyping with a Consumer-Grade Depth Camera. Comput. Electron. Agric. 2019, 156, 293–306. [Google Scholar] [CrossRef]
  86. Zhou, J.; Fu, X.; Zhou, S.; Zhou, J.; Ye, H.; Nguyen, H.T. Automated Segmentation of Soybean Plants from 3D Point Cloud Using Machine Learning. Comput. Electron. Agric. 2019, 162, 143–153. [Google Scholar] [CrossRef]
  87. Tracy, S.R.; Nagel, K.A.; Postma, J.A.; Fassbender, H.; Wasson, A.; Watt, M. Crop Improvement from Phenotyping Roots: Highlights Reveal Expanding Opportunities. Trends Plant Sci. 2020, 25, 105–118. [Google Scholar] [CrossRef] [Green Version]
  88. Girshick, R. Fast R-CNN. In Proceedings of the IEEE International Conference on Computer Vision, Santiago, Chile, 7–13 December 2015; pp. 1440–1448. [Google Scholar]
  89. Wang, X.; Liu, J. Tomato Anomalies Detection in Greenhouse Scenarios Based on YOLO-Dense. Front. Plant Sci. 2021, 12, 634103. [Google Scholar] [CrossRef] [PubMed]
  90. Lin, Y.; Cai, R.; Lin, P.; Cheng, S. A Detection Approach for Bundled Log Ends Using K-Median Clustering and Improved YOLOv4-Tiny Network. Comput. Electron. Agric. 2022, 194, 106700. [Google Scholar] [CrossRef]
  91. Das Choudhury, S.; Guha, S.; Das, A.; Das, A.K.; Samal, A.; Awada, T. FlowerPhenoNet: Automated Flower Detection from Multi-View Image Sequences Using Deep Neural Networks for Temporal Plant Phenotyping Analysis. Remote Sens. 2022, 14, 6252. [Google Scholar] [CrossRef]
Figure 1. Diagram of the two main categories—hardware and software—considered in this review, as well as the four factors specifically addressed: platforms, sensing equipment, algorithms, and new trends. Blue: hardware-related factors; orange: software-related factors. All factors are equally important and, thus, equally distributed around the systematic review depicted centrally in green.
Figure 2. Flow diagram of the database search using PRISMA.
Figure 3. Number of publications over the years.
Figure 4. Percentage share of the reviewed publications across high-impact-factor journals from 2019 to 2022.
Figure 5. Predominant approaches across the 32 reviewed studies from 2019 to 2022.
Figure 6. The number of studies (on the Y-axis) according to the specific part of the plant under analysis (on the X-axis). Overall, 19% of the studies focused on the root system architecture, 75% on the aerial part of the plant, and 6% on seed morphology.
Figure 7. The number of studies (on the Y-axis) according to the platform used for HTP (on the X-axis). The distribution shows that most of the studies (about 47%) used ground platforms, followed by root platforms (used by 22% of the studies) and aerial platforms (about 19%). Only 6% of the reviewed papers used vehicles and microscopic platforms.
Figure 8. The number of studies (on the Y-axis) according to the sensor equipment used for HTP (on the X-axis). The distribution shows that most of the studies (about 63%) used RGB cameras, followed by hyperspectral and multispectral cameras (both at 9%). Few researchers used X-ray CT, while approximately 16% of papers used other types of sensors.
Figure 9. The number of studies (on the Y-axis) according to the algorithm used for data evaluation (on the X-axis). Most studies (about 69%) used DL approaches, while only 12% of the studies were based on traditional ML. Lastly, 19% of the reviewed studies used hybrid approaches involving both ML and DL.
Table 1. Exclusion criteria.
1. Articles not written in English
2. Articles that do not refer to high-throughput plant phenotyping
3. Articles that relate to phenotyping traits but are unrelated to the discussion
4. Articles that do not use DL or ML
5. Articles that appear in invalid journals or in journals with very low impact factors
6. Articles that are reviews
7. Articles for which only the abstract is available
Table 2. Inclusion criteria.
1. Articles written in English
2. Articles that refer to high-throughput plant phenotyping
3. Articles that use DL or ML
4. Articles that appear in journals with high impact factors
5. Research articles (non-review papers)
6. Articles that are fully available
7. Articles that relate to the selected research questions
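To make the screening step behind Tables 1 and 2 concrete, the following minimal Python sketch filters bibliographic records against the stated criteria. The Record fields and the example entries are hypothetical illustrations introduced here for clarity; they are not part of the review's actual tooling or of the PRISMA software.

```python
# Illustrative sketch only: a hypothetical record filter mirroring the
# inclusion/exclusion criteria of Tables 1 and 2. All field names and
# example records below are assumptions made for this example.
from dataclasses import dataclass

@dataclass
class Record:
    title: str
    language: str
    topic: str                 # e.g., "high-throughput plant phenotyping"
    uses_ml_or_dl: bool        # criterion 3 of Table 2
    journal_quartile: str      # e.g., "Q1" for high-impact journals
    is_review: bool            # reviews are excluded (Table 1, criterion 6)
    full_text_available: bool  # abstract-only papers are excluded

def passes_screening(r: Record) -> bool:
    """Return True only if the record meets every inclusion criterion."""
    return (
        r.language == "English"
        and "high-throughput plant phenotyping" in r.topic
        and r.uses_ml_or_dl
        and r.journal_quartile == "Q1"
        and not r.is_review
        and r.full_text_available
    )

records = [
    Record("Sample HTP paper", "English", "high-throughput plant phenotyping",
           True, "Q1", False, True),
    Record("Unrelated review", "English", "plant genetics",
           False, "Q2", True, False),
]
eligible = [r for r in records if passes_screening(r)]
print(f"{len(eligible)} of {len(records)} records pass screening")
```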
Table 3. Data extraction.
# | Element | Contents/Type
1 | Title | Yes/no
2 | Research questions | Clear description of the research question
3 | Type of article | Problem identification
4 | Study outcomes | Short description of study outcomes
5 | Year | Year of publication
6 | Journal | Impact factor (Q1)
Table 4. Summary of eligible papers included in the systematic review.
# | Reference | Year | Plant | Platform | Sensor | Algorithm
[27] | Bauer et al. | 2022 | Wheat | Minirhizotron | Camera | DL
[60] | Xu et al. | 2022 | Alfalfa | Rhizotron | RGB | ML/DL
[61] | Islam ElManawy et al. | 2022 | Oilseed rape | Hyper platform | Hyperspectral | DL
[23] | Van De Looverbosch et al. | 2022 | Sugar beet | Polystyrene sheets | X-ray CT | DL
[16] | Das Choudhury et al. | 2022 | Flowers | LemnaTec Scanalyzer | RGB, infrared | DL
[62] | Daviet et al. | 2022 | Maize | PhenoArch | RGB | DL
[63] | Yu et al. | 2022 | Lettuce | LQ-FieldPheno | Hyperspectral | DL
[64] | Oury et al. | 2022 | Maize | Earbox | RGB/IR | DL
[65] | Rolland et al. | 2022 | Cotton | Microscope | Microscope | DL
[66] | Petti and Li | 2022 | Cotton | UAV | RGB | DL
[67] | Zhao et al. | 2022 | Cotton | RhizoPot platform | Scanner | DL
[8] | Maji et al. | 2022 | Wheat | Chamber platform | RGB | DL
[68] | Narisetti et al. | 2022 | Maize/wheat | Scanalyzer3D | RGB | DL
[69] | Zenkl et al. | 2022 | Wheat | Field platform | RGB | DL
[70] | Lube et al. | 2022 | Arabidopsis | MultipleXLab | RGB | DL
[71] | Jubery et al. | 2021 | Soybean | SNAP platform | RGB | ML/DL
[72] | Zhao et al. | 2021 | Sorghum | UAV | Multispectral | ML
[73] | Guo et al. | 2021 | Maize | KAT4IA field | RGB | ML/DL
[74] | Chang et al. | 2021 | Arabidopsis | Controlled platform | RGB | ML/DL
[75] | Zhu et al. | 2021 | Wheat | DP72 microscope | DP72 microscope | DL
[76] | Zhou et al. | 2021 | Maize | Chamber | RGB | DL
[77] | Pranga et al. | 2021 | Ryegrass | UAV | Multispectral/RGB | ML
[78] | Banerjee et al. | 2021 | Wheat | UAV | Multispectral | ML
[32] | Koh et al. | 2021 | Wheat | UAV | RGB | ML/DL
[79] | Rehman et al. | 2020 | Maize | Greenhouse platform | Hyperspectral | DL
[80] | Du et al. | 2020 | Lettuce | Greenhouse platform | Industrial camera | DL
[81] | Lin and Guo | 2020 | Sorghum | UAV | RGB | DL
[82] | Jiang et al. | 2020 | Cotton | GPhenoVision | RGB | DL
[83] | Falk et al. | 2020 | Soybean | Root platform | RGB | ML/DL
[84] | Lu and Cao | 2020 | Wheat/maize | Field platform | RGB | DL
[85] | Milella et al. | 2019 | Grapevine | Caterpillar vehicle | RGB-D | DL
[86] | Zhou et al. | 2019 | Soybean | Greenhouse | RGB | ML
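As a worked example of how the distribution in Figure 9 follows from Table 4, the short Python sketch below tallies the Algorithm column (transcribed row by row from the table, in order) and prints each category's share of the 32 studies. This is a minimal illustration of the tally, not software used in the review itself.

```python
# Minimal sketch: reproduce the algorithm-type distribution of Figure 9
# by counting the "Algorithm" column of Table 4 (32 studies, in table order).
from collections import Counter

algorithms = [
    "DL", "ML/DL", "DL", "DL", "DL", "DL", "DL", "DL", "DL", "DL",
    "DL", "DL", "DL", "DL", "DL", "ML/DL", "ML", "ML/DL", "ML/DL", "DL",
    "DL", "ML", "ML", "ML/DL", "DL", "DL", "DL", "DL", "ML/DL", "DL",
    "DL", "ML",
]

counts = Counter(algorithms)
total = sum(counts.values())
for label, n in counts.most_common():
    print(f"{label}: {n}/{total} ({100 * n / total:.1f}%)")
# Prints DL: 22/32 (68.8%), ML/DL: 6/32 (18.8%), ML: 4/32 (12.5%),
# matching the ~69%, ~19%, and ~12% reported in Figure 9.
```

The same Counter pattern applies to the Platform and Sensor columns to recover the distributions in Figures 7 and 8.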