Detection of Maize Tassels from UAV RGB Imagery with Faster R-CNN

Abstract: Maize tassels play a critical role in plant growth and yield. Extensive RGB images obtained using unmanned aerial vehicles (UAVs) and the prevalence of deep learning provide an opportunity to improve the accuracy of detecting maize tassels. We used images from a UAV, a mobile phone, and the Maize Tassel Counting (MTC) dataset to test the performance of the faster region-based convolutional neural network (Faster R-CNN) with a residual neural network (ResNet) and a visual geometry group network (VGGNet) as feature extractors. The results showed that ResNet, as the feature extraction network, was better than VGGNet for detecting maize tassels from UAV images with 600 × 600 resolution; prediction accuracy ranged from 87.94% to 94.99%. However, prediction accuracy was below 87.27% for UAV images with 5280 × 2970 resolution. We modified the anchor sizes in the region proposal network to [85², 128², 256²] according to the distribution of tassel pixel widths and heights, which improved detection accuracy to 89.96%. Accuracy reached 95.95% for mobile phone images. We then compared our trained model with TasselNet without training on its dataset; the average difference in tassel number between the two methods was 1.4 over a comparison of 40 images. In the future, we could further improve the performance of the models by enlarging the datasets and calculating other tassel traits such as the length, width, diameter, perimeter, and branch number of the maize tassels.


Introduction
By 2050, agricultural production will need to double to meet the food demand of a growing population [1]. Maize is one of the most important cereal crops in the world. Maize plants are capable of natural pollination, but self-pollination is not ideal. Continuous monitoring of maize tassel growth can help ensure the security, quality, and yield of maize [2]. In the past, tassel recognition in breeding depended mainly on human effort, which was time-consuming, labor-intensive, and limited in sample size. Therefore, fast and accurate identification is important for a better understanding of the phenotypic traits of maize tassels.
With the development of computer vision and image-based plant phenotyping platforms, researchers can obtain high-resolution plant growth images easily and identify phenotypic traits automatically and rapidly [3][4][5]. The earliest detection of maize tassels was conducted by image segmentation using the support vector machine method [6]. Researchers adopted datasets of high-resolution maize images and achieved high accuracy in detecting maize tassels [4,6]. However, image capture in those studies had relatively low throughput, was labor-intensive, and could not be applied in larger fields. Lu et al. [4,7] developed the mTASSLE software to monitor different stages of maize tassel development with an automatic fine-grained machine vision system and proposed TasselNet to count maize tassels. However, the sample size was still limited.
Therefore, improving the throughput of phenotyping measurements is a significant challenge in this kind of research. Recent applications of unmanned aerial vehicles (UAVs) mounted with high-definition cameras have increased sample sizes tremendously [8][9][10]. Researchers have used UAV images for plant height estimation [11][12][13], seedling counting [14][15][16], and crop growth estimation [17,18]. Nevertheless, there are fewer applications of maize tassel detection using UAV images [19], which is challenging in natural environments owing to light conditions, possible occlusions, and different maize genotypes. Deep learning algorithms have been widely used to count stems [20], seedlings [16], and wheat spikes [14]. Using the Faster R-CNN [21] algorithm, Quan et al. [16] conducted an experiment on maize seedling detection with different convolutional neural networks and confirmed that VGGNet [22] and ResNet [23] performed better than GoogleNet [24] and SqueezeNet [25]. Kumar et al. [19] confirmed that the Faster R-CNN algorithm performed better than You Only Look Once (YOLO) [26] for maize tassel detection with UAVs.
Therefore, the aim of our study is to use VGGNet and ResNet as the feature extraction networks in Faster R-CNN to detect maize tassels in both UAV images and images photographed on the ground with a mobile phone. The anchor sizes in the region proposal network were then modified according to the real pixel sizes of the tassels to improve detection accuracy. Finally, we compared our method with TasselNet [7] to further verify its general performance on an independent dataset.

Field Experiments, Image Acquisition, and Labelling
There were 485 maize inbred lines with extensive genetic diversity in each replicate. Image datasets were collected from two experimental fields at Lishu (43°16′45″ N, 124°26′10″ E), Jilin, China, and Shangzhuang (40°06′5″ N, 116°12′22″ E), Beijing, China. The 356 images of field-grown maize were obtained with a DJI Inspire 2 UAV using a ZENMUSE X5S camera (DJI, Shenzhen, China) at the Lishu experimental farm. The flying height was 15 m above the ground and the camera resolution was 5280 × 2970 pixels (Figure 1, right). Each inbred line was planted in a single row. Flowering time differed among the lines, so no tassels appeared in some areas of the images. To reduce the data processing time, the original images were cropped and filtered into 1125 images with 600 × 600 resolution.
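As an illustration (not the authors' released code), the cropping step can be sketched in Python; the input file name and output directory below are hypothetical:

```python
# Hedged sketch: tile one 5280 x 2970 UAV frame into 600 x 600 crops.
import os
from PIL import Image

def tile_image(path, tile=600):
    """Yield non-overlapping tile x tile crops of the image at `path`."""
    img = Image.open(path)
    w, h = img.size  # e.g., 5280 x 2970
    for top in range(0, h - tile + 1, tile):
        for left in range(0, w - tile + 1, tile):
            yield img.crop((left, top, left + tile, top + tile))

os.makedirs("crops", exist_ok=True)
for i, crop in enumerate(tile_image("uav_lishu_001.jpg")):  # illustrative path
    crop.save(f"crops/uav_lishu_001_{i:03d}.png")
```

Crops without tassels would then be filtered out before annotation.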
The graphical image annotation tool LabelImg [27] was used to draw bounding boxes on these cropped images, as shown in Figure 2. All pixels of each maize tassel were within its bounding box. The annotated images were divided randomly into a training-validation set and a test set at a ratio of 7:3; within the training-validation set, the ratio of training to validation was again 7:3 (a split sketch follows the Figure 1 caption below). Another set of 89 maize images was collected using a mobile phone, 3 m above the ground, at the Shangzhuang experimental farm and was used to validate the generalization performance of the models, as shown in the lower left part of Figure 1. The resolution of these images was 4000 × 2250 pixels.

Figure 1. Image datasets were collected at two experimental sites. Images were taken using a mobile phone, 3 m above the ground, at the Shangzhuang experimental farm (left). Images were taken using a ZENMUSE X5S camera mounted on a DJI Inspire 2 UAV, 15 m above the ground, at the Lishu experimental farm (right).
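A minimal sketch of the two random 7:3 splits, using a placeholder list in place of the real annotated crops:

```python
# Hedged sketch of the 7:3 trainval/test and 7:3 train/val random splits.
import random

random.seed(42)                      # fixed seed for reproducibility
annotated = list(range(1125))        # stand-in for the 1125 annotated crops
random.shuffle(annotated)

cut = int(0.7 * len(annotated))
trainval, test = annotated[:cut], annotated[cut:]   # 7:3 trainval vs. test
cut2 = int(0.7 * len(trainval))
train, val = trainval[:cut2], trainval[cut2:]       # 7:3 train vs. validation
print(len(train), len(val), len(test))
```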

Model Description
Faster R-CNN implements an end-to-end object detection algorithm, as shown in Figure 3. It generates feature maps through five shared convolutional layers for a given annotated image (Figure 3a). The region proposal network (RPN) determines whether each candidate region is foreground or background and performs the first bounding-box regression (Figure 3b). These region proposals are then processed by region-of-interest (RoI) pooling (Figure 3c). A softmax classifier determines which category each object belongs to, and a final bounding-box regressor refines the object position. Compared with earlier algorithms [28,29], Faster R-CNN no longer relies on hand-crafted feature engineering. Although Faster R-CNN has a long training time, it achieves very high accuracy in complex scenes.
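As a schematic of this four-stage data flow only (placeholder functions and made-up shapes, not the real layers), the stages connect as follows:

```python
# Schematic only: dummy functions showing how the Faster R-CNN stages in
# Figure 3 hand data to one another; none of these are the actual layers.
import numpy as np

def backbone(image):
    """(a) Five shared convolutional layers -> feature maps."""
    return np.zeros((37, 37, 512))

def rpn(feature_maps):
    """(b) Objectness scores + first bounding-box regression -> proposals."""
    return [(10, 10, 100, 100)]  # (xmin, ymin, xmax, ymax)

def roi_pooling(feature_maps, proposals):
    """(c) A fixed-size feature tensor for each region proposal."""
    return np.zeros((len(proposals), 7, 7, 512))

def detection_head(pooled):
    """Softmax classification + final bounding-box refinement."""
    return [("maize tassel", 0.99, (12, 9, 98, 102))]

image = np.zeros((600, 600, 3))
feats = backbone(image)
proposals = rpn(feats)
print(detection_head(roi_pooling(feats, proposals)))
```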

Region Proposal Network
The region proposal network takes an image of any size as input and produces a series of rectangular region proposals. A 3 × 3 window slides over the feature maps generated by the last shared convolutional layer, and each window position is mapped to a fixed-length feature vector. This vector feeds two sibling fully connected layers: a bounding-box regression layer and a classification layer (Figure 3b). At each window position, nine candidate boxes with multiple scales and aspect ratios (called anchors) are generated to provide better translation invariance.

Anchor Size Adjustment
The default anchor sizes in Faster R-CNN are [128², 256², 512²]. We found that the pixel widths and heights of individual tassels were mostly between 66 and 105 pixels for images with 600 × 600 resolution, as shown in Figure 4a. Inspired by small object detection in optical remote sensing images via a modified Faster R-CNN [30], we adjusted the anchor sizes to [85², 128², 256²], as shown in Figure 4b.
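A hedged sketch (not the authors' implementation) of anchor generation at one sliding-window position; the nine anchors come from three scales and three aspect ratios, and switching from the default scales to the modified ones is a one-line change:

```python
# Nine anchors at one window position: three scales x three aspect ratios.
# Default scales would be (128, 256, 512); we pass the modified (85, 128, 256).
import numpy as np

def make_anchors(cx, cy, scales=(85, 128, 256), ratios=(0.5, 1.0, 2.0)):
    """Return nine (xmin, ymin, xmax, ymax) anchors centered at (cx, cy)."""
    anchors = []
    for s in scales:
        for r in ratios:
            w = s * np.sqrt(r)   # keep the anchor area close to s**2
            h = s / np.sqrt(r)   # so w / h equals the aspect ratio r
            anchors.append((cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2))
    return np.array(anchors)

print(make_anchors(300, 300).round(1))
```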

Convolutional Neural Network
The visual geometry group neural network (VGGNet) was developed by the Visual Geometry Group at Oxford University. In 2014, VGGNet won the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) classification task [31]. Researchers have since drawn on VGGNet's classic ideas to design advanced classification models; its use of 3 × 3 convolution kernels underlies many subsequent models. Typical variants are VGG16 and VGG19, which differ in the number of convolution layers. The default input image size is 224 × 224 pixels for VGG19, whereas the input size was 600 × 600 in our experiment. The input image passes through five convolution-pooling-activation blocks. We adopted the default VGG19 network structure except for the max-pooling layer of the fifth convolution block, which was omitted to keep more information in the feature maps [16], as shown in Figure 5. To avoid overfitting, a regularization or dropout layer was added after each convolution-pooling block [32].
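One plausible way to realize this in Keras (our assumption, not the authors' released code) is to take the stock VGG19 trunk and stop at the last convolution of block 5, which drops the fifth max-pooling layer and keeps larger feature maps:

```python
# Hedged sketch: VGG19 feature extractor without the fifth max-pooling layer.
from tensorflow.keras.applications import VGG19
from tensorflow.keras.models import Model

base = VGG19(include_top=False, weights="imagenet", input_shape=(600, 600, 3))
# "block5_conv4" is the last conv layer; taking its output omits block5_pool.
feature_extractor = Model(inputs=base.input,
                          outputs=base.get_layer("block5_conv4").output)
feature_extractor.summary()
```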
The residual neural network (ResNet) was proposed by Microsoft Research and won the 2015 ILSVRC competition. ResNet18, ResNet34, ResNet50, ResNet101, and ResNet152 were designed in [23]; the structure of ResNet50 is shown in Figure 6a. The main idea of ResNet is to add a direct connection channel to the network so that the layers learn residuals, i.e., the differences between layer inputs and outputs. Traditional convolutional or fully connected networks can lose information and thus suffer vanishing or exploding gradients during deep network training. ResNet mitigates this by passing the input information directly to the output. The residual learning module has two branches, as shown in Figure 6b: one branch processes the input "X" through three convolution layers with batch normalization and ReLU activations, and the other passes it through a shortcut. After the two branches are fused by feature-map addition, the network only needs to learn the difference between input and output, which simplifies the learning objective. We adopted ResNet50, ResNet101, and ResNet152 for feature-map extraction in Faster R-CNN.
Figure 6. (a) The structure of residual neural network 50 (ResNet50) and (b) the core structure of the residual learning module in ResNet50. Conv represents a convolutional layer; the Conv1 and Conv3 kernel size is 3 × 3, the Conv2 kernel size is 1 × 1, padding is 1, stride is 1, and the activation function is ReLU.
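A minimal Keras sketch of the residual module as described in the Figure 6 caption (Conv1/Conv3 3 × 3, Conv2 1 × 1, stride 1, ReLU); layer ordering and names are our assumptions:

```python
# Hedged sketch of the two-branch residual learning module in Figure 6b.
from tensorflow.keras import layers

def residual_block(x, filters):
    """Three-convolution main branch plus an identity shortcut.

    Assumes the input already has `filters` channels; otherwise ResNet uses a
    1x1 projection on the shortcut so the addition is shape-compatible.
    """
    shortcut = x
    y = layers.Conv2D(filters, 3, padding="same")(x)   # Conv1: 3 x 3, stride 1
    y = layers.BatchNormalization()(y)
    y = layers.Activation("relu")(y)
    y = layers.Conv2D(filters, 1, padding="same")(y)   # Conv2: 1 x 1
    y = layers.BatchNormalization()(y)
    y = layers.Activation("relu")(y)
    y = layers.Conv2D(filters, 3, padding="same")(y)   # Conv3: 3 x 3
    y = layers.BatchNormalization()(y)
    y = layers.Add()([shortcut, y])                    # feature-map fusion
    return layers.Activation("relu")(y)                # learn only the residual

inputs = layers.Input((150, 150, 256))
outputs = residual_block(inputs, 256)
```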
To achieve better training results and reduce the running time, we initialized with weight parameters from an ImageNet pretrained model. The number of iterations was set to 200, with 1000 training steps per iteration. The learning rate was set to 0.0001. We adopted standard stochastic gradient descent to optimize the network parameters. Our initial network weights were adopted from [33].
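This training schedule can be sketched as follows; the model below is a stand-in, since assembling the full Faster R-CNN is beyond a short example:

```python
# Hedged sketch of the training configuration (placeholder model and data).
import numpy as np
import tensorflow as tf
from tensorflow import keras

model = keras.Sequential([keras.layers.Dense(1, input_shape=(4,))])
model.compile(optimizer=keras.optimizers.SGD(learning_rate=1e-4),  # lr 0.0001
              loss="mse")

# Dummy data; .repeat() lets steps_per_epoch exceed the dataset size.
ds = tf.data.Dataset.from_tensor_slices(
        (np.random.rand(32, 4).astype("float32"),
         np.random.rand(32, 1).astype("float32"))).batch(8).repeat()

model.fit(ds, epochs=200, steps_per_epoch=1000)  # 200 iterations x 1000 steps
```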
The location regression and classification loss for an image follows [21]:

$$L(\{p_i\},\{t_i\}) = \frac{1}{N_{cls}} \sum_i L_{cls}(p_i, p_i^*) + \alpha \frac{1}{N_{loc}} \sum_i p_i^* L_{loc}(t_i, t_i^*)$$

where N_cls, N_loc, and α are loss parameters set to 256, 2400, and 10, respectively. Here, i represents the index of an anchor in a mini-batch. The classification term is the log loss over the two classes (object vs. not object):

$$L_{cls}(p_i, p_i^*) = -\log\left[p_i^* p_i + (1 - p_i^*)(1 - p_i)\right]$$

where p_i represents the predicted probability. If the anchor is a positive sample, p_i^* is 1; otherwise, p_i^* is 0. The location term L_loc is the smooth-L1 loss in [21], applied to the parameterized box coordinates:

$$t_x = \frac{x - x_a}{w_a},\quad t_y = \frac{y - y_a}{h_a},\quad t_w = \log\frac{w}{w_a},\quad t_h = \log\frac{h}{h_a}$$

$$t_x^* = \frac{x^* - x_a}{w_a},\quad t_y^* = \frac{y^* - y_a}{h_a},\quad t_w^* = \log\frac{w^*}{w_a},\quad t_h^* = \log\frac{h^*}{h_a}$$

where x, y are the center point coordinates of the box; w, h are the width and height of the box; and x, x_a, and x^* represent the predicted box, anchor box, and ground-truth box, respectively (and likewise for y, w, and h).
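A small NumPy sketch of this box parameterization together with the smooth-L1 term commonly used for L_loc in [21]:

```python
# Box encoding and smooth-L1 loss for one anchor (center-x, center-y, w, h).
import numpy as np

def encode_box(box, anchor):
    """Map a box to (t_x, t_y, t_w, t_h) relative to an anchor."""
    x, y, w, h = box
    xa, ya, wa, ha = anchor
    return np.array([(x - xa) / wa, (y - ya) / ha,
                     np.log(w / wa), np.log(h / ha)])

def smooth_l1(t, t_star):
    """Smooth-L1 loss summed over the four box coordinates."""
    d = np.abs(t - t_star)
    return np.sum(np.where(d < 1.0, 0.5 * d**2, d - 0.5))

# Example: predicted vs. ground-truth offsets for one anchor.
anchor = (300.0, 300.0, 85.0, 85.0)
t_pred = encode_box((310.0, 295.0, 90.0, 80.0), anchor)
t_gt   = encode_box((312.0, 298.0, 95.0, 82.0), anchor)
print(smooth_l1(t_pred, t_gt))
```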

Model Evaluation
Our model was implemented using the Keras framework on the Windows 10 Professional operating system with an Intel Core i7-7800X CPU and a GTX 1080 Ti GPU. The evaluation protocol mainly followed the PASCAL (Pattern Analysis, Statistical Modelling and Computational Learning) Visual Object Classes challenges. For each image, the detection model returned a set of regression boxes, each with a confidence score between zero and one. A predicted bounding box has the same format as the real bounding box: {Xmin, Ymin, Xmax, Ymax}.
The intersection over union (IoU) is commonly used to evaluate model performance:

$$IoU = \frac{area(B_p \cap B_{gt})}{area(B_p \cup B_{gt})}$$

where B_p is the predicted bounding box and B_gt is the ground-truth box. If the real box and the detected bounding box match exactly, the IoU is one. The threshold was set to 0.5: if the IoU exceeds the threshold, the detection is considered correct. Correct detections are marked as true positives (TP), while the remaining detections are considered false positives (FP). The average precision (AP) is the area enclosed by the precision-recall curve and the coordinate axes. The model was evaluated on all test datasets, with precision and recall defined as:

$$Precision = \frac{TP}{TP + FP}, \qquad Recall = \frac{TP}{TP + FN}$$

where TP, FP, and FN are the numbers of true positives, false positives, and false negatives detected for each image. High precision means few false positives; a high recall rate means that most of the targets are detected.
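These metrics are straightforward to compute; a minimal sketch:

```python
# IoU, precision, and recall for boxes in {Xmin, Ymin, Xmax, Ymax} format.
def iou(box_a, box_b):
    xa1, ya1, xa2, ya2 = box_a
    xb1, yb1, xb2, yb2 = box_b
    # Intersection rectangle (zero if the boxes do not overlap).
    iw = max(0.0, min(xa2, xb2) - max(xa1, xb1))
    ih = max(0.0, min(ya2, yb2) - max(ya1, yb1))
    inter = iw * ih
    union = ((xa2 - xa1) * (ya2 - ya1)
             + (xb2 - xb1) * (yb2 - yb1) - inter)
    return inter / union if union > 0 else 0.0

def precision_recall(tp, fp, fn):
    return tp / (tp + fp), tp / (tp + fn)

# A detection counts as a true positive when IoU exceeds the 0.5 threshold.
print(iou((0, 0, 100, 100), (50, 50, 150, 150)))  # ~0.143 -> false positive
```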

Comparisons between Different Feature Extraction Networks
In Faster R-CNN, different feature extraction networks lead to different object detection performance [34]. VGGNet and ResNet have performed well in the classification of remote sensing images [35,36]; therefore, they were used as the feature extraction networks in this study. We employed VGG16, VGG19, and VGG20 separately, as well as ResNet50, ResNet101, and ResNet152, which have deeper neural network layers. The final results are shown in Figure 7 and Table 1. A label such as "maize tassel: 99" means a 99% probability that the object in the bounding box is a maize tassel. The results demonstrated that ResNet101 was the best for maize tassel detection, with an AP of 94.99%. Furthermore, higher accuracy was obtained with ResNet (ResNet50, ResNet101, ResNet152) than with VGGNet (VGG16, VGG19, VGG20). Ren et al. [37] also showed that ResNet can outperform VGGNet. The loss curve is shown in Figure 8.

Comparison between Different Anchor Sizes
Here, ResNet was used as the feature extraction network to detect tassels in the original UAV images with a resolution of 5280 × 2970, given its better performance. Figure 9 shows the model performance with different anchor sizes. With the default anchor sizes, 61 maize tassels were detected, marked by red boxes in Figure 9a. Two recognition errors, in which decaying maize leaves were mistaken for tassels, are marked with blue boxes in Figure 9a, and nine small maize tassels that were missed are marked with yellow boxes. With the modified anchor sizes, only three maize tassels were missed (Figure 9b). As summarized in Table 2, the modified anchor sizes improved the accuracy of maize tassel detection, especially for small tassels.

We tested the generalization performance of the model with independent images collected by mobile phone, as shown in Figure 1. Here, the pixel sizes of the maize tassels were mostly between 400 and 1000 pixels, as shown in Figure 10a. To fit the range of the modified anchor sizes, we resized the original Shangzhuang images from 4000 × 2250 to 1066 × 600 resolution. We then used ResNet101 as the feature extraction network with anchor sizes [85², 128², 256²], and the AP value reached 95.92%.

Comparison with TasselNet
Lu et al. [7] proposed TasselNet to count maize tassels, with LeNet [38], AlexNet [39], and VGG16 [22] as feature extraction networks. After merging and normalizing local counts, TasselNet outputs the number of maize tassels. They released the Maize Tassel Counting (MTC) dataset to draw attention from practitioners working in related fields. We therefore tested the MTC dataset with ResNet101 in our Faster R-CNN; the results are shown in Figure 11. The average difference in detected tassel number between the two models was 1.4 over a comparison of 40 images. Our Faster R-CNN did not perform better than TasselNet at detecting maize tassels: as shown in Figure 11b, tassels heavily occluded by leaves were missed by our Faster R-CNN. It should be emphasized that we tested MTC images with our model without training on their dataset, which further validates the generalization of our Faster R-CNN. In the future, we could adopt other recently developed networks [40][41][42] to improve detection accuracy and extract traits such as the size and color of individual tassels by combining detection with semantic segmentation algorithms [43,44].

Conclusions
The aim of our research was to evaluate the accuracy of detecting maize tassels using a modified Faster R-CNN algorithm with images of different resolutions collected by UAV and mobile phone, and with an independent dataset. We found that ResNet, as the feature extraction network, was better than VGGNet for detecting maize tassels from UAV images with 600 × 600 resolution: the AP values ranged from 87.94% to 91.51% using VGGNet and from 91.99% to 94.99% using ResNet. We then used ResNet to detect tassels from UAV images with 5280 × 2970 resolution, obtaining AP values from 86.46% to 87.27%. To better detect small objects in 5280 × 2970 UAV images, we modified the anchor sizes in the region proposal network, which improved the AP values from 87.82% to 89.96%. The AP value reached 95.95% for images obtained with a mobile phone after resizing. Finally, we compared our modified model with TasselNet using the independent MTC dataset; the average difference in tassel number between the two methods was 1.4 over a comparison of 40 images. It took fifty minutes to obtain 485 maize images with 5280 × 2970 resolution by UAV in a 10,500 m² field, two weeks to annotate the training dataset, and one day to train each model. By contrast, a manual survey of the whole field would require more than five people to complete within one day, and different people would apply different evaluation criteria, which could introduce bias. In the future, we could enlarge the image datasets collected by UAV at different heights and over different time series to further improve model performance and calculate other phenotypic traits.