# Attempting to Estimate the Unseen—Correction for Occluded Fruit in Tree Fruit Load Estimation by Machine Vision with Deep Learning


## Abstract

… an R² of 0.68 for the reference method. Error on prediction of whole-orchard (880 trees) fruit load compared to packhouse count was 1.6% for the MLP model and 13.6% for the reference method. However, the performance of these models on data of another season was at best equivalent to, and generally poorer than, that of the reference method. This result indicates that training on one season of data was insufficient for development of a robust model.

## 1. Introduction

#### 1.1. In-Field Approaches to the Estimation of Tree Fruit Load

… (R² = 0.94 and slope = 0.54 for dual view, and R² = 0.90 and slope = 1.01 for multi-view). It was noted that the number of hidden fruit was balanced by the number of over-counted fruit in the multi-view method, such that a high correlation and unity slope were achieved between machine vision and harvest counts. Such a relationship may not hold for orchards of trees with different canopy architectures, in which case even multi-view methods require an orchard- or tree-specific adjustment for occluded fruit.

#### 1.2. Direct Prediction of Fruit Load from Machine Vision

… an R² of 0.82 and RMSE of 2.3 kg/tree in estimation of a test set. For the Pinova variety, a 4-10-1 architecture model achieved an R² of 0.88 and RMSE of 2.5 kg/tree in estimation of a test set. In a parallel approach, [12] trained an ANN model (4-14-1 architecture) with four image features (total fruit pixel area, circle-fitted fruit pixel area, average radius of the fitted circle and residual fruit pixel area after circle fitting) from dual-view images of apple trees to predict individual tree yield (kg/tree). The images were of very narrow canopies with very low rates of fruit occlusion. The model, trained on images of 21 trees and validated on images of five trees, achieved an R² of 0.996 and RMSE of 1.0 kg/tree on the training set (mean 40.4 kg/tree) but only an R² of 0.02 and an RMSE of 37.1 kg/tree on the validation set.

#### 1.3. Current Approach

## 2. Materials and Methods

#### 2.1. Hardware

… an Nvidia® Tesla® P100 GPU (Nvidia, Santa Clara, CA, USA; 16 GB memory) using CUDA v9.0 and cuDNN v7.1.1 (Nvidia, Santa Clara, CA, USA), OpenCV-Python v4.0.0.21, Python v2.7.14 (Python Software Foundation, Wilmington, DE, USA), Keras v2.2.0, Scikit-learn v0.19.1 and TensorFlow v1.8.0 (Google Brain, Mountain View, CA, USA).

#### 2.2. Orchard Information

#### 2.3. Fruit Counting and Canopy Classification

#### 2.3.1. Fruit Counting

- (i) MangoYOLO model: MangoYOLO [3] is a deep learning CNN fruit detection and localization model optimized for speed, computation and accuracy through re-design of the YOLO object detection framework. The MangoYOLO model detects mango fruit, then draws and counts bounding boxes around the detected fruit in tree images. MangoYOLO comprises 33 layers in total, including 3 detection, 2 route, 2 up-sample and 26 convolutional layers (Figure 2). The MangoYOLO model adopted from [3] had been pre-trained on 1300 images containing 11,820 mango fruit and was implemented with OpenCV-Python v4. The class confidence and NMS thresholds for the MangoYOLO model were set to 0.24 and 0.45, respectively.
- (ii) Xception_count model: The Xception_count model was trained to directly predict fruit number in tree images using CNN regression. As the number of sample tree images in the current training set was too small for training the regression model, fruit counts from the MangoYOLO model on a larger image set were used as the ground-truth fruit counts for training the Xception_count model.

… a learning rate of 1 × 10⁻⁴. The Xception_count model was compiled with the MSE (mean squared error) loss function and the Adam [16] optimizer. MSE is a commonly used loss function for regression modelling and is computed as the mean of the squared difference between estimated and actual values.
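The MSE loss described above can be written out directly. A minimal NumPy sketch (the fruit counts here are illustrative only, not the paper's data):

```python
import numpy as np

def mse(estimated, actual):
    """Mean of the squared difference between estimated and actual values."""
    estimated = np.asarray(estimated, dtype=float)
    actual = np.asarray(actual, dtype=float)
    return float(np.mean((estimated - actual) ** 2))

# Illustrative fruit counts for three tree images
predicted = [110.0, 95.0, 210.0]
ground_truth = [100.0, 90.0, 200.0]
print(mse(predicted, ground_truth))  # → 75.0
```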

#### 2.3.2. Xception_Classification Model

- The Dense_2 layer of the Xception_classification model consisted of 3 neurons (one per canopy category/class) with a “sigmoid” activation function, compared to 1 neuron with a ‘linear’ activation function for predicting continuous values in the Xception_count model.
- The Xception_classification model was compiled with the “categorical_crossentropy” loss function, compared to the MSE loss function of the Xception_count model. Cross-entropy is a commonly used loss function for multi-class classification tasks. It is based on maximum likelihood (a probability distribution across multiple classes), and minimizes the mean difference between the actual and estimated probability distributions over all classes considered.

… a learning rate of 1 × 10⁻⁴.
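The cross-entropy loss described above compares the ground-truth class distribution (one-hot for a single canopy category) with the predicted class probabilities. A minimal NumPy sketch (values illustrative only):

```python
import numpy as np

def categorical_crossentropy(y_true, y_pred, eps=1e-12):
    """Mean negative log-likelihood of the true class under the predicted distribution."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.clip(np.asarray(y_pred, dtype=float), eps, 1.0)  # avoid log(0)
    return float(np.mean(-np.sum(y_true * np.log(y_pred), axis=-1)))

# Two tree images, three canopy categories (a, b, c); one-hot ground truth
y_true = [[1, 0, 0], [0, 0, 1]]
y_pred = [[0.8, 0.1, 0.1], [0.2, 0.2, 0.6]]
loss = categorical_crossentropy(y_true, y_pred)  # lower is better; 0 for a perfect match
```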

#### 2.4. Canopy and Fruit Region Extraction

#### 2.4.1. Canopy Extraction

#### 2.4.2. Shape Fitting

#### 2.5. Yield Estimation

#### 2.5.1. Overview

… R² values).
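The MangoYOLO_yield reference method (Table 1) adjusts a machine-vision count by an occlusion factor estimated from manual counts of fruit load on a sample of trees. The exact formulation is not restated in this section; one plausible sketch, assuming the factor is the ratio of total harvest count to total machine-vision count over the sampled trees:

```python
def occlusion_factor(sample_harvest_counts, sample_machine_counts):
    """Ratio of total harvested fruit to total machine-vision fruit over sample trees
    (assumed form of the correction, for illustration)."""
    return sum(sample_harvest_counts) / sum(sample_machine_counts)

def corrected_load(machine_count, factor):
    """Scale a per-tree machine-vision count up to an estimated total fruit load."""
    return machine_count * factor

# Illustrative numbers: sample trees carried 600 fruit, of which machine vision saw 400
factor = occlusion_factor([200, 250, 150], [130, 170, 100])  # → 1.5
print(corrected_load(120, factor))  # → 180.0
```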

#### 2.5.2. MLP_Yield Model

#### Network Architecture

#### Training

The fruit count input variables (cT_{a}, cF_{a}, cO_{a}, cT_{b}, cF_{b}, cO_{b}) and the target ground-truth harvest count per tree were normalized by dividing by the maximum values found in the training dataset, i.e., 200 for fruit count per tree and 600 for harvest count per tree. All three layers of the MLP_yield model were initialized using ‘uniform’ weights and trained for 200 epochs with a batch size of 4. The model was compiled with an MSE loss and an Adam optimizer with a learning rate of 1 × 10⁻³.
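The training recipe above (max-normalized inputs and targets, MSE loss, Adam, small batches, 200 epochs) can be sketched with scikit-learn's MLPRegressor standing in for the Keras implementation; the hidden-layer size and the synthetic data below are assumptions for illustration, not the paper's settings:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Synthetic stand-in data: 35 trees, 12 features as in Table 4
X = rng.uniform(0, 200, size=(35, 12))
y = X[:, 0] + X[:, 6] + rng.normal(0, 5, size=35)  # harvest count driven by cT_a + cT_b

# Normalize by training-set maxima, as described (200 for counts, 600 for harvest count)
Xn = X / 200.0
yn = y / 600.0

# Adam optimizer, squared-error loss (MLPRegressor's default), batch size 4, lr 1e-3
model = MLPRegressor(hidden_layer_sizes=(8,), solver="adam",
                     learning_rate_init=1e-3, batch_size=4,
                     max_iter=200, random_state=0)
model.fit(Xn, yn)

# Predictions are de-normalized back to fruit per tree
pred = model.predict(Xn) * 600.0
```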

#### 2.5.3. Random_Forest_Yield Model

#### Network Architecture

#### Training Method
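The Random_forest_yield model regresses harvest count on the same input variables as MLP_yield (Table 4) using an ensemble of decision trees. A minimal scikit-learn sketch with synthetic stand-in data (the tree count and the data are assumptions, not the paper's settings):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Synthetic stand-in: 35 trees, 12 features in the order of Table 4
feature_names = ["cTa", "cFa", "cOa", "RpFa", "RpOa", "RpCa",
                 "cTb", "cFb", "cOb", "RpFb", "RpOb", "RpCb"]
X = rng.uniform(0, 1, size=(35, 12))
y = 0.6 * X[:, 0] + 0.6 * X[:, 6] + rng.normal(0, 0.05, size=35)

model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X, y)

# Feature importances sum to 1, as in Table 10
importances = dict(zip(feature_names, model.feature_importances_))
```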

#### 2.5.4. Deep_Yield Model

#### Network Architecture

#### Training

… a learning rate of 1 × 10⁻³, loss function = MSE, optimizer = Adam and batch size = 8. The Xception network inside the Xception_siamese model was initialized with the pre-trained ImageNet weights available from the Keras library. For model training, the target values (harvest count) were normalized by dividing by the maximum harvest count value.

#### 2.5.5. Xception_Yield Model

#### Network Architecture

#### Training Method

… a learning rate of 1 × 10⁻⁴. Transfer learning was not used, and the weights of the CNN model were initialized to random values.

## 3. Results and Discussion

#### 3.1. Fruit Counting Using Xception_Count Compared to MangoYOLO

The coefficient of determination (R²) for the linear regression of the results of the two methods was high in both years, but Xception_count underestimated fruit load relative to the MangoYOLO model.

… (lower R² on human count) or as accurate (slope < 1, intercept > 0) as the MangoYOLO model result in prediction of fruit load of trees in a different year to that used for training (Table 6).
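The regression statistics reported in Tables 5 and 6 (slope, intercept, R² and RMSE) can be computed from paired counts as follows; a minimal NumPy sketch with illustrative counts, not the paper's data:

```python
import numpy as np

def regression_stats(x, y):
    """Ordinary least-squares slope/intercept of y on x, with R² and RMSE of the fit."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    slope, intercept = np.polyfit(x, y, 1)
    fitted = slope * x + intercept
    ss_res = np.sum((y - fitted) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    rmse = float(np.sqrt(np.mean((y - fitted) ** 2)))
    return slope, intercept, r2, rmse

# Illustrative per-image counts: model count (x) vs. human count (y)
x = [50, 80, 120, 160, 200]
y = [48, 75, 118, 150, 190]
slope, intercept, r2, rmse = regression_stats(x, y)
```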

#### 3.2. Canopy Categorization

#### 3.3. Correlates to Occlusion Factor

… (R² around 0.9) to the number of visible fruit per tree, relations involving harvest count or hidden fruit were poor (Table 9). The exception was a strong correlation (R² around 0.9) between the ratio of hidden fruit count to harvest count and the ratio of fully exposed fruit count to harvest count (Table 9), a relationship of no predictive value given that harvest count is required. With no obvious relationship between visible canopy features and hidden fruit count, this result does not bode well for the use of a deep learning model to predict total tree fruit load.

#### 3.4. Feature Importance in Models

… (R² = 0.21 and 0.17, respectively) between the canopy attributes of canopy volume and trunk circumference, and fruit load, was also reported for the same orchard [1]. These observations are consistent with the low weighting assigned to attributes related to canopy size (e.g., RpC).

#### 3.5. Model Performance in Prediction of Tree Fruit Load

… (R² = 0.98 and RMSE = 17.8) (Table 11); however, for the ABC-2018 set, the MangoYOLO_yield method using an occlusion factor obtained for the same year achieved the best result, followed by the Random_forest_yield method (Table 12). The direct yield-estimation models were therefore not robust in prediction of a population of another season.

## 4. Conclusions

## Author Contributions

## Funding

## Data Availability Statement

## Acknowledgments

## Conflicts of Interest

## References

- Anderson, N.; Underwood, J.; Rahman, M.; Robson, A.; Walsh, K. Estimation of fruit load in mango orchards: Tree sampling considerations and use of machine vision and satellite imagery. Precis. Agric.
**2018**. [Google Scholar] [CrossRef] - Koirala, A.; Walsh, K.B.; Wang, Z.; McCarthy, C. Deep learning—Method overview and review of use for fruit detection and yield estimation. Comput. Electron. Agric.
**2019**, 162, 219–234. [Google Scholar] [CrossRef] - Koirala, A.; Wang, Z.; Walsh, K.; McCarthy, C. Deep learning for real-time fruit detection and orchard fruit load estimation: Benchmarking of ‘mangoyolo’. Precis. Agric.
**2019**, 20, 1107–1135. [Google Scholar] [CrossRef] - Payne, A.B.; Walsh, K.B.; Subedi, P.; Jarvis, D. Estimation of mango crop yield using image analysis–segmentation method. Comput. Electron. Agric.
**2013**, 91, 57–64. [Google Scholar] [CrossRef] - Wang, Q.; Nuske, S.; Bergerman, M.; Singh, S. Automated crop yield estimation for apple orchards. In Experimental Robotics; Springer: Berlin/Heidelberg, Germany, 2013; pp. 745–758. [Google Scholar]
- Stein, M.; Bargoti, S.; Underwood, J. Image based mango fruit detection, localisation and yield estimation using multiple view geometry. Sensors
**2016**, 16, 1915. [Google Scholar] [CrossRef] [PubMed] - Moonrinta, J.; Chaivivatrakul, S.; Dailey, M.N.; Ekpanyapong, M. Fruit detection, tracking, and 3D reconstruction for crop mapping and yield estimation. In Proceedings of the 11th International Conference on Control Automation Robotics & Vision (ICARCV), Singapore, 7–10 December 2010; IEEE: New York, NY, USA, 2010; pp. 1181–1186. [Google Scholar]
- Liu, X.; Chen, S.W.; Aditya, S.; Sivakumar, N.; Dcunha, S.; Qu, C.; Taylor, C.J.; Das, J.; Kumar, V. Robust Fruit Counting: Combining Deep Learning, Tracking, and Structure from Motion. arXiv
**2018**, arXiv:1804.00307. [Google Scholar] - Sarron, J.; Malézieux, E.; Sane, C.A.B.; Faye, E. Mango yield mapping at the orchard scale based on tree structure and land cover assessed by UAV. Remote Sens.
**2018**, 10, 1900. [Google Scholar] [CrossRef] [Green Version] - Črtomir, R.; Urška, C.; Stanislav, T.; Denis, S.; Karmen, P.; Pavlovič, M.; Marjan, V. Application of Neural Networks and Image Visualization for Early Forecast of Apple Yield. Erwerbs-Obstbau
**2012**, 54, 69–76. [Google Scholar] - Cheng, H.; Damerow, L.; Sun, Y.; Blanke, M. Early yield prediction using image analysis of apple fruit and tree canopy features with neural networks. J. Imaging
**2017**, 3, 6. [Google Scholar] [CrossRef] - Qian, J.; Xing, B.; Wu, X.; Chen, M.; Wang, Y.a. A smartphone-based apple yield estimation application using imaging features and the ANN method in mature period. Sci. Agric.
**2018**, 75, 273–280. [Google Scholar] [CrossRef] - Chen, S.W.; Shivakumar, S.S.; Dcunha, S.; Das, J.; Okon, E.; Qu, C.; Taylor, C.J.; Kumar, V. Counting apples and oranges with deep learning: A data-driven approach. IEEE Robot. Autom. Lett.
**2017**, 2, 781–788. [Google Scholar] [CrossRef] - Chollet, F. Xception: Deep Learning with Depthwise Separable Convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; IEEE: New York, NY, USA, 2017; pp. 1251–1258. [Google Scholar]
- Srivastava, N.; Hinton, G.; Krizhevsky, A.; Sutskever, I.; Salakhutdinov, R. Dropout: A Simple Way to Prevent Neural Networks from Overfitting. J. Mach. Learn. Res.
**2014**, 15, 1929–1958. [Google Scholar] - Kingma, D.P.; Ba, J. Adam: A Method for Stochastic Optimization. arXiv
**2014**, arXiv:1412.6980. [Google Scholar] - Rousseeuw, P.J. Silhouettes: A graphical aid to the interpretation and validation of cluster analysis. J. Comput. Appl. Math.
**1987**, 20, 53–65. [Google Scholar] [CrossRef] [Green Version] - Charoenpong, T.; Chamnongthai, K.; Kamhom, P.; Krairiksh, M. Volume measurement of mango by using 2D ellipse model. In Proceedings of the International Conference on Industrial Technology, IEEE ICIT’04, Hammamet, Tunisia, 8–10 December 2004; IEEE: New York, NY, USA, 2004; pp. 1438–1441. [Google Scholar]
- Kader, A.A. Fruit maturity, ripening, and quality relationships. In Proceedings of the International Symposium Effect of Pre- & Postharvest Factors in Fruit Storage, Warsaw, Poland, 3 August 1997; Acta Hortic. 485; pp. 203–208. [Google Scholar] [CrossRef]
- Nanaa, K.; Rizon, M.; Rahman, M.N.A.; Ibrahim, Y.; Aziz, A.Z.A. Detecting mango fruits by using randomized hough transform and backpropagation neural network. In Proceedings of the International Conference on Information Visualisation, Paris, France, 16–18 July 2014; IEEE: New York, NY, USA, 2014; pp. 388–391. [Google Scholar]
- Wang, Z.; Walsh, K.B.; Verma, B. On-Tree Mango Fruit Size Estimation Using RGB-D Images. Sensors
**2017**, 17, 2738. [Google Scholar] [CrossRef] [PubMed] [Green Version] - Wang, Z.; Koirala, A.; Walsh, K.; Anderson, N.; Verma, B. In Field Fruit Sizing Using A Smart Phone Application. Sensors
**2018**, 18, 3331. [Google Scholar] [CrossRef] [PubMed] [Green Version] - Breiman, L. Random forests. Mach. Learn.
**2001**, 45, 5–32. [Google Scholar] [CrossRef] [Green Version] - Liaw, A.; Wiener, M. Classification and regression by randomForest. R News
**2002**, 2, 18–22. [Google Scholar] - Selvaraju, R.R.; Cogswell, M.; Das, A.; Vedantam, R.; Parikh, D.; Batra, D. Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017; pp. 618–626. [Google Scholar]

**Figure 1.** Flowchart of methods used for count of fruit in each image (**a**), for canopy classification (**b**) and for yield estimation (**c**).

**Figure 3.** Xception_count model (**a**) and schematic of the Xception base model (without top classification layers) (**b**).

**Figure 4.** Example images from the three categories based on Silhouette score: typical image in cluster-a (**a**), cluster-b (**b**) and cluster-c (**c**).

**Figure 5.** Ellipse fitting and canopy extraction. (**a**) Fruit enclosed by a blue ellipse contour and a red ellipse contour represent fully exposed fruit and partly occluded fruit, respectively. (**b**) The area enclosed by the white contour around the tree represents the segmented canopy foliage area. Eccentricity values are displayed.

**Figure 6.** (**a**) A single image reconstructed from images of the two sides of a canopy, with fruit and canopy separated. (**b**) Close-up view of fruit images.

**Figure 8.** Grad-CAM visualization of the activation map of the final convolutional layer of the Xception_count model, trained to directly predict fruit count on input images. The examples given are images of the two sides of one tree.

**Figure 9.** Grad-CAM visualization of the activation heatmap of the final convolutional layer of the Xception_yield model on the reconstructed input images. Panels (**a**,**c**) present the raw images of the two sides of a tree, while panels (**b**,**d**) present activation heat-maps of the canopy and fruit regions of the reconstructed images, respectively. (**a**): typical result for a tree with fruit on both sides of the canopy. (**b**): typical result for a tree with fruit on one side of the canopy.

**Table 1.** Methods used in (A) counting of fruit in images, (B) classification of canopy images and (C) estimation of fruit load (count) of the whole tree.

| Method | Description |
|---|---|
| **A. Methods for count of fruit in image** | |
| MangoYOLO | Automated fruit detection and counting on tree images based on bounding-box training; a modification of the YOLOv3 architecture. |
| Xception_count | Automated fruit number estimation on tree images based on CNN regression. |
| **B. Method for classification of canopy** | |
| Xception_classification | Automated classification of tree images into 3 categories (low, medium and high visible fruit density) based on image features learned by the Xception_count model. |
| **C. Methods for estimation of tree fruit load** | |
| MangoYOLO_yield | Estimate of tree fruit yield based on the MangoYOLO count adjusted using an occlusion factor estimated from manual counts of fruit load on a sample of trees. |
| MLP_yield | Automated yield estimation using an MLP neural network with input parameters obtained from canopy and fruit region extraction, including MangoYOLO-based estimates of both fully visible and partly occluded fruit number. Partial occlusion of fruit was determined through ellipse fitting. |
| Random_forest_yield | Automated yield estimation using an ensemble of decision trees for regression, based on the input variables used in the MLP_yield model. |
| Deep_yield | Automated yield estimation based on fruit counts from the MangoYOLO model and canopy classification of tree images, using a combination of MLP, regression, Xception_siamese and Xception_classification blocks. |
| Xception_yield | Automated yield estimation based on the Xception_count model, but extracting canopy and fruit regions of the two sides of a tree into a single image as input to the model. This method does not use MangoYOLO. |

**Table 2.** Statistics of the harvest count of sample trees (fruit per tree) for each of two seasons. ABC represents the collection of sample trees from orchards A, B and C.

| Orchard | Number of Sample Trees | 2017 Mean | 2017 SD | 2018 Mean | 2018 SD |
|---|---|---|---|---|---|
| A | 17 | 207 | 86 | 128 | 99 |
| B | 6 | 279 | 148 | 205 | 134 |
| C | 12 | 148 | 75 | 274 | 126 |
| ABC | 35 | 199 | 103 | 191 | 130 |
| A-x | 44 | 187 | 76 | - | - |
| B-x | 19 | 253 | 160 | - | - |
| C-x | 35 | 171 | 90 | - | - |
| ABC-x | 98 | 194 | 105 | - | - |

**Table 3.** Silhouette score and number of trees per k-means cluster for each orchard (2017 image sets).

| Orchard | Silhouette Score | Trees in Cluster a | Trees in Cluster b | Trees in Cluster c |
|---|---|---|---|---|
| 2017-A | 0.4311 | 443 | 344 | 211 |
| 2017-B | 0.4622 | 62 | 104 | 76 |
| 2017-C | 0.4982 | 175 | 242 | 113 |
| Total | | 681 | 690 | 400 |

**Table 4.** Description of input variables used for the MLP_yield and Random_forest_yield models. Subscripts a and b represent data for the side A and side B images of a tree, respectively.

| Attributes | Description |
|---|---|
| cT_{a}, cT_{b} | count of all visible fruit in the image from the MangoYOLO model (= cF + cO) |
| cF_{a}, cF_{b} | count of exposed (fully visible) fruit |
| cO_{a}, cO_{b} | count of partially occluded fruit |
| RpF_{a}, RpF_{b} | ratio of total pixel area of exposed fruit to the canopy pixel area |
| RpO_{a}, RpO_{b} | ratio of total pixel area of partially occluded fruit to the canopy pixel area |
| RpC_{a}, RpC_{b} | ratio of canopy pixel area to the total image pixel area |

**Table 5.** Linear regression statistics for fruit count using the Xception_count model against the MangoYOLO model for tree images of orchard A (988 images of 494 trees) for two seasons. Units for intercept and RMSE are # fruit/image.

| Year | Slope | Intercept | R² | RMSE |
|---|---|---|---|---|
| 2017 | 0.731 | 12.32 | 0.96 | 9.12 |
| 2018 | 0.728 | 19.81 | 0.94 | 11.8 |

**Table 6.** Linear regression statistics for fruit count of 2018 tree images of orchard A (17 trees, 34 images) by the Xception_count and MangoYOLO models developed using 2017 data, against human count of fruit on images. Units for intercept and RMSE are # fruit/image.

| Model | Slope | Intercept | R² | RMSE |
|---|---|---|---|---|
| Xception_count | 0.715 | 13.69 | 0.93 | 10.1 |
| MangoYOLO | 0.915 | 0.08 | 0.98 | 5.3 |

**Table 7.** Classification relative to ground-truth categories from k-means clustering (values in brackets) for the Xception_classification model. Results are of the training set, i.e., the 2017 orchard A image set (988 images of 494 trees).

| K-Means Classification (Number of Tree Images) | Xception Cat a | Xception Cat b | Xception Cat c |
|---|---|---|---|
| Cat a (443) | 387 | 51 | 5 |
| Cat b (334) | 2 | 332 | 0 |
| Cat c (211) | 13 | 0 | 198 |

**Table 8.** Classification relative to ground-truth categories from k-means clustering (values in brackets) for the Xception_classification model. Results are of a test set, i.e., 2017 orchard C (530 images of 265 trees), using a model trained on orchard A images (988 images of 494 trees).

| K-Means Classification (Number of Tree Images) | Xception Cat a | Xception Cat b | Xception Cat c |
|---|---|---|---|
| Cat a (175) | 114 | 18 | 43 |
| Cat b (113) | 14 | 99 | 0 |
| Cat c (242) | 8 | 0 | 234 |

**Table 9.** Statistics for linear correlations between combinations of attributes related to the number of partly occluded, non-occluded and hidden (fully occluded) fruit per tree, for image sets ABCx-2017 and ABC-2018: (A) count of partly occluded fruit regressed on harvest count of fruit; (B) count of partly occluded fruit regressed on number of visible fruit; (C) ratio of count of hidden fruit to harvest count regressed on ratio of count of fully exposed fruit to harvest count; (D) count of hidden fruit regressed on count of fully exposed fruit; and (E) ratio of count of partly occluded fruit to count of visible fruit regressed on ratio of count of hidden fruit to count of visible fruit. For a given tree, harvest count is equivalent to the sum of hidden (fully occluded), partly occluded and fully exposed (non-occluded) fruit, while visible fruit is the total MangoYOLO count for a tree, equivalent to the sum of partly occluded and fully exposed fruit.

| Image Set | R² | Slope | Intercept | Ratio |
|---|---|---|---|---|
| **A. Partly occluded vs. harvest count** | | | | |
| ABCx-2017 | 0.69 | 0.17 | 5.41 | 0.21 |
| ABC-2018 | 0.64 | 0.09 | 6.71 | 0.38 |
| **B. Partly occluded vs. visible fruit** | | | | |
| ABCx-2017 | 0.93 | 2.52 | 7.85 | 0.37 |
| ABC-2018 | 0.89 | 0.28 | 0.61 | 0.30 |
| **C. Ratio of hidden to harvest vs. ratio of fully exposed to harvest** | | | | |
| ABCx-2017 | 0.89 | −1.33 | 0.91 | |
| ABC-2018 | 0.91 | −0.71 | 0.71 | |
| **D. Hidden fruit vs. fully exposed fruit** | | | | |
| ABCx-2017 | 0.19 | 0.80 | 35.8 | |
| ABC-2018 | 0.25 | 1.35 | 37.3 | |
| **E. Ratio of partly occluded to visible vs. ratio of hidden to visible** | | | | |
| ABCx-2017 | 0.007 | 0.0075 | 0.36 | |
| ABC-2018 | 0.044 | −0.015 | 0.33 | |

**Table 10.** Feature importance values returned by the Random Forest regressor on the different input variables. Values sum to 1. Refer to Table 4 for the description of variables. The variables with the highest weighting (total fruit counts) are shown in bold.

| cT_{a} | cF_{a} | cO_{a} | RpF_{a} | RpO_{a} | RpC_{a} | cT_{b} | cF_{b} | cO_{b} | RpF_{b} | RpO_{b} | RpC_{b} |
|---|---|---|---|---|---|---|---|---|---|---|---|
| **0.26** | 0.07 | 0.08 | 0.01 | 0.01 | 0.07 | **0.24** | 0.06 | 0.10 | 0.01 | 0.01 | 0.07 |

**Table 11.** Regression statistics for prediction of fruit load per tree, relative to human count. The result for the raw MangoYOLO prediction is included for comparison. Training set prediction results for ABCx-2017 season sample trees using five methods, all trained with the ABCx-2017 training image set. The data set used for estimation of the occlusion factor in the MangoYOLO_yield method is shown in brackets. Units for intercept and RMSE are # fruit/tree.

| Model | Slope | Intercept | R² | RMSE |
|---|---|---|---|---|
| MangoYOLO | 0.44 | 17.7 | 0.69 | 113.4 |
| MangoYOLO_yield (ABCx-2017) | 0.89 | 36.4 | 0.69 | 65.4 |
| MLP_yield | 0.81 | 36.6 | 0.79 | 47.7 |
| Random_forest_yield | 0.90 | 19.0 | 0.98 | 17.8 |
| Deep_yield | 0.94 | 10.3 | 0.92 | 30.4 |
| Xception_yield | 0.71 | 28.8 | 0.94 | 44.8 |

**Table 12.** Regression statistics for prediction of fruit load per tree, relative to human count. The result for the raw MangoYOLO prediction is included for comparison. Test set prediction results for ABC-2018 season sample trees using five methods, trained with ABCx-2017 images. The data set used for estimation of the occlusion factor in the MangoYOLO_yield method is shown in brackets. Units for intercept and RMSE are # fruit/tree.

| Model | Slope | Intercept | R² | RMSE |
|---|---|---|---|---|
| MangoYOLO | 0.32 | 17.3 | 0.73 | 143.7 |
| MangoYOLO_yield (ABCx-2017) | 0.67 | 35.5 | 0.73 | 72.7 |
| MangoYOLO_yield (ABC-2017) | 0.63 | 33.5 | 0.73 | 77.6 |
| MangoYOLO_yield (ABC-2018) | 0.83 | 44.5 | 0.73 | 69.0 |
| MLP_yield | 0.50 | 30.3 | 0.66 | 102.9 |
| Random_forest_yield | 0.46 | 52.9 | 0.60 | 97.4 |
| Deep_yield | 0.29 | 61.2 | 0.34 | 129.1 |
| Xception_yield | 0.42 | 29.0 | 0.72 | 106.3 |

**Table 13.** Yield estimation results for prediction of fruit load per orchard block, relative to packhouse count. (A) Packhouse count of fruit number of whole orchards A, B and C for season 2017. (B) Prediction results for fruit number of whole orchards A, B and C for season 2017 from models trained on orchard ABCx-2017 data. The value in brackets is the percentage error. The best result per block (closest to packhouse) is shown in bold.

| | A | B | C | ABC |
|---|---|---|---|---|
| A. Packhouse count | 97,382 | 26,273 | 40,837 | 164,492 |
| **B. Model prediction** | | | | |
| MangoYOLO_yield (ABCx-2017) | 58,074 (2.6) | 16,189 (17.1) | 17,329 (6.1) | 91,592 (13.6) |
| MLP_yield | 93,879 (−3.6) | 32,148 (22.4) | **41,025 (0.5)** | **167,052 (1.6)** |
| Random_forest_yield | **99,779 (2.5)** | 29,307 (11.5) | 39,188 (−4.0) | 168,274 (2.3) |
| Deep_yield | 91,760 (−5.8) | **26,307 (0.1)** | 39,399 (−3.5) | 157,466 (−4.3) |
| Xception_yield | 83,638 (−14.1) | 26,022 (−1.0) | 36,624 (−10.3) | 146,284 (−11.1) |


© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Koirala, A.; Walsh, K.B.; Wang, Z.
Attempting to Estimate the Unseen—Correction for Occluded Fruit in Tree Fruit Load Estimation by Machine Vision with Deep Learning. *Agronomy* **2021**, *11*, 347.
https://doi.org/10.3390/agronomy11020347
