Article

Artificial Neural Network Modeling of Greenhouse Tomato Yield and Aerial Dry Matter

by Kelvin López-Aguilar 1, Adalberto Benavides-Mendoza 2, Susana González-Morales 3, Antonio Juárez-Maldonado 4, Pamela Chiñas-Sánchez 5 and Alvaro Morelos-Moreno 3,*

1 Doctorado en Ciencias en Agricultura Protegida, Universidad Autónoma Agraria Antonio Narro, Saltillo 25315, Mexico
2 Horticultura, Universidad Autónoma Agraria Antonio Narro, Saltillo 25315, Mexico
3 CONACYT-Universidad Autónoma Agraria Antonio Narro, Saltillo 25315, Mexico
4 Botánica, Universidad Autónoma Agraria Antonio Narro, Saltillo 25315, Mexico
5 Tecnológico Nacional de México, I. T. Saltillo, Saltillo 25280, Mexico
* Author to whom correspondence should be addressed.
Agriculture 2020, 10(4), 97; https://doi.org/10.3390/agriculture10040097
Submission received: 19 February 2020 / Revised: 20 March 2020 / Accepted: 24 March 2020 / Published: 1 April 2020
(This article belongs to the Special Issue Innovative Agronomic Practices for Maximizing Crop Growth and Yield)

Abstract:
Non-linear systems, such as biological systems, can be simulated by artificial neural network (ANN) techniques. This research aims to use ANNs to simulate the accumulated aerial dry matter (leaf, stem, and fruit) and fresh fruit yield of a tomato crop. Two feed-forward backpropagation ANNs with three hidden layers were trained and validated by the Levenberg–Marquardt algorithm to adjust the weights and biases. The input layer consisted of the leaf area, plant height, fruit number, dry matter of leaves, stems, and fruits, and the growth degree-days at 136 days after transplanting (DAT); these were obtained from a tomato crop, the hybrid EL CID F1 with indeterminate growth habit, grown in a 1:1 (v/v) mixture of peat moss and perlite (substrate) and in calcareous soil (soil). Based on experiments with ANNs of one, two, and three hidden layers, which gave MSE values below 1.55, 0.94, and 0.49, respectively, the ANN with three hidden layers was chosen. The 7-10-7-5-2 and 7-10-8-5-2 topologies showed the best performance for the substrate (R = 0.97, MSE = 0.107, error = 12.06%) and soil (R = 0.94, MSE = 0.049, error = 13.65%), respectively. These topologies correctly simulated the aerial dry matter and the fresh fruit yield of the studied tomato crop.

1. Introduction

Quantitative interpretations of plant growth through descriptive models have been developed via two mathematical approaches known as classical and functional analysis [1]. ANNs are nonlinear mapping structures based on the functioning of the human brain [2] and offer learning capabilities. ANNs have been developed to build mathematical models that mimic the computing power of the human brain, with powerful processing capabilities that have been demonstrated in various real-world applications [3]. ANNs have many wide-ranging applications in agriculture [4,5,6,7,8].
The neuron is the basic working unit of an ANN. This neuron does not have a predefined meaning and evolves during the learning process in a manner that can characterize the target’s function [3].
ANNs allow us to develop models based on the intrinsic relations among variables, without prior knowledge of their functional relationships [9]. ANN-based soft computing techniques have been widely used to develop models that predict crop indicators, such as growth, yield, and other biophysical processes, for tomato, given its commercial importance [10,11,12,13,14,15,16,17,18,19,20,21,22,23], and for other crops, such as lettuce [24,25,26,27,28,29,30], pepper [31,32,33,34], cucumber [35,36,37,38], wheat [39,40,41,42,43,44,45], rice [46,47,48], oat [49], maize [50,51], corn [52,53,54], corn and soybean [55], soybean [56], green peas [57], basil [58], cabbage [59], onion [60], potato [61,62], melon [63], fodder [64], sugar cane [65,66], banana [67,68], orange [69], yacon tuber [70], and jack fruit [71].
Neural networks are models that emulate human reasoning and have great advantages in applying mathematical reasoning to situations with unknown relationships between the dependent and independent variables [72]. This research aimed to build two ANNs in order to simulate the aerial dry matter (leaf, stem, and fruit) and fresh fruit yield of a tomato crop grown in two culture systems.

2. Materials and Methods

2.1. Establishment and Growth of Tomato Crop

Seeds of a saladette tomato (Solanum lycopersicum L.) hybrid, “EL CID F1”, with indeterminate growth habit were sown in polystyrene trays. After 35 days, the seedlings were transplanted equidistantly, at 3 plants per square meter, into 8 L black polyethylene containers. Two culture systems were used: substrate and soil. The tomato plants were grown in a multi-tunnel greenhouse with a polyethylene cover located in the Horticulture Department of the Agricultural University “Antonio Narro” in Saltillo, Mexico (25°21′ N, 101°01′ W, altitude 1743 m). The crop cycle extended from 20 May to 11 November 2017, with an average temperature of 21 °C, photosynthetically active radiation of 565 μmol m−2 s−1, and relative humidity of 51%. The tomato plants were maintained on a single stem by removing the axillary buds. Fertilization consisted of a Steiner nutrient solution [73] applied three times a day through the irrigation water, at concentrations of 25%, 50%, 75%, and 100% from the transplanting date and from 15, 28, and 35 DAT, respectively. The irrigation water was gradually increased over the crop cycle from 0.5 to 3 L per plant per day, from the transplanting date to harvesting, using an irrigation system.

2.2. Measurement of the ANN Input Values

The input variables consisted of six crop variables, namely, the leaf area (LA), plant height (PH), fruit number (FN), and dry matter of the leaves (LDM), stems (SDM), and fruits (FDM), plus the accumulated air temperature (growth degree-days) at 136 DAT. Four tomato plants were randomly chosen for each of the substrate and soil culture systems. The leaf area of the tomato plants was measured with a portable LI-3100C device (LI-COR®, Inc., Nebraska, USA) in square centimeters per plant (cm2 plant−1). The plant height, or stem length, was measured with a flexible tape measure from the substrate surface to the apical bud, in centimeters (cm). The fruit number was recorded for each plant. The fresh matter (leaves, stems, and fruits) was weighed separately with a digital balance, and the leaves, stems, and fruits were then dehydrated separately in a drying oven at 70 °C until a constant weight was obtained, expressed in grams per plant (g plant−1). The greenhouse air temperature, in degrees Celsius (°C), was measured with a WatchDog 1650 datalogger (Spectrum Technologies Inc., St. Joseph, IN, USA) at 15 min intervals. The growth degree-days (GDD) were computed according to the residual method [74] in Excel with Equation (1):
GDD = \sum_{i=1}^{n} \left( T_i - T_b \right), \qquad T_i = \frac{T_{min} + T_{max}}{2}   (1)
where GDD is the accumulated growth degree-days from the transplanting date to 136 DAT (°D), $T_i$ is the mean greenhouse air temperature on the ith day (°C), $T_b$ is the base temperature below which growth ceases (for tomato grown under greenhouse conditions, $T_b$ = 10 °C [75]), and $T_{min}$ and $T_{max}$ are, respectively, the minimum and maximum daily temperatures (°C).
The plant development rate is proportional to $T_i - T_b$, which implies that the development stage is proportional to the integrated temperature $\int (T_i - T_b)\,dt$, with development ceasing when $T_i < T_b$ [76].
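To make the residual-method calculation of Equation (1) concrete, the sketch below accumulates GDD from daily minimum and maximum temperatures. It is illustrative only: the authors performed this computation in Excel, the function and variable names are hypothetical, and days with a mean temperature below the base temperature are assumed to contribute zero, as is usual for the residual method.

```python
def growth_degree_days(t_min, t_max, t_base=10.0):
    """Accumulate growth degree-days (GDD) from daily min/max air temperatures (°C).

    t_min, t_max: sequences of daily minimum and maximum greenhouse temperatures.
    t_base: base temperature below which tomato growth is assumed to cease
            (10 °C for greenhouse tomato [75]).
    """
    gdd = 0.0
    for tmin, tmax in zip(t_min, t_max):
        t_mean = (tmin + tmax) / 2.0      # Ti in Equation (1)
        gdd += max(t_mean - t_base, 0.0)  # days below the base temperature add nothing
    return gdd

# Hypothetical example: three days of data after transplanting
print(growth_degree_days([12.0, 14.5, 11.0], [28.0, 30.5, 26.0]))  # 31.0 °D
```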

2.3. Artificial Neural Networks

Two feed-forward backpropagation ANNs, each with an input layer, three hidden layers, and an output layer with two neurons, were trained and validated by the Levenberg–Marquardt algorithm to adjust the weights and biases [77,78,79]. The input layer consisted of seven neurons holding the average values of five replications of the leaf area (LA), plant height (PH), fruit number (FN), dry matter of leaves (LDM), stems (SDM), and fruits (FDM), and the growth degree-days (GDD) accumulated from the greenhouse air temperature over the crop cycle. The output neurons were the fresh fruit yield and the aerial dry matter. Different ANN topology arrays, varying the number of neurons in the three hidden layers, were evaluated in order to determine the appropriate network topology for each cropping system. The 10-7-5 and 10-8-5 topologies were used in the hidden layers for the substrate and soil, respectively, according to their data and learning rates. The input and output data were normalized by the Min–Max method [80,81] in the RStudio software [82], and the data were randomly divided into three sets (training 70%, validation 15%, and testing 15%), following the percentages used in the literature [64,83].
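The following minimal sketch illustrates the Min–Max normalization and the random 70%/15%/15% split described above. The authors performed these steps in RStudio; this NumPy version is only an illustration, and the arrays X and y (observations of the seven input variables and the two outputs) are hypothetical placeholders.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def min_max_normalize(X):
    """Scale each column of X to the [0, 1] range (Min-Max method)."""
    x_min, x_max = X.min(axis=0), X.max(axis=0)
    return (X - x_min) / (x_max - x_min)

def split_70_15_15(X, y):
    """Randomly partition observations into training (70%), validation (15%), and test (15%) sets."""
    n = X.shape[0]
    idx = rng.permutation(n)
    n_train, n_val = int(0.70 * n), int(0.15 * n)
    train = idx[:n_train]
    val = idx[n_train:n_train + n_val]
    test = idx[n_train + n_val:]
    return (X[train], y[train]), (X[val], y[val]), (X[test], y[test])
```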

2.4. Neuron Topologies in the Hidden Layers

Neuron numbers between 5 and 10, 4 and 10, and 1 and 5 were randomly chosen for the first, second, and third hidden layers, respectively, in order to build different topology arrays [64] (Table 1). In most applications, the neuron number is determined by trial and error [84]. The different topology arrays resulting from these combinations were evaluated in the MATLAB neural network toolbox [85] using the following transfer functions: hyperbolic tangent sigmoid (tansig), log-sigmoid (logsig), and pure linear (purelin) [85,86], with a learning rate of 0.5 [63], 1000 epochs [64], a minimum performance gradient of 1e−07, and an adaptation value of 0.001. The hyperbolic tangent sigmoid (tansig) transfer function showed the best performance in the hidden layers, and the pure linear (purelin) transfer function showed the best performance in the output layer, as indicated by their lower mean square error (MSE) values for the substrate and soil (Table 1).
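As an illustration of the trial-and-error search over hidden-layer sizes described above, the sketch below enumerates candidate topologies within the stated neuron ranges and keeps the one with the lowest MSE. It is a simplified stand-in: the authors evaluated randomly chosen combinations in the MATLAB neural network toolbox, and train_and_score is a hypothetical callback that trains one candidate ANN (for example, with the Levenberg–Marquardt algorithm, a learning rate of 0.5, and 1000 epochs) and returns its validation MSE.

```python
from itertools import product

def search_topologies(train_and_score):
    """Return the hidden-layer topology with the lowest validation MSE."""
    best_mse, best_topology = float("inf"), None
    # Neuron ranges for the three hidden layers, as stated in the text: 5-10, 4-10, 1-5
    for n1, n2, n3 in product(range(5, 11), range(4, 11), range(1, 6)):
        mse = train_and_score(hidden_layers=(n1, n2, n3))
        if mse < best_mse:
            best_mse, best_topology = mse, (n1, n2, n3)
    return best_topology, best_mse
```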
For the hidden layers, the sigmoid hyperbolic tangent (tansig) transfer function (Equation (2)) was used [87]:
Y_j = \frac{e^{X_j} - e^{-X_j}}{e^{X_j} + e^{-X_j}}   (2)
For the output layer, the linear (purelin) transfer function (Equation (3)) was used [87]:
Y_j = X_j, \qquad X_j = \sum_{i=1}^{m} W_{ij} Y_i + b_j   (3)
where m is the number of neurons in the preceding layer i, $W_{ij}$ is the weight of the connection between layers i and j, $Y_i$ is the output of the neurons in layer i, and $b_j$ is the bias of the neurons in layer j.
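A minimal NumPy sketch of Equations (2) and (3) is given below: the tansig transfer function for the hidden layers, the purelin transfer function for the output layer, and the weighted sum that links consecutive layers. The weight matrices and bias vectors are hypothetical placeholders; in the study they were fitted by the Levenberg–Marquardt algorithm.

```python
import numpy as np

def tansig(x):
    """Hyperbolic tangent sigmoid transfer function, Equation (2)."""
    return (np.exp(x) - np.exp(-x)) / (np.exp(x) + np.exp(-x))  # equivalent to np.tanh(x)

def purelin(x):
    """Linear transfer function, Equation (3): the output equals the weighted input."""
    return x

def forward(x, weights, biases, transfers):
    """Propagate an input vector x through successive layers.

    weights: list of (n_out, n_in) matrices; biases: list of length-n_out vectors;
    transfers: one transfer function per layer, e.g. [tansig, tansig, tansig, purelin].
    """
    y = x
    for W, b, f in zip(weights, biases, transfers):
        y = f(W @ y + b)  # X_j = sum_i W_ij * Y_i + b_j, then Y_j = f(X_j)
    return y
```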
Correlation and dependence are statistical relationships between two or more random variables or observed data values. Correlation refers to any departure of two or more random variables from independence and indicates a relationship between their mean values, thereby offering a predictive relationship that can be used in practice. Dependence indicates whether the random variables satisfy a mathematical condition of probabilistic independence [88]. The MSE and the correlation coefficient (R) were used in this research.
ANN validation was performed in the MATLAB 2017a software by computing the MSE (Equation (4)), according to [89]:
MSE = \frac{\sum_{j=0}^{P} \sum_{i=0}^{N} \left( d_{ij} - y_{ij} \right)^2}{N \cdot P}   (4)
where P is the number of output neurons, N is the number of exemplars in the dataset, and yij and dij are the network output and desired output for exemplar i at processing element j.
Although the MSE values indicate the difference between the predicted and experimental values, the MSE criterion does not determine their direction, so the R was also calculated with Equation (5), according to [87,89]:
R = \frac{\sum_{i} \left( X_i - \bar{X} \right) \left( d_i - \bar{d} \right) / N}{\sqrt{\dfrac{\sum_{i} \left( d_i - \bar{d} \right)^2}{N} \cdot \dfrac{\sum_{i} \left( X_i - \bar{X} \right)^2}{N}}}   (5)
where $X_i$ is the network output, $\bar{X}$ is the mean of the network outputs, $d_i$ is the desired output, $\bar{d}$ is the mean of the desired outputs, and N is the number of exemplars in the dataset.
The performance indicators used to select the ANN model were a high R [90] and a low MSE [66,70].
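The two performance indicators can be computed as in the sketch below, which follows Equations (4) and (5). The example arrays of desired and simulated outputs are hypothetical.

```python
import numpy as np

def mse(d, y):
    """Mean square error over N exemplars and P output neurons, Equation (4)."""
    d, y = np.asarray(d, dtype=float), np.asarray(y, dtype=float)
    return np.sum((d - y) ** 2) / d.size  # d.size = N * P

def correlation_r(x, d):
    """Correlation coefficient R between network outputs x and desired outputs d, Equation (5)."""
    x, d = np.asarray(x, dtype=float), np.asarray(d, dtype=float)
    numerator = np.mean((x - x.mean()) * (d - d.mean()))
    denominator = np.sqrt(np.mean((d - d.mean()) ** 2) * np.mean((x - x.mean()) ** 2))
    return numerator / denominator

d_obs = [1.0, 2.0, 3.0]   # hypothetical desired (observed) outputs
y_sim = [1.1, 1.9, 3.2]   # hypothetical network (simulated) outputs
print(mse(d_obs, y_sim))            # 0.02
print(correlation_r(y_sim, d_obs))  # close to 1 for a good fit
```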

3. Results

3.1. ANN Topologies

Two feed-forward backpropagation ANNs, each with an input layer, three hidden layers, and an output layer with two neurons, were trained and validated by the Levenberg–Marquardt algorithm to adjust the weights and biases. The ANN architecture with three hidden layers was chosen based on the MSE values obtained in the experimentation, which reached maximums of 1.55, 0.94, and 0.49 for the ANNs with one, two, and three hidden layers, respectively (Table 1).
Based on the evaluation of the transfer functions, an ANN with a 10-7-5 topology in the hidden layers was built for the substrate culture system (Figure 1a). This ANN showed the best performance (R = 0.97 and MSE = 0.107) with tansig and purelin transfer functions in the hidden layers and output layer, respectively (Table 1). An ANN with a 10-8-5 topology in the hidden layers was built for the soil culture system (Figure 1b), which showed the best performance (R = 0.95 and MSE = 0.049) with tansig and purelin transfer functions in the hidden layers and output layer, respectively (Table 1).

3.2. Training, Validation, and Test Processes of the ANNs

The training, validation, and test processes were performed with the observed and simulated data for the aerial dry matter and fresh fruit yield of the tomato grown in the substrate (Figure 2a) and soil (Figure 2b) culture systems. All evaluation processes showed R values higher than 0.96 for the substrate and higher than 0.94 for the soil.

3.3. Aerial Dry Matter

The observed and simulated data of aerial dry matter over the crop cycle showed R values higher than 0.96 in the substrate (Figure 3a) and higher than 0.98 in the soil (Figure 3b) culture systems. At 136 DAT, the simulated data were underestimated with respect to the observed data because the regression line (black line) was located below the 1–1 line (gray line) for both the substrate and the soil culture systems.

3.4. Fresh Fruit Yield

The observed and simulated data for the fresh fruit yield over the crop cycle showed R values higher than 0.98 in the substrate (Figure 4a) and higher than 0.97 in the soil (Figure 4b) culture systems. At 136 DAT, the simulated data were overestimated with respect to the observed data because the regression line (black line) was located above the 1–1 line (gray line) for both the substrate and the soil culture systems.

4. Discussion

According to the evaluation, two neural network architectures were obtained, one for the substrate (7-10-7-5-2) and the other for the soil (7-10-8-5-2) culture system. These architectures showed the best performance and were appropriate according to the comparison criteria of a high correlation coefficient, which, as Gutiérrez [90] notes, measures the strength of the relationship between two variables (X and Y), and a low MSE value. The MSE is calculated by dividing the sum of the squared differences between the target values and the values computed by the neural network by the total number of data points. The MSE value and the training algorithm allow an acceptable training to be performed, adjusting the weights and biases according to the real data [23]. The MSE was used to measure the efficiency of the training process, as in [66,70].

4.1. ANN Topologies

Two feed-forward backpropagation ANNs were trained and validated by the Levenberg–Marquardt algorithm to adjust the weights and biases, one for the substrate and the other for the soil culture system. The tansig and purelin transfer functions were used in the hidden and output layers, respectively, of both ANNs. The ANN outputs were the aerial dry matter and the fresh fruit yield of the tomato grown in the two culture systems.
The ANN with three hidden layers showed a better fit, corresponding to lower MSE values in the two culture systems, with a substrate topology of 10, 7, and 5 neurons in the first, second, and third hidden layers, respectively, and a soil topology of 10, 8, and 5 neurons in the first, second, and third hidden layers, respectively. The topologies of the two culture systems differed only in the second hidden layer, with one more neuron for the soil than for the substrate. In this research, the tansig, logsig, and purelin transfer functions were evaluated in both the hidden layers and the output layer, whereas [91] used only the tansig transfer function in the hidden layers and the purelin transfer function in the output layer.
According to [92], if sigmoidal neurons are used in the output layer, the network output is limited to a very small range; on the other hand, when using a linear neuron, the output can take any value.
The study in [93] used a multilayer perceptron model with the hyperbolic tangent activation function (tansig) for the hidden layers and the linear function (purelin) for the output layer, the same functions used here in the two networks that simulate the fruit yield and aerial dry matter in the substrate and soil. The work in [94] used a backpropagation ANN to simulate the photosynthesis rate of tomato plants, with tansig and logsig transfer functions for the hidden layers and a linear (purelin) transfer function for the output layer, similar to the transfer functions used for the two models in the present study.

4.2. Training, Validation, and Test Processes of the ANNs

Figure 2 shows the correlation between the observed and simulated values for the training, validation, and test data with the best performance, that is, R > 0.96 (MSE = 0.107) for the substrate and R > 0.94 (MSE = 0.049) for the soil culture system. The objective of this validation is to establish the credibility of a model for a specific purpose, which is usually done through comparative analysis [95].
The learning rate of 0.5 used in this research for the substrate and the soil culture systems is similar to the learning rate (0.6) used in [63] and is in the range of the recommended values (0.05 to 0.5 [96], 0.1 to 0.7 [97], and 0.05 to 0.75 [98]), where the learning rate value has no influence on the ANN error [97].
ANN frameworks that apply the Levenberg–Marquardt algorithm to multilayer perceptron topologies without connections across layers, similar to those used in this research, tend to feature topologies that are far from optimal [77]. The R values obtained to measure the network performance in the validation and testing of the data from the tomato crop grown in the substrate were higher than those obtained in [18], which used a dynamic neural network to predict tomato yield in a semi-closed greenhouse. In the soil data validation, the R (0.99) was similar to that reported in [64], which used an ANN to predict the yield and quality indexes of three grasses.
During the training, validation, and test processes of the ANNs, the 1–1 line (dashed line) represents a perfect fit, in which the network output equals the desired output, while the continuous line represents the best linear regression fit between the observed and simulated output data. It was also observed that adding more neurons did not decrease the MSE, which stabilized at 10, 7, and 5 neurons in the hidden layers with eight epochs for the substrate, and at 10, 8, and 5 neurons in the hidden layers with six epochs for the soil. The study in [63] evaluated a three-layer ANN with different numbers of neurons in the hidden layer and observed different changes in the mean prediction error as the topology grew, that is, a reduction from 3.34% to 2.21% (9-1-1 to 9-5-1) and an increase from 2.21% to 2.56% (9-5-1 to 9-9-1).
The authors in [99] concluded that the number of hidden layers and the number of neurons must be chosen by the designer and that there is no rule to determine the optimal number of hidden neurons for a given problem. In most applications, the epoch and neuron numbers are determined by trial and error [84].

4.3. Aerial Dry Matter and Fresh Fruit Yield

This research aimed to use soft computing techniques to model tomato growth. However, the ANN topologies for the substrate and soil culture systems were trained, validated, and tested using input-layer neuron values that contained the scalar data corresponding to 136 DAT, not vectors spanning the entire crop cycle. The simulated data fitted the observed data well in both the substrate and the soil culture systems, with R higher than 0.96 for the aerial dry matter and higher than 0.97 for the fresh fruit yield.

5. Conclusions

The feed-forward backpropagation ANNs with 7-10-7-5-2 and 7-10-8-5-2 topologies for the substrate and soil culture systems, respectively, trained and validated by the Levenberg–Marquardt algorithm to adjust the weights and biases, satisfactorily simulated the aerial dry matter and the fresh fruit yield compared to the observed values.
As mentioned earlier, in recent years, soft computing techniques, such as ANNs, have been used to analyze, model, predict, and execute real processes. In real processes, there is variability and uncertainty that, in some situations, cannot be evaluated with traditional mathematical models. Therefore, this paper was focused on the use of different ANN feedforward topologies capable of learning and simulating the cumulative aerial dry matter and yield of the fresh fruit from a tomato crop grown under greenhouse conditions in substrate and soil culture systems.
Based on the data obtained from the tomato crop grown in the substrate and soil, the two designed ANN structures represent a reliable and precise alternative for modelling and simulating the cumulative yield of tomato crops cultivated under different greenhouse conditions, giving average relative errors of 12.06% and 13.65% for the substrate and soil conditions, respectively, and an R greater than 0.94 in both cases. The training process for the ANN structures ended in fewer than 10 iterations, reaching an MSE of 0.107 for the substrate and 0.049 for the soil. These results show the generalization ability of the designed networks. They also indicate that databases containing 280 data points from four plants (four replicates), 10 samples, and seven input variables allowed the networks to learn, during training, the relationships between the studied inputs and outputs. Likewise, these results indicate that soft computing techniques are suitable for the analysis of data with variability, uncertainty, and various correlations, as shown by the tomato crop data.

Author Contributions

Conceptualization, A.M.-M.; methodology, P.C.-S.; software, K.L.-A.; investigation, S.G.-M. and A.J.-M.; data curation, A.M.-M. and A.B.-M.; visualization, A.M.-M.; writing—original draft preparation, K.L.-A.; writing—review and editing, K.L.-A., A.B.-M., S.G.-M., A.J.-M., P.C.-S. and A.M.-M. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Acknowledgments

The authors would like to thank the Cátedras CONACYT program of the CONACYT (National Council of Science and Technology), project number 1426, for its valuable support with greenhouse, climate sensors and laboratory instruments. KLA received a doctoral scholarship from CONACYT, number 618276.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Hunt, R. Growth analysis, individual plants. In Encyclopedia of Applied Plant Sciences; Thomas, B., Murphy, D.J., Murray, B.G., Eds.; Academic Press: London, UK, 2003; pp. 588–596. [Google Scholar]
  2. Lek, S.; Park, Y.S. Artificial Neural Networks. In Encyclopedia of Ecology; Jørgensen, S.E., Fath, B.D., Eds.; Academic Press: London, UK, 2008; pp. 237–245. [Google Scholar]
  3. Tripathi, B.K. High Dimensional Neurocomputing—Growth, Appraisal and Applications; Springer: Berlin/Heidelberg, Germany, 2015. [Google Scholar]
  4. Jeeva, C.; Shoba, S.A. An efficient modelling agricultural production using artificial neural network (ANN). Int. Res. J. Eng. Technol. 2016, 3, 3296–3303. [Google Scholar]
  5. Jiménez, D.; Pérez-Uribe, A.; Satizábal, H.; Barreto, M.; van Damme, P.; Tomassini, M. A Survey of Artificial Neural Network-Based Modeling in Agroecology. In Soft Computing Applications in Industry; Prasad, B., Ed.; Springer: Berlin/Heidelberg, Germany, 2008; Volume 226, pp. 247–269. [Google Scholar]
  6. Seginer, I. Some artificial neural network applications to greenhouse environmental control. Comput. Electron. Agric. 1997, 18, 167–186. [Google Scholar] [CrossRef]
  7. Schultz, A.; Wieland, R. The use of neural networks in agroecological modelling. Comput. Electron. Agric. 1997, 18, 73–90. [Google Scholar] [CrossRef]
  8. Hashimoto, Y. Applications of artificial neural networks and genetic algorithms to agricultural systems. Comput. Electron. Agric. 1997, 18, 71–72. [Google Scholar] [CrossRef]
  9. Zupan, J.; Gasteiger, J. Neural Networks in Chemistry and Drug Design, 2nd ed.; Wiley & Sons, Inc.: New York, NY, USA, 1999. [Google Scholar]
  10. Istiadi, A.; Sulistiyanti, S.R.; Fitriawan, H. Model Design of Tomato Sorting Machine Based on Artificial Neural Network Method using Node MCU Version 1.0. J. Phys. Conf. Ser. 2019, 1376, 12–26. [Google Scholar] [CrossRef]
  11. Wang, Q.; Qi, F.; Sun, M.; Qu, J.; Xue, J. Identification of Tomato Disease Types and Detection of Infected Areas Based on Deep Convolutional Neural Networks and Object Detection Techniques. Comput. Intell. Neurosci. 2019, 2019, 1–15. [Google Scholar] [CrossRef]
  12. Fuentes, A.F.; Yoon, S.; Lee, J.; Park, D.S. High-Performance Deep Neural Network-Based Tomato Plant Diseases and Pests Diagnosis System with Refinement Filter Bank. Front. Plant Sci. 2018, 9, 1162. [Google Scholar] [CrossRef] [Green Version]
  13. Karami, R.; Kamgar, S.; Karparvarfard, S.; Rasul, M.; Khan, M. Biodiesel production from tomato seed and its engine emission test and simulation using Artificial Neural Network. J. Oil Gas Petrochem. Technol. 2018, 5, 41–62. [Google Scholar]
  14. Suryawati, E.; Sustika, R.; Yuwana, R.S.; Subekti, A.; Pardede, H.F. Deep Structured Convolutional Neural Network for Tomato Diseases Detection. In Proceedings of the 2018 International Conference on Advanced Computer Science and Information Systems, Yogyakarta, Indonesia, 27–28 October 2018; pp. 385–390. [Google Scholar]
  15. Tm, P.; Pranathi, A.; SaiAshritha, K.; Chittaragi, N.B.; Koolagudi, S.G. Tomato Leaf Disease Detection using Convolutional Neural Networks. In Proceedings of the 2018 Eleventh International Conference on Contemporary Computing, Noida, India, 2–4 August 2018; pp. 1–5. [Google Scholar]
  16. Pedrosa-Alves, D.; Simões-Tomaz, R.; Bruno-Soares, L.; Dias-Freitas, R.; Fonseca-e Silva, F.; Damião-Cruz, C.; Nick, C.; Henriques-da Silva, D.J. Artificial neural network for prediction of the area under the disease progress curve of tomato late blight. Sci. Agric. 2017, 74, 51–59. [Google Scholar] [CrossRef] [Green Version]
  17. Küçükönder, H.; Boyaci, S.; Akyüz, A. A modeling study with an artificial neural network: Developing estimation models for the tomato plant leaf area. Turk. J. Agric. For. 2016, 40, 203–212. [Google Scholar] [CrossRef]
  18. Salazar, R.; López, I.; Rojano, A.; Schmidt, U.; Dannehl, D. Tomato yield prediction in a semi-closed greenhouse. Acta Hortic. 2015, 1107, 263–270. [Google Scholar] [CrossRef]
  19. Ehret, D.L.; Hill, B.D.; Helmer, T.; Edwards, D.R. Neural network modeling of greenhouse tomato yield, growth and water use from automated crop monitoring data. Comput. Electron. Agric. 2011, 79, 82–89. [Google Scholar] [CrossRef]
  20. Fang, J.; Zhang, C.; Wang, S. Application of Genetic Algorithm (GA) Trained Artificial Neural Network to Identify Tomatoes with Physiological Diseases. In The International Federation for Information Processing, Proceedings of the CCTA 2007 Computer and Computing Technologies in Agriculture, Wuyishan, China, 18–20 August 2007; Li, D., Ed.; Springer: Boston, MA, USA, 2008; pp. 1103–1111. [Google Scholar]
  21. Wang, X.; Zhang, M.; Zhu, J.; Geng, S. Spectral prediction of Phytophthora infestans infection on tomatoes using artificial neural network (ANN). Int. J. Remote Sens. 2008, 29, 1693–1706. [Google Scholar] [CrossRef]
  22. Movagharnejad, K.; Nikzad, M. Modelling of tomato drying using Artificial Neural Network. Comput. Electron. Agric. 2007, 59, 78–85. [Google Scholar] [CrossRef]
  23. Poonnoy, P.; Tansakul, A.; Chinnan, M. Artificial Neural Network Modeling for Temperature and Moisture Content Prediction in Tomato Slices Undergoing Microwawe-Vacuum Drying. J. Food Sci. 2007, 72, 42–47. [Google Scholar] [CrossRef]
  24. Karadžić Banjac, M.Ž.; Kovačević, S.Z.; Jevrić, L.R.; Podunavac-Kuzmanović, S.; Tepić Horeck, A.; Vidović, S.; Šumić, Z.; Ilin, Ž.; Adamović, B.; Kuljanin, T. Artificial neural network modeling of the antioxidant activity of lettuce submitted to different postharvest conditions. J. Food Process. Preserv. 2019, 43, 1–9. [Google Scholar] [CrossRef]
  25. Osco, L.P.; Ramos, A.P.M.; Moriya, É.A.S.; Bavaresco, L.G.; Lima, B.C.; Estrabis, N.; Pereira, D.R.; Creste, J.E.; Júnior, J.M.; Gonçalves, W.N.; et al. Modeling Hyperspectral Response of Water-Stress Induced Lettuce Plants using Artificial Neural Networks. Remote Sens. 2019, 11, 2797. [Google Scholar] [CrossRef] [Green Version]
  26. Valenzuela, I.C.; Puno, J.C.V.; Bandala, A.A.; Baldovino, R.G.; de Luna, R.G.; De Ocampo, A.L.; Cuello, J.; Dadios, E.P. Quality assessment of lettuce using artificial neural network. In Proceedings of the 2017 IEEE 9th International Conference on Humanoid, Nanotechnology, Information Technology, Communication and Control, Environment and Management, Manila, Philippines, 1–3 December 2017; pp. 1–5. [Google Scholar]
  27. Mistico-Azevedo, A.; de Andrade-Júnior, V.C.; Pedrosa, C.E.; Mattes-de Oliveira, C.; Silva-Dornas, M.F.; Damião-Cruz, C.; Ribeiro-Valadares, N. Application of artificial neural networks in indirect selection: A case study on the breeding of lettuce. Bragantia 2015, 74, 387–393. [Google Scholar] [CrossRef]
  28. Sun, J.; Dong, L.; Jin, X.M.; Fang, M.; Zhang, M.X.; Lv, W.X. Identification of Pesticide Residues of Lettuce Leaves Based on LVQ Neural Network. Adv. Mater. Res. 2013, 756–759, 2059–2063. [Google Scholar] [CrossRef]
  29. Lin, W.C.; Block, G. Neural network modeling to predict shelf life of greenhouse lettuce. Algorithms 2009, 2, 623–637. [Google Scholar] [CrossRef] [Green Version]
  30. Zaidi, M.A.; Murase, H.; Honami, N. Neural Network Model for the Evaluation of Lettuce Plant Growth. J. Agric. Eng. Res. 1999, 74, 237–242. [Google Scholar] [CrossRef]
  31. Lee, J.W. Growth Estimation of Hydroponically-grown Bell Pepper (Capsicum annuum L.) Using Recurrent Neural Network Through Nondestructive Measurement of Leaf Area Index and Fresh Weight. Ph.D. Thesis, Seoul National University, Seoul, Korea, August 2019; p. 163. [Google Scholar]
  32. Manoochehr, G.; Fathollah, N. Fruit yield prediction of pepper using artificial neural network. Sci. Hortic. 2019, 250, 249–253. [Google Scholar]
  33. Figueredo-Ávila, G.; Ballesteros-Ricaurte, J. Identificación del estado de madurez de las frutas con redes neuronales artificiales, una revisión. Cienc. Agric. 2016, 13, 117–132. [Google Scholar] [CrossRef] [Green Version]
  34. Lin, W.C.; Hill, B.D. Neural network modelling to predict weekly yields of sweet peppers in a commercial greenhouse. Can. J. Plant Sci. 2008, 88, 531–536. [Google Scholar] [CrossRef]
  35. Lin, K.; Gong, L.; Huang, Y.; Liu, C.; Pan, J. Deep Learning-Based Segmentation and Quantification of Cucumber Powdery Mildew Using Convolutional Neural Network. Front. Plant Sci. 2019, 10, 155. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  36. Zhang, S.; Zhang, S.; Zhang, C.; Wang, X.; Shi, Y. Cucumber leaf disease identification with global pooling dilated convolutional neural network. Comput. Electron. Agric. 2019, 162, 422–430. [Google Scholar] [CrossRef]
  37. Pawar, P.; Turkar, V.; Patil, P. Cucumber disease detection using artificial neural network. In Proceedings of the 2016 International Conference on Inventive Computation Technologies, Coimbatore, India, 26–27 August 2016; pp. 1–5. [Google Scholar]
  38. Vakilian, K.A.; Massah, J. An artificial neural network approach to identify fungal diseases of cucumber (Cucumis sativus L.) plants using digital image processing. Arch. Phytopathol. Plant Prot. 2013, 46, 1580–1588. [Google Scholar] [CrossRef]
  39. Haider, S.A.; Naqvi, S.R.; Akram, T.; Umar, G.A.; Shahzad, A.; Sial, M.R.; Khaliq, S.; Kamran, M. LSTM Neural Network Based Forecasting Model for Wheat Production in Pakistan. Agronomy 2019, 9, 72. [Google Scholar] [CrossRef] [Green Version]
  40. Beres, B.L.; Hill, B.D.; Cárcamo, H.A.; Knodel, J.J.; Weaver, D.K.; Cuthbert, R.D. An artificial neural network model to predict wheat stem sawfly cutting in solid-stemmed wheat cultivars. Can. J. Plant Sci. 2017, 97, 329–336. [Google Scholar] [CrossRef] [Green Version]
  41. Ghodsi, R.; Yani, R.M.; Jalali, R.; Ruzbahman, M. Predicting wheat production in Iran using an artificial neural networks approach. Int. J. Acad. Res. Bus. Soc. Sci. 2012, 2, 34–47. [Google Scholar]
  42. Naderloo, L.; Alimardani, R.; Omid, M.; Sarmadian, F.; Javadikia, P.; Torabi, M.Y.; Alimardani, F. Application of ANFIS to predict crop yield based on different energy inputs. Measurements 2012, 45, 1406–1413. [Google Scholar] [CrossRef]
  43. Khashei-Siuki, A.; Kouchakzadeh, M.; Ghahraman, B. Predicting dryland wheat yield from meteorological data, using expert system, Khorasan Province, Iran. J. Agric. Sci. Tech-Iran 2011, 13, 627–640. [Google Scholar]
  44. Alvarez, R. Predicting average regional yield and production of wheat in the Argentine Pampas by an artificial neural network approach. Eur. J. Agron. 2009, 30, 70–77. [Google Scholar] [CrossRef]
  45. Hill, B.D.; McGinn, S.M.; Korchinski, A.; Burnett, B. Neural network models to predict the maturity of spring wheat in western Canada. Can. J. Plant Sci. 2002, 82, 7–13. [Google Scholar] [CrossRef]
  46. Zhang, H.; Hu, H.; Zhang, X.; Zhu, L.; Zheng, K.; Jin, Q.; Zeng, F. Estimation of rice neck blasts severity using spectral reflectance based on BP-neural network. Acta Physiol. Plant. 2011, 33, 2461–2466. [Google Scholar] [CrossRef]
  47. Ji, B.; Sun, Y.; Yang, S.; Wan, J. Artificial neural networks for rice yield prediction in mountainous regions. J. Agric. Sci. 2007, 145, 249–261. [Google Scholar] [CrossRef]
  48. Chen, C.; Mcnairn, H. A neural network integrated approach for rice crop monitoring. Int. J. Remote Sens. 2006, 27, 1367–1393. [Google Scholar] [CrossRef]
  49. Chantre, G.R.; Blanco, A.M.; Forcella, F.; van Acker, R.C.; Sabbatini, M.R.; Gonzalez-Andular, J.L. A comparative study between non-linear regression and artificial neural network approaches for modelling wild oat (Avena fatua) field emergence. J. Agric. Sci. 2014, 152, 254–262. [Google Scholar] [CrossRef]
  50. Chayjan, R.A.; Esna-Ashari, M. Modeling of heat and entropy sorption of maize (cv. Sc704): Neural network method. Res. Agric. Eng. 2010, 56, 69–76. [Google Scholar] [CrossRef] [Green Version]
  51. O’Neal, M.R.; Engel, B.A.; Ess, D.R.; Frankenberger, J.R. Neural network prediction of maize yield using alternative data coding algorithms. Biosyst. Eng. 2002, 83, 31–45. [Google Scholar]
  52. Matsumura, K.; Gaitan, C.F.; Sugimoto, K.; Cannon, A.J.; Hsieh, W.W. Maize yield forecasting by linear regression and artificial neural networks in Jilin, China. J. Agric. Sci. 2014, 153, 399–410. [Google Scholar] [CrossRef]
  53. Morteza, T.; Asghar, M.; Hassan, G.M.; Hossein, R. Energy consumption and modeling of output energy with multilayer feed-forward neural network for corn silage in Iran. Agric. Eng. Int. CIGR J. 2012, 14, 93–101. [Google Scholar]
  54. Uno, Y.; Prasher, S.; Lacroix, R.; Goel, P.; Karimi, Y.; Viau, A.; Patel, R. Artificial neural networks to predict corn yield from compact airborne spectrographic imager data. Comput. Electron. Agric. 2005, 47, 149–161. [Google Scholar] [CrossRef]
  55. Kaul, M.; Hill, R.L.; Walthall, C. Artificial neural networks for corn and soybean yield prediction. Agric. Syst. 2005, 85, 1–18. [Google Scholar] [CrossRef]
  56. Chayjan, R.A.; Esna-Ashari, M. Modeling Isosteric Heat of Soya Bean for Desorption Energy Estimation Using Neural Network Approach. Chil. J. Agric. Res. 2010, 70, 616–625. [Google Scholar] [CrossRef] [Green Version]
  57. Higgins, A.; Prestwidge Di Tirling, S.D.; Yost, J. Forecasting maturity of green peas: An application of neural networks. Comput. Electron. Agric. 2010, 70, 151–156. [Google Scholar] [CrossRef]
  58. Vázquez-Rueda, M.G.; Ibarra-Reyes, M.; Flores-García, F.G.; Moreno-Casillas, H.A. Redes neuronales aplicadas al control de riego usando instrumentación y análisis de imágenes para un microinvernadero aplicado al cultivo de Albahaca. Res. Comput. Sci. 2018, 147, 93–103. [Google Scholar]
  59. Zhang, W.; Bai, X.; Liu, G. Neural network modeling of ecosystems: A case study on cabbage growth system. Ecol. Model. 2007, 201, 317–325. [Google Scholar] [CrossRef]
  60. Stastny, J.; Konecny, V.; Trenz, O. Agricultural data prediction by means of neural networks. Agric. Econ. Czech 2011, 57, 356–361. [Google Scholar]
  61. Fortin, J.G.; Anctil, F.; Parent, L.E.; Bolinder, M.A. A neural network experiment on the site-specific simulation of potato tuber growth in Eastern Canada. Comput. Electron. Agric. 2010, 73, 126–132. [Google Scholar] [CrossRef]
  62. Fortin, J.G.; Anctil, F.; Parent, L.E.; Bolinder, M.A. Site-specific early season potato yield forecast by neural network in Eastern Canada. Precis. Agric. 2010, 12, 905–923. [Google Scholar] [CrossRef]
  63. Naroui Rad, M.R.; Koohkan, S.; Fanaei, H.R.; Pahlavan Rad, M.R. Application of Artificial Neural Networks to predict the final fruit weight and random forest to select important variables in native population of melon (Cucumis melo L.). Sci. Hortic. 2015, 181, 108–112. [Google Scholar] [CrossRef]
  64. Pascual-Sánchez, I.A.; Ortiz-Díaz, A.A.; Ramírez-de la Rivera, J.; Figueredo-León, A. Predicción del Rendimiento y la Calidad de Tres Gramíneas en el Valle del Cauto. Rev. Cuba. Cienc. Inf. 2017, 11, 144–158. [Google Scholar]
  65. Lobato-Fernandes, J.; Favilla Ebecken, N.F.; dalla Mora-Esquerdo, J.C. Sugarcane yield prediction in Brazil using NDVI time series and neural networks ensemble. Int. J. Remote Sens. 2017, 38, 4631–4644. [Google Scholar] [CrossRef]
  66. Vásquez, V.; Lescano, C. Predicción por Redes Neuronales Artificiales de la Calidad Fisicoquímica de Vinagre de Melaza de Caña por Efecto de Tiempo-Temperatura de Alimentación a un Evaporador Destilador-Flash. Sci. Agropecu. 2010, 1, 63–73. [Google Scholar] [CrossRef] [Green Version]
  67. Soares, J.; Pasqual, M.; Lacerda, W.; Silva, S.; Donato, S. Utilization of Artificial Neural Networks in the Prediction of the Bunches’ Weight in Banana Plants. Sci. Hortic. 2013, 155, 24–29. [Google Scholar] [CrossRef]
  68. Ávila-de Hernández, R.; Rodríguez-Pérez, V.; Hernández-Caraballo, E. Predicción del Rendimiento de un Cultivo de Plátano mediante Redes Neuronales Artificiales de Regresión Generalizada. Publ. Cienc. Tecnol. 2012, 6, 31–40. [Google Scholar]
  69. Hernández-Caraballo, E.A. Predicción del Rendimiento de un Cultivo de Naranja “Valencia” Mediante Redes Neuronales de Regresión Generalizada. Publ. Cienc. Tecnol. 2015, 9, 139–158. [Google Scholar]
  70. Rojas-Naccha, J.; Vásquez-Villalobos, V. Prediction by Artificial Neural Networks (ANN) of the Diffusivity, Mass, Moisture, Volume and Solids on Osmotically Dehydrated Yacon (Smallantus sonchifolius). Sci. Agropecu. 2012, 3, 201–214. [Google Scholar] [CrossRef]
  71. Bala, B.; Ashraf, M.; Uddin, M.; Janjai, S. Experimental and Neural Network Prediction of the Performance of a Solar Tunnel Drier for a Solar Drying Jack Fruit Bulbs and Leather. J. Food Process. Eng. 2005, 28, 552–566. [Google Scholar] [CrossRef]
  72. Hernández-Caraballo, E.A.; Avila, G.R.; Rivas, F. Las Redes Neuronales Artificiales en Química Analítica. Parte I. Fundamentos. Rev. Soc. Venez. Quim. 2003, 26, 17–25. [Google Scholar]
  73. Steiner, A.A. A Universal Method for Preparing Nutrient Solutions of a Certain Desired Composition. Plant Soil 1961, 15, 134–154. [Google Scholar] [CrossRef] [Green Version]
  74. Trudgill, D.L.; Honek, A.; Li, D.; van Straalen, N.M. Thermal time—Concepts and utility. Ann. Appl. Biol. 2005, 146, 1–14. [Google Scholar] [CrossRef]
  75. Ardila, G.; Fischer, G.; Balaguera, H. Caracterización del Crecimiento del Fruto y Producción de Tres Híbridos de Tomate (Solanum lycopersicum L.) en Tiempo Fisiológico bajo Invernadero. Rev. Colomb. Cienc. Hortic. 2011, 5, 44–56. [Google Scholar] [CrossRef] [Green Version]
  76. Goudriaan, J.; van Laar, H.H. Modelling Potential Crop Growth Processes; Kluwer Academic Publishers: Dordrecht, The Netherlands, 1994; p. 239. [Google Scholar]
  77. Hunter, D.; Yu, H.; Pukish, M.; Kolbusz, J.; Wilamowski, B. Selection of Proper Neural Network Sizes and Architectures—A Comparative Study. IEEE Trans. Ind. Inf. 2012, 8, 228–240. [Google Scholar] [CrossRef]
  78. Marquardt, D.W. An algorithm for least-squares estimation of nonlinear parameters. J. Soc. Ind. Appl. Math. 1963, 11, 431–441. [Google Scholar] [CrossRef]
  79. Levenberg, K. A method for the solution of certain nonlinear problems in least squares. Q. Appl. Math. 1944, 2, 164–168. [Google Scholar] [CrossRef] [Green Version]
  80. Jain, Y.K.; Bhandare, S.K. Min Max Normalization Based Data Perturbation Method for Privacy Protection. Int. J. Comput. Commun. Technol. 2011, 2, 45–50. [Google Scholar]
  81. Han, J.; Kamber, M. Data Mining—Concepts and Techniques, 2nd ed.; Morgan Kaufmann Publishers: Massachusetts, USA, 2006. [Google Scholar]
  82. R Core Team. R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing; Andy Bunn and Mikko Korpela: Vienna, Austria, 2015; Available online: https://www.R-project.org (accessed on 11 February 2018).
  83. Xu, Y.; Goodacre, R. On Splitting Training and Validation Set: A Comparative Study of Cross-Validation, Bootstrap and Systematic Sampling for Estimating the Generalization Performance of Supervised Learning. J. Anal. Test. 2018, 2, 249–262. [Google Scholar] [CrossRef] [Green Version]
  84. Esen, H.; Inalli, M.; Sengur, A.; Esen, M. Forecasting of a ground-coupled heat pump performance using neural networks with statistical data weighting pre-processing. Int. J. Therm. Sci. 2008, 47, 431–441. [Google Scholar] [CrossRef]
  85. Demuth, H.B.; Beale, M.H. Neural Network Toolbox for Use with Matlab: User’s Guide; MathWorks: Pennsylvania, USA, 2001. [Google Scholar]
  86. Demuth, H.B.; Beale, M.H.; De Jesús, O.; Hagan, M.T. Neural Network Design, 2nd ed.; Hagan: Stillwater, OK, USA, 2014. [Google Scholar]
  87. Tohidi, M.; Sadeghi, M.; Mousavi, S.R.; Mireei, S.A. Artificial neural network modeling of process and product indices in deep bed drying of rough rice. Turk. J. Agric. For. 2012, 36, 738–748. [Google Scholar]
  88. Dorofki, M.; Elshafie, A.H.; Jaafar, O.; Karim, O.A.; Mastura, S. Comparison of Artificial Neural Network Transfer Functions Abilities to Simulate Extreme Runoff Data. In Proceedings of the 2012 International Conference on Environment, Energy and Biotechnology, Kuala Lumpur, Malaysia, 5–6 May 2012. [Google Scholar]
  89. Obe, O.O.; Shangodoyin, D.K. Artificial neural network based model for forecasting sugar cane production. J. Comput. Sci. 2010, 6, 439–445. [Google Scholar]
  90. Gutiérrez, H.; De La Vara, R. Análisis y Diseños de Experimentos; McGraw-Hill Interamericana: Mexico, 2003; p. 430. [Google Scholar]
  91. Chandwani, V.; Agrawal, V.; Nagar, R.; Singh, S. Modeling slump of ready mix concrete using artificial neural network. Int. J. Technol. 2015, 6, 207–216. [Google Scholar] [CrossRef] [Green Version]
  92. Ljung, L. Perspectives on system identification. In Proceedings of the 17th IFAC World Congress, Seoul, Korea, 6–11 July 2008; pp. 7172–7184. [Google Scholar]
  93. Ochoa-Martínez, C.I.; Ramaswamy, H.S.; Ayala-Aponte, A.A. Artificial Neural Network modeling of osmotic dehydration mass transfer kinetics of fruits. Dry. Technol. 2007, 25, 85–95. [Google Scholar] [CrossRef]
  94. Vargas-Sállago, J.M.; López-Cruz, I.L.; Rico-García, E. Redes neuronales artificiales aplicadas a mediciones de fitomonitoreo para simular fotosíntesis en jitomate bajo invernadero. Rev. Mex. Cienc. Agríc. 2012, 4, 747–756. [Google Scholar]
  95. Arahal, M.R.; Soria, M.B.; Díaz, F.R. Técnicas de Predicción con Aplicaciones en Ingeniería; Universidad de Sevilla: Sevilla, Spain, 2006; p. 340. [Google Scholar]
  96. Millan, F.R.; Ostojich, Z. Predicción mediante redes neuronales artificiales de la transferencia de masa en frutas osmóticamente deshidratadas. Interciencia 2006, 31, 206–210. [Google Scholar]
  97. Chen, C.R.; Ramaswamy, H.S.; Alli, I. Prediction of quality changes during osmo-convective drying of blueberries using neural network models for process optimization. Dry. Technol. 2001, 19, 515. [Google Scholar] [CrossRef]
  98. Martín, Q.; De Paz, Y.R. Aplicación de las Redes Neuronales Artificiales a la Regresión; La Muralla: Madrid, Spain, 2007; p. 52. [Google Scholar]
  99. Isasi, P.; Galván, I.M. Redes Neuronales Artificiales. Un Enfoque Práctico; Pearson Educación: Madrid, Spain, 2004; p. 90. [Google Scholar]
Figure 1. ANN schemes with (a) a 7-10-7-5-2 topology for the substrate, and (b) a 7-10-8-5-2 topology for the soil.
Figure 2. Training, validation, and test processes for the ANNs of the (a) substrate, and (b) soil.
Figure 3. Aerial dry matter of the tomato plants grown in the (a) substrate and (b) soil.
Figure 4. Fresh fruit yield of tomato plants grown in the (a) substrate and (b) soil.
Table 1. Mean square error (MSE) of the evaluated hidden-layer topologies for the substrate and soil.

| Hidden Layers | Culture System | Neurons 1st | Neurons 2nd | Neurons 3rd | Transfer Function (Hidden Layers) | Transfer Function (Output Layer) | Epochs | MSE |
|---|---|---|---|---|---|---|---|---|
| 1 | Substrate | 10 | - | - | Logsig | Logsig | 12 | 0.238 |
| 1 | Substrate | 10 | - | - | Logsig | Purelin | 8 | 0.136 |
| 1 | Substrate | 10 | - | - | Logsig | Tansig | 13 | 0.257 |
| 1 | Substrate | 10 | - | - | Purelin | Purelin | 4 | 1.25 |
| 1 | Substrate | 10 | - | - | Purelin | Logsig | 9 | 0.204 |
| 1 | Substrate | 10 | - | - | Purelin | Tansig | 9 | 0.221 |
| 1 | Substrate | 10 | - | - | Tansig | Tansig | 11 | 0.15 |
| 1 | Substrate | 10 | - | - | Tansig | Logsig | 8 | 0.572 |
| 1 | Substrate | 10 | - | - | Tansig | Purelin | 8 | 0.341 |
| 1 | Soil | 10 | - | - | Logsig | Logsig | 10 | 0.141 |
| 1 | Soil | 10 | - | - | Logsig | Purelin | 10 | 0.253 |
| 1 | Soil | 10 | - | - | Logsig | Tansig | 19 | 0.54 |
| 1 | Soil | 10 | - | - | Purelin | Purelin | 4 | 1.55 |
| 1 | Soil | 10 | - | - | Purelin | Logsig | 10 | 0.413 |
| 1 | Soil | 10 | - | - | Purelin | Tansig | 9 | 0.154 |
| 1 | Soil | 10 | - | - | Tansig | Tansig | 18 | 0.261 |
| 1 | Soil | 10 | - | - | Tansig | Logsig | 63 | 0.119 |
| 1 | Soil | 10 | - | - | Tansig | Purelin | 7 | 0.382 |
| 2 | Substrate | 10 | 5 | - | Logsig | Logsig | 8 | 0.128 |
| 2 | Substrate | 10 | 6 | - | Logsig | Logsig | 9 | 0.372 |
| 2 | Substrate | 10 | 7 | - | Logsig | Logsig | 7 | 0.114 |
| 2 | Substrate | 10 | 8 | - | Logsig | Logsig | 6 | 0.253 |
| 2 | Substrate | 10 | 9 | - | Logsig | Logsig | 7 | 0.217 |
| 2 | Substrate | 10 | 5 | - | Logsig | Purelin | 6 | 0.382 |
| 2 | Substrate | 10 | 6 | - | Logsig | Purelin | 7 | 0.781 |
| 2 | Substrate | 10 | 7 | - | Logsig | Purelin | 6 | 0.715 |
| 2 | Substrate | 10 | 8 | - | Logsig | Purelin | 7 | 0.566 |
| 2 | Substrate | 10 | 9 | - | Logsig | Purelin | 8 | 0.938 |
| 2 | Substrate | 10 | 5 | - | Logsig | Tansig | 6 | 0.226 |
| 2 | Substrate | 10 | 6 | - | Logsig | Tansig | 7 | 0.217 |
| 2 | Substrate | 10 | 7 | - | Logsig | Tansig | 6 | 0.288 |
| 2 | Substrate | 10 | 8 | - | Logsig | Tansig | 6 | 0.274 |
| 2 | Substrate | 10 | 9 | - | Logsig | Tansig | 6 | 0.2 |
| 2 | Soil | 10 | 5 | - | Logsig | Logsig | 11 | 0.262 |
| 2 | Soil | 10 | 6 | - | Logsig | Logsig | 53 | 0.148 |
| 2 | Soil | 10 | 7 | - | Logsig | Logsig | 7 | 0.463 |
| 2 | Soil | 10 | 8 | - | Logsig | Logsig | 6 | 0.186 |
| 2 | Soil | 10 | 9 | - | Logsig | Logsig | 9 | 0.0986 |
| 2 | Soil | 10 | 5 | - | Logsig | Purelin | 7 | 0.229 |
| 2 | Soil | 10 | 6 | - | Logsig | Purelin | 6 | 0.0935 |
| 2 | Soil | 10 | 7 | - | Logsig | Purelin | 7 | 0.0825 |
| 2 | Soil | 10 | 8 | - | Logsig | Purelin | 7 | 0.381 |
| 2 | Soil | 10 | 9 | - | Logsig | Purelin | 6 | 0.0864 |
| 2 | Soil | 10 | 5 | - | Logsig | Tansig | 6 | 0.234 |
| 2 | Soil | 10 | 6 | - | Logsig | Tansig | 7 | 0.499 |
| 2 | Soil | 10 | 7 | - | Logsig | Tansig | 16 | 0.0854 |
| 2 | Soil | 10 | 8 | - | Logsig | Tansig | 6 | 0.265 |
| 2 | Soil | 10 | 9 | - | Logsig | Tansig | 9 | 0.199 |
| 3 | Substrate | 5 | 4 | 0 | Tansig | Tansig | 12 | 0.194 |
| 3 | Substrate | 5 | 8 | 8 | Tansig | Tansig | 11 | 0.153 |
| 3 | Substrate | 6 | 7 | 8 | Tansig | Tansig | 10 | 0.164 |
| 3 | Substrate | 10 | 7 | 5 | Tansig | Purelin | 8 | 0.107 |
| 3 | Substrate | 10 | 10 | 5 | Tansig | Tansig | 11 | 0.130 |
| 3 | Substrate | 10 | 8 | 6 | Purelin | Logsig | 6 | 0.311 |
| 3 | Substrate | 10 | 7 | 9 | Purelin | Tansig | 10 | 0.322 |
| 3 | Substrate | 10 | 6 | 5 | Purelin | Purelin | 4 | 0.381 |
| 3 | Substrate | 6 | 9 | 4 | Purelin | Logsig | 10 | 0.232 |
| 3 | Substrate | 9 | 11 | 5 | Purelin | Tansig | 19 | 0.376 |
| 3 | Substrate | 8 | 5 | 7 | Logsig | Tansig | 15 | 0.289 |
| 3 | Substrate | 9 | 10 | 4 | Logsig | Purelin | 17 | 0.379 |
| 3 | Substrate | 9 | 8 | 7 | Logsig | Purelin | 21 | 0.275 |
| 3 | Substrate | 5 | 9 | 6 | Logsig | Logsig | 9 | 0.297 |
| 3 | Substrate | 7 | 8 | 5 | Logsig | Tansig | 15 | 0.327 |
| 3 | Soil | 5 | 4 | 0 | Tansig | Tansig | 12 | 0.323 |
| 3 | Soil | 8 | 5 | 3 | Tansig | Purelin | 12 | 0.370 |
| 3 | Soil | 10 | 10 | 5 | Tansig | Tansig | 13 | 0.375 |
| 3 | Soil | 5 | 7 | 5 | Tansig | Tansig | 17 | 0.260 |
| 3 | Soil | 10 | 8 | 5 | Tansig | Purelin | 6 | 0.049 |
| 3 | Soil | 10 | 8 | 7 | Purelin | Tansig | 17 | 0.276 |
| 3 | Soil | 6 | 7 | 5 | Purelin | Logsig | 4 | 0.388 |
| 3 | Soil | 9 | 6 | 4 | Purelin | Purelin | 6 | 0.391 |
| 3 | Soil | 7 | 8 | 6 | Purelin | Logsig | 9 | 0.321 |
| 3 | Soil | 6 | 9 | 8 | Purelin | Tansig | 7 | 0.286 |
| 3 | Soil | 10 | 7 | 8 | Logsig | Purelin | 10 | 0.389 |
| 3 | Soil | 8 | 9 | 7 | Logsig | Tansig | 12 | 0.275 |
| 3 | Soil | 7 | 9 | 8 | Logsig | Logsig | 16 | 0.432 |
| 3 | Soil | 9 | 8 | 6 | Logsig | Purelin | 22 | 0.488 |
| 3 | Soil | 6 | 10 | 5 | Logsig | Tansig | 11 | 0.322 |
