Zeng et al. [38]; Hu et al. [99]
- Outputs: inside temperature; inside humidity
- Model: the model had three layers: input layer, hidden layer, output layer
- Training: gradient-descent back-propagation (BP)
- Remarks: results show that the proposed model has better adaptability and more satisfactory real-time control performance than an offline tuning scheme using genetic-algorithm (GA) optimization and a proportional-derivative (PD) control method.
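A three-layer feedforward network trained with gradient-descent back-propagation, as in the models above, can be sketched in a few lines of NumPy. Everything here is illustrative (synthetic data, layer sizes, learning rate), not the configuration of the cited studies:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy data: 4 weather-like inputs -> 2 outputs (standing in for inside
# temperature and humidity), all scaled to [0, 1]; the mapping is synthetic.
X = rng.random((200, 4))
Y = np.c_[0.5 * X[:, 0] + 0.3 * X[:, 1], 0.4 * X[:, 2] + 0.2 * X[:, 3]]

# One hidden layer with 6 units (illustrative size).
W1 = rng.normal(0, 0.5, (4, 6)); b1 = np.zeros(6)
W2 = rng.normal(0, 0.5, (6, 2)); b2 = np.zeros(2)

# Loss before training, for comparison.
mse0 = float(((sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2) - Y) ** 2).mean())

lr = 0.5
for _ in range(2000):
    H = sigmoid(X @ W1 + b1)          # forward pass, hidden layer
    Y_hat = sigmoid(H @ W2 + b2)      # forward pass, output layer
    E = Y_hat - Y                     # output error
    # Back-propagate the error through the sigmoid derivatives.
    dZ2 = E * Y_hat * (1 - Y_hat)
    dZ1 = (dZ2 @ W2.T) * H * (1 - H)
    # Full-batch gradient-descent updates.
    W2 -= lr * H.T @ dZ2 / len(X); b2 -= lr * dZ2.mean(0)
    W1 -= lr * X.T @ dZ1 / len(X); b1 -= lr * dZ1.mean(0)

mse = float(((sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2) - Y) ** 2).mean())
```

After training, `mse` should be well below the untrained `mse0`, which is all that plain gradient-descent BP guarantees on a smooth loss.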

He et al. [60]
- Model: FFNN; the model had three layers: input layer, hidden layer, output layer
- Training: BP
- Remarks: principal component analysis (PCA) simplified the data samples and gave the model a faster learning speed.

Ferreira et al. [112]
- Model: FFNN, specifically RBF
- Training: off-line methodology: (1) linear least squares (LS); (2) orthogonal least squares (OLS); (3) Levenberg-Marquardt (LM). On-line methodology: (1) extended Kalman filter (EKF); (2) interpolation with generalized radial basis functions (GRBFs) and regularization; (3) LM
- Remarks: off-line training methods and on-line learning algorithms are analyzed; whether off-line or on-line, the LM method achieves the best results.
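Of the off-line schemes listed above, the linear least-squares fit is the simplest: once the RBF centres and widths are fixed, the network output is linear in its weights, so the weights solve an ordinary least-squares problem. A minimal sketch under that assumption (1-D synthetic data, grid-chosen centres, all names illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic 1-D regression task standing in for greenhouse measurements.
x = rng.uniform(-3, 3, 150)
y = np.sin(x) + 0.05 * rng.normal(size=x.size)

# Fix Gaussian RBF centres (here: a uniform grid) and a common width.
centers = np.linspace(-3, 3, 10)
width = 0.8

def design_matrix(x):
    # One Gaussian basis function per centre, plus a bias column.
    Phi = np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2 * width ** 2))
    return np.c_[Phi, np.ones(x.size)]

# With centres fixed, the output weights are a linear least-squares solution.
w, *_ = np.linalg.lstsq(design_matrix(x), y, rcond=None)

y_hat = design_matrix(x) @ w
mse = float(((y_hat - y) ** 2).mean())
```

The OLS and LM variants in the paper differ in how the centres/widths themselves are selected or refined; this sketch covers only the fixed-centre case.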

Dariouchy et al. [123]
- Inputs: external humidity; total radiation; wind direction; wind velocity; external temperature; internal temperature; internal humidity
- Outputs: internal temperature; internal humidity
- Model: FFNN
- Transfer function: logistic sigmoid for all layers
- Training: BP
- Remarks: different architectures were tested. Networks with a single hidden layer were built by successively adding two neurons at a time; networks with two hidden layers were also tested, using triangular structures in which each layer has more neurons than the next. The optimal model had one hidden layer with six neurons.

Taki et al. [124]
- Inputs: four ANN models were used (first to fourth), each with its own input set
- Outputs: four ANN models were used (first to fourth), each with its own output set
- Model: FFNN
- Training: basic BP
- Remarks: a multilayer perceptron (MLP) with 4 inputs, 6 hidden neurons and one output, and an MLP with 4 inputs, 9 hidden neurons and one output, had the best performance for predicting inside soil, inside air humidity, and inside roof and soil temperature with low error.

Seginer et al. [125]
- Inputs: weather variables (outside temperature; outside humidity; outside solar radiation; wind speed); control variables (heater heat flux; vent opening angle; misting time fraction); state variables (leaf area index, LAI); time variables (Julian day; hour of day)
- Outputs: inside temperature; soil temperature; inside humidity; inside radiation
- Model: FFNN, built with a commercial neural-network (NN) program (NeuroShell™, Ward System Group, Inc.); three layers: input layer, hidden layer (number of nodes determined by the program), output layer
- Transfer function: sigmoid (S-shaped logistic function) for all three layers
- Training: BP
- Remarks: the leaf area index (LAI) did not have a significant influence on the internal conditions of the greenhouse; wind direction also had minimal effect on the results.

Laribi et al. [126]
- Inputs: outside temperature; outside humidity; wind speed; solar radiation
- Outputs: internal temperature; internal humidity
- Model: FFNN with three layers: input layer, hidden layer with 7 neurons, output layer with 2 neurons
- Transfer function: sigmoid for the hidden layer; linear for the output layer
- Training: BP
- Remarks: two approaches were used to predict the greenhouse climate, multi-model modeling and neural networks. The neural-network model is easier to obtain and can simulate several output variables at the same time.

Bussab et al. [127]
- Inputs: external temperature; external global radiation; external relative humidity; wind speed
- Outputs: internal temperature; internal relative humidity
- Model: FFNN; a multilayer NN with two hidden layers: first hidden layer with 40 neurons, second hidden layer with 20 neurons
- Transfer function: hyperbolic tangent for the input layer and the first hidden layer; linear for the second hidden layer
- Training: BP
- Remarks: the NN predicted the internal temperature better than the internal relative humidity.

Salazar et al. [128]
- Inputs: outside average temperature; outside relative humidity; wind velocity; solar radiation
- Outputs: three network architectures with different outputs were tested: (1) inside temperature; (2) inside relative humidity; (3) inside temperature and relative humidity
- Model: FFNN with three layers: input layer, hidden layer, output layer
- Transfer function: hyperbolic tangent for all layers
- Training: BP
- Remarks: the third network gave the best predictions of temperature and relative humidity, reflecting the interactions between these two variables. They also emphasize the relevance of the input variables to the predicted variables; in this study, solar radiation was the most important.

Alipour et al. [129]
- Inputs: wind speed and direction; relative humidity; infrared light; visible light; air temperature; carbon dioxide concentration
- Outputs: inside temperature; light; inside relative humidity; carbon dioxide
- Model: FFNN; three configurations were tested: (1) a feedforward network with several input delays; (2) a two-layer network with one feedback from the hidden layer and delayed inputs; (3) a three-layer network with two feedbacks from the hidden layer and delayed inputs
- Transfer function: not specified
- Remarks: the three-layer network with two hidden-layer feedbacks and delayed inputs gave the best relative-humidity and light-index results; the FFNN with several input delays best predicted the temperature and the infrared index.
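The "delay in input" used in these configurations amounts to feeding the network lagged copies of each measurement alongside the current one. A small helper that builds such a delay-embedded feature matrix from a multivariate time series (function name and lag count are illustrative):

```python
import numpy as np

def delay_embed(series, n_lags):
    """Stack series[t], series[t-1], ..., series[t-n_lags] as one feature row.

    series: array of shape (T, d) holding d measured variables over T steps.
    Returns an array of shape (T - n_lags, d * (n_lags + 1)); row i holds the
    current values at step i + n_lags followed by the lagged values.
    """
    series = np.asarray(series)
    T = series.shape[0]
    rows = [series[n_lags - k : T - k] for k in range(n_lags + 1)]
    return np.concatenate(rows, axis=1)

# Example: two variables (say, temperature and humidity) over 6 time steps.
data = np.arange(12, dtype=float).reshape(6, 2)
X = delay_embed(data, n_lags=2)
```

Each row of `X` can then be fed to an ordinary feedforward network, which is how a static FFNN is given short-term temporal context without recurrent connections.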

Outanoute et al. [130]
- Inputs: current and previous values of external temperature, external relative humidity, and the heater and ventilator commands; previous values of internal temperature and internal relative humidity
- Outputs: internal temperature; internal relative humidity
- Model: FFNN with three layers: input layer, hidden layer (number of nodes depending on the training algorithm), output layer
- Transfer function: logistic sigmoid for the hidden layer; linear for the output layer
- Training: gradient descent with momentum and adaptive learning rate (GDX) with seven hidden nodes; Broyden-Fletcher-Goldfarb-Shanno (BFGS) quasi-Newton BP with five hidden nodes; resilient back-propagation (RPROP) with twelve hidden nodes
- Remarks: three NNs with different training algorithms were tested; BFGS performed better than GDX and RPROP.
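The first-order versus quasi-Newton comparison above can be reproduced in miniature by minimizing the same network loss with plain gradient descent and with BFGS. Here SciPy's generic BFGS implementation (gradients approximated by finite differences) stands in for the BFGS back-propagation variant; the tiny network and data are illustrative, not those of the cited study:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
x = rng.uniform(-1, 1, (100, 1))
y = np.tanh(2 * x)  # synthetic target mapping

def unpack(p):
    # Flattened parameters of a 1-3-1 network: W1, b1, W2, b2.
    return p[:3].reshape(1, 3), p[3:6], p[6:9].reshape(3, 1), p[9:]

def loss(p):
    W1, b1, W2, b2 = unpack(p)
    h = np.tanh(x @ W1 + b1)   # hidden layer
    y_hat = h @ W2 + b2        # linear output layer
    return float(((y_hat - y) ** 2).mean())

p0 = rng.normal(0, 0.5, 10)

# Quasi-Newton BFGS on the full parameter vector.
res_bfgs = minimize(loss, p0, method="BFGS")

# Plain fixed-step gradient descent on the same loss, same starting point,
# using central finite differences for the gradient.
p = p0.copy(); eps = 1e-5; lr = 0.05
for _ in range(300):
    g = np.array([(loss(p + eps * e) - loss(p - eps * e)) / (2 * eps)
                  for e in np.eye(10)])
    p -= lr * g
```

Both runs reduce the loss from its starting value; the point of the study's comparison is how quickly and how far each method gets, which depends on the problem and the hidden-layer size.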

Taki et al. [131]
- Inputs: outside air temperature; wind speed; outside solar radiation
- Outputs: inside air temperature; soil temperature; plant temperatures
- Model: FFNN; feedforward networks, specifically MLP and RBF, were used in this investigation. Different training algorithms were applied and compared with each other and with the support vector machine (SVM) method
- Transfer function: for MLP, no transfer function in the first layer, sigmoid in the hidden layers, linear in the output layer; radial basis function ANNs (RBFANNs) used radial basis functions as activation functions
- Training: LM back-propagation; Bayesian regularization; scaled conjugate gradient BP; RPROP; variable learning rate BP; gradient descent with momentum BP; gradient descent with adaptive learning rate BP; gradient descent BP; BFGS quasi-Newton back-propagation; Powell-Beale conjugate gradient BP; Fletcher-Powell conjugate gradient BP; Polak-Ribiere conjugate gradient BP; one-step secant BP
- Remarks: thirteen different training algorithms were used for the ANN models; comparison showed that the RBFANNs had the lowest error among the models.