Future Climate of Colombo Downscaled with SDSM-Neural Network

Global Climate Models (GCMs), run at a coarse spatial resolution, cannot be directly used for climate impact studies. Downscaling is required to extract the sub-grid and local scale information. This paper investigates whether an artificial neural network (ANN) performs better than the widely-used regression-based statistical downscaling model (SDSM) for downscaling climate at a site in Colombo, Sri Lanka. Based on seasonal and annual model biases and the root mean squared error (RMSE), the ANN performed better than the SDSM for precipitation. The paper proposes a novel methodology for improving climate predictions by combining SDSM with neural networks, allowing a user to apply SDSM together with a neural network model for higher skill in downscaling. The study uses the Canadian Earth System Model (CanESM2) of the IPCC Fifth Assessment Report, reanalysis from the National Center for Environmental Prediction (NCEP), and data from the Asian Precipitation Highly Resolved Observational Data Integration towards Evaluation of Water Resources (APHRODITE) project as the observations. SDSM and the focused time-delayed neural network (TDNN) models are used for the downscaling. Under Representative Concentration Pathway (RCP) 8.5, the projected annual increase in average temperature is 2.83 °C (SDSM) and 3.03 °C (TDNN), and in rainfall 33% (SDSM) and 63% (TDNN), for the 2080s.

The year 2016 was the third consecutive hottest year on record according to the National Oceanic and Atmospheric Administration (NOAA) and the National Aeronautics and Space Administration (NASA). Globally averaged temperatures in 2016 were 0.99 °C warmer than the mid-20th-century average [2]. The Paris Summit participants (Conference of Parties, COP21), in 2015, agreed to limit the rise in global temperature to below 2 °C above the pre-industrial level through 2100. By 2016, the global average temperature was already halfway to that limit. Climate change is projected to increase temperatures and intensify the global water cycle, increasing both extreme events and non-rainy days and causing the combined stresses of floods and droughts. Predicting the future climate is difficult owing to uncertainties from climate models and various other sources, yet such predictions are important for impact studies and adaptation.
Global Climate Models (GCMs) are the models used for climate predictions and for studying climate variability and change. GCMs are numerical coupled models that can simulate global climate features at the continental scale, such as atmospheric circulation cells, intertropical convergence zones and jet streams, and also simulate reasonably well oceanic circulations like the conveyor belt and thermohaline circulation [3]. The model calculates the interactions, based on predefined physical laws, within and across grid cells (set by the resolution) to represent the behavior of the climate in time. Higher resolution climate models potentially enable a better representation of local features. The outputs from a GCM are at a coarse spatial resolution, typically hundreds of kilometers, while impact assessments require variables like temperature and rainfall for a point location or a catchment. Downscaling is needed to bridge this difference and obtain the sub-grid scale information. Broadly, downscaling can be divided into two main types: dynamic and statistical. In dynamic downscaling, a regional climate model (RCM) is nested within the GCM and run with boundary conditions from the GCM. Statistical methods relate the large-scale predictors to the local climate variables through a transfer function. Comprehensive comparisons of dynamic and statistical methods are available in [4-6]. Studies have shown the performance of statistical downscaling to be competitive with dynamical methods for climate change studies [6]. A significant advantage of statistical methods is that they are computationally inexpensive.
The neural network (NN) has found a wide range of applications in climate science. The algorithms are inspired by the neuron structure and the way the human brain processes information and learns from the past. Many types of neural networks are used to solve classification, regression and clustering problems. NNs have been utilized for diverse applications such as precipitation prediction [7-9], water resources studies [10], meteorology and oceanography [11], weather forecasting [12,13], climate variability [7,9] and other climate-related studies. Neural networks have been found useful for extracting the non-linear relationships in climate variables [14-17]. NNs have good non-linear mapping, noise tolerance and predictive capability [9], are believed to be more powerful than regression-based methods [16], and do not require a priori knowledge of the catchment [18]. Temporal neural networks have been shown to be better than regression-based downscaling for climate variability and extremes [7]. NNs can potentially identify hidden relationships by extracting the time information.
The objective of this paper is to investigate whether neural networks are better than SDSM at determining the relationships between GCM predictors and the local climate variables of temperature and rainfall. The paper identifies the optimal neural network that can be trained easily and applied for the given data and study site. It also proposes a combined downscaling methodology that feeds the regression outputs from a neural network into SDSM, allowing a user to apply SDSM for the weather generator and other downscaling analyses. The following are used interchangeably: artificial neural network (ANN) and neural network (NN); precipitation and rainfall. The paper is organized as follows: Section 2 provides an overview of downscaling and describes the study site and data used, Section 3 presents the results of SDSM, the NN, and the combined methodology, and Section 4 summarizes the study with a discussion.

Downscaling Overview
A downscaling model relates the large-scale predictors to the local climate variables. The GCM predictors are then applied to this model to find the local scale predictands. Statistical downscaling is based on the view that regional climate occurs as an interplay of atmospheric or oceanic circulation with regional topography, land-sea distribution and land use, and that it is conditioned by the climate on larger scales [16]. Conceptually, it can be written as R = F(L), where R is the predictand (regional climate variables such as temperature or rainfall), L is the predictor (large-scale climate variables), and F is the transfer function, deterministic or stochastic, that is conditioned by L.
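The transfer function R = F(L) can be made concrete with a toy calibration. The sketch below fits F by ordinary least squares for a single synthetic predictor; the data, the slope and intercept values, and the helper name `fit_transfer_function` are all illustrative assumptions, not the paper's actual series or method.

```python
import random

random.seed(0)

# Synthetic large-scale predictor L and local predictand R with a known
# linear relationship (slope 2.0, intercept 0.5) plus noise.
L_vals = [random.gauss(0.0, 1.0) for _ in range(500)]
R_vals = [2.0 * l + 0.5 + random.gauss(0.0, 0.3) for l in L_vals]

def fit_transfer_function(L, R):
    """Ordinary least squares fit of R ~ a*L + b."""
    n = len(L)
    mL, mR = sum(L) / n, sum(R) / n
    cov = sum((l - mL) * (r - mR) for l, r in zip(L, R))
    var = sum((l - mL) ** 2 for l in L)
    a = cov / var
    return a, mR - a * mL

a, b = fit_transfer_function(L_vals, R_vals)
F = lambda l: a * l + b   # the calibrated transfer function R = F(L)
```

Once calibrated against reanalysis, the same F would be driven with GCM predictor values to obtain the local scenario series.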
Several statistical downscaling methods of varying complexity have been proposed and used. Downscaling concepts, methodology and limitations are available in the literature [3,16,19,20]. Broadly, the methods are sub-divided into weather typing, regression and weather generator approaches; examples include ANNs, self-organizing maps, regression, canonical correlation and principal component analysis. Limitations include the stationarity assumption, namely that the calibrated relationship will remain valid outside the calibration period, and the requirement of long observational records. Statistical methods are computationally inexpensive and can be quickly used for impact assessments to provide local onsite information, for example, daily rainfall at a station to drive a hydrology model.
One of the popular statistical downscaling methods is the statistical downscaling model (SDSM), a combination of multiple linear regression and a stochastic weather generator [21]. SDSM is widely used and relatively simple to apply, and extensive literature is available on its application to climate-related studies and downscaling [22-25]. SDSM performs seven functions: quality control and transformation, screening, model calibration, weather generation, data analysis, graphical analysis and scenario generation [21]. Two optimization methods, Ordinary Least Squares and Dual Simplex, are available, along with other features like bias correction and variance inflation. Predictors are selected through screening, using explained variance and partial correlations. Calibration builds the model using the selected predictors. Validation is performed on a new data subset by comparing with the observations and checked with statistical tests such as t-tests and comparisons of variance and mean; future scenarios are built with the scenario generator. Many statistical tests and analyses, such as frequency and time series analysis, are done within the Graphical User Interface (GUI). SDSM is used in this study first to assess the skill of the neural network model, and is then combined with the neural network.
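The combination of regression and a stochastic weather generator for a conditional variable like rainfall can be sketched as a two-stage process: model wet-day occurrence from the predictors, then model amounts on wet days, with a random draw making each run one ensemble member. This is a toy illustration of the idea, not SDSM's code; the coefficients and the helper name `generate_rainfall` are invented for the example.

```python
import random

random.seed(1)

def generate_rainfall(predictor, p0=0.3, p1=0.2, a=4.0, b=6.0):
    """One stochastic daily rainfall series conditioned on a predictor.
    Stage 1: wet-day occurrence with probability p0 + p1*x (clamped to [0, 1]).
    Stage 2: wet-day amount a + b*x + noise (floored at 0.1 mm).
    All coefficients are made up for illustration.
    """
    series = []
    for x in predictor:
        wet_prob = min(max(p0 + p1 * x, 0.0), 1.0)
        if random.random() < wet_prob:
            series.append(max(a + b * x + random.gauss(0.0, 1.0), 0.1))
        else:
            series.append(0.0)
    return series

predictor = [random.gauss(0.0, 1.0) for _ in range(365)]  # synthetic predictor
daily = generate_rainfall(predictor)                      # one ensemble member
wet_days = sum(1 for d in daily if d > 0)
```

Running the generator many times with the same predictor series yields the ensemble of outputs from which SDSM-style statistics are computed.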
Neural networks are defined by their interconnections, learning process and activation functions. The NN is a regression-type statistical method that learns from data to make predictions and solve complex problems. The multilayer perceptron (MLP) trained with backpropagation is probably the most commonly used topology, and feedforward networks trained with the backpropagation algorithm are especially popular for hydrology-related studies [10]. The short-term memory of a network is the past information available as data, and the long-term memory is the information contained in the weights. MLPs have long-term memory, while dynamic systems have short-term memory structures or recurrent connections. Fully-recurrent networks have memory inside the topology but are complex. The time-lagged feedforward network (TLFN) is a special type of dynamic network with a short-term memory [26]. It is the most common temporal network, consisting of multiple layers of processing elements (PEs) with feedforward connections. The focused TLFN has memory only at the input layer and thus can still be adapted with static backpropagation [26]. The time-delayed neural network (TDNN) is a type of TLFN: an MLP with the input PEs replaced by a tap delay line. The focused TDNN with one hidden layer is shown in Figure 1 [26]. One advantage of the TDNN is that it can be trained quickly with the static backpropagation algorithm. The TDNN has been used in non-linear system identification, time series prediction and temporal pattern recognition [26]. The focused TDNN is selected here as the best performing network.
For scenarios, the IPCC AR5 uses representative concentration pathways (RCPs), which replaced the Special Report on Emissions Scenarios (SRES) of AR4. Radiative forcing, expressed in watts per square meter, is the energy imbalance (the difference between the positive forcing due to greenhouse gases and the negative forcing due to aerosols) that stays in the atmosphere. There are four RCPs developed by different modeling groups: RCPs 2.6, 4.5, 6 and 8.5. RCP 8.5 is a high emissions scenario with heavy use of fossil fuel, comparable to the SRES scenario A1FI. RCPs 4.5 and 6 are intermediate emission scenarios similar to SRES B1 and B2, respectively. The study uses the RCP 4.5 and 8.5 scenarios.
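The tap delay line that distinguishes the focused TDNN from a plain MLP can be illustrated in a few lines. The sketch below (with `tap_delay_embed` a hypothetical helper name) shows how a time series is unrolled so a static network sees the current value plus the k previous values at each step:

```python
def tap_delay_embed(series, taps):
    """Return rows [x_t, x_(t-1), ..., x_(t-taps+1)] for each valid t.
    Each row is what the static MLP behind the delay line receives."""
    return [series[t - taps + 1 : t + 1][::-1]
            for t in range(taps - 1, len(series))]

x = [10, 11, 12, 13, 14, 15]
rows = tap_delay_embed(x, taps=3)
# rows[0] is [12, 11, 10]: the current value first, then the delayed values
```

Because the memory sits only at the input, the embedded rows can be fed to any standard MLP and trained with static backpropagation, which is the practical advantage noted above.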

Study Area and Data
The study area is Colombo, Sri Lanka. The observation data are from the Asian Precipitation Highly Resolved Observational Data Integration towards Evaluation of Water Resources (APHRODITE) project. APHRODITE is a daily gridded precipitation dataset, analyzed from rain gauge observations across Asia and covering more than 57 years [27]. Data at 0.25° × 0.25° resolution are used for the grid point at lat/lon 6.875 × 79.875, centered on Colombo. The IPCC recommends that the climate baseline period be representative of the recent climate and of sufficient duration to include a range of climate variations and anomalies. The baseline period used in the study is 1961 to 1990 (30 years). A 30-year normal period is a popular climatological baseline, defined by the World Meteorological Organization (WMO); 1961-1990 is the current WMO normal period and serves as a standard reference for climate and impact studies. Models are validated for the period 1991-2005.
Models are built with the National Center for Environmental Prediction (NCEP) reanalysis [28]. Both the NCEP and the GCM datasets have 26 large-scale predictors, shown in Table 1. The selection of the GCM for Colombo is based on ongoing research at the University of Tokyo: from downscaling the CMIP5 GCMs, three models performed well for Sri Lanka in reproducing the seasonality, namely the Canadian Earth System Model (CanESM2) of the Canadian Centre for Climate Modelling and Analysis (CCCma), the Centro Euro-Mediterraneo sui Cambiamenti Climatici model (CMCC.CM), Italy, and the Institut Pierre Simon Laplace model (IPSLCM5A-LR), France. This study used CanESM2, the second generation CCCma Earth System Model, built on its fourth generation coupled global climate model and contributed to the Coupled Model Intercomparison Project, Phase 5 (CMIP5) [29]. Daily predictor values are available on a 128 × 64 grid covering the whole globe, with a uniform longitude resolution of 2.8125° and a nearly uniform latitude resolution of roughly 2.8125°. The GCM data are interpolated to the NCEP resolution of 2.5° × 2.5° and normalized to the 1961-1990 mean and standard deviation. Data are available from http://www.cccsn.ec.gc.ca/?page=pred-canesm2. The longitudinal and latitudinal indices of the grid correspond approximately to the centers of the grid boxes; the data used in the study correspond to cell no. BOX 030X_35Y. Data are available for both temperature and precipitation: historical from 1961 to 2005, three scenarios (RCP 2.6, 4.5 and 8.5) from 2006 to 2100, and the NCEP/NCAR predictors for 1961 to 2005.
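The normalization step mentioned above, scaling each predictor to the 1961-1990 baseline mean and standard deviation, can be sketched as follows. The values and the helper name `normalize_to_baseline` are illustrative, not the study's data.

```python
from statistics import mean, stdev

def normalize_to_baseline(values, baseline):
    """Standardize a series using the baseline period's mean and std:
    z = (x - baseline mean) / baseline std."""
    m, s = mean(baseline), stdev(baseline)
    return [(v - m) / s for v in values]

baseline = [20.0, 21.0, 22.0, 23.0, 24.0]   # e.g. baseline-period values
future   = [24.0, 25.0, 26.0]               # e.g. scenario-period values
z = normalize_to_baseline(future, baseline)
```

Normalizing the GCM predictors against the same baseline as the reanalysis keeps the calibrated regression coefficients applicable to the scenario runs.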

SDSM Downscaling
SDSM 4.2, an open source software package, is used for the study. The user guide, SDSM 4.2 - A Decision Support Tool for the Assessment of Regional Climate Change Impacts [21], is a good starting point for first-time users. The model is calibrated with NCEP reanalysis from 1961 to 1990 and validated from 1991 to 2005. The selection of relevant predictors is an important task in calibrating both the SDSM and the neural network, and it has a large impact on the result. Predictors should not only correlate strongly with the predictand being downscaled, but also have a physically sensible relationship to it [30]. Studies have suggested that mid-tropospheric geopotential heights and humidity are the two most relevant predictors for daily precipitation [31], have used MSLP for downscaling rainfall [32], and have noted that humidity is required to capture the effects of climate change on the water-holding capacity of the atmosphere [20]. Screening, partial correlations and scatterplots in SDSM were used for the selection of the predictors. Four predictors were chosen for average temperature (p500, p8_v, shum and temp) and six for precipitation (mslp, p1_f, p8_v, s500, shum and temp). For the average temperature, a monthly model was used; the coefficient of determination r² was 0.213, with a standard error (SE) of 0.818. For the rainfall, a seasonal model with autoregression was used; the r² was 0.139 for the conditional statistics, with an SE of 0.510. Validation results for average temperature and rainfall are presented in the final results section.
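The screening step, ranking candidate predictors by explained variance (r²) against the predictand, can be sketched with synthetic series. The predictor names below are reused only as labels; the data and the helper `pearson_r` are invented for the illustration.

```python
import random

random.seed(2)

def pearson_r(x, y):
    """Pearson correlation coefficient between two series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

target = [random.gauss(0, 1) for _ in range(300)]          # predictand
candidates = {
    "shum": [t + random.gauss(0, 0.5) for t in target],    # strongly related
    "mslp": [random.gauss(0, 1) for _ in range(300)],      # unrelated noise
}
# Rank by explained variance r^2; the informative predictor comes first.
ranked = sorted(candidates,
                key=lambda k: pearson_r(candidates[k], target) ** 2,
                reverse=True)
```

In SDSM this ranking is refined with partial correlations and scatterplots, and physically implausible predictors are discarded even when they correlate well.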

TDNN Downscaling
The neural network is developed using NeuroDimension's NeuroSolutions [26]. The average temperature and the precipitation are modeled separately. First, the best network is selected with all 26 NCEP predictors as inputs to the neural network, with temperature or precipitation as the output. The best performing network is chosen after testing several networks with variations in memory type, activation functions and backpropagation algorithm. Other network topologies tested were linear regression, the multilayer perceptron (MLP) and the time-lag recurrent network (TLRN). Performance is compared using the root mean squared error (RMSE), mean squared error (MSE), the correlation coefficient (r) and the hit score. The TDNN was selected as the best performing network. The TDNN is a type of TLFN, an MLP with memory components that store past values of the data and allow the network to learn relationships over time. Memory types experimented with include the GammaAxon, LaguarreAxon and ContextAxon.
The memory type is the TDNNAxon; the hyperbolic tangent (TanhAxon) transfer function is used in the hidden layer with a bias axon at the output layer, and the network is trained with resilient backpropagation (RProp). The size of the memory layer (the tap delay) depends on the input and the task, and has to be determined on a case-by-case basis. Five taps with a tap delay of 1, and 10 PEs in the hidden layer, are used for both temperature and precipitation. Sensitivity analysis measures the relative importance of the predictors as the standard deviation of the output divided by the standard deviation of the input [7]. The network is then retrained with the selected predictor variables: five predictors for average temperature (p1_v, p8_v, p850, shum and temp) and eight for precipitation (mslp, p1_f, p1_u, p1_v, p1_zh, p8_v, s850 and shum). The data are tagged as 1961-1985 for training, 1986-1990 for cross-validation and 1991-2005 for testing. The sensitivity analysis and scatter plot for the average temperature model are shown in Figure 2.
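The sensitivity measure described above, the standard deviation of the output response divided by the standard deviation of the varied input, can be illustrated with a stand-in model. The function `model` and its coefficients are hypothetical, not the trained network.

```python
from statistics import pstdev

def model(shum, mslp):
    # Stand-in for a trained network (hypothetical coefficients):
    # strongly driven by shum, only weakly by mslp.
    return 3.0 * shum + 0.1 * mslp

def sensitivity(fn, which, grid, fixed=0.0):
    """Std of the output response divided by std of the varied input,
    holding the other input fixed."""
    outs = [fn(g, fixed) if which == "shum" else fn(fixed, g) for g in grid]
    return pstdev(outs) / pstdev(grid)

grid = [i / 10.0 for i in range(-10, 11)]   # vary one input over [-1, 1]
s_shum = sensitivity(model, "shum", grid)   # ~ 3.0: dominant predictor
s_mslp = sensitivity(model, "mslp", grid)   # ~ 0.1: candidate for removal
```

Predictors with low sensitivity are dropped before the network is retrained, which is how the final five-predictor and eight-predictor sets above were obtained.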

Final Results
The SDSM calibrated model is a parameter (.PAR) file, which is used by the weather generator and scenario generator to create the output files: a .OUT file and a .SIM file. The .OUT file contains the ensemble of outputs from the weather generator or the scenario generator. To apply the SDSM-NN combined methodology, the regression output of the SDSM is replaced by the output of the TDNN: a .OUT file and a .SIM file with the same name are created from the single-column output of the TDNN. Scenario generation and the other statistical analyses are then done in SDSM using the new .OUT files. With this method, SDSM can be used for scenario generation and to perform various downscaling and statistical analyses with a neural network model.
Seasonal and annual model biases and the RMSE are used to compare the performance of SDSM and TDNN. The seasonal and annual mean model biases are given in Table 2 and the RMSE in Table 3. For average temperature, the biases and RMSE were lower for the NN for spring and the annual mean, while the SDSM errors were lower for the winter, summer and autumn seasons. For precipitation, the biases and RMSE were lower for the NN for all seasons and the annual mean, so the TDNN performed better than SDSM throughout. Figures 3 and 4 show the monthly, seasonal and annual biases. For monthly temperature, the NN bias was lower for April, May and December, and the SDSM bias was lower for the other months (Figure 3b). For monthly precipitation, the SDSM bias was lower for February, March, August, October and November, and the NN bias was lower in the other months (Figure 4b). Overall, both models showed positive biases for temperature and negative biases for precipitation.
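The two validation scores used here can be stated precisely. A small sketch with illustrative values (not the study's data) follows; the sign convention is that a positive bias means the model overestimates.

```python
def bias(model, obs):
    """Mean of (model - observed): positive means overestimation."""
    return sum(m - o for m, o in zip(model, obs)) / len(obs)

def rmse(model, obs):
    """Root mean squared error of the model against observations."""
    return (sum((m - o) ** 2 for m, o in zip(model, obs)) / len(obs)) ** 0.5

obs_t   = [26.0, 27.0, 28.0, 27.5]   # illustrative observed temperatures (°C)
model_t = [26.5, 27.2, 28.4, 27.9]   # illustrative downscaled values (°C)
b = bias(model_t, obs_t)   # ~ 0.375, a warm bias
e = rmse(model_t, obs_t)   # always >= |bias|
```

Seasonal scores are obtained by applying the same two functions to the subset of days falling in each season of the validation period.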

The seasonal and annual statistical distributions are shown with box plots for temperature (Figure 5) and precipitation (Figure 6). The solid bar shows the monthly median value; the boxes span the interquartile range (25th-75th percentile); the whiskers contain 95% of the values; and the circles show outliers. The data distribution, skew and percentiles can be observed from the plots. Marginal differences between the two models are observed in the distributions. As expected, SDSM compares well with the observations for temperature, except for spring, where the NN median and distribution are closer to the observed. The changes of the median temperature are well reproduced by SDSM for three seasons. Closer agreement between the NN and the observations is found for the median precipitation for spring and summer. Neither model fully captures the range of the precipitation events.
The GCM future projections for the 2020s (2011-2040), 2050s (2041-2070) and 2080s (2071-2099), compared with the current period of 1961-1990, are shown in Table 4.
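The projected changes of this kind are differences between a scenario window and the baseline, reported in absolute terms for temperature and relative terms for rainfall. A sketch with illustrative numbers (chosen for the example, not taken from Table 4):

```python
def temp_change(future, baseline):
    """Absolute change in the mean (°C)."""
    return sum(future) / len(future) - sum(baseline) / len(baseline)

def rain_change_pct(future, baseline):
    """Relative change in the mean (%)."""
    base = sum(baseline) / len(baseline)
    return 100.0 * (sum(future) / len(future) - base) / base

base_t, fut_t = [27.0, 27.2, 26.8], [29.9, 30.1, 30.0]   # illustrative °C means
base_r, fut_r = [2000.0, 2400.0], [2860.0, 2992.0]       # illustrative mm totals
dt = temp_change(fut_t, base_t)       # ~ +3.0 °C
dr = rain_change_pct(fut_r, base_r)   # ~ +33 %
```

Temperature changes are reported additively because the regression is unconditional, while rainfall changes are reported as percentages of the baseline mean.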

Discussion
The objective of this paper was to investigate whether neural networks are better than the regression-based SDSM for downscaling temperature and rainfall in Colombo. Seasonal and annual model biases and the RMSE were used to assess the performance. With lower biases and RMSE for the winter, summer and autumn seasons, the SDSM was marginally better than the NN for downscaling average temperature. With lower biases and RMSE for all seasons and the annual mean, the NN performed better than SDSM for downscaling rainfall. The paper used a combined methodology, feeding the regression outputs from the neural network into SDSM. With this method, SDSM can still be used along with a neural network model for higher skill in downscaling for climate impact studies.
A limitation of the study is the use of a single GCM for the downscaling; the results depend largely on the climate change signals from that one GCM. Other limitations, common to statistical methods, are the assumptions of stationarity. Further research directions to overcome these limitations are to apply the model to two or three selected GCMs and to obtain ensemble averages with an uncertainty analysis. For RCP 8.5, the projected annual increase in average temperature for Colombo was 2.83 °C for SDSM and 3.03 °C for TDNN; the annual increase in rainfall is projected at 33% and 63% for SDSM and TDNN, respectively.
Climate change is likely to cause more extreme weather events, flooding and droughts. The results of this study indicate an increase in rainfall and flooding events in Colombo under an increased emissions scenario, and they will augment other investigations aimed at improving the prediction of rainfall. The IPCC AR5 reports gaps in understanding the climate impacts on precipitation at the catchment scale [33]. Further work to downscale variability and extreme indices is important for impact studies. Within the stated limitations, the results provide daily values of temperature and rainfall for applications such as driving a hydrology model for the Colombo area, and they offer a scientific guideline for impact assessment studies, framing policies and long-term adaptation planning.

Figure 1.
Figure 1. Focused time-delayed neural network (TDNN) with one hidden layer and a tap delay line of k + 1 taps.

Figure 2.
Figure 2. (a) Sensitivity analysis and (b) scatter plot for the average temperature model.

Table 1.
Predictor variables of the Global Climate Model (GCM) and the National Center for Environmental Prediction (NCEP).

Table 2.
Seasonal and annual model biases for statistical downscaling model (SDSM) and TDNN for validation period.

Table 3.
Seasonal and annual RMSE for SDSM and TDNN.


Table 4.
Future projections of increase in average temperature and rainfall.