A Random Forests Approach to Predicting Clean Energy Stock Prices

Abstract: Climate change, green consumers, energy security, fossil fuel divestment, and technological innovation are powerful forces shaping an increased interest towards investing in companies that specialize in clean energy. Well-informed investors need reliable methods for predicting the stock prices of clean energy companies. While the existing literature on forecasting stock prices shows how difficult it is to predict stock prices, there is evidence that predicting stock price direction is more successful than predicting actual stock prices. This paper uses the machine learning method of random forests to predict the stock price direction of clean energy exchange traded funds. Some well-known technical indicators are used as features. Decision tree bagging and random forests predictions of stock price direction are more accurate than those obtained from logit models. For a 20-day forecast horizon, tree bagging and random forests methods produce accuracy rates of between 85% and 90% while logit models produce accuracy rates of between 55% and 60%. Tree bagging and random forests are easy to understand and estimate and are useful methods for forecasting the stock price direction of clean energy stocks.


Introduction
Climate change, green consumers, energy security, fossil fuel divestment, and technological innovation are powerful forces shaping an increased interest towards investing in companies that specialize in clean energy, broadly defined as energy produced from renewable energy sources like biomass, geothermal, hydro, wind, wave, and solar. Through technological innovation, the levelized cost of electricity is falling for renewables and onshore wind is now less costly than coal (The Economist 2020). Investment in clean energy equities totaled $6.6 billion in 2019. While this number was below the record high of $19.7 billion in 2017, the compound annual growth rate between 2004 and 2019 of 24% was above that of clean energy private equity or venture capital funding (Frankfurt School-UNEP Centre/BNEF 2020).
Well-informed investors need reliable methods for predicting the stock prices of clean energy companies. There is, however, a noticeable lack of information on the prediction of clean energy stock prices. This is the gap in the literature that this paper fills. There are two complicating issues. First, predicting stock prices is fraught with difficulty and the prevailing wisdom in most academic circles, consistent with the efficient markets hypothesis, has generally been that stock prices are unpredictable (Malkiel 2003). More recently, momentum and economic or psychological behavior factors have been identified as possible sources of stock price predictability (Gray and Vogel 2016; Lo et al. 2000; Christoffersen and Diebold 2006; Moskowitz et al. 2012). In addition, the existing literature on stock price predictability shows that predicting stock price direction is more successful than predicting actual stock prices (Basak et al. 2019; Leung et al. 2000; Nyberg 2011; Nyberg and Pönkä 2016; Pönkä 2016; Ballings et al. 2015; Lohrmann and Luukka 2019).
Second, regression based approaches for predicting stock prices or approaches relying solely on technical indicators provide mixed results (Park and Irwin 2007), but machine learning (ML) methods appear to offer better accuracy (Shah et al. 2019; Ghoddusi et al. 2019; Khan et al. 2020; Atsalakis and Valavanis 2009; Henrique et al. 2019). There are many different types of ML methods, but decision tree bagging and random forests (RFs) are easy to understand and motivate and they also tend to perform well when predicting stock prices (Basak et al. 2019; Khan et al. 2020; Lohrmann and Luukka 2019). Decision trees are a nonparametric supervised learning method for classification and regression. A decision tree classifier works by using decision rules on the data set features (predictors or explanatory variables) to predict a target variable (James et al. 2013). Decision trees are easy to understand and visualize and work on numerical and categorical data (Mullainathan and Spiess 2017). Decision tree learning can, however, create complicated trees whose results are susceptible to small changes in the data. Bootstrap aggregation, or bagging as it is commonly known, is one way to reduce the variance of decision trees. Bootstrap replication is used to create many bootstrap training data sets. Even though each tree is grown deep and has high variance, averaging the predictions from these bootstrap trees reduces variance. RFs are ensembles of decision trees and work by introducing decorrelation between the trees: a small set of predictors is randomly selected at each split of the tree (James et al. 2013).
Here are some recent examples of papers that use RFs for stock price prediction. Ampomah et al. (2020) compare the performance of several tree-based ensemble methods (RFs, XGBoost, Bagging, AdaBoost, Extra Trees, and Voting Classifier) in predicting the direction of stock price change for data from three US stock exchanges. The accuracy for each model was good and ranged between 82% and 90%. The Extra Trees method produced the highest accuracy on average. Ballings et al. (2015) point out that among papers that predict stock price direction with machine learning methods, artificial neural networks (ANNs) and support vector machines (SVMs) are more popular than RFs. Only 3 out of the 33 papers that they survey used RFs. It is not clear why more complicated methods like ANN and SVM are preferred over simpler methods like RFs. Using data on 5767 European listed companies, they compare the stock price direction predictive performance of RFs, SVMs, AdaBoost, ANNs, K-nearest neighbor and logistic regression. Feature selection is based on company specific fundamental data. They find strong evidence that ensemble methods like RFs have greater prediction accuracy over a one-year prediction period. Basak et al. (2019) use RFs to predict stock price direction for 10 companies, most of which are technology or social media oriented (AAPL, AMZN, FB, MSFT, TWTR). Feature selection is based on technical indicators. They find the predictive accuracy of RFs and XGBoost to be higher than that of artificial neural networks, support vector machines, and logit models. Khan et al. (2020) use 12 machine learning methods applied to social media and financial data to predict stock prices (3 stock exchanges and 8 US technology companies). The RFs method is consistently ranked as one of the best methods. Lohrmann and Luukka (2019) use RFs to predict the classification of S&P 500 stocks.
Stock price direction is based on a four-class structure that depends upon the difference between the open and close stock prices. Feature selection is based on technical indicators. They find that the RFs classifier produces better trading strategies than a buy and hold strategy. Mallqui and Fernandes (2019) find that a combination of recurrent neural networks and a tree classifier to be better at predicting Bitcoin price direction than SVM. Mokoaleli-Mokoteli et al. (2019) study how several ML ensemble methods like boosted, bagged, RUS-boosted, subspace disc, and subspace k-nearest neighbor (KNN) compare in predicting the stock price direction of the Johannesburg Stock Exchange. They find that Boosted methods outperform KNN, logistic regression, and SVM. Nti et al. (2020) also find that RFs tend to be underutilized in studies that focus on stock price prediction. Weng et al. (2018) use machine learning methods (boosted regression tree, RFs, ANN, SVM) combined with social media type data like web page views, financial news sentiment, and search trends to predict stock prices. Twenty large US companies are studied. RFs and boosted regression trees outperform ANN or SVM. The main message from these papers is that RFs have high accuracy when predicting stock price direction but are underrepresented in the literature compared to other machine learning methods.
The purpose of this paper is to predict clean energy stock price direction using random forests. Directional stock price forecasts are constructed from one day to twenty days into the future. A multi-step forecast horizon is used in order to gain an understanding of how forecast accuracy changes across time (Basak et al. 2019; Khan et al. 2020). A five-day forecast horizon corresponds to one week of trading days, a 10-day forecast horizon corresponds to two weeks of trading days, and a 20-day forecast horizon corresponds to approximately one month of trading days. Forecasting stock price direction over a multi-day horizon provides a more challenging environment in which to compare models. Clean energy stock prices are measured using several well-known and actively traded exchange traded funds (ETFs). Forecasts are constructed using logit models, bagging decision trees, and RFs. A comparison is made between the forecasting accuracy of these models. Feature selection is based on several well-known technical indicators like the moving average, stochastic oscillator, rate of price change, MACD, RSI, and advance-decline line (Bustos and Pomares-Quimbaya 2020).
The analysis from this research provides some interesting results. RFs and tree bagging show much better stock price prediction accuracy than logit or step-wise logit. The prediction accuracy from bagging and RFs is very similar indicating that either method is very useful for predicting the stock price direction of clean energy ETFs. The prediction accuracy for RF and tree bagging models is over 80% for forecast horizons of 10 days or more. For a 20-day forecast horizon, tree bagging and random forests methods produce accuracy rates of between 85% and 90% while logit models produce accuracy rates of between 55% and 60%. This paper is organized as follows. The next section sets out the methods and data. This is followed by the results and a discussion. The last section of the paper provides some conclusions and suggestions for future research.

The Logit Method for Prediction
The objective of this paper is to predict the stock price direction of clean energy stocks. Stock price direction can be classified as either up (stock price change from one period to the next is positive) or down (stock price change from one period to the next is non-positive). This is a standard classification problem where the variable of interest can take on one of two values (up, down) and is easily coded as a binary variable. One approach to modelling and forecasting the direction of stock prices is to use logit models. Explanatory variables deemed relevant to predicting stock price direction can be used as features. Logit models are widely used and easy to estimate.
The logit specification is

y_t+h = β′X_t + ε_t+h  (1)

In Equation (1), y_t+h is a binary variable that takes on the value "up" if the stock price change p_t+h − p_t is positive and "down" if it is non-positive, and X_t is a vector of features. The variable p_t represents the adjusted closing stock price on day t. The random error term is ε. The value of h = 1, 2, 3, . . . , 20 indicates the number of time periods into the future to predict. A multi-step forecast horizon is used in order to see how forecast accuracy changes across the forecast horizon. A 20-day forecast horizon is used in this paper since this is consistent with the average number of trading days in a month. The features include well-known technical indicators like the relative strength indicator (RSI), stochastic oscillator (slow, fast), advance-decline line (ADX), moving average cross-over divergence (MACD), price rate of change (ROC), on balance volume (OBV), and the 200-day moving average. While there are many different technical indicators, the ones chosen in this paper are widely used in academia and practice (Yin and Yang 2016; Yin et al. 2017; Neely et al. 2014; Wang et al. 2020; Bustos and Pomares-Quimbaya 2020).
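The construction of the binary direction target and the logit fit can be sketched as follows. The paper's estimation was done in R; this scikit-learn version, with synthetic prices and two toy stand-in features, is an illustrative assumption rather than the paper's actual code.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
h = 5  # forecast horizon in trading days
# Synthetic adjusted closing prices (a random walk with drift), standing in for ETF data
prices = 100 * np.exp(np.cumsum(rng.normal(0.0005, 0.01, 600)))

# Binary target: 1 ("up") if p_{t+h} - p_t > 0, 0 ("down") otherwise
y = (prices[h:] - prices[:-h] > 0).astype(int)

# Toy features standing in for the paper's technical indicators:
# a 20-day rate of change and the gap between price and a 21-day moving average
roc = prices[20:] / prices[:-20] - 1
ma_gap = prices[20:] - np.convolve(prices, np.ones(21) / 21, "valid")
X = np.column_stack([roc, ma_gap])[: len(y) - 20]
y = y[20:]  # align the target with the feature rows

model = LogisticRegression(max_iter=1000).fit(X, y)
print(model.predict(X[:5]))  # predicted direction (1 = up, 0 = down)
```

In practice the full indicator set (RSI, stochastic oscillator, MACD, OBV, MA200, etc.) would replace the two toy columns, but the alignment of features at time t with the direction h days ahead is the key step.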

The Random Forests Method for Prediction
Logit regression classifies the dependent (response) variable based on a linear boundary, and this can be limiting in situations where there is a nonlinear relationship between the response and the features. In such situations, a decision tree approach may be more useful. Decision trees bisect the predictor space into smaller and smaller non-overlapping regions and are better able to capture the classification between the response and the features in nonlinear situations. The rules used to split the predictor space can be summarized in a tree diagram, and this approach is known as a decision tree method. Tree based methods are easy to interpret but are not as competitive as ensemble methods like bagging or random forests. A brief discussion of decision trees, bagging, and random forest methods is presented here, but the reader is referred to James et al. (2013) for a more complete treatment.
A classification tree is used to predict a qualitative response rather than a quantitative one. A classification tree predicts that each observation belongs to the most commonly occurring class of training observations in the region to which it belongs. A majority voting rule is used for classification. The basic steps in building a classification tree can be described as follows.

1. Divide the predictor space into J distinct, non-overlapping regions R_1, . . . , R_J.
2. For every observation that falls into the region R_j, the same prediction is made: the observation is assigned to the most commonly occurring class of training observations in that region.
The regions R_1, . . . , R_J can be constructed as follows. Recursive binary splitting is used to grow the tree and splitting rules are determined by a classification error rate. The classification error rate, E, is the fraction of training observations in a region that do not belong to the most common class:

E = 1 − max_k (p̂_mk)  (2)

In Equation (2), p̂_mk is the proportion of training observations in the mth region that are from the kth class. The classification error rate is not very sensitive to tree growing, so in practice either the Gini index (G) or the entropy (D) is used to evaluate splits.
The Gini index,

G = Σ_k p̂_mk (1 − p̂_mk),

measures total variance across the K classes. In this paper, there are only two classes (stock price direction positive, or not) so K = 2. When the p̂_mk are close to zero or one, the Gini index takes on a small value. For this reason, G is often referred to as a measure of node impurity, since a small G value shows that a node mostly contains observations from a single class. The root node at the top of a decision tree can be found by trying every possible split of the predictor space and choosing the split that reduces impurity as much as possible (yields the largest reduction in the Gini index). Successive nodes are found using the same process, and this is how recursive binary splitting proceeds.
The entropy,

D = − Σ_k p̂_mk log p̂_mk,

like the Gini index, takes on small values if the mth node is pure. The Gini index and entropy produce numerically similar values. The analysis in this paper uses the entropy measure.
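A quick numerical check of the two impurity measures for a two-class node (an illustrative sketch, not part of the paper's analysis):

```python
import numpy as np

def gini(p):
    """Gini index G = sum_k p_k * (1 - p_k) for class proportions p."""
    p = np.asarray(p, dtype=float)
    return float(np.sum(p * (1 - p)))

def entropy(p):
    """Entropy D = -sum_k p_k * log(p_k), treating 0*log(0) as 0."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(-np.sum(p * np.log(p)))

# A nearly pure node (90% "up") has small impurity; a 50/50 node has the
# largest impurity possible for K = 2 classes
print(gini([0.9, 0.1]), entropy([0.9, 0.1]))
print(gini([0.5, 0.5]), entropy([0.5, 0.5]))
```

Both measures are minimized at a pure node and maximized at an evenly mixed one, which is why either can serve as the splitting criterion.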
The outcome of this decision tree building process is typically a very deep and complex tree that may produce good predictions on the training data set but is likely to overfit the data, leading to poor performance on unseen data. Decision trees suffer from high variance, which means that if the training data set is split into two parts at random and a decision tree fit to both halves, the outcomes could be very different. One approach to remedying this is to use bagging. Bootstrap aggregation, or bagging, is a statistical technique used to reduce the variance of a machine learning method. The idea behind bagging is to take many training sets, build a decision tree on each training data set, and average the predictions to obtain a single low-variance machine learning model. In general, however, the researcher does not have access to many training sets, so instead bootstrap replication is used to create many bootstrap training data sets, and a decision tree is grown for each replication. Each bootstrap training set is constructed by randomly sampling the observations with replacement, so it contains the same number of observations as the original training set, but because sampling is done with replacement some observations appear more than once while others are left out. The number of bootstrap replications is the number of trees. Even though each tree is grown deep and has high variance, averaging the predictions from these bootstrap trees reduces variance. The test error of a bagged model can be easily estimated using the out of bag (OOB) error. In bagging, decision trees are repeatedly fit to bootstrapped subsets of the observations. On average, each bagged tree uses approximately two-thirds of the observations (James et al. 2013). The remaining one-third of the observations, which are not used in fitting, are referred to as the OOB observations and can be used as a test data set to evaluate prediction accuracy. The OOB test error can be averaged across trees.
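The bagging-with-OOB procedure just described can be sketched with scikit-learn, whose BaggingClassifier uses a decision tree as its default base learner. The synthetic data and library choice are illustrative assumptions (the paper's estimation was done in R):

```python
import numpy as np
from sklearn.ensemble import BaggingClassifier

rng = np.random.default_rng(1)
# Synthetic features and a nonlinear binary target, standing in for the indicator data
X = rng.normal(size=(400, 6))
y = (X[:, 0] + 0.5 * X[:, 1] ** 2 + rng.normal(0, 0.3, 400) > 0).astype(int)

# 500 deep trees fit to bootstrap resamples of the observations;
# oob_score=True scores each observation using only the trees that never saw it
bag = BaggingClassifier(n_estimators=500, oob_score=True, random_state=1).fit(X, y)
print(round(bag.oob_score_, 3))  # OOB accuracy: a built-in estimate of test error
```

The OOB score comes free with the bootstrap, so no separate validation split is needed to gauge generalization.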
Random forests consist of a large number of individual decision trees that operate as an ensemble (Breiman 2001). Random forests are nonparametric classifiers. Random forests are an improvement over bagging trees because they introduce decorrelation between the trees. As in the case of bagging, a large number of decision trees are built on bootstrapped training samples. Each individual tree in the random forest produces a prediction for the class, and the class with the most votes is the model's prediction. Each time a split in a tree occurs, a random sample of predictors is chosen as split candidates from the full set of predictors. Notice how this differs from bagging: in bagging, all predictors are considered at each split. In random forests, the number of predictors chosen at random is usually calculated as the square root of the total number of predictors (James et al. 2013). While randomly restricting the candidate predictors may seem strange, averaging results from non-correlated trees is much better for reducing variance than averaging trees that are highly correlated. In random forests, trees are trained on different samples due to bagging and also use different features when predicting outcomes.
This paper compares the performance of logit, step-wise logit, bagging decision trees, and random forests for predicting the stock price direction of clean energy ETFs. For the analysis, 80% of the data was used for training and 20% for testing. Classification prediction is one of the main goals of classification trees, and the accuracy of prediction can be obtained from the confusion matrix. The logit model uses all of the features in predicting stock price direction. The step-wise logit uses a backwards step-wise reduction algorithm evaluated using the Akaike Information Criterion (AIC) to create a subset of influential features. The bagging decision tree model was estimated with 500 trees. The random forests were estimated with 500 trees and 3 (the floor of the square root of the number of predictor variables, 10) randomly chosen predictors at each split (Breiman 2001). The results are not sensitive to the number of trees provided a large enough number of trees is chosen. A very large number of trees does not lead to overfitting, but a small number of trees results in high test error. Training control for the random forest was handled with 10-fold cross validation with 10 repeats. All calculations were done in R (R Core Team 2019) using the random forests machine learning package (Breiman et al. 2018).
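The random forests setup described above (500 trees, entropy splits, 3 of 10 predictors tried at each split, 80/20 split) can be approximated as follows. This is a scikit-learn sketch with synthetic data rather than the paper's R code, and the chronological (unshuffled) split is an assumption:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
X = rng.normal(size=(500, 10))  # 10 features, matching the paper's indicator count
y = (X[:, 0] * X[:, 1] + X[:, 2] + rng.normal(0, 0.5, 500) > 0).astype(int)

# 80% training / 20% testing, kept in time order (shuffle=False)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, shuffle=False)

rf = RandomForestClassifier(
    n_estimators=500,      # 500 trees, as in the paper
    max_features="sqrt",   # floor(sqrt(10)) = 3 candidate predictors per split
    criterion="entropy",   # the paper uses the entropy impurity measure
    random_state=2,
).fit(X_tr, y_tr)
print(round(rf.score(X_te, y_te), 3))  # test-set classification accuracy
```

The paper additionally tunes the forest with repeated 10-fold cross validation; that step is omitted here for brevity.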

The Data
The data for this study consists of the stock prices of five popular, US listed, and widely traded clean energy ETFs. The Invesco WilderHill Clean Energy ETF (PBW) is the most widely known clean energy ETF and has the longest trading period with an inception date of 3 March 2005. This ETF consists of publicly traded US companies that are in the clean energy business (renewable energy, energy storage, energy conversion, power delivery, greener utilities, cleaner fuels). The iShares Global Clean Energy ETF (ICLN) seeks to track the S&P Global Clean Energy Index. The First Trust NASDAQ Clean Edge Green Energy Index Fund (QCLN) tracks the NASDAQ Clean Edge Energy Index. The Invesco Solar ETF (TAN) tracks the MAC Global Solar Energy Index which focuses on companies that generate a significant amount of their revenue from solar equipment manufacturing or enabling products for the solar power industry. The First Trust Global Wind Energy ETF (FAN) tracks the ISE Clean Edge Global Wind Energy Index and consists of companies throughout the world that are in the wind energy industry. TAN and FAN began trading near the middle of 2008. The daily data set starts on 1 January 2009 and ends on 30 September 2020. The data was collected from Yahoo Finance. Several well-known technical indicators like the relative strength indicator (RSI), stochastic oscillator (slow, fast), advance-decline line (ADX), moving average cross-over divergence (MACD), price rate of change (ROC), on balance volume, and the 200-day moving average, calculated from daily data, are used as features in the logit and RFs prediction models.
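The features above follow standard textbook definitions; a pandas sketch for three of them (ROC, the 200-day moving average, and a simple 14-day RSI) is shown below. The toy price series and the rolling-mean approximation of Wilder's RSI smoothing are illustrative assumptions, not the paper's exact calculations:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
# Synthetic daily closing prices standing in for the ETF data
close = pd.Series(100 * np.exp(np.cumsum(rng.normal(0.0004, 0.012, 300))))

roc = (close / close.shift(10) - 1) * 100   # 10-day price rate of change (%)
ma200 = close.rolling(window=200).mean()    # 200-day moving average

# A simple 14-day RSI (Wilder's smoothing approximated by a rolling mean)
delta = close.diff()
gain = delta.clip(lower=0).rolling(14).mean()
loss = (-delta.clip(upper=0)).rolling(14).mean()
rsi = 100 - 100 / (1 + gain / loss)

features = pd.DataFrame({"roc": roc, "ma200": ma200, "rsi": rsi}).dropna()
print(features.tail(3))
```

Because the 200-day moving average needs 200 observations, the usable feature matrix starts only after the longest indicator window, which is why the rows with missing values are dropped.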
The time series pattern of the clean energy ETFs shows that the ETFs move together (Figure 1). There was a double peak formation in early 2009 and 2011 followed by a trough in 2013. This was followed by a peak in 2014 and then a relatively horizontal pattern between 2017 and 2019. In response to the global financial crisis of 2008-2009 some countries, like the US, China, and South Korea, implemented fiscal stimulus packages where the economic stimulus was directed at achieving economic growth and environmental sustainability (Andreoni 2020; Mundaca and Richter 2015). This helped to increase the stock prices of clean energy companies. All of the ETFs have risen sharply since the onset of the World Health Organization's declaration of the COVID-19 global pandemic (March 2020).

The histograms for the percentage of up days show little variation for PBW, ICLN, and TAN (Figure 2). The percentage of up days increases with the number of days for QCLN, while for FAN the pattern increases up to about 7 days, after which the percentage of up days shows little variation with longer time periods. Compared to the other clean energy ETFs studied in this paper, QCLN has the strongest trend in the data (Figure 1), and this is consistent with its higher proportion of up days. In order to investigate the impact of the number of trees on the random forests model, Figure 3 shows how the test error relates to the number of trees. The analysis is conducted for a 10-step forecast horizon where 80% of the data is used for training and 20% is used for testing. In each case, the test error declines rapidly as the number of trees increases from 1 to 100. After 300 trees there is very little reduction in the test error. Notice how the test error converges; this shows that random forests do not overfit as the number of trees increases. In Figure 3, the out of bag (OOB) test error is reported along with the test error for the up and down classifications.
The results for other forecast horizons are similar to those reported here. Consequently, 500 trees are used in estimating the RFs.

Results
This section reports the results from predicting stock price direction for clean energy ETFs. Since this is a classification problem, prediction accuracy is probably the single most useful measure of forecast performance. Prediction accuracy is the number of true positives plus true negatives divided by the total number of predictions. This measure can be obtained from the confusion matrix. Other forecast accuracy measures, like how well the models predict the up or down classification separately, are also available and are reported, since it is interesting to see whether the forecast accuracy for predicting the up class is similar to or different from that of predicting the down class.
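The accuracy calculation from a confusion matrix can be made concrete with a small example (the direction labels and counts here are hypothetical, not the paper's results):

```python
from sklearn.metrics import confusion_matrix

# Toy actual and predicted directions: 1 = "up", 0 = "down"
y_true = [1, 1, 0, 1, 0, 0, 1, 0, 1, 1]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0, 1, 1]

# ravel() on a 2x2 confusion matrix yields (tn, fp, fn, tp)
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
accuracy = (tp + tn) / (tp + tn + fp + fn)  # fraction of correct predictions
print(tn, fp, fn, tp, accuracy)  # 3 1 1 5 0.8
```

Here 8 of the 10 directional calls are correct, so the accuracy is 0.8, exactly the true-positive-plus-true-negative share described above.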

Stock price direction prediction accuracy for PBW (Figure 4) shows large differences between the logit models and RF or tree bagging. The prediction accuracy for logit and step-wise logit shows that while there is some improvement in accuracy between 1 and 5 days ahead, the prediction accuracy never gets above 0.6 (60%). The prediction accuracy of the RFs and tree bagging methods shows considerable improvement between 1 and 10 days. Prediction accuracy for predicting stock price direction 10 days into the future is over 85%. There is little variation in prediction accuracy between 10 and 20 days into the future. Notice that the prediction accuracy of tree bagging and RF is very similar.
The patterns of prediction accuracy for the other clean energy ETFs are very similar to that described for the PBW clean energy ETF (Figures 5-8). For each ETF, the prediction accuracy of RF and bagging trees is very similar and much more accurate than that of the logit models.
Variable importance is used to determine which variables are most important in the RFs method. The mean decrease in accuracy (MD accuracy) is computed from the OOB data. The mean decrease in Gini (MD Gini) is a measure of node impurity. For each ETF at a 10-period forecast horizon, the OBV and MA200 are the two most important features in classifying clean energy stock price direction because they have the largest values of MD accuracy and MD Gini (Table 1). Further analysis for other forecast horizons (not reported) shows that OBV and MA200 are also the two most important features at those horizons. Figures 4-8 show the overall prediction accuracy. Another interesting question is how the prediction accuracy compares between positive predictive values and negative predictive values. Positive predictive value is the proportion of predicted positive cases that are actually positive.
An alternative way to think about this is: when a model predicts a positive case, how often is it correct? Figure 9 reports the positive predictive value for PBW. This plot shows how accurate the models are in predicting the positive price direction. The RFs and tree bagging methods are more accurate than the logit methods. After 5 days, the RFs and tree bagging methods have an accuracy of over 80% while the accuracy of the logit methods never reaches 70%. The pattern of positive predictive values for the other ETFs (Figures 10-13) is similar to that observed for PBW. For each ETF, after 10 days the positive predictive values for RFs and bagging are above 0.80 and in most cases above 0.85. Figures 14-18 show the negative predictive value, the proportion of predicted negative cases that are actually negative. Figure 14 reports the negative predictive value for PBW. This plot shows how accurate the models are in predicting the down stock price direction. The RFs and tree bagging methods are more accurate than the logit models.
For the RFs and tree bagging models, accuracy increases from 0.5 to 0.8 between 1 and 5 days. After 10 days negative predictive value fluctuates between 0.85 and 0.90. The pattern of negative predictive value for the other ETFs (Figures 15-18) are similar to what is observed for PBW. For each ETF, after 10 days the negative predictive values for RFs and bagging are above 0.80 and in most cases above 0.85. the proportion of predicted negative cases relative to the actual number of negative cases. Figure 14 reports the negative predictive value for PBW. This plot shows how accurate the models are in predicting the down stock price direction. The RFs and tree bagging methods are more accurate than the logit models. For the RFs and tree bagging models, accuracy increases from 0.5 to 0.8 between 1 and 5 days. After 10 days negative predictive value fluctuates between 0.85 and 0.90. The pattern of negative predictive value for the other ETFs (Figures 15-18) are similar to what is observed for PBW. For each ETF, after 10 days the negative predictive values for RFs and bagging are above 0.80 and in most cases above 0.85.         To summarize, the main take-away from this research is that RFs and tree bagging provide much better predicting accuracy then logit or step-wise logit. The prediction accuracy between bagging and RFs is very similar indicating that either method is very useful for predicting the stock price direction of clean energy ETFs. The prediction accuracy for RF and tree bagging models is over 80% for forecast horizons of 10 days or more. The To summarize, the main take-away from this research is that RFs and tree bagging provide much better predicting accuracy then logit or step-wise logit. The prediction accuracy between bagging and RFs is very similar indicating that either method is very useful for predicting the stock price direction of clean energy ETFs. 
The prediction accuracy for RF and tree bagging models is over 80% for forecast horizons of 10 days or more. The positive predictive values and negative predictive values are similar indicating that there is little asymmetry between the up and down prediction classifications.
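The positive and negative predictive values discussed above can be read directly off the confusion matrix of predicted versus actual directions. A minimal illustration follows; the label vectors are made up for demonstration, not taken from the paper's data:

```python
import numpy as np
from sklearn.metrics import confusion_matrix

# Hypothetical actual and predicted direction labels (1 = up, 0 = down).
actual    = np.array([1, 1, 0, 0, 1, 0, 1, 1, 0, 1])
predicted = np.array([1, 0, 0, 0, 1, 1, 1, 1, 0, 1])

# For binary labels, ravel() returns (tn, fp, fn, tp).
tn, fp, fn, tp = confusion_matrix(actual, predicted).ravel()

ppv = tp / (tp + fp)   # share of predicted "up" days that were actually up
npv = tn / (tn + fn)   # share of predicted "down" days that were actually down
```

Comparing `ppv` and `npv` across forecast horizons is exactly the symmetry check summarized above: similar values mean the classifier is not systematically better at calling up moves than down moves.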

Discussion
The research in this paper shows that RFs produce more accurate clean energy stock price direction forecasts than logit models. These results add to a growing body of research showing that machine learning methods like RFs have considerable stock price direction predictive performance (Ballings et al. 2015; Basak et al. 2019; Lohrmann and Luukka 2019; Weng et al. 2018; Ampomah et al. 2020). None of these studies, however, considers clean energy stock prices. This appears to be the first paper to use ML methods to predict clean energy stock price direction.
The results of this present paper could be combined with some of the knowledge discussed in the previous paragraph to expand the feature set used in estimating RFs. It may, for example, be useful to include other variables like oil prices in the set of features used in the estimation of RFs. A comparison could be made between feature sets that are based on technical indicators and feature sets that include oil prices and other macroeconomic variables to see if macroeconomic variables offer additional insight into predicting clean energy stock price direction.

Conclusions
There is a growing interest in investing in clean energy companies and some of the major drivers behind this interest include climate change, green consumers, energy security, fossil fuel divestment, and technological innovation. Investors in clean energy equities would benefit from a better understanding of how to predict clean energy stock prices. There is, however, a noticeable lack of information on this topic. This is the gap in the literature that this paper fills.
Building on the existing finance literature, which shows that stock price direction is easier to predict than stock prices, and on recent developments in machine learning, which show that ML techniques offer an improvement in prediction over conventional regression-based approaches, this paper uses RFs and decision tree bagging to predict clean energy equity stock price direction. RFs and decision tree bagging are easier to explain and estimate than other ML techniques like ANNs or SVMs, yet RFs appear to be underutilized in the existing literature. Five well-known and actively traded clean energy ETFs are chosen for study. For each ETF, prediction accuracy is assessed over horizons of one to twenty days (approximately one month of trading days).
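The setup described above — label each day by whether the price is higher h trading days ahead, then compare a random forest against a logit model on a chronological hold-out — can be sketched as follows. Synthetic prices stand in for the ETF series, and the single trailing-return feature is a placeholder for the paper's technical indicators:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
h = 20                                           # forecast horizon in days
price = 100 * np.exp(np.cumsum(0.01 * rng.normal(size=1500)))

n = len(price)
t = np.arange(h, n - h)                          # usable time indices
# Direction label at time t: 1 if the price is higher h days ahead.
y = (price[t + h] > price[t]).astype(int)
# Placeholder feature: trailing h-day return (stand-in for OBV, MA200, etc.).
X = (price[t] / price[t - h] - 1).reshape(-1, 1)

# Chronological split: train on the first 70%, test on the remainder.
cut = int(0.7 * len(y))
rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X[:cut], y[:cut])
logit = LogisticRegression().fit(X[:cut], y[:cut])

rf_acc = (rf.predict(X[cut:]) == y[cut:]).mean()
logit_acc = (logit.predict(X[cut:]) == y[cut:]).mean()
```

On this random-walk toy series neither model should do well; the point of the sketch is only the labeling and evaluation pipeline, not the accuracy levels reported in the paper.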
RFs and tree bagging show much better stock price prediction accuracy than logit or step-wise logit. The prediction accuracy of bagging and RFs is very similar, indicating that either method is very useful for predicting the stock price direction of clean energy ETFs. The prediction accuracy for RF and tree bagging models is over 80% for forecast horizons of 10 days or more. For a 20-day forecast horizon, tree bagging and random forests methods produce accuracy rates of between 85% and 90%, while logit models produce accuracy rates of between 55% and 60%. These results are in agreement with other research that shows RFs to have high stock price predictive accuracy (Ballings et al. 2015; Basak et al. 2019; Lohrmann and Luukka 2019; Weng et al. 2018; Ampomah et al. 2020). The positive predictive values and negative predictive values indicate that there is little asymmetry between the up and down prediction classifications.
There are several avenues for future research. First, this paper has focused on the comparison between bagging decision trees, RFs, and logit models. A deeper analysis could include other ML methods like boosting, ANNs, and SVMs. Second, this paper used a set of well-known technical indicators as features. The feature space could be expanded to include additional technical indicators or other variables like oil prices or other macroeconomic variables. Third, the analysis in this paper was conducted using ETFs. It may also be of interest to apply machine learning techniques to company-specific clean energy stock price prediction.
Funding: This research received no external funding.

Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Not applicable.

Data Availability Statement: The data used in this study are available from Yahoo Finance at https://finance.yahoo.com/.