Economies 2013, 1(3), 49-64; doi:10.3390/economies1030049
Abstract: In the wake of the 2008 financial crisis, many countries are hoping that massive increases in their money supplies will revive their economies. Evaluating the effectiveness of this strategy using traditional statistical methods would require the construction of an extremely complex economic model of the world that showed how each country’s situation affected all other countries. No matter how complex that model was, it would always be subject to the criticism that it had omitted important variables. Omitting important variables from traditional statistical methods ruins all estimates and statistics. This paper uses a relatively new statistical method that solves the omitted variables problem. This technique produces a separate slope estimate for each observation, which makes it possible to see how the estimated relationship has changed over time due to omitted variables. I find that the effectiveness of monetary policy fell between the first quarter of 2003 and the fourth quarter of 2012 by 14%, 36%, 38%, 32%, 29%, and 69% for Japan, the UK, the USA, the Euro area, Brazil, and the Russian Federation, respectively. I hypothesize that monetary policy is suffering from diminishing returns because it cannot address the fundamental problem with the world’s economy today; that problem is a global glut of savings that is either sitting idle or funding speculative bubbles.
In the wake of the 2008 financial crisis, many nations are depending on massive increases in the money supply to drive their economies. Since September 2012, the Federal Reserve System (the central bank of the USA) has been increasing its holdings of bonds by 85 billion dollars every month. Japan’s Prime Minister, Abe, campaigned on the promise to increase Japan’s money supply by however much it takes to change Japan’s deflation into two percent inflation. These policies have led to major shifts in international financial flows [3,4].
Using traditional statistical techniques to analyze the effectiveness of changes in the money supply is not possible because the increases in the money supply in the USA and Japan are unprecedented—never have these countries increased their money supplies by such large amounts. Furthermore, using traditional techniques to analyze the effectiveness of changes in the money supply would require the construction and justification of a worldwide macroeconomic model that included every force that can affect growth and all the ways that nations are interconnected through trade and financial flows. Such a model would have to capture all changes in relative prices (including exchange rates and interest rates), resource endowments, and technologies. Creating such a model is not possible.
Fortunately, Leightner [5], Leightner and Inoue [6], and Inoue, Lafaye de Micheaux, and Leightner [7] developed Reiterative Truncated Projected Least Squares (RTPLS), a technique that captures the influence of omitted variables without having to measure, model, or find proxies for them. Furthermore, RTPLS produces reduced form estimates without having to build macroeconomic models. RTPLS produces a separate slope estimate for every observation, which makes it possible to see how the estimated relationship is changing over time. This paper uses RTPLS to estimate the change in Gross Domestic Product (GDP) and in the Consumer Price Index (CPI) due to a change in the money supply (∂GDP/∂M-1 and ∂CPI/∂M-1) using quarterly data that extends to the fourth quarter of 2012 or the first quarter of 2013 for Japan, the United Kingdom, the USA, the European Union, Brazil, China, India, and the Russian Federation. I find that the effectiveness of monetary policy fell between the first quarter of 2003 and the fourth quarter of 2012 by 14%, 36%, 38%, 32%, 29%, and 69% for Japan, the UK, the USA, the Euro area, Brazil, and the Russian Federation, respectively. The only country examined that is enjoying a rising effectiveness of monetary policy is China. The remainder of the paper is organized as follows. Section 2 explains the analytical techniques used, Section 3 explains the data and the empirical results, and Section 4 provides a conclusion.
2. An Intuitive Explanation of the Analytical Technique Used
Omitting important variables from the analysis is regression analysis’ most serious problem. For example, suppose a researcher estimates Equation (1) while ignoring the fact that the slope of Equation (1) is actually a function of another variable, q (Equation (2)):

Y = α0 + α1X    (1)

∂Y/∂X = α1 + α2q    (2)

The researcher then gets a constant slope estimate when in truth the slope varies.
This paper models the omitted variables problem by substituting Equation (2) into Equation (1) to produce Equation (3):

Y = α0 + α1X + α2qX    (3)

I initially assume that all variation from the fitted line (error) is due to omitted variables. Inoue, Lafaye de Micheaux, and Leightner [7] show that, in this case, the “error” for the ith observation from estimating Equation (1) without considering Equation (2) is α2Xi[qi − E(q)], where E(q) is the mean value of q.
The standard approach to the omitted variables problem is to use instrumental variables. However, to correctly use instrumental variables, the researcher must find instruments that are highly correlated with the omitted variable and that are not related to the dependent variable except through their relationship with the omitted variable. If such variables are found, which is highly unlikely, the researcher must correctly model how the omitted variable affects the dependent variable and how the instrument is related to the omitted variable. All of these conditions are impossible to meet for a subject as complex as the relationship between the money supply and GDP and CPI. For recent papers that express concern over omitted variable bias see [8,9,10,11,12,13,14,15,16,17,18,19,20,21].
Fortunately, Branson and Lovell [22] explain that the observations at the top of any given data set are associated with the most favorable values of all omitted variables (in other words, the values of the omitted variables that lead to the largest values of the dependent variable, ceteris paribus). Building on this intuition, Leightner developed a new analytical technique named “Reiterative Truncated Projected Least Squares” (RTPLS) that solves the omitted variables problem of regression analysis without using instrumental variables and their unreasonable assumptions. Leightner created the second generation of the technique, RTPLS2, and Leightner and Inoue created the third, RTPLS3. Leightner and Inoue also produce an argument that RTPLS3 is unbiased. Inoue, Lafaye de Micheaux, and Leightner [7] introduce the fourth generation, RTPLS4. They use simulations to show that when the importance of the omitted variable is 10 times the size of measurement and round-off error, Ordinary Least Squares (OLS) produces 3.8 times the error of RTPLS4. When the importance of omitted variables is 100 times the size of measurement and round-off error, OLS produces more than 28 times the error of RTPLS4. Published studies that used RTPLS, RTPLS2, or RTPLS3 in applications include Leightner and Inoue [6,24,25,26,27,28] and Leightner [5,23,29,30,31,32,33,34,35,36].
The easiest way to explain RTPLS4 is to use a diagram like Figure 1. To construct Figure 1, I generated two series of random numbers, X and q, which ranged from 0 to 100. I then set the dependent variable Y as:

Y = 100 + 10X + 0.6qX    (4)
The true value for ∂Y/∂X equals 10 + 0.6q. Since q ranges from 0 to 100, the true slope will range from 10 (when q = 0) to 70 (when q = 100). Thus q makes a 700 percent difference to the slope. In Figure 1, I identified each point with that observation’s value for q. Notice that the upper edge of the data corresponds to relatively large qs—93, 97, 98, 98, 98, 98, 91, and 87. The lower edge of the data corresponds to relatively small qs—18, 1, 2, 2, and 7. This makes sense since as q increases so does Y, for any given X. For example, when X approximately equals 25, reading the values of q from top to bottom of Figure 1 produces 78, 61, 53, 36, 26, 17, 8, and 2. Thus the relative vertical position of each observation is directly related to the values of q. If, instead of adding 0.6qX in Equation (4), I had subtracted 0.6qX, then the smallest qs would be on the top and the largest qs on the bottom of Figure 1. Either way, the vertical position of observations captures the influence of q.
In Figure 1, the true value for ∂Y/∂X equals 10 + 0.6q; thus the slope, ∂Y/∂X, will be at its greatest numerical value along the upper edge of the data where q is largest and the slope will be at its smallest numerical value along the bottom edge of the data where q is smallest. The relative vertical position of each observation, for any given X, is directly related to the true slope.
Now imagine that we do not know what q is and that we have to omit it from our analysis. In this case, OLS produces the following estimated equation: Y = 135.76 + 40.645X with an R-squared of 0.574 and a standard error of the slope of 3.537. On the surface, this OLS regression looks successful, but it is not. Remember that the true equation is Y = 100 + 10X + 0.6qX. Since q ranges from 0 to 100, the true slope (true derivative) ranges from 10 to 70, yet OLS produced a constant slope of 40.645. OLS did the best it could, given its assumption of a constant slope; OLS produced a slope estimate of approximately 10 + 0.6E(q) = 10 + 0.6(52) = 41.2. However, OLS is hopelessly biased by its assumption of a constant slope when, in truth, the slope is varying.
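The OLS result above is easy to reproduce with a short simulation. The following is a minimal sketch (the random seed, sample size, and use of numpy.polyfit are my choices for illustration, not the paper’s):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 500
X = rng.uniform(0, 100, n)          # observed explanatory variable
q = rng.uniform(0, 100, n)          # omitted variable, unknown to the researcher
Y = 100 + 10 * X + 0.6 * q * X      # Equation (4): the true slope is 10 + 0.6q

# OLS that omits q is forced to report one constant slope.
slope, intercept = np.polyfit(X, Y, 1)
print(f"OLS slope: {slope:.1f}")    # typically close to 10 + 0.6*E(q) = 40
```

The single OLS slope sits near the slope for the average omitted-variable value, while the true slopes range from 10 to 70.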
Although OLS is hopelessly biased when there are omitted variables that interact with the included variables, Figure 1 provides us with a very important insight—even when we do not know what the omitted variables are, even when we have no clue how to model the omitted variables or measure them, and even when there are no proxies for the omitted variables, Figure 1 shows us that the relative vertical position of each observation contains information about the combined influence of all omitted variables on the true slope. RTPLS4 exploits this insight.
RTPLS4 draws a frontier around the top data points in Figure 1. It then projects all the data vertically up to this frontier. This projection eliminates the influence of unfavorable omitted variables. In other words, by projecting the data to the frontier, all the data would correspond to the largest values for q. However, some observations may be projected onto an upper right-hand horizontal section of the frontier. For example, the observation labeled 80 that is closest to the upper right-hand corner of Figure 1 would be projected onto such a horizontal section. This horizontal section does not show the true relationship between X and Y, and it needs to be eliminated (truncated) before a second-stage regression is run through the projected data. This second-stage regression (OLS) finds a truncated projected least squares (TPLS) slope estimate for when q is at its most favorable level, and this TPLS slope estimate is then appended to the data for the observations that determined the frontier.
The observations that determined the frontier are then eliminated and the procedure repeated. This process is reiterated, peeling the data down from the top to the bottom. The first iteration finds a TPLS slope estimate for when the omitted variables cause Y to be at its highest level, ceteris paribus. The second iteration finds a TPLS slope estimate for when the omitted variables cause Y to be at its second highest level, etc. This process is stopped when an additional regression would use fewer than ten observations (the remaining observations will be located at the bottom of the data). It is important to realize that the omitted variable, q, in this process will represent the combined influence of all forces that are omitted from the analysis. For example, if there are 1000 forces that are omitted where 600 of them are positively related to Y and 400 are negatively related to Y, then the first iteration will capture the effect of the 600 variables being at their largest possible levels and the 400 being at their lowest possible levels.

Just as the entire data set can be peeled down from the top, the entire data set also can be peeled up from the bottom. Peeling up from the bottom would involve projecting the original data downward to the lower boundary of the data, truncating off any lower left-hand horizontal region, running an OLS regression through the truncated projected data to find a TPLS estimate for the observations that determined the lower boundary of the data, eliminating those observations that determined the lower boundary, and then reiterating this process until there are fewer than 10 observations left at the top of the data. By peeling the data from both the top to the bottom and from the bottom to the top, the observations at both the top and the bottom of the data will have an influence on the results.
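One downward peel of this procedure can be sketched as follows, assuming numpy. This is a simplified illustration rather than the published RTPLS4 code: here the frontier is taken to be the upper convex hull of the data, a “horizontal” section is any right-hand frontier segment with slope at or below a small tolerance, and the X values are assumed distinct:

```python
import numpy as np

def upper_hull(x, y):
    """Vertices of the upper boundary of the data (monotone-chain upper hull)."""
    order = np.argsort(x)
    hull = []
    for px, py in zip(x[order], y[order]):
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = hull[-2], hull[-1]
            # drop the middle vertex if it lies on or below the chord
            if (x2 - x1) * (py - y1) - (y2 - y1) * (px - x1) >= 0:
                hull.pop()
            else:
                break
        hull.append((px, py))
    return np.array(hull)

def tpls_iteration(x, y, flat_tol=1e-9):
    """One downward peel: project the data vertically up to the frontier,
    truncate any flat upper-right section, and run OLS through what is left."""
    hull = upper_hull(x, y)
    hx, hy = hull[:, 0], hull[:, 1]
    y_proj = np.interp(x, hx, hy)              # vertical projection to the frontier
    # walk the frontier from the right, truncating (near-)horizontal segments
    slopes = np.diff(hy) / np.diff(hx)
    cut = hx[-1]
    for s, left_x in zip(slopes[::-1], hx[-2::-1]):
        if s > flat_tol:
            break
        cut = left_x
    keep = x <= cut
    tpls_slope = np.polyfit(x[keep], y_proj[keep], 1)[0]
    on_frontier = np.isclose(y, y_proj)        # the points that set the frontier
    return tpls_slope, on_frontier
```

A full peel-down would call `tpls_iteration`, record the TPLS slope for the `on_frontier` observations, delete those observations, and repeat until fewer than ten remain; peeling up from the bottom mirrors the same steps on the lower boundary.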
Of course, some of the observations in the middle of the data will have two TPLS estimated slopes associated with them—one from peeling the data downward and the other from peeling the data upward.
Once the entire data set has been peeled from the top and bottom, all the resulting TPLS estimates are stacked on top of each other. These TPLS estimates minus Y/X are then made the dependent variable in a final regression in which 1/X is the explanatory variable (see Equation (10) below). The form of this final regression follows from Equation (3). Dividing Y = α0 + α1X + α2qX through by X gives Y/X = α0(1/X) + α1 + α2q, so the true slope for the ith observation is

α1 + α2qi = Yi/Xi − α0(1/Xi)    (9)

Since each TPLS estimate approximates the slope in Equation (9), subtracting Yi/Xi yields the final regression

TPLSi − Yi/Xi = −α0(1/Xi) + errori    (10)
The α0 estimated in this final regression of Equation (10) and the data for Y/X and X are plugged into Equation (9) to produce a separate RTPLS slope estimate for each observation. Alternatively, Generalized Least Squares (GLS) could be used to estimate α0 from Equation (1), and the resulting α0, along with data on Y/X and X, could be plugged into Equation (9); Inoue, Lafaye de Micheaux, and Leightner [7] name this alternative approach “Variable Slope Generalized Least Squares” (VSGLS). Theoretically, VSGLS produces the best linear unbiased estimate (BLUE) for α0; however, simulations show that VSGLS produces between two and three times the error of RTPLS4 when sample sizes of 250 observations are used and all error is due to omitted variables. RTPLS4 is better than BLUE probably because it does not assume a linear relationship [7].
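The final step reduces to a few lines under the derivation just described: since Y = α0 + (slope)X implies slope = Y/X − α0/X, regressing (TPLS − Y/X) on 1/X recovers α0, and Equation (9) then yields one slope per observation. A minimal sketch (the no-intercept regression form is my simplification; names are illustrative):

```python
import numpy as np

def rtpls_slopes(x, y, tpls):
    """Estimate alpha0 from the stacked TPLS slope estimates, then return
    a separate slope estimate for every observation via Equation (9).
    Final regression, Equation (10): (TPLS - Y/X) on 1/X estimates -alpha0."""
    z = tpls - y / x
    w = 1.0 / x
    coef = np.sum(w * z) / np.sum(w * w)   # no-intercept least squares
    alpha0_hat = -coef
    return y / x - alpha0_hat / x          # one RTPLS slope per observation
```

If the stacked TPLS estimates were exactly the true slopes, this recovers α0 and every observation’s slope without error; in practice the regression averages out the remaining random variation.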
To better understand the role of the final regression in the RTPLS4 procedure, consider Figure 1 again. If all the observations on the upper frontier had been associated with exactly the same omitted variable values (perhaps 98), then the resulting TPLS estimate would perfectly fit all of the observations it was associated with. However, Figure 1 shows that the observations on the upper frontier were associated with omitted variable values of 93, 97, 98, 98, 98, 98, 91, and 87. The resulting TPLS slope estimate would perfectly fit a q value of approximately 95 (the mean of 93, 97, 98, 98, 98, 98, 91, and 87). When a TPLS estimate for a q of 95 is associated with qs of 93, 97, 98, 98, 98, 98, 91, and 87, some random variation (both positive and negative variation) remains. By stacking the results from all iterations when peeling down and up, and then conducting this final regression, this random variation is eliminated. Realize that Y is co-determined by X and q. Thus the combination of X and Y should contain information about q. This final regression exploits this insight in order to better capture the influence of q.
RTPLS4 generates reduced form estimates that include all the ways that X and Y are correlated. Thus, even when many variables interact via a system of equations, a researcher using RTPLS4 does not have to discover and justify that system of equations. In contrast, traditional regression analysis theoretically must include all relevant variables in the estimation, and the resulting slope estimate for ∂Y/∂X captures the effects of just X, holding all other variables constant. RTPLS4’s reduced form estimates are not substitutes for traditional regression analysis’ partial derivative estimates. Instead, RTPLS4 and traditional regression estimates are complements that capture different types of information. RTPLS4 has the disadvantage of not being able to tell the researcher the mechanism by which X affects Y. On the other hand, RTPLS4 has the significant advantage of not having to model and find data for all the forces that can affect Y in order to estimate ∂Y/∂X. Both RTPLS4 and traditional regression techniques find “correlations.” It is impossible for either one of them to prove “causation.”
Traditional regression techniques often try to remove the influence of omitted variables that are related to time by adding trend terms, first differencing the data, and/or looking for structural breaks. RTPLS4 does not need to employ any of these techniques because it, unlike traditional regression techniques, is not trying to “hold everything else constant.” Instead, RTPLS4 attempts to capture all the ways that Y and X are related, both directly and through omitted variables. De-trending the data (or first differencing the data) would remove part of the influence of omitted variables that RTPLS4 attempts to capture, which would make RTPLS4 less accurate. Granted, if a structural break occurs at the very beginning or end of a data set, so that there is not enough data to form a separate frontier on one side of that structural break, then RTPLS4 will not be able to detect that structural break, but neither could traditional regression techniques in that case. Inoue, Lafaye de Micheaux, and Leightner [7] believe that RTPLS4 outperforms variable slope generalized least squares specifically because it does not assume a linear relationship between the omitted variables and the true slope. Indeed, Leightner and Inoue [6] tested several ways that q can interact with X, including squares, cubes, and exponentials (they even tested what happens when q changes a positive ∂Y/∂X into a negative one). They found that the first generation of RTPLS outperformed OLS except when omitted variables make a 1000% difference to an exponent (a case where both OLS and RTPLS did very poorly); however, when omitted variables make only a 10% or 100% difference to an exponent, RTPLS noticeably outperformed OLS.
3. The Data and the Empirical Results
Except where otherwise noted, all the data for this paper came from the OECD statistical website. The GDP data was in millions of nominal US dollars and was seasonally adjusted. Because the OECD website did not contain quarterly GDP data for China, I downloaded quarterly GDP data in 100 million nominal Chinese yuan from the National Bureau of Statistics of China. I then converted China’s GDP in 100 million yuan into millions of nominal US dollars by dividing it by the quarterly exchange rate (yuan/dollars) and then multiplying by 100. The quarterly exchange rate was obtained from the OECD statistical website. The Chinese GDP data was not seasonally adjusted. For the GDP and money supply data, I obtained the maximum number of quarterly observations possible starting in the first quarter of 1960. The money supply data used was an index for M-1 (coin plus currency plus demand deposits) where 2005 was set equal to 100. This index for M-1 was carried to the fourth decimal place. The maximum number of observations for the Consumer Price Index (CPI) starting in the first quarter of 1980 was also used, and this index was carried to the third decimal place. I did not include CPI data from the 1970s and earlier because the Organization of Petroleum Exporting Countries’ (OPEC) manipulation of oil prices in the 1970s greatly affected inflation. ∂GDP/∂M-1 and ∂CPI/∂M-1 were estimated with data pooled for Brazil, China, and the Russian Federation because there was insufficient data to run a separate analysis for each of these countries. For all other cases, the analysis was conducted for each individual country separately.
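The yuan-to-dollar conversion described above is, as a sketch (function name illustrative):

```python
def yuan_gdp_to_million_usd(gdp_in_100m_yuan, yuan_per_usd):
    """Convert GDP measured in units of 100 million yuan to millions of USD.
    One unit = 1e8 yuan = (1e8 / rate) USD = (100 / rate) million USD,
    i.e., divide by the yuan/dollar exchange rate and multiply by 100."""
    return gdp_in_100m_yuan / yuan_per_usd * 100

# e.g., 500 units (50 billion yuan) at 8 yuan per dollar:
print(yuan_gdp_to_million_usd(500, 8.0))  # 6250.0 million US dollars
```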
I chose to do the analysis using M-1 because M-1 is the quarterly data series available through the OECD that is most directly controlled by the central bank. Although quarterly values for the monetary base, or the discount rate, or central bank loans to commercial banks, or open market operations may be better measures of what the central bank directly controls, these data sets are not available through OECD. Furthermore, I checked for alternative sources of data, like the IMF, and found that quarterly data on M-1 is the best measure of monetary policy available for the most quarters and the most countries. Thus, using M-1 made it possible to do the analysis for the UK, Euro 17, Brazil, China, and Russia, instead of just focusing on the USA and Japan.
Finally, if ∂GDP/∂M-1 and ∂CPI/∂M-1 are falling, then it is likely that the change in GDP and CPI due to a change in the monetary base or the discount rate is falling even more, because these tools that central banks directly control are even more removed from production (GDP) and inflation (CPI) than M-1 is. Leightner found that the percentage change in Japan’s CPI due to a one percent increase in M2 + CD fell by 55% between October 1997 and January 2003. However, Japan’s central bank’s ability to fight deflation by changing the monetary base must have fallen by even more than 55%, because Leightner and Inoue found that ∂(M2 + CD)/∂(Monetary Base) fell by 12% in Japan between December 1993 and April 2004.
The empirical results for ∂GDP/∂M-1 and ∂CPI/∂M-1 are given in the seven left-hand columns and seven right-hand columns of Table 1, respectively, for the first quarter of 1997 through the first quarter of 2013.
No estimates were made for ∂GDP/∂M-1 for India and for ∂CPI/∂M-1 for the Euro area because quarterly GDP data and quarterly CPI data could not be found for India and the Euro area respectively. Figure 2, Figure 3, Figure 4 and Figure 5 depict all the empirical results. The number 65,605 in column 1 and the 0.435 in column 8 of the first row of Table 1 imply that a one unit increase in the M-1 index for Japan in the first quarter of 1997 was correlated with an increase in GDP of 65,605 million US dollars and an increase in the CPI index of 0.435 respectively.
RTPLS4 found a negative value for ∂GDP/∂M-1 for Japan between the first quarters of 1960 and 1963 and for China between the first quarters of 1999 and 2008. RTPLS4 also found a negative value for ∂CPI/∂M-1 for Japan between the first quarter of 1980 and third quarter of 1981, for the United Kingdom between the fourth quarter of 1986 and first quarter of 1989, for India between the first quarter of 1980 and third quarter of 1981, and for the Russian Federation between the third quarters of 1995 and 1998. There are two possible explanations for these negative estimates. First, RTPLS4 is relatively less accurate for observations with the smallest values of the explanatory variable. Consider Figure 1 again. Since RTPLS4 uses the relative vertical distance between observations to capture the influence of omitted variables, when X is relatively small, there is relatively less vertical distance and thus RTPLS4 is relatively less accurate. All of the negative values for ∂GDP/∂M-1 and for ∂CPI/∂M-1 were at the beginning of the data sets (the earliest dates) when the index for M-1 was relatively small. A second explanation is that central banks, in order to “cool” the economy and reduce the risk of inflation, reduce the increase in M-1 during time periods of relatively rapid increases in GDP and CPI. This central bank response would result in a negative correlation between M-1 and GDP and between M-1 and the CPI especially if monetary policy is unsuccessful. In contrast, a positive correlation between M-1 and GDP and between M-1 and the CPI is found if the central bank’s changes in the money supply produce the results that the central bank desires.
Figure 2 shows that ∂GDP/∂M-1 fell by 47.7% for Japan between the first quarter of 1970 and the second quarter of 1980, held relatively constant from 1980 to 1990, and then fell by 67.9% between the third quarter of 1990 and the first quarter of 2013. Table 1 and Figure 2 show that ∂GDP/∂M-1 fell by 42.0% for the USA between the fourth quarter of 2007 and the first quarter of 2013, fell by 33.3% for the Euro area between the first quarter of 2008 and the first quarter of 2013, and fell by 40.9% for the UK between the fourth quarter of 1999 and the first quarter of 2013. Figure 3 shows that ∂GDP/∂M-1 fell by 72.5% for Brazil between the first quarters of 1996 and 2013, and that it fell by 69.4% for Russia between the first quarter of 2003 and the fourth quarter of 2012. Only China seems to be enjoying an increasing effectiveness of monetary policy (∂GDP/∂M-1 is rising).
Using monthly data on Japan’s CPI and M2 + CD, Leightner showed that the percentage change in the CPI due to a one percent increase in M2 + CD fell by 55% between October 1997 and January 2003. Figure 4 of this paper shows that ∂CPI/∂M-1 fell by 73.8% between the third quarter of 1993 and the first quarter of 2013. This is bad news for Japan’s new prime minister and new governor of the Bank of Japan, who hope to change Japan’s current deflation into two percent inflation “as soon as is humanly possible” via massive increases in Japan’s money supply, in conjunction with other government initiatives. Table 1 and Figure 4 show that ∂CPI/∂M-1 fell by 38.4% between the second quarter of 2008 and the first quarter of 2013 for the USA. For the United Kingdom, ∂CPI/∂M-1 fell by 54.6% between the second quarter of 1995 and the first quarter of 2008 and then rose by 24.9% between the first quarters of 2008 and 2013; however, the net fall between the second quarter of 1995 and the first quarter of 2013 was 43.3%. Table 1 and Figure 5 show that ∂CPI/∂M-1 fell between the fourth quarters of 1999 and 2012 by 45.7%, 78.9%, 72%, and 51.5% for Brazil, China, Russia, and India, respectively.
The effectiveness of increasing the money supply in order to increase GDP has been falling by substantial amounts for Japan, the USA, the UK, the Euro area, Brazil, and Russia. This decline started in 2008 for the USA and the Euro area but much earlier for other countries. Furthermore, the effect of the money supply on the consumer price index has been falling since 1999 (or earlier) for Japan, the UK, Brazil, China, India, and Russia and since 2008 for the USA. I believe that the effectiveness of monetary policy is declining because it is facing diminishing returns due to its inability to address the fundamental problem in the world’s economy today—the global glut of savings [37,38,39,40,41,42,43]. Two conditions are needed for investment. First there must be savings to invest; however, equally important, there also must be a reason to invest. Investors must believe that they will be able to sell what the investment produces. If consumption is insufficient so that investors will not be able to sell what the investment produces, then savings either sits idle or funds speculative bubbles. No matter how much the money supply is increased and the interest rate driven down to zero (or below zero), if investors do not believe they can sell what the investment produces, then they will not invest. Monetary policy cannot fix this problem.
The empirical results of this paper and this conclusion are consistent with John Maynard Keynes’ explanation of a liquidity trap in The General Theory of Employment, Interest and Money. However, the mathematical models that dominate macroeconomics today, including the one that bears Keynes’ name, do not include the possibility of a glut of savings. The only major difference between the mathematical Classical model and the mathematical Keynesian model is that the Classical model assumes full employment in the labor market whereas the Keynesian model does not. This one difference results in the government spending (∂GDP/∂G) and money supply (∂GDP/∂M-1) multipliers being zero in the mathematical Classical model, but greater than one in the mathematical Keynesian model. Likewise, I believe that including the possibility of a glut of savings in our mathematical models would fundamentally change their conclusions. When policy makers and economists focus solely on our current mathematical models, we are blind to the fundamental problem that is destroying the effectiveness of both fiscal and monetary policy today, and that problem is a global glut of savings. The solution to this problem is a redistribution of income from savers to consumers, which would increase the marginal propensity to consume, increase aggregate demand, and increase the effectiveness of both fiscal and monetary policy. John Maynard Keynes would agree.
Acknowledgments

I appreciate the research help given by Yumiko Deevey.
Conflicts of Interest
I declare no conflict of interest.
- Mead, J.; Hilsenrath, J. Banks Rush to Ease Supply of Money. Wall Street Journal, 14 May 2013, A7. [Google Scholar]
- Dvorak, P.; Warnock, E. Suffering Japan Rolls Dice on New Era of Easy Money. Wall Street Journal, 21 March 2013, A1–A12. [Google Scholar]
- Frangos, A. Asia Wrestles with a Flood of Cash. Wall Street Journal, 9 May 2013, C1–C4. [Google Scholar]
- McCarthy, E.; Natarajan, P.; Inagaki, K. Japan Triggers a Shift to Emerging Markets. Wall Street Journal, 15 April 2013, C1–C2. [Google Scholar]
- Leightner, J.E. The Changing Effectiveness of Key Policy Tools in Thailand; Institute of Southeast Asian Studies for East Asian Development Network: Singapore, 2002. [Google Scholar]
- Leightner, J.E.; Inoue, T. Tackling the omitted variables problem without the strong assumptions of proxies. Eur. J. Oper. Res. 2007, 178, 819–840. [Google Scholar] [CrossRef]
- Inoue, T.; Lafaye de Micheaux, P.; Leightner, J.E. Several related solutions to the omitted variables problem. Unpublished work, 2013. [Google Scholar]
- Abbott, J.K.; Klaiber, H.A. An embarrassment of riches: Confronting omitted variable bias and multiscale capitalization in hedonic price models. Rev. Econ. Stat. 2011, 93, 1331–1342. [Google Scholar] [CrossRef]
- Angrist, J.D.; Krueger, A.B. Instrumental variables and the search for identification: From supply and demand to natural experiments. J. Econ. Perspect. 2001, 15, 69–85. [Google Scholar] [CrossRef]
- Black, S.E.; Lynch, L.M. How to compete: The impact of workplace practices and information technology on productivity. Rev. Econ. Stat. 2001, 83, 434–445. [Google Scholar] [CrossRef]
- Botosan, C.A.; Plumlee, M.A. A re-examination of disclosure level and the expected cost of equity capital. J. Account. Res. 2002, 40, 21–40. [Google Scholar]
- Cellini, S.R. Causal inference and omitted variable bias in financial aid research: Assessing solutions. Rev. High. Educ. 2008, 31, 329–354. [Google Scholar] [CrossRef]
- DiPrete, T.A.; Gangl, M. Assessing Bias in the Estimation of Causal Effects: Rosenbaum Bounds on Matching Estimators and Instrumental Estimation with Imperfect Instruments. Available online: http://www.wjh.harvard.edu/~cwinship/cfa_papers/HBprop_021204.pdf (accessed on 20 March 2013).
- Harris, A.L.; Robinson, K. Schooling behaviors or prior skills? A cautionary tale of omitted variable bias within oppositional culture theory. Sociol. Educ. 2007, 80, 139–157. [Google Scholar] [CrossRef]
- Mustard, D.B. Reexamining criminal behavior: The importance of omitted variable bias. Rev. Econ. Stat. 2003, 85, 205–211. [Google Scholar] [CrossRef]
- Pace, R.K.; LeSage, J.P. Omitted Variable Biases of OLS and Spatial Lag Models. In Progress in Spatial Analysis: Advances in Spatial Science; Springer-Verlag Berlin Heidelberg: Berlin, Germany, 2010. [Google Scholar]
- Paterson, R.W.; Boyle, K.J. Out of sight, out of mind? Using GIS to incorporate visibility in hedonic property value models. Land Econ. 2002, 78, 417–425. [Google Scholar] [CrossRef]
- Scheffler, R.M.; Brown, T.T.; Rice, J.K. The role of social capital in reducing non-specific psychological distress: The importance of controlling for omitted variable bias. Soc. Sci. Med. 2007, 65, 842–854. [Google Scholar] [CrossRef]
- Sessions, D.N.; Stevans, L.K. Investigating omitted variable bias in regression parameter estimation: A genetic algorithm approach. Comput. Stat. Data Anal. 2006, 50, 2835–2854. [Google Scholar] [CrossRef]
- Stearns, S.C.; Norton, E.C. Time to include time to death? The future of health care expenditure predictions. Health Econ. 2004, 13, 315–327. [Google Scholar] [CrossRef]
- Swamy, P.A.V.B.; Chang, I.L.; Mehta, J.S.; Tavlas, G.S. Correcting for omitted-variable and measurement-error bias in autoregressive model estimation with panel data. Comput. Econ. 2003, 22, 225–253. [Google Scholar] [CrossRef]
- Branson, J.; Knox Lovell, C.A. Taxation and Economic Growth in New Zealand. In Taxation and the Limits of Government; Scully, G.W., Caragata, P.J., Eds.; Kluwer Academic: Boston, MA, USA, 2000; pp. 37–88. [Google Scholar]
- Leightner, J.E. Omitted variables and how the Chinese yuan affects other Asian currencies. Int. J. Contemp. Math. Sci. 2008, 3, 645–666. [Google Scholar]
- Leightner, J.E.; Inoue, T. Solving the omitted variables problem of regression analysis using the relative vertical position of observations. Adv. Decis. Sci. 2012, 2012, 728980. Available online: http://www.hindawi.com/journals/ads/2012/728980/ (accessed on 11 March 2013). [Google Scholar]
- Leightner, J.E.; Inoue, T. Is China replacing the USA as an engine for global growth? Int. Econ. Financ. J. 2012, 7, 55–77. [Google Scholar]
- Leightner, J.E.; Inoue, T. Negative fiscal multipliers exceed positive multipliers during Japanese deflation. Appl. Econ. Lett. 2009, 16, 1523–1527. [Google Scholar] [CrossRef]
- Leightner, J.E.; Inoue, T. Capturing climate’s effect on pollution abatement with an improved solution to the omitted variables problem. Eur. J. Oper. Res. 2008, 191, 539–556. [Google Scholar]
- Leightner, J.E.; Inoue, T. The effect of the Chinese yuan on other Asian currencies during the 1997–1998 Asian crisis. Int. J. Econ. Issues 2008, 1, 11–24. [Google Scholar]
- Leightner, J.E. Chinese Overtrading. In Two Asias: The Emerging Postcrisis Divide; Rosefielde, S., Kuboniwa, M., Mizobata, S., Eds.; World Scientific Publishers: Singapore, 2011. [Google Scholar]
- Leightner, J.E. Fiscal stimulus for the USA in the current financial crisis: What does 1930–2008 tell us? Appl. Econ. Lett. 2011, 18, 539–549. [Google Scholar] [CrossRef]
- Leightner, J.E. Are the forces that cause China’s trade surplus with the USA good? J. Chin. Econ. Foreign Trade Stud. 2010, 3, 43–53. [Google Scholar] [CrossRef]
- Leightner, J.E. China’s fiscal stimulus package for the current international crisis, what does 1996–2006 tell us? Front. Econ. China 2010, 5, 1–24. [Google Scholar] [CrossRef]
- Leightner, J.E. How China’s holdings of foreign reserves affect the value of the US dollar in Europe and Asia. China World Econ. 2010, 18, 24–39. [Google Scholar]
- Leightner, J.E. Omitted variables, confidence intervals, and the productivity of exchange rates. Pac. Econ. Rev. 2007, 12, 15–45. [Google Scholar] [CrossRef]
- Leightner, J.E. Fight deflation with deflation, not with monetary policy. Jpn. Econ. Transl. Stud. 2005, 33, 67–93. [Google Scholar]
- Leightner, J.E. The productivity of government spending in Asia: 1983–2000. J. Product. Anal. 2005, 23, 33–46. [Google Scholar] [CrossRef]
- Casselman, B. Cautious Companies Stockpile Cash. Wall Street Journal, 7 December 2012, A2. [Google Scholar]
- Chasan, E. Lots of Cash, Few Alternatives: Company Funds Stay Put at Banks Even after Unlimited Deposit Insurance Expires. Wall Street Journal, 23 April 2013, B6. [Google Scholar]
- Fidler, S. Firms’ Cash Hoarding Stunts Europe. Wall Street Journal, 24–25 March 2012, A10. [Google Scholar]
- Gara, T. S&P: U.S. Companies Underinvest by Billions. Wall Street Journal, 12 December 2012, B6. [Google Scholar]
- Hilsenrath, J.; Simon, R. Penitent Debtors Hobble Recovery in U.S. Wall Street Journal, 22–23 October 2011, A1–A14. [Google Scholar]
- Monga, V. Companies’ $2 Trillion Conundrum. Wall Street Journal, 5 October 2011, B5. [Google Scholar]
- Sidel, R. Wads of Cash Squeeze Bank Margins. Wall Street Journal, 11 January 2013, C1. [Google Scholar]
- Keynes, J.M. The General Theory of Employment, Interest and Money; Macmillan: London, UK, 2007. [Google Scholar]
© 2013 by the authors; licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/3.0/).