Originally published on January 29, 2012
Hello again, readers of the IIEP blog! Today I’d like to share with you a post I co-authored with Kathryn Vesey, Research Associate at the GW Regulatory Studies Center (where I am also affiliated) and a Master of Public Policy candidate at The Trachtenberg School of Public Policy and Public Administration. She and I have been working on a project examining the impact of regulatory agency spending on the macroeconomy.
With the U.S. unemployment rate still painfully high at 8.5%, politicians in Washington and on the campaign trail continue to debate what steps government should take to help put Americans back to work. One popular argument in that vein has been that government regulation is the enemy of job creation, a claim that may be driven more by rhetorical salience than by evidence. On this subject, a recent article in the Washington Post reports, “Economists who have studied the matter say that there is little evidence that regulations cause massive job loss in the economy, and that rolling them back would not lead to a boom in job creation.”
A recent empirical study conducted by the Phoenix Center for Advanced Legal and Economic Policy Studies tells a different story, however. Using the Regulators’ Budget data as a proxy measure of regulation over time, the study estimates that reducing the total budget of all U.S. federal regulatory agencies by just 5% (or $2.8 billion) would result in an increase in real private-sector GDP of $75 billion annually, as well as 1.2 million more private sector jobs each year. The authors put it another way as well, claiming that eliminating one regulatory agency staff position creates 98 jobs in the private sector.
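For a sense of how these headline numbers fit together, here is a quick back-of-the-envelope check. The dollar and job figures are the study’s; the implied staff reduction and cost per staff position are simple arithmetic we derive from them, not numbers the study reports directly:

```python
# Back-of-the-envelope check on the Phoenix Center's headline figures.
# The budget cut, jobs gained, and jobs-per-staffer numbers come from the
# study; the implied staff reduction and cost per staff position are
# derived here, not reported there.

budget_cut = 2.8e9        # 5% cut to the Regulators' Budget, in dollars
jobs_created = 1.2e6      # claimed annual gain in private sector jobs
jobs_per_staffer = 98     # claimed private sector jobs per staffer eliminated

implied_staff_cut = jobs_created / jobs_per_staffer        # roughly 12,200 positions
implied_cost_per_staffer = budget_cut / implied_staff_cut  # roughly $229,000 per position

print(f"Implied reduction in regulatory staff: {implied_staff_cut:,.0f}")
print(f"Implied budget cost per staff position: ${implied_cost_per_staffer:,.0f}")
```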
This study’s provocative findings have been widely cited by politicians, advocacy organizations, and the media as evidence that cutting regulation will create jobs and grow the economy, even making an appearance in an official congressional report on the Regulatory Flexibility Improvements Act (U.S. House of Representatives 2011). But are these surprisingly large figures really definitive?
We set out to answer that question, first by attempting to replicate the study, following the authors’ steps exactly. We found that when we use the same data and an identical model specification, we do in fact arrive at the same dramatic results. However, we also find that those results are extremely sensitive to small alterations in the specification.
For example, the Phoenix Center uses the Regulators’ Budget as a share of private GDP as its measure of regulatory activity, and then uses a Generalized Impulse Response Function (GIRF) to simulate a “regulatory shock” and observe how the other two variables in the model (private GDP per capita and private sector jobs) respond. We tried a number of small changes to this specification, for example replacing the Regulators’ Budget as a share of private GDP with the Regulators’ Budget measured in billions of 2005 dollars – a specification we argue is more appropriate because it does not require holding private GDP constant in one variable while allowing it to vary in another.
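To make the mechanics concrete, here is a minimal sketch of how one might estimate a small three-variable VAR of this kind and trace out impulse responses in Python using statsmodels. The data file and column names are hypothetical placeholders, and statsmodels’ built-in impulse responses use a Cholesky (orthogonalized) decomposition rather than the generalized impulse responses the Phoenix Center reports, so this illustrates the general approach rather than reproducing their exact method:

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

# Hypothetical annual series; the file and column names are placeholders.
# 'reg_budget' is the Regulators' Budget in billions of 2005 dollars,
# 'gdp' is real private GDP per capita, 'jobs' is private sector employment.
data = pd.read_csv("regulators_budget.csv", index_col="year")

# Work in log differences so the responses read as approximate percentage changes.
growth = np.log(data[["reg_budget", "gdp", "jobs"]]).diff().dropna()

# Fit a small VAR, letting an information criterion pick the lag length.
model = VAR(growth)
results = model.fit(maxlags=4, ic="aic")

# Trace the responses of GDP and jobs to a shock in the budget series over a
# ten-year horizon. orth=True uses a Cholesky decomposition, whereas the
# Phoenix Center study reports generalized impulse responses.
irf = results.irf(10)
irf.plot(orth=True, impulse="reg_budget")
```

The confidence bands that statsmodels draws around these impulse responses are also what lets one judge whether an estimated response is statistically distinguishable from zero, which is the question at issue in the comparison below.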
This alternative model leads to very different results. The Phoenix Center study found a statistically significant, negative relationship between the Regulators’ Budget and private sector GDP and employment. Our preferred model, by contrast, finds a positive relationship: a 5% increase in the Regulators’ Budget is associated with a 0.28% increase in private sector employment and a 0.14% increase in real private GDP. These results, however, are not statistically significant and are therefore indistinguishable from no effect.
This lack of statistical significance is not surprising, for several reasons. First, a macroeconomic model with just three variables is bound to leave much of the variation in private sector employment and private sector GDP unaccounted for. Second, it is quite likely that different types of regulatory agency spending have different effects on the economy, and that these effects vary over time. And third, as with more commonly used proxies for regulatory activity, such as the number of pages in the Federal Register or the Code of Federal Regulations, the Regulators’ Budget is a blunt measure of regulation. The taxpayer costs of staffing and running federal regulatory agencies (which is what the Regulators’ Budget measures) may not correlate well with the societal impacts of the regulations those agencies issue.
Of particular concern when using the Regulators’ Budget as a proxy for regulation is one substantial outlier in the budget – Homeland Security spending (mostly attributable to the Transportation Security Administration (TSA)). As the graph below illustrates, this area of regulation has been by far the largest driver of growth in regulatory agency spending over the past decade, since the Department of Homeland Security was created in the aftermath of September 11th. This outlier may well be partly responsible for the model’s lack of stability.
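One simple way to probe this concern – sketched below, again with hypothetical file and column names – is to subtract the Homeland Security component from the total Regulators’ Budget, re-estimate the same small VAR, and compare the resulting impulse responses with the baseline:

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

# Hypothetical columns: 'reg_budget' is the total Regulators' Budget and
# 'dhs_budget' is the Homeland Security component, both in 2005 dollars.
data = pd.read_csv("regulators_budget.csv", index_col="year")
data["reg_budget_ex_dhs"] = data["reg_budget"] - data["dhs_budget"]

def fit_var(budget_col):
    """Fit the three-variable VAR in log differences for a given budget series."""
    growth = np.log(data[[budget_col, "gdp", "jobs"]]).diff().dropna()
    return VAR(growth).fit(maxlags=4, ic="aic")

baseline = fit_var("reg_budget")
ex_dhs = fit_var("reg_budget_ex_dhs")

# If the estimates hinge on the post-2001 surge in Homeland Security spending,
# the two sets of impulse responses should look noticeably different.
baseline.irf(10).plot(orth=True, impulse="reg_budget")
ex_dhs.irf(10).plot(orth=True, impulse="reg_budget_ex_dhs")
```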
It is also important to recognize the limitations of vector autoregressive analysis in general, and of its application in the Phoenix Center study in particular. Vector autoregressive models have been commonly used in macroeconomic analysis for 25 years. They can be very useful for describing data, and often for forecasting as well. However, as Stock and Watson discuss in a 2001 paper, “small VARs of two or three variables” – such as the one used in the Phoenix Center study – “are often unstable and thus poor predictors of the future.” Moreover, drawing structural inferences from VARs is difficult and requires weighty assumptions.
Regulations have significant economic and social costs and benefits, as well as important distributional effects. The recent increase in awareness of this reality among citizens and politicians has the potential to effect positive changes to the U.S. regulatory system, making it smarter, more transparent, and more accountable. To keep the conversation constructive, however, the evidence drawn upon in the public discourse about regulation must be meaningful and well-informed. Our ongoing analysis seeks to contribute exactly that.