The evolution of EU legislation (graphed with ggplot2 and R)

Over the last half century the European Union has adopted more than 100 000 pieces of legislation. In this presentation I look into the patterns of legislative adoption over time. I have tried to create clear and engaging graphs that provide some insight into the evolution of law-making activity: not an easy task, given the byzantine nature of policy making in the EU and the complex nomenclature of the possible types of legal acts.

The main plot showing the number of adopted directives, regulations and decisions since 1967 is pasted below. There is much more in the presentation. The time series data is available here, as well as the R script used to generate the plots (using ggplot2). Some of the graphs are also available as interactive visualizations via ManyEyes here, here, and here (requires Java). Enjoy.

EU laws over time
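The real work is done by the linked R script and ggplot2. Purely as an illustration of the data preparation behind such a plot, here is a minimal Python sketch that tallies adopted acts per year and type; the records are made up for the example, standing in for the EUR-Lex data:

```python
from collections import Counter

# Hypothetical records of (year, act_type) pairs -- stand-ins for
# the real EUR-Lex data linked above.
acts = [
    (1967, "regulation"), (1967, "directive"),
    (1968, "regulation"), (1968, "decision"), (1968, "regulation"),
]

# Tally acts per year and type: this long-format table is exactly
# what a ggplot2 (or matplotlib) time-series plot would consume.
counts = Counter(acts)
rows = sorted((year, kind, n) for (year, kind), n in counts.items())
for year, kind, n in rows:
    print(year, kind, n)
```

With the full data, each (year, type) count becomes one point on one of the three lines in the plot above.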

Interest groups and the making of legislation

How are the activities of interest groups related to the making of legislation? Does the mobilization of interest groups lead to more legislation in the future? Or, alternatively, does the adoption of new policies motivate interest groups to get active? Together with Dave Lowery, Brendan Carroll and Joost Berkhout, I tackle these questions in the case of the European Union. What we find is that there is no discernible signal in the data that the mobilization of interest groups and the volume of legislative production over time are significantly related. Of course, absence of evidence is not the same as evidence of absence, so a link might still exist, as suggested by theory, common wisdom and existing studies of the US (e.g. here). But using quite a comprehensive set of model specifications we can’t find any link in our time-series sample. The abstract of the paper is below, and as always you can find the data, the analysis scripts, and the pre-print full text at my website.

On a side note, I am very pleased that we managed to publish what is essentially a negative finding. As everyone seems to agree, discovering which phenomena are not related might be as important as discovering which ones are. Still, few journals would apply this principle in their editorial policy, so kudos to the journal Interest Groups and Advocacy.
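Our actual model specifications are in the linked analysis scripts. Purely as a toy illustration of what a lead/lag check looks like, the Python sketch below correlates a simulated "lobbying" series with a simulated "legislation" series at various lags; both series are random and independent by construction, so, much like in our findings, no lag shows a substantial correlation:

```python
import random

def pearson(x, y):
    """Pearson correlation of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def lagged_corr(x, y, lag):
    """Correlation of x at time t with y at time t + lag.
    lag > 0 asks whether x *leads* y; lag < 0, whether it lags."""
    if lag > 0:
        return pearson(x[:-lag], y[lag:])
    if lag < 0:
        return lagged_corr(y, x, -lag)
    return pearson(x, y)

random.seed(1)
lobby = [random.gauss(0, 1) for _ in range(200)]  # simulated lobbying activity
laws = [random.gauss(0, 1) for _ in range(200)]   # simulated legislative output

for lag in (-2, -1, 0, 1, 2):
    print(lag, round(lagged_corr(lobby, laws, lag), 3))
```

The actual paper uses proper time-series models rather than raw correlations, but the question being asked is the same: does either series systematically lead the other?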

Abstract
Different perspectives on the role of organized interests in democratic politics imply different temporal sequences in the relationship between legislative activity and the influence activities of organized interests.  Unfortunately, lack of data has greatly limited any kind of detailed examination of this temporal relationship.  We address this problem by taking advantage of the chronologically very precise data on lobbying activity provided by the door pass system of the European Parliament and data on EU legislative activity collected from EURLEX.  After reviewing the several different theoretical perspectives on the timing of lobbying and legislative activity, we present a time-series analysis of the co-evolution of legislative output and interest groups for the period 2005-2011. Our findings show that, contrary to what pluralist and neo-corporatist theories propose, interest groups neither lead nor lag bursts in legislative activity in the EU.

Timing is Everything: Organized Interests and the Timing of Legislative Activity
Dimiter Toshkov, Dave Lowery, Brendan Carroll and Joost Berkhout
Interest Groups and Advocacy (2013), vol.2, issue 1, pp.48-70

When ‘just looking’ beats regression

In a draft paper currently under review I argue that the institutionalization of a common EU asylum policy has not led to a race to the bottom with respect to asylum applications, refugee status grants, and some other indicators. The graph below traces the number of asylum applications lodged in 29 European countries since 1997:

My conclusion is that there is no evidence in support of the theoretical expectation of a race to the bottom (an ever-declining rate of registered applications). One of the reviewers insists that I use a regression model to quantify the change and to estimate the uncertainty of the conclusion. While in general I couldn’t agree more that being open about the uncertainty of your inferences is a fundamental part of scientific practice, in this particular case I refused to fit a regression model and calculate standard errors or confidence intervals. Why?

In my opinion, just looking at the graph makes it clear that there is no race to the bottom – application rates have gone down and then up again, while the institutionalization of a common EU policy has only strengthened over the last decade. Calculating standard errors would be superficial, because it is hard to think of the yearly averages as samples from some underlying population. Estimating a regression to quantify the EU effect would only work if the model were good enough to capture the fundamental dynamics of asylum applications before isolating the EU effect, and there is no such model. But most importantly, I just didn’t feel that a regression coefficient or a standard error would improve on the inference you get by just looking at the graph: applications have been all over the place since the late 1990s, and you don’t need a confidence interval to see that! Still, the issue has bugged me ever since – after all, the reviewer was only asking for what would be the standard way of approaching an empirical question.

Then two days ago I read this blog post by William M. Briggs, who (unlike me) is a professional statistician. After showing that by manipulating the start and end points of a time series you can get any regression coefficient you want even with randomly generated data, he concludes ‘The lesson is, of course, that straight lines should not be fit to time series.’ But here is the real punch line:
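Briggs’s demonstration is easy to reproduce. The Python sketch below (my own, not his code) fits least-squares lines to different windows of a simulated random walk – a series with no true trend at all – and the fitted ‘trend’ varies with the chosen start and end points:

```python
import random

def ols_slope(y):
    """Slope of a least-squares line fit to y against its time index."""
    n = len(y)
    mx, my = (n - 1) / 2, sum(y) / n  # mean of 0..n-1 and mean of y
    num = sum((x - mx) * (v - my) for x, v in zip(range(n), y))
    den = sum((x - mx) ** 2 for x in range(n))
    return num / den

random.seed(42)
# A random walk: accumulated noise, no underlying trend by construction.
walk, level = [], 0.0
for _ in range(300):
    level += random.gauss(0, 1)
    walk.append(level)

# The fitted "trend" swings around depending on the chosen window.
for start, end in [(0, 300), (0, 100), (100, 200), (150, 300)]:
    print(start, end, round(ols_slope(walk[start:end]), 3))
```

Pick your window and you can report the slope you like – which is precisely why a regression coefficient adds so little over just looking at the graph.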

If we want to know if there has been a change from the start to the end dates, all we have to do is look! I’m tempted to add a dozen more exclamation points to that sentence, it is that important. We do not have to model what we can see. No statistical test is needed to say whether the data has changed. We can just look.

But what about hypothesis testing? We need a statistical test to refute a hypothesis, right? Let me quote some more:

It is true that you can look at the data and ponder a “null hypothesis” of “no change” and then fit a model to kill off this straw man. But why? If the model you fit is any good, it will be able to skillfully predict new data…. And if it’s a bad model, why clutter up the picture with spurious, misleading lines?

In the inimitable prose of Prof. Briggs, ‘if you want to claim that the data has gone up, down, did a swirl, or any other damn thing, just look at it!’

The ‘Nobel’ prize for Economics, VAR and Political Science

Yesterday the Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel was awarded to the economists Thomas J. Sargent and Christopher A. Sims “for their empirical research on cause and effect in the macroeconomy” (press release here; Tyler Cowen presents the laureates here and here). The award for Christopher Sims in particular comes for the development of vector autoregression – a method for analyzing ‘how the economy is affected by temporary changes in economic policy and other factors’. In fact, the application of vector autoregression (VAR) is not confined to economics, and the method can be used to analyze any set of dynamic relationships.
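In a VAR, each variable is modeled as a function of lagged values of all the variables in the system, which is what lets you trace how a temporary shock to one variable ripples through the others. Libraries such as statsmodels handle the actual estimation; the sketch below only illustrates the mechanics of a bivariate VAR(1) with made-up coefficients, tracing the impulse response to a one-off shock:

```python
# A bivariate VAR(1): x_t = A @ x_{t-1} + shock_t.
# The coefficient matrix A is hypothetical, chosen so the system is stable.
A = [[0.5, 0.2],
     [0.1, 0.4]]

def step(state, shock=(0.0, 0.0)):
    """Advance the VAR(1) system one period."""
    return [A[0][0] * state[0] + A[0][1] * state[1] + shock[0],
            A[1][0] * state[0] + A[1][1] * state[1] + shock[1]]

# Impulse response: hit the first variable with a one-off unit shock,
# then let the system evolve. The shock spills over into the second
# variable and both then decay back toward zero -- exactly the kind of
# dynamic effect VAR analysis is designed to trace.
state = step([0.0, 0.0], shock=(1.0, 0.0))
path = [state]
for _ in range(9):
    state = step(state)
    path.append(state)
for t, (x1, x2) in enumerate(path):
    print(t, round(x1, 4), round(x2, 4))
```

The same mechanics apply whether the two series are output and interest rates, or public opinion and policy output.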

Unfortunately, despite being developed back in the 1970s, VAR remains somewhat unpopular in political science and public administration (as I learned the hard way while trying to publish an analysis that uses VAR to explore the relationship between public opinion and policy output in the EU over time). A quick-and-dirty search for ‘VAR’/’vector autoregression’ in Web of Science [1980-2011] returns 1810 hits under the category Economics and only 52 under Political Science (of which 23 are also filed under Economics). This is the distribution over the last three decades:

Time period: Econ / PolSci
1980-1989: 13 / 1
1990-1999: 406 / 15
2000-2011: 1391 / 36

With all the disclaimers that go with using Web of Science as a data source, the discrepancy is clear.

It remains to be seen whether the Nobel prize for Sims will serve to popularize VAR outside the field of economics.