Why political scientists should continue to (fail to) predict elections

The results of the British elections last week have already claimed the heads of three party leaders. But together with Labour, the Liberal Democrats and UKIP, there was another group that lost big time in the elections: pollsters and electoral prognosticators. Not only were polls and predictions way off the mark in terms of the actual vote shares and seats received by the different parties. Crucially, their major expectation of a hung parliament did not materialize, as the Conservatives cruised to a small but comfortable majority of the seats. Even more remarkably, all polls and predictions were wrong, and they were all wrong in pretty much the same way. Not pretty.

This calls for reflection upon the exploding number of electoral forecasting models that sprang up in the build-up to the 2015 national elections in the UK. Many of these models were offered by political scientists and promoted by academic institutions (for example, here, here, and here). At some point, it became passé to be a major political science institution in the country and not have an electoral forecast. The field became so crowded that the elections were branded ‘a nerd feast’ and the competition of predictions ‘the battle of the nerds’. The feast is over and everyone lost. It is the time of the scavengers.

The massive failure of British polls and predictions has already led to a frenzy of often vicious attacks on the pollsters and prognosticators coming from politicians, journalists and pundits, in the UK and beyond. A formal inquiry has been launched. The unmistakable smell of schadenfreude is hanging in the air. Most disturbingly, some respected political scientists have voiced a hope that the failure puts a stop to the game of predicting voting results altogether and dismissed electoral predictions as unscientific.


This is wrong. Political scientists should continue to build predictive models of elections. This work has scientific merit and it has public value. Moreover, political scientists have a mission to participate in the game of electoral forecasting. Their mission is to emphasize the large uncertainties surrounding all kinds of electoral predictions. They should not be in the game in order to win, but to correct others' over-eager attempts to mislead the public with predictions offered with a false sense of precision and certainty.

The rising number of electoral forecasts produced by political scientists has more than a little to do with a certain jealousy of Nate Silver – the American forecaster who gained international fame and recognition with his successful predictions of the US presidential elections. (This time round, by the way, Nate Silver got it just as wrong as the others.) For once, there was something sexy about political science work, but the irony was that political scientists were not part of it. And if Nate, who is not a professional political scientist, can do it, so can we – academic experts with life-long experience in the study of voting and elections and hard-earned mastery of sophisticated statistical techniques. So academia was drawn into this forecasting thing.

And that’s fine. Political scientists should be in the business of electoral forecasting because this business is important and because it is here to stay. News outlets have an insatiable appetite for election stories as voting day draws near, and the release of polls and forecasts provides a good excuse to indulge in punditry and sometimes even meaningful discussion. So predictions will continue to be offered and if political scientists move away somebody else will take their place. And the newcomers cannot be trusted to have the public interest at heart.

Election forecasts are important because they feed into the electoral campaign and into the strategic calculations of political parties and individual voters. Voting is rarely a naïve expression of political preferences. Especially in a highly non-proportional electoral system such as the UK's, voters and parties have a strong incentive to behave strategically in view of the information that polls and forecasts provide. (Ironically, the one prognosis that political scientists got relatively right – the exit poll – is the one that probably matters the least, as it only spares us a few more hours of waiting for the official electoral results.)

Hence, political scientists as servants of the public interest have a mission to offer impartial and professional electoral forecasts based on state of the art methodology and deep substantive knowledge. They must also discuss, correct and when appropriate trash the forecasts offered by others.

And they have one major point to make – all predictions carry a much larger degree of uncertainty than prognosticators want (us) to believe. It is a simple point that experience has proven right time and again. But it is one that still needs to be pounded over and over, as pollsters, forecasters and the media get easily carried away.

It is in this sense that commentators are right: predictions, if not properly bracketed by valid estimates of uncertainty, are unscientific and pure charlatanry. And it is in this sense that most forecasts offered by political scientists in the latest British elections were a failure. They did not properly gauge the uncertainty of their estimates and as a result misled the public. That they didn't predict the result is less damaging than the fact that they pretended they could.

Since the bulk of the data doing the heavy-lifting in most electoral predictive models is poll data, the failure of prediction can be traced to a failure of polling. But pollsters cannot be blamed for the fact that prognosticators did not adjust the uncertainty estimates of their predictions. The tight sampling margins of error reported by pollsters might be appropriate to characterize the uncertainty of polling estimates (under certain assumptions) of public preferences at a point in time, but they are invariably too low when it comes to making predictions from these estimates. Predictions have other important sources of uncertainty in addition to sampling error and by not taking these into account prognosticators are fooling themselves and others. Another point forecasters should have known: combining different polls reduces sampling margins of error, but if all polls are biased (as they proved to be in the British case), the predictions could still be seriously off the mark.
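The arithmetic behind that last point is worth spelling out. The simulation below is a minimal sketch with invented numbers (a hypothetical true vote share of 37%, a shared 2-point house bias, and twenty polls of 1,000 respondents each): averaging the polls shrinks the sampling margin of error by a factor of √20, yet the pooled estimate remains about two points off, because the bias is common to every poll.

```python
import random
import statistics

random.seed(42)

TRUE_SHARE = 0.37   # hypothetical true vote share (invented for illustration)
BIAS = -0.02        # shared 'house effect': every poll underestimates by 2 points
N_RESPONDENTS = 1000
N_POLLS = 20

def run_poll():
    """Simulate one poll: all pollsters sample from the same biased distribution."""
    p = TRUE_SHARE + BIAS
    hits = sum(random.random() < p for _ in range(N_RESPONDENTS))
    return hits / N_RESPONDENTS

polls = [run_poll() for _ in range(N_POLLS)]
pooled = statistics.mean(polls)

# 95% sampling margin of error for a single poll vs. the pooled average
moe_single = 1.96 * (TRUE_SHARE * (1 - TRUE_SHARE) / N_RESPONDENTS) ** 0.5
moe_pooled = moe_single / N_POLLS ** 0.5

print(f"single-poll margin of error: ±{moe_single:.3f}")  # about ±0.030
print(f"pooled margin of error:      ±{moe_pooled:.3f}")  # about ±0.007
print(f"pooled estimate: {pooled:.3f} (true share: {TRUE_SHARE})")
```

Pooling delivers a reassuringly tight interval around the wrong number, which is exactly the failure mode described above.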

Offering predictions with wide margins of uncertainty is not sexy. Correcting others for the illusory precision of their forecasts is tedious and risks being viewed as pedantic. But this is the role political scientists need to play in the game of electoral forecasting, and being tedious, pedantic and decidedly unsexy is the price they have to pay.

Constructivism in the world of Dragons

Here is an analysis of Game of Thrones from a realist international relations perspective. Inevitably, here is the response from a constructivist angle. These are supposed to be fun, so I approached them with a light heart and popcorn. But halfway through the second article I actually felt sick to my stomach. I am not exaggerating, and it wasn't the popcorn – seeing the same ‘arguments’ between realists and constructivists rehearsed in this new setting, the same lame responses to the same lame points, the same ‘debate’ where nobody ever changes their mind, the same dreaded confluence of normative, theoretical, and empirical notions that plagues this never-ending exchange in the real (sorry, socially constructed) world – all that really gave me physical pain. I felt trapped – even in this fantasy world there was no escape from the Realist and the Constructivist. The Seven Kingdoms were infected by the triviality of our IR theories. The magic of their world was desecrated. Forever…

There is nothing wrong with the particular analyses. But precisely because they manage to be good examples of the genres they imitate, the bad taste in my mouth felt so real. So is it about interests or norms? Oh no. Is it realpolitik or the slow construction of a common moral order? Do leaders disregard the common folk at their own peril? Oh, please stop. How do norms construct identities? Noooo moooore. Send the Dragons!!!

By the way, here is just one example of how George R.R. Martin can explain a difficult political idea better than an entire conference of realists and constructivists. Why do powerful people keep their promises? Is it because their norms make them do it, or because it is in their interests, or whatever? Why do Lannisters always pay their debts, even though they appear to be some of the meanest, most self-centered characters in the entire world of Game of Thrones? We literally see the answer when Tyrion Lannister tries to escape from the sky cells: the Lannisters' reputation for paying their debts is the only thing that saves him, the only thing he has left to pay Mord, but it is enough (see episode 1.6). Having a reputation for paying your debts is one of the greatest assets you can have in any world. And it is worth all the pennies you pay to preserve it, even when you could actually get away with not honoring your commitments. It could not matter less whether you call this an interest-based or a norm-based explanation: it just clicks. But it takes creativity and insight to convey the point, not impotent meta-theoretical disputes.

The failure of political science

Last week the American Senate supported, with a clear bipartisan majority, a decision to stop funding for political science research from the National Science Foundation. Of all disciplines, only political science has been singled out for the cuts, and the money will go to cancer research instead.

The decision is obviously wrong for so many reasons, but my point is different. How could political scientists, who are supposed to understand better than anyone else how politics works, allow this to happen? What does it tell us about the state of the discipline that the academic experts in political analysis cannot prevent overt political action that hurts them directly and rather severely?

To me, this failure of American political scientists to protect their own turf in the political game is scandalous. It is as bad as Nobel-winning economists Robert Merton and Myron Scholes leading the hedge fund Long-Term Capital Management to bust and losing 4.6 billion dollars with the help of their Nobel-winning economic theories. As Merton and Scholes' hedge fund story reveals the true real-world value of (much) financial economics theory, so does the humiliation of political science by Congress reveal the true real-world value of (much) political theory.

Think about it – the world-leading academic specialists on collective action, interest representation and mobilization could not get themselves mobilized, organized and represented in Washington to protect their funding. The professors of the political process and legislative institutions could not find a way to work these same institutions to their own advantage. The experts on political preferences and incentives did not see the broad bipartisan coalition against political science forming. That's embarrassing.

It is even more embarrassing because American political science is the most productive, innovative, and competitive in the world. There is no doubt that almost all of the best new ideas, methods, and theories in political science over the last 50 years have come from the US. (And a lot of these innovations have been made possible by the funding received from the National Science Foundation.) So it is not that individual American political scientists are not smart – of course they are – but for some reason, as a collective body, they have not been able to benefit from their own knowledge and insights. Or perhaps that knowledge and those insights about US politics are deficient in important ways. The fact remains: political scientists were beaten at what should have been their own game. Hopefully some kind of lesson will emerge from all that…

P.S. No reason for public administration, sociology and other related disciplines to be smug about pol sci’s humiliation – they have been saved (for now) mostly by their own irrelevance. 

The education revolution at our doorstep

University education is on the brink of radical transformation. The revolution is already happening, and the Khan Academy, Udacity, Coursera and the Marginal Revolution University are just the harbingers of a change that will soon sweep over universities throughout the world.

Alex Tabarrok has a must-read piece on the coming revolution in education here. The entire piece is highly recommended, so I am not even gonna try to summarize it here, but this part stands out:

Teaching today is like a stage play. A play can be seen by at most a few hundred people at a single sitting and it takes as much labor to produce the 100th viewing as it does to produce the first. As a result, plays are expensive. Online education makes teaching more like a movie. Movies can be seen by millions and the cost per viewer declines with more viewers. Now consider quality. The average movie actor is a better actor than the average stage actor.

As a result, Tabarrok predicts that the market for teachers will become a winner-take-all market with very big payments at the top: the best teachers will be followed by millions and paid accordingly.

My prediction is that the revolution in education will also lead to greater specialization – maybe you can't be the best Development Economics teacher, but you can be the best teacher of 19th-century agricultural development in South-East Denmark: the economies of scale brought by online education can make such uber-specialization of teaching portfolios profitable (or, indeed, necessary).

Surprisingly or not, it is American entrepreneurs and institutions who are leading this revolution. In Europe, online education is still relegated to pre-master programs and the like, and is too often a thoughtless extrapolation of traditional education practices online. Sooner rather than later, the revolution will be at our doorstep. We had better start preparing.

[P.S. The Guardian also ran a recent piece on the topic.]

Science is like sex…

‘Science is like sex – it might have practical consequences but that’s not why you do it!’

This seems to be a modified version of a quote by the physicist Richard Feynman that I heard last week at a meeting organized by the Dutch Organization for Scientific Research (the major research funding agency in the Netherlands). It kind of sums up the attitude of natural scientists to the increasing pressure all researchers face to justify their grant applications in terms of the possible practical use (utilization, or valorization) of their research results. Which is totally fine by me. I perfectly understand that it is impossible to anticipate all the possible future practical consequences of fundamental research. On the other hand, I see no harm in forcing researchers to, at the very least, think about the possible real-world applications of their work. The current equilibrium, in which reflection on possible practical applications is required but ‘utilization’ is neither necessary nor sufficient for getting a grant, seems like a good compromise.
Of course, I come from a field (public administration) where demonstrating the scientific contribution is usually more difficult than showing the practical applicability of the results, so my view might be biased. I am not even sure what fundamental research in the social sciences looks like. Even rather esoteric work on non-cooperative game theory was directly spurred by practical concerns related to the Cold War (and sponsored by the RAND Corporation), and it has rather directly led to the design of real-world social institutions (like the networks for kidney exchange) which won Al Roth his recent Nobel prize.

The hidden structure of (academic) organizations

All organizations have a ‘deep’ hidden structure based on the social interactions among their members, which may or may not coincide with the official formal one. University departments are no exception – if anything, the informal alliances, affinities, and allegiances within academic departments are only too visible and salient.

Network analysis provides one way of visualizing and exploring the ‘deep’ organizational structure. In order to learn how to visualize small networks with R, I collected data on the social interactions within my own department and plugged the dataset into R (using the igraph package) to get the plot below. The figure shows the social network of my institute based on the co-supervision of student dissertations (each Master's thesis has a supervisor who selects a so-called ‘second’ reader to review the draft, and the two supervisors examine the student during the defence). So each link between nodes (people) is based on one joint supervision of a student. The total number of links (edges) is 264, which covers (approximately) all dissertations defended over the last year. In this version of the graph the people are represented only by numbers, but in the full version the actual names are plotted, the links are directional, and additional information (like the grade of the thesis) can be incorporated.
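The original figure was produced in R with the igraph package; for readers who do not use R, the Python sketch below reproduces the basic idea on a small invented edge list (the supervision pairs are hypothetical, not the actual departmental data), using nothing beyond the standard library:

```python
from collections import defaultdict

# Hypothetical co-supervision pairs: one edge per jointly supervised thesis.
co_supervisions = [
    (1, 2), (1, 3), (2, 3), (2, 4), (4, 5),
    (5, 6), (3, 6), (6, 7), (7, 8), (2, 7),
]

# Build an undirected adjacency list.
adj = defaultdict(set)
for a, b in co_supervisions:
    adj[a].add(b)
    adj[b].add(a)

n_nodes = len(adj)
n_edges = len({frozenset(pair) for pair in co_supervisions})

# Degree centrality: the share of colleagues each person is directly linked to.
centrality = {node: len(neigh) / (n_nodes - 1) for node, neigh in adj.items()}
most_central = max(centrality, key=centrality.get)

print(f"nodes: {n_nodes}, edges: {n_edges}")   # → nodes: 8, edges: 10
print(f"most central person: {most_central}")  # → most central person: 2
```

A fuller version would keep the links directional (supervisor to second reader) and attach attributes such as the thesis grade; for the plot itself, igraph's plotting functions in R (or matplotlib in Python) would do the drawing.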

Altogether, the organization appears surprisingly well-integrated. Most ‘outsiders’ and most weakly-connected ‘islands’ are either occasional external readers or new colleagues still being ‘socialized’ into the organization. Obviously, some people are more ‘central’ in the sense of connecting to a more diverse set of people, while others serve as boundary-spanners, reaching out to people who would otherwise remain unconnected to the core. I find the figure intellectually and aesthetically pleasing (given that it is generated with two lines of code), and perhaps a more thorough analysis of the network could be useful in organizational management as well.

Solve for the equilibrium: Dutch higher education

1) The number of first-year students in the Netherlands has soared from 105,000 in 2000 to 135,000 in 2011. The 30% increase is a direct result of government policy that links university funding to student numbers. In some programs in the country, student numbers have more than doubled during the last five years. Everyone is encouraged to enter the university system.

2) In the general case, there is no selection at the gate. Students cannot be refused entry to a program.

3) Now, the government's objectives are to reduce the number of first-year drop-outs and to slash the number of students who do not graduate within four years. Both objectives are backed by financial incentives and penalties for the universities.

Something’s gotta give. I wonder what…

P.S. ‘Solve for the equilibrium’ is the title of a rubric from Marginal Revolution.

Proposal for A World Congress on Referencing Styles

I have been busy over the last few days correcting proofs for two forthcoming articles. One of the journals accepts neither footnotes nor endnotes, so I had to find a place in the text for the more than 20 footnotes I had. As usual, most of these footnotes result directly from the review process, so getting rid of them is not an option even if many are of marginal significance. The second journal accepts only footnotes – no in-text referencing at all – so I had to rework all the referencing into footnotes. Both journals demanded that I provide missing places of publication for books and missing page numbers for articles. Ah, the joys of academic work!

But seriously… How is it possible that a researcher working in the 21st century still has to spend his or her time changing commas into semicolons and abbreviating author names to conform to the style of a particular journal? I just don't get it. I am all for referencing and beautifully formatted bibliographies, but can't we all agree on one single style? Does it really matter if the year of publication is put in brackets or not? Who cares if the first name of the author follows the family name or the other way round? Do we really need to know the place of publication of a book? Where do you actually look for this information? Is it Thousand Oaks, London, or New Delhi? All three appear on the back of a random SAGE book I picked from the shelf… Who would ever need to know whether it was Thousand Oaks or London in the first place? Maybe libraries, but they certainly don't get their data from my references. Obviously, the current referencing system is a relic from very different and distant times, when knowing the place of publication was necessary to get access to the book. Now, collecting and providing this information is a waste of time and space.

And yes, I have heard of Endnote and BibTeX, and I do use reference management software. But most journals still don't have their required styles available for import into these programs. So the publisher doesn't find it necessary to hire somebody for a few hours to prepare an official Endnote style sheet for the journal, but it demands that all authors spend days reworking their references to conform to its rules?!

And why are there different referencing styles anyways? Can you imagine the discussions that journal editors and publishers have before they settle for a particular referencing style?

– Herr Professor, I must insist that we require journal names to be in italics!
– That’s the most ridiculous thing I have ever heard – everybody knows that journal names are supposed to be in bold, not in italics!
– But gentlemen, research by our esteemed colleagues in psychology has shown that journal names put in a regular font and encircled by commas are perceived as 3% more reliable than others.
– Nonsense! I demand that journal names are underlined and every second one in the list should be abbreviated as well.

And so on and so forth… To remedy the situation I boldly propose a World Congress on Referencing Styles. All the academic disciplines and publishers will send delegates to resolve this perennial problem once and for all. There will be panels like Page Numbers: Preceded by a Comma, a Colon, or a Dash, and seminars on topics like Recent Trends in Abbreviating Author Names. No doubt several months of deliberation will be needed, but eventually the two main ‘Chicago’ and ‘Harvard’ parties will reach a compromise which will be endorsed by the United Nations amid the ovations of the world leaders. The academic universe would never be the same again!

Until that day, happy referencing to you all!

Cutting funds for political science research

Just wanted to pass along this troubling piece of news: in the US, the House has voted to abolish funding for political science from the National Science Foundation altogether, and to cut the American Community Survey – an in-depth representative survey providing data to policy makers (on education, housing, etc.). The Dark Ages are nigh (if they haven't yet arrived).


Review the reviews

Frank Häge alerts me to a new website which gives you the chance to review the reviews of your journal submissions:

On this site academic social science researchers have the opportunity to comment on the reviews they have received, and the process of decision-making about reviews, affecting articles submitted for publication, book proposals, and funding applications.

So far there seems to be only one submission (by the site’s author) but I can see the potential. The addition of a simple scoring system so that you can rate your experience with certain journals might work even better. The danger is of course that the website becomes just another channel for venting the frustration of rejected authors.

In my opinion, making the reviews public (perhaps after the publication of the article) is the way to go in order to increase the accountability of the review system.