Explanation and the quest for ‘significant’ relationships. Part II

In Part I, I argued that the search for, and discovery of, statistically significant relationships does not amount to explanation and is often misplaced in the social sciences because the variables purported to have effects on the outcome cannot be manipulated.

Just to make sure that my message is not misinterpreted – I am not arguing for a fixation on maximizing R-squared and other measures of model fit in statistical work instead of the current focus on the size and significance of individual coefficients. R-squared has been rightly criticized as a standard of how good a model is** (see for example here). But I am not aware of any other measure or standard that can convincingly compare the explanatory potential of different models in different contexts. Predictive success might be one way to go, but prediction is something else altogether from explanation.
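One mechanical reason R-squared is a poor standard of model quality is that, in OLS, it can never decrease when a regressor is added – even a regressor that is pure noise. A minimal simulation (all numbers invented for illustration; OLS is computed directly with numpy rather than a statistics package) makes the point:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
y = rng.normal(size=n)             # outcome: pure noise
X = np.ones((n, 1))                # start from an intercept-only model

def r_squared(X, y):
    """In-sample R-squared of an OLS fit of y on the columns of X."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid @ resid / np.sum((y - y.mean()) ** 2)

# Add 30 regressors that are unrelated to y and track R-squared
r2 = [r_squared(X, y)]
for _ in range(30):
    X = np.column_stack([X, rng.normal(size=n)])
    r2.append(r_squared(X, y))

# R-squared never goes down as junk regressors pile up
print(round(r2[0], 3), round(r2[-1], 3))
```

With 30 noise regressors and 100 observations, the final R-squared is typically around 0.3 – 'explaining' a third of the variation in what is, by construction, unexplainable noise.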

I don’t expect much to change in the future with regard to the problem I have outlined. In practice, all one can hope for is some clarity on the part of researchers about whether their objective is to explain (account for) an outcome or to find significant effects. The standards for evaluating progress towards the former objective (model fit, predictive success, ‘coverage’ in the QCA sense) should be different from the standards for the latter (statistical and practical significance, and the practical possibility of manipulating the exogenous variables).

Take the so-called garbage-can regressions, for example. These are models with tens of variables, all of which are interpreted causally if they reach the magic 5% significance level. The futility of this approach is matched only by its popularity in political science and public administration research. If the research objective is to explore a causal relationship, one had better focus on that variable and include covariates only if they are suspected of being correlated with both the outcome and the main independent variable of interest. Including everything else that happens to be within easy reach not only leads to inefficiency in the estimation; one should refrain from interpreting the significance of these covariates causally altogether. On the other hand, if the objective is to comprehensively explain (account for) a certain phenomenon, then including as many variables as possible might be warranted – but then the significance of individual variables is of little interest.
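The arithmetic behind the futility is simple: at the 5% level, roughly one in twenty truly irrelevant covariates will cross the significance threshold by chance. A toy simulation (sample sizes and counts invented; OLS t-statistics computed by hand with numpy) shows a garbage-can regression generating ‘significant’ effects out of pure noise:

```python
import numpy as np

rng = np.random.default_rng(42)
n, k, reps = 200, 40, 200        # 40 junk covariates, 200 replications
false_hits = 0

for _ in range(reps):
    X = rng.normal(size=(n, k))
    y = rng.normal(size=n)       # y is unrelated to every column of X
    Xc = np.column_stack([np.ones(n), X])
    beta, *_ = np.linalg.lstsq(Xc, y, rcond=None)
    resid = y - Xc @ beta
    sigma2 = resid @ resid / (n - k - 1)
    se = np.sqrt(sigma2 * np.diag(np.linalg.inv(Xc.T @ Xc)))
    t = beta / se
    false_hits += np.sum(np.abs(t[1:]) > 1.96)   # 'significant' at ~5%

rate = false_hits / (reps * k)
print(round(rate, 3))   # close to 0.05: one in twenty junk variables looks 'causal'
```

In a single 40-variable regression, that is about two spuriously ‘significant’ coefficients waiting to be interpreted causally.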

The goal of research is important when choosing the research design and the analytic approach. Different standards apply to explanation, the discovery of causal effects, and prediction.

**Just one small example from my current work – a model with one dependent and one exogenous time-series variable in levels, with a lagged dependent variable included on the right-hand side of the equation, produces an R-squared of 0.93. The same model in first differences has an R-squared of 0.03, while the regression coefficient of the exogenous variable remains significant in both models. So we can ‘explain’ 90% of the variation in the first case by reference to the past values of the outcome. Does this amount to an explanation in any meaningful sense? I guess that depends on the context. Does it provide any leverage for the researcher to manipulate the outcome? Not at all.
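The pattern in the footnote is easy to reproduce with simulated data. In the sketch below (all coefficients and series are invented, not taken from the actual model in my work), a persistent outcome yields a high R-squared in levels mostly thanks to its own lag, while the same relationship in first differences has a far lower R-squared – yet the exogenous variable stays significant in both:

```python
import numpy as np

rng = np.random.default_rng(7)
T = 500
x = rng.normal(size=T)
e = rng.normal(size=T)
y = np.zeros(T)
for t in range(1, T):
    y[t] = 0.9 * y[t - 1] + 0.5 * x[t] + e[t]   # persistent outcome, modest effect of x

def ols(X, y):
    """R-squared and t-statistics for OLS of y on X (intercept added)."""
    Xc = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(Xc, y, rcond=None)
    resid = y - Xc @ beta
    sigma2 = resid @ resid / (len(y) - Xc.shape[1])
    se = np.sqrt(sigma2 * np.diag(np.linalg.inv(Xc.T @ Xc)))
    return 1 - resid @ resid / np.sum((y - y.mean()) ** 2), beta / se

# Levels, with the lagged dependent variable on the right-hand side
r2_lvl, t_lvl = ols(np.column_stack([y[:-1], x[1:]]), y[1:])

# The same relationship in first differences
dy, dx = np.diff(y), np.diff(x)
r2_dif, t_dif = ols(dx.reshape(-1, 1), dy)

print(round(r2_lvl, 2), round(r2_dif, 2))        # levels R2 far above differenced R2
print(round(t_lvl[2], 1), round(t_dif[1], 1))    # x 'significant' in both
```

The gap between the two R-squared values is driven almost entirely by the lagged outcome, which is exactly why it ‘explains’ a lot without offering any leverage over the outcome.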

Is unit homogeneity a sufficient assumption for causal inference?

Is unit homogeneity a sufficient condition (assumption) for causal inference from observational data?

Re-reading King, Keohane and Verba’s bible on research design [lovingly known to all who have been exposed to it as KKV], I think they regard unit homogeneity and conditional independence as alternative assumptions for causal inference. For example: “we provide an overview here of what is required in terms of the two possible assumptions that enable us to get around the fundamental problem [of causal inference]” (p. 91, emphasis mine). However, I don’t see how unit homogeneity on its own can rule out endogeneity (establish the direction of causality). In my understanding, endogeneity is automatically ruled out under conditional independence, but not under unit homogeneity (“Two units are homogeneous when the expected values of the dependent variables from each unit are the same when our explanatory variables takes on a particular value” [p. 91]).

Going back to Holland’s seminal article, which provides the basis of KKV’s approach, we can confirm that unit homogeneity is listed as a sufficient condition for inference (p. 948). But Holland divides variables into pre-exposure and post-exposure before he even gets to discuss any of the additional assumptions, so reverse causality is ruled out from the start. Hence, in Holland’s context unit homogeneity can indeed be regarded as sufficient, but in KKV’s context, in my opinion, unit homogeneity needs to be coupled with some further condition (temporal precedence, for example) to ascertain the causal direction when making inferences from data.
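The endogeneity worry can be made concrete with a toy simulation (everything here is invented for illustration). Below, causality runs from y to x, so the expected value of y given x is the same for every unit by construction – a homogeneity of sorts – yet a regression of y on x produces a large, highly ‘significant’ slope that has no causal content:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000
y = rng.normal(size=n)                 # outcome, generated first
x = 0.8 * y + rng.normal(size=n)       # x responds to y: causality runs y -> x

# Naive OLS regression of y on x
Xc = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(Xc, y, rcond=None)
resid = y - Xc @ beta
se = np.sqrt(resid @ resid / (n - 2) * np.diag(np.linalg.inv(Xc.T @ Xc)))
t = beta[1] / se[1]
print(round(beta[1], 2), round(t, 1))  # large, highly 'significant' slope
# Yet intervening on x would not move y at all: the association is pure reverse causation.
```

Nothing in the regression output distinguishes this case from a genuine effect of x on y – only an assumption such as temporal precedence or conditional independence can.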

The point is minor but can create confusion when presenting unit homogeneity and conditional independence side by side as alternative assumptions for inference.

Inspiring scientific concepts

EDGE asks 159 selected intellectuals: What scientific concept would improve everybody’s cognitive toolkit?

You are welcome to read the individual contributions, which range from a paragraph to a short essay, here. Many of the entries are truly inspiring, but I see little synergy in bringing 159 of them together. As in a group photo of beauty pageant contenders, the total appeal of the group is less than the sum of the individual attractiveness of its subjects.

But to my point: it is remarkable that so many of the answers (by my count, in excess of 30) deal, more or less directly, with causal inference. What is even more remarkable is that most of the concepts and ideas about causal inference mentioned by the world’s intellectual jet-set (no offense to those left out) are anything but new. Many of the ideas can be traced back to Popper’s The Logic of Scientific Discovery (1934) and Ronald Fisher’s The Design of Experiments (1935). So what is most remarkable of all is how long it takes for these ideas to sink in and diffuse in society.

Several posts focus on the Popperian requirement for falsifiability (Howard Gardner, Tania Lombrozo) and skeptical empiricism more generally (Gerald Holton). The scientific method is further invoked by Richard Dawkins on the double-blind control experiment (see also Roger Schank), Brian Knutson on replicability, and Kevin Kelly on the virtues of negative results. Mark Henderson advocates the use of the scientific method outside science (e.g. in policy) – a plea that strikes a chord with this blog.

A significant sample of contributions relate to probability (Seth Lloyd, John Allen Paulos, Charles Seife) and the difficulties humans have in understanding risk, uncertainty and probabilities (Antony Garrett, Gerd Gigerenzer, Lawrence M. Krauss, Carlo Rovelli, Keith Devlin, Mahzarin Banaji, David Pizarro). W. Daniel Hillis and Keith Devlin mention possibility spaces and base rates, respectively, as concepts that might help.

Several authors warn of the dangers of anecdotal data (Susan Fiske, Robert Sapolsky), and Christine Finn insists that the absence of evidence is not evidence of absence. Susan Blackmore reminds us that correlation is not a cause, and Diane Halpern critiques the cult of statistical significance. Beatrice Golomb discusses misinterpretations of the placebo effect.

You do want to check out some innovative approaches to causality – causation as an information flow (David Dalrymple), nexus causality (John Tooby) and Rebecca Newberger Goldstein’s ‘best explanation’ – which go beyond the “monocausalitis” disease identified by Ernst Poppel (a related argument is made by Nigel Goldenfeld).

Some highlights from the remaining posts:

– Richard Thaler compares the economic concept of utility to aether.

– Eric R. Weinstein on kayfabe (!) – the fabricated competition in professional wrestling and… the study of economics

– Fiery Cushman on confabulation (“Guessing at plausible explanations for our behavior, and then regarding those guesses as introspective certainties”)

– Joshua D. Greene on supervenience (“The Set A properties supervene on the Set B properties if and only if no two things can differ in their A properties without also differing in their B properties”)

– Stephen M. Kosslyn on constraint satisfaction as a decision mechanism

And Andrian Kreye mentions free jazz: