by Elena Rocca


In the mid-19th century, the Hungarian physician Ignaz Semmelweis noticed from his clinical experience that antiseptic routines in healthcare reduced infections at childbirth. After carrying out some studies on the matter, he proposed that the practice of disinfecting hands in the obstetric ward of the Vienna General Hospital, where he worked at the time, would reduce the incidence of puerperal fever. At the time, however, this seemed an implausible suggestion. The germ theory of disease was still unheard of (Pasteur developed it only some decades later), and therefore there was no accepted understanding of how disease could be transmitted from one organism to another. Semmelweis's suggestion was therefore rejected by the medical community.

This historical anecdote is often quoted as a reminder that background knowledge and theoretical understanding of causal mechanisms can at any time be wrong or incomplete, and can therefore hinder correct causal inference. How can this be remedied in modern medical research and practice?

The mainstream strategy comes from the proponents of evidence-based medicine (EBM). Since attempts to understand causation in medicine risk running into all sorts of trouble, we should instead improve our ability to look at correlational data without trying to understand the phenomena, or causal mechanisms, underlying such correlations. What if statistical studies give conflicting results? In that case, we should trust the least biased experimental design. In other words, we are better off if we focus on judging the quality of the methods used to collect and analyse statistical data, and drop the attempt to understand the infinitely complex biological phenomena underlying such data.

In a new CauseHealth paper, ‘The Judgements That Evidence-Based Medicine Adopts’, Elena Rocca objects to this strategy, arguing that it is impossible to apply when complex evidence needs to be weighed. When different experimental designs yield conflicting results, we necessarily draw on our background, theoretical understanding of phenomena and causal mechanisms in order to judge which study is less biased. For instance, we need such background understanding to judge whether a trial is successfully randomised. The evaluation of any type of evidence, argues the paper, is based not only on the specific evidence being evaluated, but also on background knowledge. This is built from more general, previously accumulated evidence and from theoretical understanding of phenomena.


The paper demonstrates this thesis by looking at complex cases in which conflicting statistical evidence had to be evaluated, for instance the correlation between exposure to the herbicide glyphosate and a higher incidence of lymphoma.

Clearly, background knowledge can be wrong or incomplete. When explanations are wrong, they will probably hinder, rather than favour, correct causal evaluation. However, as this article attempts to demonstrate, such explanations are irreducibly embedded in the medical sciences. This fallibility, concludes the author, is therefore ‘a motivation for increasing our enquiry on causal explanations, rather than for dismissing it’.


CauseHealth goes to Evidence Live


Evidence Live is an annual conference, jointly hosted by the Centre for Evidence-Based Medicine, University of Oxford and The BMJ. This year, CauseHealth was represented in two of the sessions, by Elena Rocca and Rani Lill Anjum. (more…)

Evidence based medicine. What evidence, whose medicine, and on what basis?


Rani Lill Anjum

The evidence-based medicine movement was intended as a methodological revolution. Its proponents proposed the best way to establish the effectiveness of treatments and new criteria for choosing between available treatments without bias. Philosophically, however, these changes were not so innocent, at least not ontologically speaking. In bringing itself closer to science, medicine has become less suited to dealing with complex illnesses, individual variations and, as I will argue, with causation in general. (more…)

What is the Guidelines Challenge?

Rani Lill Anjum

CauseHealth recently organised a conference in Oxford called The Guidelines Challenge: Philosophy, Practice, Policy.

For those who missed the event, podcasts of the talks are available on our YouTube channel, and there is also a summary from each of the two days on Storify (day 1, day 2). There is also a Twitter hashtag, #GuidelinesChallenge.

New CauseHealth paper about risk assessment of genetically modified plants

by Elena Rocca

One idea promoted by CauseHealth is that, when evaluating evidence, pre-existing theoretical frameworks count as much as the data. For instance, data from a certain trial assume a particular significance depending on the general background theoretical understanding we have when we interpret them. In this new CauseHealth article, Elena Rocca and Fredrik Andersen show that, when evaluating health risks related to the use of genetically modified plants in agriculture, different ontological starting points play an essential role for the final risk evaluation. (more…)

What does CauseHealth mean by N=1?

by Roger Kerry

“N=1” is a slogan used to publicise a core purpose of the CauseHealth project. N=1 refers to a project focussed on understanding causally important variables which may exist at an individual level, but which are not necessarily represented or understood through scientific inquiry at a population level. There is an assumption that causal variables are essentially context-sensitive, and as such, although population data may be symptomatic of causal association, they do not constitute causation. The project seeks to develop existing scientific methods to try and better understand individual variations. In this sense, N=1 has nothing at all to do with acquiescing to “what the patient wants”, or any other similar fabricated straw-man characterisation of the notion which might emerge in discussion. (more…)

Map versus terrain?

by Anna Luise Kirkengen

When discussing the potentials and limitations of “Evidence Based Medicine”, it might be reasonable to begin by examining the premises inherent in the concept. It might be wise to question, for example, whether the use of the word “Evidence” in this model represents an improper appropriation of the term, as if it had a single, specific meaning. One might object: “What is evident? Well, that depends.” (more…)