Are health statisticians the new post-modernists?
By which I don’t mean: are they edgy and cool? Rather, I mean do they deliberately write a load of impenetrable made-up gobbledygook that makes them sound clever to the layperson? For your delectation, I compare two statistics papers that were published this week, one in PLOS Medicine, the other in The Lancet.
Paper One: ‘Does Development Assistance for Health Really Displace Government Health Spending? Reassessing the Evidence’ by Rajaie Batniji and Eran Bendavid from Stanford University.
Straight up, this is a master-class in clear, simple prose. I loved it (meaning I could understand it), and I don’t often say that about health stats papers. What’s the issue? The paper is a response to a well-known study on the displacement effects of aid conducted by a team of researchers at Harvard University. That team made the case that for every $1 of aid money donors give a country to spend on its health sector, the country reduces its own domestic health spending by $0.43. So what? Batniji and Bendavid:
“This latter study is referenced frequently in conversations with decision-makers at aid agencies as a cautionary note about DAH. The analysis appears to have reinforced skepticism about health aid”.
Which might be ok if the study findings were correct but, argue the authors, they aren’t. Batniji and Bendavid again:
“A closer look at the analysis by Lu and colleagues, using data made available in May 2011, shows that the association between DAH and displacement of government health expenditures is not robust after exclusion of a small subset of data. The trends are driven by outliers and country data cluster, and follow widely divergent trends (Figure 1). The primary finding by Lu and colleagues, which we are challenging here, is the negative relationship between government health expenditure (GHE-S)/gross domestic product (GDP) and DAH-Gov/GDP. GHE-S/GDP is government health spending from domestic sources as a percentage of GDP for a country in each year from 1995 to 2006. DAH-gov/GDP is development assistance for health disbursed to government as a percentage of GDP for each country in each year from 1995 to 2006”.
I love this. Clear, precise – we know what the problem is, we know what the authors are going to do, and they even provide the lay reader with a glossary of key terms ‘as we go’. Thank you.
The paper is also entertaining:
“Concerns about aid displacement are as old as development assistance. As a leading World Bank economist famously said in 1947, ‘‘When the World Bank thinks it is financing an electric power station, it is really financing a brothel’”.
And it is challenging beyond the central contribution it makes to statistical analysis of DAH: “even if displacement does exist, there is no evidence that it is a bad thing”; “the reality is that we still lack a sufficient accounting of public financing on health to make any conclusions on overall trends”. Finally, it is just beautifully written:
“However, our findings should relieve donors of the need to make unrealistic demands on recipient governments, and of the pressure to divert resources to NGOs” – “relieve donors of the need”!
How politely rude!
Paper Two: ‘The effect of an integrated multisector model for achieving the Millennium Development Goals and improving child survival in rural sub-Saharan Africa: a non-randomised controlled assessment’ by Paul M Pronyk, Maria Muniz, Ben Nemser, Marie-Andrée Somers, Lucy McClellan, Cheryl A Palm, Uyen Kim Huynh, Yanis Ben Amor, Belay Begashaw, John W McArthur, Amadou Niang, Sonia Ehrlich Sachs, Prabhjot Singh, Awash Teklehaimanot, Jeffrey D Sachs, for the Millennium Villages Study Group.
Straight up, I don’t get this paper. At all. The title is BIG – the effect of a model on not just the MDGs but also child survival, in a large chunk of a continent. Wow! But wait a minute. What’s this? “A non-randomised controlled assessment”? The assessment is non-randomised and controlled? I’d be interested in reading the control assessment. And look at that army of authors! I’m intimidated and I haven’t even got to the abstract.
Right, I’m not going to comment on the stats in this paper; that’s been done by better and more qualified people than I. See here, here, here, and here, and no doubt elsewhere. Suffice it to say the results are not that impressive (the changes in 13 of the 18 indicators are statistically insignificant). Surprising considering the line-up; surprising considering The Lancet has a review team that looks out for these things.
And I’m not going to mention the methods, but will note that the authors go to great lengths to explain how they mitigate the problems associated with non-randomised village selection – including randomly selecting from three non-randomly chosen villages.
Neither will I dwell on that annoying habit, evident in so many academic papers, of dropping pseudo-scientificy words into sentences: like ‘sequencing’; or ‘step-wise’ (not in this paper, but it litters many other leaden analyses); or ‘core’ (“a core set of interventions”, “governments were core partners” – core is simply meaningless and superfluous in both examples, added just to make the claim sound more important or inclusive than it actually is); or ‘contiguous’ (why not just say neighbouring)? Etc, etc.
And I won’t even comment on the interpretation, much. Here the authors seem to be saying something definitive about their results: “Relative to comparison villages, significantly higher levels of food security, skilled birth attendance, bednet use, and access to improved sanitation were observed”. Ok, let’s just check that against the now-infamous Table 2: the food insecurity confidence interval is –36·4 to 0·6, with a p value of 0·057 – an interval that spans zero, which I think means the result is statistically insignificant at the conventional 5% level (doesn’t it?). Be fair though, the other three interventions did improve.
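For readers who, like me, have to double-check these things: a 95% confidence interval that crosses zero implies a two-sided p value above 0.05. As a rough sketch (assuming Table 2 reports a standard normal-approximation 95% CI, which I can’t verify from the interval alone), you can even recover the approximate p value from the interval’s endpoints:

```python
import math

def p_from_95ci(lo, hi):
    """Approximate two-sided p value implied by a normal-approximation 95% CI."""
    estimate = (lo + hi) / 2           # midpoint of the interval
    se = (hi - lo) / (2 * 1.96)        # half-width divided by z_0.975
    z = estimate / se                  # test statistic
    return math.erfc(abs(z) / math.sqrt(2))  # two-sided normal p value

# The food insecurity interval from Table 2: -36.4 to 0.6
p = p_from_95ci(-36.4, 0.6)
print(round(p, 3))  # ~0.058, in line with the reported p value of 0.057
```

Because the upper bound just creeps past zero, the p value just creeps past 0.05 – which is exactly why calling the food security result “significant” looks like a stretch.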
Actually, interpretation seems to be what’s most wrong with this paper, clouded as it is by a mission – a mission to make us see that MVPs WILL work, MUST work. It’s pretty clear, even to someone like me who is not trained in statistics, that the claims are not matched by the evidence.
So, we have two statistics papers published this week, both in prestigious journals, both by established academics. The first one is quality analysis, written well and easily understood by people like me who struggle to understand what the figures mean; the second is none of those things.
Andrew Harmer (with apologies to post-modernists. Or is it postmodernists, or postModernists, or PostModernists?)