Keeping in mind the political bit of policy making – Lessons from the Health Services Research: Evidence Based Practice Conference, London, July 2014
Monday, 07 Jul 2014
Last week, I was in London to attend the conference “Health Services Research: Evidence-Based Practice” (1-3 July 2014), organized by BioMed Central. A few weeks ago, when I began preparing my presentation on the marginal role of evidence in the policy-making and implementation of Human Resources for Health (HRH) reforms in Sierra Leone, and on the importance of contextual and political features, it felt like I was going to walk straight into the lion’s den.
Fortunately, the conference kicked off with a fascinating talk by Nicholas Mays of the London School of Hygiene and Tropical Medicine making a similar point. He argued that “RCTs and other positivist forms of evaluation” should have a much lesser role than is advocated today (or rather, a role at the beginning rather than the end of the decision-making process), not only because of the limitations of RCTs, but also because research and evaluation are profoundly different from decision-making. The latter entails the consideration of other complex factors, including specific contextual features as well as the interpretation of policies by the implementers. Although he didn’t use the term ‘political’, he described the messiness of policy-making and the different concerns of policy-makers and researchers. Applying evidence to policy and practice, therefore, cannot be a straightforward, linear process. Others have recently made similar comments on the political nature of policy-making (see, for example: Policy is political, and Political and Institutional Influences – Systematic Review) and on the use of RCTs in social and development policy (for instance, Lant Pritchett, 10.3.2014, and Krause, P, 24.3.2014), but the issue is contentious. The need for greater rigour and delivery science in health systems research was highlighted in the 2014 Leverhulme Lecture given by Tim Evans, Senior Director, World Bank, at the Centre for Applied Health Research & Delivery, Liverpool School of Tropical Medicine.
The presentation was meant to be provocative given the conference theme and, indeed, it sparked an interesting debate. The audience appeared to be divided into two ‘camps’: those who fundamentally agreed, and the “trialists and policy modernisers” who held that evidence should be the (only?) basis for any policy decision.
Many of the following presentations and posters seemed to be informed by this latter approach, for example using rankings of the ‘strength’ of evidence, from case series at the bottom of the pyramid to systematic reviews of RCTs at the top, or explaining how we can prepare better guidelines (for clinical interventions) so that they will be adopted by policy-makers, clinicians, patients, etc.
On HRH issues, we heard interesting talks by key experts discussing how much evidence is available to address the challenges of HRH recruitment, retention and distribution, and what the gaps are. Apparently, we have learned quite a lot about HRH from the last 10 years of research, as Gilles Dussault highlighted, but the production of evidence remains a “work in progress” and is not rigorous enough. James Buchan pointed out that, of the papers considered in three recent Cochrane Reviews on HRH issues, none met the stringent inclusion criteria in two of the reviews, and only one did in the third.
My own talk – Health Worker Incentives in Sierra Leone – built upon the analysis (qualitative and with no control group…) of key informant interviews at central and district level in Sierra Leone, exploring the policy-making and implementation processes of HRH reforms. It shows that the evidence base for the HRH reforms introduced around the launch of the Free Health Care initiative in Sierra Leone (2010) was quite thin, almost non-existent. This leaves us wondering what the role of evidence is in post-conflict, data-poor environments, when a very brief and politically determined ‘window of opportunity’ for reform opens. Yet, despite the limited use of evidence, most actors at central level regard these reforms as quite successful. It seems that basing the reforms not on evidence and ad hoc studies (there was no time for those!), but on inclusive discussions with all stakeholders and a pragmatic approach (for example, regarding the time and funds available), produced policy designs considered ‘successful’, at least in the narratives of the informants.
But when we take a step further and look at implementation and practices at local level, the story is quite different. First of all, the implementation of the HRH policies was (and is) challenging and problem-ridden. Secondly, our analysis shows that HRH practices are heavily influenced by the actors at local level (i.e., the District Medical Teams and the national and international NGOs in each district), and by their objectives and agendas. It seems that these local-level dynamics have the potential to modify HRH practice quite substantially, with effects that extend all the way to the health workers’ incentive package. Would things have been different, perhaps better, if the policies had been strictly ‘evidence-based’? Clearly, practice is defined not only by the policies as originally designed (whether evidence-based or not), but also by factors that are very specific, contextual, and of a political nature.
By the end of the conference, the positions of the two ‘camps’ on the role of evidence in policy-making and practice seemed to have become less divergent. To me, one of the reasons is that these views are partially dictated by the difference between health services research, focused on rather clinical issues, and health systems research, which by definition looks at broader, and arguably less technical and neutral, issues. These research topics draw on different disciplines and make use of different methods – which of course does not mean that some are less rigorous than others. Some of the interventions we (health systems researchers) look at are very difficult to evaluate with approaches such as randomization and, as our own struggle within ReBUILD with the availability and reliability of secondary data in Sierra Leone shows, quantitative analysis is simply not possible at times, especially in post-conflict, fragile settings with weak routine Health Information Systems.
Moreover, policy-making undoubtedly does not happen in “blank minds” where only evidence exists to determine decisions, but in “busy minds that have ideas on how the world is and ought to be” – as Tikki Pang reminded us in the concluding remarks. The health economists in the captivating last session told us of the moral and ethical dilemmas of cost-effectiveness analysis. And we certainly have to keep in mind the political bit of policy-making and implementation, especially when looking at health system reforms. Explicitly recognizing the political nature of these processes, and actively reflecting on what the role of evidence could be within them, should be the starting point for our work when we think about knowledge translation.
Maria Bertone has worked for the last 7 years with Ministries of Health in numerous African countries. She recently started a PhD at the London School of Hygiene and Tropical Medicine, focusing on the remuneration structure of health workers in Sierra Leone and its consequences for their performance and accountability. Her PhD fieldwork is funded by and carried out in collaboration with ReBUILD.