Editorial

Indian Pediatrics 2001; 38: 1349-1353

Using Evidence to Improve Clinical Practice
In the early 1990s, some policy makers and clinicians expected evidence-based approaches to lead a revolution in care, delivering a cheaper, more efficient service. In the event, evidence-based medicine is having a substantive impact on the way health professionals think and work, rather than serving as a prescription for clinical reform. Evidence-based approaches are now central to undergraduate and postgraduate medical training, and it is clear that health professionals need to learn new skills in interpreting and using evidence in clinical practice.

Expectations and Realities

Why then have we not seen a revolution in health care, with substantive changes in what we do? The mechanism for change is appealing: take a health problem, carry out systematic reviews of benefits and harms, use the information to decide best practice, then implement it. The problem is that this process rests on three assumptions: that the knowledge is there, hidden away in the existing literature; that knowledge is pure in some way and does not need interpreting; and that knowledge determines behavior. Unfortunately, experience suggests that none of these assumptions holds true.

Consider the first assumption, that the knowledge already exists in the literature. Systematic reviews generate new knowledge when they bring together many small studies. For example, a Cochrane Review of amodiaquine for malaria, drawing on many small studies mainly from Africa, showed convincingly that the drug was more effective than chloroquine(2,4). Systematic reviews may also demonstrate uncertainty: growth monitoring in children is widely practiced in developing countries, but the trial from George and colleagues showed no benefit, and the systematic review of the topic raised doubts about whether the practice is worthwhile(5,6). Similarly, the World Bank promotes routine anthelmintic drugs for school children every six months: the trial from Awasthi and colleagues highlighted the potential of the intervention, and the systematic review underlined this but expressed the opinion that further research was needed before countries invested their scarce resources in large community programs(7,8). Taking an example relevant to hospital doctors, the systematic review of steroids for tubercular meningitis suggested potential benefit, but this remains unproven(9).

The second assumption is that research evidence is somehow pure, and that people producing systematic reviews are engaged in a process of presenting information impartially for the reader to decide. But systematic reviews are like any other form of research: subject to bias and influence from the researchers and others. As we have seen recently, people have views about what data should be taken into account and how they should be presented. Cochrane reviewers and editors could not agree on what data to include, or how the findings should be interpreted, in a review of breast cancer screening(10,11). Exactly the same data in a meta-analysis can be interpreted in different ways: we showed a meta-analysis of artemether versus quinine for severe malaria to a clinician and a pharmacist working in the same hospital in Zimbabwe, and they reached diametrically opposite conclusions about which drug was best (PG, personal observations)(12).

The third assumption is that knowledge leads providers to change their practice, and yet we all know this is simplistic. Decisions about health care are based on habit, past training, what the Professor does and what the health service allows; and, at times, they may depend on an individual clinician's preferences, including the benefits to him or her from the sale of drugs or diagnostics arising from the consultation(13).

Changing for the Better

Systematic reviews rarely provide simple answers to best practice, and it is naive to expect this. What they have done is to shake up the medical profession globally: for years we have accepted "best-guess" as sufficient to make decisions about other people's lives, but the evidence-based approach is forcing us to be honest about our uncertainty. We need this to move forward. We have to admit what we do not know in order to delineate the research needed to find the answers. The simple fact that people are questioning practices is a good thing, as it helps clinicians to learn and move medicine forward.

Systematic reviews sometimes provide clear messages. Where current practice varies from this research evidence, there is a mandate for change. Organizations, Ministries and clinicians are grappling with how to identify these areas, and how to bring about the change required to make care more compatible with the research evidence. Active players include the World Health Organization's technical departments and Essential Drugs Program; national governments in several developing countries; and individual hospital and healthcare providers scattered across the world. The Effective Health Care Alliance Program is a network of such people, and our collective experience suggests the following three practical points may help people and organizations that want to translate research into policy and practice.

Focus

Translating research into policy and practice needs a focus. Suitable topics should show benefit over existing interventions, and be cheap and easy to implement. For example, reduced osmolarity oral rehydration salts seem to be more effective for diarrhea in children, and a change in formula is currently being considered by WHO and UNICEF; and caregiver support during labor seems to help both mother and infant(14,15). People responsible for health care therefore need to stay in touch with research evidence and seek out the reviews that have important implications for policies and guidelines.

Change is also a priority for interventions which are widely practised but where the evidence suggests no benefit. Change in practice is even more important if these interventions use resources, have the potential to do harm, or are unpleasant for the patient. For example, routine episiotomy for vaginal delivery is still widely practised in some developing countries, despite evidence that it causes harm, prolongs hospital stay, and hurts(16,17). Human albumin is widely used to treat shock, yet it is expensive, has no benefit over saline, and may even be associated with higher mortality(18). These are target areas where change could improve care and reduce costs.

Understand the Science

Policy makers and clinicians commonly view their own populations and patients as special, somehow different from everyone else, and as people to whom the evidence from reviews does not apply. There are dangers here: if this were the case, why bother to do research in the first place? However, these comments often reflect the fact that evidence-based specialists have not communicated the science behind their methods well enough for individuals to critically appraise systematic reviews themselves and use the information in their own setting. Policy makers, clinicians and user group agents need to understand research evidence and have skills in interpreting and using it. People need to understand where the messages are coming from and how they are generated, particularly as they may change over time. We have found that engaging people in conducting systematic reviews is a good way of helping them understand what evidence is, and that formal training in critical appraisal of systematic reviews is central to interpreting them.
Change is a Social Process

Health care provision takes place within social networks and organizations that are like living organisms, with complicated politics and random happenings influencing every event and decision. It is clear from systematic reviews of behavior change that effective interventions for change in practice should be multi-dimensional and focused(19). Providers need small-group, interactive educational interventions that engage them in the process of change, encourage critical thinking about current practice, and increase individual capability(20). It is even more important to remember that change is likely to be incremental and random(21). Classic linear models that describe change as a logical and planned process are inappropriately applied to health professional behavior. Changes in practice are more likely to originate in small experiments that, if successful, are accepted and transformed into action. Complexity science acknowledges the complicated nature of health systems and the interactions between those working within them, and suggests that an organic approach to change is best(22). So we need to keep trying different approaches to change, and not give up when things do not work, as change cannot be predicted. What is important is to find approaches that do seem to be effective, and to look at ways to apply them in other settings.

With collaborators, we have been exploring ways to make childbirth care in developing countries more evidence-based. The 'Better Births Initiative' is an innovative approach using attractive, interactive materials (including workbooks, posters, video presentations, and self-audit) drawing on the principles of evidence-based medicine and relevant systematic reviews. This is being tested in educational workshops with midwives in South Africa (Table I)(23). Early results suggest that complexity theory holds true: change happens sometimes, it cannot be predicted, and it takes time. We believe that translating evidence into practice is a complex process to institutionalize, but it needs us all to understand the principles and see where each of us as individuals has an opportunity to influence change and do our job better.

Acknowledgements

To the Department for International Development, UK, for their funding. The views expressed are entirely those of the authors.

Competing interests: We are both employed through a UK Government funded research grant that aims to promote evidence-based approaches in middle and low income countries.
Paul Garner,
References