The Mölnlycke O.R. blog
Grading the evidence
In a series of posts, Hayley Hughes, a member of the clinical research team at Mölnlycke Health Care, will examine the use and growing importance of evidence-based medicine and the critical evaluation of guidelines that are used in everyday medical practice.
Common sense: Would you jump without a parachute?
Let’s say you really want to go skydiving. Would you consider jumping without a parachute? Probably not. Chances are you will opt for the parachute, even though you have no hard scientific evidence, in the form of a randomised controlled trial, proving that a parachute provides safety or protection. A colleague recently sent me a paper in which the authors carried out a systematic review of the highest level of evidence, randomised controlled trials (RCTs), to “determine whether parachutes are effective in preventing major trauma related to gravitational challenge”1. They couldn’t find any RCTs covering the topic but challenged any evidence-based medicine (EBM) activists to organise (and participate in) a “double-blind, randomised, placebo-controlled, crossover trial of the parachute”. In other words, jump out of a plane without a parachute to support the theory that parachutes are actually beneficial to your health!
This is an extreme example that brings to light an important question – when you need to make a treatment decision, how do you decide? On what evidence do you base your decision? Is a large-scale RCT without any other evidence enough, particularly if the RCT employed questionable methodology or had too few patients? Is a smaller-scale RCT with an additional large-scale observation-based study sufficient? This is where the concepts of evidence-based medicine and grading that evidence come into focus.
Evidence-based medicine (EBM) has been used within healthcare since the early 1990s; it collates the available published evidence to evaluate scientific observations for actual medical practice2. Before then, healthcare workers had only their initial training to rely on, and although years of personal experience informed their decisions, those decisions usually rested on the procedures they had used ever since that training. This meant that a healthcare worker might not have been applying the current standard of care, possibly with disastrous consequences for the patient! Now, the practice of EBM means that different sources of clinical evidence can be assessed to make accurate decisions about a particular medical practice or intervention.
The hierarchy of evidence
Randomised controlled trials (RCTs) are widely perceived as the highest level of evidence available3, ensuring that every aspect of the trial has been controlled and the outcome is not due to something as simple as chance. However, in an environment in which lives are on the line and in situations in which RCTs are prohibitive or difficult to undertake (such as ethical constraints upon a new surgical method or difficulties in generating the financial and staff resources for RCTs2), more evidence is always better than less.
One new EBM rating system suggested for therapeutic studies classifies evidence into five levels3:
- Level I: RCTs or systematic reviews of level-I RCTs.
- Level II: prospective cohort studies, poor-quality RCTs or systematic reviews of level-II studies.
- Level III: case-control studies, retrospective cohort studies or systematic reviews of level-III studies.
- Level IV: case series.
- Level V: expert opinion.
On critical analysis of an article, a lower level of evidence may in fact reflect the conditions at a local hospital more closely than the conditions used in an RCT, and so may actually be more relevant when assessing a local change in practice.
RCTs, as long as they are performed and reported correctly, definitely have their rightful place atop the hierarchy of evidence, but grading other forms of evidence and putting them into use when they are applicable – without immediately discounting them based on their lack of primacy in the hierarchical structure – is equally valid. “We simply require the soundest evidence available to influence our surgical decisions.”2
Clinical evaluations carried out in accordance with EU guidelines for clinical evaluation4 to help certify medical devices, for example, now consider all these types of literature, ensuring a correct appraisal of the evidence on a device’s performance and safety. Why would such a process not be transferred to the ward or the operating room?
The bottom line: the absence of an RCT should not, by itself, negate the value of other forms of evidence. “Lesser” forms of evidence often exist in great abundance to support various treatment recommendations, including in cases where an RCT has not even been possible3. These forms of evidence take multiple sources and observations into account.
In future posts, we will take a look at how to critically appraise the different types of evidence and grade them accordingly as per newer analyses, as well as how EBM in practice is applied to infection control with regard to surgical procedures.
- Smith GCS, Pell JP. Parachute use to prevent death and major trauma related to gravitational challenge: systematic review of randomised controlled trials. BMJ 2003;327:1459–61.
- Horan FT. Judging the evidence. J Bone Joint Surg [Br] 2005;87-B:1589–90.
- Wright JG, Swiontkowski MF, Heckman JD. Introducing levels of evidence to The Journal. J Bone Joint Surg Am 2003;85-A(1):1–3.
- Clinical evaluation: a guide for manufacturers and notified bodies. MEDDEV 2.7.1 Rev 3.