The Mölnlycke O.R. blog
Grading the evidence – Efficacy vs effectiveness
In this post in a series on evidence-based medicine, Hayley Hughes, a member of the clinical research team at Mölnlycke Health Care, will look more closely at alternative sources of evidence when RCTs are not available.
All that glitters is not necessarily an RCT…
The randomised controlled trial (RCT) is rightly regarded as the gold standard of evidence. Randomisation exists to minimise allocation bias, so that the results obtained can be attributed to the real effect of the intervention rather than to chance.
But even the gold standard has its limitations (as do all trial designs), as discussed in a previous blog post, so one should not automatically assume that a particular RCT is the perfect evidence for a given intervention; it is always good practice to critically analyse it first. Conversely, lower-level evidence should not be discarded out of hand. Rather, it should be evaluated alongside the related RCTs to provide further evidence that supports or challenges the intervention.
One overarching reason to complement an RCT with other forms of evidence is that RCTs are often criticised for lacking external validity – that is, “whether results can be reasonably applied to a definable group of patients in a particular clinical setting in routine practice” – frequently leaving clinicians to judge for themselves how the results apply to their actual circumstances1. A colleague pointed out to me last year that there is a difference between proving efficacy in a strictly controlled environment and demonstrating effectiveness in a real-life scenario!
To address this, I would like to highlight two alternative types of trial that I find particularly useful when RCT evidence is lacking for my evaluations. They provide enough evidence to ascertain the safety and efficacy of a product's continued use, especially when backed up by in-use data with a low complaint profile.
Observational studies
The first type, the observational study, uses retrospective data on patients undergoing normal (and therefore undefined) care according to local practices. This means that external variables may not have been well documented in the patients’ notes, so it can be difficult to prove that a positive outcome was attributable to the intervention alone.
However, as a recent white paper stated2, some circumstances are better suited to an observational trial than to an RCT: for example, when using a placebo would be considered unethical in your hospital because it offers less protection than your current intervention, or when a new product has been in use for an interim period (due to the non-availability of your normal product) and you would like to compare observed efficacy or irritation. The white paper highlights that, with improved methodology, valuable data can be gained from observational trials. And an advocate of EBM recently argued that, even though these studies sit lower in the hierarchy, they are advantageous for understanding a specific patient’s needs in getting a specific outcome from a specific treatment3.
- What would stop you from gaining evidence for your preferred intervention at a local level via an observational study?
Pragmatic trials
Also referred to as “real-world effectiveness trials”, pragmatic trials are sometimes seen as more realistic than RCTs because they too are carried out in normal clinical surroundings, but with the inclusion criteria widened to admit patients who are on additional medication and/or suffering from other illnesses that might otherwise exclude them from an RCT4.
Whilst the additional external variables in these trials would dramatically increase the number of participants needed to achieve a significant result (and potentially complicate the analysis of multiple variables), staff and patients would face fewer interventions during the trial than in an RCT, so their normal routine would need to be altered very little. (A great plus when staffing shortages are an issue!)
Also, investigators can manage each aspect of a pragmatic trial more closely than an observational one because of its prospective (rather than retrospective) nature.
- What combination of practices or interventions would you include if a pragmatic trial were to be carried out at your hospital?
Combining the evidence
Right at the bottom of the hierarchy of evidence sits ‘expert judgement’. This is often discounted in evidence-based medicine (sometimes dismissed as “eminence-based practice”) because the opinion of a single medical professional may reflect not the most recent evidence but the interventions they have used since their initial medical training.
However, a technique called ‘evidence farming’ could ensure that expert judgement is logged systematically, and possibly graded to a higher level when assessed alongside RCTs and the national guidelines available in each country5. In essence, the health care professional uses the local guideline to formulate an infection-prevention plan for a specific patient, monitors the outcome, and adds the result to a database. Other health care professionals can then access and populate this database with the outcomes of their own practice, building up a local record of what works in that particular situation.
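To make the workflow above concrete, here is a minimal sketch of what such a shared outcome log might look like in code. This is purely illustrative: the class, method names and example plans are hypothetical, not part of any published evidence-farming system.

```python
# Hypothetical sketch of an "evidence farming" log: each clinician records
# the plan used for a patient and the observed outcome, and colleagues can
# query the shared record to see what has worked locally.
from collections import defaultdict


class EvidenceFarm:
    def __init__(self):
        # plan name -> list of recorded outcomes (True = positive outcome)
        self._records = defaultdict(list)

    def log_case(self, plan, positive_outcome):
        """Record the outcome observed for one patient under a given plan."""
        self._records[plan].append(bool(positive_outcome))

    def success_rate(self, plan):
        """Share of logged cases with a positive outcome, or None if no cases."""
        outcomes = self._records.get(plan)
        if not outcomes:
            return None
        return sum(outcomes) / len(outcomes)


# One clinician logs three cases; any colleague can then query the same log.
farm = EvidenceFarm()
farm.log_case("antiseptic dressing A", True)
farm.log_case("antiseptic dressing A", True)
farm.log_case("antiseptic dressing A", False)
print(farm.success_rate("antiseptic dressing A"))  # 2 of 3 cases positive
```

A real system would of course need patient anonymisation, structured outcome definitions and access control; the point here is only that systematically logged judgement becomes queryable data.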
To conclude, the fact that the “hierarchy” exists in the first place means that there IS a place for expert opinion and non-randomised studies. Each piece of evidence, whether prospective or retrospective, simply needs to be critically analysed to determine whether it is graded highly enough to be applicable to the local situation.
In the next post, we will look at specific questions that should be asked of published trials in an effort to critically appraise the different types of evidence and practical tips on how to grade them accordingly.
- Rothwell, P.M. (2005) ‘External validity of randomised controlled trials: “To whom do the results of this trial apply?”’, The Lancet, vol. 365, pp. 82–93.
- Parexel (2012) Unlocking the Value of Observational Research, Parexel White Paper.
- Bluhm, R. (2010) ‘Evidence-based medicine and philosophy of science’, Journal of Evaluation in Clinical Practice, vol. 16, no. 2, pp. 363–364.
- Freemantle, N. and Strack, T. (2010) ‘Real-world effectiveness of new medicines should be evaluated by appropriately designed clinical trials’, Journal of Clinical Epidemiology, vol. 63, no. 10, pp. 1053–1058.
- Hay, M.C., Weisner, T.S., Subramanian, S., Duan, N., Niedzinski, E.J. and Kravitz, R.L. (2008) ‘Harnessing experience: Exploring the gap between evidence-based medicine and clinical practice’, Journal of Evaluation in Clinical Practice, vol. 14, no. 5, pp. 707–713.