The trial was terminated early for benefit. Great, right? No, you are probably celebrating a bias.
Author: Henrique A. Puls / @HAPuls
Reviewed by: M. Fernanda Bellolio, M.D.
There are two major reasons to stop a trial earlier than planned for benefit: the ethical problem of continuing to randomize patients when the results already look favorable, and the idea that research resources should be redirected to other projects once the study question is answered.
Both reasons assume that the available data have already answered the question correctly; however, early data from randomized controlled trials (RCTs) are not reliable enough to do so. Let’s explore why.
This topic gained attention in the late 1980s, when a simulation study showed that RCTs terminated early for benefit would, on average, overestimate treatment effects. Almost two decades later, in 2010, a systematic review, the STOPIT-2 study (http://goo.gl/tt2Opn), compared treatment effects from truncated trials (those stopped early) with trials addressing the same questions that were not stopped early.
STOPIT-2 included 91 truncated and 424 non-truncated RCTs. Truncated RCTs showed larger effect sizes and were more likely to report benefit than non-truncated trials, and smaller truncated trials (those with fewer than 500 events) overestimated the treatment effect more than larger ones.
On average, trials stopped early overestimate treatment effects. Part of this is simple random statistical fluctuation, the kind that arises in any run of identical random processes; such fluctuation is fundamental and unavoidable. For example, if a fair coin is tossed many times, the ratio of heads to tails will be very close to one, but if the coin is tossed only a few times, a ratio far from one is common. The same situation occurs with the results of truncated RCTs, except that here we cannot tell whether the “heads/tails ratio” is far from one because of random fluctuation or because of a real effect of the intervention.
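To make the coin-toss analogy concrete, here is a minimal Python sketch (the toss counts and number of repeats are arbitrary illustrative choices): with 10 tosses the heads/tails ratio routinely lands far from one, while with 10,000 tosses it sits very close to one.

```python
import random

def heads_tails_ratio(n_tosses, rng):
    """Toss a fair coin n_tosses times and return the heads/tails ratio."""
    heads = sum(rng.random() < 0.5 for _ in range(n_tosses))
    tails = n_tosses - heads
    return heads / tails if tails else float("inf")

rng = random.Random(2024)
for n in (10, 100, 10_000):
    ratios = [round(heads_tails_ratio(n, rng), 2) for _ in range(5)]
    print(f"{n:>6} tosses -> five heads/tails ratios: {ratios}")
```

An interim analysis of a small trial is in exactly the position of the 10-toss experiment: the estimate it sees can sit far from the truth purely by chance.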
Two examples of truncated trials: (1) Intensive insulin therapy in critically ill patients was adopted into several practice guidelines after a trial stopped early in 2001 showed a 42% relative risk reduction in mortality with glucose kept below 8.3 mmol/L (150 mg/dL). This finding was later contradicted by systematic reviews, which reported no mortality difference and an increased risk of hypoglycemia with intensive insulin therapy.
(2) In 2001, a truncated RCT reported lower mortality with recombinant human activated protein C (rhAPC) in critically ill patients with sepsis. The result was received with enthusiasm, and in 2004 the Surviving Sepsis Campaign recommended rhAPC as part of its bundle of sepsis interventions. The recommendation was still in place in 2009, even though other studies had raised concerns about an increased risk of bleeding and about the validity of the original mortality findings. In 2011, rhAPC was withdrawn from the market.
“We are not able to distinguish between random fluctuation and treatment effect in an RCT terminated early”
Imagine the effect on clinical practice of an RCT stopped at an interim look where random fluctuation happened to make the treatment appear strongly beneficial.
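The same idea can be shown with a small simulation of a trial in which the treatment truly does nothing. The parameters below (10 interim looks, 50 patients per arm per look, a 30% event rate) are illustrative assumptions, not values from any trial discussed in this post; the point is that the apparent relative risk at early looks swings well away from 1.0 before settling down.

```python
import random

def simulate_null_trial(n_looks=10, patients_per_look=50, event_rate=0.30, seed=7):
    """Two-arm trial with NO true treatment effect, analysed at repeated
    interim looks. Prints the apparent relative risk (RR) at each look."""
    rng = random.Random(seed)
    events = {"treatment": 0, "control": 0}
    totals = {"treatment": 0, "control": 0}
    for look in range(1, n_looks + 1):
        for arm in ("treatment", "control"):
            for _ in range(patients_per_look):
                totals[arm] += 1
                events[arm] += rng.random() < event_rate  # same event rate in both arms
        if events["control"]:  # avoid division by zero at very early looks
            rr = (events["treatment"] / totals["treatment"]) / (events["control"] / totals["control"])
            print(f"look {look:2d}: {totals['treatment']:3d} patients per arm, apparent RR = {rr:.2f}")

simulate_null_trial()
```

A data monitoring committee looking only at one of the early, favorable-looking interim analyses could easily declare a benefit that does not exist.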
We believe that a better understanding of the extent to which truncated RCTs exaggerate treatment effects, and of the factors associated with the magnitude of this bias, can optimize trial design and data monitoring charters and may aid in interpreting the results of trials stopped early for benefit.
These findings should guide our practice as follows:
- Clinicians should interpret trials terminated early with caution, always remembering that this type of study tends to overestimate treatment effects.
- Trial investigators should keep this in mind when designing studies and, since larger trials with a higher number of events show less overestimation, should consider early stopping rules that demand a large number of events (see the sketch after this list).
- Researchers should account for this bias when performing meta-analyses, since including truncated trials can inflate the pooled estimates.
- Finally, all health practitioners should understand that data from an RCT stopped early are not as reliable as data from an RCT that reached its planned sample size.
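As a rough illustration of why demanding a large number of events helps, the sketch below simulates many trials with a modest true benefit (relative risk 0.80), tests naively at every interim look with an unadjusted z > 1.96, and allows stopping for benefit only once a minimum number of events has accrued. All parameters are illustrative assumptions, not a formal group-sequential design (such as O'Brien-Fleming boundaries), but the pattern matches STOPIT-2: the fewer events required before stopping, the more exaggerated the estimate reported by the trials that stop early.

```python
import math
import random
import statistics

def run_trial(min_events_to_stop, rng, true_rr=0.80, control_rate=0.30,
              patients_per_look=100, max_looks=20, z_crit=1.96):
    """One two-arm trial with interim looks. Stopping for benefit is allowed
    only once at least `min_events_to_stop` outcome events have accrued."""
    ev_t = ev_c = n_t = n_c = 0
    rr = float("nan")
    for look in range(1, max_looks + 1):
        for _ in range(patients_per_look):
            n_c += 1
            ev_c += rng.random() < control_rate            # control-arm event
            n_t += 1
            ev_t += rng.random() < control_rate * true_rr  # treatment-arm event
        if ev_t and ev_c:
            rr = (ev_t / n_t) / (ev_c / n_c)
            se = math.sqrt(1/ev_t - 1/n_t + 1/ev_c - 1/n_c)  # SE of log(RR)
            z = abs(math.log(rr)) / se
            if z > z_crit and ev_t + ev_c >= min_events_to_stop and look < max_looks:
                return rr, True   # stopped early for "benefit"
    return rr, False              # ran to its planned end

rng = random.Random(42)
for min_events in (50, 200, 500):
    results = [run_trial(min_events, rng) for _ in range(2000)]
    stopped = [rr for rr, truncated in results if truncated]
    print(f"min events before stopping {min_events:3d}: "
          f"{len(stopped):4d}/2000 trials stopped early, "
          f"mean RR at stopping = {statistics.mean(stopped):.2f} (true RR 0.80)")
```

Trials allowed to stop on a handful of events tend to report a markedly exaggerated relative risk, whereas trials required to accrue several hundred events before stopping report estimates much closer to the truth.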
References
- Pocock SJ, Hughes MD. Practical problems in interim analyses, with particular regard to estimation. Control Clin Trials 1989;10(4 Suppl):209S-21S.
- Bassler D, Briel M, et al. Stopping randomized trials early for benefit and estimation of treatment effects: systematic review and meta-regression analysis. JAMA 2010;303(12):1180-7.
- Guyatt GH, Briel M, et al. Problems of stopping trials early. 2012.
- Bassler D, Montori VM, et al. Early stopping of randomized clinical trials for overt efficacy is problematic. J Clin Epidemiol 2008;61(3):241-6.