Move over, RCT

3 minute read


The gold standard of clinical trial design is losing some of its shine as smarter, more adaptive designs prove to be faster, cheaper and more likely to identify a true benefit of an intervention.

Professor John Isaacs from Newcastle University in the UK, addressing the APLAR conference, said there were problems associated with the conventional double-blind randomised controlled trial, or DBRCT.

“By definition you don’t know whether a drug’s going to work when you start a trial,” Professor Isaacs said. “Once an RCT starts you can’t change it; it’d be cheating to change something, and so there’s a risk of failure.

“These trials require large numbers of patients, they have strict inclusion/exclusion criteria, which is not really real life. They’re very expensive. And they have quite modest power, especially if you want to start looking at subgroups.”

He outlined some alternatives: adaptive trials, master protocol designs such as umbrella, basket and platform trials, and cohort multiple RCTs, also known as trials within cohorts (TwiCs).

All required significantly more work up front at the design stage and the involvement of a very good statistician, along with particular vigilance over the risk of bias and type I errors.

“If you haven’t got a good statistician, forget it,” Professor Isaacs said.

However, once under way they used fewer subjects and took less time, making them cheaper, and they improved the likelihood of detecting a true effect.

Adaptive designs were a very different concept from the DBRCT, based on the principle that every patient who completes the trial contributes new knowledge.

“You accumulate information that really should reduce the uncertainty regarding the treatment,” Professor Isaacs said.
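As a rough illustration of that principle (not from the talk), the sketch below updates a Bayesian beta-binomial model one patient at a time; the uniform prior and the sequence of responses are invented for illustration.

```python
from scipy.stats import beta

# Hypothetical illustration: a Beta(1, 1) (uniform) prior on the
# response rate is updated as each patient's outcome comes in.
prior_a, prior_b = 1, 1
outcomes = [1, 0, 1, 1, 0, 1, 1, 1]  # invented responses (1 = responder)

a, b = prior_a, prior_b
for i, y in enumerate(outcomes, start=1):
    a += y       # one more success
    b += 1 - y   # or one more failure
    lo, hi = beta.ppf([0.025, 0.975], a, b)
    print(f"after patient {i}: 95% credible interval ({lo:.2f}, {hi:.2f})")

# The interval shrinks as data accumulate: each completed patient
# reduces the uncertainty about the true response rate.
```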

Adaptive designs, he said, were “not cheating” as long as you prospectively planned your interim analyses and potential modifications.

Two subtypes were confirmatory adaptive designs, best for studying efficacy, and exploratory adaptive designs, best for finding safe and effective doses.

An exploratory trial would start with a broad range of doses, and add patients to the better-performing groups and remove them from the badly performing ones, rather than simply upping the dose until the subjects got a toxic effect.
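A toy simulation of that kind of response-adaptive allocation might look like the following; the dose labels, true response rates and allocation rule are all assumptions for illustration, not details of any real trial.

```python
import random

# Hypothetical true response rates for four dose groups (unknown to the trial).
TRUE_RATES = {"5mg": 0.15, "10mg": 0.30, "20mg": 0.50, "40mg": 0.55}

results = {d: {"n": 0, "resp": 0} for d in TRUE_RATES}

def observed_rate(dose):
    r = results[dose]
    # Smoothed estimate (one pseudo-success and one pseudo-failure).
    return (r["resp"] + 1) / (r["n"] + 2)

random.seed(0)
for patient in range(80):
    if patient < 20:
        # Equal round-robin allocation during a short burn-in phase.
        dose = list(TRUE_RATES)[patient % 4]
    else:
        # Allocate in proportion to current observed response rates,
        # so better-performing groups accrue more patients and
        # badly performing ones accrue fewer.
        weights = [observed_rate(d) for d in TRUE_RATES]
        dose = random.choices(list(TRUE_RATES), weights=weights)[0]
    results[dose]["n"] += 1
    results[dose]["resp"] += random.random() < TRUE_RATES[dose]

for dose, r in results.items():
    print(dose, r)
```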

An example was the trial of the cyclin-dependent kinase inhibitor seliciclib for rheumatoid arthritis, which used the Bayesian continual reassessment method.

“Fewer patients were exposed to toxic doses and fewer patients ended up on low doses,” Professor Isaacs said.
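For readers curious about the mechanics, here is a bare-bones, grid-based sketch of one continual reassessment method step; the dose-toxicity “skeleton”, target toxicity rate and trial history are invented, and the seliciclib trial’s actual model will have differed.

```python
import numpy as np

# Power model: P(toxicity at dose i) = skeleton[i] ** exp(a), a ~ N(0, 1.34).
skeleton = np.array([0.05, 0.12, 0.25, 0.40])  # assumed prior toxicity guesses
target = 0.25                                  # assumed target toxicity rate

a_grid = np.linspace(-3, 3, 601)
prior = np.exp(-a_grid**2 / (2 * 1.34))        # unnormalised normal prior

# Invented trial history: (dose index, toxicity 0/1) for each patient so far.
history = [(0, 0), (1, 0), (1, 0), (2, 1), (2, 0)]

log_lik = np.zeros_like(a_grid)
for dose, tox in history:
    p = skeleton[dose] ** np.exp(a_grid)
    log_lik += np.log(p if tox else 1 - p)
post = prior * np.exp(log_lik)
post /= post.sum()

# Posterior-mean toxicity at each dose; recommend the dose closest to target,
# rather than simply escalating until toxicity appears.
est = np.array([(skeleton[d] ** np.exp(a_grid) * post).sum() for d in range(4)])
next_dose = int(np.argmin(np.abs(est - target)))
print("estimated toxicity per dose:", est.round(3), "-> next dose index:", next_dose)
```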

A technique called sample size re-estimation allowed the size of the trial to change to reduce the chance of a neutral result. “If it looks like you’ll get an indeterminate result you can change the power of your trial to ensure you get a positive or negative result at the end.”
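A minimal sketch of one common re-estimation rule is shown below; it re-solves the standard power formula using the interim z-statistic, and the interim numbers are invented. Real designs (promising-zone methods, for example) also adjust the final analysis to protect the type I error.

```python
from math import sqrt
from scipy.stats import norm

def reestimated_n(z_interim, n_interim, n_planned, alpha=0.05,
                  target_power=0.9, n_max=None):
    """Illustrative unblinded sample size re-estimation.

    Treats the interim z-statistic as the effect estimate and re-solves
    the usual power formula for the sample size needed to hit the
    target power, capped at n_max.
    """
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(target_power)
    # Standardised effect implied by the interim data.
    theta_hat = z_interim / sqrt(n_interim)
    if theta_hat <= 0:
        return n_planned                 # no signal: keep the planned size
    n_needed = ((z_alpha + z_beta) / theta_hat) ** 2
    n_new = max(n_planned, int(round(n_needed)))
    return min(n_new, n_max) if n_max else n_new

# Invented interim look: z = 1.2 after 100 of a planned 200 subjects.
print(reestimated_n(z_interim=1.2, n_interim=100, n_planned=200, n_max=400))
```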

Population enrichment designs worked well when you suspected a treatment worked in a particular subset of patients. After an interim analysis you could add more biomarker-positive patients, increasing the likelihood of seeing a result at the end.
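The two-stage logic can be sketched as a small simulation; the biomarker groups, effect sizes and the 10-percentage-point enrichment threshold are all assumptions for illustration.

```python
import random

random.seed(1)
# Assumed truth: the drug works mainly in biomarker-positive patients.
CONTROL = 0.20
EFFECT = {"bm_pos": 0.35, "bm_neg": 0.05}  # added response rate on treatment

def enrol(n, groups):
    """Enrol n patients drawn from the allowed biomarker groups."""
    data = []
    for _ in range(n):
        g = random.choice(groups)
        treated = random.random() < 0.5
        p = CONTROL + (EFFECT[g] if treated else 0.0)
        data.append((g, treated, random.random() < p))
    return data

def resp_rate(data, group, treated):
    xs = [r for g, t, r in data if g == group and t == treated]
    return sum(xs) / len(xs) if xs else 0.0

# Stage 1: all-comers.
stage1 = enrol(100, ["bm_pos", "bm_neg"])

# Interim analysis: if the effect is concentrated in biomarker-positives,
# enrich stage 2 with biomarker-positive patients only.
pos_effect = resp_rate(stage1, "bm_pos", True) - resp_rate(stage1, "bm_pos", False)
neg_effect = resp_rate(stage1, "bm_neg", True) - resp_rate(stage1, "bm_neg", False)
groups2 = ["bm_pos"] if pos_effect > 0.1 and neg_effect < 0.1 else ["bm_pos", "bm_neg"]
stage2 = enrol(100, groups2)
print("stage 2 enrols:", groups2)
```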

He gave the INHANCE indacaterol trial as an example.

Master protocols were appropriate for biomarker-driven stratification studies and pathology-driven studies.

Professor Isaacs said regulators allowed these studies but were still a bit wary, while journals were enthusiastic about publishing the novel trial designs.
