European Multidisciplinary Cancer Congress (EMCC) 2011, 23-27 September, Stockholm
Experimenting with smaller, faster clinical trials
Dr Marie-Cecile Le Deley – Institut Gustave-Roussy, Villejuif, France
I’m very pleased to have the opportunity to present the work I performed with Dan Sargent, who is here, and Karla Ballman at the Mayo Clinic, Rochester, Minnesota. With the advent of personalised medicine, we wondered whether smaller, faster clinical trials could improve cancer patient survival. With increased knowledge of tumour biology, common cancers are more and more frequently recognised as consisting of small subsets of patients with particular tumour abnormalities. These abnormalities may be targeted by specific therapies; a good example is ALK inhibitors in non-small cell lung cancer. The issue is that in rarer diseases it becomes very complicated to evaluate the many treatments that may be available for testing.
A few words to define what we’re talking about: randomised trials are clinical studies performed to demonstrate the efficacy of a new treatment, comparing the new treatment to the best available treatment, the so-called standard or control treatment. Traditional trials are designed to avoid wrong decisions; in particular, we traditionally use stringent evidence criteria, adopting the new treatment as the new standard only if the false-positive rate, what we call the alpha error, is very low. Consequently, many patients are required, and the issue is that large clinical trials take a very long time to reach a definitive result, especially if the disease is rare. So by running a single trial over many years, we cannot meanwhile evaluate the other drugs that become available.
So what happens if we reduce the sample size of clinical trials, and if we relax the evidence criteria for adopting a new standard? To answer these two questions, we simulated a series of clinical trials run over a fifteen-year research period, assuming, for example, one hundred patients per year, with various design parameters. We varied the number of trials run over the fifteen-year period, and consequently the trial sample size, from two big trials to ten smaller trials. We also varied the evidence criteria, the alpha error of the tests, and then we measured the expected survival gain obtained at the fifteen-year research horizon. For example, we compared the expected gain at fifteen years after two large randomised trials with stringent evidence criteria to the gain expected after running seven successive trials over fifteen years with relaxed evidence criteria. We observed that the expected gain is much higher with a strategy based on many smaller, faster trials with relaxed evidence criteria than with traditional designs.
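The kind of comparison described above can be sketched in a short Monte-Carlo simulation. The model below is purely illustrative and is not the authors' actual simulation: the effect-size distribution, the outcome model, and every parameter value (effect_sd, outcome_sd, the accrual of 1,500 patients over fifteen years, the number of replicates) are hypothetical choices made for this sketch.

```python
import math
import random
from statistics import NormalDist

def simulate_strategy(n_trials, alpha, total_patients=1500,
                      effect_sd=0.1, outcome_sd=1.0,
                      n_sim=2000, seed=42):
    """Mean cumulative true survival gain at the research horizon.

    Hypothetical model (assumptions, not the study's design):
    - total_patients accrue over the horizon (100/year x 15 years)
      and are split evenly across n_trials successive two-arm trials;
    - each candidate treatment's true benefit over the current
      standard is drawn from Normal(0, effect_sd);
    - the trial estimates that benefit with standard error
      2 * outcome_sd / sqrt(n), and the candidate replaces the
      standard when a one-sided z-test rejects at level alpha.
    """
    rng = random.Random(seed)
    n = total_patients // n_trials            # patients per trial
    se = 2 * outcome_sd / math.sqrt(n)        # SE of the estimated effect
    z_crit = NormalDist().inv_cdf(1 - alpha)  # one-sided critical value
    total = 0.0
    for _ in range(n_sim):
        gain = 0.0
        for _ in range(n_trials):
            true_delta = rng.gauss(0.0, effect_sd)      # true added benefit
            observed = true_delta + rng.gauss(0.0, se)  # trial estimate
            if observed > z_crit * se:                  # "positive" trial
                gain += true_delta  # adopted: may be a false positive
        total += gain
    return total / n_sim

# Two large trials at alpha = 2.5% vs seven small trials at alpha = 20%
conservative = simulate_strategy(n_trials=2, alpha=0.025)
aggressive = simulate_strategy(n_trials=7, alpha=0.20)
```

Under these assumed parameters the many-small-trials strategy yields the larger expected gain, in line with the result reported in the talk; note that the sketch also captures the downside discussed next, since a wrongly adopted treatment contributes a negative true_delta to the cumulative gain. The point is the mechanism, not the specific numbers.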
The downside of this approach is that by reducing the sample size of trials and relaxing the evidence criteria, we might select as a new temporary standard a treatment that does not actually work better than the existing best therapy. But we can demonstrate that there is a very low chance of a truly poorer outcome, and the error can be quickly remedied if many drugs are tested in succession.
Additionally, with new targeted agents there are likely to be fewer safety issues than with cytotoxic chemotherapy. For all these reasons, small backward steps can be judged acceptable if they are the price of a greater long-term benefit.
In conclusion, in rarer diseases where there is no way to increase the trial accrual rate, and where many drugs are available for testing, the ability to test the most promising agents is hindered by the need to invest substantial resources in running a single trial for many years. Gains in cancer survival could be improved by running smaller trials with less stringent evidence criteria. Moreover, by running smaller trials, a greater number of treatments can be tested, and new drugs could become available more quickly.
Thank you for your attention.