I see two main issues. One is that even though biomarkers are an increasingly important part of oncology, it is very difficult to align the biomarker development process effectively with the drug development process. A biomarker that is predictive for the use of a therapy has to be developed very early in the sequence of trials for the drug, and that requires an investment of time and resources before it is clear that the drug development itself will succeed. So that’s one challenge.
The other challenge is standardisation. Once it is clear that a predictive biomarker is going to be useful, there will be many ways to measure it. After a drug is approved there may be an approved companion diagnostic, or more than one, and there may also be laboratory-developed tests performed at single institutions. All of those tests claim to measure the same thing, but there is no straightforward way to compare them to find out how they might differ, much less to find out whether those differences have an actual impact on patient outcomes. So those are two of the challenges I think we face.
What important points need to be kept in mind whilst creating and running biomarker-driven clinical trials?
One of the most critical decisions at the very beginning is whether or not to restrict enrolment in the trial on the basis of a test for some kind of biomarker. If you fail to enrich for cases that are more likely to benefit from the drug, you risk a negative trial and abandoning a drug that might actually benefit a subset of patients. For that reason, many of the larger trials are now designed with pre-planned subset analyses to investigate the value of a predictive biomarker.
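A rough back-of-the-envelope sketch of that risk, using assumed numbers rather than figures from any particular trial: if only a fraction of patients carry the biomarker and only they benefit, an all-comers trial sees a diluted average effect, and the number of patients needed to detect it grows with the square of the dilution.

```python
# Back-of-the-envelope sketch with assumed numbers: how biomarker prevalence
# dilutes the average treatment effect in an all-comers trial, and what that
# does to the approximate sample size needed to detect it.
from math import ceil

def n_per_arm(delta, sigma=1.0, z_alpha=1.96, z_beta=0.84):
    """Approximate patients per arm for a two-sample z-test (~80% power, alpha = 0.05)."""
    return ceil(2 * ((z_alpha + z_beta) * sigma / delta) ** 2)

effect_in_positives = 0.5  # assumed standardised benefit in biomarker-positive patients
for prevalence in (1.0, 0.5, 0.2):
    # biomarker-negative patients are assumed to receive no benefit at all
    diluted = prevalence * effect_in_positives
    print(f"prevalence {prevalence:.0%}: average effect {diluted:.2f}, "
          f"~{n_per_arm(diluted)} patients per arm")
```

Under these assumptions, dropping prevalence from 100% to 20% cuts the average effect five-fold and inflates the required sample size roughly twenty-five-fold, which is exactly why an unenriched trial can come out negative for a drug that works well in a subset.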
If a biomarker is going to be used, it is critical to understand the limitations of whatever test is used to measure it. Some kinds of tests are much more robustly quantitative than others, and that needs to be understood when a particular biomarker test is selected for use in a trial.
In larger trials there may be an option to permit testing at the local trial sites, as opposed to requiring testing at a central laboratory. The trade-off is that while the inter-laboratory heterogeneity introduced by local testing may dilute the signal of the drug effect to some extent, local testing is much more likely to reflect how the real-world population of patients will actually receive the drug. So those are some of the things to keep in mind.
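A minimal simulation of that dilution, again with assumed numbers: if local tests have imperfect sensitivity and specificity relative to true biomarker status, some true biomarker-negative patients test positive and enrol, which attenuates the effect observed in the enrolled population.

```python
# Minimal simulation (assumed numbers, not from any real assay): imperfect
# local-lab accuracy lets some true biomarker-negative patients test positive
# and enrol, attenuating the drug effect observed in the trial population.
import random

random.seed(1)
TRUE_EFFECT = 0.5   # assumed benefit in true biomarker-positive patients
PREVALENCE = 0.3    # assumed true biomarker prevalence

def observed_effect(sensitivity, specificity, n=200_000):
    """Average benefit among patients who test positive and are therefore enrolled."""
    enrolled, total_benefit = 0, 0.0
    for _ in range(n):
        truly_positive = random.random() < PREVALENCE
        p_test_pos = sensitivity if truly_positive else 1 - specificity
        if random.random() < p_test_pos:
            enrolled += 1
            total_benefit += TRUE_EFFECT if truly_positive else 0.0
    return total_benefit / enrolled

# a well-controlled central lab vs. more heterogeneous local testing (assumed accuracies)
print("central lab (sens/spec 0.98):", round(observed_effect(0.98, 0.98), 3))
print("local labs  (sens/spec 0.85):", round(observed_effect(0.85, 0.85), 3))
```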
How can these issues be addressed?
Really, everyone has a part to play. Oncologists and drug developers need to understand something about biomarker development. They need to appreciate the difference between a biomarker, which is a biological characteristic or state, and the test that is used to measure that biomarker. Drug developers need to understand something about how diagnostic devices are regulated, which is very different from the way drugs are regulated. If a biomarker test is going to be employed in a trial, the people on the drug side need to understand how much time it takes to develop even a prototype biomarker assay that is fit for purpose for that trial.
But most of all, I think that everyone needs to appreciate that if a biomarker proves to be necessary to use the drug, then testing for the biomarker in the community after the drug is approved is going to look very different from that prototype assay that was used in the trial.
On the laboratory side, the laboratory people involved in any co-development effort certainly need to understand the drug development process, and in particular the need for speed in getting trials up and running and completed. They need to be full partners in the whole process, from beginning to end, because the strategy for biomarker testing may evolve over the sequence of trials as more is learned about the way the drug works.
And the story doesn’t end with approval. Drug and assay co-development may be a good way to develop marketable products, but it may not be the best thing for the public, whether in terms of public health or cost containment. So regulators have issues of their own. There are ways to design trials for therapeutics that do not rely on a central laboratory or on a single version of a biomarker test, and regulators and developers alike should be open to considering them. It is important, again, to keep in mind that certain kinds of testing platforms may be preferable because they are less vulnerable to inter-laboratory heterogeneity and lack of concordance.
I’ll also point out that just as it has been possible to achieve approvals of biomarker-directed drugs in histology-agnostic ways, it has been possible to achieve approvals that were agnostic to the type of biomarker testing performed. All of these are possibilities that regulators as well as developers should keep in mind.
Finally, payers can certainly use the regulatory approval of a companion diagnostic as evidence that the test is reliable, but they should not be locked into specific tests when more cost-effective alternatives may be developed out in the community, whether by other manufacturers or by individual laboratories. These can also be reliable, particularly where there are proficiency testing schemes or external quality assessments in place. So all the participants in this drug development system need to consider these things and play their part.
What does the future look like for drug and biomarker development?
I see a shift towards testing platforms that generate more information from a piece of tissue; think, for instance, of a panel of biomarkers assessed on an NGS platform as opposed to a one-off immunohistochemistry test. I also see a definite shift towards building more assays that can measure biomarkers by sampling the blood rather than requiring a piece of tumour tissue. Testing the blood is certainly more practical in early-phase trials, where you are often interested in pharmacodynamic questions about how the drug works and need to test at multiple time points, not just at baseline.
But even after you have a drug approved and you’re trying to consider whether to prescribe it, it’s much better to run a test on a sample taken from the patient at that time as opposed to relying on a test that may have been performed in the past on a piece of archival tissue that was collected under uncontrolled pre-analytic conditions. So those are two shifts that I definitely expect to see as we move ahead.
Is there anything you would like to add?
I’d just like to encourage as much dialogue as possible among all of the players in the diagnostic testing and oncology drug development communities. Everyone has a part to play, and these are big, difficult questions. None of us is going to solve them alone, so as much communication and dialogue as possible is the way to go.