3rd EurocanPlatform Translational Research Course
Cancer systems biology
Dr Martin Peifer - University Of Cologne, Cologne, Germany
First of all, systems biology, or systems medicine as it was called today, is not really well defined, but what I take from it is that it is a very interdisciplinary field where we join the more experimental disciplines, like medicine and biology, with the more mathematical ones, such as mathematics, informatics and, of course, physics. I am actually a trained physicist, so I fit well into this area. The challenge is that both sides, the experimental and the more mathematical, have to communicate with each other, so it requires mutual learning to develop a common language.
A perfect example of this exercise is genomic data. We sequence a lot of tumour genomes and transcriptomes, the transcribed part of the tumours, to decipher what is driving a particular tumour type. This generates very large and complex data sets that have to be analysed on high-performance computers in order to identify the mutations and, subsequently, the really relevant ones. For that you need very precise statistical and mathematical methods to work out what is causing these tumours.
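As a rough illustration of the kind of statistics involved (this is a minimal sketch, not the pipeline used in the studies described here), one can ask whether a gene carries more mutations across a tumour cohort than a simple background mutation rate would predict. The gene size, mutation counts and background rate below are invented for the example.

```python
# Hedged sketch: binomial test of whether a gene is mutated more often across a
# cohort than expected from a flat background mutation rate. All numbers are
# hypothetical; real driver-gene detection uses far more refined models.
from scipy.stats import binom

def driver_p_value(observed_mutations, covered_bases, n_tumours,
                   background_rate_per_mb=3.0):
    """P(X >= observed) under a simple binomial background model."""
    p = background_rate_per_mb / 1e6      # per-base mutation probability
    n = covered_bases * n_tumours         # total bases at risk in the cohort
    # binom.sf gives P(X > k), so shift by one to get P(X >= k)
    return binom.sf(observed_mutations - 1, n, p)

# Hypothetical example: a 2 kb gene mutated 25 times across 100 tumours
print(driver_p_value(observed_mutations=25, covered_bases=2000, n_tumours=100))
```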
As an example, we applied this to small cell lung cancer, a very aggressive form of lung cancer that accounts for roughly 30% of all lung cancers. It is very heavily associated with smoking, and it is one of the rare cancers where you really have two driver genes (TP53 and RB1); we found that in basically all of the cases these genes are completely lost. That is one part, and we then mapped all the different alterations onto biological processes in order to learn what is going on there.
The second part concerns a paediatric tumour called neuroblastoma. It is not a brain tumour, I have to say; it can arise in any part of the body. A very interesting feature of this tumour type is that there are low-risk tumours, where it is very unlikely that the patient will die of the disease, and very unfavourable, high-risk tumours. The low-risk tumours in particular often only need to be monitored, because in most cases they spontaneously regress, they just go away. That is rather strange, and so we analysed roughly sixty tumours with these genome sequencing techniques and found, in quite a high percentage of the high-risk tumours, a rearrangement that activates a gene responsible for lengthening the telomeres. Telomeres are like the little caps at the end of a shoelace that stop it fraying: the genome has a similar protective sequence at the chromosome ends, and once it is worn down the cell can no longer divide. Maintaining the telomeres is required for a cell to divide indefinitely, and this rearrangement activates telomerase, the machinery that elongates these parts of the genome. That might give us a clue as to why the high-risk tumours do not spontaneously regress: they are immortalised, whereas the low-risk tumours can only go through a limited number of cell divisions before their telomeres run out and the cells die away. Both studies have recently been published in Nature.
What significance does this have for clinical practice?
The first study, on small cell lung cancer, is very basic research, so there is still a lot to learn. For instance, one feature that is very characteristic of small cell lung cancer is that patients respond quite well to chemotherapy but relapse quite early with a very chemoresistant phenotype. The biological processes we identified might help us understand that, so we are looking more deeply into it. For the neuroblastoma case there are actually drugs targeting telomerase, and we will have to see in future whether this is a targetable option. It can definitely be used for classification; we showed in the paper that you can classify risk, and with it survival, but the ultimate goal would be to target this lesion and hopefully cure patients.
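To make the classification idea concrete, here is a minimal sketch (not the analysis from the paper) of how a single genomic marker, such as the presence of a telomerase-activating rearrangement, could be used to stratify patients and compare survival curves. All patient data below are invented for illustration.

```python
# Hedged sketch: stratify a hypothetical cohort by a binary genomic marker and
# compute Kaplan-Meier survival curves for each group.
import numpy as np

def kaplan_meier(times, events):
    """Return (time points, survival probabilities) for right-censored data.
    events[i] = 1 if the event (e.g. death) was observed, 0 if censored."""
    times = np.asarray(times, dtype=float)
    events = np.asarray(events, dtype=int)
    order = np.lexsort((-events, times))   # at ties, deaths before censorings
    times, events = times[order], events[order]
    at_risk = len(times)
    surv, curve_t, curve_s = 1.0, [0.0], [1.0]
    for t, e in zip(times, events):
        if e == 1:                         # step down only at observed events
            surv *= (at_risk - 1) / at_risk
            curve_t.append(float(t))
            curve_s.append(surv)
        at_risk -= 1
    return curve_t, curve_s

# Hypothetical cohort: months of follow-up, event indicator, marker status
follow_up = [12, 30, 45, 60, 8, 20, 15, 70]
died      = [1,  0,  1,  0, 1,  1,  1,  0]
marker    = [1,  0,  0,  0, 1,  1,  1,  0]   # 1 = rearrangement present

for group in (0, 1):
    idx = [i for i, m in enumerate(marker) if m == group]
    t, s = kaplan_meier([follow_up[i] for i in idx], [died[i] for i in idx])
    print("marker", group, list(zip(t, [round(x, 2) for x in s])))
```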
What are the next steps?
What we have recently been looking at is the tumour evolution of these and other cancers, including colorectal cancers. We are doing this with very deep whole-genome sequencing, from which we try to identify subclones, decipher how they evolve over time, and link that to the timing of events in order to understand the progression of the disease. Those are the next steps we will be pursuing.
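One common way to get at subclones from deep sequencing, sketched here purely for illustration and not as the method used in these projects, is to cluster the variant allele fractions (VAFs) of the somatic mutations: mutations shared by the same population of cells tend to sit at similar VAFs. The data below are simulated.

```python
# Hedged sketch: cluster simulated variant allele fractions with a 1-D Gaussian
# mixture and pick the number of clusters by BIC; each cluster is a candidate
# (sub)clone. Real analyses must also account for purity, ploidy and copy number.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Simulated VAFs: a clonal population near 0.5 and a subclone near 0.2
vafs = np.concatenate([rng.normal(0.50, 0.03, 200),
                       rng.normal(0.20, 0.03, 80)]).reshape(-1, 1)

best_model, best_bic = None, np.inf
for k in range(1, 5):                      # try 1 to 4 clusters
    gmm = GaussianMixture(n_components=k, random_state=0).fit(vafs)
    bic = gmm.bic(vafs)
    if bic < best_bic:
        best_model, best_bic = gmm, bic

print("clusters:", best_model.n_components)
print("cluster mean VAFs:", np.round(best_model.means_.ravel(), 2))
```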
Besides money, what obstacles have you faced?
One obstacle is the really big data sets: one patient is about 200GB in size, which is roughly 43 DVDs if you put it like that (200 GB divided by 4.7 GB per single-layer DVD). Getting data from elsewhere and running it through the same pipeline is quite tedious; you have to move a lot of data, either over the internet or on hard drives, and that is quite an obstacle. There are cloud-computing solutions, but they raise other issues around ethics, because this is sensitive data.
The other obstacle is that this kind of interdisciplinary work is in some cases not really appreciated by institutions, so we have to work to get the different disciplines working hand in hand, to link them better and build a common language, and that basically requires education programmes across all the disciplines.
What will the eventual benefit be of this work?
The ultimate benefit would be finding the processes that are really the driving events of these cancer types. One would be the rearrangement in these neuroblastomas, but we also found other alterations in small cell lung cancer that could be targetable. The ultimate goal is to find a lesion that can be targeted in such a way that the tumour at least goes away and the patient survives longer, if not being cured, but that is a long way off. Still, we are quite positive about pursuing this and building more knowledge.
What would be your take home message?
Really look very deeply and very carefully into the data. Quality is a real issue; it is never fully settled. You can keep tweaking here and there, and ultimately you will find new insights in the data set. The data are very complex, and we will not be finished analysing these data sets for quite some time. And of course there is all the downstream work of doing functional experiments on top of that.