Investigation of Ki67 biomarker

Published: 17 Dec 2012
Dr Torsten Nielsen - University of British Columbia, Vancouver, Canada

Dr Torsten Nielsen talks to ecancer at SABCS 2012 about how the immunohistochemical assessment of the cell proliferation marker Ki67 is of interest for potential use in clinical management.

 

Dr Nielsen explains that a lack of consistency between labs detracts from Ki67's value as a marker. A working group was assembled to devise a strategy for Ki67 analysis and identify procedures to improve concordance.

The 2012 CTRC-AACR San Antonio Breast Cancer Symposium, 4-8 December

Investigation of Ki67 biomarker 

Dr Torsten Nielsen – University of British Columbia, Vancouver, Canada 

 

I was presenting on Ki67, which is a biomarker of breast cancer proliferation. People think that it might become the next standard marker to be used in the evaluation of breast cancer after oestrogen receptor and HER2, but there's a lot of controversy over how to actually assess Ki67, and that's what my study was trying to address. The controversy is whether pathologists can actually agree on a Ki67 score for the proliferation rate when they're faced with the same case, the same slides, because a lot of evidence, some of it anecdotal, some of it from small studies in the literature, suggests that the agreement is not so good.

So an international group, the Breast International Group and the North American Breast Cancer Group, had a meeting in London; we looked at all the literature, we produced a paper, and we were able to standardise some aspects of Ki67, but what we couldn't agree on or figure out was how to actually score it under the microscope properly. So that's what our study has first been about: can pathologists actually agree when they're looking at the same cases? We got together a hundred cases, distributed them around the world to labs in North America and in Europe, and then asked them to score these same cases. What we found was that the agreement was only moderate; it wasn't at the level that we thought was needed to make a biomarker ready for prime time, to make the test something that you could base clinical decisions around.

 

What range of disagreements did you see?

 

We used a parameter called the intraclass correlation coefficient, and we wanted to reach a level of 0.9, but we only got to 0.71. To put that into maybe more striking terms, what we found is that, looking at these same hundred cases, for some pathologists the average score across those hundred cases was a 10% Ki67 index, that's the measurement, while in other labs in our group the average score was 28%. So a very big difference, especially since many of the proposed clinical decision cut-offs are around 15%, with above or below this making a difference as to whether you might give chemotherapy or not, big decisions like that. Just from the variability in assessment, one third of cases would be classified differently in different labs. The test helps, it's adding information, but it's not at the level of precision where we can really be confident that, if my mother came in with her tumour being assessed, we could rely on that to make decisions, at least with the current ways that people seem to be doing the assessments.
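For readers unfamiliar with the agreement statistic mentioned above: the intraclass correlation coefficient (ICC) compares between-case variance with between-rater variance. The study's exact ICC variant is not stated in the interview, so this is only a minimal sketch using the simple one-way random-effects form, ICC(1,1), on entirely hypothetical Ki67 scores.

```python
# Hypothetical sketch of a one-way intraclass correlation, ICC(1,1).
# The study's actual ICC variant is not specified; scores below are invented.

def icc_oneway(scores):
    """scores: list of cases, each a list of Ki67 scores (%) from different raters."""
    n = len(scores)                      # number of cases
    k = len(scores[0])                   # raters per case
    grand = sum(sum(row) for row in scores) / (n * k)
    case_means = [sum(row) / k for row in scores]
    # Between-case mean square
    msb = k * sum((m - grand) ** 2 for m in case_means) / (n - 1)
    # Within-case (between-rater) mean square
    msw = sum((x - m) ** 2
              for row, m in zip(scores, case_means)
              for x in row) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

# Hypothetical Ki67 indices (%) from three labs scoring four cases
scores = [[10, 12, 28],
          [ 5,  8, 15],
          [30, 25, 45],
          [15, 18, 33]]
print(round(icc_oneway(scores), 2))  # prints 0.48
```

With these made-up numbers the labs rank cases similarly but one lab scores systematically higher, so the ICC falls well short of the 0.9 target, the same kind of shortfall the study reports at scale.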

 

What are the implications of the study?

 

We’re working hard to try and standardise how the scoring should be done. We’ve drawn on the experience of the group to identify a scoring method that was particularly consistent, that was relatively easy to do and that could be done with no special equipment anywhere in the world. That particular method, which came from one of the other labs in the group, not even my own, is the one we’re currently testing. Now everyone in the group is using this exact method, which spells out how to recognise a positive cell, where to look on a slide and how brown, how positive, that cell has to look to be called a replicating, Ki67-positive cell.

We’ve redistributed all these slides, and we now have a web-based online training tool so that the pathologists can first look at these standard images and keep assessing them until they all get the same score on a digital image. Then they go back to their microscope, which is how this is done in the real world in a quick and convenient way, and we’re now seeing whether, having gone through all of that extra training, they can score the cases quickly and efficiently and, most importantly, whether we can all agree closely on the results when it’s the same case. If we succeed, then we will roll the study out more broadly and start to look at the clinically relevant cases of core needle biopsies, because right now we’re using a technique called tissue microarrays to gather data quickly. If we fail, then most likely it will mean that we need to resort to some kind of automated imaging technology if we’re going to get consistency in Ki67 scoring.

 

The good news from our study is that we did show acceptable, actually quite good, levels of intra-observer consistency. So the same pathologist looking at the same slide over and over again can deliver consistent results, and that means it’s not hopeless; it is possible to be consistent about this. But because the inter-observer variability is still problematic, we’ve had to work hard on seeing whether we can transpose those very consistent methods into other labs, so that we all get the same answer on the same patients.