In a study published today (July 27, 2020) in The Lancet Digital Health, researchers at the University of Pittsburgh and UPMC demonstrated the highest accuracy to date in recognizing and characterizing prostate cancer with an artificial intelligence (AI) program.
Senior author Rajiv Dhir, MD, MBA, vice chair of pathology and chief pathologist at UPMC Shadyside and professor of biomedical informatics at Pitt, said that humans are good at recognizing anomalies, but they carry their own biases and experience. Machines, by contrast, are detached from that part of the story, so there is an element of standardizing care.
To train the AI to recognize prostate cancer, Dhir and his colleagues provided images from more than a million parts of stained tissue slides taken from patient biopsies. Expert pathologists labeled each image to teach the AI to discriminate between healthy and abnormal tissue. The team then tested the algorithm on a separate set of 1,600 slides taken from 100 consecutive patients seen at UPMC for suspected prostate cancer.
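The workflow described above — expert-labeled training images, then evaluation on a held-out set of slides — is a standard supervised-learning pipeline. As a rough illustration only (the study's actual model, features, and data are not described here; the nearest-centroid classifier and synthetic one-number "slides" below are purely hypothetical):

```python
import random

random.seed(0)

# Hypothetical stand-in for an expert-labeled slide: a single feature
# value drawn around a class-specific center, plus its label.
def make_slide(label):
    center = 0.0 if label == "healthy" else 5.0
    return random.gauss(center, 1.0), label

train = [make_slide(l) for l in ["healthy", "abnormal"] * 500]  # labeled training set
test = [make_slide(l) for l in ["healthy", "abnormal"] * 50]    # held-out test set

# "Training": compute the mean feature value per class (nearest-centroid rule).
def fit(data):
    sums, counts = {}, {}
    for x, label in data:
        sums[label] = sums.get(label, 0.0) + x
        counts[label] = counts.get(label, 0) + 1
    return {label: sums[label] / counts[label] for label in sums}

# "Testing": assign each held-out slide to the nearest class centroid.
def predict(centroids, x):
    return min(centroids, key=lambda label: abs(x - centroids[label]))

centroids = fit(train)
accuracy = sum(predict(centroids, x) == label for x, label in test) / len(test)
print(f"held-out accuracy: {accuracy:.2f}")
```

The key design point mirrored here is that the model never sees the held-out slides during training, so its accuracy on them estimates performance on new patients.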
During testing, the AI demonstrated 98% sensitivity and 97% specificity at detecting prostate cancer. This was significantly higher than previously reported for algorithms working from tissue slides.
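Sensitivity and specificity are computed from the counts of true and false positives and negatives. A minimal sketch of the formulas (the counts below are invented for illustration and are not the study's data):

```python
def sensitivity_specificity(tp, fp, tn, fn):
    """Sensitivity = TP / (TP + FN): share of cancerous slides correctly flagged.
    Specificity = TN / (TN + FP): share of benign slides correctly cleared."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity

# Invented counts chosen to match 98% sensitivity / 97% specificity:
sens, spec = sensitivity_specificity(tp=98, fp=3, tn=97, fn=2)
print(f"sensitivity={sens:.0%}, specificity={spec:.0%}")
```

High sensitivity means few cancers are missed; high specificity means few healthy patients are wrongly flagged — a model needs both to be clinically useful.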
Moreover, this is the first algorithm to extend beyond cancer detection: it also reported high performance for tumor grading, sizing, and invasion of the surrounding nerves. All of these are clinically essential features that form part of a standard pathology report.
The AI also flagged six slides that the expert pathologists had not noted.
Dhir explained, however, that this does not necessarily mean the machine is superior to humans. In evaluating those cases, for instance, the pathologist could have seen enough evidence of malignancy elsewhere in the patient's samples to recommend treatment. For less experienced pathologists, though, the algorithm could act as a failsafe, catching cases that might otherwise be missed.
Dhir said that algorithms like this one are especially useful for atypical lesions, where a nonspecialized pathologist might make an incorrect assessment. That, he said, is the significant advantage of this kind of system.
Dhir cautioned that new algorithms must be trained to detect different types of cancer, since pathology markers are not universal across all tissue types. Still, he said the results are promising, and he saw no reason this technology could not be adapted to work with breast cancer, for example.
Additional authors on the study include Scott Hazelhurst, PhD, and Pamela Michelow, MS, of the University of the Witwatersrand; Varda Shalev, MD, MPA, of Maccabi Healthcare Services; Anat Albrecht-Shach, MD, of Shamir Medical Center; and Lilach Bien, Daphna Laifenfeld, Ronen Heled, PhD, Judith Sandbank, Chaim Linhart, MD, and Manuela Vecsler of Ibex Medical Analytics.
Ibex provided funding for the study and created the commercially available algorithm. It also paid fees for Albrecht-Shach, Pantanowitz, and Shalev.
Shalev and Pantanowitz serve on the medical advisory board, and Linhart and Bien are authors on pending US patents 62/743,559 and 62/981,925. Nevertheless, Ibex did not influence the design of the study or the interpretation of the results.