FOR MORE THAN a century, pathologists have identified cancer by looking at slides containing tissue stained with dyes to make the malignant cells more visible. Spotting aberrant cells among healthy ones is similar in spirit to recognizing familiar faces or objects in pictures. “Our brains are very good at recognizing patterns in images,” says physician and epidemiologist Peter Gann of the University of Illinois at Chicago. “And basically what [pathologists] do is instantaneous pattern recognition.”

Researchers at Radboud University Medical Center in Nijmegen, Netherlands, organized a contest in 2016 intended to improve on this ability to recognize patterns—using artificial intelligence. The contest, called CAMELYON16, was designed not only to reward a winner but also to accelerate the development of cutting-edge technology to benefit cancer patients.

CAMELYON16 attracted 32 entries. Each entrant received 270 digitized slides of lymph nodes removed from patients who had undergone breast cancer surgery. Their task was to accurately identify the slides that showed cancer by using computer algorithms, mathematical recipes written in code that teach a computer how to perform a task. The winning entry, a joint effort by researchers at Harvard Medical School in Boston and the Massachusetts Institute of Technology in Cambridge, used an artificial neural network—a sort of computer brain—to tell the difference between tumor images and nontumor images. It had been trained to recognize cancer by reviewing millions of such images.
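For readers curious what such an algorithm looks like in practice, the sketch below shows a tiny convolutional neural network, written in Python with the PyTorch library, that labels small image patches as tumor or nontumor. It is a simplified illustration under assumed settings (the layer sizes and the 96-by-96-pixel patch size are arbitrary choices), not the winning team’s actual model.

```python
# A minimal sketch of a convolutional neural network that classifies
# image patches as "tumor" or "nontumor". This is an illustrative toy,
# NOT the CAMELYON16 winning entry; layer sizes and the 96x96-pixel
# patch size are arbitrary assumptions.
import torch
import torch.nn as nn

class PatchClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),  # RGB patch -> 16 feature maps
            nn.ReLU(),
            nn.MaxPool2d(2),                             # halve resolution: 96 -> 48
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 48 -> 24
        )
        self.classifier = nn.Linear(32 * 24 * 24, 2)     # two outputs: nontumor, tumor

    def forward(self, x):
        x = self.features(x)
        x = x.flatten(1)        # flatten feature maps into one vector per patch
        return self.classifier(x)

model = PatchClassifier()
patch = torch.randn(1, 3, 96, 96)   # random stand-in for one stained-tissue patch
logits = model(patch)
print(logits.argmax(dim=1))         # 0 = nontumor, 1 = tumor
```

A real system would be trained on many thousands of labeled patches and then scanned across an entire digitized slide, but the basic idea of mapping an image to a class label is the same.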

After the winner had been crowned, the researchers gave the test data to 11 pathologists. The top five digital entries in the contest correctly identified metastatic growths at least as accurately as a pathologist who had as much time as needed to study the slides. Under a time constraint—two hours to study 129 slides, a typical work rate—the pathologists’ diagnostic accuracy fell significantly short of the winning entry’s. In many cases, the algorithms used pattern recognition strategies to detect metastatic growths that none of the pathologists found.

The group that sponsored the contest published its research results in the Dec. 12, 2017, edition of JAMA. “We were shocked to see that the algorithms could do so well so quickly,” says lead author Babak Ehteshami Bejnordi, who organized CAMELYON16 while completing his doctorate in machine learning at Radboud University. He’s now an engineer at Qualcomm Research Netherlands, a technology company in Amsterdam.

Artificial intelligence, or AI, is the broad term used to describe efforts to get computers to function more like human brains and perform tasks ordinarily done by people. Headlines about AI and cancer often focus on efforts like IBM’s Watson for Oncology initiative, which analyzes data from thousands of patients to provide oncologists with treatment guidance and recommendations. Nearly seven years after its introduction, however, IBM’s program is still in its infancy, and no peer-reviewed journals have published studies showing that Watson for Oncology benefits patients.

Currently, a more focused kind of AI is making a difference in other areas of cancer research. Machine learning, an AI approach that’s particularly good at pattern recognition, is already driving the development of image-processing tools that can help pathologists and radiologists learn more about patients’ tumors faster than ever before.

The Signal and the Noise

Ehteshami Bejnordi says the test for CAMELYON16—scanning lymph nodes for metastases—was selected because it’s an important, routine and time-consuming task for pathologists. When cancer spreads, it often shows up first in nearby lymph nodes. Breast cancer spreads to lymph nodes in the armpit early in metastasis, which is why they are often removed and analyzed during mastectomy or lumpectomy. Pathologists have to learn how to distinguish the signal from the noise, that is, to see patterns in the cells of a tissue sample that indicate the presence of cancer. Radiologists, too, depend on recognizing patterns when looking for tumors in high-resolution images captured by CT scans, MRIs and PET scans. Such images can reveal the location, size and shape of a tumor, and that information influences treatment decisions.

However, there are limits to the abilities of pathologists, says Gann. Pathologists zoom in and count individual cancer cells within “hot spots” on a slide, places they suspect they’ll find cancer. “But they can’t possibly count all the tumor cells in the slide,” he says, “and two pathologists who approach the same case may not examine the same hot spot.”

“It’s really easy to miss the little things,” says Kate Lillard, the chief scientific officer at Indica Labs, based in Corrales, New Mexico. Indica is developing software that enables pathologists to harness AI approaches and more accurately diagnose the type or grade of cancer. “Tumor cells can [be] quite difficult to identify amongst all the lymphocytes and other cells and structures,” says Lillard, who works in Alcester, England.

The Mind in the Machine

AI is already changing modern life. For a variety of applications, computer scientists have produced software that uses algorithms to analyze and categorize data and make decisions based on that data. This is the idea behind self-driving cars and virtual assistants like Siri and Alexa. The photo-sharing app offered by Google recognizes human faces. It can group similar objects and differentiate between cats and dogs, and can even be “trained” to recognize beloved pets. (These algorithms aren’t perfect, though. A group of MIT students showed that Google’s image-recognition software misidentified a 3D-printed turtle as a rifle.)

In a study published online in February 2018 in Nature Biomedical Engineering, researchers from Google and Stanford University in California reported on algorithms that can detect risk factors for cardiovascular disease by analyzing images of the retina, the light-sensing tissue at the back of a person’s eye.

These examples of AI each use a deep-learning algorithm, essentially an artificial brain that attempts to mimic the learning process encoded in the human brain. Just as people get better at a task through repetition—like riding a bike or doing long division—deep-learning algorithms show improved performance as they accumulate and analyze more data. So, for example, the algorithms that outperformed the human pathologists in the CAMELYON16 study are expected to get better as they analyze more slides.
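The “learning” in deep learning is a loop of guessing, measuring the error and adjusting, repeated over the training data. The toy sketch below, again in Python with PyTorch and again an illustrative assumption rather than code from any of the studies described here, shows that cycle: the network makes predictions, a loss function scores how wrong they are, and an optimizer nudges the network’s internal weights so the next pass does slightly better.

```python
# A toy training loop illustrating how a network improves with repetition.
# The data here are random stand-ins; in practice the inputs would be
# labeled image patches like those used in CAMELYON16.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(10, 16), nn.ReLU(), nn.Linear(16, 2))
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

inputs = torch.randn(100, 10)           # 100 examples, 10 features each
labels = torch.randint(0, 2, (100,))    # 100 ground-truth labels (0 or 1)

for epoch in range(20):                 # repetition: many passes over the data
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), labels)
    loss.backward()                     # measure how each weight contributed to the error
    optimizer.step()                    # adjust the weights to shrink the error
    print(f"epoch {epoch}: loss = {loss.item():.3f}")
```

Run this and the printed loss falls from one pass to the next; feeding in more (and more varied) labeled examples is what lets the same mechanism keep improving, which is why the CAMELYON16-style algorithms are expected to sharpen as they see more slides.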

Hugo Aerts sees a future in which deep-learning algorithms do the heavy lifting in cancer detection and characterization. Aerts is an expert in computational imaging at Harvard Medical School in Boston. In pathology, these algorithms could analyze whole slides, not just hot spots, quickly. In radiology, algorithms could analyze 3D CT scans to measure physical features like size and shape to guide treatment decisions.

These techniques could be useful in diagnosing a variety of cancer types. Consider lung cancer. Diagnosis often happens when nodules are detected in a person’s lung by a CT scan, but about 90 percent of small nodules found in the lung turn out to be benign. It’s important for oncologists and radiologists to know whether they need to act, and AI can help differentiate malignant growths from benign nodules that won’t continue to grow. “AI is really good at doing this automatically, fast and with high accuracy,” says Aerts. “A physician would take an hour to do what an algorithm can do in a matter of minutes.”

A Bright Future

In recent years, radiology has gone almost completely digital, says Aerts. Patients can have copies of their scans placed on flash drives or CDs to take to their doctor appointments. Because so much digital data exist for training algorithms, he says, radiology is primed to take advantage of AI’s benefits. Digital pathology is catching up, but pathologists still frequently rely on glass slides studied under a microscope, says Mark Lloyd, a cancer researcher in Tampa, Florida, who founded Inspirata, a company that builds tools for digital pathology. He predicts digital imaging will become more widespread in pathology in the next few years, even at regional hospitals, and that patients will be able to keep digital copies of their own slides. 

Lloyd is confident patients will welcome the additional information, especially those who conduct intensive research online to learn more about their disease. “That information can be critical for patients who want to be educated the moment they’re diagnosed,” he says. Digital images of slides can also make it easier for patients to get a second opinion because the images can be shared more readily. AI has tremendous potential, according to Lloyd, but like other emerging technologies, it has to clear some high hurdles before its use becomes widespread, beginning with the algorithms themselves. They need to be developed, tailored and trained to perform specific tasks. Improving their predictive abilities—which will bolster the confidence of providers and patients alike—means training them on a lot of data. And that’s going to take time. “They have to be validated to be reliable and produce repeatable results which are meaningful,” Lloyd says.

In addition, one of the quirks of deep-learning algorithms is that no one knows exactly how they learn. “We have no way of knowing how deep learning got the right answers,” says Gann. “We can get accurate classifications, but we can’t necessarily know why.”

Understanding how a computer program correctly classifies slides according to whether they contain cancer cells, he says, will improve both individual algorithms and the application of computer code to other tasks that are more difficult than analyzing lymph node tissue, like looking at other types of cancer or tissues. Some experts are worried that computers will replace radiologists and eventually pathologists, but Gann isn’t. He says the real power of artificial intelligence will come from combining its strengths in pattern recognition with an expert’s broader body of knowledge and experience. “AI won’t replace pathologists. It’ll tell the pathologist, hey, pay attention to this node over here, in this area,” he says. This approach is similar, he says, to how pathologists have analyzed Pap tests for decades. 

Using Computers to Make a Good Test Better

Computer programs that find cancerous cells have helped to improve Pap testing.

Cancer pathologists and radiologists are increasingly turning to artificial intelligence to better and more quickly analyze slides and images. But the use of computer programs to improve patient health isn’t new. “We’ve been doing Pap testing this way for a long time,” says physician and epidemiologist Peter Gann of the University of Illinois at Chicago.

In the 1920s, Greek physician Georgios Papanicolaou conducted research that would revolutionize the detection of cervical cancer by introducing a way to identify abnormal cells. The disease was a leading cancer killer of women at the turn of the 20th century, but since the adoption of the Pap test in the 1940s, the death rate has plummeted in developed nations. (In resource-poor countries, where people have less access to health care, the disease remains a leading cause of death for women.) The development and use of tests that detect cancer-causing strains of the human papillomavirus (HPV) has also led to a decline in incidence and mortality.

Part of a Pap test involves identifying abnormal cells in a sample collected from a patient. Pathologists have traditionally determined whether the cells are cancerous, but the task is laborious.

The late 1990s brought the introduction of computer programs designed to automatically find cancerous cells in cervical slides. These include the ThinPrep Imaging System, which was approved by the U.S. Food and Drug Administration in 2003 and has been widely adopted. Studies of the system have found that it significantly increases the detection of abnormal cells compared with manual screening done by human eyes alone. Such systems won’t make skilled pathologists obsolete, says Gann. But they do show how technology can help reduce the time needed to analyze slides and get accurate results to patients more quickly.

In the near future, researchers want to develop hybrid AI-human approaches that optimize the skills of both to provide the most accurate information for patients. AI software won’t suffer from work overload or fatigue, and it can analyze an entire slide quickly. Doctors making diagnoses can use that information to reduce the number of false positives and negatives. At the same time, advances in AI will likely change medical education. Gann expects that learning how to use algorithms will be part of pathology training in the future.

Ehteshami Bejnordi, in the Netherlands, says he met resistance from some pathologists when he was developing the CAMELYON16 challenge. One of them was well-known at Radboud for his meticulous diagnoses. When Ehteshami Bejnordi described CAMELYON16, the pathologist balked.

“He said, ‘It’s absolutely impossible that you will find any algorithm that is even comparable to me.’ He was a complete nonbeliever at the beginning,” says Ehteshami Bejnordi. “When we showed him the results of the leading algorithm, he was absolutely astonished.”

Since the contest’s end, computer scientists have developed a new algorithm for use at Radboud that mimics the winning entry from CAMELYON16. The formerly skeptical pathologist uses it routinely in his work and says it saves him time. “He says he’s now getting a lot of improvement in his diagnoses,” says Ehteshami Bejnordi.

Stephen Ornes lives and works in Nashville, Tennessee.