Thoughts from Ken Kaufman

AI, Breast Cancer, and a New Mindset for Healthcare

A recent feature in The Washington Post speaks volumes about both the future of healthcare and the potential willingness of society and the healthcare establishment to embrace that future.

Regina Barzilay is an artificial-intelligence researcher at the Massachusetts Institute of Technology and a breast cancer survivor. Over the past seven years, Barzilay has put her AI expertise to work developing a new machine-learning tool for early detection of breast cancer.

“Learning” is the key word here. AI takes huge data sets and uses algorithms that, over time, learn from patterns in the data. In this case, Barzilay and her team set out to teach the machine-learning tool to see the relationships between the rich data shown in a mammogram—much of it not currently used for diagnosis—and the chances of an individual developing breast cancer.

After using 200,000 mammograms to “teach” Barzilay’s tool, named Mirai, the team conducted a study showing that Mirai could predict three-quarters of breast cancer occurrences up to five years before they happened, a 22% improvement over the statistical model currently in use, which estimates risk based on age, family history, and other factors.
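
The mechanics behind this kind of tool can be made concrete with a small sketch. The Python example below (using scikit-learn and entirely synthetic data; every variable name is hypothetical, and this is not Mirai’s actual code) shows the core idea: a model given features derived from the image itself can outperform a baseline that sees only traditional risk factors such as age and family history.

```python
# A purely illustrative sketch of a learned risk model: fit a classifier on
# image-derived features and compare it with a baseline built only on
# traditional risk factors. All data and feature names here are synthetic;
# this is NOT Mirai's actual code.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 5_000

# Traditional risk factors (the baseline's only inputs).
age = rng.uniform(40, 75, n)
family_history = rng.integers(0, 2, n)

# Hypothetical features a model might extract from the mammogram itself
# (e.g., tissue-density patterns). Purely synthetic for this sketch.
image_features = rng.normal(size=(n, 8))

# Simulate 5-year outcomes that depend partly on signal hidden in the image.
logit = 0.03 * (age - 55) + 0.8 * family_history + image_features[:, 0]
cancer_within_5y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X_base = np.column_stack([age, family_history])
X_full = np.column_stack([X_base, image_features])
y = cancer_within_5y

Xb_tr, Xb_te, Xf_tr, Xf_te, y_tr, y_te = train_test_split(
    X_base, X_full, y, test_size=0.25, random_state=0
)

baseline = LogisticRegression(max_iter=1000).fit(Xb_tr, y_tr)
learned = LogisticRegression(max_iter=1000).fit(Xf_tr, y_tr)

print("baseline AUC:", roc_auc_score(y_te, baseline.predict_proba(Xb_te)[:, 1]))
print("learned  AUC:", roc_auc_score(y_te, learned.predict_proba(Xf_te)[:, 1]))
```

In Mirai’s case, the image-derived signal is learned directly from the mammogram pixels by a deep neural network rather than supplied as named features, which is part of why no one, including the model’s creators, can point to exactly what it sees.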

The positive implications for health and healthcare are enormous. Mirai, which is open source and so can potentially be used and improved by multiple researchers and providers, could refine breast cancer screenings to better focus on individuals at high risk. Mirai also could reduce the racial bias that exists in current models for predicting breast cancer, a disease that occurs at a significantly higher rate among women of color.

Three years before her breast cancer diagnosis, Barzilay had a mammogram that indicated “everything was fine.” Years later, out of curiosity, Barzilay fed this mammogram into Mirai. The tool told Barzilay that at the time she had been at high risk for breast cancer.

But what the tool could not do was tell Barzilay why she was at high risk for cancer.

AI is confounding. It goes against the deep instinct we all have to know why things happen. Understanding causal relationships is at the heart of all intellectual inquiry—certainly it is central to the scientific method and to medicine.

Yet AI forces us into a place where explicable causal relationships have been replaced by inexplicable ones.

Consider how medicine works now. A physician orders tests for a patient—blood work, radiology images, etc. The physician and patient sit down and go over the results of those tests, and the physician says, “Based on these results, this is the scientifically determined effective course of treatment.”

With AI, the discussion would be very different. Instead, it would be something like this: “The algorithm tells us that you are at high risk for developing breast cancer. We can’t tell you what the algorithm actually sees or why it thinks you are at risk, but we do know that the algorithm is correct a high percentage of the time.”

That is a very different conversation. And for many providers and patients, it may be a very uncomfortable conversation. If COVID has taught us anything, it’s the importance of societal trust in science. However, that trust may be strained if we cannot explain why a certain condition is being forecast and why a particular course of care is being recommended. For consumers, AI could exacerbate a skepticism about expertise that is already dangerously high in this country. For healthcare professionals, radiologists in particular, AI in medicine may appear to fly in the face of their training while disrupting their roles.

Cornell mathematician Steven Strogatz articulated this dilemma in a New York Times essay about artificial intelligence’s success in playing chess: “What is frustrating about machine learning,” Strogatz wrote, “is that the algorithms can’t articulate what they’re thinking. We don’t know why they work, so we don’t know if they can be trusted.”

In chess, this lack of trust may be frustrating, but in healthcare it could be an impediment to adoption and therefore to the best possible healthcare outcomes, including the saving of lives.

This lack of trust was, perhaps, what informed the reaction of traditional healthcare provider organizations when Barzilay first approached them seven years ago to supply mammograms to assist with developing her AI tool.

Most hospitals turned her away, saying, according to the Washington Post article, that breast cancer had been treated for years without AI. Barzilay recalled, “They acted like I was trying to sell snow to an Eskimo.”

Barzilay’s own care provider, Massachusetts General Hospital, eventually agreed to help, and supplied the mammograms for initial development of the tool. Since then, Barzilay has made great progress, with Novant and Emory in the U.S. and health systems in Israel, Sweden, Taiwan, and Brazil participating in the research to show the tool’s capabilities.

Despite the eventual increase in participation in this project, and despite many health systems’ own initiatives in artificial intelligence and precision medicine, the reaction that Barzilay met with is concerning when viewed in a broader context.

The future of healthcare is moving rapidly beyond the legacy intellectual and attitudinal framework. Hospitals and health systems find themselves needing to operate outside their traditional span of responsibility, taking on vast challenges like health equity and public health. Hospitals also need to operate on a macroeconomic platform redefined by big tech companies, a platform of big data, ideas, resources, scale, and strategic aggressiveness.

In many ways, the example of Regina Barzilay and her AI tool for early detection of breast cancer highlights what hospitals are facing on multiple levels. AI is a new idea, one that even data experts don’t fully understand. It requires big data, expertise, and resources. And it requires a new view of the role of healthcare in improving health and preventing disease.

The foundations for success in this environment are curiosity and openness: curiosity about what benefits may come from new concepts, and openness to active participation in bringing those concepts to practical fruition.

When Barzilay first asked health systems to help in her development of a new approach to breast cancer diagnosis, she encountered general unwillingness. The good news is that over time, this mindset was replaced by curiosity about the possibilities and openness to assist.

That is exactly the shift in mindset that will be needed on a large scale as we confront the very new set of challenges and the very new environment that healthcare professionals find themselves facing today.
