Transforming health care with AI: tons of potential, but not without pitfalls

Apr 8, 2019 | 1:15 AM

TORONTO — It’s already crept into so many aspects of everyday life, from powering digital assistants like Siri and Alexa to personalizing entertainment choices on streaming services like Netflix to driving the development of autonomous vehicles.

Now artificial intelligence is poised to revolutionize key aspects of how doctors practise medicine and the ways in which patients are diagnosed and treated.

AI systems, specifically machine learning, have the ability to analyze massive sets of anonymized patient data and look for patterns in a way that the human brain, as elegant and complex as that organ may be, cannot begin to approach.

Take, for instance, the myriad forms of medical imaging that need to be scrutinized by radiologists, pathologists and other specialists to look for anomalies that might indicate disease — from heart conditions and cancer to fractures and neurological disorders.

An example is using an AI system to assess photos of skin lesions to determine whether they’re cancerous and, if so, what kind of cancer.

“If you train (the systems) on 100,000 examples, then you’re about as good as a dermatologist at that, maybe slightly better,” says Geoffrey Hinton, chief scientific adviser at the Vector Institute for Artificial Intelligence and head of Google’s Brain Team Toronto.

“We know that if you were to train on 10 million examples, you’d be much better than the best dermatologist.”

That’s because the machines are able to pick up every pixel in the image and discern subtle distinctions that aren’t visible to the human eye, giving the system “extraordinary accuracy” and consistency, Dr. David Naylor, former president of the University of Toronto, explains during an interview with Hinton at his home.
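Hinton’s claim, that accuracy keeps climbing as the training set grows, can be illustrated with a toy sketch. The example below is purely hypothetical: it trains a simple nearest-centroid classifier on synthetic data (not skin-lesion photos, and not the deep-learning systems he describes) and compares the accuracy of a version trained on 100 examples against one trained on 10,000.

```python
import numpy as np

rng = np.random.default_rng(0)
true_w = rng.normal(size=20)  # hidden pattern the classifier must learn

def sample(n):
    """Generate n synthetic examples with noisy binary labels."""
    X = rng.normal(size=(n, 20))
    y = (X @ true_w + rng.normal(scale=2.0, size=n) > 0).astype(int)
    return X, y

X_test, y_test = sample(5000)  # held-out evaluation set

def accuracy(n_train):
    """Train a nearest-centroid classifier on n_train examples, score on test set."""
    X, y = sample(n_train)
    # score each test point by its projection onto the difference of class means
    w_hat = X[y == 1].mean(axis=0) - X[y == 0].mean(axis=0)
    preds = (X_test @ w_hat > 0).astype(int)
    return (preds == y_test).mean()

acc_small = accuracy(100)
acc_large = accuracy(10_000)
print(f"trained on 100 examples:    {acc_small:.2f}")
print(f"trained on 10,000 examples: {acc_large:.2f}")
```

With more training examples, the estimated pattern lines up more closely with the true one and accuracy rises, a crude stand-in for the 100,000-versus-10-million gap Hinton describes.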

Not only can AI/machine-learning systems free up clinicians from tedious, time-consuming tasks, they also aren’t subject to conditions that can lead to human error.

“It doesn’t worry if the dog is barking at the neighbours or there’s domestic distractions or a bad night of sleep,” Naylor says.

“It doesn’t get bored, either,” quips Hinton, renowned as the “godfather” of deep learning — computer systems modelled on the human brain, known as neural networks.

“In a few years’ time, a doctor who wants to compete with some of these systems that’s been trained on a huge amount of data will be like someone who wants to have a tug of war with a steam engine,” he says. “It’s not what we’re good at.”

However, some worry that artificial intelligence will erode the role of physicians, nurses and other providers, potentially at a cost to patient care, a position that both these proponents of the technology strongly dispute.

“These are enormously important tools in support of an efficient health-care system,” stresses Naylor. “They don’t supplant physicians and nurses en masse.

“Caring still matters. Much of this will involve high touch, as well as high tech.”

Brendan Frey, a former student of Hinton’s at the University of Toronto, is now using machine learning to discover how mutated genes give rise to abnormal processes inside cells that cause disease.

At Deep Genomics, a startup he founded in 2015, Frey and a team that includes geneticists, biologists and chemists are using deep learning to winnow out potential drugs for treating gene-based neuromuscular diseases, such as Duchenne muscular dystrophy.  

The technology scans and analyzes a patient’s genetic mutation and pinpoints what’s going wrong inside the cells. The next step is to design drugs that could correct the problem, at the cellular level.

The idea is to treat drug development in the same way as the discipline of engineering, with proven constructs, says Frey, instead of a scattergun approach of trying out numerous chemical compounds with the hope of hitting on an effective medication.

“So instead of this old-school way of trying every drug experimentally, we take an intentional approach,” says Frey. “Because that approach of just throwing stuff at the wall and seeing what sticks, it’s not going to work out anymore.

“I really believe that this field of machine learning, and deep learning in particular, has the potential to learn very complicated relationships and go beyond what humans are capable of.” 

Still, the incorporation of AI/machine learning into expanding aspects of medicine is not without potential ethical and legal pitfalls, says Ian Kerr, Canada research chair in ethics, law and technology at the University of Ottawa.

Once machines outstrip human capabilities in certain forms of diagnostics and treatment options, there will be pressure on the health-care system to adopt AI decisions as the standard of care, he predicts. Doctors and hospitals that haven’t adopted such systems might then fall short of that standard — potentially opening themselves up to medical malpractice suits.

There’s also the danger of an AI system making a mistake in a patient’s diagnosis or recommended treatment. In such a scenario, who would be liable — the doctor? The hospital? The company that created the machine learning software?

That’s why, suggests Kerr, malpractice laws will need to be tweaked as machine-based decision-making is increasingly introduced into health care.

Another concern is that future practitioners may become distanced from comprehensive medical knowledge, since they may not be trained in areas that have been taken over by AI systems, he says.

“Some of the dangers are around the loss of knowledge that’s potential to humans and another set of the dangers is what happens when we can’t connect to what the machine is doing and then we can’t foresee and try to prevent mistakes that the machines might make.”

As with any major societal shift driven by new technology, it takes time for people to understand and adapt to the concept of machine learning, says Joelle Pineau, co-director of the Reasoning and Learning Lab at McGill University in Montreal and head of the Facebook AI Research lab.

Pineau is also a member of Mila – Quebec artificial intelligence institute, one of three major artificial intelligence hubs in Canada, along with the Vector Institute and the Alberta Machine Intelligence Institute, or Amii, in Edmonton.

“I think interdisciplinary training of the next generation is going to be really important,” she offers. “How do we make sure that the next generation of young doctors that we train is equipped to understand some of the complexity of the technology, at least to know what questions to ask and what to be concerned with?”

Pineau has begun having conversations with various medical schools about how they might integrate some computational knowledge into their curriculums.

“I try to make them think long-term, like you’re preparing the doctors not for the graduation class of 2022, you’re preparing the doctors who are going to be practising in 2050.”

As for patients, she believes AI/machine learning will allow doctors to achieve a much more holistic picture of an individual’s health, allowing for more personalized treatment.  

“So if you have a particular condition and we can draw on information from many other patients that have similar conditions, then we can build much more systematically the link between condition and treatment. I think AI can help us do that much more rigorously.”

Sheryl Ubelacker, The Canadian Press