The Bedside Manner of the Future?

Artificial intelligence is catching up with human intelligence in the field of medicine, and in some respects has already passed it.

That’s the message on the screen from recent studies on the use of computers in healthcare.

A review of 14 studies, published in The Lancet Digital Health in September, found that in detecting disease through medical imaging, the diagnostic performance of AI was equivalent to that of (human) healthcare professionals.

Machines are revolutionizing the treatment of eye diseases. They’re already being used to detect retinopathy, a common affliction of diabetics, with an accuracy rate as high as 98%. Researchers at Moorfields Eye Hospital in London developed an algorithm that could recommend the correct treatment approach for over 50 different eye diseases with 94% accuracy, better than that of ophthalmologists, according to a report on Vox.

A Harvard University team created a “smart” microscope that can detect life-threatening blood infections. Accuracy: 95%. A study from Showa University in Yokohama found that a new computer-aided endoscopic system can spot potentially cancerous growths in the colon with 86% accuracy. And that’s only a very partial listing.

As AI accuracy rates approach 100%, some are speculating that doctors and technicians will eventually become largely obsolete.

Yet advocates of artificial intelligence in healthcare are careful to note the technology’s limitations and the irreplaceability of human beings in the healing process. Computers, no matter how sophisticated, cannot (yet) replace the medical judgment of a flesh-and-blood doctor. And bedside manner matters, they say, and always will.

But that either/or — technical efficiency versus old-fashioned humanity — may be on the way out as well. It’s possible to program robots to have a warm, compassionate bedside manner. Computers can, as it were, be “people” — albeit virtually.

Take, for example, Ellie, an avatar (an electronic image that represents a person) created at the University of Southern California to help determine whether a returning combat veteran needs therapy.

Ellie appears on the computer screen and guides a person through preliminary questions. It “makes eye contact, nods and uses hand gestures like a human therapist. It even pauses if the person gives a short answer, to push them to say more,” says the Associated Press in an article this week.

“After the first or second question, you kind of forget that it’s a robot,” said Cheyenne Quilter, a West Point cadet helping to test the program.

The evidence that the human-computer interface is viable in a therapeutic setting seems more than anecdotal. Researchers at USC discovered that not only will people talk with a program like Ellie, they are more willing to open up to it than to a human therapist.

“The shocking result — it wasn’t even a contest,” exclaimed Dr. Eric Topol, who has been exploring the changes that are being wrought by AI in medicine.

Why should a person be more willing to confide in a machine? Perhaps because revealing intimate problems to another human being, who might be judgmental, leaves one feeling more exposed. Fortunately, the surprising talents of AI do not necessarily spell doom for the medical profession. Topol is among those who optimistically suggest that the new technology will free doctors from administrative drudgery and give them more time with patients.

Indeed, the medical establishment has been making a serious effort in recent years to shift the emphasis somewhat more toward the human dimension of medicine, such as communication between doctor and patient.

For example, the Medical College Admission Test (MCAT) has been revised to include questions that probe a medical school applicant’s aptitude to be a good all-around doctor. As Dr. Darrell Kirch, president and CEO of the Association of American Medical Colleges (AAMC), which administers the MCAT, put it in a Washington Examiner article, “Being a good doctor isn’t just about understanding science, it’s about understanding people.”

That sounds good, but as in so much of academia, political correctness has seeped into the process here too.

Questions once thought appropriate only for sociology majors, touching on issues such as gender and cultural influences on expression, poverty and social mobility, and how people experience stress, have been vetted for incorporation into the MCAT. The aim is to test a medical school applicant’s sensitivity in a future doctor-patient relationship. But judging the correctness of those sensitivities is no simple thing.

One MCAT practice question asks whether the wage gap between men and women is the result of prejudice or biological differences. Another asks whether the “lack of minorities such as African Americans or Latinos/Latinas among university faculty members” is due to symbolic racism, institutional racism, hidden racism, or personal bias (the correct answer is institutional racism), reported the Washington Examiner.

The medical educators’ quest for a more humane paradigm is certainly laudable; it’s the right direction. But implementing such a program will be tricky.

Maybe what’s needed is an algorithm to exclude political bias from the testing.

Then there’s the more basic question posed by Dr. Charles Hatem, a professor at Harvard Medical School and an expert in medical education:

“Yes, we’ve fallen in love with technology, and patients are crying out, saying, ‘Sit down and listen to me,’” he told The New York Times. “So what the MCAT is doing has a laudable goal. But will recalibrating this instrument work? Do more courses in the humanities make you more humane? I think the best we can say is a qualified maybe.”

And that’s about as exact as you can hope to get in this most humane of hard sciences.
