How to improve medical chatbots

Medical Pharmaceutical Translations • Sep 26, 2022

As AI has improved, many experts have expected chatbots and other conversational AI tools to help diagnose patients and interact with them in meaningful ways. That would be especially helpful now, with staffing shortages across the medical professions caused by widespread resignations over the past few years: since early 2020, 1 in 5 medical professionals has left their job.

But as many of us who’ve used them can probably attest, medical chatbots aren’t yet up to par.

Journalist Leece Norman puts it perfectly:

Think about all the times you’ve shared a specific set of symptoms with some or other medical-minded chatbot…. Your thudding headache, it appears, could be indicative of anything from simple dehydration to terminal brain cancer.

As advanced as some technology is in 2022, we simply don’t have the algorithms to give a chatbot deep, nuanced medical knowledge.

There’s also the fact that we have high expectations - and who wouldn’t? Movies show us visions of the future (or an alternative present) where ‘bots converse with humans as if they possess intelligence that goes beyond programming.

Instead, interacting with a chatbot about anything nuanced or complex can feel a little…off. Yan Fossat of Klick Labs comments, “We can comprehend intelligence without language and intelligence with language, but language without intelligence — which is what chatbots have — is really weird.”

That weirdness is a best-case scenario. Many of us are downright confused or frustrated by our interactions with chatbots. And when it comes to medical chatbots in particular, interactions could have dangerous results. Fortunately, programmers are very aware of the high risk of a ‘bot misdiagnosing someone.

In a recent article frankly titled “Why your chatbot sucks — and what you can do about it,” Norman explores a number of things that would have to change in order for medical chatbots to live up to our (admittedly high) expectations. The most notable include:

- teaching them empathy (or at least how to simulate it). Bedside manner is so important that it’s now part of the training for many human doctors, but there’s no real solution for a ‘bot. After all, feelings are what make us human. Still, some ‘bots can be programmed to at least react with sympathetic phrasing - for instance, “I’m sorry to hear that” or “I’m sorry for the bad news.” (A rough sketch of this idea in code follows the list.)

- giving them access to patient records. This is a complex issue, from both a programming and a privacy perspective, but it would make chatbots far more effective at providing potential diagnoses. Data about a patient’s preexisting conditions, medications, and so on is crucial to understanding their current state of health. (A sketch of what that record data might look like also follows the list.)

- programming them to detect a patient at risk of self-harm. This means not only relying on clues like vocabulary choice, but also reading between the lines of a conversation. Even when a patient does say they feel they may harm themselves, the statement may be buried in the middle of a long paragraph. The chatbot must be able to pick out the crucial words that signal cause for alarm. (A toy version of that scan appears after the list.)

Luckily, this may soon be possible. Fossat says: “[W]e need to make sure the chatbot is capable of understanding that with very, very high accuracy and not missing it…and that requires intelligence on the part of the machine that is…almost there, but not quite.”

- localizing them. In a previous article, we explored the fact that AI can’t understand the ins and outs of a culture and language the way a human transcreator does. Still, some health chatbots, like the one belonging to Ada Health, have been programmed with cultural knowledge in mind. While more limited than that of a human being, this knowledge can still go a long way in successfully communicating with patients.

- programming them to go beyond a diagnosis. Norman points out that medical chatbots are only half of an interaction, after all. Even if a ‘bot were to provide an accurate diagnosis, the human patient on the other end might have no idea what to do about it. An ideal ‘bot would not only offer an accurate diagnosis, but also give the patient at least basic information - maybe what kind of specialist to consult, or a little about the condition and its treatment. (The last sketch after this list shows what that lookup might look like.)
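To make the empathy bullet concrete, here’s a minimal Python sketch of rule-based “simulated empathy”: if a message contains a distress cue, the ‘bot prepends a sympathetic phrase to its reply. The cue words, phrases, and function names are all invented for the example, not taken from any real product.

```python
# Minimal sketch of "simulated empathy": prepend a sympathetic phrase
# when the user's message contains a distress cue. Illustrative only.
DISTRESS_CUES = {"pain", "scared", "worried", "worse", "can't sleep"}

def add_sympathy(user_message: str, bot_reply: str) -> str:
    """Lead with sympathy if the message suggests distress."""
    if any(cue in user_message.lower() for cue in DISTRESS_CUES):
        return f"I'm sorry to hear that. {bot_reply}"
    return bot_reply

print(add_sympathy(
    "I'm really worried about these headaches.",
    "Headaches can have many causes. Can you tell me more about yours?",
))
```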
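And here is what the patient-record idea might look like as a data structure. The fields and the way they’re folded into the conversation are assumptions for illustration; a real integration would involve secure access to an actual electronic health record system.

```python
# Hypothetical patient-record fields a diagnosis step could draw on.
from dataclasses import dataclass, field

@dataclass
class PatientRecord:
    preexisting_conditions: list[str] = field(default_factory=list)
    medications: list[str] = field(default_factory=list)

def build_context(record: PatientRecord, symptoms: str) -> str:
    """Pair today's complaint with the patient's history."""
    return (
        f"Symptoms: {symptoms}\n"
        f"Preexisting conditions: {', '.join(record.preexisting_conditions) or 'none'}\n"
        f"Medications: {', '.join(record.medications) or 'none'}"
    )

record = PatientRecord(preexisting_conditions=["migraine"], medications=["sumatriptan"])
print(build_context(record, "thudding headache for three days"))
```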
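The self-harm bullet calls for picking a crucial statement out of a long paragraph. A toy version of that scan might look like the following; a real system would need a clinically validated phrase list and, as Fossat suggests, far more intelligence than keyword matching. Every phrase below is purely illustrative.

```python
import re

# Toy risk-phrase scan. A production system would use a trained
# classifier, not a keyword list; these phrases are illustrative only.
RISK_PHRASES = ["hurt myself", "harm myself", "end my life"]

def flag_risk_sentences(message: str) -> list[str]:
    """Surface any sentence containing a risk phrase, even if it's
    buried in the middle of a long paragraph."""
    sentences = re.split(r"(?<=[.!?])\s+", message)
    return [s.strip() for s in sentences
            if any(p in s.lower() for p in RISK_PHRASES)]

long_message = (
    "The new medication helps a bit. Work has been stressful. "
    "Some days I feel like I want to hurt myself. "
    "Anyway, should I keep the same dose?"
)
for sentence in flag_risk_sentences(long_message):
    print("ESCALATE TO A HUMAN CLINICIAN:", sentence)
```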
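Finally, “going beyond a diagnosis” could start as something as simple as a lookup from a diagnosis label to basic next-step information. The entries below are invented for the example.

```python
# Hypothetical mapping from a diagnosis label to patient-friendly
# next steps: which specialist to see and a one-line description.
NEXT_STEPS = {
    "migraine": ("neurologist",
                 "a common headache disorder, often managed with "
                 "lifestyle changes and medication"),
    "tension headache": ("primary care physician",
                         "usually linked to stress or muscle strain"),
}

def explain(diagnosis: str) -> str:
    """Turn a bare diagnosis into something a patient can act on."""
    entry = NEXT_STEPS.get(diagnosis.lower())
    if entry is None:
        return f"{diagnosis}: please discuss this result with a doctor."
    specialist, about = entry
    return f"{diagnosis}: {about}. Consider seeing a {specialist}."

print(explain("migraine"))
```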

Until these issues are fixed, there is one major way medical chatbots can compensate for medical staffing shortages: filling in for their human counterparts on routine tasks that don’t require insight, empathy, or significant medical knowledge. We can rely on current ‘bots for things like scheduling appointments and sending out bills. Automating these tasks saves time for healthcare providers without putting patients at risk of, say, an inaccurate diagnosis. (A toy example of that kind of routing follows.)
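As a rough illustration, a ‘bot limited to routine work might triage messages with a simple intent check and hand anything medical to a person. The intents and keywords here are assumptions made up for the example.

```python
# Toy intent router: automate scheduling and billing, escalate the rest.
ROUTINE_INTENTS = {
    "schedule": {"appointment", "reschedule", "book"},
    "billing": {"bill", "invoice", "payment"},
}

def route(message: str) -> str:
    """Return the automated flow for a message, or 'human' if it
    looks like anything medical (symptoms, diagnoses, medication)."""
    text = message.lower()
    for intent, keywords in ROUTINE_INTENTS.items():
        if any(keyword in text for keyword in keywords):
            return intent
    return "human"

print(route("Can I book an appointment for next Tuesday?"))  # schedule
print(route("I have a question about my invoice."))          # billing
print(route("My headache is getting worse."))                # human
```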

In the meantime, tech developers will continue to try to program the perfect medical chatbot. If they succeed, one day our lives will be like the movies…at least in that one way.

Alysa Salzberg