Recently, health care providers have started using artificially intelligent chat bots to guide patients through routine intake and other tasks normally performed by staff. The bots can ask patients about basic symptoms, verify health insurance, and follow up after visits. They have great potential to increase access and reduce costs for health care providers. But one of the key challenges to their successful adoption is maintaining patient engagement: how do you get patients to talk to software robots about sensitive medical information? It turns out the answer is empathy.
GYANT, a San Francisco-based, AI-enabled care navigation company, has found that the key to keeping a patient engaged with a chat bot is the emotional appropriateness of its responses to the patient's answers, something called algorithmic empathy. If a patient says they have cancer and an AI bot responds, "That's too bad," as if the patient had the flu, the patient will likely get frustrated and disengage. But if the bot gives a more emotionally appropriate response, the patient will be more likely to keep engaging.
Why does maintaining engagement through empathy matter? Because it leads to more customers and better outcomes. After sifting through patient feedback, one health care provider found that patient loyalty consistently boiled down to three factors: communication, care coordination, and empathy. And with the shift to patient-centric health care and consumerism among patients, patient satisfaction matters more than ever to providers' bottom lines. Even Medicare reimbursement is now partially tied to patient satisfaction scores from the Hospital Consumer Assessment of Healthcare Providers and Systems (HCAHPS) survey.
And ultimately, patient satisfaction has been associated with better outcomes. In theory, greater engagement and access lead to better adherence to doctors' instructions and medications, earlier identification of medical issues before they become expensive to treat, and better overall monitoring of patients whose interaction isn't limited to a single in-clinic visit. The research is beginning to validate this.
In the quest to increase patient satisfaction, many providers are using bots to interact with patients and provide on-demand access and answers. Now they face the challenge of humanizing these bots, so that empathy is not sacrificed in the pursuit of greater access.
But if empathy is key, how do you teach it to a robot? It can be hard enough for humans to learn. Enter GYANT, whose co-founders Pascal Zuta and Stefan Behrens brought their understanding of empathy from the gaming world to health care; working in games had shown them which types of interactions consumers respond to best.
GYANT found that engagement increases when patients are given the right feedback: feedback that reflects listening and understanding, which builds trust. How did they do it? By clustering patient responses into categories such as "concerning, but not dangerous" or "frustrating, but not concerning." If a response indicates a dangerous scenario, the bot is programmed not to reply with a flat "I am sorry to hear that," which would suggest the bot isn't really listening and prompt the patient to disengage.
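GYANT has not published its implementation, but a minimal sketch of the idea, severity-aware response selection, might look like the Python below. The cluster names come from the article; the keyword rules, response templates, and function names are illustrative assumptions standing in for a trained clustering model.

```python
# Minimal sketch of severity-aware response selection. The categories echo
# the article; the keywords and templates are illustrative assumptions, not
# GYANT's actual model.

from enum import Enum

class Severity(Enum):
    DANGEROUS = "dangerous"
    CONCERNING = "concerning, but not dangerous"
    FRUSTRATING = "frustrating, but not concerning"

# Toy keyword rules standing in for a trained classifier.
KEYWORDS = {
    Severity.DANGEROUS: {"chest pain", "cancer", "can't breathe"},
    Severity.CONCERNING: {"fever", "rash", "dizzy"},
    Severity.FRUSTRATING: {"itchy", "runny nose", "tired"},
}

# Emotionally matched templates for each severity cluster.
RESPONSES = {
    Severity.DANGEROUS: "That sounds serious. Let's get you to a clinician right away.",
    Severity.CONCERNING: "I understand that's worrying. Let me ask a few more questions.",
    Severity.FRUSTRATING: "That sounds uncomfortable. Let's see what we can do.",
}

def classify(message: str) -> Severity:
    """Map a patient message to a severity cluster (keyword stand-in)."""
    text = message.lower()
    for severity, words in KEYWORDS.items():
        if any(word in text for word in words):
            return severity
    return Severity.FRUSTRATING  # default to the mildest cluster

def respond(message: str) -> str:
    """Pick a reply whose emotional register matches the severity."""
    return RESPONSES[classify(message)]

print(respond("I was just told I have cancer"))
# -> "That sounds serious. Let's get you to a clinician right away."
```

The point of the design is the lookup at the end: the bot never composes sympathy from scratch; it selects from replies whose emotional weight was matched to the severity cluster in advance.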
GYANT’s data shows it works. In one of their studies, a traditional method of outreach, phone calls, had an engagement rate of 55%. After they coded their bot for empathy, the engagement rate rose to 82%. Not bad. Engagement actually increased when the outreach came from non-human bots.
But bots that can improvise raise thorny legal challenges. If a patient's symptoms indicate the likelihood of a serious condition and the bot incorrectly responds as if it's benign, it could give the patient a false sense of security. The bots must therefore have a robust escalation procedure that transfers the conversation to a doctor whenever there is ambiguity.
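In code, such an escalation rule can be as simple as a confidence gate layered on the classifier sketched above. The threshold, the confidence-reporting classifier, and the handoff stub here are hypothetical, not GYANT's actual procedure.

```python
# Hypothetical escalation gate. Assumes a classifier that returns both a
# severity cluster (Severity, from the sketch above) and a confidence score.

CONFIDENCE_THRESHOLD = 0.85

def escalate_to_clinician(message: str) -> str:
    # In production this would route the conversation to a human clinician;
    # here it is a stub.
    return "Let me connect you with a member of our care team."

def triage(message: str, classifier) -> str:
    severity, confidence = classifier(message)  # e.g. (Severity.DANGEROUS, 0.60)
    # Never improvise on dangerous or ambiguous input: hand off instead.
    if severity is Severity.DANGEROUS or confidence < CONFIDENCE_THRESHOLD:
        return escalate_to_clinician(message)
    return RESPONSES[severity]
```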
These platforms should also make clear to the patient when they are talking to a bot rather than a human, so the patient does not take the bot's responses as a diagnosis. Otherwise, the bot could be classified as a medical device and become subject to FDA regulations, among other regulatory pitfalls. By contrast, if the bots merely help inform a licensed professional's ultimate decision, they can generally avoid being classified as medical devices.
Thus, once you have a humanized robot that can respond empathetically to a patient, the challenge becomes avoiding liability and regulatory scrutiny when you have less control over what your bot will say. This tightrope walk between empathy and liability has likely kept many health care providers from using this channel of automated communication. But the shift toward consumerism and value-based care will eventually compel providers to adopt it. That's good news, because it is a fundamental, and sorely needed, change to health care.