Whether it’s the website of our bank, insurance company, doctor’s office, warranty provider, or even an online store, it seems increasingly difficult these days to get the direct, personal help we need.
More often than not we find ourselves frustrated by a help-desk chat bot or an automated phone system that forces us through multiple menus and decision trees before we can reach a human, if we ever actually do.
A popular video-conferencing platform recently required a software update. You couldn’t update unless you logged into your account, but you couldn’t log into your account unless you updated. The help-desk chat bot provided instructions for updating the software after logging in but could not explain how to log in before updating. Only high-end corporate accounts spending a minimum of $200 per month were eligible for “live” tech support, so the average user could not get help, even by calling the telephone help desk, which also used an automated system. The only way to reach a live person was to select the sales team from the phone menu.
Experiences like this are what the average person thinks artificial intelligence is all about. According to an October 2023 global study on trust in artificial intelligence, 85 percent of respondents believed that AI could provide significant benefits, but 61 percent said they were wary of trusting the technology.
Trust is critical in any industry, but nowhere is it more important than in the relationship between a patient and a healthcare professional.
The study, led by KPMG with the University of Queensland, surveyed thousands of people in 17 countries and found that half of respondents felt they didn’t understand AI or know when and how it is used, especially in healthcare. A smaller study by Carta Healthcare found that 65 percent of patients said they wanted their healthcare providers to explain how they use AI.
The analytical capabilities of advanced technologies make them extremely useful for sifting through massive amounts of information and identifying patterns and anomalies. These systems were designed to present the results of their analyses in an understandable form so that a human could review the output and decide what actions to take. Artificial intelligence goes further, deriving probabilities and suggested actions from that data, which makes it well suited to assisting human decision making.
There is no doubt about the amazing capabilities of artificial intelligence or the benefits to be gained from using these technologies in healthcare. However, while patients have faith that AI can help improve diagnostic accuracy, a majority worry that it will decrease access to their doctor.
And that’s where the trust comes in.
Access to healthcare has changed dramatically in recent years as the United States and other countries continue to experience shortages of primary care physicians, nurses, and specialists. Because of the high demand for access, the average visit with a primary care physician lasts no more than about 15 minutes, leaving many patients concerned that their doctor doesn’t have time to pay attention and truly understand their individual situation. This is especially true when they may see a different doctor or physician assistant each time they visit a medical practice.
Trusted relationships matter in any environment, and deep trust between two people can be difficult to develop. Whatever the business or type of engagement, there needs to be some level of trust.
People don’t put their money into a bank they don’t trust. They don’t buy cars from dealers they don’t trust. They don’t attempt to make friends with people they don’t trust. And they certainly will not follow medical advice from a doctor they don’t trust.
Obviously, people want a knowledgeable healthcare provider with access to the resources needed to diagnose, treat, and even cure whatever affliction or injury they may have. But, according to Dr. Vincent Covello, founder of the Center for Risk Communication, “People need to know that you care before they care what you know.”
Over and above knowledge and skill, the first thing a patient wants from their healthcare provider is empathy. There has been considerable scientific debate about whether empathy is an inherent human trait or a learned skill, with more recent research indicating that empathetic behavior can be learned. And while artificial intelligence continues to evolve, perhaps to the point of replicating empathetic behavior, patients may find machine empathy difficult to accept.
People expect empathy from another human being, not from a machine. While AI can certainly help diagnose illnesses and suggest treatment regimens, people want to know that another caring, empathetic human being has reviewed all the options and is ensuring they get the best and most appropriate care.
A great deal of trust is needed between patient and doctor for a patient to share their most personal information and vulnerabilities, especially when face-to-face contact is limited, and trust takes considerable human contact to develop. Trusted, open communication between a patient and their physician is essential to the most effective diagnosis, treatment, and ongoing care.
It is human-to-human interaction – even via telehealth – where that all-important trust is developed.
The world of medicine is enormous, and keeping up with the latest information on pharmaceutical advances, emerging diseases, and treatments can be nearly impossible. AI technology can be extremely helpful in ensuring that busy medical providers don’t overlook symptoms, don’t base decisions on preconceived notions, and are aware of the latest treatment options.
But just like any technology, AI must be used responsibly, ethically and with human involvement.
Humans still need to make the empathetic connections to develop and maintain the trust needed for successful medical outcomes.