AI is everywhere – dominating headlines, business meetings, and even conversations at the dining room table. But many of our customers and partners are healthcare and pharma leaders, not AI experts, and they have questions about the technology.

Machine learning, deep learning, large language models (LLMs), generative AI … these terms are all interconnected, but that doesn’t make them any less complex, or make it any easier to understand what they mean for healthcare operations. The truth is, unless you’re working directly with AI technologies, it can be difficult to read between the lines and comprehend what’s actually possible. 

That’s what we’re here for. Below, you’ll learn what you need to know to move past the AI hype and separate fact from fiction.

Fact or fiction? AI never makes mistakes

FICTION: AI isn’t perfect. Like humans, it does make mistakes, and it won’t always be 100% accurate. While AI is constantly learning, and its accuracy improves over time as it has more data to work with, there’s still risk. Unexpected results like hallucinations, which we discuss below, can also occur if the right measures aren’t taken.

Fact or fiction? Healthcare AI should be cheap

FICTION: There’s a common assumption that because automation can take on tasks from humans, it should be significantly cheaper than human labor. In fact, this isn’t exactly the case. Estimates suggest AI could lead to savings of 5-10% for the healthcare industry as a whole, but there are reasons why those figures are conservative.

We can look to the self-driving car as an apt analogy – the cost of taking an autonomous car to your destination is in line with the cost of a human-driven rideshare, because of everything that goes into building something safe and effective. At Infinitus, for example, building domain-specific models and training them extensively require resources, as do the human guardrails that are necessary for all the reasons outlined above.

The savings from AI implementation take the form of time given back to busy healthcare workers, shorter time to care for patients, fewer accounts receivable days for providers, and improvements in data accuracy that can prevent wrongful claim denials or delays in care for patients.

Fact or fiction? ChatGPT can automate phone calls from providers to payors

FICTION: ChatGPT is a text-based chatbot, so as it currently exists, it cannot automate phone calls. But for the sake of being thorough, let’s imagine a phone-based system powered by GPT-4 … for complex payor-provider calls, the answer is still no.

The ability of GPT-4 (or any general-purpose AI model, really) to automate a task depends on a combination of the model’s capabilities and the complexity of the task. In the case of automating calls from healthcare providers to payors, GPT-4 would need to be trained on data (e.g., the number to call, how to navigate the IVR, the questions to ask the representative) that isn’t publicly available. Infinitus, for example, is able to automate calls because of our vast knowledge base and experience working directly with payors and specialty medications.

Fact or fiction? It’s important to keep humans in the loop in healthcare AI

FACT: As advanced as AI is, it’s far from perfect, and mistakes happen. That’s why human collaboration is critical. Not only can humans serve as important guardrails, they can actually help improve automation. As an example: When the Infinitus digital assistant encounters uncertainty, it can raise a digital “hand” to have a human reviewer jump in. The data that results from that human’s review is in turn used to help the model refine its abilities and “learn” how to handle similar scenarios in the future.
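To make that loop concrete, here’s a minimal sketch (in Python) of one way a confidence threshold could trigger human review, with the reviewed result saved as a future training example. The names, threshold, and flow are hypothetical and simplified for illustration – not Infinitus’ actual implementation.

```python
from dataclasses import dataclass

# Illustrative sketch only: the names, threshold, and review flow below are
# hypothetical and simplified, not Infinitus' actual implementation.

CONFIDENCE_THRESHOLD = 0.85  # below this, raise a digital "hand" for review

@dataclass
class ExtractedAnswer:
    field: str         # e.g. "prior_auth_status"
    value: str         # what the model believes it heard
    confidence: float  # the model's own confidence score, 0.0 to 1.0

training_examples = []  # reviewed answers kept for future model refinement

def human_review(answer: ExtractedAnswer) -> str:
    """Stand-in for a human reviewer confirming or correcting a value."""
    return answer.value  # in practice, a person supplies this

def resolve_answer(answer: ExtractedAnswer) -> str:
    """Trust the model when it is confident; otherwise escalate to a human."""
    if answer.confidence >= CONFIDENCE_THRESHOLD:
        return answer.value
    reviewed = human_review(answer)
    # The reviewed example is saved so the model can "learn" from it later.
    training_examples.append((answer, reviewed))
    return reviewed

# A low-confidence answer gets routed to a person instead of being trusted.
print(resolve_answer(ExtractedAnswer("prior_auth_status", "approved", 0.62)))
```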

Fact or fiction? AI bias is always a concern 

FACT … with exceptions: A common concern we hear from our customers is bias – specifically, the guardrails we have in place to avoid creating or reinforcing unfair bias in our AI. While we understand why this is top of mind – bias can be an issue in some healthcare AI applications – the risk of bias in the automation of back-office healthcare tasks is very, very low, since care decisions are not being made by the software. When AI is used to automate prior authorization status checks, for example, it isn’t up to the AI whether the prior authorization is approved or denied; the AI is simply collecting data. The actual decision-making in the process is left to humans.

Fact or fiction? All AI hallucinates

FICTION: When an LLM or generative AI application does not know the answer to a question, it can “hallucinate,” or make up facts and details that are not true. But not every AI system generates free-form answers. Use cases that require high accuracy, like what we do at Infinitus, are sometimes better suited to machine learning or rules-based approaches, where the possible answers are limited to a smaller set of options – which makes hallucination far less likely.
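To illustrate the difference, here’s a minimal sketch (in Python) of a rules-based approach, assuming a small, hypothetical set of prior authorization statuses. Because the output is restricted to known values, the system flags anything unfamiliar for review instead of inventing an answer.

```python
# Illustrative sketch only: the phrases and statuses are hypothetical.
# A rules-based mapping like this constrains output to a small, known set of
# answers, so it cannot invent ("hallucinate") a status that doesn't exist.

ALLOWED_STATUSES = {"approved", "denied", "pending", "not required"}

PHRASE_TO_STATUS = {
    "the auth was approved": "approved",
    "it has been denied": "denied",
    "it is still in review": "pending",
    "no prior authorization is needed": "not required",
}

def classify_status(utterance: str) -> str:
    """Map a transcribed phrase to one of the allowed statuses.

    Anything unrecognized is flagged for review rather than guessed;
    a free-form generative model, by contrast, could produce an answer
    outside the allowed set entirely.
    """
    normalized = utterance.strip().lower()
    return PHRASE_TO_STATUS.get(normalized, "needs human review")

print(classify_status("It is still in review"))  # -> pending
print(classify_status("Let me transfer you"))    # -> needs human review
```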

Fact or fiction? Generative AI is replicating human skills

FACT: AI is indeed replicating human skills in many communication form factors. Today, AI-generated answers in text form are high quality and, when delivered via text-to-speech solutions, can even convey empathy. It’s important to note here, however, that in healthcare administration, AI is at its best when it’s helping humans – not replacing them. Infinitus’ digital assistant, for example, is designed to help busy healthcare workers save time; it’s not intended to completely replace a provider’s workforce.

To keep up with the latest developments in healthcare AI, check out more on our blog. Or, if you’re ready to learn how AI can help your back-office processes, speak to us today.