We’re often asked whether the Infinitus digital assistant uses ChatGPT or, more generally, whether ChatGPT can automate calls from healthcare providers to payors the way Infinitus does. After all, based on the media coverage OpenAI’s chatbot has received, it almost seems as if there’s nothing ChatGPT can’t do.

It’s worth noting that ChatGPT on its own can’t automate a phone call, which is a key requirement for the work we do at Infinitus and for completing benefit verifications in general. But could ChatGPT’s capabilities in processing and generating text be used to, say, answer prompts from a payor agent or ask an agent the right follow-up questions?

As it turns out, no. Healthcare settings demand a level of accuracy and factual grounding that ChatGPT can’t guarantee, so it alone can’t be used to automate benefit verification.

Here’s a more thorough, three-part explanation if you’d like to understand why:

1. ChatGPT doesn’t perfectly align with the demands of healthcare settings

ChatGPT, or more specifically the underlying GPT-3 or GPT-4 model, is good at responding to questions or requests with language that sounds natural, and it’s good at answering with general knowledge: information that is publicly available on the internet. In healthcare, however, the specific patient, provider, and insurance plan information a use case requires is largely not public, rendering responses from ChatGPT far less useful.

Additionally, accuracy is paramount in healthcare, and ChatGPT can sometimes “hallucinate”: say things that seem real and sound convincing but are actually inaccurate or not factual. Given the potential impact on a patient’s health journey, relying solely on ChatGPT’s responses is a serious concern.

2. ChatGPT can’t provide certain custom logic and domain-specific information

Certain custom logic and domain-specific dialogue information, both of which are crucial in healthcare settings, simply aren’t available with ChatGPT.

For example, the Infinitus digital assistant has knowledge, learned from over 1 million calls, that isn’t publicly available or easily accessible: the right number to call, how to navigate a payor’s IVR, and the questions to ask a payor representative based on treatment or insurance plan.
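To make that concrete, here’s a minimal sketch of how payor-specific call knowledge could be encoded as a lookup structure. Everything here (the payor name, phone number, IVR steps, and question lists) is an invented placeholder, not Infinitus’s actual data or design:

```python
from dataclasses import dataclass

@dataclass
class PayorPlaybook:
    phone_number: str                # the right number to call
    ivr_path: list[str]              # how to navigate the payor's IVR
    questions: dict[str, list[str]]  # follow-up questions keyed by treatment type

# Hypothetical entries; knowledge of this kind is accumulated across
# many calls and isn't publicly available for a model to learn from.
PLAYBOOKS = {
    "ExamplePayor": PayorPlaybook(
        phone_number="800-555-0100",
        ivr_path=["press 2 for providers", "press 1 for benefits"],
        questions={
            "infusion": [
                "Is prior authorization required?",
                "Is there a site-of-care restriction?",
            ],
        },
    ),
}

def questions_for(payor: str, treatment: str) -> list[str]:
    """Look up the payor- and treatment-specific questions to ask."""
    playbook = PLAYBOOKS.get(payor)
    return playbook.questions.get(treatment, []) if playbook else []
```

A general-purpose model has no way to recover this kind of operational detail, because it was never on the public internet to begin with.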

3. Infinitus uses large language models with necessary guardrails

While Infinitus does use large language models (LLMs), those models are incorporated in a controlled manner within a wider system of components. 

Infinitus runs a production “pipeline” of components and uses LLMs for some of them; for example, an LLM can help with autolabeling data. However, it’s important to emphasize that wherever LLMs are incorporated, we have implemented the guardrails necessary to ensure accuracy.
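As one illustration of what a guardrail can look like in practice, the sketch below constrains a model’s free-text output to a closed set of allowed labels before anything downstream trusts it. The label set, the `call_llm` parameter, and the fallback behavior are assumptions made for the example, not a description of Infinitus’s actual pipeline:

```python
from typing import Callable

# Closed vocabulary the pipeline is allowed to act on (illustrative).
ALLOWED_LABELS = {"covered", "not_covered", "prior_auth_required", "unknown"}

def label_utterance(utterance: str, call_llm: Callable[[str], str]) -> str:
    """Ask an LLM to classify an utterance, but never accept a label
    outside the allowed set."""
    prompt = (
        "Classify the payor representative's statement into exactly one of "
        f"{sorted(ALLOWED_LABELS)}. Statement: {utterance!r}. "
        "Respond with the label only."
    )
    raw = call_llm(prompt).strip().lower()
    # Guardrail: reject hallucinated or malformed outputs instead of
    # letting them propagate into benefit-verification results.
    return raw if raw in ALLOWED_LABELS else "unknown"

def fake_llm(prompt: str) -> str:
    # Stand-in for a real model call, so the sketch runs end to end.
    return " Prior_Auth_Required "

print(label_utterance("You'll need prior auth for that drug.", fake_llm))
# -> prior_auth_required
```

Output validation like this is only one layer; schema checks, confidence thresholds, and human review are other common guardrails placed around LLM outputs.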

Learn more about how Infinitus saves time for busy healthcare workers by exploring what the Infinitus digital assistant can do, or by checking out a demo.