The methodology of AI bias testing at Infinitus

At Infinitus, we develop and use artificial intelligence (AI) responsibly, which includes avoiding the creation or reinforcement of unfair bias. We are optimistic that our AI and machine learning (ML) techniques are fair and unbiased, but we recognize that advanced technologies can raise concerns and challenges.

Infinitus provides software-as-a-service (SaaS) applications and APIs for performing various outbound calls, including benefit verification, prior authorization follow-up, and prescription savings and transfers. Because the vast majority of use cases Infinitus supports involve business-to-business information gathering (e.g., calling a payor call center on behalf of a provider), rather than diagnosis or recommendations for care, the risk of bias is very low. Even so, we have developed, and are continuously improving, an AI bias testing methodology to address these challenges.

The Infinitus digital assistant is an AI/ML-powered tool that makes outbound phone calls and extracts answers from the resulting audio in a structured format, such as JSON.

Here are some specific things we do to avoid creating or reinforcing unfair bias in our AI:

- We train our AI models on a variety of data sources, so that they are not biased toward any one particular group of people.
- We monitor our AI models for signs of bias, and take steps to correct any biases that we find.
- We educate our employees about AI bias and how to avoid it.
- We are committed to being transparent about our efforts to avoid AI bias.
- We are committed to developing and using AI responsibly, in a manner that benefits everyone.

Training methodology

Our customers send us protected health information (PHI), such as patient demographics, provider/practice demographics, CPT codes, diagnosis codes, and other data, so that we can perform services on their behalf. This data is secured according to HIPAA guidelines.
However, no PHI is used to train the digital assistant or our machine learning models. Infinitus reduces bias in its machine learning models by training them only on transcripts of past calls that contain no PHI; any transcripts that do contain PHI are de-identified first. Because the models are never trained on PHI, the risk of bias based on factors such as age, race, or gender is reduced.

De-identification, along with the selection of example transcripts for training, is conducted by trained linguists who ensure that the training data includes examples across all protected categories. The linguists can also create synthetic data beyond past examples to train the machine learning systems. This helps ensure that the models are trained on a representative dataset and that the risk of bias is minimized.

Testing methodology

In addition to mitigating bias while training ML models, Infinitus is committed to reducing bias by continuously testing its models through a combination of human and ML-based reviewers. Human reviewers evaluate the models for accuracy, bias, and fairness, and identify any potential problems with the models, such as errors or biased outputs. The reviewers are selected from a diverse group of people with different backgrounds and experiences, which helps ensure that the models are evaluated from a variety of perspectives.

The human reviewers are trained in how to evaluate machine learning models. They work closely with the linguistics team so that cases where the machine learning system is inaccurate or biased can be quickly corrected by adding the right training data for the models. Additionally, human reviewers are randomly assigned to review models or tasks, which helps keep the reviews unbiased, and a second-level audit team gives the human reviewers feedback on their reviews.
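An evaluation of the kind described above can be thought of as comparing model accuracy across groups and flagging disparities for human review. The sketch below is illustrative only: the group labels, the answer values, the threshold, and the records are all hypothetical, and this is not Infinitus's actual review pipeline.

```python
from collections import defaultdict

def accuracy_by_group(records):
    """Compute extraction accuracy per group.

    Each record is a dict with a group label, the model's extracted
    answer, and the reviewer-verified expected answer.
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for r in records:
        total[r["group"]] += 1
        if r["predicted"] == r["expected"]:
            correct[r["group"]] += 1
    return {g: correct[g] / total[g] for g in total}

def flag_disparity(records, max_gap=0.05):
    """Flag the evaluation run for human review when per-group
    accuracy differs by more than max_gap (an illustrative threshold)."""
    acc = accuracy_by_group(records)
    return max(acc.values()) - min(acc.values()) > max_gap

# Synthetic evaluation records -- not real call data.
records = [
    {"group": "A", "predicted": "covered", "expected": "covered"},
    {"group": "A", "predicted": "denied",  "expected": "denied"},
    {"group": "B", "predicted": "covered", "expected": "denied"},
    {"group": "B", "predicted": "covered", "expected": "covered"},
]
```

Here `accuracy_by_group(records)` yields 1.0 for group A and 0.5 for group B, so `flag_disparity(records)` returns True and the run would be escalated for review.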
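The random assignment of reviewers with a second-level audit, as described above, can be sketched as pairing each review task with a primary reviewer and a distinct auditor. The function and identifiers below are hypothetical illustrations, not a description of Infinitus's internal tooling.

```python
import random

def assign_reviewers(tasks, reviewers, seed=None):
    """Randomly pair each review task with a primary reviewer and a
    distinct second-level auditor, so no reviewer audits their own work."""
    rng = random.Random(seed)  # seed only for reproducible demos
    assignments = []
    for task in tasks:
        # sample() draws two distinct reviewers without replacement
        primary, auditor = rng.sample(reviewers, 2)
        assignments.append({"task": task, "reviewer": primary, "auditor": auditor})
    return assignments

# Hypothetical task and reviewer identifiers.
assignments = assign_reviewers(["call-001", "call-002"], ["r1", "r2", "r3"], seed=7)
```

Drawing the pair without replacement guarantees the auditor is never the same person as the primary reviewer, which preserves the independence of the second-level audit.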
Additional details

As we get closer to contracting, we are happy to share detailed testing results and action plans with our customers to demonstrate our intent to stay fair and unbiased. For more information, current customers can speak with their assigned account executive or contact us.