2023 will go down in history as the year artificial intelligence in its current incarnation went mainstream, and adoption won’t be slowing down this year. To many, the technology’s arrival – and the pressure to incorporate it at work – felt sudden. Regardless, the healthcare industry is in dire need of the kind of help AI can provide: as HealthTech Magazine pointed out, the average health system produces about 50 petabytes of data a year, most of it unstructured. All of that hard-to-process data helps explain why administrative costs make up more than 20% of total US healthcare costs.

However, as John Moore III writes in Chilmark Research’s report Building Responsible AI in Healthcare, “An industry that literally treads the line between life and death is not the place to experiment with immature technologies.” That’s why it’s critical to ensure that any AI you put to work is used responsibly.

But what does that actually mean? We’ve broken our interpretation up into key areas below:

Avoiding bias

This is something we take extremely seriously at Infinitus, and it’s something any healthcare administrator considering an AI solution should take seriously, too. Building responsible AI means doing everything possible to reduce bias, reduce risk, and build trust.

To reduce the risk of bias at Infinitus, for example, we:

- use a variety of data sources to train our AI models so that they are not biased toward any one particular group of people;
- monitor our AI models for signs of bias, and take steps to correct any biases we find; and
- educate our employees about AI bias and how to avoid it.

Any AI solution you evaluate should be committed to developing and using AI responsibly, with tactics like these.
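To make the monitoring piece concrete, here’s a minimal sketch of what an automated bias check can look like: comparing a model’s accuracy across demographic groups and flagging gaps above a threshold. The records, group names, and `DISPARITY_THRESHOLD` value are illustrative assumptions, not a description of Infinitus’s actual pipeline.

```python
from collections import defaultdict

# Hypothetical evaluation records: (demographic_group, model_prediction, ground_truth)
records = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 0),
    ("group_b", 1, 1), ("group_b", 0, 1), ("group_b", 1, 0),
]

DISPARITY_THRESHOLD = 0.10  # flag if group accuracies differ by more than 10 points

def per_group_accuracy(rows):
    """Compute accuracy separately for each demographic group."""
    correct, total = defaultdict(int), defaultdict(int)
    for group, pred, truth in rows:
        total[group] += 1
        correct[group] += int(pred == truth)
    return {g: correct[g] / total[g] for g in total}

accuracies = per_group_accuracy(records)
gap = max(accuracies.values()) - min(accuracies.values())
if gap > DISPARITY_THRESHOLD:
    print(f"Possible bias: accuracy gap of {gap:.2f} across groups {accuracies}")
```

In practice, teams track richer fairness metrics (false-positive rates, calibration, and so on) and investigate flagged gaps before correcting training data or model behavior.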

Careful vendor selection and management

Responsible AI isn’t just about the company that’s providing you with an AI solution; it’s about ensuring any vendors they’re working with are committed to that vision, as well. 

As an example, Infinitus has a formal vendor selection and management process that includes privacy, security, and ethical considerations. The team charged with conducting these evaluations ensures that any vendors Infinitus uses internally follow the same standards Infinitus holds itself to. It’s important to confirm that any AI solution you use is similarly careful in selecting – and managing – its own vendors.

Continuous monitoring

Responsible AI requires ongoing assessment and mitigation of any security or privacy risks. Human monitoring of a machine learning model, even after it is approved for use, is critical. Any AI provider you bring on board should be continuously evaluating its technologies for accuracy, and should have a plan in place should a model’s output fall below a desired threshold.

To achieve this, Infinitus has a Governance, Risk, and Compliance (GRC) council that meets monthly. The GRC council is continuously evaluating, and when necessary updating, our approach to bias testing and reduction. 
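As a rough illustration of threshold-based monitoring, the sketch below computes a rolling accuracy over recent evaluations and flags the model for escalation when it dips below a floor. The `ACCURACY_FLOOR` value and the escalation action are hypothetical; real thresholds and responses depend on the use case and the provider.

```python
import statistics

ACCURACY_FLOOR = 0.95  # illustrative threshold; real values depend on the use case

def check_model_health(recent_scores, floor=ACCURACY_FLOOR):
    """Flag a model for human review when its rolling accuracy dips below the floor."""
    rolling = statistics.mean(recent_scores)
    if rolling < floor:
        # In practice this might page an on-call engineer or route work to humans
        return {"status": "degraded", "rolling_accuracy": rolling, "action": "escalate"}
    return {"status": "healthy", "rolling_accuracy": rolling, "action": "none"}

print(check_model_health([0.97, 0.96, 0.91, 0.90]))  # -> degraded, escalate
```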

Commitment to education – for everyone

Are the teams charged with building the AI receiving ongoing education in privacy, data safety and security, and even HIPAA? Building responsible AI in healthcare means that those training machine learning models stay up to date on the latest best practices in these areas.

As important as it is to ensure that teams creating AI models are receiving ongoing training, it’s just as important to ensure workers who will be affected by AI on the job understand the system being used and how it functions. Transparency and education are key to helping employees feel comfortable and confident with the introduction of AI into their work. 

Employing guardrails at every level

Responsibility extends to reducing risks and unintended consequences, which is why we believe it’s critical to keep humans in the loop and to build guardrails into AI models. At Infinitus, human reviewers ensure that the data collected by our digital assistant is correct, and a person can be pulled into any conversation if the digital assistant runs into a roadblock. In healthcare, organizations should maintain control over the data living in their AI models, which in turn allows guardrails to be built at every level.
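Here’s a simplified sketch of what a human-in-the-loop guardrail can look like in code: routing any low-confidence output from a digital assistant to a human reviewer rather than accepting it automatically. The `CONFIDENCE_FLOOR`, field names, and routing labels are illustrative assumptions, not a description of Infinitus’s production system.

```python
from dataclasses import dataclass

CONFIDENCE_FLOOR = 0.80  # illustrative; below this, a human takes over

@dataclass
class AssistantTurn:
    transcript: str
    extracted_value: str
    confidence: float

def route_turn(turn: AssistantTurn) -> str:
    """Send low-confidence outputs to a human reviewer instead of auto-accepting them."""
    if turn.confidence < CONFIDENCE_FLOOR:
        return "human_review"   # a person verifies or joins the conversation
    return "auto_accept"        # high-confidence data still gets spot-checked downstream

turn = AssistantTurn("Plan covers CPT 99213", "covered", confidence=0.62)
print(route_turn(turn))  # -> human_review
```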

Building trust is paramount. According to survey data from research firm Metrigy, a third of consumers say they’ll never fully trust AI. Transparency – both across your organization and, if appropriate, with patients – about how you’re leveraging the technology can go a long way. 

Ensure your AI strategy is responsible

We’ve created a guide to help healthcare leaders like you learn everything you need to know to move past the AI hype and understand how to leverage AI to solve some of the greatest challenges you’re facing at work. In Separating AI Fact From Fiction, you’ll dive deeper into the importance of responsible AI and learn how to distinguish AI’s genuine possibilities from the hype.