Tough healthcare AI change management questions, answered

NOTE: This post is an excerpt of our ebook Leading with trust: A people-first guide to AI integration in healthcare. Click here to access the full publication.

For our guide to AI change in healthcare, we asked some of our customers to submit their most pressing questions about AI adoption and change management. Infinitus then sat down with Tim Creasey, CIO of global change management leader Prosci, to get his perspective. In the conversation that follows, Creasey addresses five of those customer questions. We hope his answers offer practical guidance and candid insights your team can apply as you navigate this transformational moment.

1. My concern is around trust and having folks and teams feel comfortable. It's a cross-industry concern. Do you have any specific recommendations?

Building trust in AI starts with transparency, clarity, and open communication. Just as importantly, team leaders should create an environment where employees feel safe raising concerns – concerns of both "we're going too fast" and "we're going too slow" – and know those concerns will be addressed.

In successful rollouts, frontline teams report minimal but healthy skepticism – an important distinction. Responsible AI use doesn't mean blind trust; it means understanding the tool's capabilities and built-in safeguards, and continuously validating outputs in context. But it doesn't mean second-guessing or redoing all the work the AI completes, either.

The "black box" perception of AI is a real and valid concern, especially when the people directly using the technology aren't involved in selecting it or shaping how it's used. To address this, I tend to recommend reframing AI not as an all-knowing oracle, but as a digital collaborator: a probabilistic tool that generates responses based on patterns, not magic.

2. How do we avoid "shadow work," where AI automates a task, but staff still do it out of habit or lack of trust?
Shadow work often happens because the team hasn't clearly defined which tasks are truly being handed off to AI, or because the team doesn't yet trust the AI. Using a framework like "my work / with me work / for me work," which we've laid out in the guide, helps. If you decide up front what belongs in the "for me" column – the work AI will do entirely – and you can demonstrate that the AI's results are trustworthy, you create clarity and reduce the urge to redo it.

Leaders can also help by showing teams what to do with the time they free up. If a task that used to take two hours now takes 15 minutes, that extra time should be redirected to high-value "human" work in the "my work" column. When people know both what they're handing off and how they'll reinvest their effort, they're less likely to fall back into old habits.

3. How do we prepare staff for the ways their roles might shift?

The first step is to acknowledge that everything is shifting. I like to compare it to electricity: electricity didn't take jobs, but someone who used electricity could do their work more effectively than someone who didn't. Over time, electricity became ubiquitous – it powered lights, fans, air conditioning, websites, booking systems – and eventually became invisible. AI is heading in that same direction. It will simply become a tool of the trade for all digital information work.

To prepare people, we can use the AI Integration Framework right from the start – mapping out what tasks belong in My Work, "With Me" Work, and "For Me" Work. That helps teams clearly define which responsibilities stay human, which are shared with AI, and which can be fully handed off.

Leaders also have a critical role in easing fears.
It's not enough to say, "Don't worry about AI." You have to give the "because." For example: "Don't worry about AI, because here's how your role will evolve, here's the new work we'll need you focused on, and here's how you'll grow." Without that clarity, people are left more anxious. Preparing staff means both redefining their tasks and painting a clear picture of the future they're stepping into.

4. What's the best way to support continuous learning as the AI evolves and is capable of taking on more over time?

Continuous learning requires a flexible framework. That's why I go back to the AI Integration Framework – My Work, "With Me" Work, "For Me" Work. As new capabilities roll out, tasks can shift between those categories. For example, something that was "with me" today could quickly become "for me" as the technology matures. The framework itself allows us to keep pace with change.

We also have to recognize how fast technical skills expire. Our head of product likes to say that most technical training has a half-life of about 14 weeks, given how quickly AI is advancing. That means it's less about training people once, and more about teaching them to adapt, move fluidly, and update their skills as tools evolve.

5. What other "words of wisdom" about AI and this time of change can you share?

For as long as change has existed, it's always had two sides: the technical side and the people side. The technical side is about designing, developing, and delivering a good solution. But equally important is the people side – helping employees embrace the change, use it, and make it part of how they work. Too often, organizations succeed on the technical side but fail on the people side, and the result is wasted investment. Leaders need to recognize that success comes only when both sides move together.
I like to end my talks with a phrase attributed to Michelangelo late in his life: ancora imparo – "yet I am still learning." Even one of the greatest minds in history, at age 87, working on St. Peter's Basilica, said he was still learning. For me, that captures the mindset we need today. Every moment, every interaction, is a chance to learn and adapt.

That's the real wisdom for this moment: embrace AI with humility, balance the technical and human dimensions of change, and keep learning as we go.