Most health questions aren’t general: What Microsoft’s study reveals about patients’ AI use

A new study published in Nature Health by Microsoft researchers offers one of the most detailed looks yet at how people actually use AI for health-related questions, and the findings should reshape how we think about building clinical AI agents. Analyzing over 500,000 de-identified health conversations with Microsoft Copilot from January 2026, the researchers found that nearly one in five conversations involves personal symptom assessment or condition discussion.

But here’s the number that deserves a closer look: the dominant “general information” category is heavily concentrated on specific treatments and conditions, suggesting that the true share of personally motivated health queries is likely much higher than it appears on the surface. In other words, when someone asks “what causes high blood pressure,” they probably aren’t writing a term paper. They’re worried about their own reading from this morning’s checkup.

This distinction matters enormously. There’s a spectrum between pure health education and personalized medical guidance, and most real patient conversations land somewhere in the messy middle. A patient who asks an AI about metformin side effects while also sharing their A1C results isn’t seeking a Wikipedia summary. They’re seeking care. The challenge is that AI systems designed for general audiences weren’t built to recognize that distinction, let alone act on it responsibly.

This is precisely the gap that clinical AI agents, designed specifically for healthcare workflows, are built to close. At Infinitus, our AI agents operate in a fundamentally different context from a general-purpose chatbot. We work within defined healthcare workflows, including benefit verification, prior authorization, and care gap outreach, where patient intent is never ambiguous.
When a patient is on the line, we know why they’re there, what information is relevant, and what the appropriate next step looks like. That specificity isn’t a constraint; it’s what makes the interaction trustworthy and safe.

But the Microsoft study points toward a future that goes further. The researchers note that personal health queries spike sharply in evening and nighttime hours, precisely when traditional healthcare access is most limited. Patients aren’t waiting until 9 a.m. to worry. They’re searching, asking, and too often getting generic answers that neither address their actual concern nor connect them to someone who can help.

The next evolution of patient-facing AI should do two things well. First, it should surface information from trusted clinical sources rather than the open web: guidelines, formularies, and care protocols that a patient’s own care team would endorse. Second, and more importantly, it should recognize when a patient’s question has crossed from education into a need for personalized guidance, and actively route them to the right local provider, navigator, or clinical resource.

The study also found that one in seven personal health queries concerns someone other than the user: a caregiver asking on behalf of a parent, or a spouse navigating a new diagnosis. This reinforces that clinical AI isn’t just a patient tool; it’s a care coordination tool. The opportunity isn’t only to answer questions better. It’s to make sure the right human is in the loop when it matters most.

The Microsoft Copilot data is a mirror held up to unmet need. At Infinitus, we believe the answer to that need isn’t a smarter chatbot. It’s an AI agent that knows when to hand off, and who to hand off to.

If you’d like to learn more about how our AI agents are meaningfully different from generic chatbots, and why that matters for safety in the real world, let’s find some time to talk.