AI Agents as the GLP-1s of the Health System?
Health care leaders gathered in Scottsdale at the end of April to wrestle with the defining question of the moment: can artificial intelligence (AI)—and AI agents in particular—finally be the silver bullet against health care's administrative bloat? What emerged was less a consensus and more a portrait of an industry holding two truths simultaneously: profound excitement and profound unease.
A Moment of Communal Reckoning
Walking into the Annual Conference, you could feel the tension in the room—not between people, but within them. Health care leaders are holding two perspectives in near-constant collision: genuine excitement about AI's potential to fix a broken system and genuine fear about what happens if it goes wrong.
On one side: the promise. AI could address both the macro failures of the U.S. health care system—patient access, care quality, health equity—and the micro ones: the prior authorizations, the documentation burden, the scheduling chores that consume clinicians and staff. On the other side: the risks. Medical errors induced by AI, cybersecurity vulnerabilities, patient misinformation, job displacement, failed implementations, and wasted capital.
The conversation also surfaced something more personal. AI is forcing a rethinking of professional identity at the deepest level. When AI outperforms physicians on clinical knowledge benchmarks, what is the role of the doctor of the future? Interpreter of complexity? Navigator of tough conversations? Empathetic presence? Perhaps even a designer of AI agents? What stays on the medical school curriculum—and what doesn't? These are not comfortable questions. But they are urgent ones.
Perhaps there are stages of AI acceptance, not unlike the stages of grief.
- Excitement: “Wow, it can do things I never thought a computer could.”
- Denial: “But it won't be able to do this part of my job that gives me a sense of purpose.”
- Shock: “Oh wait, it just did.”
- Anxiety and fear: “What does this mean for me?”
- Testing: “Let me see how I can use this tool to my advantage.”
- Acceptance: “I see the leverage. It's a tool, and I know how to use it.”
The critical difference from grief: AI innovation moves at a pace that sends us cycling through these stages near constantly, with no stable endpoint.
Pick Your Path and Commit
Not every health system is navigating this moment the same way. Three broad archetypes have emerged, each with genuine logic behind it.
Some systems are moving full steam ahead—partnering with major “platform” vendors (specialized AI companies) and, in some cases, building in-house engineering teams to customize solutions across a wide array of use cases. Others are experimenting more broadly, “kissing a lot of frogs to find their prince or princess,” i.e., trying targeted point solutions across specific pain points without committing to a unified architecture. And others still are taking an Epic-first approach, waiting for their primary electronic health record vendor's AI roadmap to mature before building anything proprietary.
It's too early to declare a winner. All three paths have merit—what matters more is whether you execute your chosen path with commitment and bring your teams along with you.
Change Management Is the Unlock
If there was a single phrase that defined the conference, it was this: change management is the unlock.
Technology can be acquired. Governance structures can be designed. ROI models can be built. But for AI-enabled solutions to deliver on their promise, clinicians and staff need to be willing to experiment—and that means helping them work through the stages of shock, anxiety and fear to reach a place of testing and eventual acceptance. One health system described organizing “Prompt-a-thons,” i.e., structured, low-stakes sessions where staff could explore AI tools in a safe environment, guided by peers, without fear of judgment or error.
What became clear is that the CHRO-CIO conversation needs to happen far more intentionally. Human resources and information technology leaders are still largely working in parallel—managing their respective workstreams—when the moment demands they work in concert. The tools are evolving faster than most organizations' ability to manage the change, and that gap is where implementations fail.
The workforce dimension of AI adoption is not a soft consideration. It is, arguably, the central one.
The Rise of Agentic AI
If one application is sitting atop the AI hype cycle right now in health care, it is agentic AI. The conference included a live demonstration of how to build AI agents to tackle administrative tasks—and a deeper discussion of the potential for agents to work together in multi-step, behind-the-scenes workflows, handing off tasks to each other without human intervention at each stage.
The vision is compelling: health systems becoming lean, mean administrative machines—essentially agentic AI as the GLP-1s of health care, tackling administrative bloat. For instance—AI handling scheduling optimization, operating room block time management, chart summarization, revenue cycle functions (denials management, prior authorization, contract dispute matching) and proactive patient finding for high-risk populations—all orchestrated by intelligent agents operating in the background.
But just as the promise of GLP-1 medications in obesity and metabolic disease runs into the real-world challenge of long-term patient adherence, the promise of agentic AI runs into the real-world challenge of long-term organizational monitoring. At the end of the day, constant vigilance is not something humans are particularly good at. Health systems standing up AI agents today will need sustained oversight mechanisms—not just at deployment but over months and years. That discipline is not yet embedded in most organizations' operating models.
Governance Is Evolving — Imperfectly, but Intentionally
The proliferation of AI use cases is stressing centralized governance models that were designed for a slower, more predictable rollout environment. Several health systems at the conference described exploring more federated governance structures, with domain owners taking greater accountability for AI tools within their areas.
Critically, there is a growing willingness to calibrate oversight based on stakes. High-risk, patient-facing use cases—clinical decision support, diagnostic AI, patient communication tools—are receiving greater scrutiny and more rigorous review. Lower-risk, back-office administrative applications are being allowed to move faster, with lighter-touch governance. This risk-tiered approach reflects a maturation in thinking: not every AI tool carries the same exposure, and treating them uniformly creates bottlenecks without proportionate risk reduction.
What was also striking is the intellectual honesty in the room. No one claimed to have a perfect governance model. Leaders gave themselves and their organizations explicit permission to keep evolving—to treat governance as a living system, not a fixed framework. Given the pace of change, that may be the most sophisticated posture available.
Results Are Real — But ROI Definitions Remain Squishy
Health systems shared genuine wins. AI agents identified patients at elevated risk of cardiac events before those events occurred. Scheduling optimization tools improved OR utilization. AI-enabled documentation templates accelerated clinical registry submissions. Revenue cycle coding tools generated measurable revenue upticks.
The results are real. And yet, the broader ROI picture remains murky.
“We're not firing, but we're not hiring either” was a refrain heard more than once—a candid acknowledgment that AI is beginning to temper headcount growth. Whether overall cost growth is slowing in a way that will eventually benefit purchasers, employers and consumers is still an open question. The mechanisms for translating health system efficiency gains into market savings are not yet in place.
“We Can't Tech Our Way Out of This”
Perhaps the most grounding moment of the conference was this acknowledgment: technology alone cannot solve the problems in U.S. health care. “We cannot tech our way out” of a system whose incentive structures are not aligned with keeping people healthy over their lifespan.
AI is a powerful lever. But it operates within payment models, regulatory frameworks and market structures that shape—and constrain—what it can accomplish. The group explored the potential of emerging payment innovation, including CMMI's ACCESS program, which aims to create an opening for lower-cost technology solutions to support better chronic condition management. If new payment models create the right incentive environment, the impact of AI could compound significantly. Without that environment, even the most sophisticated AI deployments may optimize the wrong things.
“Are We Putting Our AI Dollars Toward Our AI Dreams?”
The question that lingered longest after the conference: Are we being bold enough?
Are health systems using this moment, this genuinely unprecedented moment in the history of information technology, to envision a new identity for the health system of the future? Or are we deploying AI to make existing processes faster, while leaving the fundamental architecture of U.S. health care intact?
The leaders in Scottsdale are not shying away from asking the big questions. They are experimenting with urgency, managing risk with intention, and building the governance and change management infrastructure that will determine whether their AI investments deliver on their potential or join a long line of health care IT promises that never quite materialized.
The difference, this time, may come down to whether we are bold enough to answer that question honestly.
For more information on Manatt's work in health care AI strategy, governance, and workforce transformation, please contact .