
The healthcare industry is a leader in AI agent adoption at 68%, according to KPMG research, with early implementations reducing administrative workload by 55%. Yet for all this progress, Agentic AI remains widely misunderstood – often conflated with chatbots or treated as the “final answer” to benefits complexity.
In reality, Agentic AI represents a genuine evolutionary step: systems that understand context across multiple platforms, take action in real-time, and guide members through complex scenarios. Amidst the buzz about its potential, organizations must resist viewing it as a destination. The future of benefits administration isn’t AI replacing human judgment – it’s AI augmenting it.
Defining the Evolution
Agentic AI differs from previous generations of benefits technology in three fundamental ways:
- First, it maintains context across multiple conversation turns and channels, allowing members to start a question on mobile, continue via phone, and follow up on the web without repeating information (sketched in code after this list).
- Second, it takes action rather than simply providing information (e.g., processing claims, updating accounts, and initiating transactions in real time).
- Third, it operates proactively, anticipating member needs based on behavior patterns and life events rather than waiting for questions.
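To make the first of these differences concrete, here is a minimal sketch of a channel-agnostic context store keyed by member rather than by session. The class names, fields, and member IDs are illustrative assumptions, not a description of any particular vendor’s implementation.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ConversationContext:
    """Shared context tied to a member, not to a single channel or session."""
    member_id: str
    open_intent: str | None = None              # e.g. "hsa_contribution_question"
    collected_facts: dict = field(default_factory=dict)
    last_channel: str | None = None
    updated_at: datetime | None = None

class ContextStore:
    """In-memory stand-in for a durable, channel-agnostic context store."""
    def __init__(self):
        self._contexts: dict[str, ConversationContext] = {}

    def resume(self, member_id: str, channel: str) -> ConversationContext:
        # Members pick up where they left off, regardless of channel.
        ctx = self._contexts.setdefault(member_id, ConversationContext(member_id))
        ctx.last_channel = channel
        ctx.updated_at = datetime.now(timezone.utc)
        return ctx

store = ContextStore()
ctx = store.resume("M-1001", channel="mobile")
ctx.open_intent = "hsa_contribution_question"

# Later, on the phone channel, the same context is available without re-asking.
assert store.resume("M-1001", channel="phone").open_intent == "hsa_contribution_question"
```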
This represents genuine progress over earlier AI deployments that merely simplified search, still requiring members to know exactly what to ask and how to phrase it. Calling this evolution “transformative” risks overselling where the technology actually stands. Agentic AI solves specific, well-defined problems exceptionally well. It struggles with ambiguity and unique circumstances.
The Business Case Beyond the Hype
Forward-thinking organizations are deploying Agentic AI where it delivers measurable value while maintaining realistic expectations about its limitations. Claims processing, for example, illustrates this balanced approach. Members uploading receipts for reimbursement have historically faced a tedious process: manually entering amounts, categorizing expenses, calculating tax implications, and waiting days for approval. Agentic AI can automatically apportion discounts and sales tax from uploaded images, process claims in under two minutes, and save members approximately 70% of the time previously required.
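To show the kind of arithmetic hiding behind that workflow, the sketch below apportions a receipt-level discount and sales tax across line items in proportion to price, then totals only the eligible portion. The line items, eligibility flags, and function name are assumptions for illustration; real claims adjudication involves far more rules.

```python
from decimal import Decimal, ROUND_HALF_UP

def apportion_receipt(items, discount, sales_tax):
    """Spread a receipt-level discount and tax across items by their share of the
    subtotal, then total the portion eligible for reimbursement."""
    subtotal = sum(price for _, price, _ in items)
    eligible_total = Decimal("0")
    for _, price, eligible in items:
        share = price / subtotal                          # this item's share of the receipt
        net = price - discount * share + sales_tax * share
        if eligible:
            eligible_total += net
    return eligible_total.quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)

# Hypothetical output of an upstream receipt-extraction step: (description, price, eligible).
line_items = [
    ("Ibuprofen 200mg", Decimal("8.00"), True),
    ("Bandages",        Decimal("4.00"), True),
    ("Greeting card",   Decimal("3.00"), False),
]
print(apportion_receipt(line_items, discount=Decimal("1.50"), sales_tax=Decimal("0.90")))  # 11.52
```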
This isn’t magical – it’s Applied AI solving a concrete business problem. The pain point was real (members abandoning claims mid-process), the use case was well-defined (receipt processing follows predictable patterns), and the outcome is measurable (processing time, completion rates, member satisfaction). Benefits leaders evaluating AI investments should demand this same specificity: What exact problem does this solve? How do you measure success? What happens when AI encounters scenarios outside its training?
Where Humans Remain Essential
The concept of “human in the loop” isn’t a compromise or a transitional phase. It is the right design for healthcare benefits, where financial and medical decisions intersect with individual circumstances in ways no algorithm can fully anticipate. Consider a member asking whether they should contribute more to their HSA or pay down medical debt. AI can provide relevant data: contribution limits, tax implications, interest rates, account balances. But the recommendation requires understanding risk tolerance, family planning considerations, job security, and dozens of other factors that resist algorithmic certainty.
Organizations and healthcare leaders implementing Agentic AI successfully recognize this boundary. They deploy AI for tasks with clear right answers: eligibility verification, contribution calculations, transaction processing, educational content delivery. They route members to humans for scenarios requiring judgment, emotional intelligence, or deeper insight into long-tail edge cases: financial planning advice, disputed claims, complex family situations, members in distress. The question isn’t whether AI can handle something, but whether it should.
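One way to encode that boundary is a deterministic routing layer in front of the agent: requests with clear right answers go to automation, and anything involving judgment, distress, or low confidence goes to a person. The intent names and confidence threshold below are illustrative assumptions, not a prescribed policy.

```python
# Tasks assumed to have a single, verifiable right answer.
AUTOMATABLE_INTENTS = {
    "eligibility_verification",
    "contribution_calculation",
    "transaction_processing",
    "educational_content",
}

# Scenarios that call for human judgment or emotional intelligence.
HUMAN_FIRST_INTENTS = {
    "financial_planning_advice",
    "disputed_claim",
    "complex_family_situation",
    "member_in_distress",
}

def route(intent: str, confidence: float, distress_detected: bool) -> str:
    """Decide whether the AI agent or a human should own this request."""
    if distress_detected or intent in HUMAN_FIRST_INTENTS:
        return "human"
    if intent in AUTOMATABLE_INTENTS and confidence >= 0.90:
        return "agent"
    # Unknown intent or low confidence: the safe default is a person.
    return "human"

print(route("contribution_calculation", confidence=0.97, distress_detected=False))  # agent
print(route("disputed_claim", confidence=0.99, distress_detected=False))            # human
```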
Healthcare differs from domains where AI excels precisely because stakes are high and context is infinite. A member calling about a denied claim isn’t just seeking information. They may be anxious about affording treatment, confused about insurance rules, or frustrated with the bureaucracy of the system at large. AI can pull up claim details instantly and explain denial codes accurately, but should it be the sole interface for that conversation? Probably not.
Maintaining a human in the loop addresses emotional and judgment complexity. But Agentic AI in healthcare benefits faces another challenge that distinguishes it from consumer applications: the extraordinary security requirements of managing both protected health information and financial accounts simultaneously.
Security and Privacy as Foundational Requirements
The benefits industry manages uniquely sensitive data – health information combined with financial accounts. Agentic AI operating across this landscape must meet security standards that exceed even rigorous financial services requirements. Deepfake detection across both voice and digital channels becomes critical when AI can initiate transactions based on conversational requests. Real-time behavioral analysis must identify account takeover attempts even when attackers have correct answers to security questions. Organizations must also account for adversaries who use AI themselves to launch sophisticated attacks on the identity, data, infrastructure, and transaction layers.
This security architecture explains why Agentic AI in healthcare benefits can’t be purchased off the shelf and deployed in weeks. Organizations need phased rollouts with continuous monitoring, integration with existing authentication systems, audit trails meeting HIPAA requirements, and protocols for when AI detects potential fraud. Members concerned about AI “knowing too much” about them should understand that properly implemented Agentic systems can actually enhance privacy – enabling just-in-time access to complete specific tasks rather than permanent role-based access, reducing human access to sensitive data, and maintaining security through behavioral analysis and anomaly detection. This approach not only protects member privacy but also reduces the risk of credential-based cyber attacks and subsequent data breaches.
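As a rough sketch of the just-in-time access pattern described above, the code below issues a short-lived, task-scoped, member-bound credential instead of standing role-based access, and records every grant. Scope names, durations, and the audit stub are assumptions; a production system would back them with a real secrets service and a HIPAA-grade audit trail.

```python
import secrets
from datetime import datetime, timedelta, timezone

# Minimal scopes an agent might need for one well-defined task.
TASK_SCOPES = {
    "process_reimbursement": {"claims:write", "receipts:read"},
    "verify_eligibility": {"enrollment:read"},
}

def audit_log(entry: dict) -> None:
    # Stand-in for an append-only audit trail meeting HIPAA requirements.
    print(f"GRANT {entry['agent_id']} -> {sorted(entry['scopes'])} until {entry['expires_at']}")

def grant_task_token(agent_id: str, task: str, member_id: str, ttl_minutes: int = 5) -> dict:
    """Issue a short-lived, task-scoped token instead of permanent role-based access."""
    if task not in TASK_SCOPES:
        raise ValueError(f"no scopes defined for task: {task}")
    token = {
        "token": secrets.token_urlsafe(32),
        "agent_id": agent_id,
        "member_id": member_id,     # access is bound to one member's records
        "scopes": TASK_SCOPES[task],
        "expires_at": datetime.now(timezone.utc) + timedelta(minutes=ttl_minutes),
    }
    audit_log(token)                # every grant leaves an audit record
    return token

grant_task_token("agent-claims-01", "process_reimbursement", member_id="M-1001")
```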
Implementation Realities for 2026
Benefits and IT leaders evaluating Agentic AI should demand evidence of business value before falling in love with technology capabilities. Consider these questions during the vendor selection process:
- What specific pain points does this address?
- How does it integrate with existing systems?
- What’s the phased rollout plan?
- Where do humans remain in the loop?
- How do you measure accuracy?
- What happens when AI doesn’t know?
Organizations successfully deploying this technology in 2026 share common characteristics: clear use case definition, realistic capability expectations, robust security architecture, planned human escalation paths, and continuous refinement processes. They’re treating Agentic AI as an evolution in benefits administration – solving specific problems better than previous approaches while acknowledging limitations. Those who approach 2026 as a year for disciplined implementation – rather than chasing the next technological promise – will build the foundation for sustainable competitive advantage as the industry continues its evolution toward truly personalized, AI-augmented benefits experiences.
About Shuki Licht
Shuki Licht is the Head of Innovation and VP of AI Technology at HealthEquity, the nation’s largest administrator of Health Savings Accounts (HSAs) and consumer-directed benefits.
