HIT Consultant

Thirty-eight percent of Americans received a scam call in 2025 in which someone impersonated one of their healthcare providers. This is an eye-opening data point for healthcare executives, security leaders, and compliance officers challenged to stay ahead of increasingly sophisticated AI scams that even the most inexperienced bad actors can now rapidly and cost-effectively deploy.
Hospitals, health systems, and clinics were already on high alert when the American Hospital Association (AHA) issued a December 2025 notice to brace for an escalating wave of deepfake scams. AI-generated audio, video, and text are being used in impersonation scams targeting healthcare staff. Deepfake voice phishing (vishing) scams are damaging enough on their own, but bad actors are no longer relying on a single communications channel. We are seeing a rise in multi-modal campaigns, where initial contact might be a text, followed by a phone call, an email, or both to make the campaign appear more legitimate.
The attack that temporarily shut down Kettering Health in 2025 was a textbook example of how disruptive multi-modal attack campaigns can be. The ransomware group triggered a systemwide IT outage that left patients unable to reach staff or call center support lines. The outage created chaos and confusion, which the group seized on by impersonating Kettering Health team members and requesting credit card payments for medical expenses. It took weeks for Kettering to resume normal operations for key services.
Imposter fraud campaigns create legal and cost liabilities for healthcare organizations. Failure to comply with data protection laws such as HIPAA, HITECH and GDPR risks millions of dollars in losses to the attacks themselves, on top of potential regulatory fines.
AI Accelerating Voice Scam Impact
Healthcare organizations have always been a prime target for scammers, ranging from sophisticated global criminal operations to more localized, ad-hoc fraud campaigns. Artificial intelligence is turbocharging these efforts, and healthcare stakeholders are rightly concerned: recent survey data found that 77% of Americans are very concerned that AI technology can be used to convincingly impersonate their voice or identity to access sensitive accounts.
Notably, the data show that bad actors are equal opportunity, using AI to target not only patients but healthcare staff as well. The same survey reveals that three-fourths of consumers are more concerned about fraudsters impersonating them to access sensitive accounts than they are about receiving scam calls or texts.
While healthcare organizations are challenged to balance layered security with a frictionless customer experience, consumers recognize the stakes and are willing to do their part. Eighty-four percent of Americans are willing to go through a longer login or customer verification process if it reduces the risk of bad actors accessing their sensitive accounts.
Push Through Demographic Assumptions
Historically, older Americans have absorbed the brunt of scammers’ attention. Elder fraud costs seniors more than $3 billion annually and has been particularly acute in ‘high-touch’ industries such as healthcare and insurance.
That said, data show that AI impersonation scammers have been equal opportunity in demographic targeting. While 38% of Americans received a scam call in 2025 where someone was impersonating a healthcare provider about their coverage, the number spikes to 53% for Gen Z and drops down to 25% for baby boomers. Similarly, more Gen Z respondents indicated they’ve had their healthcare details fraudulently accessed by someone pretending to be them (36%) than any other age group.
Addressing The Expanding Attack Surface
Despite rising consumer mistrust in the authenticity of communications coming from their healthcare provider, the voice channel remains a preferred method of communication for patients: 65% of adults would rather engage with their healthcare provider via phone call than text messaging, apps or website.
As a result, healthcare businesses are prioritizing securing the voice channel as they would their networks, data, web, cloud and physical infrastructure. Protecting brand reputation and customers requires a comprehensive voice security strategy that includes:
- Presenting Critical Call Information. By branding calls with the company’s name and logo, healthcare businesses can identify themselves, giving customers a better understanding of who’s trying to reach them.
- Prioritizing Call Authentication. Proper call validation enables healthcare providers to confirm the origin of each call by verifying that it originates from their identified number.
- Implementing Spoof Protection. Calls that are not properly authenticated should be blocked before they reach patients and other stakeholders to prevent fraudsters from establishing contact.
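As a hedged illustration of what call authentication can involve in practice: under the STIR/SHAKEN framework used on U.S. phone networks, the originating carrier signs each call with a PASSporT token, a JWT carried in the SIP Identity header whose claims include the attestation level and the originating number. The Python sketch below (hypothetical phone numbers; the ES256 signature check against the carrier's STI certificate is omitted) decodes those claims and applies a simple policy: accept only full "A" attestation from the provider's known outbound number. It is a minimal sketch of the concept, not TNS's implementation.

```python
import base64
import json

def b64url(data: bytes) -> str:
    """Base64url-encode without padding, as compact JWS serialization requires."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def decode_claims(token: str) -> dict:
    """Decode the payload (claims) section of a compact JWS token."""
    _header, payload, _signature = token.split(".")
    padded = payload + "=" * (-len(payload) % 4)  # restore stripped padding
    return json.loads(base64.urlsafe_b64decode(padded))

def plausibly_authentic(claims: dict, expected_orig_tn: str) -> bool:
    """Accept only full ('A') attestation from the expected originating number.

    A real verifier would also validate the ES256 signature against the
    originating carrier's STI certificate before trusting these claims.
    """
    return (claims.get("attest") == "A"
            and claims.get("orig", {}).get("tn") == expected_orig_tn)

# Demo with a hypothetical, unsigned token (signature segment left empty):
claims = {"attest": "A",
          "orig": {"tn": "12025550143"},    # clinic's outbound number (made up)
          "dest": {"tn": ["12025550177"]},  # patient's number (made up)
          "iat": 1735689600}
token = ".".join([b64url(json.dumps({"alg": "ES256", "typ": "passport"}).encode()),
                  b64url(json.dumps(claims).encode()),
                  ""])
print(plausibly_authentic(decode_claims(token), "12025550143"))  # True
```

Calls whose token fails this kind of check, or that carry no token at all, are the ones a spoof-protection layer would block before they reach patients.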
Managing the risk of AI-powered imposter fraud in the voice channel, and evaluating emerging strategies and technologies to mitigate it, can help healthcare decision makers protect their organizations and their patients.

About Mike Schinnerer

Mike Schinnerer is Vice President of Product Management for Enterprise Sales at TNS, with specific responsibility for the TNS Communications Market. He is responsible for TNS Enterprise Authentication and Spoof Protection, Enterprise Branded Calling, Telephone Number Reputation Monitoring, and TN Insights. Mike first joined TNS in 2010, spending 12 years in call identification product management before returning in 2024 after a role at cybersecurity startup Lookout (acquired by F‑Secure).
