A new Wolters Kluwer Health survey finds that unauthorized artificial intelligence tools — often called “shadow AI” — are being widely used across U.S. hospitals and health systems, even for clinical tasks, signaling emerging risk and governance gaps in healthcare technology management.
The survey polled more than 500 healthcare professionals and administrators late last year and found that 40% had encountered unauthorized AI tools at work and nearly 20% admitted to using them, despite the lack of formal enterprise approval.
Why Shadow AI Is Emerging
Healthcare workers report turning to unsanctioned AI solutions largely because officially supported tools are not meeting workflow needs. Many respondents cited speed and functionality as the primary reasons for using these external tools, suggesting that clinicians and administrative staff will experiment with AI to streamline documentation or review data if approved systems lag behind expectations.
This trend reflects a broader shift in the healthcare workforce toward integrating technology into daily tasks, but it also highlights a critical oversight: many organizations lack clear policies governing AI use, leaving staff to adopt tools in ways that may circumvent security, compliance, and clinical safeguards.
Rising Concerns: Safety, Privacy and Compliance
While respondents expressed optimism about AI’s long-term potential to improve healthcare delivery, they also identified patient safety and data privacy as top concerns. About one in four cited safety risks related to inaccurate AI outputs, and nearly a quarter expressed worry about privacy and data breaches — especially when sensitive information is entered into external, unapproved systems.
There was also a disconnect between administrators and providers in terms of awareness and involvement in policy development. Administrators were far more likely to be part of creating AI policies compared with frontline providers, yet overall awareness of AI policy details remains low.
Healthcare IT experts argue that governance and compliance must catch up with usage patterns. Without formal frameworks outlining approved tools, training on risk mitigation, and clear boundaries for clinical use, shadow AI will continue to proliferate — potentially exposing systems to legal and reputational challenges.