Findings highlight the need for clear governance and compliance policies to address “shadow AI” tools and applications.
The survey reveals that 40% of healthcare professionals have encountered unauthorized AI tools in their workplaces, and nearly 20% admit to actively using these 'shadow AI' applications. The primary motivation for their use is a desire for faster workflows, cited by half of respondents. Among healthcare providers specifically, curiosity and experimentation, along with the perceived 'better functionality' of unapproved tools relative to official options, ranked even higher as motivations. Crucially, 10% of those using unauthorized AI tools reported doing so for direct patient care, raising significant concerns about patient safety and unregulated practices.
There are notable deficiencies in AI policy development and inconsistent awareness across health systems. Administrators are far more likely to be involved in creating AI policies than providers (30% versus 9%), yet paradoxically 29% of providers report awareness of the main policies, compared with only 17% of administrators. This disconnect between policy creation and ground-level understanding may exacerbate the risks associated with shadow AI.
Despite these governance challenges, AI adoption in healthcare is high: over half of professionals frequently use AI tools in their work, and nearly 90% strongly believe AI will lead to meaningful improvements in healthcare within the next five years. Data analysis is the most common application for both providers (60%) and administrators (78%), underscoring its deep integration into operational and clinical workflows.
Both healthcare providers (25%) and administrators (26%) identify patient safety as their foremost concern regarding the implementation and use of AI in healthcare settings. For administrators, this concern is closely followed by worries about privacy and potential data breaches. Providers, by contrast, rank inaccurate AI outputs as their second major concern, highlighting the critical need for accuracy and reliability in AI tools used in clinical environments.
A significant portion of healthcare professionals, approximately 23%, expresses considerable concern about the privacy and security risks inherent in AI in healthcare, including fears of unauthorized access to sensitive patient information and potential healthcare data breaches. This underscores the urgent need for robust data protection measures and secure AI tool implementation.