Enterprise AI adoption is rising, but daily use, governance, and security controls lag as agentic systems spread across organizations.
Survey data indicates a significant increase in sanctioned AI tool access, rising from under 40% to approximately 60% of workers in a year. Daily integration of these tools into routine workflows lags behind, however, with usage varying across roles and functions. Early productivity gains are reported for specific tasks such as summarization, research, and basic automation, but widespread, end-to-end AI usage across teams is less common. The gap is attributed to insufficient training, suboptimal workflow design, and unclear expectations. Transitioning AI pilots to production also remains a challenge, with only a quarter of surveyed organizations reporting significant progress; the move involves substantial integration work, security reviews, and compliance checks. This shift introduces new operational security risks, since controls designed for human activity may not catch an agent misusing permitted access outside its intended workflow. Risks include agents performing valid actions in incorrect contexts, which calls for dynamic permission models scoped to the task at hand.
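One way to picture a task-scoped permission model is a check that validates every action against the scope of the specific task the agent was launched for, rather than against the agent's standing credentials. The following is a minimal, hypothetical sketch; all class, field, and resource names are illustrative, not from any particular product.

```python
from dataclasses import dataclass

@dataclass
class TaskScope:
    """Permissions granted for one task, not for the agent in general."""
    task_id: str
    allowed_actions: set[str]    # e.g. {"read_ticket", "post_comment"}
    allowed_resources: set[str]  # exact names or trailing-wildcard patterns

def is_permitted(scope: TaskScope, action: str, resource: str) -> bool:
    """Allow an action only when both the verb and the target fall
    inside the scope granted for this specific task."""
    if action not in scope.allowed_actions:
        return False
    return any(
        resource == r or (r.endswith("/*") and resource.startswith(r[:-1]))
        for r in scope.allowed_resources
    )

scope = TaskScope("t-42", {"read_ticket", "post_comment"}, {"helpdesk/*"})
print(is_permitted(scope, "read_ticket", "helpdesk/1001"))  # valid action, valid context
print(is_permitted(scope, "read_ticket", "payroll/2024"))   # valid action, wrong context
```

The second call illustrates the failure mode the survey highlights: a permitted verb used against a resource outside the task's context is denied, even though the agent "can" perform that action in general.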
The research categorizes organizations by how deeply AI affects their core operations. Roughly one-third of companies report profound changes, such as developing new AI-driven products, overhauling processes, or revising business models; the rest integrate AI through minor process updates or within existing workflows. While all groups acknowledge productivity improvements in daily tasks, the impact on overall revenue remains limited: future growth is expected from new AI-powered products and services, with current benefits concentrated in operational output and decision support. Security teams, meanwhile, face growing complexity as agentic workflows spread across more systems, and the proliferation of action patterns as agents connect tools and services makes monitoring difficult. External control mechanisms that can interrupt invalid activity independently, without relying on the agent's self-regulation, are crucial to preventing destructive sequences and potential data loss.
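An external control of this kind can be pictured as a guard that sits between the agent and the systems it acts on: the guard, not the agent, decides whether an action stream continues. Below is a hypothetical sketch in which the guard halts a run of consecutive destructive actions; the action names and the budget threshold are illustrative assumptions.

```python
class Interrupted(Exception):
    """Raised by the guard, outside the agent, to halt execution."""

class ExternalGuard:
    DESTRUCTIVE = {"delete", "drop", "overwrite"}  # illustrative set

    def __init__(self, max_destructive: int = 2):
        self.max_destructive = max_destructive
        self.destructive_seen = 0

    def check(self, action: str) -> None:
        """Stop the agent once consecutive destructive actions
        exceed the allowed budget."""
        if action in self.DESTRUCTIVE:
            self.destructive_seen += 1
            if self.destructive_seen > self.max_destructive:
                raise Interrupted(f"guard stopped agent at action {action!r}")
        else:
            self.destructive_seen = 0  # budget applies to consecutive runs

guard = ExternalGuard()
executed = []
try:
    for action in ["read", "delete", "delete", "delete", "read"]:
        guard.check(action)
        executed.append(action)
except Interrupted as stop:
    print(stop)

print(executed)  # actions that ran before the guard intervened
```

The point of the design is that the interrupt path never consults the agent: even an agent that believes its actions are valid is cut off once the external budget is exhausted.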
Workforce readiness for AI is a recurring theme, with many organizations concentrating on fundamental AI training rather than comprehensive changes to job design, career progression, or role definitions. Leaders expect automation to transform many aspects of work in the coming years, particularly entry-level and task-oriented positions, and managers are expected to spend more time overseeing hybrid work shared between humans and AI systems. Reliability risks extend beyond access control: agents may appear to perform well on internal metrics yet fail to deliver the intended real-world outcomes. Without independent evaluation tied to external results, organizations cannot confirm an agent's true value; the research suggests continuous assessment mechanisms, external to the agent, that validate outcomes during live operation.
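The external-assessment idea can be sketched as a checker that ignores the agent's own success signal and compares it against an independent verification of the actual outcome. This is a hypothetical illustration; the function names and the ground-truth lookup standing in for a real external check are assumptions.

```python
from typing import Callable

def external_eval(agent_reports: dict[str, bool],
                  verify: Callable[[str], bool]) -> dict[str, str]:
    """Classify each task by comparing the agent's self-report with an
    independent verification of the real-world result."""
    verdicts = {}
    for task, claimed_ok in agent_reports.items():
        actual_ok = verify(task)  # external check, not the agent's metric
        if claimed_ok and not actual_ok:
            verdicts[task] = "false positive: looked done, outcome missing"
        elif claimed_ok and actual_ok:
            verdicts[task] = "confirmed"
        else:
            verdicts[task] = "needs review"
    return verdicts

# Simulated ground truth standing in for a real external outcome check
truth = {"ticket-1": True, "ticket-2": False}
reports = {"ticket-1": True, "ticket-2": True}  # agent claims both succeeded
print(external_eval(reports, lambda t: truth[t]))
```

The "false positive" branch is the case the text warns about: the agent's internal metrics say the task succeeded, but the independent check of the external result disagrees.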
The location of AI development, and who controls it, increasingly influence purchasing and architectural decisions. A significant portion of surveyed companies considers country of origin in vendor selection, and many are building AI stacks on local providers to satisfy data residency and regulatory requirements. Sovereign AI refers to systems designed, trained, and deployed under local laws, on controlled infrastructure and data. The topic is now a fixture of strategic planning, especially for organizations operating internationally, since requirements differ by region and industry and complicate deployment choices.
Interest in agentic AI, meaning systems capable of setting goals, reasoning, and acting through software interfaces, is rising rapidly, with nearly three-quarters of surveyed companies planning deployments within two years. Current usage is lower, however, and governance maturity lags behind these plans: only about one-fifth of respondents have established governance models for autonomous agents. Leaders recognize the need for clear boundaries on agent actions, robust approval workflows, continuous monitoring, and comprehensive audit trails, and these efforts typically involve cross-functional teams spanning security, legal, and business leadership. Governance gaps are often tied to identity and privilege management: AI integrations can circumvent standard practices, acquiring elevated permissions and expanding the impact of a compromise across connected systems.
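An approval workflow with an audit trail can be sketched as a gate that auto-approves routine actions, routes high-impact ones to a reviewer, and records every request and decision. The sketch below is illustrative only; the action list, policy, and reviewer callback are assumptions, not a real governance framework.

```python
import datetime

# Illustrative policy: which actions require a human decision
HIGH_IMPACT = {"grant_access", "transfer_funds", "delete_records"}
audit_log: list[dict] = []  # append-only trail of requests and decisions

def request_action(agent_id: str, action: str, approver=None) -> bool:
    """Auto-approve routine actions; route high-impact ones to an
    approver callback (required for those), logging everything."""
    needs_approval = action in HIGH_IMPACT
    approved = approver(agent_id, action) if needs_approval else True
    audit_log.append({
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "agent": agent_id,
        "action": action,
        "needs_approval": needs_approval,
        "approved": approved,
    })
    return approved

deny_all = lambda agent, action: False  # stand-in human reviewer
print(request_action("agent-7", "read_report"))               # True
print(request_action("agent-7", "transfer_funds", deny_all))  # False
print(len(audit_log))  # both decisions are on the trail
```

Even the denied request is logged, which is the property audit trails exist to provide: every attempted action leaves a record regardless of outcome.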
Physical AI, encompassing robotics, automated machinery, and systems that interact with the physical world, is already in use at over half of the surveyed organizations, and adoption is projected to rise over the next two years, particularly in manufacturing, logistics, and defense. Deployments are more common in controlled environments such as factories and warehouses because of cost, safety requirements, and regulatory approvals, and business cases frequently involve significant infrastructure changes, ongoing maintenance, and downtime planning. Security vulnerabilities often arise at the integration points between digital systems and physical assets: agentic systems that rely on connections across email, workflow platforms, and third-party services extend exposure beyond traditional enterprise boundaries, creating shared risk ownership among IT, operational technology, and safety teams.
Many leaders express higher confidence in their AI strategy and governance planning compared to their preparedness in infrastructure, data management, or talent. While strategic and policy decisions are made swiftly at the executive level, implementing system upgrades and developing necessary skills across large organizations takes more time. Enterprise AI adoption remains an ongoing process, shaped by factors such as access to tools, design choices, and organizational structure. Despite continued progress across tools, agents, and physical systems, the daily integration of AI into work remains uneven, indicating that many organizations are still focused on advancing to the next stage of adoption.