AI Adoption Report from Nudge Security Reveals How Widespread AI Use Is Transforming Security Governance

Posted in Commentary on February 11, 2026 by itnerd

Nudge Security, the leading innovator in SaaS and AI security governance, today announced the findings of its newest report, AI Adoption in Practice: What Enterprise Usage Data Reveals About Risk and Governance, which provides insight into workforce AI adoption and usage patterns. The report found that AI use has moved beyond experimentation and general-purpose chat tools: it is now embedded into workflows, integrated with core business platforms, and increasingly capable of taking autonomous action.

The research report is based on anonymized, aggregated telemetry collected across Nudge Security customer environments. Rather than relying on surveys or self-reported usage, the analysis is grounded in direct observation of AI activity within enterprise environments. Unless otherwise noted, the percentages referenced below reflect the percentage of organizations using each tool.

The report’s key findings include:

  • Usage of core LLM providers is nearly ubiquitous. OpenAI is present in 96.0% of organizations, with Anthropic at 77.8%.
  • The most-used AI tools are diversifying beyond chat. Meeting intelligence (Otter.ai at 74.2%, Read.ai at 62.5%), presentations (Gamma at 52.8%), coding (Cursor at 48.4%), and voice (ElevenLabs at 45.2%) are now widely present.
  • Agentic tooling is emerging. Agent tools like Manus (22%), Lindy (11%), and Agent.ai (8%) are establishing an early footprint.
  • Integrations are prevalent and varied. OpenAI and Anthropic are most commonly integrated with the organization’s productivity suite, as well as knowledge management systems, code repositories, and other tools.
  • Usage is concentrated. Among the most active chat tools observed, OpenAI accounts for 66.8% of prompt volume and Google Gemini for 29.6% (together 96.4%).
  • Data egress via prompts is non-trivial. 17% of prompts include copy/paste and/or file upload activity.
  • Sensitive data risks skew toward secrets. Detected sensitive-data events are led by secrets and credentials (47.9%), followed by financial information (36.3%) and health-related data (15.8%).

AI governance in practice lags this reality

AI governance has emerged as a top priority for security and risk leaders, but many programs remain narrowly focused on vendor approvals, acceptable use policies, or model-level risk. While necessary, these controls alone are insufficient. As this research illustrates, the most consequential AI risks now stem from how employees actually use AI tools day to day—what data they share, which systems AI is connected to, and how deeply AI is embedded into other tools and operational workflows. Understanding these intersections—between people, permissions, and platforms—is the foundation of effective AI security.

To download the report, visit https://www.nudgesecurity.com/content/ai-adoption-in-practice.