Nudge Security, the leading innovator in SaaS and AI security governance, today announced the findings of its newest report, AI Adoption in Practice: What Enterprise Usage Data Reveals About Risk and Governance, which provides revealing insights into workforce AI adoption and usage patterns. The report found that AI use has moved beyond experimentation and general-purpose chat tools, and is now embedded into workflows, integrated with core business platforms, and increasingly capable of taking autonomous action.
The research report is based on anonymized and aggregated telemetry collected across Nudge Security customer environments. Rather than relying on surveys or self-reported usage, the analysis is grounded in direct observation of AI activity within enterprise environments. Unless otherwise noted, the percentages referenced below reflect the percentage of organizations using each tool.
The report’s key findings include:
- Usage of core LLM providers is nearly ubiquitous. OpenAI is present in 96.0% of organizations, with Anthropic at 77.8%.
- The most-used AI tools are diversifying beyond chat. Meeting intelligence (Otter.ai at 74.2%, Read.ai at 62.5%), presentations (Gamma at 52.8%), coding (Cursor at 48.4%), and voice (ElevenLabs at 45.2%) are now widely present.
- Agentic tooling is emerging. Agent tools like Manus (22%), Lindy (11%), and Agent.ai (8%) are establishing an early footprint.
- Integrations are prevalent and varied. OpenAI and Anthropic are most commonly integrated with the organization’s productivity suite, as well as knowledge management systems, code repositories, and other tools.
- Usage is concentrated. Among the most active chat tools observed, OpenAI accounts for 66.8% of prompt volume and Google Gemini for 29.6% (together 96.4%).
- Data egress via prompts is non-trivial. 17% of prompts include copy/paste and/or file upload activity.
- Sensitive data risks skew toward secrets. Detected sensitive-data events are led by secrets and credentials (47.9%), followed by financial information (36.3%) and health-related data (15.8%).
AI governance in practice lags behind this reality
AI governance has emerged as a top priority for security and risk leaders, but many programs remain narrowly focused on vendor approvals, acceptable use policies, or model-level risk. While necessary, these controls alone are insufficient. As this research illustrates, the most consequential AI risks now stem from how employees actually use AI tools day to day—what data they share, which systems AI is connected to, and how deeply AI is embedded into other tools and operational workflows. Understanding these intersections—between people, permissions, and platforms—is the foundation of effective AI security.
To download the report, visit https://www.nudgesecurity.com/content/ai-adoption-in-practice.
Forcepoint X-Labs Uncovers SmartScreen Evasion Campaign Abusing ScreenConnect for Persistent Remote Access
Posted in Commentary with tags Forcepoint X-Labs on February 11, 2026 by itnerd

Authored by Mayur Sewani, Senior Security Researcher, Forcepoint X-Labs.

Forcepoint X-Labs researchers observed:
- A campaign in which a spoofed email impersonating the U.S. Social Security Administration delivers a malicious attachment designed for silent execution and privilege escalation.
- The script disables Windows SmartScreen, removes the Mark-of-the-Web, and installs a legitimate ScreenConnect client that is then abused as a Remote Access Trojan (RAT) to maintain command-and-control access.
- Notably, the ScreenConnect client analyzed was signed with a certificate that had been explicitly revoked, underscoring how attackers are leveraging trusted tooling to evade detection.
- The compromised host ultimately establishes encrypted communications with a remote server linked to Iranian network infrastructure, enabling data exfiltration activity.
Why This Matters
This research highlights a growing defensive challenge: attackers increasingly bypass traditional security controls by modifying system protections and repurposing legitimate IT management software. The findings reinforce the need for organizations to block revoked software, enforce strict RMM allowlists, and monitor for security-control tampering.
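As a small defensive illustration of one detail above: the Mark-of-the-Web that the malicious script strips is stored on NTFS in a `Zone.Identifier` alternate data stream, an INI-style block where `ZoneId=3` marks a file as downloaded from the Internet. The sketch below is a minimal, hypothetical parser for that stream's text (the `parse_zone_id` helper is illustrative and not from the Forcepoint research), useful for flagging files whose download provenance is present or missing:

```python
# Parse the text of an NTFS Zone.Identifier alternate data stream.
# Windows records the Mark-of-the-Web as an INI-style block; ZoneId=3
# means the file came from the Internet zone.
ZONE_NAMES = {0: "Local machine", 1: "Intranet", 2: "Trusted", 3: "Internet", 4: "Restricted"}

def parse_zone_id(stream_text: str):
    """Return the ZoneId from Zone.Identifier stream text, or None if absent."""
    for line in stream_text.splitlines():
        key, _, value = line.partition("=")
        if key.strip().lower() == "zoneid":
            try:
                return int(value.strip())
            except ValueError:
                return None
    return None

sample = "[ZoneTransfer]\nZoneId=3\nHostUrl=https://example.com/payload.zip\n"
zone = parse_zone_id(sample)
print(zone, ZONE_NAMES.get(zone))  # → 3 Internet
```

On Windows, the stream itself can be read by opening `path + ":Zone.Identifier"`; a downloaded executable with this stream deleted, as in the campaign described here, is exactly the kind of tampering signal worth monitoring for.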
You can read the research here: ScreenConnect Attack: SmartScreen Bypass and RMM Abuse