Guest Post – AI at work: How employees are becoming threat actors

3 growing security risks of unregulated AI use by employees

The adoption of artificial intelligence (AI) is accelerating across the business landscape as organizations aim to reap its potential benefits. However, recent incidents involving AI leaking corporate data show that embracing AI comes with a risk, as employees using AI tools at work might become involuntarily threat actors.

“As generative AI tools become deeply embedded in the workplace, the security risks stemming from employee misuse — intentional and accidental — are escalating,” says Zilvinas Girenas, AI security expert at nexos.ai, an AI orchestration platform for businesses. “Data breaches and leaks of sensitive information cause reputational damage, thus, many companies are torn between enablement and banning the use of AI, which creates friction between employee productivity and security.” 

Employee-dependent AI security risks

According to Girenas, employees can unintentionally cause cyber threats when using AI tools for three key reasons:

  1. Data exposure. Employees might input sensitive or confidential company data into AI tools, especially cloud-based generative AI platforms, without realizing that these inputs could be stored, analyzed, or even used to train models. This can lead to unintentional data leaks.
  2. Shadow AI usage. If employees use AI tools that haven’t been approved by the organization’s IT or security teams, shadow IT is introduced. These unvetted platforms may lack the necessary security controls, compliance certifications, or data governance protections, creating blind spots in risk management.
  3. Prompt injection or model manipulation. AI tools can be vulnerable to prompt injection and data poisoning attacks. If employees rely on outputs from compromised AI models or bots, they could act on manipulated or malicious advice, for example harmful instruction in automated workflows, leading to potential damage or breaches.

What helps organizations balance AI adoption with effective risk management?

“In today’s digital age, enhancing workflows with corporate data input shouldn’t come at the cost of security; however, without the right protection in place, it often does,” says Zilvinas Girenas. To mitigate human-fueled AI vulnerabilities and secure the modern workplace, the following should be considered:

  • Clear policy enforcement. Consistent implementation and communication of guidelines related to how employees are allowed to use which approved AI tools.  
  • Employee training. Educating staff on the safe and ethical use of AI tools.
  • Robust governance. Implementing smart guardrails that allow safe and compliant AI adoption without stifling productivity.

“Having a secure and structured approach to adopting various AI tools empowers organizations to tap into the full potential of artificial intelligence. It not only enhances productivity and efficiency across teams but also ensures that progress doesn’t come at the cost of cybersecurity or compliance,” says Girenas.

About nexos.ai

nexos.ai is a secure AI gateway that enables businesses to embrace generative AI without compromising security or control. It gives enterprises a single point of control to orchestrate all LLM usage responsibly and at scale. nexos.ai was founded in late 2024 and accelerated by Tesonet. It was created by the team behind some of Europe’s fastest growing businesses, including Oxylabs, Hostinger, and Nord Security. At just two months old, it secured an investment of $8 million from industry leaders and angel investors, most notably — Index Ventures.

Leave a Reply

Discover more from The IT Nerd

Subscribe now to keep reading and get access to the full archive.

Continue reading