Archive for NextDLP

Security Pros Admit to Using Unauthorized SaaS and AI (Despite the Risk) – NextDLP

Posted in Commentary with tags on July 9, 2024 by itnerd

Next DLP today revealed that nearly three-quarters (73%) of security professionals admit to using SaaS applications that had not been provided by their company’s IT team in the past year. This is despite the fact that they are acutely aware of the risks, with respondents naming data loss (65%), lack of visibility and control (62%), and data breaches (52%) as the top risks of using unauthorized tools. Adding to this, one in ten admitted they were certain their organization had suffered a data breach or data loss as a result.

A survey of more than 250 global security professionals, conducted at RSA Conference 2024 and Infosecurity Europe 2024, also revealed that despite having a laissez-faire attitude towards Shadow SaaS, security professionals have taken a more cautious approach to GenAI usage. Half of the respondents highlighted that AI use had been restricted to certain job functions and roles in their organization, while 16% had banned the technology completely. Adding to this, 46% of organizations have implemented tools and policies to control employees’ use of GenAI.

The research also provided a snapshot of how security professionals view their organization’s training and overall understanding of the risks of Shadow SaaS:

  • 40% of security professionals do not think employees properly understand the data security risks associated with Shadow SaaS and AI.
  • Yet, they are doing little to combat this risk. Only 37% of security professionals had developed clear policies and consequences for using these tools, with even fewer (28%) promoting approved alternatives to combat usage.
  • Only half had received guidance and updated policies on Shadow SaaS and AI in the past six months, with one in five admitting they had never received either.
  • Additionally, nearly one-fifth of security professionals were unaware of whether their company had updated policies or provided training on these risks, indicating a need for further awareness and education.

For further insights into the survey results, please see the full results report linked here. Or, for more information about Shadow SaaS and AI, and the possible defenses, visit the Next DLP website.

Methodology

The survey of more than 250 global security professionals was conducted at RSA Conference 2024 and Infosecurity Europe 2024. Each respondent was asked the same ten questions surrounding Shadow SaaS and Shadow AI usage within their organization, the implied security risks, and the policies and security tools their company has in place.

Next DLP Extends Visibility and Adaptive Controls for Leading Generative AI Tools 

Posted in Commentary with tags on September 26, 2023 by itnerd

Next DLP, a leader in insider risk and data protection, today announced the extension of the company’s generative AI (“GenAI”) policy templates from ChatGPT to include Hugging Face, Bard, Claude, Dall.E, Copy.Ai, Rytr, Tome, and Lumen 5, within the company’s Reveal platform. This extension of visibility and control enables customers to stop data exfiltration, expose risky behavior, and educate employees around the usage of GenAI tools.

CISOs around the world are grappling with the proliferation of GenAI tools, including text, image, video, and code generators. They worry about how to manage and control their use within the enterprise and the corresponding risk of sensitive data loss through GenAI prompts. Researchers at Next investigated activity from hundreds of companies during July 2023 and found that:

  • 97% of companies had at least one user access ChatGPT
  • 8% of all users accessed ChatGPT
  • ChatGPT navigation events account for <0.01% of traffic. For comparison, Google navigation events consistently account for 5-10% of traffic.

With these new policies, customers gain enhanced monitoring and protection of employees using the most popular GenAI tools on the market. From educating employees on the potential risks of these services to triggering alerts when an employee visits a GenAI tool website, security teams can reinforce corporate data usage protocols.
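As a rough illustration of the visit-triggered approach described above, the sketch below matches a visited URL's hostname against a watchlist of GenAI tool domains. The domain list and function names are hypothetical, chosen for illustration; they are not Reveal's actual policy API.

```python
# Hypothetical sketch: flag visits to known GenAI tool domains so a
# policy engine could surface a corporate data-usage reminder.
# The watchlist below is illustrative, not Next DLP's actual list.
from urllib.parse import urlparse

GENAI_DOMAINS = {
    "chat.openai.com",
    "huggingface.co",
    "bard.google.com",
    "claude.ai",
}

def is_genai_visit(url: str) -> bool:
    """Return True if the URL's host is a watched domain or a subdomain of one."""
    host = urlparse(url).hostname or ""
    return any(host == d or host.endswith("." + d) for d in GENAI_DOMAINS)
```

A real endpoint agent would hook browser or network events rather than receive URLs as strings, but the matching logic would be similar.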

In addition, customers can set up a policy to detect the use of sensitive information such as internal project names, credit card numbers, or social security numbers in GenAI conversations, enabling organizations to take preventive measures against unauthorized data sharing. These policies are just two of many possible configurations that protect organizations whose employees are using GenAI tools. 
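A minimal sketch of the kind of content-inspection rule described above might scan outbound GenAI prompt text for patterns resembling US Social Security numbers and payment card numbers. The patterns and helper names are assumptions for illustration only; a production DLP rule set would be far broader and would also cover items like internal project names.

```python
import re

# Hypothetical sketch of a sensitive-data detection rule for GenAI prompts.
# Patterns are illustrative, not Reveal's actual detection logic.
SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")          # US SSN: 123-45-6789
CARD_RE = re.compile(r"\b(?:\d[ -]?){13,16}\b")        # 13-16 digit card-like runs

def luhn_ok(digits: str) -> bool:
    """Luhn checksum, used to cut false positives on card-like digit runs."""
    total, parity = 0, len(digits) % 2
    for i, ch in enumerate(digits):
        d = int(ch)
        if i % 2 == parity:  # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def find_sensitive(text: str) -> list[str]:
    """Return a list of hit types ("ssn", "card") found in the text."""
    hits = ["ssn" for _ in SSN_RE.finditer(text)]
    for m in CARD_RE.finditer(text):
        digits = re.sub(r"\D", "", m.group())
        if 13 <= len(digits) <= 16 and luhn_ok(digits):
            hits.append("card")
    return hits
```

The Luhn check is a common design choice in DLP rules: a bare 16-digit regex would flag order numbers and timestamps, while requiring a valid checksum keeps alerts focused on plausible card numbers.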

For more information on the Reveal Platform and how to protect intellectual property visit: https://www.nextdlp.com/use-cases/protect-intellectual-property