Legit Security, the leading application security posture management (ASPM) platform that enables secure application delivery, today announced the availability of the cybersecurity industry’s first AI discovery capabilities. With these new capabilities, Legit helps bridge the gap between security and development by enabling CISOs and AppSec teams to understand where and when AI code is used and take action to ensure proper security controls are in place – without slowing software delivery.
As developers harness the power of AI and large language models (LLMs) to develop and deploy capabilities more quickly, new risks arise. For example, AI-generated code may contain unknown vulnerabilities or flaws that put the entire application at risk. AI-generated code can also introduce legal issues if it reproduces copyright-restricted material. Another risk is improper implementation of AI features, which can lead to data exposure, for example when customers bypass prompt protections and extract sensitive data. Despite all this, security teams rarely understand how developers use AI-generated code, resulting in security blind spots that affect both the organization and the software supply chain.
Legit’s platform enables security leaders, including CISOs, product security leaders, and security architects, to gain comprehensive visibility into risks across the development pipeline, from the infrastructure to the application layer. With a crystal-clear view of the development lifecycle, customers can ensure the code they deploy is traceable, secure, and compliant. These new AI code discovery capabilities bolster the platform by closing a significant visibility gap, allowing security teams to take preventive action, decrease the risk of legal exposure, and ensure compliance.
Legit’s AI code discovery capabilities provide a range of benefits to both security and development teams, including:
- Discovery of AI-generated code: Legit provides a full view of the development environment, including code produced by AI coding tools (e.g., GitHub Copilot).
- Full visibility of the dev environment: By gaining a full view of the application environment, including repositories using LLMs, MLOps services, and code generation tools, Legit’s platform offers the context necessary to understand and manage an application’s security posture.
- Security policy enforcement: Legit Security detects LLM and GenAI development and enforces organizational security policies, such as ensuring all AI-generated code gets reviewed by a human.
- Real-time notifications of GenAI code: Legit can immediately notify security teams when users install AI code generation tools, providing greater transparency and accountability.
- Protect against releasing vulnerable code: Legit’s platform provides guardrails to prevent the deployment of vulnerable code to production, including code delivered via AI tools.
- Alert on LLM risks: Legit scans the code of LLM applications for security risks, such as prompt injection and insecure output handling.
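As a rough illustration of what scanning LLM application code for risks like prompt injection can look like, the sketch below flags lines that interpolate untrusted input directly into a prompt string. The function name and patterns here are illustrative assumptions, not Legit’s actual detection logic:

```python
import re

# Hypothetical patterns suggesting risky LLM usage: untrusted input
# formatted directly into a prompt string (a prompt-injection risk).
RISKY_PROMPT_PATTERNS = [
    re.compile(r"f[\"'].*\{.*(user_input|request\.|input\().*\}"),  # f-string interpolation
    re.compile(r"prompt\s*\+\s*(user_input|request\.)"),            # string concatenation
]

def scan_for_prompt_injection(source: str) -> list:
    """Return (line_number, line) pairs that match a risky pattern."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        if any(p.search(line) for p in RISKY_PROMPT_PATTERNS):
            findings.append((lineno, line.strip()))
    return findings

sample = 'answer = llm.complete(f"Summarize this: {user_input}")'
print(scan_for_prompt_injection(sample))
```

A production scanner would work on parsed syntax trees and data-flow information rather than raw lines, but the principle is the same: surface places where user-controlled data reaches a prompt without sanitization.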
Read a new blog from the Legit research team to learn more about important security considerations associated with GenAI applications. For more information on the importance of AI discovery, please visit the company’s blog. To learn more about the broader Legit Security platform, please visit https://www.legitsecurity.com.
Legit Security Launches AI-Powered, Enterprise-Grade Secrets Scanning Product
Posted in Commentary with tags Legit Security on March 26, 2024 by itnerd

Legit Security, the leading platform for enabling companies to manage their application security posture across the complete developer environment, today announced the launch of its standalone enterprise secrets scanning product, which can detect, remediate, and prevent secrets exposure across the software development pipeline. An AI-powered solution that extends secrets discovery beyond source code, Legit’s offering is built to meet the needs of even the most complex development organizations.
This new offering provides CISOs and their teams with enterprise-grade security capable of addressing the needs of the world’s largest and most complex organizations. Security teams can now identify, remediate, and prevent the exposure of secrets across developer tools, such as GitHub, GitLab, Azure DevOps, Jenkins, Bitbucket, Docker images, Confluence, Jira, and more. Legit’s AI-powered detection also drives highly accurate results, reducing false positives by as much as 86%.
Secrets, such as API keys, access keys, passwords, and personally identifiable information (PII), are valuable assets and a focal point for attackers. At the same time, applications and developers rely on a growing number of secrets and non-human credentials to function. According to IBM’s 2023 Data Breach Report, leaked secrets are the second most common initial attack vector. Protecting secrets is mission-critical, as just one disclosure can lead to multiple breaches that are costly and often difficult to remediate. With Legit, organizations can identify, remediate, and prevent the loss of secrets across various developer tools and platforms.
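Secrets scanners of this kind are typically built on pattern matching combined with entropy analysis to keep false positives down. The sketch below is a minimal illustration under assumed patterns and thresholds, not Legit’s detection logic:

```python
import math
import re

# Illustrative secret patterns; real scanners ship hundreds of these.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "github_token": re.compile(r"ghp_[A-Za-z0-9]{36}"),
}

def shannon_entropy(s: str) -> float:
    """Bits of entropy per character; high values suggest random keys."""
    freq = {c: s.count(c) / len(s) for c in set(s)}
    return -sum(p * math.log2(p) for p in freq.values())

def find_secrets(text: str) -> list:
    """Return (pattern_name, match) pairs for likely leaked secrets."""
    hits = []
    for name, pattern in SECRET_PATTERNS.items():
        for match in pattern.findall(text):
            # Entropy filter cuts false positives on pattern-shaped
            # but non-random strings (e.g. placeholders in docs).
            if shannon_entropy(match) > 3.0:
                hits.append((name, match))
    return hits

config = "aws_key = AKIAIOSFODNN7EXAMPLE\n"
print(find_secrets(config))
```

The entropy filter is one simple way to reduce false positives on strings that merely look key-shaped; an AI-assisted product would layer learned classifiers and contextual signals (file type, variable names, surrounding code) on top of this.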
With enterprise secrets scanning from Legit, customers can start with secrets scanning and, based on future needs, expand to other use cases, such as vulnerability management, compliance, and software supply chain security.
Highlighting the effectiveness of Legit’s enterprise secrets scanning, a leading financial services organization recently found the security of its software supply chain significantly improved after deploying Legit’s solution. The comprehensive scanning and integration capabilities provided insights into potential risks, leading to more informed decision-making and strengthened security practices.
Legit Security’s new product is available now to new and existing customers. For more information, visit www.legitsecurity.com. To learn more about how Legit tackles secrets detection across the SDLC, join a webcast – “Secrets Detection: Why Coverage Throughout the SDLC is Critical to Your Security Posture” – on Thursday, March 28, 2024 at 2:30 pm ET. Register for the event here.