Rezilion, an automated software supply chain security platform, today announced a new report, “Expl[AI]ning the Risk: Exploring the Large Language Models (LLM) Open-Source Security Landscape,” finding that the world’s most-popular generative artificial intelligence (AI) projects present a high security risk to organizations.
Generative AI has surged in popularity, empowering us to create, interact with, and consume content like never before. With the remarkable advancements in LLMs, such as GPT (Generative Pre-trained Transformer) models, machines now possess the ability to generate human-like text, images, and even code. The number of open-source projects that integrate these technologies is growing exponentially. For example, in the seven months since OpenAI debuted ChatGPT, more than 30,000 open-source projects using the GPT-3.5 family of LLMs have appeared on GitHub.
Despite the booming demand for these technologies, GPT and LLM projects present various security risks to the organizations that are using them, including trust boundary risks, data management risks, inherent model risks, and general security concerns.
Rezilion’s research team investigated the security posture of the 50 most popular generative AI projects on GitHub. The research utilizes the Open Source Security Foundation (OSSF) Scorecard to objectively evaluate the LLM open-source ecosystem and highlight the lack of maturity, gaps in basic security best practices, and potential security risks in many LLM-based projects.
The key findings reveal projects that are very new, very popular, and poorly secured:
- Extremely popular, with an average of 15,909 stars
- Extremely immature, with an average age of 3.77 months
- Poorly secured, with an average Scorecard score of 4.60 out of 10, which is low by any standard. For example, the most popular GPT-based project on GitHub, Auto-GPT, has over 138,000 stars, is less than three months old, and has a Scorecard score of 3.7.
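The pattern the report describes, high popularity combined with low project age and a low Scorecard score, can be sketched as a simple triage filter. The thresholds and the second project below are illustrative assumptions; only Auto-GPT's figures come from the report:

```python
# Triage sketch: flag projects that are popular, young, and low-scoring.
# Thresholds are illustrative, not taken from the report.

def is_high_risk(stars: int, age_months: float, scorecard: float,
                 min_stars: int = 10_000,
                 max_age_months: float = 6.0,
                 min_score: float = 5.0) -> bool:
    """Popular + immature + below-average OpenSSF Scorecard score."""
    return (stars >= min_stars
            and age_months <= max_age_months
            and scorecard < min_score)

# Auto-GPT figures as cited in the report; the second entry is hypothetical.
projects = {
    "Auto-GPT": (138_000, 3.0, 3.7),
    "example-mature-tool": (500, 24.0, 7.2),
}

for name, (stars, age, score) in projects.items():
    print(name, "HIGH RISK" if is_high_risk(stars, age, score) else "ok")
```

A real pipeline would pull the stars and age from the GitHub API and the score from the OpenSSF Scorecard tool rather than hard-coding them.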
The following best practices and guidance are recommended for the secure deployment and operation of generative AI systems: educate teams on the risks associated with adopting any new technology; evaluate and monitor security risks related to LLMs and open-source ecosystems; implement robust security practices; conduct thorough risk assessments; and foster a culture of security awareness.
Teams dedicate an alarming amount of time to security, especially when it comes to software. Rezilion's automated software supply chain security platform helps customers manage their software vulnerabilities efficiently and effectively. Maintaining a detailed, current database of the latest software vulnerabilities and the strategies to mitigate them remains paramount to customers' success in navigating this complex security landscape. Rezilion provides these same OpenSSF Scorecard insights as part of its product offering, so customers can make more informed decisions when adopting and managing any open-source project.
I also got some commentary from Yotam Perkal, Director of Vulnerability Research at Rezilion, who authored this report.
What was the most concerning finding from the survey and why?
The most concerning finding from the survey is the inadequate maturity and security posture of the open-source ecosystem surrounding LLMs. As these systems gain popularity and adoption, it is inevitable that they will become attractive targets for attackers, leading to the emergence of significant vulnerabilities. This finding raises concerns about the overall security of LLMs and highlights the need for improved security standards and practices in their development and maintenance.
What should organizations know about LLM risk before integrating Gen AI tools?
Organizations should be aware that integrating Generative AI tools, including LLMs, comes with both unique challenges and general security concerns. They need to address the specific risks associated with LLMs, such as data privacy, protection against attacks on the models, and securing the infrastructure involved in their deployment. Additionally, organizations must consider broader security implications and ensure that industry security standards are followed to promote ethical and responsible use of generative AI technology.
How can they prepare for this risk and who is responsible for this?
Organizations can prepare for LLM risks by adopting a secure-by-design approach when developing Generative AI-based systems. They should leverage existing frameworks like the Secure AI Framework (SAIF), NeMo Guardrails, or MITRE ATLAS™ to incorporate security measures into their AI systems. It is also imperative to monitor and log LLM interactions and regularly audit and review the LLM’s responses to detect potential security and privacy issues and update and fine-tune the LLM accordingly. Responsibility for preparing and mitigating LLM risks lies with both the organizations integrating the technology and the developers involved in building and maintaining these systems.
What are some other risks GPT and LLMs can pose to organizations?
The risks GPT and LLMs pose are varied and can affect all aspects of the CIA triad (confidentiality, integrity, and availability). These risks can lead to bypassed access controls, unauthorized access to resources, system vulnerabilities, ethical concerns, potential compromise of sensitive information or intellectual property, and more.
How will the risk LLMs pose to organizations evolve in the next 12-18 months?
Over the next 12-18 months, the risk LLMs pose to organizations is expected to evolve as the popularity and adoption of these systems continue to grow. Without significant improvements in the security standards and practices surrounding LLMs, the likelihood of targeted attacks and the discovery of vulnerabilities in these systems will increase. Organizations must stay vigilant and prioritize security measures to mitigate evolving risks and ensure the responsible and secure use of LLM technology.
To download the full report, please visit: https://info.rezilion.com/explaining-the-risk-exploring-the-large-language-models-open-source-security-landscape
Rezilion Reveals Overlooked High-Risk Vulnerabilities in CISA KEV Catalog, Raising Questions about Patching Prioritization Standards
Posted in Commentary with tags Rezilion on July 26, 2023 by itnerd

On Wednesday, July 26, Rezilion, an automated software supply chain security platform, will release its new report, “CVSS, EPSS, KEV: The New Acronyms – And The Intelligence – You Need For Effective Vulnerability Management,” detailing the critical importance of the Exploitability Probability Prediction Score (EPSS) for enhancing patch prioritization and effective vulnerability management.
Rezilion’s vulnerability experts disclosed that three vulnerabilities are currently being actively exploited and have high EPSS scores. The report’s findings show that vulnerabilities with high EPSS scores are more likely to be exploited than those with low EPSS scores, demonstrating that using only the Common Vulnerability Scoring System (CVSS) to prioritize patching is not the most effective approach.
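The prioritization argument can be illustrated with a minimal sketch: rank patching work by EPSS (estimated exploitation probability) first, using CVSS severity only as a tiebreaker. The CVE entries and scores below are hypothetical, not taken from the report:

```python
# Sketch: EPSS-led patch prioritization with CVSS as a tiebreaker.
# All CVE IDs and scores here are hypothetical examples.

vulns = [
    {"cve": "CVE-2023-0001", "cvss": 9.8, "epss": 0.02},  # severe but rarely exploited
    {"cve": "CVE-2023-0002", "cvss": 7.5, "epss": 0.92},  # very likely to be exploited
    {"cve": "CVE-2023-0003", "cvss": 9.8, "epss": 0.65},
]

# Sorting by CVSS alone would patch CVE-2023-0001 first; leading with
# EPSS surfaces the vulnerabilities attackers are actually likely to use.
patch_order = sorted(vulns, key=lambda v: (v["epss"], v["cvss"]), reverse=True)
print([v["cve"] for v in patch_order])
# → ['CVE-2023-0002', 'CVE-2023-0003', 'CVE-2023-0001']
```

In practice, real EPSS scores can be retrieved from FIRST's public EPSS data feed and combined with known-exploited catalogs such as CISA KEV.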
Key takeaways from the report include:
You can read the report here.