Publicly Available GenAI Exploitable By Anyone With Internet Access
Legit Security has published new research examining AI platforms for security issues and potential data leakage. The investigation uncovered actual vulnerabilities, along with examples encountered in the wild where such attacks were possible.
Naphtali Deutsch, a Security Researcher at Legit and formerly of Israeli Military Intelligence Unit 8200, discusses the risks surrounding publicly accessible AI services that are exploitable by anyone with Internet access, homing in on two types: vector databases and LLM tools.
On the vector database side, Legit's analysis of unprotected, publicly exposed servers found that thirty of them contained corporate or private data, including company email conversations, customer PII, product serial numbers, financial records, resumes, and contact information. Three vector databases from two of the most popular platforms, belonging to companies in the engineering services, fashion, and industrial equipment sectors, contained documents, media summaries, customer details, and purchase information.
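The research doesn't detail Legit's discovery tooling, but a minimal sketch of how an unprotected vector database gets spotted might look like the following. The host address, port, and endpoint paths here are illustrative assumptions modeled on common self-hosted vector database defaults, not anything taken from the report.

```python
# Hedged sketch: check whether a vector database's REST interface answers
# without any credentials attached. Endpoint paths are assumptions based on
# common self-hosted vector DB conventions, not Legit's actual method.
from urllib.request import Request, urlopen
from urllib.error import HTTPError, URLError

# Hypothetical listing endpoints an unauthenticated server might expose.
COMMON_ENDPOINTS = [
    "/collections",         # Qdrant-style collection listing
    "/api/v1/collections",  # Chroma-style collection listing
]

def probe_unauthenticated(host: str, port: int, timeout: float = 3.0) -> list[str]:
    """Return the endpoint paths that answered HTTP 200 with no credentials."""
    exposed = []
    for path in COMMON_ENDPOINTS:
        url = f"http://{host}:{port}{path}"
        try:
            with urlopen(Request(url), timeout=timeout) as resp:
                if resp.status == 200:
                    exposed.append(path)
        except (HTTPError, URLError, OSError):
            continue  # closed, requires auth, or unreachable
    return exposed
```

A non-empty result from a server you don't own is exactly the exposure the research describes: anyone on the Internet can read the stored documents.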
Legit scanned the data on these servers and found dozens of secrets (passwords and API keys), including OpenAI and Pinecone (a vector database SaaS) API keys, GitHub access tokens, and URLs with embedded database passwords. It also found the full configurations and LLM prompts of these applications, which could help attackers exploit prompt vulnerabilities down the road.
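The report doesn't publish Legit's scanning rules, but the kind of pattern matching that surfaces such secrets in dumped documents can be sketched roughly as follows. The regular expressions are illustrative approximations of common credential formats, not Legit's actual ruleset.

```python
# Hedged sketch: regex-based secret scanning over text pulled from an exposed
# server. Patterns are rough approximations of well-known credential shapes.
import re

SECRET_PATTERNS = {
    "openai_api_key":    re.compile(r"\bsk-[A-Za-z0-9]{20,}\b"),
    "github_token":      re.compile(r"\bghp_[A-Za-z0-9]{36}\b"),
    "url_with_password": re.compile(r"\b\w+://[^\s:/@]+:[^\s@]+@[^\s]+"),
}

def scan_for_secrets(text: str) -> dict[str, list[str]]:
    """Return, per pattern name, any candidate secrets found in the text."""
    hits: dict[str, list[str]] = {}
    for name, pattern in SECRET_PATTERNS.items():
        found = pattern.findall(text)
        if found:
            hits[name] = found
    return hits
```

Run over a document dump, any hit is a credential sitting in plain view, which is how a leaky vector database turns into access to OpenAI accounts, GitHub repositories, and backing databases.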
You can read the research here.
This entry was posted on August 28, 2024 at 8:35 am and is filed under Commentary with tags Legit Security. You can follow any responses to this entry through the RSS 2.0 feed. You can leave a response, or trackback from your own site.