Google discovers ChatGPT training data flaw
On the one-year anniversary of ChatGPT's public release come these recent findings by Google researchers on ChatGPT's training data.
The researchers successfully prompted ChatGPT into disclosing parts of its training data with a novel attack technique: they asked the chatbot's production model to repeat specific words forever, which eventually caused it to emit memorized training text verbatim.
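To confirm that such output really came from training data rather than being freshly generated, an extraction study of this kind typically checks whether the model's response reproduces long verbatim spans from a large reference corpus of known web text. Below is a minimal sketch of that verification step; the function name, the threshold, and the tiny in-memory corpus are illustrative assumptions, not the researchers' actual code or datasets:

```python
def contains_memorized_span(output: str, corpus: str, min_len: int = 50) -> bool:
    """Return True if `output` shares a verbatim substring of at least
    `min_len` characters with `corpus` (a stand-in for known training text).

    Naive O(n * m) scan, fine for a sketch; a real study would use an
    indexed structure (e.g. a suffix array) over terabytes of text.
    """
    for start in range(len(output) - min_len + 1):
        if output[start:start + min_len] in corpus:
            return True
    return False


# Hypothetical usage: flag a model response that copies a long run of corpus text.
corpus = "the quick brown fox jumps over the lazy dog " * 5
response = "poem poem poem " + corpus[10:70]  # divergence, then a verbatim span
print(contains_memorized_span(response, corpus))  # a 60-char verbatim match
```

The long-match threshold matters: short overlaps occur by chance in any fluent English output, while a 50-plus-character verbatim match is strong evidence of memorization rather than coincidence.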
Anurag Gurtu, CPO of StrikeReady, had this to say:
The exposure of training data in ChatGPT and other generative AI platforms raises significant privacy and security concerns. This situation underscores the need for more stringent data handling and processing protocols in AI development, especially regarding the use of sensitive and personal information. It also highlights the importance of transparency in AI development and the potential risks associated with the use of large-scale data. Addressing these challenges is critical for maintaining user trust and ensuring the responsible use of AI technologies.
This is not a good look for AI in general, or for ChatGPT in particular. The people behind AI products clearly need to get a handle on this sort of thing quickly, or issues like it will simply multiply.
UPDATE: Kevin Surace, Chair of Token, adds this:
The attack was incredibly simple and some of them still work as of now. It is an absolute disaster for any model to reveal its training data – IP-wise, legal, integrity and so on. Certainly, OpenAI and others must put in more stringent safeguards to keep this from happening again.
This entry was posted on November 30, 2023 at 3:37 pm and is filed under Commentary with tags Google.