OpenAI Got Pwned But Didn’t Tell Anyone For A Year
Thursday, the New York Times reported that last year a hacker gained access to the internal messaging systems at OpenAI and stole details about the design of the company’s AI technologies.
Two people familiar with the incident said the stolen information included details from an internal online discussion forum where employees talked about OpenAI’s latest technologies. The hacker did not get into the systems where OpenAI houses and builds its AI.
According to the report, in April 2023, OpenAI executives informed both employees and board members about the breach, but executives decided not to share the news publicly as no information about customers or partners had been stolen.
OpenAI executives did not inform federal law enforcement about the breach and did not consider the incident a national security threat, believing the hacker was a private individual with no known ties to a foreign government.
In May 2024, OpenAI said it had disrupted five covert influence operations that sought to use its AI models for “deceptive activity” across the internet, and in the same month 16 companies developing AI pledged to develop the technology safely.
Ted Miracco, CEO of Approov Mobile Security, had this to say:
“OpenAI’s silence on this security breach speaks volumes. While they trumpet AI safety pledges, their own house may not be in order. True security isn’t just about appearances—it’s about transparency and proactive measures, even when it’s uncomfortable. A global tech company isn’t most qualified to determine national security risks. By failing to inform law enforcement, OpenAI prioritized its own interests over potential broader implications, raising questions about their commitment to responsible AI development.
“This incident is just another example of a tech company making unilateral decisions on matters that might warrant broader scrutiny or regulatory involvement. The complex dynamic underscores the ongoing debate about how to effectively regulate and govern the tech industry, especially in rapidly evolving fields like AI.”
I have to admit that OpenAI’s response to this is suspect at best, and it makes me less likely to trust them. Especially since it was recently found that their ChatGPT Mac client stored conversation data in plain text. That has since been fixed, but you have to wonder what else is out there that would erode trust in OpenAI further.
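To see why the plain-text storage issue matters, here is a minimal sketch (this is illustrative, not OpenAI’s actual code, and the file names are hypothetical): a chat log written as an ordinary plain-text file is readable by any process running as that user, and depending on the umask sometimes by other local users too. Creating the file with owner-only permissions is a baseline mitigation; a real client should go further and encrypt the data at rest, for example via the OS keychain.

```python
# Sketch: plain-text chat storage vs. a baseline permissions fix.
# File names and paths here are hypothetical examples.
import os
import stat
import tempfile

os.umask(0o022)  # fix the umask so the permission results are deterministic


def write_plain(path: str, data: str) -> None:
    # Plain-text write: file mode is whatever the default umask allows,
    # typically world-readable (0o644). Any local process can read it.
    with open(path, "w") as f:
        f.write(data)


def write_owner_only(path: str, data: str) -> None:
    # Baseline mitigation: create the file owner-read/write only (0o600)
    # *before* writing any sensitive data into it.
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o600)
    with os.fdopen(fd, "w") as f:
        f.write(data)


def mode_of(path: str) -> str:
    # Return the permission bits of a file as an octal string.
    return oct(stat.S_IMODE(os.stat(path).st_mode))


tmp = tempfile.mkdtemp()
write_plain(os.path.join(tmp, "chat.json"), "secret conversation")
write_owner_only(os.path.join(tmp, "chat_locked.json"), "secret conversation")
print(mode_of(os.path.join(tmp, "chat.json")))         # 0o644 under umask 022
print(mode_of(os.path.join(tmp, "chat_locked.json")))  # 0o600
```

Even the restrictive version only keeps *other users* out; anything running as the same user can still read the file, which is why encryption at rest is the stronger answer.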
This entry was posted on July 9, 2024 at 8:55 am and is filed under Commentary with tags Hacked, OpenAI. You can follow any responses to this entry through the RSS 2.0 feed. You can leave a response, or trackback from your own site.