Thursday, the New York Times reported that last year a hacker had gained access to the internal messaging systems at OpenAI and stole details about the design of the company’s AI technologies.
Two people familiar with the incident said the stolen information includes details from internal, online discussion forums where employees talked about OpenAI’s latest technologies. Hackers did not get into the systems where OpenAI houses and builds its AI.
According to the report, in April 2023, OpenAI executives informed both employees and board members about the breach, but executives decided not to share the news publicly as no information about customers or partners had been stolen.
OpenAI executives did not inform the federal law enforcement agencies about the breach and did not consider the incident a national security threat, believing the hacker was a private individual with no known ties to a foreign government.
In May, OpenAI said it had disrupted five covert influence operations that sought to use its AI models for “deceptive activity” across the internet, and in the same month 16 companies developing AI pledged to develop the technology safely.
Ted Miracco, CEO, Approov Mobile Security had this to say:
“OpenAI’s silence on this security breach speaks volumes. While they trumpet AI safety pledges, their own house may not be in order. True security isn’t just about appearances—it’s about transparency and proactive measures, even when it’s uncomfortable. A global tech company isn’t most qualified to determine national security risks. By failing to inform law enforcement, OpenAI prioritized its own interests over potential broader implications, raising questions about their commitment to responsible AI development.
“This incident is just another example of a tech company making unilateral decisions on matters that might warrant broader scrutiny or regulatory involvement. The complex dynamic underscores the ongoing debate about how to effectively regulate and govern the tech industry, especially in rapidly evolving fields like AI.”
I have to admit that OpenAI’s response to this is suspect at best. It makes me less likely to trust them. Especially since it was recently found that their ChatGPT Mac client stored conversation data in plain text. That is now fixed. But you have to wonder what else is out there that would reduce the trust level of OpenAI further?
Atlas browser vulnerability uncovered by researchers
Posted in Commentary with tags OpenAI on October 24, 2025 by itnerdRecently, researchers uncovered that OpenAI’s newly launched Atlas browser is vulnerable to indirect prompt injection, allowing malicious web pages to embed hidden commands that the browser’s AI agent may follow. The flaw is also observed in other AI-powered browsers like Comet and Fellou, according to Brave Software and highlights a systemic security risk where AI models treat untrusted web content as valid instructions, potentially exposing sensitive data and compromising user sessions.
You can read more about this here: Security Experts Raise Cybersecurity Warnings in OpenAI’s New ChatGPT Atlas Browser
The CTO of DryRun Security, Ken Johnson had this to say:
“In corporate environments, I would not allow Comet, Atlas, or any AI-powered browser on company devices at this time. Browser security is already difficult even for the companies that make them, and robust privacy controls require immense care. AI is new to both fronts. Granting these tools unprecedented access to personal and corporate data, combined with the inherent risks of AI systems and existing security concerns, is a time bomb.”
Many companies have restrictions on how AI can be used. If your organization hasn’t looked at this, now would be a good time to do so. Because the risk of having sensitive data leak out to the outside world is to great to ignore.
Leave a comment »