Microsoft & OpenAI – How nation-states are weaponizing AI

According to research from Microsoft and OpenAI, nation-state threat actors from Russia, China, North Korea, and Iran are using generative AI tools, including large language models (LLMs) such as ChatGPT, to support their cyber campaigns rather than to develop novel attack techniques.
The researchers observed that AI is currently being used to scale and enhance existing social engineering attacks and to help bad actors find unsecured devices and accounts, with observed activity including:
- Querying open-source information (reconnaissance)
- Translation
- Scripting
- Finding coding errors
- Running basic coding tasks
OpenAI said yesterday that it terminated five threat actor accounts linked to China, Russia, Iran, and North Korea that were observed using these TTPs.
Also, as part of the report, Microsoft published a set of principles to govern its efforts to prevent other state-backed hackers from abusing its AI models. Those principles are:
- Identification and action against malicious threat actors’ use
- Notification to other AI service providers
- Collaboration with other stakeholders
- Transparency
“Understanding how the most sophisticated malicious actors seek to use our systems for harm gives us a signal into practices that may become more widespread in the future, and allows us to continuously evolve our safeguards,” OpenAI wrote.
Ted Miracco, CEO of Approov Mobile Security, had this comment:
“The emergence of nation-state actors leveraging generative AI in cyber operations is no surprise and underscores the urgent need for proactive measures to safeguard digital infrastructure and information assets. Microsoft, OpenAI and Google can shut down accounts periodically, but powerful generative AI technologies are readily available to all nation states through open-source LLMs that are very close in capabilities to the industry leaders. There is no effective choke point that will prevent these nation states from using these emerging AI technologies, and it is essential to understand that safeguards need to be in place across the digital landscape, as the opportunity to curtail access at the source has passed.”
Mark Campbell, Sr. Director at Cigent, followed with this comment:
“At the end of the day, nothing really changes for security professionals. Phishing, whether human- or AI-generated, is still the leading cause of initial access. Cybersecurity professionals need to keep systems up to date and deploy advanced endpoint security solutions that include AI and behavior analysis to more effectively detect and block malicious activities, including those initiated by AI-generated phishing emails.”
Making sure that AI isn’t being abused by bad actors to launch attacks should be priority one. Yes, there are plenty of cybersecurity priorities out there, but at the moment this one appears to be potentially the most dangerous.