Archive for January 19, 2023

ChatGPT Is Good At Many Things… Including Creating Malware

Posted in Commentary with tags on January 19, 2023 by itnerd

By now you have heard of ChatGPT by OpenAI. It has a lot of abilities including the ability to learn, come up with great ideas, and apparently it can create malware too:

ChatGPT took the world by storm. Released less than two months ago, it has become prominent and is used everywhere for a wide variety of tasks – from automation to the recomposition of 18th-century classical music. Its impressive features offer fast and intuitive code examples, which are incredibly beneficial for anyone in the software business. However, we find that its ability to write sophisticated malware that holds no malicious code is also quite advanced, and in this post we will walk through how one might harness ChatGPT's power, for better or for worse.


ChatGPT could easily be used to create polymorphic malware. Such malware's advanced capabilities can easily evade security products and make mitigation cumbersome, with very little effort or investment by the adversary.
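To see why polymorphism makes signature-based detection so hard, here is a minimal, entirely benign sketch: two generated scripts behave identically, but because their bytes differ on every generation, a hash-based signature that matches one variant will not match the next. All names here (`make_variant`, `random_name`) are illustrative, not drawn from any real malware or research tool.

```python
import hashlib
import random
import string

def random_name(length=8):
    """Generate a random identifier, a trivial stand-in for the
    renaming and restructuring a polymorphic engine performs."""
    return "".join(random.choices(string.ascii_lowercase, k=length))

def make_variant(seed):
    """Produce a functionally identical (and harmless) script whose
    bytes differ from one generation to the next."""
    random.seed(seed)
    var = random_name()
    return f"{var} = 'hello'\nprint({var})\n"

# Both variants print the same thing when run, yet their digests
# differ, so a signature matcher treats them as unrelated files.
a, b = make_variant(1), make_variant(2)
print(hashlib.sha256(a.encode()).hexdigest())
print(hashlib.sha256(b.encode()).hexdigest())
```

This is also why defenders lean on behavioral detection (what the code *does* at runtime) rather than static signatures when facing polymorphic threats.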

That’s not reassuring, to say the least. And Christopher Prewitt, CTO of Inversion6, has this to say:

“ChatGPT is going to have a significant impact on everyone’s lives very quickly and is already proving to be of substantial impact. The security community has immediately taken note, and it’s alleged that attackers have already been using this technology to create phishing emails, with script kiddies taking advantage of it to improve their tactics and tooling. Security researchers have been testing the bounds of this technology to stretch its capabilities, from creating malware to analyzing and translating code.”

The take-home message is that we’re likely in for a very scary and bumpy ride given the capabilities of ChatGPT. Hopefully there are checks and balances to stop it from becoming the terrifying Skynet from the Terminator movies.

UPDATE: I have additional commentary from Jack Nichelson, CISO of Inversion6:

The use of AI-assisted coding has the potential to revolutionize the way we develop software, but it also poses new risks to cybersecurity. It is important for organizations to understand the potential for malicious use of AI models and take proactive steps to mitigate these risks. This includes investing in security research and development, ensuring proper security configuration and regular testing, and implementing monitoring systems to detect and prevent malicious use.

It is important to note that the emergence of AI-assisted coding is a new reality that we must learn to adapt to and be proactive in securing against potential threats. The ability to reduce or even automate the development process using AI is a double-edged sword, and it’s important for organizations to stay ahead of the curve by investing in security research and development.

In this scenario, the researchers were able to bypass content filters by simply asking the question more authoritatively, which suggests that the security of the system was not properly configured. This highlights the importance of proper security configuration and regular testing to ensure that systems are protected against potential threats.

It is also important to note that ChatGPT is not the only AI language model with the potential to be used for malicious purposes; other models, such as GPT-3, have the same potential. Therefore, it is important for organizations to stay informed about the latest advancements in AI and their potential risks.

Furthermore, the possibilities offered by AI are vast, and we must continue to invest in research and development to stay ahead of potential threats. As the saying goes, “the future is already here, it’s just not evenly distributed,” and organizations that invest in security research and development now will be better positioned to mitigate the potential risks of AI-assisted coding.