Unit 42 Analyzes The Use of AI in Malware

While less sophisticated attackers are using LLMs to help write functional malware, attackers still struggle to deploy local models in a target environment or to embed them in a malware sample for local decision making. This research analyzes two malware samples that leverage AI for remote decision making:

  1. AI Theater: An Infostealer’s Illusory LLM Features: A trio of highly similar .NET information stealer samples that incorporate the OpenAI GPT-3.5-Turbo model via HTTP API. We will explore the implementation and assess the practical impact of its AI integration.
  2. AI-Gated Execution: Malware Dropper’s LLM-Based Environment Assessment: A malware dropper written in Golang that leverages an LLM to evaluate a system and provide a decision on whether to proceed with an infection. The sample was initially highlighted on X as a dropper for Sliver malware.
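To make the second technique concrete, here is a minimal sketch of the decision flow the report describes: the dropper collects basic host facts, asks a remote LLM whether the environment looks like a real victim machine or an analysis sandbox, and proceeds only on an affirmative verdict. This is an illustration under stated assumptions, not the sample's actual code; the function names are hypothetical, and the HTTP call to the model endpoint is deliberately omitted.

```go
package main

import (
	"fmt"
	"runtime"
	"strings"
)

// buildAssessmentPrompt packs basic host facts into a prompt that asks the
// model for a one-word PROCEED/ABORT verdict. (Hypothetical helper; the
// real sample's prompt and telemetry differ.)
func buildAssessmentPrompt(hostname string, cpuCount int) string {
	return fmt.Sprintf(
		"Given a host named %q with %d CPUs running %s, reply with exactly "+
			"PROCEED if this looks like a real user machine, or ABORT if it "+
			"looks like an analysis sandbox.",
		hostname, cpuCount, runtime.GOOS)
}

// parseVerdict reduces a free-form model reply to a go/no-go decision,
// defaulting to no-go on anything ambiguous.
func parseVerdict(reply string) bool {
	return strings.HasPrefix(strings.ToUpper(strings.TrimSpace(reply)), "PROCEED")
}

func main() {
	prompt := buildAssessmentPrompt("DESKTOP-01", 8)
	fmt.Println(prompt)
	// In the real sample, this reply would come back from the LLM API
	// over HTTP; here we just show how a reply would be interpreted.
	fmt.Println("proceed:", parseVerdict("PROCEED"))
}
```

Note the defensive default: anything other than a clear "PROCEED" aborts, which matches the general pattern of gating execution on an external model's judgment.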

Some key takeaways:  

  • The current state of AI in malware is characterized by experimentation and uneven integration, but the potential for AI to aid in malware creation highlights a concerning issue of lowering the barrier to entry for less-skilled threat actors. 
  • Unit 42 anticipates a future where AI plays a greater role in both malware creation and execution. As local model deployment becomes more feasible, we may see malware samples with embedded AI capabilities (especially code generation) that can more dynamically adapt to their environment, evade detection and optimize malicious activities in real-time.
  • The rise of AI-assisted malware could manifest in the form of increased feature cadence and reliability. It will be crucial to monitor these advancements and develop defenses that can effectively counter an evolving AI-driven threat landscape.

You can read the research here: https://unit42.paloaltonetworks.com/ai-use-in-malware
