Abnormal Security has released an analysis of new email attacks in which threat actors leverage generative AI tools to create increasingly realistic and convincing messages, highlighting which words in each email would fall within an AI model's top 10 and top 100 predicted words.
Abnormal Security has discovered real-world examples of AI-generated attacks, presenting three cases: a credential phishing attempt impersonating Facebook, an AI-created payroll diversion scam impersonating an employee, and AI-generated vendor email compromise (VEC) and invoice fraud.
Using AI, Abnormal Security analyzes the likelihood that each word in an email was AI-generated. For each example, the report includes the output of that analysis, highlighting the words flagged as likely AI-generated, to demonstrate how these emails were identified.
I also have some detailed commentary from Dan Shiebler, Head of Machine Learning at Abnormal Security:
How are cybercriminals leveraging generative AI platforms to enhance email attack techniques?
One of the leading forms of cybercrime is business email compromise (BEC), whereby threat actors write seemingly realistic, socially-engineered emails that lure their victims into taking action, like paying a fake invoice, changing their bank account details, or sharing sensitive information.
BEC actors often use templates to write and launch their email attacks. Because of this, many traditional BEC attacks feature common or recurring content that can be detected by email security technology based on pre-set policies. But with generative AI tools like ChatGPT, cybercriminals could write a greater variety of unique content based on slight differences in their generative AI prompts, which makes detection based on known attack indicator matches much more complex while also allowing them to scale the volume of their attacks.
What recent incidents have highlighted the growing threat of AI-generated email attacks?
While we are still doing a complete analysis to understand the extent of AI-generated email attacks, Abnormal has seen a definite increase in the number of attacks with AI indicators as a percentage of all attacks, particularly over the past few weeks.
How are AI-driven phishing attacks becoming more convincing and difficult to detect?
The danger of generative AI in email attacks is that it allows threat actors to write increasingly sophisticated content, making it more likely that their target will be deceived into clicking a link or following their instructions. For example, using AI to write their email attacks can help eliminate the typos and grammatical errors that often characterize and help us identify traditional BEC attacks.
It can also be used to create greater personalization. Imagine threat actors inputting snippets of their victim's email history or LinkedIn profile content into their ChatGPT queries. The resulting emails would show the context, language, and tone the victim expects, making BEC emails even more deceptive.
Can you explain how an AI-generated phishing email example mimics legitimate communication?
The email snapshots in the article are great examples of how AI-generated email attacks can mimic legitimate communications from individuals and brands. Unlike traditional BEC attacks, which tend to be riddled with grammatical errors, typos, vague senders, and formatting issues, these emails are free of those indicators. They are written professionally, with the formality expected around a business matter, and in some cases (such as the last example, from an impersonated attorney) they are signed by a named sender from a legitimate organization.
How is AI being used to detect AI-generated text in suspicious emails?
At Abnormal, we use a specialized prediction engine to analyze how likely an AI system would be to select each word in an email, given the context to the left of that word. If the words in the email have consistently high likelihood (meaning each term is highly aligned with what an AI model would say, more so than in human text), then we classify the email as possibly written by AI. However, it should be noted that not all AI-generated emails can be blocked, as there are many legitimate use cases where employees use AI to create email content. As such, the fact that an email has AI indicators must be weighed alongside many other signals of malicious intent.
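To make that concrete, here is a minimal sketch of this kind of per-word likelihood scoring, using the open-source GPT-2 model via Hugging Face transformers. The model choice, the top-10 threshold, and the sample sentence are illustrative assumptions, not Abnormal's actual engine:

```python
# Minimal sketch of per-word AI-likelihood scoring (not Abnormal's engine).
# For each token, ask a language model how probable that token is given the
# text to its left; consistently high-probability tokens suggest AI text.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def token_likelihoods(text: str):
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(input_ids=ids).logits
    # Position i of logits predicts token i+1, so align them accordingly.
    probs = torch.softmax(logits[0, :-1], dim=-1)
    results = []
    for pos, tok in enumerate(ids[0, 1:]):
        p = probs[pos]
        rank = int((p > p[tok]).sum().item()) + 1  # 1 = model's top choice
        results.append((tokenizer.decode([int(tok)]), p[tok].item(), rank))
    return results

# Flag tokens the model itself would have predicted (e.g., within its top 10).
text = "Please review the attached invoice at your earliest convenience."
for word, prob, rank in token_likelihoods(text):
    flag = "TOP-10" if rank <= 10 else ""
    print(f"{word!r:>16}  p={prob:.3f}  rank={rank:<6} {flag}")
```

Human-written text tends to include more surprising, low-rank word choices, while fully AI-generated text keeps most tokens inside the model's top predictions, which is what makes the top 10 and top 100 highlighting in the report possible.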
What are the challenges in accurately detecting AI-generated emails?
Many legitimate emails, such as templatized messages and machine translations, can look AI-generated, which makes it difficult to separate malicious AI-generated emails from benign ones. When our system decides whether to block an email, it incorporates much more information than whether AI may have generated it, including identity, behavior, and related indicators.
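One way to picture that combination (purely illustrative; the signal names, weights, and threshold below are assumptions, not Abnormal's model) is a blended score in which AI likelihood contributes but never decides on its own:

```python
# Hypothetical blend of detection signals, each normalized to [0, 1].
# The AI-likelihood signal alone cannot push the score past the threshold;
# it only raises suspicion when identity or behavior signals also fire.
def should_block(ai_likelihood: float, identity_mismatch: float,
                 behavior_anomaly: float, url_risk: float) -> bool:
    score = (0.20 * ai_likelihood + 0.35 * identity_mismatch
             + 0.30 * behavior_anomaly + 0.15 * url_risk)
    return score > 0.6

# A polished AI-written email from a known, well-behaved sender passes:
print(should_block(0.9, 0.0, 0.1, 0.0))  # False
# The same text with a spoofed sender and anomalous behavior is blocked:
print(should_block(0.9, 0.8, 0.7, 0.5))  # True
```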
Beyond phishing attacks, how has generative AI expanded into other types of email attacks?
Phishing attacks, business email compromise, and vendor fraud often fall under the same umbrella category of social engineering. Regardless of whether a threat actor intends to lure their victim into clicking a link to steal their credentials (phishing); impersonate a trusted or authoritative figure, like a senior executive or a colleague (BEC); or more specifically, impersonate a vendor (vendor fraud), generative AI is lowering the barrier to entry for launching sophisticated social engineering attacks of all types. Criminals simply need to input information and intent into a tool like ChatGPT to receive a legitimate-looking email they can send to their targets.
What measures can organizations take to combat AI-generated email attacks?
Organizations must implement modern solutions capable of detecting threats, including highly sophisticated AI-generated attacks that can be nearly impossible to distinguish from legitimate emails. They must also be able to tell when an AI-generated email is legitimate and when it carries malicious intent.
Solutions that leverage AI will be most effective in detecting these evolving attacks—think of it as good AI to fight bad AI. Instead of looking for known indicators of compromise, which constantly change, solutions that use AI to baseline normal behavior across the email environment—including typical user-specific communication patterns, styles, and relationships—will be able to detect the anomalies that may indicate a potential attack, no matter if it was created by a human or by AI.
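As a toy illustration of that baselining approach, here is a sketch using scikit-learn's IsolationForest. The feature set is an assumption chosen for readability; real products model far richer identity, relationship, and content signals:

```python
# Illustrative behavioral baseline for one mailbox (assumed features).
import numpy as np
from sklearn.ensemble import IsolationForest

# Per-email features: [send hour, recipient count, new reply-to address,
# first contact from sender, payment-related keywords present]
historical = np.array([
    [9, 1, 0, 0, 0],
    [10, 2, 0, 0, 0],
    [14, 1, 0, 0, 0],
    [11, 3, 0, 0, 1],
    [16, 1, 0, 0, 0],
] * 40)  # repeated to mimic a larger history of normal traffic

baseline = IsolationForest(contamination=0.01, random_state=0).fit(historical)

# A 3 a.m. first-contact email with a new reply-to asking about a payment:
suspicious = np.array([[3, 1, 1, 1, 1]])
print(baseline.decision_function(suspicious))  # negative => anomalous
print(baseline.predict(suspicious))            # -1 flags an outlier
```

The design point is that the model learns what is normal for this environment rather than matching known-bad indicators, so even a flawlessly written AI-generated email stands out if its sender, timing, or request pattern deviates from the baseline.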
Organizations should also practice good cybersecurity hygiene, including implementing continuous security awareness training to ensure employees are vigilant about BEC risks. Additionally, implementing tactics like password management and multi-factor authentication will help ensure the organization can limit the damage if an attack succeeds.
You can read this analysis here.
Human Factor Remains Crucial While MFA Bypass Kits Surge: Proofpoint
According to Proofpoint's report The Human Factor 2023, social engineering is more than three times more likely to be used in a cyber-attack than any other technique.
“Among the many attacks we classified, the vast majority relied on some element of psychological manipulation.
“Social Engineering is endlessly scalable and limited only by attackers’ ingenuity. And even without the use of malware or technical exploits, the aftermath of a successful social engineering attack can be devastating,” said the report.
Aiding this social engineering is threat actors' growing ability to sidestep user defenses: MFA bypass kits now account for millions of phishing messages.
Also indicating adoption by a significant number of less sophisticated groups: telephone-oriented attack delivery (TOAD) threats peaked at over 13 million messages per month, and "conversational" scams, including romance fraud and fake job ads, grew twelvefold, making them the fastest-growing threat on mobile.
"…our research has consistently led us toward a simple but powerful observation: people, not technology, are the most critical variable in today's cyber threats," stated the report.
Willy Leichter, VP of Marketing at Cyware, had this to say:
“As cybersecurity improves, it shouldn’t be surprising that humans are increasingly the weakest link. But it’s also a cop out for the security industry to shrug and blame the victims. Humans will inevitably get fooled and lured into scams. As an industry we must do a better job of connecting the dots and disseminating actionable intelligence on threats and attacks to keep the damage from spreading.”
This is where education and re-education can help make humans less of a factor in attacks. Hopefully there will be a shift to make that more of a focus than it is right now.