Netcraft has released a new blog on LLMs falling for phishing, analyzing what happens when you ask AI where to log in to various well-known platforms, the real-world impact of phishing sites recommended by an AI model, and an AI coding assistants poisoning campaign.
Netcraft’s analysis revealed that 34% of all suggested domains were not brand-owned, potentially harmful, and many of the unregistered domains could easily be claimed and weaponized by attackers, opening the door to large-scale phishing campaigns that are indirectly endorsed by user-trusted AI tools.
Netcraft observed a real-world instance where Perplexity suggested a phishing site when asked what the URL is to log in to Wells Fargo, which was surfaced by AI versus SEO, recommending the link directly to the user, bypassing traditional signals like domain authority or reputation.
Netcraft also uncovered a campaign to poison AI coding assistants in which the threat actor created a malicious API designed to impersonate a legitimate blockchain interface, engineering the entire ecosystem around it to bypass filters and reach developers through AI-generated code suggestions.
Multiple fake accounts shared a project seeded across accounts with rich bios, profile images, social media accounts, and credible coding activity with the malicious API hidden inside the repository, which were crafted to be indexed by AI training pipelines.
Netcraft found victims who copied this malicious code into their public projects, some of which show signs of being built using AI coding tools, so those poisoned repos are feeding back into the training loop, causing a supply chain attack.
You can read the blog post here.
Like this:
Like Loading...
Related
This entry was posted on July 1, 2025 at 8:13 am and is filed under Commentary with tags Netcraft. You can follow any responses to this entry through the RSS 2.0 feed.
You can leave a response, or trackback from your own site.
Threat Actors Poison AI Assistants to Spread Malicious Code & LLM Falls for Phishing Scams Sites
Netcraft has released a new blog on LLMs falling for phishing, analyzing what happens when you ask AI where to log in to various well-known platforms, the real-world impact of phishing sites recommended by an AI model, and an AI coding assistants poisoning campaign.
Netcraft’s analysis revealed that 34% of all suggested domains were not brand-owned, potentially harmful, and many of the unregistered domains could easily be claimed and weaponized by attackers, opening the door to large-scale phishing campaigns that are indirectly endorsed by user-trusted AI tools.
Netcraft observed a real-world instance where Perplexity suggested a phishing site when asked what the URL is to log in to Wells Fargo, which was surfaced by AI versus SEO, recommending the link directly to the user, bypassing traditional signals like domain authority or reputation.
Netcraft also uncovered a campaign to poison AI coding assistants in which the threat actor created a malicious API designed to impersonate a legitimate blockchain interface, engineering the entire ecosystem around it to bypass filters and reach developers through AI-generated code suggestions.
Multiple fake accounts shared a project seeded across accounts with rich bios, profile images, social media accounts, and credible coding activity with the malicious API hidden inside the repository, which were crafted to be indexed by AI training pipelines.
Netcraft found victims who copied this malicious code into their public projects, some of which show signs of being built using AI coding tools, so those poisoned repos are feeding back into the training loop, causing a supply chain attack.
You can read the blog post here.
Share this:
Like this:
Related
This entry was posted on July 1, 2025 at 8:13 am and is filed under Commentary with tags Netcraft. You can follow any responses to this entry through the RSS 2.0 feed. You can leave a response, or trackback from your own site.