The AI Caricature Trend Has Security Teams Paying Attention

The viral Instagram “AI work caricature” trend is exposing a serious shadow AI risk. By prompting ChatGPT to create job-based caricatures and posting the results publicly, users are unintentionally signaling their access to sensitive systems, their use of public LLMs for work, and potential data leakage in prompts. Millions of these posts are tied to real profiles, helping threat actors identify high‑value targets and assess which LLMs might be exploited via prompt injection or jailbreaking.

This seemingly harmless trend is a roadmap for targeted cyber and data‑exfiltration attacks.

Fortra cybersecurity expert Josh Davies has just published an article outlining these risks, which you can read here: https://www.fortra.com/blog/what-can-ai-work-caricature-trend-teach-us-about-risks-shadow-ai

UPDATE: Reinforcing that this is a top-of-mind issue at the moment, Bob Long, President, Americas at Daon, had this comment:

“Preventing identity fraud on the internet can be a serious challenge. Everyone knows that it’s vital not to share high-value personal information like your social security number or credit card information, but that is just a start to truly protecting your identity. There are multiple ways that bad actors take advantage of people in order to break into their accounts. Stealing your login information through a data breach is just the most visible method of attack. The most common is something most people don’t even see until after their information is compromised—social engineering. Social engineering is a broad term for a number of methods of luring people into handing over their login credentials willingly. Phishing is the most well known of these techniques, but there are many others. One thing they all have in common is the more a fraudster knows about their target, the easier it is to fool them.

That’s where things like the new trend of having Generative AI create a caricature of you based on everything it knows about you move from being a fun exercise to a security threat. By creating one of these images and posting it on social media, you are doing fraudsters’ work for them—giving them a visual representation of who you are. This is literally the modern version of the “40 things about me” posts that used to be popular on social channels, creating a quick-access, public record of who you are so people with bad intentions can exploit it. The fact that it explicitly prompts AI to include everything it knows about you makes it sound like it was intentionally started by a fraudster looking to make their job easy. It not only tells them a lot about the person, but it tells them which people have a lot of accessible information and which don’t. Until all businesses move away from passwords and other knowledge-based forms of authentication, people will need to remain vigilant about what information about them is publicly available.

Of course, the argument against giving your image to Generative AI also stands. Unless you know, for certain, what will be done with that image outside of providing the requested output, you are at risk of your image being used for anything from training AI image generators to populating less-than-legal tracking software. Sharing personal information, including your image, with AI should only be done when you know and trust the organization making the request.”
