Guest Post – Insider Risk in the Era of AI and Cloud Work: 5 Tips to Avoid Being Outsmarted
By John Wilson, Senior Fellow, Threat Research, Fortra
Times are changing, and no one changes faster than enterprising threat actors. Eager to be the early bird that gets the worm, malicious insiders are already leveraging AI and cloud-based inroads to cause serious and often subtle damage.
Here’s how companies can stay safe.
1. AI-Augmented Insider Threats
Insiders now have powerful tools at their fingertips. Generative AI can be used to repackage stolen data, evade detection, or even help craft malicious code. As LLMs become embedded in everyday business applications, they introduce new avenues for abuse.
To counter these threats, organizations should:
- Implement User Behavior Analytics (UBA): Invest in technology that looks for malicious indicators and flags them in real time. AI-crafted threats are adept at evading signature-based detection, so UBA's job is to surface the behavioral clues that remain (a minimal sketch of the idea follows this list).
- Fight Fire with Fire: Often, the only thing capable of catching AI is AI. Invest in AI-driven security solutions, such as XDR, that can correlate signals across endpoints, identities, and cloud workloads and stop these threats at scale.
- Use AI in Red Team Exercises: The best way to prepare for AI-augmented insider threats is to practice against them. Have your red team regularly employ AI-driven techniques, and let blue teams cut their teeth on those simulations using detection technologies, like the ones mentioned above, that are designed to catch them.
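To make the UBA idea concrete, here is a minimal, hypothetical sketch of behavioral anomaly detection: it trains an Isolation Forest on a few per-user activity features (after-hours logins, data downloaded, distinct cloud apps touched) and flags outliers for analyst review. The feature set, thresholds, and synthetic data are illustrative assumptions, not a description of any particular UBA product.

```python
# Hypothetical UBA-style anomaly detection sketch (illustrative data and features).
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Synthetic per-user, per-day baseline:
# [after_hours_logins, mb_downloaded, distinct_cloud_apps]
normal_activity = np.column_stack([
    rng.poisson(1, 500),        # a few after-hours logins
    rng.normal(200, 50, 500),   # roughly 200 MB downloaded per day
    rng.poisson(4, 500),        # a handful of cloud apps touched
])

# A hypothetical suspicious user: heavy off-hours activity and bulk downloads
suspicious = np.array([[9, 2400.0, 15]])

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_activity)

# predict() returns -1 for outliers and 1 for inliers
for label, row in [("typical user", normal_activity[0]), ("suspicious user", suspicious[0])]:
    verdict = model.predict(row.reshape(1, -1))[0]
    print(f"{label}: {'ANOMALY - send for analyst review' if verdict == -1 else 'normal'}")
```

A commercial UBA or XDR platform handles the hard parts (feature collection, baselining, and alert triage) at scale; the point of the sketch is simply that behavioral outliers remain detectable even when the payload itself evades signature-based tools.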
2. Third-Party Insiders
The modern workplace is more distributed than ever. Contractors, managed service providers (MSPs), and vendors have become an insider threat risk as anything-as-a-service (XaaS) models and offshore support expand internal access.
In a world inundated with cloud services and AI, lowering third-party risk comes down to:
- Set Minimum Threshold Requirements: Look for recognized certification standards such as SOC 2 or ISO/IEC 27001. If a third party can't meet even one baseline requirement, that says something about its security culture, or lack thereof (a simple way to make the baseline checkable is sketched after this list).
- Conduct Independent Audits: Start with the typical vendor questionnaire, then follow up with an independent security audit to verify that external parties are toeing the line. Many data protection regulations hold the contracting organization accountable for breaches that originate with its vendors, so these audits also cover your regulatory bases. Regular reassessments ensure that a vendor's security posture doesn't drift over time.
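As a hypothetical illustration of the minimum-threshold idea, the sketch below checks a small vendor register against a required baseline: certifications held and the age of the last independent audit. The required certifications, the 12-month audit window, and the vendor names are assumptions made for this example, not mandates from any standard or regulation.

```python
# Hypothetical third-party baseline check (all thresholds and names are illustrative).
from __future__ import annotations

from dataclasses import dataclass, field
from datetime import date, timedelta

REQUIRED_CERTS = {"SOC 2", "ISO/IEC 27001"}  # assumed minimum threshold
MAX_AUDIT_AGE = timedelta(days=365)          # assumed reassessment window

@dataclass
class Vendor:
    name: str
    certifications: set[str] = field(default_factory=set)
    last_independent_audit: date | None = None

def assess(vendor: Vendor, today: date) -> list[str]:
    """Return a list of gaps; an empty list means the vendor meets the baseline."""
    gaps = [f"missing {c}" for c in sorted(REQUIRED_CERTS - vendor.certifications)]
    if vendor.last_independent_audit is None:
        gaps.append("no independent audit on record")
    elif today - vendor.last_independent_audit > MAX_AUDIT_AGE:
        gaps.append("independent audit older than 12 months")
    return gaps

vendors = [
    Vendor("ExampleMSP", {"SOC 2", "ISO/IEC 27001"}, date(2025, 3, 1)),
    Vendor("ShadowVendor", {"SOC 2"}, date(2023, 11, 15)),
]
for v in vendors:
    gaps = assess(v, date.today())
    print(f"{v.name}: {'meets baseline' if not gaps else '; '.join(gaps)}")
```

In practice this register would live in a GRC or vendor-management platform rather than a script, but the principle is the same: make the minimum bar explicit, and re-check it on a schedule.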
3. Burnout as a Risk Factor
Behavioral indicators matter. Burnout, dissatisfaction, or disengagement are proven precursors to malicious or negligent insider behavior, and in a volatile talent market, those signs are harder to ignore.
Outsourcing large projects (like pen testing and red teaming) to MSSPs is one way to give your team some breathing room. When it comes time to renew (or replace) solutions, invest in automation and AI. Anything that force-multiplies the capabilities of your SOC and makes them feel more successful puts coins back in emotional bank accounts.
And do all this under a “culture of cybersecurity,” because when the company shows its commitment to secure practices – and that it’s open to discussing them – team members are more likely to speak out rather than act out.
4. Cloud-Native Access as a Weak Point
Remote and hybrid work models have fueled a surge in BYOD and shadow IT. Employees often use personal cloud apps or unmanaged devices, which can unintentionally turn them into insider threats.
Even as work-from-home models continue to expand, organizations can keep this cloud-based drift in check:
- Scan for Cloud Assets: Use a data discovery tool to find unknown digital assets in the cloud, then make sure every one of them has strong access policies around it. Implement data classification in the cloud so you maintain ongoing visibility and automated protection, because cloud-based assets are sure to scale quickly (a minimal discovery sketch follows this list).
- Explicitly Communicate a Digital Services Policy: Assume no security expectation is understood unless it has been stated outright. Many people still reuse personal passwords for work logins. Gather department heads, talk to HR, get CISO buy-in (any or all of the above) and spell out the policy on downloading SaaS and other digital services. Make IT approval mandatory, and configure security controls and permissions to enforce it where necessary.
- Decide If BYOD Is Right for You: Allowing employees to bring their own devices has obvious cost benefits, but weigh the wide-open door of risk it creates against what it is worth to your enterprise. Even if you can police behavior at work, users will do what they like out of hours. An investment in company-managed machines can be an investment in cybersecurity.
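To show one concrete flavor of cloud asset scanning, the sketch below uses the AWS SDK for Python (boto3) to enumerate S3 buckets and flag any that lack a public access block or a default encryption configuration. It is deliberately narrow (one provider, one service, two checks); a real data discovery or cloud security posture management tool would cover far more asset types, accounts, and policies.

```python
# Hypothetical cloud asset discovery sketch for AWS S3 (illustrative scope only).
# Requires boto3 and credentials with s3:ListAllMyBuckets and s3:GetBucket* permissions.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

def bucket_findings(name: str) -> list[str]:
    """Flag buckets missing a public access block or default encryption."""
    findings = []
    try:
        s3.get_public_access_block(Bucket=name)
    except ClientError as err:
        if err.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
            findings.append("no public access block configured")
        else:
            raise
    try:
        s3.get_bucket_encryption(Bucket=name)
    except ClientError as err:
        if err.response["Error"]["Code"] == "ServerSideEncryptionConfigurationNotFoundError":
            findings.append("no default encryption configured")
        else:
            raise
    return findings

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    findings = bucket_findings(name)
    print(f"{name}: {'; '.join(findings) if findings else 'baseline controls present'}")
```

The same pattern (enumerate, check against policy, report) extends to other clouds and asset types, and it is exactly what commercial discovery and classification tools automate continuously.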
5. Avoiding Overseas Remote Work Scams
Another risk of remote work today – and perhaps the most obvious – is that when hiring, many organizations do not require an in-person meet-up. This problem goes both ways, as qualified candidates get “hired” and fill out sensitive HR paperwork, only to realize the whole company was a sham. But companies themselves can also get hit by scheming “employees.”
These schemes can introduce not only corporate compromise but international espionage. In one recent example, “North Korean workers use[d] stolen or fake identities created with the help of AI tools to get hired by more than 100 companies in the U.S.,” as reported by Bleeping Computer. While the two masterminds behind this particular operation were caught, this type of scheme can pop up anywhere, at any time, and the use of AI makes it that much harder to catch.
How can organizations stay on the safe side of the line? Make sure you properly vet your remote workforce. One suggestion: Tell any prospective candidate up front that the final step in the interview process will be to visit corporate headquarters to meet the decision makers, even if no such visit will actually happen. This will quickly deter fake – or deepfake – candidates.
Staying One Step Ahead
Most insiders don’t have malicious intent, but with “the human factor” still present in 60% of data breaches, it can’t hurt to be sure. There is still a lot unknown about AI, at least to the average employee, and those same ambiguities in the cloud can lead to a perfect storm of unintentional mistakes.
Understanding human behavior and the risks it presents empowers organizations to take deliberate action against insider threats. By investing in AI-driven security, increasing visibility, automating key processes, and leveraging trusted partners, security teams can stay ahead of malicious insiders.