By Aras Nazarovas
The recent AI summit in Paris pushed an optimistic vision of the technology’s potential, focusing on how AI can solve big problems in medicine, climate science, and beyond rather than on security. But the world can’t afford to be blissfully excited. It’s crucial to remember that AI is also a powerful tool for malicious actors – one that’s already being used in cyberattacks and could evolve into a much bigger threat.
Today, AI is being deployed to amplify cyberattacks in various ways. A study from the University of Cambridge showed how AI-driven cyberattacks are becoming more sophisticated. Attackers are increasingly using machine learning algorithms to automate phishing attacks, targeting individuals and organizations with highly personalized content. These AI-driven systems can analyze vast amounts of data – on social media profiles, browsing history, and even email patterns – to create convincing attacks that are harder to detect than traditional ones.
AI tools lower the barrier to entry for cybercrime by enabling less experienced attackers to launch attacks they wouldn’t otherwise have the skills or knowledge to carry out. For instance, individuals who lack programming skills can now simply ask AI tools like ChatGPT to write bots that automate the process of breaching servers. While these attacks may not be novel, they still increase the volume of potential threats companies need to defend against, draining the resources of already underfunded security teams.
Striking the Right Balance Between Innovation and Security
As AI tools become more embedded in business operations, the stakes grow even higher. For instance, KPMG’s recent survey of financial leaders revealed that 84% plan to increase their investments in generative AI (GenAI).
While the financial sector – and presumably other industries – accelerates its adoption of AI tools, the World Economic Forum reports that nearly 47% of surveyed organizations cite adversarial advances powered by GenAI as their primary concern, since the technology enables more sophisticated and scalable attacks. Moreover, the same report states that only 37% of organizations have processes in place to assess the security of AI tools before deployment.
Meanwhile, the EU’s AI Act, which aims to regulate high-risk AI systems, is being phased in over several years, with full implementation not expected until 2027. However, there is a growing debate in Europe about how to balance regulation with fostering innovation. During the Paris AI summit, French President Emmanuel Macron remarked that Europe might reduce regulatory burdens to allow AI to flourish in the region.
This presents a potential challenge: while Europe wrestles with concerns about over-regulation, its wait-and-see approach could leave it behind as AI technology evolves at incredible speed. By the time the AI Act is fully in place, we could be facing an entirely new wave of AI-powered cyberattacks, many beyond the scope of current regulations.
So, what does this mean for cybersecurity if AI is governed by a light-touch regulatory framework? While innovation is essential, the absence of security-focused regulation means AI tools are already in the hands of cybercriminals who can weaponize them with minimal oversight.
At the moment, the capacity of AI systems to automate and optimize cyberattacks already extends far beyond the phishing described above. AI-powered tools can be used to exploit vulnerabilities in critical infrastructure, launch larger Distributed Denial of Service (DDoS) attacks, or even manipulate financial markets. In 2023, the US Department of Homeland Security warned that AI-powered systems could soon be capable of launching autonomous cyberattacks that are difficult to counteract with conventional defense mechanisms. Such threats present a security nightmare that policymakers can’t afford to ignore.
If AI systems evolve to the point where they can autonomously compromise digital infrastructure, we could see an escalation in both the frequency and severity of cyberattacks, potentially crippling global systems.
Cybersecurity Must Evolve – Now
Whether AI is robustly regulated or not, businesses should do more than the bare minimum for cybersecurity. First, it’s essential to invest in AI-driven security tools as additions to, rather than replacements for, existing ones. While AI and machine learning can be incredibly useful for detecting and preventing attacks in real time, they can also make incorrect decisions; AI should enhance cybersecurity efforts, not replace traditional tools. By analyzing patterns in network traffic, for example, AI can identify anomalies that may signal a breach. As cyberattacks become more automated, AI can help security teams identify threats faster and more efficiently, allowing them to do more with the same resources.
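The anomaly-detection idea described above can be sketched with nothing more than a z-score test – a deliberately minimal stand-in for the machine-learning models real security products use. The traffic numbers and threshold below are invented for illustration:

```python
from statistics import mean, stdev

def find_anomalies(requests_per_minute, threshold=2.5):
    """Flag minutes whose request volume deviates more than
    `threshold` standard deviations from the mean (z-score test)."""
    mu = mean(requests_per_minute)
    sigma = stdev(requests_per_minute)
    if sigma == 0:
        return []  # perfectly flat traffic: nothing stands out
    return [i for i, count in enumerate(requests_per_minute)
            if abs(count - mu) / sigma > threshold]

# Hypothetical traffic: a steady baseline with one sudden spike
traffic = [120, 118, 125, 119, 122, 121, 950, 117, 123, 120]
print(find_anomalies(traffic))  # the spike at index 6 is flagged
```

A real deployment would score many features at once (ports, payload sizes, destinations) and learn the baseline continuously, but the principle is the same: model normal behavior, then surface deviations for a human analyst.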
Another step is to incorporate AI threat modeling into security protocols. AI can be leveraged to predict and prevent attacks: security teams need to think like attackers, using AI to simulate how their systems might be breached and proactively patching those vulnerabilities before they can be exploited.
Finally, companies must invest in continuous training for their security teams. As AI-driven attacks evolve, it’s not enough to simply rely on firewalls and antivirus software. Security professionals need to be prepared to deal with more sophisticated, AI-powered threats. This includes staying ahead of trends, understanding how AI tools are being used against them, and developing strategies that go beyond traditional defenses.
Undoubtedly, AI has the potential to revolutionize cybersecurity and every other industry, but it also introduces a new wave of risks. While policymakers may be caught up in the AI race, cybersecurity professionals must act now. AI can be an ally in the fight against cybercrime and in enabling business operations, but it can also become an adversary if left unchecked. As we race toward a future shaped by AI, securing our systems against its darker side should be a top priority.
ABOUT THE EXPERT
Aras Nazarovas is an Information Security Researcher at Cybernews, a research-driven online publication. Aras specializes in cybersecurity and threat analysis. He investigates online services, malicious campaigns, and hardware security while compiling data on the most prevalent cybersecurity threats. Aras, along with the Cybernews research team, has uncovered significant online privacy and security issues impacting organizations and platforms such as NASA, Google Play, and PayPal. The Cybernews research team conducts over 7,000 investigations and publishes more than 600 studies annually, helping consumers and businesses better understand and mitigate data security risks.
Previous Cybernews research:
- Recently, Bob Dyachenko, a cybersecurity researcher and owner of SecurityDiscovery.com, and the Cybernews security research team discovered an unprotected Elasticsearch index, which contained a wide range of sensitive personal details related to the entire population of Georgia.
- The team analyzed the new Pixel 9 Pro XL smartphone’s web traffic, and found that Google’s latest flagship smartphone frequently transmits private user data to the tech giant before any app is installed.
- The team revealed that a massive data leak at MC2 Data, a background check firm, affects one-third of the US population.
- The Cybernews security research team discovered that the 50 most popular Android apps require 11 dangerous permissions on average.
- They revealed that two online PDF makers leaked tens of thousands of user documents, including passports, driving licenses, certificates, and other personal information uploaded by users.
- An analysis by the Cybernews research team discovered over a million publicly exposed secrets in the exposed environment (.env) files of over 58,000 websites.
- The team revealed that Australia’s football governing body, Football Australia, has leaked secret keys potentially opening access to 127 buckets of data, including ticket buyers’ personal data and players’ contracts and documents.
- The Cybernews research team, in collaboration with cybersecurity researcher Bob Dyachenko, discovered a massive data leak containing information from numerous past breaches, comprising 12 terabytes of data and spanning over 26 billion records.
- The team analyzed NASA’s website, and discovered an open redirect vulnerability plaguing NASA’s Astrobiology website.
- The team investigated 30,000 Android apps and discovered that over half of them are leaking secrets that could have huge repercussions for both app developers and their customers.
Romance Scam Losses Could Exceed $535 Billion
Posted in Commentary with tags Scam on February 13, 2025 by itnerd
On the eve of Valentine’s Day, researchers at Comparitech, Chainalysis, and Bitdefender are highlighting the staggering losses observed from romance baiting, also known as pig butchering, scams.
Comparitech estimated that almost 60,000 US romance seekers fell victim to these scams in 2024, resulting in heartbreaking losses of approximately $697 million – about $11,616 per victim.
More concerning, an AARP survey estimated that 4% of Americans have fallen victim to these scams – over 13 million individuals – suggesting that only about 3.6% of victims officially report them. Researchers estimate the cumulative financial damage from romance scams could exceed $535 billion.
Chloé Messdaghi, founder of SustainCyber, has this comment:
“These romance scams and pig butchering operations are getting more aggressive and harder to spot. Scammers are weaponizing AI to create fake profiles, deepfake videos, and run chatbot-driven conversations that feel real—they know how to tap into emotions fast.
“We can’t keep placing the burden solely on individuals to ‘watch for red flags’ when those flags are increasingly invisible. Platforms need to step up with stronger fraud detection and identity verification, and financial institutions should be doing more to catch suspicious transaction patterns before people lose everything. This is a collective problem that requires a collective response—tech, finance, and policy all need to work together to protect people from being manipulated and financially gutted.”
Since a major part of what I do is scam-related, I’ll offer up this story that I did earlier this week. While it’s not the whole solution, it’s a start in terms of protecting yourself from these scams.