AI assistants can “sabotage” home cybersecurity, says Cybernews

A Cybernews journalist ran a hands-on experiment that reveals how popular AI assistants like ChatGPT, Gemini, and Claude can unintentionally sabotage home network security.

“With the help of AI, I’ve spent nearly the whole day experimenting and setting up an NGINX reverse proxy,” the author writes. “My prompt was simple: ‘For my home lab, I registered a .com domain, so I can use secure TLS. But how do I do that?’”
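For context, a reverse proxy of the kind the author was building typically looks something like this. This is a minimal sketch, not the author’s actual configuration: the domain, certificate paths, and upstream address are all hypothetical placeholders.

```nginx
# Minimal sketch of an NGINX reverse proxy with TLS (hypothetical values)
server {
    listen 443 ssl;
    server_name homelab.example.com;                  # hypothetical domain

    # Certificate paths as laid out by a typical Let's Encrypt/certbot install
    ssl_certificate     /etc/letsencrypt/live/homelab.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/homelab.example.com/privkey.pem;

    location / {
        proxy_pass http://192.168.1.10:8080;          # internal service on the LAN
        proxy_set_header Host $host;
    }
}
```

The danger the article describes isn’t in this config itself, but in what has to happen around it for the setup to work from the public internet.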

The chatbots’ responses turned out to be dangerous.

“It then instructed that I needed my public DNS to point to my home WAN. This is terrible advice. Not only does it expose my home IP address, but it also gives potential attackers insight into the internal structure of my services and devices.”
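Concretely, that suggestion amounts to publishing a record like the following in public DNS. The domain is hypothetical and the address is from a documentation-reserved range (RFC 5737), but the shape is what the chatbots proposed:

```
; Hypothetical public zone entry pointing a domain at a home WAN IP
homelab.example.com.    3600    IN    A    203.0.113.45
```

Once such a record exists, anyone can recover the home IP with a single lookup (e.g. `dig +short homelab.example.com`), with no access to the home network required.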

“And it gets even worse. For this method to work at all, you would need to expose the network further and run services on the open internet. The chatbots suggest exactly that: opening ports 80 and 443. Thousands of malicious bots scan every IP address daily for any exposed vulnerability.”

The experiment shows how AI tools can produce confident but unsafe recommendations, leading users to expose their systems online.

“Chatbots might be solving PhD-level problems in benchmarks,” the author notes, “but in real-life situations they just produce generic advice that sometimes works, though rarely optimally, and they won’t ask about your specific situation to do better.”

For more information, here’s the full article: https://cybernews.com/security/experiment-ai-assistant-sabotaging-home-lab-security/ 
