Research: S&P 500’s software companies face mounting AI-related threats 

As the world prepares to celebrate Artificial Intelligence Appreciation Day on July 16th, the recent Grok scandal is a reminder that AI can't serve the public, or be celebrated by it, unless it's secure and trustworthy.

To assess AI security, the Cybernews research team analyzed potential security issues in AI tools used by S&P 500 companies and found that risks are mounting as AI adoption accelerates across industries.

Key findings:

  • 98% of S&P 500 companies now use AI in their operations – from finance and healthcare to critical infrastructure.
  • The report identifies 970 AI-related security risks across 327 leading US companies, including:
    • 194 instances of possibly insecure AI output (e.g., flawed recommendations, unsafe automation),
    • 175 data leakage risks (high-profile cases already exist, such as Samsung's source code leak via ChatGPT),
    • 64 potential cases of IP theft through AI-driven model extraction and compromised platforms.
  • Critical infrastructure and patient safety are at risk, with 35 attack vectors identified in sectors like energy and utilities. Real-world examples already include IBM Watson’s unsafe cancer treatment advice and Zillow’s $400 million loss from predictive algorithm errors.

Please read the full report here
