CISA releases AI safety and security guidelines for critical infrastructure

Yesterday, CISA released MITIGATING AI RISK: Safety and Security Guidelines for Critical Infrastructure Owners and Operators, which addresses both the opportunities the technology presents for critical infrastructure and the ways it could be weaponized or misused.
“AI can present transformative solutions for U.S. critical infrastructure, and it also carries the risk of making those systems vulnerable in new ways to critical failures, physical attacks, and cyber attacks. Our Department is taking steps to identify and mitigate those threats,” Homeland Security Secretary Alejandro Mayorkas said in a statement.
According to the guidelines, opportunities related to AI include operational awareness, customer service automation, physical security, and forecasting. At the same time, the document warns that AI risks to critical infrastructure could include attacks utilizing AI, attacks targeting AI systems, and “failures in AI design and implementation,” leading to potential malfunctions or unintended consequences.
CISA instructs owners and operators to govern, map, measure, and manage their use of the technology, incorporating NIST’s AI Risk Management Framework, and emphasizes understanding dependencies on AI vendors and inventorying AI use cases. It also encourages critical infrastructure owners to create procedures for reporting risks and to continuously test systems for vulnerabilities.
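To make the "map" and "manage" steps concrete, here is a minimal sketch of what an AI use-case inventory with vendor dependencies and risk reporting might look like in practice. The class names, fields, and example entries are illustrative assumptions, not taken from the CISA guidelines or the NIST AI RMF themselves.

```python
# Hypothetical sketch of an AI use-case inventory ("map"): each entry records
# what the AI system does and which vendor it depends on, so reported risks
# ("manage") can be tracked against specific systems. All names are illustrative.
from dataclasses import dataclass, field

@dataclass
class AIUseCase:
    name: str                      # what the AI system does
    vendor: str                    # external dependency to track
    risks_reported: list = field(default_factory=list)

inventory = [
    AIUseCase("load forecasting", "VendorA"),
    AIUseCase("customer service chatbot", "VendorB"),
]

def report_risk(inventory, use_case_name, risk):
    """Record a reported risk against a named use case; False if not found."""
    for uc in inventory:
        if uc.name == use_case_name:
            uc.risks_reported.append(risk)
            return True
    return False

report_risk(inventory, "customer service chatbot", "prompt injection")

# Surface use cases with open risks for follow-up vulnerability testing.
flagged = [uc.name for uc in inventory if uc.risks_reported]
print(flagged)  # ['customer service chatbot']
```

Even a simple structure like this supports the continuous-testing loop the guidelines describe: anything in `flagged` becomes a candidate for the next round of vulnerability testing.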
This release comes just days after DHS announced the formation of a safety and security board focused on the same topic, whose members include executives Sam Altman of OpenAI and Sundar Pichai of Alphabet.
Jason Keirstead, VP of Collective Threat Defense at Cyware, had this to say:
“I am pleased that CISA is highlighting the challenges AI presents for securing critical infrastructure. These guidelines underscore the need for robust AI system governance, urging infrastructure owners to adopt a structured framework for AI risk management. Simultaneously, CISA should work to highlight the opportunities that AI brings to assist in the defense of critical infrastructure, when leveraged effectively and with the goal of helping to break data silos in order to uncover hidden threats. If we want to avoid recreating the same siloed challenges that have impacted security operations tech and teams, we must encourage adopting consistent standardization and require defensive AI systems to interoperate with each other – this is key to both effectiveness and efficiency.”
This is a good move by CISA because it puts concrete guidance in front of operators for mitigating risk. And there are potentially many risks with AI that we simply aren’t aware of yet. It would be wise to read and heed this advice.
This entry was posted on May 1, 2024 at 8:38 am and is filed under Commentary with tags CISA. You can follow any responses to this entry through the RSS 2.0 feed.