White House Issues Executive Order on Safe, Secure, and Trustworthy AI

Today the White House announced an executive order aimed at mitigating AI risks:
As part of the Biden-Harris Administration’s comprehensive strategy for responsible innovation, the Executive Order builds on previous actions the President has taken, including work that led to voluntary commitments from 15 leading companies to drive safe, secure, and trustworthy development of AI.
The link above leads to a very extensive document that is worth reading, as it goes into considerable detail about what this executive order covers. John Gunn, CEO of Token, had this comment:
The aim is noble and the need is certain, but the implementation will be challenging considering that Generative AI technology is already being used extensively by hackers and enemy states to attack US companies with phishing emails that are nearly impossible to detect. Most AI technologies that deliver benefits can also be used for harm, so almost every company developing AI solutions needs to make the required disclosure today.
This is likely to be a hot topic today. As I get other reactions, I will post them here.
UPDATE: Anurag Gurtu, CPO of StrikeReady, had this comment:
As President Biden prepares to leverage emergency powers for AI risk mitigation, it’s a clear signal of the critical juncture at which we find ourselves in the evolution of AI technology. The administration’s decision reflects a growing awareness of the transformative impact AI has on every sector, and the need for robust frameworks that govern its ethical use and development.
This initiative isn’t just about preemptive measures against potential misuse; it’s a foundational move towards establishing a global standard for AI that aligns with our values of safety, security, and trustworthiness. It’s an acknowledgment that while AI presents unparalleled opportunities for advancement, it also brings challenges that must be addressed to protect societal welfare and national interests.
For businesses and developers, this move will likely mean a more stringent regulatory environment, but also a clearer direction for innovation within safe and secure boundaries. It’s time for all stakeholders to engage in dialogue and contribute to a balanced approach that fosters innovation while safeguarding against the risks that have kept policymakers and citizens alike vigilant.
UPDATE #2: George McGregor, VP of Approov, had this to say:
If you market a cybersecurity solution in the USA, you had better read through this Executive Order (EO) – it may affect your business! If your solution is deterministic in nature, then life will be easier, but if you are promoting the use of AI in your product, then life may well get more complicated: not only do you need to demonstrate to customers that false positives and management overhead due to AI are not an issue, but with these new guidelines, the AI methods you employ will be under the microscope as well.
Here are some other comments, each followed by the relevant text from the EO:
First – if you are an AI-based cybersecurity vendor, you may be expected to share your test results with the government. The success or failure of a security solution, by its very nature, “poses a risk to national security”.
- From the EO text: Require that developers of the most powerful AI systems share their safety test results and other critical information with the U.S. government. In accordance with the Defense Production Act, the Order will require that companies developing any foundation model that poses a serious risk to national security, national economic security, or national public health and safety must notify the federal government when training the model and must share the results of all red-team safety tests. These measures will ensure AI systems are safe, secure, and trustworthy before companies make them public.
Second, attestation techniques will become critical – this is already true for mobile app code which can easily be reverse-engineered and replicated unless steps are taken. Fingerprinting techniques used in mobile may be applicable here.
- From the EO text: Protect Americans from AI-enabled fraud and deception by establishing standards and best practices for detecting AI-generated content and authenticating official content. The Department of Commerce will develop guidance for content authentication and watermarking to clearly label AI-generated content. Federal agencies will use these tools to make it easy for Americans to know that the communications they receive from their government are authentic—and set an example for the private sector and governments around the world.
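To make the attestation and content-authentication ideas above a little more concrete, here is a minimal sketch in Python of one common building block: a detached signature over a piece of content that a recipient can verify. This is purely illustrative – the EO does not prescribe any scheme, the HMAC-with-shared-secret approach is a stand-in for whatever the Department of Commerce guidance ultimately specifies, and the key and function names are assumptions for the example.

```python
import base64
import hashlib
import hmac

# Secret held by the issuer (e.g. an attestation service or publishing
# agency). In practice this would be an asymmetric key pair so that
# verifiers never hold signing material; HMAC keeps the sketch short.
SECRET_KEY = b"example-signing-key"  # placeholder, not a real key

def sign_content(content: bytes) -> str:
    """Produce a detached signature (attestation) for the content."""
    digest = hmac.new(SECRET_KEY, content, hashlib.sha256).digest()
    return base64.urlsafe_b64encode(digest).decode("ascii")

def verify_content(content: bytes, signature: str) -> bool:
    """Check that the content matches its signature, in constant time."""
    expected = sign_content(content)
    return hmac.compare_digest(expected, signature)

if __name__ == "__main__":
    message = b"Official notice: this communication is authentic."
    tag = sign_content(message)
    print(verify_content(message, tag))              # True
    print(verify_content(b"Tampered notice.", tag))  # False
```

In a real deployment the issuer would sign with a private key and publish the corresponding public key, and the constant-time comparison matters either way, so that a verifier cannot be probed for the expected signature byte by byte.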
A program to use AI to eliminate vulnerabilities is a noble pursuit, but it should not be viewed as a replacement for good software development discipline and for implementing runtime visibility and protection.
- From the EO text: Establish an advanced cybersecurity program to develop AI tools to find and fix vulnerabilities in critical software, building on the Biden-Harris Administration’s ongoing AI Cyber Challenge. Together, these efforts will harness AI’s potentially game-changing cyber capabilities to make software and networks more secure.
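As a toy illustration of the “find and fix vulnerabilities” side of that program, here is a short Python sketch that flags a couple of well-known risky call patterns using the standard ast module. An AI-driven tool would of course go far beyond fixed pattern matching like this; the sketch only shows the shape of the task, and the function name and pattern list are my own assumptions.

```python
import ast

# Patterns a reviewer (or an AI tool) might flag. Calls to eval/exec are
# classic code-injection risks; this fixed list is a stand-in for the far
# broader analysis an AI-based tool would attempt.
RISKY_CALLS = {"eval", "exec"}

def find_risky_calls(source: str):
    """Return (line, name) for each call to a known-risky builtin."""
    findings = []
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            if node.func.id in RISKY_CALLS:
                findings.append((node.lineno, node.func.id))
    return findings

if __name__ == "__main__":
    sample = "user_input = input()\nresult = eval(user_input)\n"
    for line, name in find_risky_calls(sample):
        print(f"line {line}: call to {name}() may allow code injection")
```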
The use of AI will not be a power for good only. Hackers will seek to use these techniques as well, and there will inevitably be an arms race between security teams and hackers. To start with, however, the cost of entry for bad actors will be high in terms of the knowledge required and the complexity of the task, which means that well-funded “nation state” teams will be the primary users of AI for nefarious purposes. National security teams will need the resources to track and counter these efforts.
- From the EO text: Order the development of a National Security Memorandum that directs further actions on AI and security, to be developed by the National Security Council and White House Chief of Staff. This document will ensure that the United States military and intelligence community use AI safely, ethically, and effectively in their missions, and will direct actions to counter adversaries’ military use of AI.