The White House Makes Another Announcement Related To AI

For the second time this month, The White House has made an announcement regarding the responsible use of AI. Unlike the earlier announcement, this one centres on R&D and deployment:

AI is one of the most powerful technologies of our time, with broad applications. President Biden has been clear that in order to seize the opportunities AI presents, we must first manage its risks. To that end, the Administration has taken significant action to promote responsible AI innovation that places people, communities, and the public good at the center, and manages risks to individuals and our society, security, and economy. This includes the landmark Blueprint for an AI Bill of Rights and related executive actions, the AI Risk Management Framework, a roadmap for standing up a National AI Research Resource, active work to address the national security concerns raised by AI, as well as investments and actions announced earlier this month. Last week, the Administration also convened representatives from leading AI companies for a briefing from experts across the national security community on cyber threats to AI systems and best practices to secure high-value networks and information.

Ani Chaudhuri, CEO of Dasera, was kind enough to provide their view of this announcement:

The Biden-Harris Administration’s recent steps to advance responsible artificial intelligence (AI) research, development, and deployment are crucial in our rapidly evolving digital age. Undoubtedly, AI technologies will transform how we live and work, so we must approach this field with a responsible yet innovative mindset.

One fundamental aspect of this responsible approach is to ensure data security. Every AI system relies on vast amounts of data, whether it’s automating tasks, making predictions, or creating new services. Ensuring the security of this data, its privacy, and its ethical use is not just a good practice; it’s a necessity. An AI system is only as good as the data it’s trained on, and if that data is biased, misused, or breached, the consequences can be severe.

However, while the government’s role in fostering responsible AI innovation is critical, we should be mindful of the potential pitfalls of heavy-handed regulation. We must strike a careful balance: on one side, safeguarding the rights and safety of individuals, and on the other, not stifling innovation and competition. This is a delicate act. It’s essential to refrain from giving the government too much power over the tech sector and to limit regulatory barriers that could hamper the global competitiveness of our AI industry.

Sam Altman, CEO of OpenAI, recently appeared before Congress and proposed a new agency to oversee AI. The idea of issuing licenses to train and use AI models is thought-provoking. Still, it could lead to regulatory capture, where established players protect their position by creating barriers for others.

Instead, we should consider a more collaborative and decentralized approach that fosters trust, transparency, and accountability. Regulations should be built with an understanding of the technology and, more importantly, its implications on society and individuals. Instead of a single body that issues licenses, why not have a network of organizations, including academic institutions, non-profits, and private enterprises, that review, audit, and certify AI systems and their uses?

The risks associated with AI are complex and multifaceted, akin to cybersecurity risks. Cybersecurity has always been a cat-and-mouse game between those who seek to exploit vulnerabilities and those who work tirelessly to patch them. The same will likely be true with AI.

We should learn from our experiences in handling cybersecurity and data privacy issues. Clear guidelines, self-regulation, transparency, and domestic and international cooperation are vital in managing these risks without curbing innovation. While it’s true that we might not be able to regulate every AI model, we can build regulations for how these models are used, just like we have regulations for how personal data is used.

It’s crucial to focus on the main goal: making AI serve humanity, protect individuals, and foster innovation. We should not just regulate AI but guide it towards that goal. Our approach should be both proactive and adaptive, ready to embrace the opportunities AI presents and address the risks it poses. After all, AI is a tool we created, and it’s up to us to ensure it’s used responsibly.

It is a positive step that The White House is interested in putting up guardrails around AI, because in my mind AI could either be a positive force for all of us, or it could go off the rails in a really bad way. The actions that we take now will decide which way it goes.

UPDATE: Craig Burland, CISO of Inversion6, adds this comment:

On the surface, this is a well-considered, positive, and actionable step forward. The government appears to be looking at AI as a tool with both potential and risks, seeking first to understand. This is a smart approach for any new technology, particularly for something disruptive like AI.

The nine strategies outlined within the updated roadmap are sound, again balancing both the potential and risk of AI. For example, the first strategy focuses on investment in AI research. The third strategy focuses on the ethical, legal, and societal implications of AI. Maybe more importantly, what the administration lays out is actionable. They’ve outlined strategies that the US government can drive. They’ve avoided calling for moratoriums, global bans, or other unattainable steps that would accomplish little.

UPDATE #2: Kevin Bocek, VP of Ecosystem and Community at Venafi, adds this commentary:

“We are still in the early stages of understanding the impact of AI on both businesses and the public, and it’s a constantly moving target, with new use cases and products being announced on a daily basis. So, it is very encouraging to see The White House take the first steps in developing a responsible AI framework. As part of this process, it is vital that the government recognizes that smart organizations will not slow down the innovation that we’re seeing with Generative AI, and that the results will be overwhelmingly positive. However, there are known and unknown risks that need to be skillfully mitigated. 

“As such, the priority for regulations must be to contain risks while encouraging exploration, curiosity, and trial and error. But any steps to achieve this can’t be approached with a ‘set and forget’ mentality. Regulators need to establish policies and guidelines that are reviewed and refreshed frequently as we explore the power of AI in more depth. This means the government will need to constantly collaborate and communicate with experts in the field to avoid neglect and exploitation.”
