The White House Makes An Announcement On How They’re Going To Promote Responsible AI Development

The White House today announced how it plans to promote responsible AI innovation. This is timely, as AI is a top-of-mind issue at the moment. Here's the stated goal:

AI is one of the most powerful technologies of our time, but in order to seize the opportunities it presents, we must first mitigate its risks. President Biden has been clear that when it comes to AI, we must place people and communities at the center by supporting responsible innovation that serves the public good, while protecting our society, security, and economy. Importantly, this means that companies have a fundamental responsibility to make sure their products are safe before they are deployed or made public.

There's a lot more to this, and I encourage you to read the full details at the link above.

I have two comments on this, starting with Ani Chaudhuri, CEO, Dasera:

In light of the recent announcement made by the Biden-Harris Administration, it is evident that the US government has taken some essential steps to promote responsible AI innovation while protecting Americans’ rights and safety. While these actions are commendable, it is crucial to emphasize that data security plays a vital role in ensuring AI’s responsible and ethical use.

As the Administration engages with CEOs of leading AI companies, it is essential to remember that responsible and ethical AI development requires robust security measures. Data security companies play a significant part in this landscape, working diligently to protect sensitive information and mitigate risks associated with AI technologies.

The new investments in AI research and development, public assessments of generative AI systems, and policies to ensure responsible AI use by the US government are all necessary steps to create a safer AI ecosystem. However, investing in data security infrastructure and prioritizing collaboration with data security companies is vital. In doing so, the government and AI industry can ensure comprehensive protection against risks and potential harm to individuals and society.

Furthermore, AI developers must be held accountable for the security of their products, emphasizing their responsibility to make their technology safe before deployment or public use. This includes proper data management, secure storage, and measures to prevent unauthorized access to sensitive information.

The Biden-Harris Administration’s actions to promote responsible AI innovation are crucial for a safer future. However, it is equally important to acknowledge the role of data security companies in this landscape and foster partnerships to ensure a comprehensive and cohesive approach to AI-related risks and opportunities.

This is followed up by a comment from Craig Burland, CISO, Inversion6:

There's no putting the AI genie back in the bottle. Two years ago, if your product didn't have AI it was considered last-generation. From SIEM to EDR, products had to have AI/ML. Now, ChatGPT is evoking fears pulled from science fiction movies.

Generative AI (GAI) is an evolution of technology that started when we jumped into Big Data. GAI has tremendous potential and troubling downsides. But the government will be hard-pressed to curtail building new models, slow expanding capabilities, or ban addressing new use cases. These models could proliferate anywhere on the globe. Clever humans will find new ways to use this tool – for good and bad. Any regulation will largely be ceremonial and practically unenforceable.

I think this is a good initiative by the White House. But as always, I await meaningful results, as I feel that we're currently at a tipping point with AI. That implies things could go in a great direction, or they could go off the rails. And in either case, there would be no way back.
