The White House Announces New Rules For The Use Of AI In Federal Agencies

The White House has announced new AI rules, stating U.S. federal agencies must show that their AI tools aren’t harming the public, or stop using them:

By December 1, 2024, Federal agencies will be required to implement concrete safeguards when using AI in a way that could impact Americans’ rights or safety. These safeguards include a range of mandatory actions to reliably assess, test, and monitor AI’s impacts on the public, mitigate the risks of algorithmic discrimination, and provide the public with transparency into how the government uses AI. These safeguards apply to a wide range of AI applications from health and education to employment and housing.

For example, by adopting these safeguards, agencies can ensure that:

  • When at the airport, travelers will continue to have the ability to opt out from the use of TSA facial recognition without any delay or losing their place in line.
  • When AI is used in the Federal healthcare system to support critical diagnostics decisions, a human being is overseeing the process to verify the tools’ results and avoid disparities in healthcare access.
  • When AI is used to detect fraud in government services there is human oversight of impactful decisions and affected individuals have the opportunity to seek remedy for AI harms.

If an agency cannot apply these safeguards, the agency must cease using the AI system, unless agency leadership justifies why doing so would increase risks to safety or rights overall or would create an unacceptable impediment to critical agency operations.

To protect the federal workforce as the government adopts AI, OMB’s policy encourages agencies to consult federal employee unions and adopt the Department of Labor’s forthcoming principles on mitigating AI’s potential harms to employees. The Department is also leading by example, consulting with federal employees and labor unions both in the development of those principles and its own governance and use of AI.

Craig Burland, CISO at Inversion6, had this comment:

The administration continues to demonstrate vigilant leadership in cybersecurity domains, modeling what they want (and maybe expect) to see from the private sector. It’s clear that AI poses both a compelling opportunity and a significant threat to how people use and interact with technology. The government’s commitment to human oversight of AI for highly personal and highly impactful decisions is both sensible and prudent given the immaturity of AI. ChatGPT burst into the public consciousness just over a year ago. AIs and LLMs are not ready to make decisions about healthcare or government services. In human terms, these tools are barely toddlers! At the same time, while the administration adds friction to AI advancement with requirements about oversight and transparency, it is lowering barriers for agencies where that friction is no longer warranted, like FEMA, the CDC, and the FAA. This balance speaks highly of their approach to harnessing the disruption of AI without unleashing it on an unsuspecting public.

A cautious approach to AI is warranted, given that AI has had a few “misfires” over the years. The worst possible outcome would be for one of those “misfires” to turn into a catastrophic event.
