
The White House plans to regulate the government's use of AI

AI safeguards will need to be in effect by December 1, 2024.
Written by Don Reisinger, Contributing Writer

The Biden administration is moving to ensure that the US government uses artificial intelligence in a responsible way, the White House announced today.

By December 1, 2024, all US federal agencies will be required to have AI "safeguards" in place to ensure the safety of US citizens, the White House said on Thursday. The safeguards will be used to "assess, test, and monitor" how AI is being used by government agencies, avoid any discrimination that may occur through the use of AI, and ultimately allow for the public to see how the US is using AI.


"This guidance places people and communities at the center of the government's innovation goals," the White House said in a statement. "Federal agencies have a distinct responsibility to identify and manage AI risks because of the role they play in our society, and the public must have confidence that the agencies will protect their rights and safety."

The US government has been using AI in some form for years, but it's becoming more difficult to know how, and why. Last year, President Joe Biden issued an executive order on AI requiring that its use in government put safety and security first. This latest policy change builds upon that executive order.

There are understandable concerns about how government agencies could use AI. From law enforcement to public policy decisions, AI could be deployed in especially impactful ways. And if AI is allowed to run amok, without human oversight and checks to ensure it's being used properly, the technology could cause unforeseen and potentially harmful effects.

In a statement, the White House provided some examples of how safeguards could be put in place to protect Americans. In one example, travelers could opt out of facial-recognition tools at airports. In another, the White House said a human should verify information provided by AI about an individual's healthcare decisions.


The White House guidance orders every government agency to comply with its safeguard requirement. Only in certain circumstances would a government agency be allowed to operate an AI tool without such safeguards.

"If an agency cannot apply these safeguards, the agency must cease using the AI system," the White House said, "unless agency leadership justifies why doing so would increase risks to safety or rights overall or would create an unacceptable impediment to critical agency operations."

It's currently unclear what kinds of safeguards agencies will adopt by December 1, and the White House didn't say how those policies will be made public or whether there will be a process to petition for enhanced safeguards. It's also worth noting that the new policy extends only to federal agencies; for similar protections to apply at the state level, each state would need to issue its own policies.
