US federal agencies will have to appoint a chief artificial intelligence officer (CAIO) and publish a list of their AI use cases under new guidelines issued by the White House Office of Management and Budget (OMB).
The new policies aim to encourage federal agencies to leverage AI effectively while mitigating the risks of the technology by strengthening governance.
The new rules set a deadline of December 1, 2024, for federal agencies to establish “concrete safeguards” when using AI in a way that could affect rights or safety.
These safeguards include ways to test and monitor the impacts of AI on the public, mitigate the risks of algorithmic discrimination, and provide the public with transparent information about how the government is using AI.
If an agency cannot apply these safeguards, it must stop using the AI system unless doing so creates an “unacceptable impediment” to the agency's critical operations, the OMB said. Agencies should also consult unions in an attempt to mitigate the potential impact of AI on the workforce.
Under the guidelines, agencies have 60 days to appoint a CAIO, who will coordinate the use of AI across the agency, and to establish AI governance boards to oversee how AI is used.
The Departments of Defense, Veterans Affairs, Housing and Urban Development, and State have already established these governing bodies; other agencies have until May 27, 2024, to do the same, the OMB said.
Agencies may designate an existing official, such as a CIO or CTO, to serve in the CAIO role. This person will have primary responsibility for coordinating their agency's use of AI, promoting AI innovation, and managing the risks arising from its use.
That involves creating the annual inventory of AI use cases, working with HR on AI skills, removing barriers to the responsible use of AI within the agency, and identifying and managing the risks that AI use creates.
These safeguards could mean, for example, that travelers will have the ability to opt out of using facial recognition at the airport without losing their place in line, the OMB said.
Similarly, when AI is used in the federal healthcare system to support diagnostic decisions, a human will be required to oversee the process, verifying the results and ensuring that the AI does not create disparities in access to medical care.
Federal agencies will need to be clear about when they use AI by publishing annual inventories of their AI use cases, including identifying use cases that impact rights or safety and how the agency addresses relevant risks.
They will also have to publish government-owned AI code, models, and data, where such releases do not pose a risk.
The OMB is keen for agencies to use AI, noting that the technology presents “tremendous opportunities to help agencies address society's most pressing challenges.”
It cited examples including the Federal Emergency Management Agency (FEMA) using AI to review and assess structural damage after hurricanes, and the National Oceanic and Atmospheric Administration (NOAA) using AI to make more accurate forecasts of extreme weather, floods, and wildfires.
“Advances in generative AI are expanding these opportunities, and OMB guidance encourages agencies to responsibly experiment with generative AI, with appropriate safeguards in place. Many agencies have already started this work, including by using AI chatbots to improve customer experiences and other AI pilots,” the OMB said.
Government agencies across the United States are trying to determine where, when and how to use AI.
For example, New York State recently unveiled its own “Acceptable Use of Artificial Intelligence Technologies” policy, covering the broader use of systems that rely on machine learning, large language models, natural language processing, and generative AI.