The Trump Administration has unveiled its long-awaited guidelines governing how federal agencies should develop and use artificial intelligence (AI) technologies.
On January 7, the White House Office of Management and Budget (OMB) and the Office of Science and Technology Policy (OSTP) issued a draft memorandum with a set of 10 “Principles for the Stewardship of AI Applications,” calling for fairness and non-discrimination, transparency, flexibility, public trust and participation, sound science, and safety and security to be top priorities as agencies draft and implement regulations on AI.
Minimizing the regulatory burden is another major theme running through the draft Guidance for Regulation of Artificial Intelligence Applications from Russell Vought, Acting Director of OMB.
“Federal agencies must avoid regulatory or non-regulatory actions that needlessly hamper AI innovation and growth,” warns Vought’s memorandum for the Heads of Executive Departments and Agencies. The memorandum states that “agencies should assess the effect of the potential regulation on AI innovation and growth” and cautions against “a precautionary approach that holds AI systems to such an impossibly high standard that society cannot enjoy their benefits.” Federal agencies are also urged to employ risk assessments and cost-benefit analyses prior to any regulatory action, while taking into consideration the impact of state and local laws.
Most notably, the guidance encourages agencies to address risks posed by AI applications through approaches other than regulatory action. As alternatives, the OMB memorandum encourages agencies to allow pilot programs and experiments and calls upon them to “grant waivers and exemptions from regulations, or to allow pilot programs that provide safe harbors for specific AI applications.”
The guidance also encourages sector-specific policy guidance or frameworks “as a means of encouraging AI innovation in that sector … [and] provide clarity where a lack of regulatory clarity may impede innovation.” Lastly, the guidance encourages agencies to give preference to voluntary consensus standards and independent standards-setting organizations when managing risks associated with AI applications. Specifically, the guidance states that agencies “should consider relying on private-sector conformity assessment programs and activities, before proposing either regulations or compliance programs.”
In a January 6 call with journalists, Michael Kratsios, Chief Technology Officer of the United States, called the new initiative the “first of its kind”: the first “binding document” for how government agencies will regulate emerging AI technologies with all their attendant benefits, opportunities and risks.
The European Commission is expected to announce its AI regulatory plan in the coming months, and Kratsios said the Administration hopes the European proposal will be consistent with the new US policy framework.
Briefly, the memorandum’s 10 principles for the stewardship of AI call for:
- Public trust in AI
- Public participation
- Scientific integrity and information quality
- Risk assessment and management
- Benefits and costs
- Flexibility
- Fairness and non-discrimination
- Disclosure and transparency
- Safety and security
- Interagency coordination
Upon OMB’s issuance of the memorandum, federal agencies will have 180 days to review their authorities relevant to AI applications and submit plans to OMB on achieving consistency with the Memorandum. Agencies will also be expected to list and describe any planned or considered regulatory actions on AI within the same 180-day period.