The Outgoing Trump Administration Just Issued Guidelines For AI Regulation. They Advise Proceeding Cautiously.


On February 11, 2019, President Trump signed an Executive Order announcing the American AI Initiative. Billed as a national strategy to promote and protect national AI technology and innovation, the executive order laid out five ‘pillars’ that the federal government should follow to advance AI: 

  • Invest in AI research and development (R&D); 
  • Unleash AI resources; 
  • Remove barriers to AI innovation; 
  • Train an AI-ready workforce; 
  • Promote an international environment that is supportive of American AI innovation and its responsible use.  

A year later, the White House released the American Artificial Intelligence Initiative: Year One Annual Report detailing the work that had been done toward the American AI Initiative thus far. Among other things, the report stated that the first international statement on ‘AI Principles’ had been developed, along with the first-ever AI regulatory document for the trustworthy development, testing, deployment, and adoption of AI technologies. 

This week, the Office of Management and Budget (OMB) released a Memorandum to the heads of executive departments and agencies, providing guidance for the regulation of AI. Interestingly, the Memorandum stipulates that the guidance it is providing relates only to the development and deployment of AI outside the Federal government. Specifically, the Memorandum states that “although Federal agencies currently use AI in many ways to perform their missions, government use of AI is outside the scope of this Memorandum.” 

The memorandum advises that the regulation of AI should be done cautiously. “Federal agencies must avoid regulatory or non-regulatory actions that needlessly hamper AI innovation and growth,” it reads. “Where permitted by law, when deciding whether and how to regulate in an area that may affect AI applications, agencies should assess the effect of the potential regulation on AI innovation and growth.” The Trump administration makes clear that it is important to strike a balance between being internationally competitive in the AI arena and managing the potential risks brought about by the sophisticated technology. “Where AI entails risk, agencies should consider the potential benefits and costs of employing AI, as compared to the systems AI has been designed to complement or replace,” it states. “While narrowly tailored and evidence-based regulations that address specific and identifiable risks could provide an enabling environment for U.S. companies to maintain global competitiveness, agencies must avoid a precautionary approach that holds AI systems to an impossibly high standard such that society cannot enjoy their benefits and that could undermine America’s position as the global leader in AI innovation.” 

The following ten principles are outlined as important to consider in the stewardship of AI: 

  1. Public Trust in AI 
  2. Public Participation 
  3. Scientific Integrity and Information Quality 
  4. Risk Assessment and Management  
  5. Benefits and Costs  
  6. Flexibility  
  7. Fairness and Non-Discrimination 
  8. Disclosure and Transparency  
  9. Safety and Security  
  10. Interagency Coordination  

The memorandum highlights that public perception of AI is important and that benefits and risks must be communicated clearly to build trust and understanding. “The process by which agencies develop and implement regulatory and non-regulatory approaches to AI applications will have a significant impact on public perceptions of AI,” it advises. It encourages sector-specific policy guidance and frameworks, voluntary consensus standards, and pilot programs and experiments such as hackathons, tech sprints, and challenges.  

‘International Regulatory Cooperation’ is also listed as a priority, with a view to America ‘remaining at the forefront of AI development.’ Agencies are advised to consider key U.S. trading partners and cooperation with international partners when developing their strategic plans. “Agencies should engage in dialogues to promote compatible regulatory approaches to AI and to promote American AI innovation while protecting privacy, civil rights, civil liberties, and American values,” according to the memorandum. 

How these directives will be interpreted by individual agencies and departments remains to be seen. The document asks for AI plans to be submitted to the Office of Management and Budget by May 2021. By that time the Biden administration will have taken the reins, and it will be interesting to see how the changing of the guard shapes AI regulation over the next four years. We will continue to monitor and report on this area as AI inevitably becomes more ubiquitous in our day-to-day lives.

Artificial Intelligence is being implemented in industries all over the world and is a central theme of the research undertaken at UCIPT. Our work in the HOPE study is using data to assess and shift behavioral outcomes among HIV-affected and other populations.
