Recent U.S. Efforts on AI Policy
Recent actions taken by the U.S. government’s executive and legislative branches related to AI-based software systems reflect the need to marshal a national effort to defend critical infrastructure and government networks and assets; work with partners across government and industry; and expand existing services and programs for federal civilian agencies and critical infrastructure owners and operators.
The following recent efforts guide CISA’s actions in our Roadmap for AI:
- Executive Order 14110, “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (AI).” (October 2023) This EO focuses on ensuring that AI is safe and secure. Achieving this will require robust, reliable, repeatable, and standardized evaluations of AI systems, as well as policies, institutions, and mechanisms to test, understand, and mitigate risks from these systems before they are put to use.
- Voluntary Commitments from Leading Artificial Intelligence Companies to Manage the Risks Posed by AI. (Updated September 2023) The Biden-Harris administration has secured voluntary commitments from leading AI companies to help move toward safe, secure, and transparent development of AI technology. These commitments include ensuring products are safe before introducing them to the public, building systems that put security first, and earning the public’s trust.
- DHS Policy Statement 139-06 Acquisition and Use of Artificial Intelligence and Machine Learning by DHS Components. (August 2023) This policy statement provides that DHS will acquire and use AI only in a manner that is consistent with the Constitution and all other applicable laws and policies.
- New National Science Foundation Funding. (May 2023) This dedicated $140 million in funding will launch seven new National AI Research Institutes to promote responsible innovation, bolster the United States’ AI research and development (R&D) infrastructure, and support the development of a diverse AI workforce.
- AI Risk Management Framework (RMF). (January 2023) In collaboration with the private and public sectors, the National Institute of Standards and Technology (NIST) developed this framework to better manage risks—to individuals, organizations, and society—that are uniquely associated with AI. The NIST AI RMF, intended for voluntary use, aims to improve the ability to incorporate trustworthiness considerations into the design, development, use, and evaluation of AI products, services, and systems.
- Blueprint for an AI Bill of Rights. (October 2022) This framework is a set of five principles—identified by the White House Office of Science and Technology Policy—that should guide the design, use, and deployment of automated systems to protect the American public in the age of AI.
- 2021 Final Report of the National Security Commission on Artificial Intelligence. (March 2021) This report presented an integrated national strategy to reorganize the government, reorient the nation, and rally our closest allies and partners to defend and compete in the coming era of AI-accelerated competition and conflict.
- National Artificial Intelligence Initiative (NAII) Act of 2020 (Division E of the National Defense Authorization Act for Fiscal Year 2021). (January 2021) Among other things, this act established direction and authority to coordinate AI research, development, and demonstration activities among civilian agencies, the Department of Defense, and the intelligence community to ensure that each informs the work of the others.
- AI in Government Act of 2020 (Title I of Division U of the Consolidated Appropriations Act, 2021). (December 2020) This act created the AI Center of Excellence within the General Services Administration and directed the Office of Management and Budget (OMB) to issue a memorandum informing federal agencies of policies for acquisition and application of AI and identifying best practices for mitigating risks.
- Department of Homeland Security 2020 Artificial Intelligence Strategy. (December 2020) This strategy set out to enhance DHS’s capability to safeguard the American people, our homeland, and our values through the responsible integration of AI into DHS’s activities and the mitigation of new risks posed by AI.
- EO 13960: Promoting the Use of Trustworthy AI in the Federal Government. (December 2020) This executive order required federal agencies to inventory their AI use cases and share their inventories with other government agencies and the public.
- EO 13859: Maintaining American Leadership in AI. (February 2019) This executive order established federal principles and strategies to strengthen the nation’s capabilities in AI to promote scientific discovery, economic competitiveness, and national security.
See the full scope of AI actions from the Biden-Harris Administration: AI.gov