Federal documents and resources on artificial intelligence
An increasing number of agencies have published valuable guidance on the ethics, risks, and impacts of AI. The resources outlined below informed the development of these guidelines.
- DARPA’s Explainable AI (XAI) program: A retrospective, August 2016. The Explainable AI (XAI) program aims to create a suite of machine learning techniques that produce more explainable models while maintaining a high level of learning performance (prediction accuracy), and that enable human users to understand, appropriately trust, and effectively manage the emerging generation of artificially intelligent partners.
- NIH Strategic Plan for Data Science, June 2018. The NIH’s first Strategic Plan for Data Science provides a roadmap for modernizing the NIH-funded biomedical data science ecosystem.
- EO 13859 Maintaining American Leadership in Artificial Intelligence, February 2019. Executive Order 13859 Maintaining American Leadership in Artificial Intelligence establishes federal principles and strategies to strengthen the nation's capabilities in artificial intelligence (AI) to promote scientific discovery, economic competitiveness, and national security.
- NIST US Leadership in AI: A Plan for Federal Engagement in Developing Technical Standards and Related Tools, August 2019. This plan was prepared in response to EO 13859. It provides guidance on important characteristics of standards to help agencies in their decision-making about AI standards, groups potential agency involvement into four categories (monitoring, participation, influencing, and leading), and offers a series of practical steps for agencies to take as they engage in AI standards.
- NIH Report of the Advisory Committee to the Director AI WG, December 2019. This report contains a set of recommendations on how the NIH can best ensure the responsible use of machine learning to advance biomedical research and global health.
- EO 13960 Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government, December 2020. Executive Order 13960 Promoting the Use of Trustworthy AI in the Federal Government establishes principles for the use of AI in the Federal Government, establishes a common policy for implementing the principles, directs agencies to catalogue their AI use cases, and calls on the General Services Administration (GSA) and the Office of Personnel Management (OPM) to enhance AI implementation expertise at the agencies.
- VA Artificial Intelligence Strategies and Synergy in the Federal Space, 2021. This paper analyzes departmental strategies and provides an overview of themes that invite and ease cross-agency collaboration.
- HHS Artificial Intelligence (AI) Strategy, January 2021. This AI strategy establishes an approach and focus areas to encourage and enable familiarity, comfort, and fluency with AI technology and its potential (AI adoption); the application of best practices and lessons learned from piloting and implementing AI capabilities to additional domains and use cases across HHS (AI scaling); and an increase in the speed at which HHS adopts and scales AI (AI acceleration).
- GAO Artificial Intelligence: An Accountability Framework for Federal Agencies and Other Entities, June 2021. This report describes an accountability framework for AI, centered on the principles of governance, data, performance, and monitoring, and identifies key practices for federal agencies and other entities that are considering and implementing AI systems. Each practice includes a set of questions for entities, auditors, and third-party assessors to consider, along with audit procedures and types of evidence for auditors and third-party assessors to collect.
- NSF/OSTP National Artificial Intelligence Research Resource, June 2021. The National AI Initiative Act of 2020 called for the National Science Foundation (NSF), in coordination with the White House Office of Science and Technology Policy (OSTP), to form a National AI Research Resource (NAIRR) Task Force to investigate the feasibility of establishing a NAIRR and develop a roadmap detailing how such a resource could be established and sustained.
- US Department of Veterans Affairs Artificial Intelligence (AI) Strategy, July 2021. The AI Strategy formalizes the vision for how the Department of Veterans Affairs (VA) will develop, use, and deploy AI capabilities, as informed by the National Defense Authorization Act (NDAA).
- FDA Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD) Action Plan, September 2021. This Action Plan is a direct response to stakeholder feedback on the April 2019 discussion paper, “Proposed Regulatory Framework for Modifications to Artificial Intelligence/Machine Learning-Based Software as a Medical Device,” and outlines five actions the U.S. Food and Drug Administration (FDA) intends to take.
- HHS Trustworthy AI (TAI) Playbook, September 2021. The TAI Playbook is designed to support leaders across the Department in applying TAI principles. It outlines the core components of TAI and helps identify actions to take for different types of AI solutions.
- FDA Good Machine Learning Practice for Medical Device Development: Guiding Principles, October 2021. The FDA, Health Canada, and the United Kingdom’s Medicines and Healthcare products Regulatory Agency (MHRA) have jointly identified 10 guiding principles to inform the development of Good Machine Learning Practice (GMLP) and promote safe, effective, and high-quality medical devices that use artificial intelligence and machine learning (AI/ML).
- CMS AI Playbook version 2.0, October 2022. This document discusses the principles that enable scalable AI, with a focus on research and development (R&D) and innovation; how to apply these principles in an organization; an operating model that allows for rapid iteration, application of lessons learned, and measurement of the impact of change on an organization; and the phases of adoption and the key organizational constructs to stand up for effective agency-wide rollout.
- OSTP Blueprint for an AI Bill of Rights, October 2022. The Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People was published by the White House Office of Science and Technology Policy in October 2022. This framework was released one year after OSTP announced the launch of a process to develop “a bill of rights for an AI-powered world.”
- NIST Artificial Intelligence Risk Management Framework (RMF), January 2023. As directed by the National Artificial Intelligence Initiative Act of 2020 (P.L. 116-283), the goal of the AI RMF is to offer organizations designing, developing, deploying, or using AI systems a resource to help them manage the many risks of AI and promote trustworthy and responsible development and use of AI systems.
- Health Data, Technology, and Interoperability: Certification Program Updates, Algorithm Transparency, and Information Sharing (HTI-1) Proposed Rule, April 2023. The Office of the National Coordinator for Health Information Technology (ONC)'s HTI-1 proposed rule seeks to implement provisions of the 21st Century Cures Act and make updates to the ONC Health IT Certification Program (Certification Program) with new and updated standards, implementation specifications, and certification criteria. Implementation of the proposed rule’s provisions will advance interoperability, improve transparency, and support the access, exchange, and use of electronic health information.
- The Department of Veterans Affairs Establishes a Trustworthy AI Framework, July 2023. This AI framework serves as a reference document for ensuring VA satisfies EO 13960 consistency requirements while striving to incorporate other trustworthy AI frameworks impacting or informing VA’s mission; a foundation for implementation activities to ensure consistency with EO 13960 Section 8, as coordinated by the VA RAIO and the VA Data Governance Council; and an agency-wide consensus statement on VA’s trustworthy AI values.
- Senate HELP Committee: Exploring Congress’ Framework for the Future of AI, September 2023. Senator Bill Cassidy (R-LA), ranking member of the Senate Health, Education, Labor, and Pensions (HELP) Committee, released a white paper on artificial intelligence (AI) and the technology’s potential benefits and risks to society.
- The World Health Organization's Regulatory Considerations on Artificial Intelligence for Health, October 2023. The World Health Organization (WHO) released a new publication listing key regulatory considerations on artificial intelligence (AI) for health. The publication emphasizes the importance of establishing AI systems’ safety and effectiveness, rapidly making appropriate systems available to those who need them, and fostering dialogue among stakeholders, including developers, regulators, manufacturers, health workers, and patients.