[Congressional Bills 118th Congress]
[From the U.S. Government Publishing Office]
[S. 4178 Introduced in Senate (IS)]

<DOC>

118th CONGRESS
  2d Session

                                S. 4178

To establish artificial intelligence standards, metrics, and evaluation tools, to support artificial intelligence research, development, and capacity building activities, to promote innovation in the artificial intelligence industry by ensuring companies of all sizes can succeed and thrive, and for other purposes.

_______________________________________________________________________

                   IN THE SENATE OF THE UNITED STATES

                             April 18, 2024

Ms. Cantwell (for herself, Mr. Young, Mr. Hickenlooper, and Mrs. Blackburn) introduced the following bill; which was read twice and referred to the Committee on Commerce, Science, and Transportation

_______________________________________________________________________

                                 A BILL

To establish artificial intelligence standards, metrics, and evaluation tools, to support artificial intelligence research, development, and capacity building activities, to promote innovation in the artificial intelligence industry by ensuring companies of all sizes can succeed and thrive, and for other purposes.

    Be it enacted by the Senate and House of Representatives of the United States of America in Congress assembled,

SECTION 1. SHORT TITLE; TABLE OF CONTENTS.

    (a) Short Title.--This Act may be cited as the ``Future of Artificial Intelligence Innovation Act of 2024''.
    (b) Table of Contents.--The table of contents for this Act is as follows:

Sec. 1. Short title; table of contents.
Sec. 2. Sense of Congress.
Sec. 3. Definitions.

TITLE I--VOLUNTARY ARTIFICIAL INTELLIGENCE STANDARDS, METRICS, EVALUATION TOOLS, TESTBEDS, AND INTERNATIONAL COOPERATION

Subtitle A--Artificial Intelligence Safety Institute and Testbeds

Sec. 101. Artificial Intelligence Safety Institute.
Sec. 102. Program on artificial intelligence testbeds.
Sec. 103. National Institute of Standards and Technology and Department of Energy testbed to identify, test, and synthesize new materials.
Sec. 104. National Science Foundation and Department of Energy collaboration to make scientific discoveries through the use of artificial intelligence.
Sec. 105. Progress report.

Subtitle B--International Cooperation

Sec. 111. International coalition on innovation, development, and harmonization of standards with respect to artificial intelligence.
Sec. 112. Requirement to support bilateral and multilateral artificial intelligence research collaborations.

Subtitle C--Identifying Regulatory Barriers to Innovation

Sec. 121. Comptroller General of the United States identification of risks and obstacles relating to artificial intelligence and Federal agencies.

TITLE II--ARTIFICIAL INTELLIGENCE RESEARCH, DEVELOPMENT, CAPACITY BUILDING ACTIVITIES

Sec. 201. Public data for artificial intelligence systems.
Sec. 202. Federal grand challenges in artificial intelligence.

SEC. 2. SENSE OF CONGRESS.

    It is the sense of Congress that policies governing artificial intelligence should maximize the potential and development of artificial intelligence to benefit all private and public stakeholders.

SEC. 3. DEFINITIONS.

    In this Act:
    (1) Agency.--The term ``agency'' has the meaning given such term in section 3502 of title 44, United States Code, except such term shall include an independent regulatory agency, as defined in such section.
    (2) Artificial intelligence.--The term ``artificial intelligence'' has the meaning given such term in section 5002 of the National Artificial Intelligence Initiative Act of 2020 (15 U.S.C. 9401).
    (3) Artificial intelligence blue-teaming.--The term ``artificial intelligence blue-teaming'' means an effort to conduct operational network vulnerability evaluations and provide mitigation techniques to entities who have a need for an independent technical review of the network security posture of an artificial intelligence system.
    (4) Artificial intelligence model.--The term ``artificial intelligence model'' means a component of an artificial intelligence system that is a model--
        (A) derived using mathematical, computational, statistical, or machine-learning techniques; and
        (B) used as part of an artificial intelligence system to produce outputs from a given set of inputs.
    (5) Artificial intelligence red-teaming.--The term ``artificial intelligence red-teaming'' means structured adversarial testing efforts of an artificial intelligence system to identify risks, flaws, and vulnerabilities of the artificial intelligence system, such as harmful outputs from the system, unforeseen or undesirable system behaviors, limitations, or potential risks associated with the misuse of the system.
    (6) Artificial intelligence risk management framework.--The term ``Artificial Intelligence Risk Management Framework'' means the most recently updated version of the framework developed and updated pursuant to section 22A(c) of the National Institute of Standards and Technology Act (15 U.S.C. 278h-1(c)).
    (7) Artificial intelligence system.--The term ``artificial intelligence system'' has the meaning given such term in section 7223 of the Advancing American AI Act (40 U.S.C. 11301 note).
    (8) Critical infrastructure.--The term ``critical infrastructure'' has the meaning given such term in section 1016(e) of the Uniting and Strengthening America by Providing Appropriate Tools Required to Intercept and Obstruct Terrorism (USA PATRIOT ACT) Act of 2001 (42 U.S.C. 5195c(e)).
    (9) Federal laboratory.--The term ``Federal laboratory'' has the meaning given such term in section 4 of the Stevenson-Wydler Technology Innovation Act of 1980 (15 U.S.C. 3703).
    (10) Foundation model.--The term ``foundation model'' means an artificial intelligence model trained on broad data at scale and adaptable to a wide range of downstream tasks.
    (11) Generative artificial intelligence.--The term ``generative artificial intelligence'' means the class of artificial intelligence models that utilize the structure and characteristics of input data in order to generate outputs in the form of derived synthetic content. Such derived synthetic content can include images, videos, audio, text, software, code, and other digital content.
    (12) National laboratory.--The term ``National Laboratory'' has the meaning given such term in section 2 of the Energy Policy Act of 2005 (42 U.S.C. 15801).
    (13) Synthetic content.--The term ``synthetic content'' means information, such as images, videos, audio clips, and text, that has been significantly modified or generated by algorithms, including by artificial intelligence.
    (14) Testbed.--The term ``testbed'' means a facility or mechanism equipped for conducting rigorous, transparent, and replicable testing of tools and technologies, including artificial intelligence systems, to help evaluate the functionality, trustworthiness, usability, and performance of those tools or technologies.
    (15) TEVV.--The term ``TEVV'' means methodologies, metrics, techniques, and tasks for testing, evaluating, verifying, and validating artificial intelligence systems or components.
    (16) Watermarking.--The term ``watermarking'' means the act of embedding information that is intended to be difficult to remove, into outputs generated by artificial intelligence, including outputs such as text, images, audio, videos, software code, or any other digital content or data, for the purposes of verifying the authenticity of the output or the identity or characteristics of its provenance, modifications, or conveyance.

TITLE I--VOLUNTARY ARTIFICIAL INTELLIGENCE STANDARDS, METRICS, EVALUATION TOOLS, TESTBEDS, AND INTERNATIONAL COOPERATION

Subtitle A--Artificial Intelligence Safety Institute and Testbeds

SEC. 101. ARTIFICIAL INTELLIGENCE SAFETY INSTITUTE.
    (a) Establishment of Institute.--
        (1) In general.--Not later than 1 year after the date of the enactment of this Act, the Under Secretary of Commerce for Standards and Technology (in this section referred to as the ``Under Secretary'') shall establish an institute on artificial intelligence.
        (2) Designation.--The institute established pursuant to paragraph (1) shall be known as the ``Artificial Intelligence Safety Institute'' (in this section referred to as the ``Institute'').
        (3) Mission.--The mission of the Institute is as follows:
            (A) To assist the private sector and agencies in developing voluntary best practices for the robust assessment of artificial intelligence systems.
            (B) To provide technical assistance for the adoption and use of artificial intelligence across the Federal Government to improve the quality of government services.
            (C) To develop guidelines, methodologies, and best practices to promote--
                (i) development and adoption of voluntary, consensus-based technical standards or industry standards;
                (ii) long-term advancements in artificial intelligence technologies; and
                (iii) innovation in the artificial intelligence industry by ensuring that companies of all sizes can succeed and thrive.
    (b) Director.--The Under Secretary shall appoint a director of the Institute, who shall be known as the ``Director of the Artificial Intelligence Safety Institute'' (in this section referred to as the ``Director'') and report directly to the Under Secretary.
    (c) Staff and Authorities.--
        (1) Staff.--The Director may hire such full-time employees as the Director considers appropriate to assist the Director in carrying out the functions of the Institute.
        (2) Use of authority to hire critical technical experts.--In addition to making appointments under paragraph (1) of this subsection, the Director, in coordination with the Secretary of Commerce, may make appointments of scientific, engineering, and professional personnel, and fix their basic pay, under subsection (b) of section 6 of the National Institute of Standards and Technology Act (15 U.S.C. 275) to hire critical technical experts.
        (3) Expansion of authority to hire critical technical experts.--Such subsection is amended, in the second sentence, by striking ``15'' and inserting ``30''.
        (4) Modification of sunset.--Subsection (c) of such section is amended by striking ``the date that is 5 years after the date of the enactment of this section'' and inserting ``December 30, 2035''.
        (5) Agreements.--The Director may enter into such agreements, including contracts, grants, cooperative agreements, and other transactions, as the Director considers necessary to carry out the functions of the Institute and on such terms as the Under Secretary considers appropriate.
    (d) Consultation and Coordination.--In establishing the Institute, the Under Secretary shall--
        (1) coordinate with--
            (A) the Secretary of Energy;
            (B) the Secretary of Homeland Security;
            (C) the Secretary of Defense;
            (D) the Director of the National Science Foundation; and
            (E) the Director of the Office of Science and Technology Policy; and
        (2) consult with the heads of such other Federal agencies as the Under Secretary considers appropriate.
    (e) Functions.--The functions of the Institute, which the Institute shall carry out in coordination with the laboratories of the National Institute of Standards and Technology, are as follows:
        (1) Research, evaluation, testing, and standards.--The following functions relating to research, evaluation, testing, and standards:
            (A) Conducting measurement research into system and model safety, validity and reliability, security, capabilities and limitations, explainability, interpretability, and privacy.
            (B) Working with the Department of Energy, the National Science Foundation, public-private partnerships, including the Artificial Intelligence Safety Institute Consortium established under subsection (f), and other private sector organizations to develop testing environments and perform regular benchmarking and capability evaluations, including artificial intelligence red-teaming as the Director considers appropriate.
            (C) Working with consensus-based, open, and transparent standards development organizations (SDOs) and relevant industry, Federal laboratories, civil society, and academic institutions to advance development and adoption of clear, implementable, technically sound, and technology-neutral voluntary standards and guidelines that incorporate appropriate variations in approach depending on the size of the entity, the potential risks and potential benefits of the artificial intelligence system, and the role of the entity (such as developer, deployer, or user) relating to artificial intelligence systems.
            (D) Building upon the Artificial Intelligence Risk Management Framework to incorporate guidelines on generative artificial intelligence systems.
            (E) Developing a companion resource to the Secure Software Development Framework to incorporate secure development practices for generative artificial intelligence and for foundation models.
            (F) Developing and publishing cybersecurity tools, methodologies, best practices, voluntary guidelines, and other supporting information to assist persons who maintain systems used to create or train artificial intelligence models to discover and mitigate vulnerabilities and attacks.
            (G) Coordinating or developing guidelines, metrics, benchmarks, and methodologies for evaluating artificial intelligence systems, including the following:
                (i) Cataloging existing artificial intelligence metrics, benchmarks, and evaluation methodologies used in industry and academia.
                (ii) Testing and validating the efficacy of existing metrics, benchmarks, and evaluations, as well as TEVV tools and products.
                (iii) Funding and facilitating research and other activities in a transparent manner, including at institutions of higher education and other nonprofit and private sector partners, to evaluate, develop, or improve TEVV capabilities, with rigorous scientific merit, for artificial intelligence systems.
                (iv) Evaluating foundation models for their potential effect in downstream systems, such as when retrained or fine-tuned.
            (H) Coordinating with counterpart institutions of international partners and allies to promote global interoperability in the development of research, evaluation, testing, and standards relating to artificial intelligence.
            (I) Developing tools, methodologies, best practices, and voluntary guidelines for identifying vulnerabilities in foundation models.
            (J) Developing tools, methodologies, best practices, and voluntary guidelines for relevant agencies to track incidents resulting in harm caused by artificial intelligence systems.
        (2) Implementation.--The following functions relating to implementation:
            (A) Using publicly available and voluntarily provided information, conducting evaluations to assess the impacts of artificial intelligence systems, and developing guidelines and practices for safe development, deployment, and use of artificial intelligence technology.
            (B) Aligning capability evaluation and red-teaming guidelines and benchmarks, sharing best practices, and coordinating on building testbeds and test environments with allies of the United States and international partners.
            (C) Coordinating vulnerability and incident data sharing with international partners and allies.
            (D) Integrating appropriate testing capabilities and infrastructure for testing of models and systems.
            (E) Establishing blue-teaming capabilities to develop mitigation approaches and partner with industry to address risks and negative impacts.
            (F) Developing voluntary guidelines on--
                (i) detecting synthetic content, authenticating content and tracking of the provenance of content, labeling original and synthetic content, such as by watermarking, and evaluating software and systems relating to detection and labeling of synthetic content;
                (ii) ensuring artificial i