[Congressional Bills 118th Congress]
[From the U.S. Government Publishing Office]
[S. 4178 Reported in Senate (RS)]
<DOC>
Calendar No. 725
118th CONGRESS
2d Session
S. 4178
To establish artificial intelligence standards, metrics, and evaluation
tools, to support artificial intelligence research, development, and
capacity building activities, to promote innovation in the artificial
intelligence industry by ensuring companies of all sizes can succeed
and thrive, and for other purposes.
_______________________________________________________________________
IN THE SENATE OF THE UNITED STATES
April 18, 2024
Ms. Cantwell (for herself, Mr. Young, Mr. Hickenlooper, Mrs. Blackburn,
Mr. Wicker, Mr. Lujan, Ms. Sinema, Mr. Rounds, and Mr. Schumer)
introduced the following bill; which was read twice and referred to the
Committee on Commerce, Science, and Transportation
December 18 (legislative day, December 16), 2024
Reported by Ms. Cantwell, with an amendment
[Strike out all after the enacting clause and insert the part printed
in italic]
_______________________________________________________________________
A BILL
To establish artificial intelligence standards, metrics, and evaluation
tools, to support artificial intelligence research, development, and
capacity building activities, to promote innovation in the artificial
intelligence industry by ensuring companies of all sizes can succeed
and thrive, and for other purposes.
Be it enacted by the Senate and House of Representatives of the
United States of America in Congress assembled,
<DELETED>SECTION 1. SHORT TITLE; TABLE OF CONTENTS.</DELETED>
<DELETED> (a) Short Title.--This Act may be cited as the ``Future of
Artificial Intelligence Innovation Act of 2024''.</DELETED>
<DELETED> (b) Table of Contents.--The table of contents for this Act
is as follows:</DELETED>
<DELETED>Sec. 1. Short title; table of contents.
<DELETED>Sec. 2. Sense of Congress.
<DELETED>Sec. 3. Definitions.
<DELETED>TITLE I--VOLUNTARY ARTIFICIAL INTELLIGENCE STANDARDS, METRICS,
EVALUATION TOOLS, TESTBEDS, AND INTERNATIONAL COOPERATION
<DELETED>Subtitle A--Artificial Intelligence Safety Institute and
Testbeds
<DELETED>Sec. 101. Artificial Intelligence Safety Institute.
<DELETED>Sec. 102. Program on artificial intelligence testbeds.
<DELETED>Sec. 103. National Institute of Standards and Technology and
Department of Energy testbed to identify,
test, and synthesize new materials.
<DELETED>Sec. 104. National Science Foundation and Department of Energy
collaboration to make scientific
discoveries through the use of artificial
intelligence.
<DELETED>Sec. 105. Progress report.
<DELETED>Subtitle B--International Cooperation
<DELETED>Sec. 111. International coalition on innovation, development,
and harmonization of standards with respect
to artificial intelligence.
<DELETED>Sec. 112. Requirement to support bilateral and multilateral
artificial intelligence research
collaborations.
<DELETED>Subtitle C--Identifying Regulatory Barriers to Innovation
<DELETED>Sec. 121. Comptroller General of the United States
identification of risks and obstacles
relating to artificial intelligence and
Federal agencies.
<DELETED>TITLE II--ARTIFICIAL INTELLIGENCE RESEARCH, DEVELOPMENT,
CAPACITY BUILDING ACTIVITIES
<DELETED>Sec. 201. Public data for artificial intelligence systems.
<DELETED>Sec. 202. Federal grand challenges in artificial intelligence.
<DELETED>SEC. 2. SENSE OF CONGRESS.</DELETED>
<DELETED> It is the sense of Congress that policies governing
artificial intelligence should maximize the potential and development
of artificial intelligence to benefit all private and public
stakeholders.</DELETED>
<DELETED>SEC. 3. DEFINITIONS.</DELETED>
<DELETED> In this Act:</DELETED>
<DELETED> (1) Agency.--The term ``agency'' has the meaning
given such term in section 3502 of title 44, United States
Code, except such term shall include an independent regulatory
agency, as defined in such section.</DELETED>
<DELETED> (2) Artificial intelligence.--The term
``artificial intelligence'' has the meaning given such term in
section 5002 of the National Artificial Intelligence Initiative
Act of 2020 (15 U.S.C. 9401).</DELETED>
<DELETED> (3) Artificial intelligence blue-teaming.--The
term ``artificial intelligence blue-teaming'' means an effort
to conduct operational network vulnerability evaluations and
provide mitigation techniques to entities who have a need for
an independent technical review of the network security posture
of an artificial intelligence system.</DELETED>
<DELETED> (4) Artificial intelligence model.--The term
``artificial intelligence model'' means a component of an
artificial intelligence system that is a model--</DELETED>
<DELETED> (A) derived using mathematical,
computational, statistical, or machine-learning
techniques; and</DELETED>
<DELETED> (B) used as part of an artificial
intelligence system to produce outputs from a given set
of inputs.</DELETED>
<DELETED> (5) Artificial intelligence red-teaming.--The term
``artificial intelligence red-teaming'' means structured
adversarial testing efforts of an artificial intelligence
system to identify risks, flaws, and vulnerabilities of the
artificial intelligence system, such as harmful outputs from
the system, unforeseen or undesirable system behaviors,
limitations, or potential risks associated with the misuse of
the system.</DELETED>
<DELETED> (6) Artificial intelligence risk management
framework.--The term ``Artificial Intelligence Risk Management
Framework'' means the most recently updated version of the
framework developed and updated pursuant to section 22A(c) of
the National Institute of Standards and Technology Act (15
U.S.C. 278h-1(c)).</DELETED>
<DELETED> (7) Artificial intelligence system.--The term
``artificial intelligence system'' has the meaning given such
term in section 7223 of the Advancing American AI Act (40
U.S.C. 11301 note).</DELETED>
<DELETED> (8) Critical infrastructure.--The term ``critical
infrastructure'' has the meaning given such term in section
1016(e) of the Uniting and Strengthening America by Providing
Appropriate Tools Required to Intercept and Obstruct Terrorism
(USA PATRIOT ACT) Act of 2001 (42 U.S.C. 5195c(e)).</DELETED>
<DELETED> (9) Federal laboratory.--The term ``Federal
laboratory'' has the meaning given such term in section 4 of
the Stevenson-Wydler Technology Innovation Act of 1980 (15
U.S.C. 3703).</DELETED>
<DELETED> (10) Foundation model.--The term ``foundation
model'' means an artificial intelligence model trained on broad
data at scale and is adaptable to a wide range of downstream
tasks.</DELETED>
<DELETED> (11) Generative artificial intelligence.--The term
``generative artificial intelligence'' means the class of
artificial intelligence models that utilize the structure and
characteristics of input data in order to generate outputs in
the form of derived synthetic content. Such derived synthetic
content can include images, videos, audio, text, software,
code, and other digital content.</DELETED>
<DELETED> (12) National laboratory.--The term ``National
Laboratory'' has the meaning given such term in section 2 of
the Energy Policy Act of 2005 (42 U.S.C. 15801).</DELETED>
<DELETED> (13) Synthetic content.--The term ``synthetic
content'' means information, such as images, videos, audio
clips, and text, that has been significantly modified or
generated by algorithms, including by artificial
intelligence.</DELETED>
<DELETED> (14) Testbed.--The term ``testbed'' means a
facility or mechanism equipped for conducting rigorous,
transparent, and replicable testing of tools and technologies,
including artificial intelligence systems, to help evaluate the
functionality, trustworthiness, usability, and performance of
those tools or technologies.</DELETED>
<DELETED> (15) TEVV.--The term ``TEVV'' means methodologies,
metrics, techniques, and tasks for testing, evaluating,
verifying, and validating artificial intelligence systems or
components.</DELETED>
<DELETED> (16) Watermarking.--The term ``watermarking''
means the act of embedding information that is intended to be
difficult to remove, into outputs generated by artificial
intelligence, including outputs such as text, images, audio,
videos, software code, or any other digital content or data,
for the purposes of verifying the authenticity of the output or
the identity or characteristics of its provenance,
modifications, or conveyance.</DELETED>
<DELETED>TITLE I--VOLUNTARY ARTIFICIAL INTELLIGENCE STANDARDS, METRICS,
EVALUATION TOOLS, TESTBEDS, AND INTERNATIONAL COOPERATION</DELETED>
<DELETED>Subtitle A--Artificial Intelligence Safety Institute and
Testbeds</DELETED>
<DELETED>SEC. 101. ARTIFICIAL INTELLIGENCE SAFETY INSTITUTE.</DELETED>
<DELETED> (a) Establishment of Institute.--</DELETED>
<DELETED> (1) In general.--Not later than 1 year after the
date of the enactment of this Act, the Under Secretary of
Commerce for Standards and Technology (in this section referred
to as the ``Under Secretary'') shall establish an institute on
artificial intelligence.</DELETED>
<DELETED> (2) Designation.--The institute established
pursuant to paragraph (1) shall be known as the ``Artificial
Intelligence Safety Institute'' (in this section referred to as
the ``Institute'').</DELETED>
<DELETED> (3) Mission.--The mission of the Institute is as
follows:</DELETED>
<DELETED> (A) To assist the private sector and
agencies in developing voluntary best practices for the
robust assessment of artificial intelligence
systems.</DELETED>
<DELETED> (B) To provide technical assistance for
the adoption and use of artificial intelligence across
the Federal Government to improve the quality of
government services.</DELETED>
<DELETED> (C) To develop guidelines, methodologies,
and best practices to promote--</DELETED>
<DELETED> (i) development and adoption of
voluntary, consensus-based technical standards
or industry standards;</DELETED>
<DELETED> (ii) long-term advancements in
artificial intelligence technologies;
and</DELETED>
<DELETED> (iii) innovation in the artificial
intelligence industry by ensuring that
companies of all sizes can succeed and
thrive.</DELETED>
<DELETED> (b) Director.--The Under Secretary shall appoint a
director of the Institute, who shall be known as the ``Director of the
Artificial Intelligence Safety Institute'' (in this section referred to
as the ``Director'') and report directly to the Under
Secretary.</DELETED>
<DELETED> (c) Staff and Authorities.--</DELETED>
<DELETED> (1) Staff.--The Director may hire such full-time
employees as the Director considers appropriate to assist the
Director in carrying out the functions of the
Institute.</DELETED>
<DELETED> (2) Use of authority to hire critical technical
experts.--In addition to making appointments under paragraph
(1) of this subsection, the Director, in coordination with the
Secretary of Commerce, may make appointments of scientific,
engineering, and professional personnel, and fix their basic
pay, under subsection (b) of section 6 of the National
Institute of Standards and Technology Act (15 U.S.C. 275) to
hire critical technical experts.</DELETED>
<DELETED> (3) Expansion of authority to hire critical
technical experts.--Such subsection is amended, in the second
sentence, by striking ``15'' and inserting ``30''.</DELETED>
<DELETED> (4) Modification of sunset.--Subsection (c) of
such section is amended by striking ``the date that is 5 years
after the date of the enactment of this section'' and inserting
``December 30, 2035''.</DELETED>
<DELETED> (5) Agreements.--The Director may enter into such
agreements, including contracts, grants, cooperative
agreements, and other transactions, as the Director considers
necessary to carry out the functions of the Institute and on
such terms as the Under Secretary considers
appropriate.</DELETED>
<DELETED> (d) Consultation and Coordination.--In establishing the
Institute, the Under Secretary shall--</DELETED>
<DELETED> (1) coordinate with--</DELETED>
<DELETED> (A) the Secretary of Energy;</DELETED>
<DELETED> (B) the Secretary of Homeland
Security;</DELETED>
<DELETED> (C) the Secretary of Defense;</DELETED>
<DELETED> (D) the Director of the National Science
Foundation; and</DELETED>
<DELETED> (E) the Director of the Office of Science
and Technology Policy; and</DELETED>
<DELETED> (2) consult with the heads of such other Federal
agencies as the Under Secretary considers
appropriate.</DELETED>
<DELETED> (e) Functions.--The functions of the Institute, which the
Institute shall carry out in coordination with the laboratories of the
National Institute of Standards and Technology, are as
follows:</DELETED>
<DELETED> (1) Research, evaluation, testing, and
standards.--The following functions relating to research,
evaluation, testing, and standards:</DELETED>
<DELETED> (A) Conducting measurement research into
system and model safety, validity and reliability,
security, capabilities and limitations, explainability,
interpretability, and privacy.</DELETED>
<DELETED> (B) Working with the Department of Energy,
the National Science Foundation, public-private
partnerships, including the Artificial Intelligence
Safety Institute Consortium established under
subsection (f), and other private sector organizations
to develop testing environments and perform regular
benchmarking and capability evaluations, including
artificial intelligence red-teaming as the Director
considers appropriate.</DELETED>
<DELETED> (C) Working with consensus-based, open,
and transparent standards development organizations
(SDOs) and relevant industry, Federal laboratories,
civil society, and academic institutions to advance
development and adoption of clear, implementable,
technically sound, and technology-neutral voluntary
standards and guidelines that incorporate appropriate
variations in approach depending on the size of the
entity, the potential risks and potential benefits of
the artificial intelligence system, and the role of the
entity (such as developer, deployer, or user) relating
to artificial intelligence systems.</DELETED>
<DELETED> (D) Building upon the Artificial
Intelligence Risk Management Framework to incorporate
guidelines on generative artificial intelligence
systems.</DELETED>
<DELETED> (E) Developing a companion resource to the
Secure Software Development Framework to incorporate
secure development practices for generative artificial
intelligence and for foundation models.</DELETED>
<DELETED> (F) Developing and publishing
cybersecurity tools, methodologies, best practices,
voluntary guidelines, and other supporting information
to assist persons who maintain systems used to create
or train artificial intelligence models to discover and
mitigate vulnerabilities and attacks.</DELETED>
<DELETED> (G) Coordinating or developing guidelines,
metrics, benchmarks, and methodologies for evaluating
artificial intelligence systems, including the
following:</DELETED>
<DELETED> (i) Cataloging existing artificial
intelligence metrics, benchmarks, and
evaluation methodologies used in industry and
academia.</DELETED>
<DELETED> (ii) Testing and validating the
efficacy of existing metrics, benchmarks, and
evaluations, as well as TEVV tools and
products.</DELETED>
<DELETED> (iii) Funding and facilitating
research and other activities in a transparent
manner, including at institutions of higher
education