[Congressional Bills 118th Congress]
[From the U.S. Government Publishing Office]
[S. 3312 Reported in Senate (RS)]

<DOC>





                                                       Calendar No. 723
118th CONGRESS
  2d Session
                                S. 3312

   To provide a framework for artificial intelligence innovation and 
                accountability, and for other purposes.


_______________________________________________________________________


                   IN THE SENATE OF THE UNITED STATES

                           November 15, 2023

 Mr. Thune (for himself, Ms. Klobuchar, Mr. Wicker, Mr. Hickenlooper, 
  Mr. Lujan, Mrs. Capito, Ms. Baldwin, and Ms. Lummis) introduced the 
 following bill; which was read twice and referred to the Committee on 
                 Commerce, Science, and Transportation

            December 18 (legislative day, December 16), 2024

              Reported by Ms. Cantwell, with an amendment
 [Strike out all after the enacting clause and insert the part printed 
                               in italic]

_______________________________________________________________________

                                 A BILL


 
   To provide a framework for artificial intelligence innovation and 
                accountability, and for other purposes.

    Be it enacted by the Senate and House of Representatives of the 
United States of America in Congress assembled,

<DELETED>SECTION 1. SHORT TITLE.</DELETED>

<DELETED>    This Act may be cited as the ``Artificial Intelligence 
Research, Innovation, and Accountability Act of 2023''.</DELETED>

<DELETED>SEC. 2. TABLE OF CONTENTS.</DELETED>

<DELETED>    The table of contents for this Act is as 
follows:</DELETED>

<DELETED>Sec. 1. Short title.
<DELETED>Sec. 2. Table of contents.
   <DELETED>TITLE I--ARTIFICIAL INTELLIGENCE RESEARCH AND INNOVATION

<DELETED>Sec. 101. Open data policy amendments.
<DELETED>Sec. 102. Online content authenticity and provenance standards 
                            research and development.
<DELETED>Sec. 103. Standards for detection of emergent and anomalous 
                            behavior and AI-generated media.
<DELETED>Sec. 104. Comptroller General study on barriers and best 
                            practices to usage of AI in government.
       <DELETED>TITLE II--ARTIFICIAL INTELLIGENCE ACCOUNTABILITY

<DELETED>Sec. 201. Definitions.
<DELETED>Sec. 202. Generative artificial intelligence transparency.
<DELETED>Sec. 203. Transparency reports for high-impact artificial 
                            intelligence systems.
<DELETED>Sec. 204. Recommendations to Federal agencies for risk 
                            management of high-impact artificial 
                            intelligence systems.
<DELETED>Sec. 205. Office of Management and Budget oversight of 
                            recommendations to agencies.
<DELETED>Sec. 206. Risk management assessment for critical-impact 
                            artificial intelligence systems.
<DELETED>Sec. 207. Certification of critical-impact artificial 
                            intelligence systems.
<DELETED>Sec. 208. Enforcement.
<DELETED>Sec. 209. Artificial intelligence consumer education.

        <DELETED>TITLE I--ARTIFICIAL INTELLIGENCE RESEARCH AND 
                          INNOVATION</DELETED>

<DELETED>SEC. 101. OPEN DATA POLICY AMENDMENTS.</DELETED>

<DELETED>    Section 3502 of title 44, United States Code, is amended--
</DELETED>
        <DELETED>    (1) in paragraph (22)--</DELETED>
                <DELETED>    (A) by inserting ``or data model'' after 
                ``a data asset''; and</DELETED>
                <DELETED>    (B) by striking ``and'' at the 
                end;</DELETED>
        <DELETED>    (2) in paragraph (23), by striking the period at 
        the end and inserting a semicolon; and</DELETED>
        <DELETED>    (3) by adding at the end the following:</DELETED>
        <DELETED>    ``(24) the term `data model' means a mathematical, 
        economic, or statistical representation of a system or process 
        used to assist in making calculations and predictions, 
        including through the use of algorithms, computer programs, or 
        artificial intelligence systems; and</DELETED>
        <DELETED>    ``(25) the term `artificial intelligence system' 
        means an engineered system that--</DELETED>
                <DELETED>    ``(A) generates outputs, such as content, 
                predictions, recommendations, or decisions for a given 
                set of objectives; and</DELETED>
                <DELETED>    ``(B) is designed to operate with varying 
                levels of adaptability and autonomy using machine and 
                human-based inputs.''.</DELETED>

<DELETED>SEC. 102. ONLINE CONTENT AUTHENTICITY AND PROVENANCE STANDARDS 
              RESEARCH AND DEVELOPMENT.</DELETED>

<DELETED>    (a) Research.--</DELETED>
        <DELETED>    (1) In general.--Not later than 180 days after the 
        date of the enactment of this Act, the Under Secretary of 
        Commerce for Standards and Technology shall carry out research 
        to facilitate the development and standardization of means to 
        provide authenticity and provenance information for content 
        generated by human authors and artificial intelligence 
        systems.</DELETED>
        <DELETED>    (2) Elements.--The research carried out pursuant 
        to paragraph (1) shall cover the following:</DELETED>
                <DELETED>    (A) Secure and binding methods for human 
                authors of content to append statements of provenance 
                through the use of unique credentials, watermarking, or 
                other data or metadata-based approaches.</DELETED>
                <DELETED>    (B) Methods for the verification of 
                statements of content provenance to ensure authenticity 
                such as watermarking or classifiers, which are trained 
                models that distinguish artificial intelligence-
                generated media.</DELETED>
                <DELETED>    (C) Methods for displaying clear and 
                conspicuous statements of content provenance to the end 
                user.</DELETED>
                <DELETED>    (D) Technologies or applications needed to 
                facilitate the creation and verification of content 
                provenance information.</DELETED>
                <DELETED>    (E) Mechanisms to ensure that any 
                technologies and methods developed under this section 
                are minimally burdensome on content 
                producers.</DELETED>
                <DELETED>    (F) Such other related processes, 
                technologies, or applications as the Under Secretary 
                considers appropriate.</DELETED>
                <DELETED>    (G) Use of provenance technology to enable 
                attribution for content creators.</DELETED>
        <DELETED>    (3) Implementation.--The Under Secretary shall 
        carry out the research required by paragraph (1) as part of the 
        research directives pursuant to section 22A(b)(1) of the 
        National Institute of Standards and Technology Act (15 U.S.C. 
        278h-1(b)(1)).</DELETED>
<DELETED>    (b) Development of Standards.--</DELETED>
        <DELETED>    (1) In general.--For methodologies and 
        applications related to content provenance and authenticity 
        deemed by the Under Secretary to be at a readiness level 
        sufficient for standardization, the Under Secretary shall 
        provide technical review and assistance to such other Federal 
        agencies and nongovernmental standards organizations as the 
        Under Secretary considers appropriate.</DELETED>
        <DELETED>    (2) Considerations.--In providing any technical 
        review and assistance related to the development of content 
        provenance and authenticity standards under this subsection, 
        the Under Secretary may--</DELETED>
                <DELETED>    (A) consider whether a proposed standard 
                is reasonable, practicable, and appropriate for the 
                particular type of media and media environment for 
                which the standard is proposed;</DELETED>
                <DELETED>    (B) consult with relevant stakeholders; 
                and</DELETED>
                <DELETED>    (C) review industry standards issued by 
                nongovernmental standards organizations.</DELETED>
<DELETED>    (c) Pilot Program.--</DELETED>
        <DELETED>    (1) In general.--The Under Secretary shall carry 
        out a pilot program to assess the feasibility and advisability 
        of using available technologies and creating open standards to 
        facilitate the creation and verification of content provenance 
        information for digital content.</DELETED>
        <DELETED>    (2) Locations.--The pilot program required by 
        paragraph (1) shall be carried out at not more than 2 Federal 
        agencies the Under Secretary shall select for purposes of the 
        pilot program required by paragraph (1).</DELETED>
        <DELETED>    (3) Requirements.--In carrying out the pilot 
        program required by paragraph (1), the Under Secretary shall--
        </DELETED>
                <DELETED>    (A) apply and evaluate methods for 
                authenticating the origin of and modifications to 
                government-produced digital content using technology 
                and open standards described in paragraph (1); 
                and</DELETED>
                <DELETED>    (B) make available to the public digital 
                content embedded with provenance or other 
                authentication provided by the heads of the Federal 
                agencies selected pursuant to paragraph (2) for the 
                purposes of the pilot program.</DELETED>
        <DELETED>    (4) Briefing required.--Not later than 1 year 
        after the date of the enactment of this Act, and annually 
        thereafter until the date described in paragraph (5), the Under 
        Secretary shall brief the Committee on Commerce, Science, and 
        Transportation of the Senate and the Committee on Science, 
        Space, and Technology of the House of Representatives on the 
        findings of the Under Secretary with respect to the pilot 
        program carried out under this subsection.</DELETED>
        <DELETED>    (5) Termination.--The pilot program shall 
        terminate on the date that is 10 years after the date of the 
        enactment of this Act.</DELETED>
<DELETED>    (d) Report to Congress.--Not later than 1 year after the 
date of the enactment of this Act, the Under Secretary shall submit to 
the Committee on Commerce, Science, and Transportation of the Senate 
and the Committee on Science, Space, and Technology of the House of 
Representatives a report outlining the progress of standardization 
initiatives relating to requirements under this section, as well as 
recommendations for legislative or administrative action to encourage 
or require the widespread adoption of such initiatives in the United 
States.</DELETED>

<DELETED>SEC. 103. STANDARDS FOR DETECTION OF EMERGENT AND ANOMALOUS 
              BEHAVIOR AND AI-GENERATED MEDIA.</DELETED>

<DELETED>    Section 22A(b)(1) of the National Institute of Standards 
and Technology Act (15 U.S.C. 278h-1(b)(1)) is amended--</DELETED>
        <DELETED>    (1) by redesignating subparagraph (I) as 
        subparagraph (K);</DELETED>
        <DELETED>    (2) in subparagraph (H), by striking ``; and'' and 
        inserting a semicolon; and</DELETED>
        <DELETED>    (3) by inserting after subparagraph (H) the 
        following:</DELETED>
                <DELETED>    ``(I) best practices for detecting outputs 
                generated by artificial intelligence systems, including 
                content such as text, audio, images, and 
                videos;</DELETED>
                <DELETED>    ``(J) methods to detect and understand 
                anomalous behavior of artificial intelligence systems 
                and safeguards to mitigate potentially adversarial or 
                compromising anomalous behavior; and''.</DELETED>

<DELETED>SEC. 104. COMPTROLLER GENERAL STUDY ON BARRIERS AND BEST 
              PRACTICES TO USAGE OF AI IN GOVERNMENT.</DELETED>

<DELETED>    (a) In General.--Not later than 1 year after the date of 
enactment of this Act, the Comptroller General of the United States 
shall--</DELETED>
        <DELETED>    (1) conduct a review of statutory, regulatory, and 
        other policy barriers to the use of artificial intelligence 
        systems to improve the functionality of the Federal Government; 
        and</DELETED>
        <DELETED>    (2) identify best practices for the adoption and 
        use of artificial intelligence systems by the Federal 
        Government, including--</DELETED>
                <DELETED>    (A) ensuring that an artificial 
                intelligence system is proportional to the need of the 
                Federal Government;</DELETED>
                <DELETED>    (B) restrictions on access to and use of 
                an artificial intelligence system based on the 
                capabilities and risks of the artificial intelligence 
                system; and</DELETED>
                <DELETED>    (C) safety measures that ensure that an 
                artificial intelligence system is appropriately limited 
                to necessary data and compartmentalized from other 
                assets of the Federal Government.</DELETED>
<DELETED>    (b) Report.--Not later than 2 years after the date of 
enactment of this Act, the Comptroller General of the United States 
shall submit to the Committee on Commerce, Science, and Transportation 
of the Senate and the Committee on Science, Space, and Technology of 
the House of Representatives a report that--</DELETED>
        <DELETED>    (1) summarizes the results of the review conducted 
        under subsection (a)(1) and the best practices identified under 
        subsection (a)(2), including recommendations, as the 
        Comptroller General of the United States considers 
        appropriate;</DELETED>
        <DELETED>    (2) describes any laws, regulations, guidance 
        documents, or other policies that may prevent the adoption of 
        artificial intelligence systems by the Federal Government to 
        improve certain functions of the Federal Government, 
        including--</DELETED>
                <DELETED>    (A) data analysis and 
                processing;</DELETED>
                <DELETED>    (B) paperwork reduction;</DELETED>
                <DELETED>    (C) contracting and procurement practices; 
                and</DELETED>
                <DELETED>    (D) other Federal Government services; 
                and</DELETED>
        <DELETED>    (3) includes, as the Comptroller General of the 
        United States considers appropriate, recommendations to modify 
        or eliminate barriers to the use of artificial intelligence 
        systems by the Federal Government.</DELETED>

  <DELETED>TITLE II--ARTIFICIAL INTELLIGENCE ACCOUNTABILITY</DELETED>

<DELETED>SEC. 201. DEFINITIONS.</DELETED>

<DELETED>    In this title:</DELETED>
        <DELETED>    (1) Appropriate congressional committees.--The 
        term ``appropriate congressional committees'' means--</DELETED>
                <DELETED>    (A) the Committee on Energy and Natural 
                Resources and the Committee on Commerce, Science, and 
                Transportation of the Senate;</DELETED>
                <DELETED>    (B) the Committee on Energy and Commerce 
                of the House of Representatives; and</DELETED>
                <DELETED>    (C) each congressional committee with 
                jurisdiction over an applicable covered 
                agency.</DELETED>
        <DELETED>    (2) Artificial intelligence system.--The term 
        ``artificial intelligence system'' means an engineered system 
        that--</DELETED>
                <DELETED>    (A) generates outputs, such as content, 
                predictions, recommendations, or decisions for a given 
                set of human-defined objectives; and</DELETED>
                <DELETED>    (B) is designed to operate with varying 
                levels of adaptability and autonomy using machine and 
                human-based inputs.</DELETED>
        <DELETED>    (3) Covered agency.--The term ``covered agency'' 
        means an agency for which the Under Secretary develops an NIST 
        recommendation.</DELETED>
        <DELETED>    (4) Covered internet platform.--</DELETED>
                <DELETED>    (A) In general.--The term ``covered 
                internet platform''--</DELETED>
                        <DELETED>    (i) means any public-facing 
                        website, consumer-facing internet application, 
                        or mobile application available to consumers in 
                        the United States; and</DELETED>
                        <DELETED>    (ii) includes a social network 
                        site, video sharing service, search engine, and 
                        content aggregation service.</DELETED>
                <DELETED>    (B) Exclusions.--The term ``covered 
                internet platform'' does not include a platform that--
                </DELETED>
                        <DELETED>    (i) is wholly owned, controlled, 
                        and operated by a person that--</DELETED>
                                <DELETED>    (I) during the most recent 
                                180-day period, did not employ more 
                                than 500 employees;</DELETED>
                                <DELETED>    (II) during the most 
                                recent 3-year period, averaged less 
                                than $50,000,000 in annual gross 
                                receipts; and</DELETED>
                                <DELETED>    (III) on an annual basis, 
                                collects or processes the personal data 
                                of less than 1,000,000 individuals; 
                                or</DELETED>
                        <DELETED>    (ii) is operated for the sole 
                        purpose of conducting research that is not 
                        directly or indirectly made for 
                        profit.</DELETED>
        <DE