BILL AS INTRODUCED H.710
2024 Page 1 of 29
1 H.710
2 Introduced by Representatives Priestley of Bradford, Anthony of Barre City,
3 Burrows of West Windsor, Chase of Chester, Christie of
4 Hartford, Jerome of Brandon, Masland of Thetford, Roberts of
5 Halifax, Sibilia of Dover, Sims of Craftsbury, Templeman of
6 Brownington, White of Bethel, and Williams of Barre City
7 Referred to Committee on
8 Date:
9 Subject: Information technology; artificial intelligence; developers; deployers
10 Statement of purpose of bill as introduced: This bill proposes to regulate
11 developers and deployers of high-risk artificial intelligence systems and
12 developers of generative artificial intelligence systems.
13 An act relating to regulating developers and deployers of certain artificial
14 intelligence systems
15 It is hereby enacted by the General Assembly of the State of Vermont:
16 Sec. 1. 22 V.S.A. chapter 17 is added to read:
17 CHAPTER 17. ARTIFICIAL INTELLIGENCE
18 § 1001. DEFINITIONS
19 As used in this chapter:
20 (1) “Algorithmic discrimination” means an automated system’s VT LEG #372714 v.1
BILL AS INTRODUCED H.710
2024 Page 2 of 29
1 contribution to unjustified differential treatment or impacts that disfavor
2 individuals or groups of individuals based on their race, color, ethnicity, sex,
3 sexual orientation, gender identity, religion, age, national origin, limited
4 English proficiency, disability, veteran status, genetic information, or any other
5 classification protected by State or federal law.
6 (2) “Artificial intelligence” means any technology, including machine
7 learning, that uses data to train an algorithm or predictive model for the
8 purpose of enabling a computer system or service to autonomously perform
9 any task, including visual perception, language processing, and speech
10 recognition, that is normally associated with human intelligence or perception.
11 (3) “Artificial intelligence system” means any computer system or
12 service that incorporates or uses artificial intelligence.
13 (4) “Consequential decision” means any decision that has a material
14 legal, or similarly significant, effect on a consumer’s access to credit, criminal
15 justice, education, employment, health care, housing, or insurance.
16 (5) “Consumer” means any individual who is a resident of this State.
17 (6) “Deployer” means any person who deploys or uses a high-risk
18 artificial intelligence system to make a consequential decision.
19 (7) “Developer” means any person who develops or who intentionally
20 and substantially modifies:
21 (A) a high-risk artificial intelligence system; or VT LEG #372714 v.1
BILL AS INTRODUCED H.710
2024 Page 3 of 29
1 (B) a generative artificial intelligence system.
2 (8) “Digital watermark” means information that:
3 (A) is embedded in, and reasonably difficult to remove from, any
4 digital content; and
5 (B) enables a consumer who accesses the digital content to verify the
6 authenticity of the digital content and to determine whether the digital content
7 is synthetic digital content.
8 (9) “Foundation model” means any form of artificial intelligence that:
9 (A) is trained on broad data at scale;
10 (B) is designed for generality of output; and
11 (C) can be adapted to a wide range of distinctive tasks.
12 (10) “Generative artificial intelligence” means any form of artificial
13 intelligence, including a foundation model, that is able to produce synthetic
14 digital content, including audio, images, text, and videos.
15 (11) “Generative artificial intelligence system” means any computer
16 system or service that incorporates or uses generative artificial intelligence.
17 (12) “High-risk artificial intelligence system” means any artificial
18 intelligence system that, when deployed, makes or is a controlling factor in
19 making a consequential decision.
20 (13) “Machine learning” means any technique that enables a computer
21 system or service to autonomously learn and adapt by using algorithms and VT LEG #372714 v.1
BILL AS INTRODUCED H.710
2024 Page 4 of 29
1 statistical models to autonomously analyze and draw inferences from patterns
2 in data.
3 (14) “Red teaming” means a structured testing effort to find flaws and
4 vulnerabilities in an AI system, often in a controlled environment and in
5 collaboration with developers of AI.
6 (15) “Search engine” means any computer system or service that
7 searches for, and identifies, items in a database that correspond to keywords or
8 characters specified by a consumer, and is offered to, or used by, any
9 consumer.
10 (16) “Search engine operator” means any person who owns or controls a
11 search engine.
12 (17) “Significant update” means any new version, new release, or other
13 update to a high-risk artificial intelligence system that results in significant
14 changes to such high-risk artificial intelligence system’s use case, key
15 functionality, or expected outcomes.
16 (18)(A) “Social media platform” means a public or semipublic internet-
17 based service or application that:
18 (i) is used by a consumer;
19 (ii) is primarily intended to connect and allow users to socially
20 interact within the service or application; and
21 (iii) enables a consumer to:
VT LEG #372714 v.1
BILL AS INTRODUCED H.710
2024 Page 5 of 29
1 (I) construct a public or semipublic profile for the purposes of
2 signing into and using the service or application;
3 (II) populate a public list of other persons with whom the
4 consumer shares a social connection within the service or application; and
5 (III) create or post content that is viewable by other persons,
6 including on message boards, in chat rooms, or through a landing page or main
7 feed that presents the consumer with content generated by other persons.
8 (B) “Social media platform” does not include a public or semipublic
9 internet-based service or application that:
10 (i) exclusively provides e-mail or direct messaging services;
11 (ii) primarily consists of news, sports, entertainment, interactive
12 video games, electronic commerce, or content that is preselected by the
13 provider or for which any chat, comments, or interactive functionality is
14 incidental to, directly related to, or dependent on the provision of such content;
15 or
16 (iii) is used by and under the direction of an educational entity,
17 including a learning management system or a student engagement program.
18 (19) “Social media platform operator” means any person who owns or
19 controls a social media platform.
20 (20) “Synthetic digital content” means any digital content, including any
21 audio, image, text, or video, that is produced by a generative artificial VT LEG #372714 v.1
BILL AS INTRODUCED H.710
2024 Page 6 of 29
1 intelligence system.
2 (21) “Trade secret” has the same meaning as in 9 V.S.A. § 4601.
3 § 1002. DUTIES OF DEVELOPERS OF HIGH-RISK ARTIFICIAL
4 INTELLIGENCE SYSTEMS
5 (a) Each developer shall use reasonable care to avoid any risk of
6 algorithmic discrimination that is a reasonably foreseeable consequence of
7 developing, or intentionally and substantially modifying, a high-risk artificial
8 intelligence system to make a consequential decision. In any enforcement
9 action brought by the Attorney General pursuant to section 1007 of this
10 chapter, there shall be a rebuttable presumption that a developer used
11 reasonable care as required under this subsection if the developer complied
12 with the provisions of this section.
13 (b) Except as provided in subsection (e) of this section, no developer of a
14 high-risk artificial intelligence system shall offer, sell, lease, give, or otherwise
15 provide a high-risk artificial intelligence system to a deployer unless the
16 developer provides to the deployer all of the following:
17 (1) a statement disclosing the intended uses of the high-risk artificial
18 intelligence system;
19 (2) documentation disclosing:
20 (A) the known limitations of the high-risk artificial intelligence
21 system, including any and all reasonably foreseeable risks of algorithmic VT LEG #372714 v.1
BILL AS INTRODUCED H.710
2024 Page 7 of 29
1 discrimination arising from the intended uses of the high-risk artificial
2 intelligence system;
3 (B) the purpose of the high-risk artificial intelligence system and the
4 intended benefits, uses, and deployment contexts of the high-risk artificial
5 intelligence system;
6 (C) a summary of the type of data collected from individuals and
7 processed by the high-risk artificial intelligence system when the high-risk
8 artificial intelligence system is used to make a consequential decision; and
9 (D) an analysis of any adverse impact that the deployer’s deployment
10 or use of the high-risk artificial intelligence system will potentially have on
11 any individual, or group of individuals, on the basis of race, color, ethnicity,
12 sex, sexual orientation, gender identity, religion, age, national origin, limited
13 English proficiency, disability, or veteran status.
14 (3) documentation describing:
15 (A) the type of data used to program or train the high-risk artificial
16 intelligence system;
17 (B) how the high-risk artificial intelligence system was evaluated for
18 validity and explainability before the high-risk artificial intelligence system
19 was licensed or sold;
20 (C) the data governance measures used to cover the training data sets
21 and the measures used to examine the suitability of data sources, possible VT LEG #372714 v.1
BILL AS INTRODUCED H.710
2024 Page 8 of 29
1 biases, and appropriate mitigation;
2 (D) the outputs of the high-risk artificial intelligence system and how
3 these outputs may be used to make consequential decisions;
4 (E) the measures the developer has taken to mitigate any risk of
5 algorithmic discrimination that the developer knows may arise from
6 deployment or use of the high-risk artificial intelligence system; and
7 (F) how an individual can use the high-risk artificial intelligence
8 system to make, or monitor the high-risk artificial intelligence system when the
9 high-risk artificial intelligence system is deployed or used to make, a
10 consequential decision.
11 (c) Except as provided in subsection (e) of this section, each developer that
12 offers, sells, leases, gives, or otherwise provides to a deployer a high-risk
13 artificial intelligence system shall provide to the deployer the technical
14 capability to access, or otherwise make available to the deployer, all
15 information and documentation in the developer’s possession, custody, or
16 control that the deployer reasonably requires to complete an impact assessment
17 pursuant to subsection 1003(c) of this chapter.
18 (d) Each developer shall post a clear and conspicuous statement on its
19 public-facing website summarizing:
20 (1) the types of high-risk artificial intelligence systems that:
21 (A) the developer has developed or has intentionally and VT LEG #372714 v.1
BILL AS INTRODUCED H.710
2024 Page 9 of 29
1 substantially modified; and
2 (B) are currently deployed or used by a deployer; and
3 (2) how the developer manages any reasonably foreseeable risk of
4 algorithmic discrimination that may arise from deployment or use of each
5 high-risk artificial intelligence system described in subdivision (1) of this
6 subsection.
7 (e) Nothing in subsections (b)–(d) of this section shall be construed to
8 require a developer to disclose any trade secret.
9 (f)(1) The Attorney General may require that a developer disclose to the
10 Attorney General any statement or documentation described in subsection (b)
11 of this section if the statement or documentation is relevant to an investigation
12 conducted by the Attorney General.
13 (2) The Attorney General may evaluate any statement or documentation
14 to ensure compliance with the provisions of this section, and any such
15 statement or documentation is exempt from public inspection and copying
16 under the Public Records Act.
17 (3) To the extent any information contained in any such statement or
18 documentation includes any information subject to the attorney-client privilege
19 or work product protection, disclosure to the Attorney General pursuant to this
20 subsection shall not constitute a waiver of that privilege or protection.
VT LEG #372714 v.1
BILL AS INTRODUCED H.710
2024 Page 10 of 29
1 § 1003. DUTIES OF DEPLOYERS OF HIGH-RISK ARTIFICIAL
2 INTELLIGENCE SYSTEMS
3 (a) Each deployer shall use reasonable care to avoid any risk of algorithmic
4 discrimination that is a reasonably foreseeable consequence of deploying or
5 using a high-risk artificial intelligence system to make a consequential
6 decision. In any enforcement action brought by the Attorney General pursuant
7 to section 1007 of this chapter, there shall be a rebuttable presumption that a
8 deployer used reasonable care as required under this subsection if the deployer
9 complied with the provisions of this section.
10 (b) No deployer shall deploy or use a high-risk artificial intelligence system
11 to make a consequential decision unless the deployer has designed and
12 implemented a risk management policy and program for the high-risk artificial
13 intelligence system. The risk management policy shall specify the principles,
14 processes, and personnel that the deployer shall use in maintaining the risk
15 management program to identify, mitigate, and document any risk of
16 algorithmic discrimination that is a reasonably foreseeable consequence of
17 deploying or using such high-risk artificial intelligence system to make a
18 consequential decision. Each risk management policy and program designed,
19 implemented, and maintained pursuant to this subsection shall be:
20 (1) at least as stringent as the latest version of the Artificial Intelligence
21 Risk Management Framework published by the National Institute of Standards VT LEG #372714 v.1
BILL AS INTRODUCED H.710
2024 Page 11 of 29
1 and Technology or another nationally or internationally recognized risk
2 management framework for artificial intelligence systems; and
3 (2) reasonable, considering:
4 (A) the size and complexity of the deployer;
5 (B) the nature and scope of the high-risk artificial intelligence
6 systems deployed and used by the deployer, including the intended uses of
7 those systems;
8 (C) the sensitivity and volume of data processed in connection with
9 the high-risk artificial intelligence systems deployed and used by the deployer;
10 and
11 (D) the cost to the deployer to implement and maintain the risk
12 management program.
13 (c)(1) Except as provided in subdivisions (3) and (4) of this subsection, no
14 deployer shall deploy or use a high-risk artificial intelligence system to make a
15 consequential decision unless the deployer has completed an impact
16 assessment for the high-risk artificial intelligence system. The deployer shall
17 complete an impact assessment for a high-risk artificial intelligence system:
18 (A) before the deployer initially deploys the high-risk artificial
19 intelligence system;
20 (B) not later than 45 days following the close of each calendar year
21 during which the deployer used the high-risk artificial intelligence system to
VT LEG #372714 v.1
BILL AS INTRODUCED H.710
2024 Page 12 of 29
1 make a consequential decision; and
2 (C) not later than 45 days after each significant