

Colorado Enacts AI Consumer Protection Legislation

On May 17, 2024, Colorado enacted S.B. 24-205 (the "Act"), which imposes a duty of reasonable care on developers and deployers of high-risk artificial intelligence ("AI") systems to protect consumers from risks of algorithmic discrimination.

Colorado is the first U.S. state to enact comprehensive legislation regulating high-risk AI systems. Effective February 1, 2026, the Act imposes sweeping compliance requirements on developers and deployers of high-risk AI systems. The Act will be enforced by the Colorado Attorney General ("AG"), and violations will constitute unfair and deceptive trade practices under the Colorado Consumer Protection Act.

A high-risk AI system is any AI system that makes a "consequential decision," meaning a decision that has a material legal effect, or a similarly significant effect, on the provision or denial to a consumer who resides in Colorado of, or on the cost or terms of, education, employment, financial services, essential government services, health care, housing, insurance, or legal services. AI systems intended to perform narrow procedural tasks or to detect decision-making patterns or deviations from prior patterns, as well as certain enumerated technologies such as antivirus software and firewalls, are not considered high-risk.

Developers and deployers must use reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination arising from intended and contracted uses of high-risk AI systems. Compliance with the Act creates a rebuttable presumption that the developer or deployer used reasonable care.

The Act imposes different requirements on developers and deployers. A developer must comply with notice and documentation requirements, including providing deployers the documentation they need to complete impact assessments. A deployer must implement a risk management policy and program to identify and mitigate known or reasonably foreseeable risks of algorithmic discrimination; in assessing the reasonableness of that program, a deployer may consult nationally or internationally recognized guidance, such as the National Institute of Standards and Technology's AI Risk Management Framework. Deployers also must conduct impact assessments, review the deployment of each high-risk AI system annually, and provide certain notices and rights to consumers. Except where it is "obvious," the Act further requires deployers of AI systems (not just high-risk AI systems) that are intended to interact with consumers to disclose to each such consumer that they are interacting with an AI system.

Developers and deployers have separate reporting requirements. Developers must report to the Colorado AG within 90 days after learning of any known or reasonably foreseeable risks of algorithmic discrimination related to a high-risk AI system that has been deployed. A deployer must report to the Colorado AG within 90 days of discovering that a high-risk AI system caused algorithmic discrimination.

Insights by Jones Day should not be construed as legal advice on any specific facts or circumstances. The contents are intended for general information purposes only and may not be quoted or referred to in any other publication or proceeding without the prior written consent of the Firm, to be given or withheld at our discretion. To request permission to reprint or reuse any of our Insights, please use our “Contact Us” form, which can be found on our website at www.jonesday.com. This Insight is not intended to create, and neither publication nor receipt of it constitutes, an attorney-client relationship. The views set forth herein are the personal views of the authors and do not necessarily reflect those of the Firm.