Colorado Passes Artificial Intelligence (AI) Regulatory Bill
Colorado became the first state in the country to enact a regulatory framework for artificial intelligence (AI). On May 17, 2024, Governor Jared Polis signed Senate Bill 24-205 (Concerning consumer protections in interactions with artificial intelligence systems) into law. This landmark law is set to take effect on Feb. 1, 2026. Here are five key takeaways from the bill:
1. Defining Key Terms. Four defined terms drive the bill’s consumer protection principles: artificial intelligence system, consequential decision, high-risk artificial intelligence system, and algorithmic discrimination.
An artificial intelligence system means “any machine-based system that, for any explicit or implicit objective, infers from the inputs the system receives how to generate outputs, including content, decisions, predictions, or recommendations, that can influence physical or virtual environments.”
Algorithmic discrimination means “any condition in which the use of an artificial intelligence system results in an unlawful differential treatment or impact that disfavors an individual or group of individuals on the basis of their actual or perceived age, color, disability, ethnicity, genetic information, limited proficiency in the English language, national origin, race, religion, reproductive health, sex, veteran status, or other classification protected under the laws of this state or federal law.”
With limited exceptions, a high-risk artificial intelligence system is “any artificial intelligence system that, when deployed, makes, or is a substantial factor in making, a consequential decision.”
A consequential decision is a “decision that has a material legal or similarly significant effect on the provision or denial to any consumer of, or the cost or terms of” [any of the eight categories of high-risk use cases listed below].
2. Defining Categories of High-Risk Use Cases. The act categorizes eight high-risk use cases for which algorithmic discrimination is actionable. Companies and developers working on AI solutions in these areas should pay close attention to the act and its provisions:
- Education enrollment or an education opportunity.
- Employment or an employment opportunity.
- A financial or lending service.
- An essential government service.
- Health care services.
- Housing.
- Insurance.
- A legal service.
3. Regulating High-Risk AI Systems. Borrowing from the sweeping European Union AI Act, Senate Bill 24-205 targets developers of “high-risk” AI systems, imposing on them a duty to exercise reasonable care to protect consumers from any “known or reasonably foreseeable” risks of algorithmic discrimination. By defining eight categories of high-risk use cases, the Colorado legislature recognizes that not all use cases carry the same risk, and companies in these industries will be required to build more guardrails into their algorithms than companies in other industries.
4. Mandatory Disclosure of Risks. The act requires AI developers to disclose any known or reasonably foreseeable risks of algorithmic discrimination to the Colorado Attorney General within 90 days of discovering the risk or receiving a credible report of algorithmic discrimination.
5. Attorney General Enforcement. Consistent with other consumer protection laws, the act authorizes the Colorado Attorney General to promulgate rules and to enforce violations of the act as an unfair or deceptive trade practice.
Practical Guidance
Consistent with the messaging from the Biden administration, Colorado’s law focuses on high-risk use cases and algorithmic discrimination. It is also clear that the EU AI Act is already influencing legislation in the United States. Given the rapid development and deployment of AI systems, other states will likely follow suit. If you are a software developer building an AI system in one of the eight industries categorized as a high-risk use case, you should study Colorado’s law and start taking concrete steps now to address the disclosure, documentation, and reporting requirements that take effect on Feb. 1, 2026.
To learn more about how to prepare for changing AI regulation, contact a member of Taft’s Technology and Artificial Intelligence industry group.