Colorado Artificial Intelligence Act

In May 2024, Colorado enacted the Colorado Artificial Intelligence Act (“Colorado AI Act”), the first comprehensive artificial intelligence (AI) legislation in the United States (with its enactment, the lament that “AI is completely unregulated in the U.S.!” loses some of its force). The Colorado AI Act takes effect in February 2026. The highlights of this law, including how it regulates the use of AI in healthcare, are covered below.

The Problems of Using AI in Healthcare: Algorithmic Bias

The purpose of the AI law is not to prevent or inhibit the Rise of the Machines. Its purpose is to prevent uses of AI in healthcare that might be discriminatory. Such uses are known as algorithmic discrimination, sometimes also called algorithmic bias or AI bias. An algorithm, simply, is a set of rules or instructions to be followed in calculations or other problem-solving operations. Artificial intelligence operates through the use of algorithms.

How can a machine be biased? It’s not the computer itself that is biased. The humans who select the data and instructions the algorithm relies on, however, can be, intentionally or unintentionally. That human bias can, in turn, produce results that are discriminatory, or that favor one group over another. In the healthcare system, algorithmic processes used to predict which patient populations will need more healthcare in the future can produce inaccurate decisions that harm one group relative to another.

A landmark study of AI in healthcare, published in 2019 in the journal Science, illustrates algorithmic bias in action. The study found that an algorithm widely used by hospitals to predict future healthcare needs for more than 100 million people was biased against black patients. How so?

The algorithm relied on past healthcare spending to predict future health needs. The humans who fed the data to the algorithm essentially gave it this instruction: “When trying to predict who is likely to need extra healthcare in the future, consider previous healthcare spending.” In other words, the algorithm was to assume that patients who had spent large amounts on healthcare in the past were the ones likely to spend more, and to require extra care, in the future.

A reasonable assumption? Not quite. It did not account for economic realities. Because black patients spent less than white patients on healthcare, the study noted, the algorithm predicted that black patients were less likely than white patients to require extra care in the future. The result: black patients had to be much sicker than white patients before the algorithm recommended them for extra healthcare.

Equating past healthcare spending with the likelihood that more care will be required in the future (e.g., “If a patient spent little on healthcare in the past, that patient likely will not need extra care in the future”) turns out to be a biased assumption. The humans who designed the algorithm did not take a basic fact into account: historically, black patients have had less to spend on their healthcare than white patients, due to longstanding wealth and income disparities. Spending less on healthcare does not mean someone is healthier; it can mean a person is unhealthy but has lacked access to affordable treatment.
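The flawed proxy logic described above can be sketched in a few lines of code. This is a hypothetical toy illustration, not the actual algorithm from the study; the function name, patient records, and dollar threshold are all invented for demonstration:

```python
# Toy illustration of a biased proxy: flagging patients for extra care
# based on PRIOR SPENDING rather than on actual health need.
# All names, numbers, and thresholds here are hypothetical.

def needs_extra_care(prior_spending, threshold=5000):
    """Flag a patient for extra care based only on past healthcare spending.

    This encodes the biased assumption: low past spending is read as low
    future need, ignoring that low spending may reflect lack of access to
    affordable care rather than good health.
    """
    return prior_spending >= threshold

# Two equally sick patients (same number of chronic conditions), but one
# has historically had less money to spend on care.
patient_a = {"chronic_conditions": 4, "prior_spending": 9000}
patient_b = {"chronic_conditions": 4, "prior_spending": 3000}

print(needs_extra_care(patient_a["prior_spending"]))  # True  -> flagged
print(needs_extra_care(patient_b["prior_spending"]))  # False -> missed,
# even though patient_b is just as sick: the proxy encodes the disparity.
```

The bias here is not in the arithmetic, which is correct, but in the choice of input: spending stands in for health need, so any group with historically less to spend is systematically under-flagged.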

Using AI in Healthcare: What Are the Requirements of the Colorado AI Act?

The Colorado AI Act requires AI developers and deployers to use reasonable care to protect consumers from the risks of algorithmic discrimination. The Act defines “algorithmic discrimination” as “Any use of a high-risk artificial intelligence system that results in unlawful differential treatment or impact that disfavors an individual or group of individuals on the basis of their actual or perceived age, color, disability, ethnicity, genetic information, limited proficiency in the English language, national origin, race, religion, reproductive health, sex, veteran status, or other classification protected under Colorado or federal law.”

Algorithmic discrimination, as we’ve seen above, can arise from the use of AI systems. The Colorado AI Act regulates “high-risk AI systems.” The law defines a “high-risk AI system” as a system that makes, or is a substantial factor in making, a “consequential decision.” A “consequential decision,” in turn, is a decision that has a material legal or similarly significant effect on the provision or denial to any consumer of, or the cost or terms of, certain things. These include:

  1. Healthcare services 
  2. Education enrollment or opportunities
  3. Employment or employment opportunities
  4. Financial or lending services
  5. Essential government services
  6. Housing 
  7. Insurance
  8. Legal services

Had the Colorado AI Act been in effect when the “who needs extra care” algorithm was deployed, its developer and the hospitals deploying it would have been required to use reasonable care to identify and guard against the risk that longstanding wealth and income disparities would skew the algorithm’s predictions.

AI in Healthcare: Guarding Against Discrimination

The Colorado AI Act regulates the use of AI in healthcare and other areas by requiring deployers of high-risk AI systems to complete annual impact assessments for these systems. These assessments must include certain information, such as “An analysis of whether deployment of [a high-risk AI system] poses any known or reasonably foreseeable risks of algorithmic discrimination, and if so, details on such discrimination and any mitigations that have been implemented.”

The Colorado AI Act also requires AI deployers to notify consumers of certain activities. If a deployer uses a high-risk AI system to make an adverse consequential decision concerning a consumer, it must send the affected consumer a notice that includes the following information:

A disclosure of the principal reason or reasons for the consequential decision, including:

  1. The degree and manner in which the high-risk AI system contributed to the decision
  2. The type of data processed in making the decision
  3. The source or sources of such data

The deployer must also provide the consumer with:

  4. An opportunity to correct any incorrect personal data the system processed in making the decision
  5. An opportunity to appeal the adverse decision, which must allow for human review

The Colorado AI Act, legal experts predict, will prompt AI-in-healthcare regulation in other states. Already, California and several other states have proposed AI-related legislation.