
Client Alert

May 14, 2025

AI Quarterly Update: Recent AI Legislation Efforts Signal Potential Challenges for State-Led Regulatory Approach


Introduction

In the first quarter of 2025, there were significant regulatory and policy developments at the state, federal, and international levels regarding artificial intelligence (“AI”). In Europe, the phased implementation of the European Union’s Artificial Intelligence Act (“EU AI Act”) has begun, introducing numerous obligations, including for operators of “high risk” AI systems (see our March EU & UK AI Round-Up). Meanwhile, the United States has undergone a notable policy shift following President Trump’s inauguration, resulting in changes to several AI-related initiatives. At the state level, legislatures have been highly active, introducing over 550 AI-related bills in the first quarter of this year alone.

This alert summarizes recent policy changes, focusing on legislation advanced in Colorado and Virginia. While the regulatory landscape will continue to evolve, these developments provide important insight into the priorities and risks that AI developers and users must address.

The Current State of Play

EU AI Act

While the EU AI Act entered into force on August 1, 2024, the first set of regulatory requirements took effect on February 2, 2025: (i) measures to ensure AI literacy; and (ii) a ban on AI systems deemed to pose “unacceptable risk.” Looking ahead, August 2, 2025, marks the commencement of obligations for providers of general-purpose AI models, who must maintain detailed documentation (including technical documentation) and related policies, and comply with the European Commission’s code of practice (currently awaiting its fourth and final version). For general-purpose AI models that pose systemic risk, providers must also ensure and maintain an adequate level of cybersecurity protection, report information in the event of “serious incidents,” and assess and mitigate possible systemic risks at the EU level.

In response to industry concerns about the complexity and potential innovation-stifling effects of the EU AI Act, the European Commission is incorporating feedback from key industry stakeholders to further refine the Act's requirements. Despite anticipated challenges, the EU AI Act represents a significant step in AI regulation, setting a precedent for a comprehensive legislative approach to safety and the ethical development and deployment of AI technologies. It remains to be seen what approach to enforcement will be taken by the EU AI Office and national authorities after August 2, 2025, the official start of enforcement.  

Federal AI Policy

At the US federal level, the Trump administration has prioritized AI development, placing less emphasis on safety and security protocols. This shift is already producing tangible policy changes in both the executive and legislative branches. 

  • On January 23, 2025, President Trump issued an Executive Order titled “Removing Barriers to American Leadership in Artificial Intelligence,” which suspended or revised Biden-era AI policies and directed the creation of an AI Action Plan to identify priority initiatives. The Office of Science and Technology Policy (“OSTP”) has been soliciting public input to shape this plan, with a final version expected by July 2025.
  • On April 7, 2025, the Office of Management and Budget issued two policy memoranda aimed at implementing President Trump’s Executive Order with respect to agency use and procurement of AI technologies: M-25-21 (“Accelerating Federal Use of AI through Innovation, Governance, and Public Trust”) and M-25-22 (“Driving Efficient Acquisition of Artificial Intelligence in Government”).
  • On April 23, 2025, President Trump issued another Executive Order, titled “Advancing Artificial Intelligence Education for American Youth,” which establishes an AI Education Task Force and directs the Secretary of Labor to develop a Registered Apprenticeship program for AI-related occupations, signaling that AI adoption across the public and private sectors remains a top executive priority.

The combined impact of these orders and policies, viewed in light of the administration’s embrace of crypto and rejection of heightened reporting requirements, such as the Biden-era climate disclosure rules, demonstrates that the administration will remain focused on promoting the adoption of advanced technology in the private and public sectors through a light-touch approach to regulation. That being said, the administration likely will continue to roll out EOs and policy pronouncements that shape incentives and regulations for AI.

State Regulation

At the state level, the regulatory landscape surrounding AI has become significantly more complex. Unlike the more centralized approaches at the federal or international level, U.S. states are moving forward with a patchwork of legislative initiatives that vary widely in scope, substance, and enforcement. In 2024 alone, over 400 AI-related bills were introduced across state legislatures, of which dozens were enacted. That pace has accelerated in 2025, with over 550 state AI bills introduced across 45 states and Puerto Rico in the first quarter alone. These proposals address a broad range of topics, including consumer protection, transparency in AI, election-related AI use, automated decision-making, and the impacts of AI on government services, education, and the workforce. While most of these proposals focus on discrete AI uses, at least eight states—Vermont, California, Texas, Massachusetts, Illinois, New York, Rhode Island, and Connecticut—have proposed comprehensive frameworks. These bills focus on “high-risk” AI systems or “algorithmic discrimination,” which they typically define as “unlawful differential treatment” on the basis of a protected class. Congress itself appears concerned with the sharp uptick in proposed AI bills and the potential implications for competition: on May 11, 2025, the House Energy & Commerce Committee released a budget reconciliation bill proposal that seeks a ten-year moratorium on state regulation of AI models, AI systems, and automated decision systems.

This flurry of legislative activity highlights the difficulty of regulating a relatively nascent technology that is expected to transform productivity in many fields. California’s experience exemplifies this. In 2024, Governor Gavin Newsom vetoed SB 1047, which would have mandated safety testing and disclosure obligations for AI developers, citing the potentially burdensome nature of its requirements. Despite the veto, Governor Newsom subsequently commissioned a panel to study AI risks and opportunities, which has since published a report recommending legislation imposing broader transparency requirements.

Recent developments in Virginia and Colorado further underscore the complexities of regulating the use of AI. Both states introduced bills addressing “algorithmic discrimination”—Virginia’s “High-Risk Artificial Intelligence Developer and Deployer Act” (“Virginia AI Act”) and Colorado’s “Concerning Consumer Protections in Interactions with Artificial Intelligence Systems Act” (“Colorado AI Act”). The states define algorithmic discrimination as “the use of an artificial intelligence system that results in an unlawful differential treatment or impact that disfavors an individual or group of individuals on the basis of their actual or perceived” state or federal protected class status. While each defines discrimination similarly, the two bills differ in implementation and coverage, reflecting different approaches to regulating the full AI lifecycle, from development to deployment. Virginia’s bill was vetoed in March 2025; Colorado’s was enacted, though amendments are expected before it takes effect. These diverging outcomes are likely to shape how other states move forward. Because these early laws may serve as models for future regulation, they are analyzed in detail below.

Overview of Virginia and Colorado Acts

Colorado AI Act – Key Requirements

Colorado’s SB 24-205, the “Concerning Consumer Protections in Interactions with Artificial Intelligence Systems Act” (“Colorado AI Act”), was signed into law on May 17, 2024. Although Colorado lawmakers are expected to amend the bill, the Colorado AI Act as currently enacted imposes sweeping requirements on developers and deployers of “high-risk” AI systems. The Act is designed to protect consumers from known or reasonably foreseeable risks of “algorithmic discrimination” arising from the intended and contracted uses of these systems.

Who is Covered?

Under the Colorado AI Act, a developer is an individual, corporation, or other legal or commercial entity doing business in Colorado that “develops or intentionally and substantially modifies” an AI system. A deployer is an individual, corporation, or other legal or commercial entity doing business in Colorado that deploys a “high-risk” AI system.

Key Requirements

  • Developers must:
    • Publish a public statement outlining the system’s reasonably foreseeable and known harmful uses.
    • Maintain documentation describing how the AI system was trained and evaluated.
    • Provide detailed disclosures of risks related to “algorithmic discrimination,” including the system’s intended uses, limitations, mitigation strategies, and performance metrics.
    • Notify the Colorado Attorney General, as well as known deployers and developers, of any known or reasonably foreseeable risks of “algorithmic discrimination” within 90 days of discovery.
  • Deployers must:
    • Implement a regularly reviewed risk management program that aligns with guidance from the National Institute of Standards and Technology (NIST).
    • Conduct a comprehensive impact assessment for the AI system annually and within 90 days of any intentional and substantial modification to the “high-risk” AI system. The assessment must detail the system’s purpose, use, risks, mitigation strategies, safeguards, and any changes.
    • Retain all impact assessments for at least three years.
    • Provide clear public disclosures to consumers describing the AI system, its purpose, and its role in any consequential decision-making.
    • Offer opt-out rights for consumers under the Colorado Privacy Act for profiling that results in legal or similarly significant effects.

Violations are deemed unfair trade practices under the Colorado Consumer Protection Act and may result in civil penalties of up to $20,000 per violation. The Act does not create a private right of action.

Affirmative Defenses

  • Entities may assert a defense if they:
    • Discover and cure a violation based on feedback, testing, or internal review, and
    • Maintain a documented review process and comply with the latest NIST “Artificial Intelligence Risk Management Framework” or a similar framework recognized by the Attorney General.

Legislative Amendments Under Consideration for the Colorado AI Act

The Colorado AI Act is scheduled to take effect on February 1, 2026. However, Governor Jared Polis has called for further legislative refinement. In his signing statement on May 17, 2024, Governor Polis acknowledged concerns about balancing consumer protection with innovation and urged the Colorado General Assembly to work with stakeholders to revise the bill based on “evidence-based findings.”

In response, the Colorado AI Impact Task Force released a report in February 2025 outlining three categories of issues:

(1) areas of consensus for changes;

(2) areas requiring more engagement to reach agreement among stakeholders; and

(3) areas of firm disagreement requiring creative solutions.

Changing the definition of “substantial factor” was identified as one such area of disagreement, underscoring the complexity of defining the scope of AI regulation. Importantly, none of the discussions suggested a full repeal or overhaul of the Act.

As Colorado considers refinements to its law, Virginia’s recently vetoed AI Act and evolving executive policy under the Trump Administration will likely shape the next phase of policymaking. Lawmakers in other states considering similar legislation are expected to monitor Colorado closely before finalizing their own proposals.

The Proposed Virginia AI Act

In March 2025, Governor Youngkin vetoed HB 2094, formally titled the “High-Risk Artificial Intelligence Developer and Deployer Act” (“Virginia AI Act”). The Virginia AI Act would have imposed a series of obligations on companies that create and deploy “high-risk” AI systems, with the aim of preventing “algorithmic discrimination.” In a statement explaining his veto, Governor Youngkin expressed concern that the bill’s regulatory framework would hamper AI innovation and deter the creation of new jobs and investment in the state. He argued that the bill “fails to account for the rapidly evolving and fast-moving nature of the AI industry and puts an especially onerous burden on smaller firms and startups that lack large legal compliance departments.” Governor Youngkin also pointed to recent state-level initiatives, including his Executive Order 30 (2024) and the establishment of an AI task force, as evidence of his administration’s commitment to AI oversight through more adaptive and innovation-friendly mechanisms.

Both measures address similar concerns, including the regulation of “high-risk” AI systems and the prevention of “algorithmic discrimination” by developers and deployers. However, the Virginia proposal was narrower in scope than the Colorado AI Act and arguably more favorable to businesses. Key distinctions between the two frameworks include:

  • Definition of “substantial factor”: Both frameworks define “high-risk” AI systems as those that are a “substantial factor” in making a consequential decision. Under the Colorado AI Act, consequential decisions include decisions that have a material effect on the provision or cost of education, employment, financial or lending services, essential government services, healthcare services, housing, insurance, or legal services. For an AI system to be a “substantial factor,” it must “assist” in making the consequential decision, but the Colorado AI Act does not define “assists,” creating an open issue that may be addressed through litigation or subsequent regulation. In contrast, the proposed Virginia AI Act would have limited the definition of “substantial factor” to systems that serve as the “principal basis” for a consequential decision. Additionally, Virginia’s bill would have expanded the list of consequential domains to include parole and probation decisions, as well as marital status.
  • Specific Intent Requirement: The proposed Virginia bill required that a system either be a substantial factor in, or be “specifically intended” to make, consequential decisions to be deemed “high-risk.” This intent-based limitation would have narrowed the law’s scope. Colorado’s statute contains no such requirement, meaning that even systems not designed for consequential decision-making may fall within the Act’s reach if used in that context.

Because the AI industry is evolving rapidly, state legislatures will continue to debate and consider pending and new AI legislation in 2025 and beyond. Given their similar legislative focus and slightly different provisions, the Colorado AI Act and the Virginia AI Act will continue to serve as starting points for other states as they consider AI legislation.

Conclusion

Despite divergent outcomes – Governor Polis signing the Colorado AI Act into law and Governor Youngkin vetoing a similar bill – both governors expressed a common desire: to ensure AI regulation protects consumers without stifling innovation. Governor Youngkin’s veto emphasized concerns with the bill’s structure, not opposition to regulation itself. These developments suggest that while state-level AI legislation is not retreating, lawmakers remain receptive to industry feedback and focused on striking the right balance. Companies developing or deploying high-risk AI systems should continue to closely monitor developments at the state and federal level.