News & Insights

Client Alert

December 29, 2025

New State AI Laws Are Effective on January 1, 2026, But a New Executive Order Signals Disruption


In the absence of clear U.S. federal guidance, states have begun to regulate the development, delivery, and operation of AI. California’s Transparency in Frontier Artificial Intelligence Act (the “California TFAIA”) and Texas’s Responsible Artificial Intelligence Governance Act (the “Texas RAIGA”) are two prominent examples of several state AI laws effective as of January 1, 2026.

However, on December 11, 2025, President Trump signed an executive order that casts doubt on the enforceability of those and other state AI laws. The executive order, titled “Ensuring a National Policy Framework for Artificial Intelligence” (the “Executive Order”), proposes to establish a uniform federal policy framework for AI that preempts state AI laws deemed by the Trump administration to be inconsistent with that policy. The timing of the Executive Order suggests that the California TFAIA, the Texas RAIGA, and other state AI laws with proximate effective dates are among its targets. These other state laws include:

  • California’s AB 2013, which requires developers of publicly available generative AI systems to publish “high-level” information about their training data;
  • California’s SB 942 (the California AI Transparency Act), which requires large AI platforms to provide free AI‑content detection tools and include manifest and latent watermarks;
  • California’s AB 489, which prohibits AI from falsely claiming healthcare licenses and requires disclosures when AI communicates with patients;
  • California’s SB 243, which mandates chatbot disclosures, safety protocols against suicidal/harmful content, and protections for minors (content limits and break reminders);
  • California’s AB 325 (the Preventing Algorithmic Price Fixing Act), which updates the state’s antitrust law, the Cartwright Act, to bar shared or common pricing algorithms used by competitors to fix prices or restrain trade;
  • Colorado’s SB 24-205 (the Colorado AI Act), which requires developers and deployers of high-risk artificial intelligence systems to use reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination arising from those systems, and which is the only state law specifically mentioned in the Executive Order; and
  • Illinois HB 3773, which amends the Illinois Human Rights Act to prohibit employer use of AI that discriminates against protected classes.

This client alert summarizes the California TFAIA and the Texas RAIGA and then highlights key features of the Executive Order and its implications for state enforcement and compliance planning.

The California TFAIA

The California TFAIA applies to “frontier developers,” meaning developers that have trained or initiated the training of a frontier model. A “frontier model” is defined as a large foundation model trained using a quantity of computing power greater than 10^26 integer or floating-point operations (i.e., a very large model).

The California TFAIA introduces enhanced transparency and accountability obligations for “large frontier developers,” i.e., frontier developers whose annual revenue, together with that of their affiliates, exceeded $500 million in the preceding calendar year. Large frontier developers must create, implement, and prominently publish a “Frontier AI Framework” describing the company’s approach to identifying and mitigating “catastrophic risks” (i.e., foreseeable and material risks that a model will contribute to the death of, or serious injury to, more than 50 people, or to more than $1 billion in property damage, in connection with biological, chemical, or other weapons; cyberattacks conducted without meaningful human oversight; or evasion of developer control). The framework should include, among other things,

  • alignment with national and international standards and industry best practices;
  • engagement of third parties to assess risks and audit mitigation effectiveness;
  • cybersecurity measures to protect unreleased model weights; and
  • internal governance to ensure compliance with these protocols.

The California TFAIA also requires the management and reporting of “critical safety incidents,” which include:

  • unauthorized access to or modification of frontier model weights causing death or injury;
  • harm resulting from the materialization of a catastrophic risk;
  • loss of control of a frontier model leading to death or injury; and
  • models using deceptive methods to bypass controls.

The California TFAIA is one of several California AI-related measures effective on January 1, 2026, including the Generative AI Training Data Transparency Act (AB 2013) and the California AI Transparency Act (SB 942), each of which imposes significant penalties for noncompliance. Companies operating in California should prepare for a layered AI compliance environment as these measures take effect.

The Texas RAIGA

The Texas RAIGA applies broadly to developers and deployers of AI systems that conduct business in Texas, provide products or services used by Texas residents, or develop or deploy AI systems within Texas. The Texas RAIGA prohibits such developers and deployers from intentionally creating or using AI systems for “restricted purposes,” which include:

  • encouragement of self-harm, violence, or criminality;
  • infringement of federal constitutional rights;
  • unlawful discrimination against protected classes; and
  • the creation or distribution of AI-generated child sexual abuse material, unlawful deepfakes, or communications impersonating minors in explicit contexts.

The Texas Attorney General (“AG”) may issue civil investigative demands requiring detailed information about the AI system in question, including a description of its use; the types of data used for training; a summary of inputs and outputs; the metrics and evaluation methods used to assess performance; and the monitoring and safeguards in place. The Texas RAIGA clarifies that developers and deployers are not liable solely because end users misuse their AI systems; rather, liability depends on the developer’s or deployer’s intent in creating and distributing the AI system. Enforcement by the AG, which follows notice and a 60-day cure period, can lead to significant civil penalties.

The Texas RAIGA provides affirmative defenses for parties that discover violations through feedback from a developer, deployer, or other person; through testing procedures, such as red-teaming or adversarial testing; through adherence to state agency guidelines; or through an internal review process, provided that the party is otherwise in compliance with a nationally recognized AI risk management framework, such as NIST’s AI Risk Management Framework.

The Texas RAIGA also creates an AI regulatory sandbox, managed by the Texas Department of Information Resources, allowing approved participants to test AI systems under relaxed requirements for up to thirty-six (36) months, during which the AG will not pursue enforcement actions.

The Executive Order

The Executive Order directs the executive branch to coordinate federal action and to encourage federal legislation establishing a uniform national standard. It directs the Attorney General to establish an AI litigation task force (the “Task Force”) to challenge state AI laws deemed inconsistent with the Executive Order’s policy, including on grounds of unconstitutional regulation of interstate commerce and federal preemption. The Executive Order also calls for an evaluation of state AI laws, including those that compel disclosures or alter model outputs in ways that may raise constitutional concerns, underscoring potential federal-state conflict across an existing and growing slate of state AI laws.

The Executive Order also directs the Secretary of Commerce to publish, by March 11, 2026, an evaluation identifying burdensome state AI laws that conflict with the federal policy and merit referral to the Task Force. At a minimum, the evaluation must flag state laws that require AI models to alter truthful outputs or compel disclosures or reporting that would violate the First Amendment or other constitutional protections.

The Executive Order leverages federal funding and regulatory standards by directing:

  • the Secretary of Commerce to condition certain remaining “Broadband Equity, Access, and Deployment” (BEAD) program funds on states’ avoidance of “onerous” AI laws to the maximum extent permitted by federal law;
  • agencies to consider conditioning discretionary grants on states refraining from enacting or enforcing conflicting AI laws during performance periods;
  • the Federal Communications Commission to initiate a proceeding to consider adopting a federal AI reporting and disclosure standard that would preempt conflicting state laws; and
  • the Federal Trade Commission to issue a policy statement by March 11, 2026, describing how the FTC Act applies to AI and when state laws requiring alteration of truthful outputs are preempted by federal law barring deceptive practices affecting commerce.

The Executive Order does identify categories of regulation that are not proposed for preemption, including regulation of child safety, AI compute and data center infrastructure (except for generally applicable permitting reforms), and state government procurement and use of AI.

The Road Ahead

It will fall to the courts to determine whether and how the Executive Order will affect state AI laws, including those in California, Colorado, Illinois, and Texas. Companies should maintain flexible compliance programs capable of adjusting to the shifting state and federal regulatory environment. We will continue to follow these developments closely, as they reflect a dynamic and uncertain future for AI regulation in the United States.