The Trump administration has issued an executive order (EO) titled "Ensuring a National Policy Framework for Artificial Intelligence," which seeks to establish a "minimally burdensome national standard" for artificial intelligence (AI) and to address what the administration characterizes as an excessive patchwork of state AI laws. Issued on December 11, the order directs multiple federal actions, including creating a Department of Justice (DOJ) AI Litigation Task Force to litigate against state AI measures that are inconsistent with the EO's policy; conditioning certain federal funds on state regulatory posture; and initiating the development of federal standards through the Federal Communications Commission (FCC) and Federal Trade Commission (FTC). Several things are clear from the EO: expect robust debate at the federal and state levels and legal challenges from stakeholders, including regulators. Additionally, the regulatory landscape for AI developers, deployers, and users across sectors remains volatile, and all stakeholders must focus not only on compliance with current law but also on the likely evolution of the legal landscape.
This alert provides companies and state-regulated entities with the legal context necessary to prepare for the EO's potential impacts.
What the Order Does
The EO articulates a federal policy to "sustain and enhance" U.S. AI leadership through a unified framework and identifies certain state laws as creating compliance burdens that the administration believes risk impeding innovation and affecting interstate commerce. It cites state prohibitions on "algorithmic discrimination" and disclosure mandates as examples of measures that could require model alterations or implicate constitutional protections, including the First Amendment. The EO states an objective to move toward a national standard that would supersede conflicting state requirements while expressly preserving certain categories of state laws, including those addressing child safety and state procurement.
Operationally, the EO directs:
- Creation of an AI Litigation Task Force by the U.S. Attorney General within 30 days to challenge state AI laws that the Attorney General deems unconstitutional, preempted, or otherwise unlawful, with an express focus on laws that affect interstate commerce or conflict with federal AI policy.
- A Commerce Department evaluation of state AI laws within 90 days, identifying measures the administration considers onerous and that conflict with federal AI policy and, at a minimum, flagging any state laws that the order characterizes as compelling "alterations to truthful outputs" or requiring disclosures that could violate constitutional protections. The evaluation may also identify state laws that are consistent with the EO's policy of promoting AI innovation.
- Funding conditions directing Commerce to specify eligibility limits for certain Broadband Equity, Access, and Deployment (BEAD) funds and instructing agencies to assess discretionary grants that could be conditioned on states refraining from, or not enforcing, laws identified as conflicting with the federal AI policy during the funding period, "to the maximum extent allowed by federal law."
- Regulatory preemption mechanisms directing the FCC to initiate a proceeding on a federal AI reporting and disclosure standard that preempts conflicting state requirements, and directing the FTC to issue a policy statement explaining how state laws that require changes to truthful AI outputs may be preempted by the FTC Act's prohibition on deceptive practices.
- Preparation of legislative recommendations for a uniform federal policy framework that would preempt state laws conflicting with the EO's policy, while expressly preserving certain categories of state laws from preemption proposals (e.g., child safety, state procurement).
Immediate Context and Stakeholder Reactions
The EO states an intent to displace conflicting state regimes and to consolidate governance at the federal level, including litigation to challenge state measures and potential federal agency steps to establish preemptive standards. Some technology industry stakeholders have expressed support for a unified federal framework, citing concerns about operational challenges from divergent state requirements. Advocacy organizations and several state officials have criticized the order as overreaching and have signaled potential constitutional challenges. Congress recently declined to adopt similar nationwide preemption and funding-conditionality proposals, which may be relevant to assessing the EO's legal foundation.
Several states, including California, have characterized the order as an attempt to displace state AI regulations and have emphasized ongoing state initiatives around innovation, public safety, content authenticity, and protections for vulnerable populations. California's response highlights the state's AI ecosystem and asserts that state measures on issues such as deepfakes, watermarking, performer likeness protection, and AI-related child safety could be affected by federal preemption as contemplated in the order. Florida also recently released a comprehensive citizen Bill of Rights for AI, which could be hampered by the EO.
Legal Considerations and Likely Areas of Challenge
The EO raises several threshold legal questions that may be subject to judicial review. The following topics are central to assessing the EO's legal durability and practical impact:
Federal preemption and executive authority: Under established preemption doctrine, preemption of state law is generally grounded in federal statute, regulation, or constitutional structure, and an executive order alone may not be sufficient to displace state legislation absent underlying congressional authorization. Analysis of the EO has therefore focused on whether and to what extent the executive branch may effectuate nationwide preemption via agency action or litigation strategy without new legislation. The EO directs a legislative proposal to establish a uniform federal approach that would expressly preempt conflicting state measures, implicitly recognizing Congress's central role. These dynamics suggest that preemption arguments advanced under the EO will likely rely on existing federal statutes, agency authorities, and classic conflict preemption theories, to be tested case-by-case.
Conditional spending and grant eligibility: The EO directs the Department of Commerce and other agencies to condition certain federal funds on state posture toward identified AI laws "to the maximum extent allowed by federal law." Modifying the terms of federal funding or imposing retroactive conditions may raise constitutional concerns under the Spending Clause of the U.S. Constitution and may exceed statutory authority if the conditions are not sufficiently related, clear, or authorized under the applicable grant statutes. Whether BEAD-related and other discretionary grant conditions can be implemented as outlined will depend on program-specific statutes, timing, the clarity of the conditions, and the interplay with administrative law doctrines, all of which may be subject to legal challenge.
Agency action and potential preemptive standards: The EO's directives to the FCC and FTC contemplate federal standards or policy statements that could preempt conflicting state requirements or explain when state mandates may be preempted by the FTC Act. The legality and scope of any such preemption will likely turn on clear statutory authority, the substance of the rules or guidance issued, the nature of the conflict with state law, and associated administrative procedures. These proceedings, if initiated, would be subject to notice-and-comment procedures and may be subject to judicial review on both statutory and constitutional grounds.
Litigation posture and Commerce Clause themes: The AI Litigation Task Force is being established to challenge state AI laws alleged to burden interstate commerce or conflict with federal priorities. Courts will apply established Commerce Clause and preemption analysis to evaluate each challenged state law for extraterritorial effects, discriminatory or undue burdens on interstate commerce, and conflicts with federal statutes or programs. Given the diversity of state AI measures, outcomes may be highly context-specific, with potential for circuit splits and possible Supreme Court review if core federalism questions are squarely presented.
First Amendment and compelled outputs: The EO targets state laws that purportedly require "alterations to truthful outputs" or compel disclosures in ways that may trigger constitutional scrutiny. Future cases may examine whether specific state provisions constitute compelled speech, interfere with truthful commercial speech, or otherwise regulate model behavior in a manner that collides with federal consumer protection frameworks. The FTC policy statement called for by the EO would seek to clarify when state requirements may be preempted by the FTC Act in this context, which could become a focal point of subsequent litigation.
What This Means for Industry and State-Regulated Entities
For companies building or deploying AI systems nationwide, the EO signals a concerted federal effort to challenge certain state AI mandates, to condition select federal funds, and to explore agency-led preemptive standards. In the near term, this increases regulatory volatility, as state measures may be swiftly challenged, while federal agencies consider actions that could later unify or displace overlapping regimes.
Despite this uncertainty, entities should continue to implement robust AI governance programs – AI governance is crucial not only for compliance with existing and forthcoming legal and regulatory frameworks but also for alignment with national frameworks and global laws such as the EU AI Act. Furthermore, companies deploying AI solutions should remain mindful of established common law duties, especially in light of litigation against AI developers relating to chatbots involved in self-harm incidents. Proactive governance helps mitigate legal risks from multiple sources, and entities should continue monitoring actions taken by federal agencies, state legislatures, and other stakeholders following the EO.
For states, the EO invites immediate choices about defense of existing frameworks, potential adjustments to maintain eligibility for certain federal programs, and participation in federal rulemakings that could affect preemption scope. Public statements by state officials and advocacy groups suggest robust opposition on federalism and statutory authority grounds, indicating that litigation timelines could be rapid and outcomes uncertain across jurisdictions.
At the same time, however, several states that were early AI regulators – particularly Utah and Colorado – have already begun softening or narrowing their regimes, signaling a shift from broad governance mandates to more targeted, risk‑based obligations. In Texas, a comprehensive AI governance bill advanced against the backdrop of federal proposals to impose a moratorium on state AI regulation, underscoring the political and legal tension between state experimentation and a uniform national approach. The new EO adds to this tension by signaling that federal agencies may actively contest certain state AI laws, even as large states continue exploring robust consumer protection and anti-bias frameworks. This emerging dynamic creates both an opening and a moving target for companies operating nationwide AI programs.
Key Takeaways
- The EO advances a unified federal AI framework and seeks to curb "onerous" state laws via DOJ litigation, funding conditions, and potential FCC/FTC regulatory activity, while preparing preemptive federal legislation for congressional consideration.
- Legal challenges are likely to focus on the limits of executive authority to preempt state law absent congressional action, the lawfulness and timing of conditional spending directives, and the statutory foundations and procedures for any agency rules or policy statements aimed at preemption.
- In the short term, expect heightened uncertainty as state laws are evaluated by the Department of Commerce, DOJ initiates challenges, and agencies consider federal standards. We expect states to continue regulatory activity in this area.
- Companies should continue to build AI governance programs within their organizations, track developments closely, and prepare for overlapping compliance considerations pending judicial resolution.
- Industry participants may want to engage with both federal and state policymakers on harmonization, while preparing for overlapping investigations under traditional consumer protection, data privacy, and civil rights laws that will continue to apply regardless of how AI‑specific statutes are curtailed.
For further analysis tailored to your sector and compliance footprint, please contact Michael (Mike) Halaiko, CIPP/E; Alisa Chestler, CIPP/US, QTE; Alexandra (Alex) Moylan, CIPP/US, AIGP; or another member of Baker Donelson's AI Team.