The White House has issued an executive order, “Ensuring a National Policy Framework for Artificial Intelligence,” signaling a federal push to limit emerging, conflicting state AI requirements and to lay the groundwork for a single national approach. While framed as a step toward regulatory certainty, the order is unlikely to ease compliance obligations for businesses in the near term.
Until courts and agencies translate the order’s directives into enforceable outcomes, and unless Congress enacts overarching legislation, companies will continue to navigate state-by-state rules, with added uncertainty from expected litigation and evolving federal actions.
What the executive order covers
The order articulates a policy to “sustain and enhance” US AI leadership through a “minimally burdensome” national framework and targets what it characterizes as “onerous” state AI laws. To advance this objective, it directs the:
- establishment of a Department of Justice AI Litigation Task Force to challenge state AI laws deemed inconsistent with federal policy;
- evaluation of state AI laws by the Department of Commerce to identify those viewed as conflicting with federal policy;
- use of federal funding levers, including conditioning certain funds and other discretionary grants on states’ refraining from enacting or enforcing identified AI laws;
- consideration by the FCC of a federal AI reporting and disclosure standard that would preempt conflicting state requirements;
- drafting of a policy statement by the FTC describing when state laws that mandate changes to “truthful outputs” could be preempted by the FTC Act’s prohibition on unfair or deceptive practices; and
- development of a legislative proposal for a uniform federal AI framework that would preempt conflicting state laws, with carve-outs for areas such as child safety, AI compute/data center infrastructure, and state procurement and use of AI.
Why compliance burdens are unlikely to be eased
Despite its unifying language, the order introduces a period of heightened legal and operational uncertainty rather than immediate relief:
State laws remain operative
Absent court injunctions, repeal, or definitive federal preemption through rulemaking or legislation, existing state AI obligations (for example, Illinois’ prohibition against AI in behavioral health therapy) continue to apply. Businesses face continued multi-jurisdiction compliance while potential challenges play out.
Litigation will take time and may be uneven
A DOJ-led strategy to challenge state laws will likely spur immediate state and third-party defenses, producing fragmented outcomes across courts. Even successful challenges could be limited to specific provisions or jurisdictions, prolonging ambiguity and increasing litigation-driven risk.
Agency actions will unfold on extended timelines
The order tasks the FCC and FTC with initiating processes that are likely to involve notice-and-comment proceedings and may be contested on statutory authority and “major questions” grounds. Any preemptive effect will not materialize quickly.
Funding conditions add leverage but not clarity
Conditioning grants may influence state behavior but will also invite challenges. In the interim, companies must still plan for current state requirements while monitoring whether states alter enforcement postures to preserve funding.
A federal standard, if it emerges, could add obligations
A single federal “reporting and disclosure” regime or FTC-driven uniformity may reduce state-by-state divergence but could introduce new nationwide documentation, testing, transparency, and governance expectations, especially for model developers and deployers in regulated sectors.
Carve‑outs preserve state authority in key domains
The order contemplates continued state roles in child safety, state AI procurement and use, and infrastructure-related topics. Sector- and use-case-specific state rules, including those addressing insurance, employment, education, or public sector AI, may remain active even under a future federal framework.
Practical implications for businesses
In the near term, AI governance and compliance programs should remain calibrated to current state requirements while accounting for increased volatility as challenges proceed. Businesses should expect:
- Continued multi-state compliance efforts, including impact/risk assessments, transparency and disclosure workflows, incident response and model monitoring practices, and documentation traceable to state-specific obligations;
- Potential expansion (not contraction) of documentation and testing expectations if federal agencies move toward harmonized reporting or if litigation prompts interim “best practice” commitments;
- Heightened scrutiny of model outputs, auditability, and explainability where states or federal agencies frame “truthful output” versus “deception” as a consumer protection issue;
- Ongoing adjustments in regulated sectors such as financial services, health, employment, and critical infrastructure, where sectoral rules and supervisory expectations interact with AI-specific obligations.
What to watch
Key milestones that will shape the compliance landscape over the coming quarters include:
- DOJ’s initial litigation posture and any preliminary injunction activity;
- The Department of Commerce’s 90‑day evaluation of state laws;
- The scope of any FCC proposal on AI reporting and potential preemption theories;
- The substance and practical effect of the FTC policy statement; and
- Whether Congress advances legislation establishing a uniform federal AI framework.
Until these and other developments produce settled outcomes, businesses should plan for continued state-by-state compliance and budget for governance enhancements that can satisfy both state and potential federal standards. Nixon Peabody will continue to monitor AI regulatory developments at the federal and state levels.
