Following its December 2025 executive order on artificial intelligence, which previewed a federal effort to streamline AI regulation, the White House on March 20, 2026, released its National Policy Framework for Artificial Intelligence. The Framework provides Congress with a roadmap for potential federal AI legislation and offers important signals for companies already navigating a rapidly evolving patchwork of state AI laws.
As we noted in our prior alert on the executive order, “AI executive order unlikely to reduce compliance burden in short term,” the administration’s stated goal is to reduce regulatory fragmentation and compliance burdens over time. But like the executive order, the Framework does not itself impose new requirements or displace existing obligations. For companies navigating an increasingly complex landscape of state AI laws, the practical compliance picture remains unchanged (for now).
A legislative roadmap, not a regulatory regime
The Framework is intended as a blueprint for Congress rather than a binding policy. It reflects the administration’s continued emphasis on a “light-touch” federal approach focused on enabling innovation, limiting regulatory burdens, and addressing discrete areas of risk.
This confirms a key takeaway from the executive order: meaningful federal harmonization of AI regulation will depend on congressional action, the timing and scope of which remain uncertain.
Key themes and policy direction
While not prescriptive, the Framework offers useful signals about how federal AI policy may evolve.
FEDERAL PREEMPTION AND A NATIONAL STANDARD
The Framework calls for a single federal approach that would preempt state AI laws imposing inconsistent or burdensome requirements, while preserving certain baseline state authorities (e.g., consumer protection and fraud enforcement).
Practical impact: Preemption remains a central, but politically complex, objective. Until enacted, companies must continue to comply with existing state regimes.
LIMITED, SECTOR-BASED OVERSIGHT
Consistent with the executive order, the Framework favors reliance on existing regulatory authorities and sector-specific oversight, rather than creating a comprehensive AI regulator or horizontal regime.
Practical impact: Companies should not expect a near-term shift toward a unified, EU-style regulatory model.
TARGETED FOCUS AREAS: CHILDREN AND ONLINE SAFETY
The Framework places particular emphasis on protections for minors and risks associated with AI-enabled content.
Practical impact: Targeted legislation in this area is more likely to advance in the near term and may introduce new, discrete compliance obligations.
INTELLECTUAL PROPERTY AND AI TRAINING
The Framework takes a measured approach to intellectual property issues, largely deferring to courts and market-driven solutions.
Practical impact: Ongoing litigation and legal uncertainty around training data and model outputs will continue to shape risk.
INFRASTRUCTURE, ENERGY, AND COMPETITIVENESS
The Framework highlights the importance of AI infrastructure, energy resources, and workforce development as components of national competitiveness.
Practical impact: Policy activity in these areas may indirectly affect companies through incentives, funding, and operational considerations.
SPEECH AND CONTENT GOVERNANCE
The Framework reflects a focus on protecting lawful expression and limiting government-driven content restrictions in AI systems.
Practical impact: Content governance will remain an area of scrutiny, with potential implications for platform policies and risk management.
Practical implications for companies
THE PATCHWORK PERSISTS FOR NOW
Companies should continue to plan for compliance with state-level AI laws, which remain fully enforceable unless and until federal legislation is enacted.
FEDERAL PREEMPTION IS A POSSIBILITY, NOT A CERTAINTY
While preemption is a central policy goal, it faces significant political and legal hurdles.
ENFORCEMENT RISK WILL CONTINUE TO BE DRIVEN BY STATES AND COURTS
State regulators, private litigants, and courts will remain primary drivers of AI-related risk in the near term.
TARGETED FEDERAL LEGISLATION IS MORE LIKELY THAN COMPREHENSIVE REFORM
Discrete areas such as child safety, fraud prevention, or deepfakes may see earlier legislative action.
GOVERNANCE PROGRAMS SHOULD REMAIN FLEXIBLE
Organizations should continue building adaptable AI governance frameworks capable of accommodating both state-specific requirements and potential future federal standards.
Nixon Peabody’s Technology Industry Team will continue to monitor legislative developments and provide updates as the federal AI policy landscape evolves.
For more information on the content of this alert, please contact your Nixon Peabody attorney or the author of this alert.
