The White House has released its long-awaited AI legislative recommendations, a roadmap built around seven broad policy goals. The framework emphasizes the need for a preemptive national standard for AI development, use, and liability, though significant Democratic opposition, coupled with the razor-thin GOP majority in the House, may complicate its path in Congress.
On March 21, 2026, the White House released its Legislative Recommendations for a National Policy Framework for AI (AI Framework), reiterating the need for a preemptive national standard regarding AI development, usage limitations, and third-party misuse liability. The framework acts as a legislative roadmap centered on seven major policy goals: protecting children and empowering parents; strengthening AI infrastructure, security, and economic access; respecting intellectual property (IP) and creator rights; preventing censorship and protecting free speech; removing barriers to AI innovation; educating Americans and developing an AI-ready workforce; and establishing a preemptive federal policy framework. Pursuing these goals will involve various congressional committees and key federal agencies. The AI Framework follows President Trump’s December 2025 Executive Order aimed at overriding state AI regulations, issued as a state-by-state regulatory landscape emerged. While key Republican lawmakers support federal preemption, significant Democratic opposition, particularly among members on relevant committees, may complicate the legislative path forward in Congress.
Under the first goal, protecting children and empowering parents, the AI Framework calls on Congress to give parents tools for managing children's privacy, screen time, content exposure, and accounts on AI services. It advocates privacy-protective age assurance requirements for AI services accessed by minors, implementation of safety features to mitigate risks like sexual exploitation and self-harm, clarification of how existing child privacy protections apply to AI, and preservation of states’ ability to enforce child protection laws, including those addressing AI-generated child sexual abuse material (CSAM). Recent legislative efforts in the House Energy and Commerce (E&C) Committee, such as the Kids Internet and Digital Safety (KIDS) Act (which incorporates elements of KOSA, the SCREEN Act, the SAFE BOTs Act, Sammy’s Law, and the App Store Accountability Act), aim to address these goals. The Senate has also passed its version of COPPA 2.0 and plans to advance related children's online protection legislation.
Under the second goal, strengthening AI infrastructure, security, and economic access, the AI Framework directs Congress to protect residential ratepayers from rising electricity costs driven by AI data center expansion, streamline federal permitting for AI infrastructure, enhance law enforcement efforts against AI-enabled scams targeting vulnerable populations, ensure national security agencies can assess and mitigate risks from frontier AI models, and expand AI access for small businesses through grants, tax incentives, and technical assistance. Lawmakers are increasingly concerned about the strain AI data centers place on the electric grid and are exploring ways to shift costs from consumers to developers. Bipartisan efforts focus on streamlining federal permitting for AI infrastructure and energy resources. Bills like Sen. Blackburn’s TRUMP AMERICA AI Act propose codifying ratepayer protection pledges, while other legislation, such as the AI Scam Prevention Act and the QUIET Act, targets fraud and consumer protection. Proposals like the AI Talent Act and the AI Risk Evaluation Act aim to boost federal AI expertise, and several bills seek to aid small businesses in AI adoption.
Under the third goal, respecting IP and creator rights, the AI Framework urges Congress to refrain from interfering with ongoing judicial determinations of whether AI training on copyrighted material constitutes fair use, while the Administration maintains that such training is lawful. It calls for exploring licensing or collective rights frameworks that would enable creators to negotiate compensation from AI developers without triggering antitrust liability. It also suggests considering a federal regime to protect individuals from unauthorized AI-generated digital replicas of their voice or likeness, with safeguards for First Amendment-protected uses, and monitoring evolving copyright law for legislative gaps created by AI. The NO FAKES Act, which would establish a federal right of publicity against unauthorized AI-generated replicas while incorporating free speech protections, is a notable legislative effort aligned with these goals.
Under the fourth goal, preventing censorship and protecting free speech, the AI Framework calls on Congress to prohibit federal agencies from pressuring technology and AI providers to moderate or alter content based on partisan or ideological considerations. It also seeks clear mechanisms for individuals to obtain redress when government actions improperly influence or censor expression on AI platforms. In recent months, particularly in the Senate, congressional Republicans have increasingly focused on concerns over "jawboning," instances in which federal agencies allegedly pressure tech platforms into content moderation or suppression of lawful speech. Senate Commerce Committee hearings in 2025 examined whether agencies like the Cybersecurity and Infrastructure Security Agency (CISA) inappropriately influenced content moderation decisions. Legislative proposals, such as the Transparency in Bureaucratic Communications Act, aim to increase transparency in agency communications with online platforms, especially communications concerning content moderation, specific online content, or platform technologies like algorithms, alongside proposals for Section 230 reform and broader free speech protections.
Under the fifth goal, removing barriers to AI innovation, the AI Framework calls on Congress to establish regulatory sandboxes that foster AI innovation and leadership, and to expand access to federal datasets in AI-ready formats for industry and academia. It emphasizes that no new federal AI regulator should be created, instead advocating oversight by existing sector-specific agencies and encouraging industry-led standards for AI development and deployment. Senator Ted Cruz’s SANDBOX Act, part of a broader legislative framework for American AI leadership, would create regulatory sandboxes in which AI developers could test and launch new technologies under modified or waived regulations. Additionally, the Future of AI Innovation Act, reintroduced by Senator Maria Cantwell and others, supports AI innovation by authorizing testbeds at national laboratories, establishing grand challenge prize competitions, and expanding access to public datasets. The CREATE AI Act also proposes establishing a National Artificial Intelligence Research Resource (NAIRR) to provide researchers, educators, and students with access to AI data and computational resources.
Under the sixth goal, educating Americans and developing an AI-ready workforce, the AI Framework calls on Congress to integrate AI training into existing education and workforce programs through non-regulatory methods, expand federal research into AI-driven workforce shifts to inform policy, and enhance the capacity of land-grant institutions to offer technical assistance, demonstration projects, and youth AI development initiatives. Several bipartisan small business proposals have been introduced, including the Small Business AI Advancement Act, the AI for Mainstreet Act, the AI-WISE Act, and the Small Business AI Training Act, which aim to provide resources and training for AI adoption. The AI Workforce PREPARE Act would establish an AI Workforce Research Hub to study AI's impact on jobs and improve data collection. The NSF Artificial Intelligence Education Act and the Land Grant Research Prioritization Act would further expand AI education, workforce development, and research through scholarships, professional training, and support for land-grant universities, particularly in agriculture and manufacturing applications.
Under the seventh goal, establishing a preemptive federal policy framework, the AI Framework reiterates the need for a preemptive national standard, particularly concerning AI development, limitations on its use, and liability for third-party misuse, while also clarifying that standard's scope. It specifies that a national standard should not preempt the traditional "police powers" retained by individual states, including enforcing laws of general applicability against AI developers and users, state zoning laws, and specific requirements governing a state's own use of AI technologies.
Akin’s lobbying and public policy team actively advises clients on navigating the constantly evolving AI regulatory landscape. The team will continue to closely track the implementation of the AI Framework and keep clients informed of significant developments in this area.