Gen Us | Corporate Investigation

Meta Outspends All AI Rivals Combined to Kill Federal Safety Guardrails

Meta Platforms funneled $7.1 million into federal lobbying in Q1 2026, doubling down on a strategy to dismantle AI regulations that would hold developers liable for model misuse. This spending blitz specifically targets the removal of oversight for 'open source' models, outstripping the combined influence budgets of OpenAI and Anthropic.

TL;DR

Meta is using a record-breaking $7.1 million lobbying blitz to rebrand its AI as 'open source' infrastructure, a move designed to shield the company from legal liability and dismantle federal safety regulations.

Meta Platforms reported spending $7.1 million on federal lobbying in the first quarter of 2026. According to LD-2 disclosure filings submitted to the Secretary of the Senate, this represents a 34% surge over the previous quarter. The expenditure does not just reflect a rise in activity; it signals a total pivot in corporate strategy. For the first time, Meta has outspent its primary artificial intelligence competitors, OpenAI ($2.4 million) and Anthropic ($1.8 million), combined. The target is clear: the federal framework intended to regulate the next generation of AI.

The $7.1 million was funneled through high-profile firms including Akin Gump Strauss Hauer & Feld and Stewart Strategies. But the most surgical portion of the spending moved through Meta’s PAC and direct executive contributions. Federal Election Commission (FEC) records show that $450,000 was directed to members of the House Energy and Commerce and Senate Commerce Committees within a narrow 90-day window. These are the same committees currently debating the 'AI Safety and Innovation Act'—a bill that would decide whether companies like Meta are held liable for the damage their AI models cause.

[Open-Weights Models] are AI systems whose underlying mathematical parameters (the 'weights') are released to the public, allowing anyone to download, run, and modify the model on their own hardware.

Mainstream coverage has largely framed Mark Zuckerberg’s 'open source' push as a populist alternative to the 'closed' systems of Google and OpenAI. However, internal Meta policy memos obtained by Gen Us investigators show a different priority: rebranding Llama models as 'public infrastructure' specifically to evade 'Closed Model' licensing fees. By positioning Llama as a public good, Meta aims to bypass the rigorous testing and safety audits that federal regulators want to impose on large-scale AI developers.

Nick Clegg, Meta’s President of Global Affairs, is currently overseeing the 'Open Loop' initiative. This program coordinates with Washington D.C. think tanks to promote self-regulation for open-weights models. While the public hears about 'innovation' and 'democratization,' the lobbying filings show a focus on the 'Data Privacy and Research Act.' Meta’s goal is to ensure that 'open source' exceptions are baked into every draft of federal AI guardrail legislation. If they succeed, Meta can continue to scrape vast amounts of user data to train models while bearing none of the responsibility for how those models are eventually used by third parties.

[Regulatory Capture] is a form of corruption where a government agency, created to act in the public interest, instead advances the commercial or political concerns of special interest groups that dominate the industry.

Joel Kaplan, Meta’s VP of Global Public Policy and a former Bush administration official, is leading the outreach to GOP lawmakers. His office is framing AI regulation as a 'China-competition' issue rather than a consumer safety issue. The logic presented to lawmakers is simple: if the U.S. regulates AI too heavily, China will win. This narrative has been particularly effective with Senator Maria Cantwell, Chair of the Senate Commerce Committee. According to OpenSecrets data, Cantwell is a top recipient of Meta-affiliated donations. She currently acts as the primary gatekeeper for the very AI safety legislation Meta is seeking to weaken.

One of the most dangerous, yet least reported, aspects of this lobbying blitz is the effort to remove 'Know Your Customer' (KYC) requirements. [Know Your Customer (KYC)] refers to mandatory due diligence procedures designed to verify the identity of clients to prevent money laundering, fraud, or, in this context, the training of malicious AI by rogue actors. Meta’s lobbyists are pushing to strip these requirements from compute providers. This would make it impossible for federal authorities to track who is using massive amounts of processing power to train potentially harmful models using Meta’s open-weights code.

The strategic 'poison pill' in Meta's open-weights advocacy is designed to destroy the business models of smaller AI safety startups. These startups rely on regulatory compliance consulting, a market that vanishes if there are no regulations to comply with. By lobbying for a 'no-rules' environment for open models, Meta eliminates both that market and the competitive threat posed by safer, regulated alternatives, leaving the field to the largest players with the most data: Meta itself.

For the average person, this corporate maneuvering has high-stakes consequences. If a Meta-provided model is used to facilitate mass identity theft, automated phishing, or the creation of non-consensual deepfakes, current lobbying efforts ensure that Meta cannot be sued. By releasing the weights and calling it 'open source,' Meta is attempting to create a legal shield that leaves victims with no recourse against the billionaire-backed source of the code. Your data is the fuel for their engine, but when the engine crashes, they want to make sure you—or the person who 'borrowed' the keys—are the only ones left with the bill.

You can track the influence of Meta's $7.1 million spend on our Gen Us Politician Tracker. See exactly which representatives on the House Energy and Commerce Committee took Meta PAC money before voting on AI safety amendments. Knowledge is the first step toward accountability.


Key Facts

  • Meta's $7.1M lobbying spend in Q1 2026 is a 34% increase over the previous quarter.
  • The company outspent OpenAI and Anthropic combined to influence the 'AI Safety and Innovation Act'.
  • FEC records show $450,000 targeted key committee members responsible for AI oversight within 90 days.
  • Lobbying efforts specifically target the removal of 'Know Your Customer' requirements for AI compute providers.
  • Internal memos reveal the 'open source' push is a legal strategy to offload liability for model misuse.

Our Independence


This story was written by Gen Us - independent journalists exposing the networks of power that corporate media protects. No hedge fund owns us. No billionaire edits our headlines. We answer only to you, our readers.