OpenAI and Anthropic Spend Millions to Buy AI Regulatory Moats
Silicon Valley giants are spending record amounts on lobbying to ensure new AI laws kill off open-source competition while protecting their own market dominance.
Anthropic and OpenAI spent $2.6M in a single quarter to hire former government aides and insert loopholes into the Senate’s AI safety bill, the Algorithmic Accountability Act, shielding their own proprietary models while crushing smaller competitors.
In the first quarter of 2026, the two leading firms in the generative AI race, Anthropic and OpenAI, deployed a combined $2.6 million into federal lobbying efforts. According to filings with the Senate Office of Public Records (SOPR), Anthropic reported $1.6 million in expenditures—a 400% increase over its previous year’s spending. Simultaneously, OpenAI reported $1.0 million, its highest quarterly outlay to date. This influx of capital was not a general branding exercise; it was a surgical strike aimed at the Senate Commerce Committee during a critical window for the 'Algorithmic Accountability Act.'
[Regulatory Capture] is the process by which a regulated industry exerts such strong influence over the government body responsible for its oversight that the agency effectively acts as an advocate for the industry rather than the public.
The timing of these payments directly aligns with the committee’s private markup sessions held in February 2026. Records from the Legis1 Salesforce database indicate that at least four specific amendments to the Act were drafted with direct input from industry-hired counsel. These amendments focused on redefining the criteria for 'High-Risk AI.' By narrowing these definitions, the firms successfully lobbied to exclude their own proprietary, closed-source models from the most stringent oversight, while advocating for rules that would place a heavy compliance burden on smaller developers.
Central to this strategy is the concept of compute thresholds. [Compute Thresholds] are regulatory benchmarks based on the amount of raw computing power used to train an AI model, often used as a proxy for the model’s potential risk level. Anthropic and OpenAI successfully lobbied for high mandatory thresholds paired with costly audit obligations for any model that crosses them. To the casual observer, this looks like a safeguard. In practice, it means any developer who wants to train a model above the threshold must absorb regulatory compliance and audit costs that only firms with multi-billion dollar venture capital backing can afford. By setting the bar at a level they have already cleared, and having already absorbed the cost of clearing it, these firms are effectively pulling up the ladder behind them.
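The mechanics of a compute-threshold rule can be sketched in a few lines. This is an illustrative toy model only: the 1e26 FLOP cutoff and the example training-compute figures below are hypothetical stand-ins, not values from the Algorithmic Accountability Act or its amendments.

```python
# Toy sketch of a compute-threshold regulatory tier.
# The cutoff and example FLOP counts are hypothetical illustrations.

HIGH_RISK_THRESHOLD_FLOPS = 1e26  # hypothetical statutory cutoff

def regulatory_tier(training_flops: float) -> str:
    """Classify a model by the raw compute used to train it."""
    if training_flops >= HIGH_RISK_THRESHOLD_FLOPS:
        return "high-risk: mandatory audits and compliance reporting"
    return "exempt: below the compute threshold"

# An incumbent frontier model sits above the bar and has already
# absorbed the fixed audit costs that come with this tier.
print(regulatory_tier(5e26))

# An open-source developer below the bar is exempt today, but faces
# the same fixed compliance wall the moment it tries to scale past it.
print(regulatory_tier(3e24))
```

The moat effect comes from where the bar is placed relative to who has already paid to cross it: the classification is cheap to compute, but the audit regime it triggers is a fixed cost that favors incumbents.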
The lobbying force behind this surge consists of a familiar roster of Washington insiders. SOPR filings identify three former senior aides from the Senate Commerce and Judiciary committees who are now registered as in-house lobbyists for these firms. These individuals, who previously helped write the rules of the road for the technology sector, are now being paid to install exit ramps for their new employers. This revolving door ensures that the technical nuances of the legislation—language that could make or break a startup—are curated by those with the deepest pockets.
Mainstream coverage has largely framed this cooperation between Silicon Valley and DC as a shared commitment to 'AI safety' and 'human alignment.' However, internal shifts within these companies tell a different story. The surge in political spending coincides with the quiet departure of several senior ethics researchers from both Anthropic and OpenAI. This suggests a pivot from internal safety engineering to external political maneuvering. While the public is told that regulation is necessary to prevent existential risk, the specific amendments being pushed shield proprietary training data from public audit while mandating transparency for smaller 'general purpose' models.
For the average person, this consolidation of power is not an abstract policy debate. It translates to a future where a handful of corporations control the algorithms that determine everything from credit eligibility to healthcare access. When competition is stifled by high regulatory moats, costs for AI-powered services remain artificially high, and the democratization of technology through open-source innovation is suppressed. The $2.6 million spent in Q1 is an investment in a duopoly, paid for with the capital of Microsoft and Google, and designed to be recouped through the future dependence of the American public on closed-box systems.
At Gen Us, we believe in following the receipts. You can use our Politician Tracker to see which members of the Senate Commerce Committee received donations from PACs associated with these firms in the lead-up to the February markup sessions. Transparency isn't just about what the models do; it's about who owns the people who write the laws.
Summary
Record-breaking Q1 2026 lobbying expenditures from AI incumbents targeted the Senate Commerce Committee to shape the Algorithmic Accountability Act. This capital deployment, utilizing a revolving door of former congressional staffers, aims to establish regulatory moats that favor high-capital firms over open-source competitors.
⚡ Key Facts
- Anthropic increased its lobbying spend by 400% in Q1 2026 to $1.6M, while OpenAI spent a record $1.0M.
- Lobbying efforts targeted the Senate Commerce Committee during private markup sessions for the Algorithmic Accountability Act.
- Three former senior congressional aides are now working as in-house lobbyists for these firms to influence legislative language.
- Legis1 Salesforce data reveals at least four bill amendments were drafted with direct input from industry-hired counsel.
- The proposed regulations include high compute thresholds that create a 'moat' around incumbents, effectively blocking open-source competition.
Our Independence
This story was written by Gen Us - independent journalists exposing the networks of power that corporate media protects. No hedge fund owns us. No billionaire edits our headlines. We answer only to you, our readers.