Anthropic’s $1.6M Lobbying Blitz: Buying Safety Rules to Kill Competitors
Investigation: Anthropic doubled its lobbying spend to codify internal policies into federal law, creating a regulatory 'moat' that threatens to stifle AI rivals.
Anthropic is spending record amounts to hire former government staffers and ensure new AI laws reflect its own corporate policies, effectively pricing out open-source competition.
Anthropic, the San Francisco-based AI firm marketed as a 'safety-first' alternative to its rivals, reported $1.6 million in lobbying expenditures for the first quarter of 2026. This figure, pulled from official LD-2 Lobbying Disclosure filings, represents a 115% increase from the $744,000 spent in Q4 2025. By comparison, OpenAI—a company with a significantly higher market valuation—reported spending only $1.0 million during the same period. This 60% spending gap signals a strategic pivot by Anthropic to lead the legislative charge in Washington, D.C.
According to Senate Lobbying Disclosure databases, the surge in spending funded a team of 14 new lobbyists. Of these hires, 10 are former senior Congressional staffers, primarily from the Senate Commerce Committee and the House Energy and Commerce Committee. This 'revolving door' approach ensures that the lobbyists shaping the fine print of the 2026 AI Accountability Act are sitting across the table from their former colleagues. [Regulatory Capture] is the process by which a government agency, created to act in the public interest, instead advances the commercial or political concerns of the special interest groups that dominate the industry it is charged with regulating.
The target of this influence campaign is the specific language regarding 'compute thresholds' in the upcoming AI Accountability Act. Draft revisions of the bill obtained by Gen Us show that proposed safety audit requirements mirror the exact technical specifications found in Anthropic’s internal 'Responsible Scaling Policy.' Specifically, the draft mandates that any model trained using more than 10^25 Floating Point Operations (FLOPs) must undergo a third-party safety audit. [Compute Threshold] refers to a regulatory limit based on the total amount of mathematical operations used to train an AI model, used as a proxy for its potential risk.
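To put the 10^25 FLOPs figure in context, here is a rough back-of-the-envelope sketch using the widely cited approximation that transformer training compute is roughly 6 × parameters × training tokens. The model sizes below are illustrative assumptions for comparison, not figures from the bill or from any lab's disclosures:

```python
# Back-of-the-envelope training compute estimate for a transformer model,
# using the common approximation: FLOPs ≈ 6 * parameters * training tokens.
# All model sizes below are illustrative, not taken from the draft bill.

AUDIT_THRESHOLD_FLOPS = 1e25  # compute threshold cited in the draft bill


def estimated_training_flops(parameters: float, tokens: float) -> float:
    """Rough estimate of total training compute in FLOPs."""
    return 6 * parameters * tokens


def requires_audit(parameters: float, tokens: float) -> bool:
    """Would a model of this scale cross the draft bill's audit threshold?"""
    return estimated_training_flops(parameters, tokens) >= AUDIT_THRESHOLD_FLOPS


# A hypothetical 7B-parameter model trained on 2 trillion tokens:
# 6 * 7e9 * 2e12 = 8.4e22 FLOPs -- well under the threshold, no audit.
print(requires_audit(7e9, 2e12))   # False

# A hypothetical 500B-parameter model trained on 10 trillion tokens:
# 6 * 5e11 * 1e13 = 3e25 FLOPs -- over the threshold, audit required.
print(requires_audit(5e11, 1e13))  # True
```

The practical upshot is that the threshold exempts the small models an independent developer can realistically train, while capturing frontier-scale runs; the regulatory debate is over where that line sits and who can afford the audits it triggers.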
While marketed as a safety measure, this threshold creates a significant financial barrier. According to data from the Center for Security and Emerging Technology (CSET), a single third-party audit for a model of this scale is estimated to cost at least $250,000. For Anthropic, backed by $4 billion in venture capital from Amazon and Google, this is a minor administrative expense. For the independent developers who constitute 90% of the current open-source ecosystem, it is a death knell. [Open Source AI] describes artificial intelligence models whose underlying code and training weights are made publicly available for anyone to inspect, modify, and run.
The mainstream narrative presents Anthropic as the 'adult in the room,' proactively seeking regulation to prevent existential risks. However, the LD-2 filings reveal a more pragmatic financial motive. By codifying their own internal standards into federal law, Anthropic is effectively building a 'moat' around their business. If the law requires expensive audits that only the most well-funded labs can afford, the competitive threat from low-cost, open-source alternatives is neutralized. This allows incumbents to maintain high subscription prices and control the direction of the technology without fear of disruption from lean, independent startups.
This legislative strategy is funded by massive infusions of corporate capital. SEC filings show that Amazon and Google have committed a combined $4 billion to Anthropic. That capital is now being converted into K Street influence to ensure the AI market remains a duopoly or triopoly. The revolving door is spinning at full speed: two of the lobbyists hired by Anthropic in February 2026 were, only weeks prior, serving as legislative directors for members of the very committees marking up the AI Accountability Act. This isn't just influence; it is the direct outsourcing of legislative drafting to the regulated entities themselves.
For the ordinary person, this regulatory shift means a future where AI is controlled by a handful of massive corporations. It means the death of independent, privacy-focused AI tools that users can run on their own hardware. Instead, the public will be forced to rely on corporate 'black box' systems that require constant subscription fees and facilitate data harvesting. When competition is regulated out of existence, the consumer loses both choice and leverage.
At Gen Us, we believe in radical transparency. You can use our Politician Tracker to see which members of the Senate Commerce Committee received donations from Anthropic-affiliated lobbyists this quarter. We have also uploaded the full list of the 14 lobbyists and their previous government roles to our 'Revolving Door' database. Investigate the links between the $1.6 million spent and the votes cast in the upcoming committee markup.
Summary
The AI startup Anthropic increased its lobbying spend by 115% this quarter, significantly outspending rival OpenAI to influence federal safety legislation. This investigation reveals how the firm is leveraging former Congressional staffers to turn its internal policies into mandatory regulatory hurdles for competitors.
⚡ Key Facts
- Anthropic's Q1 2026 lobbying spend rose to $1.6M, a 115% increase that outpaced OpenAI by 60%.
- 10 of Anthropic's 14 new lobbyist hires are former senior staffers from key Congressional oversight committees.
- Proposed language in the 2026 AI Accountability Act directly mirrors the compute-threshold specifications in Anthropic's internal 'Responsible Scaling Policy.'
- The bill’s $250k+ audit requirement for high-compute models would financially exclude 90% of current open-source AI developers.
- Amazon and Google’s $4B investment in Anthropic is being utilized to secure a regulatory moat via K Street.
Our Independence
This story was written by Gen Us - independent journalists exposing the networks of power that corporate media protects. No hedge fund owns us. No billionaire edits our headlines. We answer only to you, our readers.