'I Would Rather Go To Jail': Sam Altman Rewrites Pentagon Deal After Public Backlash
· Free Press Journal

OpenAI has revised its controversial agreement with the US Department of Defense after CEO Sam Altman admitted the company 'shouldn't have rushed' its initial announcement - one that triggered a swift and fierce public reaction. The contract led a growing number of users to delete their ChatGPT accounts, while rival Anthropic shot to the top of app store charts in several countries. The damage was real, and Altman knew it.
What is Sam Altman changing in the deal with the DoW?
In an internal memo he later shared publicly on X, Altman said OpenAI would add language to its contract stating that the AI system 'shall not be intentionally used for domestic surveillance of US persons and nationals,' citing the Fourth Amendment, the National Security Act of 1947, and the Foreign Intelligence Surveillance Act of 1978.
Altman also confirmed that the Department of Defense had affirmed OpenAI's services would not be used by military intelligence agencies such as the NSA, and that any such use would require a separate contract modification.
On the question of constitutional limits, Altman left little room for ambiguity. "If we were asked to do something unconstitutional or illegal, we will walk away. Please come visit me in jail if necessary," he wrote on X.
Here is a repost of the internal memo:
— Sam Altman (@sama) March 3, 2026
We have been working with the DoW to make some additions in our agreement to make our principles very clear.
1. We are going to amend our deal to add this language, in addition to everything else:
"• Consistent with applicable laws,…
Three red lines OpenAI will not cross
OpenAI laid out three non-negotiables as part of the deal: no mass domestic surveillance using OpenAI technology, no directing autonomous weapons systems with OpenAI technology, and no high-stakes automated decisions - citing a 'social credit' system as an example.
Altman argued that enforcement ultimately rests on OpenAI retaining control. The company says it keeps "full discretion over our safety stack" and keeps OpenAI personnel "in the loop," backed by contract language and existing US law.
(I also would like to share this, which I wrote after thinking a little more.)
— Sam Altman (@sama) March 3, 2026
There is a lot we will talk about in the coming days, but since this is one of the first "real deal" decisions we have faced, I wanted to share a few things that have been heavily on my mind the past…
OpenAI struck the deal that Anthropic walked away from
The deal didn't emerge in a vacuum. The background to the controversy lies in a failed agreement between Anthropic and the Department of Defense. Anthropic had attempted to negotiate safeguards that would prevent the Pentagon from using its AI models for mass surveillance of Americans or incorporating them into autonomous weapons capable of striking targets without human oversight - demands the DoD refused to accept. Defense Secretary Pete Hegseth subsequently designated Anthropic a supply-chain risk.
OpenAI stepped in almost immediately, striking its own deal. Critics, including OpenAI's former head of policy research Miles Brundage, argued that "OpenAI employees' default assumption here should unfortunately be that OpenAI caved and framed it as not caving, and screwed Anthropic while framing it as helping them."
Altman pushed back. He said he strongly disagreed with the supply-chain risk designation, calling it "a very bad decision from the DoW," and hoped the Pentagon would reverse course.
The internet erupted with mass Claude adoption
Claude surged past ChatGPT to become the most downloaded free app in Apple's US App Store. Protesters in San Francisco, forming a group called Quit GPT, planned a demonstration outside OpenAI's headquarters.
Altman reached out to Emil Michael, the Undersecretary of Defense for Research and Engineering, to rework part of the contract, with deal terms now explicitly prohibiting OpenAI's technology from being used on "commercially acquired" data - a protection absent from the original agreement.
"We were genuinely trying to de-escalate things and avoid a much worse outcome, but I think it just looked opportunistic and sloppy. Good learning experience for me as we face higher-stakes decisions in the future," Altman wrote.
Not everyone is satisfied with the revised language. Activists argued that the inclusion of the word 'intentionally' in the surveillance prohibition leaves loopholes that could allow the technology to be misused, and that the policy does not adequately address risks associated with AI-powered autonomous weapons systems.