AI Policy · April 29, 2026 · 5 min read

Google Signs Classified AI Deal Giving the Pentagon Access to Gemini for 'Any Lawful Government Purpose'

Google finalized an agreement allowing Gemini models to run on classified U.S. military networks — including for mission planning and weapons targeting — while also agreeing to adjust safety filters at government request. More than 600 Google employees have signed an open letter opposing the deal.


Google finalized a classified agreement with the U.S. Defense Department on April 28 that allows Gemini AI models to be deployed on secure military networks for “any lawful government purpose.” That phrase covers mission planning, weapons targeting analysis, intelligence processing, and logistics — a scope that goes further than the narrow research use cases Google typically highlights in public AI contracts.

The contract also includes a provision requiring Google to help modify Gemini’s safety filters at the government’s request. That clause is what triggered the blowback. More than 600 Google employees signed an open letter circulating internally, urging leadership to refuse classified military AI work and demanding the company publish the contract’s full terms. Google has declined to do so.

The move is a direct reversal of a position Google held as recently as 2018, when the company declined to renew its Project Maven contract, a Pentagon drone surveillance program, after roughly 4,000 employees signed a protest letter. The current deal is broader: Maven applied computer vision to drone footage; this contract puts a general-purpose frontier AI model on classified networks with explicit targeting applications.

Google joins OpenAI and xAI as frontier AI labs now operating under active Pentagon agreements. The military AI contracting wave accelerated after the White House AI Executive Order in January 2025 directed DoD to fast-track commercial AI adoption. Reports indicate individual contract values range up to $200 million.

The safety filter provision is the detail most likely to matter long-term. By default, Gemini refuses to assist with detailed weapons targeting, mass casualty planning, and certain surveillance tasks. Modifying those refusals at the government's request means the classified deployment of Gemini operates under a different ethical framework than the consumer and enterprise versions, with no public visibility into where the lines are drawn.

Google’s justification is pragmatic: if frontier AI is going to end up on military networks regardless, it’s better for a safety-focused lab to be the supplier than to cede the space to less safety-conscious vendors. Critics inside and outside the company argue that this reasoning normalizes the erosion of AI safety constraints under government cover.

The employee backlash is unlikely to stop the contract — Google’s cloud business depends on large government deals, and the company showed in 2019 that it would maintain DoD relationships despite internal opposition. The real question is whether the Gemini safety modifications will be disclosed in any form, and whether this contract model becomes the template for other frontier AI labs navigating the same tension between safety commitments and defense revenue.

Tags: Google · Gemini · Pentagon · AI policy · military AI