
More than 580 Google employees have signed a petition urging the CEO to reject classified AI contracts with the U.S. military.

wallstreetcn ·  Apr 28 05:08

The trigger for this round of employee protest is that once AI tools are deployed in classified systems, Google will have no way to monitor how they are actually used. The Pentagon and Google are currently negotiating the use of Google's AI tools for "all lawful purposes"; critics argue that in practice this phrase could cover weapons-system development and large-scale surveillance. The dispute adds further uncertainty to the business relationship between Google and the Department of Defense.

A new wave of employee protests has erupted inside Alphabet (GOOG). Hundreds of artificial-intelligence researchers have signed an open letter demanding that CEO Sundar Pichai refuse to let the company's AI systems be used for classified work by the US Department of Defense.

According to Bloomberg, the organizers claim to have gathered over 580 signatures, including more than 20 directors, senior directors, and vice presidents, as well as several senior employees from DeepMind, Google's AI research unit. The organizers stated that the letter will be delivered to Pichai this Monday.

The open letter explicitly demands that Google reject all classified workloads, arguing that once AI tools are deployed in classified systems isolated from the public internet, the company will have no way to monitor how they are actually used.

The letter comes amid ongoing negotiations between Google and the Pentagon. Reports indicate the two parties are discussing the use of Google's AI tools for "all lawful purposes." Critics argue that in practice this phrasing could encompass fully autonomous weapon systems and large-scale domestic surveillance.

For the market, this employee action adds further uncertainty regarding the pace of Google’s future defense business expansion and its compliance boundaries.

Employee Concerns: Ambiguous Contract Boundaries and High Risk of Losing Control Over Classified Systems

The core demand of the open letter is that employees believe Google has failed to establish specific and enforceable red lines regarding the use of AI in classified networks.

Sofia Liguori, an AI research engineer at Google DeepMind’s UK branch, stated that she signed the letter because Google had not discussed the use of AI in classified areas with its employees. She pointed out that once AI tools are deployed in classified systems, the company will technically be unable to track or restrict how these tools are actually used.

"The company’s consistent response has been to assure employees that leadership will sign good contracts," said Liguori. "But these statements are very broad." She specifically highlighted the risks of 'Agentic AI':

"The level of autonomy it can achieve is concerning. It’s like handing over an extremely powerful tool while also relinquishing any control over how it is used."

The open letter stated: "At present, the only way to ensure that Google is not associated with such harms is to refuse all classified workloads. Otherwise, such applications may occur without our knowledge and beyond our ability to prevent them."

History Repeats: The Boundary Dispute from Maven to the Present

This employee action is not the first of its kind. In 2018, Google employees strongly protested the company's involvement in the Pentagon's Project Maven, which used AI to analyze drone video footage and identify targets.

In response to employee opposition and a wave of resignations, Google eventually formulated new AI principles and decided not to renew the Maven contract.

However, Google has gradually rebuilt its relationship with the defense industry since then, and last year it removed language from its AI principles that had committed the company to avoid using the technology for weapons or other potentially harmful applications.

The organizers of this joint letter stated frankly in their declaration: "Maven is not over." They noted, "Workers will continue to organize around the weaponization of Google's AI technologies until the company establishes clear and enforceable boundaries."

Google and the Pentagon: Accelerating Cooperation, Uncertain Boundaries

Meanwhile, Google's cooperation with the Pentagon has advanced substantially. In March this year, Google made its Gemini AI agent available for unclassified use to more than 3 million Department of Defense employees, following the rollout of the Gemini chatbot in December last year.

Emil Michael, the Under Secretary of Defense for Research and Engineering, told Bloomberg in March that the Pentagon would begin with the Gemini agent at the unclassified level, "and then we will move into the classified and top-secret levels," confirming that negotiations over the use of Google's AI agents in classified cloud environments were already underway.

Google's negotiations with the Pentagon over authorizing AI tools for "all lawful purposes" are progressing. Critics argue that this phrasing could encompass weapon systems. In response, the Pentagon strongly rejected such characterizations, arguing that commercial companies should not have veto power over usage policy in wartime or during military preparedness.

The controversy also contrasts with Anthropic's recent situation: reports indicate that the Pentagon is seeking to exclude Anthropic and its Claude AI tool from the U.S. defense supply chain while actively scouting new tech-giant partners, making Google one of the potential alternatives.
