Google to Separate from Project Maven

Tech giant to forgo future contracts with the Air Force AI initiative after employees express ethical concerns


Following outspoken protests from thousands of employees, Google decided not to continue its artificial intelligence (AI) work for the U.S. Air Force’s Project Maven once the current contract period ends in 2019.

Project Maven, also known as the Algorithmic Warfare Cross-Functional Team, is the Pentagon’s signature machine learning program focused on using computer vision to develop military capabilities such as data labeling, video processing, and object detection and classification. A significant portion of Google’s workforce voiced concerns about the company developing systems that could be used to refine tracking and targeting for drone strikes overseas.

In April, an internal letter from Google employees to Chief Executive Sundar Pichai requested the company step away from Project Maven and “enforce a clear policy stating that neither Google nor its contractors will ever build warfare technology.” It garnered almost 4,000 signatures. In May, about a dozen Googlers resigned in further protest, worried about ushering in an age of advanced, autonomous weapons.

By June 1, the objections had left an impression on Google decision-makers; Google Cloud CEO Diane Greene held an employee meeting to announce the company’s exit from the project. And on June 7, the company released a new code of ethics establishing that Google will not design AI for use in weapons or other technologies likely to cause overall harm or directly facilitate injury to people. The document specifies Google won’t sever all ties with the Defense Department, and will continue partnering on cybersecurity, training, recruitment, veterans’ healthcare, and search and rescue efforts. The document also outlines seven objectives for AI, positing the technology should be socially beneficial, avoid creating or reinforcing bias, be accountable to people, and more.

Project Maven officially became operational in December 2017, when the team deployed an object recognition algorithm to the Middle East to identify features in video footage captured by unmanned aerial vehicles (UAVs). The initiative’s early success and continued testing have broadened the scope of its applications, which have begun to include drone strikes and targeted missile attacks.

Lethal use is a grey area for myriad emerging military applications of AI. Since 2012, the U.S. military has had doctrine in place requiring a “human in the loop” on all potentially life-threatening decisions. But the day may come when a machine’s decision-making during combat is quicker, more objective, and more confident than a human’s. If that day arrives, would the military grant AI systems the authority to take human lives in the interest of national security? This question sits at the technology’s ethical core.

As evidenced by the Google situation, a divide exists in the tech community over AI’s role in combat. Lethal AI in the wrong hands could be devastating for civilian populations. Conversely, it may be disadvantageous for the military to shy away from AI research and development in a warfighting context while adversaries explore ways to employ AI against the U.S.

Google’s withdrawal from Project Maven isn’t likely to impede development of autonomous warfighting systems. Other tech giants such as Microsoft and Amazon have competed for Project Maven contracts in the past and haven’t experienced the same resistance among their workforces. In addition, the project’s budget grew to $131 million in March with President Trump’s new federal spending bill.

The military is confident AI has an important place on tomorrow’s battlefield. But for Google, a progressive, public-facing company, the financial or national security benefits of applying the technology to autonomous weapons don’t outweigh the risks or the damage to its reputation.

The Google controversy is indicative of a need for a larger national conversation surrounding law and policy that keeps pace with advancements in AI. Until AI regulations are established, it’s likely more organizations will seek to regulate themselves. Read our special report from SXSW 2018 to learn more about the state of AI and machine ethics.

Photo Credit: U.S. Air Force
