Google Pledges Not to Develop AI Weapons, But Says It Will Still Work With the Military

Indian Technology News

June 12, 2018

Google has released a set of principles to guide its work in artificial intelligence, making good on a promise it made last month amid a months-long controversy over its involvement in a Department of Defense drone project. The document, titled “AI at Google: our principles” and published June 8 on Google’s primary public blog, sets out the objectives the company is pursuing with AI, as well as the applications it refuses to participate in. It’s authored by Google CEO Sundar Pichai.

Notably, Pichai says his company will never develop AI technologies that “cause or are likely to cause overall harm,” that are built for use in weapons, that gather information for surveillance violating “internationally accepted norms,” or “whose purpose contravenes widely accepted principles of international law and human rights.” The company commits that its AI applications will be “socially beneficial”; will “avoid creating or reinforcing unfair bias”; will be built and tested safely; will be accountable to human beings and subject to human control; will incorporate privacy; will “uphold high standards of scientific excellence”; and will be made available only for uses that align with those previous six principles.

“At Google, we use AI to make products more useful—from email that’s spam-free and easier to compose, to a digital assistant you can speak to naturally, to photos that pop the fun stuff out for you to enjoy,” Pichai writes. “We recognize that such powerful technology raises equally powerful questions about its use. How AI is developed and used will have a significant impact on society for many years to come. As a leader in AI, we feel a deep responsibility to get this right.”

However, Pichai does not rule out working with the military in the future. “We want to be clear that while we are not developing AI for use in weapons, we will continue our work with governments and the military in many other areas,” he writes. “These include cybersecurity, training, military recruitment, veterans’ healthcare, and search and rescue.”

Gizmodo reported last week that Google plans to end its involvement with Project Maven, a government initiative that uses Google’s open-source machine learning libraries to parse drone footage. Google says it was not involved in operating drones, but any involvement with drone warfare on behalf of the U.S. government drew fierce backlash both inside and outside the company. Thousands of employees signed an open letter urging Google to cut ties with the program, and about a dozen employees had resigned over the company’s continued involvement as of last month.

Eventually, Google Cloud CEO Diane Greene told employees that the company would end its involvement with Project Maven when its contract expires in 2019. According to Wired, Google’s work with Project Maven would fall outside the work it plans to continue with the military, because using AI to analyze drone footage “doesn’t follow the spirit of the new guidelines.” In addition to releasing its AI ethics guidelines, Google also published a “Responsible AI Practices” document June 8 that outlines best practices for overall design, fairness and bias, privacy and security, and other issues the company considers important in AI development.

Google’s decision to outline its ethical stance on AI development comes after years of alarm-sounding over the threat that automated systems and the potential development of so-called artificial general intelligence (human-level AI) pose to society and the human race. Just last month, a coalition of human rights and technology groups released a document titled the Toronto Declaration, which calls on governments and tech companies to ensure AI respects basic principles of equality and nondiscrimination.

Over the years, criticism and commentary regarding AI development have come from a wide-ranging group, from pessimists on the subject like Tesla and SpaceX CEO Elon Musk to more measured voices in the industry like Facebook AI scientist Yann LeCun. Silicon Valley companies are now putting more significant resources toward AI safety research, with help from ethics-focused organizations like the nonprofit OpenAI and other research groups around the world. A number of Silicon Valley companies, Google included, also belong to an existing coalition called the Partnership on AI that seeks “to ensure AI is understood by and benefits as many people as possible.”

Copyright 2018 FFC Information Solution Private Limited. All Rights Reserved.

