Google pledges not to develop AI weapons

Friday, 08 June 2018

Whilst most technology companies have already agreed in general terms to use artificial intelligence only for good, Google has set out much more precise standards, stating that it will not allow its artificial-intelligence products to be used in military weapons. In a set of ethical principles released Thursday, the company set out seven guidelines for the use of its AI:

  1. Be socially beneficial
  2. Avoid creating or reinforcing unfair bias
  3. Be built and tested for safety
  4. Be accountable to people
  5. Incorporate privacy design principles
  6. Uphold high standards of scientific excellence
  7. Be made available for uses that accord with these principles

It also outlined applications it will not pursue:

  1. Technologies that cause or are likely to cause overall harm
  2. Weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people
  3. Technologies that gather or use information for surveillance violating internationally accepted norms
  4. Technologies whose purpose contravenes widely accepted principles of international law and human rights

Google recently came under fire from its own employees for supplying image-recognition technology to the U.S. Department of Defense in a partnership called Project Maven. After receiving a petition signed by thousands of employees, some of whom went as far as resigning, the company told employees earlier this month that it would not seek to renew its Project Maven contract.

In a blog post, Chief Executive Sundar Pichai explained that whilst the new principles forbid the development of AI weaponry, the company "will continue [its] work with governments and the military in many other areas."

He stated that Google is using AI "to help people tackle urgent problems", from predicting wildfires and helping farmers to diagnosing diseases and preventing blindness.

This move comes amid years of growing concern that robotic or automated systems could be misused and lead to chaos. Just last month, a coalition of human rights and technology groups put out a document titled the Toronto Declaration, which calls for governments and tech companies to ensure AI respects basic principles of equality and non-discrimination.

Machine learning and artificial intelligence have inevitably become more important for defence and intelligence work. Other US-based tech companies, such as Microsoft and Amazon, have bid on multibillion-dollar cloud-computing projects with the Pentagon.
