Today Google unveiled a new set of principles guiding its approach to artificial intelligence, including a pledge not to build AI weapons, "technologies that gather or use information for surveillance violating internationally accepted norms" or ones "whose objective contravenes widely accepted principles of global law and human rights".
"While we are not developing AI for use in weapons, we will continue our work with governments and the military in many other areas", Pichai wrote.
It planted its ethical flag on the use of AI just days after confirming it would not renew a contract with the United States military to use its AI technology to analyse drone footage.
But can Google realistically stick to its now-public principles?
"These are not theoretical concepts; they are concrete standards that will actively govern our research and product development and will impact our business decisions", Google CEO Sundar Pichai wrote in a post. Several employees said that they did not think the principles went far enough to hold Google accountable-for instance, Google's AI guidelines include a nod to following "principles of worldwide law" but do not explicitly commit to following global human rights law.
Several Google figures, including former CEO and board chairman Eric Schmidt and Matt Cutts, who used to run Google's search spam team, have left the company in recent years to work for the Pentagon.
However, the search company will take on government contracts that it believes won't be used to hurt people (or at least will be beneficial enough to justify the harm). "We will continue to thoughtfully evaluate when to make our technologies available on a non-commercial basis", the post continues. To improve upon its principles, Google should commit to independent and transparent review to ensure that its rules are properly applied, critics said.
No Google AI technology will be used as a weapon or for surveillance that violates accepted norms, the policy states. In addition, the company will refuse to develop any AI projects that "cause or are likely to cause overall harm".
Google has been moving forward with applying AI across its suite of products, which has led to fun innovations such as Auto Awesome in Google Photos, as well as work in more serious areas such as health and conservation, which it showed off at its recent AI Stories event in Sydney. Google seems to recognise the massive potential of AI technology, so it wants to start building systems with this framework in mind. "Ultimately, how the company enacts these principles is what will matter more than statements such as this".