Microsoft’s CEO Calls for Accountable AI, Ignores the Algorithms That Already Rule Our Lives

Posted on June 29, 2016 8:10 pm

Satya Nadella warns that future smart software may be capable of discrimination. In fact, biased algorithms are already here.

Microsoft CEO Satya Nadella is concerned about the power artificial intelligence will wield over our lives. In a post on Slate yesterday he advised the computing industry to start thinking now about how to design intelligent software to respect our humanity.

“The tech industry should not dictate the values and virtues of this future,” he wrote.

Nadella called for “algorithmic accountability so that humans can undo unintended harm.” He said that smart software must be designed in ways that let us inspect its workings and prevent it from discriminating against certain people or using private data in unsavory ways.

These are noble and rational concerns, but ones tech leaders should have been talking about long ago. There is ample evidence that the algorithms and software shaping daily life are already capable of troubling biases.

Studies from the Federal Trade Commission have found signs that racial and economic biases decried in pre-Internet times are reappearing in the systems powering targeted ads and other online services. In Wisconsin, a legal fight is underway over whether the workings of a system that tries to predict whether a criminal will reoffend, and that is used to help determine jail terms, should be kept secret.

Just today, the ACLU filed suit against the U.S. government on behalf of researchers who plan to look for racial discrimination in online job and housing ads. The researchers can’t carry out their study because federal hacking laws and the terms and conditions written by tech firms restrict it.

It’s clear that some of the problems Nadella warns could be created by future artificial intelligence software are in fact already here. Microsoft researcher Kate Crawford nicely summarized the root of algorithmic bias in a recent New York Times op-ed, writing that software “may already be exacerbating inequality in the workplace, at home and in our legal and judicial systems.”

Nadella concludes his forward-looking post on artificial intelligence by saying: “The most critical next step in our pursuit of A.I. is to agree on an ethical and empathic framework for its design.” What better way to prepare for an artificial-intelligence-dominated future than to start applying an ethical and empathic framework now to the “dumb” software that already surrounds us?

(Read more: Slate, Vice, Ars Technica, New York Times)
