Jan 16 · 5 min read
Image: Apisit Sorin / EyeEm/Getty Images
Artificial intelligence should treat all people fairly, empower everyone, perform reliably and safely, be understandable, be secure and respect privacy, and have algorithmic accountability. It should be aligned with existing human values, be explainable, be fair, and respect user data rights. It should be used for socially beneficial purposes, and always remain under meaningful human control. Got that? Good.
These are some of the high-level headings under which Microsoft, IBM, and Google-owned DeepMind respectively set out their ethical principles for the development and deployment of A.I. They’re also, pretty much by definition, A Good Thing. Anything that insists upon technology’s weighty real-world repercussions — and its creators’ responsibilities towards these — is surely welcome in an age when automated systems are implicated in every facet of human existence.
And yet, when it comes to the ways in which A.I. codes of ethics are discussed, a troubling tendency is at work even as the world wakes up to the field’s significance. This is the belief that A.I. codes are recipes for automating ethics itself, and that once a broad consensus around such codes has been achieved, we will have begun to solve the problem of setting an ethically positive future direction for computer code.
What’s wrong with this view? To quote an article in Nature Machine Intelligence from September 2019, while there is “a global convergence emerging around five ethical principles (transparency, justice and fairness, nonmaleficence, responsibility, and privacy),” what precisely these principles mean is quite another matter. There remains “substantive divergence in relation to how these principles are interpreted, why they are deemed important, what issue, domain, or actors they pertain to, and how they should be implemented.” Ethical codes, in other words, are much less like computer code than their creators might wish. They are not so much sets of instructions as aspirations, couched in terms that raise more questions than they answer.
This problem isn’t going to go away, largely because there’s no such thing as a single set of ethical principles that can be rationally justified in a way that every rational being will agree to. Depending upon your priorities, your ethical views will inevitably be incompatible with those of some other people in a manner no amount of reasoning will resolve. Believers in a strong central state will find little common ground with libertarians; advocates of radical redistribution will never agree with defenders of private property; relativists won’t suddenly persuade religious fundamentalists that they’re being silly. Who, then, gets to say what an optimal balance between privacy and security looks like — or what’s meant by a socially beneficial purpose? And if we can’t agree on this among ourselves, how can we teach a machine to embody “human” values?
In their different ways, most existing A.I. ethical codes acknowledge this. DeepMind puts the problem up front, stating that “collaboration, diversity of thought, and meaningful public engagement are key if we are to develop and apply A.I. for maximum benefit,” and that “different groups of people hold different values, meaning it is difficult to agree on universal principles.” This is laudably frank, as far as it goes. But I would argue that there’s something missing from this approach that needs to be made explicit before the debate can move where it must go — into a zone, not coincidentally, uncomfortable for many tech giants.
This is the fact that there is no such thing as ethical A.I., any more than there’s a single