What’s your Security today? #MinisterOfHappiness

“There’s only one security, and when you’ve lost that security, you’ve lost everything you’ve got. And that is the security of confidence in yourself; to be, to create, to make any position you want to make for yourself. And when you lose that confidence, you’ve lost the only security you can have.”

— L. Ron Hubbard

Excerpted from a lecture by L. Ron Hubbard delivered on 15 October 1951.

Happy Sunday #MinisterOfHappiness

Today, Sunday, is an especially exciting day. It is only the second date in over a thousand years that can be written the same way both backwards and forwards: 02/02/2020. It’s called a “palindromic” day. The first was 01/01/1010. The next one will occur in 1,010 years, on 03/03/3030. No one can witness two palindromic days, not even Donald Trump. All of us alive today are privileged to witness this special day. Give a special thanks to God for the privilege. Hallelujah!
Happy palindromic day to you all!!!
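For the curious, the property is easy to check in a few lines of code. The sketch below is a minimal illustration in Python, assuming the DD/MM/YYYY rendering used above; the helper name is my own, not part of the original post:

```python
# Minimal sketch: a date is "palindromic" if its digits read the
# same forwards and backwards once the slashes are removed.
# (Assumes the DD/MM/YYYY rendering used in the post.)

def is_palindromic(date_str: str) -> bool:
    digits = date_str.replace("/", "")
    return digits == digits[::-1]

for d in ["01/01/1010", "02/02/2020", "03/03/3030", "03/02/2020"]:
    print(d, is_palindromic(d))
# 01/01/1010 True
# 02/02/2020 True
# 03/03/3030 True
# 03/02/2020 False
```

Note that 02/02/2020 passes the check whether the day or the month is written first, since day and month are identical; that symmetry is part of what makes this date special.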

Dr Charles Sinkala

There’s No Such Thing As ‘Ethical A.I.’

Technologists believe the ethical challenges of A.I. can be solved with code, but the challenges are far more complex.


Tom Chatfield
Jan 16

Image: Apisit Sorin / EyeEm / Getty Images
Artificial intelligence should treat all people fairly, empower everyone, perform reliably and safely, be understandable, be secure and respect privacy, and have algorithmic accountability. It should be aligned with existing human values, be explainable, be fair, and respect user data rights. It should be used for socially beneficial purposes, and always remain under meaningful human control. Got that? Good.
These are some of the high-level headings under which Microsoft, IBM, and Google-owned DeepMind respectively set out their ethical principles for the development and deployment of A.I. They’re also, pretty much by definition, A Good Thing. Anything that insists upon technology’s weighty real-world repercussions — and its creators’ responsibilities towards these — is surely welcome in an age when automated systems are implicated in every facet of human existence.
And yet, when it comes to the ways in which A.I. codes of ethics are discussed, a troubling tendency is at work even as the world wakes up to the field’s significance. This is the belief that A.I. codes are recipes for automating ethics itself; and that once a broad consensus around such codes has been achieved, the problem of determining an ethically positive future direction for computer code will have begun to be solved.
There’s no such thing as a single set of ethical principles that can be rationally justified in a way that every rational being will agree to.
What’s wrong with this view? To quote an article in Nature Machine Intelligence from September 2019, while there is “a global convergence emerging around five ethical principles (transparency, justice and fairness, nonmaleficence, responsibility, and privacy),” what precisely these principles mean is quite another matter. There remains “substantive divergence in relation to how these principles are interpreted, why they are deemed important, what issue, domain, or actors they pertain to, and how they should be implemented.” Ethical codes, in other words, are much less like computer code than their creators might wish. They are not so much sets of instructions as aspirations, couched in terms that beg more questions than they answer.
This problem isn’t going to go away, largely because there’s no such thing as a single set of ethical principles that can be rationally justified in a way that every rational being will agree to. Depending upon your priorities, your ethical views will inevitably be incompatible with those of some other people in a manner no amount of reasoning will resolve. Believers in a strong central state will find little common ground with libertarians; advocates of radical redistribution will never agree with defenders of private property; relativists won’t suddenly persuade religious fundamentalists that they’re being silly. Who, then, gets to say what an optimal balance between privacy and security looks like — or what’s meant by a socially beneficial purpose? And if we can’t agree on this among ourselves, how can we teach a machine to embody “human” values?
In their different ways, most existing A.I. ethical codes acknowledge this. DeepMind puts the problem up front, stating that “collaboration, diversity of thought, and meaningful public engagement are key if we are to develop and apply A.I. for maximum benefit,” and that “different groups of people hold different values, meaning it is difficult to agree on universal principles.” This is laudably frank, as far as it goes. But I would argue that there’s something missing from this approach that needs to be made explicit before the debate can move where it must go — into a zone, not coincidentally, uncomfortable for many tech giants.
This is the fact that there is no such thing as ethical A.I., any more than there’s a single set of ethical principles that every rational being will agree to.
