The conversation around the Right to Explanation reminded me of the Mandela Effect. Just as many people falsely remember Nelson Mandela dying long before his actual death, the Right to Explanation is falsely attributed to the GDPR’s collection of laws. An offshoot of early GDPR conversations, the rule has now developed its own literature on the internet. Posts suggesting that the law threatens Artificial Intelligence have flooded Google (examples here, here, and here), while uncertainty-fuelled paranoia has taken over LinkedIn. Is it internet misinformation at its finest, or is there more to the discussion? I suggest we review what a Right to Explanation is and why an absent law is causing such a stir on the world wide web.
GDPR stands for General Data Protection Regulation, and it will take effect in May 2018. The law introduces new rules to the data processing game: most notably, the right to be forgotten, the principles of data privacy by design and by default, and a set of procedures around data breaches. GDPR is an effort to tame the ever-growing data economy in the customer’s favour. Virtually everybody that has been feeding off the generous EU data pool will be affected by the new law: invasive data hoarders, compulsive data resellers, and humble businesses alike – any organisation that collects sensitive personal data of EU citizens.
The Right to Explanation is best described as the right to be given an explanation for an algorithm’s output. Think credit scores and insurance costs: decisions that are at times baffling and unjust, the results of some mysterious computer ruling. The law would allow us to demand an explanation for any instance of machine processing that uses our personal data.
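As a toy illustration – not anything the regulation prescribes – here is what an “explanation” could look like for the simplest case, a linear credit-scoring model: the score decomposes exactly into per-feature contributions, which can be ranked and reported back to the data subject. The weights and applicant features below are entirely made up for the sketch.

```python
# A minimal sketch of an "explanation" for an automated decision.
# For a linear scoring model, the output is a sum of weight * value terms,
# so each feature's contribution can be isolated and ranked.
# Weights and features here are illustrative, not a real credit model.

def explain_score(weights, applicant):
    """Return (total score, per-feature contributions sorted by magnitude)."""
    contributions = {
        feature: weights[feature] * value
        for feature, value in applicant.items()
    }
    total = sum(contributions.values())
    ranked = sorted(contributions.items(), key=lambda kv: -abs(kv[1]))
    return total, ranked

weights = {"income": 0.4, "debt": -0.6, "years_employed": 0.2}
applicant = {"income": 50, "debt": 30, "years_employed": 5}

score, reasons = explain_score(weights, applicant)
print(f"score = {score:.1f}")
for feature, contribution in reasons:
    print(f"  {feature}: {contribution:+.1f}")
```

Of course, a deep neural network does not decompose this neatly – which is precisely why “meaningful information about the logic involved” is so much easier to promise than to deliver.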
The belief that GDPR incorporates a Right to Explanation is a combination of old news, wishful thinking, and internet clickbait. GDPR has 11 chapters and none of them mentions a Right to Explanation specifically. We know it was being discussed: the rule had made it into some public drafts, but was dropped from the final version of the legislation. The gist of those talks is still vivid in the collection of articles in Chapter 3 (Rights of the Data Subject), which includes the right to information. It grants the data subject the right to know “the purposes of the processing for which the personal data are intended (…), the existence of automated decision-making, including profiling, (…) and, at least in those cases, meaningful information about the logic involved”. As pointed out in the Oxford University report, there is a certain gap between being told the logic involved and being given a meaningful explanation of a specific decision. Article 22 addresses automation by stating that “the data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her”. Yet what we get here is the right to opt out of automated processing: not a right to be delivered a personalised YouTube tutorial on computer decision-making.
Why wasn’t the Right to Explanation included in the GDPR text? Mostly because we haven’t worked out a consensus on the balance between machine autonomy and human intervention. We are past the times when computers would merely “crunch numbers”: today, bots outperform people at solving whole classes of computational problems. It feels belittling: a blow to our collective ego. Worse still, many wonder whether further development of, and reliance on, the algorithms’ autonomy could ruin our socio-economic structure. At the same time, human-based decisions aren’t necessarily a real alternative. We are biased, moody, and money-driven, and hence more prone to errors than an emotionless computer. Some breathtaking progress is being made with little human intervention. From self-driving cars to speech interpreters, Artificial Intelligence applications are powered by learning algorithms that pick up patterns with minimal supervision: bots learning off each other in a synchronised effort to answer probabilistic problems, without a script humans could comprehend. State-of-the-art applications like these would be penalised under a Right to Explanation, and innovation would be stifled as a consequence. Simply put, dictating what Machine Learning can and cannot do is one hell of a task. In the end, what even constitutes a meaningful explanation, especially for those not fluent in maths?
GDPR is going to shake up the data market, the exclusion of the Right to Explanation notwithstanding. EU citizens will get to exercise an unprecedented set of rights to protect their privacy – and businesses will need to abide. The ambiguity of GDPR’s language around human-free decision-making flags an issue but provides little guidance. The debate is far from over as we witness the growing impact automation has on our lives. How about we set up a neural network to solve it all for once?
I am cooking up a series of posts on GDPR. Follow me on Twitter: @EveTheAnalyst