Safeguarding human rights in the age of AI

David Kaye

UN Special Rapporteur on the Right to Freedom of Opinion and Expression

How could artificial intelligence impinge on human rights? Are there specific threats to consider?
David Kaye: In my report, I focused on three AI applications that raise concerns: content display and personalisation; content moderation and removal; and profiling, advertising and targeting. Consider a hypothetical situation in which governments use automated technology at border crossings to select individuals for additional screening. Such a tool could systematically profile a specific ethnic group, in violation of the obligation of non-discrimination enshrined in human rights law. AI is like any other tool: its potential negative outcomes are built in from the design stage and result from human (and corporate) choices. That is where we must start the discussion.

How can States influence these choices and manage the impact of AI-related activities on human rights?
D.K.: AI tools can be powerful engines of social control, and will therefore be approached very differently by democratic and authoritarian States. Restating human rights law is important, but that only takes you so far. AI tools have become extremely difficult for non-technical people to understand. States can thus play a crucial role by educating their populations – from children to legislators – about AI and the information environment. This would help demystify algorithmic decision-making and make its tools transparent. We should aim for governance by democracy, not governance by tech. I believe courts will be strongly involved at some point: we will inevitably see claims of injury to human rights based on some form of automation, whether censorship, discrimination or something else.


What about the responsibilities of tech companies, whether GAFA or telcos? Should they be subject to increased oversight?
D.K.: It’s hard to oversee AI tools if you don’t understand them! Obscurity is, unfortunately, somewhat baked into the tech business model, and companies often argue that their proprietary tools constitute trade secrets when asked for transparency. Governments should step in with a precise framework that defines the limits of the “trade secret” argument and corrects the current asymmetry of information between governments and corporations where AI is concerned. As for the companies, I believe some self-regulation is possible, provided they conduct human rights impact assessments of their tools. Process-oriented players like telcos may be in a better position here, if they adopt strong transparency principles and communicate consistently with their users. This would benefit all parties. I believe the right balance will be achieved through collaboration between governments, NGOs, the private sector and civil society.

How do we achieve this collaboration? Is consensus even possible at that scale?
D.K.: I fully believe a multi-stakeholder approach will help us converge on a set of rules and lead to greater transparency. However, given that the most important external factor will be litigation, I think we must be modest at this stage. Joint statements of principles would be a good start: they can and will influence how courts respond to litigation, and that is a concrete reason for collective action.
