UN Human Rights Chief: Artificial Intelligence can be used for good, or have “catastrophic” effects
Image by teguhjati pras from Pixabay

The United Nations High Commissioner for Human Rights has released a scathing report on the effects of Artificial Intelligence on civil and economic rights across the world. The report is available on the OHCHR website in Microsoft Word format.

“Artificial intelligence now reaches into almost every corner of our physical and mental lives and even emotional states. AI systems are used to determine who gets public services, decide who has a chance to be recruited for a job, and of course they affect what information people see and can share online,”

UN High Commissioner for Human Rights Michelle Bachelet

The damning analysis outlines the impact on human rights in a number of areas.

Four areas of concern

Law enforcement, national security, criminal justice and border management

The report notes that authorities are attempting to use facial recognition and other biometric analysis systems to infer people’s “emotional and mental state” from their facial expressions.

“Researchers have found only a weak association of emotions with facial expressions and highlighted that facial expressions vary across cultures and contexts, making emotion recognition susceptible to bias and misinterpretations.”

It also discusses forecasting of criminality in this context, highlighting in particular the use of AI and machine learning to draw on “criminal records, crime statistics, records of police interventions in specific neighbourhoods, social media posts, communications data and travel records” to estimate an individual’s supposed likelihood of committing a crime, or even being a terrorist, in the future. The report later notes that “the decision-making processes of many AI systems are opaque”: we often have no real insight into which factors led a system to a given outcome for a particular set of inputs.
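To make the opacity and bias concerns concrete, here is a hypothetical toy risk-scoring sketch, not taken from the report; the feature names and weights are invented. Real systems learn their weights from historical data and rarely disclose them, so the score alone reveals nothing about which factor drove it.

```python
# Hypothetical illustration: a toy "criminality risk" score of the kind
# the report criticises. Features and weights are invented for this sketch.

def risk_score(record, weights):
    """Weighted sum over features; the single number that comes out
    says nothing about which factor produced it."""
    return sum(weights[k] * record.get(k, 0) for k in weights)

WEIGHTS = {
    "prior_arrests": 0.4,
    "police_stops_in_area": 0.35,  # a neighbourhood proxy: historical
    "social_media_flags": 0.25,    # over-policing re-enters as "risk"
}

# Two people with identical personal histories who live in different places.
a = {"prior_arrests": 0, "police_stops_in_area": 9, "social_media_flags": 1}
b = {"prior_arrests": 0, "police_stops_in_area": 1, "social_media_flags": 1}

print(risk_score(a, WEIGHTS), risk_score(b, WEIGHTS))
```

The divergence comes entirely from where each person lives, which is exactly the kind of de facto discrimination the report warns about.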

Public services

The report acknowledges that states pursue legitimate, even laudable, goals in automating public services, but warns that deploying AI tools in the delivery of public and humanitarian services may have an adverse impact on human rights if proper safeguards are not in place.

In a similar vein to the attempts at predicting future criminality described above, the report discusses the use of AI systems to forecast the need for “delivery of humanitarian goods and services.” It warns that the opacity of the algorithms and disparities in how data is gathered can lead, and have led, to disproportionate targeting of the poor, “leading to de facto discrimination based on socioeconomic background.”

A Dutch court ruled in 2020 against the use of such a system after a campaign by activists and trade unionists, noting that it violated the right to privacy and other human rights.


Employment

It might be shocking to hear, but companies increasingly rely on forms of AI to decide whether to hire a person. This is supposedly meant to take biased human judgment out of hiring decisions, but it merely replaces human bias with biased datasets fed into algorithms that have no oversight.

For candidates who make it past the automated hiring systems, companies continue to use AI to monitor behavior both on and off the job, for example to flag workers deemed likely to use employer-provided healthcare at a higher rate.
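The point that automation reproduces rather than removes bias can be sketched in a few lines. This is a hypothetical illustration, not a real hiring system: the “historical” decisions below are invented, and the model is deliberately the simplest one possible, predicting the majority past outcome for each group.

```python
# Hypothetical sketch: a model fitted to biased past hiring decisions
# simply learns to repeat them. All data here is invented.
from collections import defaultdict

history = [
    # (in_favored_group, hired) -- past human decisions
    (True, True), (True, True), (True, False),
    (False, False), (False, False), (False, True),
]

def fit_majority_rule(data):
    """'Train' by memorizing the majority historical outcome per group."""
    outcomes = defaultdict(list)
    for group, hired in data:
        outcomes[group].append(hired)
    return {g: sum(v) > len(v) / 2 for g, v in outcomes.items()}

model = fit_majority_rule(history)
print(model)  # the old human bias, now automated and unexamined
```

Real systems are far more elaborate, but the mechanism is the same: if the training data encodes past discrimination, the model's “objective” decisions encode it too.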

Managing information online

This section gets to the heart of the interactions we have online every day, from search results tailored to the profiles companies have built on you to the algorithmic news feeds of Facebook and other social media sites. The report observes:

Furthermore, platform recommender systems tend to focus on maximizing user engagement while relying on insights into people’s preferences, demographic and behavioural patterns, which has been shown to often promote sensationalist content, potentially reinforcing trends towards polarization.

Content targeted at triggering an emotional response keeps us engaged with a platform, which in turn exposes us to more ads and extracts more data from us, feeding a self-reinforcing loop.
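That self-reinforcing loop can be simulated in a toy form. This is a hypothetical sketch, not any platform's actual algorithm: a naive recommender that always shows whichever item has earned the most engagement so far, with invented click rates reflecting that emotionally charged content engages more.

```python
# Hypothetical toy simulation of the engagement feedback loop.
# Click rates and counts are invented for illustration.
import random

random.seed(0)
engagement = {"sensational": 1, "measured": 1}      # initial counts
CLICK_RATE = {"sensational": 0.6, "measured": 0.3}  # assumed probabilities

for _ in range(1000):
    shown = max(engagement, key=engagement.get)  # greedily chase engagement
    if random.random() < CLICK_RATE[shown]:
        engagement[shown] += 1  # each click feeds the next recommendation

print(engagement)  # the loop locks onto the sensational item
```

Even this crude policy never shows the measured item again once the sensational one pulls ahead, a miniature version of the polarization dynamic the report describes.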

Using AI for good

Throughout the report, the OHCHR suggests that AI can also be used for good. Technology itself is neutral. The impact it has depends on how it is used, and on what political and economic forces control its development and use.

In the 1970s, Chile’s socialist government attempted to build Cybersyn, a network and processing system meant to monitor factory, production and economic data across the country and let planners make decisions in real time. A system like it, used for the public good in 2021 with today’s advances in processing power, networking speed and the Internet of Things, could improve society in almost unimaginable ways.

On the other hand, these same technologies under the control of organizations like the FBI, Google, Clearview AI and Facebook are used to make money and further oppression.

In its recommendations, the OHCHR suggests a moratorium on the use of AI systems that pose serious risks to human rights until regulatory frameworks that guarantee those rights can be developed and enforced. This would be a very positive step, but the UN generally lacks the enforcement power to make it happen, especially in a short amount of time.

The OHCHR report is a significant milestone and resource. As struggles for privacy rights from Amsterdam to New York, Berkeley and beyond have shown us, the way to win is to fight back.