What is the scope of artificial intelligence?

Amnesty International

Artificial intelligence (AI) is already in use today: the voice assistants Siri and Alexa are based on it, as are navigation systems such as Google Maps and many other online applications that already assist people to a considerable extent. In the coming years, artificial intelligence will be integrated far more extensively into our everyday lives. That offers many opportunities: it could improve the lives of many people and serve the well-being of societies. However, the use of artificial intelligence also harbors dangers, especially for human rights. A genuine, broad public debate about the opportunities and risks of artificial intelligence has yet to take place.

What is Artificial Intelligence?

Machine learning, deep learning, neural networks: these techniques can be grouped under the term artificial intelligence (AI). All of them are about pattern recognition. Large data sets (keyword: "big data") are analyzed in order to identify patterns in the data. This analysis is carried out using formulas and instructions, known as algorithms. In the process, the program learns what patterns exist and can then apply this knowledge to new data; that transfer to new data is the crucial point. If a program has learned what a face looks like from a large number of images labeled by people, it can then evaluate new images to determine whether or not they depict a face.
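
To make this concrete, here is a minimal sketch of that learn-then-apply idea, using scikit-learn and synthetic stand-in data; a real face detector would train on millions of labeled images, not the invented feature vectors used here:

```python
# A sketch of "learn from labeled examples, then apply to new data".
# The feature vectors below are synthetic stand-ins; a real face
# detector would be trained on millions of labeled images.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Label 1 = "face", label 0 = "no face" (invented 8-dimensional features).
faces = rng.normal(loc=1.0, size=(200, 8))
non_faces = rng.normal(loc=-1.0, size=(200, 8))
X = np.vstack([faces, non_faces])
y = np.array([1] * 200 + [0] * 200)

X_train, X_new, y_train, y_new = train_test_split(X, y, random_state=0)

model = LogisticRegression()
model.fit(X_train, y_train)       # the program "learns" the pattern
print(model.score(X_new, y_new))  # ...and applies it to unseen data
```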

The benefits of artificial intelligence

AI can bring social progress in many areas and serve the further realization of human rights. Used sensibly, AI can help make numerous situations more humane and contribute to a world in which human rights are better implemented. In medicine, for example, the course of a disease could be recognized more quickly and treated more effectively. Artificial intelligence can already analyze cancer studies, specialist articles and medical records and indicate to doctors whether a cancer is present and how it should be treated. This technology is already in use in 230 hospitals worldwide and can help ensure that the human right to the "highest attainable standard of physical and mental health" is better implemented. Self-learning robots could very soon assist elderly people with care and everyday tasks.

Artificial intelligence as a threat to human rights

However, the use of artificial intelligence (AI) can also pose a threat to people and their human rights. Above all, AI can have a negative impact on the right to equality and non-discrimination, but the protection of human dignity, the protection of privacy and numerous other human rights can also be impaired by the use and abuse of machine learning systems. These include the rights to freedom of expression and association, participation in cultural life, equality before the law and access to effective legal remedies.

Decision-making and data-processing systems can also undermine economic, social and cultural rights: for example, they can affect the delivery of vital services such as health care and education and limit access to opportunities such as employment.

The following examples show the negative effects the use of artificial intelligence can have on human rights.

Artificial intelligence and protection against discrimination

Example: artificial intelligence in the justice system

One problematic use of AI is predicting the likelihood that an incarcerated person will reoffend after release. The so-called COMPAS system, used in some US states, draws on over 100 variables to classify an incarcerated person's risk of recidivism as low, moderate or high. Judges use these results as a basis for their decisions. Although the system is designed to exclude ethnic origin, a study found that Black people were twice as likely as white people to be mistakenly classified as "high risk". Conversely, the recidivism risk of white people was underestimated.
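
As a rough illustration of how such a disparity is measured, the sketch below compares false positive rates, that is, people wrongly classified as "high risk" who did not reoffend, across two invented groups; all numbers are made up and are not the actual study data:

```python
# Comparing false positive rates across groups: the share of people
# who did NOT reoffend but were wrongly classified as "high risk".
# All numbers below are invented for illustration.
def false_positive_rate(records):
    """records: list of (predicted_high_risk, reoffended) pairs."""
    false_positives = sum(1 for pred, actual in records if pred and not actual)
    did_not_reoffend = sum(1 for _, actual in records if not actual)
    return false_positives / did_not_reoffend

# Two hypothetical groups of released people.
group_a = [(True, False)] * 40 + [(False, False)] * 60 + [(True, True)] * 50
group_b = [(True, False)] * 20 + [(False, False)] * 80 + [(True, True)] * 50

print(false_positive_rate(group_a))  # 0.4 -- wrongly flagged twice as often
print(false_positive_rate(group_b))  # 0.2
```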

Such examples show that groups that are already marginalized can face even more discrimination through the use of artificial intelligence. They also show that it is difficult or impossible for those affected to find out whether algorithms have discriminated against them. The use of mathematical models creates an illusion of objectivity, which makes it hard to challenge these decisions: the calculations take place in a "black box", and outsiders cannot find out how the artificial intelligence arrived at its results. Without this transparency, such a system is difficult to question.

When artificial intelligence has a discriminatory effect, this is referred to as a bias in the algorithms. Such a bias can arise for many reasons. Self-learning software is always based on mathematical models that are supposed to represent reality to a certain extent, and the worldview and perspective of the developers play a major role here. Probably the most important reason why an AI can be biased is that the algorithms always have to be trained on historical data in order to learn patterns. In most cases, however, this data is colored by racist and discriminatory structures.
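
A minimal sketch of this mechanism, using scikit-learn and invented "hiring" data: past decisions disadvantaged one group, and a model trained on those decisions reproduces the pattern even though group membership is never an input feature, because a correlated proxy (here, a postcode-like value) stands in for it:

```python
# How historical bias survives even when the sensitive attribute is
# excluded: a proxy feature correlated with group membership lets the
# model reproduce past discrimination. All data here is invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 1000
group = rng.integers(0, 2, n)                # sensitive attribute (0 or 1)
postcode = group + rng.normal(0, 0.3, n)     # proxy correlated with group
skill = rng.normal(0, 1, n)

# Historical hiring decisions: partly skill, partly discrimination
# against group 1.
hired = (skill - 1.5 * group + rng.normal(0, 0.5, n)) > 0

X = np.column_stack([postcode, skill])       # note: 'group' is excluded
model = LogisticRegression().fit(X, hired)

# The trained model still predicts lower hiring rates for group 1,
# because the postcode proxy stands in for group membership.
for g in (0, 1):
    print(g, model.predict(X[group == g]).mean())
```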

Artificial Intelligence and the Protection of Human Dignity: Right to Life

Example: autonomous weapon systems

Semi-autonomous weapon systems are already in use, such as the Harpy drone (Israel) and the Counter Rocket, Artillery and Mortar (C-RAM) system (USA). With advances in technology, there is a realistic risk that fully autonomous lethal weapon systems will be deployed that independently identify and kill targets. From a human rights perspective, autonomous weapon systems are to be condemned: their use amounts to arbitrary execution and denies the right to a fair trial.

Example: social profiling

Self-learning algorithms divide people into cohorts on the basis of probability calculations and correlations and assign these groupings different attributes and characteristics. If, for example, residents of a certain district are statistically more likely to have payment difficulties than residents of other parts of the city, these people are only offered a loan by their bank at a higher interest rate to compensate for the supposed default risk. This assessment ignores a person's individual situation; instead, the person is statistically assigned to a group with similar characteristics.
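
A minimal sketch of this cohort logic, with invented district names and numbers: the interest rate offered depends only on a group statistic, and the applicant's personal circumstances never enter the calculation:

```python
# Cohort-based loan pricing: the offered rate depends only on a group
# statistic, never on the individual applicant. District names and
# numbers are invented.
DISTRICT_DEFAULT_RATES = {
    "district_a": 0.02,   # 2% of past borrowers here defaulted
    "district_b": 0.09,   # 9% here
}

BASE_RATE = 0.03

def offered_interest_rate(district: str) -> float:
    """Price the loan purely from the cohort's historical default rate."""
    # The applicant's personal circumstances never enter the calculation.
    return BASE_RATE + 2.0 * DISTRICT_DEFAULT_RATES[district]

print(offered_interest_rate("district_a"))  # 0.07 -> 7% interest
print(offered_interest_rate("district_b"))  # 0.21 -> 21% interest
```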

Artificial intelligence and the protection of privacy

A surveillance system with over 170 million cameras, and a further 400 million planned, many of them equipped with AI software, enables the Chinese authorities to identify, track and ultimately apprehend people in real time. It is possible to compare every recorded face with stored identification documents, to reconstruct complete movement patterns from the past week and to determine who met whom: when, where, in which setting, how often and for how long. Algorithms and facial recognition software create a system of permanent surveillance that would never be possible without this technology.

In such a system, there is no longer any right to privacy in public space: it is not possible for citizens to evade this almost all-encompassing surveillance or to object to it.
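
As a rough illustration of the matching step described above, here is a minimal sketch that compares face "embeddings" (numeric vectors that facial recognition software derives from images; the vectors here are randomly invented) against a hypothetical database of stored identities:

```python
# Matching a camera frame against stored identities via face
# "embeddings" (numeric vectors). The embeddings here are random
# stand-ins; real systems derive them from images with a neural network.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(2)
database = {f"id_{i}": rng.normal(size=128) for i in range(1000)}

def identify(face_embedding: np.ndarray, threshold: float = 0.8):
    """Return the best-matching stored identity above the threshold."""
    best_id, best_score = None, threshold
    for identity, stored in database.items():
        score = cosine_similarity(face_embedding, stored)
        if score > best_score:
            best_id, best_score = identity, score
    return best_id  # None means no sufficiently close match

# A frame whose face closely resembles one stored identity:
frame = database["id_42"] + rng.normal(scale=0.1, size=128)
print(identify(frame))  # id_42
```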

Amnesty International fights for artificial intelligence that respects human rights

Amnesty International fights for the rights of all people. Human rights are universal and inalienable; they therefore apply everywhere, including in the digital world. For this reason, Amnesty International has been campaigning for many years to ensure that human rights are also protected in the development of new technologies such as algorithms, machine learning and artificial intelligence.

  • In 2018 we adopted the Toronto Declaration, a landmark statement to protect the rights to equality and non-discrimination in machine learning systems.

  • We are lobbying governments, international organizations and companies around the world to ensure that human rights are safeguarded in the development of artificial intelligence and comparable technologies, and that a legal framework is created for this.

  • We demand a worldwide, comprehensive preventive ban on the development of autonomous weapon systems. That is why, together with 75 international and national organizations, we have launched a worldwide campaign to ban killer robots.

  • We are fighting for sector-specific oversight of AI technologies, starting with the criminal justice sector. Technological changes in policing, from "predictive policing" systems to the introduction of facial recognition, are key examples of the changing nature of law enforcement around the world.

Amnesty International urges

  • When designing or implementing machine learning systems in a public context, states must not engage in or support discriminatory or otherwise rights-violating practices.

  • States have a duty to hold private actors accountable when violations of human rights occur.

  • States must create avenues for legal action against potentially discriminatory AI-based decisions and, where necessary, provide reparation.

  • States and private actors must ensure the greatest possible transparency when using AI systems. This includes enabling independent bodies to review the effects on the people concerned.

  • Private actors have a responsibility to respect human rights that exists independently of state obligations. As part of this responsibility, private-sector actors must continuously take proactive and reactive steps to ensure that they do not cause or contribute to human rights violations ("human rights due diligence").
