What are the difficulties in avoiding direct discrimination on the basis of a protected characteristic (e.g. gender) when creating AI systems?

This is my formative writing task in Ethics and Regulation of Artificial Intelligence (LAWM161) at the University of Surrey.

Professor: Mikolaj Barczentewicz

Over the past few years, many services and products powered by artificial intelligence have entered people’s lives. After several waves of AI research, technologies such as machine learning, deep learning and neural networks have emerged. Although great progress has been made, research on ethics and law still lags behind the development of the technology. Moreover, because AI systems are so widely deployed, the prejudice and discrimination that exist between people can easily reappear in them. Discrimination can arise for many reasons: the training process may use inappropriate data that encodes discriminatory patterns; designers may embed their own personal biases into the system; the decisions produced by the AI system may themselves be discriminatory; and even incorrect use of the system by its users can leave people feeling treated differently. In this article, I will focus on direct discrimination when creating an AI system and discuss some of the difficulties in reducing it.

Direct discrimination means treating someone unfairly because of a protected characteristic such as gender, race, age, health, religion, disability, sexual orientation or other characteristics (Equality and Human Rights Commission, 2019[1]). Regulations and laws such as the Equality Act have been made to protect people from discrimination in society, yet the wide use of AI systems makes discrimination and unfair treatment easy to reproduce at scale. In the past, most unfair treatment occurred in face-to-face interactions, but AI systems are now deployed everywhere, and traditional bias or discrimination can be transplanted into these automated decision-making systems, which reduce everything to true or false (0 and 1) without any sense of law, morality or human nuance. ProPublica (2016[2]) found a racial disparity in which black defendants received higher algorithmic crime risk scores than white defendants. Sometimes what is mathematically correct is not completely fair in reality. F. Zuiderveen Borgesius (2018)[3] argues that discrimination can be amplified by the use of predictive policing systems.

With the widening use of artificial intelligence systems, direct discrimination is likely to occur more and more often. What issues and challenges need to be considered when designing an AI system in order to reduce or avoid direct discrimination? Ferrer et al (2021)[4] identify three main sources of bias: bias in modelling, bias in training and bias in usage. Bias at these stages is the core problem when creating an AI system. Ferrer et al (2021)[4:1] also argue that bias does not necessarily lead to discrimination, but that it is impossible to untangle the different forms of discrimination without examining bias. I therefore focus on some of the problems encountered at these three stages.

Firstly, it is hard to build a fair model that satisfies not only mathematical equality but also fairness in law and morality. When building a mathematical model, the primary task of engineers or mathematicians is to formulate equations and tune parameters until the outputs match what they want, and the results are usually judged only by whether they are mathematically right or wrong. The various algorithms and theories behind a model cannot by themselves detect prejudice or discrimination. In the case of the black defendants mentioned above, the higher predicted probability of crime is just the output of a probability model; in reality many other factors should be considered, such as location, education level and personal circumstances, and because these factors are constantly changing it is impossible to capture all the relevant ones. More importantly, computational fairness is only an ideal: the deployment of an AI model is also constrained by local legislation and culture.
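To make the notion of "mathematical equality" concrete, the sketch below computes one common group-fairness statistic, the demographic parity gap (the difference in favourable-decision rates between two groups). The decisions, group labels and the choice of metric are purely illustrative assumptions on my part; a near-zero gap is a statistical property and does not by itself establish fairness in law or morality.

```python
# Minimal sketch: checking one notion of "mathematical fairness"
# (demographic parity) on hypothetical model decisions.
# All data and names below are illustrative assumptions, not a real system.

def demographic_parity_gap(decisions, groups, group_a, group_b):
    """Difference in favourable-decision rates between two groups."""
    def rate(g):
        members = [d for d, grp in zip(decisions, groups) if grp == g]
        return sum(members) / len(members) if members else 0.0
    return rate(group_a) - rate(group_b)

# Hypothetical outcomes (1 = favourable decision) and a protected attribute.
decisions = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

gap = demographic_parity_gap(decisions, groups, "A", "B")
print(f"Demographic parity gap: {gap:.2f}")  # 0.50 for this toy data

# A near-zero gap is only a statistical property; it does not by itself
# establish fairness in law or morality.
```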

Secondly, after the model is designed, the next task is to choose suitable training data. The quality of the training data strongly affects the results of training, and unrepresentative or unfair data often leads to discriminatory or biased results. In their experiments, Torralba and Efros (2011)[5] report that popular datasets have strong built-in biases, and that these biases still persist in some form even after extra filters are applied.
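As an illustration of a basic check that could be run before training, the sketch below counts how each gender group is represented in a hypothetical training set and compares the positive-label rate per group. The records, field names and the idea of using simple counts are my own assumptions for illustration; such an audit only surfaces the most obvious imbalances and cannot prove that the data are free of bias.

```python
# Minimal sketch: auditing a hypothetical training set for obvious imbalance
# with respect to a protected attribute (here, gender).
# The records and field names are illustrative assumptions only.
from collections import Counter

records = [
    {"gender": "female", "label": 1},
    {"gender": "female", "label": 0},
    {"gender": "male", "label": 1},
    {"gender": "male", "label": 1},
    {"gender": "male", "label": 1},
    {"gender": "male", "label": 0},
]

group_counts = Counter(r["gender"] for r in records)
positive_counts = Counter(r["gender"] for r in records if r["label"] == 1)

for group, count in group_counts.items():
    share = count / len(records)
    positive_rate = positive_counts[group] / count
    print(f"{group}: {share:.0%} of records, positive-label rate {positive_rate:.0%}")

# A skewed group share or very different label rates across groups is a
# warning sign, although balanced counts alone do not guarantee fair data.
```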

Thirdly, deployment is the last stage of creating an AI system, but discrimination or bias can arise here too. For example, an AI system needs to provide special support for disabled users, such as more accurate voice output and support for braille; without it, these users are effectively treated less favourably.

Overall, there are many difficulties in creating an AI system that avoids direct discrimination, especially where sensitive characteristics are involved. Realising such a system requires not only the relevant technology but also strong moral and legal knowledge.

References


  1. Equality and Human Rights Commission (2019) What is direct and indirect discrimination. Available at: https://www.equalityhumanrights.com/en/advice-and-guidance/what-direct-and-indirect-discrimination (Accessed: 8 November 2021). ↩︎

  2. ProPublica (2016) Bias in Criminal Risk Scores Is Mathematically Inevitable, Researchers Say. Available at: https://www.propublica.org/article/bias-in-criminal-risk-scores-is-mathematically-inevitable-researchers-say (Accessed: 8 November 2021). ↩︎

  3. Zuiderveen Borgesius, F. (2018) ‘Discrimination, artificial intelligence, and algorithmic decision-making’. Strasbourg: Council of Europe, Directorate General of Democracy, p. 16. ↩︎

  4. Ferrer et al. (2021) ‘Bias and Discrimination in AI: a cross-disciplinary perspective’, IEEE Technology and Society Magazine, 40(2), pp. 72-80. ↩︎ ↩︎

  5. Torralba, A. and Efros, A.A. (2011) ‘Unbiased look at dataset bias’, CVPR 2011, pp. 1521-1528. ↩︎