Artificial intelligence is often given an aura of objectivity, as if it radiated an invisible, omnipresent impartiality. However, some artificial intelligence (AI) systems exhibit algorithmic bias, i.e. they produce results that unfairly discriminate against people based on their social identity.
This can be a problem wherever machine learning algorithms and AI are used: in policing, in screening job applicants, in setting credit limits, in health insurance and beyond. Imagine not getting a job, not even being shortlisted for an interview, simply because in the first, computer-processed step your CV is rejected by an AI system on the basis of your gender, race, ethnicity or the fact that you come from a rural area.
Examples of AI Bias
Google’s search algorithms have already been noted to be oppressive towards Black women and to push stereotypes in search results. Safiya Umoja Noble elaborated on this in her book Algorithms of Oppression, which, through analysis of text and media searches as well as extensive research into online advertising and paid posts, exposed the culture of racism and sexism in the way visibility is created on the Internet.
Some algorithms powering facial-recognition AI have systematically misclassified darker-skinned faces (Buolamwini & Gebru, 2018) [1] or mislabeled Black people as “primates,” according to a New York Times article.
According to The Verge, Amazon used an AI hiring algorithm that systematically undervalued women’s résumés, thus showing gender bias. The algorithm “punished” candidates who had attended women-only colleges, as well as résumés that contained the word “women’s” (for example, writing “member of the women’s volleyball club” would get the résumé demoted).
Princeton researchers used off-the-shelf machine learning software to analyze associations among 2.2 million words. They found that European names were perceived as more pleasant than African-American names, and that the words “woman” and “girl” were more often associated with art and less with science and mathematics, which were more likely to be associated with men. In analyzing these word associations in the training data, the algorithm picked up existing racial and gender biases exhibited by humans. If the learned associations of such algorithms were used as part of a search-ranking algorithm, to generate word suggestions in an autocomplete tool, or for job recruiting, the cumulative effect could be to reinforce racial and gender bias.
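The kind of association test the Princeton team relied on can be illustrated with a minimal sketch in Python. The word vectors below are invented toy numbers, not the embeddings from the actual study (those were learned from large web-text corpora); the point is only to show how cosine similarity between word vectors exposes learned associations.

```python
import numpy as np

# Toy, hand-made word vectors for illustration only; real studies use
# embeddings learned from large text corpora (e.g. GloVe or word2vec).
vectors = {
    "woman":   np.array([0.9, 0.1, 0.3]),
    "man":     np.array([0.1, 0.9, 0.3]),
    "art":     np.array([0.8, 0.2, 0.1]),
    "science": np.array([0.2, 0.8, 0.1]),
}

def cosine(a, b):
    """Cosine similarity: values near 1.0 mean the words appear in similar contexts."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Compare how strongly each gendered word associates with each domain.
for word in ("woman", "man"):
    for domain in ("art", "science"):
        print(f"{word:>5} vs {domain:<7}: {cosine(vectors[word], vectors[domain]):.2f}")

# If "woman" ends up consistently closer to "art" and "man" closer to "science",
# any ranking, autocomplete or recruiting tool built on these vectors inherits that bias.
```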
The COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) algorithm, used by US judges to predict whether defendants should be detained or released on bail pending trial, has been found to be biased in favor of white defendants and discriminatory against African-Americans, according to an analysis published by ProPublica. The algorithm assigns each defendant a risk score for the likelihood of committing a future crime, drawing on extensive data about arrest records, defendant demographics and other variables. Compared to white defendants who were equally likely to reoffend, African-Americans were more likely to receive a higher risk score, resulting in longer periods of detention while awaiting trial. Northpointe, the company that sells the algorithm, disputes these findings.
Some algorithms can now quite easily determine people’s political orientation, even against their will (Peters, 2022) [2].
AI can be biased if it is poorly designed, trained, or implemented in a way that creates unintended and unfair consequences. Bias in AI typically occurs when algorithms are trained on incomplete or inadequate data, which can lead to wrong conclusions or discrimination. Also, bias can occur if algorithms pick up and reinforce existing social inequalities and stereotypes without recognizing or eliminating them.
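A simple, if partial, way to catch the “incomplete or inadequate data” problem is to audit how well each group is represented in the training set before any model is trained. A minimal sketch, with invented example records and an arbitrary alert threshold:

```python
from collections import Counter

# Invented example records from a hypothetical CV-screening training set;
# only the group label matters for this check.
training_records = [
    {"cv_id": 1, "group": "urban"},
    {"cv_id": 2, "group": "urban"},
    {"cv_id": 3, "group": "urban"},
    {"cv_id": 4, "group": "urban"},
    {"cv_id": 5, "group": "rural"},
]

counts = Counter(record["group"] for record in training_records)
total = sum(counts.values())

for group, n in counts.items():
    share = n / total
    # 30% is an arbitrary alert threshold chosen for this illustration.
    flag = "  <-- under-represented?" if share < 0.3 else ""
    print(f"{group}: {n} records ({share:.0%}){flag}")

# A model trained on data skewed like this mostly learns the majority group's
# patterns and can perform worse, or unfairly, on everyone else.
```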
Human and historical biases are inherently present in the data on which artificial intelligence is built, and if it is built on this kind of data, it will be biased.
How to reduce algorithmic bias
There are ways to reduce bias in AI, such as more careful design, using more diverse data sets, and validating models on different populations. It is necessary to develop strategies for recognizing bias, and all approaches to the problem must be built around the protection of sensitive user data – data that indicate race, gender, ethnicity, membership in a minority, and the like. It is also important to have an ethical framework that promotes fairness and responsibility in the application of AI.
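Validating models on different populations can begin with something as simple as breaking evaluation metrics down by group. The sketch below uses invented predictions and outcomes; the group names and the metrics shown are illustrative assumptions, not a complete fairness audit.

```python
# Invented evaluation data: (group, model prediction, true outcome), 1 = positive decision.
results = [
    ("group_a", 1, 1), ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 0),
    ("group_b", 0, 1), ("group_b", 0, 0), ("group_b", 1, 1), ("group_b", 0, 1),
]

for group in ("group_a", "group_b"):
    rows = [(pred, true) for g, pred, true in results if g == group]
    accuracy = sum(pred == true for pred, true in rows) / len(rows)
    positive_rate = sum(pred for pred, _ in rows) / len(rows)
    print(f"{group}: accuracy={accuracy:.2f}, positive rate={positive_rate:.2f}")

# Large gaps between groups in accuracy or in the rate of positive decisions
# are a warning sign that the model treats populations differently.
```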
The development of AI should be based on the principles of equality, justice, fairness and neutrality, aiming to serve the well-being of all people, regardless of political beliefs.
But this is easier said than done. Creating artificial intelligence biased towards a particular political belief is not only ethically questionable; it is also technically very challenging to program. AI learning is based on the data used to “train”, or more precisely derive, the model. If an AI were trained on a data set covering only a narrow range of values, a phenomenon known as “overfitting” could occur, in which the AI over-adapts to the data at hand and loses the ability to generalize and adapt to new situations. Such a biased AI could also provoke strong public reactions and worsen problems that already exist in society, such as polarization and division.
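Overfitting itself is easy to demonstrate outside any political context. In the sketch below, a flexible high-degree polynomial is fitted to a handful of noisy training points drawn from a simple relationship; it hugs the training data but tends to do worse on fresh points, which is exactly the loss of ability to generalize described above. All numbers are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# A small, narrow training set: y = x plus a little noise, only 8 points.
x_train = np.linspace(0.0, 1.0, 8)
y_train = x_train + rng.normal(scale=0.05, size=x_train.size)

# Fresh points from the same underlying relationship, used as a test set.
x_test = np.linspace(0.0, 1.0, 50)
y_test = x_test

for degree in (1, 7):
    coeffs = np.polyfit(x_train, y_train, degree)          # fit a polynomial model
    train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_err = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree}: train error={train_err:.5f}, test error={test_err:.5f}")

# The degree-7 fit passes almost exactly through the training points (tiny train
# error) but tends to do worse on the test points than the simple straight line:
# it has memorized the narrow data set instead of learning the general pattern.
```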
An example of these problems arises when artificial intelligence learns, over time, to distinguish male from female names. Then, even if gender is not an explicit factor, because users never enter it in a form or it is omitted from the CV, discrimination can still occur.
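One way to check for this kind of proxy discrimination, assuming gender labels are available for a validation sample, is to test how well the supposedly gender-blind features predict gender. The sketch below uses scikit-learn and invented data; the single “mentions a women’s organization” flag is a hypothetical stand-in for whatever features a real CV-screening model would actually use.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Invented validation sample. The hiring model itself only ever sees the feature
# column (here a single 0/1 flag: "CV mentions a women's organization");
# the gender label is held separately, just for this audit.
X = np.array([[1], [1], [1], [0], [0], [0], [1], [0]])
gender = np.array([1, 1, 1, 0, 0, 0, 1, 0])  # 1 = woman, 0 = man

# Probe: try to predict gender from the supposedly gender-blind features.
probe = LogisticRegression().fit(X, gender)
print(f"gender predictable from features: accuracy={probe.score(X, gender):.2f}")

# Accuracy far above chance means the features act as a proxy for gender,
# so removing the gender field alone does not make the model gender-blind.
```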
“Some decisions will be best served by algorithms and other artificial intelligence tools, while others may need careful consideration before computer models are designed. Furthermore, testing and reviewing specific algorithms will also identify and, at best, mitigate discriminatory outcomes. For algorithm operators seeking to reduce the risk and complications of bad outcomes for consumers, promoting and using mitigation proposals can create a path toward algorithmic fairness, even if fairness is never fully realized,” according to an analysis by the Brookings Institution, a non-profit think tank dedicated to independent research and policy solutions.
AI users in the future will need better algorithmic literacy and stronger critical-thinking skills. People will need to know when they are the subject of automated, algorithm-based decision-making, and what to do and how to respond if they are negatively affected by algorithmic bias. Public institutions will need to be established and strengthened to regulate algorithmic bias and to train staff. Certainly, the profession of algorithmic-bias expert is one of the professions of the future.
[1] Buolamwini, J. & Gebru, T. (2018). Gender shades: Intersectional accuracy disparities in commercial gender classification. Proceedings of the 1st Conference on Fairness, Accountability and Transparency, Proceedings of Machine Learning Research, 81, 77–91.
[2] Peters, U. (2022). Algorithmic political bias in artificial intelligence systems. Philosophy & Technology, 35, 25. https://doi.org/10.1007/s13347-022-00512-8
Jelena Kalinić, MA in comparative literature and graduate biologist, science journalist and science communicator, holds a WHO infodemic manager certificate and training in health metrics, study design and evidence-based medicine. Winner of the 2020 EurekAlert! (AAAS) Fellowship for Science Journalists. Runner-up (second place) in the selection for European Science Journalist of the Year 2022.