Why is artificial intelligence inheriting racism and sexism from humans?
It may seem almost incomprehensible that a piece of technology could hold prejudices: a machine that sees the world in numbers and data, yet inherits racism, sexism and other ignorances from its human creators. It is not only AI's intelligence that is man-made; it has also absorbed humanity's distorted perceptions of itself, making the machine more like its makers.
The Guardian explains how AI is learning our worst impulses by citing a saying in computer science: garbage in, garbage out. When machines are fed data that reflect human prejudices, they mimic those prejudices, producing results such as antisemitic chatbots or racially biased software.
Computer scientist Joanna Bryson argues that AI should not be thought of as "some fairy godmother", but rather as "an extension of our existing culture".
A Stanford University study found that an AI trained on internet text associated stereotypically white names - like Brett and Alison - with positive words such as "love", while it associated stereotypically black names - like Alonzo and Shaniqua - with negative words such as "failure" and "cancer".
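The mechanism behind findings like this is word association in learned "embeddings": words that appear in similar contexts end up with nearby vectors, so names can drift towards the pleasant or unpleasant words they co-occur with. A minimal sketch of how such an association is measured, using invented three-number vectors rather than embeddings learned from real text (every number below is made up purely for illustration):

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Toy 3-dimensional "embeddings", invented for illustration only.
# Real studies use vectors learned from billions of words of web text.
vectors = {
    "love":    (0.9, 0.1, 0.0),
    "failure": (0.0, 0.2, 0.9),
    "Brett":   (0.8, 0.3, 0.1),
    "Alonzo":  (0.1, 0.3, 0.8),
}

def association(name, pleasant="love", unpleasant="failure"):
    """Positive score: the name sits closer to the pleasant word."""
    return (cosine(vectors[name], vectors[pleasant])
            - cosine(vectors[name], vectors[unpleasant]))

for name in ("Brett", "Alonzo"):
    print(name, round(association(name), 3))
```

Researchers aggregate many such name-versus-attribute comparisons; a consistent gap between the two groups of names is what registers as bias in the model.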
The acquisition of prejudices could be problematic as AI takes on common tasks such as evaluating job applicants, Lifehacker suggests: the associations it inherits could rank applications from women lower than those from men. The site also cites Luminoso chief science officer Rob Speer, who de-biased an open-source knowledge base called ConceptNet after a restaurant review algorithm rated Mexican food lower because it had learned to associate "Mexican" with negative words like "illegal".
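The restaurant example can be sketched in a few lines: if a review is scored by averaging the learned sentiment of its words, a biased score attached to a cuisine word drags down every review that mentions it, and one crude fix is to force such identity terms back to neutral. This is not Speer's actual method, just an illustration of the idea; the lexicon, words and scores below are all invented:

```python
# Toy sentiment lexicon (all words and scores invented for illustration).
# The negative score on "mexican" stands in for bias a real system
# absorbs from web text.
biased_lexicon = {"delicious": 2.0, "great": 1.5, "mexican": -1.0, "italian": 0.0}

def review_score(text, lexicon):
    """Average the sentiment of the words the lexicon recognises."""
    known = [lexicon[w] for w in text.lower().split() if w in lexicon]
    return sum(known) / len(known)

# A crude de-biasing pass: reset identity terms to neutral sentiment.
identity_terms = {"mexican", "italian"}
debiased_lexicon = {w: 0.0 if w in identity_terms else s
                    for w, s in biased_lexicon.items()}

review = "delicious Mexican food"
print(review_score(review, biased_lexicon))   # cuisine word drags the score down
print(review_score(review, debiased_lexicon)) # neutralised, the score recovers
```

The point is that the bias lives in the data the scores were learned from, not in the averaging code, which is why fixing it means editing the learned associations themselves.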
A recent report also found that software used in the United States to predict future criminals was biased against black people. According to ProPublica, the Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) tool was more likely to incorrectly label black defendants as likely to reoffend, wrongly flagging them at almost twice the rate of white defendants (45 per cent versus 24 per cent).
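ProPublica's headline figures compare false positive rates: among defendants who did not go on to reoffend, the share nonetheless flagged as high risk. A minimal sketch of that calculation, with hypothetical counts chosen only to reproduce the reported percentages (ProPublica's own analysis used real Broward County court records):

```python
# Hypothetical counts, invented for illustration.
# group: (flagged high risk but did NOT reoffend, total who did NOT reoffend)
groups = {
    "black": (450, 1000),
    "white": (240, 1000),
}

rates = {}
for group, (false_positives, non_reoffenders) in groups.items():
    # False positive rate: wrongly flagged as a share of non-reoffenders.
    rates[group] = false_positives / non_reoffenders
    print(f"{group}: {rates[group]:.0%} of non-reoffenders wrongly flagged")
```

Comparing the two rates, rather than overall accuracy, is what exposes the disparity: a tool can be similarly accurate for both groups while distributing its errors very unevenly.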
Joy Buolamwini, founder of the Algorithmic Justice League (AJL), believes part of the reason AI has inherited racism and sexism is the lack of diversity within the tech industry. She said: "If you test your system on people who look like you and it works fine then you're never going to know that there's a problem."
According to the BBC, the diversity reports of the tech giants make "grim reading": Google states that 19 per cent of its tech staff are women and just one per cent are black; women make up 17.5 per cent and black people 2.7 per cent of Microsoft's tech workforce; and Facebook's US tech staff are 17 per cent women and one per cent black.