Robots racist?

Is it possible for robots to be racist?


Films about artificial intelligence are box office cash cows.

The 2001 film A.I. Artificial Intelligence raked in almost $30 million in its opening weekend. 2004's I, Robot bagged $53.3 million. This month, Ghost in the Shell, the film in which Scarlett Johansson plays a half-human, half-robot hybrid, took $43 million in its first couple of days.

The reality of AI is very different from what Hollywood depicts, however. At the World Robot Conference in Beijing in 2016, the University of Science and Technology of China unveiled what it believed to be the world's most human-like robots.

The machines could recognise facial expressions, understand human language, and comprehend the age and gender of anyone standing in front of them. They even blinked like humans.

However, even these robots cannot be classed as artificially intelligent, as they do not think for themselves.

The great debate over whether machines will ever be able to think for themselves can be traced back to the 1950s and an English computer scientist called Alan Turing.

In 1950, Turing created the Turing test: a thought experiment designed to assess a machine's ability to exhibit intelligent behaviour indistinguishable from that of a human being.

The idea was that a computer could be said to 'think' if it could fool a human interrogator into believing that they were having a conversation with a human. The interrogator would communicate with the machine only through typed messages.

No machine was reported to have passed the test until 2014, when a computer program called Eugene Goostman, which simulated a 13-year-old Ukrainian boy, was said to have passed the Turing test at an event organised by the University of Reading.

Days after the reported pass, however, holes were picked in its validity. Commentators argued that the result was 'smoke and mirrors' rather than genuine proof of artificial intelligence.

Since 2014, another issue has started to rear its ugly head in the world of AI: racism.

In September 2016, the Guardian newspaper reported on a beauty pageant that had been judged by robots.

The machines involved in the contest were programmed to use supposedly objective factors, such as facial symmetry, to judge the contestants.

But when the results came in, the creators were dismayed to see a glaring pattern linking the winners: the robots did not like people with dark skin.

On 14 April 2017, fresh research published in the journal Science suggested that the robots of the future might have a propensity to be racist.

But is this just more smoke and mirrors, as in the case of Eugene Goostman, or a real problem that needs to be nipped in the bud before scenes from films like Ghost in the Shell become reality?
