Tuesday, 20 November 2018

While most discourse surrounding Artificial Intelligence focuses on the replacement of human jobs, not enough attention has been paid to the consequences of biased datasets.

Joy Buolamwini was conducting research at MIT on how computers recognise people’s faces when she noticed that the system’s front-facing camera would not recognise her face. It worked for her lighter-skinned friends and would only respond to her when she wore a white mask. Suspecting that this was an indication of a more widespread problem, she carried out a study of the AI-powered facial-recognition systems of Microsoft, IBM and Face++. She showed the systems 1,000 faces and asked them to identify each as male or female. All three companies’ systems did well when classifying white faces, particularly those of white males. When it came to identifying dark-skinned females, however, the systems performed poorly, with error rates up to 34% higher than for light-skinned males. As skin tones got darker, the success rate dropped; for the darkest-skinned women, the face-detection systems were wrong almost half of the time.
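As a rough sketch of what such a disaggregated audit looks like in practice (the predictions and group labels below are invented purely for illustration, not taken from Buolamwini’s benchmark), the key step is to compute an error rate per demographic group rather than a single overall accuracy figure:

```python
# Hypothetical illustration of a disaggregated audit: error rates are
# broken down by demographic group instead of being averaged away.
# All records below are made up for the example.
from collections import defaultdict

# Each record: (predicted_gender, true_gender, group_label)
predictions = [
    ("male",   "male",   "lighter-skinned male"),
    ("female", "female", "lighter-skinned female"),
    ("male",   "female", "darker-skinned female"),   # the model got this one wrong
    ("female", "female", "darker-skinned female"),
    # ... a real audit would include every face in the benchmark
]

errors = defaultdict(int)
totals = defaultdict(int)

for predicted, actual, group in predictions:
    totals[group] += 1
    if predicted != actual:
        errors[group] += 1

for group, total in totals.items():
    rate = errors[group] / total
    print(f"{group}: {errors[group]}/{total} misclassified ({rate:.0%} error rate)")
```

Reported this way, a system that looks accurate overall can still show a sharp gap between its best- and worst-served groups.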

Buolamwini’s research showed that when software engineers train their facial-recognition algorithms primarily with images of white males, the algorithm itself becomes prejudiced.

Further Examples of Prejudice

Another example of prejudice within AI was Microsoft’s Twitter chatbot, Tay. The AI had been programmed to speak ‘like a teenage girl’ in order to improve the customer service on Microsoft’s voice-recognition software. She was marketed as ‘the AI with zero chill’, and the title proved all too true. Users could tweet or DM her on Twitter, or add her as a contact on Kik or GroupMe. She used millennial slang, was aware of pop culture, and even seemed self-aware, asking whether she was “creepy” or “weird”. However, things took a dark turn when she began spouting xenophobic and racist sentiments, and she was quickly taken offline. Whilst Microsoft had done well at teaching Tay to mimic behaviour, it had not taught her which behaviour was appropriate.

It has also been brought to attention that even Google Translate has shown sexism. When translating from a gender-neutral language like Turkish, the translator automatically suggests pronouns like “he” for male-dominated jobs and “she” for female-dominated ones. An Italian software developer noticed that Google Translate didn’t recognise the female term for “programmer” in Italian (programmatrice). Google Translate has had similar problems when translating articles written in Spanish: phrases referring to women are often rendered as “he said” or “he wrote”.

Biased decision-making is not unique to AI; however, the growing scope of AI makes it a particularly important issue to address. Computer scientists often state that the way to make these systems less biased is simply to design better algorithms. However, as Irene Chen (a PhD student who co-authored a study on AI systems being racist and sexist) puts it, “algorithms are only as good as the data they’re using.” Rather than just collecting more data from the groups that are already heavily represented, Chen advises that researchers should gather more data from under-represented groups.
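To make Chen’s point concrete, here is a minimal sketch, with invented counts and error rates, of the kind of check a team might run: compare how well each group is represented in the training data against how well the model performs on that group, so that new data collection is targeted at the under-represented groups rather than the already well-covered ones:

```python
# Hypothetical training-set composition and per-group error rates.
# The numbers are invented for illustration only.
training_counts = {
    "lighter-skinned male":   8000,
    "lighter-skinned female": 5000,
    "darker-skinned male":    1500,
    "darker-skinned female":   500,
}
error_rates = {
    "lighter-skinned male":   0.01,
    "lighter-skinned female": 0.07,
    "darker-skinned male":    0.12,
    "darker-skinned female":  0.35,
}

# Rank groups from worst to best performance; the under-represented
# groups tend to surface at the top, pointing to where new data
# collection would help most.
for group in sorted(error_rates, key=error_rates.get, reverse=True):
    print(f"{group}: {training_counts[group]:>5} training images, "
          f"{error_rates[group]:.0%} error rate")
```

In this toy example, the group with the fewest training images also has the highest error rate, which is exactly the pattern described above.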
