
Racism in technology

Is technology racist? If it is, what are the implications of these outcomes? And are there any ways we can truly combat bias and prejudice through technology?

Technology applies scientific knowledge to practical ends, which one might assume guarantees objective and accurate outcomes, yet we are seeing more and more subtle instances of machines failing to be objective and accurate. For example, automatic taps and soap dispensers have been unable to detect darker skin tones, and cameras have been designed and calibrated with a focus on lighter skin tones, limiting how well they work for people with darker skin.

In addition, AI has been found to be biased, particularly within facial recognition technology. Joy Buolamwini, a Black MIT student, found that a robot recognised her face much better when she wore a white mask. She discovered that the training data used for facial recognition is not representative of the variety of human skin tones and facial structures.

In 2017, users of the app FaceApp found that its ‘hot’ filter lightened darker skin tones. The company apologised, stating that ‘it is an unfortunate side-effect of the underlying neural network caused by the training set bias, not intended behaviour’. Furthermore, in 2012 the FBI expert Brendan Klare led a study of how well some older facial recognition systems worked across a diverse range of faces; the results showed that the systems did not work as well for Black faces or for women.

Why does this bias exist?

Machines are not human; they do not experience feelings and thoughts the way humans do, yet the most human thing about them is their intrinsic bias. Machines view the world through a coded gaze: they digest pixels from a camera in prescribed ways. With machine learning, we build training sets of example faces that teach machines to detect new ones, so a lack of diversity in those training sets translates directly into limitations in facial recognition. The sketch below makes this concrete.
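
Here is a minimal, entirely synthetic sketch of that mechanism (a hypothetical illustration, not Buolamwini's actual experiment): a classifier trained mostly on one group performs noticeably worse on an under-represented group, even though nothing in the code mentions race at all.

```python
# Hypothetical illustration of training-set bias with invented data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)

def sample(group, n):
    """Toy stand-in for face images: group B's first feature is shifted
    (think lighting or skin tone), so the boundary the model must learn
    sits in a different place for each group."""
    shift = 0.0 if group == "A" else 2.5
    X = rng.normal(0.0, 1.0, size=(n, 5))
    X[:, 0] += shift
    y = (X[:, 0] > shift).astype(int)  # ground truth: "face detected"
    return X, y

# A skewed training set: 950 examples from group A, only 50 from group B.
Xa, ya = sample("A", 950)
Xb, yb = sample("B", 50)
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.hstack([ya, yb]))

# Held-out evaluation: accuracy for group B lags well behind group A.
for group in ("A", "B"):
    Xt, yt = sample(group, 1000)
    print(group, round(accuracy_score(yt, model.predict(Xt)), 3))
```

Running this typically prints accuracy near 0.97 for group A and not far above chance for group B; the disparity comes entirely from the skewed training set, not from anything explicit in the code.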

To save time, code libraries for facial recognition are shared like off-the-shelf parts; because many computer vision projects share the same code, any bias in that code propagates widely and implants a coded gaze. The coded gaze reflects the views of whoever creates the system: all of our work reflects both our aspirations and our limitations. Since there is a lack of diversity in the data encoded within AI and technology, the data is highly homogeneous, and any face that deviates from what has been encoded will not be as readily detected. This suggests that the technology itself is not racist; rather, the unconscious biases of those who develop these technologies produce racially biased outcomes and limitations.

What are the implications?

In the book ‘Weapons of Math Destruction’, the author and data scientist Cathy O’Neil describes the rise of the new WMDs: widespread, mysterious and destructive algorithms. These algorithms are making decisions about employment, insurance and whether people have access to opportunities, and WMDs are increasingly being used in predictive policing. Harmful algorithms are also used by social media platforms like Facebook, YouTube and Google. They create ‘echo chambers’: the algorithm learns what the individual likes and keeps recommending the same type of content so that the individual stays engaged for longer. This can be dangerous, because consuming only content we already agree with limits our exposure to other experiences and viewpoints and widens the divide between people who believe different things. The toy simulation below shows how quickly such a feedback loop narrows a feed.
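
This short sketch is a toy feedback loop, not any real platform's recommender; the topics and numbers are invented. It shows how a "recommend more of what was clicked" rule lets whichever topic gets an early lead crowd out everything else.

```python
# Toy echo-chamber simulation with made-up topics.
import random
from collections import Counter

random.seed(1)
TOPICS = ["politics", "sport", "science", "music", "cooking"]
history = Counter()  # how often each topic has been clicked

for step in range(500):
    if history and random.random() < 0.9:
        # 90% of the time, recommend in proportion to past clicks:
        # the rich get richer.
        item = random.choices(list(history), weights=list(history.values()))[0]
    else:
        # Occasionally surface something fresh.
        item = random.choice(TOPICS)
    # Assume the user engages with whatever is shown, closing the loop.
    history[item] += 1

print(history.most_common())  # one topic ends up dominating the feed
```

Because each click feeds the next round of recommendations, the feed collapses towards a single topic after only a few hundred steps; this rich-get-richer dynamic is the essence of an echo chamber.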

What is being done?

Joy Buolamwini, founder of the Algorithmic Justice League, suggests three ways in which the coded gaze can be combated:

1. Highlight bias – raise awareness about existing algorithmic bias and the societal impacts of AI

2. Identify bias – develop tools for checking bias in existing data and data-centric technology (a minimal sketch of such a check follows this list)

3. Mitigate bias – diversify data, and develop inclusive practices for the design, development, deployment and testing of AI
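
As a sketch of the "identify bias" step, the hypothetical helper below takes a model's predictions, the true labels, and an invented demographic label per example, and reports accuracy per group so that disparities are visible at a glance. All the values are made up for illustration.

```python
# Minimal bias-audit sketch: per-group accuracy on invented data.
from collections import defaultdict

def audit(y_true, y_pred, groups):
    """Return accuracy broken down by demographic group."""
    hits = defaultdict(int)
    totals = defaultdict(int)
    for t, p, g in zip(y_true, y_pred, groups):
        totals[g] += 1
        hits[g] += int(t == p)
    return {g: hits[g] / totals[g] for g in totals}

# Invented example values, for illustration only.
scores = audit(
    y_true=[1, 0, 1, 1, 0, 1, 0, 1],
    y_pred=[1, 0, 1, 0, 0, 0, 1, 1],
    groups=["A", "A", "A", "A", "B", "B", "B", "B"],
)
print(scores)                                       # {'A': 0.75, 'B': 0.5}
print(max(scores.values()) - min(scores.values()))  # the accuracy gap
```

Real audit toolkits compute many more metrics (false positive rates, error rate balance, and so on), but the core idea is the same: break results down by group so that a disparity cannot hide inside an overall average.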

Therefore, if we diversify who sits in the coder's chair, blind spots can be better checked for: we can see whether an over-representation is masking problems, or whether certain groups are being overshadowed. Technology is inevitably an extension of us and of the reality of our world, and though our world is far from free of bias and racial discrimination, focusing on what we can challenge puts us in a better position to get there. If we keep improving technology so that it serves all of us, and not just some of us, we will enable better decisions and judgements to be made, and reduce racial bias.
