Algorithms of Oppression: Racism and Technology

Kirstin and I led a discussion about racism and technology. The reading, Algorithms of Oppression by Safiya Umoja Noble, was very interesting. Many of us have grown up using Google, and I had never paid attention to any kind of bias within search engines. I found many of Noble’s discoveries to be quite shocking, particularly the case in which a Google search of “black on white crimes” only presented results from white nationalist groups; those search results led a person to plan a shooting, an absolutely horrifying event that was partly caused by the biases of Google’s algorithms. The racism embedded within these algorithms was something I had never really thought about, but I think it is extremely important for people to be educated about these biases, especially as we proceed in the digital age.

The class engaged in discussions about many topics including censorship, imperialism, and commercialization. I found our discussion about censorship to be particularly interesting, and I was slightly surprised by the way the class responded. We discussed the ways in which Google tracks and records user information; we know that if we search for L.L. Bean boots on Google, an ad for Bean boots may show up in our Facebook feed. I was surprised that so many students seemed unbothered by Google’s tracking. Some admitted that they know this surveillance occurs but do not change their habits at all. Maybe Google’s surveillance of us is just so common that we have become numb to its existence. It seems that we have lost our right to privacy in some ways, and I wonder if we will ever get to the point where there will be significant push-back against Google for spying on its users. We emphasize freedom and liberty without censorship in the United States, and we fully support the idea of privacy. When does Google cross the line? It seems that we have to re-establish boundaries with the expanding power of technology, and I wish we had further discussed the implications and significance of Google’s ability to monitor our every digital move.

Another topic I would’ve liked to discuss more was the idea of “safe spaces” on the Internet. Kirstin found an article about a group on Reddit that only allowed black people to join, and each person had to prove that they were not white by including a photo of their forearm. Many students expressed that they do not think these kinds of groups should be acceptable, but I don’t think that the existence of the group in and of itself is an issue. I think Internet groups that target specific demographics are comparable to on-campus clubs that cater to specific groups of people, like the Asian Student Alliance, and I think it is okay for an organization like that to have an online group as well. However, I do think that “proving you’re not white” by showing a picture of your forearm is problematic, because it judges people based only on how they appear phenotypically. For example, a black person who is light-skinned may not be admitted to the group because his skin could pass for white. I thought it was interesting that people in the class were so opposed to having this kind of group, saying that it is important for people to discuss with and learn from others. I do agree that people should learn from others, but I also think it is reasonable for people to have a community of others like them where they can share experiences. I wish that we had discussed this topic in more detail, because I would’ve liked to hear people’s thoughts about whether the same situation would be acceptable if the races were reversed. What if white people made an online group that only allowed other white people? Would we still think that is okay? Why might it be okay for minority groups and not okay for the majority group?

I think technology adds a whole new layer to racism, and it’s really important that we continue to learn about how racism is present in all aspects of life so that we are not blind to the biases inherent in our culture.

4 thoughts on “Algorithms of Oppression: Racism and Technology”

  1. schin2

    I agree with your point about the similarity of Internet safe spaces and multicultural student organizations, where people of color can find a community in which they feel welcome. In answering your question, I think that whites forming a space for other whites only perpetuates racial inequality. Like you said, white people are the dominant group. No white culture exists, so the creation of these white spaces (on the Internet and elsewhere) both amplifies racial animus and furthers the exclusion of POC.

  2. aopongny

    I thought that this discussion on the book “Algorithms of Oppression” was extremely interesting. The internet influences our lives constantly and we don’t even realize it. So many people think that the internet tells the truth all the time, but people created the internet… and people have biases. I agree that our discussion around the topic of censorship was engaging. Many of us in the class believed that search engines like Google take it too far in monitoring what we search, but none of us firmly decided to change our search engines because of their convenience. It is crazy to think that we live in an age where we are always being watched and monitored because of cameras and technology. This topic of algorithms relates to my lesson with Saul on the Inheritance of the Ghetto because of how people associate words like “ghetto” with black people vs. stable/pure/happy with white people. I agree that technology inherits the racist ideology of the creators of certain sites and algorithms, and it’s horrible that people do not even realize it.

  3. cchong3

    I thought that racism and technology was one of the most interesting discussions we’ve had, because we use technology so much that we don’t realize the racism behind it. For example, we don’t realize the biases within search engines. It’s interesting to think about whether the people who create the algorithms for these search engines are racist, since they are the creators and are “training” the technology. It also blows my mind that some technologies that detect skin tone don’t work on certain skin colors. I feel like companies should only sell products that work on everyone, and they should test their products carefully on all types of people before releasing them.

  4. cbritotr

    After a great presentation by Kirstin and Katie on the first three chapters of Algorithms of Oppression, Connor and I followed up with our own discussion of some topics brought up in the latter half of the same book. We spoke of a variety of things in our presentation, highlighting some everyday encounters with racism in technology. Included in this were the soap dispensers that could only function on lighter skin tones, and the Shirley cards, the camera film calibration standards that were designed to make people with lighter skin look better.

    We then looked in depth at the most recent diversity report released by Google. We highlighted the ways in which these diversity reports by tech companies like Google are often very misleading and do not tell the full story of what is truly going on within the company. I am very curious to learn what people of color within these companies believe about the job hierarchy there. I wonder if they truly believe they can be promoted to higher positions, or if maybe they are simply happy to be a person of color within the company in the first place. My gut tells me that a lot of these people go into these companies with tons of ambition and expectation to succeed, but then are hit with a sad reality: the job hierarchy within big corporations like Google almost always favors whiteness, whether it is intentional or not. This brings me to the notion of colorblind racism within these companies, and how much it baffles me that people believe colorblind racism does not exist. If you can look further into the numbers that these companies produce, I really suggest you do so. It’s amazing how these companies project a diverse image when in reality it’s all fake.

    We also looked at the huge trend of revenge porn, a very awful trend that is not only allowed by technology but almost encouraged. It baffles me that there are whole websites dedicated to revenge porn. These revenge porn websites create a safe space for you to potentially ruin another person’s life. I just don’t understand how this can be allowed, and I’m even more frustrated with how long it takes to get something removed from a site. It all comes down to the right to privacy, and a lack of privacy that is extremely overlooked, often swept under the rug. These companies give you giant lists of terms and agreements that almost everyone just skips through and never reads. These giant lists act as barriers for the customer, barriers with an easy fix: a quick click agreeing to a set of terms. This allows millions of users to opt into things that they have no idea they are opting into. It’s just wild to think that so many people, including myself, are being taken advantage of, losing the privacy that we perceive we have. The scariest part is that no one seems to mind this loss of privacy. We are very aware of it, but it’s just not something that affects us directly, until we face our own personal problem and realize that loss of privacy.
