Kirstin and I led a discussion about racism and technology. The reading, Algorithms of Oppression by Safiya Umoja Noble, was very interesting. Many of us have grown up using Google, and I know that I had never paid attention to any kind of bias within search engines. I found many of Noble's discoveries quite shocking, particularly the case in which a Google search for "black on white crimes" presented only results from white nationalist groups; those search results influenced a person who went on to plan a shooting, an absolutely horrifying event that was caused in part by the biases of Google's algorithms. The racism embedded within these algorithms was something I had never really thought about, but I think it is extremely important for people to be educated about these biases, especially as we proceed further into the digital age.
The class engaged in discussions about many topics, including censorship, imperialism, and commercialization. I found our discussion about censorship to be particularly interesting, and I was slightly surprised by the way the class responded. We discussed the ways in which Google tracks and records user information; we know that if we search for L.L. Bean boots on Google, an ad for Bean boots may show up in our Facebook feeds. I was surprised that so many students seemed unbothered by Google's tracking. Some admitted that they know this surveillance occurs but do not change their habits at all. Maybe Google's surveillance is so commonplace that we have become numb to it. It seems that we have lost our right to privacy in some ways, and I wonder whether we will ever reach the point where there is significant push-back against Google for spying on its users. In the United States we emphasize freedom and liberty without censorship, and we fully support the idea of privacy. When does Google cross the line? It seems that we have to re-establish boundaries as the power of technology expands, and I wish we had further discussed the implications and significance of Google's ability to monitor our every digital move.
Another topic I would've liked to discuss more was the idea of "safe spaces" on the Internet. Kirstin found an article about a group on Reddit that only allowed black people to join; each person had to prove that they were not white by posting a photo of their forearm. Many students said they do not think these kinds of groups should be acceptable, but I don't think the existence of the group in and of itself is an issue. Internet groups that target specific demographics are comparable to on-campus clubs that cater to specific groups of people, like the Asian Student Alliance, and I think it is fine for an organization like that to have an online group for its members. However, the requirement of "proving you're not white" by showing a picture of your forearm does seem problematic, because it judges people based only on how they appear phenotypically. For example, a light-skinned black person might be denied admission to the group because their skin could pass for white. I thought it was interesting that people in the class were so opposed to this kind of group, arguing that it is important for people to discuss issues with and learn from others unlike themselves. I agree that people should learn from others, but I also think it is reasonable for people to want a community of others like them where they can share experiences. Still, I wish we had discussed this topic in more detail, because I would've liked to hear people's thoughts on whether the same situation would be acceptable if the races were reversed. What if white people made an online group that only allowed other white people? Would we still think that is okay? Why might it be okay for minority groups but not for the majority group?
I think technology adds a whole new layer to racism, and it’s really important that we continue to learn about how racism is present in all aspects of life so that we are not blind to the biases inherent in our culture.