One common misconception about technology is that, because it does not have a brain, it is free of the biases humans carry. According to a recent New York Times article, this could not be further from the truth. As the article illustrates, Google has created an artificial intelligence technology called BERT (Bidirectional Encoder Representations from Transformers) that learns from vast amounts of digitized text in order to understand how humans communicate. In the effort to create the most ‘human-like’ technology, BERT has unintentionally been picking up the biases that have plagued society for generations. Because BERT learns from current internet culture, it has exhibited bias against women, people of color, and certain political groups. While AI is a fascinating progression in technology, is it really bettering our society?
I had initially chosen this article simply because it was really interesting, but in creating the citation for it I realized it was written by Cade Metz! While I was unable to actually hear him speak to our class, I know that much of his work ties into the themes of society’s interaction with technology, which we discuss often in class. This article reminds me of Neuromancer and the idea that technology and humanity are slowly becoming interconnected. The line between people and technology is blurring, causing ethical and moral issues that we do not yet know how to combat. In relation to the “Women in Computer Science” documentary, minority groups already face enough discrimination and scrutiny in the workplace, so the prospect of technology holding a bias against them will only heighten these tensions. For women and minorities trying to enter the technology industry, this could especially dampen their passion for following their dreams and revert our society to a more white male-dominated tech field.
There are obvious problems with the prospect of biased AI, and while some of them stem from issues we cannot fix, others come from systemic issues within the field of technology. Similar to how early airbags killed many women and children because there were no women on the teams that developed them, there are very few women on the teams building these AI systems. The grown white men who are programming BERT carry their own biases, which they pass on, and there are few women or minorities present to counteract them. Likewise, in a generation dedicated to social justice and equality, the development of this kind of technology is actively undermining those efforts. While changing people’s minds is hard, changing the trajectory of technology may be even harder.
Metz, Cade. “We Teach A.I. Systems Everything, Including Our Biases.” The New York Times, The New York Times, 11 Nov. 2019, https://www.nytimes.com/2019/11/11/technology/artificial-intelligence-bias.html.