Last week Microsoft announced plans to use artificial intelligence to censor Xbox Live chats in real time. Xbox Live chats are notorious for their vulgarity, and the censoring would give players optional control over what they hear. At least for now, machine learning is not accurate enough to censor specific words on the fly, but Microsoft says it can easily measure the level of profanity coming from a user and mute them accordingly. This could also affect a user's Xbox Live social reputation score (as if the censorship weren't already making them sound enough like China). Overall, I think the idea of live AI censoring is very intriguing and could be applied to much more than game chats: social media posts, live streams, and other content that can reach a large audience within seconds. If a censoring AI were to take advantage of other AI-based detection systems, like Google's Lens and AI Cam, it could potentially censor photos and videos as well.
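To make the "measure the level of profanity and mute accordingly" idea concrete, here is a minimal sketch of how such a rate-based filter might work. This is purely hypothetical, not Microsoft's actual system: the word list, window size, and threshold are all placeholder assumptions.

```python
# Hypothetical sketch of rate-based profanity muting (NOT Microsoft's system).
# Rather than bleeping individual words, track each speaker's recent
# profanity rate and mute them once it crosses a threshold.
from collections import deque

# Placeholder word list; a real system would use a learned classifier.
PROFANITY = {"darn", "heck"}

class ProfanityMuter:
    def __init__(self, window=20, threshold=0.5):
        self.recent = deque(maxlen=window)  # rolling record of recent words
        self.threshold = threshold          # profane fraction that triggers a mute
        self.muted = False

    def hear(self, utterance: str) -> bool:
        """Record an utterance; return True if the speaker should now be muted."""
        for word in utterance.lower().split():
            self.recent.append(word in PROFANITY)
        rate = sum(self.recent) / len(self.recent)
        self.muted = rate >= self.threshold
        return self.muted
```

The point of the rolling window is that a single slip doesn't mute anyone; only a sustained stream of profanity does, which matches the "level of profanity" framing rather than word-by-word censorship.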
While I think that this technology is a good idea and should be explored, its consequences need to be considered as well. If AI were ever used to filter television or the internet, it could have an enormous impact on what information people consume. As such, the engineers working on such a project bear a lot of responsibility in determining what should or shouldn't be filtered. It is also possible that their personal biases would seep into the AI system, although one could argue that personal biases affect human censors on a daily basis as well.
While my last paragraph was skeptical of AI censoring, I would also like to show some of the reasons this technology needs to be developed. For example, just this month, 2,200 users viewed a video of a shooting outside a synagogue in Germany that was initially live streamed on Twitch. It took Twitch's human moderators 35 minutes to take the video down; to me, that is 35 minutes too long. Watching something like that could be scarring. In Christchurch, New Zealand, a gunman live-streamed on Facebook Live as he killed 51 people at two mosques. If AI could censor these videos, what reason do we have not to use it? Nor is this only an internet issue, as censoring has been an ongoing problem for live television: in 2012, Fox News accidentally broadcast a suicide on live TV. To me, it seems inevitable that all of these live channels and sites will eventually take advantage of AI to censor their content. The benefits seem to outweigh the cons, but that also depends on who's doing the coding.