In a recent article published in Scientific American, Gul Deniz Salali looks to hunter-gatherer children in the Congo for information on the future of artificial intelligence. Why? Because Salali thinks machine learning should be modeled after the learning styles of young children. In the article, Salali explains how young children are expected, more or less, to teach themselves through observation and occasional feedback from other members of the community. According to Salali, humans have a great capacity to learn by imitation, which allows cultural practices to be passed on to new generations and then expanded upon. Learning by imitation, Salali writes, “is how human culture progresses. Our cultural traits are built upon the legacies of the past information. But this means they are also restricted by them.”
This passage leads to Salali’s argument about how artificial intelligence will one day surpass humans. Because machine learning does not have to be restricted by past information (the constraint Steven Johnson discusses when he writes about the “adjacent possible”), it has the capacity to eventually outsmart humans, according to Salali. She then points to an example of a machine that has been learning like the kids in the Congo: by exploring and then getting smarter primarily from internal feedback. AlphaGo Zero, a computer program that plays the board game Go, has become the best player in the world by “learning through self-play.” AlphaGo Zero managed this because, unlike humans, the program does not have to base its “game strategies on the 3,000 years of accumulated knowledge.”
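The article doesn’t go into how “learning through self-play” actually works, and AlphaGo Zero’s real method (deep networks plus tree search) is far beyond a blog post. But here is a minimal, hypothetical sketch of the same idea in the simplest setting I could think of: a single agent playing both sides of the pile game Nim and improving only from the outcomes of its own games. The game, parameter values, and function names below are my own illustrative choices, not anything from the article.

```python
# A minimal sketch of "learning through self-play" (not AlphaGo Zero's actual
# algorithm): tabular Q-learning on Nim, where one agent plays both sides and
# learns purely from internal feedback (who won).
import random
from collections import defaultdict

PILE = 10          # starting number of stones
MAX_TAKE = 3       # a player removes 1..3 stones; whoever takes the last stone wins

Q = defaultdict(float)   # Q[(stones_left, take)] -> estimated value for the player to move
ALPHA, EPSILON = 0.5, 0.1

def legal_moves(stones):
    return [t for t in range(1, MAX_TAKE + 1) if t <= stones]

def choose(stones, explore=True):
    moves = legal_moves(stones)
    if explore and random.random() < EPSILON:
        return random.choice(moves)          # occasional exploration
    return max(moves, key=lambda t: Q[(stones, t)])

def train(episodes=20000):
    for _ in range(episodes):
        stones = PILE
        while stones > 0:
            take = choose(stones)
            nxt = stones - take
            if nxt == 0:
                target = 1.0                  # the mover took the last stone and wins
            else:
                # The next position belongs to the opponent, so the mover's value
                # is the negative of the opponent's best estimated value there.
                target = -max(Q[(nxt, t)] for t in legal_moves(nxt))
            Q[(stones, take)] += ALPHA * (target - Q[(stones, take)])
            stones = nxt

train()
# After training, the agent rediscovers the classic strategy for this game
# (leave the opponent a multiple of 4 stones) without ever being told it.
for s in range(1, PILE + 1):
    print(s, "->", choose(s, explore=False))
```

Even this toy version shows the point Salali is making: nothing in the program encodes accumulated human strategy, and it still converges on the optimal play just by playing itself.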
Though it is very interesting to imagine technology that can “think,” I honestly don’t see the point. I can’t help but think back to the writing we read in class from Lewis Mumford, cautioning us against technology that gets rid of the human component. This “machine learning” technology fits the definition of authoritarian technology exactly: humans no longer have control. Aren’t humans good enough at thinking that we don’t need machines to do it for us? I also wonder about the ethics of a computer that can think for itself: what repercussions would arise?
https://blogs.scientificamerican.com/observations/what-do-machine-learning-and-hunter-gatherer-children-have-in-common/