Elon Musk’s Robot Hand

OpenAI is an artificial intelligence research company based in San Francisco that recently released a video of a humanoid hand that learned to solve a Rubik’s cube. However, the news and video released by the company do not tell the whole story. The company, which was co-founded by Elon Musk (one of his numerous endeavors), claimed that the robot is “close to human-level dexterity.” But a closer look at the testing shows an impressive robot, not one close to human-like dexterity. The hand dropped the cube eight out of ten times, and its AI needed the “equivalent of 10,000 years of simulated training to learn to manipulate the cube.” The article quotes multiple robotics professionals who explain that the AI is nowhere close to human dexterity, though the robot still has important aspects. According to Ken Goldberg, a roboticist at UC Berkeley, the robot is impressive and is not just hype with no substance, but an AI using reinforcement learning (repeated experimentation to learn a task) to solve the Rubik’s cube is not nearly as impressive as one may believe.
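To give a sense of what “repeated experimentation to learn a task” looks like in practice, here is a minimal, hypothetical sketch of reinforcement learning. It is not OpenAI’s system (which trains a large neural network in massive physics simulations); it is just a tiny tabular Q-learning loop in which an agent learns, by trial and error and reward alone, to walk to the end of a short corridor.

```python
# Minimal, hypothetical sketch of reinforcement learning: a tabular
# Q-learning agent learns, purely by trial and error, to walk to the
# right end of a short 1-D corridor. This only illustrates the idea of
# learning from repeated experimentation -- OpenAI's actual system is
# vastly larger and trains in physics simulation.
import random

N_STATES = 6            # positions 0..5; reaching position 5 ends the episode
ACTIONS = [-1, +1]      # step left or step right
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2

# Q[state][action_index] = learned estimate of future reward
Q = [[0.0, 0.0] for _ in range(N_STATES)]

for episode in range(500):
    state = 0
    while state != N_STATES - 1:
        # Explore occasionally, otherwise exploit what has been learned so far
        if random.random() < EPSILON:
            a = random.randrange(2)
        else:
            a = 0 if Q[state][0] >= Q[state][1] else 1

        next_state = max(0, min(N_STATES - 1, state + ACTIONS[a]))
        reward = 1.0 if next_state == N_STATES - 1 else 0.0

        # Q-learning update: nudge the estimate toward the reward plus
        # the discounted value of the best action in the next state
        best_next = max(Q[next_state])
        Q[state][a] += ALPHA * (reward + GAMMA * best_next - Q[state][a])
        state = next_state

# After training, greedily following Q should always step right.
print([("left", "right")[q[1] > q[0]] for q in Q[:-1]])
```

Even in this toy version, the agent needs hundreds of episodes of failure before the behavior looks competent, which is why the hand’s “10,000 years of simulated training” is less surprising than it first sounds.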

While the accomplishment of a robot solving a Rubik’s cube is undoubtedly impressive, the full circumstances are worth examining. For example, as mentioned above, the hand dropped the cube eight of the ten times it held it. Gary Marcus, a cognitive scientist and AI skeptic, says that if a 6-year-old dropped a cube eight out of ten times “you would take them to a neurologist.” Another important aspect of this robot is how exacting its conditions are: even the slightest disruption can cause it to fail, and learning through reinforcement learning requires an enormous amount of failed attempts, which are usually carried out in simulations that are less accurate than the real world. One last thing to add is that the hand required a special cube with embedded sensors so the system could know the cube’s orientation. Looking at all of these problems facing reinforcement learning AI, it becomes clear that, however intriguing its potential role in the future of technology, it is not ready to be implemented widely.

I saw this article and found it interesting, especially after our discussion on AI in cars on Wednesday in class. This article made me question the future of AI in cars due to the wide variety of things that can be encountered on the roads. Rodney Brooks, “a pioneering figure in robotics,” explains that a robot’s abilities are often over-generalized, which can lead to dangerous assumptions about what it is capable of. For example, we have had iRobot vacuums for years, which are designed to find their way around a house and vacuum it. It is easy to generalize this into a smaller and simpler version of a self-driving car, but that is the exact type of generalization Brooks warns against. While I understand that an AI that solves a Rubik’s cube and an AI system that drives a car are different things, the two are similar enough that the concerns raised about the humanoid hand have raised my own concerns about AI in cars.

Article: https://www.wired.com/story/why-solving-rubiks-cube-not-signal-robot-supremacy/