There’s always something floating around on the internet that captures people’s attention. The attention span of adults today is shorter than a toddler’s, and social media has made it worse. Twitter is not the beginning and end of any particular human existential problem. It rarely reflects the actual reality that people live daily. Nevertheless, Twitter’s virtuality does reveal certain aspects of humanity.
This was evident in a recent spate of A.I. videos shared widely on Twitter. Labeled by many as “This AI Generated Pizza Commercial is Terrifying,” a video that’s supposed to be an ad for a fictional pizza restaurant called “Pepperoni Hug Spot” is anything but terrifying. It is weird. People are stuffing their faces with pepperoni and vegetable pizzas, except they also appear to be taking a chunk of their chin along with a huge slice of oozing cheese. The announcer isn’t inviting. Although he’s supposed to be male, his distorted voice sounds more like that of a vengeful terrorist who has just kidnapped the president’s daughter and is demanding either money or retribution.
Adding to this collection of weirdness is another A.I. video, a series of clips of Will Smith eating spaghetti. Just like the computer-generated people in the pizza commercial, Smith looks like a blob whose mouth can’t seem to stay on his face. Is he eating his lip or the spaghetti?
As expected, the reactions on social media have ranged from indifferent to hysterical. Acting like the ape-men in Stanley Kubrick’s 1968 film 2001: A Space Odyssey, people ended up both afraid of and in awe of this supposedly new technology. Robots and androids will take over the world! There will be nothing left of humanity! They will rule us! It’s the usual apocalyptic nonsense.
We forget that the computer itself can’t generate anything without human input. And even if for a brief moment we entertain such science fiction, “creating” a strange pizza commercial is hardly the pinnacle of artificial intelligence. Many believe that some form of A.I. will eventually become sentient and separate from its human creator. The assumption is always that this newly self-created creature will exercise its will, and that the first thing it will choose is self-preservation, survival, and domination over the other. This, unfortunately, reveals nothing about robots or androids, but much about our fallen nature and how we see ourselves.
In the midst of the cacophony, a voice of reason emerges. Jaron Lanier, widely considered the father of virtual reality, says that we must be crazy if we think A.I. will take over the world. In a recent interview with the Guardian, Lanier rejects the notion that A.I. will somehow outsmart us; he rejects the very terminology “artificial intelligence.” The idea that a machine can surpass our intelligence is science fiction, and any comparison between human and machine is useless. He says, “This idea of surpassing human ability is silly because it’s made of human ability… It’s like saying a car can go faster than a human runner. Of course it can, and yet we don’t say that the car has become a better runner.”
Humans like finding a scapegoat for their fundamentally human problems. We do this when we encounter a culture or civilization we disagree with: it’s easier to dismiss or hate something unfamiliar. Our reaction is more often fear than curiosity, though curiosity itself can be dangerous.
Much has changed in the last 15 years or so, particularly with the advent of social media. How we relate to each other is different because many people are leaving the embodied world and entering a virtual one. But humanity remains the same. No matter how much virtual reality we crave and accept, we’re still left with ourselves. This means that we shouldn’t look away from our own capacities for good and evil and claim that human extinction will finally occur because of machines, or artificial intelligence. There are things to consider, like the economic implications of computers executing the work of human beings. But that is not the world of science fiction; it is the world of capitalism gone wild.
In the same interview, Lanier makes an excellent point. In his view, “the danger isn’t that a new alien entity will speak through our technology and take over and destroy us. To me the danger is that we’ll use our technology to become mutually unintelligible or to become insane, if you like, in a way that we aren’t acting with enough understanding to survive, and we die through insanity, essentially.”
We already have enough trouble trying to communicate with each other, and we’ve created a virtual Tower of Babel. Some live in fear that the machines or androids will become sentient beings with free will. Perhaps we shouldn’t worry too much about the machines becoming sentient. Rather, we should focus on our sentience, our free will, and our intelligence. Every time we obsess over artificial intelligence taking over, we’re making ourselves artificial and stupid.