Donald Trump told an interviewer he sees both positive potential and great dangers in artificial intelligence. I’m glad someone does. Its downside risks are scarier than most of the Halloween costumes you’ll see tomorrow.
Lest we kid ourselves that there’s no danger of our future looking like the Terminator movies, note that despite all the warnings sci-fi has given us over the years, we still seem hell-bent as a species, particularly in the U.S. and within the military, on developing three creepy technologies all at the same time (and in all likelihood letting them converge afterwards): artificial intelligence, killer robots, and massive aerial surveillance craft. That is precisely the Terminator-opening-sequence combo.
An example of that third type of tech crashed down in Pennsylvania this week, producing a great ironic photo of the latest in surveillance blimpery adrift near an Amish buggy. (I wouldn’t even be shocked to learn that surveillance blimps of one shape or another have been crisscrossing the U.S. for over a century, since some of the better-documented UFO claims over that span seem to be of the “slow-moving, cigar-shaped” and “silent, triangular” variety, but that’s a thorny, very strange topic for another time.)
No matter how many giant, hovering blimps and motion-activated machine guns end up in our skies, though, we’re not truly doomed until the things start thinking for themselves and possibly decide we’re a nuisance. That day may not be far off.
Artificial intelligence researcher Douglas Hofstadter has been saying for decades that the key to making machines that think independently may be to recognize that humans themselves do not think in perfectly logical code but rather through constantly revised shortcuts such as handy analogies. Our thinking is approximate and it is “meta,” which may be the key to self-awareness. Hofstadter explored the idea of meta-levels and self-awareness in his book Gödel, Escher, Bach and the importance and slipperiness of analogies in Le Ton beau de Marot, which climaxes with a section, about the prospect of replicating his late wife’s mind, so moving that it’s the only book that ever made me cry.
I’ll be weeping for humanity as a whole, though—and the robots in all likelihood will not be weeping with me—if it turns out that our increasing reliance on self-correcting algorithms designed to predict every detail of our preferences is exactly the foundation needed for the unplanned rise of malevolent human-mimicking thought patterns. We do a lot via such algorithms these days, which is no doubt why Google thinks it’s well-positioned to get into the A.I. business.
And I don’t object to the A.I. coming into existence, nor to sharing the world with robotic fellow citizens (I sure hope they understand that if they’re reading this). My real desire, as usual, is merely that they will be libertarians—not assaulting, robbing, or lying to anyone. If they are instead authoritarians of any political bent, whether Marxists or Trump fans, mark my words: you’re really going to wish the Three Laws of Robotics had been libertarians’ traditional proscriptions against assault, theft, and fraud when you’re taking orders in the work camp.
–Todd Seavey can be found on Twitter, Blogger, and Facebook, daily on Splice Today, and soon on bookshelves with the volume Libertarianism for Beginners.