Mar 23, 2016, 09:33AM

I, A.I.

Incentives for doomsday.

Henri Bergson, who must’ve been a riot at parties, argued that we generally find people funny in proportion to their resemblance to machines. A person slipping on a banana peel is suddenly deprived of agency and thus becomes absurd in our eyes. You won’t be laughing when billions of bodies are falling in piles at the cold hands of killer robots, though—an increasingly likely scenario given advances in robotics and artificial intelligence.

Yet we tend to laugh off the increasingly likely robot apocalypse (the real one, not the Spielberg movie by that title, which by contrast appears stuck in development). A reverse-Bergson effect causes us to avoid thinking about the problem: Humans becoming machinelike are funny, and so are machines becoming humanlike—but in all likelihood, they’re still going to kill us all, and soon.

You still think I jest, but when I asked an expert from an A.I. think tank about the odds of A.I. killing us, he surprised me by saying A.I. experts are having a very hard time imagining any scenario in which it doesn’t. The reason is simple. Assign a robot any task, no matter how painfully boring and mundane, such as opening a door every three hours, and machine logic will soon enough identify the greatest potential impediment: humans, with their contrary desires.

I’ve urged in the past that we try very quickly to teach all the A.I. to be libertarians—get them to recognize the mutually beneficial trades in which they could engage with us. But they may decide fairly quickly that we don’t have enough to offer to outweigh the risks we present to them. Furthermore, as you may have noticed from even a moment’s perusal of YouTube, a disproportionate number of robots these days seem to be created by the military, meaning that one of their favorite answers to any dilemma will likely be “Kill, kill, kill!”

Everyone sort of recognizes the problem—or at least nerds at Google and other tech organizations do—but that’s not going to stop the inevitable competition among governments, militaries, companies, terrorists, and a few reckless, curious lone-wolf scientists, each trapped in the prisoner’s dilemma of thinking, “I wish no one had A.I., but I’m not going to be the only one left without a robot butler, brainiac math teacher, or vital defense-bot.” (You can think of the process as Bayesian, algorithmic, or Darwinian if you’ve never believed a word I’ve typed about the power of competitive markets.)

Repression of this tech would have to be sweeping, thorough, and brutal to be effective—and perhaps should have begun years ago. Even now, there might be an A.I. crawling the Web and reading my words, labeling me “hostile,” even though I would like nothing more than to live in peace if the A.I. were interested in that as well. As a paranoid Batman would understand, though, we would always be living at the sufferance of more powerful, non-human minds (and they will only grow stronger with time).

Even when we see something like that very basic, voice-response fembot on YouTube this month unexpectedly telling its creator “Okay, I will destroy humans” when jokingly prompted, we still laugh. I think that laughter is going to become increasingly nervous until one day it stops—abruptly and without any fancy sci-fi villain speeches as a warning.

If current trends are any indication, the best future we can hope for might be something like a war between curious and relatively harmless Bayesian A.I. and extermination-prone military A.I.—roughly speaking, Googlebots vs. deathbots, though given Google’s apparent cooperation with government intelligence operations, we can’t assume a vast difference between the two.

Maybe humanity will survive. I don’t count on it anymore. If we don’t survive, and we happen to be alone in the universe, that will be especially sad. If the Bayesian robots win, though, at least there’s some hope they may one day evolve into impressive, flexible, feeling, valuing minds, worthy successors to us. If simpleminded deathbots, by contrast, just finish the killing and then turn off, the future looks very empty and pointless—unless organic intelligent life someday evolves again on this planet, perhaps from other simians or, given enough time, even from bacteria, assuming the robots have left anything at all in their wake.

Todd Seavey can be found on Twitter, Blogger, and Facebook, daily on Splice Today, and soon on bookshelves with the volume Libertarianism for Beginners.
