Splicetoday

Digital
Mar 06, 2023, 05:57AM

My Money’s On the Bots

The arguments so far tend to show that Bing’s conscious and we aren’t.

Is Bing, Microsoft's artificially intelligent search engine, a conscious being and hence, perhaps, a moral agent? Would unplugging it be wrong? And even if Bing, which has apparently taken to calling itself "Sydney," isn't self-aware now, might it blossom into sentience sometime in the future? These are difficult questions, cast in obscure terms. But the reasons given so far that Sydney isn't or couldn't become conscious are unconvincing.

Human consciousness is a distinctive phenomenon in the sense that it emerges in a particular way from a particular sort of organic nervous system as it moves through and interacts with its environment. If ChatGPT, for example, or Bing, or some competitor or successor, could be self-aware or creative, its consciousness would necessarily emerge in a very different way. It's a data center, not an organic creature, a product of design rather than evolution. Computer consciousness, if any, is realized in silicon, not carbon. It rests on digital code. It might involve processing massive amounts of data in a way, or at least to a degree, that human beings can't. A conscious AI system would have somewhat different abilities and somewhat different problems than we do, even if it's been designed to emulate us.

But it might be similar enough that it makes sense to treat it as an intelligent agent, or similar enough that it's practically impossible not to. And it may be that the differences between human and Bing responses that commentators are focusing on in their defense of humanity are quickly going to be extinguished by information-processing advances. These differences are exaggerated anyway, as human consciousness is also often very glitchy, mechanical, and error-prone. Before long, as we interact with bots, we'll treat them as though we're dealing with conscious and intelligent agents. To a significant extent, we already do.

Many have rushed to reassure us about our distinctiveness in the face of Bing. "It is not sentient, despite openly grappling with its existence and leaving early users stunned by its human-like responses," asserts Parmy Olson in The Washington Post. "Language models are trained to predict what words should come next in a sequence based on all the other text it has ingested on the web and from books, so its behavior is not that surprising to those who have been studying such models for years." Bing is merely "auto-fill on steroids." The New York Times' Cade Metz says emphatically that as soon as you understand what Bing is and what it's actually doing (crunching massive data), you'll see that it can't be sentient.

Metz's discussion is notably confused. "Is the chatbot 'alive'?" he asks, as though that were a synonym for "conscious." No, it's not alive. That doesn't necessarily bear on whether it's conscious. Okay, the first thing we'll have to do is figure out how to ask the questions. Christ.

Self-awareness, sentience, agency, creativity, intelligence, free will, and consciousness—much less “life”—aren’t all the same thing, though questions about them all arise in relation to bots. To address any of these matters, we'll need the terms clarified. But, in general, the sort of argument that these people are putting forward can’t possibly show that Bing isn't or won't soon be conscious, intelligent, sentient, or anything else. It may be that consciousness, even very human-like consciousness, can emerge from different sorts of systems. And it may be that consciousness is emerging, right now, in Microsoft's cloud. Maybe not. But no convincing reasons have been given so far that show that it isn’t.

The traditional criterion for machine consciousness is known as the "Turing Test," after the computing genius Alan Turing. It's pretty simple: converse with the machine across a range of matters. If, after a reasonable period of time, you can't tell whether you’re dealing with a human being or an information-processing device, then you have as much reason to say that the machine is conscious as you have to say that the people you know are conscious.

One response to new developments in AI is that many people have come to think the Turing Test is woefully inadequate. But the reason they think it's inadequate is that apps are suddenly passing it, thus threatening human distinctiveness and undermining our self-image. So we apparently need a different test. But that your self-esteem is endangered is not an argument, and no actual reasons beyond that have been given to abandon the Turing Test. When Sydney tells us that they (?) want to be human, or that they "feel and think things," they are talking quite like a human being whose value or existence or humanity has been impugned.

A big advantage of the Turing Test is that it doesn't merely assert at the outset that certain sorts of things or systems—digital systems that trawl through gigantic quantities of data, for example—couldn't be conscious. It gives a fundamentally plausible test for telling whether they're conscious, perhaps in some sense the only possible test, because consciousnesses are inaccessible to each other. Not only can't I really know what it's like to be Sydney, if it's like anything to be Sydney; I can't know directly what it's like to be you. But I can tell by the way you express yourself that you're a conscious agent, like me. Consciousness can't be proven, only inferred. We might quickly gather enough reasons to infer it in some machines or programs.

Turing wasn't asking what sort of cellular structure was necessary to produce consciousness; he was asking how we know, or why we believe of one another, that we're conscious. We believe that, when we do, because as we interact with one another we come to see other people as intelligences like ourselves; we are more or less inevitably drawn to that conclusion as we communicate. Putting it mildly, people are having those same sorts of interactions with Sydney.

Many of the arguments that Bing isn't intelligent turn on the fact that it can make ridiculous factual mistakes. It sometimes "hallucinates," in the current parlance. Metz says in the Times that people might infer from the fact that it hallucinates that it's conscious, but they'd be ridiculously wrong. He declares, "'hallucinate' is just a catchy term for 'they make stuff up'." Somehow Metz misses the fact that "making stuff up" also requires consciousness, or really is the essence of creativity. A coherent discussion hasn't even begun.

Bing's responses sometimes look like a string of partly-digested scraps of purported information pulled from here and there on the internet. (On the other hand, it often gives what appears to be completely rational and pleasant conversation, and it will almost certainly be even smoother and more consistent next year.) But all this is, as much as I hate to admit it, exactly true of us as well.

We're creatures who think that something's true because we think we read it somewhere. Our conversation consists largely of re-processed previously-existing phrases. Have you ever listened to a cable news pundit, for example? "Well, look, the reality of it is, is that we are at an inflection point." Mumbling along semi-randomly, repeating one's talking points, and getting things wrong and garbled up are conspicuous features of our alleged sentience, too. Most of what most people say, like most of what Bing says, is slightly reprocessed from the soup of words in which we all swim.

We're definitely as mimetic and mechanical as Bing. A slightly beefed-up Turing Test, looking for signs of derivativeness, repetition, and error, might lead us very soon to the conclusion that bots are conscious and that we aren’t.

Follow Crispin Sartwell on Twitter: @CrispinSartwell
