HBO’s new series Westworld begins with a clever narrative twist. We’re shown a handsome man named Teddy (James Marsden) on a train as he approaches a Wild West theme park populated by androids. We follow him after his arrival, watching as he warily navigates his harum-scarum surroundings and the coincidences and chance desires that constitute the lifeblood of human experience.
In particular, he meets an android woman named Dolores, with whom he shares an apparent romantic history, and the two of them spend the day flirting and reminiscing. When night falls, a Man in Black (Ed Harris) arrives to shatter the quaint idyll with a shocking outburst of rape and murder. When Teddy attempts to fight back, viewers are disturbed to discover that Dolores is not the only android involved here: Teddy’s consciousness is synthetic too. The Man in Black, a human guest, ponders the peculiar cruelty of directing two artificial beings to fall in love with each other, grimly remarking that their sad encounter with desire was concocted merely to enhance his sadistic pleasure.
This shift in perspective lays the basis for the show’s central thematic ambiguity: the question of who’s human and who’s android, and whether we can articulate a meaningful difference between the two. Even before we learn much of anything about Westworld and its android hosts, we’re already “on their side” in a sense, caught up in the uncanny feeling that these hosts are something more than mere objects, that they’re edging very close to distinct subjectivity. That’s why we feel sympathy and disgust when Westworld guests gleefully slaughter them. Our consciences intuit that even if they are technically just manmade machines, even if they fit comfortably into our current definition of the subhuman, they still shouldn’t be maimed and killed at will. Because perhaps… those so-called androids matter?
Westworld may trade in Western genre imagery, but its philosophical underpinnings are hardly an exercise in frontiersmanship. The question of android sentience is a timeless theme in science fiction, and the mystery of consciousness has been an intractable problem in the Western philosophical tradition for thousands of years. At the risk of vulgar oversimplification, suffice it to say that scientists and philosophers alike remain stumped.
One of the finest pop-culture explorations of this topic is found in Star Trek: The Next Generation, in the character arc of the android Lieutenant Commander Data. In “The Measure of a Man,” a classic episode from season two, Data is confronted by a cybernetics expert who wants to disassemble him for research. When Data objects to the procedure, the question arises as to whether he has any legal right to refuse.
Is Data really nothing more than a computer we can turn on and off when we like, or does the sophistication of his intelligence and personality demand the status of bona fide personhood? In other words, does Data matter?
The question is addressed at trial, with Captain Picard delivering a stirring defense of Data’s right to autonomy and self-determination. Picard’s exquisite courtroom performance is worth watching in full, but in brief, the issue is resolved in favor of Data’s personhood. Why? Simply because denying it would raise questions that we as free, thinking beings are still not quite prepared to answer. Namely, Picard invokes the nagging aporia of our own inability to prove one another’s sentience, a profound tragedy isolated by Hegel’s master-slave dialectic.
Another fine investigation of android personhood comes from Philip K. Dick, the master of psychedelic sci-fi, in his novel Do Androids Dream of Electric Sheep? Although the novel will always live in the shadow of its stylish film adaptation Blade Runner, it remains my favorite Philip K. Dick novel.
In order to isolate the ethical kernel of Dick’s novel, a futuristic noir yarn about the adventures of the android-hunting bounty hunter Rick Deckard, the story needs to be considered in its own internal social context. The novel takes place in the post-apocalyptic dust of environmental catastrophe, and the spiritual milieu is such that caring for live animals is a hallmark of social status. Meanwhile, a hip new religion based on universal human empathy has gained popularity, its adherents using “empathy boxes” to gain access to authentic intersubjective experience.
But this social morality is rendered schizoid by an obvious internal tension. This supposedly empathic human culture is dependent on an android slave race, beings sophisticated enough to rise to the level of dubious sentience. It’s Deckard’s job to keep androids in their place by retiring (destroying) those that go rogue. When Deckard is arrested and accused of being an android himself, all of the obvious questions follow, to wit: Just who do we think we are when we draw lines between us and them?
Dick first became interested in the distinction between human and android consciousness when he was researching Nazi behavior for his landmark work of alternate history The Man in the High Castle. His plunge into Nazi psychology made him question what it meant to abandon one’s humanity, i.e. “the problem of differentiating the authentic human being from the reflex machine I call an android.”
For Dick, the essence of humanity lies in the action of ethical refusal: “the capacity to say no when what one was told to do was wrong. Someone saying, ‘No, I won’t kill. I won’t bomb.’ A balking.”
In this sense, Dick’s conception of the android is far darker than the peachy humanism of Jean-Luc Picard. Far from human, the android is simply the organism that finds in its very nature the capacity to follow orders.
Dick’s Do Androids Dream was published in 1968, and four years later he delivered a speech called “The Android and the Human,” which he referred to as the most important thing he’d ever written.
He describes a world in which machines are becoming more human and humans are becoming more like machines, a futuristic paradox whereby the possibility of a human-android distinction slips slowly through our fingers as a matter of mere indifference. Dick took seriously the idea that “we and our elaborately evolving computers may meet each other halfway” one day, and so the question was no longer whether an android could rise to the level of the human, but whether humans had sunk to the level of mere androids.
“We are merging by degrees into homogeneity with our mechanical constructs, step by step, month by month, until a time will perhaps come when a writer, for example, will not stop writing because someone unplugged his electric typewriter but because someone unplugged him.”
Yet throughout the address, Dick maintains a pained optimism sourced in what he saw as the authentically futuristic and quintessentially human character of youth culture. The most casual transgressions are anti-android insofar as they signify “a mere lack of agreement that one must always do what one is ordered to do.” This all-purpose protest ethic points toward the possibility of a humane, happy future. If the hallmark of machinery is predictability, then what we need to fight the future are kids who “just plain will not jump when the whip is cracked.”
Science fiction is fundamentally concerned with imagining a future that inspires us to reassess the present. The android is a useful construct that cautions us about the human tendency to abuse and degrade the sentient other, and it can warn us about the authoritarian values we’re internalizing. Most importantly, the specter of the android reminds us to take seriously matters of consciousness and suffering, particularly when it comes to the way some flesh-and-blood humans are treated on a day-to-day basis in a wealthy, hyper-technological culture supposedly built on universal humanism.
Even if Westworld isn’t breaking new ground, its sleek and thoughtful approach to questions of sentience and personhood is still worth exploring, today more than ever. In an embarrassing political culture still stuck on the basic social question of who matters (whose education, whose health, whose life matters), one can do worse than seeking the answer to that question in the company of androids.