by Kenneth Lota
2 May 2017
*Warning: Spoilers for Westworld and Black Mirror follow*
Technology may enable us to do amazing things, but we humans cannot be trusted with it. At least, such seems to be the implication of two of the most entertaining and impressive science fiction shows currently running on American television: Charlie Brooker’s Black Mirror and Jonathan Nolan and Lisa Joy’s Westworld. Both of these highly intelligent and stylishly made shows explore the future possibilities of the relationship between humans and the technology we create, but ultimately see that relationship take a dark, dystopic turn.
There is something qualitatively different about the technology-driven anxiety of these two shows compared to earlier narratives of machines going out of control. In earlier films, it was the machines themselves who were to blame for the chaos they created. The possibility of advanced artificial intelligence became frightening with the accompanying realization that such intelligence might not share our moral values or our regard for human life. In 1968’s 2001: A Space Odyssey, the menacingly bland and polite supercomputer HAL 9000 decides on its own to murder the human astronauts on its ship in order to eliminate the possibility of human error. In 1984’s The Terminator and its sequels, the machines become self-aware and decide to launch a war against their creators. The human characters of The Terminator are unambiguous heroes, and even the T-800 (Arnold Schwarzenegger) itself becomes heroic when it learns how to act more like a human in Terminator 2. In The Matrix, the computers have trapped humanity in a digital simulacrum of the ordinary world circa 1999, but the human heroes decide it is preferable to live authentically human lives even in a devastated wasteland than to live relatively pleasant ones according to the machines’ rules. In these earlier cases, it is ultimately the machines who are at fault, and the humans who must overcome them in order to live full human lives. The primary value these stories defend is not pure intelligence, but intelligence tempered by a human morality presumed to be inherently good, by humaneness.
In Black Mirror and Westworld, however, we humans are no longer the heroes, and we cannot blame technology for the problems created by our own moral flaws. The first episode of the Netflix-produced third season of Black Mirror, “Nosedive,” depicts a very plausible future world in which all social, political, and economic status is determined by people’s aggregate scores on a social media app in which everyone rates everyone else they come into contact with on a 5-star scale. The protagonist Lacie (Bryce Dallas Howard), an upper-middle-class young woman obsessed with getting good ratings in order to advance her standing, falls into a series of increasingly unfortunate encounters that end with her being thrown in jail. It might be tempting to ascribe her titular nosedive to the technology of the app itself, but to do so is to miss the point: the technology is working exactly as intended. The computers do not rise up against Lacie or make any sort of active decision to undermine her; the app functions perfectly. It is the vindictive ways in which people use the app, and the perverted social system built around it, that lead to Lacie’s downfall.
Throughout most of the other episodes of the third season, the same principle holds true: humans, not robots, are the real threat. “Shut Up and Dance,” an episode built almost entirely on technology that already exists, follows a young man (Alex Lawther) as his life is destroyed by a series of threatening messages and orders conveyed through his computer and smartphone. The devices themselves are not the problem in this episode; the problem is that there are cruel, wicked people on the other end of these communications devices, emboldened by the anonymity the devices enable. “Hated in the Nation,” an episode in which the collective ill-will of the Twittersphere can literally kill, merely gives concrete consequences to the hate spewed daily by all manner of people on the Internet. In these and in every other episode of Black Mirror, it is never the machines themselves that turn on humanity; the machines work perfectly in order to bring about what people ask of them. The fear used to be that advanced technology would turn on humanity and refuse to obey us; now the fear is that it will do what we tell it to after all. The technology is merely a tool; it is our own vileness that will destroy us.
Where Black Mirror proves that advanced technology working as intended can be just as horrifying as an AI rebellion would be, Westworld goes even further in morally re-evaluating our relationship to the machines we create. Westworld is possibly the first ever mass-cultural depiction of a robot uprising to take the side of the robots. The series, which is sprawling in the scale of its cast and number of plotlines, depicts a futuristic theme park in which a large group of robotic “hosts” enable rich human guests to live out their Wild West cowboy fantasies. The robots, who include characters played by Evan Rachel Wood, Thandie Newton, James Marsden, and many others, are initially unaware of their true nature, believing that they really are 19th-century Americans living out in the West. The human guests often seem to be interested in having sex with some robots and getting into shootouts with others (shootouts which by design the humans cannot lose), but rarely have any regard for them as autonomous beings.
(Major spoiler alert): While Westworld is on the whole a show that follows multiple perspectives and rarely makes a thematic statement as clear as those on Black Mirror, the final episode of the first season, “The Bicameral Mind,” tips the audience’s sympathies decidedly in favor of the robots. One of the most surprising and powerful twists of this episode is the revelation that the naïve but heroic William (Jimmi Simpson) and the violent, mysterious Man in Black (Ed Harris) are in fact the same man decades apart, a revelation that drastically expands what the audience had thought of as the show’s chronology and forces us to revise many of our earlier understandings. William, whom the show pointedly associated with the “white hat” traditionally worn by good guys in Westerns, is initially quite compassionate and earnest in his dealings with the pretty “host” Dolores (Evan Rachel Wood); thus, it comes as a real shock to learn that he has over time become the villainous Man in Black (again, the color symbolism is clear). Dolores and the other robots never asked to be created or to be used as amusements by rich tourists, and were it not for the violent abuse they repeatedly suffer they would have no reason to rebel against the humans visiting and running the park. By the time the robot rebellion finally begins at the end of the first season, we in the audience are ready to join them; and yet, it is our own sadism and prurience that we must rebel against. If Black Mirror changes advanced technology from a villain to a passive tool that reflects our own moral failings, Westworld goes so far as to encourage the audience to identify with the robots themselves as if they were the victims of our villainy.
Both Black Mirror and Westworld depict high-tech futures that are simultaneously amazing, at least semi-plausible, and morally troubling. Unlike in the science-fiction cinema of decades past, we do not see the horrors wrought by advanced technology; we see the human horrors that can be perpetrated through it and perhaps even upon it. We have seen the dystopian, high-tech enemy, and he is us.
Kenneth Lota is a PhD candidate in English at the University of North Carolina at Chapel Hill. He focuses on twentieth-century American literature.