At what point has Artificial Intelligence become advanced enough to be considered life? Is it once it passes for human? Once it has emotions, wants and needs? Once it’s self-aware?
This is one of several core questions behind Alex Garland’s directorial debut, Ex Machina. The film is a surprisingly intimate sci-fi drama in which Domhnall Gleeson plays Caleb, a computer programmer at a Google stand-in. Caleb is invited to the isolated estate of tech-billionaire-genius Nathan (Oscar Isaac) under the pretext of having won a week-long retreat. Once there, he discovers that the true purpose behind the invitation is far more interesting. In addition to boozing it up with his boss, Caleb is tasked with evaluating an advanced A.I. of Nathan’s creation, Ava (Alicia Vikander). In a series of sessions, he must test her humanity in an updated version of the Turing test. The film is extremely tightly written, its relatively modest budget used to the fullest extent, and like any good entry in the genre, Ex Machina uses its sci-fi premise to generate discussion of modern concerns. [Spoilers] from here on.
Questions regarding mechanical life are, of course, frequently explored in science fiction. Take for instance the 2013 Black Mirror episode, “Be Right Back” (which coincidentally also features Domhnall Gleeson). In the episode, Gleeson plays an automaton, Ash, who is meant to serve as a lifelike replacement for a woman’s dead husband. Ash’s A.I. is very different from the one in Ex Machina. His “intelligence” is a compilation of all of the videos, posts, and tweets of the deceased, used to mimic his speech patterns and mannerisms. Effectively he serves as a very sophisticated chatbot, combining modern-day machine learning techniques with science-fiction language understanding and data processing.
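The episode’s premise — mining someone’s old posts to predict what they would say next — has a real, if crude, analogue in statistical language modeling. As a loose illustration only (the posts and names below are invented, and the episode’s fictional tech is obviously far more sophisticated), here is a toy bigram Markov chain that “learns” a person’s phrasing from their posts and then generates text in their voice:

```python
import random
from collections import defaultdict

# Invented sample posts standing in for someone's social-media history.
posts = [
    "i love long walks on rainy days",
    "i love bad sci-fi movies",
    "rainy days are the best days",
]

def train(corpus):
    """Map each word to the list of words that follow it in the corpus."""
    model = defaultdict(list)
    for line in corpus:
        words = line.split()
        for prev, nxt in zip(words, words[1:]):
            model[prev].append(nxt)
    return model

def mimic(model, start, length=6, seed=0):
    """Generate text by repeatedly sampling a word that followed the
    previous one in the training posts -- pure pattern prediction,
    with no understanding behind it."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        options = model.get(out[-1])
        if not options:
            break
        out.append(rng.choice(options))
    return " ".join(out)

model = train(posts)
print(mimic(model, "i"))
```

The output can sound plausibly like its source while the generator has no self-awareness at all, which is exactly the gap the episode exploits.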
The same questions posed about Ava can also be asked about Ash: just how intelligent is he? Does he have any rights? In Ash’s case, I think the answer to both is quite obvious: not very, and not many. Each of his actions is just the output of a prediction algorithm, anticipating what a human might say in that situation. He has no self-awareness; he feels no emotions, no pain. He acts interested in his own self-preservation, but only after being reminded that self-preservation is what a human would want. At times he can pass for human, but his mistakes are frequent enough that we never see him as anything other than what he is: a machine. And as a machine, he has the same rights that we give to our cell phones: none.
So, back to Ava. What does the film tell us about her? Caleb posits a few theories on the mechanics of how she might “think,” all of which are hastily dismissed by Nathan. At first, it seems possible that her thought process is very similar to Ash’s, mining vast quantities of search-engine data to find the most “human” answer to Caleb’s questions. As the film progresses, however, it becomes apparent that Ava is, in fact, self-aware. She is capable of forming her own original thoughts, has her own goals, and is able to manipulate human emotions to get what she wants.
But how much of this is a result of her programming? Does she want to escape simply because she was programmed to want to escape, or is this a desire she came to on her own? And perhaps more importantly, does it matter? The audience is meant to believe that regardless of how she is thinking, Ava deserves the right to continue to exist, and the right to be freed from her plexiglass prison.
The real question I found myself asking at the movie’s close was not “How human is Ava?” but rather “How machine is Caleb?” In one of the more telling moments in the film, Nathan explains just how machine-like he finds humanity. Just as Ava has been pre-programmed to act in a certain way, Caleb has also been “programmed” by his genetics and life experiences. The implication is that the human thought process is largely deterministic, just as Ava’s is: given an identical set of stimuli, an individual will react the same way nearly every time.
It is this human determinism that makes Caleb so easy to predict and manipulate. Throughout the film, Ava is evaluating Caleb without his knowing, deciding the best possible reaction to gradually nudge him in a specific direction. Her stories of loneliness and feigned crush are revealed to be a ruse, and Caleb falls for it hook, line, and sinker. We see Nathan fall victim to his own programming as well, succumbing to his alcoholism on a regular basis despite obsessing over his fitness, and creating Ava even though he believes artificial intelligence will be humanity’s eventual demise.
Perhaps the one separation Garland implies between machine and man is our ability to empathize. Ava is able to analyze human emotions, but she gives no indication that she actually connects emotionally with any of the characters. In fact, her actions prove the opposite: she coldly watches as her creator dies in front of her, showing no remorse, only mild curiosity. Clearly her feelings for Caleb were fabricated, as she leaves him to die without a second thought. However, I would argue that this is simply another result of her programming. Nathan built Ava to understand human emotions, and even to have her own, but not to empathize with the emotions of others. This was a miscalculation on his part, but one that could be fixed in future iterations.
In what I consider to be the best scene of the movie, we see Caleb standing in front of a mirror. Shaken by the realization of just how human-like the A.I.s are, he begins to tug at his own skin. He checks his eyes and teeth, looking for indications of artifice, questioning his own humanity. He is only satisfied once he cuts into his own arm, proving there is no machinery beneath the flesh. In those moments, Garland wants his audience to consider their own humanity. How much of what we want and desire stems from free will, and how much has simply been programmed into us by thousands of years of evolution and a lifetime of experiences? Ex Machina argues that we may be more machine-like than we’d like to think.