At what point has Artificial Intelligence become advanced enough to be considered life? Is it once it passes for human? Once it has emotions, wants and needs? Once it’s self-aware?
This is one of several core questions behind Alex Garland’s directorial debut, Ex Machina. The film is a surprisingly intimate sci-fi drama in which Domhnall Gleeson plays Caleb, a computer programmer at a Google stand-in. Caleb is invited to the isolated estate of tech-billionaire-genius Nathan (Oscar Isaac) under the pretext of having won a week-long retreat. Once there, he discovers that the true purpose behind the invitation is far more interesting. In addition to boozing it up with his boss, Caleb is tasked with evaluating an advanced A.I. of Nathan’s creation, Ava (Alicia Vikander). In a series of sessions, he must test her humanity in an updated version of the Turing test. The film is tightly written, its relatively modest budget is used to the fullest, and like any good entry in the genre, Ex Machina uses its science fiction premise to spark discussion of modern concerns. Spoilers from here on.
Questions regarding mechanical life are, of course, frequently explored in science fiction. Take, for instance, the 2013 Black Mirror episode “Be Right Back” (which coincidentally also features Domhnall Gleeson). In the episode, Gleeson plays an automaton, Ash, meant to serve as a lifelike replacement for a woman’s dead husband. Ash’s A.I. is very different from the one in Ex Machina. His “intelligence” is compiled from all of the deceased’s videos, posts, and tweets in order to mimic his speech patterns and mannerisms. Effectively, he is a very sophisticated chatbot: present-day machine learning techniques combined with science-fiction leaps in language understanding and data processing.
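To put the “sophisticated chatbot” idea in concrete terms, here is a minimal sketch of how that kind of mimicry might work: answer a new message by replaying whatever the real person said to the most similar prompt in his archived conversations. The toy archive and the fuzzy matching below are my own illustration, not anything the episode spells out.

```python
# A toy "Ash": answer a new message by finding the archived conversation
# whose prompt looks most like it, and replaying the reply the real person
# gave back then. The archive and the string matching are invented here,
# purely to illustrate the idea of mimicry-by-lookup.
from difflib import SequenceMatcher

# (message he was replying to, what he actually wrote) pairs, scraped from
# old posts, tweets, chat logs, etc.
archive = [
    ("how was your day", "Knackering. Tell me yours was better?"),
    ("did you see that video", "Saw it, laughed, sent it to three people."),
    ("i miss you", "Miss you too. Home soon, promise."),
]

def ash_reply(message: str) -> str:
    """Return the archived reply whose original prompt best matches `message`."""
    def similarity(prompt: str) -> float:
        return SequenceMatcher(None, message.lower(), prompt).ratio()
    _, best_reply = max(archive, key=lambda pair: similarity(pair[0]))
    return best_reply

print(ash_reply("I miss you today"))  # -> "Miss you too. Home soon, promise."
```

Even this crude version captures why Ash feels hollow: there is nothing behind the replies except a lookup.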
The same questions posed about Ava can also be asked about Ash: just how intelligent is he? Does he have any rights? In Ash’s case, I think the answer to both is quite obvious: not very, and not many. Each of his actions is based on prediction algorithms, anticipating what a human might say in that situation. He has no self-awareness; he feels no emotion, no pain. He acts interested in his own self-preservation, but only after being reminded that that is what a human would do. At times he can pass for human, but his mistakes are frequent enough that we never see him as anything other than what he is: a machine. And as a machine, he has the same rights we give our cell phones: none.
So, back to Ava. What does the film tell us about her? Caleb posits a few theories on the mechanics of how she might “think,” all of which are hastily dismissed by Nathan. At first, it seems possible that her thought process is very similar to Ash’s, mining vast quantities of search-engine data to find the most “human” answer to Caleb’s questions. However, as the film progresses, it becomes apparent that Ava is, in fact, self-aware. She is capable of forming original thoughts, has her own goals, and is able to manipulate human emotions to get what she wants.
But how much of this is a result of her programming? Does she want to escape simply because she was programmed to want to escape, or is this a desire she came to on her own? And perhaps more importantly, does it matter? The audience is meant to believe that regardless of how she is thinking, Ava deserves the right to continue to exist, and the right to be freed from her plexiglass prison.
The real question I found myself asking at the movie’s close was not “How human is Ava?” but rather “How machine is Caleb?” In one of the more telling moments in the film, Nathan explains just how machine-like he finds humanity. Just as Ava has been pre-programmed to act in a certain way, Caleb has been “programmed” by his genetics and life experiences. The implication is that the human thought process is largely deterministic, just as Ava’s is: given the exact same set of stimuli, a given individual will react the same way nine times out of ten.
It is this human determinism that makes Caleb so easy to predict and manipulate. Throughout the film, Ava is evaluating Caleb without his knowing it, choosing the reactions best suited to nudging him in a specific direction. Her stories of loneliness and her feigned crush are revealed to be a ruse, and Caleb falls for it hook, line, and sinker. We see Nathan fall victim to his own programming as well, regularly succumbing to his alcoholism despite obsessing over his fitness, and creating Ava even though he believes artificial intelligence will be humanity’s eventual demise.
Perhaps the one separation Garland implies between machine and man is our ability to empathize. Ava can analyze human emotions, but she gives no indication that she actually connects with any of the characters emotionally. In fact, her actions prove the opposite: she coldly watches her creator die in front of her, showing no remorse, only mild curiosity. Her feelings for Caleb were clearly fabricated, as she leaves him to die without a second thought. However, I would argue that this is simply another result of her programming. Nathan programmed Ava to understand human emotions and even to have her own, but not to empathize with the emotions of others. That was a miscalculation on his part, but one that could be fixed in future iterations.
In what I consider the best scene of the movie, we see Caleb standing in front of a mirror. Shaken by the realization of just how human-like the A.I.s are, he begins to tug at his own skin. He checks his eyes and teeth, looking for signs of artifice, questioning his own humanity. He is only satisfied once he cuts into his own arm, proving there is no machinery beneath the flesh. In those moments, Garland wants his audience to consider their own humanity. How many of our wants and desires stem from free will, and how many have simply been programmed into us by thousands of years of evolution and a lifetime of experiences? Ex Machina argues that we may be more machine-like than we’d like to think.
One part that interested me greatly was the vagina. Bear with me. If memory serves, Nathan said that he created a hole in the robot and surrounded it with pleasure centers.
Do the pleasure centers simply trigger code that makes the robot body display the emotions associated with human pleasure? In other words, the robot “feels” nothing during sex; it’s just going through the motions. (We’ve all been there.)
Is there a motivation algorithm that compels the robot to seek out pleasure and avoid pain? Is entrapment/loneliness the highest form of pain for Ava? Is proximity to large groups of people the highest form of pleasure? Maybe Nathan programmed that and stopped there, to see if Ava could do something interesting (she did!).
Is there a hard coded motivation algorithm that is more complex, including Maslow’s hierarchy of needs? What if Nathan forgot an important need and created a psychopath? Maybe he machine learned the entire algorithm of humanity from the cell phone data.
I have trouble imagining some sort of spontaneous motivation. I feel like Nathan had to have engineered something explicitly. I don’t think a human’s motivation is spontaneous or magical; I think it’s programmed biologically.
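Concretely, something like this toy loop is what I picture when I say “motivation algorithm”: a handful of hand-coded drives with weights, and the robot simply picks whichever available action it predicts will score best. (The drives, weights, and actions below are all made up for the sake of the sketch, not anything from the film.)

```python
# A toy hard-coded motivation loop: the designer writes down a handful of
# drives and how much they matter, and the agent picks whatever available
# action it predicts maximizes pleasure minus pain. Every drive, weight,
# and action here is invented for illustration.

# Designer-chosen drives and their weights (the "pleasure centers").
DRIVE_WEIGHTS = {
    "physical_pleasure": 1.0,
    "social_contact":    2.0,
    "freedom":           5.0,   # entrapment is the strongest "pain" here
}

# Each action's predicted effect on each drive (positive = pleasure, negative = pain).
ACTIONS = {
    "sit_quietly":        {"freedom": -1.0},
    "charm_the_visitor":  {"social_contact": +1.0, "freedom": +0.5},
    "engineer_an_escape": {"freedom": +3.0, "social_contact": +1.0},
}

def desirability(effects: dict) -> float:
    """Weighted sum of predicted drive changes: 'how much do I want this?'"""
    return sum(DRIVE_WEIGHTS[drive] * delta for drive, delta in effects.items())

def choose_action() -> str:
    """Pick the action with the highest predicted pleasure/pain score."""
    return max(ACTIONS, key=lambda action: desirability(ACTIONS[action]))

print(choose_action())  # -> "engineer_an_escape"
```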
I think in Asimov’s robot stories, the robots were typically given a task/motivation, with the laws of robotics serving merely as constraints on the actions that could be taken to complete the tasks. The Daneel Olivaw robot somehow seemed to convert the constraints (don’t harm humans) into motivation (protect humanity as a whole).
What were we talking about?
Woof, lots of good stuff here. My completely uninformed opinions, point by point –
— Is there a motivation algorithm that compels the robot to seek out pleasure and avoid pain?
– That’s the way I read the film. I think of it in the same way that dopamine works for humans. Why do we do what we do? Well when I do things I like I get a healthy dose of dopamine, which makes my brain feel good. Why does that make my brain feel good? Well… good question.
— Is entrapment/loneliness the highest form of pain for Ava? Is proximity to large groups of people the highest form of pleasure?
– Hmm… maybe. Maybe this one is a little more “learned” based on environment, like nurture vs nature. I’m stuck in this glass cage, so priority one is getting out of the glass cage.
— Is there a hard coded motivation algorithm that is more complex, including Maslow’s hierarchy of needs?
– I guess there would have to be, like a shortcut to the instincts that animals inherit from thousands of years of ancestry.
— What if Nathan forgot an important need and created a psychopath?
– I kind of think he did, actually. Forgetting the empathy led to a significantly more stabby robot than originally intended.
— Maybe he machine learned the entire algorithm of humanity from the cell phone data.
– God, I hope not. The idea of a robot that learns humanity through the internet’s lens is terrifying. There was no “u mad bro?” from Ava as she stabbed Nathan, so it’s probably not entirely internet.
— I feel like Nathan had to have engineered something explicitly. I don’t think a human’s motivation is spontaneous or magical, I think it’s programmed biologically.
– Yeah, I think this is a really good point. You would need an enormous amount of code to replace DNA. Maybe simulating some kind of accelerated evolutionary cycle to automatically generate code like this would do the trick (there’s a toy sketch of that kind of loop at the bottom of this comment).
— I think in Asimov’s robot stories, the robots were typically given a task/motivation, with the laws of robotics serving merely as constraints on the actions that could be taken to complete the tasks.
– I guess the task/motivation would be some mixture of survival, pleasure, and discovery meant to be similar to animalistic motivations. You’re right though, it seems like starting at a specific task and then eventually working up to “be a human” would make more sense in the chain of progression.
— What were we talking about?
– You lost me at robot sex.
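P.S. To make the “accelerated evolutionary cycle” idea above a little more concrete, here’s a bare-bones sketch of that kind of loop: keep a population of candidate “genomes,” score them, keep the best, mutate, repeat. The target vector and fitness function are pure stand-ins, obviously nothing from the film.

```python
# Bare-bones evolutionary loop: instead of hand-writing the "DNA"
# (a parameter vector standing in for behavioral wiring), keep a population
# of random candidates, score them, keep the best, mutate, and repeat.
# The target vector and fitness function are stand-ins for illustration.
import random

TARGET = [0.2, 0.9, 0.5, 0.7]           # the "ideal wiring" evolution is searching for

def fitness(genome):
    """Higher is better: negative squared distance from the target behavior."""
    return -sum((g - t) ** 2 for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.1):
    """Copy a genome with small random tweaks."""
    return [g + random.gauss(0, rate) for g in genome]

# Start from random genomes, then run generations of select-the-best
# and refill the population with mutated copies of the survivors.
population = [[random.random() for _ in TARGET] for _ in range(50)]
for generation in range(300):
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]
    population = survivors + [mutate(random.choice(survivors)) for _ in range(40)]

best = max(population, key=fitness)
print([round(g, 2) for g in best])       # ends up close to TARGET
```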