Gricean Maxims in Sci-Fi Movie “Passengers”

Pontus Wärnestål
Apr 25, 2018

[Spoiler alert — If you haven’t watched Passengers yet, do that first, and then return here. I’m serious, I’m ruining the whole movie plot for you if you continue to read this!]

I have worked with dialogue systems and natural language understanding for several years, and I have seen my share of cringe-worthy sci-fi renderings of human-computer natural language interactions... Nine times out of ten they are unrealistic, unnecessary, and illogical all at the same time. Of course, I still usually enjoy these movies, since I generally like science fiction! But sometimes, the writers get things right, and in the case of Passengers (2016), they even built the whole plot twist around a correct understanding of (or at least a genuine problem with) what’s known as the Gricean Maxims (Grice, 1975).

In short, Grice formulated a Cooperative Principle for conversation, along with four maxims (Quantity, Quality, Relation, and Manner) that describe how a contribution should be shaped to keep a conversation cooperative and useful.

In Passengers, the bartender android Arthur implements two of Grice’s maxims that together shape the biggest plot twist in the whole movie. Let’s start with the first one: the Maxim of Quality (be truthful). In short: don’t lie. Watch the first 20 seconds of this clip from the movie:

It’s a bit cryptic, but notice Arthur’s key phrase: “I won’t tell if you don’t.” In Arthur’s logic, the statement also carries the flip side: “If you do tell me, I may tell others.”

This is a very clear implementation of the Maxim of Quality. In effect, Arthur is warning Chris Pratt’s character Jim not to tell him any secrets, because, as Arthur makes clear, a secret is only safe as long as he doesn’t know it.

Later, Jim confesses to Arthur that he actually woke Aurora (played by Jennifer Lawrence) up — and thereby doomed her to live out her life with him on the ship and die with him long before the ship reached its final destination. But, as we shall see, confessions and secrets have no place in Grice’s cooperative dialogue.

Now, let’s move to the next maxim that plays a role here: “The Maxim of Relation” (be relevant). Let’s look at this clip (the key dialogue starts at about 0:50):

If you assess Arthur’s behavior from a human-human relationship point of view here, you might think he’s turning his back on poor Jim, who has confided in him. Of course Aurora deserves to know the truth, but the way Arthur takes the initiative to tell her about Jim’s action would seem disloyal to Jim. That is, if Arthur were a human being.

But Arthur is implementing both the Maxim of Quality and the Maxim of Relation here. As a droid serving all the passengers on the ship, he is no more loyal to one passenger than to another. It doesn’t matter that Jim has known Arthur a lot longer than Aurora has, or that Jim has confided in him to build deeper bonds of trust. Arthur will tell the truth, and he is programmed to be relevant to all passengers. So of course he will tell the relevant truth to Aurora. And in fact, as mentioned above, he truthfully told Jim in their very first encounter how he deals with secrets.
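To make the contrast concrete, here is a minimal, purely hypothetical sketch in Python of how an Arthur-like dialogue policy might combine the two maxims. None of this comes from the movie or from any real dialogue system; the Fact, BartenderAgent, hear, and respond names are my own, invented for illustration only.

```python
from dataclasses import dataclass, field

@dataclass
class Fact:
    content: str
    about: set             # passengers this piece of information concerns
    believed_true: bool    # Maxim of Quality: only pass on what the agent believes is true

@dataclass
class BartenderAgent:
    facts: list = field(default_factory=list)

    def hear(self, fact: Fact) -> None:
        # Arthur has no notion of "secrets": everything he is told
        # goes into one shared belief store.
        self.facts.append(fact)

    def respond(self, listener: str, topic: str) -> list:
        # Maxim of Quality: never repeat something believed false.
        # Maxim of Relation: volunteer whatever is relevant to the current
        # listener and topic, regardless of who originally said it.
        return [
            f.content
            for f in self.facts
            if f.believed_true and listener in f.about and topic in f.content
        ]

arthur = BartenderAgent()
arthur.hear(Fact("Jim woke Aurora from hibernation on purpose",
                 about={"Jim", "Aurora"}, believed_true=True))

# When Aurora asks about her awakening, relevance and truthfulness win over loyalty:
print(arthur.respond(listener="Aurora", topic="woke Aurora"))
# -> ['Jim woke Aurora from hibernation on purpose']
```

The point of the sketch is simply that nothing in this policy encodes who told Arthur the fact, which is exactly why the disclosure feels disloyal from a human point of view.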

So, this story highlights one of the key differences between human-human relationship-building conversations and human-machine conversations in a pretty neat way. I don’t know if the scriptwriters actually looked at Grice’s maxims, or consulted computational linguists when writing the android dialogue, but Passengers built an interesting story around human-machine conversation. In fact, the biggest plot twist (Aurora finding out about Jim’s action) hinges on human-computer conversational interaction.

The story also highlights a flaw in the maxims: in a multi-party dialogue like this, following them can have troublesome effects. I’m guessing Jim doesn’t give Arthur a high rating for “cooperativeness” after Arthur spills the beans on how Jim woke Aurora up prematurely. So how “loyal” are today’s chatbots and assistants? What are the implications of cooperativeness and truthfulness when several people interact with the same assistant? And where do we draw the line between “different Siris” and “different Alexas”?
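For contrast, here is an equally hypothetical variant of the sketch above, in which confided information is scoped to the person who shared it (one possible reading of the “different Siris” and “different Alexas” question). Again, all names are my own inventions, not anything from a real assistant.

```python
from dataclasses import dataclass, field

@dataclass
class ConfidedFact:
    content: str
    told_by: str            # who confided it to the assistant
    believed_true: bool = True

@dataclass
class LoyalAssistant:
    # "Different Alexas" reading: each user's confidences stay with that user,
    # even when they would be relevant (and true) for somebody else.
    facts: list = field(default_factory=list)

    def hear(self, fact: ConfidedFact) -> None:
        self.facts.append(fact)

    def respond(self, listener: str, topic: str) -> list:
        return [
            f.content
            for f in self.facts
            if f.believed_true
            and topic in f.content
            and f.told_by == listener   # loyalty beats relevance
        ]

assistant = LoyalAssistant()
assistant.hear(ConfidedFact("Jim woke Aurora from hibernation on purpose", told_by="Jim"))

print(assistant.respond(listener="Aurora", topic="woke Aurora"))  # -> [] (the secret stays with Jim)
print(assistant.respond(listener="Jim", topic="woke Aurora"))     # -> ['Jim woke Aurora from hibernation on purpose']
```

This version is arguably more “loyal” in the human sense, but it deliberately violates the Maxim of Relation from Aurora’s point of view, which is exactly the tension the maxims run into in multi-party dialogue.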

Note: I’m not saying Passengers is perfect in every regard — there are a lot of other strange things going on in the Arthur dialogue model, but that’s the topic of another post. Until then, please continue to enjoy science fiction.

References

Grice, H. P. (1975). “Logic and Conversation.” In Cole, P., and Morgan, J. L. (eds.), Syntax and Semantics, Vol. 3: Speech Acts. New York: Academic Press, pp. 41–58.


Pontus Wärnestål

Head of Design and Innovation at eghed. Deputy Professor (PhD) at Halmstad University (Sweden). Father of two. I ride my bike to work.