What Do Models of Natural Language "Understanding" Actually Understand? | Ellie Pavlick, Brown University

Date: Friday, February 28, 2020, 1:30pm to 2:30pm

Location: Harvard University, Maxwell Dworkin G115, 33 Oxford Street, Cambridge MA

IACS seminars are free and open to the public; no registration is required. Lunch will not be provided.

ABSTRACT: Natural language processing has become indisputably good over the past few years. We can perform retrieval and question answering with purported super-human accuracy, and can generate full documents of text that seem good enough to pass the Turing test. In light of these successes, it is tempting to attribute the empirical performance to a deeper "understanding" of language that the models have acquired. Measuring natural language "understanding," however, is itself an unsolved research problem. In this talk, Dr. Pavlick argues that we have made real, substantive progress on modeling the _form_ of natural language, but have failed almost entirely to capture the underlying _meaning_. She'll discuss recent work that attempts to illuminate what it is that state-of-the-art models of language are capturing by inspecting the models' internal structure directly and by measuring their inferential behavior. Finally, Dr. Pavlick will conclude with results on the ambiguity of humans' linguistic inferences to highlight the challenges involved with developing prescriptivist language tasks for evaluating models of semantics.

BIO: Ellie Pavlick is an Assistant Professor of Computer Science at Brown University, and an academic partner with Google AI. Her research interests include building better computational models of natural language semantics and pragmatics: how does language work, and how can we get computers to understand it the way humans do?
