Intuitive Reasoning as (Un)supervised Neural Generation

Yejin Choi / University of Washington

Jan 04, 2021

Abstract: Neural language models, as they grow in scale, continue to surprise us with utterly nonsensical and counterintuitive errors despite their otherwise remarkable performance on leaderboards. In this talk, I will argue that it is time to challenge the currently dominant paradigm of task-specific supervision built on top of large-scale self-supervised neural networks. I will first highlight how we can make better lemonade out of neural language models by shifting our focus to unsupervised, inference-time algorithms. I will demonstrate how unsupervised algorithms can match or even outperform supervised approaches on hard reasoning tasks such as nonmonotonic reasoning (e.g., counterfactual and abductive reasoning) and on complex language generation tasks that require satisfying logical constraints. Next, I will highlight the importance of melding explicit, declarative knowledge encoded in symbolic knowledge graphs with the implicit knowledge encoded in neural language models. I will present COMET, Commonsense Transformers that learn neural representations of commonsense reasoning from a symbolic commonsense knowledge graph, and Social Chemistry 101, a new conceptual formalism, knowledge graph, and family of neural models for reasoning about social, moral, and ethical norms.

Bio: Yejin Choi is the Brett Helsel Associate Professor at the Paul G. Allen School of Computer Science & Engineering at the University of Washington and a senior research manager at AI2, where she oversees the project Mosaic. Her research interests include commonsense knowledge and reasoning, neural language (de-)generation, language grounding, and AI for social good. She is a co-recipient of the AAAI Outstanding Paper Award in 2020, the Borg Early Career Award (BECA) in 2018, IEEE's AI's 10 to Watch in 2015, the ICCV Marr Prize in 2013, and the inaugural Alexa Prize Challenge in 2017.