Systematic reasoning with language models

Peter Clark / Allen Institute for AI


Meet the Speaker in Gather.Town: Chats II
Abstract: While language models are rich, latent “knowledge bases” with remarkable question-answering capabilities, they still struggle to explain *how* their knowledge justifies those answers, and can make opaque, catastrophic mistakes. To alleviate this, I will describe new work on coercing language models to produce answers supported by a faithful chain of reasoning, describing how their knowledge justifies an answer. In the style of fast/slow thinking, conjectured answers suggest which chains of reasoning to build, and chains of reasoning suggest which answers to trust. The resulting reasoning-supported answers can then be inspected, debugged, and corrected by the user, offering new opportunities for meaningful, interactive problem-solving dialogs in future systems.

Bio: Peter Clark (peterc@allenai.org) is a Senior Research Manager at the Allen Institute for AI (AI2), where he leads the Aristo Project. His work focuses on natural language processing, machine reasoning, and world knowledge, and the interplay between these three areas.