Meet the Speaker in Gather.Town:
Chats II
Abstract:
I will start by reflecting on different ways of storing 'knowledge' over the years (structured databases, raw text, and language models) and their implications for downstream applications. I will then focus on language models, which have a lot of raw potential but rely on adaptation (prompting or fine-tuning) to 'extract the knowledge' and use it productively. I will show that standard fine-tuning of all the parameters can 'destroy the knowledge' in the language model, and then introduce prefix-tuning and composed fine-tuning, which allow us to preserve as much of the language model as possible, leading to improved generalization.
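
To make the contrast with full fine-tuning concrete, below is a minimal sketch of the general idea behind lightweight adaptation of this kind: the pretrained model's weights stay frozen, and only a small set of continuous prefix vectors prepended to the input is trained. The toy model, dimensions, and training step here are illustrative assumptions, not the exact setup described in the talk (which learns prefix activations for the layers of a real pretrained LM).

import torch
import torch.nn as nn

vocab_size, d_model, prefix_len = 1000, 64, 10

# Stand-in for a pretrained language model: embeddings + a small Transformer.
embed = nn.Embedding(vocab_size, d_model)
backbone = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=d_model, nhead=4, batch_first=True),
    num_layers=2,
)
lm_head = nn.Linear(d_model, vocab_size)

# Freeze every pretrained parameter so its 'knowledge' is left intact.
for module in (embed, backbone, lm_head):
    for p in module.parameters():
        p.requires_grad = False

# The only trainable parameters: a short prefix of continuous vectors.
prefix = nn.Parameter(torch.randn(prefix_len, d_model) * 0.02)
optimizer = torch.optim.Adam([prefix], lr=1e-3)

def forward(input_ids):
    # Prepend the learned prefix to the (frozen) token embeddings.
    tok = embed(input_ids)                                # (batch, seq, d_model)
    pre = prefix.unsqueeze(0).expand(tok.size(0), -1, -1)
    hidden = backbone(torch.cat([pre, tok], dim=1))
    return lm_head(hidden[:, prefix_len:])                # logits for the real tokens

# One illustrative training step on random data.
input_ids = torch.randint(0, vocab_size, (4, 16))
logits = forward(input_ids)
loss = nn.functional.cross_entropy(logits.reshape(-1, vocab_size), input_ids.reshape(-1))
loss.backward()
optimizer.step()

Because gradients only flow into the prefix, the adapted model differs from the original by a handful of vectors rather than a full set of overwritten weights, which is the intuition behind preserving the language model during adaptation.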
Bio: Percy Liang is an Associate Professor of Computer Science at Stanford University (B.S. from MIT, 2004; Ph.D. from UC Berkeley, 2011). His research spans many topics in machine learning and natural language processing, including robustness, interpretability, semantics, and reasoning. He is also a strong proponent of reproducibility through the creation of CodaLab Worksheets. His awards include the Presidential Early Career Award for Scientists and Engineers (2019), IJCAI Computers and Thought Award (2016), an NSF CAREER Award (2016), a Sloan Research Fellowship (2015), a Microsoft Research Faculty Fellowship (2014), and multiple paper awards at ACL, EMNLP, ICML, and COLT.