Title: Retrieval-augmented models for in-context learning and fast adaptation to new information
Abstract: The fluency of state-of-the-art Large Language Models (LLMs) is mind-blowing. Yet knowledge-intensive tasks, which require the model to produce factually correct outputs or reason about infrequent or entirely new events, remain more challenging than freestyle creative generation. These factuality challenges are often attributed to the statistical nature of LLMs, which does not guarantee the accuracy of their knowledge. In this talk we will explore how to make LLMs better at these tasks by adding non-parametric external memory (retrieval augmentation). We will discuss the most relevant use cases for retrieval augmentation, highlight recent successes, and outline open questions.
Bio: Elena Gribovskaya is a Staff Research Scientist at Google DeepMind, UK. Her current research focuses on improving the performance of Large Language Models on knowledge- and reasoning-intensive tasks, particularly when the models have to adapt to a barrage of new information about real-world events. Prior to that, Elena worked as a quant at an algorithmic trading fund, where she used tools from machine learning and mathematical statistics to sift through large amounts of data in pursuit of predicting price movements. She received her PhD from Ecole Polytechnique Fédérale de Lausanne.