Title: Self-Improvement with Large Language Models
Speaker: Xinyun Chen, Google DeepMind
Abstract: Large language models (LLMs) have achieved impressive performance in many domains, including code generation and reasoning. In this talk, I will discuss our recent work on instructing LLMs to improve their own predictions at inference time. I will first discuss self-debugging, which teaches LLMs to debug their own predicted programs. Self-debugging notably improves both model performance and sample efficiency, matching or outperforming baselines that generate more than 10× as many candidate programs. In the second part, I will further demonstrate that LLMs can act as optimizers, improving their own prompts to achieve better performance.
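To make the two ideas in the abstract concrete, here is a minimal Python sketch of both loops: a self-debugging loop that feeds execution errors back to the model, and an optimizer-style loop that asks the model to improve its own instruction. The llm() helper, the exact prompts, and the scoring scheme are illustrative assumptions, not the method from the talk; the actual self-debugging work also uses richer feedback, such as the model explaining its own code.

```python
# Hedged sketch of self-debugging and LLM-as-optimizer loops.
# `llm` is a hypothetical stand-in for any LLM API (an assumption,
# not part of the original work).
import subprocess
import sys
import tempfile

def llm(prompt: str) -> str:
    """Placeholder for an LLM call (hypothetical)."""
    raise NotImplementedError

def run_program(code: str, test_input: str) -> tuple[bool, str]:
    """Execute candidate code in a subprocess and capture error output."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    try:
        proc = subprocess.run(
            [sys.executable, path], input=test_input,
            capture_output=True, text=True, timeout=10,
        )
    except subprocess.TimeoutExpired:
        return False, "timeout"
    return proc.returncode == 0, proc.stderr

def self_debug(task: str, test_input: str, max_rounds: int = 3) -> str:
    """Generate a program, then repeatedly ask the model to fix it
    using execution feedback, in the spirit of self-debugging."""
    code = llm(f"Write a Python program for this task:\n{task}")
    for _ in range(max_rounds):
        ok, stderr = run_program(code, test_input)
        if ok:
            break
        code = llm(
            f"Task:\n{task}\n\nYour program:\n{code}\n\n"
            f"It failed with this error:\n{stderr}\n"
            "Explain the bug, then return a corrected program."
        )
    return code

def score(instruction: str, examples: list[tuple[str, str]]) -> float:
    """Toy exact-match accuracy of an instruction on labeled examples."""
    hits = sum(llm(f"{instruction}\n{q}").strip() == a for q, a in examples)
    return hits / len(examples)

def optimize_prompt(examples: list[tuple[str, str]], rounds: int = 5) -> str:
    """Sketch of LLM-as-optimizer: the model proposes new instructions,
    each scored on a small set of examples, keeping the best so far."""
    best = "Solve the problem step by step."
    best_score = score(best, examples)
    for _ in range(rounds):
        candidate = llm(
            f"This instruction scored {best_score:.2f} on a task:\n"
            f"{best}\nPropose an improved instruction."
        ).strip()
        s = score(candidate, examples)
        if s > best_score:
            best, best_score = candidate, s
    return best
```

In both loops the only supervision is feedback the model can gather at inference time: an interpreter's error message in one case, a scalar score on held-out examples in the other.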
Bio: Xinyun Chen is a senior research scientist at Google DeepMind. She obtained her Ph.D. in Computer Science from the University of California, Berkeley. Her research lies at the intersection of deep learning, programming languages, and security, with a recent focus on large language models, code generation, and reasoning. Her work SpreadsheetCoder for spreadsheet formula prediction was integrated into Google Sheets, and her work AlphaCode was featured on the front cover of Science.