Chris Lu

I am a third-year DPhil student at the University of Oxford, advised by Professor Jakob Foerster at FLAIR. My work focuses on applying evolution-inspired techniques to meta-learning and multi-agent reinforcement learning. In the summer of 2022, I was a research scientist intern at DeepMind.

Previously, I worked as a researcher at Covariant.ai.

Google Scholar  /  Twitter  /  Github  /  LinkedIn

Publications (representative papers are highlighted)
ReLU to the Rescue: Improve Your On-Policy Actor-Critic with Positive Advantages
Andrew Jesson, Chris Lu, Gunshi Gupta, Angelos Filos, Jakob Nicolaus Foerster, Yarin Gal
ICML 2024
EvIL: Evolution Strategies for Generalisable Imitation Learning
Silvia Sapora, Gokul Swamy, Chris Lu, Yee Whye Teh, Jakob Nicolaus Foerster
ICML 2024
Also at the NeurIPS 2023 Robot Learning Workshop
Discovering Temporally-Aware Reinforcement Learning Algorithms
Matthew Jackson*, Chris Lu*, Louis Kirsch, Robert Lange, Shimon Whiteson, Jakob Foerster
*Equal Contribution
ICLR 2024
Also at the NeurIPS 2023 Workshop on Agent Learning in Open-Endedness
Behaviour Distillation
Andrei Lupu, Chris Lu, Jarek Luca Liesen, Robert Tjarko Lange, Jakob Foerster
ICLR 2024
JaxMARL: Multi-Agent RL Environments in JAX
Alexander Rutherford*†, Benjamin Ellis*†, Matteo Gallici*†, Jonathan Cook*, Andrei Lupu*, Gardar Ingvarsson*, Timon Willi*, Akbir Khan, Christian Schroeder de Witt, Alexandra Souly, Saptarashmi Bandyopadhyay, Mikayel Samvelyan, Minqi Jiang, Robert Tjarko Lange, Shimon Whiteson, Bruno Lacerda, Nick Hawes, Tim Rocktäschel, Chris Lu*†, Jakob Nicolaus Foerster
†Equal Contribution, *Core Contribution
AAMAS 2024
Also at the NeurIPS 2023 Workshop on Agent Learning in Open-Endedness
Analyzing the Sample Complexity of Model-Free Opponent Shaping
Kitty Fung*, Qizhen Zhang*, Chris Lu, Jia Wan, Timon Willi, Jakob Foerster
*Equal Contribution
AAMAS 2024 (Oral)
Also at the ICML 2023 Workshop on New Frontiers in Learning, Control, and Dynamical Systems
Scaling Opponent Shaping to High Dimensional Games
Akbir Khan*, Timon Willi*, Newton Kwan*, Andrea Tacchetti, Chris Lu, Edward Grefenstette, Tim Rocktäschel, Jakob Foerster
*Equal Contribution
AAMAS 2024 (Oral)
Also at the Games, Agents, and Incentives Workshop at AAMAS 2023
JAX-LOB: A GPU-Accelerated Limit Order Book Simulator to Unlock Large Scale Reinforcement Learning for Trading
Sascha Frey*, Kang Li*, Peer Nagy*, Silvia Sapora, Chris Lu, Stefan Zohren, Jakob Foerster, Anisoara Calinescu
*Equal Contribution
International Conference on AI in Finance 2023 (Best Academic Paper Award)
Structured State Space Models for In-Context Reinforcement Learning
Chris Lu, Yannick Schroecker, Albert Gu, Emilio Parisotto, Jakob Foerster, Satinder Singh, Feryal Behbahani
NeurIPS 2023
Also at the Workshop on New Frontiers in Learning, Control, and Dynamical Systems @ ICML 2023
Discovering General Reinforcement Learning Algorithms with Adversarial Environment Design
Matthew Thomas Jackson, Minqi Jiang, Jack Parker-Holder, Risto Vuorio, Chris Lu, Gregory Farquhar, Shimon Whiteson, Jakob Nicolaus Foerster
NeurIPS 2023
Adversarial Cheap Talk
Chris Lu, Timon Willi, Alistair Letcher, Jakob Foerster
ICML 2023
Also at the Workshop on Machine Learning for Cybersecurity @ ICML 2022 (Spotlight)
Discovering Attention-Based Genetic Algorithms via Meta-Black-Box Optimization
Robert Tjarko Lange, Tom Schaul, Yutian Chen, Chris Lu, Tom Zahavy, Valentin Dallibard, Sebastian Flennerhag
GECCO 2023
Arbitrary Order Meta-Learning with Simple Population-Based Evolution
Chris Lu, Sebastian Towers, Jakob Foerster
ALIFE 2023 (Oral)
Discovering Evolution Strategies via Meta-Black-Box Optimization
Robert Tjarko Lange, Tom Schaul, Yutian Chen, Tom Zahavy, Valentin Dallibard, Chris Lu, Satinder Singh, Sebastian Flennerhag
ICLR 2023
Discovered Policy Optimisation
Chris Lu*, Jakub Grudzien Kuba*, Alistair Letcher, Luke Metz, Christian Schroeder de Witt, Jakob Foerster
*Equal Contribution
NeurIPS 2022
Also at the Decision Awareness in Reinforcement Learning Workshop @ ICML 2022 (Oral)
Proximal Learning With Opponent-Learning Awareness
Stephen Zhao, Chris Lu, Roger Baker Grosse, Jakob Foerster
NeurIPS 2022
Model-Free Opponent Shaping
Chris Lu, Timon Willi, Christian Schroeder de Witt, Jakob Foerster
ICML 2022 (Spotlight)
Also at the ICLR 2022 Workshop on Gamification and Multiagent Solutions (Spotlight)
Centralized Model and Exploration Policy for Multi-Agent RL
Qizhen Zhang, Chris Lu, Animesh Garg, Jakob Foerster
AAMAS 2022 (Oral)
Learning to Control Self-Assembling Morphologies
Deepak Pathak*, Chris Lu*, Trevor Darrell, Phillip Isola, Alexei A. Efros
*Equal Contribution
NeurIPS 2019 (Spotlight)
Winner of the Virtual Creatures Competition
Preprints and Workshop Papers
Leading the Pack: N-player Opponent Shaping
Alexandra Souly, Timon Willi, Akbir Khan, Robert Kirk, Chris Lu, Edward Grefenstette, Tim Rocktäschel
Multi-Agent Security Workshop @ NeurIPS 2023 (Oral)
Revisiting Recurrent Reinforcement Learning with Memory Monoids
Steven Morad, Chris Lu, Ryan Kortvelesy, Stephan Liwicki, Jakob Foerster, Amanda Prorok
arXiv Preprint
Misc

  • Reviewer for: NeurIPS 2021, ICLR 2022, IROS 2022, NeurIPS 2022, NeurIPS 2023 (Top Reviewer), ICLR 2024, ICML 2024, ALOE @ ICLR 2022, DARL @ ICML 2022, AI4ABM @ ICML 2022, F4LCD @ ICML 2023, ALOE @ NeurIPS 2023
  • In my free time I like to work on side projects. I used to sell kalimbas. I also solo-developed and sold a video game.
  • I also created the Noisy TV environment that appears in several highly-cited papers on curiosity-driven learning. The code is here.
  • If you want to see some of my older works and projects, my old website is here.


Credit for the template to Jon Barron.