Welcome to the Cooperative AI Lab website! Our research aims to enable machines to exhibit cooperative and responsible behavior in intelligent decision-making tasks. To this end, we work on multi-agent cooperation, human-AI coordination, and scalable alignment, which includes human value learning, safety, and ethics.

Active openings

PostDoc Position: One postdoc position is available in Multi-Agent Reinforcement Learning and Foundation Models. Deadline: 12 Mar 2024. Please apply here.
PhD Positions: We are constantly looking for highly motivated students to join our lab. Interested applicants, please contact yali.du@kcl.ac.uk.

News

Feb 2024

One postdoc position is available in Multi-Agent Reinforcement Learning and Foundation Models. Deadline: 12 Mar 2024. Please apply here.

Jan 2024

Yali Du received support from an EPSRC grant, "Exploring Causality in Reinforcement Learning for Robust Decision Making". One postdoc position is to be filled; please get in touch if you are interested.

23 Dec 2023

Yali Du sat on the UK Parliamentary Office of Science and Technology (POST) board for Artificial Intelligence, contributing to a Government briefing document that explains how AI works, how it can be used, and the concerns and perceptions surrounding it.

20 Dec 2023

Two papers are accepted to AAMAS 2024. We explore safe RL under human natural-language constraints and the selfishness level of Markov games. Congrats to Xingzhou and Stefan!

10 Dec 2023

Three papers are accepted to AAAI 2024. We explore spatio-temporal credit assignment, morality-aware agents, and multi-agent policy gradients. More details will be shared soon!

8 Dec 2023

We are excited to announce the completion of A Review on Cooperative Multi-agent Learning, in collaboration with DeepMind. Comments and feedback are welcome!

22 Sep 2023

Five papers are accepted to NeurIPS 2023. We explore safe RL, interpretable credit assignment with causal modelling, LLM-aligned policy learning, and human-AI coordination. More details will be shared soon!

6 May 2023

One paper on two-player zero-shot coordination is accepted to ICML 2023. We study how to overcome cooperative incompatibility and achieve better collaboration with unknown teammates!

20 Mar 2023

One paper is accepted to the AI Journal. We explore safe multi-robot control via reinforcement learning.

10 Feb 2023

Yali Du is selected for the 2023 AAAI New Faculty Highlights Program!

7 Feb 2023

Yali Du and Dr. Joel Z. Leibo (DeepMind) will give a tutorial on “Cooperative multi-agent learning: a review of progress and challenges” at AAAI 2023. Link to tutorial website.

29 Jan 2023

One paper on Moral RL is accepted to ICLR 2023. We explore a tradeoff between morality and task progress in text-based games!

25 Jan 2023

Welcome Jinyu Cai to visit our group!

3 Jan 2023

Two papers are accepted to AAMAS 2023: 1) on human-AI coordination in Overcooked-AI, and 2) on the learnability of Nash equilibria.

... see all News