Research project Federated Reinforcement Learning (FedRL): Algorithms and Theoretical Foundations
Federated Reinforcement Learning is an emerging research field that allows multiple AI systems to learn together without sharing their private data. In this project, we explore how such learning systems can be built to scale.

The project explores how Federated Reinforcement Learning (FedRL) can enable scalable, efficient learning in multi-agent systems by advancing theory and algorithms, with applications in both homogeneous and heterogeneous environments.
Recent advancements in single-agent reinforcement learning have set the stage for significant AI innovations. However, reinforcement learning still faces notable challenges in systems with multiple agents.
This project aims to build on recent progress in federated learning, identifying and leveraging unique structures within FedRL setups that show promise for scalable solutions in multi-agent scenarios. Our goal is to advance the mathematical and algorithmic foundations of FedRL and expand the boundaries of this emerging research field.
Building on this vision, the project is divided into four focused work packages (WPs):
- WP-A develops algorithms and theory for homogeneous agent environments, laying the groundwork for scalable FedRL systems.
- WP-B explores the impact of heterogeneity among agents, crucial for tailoring FedRL to diverse real-world scenarios.
- WP-C advances communication-efficient strategies, addressing one of the key challenges in federated settings.
- WP-D applies these innovations to real systems, validating the efficacy and applicability of FedRL principles.
Together, these WPs form a unified strategy for advancing FedRL and charting a path for new research and applications.
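To make the core idea concrete, the sketch below shows one way a federated-averaging round could look for reinforcement learning: each agent runs local policy-gradient updates on its own private environment, and only the policy parameters are exchanged and averaged. The toy bandit environment, hyperparameters, and all names here are illustrative assumptions, not the project's actual algorithms.

```python
import numpy as np

# Minimal, illustrative FedAvg-style sketch: agents learn locally and share
# only model parameters, never their raw rewards or trajectories.
rng = np.random.default_rng(0)

n_agents = 5        # assumed number of agents, each with private data
n_actions = 4
local_steps = 20    # local updates between communication rounds
rounds = 30
lr = 0.1

# Each agent's private reward distribution over actions (toy bandit setting).
true_means = rng.normal(1.0, 0.3, size=(n_agents, n_actions))

def softmax(x):
    z = x - x.max()
    e = np.exp(z)
    return e / e.sum()

def local_update(theta, means, steps, lr):
    """REINFORCE-style updates on one agent's private bandit; data stays local."""
    theta = theta.copy()
    for _ in range(steps):
        p = softmax(theta)
        a = rng.choice(n_actions, p=p)
        r = rng.normal(means[a], 0.1)
        grad = -p
        grad[a] += 1.0          # gradient of log pi(a | theta)
        theta += lr * r * grad
    return theta

theta_global = np.zeros(n_actions)   # shared policy parameters

for t in range(rounds):
    # Each agent refines the global policy on its own environment...
    local_thetas = [local_update(theta_global, true_means[i], local_steps, lr)
                    for i in range(n_agents)]
    # ...and only the parameters are averaged by the server.
    theta_global = np.mean(local_thetas, axis=0)

print("learned policy:", np.round(softmax(theta_global), 3))
print("best action under the average reward model:", true_means.mean(axis=0).argmax())
```

In this sketch, communication cost is governed by the number of rounds and the parameter size, which is the kind of trade-off WP-C's communication-efficient strategies would study, while the per-agent reward distributions hint at the heterogeneity questions in WP-B.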
Project members
Project managers
Sindri Magnússon
Senior Lecturer, Associate Professor
