Research project: Federated Reinforcement Learning (FedRL): Algorithms and Theoretical Foundations

The project explores how Federated Reinforcement Learning (FedRL) can enable scalable, efficient learning in multi-agent systems by advancing theory and algorithms, with applications in both homogeneous and heterogeneous environments.
Recent advancements in single-agent reinforcement learning have set the stage for significant AI innovations. However, reinforcement learning in systems with multiple agents still faces notable challenges, particularly in scalability and communication efficiency.
This project aims to build on recent progress in federated learning, identifying and leveraging unique structures within FedRL setups that show promise for scalable solutions in multi-agent scenarios. Our goal is to advance the mathematical and algorithmic foundations of FedRL and expand the boundaries of this emerging research field.
Building on this vision, the project focuses on four main goals:
- Developing algorithms and theory for homogeneous-agent environments, laying the groundwork for scalable FedRL systems.
- Characterizing the impact of heterogeneity among agents, which is crucial for tailoring FedRL to diverse real-world scenarios.
- Advancing communication-efficient strategies, addressing one of the key challenges in federated settings.
- Applying these innovations to real systems, validating the efficacy and applicability of FedRL principles.
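To make the homogeneous-agent setting in the first goal concrete, a minimal sketch of the general FedRL pattern is shown below: several agents run local tabular Q-learning on identical copies of a toy environment, and a server periodically averages their Q-tables, in the spirit of federated averaging. This is an illustrative assumption, not the project's actual algorithm; the environment, function name, and all parameters are invented for the example.

```python
import numpy as np

def fedavg_q_learning(n_agents=4, n_states=5, n_actions=2,
                      rounds=30, local_steps=100,
                      alpha=0.1, gamma=0.9, seed=0):
    """FedAvg-style federated Q-learning on a toy chain MDP (illustrative).

    Each agent runs local tabular Q-learning on its own copy of the same
    (homogeneous) environment; a server then averages the local Q-tables
    into a new global model, mirroring federated averaging.
    """
    rng = np.random.default_rng(seed)
    global_q = np.zeros((n_states, n_actions))

    def step(s, a):
        # Toy chain MDP: action 1 moves right, action 0 moves left;
        # reaching the rightmost state yields reward 1.
        s_next = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
        return s_next, float(s_next == n_states - 1)

    for _ in range(rounds):
        local_qs = []
        for _agent in range(n_agents):
            q = global_q.copy()  # each agent starts from the global model
            s = 0
            for _ in range(local_steps):
                # Uniform-random behaviour policy (Q-learning is off-policy).
                a = int(rng.integers(n_actions))
                s_next, r = step(s, a)
                # Standard tabular Q-learning update.
                q[s, a] += alpha * (r + gamma * q[s_next].max() - q[s, a])
                # Reset the episode once the goal state is reached.
                s = 0 if s_next == n_states - 1 else s_next
            local_qs.append(q)
        # Server aggregation: plain average of the agents' Q-tables.
        global_q = np.mean(local_qs, axis=0)
    return global_q
```

Because the agents' environments are identical, plain averaging is a sensible aggregation rule here; the greedy policy recovered from the averaged Q-table moves toward the rewarding state. Characterizing when and how this breaks down under heterogeneous agents, and reducing the communication it requires, is precisely what the second and third goals above address.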