Projects
-
Name of the project: DELICIO
Funding agency: ANR
Researchers: Prof. dr. Jilles S. Dibangoye
Timeline: 2018-2024
Webpage
Summary
We propose to combine machine learning and control theory for sequential decision-making by multiple agents. The project makes fundamental contributions: adding stability to reinforcement learning algorithms; data-driven methods for robust control; hybrid machine learning / control theory methods for multi-horizon control and planning; and decentralized control. The methodological contributions of this fundamental AI project will be applied to the robust control of UAV fleets.
-
Name of the project: EpiRL
Funding agency: ANR
Researchers: Prof. dr. Jilles S. Dibangoye
Timeline: 2022-2026
Webpage
Summary
The EpiRL project investigates the combination of epistemic planning and reinforcement learning (RL) by proposing new algorithms that are efficient, adaptive, and capable of computing decisions grounded in the theory of knowledge and belief. We expect this approach to yield efficient generation of epistemic plans, while the decisions made by RL algorithms will be explainable. Moreover, the EpiRL algorithms will be tested and evaluated in a real application involving autonomous agents.
-
Name of the project: MAESTRIOT
Funding agency: ANR
Researchers: Prof. dr. Jilles S. Dibangoye
Timeline: 2022-2026
Webpage
Summary
The deployment of Internet of Things (IoT) systems in open and dynamic environments raises several issues related to the reliability of their components. It is unrealistic to assume that every hardware or software component is reliable, trustworthy, and efficient under all conditions, especially climatic ones. The MaestrIoT project will address these issues by proposing an algorithmic framework for ensuring trust in a multi-agent system that handles the sensors and actuators of a cyber-physical environment. Trust management must be ensured from perception through to decision making, including the exchange of information between IoT devices.
-
Name of the project: PLASMA
Funding agency: ANR
Researchers: Prof. dr. Jilles S. Dibangoye
Timeline: 2019-2023
Webpage
Summary
The increasing penetration of multi-agent systems in society will require a paradigm shift, from single-agent to multi-agent planning and reinforcement learning algorithms, leveraging recent breakthroughs. To this end, this proposal aims at designing a generic software or machine that can efficiently compute rational strategies for a group of cooperating or competing agents in spite of stochasticity and sensing uncertainty, yet using the same algorithmic scheme. Such a machine should adapt to changes in the environment; apply to different tasks; and eventually converge to a rational solution for the task at hand. It need not, however, exhibit the fastest convergence rates, since there is no free lunch. We aim at using the same algorithmic scheme for different problems to ease knowledge transfer and dissemination in expert and practitioner communities. Overall, our objective is to contribute to the theoretical foundations of the fields of intelligent agents and multi-agent systems by characterizing the underlying structure of multi-agent decision-making problems and designing efficient planning and reinforcement learning algorithms with performance guarantees. The main idea presented in this proposal is that it is possible to reduce a multi-agent decision-making problem (such as a partially observable stochastic game) to a fully observable stochastic game, which is solved using a generic algorithm based on recent advances in Artificial Intelligence.
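To give a flavour of this reduction, here is a minimal Python sketch of how a tiny two-agent decision problem can be rewritten as a fully observable problem whose states are occupancy states, i.e. distributions over hidden states and joint observation histories. Everything in it (the toy dynamics, the function names, the decision rules) is hypothetical and only illustrates the recasting, not the project's actual algorithms.

# A toy two-agent Dec-POMDP; all numbers below are hypothetical.
STATES = ["s0", "s1"]

def T(s, ja):
    """Transition model: distribution over next states given a joint action."""
    flip = 0.9 if ja == ("a", "a") else 0.3
    other = STATES[1 - STATES.index(s)]
    return {other: flip, s: 1.0 - flip}

def O(ja, s_next):
    """Joint-observation model: distribution over joint observations."""
    good = 0.8 if s_next == "s0" else 0.6
    return {("o0", "o0"): good, ("o1", "o1"): 1.0 - good}

def R(s, ja):
    """Reward model."""
    return 1.0 if s == "s1" and ja == ("b", "b") else 0.0

def expected_reward(occupancy, rules):
    """Immediate reward of decentralized decision rules at an occupancy state."""
    total = 0.0
    for (s, (h1, h2)), p in occupancy.items():
        total += p * R(s, (rules[0][h1], rules[1][h2]))
    return total

def update(occupancy, rules):
    """One step of the fully observable reformulation: push an occupancy state
    (a distribution over pairs of hidden state and joint observation history)
    through one decentralized decision rule per agent."""
    new_occ = {}
    for (s, (h1, h2)), p in occupancy.items():
        ja = (rules[0][h1], rules[1][h2])
        for s2, pt in T(s, ja).items():
            for (o1, o2), po in O(ja, s2).items():
                key = (s2, (h1 + (o1,), h2 + (o2,)))
                new_occ[key] = new_occ.get(key, 0.0) + p * pt * po
    return new_occ

# Initial occupancy state: empty histories, hidden state uncertain.
occ = {("s0", ((), ())): 0.5, ("s1", ((), ())): 0.5}
# Step-0 decision rules map each agent's (empty) history to an action.
rules = ({(): "b"}, {(): "b"})
print(expected_reward(occ, rules))   # immediate expected reward
print(update(occ, rules))            # next occupancy state

Once the problem is expressed over occupancy states, it becomes a fully observable (if continuous-state) decision problem, so dynamic-programming and reinforcement-learning machinery designed for observable problems can, in principle, be reused; this is the sense in which a single algorithmic scheme can cover different multi-agent tasks.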
-
Name of the project: Reinforcement Learning for Zero-Sum Partially Observable Stochastic Games (POSGs)
Funding agency: RUG
Researchers: Prof. dr. Jilles S. Dibangoye, Dr. Matthia Sabatelli
Timeline: 2023-2027
PhD: Erwan Escudie
Summary
Due to the omnipresence of artificial agents in society, the development of algorithms that can train them to act in their best interests in the face of others has become inevitable. In this project, we formalize the interactions among multiple artificial agents as a partially observable stochastic game (POSG). In such a setting, agents can neither see the true state of the world nor share their information with one another, a problem known as the silent coordination dilemma. This dilemma partially explains why finding the optimal solution of an infinite-horizon cooperative POSG is undecidable, why finite-horizon cooperative POSGs are hard for the class NEXP, and why non-cooperative variants are hard for the class NEXP^NP. To circumvent these negative complexity results, we adopted the central planning for decentralized control approach, which recasts POSGs into simpler games whose solutions can be transferred back to the original games. In this project, we aim to extend this approach to POSGs with competitive and mixed-motive interests.
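Purely as an illustration of the kind of subproblem that appears once a zero-sum POSG has been recast into a simpler game, the Python sketch below computes the row player's maximin mixed strategy for a one-shot zero-sum matrix game by linear programming. The payoff matrix and all names are hypothetical; this is not the project's algorithm, only the standard stage-game computation that such approaches typically build on.

import numpy as np
from scipy.optimize import linprog

def maximin_strategy(A):
    """Value and optimal mixed strategy of the row player of a zero-sum
    matrix game, via the standard linear-programming formulation."""
    n, m = A.shape
    # Variables: x_1..x_n (row strategy) and v (game value); maximize v.
    c = np.zeros(n + 1)
    c[-1] = -1.0
    # For every column j: v - sum_i x_i * A[i, j] <= 0.
    A_ub = np.hstack([-A.T, np.ones((m, 1))])
    b_ub = np.zeros(m)
    # The strategy must be a probability distribution.
    A_eq = np.append(np.ones(n), 0.0).reshape(1, -1)
    b_eq = np.array([1.0])
    bounds = [(0, None)] * n + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[:n], res.x[-1]

# Hypothetical payoff matrix of a one-shot stage game (row player maximizes).
payoffs = np.array([[3.0, -1.0],
                    [-2.0, 4.0]])
strategy, value = maximin_strategy(payoffs)
print("row strategy:", strategy, "value:", value)

For these hypothetical payoffs, the row player mixes its two actions with probabilities 0.6 and 0.4 and guarantees a value of 1 regardless of the opponent's strategy.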