- 9.2 Privacy-Engineered Value Decomposition Networks for Cooperative Multi-Agent Reinforcement Learning
- Authors: Parham Gohari, Matthew Hale, Ufuk Topcu
- Reason: Addresses privacy concerns in cooperative MARL (Co-MARL), a critical aspect for adoption in sensitive applications, and has been accepted at a prestigious IEEE conference. (A minimal value-decomposition sketch appears after this list.)
- 8.7 Dream to Adapt: Meta Reinforcement Learning by Latent Context Imagination and MDP Imagination
- Authors: Lu Wen, Songan Zhang, H. Eric Tseng, Huei Peng
- Reason: Introduces a context-based meta-RL algorithm that promises improved data efficiency and generalization, capabilities essential for rapid adaptation in RL.
- 8.6 An advantage based policy transfer algorithm for reinforcement learning with metrics of transferability
- Authors: Md Ferdous Alam, Parinaz Naghizadeh, David Hoelzle
- Reason: Presents an off-policy transfer RL algorithm with explicit transferability metrics that could significantly improve the efficiency of RL applications, reducing compute requirements and enhancing scalability.
- 8.4 Learning Predictive Safety Filter via Decomposition of Robust Invariant Set
- Authors: Zeyang Li, Chuxiong Hu, Weiye Zhao, Changliu Liu
- Reason: Offers a novel framework combining robust model predictive control (RMPC) and RL to ensure safety in nonlinear systems, balancing computational efficiency with safety guarantees. (A generic safety-filter sketch appears after this list.)
- 8.1 Towards Continual Reinforcement Learning for Quadruped Robots
- Authors: Giovanni Minelli, Vassilis Vassiliades
- Reason: Investigates continual learning in real-world quadruped robots, a step towards adaptive and resilient robotic systems that can learn post-deployment.
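For context on the value-decomposition idea underlying the first entry, here is a minimal sketch of plain VDN in PyTorch: the joint action value is modeled as the sum of per-agent utilities, Q_tot = Σ_i Q_i(o_i, a_i). This is the standard VDN pattern, not the paper's privacy-engineered variant; class and parameter names (`AgentQNet`, `VDN`, `obs_dim`, `n_actions`) are illustrative assumptions.

```python
# Minimal VDN-style value decomposition sketch (standard VDN, no privacy mechanisms).
import torch
import torch.nn as nn


class AgentQNet(nn.Module):
    """Per-agent utility network Q_i(o_i, .)."""

    def __init__(self, obs_dim: int, n_actions: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(), nn.Linear(hidden, n_actions)
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.net(obs)  # shape: (batch, n_actions)


class VDN(nn.Module):
    """Joint value is the sum of chosen per-agent utilities: Q_tot = sum_i Q_i(o_i, a_i)."""

    def __init__(self, n_agents: int, obs_dim: int, n_actions: int):
        super().__init__()
        self.agents = nn.ModuleList(
            [AgentQNet(obs_dim, n_actions) for _ in range(n_agents)]
        )

    def forward(self, obs: torch.Tensor, actions: torch.Tensor) -> torch.Tensor:
        # obs: (batch, n_agents, obs_dim); actions: (batch, n_agents) integer indices
        per_agent_q = [
            q(obs[:, i]).gather(1, actions[:, i : i + 1])  # Q_i(o_i, a_i)
            for i, q in enumerate(self.agents)
        ]
        return torch.cat(per_agent_q, dim=1).sum(dim=1, keepdim=True)  # Q_tot


# Usage: a team of 3 agents with 10-dim observations and 5 discrete actions each.
model = VDN(n_agents=3, obs_dim=10, n_actions=5)
q_tot = model(torch.randn(4, 3, 10), torch.randint(0, 5, (4, 3)))
print(q_tot.shape)  # torch.Size([4, 1])
```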
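The predictive-safety-filter entry rests on a common pattern: accept the RL policy's action only if the predicted next state stays inside a (robust) invariant safe set, otherwise fall back to a safe backup controller. The sketch below illustrates that generic pattern under assumed names (`dynamics`, `in_safe_set`, `backup_action`); the paper's decomposition of the robust invariant set is not reproduced here.

```python
# Generic safety-filter pattern (illustrative only; not the paper's method).
import numpy as np


def safety_filter(state, rl_action, dynamics, in_safe_set, backup_action):
    """Keep the learned action only if the predicted next state remains in the safe set."""
    next_state = dynamics(state, rl_action)  # one-step prediction with an assumed model
    if in_safe_set(next_state):
        return rl_action                      # learned action is certified safe
    return backup_action(state)               # otherwise fall back to the safe controller


# Toy usage: 1-D double integrator with position and velocity bounded by 1.
dynamics = lambda s, a: np.array([s[0] + 0.1 * s[1], s[1] + 0.1 * a])
in_safe_set = lambda s: abs(s[0]) <= 1.0 and abs(s[1]) <= 1.0
backup_action = lambda s: -np.clip(s[1], -1.0, 1.0)  # brake toward zero velocity

print(safety_filter(np.array([0.9, 0.8]), rl_action=1.0,
                    dynamics=dynamics, in_safe_set=in_safe_set,
                    backup_action=backup_action))
```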