- 8.7 MAMBA: an Effective World Model Approach for Meta-Reinforcement Learning
- Authors: Zohar Rimon, Tom Jurgenson, Orr Krupnik, Gilad Adler, Aviv Tamar
- Reason: Introduces a new model-based approach to meta-RL that achieves better sample efficiency on benchmark tasks and higher-dimensional domains, authored by researchers with established expertise in model-based and meta-RL.
- 8.5 Global Convergence Guarantees for Federated Policy Gradient Methods with Adversaries
- Authors: Swetha Ganesh, Jiayu Chen, Gugan Thoppe, Vaneet Aggarwal
- Reason: Addresses robustness in federated reinforcement learning with global convergence guarantees, signaling a significant step in secure and distributed RL environments.
- 8.3 Quality-Diversity Actor-Critic: Learning High-Performing and Diverse Behaviors via Value and Successor Features Critics
- Authors: Luca Grillotti, Maxence Faldor, Borja G. León, Antoine Cully
- Reason: Presents an off-policy actor-critic algorithm for learning diverse skills in continuous control tasks, showing substantial performance improvements compared to other methods.
- 8.1 A Natural Extension To Online Algorithms For Hybrid RL With Limited Coverage
- Authors: Kevin Tan, Ziping Xu
- Reason: Proposes a novel hybrid reinforcement learning strategy with theoretical guarantees, supported by numerical simulations, exploring previously unexamined aspects of hybrid RL.
- 7.9 HumanoidBench: Simulated Humanoid Benchmark for Whole-Body Locomotion and Manipulation
- Authors: Carmelo Sferrazza, Dun-Ming Huang, Xingyu Lin, Youngwoon Lee, Pieter Abbeel
- Reason: Establishes a new simulated benchmark for humanoid robots that facilitates the evaluation of state-of-the-art RL algorithms on high-dimensional, dexterous tasks, created by a team with a record of advancing robotic learning platforms.