- 8.6 Evolving Reservoirs for Meta Reinforcement Learning
- Authors: Corentin Léger, Gautier Hamon, Eleni Nisioti, Xavier Hinaut, Clément Moulin-Frier
- Reason: Proposes a distinctive computational model bridging evolution and reinforcement learning, with potential applications in complex, partially observable environments.
- 8.4 RACER: Rational Artificial Intelligence Car-following-model Enhanced by Reality
- Authors: Tianyi Li, Alexander Halatsis, Raphael Stern
- Reason: Introduces a novel AI car-following model that could significantly enhance autonomous driving systems and is firmly grounded in realistic driving constraints.
- 8.1 Beyond Expected Return: Accounting for Policy Reproducibility when Evaluating Reinforcement Learning Algorithms
- Authors: Manon Flageat, Bryan Lim, Antoine Cully
- Reason: Addresses an important aspect of policy evaluation in noisy RL environments, proposing a novel assessment approach that accounts for variability in policy performance.
- 7.9 Distributional Bellman Operators over Mean Embeddings
- Authors: Li Kevin Wenliang, Grégoire Déletang, Matthew Aitchison, Marcus Hutter, Anian Ruoss, Arthur Gretton, Mark Rowland
- Reason: Offers an innovative algorithmic framework for distributional RL with promising empirical results, with potential impact in both theory and practice.
- 7.7 ReRoGCRL: Representation-based Robustness in Goal-Conditioned Reinforcement Learning
- Authors: Xiangyu Yin, Sihao Wu, Jiaxu Liu, Meng Fang, Xingyu Zhao, Xiaowei Huang, Wenjie Ruan
- Reason: Accepted at a major conference (AAAI 2024), this paper introduces attacks and defenses specific to GCRL, highlighting its practical importance for enhancing algorithmic robustness.