- 9.1 Supercompiler Code Optimization with Zero-Shot Reinforcement Learning
- Authors: Jialong Wu, Chaoyi Deng, Jianmin Wang, Mingsheng Long
- Reason: Introduces CodeZero, an extensively trained AI agent that optimizes code with deep reinforcement learning and is reported to outperform expert-designed compilers, marking a significant advance for learning-based software engineering (see the sketch below).
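A minimal sketch of the zero-shot idea: a policy trained offline on a large program corpus chooses optimization passes for an unseen program with no per-program fine-tuning. Everything here is a hypothetical stand-in (the `PassOrderingEnv` toy environment, the pass list, and the hand-written `pretrained_policy`), not the paper's actual setup.

```python
# Toy zero-shot pass-ordering loop: a fixed, "pretrained" policy is applied
# to a fresh program without any adaptation. All names are illustrative.
import random

PASSES = ["inline", "loop-unroll", "gvn", "dce", "licm"]

class PassOrderingEnv:
    """Toy environment: state = program features, action = next pass."""
    def __init__(self, seed):
        self.rng = random.Random(seed)
        self.size = self.rng.uniform(1.0, 2.0)  # normalized code size

    def features(self):
        return (round(self.size, 2),)

    def step(self, pass_name):
        # Pretend each pass rescales program size by a pass-specific factor.
        factor = {"inline": 1.02, "loop-unroll": 1.05, "gvn": 0.95,
                  "dce": 0.90, "licm": 0.97}[pass_name]
        before = self.size
        self.size *= factor * self.rng.uniform(0.98, 1.02)
        return before - self.size  # reward = size reduction

def pretrained_policy(features):
    # Stand-in for a policy learned with deep RL on many programs:
    # run aggressive cleanup while the program is large, then refine.
    return "dce" if features[0] > 1.3 else "gvn"

env = PassOrderingEnv(seed=0)  # an unseen program
total = 0.0
for _ in range(10):            # fixed pass budget, applied zero-shot
    total += env.step(pretrained_policy(env.features()))
print(f"total size reduction: {total:.3f}")
```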
- 8.7 Learning World Models With Hierarchical Temporal Abstractions: A Probabilistic Perspective
- Authors: Vaisakh Shaj
- Reason: Presents new probabilistic formalisms for building hierarchical world models, incorporating concepts from neuroscience, and promises improved long-range predictions for robots, indicating a foundational contribution (a toy two-timescale sketch follows).
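To make "hierarchical temporal abstraction" concrete, here is a toy two-level latent dynamics model: a slow latent updates once every `K` steps and conditions a fast per-step latent. The matrices are random stand-ins, and this is a generic illustration of the concept, not the paper's probabilistic formalism.

```python
# Two-timescale latent dynamics: slow latent ticks every K steps and
# drives the fast latent, which generates per-step observations.
import numpy as np

rng = np.random.default_rng(0)
D, K, T = 4, 5, 20
A_slow = 0.9 * np.eye(D)               # slow-timescale transition
A_fast = 0.7 * np.eye(D)               # fast-timescale transition
B = 0.1 * rng.standard_normal((D, D))  # top-down influence of slow on fast

z_slow = rng.standard_normal(D)
z_fast = rng.standard_normal(D)
for t in range(T):
    if t % K == 0:                     # slow latent updates once per K steps
        z_slow = A_slow @ z_slow + 0.05 * rng.standard_normal(D)
    # fast latent evolves every step, conditioned on the slow latent
    z_fast = A_fast @ z_fast + B @ z_slow + 0.05 * rng.standard_normal(D)
    obs = z_fast + 0.01 * rng.standard_normal(D)  # predicted observation
print("final predicted observation:", np.round(obs, 3))
```

The slow latent carries long-horizon context cheaply, which is the intuition behind better long-range prediction.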
- 8.5 AFU: Actor-Free critic Updates in off-policy RL for continuous control
- Authors: Nicolas Perrin-Gilbert
- Reason: Proposes a novel off-policy deep RL algorithm whose critic updates require no actor, reportedly the first such method competitive with state-of-the-art actor-critic algorithms, which could shift future research away from the actor-critic paradigm (a simplified sketch follows).
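A simplified sketch of one way to update a critic without an actor: fit a state-value `V(s)` toward the maximum of sampled Q-values using an asymmetric, expectile-style loss rather than querying an actor for `max_a Q(s, a)`. This conveys the spirit of actor-free updates only; AFU's actual mechanism differs, and the Q-table here is a made-up stand-in.

```python
# Actor-free value fitting: expectile regression pushes V toward an
# upper envelope of sampled Q-values, so no actor is ever consulted.
import numpy as np

rng = np.random.default_rng(1)
n_states, n_sampled_actions, tau, lr = 8, 32, 0.9, 0.5
Q = rng.standard_normal((n_states, n_sampled_actions))  # stand-in Q-values
V = np.zeros(n_states)

for _ in range(200):
    a = rng.integers(0, n_sampled_actions, size=n_states)  # sampled actions
    q = Q[np.arange(n_states), a]
    diff = q - V
    # asymmetric weighting: overshoots (q > V) count more, so V drifts
    # toward the high end of the Q distribution without any policy network
    weight = np.where(diff > 0, tau, 1 - tau)
    V += lr * weight * diff

print("gap to true max_a Q:", np.round(Q.max(axis=1) - V, 2))
```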
- 8.2 Reinforcement Learning with Generative Models for Compact Support Sets
- Authors: Nico Schiavone, Xingyu Li
- Reason: Develops a reinforcement learning framework that leverages generative foundation models to build compact support sets, potentially increasing classification accuracy significantly without additional data labeling (a toy sketch follows).
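A toy sketch of the idea: a generative model proposes candidate examples, and a reward-driven search loop keeps the compact support set that maximizes validation accuracy of a downstream classifier. The `generate` function, the nearest-centroid classifier, and the random search standing in for the RL agent are all assumptions for illustration, not the paper's pipeline.

```python
# Reward-driven selection of a compact support set: candidates come from
# a stand-in generative model, reward is downstream validation accuracy.
import numpy as np

rng = np.random.default_rng(2)

def generate(label, n):                 # stand-in generative model
    return rng.standard_normal((n, 2)) + 3 * label

def accuracy(support, labels, X, y):    # nearest-centroid classifier
    centroids = np.stack([support[labels == c].mean(0) for c in (0, 1)])
    pred = np.argmin(((X[:, None] - centroids) ** 2).sum(-1), axis=1)
    return (pred == y).mean()

X_val = np.concatenate([generate(0, 50), generate(1, 50)])
y_val = np.repeat([0, 1], 50)

best_acc, best_set = 0.0, None
for episode in range(20):               # search loop standing in for the agent
    support = np.concatenate([generate(0, 5), generate(1, 5)])
    labels = np.repeat([0, 1], 5)
    acc = accuracy(support, labels, X_val, y_val)  # reward signal
    if acc > best_acc:
        best_acc, best_set = acc, support
print(f"best validation accuracy with 10 support points: {best_acc:.2f}")
```

The key point is that no new labels are needed: the generative model supplies labeled candidates, and the reward selects among them.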
- 7.8 A Dual Perspective of Reinforcement Learning for Imposing Policy Constraints
- Authors: Bram De Cooman, Johan Suykens
- Reason: Offers a unifying primal-dual framework for reinforcement learning that supports imposing various types of policy constraints, potentially influencing the design of future RL systems with stronger safety and performance guarantees (a minimal sketch follows).
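A minimal sketch of the general Lagrangian primal-dual recipe behind constrained RL, on a made-up three-action bandit: the primal step improves expected reward minus `lambda` times expected cost; the dual step raises `lambda` while the cost constraint is violated. This illustrates the standard primal-dual mechanism, not the paper's specific dual formulation.

```python
# Primal-dual updates for a constrained bandit: maximize E[reward]
# subject to E[cost] <= budget via the Lagrangian E[reward - lam * cost].
import numpy as np

rewards = np.array([1.0, 0.8, 0.3])  # per-action expected reward (made up)
costs = np.array([0.9, 0.4, 0.1])    # per-action expected cost (made up)
budget = 0.5                         # constraint: E[cost] <= budget

logits = np.zeros(3)
lam, lr, dual_lr = 0.0, 0.5, 0.2
for _ in range(500):
    pi = np.exp(logits) / np.exp(logits).sum()       # softmax policy
    # primal ascent on the Lagrangian: softmax gradient is pi * (f - E[f])
    f = rewards - lam * costs
    logits += lr * pi * (f - pi @ f)
    # dual ascent: raise lam when expected cost exceeds the budget
    lam = max(0.0, lam + dual_lr * (pi @ costs - budget))

print("policy:", np.round(pi, 3),
      "expected cost:", round(float(pi @ costs), 3))
```

At convergence the multiplier prices the constraint, so the policy trades reward for cost exactly until the budget is met, which is the unifying view the paper builds on.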