- 9.5 Exploration-Driven Policy Optimization in RLHF: Theoretical Insights on Efficient Data Utilization
- Authors: Yihan Du, Anna Winnicki, Gal Dalal, Shie Mannor, R. Srikant
- Reason: Written by renowned authors with a strong track record in reinforcement learning and systems theory, this paper provides theoretical justification for the data efficiency of Reinforcement Learning from Human Feedback (RLHF), a significant step towards understanding human-in-the-loop RL systems.
- 9.2 HyperAgent: A Simple, Scalable, Efficient and Provable Reinforcement Learning Framework for Complex Environments
- Authors: Yingru Li, Jiawei Xu, Lei Han, Zhi-Quan Luo
- Reason: Bridges theoretical concepts and practical application in RL; its implementation simplicity and demonstrated practical efficiency give it potentially high impact.
- 9.0 Robust agents learn causal world models
- Authors: Jonathan Richens, Tom Everitt
- Reason: An important contribution presented at a top conference (ICLR), addressing fundamental questions about causal reasoning in AI and its implications for robustness and generalization.
- 8.8 Provably Sample Efficient RLHF via Active Preference Optimization
- Authors: Nirjhar Das, Souradip Chakraborty, Aldo Pacchiano, Sayak Ray Chowdhury
- Reason: Presents a new active method for collecting human preference data in RLHF; its practical efficiency could significantly influence future cost-effective RLHF pipelines (a generic sketch of the underlying idea follows this list).
- 8.7 Policy Learning for Off-Dynamics RL with Deficient Support
- Authors: Linh Le Pham Van, Hung The Tran, Sunil Gupta
- Reason: Accepted as a full paper at AAMAS, it addresses off-dynamics transfer in RL, which is highly relevant to real-world applications where sim-to-real transfer is common.
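
To make the active preference collection idea behind the 8.8 entry concrete, here is a minimal sketch of uncertainty-based pair selection under a Bradley-Terry preference model: query the human on the response pair whose predicted preference is closest to 50/50, where a label is most informative. This is a generic illustration assuming a scalar reward model and a max-uncertainty acquisition rule, not the algorithm from the paper; `reward_fn`, `bt_preference_prob`, and `select_most_uncertain_pair` are hypothetical names.

```python
import math

def bt_preference_prob(r_a: float, r_b: float) -> float:
    """Bradley-Terry probability that response A beats response B,
    given estimated scalar rewards r_a and r_b."""
    return 1.0 / (1.0 + math.exp(-(r_a - r_b)))

def select_most_uncertain_pair(candidate_pairs, reward_fn):
    """Return the pair whose predicted preference is closest to 0.5,
    i.e. where a human label carries the most information.

    candidate_pairs: iterable of (response_a, response_b) tuples
    reward_fn: maps a response to an estimated scalar reward
    """
    def uncertainty_gap(pair):
        p = bt_preference_prob(reward_fn(pair[0]), reward_fn(pair[1]))
        return abs(p - 0.5)  # smaller gap = more uncertain pair
    return min(candidate_pairs, key=uncertainty_gap)

# Toy usage: responses are floats and the "reward model" is the identity,
# so the near-tied pair (1.0, 1.1) is chosen for human labeling.
pairs = [(0.1, 2.3), (1.0, 1.1), (0.0, 3.0)]
print(select_most_uncertain_pair(pairs, reward_fn=lambda x: x))
```

Compared with labeling uniformly sampled pairs, an acquisition rule of this kind concentrates the human-annotation budget on comparisons the current reward model cannot already resolve, which is the intuition behind the sample-efficiency claims in active RLHF work.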