- 9.1 Compositional Conservatism: A Transductive Approach in Offline Reinforcement Learning
- Authors: Yeda Song, Dongwook Lee, Gunhee Kim
- Reason: Introduces a novel approach to conservatism in offline RL, a critical domain in RL research. The authors' prior work and the paper's acceptance at ICLR 2024 increase its credibility.
- 8.8 Skill Transfer and Discovery for Sim-to-Real Learning: A Representation-Based Viewpoint
- Authors: Haitong Ma, Zhaolin Ren, Bo Dai, Na Li
- Reason: Addresses the pivotal sim-to-real transfer problem in RL for robotics, a rapidly advancing field. The project page and strong empirical results support the potential influence of this work.
- 8.6 Percentile Criterion Optimization in Offline Reinforcement Learning
- Authors: Elita A. Lobo, Cyrus Cousins, Yair Zick, Marek Petrik
- Reason: Offers a novel algorithmic solution to a well-known problem in RL with strong theoretical and practical significance, further validated by its acceptance at NeurIPS 2023.
- 8.4 SAFE-GIL: SAFEty Guided Imitation Learning
- Authors: Yusuf Umut Ciftci, Zeyuan Feng, Somil Bansal
- Reason: Presents a new perspective on behavior cloning in safety-critical applications, which is essential for both theoretical and practical considerations in RL.
- 8.2 Humanoid-Gym: Reinforcement Learning for Humanoid Robot with Zero-Shot Sim2Real Transfer
- Authors: Xinyang Gu, Yen-Jen Wang, Jianyu Chen
- Reason: Introduces a framework for RL on humanoid robots that tackles zero-shot sim-to-real transfer, a trending topic in RL for robotics.