- 9.3 Critic-Guided Decision Transformer for Offline Reinforcement Learning
- Authors: Yuanfu Wang, Chao Yang, Ying Wen, Yu Liu, Yu Qiao
- Reason: Presents a novel approach that bridges deterministic return-conditioned supervised learning (RCSL) and probabilistic value-based methods, potentially advancing the state of the art in offline RL, an area of high interest in the RL community.
- 9.0 In-Context Reinforcement Learning for Variable Action Spaces
- Authors: Viacheslav Sinii, Alexander Nikulin, Vladislav Kurenkov, Ilya Zisman, Sergey Kolesnikov
- Reason: Addresses the challenge of generalizing to new action spaces in RL, which is crucial for real-world applications; the authors come from recognized institutions, adding to the paper's potential influence.
- 8.7 Neural feels with neural fields: Visuo-tactile perception for in-hand manipulation
- Authors: Sudharshan Suresh, Haozhi Qi, Tingfan Wu, Taosha Fan, Luis Pineda, Mike Lambeta, Jitendra Malik, Mrinal Kalakrishnan, Roberto Calandra, Michael Kaess, Joseph Ortiz, Mustafa Mukadam
- Reason: Takes a robust, interdisciplinary approach to in-hand manipulation using neural fields, and the involvement of Jitendra Malik, a prominent figure in AI, increases its potential influence.
- 8.5 Automatic Curriculum Learning with Gradient Reward Signals
- Authors: Ryan Campbell, Junsang Yoon
- Reason: Offers a unique perspective on using gradient norm signals for Automatic Curriculum Learning, a less explored but promising direction for the efficient training of RL agents.
- 8.2 Fed-QSSL: A Framework for Personalized Federated Learning under Bitwidth and Data Heterogeneity
- Authors: Yiyue Chen, Haris Vikalo, Chianing Wang
- Reason: Addresses critical real-world challenges in federated learning, namely bitwidth and data heterogeneity on resource-constrained devices, and has been accepted at AAAI, which adds to its credibility and potential influence.