- 9.3 Safety-aware Causal Representation for Trustworthy Reinforcement Learning in Autonomous Driving
- Authors: Haohong Lin, Wenhao Ding, Zuxin Liu, Yaru Niu, Jiacheng Zhu, Yuming Niu, Ding Zhao
- Reason: This paper provides a novel causal-representation methodology critical for ensuring safety in autonomous driving, an application area of high interest and commercial potential. The authors' affiliations and citation records suggest strong authority in the field.
- 9.1 Reinforcement Learning with Maskable Stock Representation for Portfolio Management in Customizable Stock Pools
- Authors: Wentao Zhang
- Reason: The paper takes an innovative RL approach to a financial trading task with clear commercial potential and practical urgency in financial markets. The author presents extensive experimental results, adding to its value. A minimal sketch of the underlying action-masking idea follows below.
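To illustrate the general idea of a customizable stock pool, here is a minimal action-masking sketch (the function and variable names are hypothetical, not the paper's implementation): stocks outside the current pool receive `-inf` logits, so the softmax assigns them zero portfolio weight.

```python
import torch

def masked_portfolio_weights(logits: torch.Tensor, in_pool: torch.Tensor) -> torch.Tensor:
    """Turn raw policy logits into portfolio weights over a customizable stock pool.

    logits:  (num_stocks,) raw scores from the policy network.
    in_pool: (num_stocks,) boolean mask; True where the stock is tradable.
    """
    # Set logits of masked-out stocks to -inf so softmax assigns them zero weight.
    masked = logits.masked_fill(~in_pool, float("-inf"))
    return torch.softmax(masked, dim=-1)

# Example: 5 stocks, but the user's custom pool contains only stocks 0, 2, 4.
logits = torch.tensor([0.5, 1.2, -0.3, 2.0, 0.1])
pool = torch.tensor([True, False, True, False, True])
print(masked_portfolio_weights(logits, pool))  # zero weight on stocks 1 and 3
```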
- 8.9 Towards a Standardized Reinforcement Learning Framework for AAM Contingency Management
- Authors: Luis E. Alvarez, Marc W. Brittain, Kara Breeden
- Reason: Offers a substantial contribution toward applying RL in the emerging field of Advanced Air Mobility (AAM), with comprehensive benchmarking and the potential to set industry standards.
- 8.7 Tactics2D: A Multi-agent Reinforcement Learning Environment for Driving Decision-making
- Authors: Yueyuan Li, Songan Zhang, Mingyang Jiang, Xingyuan Chen, Ming Yang
- Reason: The paper has the potential to be influential: the framework may become a standard toolset in the research community for autonomous-driving decision-making.
- 8.7 Multi-Task Reinforcement Learning with Mixture of Orthogonal Experts
- Authors: Ahmed Hendawy, Jan Peters, Carlo D’Eramo
- Reason: Introduces a novel representation-learning mechanism for multi-task RL (MTRL) that establishes new state-of-the-art results on recognized benchmarks; a collaboration with the well-known RL authority Jan Peters. A minimal orthogonalization sketch follows below.
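As a rough illustration of the orthogonal-experts idea, the following sketch (assumed names and shapes, not the paper's actual architecture) orthogonalizes a set of expert feature vectors with a QR decomposition, which is equivalent to Gram-Schmidt, before combining them with task-specific weights:

```python
import torch

def orthogonal_expert_features(expert_outputs: torch.Tensor,
                               task_weights: torch.Tensor) -> torch.Tensor:
    """Combine expert representations after orthogonalizing them.

    expert_outputs: (num_experts, feat_dim) features from each expert network.
    task_weights:   (num_experts,) task-specific mixing coefficients.
    """
    # QR decomposition of the transposed feature matrix yields an orthonormal
    # basis spanning the experts' representations (a Gram-Schmidt equivalent).
    q, _ = torch.linalg.qr(expert_outputs.T)    # q: (feat_dim, num_experts)
    return q @ task_weights                     # task-specific combination

experts = torch.randn(4, 16)                    # 4 experts, 16-dim features
weights = torch.softmax(torch.randn(4), dim=0)  # e.g., from a task embedding
rep = orthogonal_expert_features(experts, weights)
```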
- 8.6 Environment-Aware Dynamic Graph Learning for Out-of-Distribution Generalization
- Authors: Haonan Yuan, Qingyun Sun, Xingcheng Fu, Ziwei Zhang, Cheng Ji, Hao Peng, Jianxin Li
- Reason: Proposes a novel approach to out-of-distribution (OOD) generalization on dynamic graphs; this is valuable across many applications, though its impact may be less focused than that of the entries above.
- 8.5 Offline Reinforcement Learning for Wireless Network Optimization with Mixture Datasets
- Authors: Kun Yang, Cong Shen, Jing Yang, Shu-ping Yeh, Jerry Sydir
- Reason: Contributes to radio resource management (RRM), a significant practical application area for reinforcement learning, offering near-optimal policies and an innovative technique for learning from mixtures of datasets; the paper is camera-ready for Asilomar 2023.
- 8.3 ADAPTER-RL: Adaptation of Any Agent using Reinforcement Learning
- Authors: Yizhao Jin, Greg Slabaugh, Simon Lucas
- Reason: Presents a potentially groundbreaking method for adapting any agent, whether built on pre-trained models or rule-based systems, with reinforcement learning, addressing overfitting and sample-inefficiency issues. A generic adapter sketch follows below.
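The general adapter pattern, a small trainable head correcting a frozen base agent, might look like the following sketch; the class name and layer sizes here are assumptions, not ADAPTER-RL's actual design:

```python
import torch
import torch.nn as nn

class AdaptedAgent(nn.Module):
    """Wrap a frozen base agent with a small trainable adapter head (hypothetical)."""

    def __init__(self, base_policy: nn.Module, obs_dim: int, n_actions: int):
        super().__init__()
        self.base_policy = base_policy
        for p in self.base_policy.parameters():    # keep the base agent fixed
            p.requires_grad_(False)
        self.adapter = nn.Sequential(              # only this part is trained
            nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, n_actions)
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        with torch.no_grad():
            base_logits = self.base_policy(obs)
        # Residual correction: the adapter nudges the frozen agent's decisions.
        return base_logits + self.adapter(obs)
```

Only the adapter's parameters receive gradients, which keeps fine-tuning sample-efficient and leaves the base agent's behavior recoverable.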
- 8.0 Provably Efficient CVaR RL in Low-rank MDPs
- Authors: Yulai Zhao, Wenhao Zhan, Xiaoyan Hu, Ho-fung Leung, Farzan Farnia, Wen Sun, Jason D. Lee
- Reason: Offers the first provably efficient framework for Conditional Value-at-Risk (CVaR) RL with function approximation, making it suitable for large state spaces and addressing risk sensitivity, a vital concern in reinforcement learning. A brief illustration of the CVaR measure follows below.
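For reference, the CVaR risk measure can be estimated from sampled returns as the mean of the worst α-fraction of outcomes; a minimal sketch (names are illustrative, and this shows only the risk measure, not the paper's algorithm):

```python
import numpy as np

def cvar(returns: np.ndarray, alpha: float = 0.1) -> float:
    """Conditional Value-at-Risk: mean of the worst alpha-fraction of returns."""
    sorted_returns = np.sort(returns)               # ascending: worst first
    k = max(1, int(np.ceil(alpha * len(returns))))  # size of the lower tail
    return float(sorted_returns[:k].mean())

samples = np.random.default_rng(0).normal(loc=1.0, scale=2.0, size=10_000)
print(cvar(samples, alpha=0.05))  # average of the worst 5% of outcomes
```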
- 7.9 Replay-enhanced Continual Reinforcement Learning
- Authors: Tiantian Zhang, Kevin Zehua Shen, Zichuan Lin, Bo Yuan, Xueqian Wang, Xiu Li, Deheng Ye
- Reason: Accepted at Transactions on Machine Learning Research (2023); provides a method for mitigating catastrophic forgetting in continual reinforcement learning, a key concern for agents that learn over time. A minimal replay-mixing sketch follows below.
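A minimal sketch of the classic rehearsal idea behind replay-enhanced continual learning (generic, not the paper's specific method): mix replayed transitions from past tasks into each training batch so old skills keep receiving gradient signal.

```python
import random

def mixed_batch(current_buffer: list, replay_buffer: list,
                batch_size: int = 64, replay_ratio: float = 0.5) -> list:
    """Sample a batch mixing current-task data with replay from past tasks.

    Rehearsing old transitions alongside new ones is the classic defense
    against catastrophic forgetting. Assumes current_buffer holds at least
    batch_size - n_replay transitions.
    """
    n_replay = min(int(batch_size * replay_ratio), len(replay_buffer))
    batch = random.sample(replay_buffer, n_replay)
    batch += random.sample(current_buffer, batch_size - len(batch))
    return batch
```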