- 9.5 Dynamic Knowledge Injection for AIXI Agents
- Authors: Samuel Yang-Zhao, Kee Siong Ng, Marcus Hutter
- Reason: Authored by Marcus Hutter, a prominent figure in AI and co-creator of AIXI. Targets a fundamental challenge in general reinforcement learning, with an extended version at AAAI 2024, indicating peer recognition and potential high impact.
- 9.2 Foundations of Reinforcement Learning and Interactive Decision Making
- Authors: Dylan J. Foster, Alexander Rakhlin
- Reason: The paper provides a comprehensive and unified statistical framework for tackling reinforcement learning problems and includes in-depth theoretical analysis, authored by recognized experts in the field.
- 9.0 Inverse Reinforcement Learning with Unknown Reward Model based on Structural Risk Minimization
- Authors: Chendi Qu, Jianping He, Xiaoming Duan, Jiming Chen
- Reason: Offers a novel approach to selecting reward models in inverse reinforcement learning, balancing model complexity and computation, and comes from authors with a credible academic background.
- 8.9 OpenRL: A Unified Reinforcement Learning Framework
- Authors: Shiyu Huang, Wentse Chen, Yiwen Sun, Fuqing Bie, Wei-Wei Tu
- Reason: Introduces a comprehensive framework supporting a wide range of tasks and integrating NLP with RL. The paper lacks a clear indication of acceptance at a prestigious conference or journal, but it presents tangible resources and a user-centric approach that could influence the RL community.
- 8.7 Ensemble-based Interactive Imitation Learning
- Authors: Yichen Li, Chicheng Zhang
- Reason: Proposes an efficient algorithmic framework for interactive imitation learning, providing both theoretical and empirical results, authored by researchers actively contributing to the field.
- 8.6 XuanCe: A Comprehensive and Unified Deep Reinforcement Learning Library
- Authors: Wenzhang Liu, Wenzhe Cai, Kun Jiang, Guangran Cheng, Yuanda Wang, Jiawei Wang, Jingyu Cao, Lele Xu, Chaoxu Mu, Changyin Sun
- Reason: Presents a versatile DRL library with over 40 classical algorithms, indicating potential influence on DRL research. The paper details compatibility with multiple environments and provides baselines, which can be attractive to researchers in the field.
- 8.5 Adaptive trajectory-constrained exploration strategy for deep reinforcement learning
- Authors: Guojian Wang, Faguo Wu, Xiao Zhang, Ning Guo, Zhiming Zheng
- Reason: Introduces an effective exploration strategy for DRL that could have significant practical implications, backed by extensive experiments and authored by a team with a strong publication record.
- 8.3 Maximizing the Success Probability of Policy Allocations in Online Systems
- Authors: Artem Betlei, Mariia Vladimirova, Mehdi Sebbar, Nicolas Urien, Thibaud Rahier, Benjamin Heymann
- Reason: Accepted at AAAI 2024 with an interesting approach to policy allocation in advertising contexts, a critical application area of RL, suggesting significant impact on both academia and industry.
- 8.3 Active Third-Person Imitation Learning
- Authors: Timo Klein, Susanna Weinberger, Adish Singla, Sebastian Tschiatschek
- Reason: Addresses an interesting aspect of imitation learning by considering the learner’s perspective, with a practical approach grounded in a GAN-based active learning method, authored by individuals with relevant expertise.
- 8.1 Harnessing the Power of Federated Learning in Federated Contextual Bandits
- Authors: Chengshuai Shi, Ruida Zhou, Kun Yang, Cong Shen
- Reason: Connects federated learning with contextual bandits, a highly relevant topic in distributed RL. The paper provides a new perspective, though it lacks a clearly stated acceptance at a top-tier venue that might otherwise boost its inferred influence.