- 9.4 MENTOR: Guiding Hierarchical Reinforcement Learning with Human Feedback and Dynamic Distance Constraint
- Authors: Xinglin Zhou, Yifu Yuan, Shaofu Yang, Jianye Hao
- Reason: Integrates human feedback with a dynamic distance constraint for formulating subgoals, directly addressing two main challenges in HRL.
- 9.2 BeTAIL: Behavior Transformer Adversarial Imitation Learning from Human Racing Gameplay
- Authors: Catherine Weaver, Chen Tang, Ce Hao, Kenta Kawamoto, Masayoshi Tomizuka, Wei Zhan
- Reason: Combines behavior transformers with adversarial imitation learning in a complex, dynamic racing environment, with potential for strong influence through its applicability to robotic tasks.
- 9.1 ACE: Off-Policy Actor-Critic with Causality-Aware Entropy Regularization
- Authors: Tianying Ji, Yongyuan Liang, Yan Zeng, Yu Luo, Guowei Xu, Jiawei Guo, Ruijie Zheng, Furong Huang, Fuchun Sun, Huazhe Xu
- Reason: Introduces a causality-aware entropy regularization mechanism for exploration in continuous-control reinforcement learning, a significant contribution likely to influence future research on policy learning efficiency.
- 9.0 Bayesian Off-Policy Evaluation and Learning for Large Action Spaces
- Authors: Imad Aouali, Victor-Emmanuel Brunel, David Rohde, Anna Korba
- Reason: Presents a unified Bayesian framework for off-policy evaluation and learning that handles large action spaces by leveraging action correlations, backed by theoretical analysis and strong empirical results.
- 8.9 Model-Based Reinforcement Learning Control of Reaction-Diffusion Problems
- Authors: Christina Schenk, Aditya Vasudevan, Maciej Haranczyk, Ignacio Romero
- Reason: Explores novel applications of reinforcement learning to control problems in reaction-diffusion systems, with implications for fields such as thermal transport and disease modeling.
- 8.8 Simple and Effective Transfer Learning for Neuro-Symbolic Integration
- Authors: Alessandro Daniele, Tommaso Campari, Sagar Malhotra, Luciano Serafini
- Reason: Addresses the critical problem of generalization and reasoning in deep learning through an innovative transfer learning approach applied to Neuro-Symbolic Integration.
- 8.7 Edge Caching Based on Deep Reinforcement Learning and Transfer Learning
- Authors: Farnaz Niknia, Ping Wang, Zixu Wang, Aakash Agarwal, Adib S. Rezaei
- Reason: Addresses contemporary challenges in network data transmission by combining deep reinforcement learning and transfer learning, promising contributions to real-world problems such as edge caching under variable traffic conditions.
- 8.6 PolyNet: Learning Diverse Solution Strategies for Neural Combinatorial Optimization
- Authors: André Hottung, Mridul Mahajan, Kevin Tierney
- Reason: Proposes an approach that could advance neural combinatorial optimization by improving exploration without relying on handcrafted rules.
- 8.6 Enhancement of High-definition Map Update Service Through Coverage-aware and Reinforcement Learning
- Authors: Jeffrey Redondo, Zhenhui Yuan, Nauman Aslam
- Reason: Proposes optimization algorithms for vehicular networks that improve quality of service for high-definition (HD) map updates in autonomous vehicles, showing considerable improvements over existing methods.
- 8.5 Partial Search in a Frozen Network is Enough to Find a Strong Lottery Ticket
- Authors: Hikari Otsuka, Daiki Chijiwa, Ángel López García-Arias, Yasuyuki Okoshi, Kazushi Kawamura, Thiem Van Chu, Daichi Fujiki, Susumu Takeuchi, Masato Motomura
- Reason: The concept of searching only a reduced subspace of a frozen network, together with its experimental validation, has the potential to influence understanding and methodology in deep learning optimization.