- 9.6 Simple Hierarchical Planning with Diffusion
- Authors: Chang Chen, Fei Deng, Kenji Kawaguchi, Caglar Gulcehre, Sungjin Ahn
- Reason: The paper introduces a novel hierarchical planning method that enhances diffusion-based planning, with potentially broad implications for various RL tasks, and its author list includes reputable figures such as Kenji Kawaguchi.
- 9.4 t-DGR: A Trajectory-Based Deep Generative Replay Method for Continual Learning in Decision Making
- Authors: William Yue, Bo Liu, Peter Stone
- Reason: Authored by experts in the field such as Peter Stone, the paper addresses continual learning with a new generative replay technique and is presented at a NeurIPS workshop, suggesting potentially high impact.
- 9.2 Towards an Adaptable and Generalizable Optimization Engine in Decision and Control: A Meta Reinforcement Learning Approach
- Authors: Sungwook Yang, Chaoying Pei, Ran Dai, Chuangchuang Sun
- Reason: The use of meta-RL for adaptable optimization in model predictive control (MPC) appears innovative and valuable, and the work comes from researchers who may be advancing the field of optimal control.
- 9.0 Adaptive Discounting of Training Time Attacks
- Authors: Ridhima Bector, Abhay Aradhya, Chai Quek, Zinovi Rabinovich
- Reason: The paper offers new insights into training-time attacks in RL and introduces gammaDDPG, an algorithm that could be of significant interest for securing RL-based systems.
- 8.8 A Deep Q-Learning based Smart Scheduling of EVs for Demand Response in Smart Grids
- Authors: Viorica Rozina Chifu, Tudor Cioara, Cristina Bianca Pop, Horia Rusu, Ionut Anghel
- Reason: The paper applies RL to real-world energy systems, sitting at the intersection of deep learning and smart grid management, and is authored by domain experts.