- 9.2 Robust Communicative Multi-Agent Reinforcement Learning with Active Defense
- Authors: Lebin Yu, Yunbo Qiu, Quanming Yao, Yuan Shen, Xudong Zhang, Jian Wang
- Reason: Accepted at AAAI 2024; its focus on communication robustness in MARL addresses a critical aspect of real-world applications.
- 9.2 Emergence of In-Context Reinforcement Learning from Noise Distillation
- Authors: Ilya Zisman, Vladislav Kurenkov, Alexander Nikulin, Viacheslav Sinii, Sergey Kolesnikov
- Reason: Introduces a novel approach in the emerging field of in-context reinforcement learning, potentially addressing the challenge of multi-task learning with suboptimal demonstrators through noise distillation.
- 9.0 Prediction and Control in Continual Reinforcement Learning
- Authors: Nishanth Anand, Doina Precup
- Reason: Published at NeurIPS 2023, a highly prestigious conference; addresses the important problem of continual learning in RL.
- 9.0 Value Explicit Pretraining for Goal-Based Transfer Learning
- Authors: Kiran Lekkala, Henghui Bao, Sumedh Sontakke, Laurent Itti
- Reason: Proposes an innovative pretraining method aimed at facilitating transfer of learning across tasks in reinforcement learning, with potential applications across a wide range of RL problems.
- 8.9 On the Effectiveness of Retrieval, Alignment, and Replay in Manipulation
- Authors: Norman Di Palo, Edward Johns
- Reason: Offers insights into improving imitation-learning efficiency, with a focus on the visual observations critical for reinforcement learning in robotics.
- 8.7 Curriculum Learning for Cooperation in Multi-Agent Reinforcement Learning
- Authors: Rupali Bhati, Sai Krishna Gottipati, Clodéric Mars, Matthew E. Taylor
- Reason: Presented at NeurIPS 2023; focuses on cooperation in MARL, an area of growing interest and applicability.
- 8.7 Chasing Fairness in Graphs: A GNN Architecture Perspective
- Authors: Zhimeng Jiang, Xiaotian Han, Chao Fan, Zirui Liu, Na Zou, Ali Mostafavi, Xia Hu
- Reason: Addresses fairness in machine learning with a specific focus on graph neural networks, which is a timely issue and may spur further research into ethical AI.
- 8.5 Neural Network Approximation for Pessimistic Offline Reinforcement Learning
- Authors: Di Wu, Yuling Jiao, Li Shen, Haizhao Yang, Xiliang Lu
- Reason: Accepted at AAAI 2024; offers theoretical insights into offline RL with practical implications for deep learning models.
- 8.5 Device Scheduling for Relay-assisted Over-the-Air Aggregation in Federated Learning
- Authors: Fan Zhang, Jining Chen, Kunlun Wang, Wen Chen
- Reason: Tackles efficient device scheduling in federated learning, with implications for the scale and efficiency of RL algorithms in distributed environments.
- 8.3 Optimistic Policy Gradient in Multi-Player Markov Games with a Single Controller: Convergence Beyond the Minty Property
- Authors: Ioannis Anagnostides, Ioannis Panageas, Gabriele Farina, Tuomas Sandholm
- Reason: To appear at AAAI 2024; provides a new framework for optimistic policy gradient methods, relevant to both the theory and application of multi-player games.