- 9.3 Learning Team-Based Navigation: A Review of Deep Reinforcement Learning Techniques for Multi-Agent Pathfinding
- Authors: Jaehoon Chung, Jamil Fayyad, Younes Al Younes, Homayoun Najjaran
- The authors of this paper are experts in the field, and the topic is highly relevant to large-scale robotic applications. The paper provides a comprehensive review of DRL-based approaches, which could have a far-reaching impact on MAPF research.
- 9.0 Reinforcement Logic Rule Learning for Temporal Point Processes
- Authors: Chao Yang, Lu Wang, Kun Gao, Shuang Li
- This paper proposes a novel algorithm for optimizing an explanatory temporal logic rule set. It provides a new perspective on the problem by integrating advanced optimization techniques with reinforcement learning.
- 8.8 Neural Conversation Models and How to Rein Them in: A Survey of Failures and Fixes
- Authors: Fabian Galetzka, Anne Beyer, David Schlangen
- This paper’s framework for how neural conversation models should be structured offers a pioneering perspective. The authors’ comprehensive study of the failures and fixes of these systems is of high value to the field.
- 8.5 Learning Control Policies for Variable Objectives from Offline Data
- Authors: Marc Weber, Phillip Swazinna, Daniel Hein, Steffen Udluft, Volkmar Sterzing
- The paper presents an offline reinforcement learning method for controlling dynamical systems, which could be impactful in this field.
- 8.1 Towards a Causal Probabilistic Framework for Prediction, Action-Selection & Explanations for Robot Block-Stacking Tasks
- Authors: Ricardo Cannizzaro, Jonathan Routley, Lars Kunze
- This work proposes a new framework that combines physics simulation with a structural causal model. The contribution could inspire future work in robotics and manipulation tasks.