- 9.5 Emergent Communication in Multi-Agent Reinforcement Learning for Future Wireless Networks
- Authors: Marwa Chafii, Salmane Naoumi, Reda Alami, Ebtesam Almazrouei, Mehdi Bennis, Merouane Debbah
- Reason: The work introduces a multi-agent reinforcement learning model addressing complex wireless network scenarios. This could have a significant impact on the development of future 6G networks.
- 9.3 The Safety Filter: A Unified View of Safety-Critical Control in Autonomous Systems
- Authors: Kai-Chieh Hsu, Haimin Hu, Jaime Fernández Fisac
- Reason: The paper's focus on the critical aspect of safety in autonomous systems positions it as a potentially influential work, given the increasing prevalence of AI-based automation. The authors also propose an integrative approach that enhances both model-based and data-driven methods.
- 9.2 Fidelity-Induced Interpretable Policy Extraction for Reinforcement Learning
- Authors: Xiao Liu, Wubing Chen, Mao Tan
- Reason: The paper presents a novel method for interpretable policy extraction in reinforcement learning. This could improve the predictability and transparency of decision-making processes in RL.
- 8.9 Revisiting Energy Based Models as Policies: Ranking Noise Contrastive Estimation and Interpolating Energy Models
- Authors: Sumeet Singh, Stephen Tu, Vikas Sindhwani
- Reason: This paper’s emphasis on the practicality of energy-based models, together with empirical validation showing it outperforms other state-of-the-art approaches, makes it a noteworthy paper in the field of reinforcement learning.
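The core idea behind ranking noise contrastive estimation for policies can be illustrated with a minimal sketch: train an energy function so the observed action ranks above sampled noise actions under a softmax over negative energies. All names here (`ranking_nce_loss`, `toy_energy`) are illustrative assumptions, not the paper's actual implementation.

```python
import math

def ranking_nce_loss(energy, state, true_action, noise_actions):
    """Ranking-NCE-style loss for an energy-based policy: the observed
    action should receive lower energy than the sampled noise actions.
    Returns the negative log-probability of the true action under a
    softmax over negative energies (illustrative sketch only)."""
    candidates = [true_action] + list(noise_actions)
    neg_energies = [-energy(state, a) for a in candidates]
    # Log-sum-exp with max subtraction for numerical stability.
    m = max(neg_energies)
    log_z = m + math.log(sum(math.exp(e - m) for e in neg_energies))
    return -(neg_energies[0] - log_z)

# Toy quadratic energy: actions close to the state have low energy.
toy_energy = lambda s, a: (a - s) ** 2
loss_near = ranking_nce_loss(toy_energy, 0.0, 0.0, [1.0, -1.0])
loss_far = ranking_nce_loss(toy_energy, 0.0, 0.0, [3.0, -3.0])
```

As the noise actions become easier to distinguish from the true action (higher energy), the loss shrinks, which is the ranking behavior the estimator encourages.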
- 8.8 Risk-Aware Reinforcement Learning through Optimal Transport Theory
- Authors: Ali Baheri
- Reason: This work introduces a risk-aware framework for reinforcement learning. It pioneers the integration of Optimal Transport theory with RL to ensure reliable decision-making in uncertain environments.
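One way Optimal Transport machinery can enter a risk-aware RL objective is as a penalty on the transport distance between a policy's return distribution and a low-risk reference distribution. The sketch below uses the closed form for the empirical 1-Wasserstein distance in one dimension (mean absolute difference of sorted samples); the function names and the penalized objective are assumptions for illustration, not the paper's formulation.

```python
def wasserstein_1d(samples_a, samples_b):
    """Empirical 1-Wasserstein distance between two equal-size 1-D samples:
    mean absolute difference after sorting (the optimal coupling in 1-D)."""
    a, b = sorted(samples_a), sorted(samples_b)
    assert len(a) == len(b), "equal sample sizes assumed for simplicity"
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def risk_adjusted_objective(returns, reference, lam=0.5):
    """Expected return minus a transport-distance penalty toward a
    low-variance reference return distribution (illustrative only)."""
    mean_return = sum(returns) / len(returns)
    return mean_return - lam * wasserstein_1d(returns, reference)

# Toy example: spread-out returns penalized against a constant reference.
obj = risk_adjusted_objective([1.0, 2.0, 3.0], [2.0, 2.0, 2.0])
```

Raising `lam` trades expected return against distributional risk, which is the kind of reliable decision-making under uncertainty the entry describes.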
- 8.7 ACT: Empowering Decision Transformer with Dynamic Programming via Advantage Conditioning
- Authors: Chenxiao Gao, Chenyang Wu, Mingjun Cao, Rui Kong, Zongzhang Zhang, Yang Yu
- Reason: This paper demonstrates a potential improvement to the Decision Transformer by conditioning it on estimated advantages, and supports the method’s effectiveness through extensive benchmarking.
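The idea of advantage conditioning can be sketched minimally: instead of conditioning a sequence model on raw returns-to-go, estimate advantages A(s, a) = Q(s, a) − V(s) and condition action selection on the most advantageous candidate. The helper names below are hypothetical; this is a sketch of the general technique, not the paper's ACT architecture.

```python
def advantages(q_values, v_baseline):
    """Advantage of each action: how much better it is than the state value,
    A(s, a) = Q(s, a) - V(s)."""
    return [q - v_baseline for q in q_values]

def condition_token(q_values, v_baseline):
    """Pick the conditioning target greedily w.r.t. estimated advantage,
    returning the chosen action index and its advantage value."""
    adv = advantages(q_values, v_baseline)
    best = max(range(len(adv)), key=lambda a: adv[a])
    return best, adv[best]

# Toy example: three candidate actions with Q-estimates, baseline V(s) = 1.0.
action, a_val = condition_token([0.5, 2.0, 1.2], v_baseline=1.0)
```

In a full model, the selected advantage would be fed to the transformer as a conditioning token alongside states and actions, steering generation toward high-advantage behavior.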
- 8.5 Adaptive User-centered Neuro-symbolic Learning for Multimodal Interaction with Autonomous Systems
- Authors: Amr Gomaa, Michael Feld
- Reason: This paper offers a novel perspective on multimodal interaction with autonomous systems, emphasizing human learning techniques. Its proposals for advancing AI make it a potentially impactful work.
- 8.4 Toward Discretization-Consistent Closure Schemes for Large Eddy Simulation Using Reinforcement Learning
- Authors: Andrea Beck, Marius Kurz
- Reason: The authors propose an innovative reinforcement learning approach to discretization consistency in large eddy simulation. This paper could stimulate further research in this direction.
- 8.1 Interpretable learning of effective dynamics for multiscale systems
- Authors: Emmanuel Menier, Sebastian Kaltenbach, Mouadh Yagoubi, Marc Schoenauer, Petros Koumoutsakos
- Reason: This work adds interpretability to state-of-the-art recurrent neural network-based approaches for learning effective dynamics, advancing the understandability of AI models and positioning it for potential influence.
- 8.1 Verifiable Reinforcement Learning Systems via Compositionality
- Authors: Cyrus Neary, Aryaman Singh Samyal, Christos Verginis, Murat Cubuktepe, Ufuk Topcu
- Reason: The paper proposes a framework for verifiable and compositional reinforcement learning. The approach could be of significant interest to the community aiming towards more reliable RL models.