- 8.9 Structured Reinforcement Learning for Delay-Optimal Data Transmission in Dense mmWave Networks
- Authors: Shufan Wang, Guojun Xiong, Shichen Zhang, Huacheng Zeng, Jian Li, Shivendra Panwar
- Reason: This paper proposes a computationally efficient structured reinforcement learning solution that demonstrates significant gains over existing approaches in realistic network simulations, making it highly relevant to practical telecommunications applications.
- 8.9 An Explainable Deep Reinforcement Learning Model for Warfarin Maintenance Dosing Using Policy Distillation and Action Forging
- Authors: Sadjad Anzabi Zadeh, W. Nick Street, Barrett W. Thomas
- Reason: This paper addresses the important issue of explainability in deep reinforcement learning for a critical healthcare task (warfarin maintenance dosing), and is likely to have substantial influence on medical AI.
- 8.7 Learning Control Barrier Functions and their application in Reinforcement Learning: A Survey
- Authors: Maeva Guerrier, Hassan Fouad, Giovanni Beltrame
- Reason: A comprehensive review of control barrier functions within safe reinforcement learning could significantly influence the practical application of RL in robotics by improving both safety and performance (see the safety-filter sketch after this list).
- 8.6 Making Better Use of Unlabelled Data in Bayesian Active Learning
- Authors: Freddie Bickford Smith, Adam Foster, Tom Rainforth
- Reason: The paper offers a novel semi-supervised framework for Bayesian active learning that exploits unlabelled data, which could significantly improve label efficiency and influence active learning methodology (see the acquisition sketch after this list).
- 8.5 Myopically Verifiable Probabilistic Certificates for Safe Control and Learning
- Authors: Zhuoyuan Wang, Haoming Jing, Christian Kurniawan, Albert Chern, Yorie Nakahira
- Reason: This paper introduces a novel technique for designing safety certificates in stochastic systems, which is crucial for advancing reinforcement learning in the context of safety-critical applications. However, there is text overlap with prior work, which may slightly affect its innovation score.
- 8.3 Closing the gap: Optimizing Guidance and Control Networks through Neural ODEs
- Authors: Sebastien Origer, Dario Izzo
- Reason: This work enhances the accuracy of Guidance & Control Networks (G&CNETs) using Neural ODEs, which can significantly improve the reliability of autonomous control policies for spacecraft.
- 8.3 Unleashing the Potential of Fractional Calculus in Graph Neural Networks with FROND
- Authors: Qiyu Kang, Kai Zhao, Qinxu Ding, Feng Ji, Xuhao Li, Wenfei Liang, Yang Song, Wee Peng Tay
- Reason: Proposes a fractional-calculus framework (FROND) that could enhance the performance of graph neural networks, with broad implications for representation learning across many machine learning applications.
- 8.1 IDIL: Imitation Learning of Intent-Driven Expert Behavior
- Authors: Sangwon Seo, Vaibhav Unhelkar
- Reason: Addresses the problem of capturing and imitating intent-driven expert behavior, which can improve the fidelity of learned policies in reinforcement learning, especially for human-agent interaction.
- 8.0 Cycling into the workshop: predictive maintenance for Barcelona’s bike-sharing system
- Authors: Jordi Grau-Escolano, Aleix Bassolas, Julian Vicens
- Reason: The paper addresses the practical problem of predictive maintenance using machine learning, with clear implications for urban mobility and for maintenance strategies in physical systems more broadly.
- 7.8 A Deep Dive into Effects of Structural Bias on CMA-ES Performance along Affine Trajectories
- Authors: Niki van Stein, Sarah L. Thomson, Anna V. Kononova
- Reason: This analysis of structural bias in evolution strategies (specifically CMA-ES) can improve the understanding and development of more robust optimization algorithms in machine learning, especially in reinforcement learning settings.
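To make the control-barrier-function entry above concrete, the following is a minimal sketch of a CBF safety filter wrapped around an RL action. It assumes an illustrative scalar single-integrator system x' = u, a barrier h(x) = x_max - x, and a linear class-K gain alpha; none of this is taken from the survey itself, which covers the general framework.

```python
import numpy as np

# Minimal CBF safety-filter sketch (illustrative assumptions: single-integrator
# dynamics x' = u, barrier h(x) = x_max - x, linear class-K function alpha * h).
# This is not the survey's formulation, only the basic mechanism it reviews.

def cbf_safety_filter(x, u_rl, x_max=1.0, alpha=2.0):
    """Return the action closest to u_rl that satisfies dh/dt >= -alpha * h(x).

    With h(x) = x_max - x and x' = u, the condition is -u >= -alpha * (x_max - x),
    i.e. u <= alpha * (x_max - x); for scalar control the QP reduces to clipping.
    """
    u_max = alpha * (x_max - x)   # largest action still satisfying the CBF condition
    return min(u_rl, u_max)       # project the RL action into the safe set


if __name__ == "__main__":
    x, dt = 0.0, 0.05
    rng = np.random.default_rng(0)
    for _ in range(200):
        u_rl = rng.normal(1.0, 0.5)   # stand-in for an RL policy pushing toward x_max
        x += dt * cbf_safety_filter(x, u_rl)
    print(f"final state x = {x:.3f} (remains below x_max = 1.0)")
```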
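Similarly, for the Bayesian active learning entry, the sketch below shows a generic BALD-style mutual-information acquisition over an unlabelled pool; it illustrates the standard setting the paper builds on, not its semi-supervised objective, and the array shapes and names are assumptions for illustration.

```python
import numpy as np

# Generic BALD-style acquisition for Bayesian active learning (the baseline
# setting, not the paper's semi-supervised method). `probs` holds class
# probabilities from S posterior samples (e.g. MC-dropout passes) for each
# unlabelled pool point: shape (S, N, C).

def bald_scores(probs, eps=1e-12):
    """Mutual information between the label and model parameters for each pool point."""
    mean_p = probs.mean(axis=0)                                        # (N, C) predictive distribution
    predictive_entropy = -(mean_p * np.log(mean_p + eps)).sum(-1)      # H[y | x]
    expected_entropy = -(probs * np.log(probs + eps)).sum(-1).mean(0)  # E_theta H[y | x, theta]
    return predictive_entropy - expected_entropy                       # disagreement across samples


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    logits = rng.normal(size=(16, 100, 3))   # 16 posterior samples, 100 pool points, 3 classes
    probs = np.exp(logits) / np.exp(logits).sum(-1, keepdims=True)
    scores = bald_scores(probs)
    query = np.argsort(scores)[-5:]          # label the 5 highest-information points next
    print("query indices:", query)
```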