- 9.7 Hybrid Control Policy for Artificial Pancreas via Ensemble Deep Reinforcement Learning
- Authors: Wenzhou Lv, Tianyu Wu, Luolin Xiong, Liang Wu, Jian Zhou, Yang Tang, Feng Qi
- Reason: The paper presents an innovative application of DRL in medical science. Its real-world impact could be enormous, potentially improving the lives of millions of people with type 1 diabetes. Using DRL to provide personalized, adaptive control strategies marks a significant advance in applying machine learning to healthcare.
- 9.5 Sequential Experimental Design for X-Ray CT Using Deep Reinforcement Learning
- Authors: Tianyuan Wang, Felix Lucka, Tristan van Leeuwen
- Reason: The paper uses DRL to optimize X-ray CT scanning, offering a potential solution for improving in-line quality control. Applying DRL to real-world technologies such as CT scanning has strong potential impact, making this a compelling use case for the technique.
- 9.3 On the Effective Horizon of Inverse Reinforcement Learning
- Authors: Yiqing Xu, Finale Doshi-Velez, David Hsu
- Reason: The paper provides new insights into inverse reinforcement learning, showing how the effective time horizon can balance model complexity against the risk of overfitting. As fundamental research in this area, it is important for improving future methodologies.
- 9.1 Prescriptive Process Monitoring Under Resource Constraints: A Reinforcement Learning Approach
- Authors: Mahmoud Shoush, Marlon Dumas
- Reason: This work applies reinforcement learning to the field of business process management, addressing the real-world problem of resource constraints. The paper is significant as it provides practical solutions for businesses trying to optimize their operations under constraints.
- 9.0 The Complexity of Non-Stationary Reinforcement Learning
- Authors: Christos Papadimitriou, Binghui Peng
- Reason: This theoretical paper analyzes the challenges of non-stationary reinforcement learning. Its contribution to understanding the limitations and requirements of such learning settings is highly valuable to the research community.