- 9.3 Episodic Reinforcement Learning with Expanded State-reward Space
- Authors: Dayang Liang, Yaru Zhang, Yunlong Liu
- Reason: The authors address a critical issue in the data efficiency of deep RL, proposing a new episodic-control (EC) based framework that integrates historical information to improve value estimation and policy performance, with potentially substantial impact on the reinforcement learning field.
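For context, episodic control generally stores past state embeddings with their observed returns and estimates values non-parametrically from nearest neighbours. The sketch below illustrates only that generic idea, not the paper's expanded state-reward design; the class name, capacity, and `k` are assumptions for illustration.

```python
import numpy as np

class EpisodicMemory:
    """Minimal k-nearest-neighbour episodic memory (generic EC idea,
    not the paper's expanded state-reward formulation)."""

    def __init__(self, capacity=10000, k=5):
        self.capacity = capacity
        self.k = k
        self.keys = []     # stored state embeddings
        self.returns = []  # observed episodic returns

    def write(self, embedding, episodic_return):
        # Store (embedding, return); drop the oldest entry when full.
        if len(self.keys) >= self.capacity:
            self.keys.pop(0)
            self.returns.pop(0)
        self.keys.append(np.asarray(embedding, dtype=np.float32))
        self.returns.append(float(episodic_return))

    def estimate(self, embedding):
        # Non-parametric value estimate: mean return of the k closest states.
        if not self.keys:
            return 0.0
        dists = np.linalg.norm(np.stack(self.keys) - embedding, axis=1)
        nearest = np.argsort(dists)[: self.k]
        return float(np.mean(np.asarray(self.returns)[nearest]))
```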
- 9.2 Safe Offline Reinforcement Learning with Feasibility-Guided Diffusion Model
- Authors: Yinan Zheng, Jianxiong Li, Dongjie Yu, Yujie Yang, Shengbo Eben Li, Xianyuan Zhan, Jingjing Liu
- Reason: The paper introduces a novel approach for satisfying hard safety constraints in offline RL, critical for safety-critical applications, and presents the FISOR framework, which is supported by reachability analysis from safe-control theory, indicating high potential influence in both theory and practice.
- 9.0 Deep Reinforcement Learning Empowered Activity-Aware Dynamic Health Monitoring Systems
- Authors: Ziqiang Ye, Yulan Gao, Yue Xiao, Zehui Xiong, Dusit Niyato
- Reason: The paper proposes a novel DRL-based health monitoring system that dynamically adapts to user activity, which could significantly influence smart healthcare by optimizing performance and resource efficiency.
- 8.9 Cooperative Multi-Agent Graph Bandits: UCB Algorithm and Regret Analysis
- Authors: Phevos Paschalidis, Runyu Zhang, Na Li
- Reason: The paper addresses a novel multi-agent environment with a focus on cooperative strategies and UCB-based learning algorithms, presenting a significant theoretical contribution with regret analysis, which is central to progress in reinforcement learning. The topic is at the forefront of RL research and the authors provide credible expertise.
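As background for readers unfamiliar with UCB-style index rules, the following is a standard single-agent UCB1 sketch; the paper's cooperative multi-agent graph setting uses its own index and regret analysis, and `pull_arm`, `horizon`, and `c` here are illustrative assumptions.

```python
import math

def ucb1(pull_arm, n_arms, horizon, c=2.0):
    """Single-agent UCB1 (illustrative only; the paper studies a
    cooperative multi-agent graph-bandit variant)."""
    counts = [0] * n_arms
    means = [0.0] * n_arms

    for t in range(1, horizon + 1):
        if t <= n_arms:                      # pull each arm once to initialise
            arm = t - 1
        else:                                # pick the arm with the highest UCB index
            arm = max(
                range(n_arms),
                key=lambda a: means[a] + math.sqrt(c * math.log(t) / counts[a]),
            )
        reward = pull_arm(arm)
        counts[arm] += 1
        means[arm] += (reward - means[arm]) / counts[arm]  # running mean update
    return means
```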
- 8.8 Catastrophic Interference is Mitigated in Naturalistic Power-Law Learning Environments
- Authors: Atith Gandhi, Raj Sanjay Shah, Vijay Marupudi, Sashank Varma
- Reason: This work investigates a rehearsal-based approach in power-law learning environments for mitigating catastrophic interference, a critical issue in neural networks, showing promise for improvements in continual learning.
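To make the rehearsal idea concrete, one simple way to bias rehearsal toward a power-law recency profile is to weight stored examples by their age raised to a negative exponent. This is a rough illustration under assumed names (`buffer`, `alpha`), not the paper's exact schedule.

```python
import numpy as np

def power_law_rehearsal_batch(buffer, batch_size=32, alpha=1.0):
    """Sample rehearsal items with power-law recency weighting
    (generic illustration, not the paper's exact method).

    buffer: list of stored examples, oldest first.
    alpha:  power-law exponent; larger values favour recent items more.
    """
    n = len(buffer)
    ages = np.arange(n, 0, -1)                     # most recent item has age 1
    weights = ages.astype(float) ** -alpha         # power-law decay with age
    probs = weights / weights.sum()
    idx = np.random.choice(n, size=min(batch_size, n), replace=False, p=probs)
    return [buffer[i] for i in idx]
```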
- 8.7 Hierarchical Federated Learning in Multi-hop Cluster-Based VANETs
- Authors: M. Saeid HaghighiFard, Sinem Coleri
- Reason: The research introduces an innovative hierarchical federated learning framework for VANETs, a problem of growing importance due to the emergence of smart vehicles and IoT. The approach is particularly relevant for real-world applications, and the authors have authority in this specific domain.
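Hierarchical federated learning typically aggregates in two tiers: clients average within a cluster (e.g., at a cluster head), and cluster models are then averaged globally. The sketch below shows only this generic two-tier FedAvg structure; the paper's multi-hop VANET protocol adds clustering and communication details, and the function names and data layout are assumptions.

```python
import numpy as np

def weighted_average(models, sizes):
    """FedAvg-style weighted average of parameter vectors."""
    sizes = np.asarray(sizes, dtype=float)
    weights = sizes / sizes.sum()
    return sum(w * m for w, m in zip(weights, models))

def hierarchical_round(clusters):
    """Two-tier aggregation: vehicles -> cluster head -> global model.
    `clusters` maps a cluster id to a list of (model_vector, n_samples)."""
    cluster_models, cluster_sizes = [], []
    for members in clusters.values():
        models = [m for m, _ in members]
        sizes = [n for _, n in members]
        cluster_models.append(weighted_average(models, sizes))   # intra-cluster step
        cluster_sizes.append(sum(sizes))
    return weighted_average(cluster_models, cluster_sizes)       # inter-cluster step
```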
- 8.6 Contrastive Unlearning: A Contrastive Approach to Machine Unlearning
- Authors: Hong kyu Lee, Qiuchen Zhang, Carl Yang, Jian Lou, Li Xiong
- Reason: The authors introduce a potentially influential new framework for machine unlearning leveraging contrastive learning, which could be significant for privacy and compliance in machine learning applications.
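A contrastive approach to unlearning can, in general, push embeddings of to-be-forgotten samples away from retained samples of the same class. The loss below is a rough sketch of that generic idea under assumed tensor names and a hypothetical temperature; it is not the authors' exact objective.

```python
import torch
import torch.nn.functional as F

def unlearning_push_away_loss(forget_emb, forget_labels,
                              retain_emb, retain_labels, temperature=0.5):
    """Illustrative contrastive-style unlearning term: minimising it lowers
    the softmax probability that a to-be-forgotten sample lies near retained
    samples of its own class, pushing it away from that class."""
    forget_emb = F.normalize(forget_emb, dim=1)
    retain_emb = F.normalize(retain_emb, dim=1)

    sim = forget_emb @ retain_emb.T / temperature                  # (F, R) similarities
    same_class = (forget_labels[:, None] == retain_labels[None, :]).float()

    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)     # log-softmax over retained set
    per_sample = (same_class * log_prob).sum(1) / same_class.sum(1).clamp(min=1)
    return per_sample.mean()
```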
- 8.5 Learning Non-myopic Power Allocation in Constrained Scenarios
- Authors: Arindam Chowdhury, Santiago Paternain, Gunjan Verma, Ananthram Swami, Santiago Segarra
- Reason: This paper proposes a practical learning-based framework for power allocation in wireless networks, employing reinforcement learning techniques suitable for constrained decision-making. Its relevance to real-world scenarios and its presentation at a well-regarded conference (ASILOMAR) add to the paper's potential impact.
- 8.2 Hacking Predictors Means Hacking Cars: Using Sensitivity Analysis to Identify Trajectory Prediction Vulnerabilities for Autonomous Driving Security
- Authors: Marsalis Gibson, David Babazadeh, Claire Tomlin, Shankar Sastry
- Reason: The paper directly contributes to the security aspect of reinforcement learning applications in autonomous driving, a critical area for the deployment of these systems in the real world. The authors have a strong background in the intersection of cybersecurity and machine learning.
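Sensitivity analysis in this setting broadly asks how strongly small input perturbations shift a trajectory predictor's output. A minimal gradient-based sketch of that general idea is shown below; the callable, shapes, and aggregation are assumptions, and the paper's analysis targets attack identification more specifically.

```python
import torch

def input_sensitivity(predictor, trajectory):
    """Generic gradient-based input-sensitivity sketch (illustrative only).

    predictor:  callable mapping a (T_obs, D) history to a (T_pred, D) forecast
    trajectory: tensor of shape (T_obs, D)
    Returns per-coordinate sensitivity magnitudes of shape (T_obs, D).
    """
    traj = trajectory.detach().clone().requires_grad_(True)
    pred = predictor(traj)
    # Aggregate the forecast into a scalar so one backward pass yields
    # a full input-sensitivity map.
    pred.norm().backward()
    return traj.grad.abs()
```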
- 8.0 Noise Contrastive Estimation-based Matching Framework for Low-resource Security Attack Pattern Recognition
- Authors: Tu Nguyen, Nedim Srndic, Alexander Neth
- Reason: This paper approaches the niche yet vital problem of security attack pattern recognition using a learning paradigm pertinent to reinforcement learning. The methodological novelty and the authors’ experience in the cybersecurity domain render the paper potentially influential.
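Noise contrastive estimation for matching, in its generic form, scores a true (text, attack-pattern) pair against sampled negative patterns and trains the scorer to separate them. The sketch below shows that generic binary-NCE-style objective under assumed embedding shapes and a hypothetical scale factor; it is not the paper's exact loss.

```python
import torch
import torch.nn.functional as F

def nce_matching_loss(query_emb, pos_emb, neg_embs, scale=10.0):
    """Generic noise-contrastive matching loss sketch (illustrative only).

    query_emb: (D,)  embedding of the input text
    pos_emb:   (D,)  embedding of the correct attack pattern
    neg_embs:  (K, D) embeddings of K sampled negative patterns
    """
    pos_score = scale * F.cosine_similarity(query_emb, pos_emb, dim=0)
    neg_scores = scale * F.cosine_similarity(
        query_emb.unsqueeze(0).expand_as(neg_embs), neg_embs, dim=1)

    scores = torch.cat([pos_score.unsqueeze(0), neg_scores])       # (1 + K,) logits
    targets = torch.zeros_like(scores)
    targets[0] = 1.0                                               # only the true pair is positive
    return F.binary_cross_entropy_with_logits(scores, targets)
```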