- 9.5 Reinforcement Learning for Online Testing of Autonomous Driving Systems: a Replication and Extension Study
- Authors: Luca Giamattei, Matteo Biagiola, Roberto Pietrantuono, Stefano Russo, Paolo Tonella
- Reason: Replicates and extends a prior study showing how Reinforcement Learning (RL) can be used effectively for online testing of autonomous driving systems (ADS), a critical application area. The replication and the improvements on top of it could influence further research and applications.
- 9.2 Towards Principled Representation Learning from Videos for Reinforcement Learning
- Authors: Dipendra Misra, Akanksha Saran, Tengyang Xie, Alex Lamb, John Langford
- Reason: The paper introduces a theoretical understanding of representation learning from videos for RL tasks, which is a significant step forward for both the RL community and practical applications such as games and software testing.
- 8.9 Dynamic Reward Adjustment in Multi-Reward Reinforcement Learning for Counselor Reflection Generation
- Authors: Do June Min, Veronica Perez-Rosas, Kenneth Resnicow, Rada Mihalcea
- Reason: The study introduces novel bandit methods (DynaOpt and C-DynaOpt) for optimizing multiple text qualities in natural language generation, a critical area for improving language models and applications like automated counseling.
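  The core idea of treating reward selection as a bandit problem can be illustrated with a plain epsilon-greedy bandit; the reward names and update rule below are illustrative stand-ins, not the paper's DynaOpt or C-DynaOpt algorithms.

  ```python
  # Hedged sketch: an epsilon-greedy bandit picks which of several text-quality
  # rewards to optimise at each training step. The quality names and payoff
  # numbers are hypothetical, chosen only to make the sketch runnable.
  import random

  rewards = ["fluency", "coherence", "reflection"]   # hypothetical text qualities
  counts = {r: 0 for r in rewards}
  values = {r: 0.0 for r in rewards}                 # running mean payoff per arm

  def choose(eps=0.1):
      if random.random() < eps:                      # explore a random reward
          return random.choice(rewards)
      return max(rewards, key=lambda r: values[r])   # exploit best-performing one

  def update(arm, gain):
      counts[arm] += 1
      values[arm] += (gain - values[arm]) / counts[arm]  # incremental mean

  random.seed(0)
  for _ in range(200):
      arm = choose()
      # stand-in for the measured improvement after optimising that reward
      gain = {"fluency": 0.2, "coherence": 0.5, "reflection": 0.8}[arm] + random.gauss(0, 0.1)
      update(arm, gain)
  ```

  Over enough steps the bandit concentrates on whichever reward yields the most improvement, which is the dynamic-adjustment behaviour the paper targets.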
- 8.8 Fast Value Tracking for Deep Reinforcement Learning
- Authors: Frank Shih, Faming Liang
- Reason: Introduces a novel RL algorithm grounded in stochastic gradient MCMC (SGMCMC) and Kalman filtering, with convergence proofs. The authors' expertise and the algorithm's robustness and adaptability make it potentially influential.
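  The filtering idea behind value tracking can be shown with a minimal scalar Kalman update that blends a drifting value estimate with noisy observed returns; this is a generic textbook sketch, not the paper's SGMCMC-based algorithm.

  ```python
  # Hedged sketch: scalar Kalman-filter tracking of a value estimate.
  # `process_var` models drift in the true value; `obs_var` is observation noise.
  def kalman_step(mean, var, obs, obs_var, process_var=1e-2):
      var = var + process_var            # predict: the value may have drifted
      gain = var / (var + obs_var)       # weight new evidence by relative precision
      mean = mean + gain * (obs - mean)  # update: move toward the observed return
      var = (1.0 - gain) * var           # posterior uncertainty shrinks
      return mean, var

  mean, var = 0.0, 1.0                   # vague prior over the value
  for obs in [1.0, 1.2, 0.9, 1.1]:       # noisy sampled returns
      mean, var = kalman_step(mean, var, obs, obs_var=0.25)
  ```

  After a few observations the estimate settles near the observations' mean while the variance quantifies remaining uncertainty, which is what makes such trackers adaptive in non-stationary RL settings.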
- 8.5 Hierarchical Gaussian Mixture Normalizing Flow Modeling for Unified Anomaly Detection
- Authors: Xincheng Yao, Ruoqi Li, Zefeng Qian, Lu Wang, Chongyang Zhang
- Reason: Proposes an innovative approach to anomaly detection, addressing a critical issue in previous normalizing-flow (NF)-based methods. Its applications could be influential in the machine learning community.
- 8.4 Uncertainty-Aware Explanations Through Probabilistic Self-Explainable Neural Networks
- Authors: Jon Vadillo, Roberto Santana, Jose A. Lozano, Marta Kwiatkowska
- Reason: This research addresses the important aspect of explanation uncertainty in self-explainable neural networks, enhancing model transparency and reliability for high-stakes applications.
- 8.3 Federated reinforcement learning for robot motion planning with zero-shot generalization
- Authors: Zhenyuan Yuan, Siyuan Xu, Minghui Zhu
- Reason: This paper tackles the challenging problem of zero-shot generalization in robot planning with a federated learning framework. The potential impact on robotics and reinforcement learning is significant, with theoretical guarantees provided.
- 8.2 Weisfeiler and Leman Go Loopy: A New Hierarchy for Graph Representational Learning
- Authors: Raffaele Paolino, Sohir Maskey, Pascal Welke, Gitta Kutyniok
- Reason: Proposes a hierarchical extension to the Weisfeiler-Leman (WL) graph isomorphism test, which could substantially influence the expressiveness and understanding of graph neural networks (GNNs) in representation learning.
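  To see what the hierarchy extends, here is a sketch of classic 1-WL colour refinement, including a pair of graphs it famously cannot distinguish; the helper names and example graphs are illustrative, not the authors' code.

  ```python
  # Hedged sketch of the 1-dimensional Weisfeiler-Leman (colour refinement) test.
  # Each round, a node's colour is recomputed from its old colour plus the
  # multiset of its neighbours' colours.
  from collections import Counter

  def wl_refine(adj, rounds=3):
      """adj: dict node -> list of neighbours. Returns the final colour histogram."""
      colours = {v: 0 for v in adj}              # start with a uniform colouring
      for _ in range(rounds):
          signatures = {v: (colours[v], tuple(sorted(colours[u] for u in adj[v])))
                        for v in adj}
          palette = {sig: i for i, sig in enumerate(sorted(set(signatures.values())))}
          colours = {v: palette[signatures[v]] for v in adj}  # compress to ints
      return Counter(colours.values())

  # A 6-cycle and two disjoint triangles are both 2-regular, so 1-WL gives
  # them identical colour histograms even though they are non-isomorphic:
  cycle6 = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
  two_triangles = {0: [1, 2], 1: [0, 2], 2: [0, 1],
                   3: [4, 5], 4: [3, 5], 5: [3, 4]}
  print(wl_refine(cycle6) == wl_refine(two_triangles))  # True: 1-WL fails here
  ```

  Failures like this, which hinge on counting cycles, are exactly the kind of limitation a loop-aware extension of the WL hierarchy aims to overcome.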
- 8.1 Robust NAS under adversarial training: benchmark, theory, and beyond
- Authors: Yongtao Wu, Fanghui Liu, Carl-Johann Simon-Gabriel, Grigorios G Chrysos, Volkan Cevher
- Reason: Addresses the lack of benchmarks and theoretical foundations in robust NAS, offering a dataset and new insights. The contributions could have a lasting impact on the field of NAS, especially concerning adversarial robustness.
- 7.9 Byzantine-resilient Federated Learning With Adaptivity to Data Heterogeneity
- Authors: Shiyuan Zuo, Xingrun Yan, Rongfei Fan, Han Hu, Hangguan Shan, Tony Q. S. Quek
- Reason: Deals with two critical challenges in FL, Byzantine attacks and data heterogeneity, and offers a robust algorithm with convergence analysis under both strongly convex and non-convex loss functions. This work could influence future secure FL practices.