- 9.5 Seeing-Eye Quadruped Navigation with Force Responsive Locomotion Control
- Authors: David DeFazio, Eisuke Hirota, Shiqi Zhang
- Reason: This paper applies Reinforcement Learning (RL) to improve the functionality of seeing-eye robots, a highly impactful real-world application. The authors are experienced in this field and presented their work at CoRL 2023.
- 9.3 Sample-Efficient Co-Design of Robotic Agents Using Multi-fidelity Training on Universal Policy Network
- Authors: Kishan R. Nagiredla, Buddhika L. Semage, Thommen G. Karimpanal, Arun Kumar A. V, Santu Rana
- Reason: The paper introduces a novel approach to co-design optimization that could transform reinforcement learning tasks in many areas beyond the authors’ focus on robotics.
- 9.3 Subwords as Skills: Tokenization for Sparse-Reward Reinforcement Learning
- Authors: David Yunis, Justin Jung, Falcon Dai, Matthew Walter
- Reason: The paper proposes a novel skill-generation approach to improve exploration in sparse-reward reinforcement learning, a significant challenge in the RL field, making it potentially influential.
- 9.2 Emergent learning in physical systems as feedback-based aging in a glassy landscape
- Authors: Vidyesh Rao Anisetti, Ananth Kandala, J. M. Schwarz
- Reason: The paper’s idea of discerning the physical properties of linear physical networks using reinforcement learning offers a new approach to understanding the learning behaviour of such systems, suggesting broad applicability.
- 9.1 Learning Zero-Sum Linear Quadratic Games with Improved Sample Complexity
- Authors: Jiduan Wu, Anas Barakat, Ilyas Fatkhullin, Niao He
- Reason: The paper achieves notable improvements in sample complexity for zero-sum linear quadratic (LQ) games, which could be impactful for reinforcement learning research.
- 9.1 Robust Representation Learning for Privacy-Preserving Machine Learning: A Multi-Objective Autoencoder Approach
- Authors: Sofiane Ouaari, Ali Burak Ünal, Mete Akgün, Nico Pfeifer
- Reason: The paper offers an interesting perspective on the privacy-utility trade-off in machine learning, a prominent issue in modern data-reliant industries. Though not directly related to reinforcement learning, it could indirectly influence the field by providing a methodology for secure data usage.
- 8.9 Actor critic learning algorithms for mean-field control with moment neural networks
- Authors: Huyên Pham, Xavier Warin
- Reason: The presented algorithm could be influential in reinforcement learning by providing a new approach to solving mean-field control problems.
- 8.9 Parallel and Limited Data Voice Conversion Using Stochastic Variational Deep Kernel Learning
- Authors: Mohamadreza Jafaryani, Hamid Sheikhzadeh, Vahid Pourahmadi
- Reason: The paper demonstrates a novel voice-conversion method that performs well with limited data, a persistent challenge in machine learning and deep learning applications. Although not directly related to reinforcement learning, the approach may influence how data limitations are handled in the RL domain.
- 8.7 Online Submodular Maximization via Online Convex Optimization
- Authors: T. Si-Salem, G. Özcan, I. Nikolaou, E. Terzi, S. Ioannidis
- Reason: The paper reduces online submodular maximization to online convex optimization, which could impact submodular function optimization problems across machine learning.
- 8.5 Mobile V-MoEs: Scaling Down Vision Transformers via Sparse Mixture-of-Experts
- Authors: Erik Daxberger, Floris Weers, Bowen Zhang, Tom Gunter, Ruoming Pang, Marcin Eichner, Michael Emmersberger, Yinfei Yang, Alexander Toshev, Xianzhi Du
- Reason: The authors introduce a way to scale down Vision Transformers for mobile use without sacrificing effectiveness, which could benefit resource-constrained applications.