- 9.5 Partially Observable Multi-agent RL with (Quasi-)Efficiency: The Blessing of Information Sharing
- Authors: Xiangyu Liu, Kaiqing Zhang
- This paper presents work on multi-agent reinforcement learning (MARL) in partially observable stochastic games (POSGs), leveraging information sharing among agents to obtain (quasi-)efficient algorithms. Accepted at ICML 2023, it is likely to be influential given its potential impact on multi-agent control systems, providing a pathway to efficient problem-solving in POSGs.
- 9.4 RLIPv2: Fast Scaling of Relational Language-Image Pre-training
- Authors: Hangjie Yuan, Shiwei Zhang, Xiang Wang, Samuel Albanie, Yining Pan, Tao Feng, Jianwen Jiang, Dong Ni, Yingya Zhang, Deli Zhao
- Importance Reason: The authors address the challenging task of large-scale relational language-image pre-training. The team proposes a new model, RLIPv2, which introduces Asymmetric Language-Image Fusion to enable fast scaling. The paper demonstrates that RLIPv2 achieves state-of-the-art performance on multiple benchmarks, and the work is significant for addressing speed and scale issues in relational pre-training.
- 9.3 Model-Free Algorithm with Improved Sample Efficiency for Zero-Sum Markov Games
- Authors: Songtao Feng, Ming Yin, Yu-Xiang Wang, Jing Yang, Yingbin Liang
- The work proposes a model-free, stage-based Q-learning algorithm with improved sample complexity for two-player zero-sum Markov games, introducing novel techniques along the way; a generic minimax-style value update of the kind such algorithms build on is sketched below. The findings could have a significant impact on the field of multi-agent reinforcement learning.
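As a hedged illustration only (not the paper's algorithm), the sketch below shows the classic Minimax-Q-style ingredients such methods build on: a joint-action value table updated model-free, with the next-state value taken as the Nash value of the stage game defined by that table, computed by a small linear program.

```python
import numpy as np
from scipy.optimize import linprog

def matrix_game_value(A):
    """Value of the zero-sum matrix game max_x min_y x^T A y, via a linear program."""
    m, n = A.shape
    # Decision variables: mixed strategy x (m entries) and the game value v.
    c = np.zeros(m + 1)
    c[-1] = -1.0                                             # maximize v  <=>  minimize -v
    A_ub = np.hstack([-A.T, np.ones((n, 1))])                # v - (A^T x)_j <= 0 for every column j
    b_ub = np.zeros(n)
    A_eq = np.hstack([np.ones((1, m)), np.zeros((1, 1))])    # x lies on the probability simplex
    b_eq = np.array([1.0])
    bounds = [(0, None)] * m + [(None, None)]                # x >= 0, v free
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[-1]

def minimax_q_update(Q, s, a, b, r, s_next, alpha=0.1, gamma=0.99):
    """One model-free update of the joint-action value Q[s, a, b]; the next-state
    value is the Nash value of the stage game given by the matrix Q[s_next]."""
    v_next = matrix_game_value(Q[s_next])
    Q[s, a, b] += alpha * (r + gamma * v_next - Q[s, a, b])
```

The paper's stage-based scheme refines when and how such updates occur to obtain its sample-complexity guarantee; this toy update omits those refinements.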
- 9.2 Learning Reward Machines through Preference Queries over Sequences
- Authors: Eric Hsiung, Joydeep Biswas, Swarat Chaudhuri
- Importance Reason: The paper addresses the learning of reward machines with weak feedback in the form of preferences, a vital task in the development of complex AI systems. REMAP, the algorithm proposed by the authors, promises to offer correctness and termination guarantees, which can significantly influence the field.
- 9.1 Federated Reinforcement Learning for Electric Vehicles Charging Control on Distribution Networks
- Authors: Junkai Qian, Yuning Jiang, Xin Liu, Qing Wang, Ting Wang, Yuanming Shi, Wei Chen
- This paper introduces a new federated reinforcement learning approach to electric vehicle charging control, contributing to power grid stability on distribution networks; a minimal federated-averaging sketch is given below. Due to its topical relevance and practical implications, the paper is likely to influence research on energy efficiency, environmental impact, and artificial intelligence.
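As a rough illustration of the federated setting (not the paper's specific formulation; state, action, and reward design are placeholders), the sketch combines local tabular Q-learning at each charging station with FedAvg-style averaging of the Q-tables on a server, so raw charging data never leaves the stations.

```python
import numpy as np

def local_q_learning(Q, transitions, alpha=0.1, gamma=0.95):
    """A few tabular Q-learning updates at one charging station.
    `transitions` is a list of (state, action, reward, next_state) tuples."""
    Q = Q.copy()
    for s, a, r, s_next in transitions:
        td_target = r + gamma * Q[s_next].max()
        Q[s, a] += alpha * (td_target - Q[s, a])
    return Q

def federated_round(global_Q, station_batches):
    """One FedAvg-style round: every station trains on its own local data, and
    the server averages the resulting Q-tables instead of collecting raw data."""
    local_tables = [local_q_learning(global_Q, batch) for batch in station_batches]
    return np.mean(local_tables, axis=0)
```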
- 9.0 Reinforcement Learning for Battery Management in Dairy Farming
- Authors: Nawazish Ali, Abdul Wahid, Rachael Shaw, Karl Mason
- This research applies Q-learning to manage battery charging and discharging in a dairy farm setting; a toy tabular sketch follows below. Given the growing influence of artificial intelligence in the agriculture sector and the novel application of reinforcement learning to energy efficiency in this domain, the work has the potential to become influential.
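To make the Q-learning angle concrete, here is a toy tabular sketch under an assumed discretization (hour of day and battery state of charge as the state, charge/idle/discharge as actions); the paper's actual state, action, and reward design will differ.

```python
import numpy as np

N_HOURS, N_SOC_BINS, N_ACTIONS = 24, 11, 3     # actions: 0=discharge, 1=idle, 2=charge
Q = np.zeros((N_HOURS, N_SOC_BINS, N_ACTIONS))

def q_step(Q, hour, soc, action, reward, next_hour, next_soc,
           alpha=0.1, gamma=0.95, epsilon=0.1):
    """Epsilon-greedy tabular Q-learning update for the battery controller."""
    best_next = Q[next_hour, next_soc].max()
    Q[hour, soc, action] += alpha * (reward + gamma * best_next - Q[hour, soc, action])
    # Choose the next action: explore with probability epsilon, otherwise exploit.
    if np.random.rand() < epsilon:
        return np.random.randint(N_ACTIONS)
    return int(Q[next_hour, next_soc].argmax())
```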
- 8.9 Data augmentation and explainability for bias discovery and mitigation in deep learning
- Authors: Agnieszka Mikołajczyk-Bareła
- Importance Reason: The paper discusses the critical issue of bias in deep neural networks and presents methods for reducing bias influence. The research introduces novel techniques like Style Transfer Data Augmentation, Targeted Data Augmentations, and Attribution Feedback. This work aids in the ongoing struggle to reduce bias in machine learning.
- 8.7 IMM: An Imitative Reinforcement Learning Approach with Predictive Representation Learning for Automatic Market Making
- Authors: Hui Niu, Siyuan Li, Jiahao Zheng, Zhouchi Lin, Jian Li, Jian Guo, Bo An
- This work introduces IMM, a novel RL framework for developing multi-price-level market-making strategies. While market making has seen substantial research, integrating RL with imitation learning can make learning more efficient, which makes this paper potentially influential in the field of financial trading; a generic sketch of such a combined objective is given below.
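The paper's architecture and losses are not reproduced here; the snippet below only sketches the generic idea of an imitative RL objective, i.e. a policy-gradient term plus a behavior-cloning term toward expert actions, with all names and weights hypothetical.

```python
import numpy as np

def imitative_rl_loss(policy_logits, taken_action, advantage, expert_action, bc_weight=0.5):
    """Combined objective: REINFORCE-style term for the action actually taken,
    plus a behavior-cloning term that imitates the expert action."""
    probs = np.exp(policy_logits - policy_logits.max())
    probs /= probs.sum()
    pg_loss = -np.log(probs[taken_action]) * advantage   # reinforcement learning term
    bc_loss = -np.log(probs[expert_action])              # imitation learning term
    return pg_loss + bc_weight * bc_loss
```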
- 8.7 Active and Passive Causal Inference Learning
- Authors: Daniel Jiwoong Im, Kyunghyun Cho
- Importance Reason: This work makes a substantial contribution to the challenging subject of causal inference. By introducing a detailed set of assumptions and methods for causal identification and inference, the authors offer a robust foundation for further research in causal inference and discovery.
- 8.5 TinyProp – Adaptive Sparse Backpropagation for Efficient TinyML On-device Learning
- Authors: Marcus Rüb, Daniel Maier, Daniel Mueller-Gritschneder, Axel Sikora
- Importance Reason: This paper addresses the crucial field of on-device machine learning. The authors propose TinyProp, the first sparse backpropagation method that dynamically adapts the back-propagation ratio during on-device training for each step. This research has considerable potential significance in the field of embedded machine learning applications.
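To illustrate what sparse backpropagation with a per-step ratio means in practice, here is a rough numpy sketch for a single dense layer: only the output rows with the largest error magnitudes are backpropagated, and the keep-ratio is passed in from outside (TinyProp's contribution is how that ratio is adapted at each step, which is not reproduced here).

```python
import numpy as np

def sparse_backprop_step(W, x, grad_out, keep_ratio):
    """Backpropagate through one dense layer (y = W @ x), updating only the
    output rows whose error magnitude falls in the top `keep_ratio` fraction."""
    k = max(1, int(keep_ratio * grad_out.size))
    top = np.argsort(np.abs(grad_out))[-k:]        # indices of the largest error signals
    grad_W = np.zeros_like(W)
    grad_W[top] = np.outer(grad_out[top], x)       # sparse weight gradient
    grad_in = W[top].T @ grad_out[top]             # gradient propagated to the layer below
    return grad_W, grad_in
```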