- 8.9 Advancing Investment Frontiers: Industry-grade Deep Reinforcement Learning for Portfolio Optimization
- Authors: Philip Ndikum, Serge Ndikum
- Reason: The paper introduces an industry-grade framework and a proprietary RL agent with scalable applications in quantitative finance, strengthening its influence by bridging theoretical advances with real-world deployment.
- 8.7 Corruption-Robust Offline Two-Player Zero-Sum Markov Games
- Authors: Andi Nika, Debmalya Mandal, Adish Singla, Goran Radanović
- Reason: This paper tackles the practical and challenging problem of learning in the presence of data corruption, presenting robust algorithms and performance guarantees, which is critical for trustworthy reinforcement learning applications.
- 8.5 The Fusion of Deep Reinforcement Learning and Edge Computing for Real-time Monitoring and Control Optimization in IoT Environments
- Authors: Jingyu Xu, Weixiang Wan, Linying Pan, Wenjian Sun, Yuxiang Liu
- Reason: The paper proposes a novel approach that combines DRL with edge computing for IoT, which could yield substantial real-time performance improvements and cost savings, giving it significant potential to influence the field.
- 8.3 Do Agents Dream of Electric Sheep?: Improving Generalization in Reinforcement Learning through Generative Learning
- Authors: Giorgio Franceschelli, Mirco Musolesi
- Reason: The paper presents a creative, novel approach to improving generalization in reinforcement learning agents, inspired by the Overfitted Brain hypothesis and built on generative learning techniques.
- 8.1 $\widetilde{O}(T^{-1})$ Convergence to (Coarse) Correlated Equilibria in Full-Information General-Sum Markov Games
- Authors: Weichao Mao, Haoran Qiu, Chen Wang, Hubertus Franke, Zbigniew Kalbarczyk, Tamer Başar
- Reason: The work contributes to the theoretical understanding of no-regret learning in Markov games, a foundational aspect of multi-agent reinforcement learning, which is important for advancing the capabilities of RL systems.