- 9.0 Self-Supervised Interpretable Sensorimotor Learning via Latent Functional Modularity
- Authors: Hyunki Seong, David Hyunchul Shim
- Reason: Combines self-supervision, modularity, and interpretability, giving it high potential in explainable AI for robotics. Accepted for an oral presentation at an AAAI 2024 workshop, indicating recognition by the AI community.
- 8.8 Robustness and Visual Explanation for Black Box Image, Video, and ECG Signal Classification with Reinforcement Learning
- Authors: Soumyendu Sarkar, Ashwin Ramesh Babu, Sajad Mousavi, Vineet Gundecha, Avisek Naug, Sahand Ghorbanpour
- Reason: Offers a comprehensive RL framework spanning image, video, and ECG signal classification, with implications for improving both robustness and interpretability, which are critical in fields such as healthcare. Referenced in the AAAI proceedings, highlighting its importance.
- 8.5 Human-compatible driving partners through data-regularized self-play reinforcement learning
- Authors: Daphne Cornelisse, Eugene Vinitsky
- Reason: Addresses the practical and highly relevant challenge of coordinating autonomous vehicles with human drivers by regularizing self-play agents toward human driving data, with promising results and potential impact on the future of transportation.
- 8.3 Inferring Latent Temporal Sparse Coordination Graph for Multi-Agent Reinforcement Learning
- Authors: Wei Duan, Jie Lu, Junyu Xuan
- Reason: Contributes to cooperative multi-agent reinforcement learning (MARL) and the emerging topic of temporal graph learning in multi-agent systems, reporting notable performance improvements on benchmark tasks.
- 8.1 Towards Human-Centered Construction Robotics: An RL-Driven Companion Robot For Contextually Assisting Carpentry Workers
- Authors: Yuning Wu, Jiaying Wei, Jean Oh, Daniel Cardoso Llach
- Reason: Brings an RL-driven companion robot to the construction industry, pointing to substantial impact on worker safety and efficiency and advancing human-robot collaboration.