  1. 9.2 The Illusion of State in State-Space Models
  2. 9.1 Knowledgeable Agents by Offline Reinforcement Learning from Large Language Model Rollouts
  3. 9.0 WROOM: An Autonomous Driving Approach for Off-Road Navigation
  4. 9.0 SNN4Agents: A Framework for Developing Energy-Efficient Embodied Spiking Neural Networks for Autonomous Agents
  5. 8.9 Handling Reward Misspecification in the Presence of Expectation Mismatch
  6. 8.9 Higher Replay Ratio Empowers Sample-Efficient Multi-Agent Reinforcement Learning
  7. 8.8 Deep Reinforcement Learning based Online Scheduling Policy for Deep Neural Network Multi-Tenant Multi-Accelerator Systems
  8. 8.8 Adversarial Robustness Limits via Scaling-Law and Human-Alignment Studies
  9. 8.7 Exploring Text-to-Motion Generation with Human Preference
  10. 8.5 Hindsight PRIORs for Reward Learning from Human Preferences
  11. 8.5 Active Learning for Control-Oriented Identification of Nonlinear Systems
  12. 8.5 DEGNN: Dual Experts Graph Neural Network Handling Both Edge and Node Feature Noise
  13. 8.5 Effective Reinforcement Learning Based on Structural Information Principles
  14. 8.4 Provable Interactive Learning with Hindsight Instruction Feedback
  15. 8.4 FedDistill: Global Model Distillation for Local Model De-Biasing in Non-IID Federated Learning
  16. 8.2 LLM-Seg: Bridging Image Segmentation and Large Language Model Reasoning
  17. 8.2 Mixture of Experts Soften the Curse of Dimensionality in Operator Learning
  18. 8.2 Inferring Behavior-Specific Context Improves Zero-Shot Generalization in Reinforcement Learning
  19. 7.9 Multiply-Robust Causal Change Attribution
  20. 7.9 Hybrid FedGraph: An Efficient Hybrid Federated Learning Algorithm Using Graph Convolutional Neural Network