ICLR 2026
World Modeling Workshop at Mila 2026 - Oral (top 7%)
Achieving long-horizon goals in complex environments remains a core challenge in Reinforcement Learning (RL). Domains with multiple entities are particularly difficult due to their combinatorial complexity. Goal-Conditioned RL (GCRL) facilitates generalization across goals and the use of subgoal structure, but struggles with high-dimensional observations and combinatorial state spaces, especially under sparse reward. We propose a hierarchical entity-centric framework for offline GCRL that combines subgoal decomposition with factored structure to solve long-horizon tasks in multi-entity domains: a two-level hierarchy composed of a value-based GCRL agent and a factored subgoal-generating conditional diffusion model. The RL agent and subgoal generator are trained independently and composed post hoc through selective subgoal generation based on the value function, making the approach modular and compatible with existing GCRL algorithms. We introduce new variations of benchmark tasks that highlight the challenges of multi-entity domains, and show that our method consistently boosts the performance of the underlying RL agent on image-based long-horizon tasks with sparse rewards, achieving over 150% higher success rates on the hardest task in our suite and generalizing to longer horizons and larger numbers of entities.
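For concreteness, here is a minimal sketch of how the two levels can be composed at decision time. The names (`policy`, `value_fn`, `subgoal_diffusion.sample`) and the value-based acceptance rule are illustrative assumptions, not the exact mechanism from the paper.

```python
def act(state, goal, policy, value_fn, subgoal_diffusion, min_value_gain=0.0):
    """Hierarchical action selection via selective subgoal generation (sketch).

    The GCRL agent (policy, value_fn) and the subgoal generator
    (subgoal_diffusion) are trained independently and only composed here.
    """
    # Propose a factored subgoal conditioned on the current state and final goal.
    subgoal = subgoal_diffusion.sample(state, goal)

    # Use the value function to decide whether to pursue the subgoal:
    # condition the low-level policy on it only if it looks easier to reach
    # than the final goal (illustrative acceptance rule).
    if value_fn(state, subgoal) > value_fn(state, goal) + min_value_gain:
        return policy(state, subgoal)
    return policy(state, goal)
```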
Prior work has focused mainly on proximity, i.e., states reachable within K steps.
We present another perspective: factored progress, i.e., states that modify only a few factors (e.g., objects) at a time.
We refer to these as factored subgoals, which simplify the subtask when factors are independently controllable.
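As a toy illustration of this notion, assuming states are already decomposed into aligned per-entity feature vectors, one could count how many entities a candidate subgoal modifies relative to the current state; the distance and threshold below are arbitrary choices, not the paper's metric.

```python
import numpy as np

def num_modified_factors(state_entities, subgoal_entities, tol=1e-2):
    """Count entities whose features change between state and subgoal.

    state_entities, subgoal_entities: arrays of shape (num_entities, feat_dim),
    assumed aligned (entity i in the state matches entity i in the subgoal).
    """
    per_entity_change = np.linalg.norm(subgoal_entities - state_entities, axis=-1)
    return int((per_entity_change > tol).sum())

# A factored subgoal modifies only a few factors at a time, e.g.:
# num_modified_factors(state_entities, subgoal_entities) <= 1
```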
To obtain factored subgoals, we employ entity-centric structure in:
1. State representation (Deep Latent Particles)
2. Model architecture (Entity-centric Diffusion Transformer; sketched below)
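A rough sketch of what such an entity-centric conditional denoiser could look like in PyTorch, assuming per-entity latent vectors (e.g., particles) as tokens; the conditioning scheme, layer sizes, and timestep embedding are illustrative assumptions rather than the paper's exact Entity-centric Diffusion Transformer.

```python
import torch
import torch.nn as nn

class EntityDenoiser(nn.Module):
    """Transformer denoiser over per-entity tokens (illustrative sketch).

    Input: noisy subgoal entities plus conditioning tokens for the current
    state entities, the goal entities, and the diffusion timestep.
    Output: denoised subgoal entities (one token per entity).
    """

    def __init__(self, feat_dim, d_model=128, n_heads=4, n_layers=4):
        super().__init__()
        self.in_proj = nn.Linear(feat_dim, d_model)
        self.time_emb = nn.Sequential(
            nn.Linear(1, d_model), nn.SiLU(), nn.Linear(d_model, d_model)
        )
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, n_layers)
        self.out_proj = nn.Linear(d_model, feat_dim)

    def forward(self, noisy_subgoal, state_entities, goal_entities, t):
        # noisy_subgoal / state_entities / goal_entities: (batch, num_entities, feat_dim)
        # t: (batch,) diffusion timestep
        n_sub = noisy_subgoal.shape[1]
        tokens = torch.cat([noisy_subgoal, state_entities, goal_entities], dim=1)
        h = self.in_proj(tokens) + self.time_emb(t[:, None, None].float())
        h = self.backbone(h)
        # Read out only the subgoal tokens; self-attention lets each subgoal
        # token selectively copy from a matching state or goal entity token.
        return self.out_proj(h[:, :n_sub])
```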
We find that the inductive bias of entity-centric diffusion naturally produces entity-factored subgoals.
We attribute this partly to the Transformer's ability to selectively copy its input tokens (state and goal entities) to the output (denoised subgoal entities).
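A toy numerical example of this copying behavior, using plain NumPy attention; the setup (one-hot keys, a query sharply aligned with one key) is contrived purely to show that an output token can reproduce a single input token nearly verbatim.

```python
import numpy as np

def attention(Q, K, V):
    # Standard scaled dot-product attention.
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

d = 4
keys = np.eye(d)                                        # one key per input entity token
values = np.arange(4 * d).reshape(4, d).astype(float)   # per-entity features
query = 20.0 * keys[2:3]                                # query sharply aligned with entity 2
out = attention(query, keys, values)
print(np.allclose(out, values[2], atol=1e-2))           # ~True: entity 2 is copied through
```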
Results on offline RL tasks with multiple entities:
- Over 150% higher success rate on the hardest task compared to the strongest baseline
- Orders of magnitude less data required (3M) than non-factored approaches (up to 1B)
- Factored subgoals modify fewer objects (~1) than baseline subgoals (~3)
- Non-trivial compositional generalization to twice the number of objects