Strength Through Diversity: Robust Behavior Learning via Mixture Policies


Efficiency in robot learning is highly sensitive to hyperparameters. Robot morphology and task structure differ widely, and finding the optimal setting typically requires sequential or parallel repetition of experiments, strongly increasing the interaction count. We propose a training method that relies on only a single trial by enabling agents to select and combine controller designs conditioned on the task. Our Hyperparameter Mixture Policies (HMPs) feature diverse sub-policies that vary in distribution types and parameterization, reducing the impact of design choices and unlocking synergies between low-level components. We demonstrate strong performance on the DeepMind Control Suite and a simulated ANYmal robot, showing that HMPs yield robust, data-efficient learning.
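The core idea of a mixture policy, as described in the abstract, can be sketched as a learned categorical gate that selects among diverse sub-policies. The sketch below is illustrative only and not the paper's implementation: the class and function names are hypothetical, and the sub-policies are simple Gaussians that differ in their exploration scale, standing in for the paper's more varied distribution types and parameterizations.

```python
import math
import random

class MixturePolicy:
    """Illustrative sketch: a categorical gate picks one of several
    diverse sub-policies; the chosen sub-policy emits the action.
    All names here are hypothetical, not from the paper."""

    def __init__(self, sub_policies, logits):
        # sub_policies: callables mapping an observation to (mean, std)
        # logits: one gate logit per sub-policy
        self.sub_policies = sub_policies
        self.logits = logits

    def weights(self):
        # numerically stable softmax over the gate logits
        m = max(self.logits)
        exps = [math.exp(l - m) for l in self.logits]
        z = sum(exps)
        return [e / z for e in exps]

    def sample(self, obs):
        # sample a sub-policy index from the gate, then sample its Gaussian
        w = self.weights()
        idx = random.choices(range(len(self.sub_policies)), weights=w)[0]
        mean, std = self.sub_policies[idx](obs)
        return random.gauss(mean, std)

# Two sub-policies differing in exploration scale (illustrative stand-ins
# for sub-policies with different distribution types / parameterizations).
wide = lambda obs: (0.0, 1.0)    # broad exploration
narrow = lambda obs: (0.5, 0.1)  # fine-grained control
policy = MixturePolicy([wide, narrow], logits=[0.0, 1.0])
action = policy.sample(obs=None)
```

In training, the gate logits and sub-policy parameters would all be updated jointly, letting the agent shift weight toward whichever controller design suits the task, which is how a single trial can stand in for a hyperparameter sweep.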

Conference on Robot Learning (CoRL)

Toronto Intelligent Systems Lab Co-authors

Igor Gilitschenski
Assistant Professor