AI Papers Reader

Personalized digests of latest AI research

Learning Manipulation Affordances and Robot Trajectories with Flow Matching

A new paper by researchers at the Honda Research Institute EU introduces a framework for robot manipulation that combines parameter-efficient prompt tuning for visual affordance learning with flow matching for robot trajectory generation.

The key challenge in robot manipulation is enabling robots to understand and respond to complex, human-centered instructions. This is particularly difficult in everyday environments where robots might need to perform multiple tasks that require understanding the affordances of objects in the scene. For example, a robot might need to grasp a cup, pour water into it, and then hand it to a human.

To address this challenge, the authors propose a two-step approach:

  1. Prompt tuning for affordance learning: The researchers use pre-trained vision-language models (VLMs) to learn affordances from text-based instructions. They apply prompt tuning, a parameter-efficient fine-tuning (PEFT) technique that prepends learnable text prompts to the input layer while keeping the VLM weights frozen. This lets the VLM adapt to new tasks without retraining the entire model, saving computational resources and time (a minimal sketch follows this list).
  2. Flow matching for trajectory generation: Flow matching is a generative method that learns the robot's visuomotor policy as a continuous vector field. The learned field transports random waypoints toward a desired trajectory, so the robot can generate smooth, continuous movements conditioned on the learned affordances (a minimal training and sampling sketch appears after the results below).

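To make the prompt-tuning step concrete, here is a minimal PyTorch sketch of soft-prompt tuning with a frozen backbone. The stand-in transformer encoder, dimensions, and module names are illustrative assumptions, not the paper's architecture or code; only the prompt vectors receive gradients.

```python
import torch
import torch.nn as nn

class PromptTunedTextEncoder(nn.Module):
    """Prepends learnable prompt tokens to a frozen text encoder (stand-in for a VLM).
    Only the prompt embeddings are trained; the pre-trained backbone stays frozen."""

    def __init__(self, backbone: nn.Module, embed_dim: int = 512, num_prompts: int = 8):
        super().__init__()
        self.backbone = backbone
        for p in self.backbone.parameters():       # freeze the pre-trained weights
            p.requires_grad_(False)
        # learnable "soft prompt" vectors inserted at the input layer
        self.prompts = nn.Parameter(torch.randn(num_prompts, embed_dim) * 0.02)

    def forward(self, token_embeddings: torch.Tensor) -> torch.Tensor:
        # token_embeddings: (batch, seq_len, embed_dim) from the frozen embedding layer
        b = token_embeddings.shape[0]
        prompts = self.prompts.unsqueeze(0).expand(b, -1, -1)
        x = torch.cat([prompts, token_embeddings], dim=1)   # prepend prompt tokens
        return self.backbone(x)                              # frozen encoder processes the joint sequence


# Toy usage: a small frozen transformer encoder stands in for the VLM text tower.
embed_dim = 512
backbone = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=embed_dim, nhead=8, batch_first=True),
    num_layers=2,
)
model = PromptTunedTextEncoder(backbone, embed_dim=embed_dim, num_prompts=8)
tokens = torch.randn(4, 16, embed_dim)                       # dummy instruction embeddings
features = model(tokens)                                     # shape: (4, 16 + 8, 512)
trainable = [n for n, p in model.named_parameters() if p.requires_grad]
print(features.shape, trainable)                             # only 'prompts' is trainable
```
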
The paper demonstrates the effectiveness of this approach on a real-world dataset of 10 tasks drawn from Activities of Daily Living (ADL). The dataset comprises 1,000 sets of RGB images, demonstrated robot trajectories, and ground-truth affordance labels. The researchers report that their prompt tuning method outperforms other fine-tuning protocols at learning affordances, and that their flow matching policy generates high-quality robot trajectories, outperforming alternatives such as diffusion policies and transformer-based behavior cloning.
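The flow matching policy can be illustrated with a minimal, unconditional PyTorch sketch: a network learns a velocity field that transports random waypoints onto demonstrated trajectories, and new trajectories are generated by integrating that field with Euler steps. The waypoint dimensions, MLP architecture, and toy training loop are illustrative assumptions; the paper's policy additionally conditions on visual and affordance features.

```python
import torch
import torch.nn as nn

WAYPOINTS, DIM = 16, 2            # e.g. 16 planar waypoints per trajectory (assumed for the sketch)

class VelocityField(nn.Module):
    """Predicts the velocity that moves noisy waypoints toward the demonstrated trajectory."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(WAYPOINTS * DIM + 1, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, WAYPOINTS * DIM),
        )

    def forward(self, x, t):
        # x: (batch, WAYPOINTS*DIM) flattened waypoints, t: (batch, 1) flow time in [0, 1]
        return self.net(torch.cat([x, t], dim=-1))

def flow_matching_loss(model, demo):
    """demo: (batch, WAYPOINTS*DIM) flattened demonstrated trajectories."""
    noise = torch.randn_like(demo)                # random starting waypoints
    t = torch.rand(demo.shape[0], 1)              # random interpolation time
    x_t = (1 - t) * noise + t * demo              # point on the straight-line path
    target_velocity = demo - noise                # constant velocity of that path
    return ((model(x_t, t) - target_velocity) ** 2).mean()

@torch.no_grad()
def generate(model, steps=20):
    """Integrate the learned vector field from noise to a trajectory (Euler steps)."""
    x = torch.randn(1, WAYPOINTS * DIM)
    for i in range(steps):
        t = torch.full((1, 1), i / steps)
        x = x + model(x, t) / steps
    return x.view(WAYPOINTS, DIM)

# Toy training loop on synthetic demonstrations (random walks standing in for real data).
model = VelocityField()
optim = torch.optim.Adam(model.parameters(), lr=1e-3)
demos = torch.cumsum(torch.randn(256, WAYPOINTS, DIM) * 0.1, dim=1).flatten(1)
for _ in range(200):
    optim.zero_grad()
    loss = flow_matching_loss(model, demos[torch.randint(0, 256, (64,))])
    loss.backward()
    optim.step()
trajectory = generate(model)      # (16, 2) smooth waypoint sequence
print(trajectory.shape)
```
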

This research is a significant step towards building more capable and adaptable robot manipulation systems. The framework unifies affordance model learning and trajectory generation in a single pipeline, pointing toward robots that can understand and act on complex, human-centered instructions in everyday environments.