## The pipeline at a glance
| Tab | What you do |
|---|---|
| Record | Collect demonstration episodes with the leader arm |
| Train | Configure hyperparameters and launch a training run |
| Runs | Monitor active training jobs |
| Completed | Download finished models and activate them |
## What goes in, what comes out
**Input:** A dataset of teleoperated demonstrations. Each episode captures synchronized camera images (main + wrist), joint positions, joint velocities, and optionally wheel odometry, all recorded at 30 Hz.

**Output:** A PyTorch checkpoint. The deployed policy runs inference at 25 Hz, emitting 6 arm joint commands and 2 base velocity commands every 40 ms.

## When to use trained skills
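The input/output shapes above can be sketched as plain arrays. This is an illustrative sketch only: the field names, image resolution, and episode length below are assumptions, not the actual Innate episode schema.

```python
import numpy as np

# Hypothetical episode layout: synchronized streams sampled at 30 Hz.
RECORD_HZ = 30
DURATION_S = 10
T = RECORD_HZ * DURATION_S  # frames in this example episode

episode = {
    "main_cam":   np.zeros((T, 480, 640, 3), dtype=np.uint8),   # main camera RGB
    "wrist_cam":  np.zeros((T, 480, 640, 3), dtype=np.uint8),   # wrist camera RGB
    "joint_pos":  np.zeros((T, 6), dtype=np.float32),           # 6 arm joint positions
    "joint_vel":  np.zeros((T, 6), dtype=np.float32),           # 6 arm joint velocities
    "wheel_odom": np.zeros((T, 2), dtype=np.float32),           # optional base odometry
}

# The trained policy outputs one action every 40 ms (25 Hz):
# 6 arm joint commands plus 2 base velocity commands, 8 values total.
INFER_HZ = 25
action = np.zeros(8, dtype=np.float32)
arm_cmd, base_cmd = action[:6], action[6:]
step_ms = 1000 // INFER_HZ  # 40 ms per control step
```

At 30 Hz recording and 25 Hz inference, observation frames and action steps are not one-to-one; the policy consumes the latest available observation at each control tick.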
Trained policies shine when:

- The task involves visuomotor coordination (reaching, grasping, placing)
- Object positions vary between runs and the robot needs to adapt visually
- Writing explicit motion code would be brittle or impractical
## Next steps
- **Record a dataset**: Collect high-quality demonstrations.
- **Train a policy**: Configure and launch training on Innate’s cloud.
- **Deploy your skill**: Download, activate, and run your trained model.
- **Dataset format**: Understand what’s inside each episode file.
- **Training Manager**: Browser-based power-user UI for dataset management.

