Manipulation
The manipulation system consists of a compact 5 degree-of-freedom (DOF) robotic arm designed for research and development in robotic manipulation. The system combines precise motor control, integrated vision capabilities, and a modular end-effector design to support a wide range of manipulation tasks.
Users can develop manipulation strategies through multiple approaches:
Learning-based policies trained through teleoperation demonstrations
Hardcoded motion sequences for repeatable tasks
Recorded trajectories that can be played back for specific operations
The arm's integration with the innate SDK enables straightforward development of both learned and programmatic manipulation policies. Through the included leader arm interface, users can easily demonstrate desired behaviors, which can then be used to train learning-based policies or recorded for direct playback.
The robotic arm is a 5-DOF manipulator with a modular end effector. The arm's movement is powered by a combination of high-quality Dynamixel servo motors:
3 x Dynamixel XL430 motors
Typically used for major joints requiring higher torque
Enhanced positioning accuracy
Built-in PID control
3 x Dynamixel XL330 motors
Used for lighter-load movements and end effector
Optimized for speed and precision
Energy-efficient operation
Integrated arm camera
150-degree field of view
Provides visual feedback during teleoperation
Enables vision-based manipulation tasks
Maximum reach: 10 inches
Payload capacity: 200 grams at full extension
Working envelope: Spherical segment defined by maximum reach
The 5 DOF configuration provides:
Base rotation
Shoulder movement
Elbow articulation
Wrist pitch
Wrist roll
The arm features a modular end effector mount that supports:
User-designed custom end effectors
Swappable tool attachments
Additional end effector designs (coming soon)
This modularity allows users to adapt the arm for various applications by designing and implementing their own end effector solutions. Future releases will include new end effector designs to expand the arm's capabilities.
To teleoperate the robot using the included leader arm:
Prerequisites:
Ensure no other manipulation tasks are running
Use command: innate manipulation pause
Have access to a workstation with an available USB-C port
Hardware Connection:
Connect leader arm to workstation via USB-C
Note: Leader arm can draw up to 1 Amp of current
Ensure stable power supply to workstation
Initialization:
Enter command: innate manipulation teleop
Wait for initialization (takes a few seconds)
System will initialize both leader and follower arms
Operation:
Follower arm will mirror leader arm's joint configuration
Camera feed window will automatically display
Real-time visual feedback provided through arm camera
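For reference, a typical teleoperation session from the workstation terminal combines the two commands above:

```bash
# Stop any other manipulation task before starting teleoperation
innate manipulation pause

# Initialize the leader and follower arms and open the camera feed
# (initialization takes a few seconds)
innate manipulation teleop
```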
Connect leader arm to workstation via USB-C
Verify that no other manipulation tasks are running using the command line
Run initialization command:
When prompted, provide the following information:
Task Type Selection
Enter 'r' for recorded policy
Task Name
Provide a descriptive name for the task
Task Description
Enter a concise description (1-2 sentences)
Include:
Objects to be manipulated
Task objectives
Key guidelines
Note: This description will be used when calling the task via the agent
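As an illustration, the prompt sequence for a recorded task might look like the following; the prompt wording and the sample answers shown here are hypothetical:

```text
Task type ([l]earned / [r]ecorded): r
Task name: wipe_table
Task description: Wipe the table surface with the sponge held in the gripper.
```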
Once teleoperation mode initializes, use the following controls:
Spacebar: Multiple functions
Start recording the trajectory
Save recorded trajectory
X: Cancel and delete recording
Escape: Save and exit
Recorded behaviors are stored in the ~/behaviors directory on the robot
Note: Unlike learned policies, recorded behaviors:
Only require one successful demonstration of the desired trajectory
Will replay the exact recorded trajectory when executed
Provide no autonomous adaptation to environment changes
Maurice provides a streamlined process for developing learned manipulation policies through demonstration. Users can create new manipulation behaviors by demonstrating tasks via teleoperation, without requiring expertise in machine learning. The system handles the underlying complexities of policy training and deployment.
Data Collection
Demonstrate tasks using the leader arm teleoperation interface
Capture multiple demonstrations to provide task variations
System automatically logs relevant state and action data
No manual data formatting required
Data Upload
Upload demonstration data to Maurice console
System validates data integrity automatically
Access demonstration playback for verification
Organize demonstrations by task type
Policy Configuration
Select neural network architecture for the policy
Choose from available base models as starting points
Configure model structure and parameters
Adjust training parameters such as:
Learning rate
Epochs
Default configurations provided for common use cases
Training Execution
Initiate training through Maurice console
Monitor training status via progress dashboard
System automatically handles optimization process
Training typically takes 1-6 hours depending on task complexity
Deployment
Download trained policy files
Load policy onto Maurice system
Verify behavior matches demonstrations
Deploy to production environment
The process enables technical users to develop manipulation policies based on practical task knowledge while maintaining control over the underlying model architecture and training process.
Connect leader arm to workstation via USB-C
Verify that no other manipulation tasks are running using the command line
Run initialization command:
When prompted, provide the following information:
Task Type Selection
Enter 'l' for learned policy
Enter 'r' for recorded policy
Task Name
Provide a descriptive name for the task
Task Description
Enter a concise description (1-2 sentences)
Include:
Objects to be manipulated
Task objectives
Key guidelines
Note: This description will be used when calling the task via the agent
Once teleoperation mode initializes, use the following controls:
Spacebar: Multiple functions
Start recording a new example
Save current example
X: Cancel and delete current example
Escape: Save all episodes and exit
Vary Task Settings
Change object positions between demonstrations
Vary robot's initial position
Modify environmental conditions when applicable
These variations help policy generalize to new situations
Maintain Consistency
Use the same strategy across all demonstrations
Keep movement patterns similar
Maintain consistent grasp points
Use uniform approach angles when possible
Handle Failures
When a demonstration fails, continue to completion
Do not cancel failed attempts
Retry the task with the same configuration
Failed attempts provide valuable learning data
Recorded demonstrations are stored in the ~/data directory
Access data via SSH connection to robot
Data is automatically formatted for training
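For example, the collected demonstrations can be inspected over SSH; the username and hostname below are placeholders for your robot's actual credentials:

```bash
# Replace user and robot-ip with your robot's credentials
ssh user@robot-ip

# Inspect the collected demonstration data stored on the robot
ls ~/data
```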
List All Tasks
This command displays:
Task names
Data size
Task specifics
View Task Details
This command shows:
Task type (learned/recorded)
Task description
Number of episodes
Data statistics:
Episode lengths
Other relevant metrics
To add additional demonstrations to an existing task:
Run the training command again:
Enter the same task name as before
New demonstrations will be appended to existing data
Verify the updated data status using the data status command
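A minimal sketch of that check, assuming the data status command follows the same `innate` CLI pattern as the manipulation commands above (the exact subcommand form may differ on your installation):

```bash
# Assumed command form; verify against your installed innate CLI
innate data status <task_name>
```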
Upload Command
Requirements
Robot must remain powered on
Stable internet connection required
Upload time varies with internet speed (up to 45 minutes)
Monitor Progress
Check upload status using:
Verification in Maurice Console
Visit Maurice data console
Navigate to "Data Sets" section
Locate your uploaded task
Verify:
All episodes are present
Play back episodes to check upload quality
Best Practices
Ensure stable power supply during upload
Maintain consistent network connection
Verify upload completion before powering down
Monitor progress periodically for large datasets
Navigate to "Policies" section in Maurice console
Click "Add New Policy"
Select architecture and base model
Currently supports ACT and Random Weights
Improved base models coming soon
Configure training parameters
Learning rate
Number of epochs
Default values work well for most tasks
Select training datasets in dataset tab
Click execute to begin training
Training runs in the background and can be monitored via the policy console, where you can track key metrics such as training and validation loss. Policy training typically takes 1-6 hours depending on dataset size and number of epochs.
Download Command
Storage Location
Model weights are stored in the ~/policy directory on the robot
Policy Evaluation
To test the downloaded policy, use:
This will run the policy for the specified number of seconds.
Setup
Joint Control
End Effector Control
Gripper Control
Behavior and Policy Execution
Example Usage
Get joint positions
Set joint positions
Get end effector pose
Set end effector pose
Get gripper pressure
Set gripper pressure
Run behavior
Run policy
Interrupt execution
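The sketch below shows what a script exercising these operations could look like. The import path, class name, method names, signatures, and units are all assumptions made for illustration, not the documented innate SDK API; consult the SDK reference for the actual calls.

```python
# Hypothetical sketch -- names and signatures are assumptions, not the documented SDK API.
from innate import Arm  # assumed import path

arm = Arm()  # assumed setup / connection step

# Joint control: read and command the 5 joint positions (radians assumed)
joints = arm.get_joint_positions()
arm.set_joint_positions([0.0, -0.5, 0.8, 0.2, 0.0])

# End effector control: read and command the end effector pose
pose = arm.get_end_effector_pose()
arm.set_end_effector_pose(position=[0.15, 0.0, 0.10],        # meters, assumed
                          orientation=[0.0, 0.0, 0.0, 1.0])  # quaternion, assumed

# Gripper control: read and command gripper pressure
pressure = arm.get_gripper_pressure()
arm.set_gripper_pressure(0.5)

# Behavior and policy execution
arm.run_behavior("wipe_table")            # replay a recorded trajectory from ~/behaviors
arm.run_policy("pick_cube", seconds=30)   # run a learned policy for a fixed duration

# Interrupt whatever is currently executing
arm.interrupt()
```

As noted above, a replayed behavior follows the exact recorded trajectory, so any adaptation to a changed scene has to come from a learned policy rather than a recorded one.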