Prerequisites:
Python 3.10 or higher
Linux operating system (macOS support coming soon)
Windows is not supported
Before installing the Maurice SDK, make sure your pip installation is up to date:
Installation Options:
Base Python Environment:
Virtual Environment (recommended):
Conda Environment:
Note: Using a virtual environment (venv) or conda environment is recommended for isolation and dependency management.
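The recommended virtual-environment setup can be sketched as follows. The SDK package name is not stated in this copy of the docs, so the install line is left as a commented hypothetical placeholder:

```shell
python3 -m venv maurice-env          # create an isolated environment
. maurice-env/bin/activate           # activate it (bash/zsh)
python -m pip --version              # confirm the venv's pip; upgrade it first if it is old
# pip install <innate-sdk-package>   # hypothetical placeholder; install the SDK per your setup instructions
```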
Put Maurice in connection mode:
Press and hold power button for 5 seconds until status light blinks blue
This indicates Maurice is ready for connection
Connect to Maurice's network:
Find and connect to Wi-Fi network named "Maurice_XXX"
Initialize SSH connection:
When prompted to add Maurice to SSH config, enter 'Y'
Verify connection:
This should open an SSH session to Maurice
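For reference, the `ssh maurice` shortcut works because the initialization step adds a host entry to `~/.ssh/config`. An entry of roughly this shape is what makes the hostname resolve; the HostName and User values below are illustrative placeholders, not the values the SDK actually writes:

```
Host maurice
    HostName <maurice-ip-address>   # placeholder
    User <maurice-user>             # placeholder
```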
VS Code Setup (Optional):
Open VS Code
Install Remote-SSH extension if not already installed
Press F1 or Ctrl+Shift+P to open command palette
Type "Remote-SSH: Connect to Host"
Select "maurice" from the list of SSH targets
VS Code will establish connection to Maurice
Run the initialization command:
Authentication:
A browser tab will automatically open
Choose your login method:
Google account
GitHub account
After successful authentication, the browser will redirect back
SDK initialization is now complete
To charge Maurice, follow these simple steps:
Locate the included wall charger that came with your device.
Find the charging port located on the back of Maurice.
Connect the charger to Maurice's charging port at the rear.
Plug the wall charger into a standard electrical outlet.
Monitor Maurice's status light while charging:
Red light indicates low battery
Yellow light indicates medium charge level
Green light indicates fully charged
For optimal performance, charge Maurice when the status light turns red. A charging cycle is complete when the light changes from red to green.
Note: Use only the wall charger included with Maurice to ensure safe and proper charging.
To power on Maurice:
Press the power button once to turn Maurice on.
The status light will indicate Maurice's current battery level:
Red light: low battery
Yellow light: medium charge
Green light: full charge
Wi-Fi Connection Status:
Blinking light indicates Maurice is attempting to connect to Wi-Fi
Solid light confirms Maurice has successfully connected to Wi-Fi
If this is your first time powering on Maurice and you haven't set up the Wi-Fi connection yet, please proceed to the Connection Setup section below.
Note: If Maurice's status light continues to blink, check your Wi-Fi connection or refer to the troubleshooting section.
There are two paths to establish network connectivity for Maurice: via the Innate Builder App or through a direct Wi-Fi configuration.
Initial Setup Mode: Press and hold the power button for 5 seconds until the LED indicator begins pulsing blue, indicating Maurice has entered configuration mode.
Method 1: Innate Builder App Configuration
Install the Innate Builder App
Navigate to: Robots → Setup New Robot
The app will scan for available Bluetooth LE devices
Locate and select "Maurice_XXX" from the discovered devices
Once paired:
Select target Wi-Fi SSID
Input network credentials
Upon successful connection:
Status LED transitions to solid state (color reflects battery level)
Bluetooth connection terminates
App control interface becomes active
Method 2: Direct Wi-Fi Configuration
Maurice creates a local access point "Maurice_XXX"
Connect to this network
Access the configuration interface at 192.168.1.1
Navigate to connection settings
Input target network credentials:
SSID
Password
Upon successful connection:
Local AP terminates
Status LED transitions to solid state
Network Requirements: For successful communication, both the control device and Maurice must operate on the same subnet. Maurice supports standard Wi-Fi protocols.
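The same-subnet requirement can be checked from the control device with Python's standard ipaddress module. The addresses below are examples, and the /24 prefix length is an assumption about a typical home network:

```python
import ipaddress

def same_subnet(ip_a: str, ip_b: str, prefix: int = 24) -> bool:
    """Return True if both hosts fall within the same IPv4 subnet."""
    net_a = ipaddress.ip_network(f"{ip_a}/{prefix}", strict=False)
    return ipaddress.ip_address(ip_b) in net_a

print(same_subnet("192.168.1.10", "192.168.1.42"))  # True  (same /24)
print(same_subnet("192.168.1.10", "192.168.2.42"))  # False (different /24)
```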
Maurice operates using a dual-system navigation approach:
Uses RGBD+LiDAR SLAM for localization
Creates and maintains an occupancy map for obstacle-free path planning
Continuously stores and updates a pose-graph representation of the environment for semantic navigation
Maurice can navigate in three ways:
By coordinates
Through text prompts to stored locations
Through text prompts to visible locations (in-sight navigation)
For successful navigation, Maurice requires both:
An occupancy map (for understanding free/occupied space)
A pose-graph (for understanding the environment's structure)
The system continuously updates its environmental understanding through the pose-graph representation, allowing Maurice to maintain an accurate model of its surroundings for navigation purposes.
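As a rough illustration of the three navigation modes described above, here is a self-contained Python stub. The class and method names are hypothetical, not the actual SDK API:

```python
from dataclasses import dataclass

@dataclass
class Pose:
    x: float      # metres, map frame
    y: float
    theta: float  # radians

class Navigator:
    """Illustrative stub of the three navigation modes (names hypothetical)."""
    def __init__(self):
        self.memory = {}            # name -> Pose, stood in for the pose-graph
        self.pose = Pose(0.0, 0.0, 0.0)  # origin of the occupancy map

    def go_to_pose(self, pose: Pose):
        """Navigate by coordinates."""
        self.pose = pose

    def remember(self, name: str, pose: Pose):
        self.memory[name] = pose

    def go_to_memory(self, name: str):
        """Navigate to a stored location by text prompt."""
        self.go_to_pose(self.memory[name])

nav = Navigator()
nav.remember("kitchen", Pose(2.0, 1.5, 0.0))
nav.go_to_memory("kitchen")
print(nav.pose)  # Pose(x=2.0, y=1.5, theta=0.0)
```

In-sight navigation would add a third entry point that resolves a text description against what the cameras currently see, which cannot be meaningfully stubbed here.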
Initial Position
Place Maurice in a repeatable starting position
This position will serve as the map's origin
Note: Choose this position carefully as it will be your reference point
Start Mapping
Execute command: innate map new
Enter your desired map name when prompted
A visualization window will appear showing:
Current map generation
Sensor data window
Mapping Process
Drive Maurice through the environment using either:
The mobile app
WASD keys on your keyboard
Mapping best practices:
Cover all areas where Maurice will operate
Perform regular 360-degree turns to avoid sensor blind spots
Revisit key regions multiple times to improve map accuracy
Save Map
Press escape to finish mapping
The map will automatically save to ~/maps
View Map
To view your created map, use command:
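The view command itself is not reproduced in this copy of the docs. Based on the `innate map new` pattern shown above, it plausibly takes this shape, though the exact subcommand is an assumption:

```shell
innate map view <map-name>   # hypothetical form; consult the CLI help for the real subcommand
```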
Initial Setup
Place Maurice in the same origin position used for the occupancy map
Load your previously created map using:
Create New Pose-Graph
Start the pose-graph creation process:
Environment Coverage
Drive Maurice through the environment
Ensure the robot observes:
All key rooms
Important objects relevant to planned tasks
The robot will automatically:
Build its proprietary pose-graph representation
Create a navigable understanding of the environment
Managing the Pose-Graph
Save the pose-graph:
View the pose-graph:
Load a pose-graph for use:
Note: The -dynamic argument controls whether the pose-graph:
Setup
Get Current Pose
Go to Pose
Go to Memory
Go to In Sight
Interrupt Navigation
The manipulation system consists of a compact 5 degree-of-freedom (DOF) robotic arm designed for research and development in robotic manipulation. The system combines precise motor control, integrated vision capabilities, and a modular end-effector design to support a wide range of manipulation tasks.
Users can develop manipulation strategies through multiple approaches:
Learning-based policies trained through teleoperation demonstrations
Hardcoded motion sequences for repeatable tasks
Recorded trajectories that can be played back for specific operations
The arm's integration with the Innate SDK enables straightforward development of both learned and programmatic manipulation policies. Through the included leader arm interface, users can easily demonstrate desired behaviors, which can then be used to train learning-based policies or recorded for direct playback.
The robotic arm is a 5 degree-of-freedom (DOF) manipulator with a modular end effector. The arm's movement is powered by a combination of high-quality Dynamixel servo motors:
3 x Dynamixel XL430 motors
Typically used for major joints requiring higher torque
Enhanced positioning accuracy
Built-in PID control
3 x Dynamixel XL330 motors
Used for lighter-load movements and end effector
Optimized for speed and precision
Energy-efficient operation
Integrated arm camera
150-degree field of view
Provides visual feedback during teleoperation
Enables vision-based manipulation tasks
Maximum reach: 10 inches
Payload capacity: 200 grams at full extension
Working envelope: Spherical segment defined by maximum reach
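The spherical working envelope can be expressed directly in code. This sketch converts the 10-inch reach to metres and tests whether a point in the arm's base frame is reachable; it deliberately ignores joint limits, which would shrink the true envelope:

```python
import math

MAX_REACH_M = 10 * 0.0254   # 10-inch maximum reach, converted to metres (0.254 m)

def within_reach(x: float, y: float, z: float) -> bool:
    """True if a point (metres, arm base frame) lies inside the spherical reach."""
    return math.sqrt(x * x + y * y + z * z) <= MAX_REACH_M

print(within_reach(0.10, 0.10, 0.10))  # True  (~0.173 m from the base)
print(within_reach(0.30, 0.0, 0.0))    # False (beyond the 0.254 m reach)
```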
The 5 DOF configuration provides:
Base rotation
Shoulder movement
Elbow articulation
Wrist pitch
Wrist roll
The arm features a modular end effector mount that supports:
User-designed custom end effectors
Swappable tool attachments
Additional end effector designs (coming soon)
This modularity allows users to adapt the arm for various applications by designing and implementing their own end effector solutions. Future releases will include new end effector designs to expand the arm's capabilities.
To teleoperate the robot using the included leader arm:
Prerequisites:
Ensure no other manipulation tasks are running
Use command: innate manipulation pause
Have access to a workstation with available USB-C port
Hardware Connection:
Connect leader arm to workstation via USB-C
Note: Leader arm can draw up to 1 Amp of current
Ensure stable power supply to workstation
Initialization:
Enter command: innate manipulation teleop
Wait for initialization (takes a few seconds)
System will initialize both leader and follower arms
Operation:
Follower arm will mirror leader arm's joint configuration
Camera feed window will automatically display
Real-time visual feedback provided through arm camera
Connect leader arm to workstation via USB-C
Verify no other manipulation tasks are running using command line
Run initialization command:
When prompted, provide the following information:
Task Type Selection
Enter 'r' for recorded policy
Task Name
Provide a descriptive name for the task
Task Description
Enter a concise description (1-2 sentences)
Include:
Objects to be manipulated
Task objectives
Key guidelines
Note: This description will be used when calling the task via the agent
Once teleoperation mode initializes, use the following controls:
Spacebar: Multiple functions
Start recording the trajectory
Save recorded trajectory
X: Cancel and delete recording
Escape: Save and exit
Recorded behaviors are stored in the ~/behaviors directory on the robot
Note: Unlike learned policies, recorded behaviors:
Only require one successful demonstration of the desired trajectory
Will replay the exact recorded trajectory when executed
Provide no autonomous adaptation to environment changes
Maurice provides a streamlined process for developing learned manipulation policies through demonstration. Users can create new manipulation behaviors by demonstrating tasks via teleoperation, without requiring expertise in machine learning. The system handles the underlying complexities of policy training and deployment.
Data Collection
Demonstrate tasks using the leader arm teleoperation interface
Capture multiple demonstrations to provide task variations
System automatically logs relevant state and action data
No manual data formatting required
Data Upload
Upload demonstration data to Maurice console
System validates data integrity automatically
Access demonstration playback for verification
Organize demonstrations by task type
Policy Configuration
Select neural network architecture for the policy
Choose from available base models as starting points
Configure model structure and parameters
Adjust training parameters such as:
Learning rate
Epochs
Default configurations provided for common use cases
Training Execution
Initiate training through Maurice console
Monitor training status via progress dashboard
System automatically handles optimization process
Training typically takes 1-6 hours depending on task complexity
Deployment
Download trained policy files
Load policy onto Maurice system
Verify behavior matches demonstrations
Deploy to production environment
The process enables technical users to develop manipulation policies based on practical task knowledge while maintaining control over the underlying model architecture and training process.
Connect leader arm to workstation via USB-C
Verify no other manipulation tasks are running using command line
Run initialization command:
When prompted, provide the following information:
Task Type Selection
Enter 'l' for learned policy
Enter 'r' for recorded policy
Task Name
Provide a descriptive name for the task
Task Description
Enter a concise description (1-2 sentences)
Include:
Objects to be manipulated
Task objectives
Key guidelines
Note: This description will be used when calling the task via the agent
Once teleoperation mode initializes, use the following controls:
Spacebar: Multiple functions
Start recording a new example
Save current example
X: Cancel and delete current example
Escape: Save all episodes and exit
Vary Task Settings
Change object positions between demonstrations
Vary robot's initial position
Modify environmental conditions when applicable
These variations help the policy generalize to new situations
Maintain Consistency
Use the same strategy across all demonstrations
Keep movement patterns similar
Maintain consistent grasp points
Keep approach angles uniform when possible
Handle Failures
When a demonstration fails, continue to completion
Do not cancel failed attempts
Retry the task with the same configuration
Failed attempts provide valuable learning data
Recorded demonstrations are stored in the ~/data directory
Access data via SSH connection to robot
Data is automatically formatted for training
List All Tasks
This command displays:
Task names
Data size
Task specifics
View Task Details
This command shows:
Task type (learned/recorded)
Task description
Number of episodes
Data statistics:
Episode lengths
Other relevant metrics
To add additional demonstrations to an existing task:
Run the training command again:
Enter the same task name as before
New demonstrations will be appended to existing data
Verify updated data status using the data status command
Upload Command
Requirements
Robot must remain powered on
Stable internet connection required
Upload time varies with internet speed (up to 45 minutes)
Monitor Progress Check upload status using:
Verification in Maurice Console
Visit Maurice data console
Navigate to "Data Sets" section
Locate your uploaded task
Verify:
All episodes are present
Playback episodes to check upload quality
Best Practices
Ensure stable power supply during upload
Maintain consistent network connection
Verify upload completion before powering down
Monitor progress periodically for large datasets
Navigate to "Policies" section in Maurice console
Click "Add New Policy"
Select architecture and base model
Currently supports ACT and Random Weights
Improved base models are coming soon
Configure training parameters
Learning rate
Number of epochs
Default values work well for most tasks
Select training datasets in dataset tab
Click execute to begin training
Training runs in the background and can be monitored via the policy console, where you can track key metrics like training and validation loss. Policy training typically takes between 1 and 6 hours depending on dataset size and number of epochs.
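A training configuration corresponding to the choices above might look like the following; every value here is illustrative, and the actual console fields may differ:

```json
{
  "architecture": "ACT",
  "base_model": "random_weights",
  "learning_rate": 0.0001,
  "epochs": 2000,
  "datasets": ["pickup_glass", "handover_glass"]
}
```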
Download Command
Storage Location
Model weights are stored in the ~/policy directory on the robot
Policy Evaluation
To test the downloaded policy, use:
This will run the policy for the specified number of seconds.
Setup
Joint Control
End Effector Control
Gripper Control
Behavior and Policy Execution
Example Usage
Get joint positions
Set joint positions
Get end effector pose
Set end effector pose
Get gripper pressure
Set gripper pressure
Run behavior
Run policy
Interrupt execution
Example Usage
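The example-usage code is not reproduced in this copy of the docs. As a hedged sketch of what the joint and gripper accessors listed above could look like, here is a self-contained stub; all class and method names are hypothetical, not the actual SDK API:

```python
class Arm:
    """Illustrative stub of the 5-DOF manipulation API surface (names hypothetical)."""
    def __init__(self):
        self._joints = [0.0] * 5        # 5-DOF joint angles, radians
        self._gripper_pressure = 0.0    # normalized 0..1

    def get_joint_positions(self):
        return list(self._joints)

    def set_joint_positions(self, q):
        assert len(q) == 5, "arm has 5 joints"
        self._joints = list(q)

    def get_gripper_pressure(self):
        return self._gripper_pressure

    def set_gripper_pressure(self, p):
        # Clamp to the valid range rather than rejecting out-of-range values
        self._gripper_pressure = min(max(p, 0.0), 1.0)

arm = Arm()
arm.set_joint_positions([0.1, -0.2, 0.3, 0.0, 0.0])
arm.set_gripper_pressure(0.5)
print(arm.get_joint_positions()[0])   # 0.1
```

End effector pose control would add a forward/inverse-kinematics layer on top of the joint interface, which is omitted here.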
The Orchestrator is Maurice's AI agent - a reactive physical intelligence that interacts with the world through robot operations and user-defined primitives. When users give natural language instructions, the Orchestrator intelligently selects and executes appropriate robot behaviors to accomplish the requested tasks.
The Orchestrator acts as a bridge between user intentions and robot actions by:
Understanding natural language requests
Selecting appropriate primitives for tasks
Coordinating physical and digital operations
Providing feedback about actions and outcomes
Builders can customize and extend the Orchestrator's capabilities through Directives. A Directive tells the Orchestrator:
Which primitives it can use
How to understand when and how to use them
A Directive consists of two key components:
When creating a Directive:
Create a Python file in the ~/directives directory
Inherit from the Directive base class
Implement get_primitives() to list available primitives
Implement get_prompt() to define usage instructions
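Following the four steps above, a directive file could look like this sketch. The Directive base class is stubbed locally here so the example is self-contained, and the primitive names and prompt text are invented for illustration:

```python
# Hypothetical sketch of a directive file, e.g. ~/directives/greet_directive.py.
# The real SDK provides the Directive base class; a minimal stand-in is defined here.

class Directive:
    """Minimal stand-in for the SDK's Directive base class."""
    def get_primitives(self):
        raise NotImplementedError

    def get_prompt(self):
        raise NotImplementedError

class GreetDirective(Directive):
    def get_primitives(self):
        # Names of primitives the Orchestrator may call under this directive
        return ["go_home", "wave_hello"]

    def get_prompt(self):
        return (
            "You are a friendly host robot. When a person greets you, "
            "call wave_hello; when asked to rest, call go_home."
        )

d = GreetDirective()
print(d.get_primitives())  # ['go_home', 'wave_hello']
```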
You can activate and deactivate directives in two ways:
Use the Maurice SDK CLI commands:
Connect to your robot through the Maurice app
Navigate to the Directives page
Click your desired directive to activate it
Hit the cancel button to deactivate the current directive
When a directive is activated, the Orchestrator will use its configuration to understand and respond to user requests.
Here's a complete example showing how to build a simple security system using the Orchestrator:
File: ~/primitives/notify_user.py
File: ~/primitives/patrol.py
File: ~/directives/security_directive.py
Place files in correct directories:
Put primitives in ~/primitives/
Put directive in ~/directives/
Activate the security directive using either:
Or use the Maurice app:
Navigate to the Directives page
Click on "security_directive"
The robot can now handle commands like:
"Check the house"
"Do a security patrol"
"Alert me if anything seems wrong"
The Orchestrator will use the directive's prompt to understand these requests and execute the appropriate primitives in response.
User Primitives are powerful building blocks that enable developers to create complex robot behaviors by combining Maurice's physical capabilities (navigation, manipulation) with digital functions (API calls, data processing, etc.). These primitives serve as the highest level of abstraction in Maurice's architecture, allowing the robot agent to seamlessly integrate physical and digital tasks into cohesive operations.
Each primitive consists of three essential components:
Usage Guidelines
Natural language descriptions of when the primitive should be used
Required environmental conditions and context
Constraints on when the primitive can be safely executed
Expected outcomes and side effects
Any dependencies on other primitives or system states
Interruption Protocol
Defined safety procedures for stopping execution
Cleanup steps to maintain system consistency
State restoration procedures
Error handling and recovery methods
Conditions under which interruption is allowed or blocked
Execution Sequence
Ordered list of physical and digital operations
Clear entry and exit conditions for each step
Error handling at each stage
Success/failure criteria with feedback
State validation between steps
To create a new primitive, create a Python file in the ~/primitives directory. Here's the basic structure:
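The structure snippet itself is missing from this copy of the docs. A plausible shape, reflecting the three components described above (usage guidelines, execution sequence, interruption protocol), with all names hypothetical:

```python
# Hypothetical shape of a primitive file; the real base-class API may differ.

class Primitive:
    """Minimal stand-in for the SDK's primitive base class."""
    guidelines = ""   # natural-language usage guidelines read by the Orchestrator

    def execute(self):
        raise NotImplementedError   # ordered physical/digital steps go here

    def interrupt(self):
        raise NotImplementedError   # safe-stop and cleanup procedure

class WaveHello(Primitive):
    guidelines = "Use when greeting a person who is already in view."

    def execute(self):
        # Physical steps (e.g. an arm gesture) would go here; we just report success.
        return {"success": True}

    def interrupt(self):
        # Return the arm to a safe position before stopping, then report.
        return {"stopped": True}

print(WaveHello().execute())  # {'success': True}
```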
File Location: ~/primitives/go_home.py
A basic movement primitive that returns the robot to its home position. Demonstrates coordination between navigation and manipulation systems for safe repositioning.
Key Implementation Points:
Initializes both navigation and manipulation systems
Moves arm to zero position before navigation
Uses absolute coordinates (0,0,0) for consistent home position
File Location: ~/primitives/pick_trash.py
A vision-based manipulation primitive that uses natural language descriptions to identify and pick up trash items. Combines visual navigation with manipulation policies.
Key Implementation Points:
Uses in-sight navigation with description matching
Maintains safe distance (0.5m) during approach
Executes pre-trained picking policy
Includes safe gripper release in interruption
File Location: ~/primitives/alert_user.py
A digital integration primitive that combines physical gestures with email notifications for security alerts. Shows basic integration of robot actions with external services.
Key Implementation Points:
Coordinates physical gesture with email sending
Uses SMTP for reliable email delivery
Includes safe arm positioning in interruption
Provides detailed alert messages with descriptions
File: ~/primitives/monitor_air.py
A sensor integration primitive that combines gas sensing with robot navigation to monitor air quality in different locations. Shows hardware sensor integration with robot behaviors.
Key Implementation Points:
Integrates SGP30 sensor readings with navigation
Implements professional air quality thresholds
Sends alerts when thresholds are exceeded
Includes proper sensor initialization and timing
The Innate SDK introduces a new paradigm in which every physical robot is an AI agent.
In this framework, every function call is either a learned physical action, a digital operation, or a combination of both.
Build sophisticated AI applications in minutes using state-of-the-art manipulation, navigation, and reasoning capabilities.
Let's create a simple agent that can navigate rooms and serve glasses - a perfect introduction to physical AI development.
You need a robot that implements the Innate SDK. Innate sells our first such robot for $2,000 apiece; see our website.
Then follow the instructions in Maurice Setup and Workstation Setup.
You can start coding right after by defining files in ~/primitives and ~/directives.
A primitive is akin to a function call for an LLM: it is what the robot calls when it believes the circumstances are right. You can add guidelines telling it more about when to call it, when to interrupt it, and so on.
If the policies "handover_glass" and "pickup_glass" have not been trained yet, you have to collect data and send it to us to train and load the policy onto your robot.
Below is how the process looks once you're in training mode. The SDK will guide you through collecting episodes for the task you want to teach. You can learn more about training and inference in Manipulation.
This is what describes the purpose of the robot during its execution. You can switch between directives. Here, the directive makes sure the robot is aware of its physical capabilities to act in the real world.
First move the robot around a little with the app so that it memorizes the place.
Then let the agent run, using either the app, or the terminal:
This is what the resulting execution looks like:
Primitives: Building blocks that combine physical actions, learned behaviors, and digital operations
Directives: Natural language instructions that guide how your robot uses primitives
Policies: Learned behaviors for complex physical tasks like manipulation
Navigation: Built-in mapping and movement capabilities
🤖 Full robotic control (navigation, manipulation, sensing)
🧠 Built-in AI agent capabilities
📱 Simple Python SDK and CLI tools
🛠 Extensible hardware support
🎓 Learning from demonstration
👀 Advanced visual understanding
Ready to build something more complex? Check out our detailed examples and join our developer community below.
Follow our Setup Guide to get Maurice up and running
Learn about basic Navigation and Manipulation Control
Explore creating User Primitives
Join our community to share and learn from other builders
Maurice is built for hackers who want to push the boundaries of what's possible with embodied AI. Whether you're building a robotic bartender or something entirely new, we can't wait to see what you'll create.
Welcome to Innate!
We are developing teachable, accessible general-purpose robots for builders, from software engineers new to robotics to hardcore roboticists. The world of robotics is changing, and with our platforms you can quickly begin training your robot and developing applications!
We currently offer:
A small-size, affordable mobile manipulator (with onboard compute and a data collector) called Maurice for $2,000. You can book one here.
The Innate SDK to quickly teach your robot new physical and digital tasks, and chain them in your home to perform long-term activities.
Follow us on Discord! We'll post more frequent updates and organize events there.
Maurice features a comprehensive sensor suite consisting of three primary sensors:
High-quality depth perception with 7.5cm stereo baseline
150° diagonal field of view
Effective depth range: 40cm - 6m
Depth accuracy: <2% error up to 3.5m, <4% to 6.5m, <6% to 9m
Global shutter for improved motion handling
1MP resolution (1280x800) at up to 120 FPS
160° diagonal field of view
2MP resolution (1920x1080)
Enhanced low-light performance (0.001 Lux minimum illumination)
30 FPS at full resolution
Ideal for close-range manipulation tasks and visual servoing
360° scanning coverage
Range: 0.15m - 12m
Angular resolution: ≤1°
Distance resolution: <0.5mm
Scan rate: 5.5Hz
Primary sensor for SLAM and navigation
This sensor configuration enables robust environmental perception, precise manipulation, and reliable navigation through complementary sensing modalities. Each sensor's data can be accessed and processed through the Innate SDK.
Stream Sensor Data
View live sensor data streams in a visualization window:
Capture Sensor Data
Save sensor data snapshots to file:
By default, captures are saved in the current working directory. Use the optional --output flag to specify a different save location:
Setup
RGBD Camera
Gripper Camera
LiDAR
Maurice provides two powered USB 3.0 ports for connecting additional sensors and peripherals.
Port Specifications
2x USB 3.0 ports
Power output: 1.2 Amps per port
Data transfer rate: Up to 5 Gbps
Hot-swappable
Full compatibility with USB 2.0 devices
Custom Sensor Integration Users can integrate additional sensors to enhance Maurice's perception capabilities. When adding custom sensors, ensure:
Sensor drivers are compatible with NVIDIA Jetpack 6
Drivers are properly installed on Ubuntu 22.04
Power requirements fall within the 1.2 Amp limit per port
Custom sensors can enhance Maurice's capabilities for specific tasks such as:
Higher resolution imaging
Specialized environmental sensing
Additional viewpoints
Task-specific measurements
Extended range detection