The essential SLAM stack for autonomous machines and XR products.

A full-stack spatial intelligence solution for positioning, mapping and perception, so you can focus on developing the features that really matter.

Robots, consumer products and drones need to navigate and understand their surroundings. Autonomous operation in dynamic environments requires robust, real-time spatial understanding.

What is Spatial Intelligence?

We define three levels of spatial understanding, each answering a critical question:
- Where am I? (positioning)
- What's around me? (mapping)
- What are the objects around me? (semantic perception)

SLAMcore Spatial Intelligence answers all three questions, providing accurate and robust localization, reliable mapping and enhanced semantic perception - while running in real time on standard sensors and compute. Accurate positioning enables better navigation and obstacle avoidance; quality maps accurately represent surroundings; and semantic perception removes dynamic objects and enriches maps with object locations and types.

How does SLAMcore deliver Spatial Intelligence?

Visual Stereo SLAM

Vision is the only sensing modality that spans all three levels of Spatial Intelligence, giving robots an understanding of their surroundings that cannot be achieved with LiDAR alone.

Sensor Fusion

We fuse vision with other low-cost sensors, such as IMUs and wheel odometers, for increased robustness.
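To give a flavour of why fusion helps: wheel odometry is available at high rate but drifts, while visual fixes are less frequent but bound that drift. The sketch below is a deliberately minimal, one-dimensional Kalman filter illustrating this idea - it is not SLAMcore's implementation, and all names and noise values are invented for illustration.

```python
# Illustrative sketch only: a 1D Kalman filter fusing wheel-odometry
# motion updates with visual position fixes. Real visual-inertial SLAM
# estimates full 6-DoF pose and is far more involved.

class FusionFilter1D:
    def __init__(self, x0=0.0, p0=1.0):
        self.x = x0   # position estimate (metres)
        self.p = p0   # estimate variance

    def predict(self, wheel_delta, odom_var=0.04):
        """Motion update from wheel odometry: fast, but variance grows."""
        self.x += wheel_delta
        self.p += odom_var

    def correct(self, visual_pos, vis_var=0.01):
        """Measurement update from a visual fix: pulls the estimate
        toward the measurement and shrinks the variance."""
        k = self.p / (self.p + vis_var)        # Kalman gain
        self.x += k * (visual_pos - self.x)
        self.p *= (1.0 - k)
        return self.x

f = FusionFilter1D()
f.predict(1.0)          # wheels report 1 m of travel (some slip)
est = f.correct(0.95)   # camera observes only 0.95 m of motion
```

The fused estimate lands between the two sources, weighted toward whichever is currently more trustworthy, and its uncertainty drops after every visual correction.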

Low-level optimization

Our team of embedded software engineers has optimized our software and algorithms to run on low-power processors. We are committed to ensuring our solutions do not require costly processors, and we have recently showcased our positioning software running in real time on a Raspberry Pi.

Explore our full-stack Spatial Intelligence Levels

Robust, Accurate Spatial Intelligence in Minutes

Join over 75 companies using industry-leading visual-inertial SLAM for faster product development.