Our Technology

SLAMcore is making spatial AI solutions accessible to robotics businesses, transforming their vision into a tangible reality.

Spatial understanding is essential for autonomy. Whether for robots, drones, self-driving cars or humans, the ability to understand and navigate the environment underpins complex behaviours. As robots and drones operate in less constrained, more dynamic environments, high-quality spatial understanding becomes necessary for reliable operation and intelligent behaviour.

What is spatial understanding?

Spatial AI: advanced SLAM solutions that help a robot accurately understand where it is and what is around it

We define three levels of spatial understanding:

Level 1, Pose Tracking: Where am I?
Level 2, Mapping: Where are the objects around me?
Level 3, Semantics: What are the objects around me?

Commercially available solutions today address Level 1, Level 2, a combination of the two, or Level 3 alone. SLAM (simultaneous localization and mapping) is the family of algorithms commonly used to address Levels 1 and 2: it combines inaccurate or noisy sensor data with estimates of how the robot moves to produce more accurate estimates of position and a map of the environment. No other company, however, effectively combines all three levels, augmenting traditional SLAM with semantics and providing this solution for robotics and drone companies to use.
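To make the core idea of SLAM concrete, here is a minimal, hypothetical sketch of the estimation step it relies on: a one-dimensional Kalman filter that fuses a noisy motion estimate with a noisy position observation. Real SLAM systems estimate a full pose and a map jointly; this toy shows only the fusion principle, and the function names and noise values below are illustrative, not taken from SLAMcore's software.

```python
import random

def kalman_step(x, P, u, z, q=0.05, r=0.5):
    """One predict/update cycle for a 1D position estimate.

    x, P : current position estimate and its variance
    u    : motion estimate (how far the robot thinks it moved)
    z    : noisy position observation (e.g. from a visual landmark)
    q, r : assumed motion and measurement noise variances
    """
    # Predict: apply the motion model; uncertainty grows.
    x_pred = x + u
    P_pred = P + q
    # Update: blend in the observation, weighted by relative confidence.
    K = P_pred / (P_pred + r)          # Kalman gain
    x_new = x_pred + K * (z - x_pred)  # corrected estimate
    P_new = (1 - K) * P_pred           # uncertainty shrinks
    return x_new, P_new

# Toy run: the robot moves 1.0 m per step; both sensors are noisy.
x, P, true_pos = 0.0, 1.0, 0.0
for _ in range(10):
    true_pos += 1.0
    u = 1.0 + random.gauss(0, 0.05 ** 0.5)       # noisy odometry
    z = true_pos + random.gauss(0, 0.5 ** 0.5)   # noisy observation
    x, P = kalman_step(x, P, u, z)
print(f"estimate {x:.2f} +/- {P ** 0.5:.2f} vs truth {true_pos:.2f}")
```

The point of the sketch is that neither the motion estimate nor the observation is trusted on its own: each update weighs them by their uncertainty, which is what lets SLAM build accurate estimates from inaccurate sensors.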

To build robots with true spatial awareness, we need solutions that cut across these three levels. An accurate position means a better-quality map and more confidence in the location of the mapped objects. Adding a semantic layer enriches the map with the location and type of objects of interest, and with how they behave: dynamic, static, or mostly static but likely to move. This information can be used to produce more accurate maps, by removing dynamic objects and flagging areas that are likely to change, and thus enables more robust autonomous behaviour.
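As an illustration of what such a semantic layer might look like in code, the sketch below tags mapped objects with a mobility class and filters them accordingly. The class names and categories are hypothetical, chosen only to mirror the dynamic / static / mostly-static distinction described above.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Mobility(Enum):
    STATIC = auto()       # walls, pillars: safe to localize against
    SEMI_STATIC = auto()  # pallets, furniture: mapped, but likely to move
    DYNAMIC = auto()      # people, forklifts: excluded from the long-term map

@dataclass
class MapObject:
    label: str            # semantic class, e.g. "person" or "shelf"
    position: tuple       # (x, y) in map coordinates
    mobility: Mobility

def localization_landmarks(objects):
    """Keep only objects stable enough to localize against."""
    return [o for o in objects if o.mobility is Mobility.STATIC]

def areas_likely_to_change(objects):
    """Flag map regions that may look different on the next visit."""
    return [o.position for o in objects if o.mobility is Mobility.SEMI_STATIC]
```

Filtering dynamic objects out of the long-term map, and marking semi-static ones as unreliable, is one simple way semantics can feed back into more accurate mapping and more robust behaviour.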

How does SLAMcore deliver spatial AI?

Visual SLAM

We use vision as our main sensing modality, not only because our team includes world-leading computer vision experts but, more importantly, because vision is the only sensing modality that cuts across all three levels of spatial AI.

Sensor fusion

No sensor is perfect, so we fuse vision with other low-cost sensors, such as IMUs and wheel odometers, to produce accurate estimates. We focus on supporting low-cost, commercially available hardware and augmenting its capabilities with our software to deliver high-quality spatial AI. GPS and beacons only address Level 1. LIDAR can be used for Level 2 but struggles at Levels 1 and 3.
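As a toy illustration of why fusing imperfect sensors helps, here is an inverse-variance weighted average, one of the simplest fusion rules for independent estimates with known variances. Production systems use far more sophisticated probabilistic filters, and the sensor readings below are made-up numbers.

```python
def fuse(estimates):
    """Inverse-variance weighted fusion of independent estimates.

    estimates: list of (value, variance) pairs, e.g. a forward velocity
    from visual odometry, IMU integration and a wheel encoder.
    """
    weights = [1.0 / var for _, var in estimates]
    value = sum(w * v for (v, _), w in zip(estimates, weights)) / sum(weights)
    variance = 1.0 / sum(weights)  # fused result is tighter than any input
    return value, variance

# Example: three imperfect sensors, one sharper combined estimate.
v, var = fuse([(0.95, 0.04),   # visual odometry (hypothetical values)
               (1.10, 0.09),   # IMU integration
               (1.02, 0.02)])  # wheel odometry
print(f"fused velocity: {v:.3f} m/s, variance {var:.4f}")
```

The fused variance is smaller than that of the best individual sensor, which is the basic reason combining cheap, noisy sensors can rival a single expensive one.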

Low-level optimization

To ensure we deliver commercially viable solutions, we have a team of embedded software engineers who optimize our software and algorithms to run on low-cost processors. We are committed to ensuring our solutions do not require costly processors, and we recently showcased our positioning software running in real time on a Raspberry Pi.
