Next-level spatial intelligence

Our foundational technology combines state-of-the-art visual SLAM, sensor fusion and advanced AI for real-time positioning, perception and mapping

The power of Slamcore

Humans orient themselves using senses and context, processed by the brain. In the machine world, robots need novel algorithms designed and optimized for embedded performance.

SLAM (simultaneous localization and mapping) helps robots and other devices understand their surroundings by calculating their position and orientation relative to the world around them – whilst at the same time creating and continuously updating a map of the surroundings.
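
As a rough illustration of that loop, the sketch below (in Python, with made-up odometry and landmark values) shows the two halves of the problem side by side: predicting the device's pose from motion data while folding newly observed landmarks into a map. It is a conceptual outline only, not how our pipeline is implemented.

```python
# Conceptual sketch of the SLAM loop described above: estimate the pose from
# incoming sensor data while updating a map of observed landmarks.
# Illustrative outline only; values and structure are assumptions.
import numpy as np

def slam_step(pose, map_points, odometry, observations):
    """One SLAM iteration: predict the new pose, then grow the map."""
    # 1. Localization: predict where we are using the motion estimate.
    predicted_pose = pose + odometry

    # 2. Mapping: fold newly observed landmarks (given in the robot frame)
    #    into the world-frame map using the predicted pose.
    for obs in observations:
        map_points.append(predicted_pose + obs)

    # A real system would jointly optimise the pose and the map here;
    # this sketch simply returns the prediction.
    return predicted_pose, map_points

pose = np.zeros(2)                          # start at the origin (x, y)
world_map = []                              # sparse map of landmark positions
for step in range(3):
    odometry = np.array([1.0, 0.0])         # pretend we moved 1 m forward
    observations = [np.array([0.5, 2.0])]   # a landmark 0.5 m ahead, 2 m left
    pose, world_map = slam_step(pose, world_map, odometry, observations)

print("final pose:", pose)
print("map size:", len(world_map))
```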

Our visual SLAM uses data from a stereo camera and can fuse it with additional data from other sensors to create reliable, accurate, and robust spatial intelligence that devices can use to understand their surroundings, location, and next move.
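
The sketch below illustrates the idea of fusing a visual pose estimate with data from another sensor, here wheel odometry blended with a fixed weight. The weights and sensor choice are assumptions for illustration; a production system would typically use a probabilistic filter that weighs each sensor by its uncertainty.

```python
# Minimal sketch of sensor fusion: a visual SLAM pose estimate blended with
# wheel odometry. Weights and sensor names are illustrative assumptions,
# not part of any Slamcore API.
import numpy as np

def fuse_poses(visual_pose, wheel_pose, visual_weight=0.8):
    """Weighted blend of two (x, y, heading) estimates of the same pose.
    A fixed weight keeps the idea visible; a real system would use a
    probabilistic filter (e.g. an EKF) driven by each sensor's covariance."""
    w = visual_weight
    return w * visual_pose + (1.0 - w) * wheel_pose

visual_pose = np.array([2.02, 1.01, 0.10])  # from the stereo SLAM pipeline
wheel_pose = np.array([2.10, 0.95, 0.12])   # from wheel encoders (drifts over time)
print(fuse_poses(visual_pose, wheel_pose))
```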

Our SDK

We deliver our technology to customers via products that are built on our in-house, proprietary Software Development Kit (SDK).

Slamcore’s SDK provides a spatial intelligence foundation on which to build, enhance and enable products for a wide range of industries and applications, from consumer-grade robots, drones and devices, to industrial-grade machinery and vehicles.
Since we launched a beta version of our SDK publicly in 2021, hundreds of customers have integrated our spatial intelligence technology into their products, some of which you can read about below.

Whilst we’re not currently releasing our SDK as a standalone commercial product, it remains the foundation for our current and future product roadmap and is at the core of our specialised products for the intralogistics industry.

Products

Under the hood

Position

Where am I?

For robots to make better decisions as they move through space, they need to know where they are.

Our visual-inertial SLAM software processes images from a stereo camera, detecting distinctive features in the environment, which are used to estimate where the camera is in space.
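
The snippet below sketches this feature-detection step using OpenCV's ORB detector on a synthetic stereo pair; the actual detectors and matching strategy in our pipeline are proprietary and differ from this illustration.

```python
# Illustrative feature detection and matching on a stereo pair using OpenCV.
# Synthetic images stand in for real camera frames.
import cv2
import numpy as np

# Stand-in left/right frames; a real system would grab these from a stereo camera.
left = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
right = np.random.randint(0, 256, (480, 640), dtype=np.uint8)

orb = cv2.ORB_create(nfeatures=500)
kp_left, des_left = orb.detectAndCompute(left, None)
kp_right, des_right = orb.detectAndCompute(right, None)

# Match features between the two views; matched pairs can be triangulated
# into 3D points and used to estimate where the camera is in space.
if des_left is not None and des_right is not None:
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des_left, des_right)
    print(f"{len(kp_left)} left features, {len(kp_right)} right, {len(matches)} matches")
```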

Those features are saved to a sparse 3D map that can be used to relocalize across multiple sessions or shared with other vision-based products operating in the same space.
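
A toy version of such a sparse map is sketched below: landmark positions stored alongside their descriptors so a later session, or another device, can reload them and relocalize. The file format and fields are assumptions for the example, not the format our products use.

```python
# Toy sparse map: 3D landmark positions plus descriptors, persisted so a
# later session can reload them and relocalize. Format is an assumption.
import numpy as np

def save_sparse_map(path, points_3d, descriptors):
    """Persist landmark positions (N x 3) and descriptors (N x 32) together."""
    np.savez(path, points_3d=points_3d, descriptors=descriptors)

def load_sparse_map(path):
    data = np.load(path)
    return data["points_3d"], data["descriptors"]

points = np.random.rand(100, 3).astype(np.float32)             # landmark positions
descriptors = np.random.randint(0, 256, (100, 32), np.uint8)   # ORB-style descriptors
save_sparse_map("session_map.npz", points, descriptors)
reloaded_points, reloaded_desc = load_sparse_map("session_map.npz")
print(reloaded_points.shape, reloaded_desc.shape)
```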

Drift in the position estimate is continuously accounted for by comparing previous measurements against the live view and triggering corrections whenever a location is revisited, a process known as loop closure.
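
The sketch below shows the basic shape of loop-closure detection: the live view's feature descriptors are matched against stored keyframes, and a revisit is flagged when enough of them agree. The matcher and thresholds are illustrative assumptions; a real pipeline would also verify geometric consistency before correcting the trajectory.

```python
# Sketch of loop-closure detection: compare the live view's descriptors
# against stored keyframes and flag a revisit when enough of them match.
# Thresholds are illustrative assumptions.
import cv2
import numpy as np

def detect_loop_closure(live_descriptors, keyframes, min_matches=40):
    """Return the index of a previously visited keyframe, or None."""
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    for idx, kf_descriptors in enumerate(keyframes):
        matches = matcher.match(live_descriptors, kf_descriptors)
        # Keep only confident matches; a real pipeline would also check
        # geometric consistency before triggering a pose-graph correction.
        good = [m for m in matches if m.distance < 50]
        if len(good) >= min_matches:
            return idx
    return None

# Stand-in data: binary descriptors for three stored keyframes and a live frame.
keyframes = [np.random.randint(0, 256, (200, 32), np.uint8) for _ in range(3)]
live = keyframes[1].copy()   # pretend the robot is seeing keyframe 1 again
print("revisited keyframe:", detect_loop_closure(live, keyframes))
```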

Our robust feature detectors, combined with the AI models used in our SLAM pipeline, ensure accurate positioning even in challenging, dynamic environments.

Perceive

What are the objects around me?

Object detection and semantic segmentation are integrated directly into our positioning pipeline and give your robot a sense of what it’s actually seeing.

This information can be used to improve position estimates by ignoring measurements associated with dynamic objects, or to enhance obstacle avoidance by enabling different navigation behaviours depending on the detected obstacle class.
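
Both uses are sketched below: feature measurements that land on a dynamic object class are dropped before they can corrupt the position estimate, and a navigation behaviour is chosen per obstacle class. The class names and behaviours are assumptions for the example only.

```python
# Illustrative use of semantics in the pipeline: drop features on dynamic
# objects and pick a navigation behaviour per obstacle class.
# Class names and behaviours are assumptions for the example.
import numpy as np

DYNAMIC_CLASSES = {"person", "forklift"}

def filter_dynamic_features(features, segmentation, class_names):
    """Keep only features whose pixel falls on a static class in the
    semantic segmentation mask (H x W array of class indices)."""
    kept = []
    for (u, v) in features:
        label = class_names[segmentation[v, u]]
        if label not in DYNAMIC_CLASSES:
            kept.append((u, v))
    return kept

def behaviour_for(obstacle_class):
    """Choose a navigation behaviour based on the detected obstacle class."""
    if obstacle_class == "person":
        return "stop_and_wait"
    if obstacle_class == "pallet":
        return "replan_around"
    return "slow_down"

class_names = ["floor", "person", "pallet", "forklift"]
segmentation = np.zeros((480, 640), dtype=np.int64)
segmentation[100:300, 200:400] = 1          # a person detected in the frame
features = [(250, 150), (500, 400)]         # (u, v) pixel coordinates
print(filter_dynamic_features(features, segmentation, class_names))
print(behaviour_for("person"))
```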