Unleash the power of spatial intelligence
Bring ideas to life faster with full-stack SLAM for positioning, mapping and perception
The power of Slamcore
Humans use their senses and context, processed by the brain, to orient themselves. In the machine world, robots and AR/VR devices need novel algorithms designed and optimized for embedded performance to do the same.
SLAM helps robots and other devices understand their surroundings by calculating their position and orientation relative to the world around them – while simultaneously creating a map of the surroundings.
Our visual SLAM uses data from a stereo camera and fuses it with additional data from other sensors to create reliable, accurate, and robust spatial intelligence that devices can use to understand their surroundings, location, and next move.
How Slamcore makes it happen
Position
Where am I? For robots to make better decisions as they move, they need to know where they are. For headsets to render virtual views that appear anchored in the real world, they need to accurately track their position as they move through space.
Our SLAM software processes images from a stereo camera, detecting notable features in the environment to understand where it is.
Those features are saved to a 3D sparse map that we can use to relocalize across multiple sessions or share with other vision-based products in the same space.
Drift in the position estimate is continuously corrected by comparing previous measurements with the live view and updating the estimate as locations are revisited. This sparse map is only useful for localization; we need more information to navigate the world.
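As a rough illustration of the feature-matching idea behind relocalization, the Python sketch below matches features detected in a live frame against descriptors stored when a sparse map was built. It is a generic example using OpenCV, not Slamcore's implementation; the function name, thresholds and parameters are assumptions for illustration.

```python
import cv2

# Generic illustration of sparse-map relocalization, not Slamcore's implementation.
# Detect features in the live camera frame and match them against descriptors
# stored when the sparse map was built.
orb = cv2.ORB_create(nfeatures=1000)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def relocalize(live_frame_gray, map_descriptors, min_matches=30):
    """Return True if enough live features match the stored sparse map."""
    _, descriptors = orb.detectAndCompute(live_frame_gray, None)
    if descriptors is None:
        return False
    matches = matcher.match(descriptors, map_descriptors)
    # Keep only strong matches; a real pipeline would then estimate the camera
    # pose from the matched 3D map points (e.g. with a PnP solver).
    good = [m for m in matches if m.distance < 40]
    return len(good) >= min_matches
```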
Map
What’s around me? Depth data from a stereo camera, time-of-flight sensor or LIDAR helps us understand how far away physical objects are.
With this information, our robot or headset can build an occupancy map, giving it a full 2D, 2.5D or 3D representation of its environment and allowing it to navigate autonomously or render virtual objects that are occluded (blocked) by the real world.
We can also generate a full 3D dense map of the space, providing the shape, size, color and texture of the objects in the environment – ideal for digital twin creation or 3D navigation.
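To make the occupancy-map idea concrete, here is a minimal Python sketch, generic rather than Slamcore's API, that marks cells of a 2D grid as occupied wherever depth-derived obstacle points land; the grid resolution and function name are assumptions for illustration.

```python
import numpy as np

def update_occupancy_grid(grid, points_xy, origin, resolution=0.05):
    """Mark grid cells containing obstacle points as occupied.

    grid       -- 2D numpy array, 0 = free/unknown, 1 = occupied
    points_xy  -- (N, 2) array of obstacle positions in world metres
    origin     -- (x, y) world coordinate of grid cell (0, 0)
    resolution -- cell size in metres
    """
    cells = np.floor((points_xy - np.asarray(origin)) / resolution).astype(int)
    # Discard points that fall outside the grid bounds.
    inside = (
        (cells[:, 0] >= 0) & (cells[:, 0] < grid.shape[1]) &
        (cells[:, 1] >= 0) & (cells[:, 1] < grid.shape[0])
    )
    cells = cells[inside]
    grid[cells[:, 1], cells[:, 0]] = 1
    return grid
```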
Perceive
What are the objects around me? Augment the previous two levels with deep-learning-powered semantic segmentation, and our robot or headset gains a sense of what it’s actually seeing.
This information can be used lower in the stack to improve position estimates by ignoring measurements that fall on dynamic objects.
For mapping, specific classes of objects can be removed from the map to reduce digital clutter.
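As a hypothetical sketch of how semantic labels feed back into the lower levels, the Python example below, which is not Slamcore's API, drops keypoints that land on dynamic object classes before they are used for localization or added to the map; the class names and function are placeholders.

```python
# Example dynamic classes; a real system would choose these per application.
DYNAMIC_CLASSES = {"person", "car", "dog"}

def filter_dynamic_keypoints(keypoints, segmentation, class_names):
    """Keep only keypoints whose pixel is labelled with a static class.

    keypoints    -- iterable of (u, v) pixel coordinates
    segmentation -- 2D array of per-pixel class indices (same size as image)
    class_names  -- list mapping class index -> class name
    """
    static = []
    for u, v in keypoints:
        label = class_names[segmentation[int(v)][int(u)]]
        if label not in DYNAMIC_CLASSES:
            static.append((u, v))
    return static
```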
When all three levels of SLAM – localization, mapping and semantic understanding – combine, we get full-stack spatial intelligence, providing machines with complete spatial understanding.