A full-stack spatial intelligence solution for positioning, mapping and perception, so you can focus on developing the features that really matter.
Robots, consumer products and drones need to navigate and understand their surroundings. Autonomous operation in dynamic environments requires robust and real-time spatial understanding.
We define three levels of spatial understanding, each answering one critical question:
- Where am I?
- What's around me?
- What are the objects around me?
SLAMcore Spatial Intelligence answers all three questions, providing accurate and robust localization, reliable mapping and enhanced semantic perception, all while running in real time on standard sensors and compute. Accurate positioning enables better navigation and obstacle avoidance; quality maps faithfully represent the surroundings; and semantic perception filters out dynamic objects and enriches maps with object locations and types.
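To illustrate the semantic-perception idea, here is a minimal sketch of using object labels to exclude dynamic objects from a map while recording static ones as landmarks. The class names, the `DYNAMIC_CLASSES` set and the detection format are all hypothetical, chosen for illustration; they are not SLAMcore's API.

```python
# Toy semantic filter: drop detections of dynamic classes so they do not
# pollute the map, and keep static detections as (label, position) landmarks.
DYNAMIC_CLASSES = {"person", "dog", "forklift"}  # assumed labels, for illustration

def build_semantic_map(detections):
    """Keep only static detections as (label, position) landmarks."""
    return [(d["label"], d["position"]) for d in detections
            if d["label"] not in DYNAMIC_CLASSES]

detections = [
    {"label": "person", "position": (1.0, 2.0)},   # dynamic: excluded
    {"label": "shelf",  "position": (3.5, 0.0)},   # static: kept
    {"label": "pallet", "position": (4.0, 1.5)},   # static: kept
]
landmarks = build_semantic_map(detections)
print(landmarks)  # → [('shelf', (3.5, 0.0)), ('pallet', (4.0, 1.5))]
```

In a real pipeline the labels would come from a perception model and the positions from the SLAM estimate; the point is simply that semantic labels let the mapper decide what belongs in the persistent map.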
Vision is the only sensing modality that spans all three levels of spatial understanding, giving robots a depth of spatial awareness that LiDAR alone cannot provide.
We fuse vision with other low-cost sensors, such as IMUs and wheel odometers, for increased robustness.
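As a sketch of why fusing sensors adds robustness, here is a toy complementary filter combining a gyro's angular rate (good short-term) with a heading from wheel odometry (good long-term). This is purely illustrative of sensor fusion in general; the filter form, the `alpha` weight and the noiseless scenario are assumptions, not SLAMcore's algorithm.

```python
def fuse_yaw(prev_yaw, gyro_rate, wheel_yaw, dt, alpha=0.98):
    # Complementary filter: the gyro tracks fast changes, wheel odometry
    # anchors the long-term estimate. alpha is an illustrative tuning weight.
    gyro_yaw = prev_yaw + gyro_rate * dt  # integrate the angular rate
    return alpha * gyro_yaw + (1.0 - alpha) * wheel_yaw

# Toy scenario: the robot turns at a constant 0.5 rad/s for one second.
# Both sensors are noiseless here, so the estimate tracks the true heading.
dt, rate = 0.01, 0.5
true_yaw, est = 0.0, 0.0
for _ in range(100):
    true_yaw += rate * dt
    est = fuse_yaw(est, gyro_rate=rate, wheel_yaw=true_yaw, dt=dt)

print(round(est, 3), round(true_yaw, 3))  # → 0.5 0.5
```

With real sensors the gyro drifts and the wheels slip; weighting each where it is reliable is what makes the fused estimate more robust than either source alone.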
Our team of embedded software engineers has optimised our software and algorithms to run on low-power processors. We are committed to ensuring that our solutions do not require costly processors, and we recently showcased our positioning software running in real time on a Raspberry Pi.