We're pushing the boundaries of what's possible today and what SLAM will look like tomorrow. Get a sneak peek below of what we're working on. SLAMcore customers can test these features as part of the SLAMcore Visionary Program.
SLAMcore Labs showcases upcoming SLAMcore capabilities, available today to SLAMcore Visionary customers through custom projects.
Using a dedicated neural network running on a local GPU, SLAMcore's algorithms combine raw infrared images, depth maps and IMU data to produce smoother, more accurate depth/disparity maps.
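As a rough illustration of multi-cue depth fusion, the sketch below blends two per-pixel depth estimates by their confidence weights. The function name, inputs and weighting scheme are assumptions for illustration only, not SLAMcore's actual network or API.

```python
def fuse_depth(stereo_depth, stereo_conf, learned_depth, learned_conf):
    """Confidence-weighted fusion of two per-pixel depth estimates.

    A hypothetical stand-in for learned depth fusion: each output pixel
    is the confidence-weighted mean of the two input estimates.
    """
    fused = []
    for d1, c1, d2, c2 in zip(stereo_depth, stereo_conf, learned_depth, learned_conf):
        total = c1 + c2
        fused.append((d1 * c1 + d2 * c2) / total if total > 0 else 0.0)
    return fused

# One image row: each estimate dominates where its confidence is higher.
row = fuse_depth([2.0, 4.0], [0.9, 0.1], [2.2, 3.0], [0.1, 0.9])
```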
Customers today build cm-accurate 2.5D maps using the SLAMcore SDK. Now, from the same data, SLAMcore Labs creates full 3D maps with RGB rendering.
SLAMcore Labs calculates accurate 3D occupancy maps in real time, capturing detail and texture from the real-world environment around the robot.
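A common way to keep a probabilistic 3D occupancy map cheap to update in real time is a log-odds voxel grid, sketched below. The sensor-model constants and voxel keys here are illustrative assumptions, not SLAMcore's implementation.

```python
import math

# Log-odds sensor model: a hit raises the voxel's occupancy belief,
# a miss lowers it. These probabilities are illustrative assumptions.
L_HIT = math.log(0.7 / 0.3)
L_MISS = math.log(0.4 / 0.6)

def update(grid, voxel, hit):
    """Accumulate log-odds evidence for one (x, y, z) voxel."""
    grid[voxel] = grid.get(voxel, 0.0) + (L_HIT if hit else L_MISS)

def probability(grid, voxel):
    """Convert stored log-odds back to an occupancy probability."""
    return 1.0 / (1.0 + math.exp(-grid.get(voxel, 0.0)))

grid = {}
for _ in range(3):
    update(grid, (1, 2, 0), hit=True)   # repeated hits push P(occupied) up
update(grid, (5, 0, 0), hit=False)      # a miss pushes it down
```

Storing log-odds rather than probabilities makes each update a single addition, which is what keeps large grids updatable at sensor rate.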
3D mapping at scale is notoriously memory hungry. SLAMcore Labs' sub-mapping strategy allows maps to scale from a living room to an entire city.
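The idea behind sub-mapping can be sketched as tiling the world into fixed-size submaps and keeping only recently used tiles in memory. The tile size, eviction policy and class below are illustrative assumptions, not SLAMcore's design.

```python
from collections import OrderedDict

class SubmapAtlas:
    """Sketch of sub-mapping: observations land in fixed-size tiles, and
    only the most recently used tiles stay resident, so memory stays
    bounded as the mapped area grows."""

    def __init__(self, tile_size=10.0, max_resident=2):
        self.tile_size = tile_size
        self.max_resident = max_resident
        self.resident = OrderedDict()   # tile id -> submap data (in memory)
        self.archived = {}              # evicted tiles (stand-in for disk)

    def _tile(self, x, y):
        return (int(x // self.tile_size), int(y // self.tile_size))

    def insert(self, x, y, obs):
        tile = self._tile(x, y)
        if tile not in self.resident:
            # Reload the tile if it was previously evicted.
            self.resident[tile] = self.archived.pop(tile, [])
        self.resident.move_to_end(tile)
        self.resident[tile].append(obs)
        while len(self.resident) > self.max_resident:
            old, data = self.resident.popitem(last=False)
            self.archived[old] = data   # evict least recently used tile

atlas = SubmapAtlas()
for x in (1.0, 15.0, 31.0):             # observations spread over three tiles
    atlas.insert(x, 0.0, obs="point")
```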
Applying a neural network to raw sparse map point clouds, SLAMcore Labs produces faster, more accurate and more resource-efficient position estimates. The network selects the features best suited to positioning and rejects features belonging to dynamic objects.
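The selection step can be sketched as scoring each landmark, discarding those on dynamic objects, and keeping the top scorers. The feature records, labels and score field below are hypothetical stand-ins for a network's per-feature output.

```python
def select_features(features, keep=2, dynamic=("person", "vehicle")):
    """Sketch of learned feature selection for localisation: reject
    landmarks on dynamic objects, then keep the highest-scoring rest.
    The 'score' field stands in for a network's per-feature weight."""
    static = [f for f in features if f["label"] not in dynamic]
    return sorted(static, key=lambda f: f["score"], reverse=True)[:keep]

features = [
    {"id": 0, "label": "wall",   "score": 0.9},
    {"id": 1, "label": "person", "score": 0.8},   # dynamic object: rejected
    {"id": 2, "label": "floor",  "score": 0.4},
    {"id": 3, "label": "shelf",  "score": 0.7},
]
best = select_features(features)
```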
SLAMcore's semantic recognition algorithms color-code an entire image or map to identify different objects and surfaces. Walls, floors, tables, chairs, pallets, people and more can be color-coded for easy recognition.
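Rendering such a map comes down to mapping each pixel's class label to a display color, as in the sketch below. The labels and RGB values are assumptions for illustration, not SLAMcore's palette.

```python
# Illustrative class-to-color palette for rendering a semantic map.
PALETTE = {
    "wall":   (200, 200, 200),
    "floor":  (120, 80, 40),
    "chair":  (0, 120, 255),
    "person": (255, 0, 0),
}

def colorize(label_image):
    """Replace each per-pixel class label with its display color;
    unknown labels fall back to black."""
    return [[PALETTE.get(label, (0, 0, 0)) for label in row]
            for row in label_image]

preview = colorize([["wall", "chair"],
                    ["floor", "floor"]])
```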
A next-generation feature identifies, locates and counts each semantic object, so maps include individual objects (such as chairs) together with their locations and counts.
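Going from per-pixel labels to countable objects can be sketched as connected-component grouping: touching cells of the same class form one instance. The grid and labels below are illustrative, not SLAMcore code.

```python
def count_instances(grid, target):
    """Count separate objects of one class in a 2D label grid using
    4-connected flood fill: touching cells of the same class are
    treated as one instance."""
    seen = set()
    count = 0
    for r, row in enumerate(grid):
        for c, label in enumerate(row):
            if label != target or (r, c) in seen:
                continue
            count += 1
            stack = [(r, c)]            # flood-fill one instance
            while stack:
                y, x = stack.pop()
                if (y, x) in seen:
                    continue
                seen.add((y, x))
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if (0 <= ny < len(grid) and 0 <= nx < len(grid[0])
                            and grid[ny][nx] == target):
                        stack.append((ny, nx))
    return count

room = [
    ["chair", "floor", "chair"],
    ["chair", "floor", "floor"],
]
n = count_instances(room, "chair")   # two separate chairs
```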