September 6, 2022

How do I get around? Creating Real-Time Maps

Posted by
SLAMcore

In this second part of our series explaining the SLAMcore Full-Stack SLAM algorithm, we look at Mapping. 

Part 1 of the series detailed how robots and other devices with spatial intelligence, including next-generation wearable devices for the metaverse, can utilize low-cost, low-power processors and sensors to accurately place themselves in physical space.

But accurate, real-time maps are essential for autonomous navigation. Knowing where objects are around the robot or an individual wearing a mixed reality headset is critical to path planning and obstacle avoidance.

The challenge is to provide enough detail to allow safe movement into unoccupied space without adding significant computational or memory requirements to the design. Using the same sensors and hardware already enabling our Position capabilities, SLAMcore can generate detailed occupancy maps (2D, 2.5D, and 3D) onboard the device in real time. This eliminates the need for cloud-based processing or storage for Mapping, allowing devices to map their surroundings autonomously.

2D maps are simple floor plans showing the positions of obstacles and accessible space, allowing dynamic route planning and obstacle avoidance. While LIDAR-based systems can deliver 2D maps of an environment, SLAMcore algorithms use rich data from stereoscopic depth cameras and inertial sensors to add essential capabilities to maps.
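The idea of a 2D occupancy map can be illustrated with a minimal sketch. This is not SLAMcore's implementation — the class, cell states, and 5 cm resolution are illustrative assumptions — but it shows the core structure: a grid of cells, each marked free, occupied, or unknown, that a planner can query.

```python
import numpy as np

# Minimal 2D occupancy grid illustration (not SLAMcore's implementation).
# Each cell holds one of three states a route planner can query.
FREE, OCCUPIED, UNKNOWN = 0, 1, -1

class OccupancyGrid2D:
    def __init__(self, width_m, height_m, resolution_m=0.05):
        self.resolution = resolution_m
        self.grid = np.full((int(height_m / resolution_m),
                             int(width_m / resolution_m)),
                            UNKNOWN, dtype=np.int8)

    def world_to_cell(self, x_m, y_m):
        # Map world coordinates (metres) to grid indices.
        return int(y_m / self.resolution), int(x_m / self.resolution)

    def mark(self, x_m, y_m, state):
        self.grid[self.world_to_cell(x_m, y_m)] = state

    def is_free(self, x_m, y_m):
        return self.grid[self.world_to_cell(x_m, y_m)] == FREE

# A 10 m x 10 m room at 5 cm resolution -> 200 x 200 cells
g = OccupancyGrid2D(10, 10)
g.mark(2.0, 3.0, OCCUPIED)   # an observed obstacle
g.mark(1.0, 1.0, FREE)       # confirmed free space
```

At 5 cm resolution the whole room fits in a 200 × 200 byte array, which is why 2D maps are so cheap to store and update.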

For example, our 2.5D maps add height to calculations of volumes of space in a scene, meaning these maps can indicate where space exists above or below obstacles. This is instrumental in real-world situations, for example when a cleaning robot needs to move underneath a table or a drone flies over a divide.
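One way to picture how height turns a flat grid into a 2.5D map is a per-cell clearance value. The sketch below is an illustrative assumption (the function names, 5 cm cells, and thresholds are ours, not SLAMcore's): each ground cell stores the lowest obstacle height above it, so a robot of known height can check whether it fits underneath.

```python
import numpy as np

# Sketch of a 2.5D map: each ground cell stores the lowest obstacle height
# above the floor. Names and figures are illustrative, not SLAMcore's API.
clearance_map = np.full((200, 200), np.inf)  # inf = nothing overhead

def observe_obstacle(row, col, height_m):
    # Record the lowest obstacle seen above this cell.
    clearance_map[row, col] = min(clearance_map[row, col], height_m)

def can_traverse(row, col, robot_height_m):
    # The robot fits under the obstacle if it is shorter than the clearance.
    return robot_height_m < clearance_map[row, col]

observe_obstacle(50, 50, 0.70)          # table top 70 cm above the floor
print(can_traverse(50, 50, 0.30))       # 30 cm cleaning robot fits -> True
print(can_traverse(50, 50, 0.90))       # 90 cm robot does not -> False
```

A pure 2D map would mark the table cell as occupied outright; the height value is exactly what lets the planner route the short robot through it.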

3D maps go further, using rendering and coloring to provide additional detail. For example, 3D depth maps color-code pixels to represent their distance from the device; knowing how far away objects are improves navigation accuracy.
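Depth color-coding itself is a simple mapping from metres to intensity. The snippet below sketches one possible scheme (the linear near-bright/far-dark normalisation and 5 m range are our assumptions; a real renderer would typically apply a full colour map):

```python
import numpy as np

# Illustrative depth colour-coding: map each pixel's distance to an
# 8-bit intensity (near = bright, far = dark). The linear scheme and
# 5 m maximum range are assumptions for this sketch.
def depth_to_intensity(depth_m, max_range_m=5.0):
    clipped = np.clip(depth_m, 0.0, max_range_m)
    return (255 * (1.0 - clipped / max_range_m)).astype(np.uint8)

depth = np.array([[0.5, 2.5],
                  [5.0, 10.0]])        # metres, per pixel
print(depth_to_intensity(depth))
```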

Crucially, these enhanced mapping capabilities do not create significant processing or memory overheads. Developers of autonomous solutions must always balance the opportunities that detailed Mapping provides against the additional compute demands, and increasing map detail has traditionally required significant computing power, processing time, and memory. We've ensured that our memory footprint is small enough that even detailed 3D maps of warehouse-scale spaces fit within minimal memory constraints.

A dense 2.5D map of a living room, office, hospital, or even warehouse needs only tens of megabytes of storage. Our detailed maps can also be generated in real time on low-cost processors: SLAMcore 2.5D and 3D Mapping is optimized for the low-power ARM CPUs favored by the industry.
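A quick back-of-envelope calculation shows why a dense 2.5D map stays in the tens-of-megabytes range. The cell size, per-cell storage, and warehouse dimensions below are assumed figures for illustration, not SLAMcore's actual data layout:

```python
# Back-of-envelope memory estimate for a dense 2.5D map.
# All figures are illustrative assumptions, not SLAMcore's data layout.
resolution_m = 0.05          # 5 cm cells
bytes_per_cell = 4           # one 32-bit height value per cell
warehouse_m = (100, 50)      # hypothetical 100 m x 50 m floor

cells = (warehouse_m[0] / resolution_m) * (warehouse_m[1] / resolution_m)
megabytes = cells * bytes_per_cell / 1e6
print(f"{megabytes:.0f} MB")   # 2 million cells -> 8 MB for the whole floor
```

Even scaling the floor area up several times, or storing a few extra bytes per cell, keeps such a map comfortably within the claimed tens of megabytes.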

Mapping is fundamental to adopting autonomous robots, machines, and wearable devices across industries. We believe that to be valuable, maps must be persistent and portable. Creating an accurate, robust, and persistent map once, and using it multiple times increases efficiency and allows for better planning, interaction and cooperation in real-world situations. 

SLAMcore’s breakthrough in using low-cost cameras effectively for multisession Mapping lays the foundation for immediate improvements and exciting future developments. Our vision is to constantly update maps, making them available across fleets so that what one robot encounters is sharable with all the robots and devices in that environment. 

Understanding where things are around you is the heart of spatial intelligence. The next step is to understand what individual objects are. Semantic understanding opens a new door to a wide range of opportunities in robotics and mixed reality as we’ll discuss in the next blog.

Read more at www.slamcore.com/blog

Get the latest updates from SLAMcore

Sign up to our newsletter to hear about our latest releases, product updates and news