September 28, 2021

Overcoming the robot Tower of Babel (part 2 of 3)

Posted by
Owen Nicholson

A modern-day Rosetta Stone of Spatial Intelligence

At SLAMcore we are speaking to hundreds of innovative, entrepreneurial developers who are desperate to move their designs out of the lab, from proof-of-concept to real-world deployment. These are the next generation of businesses that will drive the expected growth and value of the robotics market. They already have brilliant ideas, designs and applications that could quite literally change the world. But many are stuck reinventing the core technology of spatial intelligence. Each pursues its own approach, establishing new silos, then struggling to adapt to the thousands of ‘edge-cases’ that cause their designs to fail in unexpected ways.

Working with these organizations, as well as the larger established players, SLAMcore is beginning to democratize access to a shared approach to mapping, positioning and understanding the world for autonomous devices.

Multiple designs, one approach

There is no one-size-fits-all robot; each will have a form factor, hardware/software set-up and parameters tailored to meet specific requirements. But there can be a consistent and repeatable way for these devices to perceive the world around them. Just as there are thousands of different ‘designs’ of living creatures on planet Earth, there is one, surprisingly consistent, way they position themselves in the world around them. This approach, honed through millions of years of evolution, uses a combination of two eyes and inertial sensors in the inner ear to calculate position.

SLAMcore has mirrored nature by creating a common approach to SLAM using two cameras and an IMU (inertial measurement unit). With these core components we are creating a universal language of spatial intelligence that can benefit all robot developers. Simple, effective and widely available sensors, combined with our algorithms, give a consistent, repeatable and shareable way for any robot to perceive, map and describe the world around it.
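To see why this camera-plus-IMU pairing works, consider what each sensor contributes: the IMU gives smooth, high-rate motion estimates but drifts over time, while the camera gives drift-free position fixes at a lower rate. The sketch below is purely illustrative (it is not SLAMcore's algorithm or API), showing the idea as a one-dimensional complementary filter:

```python
# Illustrative only: a 1D complementary filter blending IMU dead-reckoning
# with occasional camera position fixes. Real visual-inertial SLAM is far
# more sophisticated; this just shows why the two sensors complement
# each other.

def fuse_step(pos, vel, accel, dt, camera_fix=None, alpha=0.2):
    """Advance one timestep: dead-reckon with the IMU, then nudge the
    estimate toward the camera's position fix when one is available."""
    vel += accel * dt            # integrate IMU acceleration -> velocity
    pos += vel * dt              # integrate velocity -> position (drifts)
    if camera_fix is not None:   # a visual fix corrects accumulated drift
        pos = (1 - alpha) * pos + alpha * camera_fix
    return pos, vel

# Dead-reckon for 1 second at 1 m/s^2 with no camera fixes:
pos, vel = 0.0, 0.0
for _ in range(10):
    pos, vel = fuse_step(pos, vel, accel=1.0, dt=0.1)
# pos is 0.55 m with this simple Euler integration
```

The `alpha` parameter trades trust between the two sources: IMU-only estimates accumulate error without bound, while each camera fix pulls the estimate back toward ground truth.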

Vision is the key

Even basic, low-cost standard-definition cameras capture huge amounts of data. Processed in the right way, this information can support instant and accurate calculation of position – even with no prior knowledge of the location or physical situation. Our algorithms use this data to create sparse point-clouds of the ‘features’ in any scene, which a robot or autonomous device can use to calculate its position accurately and robustly. We also use the same data to create detailed 2.5D and 3D maps that add further functionality, including identification of free space that is safe to occupy. Finally, our software identifies and labels the objects in a scene, attaching semantic understanding of ‘what’ the robot is seeing – the basis of decisions about how it should react.
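The three levels described above – sparse features for positioning, maps with free space, and semantic labels – can be pictured as a single layered data structure. The following miniature sketch uses hypothetical names (not SLAMcore's API) to make the layering concrete:

```python
# Illustrative sketch only: the three levels of spatial understanding
# described above, in miniature. All names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Landmark:
    """Level 1: one 'feature' in the sparse point cloud used for positioning."""
    x: float
    y: float
    z: float

@dataclass
class LabelledObject:
    """Level 3: semantic label for 'what' the robot is seeing, and where."""
    label: str      # e.g. "pallet", "person"
    x: float
    y: float

class SceneMap:
    """Toy container holding all three levels for one scene."""
    def __init__(self, width, height):
        self.landmarks = []                       # level 1: sparse points
        self.occupied = [[False] * width          # level 2: occupancy grid
                         for _ in range(height)]  #   for free-space queries
        self.objects = []                         # level 3: semantic labels

    def mark_occupied(self, gx, gy):
        self.occupied[gy][gx] = True

    def is_free(self, gx, gy):
        """Free-space query: is this cell safe for the robot to occupy?"""
        return not self.occupied[gy][gx]
```

Each layer answers a different question – "where am I?", "where can I go?", and "what is around me?" – from the same camera data.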

Using vision as the primary input creates a common framework – a language of spatial intelligence that can be shared with other devices, and with humans. If robots of all types, and the humans that work with and around them, all ‘see’ physical space in the same way, it becomes much easier to collaborate, share and build common understanding.

Constant evolution

SLAMcore will both unleash and benefit from a new wave of growth in the robotics sector. Our customers benefit not only from a shared language of spatial intelligence that short-cuts their own development cycles, but also from a constantly growing knowledge base. Using a common language to describe the world around robots means that learnings can be aggregated.

Surprisingly, although robots and autonomous devices come in many different shapes and sizes, they tend to fail in common ways. Outside of the lab, robots quickly encounter unexpected situations – unforeseen objects, different lighting conditions, new layouts or physical environments. These ‘edge-cases’ are hard to anticipate, simulate and program for, and they are usually the events that cause robots to fail in the real world. Describing these situations in a common way – using data from cameras and IMUs processed consistently by SLAM algorithms – means that every edge-case, and its solution, can be recorded and shared. Just as we learn to move around as children through trial and error, robots can learn from their errors. Sharing data and creating reusable maps allows knowledge about a physical environment to persist, so that robots can learn from each other’s experiences.
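Recording edge-cases "in a common way" implies a shared, device-agnostic format. The snippet below is a hypothetical sketch of such a record – the field names are illustrative, not a real SLAMcore schema:

```python
# Hypothetical sketch: packaging an 'edge-case' in a common, shareable
# form so that any developer's robot, whatever its form factor, can
# learn from it. Field names are illustrative, not a real schema.
import json

def make_edge_case_record(robot_type, description, pose, resolution):
    """Describe a failure event and how it was resolved."""
    return {
        "robot_type": robot_type,    # e.g. "delivery_drone"
        "description": description,  # what went wrong
        "pose": pose,                # where it happened (x, y, z)
        "resolution": resolution,    # how it was handled
    }

record = make_edge_case_record(
    "delivery_drone",
    "lost tracking under strong backlight",
    (12.4, 3.1, 2.0),
    "re-localized against the shared map")
shared = json.dumps(record)  # serialized for a shared knowledge base
```

Because the record describes the event in sensor-and-map terms rather than robot-specific ones, a wheeled robot could consume the same record as the drone that produced it.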

SLAMcore allows developers and designers to learn from the errors of others and progress further, faster. With tens of thousands of sessions, thousands of hours and well over three million metres travelled and recorded by our customers, we have deeper knowledge across a wider range of scenarios than any individual developer. Our common language means that, for example, an event or failure experienced by a drone designed for delivery can still provide valuable information to a designer of a wheeled robot for hospitality scenarios. This constant evolution will accelerate the industry as more and more real-world scenarios are encountered and effectively managed – the experience of one benefits everyone.

This common language of spatial intelligence not only helps developers solve today’s problems more quickly, efficiently and cost-effectively, but establishes a powerful foundation for future applications. The potential of shared maps and digital twins is explored in my next blog.

