Creating a universal language of Spatial Intelligence

Today, the opportunities for robots and other autonomous machines are enormous. There has been an explosion of use cases and predicted value for robots in almost every sector of society. The COVID pandemic has seen robots rolled out in roles ranging from UV cleaning of hospitals to last-mile delivery, and from logistics and warehouses to off-shore wind farms. Robots, as well as a growing number of consumer electronics products, are demonstrating increasingly sophisticated autonomous skills – especially in the crucial area of simultaneous localisation and mapping (SLAM).

Where am I?

To be effective, and to operate safely in dynamic environments among people and other devices, every autonomous robot must be able to accurately, reliably and consistently answer the question ‘Where am I?’. This positioning, or localisation, is the fundamental layer of SLAM systems: robots must be able to calculate exactly where they are at all times. Ideally, they should do this without recourse to external infrastructure such as GPS, beacons or other waypoint systems. The ability to calculate position accurately (to centimetre level) in any new environment will allow autonomous devices to rapidly deliver value in thousands of situations.
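To make that question concrete: a typical SLAM system answers it by continually estimating a six-degree-of-freedom pose, a 3D position plus an orientation, relative to the map it is building. The short Python sketch below illustrates just the data such an estimate carries; the `Pose` class and its method are hypothetical, not any particular vendor's API.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class Pose:
    """A 6-DoF pose: where the robot is and which way it is facing."""
    position: np.ndarray   # shape (3,), metres, expressed in the map frame
    rotation: np.ndarray   # shape (3, 3) rotation matrix, body frame -> map frame

    def to_map_frame(self, point_in_body: np.ndarray) -> np.ndarray:
        """Express a point the robot sees in its own frame in map coordinates."""
        return self.rotation @ point_in_body + self.position

# A SLAM system refines this estimate many times per second as new camera,
# IMU and odometry data arrive; here we only show the state it maintains.
pose = Pose(position=np.array([2.10, 0.35, 0.0]),  # 2.10 m along x, 0.35 m along y
            rotation=np.eye(3))                    # facing along the map's x-axis
print(pose.to_map_frame(np.array([1.0, 0.0, 0.0])))  # -> [3.1  0.35 0.  ]
```

Everything downstream, from obstacle avoidance to path planning to collaboration, depends on this estimate being accurate and trustworthy at all times.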

Trapped in their own reality

The leading robotics firms are already creating robots that can do this, but each accomplishes it in its own way. Every autonomous device ‘sees’ the world through the specific hardware and software set-up tailored to its intended use and operational environment; outside that set-up, it will fail. Fundamentally, even the smartest autonomous machines, those able to interact with the ‘real’ world in meaningful albeit limited ways, do so in a manner completely incomprehensible to other machines or humans. They are trapped by their own bespoke ways of positioning themselves in the physical world.

For example, an automated cleaning robot and a hospitality robot working in the same shopping mall will perceive the environment around them completely differently. Each exists only in a narrowly defined spatial silo aligned to the specific routes, functions and parameters created so it can fulfil its set of tasks. They are unable to collaborate with each other, other machines, objects or people. The information each uses to calculate its position is incomprehensible to any other system. Not only is there no shared understanding between these devices, but the tight integration of sensors and SLAM software means that attempting to re-use the algorithms from one in the other won’t just lead to sub-optimal performance but will fail altogether. Without its own bespoke combination of hardware and software, each robot will be unable to answer that core question of ‘Where am I?’
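The silo problem is easy to picture in code. Suppose, purely hypothetically, that the cleaning robot reports its position in millimetres in a z-up frame anchored at a service entrance, while the hospitality robot uses metres in a y-up frame anchored at the information desk. The raw numbers each publishes are meaningless to the other unless someone builds and calibrates an explicit translation between them; all the values below are invented for illustration.

```python
import numpy as np

# Illustrative conventions only; in practice each vendor invents its own.
cleaner_pos_mm = np.array([12500.0, 3200.0, 0.0])  # millimetres, z-up frame
greeter_pos_m = np.array([4.1, 0.0, -7.8])         # metres, y-up frame

# Handing one robot the other's raw coordinates is nonsense: wrong units,
# wrong axes, wrong origin. Interoperating requires an explicit transform,
# with the values below standing in for a real (hypothetical) calibration.
MM_PER_M = 1000.0
R_greeter_to_cleaner = np.array([[1.0, 0.0, 0.0],
                                 [0.0, 0.0, -1.0],
                                 [0.0, 1.0, 0.0]])   # rotates y-up into z-up
t_origin_offset_mm = np.array([2000.0, 500.0, 0.0])  # greeter's origin, in the cleaner's frame

greeter_in_cleaner_frame = (
    R_greeter_to_cleaner @ (greeter_pos_m * MM_PER_M) + t_origin_offset_mm
)
print(greeter_in_cleaner_frame)  # the greeter's position, in the cleaner's terms
```

Today such a transform has to be hand-built and maintained for every pair of systems; a common spatial language would make it unnecessary.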

Opportunity postponed

These spatial silos are slowing the development of devices and robots that could be solving some of the most pressing challenges in the economy, the environment and society. The cost, time and resources needed to recreate the complex SLAM solutions required for increasingly sophisticated autonomous robot applications exclude all but the wealthiest businesses from the market. Not only is it time-intensive, technically challenging and expensive to create the SLAM systems needed for today’s emerging autonomous devices, but every developer has to make its own mistakes and solve its own problems. The lack of common, shareable approaches to understanding the physical space around autonomous devices is holding back the industry and denying it multiple valuable applications.

These myriad approaches are creating a modern-day Tower of Babel that threatens the projected growth and viability of the robotics industry. My next blog will explore how we at Slamcore are building a Rosetta Stone to break down these silos of unintelligibility and create a shared language of spatial intelligence.