Popular opinion holds that today’s warehouses are already full of robots scurrying around fetching and packing products. Those in the industry know the reality is somewhat different. Warehouse and intralogistics managers must coordinate a complex ecosystem of autonomous and manually operated machinery, as well as people, all moving around a chaotic and ever-changing space. Knowing exactly where every moving part is, at all times, is essential for improving efficiency and ensuring safety.

For many, this still means basic ‘walkie-talkie’ communications: foremen ask drivers to report where they (and their vehicles) are and direct them to their next destination. But as warehouses grow bigger and more complex, and humans work alongside automated and autonomous solutions, more integrated, real-time solutions are needed. Real-Time Location Systems (RTLS), as the name suggests, provide instant and continuous information on the position of mobile assets and feed that data into warehouse management systems, many of which increasingly use AI to plan and optimize routes and work schedules.
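To make the idea concrete, here is a minimal sketch of how an RTLS might stream position fixes into a warehouse management system. The names and fields are purely illustrative assumptions, not any particular RTLS or WMS API.

```python
import time
from dataclasses import dataclass


@dataclass
class PositionUpdate:
    """One RTLS fix for a tracked asset (vehicle, robot, or person)."""
    asset_id: str
    x_m: float        # position in the facility frame, meters
    y_m: float
    heading_deg: float
    timestamp: float  # seconds since epoch


class WarehouseTracker:
    """Keeps the latest known position of every tracked asset."""

    def __init__(self):
        self.latest = {}

    def ingest(self, update: PositionUpdate):
        # Ignore stale fixes that arrive out of order over the network.
        prev = self.latest.get(update.asset_id)
        if prev is None or update.timestamp >= prev.timestamp:
            self.latest[update.asset_id] = update

    def position_of(self, asset_id: str):
        u = self.latest.get(asset_id)
        return (u.x_m, u.y_m) if u else None


tracker = WarehouseTracker()
tracker.ingest(PositionUpdate("forklift-7", 12.5, 3.2, 90.0, time.time()))
print(tracker.position_of("forklift-7"))  # (12.5, 3.2)
```

A real deployment would stream such updates continuously over a message bus; the point here is simply that each asset is reduced to a timestamped position the WMS can plan against.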

The challenge is finding a standard mode of spatial awareness that can provide real-time, accurate, and resilient localization for humans, robots, and manually piloted vehicles in a single system. Increasingly, the solution is vision. Visual spatial intelligence uses low-cost, reliable cameras to provide rich data for simultaneous localization and mapping (SLAM), so that any mobile device can immediately signal where it is and map the environment around it. Slamcore’s genius is to have created algorithms that process this information efficiently and in real time, so that accurate spatial information can be calculated on low-cost, low-power processors embedded into virtually any warehouse, manufacturing, or assembly facility asset.
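At its core, SLAM interleaves two estimates: where the device is, and what the space around it looks like. The toy sketch below is a deliberate simplification and an assumption of ours, not Slamcore's algorithm: real visual SLAM estimates motion by matching image features, whereas here the per-step motion is given directly, and re-observing a previously mapped landmark snaps the pose back, illustrating how re-localization bounds drift.

```python
class ToySlam:
    """Toy 2D dead reckoning with landmark-based corrections."""

    def __init__(self):
        self.x, self.y = 0.0, 0.0
        self.landmarks = {}  # landmark id -> (x, y) where first seen

    def move(self, dx, dy):
        # Integrate an (imperfect) motion estimate, as odometry would.
        self.x += dx
        self.y += dy

    def observe(self, landmark_id, rel_x, rel_y):
        """See a landmark at (rel_x, rel_y) relative to the robot."""
        if landmark_id in self.landmarks:
            # Re-observation: correct the pose so the landmark lands
            # where the map already says it is (a crude loop closure).
            lx, ly = self.landmarks[landmark_id]
            self.x = lx - rel_x
            self.y = ly - rel_y
        else:
            # First sighting: add the landmark to the map.
            self.landmarks[landmark_id] = (self.x + rel_x, self.y + rel_y)


slam = ToySlam()
slam.observe("door", 1.0, 0.0)   # map the door at (1, 0)
slam.move(2.05, 0.1)             # drifty odometry: truly moved (2, 0)
slam.observe("door", -1.0, 0.0)  # see the door again, now behind us
print((slam.x, slam.y))          # pose snapped back to (2.0, 0.0)
```

The same correction idea is why a vision-equipped vehicle can report a trustworthy position immediately, rather than accumulating error over a shift.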

Working with SYNA.OS, we have developed a retrofit localization and mapping unit that can easily be attached to any existing manually driven vehicle. Once added to vehicles such as forklifts, stackers, and pallet trucks, the vehicles can be tracked instantly, so warehouse management systems know exactly where they are at all times. The real-time nature of Slamcore’s visual spatial intelligence means that, even on start-up, instructions can be sent to driver-assist screens almost instantly to direct operators efficiently to their next task.

These vision-based systems require no costly adjustments, additions, or installations to the warehouse infrastructure, such as ultra-wideband (UWB) beacons or fiducials. 3D maps indicating height, along with 2D floor plans, are created simply by driving around the facility; they are then shared and constantly updated in real time. Vision-based systems do not suffer from ‘not spots’ (dead zones) where UWB beacons or other Indoor Positioning System (IPS) infrastructure fails to reach. They can deal with the complex and ever-changing environments within warehouses, where unexpected objects, obscured fiducials, changing light conditions, or even monotonous rows of identical racking frequently cause other localization and mapping systems to fail. Best of all, many operators are already adding cameras to their vehicles for safety reasons; compatible cameras can be reused to provide inputs for vision-based RTLS.
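A common way to represent such a driven-and-updated floor plan is an occupancy grid, where each cell records whether that patch of floor was last seen free or blocked. The sketch below is our illustrative assumption of the data structure, not Slamcore's map format; a production system would build it from camera observations rather than direct calls.

```python
FREE, OCCUPIED, UNKNOWN = 0, 1, -1


class OccupancyGrid:
    """A shared 2D floor plan built up as vehicles drive the facility."""

    def __init__(self, width: int, height: int, cell_size_m: float = 0.25):
        self.cell_size_m = cell_size_m
        self.cells = [[UNKNOWN] * width for _ in range(height)]

    def observe(self, x_m: float, y_m: float, occupied: bool):
        """Record one observation at a world position (meters)."""
        col = int(x_m / self.cell_size_m)
        row = int(y_m / self.cell_size_m)
        if 0 <= row < len(self.cells) and 0 <= col < len(self.cells[0]):
            # Newest observation wins, so the map tracks a changing space.
            self.cells[row][col] = OCCUPIED if occupied else FREE

    def is_free(self, x_m: float, y_m: float) -> bool:
        col = int(x_m / self.cell_size_m)
        row = int(y_m / self.cell_size_m)
        return self.cells[row][col] == FREE


grid = OccupancyGrid(40, 40)                # 10 m x 10 m at 0.25 m cells
grid.observe(1.0, 1.0, occupied=False)      # aisle the vehicle drove through
grid.observe(2.0, 1.0, occupied=True)       # racking seen by the camera
print(grid.is_free(1.0, 1.0), grid.is_free(2.0, 1.0))  # True False
```

Because each new observation simply overwrites the cell, the shared map stays current as pallets move and layouts change, which is exactly the behavior the paragraph above describes.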

Fully autonomous mobile robots (AMRs) are the next step for many seeking to optimize warehouse operations. These too use the same visual SLAM as the basis of their spatial intelligence, and Slamcore is already working with several AMR designers to integrate its solution into their robots. As robots increasingly work alongside manually driven vehicles and people, it is imperative that all are managed together. Warehouse management systems need a common, mutually comprehensible map to track and plan the routes of autonomous and manually driven units and humans as a single system to increase efficiency and ensure safety. Only visual spatial intelligence can provide this.

Visual spatial intelligence collects far richer data than SLAM based on other sensors. Our algorithms process this data in real time and with high efficiency to provide a wealth of additional capability. For basic navigation, sparse point clouds are used to localize quickly. More complex 2.5D maps can be used to calculate safe areas for movement and route planning that incorporate height restrictions. Level 3 of Slamcore’s full-stack visual spatial intelligence provides semantic understanding. Only possible through visual sensors, this allows autonomous systems to understand what the different objects around them are and act accordingly. In a warehouse scenario, an AMR may be programmed to come within a few centimeters of a rack but always remain at least two meters away from a human.
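The rack-versus-human rule in that last sentence amounts to a per-class clearance policy. The sketch below shows one way such a policy could be expressed; the class names and distances are our illustrative assumptions, not Slamcore's API or recommended safety values.

```python
import math

# Minimum clearance in meters per detected object class (illustrative values).
MIN_CLEARANCE_M = {
    "rack": 0.05,    # a few centimeters from static racking is acceptable
    "human": 2.0,    # always stay well back from people
    "vehicle": 1.0,
}
DEFAULT_CLEARANCE_M = 1.5  # unknown objects get a cautious default


def violates_clearance(robot_xy, detections):
    """Return the classes the robot is currently too close to.

    detections: list of (object_class, (x, y)) in the facility frame.
    """
    too_close = []
    for obj_class, (x, y) in detections:
        dist = math.hypot(x - robot_xy[0], y - robot_xy[1])
        if dist < MIN_CLEARANCE_M.get(obj_class, DEFAULT_CLEARANCE_M):
            too_close.append(obj_class)
    return too_close


# A rack 0.5 m away is fine; a person 1.2 m away triggers a violation.
print(violates_clearance((0.0, 0.0),
                         [("rack", (0.5, 0.0)), ("human", (1.2, 0.0))]))
# ['human']
```

The point of the example is that the rule depends on *what* the object is, not just where it is, which is precisely what semantic understanding adds over geometry-only SLAM.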

And, of course, visual images are clear and understandable to humans. As wearable augmented-reality devices improve, the same visual spatial intelligence will deliver accurate maps and virtual representations of the physical space around individual staff members. Not only will it help them get to the right place, it can also alert them to the real-time positions of robots and other vehicles around them.

As warehouses invest in and upgrade real-time location systems, they should look to vision as the best sensor type to meet their immediate needs and to establish the foundations of the warehouse of the future. To find out more, please get in touch.