We are pleased to announce Slamcore’s free tutorial showing how to add visual SLAM to the capabilities of the ROS1 Navigation Stack. Robot designers of any experience level can follow the step-by-step instructions to deploy visual SLAM on a prototype robot or add it to an existing ROS-based design. The tutorial provides a straightforward way to test the efficacy of vision-based navigation using a depth camera and an IMU, with the option to include wheel odometry. Developers working in the ROS framework can integrate Slamcore’s visual SLAM SDK with the ROS Nav Stack either to replace expensive LIDAR sensors with more cost-effective ones, or to increase the accuracy and robustness of pose estimates whilst paving the way for more complex vision-based capabilities, including semantic mapping and enhanced spatial intelligence.

Open-Source ROS Nav Stack

We have created a dedicated branch of the ROS1 Nav Stack that is freely available and seamlessly connects our SDK and algorithms to the ROS framework. The set-up files are available at github.com/slamcore/ros1-examples. We have also created a complete tutorial and demonstration using a Kobuki robotic base, an Intel RealSense D435i depth camera and an Nvidia Jetson Xavier NX single-board computer. These readily available hardware parts, plus the Slamcore SDK, allow any developer to quickly and cost-effectively recreate our demo and test the capabilities of Slamcore visual SLAM in real-world conditions. The full tutorial can be found at docs.slamcore.com/navstack-integration.

ROS is a powerful framework that many developers use as the core of their robot designs. Many initially combine its navigation stack with Cartographer, gmapping or AMCL, all of which require input from LIDAR sensors for mapping and localization. However, vision-based SLAM offers significant benefits to designers, whether they are working on their first prototype or enhancing designs at major robotics companies. Not only do lower-cost sensors reduce the overall bill of materials, supporting more effective commercial deployment, but combining vision with wheel odometry delivers more accurate and more robust pose estimation than other sensor combinations.
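
One reason this swap is practical is the Nav Stack’s loose coupling to its SLAM source: whichever system localizes the robot, LIDAR-based or visual, publishes a map and the map -> odom transform that the planners consume. As a minimal sketch (assuming a standard ROS1 installation; the node name is illustrative), the following Python snippet simply verifies that a SLAM source is supplying that transform:

```python
#!/usr/bin/env python
# Minimal check that a SLAM source (LIDAR- or vision-based) is publishing
# the map -> odom transform the ROS1 Nav Stack relies on.
# Assumes a standard ROS1 install; the node name is illustrative.
import rospy
import tf

rospy.init_node('slam_tf_check')
listener = tf.TransformListener()
rate = rospy.Rate(1.0)  # poll once per second

while not rospy.is_shutdown():
    try:
        # Any SLAM backend that localizes the robot publishes this transform.
        (trans, rot) = listener.lookupTransform('map', 'odom', rospy.Time(0))
        rospy.loginfo('map -> odom: translation %s, rotation %s', trans, rot)
    except (tf.LookupException, tf.ConnectivityException,
            tf.ExtrapolationException):
        rospy.logwarn('map -> odom transform not yet available')
    rate.sleep()
```

Because the planners only depend on this interface, replacing a LIDAR-based SLAM node with a visual one is transparent to the rest of the stack.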

Robotics leaders are also experimenting with visual SLAM because of the wide range of additional applications it can enable. Slamcore algorithms support not only the semantic labelling of objects within maps, but also their categorization and removal, making SLAM faster, more efficient and more accurate. Visual data can also be shared with other subsystems to support emerging vision-based functions, as well as providing human-readable maps to aid planning and operations.

Test Drive Visual SLAM

The step-by-step tutorial allows any designer or developer to test drive the Slamcore visual SLAM algorithms by creating a simple autonomous mobile robot. Whether creating a new prototype, testing SLAM with the suggested hardware set-up, or swapping Slamcore’s algorithms into an existing robot, the tutorial guides designers in adding visual SLAM capabilities to the ROS1 Navigation Stack. As such, it provides a highly flexible way to deploy and test visual SLAM in real-world scenarios.

Customers of the Slamcore SDK can use the free tutorial and open-source code to manage every step of the mapping and positioning process. From installing the code and setting up a workspace, through calibrating the sensors, to recording an initial 3D map of the space with the robot under teleoperation, the tutorial provides clear instructions and links to the required code. Once the initial map has been recorded and edited, it can be loaded onto the robot for autonomous operation.
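
For the teleoperation step, the tutorial will use its own recommended tooling (ROS provides standard packages such as teleop_twist_keyboard). Purely as an illustrative sketch, the snippet below publishes velocity commands the way any ROS1 teleop node does; it assumes the base subscribes to the conventional /cmd_vel topic, which on a Kobuki base may be remapped:

```python
#!/usr/bin/env python
# Illustrative teleoperation sketch for driving the robot while recording a map.
# Assumes the base subscribes to the conventional /cmd_vel topic; on a Kobuki
# base this may be remapped (e.g. to /mobile_base/commands/velocity).
import rospy
from geometry_msgs.msg import Twist

rospy.init_node('simple_teleop')
pub = rospy.Publisher('/cmd_vel', Twist, queue_size=1)
rate = rospy.Rate(10)  # publish commands at 10 Hz

cmd = Twist()
cmd.linear.x = 0.1   # creep forward at 0.1 m/s
cmd.angular.z = 0.2  # while turning gently

# Drive a slow arc for five seconds, then stop.
end = rospy.Time.now() + rospy.Duration(5.0)
while not rospy.is_shutdown() and rospy.Time.now() < end:
    pub.publish(cmd)
    rate.sleep()
pub.publish(Twist())  # all-zero velocities = stop
```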

Using the Slamcore tools, the robot can be given endpoint goals or waypoints to navigate towards. Using the planning algorithms from the Nav Stack, the robot calculates the best path to reach each waypoint. If the cameras detect new obstacles, the path is updated in real time, allowing the robot to navigate around them. See it in action in this video.
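
Under the hood, such goals are typically dispatched to the Nav Stack’s move_base action server, which runs the global and local planners. Here is a minimal sketch using the standard ROS1 actionlib API; the goal coordinates are arbitrary example values, and topic or frame names may differ in the Slamcore set-up:

```python
#!/usr/bin/env python
# Minimal sketch: send a single navigation goal to the Nav Stack's move_base
# action server. Standard ROS1 actionlib API; the goal coordinates below are
# arbitrary example values.
import rospy
import actionlib
from actionlib_msgs.msg import GoalStatus
from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

rospy.init_node('waypoint_sender')
client = actionlib.SimpleActionClient('move_base', MoveBaseAction)
client.wait_for_server()

goal = MoveBaseGoal()
goal.target_pose.header.frame_id = 'map'   # goal expressed in the map frame
goal.target_pose.header.stamp = rospy.Time.now()
goal.target_pose.pose.position.x = 2.0     # 2 m along the map x-axis
goal.target_pose.pose.position.y = 1.0     # 1 m along the map y-axis
goal.target_pose.pose.orientation.w = 1.0  # face along the map x-axis

client.send_goal(goal)
client.wait_for_result()
if client.get_state() == GoalStatus.SUCCEEDED:
    rospy.loginfo('Waypoint reached')
else:
    rospy.logwarn('Navigation failed or was preempted')
```

A list of waypoints can be handled the same way, sending each goal in turn once the previous one completes.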

Get started

At Slamcore we believe that every robot should benefit from low-cost, high-accuracy, robust SLAM, and that visual SLAM is the best option to deliver this across the widest range of environments. We’ve worked hard to create visual SLAM algorithms that outperform virtually all commercially available competing solutions, so that robot designers can concentrate on the other critical elements of their robots. By democratizing access to high-quality, commercial-grade SLAM at costs that allow at-scale deployment, we hope to accelerate the entire industry. The ROS1 Nav Stack tutorial quickly demonstrates how our spatial intelligence algorithms can be effectively integrated with the de facto standard software framework for robotics. To get started on your own journey to the future of visual SLAM, download the SDK here and check out the tutorial here.