First impressions of Slamtec SDP Mini Lidar/SLAM development rover

I’ve had a chance to try out the new Slamtec SDP Mini development platform, from the maker of the popular RP-series low-cost Lidar scanners, and it’s nothing short of amazing. For $499, you get an RP-Lidar A2 2D lidar (which normally costs $320 just by itself), a Slamware hardware SLAM processing module along with a carrier board with Wifi (another $300+ package), and a very competent rover chassis with wheel encoders.

Putting aside the value of nearly $1,000 worth of gear for half that price, what’s amazing about the SDP Mini is that it works. SLAM is notoriously hard and usually requires a full PC’s worth of computing power and complex software like ROS (although you can roll your own for less than $200 if you’re willing to settle for lower resolution, as described in the post here).

What the Slamtec SDP Mini does is combine a capable 8m-range, 8,000-samples/second 2D Lidar unit with the hardware needed to interpret its output and turn it into an accurate map of the space it’s in. It builds the map as it drives around and localizes itself within that map at the same time, hence the name: Simultaneous Localization and Mapping (SLAM).

The magic all happens on the Slamware Core module, which is a single-board computer plus IMU running Slamtec’s SLAM software. It sits on a carrier board, which adds Wifi, motor controllers for the robot, and connectors for wheel encoders and other sensors.

Most important is the SDK, which supports Android, iOS, Windows and Linux. Much of the documentation is still in Chinese, but Chrome can auto-translate it quite well into English. Along with all the usual Lidar functions, the SDK has full support for maps, pathfinding, wall-avoidance and encoder reading. This is an absolute nightmare in ROS (five Linux terminal windows, if you can even get it running), but is super easy with Slamware.
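
To give a sense of how little code is involved, here is a minimal C++ sketch of connecting to the Slamware core over Wifi and reading back its pose and laser scan. I’m going from the SDK’s sample code here, so treat the header path, the default 192.168.11.1:1445 address and the method names (connect, getPose, getLaserScan) as assumptions to check against your SDK version rather than gospel.

```cpp
// Minimal sketch based on the Slamware C++ SDK samples -- header path and
// method names are assumptions; check them against your SDK version.
#include <iostream>
#include <rpos/robot_platforms/slamware_core_platform.h>

int main() {
    // 192.168.11.1:1445 is (as far as I can tell) the default address/port
    // the module exposes on its own Wifi access point; yours may differ.
    auto robot = rpos::robot_platforms::SlamwareCorePlatform::connect("192.168.11.1", 1445);

    // Current pose in the map frame (meters and radians).
    rpos::core::Pose pose = robot.getPose();
    std::cout << "x=" << pose.x() << " y=" << pose.y()
              << " yaw=" << pose.yaw() << std::endl;

    // One full lidar sweep, already registered against that pose.
    auto scan = robot.getLaserScan();
    std::cout << "scan points: " << scan.getLaserPoints().size() << std::endl;
    return 0;
}
```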

Here’s a sample of navigation, using the Android app (which communicates with the SDP Mini via Wifi). (2020 Update: Slamtec has since removed the RoboHome app and replaced it with the similar RoboStudio, which is available as both a Windows and an Android app. The Android app on their website won’t run on Android 11 in my testing, but they do have a more recent version that will. It’s here.) It automatically built a map of my room in a few seconds and can then navigate to any point in the room with a single click.
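
For comparison, that click-to-navigate behavior boils down to a couple of SDK calls. Continuing from the connection sketch above, and hedging again because the exact moveTo overloads differ between SDK versions, it looks roughly like this:

```cpp
// Sketch: drive to a point in the map, like tapping it in the app.
// The moveTo() arguments are assumptions; the overloads vary by SDK version.
rpos::core::Location target(1.5f, 0.5f);                  // meters, in the map frame
rpos::actions::MoveAction action = robot.moveTo(target, false, true);
action.waitUntilDone();                                    // blocks until the goal is reached or fails
```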

Even cooler, you can create “virtual walls” by just drawing a line in the app. Then, when you tell it to navigate, it will find a path that avoids the wall.
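
The SDK exposes the same feature programmatically as an “artifact”. The sketch below, continuing from the connection example, uses the addLine/ArtifactUsageVirtualWall names as I read them in the docs, so again treat them as assumptions to verify against your SDK version:

```cpp
// Sketch: add a virtual wall between two map-frame points.
// addLine()/ArtifactUsageVirtualWall are assumed names; verify against your SDK.
rpos::core::Line wall(rpos::core::Point(0.0f, 1.0f),      // start (meters)
                      rpos::core::Point(2.0f, 1.0f));     // end
robot.addLine(rpos::features::artifact_provider::ArtifactUsageVirtualWall, wall);
// Subsequent moveTo() calls will plan a path that avoids this wall.
```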

Slamtec also has a larger, Roomba-style SDP platform (shown below) that does the same thing, but can carry larger payloads and also includes automatic return to its charging base and bumpers/sensors to avoid obstacles and stairs. It’s only available via special order for now, but will soon be available more broadly at a price to be announced. Overall, the Slamtec range is super impressive. If you want SLAM, look to China!

5 thoughts on “First impressions of Slamtec SDP Mini Lidar/SLAM development rover”

  1. Hi, thanks for the nice article.

    I just bought the Slamtec M1M1 version hoping to make a robot. However, the Android app seems to be the Chinese version. I’m wondering where you got the English one?

    Could you shed some light on that?

    Thanks

      1. Hi Zlite,

        Thanks for the link, I’ll try it out.

        Btw, if I may, I’d like to ask a few more questions if you are still playing with the Slamtec M1M1.

        I just want to know whether it recovers the last position after it is powered on. I did some scanning and powered off the device. When I switched it back on, it shows the angle more or less correctly, but the x, y is always zero. I am using RoboStudio on Win10 Home.

        I’ve searched through the site, but the docs are not much help.

        Any thoughts?

        Thanks & rgds

  2. Yes, I am using RoboStudio as suggested. While it works more or less OK, some parts are not clearly explained.

    Since SLAM is built in, the map should be stored permanently on the device once it is built, which it indeed is. The problem arises when the unit is switched off and on again in a different place. It is then supposed to report the current pose against the stored map, but it reports 0,0 and some arbitrary theta. The docs say to recover localization by dragging the origin coordinates to the new place and doing some extra steps, which seems counterintuitive.

    Still wrestling with it to understand the relocalization after power off.

    Thanks for your suggestions.
