Review of the new Nvidia Jetson Xavier NX for robocars

Nvidia Announces Jetson Xavier NX Developer Kit for $399

It’s now been a couple of weeks since Nvidia released its new Jetson Xavier NX board, a $399 big brother to the Jetson Nano (and successor to the TX2) with 5-10 times the compute performance of the Nano (and 10-15x the performance of a RaspberryPi 4) along with twice as much memory (8 GB). It comes with a carrier board similar to the Nano’s, with the same Raspberry Pi GPIO pins, but adds built-in Wifi/BT and an SSD slot, which is a big improvement over the Nano.

How well does it suit DIY Robocars such as Donkeycar? Well, there are pluses and minuses:

Pros:

  • All that computing power means that you can run deeper learning models with multiple cameras at full resolution. You can’t beat it for performance.
  • It also means that you can do your training on-car, rather than having to export to AWS or your laptop
  • Built-in wifi is great
  • Same price but smaller and way more powerful than a TX2.

Cons:

  • Four times the price of the Nano
  • The native carrier board for the Jetson NX runs at 12-19v, as opposed to the Nano, which runs at 5v. That means the regular batteries and power supplies we use with most cars based on a Raspberry Pi or Nano won’t work. You have two options:
    • 1) Use a step-up voltage converter like this
    • 2) Use a Nano’s carrier board if you have one. But you can’t use just any one! The NX will only work with the second-generation Nano carrier board, the one with two camera inputs (it’s called B-01)
  • When it shipped, the NX had the wrong I2C bus for the RPi-style GPIO pins (it used the bus numbers from the older TX2 board rather than the Nano’s, which is odd because it shares a form factor with the Nano). After I brought this to Nvidia’s attention they said they would release a utility that allows you to remap the I2C bus/pins. Until then, RPi I2C peripherals won’t work unless they allow you to reset their bus to #8 (as opposed to the default #1); see the sketch below. Alternatively, if your I2C peripheral has wires to connect to the pins (as opposed to a fixed header), you can use the NX’s pins 27 and 28 rather than the usual 3 and 5, and that will work on bus 1.
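
If your peripheral’s driver is in Python, switching it to bus 8 is usually a one-line change. Here’s a minimal sketch using the smbus2 library; the 0x40 address (a PCA9685 servo driver, the usual Donkeycar steering/throttle board) is just an illustrative assumption, so substitute whatever your peripheral uses.

# Minimal sketch: probing an I2C peripheral on the Xavier NX's RPi-style header.
# Assumes the smbus2 library (pip install smbus2) and an illustrative device
# address of 0x40 (e.g. a PCA9685 servo driver) -- substitute your own.
from smbus2 import SMBus

I2C_BUS = 8       # Xavier NX RPi-header bus at launch; a Nano or RPi would use bus 1
DEVICE_ADDR = 0x40

with SMBus(I2C_BUS) as bus:
    # read one byte from register 0x00 just to confirm the device responds
    value = bus.read_byte_data(DEVICE_ADDR, 0x00)
    print(f"Device 0x{DEVICE_ADDR:02x} on bus {I2C_BUS} responded with 0x{value:02x}")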

I’ve managed to set up the Donkey framework on the Xavier NX and there were a few issues, mostly involving the fact that it ships with the new JetPack 4.4, which requires a newer version of TensorFlow than the standard Donkey setup. The Donkey docs and installation scripts are being updated to address that, and I’m hoping that by the time you read this the setup will be seamless and automatic. In the meantime, you can use these installation steps and it should work. You can also get some debugging tips here.

I’ll also be trying it with the new Nvidia Isaac robotic development system. Although the previous version of Isaac didn’t work with the Xavier NX, version 2020.1 just came out so fingers crossed this works out of the box.

Review of the new HuskyLens AI camera as a robocar brain

There are so many cool sensors and embedded processors coming out of China these days! The latest is the amazing HuskyLens, which is a combination of a powerful AI/computer vision processor, a camera and a screen — for just $45. HuskyLens comes with a host of CV/AI functions pre-programmed and a simple interface of a scroll wheel and a button to choose between them and change their parameters.

To test its suitability for DIY Robocars, I swapped it in on my regular test car, replacing an OpenMV camera. I used the same Teensy-based board I designed for the OpenMV to interface with a RC controller and the car’s motor controller and steering servo. But because the HuskyLens can’t be directly programmed (you’re limited to the built-in CV/AI functions) I used it just for the line-following function and programmed the rest of the car behavior (PID steering, etc) on the Teensy. You can find my code here.

As you can see from the video above, it works great for line following.

Advantages of the HuskyLens include:

  • It’s super fast. I’m getting 300+ FPS for line detection. That’s 5-10x the speed of OpenMV (but with some limitations as discussed below)
  • Very easy to interface with an Arduino or Teensy. The HuskyLens comes with a cable that you can plug into the Arduino/Teensy and an Arduino library to make it easy to read the data.
  • The built-in screen is terrific, not only as a UI but to get real-time feedback on how the camera is handling a scene without having to plug it into a PC.
  • Easy to adjust for different lighting and line colors
  • Built-in neural network AI programs that make it easy to do object detection, color detection, tags, faces and gestures. Just look at what you want to track and press the button.

Compared to the OpenMV, some disadvantages of the HuskyLens include:

  • You can’t change the camera or lens. So no fisheye lens option to give it a wider angle of view
  • You can’t directly program it. So no fancy tricks like perspective correction and tracking two different line colors at the same time
  • You’ll have to pair it with an Arduino or Teensy to do any proper work like reading RC or driving servos (OpenMV, in contrast, has add-on boards that do those things directly)
  • It consumes about twice the power of the OpenMV (240 mA), so you may need a beefier power supply than simply the BEC output from your car’s speed controller. To avoid brownouts, I decided not to use the car’s regular motor controller’s output to power the system and used a cheap switching power supply instead.

If you want to do a similar experiment, here are some tips on my setup:

Hardware:

After you solder in the Teensy with header pins, solder in 3-pin headers for RC IN 1, 2 and 3 and RC Out 1 and 2. You’ll connect your RC receiver’s channels 1 and 2 to RC IN 1 and 2, and whichever channel you want to use to switch from RC to auto modes to RC IN 3. Connect the steering servo to RC Out 1 and the motor controller to RC Out 2. If you’re using a separate power supply, you can plug that into any spare RC in or out pins.

Also solder a 4-pin connector to Serial 2; your HuskyLens will plug into that. Connect the HuskyLens “T” wire to the Rx and “R” wire to the Tx, and + and – to the corresponding pins.

Software:

  • My code should pretty much work out of the box on a Teensy with the above PCB. You’ll need to add the HuskyLens library and the AutoPID library to your Arduino IDE before compiling.
  • It assumes that you’re using an RC controller and that you have a channel assigned (plugged into RC IN 3) for selecting between RC and HuskyLens-controlled modes. If you don’t want to use an RC controller, set boolean Use_RC = true; to false
  • It does a kinda cool thing of blending the slope of the line with its left-right offset from center; both require the car to turn to get back on the line (see the Python sketch after this list for the idea).
  • If you’re using RC, the throttle is controlled manually with RC in both RC and auto modes. If not, you can change it by modifying this line: const int cruise_speed = 1600;. 1500 is stopped; less than that is backwards and more than that (up to 2000) is forwards at the speed you select.
  • It uses a PID controller. Feel free to change the settings, which are KP, KI and KD, if you’d like to tune it
  • On your HuskyLens, use the scroll wheel to get to General Settings and change the Protocol/Serial Baud Rate to 115200.
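
To make that slope-plus-offset blend concrete, here’s a rough Python sketch of the math. The real car code is the Teensy sketch linked above; the frame size, field names and 0.5 weighting here are assumptions for illustration only.

import math

# Illustrative sketch of blending the line's slope with its offset from center.
# HuskyLens line tracking returns an "arrow" from (x_origin, y_origin) to
# (x_target, y_target); the 320-pixel-wide frame and names are assumptions here.
FRAME_WIDTH = 320
SLOPE_WEIGHT = 0.5   # how much the line's angle counts vs. its sideways offset

def steering_error(x_origin, y_origin, x_target, y_target):
    # offset: how far the bottom of the line is from the center of the frame,
    # normalized to -1..1 (negative = line is to the left)
    offset = (x_origin - FRAME_WIDTH / 2) / (FRAME_WIDTH / 2)
    # slope: the line's angle relative to straight ahead, normalized to -1..1
    angle = math.atan2(x_target - x_origin, y_origin - y_target)  # radians
    slope = angle / (math.pi / 2)
    # blend the two; feed this error into the PID controller
    return offset + SLOPE_WEIGHT * slope

# example: line starts near center but leans right -> small positive (turn right) error
print(steering_error(170, 220, 200, 60))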

Arduino Serial Plotter: The Missing Manual

If you use Arduino, perhaps to handle the lower-level driving work of your DIY Robocar, you may have noticed the Serial Plotter tool, which is an easy way to graph data coming off your Arduino (much better than just watching numbers scroll past in the Serial Monitor).

You may have also noticed that the Arduino documentation gives no instructions on how to use it ¯\_(ツ)_/¯. You can Google around and find community tutorials, such as this one, which give you the basics. But none I’ve found are complete.

So this is an effort to make a complete guide to using the Arduino Serial Plotter, using some elements from the above linked tutorial.

First, you can find the feature in the Arduino IDE under the Tools menu (Tools > Serial Plotter).

It will plot any data your Arduino is sending out in a Serial.print() or Serial.println() command. The vertical Y-axis auto-adjusts as the value of the output increases or decreases, and the X-axis is a fixed 500-point axis, with each tick of the axis equal to an executed Serial.println() command. In other words, the plot is updated along the X-axis every time Serial.println() sends a new value.

It also has some nice features:

  • Plotting of multiple variables, with different labels and colors for each
  • Can plot both integers and floats
  • Auto-resizes the scale (Y axis)
  • Supports negative value graphs
  • Auto-scrolls the X axis

But to make it work well, there are some tricks in how to format that data. Here’s a complete(?) list:

  • Keep your serial speed low. 9600 is the best for readability. Anything faster than 57600 won’t work.
  • Plot one variable: Just use Serial.println()

Serial.println(variable);

  • Plot more than one variable. Print a comma between variables using Serial.print() and use a Serial.println() for the variable at the end of the list. Each plot will have a different color.
Serial.print(variable1);
Serial.print(",");
Serial.print(variable2);
Serial.print(",");
Serial.println(last_variable); // Use Serial.println() for the last one
  • Plot more than one variable with different labels. The labels will be at the top, in colors matching the relevant lines. Use Serial.print() for each label. You must use a colon (and no space) after the label:
Serial.print("Sensor1:");
Serial.print(variable1);
Serial.print(",");
Serial.print("Sensor2:");
Serial.print(variable2);
Serial.print(",");
Serial.println(last_variable); // Use Serial.println() for the last one

A more efficient way to do that is to send the labels just once, to set up the plot, and then after that you can just send the data:

void setup() {
  // initialize serial communication at 9600 bits per second:
  Serial.begin(9600);
  // send the plot labels just once:
  Serial.println("var1:,var2:,var3:");
}

void loop() {
  // read the inputs on analog pins A1-A3:
  int sensorValue1 = analogRead(A1);
  int sensorValue2 = analogRead(A2);
  int sensorValue3 = analogRead(A3);
  // print the values, comma-separated, ending with println:
  Serial.print(sensorValue1);
  Serial.print(",");
  Serial.print(sensorValue2);
  Serial.print(",");
  Serial.println(sensorValue3);
  delay(1);        // short delay between reads for stability
}
  • Add a ‘min’ and ‘max’ line so that you can stop the plotter from auto scaling (Thanks to Stephen in the comments for this):
Serial.println("Min:0,Max:1023");
  • Or if you have multiple variables to plot, and want to give them their own space:
Serial.print("Min:0,");
Serial.print("Sensor1:");
Serial.print(map(variable1,0,1023,0,100));
Serial.print(",");
Serial.print("Sensor2:");
Serial.print(map(variable2,0,1023,100,200));
Serial.print(",");
Serial.print("Sensor3:");
Serial.print(map(variable3,0,1023,200,300));
Serial.print(",");
Serial.println("Max:300");

Of course, now the numbers on the y-axis don’t mean much, but you can still see the waveforms.

Comparing Sonar and Lidar Arrays

For reasons that probably involve too much starting of projects and not enough thinking about why I was starting them, I have conducted an experiment comparing an array of ultrasonic sensors with an array of time-of-flight Lidar sensors. This post will show how to make both of them as well as their pros and cons. But [spoiler alert] at the risk of losing all my readers in the first paragraph, I must reveal the result: neither is as good as a $69 2D Lidar.

Nevertheless! If you’re interested in either kind of sensor, read on. There are some good tips and lessons below.

First, how this started. My inspiration was the SonicDisc project a few years ago from Dimitris Platis, which arranges eight cheap ultrasonic sensors in a disc and makes it easy to read and combine the data.

I ordered the PCBs and parts that Dimitris recommended, but got busy with other things and didn’t get around to assembling them until this year. Although I eventually did get it working, it was kind of a hassle to solder together an Arduino from the basic components, so I redesigned the board as an Arduino shield, so it just plugs on top of a regular Arduino Uno or the like. If you want to make one like that, you can order my boards from OSH Park here. The only other parts you’ll need are eight ultrasonic sensors, which are very cheap (just $1.40 each). I modified Dimitris’ Arduino code to work with the shield; you can get my code here.

Things to note about the code: it’s scanning way faster (~1,000 Hz) than needed and fires all the ultrasonic sensors at the same time, which can lead to crosstalk and noise. A better way would be to fire them one at a time, at the cost of some speed. But anything faster than about 50-100 Hz is unnecessary, since we can’t actuate a rover faster than about 10 Hz; any extra scan data can be used for filtering and averaging. You’ll note that it’s also set up to send the data to another microprocessor via I2C if desired.

While I was making that, I started playing with the latest ST time-of-flight (ToF) sensors, which are like little 1D (just a single fixed beam) Lidar sensors. The newest ones, the VL53L1X, have a range of up to 4m indoors and are available in an easy-to-use breakout-board form from Pololu ($11 each) or, for a bit more money but with a better horizontal orientation of the sensor, from Pimoroni ($19 each).

The advantage of the ToF sensors over ultrasound is that they’re smaller and have a more focused, adjustable beam, so they should be more accurate. I designed a board that used an array of eight of those with an onboard Teensy LC microprocessor (it works like an Arduino, but it’s faster and just $11). You can buy that board from OSH Park here. My code to run it is here, and you’ll need to install the Pololu VL53L1X library, too.

The disadvantage of the ToF sensors is that they’re more expensive, so an array of 8 plus a Teensy and the PCB will set you back $111, which is more expensive than a good 2D Lidar like the RPLidar A1M8, which has much higher resolution and range. So really the only reason to use something like this is if you don’t want to have to use a Linux-based computer to read the data, like RPLidar requires, and want to run your car or other project entirely off the onboard Teensy. Or if you really don’t want any moving parts in your Lidar but need a wider field of view than most commercial solid-state 2.5D Lidars such as the Benewake series.

Things to note about the code: unlike the ultrasonic sensors, the ToF sensors are I2C devices. Not only that, but they all come defaulting to the same I2C address, which they return to at each power-on. So at startup, the code has to put each device into reset mode and then reassign it a new I2C address so they all end up with different addresses. That requires the PCB to connect one digital pin from the Teensy to each sensor’s reset pin, so the Teensy can put them into reset mode one by one.

Once all the I2C devices have a unique address, each can be triggered to start sampling and then read. To avoid crosstalk between them, I do that in three groups of 2-3 sensors each, with as much physical separation between them as possible. Because they can’t all sample at the same time, and because of the intrinsic sampling time required by each device, the whole polling process is a lot slower than I would like; I haven’t found a way to poll the entire array faster than 7 Hz.

This is my test setup for the two, side by side

Testing the two arrays side by side, you can see some clear differences in the data below as I move a target (my head ;-)) towards and away from the arrays.

First, you can see that the ultrasonic array samples much faster: one sequence of me moving my head closer and further takes up the whole screen, since it scrolls faster than the ToF Lidar display below it, where I can do it dozens of times in the time it takes the data to scroll off the screen.

Sonar/ultrasonic array
ToF Lidar Array

Second, you can see that the Sonar data is noisier. Most of the spurious readings in the ToF Lidar graph (ie, not the green line, which was the sensor pointed right at me) are from the sensor next to it (yellow), which makes sense since the sensors all have a beam spread that could have easily overlapped with the main sensor pointed at me.

That’s true for the sonar data, too (the red line is the sensor right next to the green line of the one pointed at me), but note how the green line, which should be constantly reporting my distance, quite often drops to zero. The blue line, which is a sensor on the other side of the array, is probably seeing a wall that isn’t moving that’s right at the limits of its range, which is why it drops in and out.

So what can we conclude from all this?

  • ToF Lidar data is less noisy than sonar data
  • ToF Lidar sensors are slower than sonar sensors
  • ToF Lidar sensors are more expensive than sonar sensors
  • Both ToF Lidar and Sonar 1D depth sensors in an array have worse resolution, range and accuracy than 2D Lidar sensors
  • I’m not sure why I even tried this experiment, since 2D Lidars are great, cheap and easily available 😉

Is there any way to make such an array useful? Well, not the way I did it, unless you’re super keen not to use a 2D mechanical spinning lidar and a RaspberryPi or other Linux computer.

However, it is interesting to think about what a dense array of the ToF chips on a flexible PCB would allow. The chips themselves are about $5 each in volume, and you don’t need much supporting circuitry for power and I2C, most of which could be shared with all the sensors rather than repeated on each breakout board as in the case with the Pololus I used. Rather than have a dedicated digital pin for each sensor to put them in reset mode to change their address, you can use an interrupt-driven daisy-chain approach with a single pin, as FuzzyStudio did. And OSHPark, which I use for my PCBs, does flex PCBs, too.

With all that in mind you could create a solid-state depth-sensing surface of any size and shape. Why you would want that I’m not sure, but if you do I hope you will find the preceding useful.

Lessons learned making a Lidar-based maze rover

As we integrate depth sensing more into the DIY Robocars leagues, I’ve been using a simple maze as a way to test and refine various sensor and sensor-processing techniques. In my last maze-navigating post, I used an Intel RealSense depth camera to navigate my maze. In this one, I’m using a low-cost 2D Lidar, the $99 YDLidar X4, which is very similar to the RPLidar A1M8 (same price, similar performance). This post will show you how to use it and walk through some lessons learned. (Note: this is not a full tutorial, since my setup is pretty unusual. It will, however, help you with common motion control, Lidar threading and motion planning problems.)

First, needless to say, Lidar works great for this task. Maze following with simple walls like my setup is a pretty easy task, and there are many ways to solve it. But the purpose of this exercise was to set up some more complex robotics building blocks that often trip folks up, so I’ll drill down on some of the non-obvious things that took me a while to work out.

Hardware

First, my setup: I used a Pololu Romi chassis with motor encoders and a Romi32U control board. Two expansion plates on the top provide a surface for the Lidar. The 32U control board mounts a RaspberryPi 3, which is what we’ll use for most of the processing. Just mount the Lidar on the top, as I have in the picture above, and power it with a separate USB battery (any cheap phone charging battery will work) since the RaspberryPi USB port won’t provide enough power.

Software

The below is a description of some hard things I had to figure out, but if you just want to go straight to the Python code, it’s here.

1) Closed-loop motor control. Although there are lots of rover chassis with perfectly good motors, I’m a bit of a stickler for closed-loop motor control using encoders. That way you can ensure that a rover goes where you tell it to, even with motors that don’t perform exactly alike. This is always a hassle to set up on a new platform, with all sorts of odometry and PID loop tuning to get right. Ideally, the Romi should be perfect for this because it has encoders and a control board (which runs an Arduino-like microprocessor) to read them. But although Pololu has done a pretty good job with its drivers, it hasn’t really provided a complete closed-loop driving solution that works with Python on the on-board RaspberryPi.

Fortunately, I found the RomiPi library, which adds proper motion control to the Romi, so if you say turn 10 degrees to the left it actually does that, and when you say go straight it actually goes straight. Although it’s designed to work with ROS, the basic framework works fine with any Python program. There is one program that you load on the Romi’s Arduino low-level motor controller board and then a Python library that you use on the RaspberryPi. The examples show you how to use it. (One hassle I had to overcome is that it was written for Python 2 and everything else I use needs Python 3, but I ported it to Python 3 and submitted a pull request to incorporate those changes, which was accepted, so if you download it now it will work fine with Python 3.)

2) Multitasking with Lidar reading. Reading the YDLidar in Python is pretty easy, thanks to the excellent open source PyLidar3 library. However, you’ll find that your Python code pauses every time the library polls the sensor, which means that your rover’s movements will be jerky. I tried a number of ways to thread or multitask the Lidar and the motor parts of my code, including Asyncio and Multiprocessing, but in the end the only thing that worked properly for me was Python’s native Threading, which you can see demonstrated in PyLidar3’s plotting example.

In short, it’s incredibly easy:

Just import threading at the top of your Python program:

import threading

And then call your motor control routine (mine is called “drive”) like this:

threading.Thread(target=drive).start()
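
Putting those two pieces together, the overall pattern looks roughly like this. This is a minimal sketch: get_scan() is a stand-in for however you poll your Lidar (e.g. PyLidar3’s scan generator), and the shared steering variable and timings are illustrative only.

import threading
import time

# Minimal sketch of the threading pattern: a separate "drive" thread keeps the
# motors updated while the main thread blocks on the (slow) Lidar poll.
# get_scan() is a placeholder for your actual Lidar-reading code.
steering_angle = 0.0                      # shared between the two threads

def drive():
    while True:
        # send steering_angle to your motor controller here
        print(f"steering to {steering_angle:.1f} degrees")
        time.sleep(0.05)                  # ~20 Hz motor updates

def get_scan():
    time.sleep(0.2)                       # pretend the Lidar poll blocks for a while
    return {0: 1200, 90: 800, 270: 900}   # {angle in degrees: distance in mm}

threading.Thread(target=drive, daemon=True).start()

while True:
    scan = get_scan()                     # blocking, but it only blocks this thread
    # toy decision: turn away from whichever side is closer
    steering_angle = 10.0 if scan[90] < scan[270] else -10.0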

3) Sampling strategies. I totally overthought this one. I tried all sorts of things, from trying to average all the distance points in the Lidar’s 360-degree arc to just trying to find the longest free distance and heading that way. All too noisy.

I tried batching groups of ten degrees and doing it that way: still too noisy, with all sorts of edge cases throughout the maze. The problem is that you don’t actually want to steer towards the longest free path, because that means that you’ll hit the edge of the corner right next to the longest free path, like this:

Instead, the best strategy turned out to just keep it simple: define a left side (say 20 to 100 degrees, if straight ahead is 0 degrees), a right side (260 to 340 degrees) and a center (340 to 20 degrees). Get an average distance (vector) for each, like this:

Now that you have three vectors, you can sum them and get the net vector like this (I halve the center vector because avoiding the walls to right and left is more important than seeking some distant free space):

If you set the right and left angles to 45 degrees (pi/4 in radians, which is what Python uses), you can decompose the x and y average values of each zone and add them together like this:

        left_y = math.sin(math.pi/4) * left_average
        left_x = -1*math.cos(math.pi/4) * left_average
        right_y = math.sin(math.pi/4) * right_average
        right_x = math.cos(math.pi/4) * right_average
        center_y = center_average

        sum_x = round(left_x + right_x,2)
        sum_y = round(center_y - (left_y + right_y)/2,2)
        if sum_y < 100:
            sum_y = 100
        sum_angle = math.atan2(sum_x,sum_y)

That’s pretty much it. It seems so simple now, but all these things took days to figure out. Hope this guide will help save you some of that time!

Rules and guidelines for the DIY Robocars outdoors races

Starting with the March 7 Circuit Launch event, we’re going to be adding an outdoor track for cars using GPS or otherwise wanting more of a challenge than the indoor track. Although the outdoor track will be open all day (starting at 10:00am), the formal competition on it will only begin after the indoor event has concluded, at 3:00pm, and will conclude at 5:00pm.

You can sign up for it here.

The below shows roughly where the track will be, in the parking lot at the side of Circuit Launch. There will be white tape marking the outside and inside of the track and cones placed at the corners. More details on the track and challenges (random cone obstacles, etc), will be announced on the day of the event.

To aid in GPS-guided navigation, we will have two RTK GPS base stations broadcasting. One is an Emlid Reach RS+, the other a Swiftnav Piksi2. If you want to use either of these, your car must have a compatible “rover” GPS mounted on it. We recommend the Emlid Reach M+ (it can also work without a base station).

Regular GPS units will work too, of course, but will not be as precise as RTK GPS. But if that’s all you have, bring it and see how well it does!

BTW, if you’ve never tried GPS-guided autonomous cars, we recommend you start with the ArduRover project, which has been used successfully for years at the Sparkfun Autonomous Vehicle Competition (RIP), and uses the same Pixhawk autopilot hardware as many drones.

How to add an encoder to the Donkeycar chassis

One of the most useful sensors to add to an autonomous car (after the camera) is an encoder. This “closes the loop” of motor control, giving you feedback on what actually happens when you send a speed command to the motor (the speed the motor actually turns can depend on a lot of things, from the battery voltage to the increased load on the motor going uphill, etc).

In a perfect world we’d have an encoder on each wheel, showing the actual rotation of each tire, which, slippage aside, should be perfectly correlated with speed and distance traveled. But it’s too hard to retrofit most RC cars to add encoders on each wheel, so this post will show you how to do it the easy way, with a single encoder on the motor. That will at least give you motor speed feedback, and since the motor’s output is geared to all the wheels, it averages out quite close to car speed and distance.

We’re going to be using “quadrature” encoders, which know the difference between forwards and backwards rotation and are easily read by a simple microcontroller. I’ll offer two alternatives that work the same, although one is cheaper (albeit larger) than the other.

I won’t be going into how to use this encoder information in your robocar, since it depends a lot on which software stack you’re using (Donkeycar, Jetracer, etc). That will wait for the next post. This one is just for the mechanical and electrical installation.

As you can see from the photo above, the standard Donkey chassis (the Exceed Magnet) has space to put the encoder above the motor where there is a large drive gear that is easily accessible.

Your choices are pretty simple. The cheaper encoder below is bigger (38mm diameter) and weighs a bit more. The more expensive one is smaller (25mm diameter) and proportionately lighter. My picture above shows the smaller one, which is a slightly neater installation, but if you want to save a bit of money the larger one works just as well.

You’ll also want a microcontroller to read the encoder data, which is easier to do on something like an Arduino than it is on a Linux computer like a RaspberryPi or Jetson Nano. I use a Teensy LC for that, which is faster than a stock Arduino and nicely small. You can buy that from many places, such as PJRC ($11.65) or Amazon ($15.26). But any Arduino-compatible microcontroller will do, such as the Adafruit Feather series or an Arduino Mini.

If you have a 3D printer, you can download the files for the mount and gear below. Or if you don’t, you can order them from the same link.

Option 1: the larger, cheaper encoder (38mm)
  • Cost: $15.99 (plus $10 for 3D printing if you don’t have your own printer)
  • Encoder: Amazon (get 200 PPR resolution)
  • Mount: Shapeways
  • Gear: Shapeways

Option 2: the smaller encoder (25mm)
  • Cost: $44.95 (plus $10 for 3D printing if you don’t have your own printer)
  • Encoder: Sparkfun
  • Mount: Shapeways
  • Gear: Shapeways

Just screw the encoder onto the mount, press the gear onto the shaft, and position the encoder as shown above so that the gear seats nicely on the motor gear and turns with it without slipping. You can glue the mount to the car chassis shelf once you have the right position.

Once you have the encoder in place, solder pins onto the Teensy board at the positions shown below (USB V+ and Gnd and Pins 2 and 3), cut the encoder wires to the desired length and splice a female connector of any sort onto them as shown. On the smaller encoder, there is a fifth wire (orange, called “Output Z”) that can be cut off and ignored.

On the Teensy (or any Arduino), you can run the standard Arduino encoder library to test the encoders. Just search for it in the Arduino Library Manager and install it. From the Arduino Examples menu, select the Encoder/Basic example and edit it to reflect the pins you’re actually using:

Encoder myEnc(2, 3);

Now you’re up and running! You should be able to open the Serial Monitor (9600 baud) in the Arduino IDE and see the values streaming as you turn the encoder. To use this with your robocar code, you’ll want to plug the Teensy/Arduino into your main car computer’s USB port with a short cable, where it should show up as a serial port. You can then read the encoder values over serial and use them as you see fit, as in the sketch below.
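
To give a flavor of the car-computer side, here’s a minimal Python sketch using pyserial. It assumes the Teensy prints the current count on its own line (roughly what the Encoder/Basic example does), that it enumerates as /dev/ttyACM0, and that the counts-per-rev and distance-per-rev constants are placeholders you’d replace with your own numbers.

import time
import serial

# Minimal sketch: read encoder counts from the Teensy over USB serial and
# turn them into speed. The port name, counts-per-revolution and
# distance-per-revolution values are illustrative assumptions -- use your own.
PORT = "/dev/ttyACM0"          # how a Teensy usually enumerates on Linux
COUNTS_PER_REV = 200 * 4       # a 200 PPR encoder read in quadrature (4x)
METERS_PER_REV = 0.05          # distance the car travels per motor revolution (after gearing)

with serial.Serial(PORT, 9600, timeout=1) as ser:
    last_count, last_time = None, None
    while True:
        line = ser.readline().decode(errors="ignore").strip()
        if not line.lstrip("-").isdigit():
            continue                       # ignore any non-numeric startup text
        count = int(line)
        now = time.time()
        if last_count is not None:
            speed = (count - last_count) / COUNTS_PER_REV * METERS_PER_REV / (now - last_time)
            print(f"count={count}  speed={speed:.2f} m/s")
        last_count, last_time = count, now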

More details on how to do that in the next post.

Maze navigation with the Realsense 435 depth sensor camera

In the past year there’s been an explosion of good, cheap depth sensors, from low-cost Lidars to “depth cameras” with stereo vision and laser projection. It’s now easy to augment or even replace computer vision with depth sensing in DIY robocars, just like most full-size self-driving cars do. Intel has been leading a lot of this work with its great Realsense series of depth sensing and tracking cameras, which are under $200 and have a solid SDK and often built-in processing. This post is the latest in my tutorials in using them (a previous post on using the Realsense T265 tracking camera is here).

One of the ways to compare depth sensing cameras with Lidar (2D or 2.5D) is to use them to navigate a maze. There are lots of ways to navigate mazes, including very simple depth sensors such as an array of ultrasonic (sonar) or 1D laser range sensors (“time of flight”), but the common factor between a Lidar unit and a depth sensing camera is that they do more than those 1D sensors: they both return an array of pixels or points with depth information across a sweep of area around the vehicle.

The difference between these two kinds of sensors is that a spinning Lidar unit can potentially show those points around a full 360-degree disc. A depth camera, on the other hand, can typically only see a rectangular area forwards, like a solid-state Lidar, with a horizontal spread of around 90 degrees. Unlike most solid-state Lidars, however, a depth camera typically has a much wider vertical spread (also around 90 degrees).

Cameras tend to have shorter range than Lidar (10m max, compared to 20-30m max for low-cost Lidar) and lose precision the further they are from an object. On the other hand, depth cameras are much higher resolution (a million points per second for a camera vs about 10,000 for a low-cost Lidar).

Cameras are also much cheaper: $180 for the RealSense 435 versus around $840 for the cheapest solid state Lidar, a Benewake CE30-A.

Here’s a table that summarizes the above:

              2D Spinning Lidar     2.5D Solid State Lidar    3D Depth Camera
              (RPLidar A1)          (Benewake CE30-C)         (Intel Realsense 435)
Cost          $120                  $959                      $179
Resolution    8k points/sec         150k points/sec           1m points/sec
Range         12m                   3-4m                      9-10m
Coverage      360 x 1 degrees       132 x 9 degrees           87 x 58 degrees

The purpose of this experiment was to see whether the Realsense Depth Camera could do as a well as a laser. Spoiler: it can! (see below)

A note about the computer I used for this, which was a Raspberry Pi 3. It doesn’t have enough power to really take advantage of the high speed and large data return from the Realsense sensors. However, I used it because I wanted to use the new Sphero RVR as the chassis, since it does a nice job of making all the motor-driver and heading control simple and automatic, thanks to an internal gyroscope and a PID-driven motor controller with encoders: you just tell it what direction to go and it goes there, straight as an arrow.

The RVR can power your onboard computer with 5v via USB (it communicates with the computer via a separate serial cable). That limited me to computers that can be powered at 5v, which includes the Raspberry Pi series, Odroids and the Jetson Nano. However, there is also a low-cost x86 single-board computer (SBC) that Intel recommends, called the UP Board, which runs on 5v, too. So if I were starting this again I’d use that instead, since it should be able to install the Realsense SDK with a simple “sudo apt install” and “pip install”.

The second problem with the Rpi is that the Realsense SDK only works on Ubuntu (as well as Windows, although that wasn’t an option here). Although there is a version of Ubuntu (Ubuntu Mate) that works with the Raspberry Pi 3, support for the more powerful Raspberry Pi 4 is incomplete, and I couldn’t get Realsense to work on it. So Raspberry Pi 3 it was.

Compiling the Realsense SDK (librealsense) on the Rpi 3 can take a full day (unattended, thankfully) and you have to do it again every time Intel releases a new version, so that’s a bit of a hassle. So lesson learned: if you have an UP Board, use that. But if you happen to have a RPi 3, it will work, just more slowly.

Software

Enough of the hardware: let’s talk about the software! My approach to the maze-following task was to basically look at all the depth signatures in front of the car, with as wide a view as possible and a vertical “region of interest” (ROI) set as above the ground and not far above the height of the maze walls. I figure out which path has obstacles furthest away (ie, is most open) and head that way — basically go to where the path is clearest ahead. (This is mathematically the same as avoiding the areas where the obstacles are closest).

In my Python code, I do that the following way. Thirty times a second, the Realsense sensor sends me a depth map (640×480 pixels). To save processing time, I read every fifth pixel from left to right for each scan line within my vertical ROI (in this case from line 220 to 280), creating 128 “stacks”, each of which contains 60 vertical pixels, which is about 8,000 points in total. I then average the values within each stack and display them in a serial terminal in a crude character-based display like this (the smaller dots are the more distant obstacles, ie, the clear path ahead):

I then bin them in blocks of ten of these stacks, for a total of 13 blocks. Each block then gets an average and then I steer towards the block with the highest average (furthest distance). If I get stuck in a corner due to the relatively narrow field of view of the Realsense depth camera (“corner” = all visible directions have obstacles less than 0.75 meters away), I stop and rotate in 15-degree increments until I can see a clear path ahead.
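
Here’s a condensed sketch of that stack-and-bin logic using pyrealsense2 and numpy. The real code is linked above; the ROI rows, stack width and bin size below just mirror the description, while what you do with the winning block (and the “stuck in a corner” rotation) is left out.

import numpy as np
import pyrealsense2 as rs

# Condensed sketch of the stack-and-bin approach described above.
# ROI rows (220-280), every-5th-column stacks and bins of ten follow the text;
# everything else (what you do with best_block) is illustrative.
pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
pipeline.start(config)

try:
    while True:
        frames = pipeline.wait_for_frames()
        depth = frames.get_depth_frame()
        if not depth:
            continue
        img = np.asanyarray(depth.get_data())          # 480x640 array of depth values

        roi = img[220:280, ::5]                        # rows 220-280, every 5th column -> 60x128
        stacks = roi.mean(axis=0)                      # 128 column averages ("stacks")
        blocks = [stacks[i:i + 10].mean() for i in range(0, len(stacks), 10)]  # 13 blocks

        best_block = int(np.argmax(blocks))            # block with the most open space
        # steer toward best_block: 0 = far left, len(blocks) - 1 = far right
        print("clearest block:", best_block)
finally:
    pipeline.stop()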

It’s pretty simple but it works most of the time. That said, a simple 360-degree 2D lidar would do even better in this course, since it would be able to see the best open path at all times without having to stop and rotate. For that matter, a simple sonar “stay 10 cm from the right wall” method would work equally well in this idealized environment (but not in a real-world one). So perhaps a maze is not the best test of the Realsense sensors – a more open-world environment where they are used for obstacle detection and avoidance, not path following, would show off their advantages better.

Lessons learned:

  1. Don’t use Raspberry Pis for Realsense sensors. Use Up boards or Jetson Nanos instead
  2. Depth cameras are as good and way cheaper than solid state Lidar for short-range sensing
  3. That said, 360-degree sensing is better than 90-degree sensing
  4. The Sphero RVR platform is easy to use. It just works!

Using the Intel Realsense T265 and the Jetson Nano for a robocar

Thanks to the astoundingly productive Tawn Kramer, I’ve had a chance to play around a bit with his Donkeycar fork that uses the impressive Intel Realsense T265 visual odometry “tracking camera” instead of a regular RaspberryPi camera. Since Tawn’s original work, the Intel Realsense team has released a number of performance improvements for the T265, so it tracks even better now.

I also decided to use a processor with more power than the Raspberry Pi 3 that Tawn used, so I went with the Jetson Nano, which is the only other non-X86 single-board computer I was able to get the Realsense SDK working on (it won’t work on a Raspberry Pi 4, since Ubuntu Mate, which is required, doesn’t run on that yet. Realsense will work on any X86-based SBC, such as an Up Board or LattePanda, but those are a bit more expensive).

It took a while to get all this working, so this post is just a summary of the steps needed.

Steps:

  1. Follow the Donkeycar installation on Jetson Nano. But when you get to the part about cloning the Donkeycar code, use my fork instead: git clone https://github.com/zlite/donkey.git
  2. Build Intel Librealsense on Jetson Nano
  3. Then build pyrealsense2 from source
  4. Go through this Donkeycar application setup and make sure you note the special Jetson Nano instructions
  5. Set up your joystick, following these instructions. (Update: see TCIII’s comment below for updated instructions.) I used a PS4 controller (see TCIII’s comment below if you’re using an Xbox controller). Note that you’ll have to pair it via Bluetooth with the Nano. I did that in the Nano’s desktop interface by selecting the Bluetooth icon and adding the controller as a Bluetooth device while it was in pairing mode. The instructions say to set the driver to start at boot by editing the rc.local file, but you may find that the Nano doesn’t already have one. No worries, just create it from scratch and add the necessary line like this:
sudo nano /etc/rc.local
paste /home/pi/env/bin/ds4drv --led 00ff00 into the file. Save and exit.  

If you’re using a PS3 controller, in manage.py change this line to say this:

cont_class = PS3JoystickController 

Set bus num to 1 in myconfig.py:

 PCA9685_I2C_BUSNUM = 1

After you’ve done that, set up the directory with this:

 donkey createcar --path ~/follow --template path_follower 

Running:

 cd ~/follow 
 python3 manage.py drive 

Once it’s running, open a browser on your laptop and enter this in the URL bar: http://<your nano’s IP address>:8887

The rest of the instructions from Tawn’s repo:

  • When you drive, this will draw a red line for the path and a green circle for the robot location.
  • Mark a nice starting spot for your robot. Be sure to put it right back there each time you start.
  • Drive the car in some kind of loop. You’ll see the red line show the path.
  • Hit X on the PS3/4 controller to save the path.
  • Put the bot back at the start spot.
  • Then hit the “select” button (on a PS3 controller) or “share” (on a PS4 controller) twice to go to pilot mode. This will start driving on the path. If you want it to go faster or slower, change this line in the myconfig.py file: THROTTLE_FORWARD_PWM = 530
  • Check the bottom of myconfig.py for some settings to tweak: PID values, map offsets and scale, things like that. You might want to start by downloading and using the myconfig.py file from my repo, which has some known-good settings and is otherwise a good place to start.

Some tips:

When you start, the green dot will be in the top left corner of the box. You may prefer to have it in the center. If so, change PATH_OFFSET = (0, 0) in the myconfig.py file to PATH_OFFSET = (250, 250)

For a small course, you may find that the path is too small to see well. In that case, change PATH_SCALE = 5.0 to PATH_SCALE = 10.0 (or more, if necessary)

If you’re not seeing the red line, that means that a path file has already been written. Delete “donkey_path.pkl” (rm donkey_path.pkl) and the red line should show up

It defaults to recording a path point every 0.3 meters. If you want it to be smoother, you can change it to a smaller number in myconfig.py with this line: PATH_MIN_DIST = 0.3

Meet Ross Robinson, the Defender of Humanity

As you know, the DIY Robocars races are partly about beating other cars, but they’re also about beating puny humans. And we’re getting pretty close!

But see that yellow line at the bottom representing the “fastest human time”? That time was set almost a year ago by a human who…let’s put this gently…may not have been the fastest RC driver on the planet. Or possibly even in the room.

So now it’s time to give the humans a chance to defend themselves better before those red and blue lines cross the yellow and our species is rendered unnecessary.

Enter Ross Robinson (above)! By day he’s a robot wrangler at 3DR (as am I), but unlike the rest of us geeks, he’s pretty good at driving a car. And he not only has a fast one, but it’s set up with a camera for First Person View driving, so his view of the track will be exactly the same as the autonomous cars!

So, starting with the Sept 21st event, the last race of the day will be the fastest robot car vs the fastest human present. The first one will be Ross.

No pressure, Ross, but the fate of humanity lies in your hands