Although I’ve posted previous tutorials on adding a shaft encoder to your DIY robocar, sometimes you want the additional precision of wheel encoders. That’s because when you’re turning, the wheels move at different speeds and you can’t measure that from just the main shaft.
The problem is that it’s pretty hard to retrofit wheel encoders to these small cars; there just isn’t much room inside those wheels, especially if you’re trying to add hall-effect sensors, quadrature rotary encoders or even just the usual optical encoder wheel with an LED/photodiode combination to read the regular cutouts as they spin.
However, most wheels already have cutouts! Maybe we don’t need to use an LED at all. Perhaps a photodiode inside the wheel could look out at the light outside the car and just count the pulses as the wheel turns, with the gaps in the wheel letting the light through.
I used these cheap photodiodes, which have both digital and analog outputs, as well as a little potentiometer to calibrate them for the available light.
Then I just connected them to an Arduino and mounted them inside the wheel, as close to the wheel as possible without binding, to avoid light leakage. The Arduino code is below. Just connect the DO pin to Arduino pin 8, and the VCC and GND pins to the Arduino’s VCC and GND pins (it will work with 3.3v or 5v). No need to connect the AO pin.
const int encoderIn = 8;   // Input pin for the photodiode's digital output (DO)
const int statusLED = 13;  // Output pin for the status indicator LED
int detectState = 0;       // Variable for reading the encoder state
int prevState = 1;
int counter = 0;

void setup() {
  Serial.begin(115200);
  pinMode(encoderIn, INPUT);   // Set pin 8 as input
  pinMode(statusLED, OUTPUT);  // Set pin 13 as output
}

void loop() {
  detectState = digitalRead(encoderIn);
  if (detectState == HIGH) {
    detectState = digitalRead(encoderIn);  // Read it again to be sure (crude debounce)
    if (detectState == HIGH) {
      if (prevState == 0) {                // State has changed: a rising edge
        digitalWrite(statusLED, HIGH);     // Turn on the status LED
        counter++;                         // Count the pulse and report it
        Serial.println(counter);
        prevState = 1;
      }
    }
  }
  if (detectState == LOW && prevState == 1) {  // Encoder output has gone low again
    digitalWrite(statusLED, LOW);              // Turn off the status LED
    prevState = 0;
  }
}
When you run a Raspberry Pi in “headless” configuration without a screen, which is typical for a Donkeycar setup, one of the tricky things is knowing what IP address it has been assigned by your Wifi network so you can connect to it. If you control your own network, you may be able to see it in your network control app, in a list of connected devices. But if you don’t control the network, such as at the races we run at Circuit Launch, it’s hard to figure out which Raspberry Pi on the network is yours.
So what you need is a screen that shows you your IP address on startup. This is harder than it should be, in part because of changing startup behavior across different kinds of Linux and generations of Raspbian, and I wrote a post last year on how to do it with a simple LED screen.
Now we’re standardizing on smaller, more modern color OLED screens, so this is an update showing how to use them to display your IP address on startup.
Update: All too typically, after I write this code myself I find that there’s a perfectly good repo already out there that does the same thing by slightly different means. So check that one out and use whichever you prefer.
Note: In the picture above, I’m using the custom Donkeycar RC hat (coming soon to the Donkey Store), which has a built-in OLED screen. But if you don’t have that, you can connect your OLED screen directly with jumper cables as per this tutorial.
Step 1: Install Adafruit’s CircuitPython OLED library. If your Pi boots into a Conda environment, exit it (conda deactivate), then install the library with pip: pip install adafruit-circuitpython-ssd1306
Step 2: Copy this Python file to your top-level user directory (probably /home/pi/). There’s a sketch of what such a script looks like after Step 3.
Step 3: Set it to run at startup. Type crontab -e and then select the editor you prefer (I use Nano). Then add @reboot python3 /home/pi/oled_ip.py to the bottom of the file. Then save it and exit the editor (control-o, control-x).
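For reference, here’s a minimal sketch of the kind of script that file contains (this is not the exact linked file). It assumes a 128x32 SSD1306 display on the Pi’s default I2C pins and that Pillow is installed (pip install pillow); it waits for the network to come up, then draws the first IP address it finds.

# oled_ip.py -- minimal sketch: wait for a network address, then show it on an SSD1306 OLED.
# Assumes a 128x32 display on the default I2C bus and the Pillow library for text drawing.
import subprocess
import time

import adafruit_ssd1306
import board
from PIL import Image, ImageDraw, ImageFont

oled = adafruit_ssd1306.SSD1306_I2C(128, 32, board.I2C())

def get_ip():
    # "hostname -I" lists all assigned addresses; take the first one, if any
    out = subprocess.check_output("hostname -I", shell=True).decode().strip()
    return out.split()[0] if out else None

ip = get_ip()
while ip is None:  # wifi can take a while to associate after boot
    time.sleep(2)
    ip = get_ip()

image = Image.new("1", (oled.width, oled.height))
draw = ImageDraw.Draw(image)
font = ImageFont.load_default()
draw.text((0, 0), "IP address:", font=font, fill=255)
draw.text((0, 16), ip, font=font, fill=255)
oled.fill(0)
oled.image(image)
oled.show()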
I backed the Naobot AI robot on Kickstarter, and actually got two of them (the first was defective, so they sent me another). So I took apart the defective one to see how it works inside. Overall, it looks very well done.
Right now it’s mostly useful as a programming platform, using Scratch in its mobile app. The built-in “AI” functions, such as follow a person or animal, don’t work very well. But on the hardware side, it’s quite impressive.
Here’s what’s inside:
All the “brains” are on a single board in the head, which includes the camera, the processor and the wifi module
Back in the days of the much-missed Sparkfun Autonomous Vehicle Competition (2009-2018), I and other team members did very well with GPS-guided rovers based on the ArduRover software, which was a spin-off from our drone work. Although we won a lot, there was a lot of luck and randomness to it, too, mostly because GPS was not very precise and positions would drift over the course of the day.
Now that GPS and the other sensors used by ArduPilot have improved, as has the codebase itself, I thought I’d come back to it and see how much better I could get a rover to perform with all the latest, greatest stuff.
The answer is: better! Here’s a how-to.
For this experiment, I used a rover with a Pixhawk 4 autopilot, a Reach RTK M+ rover GPS (on the car) and RS+ base station. This is pretty expensive gear (more than $1,000 for the RTK combo alone) so I don’t expect others to be able to duplicate this (I borrowed the gear from work). However, you don’t actually need the expensive base station — you can use the $265 M+ rover GPS by itself, using Internet-based corrections instead of a local base. There are also cheaper solutions, such as the $600 Here+ base/rover combo or just a $125 Here 3 using Internet-based corrections.
The promise of RTK GPS is that by using a pair of high-quality GPSs, one stationary in a known spot and the other moving on the car, you can detect the atmospheric disturbances that lead to GPS drift on the stationary one (since any apparent movement is clearly noise) and then transmit the necessary corrections to the nearby moving GPS, so it can correct itself. The closer the stationary GPS is to the moving one, the more accurate the RTK solution. So my base station right on the track was perfect, but if you use Internet correction sources and are either near the coasts or farming country (farms use RTK GPS for automated agricultural equipment), you should be able to find a correction source within about 10-20km.
If this works, it should avoid the biggest problem with just using the GPS on the car, which is drift. We often found at the Sparkfun AVC that a waypoint mission around the course that worked great in the morning was several meters off in the afternoon because it was using different GPS satellites. So repeatability was a real issue.
My rover is a standard 1/10th-scale RC chassis with a brushed motor and a big-ass foam bumper on the front (trust me, you’ll need a bumper). It honestly doesn’t matter which one you use; even 1/16th scale can work. Don’t bother to get one with an RC radio, since you’ll want more channels than a standard car radio provides (you use the extra channels to do things like switch modes and record waypoints). Something like this, with this radio (you’ll need CPPM output on the RC receiver), would be fine. I tend to use 2S (7.4v) LiPo batteries so the car doesn’t go too fast (that’s the same reason for using a brushed rather than a brushless motor, although either can work). Just make sure it’s got a proper separate steering servo and ESC; none of those cheap toy integrated deals.
A couple things I added that are useful:
It’s hard to get the standard 3DR telemetry radios to work well at ground level, so I use the long-range RFD900+ radios instead. Those are pretty expensive, so you will probably want to use the cheaper and still very good 500mW Holybro radios (which are designed to work with the Pixhawk 4) instead.
I cut out an aluminum circle to put under the Reach RTK GPS receiver as a ground plane, which is recommended. You can also buy those pre-made from Sparkfun.
You’ll notice a two-barrel sensor at the front of the car. That’s a Lidar-Lite range sensor, which I’ll eventually use for obstacle avoidance.
The first task was to set it up, which was not easy.
Setting up the RTK
First, you have to set up the RTK pair. In the case of the Reach RTK, both base and rover (which is the name for the GPS module that moves) can connect via wifi, either to an existing network or to one that they set up themselves. You can either use a web browser or a mobile app to set them up, update the firmware if necessary and monitor their performance. A few things to keep in mind, along with following the instructions:
You do have to tell the base where it is (enter lat/lon), or set it to sample its own position for 2 minutes (use Single mode, since it isn’t getting corrections from anywhere else). I usually use the sampling method, since Google Maps on my phone is not as accurate.
On the rover, select “Kinematic mode” and tell it to accept horizontal motion of 5 m/s
On the rover, set it to stream its position over Serial in NMEA format at 38,400 baud.
If you’re using a Pixhawk 4 like me (and perhaps other Pixhawk-based autopilots), you’ll have to make a custom cable, since for some dumb reason Reach and Pixhawk use Serial connector pinouts that are mirrored. On Pixhawk, V+ is the red wire on the left of the connector, while on the M+ it’s on the right. Super annoying. Basically just take a spare six-pin cable for the Reach and a spare for the Pixhawk (they’re mirrored), cut them and resolder the wires (use heat shrink tubing on each) in the same order counting from the red wire. Like this:
Setting up ArduRover
In my setup, I’m using both the standard GPS that comes with the Pixhawk 4 as well as the Reach RTK GPS.
So we’ll start by setting up ArduRover so it works with the standard Pixhawk GPS, and then we’ll add the second RTK GPS. There are a couple tricks to setting up ArduRover, which you should keep in mind in addition to the instructions:
After you do the regular setup, there are a lot of tuning instructions. This can be a bit overwhelming, but these are the ones that are most important:
First, your steering may be reversed. There are actually two steering settings to check: manual and auto. Start with auto mode: create a mission in Mission Planner, arm, and put your rover in auto mode. If it steers towards the first waypoint, great; if it steers away, your output is reversed, so change the Servo 1 parameter to Reversed. Once you’ve got that sorted, do the same thing for manual mode: if moving your right stick to the right makes the car steer to the right, great; if not, change the RC1 parameter to Reversed.
Once you’ve got that right, you can focus on getting the rover to track the course as well as possible. There is a page on doing this, but three parameters that are good to start with are these:
Steering P (proportional) value. The default is pretty low. Try raising it to 0.8 or 0.9. If your rover zig-zags quickly when it’s supposed to be going straight, you’ve gone too far; dial it back a bit.
Tune the L1 controller. I find that it’s too conservative out of the box. I use a NAVL1_PERIOD of 6 (vs the default 10) and a NAVL1_DAMPING of 0.90 (vs the default 0.75) on my rover.
Adding the second RTK GPS
Once your rover is running well with the built-in GPS, you can add the second, RTK one. Plug it into the second UART port and enable it by setting the following parameters (if you’d rather script this than click through Mission Planner’s parameter list, there’s a sketch of that after the list):
GPS_TYPE2 to 5 (NMEA, which is what the Reach streams)
GPS_BLEND_MASK to 1 (just horizontal)
GPS_BLEND_TC to 5.0s (quick averaging)
GPS_AUTO_SWITCH to 2 (blend)
SERIAL4_BAUD to 38 (38,400 bps)
SERIAL4_PROTOCOL to 5 (GPS)
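If you’d rather set these from a script than click through Mission Planner’s Full Parameter List, here’s a rough sketch using pymavlink over your telemetry link. The serial port and baud rate are assumptions; point them at whatever port your radio shows up on.

# Sketch: set the second-GPS parameters over MAVLink with pymavlink.
# The serial port and baud rate below are assumptions -- adjust for your telemetry radio.
from pymavlink import mavutil

master = mavutil.mavlink_connection("/dev/ttyUSB0", baud=57600)  # assumed telemetry port
master.wait_heartbeat()  # wait until the autopilot is talking to us

params = {
    "GPS_TYPE2": 5,         # NMEA, which is what the Reach streams
    "GPS_BLEND_MASK": 1,    # blend horizontal position only
    "GPS_BLEND_TC": 5.0,    # quick averaging
    "GPS_AUTO_SWITCH": 2,   # blend the two GPSs
    "SERIAL4_BAUD": 38,     # 38,400 bps
    "SERIAL4_PROTOCOL": 5,  # GPS
}

for name, value in params.items():
    master.mav.param_set_send(
        master.target_system,
        master.target_component,
        name.encode(),
        float(value),
        mavutil.mavlink.MAV_PARAM_TYPE_REAL32,
    )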
Once you’ve set all those, reboot your autopilot. If you’re outside, when you restart and give the GPSs time to settle, you should see both listed on the Mission Planner HUD. (Note: it’s important to do all this testing outside. If your RTK can’t see satellites, it won’t stream a solution and you won’t see a second GPS in the Mission Planner.)
When it’s working, you should see this: two GPSs shown in the HUD (it will say “Fixed” or “Float” depending on how good your RTK solution is at any given moment):
We’re going to “blend” the RTK and the regular GPS. Why not just use RTK, which should be more accurate? Because sometimes it totally glitches out and shows a position many meters away, especially when you go under trees. So we use the less accurate but more reliable regular GPS to “smooth out” the position estimate, which dampens the glitches while still letting the (usually) higher-precision RTK dominate. The above parameters work pretty well for me.
Here’s a replay of the log of one run at 5m/s, which is pretty much perfect. If you want to go even faster, you’ll need to tune it a bit more, but this is quite promising.
So, the final question is: is RTK worth it? I think, on balance, the answer is no. New regular GNSS GPSs using the Ublox M8 and M9 chipsets (I like the mRo range) are getting much better, and there are more satellite constellations for them to see. $1,000 and a harder setup is a lot of money and hassle for slightly better performance. So what’s RTK good for? Mostly for industrial rovers that need to be in an exact position, such as agricultural and construction machinery. For DIY Robocar racing, I think you’d be better off with regular GPS.
There’s a lot to like about the Intel OpenBot, which I wrote about last month. Since that post, the Intel team has continued to update the software and I’m hopeful that some of the biggest pain points, especially the clunky training pipeline (which, incredibly, required recompiling the Android app each time) will be addressed with cloud training. [Update 1/22/21: With the 0.2 release, Intel has indeed addressed these issues and now have a web-based way to handle the training data and models, including pushing a new model to the phone without the need for a recompile. Bravo!]
On the hardware side, I could see an easy way to improve upon Intel’s design, which required too much 3D printing and used an ungainly four-wheel-drive layout that was very difficult to turn. There are plenty of very cheap 2WD chassis on Amazon, which use a trailing castor wheel for balance and turning nimbleness. 2WD is cheaper, better and easier; to be honest, Intel should have used it to begin with.
So I modified Intel’s design for that, and this post will show you how you can do the same.
First, buy a 2WD chassis from Amazon. There are loads of them, almost all exactly alike, but here’s the one I got ($12.99)
If you don’t have a 3D printer, you can buy a car phone mount and use that instead.
The only changes you’ll need to make to Intel’s instructions are to drill a few holes in the chassis to mount the phone holder. The plexiglass used in the Amazon kits is brittle, so I suggest cutting a little plywood or anything else you’ve got that’s flat and thin to spread the load of the phone holder a bit wider so as not to crack the plexiglass.
You can see how I used a little plywood to do that below.
You will also have to expand some of the slots in the plexiglass chassis to fit the encoders. If you have a Dremel tool, use a cutoff wheel for that. If not, carefully use a drill to make a line of holes.
Everything else is as per the Intel instructions — the software works identically. I think the three-wheel bot works a bit better if the castor wheel is “spring-loaded” to return to center, so I drilled a hole for a screw in the castor base and wrapped a rubber band around it to help snap the wheel back to center as shown above, but this is optional and I’m not sure it matters all that much. All the other fancy stuff, like the “turn signal” LEDs and the ultrasonic sensor, is also optional (if you want to 3D print a mount for the sonar sensor, this is the one I used and I just hot-glued it on).
I’d recommend putting the Arduino Nano on a small solderless breadboard, which makes adding all the jumper wires (especially the ground and V+ ones) so much easier. You can see that below.
Here it is (below) running in manual mode in our house. The 4 AA batteries work pretty much the same as the more expensive rechargeable ones Intel recommends. But getting rid of two motors is the key — the 2WD version is so much more maneuverable than the 4WD one and just as fast! Honestly, I think it’s better all around: cheaper, easier and more nimble. (Video: “OpenBot with 2WD chassis” on YouTube.)
tl;dr: The Tinkergen MARK ($199) is my new favorite starter robocar. It’s got everything — computer vision, deep learning, sensors — and a great IDE and set of guides that make it all easy and fun.
Getting a robocar design for first-time users right is a tricky balance. It should be like a great videogame — easy to pick up, but challenging to master. Too many kits get it wrong one way or another. They’re either too basic and only do Arduino-level stuff like line-following or obstacle avoidance with a sonar sensor, or they’re too complex and require all sorts of toolchain setups and training to do anything useful at all.
In this post, I list three that do it best — Zumi, MARK, and the Waveshare Piracer. Of those, the Piracer is really meant for more advanced users who want to race outdoors and are comfortable with Python and Linux command lines — it really only makes the hardware side of the equation easier than a fully DIY setup. Zumi is adorable but limited to the Jupyter programming environment running via a webserver on its own RaspberryPi Zero, which can be a little intimidating (and slow).
But the Tinkergen MARK gets the balance just right. Like the others, it comes as a very easy-to-assemble kit (it takes about 20 minutes to screw the various parts together and plug in the wires). Like Zumi, it starts with simple motion control, obstacle detection and line following, but it also has some more advanced functions, like a two-axis gimbal for its camera and the ability to control other actuators. It also has a built-in screen on the back so you can see what the camera is seeing, with an overlay of how the computer vision is interpreting the scene.
Where MARK really shines is the learning curve from basic motion to proper computer vision and machine learning. This is thanks to its web-based IDE and tutorial environment.
Like a lot of other educational robotics kits designed for students, it defaults to a visual programming environment that looks like Scratch, although you can click an icon at the top and it switches to Python.
Videos and guides are integrated into the web interface, and there is a series of courses that you can run through at your own pace. There is a full autonomous driving course that starts with simple lane-keeping and goes all the way to traffic signs and navigation in a city-street-like environment.
MARK also shines in the number of built-in computer vision and deep learning functions. Pre-trained networks include recognizing traffic signs, numbers, animals and other common objects:
Built-in computer vision modules include shapes, colors, lines, faces, AprilTags and targets. Both visual line following (using the camera) and sensor-based line following (using the IR emitter/receiver pairs on the bottom of the car) are supported.
In addition, you can train it to identify new objects and gestures by recording images on the device and then training a deep learning network on your PC, or even by training on the MARK itself for simpler objects.
I also got the track mat with the kit; it’s the right size and contrast to dial in your code so it performs well. Recommended.
In short, this is the best robocar kit I’ve tried — it’s got very polished hardware and software, a surprisingly powerful set of features and great tutorials. Plus it looks great and is fun to use, in large part thanks to the screen at the top that shows you what the car is seeing. A great holiday present for kids and adults alike — you won’t find an easier-to-use computer vision and machine learning experimentation package than this.
Intel has released an open source robocar called OpenBot that uses any Android phone running deep-learning code to do autonomous driving, including navigating in halls or on a track or following a person. The key bit here is the open source Intel Android app, which does all the hard work; the rest of the car is just a basic Arduino and standard motors+chassis.
To be honest, I had not realized that it was so easy to get an Android phone to talk to an Arduino — it turns out that all you need is an OTG (USB Type C to USB Micro or Mini) cable for them to talk serial with each other. (This is the one I used for Arduinos that have a USB Micro connector.) Sadly this is not possible with iOS, because Apple restricts hardware access to the phone/tablet unless you have a special licence/key that is only given out to approved hardware.
The custom Intel chassis design is very neat, but it does require a lot of 3D printing (a few days’ worth). Given how standard the chassis parts are, I don’t really understand why they didn’t just use a kit you can buy on Amazon and have the user 3D print, make, or buy a phone holder; everything else is totally off the shelf, and a standard kit would be cheaper and easier.
I ended up using a chassis I had already, the DFRobot Cherokey. It’s slightly overkill, since it has all sorts of wireless communications options built in, including Bluetooth and an XBee socket, that I didn’t need, but that’s just evidence that you can use pretty much any “differential drive” chassis you have handy (steering is done by running the motors on the left and right at different speeds). Basically any car that uses an Arduino will work with a little tweaking.
I took a few other liberties. I had some motors with built-in quadrature encoders, which I prefer to the cheap optical encoders Intel recommended, so I had to modify the code a bit for them, which meant changing a few pin mappings. (You can see my modified code here.) But otherwise it’s pretty much as Intel intended, complete with the sonar sensor and cute turn signals at the back.
So how does it work? Well, for the easy stuff, great. It does person following right out of the box, so that’s a good test of whether your bot is working right. But the point is to do some training of your own. For that, Intel has you drive manually with a Bluetooth game controller, such as a PS4 controller, to gather data for training on your laptop/PC. That’s what I did, although Intel doesn’t tell you how to pair the controller with your Android phone (update: the answer is to press the controller’s PS and Share buttons until the light starts flashing blue rapidly. Then you should be able to see it in your Android Bluetooth settings’ “pair new device” list. More details here).
But for the real AI stuff, which is learning and training new behavior, it’s still pretty clunky. Like DonkeyCar, it uses “behavioral cloning,” which is to say that the process is to drive it manually around a course with the PS4 controller, logging all the data (camera and controller inputs) on your phone, then transfer a big zip file of that data to your PC, where you run a Jupyter notebook inside a Conda environment that uses TensorFlow to train a network on that data. Then you have to replace one of the files in the Android app source code with this new model and recompile the Android app around it. After that, it should be able to autonomously drive around that same course the way you did.
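To give a sense of what that training step actually builds, here’s a rough sketch of the kind of convolutional network a behavioral-cloning pipeline like this trains: camera frames in, control values out. The input resolution, layer sizes and two-value output here are illustrative assumptions, not OpenBot’s actual architecture.

# Sketch of a behavioral-cloning model: image in, two control values out.
# Sizes are illustrative assumptions, not the real OpenBot network.
import tensorflow as tf

def build_model(img_height=96, img_width=256):
    inputs = tf.keras.Input(shape=(img_height, img_width, 3))
    x = tf.keras.layers.Lambda(lambda t: t / 255.0)(inputs)  # normalize pixel values
    x = tf.keras.layers.Conv2D(24, 5, strides=2, activation="relu")(x)
    x = tf.keras.layers.Conv2D(36, 5, strides=2, activation="relu")(x)
    x = tf.keras.layers.Conv2D(48, 5, strides=2, activation="relu")(x)
    x = tf.keras.layers.Conv2D(64, 3, activation="relu")(x)
    x = tf.keras.layers.Flatten()(x)
    x = tf.keras.layers.Dense(100, activation="relu")(x)
    x = tf.keras.layers.Dense(50, activation="relu")(x)
    outputs = tf.keras.layers.Dense(2)(x)  # e.g. left/right wheel commands or steer/throttle
    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="mse")
    return model

# Training is then just supervised regression on the logged (image, controls) pairs:
# model.fit(images, controls, epochs=10, validation_split=0.1)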
Two problems with this process: first, I couldn’t get the Jupyter notebook to work properly on my data and ran into a whole host of problems, some of which were my fault and some of which were bugs in the code. The good news is that the Intel team is very responsive to issue reports on Github, and I’m sure we’ll get those sorted out, ideally leading to code improvements that will spare later users these pain points. But overall, the data gathering and training process is still way too clunky and prone to errors, which reflects the early beta nature of the project.
Second, it’s crazy that I have to recompile the Android app every time I train a new environment. We don’t need to do that with DonkeyCar, and we shouldn’t have to do that with OpenBot, either. Like DonkeyCar, the OpenBot app should be able to select and load any pretrained model. It already allows you to select from three (person following and autopilot) out of the box, so it’s clearly set up for that. So I’m confused why I can’t just copy a new model to a directory on my phone and select that from within the app, rather than recompiling the whole app.
Perhaps I’m missing something, but until I can get the Jupyter notebook to work it will just have to be a head-scratcher…
There are a load of great DIY projects you can find on this site and elsewhere, such as the OpenMV minimum racer and Donkeycar. But if you don’t feel ready for a DIY build, here are a few almost-ready-to-run robocars that come pretty much set up out of the box.
My reviews of all three are below, but the short form is that unless you want to race outdoors, the Tinkergen MARK is the way to go.
What kind of car should you make autonomous? They all look the same! There are some amazing deals out there! How to choose??
I’m going to make it easy for you. Look for this:
What’s that? It’s a standard RC connector, which will allow you to connect steering and throttle to a standard computer board (RaspberryPi, Arduino, etc). If you see that in a car, it can easily be converted to autonomy. (Here’s a list of cars that will work great.)
If you don’t see that, what you’ve got is CRAZY TOY STUFF. Don’t buy it!
But let’s say you already have, because you saw this cool-looking car on Amazon (picture above) for just $49.
By the looks of it, it’s got all this great stuff:
HIGH PERFORMANCE MOTOR: Can Reach Speeds of Approximately Up to 15 MPH ·
RECHARGEABLE LITHIUM BATTERY: High Performance Lithium-Ion Battery · Full Function Pro Steering (Go Forward and Backward, Turn Left and Right) · Adjustable Front Wheel Alignment
PRO 2.4GHz RC SYSTEM: Uninterrupted, Interference-Free Driving · Race Multiple Cars at the Same Time
Requires 6.4v 500mAh Lithium-Ion Battery to run (Included) Remote Control requires 9v Battery to run (Included)
BIG 1:10 SCALE: Measures At a Foot and a Half Long (18″) · Black Wheels with Premium, Semi-Pneumatic, Rubber Grip Tires · Interchangeable, Lightweight Lexan Body Shell with Metal Body Pins and Rear Racing Spoiler · Approximate Car Dimensions, Length: 18″ Width: 8″ Height: 5″
But is it good for autonomy? Absolutely not. Here’s why:
When you get it, it looks fine:
But what’s inside? Erk. Almost nothing:
That’s not servo-driven steering! 🙁 Instead, it’s some weird thing with a motor, some gears and a spring:
How about the RC? Yikes. Whatever this thing is, you can’t use it (it’s actually an integrated cheap-ass radio and motor controller — at any rate there’s no way to connect a computer to it)
So total write-off? Not quite.
Here’s what you have to do to make it usable:
First, you’ve got to put in a proper steering servo. Rip out the toy stuff and put in a servo with metal gears (this one is good). Strap it in solidly, like I have with a metal strap here:
Now you have to put in a proper RC-style motor controller and power supply. These cheap cars have brushed motors, not brushless ones, so you need to get a brushed ESC. This one is fine, and like most of them has a power supply (called a BEC, or battery elimination circuit), too.
Now you can put in your RaspberryPi and all the other good stuff, including a proper LiPo battery (not that tiny thing that came with it).
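Once the Pi is in, driving the new servo and ESC is just standard RC PWM. Here’s a minimal sketch using the pigpio library; the GPIO pin numbers are assumptions, so use whichever pins you actually wired the signal leads to (and start the daemon first with sudo pigpiod).

# Minimal sketch: drive an RC steering servo and a brushed ESC from a Raspberry Pi with pigpio.
# Pin numbers are assumptions -- match them to your wiring. Standard RC gear expects pulses
# of roughly 1000-2000 microseconds, with 1500 as center/neutral.
import time

import pigpio

STEERING_PIN = 18  # servo signal wire (assumed)
THROTTLE_PIN = 13  # ESC signal wire (assumed)

pi = pigpio.pi()  # connects to the pigpio daemon

pi.set_servo_pulsewidth(STEERING_PIN, 1500)  # center the steering
pi.set_servo_pulsewidth(THROTTLE_PIN, 1500)  # neutral throttle (lets most ESCs arm)
time.sleep(2)

pi.set_servo_pulsewidth(STEERING_PIN, 1700)  # steer a bit to one side
pi.set_servo_pulsewidth(THROTTLE_PIN, 1560)  # gentle forward throttle
time.sleep(1)

pi.set_servo_pulsewidth(THROTTLE_PIN, 1500)  # back to neutral
pi.set_servo_pulsewidth(STEERING_PIN, 1500)  # re-center the steering
pi.stop()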
It’s now been a couple of weeks since Nvidia released its new Jetson Xavier NX board, a $399 big brother to the Jetson Nano (and successor to the TX2) with 5-10 times the compute performance of the Nano (and 10-15x the performance of a RaspberryPi 4) along with twice as much memory (8 GB). It comes with a carrier board similar to the Nano’s, with the same Raspberry Pi GPIO pins, but includes built-in Wifi/BT and an SSD slot, which is a big improvement over the Nano.
How well does it suit DIY Robocars such as Donkeycar? Well, there are pluses and minuses:
Pros:
All that computing power means that you can run deeper learning models with multiple cameras at full resolution. You can’t beat it for performance.
It also means that you can do your training on-car, rather than having to export to AWS or your laptop
Built-in wifi is great
Same price but smaller and way more powerful than a TX2.
Cons:
Four times the price of a Nano
The native carrier board for the Jetson NX runs at 12-19v, as opposed to the Nano, which runs at 5v. That means the regular batteries and power supplies we use with most cars that run a Raspberry Pi or Nano won’t work. You have two options:
1) Power the carrier board with a supply in that 12-19v range, such as a 4S LiPo.
2) Use a Nano’s carrier board if you have one. But you can’t use just any one! The NX will only work with the second-generation Nano carrier board, the one with two camera inputs (it’s called B01).
When it shipped, the NX had the wrong I2C bus mapped to the RPi-style GPIO pins (it used the bus numbers from the older TX2 board rather than the Nano, which is odd because it shares a form factor with the Nano). After I brought this to Nvidia’s attention, they said they would release a utility that lets you remap the I2C bus/pins. Until then, RPi I2C peripherals won’t work unless they allow you to reset their bus to #8 (as opposed to the default #1). Alternatively, if your I2C peripheral has wires to connect to the pins (as opposed to a fixed header), you can use the NX’s pins 27 and 28 rather than the usual 3 and 5, and those will work on Bus 1.
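In the meantime, a quick way to check which bus your peripheral actually responds on is to probe both from Python. This is just a sanity check using the smbus2 package; the 0x68 address below is an example (a common IMU address), so substitute your device’s own.

# Probe I2C buses 1 and 8 for a device -- handy on the Xavier NX, where the RPi-style
# header pins 3/5 initially mapped to bus 8 instead of the usual bus 1.
from smbus2 import SMBus

DEVICE_ADDR = 0x68  # example address only; use your peripheral's actual address

for bus_num in (1, 8):
    try:
        with SMBus(bus_num) as bus:
            bus.read_byte(DEVICE_ADDR)  # succeeds only if the device ACKs
            print(f"Device 0x{DEVICE_ADDR:02x} responded on I2C bus {bus_num}")
    except OSError:
        print(f"No response on I2C bus {bus_num}")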
I’ve managed to set up the Donkey framework on the Xavier NX, and there were a few issues, mostly involving the fact that it ships with the new Jetpack 4.4, which requires a newer version of TensorFlow than the standard Donkey setup uses. The Donkey docs and installation scripts are being updated to address that, and I’m hoping that by the time you read this the setup will be seamless and automatic. In the meantime, you can use these installation steps and it should work. You can also get some debugging tips here.
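Once the install finishes, a quick sanity check that you ended up with a GPU-enabled TensorFlow build matched to your Jetpack is to run something like this before trying a full Donkey calibration:

# Confirm TensorFlow imports cleanly and sees the Jetson's GPU
import tensorflow as tf

print("TensorFlow version:", tf.__version__)
print("GPU devices:", tf.config.list_physical_devices("GPU"))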
I’ll also be trying it with the new Nvidia Isaac robotic development system. Although the previous version of Isaac didn’t work with the Xavier NX, version 2020.1 just came out so fingers crossed this works out of the box.