Blog

A DIY Autonomous Robotic Lawnmower

Whoa, wait a minute! Isn’t this a Robo Car website? True, but wouldn’t it be nice to have an autonomous lawnmower that saves you from spending time mowing your lawn and therefore gives you more free time for RC Car racing? If so, read on for an idea of how I built my low-budget prototype autonomous robotic lawnmower, but please remember this is just a guide and none of this design is set in stone.

Warning! This article describes the process of building an autonomous lawn mower for your entertainment and scientific inquiry. THIS IS NOT AN INSTRUCTION MANUAL OR GUIDE FOR BUILDING AUTONOMOUS MOWERS. IF YOU DECIDE TO BUILD ANYTHING YOU ARE DOING SO AT YOUR OWN RISK. A full-size autonomous lawnmower like this one can be a VERY DANGEROUS machine; please carefully consider the implications before even thinking about building such a machine.

Additionally, ArduPilot’s Developer Code of Conduct explicitly excludes ArduPilot from running systems where ArduPilot is effectively in control of human lives.

If you search the Internet for “diy autonomous robotic lawnmowers” you will come across a vast selection of articles ranging from simple and inexpensive to very complex and very costly. Autonomous robotic lawnmowers have been around for quite a while; traditionally they were kept within the lawn cutting area by sensing a wire buried around the perimeter of the cutting area of interest. However, now that GPS, and especially RTK GPS, guidance controllers have come into the hobbyist price range, building an autonomous robotic lawnmower using RTK GPS guidance has become a reality. Whereas the wire-sensing mowers usually followed a random cutting pattern, sometimes covering the same area multiple times, an RTK GPS guided mower can be programmed to meticulously cut a nonrepeating swath of grass row after row until the lawn is completely mowed. Still interested? Now on to the nuts and bolts of building an autonomous robotic lawnmower.

The typical autonomous robotic lawnmower is usually composed of a chassis, some form of motive power, a cutting head, a power source, and some form of cutting path guidance. I chose 1/4 inch thick black ABS plastic sheets to build the chassis, which consists of two plates: one 19 inches X 12 inches and a smaller one 15 inches X 12 inches. The larger plate serves as the mounting point for the differential steering motor controller, the drive motors, the cutting head motor, the batteries, the power switches, and a speed controller for the cutting head motor. The three photos below show how I positioned the aforementioned components on the main 19 inches X 12 inches plate. The smaller plate is attached to the main chassis plate with 72mm long 1109 Series goRAIL, one in each corner using appropriate metric hardware, to provide a mounting surface for the RTK GPS Path Guidance Module as shown in the fifth photo below.

The differential steering motor controller receives steering and throttle PWM signals from the RTK GPS Path Guidance Module and converts those PWM signals to differential steering and throttle voltages to control the two 12vdc drive motors.
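As an aside, the mixing such a controller performs can be sketched in a few lines of Python. This is just an illustration, not the controller’s documented behavior; the 1000 to 2000 µs pulse range centered at 1500 µs is the usual RC convention and an assumption here.

```python
def mix_differential(steering_us, throttle_us, center=1500.0, half_range=500.0):
    """Convert steering/throttle PWM pulse widths (microseconds) into
    left/right motor commands in the range -1.0..1.0.

    Assumes standard RC pulses of 1000-2000 us centered at 1500 us."""
    steer = (steering_us - center) / half_range      # -1.0 .. 1.0
    throttle = (throttle_us - center) / half_range   # -1.0 .. 1.0
    # Steering right speeds up the left wheel and slows the right wheel
    left = max(-1.0, min(1.0, throttle + steer))
    right = max(-1.0, min(1.0, throttle - steer))
    return left, right
```

Full throttle with centered steering drives both wheels equally; centered throttle with full steering spins the wheels in opposite directions for a pivot turn.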

The drive motors are mounted in U-channel with L-brackets, shown in the fourth photo below, and are coupled to two GLOBACT 1/8 1/10 17mm Hex RC Wheels and Tires using a D-shaft clamp, a 12mm Hex Wheel Adaptor, and finally a 12mm Hex to 17mm Hex Wheel Adaptor with appropriate metric machine screws. The rear caster wheel is 3 inches in diameter, can be purchased here, and mounted to the Main Chassis with appropriate hardware.

The cutting head motor shaft, 8 mm in diameter, is connected to the cutting head with an 8 mm to 10 mm threaded shaft adapter whose threaded 10 mm shaft conveniently fits the mounting hole in the cutting head. The cutting head motor, as seen in the three photos below, is mounted on its own vertically adjustable platform with its threaded shaft adapter penetrating the main chassis plate to couple with the cutting head on the underside of the main chassis plate as shown in the fourth photo below.

The two sets of 12vdc battery pairs, seen in the three photos below, provide power to the differential steering motor controller which powers the drive motors and the speed controller that powers the cutting head motor. I chose 12vdc NiMH 3amp-hr batteries because they do not have to be removed from the chassis to be charged as would LiPo batteries that require a balancing charger. Additionally the NiMH batteries are heavy, compared to a LiPo battery, and help provide additional weight that keeps the chassis moving smoothly over thick grass.

The power switches, seen on the left in photo three below, are a pair of SPDT center-OFF switches. They allow me either to provide battery power to the differential steering motor controller and the cutting head motor speed controller, or to charge the two sets of battery pairs through their respective charging ports adjacent to each power switch.

Since DC motors draw the most current at stall or when first turned on, I decided to power the cutting head motor with a variable PWM speed controller which can be seen on the right in photo three below. Employing this type of controller allows me to switch on the cutting head motor and gradually bring it up to the desired cutting head speed which protects the 12vdc NiMH batteries from a high current surge when starting the cutting head motor.
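A rough idea of what that gradual spin-up looks like, sketched in Python. The ramp time and update rate here are arbitrary placeholder values, not settings from my speed controller, which I ramp by hand with its knob:

```python
def soft_start_duty_cycles(target_duty, ramp_seconds, update_hz=50):
    """Generate a linear ramp of PWM duty cycles (0.0-1.0) so a DC motor
    spins up gradually instead of drawing a stall-current surge."""
    steps = int(ramp_seconds * update_hz)
    # Each entry is the duty cycle to command at one update interval
    return [target_duty * (i + 1) / steps for i in range(steps)]
```

Feeding these values to a PWM output one update interval apart brings the motor from rest to the target speed smoothly.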

Okay, now that we have the lawnmower chassis put together, how are we going to program the mower to meticulously cut a nonrepeating swath of grass row after row until the lawn is completely mowed? This is where the RTK GPS Guidance Module comes into play. You can build your Guidance Module as you see fit, but at a minimum it should include an RTK GPS module, a GPS L1/L2 antenna, a telemetry radio compatible with the Base Station telemetry radio, a Single Board Computer (SBC), and an RC receiver. If you like using ArduRover/Mavlink for controlling your lawnmower, you can use a Pixhawk 2.1 as your SBC and add the necessary RC receiver, telemetry radio, and RTK GPS module.

My Lawnmower Guidance Module employs an RTK GPS module that provides RTCM-corrected X and Y position coordinates to an appropriate Single Board Computer (SBC) running the DC path_follow template, which is used to record and play back a mowing path sequence over the lawn cutting area of interest. Using an RC transmitter to control the lawnmower steering and throttle through an RC receiver attached to the SBC through an RC Mux, I recorded, while in the User Mode, the row-by-row path that I wanted the lawnmower to cut in the area of interest. Placing the lawnmower back at the start of the recorded path, I then set the RC Mux to use the output of the SBC for steering and throttle guidance instead of the RC transmitter, put the lawnmower in the Full Auto mode, and watched it repeat the previously recorded path. Yes, I know that this is kind of a primitive way to record and play back the mowing path compared to using ArduRover/Mavlink, but hey, this is a budget build.
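For the curious, the core of any record-and-playback scheme like this can be sketched in Python. This is a hypothetical illustration, not the actual DC path_follow template code; the proportional gain and the arrival radius are made-up example values.

```python
import math

def steer_to_waypoint(pos, heading_rad, waypoint, k_p=1.0):
    """Proportional steering toward the next recorded waypoint.
    Returns a steering command in -1.0..1.0 (positive = turn left)."""
    dx, dy = waypoint[0] - pos[0], waypoint[1] - pos[1]
    bearing = math.atan2(dy, dx)
    # Wrap the heading error into [-pi, pi] so the mower turns the short way
    error = math.atan2(math.sin(bearing - heading_rad),
                       math.cos(bearing - heading_rad))
    return max(-1.0, min(1.0, k_p * error))

def next_waypoint_index(pos, path, index, arrive_m=0.5):
    """Advance along the recorded path once the rover is within
    arrive_m meters of the current waypoint."""
    wx, wy = path[index]
    if math.hypot(wx - pos[0], wy - pos[1]) < arrive_m and index + 1 < len(path):
        return index + 1
    return index
```

Recording is just appending RTK-corrected (X, Y) fixes to a list while driving manually; playback loops these two functions until the last waypoint is reached.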

Comments or questions? Please post below.

Main Chassis Front View (1)

Main Chassis Side View (2)

Main Chassis Rear View (3)

Main Chassis Bottom View (4)

Chassis Upper Mounting Plate with Bobble Head Mandalorian and Baby Yoda (5)

Rear of RTK GPS Guidance Module on upper Chassis (6)

Towards an affordable Base Station for outdoor RTK GPS RC Car racing

Even though RTK GPS hardware has come down in price substantially over the last couple of years, building an RTK GPS Base Station is still not inexpensive for the average RC Car enthusiast. However, there is hope, and I will show how a barebones Base Station can be built at a reasonable out-of-pocket cost.

The cost of the Base Station I described and built in my recent Blog post Using the Donkey Car Path_Follow Template with RTK GPS pencils out to around $580 (plus shipping for some of the parts) which puts it over $600. This barebones Base Station can be built for under $400. Interested? Then read on.

The Barebones Base Station, unlike my original Base Station, integrates the RTK GPS module with the ESP32 WROOM processor and does away with the LCD Display, but retains the Sparkfun L1/L2/L5 Survey Grade antenna (for future-proof operation) and the Sik telemetry radio for RTCM message transmission to the Rover. The integrated RTK GPS module/processor (the PX1122R L1/L2 RTK Evaluation Board shown above) comes from the NavSpark Store in Taiwan and can be found here on the store website. Before you get all excited about the low price of $95 compared to the Sparkfun ZED-F9P RTK GPS module, just remember that the module ships from Taiwan by FedEx air, which will probably run between $50 and $60 depending on where you live; in my case it added $55 to the cost of the module as I live on the US East Coast. Obviously the closer to Taiwan you live, the cheaper the FedEx shipping will be. I queried the NavSpark Store about a possible US distributor, but it was a no-go as they said the overhead was too high to make a reasonable profit. Bummer to say the least.

Before moving on to the programming and evaluation of the PX1122R L1/L2 RTK Evaluation Board, a word about the Sparkfun L1/L2/L5 Survey Grade antenna. The L5 band is pre-operational and should be fully in place by 2027, so the Sparkfun antenna should future-proof your Base Station as RTK GPS modules that support the L1/L2/L5 bands become available. However, there is a cheaper L1/L2 Survey Grade antenna that can be found here for $78. I have tested this Beitian BT-160 on a NavSpark Store PX1122R Breakout Board and it provided performance comparable to the Sparkfun Survey Grade antenna for about $50 less.

The programming of the PX1122R L1/L2 RTK Evaluation Board can be accomplished by following the steps in the Getting Started with PX1122R RTK Evaluation Board to install the Windows GNSS Viewer on page 3 and for setting up the EVB as a Base Station (2/3) using the Survey Mode on page 10. Based on my experience with the Sparkfun ZED-F9P using the survey-in mode, I recommend a Survey Time of 600 (sec) and a Standard Deviation value of 30 (cm) though other users seem to like the default values of a Survey Time of 60 (sec) and a Standard Deviation value of 30 (cm).

Now is the time to select a suitable enclosure for the Evaluation Board (EVB) and attach the Sik telemetry radio and a suitable Survey Grade antenna, as seen without an enclosure in my prototype photo below and as the completed Base Station in the second photo below. The Very Quick Short Baseline Test (1/2) on page 13 can be used to determine the location of the TX, GND, and 5V_O pins on the EVB that should be connected to the Sik telemetry radio to transmit the RTCM correction messages to the Rover. The EVB TX pin should be connected to the RX (2) pin on the Sik telemetry radio, while GND and 5V_O connect to the Sik telemetry radio Ground (6) and Power (1) pins respectively.

The EVB TX pin must be set to 57,600 baud as shown in the third photo below. Unfortunately the NavSpark Store designers did not use a separate UART to transmit RTCM correction messages, as Sparkfun does, so USB1 and the TX/RX UART must run at the same rate of 57,600 baud. This should not create a problem since the RTCM correction messages output on USB1 are for observation only with the GNSS Viewer, so speed is not an issue.

The only item left is to verify the RTCM correction message output. This can be accomplished by clicking on the GNSS Viewer RAW tab and selecting “Configure RTCM Measurement Data Out” as shown in the fourth photo below. Since the “RTCM Measurement Data Out” default configuration was very similar to the Sparkfun recommended configuration for the ZED-F9P Base Station, I left it as is. If you decide to make changes to the default configuration, make sure you select “Update to SRAM+Flash” and then hit the “Accept” button.

Prototype Base Station
Completed Base Station
Configure TX Baud Rate
Configure RTCM Measurement Data Out

Now comes the proof of the pudding so to speak. I started up the PX1122R L1/L2 RTK Breakout Board located on my DC Lawnmower Rover by plugging the Breakout Board Sparkfun UART to USB converter output into my trusty laptop running the GNSS Viewer. I then plugged the Base Station Evaluation Board USB1 output into a suitable 5 vdc supply and watched the PX1122R L1/L2 RTK Breakout Board GNSS Viewer output shift from “Position fix 2D” in the Message Bar to “Float RTK” and finally to “Fix RTK” (see photo below) once the Base Station Evaluation Board completed its Survey Mode and began transmitting RTCM correction messages to the Rover Breakout Board. If you are using a Sparkfun ZED-F9P RTK GPS module on your Rover, you should see the module RTK yellow LED go from solid yellow to flashing yellow (Float) and finally go out completely (Fix) once the Base Station has completed the Survey Mode and a Fix solution is reached.

Fix RTK

Questions or comments? Please post below.

Using the Donkey Car Path_Follow Template with RTK GPS

Almost three years ago zlite contributed a blog post titled “Using ArduRover with an RTK GPS”. It was an excellent article that detailed using existing RTK GPS hardware available at the time to provide RC Car course guidance using ArduRover with a Base Station/Rover RTK GPS configuration. In this blog post I will detail how to use the Donkey Car Path_Follow template with RTK GPS hardware.

The Donkey Car Path_Follow template can be configured to use either wheel encoders or GPS to record a path and then have the RC Car autonomously track the recorded path from beginning to finish. For those of you who are not familiar with Donkey Car, the official website, with instructions on how to build, configure, drive, and then train an RC Car using either the Raspberry Pi or NVIDIA Nano 4GB Single Board Computers (SBC), can be viewed here. This article assumes that the user has already built their RC Car, programmed the selected SBC with an appropriate Operating System (OS) and the Donkey Car app, created the car application using the path_follow template, and is ready to procure, install, and program the necessary RTK GPS hardware.

There are presently two solutions for using RTK GPS hardware with the path_follow template car application, one less expensive than the other. Both solutions require a Rover RTK GPS module and an appropriate antenna for the car, but they differ in where the Rover’s required course corrections come from. The more expensive approach requires either the purchase or construction of an RTK GPS Base Station, while the cheaper solution uses Internet-based corrections instead of a local base. A detailed tutorial, by Donkey Car maintainer Ezward, on using the Internet-based corrections solution can be found here, while the rest of this blog post will be devoted to describing either purchasing or building a suitable RTK GPS Base Station, a telemetry system to communicate with the Rover, and the Rover RTK GPS module/antenna.

For those users with deep pockets, a ready-built RTK GPS Base Station can be procured from Sparkfun.com here. For those of you who like to roll your own, as I did, here are the steps I took to build a Base Station; they assume you know how to use the U-Blox U-Center, are proficient with the Arduino IDE, and have good soldering skills.

For a start, here is a list of the required hardware for the Base Station that I built:

  1. Sparkfun ZED-F9P RTK GPS module
  2. Sparkfun Thing Plus – ESP32 WROOM (USB-C) module
  3. Sparkfun 20 x 4 SerLCD display module
  4. Sparkfun GNSS Multi-Band L1/L2/L5 Surveying Antenna
  5. SiK Telemetry Radio V3 pair
  6. Sparkfun Reinforced Interface Cable
  7. Sparkfun Antenna Thread Adapter
  8. Sparkfun Qwiic Cable Kit Hook Up I2C
  9. Appropriate housings for the LCD Display and the GPS/WROOM modules
  10. Misc USB A to USB C cables for programming the ZED-F9P and ESP32 WROOM modules
  11. A suitable portable 5 vdc power supply, e.g. an Anker Power Core 13000 Power Bank

Before moving ahead with the construction of the Base Station, you might want to familiarize yourself with the following tutorials:

  1. GPS-RTK2 Hookup Guide
  2. Setting up a Rover Base RTK System
  3. Sik Telemetry Radio
  4. ESP32 Thing Plus (USB-C) Hookup Guide

To configure the Base Station Sparkfun ZED-F9P RTK GPS module I followed the detailed steps in the Sparkfun “Setting up a Rover Base RTK System” tutorial with the following exceptions:

  1. I did not complete the initiation of the survey-in mode section of the tutorial because the ZED-F9P will not go into the survey-in mode from a cold start.
  2. I did not attach the first Sik telemetry radio to the ZED-F9P because I wanted to power the telemetry radio with 5 vdc which is not available on the Qwiic interconnection bus.
  3. See below how I programmed the Sparkfun Thing Plus to put the ZED-F9P into the survey-in mode from a cold start.

How to put the ZED-F9P into the survey-in mode from a cold start:

  1. Download the Sparkfun Example4_BaseWithLCD.ino
  2. Follow the Sparkfun ESP32 Thing Plus (USB-C) Hookup Guide to program the Thing Plus with the Example4 .ino code, which has been updated since I worked with it.
    a) The “delay(5000)” calls have been removed after the lcd.print statements, which means messages will just flash by on the LCD display.
  3. I made the following changes to the Example4 .ino code:

a) I changed the code at line 52 to Serial1.begin(57600) to let the ESP32 Thing Plus Serial1 port transmit the RTCM correction messages.

b) I commented out the code at lines 81 and 82 because I am using the ESP32 Thing Plus Serial1 to transmit the RTCM correction messages over the telemetry link.

c) I commented out the following section of code: line 85 to line 105 because those CFG values were previously configured during the “Setting up a Rover Base RTK System” tutorial.

d) At line 129 I changed the 60 sec and 5.0 m to 600 sec and 1.0 m and usually get a position accuracy of under 0.7 m after 600 sec on a clear day.

e) I removed line 206 and replaced the code beginning at line 216 with the following code to allow the ESP32 Thing Plus Serial1 to transmit the RTCM messages over the telemetry link :

#ifdef USE_SERIAL1
  // Push the RTCM data to Serial1
  Serial1.write(incoming);
#endif

  // Pretty-print the HEX values to Serial
  if (myGNSS.rtcmFrameCounter % 16 == 0) Serial.println();
  Serial.print(F(" "));
  if (incoming < 0x10) Serial.print(F("0"));
  Serial.print(incoming, HEX);
}

To wire the Sik telemetry radio to the Sparkfun ESP32 Thing Plus, I put individual female pins on the telemetry radio power, ground, TX, and RX pins and connected them to the appropriate berg header pins soldered to the ESP32 Thing Plus PWB at TX, RX, GND, and FREE. I soldered a jumper wire from FREE to V_USB to provide 5 vdc to the telemetry radio.

I then connected the ESP32 Thing Plus to the ZED-F9P and the LCD Display using the Qwiic bus cables from the parts list. Powering the ESP32 Thing Plus USB C connector from the 5 vdc Power Bank provides 3.3 vdc power to the ZED-F9P and the LCD Display over the Qwiic bus and starts the ZED-F9P in the survey-in mode.

When 5 vdc power is applied to the ESP32 Thing Plus USB C connector, the LCD Display will display “LCD Ready” followed by “GNSS Detected” indicating that the modules connected over the Qwiic bus are operational and functioning correctly. The ESP32 Thing Plus will then put the ZED-F9P in the survey-in mode and display “Survey in progress” at the top of the LCD Display, followed by “Elapsed: ” and “Accuracy: ” on individual lines below. The “Elapsed” time will count upwards to whatever survey time you have selected and the “Accuracy” value will count downwards towards whatever survey accuracy you have chosen. The survey-in mode will terminate only when both the survey time and the survey accuracy values selected are met. The LCD Display will then display “Transmitting RTCM” and the Sik telemetry radio red transmit LED will begin to flash at a 1Hz rate indicating the telemetry transmission of RTCM correction messages.

Below is a shot of my completed Base Station sitting on a camera tripod. The PVC mount for the Survey Grade L1/L2/L5 antenna can be seen in the back of the Base Station along with the Sik telemetry radio antenna.

Moving on to the Rover side of the Base Station/Rover setup, you will need an additional Sparkfun ZED-F9P module or equivalent, an appropriate L1/L2 band antenna, connecting cables, a ground plane for the antenna, and the second Sik telemetry radio. Find appropriate mounting points on your RC Car for the GPS module, antenna, and Sik telemetry radio. I attached the second Sik telemetry radio to the Rover ZED-F9P module the same way that the Sparkfun “Setting up a Rover Base RTK System” tutorial attached the Sik telemetry radio to the Base Station ZED-F9P, with one exception: instead of soldering the Sik telemetry radio harness directly as in the tutorial, I used berg pin headers on the designated ZED-F9P UART2 pins and ran a jumper wire from an unused 5 vdc pin hole on the module to an unused pin hole near the UART2 pins, so I could build a wire harness connector for the Sik telemetry radio.

As far as Rover RTK GPS configurations go, I found the ArduSimple Rover 10Hz configuration file to work well with the Rover ZED-F9P though I kept the UART2 baud rate at 57600 and I removed all but the default GNRMC message from the USB output using the “Message” function in the U-Center Configuration view. However you are free to select whatever configuration suits your fancy.

Here is a shot of my Rover ZED-F9P module and Sik telemetry radio sub-chassis that mounts on my Traxxas E-Maxx DC test vehicle. Not shown is the U-Blox L1/L2 band antenna that connects to the female SMA chassis connector at the back of the sub-chassis.

To ensure that the Base Station and the Rover are communicating correctly once the hardware has been built and configured, perform the following:

  1. Setup the Base Station outside with a clear view of the sky and turn it on.
  2. Bring the Rover outside and power up the SBC to provide power to ZED-F9P connected to a USB port.
  3. The Rover ZED-F9P PPS LED should begin flashing after the cold start and the RTK LED should be a solid yellow indicating no “Float” or “Fix” solution.
  4. After the Base Station LCD Display indicates that the Base Station has completed the survey-in mode and is transmitting RTCM messages, the Rover ZED-F9P RTK LED should start flashing indicating a RTK “Float” solution (<500 mm) and then go out completely indicating a RTK “Fix” solution (<14 mm).
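If you would rather confirm the fix in software than watch the LEDs, the fix-quality field of the standard NMEA GGA sentence reports the RTK state (0 = no fix, 1 = autonomous GPS, 2 = DGPS, 4 = RTK Fix, 5 = RTK Float). A minimal Python sketch; the sentence in the usage note is a fabricated example, not output from my hardware:

```python
def gga_fix_quality(sentence):
    """Extract the fix-quality field from an NMEA GGA sentence.

    Field layout: $xxGGA,time,lat,N/S,lon,E/W,quality,...
    Returns the integer quality code (4 = RTK Fix, 5 = RTK Float)."""
    fields = sentence.split(',')
    if not fields[0].endswith('GGA'):
        raise ValueError('not a GGA sentence')
    return int(fields[6])
```

For example, `gga_fix_quality("$GNGGA,123519,4807.038,N,01131.000,E,4,12,0.6,545.4,M,46.9,M,,*47")` reports quality 4, an RTK Fix.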

Getting back to Donkey Car now that the RTK GPS Base Station and Rover hardware are functioning correctly, the Path Follow Autopilot (using GPS, wheel encoders, etc) guide can be found here. The Train a path follow autopilot link will get you going by providing detailed instructions for configuring your myconfig.py file and recording and playing back a recorded path.

Super simple wheel encoder

Although I’ve posted previous tutorials on adding a shaft encoder to your DIY robocar, sometimes you want the additional precision of wheel encoders. That’s because when you’re turning, the wheels move at different speeds and you can’t measure that from just the main shaft.
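To make that concrete, here is a Python sketch of the differential odometry that per-wheel counts make possible; the dimensions in the usage note are made-up example values, and the formulas are the standard differential-drive approximation:

```python
import math

def wheel_odometry(left_ticks, right_ticks, ticks_per_rev,
                   wheel_diameter_m, track_width_m):
    """Distance travelled by each wheel, plus the heading change that a
    single main-shaft encoder cannot see.

    Returns (forward_m, dtheta_rad); positive dtheta means a left turn."""
    circ = math.pi * wheel_diameter_m
    left_d = left_ticks / ticks_per_rev * circ
    right_d = right_ticks / ticks_per_rev * circ
    forward = (left_d + right_d) / 2.0
    dtheta = (right_d - left_d) / track_width_m
    return forward, dtheta
```

With equal tick counts the heading change is zero; when the right wheel outruns the left, dtheta comes out positive, which is exactly the turning information a shaft encoder throws away.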

The problem is that it’s pretty hard to retrofit wheel encoders to these small cars; there just isn’t much room inside those wheels, especially if you’re trying to add hall-effect sensors, quadrature rotary encoders or even just the usual optical wheel with a LED/photodiode combination to read the regular cutouts as they spin.

However, most wheels already have cutouts! Maybe we don’t need to use an LED at all. Perhaps a photodiode inside the wheel could look out at the lights outside the car and just count the pulses as the wheel turns, with the gaps in the wheel letting the light through.

I used these cheap photodiodes, which have both digital and analog outputs, as well as a little potentiometer to calibrate them for the available light.

Then I just connected them to an Arduino and mounted them inside the wheel, as close to the wheel as possible without binding, to avoid light leakage. The Arduino code is below. Just connect the DO pin to Arduino pin 8, and the VCC and GND pins to the Arduino’s VCC and GND pins (it will work with 3.3v or 5v). No need to connect the AO pin.

const int encoderIn = 8;  // Input pin for the photodiode
const int statusLED = 13; // Output pin for status indicator
int detectState = 0;      // Variable for reading the encoder status
int prevState = 1;
int counter = 0;

void setup() {
  Serial.begin(115200);
  pinMode(encoderIn, INPUT);  // Set pin 8 as input
  pinMode(statusLED, OUTPUT); // Set pin 13 as output
}

void loop() {
  detectState = digitalRead(encoderIn);
  if (detectState == HIGH) {
    detectState = digitalRead(encoderIn); // Read it again to be sure (debounce)
    if (detectState == HIGH && prevState == 0) { // State has changed
      digitalWrite(statusLED, HIGH); // Turn on the status LED
      counter++;
      Serial.println(counter);
      prevState = 1;
    }
  }
  if (detectState == LOW && prevState == 1) { // Encoder output has gone low
    digitalWrite(statusLED, LOW); // Turn off the status LED
    prevState = 0;
  }
}

Show your RaspberryPi IP address on startup with an OLED

When you run a Raspberry Pi in “headless” configuration without a screen, which is typical for a Donkeycar setup, one of the tricky things is knowing what IP address it has been assigned by your Wifi network, so you can connect to it. If you control your own network, you may be able to see it in your network control app, shown in a list of connected devices. But if you don’t control your own network, such as the races we run at Circuit Launch, it’s hard to figure out which Raspberry Pi on the network is yours.

So what you need is a screen that shows you your IP address on startup. This is harder than it should be, in part because of changing startup behavior across different kinds of Linux and generations of Raspbian, and I wrote a post last year on how to do it with a simple LED screen.

Now we’re standardizing on smaller, more modern color OLED screens. So this is an update to show how to use them to show your IP address on startup.

Update: All too typically, after I write this code myself I find that there’s a perfectly good repo already out there that does the same thing by slightly different means. So check that one out and use whichever you prefer.

Note: In the picture above, I’m using the custom Donkeycar RC hat (coming soon to the Donkey Store), which has a built-in OLED screen. But if you don’t have that, you can connect your OLED screen directly with jumper cables as per this tutorial.

Step 1: Install Adafruit’s Circuit Python OLED library. If your Pi boots into a Conda environment, exit that (conda deactivate), then install the library with pip: pip install adafruit-circuitpython-ssd1306

Step 2: Copy this Python file to your top level user directory (probably /home/pi/)

Step 3: Set it to run at startup. Type crontab -e and then select the editor you prefer (I use Nano). Then add @reboot python3 /home/pi/oled_ip.py to the bottom of the file. Then save it and exit the editor (control-o, control-x).
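For reference, the address-discovery half of such a script needs nothing beyond the Python standard library. This is a sketch of the common UDP-socket trick, not the contents of the linked file:

```python
import socket

def get_local_ip():
    """Find the IP address the Pi would use to reach the network.

    A UDP 'connect' never sends a packet; it just makes the OS pick
    the outbound interface, whose address we then read back."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        s.connect(('8.8.8.8', 80))  # any routable address works
        return s.getsockname()[0]
    except OSError:
        return '127.0.0.1'  # no network route yet
    finally:
        s.close()
```

The rest of the script is just handing the returned string to the OLED library’s text-drawing call.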

Teardown of Naobot AI robot

I backed the Naobot AI robot on Kickstarter, and actually got two of them (the first was defective, so they sent me another). So I took apart the defective one to see how it works inside. Overall, it looks very well done.

Right now it’s mostly useful as a programming platform, using Scratch in its mobile app. The built-in “AI” functions, such as follow a person or animal, don’t work very well. But on the hardware side, it’s quite impressive.

Here’s what’s inside:

All the “brains” are in the board in the head that includes the camera, the processor and the wifi board

Here’s a close-up of that board
Inside the arm are motor gearboxes for each joint
Each gearbox has the usual motor, gears and a potentiometer or encoder to measure position
Here’s a close-up of the wheel encoders
The motor train is well-designed and has wheel encoders on both tracks
Here’s the gear train for the “fingers”
This one rotates the “wrist”
A close-up of the potentiometers that close the loop for motor movement
This is the “elbow” joint

Using ArduRover with an RTK GPS

Back in the days of the much-missed Sparkfun Autonomous Vehicle Competition (2009-2018), I and other team members did very well with GPS-guided rovers based on the ArduRover software, which was a spin-off from our drone work. Although we won a lot, there was a lot of luck and randomness to it, too, mostly because GPS was not very precise and positions would drift over the course of the day.

Now that GPS and the other sensors used by ArduPilot have improved, as has the codebase itself, I thought I’d come back to it and see how much better I could get a rover to perform with all the latest, greatest stuff.

The answer is: better! Here’s a how-to.

For this experiment, I used a rover with a Pixhawk 4 autopilot, a Reach RTK M+ rover GPS (on the car) and RS+ base station. This is pretty expensive gear (more than $1,000 for the RTK combo alone) so I don’t expect others to be able to duplicate this (I borrowed the gear from work). However, you don’t actually need the expensive base station — you can use the $265 M+ rover GPS by itself, using Internet-based corrections instead of a local base. There are also cheaper solutions, such as the $600 Here+ base/rover combo or just a $125 Here 3 using Internet-based corrections.

The promise of RTK GPS is that by using a pair of high-quality GPSs, one stationary in a known spot and the other moving on the car, you can detect the atmospheric disturbances that lead to GPS drift on the stationary one (since any apparent movement is clearly noise) and then transmit the necessary corrections to the nearby moving GPS, so it can correct itself. The closer the stationary GPS is to the moving one, the more accurate the RTK solution. So my base station right on the track was perfect, but if you use Internet correction sources and are either near the coasts or farming country (farms use RTK GPS for automated agricultural equipment), you should be able to find a correction source within about 10-20km.
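Real RTK works on carrier-phase measurements rather than positions, but the common-mode idea can be illustrated with a deliberately simplified toy sketch in Python, where positions are plain X/Y coordinates:

```python
def correct_rover(rover_measured, base_measured, base_true):
    """Toy common-mode correction: whatever apparent error the stationary
    base observes is assumed to affect the nearby rover identically,
    so we subtract it from the rover's measurement."""
    error = (base_measured[0] - base_true[0],
             base_measured[1] - base_true[1])
    return (rover_measured[0] - error[0], rover_measured[1] - error[1])
```

If the base, known to sit at the origin, reads (0.5, 0.5), that half-meter of drift is subtracted from the rover’s reading too, which is why the technique only works when the two receivers are close enough to see the same atmospheric disturbance.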

If this works, it should avoid the biggest problem with just using the GPS on the car, which is drift. We often found at the Sparkfun AVC that a waypoint mission around the course that worked great in the morning was several meters off in the afternoon because it was using different GPS satellites. So repeatability was a real issue.

My rover is a standard 1/10th scale RC chassis with a brushed motor and a big-ass foam bumper on the front (trust me, you’ll need a bumper). It honestly doesn’t matter which one you use; even 1/16th can work. Don’t bother to get one with a RC radio, since you’ll want more channels than a standard car radio provides (you use the extra channels to do things like switch modes and record waypoints). Something like this, with this radio (you’ll need CPPM output on the RC receiver), would be fine. I tend to use 2S (7.4v) LiPo batteries so the car doesn’t go too fast (that’s the same reason for using a brushed rather than brushless motor, although either can work). Just make sure that it’s got a proper separate steering servo and ESC; none of those cheap toy integrated deals.

A couple things I added that are useful:

  • It’s hard to get the standard 3DR telemetry radios to work well at ground level, so I use the long-range RFD900+ radios instead. Those are pretty expensive, so you will probably want to use the cheaper and still very good 500mW Holybro radios (which are designed to work with the Pixhawk 4) instead.
  • I cut out an aluminum circle to put under the Reach RTK GPS antenna as a ground plane, which is recommended. You can also buy those pre-made from Sparkfun.
  • You’ll notice a two-barrel sensor at the front of the car. That’s a Lidar-Lite range sensor, which I’ll eventually use for obstacle avoidance.

The first task was to set it up, which was not easy.

Setting up the RTK

First, you have to set up the RTK pair. In the case of the Reach RTK, both base and rover (which is the name for the GPS module that moves) can connect via wifi, either to an existing network or to one that they set up themselves. You can either use a web browser or a mobile app to set them up, update the firmware if necessary and monitor their performance. A few things to keep in mind, along with following the instructions:

  • You do have to tell the base where it is (enter lat/lon), or set it to sample its own position for 2 minutes (use Single mode, since it isn’t getting corrections from anywhere else). I usually use the sampling method, since Google Maps on my phone is not as accurate.
  • On the rover, select “Kinematic mode” and tell it to accept horizontal motion of up to 5 m/s.
  • On the rover, set it to stream its position over Serial using NMEA format at 38,400 bps.
  • If you’re using a Pixhawk 4 like me (and perhaps other Pixhawk-based autopilots), you’ll have to make a custom cable, since for some dumb reason Reach and Pixhawk use Serial connector pinouts that are mirrored. On the Pixhawk, V+ is the red wire on the left of the connector, while on the Reach M+ it’s on the right. Super annoying. Basically just take a spare six-pin cable for the Reach and a spare for the Pixhawk (they’re mirrored), cut them and resolder the wires (use heat shrink tubing on each) in the same order counting from the red wire. Like this:
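Once the rover is streaming NMEA, it’s easy to sanity-check the feed before involving the autopilot. Below is a minimal parser sketch for the GGA sentence, whose seventh field is the fix quality (4 = RTK fixed, 5 = RTK float); the sample sentence here is made up for illustration.

```python
# Minimal parser for the NMEA GGA sentences the Reach streams over serial.
# The seventh field (index 6) is the fix quality: 1 = plain GPS, 2 = DGPS,
# 4 = RTK fixed, 5 = RTK float. The optional checksum after '*' is the XOR
# of all characters between '$' and '*'.

def gga_fix_quality(sentence):
    """Return a human-readable fix quality from a $GxGGA sentence."""
    body, _, checksum = sentence.lstrip('$').partition('*')
    calc = 0
    for ch in body:
        calc ^= ord(ch)
    if checksum and int(checksum, 16) != calc:
        raise ValueError('bad NMEA checksum')
    quality = int(body.split(',')[6])
    return {1: 'GPS', 2: 'DGPS', 4: 'Fixed', 5: 'Float'}.get(quality, 'No fix')

print(gga_fix_quality('$GPGGA,123519,4807.038,N,01131.000,E,4,08,0.9,545.4,M,46.9,M,,'))  # -> Fixed
```

This maps directly onto the “Fixed”/“Float” labels you’ll later see in the Mission Planner HUD.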

Setting up ArduRover

In my setup, I’m using both the standard GPS that comes with the Pixhawk 4 as well as the Reach RTK GPS.

So we’ll start by setting up ArduRover so it works with the standard Pixhawk GPS, and then we’ll add the second RTK GPS. There are a couple tricks to setting up ArduRover, which you should keep in mind in addition to the instructions:

After you do the regular setup, there are a lot of tuning instructions. This can be a bit overwhelming, but these are the ones that are most important:

  • First, your steering may be reversed. There are actually two steering modes: manual and auto. Start with auto mode: create a mission in Mission Planner, arm, and put your rover in auto mode. If it steers towards the first waypoint, great; if it steers away, your output is reversed, so change the Servo 1 parameter to Reversed. Once you’ve got that sorted, do the same thing for Manual mode: if moving your right stick to the right makes the car steer to the right, great; if not, change the RC1 parameter to Reversed.
  • Once you’ve got that right, you can focus on getting the rover to track the course as well as possible. There is a page on doing this, but three parameters that are good to start with are these:
    • Steering P (proportional) value. The default is pretty low. Try raising it to 0.8 or 0.9. If your rover zig-zags fast when it’s supposed to be going straight, you’ve gone too far; dial it back a bit.
    • Tune the L1 controller. I find that it’s too conservative out of the box. I use a NAVL1_PERIOD of 6 (vs the default 10) and a NAVL1_DAMPING of 0.90 (vs the default 0.75) on my rover.
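To see what the Steering P term is doing, here’s a toy proportional controller in Python (this is not ArduRover’s actual code, just the concept): the steering command is proportional to heading error and saturates at full lock. Too high a gain overshoots the setpoint each cycle, which shows up as the fast zig-zag described above.

```python
# Illustrative proportional steering controller: output is proportional to
# heading error, clipped to the servo limits. Raising p_gain tightens the
# tracking but eventually causes overshoot (zig-zagging).

def p_steering(heading_error_deg, p_gain, max_deflection=45.0):
    """Return a steering angle in degrees, clipped to full lock."""
    command = p_gain * heading_error_deg
    return max(-max_deflection, min(max_deflection, command))

print(p_steering(20.0, 0.8))   # -> 16.0 (modest correction)
print(p_steering(90.0, 0.8))   # -> 45.0 (saturates at full lock)
```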

Adding the second RTK GPS

Once your rover is running well with the built-in GPS, you can add the second RTK one. Once you plug it into the second UART port, you can enable it by setting the following parameters:

  • GPS_TYPE2 to 5 (NMEA, for the Reach stream)
  • GPS_BLEND_MASK to 1 (just horizontal)
  • GPS_BLEND_TC to 5.0s (quick averaging)
  • GPS_AUTO_SWITCH to 2 (blend)
  • SERIAL4_BAUD to 38 (38,400 bps)
  • SERIAL4_PROTOCOL to 5 (GPS)
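To avoid typing these one at a time, you can also save them as a Mission Planner-style .param file (one NAME,value pair per line) and load the whole set at once. A small sketch:

```python
# The GPS-blending parameters above, collected so they can be written out
# as a Mission Planner-style .param file (NAME,value per line).

BLEND_PARAMS = {
    'GPS_TYPE2': 5,          # NMEA, for the Reach stream
    'GPS_BLEND_MASK': 1,     # blend horizontal position only
    'GPS_BLEND_TC': 5.0,     # quick averaging
    'GPS_AUTO_SWITCH': 2,    # use the blended solution
    'SERIAL4_BAUD': 38,      # 38,400 bps
    'SERIAL4_PROTOCOL': 5,   # GPS
}

def format_param_file(params):
    """Render parameters in the NAME,value format Mission Planner loads."""
    return ''.join('%s,%s\n' % (n, v) for n, v in sorted(params.items()))

print(format_param_file(BLEND_PARAMS))
```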

Once you’ve set all those, reboot your autopilot. If you’re outside, when you restart and give the GPSs time to settle, you should see both listed on the Mission Planner HUD. (Note: it’s important to do all this testing outside. If your RTK can’t see satellites, it won’t stream a solution and you won’t see a second GPS in the Mission Planner.)

When it’s working, you should see this: two GPSs shown in the HUD (it will say “Fixed” or “Float” depending on how good your RTK solution is at any given moment):

We’re going to “blend” the RTK and the regular GPS. Why not just use the RTK alone, since it should be more accurate? Because sometimes it totally glitches out and shows a position many meters away, especially when you go under trees. So we use the less accurate but more reliable regular GPS to “smooth out” the combined solution, which dampens the glitches while still allowing the (usually) higher-precision RTK to dominate. The above parameters work pretty well for me.
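Here’s a toy illustration of why blending helps. ArduPilot’s real blending weights each receiver by its reported accuracy, but the effect is similar: the RTK solution dominates in normal operation, while the steadier regular GPS pulls an occasional multi-meter RTK glitch back toward reality.

```python
# Toy weighted blend of two position sources. The (usually precise) RTK
# gets most of the weight; the standard GPS damps RTK glitches. This is an
# illustration of the idea, not ArduPilot's actual algorithm.

def blend(rtk_pos, gps_pos, rtk_weight=0.8):
    w = rtk_weight
    return tuple(w * r + (1.0 - w) * g for r, g in zip(rtk_pos, gps_pos))

# Normal operation: the two receivers roughly agree, blend stays precise.
print(blend((10.02, 5.01), (10.30, 5.20)))
# RTK glitches ~40 m away under a tree: the blend moves, but far less.
print(blend((50.0, 5.0), (10.3, 5.2)))
```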

Here’s a replay of the log of one run at 5m/s, which is pretty much perfect. If you want to go even faster, you’ll need to tune it a bit more, but this is quite promising.

So, the final question is: is RTK worth it? I think, on balance, the answer is no. New regular GNSS GPSs using the Ublox M8 and M9 chipsets (I like the MRo range) are getting much better, and there are more satellite constellations for them to see. $1,000 and a harder setup is a lot of money and hassle for slightly better performance. So what’s RTK good for? Mostly for industrial rovers that need to be in an exact position, such as agricultural and construction machinery. For DIY Robocar racing, I think you’d be better off with regular GPS.

An improved version of the Intel OpenBot

There’s a lot to like about the Intel OpenBot, which I wrote about last month. Since that post, the Intel team has continued to update the software and I’m hopeful that some of the biggest pain points, especially the clunky training pipeline (which, incredibly, required recompiling the Android app each time) will be addressed with cloud training. [Update 1/22/21: With the 0.2 release, Intel has indeed addressed these issues and now have a web-based way to handle the training data and models, including pushing a new model to the phone without the need for a recompile. Bravo!]

On the hardware side, I could see an easy way to improve upon Intel’s design, which required too much 3D printing and used an ungainly four-wheel-drive layout that was very difficult to turn. There are plenty of very cheap 2WD chassis on Amazon, which use a trailing castor wheel for balance and turning nimbleness. 2WD is cheaper, better and easier; to be honest, Intel should have used it to begin with.

So I modified Intel’s design for that, and this post will show you how you can do the same.

First, buy a 2WD chassis from Amazon. There are loads of them, almost all exactly alike, but here’s the one I got ($12.99).

In the OpenBot hardware instructions, the only things you really need are:

If you don’t have a 3D printer, you can buy a car phone mount and use that instead.

The only changes you’ll need to make to Intel’s instructions are to drill a few holes in the chassis to mount the phone holder. The plexiglass used in the Amazon kits is brittle, so I suggest cutting a little plywood or anything else you’ve got that’s flat and thin to spread the load of the phone holder a bit wider so as not to crack the plexiglass.

You can see how I used a little plywood to do that below.

You will also have to expand some of the slots in the plexiglass chassis to fit the encoders. If you have a Dremel tool, use a cutoff wheel for that. If not, carefully use a drill to make a line of holes.

Everything else is as per the Intel instructions — the software works identically. I think the three-wheel bot works a bit better if the castor wheel is “spring-loaded” to return to center, so I drilled a hole for a screw in the castor base and wrapped a rubber band around it to help snap the wheel back to center as shown above, but this is optional and I’m not sure if it matters all that much. All the other fancy stuff, like the “turn signal” LEDs and the ultrasonic sensor, is also optional (if you want to 3D print a mount for the sonar sensor, this is the one I used and I just hot-glued it on).

I’d recommend putting the Arduino Nano on a small solderless breadboard, which makes adding all the jumper wires (especially the ground and V+ ones) so much easier. You can see that below.

Here it is (below) running in manual mode in our house. The 4 AA batteries work pretty much the same as the more expensive rechargeable ones Intel recommends. But getting rid of two motors is the key: the 2WD version is so much more maneuverable than the 4WD version and just as fast! Honestly, I think it’s better all around: cheaper, easier and more nimble.

https://youtu.be/KasPKFk0Y4w

First impressions of Tinkergen MARK robocar

tl;dr: The Tinkergen MARK ($199) is my new favorite starter robocar. It’s got everything — computer vision, deep learning, sensors — and a great IDE and set of guides that make it all easy and fun.

Getting a robocar design for first-time users right is a tricky balance. It should be like a great videogame — easy to pick up, but challenging to master. Too many kits get it wrong one way or another. They’re either too basic and only do Arduino-level stuff like line-following or obstacle avoidance with a sonar sensor, or they’re too complex and require all sorts of toolchain setups and training to do anything useful at all.

In this post, I list three that do it best — Zumi, MARK, and the Waveshare Piracer. Of those, the Piracer is really meant for more advanced users who want to race outdoors and are comfortable with Python and Linux command lines — it really only makes the hardware side of the equation easier than a fully DIY setup. Zumi is adorable but limited to the Jupyter programming environment running via a webserver on its own Raspberry Pi Zero, which can be a little intimidating (and slow).

But the Tinkergen MARK gets the balance just right. Like the others, it comes as a very easy-to-assemble kit (it takes about 20 minutes to screw the various parts together and plug in the wires). Like Zumi, it starts with simple motion control, obstacle detection and line following, but it also has some more advanced functions like a two-axis gimbal for its camera and the ability to control other actuators. It also has a built-in screen on the back so you can see what the camera is seeing, with an overlay of how the computer vision is interpreting the scene.

Where MARK really shines is the learning curve from basic motion to proper computer vision and machine learning. This is thanks to its web-based IDE and tutorial environment.

Like a lot of other educational robotics kits designed for students, it defaults to a visual programming environment that looks like Scratch, although you can click an icon at the top and it switches to Python.

Videos and guides are integrated into the web interface, and there is a series of courses that you can run through at your own pace. There is a full autonomous driving course that starts with simple lane-keeping and goes all the way to traffic signs and navigation in a city-street-like environment.

MARK also stands out for the number of built-in computer vision and deep learning functions. Pre-trained networks include recognizing traffic signs, numbers, animals and other common objects:

Built-in computer vision modules include shapes, colors, lines, faces, Apriltags and targets. Both visual line following (using the camera) and sensor line following (using the IR emitter/receiver pairs on the bottom of the car) are supported.
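The sensor-based line following is the classic two-sensor scheme. The MARK’s blocks hide this behind a single call, but since the IDE also exposes Python, here’s a sketch of the underlying logic, with a hypothetical sensor reading convention (True when a sensor sees the dark line):

```python
# Classic two-IR-sensor line-follow decision logic. This is an illustration
# of the technique, not MARK's actual API: each sensor reads True when it
# is over the dark line.

def line_follow_step(left_on_line, right_on_line):
    if left_on_line and right_on_line:
        return 'forward'
    if left_on_line:
        return 'turn_left'    # line drifting left, steer back onto it
    if right_on_line:
        return 'turn_right'   # line drifting right
    return 'search'           # lost the line entirely

print(line_follow_step(True, True))   # -> forward
print(line_follow_step(False, True))  # -> turn_right
```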

In addition, you can train it to identify new objects and gestures by recording images on the device and then training a deep learning network on your PC, or even training on the MARK itself for simpler objects.

I got the track mat as well with the kit, which is the right size and contrast to dial in your code so it performs well. Recommended.

In short, this is the best robocar kit I’ve tried — it’s got very polished hardware and software, a surprisingly powerful set of features and great tutorials. Plus it looks great and is fun to use, in large part due to the built-in screen that shows you what the car is seeing. A great holiday present for kids and adults alike: you won’t find a computer vision and machine learning experimentation package that’s easier to use than this.

First impressions of the Intel OpenBot

Intel has released an open source robocar called OpenBot that uses any Android phone running deep-learning code to do autonomous driving, including navigating in halls or on a track or following a person. The key bit here is the open source Intel Android app, which does all the hard work; the rest of the car is just a basic Arduino and standard motors+chassis.

To be honest, I had not realized that it was so easy to get an Android phone to talk to an Arduino — it turns out that all you need is an OTG (USB Type C to USB Micro or Mini) cable for them to talk serial with each other. (This is the one I used for Arduinos that have a USB Micro connector.) Sadly this is not possible with iOS, because Apple restricts hardware access to the phone/tablet unless you have a special license/key that is only given out to approved hardware.
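Since the link is just a serial byte stream, the phone and Arduino need some agreed framing. Here’s a hypothetical example of the kind of one-letter command protocol such bots use (this is not OpenBot’s actual wire format): a command character, comma-separated integer values, terminated by a newline.

```python
# Hypothetical one-letter serial command framing: e.g. 'c128,255\n' might
# mean "set left motor to 128, right motor to 255". The Arduino side would
# do the mirror-image parse on each newline-terminated line.

def parse_command(line):
    """Split a framed command into (command_char, list_of_int_values)."""
    line = line.strip()
    cmd, payload = line[0], line[1:]
    values = [int(v) for v in payload.split(',')] if payload else []
    return cmd, values

print(parse_command('c128,255\n'))  # -> ('c', [128, 255])
print(parse_command('h\n'))         # -> ('h', [])
```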

The custom Intel chassis design is very neat, but it does require a lot of 3D printing (a few days’ worth). I don’t really understand why they didn’t just use a standard kit that you can buy on Amazon instead and just have the user 3D print or otherwise make or buy a phone holder, since everything else is totally off the shelf. Given how standard the chassis parts are, it would be cheaper and easier to use a standard kit.

I ended up using a chassis I had already, which is the DFRobot Cherokey car. It’s slightly overkill, since it has all sorts of wireless communications options built in, including Bluetooth and an XBee socket, that I didn’t need, but it’s just evidence that you can use pretty much any “differential drive” (steering is done by running motors on the left and right at different speeds) chassis you have handy. Basically any car that uses an Arduino will work with a little tweaking.

I took a few other liberties. I had some motors that had built-in quadrature encoders, which I prefer to the cheap optical encoders Intel recommended, so I had to modify the code a bit for them and that meant changing a few pin mappings. (You can see my modified code here.) But otherwise it’s pretty much as Intel intended, complete with sonar sensor and cute turn signals at the back.
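If you’re wondering what the quadrature encoders buy you over the simple optical ones: their two channels are 90 degrees out of phase, so you can recover direction as well as speed. A Python sketch of the decoding follows (on the real car this logic lives in the Arduino interrupt handlers, and it assumes clean single-step transitions):

```python
# Quadrature decoding sketch: valid (A, B) transitions follow a Gray-code
# cycle, advancing the count +1 in one direction and -1 in the other.
# Forward rotation sequence: 00 -> 01 -> 11 -> 10 -> 00.
FORWARD = {(0, 0): (0, 1), (0, 1): (1, 1), (1, 1): (1, 0), (1, 0): (0, 0)}

def quadrature_count(samples):
    """Count net ticks from a sequence of (A, B) samples."""
    count = 0
    prev = samples[0]
    for state in samples[1:]:
        if state == prev:
            continue
        count += 1 if FORWARD[prev] == state else -1
        prev = state
    return count

forward = [(0, 0), (0, 1), (1, 1), (1, 0), (0, 0)]
print(quadrature_count(forward))        # -> 4 (one full cycle forward)
print(quadrature_count(forward[::-1]))  # -> -4 (same cycle reversed)
```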

So how does it work? Well, for the easy stuff, great. It does person following right out of the box, so that’s a good test if your bot is working right. But the point is to do some training of your own. For that, Intel has you drive manually with a bluetooth game controller, such as a PS4 controller, to gather data for training on your laptop/PC. That’s what I did, although Intel doesn’t tell you how to pair the controller with your Android phone. (Updated: the answer is to press the controller’s PS and Share buttons until the light starts flashing blue fast. Then you should be able to see it in your Android Bluetooth settings’ “pair new device” list. More details here.)

But for the real AI stuff, which is learning and training new behavior, it’s still pretty clunky. Like DonkeyCar, it uses “behavioral cloning”, which is to say that the process is to drive it manually around a course with the PS4 controller, logging all the data (camera and controller inputs) on your phone, then transferring a big zip file of that data to your PC, where you run a Jupyter notebook inside a Conda environment that uses TensorFlow to train a network on that data. Then you have to replace one of the files in the Android app source code with this new model and recompile the Android app around it. After that, it should be able to autonomously drive around that same course the way you did.
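One concrete step in that pipeline is pairing each logged camera frame with the control input nearest to it in time before training. OpenBot’s notebook handles this for you; the sketch below, with made-up timestamps, just illustrates the idea.

```python
# Pair each camera frame timestamp with the nearest control record, a
# typical preprocessing step in a behavioral-cloning pipeline. Timestamps
# and values here are made up for illustration.
import bisect

def pair_frames_with_controls(frame_times, control_log):
    """control_log: time-sorted list of (timestamp, (left, right)) tuples."""
    times = [t for t, _ in control_log]
    pairs = []
    for ft in frame_times:
        i = bisect.bisect_left(times, ft)
        # pick whichever neighboring control record is closer in time
        if i == len(times) or (i > 0 and ft - times[i - 1] <= times[i] - ft):
            i -= 1
        pairs.append((ft, control_log[i][1]))
    return pairs

log = [(0, (100, 100)), (50, (120, 80)), (100, (90, 110))]
print(pair_frames_with_controls([10, 60, 95], log))
# -> [(10, (100, 100)), (60, (120, 80)), (95, (90, 110))]
```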

Two problems with this: first, I couldn’t get the Jupyter notebook to work properly on my data and experienced a whole host of problems, some of which were my fault and some were bugs in the code. The good news is that the Intel team is very responsive to issue reports on Github and I’m sure we’ll get those sorted out, ideally leading to code improvements that will spare later users these pain points. But overall, the data gathering and training process is still way too clunky and prone to errors, which reflects the early beta nature of the project.

Second, it’s crazy that I have to recompile the Android app every time I train a new environment. We don’t need to do that with DonkeyCar, and we shouldn’t have to do that with OpenBot, either. Like DonkeyCar, the OpenBot app should be able to select and load any pretrained model. It already lets you select from the included models (person following and autopilot) out of the box, so it’s clearly set up for that. So I’m confused why I can’t just copy a new model to a directory on my phone and select that from within the app, rather than recompiling the whole app.

Perhaps I’m missing something, but until I can get the Jupyter notebook to work it will just have to be a head-scratcher…