Post-race report from Thunderhill

Our outdoor hack/race at Thunderhill was a blast, both for the opportunity to compete over a whole weekend and for the intermixing with the full-size autonomous car teams at the Self Racing Cars event we were part of.  We didn’t do anything that couldn’t have been done indoors at our regular monthly races in Oakland (nobody used GPS, for example), but the preparation that went into a full weekend brought out the best in everyone.

Here are some quick observations from the weekend.

  • There was a lot of improvement over the two days. Lap times more than halved from Saturday to Sunday, from around 45 seconds down to winning times in the low twenties.
  • Humans are still faster. The fastest human-driven time was 8 seconds, while the fastest autonomous time was 21 seconds. We’ve still got work to do.
  • Wheel-to-wheel racing is the future (see above). We’re going to be doing a lot more of that in the monthly events, including seeded rounds, ladders, and a “Final Four”.
  • Of the 14 teams that entered, 10 finished the course. All used computer vision to navigate (no GPS, on the grounds that GPS is too easy; we’ve been doing that for nearly a decade at the SparkFun Autonomous Vehicle Competition).
  • By contrast, on the full-size track next door, *no* cars successfully did a full autonomous lap with vision. The few that used vision, such as Autonomous Stuff and Right Turn Clyde (our own Carl Bass and Otavio Good, shown below), were not able to complete autonomous laps. And the cars that did complete autonomous laps, such as Comma.ai, used GPS waypoints to do it.
  • On the 1/10 scale course, the jury is still out on whether traditional computer vision or neural networks are best. First place went to Will Roscoe, the author of Donkey, who used TensorFlow and Keras (a neural-network approach). Just one second behind, in 2nd place, was Elliot Anderson, using traditional OpenCV-style computer vision on a $55 OpenMV M7 board.
  • Traditional computer vision is easier to understand and debug, but it requires hand-tuning. Neural networks, on the other hand, are black boxes that take a long time to train, but they don’t need to be hand-tuned (ideally, they can figure out correlations themselves, without being told to look for specific shapes or colors). I prefer things I understand, so I’m drawn to CV. But I fully accept that a day will soon come when CNNs are so easy to train and use that I make the switch. Just not yet.
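To make the CV-vs-neural-network contrast concrete, here is a toy sketch of the “traditional” approach: threshold the bright lane pixels in a frame, find their horizontal centroid, and steer proportionally toward it. This is a hypothetical illustration using NumPy, not the code any of the teams actually ran; the threshold is exactly the kind of hand-tuned parameter a network would instead learn from data.

```python
import numpy as np

def steer_from_image(frame, threshold=200):
    """Toy 'traditional CV' steering: threshold bright lane pixels,
    find their horizontal centroid, and steer toward it.
    The threshold value is hand-tuned, per the trade-off above."""
    mask = frame > threshold                  # pick out bright lane markings
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return 0.0                            # no lane found: hold straight
    center = frame.shape[1] / 2.0
    offset = (xs.mean() - center) / center    # -1 (far left) .. +1 (far right)
    return float(np.clip(offset, -1.0, 1.0))  # proportional steering command

# Fake 80x160 grayscale frame with a bright line right of center
frame = np.zeros((80, 160), dtype=np.uint8)
frame[:, 120] = 255
print(steer_from_image(frame))  # → 0.5 (steer right)
```

Every step here is inspectable, which is the debuggability argument for CV; a trained network would replace the whole function with learned weights.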
