
Two ways to handle control loops in Python

In robot control systems, it’s common to have an “inner loop” that controls basic motion and an “outer loop” that does higher-level control, such as navigation. (You can have even higher-level control, such as mission planning, above that, but we’ll concentrate on the first two for now.) In the case of drones, for example, the inner loop runs very fast (400 times a second, or 400 Hz, is typical) for precise control of basic stabilization. The outer navigation loop, on the other hand, can run more slowly, typically 10 times a second (10 Hz), which is the speed at which standard GPS modules update their position.

Ideally, you have both these loops running at the same time, using multithreading on Linux or a real-time operating system. But that can be somewhat intimidating to program and there are some gotchas that you have to watch out for when you’re running multiple threads, such as race conditions.

Until recently, Python didn’t have such “concurrency” built in, and you had to use special libraries to do it. But starting with Python 3.5, this sort of asynchronous code execution was vastly improved with the native “asyncio” module, which is well explained here. It lets you run multiple loops at different speeds simultaneously using “coroutines”, without much programming overhead or risk of clashes. To try this out, I did the following experiment.

Let’s say you want to run an inner loop at 10 Hz and an outer loop at 1 Hz. In regular Python you’d code it like this:

import time

def tenhz():
    time1 = time.time()
    print("Ten Hz")
    while True:
        if time.time() > (time1 + 0.1):  # check to see if a tenth of a second has passed
            break

def onehz():
    time1 = time.time()
    print("One Hz")
    while True:
        tenhz()
        if time.time() > (time1 + 1):  # check to see if a second has passed
            break

while True:
    print("root")
    onehz()

That works — the “tenhz()” function will run ten times  a second, and the “onehz()” function will run once a second — but the problem is that the two loops won’t run simultaneously.  While the tenhz() function is running, the onehz() function is not, and vice versa. One blocks the other.

With Python’s new asyncio concurrency feature, you can effectively have the two running at the same time — no blocking — as separate coroutines, without a lot of fuss. Here’s how those same loops look programmed for asynchronous operation (thanks to this guide as a starter).

import time
import asyncio

start = time.time()

def tic():
    return 'at %1.1f seconds' % (time.time() - start)

async def gr1():
    # Busy waits for a tenth of a second, but we don't want to stick around...
    print('10Hz loop started work: {}'.format(tic()))
    time1 = time.time()
    while True:
        # do some work here
        await asyncio.sleep(0)
        if time.time() > time1 + 0.1:
            print('10Hz loop ended work: {}'.format(tic()))
            time1 = time.time()

async def gr2():
    # Busy waits for a second, but we don't want to stick around...
    print('1 Hz loop started work: {}'.format(tic()))
    time2 = time.time()
    while True:
        # do some work here
        await asyncio.sleep(0)
        if time.time() > time2 + 1:
            print('1 Hz loop ended work: {}'.format(tic()))
            time2 = time.time()

async def gr3():
    print("Let's do some stuff while the coroutines are blocked, {}".format(tic()))
    time3 = time.time()
    while True:
        # do some work here
        if time.time() > time3 + 2:
            print("Done!")
            time3 = time.time()  # reset so this doesn't print continuously
        await asyncio.sleep(0)

ioloop = asyncio.get_event_loop()
tasks = [
    ioloop.create_task(gr1()),
    ioloop.create_task(gr2()),
    ioloop.create_task(gr3())
]
ioloop.run_until_complete(asyncio.wait(tasks))
ioloop.close()


Much better! Now you can insert your own code in those loops and not worry about one blocking the other. Yay Python 3.5!
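One refinement worth noting: the busy-wait pattern above pegs the CPU, which matters on a small flight controller or RaspberryPi. A gentler variant is to do your work each cycle and then sleep for whatever time is left. Here’s a minimal sketch of that pattern (the rate_loop helper is my own name, not part of asyncio):

import time
import asyncio

async def rate_loop(name, period):
    # Do one unit of work per period, then sleep for the remainder of
    # the cycle instead of busy-waiting.
    while True:
        started = time.time()
        print('{} tick'.format(name))   # replace with your control code
        elapsed = time.time() - started
        await asyncio.sleep(max(0, period - elapsed))

ioloop = asyncio.get_event_loop()
tasks = [
    ioloop.create_task(rate_loop('10 Hz', 0.1)),
    ioloop.create_task(rate_loop('1 Hz', 1.0)),
]
ioloop.run_until_complete(asyncio.wait(tasks))
ioloop.close()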


First experiments with JeVois computer vision module

I’m a huge fan of the OpenMV cam, which is a very neat $65 integrated camera and processor with sophisticated built-in computer vision libraries, a MicroPython interpreter, and a very slick IDE (a car powered entirely by it came in 2nd in the Thunderhill DIY Robocars race). Now there is a competitor on the block, JeVois, which offers even more power at a lower cost. I’ve now spent a week with it and can report back on how it compares.

In terms of form factor, it’s a bit smaller than the OpenMV.

Here’s a quick feature comparison:

|  | OpenMV | JeVois |
| --- | --- | --- |
| Camera | 320x240, with good 2.8mm standard lens (can be switched to wide angle or IR) | 320x240, no removable lens |
| Processor | 216 MHz M7 | 1.34 GHz A7, with GPU |
| I/O | 3 PWM, Serial, I2C, 1 ADC, 1 DAC, USB | Serial, USB |
| Expansion boards | WiFi, LCD screen, proto board, thermal camera | (none) |
| OS | MicroPython | Linux |
| Power consumption | 140 mA | 700-1,000 mA |
| IDE | QT Creator based custom IDE (Mac, Windows, Linux) | (none) |
| Memory | 512 KB RAM, 1 MB flash, SD card | 256 MB RAM, SD card |
| Price | $65 | $50 |

Both come with a full suite of standard computer vision libraries and examples. (OpenMV’s libraries are here and examples are here; JeVois’s libraries are here and examples are here.) Both are well supported on the software side and have good communities. JeVois derives from the JeVois software framework that came out of academic work at USC. OpenMV is the work of a small team of very smart computer vision experts, but benefits from the large MicroPython community.

Basically, the JeVois board is more powerful, but the OpenMV board is a lot easier to use and more flexible, mostly due to its awesome IDE and native MicroPython environment. Both can do our job of driving an autonomous car, so it’s just a question of which board is easier to develop on. Also, why would you get one of these instead of a RaspberryPi 3 and camera, which doesn’t cost much more?

For OpenMV, the reason to use it over RaspberryPi is simply that it’s easier. It’s got a great IDE that works over USB or Wifi that makes interactive use fun, it’s all in one board, and it can drive servos and read sensors without any additional add-on boards. Although the performance is not quite as good as RaspberryPi and it can’t run standard Linux packages like TensorFlow, the included CV libraries are well optimized to make the most of the hardware, and for basic computer vision the included libraries handle most of what you’ll want. (If you want to use neural networks, use the RaspberryPi — these are just computer vision boards).

For JeVois, the reason to use it over a RaspberryPi is not as clear. It is potentially more powerful than a RaspberryPi at computer vision, thanks to the built-in GPU, but in practice it seems to perform about the same. More importantly, it’s much harder to use. After spending a week getting it up and running, I think the only reason to use it over a RaspberryPi is in cases where you need a very small, integrated package and can use the built-in modules pretty much as they are, without much additional programming.

My testbed

I built a small rover to use JeVois’s RoadNavigation module, using a cheap RC car chassis and some plywood and aluminum. The software uses a “vanishing point” algorithm to see the road ahead and keep the rover on it.

The module works very well when you plug it into a PC via USB and use a video app to test the computer vision modules, such as looking at a picture of a road. What’s much harder, however, is using it in the real world, in an embedded environment where you need to read the data, not just look at the cool on-screen lines.

To do that, you need to do a lot of work on both hardware and software:

Hardware:

You’ll need to get the JeVois talking to an Arduino, which will do the actual control of the car’s servos and motors. You can do that by adapting the included serial cable to connect to an Arduino. A tutorial is here, but in practice it’s a good bit harder than that. In my case, I used an Arduino Pro Mini running SoftwareSerial to talk to the JeVois, so I could program and monitor the Arduino via an FTDI cable or Bluetooth bridge while it was communicating with the JeVois. I also created a custom PCB to attach the Arduino Mini to and break out pins for servos and sensors, although that’s not necessary if you use a regular Arduino and don’t mind a lot of wires hanging off it. My Arduino code for this is here.

You’ll also need to power the JeVois via a mini-USB cable. I created my own using these connectors. The regular ESC that drives your car’s motor will not provide enough power for the JeVois, so I used a stand-alone power regulator like this one.

Here’s another shot of the completed car from the back, which shows the Arduino connection. You’ll note that it also has sonar and IR sensors at the front; those are not used now.

The hard part was the software. Basically, you use the JeVois primarily by modifying configuration files on the module’s SD card. The three necessary ones are here, but I’ll explain the key elements below:

Initscript.cfg:

setmapping 1  # this selects the module that's assigned to Mode 1, which happens to be a video setting called "YUYV 320x240 30.0 fps"
setpar serlog None # this tells it not to save a log file
setpar serout Hard # this tells it to send data to the serial port
streamon # this tells the module to start, even though the board is not connected to USB

Params.cfg:

serialdev=/dev/ttyS0  # this tells it to use the serial port
serial:baudrate=38400 # this sets the baud rate to 38400
serial:linestyle=LF # this sets the line endings to LF, which makes the output easier to parse

Videomappings.cfg:

NONE 0 0 0 YUYV 320 240 30.0 JeVois RoadNavigation # this is the key line. It assigns the 320x240 30fps video mode with no USB output to the RoadNavigation module

This last one is the most confusing. The basics are that, for arcane reasons involving not having a proper IDE and having relatively bare-bones video support, the only way to command the JeVois module from a computer is by commanding changes in video mode. So every module is mapped to a virtual video mode (confusingly, even if that’s not the video mode it’s actually using), and the way to tell the board which module to boot into is to assign that module to the video mode number you call with setmapping in Initscript.cfg, which runs on startup.

This all took forever to figure out, and needed a lot of help from the team in the forums.  But now I’ve done it for you, so you just need to copy the files from here onto your SD card and it should work right out of the box.

In my opinion, this is too hard for beginners. The most perplexing thing about JeVois is that it runs Linux, but there’s no way to get to the Linux command line that I can find. If you could get to Linux via the USB cable (rather than just a quirky command parser that’s a lot like an old modem’s “AT” command set), you’d be able to script this powerful board properly and otherwise use modern programming tools. But as it is, development is a very fiddly matter of taking out the SD card, editing configuration files on a Linux PC, guessing at parameters, sticking the card back into the JeVois board, powering it up and praying.
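That said, you can at least script the command parser itself from a PC. Here’s a minimal sketch using pyserial to send the same commands used in the config files above over the USB serial port; the device path and baud rate are assumptions for my setup, so adjust them for yours.

import serial

def send(port, command):
    # The parser is line-oriented, so terminate each command with LF
    port.write((command + '\n').encode('ascii'))
    # Print whatever the module sends back (OK / ERR / info lines)
    while True:
        line = port.readline().decode('ascii', errors='replace').strip()
        if not line:
            break
        print(line)

with serial.Serial('/dev/ttyACM0', 115200, timeout=0.5) as jevois:  # device path is an assumption
    send(jevois, 'setpar serlog None')
    send(jevois, 'setpar serout Hard')
    send(jevois, 'setmapping 1')
    send(jevois, 'streamon')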

The JeVois software project is very mature and powerful, so I have no doubt that a more user-friendly exposure of its Linux heart and deep AI and CV libraries can be done. But right now the JeVois computer vision board feels like a cool demo (and an incredibly cheap computer vision computer) rather than something worth the hassle for real work, when you can do so much more with a RaspberryPi in much less development time. Perhaps the next version will improve this.

The Path to Wheel to Wheel Racing

In the past 5 months we have seen performance improve greatly in our 1/10 scale “fastest lap” race format. When we started out, cars were traveling at about 1 m/s; in the most recent races we see cars traveling at 3 m/s, with the fastest human drivers averaging around 8 m/s. With the next set of innovations, we expect cars to move into the 5-6 m/s range, closing the gap with human drivers. While this speed improvement does require the introduction of new methods, we will be moving into a period of refinement rather than invention. So it is time to shake things up.

In order to encourage the next set of innovations, we are going to change up the rules and lay out a timeline for wheel-to-wheel racing. This will happen over a course of months, with intermediate milestones. To simplify this objective we will be using AR tags (specifically AprilTags) to identify the cars. This focuses the challenge on localization and path planning rather than vehicle identification. Finally, for those who want to focus solely on single-lap performance, we will continue to have the single-lap race.

Example AprilTag AR Tag

There will be two races for the 1/10 scale cars:

The first race will be a single timed lap; the shortest lap time wins, taking pole position and the first seed for the wheel-to-wheel race. This is similar to our existing race.

The second race will be a wheel-to-wheel event, racing two cars at a time; if more than two cars are participating, they will be put into a seeded ladder. Each car will have a 10×10 cm 36h11 AprilTag attached to the back of the car, mounted vertically and within 3 cm of the ground. Tags must remain vertical within +/- 10 degrees during the race. Minor collisions are to be expected; however, a car that causes a collision that prevents a competitor from completing the race is grounds for a forfeit or re-race, as determined by the track master or another designated judge. The wheel-to-wheel race will be 3 laps.
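For racers wondering how hard it is to spot these tags, detecting a 36h11 tag from a camera frame takes only a few lines of Python. Here’s a minimal sketch assuming the pupil_apriltags package and OpenCV; it’s just one possible starting point, not the sample code Andy will be demoing.

import cv2
from pupil_apriltags import Detector

detector = Detector(families="tag36h11")

cap = cv2.VideoCapture(0)  # camera index is an assumption
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for tag in detector.detect(gray):
        # tag.center and tag.corners locate the other car's tag in the
        # image; tag.tag_id tells you which car it is
        print(tag.tag_id, tag.center)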

Timeline:

  • May 13 race:  We will have AprilTags around to show to people and Andy Sloane will have a tech demo and sample code ready.
  • June Race:  In addition to our standard race, we will run an initial exhibition race where we will put stationary AprilTags on the track as obstacles.  Cars that can avoid the obstacles and get around the track at the fastest speed win.
  • July Race: Exhibition Wheel to Wheel!  This will be our first Wheel to wheel race.  Racers should be able to identify other cars and avoid them, but it is also a chance to test code and share techniques.
  • August Race: First real wheel to wheel race!

Note: The timeline may be adjusted based on how well we hit our milestones.

Request for comments: please go to the DIYRobocars forum to discuss and comment. These rules are not set in stone and I would like feedback. Please try to provide feedback soon; I would like to close on this by 5/15.

Zero to Autonomous

Last weekend DIYRobocars held the biggest race of its short history. We raced our 1/10th scale cars (and smaller) at ThunderHill Raceways as part of the Self Racing Cars event, where others raced full-sized cars and one go-kart on the 2-mile ThunderHill West track. Of the 3 classes of cars at the event between Self Racing Cars and DIYRobocars, I am the track master for what may have been the most active: the 1/10th scale competition. Our modest 60-meter track, laid out with tape in the parking lot, had 12 cars, of which 7 completed fully autonomous laps. While all used vision, the cars were based on 5 different software stacks (2 based on OpenCV, one on a depth camera and 2 different CNN implementations). While most racers are from the greater Bay Area, we had racers join us from all over North America, including Miami, Toronto and San Diego.

DIYRobocars Tape Track at ThunderHill in the Early Morning

After this huge milestone it is worth reflecting on the short time that the DIYRobocars league has been in existence. The 1/10th-scale league kicked off with a hackathon organized by Chris Anderson at Carl Bass’s warehouse on November 13, 2016. As I look back on the last 4.5 months, it is amazing what we have accomplished. Here are some stats:

  • 23 total cars built
  • 15 cars have raced 
  • 10 cars have completed an autonomous lap
  • 21s – Fastest autonomous time
  • 8s – Fastest Human time.  
  • And there is much more to come…

In addition, times have consistently improved even though we have added new cars and racers, as can be seen in the chart below.

Compared to Self Racing Cars (full-sized cars) at ThunderHill, our little cars did exceptionally well, especially considering that many of the full-sized car projects have been running for years. Even with all of their funding, only 4 full-sized cars were able to run autonomous laps, and not one was able to make a lap with vision only.

Winner at ThunderHill, Will Roscoe.

How was the 1/10th scale track able to demonstrate so much success in a shorter time?  I am not totally certain, but I suspect two things are the primary causes:

First, the cost of failure is zero – the amount of caution required for the development of an autopilot for a full-size car must greatly hamper the speed of innovation. At 1/10 scale we are able to take risks, test anywhere and move fast.

Second is less obvious – what makes DIYRobocars special is that it is a collaborative league. I cannot express how unusual this feels. While we are all competitors, we share our code and our secrets for winning, and we brainstorm with our competitors about how to make our cars faster. Fierce competitors one moment are looking to merge code bases the next. In the larger car league, many cars are sponsored by competing companies. There is much less sharing and collaboration, which puts the brakes on innovation.

While the last 4 months have been great, I also look toward all we will accomplish in the next 4. While many designs will be refined and lap times will drop, the next big step is to mix up the format to tackle the next set of technical challenges. Our next big rule and format change is to incorporate wheel-to-wheel racing, which introduces a new set of technical problems. More on the rule changes in the coming weeks.

Post-race report from Thunderhill

Our outdoor hack/race at Thunderhill was a blast, both for the opportunity to compete over a whole weekend and the intermixing with the full-size autonomous car teams at the Self Racing Cars event we were part of.  We didn’t do anything that couldn’t have been done indoors at our regular monthly races in Oakland (people didn’t use GPS, for example), but the preparation that went into a full weekend brought the best out of everyone.

Here are some quick observations from the weekend.

  • There was lots of improvement over the two days. Times more than halved from Saturday to Sunday, from 45 seconds to winning times all in the low twenties.
  • Humans are still faster. The fastest human driven time was 8 seconds, the fastest autonomous time was 21 seconds. We’ve still got work to do.
  • Wheel-to-wheel racing is the future (see above). We’re going to be doing a lot more of that in the monthly events, including seeded rounds, ladders and a “Final Four”.
  • Of the 14 teams that entered, 10 finished the course. All used computer vision to navigate (no GPS, on the grounds that GPS is too easy; we’ve been doing that for nearly a decade at the Sparkfun Autonomous Vehicle Competition).
  • By contrast, on the full-size track next door, *no* cars successfully did a full autonomous lap with vision. The few that used vision, such as Autonomous Stuff and Right Turn Clyde (our own Carl Bass and Otavio Good, shown below), were not able to complete autonomous laps. And the cars that did complete autonomous laps, such as Comma.ai, used GPS waypoints to do it.
  • On the 1/10 scale course, the jury is still out on whether traditional computer vision or neural networks are best. First place, Will Roscoe, the author of Donkey, uses TensorFlow and Keras (which uses neural networks). But just one second behind, in 2nd place, was Elliot Anderson using traditional OpenCV-style computer vision on a $55 OpenMV M7 board.
  • Traditional computer vision is easier to understand and debug, but requires tuning. Neural networks, on the other hand, are black boxes that take a long time to train, but they don’t need to be hand-tuned (ideally, they can just figure out correlations themselves, without having to be told to look for specific shapes or colors).  I prefer things I understand, so I’m drawn to CV. But I fully accept that there will soon come a day when CNNs are so easy to train and use that I make the switch. Just not yet.

Progress on a standard RaspberryPi-based platform

One of the things we’re hoping to do here is create a cheap, easy and powerful platform for people to get started with autonomous car technology. We start by asking the question: “What could we do for $100 in a weekend that is on the path to pro-level autonomous cars”?  That price point pretty much points us to something based on RaspberryPi 3, which is the best way to do computer vision on a budget. Not only does it have enough computing power itself to run OpenCV, but it has built-in WiFi, so you can stream the data to a laptop to run even more challenging code, such as TensorFlow and neural networks.

The hardware side of this platform is easy: make this. But the software side is harder. We know we want to use Python if at all possible, since it’s become the standard for AI (it’s also easy to use on all platforms). But should we standardize on something based on OpenCV, like FormulaPi and CompoundEye, which use standard computer vision techniques for standard courses? Or should we move to the more flexible CNN neural-network approach like TensorFlow/Keras, which requires training for each course?

Each has pros and cons. The CV approach is very predictable, and can be tuned to perform well on a standard track such as the RGB FormulaPi one. The CNN approach, on the other hand, is very flexible (it can work on any track/road/environment) but is very unpredictable. You can train it on a track so it nails it, and then the lighting changes or people gather around the track and it no longer knows where it is. It’s the classic black-box problem, and the debugging tools for neural networks are still in their infancy.
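To make the contrast concrete, here’s a minimal sketch of the CV-style approach on an RGB lane track: threshold the red and green lanes in HSV and steer toward the midpoint between them. The HSV ranges and the lane_offset helper are my own illustrative guesses, not FormulaPi or CompoundEye code, and they would need tuning for real lighting — which is exactly the weakness described above.

import cv2

def lane_offset(frame_bgr):
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    red = cv2.inRange(hsv, (0, 100, 100), (10, 255, 255))      # rough red range
    green = cv2.inRange(hsv, (45, 100, 100), (75, 255, 255))   # rough green range
    h = frame_bgr.shape[0]
    red, green = red[2 * h // 3:], green[2 * h // 3:]           # look at the bottom third only
    m_red, m_green = cv2.moments(red), cv2.moments(green)
    if m_red["m00"] == 0 or m_green["m00"] == 0:
        return None  # lost sight of one of the lanes
    cx_red = m_red["m10"] / m_red["m00"]
    cx_green = m_green["m10"] / m_green["m00"]
    # pixel offset of the lane centre from the image centre;
    # feed this error into a steering controller
    return (cx_red + cx_green) / 2.0 - frame_bgr.shape[1] / 2.0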

Will Roscoe is making good progress towards a framework that can take the best of both worlds.  He’s already built a good RaspberryPi-based rover on the TensorFlow/Keras framework (called Donkey for some reason). Meanwhile the CompoundEye team has built an equally good RaspberryPi-based rover on the OpenCV framework (using C++ rather than Python for performance reasons, although that might not be necessary).

How to choose between the two? Will is working on a codebase that may allow you to use both. He’s porting the CompoundEye code to Python and merging it with his Donkey code.

If this works, you may be able to make a run-time decision to use a CNN or OpenCV on the same rover, to compare the performance of the two approaches head-to-head. That would be really cool, and if it works I’ll make this the standard DIY Robocars reference platform.


Final results and code from FormulaPi race

Love this final race — it’s both thrilling and charmingly shambolic at the same time.  FormulaPi was the model for the RGB track part of our DIYRobocar race/hack series of events.  The race starts at 7:04.

It’s interesting that although some competitors tried very ambitious neural-network learning approaches to the contest, the winner just tuned the stock code with some better PID values.
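For anyone new to the idea, the “stock code plus better PID values” approach really is just a handful of lines. Here’s a minimal, generic sketch of a PID steering loop (the class and the gain values are mine, not FormulaPi’s code):

class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.last_error = 0.0

    def update(self, error, dt):
        # error is how far off-line the robot is; dt is the time since the last frame
        self.integral += error * dt
        derivative = (error - self.last_error) / dt if dt > 0 else 0.0
        self.last_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

steering = PID(kp=0.8, ki=0.0, kd=0.1)   # placeholder gains; tuning these is the whole game
# each frame: turn = steering.update(lane_error, dt)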

The consensus from everyone is that the Pi Zero just doesn’t have enough horsepower for proper AI.  The organizers are considering switching to Raspberry Pi 3 for the next season, which I heartily endorse.

Post-race analysis — one track and two different kinds of cars

The  whole reason to have races and hackathons is to stress-test your code and find problems (ideally so you can fix them and do better next time). So the key part of any event is the post-mortem — understanding what didn’t work and why.

Here are a few of those from this week, both at our own DIYRobocars race in Oakland and from a FormulaPi team.

Oakland DIYRobocars Event lessons:

  • 12 pizzas are not enough
  • One bathroom is not enough
  • We need a PA system
  • The lighting in the warehouse ranged from bad to worse, which is a real problem for computer vision. At peak sun, the RGB lanes were read as RGB. But when the sun was obscured by clouds, gray became blue and contrast thresholds weren’t hit. Disaster. We can’t do much about the lighting, so competitors must use automatic gain adjustment algorithms in their code (a minimal sketch of one approach follows this list).
  • We need to paint the lines on the concrete — tape and huge puddles don’t mix
  • Otherwise it went great!
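On the lighting point above, here’s a minimal sketch of one kind of adjustment: normalize each frame’s brightness with CLAHE (adaptive histogram equalization in OpenCV) before doing any color thresholding. It’s an illustration of the idea, not a mandated algorithm.

import cv2

clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))

def normalize_lighting(frame_bgr):
    # equalize only the value (brightness) channel, leaving hue alone
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    h, s, v = cv2.split(hsv)
    v = clahe.apply(v)
    return cv2.cvtColor(cv2.merge((h, s, v)), cv2.COLOR_HSV2BGR)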

Keras/TensorFlow car lessons from Will Roscoe

[This describes the experience with his “Donkey” Keras/TensorFlow CNN learning code] Overall, today was a success, given that two Donkey vehicles did at least one completely autonomous lap prior to race time. This is a big improvement from the same event 3 months ago, when we worked on Adam’s car all day just to get it to lurch forward and turn right. This is a short debrief of what I learned this weekend, to help guide our next efforts to make Donkey a better library for training autopilots for these races.
Of course, after the race the driving problems became obvious.
  1. The vehicle drive loop was only updating every 1/5th of a second but should have been updating every 1/15th of a second or more frequently. This meant we didn’t collect very much training data and the autopilot didn’t update often.
  2. Training data was not cleaned. We didn’t have a good way to view the training data on the EC2 instance, so we could not see that we were using training data that included bad steering, and even images from when the vehicle had been picked up.
Beyond bugs, how can we do better at these races?
On a race day, we have 4-6 hours to take a proven car and autopilot and train it to perform on a new track. The more efficiently we can improve the autopilot’s performance, the better we’ll do. Here’s an overview of the problems I saw today and proposals for how to fix them. Specific issues and solutions live in GitHub issues.
Many steps are required to update a driving model.
At one point today I had 20+ terminal windows open because changing the autopilot requires many useless “plumbing” steps. These steps could be automated or made easier with command line arguments or a web interface.
  • Switching models requires restarting the server
  • It’s difficult to remember which session was the good session.
  • Combining sessions is a separate script.
Driving is required to test models. 
Since updating and driving an autopilot takes time, we need to make sure that our changes actually improve the autopilot before we test it on the track. A trusted performance measurement is needed at training time. This could be a combination of the error on a validation dataset and a visual showing how close the predicted values were to the actual ones.
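A minimal sketch of what that training-time measurement could look like: hold out part of the recorded driving data and report the steering error on it after every training run, so a new model can be compared to the previous one before it ever goes on the track. The names and shapes here are illustrative, not Donkey’s actual API.

import numpy as np

def validation_report(model, images, steering):
    # images: (N, H, W, 3) recorded camera frames; steering: (N,) recorded steering values
    split = int(0.8 * len(images))
    model.fit(images[:split], steering[:split])
    predicted = np.asarray(model.predict(images[split:])).reshape(-1)
    actual = steering[split:]
    mae = float(np.mean(np.abs(predicted - actual)))
    print("validation MAE: {:.3f}".format(mae))
    return mae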
There is no way to debug an autopilot. 
Currently an autopilot either works or it doesn’t. Driving performance is the only clue to help us understand what’s going wrong. Helpful clues would include:
  • A visual showing what the network is cueing off.
  • Lag times
  • Predicted vs actual overlaid on image.
Common problems that don’t have obvious solutions. 
Even after a common problem has been identified, there’s no standard solution to fix the problem. “Agile training” could be used to correct the autopilot by creating more training data where the autopilot fails.
  • Vehicle doesn’t turn sharp enough.
  • Vehicle doesn’t turn at all on a corner.
  • Vehicle goes too slow.
There is no easy way to clean training data on a remote instance. 
Training on bad data makes bad autopilots. To learn where bad training data exists, you need to see the image alongside the recorded steering values. This is impossible on a CLI but would be possible through a web interface.
There are other improvements we can make but these are the big unsexy ones that will help most. Also, get a friend to build a Donkey. Let the fleet learning begin.


FormulaPi car lessons from Jorge Lamb:

To train the machine learning model we needed to generate many images and label what we would want the car to do in each case. To do that, we integrated the race code with the wiimote (via cwiid) and played in the simulator: this way we stored the simulated camera images and the position of the wiimote as the desired driving direction. It was even fun!

wiimote

When we had enough images we started training a scikit-learn neural network. We started our tests with a fully connected neural network with one hidden layer. We made some changes to the wiimote driving code to train it to recover when it strayed from the ideal path on the track. We ran our code and the simulator on our laptop and it looked promising :) We sent that in for the first tests, but issues with the library, which was not compiled for the Raspberry Pi Zero, made our robot stay on the starting line :(
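For readers who want to try something similar, here’s a rough sketch of that kind of setup: a scikit-learn network with one hidden layer, trained on flattened simulator frames against the recorded wiimote steering. The array names, sizes and parameters are illustrative; the team’s actual code differs.

import numpy as np
from sklearn.neural_network import MLPRegressor

def train_autopilot(images, steering):
    # images: (N, H, W) grayscale simulator frames; steering: (N,) recorded wiimote values
    X = images.reshape(len(images), -1) / 255.0   # flatten and scale to 0-1
    net = MLPRegressor(hidden_layer_sizes=(64,), max_iter=500)
    net.fit(X, steering)
    return net

# at race time: steering_command = net.predict(frame.reshape(1, -1) / 255.0)[0]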

In the first race we had the library properly installed, so the neural network was going to drive the robot. This time the robot moved, but it kept crashing against the walls, and in its recovery movements it managed to do one full lap… in reverse! At least there was one other robot that did 5 laps in reverse :p

From the logs and images we got, our theory is that the Pi Zero took too long to process each image with our code. This meant it applied the driving directions too late and for too long. If it decided to turn, then turning for a full second meant driving into the walls.

For the next races we changed back to a simpler racing approach: follow lane one, just right of the center.

In race 2 we did better (it’s easy to do better than -1 laps ;) This time we managed to get nearly 10 laps for a second place in the heat, but that only gave us 1 point. First point though, yeah!

We kept testing and training neural networks, but we couldn’t get some of them to run on a Raspberry Pi (after the issues in the test rounds we decided to test everything on a Raspberry Pi). Some libraries didn’t even run on the Raspberry, and other tests were too slow to properly drive the robot. Running some tests at lower robot speed looked really promising, but when driving at full speed, as you want to do in a race, we didn’t find a working solution that didn’t crash into the walls.

We also tried convolutional neural networks. With neupy and a convolutional network small enough to fit on the Pi Zero, it got many images right, but others gave wrong values and the robot would crash in the simulator.

In Race 3 we did even better! 16 laps and just a couple of meters behind the house robot (which, surprisingly, was stopped). Our robot even passed the house robot a few seconds after the 15 minutes were up. 14 more points for RasPerras del Infierno :)

Race 4 was even better: first win for RasPerras del Infierno!! 22 laps, even though we were stuck behind another robot for a couple of minutes.

The Race 5 start was very good, but we crashed into a robot that went backwards for some time. Then there was a good competition and two other robots went over the 23-lap limit. Third place in the race, 11 points, and one of the 10 places in the semifinal! (Luckily the house robots are not going into the semifinal.)


Details of 1/21 Hack/Race day in Oakland

We’re all ready for our first hack/race day next Sat, 1/21, at the new Oakland warehouse location for DIY Robocar events.

There are three tracks:

1) An RGB lane track, modeled after the FormulaPi track:

2) A white-line track for 1/10th scale cars, which is designed to model lanes on real roads. The ones below are just temporary markings; the final lines will be wider and properly stuck to the surface.

3) A walled-in course created by the PowerRacers group for PowerWheels-type racers, especially those using LIDAR to navigate

Rules:

For RGB track:

  • 10m wide track, with red and green lanes.
  • Blue lane dividers in the FormulaPi course may be replaced by blank gaps showing concrete instead. If this creates a problem for the cars, we will replace the blue tape.
  • Course is pretty smooth and most bumps are taped over. 1/2″ clearance under cars should be sufficient


For White-Tape track:

  • 1.5 meter (5 foot) wide course with borders in 70-110mm (3-4 inch) wide white tape.
  • yellow tape at the centerline 35mm – 55mm (1.5-2 inch) wide
  • The course must have at least one left turn, right turn, hairpin turn (1.5m outside radius) and gradual turn (>3m outside radius)
  • Course should fit in a box 20m x 15m
  • Up to 3 cars at a time (for now)
  • Course may not be smooth so the car should be able to handle step shaped bumps of up to 25mm (1 inch)
  • People will stand 3 meters away from the track.