why i’m so proud of frc team 694 and their cv in auton scoring

FRC Team 694, StuyPulse, is currently competing at the China Robotics Challenge, an off-season event. The event uses the same game/challenge as the “main” season, which ran from January 2016 to April 2016. This year’s game was Stronghold. In the autonomous part of the game, you got points for going over obstacles (defenses) and/or shooting a ball into a target. The team worked very hard on CV (computer vision) this year. This was going to be the year we used CV in competition. We used it one year when a lot of the code was supplied to teams. And we got CV recognition working last year, but not soon enough to integrate with the robot. We did have a really cool demo where the code automatically recognized a target, but we never quite got it fully working with the robot.

Which brings us to 2016. Don’t get me wrong, I’m thrilled that it worked. But that’s only a piece of why I’m so proud of what happened with CV this year.

Subteam

There were dedicated team members who focused on CV for most of the build season. This wasn’t one student on a skunkworks project. Making a subteam shows this is a priority, something the team values and wants to see succeed.

Sustainability/Continuity

There were students in different grades on the subteam. This ensures that the knowledge and experience gained won’t all graduate at the same time, which allows CV to become a skill that grows every year rather than one that has to start over.

Technical understanding

The last two years, I’ve seen a marked increase in deep understanding of how CV works. It isn’t just poking at it; this is hard stuff. The same goes for the communication channels between the camera and the robot. I did enjoy the discussion about big endian vs. little endian when transferring data. I knew that was the problem because we had hit it in the past, so seeing it again, I recognized it faster. Convincing the student I was working with took a little longer ;).
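
As a toy illustration of that class of bug (this is not our actual code), the same four bytes decode to very different numbers depending on which byte order each side assumes:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class EndianDemo {
    public static void main(String[] args) {
        // Four bytes received off the wire.
        byte[] wire = {0x00, 0x00, 0x01, 0x00};

        // Interpret them as an int under each byte order.
        int big = ByteBuffer.wrap(wire).order(ByteOrder.BIG_ENDIAN).getInt();
        int little = ByteBuffer.wrap(wire).order(ByteOrder.LITTLE_ENDIAN).getInt();

        // Prints: big endian: 256, little endian: 65536
        System.out.println("big endian: " + big + ", little endian: " + little);
    }
}
```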

Team Communication

Some years, I see mixed messages about CV. Whether it is the drivers not trusting it or other parts of the robot being a higher priority, the messaging can become demoralizing to the people working on the code. This year, I felt like the whole team wanted to see this happen. There were designated windows to work on CV integration with the robot, even really close to the robot’s due date. It wasn’t shafted. It wasn’t put on the back burner.

Quick response to change

There were a number of integration problems. The students quickly adjusted to each one, trying different strategies. And they learned what works and what doesn’t for next year, too.

*Never* giving up

Whether it was between matches at a competition or after we got eliminated at one, the students were working on various aspects of the robot. Even after it became apparent we weren’t going to run CV during competition because the risk at Championships was too high, the students STILL didn’t give up. They made the most of practice field time to test. They kept testing and integrating and testing and… They *never* gave up, and they got it working.

At home, I have a Peanuts picture near my desk. It has Lucy holding the football and Charlie Brown looking at it, with the caption “Never ever EVER give up.” This is really important. You never know when pushing just a little bit further is going to be the difference. And I’m so proud of them for that. Pushing past the point where most people would give up is a really important skill, and just as important as tech skills.

Cookies and Chocolate

On “stop build” day, the team had come further than ever before with CV. The electronics mentor and I chipped in and bought cookies to give out at the “tagging ceremony.” (This is when the code gets tagged in GitHub as the last code tested on the robot before ship date.) I spoke for a minute or so on how big a deal this was and issued a challenge: I would bring in “something better than cookies” if they scored a certain number of goals with CV in competition. A 9th grader said “there’s nothing better than cookies.”

This is the first year I believed it could happen. (Sorry Josh – there were too many forces against you.) While it didn’t happen, it wasn’t for lack of ability. It was that the risk of getting eliminated at Championships was too high by the time CV was reliable. It was the right call.

So at the end-of-year dinner, I extended the offer to include the China Robotics Challenge, where they scored in autonomous in two out of two practice matches. While the number of goals in my offer is higher than two, the repeatability of making 100% of them in practice matches is enough for me to declare the contest a success.

Which brings us to – what is better than cookies? It’s not pizza. We eat too much pizza during the build season for that to be a reward for anything. Instead, the answer is chocolate:

[Photos: the chocolate and candy rewards]

Congratulations, 694. I am so proud of you for this accomplishment, both for the tech skills to do it and for the soft skills and tenacity. Great job!!!

getting computer vision to work on a raspberry pi

This year, I helped a local high school FIRST robotics team get computer vision working on a Raspberry Pi. Two students worked on the problem. At the beginning, they both worked on computer vision; later, one specialized on the Pi and the other on CV. I learned a lot from a book and from playing with it myself. We encountered some interesting and memorable problems along the way.

Recognizing a target

We started with sample code from another team from last year. This helped us learn how to write the code and understand the fundamentals. It also helped with the “plumbing” code. As hard as it was to recognize a target, that didn’t prove to be the most frustrating part. The Pi itself presented a number of challenges.
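
For flavor, here’s a minimal sketch of the kind of pipeline that sample code taught us, written against the javacv 0.2 API we ended up on: threshold the frame in HSV, find contours, and keep the largest bounding box. The file name and HSV bounds are illustrative, not our actual tuning.

```java
import com.googlecode.javacpp.Loader;

import static com.googlecode.javacv.cpp.opencv_core.*;
import static com.googlecode.javacv.cpp.opencv_highgui.*;
import static com.googlecode.javacv.cpp.opencv_imgproc.*;

public class TargetFinder {
    public static void main(String[] args) {
        // Load a frame from disk (in competition this came from the camera).
        IplImage frame = cvLoadImage("frame.jpg");

        // Convert to HSV so the lit target color is easy to isolate.
        IplImage hsv = cvCreateImage(cvGetSize(frame), IPL_DEPTH_8U, 3);
        cvCvtColor(frame, hsv, CV_BGR2HSV);

        // Threshold to a binary mask; the bounds here are made up.
        IplImage mask = cvCreateImage(cvGetSize(frame), IPL_DEPTH_8U, 1);
        cvInRangeS(hsv, cvScalar(40, 80, 80, 0), cvScalar(90, 255, 255, 0), mask);

        // Walk the contours in the mask and keep the largest bounding box.
        CvMemStorage storage = CvMemStorage.create();
        CvSeq contours = new CvSeq(null);
        cvFindContours(mask, storage, contours, Loader.sizeof(CvContour.class),
                CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE, cvPoint(0, 0));
        CvRect best = null;
        for (CvSeq c = contours; c != null && !c.isNull(); c = c.h_next()) {
            CvRect r = cvBoundingRect(c, 0);
            if (best == null || r.width() * r.height() > best.width() * best.height()) {
                best = r;
            }
        }
        if (best != null) {
            System.out.println("target center x = " + (best.x() + best.width() / 2.0));
        }
    }
}
```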

Parts for the Pi

We bought/scavenged parts for the Pi. A USB keyboard, a USB mouse and a cell phone micro-USB charger were donated. We needed to buy an HDMI/DVI cable. We borrowed a computer monitor and an Ethernet cable.

Finding the right jars

The Pi is built on ARM, so we needed javacv-0.2-linux-arm.jar. It turned out there is no Linux ARM version in the latest javacv (0.3). There is one in 0.2, which we used – and which was incompatible with the versions of other tools (see the next problem).

Setting up the Pi

Compiling OpenCV on the Pi takes 4 hours. Since that’s as long as a meeting, this meant running the compile overnight. Having to wait overnight to find out if something worked was a taste of what punch card programmers had to go through!

Then it turned out we couldn’t even use our compile. We were missing the libjniopencv_core.so file. We spent a few days trying to solve this. We wound up using a version pre-compiled for the Pi. This is how we got version compatibility.

Updating a NetBeans Ant script

Since speed matters in a competition, we wanted to change the build’s run target to not compile first. NetBeans comes with an empty-looking build.xml and a useful build-impl.xml file. (This is actually my favorite feature of NetBeans – that the build can easily be run outside of NetBeans.) We easily found the run target in the build-impl file. We copied it to build.xml, renamed it and removed the compile dependency, as sketched below. This wasn’t actually a problem, but it was interesting to see how NetBeans sets up the build file.
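
Here’s roughly what the copied target looks like (the target name is mine, and property names vary by project – this is a sketch, not our exact file):

```xml
<!-- Added to build.xml: a run target with no compile dependency,
     so it just launches the jar from the last build. -->
<target name="run-nocompile" depends="init">
    <java jar="${dist.jar}" fork="true"/>
</target>
```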

Starting a script on startup

We wanted the Pi to start the computer vision script automatically on boot. We created a file in /etc/init.d, since this is a Linux (Debian) install. Then we made a fatal error: we forgot to add the & to run the script in the background. So when we tested rebooting, the Pi hung. And we couldn’t SSH to it, because it hadn’t finished booting. The solution was to take the Pi’s SD card to another computer and edit the boot configuration to use single-user mode. We could then log in and edit the startup script to add the missing &.
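
For illustration, the script was shaped something like this (paths and names made up) – note the trailing &, the one character that mattered:

```sh
#!/bin/sh
# /etc/init.d/cv-start (illustrative). Without the trailing &,
# the boot sequence blocks on this line and the Pi appears to hang.
case "$1" in
  start)
    java -jar /home/pi/cv/vision.jar &
    ;;
esac
```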

Networking

We used Java sockets to transfer the “answer” from the Pi to the robot – a single number representing the degrees off from the center of the target. We made the mistake of testing this with both ends on a regular computer. When we moved it to the robot, it didn’t compile because the robot uses J2ME. We then refactored to use the mobile version (code here).
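
Here’s a stripped-down sketch of the Pi side (the port number and names are illustrative); the robot side is the part that had to change to javax.microedition.io.Connector under J2ME:

```java
import java.io.DataOutputStream;
import java.net.ServerSocket;
import java.net.Socket;

public class AnswerServer {
    public static void main(String[] args) throws Exception {
        // The Pi listens; the robot connects and reads one value per frame.
        ServerSocket server = new ServerSocket(1180);
        Socket robot = server.accept();
        DataOutputStream out = new DataOutputStream(robot.getOutputStream());

        double degreesOffCenter = 4.2; // would come from the CV pipeline
        out.writeDouble(degreesOffCenter); // DataOutputStream writes big endian
        out.flush();

        out.close();
        robot.close();
        server.close();
    }
}
```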

Performance – CPU

Success – computer vision works. The problem was that it took 3 seconds per image. We reduced that to 1.3 seconds per image by dropping to the smallest resolution the camera supports. We shaved off another 0.1-0.2 seconds by turning off file caching in ImageIO. We learned the bottleneck was full CPU usage when calling ImageIO.read.
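
The caching change is a one-liner. A sketch of how reads can be timed (file name illustrative):

```java
import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;

public class ReadTimer {
    public static void main(String[] args) throws Exception {
        // With the cache off, ImageIO buffers in memory instead of on disk;
        // on the Pi this was worth 0.1-0.2 seconds per frame.
        ImageIO.setUseCache(false);

        long start = System.currentTimeMillis();
        BufferedImage img = ImageIO.read(new File("frame.jpg"));
        System.out.println(img.getWidth() + "x" + img.getHeight()
                + " in " + (System.currentTimeMillis() - start) + " ms");
    }
}
```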

I found an interesting thread showing the “old way” of reading a JPEG, using ImageIcon, was much faster. We tried it, and the thread was right. It even created an image we could open in a photo editor. The problem is that it didn’t work with our code/libraries for image processing. We don’t know why; evidently ImageIO has some default we are unaware of. A 1 second time is acceptable, so this isn’t a problem. But it’s an interesting conundrum.
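
For comparison, the “old way” from that thread is essentially a one-liner. One visible difference: it hands back a java.awt.Image rather than the BufferedImage that ImageIO.read returns, which may be related to why it didn’t fit our pipeline.

```java
import java.awt.Image;
import javax.swing.ImageIcon;

public class OldWayTimer {
    public static void main(String[] args) {
        long start = System.currentTimeMillis();
        // ImageIcon's constructor blocks until the image is fully loaded,
        // so the elapsed time is comparable to timing ImageIO.read.
        Image img = new ImageIcon("frame.jpg").getImage();
        System.out.println("loaded " + img.getWidth(null) + "x" + img.getHeight(null)
                + " in " + (System.currentTimeMillis() - start) + " ms");
    }
}
```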

Another interesting CPU note: we tried compiling ImageMagick. It took over 3 hours on the Pi. By contrast, it took 2.5 minutes on a Mac.