Category Archives: Robotics

OpenCV Blob Tracker

GitHub code is here!

The Blob Tracker is a simple demo that shows how you can track a certain color in OpenCV.

The setup consists of a camera mounted on a pan-tilt unit that’s wired to an Arduino. The camera and Arduino are hooked up to a computer via USB. On the computer, a simple Python script takes in the camera images, processes them using OpenCV, and sends back commands to the Arduino to move the pan-tilt servos and track the desired color.
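At a high level, the main loop looks something like this (a minimal sketch assuming the cv2 and pyserial modules; the serial command format shown is a placeholder, not necessarily the exact protocol the Arduino sketch expects):

import cv2
import serial

cap = cv2.VideoCapture(1)                      # camera on /dev/video1
arduino = serial.Serial('/dev/ttyACM0', 9600)  # Arduino over USB serial

while True:
    ok, frame = cap.read()                     # grab the latest camera image
    if not ok:
        break
    # ... use OpenCV to find the tracked color's centroid (cx, cy),
    #     compute pan/tilt angles that move it toward the image center,
    #     then command the servos, e.g.:
    # arduino.write(('pan:%d,tilt:%d\n' % (pan, tilt)).encode())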

Parts List and Assembly

This tutorial builds off the Remotely Controlled Pan-Tilt Unit post. Follow the instructions there to assemble the unit and upload the provided Arduino code to your Arduino.

If you’ve done things correctly, you should be able to view images from the camera on your computer (using any webcam software, like Skype or Google Hangouts), and control the pan-tilt unit by plugging in your Arduino and sending commands over serial.
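For a quick serial sanity check from Python (assuming the pyserial module; the command string below is only a placeholder, so substitute whatever format the pan-tilt post's Arduino sketch expects):

import serial

s = serial.Serial('/dev/ttyACM0', 9600, timeout=1)  # adjust the port to your setup
s.write(b'pan:90,tilt:90\n')   # placeholder command; use your sketch's format
print(s.readline())            # echo any response from the Arduino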

Installing OpenCV

In Linux, install OpenCV by running:


# run apt-cache search opencv first to see which versions are available; anything newer than 2.2 should work
# python-opencv provides the Python bindings that the scripts below need
sudo apt-get install libopencv-dev python-opencv

Alternatively, you can install ROS, which comes with OpenCV.

Code

The code is available on Github at https://github.com/jessicaaustin/robotics-projects/tree/master/blob-tracker

Understanding color tracking

(Note: If you’re lazy and just want to track a color without understanding how it works, you can skip this section and pass --red, --green, or --blue to blob_tracker.py below)

Most of the time, we think in terms of the RGB color model. However, when it comes to trying to track an object of a certain “color”, the RGB space is not very useful. That’s because something that’s “red” in one lighting condition might look like “dark red” in low light or “light red” in bright light.

An alternative is the HSV color model. HSV stands for Hue, Saturation, and Value. The hue is what we care about — for example, red — and the range we’ll look for here will be fairly narrow. Saturation and value depend on the object’s texture and the lighting conditions, so we give those two channels a much wider range.
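In OpenCV this boils down to a color-space conversion plus a range filter. A minimal sketch (the numbers are placeholders to be tuned below; note that OpenCV uses a 0-179 hue scale for 8-bit images):

import cv2
import numpy as np

frame = cv2.imread('frame.png')               # any BGR image (or a camera frame)
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)  # convert BGR -> HSV
lower = np.array([0, 80, 80])                 # narrow hue band, wide sat/val
upper = np.array([10, 255, 255])
mask = cv2.inRange(hsv, lower, upper)         # white where in range, black elsewhere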

To illustrate this concept to yourself, try running the color_detector.py script in the blob-tracker folder:


# use --camera=N to set the index of your camera.
# e.g., if /dev/video1 is your camera device, then use --camera=1
./color_detector.py --camera=1

The program will pop up two windows: “camera feed” and “filtered feed”. The filtered feed is a mask where white is the color you’re tracking, and black is not.

[Image: color_detector_1]

The program will start out with HSV set to the following values:
H = 100 +/- 50
S = 155 +/- 200
V = 155 +/- 200
(Endpoints that fall outside the valid range simply clamp to it.)

Place a solid-color object in front of the camera — for example, a red ball — and use the keyboard to modify these ranges:

                hue:  sat:  val:
increase max:    e     t     u
decrease min:    x     v     n
decrease range:  s     f     h
increase range:  d     g     j

For example, to increase the max hue press e; to decrease the min hue press x; to decrease the hue range press s; and to increase it press d. The sat and val keys work the same way. (If things aren’t working, make sure you have the window called “filtered feed” selected when you press the keys.)
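Under the hood this is the standard OpenCV key-polling pattern, roughly like the following (an illustrative sketch, not the script's exact code):

import cv2

hue_min, hue_max = 50, 150         # starting range (placeholder values)
while True:
    # ... grab a frame, apply the current ranges, show the filtered feed ...
    key = cv2.waitKey(10) & 0xFF   # poll the focused window for a keypress
    if key == 27:                  # ESC: print the final ranges and exit
        print(hue_min, hue_max)
        break
    elif key == ord('e'):          # raise the max hue
        hue_max += 1
    elif key == ord('x'):          # lower the min hue
        hue_min -= 1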

Play with the values until you’re consistently seeing just the color you want, and not anything else (for example, a red jacket in the background).

[Image: color_detector_2]

Now try changing the lighting conditions. How does this change the track-ability? What if you modify the sat and val values?

The program will spit out the current HSV min/max ranges to the terminal. Once you’re happy with your ranges, hit ESC to exit and save the HSV values — you’ll need to input them into the blob-tracker program next.

For example, for tracking a red object I ended up with:

(h,s,v):
min=(146.0, 146.5, 55.0, 0.0)
max=(182.0, 283.5, 255.0, 0.0)
(The fourth element of each OpenCV scalar is unused.)

Running everything together

At this point you’ve got a camera to capture images, mounted on a pan-tilt unit that you can control over serial. You also have an HSV range to track. Now it’s just a matter of running the blob-tracker code! This code will process the images, find the color you want to track in the image, and send commands to the servos to close the loop and track the color.

To run:

# get options
./blob_tracker.py --help
# find a red object (no tracking, so no Arduino needed), using the camera on /dev/video1
./blob_tracker.py --camera=1 --red
# track a red object, with the Arduino on /dev/ttyACM0:
./blob_tracker.py --camera=1 --device=/dev/ttyACM0 --red --follow

First try without the --follow flag. You should see two windows pop up: “camera” and “threshed”. The code performs some filtering on the image to reduce noise, so the color blob in the threshed image is “smooth”. A red circle on the camera feed shows where the center of the largest “blob” matching your color is located. If there is more than one blob of the same color in the image, the code will find the largest one and track it.
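The filter-then-find-largest-blob step looks roughly like this with the OpenCV 2.x Python API, continuing from the mask in the HSV sketch earlier (a sketch, not the project's exact code):

import cv2
import numpy as np

kernel = np.ones((5, 5), np.uint8)
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)    # remove speckle noise
mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)   # fill small holes

contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)  # OpenCV 2.x returns two values
if contours:
    biggest = max(contours, key=cv2.contourArea)         # largest blob wins
    m = cv2.moments(biggest)
    if m['m00'] > 0:                                     # guard against empty blobs
        cx = int(m['m10'] / m['m00'])                    # blob centroid x
        cy = int(m['m01'] / m['m00'])                    # blob centroid y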

[Image: blob_tracker_1]

Now try running with the --follow flag. Your pan-tilt unit should move around to track the object!
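The follow behavior amounts to simple proportional control: nudge the servos so the blob centroid drifts toward the image center. Continuing the loop sketch from the top of the post (the gain, signs, and serial format here are all assumptions to adapt to your setup):

frame_height, frame_width = frame.shape[:2]   # image size from the current frame
# pan, tilt: current servo angles carried over from the loop sketch
GAIN = 0.05                        # proportional gain; tune for your servos
err_x = cx - frame_width / 2.0     # how far the blob is from image center
err_y = cy - frame_height / 2.0
pan -= GAIN * err_x                # signs depend on servo orientation
tilt -= GAIN * err_y
arduino.write(('pan:%d,tilt:%d\n' % (pan, tilt)).encode())  # placeholder format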

[Images: blob_tracker_follow_3, blob_tracker_follow_2, blob_tracker_follow_1]

Robotics: books and online courses for independent study

This fall, I’m going back to school to study Robotics as a graduate student.

It’s been almost five years since I graduated from undergrad, so to prepare myself I created a list of study materials for review. I hope others might find this list of recommendations helpful.

There were two main areas I wanted to cover: mathematics review, and introduction to robotics concepts. The latter section might be useful for someone interested in robotics but not sure which areas they want to pursue.

Please comment if you have any questions!

General Math

How to Prove It by Velleman

Man, I wish I had read this book BEFORE undergrad. In this book, Velleman does three things:

  • describes basic concepts in Logic
  • gives common proof strategies, with plenty of examples
  • dives deeper into set theory, relations, and functions

He does all this assuming the reader is NOT a mathematician–in fact, he does an excellent job of explaining a mathematician’s thought process when trying to prove something.

I highly recommend this book if you feel uncomfortable reading and/or writing proofs, since it will make the following math books much more enjoyable to read!

Calculus

Barron’s College Review Series: Calculus

This book was my warm-up. It is very simple, and is focused more on computation than rigorous proofs. I think I got through it in a weekend, while completing most of the exercises. It does NOT include multivariate calculus.

Khan Academy: Calculus

Khan Academy lectures, while time-consuming, are a great reference if there is a specific concept that you’re struggling with. That said, I don’t recommend watching the whole series, but rather searching for a specific topic (say, “gradient”) when you want more information.

Probability and Statistics

Khan Academy: Probability and Statistics (combined with Combinatorial Probabilities cheat sheet)

I have to say: I always had problems getting combinatorics straight in my head, and watching these videos + completing the exercises really helped.

Introduction to Bayesian Statistics by Bolstad

This book is AMAZING. Bayesian statistics is extremely important to modern robotics, and this book provides an excellent introduction. Highly recommended!

Note that if you’re already comfortable with traditional probability, you can skip the Khan Academy videos altogether and go straight to the Bolstad book.

Differential Equations

Elementary Differential Equations by Boyce and DiPrima

All-around excellent book. Probably my favorite, most-referenced textbook from undergrad.

Khan Academy: Differential Equations

Again, don’t watch all the lectures, but use them as a reference when you want a simple, thoroughly-explained overview of a specific topic.

Linear Algebra

Linear Algebra by Hefferon (also available in print)

If you had to pick a single math topic to study before entering robotics, linear algebra would be it. This book is particularly good because it starts with solving systems of equations, defining spaces, and creating functions and maps between spaces–and only after this foundation is laid does it introduce matrices as a convenient form for dealing with these concepts.

Khan Academy: Linear Algebra

Again, don’t watch all the lectures, but use them as a reference when you want a simple, thoroughly-explained overview of a specific topic.

Code

The Nature of Code

I’ve been programming since high school, so I didn’t really need much review in this area. However, The Nature of Code is an amazing book: it’s free, and it includes online exercises in the Processing language, so I have to recommend it.

Also note that the Udacity CS-373 course includes programming exercises in Python.

Robotics

If you complete the following courses, you’ll get a high-level understanding of some of the most important concepts in robotics.

Udacity CS-373, Artificial Intelligence for Robotics

Topics include: Localization, Particle Filters, Kalman Filters, Search (including A* Search), PID control, and SLAM (simultaneous localization and mapping). If you understand these concepts, you can write software for a mobile robot! Even better, each section has multiple programming exercises in Python, so you really get practice with the topic.

If you want to dig deeper into some of the above topics, I recommend Sebastian Thrun’s book, Probabilistic Robotics.

Udacity CS-271, Introduction to Artificial Intelligence

If you’re interested in Machine Learning, this is a great course. It’s not as slick as CS-373, but still worthwhile.

ChiBots SRS RoboMagellan 2012: Nomad Overview

This summer, my friend Bill Mania and I entered our robot in the ChiBots SRS RoboMagellan contest. To steal the description directly from the website:

Robo-Magellan is a robotics competition emphasizing autonomous navigation and obstacle avoidance over varied, outdoor terrain. Robots have three opportunities to navigate from a starting point to an ending point and are scored on time required to complete the course with opportunities to lower the score based on contacting intermediate points.

Basically, we had to develop a robot that could navigate around a campus-like setting, find GPS waypoints marked by orange traffic cones, and do it faster than any of the other robots entered.

To give you an idea of what this looked like for us, here’s a picture of us testing in Bill’s backyard:

Our robot moving between two waypoints. Note the red-orange planters and yellow plastic bin–“red herrings” that our robot is wisely ignoring!

For our platform, we used a modified version of the CoroWare CoroBot, with additional sensors like ultrasonic rangefinders, a 6-DOF IMU, and wheel encoders.

Our software platform was ROS — rospy specifically — and we made liberal use of various components in the navigation stack. We were even able to attend the very first ROSCon in St. Paul, MN, which was a blast and greatly expanded our knowledge of the software and what it was capable of.

Over the next few weeks, I’ll be writing more detailed posts about the robot and specific challenges we faced, including:

  • Hardware and sensor overview
  • Using robot_pose_ekf for sensor fusion of IMU + wheel encoders to allow us to navigate using dead reckoning
  • Localization in ROS using a very, very sparse map
  • Our attempts to use the move_base stack with hobby-grade sensors, and why we ended up writing our own strategy node
  • Using OpenCV + ROS to find an orange traffic cone, and using this feedback to “capture” the waypoint

In the meantime, enjoy this video of the above scene, from the robot’s point of view!

Navigating a known map using a Generalized Voronoi Graph: an example

GitHub code is here!

voronoi-bot is a robot that navigates by creating a Generalized Voronoi Graph (GVG) and then traveling along this graph to reach the goal. It requires a full map of the environment in order to navigate.

I completed this project in a class taught by Joel Burdick while an undergrad at Caltech. I’ve since added the code to GitHub and started cleaning up the files so that they’re easier to understand and reuse (refactoring, adding tests, etc.). This is still in progress, but the code is functional in the meantime.
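To get a feel for the GVG idea, here's a minimal sketch using scipy.spatial.Voronoi (illustrative only, not the voronoi-bot code itself). It keeps only the Voronoi edges that maintain a minimum clearance from the obstacle boundary points; navigation then reduces to graph search over those edges:

import numpy as np
from scipy.spatial import Voronoi

def roadmap_edges(obstacle_points, min_clearance):
    # obstacle_points: (N, 2) array of points sampled along obstacle boundaries
    pts = np.asarray(obstacle_points, dtype=float)
    vor = Voronoi(pts)
    edges = []
    for p, q in vor.ridge_vertices:
        if p == -1 or q == -1:                       # skip ridges extending to infinity
            continue
        a, b = vor.vertices[p], vor.vertices[q]
        da = np.linalg.norm(pts - a, axis=1).min()   # clearance at endpoint a
        db = np.linalg.norm(pts - b, axis=1).min()   # clearance at endpoint b
        if da >= min_clearance and db >= min_clearance:
            edges.append((p, q))                     # keep this roadmap edge
    return vor.vertices, edges                       # nodes + edges, ready for graph search

Connecting the start and goal to their nearest roadmap nodes and running a shortest-path search (Dijkstra or A*) over these edges produces the kind of path shown in the plot below.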

Example output from the program, plotted in Matlab. The black dots define the boundary of the map, the red and blue boxes are obstacles, and the cyan dots are nodes in the GVG, constructed based on this map. The green dots show the start and end goal, the red lines show the path taken by the robot.
A video showing the robot in action (running in Player/Stage) is below.


To read more about using GVG for navigation, I recommend the following:

http://en.wikipedia.org/wiki/Voronoi_diagram

Sensor Based Planning, Part II: Incremental Construction of the Generalized Voronoi Graph
Howie Choset, Joel Burdick
http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.68.3533

Mobile Robot Navigation: Issues in Implementing the Generalized Voronoi Graph in the Plane
Howie Choset, Ilhan Konukseven, and Joel Burdick
http://www.ri.cmu.edu/publication_view.html?pub_id=1415

Path Planning for Mobile Robot Navigation using Voronoi Diagram and Fast Marching
Robotics Lab, Carlos III University
http://neuro.bstu.by/ai/To-dom/My_research/Papers-2.0/Closed-loop-path-planning/Voro.pdf


Chicago GTUG Presentation: Building Robots with the Sparkfun IOIO


Last night I presented at the Chicago GTUG. It was held at 1871 in Merchandise Mart, and wow is that a great space! It was a real pleasure to talk there.

Here’s a link to the presentation: https://docs.google.com/presentation/d/1id7sUVDHFXhKzujg3dPWivC3kM5o3r7NIrWkq3IB_Ws/edit

Links to references from the presentation: