Sunday, March 18, 2007

Final Paper

Ok, so this is probably my last post. The project was a lot of fun and a great learning experience. If you are interested in reading my final results, check out the paper I wrote. If you want to learn more about this kind of stuff, go check out my classmates at the cse190a website; some of their work is really quite impressive.

cya!

Wednesday, March 14, 2007

Feature Vocabulary

In the previous post I commented that the interest point detection algorithm I created looked like it could detect vehicles all by itself. To test that, I created a simplistic feature-point-counting program which 'detects' a parking spot as empty if the number of interest points in its ROI is below an arbitrary threshold (a rough sketch of the counting program appears after the results below). The results were moderately good, but nowhere near as good as I had expected.

Threshold (min # of interest pts):
  1: 81% accuracy
  2: 82% accuracy
  3: 77% accuracy
  4: 74% accuracy
  5: 70% accuracy
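
The counting 'classifier' really is that simple. Here's a minimal sketch (the ROI format and function names are my own, not the actual program's interface):

    def count_points_in_roi(points, roi):
        """Count interest points whose (x, y) coordinates fall inside the ROI."""
        x0, y0, w, h = roi
        return sum(1 for px, py in points
                   if x0 <= px < x0 + w and y0 <= py < y0 + h)

    def spot_is_empty(points, roi, threshold):
        """Call a parking spot empty if it holds fewer than `threshold` points."""
        return count_points_in_roi(points, roi) < threshold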

The next thing I focused on was the generation of a code book of car features: small image segments centered at each interest point. For each ROI, I scale the image so as to normalize the 'zoom' of each image feature. To save a little time and memory, I only keep one feature image for interest points that fall within a 3-pixel range of each other. I then save all feature images whose centers lie within the parking space ROI currently being examined, positive features to one set and negative features to another.
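
As a rough sketch of this patch-gathering step (the patch size, function name, and ROI format are my own assumptions, not the actual program's interface):

    import numpy as np

    def extract_patches(image, points, roi, occupied, patch_size=13, min_dist=3):
        """Collect small patches centered at interest points inside one space.

        image      : grayscale frame as a 2-D numpy array, already scaled so
                     the 'zoom' of each parking space is normalized
        points     : list of (x, y) interest point coordinates
        roi        : parking space region as (x0, y0, w, h)
        occupied   : ground-truth label for this space
        patch_size : side of the square patch (a guess; the post doesn't say)
        min_dist   : keep only one patch per neighbourhood of this radius
        """
        x0, y0, w, h = roi
        half = patch_size // 2
        kept, positives, negatives = [], [], []
        for px, py in points:
            px, py = int(round(px)), int(round(py))
            # only keep features whose center lies inside the current space
            if not (x0 <= px < x0 + w and y0 <= py < y0 + h):
                continue
            # keep only one feature image for points within min_dist pixels
            if any(abs(px - kx) <= min_dist and abs(py - ky) <= min_dist
                   for kx, ky in kept):
                continue
            patch = image[py - half:py + half + 1, px - half:px + half + 1]
            if patch.shape != (patch_size, patch_size):
                continue  # skip points too close to the image border
            kept.append((px, py))
            (positives if occupied else negatives).append(patch)
        return positives, negatives

Here's an example of the resulting feature vocabulary from one occupied parking space: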

Monday, March 5, 2007

Interest Point Detection

So the next stage of the project is to bring in vehicle feature detection to make the overall algorithm more robust. To this end, I read Agarwal and Roth's paper on "Learning a Sparse Representation for Object Detection" and it seemed like an excellent place to start.

The first thing I needed to do was implement interest point detection and test it out on some of my training data. Kristen was kind enough to give me a copy of her implementation of Förstner corner detection. But as Kristen warned me, the Förstner detector worked fine on the test image of a checkerboard, yet it didn't pick up much in my training images (see below):


I then used OpenCV's Harris corner detection and found it to work extremely well for my project (see below):


The interesting thing about this Harris corner detection algorithm is that even without creating a database of vehicle features and a database of non-vehicle features and then using those to detect vehicles, the interest points themselves are already very accurate at indicating where a vehicle is. However, I'm still going to try to reproduce as much of Agarwal and Roth's research as I can, time permitting ;-)
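
For anyone who wants to play with it, here's a minimal sketch of Harris corner detection using OpenCV's Python bindings (not the exact code I used; the parameters are the common textbook defaults and the 1% quality cutoff is arbitrary):

    import cv2
    import numpy as np

    def harris_interest_points(gray, quality=0.01):
        """Return (x, y) interest points from OpenCV's Harris corner response.

        blockSize=2, ksize=3, k=0.04 are the usual textbook defaults; `quality`
        keeps points whose response is within 1% of the strongest corner.
        """
        response = cv2.cornerHarris(np.float32(gray), 2, 3, 0.04)
        ys, xs = np.where(response > quality * response.max())
        return list(zip(xs.tolist(), ys.tolist()))

    # usage (the filename is just a placeholder):
    # gray = cv2.imread('lot_frame.jpg', cv2.IMREAD_GRAYSCALE)
    # points = harris_interest_points(gray)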

Tuesday, February 20, 2007

Poster


After working all night I finally finished the slide for the EUReKA conference. Here it is....

Monday, February 19, 2007

KNN Distance Metric Comparisons

I just finished running a comparison of k-nearest neighbor using Euclidean distance versus chi-squared distance (I've been using Euclidean this whole time). And what do you know, chi-squared distance got me consistently better results. Here are the results of the tests, followed by a quick sketch of the distance computation:

KNN (k=3)

Night Images:

Euclidean Distance:

  1. 92% accuracy, # test images = 41
  2. 85% accuracy, # test images = 41
  3. 87% accuracy, # test images = 41
  4. 85% accuracy, # test images = 41
  5. 90% accuracy, # test images = 41
Chi-Squared Distance:
  1. 92% accuracy, # test images = 41
  2. 90% accuracy, # test images = 41
  3. 90% accuracy, # test images = 41
  4. 90% accuracy, # test images = 41
  5. 95% accuracy, # test images = 41
Day Images:

Euclidean Distance:

  1. 75% accuracy, # test images = 41
  2. 76% accuracy, # test images = 41
  3. 81% accuracy, # test images = 41
  4. 76% accuracy, # test images = 41
  5. 84% accuracy, # test images = 41
Chi-Squared Distance:
  1. 77% accuracy, # test images = 41
  2. 84% accuracy, # test images = 41
  3. 85% accuracy, # test images = 41
  4. 78% accuracy, # test images = 41
  5. 85% accuracy, # test images = 41
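
For reference, here's a minimal sketch of the chi-squared distance plugged into KNN, assuming nonnegative, histogram-like feature vectors (this is one common form of the distance, not necessarily the exact variant I implemented):

    import numpy as np

    def chi_squared(p, q, eps=1e-10):
        """Chi-squared distance between two nonnegative feature vectors."""
        p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
        return 0.5 * np.sum((p - q) ** 2 / (p + q + eps))

    def knn_classify(query, train_vecs, train_labels, k=3, dist=chi_squared):
        """Label `query` by majority vote among its k nearest training vectors."""
        dists = [dist(query, v) for v in train_vecs]
        nearest = np.argsort(dists)[:k]
        votes = [train_labels[i] for i in nearest]
        return max(set(votes), key=votes.count)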
Here are some visual examples of the differences in the results from Chi-Squared and Euclidean Distances:

Chi-Square 1

Euclidean 1

Chi-Square 2

Euclidean 2

Chi-Square 3

Euclidean 3

Viewing Night Results

Photobucket Album


Green = empty space, White = occupied space, Blue = misclassified space

So I just finished writing a combination of programs which gather up the results from the k-fold validation testing and visualize them (i.e., create a set of result pictures). You can go to my Photobucket account and view the results from the night set.
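
The drawing part is straightforward; here's a minimal sketch with OpenCV using the same color code as the legend above (the ROI format and status labels are my own shorthand, not the actual program's):

    import cv2

    # BGR colors matching the legend: green = empty, white = occupied,
    # blue = misclassified.
    COLORS = {'empty': (0, 255, 0),
              'occupied': (255, 255, 255),
              'misclassified': (255, 0, 0)}

    def draw_results(image, spaces):
        """Draw each parking space ROI in the color matching its result.

        `spaces` is a list of ((x, y, w, h), status) pairs, where status is
        'empty', 'occupied', or 'misclassified'.
        """
        out = image.copy()
        for (x, y, w, h), status in spaces:
            cv2.rectangle(out, (x, y), (x + w, y + h), COLORS[status], 2)
        return out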

Wednesday, February 14, 2007

Night and Day

I have been gathering more training data and the increase from 650 total images to 740 has made a visible difference in the detection rates for the KNN classifier.

I have been experimenting with the idea of splitting the image set into smaller sets based on the time of day and general lighting level. Besides reducing the KNN classification time, these time-specific image sets noticeably increase the detection rates (a sketch of one simple way to make this kind of split is at the end of this post). Here are the results of my tests so far:

KNN (k=3)

All Images:
  1. 79% accuracy, # test images = 148
  2. 80% accuracy, # test images = 147
  3. 84% accuracy, # test images = 148
  4. 78% accuracy, # test images = 148
  5. 78% accuracy, # test images = 149
Day Images:
  1. 80% accuracy, # test images = 107
  2. 83% accuracy, # test images = 107
  3. 66% accuracy, # test images = 106
  4. 82% accuracy, # test images = 107
  5. 78% accuracy, # test images = 109
Night Images:
  1. 90% accuracy, # test images = 41
  2. 90% accuracy, # test images = 41
  3. 90% accuracy, # test images = 41
  4. 82% accuracy, # test images = 41
  5. 80% accuracy, # test images = 41

SVM

All Images:
  1. 78% accuracy, # test images = 148
  2. 76% accuracy, # test images = 147
  3. 78% accuracy, # test images = 148
  4. 68% accuracy, # test images = 148
  5. 76% accuracy, # test images = 149
Day Images:
  1. 75% accuracy, # test images = 107
  2. 76% accuracy, # test images = 107
  3. 59% accuracy, # test images = 106
  4. 75% accuracy, # test images = 107
  5. 66% accuracy, # test images = 109
Night Images:
  1. 70% accuracy, # test images = 41
  2. 85% accuracy, # test images = 41
  3. 68% accuracy, # test images = 41
  4. 75% accuracy, # test images = 41
  5. 63% accuracy, # test images = 41
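
As promised above, here's a minimal sketch of one way to make the day/night split, assuming a plain mean-brightness threshold (the cutoff value is an arbitrary placeholder, not a number I tuned):

    import cv2
    import numpy as np

    DAY_NIGHT_CUTOFF = 80  # mean gray level; arbitrary placeholder

    def split_by_lighting(image_paths, cutoff=DAY_NIGHT_CUTOFF):
        """Partition images into 'day' and 'night' sets by mean brightness."""
        day, night = [], []
        for path in image_paths:
            gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
            if gray is None:
                continue  # skip unreadable files
            (day if np.mean(gray) >= cutoff else night).append(path)
        return day, night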