Tuesday, February 20, 2007
KNN Distance Metric Comparisons
I just finished running a comparison of K-nearest neighbor using Euclidean distance and chi-square distance (I've been using Euclidean this whole time). And what do you know, using chi-square distance got me consistently better results. Here are the results of the tests (a rough sketch of the two distance functions appears after them):
KNN (k=3)
Night Images:
Chi-Square Distance:
- 92% accuracy, # test images = 41
- 85% accuracy, # test images = 41
- 87% accuracy, # test images = 41
- 85% accuracy, # test images = 41
- 90% accuracy, # test images = 41
- 92% accuracy, # test images = 41
- 90% accuracy, # test images = 41
- 90% accuracy, # test images = 41
- 90% accuracy, # test images = 41
- 95% accuracy, # test images = 41
Euclidean Distance:
- 75% accuracy, # test images = 41
- 76% accuracy, # test images = 41
- 81% accuracy, # test images = 41
- 76% accuracy, # test images = 41
- 84% accuracy, # test images = 41
- 77% accuracy, # test images = 41
- 84% accuracy, # test images = 41
- 85% accuracy, # test images = 41
- 78% accuracy, # test images = 41
- 85% accuracy, # test images = 41
[Result images: Chi-Square 1-3 and Euclidean 1-3]
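For reference, here is a minimal sketch (my own reconstruction, not the code used for these tests) of the two distance functions being compared. Here h1 and h2 stand for the 64-value histogram feature vectors (two concatenated 32-bin a/b histograms); swapping which function the KNN uses as its metric is the only difference between the two result sets above.

```python
import numpy as np

def euclidean_distance(h1, h2):
    return np.sqrt(np.sum((h1 - h2) ** 2))

def chi_square_distance(h1, h2):
    # 0.5 * sum((x - y)^2 / (x + y)); the small constant avoids division by zero
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + 1e-10))
```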
Monday, February 19, 2007
Viewing Night Results
Photobucket Album
Green = empty space, White = occupied space, Blue = misclassified space
So I just finished writing a set of programs which gather up the results from the k-fold validation testing and visualize them (i.e., create a set of result pictures). You can go to my Photobucket account and view the results from the night set.
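A rough sketch of the overlay step (using PIL here, which is an assumption about tooling; the region list, labels, and file names are placeholders): given a test image and, for each parking-space region, the true and predicted labels, tint the region green, white, or blue as in the legend above.

```python
from PIL import Image, ImageDraw

def overlay_results(image_path, spaces, out_path, alpha=0.4):
    """spaces: list of (box, true_label, predicted_label) with box = (x0, y0, x1, y1)
    and labels 1 = occupied, 0 = empty."""
    base = Image.open(image_path).convert("RGB")
    overlay = base.copy()
    draw = ImageDraw.Draw(overlay)
    for box, truth, pred in spaces:
        if pred != truth:
            color = (0, 0, 255)        # blue: misclassified space
        elif pred == 1:
            color = (255, 255, 255)    # white: occupied space
        else:
            color = (0, 255, 0)        # green: empty space
        draw.rectangle(box, fill=color)
    # blend the colored regions back over the original image and save
    Image.blend(base, overlay, alpha).save(out_path)
```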
Wednesday, February 14, 2007
Night and Day
I have been gathering more training data and the increase from 650 total images to 740 has made a visible difference in the detection rates for the KNN classifier.
I have been experimenting with the idea of splitting the image set into smaller sets related to the time of day and general lighting level. Besides reducing the KNN classification time, these time-specific image sets noticeably increase the detection rates. The following are the results of my tests so far (a sketch of one way to split the set by lighting level appears after the results):
KNN (K=3)
All Images:
- 79% accuracy, # test images = 148
- 80% accuracy, # test images = 147
- 84% accuracy, # test images = 148
- 78% accuracy, # test images = 148
- 78% accuracy, # test images = 149
Day Images:
- 80% accuracy, # test images = 107
- 83% accuracy, # test images = 107
- 66% accuracy, # test images = 106
- 82% accuracy, # test images = 107
- 78% accuracy, # test images = 109
Night Images:
- 90% accuracy, # test images = 41
- 90% accuracy, # test images = 41
- 90% accuracy, # test images = 41
- 82% accuracy, # test images = 41
- 80% accuracy, # test images = 41
SVM
All Images:
- 78% accuracy, # test images = 148
- 76% accuracy, # test images = 147
- 78% accuracy, # test images = 148
- 68% accuracy, # test images = 148
- 76% accuracy, # test images = 149
Day Images:
- 75% accuracy, # test images = 107
- 76% accuracy, # test images = 107
- 59% accuracy, # test images = 106
- 75% accuracy, # test images = 107
- 66% accuracy, # test images = 109
Night Images:
- 70% accuracy, # test images = 41
- 85% accuracy, # test images = 41
- 68% accuracy, # test images = 41
- 75% accuracy, # test images = 41
- 63% accuracy, # test images = 41
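One possible way to do the day/night split automatically (a sketch only; the brightness threshold, the use of PIL, and the function names are assumptions, not how the sets were actually built) is to partition images by average brightness:

```python
import numpy as np
from PIL import Image

def is_night(image_path, threshold=60):
    # mean grayscale intensity below the threshold => treat as a night image;
    # the threshold here is a guess and would need tuning against real data
    gray = np.asarray(Image.open(image_path).convert("L"), dtype=float)
    return gray.mean() < threshold

def split_by_lighting(image_paths, threshold=60):
    day, night = [], []
    for path in image_paths:
        (night if is_night(path, threshold) else day).append(path)
    return day, night
```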
Monday, February 12, 2007
KNN > SVM
I just finished fixing my K-nearest neighbor program and, what do you know, its detection rate is consistently better than the SVM's. The results of the k-fold cross-validation testing with k=5 are:
KNN (K=1)
- 72% accuracy, # test images = 129
- 82% accuracy, # test images = 129
- 75% accuracy, # test images = 129
- 74% accuracy, # test images = 128
- 77% accuracy, # test images = 129
- 79% accuracy, # test images = 129
- 83% accuracy, # test images = 129
- 77% accuracy, # test images = 129
- 75% accuracy, # test images = 128
- 77% accuracy, # test images = 129
- 75% accuracy, # test images = 129
- 86% accuracy, # test images = 129
- 81% accuracy, # test images = 129
- 73% accuracy, # test images = 128
- 77% accuracy, # test images = 129
- 79% accuracy, # test images = 129
- 84% accuracy, # test images = 129
- 76% accuracy, # test images = 129
- 74% accuracy, # test images = 128
- 79% accuracy, # test images = 129
SVM
- 79% accuracy, # test images = 129
- 65% accuracy, # test images = 129
- 59% accuracy, # test images = 129
- 62% accuracy, # test images = 128
- 71% accuracy, # test images = 129
Cross-validation and KNN
Throughout the week I have been taking pictures of parking lots as I have walked to and from school each day. However, the number of ROIs in my image set is still pretty small, around 650 distinct parking spaces, and this may be adversely affecting my training efforts. Most research papers that I've read say that good results are often achieved with somewhere between 1000 and 2000 training examples.
The next thing that I did was to implement a cross-validation script. I ended up coding a K-fold cross-validation script in Python which starts by randomizing the input data and then performs the cross-validation. With K=5, the SVM is classifying within a range of 59%-79% positive detection rate. This extremely wide range might be the result of poor randomization of the data on the part of the script, and/or it might be due to the fact that I have very few night-time images in my test data. Right now I'm going to increase the size of my test set and see if that has an effect in reducing the range of results returned by the cross-validation script.
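A rough sketch of that kind of K-fold script (a reconstruction, not the actual code): shuffle the examples, split them into K folds, and collect per-fold accuracy from a supplied train-and-test function.

```python
import random

def k_fold_validate(examples, labels, train_and_test, k=5, seed=None):
    """train_and_test(train_x, train_y, test_x, test_y) -> accuracy in [0, 1]."""
    data = list(zip(examples, labels))
    random.Random(seed).shuffle(data)          # randomize before splitting
    folds = [data[i::k] for i in range(k)]     # k roughly equal folds
    accuracies = []
    for i in range(k):
        test = folds[i]
        train = [pair for j, fold in enumerate(folds) if j != i for pair in fold]
        train_x, train_y = zip(*train)
        test_x, test_y = zip(*test)
        accuracies.append(train_and_test(train_x, train_y, test_x, test_y))
    return accuracies
```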
The last thing that I worked on was to create a K-nearest neighbor classification program. I am still trying to debug the program but I hope to have it done sometime tonight or tomorrow.
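For reference, a minimal K-nearest-neighbor sketch (again a reconstruction rather than the actual program), using Euclidean distance over the histogram feature vectors and a majority vote:

```python
import numpy as np

def knn_predict(query, train_feats, train_labels, k=1):
    # distances from the query feature vector to every training vector
    feats = np.asarray(train_feats, dtype=float)
    dists = np.linalg.norm(feats - np.asarray(query, dtype=float), axis=1)
    nearest = np.argsort(dists)[:k]            # indices of the k closest examples
    votes = [train_labels[i] for i in nearest]
    return max(set(votes), key=votes.count)    # majority vote among the neighbors
```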
Monday, February 5, 2007
SVMs
So I installed, trained, and ran SVMLight today. I used some images which weren't part of the training set and which were taken on completely different days than the training images (i.e., no obviously similar images being used to both train and test). However, these images were quite a bit larger than those that I trained on. I figure that because the features being used right now are color histograms, and because this test was only to help me get my bearings, the difference in image size shouldn't matter much for now. In the end, the SVM got 73% accuracy on the test set (58 positive detections, 21 negative detections), which I find encouraging for a first step.
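Training and classifying with SVMLight boils down to two command-line calls; the sketch below drives them from Python, with the file names (train.dat, test.dat, model.dat, predictions.txt) as placeholders and the assumption that the svm_learn/svm_classify binaries are on the PATH.

```python
import subprocess

# train a model on the histogram features (SVMLight's "label index:value ..." format)
subprocess.run(["svm_learn", "train.dat", "model.dat"], check=True)

# classify the held-out images; svm_classify prints accuracy on the test set and
# writes one real-valued margin per example to predictions.txt (sign = class)
subprocess.run(["svm_classify", "test.dat", "model.dat", "predictions.txt"], check=True)
```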
One of the things I am considering doing next is relabeling my training set so that only non-occluded parking spaces are trained on. I also want to gather more images to both increase the training set and build a more realistic pseudo-test set. Lastly, if I have time, I'm going to code up a program for visualizing the SVM's classification guesses overlaid on the test images.
Friday, February 2, 2007
Finished Histogramming
So I finally finished my color histogramming program. Since training the support vector machine is completely pointless unless the data you are training it on is accurate, I had to make sure that my 'histogrammer' was 100% bug-free. Here are some of the test images I used to debug the program:
Currently, the program works by reading in a log file that contains coordinate-image lines describing which region of which image to extract pixels from. For each line in the log file, the program does the following (a rough sketch appears after the list):
- Compute the extraction region in the image.
- Convert the image from RGB to L*a*b*.
- Create 2 32-bin histograms, one for the 'a' channel and one for the 'b' channel (we discard the 'L' channel as it does not add much useful information in this case).
- Compute the histograms.
- Write out each histogram, bin by bin, to the resulting text file along with a 0 or a 1 to indicate if the region was a positive or negative training example.
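Putting those steps together, a rough sketch of the per-region work (assuming OpenCV and NumPy, which is an assumption about tooling; the box coordinates and output format are placeholders taken from the description above):

```python
import cv2
import numpy as np

def histogram_region(image_path, box, label, out_file):
    x0, y0, x1, y1 = box                         # extraction region from the log-file line
    region = cv2.imread(image_path)[y0:y1, x0:x1]
    lab = cv2.cvtColor(region, cv2.COLOR_BGR2LAB)
    _, a, b = cv2.split(lab)                     # discard L, keep the a and b channels
    hist_a = np.histogram(a, bins=32, range=(0, 256))[0]
    hist_b = np.histogram(b, bins=32, range=(0, 256))[0]
    bins = np.concatenate([hist_a, hist_b])
    # one line per region: 0/1 label followed by the 64 histogram bins
    out_file.write(str(label) + " " + " ".join(str(int(v)) for v in bins) + "\n")
```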
My next step will be to train a support vector machine on this training data. Since I'm most familiar with SVMLight, I'm probably going to start there.