3rd year computer vision coursework - visual search system and report. Achieved 100%
Repository contents:

data/
descriptor/
distance/
report/
util/
.gitignore
cvpr_computedescriptors.m
cvpr_visualsearch_pca.m
cvpr_visualsearch_query_set.m
cvpr_visualsearch_rand_image.m
cvpr_visualsearch.m
labsheet3.pdf
parameter_iter_pca.m
parameter_iter_query_set.m
README.txt
scratch.m
spec.pdf
/data       - pulled images and spreadsheet of data
/descriptor - functions for extracting descriptors
/distance   - functions for measuring distance between descriptors
/util       - utility functions such as toGreyscale and EVD

There are two types of script: those that run a category response test once (cvpr_visualsearch_*) and those that iteratively generate new descriptors to run queries on (parameter_*).

_query_set operates using either the L1 or L2 norm on the query set.
_pca generates an eigenmodel from the descriptors and computes the Mahalanobis distance.
_rand_image picks a random query image from each category to iterate over; no results from this script appear in the report.

The cvpr_visualsearch_* scripts load descriptors from folders and perform a category response test on them. The parameter_* scripts were used to generate iterative parameter results for descriptors: effectively, the query code from the cvpr_visualsearch_* files has been prefaced with descriptor generators so that, taken as a whole, each script iterates over parameter settings instead of loading descriptors from files.
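For reference, the two distance measures mentioned above can be sketched as follows. This is an illustrative Python sketch only (the repo itself is MATLAB), and the function names are hypothetical, not taken from the codebase: an Lp norm between descriptors as used by _query_set, and a Mahalanobis distance built from an eigenmodel (EVD of the descriptor covariance) as used by _pca.

```python
import numpy as np

def lp_distance(q, d, p=2):
    """L1 (p=1) or L2 (p=2) distance between two descriptor vectors."""
    return np.sum(np.abs(q - d) ** p) ** (1.0 / p)

def eigenmodel(descriptors):
    """Mean and eigen-decomposition (EVD) of the descriptor covariance.

    descriptors: (n_images, n_dims) array, one descriptor per row.
    """
    mean = descriptors.mean(axis=0)
    centred = descriptors - mean
    cov = centred.T @ centred / len(descriptors)
    vals, vecs = np.linalg.eigh(cov)  # eigenvalues ascending, vecs as columns
    return mean, vals, vecs

def mahalanobis(q, d, vals, vecs, eps=1e-12):
    """Mahalanobis distance between q and d under the eigenmodel.

    Projects the difference onto the eigenvectors and whitens each
    component by its eigenvalue (eps guards near-zero eigenvalues).
    """
    diff = (q - d) @ vecs
    return np.sqrt(np.sum(diff ** 2 / (vals + eps)))
```

A category response test then ranks the whole image collection by distance to a query descriptor and scores how many of the top results share the query's category.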