Dimension reduction and feature selection are fundamental tools for machine learning and data mining. Most existing methods, however, assume that objects are represented by a single vectorial descriptor. In reality, some description methods assign an unordered set or a graph of vectors to a single object, where each vector has the same number of dimensions but is drawn from a different probability distribution. Moreover, some applications (such as pose estimation) may require the recognition of individual vectors (nodes) of an object. In such cases it is essential that the nodes within a single object remain distinguishable after dimension reduction. In this paper we propose new discriminant analysis methods that satisfy two criteria at the same time: separating the classes and separating the nodes within an object instance.
We analyze and evaluate our methods on several synthetic and real-world datasets.
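To make the two-criterion idea concrete, the following is a minimal illustrative sketch, not the paper's actual method: a classical Fisher-style projection in which the between-group scatter is a weighted blend of between-class scatter and between-node scatter. The function names (`scatter_matrices`, `two_criterion_projection`) and the blending weight `alpha` are hypothetical choices made for this example.

```python
import numpy as np

def scatter_matrices(X, labels):
    """Within- and between-group scatter matrices for one labelling of the rows of X."""
    d = X.shape[1]
    mean_total = X.mean(axis=0)
    Sw = np.zeros((d, d))
    Sb = np.zeros((d, d))
    for c in np.unique(labels):
        Xc = X[labels == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)                 # scatter around the group mean
        diff = (mc - mean_total)[:, None]
        Sb += len(Xc) * (diff @ diff.T)               # group mean vs. global mean
    return Sw, Sb

def two_criterion_projection(X, class_labels, node_labels, n_components, alpha=0.5):
    """Illustrative sketch (not the paper's method): blend between-class and
    between-node scatter with a hypothetical weight `alpha`, then solve the
    Fisher-style generalized eigenproblem Sb w = lambda Sw w."""
    Sw_c, Sb_c = scatter_matrices(X, class_labels)
    Sw_n, Sb_n = scatter_matrices(X, node_labels)
    Sb = alpha * Sb_c + (1.0 - alpha) * Sb_n
    Sw = alpha * Sw_c + (1.0 - alpha) * Sw_n
    evals, evecs = np.linalg.eig(np.linalg.pinv(Sw) @ Sb)
    order = np.argsort(-evals.real)
    return evecs[:, order[:n_components]].real        # shape (d, n_components)
```

With `alpha = 1` this reduces to ordinary linear discriminant analysis on the class labels; smaller values trade class separation for node separability within objects.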