Transformation-invariant clustering and subspaces

Fast transformation-invariant component analysis

Transformation-invariant component analysis (TCA) is a probabilistic dimensionality reduction method that accounts for global transformations, such as translations and rotations, while learning local linear appearance deformations. Learning this model with the EM algorithm costs on the order of O(N²), where N is the number of elements (e.g., pixels) in each training example. This is prohibitive for many applications of interest, such as modeling mid- to large-sized images.
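To make this concrete, here is a rough sketch of such a generative model, in our own notation rather than the paper's: a low-dimensional code y generates a latent appearance z through a linear Gaussian (factor-analysis-style) model, and the observation x is a noisy, globally transformed copy of z,

$$
\mathbf{y} \sim \mathcal{N}(\mathbf{0}, I), \qquad
\mathbf{z} = \boldsymbol{\mu} + \Lambda\,\mathbf{y} + \boldsymbol{\epsilon}, \qquad
\boldsymbol{\epsilon} \sim \mathcal{N}(\mathbf{0}, \Psi),
$$
$$
T \sim P(T), \qquad
\mathbf{x} = T\,\mathbf{z} + \boldsymbol{\eta}, \qquad
\boldsymbol{\eta} \sim \mathcal{N}(\mathbf{0}, \Phi),
$$

where Λ and μ capture the local linear appearance deformations, Ψ and Φ are noise covariances, and T ranges over a discrete set of global transformations (for translations, roughly N of them). The E-step must weigh every candidate transformation for every training case, and each candidate touches all N elements, which is where the O(N²) cost comes from.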

In this work, we present an efficient EM algorithm for TCA that reduces the computational requirement to O(N log N). For 256x256 images, this amounts to a roughly 4,000-fold speedup.
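The summary above does not spell out how the reduction is achieved, but O(N log N) is what one obtains by scoring all N circular shifts with FFTs instead of one at a time. Below is a minimal sketch of that idea for the translation case, assuming isotropic observation noise and a uniform prior over shifts; the 1-D setting and function names are our simplifications, not the paper's code.

import numpy as np

def shift_log_likelihoods(x, z, sigma2):
    """Log p(x | shift t), up to an additive constant, for all N circular shifts of z.

    Naively this costs O(N^2): one O(N) squared error per shift.  Expanding
        ||x - roll(z, t)||^2 = ||x||^2 + ||z||^2 - 2 * corr[t]
    leaves only the circular cross-correlation corr[t] = x . roll(z, t),
    which the FFT evaluates for every shift at once in O(N log N).
    """
    corr = np.real(np.fft.ifft(np.fft.fft(x) * np.conj(np.fft.fft(z))))
    sq_err = np.dot(x, x) + np.dot(z, z) - 2.0 * corr
    return -0.5 * sq_err / sigma2

def shift_posterior(x, z, sigma2):
    """Posterior over all N shifts under a uniform prior (the E-step quantity)."""
    ll = shift_log_likelihoods(x, z, sigma2)
    ll -= ll.max()              # subtract the max for numerical stability
    p = np.exp(ll)
    return p / p.sum()

The same trick carries over to 2-D images (replace np.fft.fft with np.fft.fft2), and the analogous FFT-based correlations can supply the expectations needed to update the subspace parameters in the M-step.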

The proposed algorithm allows TCA to be applied to realistic data. It also makes TCA practical as a sub-module in other applications that require transformation-invariant subspace learning. An example is modeling images with a layered decomposition, where each layer is explained by a mixture of TCAs.

Project website

Fast Transformation-Invariant Component Analysis

References

  • A. Kannan, N. Jojic, and B. J. Frey. Fast Transformation-Invariant Component Analysis. Submitted to the International Journal of Computer Vision, special issue on Learning for Vision and Vision for Learning. [PDF]

  • A. Kannan, N. Jojic, and B. J. Frey. Fast Transformation-Invariant Component Analysis (or Factor Analysis). In Advances in Neural Information Processing Systems (NIPS), 2003. [PDF]