Abstract
Algorithms for the classification of 3D objects either recover the depth information lost during imaging using multiple images, structured lighting, image cues, etc., or work directly on the images for classification. While the latter class of algorithms is more efficient and robust in comparison, it is less accurate due to the lack of depth information. We propose the use of structured lighting patterns projected on the object, which get deformed according to the shape of the object. Since our goal is object classification and not shape recovery, we characterize the deformations using simple texture measures, thus avoiding the error-prone and computationally expensive step of depth recovery. Moreover, since the deformations encode depth variations of the object, the 3D shape information is implicitly used for classification. We show that the information thus derived can significantly improve the accuracy of object classification algorithms, and derive the theoretical limits on the height variations that can be captured by a particular projector-camera setup. A 3D texture classification algorithm derived from the proposed approach achieves a ten-fold reduction in error rate on a dataset of 30 classes, when compared to state-of-the-art image-based approaches. We also demonstrate the effectiveness of the approach for a hand-geometry-based authentication system, which achieves a four-fold reduction in the equal error rate on a dataset containing 149 users.
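As a minimal sketch of the idea (not the exact pipeline described in the paper), the example below assumes the captured image of the deformed pattern is summarized by a simple texture measure, here a uniform local binary pattern histogram, and classified with a nearest-neighbour classifier; the specific texture measures and classifier in the paper may differ.

```python
# Hypothetical sketch: classify objects from images of a projected structured-light
# pattern using a simple texture descriptor (uniform LBP). The choice of LBP and a
# 1-NN classifier is an assumption for illustration, not the paper's exact method.
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.neighbors import KNeighborsClassifier

P, R = 8, 1  # LBP neighbourhood: 8 samples at radius 1 pixel

def texture_descriptor(gray_image):
    """Histogram of uniform LBP codes computed on the deformed-pattern image."""
    lbp = local_binary_pattern(gray_image, P, R, method="uniform")
    hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
    return hist

def train_classifier(images, labels):
    """Fit a nearest-neighbour classifier on texture descriptors of training images."""
    X = np.stack([texture_descriptor(img) for img in images])
    clf = KNeighborsClassifier(n_neighbors=1)
    clf.fit(X, labels)
    return clf

def classify(clf, image):
    """Predict the object class of a new deformed-pattern image."""
    return clf.predict(texture_descriptor(image)[None, :])[0]
```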