Typically, input on touch screens is reduced from a large contact area to a single point. In the process, rich information is discarded in favor of a simpler mode of interaction. Generic touch points disregard users' personal perceptions of where a touch should register, a problem that is exacerbated for people with motor impairments, who may not be able to point with a single finger.
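To make the reduction concrete, here is a toy sketch (not this project's actual pipeline) of the standard approach being critiqued: collapsing a binary contact mask to a single point via its centroid, which discards the shape and extent of the contact.

```python
def contact_centroid(mask):
    """Reduce a binary contact mask (list of rows of 0/1) to one
    (row, col) point. Shape, orientation, and area information in
    the contact region is discarded in the process."""
    pts = [(r, c) for r, row in enumerate(mask)
                  for c, v in enumerate(row) if v]
    if not pts:
        return None
    n = len(pts)
    return (sum(r for r, _ in pts) / n, sum(c for _, c in pts) / n)

# A broad contact (e.g. the side of a hand) and a fingertip-sized
# contact can collapse to exactly the same touch point:
palm = [[0, 1, 1, 1, 0],
        [1, 1, 1, 1, 1],
        [0, 1, 1, 1, 0]]
finger = [[0, 0, 0, 0, 0],
          [0, 0, 1, 0, 0],
          [0, 0, 0, 0, 0]]
```

Here `contact_centroid(palm)` and `contact_centroid(finger)` both yield `(1.0, 2.0)`, illustrating how dissimilar contacts become indistinguishable once reduced to a point.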
Leveraging the rich touch information available on the Microsoft PixelSense, we are using a variety of computer vision, machine learning, and template matching techniques to investigate methods for personalizing touch input on large surface displays.
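As an illustration of one of the named techniques, the sketch below shows template matching over binary contact masks: an incoming contact is compared against stored per-user templates by intersection-over-union. The template names, masks, and scoring choice are all hypothetical assumptions for the example, not the project's actual method.

```python
def iou(a, b):
    """Intersection-over-union of two equal-sized binary masks
    (lists of rows of 0/1); 0.0 when both masks are empty."""
    inter = sum(x & y for ra, rb in zip(a, b) for x, y in zip(ra, rb))
    union = sum(x | y for ra, rb in zip(a, b) for x, y in zip(ra, rb))
    return inter / union if union else 0.0

def best_template(contact, templates):
    """Return the name of the stored template most similar to
    `contact` under the IoU score."""
    return max(templates, key=lambda name: iou(contact, templates[name]))

# Hypothetical per-user templates for two kinds of contact:
templates = {
    "fingertip": [[0, 1, 0],
                  [1, 1, 1],
                  [0, 1, 0]],
    "knuckle":   [[1, 1, 1],
                  [1, 1, 1],
                  [1, 1, 1]],
}
contact = [[1, 1, 1],
           [1, 1, 1],
           [0, 1, 1]]  # a broad press, closer to the "knuckle" template
```

Matching against stored examples of how a particular user actually touches, rather than assuming an idealized fingertip, is one way a system could personalize where the touch point lands.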