If robots are ever going to get to the point where they can interact with people, they're going to have to figure out how to read someone's face. If a robot can't decode my expression, it totally won't pick up on my biting sarcasm and will take everything I say at face value, and I don't think I need to tell you what kind of hilarious misunderstandings can spring from that.

Jacob Whitehill, a computer science Ph.D. student at UC San Diego's Jacobs School of Engineering, has created a program that lets him control the playback speed of a video with his facial expressions, a first step toward controlling robots with our faces.

In the pilot study, the facial movements people made when they perceived the lecture to be difficult varied widely from person to person. Most of the eight test subjects, however, blinked less frequently during difficult parts of the lecture than during easier portions, a pattern consistent with findings in psychology.
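To make that concrete, here's a minimal sketch (not Whitehill's actual code) of how you might compare blink rates across easy and difficult segments of a lecture. The blink timestamps and segment labels are made-up inputs; in the real study they would come from an automatic facial-expression detector and the viewers' own ratings.

```python
# Hypothetical illustration: blink rate per lecture segment.
def blink_rate(blink_times, start, end):
    """Blinks per minute within a segment [start, end), times in seconds."""
    count = sum(1 for t in blink_times if start <= t < end)
    return count / ((end - start) / 60.0)

# (segment_start, segment_end, perceived difficulty) -- all made up
segments = [(0, 120, "easy"), (120, 300, "difficult"), (300, 420, "easy")]
blink_times = [5, 18, 31, 44, 60, 75, 200, 290, 310, 330, 350, 370, 390]

for start, end, label in segments:
    rate = blink_rate(blink_times, start, end)
    print(f"{label:>9} segment ({start}-{end}s): {rate:.1f} blinks/min")
```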

One of the next steps for this project, Whitehill explained, is to determine which facial movements a person naturally makes when exposed to difficult or easy lecture material. From there, he could train a user-specific model that predicts when a lecture should be sped up or slowed down based on the spontaneous facial expressions that person makes.
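Here's a hedged sketch of the kind of user-specific model that description suggests; it is not Whitehill's system. It assumes we already have, for one viewer, per-window facial features (blink rate and a brow-furrow score are stand-ins) plus labels for whether each window felt difficult, and it uses a plain logistic regression to turn new readings into a playback-speed adjustment.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical training data for a single user:
# columns = [blinks per minute, brow-furrow intensity 0-1]
X_train = np.array([
    [14.0, 0.10], [12.5, 0.20], [13.0, 0.15],   # windows the user found easy
    [6.0, 0.70],  [5.5, 0.80],  [7.0, 0.65],    # windows the user found difficult
])
y_train = np.array([0, 0, 0, 1, 1, 1])          # 0 = easy, 1 = difficult

model = LogisticRegression().fit(X_train, y_train)

def playback_speed(features, base_speed=1.0):
    """Slow the video down when the model thinks the material feels difficult,
    speed it up when it looks easy."""
    p_difficult = model.predict_proba([features])[0, 1]
    # Map probability to a speed between roughly 0.7x (struggling) and 1.3x (cruising).
    return base_speed * (1.3 - 0.6 * p_difficult)

print(playback_speed([6.2, 0.75]))    # looks difficult -> slower than 1.0x
print(playback_speed([13.5, 0.10]))   # looks easy -> faster than 1.0x
```

The point of training per user is that, as the pilot study found, the telltale expressions vary widely from person to person, so a model fit to one viewer's own faces should beat a one-size-fits-all rule.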

Bring on the robotic professors! [Physorg via KurzweilAI.net]