The prototype for Microsoft's Kinect camera and microphone famously cost $30,000. At midnight Thursday morning, you'll be able to buy it for $150 as an Xbox 360 peripheral. Let's take some time to think about how it all works.
Kinect's camera is powered by both hardware and software. Together, they do two things: generate a three-dimensional (moving) image of the objects in its field of view, and recognize (moving) human beings among those objects.
Older software programs used differences in color and texture to distinguish objects from their backgrounds. PrimeSense, the company whose tech powers Kinect, and recent Microsoft acquisition Canesta use a different model. The camera transmits invisible near-infrared light and measures its "time of flight" after it reflects off the objects.
Time-of-flight works like sonar: If you know how long the light takes to return, you know how far away an object is. Cast a big field, with lots of pings going back and forth at the speed of light, and you can know how far away a lot of objects are.
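The arithmetic behind a single "ping" is simple enough to sketch. This is a toy illustration of the time-of-flight principle, not Kinect's actual firmware; the numbers are assumptions chosen for a round example.

```python
# Toy time-of-flight ranging: the round trip of a light pulse gives distance.

C = 299_792_458.0  # speed of light, in meters per second

def distance_from_round_trip(t_seconds):
    """The pulse travels out and back, so the object is half the trip away."""
    return C * t_seconds / 2.0

# A pulse that returns after ~13.3 nanoseconds bounced off something ~2 m away.
print(distance_from_round_trip(13.34e-9))
```

The punishing part isn't the formula but the timescale: at room distances the round trip takes a few nanoseconds, which is why this is done in dedicated sensor hardware rather than general-purpose code.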
Using an infrared generator also partially solves the problem of ambient light. Since the sensor isn't designed to register visible light, it doesn't get quite as many false positives.
PrimeSense and Kinect go one step further and encode information in the near-IR light. As that information is returned, some of it is deformed - which in turn can help generate a finer image of those objects' 3-D texture, not just their depth.
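One way to picture how a "deformed" projected pattern yields depth is triangulation: a speckle of the pattern lands at a different pixel depending on how far away the surface is, and the size of that shift encodes distance. The constants below are assumed stand-ins, not PrimeSense's real calibration.

```python
# Toy structured-light depth recovery: the projected pattern shifts sideways
# by an amount ("disparity") that depends on depth, so measuring the shift
# per pixel recovers depth by triangulation.

FOCAL_PX = 580.0    # assumed focal length of the IR camera, in pixels
BASELINE_M = 0.075  # assumed projector-to-camera separation, in meters

def depth_from_shift(shift_px):
    """A larger pattern shift means a closer object."""
    return FOCAL_PX * BASELINE_M / shift_px

# Under these assumed numbers, a speckle that shifted 20 pixels
# sits roughly 2.2 meters from the camera.
print(round(depth_from_shift(20.0), 2))
```

Because the shift is measured per speckle across the whole field, the result is a dense depth map rather than a handful of range readings.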
With this tech, Kinect can distinguish objects' depth to within 1 cm and their height and width to within 3 mm.
Figure from PrimeSense explaining the PrimeSensor Reference Design.
At this point, both the Kinect's hardware - its camera and IR-light projector - and its firmware (sometimes called "middleware") are at work. The Kinect's on-board processor runs algorithms over the raw sensor data to render the three-dimensional image.
The middleware also can recognize people: distinguishing human body parts, joints and movements, as well as distinguishing individual human faces from one another. When you step in front of it, the camera "knows" who you are.
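Before the middleware can find joints and faces, it first has to separate a person from the room. A heavily simplified sketch of that first pass, assuming a per-pixel depth map is already available (the threshold and data here are invented):

```python
# Toy foreground segmentation: with per-pixel depth in hand, a person
# standing in front of the background is easy to isolate by distance.

def foreground_mask(depth_map, max_depth_m=2.5):
    """Mark pixels closer than max_depth_m as candidate 'person' pixels."""
    return [[d < max_depth_m for d in row] for row in depth_map]

depth = [
    [3.1, 3.1, 3.0],
    [3.1, 1.8, 3.0],   # the 1.8 m pixel is someone standing closer
    [3.1, 1.9, 3.0],
]
mask = foreground_mask(depth)
print(mask)
```

The real middleware goes much further - labeling body parts and fitting a skeleton to the isolated pixels - but depth is what makes that labeling tractable where color and texture alone weren't.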
Does it "know" you in the sense of embodied neurons firing, or the way your mother knows your personality or your confessor knows your soul? Of course not. It's a videogame.
But it's a pretty remarkable videogame. You can't quite get the fine detail of a table tennis slice, but the first iteration of the WiiMote couldn't get that either. And all the jury-rigged foot pads and nunchuks strapped to thighs can't capture whole-body running or dancing like Kinect can.
That's where the Xbox's processor comes in: translating the movements captured by the Kinect camera into meaningful on-screen events. These are context-specific. If a river-rafting game requires jumping and leaning, it's going to look for jumping and leaning. If navigating a Netflix "Watch Instantly" menu requires horizontal and vertical hand-waving, that's what will register on the screen.
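That context-dependence can be sketched as a lookup: the same tracked gesture produces an event in one context and nothing in another. The gesture and event names below are invented for illustration; this is not the Xbox SDK.

```python
# Hypothetical context-specific gesture mapping: each game or menu
# registers only the gestures it cares about.

RAFTING_GESTURES = {"jump": "boat_hop", "lean_left": "steer_left",
                    "lean_right": "steer_right"}
MENU_GESTURES = {"wave_left": "prev_item", "wave_right": "next_item",
                 "wave_up": "scroll_up", "wave_down": "scroll_down"}

def on_gesture(gesture, context):
    """Only gestures the current context is listening for produce events."""
    return context.get(gesture)  # None means: ignored in this context

print(on_gesture("jump", RAFTING_GESTURES))  # fires in the rafting game
print(on_gesture("jump", MENU_GESTURES))     # ignored in the Netflix menu
```

Narrowing the vocabulary per context also makes recognition more reliable: the system only has to tell apart the handful of movements that currently matter.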
It has an easier time recognizing some gestures and postures than others. As Kotaku noted this summer, recognizing human movement - at least, any movement more subtle than a hand-wave - is easier to do when someone is standing up (with all of their joints articulated) than sitting down.