Intel Working on AR Chip, as Makers of Terminator Apps Rejoice


Embedded augmented-reality chip technology means we'll finally start using AR for good instead of filling all the app stores with Predator, Terminator and Robocop camera overlay apps. Nah, I'm just kidding, it's probably going to get worse.


Reuters reports that Total Immersion marketing chief Antoine Brachet says the company has been working with Intel to bring AR to Intel's chipsets. Intel owns a stake in Total Immersion. According to Brachet, the chip could hit the market in two to three years.

What we are doing together with Intel is working on their chipset ... so inside the chipset you can have some AR features, like gesture recognition that can be transferred from software to hardware.


AR technology baked into a chip would mean that applications wouldn't have to include their own code for gesture recognition and for overlaying information on images. Potentially, all an OS would need is a driver for the chipset to enjoy AR features.

While embedded gesture recognition sounds awesome, AR-overlay technology hasn't exactly taken off with consumers. The most widespread, and probably most useful, application of the technology is Yelp's Monocle feature, and how often do you use that? [Reuters]


You can keep up with Roberto Baldwin on Twitter, Facebook, and Google+.


DISCUSSION

IceMetalPunk

Monocle? Did you forget about Layar?

Anyway, I like the sound of hardware-based AR. I love the possibilities of AR and I always have. The only two things that have always bothered me about current AR tech are the following:

1) The lighting on the embedded AR objects never matches the lighting in the scene, which makes them look very cut-and-paste.

2) When AR is tracking a marker, there's always a noticeable lag that makes the motion of embedded objects look awkward.

Both of these are currently being fixed. #2 has been all but eliminated by the tech that uses 2D camera information to map out the 3D world instead of requiring markers.
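For context, here's a rough sketch of what the classic marker-based approach looks like in code, using OpenCV's ArUco module. None of this comes from the article; it assumes opencv-contrib-python 4.7 or newer and a webcam. The point is that every frame is searched for markers from scratch, which is where the visible tracking lag tends to come from on slower hardware.

import cv2

# Marker dictionary and detector (OpenCV 4.7+ class-based ArUco API).
dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
detector = cv2.aruco.ArucoDetector(dictionary)

cap = cv2.VideoCapture(0)  # default webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Each frame is re-scanned for markers; this detect-then-overlay step
    # runs every frame and is the main source of the lag described above.
    corners, ids, _rejected = detector.detectMarkers(frame)
    if ids is not None:
        cv2.aruco.drawDetectedMarkers(frame, corners, ids)
    cv2.imshow("marker AR sketch", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()

Markerless systems replace the detectMarkers step with continuous mapping of the scene geometry, which is why the overlay stops jittering when the marker search is no longer the bottleneck.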

#1 is on the way to being fixed. There was that recent article about software that inserts a static or moving 3D model into any photograph and matches the scene's lighting perfectly. It still only works on still photos and requires a lot of manual pre-processing, but it's certainly a step in the right direction.

Unfortunately, I can't seem to find links to the above two articles. They were both on Gizmodo, the lighting article more recently than the 3D mapping AR article. Can anyone help me find them?

Anyway, my point is that once those two things are fixed, and if it's all eventually moved to hardware, we'll finally have the AR I've dreamed of since I first heard the term "Augmented Reality." :D