Nvidia just announced the new Tegra X1 processor at its CES press conference. Last year, Nvidia brought its killer desktop Kepler graphics to a mobile chip, and this year X1 will bring the newer, kickass Maxwell graphics there as well.
K1 brought all kinds of crazy power to mobile devices like the Nvidia Shield tablet, but the X1 takes it to a whole new level. Nvidia says the X1 can offer twice the performance of the K1, and what's more, it can support the same game engines that Maxwell runs on the desktop.
What that means is that the Tegra X1 can run things like the Unreal Engine Elemental demo on a chip that could fit in a phone or tablet. It looks totally mind-blowing for a mobile chip. It's not perfect; some of the more extreme effects in the demo cause the framerate to stutter a bit, but it's insane that running it is possible at all.
That's great for gaming, sure, but Nvidia has its sights set on something bigger: creating a supercomputer brain that controls your whole car. Nvidia's new Drive CX "mobile cockpit computer" is a device powered by a Tegra X1 that can push 16.6 megapixels, or 4 HD displays. That is to say, it can power a whole bunch of badass displays in your car all at once. It could turn your car's dash into a sci-fi spaceship dashboard.
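For scale, here's the back-of-the-envelope arithmetic behind that 16.6-megapixel figure. The panel resolutions below are just standard sizes used for illustration; exactly which mix of displays Nvidia has in mind is their call, not something stated here.

```python
# Quick sanity check on the "16.6 megapixels" figure for Drive CX.
# Panel resolutions are standard sizes, used here purely for scale.

RES_4K = 3840 * 2160      # pixels in one 4K panel
RES_1080P = 1920 * 1080   # pixels in one full-HD (1080p) panel

two_4k = 2 * RES_4K
print(two_4k)             # 16588800 pixels, i.e. ~16.6 megapixels

# The same budget also covers eight 1080p panels:
print(8 * RES_1080P)      # 16588800
```

In other words, 16.6 million pixels is roughly two 4K screens' worth, which a carmaker could split across several smaller dash and cluster displays.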
Something a little like this:
In practice, this allows for visuals that can finally make your GPS more than just a stupid little arrow chugging along on a map, and instead something a little more awesome. Something more like this:
Nvidia showed off changing the texture, color, and material of a car's virtual instrument panel on the fly.
But it goes further than that. Nvidia wants to put the X1 to work as the brains of future driverless cars. Nvidia also announced something called the Drive PX, an "auto-pilot car computer" that's powered by two X1s. The point of this computer? To know everything that's going on in and around your car, from what's displayed on its screens to anything coming in from outward-facing driver-assistance cameras. It's the brain that makes sense of what's coming in through the car's many eyes, and it lets the car really learn about and understand its surroundings, using neural network technology that can teach itself over time what cars and vans and cyclists and pedestrians look like.
Sounds great, right? Hell yeah. The catch is that it's still a long, long way off. The X1 is a chip with the horsepower to make this sort of stuff possible, sure, but cars still have a lot of catching up to do, whether it's by including a ton of high-res panels that will show you all those awesome Tron graphics, or by having a bevy of outward-facing cameras that provide all the information something like an "auto-pilot car computer" would want to process. And that's to say nothing of the challenges of getting this tech—and this tech specifically—into cars; everybody is working on a self-driving car these days.
The tech works now, though. Nvidia already has functional prototypes of this tech, and by extension, cars with brains smart enough to spot cyclists and pedestrians and other squishy things they'd best not hit, or to realize that a bunch of brake lights up ahead means they should probably get ready to start slowing down. Nvidia claims it's building a neural net whose learnings it can wirelessly sync to every other Nvidia-powered car simultaneously, making them all smarter and smarter.
Last but not least, Nvidia says a new Auto Valet feature will let future cars park themselves: the Tegra X1 can generate a point cloud of things it sees with connected cameras, identify an empty parking spot in a garage it's never seen before, and park itself.
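Nvidia didn't spell out the Auto Valet algorithm, but the basic idea of turning a camera-derived point cloud into a map of free space can be sketched roughly like this. Everything below (the functions, the grid, the sample points) is invented for illustration, not Nvidia's actual method:

```python
# Toy sketch: flatten 3D points onto a 1D strip of occupancy cells, then
# look for a contiguous run of free cells wide enough to be a parking spot.
# Purely illustrative; real systems work on full 3D maps with far more data.

def occupancy_row(points, cell_size=0.5, width=12):
    """Mark which cells along one wall of a garage contain obstacles."""
    row = [False] * width
    for x, _depth, _height in points:   # only lateral position matters here
        cell = int(x / cell_size)
        if 0 <= cell < width:
            row[cell] = True            # something occupies this cell
    return row

def find_spot(row, spot_cells=5):
    """Return the start index of the first free run big enough for a car."""
    run_start, run_len = 0, 0
    for i, occupied in enumerate(row):
        if occupied:
            run_start, run_len = i + 1, 0
        else:
            run_len += 1
            if run_len >= spot_cells:
                return run_start
    return None                         # no gap wide enough

# Two parked cars with a gap between them:
points = [(0.6, 2.0, 0.4), (1.1, 2.0, 0.5), (4.8, 2.1, 0.4), (5.3, 2.0, 0.6)]
print(find_spot(occupancy_row(points)))  # prints 3: cells 3-7 are free
```

The real trick, of course, is the perception step that produces reliable points in a garage the car has never seen before; once you have a trustworthy free-space map, the spot-finding part is comparatively simple.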
Will we see this technology in a new Tesla, perhaps? (Nvidia supplied Tesla with the processors that power the Model S's giant touchscreen.) Can't say for sure. But Audi did pop up on stage to show their support.