Remember when metal band names were good? Names like RATT and Poison and Mötley Crüe elicited the perfect image: slick, sweaty men licking their guitars while wearing tight leather pants and acid washed jeans, wagging their hair-sprayed manes and rocking out harder than any of us so-called millennials could even…
Over the past few weeks, Facebook and Instagram feeds have been flooded with creepily realistic facial transformations produced by “FaceApp,” a free app for iOS and Android whose filters add smiles to photos, alter faces to make them older or younger, and even make them “male” or “female.” The app’s startling…
Human character animation has gotten much better over the years, but it’s still one of the most noticeable flaws in video games. Animations are normally a predetermined set of canned motions, and while they look real enough in the right setting, they can totally break the immersive experience when they stray…
By now we’ve seen everything from Fear and Loathing in Las Vegas to Donald Trump to popular memes processed by neural networks like Google’s Deep Dream. They’re like bizarre drug trips, but without the drugs. But it was only recently that Alexander Reben was curious enough to see what a neural network would make of…
If you’re a miserable drawer, have no fear—a new website takes your doodles and, using a machine learning algorithm, turns them into cats. Or nightmares.
The team at Google Brain has made an impressive breakthrough for increasing the resolution of images. They’ve managed to turn 8x8 grids of pixels into monstrous approximations of human beings.
Behold the glorious future of neural networks: disembodied faces rotating in the darkness. Research posted to Cornell University’s arXiv preprint server uses deep neural networks to create detailed 3D models of faces from a single 2D picture.
Is that a baby or the Blob? It’s actually just the sick and twisted result of a neural network predicting what a still photo of a baby would look like if it were moving. Researchers at MIT have published demonstrations of their work on generative video, and the “hallucinated” outcomes are both impressive and…
This creep machine, called Alter, runs entirely off a neural network. That means all its incoherent and erratic movements are 100 percent free of any human control. It’s basically alive.
We’re getting A.I. to do all sorts of weird and wonderful things these days, whether it’s small-scale tasks like text prediction and photo captioning, or driving cars for us and beating people at board games. But what if we turned a neural network into a science fiction writer? The answer is that you’d get a complete…
Microsoft just launched a new online app that tries to understand the contents of your photographs and write captions for them. And it’s surprisingly impressive—most of the time.
Neural networks are a fundamental part of artificial intelligence: software systems that train themselves to make sense of the human world. But if you want to understand how they work at a basic level, a cool new website lets you get under the hood.
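To get a feel for what “training itself” means at that basic level, here is a minimal sketch of my own (an illustration, not code from the website): a single artificial neuron learning the AND function by gradient descent.

```python
import math

# Truth table for AND: inputs -> target output
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w1, w2, b = 0.0, 0.0, 0.0  # weights and bias, starting from zero
lr = 0.5                   # learning rate

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Training loop: repeatedly nudge each parameter against its error gradient
for _ in range(5000):
    for (x1, x2), target in data:
        out = sigmoid(w1 * x1 + w2 * x2 + b)
        grad = (out - target) * out * (1 - out)  # gradient of squared error
        w1 -= lr * grad * x1
        w2 -= lr * grad * x2
        b  -= lr * grad

# After training, the neuron reproduces the AND truth table
predictions = [round(sigmoid(w1 * x1 + w2 * x2 + b)) for (x1, x2), _ in data]
print(predictions)  # -> [0, 0, 0, 1]
```

A real network stacks many such neurons in layers, but the core loop—compute an output, measure the error, adjust the weights—is the same.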
Google’s artificial intelligence is getting speedily (and worryingly?) better, as its recent slam-dunk of a human Go champion demonstrated. That victory required highly computationally efficient AI rather than just brute force, something Google thinks could help it move speech recognition offline.
Self-driving cars spend a lot of time looking at their surroundings to know how they should respond to the road. But autonomous cars will likely spend some time looking at you to work out how they should behave, too.
Predictive text and neural networks have gotten crazy good in the past few years, to the extent that I would actually consider turning them on from time to time. But should you let a computer that knows your writing habits make you a dating profile? Oh hell no.
Neural networks are increasingly taking on jobs that used to be the preserve of the human brain. So Erik Bernhardsson decided to see what would happen if he threw 50,000 fonts at a neural network and left it to chew on them. The results, it turns out, are pretty interesting.
Take one neural network that describes what it sees in an image. Provide it with a webcam feed from the MacBook it’s running on. Then, wander around a city and see what happens. Here are the results of exactly that experiment.
Robots are good at a lot of things, but their track record at picking up objects is poor. So just how hard is it to teach one to pick up an object on demand from a table full of clutter?