Self-driving cars spend a lot of time looking at their surroundings to know how they should respond to the road. But autonomous cars will likely spend some time looking at you to work out how they should behave, too.
Predictive text and neural networks have gotten crazy good in the past few years, to the extent that I would actually consider turning them on from time to time. But should you let a computer that knows your writing habits make you a dating profile? Oh hell no.
Neural networks are increasingly taking on jobs that used to be the preserve of the human brain. So Erik Bernhardsson decided to see what would happen if he threw 50,000 fonts at a neural network and left it to chew on them. The results, it turns out, are pretty interesting.
Take one neural network that describes what it sees in an image. Provide it with a webcam feed from the MacBook it’s running on. Then, wander around a city and see what happens. Here are the results of exactly that experiment.
Robots are good at a lot of things, but their track record at picking up objects is poor. So just how hard is it to teach one to pick up an object on demand from a table full of clutter?
When will a neural network know who Donald Trump is? How long until one can come up with a joke on its own? How about recognizing Yoda?
Lionfish. Petri dish. Dough. According to one very befuddled artificial neural network, all of these things can be found in the short intro to Star Trek: The Next Generation.
Google Voice has an incredibly useful function that provides you with a transcribed version of your voicemail—but it often gets things wrong. Now, Google is throwing neural networks at the problem to help improve its performance dramatically.
If you use Google’s new Photos app, Microsoft’s Cortana, or Skype’s new translation function, you’re using a form of AI on a daily basis. AI was first dreamed up in the 1950s, but has only recently become a practical reality — all thanks to software systems called neural networks. This is how they work.
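At their core, the neural networks behind apps like these boil down to layers of weighted sums passed through a nonlinearity. As a rough, hypothetical illustration (the weights and inputs below are invented for the example, not taken from any real model):

```python
# A minimal sketch of the computation inside a neural network:
# each neuron takes a weighted sum of its inputs, adds a bias,
# and passes the result through an activation function.

def relu(x):
    # A common activation function: zero out negative values.
    return max(0.0, x)

def layer(inputs, weights, biases):
    # One layer: every output neuron gets its own weight row and bias.
    return [relu(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

# A tiny two-layer network with hand-picked (made-up) weights.
inputs = [0.5, -1.2]
hidden = layer(inputs, weights=[[0.8, 0.2], [-0.5, 1.0]], biases=[0.1, 0.0])
output = layer(hidden, weights=[[1.0, -1.0]], biases=[0.0])
print(output)
```

Training is the part this sketch leaves out: real systems learn those weights automatically from millions of examples rather than having them written by hand.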
The first week of July 2015 will forever be known as the week the internet freaked out about a bunch of triiiiiiippy images generated by a snoozing computer. Please. In my day we didn’t need Google to help us see melting dog faces with six eyes that are actually snails with centipedes crawling on their shells. We did…
Gmail’s spam filters have always been pretty good, but now they’re getting a shot in the arm. Google’s rolling out its artificial neural network technology, currently used in the likes of its Search and Now apps, to help reduce the weight of unwanted email even further.
By now, the entire internet’s realized that Deep Dream, Google’s artificial neural network, is capable of some pretty trippy images. But what happens when you run a movie about acid trips through the acid trip generator? Fear and Loathing in your worst nightmares, that’s what.
Remember a few weeks back, when we learned that Google’s artificial neural network was having creepy daydreams, turning buildings into acid trips and landscapes into Magic Eye pictures? Well, prepare to never sleep again, because last week, Google made its “inceptionism” algorithm available to the public, and the…
Chatbots are notoriously difficult to make work well. But now Google’s developed a new conversational AI that uses neural networks to learn from movie dialogue—and it can just about hold down conversations about ethics and VPN problems.
When you look for shapes in the clouds, you’ll often find things you see every day: dogs, people, cars. It turns out that artificial “brains” do the same thing. Google calls this phenomenon “Inceptionism,” and it’s a shocking look into how advanced artificial neural networks really are.