Google’s artificial intelligence has already taken on the form of a human nerd, but now it’s time for its next act. Can AI be an artist?
Douglas Eck, a researcher working on Google Brain, recently revealed that the team will soon launch a new project called Magenta. Though it took some inspiration from DeepDream, another Google Brain scheme that yielded trippy-as-hell images (amongst other things), Magenta has one key difference: It will try to figure out if computers can actually create art, instead of reproducing or distorting it.
Magenta is set to launch in a more official capacity in early June, but Eck provided some early insights during a recent discussion at Moogfest, a music, art and technology festival. The new technology will use TensorFlow, Google’s open source machine learning resource—and like TensorFlow, Magenta’s tools will be available to the rest of us.
Quartz reports that Magenta’s first project is a program “that will help researchers import music data from MIDI music files into TensorFlow, which will allow their systems to get trained on musical knowledge.” After music, Magenta will move on to testing whether its computers can create images and video.
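Magenta’s actual MIDI tooling hadn’t shipped at the time of writing, but the preprocessing step being described is easy to picture. Here’s a toy sketch (all function names and parameters are illustrative, not Magenta’s API) of what “importing music data” for training roughly means: flattening MIDI-style note events into a time-ordered pitch sequence, then one-hot encoding each pitch so a model can consume it.

```python
# Toy preprocessing sketch: turn MIDI-style note events into model-ready
# integer sequences. Names and the quantization scheme are illustrative,
# not Magenta's real converter.

def events_to_sequence(events, quantize=120):
    """Flatten (pitch, start_tick, duration) events into a time-ordered
    list of MIDI pitches, quantized to a fixed tick grid."""
    grid = {}
    for pitch, start, dur in events:
        step = start // quantize
        grid.setdefault(step, []).append(pitch)
    seq = []
    for step in sorted(grid):
        seq.extend(sorted(grid[step]))
    return seq

def one_hot(seq, vocab=128):
    """Encode each MIDI pitch (0-127) as a one-hot vector."""
    return [[1 if p == i else 0 for i in range(vocab)] for p in seq]

# A C-E-G arpeggio as (pitch, start_tick, duration) events
events = [(60, 0, 110), (64, 120, 110), (67, 240, 110)]
seq = events_to_sequence(events)
print(seq)                    # [60, 64, 67]
print(len(one_hot(seq)[0]))  # 128
```

The point of a step like this is just to get musical events into the fixed-size numeric tensors that TensorFlow models expect.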
If all this sounds ambitious, it is—Eck said machine learning is still “very far from long narrative arcs.” But it’s a fascinating effort. In a demonstration by one of Eck’s team members at Moogfest, a computer listened to a series of musical notes and spat back its own version.
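The model behind that demo wasn’t published, but the “hear a melody, answer with your own” idea can be sketched in miniature with a first-order Markov chain: learn which notes tend to follow which, then sample a fresh melody from those statistics. This is a toy stand-in, not the demo’s actual technique, and every name below is hypothetical.

```python
import random

def train_transitions(melody):
    """Count note-to-note transitions in a melody (a list of MIDI pitches)."""
    transitions = {}
    for a, b in zip(melody, melody[1:]):
        transitions.setdefault(a, []).append(b)
    return transitions

def generate(transitions, start, length, seed=None):
    """Sample a new melody by randomly walking the transition table."""
    rng = random.Random(seed)
    note, out = start, [start]
    for _ in range(length - 1):
        choices = transitions.get(note)
        if not choices:        # dead end: restart from the starting note
            choices = [start]
        note = rng.choice(choices)
        out.append(note)
    return out

# Opening of "Twinkle Twinkle" as MIDI pitches (C C G G A A G)
melody = [60, 60, 67, 67, 69, 69, 67]
table = train_transitions(melody)
print(generate(table, start=60, length=8, seed=1))
```

A real sequence model (Magenta’s early work leaned on recurrent neural networks) captures far longer-range structure than this, which is exactly the “long narrative arcs” gap Eck is pointing at.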
I can’t wait for the day Google’s computers start cutting off body parts and complaining that their vision is being destroyed by the world’s evil, capitalistic impulses!