The success this year of powerful new generative artificial intelligence models like OpenAI’s ChatGPT and Stability AI’s Stable Diffusion has laid the groundwork for a new era of AI tech set to explode even further in 2023. Google, though equipped with its own powerful (but definitely not sentient) LaMDA AI chatbot, says it doesn’t plan on rushing its models out to the public.
Executives at the company clarified their comparatively cautious approach during an all-hands meeting, according to CNBC, where employees asked if they were potentially losing a competitive edge to less cautious upstarts. Unlike smaller startup AI firms with little to lose, CEO Sundar Pichai and AI division head Jeff Dean said mistakes made by Google in generative AI risk tainting users’ perception of the company and its other, more trusted products. In other words, if Google released LaMDA to the public and it immediately started spewing out hateful misinformation, would users still view their Google search results with the same confidence?
“We are absolutely looking to get these things out into real products and into things that are more prominently featuring the language model rather than under the covers, which is where we’ve been using them to date,” Dean said, according to CNBC. “But, it’s super important we get this right.”
Potential unintended consequences leading to bias, toxicity, and safety issues, Dean said, were of particular concern for search-based AI systems. The AI lead took some thinly veiled shots at competitors who’ve already released their models to the public, saying some of them will “make stuff up” if they are unsure of an answer. Pichai, on the other hand, reportedly tried to reassure employees, telling them they had “a lot” planned for AI in the near future.
And there’s no shortage of examples Google can point to where overambitious tech companies released AI systems too early. Maybe most notably, in 2016, Microsoft released its “Tay” chatbot on Twitter to learn from users’ conversations, only to have the artificial intelligence transform into a racist asshole espousing sympathy for Hitler within 24 hours. Another chatbot, trained on 4chan users earlier this year, quickly racked up more than 15,000 racist posts within a day.
The Google all-hands came roughly one week after OpenAI’s ChatGPT took the tech internet by storm. That system, based on OpenAI’s GPT-3 language model, was released to the general public, leading to a flurry of screenshots from users commanding the system to generate everything from parody Bible verses and poetry to a less-than-convincing Gizmodo article. Some more intrepid users found ways to have the model draft lines of code and even answer specific questions, leading some to wonder if it could, one day, pose a threat to Google’s search business. If one takes a second to imagine a not-so-distant future where everyone possesses a Siri-like personal assistant on their phone with the search clarity of an OpenAI model, the apish task of opening a browser and typing with your fingers does start to feel a bit old-fashioned. Generative AI could, in theory, replace hyperlinks with readable paragraphs.
Despite the Google executives’ call for caution, the company has actually already taken some steps this year to open up LaMDA in particular. Back in August, Google let users play with the chatbot in a series of controlled demos via its AI Test Kitchen app. Rather than open up LaMDA to users in a completely open-ended format, however, Google instead opted to present the bot through a set of structured scenarios.
LaMDA isn’t the only model Google’s working on in the generative AI space, either. Last month, Gizmodo attended an AI event at the company’s New York headquarters where it showed off early, but nonetheless impressive, examples of text-to-image and text-to-video systems. Unlike most past generative AI systems for video, which create basically incoherent blobs of pointless imagery, Google’s demo was able to create a reliably coherent, 45-second story.