The large language model AI bandwagon is getting awfully full lately. Now Snap, the company that owns Snapchat, announced Monday it’s shoving an AI into its app so that users who are really bored talking with friends and real humans can instead have a friendly conversation with a somewhat unhinged digital tool.
In a short blog post, Snapchat shared that its “My AI” chatbot is now available to anybody who’s paying $3.99 a month for the Snapchat Plus subscription service. The system is running on OpenAI’s GPT technology. OpenAI is the creator of ChatGPT, though the system is running on the “latest version of GPT technology that we’ve customized just for Snapchatters,” according to an email statement Snap sent to Gizmodo. The new feature could be running on OpenAI’s Foundry product for developers, which allows for a dedicated instance of “GPT 3.5 Turbo,” as explained in leaked images tweeted by AI developer Travis Fischer.
Snap CEO Evan Spiegel told The Verge that the plan is to eventually roll out My AI to all Snapchat users, but so far the system seems to be little more than a mobile version of ChatGPT accessible inside the Snapchat app, for better or worse. Perhaps most important for all users: the company said that in this early rollout it will store all conversations you have with the AI. Snap wrote: “Please do not share any secrets with My AI and do not rely on it for advice.”
The AI is pinned to the top of the “Chat” tab for all Plus users. According to the company, the system’s name can be customized inside the app. Users can also customize the wallpaper inside the chat, but otherwise the chatbot acts just like ChatGPT, capable of writing some text and providing brief answers to questions. It will also supposedly offer links and contacts for mental health and drug abuse resources; Snapchat itself is reportedly caught up in allegations it’s been used to sell fentanyl-laced pills.
The company declined to tell Gizmodo which limitations it has put on the AI. However, The Verge wrote that the chatbot declines to write academic essays or share any response that includes violent, sexual, or hateful content. By comparison, Microsoft has limited how many queries users can ask its Bing AI each day and has also curtailed the range of topics it’s willing to discuss.
The company is apparently aware of just how wrong and weird the large language model can get. It admitted that “mistakes can occur” even though the AI was designed to avoid incorrect or misleading information. Snap asked users to press and hold on any message from My AI to submit feedback.
“As with all AI-powered chatbots, My AI is prone to hallucination and can be tricked into saying just about anything... sorry in advance,” the company wrote.