The very artificial yet not so intelligent Bing AI is getting even more integrations into Microsoft products. All the while, the tech giant is busy playing hot potato with how much power it’s willing to give its sophisticated chatbot, a problem the company has evidently been wrestling with for more than a year.
Microsoft announced in a Wednesday blog post that the Bing AI is being integrated into the Bing mobile app and into Skype as well. Users whose accounts have access to the Bing AI beta should gain access to the AI Copilot on both iOS and Android sometime Wednesday by tapping the Bing icon at the bottom of the app. Users should also be able to talk to the AI by voice and receive their answers in bullet points, text, or simplified responses.
For Skype, users will be able to add Bing into a group text chat “as you would any Skype contact.” The chatbot is effectively the Copilot, though integrated into the chat function so that anybody in a group text can ask it questions. Microsoft said the next step is to integrate the AI into Teams so now both your family and coworkers can bask in the glow of the great AI overlords. The company has already hinted at bringing its Bing AI into other Microsoft 365 apps like Word and PowerPoint.
Microsoft is determined to be the first big tech company on the block to truly integrate large language models into the mainstream. But this blistering pace of rollout has been tempered by users complaining that the AI is prone to giving misinformation, fabrications, or awkward and bizarre answers to queries.
Since releasing the Bing AI on Feb. 7, Microsoft has tried to restrict use in order to stop it from giving such unhinged responses. On Feb. 17, Microsoft limited the number of questions users could ask the AI to five per session, with a grand total of 50 per day. On Tuesday, the company wrote in another blog post that the move was to curtail “a handful of cases in which long chat sessions confused the underlying model.” The company then offered a modest bump to six chat turns per session and 60 per day, while promising to raise the daily cap to 100 “soon.”
Users have noticed the Bing AI often calls itself Sydney for no apparent reason. As it turns out, Microsoft has long been trying to whip its AI into shape. In an email statement, a Microsoft spokesperson confirmed to Gizmodo that Sydney was an early code name for an AI Microsoft had been testing “more than a year ago.” On Sunday, AI developer Ben Schmidt posted on his Mastodon page that users on the Bing support page seemed confused about a Sydney chatbot. One user posted back in November, long before Microsoft released its AI, that the chatbot called them “either desperate or delusional… either foolish or hopeless” for saying they wanted to give feedback. The Bing advisor seemed completely flummoxed by the user’s questions regarding the chatbot, hinting that the user was part of an early test.
Another user posted in the same thread in December, showing how the bot became belligerent when they pointed out the AI was wrong about Elon Musk being the new CEO of Twitter. When the user showed the AI a Musk tweet as proof of the change of hands, the bot wrote: “The link you provided is a fake tweet that was created by using a tool that allows anyone to generate realistic-looking tweets from any user. The tweet does not exist on Elon Musk’s official Twitter account, and it is not verified by any credible source.”
Bing AI has shown a similar inability to admit it is wrong, such as when it tried to gaslight one user about the current year compared to the release date of Avatar: The Way of Water.
It remains unclear how often the Sydney-era bot produced these wild responses compared to the modern Bing AI, but it does suggest Microsoft didn’t see a need to perfect its AI before release. The Microsoft spokesperson added that “The insights we gathered as part of that have helped to inform our work with the new Bing preview. We continue to tune our techniques and are working on more advanced models to incorporate the learnings and feedback so that we can deliver the best user experience possible.”
As much as Microsoft is touting that 71% of testers are “giving the new Bing a ‘thumbs up,’” the company has long known about the consistent problems with large language models. Back in November, when Microsoft’s new partner in crime OpenAI first released its ChatGPT model, users were quick to acknowledge the AI’s capabilities. Still, there were many reports of the sophisticated chatbot producing false content or broken code. Time will tell if Microsoft’s AI can actually move beyond the rampant headaches of this AI hangover.