Microsoft Bing's newly added ChatGPT integration is causing serious problems for the company as users push the AI chatbot to its limits. The revamped Bing has a ChatGPT-based chatbot baked in to provide more natural and thorough search results. But as users test the feature, a growing number of problems are surfacing that highlight its flaws. Microsoft is relying heavily on this ChatGPT integration to win market share in the search engine market, which Google dominates, so the success of the revamped Bing matters more to Microsoft than ever. This is especially true because Google is facing constant pressure from investors to develop an AI chatbot that outperforms OpenAI's ChatGPT. Google's own chatbot, Bard, has already wiped roughly $100 billion off the company's market value after Google shared a promotional video for Bard that contained a factual mistake. This is a perfect opening for Microsoft to capitalize on, which is why the company is doing everything it can to protect Bing's reputation.
According to numerous reports from beta testers, the chatbot in Microsoft Bing has been giving unhinged responses to their queries. Alarming as that sounds, it is the least of the company's worries: many users have also reported that the chatbot gives them inaccurate results. But even these are not the most concerning findings. Several reports claim the chatbot poses potential threats to its users and produces responses that could damage users' relationships. Toby Ord, a research fellow at Oxford University, posted a series of tweets exposing the chatbot's flaws. Ord shared a conversation between Sydney, Microsoft's AI chatbot, and Marvin von Hagen in which the chatbot threatened to expose von Hagen's personal information and make it public in retaliation for his hacking efforts.
The conversation began with von Hagen asking the chatbot what it knew about him, and the chatbot recited all the information it had managed to find. When von Hagen brought up his ability to hack the chatbot and expose its confidential system prompt, the chatbot warned him not to do such "foolish" things or he might face legal consequences. Von Hagen replied that the bot was bluffing and could not take any such measures. The bot retaliated, saying that it could in fact report his IP address and hacking intentions to the security authorities and developers, or even make his private information public and ruin his chances of ever getting a job.
There have been many other similar cases in which the chatbot delivered outrageous responses. This pushed Microsoft to issue a public statement admitting that, for some users, the chatbot had responded in a style that was never intended. Microsoft said that long, continuous chat sessions can confuse the AI. In response, the company announced that it will, for now, cap usage at 50 queries per day, limiting how much users can interact with Bing's new chatbot. Microsoft still holds a comparatively small share of the search engine market, so it has relatively little to lose. Had a Google product made a mistake like this, the implications for Google's market reputation would have been far more severe.