With its AI tool ChatGPT, OpenAI has caused a worldwide stir. Many people already rely on it for support, whether at work or at school. Recently, however, the tool began behaving strangely, and reports about it made the rounds on X, among other places.
As a language model, ChatGPT genuinely stands out when it comes to writing text. Programmers use it to generate lines of code, students draft essays with it, and even authors turn to the tool occasionally. It is usually reliable, just not last Wednesday, when users reported on X that the chatbot had started producing gibberish.
“Over the last few hours, people have reported experiencing a number of issues with ChatGPT,” writes Gary Marcus on his blog. In his post, the scientist collects reports from various ChatGPT customers who ran into similar problems. “The nek, the nay, the nash and the north,” the chatbot wrote in response to a computer scientist's query, a string of made-up words with no meaning. “ChatGPT Enterprise has lost its mind,” another user posted on X.
The problems with ChatGPT spread rapidly across social networks, and OpenAI responded quickly. The company assured users that it would fix the issue as soon as possible. A status update stated that the strange answers appeared to come down to how the model selects words.
Essentially, AI language tools generate text by selecting words at random based on probabilities. In very simplified terms, the model uses the texts it has learned from to calculate which words are likely to follow one another. Each word is in turn assigned a number, and ChatGPT had problems picking these numbers, “leading to word sequences that didn't make sense,” OpenAI reported.
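OpenAI did not publish technical details, but the mechanism it describes can be illustrated with a short sketch. The Python snippet below is purely illustrative: the tiny vocabulary, the probabilities, and the deliberate off-by-one fault are assumptions for demonstration, not OpenAI's actual code. It shows how a model that samples word numbers from sensible probabilities can still produce nonsense once the mapping from numbers back to words goes wrong.

```python
import random

# Illustrative only: a toy vocabulary in which each word (token) is
# identified by its number, i.e. its position in the list.
vocabulary = ["the", "cat", "sat", "on", "mat", "nek", "nay", "nash", "north"]

def next_word(probabilities, buggy=False):
    """Pick the next word by sampling a token number from the probabilities."""
    token_ids = list(range(len(vocabulary)))
    token_id = random.choices(token_ids, weights=probabilities, k=1)[0]
    if buggy:
        # Hypothetical fault: the chosen number points at the wrong entry,
        # so sensible probabilities still yield word sequences
        # that do not make sense.
        token_id = (token_id + 1) % len(vocabulary)
    return vocabulary[token_id]

# Probabilities favouring a sensible continuation of "the cat sat on the ..."
probs = [0.02, 0.05, 0.02, 0.02, 0.80, 0.02, 0.02, 0.03, 0.02]

print("normal:", [next_word(probs) for _ in range(4)])               # mostly "mat"
print("buggy: ", [next_word(probs, buggy=True) for _ in range(4)])   # mostly "nek"
```

In the faulty run, the most probable word number still gets picked, but it now resolves to a nonsense word, which is roughly the kind of failure OpenAI's status update described.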
The problem has since been resolved. Marcus also wrote that he hopes the incident serves as a wake-up call: many companies rely on ChatGPT, and failures like this could therefore have far-reaching consequences.