
https://noyb.eu/fr/chatgpt-provides-false-information-about-people-and-openai-cant-correct-it

The Austrian privacy organization noyb has filed a complaint against OpenAI, the company behind the ChatGPT chatbot. The complaint criticizes ChatGPT’s inability to correct its own errors, which can lead to the spread of false information about people. According to OpenAI itself, the application merely generates “responses to user requests by predicting the most likely next words that could appear in response to each request.” In other words, despite the vast amount of training data, there is currently no way to guarantee that ChatGPT provides users with factually correct information.

This case highlights the challenges posed by generative artificial intelligence systems like ChatGPT, and the importance of understanding how they work before deploying them.

Before delving into the details, let me ask you this question: “How are you?”

Whatever your answer, you reacted much like an LLM does:

  • You analyzed the context of the question.
  • You constructed a coherent response based on your culture and your mood.

In this process, two key concepts of generative AI are present.

First, there is the analysis of the request, which involves understanding the meaning of the question. LLMs use vectors to associate meaning with words in their context. For example, expressions like “how are you?” and “hello, how’s it going?” have similar meanings, even though the words used are different. The distance between the corresponding vectors is therefore small.
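To make this idea concrete, here is a minimal sketch using the open-source sentence-transformers library (the model name is just one common choice, and the third phrase is an invented, unrelated counter-example):

    # Compare the "meaning" of phrases by the proximity of their vectors.
    from sentence_transformers import SentenceTransformer, util

    model = SentenceTransformer("all-MiniLM-L6-v2")

    a = model.encode("how are you?", convert_to_tensor=True)
    b = model.encode("hello, how's it going?", convert_to_tensor=True)
    c = model.encode("the invoice is overdue", convert_to_tensor=True)

    print(util.cos_sim(a, b))  # high score: the phrases mean roughly the same thing
    print(util.cos_sim(a, c))  # low score: unrelated meanings, distant vectors

The exact scores depend on the model, but the two similar phrases consistently land much closer together than the unrelated one.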

Next, there is the formulation of a response, where the generative AI constructs a “probable” answer word by word, each time predicting the most likely continuation of the text so far.
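The following toy sketch illustrates this word-by-word mechanism. The probability table is entirely invented; a real LLM computes such probabilities from its parameters rather than looking them up:

    # Greedy word-by-word generation over an invented probability table.
    NEXT_WORD = {
        ("how", "are"): {"you": 0.9, "we": 0.1},
        ("are", "you"): {"?": 0.6, "doing": 0.4},
    }

    def generate(context, steps):
        """Repeatedly append the most probable next word, given the last two words."""
        for _ in range(steps):
            probs = NEXT_WORD.get(tuple(context[-2:]))
            if not probs:
                break  # no known continuation for this context
            context.append(max(probs, key=probs.get))
        return context

    print(generate(["how", "are"], 2))  # -> ['how', 'are', 'you', '?']

Notice that nothing in this loop checks whether the output is true; it only checks what is probable, which is exactly the limitation at the heart of the noyb complaint.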

The “relevance” of an LLM’s behavior is linked to the number of parameters in the model (figures in the trillions are often cited for GPT-4, although OpenAI has not disclosed the exact count). These parameters are adjusted by training on content available on the internet, estimated at around 120 zettabytes in 2023 (1 zettabyte = 1 billion terabytes).

What to take away from this?

  • LLMs do not have the ability to understand, but they do have classification capabilities.
  • These capabilities result from training on a dataset that far exceeds any single company’s documentary assets.
  • LLMs do not provide facts, but rather coherent text based on their training data.

Once these limitations are understood, the range of possible uses remains vast. For example:

Reformulate the question in a structured manner for systems capable of providing facts.

  • “Hello, I’m Joao Violante. What is my current balance?”
  • This question can be reformulated structurally: “find ‘current_balance’ for ‘Joao Violante’” (see the sketch below).
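Here is a hypothetical sketch of that pattern. The llm_complete parameter stands in for any chat-completion call, and the prompt, the account data, and the field names are all invented for illustration:

    # Turn a free-form question into a structured query, then answer it
    # from a factual system (here, a mock account database).
    import json

    PROMPT = """Rewrite the user's question as JSON with keys "field" and "customer".
    Question: {question}
    JSON:"""

    def reformulate(question, llm_complete):
        """Ask the LLM for a structured version of the question."""
        raw = llm_complete(PROMPT.format(question=question))
        return json.loads(raw)  # e.g. {"field": "current_balance", ...}

    ACCOUNTS = {"Joao Violante": {"current_balance": "1,234.56 EUR"}}

    def answer(query):
        """The factual system: a plain lookup, with no text generation involved."""
        return ACCOUNTS[query["customer"]][query["field"]]

    def fake_llm(prompt):
        # Stand-in for a real LLM call, returning the expected structure.
        return '{"field": "current_balance", "customer": "Joao Violante"}'

    question = "Hello, I'm Joao Violante. What is my current balance?"
    print(answer(reformulate(question, fake_llm)))  # -> 1,234.56 EUR

The LLM only translates the phrasing; the balance itself comes from a system that can be held accountable for its facts.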

Search a knowledge base for similar questions and return the associated answers.

  • “How can I close my account?”
  • The system can associate a specific vector with this question, then search the knowledge base for the closest vector and return the associated answer: “To close your account, you need to send form 1254bis.” A sketch of this lookup follows.
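Here is a minimal sketch of this lookup, reusing the same sentence-transformers model as above. The FAQ entries are invented, apart from the form 1254bis example from the article:

    # Embed the FAQ questions once, then answer a new question by returning
    # the answer attached to the closest question vector.
    from sentence_transformers import SentenceTransformer, util

    model = SentenceTransformer("all-MiniLM-L6-v2")

    faq = [
        ("How can I close my account?",
         "To close your account, you need to send form 1254bis."),
        ("How do I change my address?",
         "You can update your address in the customer portal."),
    ]
    faq_vectors = model.encode([q for q, _ in faq], convert_to_tensor=True)

    def lookup(question):
        """Return the answer whose question vector is closest to the query vector."""
        query = model.encode(question, convert_to_tensor=True)
        scores = util.cos_sim(query, faq_vectors)[0]
        return faq[int(scores.argmax())][1]

    print(lookup("What do I do to shut down my account?"))
    # -> "To close your account, you need to send form 1254bis."

Even a question phrased very differently from the stored one (“shut down” instead of “close”) lands on the right answer, because the comparison happens between vectors, not words.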

Once a factual system has provided the answer, an LLM can be used to generate a personalized response (sketched below).

  • “Dear Mr. Violante, to close your account, please complete form 1254bis.”
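A sketch of this last step. As before, llm_complete is a placeholder for any chat-completion call, and the prompt wording is illustrative; the important point is that the fact comes from the knowledge base and the LLM only adds tone:

    # Wrap a verified fact in a personalized message; the LLM rephrases,
    # it does not invent the fact.
    TEMPLATE = """You are a customer-service assistant. Using ONLY the fact below,
    write a short, polite reply addressed to {name}.
    Fact: {fact}
    Reply:"""

    def personalize(name, fact, llm_complete):
        return llm_complete(TEMPLATE.format(name=name, fact=fact))

    # personalize("Mr. Violante",
    #             "To close your account, send form 1254bis.", llm)
    # -> "Dear Mr. Violante, to close your account, please complete form 1254bis."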

In conclusion, it is crucial to recognize that language models like ChatGPT should not be treated as isolated components of AI projects, but rather as pieces to be integrated into a broader process, alongside systems that can actually provide facts.
