We know the wonders ChatGPT can perform, but we are less aware of its limitations. For example, ChatGPT is worse at math than any calculator, even though it is supposed to be able to solve math problems.
If you ask ChatGPT (with GPT-3.5 or even GPT-4) to multiply numbers with 4 or 5 digits, the result is almost always wrong [1]. The answer is roughly in the right order of magnitude, but it is incorrect. See the following example (screenshot obtained on 13/05/2024 with ChatGPT / GPT-3.5); note that ChatGPT can even propose several different wrong answers without flagging any problem.
The correct result is 1,162,238,184. GPT can thus provide an idea of the order of magnitude of the result, but gives an incorrect answer.
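For the record, any ordinary programming environment gets this right instantly. Here is a quick check in Python, using the operands of this multiplication (15436 and 75294, as quoted later in the post) and the two wrong values ChatGPT proposed:

```python
a, b = 15436, 75294
print(f"{a * b:,}")  # 1,162,238,184 -- the correct product

# The two answers proposed by ChatGPT (discussed below) are close but wrong:
for proposed in (1_161_222_984, 1_161_513_784):
    print(proposed, proposed == a * b)  # False in both cases
```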
What can we say about this?
First of all, this result is surprising. Language models are known for handling many linguistic tasks quite well (conversing, answering questions, translating, etc.), for helping with coding (writing computer code, commenting on programs, etc.) and, finally, for solving simple math problems. On this last point, we can see that we are still far from the mark. GPT runs on some of the most powerful computers in the world, machines capable of astonishing calculations, yet ChatGPT is simply incapable of performing a 5-digit multiplication that a pocket calculator handles perfectly well. How is that possible? At first sight, it seems a mystery… We are certainly far from the super intelligence being touted (although, to be honest, this particular weakness could probably be fixed quite easily: it would suffice for OpenAI to give ChatGPT access to a calculator so that it performs real calculations instead of making inferences by analogy when numbers are involved).
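To give a concrete idea of what such a calculator hook could look like, here is a minimal, purely hypothetical sketch in Python. It simply detects a plain multiplication in the question and computes it exactly, leaving everything else to the language model; the fallback here is just a placeholder, and none of this reflects OpenAI's actual implementation.

```python
import re

def try_calculator(question):
    """Look for a plain multiplication such as '15436 * 75294' and, if one is
    found, compute it exactly instead of letting the model guess."""
    match = re.search(r"(\d+)\s*[*x×]\s*(\d+)", question)
    if match:
        a, b = int(match.group(1)), int(match.group(2))
        return f"{a:,} * {b:,} = {a * b:,}"
    return None

def answer(question):
    exact = try_calculator(question)
    if exact is not None:
        return exact
    # Placeholder: in a real system, the question would be passed to the LLM here.
    return "(no arithmetic detected: left to the language model)"

print(answer("What is 15436 * 75294?"))
# -> 15,436 * 75,294 = 1,162,238,184
```

The point is simply that exact arithmetic is trivial for conventional code, so delegating it to a calculator is a design choice rather than a technical feat.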
The next question follows directly. Why doesn't it work? How can ChatGPT make this type of error when the model is supposed to be able to do simple math? Why doesn't ChatGPT have access to a simple calculation module? We can safely say that the reason lies in the approach adopted: the idea that almost everything can be solved through learning, that is, through induction from the observation of an almost infinite amount of data. However, it seems difficult to induce the rules of multiplication just from an endless series of examples [2]. This even seems a bit absurd and somewhat futile. Computer models do not have to copy the way humans do things, but still, the fact that nobody learns to multiply just by looking at billions of examples should at least give us pause. We learn multiplication because we are taught the rules and, at a pinch, if there were no educational system, we could imagine rediscovering those rules through logical reasoning. In short, humans try to solve problems (with a goal in mind); they do not merely infer rules by observing quantities of examples, with no other method.
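To make the contrast concrete, here is what "the rules of multiplication" look like once written down explicitly: a short sketch of schoolbook long multiplication in Python (an illustration of the taught procedure, not a claim about what GPT does internally).

```python
def long_multiply(a: int, b: int) -> int:
    """Schoolbook long multiplication: one partial product per digit of b,
    shifted according to that digit's position, then summed."""
    digits_of_b = [int(d) for d in str(b)][::-1]  # least-significant digit first
    total = 0
    for position, digit in enumerate(digits_of_b):
        partial = a * digit                 # one row of the schoolbook layout
        total += partial * 10 ** position   # shift the row, then accumulate
    return total

assert long_multiply(15436, 75294) == 1_162_238_184
```

A dozen lines suffice because the procedure is explicit; no examples are needed at all.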
It is, however, interesting to see that ChatGPT has acquired some knowledge about numbers. For example, in the case of multiplication, the last digits and the first digits are almost always correct (compare 1,161,222,984 and 1,161,513,784, results proposed by ChatGPT, and 1,162,238,184, the “real” result of the proposed multiplication). The order of magnitude is also reasonable (a bit more than a billion), if that means anything for a multiplication. ChatGPT has thus inferred some knowledge about numbers and perhaps multiplication, but it has not learned to multiply. Seeing billions of examples (of numbers and multiplications) was not enough.
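Incidentally, there is a simple reason why the final digit is easy to get right: the last digit of a product depends only on the last digits of its factors (arithmetic modulo 10). Since 15436 ends in 6, 75294 ends in 4, and 6 × 4 = 24, the product must end in 4, which both of ChatGPT's proposals indeed do. A quick check:

```python
print((15436 * 75294) % 10)  # 4: the last digit of the full product
print((6 * 4) % 10)          # 4: the same digit, from the last digits alone
```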
ChatGPT, even if we do not know the details of its architecture, remains a linguistic system. Tokenization, vectorization, transformers, and word embeddings are unrivalled for dealing with semantics: encoding the meaning of words, making analogies, comparing expressions, generating paraphrases (including translations, which can be seen, if we are not afraid of bold metaphors, as paraphrasing from one language to another), and so on. However, the flexibility needed for language is, in a certain way, orthogonal to what mathematics requires. The result of 15436 * 75294 is a single precise number, whereas there are probably a thousand different ways to translate a sentence, even one as simple as “Longtemps, je me suis couché de bonne heure” (or, to take another example, “friendship” has something to do with “affection”; the two words are semantically close, which vectorization allows us to encode, but this proximity is probably not of the same nature as the proximity between 4 and 5 in arithmetic terms).
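To illustrate what "semantically close" means in vector terms, here is a toy sketch: the three-dimensional vectors below are made up by hand purely for illustration (real embeddings are learned and have hundreds of dimensions), but the cosine similarity computation is the standard one.

```python
import math

def cosine(u, v):
    dot = sum(x * y for x, y in zip(u, v))
    norm = math.sqrt(sum(x * x for x in u)) * math.sqrt(sum(y * y for y in v))
    return dot / norm

# Toy, hand-made vectors purely for illustration:
friendship = [0.9, 0.8, 0.1]
affection  = [0.8, 0.9, 0.2]
invoice    = [0.1, 0.0, 0.9]

print(cosine(friendship, affection))  # close to 1: semantically close
print(cosine(friendship, invoice))    # close to 0: semantically distant
```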
In a more global sense, this means that GPT-type systems are excellent for handling language, and that they can be coupled with other architectures for image, video, or mathematical analysis, but no single architecture will solve all problems. We are still far from general intelligence (the famous AGI we keep hearing about), even if text, image, and video generation models have made impressive progress in recent years, and even if these models are “superhuman” in many ways (then again, a simple calculator was already “superhuman” in its own field of competence in the 1970s!).
As François Chollet said a few days ago on Twitter, a “skill” does not magically emerge from data.
There need to be enough examples for the system to identify regularities, and the model must also have an architecture that makes this learning possible (and, as we have seen, the architecture best suited to learning math is not necessarily the same as the one best suited to linguistic tasks, at least with current approaches).
What is a bit concerning is that the debate about what recent AI models can or could do often seems irrational, even within the scientific community. The very idea of “emergence” (and thus the terminology itself) has a magical, largely irrational aspect that does not help foster a calm (or, quite simply, scientific) discussion of the issue. The fact that a system like ChatGPT cannot correctly multiply numbers with 4 or 5 digits should nonetheless prompt us to consider the real capabilities and limitations of these systems.
1. On this question, see “Faith and Fate: Limits of Transformers on Compositionality” (N. Dziri et al., NeurIPS 2023).
2. Here again, see (N. Dziri et al., NeurIPS 2023) for more explanations.