The recent advances in language modeling with GPT-3 got me thinking: at what point does a quantitative change in a machine's language-generation ability cross a boundary into a qualitative change in our assessment of its intelligence or creativity?
When a sand heap met Eubulides
How many grains of sand can you take from a sand heap before it's no longer a heap? Or more personally, how many hairs on your head can you afford to lose before you're bald, or pounds before you're thin? Maybe it's fun to annoy someone with one of these questions, versions of the Sorites Paradox attributed to the Greek philosopher Eubulides, precisely because they arise when language is imprecise. They expose that words we commonly use without hesitation, like heap, bald, thin, or even intelligent and creative, words whose meanings we think we know exactly, actually have boundaries that turn out to be quite vague once you really start to dig into them.
You can think of what's going on here as a quantitative change, in grains of sand, hairs, or pounds, leading to a qualitative change: the ascription of a property to something, like being a heap, bald, or thin.
[W]e have seen that the alterations of being in general are not only the transition of one magnitude into another, but a transition from quality into quantity and vice versa, a becoming-other which is an interruption of gradualness and the production of something qualitatively different from the reality which preceded it – Hegel
The idea was then taken further by Marx and Engels into the law of passage of quantitative changes into qualitative changes, and finally arrived in the most familiar and widely misattributed form you’ve likely heard:
Quantity has a quality of its own – Various
While it's not what any of them had in mind, the question stands: at what point does a quantitative change in a machine's language-generation ability cross a boundary into a qualitative change in our assessment of its intelligence or creativity?