Quantity Can Have a Quality of its Own for Language Models

The recent advances in language modeling with GPT-3 got me thinking: at what point does a quantitative change in a machine’s language-generation ability cross a boundary into a qualitative change in our assessment of its intelligence or creativity?

When a sand heap met Eubulides

How many grains of sand can you take from a sand heap before it’s no longer a heap? Or, more personally, how many hairs on your head can you afford to lose before you’re bald, or pounds before you’re thin? Maybe it’s fun to annoy someone with one of these questions, versions of the Sorites Paradox attributed to the Greek philosopher Eubulides, precisely because they arise when language is imprecise. They expose that words we commonly use without hesitation, like heap, bald, thin, or even intelligent and creative, words we think we know exactly what we mean by, actually have boundaries that turn out to be quite vague once you start to dig into them.

You can think of what’s going on here as a quantitative change, in grains of sand, hairs, or pounds, leading to a qualitative change that ascribes a property to something, like being a heap, bald, or thin.

Hegel developed an explicit relation between quality and quantity in Science of Logic:

[W]e have seen that the alterations of being in general are not only the transition of one magnitude into another, but a transition from quality into quantity and vice versa, a becoming-other which is an interruption of gradualness and the production of something qualitatively different from the reality which preceded it – Hegel

The idea was then taken further by Marx and Engels into the law of the passage of quantitative changes into qualitative changes, and finally arrived at the most familiar and widely misattributed form you’ve likely heard:

Quantity has a quality of its own – Various

While it’s not what any of them had in mind, it raises the question: at what point does a quantitative change in a machine’s language-generation ability cross a boundary into a qualitative change in our assessment of its intelligence or creativity?


Demystifying Machine Learning with Your Dog

When someone learns what I do, I often find myself explaining why, despite what they may have heard, AI is not becoming conscious and taking over the world. I usually try demystifying machine learning by making an analogy to something familiar that would never be considered capable of that kind of domination. So when a fellow dog owner and I had this conversation recently at the dog park, I used our dogs as the example, and although it’s an imperfect analogy, it seems to do the trick.

How a dog learns

If you want your dog to do something on command, you start by getting her to do it while saying something or showing her something at the same time, and then giving her a treat. After seeing this over and over, your dog starts to pick up on a pattern, forming an association between the auditory signal (e.g. a vocal command) or visual signal (e.g. a hand gesture) and the desired action.

So if you’re successful, when you say sit, she realizes it’s not just random noise, but that there’s a reliable correlation between taking your word as input, putting her butt on the ground as output, and getting a reward.
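To make the analogy a bit more concrete, here’s a toy sketch in Python of that kind of reward-driven association. It’s entirely hypothetical, nothing like what goes on in a real dog or a real model: the learner just keeps a running value for each (signal, action) pair and gradually comes to prefer the action that has earned treats in the past.

```python
import random
from collections import defaultdict

# Toy, hypothetical sketch of reward-driven association:
# keep a running value estimate for each (signal, action) pair
# and prefer the action that has earned treats in the past.

ACTIONS = ["lie_down", "spin", "sit"]
values = defaultdict(float)        # value estimate per (signal, action) pair
learning_rate, explore = 0.3, 0.2

def choose_action(signal):
    # Mostly pick the highest-valued action for this signal; sometimes explore.
    if random.random() < explore:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: values[(signal, a)])

def train(signal, desired_action, trials=200):
    for _ in range(trials):
        action = choose_action(signal)
        reward = 1.0 if action == desired_action else 0.0   # treat or no treat
        values[(signal, action)] += learning_rate * (reward - values[(signal, action)])

train("sit", "sit")
explore = 0.0                      # stop exploring once trained
print(choose_action("sit"))        # prints "sit"
```

The “learning” here is nothing more than bookkeeping over repeated examples and rewards.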

Ask any dog owner, however, and they’ll tell you it’s not over yet. She may have mastered sitting in your living room when you say the word just so, but she has no idea what to do when you try the same thing in the kitchen, or on the field outside, or when someone else says it.

She memorized a behavior under one very specific set of circumstances, but hasn’t learned that she needs to apply it in others that aren’t exactly the same. To her, sitting in the living room isn’t the same as sitting in the field, and she only knows to do it in places where it’s been taught. The input to her isn’t just the word, but also the conditions under which it was said. That’s why you need to repeat the same training under different conditions – places, times of day, emotional states, people, and ways of saying it. The more inputs (conditions under which you ask her to sit) you give her, the better she will learn to sit when the input isn’t exactly the same as what you’ve taught her before. She learns to generalize.
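To see why the variety matters, here’s another small, purely illustrative Python sketch: the learner scores a new situation by how many features (the word, the place) it shares with past examples, so only varied training reveals that the word, not the place, is what predicts the behavior.

```python
# Minimal, hypothetical sketch: the "input" is the command plus the conditions,
# and varied training is what reveals which part of the input actually matters.

def similarity(example_features, query_features):
    # How many features (word, place) two situations share.
    return len(set(example_features) & set(query_features))

def vote(training_data, query):
    # Total similarity of the query to the past examples of each behavior.
    totals = {}
    for features, behavior in training_data:
        totals[behavior] = totals.get(behavior, 0) + similarity(features, query)
    return totals

# Taught in only one place each: the place looks as predictive as the word.
narrow = [(("sit", "living_room"), "sit"),
          (("stay", "kitchen"), "stay")]

# Taught in many places: only the word reliably predicts the behavior.
varied = [(("sit", place), "sit") for place in ("living_room", "park", "garden")] + \
         [(("stay", place), "stay") for place in ("living_room", "park", "garden")]

query = ("sit", "kitchen")   # a familiar word in an unfamiliar place
print(vote(narrow, query))   # {'sit': 1, 'stay': 1}: a tie, she's confused
print(vote(varied, query))   # {'sit': 3, 'stay': 0}: the word wins out
```

Nothing in this sketch knows anything about dogs, words, or kitchens; it just counts overlapping features, which is roughly the spirit of the analogy.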


Introduction to Machine Opinings: Machine Learning and Philosophy

For as long as I can remember I’ve been interested in how we (as humans) know things. But more than that, I wanted to create things that know things, to build something that could learn, understand, and interact naturally with us. Of course, at first I didn’t know that what I was interested in was philosophy, specifically a branch called epistemology, or that creating intelligent machines was the aim of artificial intelligence (AI).

I remember, as an undergraduate, a philosophy professor saying something that stuck with me – great philosophers aren’t those who have the answers, but those who ask important questions. Philosophy aims to understand the world around us, why we do what we do, how we know what we know; it’s not about having the right answer so much as about continuing to ask questions.

Historically, most sciences start off as part of philosophy and then, once they become better understood, split off into distinct subjects. The hard, scientific part, where hypotheses are conjectured and empirically evaluated, usually becomes associated with the new science, and the squishier aspects remain in philosophy.

Computer Science and its AI subfield are no different. At first, computer scientists like Turing and von Neumann engaged with both the philosophical and the technical aspects of AI. But today, with the increasingly successful practical applications of machine learning, most AI practitioners (more accurately, machine learning practitioners) focus on how to apply it to solve specific problems. This has led to considerable advances in our scientific understanding, but without much consideration in the machine learning community for the societal implications, or for their relation to the vast heritage of philosophical ideas.
