A Little Envy Can Be a Good Thing (in Humans and Machines)

When most of us scroll through social media and inevitably compare ourselves to those around us, we feel crappier, like we’re missing out or falling behind in our personal lives or work. Not coincidentally, there’s also a growing understanding of well-being and happiness as subjective and adaptive: your happiness largely depends on your expectations. And your expectations adapt, not only to your own conditions, but to the conditions of those around you.

You probably thought your drawing in 5th grade was just fine until you saw Linda’s. That’s also why people don’t necessarily get happier as they get wealthier – the comparisons and expectations keep changing – first you want the house, then the yacht, the island, a political office, then maybe a planet (close by). It’s easy to imagine, then, that being exposed to so many other people’s lives exposes us to all sorts of conditions that appear better than our own in some way, setting our expectations higher and increasing the likelihood of unhappiness.

Social Comparison

There’s a lot understood about what actually matters for being happy, both with your life and in your life – social connections, time meaningfully spent, being healthy, appreciating what you have – and I completely agree with all of it. But I want to focus on the role of jealousy and envy, which are often derided.

While social media has undoubtedly exacerbated social comparison and envy, they’ve been around for a long time:

“Whoever sang or danced best, whoever was the handsomest, the strongest, the most dexterous, or the most eloquent, came to be of most consideration; and this was the first step towards inequality…From these first distinctions arose … envy: and the fermentation caused by these new leavens ended by producing combinations fatal to innocence and happiness.” Rousseau, On the Origin of the Inequality of Mankind

The natural response is the Stoic one, to limit your exposure and stop comparing yourself to others:

“How much time he gains who does not look to see what his neighbor says or does or thinks, but only at what he does himself.” – Marcus Aurelius  

But while removing yourself from the barrage of updates and comparisons is essential for focusing on improving yourself, it’s likely not enough. One method for well-being is to identify something you want to improve, focus on it relentlessly, and compare yourself to your previous self – not to other people who have what you want. Even imperceptibly small daily steps compound over time to make a big difference.

But while that gives you a way to improve, it’s less clear what you should focus on.

Read More » A Little Envy Can Be a Good Thing (in Humans and Machines)

Quantity Can Have a Quality of its Own for Language Models

The recent advances in language modeling with GPT-3 got me thinking: at what point does a quantitative change in a machine’s language generation ability cross a boundary into a qualitative change in our assessment of its intelligence or creativity?

When a sand heap met Eubulides

How many grains of sand can you take from a sand heap before it’s no longer a heap? Or more personally, how many hairs on your head can you afford to lose before you’re bald, or how many pounds before you’re thin? Maybe it’s fun to annoy someone by asking one of these Sorites paradoxes, attributed to the Greek philosopher Eubulides, precisely because they arise when language is imprecise. They expose that words we commonly use without hesitation, like heap, bald, thin, or even intelligent and creative – words we think we know exactly what we mean by – actually have boundaries that turn out to be quite vague when you really start to dig into them.

You can think of what’s going on here as a quantitative change – in grains of sand, hairs, or pounds – leading to a qualitative change that ascribes a property to something, like being a heap, bald, or thin.
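To see how sharply that boundary gets forced, here’s a toy sketch of my own (the 1,000-grain cutoff is completely arbitrary, which is exactly the point): the moment you write the vague word heap as code, you have to commit to a precise threshold that nothing in the concept justifies, and a single grain flips the label.

    # A toy illustration of the Sorites problem: nothing about "heap" says the
    # cutoff should be 1000 grains rather than 999 or 1001.
    def is_heap(grains: int) -> bool:
        return grains >= 1000

    grains = 1000
    while is_heap(grains):
        grains -= 1          # remove one grain at a time

    # Removing a single grain flipped the label from "heap" to "not a heap":
    # a tiny quantitative change produced a suspiciously sharp qualitative one.
    print(grains)            # 999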

Hegel developed an explicit relation between quality and quantity in Science of Logic:

[W]e have seen that the alterations of being in general are not only the transition of one magnitude into another, but a transition from quality into quantity and vice versa, a becoming-other which is an interruption of gradualness and the production of something qualitatively different from the reality which preceded it – Hegel

The idea was then taken further by Marx and Engels into the law of passage of quantitative changes into qualitative changes, and finally arrived in the most familiar and widely misattributed form you’ve likely heard:

“Quantity has a quality of its own” – Various

While it’s not what any of them had in mind, at what point does a quantitative change in a machine’s language generation ability cross a boundary into a qualitative change in our assessment of its intelligence or creativity?

Read More » Quantity Can Have a Quality of its Own for Language Models

Demystifying Machine Learning with Your Dog

When someone learns what I do, I often find myself explaining why, despite what they may have heard, AI is not becoming conscious and taking over the world. I usually try demystifying machine learning by making an analogy to something familiar that would never be considered capable of that kind of domination. So when a fellow dog owner and I had this conversation recently at the dog park, I used our dogs as the example, and although it’s an imperfect analogy, it seems to do the trick.

How a dog learns

If you want your dog to do something on command, you start by getting her to do it, then saying something or showing her something at the same time, and giving her a treat. After experiencing this over and over, your dog starts picking up on a pattern, forming an association between the auditory signal (e.g. a vocal command) or visual signal (e.g. a hand gesture) and the desired action.

So if you’re successful, when you say sit she realizes it’s not just random noise: there’s a significant correlation between taking your word as input, putting her butt on the ground as output, and getting a reward.

Ask any dog owner, however, and they’ll tell you it’s not over yet. She may have mastered sitting in your living room when you say the word just so, but she has no idea what to do when you try the same thing in the kitchen, or on the field outside, or when someone else says it.

She memorized a behavior under one very specific set of circumstances, but hasn’t learned that she needs to apply it in others that aren’t exactly the same. To her, sitting in the living room isn’t the same as sitting in the field, and she only knows to do it in places where it’s been taught. The input to her isn’t just the word, but also the conditions under which it was said. That’s why you need to repeat the same training under different conditions – places, times of day, emotional states, people, and ways of saying it. The more inputs (conditions under which you ask her to sit) you give her, the better she learns to sit when the input isn’t exactly the same as what you’ve taught her before. She learns to generalize.
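In machine learning terms, you can sketch the same idea as a toy classifier whose input is the command word plus the conditions it was said under, and whose output is the behavior that earned a treat. This is just an illustrative sketch with made-up features (using scikit-learn), not a model of real dog training:

    # Toy sketch: the "dog" is a classifier; its input is the command word plus
    # the conditions it was said under, its output is the rewarded behavior.
    from sklearn.feature_extraction import DictVectorizer
    from sklearn.tree import DecisionTreeClassifier

    # Training under varied conditions, not just one living-room setup.
    conditions = [
        {"word": "sit",  "place": "living_room", "speaker": "owner"},
        {"word": "sit",  "place": "kitchen",     "speaker": "owner"},
        {"word": "sit",  "place": "field",       "speaker": "friend"},
        {"word": "stay", "place": "living_room", "speaker": "owner"},
        {"word": "stay", "place": "field",       "speaker": "owner"},
    ]
    rewarded_behavior = ["sit", "sit", "sit", "stay", "stay"]

    vectorizer = DictVectorizer()              # one-hot encodes word/place/speaker
    X = vectorizer.fit_transform(conditions)
    model = DecisionTreeClassifier(random_state=0).fit(X, rewarded_behavior)

    # A combination she never saw in training: "sit", in the kitchen, from a friend.
    new_situation = vectorizer.transform(
        [{"word": "sit", "place": "kitchen", "speaker": "friend"}]
    )
    print(model.predict(new_situation))        # ['sit'] -- she generalized

Trained only on living-room examples from one person, the model would have no way to tell that the place and the speaker don’t matter; it’s the variety of conditions that lets it latch onto the word itself.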

Read More » Demystifying Machine Learning with Your Dog

Advice for a Bad Career in Data Science

You nailed the interview and landed the hottest job in America; go you, now you’re a data scientist! As you come into work every day to incorporate data science into product development, or apply machine learning to solve business problems, it’s blue skies everywhere. There are so many possibilities it’s hard to know where to begin and how to spend your time.

Well, here are a few suggestions, broken down into the stages of a typical data science project, that in my experience are practically guaranteed to work every time.

Disclaimer: No data scientists were intentionally harmed. Any resemblance to your coworkers, past or present, or the project you’re working on is purely coincidental, although expected.

Picking your problem

Start working on whatever problem you want.

  • Try to pick the least defined problem, hopefully one you vaguely understand. That way you’ll have plenty of room to explore, and you don’t have to worry about running out of things to try.
  • If someone comes to you with a problem, use it as an opportunity to reframe it into something you want to work on.
  • Don’t bother wasting your time asking anyone what would be useful or valuable for them; you know better anyway! They’re probably not going to have the answers. And really, the fewer people that know about your work the better – you don’t want them interrupting you.
  • You shouldn’t have to explain yourself either; the value of data science is self-evident – everybody wants it! But if someone does ask you what you’re working on, a good rule of thumb is that the more you have to convince them it’s useful, the more you’re on the right track and can rub it in their face later.
  • The business will eventually find value in whatever you do; don’t worry about when or how yet.


Start working on any and every problem someone comes to you with.

  • Best not to ask too many questions; they’re the expert, after all. They should know exactly what they need, and they’ve already identified the right question to ask and the true causes of the problem. Anytime someone comes to you, it means there’s a data science solution worth building. You don’t want them thinking you’re not up to it. They may start doubting your commitment.
  • You have to show them that with data science you can do anything. They’ve likely read the news recently, so reassure them AI can do whatever they want, probably more and better and faster and stronger and 24/7/365 with a smile.

Read More » Advice for a Bad Career in Data Science

Introduction to Machine Opinings: Machine Learning and Philosophy

For as long as I can remember, I’ve been interested in how we (as humans) know things. But more than that, I wanted to create things that know things, to build something that could learn, understand, and interact naturally with us. Of course, at first I didn’t know that what I was interested in was philosophy, specifically a branch called epistemology, and that creating intelligent machines was the aim of artificial intelligence (AI).

I remember, as an undergraduate, a philosophy professor saying something that stuck with me: great philosophers aren’t those who have the answers, but those who ask important questions. Philosophy aims to understand the world around us, why we do what we do, how we know what we know; it’s not about having the right answer so much as continuing to ask questions.

Historically, most sciences start off as part of philosophy, and once they become better understood they split off into distinct subjects. The hard, scientific part, where hypotheses are conjectured and empirically evaluated, usually becomes associated with the science, and the squishier aspects remain in philosophy.

Computer Science and its AI subfield are no different. At first, computer scientists like Turing and von Neumann engaged with both the philosophical and technical aspects of AI. But today, with the increasingly successful practical applications of machine learning, most AI practitioners (more accurately, machine learning practitioners) focus on how to apply it to solve specific problems. This has led to considerable advances in our scientific understanding, but without much consideration in the machine learning community for the societal implications, or for their relation to the vast heritage of philosophical ideas.

Read More » Introduction to Machine Opinings: Machine Learning and Philosophy

An Experimental Development Process for Making an Impact with Machine Learning

Originally published on Towards Data Science.

It’s really hard to build product features and internal operations tools that use machine learning to provide tangible user value. Not just because it’s hard to work with data (it is), or because there are many frivolous uses of AI that are neat but aren’t that useful (there are), but because it’s almost never the case that you’ll have a clearly defined and circumscribed problem handed to you, and there are many unknowns outside of the purely technical aspects that could derail your project at any point. I’ve seen a lot of great articles on the technology side, providing code and advice on how to work with data and build a machine learning model, and on the people side, covering how to hire engineers and scientists, but that’s only one part. The other part is how to steer the technology and people through the hurdles of getting this kind of work to have an impact.

Fortunately, I’ve failed many times, and for many reasons, to deploy AI that provides business and user value, and I’ve watched friends and colleagues from startups to Fortune 500 data science and research groups struggle with the same. Almost invariably the technology could have been valuable, and the people were competent, but what made the difference was how people were working together and what technology they were working on. In other words, I trust you can hire good, technically able people who can apply their tools well, but unless it’s the right people building the right things at the right time for the right business problem, it’s not going to matter. (Yeah, no duh, right? But it’s harder to do than you may think.)

In this post I’ve tried to consolidate what I’ve learned, reference existing articles I’ve found useful (apologies to the references I’m sure to have missed), and add some color on why building experimental products is hard, how it’s different from other engineering, what your process could look like, and where you’re likely to encounter failure points.

Read More » An Experimental Development Process for Making an Impact with Machine Learning