Demystifying machine learning with your dog

When someone learns what I do, I often find myself explaining why, despite what they may have heard, AI is not becoming conscious and taking over the world. I usually try demystifying machine learning by making an analogy to something familiar that would never be considered capable of that kind of domination. So when a fellow dog owner and I had this conversation recently at the dog park, I used our dogs as the example, and although it’s an imperfect analogy, it seems to do the trick.

How a dog learns
If you want your dog to do something on command, you start by getting her to do it, saying something or showing her something at the same time, and giving her a treat. After going through this over and over, your dog starts to pick up the pattern, forming an association between the auditory signal (e.g. a vocal command) or visual signal (e.g. a hand gesture) and the desired action.

So if you’re successful, when you say “sit” she realizes it’s not just random noise: there’s a reliable correlation between taking your word as input, putting her butt on the ground as output, and getting a reward.

Ask any dog owner, however, and they’ll tell you it’s not over yet. She may have mastered sitting in your living room when you say the word just so, but she has no idea what to do when you try the same thing in the kitchen, or on the field outside, or when someone else says it.

She memorized a behavior under one very specific set of circumstances, but hasn’t learned to apply it in others that aren’t exactly the same. To her, sitting in the living room isn’t the same as sitting in the field, and she only knows to do it in the places where it’s been taught. The input to her isn’t just the word, but also the conditions under which it was said. That’s why you need to repeat the same training under different conditions – places, times of day, emotional states, people, and ways of saying it. The more inputs (conditions under which you ask her to sit) you give her, the better she will learn to sit when the input isn’t exactly the same as what you’ve taught her before. She learns to generalize.
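
To make the analogy a bit more concrete, here’s a minimal sketch in Python of the same idea. The numbers are entirely made up: each example stands in for what the dog perceives (the command plus everything about the room, the noise, the speaker), and a 1-nearest-neighbour classifier plays the role of pure memorization. This isn’t a real model of dog training; it only illustrates that the same memorizer, trained under one condition, transfers worse to new conditions than when it’s trained under many.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
DIM = 5             # dimensionality of the "perceived signal"
N_PER_CONTEXT = 30  # examples of each command per context

def random_context():
    # Each context (room, background noise, speaker...) shifts the whole signal
    # by a random offset that is large compared to the command itself.
    v = rng.normal(size=DIM)
    return 3.0 * v / np.linalg.norm(v)

def context_examples(offset):
    # "sit" (label 1) and "stay" (label 0) as perceived in one context.
    sit = offset + np.eye(DIM)[0]
    stay = offset - np.eye(DIM)[0]
    X = np.vstack([sit + 0.2 * rng.normal(size=(N_PER_CONTEXT, DIM)),
                   stay + 0.2 * rng.normal(size=(N_PER_CONTEXT, DIM))])
    y = np.array([1] * N_PER_CONTEXT + [0] * N_PER_CONTEXT)
    return X, y

def dataset(n_contexts):
    parts = [context_examples(random_context()) for _ in range(n_contexts)]
    return np.vstack([X for X, _ in parts]), np.concatenate([y for _, y in parts])

narrow_scores, broad_scores = [], []
for _ in range(20):                      # average over several random draws
    narrow = dataset(1)                  # taught only in the living room
    broad = dataset(12)                  # taught in many places, by many people
    test = dataset(8)                    # conditions never seen during training
    memorizer = KNeighborsClassifier(n_neighbors=1)  # literally memorizes examples
    narrow_scores.append(memorizer.fit(*narrow).score(*test))
    broad_scores.append(memorizer.fit(*broad).score(*test))

print("accuracy on unseen conditions, trained in one condition:   %.2f" % np.mean(narrow_scores))
print("accuracy on unseen conditions, trained in many conditions: %.2f" % np.mean(broad_scores))
```

In this toy setup the version trained under varied conditions typically scores noticeably higher on the unseen conditions: it has seen enough variation that some remembered example is usually close to whatever new situation it meets, which is a rough analogue of learning to generalize.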


Advice for a Bad Career in Data Science

You nailed the interview and landed the hottest job in America; go you, now you’re a data scientist! As you come into work every day to incorporate data science into product development, or apply machine learning to solve business problems, it’s blue skies everywhere. There are so many possibilities it’s hard to know where to begin and how to spend your time.

Well, here are a few suggestions, broken down by the stages of a typical data science project, that in my experience are practically guaranteed to work every time.

Disclaimer: No data scientists were intentionally harmed. Any resemblance to your coworkers, past or present, or the project you’re working on is purely coincidental, although expected.

Picking your problem
Start working on whatever problem you want.
— Try to pick the least-defined problem, ideally one you only vaguely understand. That way you’ll have plenty of room to explore, and you won’t have to worry about running out of things to try.
— If someone comes to you with a problem, use it as an opportunity to reframe it into something you want to work on.
— Don’t waste your time asking anyone what would be useful or valuable to them; you know better anyway! They’re probably not going to have the answers. And really, the fewer people who know about your work the better; you don’t want them interrupting you.
— You shouldn’t have to explain yourself either; the value of data science is self-evident, and everybody wants it! But if someone does ask what you’re working on, a good rule of thumb is that the harder you have to convince them it’s useful, the more you’re on the right track, and the more satisfying it will be to rub it in their face later.
— The business will eventually find value in whatever you do; don’t worry about when or how yet.

Start working on any and every problem someone comes to you with.
— Best not to ask too many questions; they’re the expert, after all. They should know exactly what they need, and they’ve already identified the right question to ask and the true causes of the problem. Anytime someone comes to you, it means there’s a data science solution worth building. You don’t want them thinking you’re not up to it. They may start doubting your commitment.
— You have to show them that with data science you can do anything. They’ve likely read the news recently, so reassure them that AI can do whatever they want, probably more, and better, and faster, and stronger, 24/7/365, with a smile.


Introduction to Machine Opinings: Machine Learning and Philosophy

For as long as I can remember, I’ve been interested in how we (as humans) know things. But more than that, I wanted to create things that know things, to build something that could learn, understand, and interact naturally with us. Of course, at first I didn’t know that what I was interested in was philosophy, specifically a branch called epistemology, and that creating intelligent machines was the aim of artificial intelligence (AI).

I remember, as an undergraduate, a philosophy professor saying something that stuck with me – great philosophers aren’t those who have the answers, but those who ask the important questions. Philosophy aims to understand the world around us, why we do what we do, how we know what we know; it’s not about having the right answer so much as continuing to ask questions.

Historically, most sciences started off as part of philosophy and then, once they became better understood, split off into distinct subjects. The hard, scientific part, where hypotheses are conjectured and empirically evaluated, usually becomes associated with the new science, and the squishier aspects remain in philosophy.

Computer Science and its AI subfield are no different. At first, computer scientists like Turing and von Neumann engaged with both the philosophical and the technical aspects of AI. But today, with the increasingly successful practical applications of machine learning, most AI practitioners (more accurately, machine learning practitioners) focus on how to apply it to solve specific problems. This has led to considerable advances in our scientific understanding, but without much consideration in the machine learning community for the broader societal implications, or for their relation to the vast heritage of philosophical ideas.
