For as long as I can remember I’ve been interested in how we (as humans) know things. But more than that, I wanted to create things that know things: to build something that could learn, understand, and interact naturally with us. Of course, at first I didn’t know that what I was interested in was philosophy, specifically a branch called epistemology, or that creating intelligent machines was the aim of artificial intelligence (AI).
I remember an undergraduate philosophy professor saying something that stuck with me: great philosophers aren’t those who have the answers, but those who ask important questions. Philosophy aims to understand the world around us, why we do what we do, and how we know what we know; it’s not about having the right answer so much as about continuing to ask questions.
Historically, most sciences started off as part of philosophy and then, once better understood, split off into distinct subjects. The hard, scientific part, where hypotheses are conjectured and empirically evaluated, usually becomes the science, and the squishier aspects remain in philosophy.
Computer science and its AI subfield are no different. At first, computer scientists like Turing and von Neumann engaged with both the philosophical and the technical aspects of AI. But today, with the increasingly successful practical applications of machine learning, most AI practitioners, more accurately machine learning practitioners, focus on applying it to solve specific problems. This has led to considerable advances in our scientific understanding, but with little consideration in the machine learning community for the societal implications, or for their relation to the vast heritage of philosophical ideas.
However, many topics being discussed today in relation to AI have been mulled over in some form for millennia. For instance, data is one of the foundations of any AI, and what part of philosophy isn’t about data? Data represents information, and questions of how we acquire information, how we can trust it, and how we know whether it’s true are cornerstones of philosophy.
As machine learning has delivered tangible benefits over the last decade, it has gained recognition beyond the research community, sparked renewed public interest, and become a subject of popular discussion. Unfortunately, instead of bridging the technical and popular understanding, most of this discourse has increased the misunderstanding and hype around machine learning and AI.
This is both frustrating and expected. Scientific publications in machine learning usually focus on a rather small technical improvement to existing methods on a narrowly defined problem. And while there’s a tremendous amount of content discussing challenges and offering advice on how to practice machine learning, there are still few places where those discussions and ideas get translated for a wider audience in a meaningful way.
In the transition to popular news, most of the caveats are lost. It’s much more captivating to lead with bots inventing a new language, or taking over yet another job. Couple that with the incentive to promote and market advancements, one especially strong for companies but one that well-meaning researchers fall into as well, and it becomes impossible for most people to distinguish actual achievements, advancements in knowledge, from applications or slight tweaks of what we already know. This makes it seem like new advancements are constantly being made, whereas in reality we slowly expand a small set of capabilities and apply them to different data.
This misrepresentation has a perceptible negative effect, leading to unrealistic expectations, followed by skepticism and ultimately rejection by the very people who could benefit. I’ve experienced this firsthand in dozens of introductory presentations on machine learning to lay audiences and in countless meetings with consumers of the machine-learning-enabled features I work on. The most gratifying moments are when a 70-year-old lawyer comes up afterwards and says, “I get what that news service is doing, they’re just clustering the text vectors of documents similar to what I’ve read before.” Yes! Or when someone realizes that machine learning just learns the importance of some data by counting patterns, and it’s not magic. Yes!
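The lawyer’s intuition, that documents can be represented as vectors and compared for similarity, can be sketched in a few lines. This is a toy illustration only, not any particular news service’s method: the document names and texts below are invented, the vectors are raw word counts rather than the richer representations real systems use, and the “reader profile” is a single made-up sentence.

```python
from collections import Counter
import math

def vectorize(text):
    # Bag-of-words vector: each word mapped to its count in the text.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two word-count vectors (Counter lookups
    # return 0 for words absent from a vector).
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Hypothetical candidate documents a service might recommend from.
docs = {
    "court_ruling": "the court issued a ruling on the appeal",
    "legal_brief":  "the appeal brief was filed with the court",
    "recipe":       "whisk the eggs and fold in the flour",
}

# A stand-in for "what I've read before".
profile = vectorize("the court ruling on the appeal")

scores = {name: cosine(profile, vectorize(text)) for name, text in docs.items()}
best = max(scores, key=scores.get)  # "court_ruling" scores highest here
```

Counting shared words is crude, but the shape of the computation, turn text into numbers, then measure distances between them, is exactly the non-magical pattern matching the anecdote describes.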
In Machine Opinings I hope, first, to examine what philosophy can tell us about how to pursue AI and, second, to connect the machine learning community with those who couldn’t care less about the details but are increasingly surrounded by the technology. I’ll specifically endeavor to take philosophical ideas and make admittedly sometimes tenuous connections to machine learning, in an effort to examine machine learning concepts, how we use them, and how we engage with AI in the real world. In other words, I hope to raise lots of questions, and maybe offer a few answers 🙂