For as long as I can remember, I’ve been interested in how we (as humans) know things. But more than that, I wanted to create things that know things: to build something that could learn, understand, and interact naturally with us. Of course, at first I didn’t know that what I was interested in was philosophy, specifically a branch called epistemology, or that creating intelligent machines was the aim of artificial intelligence (AI).
I remember, as an undergraduate, a philosophy professor saying something that stuck with me – great philosophers aren’t those who have the answers, but those who ask important questions. Philosophy aims to understand the world around us, why we do what we do, how we know what we know; it’s not about having the right answer so much as about continuing to ask questions.
Historically most sciences start off as part of philosophy, and then once they become better understood split off into distinct subjects. The hard, scientific part, where hypotheses are conjectured and empirically evaluated, usually becomes associated with the science, and the squishier aspects remain in philosophy.
Computer science and its AI subfield are no different. At first, computer scientists like Turing and Von Neumann engaged with both the philosophical and the technical aspects of AI. But today, with the increasingly successful practical applications of machine learning, most AI practitioners – more accurately, machine learning practitioners – focus on applying it to solve specific problems. This has led to considerable advances in our scientific understanding, but with little consideration in the machine learning community of the societal implications, or of their relation to the vast heritage of philosophical ideas.