Philosophy of AI

A Short Introduction to Philosophy of AI

People often ask me: “what is Philosophy of AI?”

It’s true: if you say “philosophy of mind”, or “philosophy of language” or “aesthetics”, people have a vague idea of what you mean. These are old, well established fields of study. In contrast, “philosophy of AI” may sound like a trendy combination that was just made up recently.

Figure 1: A neon pixelart version of the School of Athens

However, the philosophical implications of thinking or computing machines have been considered throughout history. Modern AI (software running on electronic hardware) emerged around the mid-20th century and has involved philosophical questions from the start. Even the famous Turing test was published by a computer scientist in a philosophy journal. Philosophy of AI is now becoming a classic and central topic, one that belongs in any serious philosophy curriculum.

So how do I use the term? What do I mean by it?

I see it as the very broad field of study that encompasses any philosophical question raised by artificial intelligence. Personally, I’m fond of artificial consciousness and of how technology is reshaping reality. These interests cut across the three classic branches of philosophy: epistemology, ethics, and metaphysics.

For each of them, we will take a brief tour of the main AI-related questions. I will not go into detailed analysis here, as that would take us too far, but I have included links to relevant resources for readers who want more context or discussion.

Epistemology of AI

Epistemology is the branch of philosophy which is concerned with knowledge. As such, it is closely tied to science and its evolution. How can we know that something is true? How do we build scientific theories? What is a proof? Artificial intelligence reshapes these questions at a deep level.

First is the question of the limits of computation and algorithmic problem-solving. This one taps into fundamental topics in theoretical computer science, such as computability theory (how can we compute?) and complexity theory (how hard is a computation?).

One famous controversy was stirred by Roger Penrose in his landmark The Emperor’s New Mind, where he argued that some Gödelian mental operations are irremediably out of reach of computer programs (more on Wikipedia). In a similar vein, many critics of AI invoke the Church-Turing thesis as evidence of the limits of computable knowledge (see the SEP entry). Whether some areas of knowledge are accessible only through non-computational means ties this debate back to metaphysics.
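The kind of limit these debates appeal to can be illustrated with the classic halting-problem diagonalization. The sketch below is purely illustrative (the function names are my own invention, not from any library): given any claimed halting decider, one can construct a program that the decider necessarily misjudges.

```python
def make_paradox(halts):
    """Given any claimed halting decider `halts(program)`,
    build a program that the decider must misjudge."""
    def paradox():
        if halts(paradox):
            while True:       # predicted to halt -> loop forever
                pass
        return "halted"       # predicted to loop -> halt immediately
    return paradox

# A (necessarily wrong) decider that claims every program loops forever.
def claims_loops(program):
    return False

p = make_paradox(claims_loops)
print(claims_loops(p))  # False: the decider predicts p loops forever...
print(p())              # ...yet p halts, refuting the prediction.
```

Whatever answer a candidate decider gives about the constructed program, the program does the opposite, so no algorithm can decide halting in general; this is the formal backdrop to claims about the limits of computable knowledge.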

Besides theoretical insights, building AI continuously helps us refine our empirical understanding of how we acquire and process knowledge. A classic theme is the question of representation. During the “GOFAI” era, models of the world were explicitly programmed into the AI agent. This approach proved too rigid and cumbersome, so ingenious engineers started implementing algorithms that learn their internal representations dynamically from direct interaction with the world, by integrating feedback loops between the agent and its environment. This experimental field had profound implications which nourished the embodied view of mind, creating a lasting shift in cognitive science (read the post on Varela). Today, progress in AI and neuroscience is even more intertwined, and the two fields continue to enlighten our scientific comprehension of knowledge.

A further set of epistemological questions raised by AI relates to how the production of scientific knowledge itself is being reshaped by the use of computer programs. Algorithms can build mathematical proofs, discover new molecules, or support the analysis of massive scientific datasets. As scientific production accelerates, AI seems to offer the only way to keep pace, but how can we ensure this transition is a beneficial one?

Ethics of AI

Ethics is the branch of philosophy that is concerned with morality. It questions the difference between good and bad, and the principles guiding virtuous action. Over the past decade, the ethics of AI has become a central societal topic. How can we shape AI for the good, and what does that even mean?

One aspect of AI ethics is strongly practical. We now have a whole arsenal of best practices, benchmarks, tests, compliance frameworks, risk-assessment tools, and so on. From computational techniques to managerial organisation, these provide concrete means of building safer, more trustworthy systems.

By seeking to build ethical systems, we also learn more about our own morality. For instance, to study what rules an autonomous vehicle should follow, researchers surveyed how humans react to different variations of the classic trolley problem. Their experiment, the “moral machine”, revealed cultural clusters and global preferences: apparently, when crossing a road, it is universally much safer to be a stroller than a cat.

Another aspect is more theoretical and considers the extent to which artificial systems can or should be moral agents. In a classic computer ethics paper, James Moor relates full ethical agency to “having consciousness, intentionality and free will”. Recent studies have argued that AI systems may become conscious, or at least robustly agentic, in the near future. This immediately raises questions about their rights and responsibilities. The answers to such questions have strong ties to metaphysical debates.

Metaphysics of AI

Metaphysics is the branch of philosophy which is concerned with the study of reality. It’s the most speculative and theoretical area of philosophy. It generates some of the most difficult problems and eccentric lines of argumentation.

My favourite set of questions in this area relates to consciousness. What is phenomenal experience, and can computers have it? In general, we tend to compare the mental properties of artificial systems to those of humans: can they act intentionally, understand semantic meaning, or be genuinely creative? Famous thought experiments illustrate these puzzles: Leibniz’s Mill, Mary’s Room, the Chinese Room and Phenomenal Zombies, to name a few. Such debates can get heated very quickly, as important moral implications are at stake. Some will fervently defend a human exception, while others seem resolved to remove humans from the centre of our understanding of reality.

Another, closely related set of questions addresses the fundamental nature of our universe. Is reality discrete or continuous, and does it matter for AI? Are objects like rocks implementing computations, and does it make them intelligent? Is physics constraining computation or is the world governed by the same computational rules we implement in AI? Are we living in a simulation, like in the Matrix, and is AI creating such simulations?

Conclusion

The fundamental questions raised by AI are not confined to a single area of philosophy. On the contrary, they reach across all layers of inquiry. In this introductory overview, we have only scratched the surface, through the lens of the three classic branches and some of my favourite topics, but the ramifications run much deeper: AI creates ripples even in the most specialised corners of philosophy. In future posts, we will take a closer look at some of the classic controversies mentioned here.

Artificial systems shed new light on old debates: they provide new tools to accelerate scientific analysis, new formal models against which to compare the human mind, and new experimental playgrounds for testing ideas. Again and again, they push the boundaries of what we understand about the world and our place in it.