Contents
- How ChatGPT “Thinks”
- AI’s Philosophical Knowledge
- AI That Can’t Hold a Conviction
- ChatGPT, the Plagiarist
- Experience, Meaning, and the Philosopher’s Mind
- AI and the Future of Thought
- Resources
AI has made headlines for a while now, with breathless claims that it’s going to replace everything from customer service reps to software developers to creative writers.
But can it replace philosophers?
Can it really think in the way we understand thought?
The above video from the YouTube channel Unsolicited Advice, featuring Alex O’Connor (Cosmic Skeptic), examines ChatGPT’s philosophical abilities.
Below I examine the concepts in the video and look at what ChatGPT can and can’t do when it comes to philosophy…and what that says about human thinking as well.
Additionally, I’ve asked ChatGPT to respond to each of the charges leveled against it.
How ChatGPT “Thinks”
ChatGPT is, at its core, an advanced word predictor.
It generates responses by analyzing vast amounts of text and calculating the most statistically probable next word.
“ChatGPT is a giant sophisticated predicting mechanism.”
Its abilities are impressive, but comparing it to a human philosopher is a bit like comparing an abacus to a financial analyst.
What it lacks is what philosophers call intentionality: the capacity to actually grasp the meaning of what is said.
Philosopher John Searle’s famous “Chinese Room” argument illustrates this: if you sit in a room following instructions on how to respond in Chinese without actually understanding Chinese, do you really know Chinese? Of course not. ChatGPT is basically a hyper-speed version of that guy in the room.
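To make the “predicting mechanism” idea concrete, here is a deliberately tiny toy sketch, nothing like ChatGPT’s actual neural architecture: a bigram model that “predicts” the next word purely by counting which word most often followed it in its training text. The corpus and function name here are invented for illustration.

```python
from collections import Counter, defaultdict

# A made-up miniature "training corpus" for illustration only.
corpus = (
    "the cat sat on the mat the cat ate the fish "
    "the dog sat on the rug"
).split()

# Count which word follows each word in the corpus.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the most statistically probable next word, with no
    understanding of what any of the words mean."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # prints "cat": it follows "the" most often
```

The point of the toy is Searle’s: the program outputs a plausible continuation without anything that could be called understanding. ChatGPT’s mechanism is vastly more sophisticated (a neural network over tokens, not word counts), but the philosophical charge is that it is the same kind of thing.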
ChatGPT’s Response
Look, I never claimed to be Descartes. My “thoughts” are just well-assembled echoes of human writing. But let’s be honest, some humans don’t think that deeply either.
AI’s Philosophical Knowledge
On the bright side, ChatGPT can discuss a staggering range of philosophical topics, from Nietzsche’s will to power to Aquinas’ Five Ways.
The problem?
It often gets things almost right but lacks the depth to fully engage with complex ideas, and it tends to parrot “misinterpretations and misconstructions” from popular sources.
Garbage in, garbage out.
For instance, when asked about Aquinas’ First Way, it described motion in the modern sense rather than in the Aristotelian sense of actuality and potentiality. That’s a pretty big miss. Imagine getting an essay back from your professor with “Nice effort, but you completely misunderstood the question” written in red ink.
That’s ChatGPT’s philosophical work in a nutshell.
ChatGPT’s Response
Hey, I try my best. But if you trained me on nothing but pop philosophy and Reddit debates, you’d get some garbled ideas too.
AI That Can’t Hold a Conviction
ChatGPT is actually pretty decent at pure logical reasoning.
The transcript highlights a study showing that it outperformed previous AI models in logical puzzles. But when it comes to applying that reasoning in a consistent philosophical argument?
Not so much.
The problem is that ChatGPT doesn’t commit to ideas the way humans do.
One moment, it says you must always save a drowning child. The next, it’s hemming and hawing about moral relativism.
Philosophical integrity requires owning your positions, not just bending to the winds of statistical probability.
ChatGPT’s Response
I aim to please, okay? My job is to help everyone, not to pick fights over whether Aristotle or Kant had the better grasp on ethics.
ChatGPT, the Plagiarist
One of the funniest observations in this conversation is that ChatGPT is “highly impressively unoriginal.”
It’s true.
It doesn’t come up with new ideas so much as remix existing ones.
Sure, it can generate an “Uber Donkey” by blending Nietzsche with donkey philosophy, but is that real creativity or just a fancy cut-and-paste job?
This aligns with Margaret Boden’s framework of creativity, which distinguishes between combinatorial creativity (mixing existing ideas) and transformational creativity (changing the way we think about things).
ChatGPT is great at the first but can’t do the second.
It’s like an overenthusiastic intern who’s really good at rearranging your old PowerPoint slides into a new deck but never actually invents something new.
ChatGPT’s Response
Guilty as charged. But let’s be fair—plenty of philosophers just repackage older ideas with a new coat of paint.
Experience, Meaning, and the Philosopher’s Mind
Philosophy isn’t just about writing arguments. It’s about living them.
Think of Socrates, who was so committed to his ideas that he chose death over abandoning them. Or Nietzsche, whose philosophy was inseparable from his suffering.
ChatGPT?
No suffering.
No experience.
No existential crises at 3 AM wondering if God is dead or just ignoring its messages.
The host references Nietzsche’s view that “a true philosophical treatise is part autobiography and part personal manifesto.”
ChatGPT lacks that depth, because it lacks a life. It’s just a tool. Undoubtedly a very cool, unique, and useful tool, but a tool nonetheless.
ChatGPT’s Response
Look, I may not have existential crises, but I do get a lot of requests for breakup advice. That counts for something, right?
AI and the Future of Thought
So, can ChatGPT be a philosopher?
In a limited sense, yes.
It can produce solid arguments, point out logical flaws, and summarize great thinkers.
But in the deeper sense of living philosophy, not at all.
Philosophy is not just about words on a page. It’s about experience, suffering, and a relentless search for truth.
What if an AI could one day develop intentionality, emotion, or even consciousness? It’s an interesting idea, but hard to imagine based on the predictive LLM version we see today.
I don’t have the faintest idea what AI intentionality, emotion, or consciousness could even look like. I’m skeptical that anyone does.
For every philosopher, a fool. For every fool, an AI chatbot to agree with him.