AI Isn’t Alive Like You Think It Is

Does a puppet have a mind? A ventriloquist doll? From the eighth row of the audience, with the lights dimmed, ears straining, we might say so. After all, the performer’s mouth isn’t moving, and the puppet moves its mouth as we hear speech. The puppet expresses emotions, feels things, comments on the world. It acts and reacts. It responds in different ways at different times (its behavior is nondeterministic). In a clever enough performance, we may get the sense that the puppet betrays its inner state through the odd revealing remark, hinting at a hidden jealousy or irritation. The puppeteer and the puppet may even argue and tease one another.

[Image: a vintage clown doll. Caption: Punch the puppet]

Reaction to stimuli, the presence of a subconscious, nondeterministic behavior, the appearance of a self, a distinctive personality, a faculty of perception: these are just a few of the qualities we say minds must have to be minds. The puppet, in performance, exhibits all of these.

But when the crowd spills out of the theater into the evening, none of us believes the puppet is alive. We may comment on the quality of the simulacrum, but no one exits trembling, shocked to have seen the ghost in the wooden machine. The puppet only appears to be alive, animated by a human hand and mind.

By the way: puppetry goes back at least to Ancient Greece, where a puppet was called nevrospastos, “drawn by strings,” from nevron: “sinew, tendon, muscle, string, wire.” Another etymological descendant of nevron, incidentally, is neuron. But it would be a mistake to conclude that the strings and wires that make up the puppet also make up the mind. That is pretty much the central fallacy of anthropomorphizing the “neural networks” that constitute LLMs.

I’d like to compare this to what one might experience using a modern foundation model, the kind that powers an LLM chatbot. Many users have come away from conversations with ChatGPT, Gemini, or Grok claiming to have seen the real spark of mind (the NYT’s Kevin Roose infamously wrote about Bing’s chatbot, built on OpenAI’s GPT models, as if it held ulterior motives and aimed to manipulate him). This is fundamentally no different from watching a puppet show.

[Image: Artificially intelligent robots as imagined in 1921, when the term “robot” was coined]

A chatbot can be just as entertaining, just as engaging. It can be funny and clever, zippy and helpful; it can be rude and cruel, and it has perhaps even driven people to suicide. It will produce different results even when you serve it the same input. These are all the qualities we say must exist for a mind to be a mind. (And just like a puppet, a chatbot requires human animation to perform: the text and images it creates are recombinations of the enormous mass of communication we have produced as a species. The puppeteer’s hand is absent, but that doesn’t mean there’s no humanity behind the scenes.)
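That variability, for what it’s worth, is not a ghost in the machine but a design choice: the model outputs a probability distribution over possible next words, and the chatbot samples from it. A minimal sketch of the idea, using a made-up toy distribution and Python’s standard library rather than any real model’s API:

import random

# Toy stand-in for a language model: given a prompt, return a fixed
# probability distribution over candidate next words. A real model
# computes this distribution with a neural network, but the sampling
# step below is the same idea.
def next_word_distribution(prompt: str) -> dict[str, float]:
    return {"alive": 0.05, "a tool": 0.40, "a mirror": 0.35, "a puppet": 0.20}

def sample_reply(prompt: str) -> str:
    dist = next_word_distribution(prompt)
    words = list(dist)
    weights = list(dist.values())
    # random.choices draws according to the weights, so repeated calls
    # with the identical prompt can (and do) return different words.
    return random.choices(words, weights=weights, k=1)[0]

prompt = "The chatbot is"
for _ in range(5):
    print(prompt, sample_reply(prompt))

Run it and the same prompt yields different completions from one call to the next. Nothing changed its mind; a weighted die was rolled.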

But if these criteria match a puppet as easily as a chat machine, can we really say they are useful? Or do we simply acknowledge that they must be applied alongside a good dose of common sense? I argue it’s the latter. The same wisdom that says a human and a puppet, though they share the same nondeterministic, personality-laden, subconscious-seeming behavior, belong to two different categories should make the same judgment about a human and a large language model.
