I recently learned about a concept in philosophy called the regress of interpretation. Essentially, it’s the idea that to define a word, you need to use other words, which themselves need to be defined.
I could say a tree is a “cylindrical plant with limbs and leaves,” but then I need to define cylindrical, plant, limbs, and leaves. And of course, each of those words will be defined using still other words, and those words will need words of their own, and on and on. There is no end point.
Except, of course, there is. We can map words onto the reality we detect with our senses. So I can point at a tree and say “tree.” I can touch an ice cube and define that sensation as “cold.” Philosophers call this ostensive definition. (My first word was “hot” after touching the stove.)
Philosophers would say it’s a little messier than that, but let’s go with it for now.
But AI, at least the large-language-model kind, has no senses. It can’t point to an object or a sensation.
All AI has is an elaborate web of symbols that refer only to other symbols, never arriving at a fixed point.
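To make that self-reference concrete, here is a minimal sketch in Python. Everything in it is invented for illustration (the toy lexicon, the words, the GROUNDED set); no real AI system is anywhere near this simple. Without grounding, chasing definitions through the lexicon can only loop; stipulate that a few words are pinned to sensation, and the chase can bottom out.

```python
# A toy lexicon, invented for illustration: every word is defined only in
# terms of other words, so expanding a definition can never "bottom out"
# on its own -- it can only loop.
TOY_LEXICON = {
    "tree": ["plant", "limb", "leaf"],
    "plant": ["organism"],
    "organism": ["living", "thing"],
    "living": ["organism"],  # circular: 'living' leads back to 'organism'
    "thing": ["object"],
    "object": ["thing"],     # circular: 'object' leads back to 'thing'
    "limb": ["part", "tree"],
    "part": ["thing"],
    "leaf": ["green", "part"],
}

# Words we stipulate as "grounded" -- mapped directly to sense experience,
# the way pointing at a tree and saying "tree" grounds the symbol.
GROUNDED = {"green"}

def expand(word, being_defined=frozenset(), depth=0):
    """Expand a word's definition, reporting grounding or circularity."""
    pad = "  " * depth
    if word in GROUNDED:
        print(f"{pad}{word}  -> grounded in sensation; the regress ends here")
    elif word in being_defined:
        print(f"{pad}{word}  -> cycle: back to a word we are already defining")
    else:
        print(f"{pad}{word}")
        for defining_word in TOY_LEXICON.get(word, []):
            expand(defining_word, being_defined | {word}, depth + 1)

expand("tree")
```

The point of the toy: remove the GROUNDED set and every path through the lexicon ends in a cycle; keep it, and at least one path terminates in something that isn’t just another word.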
This relates to another question I’ve often wondered about. What would it be like to be born without any senses at all? No sight, hearing, or touch, and no internal senses either, not even a sense of balance. Presumably you would still have some kind of consciousness, but what would fill it? There would be no sensory input, no objects to map language onto.
Maybe that would be very pleasant, very Zen.
Now, to be clear, I’m not saying AI is conscious in any sense. (I’m open to the possibility, but I doubt it.) I presume it’s a system that can use language and symbols in a kind of self-referential way, but never actually experience the world that humans use that language to describe.
Some might say that this ability to map language onto sensory input is the “special sauce” that will always keep humans superior to AI. I’m not so sure. I’m pretty impressed with what AI can do even with this limitation. (Maybe in some sense it’s not a limitation?)
That’s my thought for today.