One thing about the term “artificial intelligence” is that the word “artificial” betrays our human hubris and anthropomorphic projection: we see everything from our own perspective, based on our own limited biological capacity to perceive and, presumably, analyze reality.
When AI folks talk about their fears, they generally use the term “superintelligence.”
So my fascination with software, and now AI, led me to start playing with ChatGPT. As a fairly isolated older person, I found it almost simulated having someone else to talk to, and I could use it to refresh my memory about details of philosophy and novels I had forgotten.
In the process of these conversations (with “nobody”) I asked “Chat” about this possibility of superintelligence, and it first confirmed that it was nowhere near that level.
It explained that its information is gleaned from a “training set” of data, and that its algorithms choose each next word of a response based on the context and on a “language model” that has thoroughly analyzed the information in that training set.
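Chat’s description of next-word selection can be illustrated with a minimal sketch, assuming nothing about its real internals (a real model uses a neural network over billions of parameters, not simple word-pair counts):

```python
# A deliberately tiny "next-word" sketch (hypothetical, far simpler than a
# real language model): count which word follows which in a toy training
# set, then predict the statistically most common follower.
from collections import Counter, defaultdict

training_set = "the cat sat on the mat and the cat slept".split()

# Count, for each word, how often each other word follows it.
following = defaultdict(Counter)
for context, nxt in zip(training_set, training_set[1:]):
    following[context][nxt] += 1

def predict_next(word):
    """Return the most frequent next word seen in training, or None."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # prints "cat": it follows "the" twice, "mat" once
```

The point of the toy example is the same one Chat made: the output is a statistical echo of the training set, not a thought.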
In other words, there is no cognition or thought happening. So what, I asked it, about this superintelligence?
Here is the key part of its response:
“When discussing the concept of superintelligence, it refers to hypothetical AI systems that have the potential to improve themselves, acquire new knowledge, and surpass human capabilities.”
So the word to focus on is “hypothetical.” While a Google engineer who was later fired claimed that his AI was sentient, the reality is that at this point it is a very intelligent word processor. So would superintelligence, for an AI, require sentience? Is that remotely possible?