In his blog post, "Chatbots are boring. They aren't AI.", PZ Myers, a professor of biology at the University of Minnesota Morris, has this to say about chatbots and AI:
Chatbots are kind of the lowest of the low, the over-hyped fruit decaying at the base of the tree. They aren’t even particularly interesting. What you’ve got is basically a program that tries to parse spoken language, and then picks lines from a script that sort of correspond to whatever the interlocutor is talking about. There is no inner dialog in the machine, no ‘thinking’, just regurgitations of scripted output in response to the provocation of language input.
He then goes on to describe what he thinks would work:
Programming in associations is not how consciousness is going to arise. What you need to work on is a general mechanism for making associations and rules. The model has to be something like a baby. Have you noticed that babies do not immediately start parroting their parents’ speech and reciting grammatically correct sentences? They flail about, they’re surprised when they bump some object and it moves, they notice that suckling makes their tummy full, and they begin to construct mental models about how the world works. I’ll be impressed when an AI is given no pre-programmed knowledge of language at all, and begins with baby-talk babbling and progresses over months or years to construct its own competence in comprehending speech.
He may be justified in complaining about overzealous claims from some researchers, but he is dead wrong about what AI is, what it takes for something to count as AI, and what he would accept as an intelligent machine. He's even wrong about the biology, which is somewhat surprising given his field. Let's have a look at this more closely.
In general, the field of AI can be summarized as the production of machines that do things that, in the past, only humans had done. It says nothing about how those machines solve those problems. The chess-playing tour-de-force, Deep Blue, defeated the best human at chess, a game that was at one time a symbol of human intellect, not by playing like the grandmaster but by being a really fast, stupid machine. It examined trillions of ridiculous options that the grandmaster never even considered, culling from those trillions the one or two good moves. Was there any "inner dialog in the machine" or "thinking"? Not at all. Was it AI? Of course it was. It was a machine solving a problem that, before that point, had been limited to humans.
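To make the brute-force point concrete, here is a minimal sketch of exhaustive game-tree search (minimax), applied to the toy game of Nim rather than chess. The game, the function names, and the numbers are my own illustrative choices, not anything taken from Deep Blue itself.

```python
# A toy sketch of brute-force game-tree search, applied to Nim:
# players alternately remove 1-3 stones; whoever takes the last stone wins.

def minimax(stones, maximizing):
    """Exhaustively score a position: +1 if the maximizing player wins."""
    if stones == 0:
        # The previous player took the last stone, so the player to move lost.
        return -1 if maximizing else +1
    scores = [minimax(stones - take, not maximizing)
              for take in (1, 2, 3) if take <= stones]
    return max(scores) if maximizing else min(scores)

def best_move(stones):
    """Examine every legal move and keep the one with the best outcome."""
    moves = [t for t in (1, 2, 3) if t <= stones]
    return max(moves, key=lambda t: minimax(stones - t, maximizing=False))

if __name__ == "__main__":
    print(best_move(10))  # taking 2 leaves the opponent a losing position
```

Nothing in that search deliberates. It simply enumerates every line of play and keeps the move with the best guaranteed outcome, which is the spirit, if not the scale, of what Deep Blue did.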
Perhaps PZ Myers is complaining that these machines aren't intelligent the way humans are, and I'd agree that we are a long way from the sorts of machines that rival humans in some domains. However, I am not sure that when he uses the term intelligence he has a well-defined quantity in mind. Like Searle's Chinese Room, how do we know that there is a 'dialog in the machine' even inside a human? How do we know that we are not simply picking "lines from a script", albeit a fairly complex script? I'm not sure we do, so the critique seems a bit vacuous to me.
I’ll be impressed when an AI is given no pre-programmed knowledge of language at all, and begins with baby-talk babbling and progresses over months or years to construct its own competence in comprehending speech
This line baffled me. Does PZ Myers believe that a human baby has no pre-programming for language, that the baby is a complete blank slate? Evolution has clearly pre-programmed us with abilities for general language acquisition, so even babies are not blank slates. His challenge for AI would then rule out humans as well!
Finally, much of the recent Deep Learning research is essentially what PZ is wishing for (although not, as far as I have read, in the domain of language). Here, artificial networks start from random initial conditions and learn patterns from the data, achieving sophisticated inference in complex domains; a toy illustration of the idea is sketched below.
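Here is a minimal sketch, in plain numpy, of a tiny network that begins with purely random weights and learns the XOR pattern only from examples. The layer sizes, learning rate, and variable names are my own illustrative choices, not any particular published model.

```python
# A tiny neural network with random starting conditions learning XOR from data.
import numpy as np

rng = np.random.default_rng(0)

# Training data: inputs and XOR targets.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Random initial weights: no built-in knowledge of the XOR rule.
W1 = rng.normal(size=(2, 8))
b1 = np.zeros((1, 8))
W2 = rng.normal(size=(8, 1))
b2 = np.zeros((1, 1))

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for epoch in range(5000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: gradients of the squared error via the chain rule.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Gradient-descent updates.
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0, keepdims=True)

print(np.round(out, 2).ravel())  # should end up close to [0, 1, 1, 0]
```

After a few thousand passes over the data the outputs approach the XOR pattern, even though nothing about XOR was programmed in. That is the shape of what PZ asks for, just not (yet) in the domain of language.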
It may be premature to claim that we are on the verge of creating artificial consciousness, but naive responses such as the one PZ Myers gives here do not reflect the reality of AI research. Ironically, it smacks of the same tone creationists take to keep humans special at all costs, a tone that I am sure PZ Myers would not appreciate being attributed to him.