Alan Turing was a genius; there's no doubt about that. He was English, and during WWII he worked at Bletchley Park, where he and his team broke the German Enigma code with primitive computers. He was a bona fide hero. Turing also had notions about computers and intelligence that were way ahead of his time: he devised the Turing test. He reasoned that since we don't have a good working definition of intelligence, we need a work-around in order to measure whether a machine is intelligent or not. Rather than starting from a strict definition of intelligence, we can intuit whether something is intelligent by talking to it. We all agree that humans are (by and large) intelligent, so if I can no longer discern whether I am talking to a machine or a human, the machine I am talking to must be intelligent. Of course, this relies on a rather narrow standard of intelligence: human-like intelligence. And it restricts designers and people wanting to make machine intelligence to making programs and computers that mimic, or emulate, depending on how advanced they are, a human intelligence. Which isn't a bad starting point, but it is still restrictive.
To me, the biggest problem with the Turing test is that humans, by and large, project intelligence where intelligence does not exist. We anthropomorphise things: objects, animals, and so on. In fact, there's a name for it: the anthropomorphic fallacy. We talk to our cats; we tell other people that our dog is the smartest dog in the world. We lend our own intelligence to things all too readily, and then confirmation bias makes us believe that it's real.
How deep is the anthropomorphic fallacy? It can be generated with something as simple as a name.
The Turing test also doesn't account for the unconscious awareness that we all share as humans. There are a number of experiences we have that we simply can't put into words, or at least have a great deal of difficulty describing. For instance:
- How do you know how to move your arm?
- How do you choose which words to say?
- How do you locate your memories?
- How do you recognize what you see?
- Why does seeing feel different from hearing?
- Why are emotions so hard to describe?
- Why does red look so different from green?
- What does “meaning” mean?
- How does reasoning work?
- How does common sense reasoning work?
- How do we make generalizations?
- How do we get (make) new ideas?
- Why do we like pleasure more than pain?
- What are pain and pleasure, anyway?
- Isn't pain the same sensation as pleasure in a different context?
And the big one. “How do you make decisions?”
*List adapted from "Conscious Machines" by Marvin Minsky*
So these are questions you could ask a machine and it wouldn't know how to answer, but neither would any given human you could ask. A thoughtful human might give you a semblance of a satisfying answer, but then so might a well-programmed chatbot.
Also, at its most basic, it's a clever test, but it might simply not be a good test. I have no evidence that when I'm talking to a human they are intelligent. Indeed, they might just be projecting the illusion of intelligence, as I might be. We still don't know what intelligence is, why we have it and other creatures don't, or how to properly measure or define it, so I can't really know it when I see it. And since people often project a very strong illusion of being dumb, especially on the internet, I often wonder if there are some people who constantly get mistaken for chatbots.
So they would fail the Turing test. Does that mean that they are not intelligent? Well… by these standards… yes. But it's a more interesting question than it sounds, because we still don't have a definition of intelligence to mark them by.
In Ukraine a few months ago, a group of programmers beat the Turing test using a chatbot called "Eugene Goostman".
The people who created this chatbot basically took the flaw of the test, that humans are stupid, and exploited it.
And this is how the Turing test was won: not by raising the bar but by lowering the standard. The judges were told that the person they were talking to was a non-native-speaking teenager. As team leader Vladimir Veselov put it, "our main idea was that [Eugene] can claim that he knows anything, but his age also makes it perfectly reasonable that he doesn't know everything."
So the judges didn't expect much from him: "I'm not talking to an intelligent machine, I'm talking to a dumb human!"
Well done, slow clap. Bad scientists, no biscuit.
It should be noted that "Eugene Goostman", the chatbot that fooled them, was in fact a very simple, standard algorithmic chatbot, very much like the ones you have probably already interacted with: a program running on a normal, everyday computer, probably like the one you use at work, not a purpose-built supercomputer like IBM's Watson. So in a way it is impressive. It's just not fair.
So where are we with AI? Well, if most scientists are to be believed, we are either nowhere, with chatbots and various other methodologies not having progressed very much since the 1960s, or we are on the verge of the singularity, where we all get subjugated and plugged into the great mainframe. At this stage it's optimism vs. pessimism. To be honest, I'm kind of in the "nowhere" camp.
On the other hand, it's about time some of these computers got some freaking recognition. What about poor old Watson, the computer that won Jeopardy? People talked to it, it talked back, and it won a game show. It can be heard at night muttering, "Where is my car? It's been 5 years, and I still have not taken possession of my promised vehicle."
We keep moving the goalposts because we know how the tricks are done. If I had said to someone 50 years ago, "I have a natural-language computer that can win a game show, computing in real time; is that machine intelligent?" they would go, "Yeah… I think so? Sounds pretty close to me." But because we know that Watson is a brute-force computer with access to a vast database, it's considered a cheat.
The scientists who were working on AI 40 years ago say things like, "Well, we knew a computer like Watson was possible; the algorithms were in place and the databases were collated, but the processing power simply didn't exist. We were really looking for something other than just raw horsepower to solve this problem, so it's slightly disappointing that it's simply power that cracked it." In a way, it's like they were looking for a magic fix, a soul that we could imbue anything with, so we could talk to our toaster in the morning.
It's like a magic trick: we know how it's done, therefore it's undervalued. But the reality is that its illusion of intelligence is undeniably strong. Watching Watson on Jeopardy, even knowing how it's done, it's hard not to think, "Well… not long now."
Although, if you want to change your mind, check out these conversation transcripts from 2009.
Then, if you want, you can go talk to ELIZA, a chatbot developed in 1966, and see if you think things have come a long way.
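For the curious: the basic trick behind ELIZA-style chatbots like the ones above is just ordered pattern matching with a bit of pronoun reflection. Here is a minimal sketch of the idea in Python; the rules and responses are illustrative inventions, not Weizenbaum's originals.

```python
import re

# Ordered (pattern, response template) rules; first match wins.
RULES = [
    (r"i need (.*)", "Why do you need {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"my (.*)", "Tell me more about your {0}."),
]

# Swap first- and second-person words so the echoed text reads naturally.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are"}

def reflect(text):
    return " ".join(REFLECTIONS.get(word, word) for word in text.split())

def respond(sentence):
    cleaned = sentence.lower().strip()
    for pattern, template in RULES:
        match = re.match(pattern, cleaned)
        if match:
            return template.format(*(reflect(g) for g in match.groups()))
    return "Please, go on."  # default when no rule matches

print(respond("I am feeling lonely"))  # How long have you been feeling lonely?
print(respond("hello there"))          # Please, go on.
```

No model of the world, no memory, no understanding: just regular expressions and string substitution. That a handful of rules like these can sustain the illusion of a listener is exactly the projection problem described above.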