SATURDAY, MAY 28, 2016
The Turing Test has long been the standard for determining whether a machine is performing intelligently. The experiment is as follows: a human has a conversation with a hidden computer through some sort of interface. He doesn't know beforehand whether he is conversing with a computer or a human. If the human, by means of conversation, cannot determine whether the thing behind the wall is a computer or a human, then we've successfully programmed artificial intelligence. That is, the computer is functioning so normally - normally, that is, as a human would function - that it isn't possible to see it as anything but human.

The pitfall of the Turing Test is that it automatically assumes the intelligence of the human - and the intelligence of the conversations two humans can have. I'm sure, particularly when it comes to instant messaging, we've all come across humans we just don't understand. Humans who are, in fact, more robotic than the computers we have fine-tuned to be extensions of ourselves.

Recently, in the 2008 Loebner Competition - where AI programs are put to the Turing Test - Elbot emerged as the winner. Though it didn't quite pass the Turing Test, it did fool 3 out of 12 judges. You can chat with Elbot here: (Push the red button)

Now, granted, when I first chatted with Elbot, I knew it was a computer. So I wasn't really administering the Turing Test myself - I had the prior knowledge that the hidden computer was a computer. But to be perfectly frank, after 5 minutes of talking with Elbot, I don't see how anyone could fail to immediately identify this chat bot as a computer - and a pretty dumb one at that.

The programmers claimed they felt they could do better in the test if they gave Elbot more personality. Rather than having Elbot assert that it is human, the way other chat bots do, Elbot instead sarcastically admits that it's a computer, in an effort to raise doubt. Sure, it's an interesting strategy - except it immediately disqualifies this as artificial intelligence. It would be intelligence if Elbot had come up with that strategy. But humans came up with it, and fed it to Elbot as a series of directives. Elbot did not arrive at the strategy through a series of successful and failed attempts. In other words, it did not act intelligently. But even setting that point aside, given that Elbot evades every single question you ask it, and responds so generally and loosely to key words you may use, there is no way anyone should be fooled by it - unless everyone you ever chat with on instant messengers has the habit of evading everything you say to them.

If we ever do pass the Turing Test, it will be a long way off. Language is far too complicated to be broken down into logical bits. Particularly when it comes to instant messaging - it's why we sometimes have to rely on italicizing what we write, use winks, or give other cues to the other person to denote our sarcasm, our earnestness, our happiness, or what have you. Language is so complicated that even as humans, we have our misunderstandings. This doesn't mean we have to create a computer that's smarter than humans - after all, if the computer is confused by the same tones a human would be confused by, that would fool us. If it knew exactly what we were saying every single time, that would not only be overkill - and signal that it is too good - but it would also be very frightening. Think: every time you try to fool another person in conversation, every time you exaggerate, every time you fib, every time you use sarcasm, this computer would not be fooled. That's very frightening. It also means you'd be able to quickly spot the computer as a computer. Why? Tell it a joke. Most jokes are plays on our own expectations - or rather, our own misunderstandings. We hear the story being set up, and we start understanding it to be one way. The punch line is a clarifying statement which reveals to us that our understanding was a misunderstanding. A computer that understands everything wouldn't find this type of joke particularly funny. So while we don't need to take computers beyond our own level of understanding, bringing them to our own level is difficult enough. Particularly because we don't even understand our own understanding - and how can you ever replicate that which you don't understand?

So how do we understand each other? How do we know when someone is speaking ironically? How do we know when someone is teasing us? How do we know the difference between a sincere statement and a lie? Most of it is based on experience. The least gullible of us are those who have already come across many misleading statements, can recall instances when they were misled, and immediately throw up a red flag. Those who are gullible? Well, people who have never experienced a lie are immediately gullible. For example, most children. They haven't been around long enough, and so happen to believe everything they are told - they aren't aware of the possibility, and cannot reason, that someone would mislead another. Adults who are gullible just have difficulty recognizing a misleading statement, or otherwise the one making the statement is particularly good at masking it. But all of this is based on experience. So quite clearly, the best way to get a computer to our level would be through experience. Give it a lifetime's worth of conversational experience - where people lie, where people are sarcastic, where people insult, where people quite curiously express terms of endearment (and if you've ever seen the logs of one of these chat bots, you'd see just how many people can't help but talk dirty to a robot). Unfortunately, this takes a lot of memory and a lot of processing time: get the statement, analyze the statement, compare it with other statements, store the statement, compare responses, obtain the best and most logical response, return the response. While this is something you and I do in a matter of milliseconds (sometimes longer, in which case we say "Uhhh..." instead of "Loading, please wait..."), a computer requires much longer to do it. And that's precisely because we don't understand our own brains very well - brains that are remarkably fast at information retrieval, and also remarkably good at archiving huge amounts of information with something like a 90% compression rate. Sure, there's a little degradation here and there, but when you consider how much information we can fit in our brains, and how well we compress it, you can accept a little degradation.
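The loop described above - get the statement, compare it with stored statements, pick the best response, store and return - can be sketched in a few lines of Python. Everything here is a toy stand-in under my own assumptions (the class name, the crude word-overlap score); it is not how Elbot or any real chat bot is implemented, but it shows why the memory and comparison costs grow with every conversation the bot has seen.

```python
from collections import Counter

class RetrievalBot:
    """Toy retrieval chatbot: remembers (statement, response) pairs it
    has seen, and answers a new statement with the response attached to
    the most similar remembered statement."""

    def __init__(self):
        self.memory = []  # list of (statement word-counts, response) pairs

    @staticmethod
    def _words(text):
        # "Analyze the statement": here, just a bag of lowercase words.
        return Counter(text.lower().split())

    @staticmethod
    def _similarity(a, b):
        # Crude word-overlap score standing in for real comparison.
        return sum((a & b).values())

    def learn(self, statement, response):
        # "Store the statement": remember how people replied to it.
        self.memory.append((self._words(statement), response))

    def reply(self, statement):
        # "Get the statement, compare with other statements,
        #  obtain the best response, return the response."
        words = self._words(statement)
        if not self.memory:
            return "Loading, please wait..."
        best = max(self.memory, key=lambda pair: self._similarity(words, pair[0]))
        return best[1]

bot = RetrievalBot()
bot.learn("hello there", "Hi! How are you?")
bot.learn("tell me a joke", "Why did the chicken cross the road?")
print(bot.reply("please tell me a good joke"))  # → Why did the chicken cross the road?
```

Note that every `reply` call scans the whole memory - exactly the "lot of memory, lot of processing time" problem: a lifetime of conversations makes each comparison pass slower, where a human brain does the equivalent lookup in milliseconds.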

To really explain the difficulty at hand, think of the movie you last saw. If you liked the movie, the parts you remember are the crucial moments that defined the story. The scenes you felt particularly close to. The scenes where you may have laughed. You don't remember every single shot - very difficult, when you consider that a standard shot is around 5 seconds long, giving a 90-minute film over 1000 shots. Without thinking about it, without remarking to yourself what's important and what isn't, you remember all the bits of the movie that help you reconstruct it later. And you forget the irrelevant parts. All without trying. To get a computer to do this would be incredibly difficult, for the simple reason that we have a lot of trouble explaining how we do it ourselves. It just... happens. And more remarkably, despite every movie being different (insert bad joke about standard Hollywood movies...), we manage to do it for every single movie. Talk about having dynamic brains! At best, with a computer, we could tell it the salient bits to remember and which ones to forget. And we'd need to do this one by one, for each movie, because they're all different. To have the computer do it on its own is far beyond what we can do, because we cannot fully describe doing it ourselves.

The genius of the Turing Test is precisely all that it takes into account. It doesn't explicitly say how difficult a task it is to pass the test, but it is an enormous one. Is it doable? I'm sure it one day will be, but we're a long way off... though those three judges may disagree.

When you think of Artificial Intelligence, what comes to mind? Is it Spielberg's 2001 movie? Is it the Matrix? Is it your computer beating you at C...




Personally, I don't think genuine AI, whatever it might look like, is possible, even if some morons are fooled by it upon cursory conversation. The closest it could get to "intelligent", as you point out, would be to let it evolve and experience some sort of life, and let it develop some individual consciousness. But there's a problem there, too, since an individual isn't an individual without other individuals. You'd still have to instruct and inform and teach and converse with the potential AI, and/or have it relate to other potential AIs, just to give it some chance at attaining something close to intelligence.

The human mind works on (among other things) contextualization, on sorting out relevant information from the irrelevant, whereas a computer works from a code, its "computer nature", if you will. Human nature is pretty malleable. Likely the most malleable of all animals, despite our innate, evolutionary programming. The particular thing about human beings' "innate programming", our human nature, is that it is dynamic and flexible. And we're at our most inhuman when we seem to act like programmed computers, doing only what we're precisely instructed to do, taking the orders we're given, and not thinking for ourselves, logically or morally. Humans have individual autonomy, even if we deny it, or so the existential argument goes. We're free, and therefore responsible, charged with the never-ending task of creating our own meanings, collectively and individually. A computer would never even get a Why'd-the-chicken-cross-the-road joke, let alone develop a meaningful philosophy of life.

The complexity of language alone (forget about the complex-enough-already visual processing and other sense-data interpretation) makes it impossible for a computer to keep up with a human. True, humans do, as I said, learn from others through example and mimicry, but if that were the only way we learned and developed, well, it's not clear how we'd ever evolve and change as a species. There would be no new words, no new and different art - in short, no cultural progress at all - if there weren't also some imaginative, creative aspect of human intelligence beyond our purely mimetic, social copy-catting.

If our evolved physical bodies and minds are our hardware, and our culture our software, the computer/AI analogy still breaks down pretty fast. We all run different software programs (culture) on the same hardware (our bodies and brains), but in a real computer, if there is just one little ambiguity, one slight contradiction in the software code, the hardware will crash. Not so with humans. We can handle, even delight in, contradictions, paradoxes, and strange jokes with no logical basis. A computer, though, absolutely needs perfectly unambiguous instructions. Without that, it freezes up. Even a dog (or any animal, for that matter), if placed between two bowls of food at equal distances, won't stand there, frozen, unable to make a decision. If it's hungry, it will just choose a bowl. A computer, no matter how much "intelligence" it has, unless given perfectly unambiguous instructions, would freeze up at the crossroads between bowls, simply because it has no imagination, no common sense, to just go with the context of the situation, which should tell it, instinctively (a computer has no instincts, though), to just pick a bowl and eat! You're hungry, so eat! Yet this simple, natural, common-sense intelligence is beyond artificial intelligence.
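The two-bowls scenario can be made concrete with a small sketch. This is my own illustrative toy (the function names and the distance table are assumptions, not anything from the post): a strictly rule-following program with no tie-break returns nothing when the bowls are equally near, while one extra "just pick one" line - the dog's move - unfreezes it.

```python
import random

def choose_bowl_strict(distances):
    """Deterministic rule: go to the strictly nearest bowl.
    With equal distances there is no unique answer, so it 'freezes'."""
    nearest = min(distances.values())
    candidates = [bowl for bowl, d in distances.items() if d == nearest]
    if len(candidates) > 1:
        return None  # frozen at the crossroads between bowls
    return candidates[0]

def choose_bowl_hungry(distances):
    """Same rule plus a dog-like tie-break: if several bowls are
    equally near, just pick one and eat."""
    nearest = min(distances.values())
    candidates = [bowl for bowl, d in distances.items() if d == nearest]
    return random.choice(candidates)

bowls = {"left": 2.0, "right": 2.0}
print(choose_bowl_strict(bowls))   # → None
print(choose_bowl_hungry(bowls))   # → 'left' or 'right', chosen at random
```

Of course, the tie-break itself is still a human-supplied directive - which rather supports the point: the "common sense" had to be programmed in, not arrived at.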
