Turing Test

Did a chatbot pass the Turing Test? 

(Would I have posted a sad R2 GIF if the answer was yes?)

You may have heard in the news this week that a chatbot has passed the Turing Test for the first time, fooling human judges into thinking it was a real human being rather than a computer program, meaning that machines can now think and now they’ll be laughing at our jokes and they’ll be there for us and listen to how our day was and give us great advice about that new job and whether we should ask out that special someone.

Well … take the Turing Test news with a grain of silicon salt.

While many news outlets are (rather breathlessly) reporting that the Turing Test has been beaten, lots of AI experts don’t agree. I certainly don’t fall into that “expert” category, but count me in the skeptical camp.


What this is: Some very cool work in the direction of developing human-like AI. A chatbot fooled more than 30% of judges into thinking it was a 13-year-old Ukrainian named Eugene Goostman with limited English language skills.

What this isn’t: A computer convincingly impersonating a human being, let alone proof that machines can think.

Kelly Oakes has a great rundown of why this isn’t really a Turing Test victory over at BuzzFeed. And having read an actual conversation that someone had with Eugene the chatbot, I cannot for the life of me figure out how anyone thought this was a real human. It reads like what I imagine talking to @Horse_ebooks would have been like back when we all thought it was a bot and not just an esoteric digital performance piece.

Designing your AI as a 13-year-old non-English speaker kind of moves the goalposts when it comes to forgiving its errors. Not everyone agrees that the Turing Test is a very good measure of machine intelligence (<- read that), either, as simply imitating a human isn’t quite the same as an “intelligent machine.”
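To see how little “intelligence” that kind of imitation requires, here’s a toy sketch in the ELIZA tradition. To be clear, this is my own illustration, not Eugene Goostman’s actual code: a handful of pattern rules plus canned deflections, with the persona doing the heavy lifting whenever the bot whiffs.

```python
import random
import re

# A toy chatbot in the ELIZA tradition -- an illustration of how far
# pattern-matching plus a forgiving persona can get you. This is NOT
# Eugene Goostman's actual code; the rules and replies are invented.
# Rules are checked in order; first match wins.
RULES = [
    (r"\bhow old\b", ["I am 13 years old. Why you ask?"]),
    (r"\bwhere\b.*\bfrom\b", ["I live in Odessa. It is big city in Ukraine."]),
    (r"\?$", ["It is interesting question, but I not sure.",
              "Why you want to know this?"]),
]

# Canned deflections cover everything the rules miss. The persona --
# a 13-year-old non-native English speaker -- excuses the non sequiturs.
DEFLECTIONS = [
    "My English is not so good, sorry. What else we can talk about?",
    "I don't understand. Do you like music? I like Eminem.",
]

def reply(message: str) -> str:
    text = message.lower().strip()
    for pattern, responses in RULES:
        if re.search(pattern, text):
            return random.choice(responses)
    return random.choice(DEFLECTIONS)

print(reply("How old are you?"))        # hits a rule
print(reply("Explain consciousness."))  # falls through to a deflection
```

There’s no understanding anywhere in there, just matching and dodging, which is exactly why imitation alone makes for a shaky intelligence test.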

So while this is some very cool programming and a worthwhile achievement in the arena of teenage chatbots from eastern Europe with poor vocabularies, I don’t think we have to worry about any intelligent robot overlords just yet.


Failing the Turing Test

What does Eugene Goostman’s triumph over the Turing Test really mean for the future of Artificial Intelligence? Gary Marcus explains: http://nyr.kr/1mxDdUs

“In the years since Turing, many machines have mastered individual tasks, like playing chess (IBM’s Deep Blue exceeded even the best humans) and even Jeopardy (IBM’s Watson). But each such program is tailored to a particular task, and none has possessed the sort of broad intelligence that characterizes humans. No existing combination of hardware and software can learn completely new things at will the way a clever child can.”

Major studios once turned out scores of great-person biographical pictures, but now you rarely see them except during award season. They’re prime Oscar-bait. The new Stephen Hawking biopic The Theory of Everything is a perfect specimen. It’s a letdown, finally, but Eddie Redmayne is amazing; he captures the fury inside Hawking’s twisted frame. And then there’s The Imitation Game, which Benedict Cumberbatch lifts far above the standard biopic formula. He’s award-caliber strange.

Hear the review:

Benedict Cumberbatch Lifts Above Biopic Formula In ‘Imitation Game’

Predicting the future of artificial intelligence has always been a fool’s game

From the Dartmouth Conferences to Turing’s test, prophecies about AI have rarely hit the mark. But there are ways to tell the good from the bad when it comes to futurology.

In 1956, a bunch of the top brains in their field thought they could crack the challenge of artificial intelligence over a single hot New England summer. Almost 60 years later, the world is still waiting.

The “spectacularly wrong prediction” of the Dartmouth Summer Research Project on Artificial Intelligence made Stuart Armstrong, research fellow at the Future of Humanity Institute at the University of Oxford, start to think about why our predictions about AI are so inaccurate.

The Dartmouth Conference had predicted that over two summer months ten of the brightest people of their generation would solve some of the key problems faced by AI developers, such as getting machines to use language, form abstract concepts and even improve themselves.

If they had been right, we would have had AI back in 1957; today, the conference is mostly credited merely with having coined the term “artificial intelligence”.

Their failure is “depressing” and “rather worrying”, says Armstrong. “If you saw the prediction the rational thing would have been to believe it too. They had some of the smartest people of their time, a solid research programme, and sketches as to how to approach it and even ideas as to where the problems were.”

Now, to help answer the question why “AI predictions are very hard to get right”, Armstrong has recently analysed the Future of Humanity Institute’s library of 250 AI predictions. The library stretches back to 1950, when Alan Turing, the father of computer science, predicted that a computer would be able to pass the “Turing test” by 2000. (In the Turing test, a machine has to demonstrate behaviour indistinguishable from that of a human being.)
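For a concrete sense of what “passing” meant in practice, here’s a toy scoring check of my own, not the actual protocol of any contest; the 30% bar comes from Turing’s 1950 prediction, and it’s the threshold the 2014 event used.

```python
# Toy scoring check for a Turing-test-style contest. An illustrative
# sketch only -- not the actual protocol of the 2014 event.

def fools_enough_judges(judgments: list[bool], threshold: float = 0.30) -> bool:
    """judgments[i] is True if judge i labelled the machine 'human'.

    Turing's 1950 paper predicted that by 2000 an average interrogator
    would have no better than a 70% chance of making the right
    identification after five minutes -- i.e. the machine would fool
    judges more than 30% of the time. The 2014 event adopted that bar.
    """
    return sum(judgments) / len(judgments) > threshold

# 10 of 30 judges fooled is 33% -- roughly the margin reported for
# "Eugene Goostman", and just over the bar.
print(fools_enough_judges([True] * 10 + [False] * 20))  # True
```

Note how low that bar is: a machine “passes” while still being correctly unmasked by two-thirds of its judges.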

Later experts have suggested 2013, 2020 and 2029 as dates when a machine would pass the Turing test, which gives us a clue as to why Armstrong feels that such timeline predictions – all 95 of them in the library – are particularly worthless. “There is nothing to connect a timeline prediction with previous knowledge as AIs have never appeared in the world before – no one has ever built one – and our only model is the human brain, which took hundreds of millions of years to evolve.”

His research also suggests that predictions by philosophers are more accurate than those of sociologists or even computer scientists. “We know very little about the final form an AI would take, so if they [the experts] are grounded in a specific approach they are likely to go wrong, while those on a meta level are very likely to be right”.

What makes a computer seem human isn’t how we perceive its intellect but its affect. Can it display frustration, surprise or delight just as we would? A computer scientist friend of mine makes that point by proposing his own version of the Turing Test. He says, “Say I’m writing a program and type in a couple of clever lines of code — I want the machine to say, ‘Ooh, neat!’ ”

That’s the goal of the new field called affective computing, which is aimed at getting machines to detect and express emotions. Wouldn’t it be nice if the airline’s automated agent could rejoice with you when you got an upgrade? Or if it could at least sound that way? Researchers are on the case, synthesizing sadness and pleasure in humanoid robots that fall just this side of creepy.
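As a deliberately crude illustration of that detect-then-express loop, here’s a sketch of an “emotional” airline agent. The keyword lists and function names are my own invention, not any real affective-computing API; real systems lean on prosody, facial expression, and learned models rather than word lists.

```python
# A deliberately crude "detect, then express" loop. The keyword lists
# and names are invented for illustration, not a real system's API.

POSITIVE = {"upgrade", "upgraded", "great", "won", "thanks"}
NEGATIVE = {"delayed", "cancelled", "canceled", "lost", "missed"}

def detect_affect(utterance: str) -> str:
    words = set(utterance.lower().split())
    if words & POSITIVE:
        return "joy"
    if words & NEGATIVE:
        return "frustration"
    return "neutral"

def respond(utterance: str) -> str:
    affect = detect_affect(utterance)
    if affect == "joy":
        return "That's wonderful news! Congratulations!"
    if affect == "frustration":
        return "I'm so sorry. That sounds really frustrating. Let me help."
    return "How can I assist you today?"

print(respond("I just got an upgrade to first class"))  # rejoices
print(respond("My flight was cancelled"))               # commiserates
```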

A computer program just convinced a panel of judges it was human

On Saturday, the line between human and machine was blurred a little more when a computer program became the first in history to successfully convince a panel of judges that it was a human being — more specifically, a non-native-English-speaking, 13-year-old Ukrainian boy named “Eugene Goostman”.

Read more at PolicyMic.