Story here. Do any readers know more about this?
(Thanks to Michael Bramley for the pointer.)
UPDATE: David Chalmers (NYU/ANU) writes:
The chatbot didn't pass the Turing test. In his original article, Turing predicted that within fifty years machines would be able to fool 30% of judges into classifying them as human after 5-minute conversations. The organizers have somehow bamboozled the media into taking this prediction as the criterion for passing the test, but the article makes it quite clear that it's nothing of the kind. In fact, in a follow-up discussion he says that he doesn't think the full test will be passed for at least a century. It's also worth noting that the bar has been lowered considerably by having the chatbot pretend to be a 13-year-old with English as a second language. If we're allowed to lower the bar like this, one can trivially write a Turing-test-passing program whose responses are indistinguishable from those of a human who is asleep!
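For what it's worth, Chalmers's closing quip can be made literal. Here is a minimal, tongue-in-cheek sketch in Python (the names are purely illustrative, not anyone's actual entry) of the lowered-bar program he has in mind: it imitates a sleeping human by never replying at all.

```python
# A deliberately trivial "chatbot" in the spirit of Chalmers's quip:
# its responses are indistinguishable from those of a human who is
# asleep, because it never says anything at all.

def asleep_human_reply(message: str) -> str:
    """Return the reply a sleeping human would give: nothing."""
    return ""

if __name__ == "__main__":
    for prompt in ["Hello?", "Are you human?", "What is 2 + 2?"]:
        print(f"Judge: {prompt}")
        print(f"Bot:   {asleep_human_reply(prompt)!r}")
```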
Also, comments are now open.