Artificial intelligence test
Computers argued, cracked jokes and parried trick questions, all part
of an annual test of artificial intelligence carried out at the
University of Reading. Typing away at split-screen terminals, a dozen
volunteers recently carried out two conversations at once: one with a
chat program, the other with a human. After five minutes, they were
asked to say which was which. Some were not sure who - or what - they
were talking to.
"There was one time when I was speaking to the two, and there was an
element of humour in both conversations. That's the one that stumped me
more than others," said Ian Andrews, one of the judges in Reading, just
west of London.
Transcripts of the conversations showed some savvy judges ruthlessly
trying to trip programs up with questions about the day's weather, the
global financial turmoil and the colour of their eyes.
"Blue, of course!" answered Eugene Goostman, a 'chatbot' designed by
Pennsylvania-based programmer Vladimir Vesselov. Eugene was one of five
programs competing to pass themselves off as flesh and blood. A sixth
program, Alice, dropped out when it could not be set up in time.
Fred Roberts' Elbot scooped the day's top award: the Loebner
Artificial Intelligence Prize's bronze medal, for duping three out of 12
judges assigned to evaluate it.
"I wish I was as good at conversation as Elbot," the Hamburg,
Germany-based consultant joked after receiving the prize.
The contest draws on the ideas of British mathematician Alan Turing,
who came up with a subjective but simple rule for determining whether
machines were capable of thought. Writing in 1950, Turing argued that
conversation was proof of intelligence. If a computer talked like a
human, then for all practical purposes it thought like a human too.
But judging a computer's eloquence was tricky: Humans might be
prejudiced against a machine.
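
The pass-or-fail logic the contest applies, as described above, amounts to counting how many judges mistake the program for the human. A minimal sketch of that scoring, with hypothetical names (this is an illustration, not the contest's actual software):

```python
def fooled_fraction(verdicts):
    """Fraction of judges a program fooled.

    verdicts: list of booleans, one per judge; True means the judge
    labelled the program as the human conversation partner.
    """
    return sum(verdicts) / len(verdicts)

# Elbot's reported result: 3 of the 12 judges were fooled.
elbot_verdicts = [True] * 3 + [False] * 9
print(fooled_fraction(elbot_verdicts))  # 0.25
```

Under Turing's original proposal the bar is statistical rather than absolute: a program "passes" to the degree that judges do no better than chance at telling it from the human.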
READING, England (AP)