In this "imitation game," as Turing originally described it, a human participant blindly asks questions of both a human and a computer. If the computer successfully tricks the questioner into thinking it is human, then it has passed the Turing test.

The first attempt at passing the test came in the mid-1960s, when computer programmers designed a chatbot named Eliza to mimic a psychologist. In 2014, the first AI reported to pass the test (a claim that is still debated) was Eugene Goostman, a program designed to simulate the responses of a 13-year-old Ukrainian boy. In the decade since, many more programs have purported to pass the Turing test. Most recently, Google's AI LaMDA passed the test and even, controversially, convinced a Google engineer that it was "sentient."

However, some argue that the test is far from perfect. Using language as a test of a neural network's "intelligence" makes sense to some degree, since language is one of the hardest things for an AI system to imitate. But the main criticism is that the test ignores several other facets of "intelligence" that are just as critical as a human's language ability, and many chatbots have been designed specifically to fool people into thinking they're human. Eugene Goostman, for example, was designed so that English was the chatbot's second language, effectively hiding some of its awkward responses.

Proposals for amending or even replacing the Turing test with something that more accurately captures true intelligence have been around for years.
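The structure of the imitation game can be sketched as a small simulation. Everything here is a hypothetical illustration, not any real testing framework: the respondents and the interrogator are placeholder functions, and a real test would involve free-form conversation rather than scripted replies.

```python
import random

def turing_test(interrogator, human, machine, questions):
    """Simplified imitation game: the interrogator questions two
    unlabeled respondents and must guess which one is the machine."""
    # Hide identities behind randomly assigned labels "A" and "B".
    labels = {"A": human, "B": machine}
    if random.random() < 0.5:
        labels = {"A": machine, "B": human}
    # Each labeled respondent answers every question blindly.
    transcripts = {label: [(q, respond(q)) for q in questions]
                   for label, respond in labels.items()}
    guess = interrogator(transcripts)   # interrogator names "A" or "B"
    machine_caught = labels[guess] is machine
    return not machine_caught           # True means the machine "passed"

# Toy participants (purely illustrative placeholders).
def scripted_human(question):
    return "Let me think about that for a moment."

def scripted_machine(question):
    return "Let me think about that for a moment."  # indistinguishable by design

def naive_interrogator(transcripts):
    return random.choice(list(transcripts))  # identical answers force a coin flip

passed = turing_test(naive_interrogator, scripted_human, scripted_machine,
                     ["What is your favourite poem?"])
```

With indistinguishable answers, the interrogator can only guess, so the machine "passes" about half the time; a real chatbot's goal is to push that rate toward chance even against a probing questioner.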