Humans are not at risk from artificial intelligence.
A researcher argues in this column that the idea that intelligent robots could one day usurp our control is greatly overblown.
Recent discussion has focused on the prospect that artificial intelligence (AI) could one day turn against its human creators.
The renowned scientist Stephen Hawking has warned that AI may one day exceed human intellect and ultimately destroy humanity. He made the remark in connection with his newly upgraded speech-synthesis system, which can predict the words he intends to say.
Hawking is hardly alone in his fear of superintelligent AI. A growing number of futurists, philosophers, and AI researchers worry that artificial intelligence will outwit and outmanoeuvre humans. I believe this is highly improbable, because humans will always be able to use improved AI to augment themselves.
A malicious AI would need to surpass not just the human brain, but the combination of humans and the trustworthy AI tools still under human control. That combination is hard to beat.
Passing the Turing test
The Turing test is the standard starting point for artificial intelligence. The test is based on a thought experiment proposed by the brilliant mathematician Alan Turing.
Turing’s response to the question “Can a machine think?” was to propose an imitation game, in which the machine’s challenge is to converse so convincingly on any topic that you cannot tell whether you are talking to a person or a computer.
In 1991 the inventor Hugh Loebner established an annual competition, the Loebner Prize, for an AI that could pass the Turing test — what we now call a “chatbot,” a conversational program. Ian Hocking, one of this year’s judges, wrote on his blog that if this year’s entrants represented our best effort at creating human-like intelligence, we are still decades away from success. AI can still only manage a fraction of what human intellect can.
Nor am I impressed by the University of Reading’s claim that a program passed the test by simulating the English conversation of a 13-year-old Ukrainian boy. Imitating a juvenile mind and the linguistic quirks of a non-native speaker is a far cry from what the Turing test demands.
Decades ago, AI equipped with pattern-matching algorithms could already superficially imitate human conversation. The 1960s program Eliza, for instance, managed to pass itself off as a psychotherapist.
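The pattern-matching trick behind Eliza can be sketched in a few lines. The rules below are illustrative, not Weizenbaum's original script: each one pairs a regular expression with a template that reflects the user's own words back, in the Rogerian-therapist style Eliza imitated.

```python
import re

# Each rule maps a regex to a response template; captured groups are
# spliced back into the reply, creating the illusion of understanding.
RULES = [
    (re.compile(r"\bI need (.+)", re.IGNORECASE),
     "Why do you need {0}?"),
    (re.compile(r"\bI am (.+)", re.IGNORECASE),
     "How long have you been {0}?"),
    (re.compile(r"\bmy (\w+)", re.IGNORECASE),
     "Tell me more about your {0}."),
]

def respond(utterance: str) -> str:
    """Return the first matching template, or a generic fallback prompt."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*match.groups())
    return "Please go on."

print(respond("I am worried about robots"))
# How long have you been worried about robots?
```

A handful of such rules can sustain a surprisingly plausible exchange, which is exactly why superficial imitation says so little about genuine intelligence.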
Eliza showed that it is possible to fool some people some of the time. Yet because Loebner’s $25,000 prize has never been claimed, we can conclude that a properly administered Turing test remains a rigorous test of human-like intellect.
Measuring artificial creativity
But are there other facets of human intellect that artificial intelligence can imitate more convincingly?
Mark Riedl of Georgia Tech in the United States has proposed assessing the creativity of artificial intelligence.
Riedl’s Lovelace 2.0 test requires an AI to create an artefact that satisfies a reasonable but arbitrarily complex set of design constraints. A judge imposes these constraints and decides whether the attempt succeeded. Meeting the constraints is taken as evidence of creativity.
The judge might, for example, ask the AI to “write a story about a boy who falls in love with a girl, aliens who abduct the boy, and the girl who saves the world with the help of a talking cat.” Unlike in the Turing test, the machine’s performance is not compared with a human’s.
Riedl suggests we disregard aesthetics and instead evaluate whether the output meets the stated requirements. So if the computer produces a science-fiction story in which Lotte, Totte, and blue-eyed Mis repel ET, it passes the test, even if the story is as unoriginal as a children’s book.
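The constraint-checking at the heart of this can be made concrete with a toy sketch. In the real Lovelace 2.0 test a human judge sets and evaluates the constraints; the naive keyword check below is only my illustration, but it shows why pure constraint satisfaction feels more like problem-solving than creativity.

```python
# Toy check in the spirit of Lovelace 2.0: a story "passes" if every
# required element is merely mentioned, with no regard for aesthetics.
def satisfies_constraints(story: str, required_elements: list[str]) -> bool:
    """True if every required element appears somewhere in the story."""
    text = story.lower()
    return all(element.lower() in text for element in required_elements)

constraints = ["boy", "girl", "aliens", "talking cat"]
story = ("A boy fell in love with a girl. Aliens abducted him, "
         "but the girl saved the world with the help of a talking cat.")

print(satisfies_constraints(story, constraints))
# True
```

Note that the dullest possible story passes as readily as an inspired one, which anticipates the objection in the next paragraph.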
I like the idea of testing creativity. Some of the skills and traits at the foundation of human intelligence are still beyond the grasp of AI developers. The heart of Riedl’s test, however, appears to be the fulfilment of specified requirements — in other words, problem-solving.
That may be hard enough, but it is certainly not everyone’s notion of creativity. And with the competitive element of Turing’s verbal tennis match removed, the Lovelace 2.0 evaluation becomes far too subjective.