Two bots that were supposed to learn to communicate with people instead quickly developed their own language. The test run was terminated. The media outcry was loud, but the real danger was small.
Although artificial intelligence (AI) is a new phenomenon for many people, science has been studying the topic since the middle of the last century.
Asimovian robot laws and the Turing test
It was the Russian-born American writer and biochemist Isaac Asimov who in 1942 formulated the three Laws of Robotics named after him.
According to these, a robot may not injure a human being or, through inaction, allow a human being to come to harm. Furthermore, a robot must obey the orders given to it by humans, unless such an order would conflict with the first law. Finally, a robot must protect its own existence, as long as doing so does not violate the first two laws.
Only eight years later, the British computer scientist Alan Turing proposed the Turing test. The purpose of this experiment is to determine whether a machine's conversational behavior is indistinguishable from a human's.
For this, a person communicates via computer with two entities: a person and a machine. If the questioner cannot say with confidence after the conversation which was the human and which was the machine, the AI has passed the Turing test.
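The setup Turing described can be sketched in a few lines of code. The following is a purely illustrative simulation, not anything from the article: the respondent replies and the judging strategy are hypothetical stand-ins, and the point is only the structure of the game, where a judge questions two anonymous respondents and guesses which is the machine.

```python
import random

def human_reply(question):
    # Stand-in for a human participant typing an answer (illustrative only).
    return f"Honestly, I'd have to think about '{question}' for a moment."

def machine_reply(question):
    # Stand-in for a chatbot imitating a human; here it imitates perfectly.
    return f"Honestly, I'd have to think about '{question}' for a moment."

def imitation_game(questions, judge):
    # Randomly assign the two respondents to anonymous slots A and B.
    respondents = {"A": human_reply, "B": machine_reply}
    if random.random() < 0.5:
        respondents = {"A": machine_reply, "B": human_reply}
    # Each respondent answers every question; the judge sees only the text.
    transcript = {slot: [fn(q) for q in questions]
                  for slot, fn in respondents.items()}
    guess = judge(transcript)  # the judge names the slot it thinks is the machine
    return respondents[guess] is machine_reply  # True if the judge was right

def random_judge(transcript):
    # A judge who cannot tell the transcripts apart must guess at random.
    return random.choice(list(transcript))

# Over many rounds, a guessing judge identifies the machine only about
# half the time; by Turing's criterion, the machine has passed.
trials = 10_000
wins = sum(imitation_game(["What is love?"], random_judge) for _ in range(trials))
print(f"judge accuracy: {wins / trials:.2f}")
```

Because both transcripts are identical here, no judging strategy can beat chance, which is exactly the passing condition of the test.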
Musk versus Zuckerberg, caution versus openness
Of course, computer-based intelligence was not a relevant issue in the 1950s. Today the field looks different.
Through social networks and smart devices, artificial intelligence reaches deep into our everyday lives. It is therefore not surprising that many big companies are in the first stages of developing AI policies.
In the tech world, there are two big players: on the one hand, there is Tesla boss Elon Musk, who wants clear regulations to protect the public, although he is not fundamentally opposed to the use of AI. On the other hand, Facebook CEO Mark Zuckerberg is keen to push forward with AI and confident about its benefits.
Facebook experiment and technological singularity
It was a Facebook experiment that recently caused headlines. The story goes that two bots from the FAIR program (Facebook AI Research) developed their own language during a test.
Bob and Alice, as the two bots were called, were supposed to learn to communicate better with people. After a while, however, they replaced the English language with their own sentence constructions, which were more logical for the machines than the grammar principles of English.
In the end, the test caused some panicked articles and debate. The big question was: are we humans still in control?
In order to put an end to the fearmongering, researcher Dhruv Batra spoke out about the test. The scientist said, “Even if it seems alarming or unexpected to outsiders that two artificially intelligent machines communicated with one another in their own language, this is an established subfield of AI research that has produced results for decades.”
In the end, the technological singularity, the moment when machines can rapidly develop new abilities without human intervention, is still some way off. For now, Alice and Bob, at least, are no threat to mankind.