Last February OpenAI announced that it had developed an algorithm capable of writing completely plausible spam and fake news messages.
At the time, the team decided not to release it, judging it too dangerous. Instead, OpenAI began a cautious staged study program, publishing only scaled-down versions of the fake-news-writing algorithm so it could evaluate the effects.
Now the group says it has revised its risk estimate, having found no strong evidence of misuse. It has therefore decided to release the complete code of the “liar” algorithm to the public.
The artificial intelligence, called GPT-2, was originally designed to answer questions, translate texts, and summarize content. The researchers then realized (with no small amazement) that the system could also be used to spread enormous amounts of misinformation online.
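To see why, it helps to know that GPT-2 is essentially a text-continuation engine: give it an opening line and it keeps writing in the same register. A minimal sketch, assuming the Hugging Face `transformers` library and the public "gpt2" weights (neither is part of the original announcement):

```python
# Minimal sketch: continue a prompt with the publicly released GPT-2 weights.
# Assumes the Hugging Face `transformers` package is installed.
from transformers import pipeline, set_seed

generator = pipeline("text-generation", model="gpt2")
set_seed(42)  # make the continuation reproducible

# GPT-2 simply continues whatever prompt it is given, which is exactly
# why a misleading headline can be spun into paragraphs of plausible text.
result = generator("Breaking news:", max_length=60, num_return_sequences=1)
print(result[0]["generated_text"])
```

The prompt "Breaking news:" is purely illustrative; any opening line gets continued with the same unearned confidence.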
False alarm? Maybe.
Fortunately, the uses observed during the monitoring period were far more benign: the algorithm has been employed for narrative writing and text-based video games.
In the official post announcing the public release, OpenAI expresses the hope that this artificial intelligence can be used to develop text recognition models capable of spotting fake news on the net, writing that it is releasing the model in part to support research into the detection of synthetic text.
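One simple idea such detection research builds on: text sampled from a language model tends to look statistically "too predictable" to that same model. A minimal sketch of a perplexity check, again assuming the Hugging Face `transformers` and `torch` packages; this illustrates the general technique, not OpenAI's own detector:

```python
# Minimal sketch: score a text's perplexity under GPT-2. Suspiciously low
# values can hint at machine authorship. Illustrative only, not OpenAI's
# released detection tooling. Requires `transformers` and `torch`.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Perplexity of `text` under GPT-2; lower means more predictable."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # Passing the inputs as labels makes the model return the mean
        # cross-entropy loss over the sequence.
        loss = model(**enc, labels=enc["input_ids"]).loss
    return torch.exp(loss).item()

print(perplexity("The quick brown fox jumps over the lazy dog."))
```

A single perplexity score is a weak signal on its own; practical detectors combine statistics like this with classifiers trained specifically on machine-generated text.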
The idea of an AI capable of producing a gigantic mass of hard-to-debunk fake news is unsettling, but history teaches us that such technologies arrive whether we want them or not.
OpenAI would have done well to share its work immediately, giving researchers even more time and means to develop tools able to counter, or at least recognize, artificially generated texts. Better late than never, anyway.