Last February, OpenAI announced that it had developed an algorithm capable of writing entirely plausible spam and fake-news messages.
At the time, the team decided not to release it, deeming it too dangerous. Instead, OpenAI began a cautious, staged study program, releasing only smaller versions of the fake-news-writing algorithm and evaluating their effects.
Now the group says it has revised its risk estimate, having detected no serious misuse so far. It has therefore decided to release the complete code of the "liar" algorithm to the public.
The artificial intelligence, called GPT-2, was originally designed to answer questions, translate text, and summarize content. The researchers then realized (to no small surprise) that the system could also be used to flood the net with an enormous amount of disinformation.
False alarm? Maybe.
Fortunately, the uses observed during the monitoring period were far more benign: the algorithm was mostly used for narrative writing and text-based video games.
In the official post announcing the model's public release, OpenAI expresses the hope that this artificial intelligence can be used to develop detection models able to spot fake news on the net. "We are releasing this model to help research into detecting synthetic text," the post reads.
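To make the detection idea concrete, here is a minimal Python sketch of one common heuristic (not OpenAI's own detector): scoring a passage's perplexity under GPT-2 itself via the Hugging Face transformers library, since machine-generated text often looks less "surprising" to the model than human prose. The model name and the heuristic are illustrative assumptions, not part of OpenAI's release.

```python
# A rough sketch of perplexity-based synthetic-text screening.
# Assumption: the "gpt2" checkpoint from Hugging Face; any threshold
# for flagging text would need to be calibrated on real data.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Return GPT-2's perplexity for `text` (lower = more GPT-2-like)."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # Passing the input ids as labels makes the model return the
        # average cross-entropy loss over the sequence.
        loss = model(enc.input_ids, labels=enc.input_ids).loss
    return torch.exp(loss).item()

sample = "The quick brown fox jumps over the lazy dog."
print(f"perplexity: {perplexity(sample):.1f}")
```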
The idea of an AI capable of producing a gigantic mass of hard-to-debunk fake news is unsettling, but history shows that such technologies arrive whether we want them or not.
OpenAI would have done well to share its work immediately, giving researchers even more time and means to develop tools capable of fighting, or at least recognizing, artificially generated texts. Better late than never, anyway.