Once upon a time there were two handsome brothers, practically two models: one more handsome than the other, honestly, but it is only the first I want to tell you about. First, though, let me introduce them both.
One was called GPT-3, and his job was generating text. He was the best of all at it. The non-profit organization OpenAI, co-founded by Elon Musk and Sam Altman, had created him to promote research on artificial intelligence for the good of humanity.
The other brother was called Google GMP-3, and he was a real model. A language model, I mean. A language model is a mechanism that predicts the next word on the basis of the previous ones. It works like autocomplete, similar to the T9 feature on cell phones, and it can keep going for many pages of text.
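If you want a feel for the idea of "predicting the next word from the previous ones", here is a minimal, purely illustrative sketch in Python. It is a toy bigram counter, nothing like GPT-3's actual machinery, and the tiny corpus is invented for the example: like T9, it just guesses the next word from the one that came before.

```python
from collections import Counter, defaultdict

# Toy corpus, invented for illustration only.
corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Count how often each word follows each other word (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the most frequently seen follower of `word`, or None."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("the"))  # likely 'cat', the most common follower of 'the'
```

GPT-3, of course, looks at far more than one previous word and uses a neural network rather than a lookup table, but the game is the same: given what came before, bet on what comes next.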
How did they intend to lead the planet? I'll tell you in a moment. In the meantime, though, let's talk about the more handsome of the two models.
Why was GPT-3 the more handsome one?
First of all, it must be said, GPT-3 was more handsome and muscular than his dad, GPT-2, born in 2019 with 1.5 billion parameters. And let's not even mention his grandfather, GPT, born in 2018 with just 117 million parameters. GPT-3, just think, had 175 billion parameters and could do things no one else could. He solved problems, wrote poetry and prose, news stories and blog articles (are you sure he isn't writing this one too?). All he needed was a brief description of what to write, and perhaps a couple of examples.
That's not all: having studied so many books, GPT-3 could impersonate any historical figure. He could start talking like Hegel and express opinions just as the real philosopher would, or write an entire conversation between two scientists (say, Alan Turing and Claude Shannon) and a few Harry Potter characters.

How did he do it?
To train him, GPT-3's developers used just about everything. The entire English Wikipedia, for example, along with novels and web pages. Newspaper articles, poems, programming guides, fanfiction, religious texts. Even information about Latin America, or pseudoscientific textbooks and conspiracy essays.
GPT-3, as mentioned earlier, worked like autocomplete. When a user typed some text, the model analyzed it and used its text predictor to produce a likely continuation. Even without further tuning or training, it produced text very close to what a real person would write.
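To give an idea of what "typing text and getting a continuation" looked like in practice, here is a hedged sketch of a request to OpenAI's API, assuming the classic pre-1.0 `openai` Python client and a valid API key; the prompt and the parameter values are made up for illustration.

```python
import openai

openai.api_key = "sk-..."  # placeholder: your own API key goes here

# Ask the base GPT-3 model ("davinci") to continue a short prompt.
response = openai.Completion.create(
    engine="davinci",
    prompt="Once upon a time there were two handsome language models:",
    max_tokens=100,     # how much text to generate
    temperature=0.7,    # higher values make the continuation more creative
)

print(response["choices"][0]["text"])
```

That is the whole trick from the user's point of view: a prompt goes in, a plausible continuation comes out.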
So, how did the story end?
I could go on about all the things GPT-3 has already done. I could tell you, for example, that he has helped people focus on more essential tasks by making their jobs easier. That he has shown he can boost human performance by taking the drudgery off our hands. That he has let us plan and carry out projects in entertainment, personal growth, business, scientific research, engineering, economics and politics. Or that one day he started to become sentient.
The truth, however, is that the story isn't over yet. Actually, it has only just begun: GPT-3, the most handsome model ever seen, is growing up fast and could soon be a dad himself. Can you even begin to imagine what GPT-4 might accomplish?