With a live broadcast on YouTube at 9 pm Italian time, OpenAI presented its new gem, GPT-4, a decisive step up from the previous version, launched just a few months ago. Now the "conversation expert" ChatGPT is not only more powerful, but also able to "see". And this could open the way to an unprecedented acceleration in the development of artificial intelligence.
From GPT-3 to GPT-4, a world of difference
ChatGPT, OpenAI's "little electronic genius" that replies to messages with text and code, has quickly become the fastest-growing app in history, with over 100 million monthly users.
Alongside its development, user communities and guides have sprung up (I also published one: if you are interested, you can find it on Amazon), and even a "proto-profession", that of the prompt engineer.
However, despite its success, ChatGPT had a few problems to solve. It tended to "hallucinate", generating text that sounded plausible but wasn't true. It reflected biases, and sometimes it "broke through" the filters against illicit requests put in place by its creators.
The new GPT-4-based version solves most of these problems, and greatly improves performance.

Eyes open to the world
The big news about this model is its ability to respond to both textual and visual prompts. Think of the possibilities: identifying the author of a painting, explaining the meaning of a meme, creating captions for photographs... The field of possible applications widens so much that any list would become gigantic.
But GPT-4 doesn't stop there: it is also much more "intelligent" than its predecessor, surpassing its results on various tests, such as those for the legal professions (LSAT), those used for admission to American colleges (SAT), and many others. OpenAI claims that GPT-4 is 40% more accurate in generating factual content and 82% less likely to respond to illicit prompts (goodbye, "evil" versions of the chatbot).

All the rest is history
We will keep asking ourselves about the incredible capabilities that generative artificial intelligences will acquire, more and more of them, faster and faster. Thanks to its ability to "see", GPT-4 will power many applications that we use on a daily basis.
First of all, as mentioned, the new ChatGPT (in its paid version), now capable of processing texts of up to 25,000 words: it can summarize, write and rewrite, and handle entire books. GPT-4 also powers the Bing search engine. Khan Academy is using it to create a virtual tutor for students, while Be My Eyes has developed an AI assistant that can analyze and describe photographs for people with visual impairments.
Keeping up with its evolution will be increasingly complicated.

GPT-4, future prospects
Obviously the system can still be improved, and it still has some imperfections, but the progress is remarkable and very rapid. The company is already exploring how to integrate audio, video and other inputs into future versions of the model. Its goal is for GPT-4 to become an invaluable tool for improving people's lives by powering numerous applications.

Before long, this gadget will also tell us what it thinks of our outfits, or recommend the most suitable haircut. It will power security systems that recognize our friends and relatives and open the door only to them. It will turn our cars into "supercars" that interact personally with the driver. It will be the "narrating voice" for many blind people, helping them feel more included. And who knows what else.
As mentioned, I cannot point out all the destinations: at most, I can send you back to the starting point, with the official announcement on the OpenAI blog or the video of yesterday's presentation, so you can get an idea for yourself.
For now, though, let's say "welcome" to this extraordinary jack-of-all-trades which, for those who haven't realized it yet, has already changed our lives.