Someday, in the not-too-distant future, some of your colleagues may be artificial intelligence avatars. You yourself could have an electronic impersonator attend meetings on your behalf. Adorable, isn't it? Yes: but also terrifying.
Some time ago I told you about Hour One, the company that first showed off realistic avatars. That was less than a year ago: it's striking how fast this industry is evolving. Today Synthesia is creating AI avatars for business professionals, and companies are embracing the chance to deploy these cyber lookalikes, because they are extremely realistic. How realistic? Like this:
But will ordinary people want them?
Some people believe artificial intelligence will be a boon for humanity; others fear it will harm us. AI keeps getting better, yet nobody is quite sure where it is headed.
For decades now, the media have portrayed artificial intelligence as equal or superior to human intelligence, but the reality is quite different. At the moment, most AI is built to take over boring tasks. Partly for this reason, after all, no one REALLY sounds the alarm.
But if AI technology enters fields that feel more "dangerous" to us, such as creativity, emotions, morality and analytical power, then people will feel threatened. What's more: they will be threatened. Would you be replaced by an AI impersonator, or would you merely feel replaced by it? A subtle but substantial difference in perception.
A virtual double
Some people already have a hard time telling a real person from one of Synthesia's virtual avatars. The company can create generic and custom AI human lookalikes that can be programmed to "perform" a typed script, translated and spoken in over 50 languages. The avatars are meant for training materials, corporate communications, and even personalized videos. To create your own AI avatar, you just upload a 10-minute video of yourself talking (one of those green-screen videos, to be clear). The technology analyzes the footage, cataloging the person's voice and facial expressions. And there it is: the "perfect" virtual double.
It could make things even harder than they already are
Synthesia is keen to emphasize its attention to the privacy and security of its customers. Hackers don't care: with far less data than Synthesia needs, they have already built scams. There are many cases in which scammers have used deepfakes to clone people's voices or faces (and had money transferred to their accounts). The risk of "cultural" manipulation is also significant: with an AI double of a politician or an influencer, a hacker could put ambiguous or misleading messages in front of the public.
In any case, potential harms aside, Synthesia's AI avatars have generated plenty of hype, and customers are already flocking to them. Over 4,500 companies (including giants like Nike, Google and the BBC) have already used a virtual avatar in some way. If anyone is happy about it, it's the global market.
And us? Are we happy with the AI double?
Like it or not, the era of artificial intelligence doubles is here to stay. AI avatars are a certainty of the near future. They will save us time, they will make companies a lot of money, and they will probably make us doubt everything that passes before our eyes.
One day, some elderly people of the next decade will come to doubt a loving video message from a child asking for money. And they will be right to.