This post is part of “Periscopio”, the LinkedIn newsletter that explores the issues of the future every week and is published in preview on LinkedIn. If you want to subscribe and read it in advance, you can find it here.
Algorithms increasingly govern our lives, guide our choices and fill our days. An algorithm is a "hidden" yet omnipresent system that dominates most of our everyday digital reality.
When you listen to music on iTunes, play a video on YouTube, search for the next birthday gift on Amazon, watch your favorite show on Netflix or even look up news on Google, it is an algorithm that decides which options are available to you and, indirectly, what you will eventually consume.
Algorithms build real "funnels" into which our view of reality falls and is channeled in a precise direction. It's pleasant when Spotify finds a catchy song; it's terrible when a social platform manages to influence the outcome of what should be free elections.
This is enormous influence, carefully planned and held by very few players in the world: the big tech companies.
Does an algorithm have to work like this?
There is something rewarding about achieving our goals through an algorithm. It is like having someone beside us who "gets us on the fly" and tells us, or advises us, exactly what we always want to hear. This is why human beings want more and more of it, and companies deploy algorithms because they guarantee greater profits. But is this the only way it can work? Is it right?
The question has already emerged, we ask it more and more often, and even the readers of this blog do not shy away from it: how can we defend ourselves from the negative effects of artificial intelligence algorithms?
We can limit our use of social media, or even delete our accounts. We can stay offline as much as possible, or at least a few days a month. We can read widely across newspapers to avoid being swayed by fake news and lies. Sure, we can, at the cost of some sacrifice. But why does it have to depend only on us?

Why do we have to do everything ourselves?
There must be something that the technology companies themselves must do, are forced to do, to improve the situation. We have to question the whole picture, and the picture is this: an algorithm is inherently designed to occupy our time and attention by exploiting our psychological vulnerabilities. Full stop. That is the truth.
And it is a serious matter, especially for the new generations who have grown up amid constant quarrels on social media and the small gratifications of "likes". Advising one another to refrain from browsing, change our habits or make other individual efforts means passively accepting that technology companies will continue to exploit algorithms more and more, and worse and worse.
The real question is a different one: why is an algorithm optimized for engagement instead of well-being? And what would it take to change this state of affairs?
Wanted: an algorithm for happiness
With a modest amount of work, algorithms could be redesigned to defend and strengthen our delicate psychology rather than exploit it. An algorithm should be trained to improve well-being rather than engagement.
Try to imagine how things would improve.
Clearly, Big Tech isn't even considering it. Former Facebook president Sean Parker, who witnessed the birth of the social media giant, said long ago that the platform's main goal was to capture as much of users' time and attention as possible.
Yes, you know the story. The goal is profit, and the currency is our attention. The consequences? An afterthought. Whether they know what they are doing, or whether they are hurtling along without brakes toward a breaking point, the tech companies are doing us damage, and they are responsible for it.

The importance of an AI ethics
Of course, there is (apparently) good news: the advent of AI ethics and of collaborative open source initiatives has put some pressure on these companies. They are now doing what they can to show a commitment to improving their platforms. Google, Facebook, Microsoft and others have hired many social science experts. The goal? To humanize their technologies.
An obviously arduous task, which runs into obstacles from the very start: we all remember Timnit Gebru, the ethics expert fired by Google in 2020 for putting the 'racism' of its artificial intelligence in the dock. Hers was neither the only nor the last dismissal of this kind: in 2021, Margaret Mitchell, from the same Google ethics team, followed.
Yet these experts are being fired for doing exactly what they were hired to do: analyzing the potential risks of the technology. In other words, it's fine to hire ethics experts, as long as they don't interfere with the company's key plans.
AI ethics will remain an unsustainable business practice as long as professionals are unable to do their job, which is to hold the companies they work for accountable.

Putting people before profit
Over the past two years, the reputation and public image of these companies have declined drastically because of these choices. And more and more researchers are banding together to continue working on the ethics of technology without obeying the economic goals of these giants.
And maybe it's for the best: tech companies are unlikely to really listen to their ethics teams when their concern is purely economic. If introducing ethics does not reduce their profits, they will adopt it; if it does, they will prevent these teams from working.
For this reason Timnit Gebru herself founded the Distributed Artificial Intelligence Research Institute (DAIR), and Margaret Mitchell works as a researcher and chief ethics scientist at Hugging Face.
If internal ethics teams cannot do real work (and it seems obvious to me that they cannot), it is better that the solution be sought outside.

The Salvation Army
As mentioned, we are seeing more and more efforts in the field of artificial intelligence, but outside the big tech companies. There are groups working, collectively and individually, to reverse the "algorithm domino": besides the aforementioned DAIR and Hugging Face, there are BigScience, EleutherAI and the Montreal AI Ethics Institute, among others. In Italy we have the Italian Society for the Ethics of Artificial Intelligence.
Perhaps the time has come for even those with real political weight and power to take a more active role in monitoring the societies that have our future in their hands.
An algorithm built around humans
In this regard, UNESCO has drawn up a set of recommendations to ensure that every AI algorithm is human-centered.
"We must create international and national rules and frameworks to ensure that these new technologies benefit humanity as a whole," the document reads.
"It is time for AI to serve people, not the other way around"
“Artificial intelligence already affects our lives. There are legislative gaps in this area that need to be addressed immediately. The first step is to agree on which values should be protected and how regulations should be enforced. There are many frameworks and guidelines, but they are applied unevenly, and none are truly global. And since artificial intelligence is global, we must be too.”

Commitment to the world
The UNESCO treaty was approved just 7 months ago, on November 24, 2021. It is a fundamental first step toward overseeing companies that operate super-powerful technologies in areas the law does not yet cover.
China has also broken new ground with unprecedented regulation of algorithmic power. On March 1, the Chinese government brought into force a law that, among other measures designed to give people decision-making power over technology companies, allows users to completely disable algorithmic recommendations.
The fact that the ethics of artificial intelligence has attracted the attention of global regulatory bodies shows how significant it is for individual and collective well-being. Are we at the beginning of the effort to transform a "sick" algorithm that makes us sick into an algorithm for well-being?
I can't tell. But we have to try at all costs.