This post is part of "Periscopio", the LinkedIn newsletter that delves into Futuro Prossimo themes every week and is published in advance on LinkedIn. If you want to subscribe and read it early, you can find it here.
Algorithms increasingly govern our lives, guide our choices and fill our days. An algorithm is a "hidden", omnipresent system that dominates most of our everyday digital reality.
When you listen to music on iTunes, play a video on YouTube, search for your next birthday gift on Amazon, watch your favorite show on Netflix or look for news on Google, it is an algorithm that decides which options are available to you and, indirectly, what you will eventually consume.
Algorithms build real "funnels" that channel our view of reality in one specific direction. It's nice when Spotify finds you a catchy song; it's terrible when a social platform manages to influence the outcome of what should be a free election.
This is a huge influence, carefully planned and held by very few players in the world: the big tech companies.
Does an algorithm have to work like this?
There is something rewarding about achieving our goals through an algorithm. It's like having someone at our side who quickly "gets us" and always tells us, or advises us on, what we want to hear. This is why we humans always want more of it, and why companies employ algorithms: they guarantee greater profits. But can it only work this way? Is it right?
The question has already emerged, we ask it more and more often, and even the readers of this blog do not shy away from it: how can we defend ourselves from the negative effects of artificial intelligence algorithms?
We can limit our use of social media, or even delete our accounts. We can stay offline as much as possible, or at least a few days a month. We can research news in depth to avoid being influenced by fake news and lies. Of course we can, at the cost of some sacrifice. But why does it have to depend only on us?
Why do we have to do everything ourselves?
There must be something that technology companies themselves must do, or be forced to do, to improve the situation. We have to question the whole picture, and the picture is this: an algorithm is inherently designed to occupy our time and attention by exploiting our psychological vulnerabilities. Period. That is the truth.
And it's a serious matter, especially for the younger generations who have grown up amid constant arguments on social media and the small gratifications of "likes". Advising each other to refrain from browsing, change our habits or make other efforts amounts to passively accepting that tech companies will continue to exploit algorithms more and more, and worse.
The real question to ask ourselves is a different one: why is an algorithm optimized for engagement rather than well-being? And what would it take to change this state of affairs?
Wanted: an algorithm of happiness
With a modest amount of work, algorithms could be modified to defend and enhance our delicate psychology, rather than exploit it. An algorithm should be trained to improve well-being, rather than interaction.
Try to imagine how things would improve.
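To make the idea concrete, here is a minimal, purely hypothetical sketch of what "retargeting" a feed algorithm could mean: the ranking machinery stays the same, only the objective changes. The item names and the `engagement`/`wellbeing` scores are invented for illustration; real platforms estimate these signals with far more complex models.

```python
def rank(items, score):
    """Order feed items by a given scoring function, best first."""
    return sorted(items, key=score, reverse=True)

# Toy feed: each item carries two (invented) predicted signals.
items = [
    {"title": "outrage bait",   "engagement": 0.9, "wellbeing": 0.1},
    {"title": "friend's photo", "engagement": 0.5, "wellbeing": 0.8},
    {"title": "calm long read", "engagement": 0.3, "wellbeing": 0.9},
]

# Today's objective: maximize time and attention spent.
by_engagement = rank(items, lambda i: i["engagement"])

# The alternative: the same algorithm, retargeted at user well-being.
by_wellbeing = rank(items, lambda i: i["wellbeing"])

print([i["title"] for i in by_engagement])
print([i["title"] for i in by_wellbeing])
```

The point of the sketch is that nothing in the sorting step forces the first objective over the second; the choice of what to optimize is a business decision, not a technical constraint.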
Clearly, Big Tech isn't even considering it. Sean Parker, the former president of Facebook who witnessed the birth of the social giant, said long ago that the platform's main goal was to make the most of users' time and attention.
Yes, you read that right. The goal is profit, and the currency is our attention. The consequences? An afterthought. Whether they know what they're doing, or are riding unrestrained toward a breaking point, tech companies are wreaking havoc on us, and they are responsible.
The importance of AI ethics
Of course, there is some (apparently) good news: the advent of AI ethics and open-source collaborative initiatives has put some pressure on these companies. They now do what they can to show a commitment to improving their platforms. Google, Facebook, Microsoft and others have hired many social scientists. The goal? To make their technologies more humane.
An obviously arduous task, which encounters obstacles from the start: we all remember Timnit Gebru, the ethics expert fired by Google in 2020 for putting the "racism" of its artificial intelligence in the dock. Hers was neither the only dismissal of this type, nor the last. She was followed in 2021 by Margaret Mitchell, from the same Google ethics team.
Yet these experts are fired for doing exactly what they were hired to do: analyze the potential risks of the technology. In other words, it's fine to hire ethicists, as long as they don't interfere with the company's key plans.
AI ethics will be an unsustainable business practice if professionals are unable to do their job, that is, hold the companies they work for accountable.
Putting people before profit
Over the past two years, the reputation and public image of these companies have been drastically damaged by these choices. And more and more researchers are banding together to continue working on the ethics of technology without submitting to the economic objectives of these giants.
And maybe it's for the best: tech companies are unlikely to really listen to their ethics teams if the issue is purely economic. If introducing ethics doesn't reduce their profits, they will embrace it; if it does, they'll prevent these teams from working.
That is why Timnit Gebru herself founded the Distributed Artificial Intelligence Research Institute (DAIR), and Margaret Mitchell now works as a researcher and chief ethics scientist at Hugging Face.
If internal ethics teams can't do real work (and it seems clear to me they can't), it's better for the solution to be sought externally.
The Salvation Army
As mentioned, we are seeing more and more efforts in the field of artificial intelligence, but outside the big tech companies. Various players are working, collectively and individually, to reverse the course of the "algorithm domino": in addition to the aforementioned DAIR and Hugging Face, there are BigScience, EleutherAI and the Montreal AI Ethics Institute, among others. In Italy we have the Italian Society for the Ethics of Artificial Intelligence.
Perhaps the time has come for those with real political weight and power to take a more active role in monitoring the companies that hold our future in their hands.
An algorithm built around man
In this regard, UNESCO has developed a set of recommendations to ensure that every AI algorithm is human-centered.
“We must create international and national rules and frameworks to ensure that these new technologies benefit humanity as a whole,” the document reads.
“It's time for AI to serve people, not the other way around”
“AI already influences our lives. There are some legislative gaps in this area that need to be addressed immediately. The first step is to agree which values should be protected and how regulations should be respected. There are many frameworks and guidelines, but they are applied unevenly and none are truly comprehensive. And since AI is global, we must be global too.”
Commitment to the world
The UNESCO treaty was approved just seven months ago, on November 24, 2021. It is a fundamental first step toward controlling companies that operate in legal gray areas with super-powerful technologies.
China has also led the way with unprecedented regulation to rein in the power of the algorithm. On March 1, the Chinese government brought into force a law that, among other measures to give people decision-making power over technology companies, allows users to completely disable algorithmic recommendations.
The fact that AI ethics has attracted the attention of global regulatory bodies shows how significant it is for individual and collective well-being. Are we at the beginning of the journey to transform a "sick" algorithm that makes us sick into an algorithm of well-being?
I can't tell. But we have to try at all costs.