Artificial intelligence, whatever people say, has not yet engulfed humankind, nor has it become self-aware. But it is gradually taking the helm of our days, and of our rules.
Virtual assistants eavesdrop on us in our own homes. Algorithms decide our information horizon, who gets hired, and before long perhaps who gets convicted. The ethical boundaries of their use? Increasingly subtle and blurred. Dangerously blurred.
The Project December case
It looks like an episode of Black Mirror, Charlie Brooker's masterful series investigating the "failures" of technology and the dystopian scenarios they could generate. Not just any episode: one in particular, broadcast in 2013, which fans of the series will know as "Be Right Back". And yet this is not fiction; it is reality.
A few months ago (I told you about it here) a 33-year-old man named Joshua Barbeau used a service called Project December to create a conversational robot (a chatbot) that could simulate conversations with his late girlfriend Jessica.
Through this chatbot, Barbeau exchanged loving text messages with an artificial "Jessica". At the time of that article, perhaps aided by the August heat, I did not question the ethics of it deeply enough.
Today, since there is still no rule regulating such cases, I ask myself: is it ethically permissible or reprehensible to develop a "deadbot", a chatbot that imitates a deceased person?
Deadbot: right or wrong?
Let's take a step back first. Project December was created by Jason Rohrer, a video game developer, using GPT-3, a text-generating language model developed by OpenAI. It was built in violation of OpenAI's guidelines, which explicitly prohibit using GPT-3 for sexual, romantic, self-harm or bullying purposes.
For Rohrer, however, OpenAI is moralistic and people like Barbeau are "consenting adults": Project December continues to operate, though no longer with GPT-3, in open controversy with the company.
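To make the mechanism concrete: reports on Project December describe it as "seeding" GPT-3 with a short description of the person plus a sample of their dialogue, and letting the model continue the conversation in that voice. The sketch below is a minimal, hypothetical illustration of that prompt-assembly step; the function name, wording and sample lines are invented, and the actual service's prompts and code are not public.

```python
# Hypothetical sketch of persona "seeding" for a text-generation model.
# Nothing here is taken from Project December itself; it only illustrates
# the general technique of conditioning a language model with a persona
# description and example dialogue, then letting it continue the chat.

def build_persona_prompt(description, examples, user_message, persona_name="Jessica"):
    """Assemble a text prompt that conditions a language model on a persona."""
    lines = [description, ""]
    for user_turn, persona_turn in examples:
        lines.append(f"You: {user_turn}")
        lines.append(f"{persona_name}: {persona_turn}")
    lines.append(f"You: {user_message}")
    lines.append(f"{persona_name}:")  # the model would continue from here
    return "\n".join(lines)

prompt = build_persona_prompt(
    description="Jessica was warm, ironic, and loved old films.",
    examples=[("How are you today?", "Better now that you're here.")],
    user_message="Do you remember our first date?",
)
print(prompt)
```

The ethical weight of the debate that follows comes precisely from how little this requires: a paragraph of description and a handful of real messages are enough to make a model speak in someone's voice.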
Let's go back to the dilemma: right or wrong? Are Barbeau and others who may have used this service behaving ethically?
(Perhaps) the will of those who remain is not enough ...
Jessica was a real person: is her boyfriend's will enough to justify creating a robot that mimics her? Even after death, people are not mere things that others can do with as they please.
There are specific crimes, such as desecration of a corpse, which show how seriously society considers it wrong to profane or disrespect the memory of the dead. We have moral obligations toward them, because when someone dies, not everything about them ceases to exist. Feelings, memories and examples remain, and it is right to protect them.
Moreover, developing a deadbot that replicates someone's personality requires large amounts of personal information, including social network data, which has been shown to reveal highly sensitive traits.
If it is unethical to use the data of the living without their consent, why should it be ethical to do so with the dead? A deadbot would thus also require the consent of the person being "imitated", in this case Jessica. But would even that be enough?
... nor the will of those who die
The limits of consent are always a controversial issue. One example: some time ago the case of the "Cannibal of Rotenburg" made the headlines (so to speak). A man sentenced to life imprisonment, you can imagine why, despite the fact that his victim had agreed to be eaten.
The conviction was justified on the grounds that it is unethical to consent to things that can harm us, whether physically (selling one's vital organs) or abstractly (alienating one's own rights).
While the dead cannot be harmed or offended in the same way as the living, that does not mean they are invulnerable to bad deeds, nor that such deeds are ethical.
The dead can suffer damage to their honor, reputation or dignity (posthumous smear campaigns, for example), and disrespect toward the dead also harms their relatives.
In short, not even a person's consent to being "eaten" (metaphorically) and "spat out" in the form of a chatbot may be enough.
So how will it go?
We have seen that neither the will of those who want to speak with a "reconstructed" deceased person, nor that of those who want to be "imitated" after death, may be sufficient. Are there ethical ways to do such a thing? And if so, who would be responsible for a deadbot's outcomes, especially in the case of harmful effects?
Imagine Jessica's deadbot autonomously "learning" to behave in ways that debase the memory of the deceased, or that damage her boyfriend's mental health.
Deadbot: whose fault is it?
For artificial intelligence experts, responsibility lies first with those involved in the design and development of the system, and secondly with all the agents who interact with it. In this case the parties involved would be OpenAI, Jason Rohrer and Joshua Barbeau. The first, OpenAI, explicitly forbade the use of its system for such purposes: I see little fault there. The second, Jason Rohrer, designed the deadbot, did so in violation of OpenAI's guidelines, and profited from it: the bulk of the responsibility would lie with him. The third, Joshua Barbeau, should be considered co-responsible for any drift of the deadbot. In any case, apportioning blame would not be easy on a case-by-case basis.
In summary: is a deadbot ethical? Only under certain conditions
If all the parties involved (the person "imitated" by the deadbot, the person who develops it, and the person who interacts with it) have given explicit consent, have specified (and restricted) the permitted uses as much as possible, and take responsibility for any negative outcomes, then it can be done.
These are three strict conditions, which make the process of creating such systems rigid, but they offer serious guarantees.
And they confirm how important ethics is in the field of machine learning: we need to develop rules now, because this technology will take hold at breakneck speed, risking upheaval of our values and our society.