Artificial intelligence (no matter what anyone says about it) has not yet swallowed up the human race, nor has it gained consciousness. But it is gradually taking the helm of our days, and of our rules.
Virtual assistants eavesdrop on us in our own homes. Algorithms shape our information horizon, decide whom to hire and, soon, whom to convict. The ethical boundaries of their use? Increasingly subtle and blurred. Dangerously blurred.
The Project December case
It seems like an episode of Black Mirror, Charlie Brooker's masterpiece series that investigates the "failures" of technology and the dystopian scenarios they could generate. Not just any episode: one in particular, broadcast in 2013 and titled "Be Right Back." Yet it is not fiction; it is reality.
A few months ago (I told you about it here) a 33-year-old man named Joshua Barbeau used a service called Project December to create a conversational robot (a chatbot) that could simulate conversations with his late girlfriend Jessica.
Through this chatbot, Barbeau exchanged affectionate text messages with an artificial “Jessica.” At the time of that article, perhaps with the complicity of the August heat, I did not question the ethical side deeply enough.
Today I ask myself, since there is not yet a law regulating these cases: is it ethically permissible or reprehensible to develop a "deadbot", a conversational robot that imitates a deceased person?
Deadbot: Right or wrong
Let's take a step back first. Project December was created by Jason Rohrer, a video game developer, using GPT-3, a text-generation language model created by OpenAI. It was built in violation of OpenAI's guidelines, which explicitly prohibit the use of GPT-3 for sexual, romantic, self-harm or bullying purposes.
For Rohrer, however, OpenAI is being moralistic, and people like Barbeau are "consenting adults": Project December continues to operate, though without GPT-3, in open controversy with the company.
Let's go back to the dilemma: right or wrong? Are Barbeau and others who may have used this service behaving ethically?
(Maybe) the will of those who remain is not enough...
Jessica was a real person: is her boyfriend's will enough to create a robot that mimics her? Even when they die, people are not mere things with which others can do whatever they want.
There are specific crimes, such as the vilification of a corpse, which show how strongly society considers it wrong to profane or disrespect the memory of the dead. We have moral obligations towards them, because when someone dies, not everything about them ceases to exist: feelings, memories and examples remain, and it is right to protect them.
Again: developing a deadbot that replicates someone's personality requires large amounts of personal information, including social network data, which has been shown to reveal highly sensitive traits.
If it is unethical to use the data of the living without their consent, why should it be ethical to do so with the dead? For this, the consent of the "imitated" person, i.e. Jessica, would also have been needed. But would it have been enough?
…nor the will of those who die
The limits of consent are always a controversial issue. To give an example: some time ago the case of the "Rotenburg cannibal" made headlines (so to speak): a man sentenced to life imprisonment, you can imagine why, even though his victim had agreed to be eaten.
The conviction was motivated by the fact that it is unethical to consent to things that can be harmful to ourselves, physically (selling one's vital organs) or abstractly (alienating one's rights).
While the dead cannot be harmed or offended in the same way as the living, this does not mean that they are invulnerable to wrongful deeds, nor that such deeds are ethical.
The dead can suffer damage to their honor, reputation or dignity (for example, posthumous smear campaigns) and disrespect for the dead also harms their family members.
In summary, not even a person's consent to be "eaten" (metaphorically) and "spit out" in the form of a conversational robot could be enough.
So how will it go?
We have seen that neither the will of those who want to speak with a "reconstructed" deceased person, nor the will of those who want to be "imitated" after death, may be sufficient. Are there ethical ways to do something like this? If so, who would be responsible for the outcomes of a deadbot, especially in the case of harmful effects?
Imagine Jessica's deadbot autonomously “learning” to behave in a way that diminishes the memory of the deceased, or that damages the mental health of her boyfriend.
Deadbot: whose fault is it?
For artificial intelligence experts, responsibility lies first with those involved in the design and development of the system, and secondarily with all the agents who interact with it. In this case, the parties involved would be OpenAI, Jason Rohrer and Joshua Barbeau. The first, OpenAI, explicitly forbade the use of its system for these purposes: I see little fault there. The second, Jason Rohrer, designed the deadbot, did so in violation of OpenAI's guidelines, and profited from it: the bulk of the responsibility would lie with him. The third, Joshua Barbeau, would be co-responsible for any drifts of the deadbot. In any case, it would not be easy to settle this case by case.
In summary: would the deadbot be ethical? Only under certain conditions
If all the parties involved (the person "imitated" by the deadbot, the person who develops it and the person who interacts with it) have given explicit consent, have specified (and restricted) the permitted uses as much as possible, and assume responsibility for any negative outcomes, it can be done.
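To make the three conditions concrete, here is a minimal sketch that encodes them as a precondition check. Everything here is illustrative and hypothetical (the names `DeadbotRequest` and `is_ethically_permissible` are my own, not any real API or legal standard): it simply shows that the framework is a conjunction, where failing any single condition blocks the whole project.

```python
# Hypothetical sketch of the article's three conditions for an
# ethically permissible deadbot. Names are illustrative, not a real API.
from dataclasses import dataclass, field


@dataclass
class DeadbotRequest:
    imitated_person_consented: bool   # explicit consent of the person imitated
    developer_accepts_liability: bool  # developer assumes responsibility
    user_accepts_liability: bool       # interacting user assumes responsibility
    permitted_uses: list = field(default_factory=list)  # uses must be enumerated


def is_ethically_permissible(req: DeadbotRequest) -> bool:
    """All three conditions must hold, and permitted uses must be
    explicitly listed (i.e. restricted), for the project to proceed."""
    return (
        req.imitated_person_consented
        and req.developer_accepts_liability
        and req.user_accepts_liability
        and len(req.permitted_uses) > 0
    )
```

The conjunction mirrors the article's argument: consent of the survivor alone, or of the deceased alone, is never enough on its own.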
These are three strict conditions, which make the process of creating these systems rigid, but they offer serious guarantees.
And they confirm how important ethics is in the field of machine learning: we need to develop rules now, because this technology will take hold at breakneck speed, risking upsetting our values and our society.