Imagine the scene: the judge calls the witness, the screens light up, and the face of a man who died four years ago appears. He wears his gray baseball cap, his red beard neatly trimmed, and he begins to speak in the voice his family remembers perfectly. "Hi, I'm an AI-created version of Chris," he tells the stunned courtroom. It is the first court testimony from a deceased person made possible by deepfake technology. And it is unleashing an ethical earthquake.
The case that changed judicial history
Christopher Pelkey was a 37-year-old US Army veteran with three tours in Iraq and Afghanistan behind him. His life was cut short on November 21, 2021, in Arizona, during what should have been a trivial traffic incident. Gabriel Horcasitas, 54, shot him after an argument at a traffic light. Christopher had gotten out of his car to defuse the situation; he never got back in.
Last month, during Horcasitas' sentencing hearing, something happened that had never been seen before in the history of the courts. Stacey Wales, Christopher's sister, together with her husband Tim Wales and their friend Scott Yentzer, presented a video that left the room speechless.
It was not a simple memorial film, but an avatar created with artificial intelligence that reproduced the appearance and voice of her late brother.
The technology behind the avatar
Christopher's avatar was made using Stable Diffusion, one of the most advanced artificial intelligence platforms for image generation. The family fed the system hundreds of photos, videos, and audio recordings, including an interview Christopher gave months before his death. The result was a cinematic-quality deepfake: every facial expression and every vocal inflection faithfully reproduced the veteran's features.
“We gathered testimonies from everyone who knew him,” says Stacey Wales. “From his elementary school teacher to his fellow soldiers in Afghanistan. We wanted to make sure that what Christopher would say truly reflected his character.”
The “script” was written by his sister, but every word was designed to reflect the personality of a man described as “the type who would take his shirt off and give it to you if you needed it.”
The words that moved the court
Christopher’s avatar addressed his killer directly with words that shook the entire courtroom: “In another life, maybe we could have been friends. I believe in forgiveness. In a God who forgives.” The message concluded with a greeting that touched everyone in the room: “I’m off fishing now. I love you all. See you on the other side.”
Judge Todd Lang did not hide his emotion. "I loved that AI," he said at sentencing. "Even though the family was rightfully angry and calling for the maximum sentence, it allowed Chris to speak from the heart as they saw him. I did not hear him calling for the maximum sentence."
Lang then sentenced Horcasitas to ten and a half years in prison, the maximum for manslaughter. Perhaps, in the end, that video did nothing but serve the family's thirst for justice. But the point, as you can imagine, is not that.
The ethical debate that divides experts
The court testimony has sparked a controversy that goes far beyond the specific case. Derek Leben, professor of business ethics at Carnegie Mellon University and author of "Ethics for Robots," expressed strong doubts about the matter:
"I don't question the intentions of this family, but I fear that not everyone will use AI correctly. If other families create avatars, will they always be faithful to the wishes of the victim?"
The issue touches on one of the raw nerves of the digital age: posthumous consent. How can we be sure that the words spoken by the avatar really reflect what Christopher would have said? And above all, who has the authority to decide what a dead person can or cannot say?
A precedent that makes you think
As I have written here before, using artificial intelligence to recreate the deceased is not an absolute novelty. "Thanabots" (chatbots of the dead) have existed for some time, generally for recreational purposes. I recently told you, for example, about writing courses taught "directly," so to speak, by the great writer Agatha Christie, "resurrected" by artificial intelligence. But the Pelkey case marks a qualitative leap: for the first time, a digital avatar has directly influenced a judicial decision in a court of law.
And now a storm is brewing. The Judicial Conference of the United States has already announced that it will launch a public consultation on regulating the use of AI-generated evidence in court proceedings. Gary Marchant, professor of law at Arizona State University, warns:
“There is a real concern among prosecutors and lawyers that deepfakes will be used more and more. They are easy to create and anyone can do it with a phone.”
“AI Witnesses” in Court: The Future of Digital Justice
The Christopher Pelkey case is not an isolated incident. It represents the beginning of a new era in which technology redefines the boundaries between life and death, memory and manipulation. As several studies demonstrate, deepfakes are considered by experts to be one of the main threats brought by artificial intelligence, precisely because of their ability to erode trust in audiovisual evidence.
While the Wales family believes they have given Christopher the opportunity to express his final message, the scientific and legal community is wondering about the risks of a technology that could transform courts into theaters of emotion rather than places of justice. Christopher's voice, real or reconstructed, has certainly left its mark. But it has also opened up questions that society will have to face very soon: to what extent are we willing to let artificial intelligence speak for us, even after death?
The answer will determine not only the future of the courts, but the very way we think about human identity in the digital age.