This morning an algorithm decided that Maria, 45, didn’t deserve a loan. Last night another system rejected Ahmed’s resume for an interview. Tomorrow, perhaps, it will be your turn. Welcome to the era of automated decisions without human oversight, where ethical AI becomes the only barrier between ruthless efficiency and social justice.
The more companies rush towards total automation, the more a counter-trend emerges that absolutely must be supported: algorithmic ethics boards that put philosophers, psychologists and engineers around the same table. To do what? Good question. To save companies, and our future.
When Algorithms Become Implacable Judges
The killing of Brian Thompson, the UnitedHealthcare CEO shot in Manhattan in December 2024, cast a grim light on the drift of uncontrolled automation. The suspect, Luigi Mangione, allegedly left three words on the shell casings that say more than a thousand ethics treatises: “deny, defend, depose”. Behind this extreme act lies the story of a health-insurance algorithm called nH Predict, designed to streamline medical reimbursements but which morphed into an automated system for denying care.
The system’s 90% “error” rate wasn’t a bug: it was a feature. UnitedHealthcare knew perfectly well that the algorithm systematically ignored doctors’ assessments, but kept using it because it worked so well economically. Thousands of elderly patients were denied life-saving care not by a medical board, but by lines of code programmed to save money.
This case demonstrates with surgical brutality what happens when ethical AI is treated as optional. In the early stages of their search for a culprit and a motive, investigators faced a startling reality: fifty million customers had potential reasons to resent the company. Fifty million potential suspects, and fifty million victims at the same time.

Ethical AI Committees: Humanity's Last Line of Defense
With cases like UnitedHealthcare’s serving as a warning, forward-thinking companies are experimenting with a solution as simple as it is effective: multidisciplinary ethics committees that review every algorithmic decision before it goes live. These are not post-mortem auditors, but preventive gatekeepers who assess the moral implications of proposed AI solutions.
IBM led the way, establishing an internal AI ethics board as early as 2018, co-chaired by its Chief Privacy Officer and composed of representatives from research, business units, communications, legal, and privacy. The rule is ironclad: if a product proposal doesn’t pass the AI ethics board, the product doesn’t get built.
The trend is spreading rapidly. Many organizations are establishing ethics committees involving experts from different backgrounds to assess the moral implications of proposed solutions. It is essential to implement transparency and accountability policies to ensure that algorithmic decisions are understandable and justifiable.
The Team of Ethical AI Superheroes

Who exactly am I talking about when I say “ethics committee”? It’s not enough to put four computer engineers around a table with a pizza box, like the nerds in an 80s TV show. The most effective ethics boards bring together cross-disciplinary skills worthy of true moral “Avengers”.
Ethical philosophers unravel the most complex moral knots, offering theoretical frameworks on the common good, distributive justice and human dignity. Anthropologists explain how different values, norms and cultures shape the perception of what is “right” or “desirable”. Psychologists predict the impact automated decisions can have on the human psyche, from trust in authority to self-perception.
Legal experts must not be missing either, to anticipate legal implications and build a regulatory framework that can evolve with the technology. And of course data scientists analyze the data and ensure that algorithm design meets fairness and transparency criteria. As we have highlighted in this article, Artificial Integrity cannot be the responsibility of developers alone: it requires everyone’s collaboration.
From the Wild West to Preventive Decision Control
The old model expected (perhaps I should say “expects”) that companies would first develop algorithms and then, eventually, worry about the ethical consequences. A sort of digital Wild West where everything was allowed as long as no one complained too loudly. The new paradigm completely reverses the perspective: every business decision involving AI goes through a preventive ethical filter.
This doesn’t mean stifling innovation; it means steering it. The most sophisticated ethics committees are not “no-men” but “how-men”: they don’t just say what can’t be done, they suggest how to do it the right way. Each algorithm is analyzed not only for its technical effectiveness, but for its social impact, its potential discriminatory bias, and its long-term consequences for society.
The preventive approach also has clear economic benefits. Preventing ethical scandals costs infinitely less than managing their consequences: reputational damage, lawsuits, loss of customer trust. An AI ethics committee can avert potential reputational damage and remove obstacles to a lasting relationship with your customers and stakeholders.
Governance that saves companies from themselves
But how does an algorithmic ethics board actually work? In the typical process, each new AI system, and each substantial modification to an existing one, goes through a structured evaluation. Experts analyze training data to identify potential biases, check the transparency of the decision-making process, and assess the impact on people’s fundamental rights.
Evaluation is not a simple green or red stamp. It is an iterative process that often leads to substantial changes in the proposed algorithms. Sometimes it means changing the training datasets to make them more representative. Other times it means adding explainability mechanisms to make decisions more transparent. In extreme cases, it means throwing everything away and starting over.
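To make the bias-audit step concrete, here is a minimal sketch of one fairness metric an ethics board might compute before approving a model: the demographic parity gap, i.e. the largest difference in approval rates between groups. The function name, threshold and data are hypothetical illustrations, not any committee's actual tooling.

```python
from collections import defaultdict

def demographic_parity_gap(decisions, groups):
    """Largest difference in approval rate between any two groups.

    decisions: list of 0/1 outcomes (1 = approved)
    groups:    list of group labels, parallel to decisions
    """
    approved = defaultdict(int)
    total = defaultdict(int)
    for d, g in zip(decisions, groups):
        approved[g] += d
        total[g] += 1
    rates = [approved[g] / total[g] for g in total]
    return max(rates) - min(rates)

# Hypothetical audit sample: group A is approved far more often than group B.
decisions = [1, 1, 1, 0, 1, 0, 0, 0]
groups    = ["A", "A", "A", "A", "B", "B", "B", "B"]

gap = demographic_parity_gap(decisions, groups)
print(f"demographic parity gap: {gap:.2f}")  # 0.75 - 0.25 = 0.50
```

In practice a committee would combine a metric like this with others (equalized odds, calibration) and with qualitative review; no single number is ever the whole evaluation.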
Ethical governance of AI is not just a matter of regulatory compliance. It’s a matter of corporate survival in a world where consumer trust has become the most valuable currency. Companies that fail to demonstrate responsible use of AI risk ending up in the same situation as UnitedHealthcare: technically efficient but morally bankrupt.
When the machine has more morality than man
Paradoxically, we are reaching a point where algorithms may be more ethical than their creators. Not because machines have developed a moral conscience, but because preventive control systems are more rigorous than traditional human supervision.
An executive under pressure to meet quarterly targets might be tempted to turn a blind eye to ethically questionable decisions. An algorithm overseen by an ethics committee doesn’t have that flexibility: either it meets the programmed moral parameters, or it doesn’t run. Human control remains fundamental, but it is, and must be, exercised upstream, in the design phase, not downstream when it is too late.
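The “either it meets the moral parameters or it doesn’t work” idea can be sketched as a hard deployment gate: the release pipeline simply refuses to ship a model whose audit metrics exceed the limits the committee has set. The metric names and thresholds below are illustrative assumptions, not a real framework.

```python
class EthicsGateError(Exception):
    """Raised when a model fails the committee's pre-deployment checks."""

# Hypothetical limits an ethics committee might set for this sketch.
LIMITS = {
    "demographic_parity_gap": 0.10,  # max approval-rate spread between groups
    "denial_overturn_rate": 0.20,    # max share of denials reversed on appeal
}

def ethics_gate(audit_metrics, limits=LIMITS):
    """Block deployment unless every audited metric is within its limit."""
    violations = {
        name: value
        for name, value in audit_metrics.items()
        if name in limits and value > limits[name]
    }
    if violations:
        raise EthicsGateError(f"deployment blocked: {violations}")
    return True

# A model with a 90% overturn rate, like the one alleged in the nH Predict
# lawsuit, would never make it past such a gate.
try:
    ethics_gate({"demographic_parity_gap": 0.04, "denial_overturn_rate": 0.90})
except EthicsGateError as e:
    print(e)
```

The point of the design is that the check has no override path: unlike a human supervisor, the gate cannot be talked into making an exception for a quarterly target.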
This approach is already producing concrete results. Companies that have implemented ethics boards report a significant reduction in complaints of algorithmic discrimination, increased customer trust and, paradoxically, greater operational efficiency, because ethically designed algorithms also tend to work better from a purely technical point of view.
Ethical AI, the future has already knocked on the door
What I have just outlined is not a futuristic scenario. It is a reality that is already taking shape in the boardrooms of the most astute companies. The European AI Act and new international regulations are making mandatory what until yesterday was voluntary.
Companies that don’t adapt won’t have problems in ten years: they’ll have them in ten months. Consumers are increasingly concerned about the ethical implications of the technologies they use. Investors are increasingly sensitive to the reputational risks associated with AI. Regulators are increasingly prepared to intervene harshly against algorithmic abuse.
So the question isn’t whether your company needs an AI ethics committee. The question is whether it needs one before or after the first scandal that could destroy the reputation you’ve spent decades building. UnitedHealthcare should serve as a lesson: When algorithms make decisions without ethical human oversight, the consequences can be devastating not only for the victims, but also for those who use them.
Hopefully, the age of wild AI will end, replaced by the age of ethical AI: an age where a moral compass is not optional but a necessity for survival.