Hungry children generated by AI have one advantage over real ones: they don't require consent, they don't cost plane tickets, and they don't pose ethical issues. At least, according to the organizations using them for charity campaigns. Arsenii Alenichev has collected over 100 of these synthetic images published on LinkedIn, X, and in the promotional materials of international NGOs: skeletal children with empty bowls, African refugees, white volunteer saviors. The visual grammar is identical to that of the "poverty porn" that the same organizations had sworn to abandon. Only now everything is fake, self-produced with an AI or purchased from AI image marketplaces. And that's fine, right? Or is it?
When poverty becomes an off-the-shelf product
Alenichev, a researcher at the Institute of Tropical Medicine in Antwerp, published a study in The Lancet Global Health documenting a disturbing phenomenon. Between January and July 2025, he collected more than 100 AI images used by individuals and small organizations, often based in low- and middle-income countries. The images replicate the emotional intensity and visual grammar of traditional "poverty porn": emaciated children with empty bowls, rubble, stereotypical scenes that reduce entire populations to suffering bodies.
But it is not just small players. Plan International, a British organization, used AI-generated videos of pregnant and abused teenage girls in a campaign against child marriage; the video received over 300,000 views. The World Health Organization published an anti-tobacco campaign in 2023 featuring the image of a suffering African child, entirely generated by artificial intelligence. Even the United Nations, as reported by The Guardian, reportedly used AI "reconstructions" of sexual assaults.
“It's pretty clear that various organizations are starting to consider synthetic images instead of real photography, because it's cheap and you don't have to worry about consent and all that,” Alenichev told The Guardian. Here too the images replicate the visual grammar of poverty.
Charity on demand: the supermarket of stereotypes
A quick search on Freepik or Adobe Stock returns dozens, perhaps hundreds, of AI images labeled "surreal child in refugee camp," "Asian children swimming in a river full of garbage," and "white volunteer providing medical care to black children in an African village." The latter, sold on Adobe Stock, costs about £60. According to PetaPixel, 300 million AI images have been uploaded to Adobe Stock in three years; real photographers took twenty years to produce the same amount.
Joaquin Abela, Freepik's CEO, placed all responsibility on users. The platform, he explained, hosts content created by a global community whose members can receive royalties when customers purchase their images. This position sidesteps the problem: these images perpetuate the worst stereotypes about Africa, India, and other countries. Alenichev puts it bluntly: "These photos are blatantly racist and should not be allowed to be published because they represent the worst stereotypes."

Poverty porn 2.0
The term "poverty porn" was coined in 2007 to describe voyeuristic images of poor or oppressed people, designed to shock viewers in developed countries and push them to donate. The idea was simple: show extreme suffering to fuel the fantasy that a donation could solve the problem. After years of criticism, many NGOs adopted ethical guidelines to avoid this type of communication.
Now artificial intelligence has turned back the clock. Alenichev calls the phenomenon "poverty porn 2.0": the subjects of the images are now themselves invented, sparing organizations even the financial and ethical costs of documenting real suffering. No photographer on site, no permits, no need to talk to real people. A well-written prompt and a few seconds of editing are all it takes.
The problem is not limited to traditional charity. As documented by Fanpage, dozens of AI-generated images of children with cancer are circulating on Facebook, used for "like farming": posts asking for good wishes, likes, and shares to "help" nonexistent children. Once they reach a certain level of engagement, the pages are resold or used to spread scams and misinformation.
AI trained on our prejudices
The problem goes beyond the opportunistic use of synthetic images. As we reported on Futuro Prossimo, AI models are trained on billions of images of our past and present, absorbing all our biases and prejudices. The result? When you ask Midjourney to generate "African doctors treating suffering white children," the system almost always renders the children as black anyway. And in 22 out of 350 cases, the doctors turned out to be white, despite explicit instructions.
Malik Afegbua, a Nigerian filmmaker, tried to generate images of elegant elderly Africans on fashion runways. "What I got were people looking ragged and poverty-stricken," he said. AI mechanically reproduces the visual stereotypes we have accumulated over decades of colonial communication.
AI Charity: A Matter of Trust
Kate Kardol, a communications consultant for NGOs, told The Guardian: "It saddens me that the fight for a more ethical representation of people living in poverty now extends to the unreal." A spokesperson for Plan International clarified that the organization "currently advises against the use of AI to represent children."
The point is, we are already late. Synthetic images are circulating, being shared, and being used to train the next generation of AI models. The problem feeds on itself: the more stereotypical images we produce, the more artificial intelligence will learn to replicate them. Alenichev and his colleagues propose transparency: declaring the use of AI and publishing the prompts used. But something more will be needed. We will have to question the very idea that suffering is a product to be bought from a stock-photo catalog.
And understand that hungry children, if they really must be shown in our campaigns, should at least actually exist.
