AI, Poverty & Ethics: When NGOs Use AI Imagery

by Archynetys News Desk

Can humanitarian organizations use images generated by artificial intelligence to raise funds? This ethical question has arisen since certain NGOs began using AI-produced photos and videos of fictitious victims in their fundraising campaigns.

A study recently published in The Lancet shows that the phenomenon is growing. The practice has sparked controversy, with some even speaking of “poverty porn”, by analogy with the “food porn” label given to the images of culinary creations displayed on social networks.

Arsenii Alenichev, the author of the study, cites several recent examples: a campaign by the NGO Plan International against forced child marriage that used AI-generated videos of pregnant and abused girls, and a UN campaign against sexual violence in conflict featuring AI-created survivors.

Search for the “perfect” image

Using synthetic images offers two advantages: it costs less, and it protects the identity of the victims.

You can ‘dictate’ your idealized scenario to the AI

Valérie Gorin, director of training at the Center for Humanitarian Studies at the University of Geneva

“The idea is to be able to create the perfect image, that is, an image embodying the supposedly ideal way of representing a victim, in the pose we want, with the characteristics we want,” explains Valérie Gorin, director of training at the Center for Humanitarian Studies at the University of Geneva. “We can ‘dictate’ our idealized scenario to the AI.”

According to Valérie Gorin, AI also makes it possible to sidestep the ethical questions raised by real photos: “Organizations are required to obtain consent from everyone who appears in an image. Yet we know that consent raises a great many problems.”

Stereotypes of poverty

Synthetic images, however, perpetuate stereotypes of poverty and invite a form of voyeurism that researcher Arsenii Alenichev calls “poverty pornography 2.0”. He catalogued around a hundred AI-generated images used in online humanitarian campaigns that trade in miserabilist clichés.

Such images are offered ready to use on stock sites like Adobe Stock or Freepik, and from there they flood social networks.

“Image banks are filling up with AI-generated images. It has become easier today for small charitable organizations, for example, to find illustrative images for their campaigns and causes at a very affordable price,” says Maria Gabrielsen Jumbert, a researcher at PRIO, the Peace Research Institute Oslo, and at CERI in Paris.

Many AI-generated images depicting situations of poverty show children (screenshot from freepik.com)

Often children

Maria Gabrielsen Jumbert likewise describes “absolutely stereotypical” images, reproducing, for example, scenes of “children surrounded by disastrous hygienic conditions, sitting in the mud”.

The figure of the child recurs among these AI-generated images because there is a hierarchy of victims, explains Valérie Gorin. Since the child evokes “innocence”, it is the “ideal” figure, she stresses. Women and the elderly also feature prominently, but men do not.

>> Read again: Valérie Gorin: “In humanitarian work, women and children are better sellers”

A trap for NGOs?

Beyond the stereotypes amplified by AI, its use above all poses a credibility problem for humanitarian actors. Documentary photographer Niels Ackermann sees it as a trap for NGOs.

Talking about a real problem with a fake image opens the door to deep distrust

Niels Ackermann, documentary photographer

“When we talk about humanitarian work, we are talking about real problems on the ground that we want to solve. And when we talk about a real problem with a fake image, that opens the door to deep distrust,” says the photographer.

For him, this is “all the more problematic” at present because “humanitarian organizations are exposed to attacks from certain governments and groups seeking to discredit them. Those who attack them do not hesitate to use false and misleading content. But if the organizations do the same, they are sawing off the branch they sit on, and that will destroy trust.”

Radio subject: Malika Nedir

Web adaptation: Julie Liardet
