Twitter apologizes for allowing ads to target neo-Nazis and other hate groups

Twitter has apologized for allowing ads to be micro-targeted at certain users, such as neo-Nazis, homophobes and other hate groups.

The BBC discovered the problem, prompting the technology company to act.

Our research found that it was possible to target users who had shown interest in keywords such as “transphobic,” “white supremacists” and “anti-gay.”

Twitter allows ads to target users who have published or searched for specific topics.

But the company has now said it regrets not having excluded discriminatory terms.

Anti-hate charities had expressed concern that the American technology company's advertising platform could be used to spread intolerance.

What exactly was the problem?

Like many social media companies, Twitter creates detailed profiles of its users by collecting data on the things they post, like, watch and share.

Advertisers can take advantage of this by using its tools to select a campaign audience from a list of characteristics, for example "parents of teenagers" or "amateur photographers."

The Twitter ads tool had allowed sensitive keywords to be targeted

They can also control who sees their message by using keywords.

Twitter gives the advertiser an estimate of how many users will likely qualify as a result.

For example, it would tell a car website that wanted to reach people who used the term "petrolhead" that the potential audience was between 140,000 and 172,000 people.

Certain keywords were supposed to be restricted on Twitter.

But our tests showed that it was possible to advertise to people who had used the term "neo-Nazi."

The advertising tool had indicated that in the United Kingdom, this would target a potential audience of 67,000 to 81,000 people.

Other more offensive terms were also an option.

How did the BBC prove this?

We created a generic ad from an anonymous Twitter account that said “Happy New Year.”

We then targeted it at three different audiences based on sensitive keywords.

Twitter's website said ads on its platform would be reviewed before going live, and the BBC's ad initially entered a "pending" state.

But soon after, it was approved and ran for a few hours until we stopped it.

By then, 37 users had seen the post and two of them had clicked an attached link, which directed them to a news article about memes. Running the ad cost £3.84.

Targeting an ad using other problematic keywords appeared to be just as easy.

According to the Twitter tool, a campaign using the keywords "Islamophobes", "Islamophobia", "Islamophobic" and "#Islamophobic" had the potential to reach between 92,900 and 114,000 Twitter users.

Advertising to vulnerable groups was also possible.

We ran the same ad to an audience aged 13 to 24 using the keywords "anorexic," "bulimic," "anorexia" and "bulimia."

Twitter estimated the target audience at 20,000 people. The post was viewed by 255 users, and 14 people clicked on the link before we stopped the campaign.

What did the activists say?

Hope Not Hate, an anti-extremism charity, said it feared that Twitter's ads could become a propaganda tool for the far right.

"I can see this being used to promote engagement and deepen the conviction of people who have indicated partial or full agreement with intolerant causes or ideas," said Patrik Hermansson, its social media researcher.

The eating disorder charity Anorexia and Bulimia Care added that it believed the advertising tool had already been abused.

"I've been talking about my eating disorder on social media for some years, and I've been targeted many times with ads for dietary supplements, weight loss supplements and spinal corrective surgery," said Daniel Magson, president of the organization.

"It's quite triggering for me, and I am campaigning in Parliament for it to stop. So it is great news that Twitter has acted."

What did Twitter say?

The social network said it had policies in place to prevent abuse of keyword targeting, but acknowledged that they had not been enforced correctly.

"[Our] preventative measures include banning certain sensitive or discriminatory terms, which we update continuously," the company said in a statement.

"In this case, some of these terms were permitted for targeting purposes. This was an error.

"We regret that this happened and, as soon as we found out about the problem, we rectified it."

"We continue to enforce our advertising policies, including restricting the promotion of content in a wide range of areas, including inappropriate content targeting minors."
