Meta-owned Facebook and Instagram, as well as TikTok, do not make it easy for users to report illegal content, such as child sexual abuse material and terrorist content.
The EU's executive body has accused tech giants Meta and TikTok of violating the transparency obligations they are subject to under the European Digital Services Act (DSA). The Commission faults them for restricting accredited researchers' access to their internal data, a breach for which the companies face fines of up to 6% of their annual global turnover.
The Commission's services fault TikTok and Meta for developing procedures and tools that are “cumbersome” to use. In practice, the European Commission says, researchers obtain only “partial or unreliable” data, which affects “their ability to conduct investigations, such as whether users, including minors, are exposed to illegal or harmful content.”
In TikTok's case, Brussels also considers that the Chinese technology company is not sufficiently transparent about how the algorithms that determine what users see on the platform work. A separate investigation into whether TikTok adequately protects minors was opened in 2024.
Likewise, it faults Meta for allegedly making it difficult for users of its Instagram and Facebook networks to report illegal content, including child sexual abuse material and terrorist content.
“Our democracies depend on trust. This means that platforms must empower users, respect their rights and open their systems to scrutiny,” said Henna Virkkunen, the Commission vice-president in charge of Technological Sovereignty.
Meta spokesperson Ben Walters said the company disagrees with any suggestion that it has breached the EU's Digital Services Act. “We continue to negotiate with the Commission on these matters,” he added. “Since the DSA came into force, Meta has introduced changes to our content reporting options, appeals process and data access tools. We are confident these solutions meet what EU law requires.”
Meanwhile, TikTok spokesperson Paolo Ganino said the company was reviewing the Commission's findings, but warned that requirements to loosen data safeguards put the DSA in conflict with the General Data Protection Regulation: “If it is not possible to comply fully with both, we urge regulators to clarify how these obligations should be reconciled.”
He added that TikTok has made “substantial investments in data sharing” and that “to date, almost 1,000 research teams have accessed data through our research tools.”

The large platforms have criticized the obligations imposed by the European rules in the past, casting them as a form of censorship. The Commission argues that the legislation “does the opposite”: it gives ordinary users tools to defend themselves against unilateral content removals.
According to the EU, Meta has taken more than 918 million content moderation decisions affecting European users since April 2024. Of those, users have appealed nearly 68 million under the framework the DSA provides. Around 31% of appeals succeed (some 21 million pieces of content), and the company has had to reinstate the material it initially removed, Brussels sources told Europa Press.
Toxic content for young people and adolescents
France banned TikTok on the official phones of public officials in March 2023 over cybersecurity and data protection concerns. Furthermore, in September 2025, a parliamentary commission proposed banning social media for children under 15 and filed a complaint against TikTok for exposing young people to harmful content.
At a time when this social network, hugely popular among young people and adolescents, is under debate in France, a forceful report from Amnesty International France has emerged. In ‘Entraîné-es dans le Rabbit Hole’ (‘Dragged into the rabbit hole’), the NGO urges the French government and the EU to adopt “urgent measures to make TikTok a safe network for young people in the EU and throughout the world,” starting with changes to the algorithm that, according to Amnesty International, pushes users toward content inciting self-harm or suicide.

The document highlights TikTok's lack of measures to address the systemic risks of its design for children and young people, an argument that echoes the EU's case against Meta and TikTok. “Our technical research shows how quickly adolescents become interested in topics related to mental health and can be dragged into spirals of toxic content,” the report reveals.
“After just three or four hours of interaction on TikTok’s For You feed, teen test accounts received recommendations for videos idealizing suicide, showing young people expressing their intention to take their own life, and even information on suicide methods,” says Amnesty International researcher Lisa Dittmer.
Self-harm and suicide
Despite risk mitigation measures announced by TikTok since 2024, the platform continues to expose vulnerable users to content that normalizes self-harm, hopelessness and suicidal ideation, the report maintains.

The testimonies of young people with depression and of affected or grieving parents reveal the magnitude of the risks, and of the damage that TikTok’s business model does to the mental and physical health of young people already going through difficult situations.
“There are videos that have stayed with me,” says 18-year-old Maëlle, describing how in 2021 she found herself swept up in depressive and self-harm content on TikTok’s “For You” feed. Over the following three years, her self-harm-related mental health problems worsened.
Although the report focuses on the amplification of harmful content, the testimonies collected also point to TikTok’s deficiencies in content moderation. According to those who participated in the study, despite repeated complaints from young people and their families, content that incites self-harm or suicide has not been removed from the platform.
In the summer of 2025, for example, Amnesty researchers found two videos about the “lip balm challenge” in the feed of a manually managed test account. The trend supposedly began as a challenge to guess the scent of someone else’s lip balm, but it evolved into a darker version: people were encouraged to use up part of their lip balm whenever they felt sad, and to self-harm or take their own life once it was finished.