The European Commission has proposed that digital providers should be forced to detect, report and remove child sexual abuse images from the internet.
Shifting from a voluntary approach, the Commission has today said internet providers and social media companies will have to assess and reduce the risk of the misuse of the internet and social media to disseminate images of abuse.
They will also have to assess the risk of children being groomed for sexual exploitation through online interactions with adults.
According to the Commission, some 85 million pictures and videos depicting child sexual abuse were reported in 2021, with more going unreported.
Brussels says the problem worsened during the Covid pandemic, referring to findings by the Internet Watch Foundation showing a 64% increase in reports of confirmed child sexual abuse compared to 2020.
The Commission says that up to 95% of all reports of child sexual abuse came from one company, "despite clear evidence that the problem does not only exist on one platform."
Today's proposals, which will need to be agreed by member states and the European Parliament, include a new independent EU Centre on Child Sexual Abuse. The centre would act as a hub of expertise, provide reliable information on identified material, and receive and analyse reports from providers in order to filter out erroneous reports that might otherwise be passed on to law enforcement.
The centre would also aim to provide support to victims.
Under the new rules member states would have to designate national authorities in charge of reviewing the risk assessment.
If a significant risk remains, member states can ask a court or independent national authority to issue a detection order for known or new child sexual abuse material or grooming.
Any company receiving a detection order would only be able to detect content using indicators of child sexual abuse verified and provided by the EU Centre.
Detection technologies must only be used for the purpose of detecting child sexual abuse, and providers would have to deploy technologies that are the least privacy-intrusive and that limit the rate of false positives as far as possible.
Any internet or social media provider that detects online child sexual abuse will have to report it to the EU Centre.
Under the new approach, if providers fail to take abusive material down swiftly, national authorities will be able to issue removal orders compelling companies to do so.
Internet providers will also be required to disable access to images and videos that cannot be taken down if, for example, they are hosted outside the EU in non-cooperative jurisdictions.
App stores will have to ensure that children cannot download apps that may expose them to a high risk of grooming.
Campaigners have welcomed the proposal, which was launched by European Commissioner for Home Affairs Ylva Johansson.
The global advocacy group The Brave Movement says it "strongly backs" the draft legislation.
In a statement it said: "Digital spaces are in some cases completely unregulated - exposing children to the threat of horrific sexual violence and exploitation.
"Technology companies have the tools to detect and remove online sexual violence materials, and we will continue to put pressure on them to prioritise child safety ahead of anything else.
"This legislation will for the first time enforce mandatory rules on Big Tech - to detect, report, and remove sexual violence material which endangers children and adolescents and violates their rights."