
Are social media firms facing a day of reckoning around harmful content?

The era of self-regulation by social media companies is over, Minister for Justice Helen McEntee warned

It has been a deeply uncomfortable week for social media companies based in Ireland, and justifiably so.

News that a dark and sinister threat had been made against Taoiseach Simon Harris and his family in an Instagram post was met with shock by right-thinking members of society.

The fact it stayed online for days before being taken down by the platform, despite a request from An Garda Síochána to remove it, was greeted with equal disbelief.

That incident came in the wake of a string of violent protests on both sides of the border and across the UK in recent weeks, many of which were orchestrated and stoked on social media, often through the use of misinformation and disinformation.

The rising incidence of threats and harmful content directed at politicians and others in recent times has also sharply focused attention on what social media platforms are doing, or more importantly not doing, to address what has become a massive problem.

The question is now, with all the pressure building, are social media giants soon to face a day of reckoning when it comes to their handling of harmful and illegal content, as well as misinformation and disinformation?

Taoiseach Simon Harris, Tánaiste Micheál Martin and Minister for Justice Helen McEntee made it abundantly clear what their views were when asked about the issue midweek.

The era of self-regulation by social media companies is over, Ms McEntee warned.

Elon Musk and his X platform are "problematic", Mr Martin frankly stated.

Coimisiún na Meán has developed the soon-to-be-finalised Online Safety Code

Companies will be hit in the pocket "where it hurts" if they do not abide by a binding code on online safety which is due to come into force later in the year, the Taoiseach claimed.

Directors will also be able to be held personally responsible, he warned, because social media companies "aren't actually faceless".

What Mr Harris was referring to is the use of the soon-to-be-finalised Online Safety Code.

It has been developed by online and media regulator Coimisiún na Meán under the Online Safety and Media Regulation Act in order to implement the EU Audiovisual and Media Services Directive.

A draft of the code was sent to the European Commission in Brussels for approval in May. Following a three-month standstill period, during which the Commission and other member states can examine it, it is expected to come into effect in Ireland shortly afterwards.

The code will apply specific rules to so-called Video Sharing Platform Services (VSPS), a group of ten named and designated online platforms with operations in Ireland.

It sets binding rules and holds platforms accountable for keeping all their users safe from harmful content and, in the case of children, specifically from things like cyberbullying and content that promotes eating disorders, self-harm or suicide.

Platforms will have to prevent the uploading or sharing of a range of illegal content, including incitement to hatred or violence.

Tech companies will have to use age assurance to prevent children from encountering pornography or gratuitous violence online.

The Irish Council for Civil Liberties has concerns about what it claims is a lack of action or tools to deal with what it describes as 'toxic algorithms'

The code also provides clarity to users on how platforms are required to protect them and what their rights are.

If they do not comply with the code, the firms could face fines of up to €20m or 10% of their annual turnover, whichever is greater.
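The "whichever is greater" rule means the €20m figure acts as a floor, not a cap, for the largest firms. As a purely illustrative sketch (the figures below are hypothetical, not taken from any real enforcement case):

```python
def max_online_safety_fine(annual_turnover_eur: float) -> float:
    """Maximum fine under the code: the greater of a flat 20m euro
    or 10% of the firm's annual turnover."""
    return max(20_000_000, 0.10 * annual_turnover_eur)

# A small firm with 50m euro turnover: 10% is 5m, so the 20m floor applies.
print(max_online_safety_fine(50_000_000))

# A large platform with 1bn euro turnover: 10% (100m) exceeds the floor.
print(max_online_safety_fine(1_000_000_000))
```

For the biggest platforms, the turnover-based limb is what gives the regulator real leverage.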

The code will be used alongside the Digital Services Act (DSA), which came into full effect in February of this year.

It requires big tech firms to do more to police illegal and harmful content on their platforms, including restricting the spread of disinformation, quickly removing illegal content and better protecting children using the internet.

Together with the Terrorist Content Online Regulation, the three sets of rules will form Ireland’s Online Safety Framework.

Coimisiún na Meán set up a contact centre in February to receive complaints from the public related to the DSA and the code when it is in place.

"Under the EU Digital Services Act, online platforms must provide a way for people to report content they think is illegal," the commission said in a statement this week.

"Platforms must respond to these reports in a timely and diligent manner. They must also consistently enforce their own terms and conditions relating to content."

"Coimisiún na Meán does not have powers to compel the immediate removal of illegal content from online platforms. Our role is to make sure that the platforms’ content reporting systems are working effectively in compliance with the law," it said.

On the face of it, though, it seems that by the end of the year, the commission will have a reasonable arsenal of tools with which to combat illegal and harmful content.

But some experts doubt whether it will prove sufficient, among them the Children's Rights Alliance.

Meta said it spent $5 billion on safety and security last year alone

"While we welcome the renewed Government commitment to protecting children and young people online, we are becoming increasingly concerned that the current draft of the Online Safety Code – the first legally binding code of conduct put forward by Coimisiún na Meán – does not go as far as we need it to," said the organisation’s Online Safety Coordinator Noeline Blackwell.

"Currently, there is no requirement for platforms to exclude or take down Irish criminal content which is seen by Irish users. The Online Safety Code is not a silver-bullet that will solve the complexity of issues across all platforms but, it will set a precedent and currently, it is failing to hit the mark," she said.

Ms Blackwell added that harms that occur online can have harrowing real-life consequences.

"Yet, we have also seen the uphill battles that exist when it comes to reporting an issue or taking down harmful content," she said.

"There should be an onus on these online platforms to address the harms that occur on their watch efficiently and effectively, but the draft Code is worryingly vague and unclear about how these platforms will do so," Ms Blackwell added.

Others see the code as deficient in a different respect.

The Irish Council for Civil Liberties (ICCL) has concerns about what it claims is a lack of action or tools to deal with what it describes as "toxic algorithms".

It claims the recommender systems used by platforms, which serve up content based on what users have searched for in the past, as well as their age, location and purchases, often lead to inappropriate videos and messages appearing in their feeds, in order to keep them engaged.

"When the Online Safety Code comes in…it will have nothing in it about the toxic algorithms that are artificially amplifying and disseminating material that otherwise would not be seen," said director of Enforce at ICCL Dr Johnny Ryan.

He said the code had been lobbied against by major tech firms and industry groups, and as a result there will be no great change.

"There may indeed be a change as far as a new robust approach to removing clearly illegal content that is visible to everyone," he said, referring to the DSA.

"But as far as the not visible to everyone amplification of hate and conspiracy and so on, those systems are allowed to operate by default without any change. That is a very, very big problem," Dr Ryan added.

He said it should be the case that people are given the option to opt in to share the kind of data that recommender systems need to operate, rather than opt-out being the default.
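The distinction Dr Ryan draws is between a system where personalised recommendation is on unless the user switches it off, and one where it is off unless the user switches it on. A minimal hypothetical sketch of the opt-in default he advocates (the names here are illustrative, not any platform's actual settings):

```python
from dataclasses import dataclass

@dataclass
class RecommenderSettings:
    # Opt-in default: personalised ranking based on user data is OFF
    # unless the user explicitly enables it.
    personalised_feed: bool = False

def feed_mode(settings: RecommenderSettings) -> str:
    """Return which feed the user sees under these settings."""
    return "personalised" if settings.personalised_feed else "chronological"

# A new user who has made no choice gets the non-personalised feed.
print(feed_mode(RecommenderSettings()))
```

Under the current opt-out model, the default in the dataclass above would simply be `True`, which is the design choice the ICCL objects to.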

While most right-thinking people would not for a second wish politicians to be at the receiving end of online hate and threats, some experts feel that because they increasingly are, it could now lead to more concentrated action on the problem.

The fact that a general election is just around the corner, and that opinion polls indicate this is a topic voters increasingly care about, could also be a factor.

"The fact that this is now impacting politicians in Ireland in a deeply personal way perhaps allows them to understand the harms in a much less abstract way than harm had been communicated or understood in the past," said online privacy and safety consultant Liz Carolan from thebriefing.ie.

She said what she’s heard from politicians and regulators about the issue in recent weeks is a real statement of intent.

"Those signals haven’t been there in the past from Department of An Taoiseach, to the regulatory body, that this is something important, enforceable and to be taken seriously," she said.

"But the implementation of these rules is incredibly complex and tricky and companies have thousands of lawyers and pressure points and everything else to try and avoid them being implemented as well," Ms Carolan added.

Ultimately, though, it is now all about enforcement, she added.

However, when that process gets under way in earnest, or even before it does, the expectation in regulatory circles is that there may well be legal and other pushback from social media companies, just as there was in the early days of enforcement of the General Data Protection Regulation.

That is because the firms argue that they are already doing a huge amount to keep their platforms free of illegal and harmful content.

For example, Meta, the owner of Facebook, Instagram and WhatsApp, said it spent $5 billion on safety and security last year alone and has 40,000 people globally working on it.

A spokesperson said around 15,000 reviewers in partner companies work to ensure content meets its community standards.

She also pointed to a $150m investment in efforts to combat misinformation which has led to the building of the largest fact-checking network in the industry, as part of its strategy to remove incorrect information, slow the spread of it and inform people when information is false.

The company also said in the first three months of this year it took action on 12.3 million pieces of violence and incitement content on Instagram, 99.5% of which was actioned before a user reported it.

On Facebook, action was taken on 8.7 million pieces of such content, 97.9% of it before it was reported.

Fundamentally, however, as pressure builds on them, the tech giants may ultimately be left with no choice but to do more.

Fines may only make a minimal dent in their extraordinarily profitable balance sheets, but overwhelming demands from regulators, politicians, shareholders and users may become impossible to ignore.

It increasingly looks like tolerance is running out, and with it the golden age for social media companies.