Landmark addiction verdict will have big implications for social media companies
On Wednesday, a California jury found that Meta and Google were liable for designing platforms that are addictive.
It was a landmark moment that could have major implications for how social media companies defend themselves against future claims.
The verdict
I was outside Meta's Dublin headquarters on Wednesday just hours before a jury in Los Angeles returned a verdict that has sent shockwaves through the tech world.
I was there to report on a small number of job losses at Meta's Irish operation. It was a story that soon became eclipsed by the unprecedented outcome of a court case 8,000 kilometres away.
The lawsuit involved a 20-year-old woman known in court by her first name, Kaley.
She said she became addicted to Google's YouTube and Meta's Instagram at a young age and that this addiction had harmed her mental health.
Meta was found liable for $4.2 million in damages and Google for $1.8 million.
Both companies said they disagreed with the verdict and would be appealing.
"Teen mental health is profoundly complex and cannot be linked to a single app," Meta said.
"We will continue to defend ourselves vigorously as every case is different, and we remain confident in our record of protecting teens online," the company added.
A spokesperson for Google said: "This case misunderstands YouTube, which is a responsibly built streaming platform, not a social media site."
Snapchat and TikTok were also defendants in the trial but both settled before it began.
The Los Angeles verdict came a day after a jury in New Mexico found Meta liable for misleading users over the safety of its platforms for children.
The company was ordered to pay $375 million but Meta has vowed to appeal.
Meta, Google, Snapchat and TikTok are facing thousands of lawsuits in US courts over claims that the designs of their platforms have damaged the mental health of teens and young people.
This week's verdicts have no doubt sparked fears among the tech companies that they can no longer rely on the legal shields that have long protected them from liability.
Defensive shield has been breached
Section 230 of the Communications Decency Act is a 1996 US federal law that generally protects online platforms from liability over content generated by users.
Social media companies have repeatedly used this law to argue that they are not legally responsible for the material posted on their platforms.
That defensive shield has now been breached.
The cases that concluded this week looked at the design of the platforms, rather than the content posted on them.
"By focusing on how these platforms were deliberately designed, rather than on the content they host, plaintiffs' lawyers found a way around this shield that has been consistently used to evade responsibility," said Alex Cooney, CEO of Irish online safety charity CyberSafeKids.
"This is a very clear signal that Section 230 is no longer the impenetrable legal protection Big Tech has relied upon so heavily to date."
"It is completely unacceptable that these platform providers have been able to benefit so substantially from providing harmful products to children for so long without any real accountability."
"We hope that these outcomes finally mark a turning point in the fight for the safety of children online," Ms Cooney said.
Amnesty International said the court findings must lead to platform redesign.
"For years, social media companies including Meta and YouTube have profited from targeting children and young people with addictive design features that prioritise engagement over wellbeing," said Erika Guevara-Rosas, Amnesty International’s Senior Director of Research, Advocacy, Policy and Campaigns.
"They have deliberately built into their platforms features such as infinite scroll, autoplay, and persistent notifications that are engineered to 'hook' young users into compulsive use."
"This court decision is clear: these platforms are unsafe by design and meaningful change is urgently needed," Ms Guevara-Rosas said.
European response
The switch to focusing on the design of platforms, rather than the content they host, is not just happening in the US.
European regulators have also begun to highlight issues with the inner workings of social media apps.
In February, the European Commission accused TikTok of creating an "addictive design" in its app which could harm the physical and mental wellbeing of minors and vulnerable adults.
It was contained in preliminary findings of an investigation into the video-sharing platform.
The Commission said TikTok had committed "multiple" violations of the EU's Digital Services Act (DSA).
It highlighted infinite scroll features, autoplay, push notifications, and a highly personalised "recommender system", which uses AI to predict what content a user will want to see next.
TikTok rejected the findings, claiming they presented a "categorically false and entirely meritless depiction" of its platform.
"We will take whatever steps are necessary to challenge these findings through every means available to us," TikTok said.
In January, when the European Commission announced its investigation into X's Grok AI tool, it also focused on the design of the app rather than the content.
The tool's ability to generate sexualised deepfake images of adults and children sparked global outrage.
While authorities around the world, including An Garda Síochána, are investigating the specific images, the EU's probe is looking at the building blocks of the Grok app.
Under the Digital Services Act, X has an obligation to carry out thorough risk assessments of illegal content on its platform.
The Commission said the company had failed to include any risk assessment of Grok.
Social media bans
On Friday, Austria became the latest country to announce plans for a social media ban for children.
The proposal would introduce restrictions for under 14s but the announcement was a little light on detail.
The ban would be introduced "as soon as possible" but there is still no consensus among Austria's three ruling parties regarding the verification method that will be put in place.
A reflection no doubt of how complicated it can be to get age checks right.
Austria joins the likes of France, Spain, Denmark, the UK and Greece, which have all announced plans for social media bans.
In Ireland, the Government is working on a 'digital wallet' that would use PPS numbers to verify someone's age.
Privacy campaigners have expressed concerns about any plan that would introduce state-run digital identity checks for internet users.
Media Minister Patrick O'Donovan has said, however, that no right should trump the right of a child to be protected online.
Australia introduced the world's first social media ban for under 16s in December but there are widespread reports of tech-savvy teenagers there circumventing the rules by tricking facial recognition ID checks and using virtual private networks (VPNs).
Online safety campaigners believe that rather than banning children from social media, governments should instead ban the platforms from using toxic recommender system algorithms that push harmful content into users' feeds.
"Rather than using blunt tools like banning young teens from social media, states must require a fundamental overhaul of how these platforms operate, including addressing their addictive design," said Amnesty International’s Erika Guevara-Rosas.
"This is the only path to a truly safe social media," she added.
Regulating social media is often described as being like trying to police the Wild West.
Age verification is difficult to enforce, bans are easy to get around and platforms are shielded from responsibility for the content they host.
With this renewed focus on the design of social media apps by courts and regulators, perhaps Big Tech will finally be forced into introducing meaningful change.