Opinion: the social media company claim they can't stop the live-streaming of offensive content, but steps can be taken, albeit at a financial cost

The terrorist who attacked the mosque in New Zealand live-streamed it on Facebook. Many people, including the New Zealand Prime Minister Jacinda Ardern, said the social media giant should never have allowed this. The company have said they couldn’t have prevented it because no one reported it to them until 12 minutes after it was over.

Facebook rightly claim that automated software couldn’t have detected the live stream as offensive, so they had to rely on humans to identify it. It has been suggested that artificial intelligence could have done this, but that’s not the case.

Artificial intelligence has to be taught. To detect a video of a terrorist attack, an artificial intelligence first has to learn what one looks like. It does this by being shown many different terrorist attack videos until it finds common elements they all share, but which innocent videos don’t contain. This requires literally thousands of videos of thousands of terrorist attacks. Fortunately for the state of the world, we don’t have that many.
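To make that concrete, here is a minimal sketch of the kind of supervised training involved; the data, features and model below are invented stand-ins for illustration, not anything Facebook actually uses:

```python
# A toy illustration of why detection models need labelled examples.
# The "features" here are random stand-ins for whatever a real system
# would extract from video frames; nothing below is Facebook's system.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Pretend we somehow had 10,000 labelled clips: 1 = attack, 0 = innocent.
n_clips, n_features = 10_000, 50
X = rng.normal(size=(n_clips, n_features))   # stand-in video features
y = (X[:, :5].sum(axis=1) > 0).astype(int)   # stand-in labels

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"accuracy with 8,000 labelled examples: {model.score(X_test, y_test):.2f}")

# With only a handful of labelled examples, the same model does noticeably
# worse - which is the article's point: no such training set of live
# terrorist attacks exists.
model_small = LogisticRegression(max_iter=1000).fit(X_train[:20], y_train[:20])
print(f"accuracy with 20 labelled examples:    {model_small.score(X_test, y_test):.2f}")
```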

From RTÉ Radio 1's This Week, Will Goodbody takes a close look at the multi-faceted area of artificial intelligence

But even if we did, it is highly unlikely there are any common patterns they all share but which are not found in innocent material. Shooting someone with a gun? Police and news videos contain that sort of image all the time. People screaming? That happens in earthquakes, on rollercoasters and at the playground all day long. People running, fighting or falling down? That could be just a passionate GAA match.

If software can’t identify this material, humans have to, and that relies on people reporting it in the first place. When they did report it, Facebook removed it. Once Facebook had identified it, they used software to find patterns in the video so it could be blocked automatically from then on, which they have done, blocking over 1.5 million copies in the first day after the attack. The real culprits here are the 200 people who watched the stream, knowing they were watching a live terrorist attack, but did not report it to the police or to Facebook.
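A rough sketch of that "identify once, block every copy" step might look like the following; this toy version uses an exact file hash, whereas real systems use perceptual fingerprints that survive re-encoding and cropping, but the principle is the same:

```python
# Toy sketch: once a video has been identified by humans, add its
# fingerprint to a blocklist and reject matching uploads automatically.
# A plain SHA-256 of the file bytes only catches exact copies; real
# systems use perceptual fingerprints, which this is not.
import hashlib

blocked_fingerprints = set()

def fingerprint(video_bytes):
    return hashlib.sha256(video_bytes).hexdigest()

def mark_as_blocked(video_bytes):
    """Called after human moderators identify the video as offensive."""
    blocked_fingerprints.add(fingerprint(video_bytes))

def allow_upload(video_bytes):
    """Reject any upload whose fingerprint is already on the blocklist."""
    return fingerprint(video_bytes) not in blocked_fingerprints

# Usage: identify once, then every identical copy is blocked from then on.
original = b"...bytes of the identified video..."
mark_as_blocked(original)
print(allow_upload(original))             # False - exact copy is blocked
print(allow_upload(b"a different clip"))  # True - unknown content passes
```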

While software may be able to keep offensive material out once it’s been identified, initial identification has to be done by humans, known as "moderators". Unfortunately, this requires that moderators spend hours every day viewing material which would make most people throw up. Before we can block material which would traumatise normal people, a normal person has to watch it. They do get traumatised and some are driven to suicide. Many moderators have claimed that Facebook do not offer sufficient support or training, but it’s never going to be anything other than an extremely tough job.

From RTÉ Radio 1's Marian Finucane show, online content moderator Rachel Holdsworth talks about the difficulties involved in the job

Facebook’s position is that they are just a platform and not a publisher. They argue they just give us the technology and don’t tell people what to do with it. In this sense, they are trying to portray themselves as something like Microsoft. In their view, Facebook’s just an online system you can use to do stuff with. We don’t hold Microsoft responsible when someone uses a Windows PC for child pornography, so we shouldn’t hold Facebook responsible for what people do with live-streaming. Others argue that Facebook is a publisher and the person creating the video is like an author. Publishers are responsible for what they produce.

The reality is neither view is accurate. Facebook is something different – part platform, part publisher. The reason we hold a publisher responsible is that they see the material before they publish it, then make a decision to publish based on that content. Live-streaming is not like that – you don’t need Facebook’s approval to live-stream.

However, it is Facebook’s algorithms which cross-link and promote such videos, so Facebook can’t totally wash its hands. While only 200 people saw the live stream, Facebook’s algorithms then distributed links to it, making it available to the rest of us. It is as if a publisher took a manuscript, didn’t look at it, but printed and distributed millions of copies. We may not hold them responsible for the content, but we’d hold them responsible for something.

From RTÉ Radio 1's Drivetime in December 2018, Philip Boucher-Hayes looks back on a troublesome year for Facebook

Since technology can’t block offensive live-streaming automatically, Facebook’s only solution is to hire moderators to do it for them. At first glance this looks like an impossible task because there are roughly five million live streams running on Facebook at any moment. How could one company monitor five million live streams at the same time? 

Because they’re only looking for offensive imagery, a person could scan more than one video at a time. You can put 20 videos on the same screen, just as we do for CCTV and traffic monitoring. While that reduces the load somewhat, it would still require 250,000 moderators to actively monitor all live streams, compared to the 7,500 Facebook currently have.
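Using the figures quoted above, the back-of-the-envelope staffing sum is straightforward:

```python
# Back-of-the-envelope staffing estimate, using the figures quoted above.
concurrent_streams = 5_000_000   # rough number of live streams at any moment
streams_per_moderator = 20       # videos on one screen, CCTV-style

moderators_needed = concurrent_streams // streams_per_moderator
print(moderators_needed)         # 250,000 - versus the 7,500 Facebook have now
```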

Facebook pay their moderators roughly $30,000 a year, so hiring another 250,000 would cost $7.5 billion a year. While that may look excessive, Facebook’s profits in 2018 were $25 billion. Facebook could make live-streaming safe by hiring these moderators and still make a sizeable profit. This is asking Facebook to reduce their profit (not revenue) from roughly $2.8 million per hour to a mere $2 million per hour!
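And the cost sum, again using only the figures quoted in the article:

```python
# Cost of those extra moderators set against Facebook's 2018 profit,
# using only the figures quoted in the article.
moderators_needed = 250_000
salary_per_year = 30_000            # dollars
annual_profit = 25_000_000_000      # dollars, 2018

extra_wage_bill = moderators_needed * salary_per_year   # $7.5 billion
hours_per_year = 365 * 24                                # 8,760

profit_per_hour_now = annual_profit / hours_per_year                        # ~$2.8-2.9 million
profit_per_hour_after = (annual_profit - extra_wage_bill) / hours_per_year  # ~$2.0 million

print(f"extra wage bill: ${extra_wage_bill / 1e9:.1f} billion a year")
print(f"profit per hour: ${profit_per_hour_now / 1e6:.1f}m now, "
      f"${profit_per_hour_after / 1e6:.1f}m after hiring")
```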


Software can’t automatically filter live-streaming, but humans can, at a cost. Many would argue Facebook can afford it; they certainly have more than enough money. Facebook could completely prevent all harmful live-streaming if they were prepared to reduce a gigantically astronomical profit to a slightly smaller, but still astronomical, profit. Making all live-streaming safe is not a complex problem and it is fixable today. It’s merely a question of money.


The views expressed here are those of the author and do not represent or reflect the views of RTÉ