Twitter has disputed claims that it did not do enough to respond to racist and abusive content on its site, which a mixed-race couple in Co Meath said forced them to leave the country.

It comes as Garda Commissioner Drew Harris launched a new working definition of hate crime. He said he is concerned about under-reporting of the issue and wants a uniform response across the country.

Fianna Fáil's Jack Chambers said Twitter's decision to simply remove tweets aimed at Fiona Ryan and her fiancé Jonathan Mathis was "weak" and that the service provided hateful content to a huge audience.

The couple received death threats after appearing in a Lidl ad campaign and announced last week that they had left the country with their 22-month-old son.

Mr Chambers questioned Twitter executives at a meeting of the Oireachtas Committee on Justice.

"If someone with a huge amount of followers brings hatred to a big audience that promotes a racist message to a huge audience, does a simple deletion of that rectify and remedy the consequences for a person who feels they have to leave the country?," he asked.

Twitter Policy Director Karen White said: "I would sympathise with anyone who has been subjected to targeted abuse or harassment or violent threats, whether it's online or offline. It's abhorrent and unacceptable."

She added: "I cannot speak to the individual circumstances, but I want to reassure the committee that we have very robust policies in place in Twitter around hateful behaviour and hateful conduct and violent threats when we are made aware there is a range of enforcement actions we can take."

These enforcement actions include asking a user to delete offensive content, locking their account, or suspending it if they persistently violate Twitter's own rules.

These measures are having a "real world impact," Ms White said.

She told the committee there is a "wider societal issue that needs to be addressed here" and that "simply removing content from a service is not in all instances going to change the intolerance".

However, Mr Chambers said: "Your net response is to broaden the fudge. That seems to be your public policy response to a lot of issues: it is complicated, it is multinational, we are platforms not publishers.

"I think when you bring it down to the family that was affected on your platform, the response from Twitter was to delete the tweet and that was it. Surely your enforcement mechanisms can be improved."

Earlier, Fine Gael TD Colm Brophy said he could not understand how the representatives of tech companies were "morally comfortable" with claiming they do not have liability for content that they publish.

He referenced one incident where images of a "horrific incident" were shared multiple times on Facebook, which inflicted "absolute horror" on a family and for which there was no comeback. He also described an incident where a murder was livestreamed on Facebook "using a streaming facility put in for profit."

He said: "I cannot understand morally how people are comfortable defending not having a liability for that." 

He said these companies should be subject to the same laws and regulations as newspapers, broadcasters or other publishers. "I think the principle under which you operate is wrong," he said. "I don't accept this made-up term of 'intermediary'. To me you are publishers and you should have the liabilities of publishers.

"As an industry you have pulled off an amazing trick over many decades which has enriched you and your shareholders to a vast level, and the cost is too great," he said.

Director of Public Policy with Facebook, Dualta Ó Broin, said: "What you are getting close to is a monitoring obligation on all content and that would fundamentally change the basis on which our companies operate."

Claire Rush of Facebook said: "If we had that additional layer or that heightened level of publisher-type responsibility you would have people queuing at the door to take claims and all sorts.

"If anything I would actually say that if there was this obligation to pre-review or pre-screen or pre-moderate every single piece of content against all the laws all over the world wherever that content might be shared, regardless of the operational burden and cost it would actually have a huge chilling effect.

"The incentive would essentially be that if anything is remotely dubious from a lawfulness perspective you would take it down and that could have a very detrimental effect on the availability of content," she said.