At 8pm Brussels time on Thursday, 23 November, as central Dublin was in flames, a senior European Commission official received a phone call from an Irish number.
He left the call unanswered as he was out for dinner, but phoned back as soon as the meal was over.
"I told them we were available," said the official.
The call was from Coimisiún na Meán.
That the European Commission was a port of call that night reflects the importance of a major piece of EU legislation that is beginning to bite.
In fact, the call was to trigger an alert under the new legislation, making Ireland the first member state to do so.
The Digital Services Act (DSA) is part of a broader effort by Brussels to regulate what is regarded as an increasingly dangerous and chaotic online sphere.
The general view is that hate speech, incitement to violence, disinformation, interference in elections and child pornography, to name a few, all pose a real-world threat that is becoming systemic.
The DSA was rushed through in record time, having been proposed by the Commission in December 2020 and agreed by member states and the European Parliament just 16 months later.
It does not take full effect until 17 February 2024, the point at which all national capitals are expected to have established a national Digital Services Coordinator (DSC), who will oversee much more robust policing of large tech platforms.
Ireland already has a DSC in the person of John Evans, a former telecoms regulator, who has been expanding his operation - within Coimisiún na Meán - since March.
The European Commission, however, already has powers that took effect on 1 September.
In April, the Commission had designated 19 so-called Very Large Online Platforms (VLOPs) and Very Large Online Search Engines (VLOSEs) whose users exceeded 45 million per month, or 10% of the EU population.
These operators range from AliExpress, Amazon, Apple and Booking.com, through Google's main outlets (Play, Maps), Instagram, LinkedIn, Pinterest, Snapchat, TikTok, X and Wikipedia, all the way to Zalando, and they have had to comply with the new legislation since 1 September.

However, in the weeks since then, the Commission argued that toxic online content was already dangerously inflaming crisis situations and that mechanisms were needed before the February deadline.
This was prompted by the riots that followed the police killing of a teenager of Algerian descent outside Paris in June, and, in particular, the Hamas terror attacks of 7 October.
The point of the legislation is to ensure that disinformation and incitement to violence do not make a public security crisis - or a public health situation - worse.
On 10 October, the EU's Internal Market Commissioner, Thierry Breton, sent a sharply worded letter to Elon Musk, the owner of X (formerly Twitter), reminding him that under the DSA he had to ensure measures were in place to prevent fake, violent and terrorist content from circulating on his platform following the Hamas attack.
"Public media and civil society organisations widely report instances of fake and manipulated images and facts circulating on your platform in the EU, such as repurposed old images of unrelated armed conflict or military footage which actually originated in video games.
"This appears to be manifestly false or misleading information."
Mr Breton warned Mr Musk that if X was found to be in breach of the new legislation it could face fines of up to 6% of its global turnover.
On 11 October, X CEO Linda Yaccarino wrote back that the platform had taken down thousands of images and removed hundreds of Hamas-related accounts.
Further warnings were issued to Meta and TikTok, both of which said they had been scrambling to staff a fluid situation and comply with the DSA's requirements.
Commission officials have defended bringing the new powers forward.
"There was a disjointed approach," says an EU official. "The French government called in the social media companies [after the teenager's killing], then the President met them, then the European Commission met them.
"Everyone was having meetings, so you expose yourself to the problem that the tech platforms tell everyone a different story, they play one authority off against another, or they say they already have an agreement with one authority so they don't need to talk to you."
According to its authors, the DSA does not mean the European Commission decides what is disinformation or hate speech.

The legislation is designed to make it easier for users to flag material that is potentially illegal or harmful, and for national regulators or police forces to decide if laws have been broken and to take swift action in response.
Tech platforms are legally obliged to ensure that they have sufficient staff to monitor content, to keep lines of communication open to police forces, and to raise flags if their moderators suspect that a threat to public order is brewing on their services.
This was arguably the situation on 23 November in Dublin.
Coimisiún na Meán’s initial concern that afternoon was that distressing or graphic images of the stabbing victims on Parnell Square East would circulate quickly and cause offence (similar to the impact of the attack on a teen in Navan earlier this year).
Officials contacted the main social media companies - Meta (including Instagram), Google/YouTube, X and TikTok - to ask them (rather than compel them, as the regulator does not yet have the powers) to ensure such images did not go viral.
By late afternoon, the situation had escalated into a public order crisis. Because the European Commission already has scrutiny powers over the big platforms, the Irish regulator contacted Brussels, thereby triggering the alert mechanism.
Officials from DG CONNECT, the Commission arm running the legislation, joined a meeting with Irish DSC staff on the Friday morning. There were further meetings with the tech platforms and gardaí.
Coimisiún na Meán issued a statement that "a feature of the rioting… was the use of messaging and online platforms to spread hatred, to incite violence and crime, as well as to spread disinformation".
European Commission officials said that rather than contacting the tech companies directly, they suggested the Irish authorities play that role, with the Commission in support.
"In Dublin it makes no sense for us to go in alone," said one EU official, "as there’s a lot of local context. But, depending on the situation, there has to be a quick response, there’s an evidence acquisition step, where we request a snapshot of what's going on, what has happened, so we know what has not happened."
The Commission was also there, say sources, to ensure that the Irish response was sufficiently robust.
"Since [the European Commission] has been managing multiple crises," said one source, "we can tell the Irish regulator look, come on, we've heard this kind of response [from tech companies] loads of times before. You need to ask for data here, you need to really go deeper. Some of the companies will give you the usual spiel, others are more serious."
Under the DSA, tech firms also have to provide data to law enforcement authorities on request, so that investigators can trace who is posting content even if the poster is anonymous (typically through captured IP addresses or other metadata).
In some cases, police forces will ask tech platforms to leave potentially illegal content in place so they can establish a proper chain of evidence.
However, part of the three- and four-way discussions in the 24 hours following the riots focused on one particular issue: the lack of Irish-speaking content moderators working for X, TikTok and Google/YouTube.
Initially, briefings in Brussels suggested that some activists may have been posting content in Irish as a way to avoid those posts being taken down, because there were few or no content moderators with the language skills to assess them.

That view was strongly contested, and there was scepticism that far-right activists were using Irish language posts to foment violent protest.
Kevin Magee, who produced Céad Míle Fáilte, an award-winning investigative documentary about the far-right in Ireland for TG4, was monitoring a range of social media posts on the night of the riots and found no evidence that Irish was being used.
He said that in his experience, very few far-right activists even spoke Irish.
"I did a lot of monitoring of the sites, not just Twitter, but also on Telegram," he said. "I was on various different formats that [the far-right] use. I didn't see any use of Irish and it would really have sparked my interest if that had been the case."
One source close to the discussions between the European Commission, the tech companies, gardaí and Coimisiún na Meán said there was no evidence that the use of Irish made any difference as to whether a post was intercepted by a content moderator or not.
On Friday morning, European Commission spokesperson Johannes Bahrke told RTÉ's Morning Ireland that, contrary to previous briefings, there was "no evidence" that the Irish language was being used in this way.
However, he emphasised that if tech companies did not employ Irish-speaking content moderators there was a potential gap in their defences, and they could be in breach of the DSA.
It is important to note that it is early days in the operation of the new legislation and some tech firms are genuinely still getting to grips with its requirements.
In reality, there is an ever-expanding architecture to regulate the digital ecosystem - from basic telecommunications firms, to hosting companies, to online retailers, to large platforms such as Meta and X.
At the top of the DSA pyramid are the very large platforms, those with over 45 million monthly users in the EU.
The higher up the pyramid you go, the bigger the regulatory obligations.
The problem for Ireland is that, when you move down to the lower levels of the pyramid, the national regulator has more of a role.
Because Ireland hosts so many big tech companies (13 of the 19 designated VLOPs), the workload for Irish Digital Services Coordinator John Evans is huge.
"We've been recruiting very actively," Mr Evans told RTÉ News.
"We’ve just completed recruitment for the senior management, the director of the organisation, and the next level of experts and managers.
"We're going to be reasonably fit for purpose by February 17. We're twice as big as we were six months ago and will be twice as big again in six months' time."
Coimisiún na Meán is aiming to build a team dedicated to investigating platforms, a centre of excellence for data analytics and algorithms, and a user support division that will enhance how digital consumers flag concerns about content.
The agency will have powers to police video-sharing platforms such as YouTube and Snapchat under the Online Safety and Media Regulation Act (2022) once an online code of conduct, particularly around the protection of minors, is in place (a consultation process will get underway shortly).

Yet the fallout from the riots of 23 November is continuing, and has become a major political problem for Minister for Justice Helen McEntee, who has sought to blame one of the tech companies - X - for not meeting its obligations under the DSA.
She told the Dáil on Wednesday: "I spoke to a detective in Pearse Street on Saturday who was actively engaged with the social media companies throughout Thursday, who was actively engaged with TikTok, Meta or Instagram and Facebook and Twitter or X.
"She said very clearly that social media companies, in particular TikTok and Meta, were responding, engaging with gardaí and taking down these vile posts as they came up. X were not. They didn’t engage. They did not fulfil their own community standards."
Irish sources confirm that the tech platforms concerned were asked in the meetings the morning after the riots if they had contacted gardaí, a requirement under the legislation.
It is understood some platforms told officials they had active teams engaging with gardaí, while others were more "passive", meaning they had open lines of communication but might not have proactively made contact.
According to one EU official, the response of tech platforms should be measured in two ways: did they engage when contacted, and did they provide the data on social media posts they are required to?
In the case of X, the view is that the company did "pick up the phone" and take questions. However, the jury is still out on whether it provided the data required to show that it responded appropriately to illegal content on its service.
Overall, the Digital Services Act will have a major role in how the social media space is managed in an increasingly antagonistic and violent world.
John Evans will be part of a body of European coordinators all sharing information, evidence and best practice.
The European Commission’s Joint Research Centre (JRC) has established the European Centre for Algorithmic Transparency, which will provide regulators with new ways of analysing content and patterns - such as a flood of misinformation during a European election - and enhanced ways of ensuring tech companies are compliant.
"We can have on-site investigations," says one EU official. "We can ask for all kinds of documents from the back office that can prove how an algorithm works. We can go and check the algorithm and if necessary can start the procedure for noncompliance."
Yet, the new AI technologies that big platforms are using may mean regulators are constantly trying to keep up and to understand what is happening.
Generative AI can produce video content that is virtually undetectable as fake, meaning a phoney video of a politician making a shocking admission on camera can circulate instantaneously and cause the chaos and confusion that its creators intended.
How will the DSA deal with that?
The Telegram messaging service is another challenging case. It has not been designated as a Very Large Online Platform by the European Commission and as it is regarded as a private service it is not subject to the same oversight (the company also declined to adopt the EU’s Strengthened Code of Practice on Disinformation).
A report by the EU Disinfo Lab in December last year concluded that "it is entirely up to Telegram to decide if and how it wants to tackle the disinformation challenge.
"Currently, its Terms of Service overlook any reference to not allowing disinformation on the platform."
An EU-commissioned report found that the number of pro-Kremlin Telegram accounts had tripled since the start of Russia’s war on Ukraine.
Furthermore, the legislation treads a fine line between enforcement and free speech.
When the DSA legislation was going through the European Parliament, this was the biggest area of contention among the political parties.
"The balance is between making platforms more liable without giving them an obligation to monitor every post," says Christel Schaldemose MEP, the Danish rapporteur on the legislation.

"We feared there would be a freedom of speech issue if we made the platforms directly liable for every piece of content. That was one of the really difficult discussions.
"We ended up agreeing that platforms have to act if there is a threat to democracy or public health. What we’re talking about in the Dublin situation is different.
"Putting out messages to kill people is illegal - there is no discussion - and everyone agrees that that kind of content should be taken down."
Officials in Brussels are at pains to stress that the EU and national regulators are not acting as a Ministry of Truth, monitoring content to decide if it offends one point of view or another.
"No one should think the Commission is deciding what is disinformation and what is not," says one official.
"This is not our role, and it should not be. What platforms are obliged to do is to identify whether their algorithms lead to extremism or lead to the continuous manipulation of content that could lead to a particular way of seeing things, because of bots or because of other inauthentic behaviour on the platform."
Mr Evans, Ireland’s regulator, is careful not to blame platforms for a surge in violence.
"People will find a way of communicating and mobilising one another, and quickly. The means are there, whether it’s a text message or another kind of platform.
"Where platforms contribute to, or aggravate, these risks, is by creating the fertile ground for different kinds of communities and different kinds of [hate] speech to thrive.
"It polarises people’s opinions, it creates echo chambers - the disbelief that people develop towards what they call 'mainstream media’. Those kinds of toxic effects lay the groundwork for the hatred that underlies the motivations of the people who go out to do these things.
"But whether or not to lay the blame on the platforms because it's facilitating the actual communication on a given day is a different kind of issue."
The challenge of detoxifying the public online space is certainly formidable, and operators who want to generate an avalanche of misinformation and division seem to relentlessly succeed despite the best efforts of regulators.
EU member states have adopted a cross-border system whereby law enforcement agencies in one country can order a platform based in another to take down content if it is illegal.
If those efforts are unsuccessful, then Europol can get involved.
Already the Dublin riots have plugged Ireland into a matrix of extremism and anti-immigrant sentiment in Europe.
This week Reuters reported that French far-right Telegram groups were sharing videos of the riots, highlighting what they said was the alleged knife attacker’s Algerian origin and hailing the reaction of the Irish far-right.
This followed the murder of a teenager in the southeastern French village of Crepol, allegedly by assailants of Arab origin.
A French intelligence source told Reuters that the Dublin riots were a "trigger" for the strong reaction in France to the Crepol stabbing, noting there was a will among French far-right activists to be "as good as the Irish".