With just under three weeks to go to the European elections, there’s a battle underway.

But it’s not just a fight between politicians vying for votes.

There’s also a war being waged between social media and internet companies and those seeking to spread misinformation, fake news and hate speech, and to circumvent political advertising rules, across the 28 countries where the elections are taking place.

This challenge is a massive one for those companies, including Facebook.

And for Facebook, the front line of that battle is in central Dublin.

At its international headquarters beside Grand Canal Square, it has set up an election operations centre - in effect a war room - where the company is monitoring what’s happening on Facebook, Instagram and WhatsApp that’s related to the election.

It ran such an operation for the first time last year for the Brazilian elections and then the US mid-terms. But this is the first time one has been set up in Europe.

Just a few weeks ago, in an exclusive interview with RTÉ News, Facebook CEO Mark Zuckerberg said he could not guarantee that Facebook platforms would not be used by bad actors for electoral interference during the EU elections.

But nevertheless, as the campaign heats up, those working in the centre are confident that they are ready to take on problems as they arise.

"Yes, we believe we are on top of it," said Richard Allan, Facebook’s Vice-President of Public Policy for Europe, the Middle East and Africa.

"We do recognise that we have very sophisticated opponents who, every time we put in a security measure, they are working hard to get around it. So we are going to have to be keeping up with that.

"But overall we are very confident that we will deal with all of the known risks that there are for this election, and as we go through the election people can feel confident that our platform is good for democracy, is actually helping people to have those democratic conversations."

The war room is staffed by 40 people, including threat intelligence specialists, data scientists, engineers and local experts, who are then backed up by hundreds more people on safety and security teams around the world.

Their job is to close fake accounts, check that those running political and issue ads are abiding by the transparency rules, review content and shut down coordinated disruptive action.

Facebook has been very much on the back foot since it emerged during the 2016 US presidential election that the platform was being manipulated by foreign actors to try to disrupt the democratic process.

It has since taken massive criticism, come under huge scrutiny and been forced to up its game significantly when it comes to ensuring its services aren’t misused to influence elections.

It has, for example, removed over 2.8 billion fake accounts, and built systems to automatically detect them and take them down.

In Europe, it has partnered with 21 fact-checking organisations across 14 languages, who identify fake or misleading news which the company then either removes or flags to prevent it being distributed widely.

Political and issue-based advertising has been made more transparent on the platforms. Parties, politicians and others posting such messages now have to register and prove they are based in the country where the ad will target users.

The ads themselves are also now more transparent, carrying details like who paid for them, who they are targeting and how much was spent on them.

And they are being added to a searchable ad library so that people can look back in depth at who is posting what.

Facebook has also become far more attuned to identifying and dealing with bad actors organising coordinated malicious campaigns aimed at disrupting the election through actions like voter suppression or deliberate attempts to create false social tensions.

So far, the company says it hasn’t seen significant evidence of such attempts to manipulate the European elections.

"We see every day that people are trying to create fake accounts on our systems and we have very active security teams trying to shut those down," Richard Allan said.

"We have not seen at this stage a specific attack on the European elections that I think we would want to highlight, but if there are such attacks then we will put that information into the public domain.

"Our commitment is to be more transparent. We think that is good for democracy. It means people can see what is going on. It is actually good for security as the more we learn about different types of attacks the better we can get at preventing them."

But while independent experts acknowledge some progress has been made, it’s clear that Facebook, along with other online and social media companies, has a long way to go.

Through its work, the social media monitoring service Storyful has a good sense of how elections and other campaigns play out on online platforms.

"It’s clear all the platforms have, in various ways, taken steps to increase transparency, to try and combat the spread of misinformation and disinformation on their platforms," said Padraic Ryan, Head of News Intelligence at Storyful.

"But I think that all accept that there are many, many challenges for them.

"We’ve seen questions still arising from things like fake accounts, over things like a lack of transparency over who is paying for advertising and we’ve seen a lot of automated accounts, people hiding who they really are and what their true agenda is."

In particular, there is concern about a lack of action to combat election manipulation and disinformation attempts through the use of closed groups and private, often encrypted, messaging services.

"We’ve seen closed platforms like, for instance, WhatsApp, Facebook’s Messenger app, Telegram and places like that used to spread hoaxes, smears, misinformation and disinformation in a range of countries across the world," Mr Ryan said.

"We’ve all seen the examples of places like India, where false information spread across WhatsApp ultimately led to the death of a number of people.

"And while we are not expecting anything as serious as that here in Ireland, it is the case that there’s potential for hoaxes to blow up from these closed platforms, rapidly and without any warning.

"We can all envisage the example where perhaps late in the campaign or even after a broadcast moratorium has been put in place, a false rumour suddenly blows up out of WhatsApp and that could have serious implications for the election."

Facebook says it has addressed this issue by building a system that limits how many times a message can be forwarded on WhatsApp, thereby reducing the risk of spam.

But nevertheless, experts aren’t convinced the situation is under control.

"Companies are playing catch-up, and in a way they are only now dealing with issues that were pertinent in 2016," said Liz Carolan, Director of Digital Action.

"Across platforms there is a trend of disinformation or fake news happening on closed platforms - in WhatsApp groups, and maybe in Messenger or in other forums.

"There is a whole lot happening on YouTube as we’ve seen lately. And they are only at the very early stages of figuring that out.

"So without a legislative framework and without the companies trying to think two steps ahead rather than respond to the latest scandal, I think we are always going to be chasing our tail on this."

Another tool that not everyone is yet convinced is working is Facebook’s new ad transparency system.

"We’ve seen examples where there is still a lack of clarity over where the administrators of these pages are based," said Padraic Ryan.

"Are they based here in Ireland, for instance, if they are trying to influence the Irish elections? We’ve a lack of clarity occasionally on who is paying for ads, where it seems as if it is one person but actually question marks remain over that."

The use of keywords to try to identify, track and analyse political and issue-based ads on Facebook platforms in the run-up to polling day is also a concern.

The company has used the results of Eurobarometer surveys to identify the pertinent issues in each territory, but accepts the localised lists it has drawn up are not exhaustive.

"In a way they are relying on robots to determine what is political in an election that is happening in 28 countries, and it is very hard for them without them having to really rethink their infrastructure and the way that they let political ads and political activity happen on their platform," said Liz Carolan.

"They are missing some content quite a bit, and also it is unclear whether or not Instagram ads, which in Spain’s election at the weekend were actually a big part of the campaigning that happened there, are included.

"It is a better system than was in place last year for the referendum and they have listened to people’s concerns.

"But it is just very very slow and it is not backed by legislation."

Despite some scepticism though, Facebook remains confident its war room and its soldiers are ready to take the fight to those seeking to disrupt the outcome.

"Yes there will be some attackers," Facebook’s Richard Allan said.

"But we will keep those to the minimum possible. It would be foolish to say to someone there will be no problem. When people are setting out to attack your system, you don’t control the extent to which they are going to attack it.

"So they are going to come. But the people here in this room in Dublin, these teams, these world leading experts in areas like security and misinformation give us the best hope to keep that down to the absolute minimum and make sure there are no easy ways onto our platform for those people that are trying to attack and disrupt the democratic process."