A sizable majority of Irish compliance professionals think criminals are outpacing companies and regulators in their exploitation of artificial intelligence.
A survey by the Compliance Institute found that 77% of professionals in the field think firms and regulators are behind the curve.
A further 20% believe criminals are "somewhat" ahead of others at the moment.
"We're talking about cyber crime here which is now the number one crime in Ireland," said Michael Kavanagh, CEO of the Compliance Institute.
"We're looking at deep fake audio and video impersonations... generating convincing forged documents, IDs, payslips etcetera. More alarming now, building deep psychological profiles of individuals and scraping social media accounts... all of that across the board," he said.
The survey also found that almost a third of professionals believe that the companies using AI ultimately bear responsibility for any errors that might occur.
That compares to just 11% who think responsibility lies with the tech's developers, and 3% who feel it is down to regulators.
More than half of respondents felt responsibility for such errors should be shared among all stakeholders.
"The onus is on the company as opposed to maybe the vendor of the AI system or the regulator to put the processes in place," said Mr Kavanagh. "So they need procedures and they need processes in place."
He said there had not yet been a legal test of where responsibility lies when it comes to AI errors, but companies are generally required to stand over their own systems and processes.
As a result, he said it was important that firms were more proactive in training staff - and that they do not entirely cede important tasks to computers.
"You need to constantly update your training of staff in this whole area, and continually stress-testing the AI system in place," he said. "Moreso, human intervention is needed - human oversight is needed.
"The machines have not taken over yet."
Firms that fail to do so face significant reputational risk, he said, while an over-reliance on AI could also expose them to financial threats.
"The reputational risk of companies is enormous and indeed the legal liability," he said. "But also the regulatory fines - we're going to see more regulatory fines in this space as well by the various parties under the AI Act."