There is increasing consensus that artificial intelligence systems will change how we work and how we live.
With all change comes risk, but the spectrum of risk discussed in relation to artificial intelligence is remarkably broad. For some experts, the most urgent concern is that AI systems might be poorly calibrated and become biased against certain types of job or loan applicants.
For others, it's essential to consider now that AI could advance to the point where it develops superintelligence sufficient to manipulate people, gain power, usurp humanity and destroy the world.
Many experts stand somewhere in the middle, but all have at least some level of concern, as they told Mark Coughlan of RTÉ's Prime Time.
At the University of Cambridge, Dr Seán Ó hÉigeartaigh oversees the Centre for the Study of Existential Risk.
Every day he opens his Twitter app to check what new developments there have been in the rapidly changing world of artificial intelligence.
He's increasingly concerned that artificial intelligence systems are exhibiting powerful capabilities while the processes needed to control them are not being prioritised.
Over the last six months, the pace of improvement in AI tools that can generate text and images has stunned users and surprised top experts, including Dr Ó hÉigeartaigh.
"AI could help us with many of the big challenges that we face but the potential unintended consequences and potential misuses are also really quite large," he told Prime Time.
Degradation of collective trust
"If you can generate essays easily, then you can generate propaganda at scale targeted to different communities, which could help to foment unrest."
"You could create false images, false videos, and it might be very hard to tell if a person had actually said something, maybe a politician."
"It's proving to be more challenging to keep up with ways of identifying that kind of misuse than it is to push forward."
Multiple companies across the world now offer to produce digital duplicates – avatars – of individuals.
Users agree to be filmed and have their voice recorded. Once the recordings are processed, they can enter a script into a text box and click generate. The AI system will then create a video of the digital duplicate speaking the content of the script in the real person’s own voice, with their own body movements.
Many of the companies offering these services tightly limit the use of avatars to certain types of content, to try to prevent their use for misinformation or disinformation.
Crucially, users are limited to creating videos with avatars of people who have licensed their image and voice for such purposes. The typical customers are companies looking to quickly and easily update training videos.
Yet there are already websites that allow users to generate videos of well-known individuals – actors, journalists, television presenters, sportspeople and politicians – saying whatever the user wants, using similar technology.
Other systems enable users to type into a text box and produce photorealistic still images.
Prime Time added ‘fake’ to the image below. The moment portrayed never happened; the image was wholly generated by AI.
It was created in moments by asking an image-generating AI model to create ‘Joe Biden hugs Donald Trump in the White House Rose Garden, photorealistic, shot on a Canon Mkii, 32k, incredible level of detail.’
Just six months ago, it was not possible to generate such plausible fake images using AI tools.

The pace of development in image-generating artificial intelligence tools is arguably outpacing that of language-generation tools like ChatGPT, which have attracted more global attention.
Erosion of democracy
"Everything that we take to believe with all our senses is suddenly, apparently up for grabs," says Jessica Cecil.
Mrs Cecil is a former chief of staff at the BBC. She founded the Trusted News Initiative which brought together major news organisations and technology companies to try to combat misinformation.
"Democracy relies on voters being able to look at information, work out whether they believe it or not, and then have a shared dialogue around agreed facts," she told Prime Time.
"If you don't know whether you can trust a piece of information, and one person's apparent truth looks very different from another person's apparent truth, how can you possibly have shared dialogue?"
She says companies developing image- and text-generating systems need to come together and agree to a system of self-regulation, to ensure collective trust in information isn’t undermined by their products.
Regulating artificial intelligence is a mammoth task that governments and international organisations are currently trying to get to grips with.
Just this week, the EU AI Act, the first act globally to try to regulate artificial intelligence, passed another legislative stage in the European Parliament.
It would require producers to inform users when content is AI-generated. However, many believe that the motivations of entities spreading disinformation make individual pieces of legislation unlikely to be effective.
"This is a transnational issue and it's moving very, very fast," said Jessica Cecil.
"It's incredibly difficult for regulators to keep up with what's going on and that's why self-regulation is really, really vital."
Dr Seán Ó hÉigeartaigh says the risks also extend beyond AI-generated disinformation campaigns.
"We do need to start worrying about the possibility that we just have systems are able to out-think us, at least in some domains."
The rise of machines
Similar concerns were expressed earlier this month by Geoffrey Hinton, the so-called ‘Godfather of AI’.

A leading figure in the development of the systems that underpin models like ChatGPT, and a long-time advocate for the technology, he left his job at Google and spoke out in the media.
He warned that insufficient attention is being paid to mitigating potential negative outcomes because major companies are racing to outdo each other with launches of new systems.
Professor Hinton told Wired he was worried that AI systems could be getting closer to outsmarting humans, and may then seek more control.
"What really worries me is that you have to create subgoals in order to be efficient, and a very sensible subgoal for more or less anything you want to do is to get more power - get more control," he said.
Such views were previously expressed mainly by a community of AI researchers, academics and analysts who focus on the danger of the ‘singularity’.
It’s the hypothesis that a technology could advance beyond a tipping point, gaining the capability to upgrade and develop itself and to define its own goals – in pursuit of which it could subvert and destroy humanity.
A leading voice from that viewpoint is Eliezer Yudkowsky, a researcher at the nonprofit Machine Intelligence Research Institute.
Hinton told Wired he recently watched a speech by Yudkowsky. "I listened to him thinking he was going to be crazy. I don't think he's crazy at all."
In 2022, researchers asked 4,000 AI experts to estimate the percentage chance that AI systems could get out of control in the future and cause human extinction. The median answer was 10%.
That survey was published in August 2022, prior to the launch of ChatGPT and the recent surge in new AI systems.
Others are worried that the technology could inadvertently cause major destruction in other ways.
Inadvertent military destruction
Professor David Chalmers is a philosopher at New York University, where he is co-director of the Center for Mind, Brain and Consciousness.
He told Prime Time he believes a reliable authentication system for deepfakes is needed "but deepfakes are going to be the least of it once you really have impressive artificial intelligences which are superhuman in their capacities."
"Merely faking human behavior, that's going to be easy mode for them."
One of his concerns relates to potential military application of artificial intelligence.
"Autonomous weapons turn out to be more efficient for many purposes than human controlled weapons. You can just give it an algorithm and they're not going to be bothered by pangs of conscience or distraction or hunger, and they'll often be more accurate."
"You just need to specify their goal just slightly in the wrong way, and these AI systems are going to have the capacity for things to go badly wrong. It's very fragile, I think, and potentially very dangerous."
Day-to-day bias and judgement
For many others, such concerns about singularity and autonomy in a hypothetical future risk missing the point.
"I think if we develop the technology responsibly, we can really have an extremely bright future and one that hopefully will save the environment," says Professor Barry O’Sullivan of University College Cork.
His concerns are more immediate. He’s worried AI systems could be misused in areas like facial recognition and application screening, where they are already in use in parts of the world.
"When these systems are making decisions about you and are judging what kind of person you are, that's where the rubber really hits the road."
"Is this automated welfare system going to disadvantage me by telling me I'm not entitled to something that I am entitled to? Or that bank loan system tell me I can't get a bank loan when in fact I should be able to get one?"
"Or the woman who goes for the job... and her CV isn't screened because the AI system hasn't been trained on women who are successful at that role."
"They're the things you need to care about and they're the things that are the greatest threat at the moment, not some kind of long-term existential threat."
A Prime Time special programme on artificial intelligence will be broadcast on Tuesday, May 16 at 9.35pm on RTÉ One.