A group of 26 international experts has warned that rogue states, criminals and terrorists could use artificial intelligence (AI) to commit crimes, mount terror attacks and manipulate public opinion.

The report suggests that unless preparations are made now to prevent the malicious use of the technology, cybercrime will rapidly increase in years to come.

Artificial intelligence is increasingly being put to positive use in applications such as digital assistants, smartphones and autonomous vehicles.

But this study, written by experts and academics in AI, cybersecurity and technology, looks forward a decade to the threats AI could pose if certain potential risks are not addressed.

Sounding an alarm, the authors warn, for example, of a rapid growth in cybercrime fuelled by the technology.

These might include automated hacking, speech synthesis used to impersonate targets, finely targeted spam emails using information scraped from social media, or exploiting the vulnerabilities of AI systems themselves, the report states.

It also warns that AI could facilitate the rise of highly believable fake videos and intelligent bots, which could be used to manipulate news, public opinion, social media and elections.

The study also forecasts that AI could be deployed to hijack drones and autonomous vehicles, which could then be used in attacks or to hold critical infrastructure to ransom.

In addition, the report points to how AI-driven autonomous weapons systems could result in human control being lost in war situations and how the systems themselves could also be hacked.

Dr Sean Ó hÉigeartaigh, Executive Director of Cambridge University’s Centre for the Study of Existential Risk and one of the co-authors, said choices must be made if such concerns are to be addressed.

"We live in a world that could become fraught with day-to-day hazards from the misuse of AI and we need to take ownership of the problems – because the risks are real," he said.

"There are choices that we need to make now, and our report is a call-to-action for governments, institutions and individuals across the globe."

The study suggests a number of ways in which these concerns could be addressed. These include re-examining cybersecurity and promoting a culture of responsibility.

It also proposes exploring new models of openness in sharing information about AI and cybersecurity, and making AI researchers and engineers more aware that the technology is dual-use.

The co-authors of the report come from a wide range of organisations and disciplines.

These include Oxford University’s Future of Humanity Institute as well as Cambridge University’s Centre for the Study of Existential Risk.

OpenAI, a leading non-profit AI research company, the Electronic Frontier Foundation, the Center for a New American Security and other organisations also contributed.

"For many decades hype outstripped fact in terms of AI and machine learning. No longer," said Dr Ó hÉigeartaigh.

"This report looks at the practices that just don’t work anymore – and suggests broad approaches that might help: for example, how to design software and hardware to make it less hackable – and what type of laws and international regulations might work in tandem with this."