ChatGPT, the artificial intelligence tool, has left users in awe of its ability to engage in seemingly nuanced conversations. It generates text responses to user requests based on vast amounts of data from the internet.
Ask it to describe itself and ChatGPT will tell you "it uses deep learning techniques to generate human-like responses to text inputs in a conversational manner."
"This will change our world," billionaire philanthropist Bill Gates has said of ChatGPT.
Yet Twitter owner Elon Musk, who was one of the founders of OpenAI, the company behind ChatGPT, has raised concerns.
"We are not far from dangerously strong AI," he said at a recent public appearance. "One of the biggest risks to the future of civilisation is AI. I think we need to regulate AI safety, frankly."
In Ireland, concerns about ChatGPT's potential effects on education and academia have been expressed by both Trinity College Dublin and University College Cork.
The Central Bank of Ireland has banned its employees from using the technology over cyber security concerns.
So should we be concerned about how AI is developing?
As part of The Conversation from RTÉ's Upfront with Katie Hannon, we asked two experts to join our WhatsApp group to discuss the topic.
Barry O'Sullivan is a full professor at the School of Computer Science and Information Technology at University College Cork.
Nick Bostrom is the founding director of the Future of Humanity Institute at Oxford University. His 2014 book Superintelligence: Paths, Dangers, Strategies was a New York Times bestseller that helped spark a global conversation about the future of AI.
Barry O'Sullivan: AI, the technology behind all those computer systems that perform tasks we normally think of as requiring human intelligence, has been around for a long time, with fielded systems going back to the 1960s.
Today we constantly come into contact with AI technology: when we search for information online, when we're recommended a movie or book, or when a ride-sharing app decides which driver to match us with and how much our journey will cost.
I’m an AI optimist and believe that the technology has had, on the whole, a positive impact on our lives.
Nick Bostrom: I, too, believe that the impact of AI has so far been overwhelmingly on the positive side and I hope this will continue to be the case. I believe, however, that future developments in AI will pose significant risk, including existential risk, especially when we develop systems that exceed humans in general intelligence. I think there's a decent chance we'll develop superintelligence* within the lifetime of many people reading this.
[*Editor's note: Superintelligence refers to an intellect, for example a computer system, whose intelligence greatly exceeds that of humans.]
Barry O’Sullivan: In my mental risk register for AI, the risks posed by a superintelligence are much lower than the risks associated with the other issues that we need to address.
I'm comforted by the focus on ethics, responsible AI, human control and the various regulatory developments.
I'm more concerned by the current broad level of understanding of AI. I was involved in a local AI skills report last year, and I believe we need to focus on education, training, information and upskilling at all levels, from the general public and policy-makers to those innovating with and using systems that incorporate AI, and beyond.
Nick Bostrom: How do you measure the risks in your mental risk register? Is it the current risk level or total risk integrated over time far into the future? Is it probability of something bad happening or probability times magnitude of consequences?
Barry O’Sullivan: Our focus needs to be on ensuring that society, writ large, achieves the greatest benefit from what AI has to offer.
As with other information technology, we need to ensure that people have the skills they need to use and evaluate AI and their relationship with it.
Nick Bostrom: Focusing only on potential benefits seems one-sided? And while I'm all in favour of users having the right skills, I think this will be insufficient to ensure a good outcome.
I'm still curious what you mean when you say that the risks posed by superintelligence are much lower than other risks.
Some other risks may be more immediate, or have a higher probability of materialising, but some high-consequence scenarios are worth serious thought and preparation even if they are still years away and clouded in uncertainty.
Barry O’Sullivan: I certainly don’t want to give the impression that I underestimate the risks posed by the misuse or poor use of AI.
I was the Vice Chair of the European Commission’s High-Level Expert Group on AI which developed the EU’s framework for trustworthy AI and I’ve also worked on Track II diplomatic efforts related to restricting the use of AI in military settings.
But I see AI systems as engineered systems, engineered by human beings and under the control of human beings.
We are in control of getting things right and I believe we will.
Nick Bostrom: I hope you are right. I see three big areas of challenge as we glance ahead at the transition to the machine intelligence era.
First, an alignment* problem. We need scalable methods for AI control, a hard and as-yet unsolved technical problem.
[*Editor’s note: Alignment research is the field of study dedicated to ensuring that artificial intelligence is beneficial to humans.]
Many of the smartest people I know are now working on that.
Second, a governance problem. Very multi-pronged, including issues of fairness, peace, and democracy. But also preventing malevolent actors from using AI tools to develop bioweapons etc.
And third, an empathy problem. As we build increasingly sophisticated digital minds, at some point some AIs themselves may become subjects that have forms of moral status, perhaps initially analogous to some nonhuman animals.
A lot of things to think about!
Barry O’Sullivan: I agree that alignment and ensuring AI systems are built to reflect our own values is extremely important.
Certainly, the whole question of governance is critically important, both in the civilian domain and in the military setting, given the risks of escalating conflicts and the need for conformity with international humanitarian law and the UN Charter.
I must say I do not see a future in which some form of AI will have moral status, personhood or anything of that form.
AI systems are computer systems and I see topics such as AI consciousness or sentience as science fiction.
Nick Bostrom: I suspect the line between science fiction and reality might blur quite a bit over the coming years. Already to an old fogey like me, it often seems like we've been catapulted into the strange futuristic world of some author's overactive imagination.
My sense is that the field of AI moral status is today roughly where the area of AI alignment was ten or twelve years ago - still fringe but about to burgeon into an active research field and eventually to fructify into a larger social conversation.
This could happen surprisingly quickly as people begin to have personal interactions with sophisticated chatbots fine-tuned to maximise social engagement.
But we shall see.
I've enjoyed our conversation, Barry. In closing I just want to say to our audience that although I'm commonly cast in the role of AI doomer, I'm actually at the same time very eager and excited for the vast potential upsides of this transformative technology.
Barry O’Sullivan: Thanks, Nick. I also very much enjoyed it and hope our paths will cross again in the real world.