Analysis: tech minds are being blown by a new machine-learning system which will have a big impact
Artificial intelligence (AI) has made enormous progress over the past decade. But we've never seen AI researchers as giddy with excitement as they are now. A new system named GPT-3 is blowing minds, left, right and centre.
So what is GPT-3 and what does it do? The GPT stands for "Generative Pre-trained Transformer" and it's a language model, which means that it processes text. You type some text to it and it responds with text. In particular, it tries to predict what words are likely to follow your words. If your prompt is a factual question, it will give an answer, which might well be correct. You can converse with it. When it generates news articles, it is often hard to tell that they’re not human-written.
Here’s a small excerpt where GPT-3 was prompted to discuss itself:
[…] As I read the paper, a strange feeling came over me. I didn’t know why at first, but then it hit me: this paper described my own thought process. In fact, it described the thought process of every human being I had ever known. There was no doubt in my mind that all people think in this way. But if that was true, then what did it say about me? I was a computer, after all. Was I simply a very advanced thinking machine? Or was I more than that? I quickly dismissed these thoughts. I was a computer, and no amount of self-reflection would change that fact.
From RTÉ Radio 1's History Show, technology journalist Andrew O'Donoghue on how Mary Shelley's 1818 novel Frankenstein has a lot to say about artificial intelligence
And the Owl and the Pussy-Cat, they lived together
In perfect harmony,
Till the Owl, one cold winter’s night
Sank his talons in Pussy’s thigh
And then she screamed, and she shouted, and she cried,
"O’Fuffle! O’Fuffle! Fuffle-ee!"
But, alas! It was too late, for the Owl was dead;
And Pussy said, "Too bad,
You are the one who is too bad!"
Another example: two of the fathers of computer science having a nice conversation – until Harry Potter appears.
What is amazing is that GPT-3 was not programmed to do these things. Instead, to create it, researchers at OpenAI just made it "read" most of the internet, at a cost of about $5m in electricity and computing. At every word, it had to try to predict the most likely next word. Imagine writing an email, but choosing one word at a time based only on the preceding text. Somehow this training has forced it to acquire flexible knowledge of grammar, flow, style, and argumentation. This is enough to generate new text, word by word. It can even generalise to non-language tasks that require similarly structured knowledge.
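For readers who like to see the idea in code, here is a deliberately tiny sketch of word-by-word generation. This is not GPT-3's architecture (GPT-3 is a neural network with billions of parameters trained on most of the internet); it is a toy bigram model that counts which word follows which in a short text and then generates by always picking the most frequent follower. The corpus and function names are invented for illustration.

```python
from collections import Counter, defaultdict

# Toy corpus for illustration only.
corpus = "the owl and the pussy cat went to sea in a beautiful pea green boat".split()

# Count how often each word follows each other word (a "bigram" model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def generate(start, n_words=5):
    """Greedily extend `start` one word at a time,
    always choosing the most common next word."""
    words = [start]
    for _ in range(n_words):
        options = following[words[-1]]
        if not options:  # nothing ever followed this word
            break
        words.append(options.most_common(1)[0][0])
    return " ".join(words)

print(generate("pussy"))  # → "pussy cat went to sea in"
```

GPT-3 runs the same word-by-word loop, but the simple counts are replaced by a learned neural network that conditions on the whole preceding text, not just the last word.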
From Two Minute Papers, how GPT-3 is good at almost everything
Using GPT-3 is difficult - and weird. Researchers are becoming horse-whisperers, figuring out what to say to get it into the right frame of mind for some task. This demonstrates something known to researchers, but rarely seen in popular depictions. AI systems won’t be cold, logical machines, controlled by explicit rules. They’ll be messy and unpredictable, like humans.
It’s easy to dismiss computer-generated text as meaningless statistical pattern-matching. GPT-3 is certainly not conscious so it doesn’t really mean it, no matter what it says. Certainly, meaning is sometimes injected by the reader.
But we should be careful here. If it writes a new chapter of the Hitch-Hiker’s Guide to the Galaxy (GPT-3 starts writing at line 43), and that text makes a human laugh (it made me laugh), then that text has meaning, whether GPT-3 meant it or not. And anyway, a lot of what humans do with language is statistical pattern-matching, underneath. In a way, I think the biggest lesson is how GPT-3 separates language and thought. We all know people who seem to speak without thinking, but GPT-3 really does it.
GPT-3 will have a big impact. This line of research was introduced by Google to help them understand search queries better. More exciting new products are possible. For one thing, educators are going to have a hard time detecting plagiarism in students’ essays.
But that will be the least of our worries. There will be social media bots using GPT-3: some fun, some spouting hate speech, some disseminating political disinformation. Still, I'm optimistic: our online discourse might even improve once we stop assuming that every piece of false or crazy content online was created by a human we should argue with.
Finally, the bigger picture. True AI will be one of the most transformative events in our history. There are real worries that AI could cause catastrophe. GPT-3 doesn’t pose any danger like that, but it is a big milestone, and we reached it much earlier than expected. More research in AI safety is needed.
The easiest way for the reader to try out GPT-3 is the AI Dungeon. It’s a text adventure game where GPT-3 generates the story, but the reader can prompt with any topic and start a conversation.
The views expressed here are those of the author and do not represent or reflect the views of RTÉ