There’s a new AI chatbot to check out — provided the servers that host it aren’t down from overwhelming traffic.
Since ChatGPT launched last week, more than a million people have signed up to use it, according to OpenAI’s president, Greg Brockman. It’s a funny, inventive, engaging, and totally untrustworthy conversation partner, and I highly recommend you check it out when the servers aren’t staggering under the load.
Other writers have had a ball getting ChatGPT to, say, write a rap battle between antibodies and small molecule groups, or a Seinfeld script where Jerry learns about the bubble sort algorithm. But there’s no funny AI-generated text here for you today, just some thoughts on ChatGPT and where we’re headed.
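(For anyone who skipped that day of computer science class: bubble sort, the algorithm Jerry learns about in that script, is the classic beginner’s sorting routine — simple, famously inefficient, and easy to explain in a sitcom. A minimal sketch in Python:)

```python
def bubble_sort(items):
    """Sort a list in place by repeatedly swapping adjacent out-of-order pairs."""
    n = len(items)
    for i in range(n):
        swapped = False
        # Each pass "bubbles" the largest remaining element to the end.
        for j in range(n - 1 - i):
            if items[j] > items[j + 1]:
                items[j], items[j + 1] = items[j + 1], items[j]
                swapped = True
        if not swapped:  # No swaps means the list is already sorted.
            break
    return items
```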
A few weeks ago, I wrote about the stunning recent advances in AI, and I quoted former Google executive Mo Gawdat, who tells the story of how he became concerned about general AI after he saw robotics researchers working on an AI that could pick up a ball: After many failures, the AI grabbed the ball and held it up to the researchers, eerily humanlike. “And I suddenly realized this is really scary,” Gawdat said. “It completely froze me. … The reality is we’re creating God.”
Many people working on AI systems have had a moment like that at one point or another over the past few years — a moment of awe mixed with dread when it suddenly became clear to them that humanity is on the verge of something truly enormous. But for the general public, before 2022, there was little chance to come face to face with what AI is capable of. It was possible to play with OpenAI’s GPT-3 model, but on a relatively inaccessible site with lots of confusing user settings. It was possible to talk with chatbots like Meta’s BlenderBot, but BlenderBot was really, really dumb.
So ChatGPT is the general public’s first hands-on introduction to how powerful modern AI has gotten, and as a result, many of us are having our version of the Gawdat moment. ChatGPT, by default, sounds like a college student producing an essay for class (and its most immediate implication is that such essays will likely become a thing of the past).
But it doesn’t have to sound like that; tell it to clean up its essays in the New Yorker house style, and it writes better. Tell it to write Shakespeare, and it’ll try (the cadence of anything meant to be spoken is generally not very good, so good luck with iambic pentameter). It is particularly good for rephrasing great philosophers or great works of literature in the vernacular of a 1920s mobster or a 1990s rapper; it can be funny, though it’s never clear how intentionally. “This is big,” I have heard from multiple people who were previously AI-skeptical.
The First Law: Don’t get canceled
It’s still far from perfect. Despite OpenAI’s best efforts, ChatGPT still frequently makes up nonsense, and it can still be coaxed into saying racist or hateful things. And as part of a desperate effort to train the system not to say racist and hateful things, OpenAI also taught it to respond with silly or evasive answers to any question that might even touch on a controversial topic.
If asked a basic factual question such as “are men typically taller than women?” (they are), ChatGPT will sometimes, though not reliably, claim that it’s offensive to “make generalizations about any group of people based on their gender.” If asked about difficult topics, it immediately insists at length that it is just a language model trained by OpenAI, with no beliefs or opinions — and yet at other times, if prompted cleverly, it will happily express beliefs and opinions.
It’s not hard to see why OpenAI did its best to make ChatGPT as inoffensive as possible, even if getting around those limits is eminently doable. No reputable AI company wants its creation to start spewing racism at the drop of a hat, as Microsoft’s Tay chatbot infamously did in 2016. If OpenAI trained its system using some Isaac Asimov-style Laws of Robotics, the first law is definitely “don’t embarrass OpenAI.”
A glimpse into what’s ahead for us
But if ChatGPT is flawed, it’s smart enough to be useful despite its flaws. And many of the flaws will be edited away with more research and effort — quite possibly very soon, with the next major language model from OpenAI just weeks or months away.
“The piece of this that just makes my brain explode ... is that ChatGPT is not even OpenAI’s best AI chatbot,” the New York Times’s Kevin Roose said this week on the Times tech podcast Hard Fork. “Right now, OpenAI is developing the next version of its large language model, GPT-4, and if you talk to people in Silicon Valley who work in AI research, they kind of talk about this like it’s magic.”
Silicon Valley’s biggest names have been entirely candid about why they’re doing this and where they think it’s headed. The aim is to build systems that surpass humans in every respect and thereby fundamentally transform humanity’s future, even though that comes with a real chance of wiping us out if things go wrong. “ChatGPT is scary good. We are not far from dangerously strong AI,” Elon Musk tweeted earlier this month. OpenAI CEO Sam Altman offered qualified agreement, replying, “i agree on being close to dangerously strong AI in the sense of an AI that poses e.g. a huge cybersecurity risk. and i think we could get to real AGI in the next decade, so we have to take the risk of that extremely seriously too.”
There’s been a tendency to dismiss such claims as meaningless hype; after all, every startup in Silicon Valley claims that it’s going to transform the world, and the field of AI has been marked by summers of optimism followed by winters of dashed hopes. But ChatGPT makes it clear that behind the hype and the fear, there’s at least a little — and maybe a lot — of substance.
A version of this story was initially published in the Future Perfect newsletter.