“Send a Baby to Mars!” and other AI-generated petitions

An AI was asked to write petitions to the government. Here’s what it came up with.

Activists hold petitions in front of the White House during a protest on September 26, 2013, in Washington, DC. 
Brendan Smialowski/AFP/Getty Images
Kelsey Piper is a senior writer at Future Perfect, Vox’s effective altruism-inspired section on the world’s biggest challenges. She explores wide-ranging topics like climate change, artificial intelligence, vaccine development, and factory farms, and also writes the Future Perfect newsletter.

You may have heard of Change.org — it’s a site where people can submit petitions to the government or to private companies, vote on other users’ petitions, and bring attention to the pressing issues of the day. More than 1,000 petitions are uploaded every day, demanding everything from regulatory approval of a cancer treatment trial for a sick baby to a remake of Game of Thrones season eight “with competent writers.”

It’s a great tour of what people care about — which also makes it a potentially interesting data set. What if you took the petitions on the site, fed them into a smart AI, and had the AI generate its own petitions?

That’s what Janelle Shane, a research scientist in optics, did last week with the publicly available version of OpenAI’s GPT-2 language generation system. Vox has covered GPT-2 before — the system achieves state-of-the-art performance on some important tasks in natural language processing, writes decent poetry, and has attracted controversy due to OpenAI’s decision not to release the full version while it contemplates countermeasures to prevent its use for spam, Amazon review fraud, and fake news.

Shane’s use of GPT-2 (albeit a scaled-down version of it) might be the funniest yet. She fed GPT-2 petitions from Change.org, then asked it to generate its own petitions based on what it learned. Here are some of the petitions the AI came up with:

  • Everyone: Put the Bats on YouTube!
  • Help Bring Climate Change to the Philippines!
  • Taco, Chipotle, and Starbucks: Bring Back Lettuce Fries
  • Mr.person: I want a fresh puppy in my home
  • Sign Petition for Houston’s New Fireworks to be Offensive
  • Donald Trump: Change the name of the National Anthem to be called the “Fiery Gator”
  • Taco Bell: Offer hot wings and non-perfumed water for all customers
  • Dogs are not a thing!! Dog Owners are NOT Human beings!!
  • Filipinos: We want your help stopping the killing of dolphins in Brazil in 1970’s
  • Officials at Prince Alfred Hospital: Aurora to Tell The Company To Send A Baby to Mars
  • Kim Hsu: Tougher Penalties for Pedestrians and Elephants on City Street in Austin Texas

Not quite a serious batch of proposals! If nothing else, GPT-2 is hard to beat for surrealism; it often feels like it’s reporting from an alternate universe that works sort of like ours but is ... just a little off.

But amusing as the results may have been, we should be careful not to jump to too many conclusions about AI from Shane’s project — even though we can learn some things from it.

The public loves AI systems we can talk to

From the very first chatbots — automated systems that can carry out a “conversation” with human beings — AI systems that talk to us have played an outsize role in our understanding of AI.

The earliest chatbot, ELIZA, reads from a script and sounds like a therapist — albeit one who is not very good at listening. It follows absurdly simple rules to generate its comments and replies, but it has struck a chord with people since it came out in the 1960s.

Today, you can talk to language AI systems that are a great deal more sophisticated. OpenAI’s GPT-2 system, unveiled in February, is just the most recent example.

The way GPT-2 works is that after being fed a text prompt or passage, it predicts the next words, using sophisticated new techniques that allow it to maintain more continuity across paragraphs than earlier language systems have been capable of. Modern machine learning techniques are leaps and bounds beyond what ELIZA and those early chatbots could do. (You can sample a scaled-down version of GPT-2 online.)
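GPT-2 itself is a large neural network, but the generation loop described above — predict a likely next word, append it, repeat — can be sketched with a toy stand-in model. The snippet below (a hypothetical illustration, not OpenAI’s code) uses simple bigram counts from a tiny petition-flavored corpus in place of the neural network:

```python
import random
from collections import Counter, defaultdict

# Toy illustration of autoregressive text generation. GPT-2 uses a large
# transformer to score candidate next words; here, bigram counts from a
# tiny made-up corpus stand in for that model. The generation loop --
# predict, append, repeat -- is the same idea.

corpus = (
    "sign the petition to save the dolphins . "
    "sign the petition to save the bats . "
    "sign the petition to bring back lettuce fries ."
).split()

# Count which words follow which word (the "model").
next_words = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    next_words[prev][nxt] += 1

def generate(prompt, length=8, seed=0):
    """Extend the prompt by sampling next words in proportion to
    how often they followed the previous word in the corpus."""
    rng = random.Random(seed)
    words = prompt.split()
    for _ in range(length):
        counts = next_words.get(words[-1])
        if not counts:  # dead end: no word ever followed this one
            break
        choices, weights = zip(*counts.items())
        words.append(rng.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("sign the petition"))
```

A real language model replaces the bigram table with a network that conditions on the whole preceding passage, which is what lets GPT-2 stay (somewhat) on topic across paragraphs rather than just from one word to the next.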

People have wielded GPT-2 to write poetry and fiction, comment on current events, and run blogs that imitate the style of various writers. It’s pretty impressive, and you should check it out (the publicly available version, anyway).

But there are some things to keep in mind, if you’re inclined to play around with GPT-2 and draw conclusions about the state of the field.

First, the “big,” capable versions of GPT-2 — the ones that actually represent state-of-the-art performance — haven’t been released to the public yet, so all the poetry and silliness generated thus far was created with much weaker versions released for public sampling. The fact that the full version of GPT-2 can do even more impressive things should warn us against complacency.

Second, the state of AI today is really confusing. Even among the researchers working directly at the forefront of the field, there are profound disagreements over how impressive recent advances are. Some people think that deep learning really represents a fundamental transformation of the field, capable of getting us to AIs with broad, general human-like capabilities.

Others expect the gains from deep learning to be exhausted in the next few years, and expect that AIs with human-like abilities are centuries away. With this level of expert disagreement, the rest of us shouldn’t be too confident in any conclusion we draw from a few pages of GPT-2 output — no matter how compelling, silly, accurate, or outrageously inaccurate.

But while we can’t learn much about the state of AI from the language AIs that are publicly available, I have a lot of sympathy for the human desire to understand what AIs are capable of by, well, talking with them.

It makes sense that when we’re trying to assess AI, we assess how well it reasons, how much of a chain of logic it can hold in its “mind,” how good it is at guessing how we’ll respond to its arguments.

There’s a reason Alan Turing proposed that we evaluate an AI’s intelligence by evaluating whether it can imitate a person. And while the Turing test is not the best possible way of making the distinction Turing saw as so important, it’s one that has stood the test of time much better than claims that AIs will never beat us at chess or Go.

So while there are limits to what we can learn about AI from language systems like ELIZA and Shane’s project, they do go some way toward familiarizing us with AI — which is especially important in a period when the technology seems to be evolving at a bewildering pace.

Just remember that AIs are frequently superhuman in some respects while being amazingly incompetent in others, and that we will continue being surprised by what turns out to be possible.

As GPT-2 assured me in a conversation I had with it this weekend, “AI will remain a mysterious thing, far from anyone’s control, but we should at least want to think about it. And as AI becomes more and more reliable through the use of machines, we need not worry so much about what AI might decide to do with us.”
