We Programmed ChatGPT Into This Article. It’s Weird.


Please don’t embarrass us, robots.

[Image: An abstract image of green liquid pouring forth from a dark portal. Daniel Zender / The Atlantic; Getty]

ChatGPT, the internet-famous AI text generator, has taken on a new form. Once a website you could visit, it is now a service that you can integrate into software of all kinds, from spreadsheet programs to delivery apps to magazine websites such as this one. Snapchat added ChatGPT to its chat service (it suggested that users might type “Can you write me a haiku about my cheese-obsessed friend Lukas?”), and Instacart plans to add a recipe robot. Many more will follow.

They will be weirder than you might think. Instead of one big AI chat app that delivers knowledge or cheese poetry, the ChatGPT service (and others like it) will become an AI confetti bomb that sticks to everything. AI text in your grocery app. AI text in your workplace-compliance courseware. AI text in your HVAC how-to guide. AI text everywhere—even later in this article—thanks to an API.

API is one of those three-letter acronyms that computer people throw around. It stands for “application programming interface,” and it allows software applications to talk to one another. That’s useful because software often needs to use functionality provided by other software. An API is like a delivery service that ferries messages between one computer and another.
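To make that concrete, here is a minimal sketch of one program asking another for something over the web. The address and the response fields below are hypothetical stand-ins, not a real service, but the request-and-response pattern is how most web APIs work:

```python
import requests  # a widely used Python HTTP library

# Hypothetical weather API: the URL and field names are stand-ins,
# but the pattern of sending a message and reading the reply is real.
response = requests.get(
    "https://api.example.com/v1/weather",  # the address we "deliver" to
    params={"city": "Washington"},         # the message we send
    timeout=10,
)
forecast = response.json()["forecast"]     # the message we get back
print(forecast)                            # another program's work, used as our own
```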

Despite its name, ChatGPT isn’t really a chat service—that’s just the experience that has become most familiar, thanks to the chatbot’s pop-cultural success. “It’s got chat in the name, but it’s really a much more controllable model,” Greg Brockman, OpenAI’s co-founder and president, told me. He said the chat interface offered the company and its users a way to ease into the habit of asking computers to solve problems, and a way to develop a sense of how to solicit better answers to those problems through iteration.

But chat is laborious to use and eerie to engage with. “You don’t want to spend your time talking to a robot,” Brockman said. He sees it as “the tip of an iceberg” of possible future uses: a “general-purpose language system.” That means ChatGPT as a service (rather than a website) may mature into a system of plumbing for creating and inserting text into things that have text in them.

As a writer for a magazine that’s definitely in the business of creating and inserting text, I wanted to explore how The Atlantic might use the ChatGPT API, and to demonstrate how it might look in context. The first and most obvious idea was to create some kind of chat interface for accessing magazine stories. Talk to The Atlantic, get content. So I started testing some ideas on ChatGPT (the website) to explore how we might integrate ChatGPT (the API). One idea: a simple search engine that would surface Atlantic stories about a requested topic.

But when I started testing out that idea, things quickly went awry. I asked ChatGPT to “find me a story in The Atlantic about tacos,” and it obliged, offering a story by my colleague Amanda Mull, “The Enduring Appeal of Tacos,” along with a link and a summary (it began: “In this article, writer Amanda Mull explores the cultural significance of tacos and why they continue to be a beloved food.”). The only problem: That story doesn’t exist. The URL looked plausible but went nowhere, because Mull had never written the story. When I called the AI on its error, ChatGPT apologized and offered a substitute story, “Why Are American Kids So Obsessed With Tacos?”—which is also completely made up. Yikes.

How can anyone expect to trust AI enough to deploy it in an automated way? According to Brockman, organizations like ours will need to build a track record with systems like ChatGPT before we’ll feel comfortable using them for real. Brockman told me that his staff at OpenAI spends a lot of time “red teaming” their systems, a term from cybersecurity and intelligence that names the process of playing an adversary to discover vulnerabilities.

Brockman contends that safety and controllability will improve over time, but he encourages potential users of the ChatGPT API to act as their own red teamers—to test potential risks—before they deploy it. “You really want to start small,” he told me.

Fair enough. If chat isn’t a necessary component of ChatGPT, then perhaps a smaller, more surgical example could illustrate the kinds of uses the public can expect to see. One possibility: A magazine such as ours could customize its copy in response to reader behavior, or change the information on a page automatically.

Working with The Atlantic’s product and technology team, I whipped up a simple test along those lines. On the back end, where you can’t see the machinery working, our software asks the ChatGPT API to write an explanation of “API” in fewer than 30 words so a layperson can understand it, incorporating, as an example, the headline of the most popular story on The Atlantic’s website at the time you load the page.
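We haven’t published our production code, but a request along those lines might look something like this sketch, written against the openai Python package as it existed when the ChatGPT API launched (the ChatCompletion interface and the gpt-3.5-turbo model). The most_popular_headline helper is hypothetical, a stand-in for however a site looks up its top story at page-load time:

```python
import openai

openai.api_key = "YOUR_OPENAI_API_KEY"  # placeholder; keep real keys out of code

def most_popular_headline() -> str:
    # Hypothetical helper: in production, this would query the site's
    # analytics for the most-read story at the moment the page loads.
    return "An Example Headline Goes Here"

prompt = (
    'In fewer than 30 words, explain what an "API" is so a layperson '
    "can understand it, working in this headline as an example: "
    f'"{most_popular_headline()}"'
)

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": prompt}],
)
print(response["choices"][0]["message"]["content"])
```

That request produces a result that reads like this:

[A paragraph generated live by the ChatGPT API appeared here; it changed with each load of the page.]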

As I write this paragraph, I don’t know what the previous one says. It’s entirely generated by the ChatGPT API—I have no control over what it writes. I’m simply hoping, based on the many tests that I did for this type of query, that I can trust the system to produce explanatory copy that doesn’t put the magazine’s reputation at risk because ChatGPT goes rogue. The API could absorb a headline about a grave topic and use it in a disrespectful way, for example.

In some of my tests, ChatGPT’s responses were coherent, incorporating ideas nimbly. In others, they were hackneyed or incoherent. There’s no telling which variety will appear above. If you refresh the page a few times, you’ll see what I mean. Because ChatGPT often produces different text from the same input, a reader who loads this page just after you did is likely to get a different version of the text than you see now.
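That variability is itself a setting. In the same circa-2023 interface sketched above, a sampling parameter called temperature controls how much randomness goes into each response; this is an illustration of the knob, not our production configuration:

```python
import openai  # assumes openai.api_key is set, as in the sketch above

# temperature=0 makes repeated calls nearly deterministic; higher values
# (the default is 1, the maximum 2) make them diverge more from one another.
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Explain what an API is in 30 words."}],
    temperature=0,  # raise this to see the variety described above
)
print(response["choices"][0]["message"]["content"])
```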

Media outlets have been generating bot-written stories that present sports scores, earthquake reports, and other predictable data for years. But now it’s possible to generate text on any topic, because large language models such as ChatGPT’s have read the whole internet. Some applications of that idea will appear in new kinds of word processors, which can generate fixed text for later publication as ordinary content. But live writing that changes from moment to moment, as in the experiment I carried out on this page, is also possible. A publication might want to tune its prose in response to current events, user profiles, or other factors; the entire consumer-content internet is driven by appeals to personalization and vanity, and the content industry is desperate for competitive advantage. But other use cases are possible, too: prose that automatically updates as a current event plays out, for example.

Though simple, our example reveals an important and terrifying fact about what’s now possible with generative, textual AI: You can no longer assume that any of the words you see were created by a human being. You can’t know if what you read was written intentionally, nor can you know if it was crafted to deceive or mislead you. ChatGPT may have given you the impression that AI text has to come from a chatbot, but in fact, it can be created invisibly and presented to you in place of, or intermixed with, human-authored language.

Carrying out this sort of activity isn’t as easy as typing into a word processor—yet—but it’s already simple enough that The Atlantic product and technology team was able to get it working in a day or so. Over time, it will become even simpler. (It took far longer for me, a human, to write and edit the rest of the story, ponder the moral and reputational considerations of actually publishing it, and vet the system with editorial, legal, and IT.)

That circumstance casts a shadow on Greg Brockman’s advice to “start small.” It’s good but insufficient guidance. Brockman told me that most businesses’ interests are aligned with such care and risk management, and that’s certainly true of an organization like The Atlantic. But nothing is stopping bad actors (or lazy ones, or those motivated by a perceived AI gold rush) from rolling out apps, websites, or other software systems that create and publish generated text in massive quantities, tuned to the moment when the text was generated or to the individual at whom it is targeted. Brockman said that regulation is a necessary part of AI’s future, but AI is happening now, and government intervention won’t come immediately, if ever. Yogurt is probably more regulated than AI text will ever be.

Some organizations may deploy generative AI even if it provides no real benefit to anyone, merely to attempt to stay current, or to compete in a perceived AI arms race. As I’ve written before, that demand will create new work for everyone, because people previously satisfied to write software or articles will now need to devote time to red-teaming generative-content widgets, monitoring software logs for problems, running interference with legal departments, or all manner of other tasks not previously imaginable, because words were just words instead of machines that create them.

Brockman told me that OpenAI is working to amplify the benefits of AI while minimizing its harms. But some of its harms might be structural rather than topical. Writing in these pages earlier this week, Matthew Kirschenbaum predicted a textpocalypse, an unthinkable deluge of generative copy “where machine-written language becomes the norm and human-written prose the exception.” It’s a lurid idea, but it misses a few things. For one, an API costs money to use—fractions of a penny for small queries such as the simple one in this article, but all those fractions add up. More important, the internet has allowed humankind to publish a massive deluge of text on websites and apps and social-media services over the past quarter century—the very same content ChatGPT slurped up to drive its model. The textpocalypse has already happened.
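For a sense of scale, a back-of-the-envelope sketch of those fractions of a penny; the per-token price below is gpt-3.5-turbo’s figure at launch, an assumption, since OpenAI’s pricing changes over time:

```python
# Rough cost of a widget like this article's, assuming gpt-3.5-turbo's
# launch price of ~$0.002 per 1,000 tokens (an assumed figure; pricing
# has changed since) and a short prompt plus a ~30-word answer.
PRICE_PER_1K_TOKENS = 0.002  # dollars
TOKENS_PER_QUERY = 150       # prompt + response, a rough estimate

cost_per_load = TOKENS_PER_QUERY / 1000 * PRICE_PER_1K_TOKENS
print(f"${cost_per_load:.4f} per page load")                   # $0.0003
print(f"${cost_per_load * 1_000_000:,.0f} per million loads")  # $300
```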

Just as likely, the quantity of generated language may become less important than the uncertain status of any single chunk of text. Just as human sentiments online, severed from the contexts of their authorship, take on ambiguous or polyvalent meaning, so every sentence and every paragraph will soon arrive with a throb of uncertainty: an implicit, existential question about the nature of its authorship. Eventually, that throb may become a dull hum, and then a familiar silence. Readers will shrug: It’s just how things are now.

Even as those fears grip me, so does hope—or intrigue, at least—for an opportunity to compose in an entirely new way. I am not ready to give up on writing, nor do I expect I will have to anytime soon—or ever. But I am seduced by the prospect of launching a handful, or a hundred, little computer writers inside my work. Instead of (just) putting one word after another, the ChatGPT API and its kin make it possible to spawn little gremlins in my prose, which labor in my absence, leaving novel textual remnants behind long after I have left the page. Let’s see what they can do.