What Facebook criticism can teach us about A.I. criticism


The danger of accepting an industry's terms for itself


I’ve found myself a little bit annoyed lately by some of the claims I’ve seen around the alleged “misinformation” threat posed by the latest generation of generative A.I. applications like Midjourney and ChatGPT. The A.I. researcher and critic Gary Marcus (to whom I’m generally very sympathetic) recently published in his newsletter an example of a jailbroken Bing chatbot spewing a QAnon narrative with fake “references,” and cautioned that the “potential for automatically generating misinformation at scale is only getting worse.” In the hothouse of Twitter, the claims and the warnings are even stronger, if somewhat vaguer.

This kind of response is obviously silly; we know quite well that you don’t need Midjourney or GPT-4 to create doctored photos or effective propaganda, and even if generative A.I. makes such production slightly more efficient, as Sayash Kapoor and Arvind Narayanan write, citing Seth Lazar,

“the cost of producing lies is not the limiting factor in influence operations.”

But beyond the basic point that generative A.I. doesn’t really change the economic or practical structures of misinformation campaigns, we know that deepfakes and “fake news”--whether or not they’re generated by LLMs--are not, in and of themselves, the cause or source of “misinformation” on a politically relevant scale. We know, moreover, that misinformation campaigns and operations have largely been inefficient failures on their own terms, and that misinformation on social media in general was far from the most important factor in the global rise of political instability, right-wing reaction, newly acceptable bigotry, and so on. We know all this because we’ve spent a lot of the past decade thinking, observing, and arguing about Facebook (and its peers), and there are lessons to be gleaned from that knowledge as new and increasingly advanced generative A.I. apps are deployed across the internet.

Obviously the lesson is not “don’t sweat it.” I understand the sense of anxiety being articulated by these warnings about misinformation; I am still personally trying to figure out how I feel about “A.I.,” a task made no less difficult by what often feels like an astounding pace of change. Google has now introduced its own chat app, Bard, to contend with Microsoft’s Bing/Sydney; these megaplatforms enter the public A.I. arms race against a background of dramatic, visible progress in the quality of output of the major generative A.I. apps.

The weekly introduction of new chatbots and generative A.I. applications, and the obvious recent velocity of advances in the capabilities of the large language models that power them--immediately visible to the rest of us in the form of Midjourney shitposts on Twitter--have helped imbue every conversation or prediction about “A.I.” that I have read in newspapers (or on Twitter or in Discord) over the last few months with a sense of existential urgency, if not desperation, a vibe Ezra Klein articulated well in his recent Times column:

“The broader intellectual world seems to wildly overestimate how long it will take A.I. systems to go from ‘large impact on the world’ to ‘unrecognizably transformed world,’” Paul Christiano, a key member of OpenAI who left to found the Alignment Research Center, wrote last year. “This is more likely to be years than decades, and there’s a real chance that it’s months.” […]

I find myself thinking back to the early days of Covid. There were weeks when it was clear that lockdowns were coming, that the world was tilting into crisis, and yet normalcy reigned, and you sounded like a loon telling your family to stock up on toilet paper. There was the difficulty of living in exponential time, the impossible task of speeding policy and social change to match the rate of viral replication. I suspect that some of the political and social damage we still carry from the pandemic reflects that impossible acceleration. There is a natural pace to human deliberation. A lot breaks when we are denied the luxury of time.

But that is the kind of moment I believe we are in now. We do not have the luxury of moving this slowly in response, at least not if the technology is going to move this fast.

I agree wholeheartedly with Klein’s conclusion, which is that “we cannot […] put these systems out of our mind, mistaking the feeling of normalcy for the fact of it.” But my immediate (and possibly very unwise!) instinct is to resist the sense of urgency, which I think is being imposed on us by the companies and people developing the “A.I.” systems to which we should be paying close attention.

“Imposed,” of course, in the obvious sense that the technology is not developing itself: It’s moving “this fast” because of decisions made by A.I. companies, in particular OpenAI, whose research and release schedule manufactures the “urgency.” But “imposed” also in the sense that this is exactly how OpenAI would like us to talk about it, i.e. as a world-historically important technology, just months away from inevitable total global transformation. A.I. doomerism is A.I. boosterism under a different name.

That this urgent A.I. millenarianism emerges from the same group of people who are developing “A.I.” is sometimes treated as a puzzle or contradiction. As Klein writes, “I often ask [A.I. doomer researchers] the same question: If you think calamity so possible, why do this at all? […] A tempting thought, at this moment, might be: These people are nuts.” But the dynamic is familiar to anyone who followed the mainstream discourse about “disinformation” and Facebook as it evolved in the years after 2016. As Joe Bernstein’s excellent 2021 Harper’s article on the subject explains, Facebook’s entire business proposition prevented it from dismissing the disinformation panic; it was better to cop to wrongdoing than to admit powerlessness:

Compared with other, more literally toxic corporate giants, those in the tech industry have been rather quick to concede the role they played in corrupting the allegedly pure stream of American reality. […] Facebook’s basic business pitch made denial impossible. Zuckerberg’s company profits by convincing advertisers that it can standardize its audience for commercial persuasion. How could it simultaneously claim that people aren’t persuaded by its content? Ironically, it turned out that the big social-media platforms shared a foundational premise with their strongest critics in the disinformation field: that platforms have a unique power to influence users, in profound and measurable ways. Over the past five years, these critics helped shatter Silicon Valley’s myth of civic benevolence, while burnishing its image as the ultra-rational overseer of a consumerist future.

By the same token, if you are trying to sell A.I. systems (or secure funding for research), it’s better to predict total imminent A.I. apocalypse than it is to shrug your shoulders and say you don’t really know what effects A.I. will have on the world, but that those effects will probably be complicated and inconclusive, occur over a long timeline, and depend to a large degree on social, political, and economic conditions out of any one A.I. company’s control. Tweeting “It’s so over” is more likely to go viral than tweeting “It’s always already happening and will continue to do so forever.”

I recognize that to some extent this sounds like splitting hairs. Things are bad and might be getting worse! Who cares if we’re correct about the precise nature of A.I. risk and its possible outcomes, so long as we’re doing something to blunt its effects? But this is the exact problem with “urgency,” and precisely what we should learn from a decade or so of Facebook criticism: Facebook has obviously played a significant role in the political developments of the past decade, but, as Bernstein documents, misapprehension about the nature of its role--a misapprehension encouraged by Facebook!--has directed an enormous amount of attention, energy, and resources away from anything like a realistic or achievable “solution” to the problems the company poses, not to mention the problems that the company simply exacerbates.

It would be good, I think, to recognize that Facebook both shaped political and economic conditions and was shaped by them itself. Would Facebook have been the same had its first decade as a public company not been marked by high unemployment, low interest rates, a soaring stock market, and a political establishment reliant on the tech and financial sectors as the key engines for economic growth?

When we accept A.I. developers’ own framing of their products as (1) inevitable and (2) politically and economically transformative, it becomes easy to elide the obvious fact that the forms A.I. takes (e.g., as chatbots! As “search engines”!) and the uses to which it is put (e.g., the jobs it will augment or replace! The tasks it will make easier or harder!) are contingent on the political and economic conditions in which it emerges.

As an example of what I mean, take the legendary sci-fi magazine Clarkesworld, which last month suspended its open submissions after being overwhelmed with a deluge of A.I.-generated short stories. This is obviously a concerning development for independent publishers, but it’s not an inevitable consequence of widespread A.I. access: It’s a direct result of generative A.I. being deployed into a world in which popular TikTok and YouTube hustlers are touting A.I.-based get-rich-quick schemes like “generate sci-fi stories to submit to Clarkesworld.” Neil Clarke, the magazine’s editor, has since successfully reopened submissions; in his most recent editor’s note he writes that the real threat to Clarkesworld’s existence is the capriciousness of Amazon deciding to end its Kindle subscription program.

The ongoing story of Clarkesworld suggests to me that A.I. is neither a wholly and immediately transformative technology, set to snuff science-fiction writers and publishers out of existence within months, nor an unimportant bust that will disappear when the hype dies down. Instead it’s, to put it in the most direct terms possible, another thing to deal with, whose importance lies mostly in how it interacts with all the other things we have to deal with.

I’m open to the possibility that we rest on the edge of a precipice--that a world “unrecognizably transformed” by large language models is only a matter of months away, as Paul Christiano seems to believe. But a basic rule of thumb of this newsletter is that things change slowly and stupidly rather than quickly and dramatically, and a proper A.I. criticism needs to account for this likelihood. For now, I am filled with resentment to find myself once again in the midst of a discourse about technology in which the terms and frameworks for discussion have been more or less entirely set by the private companies that stand to profit off of its development and adoption.