I have a distinct memory of my mother being mad at Netflix. It was 2011, the year I graduated high school. Mom had been a loyal customer for years, but had just learned the company was discontinuing the subscription service she loved: the snail-mail delivery of up to three DVDs per month.1
Those returnable red envelopes sparked moments of joy for our family. It’s easy to romanticize these things in hindsight, but as I recall, the need to wait for their arrival built anticipation for the titles on order.2 The movies could be hit or miss, but that made the good ones feel like more of a find.
Back then, I had little reason to doubt that the new way would be better. My teenage life had seen one miracle of technology after another.
Just four years into the smartphone and social media eras, the sum of human knowledge was at our fingertips. We could send unlimited texts (!) to anyone, anywhere. GPS and ride-sharing made travel a breeze—no MapQuest printouts required.3 YouTube offered infinite, ad-free entertainment. Facebook preserved old friendships; Instagram, old memories. Internet culture felt authentic, rebellious, liberating. Social media helped elect the first black president, and the Arab Spring was in full bloom.4 I remember feeling giddy, at 18 years old, at the unimaginable wonders this century would contain, and at my incredibly good fortune to be young in it.
Yet that very year, Black Mirror debuted (on Channel 4, though Netflix would later make it its own) and became a hit by tapping into a certain unease. As the 2010s wore on, it became clear that the changes technology brought were not uniformly good, and were coming dangerously fast. Tech rewired our brains and altered our behavior. It changed how we thought, what we stressed over, and how we spoke to each other. It upended ancient norms. By 2016, it had torn us apart.
When Season 7 came out in April, I wondered: if Black Mirror’s debut episode5 in 2011 had instead shown society as it exists today, how would audiences have reacted? Would it have seemed realistic? Would they have liked what they saw?
We don’t know exactly which problems are attributable to tech, but we have some idea. Social media algorithms sorted people into echo chambers and then riled them up with content optimized to outrage. That contributed to a cycle of outgroup hatred and threat perception that made our politics unrecognizably tribal, polar, and hostile. This cycle found its apotheosis in Trump, and we know how that’s going.
Liberal institutions have crumbled, democracy is backsliding, and the global order is in tatters. The President of the United States attempted a coup! Then he was convicted of one of his many crimes, and then he got reelected. A plausibly lab-leaked, plausibly bioengineered pandemic threw the world into chaos for years, worsened by disinformation and collapsing trust in traditional authorities. If you’d predicted any of this in 2011, you’d have been thought a fearmongering loon.
Suppose you blame something else for all of the above.6 Fine. Clearly, technology is a huge part of why mental health has plummeted. Most of us are literally addicted to a glowing, beeping rectangle we must keep with us at all times in order to function socially or professionally. Our children are so hooked they spend half their waking hours staring at it.
As one writer summarizes: “There is substantial evidence that spending excessive amounts of time on smartphones and other devices causes...increased depression and anxiety, increased suicidality, fewer social connections, less productivity, worse educational performance, etc.”
This understates it. I feel anxiety when Edwin Diaz walks the bases loaded with a one-run lead. What kids today feel is crushing, all-consuming insecurity and self-awareness due to the crowd of judgy pricks that’s always in their pocket. Online posts are routinely deleted if they don’t get enough likes. Authentic expressions of emotion became “cringe,” so romantic relationships in high school are now rare, and kids’ entire personalities must now be assembled from memes that their algorithms mark as socially permissible. Social media threw millennials off kilter, paralyzed Gen-Z, and made whatever’s after Gen-Z into dysfunctional zombies.7
Attention spans evaporated. Performance in school cratered; even college students are essentially illiterate. We go outside less, hang out with friends less, feel more lonely, have less sex, and report less life satisfaction. Fuck fentanyl: phones are the defining public health crisis of our era, and the fact that politicians give them zero mention in describing what's wrong with society is a testament to how deep the rot has gotten.8
Broken politics and phone addiction are just two of the problems caused largely by technology today. I do not have space to describe all the others in detail, because I just started page three, and again: my readers’ attention spans are fried.
But I’ll put it this way: when I was young, the world felt so generally stable that even small issues got disproportionate attention. Remember acid rain? Remember the supposed epidemic of razor blades getting passed out in people’s Halloween candy? Remember the fucking War on Terror? How for a whole ass decade, politicians had to pretend that zealots with pipe bombs in their underwear were the single greatest threat to our way of life, because voters just could not wrap their heads around how trivially tiny that risk was? We had so few new and major problems that we made them up, until people got distracted and moved on.
I feel like technology speedruns this process, except the problems are real and never get solved. They just accrue. Cyberbullying, data privacy, the surveillance state, cyberattacks, media fragmentation, attention fragmentation, disinformation, polarization, phone addiction, porn addiction, gender wars, widespread anxiety, AI deepfakes, AI worker displacement…each gets a year of think pieces before the next pops up to replace it. And we all just cope? We shrug and accept the new normal.
So yeah: what if Black Mirror showed all of that in 2011—crowds hyped on QAnon storming the Capitol and all?9 Would viewers have found it plausible by 2050, let alone 2025? Or would we have rolled our eyes and called it science fiction?
It sure seems like the gap between America in 2011 and America in 2025 is a lot wider than it was between, say, 1971 and 1985. Technology has always changed things, but now it does so faster. So now, “science fiction” is the term people use for the most important things that will happen in any of our lifetimes, a single-digit number of years before they happen.
AI safety as a vibe
Five years ago, I joined the EA student group on my grad school campus. I’d just left the Army and had a rare flexibility to pick almost any career I wanted. As I considered the range of EA-sanctioned options before me—global health and development, animal welfare, or long-termist causes like biosecurity or AI—I knew I had to start narrowing it down.
The least appealing option was the easiest to cross off: AI, I knew, just wasn’t for me. I was glad some people were working on it, but computers were never my thing. Besides, the whole thing seemed pretty far-fetched. ChatGPT did not exist yet. After COVID, pandemics seemed much more pressing.
Once all alternatives have been exhausted, Americans can always be counted on to do the right thing. And two months ago, after years of trying the “other folks are handling that one, right?” strategy, I accepted a new position working on frontier AI governance.
The philosophical case for AI policy has always been strong, and developments over the past nine months or so have made it stronger. But I want this post to make the layman’s case, which I sense is more important to our cause’s political prospects. Even if you don’t care about the far future at all, you may share my sense that this train is moving a little too fast?
Here is the premise of our science nonfiction book. Emerging technology has facilitated the most profound societal changes of our lifetimes. The pace, breadth, and effects of those changes are only increasing and compounding. AI will accelerate these changes, perhaps exponentially, sooner than most people realize. Whether this does a tremendous amount of good or a tremendous amount of bad may depend on specific choices before us today, which are harder than ever to make in our chaotic political environment.
Too often, AI policy is reduced to a Rorschach test for things we understand better or care about more. It’s the latest excuse to rehash old fights about big government, social justice, or billionaire greed. It’s subsumed into US-China competition, where hawks and doves squawk over the latest tool of statecraft. It’s been devoured, like everything else, by voracious culture wars. And these are all needed conversations. AI cannot sidestep them.
I fear, though, that we don’t have time to resolve the old fights before AI transforms our world forever. I suspect this transformation will be dramatic enough that we’ll look back on these next few years—maybe two, maybe ten, but in any case, a historical blip—as a defining before/after moment in our lives: a transition so radical that our grandchildren will marvel to learn we grew up in the before-times. We cling to the caboose of a swerving, teetering train on track to a wondrous future, if only it stays on the rails.
It is the job of the AI safety community to convey that urgency to political decision-makers and the public they answer to. No matter how accessibly Ben Todd writes them, long explainers of recent trends in reinforcement learning will not do the trick. We need a relatable sentiment to tap into, something that links our pitch with what people already feel.
My growing hunch is that it’s roughly the same sentiment that Black Mirror viewers would have felt watching 2025 in 2011: unease, grief, and fear regarding how quickly and radically emerging technology has already changed our lives.
The core political reality of our era is a global mood that something is deeply wrong with the status quo. I noted in December that this “same mood tumbled incumbent parties in the UK, France, Germany, South Korea, Japan, India, Belgium, Bulgaria, Croatia, Lithuania, New Zealand, and Australia. Crucially, these countries have a wide range of social and economic policy contexts,” so it’s not “neoliberalism” to blame. It’s not that the whole world suddenly got woke, or more capitalist, or more interracial, etc. What is it?
Blaming phones alone would oversimplify. But there is, I think, a global anxiety that ancient aspects of face-to-face civilization are slipping away, to be replaced by modes of living that human beings are just not built for. There is bipartisan desire to build a society less digital and more tangible, a desire so relatable that it resonates with populists and policy nerds alike. To achieve its political goals, the AI safety community needs to give voice to that feeling.
This is not the same as being anti-tech, or a techno-pessimist in the way of some leftists who think it’s all useless scammy gizmos. Nor is it the cheery “abundance” mindset I’m fond of in some other policy contexts. It’s less ideological than both of those views. It’s more of a vibe that tech holds awesome power, that we need to be careful with it, and that we’ve been too careless recently. Our sloppy, vibey evidence is: *gestures wildly* look around!
Vibes are less sophisticated than ideas, which can be both frustrating and useful. They make few distinctions between different kinds of tech, different kinds of catastrophe, or what to do about them. But that allows common cause with such diverse allies as Steve Bannon, labor unions, Marjorie Taylor Greene, anti-porn advocates, certain tech companies, the ACLU, and foreign policy think tanks. It allows us to write bills precisely, message about them vaguely, and rally a broad coalition behind that message.
Another benefit: vague sentiments are less likely to be proven wrong than concrete predictions. If EAs hitch their cause to the broader fight to take back our lives and societies from unelected tech companies, they will be insulated from credibility damage should AI timelines slow down. Instead of guessing at timelines for various risk pathways, we can just express that these decisions affect all of us, and so would the harms they may inflict.
Still, the crudeness of this approach will irritate some readers. Most EAs, and many EA critics, are too smart to communicate in vibes. And if you have technically informed skepticism that transformative AI is imminent, that’s genuinely helpful!
There’s a great piece on the AI safety movement’s recent tendencies towards sensationalism and selection bias (two other problems, I’ll add, which emerging technology has exacerbated since 2011). Maybe this post falls victim to that. I have no idea what the future will look like, and it’s entirely possible that the AI skeptics are right. Maybe AI will be a normal technology with only limited and gradual improvements.

I have two things to say about this argument. First, the stakes are absurdly high on one side of it. You don’t have to be a doomer to recognize the overwhelming importance of mitigating existential risk. If there is a 90% chance that the skeptics are right and a 10% chance that the doomers are right, the appropriate courses of action are those the doomers suggest.
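(For anyone who wants that arithmetic spelled out, here is a toy sketch in Python. The 90/10 split comes from the sentence above; every other number is invented purely to show the shape of the asymmetry, not to estimate anything real.)

```python
# Toy expected-value comparison. All figures are made up for illustration;
# "harm" is in arbitrary units, and only the asymmetry matters.

p_doom = 0.10  # chance the doomers are right (skeptics get the other 90%)

harm_unmitigated = 1_000_000  # harm if the doom scenario happens and we did nothing
harm_mitigated = 100_000      # harm if it happens but we took precautions
cost_of_safety = 10           # cost of safety work that turns out to be unnecessary

ev_do_nothing = p_doom * harm_unmitigated
ev_precautions = p_doom * harm_mitigated + cost_of_safety

print(f"Expected harm, do nothing:       {ev_do_nothing:>9,.0f}")
print(f"Expected harm, take precautions: {ev_precautions:>9,.0f}")
```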
Second, and more relevant to my thesis here: that argument is wonky and technical, and I think most skepticism of short timelines is not. Usually, the bigger obstacle is that it sounds like science fiction. And I agree with Helen Toner that “dismissing discussion of AGI, human-level AI, transformative AI, superintelligence, etc. as ‘science fiction’ should be seen as a sign of total unseriousness.” We already live in science nonfiction, and the experience of this past decade gives good reason to suspect that the next one will be just as surreal. Mocking fast timelines without a technical understanding is irresponsible myopia.
New goal: quality over quantity
July Fourth marked one year since I started Exasperated Alien. The next day, I sat down to write something commemorating the anniversary, more from a sense of duty than because I had anything new to share.
It was 2 pm on a beautiful Saturday, after a week of remote work in the same apartment. While my smarter girlfriend lounged at the public pool, I was kept from starting by an inbox full of emails; by bills to pay, each needing a texted 2FA code to log in; by open tabs I hadn’t yet gotten around to reading. Then I got writer’s block.10 By 4:30 pm, I was stressing over made-up deadlines that no one cares about but me, and realized: it’s not just everyone else that tech is making unhappy.
Writing is what I love most, but it’s also the art of applying ass to seat, and I spend too much damn time in this particular seat. So with my new job picking up, I should warn you that I plan to do it less. Not never, or so little you should unsubscribe. I averaged 3.3 posts/month the first year; maybe I’ll aim for 2 now? In any case, writing about the exasperation will take a back seat to doing something about it, both personally and professionally. Maybe less clutter in your inbox will help you unplug, too.
I hope that one day, from a perspective of relative stability, we look back on this tumultuous decade as a near miss. I hope that the chaos produced by smartphones and social media goes down as a warning shot that alerted us to the meteor inbound. And once the bullet is dodged, I hope we can find some Zen, with healthier relationships, dumber phones, and machines of loving grace to do the chair work for us.
Technically, it was not to be discontinued but rather split off onto a separate website called Qwikster, with its own login and subscription product. Still, consumers like my mother were so upset that Netflix walked back the plan later that year. In 2016, after streaming was more firmly entrenched, the snail-mail service moved to DVD.com, where it was finally discontinued in 2023.
Science confirms that we’re more appreciative of things we can’t take for granted, and that appreciation is a crucial component of happiness.
Another distinct memory is my parents printing out turn-by-turn directions from MapQuest before every road trip. We seemed to go from that, to unreliable GPS sold for hundreds of dollars as a dashboard attachment, to reliable GPS embedded in the screens of fancier cars, to Google Maps as a free app on everyone’s phones that worked almost flawlessly, all in about five years.
I was conserva-tarian at the time, so Obama didn’t personally thrill me. But I’m confident that even then, I’d have much preferred him to what we have now. In any case, his campaign’s association with Facebook strengthened the impression that social media was a liberalizing force.
(which was uncharacteristically awful, might I add.)
Is neoliberalism in the room with us right now?
I don’t say this to talk shit on younger generations, as if mine were somehow impervious to these pressures. When I was in undergrad, Nickelback became a viral punchline for allegedly making cliché music—so I stopped listening to Nickelback. We are all afraid of being ostracized by a crowd, and the internet has become the ever-hovering crowd, tearing down little pieces of everyone.
I have a post in my drafts folder titled: “Whatever it is you care about, phone addiction is probably a bigger problem than that.” The trouble with that title is that some of my most avid readers are in the EA community, so they have optimized for caring about the biggest problems. Maybe for that crowd, the cheekier way to title it would be “the AI takeover has already started.”
Or what if it showed us Netflix’s new show, Adolescence?
To spur ideas, I clicked back to my first ever post, which described how I picked my title. Without intending to make this point, it alluded to how much of my exasperation was beamed into my brain by beeping rectangles:
“Each major controversy—COVID, BLM, January 6th, October 7th, an election here or a mass shooting there—stirs up a new firestorm that makes everyone feel worse. The internet explodes with bad-faith polemics that hype us up on our own bullshit. We reshare our side’s best arguments and mock the other side’s worst. When things simmer down, we go back to our bubbles having learned little and achieved less. The discourse stays broken, and so does seemingly everything that discourse is meant to guide.”