Savannah, Georgia—In the old lacquered coffee shop on the corner of Chippewa Square, I eat a blueberry scone the size of a young child's head, and sip a black iced coffee while I stare incredulously at my phone. I'm watching Hank Green speak on AI. It's my third viewing. By now, I have the gist of his monologue memorized, his cited sources jotted in my notebook with aggressive ornamentation scribbled around a few key terms— *Anthropic, Center For AI Safety, and Control AI.* This all sounds very official and urgent, and I didn't want to forget anything.
But there's something else about this video, about Hank specifically, that seems off. There's something missing that I can't put my berry-stained finger on. I need to figure out what that something is, so I came to Savannah for a few days for a little writing retreat. You'd be surprised how cheap you can get a hotel in a college town a few days before a national holiday. So, I'm here in this coffee shop watching Hank Green flail his arms in a shirt more purple than my prose.
I shove another piece of scone in my mouth and wash it down with a long sip through my straw. Dunking pastry into iced coffee in broad daylight feels obscene to me. So I keep a steady beat of *scone-sip-chew-swallow*— a lovely little melody of proper scone consumption with the swell occurring only when my mouth is shut. Savannah is a dignified place. I'm not an animal, you know.
I visit Savannah often. The bronze statues and sprawling oaks make me feel more like a writer. The historic downtown is packed with so much implied history, it just begs its tourists to ask about it. But everyone knows where the doors below the stoops once led. No one, including me, is asking about it.
You won't find too many history tours here, anyway. There are perhaps a dozen or more ghost tours, though. Each guide stops at the same old Victorian mansions, each with a slightly different story designed to scare the hell out of you. It had never occurred to me to ask why the city of Savannah wants you to believe its most expensive real estate is haunted, until a friend clued me in a few years back when I visited them in New Orleans (a similarly spooky town).
Savannah leans on the fantastical to hide a much darker reality. In doing so, whether intentionally or not, it obfuscates any potential to learn from our past.
I restart Hank's video for a fourth time, and dissociate for the requisite 5 to 10 seconds while YouTube serves me a fresh batch of ads. I'm one of those freaks who doesn't use ad block, but also refuses to pay for YouTube Premium. I guess I believe in content creators getting paid, but I won't feed the corporate machine with my attention if I can help it.
Anyway, Hank Green opens with a warning of catastrophe. This is the part that really gets me. There's something in his tone. It's that Green Brother cadence that makes you feel like the world reached a consensus on something important when you weren't looking. Nine times out of ten that certainty is comforting. The world moves so fast, we're forced to rely on others to keep us informed. But, catastrophe? That's dark, man.
## Section
If, hearing the word *catastrophe*, your mind went straight to climate change, or the resurgence of some Depression-era virus thought to be eradicated, congratulations. Your brain still thinks horses when it hears hoofbeats, not zebras.
// Something detective-y here.
In “We've Lost Control of AI,” Hank Green comes across like he's pulling from a wide range of sources and experts. But in fact, every point Hank makes, each source cited, and every innuendo of a science-fiction-like AI apocalypse comes from three loosely affiliated organizations pushing a specific, unverifiable, non-peer-reviewed narrative and lobbying effort that just so happens to align with big tech's regulatory capture strategy: Anthropic, Control AI, and the Center For AI Safety.
There's a fourth relevant organization not mentioned in Hank's video. Well, maybe not so much an organization as an individual with a blog/forum that kept appearing in my search results: Eliezer Yudkowsky. But we'll get to him in a bit.
Hank attempts to legitimize these claims by citing the Statement on AI Risk, a tweet-sized document from the Center For AI Safety “signed by a thousand AI scientists, thought leaders, and CEOs,” as Hank puts it.
The Statement on AI Risk, in its entirety:
> Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.
The document's brevity is its strongest tool. It's a clever trick that leans on the incontestable without burdening itself with evidence of threat, or even identifying the nature of the threat. Mitigating the risk of human extinction from *anything* should be a global priority. *Of course I'll sign it!*
The Statement on AI Risk doesn't make a claim about how imminent the danger is, or what we must do to mitigate it. It doesn't even speak to how AI would create an extinction-level event. The document could just as easily be referencing the dangers of energy consumption and climate change, a far more imminent and material threat caused by AI.
There are actually a few good reasons why this document doesn't go into the details, yet has supposedly garnered support from multiple Nobel Prize winners and AI scientists.
First, the Statement on AI Risk, hosted on the Center For AI Safety’s website, *isn’t the original document* signed by 1,000 experts.