Savannah, Georgia—In the old lacquered coffee shop on the corner of Chippewa Square, I sit and eat a blueberry scone the size of a young child's head, sipping cold black coffee while staring incredulously at my phone. I'm watching Hank Green interview Nate Soares, co-author of the new book *If Anyone Builds It, Everyone Dies*, and I am in utter disbelief at the conversation taking place before my eyes. Hank Green, the internet's favorite rational science nerd, doesn't appear to be approaching this interview through a critical lens. Instead, Hank seems to be gushing over Soares, an AI-doomerist who's made it impossible to know where he ends and big tech begins.
Let me explain...
The second author of *Everyone Dies* is self-described genius Eliezer Yudkowsky, founder of the Peter Thiel-funded non-profit Machine Intelligence Research Institute (MIRI), and leader of the definitely-not-a-cult Rationalist movement. In his spare time, Yudkowsky writes the LessWrong blog, where he tells his followers that they should find a dignified way to die. To Yudkowsky, the AI apocalypse isn't a cautionary tale in the abstract. It is biblical. It's a prophecy. According to Yudkowsky, the singularity will happen in 2025. And yet, despite being the hardest-working guy in the AI-doomerist biz, he still finds the time to take the odd selfie with OpenAI CEO Sam Altman.
Hank Green promoted this guy's book in an hour-long video. He then followed up with a second video making an impassioned argument about AI safety that sounds a lot like the arguments Sam Altman and other tech CEOs are making to Congress this very moment. You wouldn't know it from watching Hank's AI videos, but to many, this AI-doomerist rhetoric, propped up by lobbying firms cosplaying as academic non-profits and hand-delivered to members of Congress by AI company CEOs, is a very obvious regulatory capture strategy, one that would kill open source and place AI technology in the hands of just a few billionaires.
## Statement on AI Risk
I shove another piece of scone in my mouth and wash it down with a long pull from my straw (I'm a stress eater). I'm watching Hank Green speak on AI, this time directly to his audience. It's my third viewing. By now, I have the gist of his monologue memorized, his cited sources jotted in my notebook with aggressive ornamentation scribbled around a few key terms: *Anthropic, the Center for AI Safety, and Control AI.* It all sounded very official and urgent, and I didn't want to forget anything.
In *We've Lost Control of AI*, Hank Green warns us of catastrophe. He cites the Statement on AI Risk, a tweet-sized document on the Center for AI Safety's website “signed by Nobel Prize winners, scientists, and even AI company CEOs,” as Hank claims.
Hank reads the statement aloud almost verbatim. Except, curiously, he omits a single word: *extinction*. As in, “Mitigating the risk of *extinction* from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
Hank's experts believe that AI may cause humanity's extinction, so much so that they've organized. Presumably, Hank believes this threat is imminent enough that he felt compelled to make a video about it. He very clearly wants us to know that he's leaning on the expert testimony of *Nobel Prize winners*. Yet he omits their most consequential claim? If AI is poised to enact an extinction-level event, why leave that out? What gives?
Is this Hank's attempt to spare us the anxiety of yet another possible extinction-level event? Perhaps. Hank seems like a nice guy. He's thoughtful. He doesn't want us all to freak out. That makes sense, I guess. But then why make the video at all?
Maybe what I'm seeing is a glimmer of self-awareness. Maybe Hank left out the extinction part because, well, deep down he was embarrassed.
It turns out, the Statement on AI Risk, the one hosted on the Center for AI Safety's website, isn't the document signed by multiple Nobel Prize winners. It is, however, the document signed by Sam Altman, Bill Gates, Anthropic's CEO, and other AI company CEOs. And it is the document those CEOs are using to lobby Congress for specific regulations around AI.
I'd imagine that Sam Altman is through the roof with joy watching the same Hank Green video that is currently filling me with dread.
## The devil is always in the details
The document's brevity is its strongest tool. It's a clever linguistic trick: the statement leans on the incontestable without burdening itself with evidence of a threat, or even identifying the threat's nature. Mitigating the risk of human extinction from *anything* should be a global priority. *Of course I'll sign it!*
It doesn't make a claim on how imminent the danger is, or what we must do to mitigate it. It doesn't even speak to how AI would create an extinction-level event.
Omitting those pesky details frees up tech CEOs to make the Statement on AI Risk whatever they need it to be, to author fantastical stories backed by closed-door, corporate-funded studies. The statement can be the prologue to any science fiction tale, so long as it compels Congress to legislate a lair for AI technology where only a handful of billionaires hold the key.
However you feel about AI and its long-term usefulness, I'm sure we can all agree that artificial intelligence technology should not be the catalyst for a telecommunications-style monopoly where four companies own our most critical communications infrastructure.
From *[AI Doomerism is a Decoy](https://archive.ph/mPNPZ)*:
> Those 22 words were released following a multi-week tour in which executives from OpenAI, Microsoft, Google, and other tech companies called for limited regulation of AI. They spoke before Congress, in the European Union, and elsewhere about the need for industry and governments to collaborate to curb their product’s harms—even as their companies continue to invest billions in the technology. Several prominent AI researchers and critics told me that they’re skeptical of the rhetoric, and that Big Tech’s proposed regulations appear defanged and self-serving.
The Center for AI Safety is a billionaire-funded think tank and lobbying firm.
## Anthropic
Each example Hank provides as evidence that “we've lost control of AI” comes from one company: Anthropic. None of Anthropic's fantastical claims have been peer-reviewed, or even replicated by an impartial third party. Yet, Hank felt comfortable enough to conclude that catastrophe was near based on the claims of this for-profit company.
The same week Hank's video was posted, Anthropic announced that Chinese hackers had used its AI model Claude to conduct a cyberattack that was 90% autonomous. You know, because hackers love using consumer-facing tools connected to American cloud infrastructure when conducting cyberattacks.
Anthropic's press release was immediately criticized for its lack of transparency and unreplicable results.
[Researchers question Anthropic claim that AI-assisted attack was 90% autonomous](https://archive.ph/Gfz8Q)
> “Why do the models give these attackers what they want 90% of the time but the rest of us have to deal with ass-kissing, stonewalling, and acid trips?”
Just a couple of weeks later, as if Anthropic's hacker story were a movie trailer for an upcoming blockbuster, the company released its latest version of Claude and touted how good it is at coding.
## Stochastic Parrots
The problem with evangelizing the coming of a spiteful God AI, and prophesying humanity's subsequent extinction with biblical certainty, is that people start to believe you, then [act accordingly](https://archive.ph/tAcYe).
We've seen this play out before with American Evangelical Christianity. When people believe that the end is near, problems like climate change, poverty, and corporate plundering all suddenly seem so trivial. And boy, doesn't that indifference sometimes feel like the point?
*[How Religion Intersects With Americans’ Views on the Environment](https://archive.ph/UpmTH)*:
> Those who believe humanity is living in the end times are less likely than those who do not believe this to say they think climate change is an extremely or very serious problem (51% vs. 62%)
## Savannah
I visit Savannah often. The bronze statues and sprawling oaks make me feel more like a writer. The historic downtown is packed with so much implied history, it just begs its tourists to ask about it. But everyone knows where the doors below the stoops once led. No one is asking about it.
The stories of Savannah, Georgia are decidedly spooky. Walk through the downtown square and you can hear tales told by a dozen or more ghost tour guides, all stopping at the same Victorian mansions, each with a slightly different version, all crafted to give you a cheeky Saturday night scare. This is how we consume the history of this once-bustling trade port city.
In the early 2010s, many ghost tour guides told tales of enslaved people turning violent against their white owners. These tours were wildly popular. One such story, which later became the subject of a book, was about an enslaved woman named Molly who, as the storytellers put it, had “an affair” with her master.
Tiya Miles, author of *Tales from the Haunted South*, was told the story of Molly on one of these ghost tours, and was horrified to learn how slavery in the South is often reduced to entertainment.
It had never occurred to me why the city of Savannah would want you to believe that its most expensive real estate was haunted, until a friend clued me in a few years back when I visited them in New Orleans (a similarly spooky town).
In Savannah, some people lean on the fantastical to hide a much darker history.
From *[AI Doomerism is a Decoy](https://archive.ph/mPNPZ)*:
> Yet the supposed AI apocalypse remains science fiction. “A fantastical, adrenalizing ghost story is being used to hijack attention around what is the problem that regulation needs to solve,” Meredith Whittaker, a co-founder of the AI Now Institute and the president of Signal, told me.
## Ending
As it turns out, Molly the enslaved woman never existed. Hers was a made-up story that cast slave owners as the victims, a common theme in Savannah ghost tours before Miles' book was published.
It's hard to know how to write a post like this one. There's just so much ground to cover. I could've easily spent two thousand words on regulatory capture.
I consider Hank Green to be, well, you know, a standard-issue white liberal dude. I know going in I won't agree with his centrist-y political worldview, and that's okay. But I also don't worry much about someone like Hank getting pilled.
It is not lost on me that I am calling out the internet's favorite science nerd. A guy who, just one video after his AI video, talked about how slavery was racist and called out other comedians.
But that's exactly why I'm dragging Hank for filth. He knows. He knows that AI video was horse shit. He knows that telling his audience he believes AI will cause the extinction of humanity is a fantastical story.
Hank Green knows better. I think he knew what he was doing from the very start.
You have these fantastical tales by boring white guys with power, tales that seem to attract left-leaning folks for some inexplicable reason. I think it's because they give us something anti-AI to cling to.
And then you have boring tales by fantastic people whose message is orphaned because people on the right don't listen to Black and brown people and women. I say “boring tales” with the utmost respect for those trying to warn us of actual harms. It's just that sometimes the truth can be boring.
## Misc.
This is not the first time Hank has done this: https://www.reddit.com/r/nerdfighters/comments/1o2c6kp/we_need_to_talk_about_that_video_hank_endorsed/