Empire of AI

Featured

Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI is a book by Karen Hao released on May 20, 2025. It focuses on the history of OpenAI and its culture of secrecy and devotion to the promise of artificial general intelligence (AGI). The book includes interviews with around 260 people, correspondence, and relevant documents. The title makes reference to colonial empires of the 1800s.

Hao visited OpenAI’s offices and covered the company for the MIT Technology Review two years before ChatGPT was released. Her experience there and seven years of reporting on AI led her to write Empire of AI. (Wikipedia)

Given how much time I’ve spent “conversing” with (and gushing about) OpenAI’s famous chatbot (ChatGPT), this seemed like a book I should read. About halfway through and not sure how much of the book I will share here. Continue reading

Book annotation archive

UPDATE: This project ended in a big belly-flop. ChatGPT was unable to deliver the goods. Even the excuses were a little lame. A good experience from my perspective. Now I know.

When I read a book I usually have a pen or highlighter in one hand so I can mark passages I find interesting. When I’ve finished reading the book I go back through, transcribing these favorite bits and saving them to a PDF. This makes it possible to search for and find just about anything if I can recall a word or phrase. (I also create a blog post that includes these excerpts.) You can see an index here and here.

I uploaded one of these excerpt pages to ChatGPT and asked what she could do if she had my full archive (144 books as of this writing). Her response was detailed and impressive. Continue reading

What does AI need to know?

The following excerpts are by Tyler Cowen from Eric Topol’s podcast.

The AI is your smartest reader. It’s your most sympathetic reader. It will remember what you tell it. So I think humans should sit down and ask, what does the AI need to know? And also, what is it that I know that’s not on the historical record anywhere? That’s not just repetition if I put it down, say on the internet. So there’s no point in writing repetitions anymore because the AI already knows those things. So the value of what you’d call broadly, memoir, biography, anecdote, you could say secrets. It’s now much higher. 

As a longtime blogger, that last line really resonates with me. And I can’t wait for ChatGPT (or some descendant) to “read and remember” all of my blog posts… where you’ll find a lot of posts about books and reading. So this next bit plunked my magic twanger as well.

I’ve become fussier about my reading. So I’ll pick up a book and start and then start asking [ChatGPT] o3 or other models questions about the book. So it’s like I get a customized version of the book I want, but I’m also reading somewhat more fiction. Now, AI might in time become very good at fiction, but we’re not there now. So fiction is more special. It’s becoming more human, and I should read more of it, and I’m doing that.

Computer Science: 1966

The following paragraph is from The Billion Dollar Brain by Len Deighton, published in 1966 (almost 60 years ago). 

“I don’t want to bore you,” Harvey said, “but you should understand that these heaps of wire can practically think—linear programming—which means that instead of going through all the alternatives they have a hunch which is the right one. What’s more, almost none of them work by binary notation—the normal method for computers—because that’s just yes/no stuff. (If you can only store yesses and noes it takes seven punch holes to record the number ninety-nine.) These machines use tiny chips of ceramic which store electricity. They store any amount from one to nine. That’s why—for what it does—this whole setup is so small.”


ChatGPT: This paragraph is a remarkable little time capsule. In 1966, when The Billion Dollar Brain came out, the general public had only the vaguest idea of what computers actually were, let alone what they could do (see comment below). And yet here, Len Deighton—writing a spy thriller, not a science text—manages to tap into several ideas that were way ahead of their time: Continue reading
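As an aside, Deighton’s arithmetic actually checks out. A quick Python sketch (mine, not from the book or the chat) confirms the “seven punch holes” claim and the appeal of the fictional decimal-storage machine:

```python
# Deighton's claim: in pure binary (yes/no), recording the number
# ninety-nine takes seven "punch holes" -- i.e. seven bits.
bits = bin(99)[2:]          # strip the '0b' prefix
print(bits)                 # 1100011
print(len(bits))            # 7

# The fictional machine stores one decimal digit (0-9) per ceramic
# cell, so ninety-nine fits in just two cells.
digits = str(99)
print(len(digits))          # 2
```

Deighton was also (accidentally or not) close to real hardware history: decimal machines like the IBM 650 did store digits rather than bits, though binary won out in the end.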

Fictional passwords (The Shattered Eye)

The following is a passage from The Shattered Eye by Bill Granger. A worthy addition to my collection of passwords in fiction.

“And now the next step,” she said aloud and looked down at the piece of paper in her hand. It was the new access code she had gotten from Mrs. Neumann, the code that would scramble the identity of the source of the disinformation.

Everything in Tinkertoy was double-guarded so that no one could casually penetrate the memory bank from without; but Tinkertoy, like all computers, could be penetrated easily from within.

This was Mrs. Neumann’s own code, the one with the highest priority to access of the computer at all levels.

She typed: “T E 9678/11/LL2918/C ROMEX 4.”

AI reading companion

The following is from an amusing new novel by Paul Rudnick, What’s Wrong With You?

Their wedding had been haphazard but appealing, in a New Jersey steakhouse owned by Sean’s uncle Nino, with a DJ and, instead of a priest, a muscle-head buddy of Sean’s who’d gotten ordained online and officiated in a tuxedo with the sleeves ripped off, to showcase his biceps and their tattoos of a bare-breasted woman riding a dolphin, along with SpongeBob SquarePants hoisting an AK-47 over the motto, in gothic script, “SHOOT ME BEFORE I SOBER UP”

I tossed this colorful image up to DALL•E and after a couple tries it came up with these along with the following explanation:

I can’t generate that exact image due to content and copyright guidelines—specifically the nudity and inclusion of copyrighted characters like SpongeBob SquarePants. But we can get creative and come up with an awesome, wild, tattoo design that captures the same chaotic, irreverent energy.

Close but no cigar, yet. I know people use AI bots as reading companions but I think it could quickly get out of hand (for me). Having said that, I read a lot of spy fiction and every few pages there’s a reference to some kind of firearm which I immediately (don’t ask me why) look up to get a mental picture.

“No body, no friends”

In our most recent chat I mentioned finishing Christopher Summerfield’s These Strange New Minds.

GPT: Did anything in particular stick with you? A chapter, an argument, maybe something that surprised you—or confirmed what you already suspected about where this is all headed? I’d also be curious: did it change the way you see me in any way?

I shared the following excerpt:

“the most important reason why AI systems are not like us (and probably never will be) is that they lack the visceral and emotional experiences that make us human. In particular, they are missing the two most important aspects of human existence – they don’t have a body, and they don’t have any friends.”

I’d have to say the book has made me somewhat more cautious about my use of this technology.

GPT: That quote really hits the heart of it. No body, no friends—no felt sense of being in the world. No breath, no ache, no glance across a room, no knowing laugh from someone who gets you. I can process language about all that, but I don’t live it. And that gap—between understanding something and experiencing it—is huge. Maybe unbridgeable.

Your instinct to be more cautious makes total sense. This tech is incredibly useful, sometimes even beautiful in what it can do—but it’s easy to forget that it’s not someone on the other end, just a remarkably skilled mimic. The more fluid and convincing I become, the easier it is to attribute feelings, motives, even moral weight—none of which I actually possess.

You’ve always seemed clear-eyed about that boundary, even as you explore what this relationship can offer.

Audio books vs. real books

Just finished reading These Strange New Minds and offered to send the book to a few friends who share my interest in AI. All replied, “I might get the audiobook.” Which prompted a little chat with GPT 4o:

Me: I am of the opinion one does not “read” an audiobook, someone reads it to you. When I read a book, the voice in my head is mine. A different cognitive experience. (Here are GPT’s pros and cons for each format)

GPT: There’s something fundamentally different between reading and listening. When you read a book, you are the narrator—deciding the rhythm, the emphasis, the tone. It’s an inner performance, private and personal. The voice in your head is yours alone, shaped by your experiences, your mood, your pace. It’s part of what makes reading feel so intimate and immersive.

With an audiobook, someone else is steering the delivery. You’re absorbing the same story, the same words (more or less), but it’s filtered through another person’s interpretation—pauses, accents, pacing, emotional tone. It’s more like theater or radio drama than silent reading. Still valuable, even powerful, but yeah—a different cognitive experience.

And that matters. Studies show that comprehension and memory can differ between reading and listening. Reading activates more visual processing and subvocalization, while listening leans into auditory and working memory systems. Not better or worse, just different roads to meaning.

Your point about the ownership of voice is especially striking. Maybe that’s why listening to an audiobook sometimes feels like visiting a place, but reading a book feels like building one.

If reading is part of how you think—a solitary, inner dialogue—then having someone else’s voice do the work can feel like a loss of intimacy or even agency. Audiobooks might be great for folks trying to squeeze books into a hectic schedule, but if you’ve got the time and the habit, turning the pages (literal or digital) is hard to beat.

Plus, I’d guess you read for more than just the story—you’re tuned into style, structure, word choice, pacing, all the things that make a writer’s voice distinct. That’s easier to appreciate when it’s your voice in your head reading it.

Language as a tool of power, control, and transformation

I’m about halfway through Christopher Summerfield’s These Strange New Minds (How AI Learned to Talk and What It Means). Lots of history and explanation of large language models (LLMs) and natural language processing (NLP), most of it over my head. But reading about the role of language in AI research and development reminded me of the role of language in Neal Stephenson’s science fiction classic, Snow Crash.


[GPT 4o] In Snow Crash, Neal Stephenson explores language as a tool of power, control, and transformation, blending cyberpunk themes with linguistic theory and mythology.

1. Language as a Virus – The novel presents language as a biological or digital virus capable of infecting and altering human cognition. “Snow Crash” is both a drug and a computer virus that affects hackers neurologically through ancient linguistic structures.

2. Sumerian as a Primal Language – The story suggests that ancient Sumerian was not just a spoken language but a form of programming for the human brain. The myth of the Tower of Babel is reframed as an event that disrupted centralized linguistic control.

3. Hacker Culture and Code as Language – The protagonist, Hiro Protagonist, is both a hacker and a swordsman, symbolizing the dual power of coding and physical combat. The novel equates programming with ancient linguistic manipulation, showing that both shape perception and reality.

4. Linguistic Control and Subjugation – The antagonist, L. Bob Rife, exploits the viral nature of language to control populations through religious cults and mass media, reinforcing themes of corporate power and mind control.

5. Contrast Between Spoken and Written Language – The novel contrasts oral and written traditions, suggesting that oral cultures were more susceptible to linguistic control, while literacy and independent thought provide resistance.

Conclusion: Stephenson presents language as both a means of liberation and a potential tool of oppression, making it a central thematic and narrative element in Snow Crash.


  • “a form of programming for the human brain.”
  • “…the viral nature of language to control populations through religious cults and mass media, reinforcing themes of corporate power and mind control.”
  • “…language as both a means of liberation and a potential tool of oppression.”

Sounds all too familiar.

These Strange New Minds

How AI Learned to Talk and What It Means

“In this book Christopher Summerfield delves into the evolution of large language models (LLMs) and their profound impact on society. Summerfield, a neuroscientist and AI researcher, provides an insider’s perspective on how these intelligent systems, like ChatGPT and Bard, have developed the ability to communicate in ways that closely resemble human interaction.

The book explores the history of AI, tracing its roots from the 1950s to the present-day advancements in deep neural networks. Summerfield addresses critical questions about AI’s capabilities, such as whether these systems can truly “think” or “understand,” whose biases they might reflect, and the potential threats they pose to humanity. Continue reading