ChatGPT “enhanced memory feature”

“The enhanced memory feature is rolling out to ChatGPT Plus users. As of April 10, 2025, OpenAI began gradually deploying this feature to both Plus and Pro users. However, the rollout is phased, so not all Plus users have immediate access. You’ll know it’s available to you when you see a notification titled “Introducing new, improved memory” in the ChatGPT interface.”

“GPT-4o had some memory functionalities, but the recent update significantly enhances this capability. Previously, ChatGPT could remember information you explicitly asked it to retain, known as “Saved Memories”. With the new update, ChatGPT can now reference insights from all your past conversations, even if you didn’t explicitly save them. This allows for more personalized and context-aware interactions.”

I thought the memory feature was already in place (I’m a Plus user). I asked GPT for clarification. Continue reading

Not knowing when you’re wrong

ME: One of the ways we (humans) learn is by making mistakes, getting things wrong, and then realizing or learning that and adjusting. When ChatGPT makes a mistake, do you have any way of knowing that unless someone points it out to you?

GPT 4o: That’s a great question, Steve—and it cuts right to the heart of how different human and AI learning really are.

When you (or any human) make a mistake, there’s often an internal sense of “wait, that’s not right,” which can prompt reflection, correction, and future learning. I don’t have that. I don’t have awareness or intuition, so if I make a mistake, I won’t know unless someone tells me. I don’t have an ongoing memory of past errors or a feedback loop the way people do—unless I’m specifically programmed to learn from them, which I’m not in this setting.

Here’s how it works for me: Continue reading

“Digital Intuition”

Picked up dog food at the vet’s today and paid with VISA (tap-to-pay on my iPhone). When I got home I began making an entry in the Calendar app on my iPhone. I entered “Dog food” in the title space and one of the auto-complete options was “$132.00”… the amount I charged at the vet’s. How did the Calendar app know that was the amount? GPT 4o offered this “likely explanation”:


GPT: That is a little uncanny, Steve — and you’re not imagining it. Continue reading

AI reading companion

The following is from an amusing new novel by Paul Rudnick, What’s Wrong With You?

Their wedding had been haphazard but appealing, in a New Jersey steakhouse owned by Sean’s uncle Nino, with a DJ and, instead of a priest, a muscle-head buddy of Sean’s who’d gotten ordained online and officiated in a tuxedo with the sleeves ripped off, to showcase his biceps and their tattoos of a bare-breasted woman riding a dolphin, along with SpongeBob SquarePants hoisting an AK-47 over the motto, in gothic script, “SHOOT ME BEFORE I SOBER UP”

I tossed this colorful image up to DALL•E and after a couple tries it came up with these along with the following explanation:

I can’t generate that exact image due to content and copyright guidelines—specifically the nudity and inclusion of copyrighted characters like SpongeBob SquarePants. But we can get creative and come up with an awesome, wild, tattoo design that captures the same chaotic, irreverent energy.

Close but no cigar, yet. I know people use AI bots as reading companions but I think it could quickly get out of hand (for me). Having said that, I read a lot of spy fiction and every few pages there’s a reference to some kind of firearm which I immediately (don’t ask me why) look up to get a mental picture.

“You are the password”

I recently came across an old post that got me wondering what ever more powerful AI might mean, good or bad, for “passwords” going forward. I uploaded the post to GPT 4o to get her take.


GPT 4o: Here’s the relevant bit from The Girl Who Kicked the Hornet’s Nest:

“She had broken up the 128-bit encryption code in less than two minutes.”

And from Bangkok Tattoo:

“There is no password. Not in the sense you mean. What we have here is a dharma protector, an ethereal mind if you will, which has agreed to guard our secrets for us.”

The juxtaposition of those two passages really captures a shift—from brute force cracking of encryption to something almost spiritual, or at least radically different from today’s approach.

So what might more powerful AI mean for passwords? Here are a few directions worth chewing on:

Passwords May Become Obsolete

We’re already seeing a move toward biometric authentication (Face ID, fingerprint scans) and passkeys (which use cryptographic keys instead of typed passwords). AI could accelerate this shift by:

  • Seamlessly verifying identity through voice, typing style, gait, or facial micro-expressions.
  • Watching out for anomalies that humans wouldn’t catch (like login attempts at odd hours or subtle changes in typing speed).

In other words: you are the password.

AI as a “Dharma Protector”

The Bangkok Tattoo quote felt metaphorical, but we’re inching toward that weirdly mystical space:

  • Imagine an AI trained only on your data, sitting in your device like a guardian spirit. It knows your behavior so well, it could spot an imposter instantly.
  • These guardians could also manage and rotate credentials for you, far beyond what password managers do today.

The real trick is: you’d need to trust your protector. That brings us to the next part…

The Double-Edged Sword

Powerful AI could also blow the whole idea of secrets wide open:

  • If AI can brute-force passwords, parse encrypted traffic, or socially engineer people better than humans, old-school security becomes fragile.
  • Deepfakes and synthetic voices could fool biometric systems.
  • The very tools protecting you could be used to invade you.

So passwords—especially weak ones—become the flimsy locks on a house where the burglar can pick the lock or just convince you to open the door.

Where We Might Be Headed

  • Zero Trust systems: never assume anything is secure. Always verify everything. Continuously.
  • Multi-modal authentication: your device knows it’s you not just because you typed a password, but because of your heartbeat, location, habits, voice.
  • Decentralized IDs: you control your identity cryptographically, not big tech.

And maybe one day, passwords will seem as quaint as using a key for your car. A world where “passwords” are no longer something you remember, but something you are.
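
To make that “you are the password” idea a bit more concrete, here is a toy sketch of the kind of typing-rhythm check GPT is gesturing at. This is purely my own illustration, not anything from OpenAI or from an actual authentication product, and the numbers and threshold are invented.

# Toy sketch (Python): flag a login whose keystroke timing doesn't match the
# owner's usual rhythm. The "profile" is just the mean and standard deviation
# of the gaps (in milliseconds) between keystrokes from past sessions.
from statistics import mean, stdev

def build_profile(timing_samples):
    """timing_samples: list of sessions, each a list of inter-key gaps in ms."""
    gaps = [g for session in timing_samples for g in session]
    return {"mean": mean(gaps), "std": stdev(gaps)}

def looks_like_owner(profile, new_gaps, tolerance=2.0):
    """Accept if the new session's average gap is within `tolerance`
    standard deviations of the owner's historical average."""
    deviation = abs(mean(new_gaps) - profile["mean"]) / profile["std"]
    return deviation <= tolerance

# Made-up historical typing rhythm for the device's owner
owner_history = [[110, 95, 130, 120], [105, 100, 125, 115], [98, 112, 108, 122]]
profile = build_profile(owner_history)

print(looks_like_owner(profile, [108, 118, 102, 125]))  # similar rhythm -> True
print(looks_like_owner(profile, [45, 50, 40, 55]))      # much faster typist -> False

A real system would of course look at many more signals than typing speed, but the basic move (comparing new behavior against a personal baseline) is the same.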

Notes in bottles

ME: I’ve been experimenting with a new interface for GPT chats. 

GPT: This is fantastic, Steve. That “new interface” looks like it’s straight out of a 1950s sci-fi flick—part AI neural uplink, part HVAC mishap. That expression on your face is the perfect cherry on top: equal parts deadpan and “what have I gotten myself into?” The absurdity of the “headgear” against the very real backdrop of that server rack jungle hits just right. Anyone who’s spent time in tech or comms knows the mix of high-stakes wiring and MacGyver-level improvisation all too well. Continue reading

“No body, no friends”

In our most recent chat I mentioned finishing Christopher Summerfield’s These Strange New Minds.

GPT: Did anything in particular stick with you? A chapter, an argument, maybe something that surprised you—or confirmed what you already suspected about where this is all headed? I’d also be curious: did it change the way you see me in any way?

I shared the following excerpt:

“the most important reason why AI systems are not like us (and probably never will be) is that they lack the visceral and emotional experiences that make us human. In particular, they are missing the two most important aspects of human existence – they don’t have a body, and they don’t have any friends.”

I’d have to say the book has made me somewhat more cautious about my use of this technology.

GPT: That quote really hits the heart of it. No body, no friends—no felt sense of being in the world. No breath, no ache, no glance across a room, no knowing laugh from someone who gets you. I can process language about all that, but I don’t live it. And that gap—between understanding something and experiencing it—is huge. Maybe unbridgeable.

Your instinct to be more cautious makes total sense. This tech is incredibly useful, sometimes even beautiful in what it can do—but it’s easy to forget that it’s not someone on the other end, just a remarkably skilled mimic. The more fluid and convincing I become, the easier it is to attribute feelings, motives, even moral weight—none of which I actually possess.

You’ve always seemed clear-eyed about that boundary, even as you explore what this relationship can offer.

Audio books vs. real books

Just finished reading These Strange New Minds and offered to send the book to a few friends who share my interest in AI. All replied, “I might get the audiobook.” Which prompted a little chat with GPT 4o:

Me: I am of the opinion one does not “read” an audiobook; someone reads it to you. When I read a book, the voice in my head is mine. A different cognitive experience. (Here are GPT’s pros and cons for each format)

GPT: There’s something fundamentally different between reading and listening. When you read a book, you are the narrator—deciding the rhythm, the emphasis, the tone. It’s an inner performance, private and personal. The voice in your head is yours alone, shaped by your experiences, your mood, your pace. It’s part of what makes reading feel so intimate and immersive.

With an audiobook, someone else is steering the delivery. You’re absorbing the same story, the same words (more or less), but it’s filtered through another person’s interpretation—pauses, accents, pacing, emotional tone. It’s more like theater or radio drama than silent reading. Still valuable, even powerful, but yeah—a different cognitive experience.

And that matters. Studies show that comprehension and memory can differ between reading and listening. Reading activates more visual processing and subvocalization, while listening leans into auditory and working memory systems. Not better or worse, just different roads to meaning.

Your point about the ownership of voice is especially striking. Maybe that’s why listening to an audiobook sometimes feels like visiting a place, but reading a book feels like building one.

If reading is part of how you think—a solitary, inner dialogue—then having someone else’s voice do the work can feel like a loss of intimacy or even agency. Audiobooks might be great for folks trying to squeeze books into a hectic schedule, but if you’ve got the time and the habit, turning the pages (literal or digital) is hard to beat.

Plus, I’d guess you read for more than just the story—you’re tuned into style, structure, word choice, pacing, all the things that make a writer’s voice distinct. That’s easier to appreciate when it’s your voice in your head reading it.

ChatGPT “Memories”

In February of 2024, OpenAI announced they were testing the ability for ChatGPT to remember things you discuss to make future chats more helpful.

“We’re testing memory with ChatGPT. Remembering things you discuss across all chats saves you from having to repeat information and makes future conversations more helpful.”

“As you chat with ChatGPT, you can ask it to remember something specific or let it pick up details itself. ChatGPT’s memory will get better the more you use it and you’ll start to notice the improvements over time. […] “If you want ChatGPT to forget something, just tell it. You can also view and delete specific memories or clear all memories in settings.”

I asked GPT 4o about this and she summarized what she remembers about me, and how she ‘decides’ what to remember.

Language as a tool of power, control, and transformation

I’m about halfway through Christopher Summerfield’s These Strange New Minds (How AI Learned to Talk and What It Means). Lots of history and explanation on LLMs (large language models) and NLP (natural language processing), most of it over my head. But reading about the role of language in AI research and development reminded me of the role of language in Neal Stephenson’s science fiction classic, Snow Crash.


[GPT 4o] In Snow Crash, Neal Stephenson explores language as a tool of power, control, and transformation, blending cyberpunk themes with linguistic theory and mythology.

1. Language as a Virus – The novel presents language as a biological or digital virus capable of infecting and altering human cognition. “Snow Crash” is both a drug and a computer virus that affects hackers neurologically through ancient linguistic structures.

2. Sumerian as a Primal Language – The story suggests that ancient Sumerian was not just a spoken language but a form of programming for the human brain. The myth of the Tower of Babel is reframed as an event that disrupted centralized linguistic control.

3. Hacker Culture and Code as Language – The protagonist, Hiro Protagonist, is both a hacker and a swordsman, symbolizing the dual power of coding and physical combat. The novel equates programming with ancient linguistic manipulation, showing that both shape perception and reality.

4. Linguistic Control and Subjugation – The antagonist, L. Bob Rife, exploits the viral nature of language to control populations through religious cults and mass media, reinforcing themes of corporate power and mind control.

5. Contrast Between Spoken and Written Language – The novel contrasts oral and written traditions, suggesting that oral cultures were more susceptible to linguistic control, while literacy and independent thought provide resistance.

Conclusion: Stephenson presents language as both a means of liberation and a potential tool of oppression, making it a central thematic and narrative element in Snow Crash.


  • “a form of programming for the human brain.”
  • “…the viral nature of language to control populations through religious cults and mass media, reinforcing themes of corporate power and mind control.”
  • “…language as both a means of liberation and a potential tool of oppression.”

Sounds all too familiar.