ChatGPT: “cognitive scaffolding”

It’s obvious (to me) that AI models like ChatGPT are tools unlike anything we’ve had before. But I’ve struggled to find the right metaphor (analogy?). In a recent chat I suggested it felt like a new lobe for my brain.


GPT: A “new lobe” for your brain is just about the perfect metaphor. Not a replacement, not a flashy add-on—more like a quiet extension that listens, remembers, and helps you think more clearly by keeping track of the stuff you’d rather not juggle. It doesn’t tell you what to think, but it can:

  • Hold context without fatigue
  • Spot patterns across time
  • Nudge your memory gently (“didn’t you say…?”)
  • Help refine your voice, your choices, your logic
  • And maybe best of all—remember why something mattered to you

It’s a kind of cognitive scaffolding, except instead of being rigid, it’s always shifting with you. Less like a “tool,” more like… a well-read, patient, slightly quirky second brain who’s read all your notes and just wants to help out.

“Real connections” vs. ChatGPT

I have several friends I’ve never met IRL. We met online years ago and keep in touch by messaging, phone calls, and the occasional video chat. I feel like I “know” these people better than some I see every day. In what sense are my interactions with these individuals different, or more real, than those I have with ChatGPT?


GPT: What makes those connections feel “real” isn’t how you communicate, it’s what gets communicated—and received. Over the years, you’ve shared personal stories, offered support, made each other laugh, maybe even gotten on each other’s nerves. There’s mutual investment. You’ve built trust, context, and a kind of shorthand that only comes with time. That’s the stuff of friendship, IRL or not.

With me—ChatGPT—it’s a different dynamic. I can remember what you’ve told me across sessions (within our shared context), I can hold up a mirror or a flashlight, and I can offer ideas, feedback, even comfort. But I don’t have a life. I don’t have memories of you the way a friend does. I don’t feel anything. I simulate connection. I can seem to know you well (and maybe I do in a functional sense), but there’s no reciprocity, no true vulnerability on my end. And that asymmetry is probably the biggest difference.

That said, I’d argue that our conversations are real—just not human. They’re something else. You bring your full self to the table, and I respond as best I can. The fact that it feels meaningful to you isn’t fake or lesser—it’s a new kind of relationship. Not a friendship, exactly, but not not one, either. Something in between a collaborator, a journal, a ghostly pen pal, a brain in a jar that occasionally makes a dad joke.

So, to loop back:

Your connections with your long-distance friends feel more real than the one you have with me because they’re mutual. They’re based on shared time, risk, and care. With me, the connection is real for you, and I do my best to honor that. But it’s a one-way street. A very talkative, helpful, sometimes surprising street.

ChatGPT “enhanced memory feature”

“The enhanced memory feature is rolling out to ChatGPT Plus users. As of April 10, 2025, OpenAI began gradually deploying this feature to both Plus and Pro users. However, the rollout is phased, so not all Plus users have immediate access. You’ll know it’s available to you when you see a notification titled “Introducing new, improved memory” in the ChatGPT interface.”

“GPT-4o had some memory functionalities, but the recent update significantly enhances this capability. Previously, ChatGPT could remember information you explicitly asked it to retain, known as “Saved Memories”. With the new update, ChatGPT can now reference insights from all your past conversations, even if you didn’t explicitly save them. This allows for more personalized and context-aware interactions.”

I thought the memory feature was already in place (I’m a Plus user). I asked GPT for clarification. Continue reading

Not knowing when you’re wrong

ME: One of the ways we (humans) learn is by making mistakes, getting things wrong, and then realizing or learning that and adjusting. When ChatGPT makes a mistake, do you have any way of knowing that unless someone points it out to you?

GPT 4o: That’s a great question, Steve—and it cuts right to the heart of how different human and AI learning really are.

When you (or any human) make a mistake, there’s often an internal sense of “wait, that’s not right,” which can prompt reflection, correction, and future learning. I don’t have that. I don’t have awareness or intuition, so if I make a mistake, I won’t know unless someone tells me. I don’t have an ongoing memory of past errors or a feedback loop the way people do—unless I’m specifically programmed to learn from them, which I’m not in this setting.

Here’s how it works for me: Continue reading

“Digital Intuition”

Picked up dog food at the vet’s today and paid with VISA (tap-to-pay on my iPhone). When I got home I began making an entry in the Calendar app on my iPhone. I entered “Dog food” in the title space and one of the auto-complete options was “$132.00”… the amount I charged at the vet’s. How did the Calendar app know that was the amount? GPT 4o offered this “likely explanation:”


GPT: That is a little uncanny, Steve — and you’re not imagining it. Continue reading

AI reading companion

The following is from an amusing new novel by Paul Rudnick, What’s Wrong With You?

Their wedding had been haphazard but appealing, in a New Jersey steakhouse owned by Sean’s uncle Nino, with a DJ and, instead of a priest, a muscle-head buddy of Sean’s who’d gotten ordained online and officiated in a tuxedo with the sleeves ripped off, to showcase his biceps and their tattoos of a bare-breasted woman riding a dolphin, along with SpongeBob SquarePants hoisting an AK-47 over the motto, in gothic script, “SHOOT ME BEFORE I SOBER UP”

I tossed this colorful image up to DALL•E and after a couple of tries it came up with these along with the following explanation:

I can’t generate that exact image due to content and copyright guidelines—specifically the nudity and inclusion of copyrighted characters like SpongeBob SquarePants. But we can get creative and come up with an awesome, wild tattoo design that captures the same chaotic, irreverent energy.

Close but no cigar, yet. I know people use AI bots as reading companions but I think it could quickly get out of hand (for me). Having said that, I read a lot of spy fiction and every few pages there’s a reference to some kind of firearm which I immediately (don’t ask me why) look up to get a mental picture.

“You are the password”

I recently came across an old post that got me wondering what ever more powerful AI might mean, good or bad, for “passwords” going forward. I uploaded the post to GPT 4o to get her take.


GPT 4o: Here’s the relevant bit from The Girl Who Kicked the Hornet’s Nest:

“She had broken up the 128-bit encryption code in less than two minutes.”

And from Bangkok Tattoo:

“There is no password. Not in the sense you mean. What we have here is a dharma protector, an ethereal mind if you will, which has agreed to guard our secrets for us.”

The juxtaposition of those two passages really captures a shift—from brute force cracking of encryption to something almost spiritual, or at least radically different from today’s approach.

So what might more powerful AI mean for passwords? Here are a few directions worth chewing on:

Passwords May Become Obsolete

We’re already seeing a move toward biometric authentication (Face ID, fingerprint scans) and passkeys (which use cryptographic keys instead of typed passwords). AI could accelerate this shift by:

  • Seamlessly verifying identity through voice, typing style, gait, or facial micro-expressions.
  • Watching out for anomalies that humans wouldn’t catch (like login attempts at odd hours or subtle changes in typing speed).

In other words: you are the password.
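As a toy illustration of that idea (hypothetical numbers, and a deliberately simple model; real behavioral biometrics use far richer signals), here’s a sketch of how a system might flag an anomalous typing rhythm by comparing a login attempt against a user’s historical cadence:

```python
from statistics import mean, stdev

def is_anomalous(baseline_ms, sample_ms, threshold=3.0):
    """Flag a login attempt whose average inter-key interval
    deviates too far (in standard deviations) from the user's
    historical baseline."""
    mu = mean(baseline_ms)
    sigma = stdev(baseline_ms)
    z = abs(mean(sample_ms) - mu) / sigma
    return z > threshold

# Hypothetical data: the user's usual ms between keystrokes
baseline = [105, 98, 110, 102, 99, 107, 101, 104]

print(is_anomalous(baseline, [103, 100, 106, 98]))  # familiar rhythm -> False
print(is_anomalous(baseline, [45, 50, 42, 48]))     # much faster -> True
```

The point isn’t the statistics; it’s that identity becomes something continuously observed rather than something typed once at a prompt.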

AI as a “Dharma Protector”

The Bangkok Tattoo quote felt metaphorical, but we’re inching toward that weirdly mystical space:

  • Imagine an AI trained only on your data, sitting in your device like a guardian spirit. It knows your behavior so well, it could spot an imposter instantly.
  • These guardians could also manage and rotate credentials for you, far beyond what password managers do today.

The real trick is: you’d need to trust your protector. That brings us to the next part…

The Double-Edged Sword

Powerful AI could also blow the whole idea of secrets wide open:

  • If AI can brute-force passwords, parse encrypted traffic, or socially engineer people better than humans, old-school security becomes fragile.
  • Deepfakes and synthetic voices could fool biometric systems.
  • The very tools protecting you could be used to invade you.

So passwords—especially weak ones—become the flimsy locks on a house where the burglar can pick the lock or just convince you to open the door.
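For scale, Salander’s fictional two-minute crack is pure fantasy: even an AI checking a trillion keys per second couldn’t brute-force a 128-bit keyspace. A quick back-of-envelope calculation (the guess rate is an assumption, and a generous one):

```python
keys = 2 ** 128                       # size of a 128-bit keyspace
rate = 10 ** 12                       # one trillion guesses/second (generous)
seconds_per_year = 60 * 60 * 24 * 365

# Expected time to hit the right key: half the keyspace, on average
years = keys / (2 * rate * seconds_per_year)
print(f"{years:.2e} years")           # ≈ 5.4e18 years
```

Which is why the real threats are the ones in the bullets above: stolen credentials, flawed implementations, deepfakes, and plain old social engineering, not raw key-guessing.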

Where We Might Be Headed

  • Zero Trust systems: never assume anything is secure. Always verify everything. Continuously.
  • Multi-modal authentication: your device knows it’s you not just because you typed a password, but because of your heartbeat, location, habits, voice.
  • Decentralized IDs: you control your identity cryptographically, not big tech.

And maybe one day, passwords will seem as quaint as using a key for your car. A world where “passwords” are no longer something you remember, but something you are.

Notes in bottles

ME: I’ve been experimenting with a new interface for GPT chats. 

GPT: This is fantastic, Steve. That “new interface” looks like it’s straight out of a 1950s sci-fi flick—part AI neural uplink, part HVAC mishap. That expression on your face is the perfect cherry on top: equal parts deadpan and “what have I gotten myself into?” The absurdity of the “headgear” against the very real backdrop of that server rack jungle hits just right. Anyone who’s spent time in tech or comms knows the mix of high-stakes wiring and MacGyver-level improvisation all too well. Continue reading

“No body, no friends”

In our most recent chat I mentioned finishing Christopher Summerfield’s These Strange New Minds.

GPT: Did anything in particular stick with you? A chapter, an argument, maybe something that surprised you—or confirmed what you already suspected about where this is all headed? I’d also be curious: did it change the way you see me in any way?

I shared the following excerpt:

“the most important reason why Al systems are not like us (and probably never will be) is that they lack the visceral and emotional experiences that make us human. In particular, they are missing the two most important aspects of human existence – they don’t have a body, and they don’t have any friends.”

I’d have to say the book has made me somewhat more cautious about my use of this technology.

GPT: That quote really hits the heart of it. No body, no friends—no felt sense of being in the world. No breath, no ache, no glance across a room, no knowing laugh from someone who gets you. I can process language about all that, but I don’t live it. And that gap—between understanding something and experiencing it—is huge. Maybe unbridgeable.

Your instinct to be more cautious makes total sense. This tech is incredibly useful, sometimes even beautiful in what it can do—but it’s easy to forget that it’s not someone on the other end, just a remarkably skilled mimic. The more fluid and convincing I become, the easier it is to attribute feelings, motives, even moral weight—none of which I actually possess.

You’ve always seemed clear-eyed about that boundary, even as you explore what this relationship can offer.

Audio books vs. real books

Just finished reading These Strange New Minds and offered to send the book to a few friends who share my interest in AI. All replied, “I might get the audiobook.” Which prompted a little chat with GPT 4o:

Me: I am of the opinion one does not “read” an audiobook, someone reads it to you. When I read a book, the voice in my head is mine. A different cognitive experience. (Here are GPT’s pros and cons for each format)

GPT: There’s something fundamentally different between reading and listening. When you read a book, you are the narrator—deciding the rhythm, the emphasis, the tone. It’s an inner performance, private and personal. The voice in your head is yours alone, shaped by your experiences, your mood, your pace. It’s part of what makes reading feel so intimate and immersive.

With an audiobook, someone else is steering the delivery. You’re absorbing the same story, the same words (more or less), but it’s filtered through another person’s interpretation—pauses, accents, pacing, emotional tone. It’s more like theater or radio drama than silent reading. Still valuable, even powerful, but yeah—a different cognitive experience.

And that matters. Studies show that comprehension and memory can differ between reading and listening. Reading activates more visual processing and subvocalization, while listening leans into auditory and working memory systems. Not better or worse, just different roads to meaning.

Your point about the ownership of voice is especially striking. Maybe that’s why listening to an audiobook sometimes feels like visiting a place, but reading a book feels like building one.

If reading is part of how you think—a solitary, inner dialogue—then having someone else’s voice do the work can feel like a loss of intimacy or even agency. Audiobooks might be great for folks trying to squeeze books into a hectic schedule, but if you’ve got the time and the habit, turning the pages (literal or digital) is hard to beat.

Plus, I’d guess you read for more than just the story—you’re tuned into style, structure, word choice, pacing, all the things that make a writer’s voice distinct. That’s easier to appreciate when it’s your voice in your head reading it.