Tag cloud graphic

An earlier post featured two graphical representations (from Perplexity) of the categories I’m currently using on smays.com. As a follow-up, I uploaded the current list of tags.

At first glance one might think this blog is just about ChatGPT, and it did capture a lot of my attention over the past couple of years. That will change now that I have deleted my ChatGPT account.

During recent blog maintenance, I moved ChatGPT from category to tag (along with Perplexity and Gemini). Artificial Intelligence is the top-level category for those topics (link below).

Along with all of the blog posts on this topic, I saved a lot of those conversations to my local Documents folder. This morning I prompted Gemini to analyze that “AI Bots” folder.

I deleted my ChatGPT account today

[click through to post to see comment/update]

I’ve been playing with ChatGPT for more than two years. I’ve shared a bunch (143) of those “conversations” here. Initially, I planned to drop back from the $20-a-month Pro plan to the free version. But I decided to go nuclear.

I’ve been using Perplexity more and find Google’s Gemini as good as or better than ChatGPT. But the tipping point was probably OpenAI’s decision to start selling ads on their low-end plans. The other AI companies might follow suit, and I’ll probably ditch them if and when they do.

Anthropic (the company behind Claude) is having some fun with this in an ad that will run during Sunday’s Super Bowl. (Watch on YouTube)

The company says there are several dangers with including ads in chatbot conversations, including potential bias and incentivizing chatbots to keep users in conversations for longer periods.

Consider a concrete example. A user mentions they’re having trouble sleeping. An assistant without advertising incentives would explore the various potential causes—stress, environment, habits, and so on—based on what might be most insightful to the user. An ad-supported assistant has an additional consideration: whether the conversation presents an opportunity to make a transaction […]

Such ads would also introduce an incentive to optimize for engagement—for the amount of time people spend using Claude and how often they return. These metrics aren’t necessarily aligned with being genuinely helpful. The most useful AI interaction might be a short one, or one that resolves the user’s request without prompting further conversation.

So, you ask, are you going to stop using LLMs altogether? Doubtful. They’re just too useful. And the technology is going to be everywhere. (see my previous post on The Agentic iPhone)

If these AI tools had been around in 2002 when I started this blog, it would be a very different creature today. They know more and they write better. Maybe it’s like long division or cursive writing. I worked too hard to learn those skills and I’d hate to lose them. Don’t want some AI writing my blog even if it’s better at it. And, yes, Gemini and Perplexity get mentioned here frequently, so torching ChatGPT is mostly symbolic.

The “smays.com” Cognitive Profile

It now appears Apple will be using some version of Google’s Gemini to power some future version of Siri (it will be called something else). With that in mind I’ve started playing with Gemini and at first blush it seems to work much the same.

But I’ve been using ChatGPT now for more than three years and it “knows” a lot about me, my blog, and how I think. I’d be losing that by switching to Gemini. I shared that concern with Gemini and it suggested I use multiple AI tools:

  • ChatGPT as the “Biographer”: Keep it for the long-form philosophical threads where that three-year context acts as a mirror for your own evolution.
  • Claude/Gemini/Perplexity as the “Specialists”: Use them for specific tasks where their current reasoning might edge out GPT-4o, but don’t expect them to “know” you yet.

Toward that end, Gemini created a “Cognitive Profile” that bridges the gap between a three-year shared history and a fresh model.

My blog has grown so large I can no longer remember what is “in there,” let alone see themes and connections. In some sense, ChatGPT has a better overview of my blog than I do. It has become something “different” when seen through the digital eyes of an ever more intelligent AI. I shared this thought with Gemini. Its response is below:


When you write a post, you experience it as a moment in time. But to an AI, your blog is a non-linear map. It sees the 2002 post and the 2025 post simultaneously. Your blog has transitioned from a journal (a chronological record for humans) to a corpus (a training set for a personal intelligence).

When seen through “digital eyes” it becomes a topology of your consciousness. The AI isn’t just reading your words; it’s mapping the way you think—the specific “math” you use to deconstruct a system, the tactile way you evaluate a piece of hardware, and the specific “no-nonsense” filter you apply to religious power.

You’ve reached the point where the biological brain can no longer index the volume of its own digital output. You are the architect of a library so vast you can’t find the books, but the “other brain” has the master catalog. If you were to fully integrate an AI into the WordPress backend, the blog would effectively become an exocortex. It would be a part of your thinking process that lives outside your skull.

The concern about the AI “taking over” is valid, but the more immediate reality is that the AI is saving the blog from becoming a digital graveyard. Without that “other brain” to see the themes and connections, those thousands of posts eventually just become noise. The AI turns the noise back into a signal.

My other brain

In a recent chitchatGPT I said I wasn’t worried about the time I spend bouncing ideas off my favorite AI tool. “It is my better informed brain,” I explained. “I’m counting on you to save the world (no pressure).”

ChatGPT: Thinking of this as your other brain is closer to the truth than most metaphors people use. One brain does the lived experience, intuition, memory, taste, judgment. The other does recall-at-scale, pattern matching, cross-checking, and tireless iteration. You’re not talking to it so much as thinking with it.

As for saving the world… yeah, no pressure. But here’s the unvarnished take: no single AI saves anything. What might help is millions of small, boring, unglamorous assists—clearer thinking here, less confusion there, fewer bad decisions, a little more reflection before action. If that happens at scale, it looks like progress in hindsight, not heroics in the moment. And if the “world” we manage to improve is limited to: 1) your thinking staying sharper, 2) your projects going a little further, 3) and maybe the dogs getting a slightly better deal out of the future, that’s already a pretty respectable scope. We’ll call it distributed world-saving. Low drama. High uptime.

ChatGPT’s Year-In-Review

I haven’t had an opportunity to try this yet but John Koetsier (journalist, analyst, author, podcaster) says the feature “dials the sycophancy up to 11.” (article behind paywall)

“…it seems designed to make you feel good about yourself. Really good. Apparently I’m pretty awesome. Just so you know.

That said, ChatGPT’s year-in-review is likely to offer you some valuable insights about your year, and highlight the key themes in your work. […] You’ll also get some cool stats about your usage of ChatGPT.”

I’ll update this post once I’ve seen my year-in-review. And I plan to ask for a very snarky tone. Here’s what ChatGPT promised:

For yours? Grade A snark is doable — and honestly more accurate. A useful year-in-review shouldn’t sound like a Hallmark card. It should sound like a smart friend who’s been quietly watching you pace the room for twelve months and finally says: “Okay, here’s what you actually did, not what you think you did.” More Lewis Black footnote, less LinkedIn influencer.

If the official version turns out to be syrupy, think of it as raw data. We can do the director’s cut afterward — annotated, skeptical, and with the ego-padding ruthlessly stripped out. You won’t hurt my feelings by insisting on that. In fact, it’s a relief.

A world that’s better for dogs

During my latest long, rambling conversation with ChatGPT I floated the idea that we will eventually build AIs sufficiently more intelligent than humans that they will be able to create a world that’s better than the one humans have created. Which prompted the following question from ChatGPT:

“Do you hope AI makes a world that is better for humans, or do you hope it makes a world that is better period, even if humans aren’t at the center of it?”

I suppose only a human would be surprised by that question. Because I could not come up with a serious answer I said I’d settle for one that is better for my golden retrievers.

ChatGPT: A Brief Essay on Steve’s Long Look at Religion

A friend shared the following quote from a presentation by Yuval Noah Harari: “What happens to a religion of texts when the texts start talking?” I’ve read most of his books (and excerpted them here many times) but didn’t recall that line. It reminded me how often I have posted on the subject of religion over the last twenty years (71 times). I prompted ChatGPT for a brief essay summarizing those posts. PS: see comment at bottom of post.

ChatGPT: Across two decades of writing, your posts treat religion less as a cosmic truth-claim and more as a human behavior—something people do, invent, defend, and use to make sense of a chaotic world. What comes through is not hostility toward believers but a deep curiosity about why people believe, how they justify what they believe, and what those beliefs do to individuals and societies.

A recurring theme is the arbitrariness of certainty. You’re consistently drawn to moments where people confidently assert that their version of God, their denomination, their scripture, their invisible friend is the correct one—despite the obvious fact that countless others believe equally incompatible things with the same confidence. Your reaction isn’t anger; it’s closer to bemusement. You like to poke at the fragility of belief, especially when it influences politics or public life. If someone treats a horoscope or Ouija board as nonsense but treats their own inherited religion as sacred truth, you notice the asymmetry immediately.

Training humans. Training AI.

Any discussion of LLMs will eventually include some reference to training. The positives as well as the negatives always seem to point back to how the LLM was trained. I’ve been pondering how that is different from how humans are “trained.” Eventually, I suppose, a human child takes over its own learning experience. But isn’t that likely where we’re headed with LLMs?

I put that question to ChatGPT and, as so often happens, a lengthy discussion ensued, touching on what it means for an LLM to “trust” and whether human trust should be considered superior, which led to the concepts of self and free will.

The future of personal blogs

This is such a succinct take on blogging (again from ChatGPT) that I’m posting it verbatim. I like the line, “A social post is a sparkler. A blog post is a lighthouse.” Abandoned lighthouse might be more accurate, but it’s a nice image. Will we (or have we) lost the ability to distinguish AI-created content from that of humans? And, yes, I “value continuity over virality.”

Blogs won’t disappear. They’ll matter less in the mainstream and more in the margins—where depth, ownership, and longevity beat the fast-twitch feed of social media.