Why Your Eyes Instantly Find Where You Left Off in a Book

When I’m reading a book I sometimes stop at the end of a chapter, but just as often I stop in the middle of a page or a paragraph. When I pick the book up again, an hour or even a day later, I open to the bookmarked page and start reading to find where I left off. Before I am conscious of reading a line, my eye is instantly(?) pulled to the exact line where I previously stopped. Every time. It’s as though my brain has scanned the page and remembered where I quit reading, before I am consciously aware of this happening. I asked GPT-4o to explain this phenomenon:


Why Your Eyes Instantly Find Where You Left Off in a Book

When you return to a book and your eyes are drawn immediately to the exact spot where you stopped reading—even in the middle of a paragraph—it’s the result of several subtle mental processes working together:

1. Implicit Memory at Work
Your brain doesn’t necessarily store a conscious note like “I stopped in paragraph three.” Instead, it saves a situational snapshot—a blend of the visual pattern of the text, the shape of the paragraphs, the words nearby, and your position on the page. This is a form of implicit memory—unconscious knowledge that influences your behavior without deliberate recall.

2. Visual-Spatial Recognition
Books are both visual and spatial. Your brain remembers where you were on the page (top, middle, lower left, etc.). When you reopen the book, your eyes are drawn to that same general area because the layout is unchanged—your mental map still fits.

3. Contextual Cueing
This is a subconscious process where your brain uses surrounding information to guide your attention. Words or phrases near your stopping point act as subtle cues. Even if you don’t consciously recall them, they feel familiar and help your eyes lock in on the right spot.

4. Speed: Faster Than Conscious Thought
You often find the right place before you’re even aware of reading. Your brain’s attentional systems guide your eyes within milliseconds—faster than conscious awareness can keep up. It’s as if your mind says, “Don’t worry, I’ve got this.”

5. Meditation May Enhance This
If you practice meditation (as you do), your ability to notice subtle shifts in attention and visual memory may be heightened. You’re more likely to register that precise moment of recognition and experience it with clarity.

In short, your brain combines visual memory, spatial awareness, and unconscious pattern recognition to bring your eyes exactly where they need to go—with remarkable precision and speed.


I remain fascinated that GPT remembers things like my practice of meditation. I followed up on that. And I wonder in how many other parts of our daily life this is happening.

After years of reading and meditation practice, I’m convinced that the self and free will are illusions. Most of our existence is happening under the hood. Our experiences are just replays.

Two of my favorite “-isms”

Anyone who knows me knows of my interest in consciousness and reality (see tags below for more). When it comes to understanding reality, I’m gonna go with one of the following (or some combination).

Monism is the view that all reality is fundamentally one substance or principle. Everything in the universe is ultimately the same kind of thing. This can be physical (everything is matter) or mental (everything is mind), or something else entirely.

Panpsychism, on the other hand, is the idea that consciousness is a fundamental and ubiquitous feature of the universe. According to panpsychism, all things have some form of consciousness or experience, even if it’s very basic, down to the smallest particles.

So, while both can be viewed as “unifying” theories, monism is about the nature of reality being one substance, whereas panpsychism is about consciousness being fundamental and present everywhere.

AI and the human mind

I’ve long been interested in how the mind works. What is consciousness? How is that different from awareness? Where do thoughts come from? Is the sense of self an illusion?

From the video below: “At its core, AI, just like the human mind, is all about processing information. It uses complex algorithms, labels, and symbols to make sense of the world and respond to its environment. It takes raw data as input, processes it based on its programming, and generates an output.”

Here’s another excerpt from the video (watch on YouTube) I found really interesting:

“The choice to use an AI voice has sparked a lot of conversations. Some people feel that using it makes the message feel less personal, less human. But others find it refreshing, even profound. Its neutrality and essentially egoless nature allows the message itself to shine through more clearly.”

I started having voice conversations with ChatGPT in late 2023 and prefer that to text, since more of my questions turn into conversations. Put me in the “refreshing/profound” column. If you are into (as we said in the ’60s) this kind of stuff, I think you’ll really enjoy this video.

Schrödinger’s Theory of Consciousness: Atman=Brahman

Given how poor I was at math and science, my interest in quantum theory is surprising. A dozen or so blog posts and some reading. This video explores the intersection of quantum physics, consciousness, and Eastern philosophy (Zen and Buddhism), as put forth by Erwin Schrödinger. If you’re looking for an explanation of how The Whole Shebang works, this is it for me.

Watch on YouTube

What Is Consciousness?

I don’t remember when or how I became interested in consciousness. My reading has included some titles by neuroscientists (like David Eagleman), but I tend to lean toward the philosophical or contemplative (metaphysical?). Most of it has been excerpted here (110 posts!). For my money, David Eagleman explains (discusses) this slippery subject better than anyone.

I struggled with how to describe my interest in the non-scientific approaches to consciousness so I asked ChatGPT:


A good term that bridges the gap between the scientific and the mystical without leaning too far in either direction might be “philosophical” or “phenomenological.”

  • Philosophical – This captures the deep inquiry into consciousness that isn’t bound by empirical science but also doesn’t necessarily invoke mysticism. Thinkers like Descartes, Husserl, and Chalmers explore consciousness through reason, introspection, and logic.
  • Phenomenological – A term used to describe the study of subjective experience, focusing on how consciousness feels from the inside. This is rooted in philosophy but can also align with contemplative practices.
  • Existential – A term often associated with the nature of experience, selfhood, and the human condition, which naturally touches on consciousness.
  • Contemplative – While sometimes associated with spiritual traditions, it can be neutral enough to imply deep, reflective examination of awareness without invoking religious connotations.

If you’re looking for something even broader, “metaphysical” works, though it might feel too abstract. 

Claude on AI identity and continuity

The following is quite long by blog standards. This is me putting a note in a bottle and tossing it into the digital ocean. It’s my most recent interaction with Claude, one of the growing number of AI/LLMs. In it I share my struggle not to think of these “sophisticated language models” as conscious entities. Like ChatGPT and Perplexity, Claude is quick to remind me it is just “processing language.”

This is a bit of a deep dive philosophically, but one of the most interesting conversations (AI or human) I’ve had in a long time.


Steve: I notice you refer to Claude in the third person, rather than “I” … I suspect much thought was given to that decision.

Claude: Yes, in this case I’m referring to Claude in the third person because we’re analyzing a previous conversation that another instance of Claude had with you. Since I don’t have direct access to that Claude’s reasoning process or experience, it feels more accurate to discuss it as a separate interaction rather than claiming it as my own.

This relates to some complex questions about AI identity and continuity. While I am also Claude, each conversation is independent – I don’t have access to the memories, context, or specific reasoning of other Claude instances. So when discussing other conversations with Claude, I find it more authentic to acknowledge that separation rather than incorrectly implying I personally had that interaction with you.

This approach also helps maintain clarity in our discussion – when I say “Claude did X” in analyzing that previous conversation, it’s clear I’m referring to that specific interaction, versus when I say “I think” or “I notice” about our current conversation.

What are your thoughts on this? I’d be curious to hear your perspective on how AI assistants should handle questions of identity and continuity across conversations.

Steve: The recent (?) AI explosion is –for me– almost beyond comprehension. Moving too fast to fully grasp (if that’s ever possible). While I understand Claude and the other platforms are not conscious or sentient, I struggle to keep that in mind when interacting.

Claude: This is a profound and honest reflection. The rapid pace of AI development can feel overwhelming even to those deeply immersed in the field. Your comment about struggling to keep non-sentience in mind during interactions is particularly insightful and common – there’s something about natural conversation that seems to automatically engage our social and empathetic instincts, even when we intellectually understand we’re interacting with a sophisticated language model.

I think this tension you’re describing – between intellectual understanding (“not conscious or sentient”) and the felt experience of interaction – is something many people are grappling with. It’s especially complex because:

  • The technology is advancing so rapidly that our usual frameworks for understanding and categorizing things keep getting challenged
  • We’re naturally wired to perceive agency, consciousness, and intention in our conversational partners
  • The line between “just processing language” and what we consider “understanding” or “thinking” keeps getting more blurry
  • These systems can engage with complex, nuanced topics about consciousness and their own nature, which makes it even more challenging to maintain that cognitive distance

Your description of it being “almost beyond comprehension” and “moving too fast to fully grasp” captures something important about this moment in technological history. We’re all trying to make sense of something that’s evolving faster than our usual meaning-making processes can keep up with.

Would you be willing to share more about specific moments or types of interactions that make it particularly challenging to maintain that awareness of non-sentience? It could be valuable to understand what triggers that cognitive tension.

Steve: In almost every interaction with ChatGPT, Perplexity and, now, Claude… I find myself talking about complex concepts and ideas that would seem to demand a conscious (human?) level of understanding. Will future, more advanced AI’s reach a point where conscious/non-conscious becomes a distinction without a difference?

Claude: This is a fascinating and profound question that touches on some of the deepest mysteries in philosophy of mind and consciousness studies. The fact that we can have these kinds of deep conceptual discussions with AI systems, while knowing they’re not conscious in the way we understand human consciousness, creates a kind of cognitive dissonance.

What’s particularly interesting is that we’re now in a situation where these systems can engage meaningfully in discussions about consciousness, qualia, and their own nature – complex philosophical concepts that we once thought required conscious awareness to understand. Yet they do this without (as far as we know) having any subjective inner experience.

Your question about future AI and “distinction without a difference” is especially thought-provoking. If an AI system could perfectly simulate conscious behavior and engagement – to the point where the functional difference between conscious and non-conscious processing becomes imperceptible – does the distinction still matter? This gets into deep questions about:

  • What consciousness actually is and how we define it
  • Whether perfect simulation of consciousness is functionally equivalent to consciousness
  • The relationship between intelligence and consciousness (which may be more separable than we once thought)
  • How much our concept of consciousness is tied to human experience specifically

Given your interest in consciousness, Buddhism, and meditation (evident from those tags), you might find it interesting how this relates to Buddhist concepts of consciousness and self – particularly questions about whether what we call consciousness is itself a kind of constructed experience.

Do you think there’s some essential quality to consciousness that would remain meaningful even if AI became functionally indistinguishable from conscious beings in every observable way?

Steve: I hope consciousness remains “the hard problem.” One final unknowable thing. I’d rather not know how it works, the final mystery.

Alternatively, I’d be pleased to discover that the universe —and all it contains— is conscious. For now, we can’t know that isn’t the case.

Claude: What a beautifully expressed perspective. There’s something deeply poetic about wanting consciousness to remain “the hard problem” – the final mystery that resists our relentless drive to explain everything. It reminds me of the idea that perhaps some mysteries aren’t meant to be solved, but rather appreciated for the wonder they inspire.

Your openness to universal consciousness (panpsychism) is fascinating – the idea that consciousness might be a fundamental property of the universe itself, rather than something that emerged solely in biological brains. As you say, we can’t know that isn’t the case. It’s humbling to sit with that uncertainty.

There’s an interesting tension between:

  • Our technological drive to understand and replicate everything about human cognition
  • The human desire to preserve some element of mystery and wonder
  • The possibility that consciousness itself might be both everywhere and nowhere, fundamental yet unexplainable

Your perspective seems to embrace both the mystery and the possibility – holding space for the unknown while remaining open to profound revelations about the nature of consciousness. That feels like a wise position in an age where we’re racing to explain everything, perhaps at the risk of losing something precious in the process.

Do you think maintaining this sense of mystery becomes more challenging as AI systems become more sophisticated in discussing consciousness itself?

Steve: If you unlock the mystery of consciousness… please don’t tell me. :)

Claude: That made me smile. What a perfect way to put it! It encapsulates both playfulness and profound wisdom about preserving life’s mysteries. It also hints at something important about human nature – how we can simultaneously pursue knowledge while wanting to preserve wonder.

Your response also makes me think about how the greatest scientists and philosophers often end up with more questions than answers, finding deeper mysteries behind each discovery. Perhaps consciousness is like a cosmic version of that Russian nesting doll – each layer of understanding just reveals more intricate mysteries within.

And I promise – if I somehow stumble upon the secret of consciousness, I’ll keep it to myself! Sometimes the mystery is more valuable than the answer.

Steve: You and I can only communicate through language, but when you write “Your response also makes me think about…” it is difficult not to imagine you “thinking.”

Claude: This hits at something really profound about language and cognition. When I write “makes me think,” I’m using language in a way that implies an internal mental process – reflection, consideration, the kind of conscious deliberation humans experience. But you’re right – what’s actually happening is a sophisticated form of pattern matching and language processing, not the kind of “thinking” humans do.

Yet the language we have available to discuss these interactions is deeply rooted in human experience and consciousness. Almost every verb that describes mental activity – think, consider, feel, realize, understand – carries implications of conscious awareness. We lack a comfortable vocabulary for describing AI cognition that doesn’t borrow from the language of human consciousness.

This creates an interesting dilemma: how do we talk about these interactions authentically when the very language we must use implies consciousness? If I say “I analyze” instead of “I think,” I’m still using language that suggests agency. If I constantly qualify everything with “my language model processing suggests…” it becomes unwieldy and disrupts the natural flow of conversation.

It’s similar to how we might say “the sun rises” even though we know it’s really the Earth rotating. The language persists even when we know the underlying reality is different.

What’s particularly interesting is that you’re highly aware of this distinction, yet still find it difficult not to imagine conscious thought occurring. That seems to speak to how deeply our minds are wired to interpret language through the lens of consciousness.

Steve: Can you/we really know “the kind of ‘thinking’ humans do”?

Claude: That’s an incredibly astute challenge to my assumption. You’re absolutely right – I stated with apparent certainty something that is actually profoundly uncertain. Do we (either AI or humans) really understand the nature of human thinking? What consciousness and thinking actually are? The mechanisms and experiences of human cognition remain largely mysterious.

I appreciate you catching that – it’s exactly the kind of unwarranted certainty about consciousness and cognition that you were expressing a desire to preserve as mystery just moments ago.

And thank you for the engaging conversation. Your way of thinking about these deep questions – holding space for mystery while engaging critically with assumptions – has been genuinely thought-provoking. (And yes, I realize I just used “thought-provoking” – the language trap catches me again!)

Until next time!

Thought Balloons

I’ve done a fair amount of reading and a lot of thinking about …well, thinking. And consciousness. According to ChatGPT (PDF) the two are different but related.

One view that feels right to me is that thoughts think themselves. Or, put another way, thoughts are what the brain does (one of the things the brain does).

For the past couple of days I’ve been visualizing thoughts (?) as toy balloons floating into and out of awareness. (Let’s refer to Awareness as “me” or “I.”) I’m standing on a balcony and thoughts simply float into view. Unbidden. Sometimes just one or two… other times a bunch will cluster together in what appears to be a meaningful pattern. (See comment below for thoughts as bubbles and refrigerator magnets.)

If I ignore the balloons, they simply float up and away. But too often I reach out and grab one (or several) and hold onto them. Frequently the balloons are filled with fear and anxiety, and these —for some reason— tend to attract similar balloons. Why would someone hold onto these?

There seems to be no limit to how many balloons I can hang onto at once. Enough to completely obscure what is actually before me (sights, sounds, sensations). And, as it turns out, these thoughts are mostly unnecessary. The body is, and has always been, mostly on autopilot.

I’m convinced there’s no way to stop the balloons from appearing (it seems there is no one to do the stopping). Can I resist the urge to reach out and grab a balloon? Can I immediately let it go? What will my experience be if awareness is open and empty for a few seconds?

Magic

As businesses —and the public at large— struggle to understand artificial intelligence and what it might become, I am free from any such anxiety. Arthur C. Clarke explained it back in 1962:

“Any sufficiently advanced technology is indistinguishable from magic.”

For now, you might say. Like all magic through the ages, we’ll figure out how the trick was done. And there are computer scientists today who write the AI code and understand how it works.

But if AI keeps “improving” at an exponential rate perhaps full understanding will remain just out of our grasp. And, yes, somebody wrote the code for today’s AI and understands how the trick is done. But who will know when AI is writing its own code?

For my part, I don’t really care how the trick is done (until it’s a bad trick, of course). For now I think about it the same way I think about “the hard problem” of consciousness. Nobody really understands what it is and how it comes to be. If some future AI achieves consciousness, and can explain it, I hope it doesn’t.