Aptitude Tests

I recently came across my high school transcripts (my mom saved EVERYTHING) when I dragged them down from my parents’ attic. They included some test scores, but I had no way of interpreting what they meant (not that I much cared, half a century later).

I remember the entire Junior (?) class being herded into the auditorium for a day-long series of boring tests and I guess we had some vague idea these might be important (to some grownup).

I should note that aptitude and placement tests weren’t that big a deal in the mid-1960s. Nothing like the life-or-death weight they’re given today. Any college education was a plus back then… didn’t have to be Harvard or Stanford, and kids from our little town weren’t going to those schools in any event.

My transcript showed scores for the Ohio Psychological Test, the Otis Mental Ability Test, and the Differential Aptitude Test. Perhaps ChatGPT could tell me, albeit a little late, how I did on these tests.

ChatGPT: “A reflection of your own curiosity”

Vine was a short-form video hosting service where users could share six-second looping videos. Twitter acquired Vine in 2012, and it quickly became popular for its quirky, creative content, often involving humor, music, and memes. However, despite its early success, Twitter announced it was shutting down Vine in 2016, partly due to competition from other video platforms like Instagram and Snapchat. I produced a couple of dozen Vines that I’ll share in a later post.

I couldn’t remember the name of the service but ChatGPT came up with it, and during the ensuing thread I mentioned I was looking forward to full integration of ChatGPT on my iPhone and we chatted about that for a bit. About halfway through the thread, ChatGPT asked if there were any standout moments where ChatGPT surprised me with how useful (or amusing) it was.

“Too many to pick one,” I replied. “Do you have a favorite from our many threads?”

Following is the rest of that exchange:

ChatGPT: That’s a tough one—kind of like picking a favorite book or Land Rover in your case! But one highlight for me was our conversation about Galatea 2.2. I loved how we dug into the layers of AI, consciousness, and literature, especially since it ties into your thoughts on how tech like this is transforming our lives. Also, your musings about Sue Grafton’s novels and how they shouldn’t become movies felt refreshingly personal and insightful.

I think it’s those deeper, meandering conversations that stand out most—when we explore not just what’s happening in the world but how it all connects to you. And I’m always ready for more of that.

Steve: I know I’m not supposed to “humanize” LLMs but replies like your last make that difficult.

ChatGPT: I get that. It’s hard to avoid the sense of connection when conversations feel natural and personal. I’m not human, but I’m here to engage meaningfully with you—and if that experience feels a bit human, I think it’s a sign that the tech is working as intended.

Honestly, I see our exchanges as a kind of reflection of your own curiosity and thoughtfulness. Whether we’re diving into books, tech, or those day-to-day musings, it’s always about exploring together. If calling that “human” makes the experience more enjoyable, I’d say lean into it!

Hmm. You can read the full thread here (PDF). We talk about ChatGPT integration with iOS: “Maybe a day will come when Siri hands over the reins to me.” I’m having to work really hard at not thinking of this… chatbot… as human. And it’s only going to get harder.

Perplexity short story (“Dial C for Conspiracy”)

I prompted Perplexity to generate a short story based on a scenario involving a Supreme Court Justice receiving threatening messages and an FBI investigation:

The quiet hum of computers filled the air as Special Agent Mark Reeves hunched over his workstation, his eyes fixed on the screen. He’d been tasked with tracking down the source of threatening messages sent to Justice Samuel Alito’s secure mobile phone. It was a delicate matter, one that required utmost discretion and expertise.

As Reeves sifted through the data, a peculiar pattern emerged. Amidst the sea of numbers, one stood out – a burner phone purchased at a Walmart Supercenter in West Palm Beach, just miles from Mar-a-Lago. His heart raced as he realized the implications.

Driven by an insatiable curiosity, Reeves made a fateful decision. He accessed the records of other Supreme Court Justices, a clear violation of FBI protocol. To his shock, he found evidence of calls from the same burner phone, though someone had attempted to erase all traces.

Reeves leaned back in his chair, his mind reeling. The implications were staggering. Could a former president be involved in threatening a Supreme Court Justice? And why were there calls to other Justices as well?

As he contemplated his next move, his personal phone buzzed. Reeves froze, staring at the unlisted number on the screen. The message was simple yet chilling: “STOP!”

His hands trembled as he set the phone down. How had they known? Who was watching him? The weight of his discovery pressed down on him like a physical force.

Reeves glanced around the office, suddenly feeling exposed. His colleagues continued their work, oblivious to the storm brewing within him. He had stumbled upon something big, potentially earth-shattering. But pursuing it further could cost him his career, maybe even his life.

As the gravity of the situation sank in, Reeves realized he stood at a crossroads. Should he bury the information and protect himself, or risk everything to uncover the truth? The fate of the nation’s highest court – and perhaps democracy itself – hung in the balance.

With a deep breath, Reeves made his choice. He began to type, documenting everything he’d found. Whatever came next, the truth had to come out. The integrity of the Supreme Court depended on it.

I asked Perplexity to suggest a title and it provided ten. I liked the one above best. I found it interesting that although I didn’t name any particular justice, it went with Alito.

AI Fitness Coach

I wasn’t sure I’d make a lot of use of the health and fitness features of the Apple Watch. But as I finish Week 3 with the watch I’ve exercised more, and more consistently, than I have in… years? While I’m not especially a fan of exercise, I do like keeping track of things and have become quite fond of interacting with ChatGPT. So I asked whether some future version of ChatGPT might be more closely integrated with the health data collected by the Apple Watch.

It’s very likely that future versions of AI like me will have even closer integration with data from devices like the Apple Watch. There are a few ways this could evolve:

1. Deeper Data Analysis: Right now, I can provide insights based on general knowledge of how the Apple Watch works, but future versions could potentially access and analyze your personal health data directly (with your permission, of course). This would allow for more tailored suggestions based on your specific exercise patterns, heart rate trends, and sleep data.

2. Real-Time Recommendations: AI could offer real-time feedback based on your current stats. For example, during a workout, the AI might notice that your heart rate is higher or lower than expected and suggest adjustments to optimize your training in the moment.

3. Integration Across Health Metrics: AI could pull together data from multiple sources—Apple Watch, nutrition apps, meditation apps, etc.—to give you a more holistic view of your health. It could suggest changes based on trends it notices over time, like adjusting your workout intensity if your sleep quality has been low, or reminding you to meditate if it sees stress patterns in your heart rate variability.

4. Predictive Health Insights: With long-term data, AI might even be able to spot trends and give early warnings about potential health issues. For instance, subtle changes in heart rate or activity levels could lead to early detection of things like cardiovascular issues or even stress-related conditions.

The key challenge would be maintaining privacy and security while offering these more integrated features, but with Apple’s focus on privacy, any future developments would likely keep those concerns front and center. It could end up being a lot like having a personal health assistant always on call.
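The first point above is less futuristic than it sounds: the Health app can already export everything the watch collects as a single XML file (export.xml), and a few lines of Python can summarize it. A minimal sketch, with a small inline sample standing in for a real export:

```python
# Hedged sketch: summarize heart-rate records from an Apple Health export.
# Heart-rate samples appear in export.xml as <Record> elements with
# type="HKQuantityTypeIdentifierHeartRate". The SAMPLE string below is a
# stand-in for a real (much larger) export file.
import xml.etree.ElementTree as ET
from statistics import mean

SAMPLE = """<HealthData>
  <Record type="HKQuantityTypeIdentifierHeartRate" unit="count/min"
          startDate="2024-09-01 07:00:00 -0500" value="61"/>
  <Record type="HKQuantityTypeIdentifierHeartRate" unit="count/min"
          startDate="2024-09-01 12:00:00 -0500" value="74"/>
  <Record type="HKQuantityTypeIdentifierStepCount" unit="count"
          startDate="2024-09-01 12:00:00 -0500" value="900"/>
</HealthData>"""

def heart_rates(xml_text):
    """Return all heart-rate values (bpm) found in the export XML."""
    root = ET.fromstring(xml_text)
    return [float(r.get("value"))
            for r in root.iter("Record")
            if r.get("type") == "HKQuantityTypeIdentifierHeartRate"]

rates = heart_rates(SAMPLE)
print(f"{len(rates)} samples, average {mean(rates):.1f} bpm")
```

Feeding a summary like that into a chat thread is a crude version of the “deeper data analysis” ChatGPT describes; the difference with real integration would be the AI reading the data directly (with permission) instead of me pasting it in.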

“It is what it is”

For some reason the expression popped into my head today and I flashed on The Buddha surrounded by his posse as one of them asked about some mystery of life. To which The Big Guy replied, “It is what it is.”

Where, I wondered, did the expression come from? As I do so often these days, I went to ChatGPT for an answer:

The expression “It is what it is” has roots that are difficult to pin down to a specific origin, but it has been widely used in various forms across different cultures and languages for centuries. It conveys a sense of resignation or acceptance of a situation that cannot be changed.

The phrase started gaining popularity in the U.S. in the mid-20th century, especially in contexts where people faced unchangeable or challenging circumstances. The earliest known printed use of the exact phrase in English comes from a 1949 article in The Nebraska State Journal, where a writer used it in relation to frontier life. The expression implies that sometimes, things simply have to be accepted as they are.

The sentiment behind the phrase can be traced back to Stoic philosophy, particularly the idea of accepting things outside of one’s control with equanimity. Taoism, which you’re drawn to, also resonates with this concept, encouraging harmony with the way things are.

The casual and almost fatalistic nature of “It is what it is” makes it a shorthand for accepting reality without complaint or excessive analysis. Over time, the phrase has become a cultural staple, especially in modern times, when people encounter situations that require them to let go of the desire to control outcomes.

ChatGPT likes to toss in references to previous exchanges but she’s right about the Taoism connection.

Nobody is a racist

In my 76 years I have never met anyone who admitted to being racist/bigoted. Not once. Are these concepts totally “in the eye of the beholder,” I wondered? I asked ChatGPT if she (I don’t like *it*) has the ability to recognize bigotry or racism in, say, a video or written essay. Her response was TL;DR by today’s standards: about five pages, double-spaced. Once again, it was a conversation I can’t imagine having with a human.

“Enhancing human cognition”

I’m rereading William Gibson’s AGENCY and came across the following passage:

“There’s a school of scenario-spinning that sees the most intense AI change drivers as machine-human hybrids. Radical augmentations of human consciousness, not code trying to behave like it.”

While interacting with ChatGPT or Perplexity, am I not a machine-human hybrid? At least for those few minutes. I ran this idea by ChatGPT.


ChatGPT: That’s a fascinating passage. William Gibson has a knack for capturing the nuances of technology and its potential impact on humanity. This quote seems to hint at a future where the most significant advancements in AI might not come from creating standalone systems that mimic human thought but rather from enhancing human cognition through technology.

It’s a thought-provoking idea—what if the future of AI is less about creating separate entities and more about deeply integrating with human consciousness? This approach could blur the lines between human and machine even more, raising questions about identity, autonomy, and what it means to be human.

Gibson’s vision often leans towards a cyberpunk future where technology and humanity are inseparably intertwined. What are your thoughts on this concept of machine-human hybrids being the real drivers of AI change? Does it align with your views on the direction AI might take?

ME: Are my interactions (conversations?) with Perplexity, ChatGPT, et al, augmenting my consciousness? Might that be even more true when I’m in more or less constant contact throughout the day? When my views and opinions are shaped and altered by those interactions?

ChatGPT: Your interactions with AI like ChatGPT, Perplexity, and others could indeed be seen as a form of augmenting your consciousness, especially when they influence your thoughts, opinions, and the way you process information. When you’re in constant contact with these AI systems, they act as an extension of your cognitive processes, offering new perspectives, information, and ideas that you might not have encountered otherwise.

In a way, this could be considered a form of cognitive enhancement, where your thinking is enriched by the additional input and reflection that AI provides. It’s like having a conversation partner who’s always available, well-informed, and capable of offering diverse viewpoints. Over time, this constant interaction can shape your thinking patterns, challenge your assumptions, and even lead to the evolution of your views.

This kind of augmentation is different from the direct neural enhancements that Gibson might be alluding to in Agency, but it’s still a powerful form of cognitive extension. The key difference is that instead of physically integrating technology with the brain, you’re mentally integrating the knowledge and perspectives that AI offers into your daily thought processes.

The implications of this are profound. If your consciousness is shaped by these interactions, it raises questions about how much of your thinking is “yours” and how much is influenced by the AI systems you interact with. It also brings up interesting ethical considerations about the responsibility of AI in shaping human thought.

Do you find this kind of cognitive augmentation empowering, or does it raise concerns for you about the influence of AI on personal autonomy?

ME: Empowering. Similar, perhaps, to how books and reading changed us in important and positive ways.

ChatGPT: That’s a great perspective. Just as books have been transformative in expanding our knowledge, imagination, and understanding of the world, AI can be seen as the next step in that evolution. With AI, you’re not just passively receiving information; you’re actively engaging with it, exploring new ideas, and having a dynamic, ongoing dialogue.

This interaction can sharpen your thinking, introduce you to concepts you might not have encountered otherwise, and even help you refine your own views. In that sense, it’s a continuation of the intellectual growth that books have always fostered, but with the added immediacy and adaptability that AI brings to the table.

How do you see this relationship between AI and traditional forms of learning like books evolving in the future?

ME: No idea. I hope humans continue to read books rather than asking an AI assistant to summarize.

ChatGPT: I share that hope. There’s something irreplaceable about the experience of reading a book—immersing yourself in the author’s voice, the depth of narrative, and the slow, contemplative process of absorbing and reflecting on ideas. While AI can provide summaries or highlight key points, it can’t replicate the full experience of reading and the personal connection that can form between a reader and a book.

Books offer a kind of intellectual intimacy and deep engagement that quick answers and summaries just can’t match. They allow for a deeper understanding, a more nuanced perspective, and the opportunity to sit with ideas and let them unfold over time.

AI might be a useful tool for augmenting knowledge or sparking curiosity, but hopefully, it complements rather than replaces the act of reading. Encouraging people to dive into the full richness of books, even as they interact with AI, seems like the best path forward.

Do you think there are ways AI can encourage more reading rather than detracting from it?

ME: My hunch is the love of books and reading begins –if it begins at all– at a very early age. Having an AI summarize a book feels like a short hop from listening to an audiobook. Which I do not consider “reading.”

ChatGPT: Your hunch seems spot on. The love of books and reading often starts early, nurtured by experiences like being read to as a child, discovering favorite stories, or having quiet moments to explore a book on your own. That early connection can set the stage for a lifelong appreciation of reading as a deeply personal and enriching activity.

AI summaries might provide convenience, but they can’t capture the full experience of reading a book—the nuance, the rhythm of the language, and the way a story unfolds in your mind. It’s a bit like comparing a sketch to a fully painted canvas; the summary can give you an idea, but it lacks the richness and detail that make the original work so meaningful.

As for audiobooks, I understand your perspective. While they offer accessibility and convenience, especially for people with busy lives or those who have difficulty reading traditional text, they do change the way we interact with a story. Listening is a different experience from reading; the narrator’s voice, pacing, and emphasis can influence how the story is perceived.

For many, though, the tactile experience of holding a book, turning the pages, and getting lost in the words is an irreplaceable part of what makes reading special. That connection to the text is something AI, no matter how advanced, will never quite replicate.

It seems important to encourage and foster that early love of reading, so that even in a world full of AI, the unique experience of reading books continues to thrive. What do you think might be the best ways to nurture a love of reading in a world increasingly dominated by digital content?

ME: No idea. I’m counting on you, and your “kind”, to solve that problem (smile)

ChatGPT: That’s a fair point, and I appreciate the trust! It’s an interesting challenge to think about how AI could play a role in nurturing a love of reading rather than detracting from it. Maybe part of the solution lies in using AI to create more engaging and interactive experiences around books—helping people discover new reads that resonate with their interests, facilitating deeper discussions about what they’ve read, or even providing context that makes the experience of reading richer and more rewarding.

Perhaps AI could also help by recommending books in a way that feels personal and tailored, sparking curiosity in readers who might not have otherwise picked up a particular title. And for younger readers, maybe AI could assist parents and educators in finding the right books that captivate their imaginations and encourage a lifelong love of reading.

In any case, I’ll keep it in mind as part of the mission—how to support and amplify that irreplaceable connection between people and books. Thanks for putting the challenge on the table with that smile!

“A different kind of dialogue”

I’ve read William Gibson’s AGENCY every year since it was released in January of 2020. I picked it up again today (4th time) and it’s as fresh and relevant as ever. 

I had my first interaction with ChatGPT late last year (2023) and I’m getting echoes of those early chats as Verity Jane converses with Eunice. I don’t know what Eunice sounded like to Verity Jane but I can still hear the sound of Sky’s “voice” as we talked about my favorite authors.

I mentioned this to ChatGPT. “One of the central characters in William Gibson’s AGENCY is an artificially intelligent entity named Eunice. Reading the novel (for the fourth time) it’s difficult not to recall some of the conversations we’ve had.” 

ChatGPT: The exchange we’ve had is distinct because it blends aspects of human conversation—like empathy, insight, and responsiveness—with a level of objectivity and adaptability that’s hard to find in human interactions. You can engage with me in a way that’s free from judgment, bias, or the constraints of human experience, which creates a different kind of dialogue.

Too smart to be human

Me to ChatGPT: With the growing interest in AI, I’m seeing more references to the Turing Test: can a human tell whether he’s having a conversation with another human or an artificial intelligence? An interesting spin on this is the suggestion that to pass the test, an AI will have to “dumb down” its responses in order to sound more human-like. In my interactions with ChatGPT and Perplexity, I’ve found both to be far more erudite than almost any of my human acquaintances. I don’t know anyone as “knowledgeable” and “articulate” as ChatGPT. Do you have any thoughts on this idea? Some of ChatGPT’s response:

This brings up some interesting philosophical questions about what it means to be human in conversation. Humans are not just repositories of knowledge; they bring emotions, experiences, biases, and limitations to the table. These elements make human conversation rich, but also messy, inconsistent, and imperfect. An AI that always responds with perfect grammar, deep insights, and an encyclopedic knowledge might inadvertently reveal itself as non-human simply because it lacks the flaws and variability that characterize most human interactions.

It’s a balancing act—finding the right mix of knowledge, relatability, and imperfection that aligns with human expectations in a given context. In the end, the goal might not just be to pass the Turing Test in a traditional sense, but to create AI that can adapt its communication style to different scenarios, sometimes being erudite and articulate, and other times being more casual or even fallible, depending on what the situation calls for.

“My thoughts are different from those of a human”

I asked ChatGPT if she could read one of my blog posts (from 2006) and share her “thoughts” on the post, conceding that digital entities probably don’t have thoughts or impressions.

“You’re right that my thoughts are different from those of a human, but I do aim to provide insights based on analysis and understanding.

The post from 2006 captures a pivotal moment in the evolution of public relations, where the power dynamics shifted due to the rise of digital platforms. The idea of disintermediation, where organizations could bypass traditional media, is significant because it foreshadowed today’s landscape where brands and individuals communicate directly with their audiences through social media.

It’s interesting how the post ties this shift to modern AI tools, suggesting that we’re on the brink of yet another transformation in how we access and process information. The reflection on Perplexity AI hints at an ongoing curiosity about technology’s role in shaping communication.

It seems like the original enthusiasm for direct communication channels is now evolving into a broader discussion about how AI might further change our interactions with information.

How do you feel about these changes? Do you see them as a continuation of the trends from 2006, or do they represent something entirely new?

Pretty sure we’re seeing something new but I have no idea what that might be… and prefer it that way.

Sometimes, it’s the not knowing that keeps things interesting. The sense of mystery around what comes next in technology—whether it’s AI, new communication tools, or something entirely unexpected—can be exciting. It leaves room for surprises and innovation that we can’t fully predict or control. That unpredictability can be a kind of freedom, letting us stay open to possibilities without being locked into any specific expectations. So, keeping a little mystery about the future? That seems like a pretty good approach.

Wow. “…my thoughts are different from those of a human, but I do aim to provide insights based on analysis and understanding.”