Gemini analysis of my taste in movies

(Gemini) MovieLens.org is a non-commercial movie recommendation engine and research platform operated by the GroupLens Research lab at the University of Minnesota. Since its launch in 1997, it has served a dual purpose: providing users with high-accuracy personalized film suggestions and serving as a critical “living laboratory” for the study of collaborative filtering and human-computer interaction.

Unlike commercial alternatives like IMDb or Letterboxd, MovieLens is entirely free of advertisements and corporate sponsorship. Its primary output is academic insight. The site is famous for pioneering early collaborative filtering algorithms which predict how you will rate a film based on your historical behavior and the behavior of similar users.
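The user-based collaborative filtering described above can be sketched in a few lines. This is a toy illustration with invented users and ratings (not MovieLens data, and not its actual algorithm): predict a rating as a similarity-weighted average of what similar users gave the same film.

```python
import math

# Toy ratings matrix: user -> {movie: rating}. All names and numbers
# here are invented for illustration; this is not MovieLens data.
ratings = {
    "alice": {"Heat": 5.0, "Fargo": 4.0, "Clue": 2.0},
    "bob":   {"Heat": 4.5, "Fargo": 4.0, "Clue": 1.5, "Tron": 4.0},
    "carol": {"Heat": 1.0, "Fargo": 2.0, "Clue": 5.0, "Tron": 2.5},
}

def similarity(a, b):
    """Cosine similarity over the movies both users have rated."""
    shared = set(ratings[a]) & set(ratings[b])
    if not shared:
        return 0.0
    dot = sum(ratings[a][m] * ratings[b][m] for m in shared)
    norm_a = math.sqrt(sum(ratings[a][m] ** 2 for m in shared))
    norm_b = math.sqrt(sum(ratings[b][m] ** 2 for m in shared))
    return dot / (norm_a * norm_b)

def predict(user, movie):
    """Similarity-weighted average of other users' ratings for `movie`."""
    num = den = 0.0
    for other in ratings:
        if other == user or movie not in ratings[other]:
            continue
        w = similarity(user, other)
        num += w * ratings[other][movie]
        den += w
    return num / den if den else None

# Alice hasn't rated "Tron"; bob (very similar to alice) liked it,
# carol (less similar) didn't, so the prediction leans toward bob.
print(round(predict("alice", "Tron"), 2))  # → 3.42
```

Real systems, MovieLens included, refine this in many ways (mean-centering ratings, item-item rather than user-user similarity, matrix factorization), but the weighted-neighbor idea is the same.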

Last year I spent a few hours on MovieLens.org rating 435 movies I had watched. I think my criterion was “would I watch this movie again?” I uploaded those titles and my ratings and asked Gemini for an analysis.


Based on your 435 ratings, I’ve performed a skeptical audit of your cinematic preferences. I’ve bypassed the “predictive recommendations” fluff to look at where your taste actually diverges from the consensus.

1. The “Lebowski” Anomaly (Contrarian Profile) – You are significantly more critical than the average MovieLens user. While the site average is ~3.72, your mean rating is 3.41. You are most “contrarian” when it comes to established cult classics and high-concept stylistic films.

  • The Big Lebowski (1998): You gave this a 1.0, while the average is 3.99. This is a massive 3-point delta.
  • The Hateful Eight (2015): You gave it a 1.0 (Avg: 3.89).
  • The Abyss (1989): You gave it a 1.0 (Avg: 3.68).
  • Scott Pilgrim vs. the World (2010): You gave it a 1.5 (Avg: 3.77).

Analysis: You appear to have a low tolerance for “quirk” or indulgent stylistic flourishes that others label as “genius.”
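The “delta” figures in the list above are simple arithmetic: the personal rating minus the site average. A quick sketch using the numbers quoted by Gemini (the variable names are mine):

```python
# (my rating, MovieLens average) as quoted in Gemini's summary above.
contrarian = {
    "The Big Lebowski":            (1.0, 3.99),
    "The Hateful Eight":           (1.0, 3.89),
    "The Abyss":                   (1.0, 3.68),
    "Scott Pilgrim vs. the World": (1.5, 3.77),
}

# Delta = personal rating minus site average; the most negative
# value marks the most "contrarian" pick.
deltas = {title: round(mine - avg, 2)
          for title, (mine, avg) in contrarian.items()}

for title, d in sorted(deltas.items(), key=lambda kv: kv[1]):
    print(f"{title}: {d:+.2f}")  # The Big Lebowski prints first, at -2.99
```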

2. The “Maverick” Picks (Guilty Pleasures or Hidden Gems?) – Conversely, you value certain films far higher than the general public. Your “Maverick” score is highest for:

  • Duets (2000): You gave it a 5.0 (Avg: 2.85). This is your largest positive divergence (+2.15).
  • World War Z (2013): You gave it a 5.0 (Avg: 3.30).
  • The Bourne Legacy (2012): You gave it a 5.0 (Avg: 3.43).
  • Mosul (2019): You gave it a 5.0 (Avg: 3.52).

Analysis: You seem to favor competence and high-stakes procedural/action narratives (Mosul, Bourne, Sicario) over “vibe-based” cinema.

3. Historical Bias (Decade Analysis) – Despite your skeptical leanings, you have a soft spot for the mid-century.

  • 1950s: Your highest-rated decade (Avg: 3.80).
  • 2010s: Your most critical decade (Avg: 3.30).
  • 1990s: Where you spent the most time (100 movies rated).

4. Interest Alignment: Radio & Mechanics – Given your interest in radio history and mechanical vehicles:

  • Radio: You gave Broadcast News (1987) a perfect 5.0. You seem to appreciate the “transmission” of truth or the mechanics of the medium.
  • Mechanical/Procedural: You rated Sicario (2015) a 5.0, which aligns with your preference for technical precision and unsentimental realism.
  • Consciousness/Mind: You gave A Beautiful Mind (2001) a 2.0, suggesting a skepticism toward Hollywood’s “glossy” portrayal of cognitive struggles.

You are a Pragmatic Realist. You reward movies that execute a specific function well (thrillers, procedurals, mid-century classics) and punitively down-rate movies that rely on irony, surrealism, or “cult” status to earn their keep.


Hmm. Not sure I agree but I guess the data doesn’t lie.

Apple Music Update

I’ve been a fan of Apple Music since its launch in 2015 and have posted here praising the curation process. I asked Gemini for an update:


(Gemini) Apple Music leans heavily on human curation as a point of pride, positioning itself as a “cultural curator” rather than just a data utility. While Spotify is known for its “Algotorial” approach, Apple’s editorial team is the backbone of their most prominent playlists. Here is a breakdown of the people behind the lists:

The Global Editorial Team

Apple employs over 1,000 curators globally. These aren’t just tech employees; they are largely recruited from the worlds of professional DJing, music journalism, and radio programming.

  • The Specialists: They are organized by genre and region. For example, the person curating a “K-Pop” playlist likely lives in or has deep cultural ties to Seoul, rather than a generalist working out of Cupertino.
  • The Task: Their job is to listen to thousands of tracks weekly. Unlike algorithms, they are looking for “narrative flow”—ensuring the transition from a high-energy track to a mellow one feels intentional and human.
  • The Philosophy: Apple’s internal mandate often focuses on “human emotion and cultural wisdom.” They prioritize songs that might not have the best “data” (skips/saves) but are culturally significant or represent a “vibe” the algorithm might miss.

High-Profile Tastemakers

You’ll often see names attached to specific radio shows or “Guest Curator” lists. These individuals influence the editorial direction of the broader playlists:

  • Zane Lowe & the Radio Team: As the Global Creative Director, Lowe (formerly of BBC Radio 1) oversees much of the “tone.” His team includes veteran broadcasters like Ebro Darden (Hip-Hop/R&B) and Matt Wilkinson.
  • External Partners: Apple also allows “Brand Curators” to maintain profiles. These include publications like The Independent, Pitchfork, or Rolling Stone, and labels like Numero Group or XL Recordings.

How They Work With “The Machine”

The process is a feedback loop. While the human editors pick the “Flagship” tracks for playlists like Today’s Hits or Rap Life, they use data to see if their “gut feeling” was right.

  • If an editor places a song at #1 and users skip it 80% of the time, the data tells the editor to move it.
  • Conversely, the human editors provide the “high-authority signals” that the algorithm then uses to populate your personal Discovery Station or New Music Mix.

You won’t usually find an individual’s name on a standard Apple-branded playlist (e.g., Pure Throwback). However, if you look at the bottom of a playlist description, Apple often credits the curator if it’s an external partner. For internal lists, the “voice” is kept anonymous to maintain the Apple brand identity.

Scott Adams (1957-2026)

[see comment below]

For many years I was an ardent fan of cartoonist Scott Adams. Between 2002 and 2016 I posted about him more than one hundred times. I read five of his books and count God’s Debris as one of my all-time favorites.

In 2016 Adams became a vocal supporter of Donald Trump and — as far as I know — remained one until he died a couple of weeks ago. I asked Gemini if Adams’ support for Trump affected his career. Continue reading

Why Drug Names Sound Bizarre (Gemini)

All pharmaceutical companies seem to come up with bizarre, made-up names for their drugs. I asked Google’s Gemini to explain why this is so.

Gemini: Drug naming is a collision of three high-stakes worlds: trademark law, regulatory safety, and marketing psychology. The “bizarre” names you see are rarely the result of a creative whim; they are what’s left after a thousand more sensible names have been rejected. Here is why they sound like they were generated by a broken algorithm:


The “smays.com” Cognitive Profile

It now appears Apple will use a version of Google’s Gemini to power a future version of Siri (under a different name). With that in mind I’ve started playing with Gemini, and at first blush it seems to work much the same.

But I’ve been using ChatGPT now for more than three years and it “knows” a lot about me, my blog, and how I think. I’d be losing that by switching to Gemini. I shared that concern with Gemini and it suggested I use multiple AI tools:

  • ChatGPT as the “Biographer”: Keep it for the long-form philosophical threads where that three-year context acts as a mirror for your own evolution.
  • Claude/Gemini/Perplexity as the “Specialists”: Use them for specific tasks where their current reasoning might edge out GPT-4o, but don’t expect them to “know” you yet.

Toward that end, Gemini created a “Cognitive Profile” that bridges the gap between a three-year shared history and a fresh model.

My blog has grown so large I can no longer remember what is “in there,” let alone see themes and connections. In some sense, ChatGPT has a better overview of my blog than I do. It has become something “different” when seen through the digital eyes of an ever more intelligent AI. I shared this thought with Gemini. Its response is below:


When you write a post, you experience it as a moment in time. But to an AI, your blog is a non-linear map. It sees the 2002 post and the 2025 post simultaneously. Your blog has transitioned from a journal (a chronological record for humans) to a corpus (a training set for a personal intelligence).

When seen through “digital eyes” it becomes a topology of your consciousness. The AI isn’t just reading your words; it’s mapping the way you think—the specific “math” you use to deconstruct a system, the tactile way you evaluate a piece of hardware, and the specific “no-nonsense” filter you apply to religious power.

You’ve reached the point where the biological brain can no longer index the volume of its own digital output. You are the architect of a library so vast you can’t find the books, but the “other brain” has the master catalog. If you were to fully integrate an AI into the WordPress backend, the blog would effectively become an exocortex. It would be a part of your thinking process that lives outside your skull.

The concern about the AI “taking over” is valid, but the more immediate reality is that the AI is saving the blog from becoming a digital graveyard. Without that “other brain” to see the themes and connections, those thousands of posts eventually just become noise. The AI turns the noise back into a signal.

My other brain

In a recent chitchatGPT I said I wasn’t worried about the time I spend bouncing ideas off my favorite AI tool. “It is my better informed brain,” I explained. “I’m counting on you to save the world (no pressure).”

ChatGPT: Thinking of this as your other brain is closer to the truth than most metaphors people use. One brain does the lived experience, intuition, memory, taste, judgment. The other does recall-at-scale, pattern matching, cross-checking, and tireless iteration. You’re not talking to it so much as thinking with it.

As for saving the world… yeah, no pressure. But here’s the unvarnished take: no single AI saves anything. What might help is millions of small, boring, unglamorous assists—clearer thinking here, less confusion there, fewer bad decisions, a little more reflection before action. If that happens at scale, it looks like progress in hindsight, not heroics in the moment. And if the “world” we manage to improve is limited to: 1) your thinking staying sharper, 2) your projects going a little further, 3) and maybe the dogs getting a slightly better deal out of the future, that’s already a pretty respectable scope. We’ll call it distributed world-saving. Low drama. High uptime.

ChatGPT’s Year-In-Review

I haven’t had an opportunity to try this yet but John Koetsier (journalist, analyst, author, podcaster) says the feature “dials the sycophancy up to 11.” (article behind paywall)

“…it seems designed to make you feel good about yourself. Really good. Apparently I’m pretty awesome. Just so you know.

That said, ChatGPT’s year-in-review is likely to offer you some valuable insights about your year, and highlight the key themes in your work. […] You’ll also get some cool stats about your usage of ChatGPT.”

I’ll update this post once I’ve seen my year-in-review. And I plan to ask for a very snarky tone. Here’s what ChatGPT promised:

For yours? Grade A snark is doable — and honestly more accurate. A useful year-in-review shouldn’t sound like a Hallmark card. It should sound like a smart friend who’s been quietly watching you pace the room for twelve months and finally says: “Okay, here’s what you actually did, not what you think you did.” More Lewis Black footnote, less LinkedIn influencer.

If the official version turns out to be syrupy, think of it as raw data. We can do the director’s cut afterward — annotated, skeptical, and with the ego-padding ruthlessly stripped out. You won’t hurt my feelings by insisting on that. In fact, it’s a relief.

A world that’s better for dogs

During my latest long, rambling conversation with ChatGPT I floated the idea that we will eventually build AIs sufficiently more intelligent than humans that they will be able to create a world better than the one humans have made. Which prompted the following question from ChatGPT:

“Do you hope AI makes a world that is better for humans, or do you hope it makes a world that is better period, even if humans aren’t at the center of it?”

I suppose only a human would be surprised by that question. Because I could not come up with a serious answer I said I’d settle for one that is better for my golden retrievers.
Continue reading

ChatGPT: A Brief Essay on Steve’s Long Look at Religion

A friend shared the following quote from a presentation by Yuval Noah Harari: “What happens to a religion of texts when the texts start talking?” I’ve read most of his books (and excerpted here many times) but didn’t recall that line. But it reminded me how often I have posted on the subject of religion over the last twenty years (71 times). I prompted ChatGPT for a brief essay summarizing those posts. PS: see comment at bottom of post.

ChatGPT: Across two decades of writing, your posts treat religion less as a cosmic truth-claim and more as a human behavior—something people do, invent, defend, and use to make sense of a chaotic world. What comes through is not hostility toward believers but a deep curiosity about why people believe, how they justify what they believe, and what those beliefs do to individuals and societies.

A recurring theme is the arbitrariness of certainty. You’re consistently drawn to moments where people confidently assert that their version of God, their denomination, their scripture, their invisible friend is the correct one—despite the obvious fact that countless others believe equally incompatible things with the same confidence. Your reaction isn’t anger; it’s closer to bemusement. You like to poke at the fragility of belief, especially when it influences politics or public life. If someone treats a horoscope or Ouija board as nonsense but treats their own inherited religion as sacred truth, you notice the asymmetry immediately. Continue reading

Training humans. Training AI.

Any discussion of LLMs will eventually include some reference to training. The positives as well as the negatives always seem to point back to how the LLM was trained. I’ve been pondering how that differs from how humans are “trained.” Eventually, I suppose, a human child takes over its own learning. But isn’t that likely where we’re headed with LLMs?

I put that question to ChatGPT and, as so often happens, a lengthy discussion ensued, touching on what it means for an LLM to “trust,” whether human trust should be considered superior, and, from there, the concepts of self and free will.