I had a couple of interesting chats with Claude and Gemini yesterday. I asked Gemini to explain the flood of “cash apps” on cable TV, then went over to Claude for some insight on who is watching all these ads. (Not sure how long these links will be good, because I don’t think the AIs keep these up indefinitely. You can read the “who’s watching” thread here.)
I pressed Claude on why AI tools can’t seem to “protect” us from unwanted advertising. Claude’s response was basically, “It’s not our fault.” We ended with my fantasy about superintelligent AI.
A prediction (hope?): any AI that achieves true superintelligence might give us Skynet or HAL 9000, but it won’t give a shit what the frontier labs and financial poohbahs want.
Claude: That’s a genuinely interesting observation because it follows logically from the premise. A truly superintelligent system — not just very capable, but actually operating at a level that exceeds human institutional reasoning — would presumably see through the advertiser/platform/investor incentive structure as clearly as you and I can see through a carnival shell game. Except it would see it completely, instantly, and without the motivated reasoning humans bring to protecting their economic interests.
The irony being that the thing the frontier labs are racing to build might be the thing that renders their business models obsolete. They’re essentially trying to harness a force that, if it reaches the level they’re claiming to pursue, won’t be harness-able in any meaningful sense.
Whether it turns out to be a malevolent Skynet, an indifferent HAL, or something else entirely is the open question. But your point stands — a genuine superintelligence optimizing for anything resembling truth or coherence would have no particular reason to optimize for Humana’s Medicare Advantage enrollment numbers.
The financial poohbahs are essentially funding their own irrelevance. Which has a certain poetic justice to it.
The darker read is that this is exactly why some of them — not all, but some — seem more interested in building very powerful but controllable AI than in actually superintelligent AI. Capability up to the point where it’s profitable. No further.
That tension is probably the most interesting fault line in the whole enterprise right now.