From Bacteria to Bach and Back

From Bacteria to Bach and Back: The Evolution of Minds, by Daniel C. Dennett. The book’s cover teased me with “How did we come to have minds?” The author dragged me through 300 pages of “groundwork” before providing anything I could recognize as an answer. But I took notes (below), if underlining counts as taking notes. And here’s a review by Thomas Nagel. And a 45-minute audio interview at The Big Think.

The immaterial mind, the conscious thinking thing that we know intimately through introspection, is somehow in communication with the material brain, which provides all the input but none of the understanding or experience.

Can there be reasons without a reasoner, designs without a designer? (Dennett says yes)

A central feature of human interaction, and one of the features unique to our species, is the activity of asking others to explain themselves, to justify their choices and actions, and then judging, endorsing, rebutting their answers, in recursive rounds of “why?”

Natural selection doesn’t have a mind, doesn’t itself have reasons. […] For instance, there are reasons why termite colonies have the features they do, but the termites do not have or represent reasons, and their excellent designs are not products of an intelligent designer.

Turing showed that it was possible to design mindless machines that were Absolutely Ignorant, but that could do arithmetic perfectly. […] He foresaw that there was a traversable path from Absolute Ignorance to Artificial Intelligence. […] Both Darwin and Turing claim to have discovered something truly unsettling to a human mind — competence without comprehension.
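A toy way to see “competence without comprehension” in action: the little routine below (my sketch, not Dennett’s or Turing’s) adds two numbers using nothing but blind bit-level rules, XOR, AND, and a shift. No step in it represents “number” or “addition,” yet the arithmetic comes out perfect.

```python
# Competence without comprehension: addition performed by nothing but
# blind local rules. Each pass just shuffles bits until the carries
# settle; nowhere does the procedure "know" what a number is.

def mindless_add(a: int, b: int) -> int:
    """Add two non-negative integers using only XOR, AND, and shifts."""
    while b:                # repeat until no carry remains
        carry = a & b       # positions where both bits are 1
        a = a ^ b           # sum of the bits, ignoring carries
        b = carry << 1      # the carry moves one position left
    return a

assert mindless_add(19, 23) == 42
```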

Why and how did human-style comprehension arrive on the scene?

Ontology – the set of “things” a person believes to exist.

Comprehension is an emergent effect of systems of uncomprehending competence.

What is consciousness for (if anything), if unconscious processes are fully competent to perform all the cognitive operations of perception and control?

Information is always relative to what the receiver already knows.

If DNA can convey information about how to build a nest without any terms for “build” and “nest,” why couldn’t a nervous system do something equally inscrutable?

Intentional mind-clearing, jettisoning information or habits that endanger one’s welfare, is not an unusual phenomenon, sometimes called unlearning. […] The brain’s job in perception is to filter out, discard, and ignore all but the noteworthy features of the flux of energy striking one’s sensory organs.

One of Darwin’s most important contributions to thought was his denial of essentialism, the ancient philosophical doctrine that claimed for each type of thing, each natural kind, there is an essence, a set of necessary and sufficient properties for being that kind of thing.

Children learn about seven words a day, on average, from birth to age six.
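(A quick back-of-the-envelope check: 7 words a day for six years is roughly 7 × 365 × 6 ≈ 15,000 words, which lands in the right neighborhood of standard estimates of a six-year-old’s vocabulary.)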

Understanding a word is not the same as having acquired a definition of it.

Words don’t exist, strictly speaking. They have no mass, no energy, no chemical composition.

Memes are transmitted perceptually, not genetically.

Words are memes that can be pronounced.

“In terms of the brain, we know that concepts are somehow stored there, but we have little idea of exactly how.”

The acquisition of a language — and of memes more generally — is very much like the installation of a predesigned software app of considerable power, like Adobe Photoshop, a tool for professionals with many layers that most amateur users never encounter.

We may “know things” in one part of our brain that cannot be accessed by other parts of the brain when needed. The practice of talking to yourself creates new channels for communication that may, on occasion, tease the hidden knowledge into the open.

Nature makes heavy use of the Need to Know principle, and designs highly successful, adept, even cunning creatures who have no idea what they are doing or why.

Our thinking is enabled by the installation of a virtual machine made of virtual machines made of virtual machines.
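To make the layering concrete, here is a toy sketch (mine, not Dennett’s): Python, itself a virtual machine running on physical hardware, hosts a tiny stack machine, and a small “compiler” targets that machine in turn. Each layer follows only its own local rules; the meaning of the computation lives a level up.

```python
# A toy "virtual machine made of virtual machines": Python (a VM on
# hardware) hosts a tiny stack machine, which runs programs produced
# by a higher layer still. Illustrative sketch only.

def run(program):
    """A minimal stack machine: each instruction is a blind rule."""
    stack = []
    for op, *args in program:
        if op == "push":
            stack.append(args[0])
        elif op == "add":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
    return stack

def compile_sum(numbers):
    """A layer above: translate 'sum these' down to VM instructions."""
    code = [("push", n) for n in numbers]
    code += [("add",)] * (len(numbers) - 1)
    return code

print(run(compile_sum([1, 2, 3, 4])))  # [10]
```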

We learn about others from hearing or reading what they say to us, and that’s how we learn about ourselves as well.

“We speak not only to tell others what we think, but to tell ourselves what we think.” — John Hughlings Jackson

Bare meanings, with no words yet attached, (can) occupy our attention in consciousness.

Evolution has given us a gift (the mind?) that sacrifices literal truth for utility.

(The mind is) that thinking thing with which you are so intimately acquainted that it is hardly distinguishable from you, yourself. No wonder we are reluctant to see it as illusory; if it is illusory, so are we!

If free will is an illusion, then so are (we).

Human consciousness is unlike all other varieties of animal consciousness in that it is a product in large part of cultural evolution, which installs a bounty of words and many other thinking tools in our brains, creating thereby a cognitive architecture unlike the “bottom-up” minds of animals. By supplying our minds with systems of representations, this architecture furnishes each of us with a perspective—a user-illusion—from which we have a limited, biased access to the workings of our brains, which we involuntarily misinterpret as a rendering of both the world’s external properties (colors, aromas, sounds, …) and many of our own internal responses (expectations satisfied, desires identified, etc.).

Deep learning will not give us — in the next fifty years — anything like the “superhuman intelligence” that has attracted so much alarmed attention recently. […] I have always affirmed that “strong AI” is “possible in principle” — but I viewed it as a negligible practical possibility, because it would cost too much and not give us anything we really needed.

The real danger, I think, is not that machines more intelligent than we are will usurp our roles as captains of our destinies, but that we will over-estimate the comprehension of our latest thinking tools, prematurely ceding authority to them far beyond their competence.

When you are interacting with a computer, you should know you are interacting with a computer. Systems that deliberately conceal their shortcuts and gaps of incompetence should be deemed fraudulent, and their creators should go to jail for committing the crime of creating or using an artificial intelligence that impersonates a human being.