
Anthropocentrism and Scientific Progress

Short answer: they don’t go together.

Think of Galileo battling the Vatican, of Darwin's critics, both in the 19th century and (unbelievably) today. It was the thought I had while reading The Atlantic's A Perfect and Beautiful Machine, written to commemorate the 100th birthday of Alan Turing.

Turing's contribution to science deserves multiple books by itself (and, just for the record, the Google Doodle marking his centenary was pretty fabulous, if difficult to understand at first), but it was the tone of the article that particularly struck me yesterday. This quote sums it up:

The very idea that mindless mechanicity can generate human-level — or divine level! — competence strikes many as philistine, repugnant, an insult to our minds, and the mind of God.

The reason robotics and the very idea of artificial intelligence strike fear and disgust into people's minds is the underlying assumption that human thought can be reduced to mechanical, and therefore mindless, principles. It is the same reason, Dennett explains, that some of us automatically reject the idea of evolution. Can the same process that produced the earthworm also produce Shakespeare?
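
As a concrete aside (my illustration, not Dennett's), that kind of mindless mechanicity is easy to demonstrate in a few lines of code. The sketch below is a toy version of Dawkins' well-known "weasel program": blind mutation plus cumulative selection, with no understanding anywhere in the loop, reliably converges on a line of Shakespeare. The target string, mutation rate, and population size are arbitrary choices for the demo.

```python
import random
import string

# Toy "weasel" demonstration: blind mutation plus cumulative selection,
# with no understanding anywhere in the loop.
TARGET = "METHINKS IT IS LIKE A WEASEL"          # a line from Hamlet
ALPHABET = string.ascii_uppercase + " "

def mutate(parent, rate=0.05):
    """Copy the parent, flipping each character with a small probability."""
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in parent)

def score(candidate):
    """Count the characters that already match the target (the selection step)."""
    return sum(a == b for a, b in zip(candidate, TARGET))

parent = "".join(random.choice(ALPHABET) for _ in TARGET)   # start from pure noise
generation = 0
while parent != TARGET:
    offspring = [mutate(parent) for _ in range(100)]
    parent = max(offspring + [parent], key=score)           # keep the best so far
    generation += 1

print(f"Reached '{TARGET}' in {generation} generations.")
```

Keeping the parent in the selection pool means the score never decreases, which is the "cumulative" part doing all the work; the mutation step itself remains as mindless as it gets.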

The problem, I think, is that we have a very human-based view of the universe and our place in it. Which is both obvious and understandable; we're human, it's what we do. But a human scale of time, measured in a century if that, and a geological scale of time are incompatible in some respects: we simply don't grasp how much time evolution takes. We also don't appreciate, on a scale of space, how vast our own galaxy is, or how many light-years (some 2.5 million of them) lie between us and our nearest spiral neighbor, Andromeda.

Similarly, we don't appreciate the tightly interconnected, intuition-like steps of cognition that carry us through the most mundane tasks. We are content to assume that human thought is special and significant in some way. But the genius we admire, the non-linear thinking and aesthetics of poetry and sculpture, or the form-function perfection of architectural feats, is still a very human admiration for a very human set of accomplishments.

Perhaps we could make the case that, while evolution is an extremely well-supported theory and practically a law of nature, computing and artificial intelligence are simply another approach to reality, merely one way of making scientific progress.

That is: an anthropocentric view of life is a genuine impediment to understanding and accepting evolution, which concerns all life; in the field of computing, by contrast, the worst that same view could have done is keep smartphones out of our lives.

Turing said this:

It is possible to invent a single machine which can be used to compute any computable sequence.

Here's where I go out on a limb and say that Turing's idea, (simplistically) that anything computable can be computed by one and the same machine, is a subset of a greater idea that might have deeper scientific significance, something that could be damaged irretrievably by our anthropocentric view. The point I'm trying to get at is that Turing thought of human activity as reducible to computable components, and therefore to things a machine could reproduce. A large part of our “intelligence” is now replicable by machines.
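
To make the quote concrete (this is my sketch, not Turing's own construction), the "single machine" can be read as one fixed interpreter loop that will run whatever rule table you hand it. The toy simulator below increments a binary number; swap in a different table and the very same loop computes something else entirely. The state names, tape encoding, and the `increment` table are all invented for the example.

```python
# One fixed interpreter loop: the "machine" being run is just the rule table.
# Rules map (state, symbol) -> (symbol to write, head move, next state).
def run(tape, rules, state="start", blank="_", max_steps=10_000):
    cells = dict(enumerate(tape))
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        write, move, state = rules[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells)).strip(blank)

# Example table: add 1 to a binary number (head starts on the leftmost bit).
increment = {
    ("start", "0"): ("0", "R", "start"),
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("_", "L", "carry"),   # ran off the right end; walk back
    ("carry", "1"): ("0", "L", "carry"),   # 1 + carry = 0, carry continues
    ("carry", "0"): ("1", "L", "halt"),    # 0 + carry = 1, done
    ("carry", "_"): ("1", "L", "halt"),    # overflow: new leading 1
}

print(run("1011", increment))  # -> 1100
```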

The greater idea I’m thinking of is intelligence. What is it? Does only human intelligence qualify? Do individual components of a system — like humans in society — qualify as intelligent, or does the society itself exhibit “intelligent” properties? Do animals have their own systems of intelligence?

The ideas I'm playing with here are, to some extent, far-fetched. Possibly insane. But with so much recent astronomical work on habitable planets, planetary configurations, and the possibility of water on Mars, I honestly think we've begun, quietly, without any fanfare, almost unconsciously, to take the idea of alien life seriously.

The problem is: will we recognize it if we see it?

I'm fully cognizant of the importance of DNA as a building block of life and of carbon in life-forms as we know them, but that is a very biological point of view to have. Not that I think we can or should be doing anything else; if we're remotely mining our neighbor for signs of life, we really only have so much material to work with.

But what happens when we discover something unusual or unexpected? Stuck as we are on the idea of human intelligence, will we be able to recognize a non-human form of intelligence, of computability?

Let us, in fact, forget alien life for a while and focus on artificial intelligence here on Earth. We keep aiming for smarter machines and more intuitive objects, but will we be able to recognize when we've succeeded, when the only standard of intelligence we're looking at is human intelligence? Admittedly, there's no other kind of intelligence that's really relevant or that we even know of. And I'm certainly not talking about a Skynet-like self-consciousness. How will we know when machines have begun not merely to spew out canned phrases à la chatbots, but to actually respond?
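
For contrast, the chatbots of that era were mostly surface-level pattern matchers in the ELIZA tradition. A toy version (mine, and deliberately crude) shows why their output reads as spewing rather than responding: nothing in the loop depends on what the words mean, only on which regular expression happens to match.

```python
import random
import re

# A few ELIZA-style rules: match a surface pattern, echo a canned template.
RULES = [
    (r"\bI am (.+)", ["Why do you say you are {0}?", "How long have you been {0}?"]),
    (r"\bI feel (.+)", ["What makes you feel {0}?"]),
    (r"\bbecause (.+)", ["Is that really the reason?"]),
]
FALLBACKS = ["Tell me more.", "I see.", "Go on."]

def respond(utterance):
    """Pick the first rule whose pattern matches; no model of meaning anywhere."""
    for pattern, templates in RULES:
        match = re.search(pattern, utterance, re.IGNORECASE)
        if match:
            return random.choice(templates).format(match.group(1).rstrip(".!?"))
    return random.choice(FALLBACKS)

print(respond("I am worried about machine intelligence."))
# e.g. -> "Why do you say you are worried about machine intelligence?"
```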

[This post was definitely supposed to end about five paragraphs ago.]

My point is this: as we scale upwards toward larger and larger slices of the known universe, scale down toward the nano-world in search of self-assembly and interesting behavior, build smarter machines, and put the power of computation in nearly everyone's hands, our view of Nature and of life has to change. We're not the only players in this game.