Daily Roundup: The Search for a Self

Blogger’s Note: written on Sunday; finished and posted on Wednesday, alas.

Lazy Sunday mornings are the best times to feel contemplative and intellectual. Also, they are good for the eating of pancakes, but since my efforts to that end were thwarted today, I’m going to have to take the intellectual high road.

And rather easy that is, too. Last night I read an interesting piece on how complex a brain would have to be to exhibit self-awareness, and the answer seems to be, “not very”. Neuroimaging studies have apparently shown that the sense of self and identity arises, or at least registers, in the cerebral cortex, the brain’s most recognizable, wrinkly exterior. If that were indeed the case, it would stand to reason that humans with severely damaged cortices would be unable to perceive themselves independently from others. However, experiments with subjects — both adults and children — showed that those with damaged cerebral cortices were still able to distinguish themselves from the rest of the world and recognize themselves in photographs and mirrors.

Ferris Jabr, author of this piece, is quick to point out the difference between self-awareness and consciousness. What I’ve described above is consciousness: the ability to perceive oneself. A more subtle step towards what I’ll call “personhood” is self-awareness, the ability to “realize you are a thinking being and think about your thoughts”, as Jabr puts it. Another way I’ve heard this described is metacognition, which is arguably a skill that dolphins and some apes share with humans (citation needed). For instance, if you were about to make a decision and then paused because you weren’t sure it was the right one, you were thinking about whether your choice was correct: that’s metacognition.

Self-awareness and consciousness are the tools we’ll need to imbue robots with, if we’re talking about “real” artificial intelligence. Many AI machines and programs today can do one task very well, or learn from their environment and infer the details of their situation. But so far they’ve been unable to recognize that they are separate entities in the world, and that their limbs and peripherals belong to them. I’m of that rather romantic camp (if there even is such a camp) which argues that AI isn’t “real” unless robots can interact with humans in a social context, as opposed to merely performing some probabilistic task: making recommendations, booking reservations, or answering questions posed in a more human format, much like the iPhone’s Siri. So when I read the news that Yale’s Nico robot could identify its limb in a mirror, I was a little excited.

First, though, I must dispel any notion that this is Skynet incarnate. As io9 points out (somewhat disgustedly), all the authors have done so far is get Nico to locate its arm in 3D space using its own reflection in a mirror. This is about as artificial as intelligence gets, because it doesn’t mean the robot has any sense of self-awareness — it has just learned to assign 3D spatial coordinates to its arm by “looking” at it. I’ll admit my excitement ultimately abated at this point.
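To make the geometry concrete (this is my own toy sketch, not anything from the paper, and every name in it is made up for illustration): if the robot knows where the mirror plane sits, recovering the arm’s true position from its reflection amounts to reflecting the observed point back across that plane.

```python
import numpy as np

def reflect_across_mirror(point, mirror_normal, mirror_offset):
    """Recover a true 3D position from its mirror image.

    The mirror plane is defined by mirror_normal . x = mirror_offset,
    with mirror_normal a unit vector.
    """
    n = np.asarray(mirror_normal, dtype=float)
    n /= np.linalg.norm(n)  # make sure the normal is unit length
    p = np.asarray(point, dtype=float)
    dist = np.dot(p, n) - mirror_offset  # signed distance to the plane
    return p - 2.0 * dist * n            # reflect back across the plane

# The arm as it appears "through" the mirror...
apparent = [0.5, 0.2, 1.8]
# ...with the mirror being the plane z = 1.0:
true_position = reflect_across_mirror(apparent, [0.0, 0.0, 1.0], 1.0)
print(true_position)  # [0.5 0.2 0.2]
```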

If you think about it, the really interesting problems are only beginning. A robot genuinely recognizing itself in a mirror still isn’t possible: for that to happen, I believe the robot would have to be given a command to activate something on its own body based on what it sees in the reflection. On second thought, even that might not prove anything, because the robot would be blindly following instructions to identify a point in space and then press it. How would a robot be able to tell that something on its body belongs to itself?

I’d love to be able to speak to one of the authors about the assumptions they made and the next steps we should be taking if we want a truly self-aware robot.

Flipping through the comments section on Slashdot reveals a few more nuances that I wish the writers of the science pieces had thought to explore.

“Knowing what part of the camera scene is moving because something is happening, and knowing what part of the scene is moving because you’re waving your end-effector is useful. If you can extract your own state from indicators in the environment, then you have more information to work with…” says commenter Kell Bengal. It would’ve been nice to verify with the authors whether this is really what happened.
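Here’s a minimal sketch of what I take Kell Bengal to mean, and I should stress this is my own guess at a mechanism, not anything from the Nico paper: correlate the robot’s motor commands with the motion observed in each region of the camera image, and the regions whose motion tracks the commands are probably “you”.

```python
import numpy as np

def self_motion_scores(command_signal, region_motion):
    """Score each image region by how strongly its observed motion
    correlates with the robot's own motor commands over time.

    command_signal: (T,) magnitude of commanded arm motion per frame
    region_motion:  (T, R) observed motion energy per frame per region
    Returns an (R,) array of correlations; a high score suggests the
    region is the robot's own body, a low score suggests "world".
    """
    c = command_signal - command_signal.mean()
    m = region_motion - region_motion.mean(axis=0)
    # Pearson correlation of each region's motion with the commands
    denom = np.linalg.norm(c) * np.linalg.norm(m, axis=0) + 1e-9
    return (c @ m) / denom

rng = np.random.default_rng(0)
commands = rng.random(200)                     # when the arm was told to move
arm_region = commands + 0.1 * rng.random(200)  # moves when commanded
background = rng.random(200)                   # moves on its own schedule
scores = self_motion_scores(commands,
                            np.stack([arm_region, background], axis=1))
print(scores)  # arm region scores near 1, background near 0
```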

Perhaps the “proper” test would be to get a robot to perform a task involving objects hidden from its direct sight but visible in the mirror. The robot would then have to move something around according to what the mirror shows it, recognize that the movement is its own, and modify its actions to suit the task at hand.
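Sketched as a control loop, purely hypothetically (the sensing and motion functions below are placeholders I’ve invented, not any real robot’s API): the robot would map what it sees in the reflection back into the world frame and keep nudging its hand until it reaches a target it can only see in the mirror.

```python
import numpy as np

def reflect(point, n, d):
    """Map a mirror-image point back to its true position
    (mirror plane: n . x = d, with n a unit normal)."""
    return point - 2.0 * (np.dot(point, n) - d) * n

def mirror_guided_reach(see_hand_in_mirror, move_hand_toward, goal,
                        mirror_normal, mirror_offset,
                        tolerance=0.01, max_steps=100):
    """Servo the hand toward a goal visible only via the mirror.

    see_hand_in_mirror and move_hand_toward are placeholder callables
    standing in for the robot's (hypothetical) vision and motor APIs.
    """
    n = np.asarray(mirror_normal, dtype=float)
    goal = np.asarray(goal, dtype=float)
    for _ in range(max_steps):
        apparent = np.asarray(see_hand_in_mirror())  # hand, as reflected
        hand = reflect(apparent, n, mirror_offset)   # hand, in the world
        error = goal - hand
        if np.linalg.norm(error) < tolerance:
            return True                              # close enough: done
        move_hand_toward(hand + 0.2 * error)         # take a small step
    return False                                     # gave up
```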
