Building Self-Aware Robots

“I want to meet, in my lifetime, an alien species,” said Hod Lipson, a roboticist who runs the Creative Machines Lab at Columbia University. “I want to meet something that is intelligent and not human.” But instead of waiting for such beings to arrive, Lipson wants to build them himself—in the form of self-aware machines.

To that end, Lipson openly confronts a slippery concept—consciousness—that often feels verboten among his colleagues. “We used to refer to consciousness as ‘the C-word’ in robotics and AI circles, because we’re not allowed to touch that topic,” he said. “It’s too fluffy, nobody knows what it means, and we’re serious people so we’re not going to do that. But as far as I’m concerned, it’s almost one of the big unanswered questions, on par with origin of life and origin of the universe. What is sentience, creativity? What are emotions? We want to understand what it means to be human, but we also want to understand what it takes to create these things artificially. It’s time to address these questions head-on and not be shy about it.”

One of the basic building blocks of sentience or self-awareness, according to Lipson, is “self-simulation”: building up an internal representation of one’s body and how it moves in physical space, and then using that model to guide behavior. Lipson investigated artificial self-simulation as early as 2006, with a starfish-shaped robot that used evolutionary algorithms (and a few pre-loaded “hints about physics”) to teach itself how to flop forward on a tabletop. But the rise of modern artificial intelligence technology in 2012 (including convolutional neural networks and deep learning) “brought new wind into this whole research area,” he said.
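To make the idea concrete, here is a deliberately simplified sketch in Python. It is not Lipson's code, and it is far cruder than the starfish robot's evolved physics models; it only illustrates the loop the paragraph describes: act on the world, evolve a self-model that explains the recorded experience, then use that model (rather than the real body) to choose the next move. The "robot" here is a hypothetical one-joint toy whose true behavior is hidden from the learner.

```python
import random

# Toy illustration (not Lipson's actual system): a "robot" whose true forward
# displacement is an unknown function of a single joint command. The learner
# never sees this function; it only observes (command, displacement) pairs.

def true_robot(command):
    """Hidden physics of the toy robot (unknown to the learner)."""
    return 0.8 * command - 0.3 * command ** 2

# 1. Motor babbling: try a few random commands and record what happened.
random.seed(0)
experience = [(c, true_robot(c))
              for c in (random.uniform(-1, 1) for _ in range(8))]

# 2. Evolve a self-model -- here just two coefficients (a, b) in a*c + b*c^2 --
#    that best explains the recorded experience.
def model_error(params):
    a, b = params
    return sum((a * c + b * c * c - d) ** 2 for c, d in experience)

population = [(random.uniform(-2, 2), random.uniform(-2, 2)) for _ in range(50)]
for _ in range(200):
    population.sort(key=model_error)
    parents = population[:10]                      # keep the best models
    population = parents + [                       # mutate them to refill the pool
        (p[0] + random.gauss(0, 0.1), p[1] + random.gauss(0, 0.1))
        for p in random.choices(parents, k=40)
    ]
best_model = min(population, key=model_error)

# 3. Plan with the learned self-model, not the real robot: pick the command
#    the model predicts will move the body forward the farthest.
a, b = best_model
candidates = [i / 100 for i in range(-100, 101)]
best_command = max(candidates, key=lambda c: a * c + b * c * c)

print("learned self-model:", best_model)
print("command chosen from the model:", best_command)
print("displacement actually achieved:", true_robot(best_command))
```

The design point this toy preserves is the separation Lipson emphasizes: the evolutionary search acts on the robot's internal model of itself, and only the winning plan is executed on the (here simulated) body.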
