Baxter the Robot Fixes Its Mistakes by Reading Your Mind

To tell when it's made a mistake, a charming robot reads your mind. Oh, good.

Baxter is but a child, with bright eyes and a subtle grin. It sits at a table and cautiously lifts a can of spray paint, then dangles it over a box marked “WIRE.” The error seems to smack Baxter across the face---its eyebrows furrow and a blush appears on its cheeks. It swings its arm to an adjacent box marked “PAINT” and drops in the can with a clunk and that spray-paint rattle.

“Good,” says a voice off-screen, as Baxter’s face reverts to a grin.

Baxter is, in fact, a robot, and an industrial one at that, with hulking arms meant for lifting much larger things than cans and wire. Its face is not flesh, but a screen. And its decisions are not entirely its own, but those of a human sitting across the table---a woman with electrodes strapped to her head. The setup detects a particular signal in her brain's electrical activity when she sees a mistake. In real time, the woman telepathically scolds Baxter for choosing the wrong box, and the robot corrects itself.

Researchers didn’t set out to embarrass an innocent machine, but to push further into the frontier of human-robot interaction, as they detail in a paper published online today. More and more, you’ll be interacting with machines: You’ll share hospital corridors with robots delivering food and medicine, and you could even fly a plane with your thoughts alone. For the time being, though, interacting with robots is crazy-awkward---they’re stilted and, well, robotic. The challenge now is socializing them.

Today, communicating with machines mostly means typing or vocalizing commands, which creates lag time. Letting Baxter read your mind takes milliseconds. “It's a new way of controlling the robot that I actually like to think of as being natural, in the sense that we aim to have the robot adapt to what the human would like to do,” says MIT roboticist Daniela Rus, a co-author on the study. Namely, don’t put the paint in the wrong box, dummy.

The underlying technology is shiny and new and complex, but the idea is straightforward. When you notice a mistake, your brain emits a faint signal, known in neuroscience as an error-related potential. That signal, though, is buried in all the other electrical chaos an EEG picks up, so machine learning algorithms have to sniff it out. When the woman sees Baxter reaching for the wrong box, the system translates the error-related potential in her brain into a command the robot understands.
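To make that concrete, here's a minimal sketch of what such a detector might look like. This is not the researchers' actual pipeline: it fabricates EEG-like data, band-pass filters a short window after each robot action, and trains a simple linear classifier to flag the error trials. The sampling rate, channel count, window length, and choice of classifier are all illustrative assumptions.

```python
# Toy sketch of error-related-potential detection (not the study's code).
# Assumptions: 256 Hz sampling, 8 channels, a 500 ms post-action window,
# a 1-10 Hz band of interest, and an LDA classifier -- all illustrative.
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

FS = 256          # sampling rate in Hz (assumed)
WINDOW = FS // 2  # classify a 500 ms window after each robot action
N_CH = 8          # number of EEG channels (assumed)

def bandpass(eeg, lo=1.0, hi=10.0, fs=FS):
    """Keep the slow 1-10 Hz band where error-related potentials live."""
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, eeg, axis=-1)

def make_trial(is_error, rng):
    """Synthetic trial: background noise, plus a small negative-then-positive
    deflection around 250-350 ms when the observer has seen a mistake."""
    eeg = rng.normal(0.0, 1.0, (N_CH, WINDOW))
    if is_error:
        t = np.arange(WINDOW) / FS
        deflection = (-np.exp(-((t - 0.25) ** 2) / 0.002)
                      + 0.8 * np.exp(-((t - 0.35) ** 2) / 0.002))
        eeg += 2.0 * deflection  # add the waveform to every channel
    return bandpass(eeg).ravel()  # flatten channels x time into features

rng = np.random.default_rng(0)
labels = rng.integers(0, 2, 200)  # 0 = correct action seen, 1 = error seen
trials = np.array([make_trial(bool(y), rng) for y in labels])

# Train on the first 150 trials, test on the remaining 50.
clf = LinearDiscriminantAnalysis().fit(trials[:150], labels[:150])
print("held-out accuracy:", clf.score(trials[150:], labels[150:]))

# In a closed loop, a predicted "1" would be sent to the robot within a few
# hundred milliseconds as a stop-and-correct signal.
```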

The human and machine are communicating at the most basic of levels---not speech but the electrical signals that precede speech. “The paper shows an interesting capability in terms of doing this in real time,” says Carnegie Mellon roboticist Aaron Steinfeld. The researchers' machine learning algorithms are powerful enough to sort the error-related potentials from the rest of the electrical noise and immediately turn them into something the robot can comprehend.

Now, you may have been hearing recently that a robot will one day steal your job. I can’t guarantee that’s untrue, but a world is coming in which robots work alongside humans. Imagine a robot assistant helping you assemble Ikea furniture. “The robot could actually be passing the human different pieces of the chair,” says roboticist and study co-author Stephanie Gil of MIT. “So maybe a chair leg or an arm rest. And the human is actually using his hands to put these different pieces together.”

But you shouldn’t have to constantly bark orders at your assistant, right? “We don't want to have to explicitly use verbal cues or a push of a button, something that's very unnatural for the human to communicate with the robot,” Gil adds. “We want this to be very natural and almost seamless.” And nothing is more seamless than a robot reading your mind.

This technology operates as a binary at the moment---Baxter only knows if it’s doing something wrong or something not wrong---but you can expect the range of communications to diversify as the technology matures. Detecting emotions, for example. “We're also very interested in the potential for using this idea in driving,” says Rus, “where you have passengers in an autonomous car and the passengers’ fears or brain signals---I mean this is getting futuristic---but the brain waves from the passengers get used by the car to adjust its own behavior.”

Backseat drivers, rejoice.