A NEW SCIENCE OF LEARNING IS IN THE OFFING, SAYS HHMI INVESTIGATOR AND SALK INSTITUTE RESEARCHER TERRENCE SEJNOWSKI. HE AND OTHERS ARE TAPPING THE DISCIPLINES OF PSYCHOLOGY, NEUROSCIENCE, AND MACHINE LEARNING TO CREATE INNOVATIVE WAYS TO ENGAGE EVEN THE YOUNGEST STUDENTS. ONE GOAL: ROBOTIC ASSISTANTS FOR TOMORROW'S TEACHERS.
We face a crisis in public education in America. In my state, California, class sizes are up, budgets are down, and there is no clear plan for improving learning as the system lurches on, leaving more and more kids behind.
Science can help. New research on human and machine learning is extending our knowledge of how kids and computers can collaborate. This work goes far beyond video games and keyboard or controller interfaces. I'm talking about something much more human—and far more complex to deliver—a social robot.
Research (and common sense) shows that the most effective teaching happens one-on-one. Such directed instruction can lift a student's performance from the middle quartile to the top quartile. Unfortunately, the current ratio of 15 students to 1 teacher in American schools doesn't support this level of interaction. There just aren't enough teachers—or enough money to pay for more.
But something strange and wonderful is happening: as teachers become rarer and more expensive, computers get cheaper and more powerful. We're on the verge of an era where inexpensive robotic teaching machines can augment classroom learning. Imagine Ms. Smith's science class with an Albert Einstein at each desk to help kids grasp the theory of relativity. Not a computer screen, but a fully interactive automaton that can talk, recognize facial expressions, anticipate needs, and learn from the student how to teach better.
What sounds like a Hollywood sci-fi flick is already being refined in the laboratory. Underlying the effort is a new science of learning that combines psychology, neuroscience, machine learning, and education, a concept I reviewed with colleagues Andrew Meltzoff and Patricia Kuhl of the University of Washington, and Javier Movellan of the University of California at San Diego, in the July 17, 2009, issue of Science.
We're getting a much better idea of how young children come to understand the world around them. They follow social and physical cues from adults and their peers, mirror behaviors, and empathically connect with those around them. But what comes as second nature to humans is enormously complicated to reproduce in a machine. And it depends on delivering the right cues at the right time.
One of the biggest challenges in designing social robots is getting children to interact with the machine as a peer. A robot has to respond to the child within a human interval, which turns out to be about two seconds.
Javier Movellan has built a social robot named RUBI and tests it daily in the classroom with 18- to 24-month-olds. At first, he found that toddlers wanted to treat it like a toy and pull its arms off. The solution? Install pressure sensors in the arms and program RUBI to “cry” when mishandled. Presented with this very human response, the children stopped tugging and hugged the machine instead.
But robots need to do much more than cry—they have to act human in key ways. One such golden behavior is shared attention. RUBI is programmed to follow a child's gaze at a third object. Combine that with facial recognition (RUBI smiles back when smiled at) and the little students become enthralled with their new acquaintance. Let the learning begin.
If you're thinking this is all implausible, consider that RUBI consists of just $500 worth of computer parts and motors. Her progeny will no doubt be cheaper to produce and more capable, but the real sticking point to future adoption of social robots in the classroom is humans. Teachers and unions are wary of mechanized aides, and it will likely be a long, slow pull to get the technology accepted by school boards dubious that a “droid” can help a human instructor.
School administrators will no doubt be pushed by parents, who tend to be intrepid, early adopters of technology that can help their children learn. But we need a better system to get innovations from the lab to the classroom faster. A National Science Foundation-sponsored group I codirect called The Temporal Dynamics of Learning Center is working to do just that, bringing together researchers, teachers, administrators, and policy makers to create a collaborative science of education.
The classroom of the future may look quite different with social robot assistants, but how do we get from here to there? We need an X Prize for education, like the $10 million competition now underway to build an automobile capable of achieving 100 miles per gallon. Let's award $1 million to the schools that deliver the best improvements in teaching reading, science, and math, and then $10 million to the school that is best able to scale those improvements up.
Terrence Sejnowski is director of the Computational Neurobiology Laboratory at the Salk Institute.
Interview by Randy Barrett
Photo: John Dole