
The Adorable Robot That’s Helping Deaf Children Communicate

A new robot-avatar combo is helping deaf children learn to communicate during a crucial time in their development.

Released on 12/05/2017

Transcript

[Narrator] You're looking at a subtly fascinating interaction. The robot attracts the infant's gaze, then directs that gaze to the screen. Now the child is engaged in doing something essential for a developing brain: communicating. It's the first glimpse at a clever new way to ensure that both deaf and hearing children build the communication skills they'll rely on their whole lives.

(computer music)

There's a bit of urgency when it comes to children and language. If kids aren't exposed to enough natural language early in life, for instance by being read to, they're significantly more likely to develop problems with language.

Scientists here at Gallaudet University, a school for the deaf and hard of hearing, can pinpoint which areas of the brain natural language activates. These are the same bits whether you're learning spoken language or sign language.

The brains of deaf people and hearing people are identical with regard to human sensitivities to language and the patterns that get pressed out onto the hands or the tongue. They're equivalent. So this learning tool is ideal for all children who have minimal experience with natural language.

[Narrator] What these researchers have figured out is that both deaf and hearing infants respond to the rhythmic patterns of natural language.

We decided, why don't we build a learning tool that will give the child those exact patterns?

[Narrator] The pivotal piece here is the robot. Sure, you can plop a kid down in front of a TV to expose them to language, but it's the interaction that really gets kids engaged with language. Ideally, that means talking and reading to your kid. But not every home can provide that kind of interaction. So this robot gets the child's attention, then looks over at the avatar. Meanwhile, a thermal camera watches for subtle changes in temperature, which are linked with heightened awareness. This is combined with face-tracking software to determine when the infant takes the robot's cue and engages with the avatar, which then signs a nursery rhyme.
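For readers curious how the pieces described above might fit together, here is a minimal sketch of that kind of engagement-detection loop. The transcript only says that thermal imaging and face tracking are combined to decide when the child takes the robot's cue; the function names, thresholds, and the trigger_avatar_rhyme hook below are all hypothetical stand-ins, not the researchers' actual system.

```python
# Hypothetical sketch of an engagement-detection loop (not the Gallaudet system).
# read_thermal_delta(), face_gaze_on_avatar(), and trigger_avatar_rhyme()
# are stub functions standing in for the real sensors and avatar control.

import time

TEMP_DELTA_THRESHOLD = 0.3   # assumed temperature rise (deg C) taken to signal heightened awareness
GAZE_HOLD_SECONDS = 1.5      # assumed time the infant must keep looking at the avatar


def read_thermal_delta() -> float:
    """Return the change in measured temperature since a baseline (stub)."""
    return 0.0


def face_gaze_on_avatar() -> bool:
    """Return True if face tracking reports the infant looking at the screen (stub)."""
    return False


def trigger_avatar_rhyme() -> None:
    """Tell the on-screen avatar to sign a nursery rhyme (stub)."""
    print("Avatar signs a nursery rhyme")


def engagement_loop() -> None:
    gaze_start = None
    while True:
        aroused = read_thermal_delta() >= TEMP_DELTA_THRESHOLD
        looking = face_gaze_on_avatar()

        if aroused and looking:
            # Start (or continue) timing how long the cue has been followed.
            gaze_start = gaze_start or time.monotonic()
            if time.monotonic() - gaze_start >= GAZE_HOLD_SECONDS:
                trigger_avatar_rhyme()   # the infant has taken the robot's cue
                gaze_start = None
        else:
            gaze_start = None

        time.sleep(0.1)  # poll the sensors at roughly 10 Hz
```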

It works, and not just for deaf children.

One of the most exciting pilot findings is that even a young hearing child will begin to sign back to the avatar, which means that we have hit on a learning tool that will be socially beneficial to all children.

[Narrator] This isn't meant to be a replacement for parenting, but for busy parents, it could one day keep infants engaged with language even when the adult is occupied elsewhere.

What's interesting from an engineering perspective is that this is supposed to be one of the things robots are bad at. Robots stink at reading our expressions, and are even worse at making their own expressions. But here we have a robot using body language to give instructions to a baby human. I mean, how could you say no to that face?