The Origins of Babble
             By Melissa Hendricks
       Illustration by Craig Terkowitz

WHOOSH. LUB. GURGLE. The womb envelops a
fetus in a symphony of sounds. The expectant
mother's heart drums. Her intestines burble.
And above the thrum flows the muted melody of
the mother's voice.

Thus begins a complex series of lessons on
language. Within months after birth, a child
learns to say, "ba!" or "Mama." In another
year or so, she is already demanding, "Cookie
me!" and "Hi, Daddy!" Language acquisition
seems to happen at lightning speed. The
challenge, says Johns Hopkins professor of
psychology Peter Jusczyk, "is explaining how
kids pick up language so rapidly."

When you stop to think about it, learning
language is an amazing accomplishment. Mommy
and Daddy do not sit down with Junior and
say, "Okay son, now, `doggie' is a noun, and
`run' is a verb. The proper order is
noun-verb. `Doggie runs.'"

But somehow Junior learns what the words
sound like, what they mean, how to order them
in a sentence, and how to make them agree
grammatically--quite often before his second
birthday.

Moreover, as the months pass, Junior also
learns that a word can mean one thing in one
context--"Bees like honey"--and something
entirely different in another--"Honey, give
me a smile." Eventually, he also absorbs the
overlayers of unspoken meaning conveyed
through tone or context; he can detect
sarcasm in a voice dripping with it, for
example, or unctuousness in a voice oozing
with sweetness.

How do babies do it?

Until about 30 years ago, language
researchers focused their studies on infants
who had already begun to babble, according to
Jusczyk, who has written a book on how
children acquire language titled The
Discovery of Spoken Language (The MIT Press,
1997). Babies start to vocalize at around
four months of age, and to babble in strings
of syllables at around six or seven months.

"Theories around at that time said that
infants perceived speech sounds by producing
them," says Jusczyk. In other words, by
listening to themselves babble, babies
learned to tell one sound from another. Mom,
Dad, or the babysitter would reinforce these
sounds by echoing and expanding on the baby's
utterances: "Baba! That's bottle."

Researchers, however, had not developed
methods of deciphering what went through a
baby's mind before baby uttered his first
"Ma" or "Papa." So Jusczyk and other
experimentalists devised techniques that
allow them to study the pre-babbler. They
have demonstrated that speech is the
culmination of a tremendous amount of
learning. Long before a baby utters his first
"baba," the researchers discovered, his mind
is furiously sorting out the sounds and
shapes of words and sentences.

Colleagues credit Jusczyk with being one of
the key experimentalists to bridge the gap
between the study of infant speech
perception and language development. "Peter
is the father of a lot of this work," says
Robin Cooper, an associate professor of
psychology at Virginia Polytechnic
Institute in Blacksburg, who studies infant
language acquisition.

Jusczyk, who arrived at Hopkins in fall
1996, after six years at the State
University of New York at Buffalo, says his
interest in the field grows partly out of a
lifelong love of language and literature.
He has a penchant for Eastern European
writers, Faulkner, and poetry. "I'm one of
the few people I know who buy poetry
books," he remarks.

In their decades-long search for the
universal truths about language
acquisition, Jusczyk and collaborators
around the world have found that at every
stage of development, babies know a lot
more than they'd been given credit for. The
very seeds of language learning, in fact,
start to develop in the womb.

Researchers cannot easily investigate
language perception in the womb, however.
So they study newborn babies' reactions to
sounds that mimic the muffled language that
penetrates the womb. In this technique,
newborn babies, sucking on a pacifier
attached to a pressure transducer, listen to
filtered recordings of a woman (the baby's
mother or another mother) speaking.
Filtering erases the crisp
edges of words, while leaving intact other
features such as rhythm, melody, pitch, and
intonation--similar to what a fetus hears
in the womb. "It's kind of like listening
to a stereo next door," says William Fifer,
an associate professor of developmental
psychobiology at Columbia University. "You
hear a lot of bass, but not the crisp,
clear high frequencies."

Using this technique, Fifer and his
colleagues found that newborns suck harder
on the pacifier when listening to filtered
recordings of their own mother's voice in
comparison to another mother's. The
newborns thus recognize and prefer their
own mother's voice, concludes Fifer.

In further studies, Jusczyk and postdoc
Thierry Nazzi found that newborns prefer
filtered recordings of their own native
language over that of a foreign language.
Babies like what they know, says Jusczyk.
Newborns, he says, apparently learn the
rhythm of their native language and of
their mother's voice while in the womb.

How does a baby then build onto this
rhythmic foundation?

When Jusczyk was an undergraduate at Brown
University in the 1960s, he and
psychologist Peter Eimas made a remarkable
discovery. Using the pacifier technique,
they found that one-month-old infants could
already distinguish between "pa" and "ba."
Subsequent studies by other investigators
revealed that babies could do this even at
birth.

"Babies come equipped with basic speech
perception capacities, as if it were
hardwired," Jusczyk concluded. "The ability
is part of our biological endowment.
Babbling is useful for learning how to
produce the sounds. But babies don't need
to babble before they can tell the
difference between sounds."

Jusczyk and Eimas's pa/ba findings, which
were published in Science in 1971, provided
some of the first experimental backing for
theories on the hardwired nature of language
proposed by noted MIT linguist Noam
Chomsky. Their research also opened the
doors to a renewed interest in infant
speech perception, and encouraged more
scientists to start exploring how language
develops before a baby starts to speak.

Further, psychologist Janet Werker and
colleagues at the University of British
Columbia demonstrated that babies can
distinguish between a large array of
phonetic differences, including ones that
are not part of the language spoken around
them. For example, an English Canadian
six-month-old knows that the hard English
"da" is different from the softer Hindi
"da," which is pronounced with the tongue
tip touching the back of the palate rather
than the upper teeth. Other researchers
showed that babies born to speakers of the
African language Kikuyu, which does not
contain the ba/pa distinction, can hear the
ba/pa difference, and that Japanese babies
can distinguish "ra" from "la," even though
Japanese speech does not distinguish between
the two sounds.

But scientists have found that even
chinchillas can distinguish between "pa"
and "ba," says Jusczyk. So, obviously, this
ability is not all that is required to
launch into language. Jusczyk believes that
the ability to make phonetic distinctions
is part of a broader auditory skill, which
is shared by humans, chinchillas, and other
mammals. The broader talent may include the
innate ability to distinguish different
musical tones, for example. (Jusczyk is
currently testing that hypothesis in human
infants; see "Arithmetic of the Soul.")

Whatever it is that chinchillas and humans
share, it is not long before a baby
advances beyond all other creatures.

During baby's first year, as words flow
across the folds and contours of gray
matter, their sounds and rhythms sculpt the
brain, and baby becomes sensitive to more
idiosyncrasies of her particular language.

To a nine-month-old baby, the contrast
between the English "da" and the Hindi "da"
is less noticeable than it was three months
earlier. By the time the baby is a year old,
she does not appear to notice the
difference, paying no more attention to one
than the other. "It's use it or lose it,"
remarks Jusczyk.

The ability to detect these phonetic
differences does not entirely disappear,
however. As an adult, an English speaker
can still hear the Hindi "da," just as a
Japanese speaker can learn to pronounce "ra"--but
the adult will have to work harder at it.

A BABY'S TASK OF LEARNING LANGUAGE would be
much easier if words were spoken in sharp,
discrete packets. "But we run words into
each other," says Jusczyk. Consider: The
ants are my friend. They're blowing in the
wind.

This headline, which accompanied a Buffalo
News article about Jusczyk's research,
illustrates the point. It takes a
sophisticated listener to hear that the
folk song is about an answer and not
airborne ants.

As anyone listening to an unfamiliar
foreign language or to certain lyrics
knows, it is difficult to tell where one
word ends and another begins.
But even children whose parents speak as
rapidly as Abbott and Costello learn to
pluck words out of this torrent of speech.

"How babies do this is a critical problem,
and was avoided in speech [acquisition]
research for many years," says Jusczyk. He
has since used an experimental technique
called the head-turn procedure to conduct a
series of studies on how infants locate
words in fluent speech.

Which is why, one day in between snack and
nap time, nine-month-old Alexandra Elliot
is sitting on her mother's lap in a dimly
lit booth on the Homewood campus listening
to Dutch. Dressed in her cutest outfit,
with a bluebird barrette holding back all
three strands of her strawberry blond hair,
Alexandra stares straight ahead, at a green
light on a wall in front of her. The light
starts flashing, as the experiment begins.

Seated at a computer behind this wall,
graduate student Derek Houston watches
Alexandra through a peephole. Both he and
Alexandra's mother, Donna Pitts, wear
earplugs and headphones that play loud
music to mask the Dutch recordings. The
researchers want Alexandra's reactions to
be her own, not to be influenced by the
data collector or by barely perceptible
movements from her mom.

Pendel, announces a hidden speaker behind a
pegboard wall on Alexandra's left. At the
same time, a red light on the wall starts
to flash.

Alexandra's eyes open wide and her mouth,
framed by chubby cheeks, makes an "O,"
revealing two bottom teeth. "Dada!" she
exclaims, and reaches toward the light.

Kusten, reports a hidden speaker to
Alexandra's right, while a red light on the
right wall flashes. Alexandra then stares
at this light.

Die pendel ligt op het bureau van mijn
oom... ("That lamp is lying on my uncle's
desk...") continues the recording. Several
other Dutch passages follow. Some contain
the words pendel or kusten, and some do
not.

At the computer, Houston records precisely
how long Alexandra stares at either of the
lights.

Later, Houston explains the experiment's
premise. Babies quickly come to associate
the flashing light with the sound, he says.
If a baby is
interested in the words she hears spoken,
she will look to see where they are coming
from. The more interested she is in the
words, the longer she will stare at a light
that flashes while the words play. Often,
the difference is slight: a baby will stare
at a light for six seconds while one
passage is played compared to seven seconds
while another is played. But even a
one-second difference, says Jusczyk, can be
statistically significant.
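
To see how such a small gap can still be statistically reliable, here is a minimal sketch, with invented looking times, of the kind of paired comparison involved (the lab's actual analysis is surely more involved):

```python
# A minimal sketch, with invented numbers, of testing whether infants'
# looking times differ between two passage types. Each entry is one
# hypothetical infant's mean looking time in seconds.
from scipy import stats

passage_a = [7.1, 6.8, 7.4, 6.5, 7.9, 6.2, 7.0, 6.9]  # e.g., passages with familiar words
passage_b = [6.0, 6.1, 6.3, 5.8, 6.7, 5.5, 6.2, 5.9]  # e.g., passages with novel words

# Every infant hears both passage types, so the samples are paired.
t_stat, p_value = stats.ttest_rel(passage_a, passage_b)

diffs = [a - b for a, b in zip(passage_a, passage_b)]
print(f"mean difference: {sum(diffs) / len(diffs):.2f} seconds")
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```

Because every infant contributes to both conditions, even a difference of about a second, if it is consistent across babies, can reach significance with a modest sample.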

Obviously, Alexandra does not know that
pendel means "hanging lamp," nor that
kusten means "coasts," says Houston, but
she can hear the prosody of these words.
"Prosody," says Jusczyk, "means rhythm,
melody, accentuation on syllables, and
intonation--the suprasegmental information"
contained in a word.

"The rhythmic properties of English are
such that English words usually (about 75
percent of the time) begin with a stressed
(or accented) syllable," says Houston.
Think of "bottle," "carrot," "baby,"
"pencil." Babies raised in an
English-speaking environment, the theory
goes, apparently recognize this acoustical
strong-weak pattern, and use it to pick out
words in the sea of babble--that is, to
segment speech.
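
As a toy illustration of that strategy (a sketch under strong simplifying assumptions, not a model the researchers describe), one can posit a word boundary before every stressed syllable:

```python
# Toy sketch of stress-based segmentation: posit a word boundary before
# every strong ("S") syllable. Input is hand-labeled (syllable, stress)
# pairs; a real infant, of course, gets neither transcriptions nor
# stress labels.

def segment_by_stress(syllables):
    words, current = [], []
    for syll, stress in syllables:
        if stress == "S" and current:  # a strong syllable opens a new word
            words.append("".join(current))
            current = []
        current.append(syll)
    if current:
        words.append("".join(current))
    return words

# Strong-weak words come out right...
print(segment_by_stress([("dog", "S"), ("gie", "W"), ("runs", "S")]))
# -> ['doggie', 'runs']

# ...but weak-strong words like "guitar" are missegmented, consistent
# with the infants' difficulty described below.
print(segment_by_stress([("the", "W"), ("gui", "W"), ("tar", "S")]))
# -> ['thegui', 'tar']
```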

Indeed, in earlier experiments, Houston,
Jusczyk, and their colleagues found that
American seven-and-a-half-month-old infants
listen longer to unfamiliar words that
follow this strong-weak pattern than they
do to unfamiliar words such as "beret" or
"guitar," which do not.

To add weight to their hypothesis, the
researchers then tested Dutch infants while
the babies listened to similar two-syllable
English words and passages containing those
words. Now the researchers are testing
American infants, like Alexandra, while
those babies listen to analogous recordings
of spoken Dutch. "Dutch and English have
very similar rhythmic properties," says
Houston. "If infants are relying mainly on
rhythmic properties to locate words in
fluent speech, it may not matter that it's
a different language, so long as the words
follow the typical stress pattern of
English," as pendel and kusten do.

If a word does not fit the strong-weak
pattern, English-speaking parents make it
conform to the pattern, adds Jusczyk:
"horse" becomes "horsie." "Dog" becomes
"doggie." It is as though parents
instinctively know that the strong-weak
pattern will help their baby learn the
outlines of a word. Not every word in
English obeys the strong-weak rule, but
finding words that abide by the rule may be
an entry point into the language.

But not all languages favor strong-weak
accented two-syllable words. Consider
chalet and touché and many other French
words. French is syllable-timed: both
syllables receive approximately equal
emphasis. "So how do French babies learn to
segment words?" asks Jusczyk.

The last syllable of many French words is
slightly longer than the first syllable
(think of château). "Maybe French babies
look for [a second] syllable that is
accented just a little bit longer," posits
Jusczyk. Jusczyk's wife, Ann-Marie, who is
laboratory coordinator for the infant
language studies, recently trained
researchers in France who are now
investigating this question. After France,
the Jusczyks hope to conduct similar
studies in countries where other languages
are spoken.

Of course, language is not just about
words. It also involves grammar and syntax.
Jusczyk believes that information such as
prosody can help babies learn the rules.

"When you grow up, your voice will change,
my mother told me" has distinct syntactic
units, says Jusczyk. "Children learn what
the right packages are." They learn that
"When you grow up" is a correct package,
and that "When you grow up your voice" is
an incorrect package.

In a series of experiments, Jusczyk's team
played recordings of speech in which the
scientists had inserted one-second pauses in
one of two places: at the boundaries between
clauses, or within the clauses themselves.
They then played the recordings for seven- to
ten-month-old infants. The researchers found
that babies listened longer to passages
containing one-second pauses between clauses
than passages containing pauses that
interrupted clauses, says Jusczyk.
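
To make the manipulation concrete, here is an illustrative sketch (not the lab's stimulus-preparation code; the real stimuli are audio recordings, and the sentence is borrowed from the example above):

```python
# Sketch of the two stimulus conditions: "[1s pause]" marks where a
# one-second silence would be spliced into the recording.

PAUSE = " [1s pause] "

clauses = ["When you grow up", "your voice will change"]

# Condition 1: pause at the clause boundary
between_clauses = PAUSE.join(clauses)

# Condition 2: pause inside a clause (after the third word, arbitrarily)
words = " ".join(clauses).split()
within_clause = " ".join(words[:3]) + PAUSE + " ".join(words[3:])

print(between_clauses)  # When you grow up [1s pause] your voice will change
print(within_clause)    # When you grow [1s pause] up your voice will change
```

The contrast is minimal by design: the same words and the same pause, with only the pause's location differing between conditions.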

"This line of work suggests that infants are
learning about cues that are going to help
them identify syntax," he concludes. "When
you get to the end of a clause, the pitch
drops, the syllable lengthens, and you tend
to pause." These features appear to signal a
shift from one phrase to the next, or from
one clause to the next.

Likewise, says Jusczyk, information in speech
helps language learners comprehend grammar.
Nouns, for instance, are often preceded by
"the" or "a." These words serve as cues, and
help the baby learn which words are nouns.

But how does a baby know she must hunt for a
noun in the first place?

"That is the $64,000 question," says Jusczyk.
"These are issues that are still very much
debated."

Some authorities argue that speech contains
all the information a baby needs to
understand language. Others, including
Chomsky and MIT cognitive neuroscientist
Steven Pinker, believe that this knowledge
is, to some degree, innate. According to this
view, babies are hardwired with an
understanding of nouns and verbs from birth.

Jusczyk falls somewhere in between. Babies
use information from speech to learn
language, he says. But they also have an
innate tendency to look for categories that
we call noun and verb. Babies do not know
that "dog" is a noun and "runs" is a verb,
but they appear to be innately inclined to
look for parts of speech that correspond to
objects and actions, he says.

Parents probably unconsciously guide babies
in learning the structures of words and
phrases, says Jusczyk. Parentese, or
child-directed speech, is the singsongy voice
that parents use when speaking to an infant.
Researchers find that moms and dads around
the world appear to use it, and babies, in
turn, appear to prefer parentese to
adult-directed speech.

Parentese highlights important features of
the language, Jusczyk hypothesizes. When
adults speak to adults, they often mark
syntactic boundaries by changes in prosody
(such as changes in pitch, syllable duration,
and pauses). When adults speak to babies,
they exaggerate these prosodic cues. Mother
says, "Look at the pretty babieeee." It is as
though the adult is erecting linguistic
signposts that indicate a) baby is a noun,
and b) the sentence is ending.

More research is needed, however, says
Hopkins cognitive scientist Michael Brent. He
recently recorded 200 hours of mothers
speaking to their babies. The eight mothers
who volunteered for the study wore
lightweight recorders tucked inside fanny
packs that recorded their speech as they went
about their daily routines and cared for
their children. Recordings were made every
two weeks from the time the babies were nine
months old until they were 15 months. In
addition, Brent periodically tested the
babies' language abilities.

Brent is now writing computer programs to
analyze his collection of child-directed
speech. "I'd like to see how often mothers
use isolated words, whether they are using
mostly nouns or verbs, how long is each
utterance, and what's the pitch."

Once he has the anthology, says Brent, he and
other researchers will be able to study how
mothers' child-directed speech changes over
time, as their children learn language. He
will also look to see how often mothers speak
single words to their children--for example,
just, "doggie" or "car"--without combining
those words with others. Brent hypothesizes
that babies first learn these single words,
and build upon them, in learning how to
segment speech.
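
As a minimal sketch of the kind of tally such a program might compute (the transcript format and the numbers are invented here, not Brent's actual data or code):

```python
# Count single-word utterances and mean utterance length in a
# transcript of child-directed speech (one utterance per line).
# Transcript contents are hypothetical.

def utterance_stats(lines):
    utterances = [line.split() for line in lines if line.strip()]
    isolated = sum(1 for u in utterances if len(u) == 1)
    mean_len = sum(len(u) for u in utterances) / len(utterances)
    return isolated / len(utterances), mean_len

transcript = [
    "doggie",                 # an isolated word
    "look at the doggie",
    "car",                    # an isolated word
    "where did the car go",
]

frac_isolated, mean_len = utterance_stats(transcript)
print(f"isolated-word utterances: {frac_isolated:.0%}")  # 50%
print(f"mean utterance length: {mean_len:.1f} words")    # 2.8 words
```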

AT SOME POINT, a baby does not just absorb
language but begins producing words. Baby's
vocabulary really takes off at around 18
months, says Jusczyk. During this vocabulary
explosion, some researchers believe, babies
learn to speak as many as nine new nouns per
day. Scientists have struggled to explain how
babies could learn words so quickly.

Jusczyk recently added a piece to the puzzle.
Long before they speak words or know their
meaning, babies appear to memorize the sounds
of words they hear frequently.

Jusczyk's team recorded a woman reading
children's stories, and played the stories
for eight-month-old babies once a day for 10
days in the babies' homes. The stories
included words unfamiliar to an
eight-month-old, such as peccaries, python,
and hornbill.

In the lab two weeks later, the babies
listened to recordings of lists of words that
occurred frequently in the stories. They also
listened to words that were similar to the
story words but had not been in the stories.

The babies preferred the lists containing
words they had heard in the stories, the
scientists reported in the September 26
Science. Clearly, the eight-month-olds didn't
know that a hornbill is a bird, or that a
python is a snake. But they did remember the
sounds of these words--a helpful starting
point for future word comprehension. "The
mind is wired to retain sound patterns, and
then you piece them together," concludes
Jusczyk. It is as though the child first
builds an acoustic frame, and then fills it
in with a semantic picture.

FROM THE RHYTHMS OF LANGUAGE that override
the whoosh of the womb to the complex rules
of grammar and syntax, the stages and timing
of language acquisition are being defined.

But the steps of language acquisition are
just details of the process, notes Brent.
"Generally, the state of the theory is that
none of us can explain how children learn
language. So far, it defies explanation.

"It would be like someone saying, `My
computer works because someone stuck a
central processing unit into it.'" But how
does the CPU work? Likewise, we can assume
that the concept of noun and verb is
genetically programmed into our brains, but
how does that program work?

Future research may uncover some answers. For
example, Jusczyk is interested in studying
how listeners integrate many types of
information along with language. A person
listening to someone speak not only pays
attention to the speaker's words. He also
thinks about the appearance, mannerisms,
and professional standing of the
speaker, and may even ponder memories related
to the speaker's topic. Jusczyk would like to
know how the brain analyzes language in the
context of this other information.

Likewise, says Jusczyk, "We're a long way
from understanding the relationship between
what's happening in the brain and in
behavior." At around eight to nine months,
for example, the long-range connections among
different brain regions proliferate, and the
brain's metabolism becomes more like an
adult's. These changes are interesting, says
Jusczyk, "but are they causal?" Does this
proliferation of brain connections account
for the ability of eight-month-old babies to
memorize the sounds and rhythms of words? Or
does baby's new ability to remember words
spur a proliferation of neuronal connections?
"Sometimes I don't think we even know if
we're on the right page or not," says
Jusczyk.

"I don't think anybody knows for sure how we
learn language," says Jusczyk. "But we have
certain hypotheses that make more sense than
they did before."

Melissa Hendricks is the magazine's senior
science writer.