Posts tagged "linguistics"

Brain ‘talks over’ boring speech quotes

Storytelling is a skill not everyone can master, but even the most crashing bore gets help from their audience’s brain, which ‘talks over’ their monotonous quotes, according to scientists.

Researchers from the University of Glasgow’s Institute of Neuroscience and Psychology investigated the ‘voice-selective’ areas of the brain and revealed that when listening to someone monotonously repeating direct quotations, the brain will ‘talk over’ the speaker to make the quotes more vivid.

Previously, the researchers had shown that the brain ‘talks’ when a person silently reads direct quotations.

Dr Bo Yao, the principal investigator of the study, said: “You may think the brain need not produce its own speech while listening to one that is already available.

“But, apparently, the brain is very picky on the speech it hears. When the brain hears monotonously-spoken direct speech quotations which it expects to be more vivid, the brain simply ‘talks over’ the speech it hears with more vivid speech utterances of its own.”

The research was conducted by Dr Yao and colleagues Professor Pascal Belin and Professor Christoph Scheepers within the Institute’s Centre for Cognitive Neuroimaging.

The team enlisted 18 participants in the study and scanned their brains using functional magnetic resonance imaging (fMRI) while they listened to audio clips of short stories containing direct or indirect speech quotations. The direct speech quotations (e.g., Mary said excitedly: “The latest Sherlock Holmes film is fantastic!”) were either spoken ‘vividly’ or ‘monotonously’ (i.e., with or without much variation in speech melody).

The results showed that listening to monotonously spoken direct speech quotations increased brain activity in the ‘voice-selective areas’ of the brain. These voice-selective areas, originally discovered by Prof Belin, are regions of the auditory cortex that respond particularly strongly to human voices when stimulated by actual speech sounds perceived by the ears.

First listening experiences: Dee do do do, de da da da?

"On the face of it, it is a strange phenomenon: adults who, the moment they lean over to peer into a baby buggy, start babbling a curious baby talk. And it doesn’t just happen to fathers and mothers; it overcomes many people in the same situation. In fact, we all seem to be capable of it, this “de do do do, de da da da.” But what exactly are we saying to our little fellow human beings? What message can be derived from this “de do do do, de da da da?”

The official term for this baby talk is infant-directed speech (IDS). It is a form of speech that distinguishes itself from normal adult speech through its higher overall pitch, exaggerated melodic contours, slower tempo, and greater rhythmic variation. It appears to be a kind of musical language, albeit one with an indistinct meaning and virtually no grammar. For these reasons, I will call it “babble music.” Babies love it, and coo with delight in response to the rhythmic little melodies, which often have the same charm as pop songs like The Police’s “De do do do, De da da da” and Kylie …

Numerous archives around the world have recordings of musical babble conversations between adults and children. If you listen to several of them, most of the time you won’t be able to understand what’s being said, but you will be able to identify the situation and particularly the mood because of the tone. It will quickly become apparent whether the message is playful, instructive, or admonitory. Words of encouragement such as “That’s the way!” or “Well done!” are usually uttered in an ascending and subsequently descending tone, with the emphasis on the highest point of the melody. Warnings such as “No, stop it!” or “Be careful, don’t touch!”, on the other hand, are generally voiced at a slightly lower pitch, with a short, staccato-like rhythm. If the speech were to be filtered out so that its sounds or phonemes were no longer audible and only the music remained, it would still be clear whether encouragement or warning was involved [a rough sketch of such filtering follows this excerpt]. This is because the relevant information is contained more in the melody and rhythm than it is in the words themselves.

Most linguists see the use of rhythm, dynamics, and intonation as an aid for making infants familiar at a young age with the words and sentence structures of the language of the culture in which they will be raised. Words and word divisions are emphasized through exaggerated intonation contours and varied rhythmic intervals, thereby facilitating the process of learning a specific language. (This holds quite apart from the discussion about which aspects of language are innate.)

Pedagogically speaking, the period during which parents use “babble music” is remarkably long. Infants have a distinct preference for babble music from the moment they are born, only developing an interest in adult speech after about nine months. Before that time, they appear to listen mostly to the sounds themselves. An interest in specific words, word division, and sound structure only comes after about a year, at which time they also begin to utter their first meaningful words. The characterization of IDS as an aid to learning a specific language therefore seems less plausible to me, at least with respect to the earliest months.

An alternative might be to see IDS not as a preparation for speech but as a form of communication in its own right: a kind of “music” used to communicate and discover the world for as long as “real” speech is absent. If you subsequently emphasize the type of information most commonly conveyed in babble music, or rather, those aspects of speech in which infants have the greatest interest during their first nine months, the conclusion must be that babble music is first and foremost a way of conveying emotional information. It is an emotional language that, even without grammar, is still meaningful…”

(Henkjan Honing | Psychology Today)

Image: New Mothers Resource Guide 
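The “filtering out the speech” idea Honing describes is often approximated in prosody research with a steep low-pass filter: cutting the higher frequencies removes most of the phonemic detail while leaving the pitch contour and rhythm audible. Here is a minimal Python sketch of that idea, assuming a mono WAV recording; the file name and the 400 Hz cutoff are illustrative choices of mine, not values from the excerpt.

# Sketch: low-pass filter a speech recording so that phonemic detail is
# largely removed while the melody (pitch contour) and rhythm remain audible.
# Assumes a mono 16-bit WAV file; the file name and 400 Hz cutoff are
# illustrative choices, not taken from the excerpt above.
import numpy as np
from scipy.io import wavfile
from scipy.signal import butter, filtfilt

rate, samples = wavfile.read("ids_sample.wav")   # hypothetical input recording
samples = samples.astype(np.float64)

cutoff_hz = 400.0                                # keeps F0 and rhythm, blurs phonemes
b, a = butter(4, cutoff_hz, btype="low", fs=rate)
prosody_only = filtfilt(b, a, samples)

# Normalise and save; the result sounds like muffled "speech melody".
peak = np.max(np.abs(prosody_only))
if peak > 0:
    prosody_only /= peak
wavfile.write("ids_prosody_only.wav", rate, (prosody_only * 32767).astype(np.int16))

Played back, the filtered clip should still make it obvious whether the speaker was encouraging or warning, which is exactly the point of the passage above.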

 
Bilingualism: A Matter of Apples and Oranges

How do people who speak more than one language keep from mixing them up? How do they find the right word in the right language when being fluent in just one language means knowing about 30,000 words?

That’s what science has wondered about for decades, offering complicated theories on how the brain processes more than one language and sometimes theorizing that bilingualism degrades cognitive performance.

But University of Kansas psycholinguist Mike Vitevitch thinks that complicated explanations of how the brain processes two or more languages overlook a straightforward and simple explanation.

“The inherent characteristics of the words — how they sound — provide enough information to distinguish which language a word belongs to,” he said. “You don’t need to do anything else.”

And in an analysis of English and Spanish, published in the April 7 online edition of Bilingualism: Language and Cognition, Vitevitch found few words that sounded similar in the two languages.

Most theories of how bilingual speakers find a word in memory assume that each word is “labeled” with information about which language it belongs to, Vitevitch said.

But he disagrees. “Given how different the words in one language sound to the words in the other language, it seems like a lot of extra and unnecessary mental work to add a label to each word to identify it as being from one language or the other.”

Here’s an analogy. Imagine you have a bunch of apples and oranges in your fridge. The apples represent one language you know, the oranges represent another language you know, and the fridge is that part of memory known as the lexicon, which contains your knowledge about language. To find an apple you just look for the round red thing in the fridge, and to find an orange you just look for the round orange thing. Once in a while you might grab an unripe, greenish orange, mistaking it for a Granny Smith apple. Such instances of language “mixing” do happen on occasion, but they are pretty rare and are easily corrected, said Vitevitch.

“This process of looking for a specific piece of fruit is pretty efficient as it is — labeling each apple as an apple and each orange as an orange with a magic marker seems redundant and unnecessary.”
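Vitevitch’s point, that a word’s sound pattern alone is usually enough to tell which language it belongs to, can be illustrated with a toy model (my illustration, not his analysis): train simple letter-bigram statistics on a handful of English and Spanish word forms, then classify unseen words by which set of statistics finds their form more probable, with no per-word language label stored anywhere. The word lists and test words below are made up for the sketch.

# Toy illustration (not Vitevitch's method): guess which language a word
# "belongs to" purely from its form, using character-bigram statistics.
# The tiny word lists and test words are illustrative, not data from the study.
from collections import Counter
import math

english = ["strength", "thought", "knight", "whisper", "shrink", "breakfast"]
spanish = ["fuerza", "pensamiento", "caballero", "susurro", "encoger", "desayuno"]

def bigram_counts(words):
    counts = Counter()
    for w in words:
        padded = "^" + w + "$"          # mark word boundaries
        counts.update(zip(padded, padded[1:]))
    return counts

def log_score(word, counts, total):
    # Log-probability of the word's bigrams under one language's statistics,
    # with add-one smoothing so unseen bigrams are unlikely, not impossible.
    padded = "^" + word + "$"
    return sum(math.log((counts[bg] + 1) / (total + len(counts) + 1))
               for bg in zip(padded, padded[1:]))

en_counts, es_counts = bigram_counts(english), bigram_counts(spanish)
en_total, es_total = sum(en_counts.values()), sum(es_counts.values())

for word in ["shrimp", "tiburon"]:      # unlabeled test words
    en_s = log_score(word, en_counts, en_total)
    es_s = log_score(word, es_counts, es_total)
    print(word, "->", "English-like" if en_s > es_s else "Spanish-like")

No apple here is ever labeled “apple”: the model only consults the form of the word itself, which is roughly the shortcut Vitevitch argues the bilingual lexicon can rely on.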

Given how words in one language tend to sound different from words in another language, parents who speak different languages should not worry that their children will be confused or somehow harmed by learning two languages, said Vitevitch.

“Most people in most countries in the world speak more than one language,” said Vitevitch. “If the U.S. wants to successfully compete in a global economy we need people who can communicate with potential investors and consumers in more than one language.”

Michael S. Vitevitch. What do foreign neighbors say about the mental lexicon? Bilingualism: Language and Cognition, 2011; DOI: 10.1017/S1366728911000149