
Saturday, December 22, 2018

The feeling before it happens: Anticipated touch and executive function--in (haptic) pronunciation teaching

Tigger warning*: This post is (about) touching!

Another in our continuing, but much "anticipated", series of reasons why haptic pronunciation teaching works or not, based on studies that at first glance (or just before) may appear to be totally unrelated to pronunciation work.

A fascinating piece of research by Weiss, Meltzoff, and Marshall, of the University of Washington's Institute for Learning and Brain Sciences and Temple University, entitled "Neural measures of anticipatory bodily attention in children: Relations with executive function", summarized by ScienceDaily.com. In that study they looked at what goes on in the (child's) brain prior to an anticipated touch of something. What they observed (from the ScienceDaily.com summary) is that:

"Inside the brain, the act of anticipating is an exercise in focus, a neural preparation that conveys important visual, auditory or tactile information about what's to come  . . . in children's brains when they anticipate a touch to the hand, [this process] . . . relates this brain activity to the executive functions the child demonstrates on other mental tasks. [in other words] The ability to anticipate, researchers found, also indicates an ability to focus."

Why is that important? It suggests that the areas of the brain responsible for "executive" functions, such as attention, focus and planning, engage much earlier in the process of perception than is generally understood. For the child or adult who does not have the general, multi-sensory ability to focus effectively, the consequences can be far-reaching.

In haptic pronunciation work, for example, we have encountered what appears to be a whole range of random effects in the visual, auditory, tactile and conceptual worlds of the learner that can interfere with paying quality attention to pronunciation and with memory. In some sense we have had it backwards.

What the study implies is that executive function mediates all sensory experience, since we must efficiently anticipate what is to come--to the extent that any individual "simply" may or may not be able to attend long enough or deeply enough to "get" enough of the target of instruction. The brain is set up to avoid unnecessary surprise at all costs. The more accurate the anticipation, of course, the better.

If the conclusions of the study are on the right track, that the "problem" lies as much or more in executive function, then how can that (executive functioning) be enhanced systematically, as opposed to just attempting to limit random "input" and distraction surrounding the learner? We'll return to that question in subsequent blog posts, but one obvious answer is through the development of highly disciplined practice regimens and careful, principled planning.

Sounds rather like something of a return to more method- or instructor-centered instruction, as opposed to this passing era of overemphasis on learner autonomy and personal responsibility for managing learning, doesn't it? That's right. One of the great "cop outs" of contemporary instruction has been to pass off blame for failure onto the learner, her genes and her motivation. That will soon be over, thankfully.

I can't wait . . .



Citation:
University of Washington. (2018, December 12). Attention, please! Anticipation of touch takes focus, executive skills. ScienceDaily. Retrieved December 21, 2018 from www.sciencedaily.com/releases/2018/12/181212093302.htm.

*Used on this blog to alert readers to the fact that the post contains reference to feelings and possibly "paper tigers" (cf., Tigger of Winnie the Pooh)


Saturday, December 23, 2017

Vive la efference! Better pronunciation using your Mind's Ear!

"Efference" . . . our favorite new term and technique: to imagine saying something before you actually say it out loud, creating an "efferent copy" that the brain then uses in efficiently recognizing what is heard or what is said.  Research by Whitford, Jack, Pearson, Griffiths, Luque, Harris, Spencer, and Pelley of University of New South Wales, Neurophysiological evidence of efference copies to inner speech, summarized by ScienceDaily.com, explored the neurological underpinnings of efferent copies, having subjects imagine saying a word before it was heard (or said.)

The difference in the amount of processing required for subsequent occurrences following the efference copies, as observed with fMRI-like technology, was striking. The idea is that this is one way the brain deals efficiently with speech recognition and variance. By (unconsciously) having "heard" the target, or an idealized version of it, just previously in the "mind's ear", so to speak, we have more processing power left over for other things . . .
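For the technically inclined, here is a minimal sketch (mine, not the researchers') of that "a good prediction saves processing" logic, in the spirit of predictive coding. Everything in it--the feature vectors, the noise levels, the processing_cost function--is invented purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def processing_cost(heard, predicted):
    # Toy "prediction error": the residual the brain would still have to process.
    return float(np.abs(heard - predicted).sum())

# Pretend a spoken word is a short vector of acoustic features.
word = rng.normal(size=8)
heard = word + rng.normal(scale=0.1, size=8)   # what actually arrives at the ear

# No efference copy: nothing is predicted, so the whole signal is "surprise".
cost_without = processing_cost(heard, predicted=np.zeros(8))

# With an efference copy: imagining the word first supplies a near-match
# prediction, leaving only a small residual to process.
efference_copy = word + rng.normal(scale=0.05, size=8)
cost_with = processing_cost(heard, predicted=efference_copy)

print(f"cost without efference copy: {cost_without:.2f}")
print(f"cost with efference copy:    {cost_with:.2f}")
```

The point of the toy: when a near-match prediction (the "efference copy") arrives first, what is left to process is a small fraction of the full signal.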

Inner speech has been studied and employed extensively in second language research and practice (e.g., Shigematsu, 2010, dissertation: Second language inner voice and identity) and in other disciplines. To date, there is no published research that I'm aware of on the direct application of efference in our field.

The haptic application of that general idea is to "imagine" saying the word or phrase, synchronized with a specifically designed pedagogical gesture, before articulating it. In some cases, especially where the learner is highly visual, that seems to be helpful, but we have done no systematic work on it. The relationship with the effectiveness of video modeling may be very relevant as well. Here is a quick thought/talk problem for you, to demonstrate how it works:

Imagine yourself speaking a pronunciation-problematic word in one of your other languages before trying to say it out loud. Do NOT subvocalize or move your mouth muscles. (Add a gesture for more punch!) How'd it work?

Imagine your pronunciation work getting better while you are at it!




Saturday, October 14, 2017

Empathy for strangers: better heard and not seen? (and other teachable moments)

The technique of closing one's eyes to concentrate has both everyday sense and empirical research support. For many, it is common practice in pronunciation and listening comprehension instruction. Several studies of the practice under various conditions have been reported here in the past. A nice 2017 study by Kraus of Yale University, Voice-only communication enhances empathic accuracy, examines the effect from several perspectives.

What the research establishes is that the emotion encoded in the voice of a stranger is perceived more accurately with eyes closed, as opposed to watching the video alone or watching the video with the sound on. (Note: The researcher concedes in the conclusion that the effect might not be as pronounced were one listening to the voice of someone we are familiar or intimate with, or were the same experiments carried out in some culture other than "North American".) In the study there is no unpacking of just which features of the strangers' speech are being attended to, whether linguistic or paralinguistic, the focus being:
 . . . paradoxically that understanding others’ mental states and emotions relies less on the amount of information provided, and more on the extent that people attend to the information being vocalized in interactions with others.

The targeted effect is statistically significant and well established. The question is, to paraphrase the anthropologist Gregory Bateson, does this "difference that makes a difference" make a difference--especially to language and pronunciation teaching?

How can we use that insight pedagogically? First, of course, is the question of how MUCH better the closed-eyes condition will be in the classroom and, even if it is better initially, whether it will hold up with repeated listening to the voice sample or conversation. Second, in real life, when do we employ that strategy, either on purpose or by accident? Third, there was a time when radio or audio drama was a staple of popular media and instruction. In our contemporary visual media culture, as reflected in the previous blog post, the appeal of video/multimedia sources is near irresistible. But, maybe, still worth resisting?

Especially with certain learners and classes, in classrooms where multi-sensory distraction is a real problem, I have over the years worked successfully with explicit control of visual/auditory attention in teaching listening comprehension and pronunciation. (It is prescribed in certain phases of haptic pronunciation teaching.) My sense is that the "stranger" study is actually tapping into comprehension of new material or ideas, not simply new people/relationships and emotion. Stranger things have happened, eh!

If this is a new concept to you in your teaching, close your eyes and visualize just how you could employ it next week. Start with little bits, for example when you have a spot in a listening passage that is expressively very complex or intense. For many, it will be an eye-opening experience, I promise!

Source:
Kraus, M. W. (2017). Voice-only communication enhances empathic accuracy. American Psychologist, 72(7), 644-654.



Sunday, October 8, 2017

The shibboleth of great pronunciation teaching: Body sync!

If there is a sine qua non of contemporary pronunciation teaching, in addition to the great story of the first recorded pronunciation test in history that we often use in teacher training, it is the use of mirroring (moving along with a spoken model on audio or video). If you are not familiar with the practice of mirroring, here are a few links, by Meyers (PDF), Meyers (video) and Jones, to get you started.

There are decades of practice and several studies showing that it works: it seems to help improve suprasegmentals, attitudes and listening comprehension, among other things. There has always been a question, however, as to how and why. A new study by Morillon and Baillet of McGill University, reported by ScienceDaily.com, not only suggests what is going on but also (I think) points to how to work better with a range of techniques related to mirroring in the classroom.

The study looked at the relationship between the motor and speech perception centers of the brain. What it revealed was that by getting subjects to move (some part of) their bodies to the rhythm of what they were listening to, their ability to predict what sound would come next was enhanced substantially. Quoting from the ScienceDaily summary:

"One striking aspect of this discovery is that timed brain motor signaling anticipated the incoming tones of the target melody, even when participants remained completely still. Hand tapping to the beat of interest further improved performance, confirming the important role of motor activity in the accuracy of auditory perception."

The researchers go on to note that a good analogy is being at a very noisy cocktail party and trying to focus in on the speech rhythm of the person you are listening to, in order to better understand what they are saying. (As one whose hearing is not what it used to be, due in part to age and tinnitus, that is a strategy I'm sure I employ frequently.) You can do that, I assume, either by watching body or facial movement or by just syncing to the rhythm of what you can hear.

As both Meyers and Jones note, with the development of visual/auditory technology and the availability of appropriate models on the web or in commercial materials, the feasibility of any student having the opportunity and tools to work with mirroring today has improved dramatically. Synchronized body movement is the basis of haptic pronunciation teaching. We have not done any systematic study of the subsequent impact of that training and practice on speech perception, but students often report that silently mirroring a video model helps them understand better. (Well, actually, we tell them that will happen!)

If you are new to mirrored body syncing in pronunciation teaching or in listening comprehension work, you should try it, or at least dance along with us for a bit.

Source:
McGill University. (2017, October 5). Predicting when a sound will occur relies on the brain's motor system: Research shows how the brain's motor signals sharpen our ability to decipher complex sound flows. ScienceDaily. Retrieved October 6, 2017 from www.sciencedaily.com/releases/2017/10/171005141732.htm

Saturday, August 30, 2014

Improve L2 pronunciation-- with or without lifting a finger!

Listen to this! (You may even want to sit down before you do!) A new study by Mooney and colleagues at Duke University, summarized by ScienceDaily, shows how movement can affect listening. Here's the summary:

"When we want to listen carefully to someone, the first thing we do is stop talking. The second thing we do is stop moving altogether. The interplay between movement and hearing has a counterpart deep in the brain. A new study used optogenetics to reveal exactly how the motor cortex, which controls movement, can tweak the volume control in the auditory cortex, which interprets sound."

Now, granted, the study was done on mice, who probably have some other stuff going on down there in their motor cortices as well. Nonetheless, the striking insight into the underlying relationship between movement and volume control in our auditory input circuits is enough to give us (an encouraging) "pause . . ." in two senses:
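Here is that "volume knob" idea reduced to a toy formula, just to make the gain-modulation notion concrete. The 10 dB suppression figure is pure invention on my part; the study reports nothing of the kind in these terms:

```python
def perceived_level(sound_level_db, moving, suppression_db=10.0):
    # Toy gain model: motor activity turns down the auditory "volume knob".
    # The suppression_db value is invented for illustration, not from the study.
    return sound_level_db - (suppression_db if moving else 0.0)

for moving in (False, True):
    print(f"moving={moving}: perceived level = {perceived_level(70.0, moving):.0f} dB")
```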

First, learning new pronunciation begins with aural comprehension, being able to "hear" the sound distinctions. We have played with the idea of having learners gesture along with instructor models while listening. The study suggests that may not be as effective as we thought, or at least that the conditions we set up have to be more sensitive to "volume" and ambient static. You can see the implications for aural comprehension work in general as well.

Second, during early speaking production in haptic pronunciation instruction, being able to temporarily suppress auditory input (coming in through the ears) is seen as essential. Following Lessac and many others in speech and voice training, what we are after initially is a focus on vocal resonance in the upper body and kinaesthetic awareness of the gestural patterns, what we call "pedagogical movement patterns" or PMPs.


We do that, in part, to dampen (i.e., turn down the volume on) how the learner's production is perceived initially, filtered through the L1 or personal interlanguage, trying to focus instead on the core of the sound(s): approximations, not absolute accuracy. Some estimates of our awareness of our own voice suggest that it is less than 25% auditory, that is, coming in through the air to our ears, the rest being body-based, or somatic.
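To make the arithmetic of that estimate concrete, a two-line sketch. The 25% air-conduction weighting is just the estimate cited above, not a measured constant, and the "components" are arbitrary units:

```python
def self_voice_percept(air_conducted, somatic, air_weight=0.25):
    # Weighted mix of air-conducted and body-conducted self-voice feedback.
    return air_weight * air_conducted + (1 - air_weight) * somatic

# Even if auditory (air-conducted) feedback is heavily dampened, most of the
# self-voice "signal" survives, which is the pedagogical point.
full = self_voice_percept(air_conducted=1.0, somatic=1.0)
dampened = self_voice_percept(air_conducted=0.2, somatic=1.0)
print(f"full feedback:    {full:.2f}")
print(f"dampened hearing: {dampened:.2f}  ({dampened / full:.0%} of the original)")
```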

What we hear should be moving; not what we hear with, apparently!

ScienceDaily citation: Duke University. "Stop and listen: Study shows how movement affects hearing." ScienceDaily, 27 August 2014.

Sunday, October 14, 2012

In your ears!!! (Not for accurate sound discrimination!)


We have long recommended that learners NOT use headsets when working on pedagogical movement patterns--and also go easy on that practice in general sound discrimination work. (For one thing, their arms get tangled up in the cords!) Now there is an empirical study that adds a little support to that principle.

As reported in ScienceDaily, Okamoto and Kakigi of the National Institute for Physiological Sciences, Japan, along with Pantev and Teismann of the University of Muenster, have demonstrated that listening to loud music with mini earphones may have a detrimental effect on the ability to make fine judgements in sound discrimination. Although the "damage" was not detectable using standard hearing tests, the effect was striking with their more sensitive instrumentation. They termed the effect one of losing perception of "vividness" in contrast.

The impact would then be even more "pronounced" with a learner who does not have good sound discrimination ability in the first place--especially one who plays his or her mp3 player at levels well beyond "vivid"! On the other hand, the learner may be cranking up the volume to compensate for the lack of perceived vividness--especially men, given the typical loss of high-frequency response with age. So, help students learn to carefully manage the volume of their recorded pronunciation practice and the rest of their mp3-ing. Sound advice.
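If you want to give students a concrete rule of thumb for "managing the volume," the NIOSH-style occupational guideline--roughly 8 hours a day at 85 dBA, with the safe time halved for every 3 dB above that--is easy to compute. A small sketch (treat the output as a rough guide for discussion, not audiological advice):

```python
def allowable_hours(level_dba, reference=85.0, exchange=3.0):
    # NIOSH-style rule of thumb: ~8 hours at 85 dBA, halved for every +3 dB.
    return 8.0 / (2 ** ((level_dba - reference) / exchange))

for level in (85, 91, 94, 100):
    print(f"{level} dBA -> roughly {allowable_hours(level):.2f} h/day")
```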

Tuesday, June 26, 2012

Phonetic (or phonemic) gesture revisited (in the classroom)

In the development of our understanding of speech perception, one of the terms used by some researchers was "phonetic gesture." It essentially referred to the process by which sounds are perceived--the articulatory, not the acoustic properties. The key question was just how much one's ability to articulate a sound determined one's ability to perceive it. What subsequent research has shown is that it is a mixed bag; the relationship between the external properties of sound and our internal processing of it is very complex, and developmental as well. In short, ongoing perception of speech turns out to be more a matter of our conceptual systems' "expectations" than of the actual physical properties of what we hear.

That is not to say that the felt sense of the bodily "mechanics" is not important and cannot contribute to both understanding and learning. I like the term "phonetic gesture" as relating to the somatic, physical side of sound production and perception. In our work, a better application of that idea might be "phonemic gesture," that is, pedagogical movement patterns that represent key meaningful units of sound within English, including vowels, rhythm patterns, stress assignment and intonation contours.

As noted earlier, one of my first, informal research studies was to sit in my colleagues' classes and take notes on the gestures they used to accompany pronunciation instruction. Those observations got me started on this line of thinking about 20 years ago. That language instructors adapt gesture for many purposes was the subject of this research by Stam and Tellier. (Their work is reported in other publications as well.) One interesting finding was the expansion of the range and depth of field of motion of the gestures used: " . . . an equivalent of shouting in gesture form." So, what is your current pedagogical "phonemic gesture inventory?" What do you mean