
Sunday, February 21, 2021

Synchronization of brain hemispheres for (pronounced) better auditory processing

For well over a century, synchronization of the brain hemispheres has been thought to be somehow integral to efficient or focused learning in many disciplines. Basically, and overgeneralizing, in processing language sound the left hemisphere, linked to the right ear, initially handles vowels, consonants and syllables; the right, intonation and rhythm. It seemed to follow that enhancing synchronization should enhance that processing, especially the integration of both sources and meaning.

A fascinating, forthcoming study by Preisig and colleagues at the University of Zurich, summarized on Neurosciencenews.com as "Synchronization of Brain Hemispheres Changes What We Hear" (to appear in PNAS), examines the role of gamma wave modulation in brain hemisphere synchronization. What the research demonstrated, in part, was that as synchronization was modulated (by gamma wave variance), auditory processing was correspondingly downgraded or enhanced. Techniques such as stimulating dream recall with gamma wave stimulation seem to operate in similar ways.

That concept, synchronization and integration, has become something of a gold standard in many forms of therapy and performance optimization systems. From a non-invasive perspective, that is, in ways that do not involve stimulating the brain with electrical current or implanted devices, embodiment practices such as yoga and mindfulness, along with many types of physical and athletic engagement, have been shown to influence or enhance brain hemisphere synchronization and integration.

What "moves" do you do in teaching that involve hemispheric synchronization that may enhance your students listening comprehension or help them be more "mindful" of your teaching?  

In haptic pronunciation teaching (HaPT), there are several "bilateral" pedagogical practices, such as:

  • Alternating hands/arms exercises
  • Touching the other hand, arm, shoulder or opposing side of the body
  • Practicing a movement/gestural pattern both left to right and right to left
  • Doing gestural patterns that repeatedly cross the visual field, back and forth
  • Intentional positioning of different haptic tasks in different areas of the visual field of students in the classroom. 
  • Most activities involve continuous body engagement, using gesture and body movement. 

You haven't heard of haptic pronunciation teaching? Go to our website, www.actonhaptic.com, and try out a few of our best "moves!" While you are there, check out the new Acton Haptic Pronunciation system. It will be available soon! 

Keep in touch! 


Saturday, October 14, 2017

Empathy for strangers: better heard and not seen? (and other teachable moments)

The technique of closing one's eyes to concentrate has both everyday common sense and empirical research support. For many, it is common practice in pronunciation and listening comprehension instruction. Several studies of the practice under various conditions have been reported here in the past. A nice 2017 study by Kraus of Yale University, Voice-only communication enhances empathic accuracy, examines the effect from several perspectives.

What the research establishes is that the emotion encoded in the voice of a stranger is more accurately perceived with eyes closed, as opposed to just watching the video, or watching the video with the sound on. (Note: The researcher concedes in the conclusion that the same effect might not be as pronounced were one listening to the voice of someone we are familiar or intimate with, or were the same experiments to be carried out in some culture other than "North American.") In the study there is no unpacking of just which features of the strangers' speech are being attended to, whether linguistic or paralinguistic, the focus being:

. . . paradoxically that understanding others’ mental states and emotions relies less on the amount of information provided, and more on the extent that people attend to the information being vocalized in interactions with others.

The targeted effect is statistically significant and well established. The question is, to paraphrase the philosopher Bertrand Russell, does this "difference that makes a difference" make a difference--especially to language and pronunciation teaching?

How can we use that insight pedagogically? First, of course, there is the question of how MUCH better the closed-eyes condition will be in the classroom and, even if it is better initially, whether it will hold up with repeated listening to the voice sample or conversation. Second, in real life, when do we employ that strategy, either on purpose or by accident? Third, there was a time when radio or audio drama was a staple of popular media and instruction. In our contemporary visual media culture, as reflected in the previous blog post, the appeal of video/multimedia sources is nearly irresistible. But maybe it is still worth resisting?

Especially with certain learners and classes, in classrooms where multi-sensory distraction is a real problem, I have over the years worked successfully with explicit control of visual/auditory attention in teaching listening comprehension and pronunciation. (It is prescribed in certain phases of haptic pronunciation teaching.) My sense is that the "stranger" study is actually tapping into comprehension of new material or ideas, not simply new people/relationships and emotion. Stranger things have happened, eh!

If this is a new concept to you in your teaching, close your eyes and visualize just how you could employ it next week. Start with little bits, for example, when you have a spot in a passage of a listening exercise that is expressively very complex or intense. For many, it will be an eye-opening experience, I promise!

Source:
Kraus, M. (2017). Voice-only communication enhances empathic accuracy. American Psychologist, 72(6), 644–654.



Sunday, October 8, 2017

The shibboleth of great pronunciation teaching: Body sync!

If there is a sine qua non of contemporary pronunciation teaching, in addition to the great story of the first recorded pronunciation test in history that we often use in teacher training, it is the use of mirroring (moving along with a spoken model on audio or video). If you are not familiar with the practice of mirroring, here are a few links by Meyers (PDF), Meyers (video) and Jones to get you started.
Clip art: Clker.com

There are decades of practice and several studies showing that it works: it seems to help improve suprasegmentals, attitudes and listening comprehension, among other things. There has always been a question, however, as to how and why. A new study by Morillon and Baillet of McGill University, reported by ScienceDaily.com, not only suggests what is going on but also (I think) points to how to work better with a range of techniques related to mirroring in the classroom.

The study looked at the relationship between the motor and speech perception centers of the brain. What it revealed was that by getting subjects to move (some part of) their bodies to the rhythm of what they were listening to, their ability to predict what sound would come next was enhanced substantially. Quoting from the ScienceDaily summary:

"One striking aspect of this discovery is that timed brain motor signaling anticipated the incoming tones of the target melody, even when participants remained completely still. Hand tapping to the beat of interest further improved performance, confirming the important role of motor activity in the accuracy of auditory perception."

The researchers go on to note that a good analogy is the experience of being at a very noisy cocktail party and trying to focus in on the speech rhythm of someone you are listening to, in order to better understand what they are saying. (As one whose hearing is not what it used to be, due in part to just age and tinnitus, that strategy is one I'm sure I employ frequently.) You can do that, I assume, either by watching the body or facial movement or by just syncing to the rhythm of what you can hear.

As both Meyers and Jones note, with the development of visual/auditory technology and the availability of appropriate models on the web or in commercial materials, the feasibility of any student having the opportunity and tools to work with mirroring today has improved dramatically. Synchronized body movement is the basis of haptic pronunciation teaching. We have not done any systematic study of the subsequent impact of that training and practice on speech perception, but students often report that silently mirroring a video model helps them understand better. (Well, actually, we tell them that will happen!)

If you are new to mirrored body syncing in pronunciation teaching or in listening comprehension work, you should try it, or at least dance along with us for a bit.

Source:
McGill University. (2017, October 5). Predicting when a sound will occur relies on the brain's motor system: Research shows how the brain's motor signals sharpen our ability to decipher complex sound flows. ScienceDaily. Retrieved October 6, 2017 from www.sciencedaily.com/releases/2017/10/171005141732.htm

Monday, September 8, 2014

More than a gesture: When to use gesture in L2 teaching

Should you still need more convincing as to the value and contribution of gesture in L2 learning and instruction, the September 2014 issue of The Modern Language Journal (98) has two excellent, complementary articles that you should read: one by Dahl and Ludvigsen on the effect of gesture on listening comprehension and a second, by Morett, on gesture as a "cognitive aid" during speaking production and communication. (See full references below.)

The first study examines how observing gesture complements comprehension; the second then demonstrates how actually producing the gesture as you learn and then communicate with a new L2 term in the early stages of the process results in more effective acquisition, retention and recall. 

The learner populations involved are quite different, as are the research methodologies, but the two studies together contribute substantially to our understanding of how and when gesture works. (You'll have to access them through your library online or shell out the usual 5-6 Venti Caramel Frap equivalents for each, of course--but it may be worth it in this case.) There is also an earlier (free, accessible online) 2012 paper by Morett, Gibbs and MacWhinney, The Role of Gesture in Second Language Learning: Communication, Acquisition, & Retention, that lays out the theoretical background for the new study as well.

One striking (but not surprising) finding of the Morett study is that using a gesture while speaking and communicating results in better acquisition than just observing the gesture being used by someone else. The other study examines the conditions under which observing gesture works best.

AH-EPS v3.0
The bottom line: Systematic incorporation of gesture in (at least initial) L2 learning is again shown to be exceedingly effective. It must be carefully timed and linked to meaning, but the results of both studies are very persuasive. Another good example of that, of course, is AH-EPS v3.0 Bees and Butterflies - Serious fun! (Which rolls out this month, in fact!) 


Full references:
Dahl, T. & Ludvigsen, S. (2014). How I See What You're Saying: The Role of Gestures in Native and Foreign Language Listening Comprehension. The Modern Language Journal, 98(3), 813–833.
Morett, L. (2014). When Hands Speak Louder Than Words: The Role of Gesture in the Communication, Encoding, and Recall of Words in a Novel Second Language. The Modern Language Journal, 98(3), 834–853.





Sunday, April 7, 2013

Perfect pronunciation?

Clip art: Clker
There are any number of ways to do that, of course. ("Per'fect" it, that is!) One place I would generally not begin with beginners or upper beginners, however, would be the Merriam-Webster's Learner's Dictionary and its "Perfect Pronunciation Exercises." As noted in earlier posts, one place I might start, however--one of my favorites, and one very compatible with AH-EPS--is English Accent Coach. The "difference" between the two sites is instructive. One begins with listening to words in sentence context; the other begins with the sounds themselves, leads learners through a series of exercises (and games) and then extends to the sounds in words, etc. The EAC model is a good one, whether you use that specific site or do the same within your own listening/pronunciation syllabus/curriculum. Most of the traditional pronunciation packages did something similar but did not have the technology available to make it fast and efficient, as does EAC.

Now, EAC's creator will probably not entirely agree with me that embodying the vowels of English first gives learners a much better "touch" for them before they play the game there, but give Professor Thomson a break . . . he's listening.

So, once you finish Module 3 in AH-EPS, send your students over to EAC for some very fine, fine tuning. 

Tuesday, March 12, 2013

Vigilance decrement during pronunciation work?

Clip art: Clker
I knew there had to be a scientific term for why students occasionally lose interest in pronunciation work . . . and a cure! The term comes up in yet another study that found that gum chewing can be good for things "cognitive." In this case, in the study by Morgan, Johnson and Miles of Cardiff University, summarized by Science Daily, it was found that the "gummies" were able to persist longer on an audio recognition task than the "gum-less." The gum-less started out stronger but were overtaken and passed by the gummies near the end. And the reason that the gummies did better? They were more immune to "vigilance decrement" during the task. I have yet to read a cogent explanation as to WHY gum works the way it does. (If you know of that research, please link it here.)

Because surgery a few years ago cut out a saliva gland, I have to chew gum to function effectively. I had never done gum before and very much dislike it now, but I do have some "haptic" felt sense of what they are talking about, of how it combats "vigilance decrementia." It at least gives me something to do during interminable harangues at faculty meetings.

My guess, however, is that it has something to do with keeping the wiring that runs from the brain to the articulatory equipment energized, in effect working in the opposite direction, very much the way haptic technology drives feedback back to the brain through the hands. I'm not sure I'm in for having students do gum during work that is basically oral production-oriented, but next time your class has to just sit and do nothing but listen, give it a try. "Gum up the works a bit, eh!"


Journal reference (compliments of Science Daily): Kate Morgan, Andrew J. Johnson and Christopher Miles. Chewing gum moderates the vigilance decrement. British Journal of Psychology, 8 March 2013.

Saturday, February 16, 2013

Sing first, listen later: Noticing new or different sounds in L2 pronunciation learning

Here's one for all of us who make extensive use of singing in class. (Here is yet another case where experienced practitioners know from experience that it works but have been just waiting for research to catch up and tell them why!) Research by McLachlan, Marco, Light and Wilson at the Melbourne School of Psychological Sciences, summarized, as usual, by Science Daily, notes the following:
Clip art: Clker
 "What we found was that people needed to be familiar with sounds created by combinations of notes before they could hear the individual notes. If they couldn't find the notes they found the sound dissonant or unpleasant . . . This finding overturns centuries of theories that physical properties of the ear determine what we find appealing."
 
In other words, at some very basic level, appreciation of a style of music is learned. The "notes" in the study had to be first encountered in relation to others in the system before they could be identified or appreciated. Singing in language instruction--and probably, to a lesser degree, listening comprehension techniques with pronunciation--certainly serves that function. This is an important study, one with very interesting potential ramifications for our work. I will try to get the full research report and report back . . .

Notice: Here is my annual apology for sometimes using less than reliable or politically neutral secondary sources, such as Science Daily or The New York Times, or research abstracts from studies that receive public support but are published in journals that you can't access without being a member of "The Guild," can't afford at $32 per article, or wouldn't pay for on ethical grounds even if you did have the spare change lying around: Sorry about that. (There. Done.)


Sunday, November 20, 2011

The rhythm of (haptic) English linking (training)

Clip art: Clker
Here is a 9-minute video of the "standard" approach to teaching English linking by McIntyre--done very well--from the 2003 Clearly Speaking project, headed by Burns and Claire.

Opinion in the field is split as to how much time should be spent on comprehension of linking, as opposed to active training in producing linked speech. For fixed phrases such as "black'n white," etc., teaching production makes sense. Requiring students to practice linking on sentences such as "They_ate_every_orange_in_Norman's_bucket!"--a common practice in "elocution" training--as a model of what good speaking should sound like is recommended by few that I am aware of. (The 1982 student book still used in some programs, Whaddayasay?, does suggest that, in fact.)

The EHIEP approach, on the contrary, assumes that students have at least been introduced to linking in listening comprehension work, much as is done by McIntyre. The effective haptic anchoring of rhythm and rhythm groups in practice and conversation should do three things: (1) encourage the natural phonological process of linking when rhythm and stress are appropriately balanced, (2) create a strong contrast between stressed and unstressed elements that de-emphasizes backgrounded material, and (3) promote overall intelligibility so that "missing" linking is not as noticeable. "Whadayagonnadoweh?"