
Saturday, December 23, 2017

Vive la efference! Better pronunciation using your Mind's Ear!

"Efference" . . . our favorite new term and technique: to imagine saying something before you actually say it out loud, creating an "efferent copy" that the brain then uses in efficiently recognizing what is heard or what is said.  Research by Whitford, Jack, Pearson, Griffiths, Luque, Harris, Spencer, and Pelley of University of New South Wales, Neurophysiological evidence of efference copies to inner speech, summarized by ScienceDaily.com, explored the neurological underpinnings of efferent copies, having subjects imagine saying a word before it was heard (or said.)

The difference in the amount of processing required of subsequent occurrences following the efference copies, as observed with fMRI-like technology, was striking. The idea is that this is one way the brain deals efficiently with speech recognition and variation. By (unconsciously) having "heard" the target, or an idealized version of it, just previously in the "mind's ear," so to speak, we have more processing power left over to work on other things . . .

Inner speech has been studied and employed extensively in second language research and practice (e.g., Shigematsu, 2010, dissertation: Second language inner voice and identity) and in other disciplines. To date, however, there is no published research that I'm aware of on the direct application of efference in our field.

The haptic application of that general idea is to "imagine" saying the word or phrase, synchronized with a specifically designed pedagogical gesture, before articulating it. In some cases, especially where the learner is highly visual, that seems to be helpful, but we have done no systematic work on it. The relationship to the effectiveness of video modeling may be very relevant as well. Here is a quick thought/talk problem to demonstrate how it works:

Imagine yourself speaking a pronunciation-problematic word in one of your other languages before trying to say it out loud. Do NOT subvocalize or move your mouth muscles. (Add a gesture for more punch!) How'd it work?

Imagine your pronunciation work getting better while you are at it!
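
For those who like to tinker, here is a minimal sketch in Python of a self-timed version of that exercise. The word list, timings, and prompts are invented purely for illustration; they are not part of the EHIEP protocols.

```python
import time

# A minimal, hypothetical self-study drill based on the "efference" idea above:
# the learner silently imagines each word (no subvocalizing!), then says it aloud.
# Words and timings here are illustrative only.

WORDS = ["thorough", "squirrel", "rural"]   # swap in your own problem words
IMAGINE_SECONDS = 3                          # silent rehearsal in the "mind's ear"
SPEAK_SECONDS = 2                            # time allowed to say it out loud

def run_drill(words):
    for word in words:
        print(f"\nImagine saying: '{word}'  (do NOT move your mouth; add a gesture if you like)")
        time.sleep(IMAGINE_SECONDS)
        print(f"Now say it aloud: '{word}'")
        time.sleep(SPEAK_SECONDS)
    print("\nDone. How did it feel compared with saying the words cold?")

if __name__ == "__main__":
    run_drill(WORDS)
```

Adjust the silent-rehearsal time to taste; the point is simply to get the "mind's ear" rehearsal in before the mouth moves.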




Friday, April 12, 2013

Face it . . . Your pronunciation could look better!

According to research by Lander and Capek at the University of Lancaster, a little more lipstick and some work on your speech style may be in order. (Watched yourself on video lately when you ask a student to "look at my mouth" as you provide a model?) Their study demonstrated unequivocally that, when your listeners can see you, their ability to understand you can be enhanced considerably with a little tweaking. One feature that made words more easily understood, not surprisingly, was backing off from conversational style toward more declarative articulation, especially when there is potentially disruptive background noise. In addition, although other facial muscle movement does play a supporting role or is synchronized with mouth and lip movement, it was the mouth that primarily carried the functional load.

This is a particularly interesting problem in haptic work, in part because the eyes of the student are naturally drawn to the hand and arm movements. Consequently, you must be a bit more conscientious about how you articulate a model word, for example, as you perform the corresponding pedagogical movement pattern, to be sure that students can also read your lip patterning. Record some of your work, turn off the sound, and spend a little time trying to figure out what you were saying . . .

Obviously nothing to just "pay lip service to!" 


Citation: Investigating the impact of lip visibility and talking style on speechreading performance - http://dx.doi.org/10.1016/j.specom.2013.01.003

Sunday, December 23, 2012

Sound discrimination training: perceived "phon-haptic" distance

Ask any Japanese EFL student how they managed to perceive and later produce the distinction between [i] and [I] or [u] and [U] in English and they'll probably tell you that it was difficult . . . or impossible. The same goes, of course, for L1/L2 phoneme mismatches for most learners, at least initially. The problem is the "competition" between phonetic or articulatory distance, that is, how physically different it is to produce two sounds, and phonemic categorical distance. If the brain "decides" that two sounds represent the same phoneme, then regardless of how different they "feel" to produce--case closed. At least that is what most research suggests. A 2004 study by Gerrits and Schouten of Utrecht University (linked here at the University of Rochester) suggests that the task used in the discrimination process can significantly impact perception of phonemic categories.

In plain English, what does that mean? Basically this: the method you use to assist learners in hearing or producing a phonemic distinction in their L2 can, itself, affect whether they get it or not. Really? Well, maybe . . . So how do you usually do that? A class listening discrimination task of some kind? Give them an audio recording to listen to? Show them line drawings and have them repeat after you? Sit down with the learner and use a Starbucks coffee stirrer to get their articulators realigned?

As described in earlier blog posts, the EHIEP approach is to establish points in the visual field where the hands touch as the sound is articulated, what we term "phon-haptically." Those points, or nodes, are strategically placed so that distinctions such as those above are experienced as being both physically distant from each other and somatically distinct, each with its own texture or type of touch (tapping, pushing, scratching, brushing, twisting, etc.). The touch-type is chosen to "imitate" the felt sense of producing the vowel in the vocal tract in some way, if only metaphorically. Does it work? Try it and let us know. Keep in touch.
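
For readers who think better in code, here is a rough sketch of how one might represent those nodes and a crude "phon-haptic distance" between two vowels: separation of the nodes in the visual field, boosted when the touch types also contrast. The coordinates, touch types, and weighting below are invented for illustration; the actual EHIEP node placements are defined pedagogically, not numerically.

```python
from math import dist

# Invented, illustrative node positions (x, y in the speaker's visual field)
# and touch types; NOT the actual EHIEP specification.
NODES = {
    "i": {"pos": (0.8, 1.6), "touch": "tap"},      # high front, tense
    "I": {"pos": (0.6, 1.4), "touch": "brush"},    # high front, lax
    "u": {"pos": (-0.8, 1.6), "touch": "push"},    # high back, tense
    "U": {"pos": (-0.6, 1.4), "touch": "scratch"}, # high back, lax
}

TOUCH_CONTRAST_BONUS = 0.5  # arbitrary weight for a differing touch type

def phon_haptic_distance(a: str, b: str) -> float:
    """Spatial separation of two nodes, boosted if their touch types differ."""
    spatial = dist(NODES[a]["pos"], NODES[b]["pos"])
    contrast = TOUCH_CONTRAST_BONUS if NODES[a]["touch"] != NODES[b]["touch"] else 0.0
    return spatial + contrast

if __name__ == "__main__":
    # [i] vs [I]: tiny articulatory difference, but distinct nodes plus contrasting touch
    print(round(phon_haptic_distance("i", "I"), 2))
    # [i] vs [u]: large separation across the visual field
    print(round(phon_haptic_distance("i", "u"), 2))
```

The design point the sketch tries to capture is simply that a pair the ear hears as "the same" can still be kept far apart in the body: distance and touch do the discriminating work that audition alone does not.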

Sunday, October 14, 2012

In your ears!!! (Not for accurate sound discrimination!)


We have long recommended that learners NOT use headsets when working on pedagogical movement patterns--and also go easy on that practice in general sound discrimination work. (For one thing, their arms get tangled up in the cords!) Now there is an empirical study that adds a little support to that principle. As reported in Science Daily, Okamoto and Kakigi of the National Institute for Physiological Sciences, Japan, along with Pantev and Teismann from the University of Muenster, have demonstrated that listening to loud music with mini earphones may have a detrimental effect on the ability to make fine judgements in sound discrimination. Although the "damage" was not detectable using standard hearing tests, the effect was striking with their more sensitive instrumentation. They termed the effect one of losing perception of "vividness" in contrast.

The impact would then be even more "pronounced" with a learner who does not have good sound discrimination ability in the first place--especially one who plays his or her mp3 player at levels well beyond "vivid!" On the other hand, the learner may be cranking up the volume to compensate for a lack of perceived vividness--especially men, with the typical age-related loss of high-frequency response. So, help students learn to carefully manage the volume of their recorded pronunciation practice and the rest of their mp3-ing. Sound advice.

Saturday, November 26, 2011

Pronunciation discrimination: Say it right now . . . hear it later (and vice versa)


There have been a few studies suggesting that pronunciation training improves aural discrimination and aural comprehension, including this one from 2001 by Ghorban and others. Most research in this area in the last three decades, for a number of reasons, has focused on the opposite direction: the effect of comprehension on pronunciation or intelligibility.

One of the primary goals of HICP work is to prepare learners to be more effective "kinaesthetic" listeners who can continue improving their pronunciation beyond the classroom, essentially by being able to quickly capture the felt sense of the models they hear--and play them back haptically using the basic protocols--so that they can either "record" them in memory or discard them.

Kinaesthetic monitoring and listening are generally seen as the final benchmarks of EHIEP training. At will, learners can "feel" phrases or sentences in their bodies, whether listening to themselves or to someone else. Kinaesthetic monitoring generally does not interfere with conversation, allowing one to detect errors in performance and deal with them later, rather than right at that instant. Kinaesthetic listening works the same way, like an independent flash drive that allows later recall and redo. Got a good feel for what we're saying here? If so, it'll sound even better later.