Showing posts with label Pedagogical movement patterns. Show all posts

Wednesday, March 14, 2018

Teaching English L2 advanced conversation (with hand2hand prosodic and paralinguistic "comeback")

Clker.com
We'll be doing a new workshop, "Pronunciation across the 'spaces' between sentences and speakers," at the 2018 BCTEAL Conference here in Vancouver in May. Here is the summary:


This workshop introduces a set of haptic (movement + touch) based techniques for working with English discourse-level prosodic and paralinguistic bridges between participants in conversation, including key, volume and pace. Some familiarity with the teaching of L2 prosodics (basically: rhythm, stress, juncture and intonation) is recommended.

The framework is based to some extent on Prosodic Orientation in English Conversation, by Szczepek-Reed, and new features of v5.0 of the haptic pronunciation teaching system: Essential Haptic-Integrated English Pronunciation (EHIEP), available by August, 2018. The innovation is the use of several pedagogical movement patterns (PMPs) that help learners attend to the matches and mismatches of prosodics and paralanguage between participants in conversation that create and maintain coherence and . . . empathy across conversational turns.

For a quick glimpse of just the basic prosodic PMPs, see the demo of the AH-EPS ExIT (Expressiveness) from EHIEP v2.0.

The session is only 45 minutes long, so it will just be an experiential overview or tour of the set of speech-synchronized-gesture-and-touch techniques. The video, along with handouts, will be linked here in late May.

Join us!





Sunday, May 29, 2016

Why does haptic pronunciation teaching work?

Good question! Here is an excerpt from the new Haptic Pronunciation Teaching - English (HaPT-E) Instructor Notes. (If you'd like to preview the first 2 modules of the course (no charge) and get a free copy of the Instructor Notes, contact: info@actonhaptic.com)

Essential Haptic-Integrated English Pronunciation (EHIEP):
  • Provides a principled way to integrate body movement into pronunciation teaching, "embodying" a number of techniques commonly used--some consciously, some less so--and emphasizing the importance of the kinesthetic, “felt sense” of fluent body movement and speech. 
  • Is HAPTIC!, using touch to make the use of gesture systematic, consistent, focused and (relatively) "safe" and nonthreatening.
  • Focuses on intelligibility and fluency, not just accuracy, but can be used for accent reduction, if desired.
  • Integrates basic voice training and public speaking skills--especially vocal resonance training--so that some improvement in vocal production is noticed relatively quickly by the learner.
  • Uses vowels as the conceptual center of the presentation and practice system, establishing a conceptual and sensory space matrix in which (1) sounds and processes can be learned and adjusted, and (2) production can be consciously regulated better.
  • Is structured so that almost anyone, regardless of native language or learning style, can learn it or learn to teach using it.
  • Hooks learners on the process so that they do their homework! (If done right, it is stimulating and refreshing, especially when done for at least 30 minutes, every other day!) 
  • Involves a set of basic, easy to learn exercises and techniques (warm up, vowels, word stress, rhythm and intonation) that are then integrated into classwork as the need arises. Seems especially effective in doing impromptu, incidental correction and modeling of pronunciation in classroom instruction.
  • Balances conscious analysis and “noticing” with contextualized drill and controlled practice; balances energizing, motivating activities with controlled, focused procedures.
  • Is a more output-based system, encouraging earlier “safe” speaking and oral production than many contemporary methods do.
  • Is based on research from several fields in addition to pronunciation teaching, including public speaking, drama, music, haptics, sports training, psychology and neuroscience. 
  • Has been classroom tested over the last decade by hundreds of teachers. (Several empirical studies are now underway to better establish the effectiveness of the EHIEP method on more rigorous, "scientific" grounds!)  
See also the YouTube summaries of the main modules from v3.0 (not great video quality, but reasonably informative). 

Wednesday, February 3, 2016

Gestured pronunciation instruction: Better online?

Clker.com
It is now well-established in several fields that "Students learn more when their teacher has learned to gesture effectively" (Alibali, Young, Crooks, Yeo, Wolfgram, Ledesma, Nathan, Breckinridge-Church and Knuth, 2013). In pronunciation work, use of "live" models is typically limited to either "talking heads," often zeroing in on the mouth, or a recording of an instructor presenting something resembling a typical lesson with explanation and practice. If you have never spent some time experiencing some of what is now out there from the learner's perspective, stop for a bit and join us when you have. Most of it is mind-numbing, at best.

Clker.com
Although there is no research that I am aware of focusing in on the specific contribution of video to pronunciation instruction, the assumption seems to be simply that the "better" (the production quality), the more effective. There is a rapidly growing market for web-based, visually compelling teaching of pronunciation.

One of the obvious problems with video-based instruction, especially the more visually captivating, ironically, is the potential for viewers to drop back into "TV-trance-mode," absorbing but not doing much processing or demonstrating meaningful engagement. (There is also a very serious issue with the visual modality overpowering the auditory and kinaesthetic, as well.) In pronunciation work, where re-education of the body is central, not enthusiastically joining "the dance" is a deal breaker . . . One key contribution of gesture to instruction is to create stronger engagement and enhancement of moment-by-moment attention.

A 2014 study, "The effect of gestured instruction on the learning of physical causality problems," by Carlson, Jacobs, Perry and Breckinridge-Church demonstrates how systematic use of gesture by instructors on video can significantly improve learning of another "physical" process. Subjects who viewed the "gesture-articulated" instructor, rather than just the spoken presentation, did better on the post-test. This study is particularly relevant in that it deals with gesture enabling cognition of a very "tactile" concept, that of manipulating gear movement and direction.

AMPISys, Inc.
In haptic pronunciation teaching, as unpacked in several earlier posts, it is apparently the case that not only is gesture with video more effective, but gesture + video + touch is better still. The basic reasons: (a) touch makes gesture more systematic, and (b) it gives gesture more impact, whether performed by the learner or just observed. Furthermore, (c) training learners in haptic-anchored gesture "live," at least initially, is for many, if not most, instructors simply too far outside their comfort zone and "haptic intelligence." (See the Research References page.)

I came up with this system over a decade ago and still use videos (of myself) when introducing students to the basic gestural inventory, or pedagogical movement patterns (PMP). I'm just so much better online . . . (and you will be, too!)

References:
Alibali, M., Young, A., Crooks, N., Yeo, A., Wolfgram, M., Ledesma, I., Nathan, M.,  Breckinridge Church, R. and E. Knuth. (2013). Students learn more when their teacher has learned to gesture effectively. Gesture 13:2, 210–233.
Carlson, C., Jacobs, S.,  Perry, M. and R. Breckinridge-Church. (2014). The effect of gestured instruction on the learning of physical causality problems. Gesture 14:1, 26–45.


Sunday, November 17, 2013

Pay attention to pronunciation!

As reported in earlier posts, no matter how terrific our attempt at pronunciation teaching is, if a learner isn't paying attention or is distracted, chances are not much uptake will happen--especially when haptic anchoring is involved. No surprise there. A new study by Lavie and colleagues of the UCL Institute of Cognitive Neuroscience, focusing on "inattentional blindness" and entitled "How Memory Load Leaves Us 'Blind' to New Visual Information," just reported at Science Daily, sheds new "light" on exactly how visual attention serves learning.

In essence, when subjects were required to momentarily attend to an event or object in the visual field and remember it, their ability to respond to new events or distractions occurring immediately afterward was curtailed significantly. (The basic stuff of hypnosis, stage magicians and texting while driving, of course!)

What is of particular interest here is that, whereas the visual image that one is attempting to focus on can strongly exclude other competing distractions, that effect works precisely the other way around in haptic-integrated pronunciation instruction. It helps explain the potential effectiveness of pedagogical movement patterns of EHIEP and AH-EPS:

  • Carefully designed gestures across the visual field 
  • Performed while saying a word, sound or phrase 
  • With highly resonant voice, and
  • Terminating in some kind of touch on a stressed vowel, what we term "haptic anchoring." 
It also explains why insightful and potentially priceless comments from instructors, coming in too close proximity to vivid and striking pronunciation-related "visual events" . . . may not stick or get "uptaken!" 
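The four-part anchoring sequence above lends itself to a small data-structure sketch. Purely as an illustration (the class and field names below are my own, not EHIEP terminology, and the example word is arbitrary):

```python
from dataclasses import dataclass


@dataclass
class HapticAnchor:
    """One PMP-style anchoring event: gesture + speech + resonant voice + touch.

    Field names are illustrative only, not taken from the EHIEP materials.
    """
    utterance: str        # the word, sound or phrase being spoken
    gesture_path: str     # the gesture across the visual field
    resonant_voice: bool  # performed with a highly resonant voice
    touch_on: str         # the stressed syllable where the touch lands

    def is_well_formed(self) -> bool:
        # The touch must coincide with a syllable of the spoken utterance,
        # and the voice must be resonant, per the sequence described above.
        return self.resonant_voice and self.touch_on in self.utterance


anchor = HapticAnchor(
    utterance="baNAna",
    gesture_path="mid-left to upper-right",
    resonant_voice=True,
    touch_on="NA",
)
print(anchor.is_well_formed())  # True: the touch lands on the stressed syllable
```

The point of the sketch is simply that all four components must co-occur in a single timed event; dropping any one of them (no touch, no resonance) breaks the anchor.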

See what we mean? 



Monday, August 26, 2013

What comes first? Speaking confidently or confident speaking?

I know . . . trick question. A recent Facebook post by the seriously "positive" Tim Murphey got me thinking. He was commenting on a study commented on by Lynn McTaggart at Positive News.org.uk, commenting on a study done by Michigan State University researchers. (One of my alma maters, by the way, so it must be true!) The point of the article was that people speak in public more confidently when they think about others in their group and not just about how nervous they are. Murphey's point is that when we are connected, we are confident. (In the original study, however, they seem not to have controlled for intentional mental focus on anything other than stage/speaking fright--a near-fatal flaw, since that effect is well-established by research and practice in several fields.)

Acton Haptic -
English Pronunciation System
Mea culpa. I tend to be a little skeptical about claims in "confidence before competence" models, especially in pronunciation teaching. An interesting 2007 doctoral thesis by Montha Songsiri of Victoria University, nonetheless, demonstrated, at least in part, how pedagogy can indeed engender confidence in speaking that appears to show up in greater intelligibility and more accurate pronunciation.

And then recently I did a 10-day intensive speaking/pronunciation/accent reduction program using a modified version of the AH-EPS system with pre-MBA nonnative speakers--and may have watched it happen: Beginning with a great deal of speaking in public (oral reading and highly formatted interactions, coupled with public speaking confidence tricks such as posture, breathing)--and concentrating on something other than performance anxiety--seemed to "work!" (In this case, the pedagogical movement patterns of AH-EPS to some extent, I assume.) Where participants' improved pronunciation came from exactly and so quickly is, of course, impossible to say, but the degree of reported improvement alone was almost surprising.  But I am confident in speaking from that perspective, of course! 

Wednesday, March 6, 2013

Intrusive and proactive pronunciation instruction (IPPI!)

Clip art: Clker
Love that acronym! Over the life of the blog there have been several posts that relate to exercise persistence. What comes out of that research, from several disciplines, is the idea that advising and helping students manage their time is a very good idea. In this Science Digest summary of MA thesis research by Kansas State student Tennant, the effect of intrusive and proactive advising and engagement with freshman college students is striking. (US universities are rediscovering the importance of student retention in these difficult economic times, apparently.) Although Tennant's work focuses primarily on at-risk students, the implications for our work are clear: resources and energy spent on assisting students in managing life and study outside of class pay off.

What do you know about how your students study and practice pronunciation on their own? (For that matter, what do you know about their life outside of class?) The ambivalence that we all deal with between learner autonomy and empowerment on the one hand--and motivating (or cajoling) them to do the homework that we have assigned for their own good on the other . . . reflects where the field is today. The position that there could possibly be one basic pronunciation program that "fits all"--and that it could be integrated into general speaking and listening instruction--seems very much a throwback to earlier structuralist language teaching.

We have learned a great deal since the 50s about method design and what constitutes the range of strategies and technologies that can be applied to the process. The AH-EPS approach is to (A) use the basic phonological structures of the language as a standard point of departure for enhancing and integrating learners' ability to learn new sounds and vocabulary, and (B) to carefully prescribe a framework for what should go on between formal classes (or while working with a haptic video independently). That framework involves both fixed warm-ups and pedagogical-movement routines associated with L2 sound features, and, most importantly, staged extension to learners' individual needs and current program of study.

In a classroom setting that means training both instructors and learners to use a set of techniques for presenting, correcting, remembering and recalling what should be integrated into spontaneous speaking, listening, reading and writing. IPPI! (or perhaps, H-IPPI!)  

Monday, December 31, 2012

Can't see how to say it right? (Self-reflective, visual-soma-kinaesthetic correction of mispronunciation)

So you try to demonstrate with your face and mouth how a learner should be pronouncing a vowel, for example--and it simply does not work. In fact, the mispronunciation may just get worse. New research by Cook of City University London, Johnston of University College London, and Heyes of the University of Oxford (summarized by Science Daily) may suggest why: visual feedback of the difference between one's facial gesture and that of a model can be effective in promoting accommodation; simple proprioceptive feedback (i.e., trying to connect the correct model with the movements of the muscles in your face, without seeing what you are doing simultaneously) generally does not work very well. Amen, eh.

I have had students whose brains are wired so that they can make that translation easily, but they are the exception. The solution? Sometimes a mirror works "mirror-cles;" some new software systems (noted in earlier blogs) actually do come up with a computer simulation that attempts to show the learner what is going wrong inside the mouth and what should be happening instead--with apparently very modest, but expensive, results.

Clip art: Clker
The EHIEP approach is to anchor, early on, the positioning and movement of the jaw and tongue to pedagogical movement patterns of the arms and hands. From that perspective it is relatively easy, at least for vowels, stress and intonation (and some consonants), to provide the learner with visual, auditory and proprioceptive feedback simultaneously, showing both the appropriate model and how the learner's version deviates. (In fact, in some correction routines, it is better to anchor the incorrect articulation first, before going to the "correct" one.) In effect, "(Only if) Monkey see (him or her mis-speak), (can) Monkey do (anything about it!)"


Wednesday, December 5, 2012

Effortless learning of the IPA vowel "matrix" of English?

Image: Wikipedia
Could be, according to 2011 research by Watanabe at ATR Laboratories in Kyoto and colleagues at Boston University, as summarized by Science Daily--using fMRI technology in the form of neurofeedback tied to carefully scaffolded visual images. Mirroring what appears to go on in real time, in the experiment it was evident that " . . . pictures gradually build up inside a person's brain, appearing first as lines, edges, shapes, colors and motion in early visual areas. The brain then fills in greater detail to make a red ball appear as a red ball, for example."

This is an intriguing idea, something of a "bellwether" of things to come in the field, using fMRI-based technology joined with multiple-modality features to facilitate acquisition of components of complex behavioral patterns. The application of that approach to articulatory training alone, assembling a sound, in effect, one parameter at a time, just the way it is done by expert practitioners--should be relatively straightforward.

Clip art: Clker
The EHIEP vowel matrix resembles the standard IPA matrix on the right, except that it is positioned in mirror image and includes only the vowels of English. In training learners to work within it, we do a strikingly similar build-up to that identified in the study: lines < edges < shapes < motion (which is different for each vowel). Each quadrant is then given a colour that corresponds to something of the phonaesthetic quality of the vowels positioned there. Once the "matrix" is kinaesthetically presented and practiced, it is then gradually, haptically anchored as the vowels are presented and practiced using distinct pedagogical movement patterns terminating in some form of "Guy or Girl touch" for each as the sound is articulated.
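As a rough sketch of what such a quadrant structure might look like in code--the vowel groupings and colour assignments below are illustrative guesses only, not the actual EHIEP matrix:

```python
# Hypothetical four-quadrant vowel matrix (mirror image of the IPA chart,
# English vowels only). The quadrant-to-vowel and quadrant-to-colour
# assignments here are assumptions for illustration, NOT EHIEP's.
VOWEL_MATRIX = {
    "upper_left":  {"colour": "blue",   "vowels": ["u", "ʊ"]},  # high back, mirrored
    "upper_right": {"colour": "green",  "vowels": ["i", "ɪ"]},  # high front, mirrored
    "lower_left":  {"colour": "red",    "vowels": ["ɑ", "ɔ"]},  # low back, mirrored
    "lower_right": {"colour": "yellow", "vowels": ["æ", "ɛ"]},  # low front, mirrored
}


def quadrant_of(vowel: str) -> str:
    """Return the quadrant (hence colour and PMP region) for a given vowel."""
    for name, cell in VOWEL_MATRIX.items():
        if vowel in cell["vowels"]:
            return name
    raise KeyError(f"{vowel!r} is not in the matrix")


print(quadrant_of("ɪ"))  # upper_right
```

The design point the sketch tries to capture is that every vowel lives at a fixed address in one conceptual/sensory space, so sound, colour and gesture can all be looked up from the same location.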

Out of the box? Not for long, my friends!


Friday, November 23, 2012

Do-it-Yourself! haptic-integrated pronunciation teaching


Clip art: Clker
Haptic work is, by definition . . . touching! As explored in several previous posts, there is a wide range of conditions under which haptic anchoring of movement, visual images and sound may or may not be effective in instruction. (According to new research by Patterson and colleagues at the University of Leicester, summarized by Science Daily, there may even be a bias in favor of those of us over the age of 65 in responding to the typical "fuzziness" of haptic cinema!)

One of the most striking discoveries in our work has been the realization that some of the EHIEP pedagogical movement patterns can be taught well face-to-face, but others may be better introduced by a video model--especially vowels, vowel "compaction" and intonation. That video model can be the instructor him- or herself, or someone else--such as in the EHIEP system of videos and student workbooks that I am developing, of course! Why that should be is complex but understood (see this blogpost by Grant on http://filmanalytical.blogspot.ca/).

In essence, it is emotionally and interpersonally very powerful. In some contexts, either because of the personality of the instructor or of the class, video is a better option for perhaps half of the PMPs. One reason for that is the impact of eye contact on mirroring in a classroom setting. Vivid, "moving" visual feedback from students, whether negative or positive, can dramatically undermine an instructor's ability to teach PMPs. Once they are introduced, however, classroom use of a PMP to anchor vowels, stress, rhythm, intonation or pitch/volume/pace seems to be less susceptible to disruption.

Bottom line: It takes training to do pronunciation work of any kind effectively or efficiently. Either you get trained or have somebody else do it for you, either in your program or through technology. Haptic video and its post-production technology is very promising. I am tempted to use a term like "CAPT Video," Computer-Assisted-Pronunciation-Teaching with Video, were there not already a near-relevant song by that name .  .  .  

Sunday, November 18, 2012

Got an itch to teach pronunciation?

Clip art: Clker
This is fun. Several of the pedagogical movement patterns in the EHIEP system involve either scratching (or brushing) one hand with the fingernails (or just fingers) of the other hand as the sound is articulated. We have known for some time that when it is demonstrated by the instructor (on video) and learners are asked to mirror that movement, the pattern catches on very quickly. Now we know why. Research by Holle, Warne, Seth, Critchley and Ward of the Universities of Sussex and Hull (abstract on the PNAS website) even suggests which personality trait might respond more readily to seeing someone else scratch an itch: neuroticism (the tendency to respond disproportionately to negative emotions).

Research on mirror neurons alone demonstrates just how powerful the impact of witnessing movement or gesture by another person can be. In this study the extension to tactile/touch is important for understanding just how haptic-integrated pronunciation instruction works, especially the potential effectiveness of pronunciation-based haptic anchors (gestures that include the hands touching as the stressed syllable of a word is spoken).

Not sure exactly how neuroticism figures in, but in some of the protocols (sets of training techniques) we do use contrasting sets of positive and negative terms, anchored on opposite sides of the body or visual field, e.g. tough/nice, tricky/easy, puzzling/beautiful, complicated/fascinating. The "negatives" may actually resonate more with some! So don't be too concerned if you get an itch to get "tough" on your potentially neurotic students or colleagues who are critical of our work, who see it as too puzzling, tricky or complicated . . . 

Thursday, November 15, 2012

FLASH! Conscious suppression of pronunciation work!

Clip art: Clker
Continuous Flash Suppression (CFS) technology could well be in the future of pronunciation teaching, based on research by Hassin, Sklar, Goldstein, Levy, Mandel and Maril at Hebrew University, as reported in Science Daily. CFS is described as " . . . one eye is exposed to a series of rapidly changing images, while the other is simultaneously exposed to a constant image. The rapid changes in the one eye dominate consciousness, so that the image presented to the other eye is not experienced consciously." What they discovered was that the material not experienced consciously was still processed and responded to non-consciously in various ways.

Their conclusion: " . . . humans can perform complex, rule-based operations unconsciously, contrary to existing models of consciousness and the unconscious." Avoiding conscious interference with pronunciation change is big. Now that may sound like a candidate for your "Well . . . duh!" file (A finding that is not only common sense but probably not worth the grant money blown on coming up with it.) Two important developments here, however:

  • First, so much of what happens between instruction and spontaneous performance in pronunciation work is unconscious--or at least not the subject of research today. Even the focus in HICPR on the "clinical" is still a relative "outlier" in this field, although not in some related disciplines. We should be able to study that more systematically. 
  • Second, all methodologists assign a great deal of the work to the "dark side," whether they make that explicit (consciously) or not, some more than others, such as Lozanov . . . or Acton! We need to stop suppressing the use of several great techniques that have been proven by experience to work the subconscious effectively.

Would love to get ahold of some of that CFS technology and try it out with haptic anchoring of academic word list vocabulary in time for TESOL in Dallas. Just imagine the impact of a pedagogical movement pattern accompanying the "constant" image of the acronym "CFS." Hard to suppress the excitement already . . .   

Sunday, November 4, 2012

Anchoring pronunciation: Do you see what you are saying?


Clip art: Clker
You can, in fact--if you are pronouncing a sound, word or phrase using EHIEP-like pedagogical movement patterns, PMPs (gestures across the visual field terminating in some form of touch by both hands). Not only CAN you, according to research by Xi and colleagues at Northwestern University, summarized by Science Daily, but your eyes strongly interpret for you the "feeling of how it happens." The visual "character" of the dynamic gesture (its positioning, fluidity, distance from the eyes and texture on contact with the other hand) may well override the actual tactile feedback from your hands and the proprioceptive "coordinates" of movement from your arms.

In the study, subjects were simultaneously presented with video clips that slightly contradicted what their hands and arms were doing. It was clearly demonstrated that even though subjects were also instructed to ignore the video and concentrate on the actual positioning, movement and related information about touch and weight coming from the hands, the "eyes have it." What they were seeing reinterpreted the other incoming sensory data.

As noted in earlier posts, visual can often override other modalities. What is "new" here, and contributes to our understanding of how and why haptic integration works, is that the subjects' perception of the EHIEP sound-touch-movement "event" would appear to be strongly influenced by the style, flair, precision and consistency of the PMP. That has been one of the key problems in creating the video models: insufficient clarity and consistency in the execution of PMPs (by me!)

This is both good news and bad news. Good, in that the PMP is indeed potentially a very powerful anchor--and the visual "feel" of each can contribute substantially to anchoring effectiveness. Bad, in that for maximal effectiveness the video/visual model needs to be exceedingly precise and consistent. (I have explored the use of avatars instead of me, but there are even bigger potential issues there.) Preparing/getting in shape now to do a new set of videos after the holidays, based on this and similar research. Can't wait to see what those feel like!

Wednesday, October 31, 2012

"Couch potato" pronunciation learning


Clip art: Clker
So what if some students, for whatever reason, cannot or decide not to participate in your choral drills or (from a haptic perspective) "move" along with the model on the video or mirror your movements as you try to correct a mispronunciation? According to science writer Paul, that may not be as much of a problem as you might think. Apparently, your more passive learners, "couch potatoes," are capable of getting it, too--with a few conditions attached. Research cited by Paul suggests that it is helpful if they have previously been at least exposed to the movement pattern, and even better if they have actually been through it physically in some manner. In addition, if they know what to expect or what is coming, they may pick up more as well. (In one experiment, subjects who simply lay still during an fMRI, so their brain activity could be monitored, while thinking about a coming test on what they were about to watch, showed both increased activity in related motor areas and enhanced retention of the movement patterns later.) But then this final challenge to the more "unmoved":

"Lastly, Grafton of UC-Santa Barbara notes that as valuable as watching others can be, multiple studies have shown that 'the benefit from learning by observing is never as strong as advantages derived from physical practice.' With apologies to the couch potatoes out there, sometimes you just need to get up and dance."

Of course, the irony here is that EHIEP uses video clips (the virtual breeding ground of couch potatoes) as the basis of instruction. Turns out that, if done right, the "medium" can indeed still be experienced as the "massage" (and not just the message) as well!

Saturday, October 27, 2012

Sound mirroring of pronunciation: Trick or treat?

Clip art: Clker

In keeping with the spirit of the season, how about this title of a summary from Science Magazine: "Why creepy people give us chills!" It reports research by Leander and colleagues at the University of Groningen in the Netherlands. Because mirroring of pedagogical movement patterns (PMPs) is central to EHIEP training, insight into what influences learner response--(i) when they are mirroring video models, (ii) when they mirror a "live" instructor, or (iii) when they, themselves, are mirrored during correction of pronunciation--is very important.

Quoting Finkel of Northwestern University, "The study . . . effectively combines several hot research topics, from behavioral mimicry to embodied cognition, the idea that humans can feel their emotions in very physical ways." One key (not surprising) finding was that, " . . . people who fail to appropriately imitate the mannerisms of others during social interactions can actually make their peers feel colder—" Without going into the details of the experiment, in one condition, subjects actually DID report feeling colder, literally!

Now, not that any instructor doing haptic-integrated work could even be un-empathetic, but the subtle impact of mirroring (effective or ineffective) has been the subject of several earlier posts. One of the principles that has emerged--as strange as it may sound--is that having students mirror a video in initial training is generally preferable. (That can be a simple video created by the instructor him- or herself--or, later, one that we'll be making publicly available.) Likewise, subsequent use of mirroring of PMPs in correction must be done appropriately as well. It is a cool (but not creepy) trick that almost always treats the problem efficiently! 

Thursday, October 25, 2012

Haptic entrainment: Why haptic works 2


Clip art: Clker
May do a series of research updates on "Why haptic works!" Following up on the previous one relating to grapheme-phoneme linkage, here is another connection. Research by Matthews, Beckman, Fabiani and Gratton of the University of Illinois, reported by Science Daily, has demonstrated that subjects showing stronger "alpha" brain waves tend to be better at learning how to play a new video game. Alpha wave states have been associated with a wide range of behaviours and dispositions. Another way to modulate alpha wave intensity is through "entrainment," using various kinds of meditative or haptic-based body movement exercises. The pedagogical movement patterns, accompanied by vocal production, of the EHIEP system qualify as entrainment. Although I have not verified the impact on alpha wave frequency with fMRIs on students, the effect on general concentration, relaxed composure and attention is always evident and consistent. In fact, haptic integration and anchoring is often so enjoyable that we should perhaps coin a new term for it: "ENTERTRAINMENT!" 

Sunday, October 14, 2012

In your ears!!! (Not for accurate sound discrimination!)


Clip art: Clker
Have long recommended that learners NOT use headsets when working on pedagogical movement patterns--and also go easy on that practice in general sound discrimination work. (For one thing, their arms get tangled up in the cords!) Now there is an empirical study that adds a little support to that principle. As reported in Science Daily, Okamoto and Kakigi of the National Institute for Physiological Sciences, Japan, along with Pantev and Teismann from the University of Muenster, have demonstrated that listening to loud music with mini earphones may have a detrimental effect on the ability to make fine judgements in sound discrimination. Although the "damage" was not detectable using standard hearing tests, the effect was striking with their more sensitive instrumentation. They termed the effect one of losing perception of "vividness" in contrast. The impact would then be even more "pronounced" with a learner who does not have good sound discrimination ability in the first place--especially one who plays his or her mp3 player at levels well beyond "vivid!" On the other hand, the learner may be cranking up the volume to compensate for lack of perceived vividness--especially men, with their typical loss of high-frequency response with age. So, help students learn to carefully manage the volume of their recorded pronunciation practice and the rest of their mp3-ing. Sound advice. 

Sunday, October 7, 2012

Hearts and hand grenades: Why students must like you using movement in pronunciation teaching!


Clip art: Clker
Now here is a very interesting, relevant study. According to Aziz-Zadeh and colleagues at USC (summarized by Science Daily, of course!), if your students like you, they will mirror your movements more accurately--and enjoy doing it. Not only will they be able to "lock on" better, they will also perceive your actions to be relatively faster than if they liked you "less." Studies demonstrating the impact of attitude toward the speaker on perception of the message have been around for decades, but this one demonstrates how that happens--how it affects the observer's response.

This "I like the way you move, there!" effect is, in part, behind the use of video as the "lead instructor" in EHIEP work. Learners are initially oriented to and trained in the protocols (sets of procedures that teach one or more techniques that can be used in the classroom or for independent study) in short, aerobic-training-like videos. (Currently, I am the model, but we will replace me before long!)  Getting to that strategy took over a decade of experience with training ESL/EFL teachers in how to do selected techniques themselves in front of the class. What we discovered was that most trainees could learn to do the techniques easily but the results when they took them back to the classroom were mixed, at best. Once the entire system was in place we could begin to see why a particular strategy did or did not work.

One thing became obvious: the relationship between the instructor and learner was crucial, from several perspectives. Having someone mirror your movements is, in many respects--as reported in previous blog posts--something that requires a degree of rapport and empathy, obviously something many students may not buy into! Ironically, a technique's failure to work could be due to either a lack of "liking" or excessive "liking." Either one. Going in the opposite direction from the USC research, being "too close" to a student or students in front of you can not only cause you to look at them too often but can also easily disrupt your ability to execute and monitor the pedagogical movement patterns in play.

The solution: have a video model do the critical initial training--and then the instructor and students can use the PMP as necessary in presenting, correcting, monitoring and recalling a sound or word or phrase with a "repaired" sound in it. You're gonna like EHIEP (or the instructional videos you create yourself, even of yourself)--and so will your students.

Tuesday, October 2, 2012

Haptic bonding: connecting new or modified L2 pronunciation back to visual images of words or graphemes


Clip art: Clker
Haptic bonding! I love that term! It has been common practice with children to use tactile engagement in pre-reading work, helping them link sounds with graphemes. The same ideas have also been applied widely in rehabilitation, but the underlying mechanisms involved have not been well understood. A fascinating--and very relevant--study by Gentaz and colleagues at the Laboratoire de Psychologie et Neurocognition in Grenoble (CNRS/Université Pierre Mendès France de Grenoble/Université de Savoie), Learning of Arbitrary Association between Visual and Auditory Novel Stimuli in Adults: The “Bond Effect” of Haptic Exploration, summarized by Science Daily, demonstrated that " . . . When visual stimuli can be explored both visually and by touch, adults learn arbitrary associations between auditory and visual stimuli more efficiently." And there you have it!

Friday, September 28, 2012

Paying attention to pronunciation - II (the FBI approach)

Clip art: Clker
Following up on the previous post, it appears that a little non-attention is perfectly normal--in fact, essential. A study by Constantino, Pinggera, Paranamana, Kashino, and Chait of the UCL Ear Institute, "Detection of appearing and disappearing objects in complex acoustic scenes," summarized by Science Daily, demonstrates how the brain prefers to attend to novel sounds and may often not even notice the absence of sounds in the background. That explains, in part, why an experienced instructor can often hear one "deviant" sound segment being produced by one student in a class of 30. The question is: why should we occasionally bother to stop and briefly do a choral (full-body) interdiction (FBI) for just one "problem"? By "FBI" I mean having students do the pedagogical movement pattern (PMP), which generally includes articulating the sound along with an upper-body movement or gesture of some kind. For the 29 who already have an acceptable version of the sound, the PMP serves to momentarily reestablish (for the required 3 seconds!) what we might call "somatic speech awareness," in which sound production can generally be monitored during speaking without seriously interfering with things like . . . thinking, while at the same time, for some, defusing anxiety and promoting relaxation. And the beauty of that is, of course, you probably won't hear that one "error" again either!

Friday, September 21, 2012

Sweeten your pronunciation work to get it moving again


Clip art: Clker
Ever been tempted to try using M&Ms to motivate students in your pronunciation work? (I have used that treatment in other contexts quite successfully, in fact.) New research by DiFeliceantonio of the University of Michigan suggests that there is an interesting connection between the desire to overindulge in eating sweets, for example, and the neostriatum, an area of the brain earlier associated primarily with movement. The Science Daily summary even notes a "moving" occasion: " . . . what happens in our brains when we pass by our favorite fast food restaurant and feel that sudden desire to stop." (Emphasis mine.) As other research has demonstrated recently, there are often very direct connections between the metaphors we use and physical sensations and events. (See earlier posts on textural metaphors, for example.) Maybe the more important effect of handing out a few M&Ms before class is just to get neostriatums in gear. In haptic-integrated work, readiness for performing and perceiving pedagogical movement patterns is essential. And at 3.4 calories per M&M, in a couple of minutes you can almost certainly burn off enough calories with a few PMPs to come out even. Sweet.