Research on mirror neuron function (even in monkeys, according to the Association for Psychological Science - see full citation below!) has important implications for the use of gesture in teaching, especially pronunciation. Normally, our mirror neurons mimic observed movement, giving us something of the sensation that we are actually doing what we are seeing, or perhaps moving along in synchrony with a person in our visual field. (Watch the audience at a dance recital, most discreetly "moving along with" the dancers.) That should, in principle, make gesture a potentially powerful vehicle for instruction. For most it probably is; for some, it isn't.
There are any number of reasons why gesture may not be that effective or why some learners and instructors simply do not feel comfortable with much "co-gesticulation." After decades of wondering exactly why gestural techniques were not more generally adopted (and adapted) in pronunciation instruction--when it was so natural and easy for me, personally--I got an answer from a student: the REVEG and ADAEBIP effects.
Rui was what I would term extremely "visually eidetic," meaning that she had a near-photographic memory, such that if she learned a gesture in one position in the visual field and an instructor then performed that motion even slightly off the original pattern, she could not process it, or at least became very frustrated. Likewise, even looking in a mirror at herself performing gestures was maddening, since she, too, could not consistently move her hands along precisely the same track.
That encounter was a game changer. Within six months the EHIEP methodology had been changed substantially.
Since then we have encountered any number of learners who appear to have varying degrees of "REVEG" (Rui's extreme visual-eidetic "gift") or "ADAEBIP" (aversion to doing anything potentially embarrassing with your body in public!). What that means is that for them, whatever the underlying cause, being required to mimic someone's gesture with any degree of accuracy can be maddening, near-traumatic or impossible.
The solution, at least in part, has been "haptic"--using touch to anchor the patterns to roughly the same locations in the visual field--while also anchoring the stressed syllable of a targeted word or phrase with touch at the same time. In addition, instead of the sometimes "wild and crazy" or "over the top" spontaneous gesturing used by some instructors, the idea is now to use highly controlled, systematic, "tasteful" and regular movements for pointing out, noticing, anchoring, and homework.
In fact, one of the advantages of using video models in EHIEP (as in AHEPS, v3.0) is that at least the patterns students are trained on are consistent. In that way, when a pattern (what we call a "pedagogical movement pattern") is used later in working on "targets of opportunity," such as modelling and correction, learners tend to be more accepting of the instructor's slight deviations from the "standard" locations.
In general, haptic anchoring of patterns (PMPs) tends to keep positions of prescribed gestures within range even for the more REVEG among us. Extreme accuracy in actually producing the PMPs in practice and anchoring is really not that critical for the individual learner.
So, if gesture work is still not within your perceptual or comfort zone, we may have a (haptic) work-around for you. Keep in touch.
Association for Psychological Science. (2011, August 2). Monkey see, monkey do? The role of mirror neurons in human behavior. ScienceDaily. Retrieved December 10, 2014 from www.sciencedaily.com/releases/2011/08/110801120355.htm