Thursday, December 5, 2013

Why "Out of body" haptic pronunciation teaching!

This post is a bit long, but also long overdue. Short answer: "Haptic Video Bill" is at least better than you are!
Clip art: Clker

As we get ready to launch AH-EPS v2.0 (Acton Haptic English Pronunciation System), I was reminded of one of the most important FAQs: Why use video (of me, in v2.0!) to train students to do the pedagogical movement patterns initially, rather than do it yourself in front of the class?

There are a couple of generally unspoken reasons why instructors may resist converting to haptic (or more kinaesthetic) pronunciation teaching: either (a) the assumption that "I can do it better than video!" or (b) the feeling that "I just do not like drawing attention to my body when I'm teaching--or anytime." I used to think the latter was more of a (Western) cultural issue. See Fox's nice 1997 summary of research on body image, which establishes it as a more universal phenomenon.

As we have seen in decades of experience with kinaesthetic techniques in this field, the latter is unquestionably the case, even when requiring just a discreet tapping out of rhythm or word stress on the desk. For some, that simply demands too much coordination, brain integration--or risk taking. All I have to do is ask one question of a trainee: Do you like to dance? From that I can predict at least how quickly he or she will "get" kinaesthetic and haptic work. Finding a successful (technology-based) approach to that obstacle has been key to the effectiveness of the AH-EPS project.

In a highly publicized 2011 study of "out-of-body experience," it was observed that, although we all may experience such momentary sensations, those who have serious, recurrent episodes have particular difficulty in adopting ". . . the perspective of a figure shown on the computer screen." (That is, performing the movement or posture in mirror image to the model on the screen.)

One early discovery in AH-EPS work was that the video model had to be presented in mirror image, so that when the model moved to the learner's right, for example, the learner would move in the same direction, simultaneously. Doing that on your own, modelling the gestures in person in class, at least in training, is--to put it mildly--very "cognitively complex!" I now rarely, if ever, attempt to train students in person, face to face; I am SO much better on haptic video! (With apologies to Brad Paisley!)

The research and clinical reports on why that should be the case in "body training" and body-based therapeutic systems are extensive. (If you are interested, I would be glad to share that with you. It is pretty well unpacked in the v2.0 AH-EPS Instructor's Guide.)

AMPISys, Inc. 
Once students are "trained by the video," however (a process taking perhaps 15 minutes), an instructor or peer can then easily use the pattern for anchoring presentation or correction. For example, the training for the vowel system covers 15 vowels of English.

A correction of a mispronunciation, on the other hand, involves using the pedagogical movement pattern (PMP) for typically just one vowel--a quick "interdiction," as we call it, lasting maybe a minute at most. In that case, the PMP is performed as the model is spoken, or as the learner practices the new or enhanced pronunciation of the word or phrase, 3 or 4 times.

That was . . . quick!

Keep in touch!
