Monday, August 29, 2011

Future pronunciation: Hands-free "optic" anchoring (with eye tracking)

Clip art: Clker
Ever since seeing the 1982 Clint Eastwood film, Firefox, I have been intrigued by the idea of using eye tracking for teaching pronunciation. In the film, Eastwood flies a stolen, high-tech Russian fighter--controlled by his eyes and a few spoken Russian commands--back to the US. As noted in the right column, the work of Bradshaw and Cook in developing "Observed Experiential Integration," which involves extensive, therapeutic use of eye tracking, was fundamental to much of the early development of the HICP/EHIEP framework. Here is, basically, a marketing piece for a company that has developed (from my perspective) an amazing range of eye tracking-based software applications. As I read the product list, it would take only two or three applications to allow a learner to do almost all of the protocols or procedures that we have developed, hands free.

In fact, the "optic anchoring" created simply by tracking the eyes across the visual field, in roughly the same patterns that we use with arms and hands, would be at least as effective, if not more so. Although we do use some eye tracking techniques in working with accent reduction, in general EHIEP work no explicit eye tracking is used, in part because of the inherent potency of eye tracking procedures and the absolute necessity of being formally trained to work with them. This technology is certainly worth "taking a look at" now. It is clearly integral to the future of virtual reality-based language instruction.

Plastic Brain . . . Pronunciation Change

Clip art: Clker
One of the most striking findings of recent research, such as this 2002 study on neuroplasticity in motor learning by Ungerleider, Doyon and Karni, is not just how the brain works but its inherent plasticity: its ability to reorganize and relearn, or learn in other ways if necessary. One obvious implication is that just because students have individual preferences for particular learning styles does not mean they cannot, in many cases rather easily, switch to other styles or develop better use of secondary preferences. The danger of cognitive style or learning style categories is . . . that they are categorical. Once we "know" what we are, that's it. (In fact, research suggests that once you know your style, especially based on some simpleminded 5-minute questionnaire, you become even more so--one of the basic assumptions of hypnotherapy, of course.)

Bottom line here: even the "adult brain" (and this is especially good news for learners of my generation and beyond) is capable of enormous flexibility and regeneration. So forget all that nonsense you have heard about having to alter your teaching style to fit those of your students: retrain them instead! Well, actually, you should be constantly training everybody, yourself included, in multiple-modality learning. Get HIP(oeces), eh!

Sunday, August 28, 2011

HAPTICULATE! (Learning new or changed pronunciation efficiently)

Clip art: Clker
I like that term . . . Among voice coaches, the asymmetrical relationship between "bone conduction" (perception of one's own voice experienced through the bones of the face) and "air conduction" (awareness based on input via the auditory nerve from the ears) is generally a given. Estimates range from 80/20 to 60/40. Thus, in training programs, the internal "felt sense" of the voice is understood as primary. (This abstract of a study looks at the varying frequency ranges involved.)

Assuming that observation is essentially correct, or at least useful--and drawing on research cited in several recent posts on the relative strength of different modalities in speech production and comprehension--here are the fundamentals, from a HICPR perspective, of how to manage your attention (or that of your students) to learn a new or corrected sound with optimal efficiency. In brief, there are four basic components. (What function each fulfills has been elaborated in previous blog posts.)

A. Breathe in through the nose, then breathe out through the mouth as the word or phrase is articulated, accompanied by specific modality management--with haptic anchoring (see B and C, below).
B. Focus strongly on the felt sense in your personal Vowel Resonance Center (a point, typically, in the bones of the face between the eyes or thereabouts, where bone-sound conduction is experienced most intensely or, for some, at a point in the throat or chest when speaking). The breathing procedure in A helps to create and maintain that focus.
C. Manage the visual field (Visual Field Management). Do that either by focusing on a fixed point in front of you, tracking hand movements with eyes or closing your eyes--or some combination.
D. Perform 2 or 3 "pedagogical movement patterns" (basically sign language-like movements/gestures through the visual field, terminating with both hands touching on the key, stressed syllable--haptic anchoring) as the target word or expression is . . . well . . . hapticulated!

Saturday, August 27, 2011

Haptic preferences of 5-12th graders (and adult learning style plasticity in pronunciation teaching)

Clip art: Clker
This summary study found that "average" 5th-12th graders in the US, Hong Kong and Japan had a relatively balanced learning style profile, with a slight preference for haptic (37%), auditory at 34% and visual at only 27%. Those results appear to contrast substantially with the "typical" adult learner, who tends to be biased in favor of visual, with auditory second and haptic a distant third. From that perspective, our goal should be to assist adult learners in developing a more balanced, multiple-modality-based learning style profile, more like the one they had in school. I am not sure about the applicability of the research, but I certainly like the results. On the face of it, that looks like an almost ideal mindset for pronunciation change, a good target for our research and instruction.

Why "haptic-integrated" pronunciation method? Really?

Clip art: Clker
I am frequently asked why I continue to insist on using the phrase “haptic-integrated pronunciation” as the focus of HICP/EHIEP. Much of what passes for pronunciation instruction today is still (at best) like a good YouTube video: (a) an explanation, followed by (b) classroom practice—some of it very well done, by the way, but generally conducted as decontextualized exercises, and then (c) . . . nothing . . . the learner is, from that point on, either entrusted with the responsibility of figuring out how to practice outside of class or assumed to integrate the new pronunciation subconsciously, without further attention or guidance.

The EHIEP model attempts to “supercharge” both the classroom and out-of-classroom experience by helping to integrate pronunciation teaching more effectively, in two senses. First, after initial brief training sessions (9 or 10 thirty-minute modules, done by the instructor or video-based, spread out over the course of about 12 weeks), attention to pronunciation from then on occurs within the context of “regular” speaking and listening tasks, integrated as the need or opportunity for increased intelligibility or accuracy arises. Second, learners experience, in class and out of class (in regular, prescribed homework), consistent, multi-sensory/modality learning of sounds and words that should greatly facilitate integrating those elements into their spontaneous speaking and listening. I had the basic idea back in 1984, but could never quite figure out how to get consistent integration and anchoring. About thirty years later, I was introduced to haptic research.

Friday, August 26, 2011

To breathe or not to breathe during pronunciation practice

Clip art: Clker
In most basic strength and flexibility training, some kind of systematic control of breathing is practiced. My experience has been primarily with running, weight training and yoga, where there is a general consensus that "nose breathing," at least when inhaling, is recommended. Here is a brief summary of some of the potential health benefits. (There is extensive, well established research also on the effects of breathing in yoga systems.)

I have been exploring the use of controlled breathing in HICP/EHIEP work for some time now. The idea is to breathe in through the nose before haptic anchoring of a sound or word, then to exhale through the mouth with the anchor as the sound or word is articulated (hapticulated, as we say!). There are several potential benefits (in addition to the biochemical changes evident in the research), including: improved pacing of exercises, enhanced "felt sense" of and concentration on the target sound, improved posture encouraged by conscious nasal inhaling, improved aspiration on aspirated consonants--and, perhaps most strikingly, a general sense of well-being that remains for some time after practice. (Research seems to indicate that that feeling is probably the result of greater oxygen absorption.)

So, if your pronunciation work seems to be sucking all the oxygen and enthusiasm out of the room . . . such controlled, embodied systematic "inspiration" (and expiration) could well be a real "breath of fresh air!"

Thursday, August 25, 2011

On the tip of the tongue: Tip/top hapticulation

I have given up on finding a YouTube video that works effectively in directing learners to the felt sense of the tip of the tongue, as opposed to the "top" of the tongue just behind the tip--or, for that matter, the top/middle (the blade) or the sides of the tongue. (I was tempted just to include a list of exemplary worst offenders, but decided to keep my tongue/fingers in check . . . )

The exact contact point of the tongue with the teeth, lips or alveolar ridge (just behind the teeth) is essential for efficient consonant repair (e.g., th/th, f/v, s/z, sh/zh, r, l, n, ng, t/d). Without the aid of an instructor and a mirror, most learners, with a few exceptions, are not able to anchor those points accurately. That does not mean that, by trial and error--and brute force--many problematic sounds cannot eventually be approximated, particularly by those who are better wired to extrapolate sound into movement. I will be posting some "hapticulation guidelines" (for articulating sounds with haptic anchoring) for use in the classroom. The basic, haptic-integrated classroom teaching requirement: if you can't fix a consonant sound in 2 minutes or less, don't. Schedule an office visit.

Clip art: Clker (stick with marshmallow)
In preparation for that series of protocols, go to Starbucks--not to stir up trouble here--order a coffee and walk off with about a dozen wooden coffee stirrers. Take one or two and practice breaking off 1/3 so that you end up with a rather jagged edge. Discard the resulting short piece. (The marshmallow is optional!) See if you can figure out how to establish the appropriate haptic anchor points on the upper body for the settings of the "World Englishes" that you teach. Until you can get your hands on the new EHIEP consonant protocols (March, 2013), however, I take no responsibility whatsoever for any collateral damage that you accidentally inflict on yourself or your students. 

Tuesday, August 23, 2011

Half-haptic, "Whole Brain" Power Teaching

I am often asked how "Power Teaching" relates to HICP work. The answer is . . . well, it sort of does. It does involve gesture-synchronized speech and what is certainly whole brain/body engagement. The main differences are that PT makes extensive use of iconic gesture (drawing a picture of something) and involves only accidental haptic (movement plus touch) anchoring.

Actually, there are occasions when I use some (relatively wacky) PT-like routines to get learners warmed up to the idea of full-body work in the first place. The other, more interesting dimension of PT, however, is what is often called the "yes set," that is, getting students to agree to follow commands--as "retro" as that may sound. Learning the EHIEP protocols requires students to mirror and follow along with either a video model or a "live" instructor, and to do it with considerable precision until the basic haptic strategies are mastered, so they can be used in the classroom whenever the need arises.

With apologies to our excessively "Critical" colleagues, sometimes the judicious application of a little pedagogical "power" (teaching) in class is not a bad idea!

Monday, August 22, 2011

Your vowels are within you . . .

Clip art: Clker
In many religious and meditative traditions, vowels have distinct character, quality and (often incredible) impact. In spoken English, vowels do retain some subtle phonaesthetic qualities, as noted in earlier posts, but nothing comparable to those that form the basis of the great chakras and chants. (Not even backward buildup drill or Jazz Chants.) This video provides a nice introduction to the central role of vowels in that context. (Here is a slightly more "analytic" and almost entertaining presentation of some of the same concepts.)

When we work with the concept of the "felt sense" of a sound, we are working somewhere near the other end of the intensity continuum from the settings of those vowels, but a rich experience of resonance and the momentary, conscious situating of the "feeling" of a vowel someplace in the body is essentially the same goal. Here is one case where today's (over)emphasis (in my humble opinion) on metacognition in pronunciation teaching may just be on the right track. Got a vowel problem? Try meditating on it.

Metalinguistic feedback and "Tell backs" in correcting pronunciation errors


Clip art: Clker
This interesting paper by Reed and Michaud illustrates where the field is headed, especially how cognitive phonologists view the priorities of the process of correcting errors in spontaneous speech. The two key strategies are to (metalinguistic feedback) instruct the learner to do something with the piece of sound to fix it, and then ("tell back") have the student report back to the instructor, in quasi-technical terms, what the problem is and the preferred method for fixing it. And then fix it. (I may be over-truncating the process a bit, of course!) For some learners, I'm sure that works, especially those more advanced in EAP programs, where metalinguistic work (talk about language structure and awareness) is the essence of the pedagogical process.

Now, haptic anchoring could, in principle, be applied after the student has talked back to the instructor--and I'm sure that would be the response from Reed and Michaud. There appears, however, to be no evidence at present to verify that method in correcting pronunciation, although there is substantial research supporting such metalinguistic "chat" in the areas of grammar and vocabulary. From a HICP/EHIEP perspective, of course, metacognitive reflection for the most part follows haptic anchoring, not the reverse. I don't find this "tell back" innovation/development a cause for great optimism at this point, but I'm sure that I can be talked out of it . . .

What we can learn from (at least) one model of Hypnosis

Credit: The Milton Erickson Foundation
One of the most important influences on my understanding of how language works, especially the use of voice in therapeutic change and clinical process, was Milton Erickson, considered by many to be the founder of modern hypnotherapy. To quote from the website, "In Ericksonian hypnosis, language is used to direct the attention inwards on a search for meaning or to verify what is being said." Whereas many therapies make extensive use of the visual field with movement or "gadgets," Erickson--a paraplegic who was also apparently somewhat dyslexic, color blind and tone deaf--had only one tool to work with: his voice! A book of his collected therapeutic stories, My Voice Will Go with You, remains a favorite. Note the focus of Erickson's work: (a) direct attention inward . . . and (b) verify what was said. In effect, it is focusing with extraordinary attention on the felt (auditory, kinaesthetic and resonant) sense of a word, phrase or experience.

As earlier posts have explored, the interplay between external visual stimuli and "internal" haptic and auditory is critical to effective anchoring, especially in moderating the effects of both internal and external visual distraction and (often) persistent mispronunciations tied to orthography. In pronunciation teaching (and especially HICP), systematic control of both instructor and student voice quality and expressiveness is key to sound learning. But, as Erickson might have suggested, I need not bother trying to convince you of that . . . you feel (and speak) that way already . . .  

Sunday, August 21, 2011

The Myth of Learning Styles

Here is a "must-read" on the concept of "learning styles" from Change Magazine, 2010. It begins with an interesting claim: "There is no credible evidence that learning styles exist." 



Although Riener and Willingham focus on the validity of the idea at the college level, their basic claim--that ability, student background and content (including the media in which the instruction is packaged) are far more relevant to instruction than is the potential impact of individual learning style (visual, auditory, kinesthetic, etc.)--is simply dead on. (This is one of those pieces of research you discover with which you almost agree too much--the kind that forces you to smile for the rest of the day!) Enjoy!

Saturday, August 20, 2011

Pronunciation Homework: Doing the heavy lifting!

As noted in an earlier post, I have been unable to find any good research on the effect of consistent pronunciation homework. (If you know of some, please let me know!) Given the more directly physical character of EHIEP protocols, it seems reasonable to look to a couple of related fields--in this case, formal exercise courses and weight lifting--for insights into how to keep learners engaged appropriately. (In pronunciation work, a great deal has been published on pronunciation journals, workouts and after-the-fact reflections on outside-of-class work, but apparently next to nothing on persistence with prescribed homework.)

Clip art: Clker
The college exercise class study linked above used a 3x-per-week model and found that the required regimen not only achieved course objectives but actually resulted in increased activity beyond the course. An every-other-day pattern of practice is also standard in most weightlifting, running and other sports, where the recovery time for properly exercised muscles is at least 48 hours (for the older and less fit, even longer).

That has been our experience with HICP homework as well, probably in part because of the body and visual field focus and stretching: 48 hours between "workouts" and no more than three 30-minute homework sessions per week. The research in "physical" disciplines (see earlier post on exercise persistence) suggests that short, intense, programmed, disciplined, spaced, regular exercise is optimal. Prescribing and carefully monitoring pronunciation homework is certainly not "speaking out of school!"

Friday, August 19, 2011

Could Krashen's "Monitor Model" have been 25% correct?

Clip art: Clker
Here is an article that begins with a quote from Krashen (1982) stating his initial articulation of the "Monitor Model," arguing, among other things, that attention to form or correction in L2 acquisition is, at best, not effective or productive. Following on from recent posts, you can see how he had captured a critical dimension of the process but was tossing out even the possibility of any directed, modality-mediated monitoring of spontaneous speaking (that is, modulating attention appropriately to the learner's cognitive style profile among the four senses or modalities), as we have been exploring for some time now. I'm sure I am not the first to suggest that "Krashen's Error" was that he was just slightly "out of touch" . . .

Thursday, August 18, 2011

Magic pronunciation change: Look at that sound in your ear!

One of the great mysteries (or complexities or frustrations) of working with multi-sensory or multiple-modality pronunciation learning systems is understanding the relative contribution of one sense to the effectiveness of the process. As you might guess, some research reveals that the senses are in some contexts highly integrated; in others, quite the opposite: the incoming data is interpreted and remains relatively partitioned. In looking at the range of possible configurations of learner cognitive styles (e.g., visual, auditory, kinesthetic--and all possible combinations of those three in which one is dominant and the others to some degree less prominent, such as visual-auditory, auditory-visual, auditory-kinesthetic, kinesthetic-visual, etc.), the puzzle becomes even more complicated. What some research suggests, such as this 2009 study by Jacobs and Sham, is that some degree of separation or isolation of the senses might work for us in changing pronunciation.

An earlier post, Change the Channel Fallacy, focused on how difficult it can be to change "the channel"--for example, to modify auditory output by simple repetition of the correct sound--without engaging a learner's nondominant modalities, perhaps visual or kinesthetic. What that could mean is that the best avenue of potential impact might be through nondominant modalities, not catering to the learner's preferred modality, as is the common practice.

So, for the visual-auditory learner, emphasize auditory; for the auditory-visual, visual; for the kinesthetic-auditory, auditory; etc. And for almost any combination of the three main senses, try teaching more through haptic (adding touch to movement), which is generally even further outside of conscious awareness and less likely to disturb ongoing communication or thought. EHIEP methodology is not magic, but it certainly does involve a focus of attention away from the problem channel to a parallel modality where change can take place less obtrusively--a nice, haptic "sleight of hand," if you will . . . and you should!

Do your (pronunciation) homework exercises!

Clip art: Clker
One area of pronunciation instruction on which I have been unable to find anything but anecdotal research is the effect of students' systematic practice outside of class. I have been convinced for decades that if I can just get a student to do prescribed homework on schedule, progress is inevitable and predictable. Just for fun, I once had a student read the phone book every morning for 15 minutes, just focusing on speaking clearly and warmly . . . amazing improvement! (I suspect that accounts for the "success" of some online accent reduction programs: just do something regularly, almost anything!)

For HICP work, the best parallel is research and practice in physical exercise persistence. In this doctoral study, conducted in a US upper-middle-class health club, it was shown that (a) autonomous self-direction, (b) basic exercise competence level, and (c) relatedness (identifying with a group, such as "the fit," or with the club) predicted exercise persistence in terms of duration, intensity and enthusiasm. One factor--need support, or the perception of a "caring" context at the club--was not significant. (I will do a blog post on that one shortly.)

Setting aside the obvious cultural dimension that foregrounds "autonomy," those four factors, when adjusted appropriately for the learner population, go a long way toward helping us understand how to design homework that will keep learners engaged. In our work, the basic haptic protocols should provide a 10-minute, aerobics-like foundation/warm-up for homework that the body is more apt to go along with for starters--until the rest of the brain comes on line. So if your students don't do their homework, at least do yours . . .

Tuesday, August 16, 2011

Mirroring, Tracking and Listening

M, T and L are basic tools of pronunciation teaching. It has been assumed for some time that tracking, that is, having a learner speak along with a simple audio recording, is something of an overt form of what naturally goes on in the body during listening. Earlier research seemed to suggest that the vocal apparatus (mouth, vocal cords, etc.) moves along with the incoming speech at a subliminal level.
Clip art: Clker
Turns out, according to this research by Menenti of the University of Glasgow, Hagoort of Radboud University, and Gierhan and Segaert of the Max Planck Institute, summarized by Science Daily, that general listening (without seeing the speaker "live," visually) does not necessarily involve such sympathetic "vibrations." In other words, the felt sense of listening in some contexts can be decidedly non-somatic, or divorced from embodied attention.

That does not mean that tracking is not still a useful technique for assisting learners with the intonation of the language, but clearly the neuro-physiological rationale may be suspect. This raises several interesting questions about the complex interrelationships underlying listening, speaking and pronunciation skills--and how to teach them, especially to adults. The evidence that mirroring, on the other hand, engages the body is unequivocal. That certainly speaks to HICP/EHIEP--and to any pronunciation teaching practitioner who is listening . . .

(Haptic) Pronunciation Rehabilitation

Clip art: Clker
Here is an interesting paper outlining a virtual-reality approach to using haptic rehabilitation technology with stroke victims. The parallels to some aspects of haptic-integrated pronunciation work, especially in dealing with fossilized pronunciation, are striking: (a) focus on "daily" actions, (b) exploit the visual field as a 3D structure--not just two-dimensional, vertical and horizontal--and (c) use haptic guidance and anchoring. Changing fossilized pronunciation (cf. Acton 1984) requires a somewhat different approach, where the targets must, at least initially, be words and phrases with a high likelihood of daily active or receptive use by the learner. (Often you have to simply "ferret out" every word with problematic sounds, one by one!)

Following Lessac, only then can language bits practiced in (relative) isolation as "homework" begin to integrate into spontaneous speaking. The 3-dimensional space allows not only consistent haptic anchoring of language bits but also provides for registering emotional and expressive intensity, key elements in working with seemingly intractable mispronunciations. From that perspective, the term "rehabilitating (fossilized) pronunciation," has a nice ring to it. Now if we can just apply that principle to contemporary pronunciation teaching in general . . .

Monday, August 15, 2011

Advantages of 2-handed haptic anchoring in pronunciation work

A few excerpts from a sports website/blog that could just as well be describing, beautifully, the felt sense and effectiveness of haptic anchoring with both hands involved:
Clip art: Clker

. . . all of your upper torso will be behind the [stroke]
. . . two-handed . . . is more forgiving.
. . . two-handed . . . frequently known for hitting . . . incredible angles.
. . . two-handed . . . is easier to “groove” (hit consistently) and keep grooved than a one-handed
 . . . Without the extra controlling presence of the non-dominant arm, there is much greater potential for unwanted motion both horizontally and vertically.
. . . two-handed . . . from each side are identical, the learning that occurs on one side will serve to reinforce the learning that takes place with the other stroke.
. . . two-handed . . . are more powerful and are hit with a greater degree of control and touch.
. . . The less you attempt to do, the less there is that can go wrong
. . . when using two hands from both sides, I’m breathless by the end .
. . . What have you got to lose?

Game, set, match!

Mirror Neurons, Dance Movement Therapy and Haptic Mirroring

Clip art: Clker
This 2006 article by Berol looks at the underlying neurophysiological basis of mirroring in dance-movement therapy. Of particular interest to our work are both the full-body engagement with the model and the strong emotional, empathetic grounding involved. As noted in earlier posts, carefully managed mirroring offers an extraordinary level of attention to aspects of communication generally thought to be outside of the domain of the classroom, to be learned inductively. It can be, however, very difficult to incorporate into instruction consistently and in a way that is psychologically "safe."

Dance therapy protocols do an extraordinary job of setting the stage for engaging the client/learner in the "moving" experience. The language and therapeutic process of DMT offer valuable insights into effective task staging of the pedagogical process for pronunciation instruction. HIPoeces employs haptic mirroring (mirrored movement with a range of touch intensities and configurations), using a number of DMT strategies, including teaching from a mirror-image perspective at times. A good first step is to set your mirror neurons down in front of a video of yourself teaching intonation for a couple of hours . . .

Sunday, August 14, 2011

Remembering: Just close your eyes to block out that distracting sound in the background or pronunciation

Clip art: Clker
Here is (an abstract only of) a study by Perfect, Andrade and Eagan (2011) which demonstrates that, under certain conditions, closing the eyes effectively blocks out background/environmental auditory clutter (in this case). The technique seemed to enhance, or at least protect, both visual and auditory recall. Earlier posts have reported on research related to the impact of eye closure (or purely haptic, nonvisual engagement) on encoding, showing a parallel effect. It is as if, once the eyes have done their part in establishing an object's properties or pattern, in some contexts or stages it may be better to carefully limit further, potentially distracting gaze.

Most disciplines deal with directed eye engagement in some form, especially in expert practice and performance. That can be done any number of ways, from full closure to fixed positions in the visual field. (I'm sure you have students who, themselves, use similar strategies at times.) Regardless, the implication for pronunciation instruction is intriguing: to get or recall the optimal auditory felt sense of a sound, simply cut out external auditory interference. That appears to be relatively easy . . . you can do it with your eyes closed!

Saturday, August 13, 2011

Touching Song!

Clip art: Clker
Although this is actually just a (great) pitch for a Samsung smart phone, it does "embody" something of the spirit of what we do and how we do it. Here is my very rough translation of the lyrics:

“Love is about touch.
Don’t you want to touch me because you love me?
If you love me, you will want to touch me.
It’s only real when you can touch it! . . .
and if you are truly HIP(oeces),
then the EHIEP method is the only way to teach pronunciation!"

Keep in touch, eh. 

Thursday, August 11, 2011

(With Child L2 pronunciation learning) Haptic or not too haptic?

Clip art: Clker
In this study by Gori et al. (2008) we get a glimpse into why children up to the age of 8-10 are more haptic in some contexts. For example, in determining relative size they tend to rely on touch and motion; in figuring out orientation in space, more on vision. Beyond that age, the modalities become gradually more and more integrated. In working on the L2 pronunciation of children with the EHIEP protocols, it is striking (but, given this research, not surprising) how quickly they are able to "get" and mirror accurately the pedagogical movement patterns across the visual field and beyond. And, at the same time, they learn the haptic anchoring of sounds and recall them without the teacher even mentioning what is going on.

Later, as we have seen, as adults the senses are capable of working together or in opposition, depending on a number of factors. As for the old canard, "learn like a child," we can here see why it can be so difficult to get into that state--and how we may be able to construct frameworks that do allow occasional access to more uni-modal, direct learning when necessary.

Seen from that perspective, haptic is not just for kids.

Haptic leading and (pronunciation) practice: getting into the swing of things

Clip art: Clker
Inspired by the interaction between swing dance partners (where all leading is done haptically, without verbal commands or visual signals), a system was developed in which a robot acted as lead and a human as follower, in order to study the nature of haptic-only guidance. The researchers found that, as long as the partners knew the basic swing dance moves--shared the same "vocabulary"--haptic-only leading worked reasonably well. If not, the two could not coordinate their actions effectively.

What that seems to imply is that haptic anchoring is most useful, or most potent, in HICP work when applying a basic pedagogical movement pattern (PMP) to new materials (sounds, words or phrases)--not as much in the early, initial phases of learning the PMPs, where being able to see locations in the visual field consistently is critical. In other words, it is best suited to efficient practice of new material.

As noted in an earlier post, for some learners anchoring new pronunciation(s) may, in fact, be better accomplished with eyes closed, without the visual interference. That integrative process of moving haptically from patterns to improvisation (just go with the flow, so to speak) parallels many kinds of learning, but especially the process of "repairing" incorrect or fossilized pronunciation. For those who have yet to get into the swing of haptic-integrated pronunciation work, the "un-HIP," just pick up a few of the basic steps from the videos, turn down your visual cortex, and follow our lead . . .

Wednesday, August 10, 2011

Haptic robot (clinical pronunciation) therapy?

Clip art: Clker
Leave it to the Italians to figure out how to do therapy with a "haptically endowed" robot! (There are, of course, occasions when one would prefer a hug or back rub from a robot, rather than from the therapist or instructor at hand!) The character on the EHIEP logo, which we affectionately refer to as "EH-bot," was designed to embody the persona of a fun-loving "hapticobot." The EHIEP system requires consistent, precise pedagogical movement patterns that are haptically anchored in the visual field. Another of the HICP pedagogical acronyms is MAPLE (Maximally Attentive, Physically Laid-back Engagement). Both the precision of robot-like gestures and the therapeutic (relaxed, confident) felt sense of the haptic anchoring contribute greatly to the efficacy of the system.

Students often report that just doing the vowels and fluency protocols can be very therapeutic. One of the serious misconceptions in pronunciation teaching is that motivation, "pedagogical gesticulation" (such as dance or drama) and enthusiasm are always positives. Nothing could be further from the truth. Research cited in earlier posts showed convincingly that effective kinesthetic learning (which is the essence of pronunciation learning) is highly sensitive to distraction. In other words, it can be difficult to remember a sound or movement that was embedded in too many "fun and games." (Lexical items or terms, on the other hand, seem to survive a little better.) Turns out, EH-bot (the hapticobot) may be a surprisingly good model and deliverer of both quality pronunciation input--and appropriately enabling warm fuzzies--after all. Perhaps a more Italian moniker is in order? EH-Botticelli, it is!

Tuesday, August 9, 2011

Collaborative haptic-integrated instruction

As explored in earlier posts, for any number of reasons, HICP work has been restricted to engaging but culturally "tasteful" touching of one's own hands, deltoids (or clavicle) and quadriceps. The "collaborative music controller" developed at Stanford would, in principle, function like the "haptic mirror neurons" in the brain, guiding and synchronizing the hand of the other.

Clip art: Clker
Imagine the possibilities: being able to quickly train learners in the correct pedagogical movement patterns (virtually) without touching them. As it is now, if a learner is having difficulty picking up a pattern, given the right setting and relationship with that learner, I might occasionally physically guide a hand or arm momentarily--but do not recommend that as a regular classroom practice. If necessary, brandishing a pointless, "guiding" pencil will usually be sufficient.

Were EHIEP to be imported into a virtual reality system, much of the basic training could probably be done using similar haptic-mirroring technology. By that point it would also be far easier to persuade the "haptically challenged" to "mirror-ly" get with it as well . . .

Eye dominance and HICP: the "shotgun" approach

For a small percentage of learners and instructors, eye dominance can be problematic. If one is strongly left-eye dominant, some of the EHIEP protocols and activities can be either ineffective or slightly disorienting. EHIEP is based on the concept that discourse prominence, that is, phrasal and sentence stress (but not word stress), must be anchored in the right visual field. The reason is that, for the right-eye dominant, the right visual field and hand are more sensitive and responsive. For an instructor facing a class, that means that some pedagogical movement patterns have to be done in mirror image to what the students will be either doing or observing. Some report that they cannot do that effectively.

Up to this point, we have been insisting that the poor, struggling left-eye dominant conform to the majority--most actions terminating in the right visual field. We are now beginning to explore the possibility, perhaps a bit inspired by the Shotgunworld blog, of having the strongly left-eye dominant learner who struggles with the system instead work to the left, much as the EHIEP instructor does. The advice to the novice "shotgunner" is often to just switch to shooting left-handed. It is not as if we have been "holding a gun to the head" of some learners . . . but there are some interesting parallels. It is probably time we bit the bullet and gave it a shot.

Monday, August 8, 2011

Haptic testimonials from the stars - III

Clip art: Clker
Imagine having Neytiri from the movie Avatar show up to sub for you in your HICP class on the day the lesson plan calls for work on intonation and discourse markers of emotion. Sound pretty far out? Maybe not. In a 2002 UC Berkeley dissertation by Barrientos, a model is developed for providing avatars with a relatively simple but (for avatars) adequate gesture-plus-emotion repertoire.

In fact, I am beginning to think that avatars could probably do a better job of teaching some pedagogical movement patterns than could a live instructor at the head of the class, for several reasons. First: consistent precision of movement patterns, in terms of size, position in the visual field and speed. Second: with slight facial adjustments and vocal expression, the avatar can present most basic emotions along with the pattern and the words--free of personal agenda, high-fashion outfit of the day or other distraction--allowing learners to focus on, and either repeat or mirror, the PMP and the emotion conveyed, not the gesticulating bozo up front. (There is a great deal of research in the psychotherapeutic literature on the interaction between therapist and client in face-to-face "instruction.")

Even when doing EHIEP work "live" ourselves, we have learned, through review of haptic research and classroom experience, that the key to efficient HICP instruction is to assume a slightly robotic "persona" at times. (Note the EHIEP-bot logo in the upper right-hand corner of the blog.) So watch yourself! (Preferably on video, many times.) Your students are . . .

Haptic testimonials from the stars - II

Clip art: Clker/Library of Congress
Talk about being out(side) of the (Cornell University research) box:

"I loved the haptics. They were much better than Cats." -Matthew Broderick, actor

"A haptic box? What's the deal with a haptic box? Someone tell me, because I'd like to know." -Jerry Seinfeld, actor

"I felt like I was living inside the Cornell box." -Shaquille O'Neal, professional basketball player

"These haptics made me a better-looking man. After I experienced the Cornell box, I had a date every night for a year!" -Steven Adler, the original drummer for Guns and Roses

Talk about endorsements for all things haptic! Rather than unpack what they think they are talking about specifically, I'll just leave it at that . . .

Haptic feedback: On the other hand . . .

Clip art: Clker
Here is an abstract (only) of a 2005 exploratory project by Kohli and Whitton of the University of North Carolina at Chapel Hill that looked at the feasibility of using the non-dominant hand to provide feedback to the dominant hand in virtual reality. (Typically, only the dominant hand, or at least one hand, is engaged.) This is the first study I have found that seems to suggest that research is catching up with EHIEP design! What it means, essentially, is that the concept of both hands touching in the visual field to enhance multiple-modality learning appears very consistent with current VR technology. (EHIEP pedagogical movement patterns all involve both hands touching in the visual field on a stressed syllable.) Only a matter of time before EHIEP goes VR? Perhaps. Still a great deal of work to do, however, before it can be "handed over!" But let's give them a hand, regardless!

Sunday, August 7, 2011

Starting "from scratch" (Haptically speaking!)

Clip art: Clker
Haptic-based, integrated pronunciation instruction relies upon consistent, "memorable" anchors (speech-coordinated movement terminating in both hands touching, or one hand touching the body someplace). If the felt sense or impact of the haptic "collision" is not strong or appropriately located--as noted in a previous post--best case: nothing is connected or learned; worst case: the diffuse haptic action works against any meaningful encoding of visual and auditory stimuli.

2007 research by Gallace, Tan, Haggard and Spence suggests that haptic anchoring is especially sensitive to intensity. In other words, with more pressure or skin "trauma," short-term memory for haptic stimuli should improve. In some preliminary work on intensifying EHIEP protocol anchors, it does appear that using the fingernails in some contexts to impact or lightly "scratch" the skin of the other hand, rather than using the flesh of the finger as is the practice now, creates a strikingly persistent anchor. This does feel like a promising lead. We have, however, only just "scratched" the surface . . .

New Accent Reduction Seminar for International Professionals (ARSIP) format

ARSIP™ (Accent Reduction Seminar for International Professionals) is offered by AMPISys, Inc. The provocative (but factually accurate) tag line is: "Dramatically improve your English accent in just 24 hours!"

A new format has been developed: 12 weeks of systematic, individualized work, done 30 minutes a day, four times a week. The four weekly self-directed practice sessions involve the basic nine EHIEP protocols, a model conversation, target-sound practice and professional word-list protocols. Optional feedback mechanisms are also available, such as webcam consultations, audio recording evaluations and the creation of workplace-specific conversations and word lists. The basic cost: about $400.

If you are interested, keep in touch. Availability will be announced in conjunction with the 2013 TESOL Convention in Dallas in late March.

Saturday, August 6, 2011

Hand sensitivity (probably) matters in haptic pronunciation work

Clip art: Clker
There is a long tradition in the various healing arts of "healing touch" and of training in how to develop hand sensitivity and awareness of the "energy" of our hands. Hand and eye dominance seem to be parallel: for most, one's dominant eye and hand will be the more sensitive. Earlier posts reported research on analogous "hot" spots in the visual field, which have been incorporated in various ways into aspects of the EHIEP/HICP model, such as the placement of vowel nodes and pitch levels. (We have also researched and experimented with the use of various aromatic hand creams in sensitizing mind and body.)

There are tests of hand sensitivity which may be helpful in understanding why some learners are better than others at establishing consistent pedagogical movement patterns in the visual field, anchored by touch. With some type of touch sensitivity training such as this one, the process of working from potent haptic anchors might be improved significantly, at least giving the "haptically challenged" a (better) hand, so to (better) speak . . .

Friday, August 5, 2011

Haptic coordination thresholds and caveats

This 2001 research by Kelso, Fink, DeLaplain and Carson may suggest an explanation for a frequent observation: in haptic-based work there seems to be a threshold of some sort. Once a learner puts it all together--coordinating the visual, auditory, kinesthetic and tactile sensory input with the linguistic concept in focus--the learning process seems to go into overdrive and become much more efficient. Until that happens, however, it can be very frustrating for some.

Clip art: Clker
One conclusion of the study was that haptic engagement, if not well synchronized with the other sensory inputs, has the effect of seriously compromising the other elements, especially the visual and auditory. (We know, for example, from other studies that visual input can in some contexts easily cancel out auditory or haptic input--but not combined auditory and haptic.) That may explain the sometimes contradictory results of studies looking at the benefits of kinesthetic strategies in instruction.

This also has important implications for haptic-based instructional task sequencing and scaffolding (or any type of multi-sensory teaching for that matter). To paraphrase "The Duke": "You ain't got a thing, if you don't [until you] got that swing."

Wednesday, August 3, 2011

Kinesthetic empathy and haptic listening

Here is the first of two very cool videos from a neuroscience/dance project and conference: "From Mirror neurons to Kinesthetic Empathy." (The sound quality is problematic in places.) Dance-related research in kinesthetic empathy explores, in part, how the observer of dance "moves along" with the dancer--and how that experience can be utilized and enhanced.

Credit: www.watchingdance.ning.com
One frequent observation by EHIEP learners is that near the end of the program their listening skills have improved in a somewhat unexpected manner. Specifically, they have become better at remembering what is said and how it is said, and at repeating what they have heard (often using EHIEP pedagogical movement patterns). The "felt sense" of that experience seems to be very much body-based and non-cognitive, as if the whole body were recording the conversation. Although we have for some time been terming that "kinesthetic listening," we have not yet developed the advanced listening comprehension protocol systematically. We should soon. Hapticempathy?