Friday, September 30, 2011

First Annual International HICP (Haptic-integrated Clinical Pronunciation) Convention

Great response to HICP as a possible name for the organization, pronounced "hiccup," so let's go with that. I hope to be on sabbatical in Spring 2013, so I propose that we hold our first "convention" in late Spring 2013, someplace warm. I use the term "convention" because I assume that by then the size of the organization will be so large, with haptic anchors (the term we might use for local or national affiliates) everywhere, that to call it only a conference would deny the true felt sense of the event. With enough time to plan and keep in touch, the convention should come off without a hiccup! KIT (Keep in touch!)

Thursday, September 29, 2011

Clinical Phonetics and "Prononciation Clinique" or HIPC

I have been trying for some time to come up with a good, descriptive organization name for what we do. In practice, the term "clinical pronunciation" is probably close, parallel to the field of "clinical phonetics" (linked above). This definition of "counseling psychology" may provide a better point of departure: a branch of psychology that specializes in both discovering new knowledge and in applying the art and science of psychology to people with emotional or behavioral disorders . . .  Not sure that the analogy extends well to someone with pronunciation issues as having an "emotional or behavioral disorder," of course, but systematic work with emotion and behavior (especially directed movement) is central to effective pronunciation change. We work more with "hands on" anchoring and integration than with cognitive schema and metacommunication (rules, goal setting and planning) about the process with learners. So I might propose, instead, the French term: Prononciation clinique--literally, clinical pronunciation. It sounds somewhat more euphonic to the English ear (my focal group assures me), and it also seems to suggest, indirectly, the notion of "pronunciation clinic," which by itself in English sounds a bit too "mechanical." Or perhaps: Clinique de Prononciation Clinique? (Clinical Pronunciation Clinic = CPC). The simple acronyms "CP," "PC" and "CPC" don't work too well, for various reasons--but haptic integration of pronunciation certainly does. So how about: HIPC (Haptique Intégrée Prononciation Clinique)?

Haptic vocal stress relief (and "n/l-able-ing," as well!)

Linked is Jaime Vendera's book promo that has the standard vocal stress relief techniques. Because in haptic anchoring the quality of the sound produced--the felt sense in the head, neck and chest--is fundamental, a relaxed set of vocal cords is essential for both instructor and student. In addition, if you spend too much time sitting on an exercise ball, like I do, or talk too much with upper body tension or incorrect posture, your vocal cords may lock up on you. Here is a technique I think I first learned from a weight lifter, but it is probably in one of Lessac's books, possibly "Body Wisdom," as well. Try this: Press the tip (ONLY)--not the blade--of your tongue up behind your top teeth with as much force as you can for about 2 minutes, then relax. Check your voice. If necessary, do it once or twice more. Almost never fails. If you have students, typically speakers of some Chinese dialects, who cannot de-nasalize initial "l," first have them do the same thing while saying "n" and looking up intently at their hairline or thereabouts as they do. Then have them maintain tongue contact with the alveolar ridge, but as lightly as possible, and say "l"--as their eyes fall to focus on their chin or thereabouts. It usually takes about half a dozen repetitions to get the distinct felt sense of both. The eye engagement is critical for the n/l anchoring. This works fine, too, just for anchoring "l" as distinct from "r". (Haptic "r" is a little more complicated and would require a video to demonstrate how to anchor it. Will post that one later.) As I have noted many times before, haptic-integrated work can be wonderfully "n/l-able-ing!"

Wednesday, September 28, 2011

A model of multimedia pronunciation teaching: about as good as it can get!

Linked is a patent application that presents a design for a multimedia system to teach pronunciation and spelling. Although the description is only text, it is striking how well, in principle, the design appears to provide a learning platform that can integrate a wide range of visual and auditory experience for the learner. It represents the best of what we see today in video and online language teaching systems. But, given where we are now in the development of virtual reality and haptic systems, this proposed patent is "history" already. From one perspective, it is just expensive, applicable only to those with the technology--and the appropriate learning preferences. In other words, visual-auditory learners with money. More importantly, however, it simply does not systematically engage the body at all, either in the computer-mediated environment or out of it--the final gasp of the sterile language lab. In a sense, it is not a bad model of contemporary pronunciation teaching, either. It can present to the learner, in an organized fashion, almost any conceivable pronunciation target from several perspectives, but it cannot (or will not consider seriously how to) seal the deal--ensure that what is presented is adequately anchored and integrated later. It would take only a touch of haptic to make it something extraordinary.

Contribution of (Haptic) Pronunciation work during interlanguage-related periods of L2 identity "liminality"

The relationship between pronunciation and L2 identity has been researched extensively in several fields. How pronunciation instruction might figure into the formation or sustaining of identity has been alluded to occasionally but not seriously studied. In the linked study of "identity work" (various strategies for attempting to maintain a serious "distance runner" identity during extended periods of "liminality" due to injury), three dimensions emerged: (a) materiality (physical, body-based therapeutic or light exercise activities), (b) associative, social or conceptual activities (meeting with other runners, or self-talk or inner speech confirming "runner identity"), and (c) "vocabularic" strategies (change in language usage toward more explicit or extensive use of distance-runner terms and phrases). Pronunciation work obviously contributes from all three perspectives to L2 identity development, but the first, especially from a haptic-integrated perspective, is intriguing. The enhanced physical grounding and anchoring of the language and pronunciation, particularly as they influence creation of a unique "L2 voice," is worth exploring, especially as it relates to personal, gender-specific modelling for learners. In the past I made extensive use of audio recordings of "ideal" voice character for learners to use as targets. With "full-body" haptic engagement, it is relatively easy to assist the learner in modifying voice quality and "voice personality" in many respects, to attune better to their desired L2 identity. The possibilities are "liminality-less!"

Tuesday, September 27, 2011

EHIEP (Haptic-integrated) is smarter pronunciation teaching

And you thought we were just "going through the motions" here!

The Krieger method of accent reduction

(Note: What follows should in NO way be construed as an endorsement of this linked Youtube video!)

I just posted this comment on another discussion board: "His claims are, of course, outlandish. But watch that video a couple of times very carefully. He has stumbled onto an essentially kinesthetic, "ballistic" technique that, for some learners, will enhance their intelligibility--if all they need is more stress contrast and processing time for their listeners. It is used in many public speaking courses, in fact."

I worked with something like that about 20 years ago. I still use it occasionally when I have a learner who needs a very quick fix--and maybe just needs to slow down and kick back. Everybody has a piece of the puzzle. In the EHIEP system we do sometimes use a haptic version of the "Krieger thrust" to effect more integration--but never up front!

The music of rise-fall prosodic triggers

Listen to a native-speaking English language instructor repeat the citation form of a word (in isolation) at the front of the class. Almost invariably, he or she will use what is termed a "rise-fall" intonation pattern, peaking on the primary-stressed syllable. This 2009 summary by Hsu of a research report by Janata at the University of California, Davis, explains why, in recalling and vocalizing a word, the "music or melody" of the word--its intonation or tonal pattern--should help it "come back."

Haptic research suggests that EHIEP-like "haptic anchoring" of that rise-fall contour (a pedagogical movement pattern across the visual field which includes some type of hand touching on the primary stressed syllable) should enhance encoding and recall. I have yet to do a controlled, empirical test of that prediction with such prosodic triggers, but it has been standard practice for some time to have learners use a rise-fall PMP as one step in working with new or changed sounds or words in homework. They consistently report that it helps them remember the felt sense of a word either (1) in trying to access the pronunciation directly or--(2) more importantly--in noticing after-the-fact a mispronounced or changed target in conversation (as explored in a recent post.)

Try it. Add a little prosodic or haptic riff to your citation forms. Stay tuned.

Monday, September 26, 2011

Developing a touch for discriminating between objects (and sounds!)

How about this from Science Daily: "New research from the University of Notre Dame shows that people's ability to learn and remember information depends on what they do with their hands while they are learning. According to a study . . . people holding objects they're learning about process detail and notice differences among objects more effectively, while keeping the hands away from the objects helps people notice similarities and consistencies among those things." That suggests just why a "hands on" haptic approach to learning sounds and words, especially distinctions between L1 and L2 sounds, should work--and why maintaining a "hands off" attitude toward pronunciation instruction . . . may not! "Now just hold ON!" (I can hear you saying.) Exactly.

Sunday, September 25, 2011

Selecting a "sound" haptic anchor

So if you were trying to find a good anchor, you'd go to a "Pro," right? Having looked to fishing for paradigms and metaphors a couple of times before, I'll try one more. Here are the Bass Pro Shops criteria for a good anchor:

(1) Strong craftsmanship
(2) Can be set and re-set quickly and easily under all conditions
(3) Good holding power (holds well in all types of bottom: weed, rock, sand, mud)
(4) Can be stored easily (on deck)--compact
(5) Can be retrieved easily
(6) Can be released easily and effortlessly from the bottom
I'm sure you can quickly extrapolate the first four parameters to haptic anchoring of pronunciation.

The 5th and 6th focus on two additional features worth elaborating. Ease of "retrieval" translates to how readily and effectively awareness of the "stored" new sound is triggered later, during conversation or listening. Probably the most important experiential benchmark in haptic-based change is when the learner becomes aware, after the fact, of either correct usage or the lingering mispronunciation. That is often experienced primarily as a body sensation, not a visual or "self-talk" auditory signal that would interfere with communication or relationships!

The final parameter, releasing the anchor, is also important. Haptic anchoring tends to fade quickly--unless practiced and re-experienced frequently--which works out just right for fast, short-term change. So, if your pronunciation teaching seems adrift, doesn't seem to be "catching" lately, don't throw it overboard . . . just get some better (haptic) anchors.

Disembodied anchoring strategies in pronunciation work

There are good sources of recommendations on how to integrate pronunciation into classroom instruction. Here is a nice 2004 piece by Levis and Grant which covers the basic options. (I recommend reading it before continuing if you are not familiar with that general framework.) Note that after identifying those aspects of pronunciation that should be attended to and setting up teaching contexts, they identify several (mostly visual, aural/auditory, cognitive/noticing) anchoring strategies:

(a) pointing out errors or processes
(b) providing formal rules
(c) oral repetition/practice
(d) writing something down on the board or in notes
(e) student discussion or analysis
(f) impromptu oral comments linking current issue to earlier work

A further assumption is that the effects of relevant context, meaningful practice, communicative "cash value" and student initiative will do the rest. You'd think they would . . .

Previous blogposts here have explored in great detail why adding haptic-based anchoring is potentially so much more effective than traditional approaches alone, which, for the most part, either (1) stop short of guiding the learner to efficient "storage" options for new sounds (with explanations, demands to "notice," or assumptions that uptake is the learner's responsibility, not the instructor's)--or (2) simply attempt to drill the changes into submission. In subsequent posts we will consider how to "hapticulate" or embody some of the strategies described by Levis and Grant.

Saturday, September 24, 2011

EHIEP I - Hapticulum

The EHIEP haptic-integrated curriculum (hapticulum!) involves, essentially, 10 brief classroom presentations delivered either by the instructor or by video. Each "lesson" teaches a set of pedagogical movement patterns (PMPs) for use in the classroom during "regular" instruction in any skill area. What you see in a video is just the goal--the last piece--of each 25-30 minute, 5-step instructional presentation. (Experienced instructors could figure out how to break down and teach the PMPs; some introductory instructional videos will be available later for those with less background in pronunciation instruction.) The basic hapticulum would be roughly: (a) daily warm up, (b) discourse prominence marking, (c) vowels--all of them, believe it or not, (d) focal/output groups, (e) selected intonation contours, (f) conversation rhythm, and (g) integration anchoring. (Listed there also are a few optional modules--which would not make sense without some further orientation to the EHIEP system.) All PMPs there are haptic-anchored. In one of the next blogposts (EHIEP II - Application) I'll explore how those PMPs (or any less haptic, typical, disembodied, cognitive "noticing" strategy, for that matter) can be used throughout language teaching programs. Keep in touch . . .

Friday, September 23, 2011

Pronunciation teaching: from methods to integrative "tool kits" and back again!

I'd characterize today's pronunciation teaching as presenting "integrative tool kits" to instructors, that is, providing instructors with a wide range of options in terms of techniques and perspectives on how to work with pronunciation in their classes and programs. For what they are, there are several good ones, including Gilbert's prosodic framework. For instructors with the requisite level of training, that kind of support is potentially adequate. For those less trained, and those who do not have a good sense of how to create optimal sequencing of pronunciation work, that type of informed "recommendation" is often relatively useless--at best.

The approach to working with pronunciation that has been developed on this blog is, in many respects, a method: a relatively fixed set of procedures, focusing on a limited set of phonological targets, presented to learners in a fixed order and then used in all aspects of language teaching. That such a system could be applicable, or capable of being modified, for most learners and classrooms is, I realize, quite a claim. In subsequent blog posts, I'll outline the EHIEP system in some detail, not as the "only" way to integrate pronunciation but as a model for how it can be done. In our "post-method" era, it may seem odd to propose one again, but, as always, methods will inevitably emerge from the current, often bewildering and chaotic, mix-and-match-ness.

Thursday, September 22, 2011

The grammar of covert, integrated pronunciation instruction

A good model for understanding the transition from

  • pronunciation taught in isolation in a free-standing pronunciation course, 
  • to a system where basically the same course is chopped up in pieces and inserted throughout the curriculum (generally in speaking or conversation courses), 
  • to quasi-integrated models where instructors are provided with a (mostly suprasegmental) set of tools and told to attend to relevant targets of opportunity--
  • to the kind of integration we have been considering in EHIEP 

is "Teaching grammar in second language classrooms" by Nassaji and Fotos.

Functionally, the relevant pronunciation issues are dealt with as well, but from the instructor's and learner's perspectives, the work has merged with both output and input instruction such that pronunciation is no longer seen as a discrete skill by many . . . attention to "pronunciation" flowing naturally out of the language being learned and the tasks of the classroom.

Some of it has been subsumed under the rubric of "expressiveness," but most of it is to be somehow engineered into the coursework as a meaningful response to a problem in effective communication. There are still short, explicit, introductory presentations of aspects of the system, analogous to the "form-focused instruction" versus "focus on form" distinction as characterized by Nassaji and Fotos, but overall the integrated pronunciation-oriented tasks should become so integral to the learning process that they, in effect, can no longer be perceived as optional or nonessential.

Can't see the teaching of pronunciation from that perspective? Good!

Tuesday, September 20, 2011

Marking the territory: What wolves can teach us about integrated anchoring and learning of L2 pronunciation

There are many who report using haptic anchoring in teaching pronunciation: techniques such as clapping hands, stomping feet, tapping on the desk or pulling at rubber bands, coordinated with stressed words or rhythm groups in speaking. Such "marking of the territory" does certainly help to reinforce the goal of the activity, but it is not "haptic-integrated" in the sense that we use the term here.

In the 2003 study of the marking behavior of wolves in Poland by Zub, Theuerkauf, Jędrzejewski, Jędrzejewska, Schmidt and Kowalczyk, an elaborate system was revealed in which each instance of territorial marking could only be interpreted by considering three parameters simultaneously: (a) significance, (b) variability, and (c) relationship to other marking(s). In other words, the efficacy and meaning of the "mark" depended entirely upon its relative place in an integrated, multi-dimensional map of the territory.

In the same sense, haptic anchoring of new pronunciation only contributes effectively when it is thoroughly integrated into speaking, listening, reading or writing tasks. If it is experienced outside of meaningful discourse, narrative and task sequencing, as traditional, isolated pronunciation exercises inserted in the midst of a lesson--no matter how vividly or dramatically the haptic anchoring is executed--chances are it will, in principle, be no more distinctive or memorable than when the wolves themselves employ the same marking "technology" for other, more mundane functions in the Bialowieza woods . . .

Getting (pronunciation) out of dictionaries and into the body!

Have just heard that our proposal (Mike Burri, Alaina Brodie, Michelle Goertzen, Olga Ulyasheva and me) for a 45-minute demonstration at TESOL in Philadelphia, "Getting Optimal Pronunciation from English Learner Dictionaries and beyond," has been accepted and will be on the program on the 31st of March, 2012. We are doing a similar presentation with Brian Teaman at the Tri-TESOL conference in Washington next month, on the 22nd of October: "Moving Pronunciation, Meaning and Usage from the Dictionary!" (See the earlier post on dictionary pronunciation for "dummies!")

In terms of demonstrating the applicability and effectiveness of haptic-integrated pronunciation work, this workshop format is promising. In about an hour or so, instructors get a good sense of the "felt sense" of the system. An earlier version at an ESL volunteer conference recently was (not surprisingly) "pronounced" a resounding success!

Monday, September 19, 2011

Thresholds in learning pronunciation

Previous posts have considered thresholds in several disciplines, including recent looks at learning to juggle and the use of hypnotic suggestion in facilitating pronunciation change. In Lessac's work there is a similar point in the 12-step process, where the student has arrived at a level at which a quantum leap has been achieved (the ability to perform "the call") and the voice has a new quality about it that does not follow directly from the work that has preceded it. The same experience is frequent in the development of skill with musical instruments and complex athletic skills.

Here is a study of, of all things, "critical evaluation of information resources" by upper-division undergraduates. Those who were seen as having crossed the threshold into their chosen professions, in some recognizable sense, were able to " . . . establish the authority, quality and credibility of [discipline-specific] information sources"--a remarkable, if somewhat mystical, experience.

Pronunciation change often happens just as abruptly, with analogous parameters. The "authority" of a sound or word is best thought of as its place in the system or in the words where it occurs; the "quality" of the sound, its resonant and articulatory features; the "credibility," both the felt sense (haptic anchoring) and the confidence attributed to the changed sound or word. To the learner, a new pronunciation should, for the most part, just "show up"--a pleasant surprise--not something consciously integrated into spontaneous speech. For the instructor, the designer, the process and protocols must be transparent and managed. We haven't crossed that threshold yet, but we are closer.

Pronunciation change: The feeling of what happens

One of the books (and theorists) that has greatly influenced my thinking on teaching pronunciation, and especially on the benchmarks in the process from the learner's perspective, is "The feeling of what happens: Body, emotion and the making of consciousness," by Antonio Damasio. To wildly oversimplify Damasio's main argument: the "feeling" or emotion underlying a thought, in neurological terms, happens before words or images come into awareness. At the time of the publication of the book, over a decade ago, that was a more striking assertion than it is today, of course, but he helped establish (or re-establish in Western thinking) the role of the body and embodiment in consciousness. (Another of his great books, "Descartes' Error," earlier set out the philosophical position.)

How that figures into haptic-integrated, more body-centered pronunciation teaching is that it sets up learner awareness to recognize when a targeted sound is being mispronounced--and does it in a way that generally does not disturb ongoing spontaneous speaking. As most would recognize, once a learner begins to recognize or notice the "old" pronunciation in oral output, the "game is afoot" (to quote Sherlock Holmes).

The feeling, or haptic anchor, of the sound will often be felt or experienced by the learner momentarily after the "error" occurs--but not before, so it does not interfere with thought and conversation. That post hoc (after the fact) monitoring is nearly certain to happen if the anchor has been well established with touch and movement and the learner has accepted the suggestion (in the best sense of hypnotic suggestion) that it is going to happen when constructive change is "afoot!" So "suggest" that benchmark to your students, and see what happens . . . or at least get a feel for it.

Sunday, September 18, 2011

TPR (Tempered, Pre-frontal cortex Regulation) Pronunciation

I'm often asked how HICP/EHIEP relates to Total Physical Response teaching methods. In some sense, one is a mirror image of the other. TPR, very effective in what it does well, focuses for the most part on learners connecting up movement to words and concepts--in that order. HICP, on the other hand, foregrounds movement, ideally creating an experience for the learner where all dimensions of the word are integrated simultaneously, but pedagogically, beginning with movement and then "attaching" sounds, letters and meanings.

The best way to understand what we try to achieve, however, relates to the previous post on juggling and pronunciation. What juggling creates, in part, is a temporary state where some of the conscious executive and planning functions of the brain are at least distracted or taken partially offline (the point of Nike's famous "Just do it!" slogan). Many of those functions are located in the pre-frontal cortex of the brain. By tempering the need to control, monitor and regulate emotional receptivity in the awareness of the learner, we can often capture enough focused attention to get a sound change registered and more likely to be remembered and recalled later. If you do haptic work, you are hereby commanded to use more TPR!

Saturday, September 17, 2011

Learning and coaching new pronunciation and juggling

There are several "methods" for learning to juggle, just as there are many methods for helping a learner change pronunciation. Linked above is a mathematician's method that presents a fascinating parallel with the HICP/EHIEP haptic-integrated line-of-march. (Also check out Tim Murphey's sometime!) The 7-step process is very much "felt sense" based but moves systematically from conscious focus on individual movements to "automatic" performance. It involves three phases:
(1) anchoring the basic moves
(2) instructor and learner working together to integrate the moves
(3) learner "solos!"

In PHASE ONE, the learner works on the felt sense of tossing (i) one ball in one hand, (ii) one ball going back and forth between hands, and then (iii) a second ball, introduced in the hand that will catch the other ball, to be tossed away just before the "main" ball arrives. The haptic parallel is basically anchoring the essential movements of the target sound without attempting to coordinate them. (There are rarely more than three critical parameters.)

In PHASE TWO, the learner begins to combine features, with the instructor/coach responding as needed to help achieve accurate individual movements. Next, learner and coach juggle/do the sound together. In the process, the learner's attention is directed away not only from environmental distractions but also from focus on the mechanics of each parameter, which is becoming more automatic and non-conscious. The instructor/student dance does much to enable that integration.

In PHASE THREE, learners "juggle" the new sound on their own. I have not seen a better model (or metaphor) for changing pronunciation. So should you learn to juggle first or simply "juggle" your teaching? It's probably a toss-up . . .

Friday, September 16, 2011

The case for slightly boring pronunciation classes

It is not easy to come up with a reason to justify mind-numbing, repetitive, decontextualized pronunciation drill, but maybe here is one . . . This 2001 study by Martin at the University of Missouri, summarized by a UK Daily Mail reporter, "discovered" that if you are having too good a time, it may affect your ability to remember "data." As the summary of the research notes, that could be caused by several factors, but one connection to haptic work is what others refer to as the Hansel and Gretel effect: context is encoded with the stuff to be remembered. If the trace back to a word is only "bread crumbs," chances are you won't get there; if the path back involves getting in a party mood first, that can be almost as bewildering later, it turns out.

The trick, of course, is to create an event or experience that is anchored as efficiently as possible (with "noticing" in overdrive), with attention limited to the target, not the visual or emotional setting--or even unduly "thick" memories of past events. The immediate linguistic context of a sound or word, however, must also remain as a permanent part of the package. To do that for most learners requires momentary, conscious control of mind, brain, body and immediate surroundings: from our perspective, a "haptic-integrated" felt sense that is both highly energized and relaxed at the same time. In such a heightened state of awareness it is, of course, relatively easy to stay "in touch"-- and nearly impossible to be bored--or miss the party.

Wednesday, September 14, 2011

Getting pronunciation out of the dictionary in 9 steps . . . for dummies!

With the development of electronic dictionaries, you'd wonder if "dead tree-tionaries" are on the verge of extinction. Not so, not quite yet. The felt sense of a good print learner dictionary with audio file attached--and there are several on the market now--is still preferable for most students at earlier stages of pronunciation learning, and for many long beyond that. The combination of the more accessible full visual field of the dictionary entry and the (generally) good color/font layout adds considerably to the material available for good encoding and recall. Plus, the "feel" of the physical book and the ability to place it in an optimal position in the visual field are hard to match at the moment electronically. (Although probably not for long!)

We have developed a haptic-based protocol for teaching students how to go to the dictionary and have a much better chance of coming away with the pronunciation, grammatical category, meaning and usage. The key, of course, is continuous haptic anchoring and sequencing--not just saying the word or words to yourself or out loud. In "Public Speaking for Dummies," 2nd edition, are, in fact, all the basic elements of the protocol (interpreted with a little imagination and translation, of course!), just not quite in this order:

(a) Identify the stressed syllable.
(b) Identify and anchor the vowel quality in the stressed syllable.
(c) Say the word out loud, anchoring the stressed vowel with emphatic (rise-fall) intonation.
(d) Anchor the grammatical category, doing the emphatic (rise-fall) statement, "It's a X!" twice.
(e) Repeat (c).
(f) Read the meaning aloud twice, using a flat, "robot-like" but good-humored intonation contour.
(g) Repeat (c).
(h) Anchor the usage example twice with "declarative" or "rising/question" intonation, whichever is appropriate.
(i) Repeat (c) one final time.

We'll be doing this next month at the Tri-TESOL conference in Washington, "Haptic Dictionary Pronunciation," and at the TESOL convention in Philadelphia next March (2012). Even if you can't join us to experience it first hand, try that 9-step haptic dance with your students. They'll get a lot out of it.

Monday, September 12, 2011

Pronunciation (learning) posture

In part because of the importance of upper body fluidity in learning the stress and rhythm of English (haptically, at least--see the earlier "Fly Fishing" post), I have often recommended that a student with poor posture, often the result of an earlier accident or chronic work-related stress, do Alexander Technique training (linked here and in the right column). I considered the title "Heading off pronunciation problems" for this post, to set up one of the basic AT techniques: helping the student achieve such upper body relaxation that there is a strong felt sense that the head is "floating" atop the neck and shoulders, with optimal body posture and breathing.

In that frame of mind and body, attention focus and the ability to mirror HICP pedagogical movement patterns should be heightened substantially--corroborated by research from many disciplines. So, if you are being "stiffed" by your students, perhaps you (or they) should "head off" to your local Alexander practitioner!

Sunday, September 11, 2011

The Pain of Pronunciation Teaching

How often do you hear it said that teaching pronunciation is such a pain? We obviously don't feel that pain--at least not all that intensely most of the time, but . . . there is something important that we can learn from research and therapeutic approaches to dealing with pain: how to modulate the intensity of anchoring. Whether it is the resonance in the learner's skull, the sensation of the vocal cords vibrating, the turbulence of the air across the lips, the pressure of the tongue pushing on something, or the amount of energy involved in one hand tapping, scratching or sliding past the other, any sensory input is scalable.

The earlier posts on the use of the concept of "felt sense" explored how conscious assessment and modulation of somatic intensity is used in many disciplines. Learners can be easily trained to report on a scale of 1-5 just how intense the feeling is. Once that process is established, it can be used to monitor and adjust the intensity up or down, depending on the situation. For example, in early phases of accent reduction work, arriving at a scalar framework for communicating about the intensity and location of resonance centers in the upper body is essential. (The same general process is the basis of much stress reduction work as well.)

So . . . should you currently find yourself on the low end of the "haptic-o-phile" scale (those who love haptic-integrated pronunciation teaching), it is probably time for you to get moving--or at least get in touch! If you are not already a solid 4, post a comment as to why and we'll see what we can do to ease your pain in future posts!

Saturday, September 10, 2011

(Fly) Fishing for the felt sense of a relaxed stressed syllable

Every sport has its identifiable moments when mind, body and purpose intersect, when all conscious attention is on the action, not the form of how it is being done. One of my favorite metaphors (or: haptic-a-phors) has been the flowing, relaxed upper body of the accomplished fly fisher. If nonnative speakers are fly fishers, it is almost always easy for them to "catch" and integrate English rhythm and stress patterns. Although this video is not all that great (and you have to sit out the commercial), the visual of the upper body fluidity and torso nod, especially that of the instructor, is pretty good.

Fly fishing upper torso movement accompanying short phrases or sentences never fails to anchor the basic pedagogical movement pattern. (To get full haptic anchoring, have the learner "catch" the rod hand with the control hand on the stressed syllable.) If you are not a fly fisher, cast around for a different sport, e.g., a basketball slam dunk, hammering in roofing nails, playing one sforzando after another on the piano, pounding the desk with your iPad after it freezes for the last time on you . . .

Friday, September 9, 2011

Haptic note taking and study notes for pronunciation work

There are a number of note-taking software applications available today that fit well with haptic-integrated pronunciation work. I know of none that can actually encode movement and touch, but several now allow integration of audio and video, including Microsoft OneNote. The iPhone app Mental Note, for example, works well for taking notes and creating practice routines focusing on one sound or sound process at a time.

Here is a blog post by Pietrzak that includes both an interesting model for organizing the kind of study-notes system that we need and an experimental "haptic wristband," which sounds fascinating. Pietrzak works in the area, in fact; I will see if I can go see that in person sometime. If sounds are well anchored haptically in the classroom, a note system that is only video, auditory and visual should be adequate for the time being. Take note, however: a fully integrated haptic note-taking system is not far out--or far off!

Thursday, September 8, 2011

The felt sense of writing (speaking, listening and reading) with the body

I think I missed a great workshop by Sondra Perl on writing with the body. In previous posts I have commented on the felt sense of haptic-integrated pronunciation work and the "full-body" or kinesthetic listening comprehension that gradually develops. If you can write "with" the body, you can obviously read with the body as well. In part, that means that the writer or reader is intentionally trying to access and validate less conscious memories or perceptions of sensations of various kinds and integrate them more effectively with conscious processing of the task at hand. (That is the essence of the concept of "felt sense" and the central concept of HICP/EHIEP instruction.)

There are many exercise, training and performance systems that use similar concepts and language. Were I to attempt to identify the most pervasive problem with contemporary pronunciation instruction, it would have to be either dis-integrated/decontextualized methods or lack of systematic "body" engagement. One or the other . . . I think I'll go with the body!

A nose for Pronunciation work

I have for years played with the use of essential oils in pronunciation work, especially rosemary and pine. Experience has shown that many learners are simply too sensitive to work with any level of aroma-therapeutic technique; for that reason I have only used oils with individuals, never a class. In our work, on some protocols, either the hand passes close to the nose or the nose passes close to a deltoid--where oil can be applied (very lightly!) to good effect. This research highlights the well-documented potential impact of rosemary on memory and concentration, contrasting it with lavender, which also has some useful applications in other contexts.

So, after you have tried it on yourself, consider employing rosemary in a one-on-one lesson with an amenable learner sometime. It never fails to be memorable and make "scents!"

Wednesday, September 7, 2011

L2 Identity, pronunciation and body imaging

One of the effects of haptic anchoring and attention to the felt sense of the L2 sound system, for many learners, is an inevitable refining or consolidating of their L2 image and identity. There has been a great deal of research on the nature of L2 identity and its socio-cultural dimensions, but relatively little in our field on the dynamics of how language, and especially one's pronunciation or accent, figures into that process. (There is, however, a great deal of research and writing in the general area of embodiment theory and identity.)

We need only turn to professional actors for insight. Here is the "mission" statement from an ongoing project at the University of London: "In a long-term enquiry this project is investigating the best methods of maintaining psychological and physical health within the acting community, regarding informed and intelligent awareness of self/body/identity within the complexities of professional and industry contexts." Much of the discussion could apply as well to our work where the learner's professional image and identity, from any number of perspectives, come into play.

Tuesday, September 6, 2011

Semiotics of the visual field

Here is a research study that uses the "Personal Styles Inventory" in exploring the personality traits of subjects arrested for DWI. (My colleague and mother-in-law) Dr Corrine Cope was instrumental in developing the conceptual framework, depicted in the octagon below, for identifying the relationships between traits in an individual. The layout of the sectors of the PSI octagon is strikingly similar to the underlying bases of visual representations of many psychological and philosophical systems: the basic "meaning" of the vertical/horizontal axes. (Note the simple External/Internal ~ Change/Stability figure at the bottom.)
From several earlier posts examining the character or tendencies inherent in the various areas of the visual field (see links to visual metaphor usage, NLP, OEI and phonaesthetics), it has been acknowledged that anchoring or retrieving a sound or process higher or lower--or more to the left or the right--in front of the learner should make a difference. The meanings of the PSI octagon sectors provide a fascinating template onto which to map the felt sense of the vowel system of a language . . . as long as the front vowels are to the right, as in EHIEP vowel displays, rather than to the left, as traditionally charted by phoneticians.

Monday, September 5, 2011

"Phonetic" Justice

An excerpt from "Poetic Justice" by rapper Kool Moe Dee. The full text of the linked rap is not recommended or endorsed, but he does make an interesting metaphorical link (depending on your worldview) between "phonetic" and "justice" which is worth considering briefly:

"They play the weak cause the weak won't speak
But that just makes a fan go seek
A station that they know will bust this
(Who's on the radio) poetic justice
Poetic Justice
What goes around, comes around
Goes around, comes around (2x)
Doing justice to poetry
Poetic, phonetic, genetic, fanatic - you connect it."

The primary focus of HICP work has evolved to center on the classroom where

(1) There is minimal technology, only video playback.
(2) The instructor has minimal, if any, training in phonetics and pronunciation.
(3) The students may be motivated (or not--probably not) to practice outside of class and may or may not be literate.
(4) All sound change work (what is still referred to by some as "pronunciation instruction") is seamlessly integrated into class work.
(5) The students generally cannot afford, or wouldn't bother with, an Android or iPhone pronunciation app to work on their "intelligibility."

I had joked years earlier that I specialized in "those with accents and money." EHIEP protocols still work with those clients, of course, but "phonetic justice" demands something more . . .

Sunday, September 4, 2011

The felt sense of stress timing (Going to the mat with Pilates)

For over a decade (during my "pre-haptic period") I had used the concept of the "upper torso nod" in various contexts to help learners get the felt sense of what the body is doing in English when it stresses words or phrases. The function of the upper torso nod is well documented in gesture and movement research. That training seems to transfer effectively into public speaking with most students, especially in preparing them to give short, focused speeches or oral reports.

The problem was always how to get the bodies to move with correct posture, efficiency and consistency, so that there was as little extraneous, random gesturing or wobbling as possible--trying to appear confident and business-like while speaking. These Pilates exercises are about the best haptic grounding for the felt sense of the well-executed upper torso nod that I have seen (and tried!). Try it yourself. I'm sure you, too, will give it your (haptic) nod of approval.

Saturday, September 3, 2011

Hammering in or "haptic anchoring" of new pronunciation!

There are few accessible models today that attempt to explain how pronunciation gets integrated into spontaneous speaking, or how to make that happen. One of the more radical (or "retro," depending on your perspective) is that of Dr Olle Kjellin, which involves carefully engineered, massive doses of what he terms "quality repetition." The choral repetition is accompanied by various types of body engagement or awareness of the articulatory process and ingenious exercises, but the fundamental driver of integration is repetition. (I'm a big fan of Olle's work and the degree of accuracy he is able to achieve consistently--although its general applicability is somewhat limited.) Contrast that with:

(a) Communicative methods (where genuine communication or attention to the correct form/pronunciation is seen as the critical link between the form and its pronunciation, not repetition drill) or
(b) More cognitive or metacognitive approaches (where planning and insight into the problem and the system are understood as the more important influences on ultimately anchoring new forms--"simple" repetition being generally not favored or recommended) or
(c) HICP/EHIEP-based methods, where, with in-situ (haptic) anchoring (changing pronunciation in the context of ongoing content-based, speaking- or listening-focused instruction), there should be as few overt repetitions of changed forms as possible.

It is worth repeating, however: either drill change in with gusto as does Kjellin (assuming you can keep your students engaged, outgoing and motivated) or use a lighter, meaning-centered, integrated (haptic) touch to move them to intelligible change. There appears to be relatively little "communicative middle ground" available. (One of my heroes of the structuralist/ALM era and right up until his death in 2006 was, in fact, Hector HAMMERly.)

Connecting "Inner Speech" to pronunciation instruction

It has been established for some time that in controlled silent reading the eyes will pause slightly longer on long vowels than on short vowels. Although I do not have the instrumentation to check this, based on this 2010 study by Heustegge, I would assume that EHIEP protocols, which assign longer haptic anchors to long vowels and diphthongs, ought to be doing precisely the same thing--that is, helping the learner develop more accurate representations of the vowels in memory. In the study, that effect was evident only in "silent speech," when subjects were apparently subvocalizing at some level, not when they said the words out loud.

That does suggest that teaching vowels--or even prosody--might work even more efficiently if we begin with more visual/haptic anchoring (downplaying overt, spoken repetition) and then bring in monitored audition (speaking) a bit later, more gradually. I know you are saying to yourself: That sounds crazy to me! (Quick replay, please: Did your eyes pause longer on the longer vowels? QED!) That is precisely how it is done in the EHIEP system.

Thursday, September 1, 2011

Bottom-up pronunciation teaching: "Touchinami"

Here is a 1997 article by Chela-Flores that was influential in forming my understanding of the place of rhythm in pronunciation instruction. Essentially, the position was that pronunciation instruction should be based on rhythm groups, with all other elements seen as fitting within and taught within that structure. Lessons are rhythm-centered; the felt sense of a word has a clear rhythmic identity, etc.

Now, take that concept and add on top of each rhythm group an "intonation group," as characterized nicely by Celik--and a "haptic anchor," as developed here on this blog--and you have what, in EHIEP work, we call a "touch-i-nami" (from Japanese: touch wave), a basic pedagogical tool: a rhythm group with a chunk of intonation "on top," well grounded haptically in memory. Bottoms up!

"Fast and furious" pronunciation learning

The Web is filled with sites claiming to have products that accelerate learning of the pronunciation of English. Almost without exception (I haven't reviewed all of them!) they probably work--as long as you are not concerned with developing the general ability to communicate intelligibly. In fact, many of them, such as this one, probably serve to disconnect pronunciation even further from spontaneous speech by relying so heavily on visual engagement. (We have seen in earlier posts the potential impact of that visual bias.)

Although for other reasons, I sort of like the "Eye-speak" name of the program, in the sense that it is not far off in concept from any attempt to teach pronunciation as cute little vignettes inserted throughout the curriculum or course, without systematic follow-up using the skills introduced in succeeding classes where speaking or listening is involved.

I am beginning to believe that a great deal of what passes for good pronunciation instruction, done for the most part in isolation from basic course content--even with the typical proviso that the students go practice on their own with no further guidance--actually works against acquisition of intelligible speech by either partitioning it off (very successfully) or having little or no impact in the first place. Actually, with a little haptic tweaking, that just might work: int-EYE-gration. What a concept!