Tuesday, September 8, 2020

Next Haptic Pronunciation Teaching (Free!) Webinars!

We (the MATESOL at Trinity Western University) are doing two FREE introductory webinars on haptic pronunciation teaching: Friday, October 2nd and Saturday, November 14th, 2020. The webinars are held from 7:30 p.m. to 9:00 p.m. PST. Contact william.acton@twu.ca for more information and reservations. (Places limited!) Here are at least two reasons we are offering them: 

First, "haptic" is the only way to teach pronunciation (at least in our modest opinion!) 

Second, every spring, beginning in mid-January, we offer an online, 3-credit graduate course, Ling 611 - Applied Phonology. Roughly one quarter of that course is "Haptic Pronunciation Teaching." 

For more detail on the webinars, the noncredit haptic course, and the grad course, go here! 

You can apply to take either the regular course (for about $2200 CAD, as a special student) or the noncredit haptic stream by itself (for about $500--comes with a certificate.)

You do need some prerequisite work to do Ling 611, for example, some background in phonetics, linguistics and pronunciation teaching. (Check with me if you have a question on that.) No prereqs required for the haptic stream, however. The grad course runs 14 weeks; the haptic certificate, 12. The grad course takes about 8~10 hours a week; the certificate, about 3. 

Ling 611 or the certificate course can also be hosted at your school or program, done for groups or individually.  

See you next month!



Tuesday, August 4, 2020

(New) Acton Haptic Accent Enhancement for International Professionals

For the last 5 or 6 years I have been working with a "new" accent enhancement system, based on haptic pronunciation teaching face-to-face, on campus, with select international graduate students and professionals. With COVID, beginning early this spring, I began working on a new online version of that individualized course. It is all one-on-one (or possibly one-on-two) with weekly, 45-minute sessions on Zoom or SKYPE. 

I have been doing accent work since about 1975 or so. The first paper was published on it in 1984. (If you'd like a free copy of that, let me know and I'll send you one.) Our 2013 article gives you a pretty good picture of what it is about. Would love to work with you if you have the "wiring" and time. If interested, check out the AHAE program page. (It is still a work in progress but it will give you a pretty good idea of what it is about.) 


Sunday, July 19, 2020

Fixing your eyes on better pronunciation--or before it!

Early on in the development of haptic pronunciation teaching, we began by borrowing a number of techniques from Observed Experiential Integration (OEI) therapy, developed by Rick Bradshaw and colleagues about 20 years ago. OEI has proved to be particularly effective in the treatment of PTSD. One of the basic techniques in OEI is eye tracking; that is, therapists carefully control the eye movements of patients, in some cases stopping at places in the visual field to "massage" points through various loops and depth-of-field tracking.

We discovered, in attempting to control students' eye movement (having them follow with their eyes the track of the gestures being used across the visual field to anchor sounds during pronunciation work), that although memory for sounds seemed better, holding attention for such extended lengths of time could be really counterproductive. In some cases, students even became slightly dizzy or disoriented after only a few minutes. (And, in retrospect, we were WAY out of our league . . . )

Consequently, attention shifted to visual focus on only the terminal point in the gestural movement where the stressed syllable of the word or phrase was located, where the hands touched. We have been using that protocol for about a decade.

Now comes a fascinating study by Badde et al., "Oculomotor freezing reflects tactile temporal expectation and aids tactile perception," summarized by ScienceDaily.com, that helps refine our understanding of the relationship between eye movement and touch in focusing attention. In essence, what the research demonstrated was that by stopping or holding eye movement just prior to when a subject was to touch a targeted object, the intensity of the tactile sensation was significantly enhanced. Or, the converse: random eye movement prior to touch tended to diffuse or undermine the impact of touch. That helps explain something . . .

The rationale for haptic pronunciation teaching is, essentially, that the strategic use of touch both successfully manages gesture and much more effectively focuses the placement of stressed syllables in the words accompanying the gesture in gesture-synchronized speech. In almost all cases, the eyes focus in on the hand about to be touched, just prior to what we term the TAG (touch-activated ganglia), where touch literally "brings together" or assembles the sound, body movement, vocal resonance, graphic visual schema, and meaning of the word or phoneme itself.

In other words, the momentary freezing of eye movement an instant before the touch event should greatly intensify the resulting impact and later recall produced by the pedagogical strategy. We knew it worked, just didn't really understand why. Now we do.

Put your current pronunciation system on hold for a bit . . . and get (at least a bit) haptic!

Original source:
Stephanie Badde, Caroline F. Myers, Shlomit Yuval-Greenberg, Marisa Carrasco. Oculomotor freezing reflects tactile temporal expectation and aids tactile perception. Nature Communications, 2020; 11 (1) DOI: 10.1038/s41467-020-17160-1

Sunday, June 28, 2020

Haptic pronunciation teaching (un)masked!

A student just asked the question: How can I teach pronunciation in a mask? Where he is, already back in the classroom, he and most of his students are wearing masks. It can be difficult enough when you can't see your students' faces, let alone when they can't see yours! The end of pronunciation teaching as we know it? No, not at all. Here's how . . .

In 2014, I was in the Middle East doing teacher training workshops. I was scheduled to do one at a women's college. NEVER occurred to me that the (150) students might be wearing burqas . . . which almost all of them were, covered head to foot. One of the most successful and well-received sessions I have ever done. (See the blogpost on that for more detail as to how it happened and my thoughts as to why it seemed to go so well!) 

With the exception of most consonants and a few features of vowels, most everything else of real importance in pronunciation work can be done in a mask . . . haptically. By that I mean taught "from scratch," except where the learner has relatively little idea of where things in the vocal tract have to go and touch to come up with a vowel or consonant sound.

Working on suprasegmentals (rhythm, stress, and intonation) in masks is a piece of cake; in fact, it may even be preferable in some cases. If you haven't already, go to www.actonhaptic.com and watch the demo videos. Even for vowels, you can do correction and feedback in a mask effectively, as long as the learner has the basic physical routine stored "in there" somewhere where it can be recalled.

Doing a new demonstration shortly of more ideas on effective "masked" pronunciation as part of the upcoming webinars, July 24th and 25th. Contact info@actonhaptic.com for reservations.

Wednesday, June 24, 2020

Getting a feel for pronunciation: What our pupils can tell us!

What do you do with your eyes when you are struggling to understand something that you are listening to? (Quick: Write that down.) Now some of that, of course, depends on your personal wiring, but this new study “Asymmetrical characteristics of emotional responses to pictures and sounds: Evidence from pupillometry” by Nakakoga, Higashi, Muramatsu, Nakauchi, and Minami of Toyohashi University of Technology, as reported in neuroscience.com, sheds some new "light" on how the emotions may exert influence on our ongoing perception and learning. Using eye tracking and emotion measuring technology, a striking pattern emerges.

From the summary (boldface, mine):
"It suggests that visual perception elicits emotions in all attentional states, whereas auditory perception elicits emotions only when attention is paid to sounds, thus showing the differences in the relationships between attentional states and emotions in response to visual and auditory stimuli."

So, what does that imply for the pronunciation teacher? Several things, including the importance of what is going on in the visual field of learners when they are attempting to learn or change sounds. It has been long established that the process of learning pronunciation is especially susceptible to emotion. It can be an extraordinarily stressful experience for some learners. Even when there are no obvious stressors present, techniques such as relaxation or warm ups have been shown to facilitate learning of various aspects of pronunciation.

Consequently, any emotional trigger in the visual field of the learner can have either a "pronounced" positive or negative impact, regardless of where the instructor is attempting to direct the learners' attention. If, on the other hand, learners' attention is focused narrowly on auditory input, you have a better chance of managing the emotional impact FOR GOOD, provided you can successfully manage or restrict anything going on in the visual field of the learner that could be counterproductive emotionally. (Think: Hypnosis 101 . . . or a good warm up . . . or a mesmerizing lecture!)

That doesn’t mean we teach pronunciation with our eyes closed . . . when  it comes to the potential impact of the visual field on our work. Quite the contrary! How does the “front” of the room (or the scenes on screen) feel to your pupils? Can you enhance that? 

To learn more about one good (haptic) way to do that, join us at the next webinars!

Original Research: Open access
“Asymmetrical characteristics of emotional responses to pictures and sounds: Evidence from pupillometry” by Nakakoga, S., Higashi, H., Muramatsu, J., Nakauchi, S., and Minami, T.
PLOS ONE doi:10.1371/journal.pone.0230775

Thursday, June 4, 2020

CPR for Pronunciation homework and teaching . . . that works!

Excellent study by Martin, "Pronunciation Can Be Acquired Outside the Classroom: Design and Assessment of Homework-Based Training," a real MUST READ if you are serious about pronunciation teaching, demonstrating that at least one kind of (computer-mediated) homework system is not only effective, but may work as well as classroom-only instruction. 

The basic process in the homework phase was what is termed iCPR: computer-based, intelligibility-focused cued pronunciation reading. Learners are provided with explicit instruction and explanation, and then both perceptual and production training and practice, with feedback in the perceptual phase/practice only. 

The study involved adult learners of German, extending over 10 weeks, with the equivalent of about 30 minutes of instruction either in class or out of class. The in-class lessons seemed to closely mimic the process and time allocation of the homework. From a number of perspectives, both treatments showed equally significant improvement and student satisfaction. Methodologically, the project seems tight, although the use of the term "homework" is probably a little misleading today, when the learner never really "leaves" the web in some form during the day except for sleep . . . 

In corresponding with the researcher, my only question was: How (on earth) did you get the students to DO their homework? Surely it had something to do with the "sell" up front, the allocation of grade points (easily accounted for in the computer-mediated system), and (probably) early student awareness, to some degree, of the program's efficacy. So . . . it looks well conceived, a highly detailed blueprint for how to set up a similar system. 

Setting aside for the moment the question of just how readily the process can be adopted and adapted, what this shows is that Martin has given us another intriguing picture of the future of pronunciation teaching: pronunciation work handled outside of in-class instruction. 

To paraphrase Lincoln Steffens: "I have seen the future (of pronunciation teaching) and it works. [remark after visiting the Soviet Union in 1919]” or maybe even Marshall McLuhan: "If it works, it's obsolete." . . . The field is changing fast. Pronounced change, to put it mildly!

The Modern Language Journal, 2020, 1–23. DOI: 10.1111/modl.12638. National Federation of Modern Language Teachers Associations.

Tuesday, May 26, 2020

The sound of gesture: Endings of gestures in language (and pronunciation) teaching

Quick reminder:  Only one week to sign up for the next haptic pronunciation teaching webinars! 

Sometimes getting a rise (ing pitch) out of students is the answer . . . This is one of those studies that you read where a number of miscellaneous pieces of a puzzle momentarily seem to come together for you. The research, by Pouw and colleagues at the Donders Institute. “Acoustic information about upper limb movement in voicing”, summarized by Neurosciencenews.com, is, well . . . useful.

In essence, what they "found" was that at or around the terminal point of a gesture, where the movement stops, the pitch of the voice goes up slightly (for a number of physiological reasons). Subjects, with eyes closed, could still in many cases identify the gesture being used, based on parameters of the pitch change that accompanied the nonsense words. The summary is what is fun and actually helpful, however.

From the summary:

"These findings go against the assumption that gestures basically only serve to depict or point out something. “It contributes to the understanding that there is a closer relationship between spoken language and gestures. Hand gestures may have been created to support the voice, to emphasize words, for example.”

Although the way the conclusion is framed might suggest that the researchers may have missed roughly three decades of extensive research on the function of gesture, from theoretical and pedagogical perspectives, it certainly works for me--and all of us who work with haptic pronunciation teaching. That describes, at least in part, what we do: "  . . . Hand gestures . . . created to support the voice, to emphasize words, for example.” Now we have even more science to back us up! (Go take a look at the demonstration videos on www.actonhaptic.com, if you haven't before.) 

What can I say? I'll just stop right there. Anything more would just be but an empty gesture . . .

“Acoustic information about upper limb movement in voicing,” by Wim Pouw, Alexandra Paxton, Steven J. Harrison, and James A. Dixon. PNAS. doi:10.1073/pnas.2004163117

Monday, May 18, 2020

Cognitive Restructuring of Pronunci-o-phobia - (and Alexa-phobia): Hear, hear! (Just don't peek!)

Caveat emptor: If you are emotionally co-dependent on Alexa, you might want to "ALEXA, STOP ME!" at this point. We love you, but you are lost . . .

New study by "a team of researchers at Penn State" (summarized by ScienceDaily.com) explored the idea of using ALEXA to help you "cognitively restructure" your public speaking anxiety: Anxious about public speaking? Your smart speaker could help. Actually, what they did was to compare two different ALEXAs, a more social one with a less social one, in talking you through/out of some of your pre-speech public speaking anxiety. (Fasten your seat belt . . . ) Subjects who engaged with the former felt less stressed at the prospect of giving a speech. From the summary from the researchers:

"People are not simply anthropomorphizing the machine, but are responding to increased sociability by feeling a sense of closeness with the machine, which is associated with lowered speech anxiety . . . Alexa is one of those things that lives in our homes, . . As such, it occupies a somewhat intimate space in our lives. It's often a conversation partner, so why not use it for other things rather than just answering factual questions?"

Houston, we have a problem. Several, in fact. For instance, if ALEXA can do that, imagine what a real person online, audio only, could accomplish! Forget Zoom and SKYPE! I'd predict that the audio-only condition alone may account for some, if not a great deal, of the reduction in anxiety. In that condition, a real person might be exponentially more effective . . . worth checking on, I'd think. In addition, from the brief report we get no indication as to what ALEXA actually said, only that "she" was more socially engaging in one condition than the other. 

What it does suggest, however, is that we should be able to use the same general strategy in dealing with the well-researched anxiety on the part of instructors and students toward pronunciation work. The impact of a person facing you as you try to modify your pronunciation is important. Many learners literally have to close their eyes to repeat a phrase with a different articulation--or at least dis-focus their eyes momentarily. That is an especially critical dimension of haptic and general gesture techniques in pronunciation teaching. 

This idea is explored in Webinar II of the upcoming Haptic Teaching Webinars I and II, June 5th and 6th. Please join us! (Contact info@actonhaptic.com to reserve your place!) 

And if you'd like to continue this discussion, give me a call . . . Keep in Touch!

Penn State. (2020, April 25). Anxious about public speaking? Your smart speaker could help. ScienceDaily. Retrieved May 18, 2020 from www.sciencedaily.com/releases/2020/04/200425094114.htm

Saturday, May 2, 2020

Killing pronunciation 12: Memory for new pronunciation: Better heard (or felt) but not seen!

Another in our series of practices that undermine effective pronunciation instruction!

(Maybe) bad news from visual neuroscience: You may have to dump those IPA charts, multi-colored vowel charts, technicolor xrays of the inside of mouth, dancing avatars--and even haptic vowel clocks! Well . . . actually, it may be better to think of those visual gadgets as something you use briefly in introducing sounds, for example, but then dispose of them or conceptually background them as quickly as possible.

New study by Davis et al. at the University of Connecticut, Making It Harder to “See” Meaning: The More You See Something, the More Its Conceptual Representation Is Susceptible to Visual Interference, summarized by Neurosciencenews.com, suggests that visual schemas of vowel sounds, for example, could be counterproductive--unless, of course, you close your eyes . . . but then you can't see the chart in front of you. 

Subjects were basically confronted with a task where they had to try to recall a visual image, physical sensation, or sound while being presented with visual activity or images in their immediate visual field. The visual "clutter" interfered substantially with their ability to recall the other visual "object" or image, but it did not impact their recall of "images" in other sensory modalities (auditory, tactile, or kinesthetic), such as non-visual concepts like volume, heat, or energy.

We have had blogposts in the past that looked at research where it was discovered that it is difficult to "change the channel": if a student is mispronouncing a sound, many times just trying to repeat the correct sound instead, without introducing a new sensory or movement set to accompany the new sound, is not effective. In other words, an "object" in one sensory modality is difficult to just "replace"; you must work around it, in effect attaching other sensory information to it (cf. multi-modal or multi-sensory instruction).

So, according to the research, what is the problem with a vowel chart? Basically this: the target sound may be primarily accessed through the visual image, depending on the learner's cognitive preferences. I only "know" or suspect that from years of tutoring and asking students to "talk me aloud" through their strategies for remembering the pronunciation of new words. Access is overwhelmingly by way of the orthographic representation, the "letter" itself, or its place in a vowel chart or listing of some kind. (Check that out yourself with your students.)

So . . . what's the problem? If your "trail of bread crumbs" back to a new sound in memory is through a visual image of some kind, then any clutter in your visual field that is the least bit distracting as you try to recall the sound is going to make you much less efficient, to put it mildly. That doesn't mean you can't teach using charts, etc., but you'd better be engaging more of the multisensory system when you do--or downgrade their importance in your method appropriately--or your learners' access to those sounds may be very inefficient, at best. 

In our haptic work we have known for a decade that our learners are very susceptible to being distracted by things going on in their visual field that pull their attention away from experiencing the body movement and "vibrations" in targeted parts of their bodies. Good to see "new-ol' science" is catching up with us!

I've got a feeling Davis et al are on to something there! I've also got a feeling that there are a few of you out there who may "see" some issues here that you are going to have to respond to!!!

Wednesday, April 15, 2020

What do you expect? (A "Tsough" question for pronunciation teaching!)

Intriguing title of a recent piece/summary on ScienceDaily.com: "Flaw in Rubber Hand Illusion raise tsough questions for psychology" (a real double threat: not only a spelling miscue, but a grammar issue as well). Do those two little "glitches" affect your expectations as to what is in the article? Unavoidably, eh . . . and that is too bad. The research by Lush of the University of Sussex being summarized is potentially paradigm shaking (original title): Demand Characteristics Confound the Rubber Hand Illusion.
From the summary: 

"The Rubber Hand Illusion, where synchronous brush strokes on a participant's concealed hand and a visible fake hand can give the impression of illusory sensations of touch and of ownership of the fake hand, has been cited in more than 5,000 articles since it was first documented more than 20 years ago."

What that appeared to establish early on is that the brain is in some sense "hard wired" to transfer sensation throughout the body, as a function of consciousness. The problem, according to Lush, and demonstrated in the study, is that the results from experiments exploring that effect may be hopelessly biased by what are termed the "demand characteristics" of the study: in effect, (hypnotic-like) suggestion as to what the researcher expects to find and the subjects to experience. 

In other words, subjects will do their best to exhibit the effect being elicited. In Lush's study, subjects' expectations for how they would respond to the "rubber hand," having read the original introductory protocols, were striking in the extent to which they were biased in favor of experiencing the "ghost sensations" in the rubber hand. 

Since in haptic pronunciation teaching the hands play a central role in linking sound, gesture and concepts, we clearly have a "pony in this race" as well.

A couple of decades ago, in a piece on the role of suggestion in language teaching in the JALT Language Teacher, I cited a paragraph from a (then) popular student pronunciation book (bold-face, mine):

"Acquiring good pronunciation is the most difficult part of learning a new language. As you improve your articulation you have to learn to listen and imitate all over again. As with any activity you wish to do well, you have to practice, practice, practice, and then practice some more . Remember that you cannot accomplish good pronunciation overnight; improvement takes time. Some students may find it more difficult than others and will need more time than others to improve" (Orion, 1989, pp. xxiii-iv).

I went on to note: "In those . . . words and phrases . . . can you not hear echoes of that famous line above the door in Dante's Inferno, "Abandon hope, all ye who enter here?"

This relates back to the post two blog posts ago on "pronunciation preambles," that is, the way instructors set up pronunciation work. Human beings, at least most of them, are highly suggestible. They have to be, to be capable of picking up subtle cues in their environment quickly and efficiently. Pronunciation teaching, and pronunciation in general, has gotten a bad rap, some of it deservedly so, of course, but how it is presented to learners, consciously and subconsciously, makes an enormous difference in outcome.

A "slight of hand" in the truest sense. What are you suggesting?

University of Sussex. (2020, April 10). Flaw in Rubber Hand Illusion raise tsough questions for psychology. ScienceDaily. Retrieved April 15, 2020 from www.sciencedaily.com/releases/2020/04/200410162432.htm

Friday, April 10, 2020

Haptic Pronunciation Teaching Webinars!

The first new, v5.0 "double webinar" is set to go, October 2nd and November 14th, 1930~2100 hours, Pacific Standard Time. Reserve your place now. (No deposit required.)

The webinars are highly experiential and participatory. You'll need:
  • a hands-free setup, preferably projected on a TV screen, laptop or iPad of some kind (a handheld with a BIG screen is OK, too), positioned at eye level
  • a wireless headset, or no headset at all (best); a headset with a long cord is adequate
. . . since you have to stand up and "dance" on several occasions!
The 75-minute recorded sessions are followed by a 15-minute Q and A.
Enrolment is limited to 50 participants in each webinar. There may be some time-zone restrictions, depending on early registration. Reserve your place now at: william.acton@twu.ca

Webinar topics 
  • Introduction to Haptic Pronunciation Teaching
  • Dictionary use for pronunciation
  • North American English vowels
  • Syllables and phrase grouping
  • Intonation 
  • Haptic homework
  • Select consonants
  • Fluency and linking
  • Conversation rhythm and pausing
  • Advanced intonation and secondary stress
  • Classroom correction, feedback integration techniques
Webinars can also be offered exclusively for a single English teaching organization, as can "on the ground," f2f one-day workshops. (Contact info@actonhaptic.com for information on group packages.)
The noncredit haptic pronunciation course meets in a weekly 1-hour webinar and includes about two hours of practice following each session. Course completion requires passing a certification test, which includes a video test. 
The graduate course, Ling 611 - Applied Phonology, is a 3-credit online seminar. It is composed of three relatively equal streams: (a) haptic pronunciation teaching, essentially the same as the noncredit course; (b) phonological analysis of learner data; and (c) theory and methods of applied linguistics, with a focus on speaking, listening, and pronunciation. There is a combination of synchronous and asynchronous meetings and assignments. 

Monday, April 6, 2020

The "story" of pronunciation teaching: Engaging Preambles

One of the potential advantages of having taught pronunciation for a few years (in my case, almost 50) is that you have on hand a near endless supply of "success stories" from former students, no matter what you are teaching: ways to introduce and (hopefully) motivate yourself and your students at the "drop of a hat."

Was reminded of that recently after viewing a plenary by one of the great storytellers in our field, Mario Rinvolucri. Although he does not talk about the use of stories as "preambles" in instruction per se, in that talk or in this nice piece in TeachingEnglish.org, I'm sure he'd concur with their value as such. Several other studies of storytelling in the field cover a wide range of classroom possibilities, but none that I have been able to find examines the "preamble" function.

My introduction to this function of storytelling was the work of Milton Erickson, back in the 1980s. (One of my all-time favorite books on that was the Erickson classic "My voice will go with you.") Here is an example of one of Erickson's stories, done by Bill O'Hanlon. (The audio of the originals, with Erickson actually telling the stories, is available but less accessible.)

I'll begin with one of my favorite personal "pronunciation preambles." Please add one of yours. Let's see where this story takes us!

Better pronunciation: over night!

I did a 1-hour workshop at a Korean university for about 400 undergraduates. The objective was to improve the rhythm of their spoken English . . . overnight. All of them had conversation classes the next morning. (Important note: Only one of the roughly six conversation teachers came to the workshop, although all were invited.) I trained the students to act as if they were boxing as they spoke, first along with easy dialogues on the screen and then, before we finished, with simple roleplays, in pairs. It got a little chaotic, as you can imagine, but they loved it! And just before I concluded the workshop, I gave them a "secret mission" . . . The next morning, in their speaking classes, they were to use the same feeling in their upper bodies--without punching the air as in the boxing--as they were speaking in class, WITHOUT LETTING ON TO THEIR TEACHERS THAT ANYTHING WAS DIFFERENT. I heard some amazing stories back. In the classes that pulled it off, the teachers were stunned by the difference in the rhythm and energy . . . and even playfulness evident in the speaking of the class.

Never fails. To see the basic technique, go here and check out the RFC demo.

Give us your best Pronunciation Preamble!

Tuesday, March 24, 2020

Recipe for curing (Chinese) distaste for pronunciation teaching

Have trouble selling your students on pronunciation, developing an "appetite" for it? Research by Madzharov, Self-Control and Touch: When Does Direct Versus Indirect Touch Increase Hedonic Evaluations and Consumption of Food, summarized by ScienceDirect, suggests that you may just need to give at least the more self-controlled among them a "hands-on" taste of it to get them to buy in. To quote the abstract:

"The present paper presents four studies that explore how sampling and eating food by touching it directly with hands affects hedonic evaluations and consumption volume."

What they found, however, was that only the high self-control, disciplined consumers perceived the food to be better tasting and were disposed to eat more of it. For the other subjects (like me, maybe!), adding touch did not appear to enhance either taste or appetite for the food samples in the study. Why that should be the case was not clear, other than the possibility that in the less self-controlled consumers, the executive control centers of the brain were already offline in the face of the direct, unfettered attraction of FOOD!

A few years ago, had a visiting scholar from China here with us for a year. It took almost the entire time for her to get me to understand how to get Chinese students to buy in to (haptic) pronunciation teaching specifically and, in general, to more integrated, communicative pronunciation work. My "mistake" had been trying to convince relatively high-control consumers of pronunciation teaching, in this case, to first be more like me: less high-control and more experiential as learners.

It has always been a problem for some students, not just the Chinese students, to buy into highly gesture-based instruction. But touch was another thing entirely. Most any student can "get it," how touch can enhance learning and memory, and be coaxed into trying some of the gestural, kinesthetic techniques. Probably for several reasons: (1) one function of touch in the haptic system is to carefully control gesture use, and (2) another is to intensify the connection between the gesture and the lexical or phonological target, the word or sound process. Also, (3) it was much easier to present the general, popular research on the contribution of touch to experience and learning, and (4) the concept of somehow getting a learner to work in their least dominant modality (a basic construct in hypnosis, for example) can be the most effective or powerful.

The assumption here is that the metacognitively self-controlled are less likely to be influenced by immediate feelings or impressions, but once that "barrier" is bridged, as touch does so effectively, the relatively novel sensual experience for them has greater impact. Think: men and the power of perfume . . .

In other words, focusing initially on the touch that concluded every gesture made a difference. I have been doing that ever since. Students are much more receptive to trying the gestural techniques once they feel that they have sufficient understanding . . . and then, once they have tried it, focusing more on touch than on gesture . . . they are "hooked," more able and amenable to sense the power of embodiment in learning pronunciation from then on.

If you have a taste for pronunciation work with Chinese students, what is your recipe?

Keep in touch . . .

Original Source:
Madzharov, A. (2019). Self-control and touch: When does direct versus indirect touch increase hedonic evaluations and consumption of food. Journal of Retailing, 95(4), 170-185. https://doi.org/10.1016/j.jretai.2019.10.009

Thursday, March 19, 2020

Love it or leave it: 2nd language body, voice, pronunciation and identity

Recall (if you can) the first time you were required to listen to or maybe analyze a recording of your voice. Surprising? Pleasing? Disgusting? Depressing? Estimates vary as to how much of your awareness of your voice is based on what it "feels" like to you, rather than on what your ears hear, but it is somewhere around 80% or so. Turns out your awareness of what your body looks like is similar.

A new study by Neyret, Bellido Rivas, Navarro and Slater, of the Experimental Virtual Environments (EVENT) Lab, University of Barcelona,  “Which Body Would You Like to Have? The Impact of Embodied Perspective on Body Perception and Body Evaluation in Immersive Virtual Reality,” as summarized by Neuroscience News, found that our simple gut feelings about how (un)attractive our body shape or image is are generally more negative than if we are able to view it more dispassionately or objectively "from a distance," as it were. Surprise. Using virtual reality technology, subjects were presented with different body types and sizes, among them one that was precisely, to an external observer, what the subject's body shape is. Subjects rated their "virtual body" shape more favorably than their earlier pre-experiment self-ratings, which had been collected in something analogous to a questionnaire format.

In psychotherapy, the basic principle of "distancing" from emotional grounding is fundamental; there are all sorts of ways to accomplish that, such as visualizing yourself watching yourself doing something disconcerting or threatening. It is the "step back" metaphor, which the brain takes very seriously if done right.

In this case, when visualizing the shape of your body (or, by extension, your voice as part of the body), you'll see it at least a little more favorably than when you describe it based on how it "feels" internally--the reason "body shaming" can work so effectively in some cases, or, in pronunciation work, "accent shaming."

So, how can we use the insights from this research? First, systematic work by learners in critically listening to their own voice should pay off, producing at least some sense of resignation or even "like," so that the ear is not automatically tuned to react or turn away. (I'm sure there is research on that someplace but, for the life of me, I can't find it! Please help out with a good reference on that, if you can!) Is this some long overdue partial vindication of the seemingly interminable hours spent in the language lab? Could be, in some cases.

Second, once a learner is able to "view" their L2 voice/identity relative to some ideal more dispassionately, it should be easier to work with it and make accommodations. That is one of the central assumptions of the "Lessac method" of voice development, which I have been relying on for over 30 years. It also calls into question the idea that aiming toward an ideal, native speaker accent is necessarily a mistake. You have to "see" yourself relative to it as more of an outsider, not  just from your solar plexus out . . . through your flabby abs, et al. . . .  My approach to accent reduction always begins there, before we get to changing anything. Call it: voice and body "re-sensitization."

See what I mean? If not, have somebody you don't know read this post to you again at Starbucks . . .

Original Source:
“Which Body Would You Like to Have? The Impact of Embodied Perspective on Body Perception and Body Evaluation in Immersive Virtual Reality”. Solène Neyret, Anna I. Bellido Rivas, Xavi Navarro and Mel Slater. Frontiers in Robotics and AI doi:10.3389/frobt.2020.00031.

Saturday, March 14, 2020

Pronunciation in the eyes of the beholder: What you see is what you get!

This post deserves a "close" read. Although it applies new research to exploring basics of haptic pronunciation teaching specifically, the complex functioning of the visual field, itself, and eye movement in teaching and learning, in general, is not well understood or appreciated.

For over a decade we have "known" that there appears to be an optimal position in the visual field in front of the learner for the "vowel clock" or compass used in the basic introduction to the (English) vowel system in haptic pronunciation teaching. Assuming:
  • The compass/clock below is on the equivalent of an 8.5 x 11 inch piece of paper
  • About .5 meters straight ahead of you,
  • With the center at eye level--or equivalent relative size on the board or wall or projector, 
  • Such that if the head does not move, 
  • The eyes will be forced at times to move close to the edges of the visual field 
  • To lock on or anchor the position of each vowel (some vowels could, of course, be positioned in the center of the visual field, such as schwa or some diphthongs)
  • Add to that simultaneous gestural patterns concluding in touch at each of those points in the visual field (www.actonhaptic.com/videos) 
Something like this:

             11. [uw]                 1. [iy]
      10. [ʊ]                             2. [I]

  9. [ow]         (eye level)               3. [ey]

      8. [Ɔ]                              4. [ɛ]
             7. [ʌ]                  5. [ae]

                       6. [a]
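For a rough sense of the geometry involved, here is a short Python sketch (my own back-of-the-envelope check, not part of the haptic system itself) estimating how far off-center, in degrees of visual angle, the edge vowels sit when the letter-size clock is viewed from about .5 meters:

```python
import math

# Assumed setup, per the bullets above: an 8.5 x 11 inch "vowel clock"
# centered at eye level, viewed from 0.5 meters with the head still.
DISTANCE_M = 0.5                   # viewing distance
HALF_WIDTH_M = 8.5 * 0.0254 / 2    # half the 8.5 in page width, in meters
HALF_HEIGHT_M = 11 * 0.0254 / 2    # half the 11 in page height, in meters

def eccentricity_deg(offset_m, distance_m=DISTANCE_M):
    """Visual angle between straight-ahead gaze and a point offset_m off-center."""
    return math.degrees(math.atan2(offset_m, distance_m))

print(f"horizontal page edge: {eccentricity_deg(HALF_WIDTH_M):.1f} degrees off-center")
print(f"vertical page edge:   {eccentricity_deg(HALF_HEIGHT_M):.1f} degrees off-center")
```

Vowels near the paper's edge come out roughly 12 to 16 degrees off-center, so, with the head fixed, the eyes must make clearly distinct movements to anchor each position--consistent with the "forced to the edges of the visual field" point above.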

Likewise, we were well aware of previous research by Bradshaw, et al. (2016), for example, on the function of eye movement and position in the visual field related to memory formation and recall. A new study, “Eye movements support behavioral pattern completion,” by Wynn, Ryan, and Buchsbaum of Baycrest’s Rotman Research Institute, summarized by Neurosciencenews.com, seems (at least to me) to unpack more of the mechanism underlying that highly "proxemic" feature.

Subjects were introduced to a set of pictures of objects positioned uniquely on a video screen. In phase two, they were presented with sets of objects containing both the original and new objects, in various conditions, and tasked with indicating whether they had seen each object before. What they discovered was that in trying to decide whether the image was new or not, subjects' eye patterning tended to reflect the original position in the visual field where it was introduced. In other words, the memory was accessed through the eye movement pattern, not "simply" the explicit features of the objects, themselves. (It is a bit more complicated than that, but I think that is close enough . . . )

The study is not claiming that the eyes are "simply" using some pattern reflecting an initial encounter with an image, but that the overt actions of the eyes in recall are based on some type of storage or processing patterning. The same would apply to any input, even a sound heard or a sensation with the eyes closed, etc. Where the eyes "land" could reflect any number of internal processing phenomena, but the point is that a specific memory entails a processing "trail" evident in or reflected by observable eye movements--at least some of the time!

To use the haptic system as an example, . . . in gesturing through the matrix above, not only is there a unique gestural pattern for each vowel--if the visual display is positioned "close enough" so that the eyes must also move in distinctive patterns across the visual field--you also have a potentially powerful process or heuristic for encoding and recalling sound/visual/kinesthetic/tactile complexes.

So . . . how do your students "see" the features of L2 pronunciation? Looking at a little chart on their smartphone or on a handout or at an LCD screen across the room will still entail eye movement, but of what and to what effect? What environmental "stimulants" are the sounds and images being encoded with and how will they be accessed later? (See previous blogpost on "Good looking" pronunciation teaching.)

There has to be a way, using my earlier training in hypnosis, for example, to get at learner eye movement patterning as they attempt to pronounce a problematic sound. Would love to compare "haptic" and "non-haptic-trained" learners. Again, our informal observation with vowels, for instance, has been that students may use either or both the gestural or eye patterning of the compass in accessing sounds they "experienced" there.  Let me see if I can get that study past the human subjects review committee . . .

Keep in touch! v5.0 will be on screen soon!

Source: Neurosciencenews.com (April 4, 2020). Our eye movements help us retrieve memories.

Sunday, March 8, 2020

Becoming a great (haptic), "good looking" pronunciation teacher: Modeling

If you are in the Vancouver, British Columbia, area next month, join us at the joint 2020 BCTEAL and Image Conference. Always a great get-together.

If you haven't done a video of yourself teaching in the last couple of years, you might do that before you read the rest of this post. Better still, record yourself doing pronunciation or conversation work where you, up front, provide at least some of the pronunciation models. (I have a rubric for that for my grad students. If you'd like a copy, email me.)

I'll be doing a new workshop, "Modeling and correcting pronunciation in and out of class," based on the idea that, as an instructor of really any kind, but especially one doing (haptic) pronunciation work, your dynamic pedagogical body image (DPBI; e.g., Iverson, 2012)--your visual model, your physical presence, movement and gesture in the classroom--is worth considering carefully from several perspectives. How you dress, your pronunciation and accent, the coordination of your speech with your overall body movement in providing models of language, and your general postural presentation all have meaning. When, as in haptic pronunciation work, you are asking students to synchronize some of their speech and gesture with yours, the nature of what is in front of them visually can obviously contribute to or detract from instructional effectiveness.

In haptic work, in principle, all aspects of pronunciation can be represented, portrayed or embodied using gesture and body movement. From that perspective, then, just modeling a word, phrase, clause or passage involves choreography: demonstrating both the sound and the gestural complex that represents it. (To see examples of the earlier v4.5 version of the haptic system, check out the models on the website.)

The same goes for in-class correction, required homework on the form attended to in class, or self-correction by the student. The instructor may present the more appropriate form first, choreographed, and then have the student or students "do" the targeted piece of language/text together (never "repeat after me," always "let's do that together"). All key, necessary pronunciation work is to be embedded, practiced and synchronized with gesture for at least a week or so as homework, to ensure some degree of anchoring in memory and spontaneous speaking, or at least aural comprehension.

For most kinds of instruction, what you look like and how you move can be pretty much irrelevant--one of the reasons I love online teaching! For some kinds, however, it does matter, even if that means just cutting down on "clutter" in the visual field up front.

v5.0 will be out before long. This is, nonetheless, a good first step . . . continually taking a "good look" up front at the dynamic model you are providing for your students, and yourself.

Wednesday, February 19, 2020

RHYTHM FIRST (new) pronunciation teaching technique: Haptic Side Step!

Full disclosure: the following post includes explicit dance and intrapersonal touching, something of a follow-up to two recent posts:
What is new here is the active, simultaneous use of feet, literally and figuratively. The idea is that much of the basics of English pronunciation and practice can (and should) be taught to the beat of the rhythmic feet of the text being spoken. The tempo will vary but the “dance step” is essentially the same.
  • All text used at the beginning should be staged/indicated on paper or expressed or broken up into rhythmic feet (groups of 1~9 syllables in this system, although in the classical sense, a "foot" is usually limited to 3 or 4 syllables). For example: 
    • The stressed syllable / in the word or phrase / should, in general, / be highlighted / (underlined or boldfaced / for example.) 
  • The body is moving gently from side to side, to the rhythm of the designated rhythmic feet, using what we call a "haptic side step," where the forefoot comes down on the stressed element. 
  • See short video of me "DEMONSPLAINING" how the basic procedure works in a clip from a recent presentation at UBC. (It is especially clear in the second part of the 15 minute video.) Password: HaPT-Demo3
  • As noted in the video, in haptic pronunciation work the upper body may also be simultaneously executing various touch-based pedagogical (gesture) movement patterns related to a targeted pronunciation feature, such as a vowel sound or key word, a rhythm or intonation pattern, etc.  
The "side step" has been developed over the last five years as an optional feature of more advanced, accent modification work.The rest of the full, full-body version of the haptic system, Haptic Pronunciation Teaching, v5.0: RHYTHM FIRST! will be rolled out later this fall.

In the meantime, try some form of that basic technique in class with any simple dialogue, word list or even spontaneous chat (as I do on the video) and, as usual, report back!

The technique will be featured at the next webinar, March 27th and 28th. (Contact: info@actonhaptic.com for further information.)

Caveat emptor: This looks easy.

Monday, January 6, 2020

What mouse circadian memory should remind us of (in recalling pronunciation or anything learned earlier)

Mystery partially solved. In doing research on homework efficiency and compliance in pronunciation teaching, I noticed something I had never taken all that seriously: if given an option, almost all students seem to prefer to do pronunciation practice/homework after supper, or later, rather than in the morning before class. (Is that the case with your students as well?) Maybe it is a matter of priorities: save the least important for last; wait until you are incapable of doing much of anything else. (Earlier, when "homework" was mostly mindless drill, that probably made sense.) In fact, for no empirical reason really, other than the possibility of more immediate follow up, we have for years suggested learners do just that . . . schedule pronunciation work later in the day. There is, of course, overwhelming evidence that a good night's sleep does wonders for learning consolidation and memory.

Now an extraordinary study by Hasegawa et al., reported in ScienceDaily, demonstrates that mice, at least (and by extension probably all us caffeine-addicted academics), have periods in the day when they are not as good at remembering things as at others. Specifically, that period (for mice) falls just before or around the time they would usually wake up. No surprise there, eh! But what is a surprise is that the contrast is so striking during that brief interval: their memory, especially for recent training, is almost . . . nonexistent. Later, it is "back." Why so? The researchers end the piece wondering why mice--and probably us--would have evolved with that temporary "black hole" in our functional system.

I can tell them. When I first wake up the last thing I want to think about is the training or encounters of yesterday. Give my subconscious a little more time to process that while I attend to my more immediate concerns of survival, for example.

There is also lots of research focusing on learning efficiency of school-age students during different parts of the day, especially those who really don't get going until about noon. Why not the same consideration for when language learning students practice and the types of practice required? Good question.

Back to the mice. Their "task" involved touching a lever to get food. During their brief, selective memory-free zone, in exploring what was in front of them, they would touch the lever longer, in effect feeling it out, figuring out what it was. If given the task later, they touched it immediately and with authority. Haptic pronunciation work involves extensive use of touch in virtually all activities. Our working hypothesis, based on decades of research on tactile memory, is that touch is the link both to integration of the other senses and to vividness or strength of recall of the phonological element in focus. We have, however, always observed great variability in learners' reports of their experience of that touch, in terms of intensity and impact.

It is about "time" we investigated that further!

Full citation:
University of Tokyo. (2019, December 18). Forgetfulness might depend on time of day. ScienceDaily. Retrieved January 5, 2020 from www.sciencedaily.com/releases/2019/12/191218090152.htm

Friday, December 27, 2019

Drawing on drawing to enhance learning of sounds and pronunciation

About 40 years ago, in working with dyslexia in the family, specifically elementary school reading and spelling tests, we stumbled onto the idea of, in effect, forming the letters of the alphabet for the words on the spelling list that week--with the body, in cheerleader or ballet-like fashion. Our "alphabeteer" became lightning fast. The technique worked well, or at least it helped.

Drawing on the concept of the "body alphabet"--creating stylized body movement that iconically represented letters and sounds--we developed the haptic pronunciation teaching system, beginning in about 1985. New gestures were created that visually and somatically represented, in tangible and recognizable ways, sounds, graphemes and a range of phonological processes, such as vowels, phrasing of syllables and intonation patterning. Those routines were intentionally designed not to carry common problematic social meanings, such as waving goodbye or signalling some degree of pleasure or displeasure.

Just read a remarkable piece of neuroscience research that seems to get at some of the critical, underlying mechanisms involved: Relating visual production and recognition of objects in human visual cortex, by Fan, et al. (2019).

Quoting the summary from Science Daily:

"As the participants drew each object multiple times, (line drawings of pieces of furniture) the activity patterns in (visual) occipital cortex remained unchanged, but the connection between occipital cortex and parietal cortex, an area involved in motor planning, grew more distinct. This suggests that drawing practice enhances how the brain shares information about an object between different regions over time. . .This means people recruit the same neural representation of an object whether they are drawing it or seeing it."

Especially for the more kinaesthetic among us, sketching, allowing the pen or brush, or the body itself a more prominent role in supporting memory can be wonderfully enabling and effective. One has to wonder, however, what we are doing to our collective memories and coming generations as we "hand off" more and more of our primary encoding and recalling to our essentially visual-auditory smartphone interfaces. Research on that question and the general interconnectivity between areas of the brain is extensive and growing rapidly.

The implications of that observation and many like it recently are paradigm changing. Much of what we have come to understand as relatively isolated sections and functions of the brain, and by extension our behavior, are really anything but. The bad news and the good news:

In effect, everything we experience at any given moment can contribute substantially to what is later remembered and recalled. We, as educators or influencers, are accountable for much more, but, on the other hand, we now have license to do more as well.

v5.0 of the haptic system is about to launch. It does more . . .

Keep in touch!

Full Reference:
Fan, J. E., Wammes, J. D., Gunn, J. B., Yamins, D. L. K., Norman, K. A., & Turk-Browne, N. B. (2019). Relating visual production and recognition of objects in human visual cortex. Journal of Neuroscience, 23 December 2019, 1843-19. DOI: https://doi.org/10.1523/JNEUROSCI.1843-19.2019

Sunday, November 3, 2019

Full-body and voice burn out prevention workshop for language teachers!

If you will be at the BCTEAL regional conference on 11/16, please join Angelina Van Dyke and me for the "Full-body and voice burn out prevention warm up". (If not, it will be recorded and available off the blog shortly thereafter.) In all modesty, this will be a great session, not just because I'm in it, but because Angelina, an accomplished concert and recording artist and voice teacher who has just finished an advanced diploma in voice science, will be sharing some amazing new techniques for "saving your pipes," as we say!

Here is the abstract from the program:

Feeling sluggish, stressed or caffeine deprived? This session, created by voice and pronunciation specialists for the language teacher (and students), should help. The carefully scaffolded, “restorative” exercises activate and focus body and vocal tract in less than 10 minutes. No meditation, medication or mendacity required.

My part of the party, body activation and preservation, takes about 15 minutes. Here is the list of the quick exercises involved: (Note: In some cases the name of the technique is more creative than descriptive, but you get the idea!)

1.     Mandibular massage
2.     Jaw shaker
3.     Neck slow header
4.     Trapezes circles
5.     Rotator cup “rolls”
6.     Hand/Forearm/Finger stretcher
7.     Shoulder and upper body boogie
7.5. Temple wings!
8.     Lateral leanings
9.      Glute Glutin’ 
10.   Core Belly Dance roll up (or plank or Dead Bug)
11.   Hip rotation gyrations
12.  Progressive lunge (with chair)
13.  Quads lifts (with chair)
14.   Hamstring swing (with chair)
15.   Adductor/abductor swing (with chair)
16.  Progressive mini-squats (with chair)
17.   Upper and lower Achilles tune ups (with chair)
18.   Calf and shin rock (with chair)
19.   Cursive ankle alphabet (with chair)
20.   Visual field scan and full-arm fluency (on the compass)
21.   Hyper lipper (8 vowel tour)
22.   Back and arms hyper stretch (3x) to vocal cone
23.   Chest and mouth hyper stretch (3x) from maximum pucker!

With the video you should be able to do both parts of the workshop any morning you need to get tuned up for the day. See you there or later!