Sunday, December 20, 2015

Lost in space: Why phoneme vowel charts may inhibit learning of pronunciation

In a recent workshop I inadvertently suggested that the relative distances between adjacent English vowels on various standard charts, such as the IPA matrix or those used in pronunciation teaching, were probably not all that important. Rather than "stand by" that comment, I need to "distance myself" from it! Here's why.

Several posts on the blog, including a recent one, have dealt with the basic question of to what extent visual stimuli can potentially undermine learning of sound, movement and touch (the basic stuff of the haptic approach to pronunciation teaching). I went back to Doeller and Burgess (2008), "Distinct error-correcting and incidental learning of location relative to landmarks and boundaries" (full citation below), one of the key pieces of research/theory that our haptic work has been based on.

In essence, that study demonstrated that we have two parallel systems for learning locations, in two different parts of the brain: one from landmarks in the visual (or experiential) field and another from the boundaries of the field. Furthermore, boundaries tend to override landmarks in navigating. (For instance, when finding your way in the dark, your first instinct is to go along the wall, touching what is there if possible, rather than steering by landmarks or objects in the field in front of you, whose relative location may be much less fixed in your experience.)

Most importantly for us, boundaries tend to be learned incidentally; landmarks, associatively. In other words, location relative to boundaries is more like a map, where the exact point is first identified by where it is relative to the boundary, not the other points within the map itself. Conversely, landmarks tend to be learned associatively, relative to each other, not in relation to the boundary of the field, which may be irrelevant anyway, not conceptually present.

So what does that imply for teaching English vowels? 
  • Learners' access in memory to the vowels, while still actively working on improving pronunciation, is generally via a picture or image of a matrix with the vowels placed in it. (Having asked learners for decades how they "get to" vowels, the consistent answer is something like: "I look at the vowel chart in my mind.")
  • The relative position of those vowels, especially adjacent vowels, is almost certainly tied more to the boundaries of the matrix, the sides and intersecting lines, than to the relative auditory and articulatory qualities of the sounds themselves.
  • The impact of visual schema and processing over auditory and haptic is such that, at least for many learners, the chart is not doing much to facilitate access to the articulatory and somatic features of the phonemes themselves. (I realize that is an empirical question that cries out for a controlled study!)
  • The phonemic system of a language is based fundamentally on relative distances between phonemes. The brain generally perceives phonemic differences as binary, e.g., it is either 'u' or 'U', or 'p' or 'b', although the actual sound produced may be exceedingly close to the conceptual "boundary" separating them.
  • Haptic work basically backgrounds visual schema and visual prominence, attempting to promote a stronger association between the sounds, themselves, and the "distance" between them, in part by locating them in the visual field immediately in front of the learner, using gesture, movement and touch, so that the learner experiences the relative phonemic "differences" as distinctly as possible.
  • We still do some initial orientation to the vowel system using a clock image with the vowels imposed on it, to establish the technique of using vowel numbers for correction and feedback, but try to get away from that as soon as possible, since that visual schema as well gives the impression that the vowels are somehow "equidistant" from each other--and, of course, according to Doeller and Burgess (2008), probably more readily associated with the boundary of the clock than with each other.
 (Based on excerpt from Basic Haptic Pronunciation, v4.0, forthcoming, Spring, 2016.)

Doeller, C. and Burgess, N. (2008). Distinct error-correcting and incidental learning of location relative to landmarks and boundaries. Retrieved December 19, 2015.

Friday, December 18, 2015

On developing excellent pronunciation and gesture--according to John Wesley, 1770.

Have just rediscovered Wesley's delightful classic "Directions Concerning Pronunciation and Gesture", a short pamphlet published in 1770. The style that Wesley was promoting was to become something of the hallmark of the Wesleyan movement: strong, persuasive public speaking. Although I highly recommend reading the entire piece, here are some of Wesley's (slightly paraphrased) "rules", well worth heeding, most of which are as relevant today as they were then.

  • Study the art of speaking betimes and practice it as often as possible.
  • Be governed in speaking by reason, rather than example, and take special care as to whom you imitate.
  • Develop a clear, strong voice that will fill the place wherein you speak.
  • To do that, read or speak something aloud every morning for at least 30 minutes.
  • Take care not to strain your voice at first; start low and raise it by degrees to a height.
  • If you falter in your speech, read something in private daily, and pronounce every word and syllable so distinctly that they may have all their full sound and proportion . . . (in that way) you may learn to pronounce them more fluently at your leisure.
  • Should you tend to mumble, do as Demosthenes did, curing himself of this defect by repeating orations every day with pebbles in his mouth.
  • To avoid all kinds of unnatural tones of voice, endeavor to speak in public just as you do in common conversation.
  • Labour to avoid the odious custom of spitting and coughing while speaking.
  • There should be nothing in the dispositions and motions of your body to offend the eyes of the spectators.
  • Use a large looking glass as Demosthenes (again) did; learn to avoid all disagreeable and "unhandsome" gestures.
  • Have a skillful and faithful friend to observe all your motions and to inform you which are proper and which are not.
  • Use the right hand most, and when you use the left let it only be to accompany the other.
  • Seldom stretch out your hand sideways, more than half a foot from the trunk of your body.
  •  . . . remember while you are actually speaking you are not to be studying any other motions, but use those that naturally arise from the subject of your discourse.
  • And when you observe an eminent speaker, observe with utmost attention what conformity there is between his action and utterance and these rules. (You may afterwards imitate him at home 'till you have made his graces your own.)
 Most of the "gesture" guidelines and several of those for pronunciation are employed explicitly in public speaking training--and in haptic pronunciation teaching. Even some of the more colorful ones are still worth mentioning to students in encouraging effective speaking of all sorts. 

Monday, December 14, 2015

Can't see teaching (or learning) pronunciation? Good idea!
A common strategy of many learners when attempting to "get" the sound of a word is to close their eyes. Does that work for you? My guess is that those are highly visual learners who can be more easily distracted. Being more auditory-kinesthetic and somewhat color "insensitive" myself, I'm instead more vulnerable to random background sounds, movement or vibration. Research by Molloy et al. (2015), summarized by Science Daily (full citation below) helps to explain why that happens.

In a study of what they term "inattentional deafness," using MEG (magnetoencephalography), the researchers were able to identify in the brain both the place and point at which auditory and visual processing in effect "compete" for prominence. As has been reported more informally in several earlier posts, visual consistently trumps auditory, which accounts for the common life-ending experience of having been oblivious to the sound of screeching tires while crossing the street fixated on a smartphone screen . . . The same applies, by the way, to haptic perception as well--except in some cases where movement, touch, and auditory team up to override visual.

The classic "audio-lingual" method of language and pronunciation teaching, which made extensive use of repetition and drill, relied on a wide range of visual aids and color schemas, often with the rationale of maintaining learner attention. Even the sterile, visual isolation of the language lab's individual booth may have been especially advantageous for some--but obviously not for everybody!

What that research "points to" (pardon the visual-kinesthetic metaphor) is more systematic control of attention (or inattention) to the visual field in teaching and learning pronunciation. Computer-mediated applications go to great lengths to manage attention, but, ironically, forcing the learner's eyes to focus or concentrate on words and images, no matter how engaging, may, according to this research, also function to negate or at least lessen attention to the sounds and pronunciation. Hence the intuitive response of many learners to shut their eyes when trying to capture or memorize sound. (There is, in fact, an "old" reading instruction system called the "Look up, say" method.)

The same underlying, temporary "inattentional deafness" also probably applies to the use of color associated with phonemes--or even the IPA system of symbols in representing phonemes. Although such visual systems do illustrate important relationships between visual schemas and sound that help learners understand the inventory of phonemes and their connection to letters and words in general, in the actual process of anchoring and committing pronunciation to memory they may in fact diminish the brain's ability to efficiently and effectively encode the sound and the movement used to create it.

The haptic (pronunciation teaching) answer is to focus more on movement, touch and sound, integrating those modalities with visual. The conscious focus is on gesture terminating in touch, accompanied by articulating the target word, sound or phrase simultaneously with resonant voice. In many sets of procedures (what we term protocols) learners are instructed to either close their eyes or focus intently on a point in the visual field as the sound, word or phrase to be committed to memory is spoken aloud.

The key, however, may be just how you manage those modalities, depending on your immediate objectives. If it is phonics, then connecting letters/graphemes to sounds with visual schemas makes perfect sense. If it is, on the other hand, anchoring or encoding pronunciation (and possibly recall as well), the guiding principle seems to be that sound is best heard (and experienced somatically, in the body) . . . but (to the extent possible) not seen!

See what I mean? (You heard it here!)

Full citation:
Molloy, K., Griffiths, T., Chait, M., and Lavie, N. (2015). Inattentional deafness: visual load leads to time-specific suppression of auditory evoked responses. Journal of Neuroscience 35(49): 16046-16054.

Monday, November 30, 2015

The Music of Pronunciation (and language) Teaching

Like many pronunciation and "speaking" specialists, I have long believed that in some way systematic use of music should be "in play" at all times in class. I suspect most in the field feel the same. Up until recently there has not appeared to be much of an academically credible way to justify that or investigate the potential connection to language teaching more empirically.

A 2015 study, Music Congruity Effects on Product Memory, Perception, and Choice, by North, Sheridan and Areni, published in the Journal of Retailing (DOI below), suggests some interesting possibilities. Quoting the summary, the study basically demonstrated that:
  • Ethnic music (e.g., Chinese, Indian) increased the recall of menu items from the same country.
  • Ethnic music increased the likelihood of choosing menu items from the same country.
  • Classical music increased willingness to pay for products related to social identity.
  • Country music increased willingness to pay for utilitarian products.
So, what may that mean for our work, or explain what we have seen in our classrooms?
  • (Recall) For example, we might predict that using English music of some kind with prominent vowels, consonants, intonation and rhythm patterns would enhance memory for them.
  • (Perception) Having listened to "English" music should enable learners to better perceive or recognize appropriate pronunciation models or patterns of English. I suspect that most language teachers believe that intuitively, having seen the indirect effects in how students' engagement with the music of the culture "works".
  • (Milieu) I, like many, have used classical music for "selling" and relaxing and creating ambiance for decades. There is research from several fields supporting that. Only recently have I been attempting to tie it into specific phonological structures or sounds, especially the expressive, emotional and relational side of work in intonation. 
  • (Function) I frequently use country-like music or rap for working on functional areas, warm ups, rhythm patterns, and specific vowel contrasts.
I am currently experimenting more with different rhythmic, stylistic and genre-based varieties of music. (Specifically, the new, v4.0 version of the haptic pronunciation teaching system, EHIEP - Essential Haptic-integrated English Pronunciation.) Over the years I have used music, from general background or mood setting to highly rhythmic tunes tied directly to the patterns being practiced. I just knew it worked . . .

The "Music congruity" study begins to show in yet another way just how music affects both associative memory and perception, conveying in very real terms broad connections to culture and context. More importantly, however, it gives us more justification for creating a much richer and more "memorable" classroom experience.

If you use music, use more. If not, why not? 

North, Sheridan, and Areni (2015). Music congruity effects on product memory, perception, and choice. Journal of Retailing, in press. doi:10.1016/j.jretai.2015.06.001

Sunday, November 29, 2015

Keeping the pain in pronunciation teaching (but working it out with synchronized movement and dance)

Three staples of pronunciation work--choral repetition, drill and oral reading--have been making something of a comeback, just waiting for studies like this one to surface. (Or, to confirm what any experienced practitioner could tell you without a controlled study in the lab.) In essence, the key idea is: choral--doing it together, in sync.

A 2015 study, Synchrony and exertion during dance independently raise pain threshold and encourage social bonding, by Tarr, Launay, Cohen and Dunbar found " . . . significant independent positive effects on pain threshold (a proxy for endorphin activation) and in-group bonding. This suggests that dance which involves both exertive and synchronized movement may be an effective group bonding activity." (Full citation below.) The dance treatment used was a type of synchronized dancing at 130 beats per minute, which does sound relatively "exertive"--perhaps not a perfect parallel to the use of synchronized gesture and body movement in language teaching. It is, I think, still close enough, especially when you review the extensive literature review presented in the article. (And besides, the subjects in the study were high school students, who obviously have energy to "burn!")

One of the fascinating "paradoxes" of pronunciation instruction is the way use of gesture and movement can be both energizing and distracting. Appropriate choral speaking activities using synchronized gesture or body movement may work to exploit the benefits of prescribed movement, without the downsides, the "pain", including just the personal or cultural preferences related to the appropriateness of  moving one's body in public. (See several earlier posts on that topic.)

One of the major shifts in pronunciation teaching--and probably one reason for the concurrent lack of both interest in and effectiveness of current methodology--has been the move to "personalized" pronunciation with computers and hand-held devices, as putative substitutes for "synchronized" learning in a class . . . of people, with bodies to move with. In essence, we have in many respects disembodied pronunciation teaching, disconnecting it from both social experience and integrated (including the often relatively hard "exertion" of) learning.

In v4.0 of the EHIEP system, most of the basic training is done using designed pedagogical movement patterns, along with simple, line-dancing-like steps. (There is also the option of doing the practice patterns without accompaniment, not to a fixed rhythm, although the work is still done with complete synchrony between instructor and student.) In most cases the "step pattern" is just a basic side-to-side movement with periodic shifts in orientation and direction, done in the 48 to 60 beats per minute range. (A demonstration video will be available later this month and the entire system, early next spring.)

One of our most successful workshops along these lines was titled: So you think you can dance your way to better pronunciation! Turns out, you can, even if that only means that all the bodies in the class are synchronized "naturally" as they mirror each other's movements, the result of mirror neurons locking into highly engaged f2f communication in general.

Turns out the "pain" is essential to the process, both the physical and the social "discomfort," since responding to it and exploiting it also enables powerful, multi-sensory learning. Or as Garth Brooks put it: "I could have missed the pain, but I'd have had to miss the dance."

Full citation:
Tarr, B., Launay, J., Cohen, E., and Dunbar, R. (2015). Synchrony and exertion during dance independently raise pain threshold and encourage social bonding. Biology Letters, 28 October 2015. doi:10.1098/rsbl.2015.0767

Thursday, November 26, 2015

Drawing on the haptic side of the brain (in edutainment and pronunciation teaching)

How is your current "edutainmental quality of experience" (E-QoE), defined as degree of excitement, enjoyment and "natural feel" (of multimedia applications) by Hamam, Eid and El Saddik of the DISCOVER Lab, University of Ottawa, in a nice 2013 report, "Effect of kinaesthetic and tactile haptic feedback on the quality of experience of edutainment applications"? (Full citation below.) E-QoE (pronounced "E-quo," I'd guess) is a great concept. Need to come up with a reliable way of measuring it in our research, something akin to that in Hamam et al. (2013).

In that study, a gaming application configured both with and without haptic or kinaesthetic features (computer-mediated movement and touch in various combinations, in this case a haptic stylus)--as opposed to having just visual or auditory engagement, employing just eyes, ears and hands--was examined for relative E-QoE. Not surprisingly, the haptic configuration scored significantly higher, as indicated in subject self-reports.

I am often asked how "haptic" contributes to pronunciation teaching and what is "haptic" about EHIEP. This piece is not a bad, albeit indirect, Q.E.D. (quod erat demonstrandum)--one of my favorite Latin acronyms learned in high school math! (EHIEP uses movement and touch for anchoring sound patterns but not computer-mediated, guided movement--at least for the time being!)

The potential problems with use of gesture in instruction, the topic of several earlier posts, tend to be (a) inconsistent patterns in the visual field, (b) perception by many instructors and students as being out of their personal and cultural comfort zones, and (c) over-exuberant, random and uncontrolled gesture use in general in teaching, often vaguely related to attempts to motivate or "loosen up" learners--or, more legitimately, to just have fun. EHIEP succeeds in overcoming most of the potential "downside" of body-assisted teaching (BAT).

In a forthcoming 2016 article on the function of gesture in pronunciation teaching, the EHIEP (Essential Haptic-integrated English Pronunciation) method is somewhat inaccurately described as just a "kinaesthetic" system for teaching pronunciation using gesture, a common misconception. EHIEP does, indeed, use gesture (pedagogical movement patterns) to teach sound patterns, but the key innovation is the use of touch to make the application of gesture in teaching controlled, systematic and more effective in providing modeling and feedback--and obviously to enhance E-QoE--very much in line with Hamam et al. (2013).

The gaming industry has been on to haptic engagement for decades; edutainment is coming on board as well. Now if we can just do the same with something as unexciting, un-enjoyable and "unnatural" as most pronunciation instruction. We have, in fact . . .

Keep in touch!


Hamam, A., Eid, M., and El Saddik, A. (2013). Effect of kinaesthetic and tactile haptic feedback on the quality of experience of edutainment applications. Multimedia Tools and Applications 67(2): 455-472.

Friday, November 20, 2015

Good looking, intelligible English pronunciation: Better seen (than just) heard

One of the less obvious shortcomings of virtually all empirical research in second language pronunciation intelligibility is that it is generally done using only audio recordings of learner speech--where the judges cannot see the faces of the subjects. In addition, the more prominent studies were done either in laboratory settings or in specially designed pronunciation modules or courses.

In a fascinating, but common sense 2014 study by Kawase, Hannah and Wang it was found that being able to see the lip configuration of the subjects, as they produced the consonant 'r', for example, had a significant impact on how the perceived intelligibility of the word was rated. (Full citation below.) From a teaching perspective, providing visual support or schema for pronunciation work is a given. Many methods, especially those available on the web, strongly rely on learners mirroring visual models, many of them dynamic and very "colorful." Likewise, many, perhaps most f2f pronunciation teachers are very attentive to using lip configuration, their own or video models, in the classroom.

What is intriguing to me is the contribution of lip configuration and general appearance to f2f intelligibility. There are literally hundreds of studies that have established the impact of facial appearance on perceived speaker credibility and desirability. So why are there none that I can find on perceived intelligibility based on judges' viewing of video recordings, as opposed to just audio? In general, the rationale is to isolate speech, not allowing the broader communicative abilities of the subjects to "contaminate" the study. That makes real sense on a theoretical level, bypassing racial, ethnic and "cosmetic" differences, but almost none on a practical, personal level.

There are an infinite number of ways to "fake" a consonant or vowel, coming off quite intelligibly, while at the same time doing something very much different than what a native speaker would do. So why shouldn't there be an established criterion for how mouth and face look as you speak, in addition to how the sounds come out? Turns out that there is, in some sense. In f2f interviews, being influenced by the way the mouth and eyes are "moving" is inescapable.

Should we be attending more to holistic pronunciation, that is, what the learner both looks and sounds like as they speak? Indeed. There are a number of methods today that have learners working more from visual models and video self-recordings. That is, I believe, the future of pronunciation teaching, with software systems that provide formative feedback on both motion and sound. Some of that is now available in speech pathology and rehabilitation.

There is more to this pronunciation work than what doesn't meet the eye! The key, however, is not just visual or video models, but principled "lip service", focused intervention by the instructor (or software system) to assist the learner in intelligibly "mouthing" the words as well.

This gives new meaning to the idea of "good looking" instruction!

Full citation:
Kawase, S., Hannah, B., and Wang, Y. (2014). The influence of visual speech information on the intelligibility of English consonants produced by non-native speakers. Journal of the Acoustical Society of America 136(3): 1352. doi:10.1121/1.4892770

Sunday, November 15, 2015

Emphatic prosody: Oral reading rides again! (in language teaching)

Two friends have related to me how they conclude interviews. One asks applicants "Napoleon's final question" (the one he would supposedly pose to potential officers for his army): "Are you lucky?" The other has them do a brief but challenging oral reading. The first says the question provides most of what he needs to know about an applicant's character; the other says the reading is the best indicator of their potential as a radio broadcaster--or as a language teacher. I occasionally use both, especially in considering candidates for (haptic) pronunciation teaching.

One of the "standard" practices of the radio broadcasters (and, of course, actors) on their way to expertise (which some claim takes around 10,000 hours), I'm told, is to consistently practice what is to be read on air or performed, out loud. Have done a number of posts over the years on "read aloud" techniques in general reading instruction with children and language teaching, including the Lectio Divina tradition. Research continues to affirm the importance of oral work in developing both reading fluency and comprehension.

Recently "discovered" a very helpful 2010 paper by Erekson, coming out of research in reading, entitled Prosody and Interpretation, where he examines the distinction between syntactic prosody (functioning at the phrasal level) and emphatic prosody used for interpretation (at the discourse level). One of the interesting connections that Erekson examines is that between standard indices of reading fluency and expressiveness, specifically control of emphatic prosody. In other words, getting students to read expressively has myriad benefits. Research from a number of perspectives supports that general position: on the use of "expressive oral reading" (Patel and McNab, 2011); "reading aloud with kids" (De Lay, 2012); "automated assessment of fluency" (Mostow and Duong, 2009); and "fluency and subvocalization" (Ferguson, Nielson and Anderson, 2014).

The key distinction here is expressiveness at the structural as opposed to discourse level.  It is one thing to get learners to imitate prosody from an annotated script (like we do in haptic work--see below) and quite another to get them to mirror expressiveness in a drama, whether reading from a script without structural cues, as in Reader's Theatre, or impromptu.

Oral reading figures (or figured) prominently in many teaching methods. The EHIEP (Essential Haptic-integrated English Pronunciation) system provides contextualized practice in the form of short dialogues where learners use pedagogical movement patterns (PMPs), gestural patterns accompanying each phrase that culminate with the hands touching on designated stressed syllables. That is the most important feature of assigned pronunciation homework. Although that is, of course, primarily structural prosody (in the Lectio Divina tradition), we see consistent evidence that oral performance leads to enhanced interpretative expressiveness.

I suspect that we are going to see a strong return to systematic oral reading in language teaching as interest in pragmatic and discourse competence increases. So, if expressiveness is such an important key to not only fluency but interpretation in general, then how can you do a better job of fostering that in students?


Read out loud, expressively: "Read out loud expressively and extensively!" 

Tuesday, November 10, 2015

Alexander Guiora - Requiescat in pace

Last month the field of language teaching and language sciences lost a great friend, colleague, researcher and theorist, Alexander Guiora, retired Professor Emeritus, University of Michigan. To those of us in English language teaching, his early work into the concepts of empathy, "language ego" and second language identity, the famous "alcohol" study and others, were foundational in keeping mind and the psychological self foregrounded in the field. As Executive Editor of the journal, Language Learning, he was instrumental in elevating it to the place it holds today, the standard for research publication by which all others are to be measured.

Working with him, doing research as a doctoral student was a unique experience. His research group, composed of faculty and graduate students from several disciplines over the years, met every Friday morning. There was always a project underway or on the drawing boards. Several important, seminal publications resulted. Shonny was an extraordinary man. I recently shared the following with his family:

I think the great lesson we learned from him early on was how to be brutally honest--and yet still love and respect our colleagues unconditionally. All of us, recalling when we were newbie grad students, "cherish" memories of being jumped all over for making a really stupid mistake--which we would surely never commit again! And then, minutes later, he could just as well say something genuinely complimentary about an idea or phrasing in a piece that we were responsible for. Talk about cultivating and enhancing "language researcher ego"! He taught us to think and argue persuasively from valid research, and how not to take criticism of our work personally. Few of us did not develop with him a lasting passion for collaborative research.

Thursday, October 22, 2015

We have met the enemy (of pronunciation teaching in TESOL), and he is us!
Am often reminded of that great quip in the comic strip Pogo, by Walt Kelly, embellished in the title of this post. In workshops we often encounter the following three misconceptions about pronunciation teaching, based vaguely and incorrectly on "research" in the field. Recently, in the comments of one reviewer of a proposal for a workshop on teaching consonants for the 2016 TESOL convention--which was rejected, by the way--all three showed up together! Here they are, with my responses in italics:

"Most learners have access to websites that model phonemes, such as Rachel’s English and Sounds of Speech by the University of Iowa."

Really? "Most" learners? What planet is that on? Billions of learners don't have web access, including the preponderance of those in settlement programs here in Vancouver. And even those that do still need competent instruction not only on how to use such sites effectively, but on how to find them in the first place. Furthermore, those sites are strongly visual-auditory and EAP-biased, better suited to what we term "EAP-types" (English for the academically privileged). For the kinaesthetic or less literate learner, those web resources are generally of little value. There are half a dozen other reasons why that perspective is excessively "linguini-centric."

Theory, Practice and Research Basis
"There has been much research, which has shown the central importance of the peak vowel in a stressed syllable. The focus on consonant articulation is less important."

That represents an "uninformed" consensus from more than a decade ago. Any number of studies have since established the critical importance of selected consonants for intelligibility of learners of specific L1s. Think: Final consonants in English for some Vietnamese dialects or some Spanish L1 speakers of English. 

Support for Practices, Conclusions, and/or Recommendations
"The article made a nice specific connection between haptic activities, and acquisition of consonant sounds. However, there was only one source."

Good grief. The workshop was proposed as a practical, hands-on session for teachers, presenting techniques for dealing with specific consonants. (The one reference is a published conference paper linked off the University of Iowa website.) I have heard similar reports from other classroom practitioners, such as myself, who had proposals rejected: Only "researcher certified" proposals welcome. So much for our earlier enthusiasm in TESOL for teacher empowerment . . .

Wednesday, October 21, 2015

8 ways to teach English rhythm to EVERYbody but no BODY!

Here's one for your "kitchen sink" file (a research study that throws almost every imaginable technique at a problem--and succeeds) . . . well, sort of. In Kinoshita (2015), over the course of four weeks, students were taught using seven different, relatively standard procedures for working on Japanese rhythm with JSL students. If you are new to rhythm work, check it out.

Those included: rhythmic marking (mark rhythm groups with a pencil and then trace them with their fingers), clapping (hands), pattern grouping (identify the type of rhythm pattern for known vocabulary), metronome haiku (listening to and reading haiku to a metronome), auditory beat (reading grouped text out loud), acoustic analysis (using Praat), and shadowing (attempting to read or speak along with an audio recording or live person). Impressive! They worked with each one for over an hour.

Not surprisingly, their rhythm improved. It is not entirely clear what else may have contributed to that effect, including other instruction and out-of-class experience, since there was no control group, but the students liked the work and identified their favorite procedure, which apparently aligned with their self-identified cognitive/learning style. Of course, after that many hours of rhythm work it had to be a bit difficult for a learner to assess which technique they "liked" best, let alone which actually worked best for them individually.

Of particular interest here are the first two techniques, marking rhythm and tracing along with a finger, and clapping hands--both of which are identified as "kinaesthetic" by Kinoshita. (The other techniques are noted as combinations of auditory and visual.) They are, indeed, movement- and touch-based. The first at least involves moving a finger along a line. The second, clapping hands, could, in principle, involve more of the body than just the hands, but it also might not, of course.

Neither technique, at least on the face of it, meets our basic "haptic" threshold--involving more full-body engagement and distinctly anchoring stressed vowels. By that I mean that including touch in the process does not, in principle, help to anchor (better remember) the internal structure of the targeted rhythm groups--in fact it may serve to help cancel out memory for different levels of stress, length and volume of adjacent syllables. (There have been several blogposts dealing with this topic, one recently and the first, back in 2012 that focused on how haptic "events" are encoded or remembered.)

In essence, the haptic "brain" area(s) are not all that good at remembering different levels of pressure applied to the same point on the body. In other words, it is more challenging, for example, to remember which syllable in a clapped or traced rhythm group was prominent. (The number of syllables involved may be another matter.) So, to the extent that rhythm cannot or should not be divorced from word and phrasal stress, Kinoshita's two procedures probably are not contributing much variance to the final "progress" demonstrated.

That is not to say that more holistic, "full body" techniques such as "jazz chants", poetry, songs or dance, such as those promoted by Chan in her paper in the same conference proceedings (Pronunciation Workout), are not useful, fun, engaging and motivating, or that they do not serve functions other than acquisition of the rhythm of an L2.

A basic assumption of haptic work is that systematic body engagement, involving the whole person, especially from the neck down, is essential to efficient instruction and learning. (Train the body first! - Lessac). v4.0 will include extensive use of "pedagogical dance steps" and practice of most pedagogical movement patterns (gesture plus touch) to rhythmic percussion loops.

As always, if you are looking for a near perfect "haptic" procedure for teaching English rhythm, where differentiated movement and touch contribute substantially to the process, I'd, of course, recommend beginning with the AHEPS v3.0 Butterfly technique--at least as a replacement for hand clapping. And for most of the other techniques as well, as a matter of fact!

Full citation:
Kinoshita, N. (2015). Learner preference and the learning of Japanese rhythm. In J. Levis, R. Mohammed, M. Qian & Z. Zhou (Eds.), Proceedings of the 6th Pronunciation in Second Language Learning and Teaching Conference (ISSN 2389566), Santa Barbara, CA (pp. 49-62). Ames, IA: Iowa State University.

Monday, October 19, 2015

The perfect body image for haptic pronunciation teaching!

Is haptic pronunciation teaching for you? According to research, here's a way to check. Put on your exercise clothes. Stand in front of a full-length mirror. If you don't like what you see (really!) or you like what you see too much . . . maybe not. If you are not up to speed on the impact of body image, this readable 1997 summary of research by Fox is a pretty good place to start.

We have known for over a decade that some instructors and students may find haptic pronunciation work disconcerting for a number of reasons--including culture and personality. They can be understandably skeptical about moving their bodies and gesturing during instruction, in class or in private. Likewise, teaching while standing in front of a class has proven, in many contexts, not to be the most effective way to do initial haptic pronunciation training.

Fast forward to the age of media and the potential of body image to affect personality and performance is magnified exponentially. In a new study of the impact of body imagery presented on the website "Fitspiration" and on Pinterest, Tiggemann and Zaccardo of Flinders University found that for college-age women, viewing attractive fitness models generally does nothing for body image; quite the contrary, in fact. The subjects in the study reported lower satisfaction with their bodies after viewing the Fitspiration images, but a better, more positive sense of body image after looking at a selection of "travel" pictures.

Now there could be many explanations for that effect. (I do need to get a copy of those "travel" pictures!) Numerous other studies have found that the same holds for motivation toward long-term diet and fitness persistence. Short term is another matter: great looking models do help get you and your credit card in the door! The point is that in this kind of media-based instruction, especially haptic pronunciation work that is, in essence, training the body to control speech, the appearance of the model may be important. I'm sure it is, in fact.

In part for those reasons, the Acton-haptic English Pronunciation System (AHEPS) training videos use a relatively non-distracting model whose image could not possibly intimidate, one that should not negatively impact body image. We found one: ME, in black and white, dressed in a white, long-sleeve pullover with a dark grey sweater vest, wearing a black beret.

I must admit that I was a bit disheartened at first when I was told by consultants that I was a near perfect model: 70+ years old, bald, no distinguishing facial features, nondescript body shape, "professor-type"--my appearance would distract no one from the gestural patterns I was doing with my hands and arms in front of my upper body. Great. So much for my plan to use a "Fitsperational" model for the 120+ videos of the system.

For a time we tried using an avatar, but he was not engaging enough to hold attention. Alas, I proved to be "avatar-enough" in the end. In addition, any number of studies have confirmed the relatively fragile nature of haptic engagement. It is exceedingly sensitive to being overridden or distracted by visual or auditory interference.

With a few exceptions, such as workshops at conferences, most hapticians, myself included, let the videos do the initial training, where learners and models need to do a good deal of uninhibited upper body movement of hands and arms. Later, in classroom application of the pedagogical movement patterns, instructors use a very discreet, limited range of movement in correction and modeling--generally within the "body-image-comfort-zone" of most.

Not quite ready to teach pronunciation haptically, yourself?--Let us do it for you!

Keep in Touch

Wednesday, October 7, 2015

Great memory for words? They're probably out of their heads!

Perhaps the greatest achievement of neuroscience to date has been to repeatedly (and empirically) confirm common sense. That is certainly the case with teaching or training. Here's a nice one.

For a number of reasons, the potential benefit of speaking a word or words out loud and in public when you are trying to memorize or encode it--rather than just repeating it "in your head"--is not well understood in language teaching. For many instructors and theorists, the possible negative effects on the learner of speaking in front of others and getting "unsettling" feedback far outweigh the potential benefits. (There is, of course, a great deal of research--and centuries of practice--supporting the practice of repeating words out loud in private.)

In what appears to be a relatively elegant and revealing (and also common-sense-confirming) study, Lafleur and Boucher of the University of Montreal, as summarized by ScienceDaily (full citation below), explored which of four conditions produces the best subsequent memory for words: (a) saying the word to yourself "in your head", (b) saying it in your head while moving your lips, (c) saying it out loud to yourself, and (d) saying it out loud in the presence of another person. The last condition was substantially the best; (a) was the weakest.

The researchers do speculate as to why that should be the case (quoting from the summary):

"The production of one or more sensory aspects allows for more efficient recall of the verbal element. But the added effect of talking to someone shows that in addition to the sensorimotor aspects related to verbal expression, the brain refers to the multisensory information associated with the communication episode," Boucher explained. "The result is that the information is better retained in memory."

The potential contribution of interpersonal communication as context information to memory for words or experiences is not surprising. How to use that effectively and "safely" in teaching is the question. One way, of course, is to ensure that the classroom setting is both as supportive and nonthreatening as possible. Add to that a social experience with others that also helps to anchor the memory better.

Haptic pronunciation teaching is based on the idea that instructor-student and student-student communication about pronunciation must be both engaging and efficient--and resonantly and richly spoken out loud. (Using systematic gesture does a great deal to make that work. See v4.0 later this month for more on that.)

I look forward to hearing how that happens in your class or your personal language development. If that thread gets going, I'll create a separate page for it. 

Keep in touch!

University of Montreal. "Repeating aloud to another person boosts recall." ScienceDaily, 6 October 2015.

Monday, September 28, 2015

4 rituals for improving how students feel about their pronunciation


It is getting to the point now that whenever you need advice on all things related to feeling or doing better, your default is your local "neuroscientist". A favorite venue of mine for such pop and entertaining counsel--other than Amy Farrah Fowler on The Big Bang Theory--is Eric Barker's blog. In what is better read as simply tongue-in-cheek, Barker has a fun piece entitled "4 rituals that will make you a happier person."

I recommend you read it, if only to get a good picture of where we are headed and how neuroscience is being hijacked by pop psychology, or vice versa . . . 

Those "rituals" are:
  • Ask why you feel down. (Once you identify the cause, your brain will automatically make you feel better.)
  • Label negative feelings. (That will relocate them in a part of the brain that generally doesn't mess with feelings.)
  • Make that decision. (As long as your brain is being managed by the executive center, you are in command and feeling powerful.)
  • Touch people. I have always been a fan of oxytocin. Touch of all kinds, including hugging, generates it.  
Notice that the first three are not all that far off from the magician's (or psychologist's) basic technique of distracting the audience away from the trick--looking someplace else or looking at the problem through a lens or two to knock off or defuse the negative feelings. 

So, how might this work for changing pronunciation or at least taking on more positive attitudes toward it? For example (avoiding micro-aggressions to the extent possible):

Question: Why do you feel down?  
Answer: Your pronunciation is bad; not inferior, just bad.

Question: Why the negative feelings?
Answer: I have unrealistic expectations or you are a bad teacher.

Question: What decision should you make? 
Answer: Get in touch with my local "haptician" (who teaches pronunciation haptically) or consult my local neuroscientist so I can at least feel better about my pronunciation . . .

Question: How can I get in(to) touch?
Answer: Start here, of course!

Sunday, September 20, 2015

Tapping into English rhythm--but not teaching it or remembering it!

Credit: Anna Shaw
One question I often pose to language teachers is something like: How do you teach rhythm? The most frequent answers: I don't! (or) You can't! (or) How do you do that? There are no studies that I am aware of that investigate the relative effectiveness of teaching L2 English rhythm. A recent study of instructor priorities in teaching pronunciation, by Saito (2013), includes a questionnaire that does not even mention rhythm as an option.

So why can rhythm be difficult to teach? New research by Tierney and Kraus (2015, full citation below), entitled "Evidence for Multiple Rhythmic Skills", suggests why--and possibly something of a solution. What they found, in VERY simple terms, was, in essence, that the brain "circuitry" for keeping up a beat, such as tapping fingers along with music, is actually quite different from the neurological connections that encode and recall rhythmic patterns. In other words, just because students can follow along with common rhythm techniques, such as tapping fingers on the desk or clapping hands to rhythmic patterns, does not mean that they will be able to remember or use those patterns later.

This is big. In an earlier post, I reported on the "haptic" side of similar research, showing that differentiation between multiple instances of repeated touch on one location can be exceedingly difficult for the brain to process. That is, from a pronunciation perspective, tapping on desks or clapping hands or stretching rubber bands to learn stress patterns, where one syllable is spoken louder or stronger than the others, may not be all that effective.

In part in response to that research, the Essential Haptic-integrated English Pronunciation (EHIEP) system teaches rhythm using a gestural framework that encodes the pattern not just as a sequence of touches on the body, but also places the stressed element in a different location from the unstressed elements--AND--uses consistent positions and movement across the visual field to further distinguish the pattern. Here is a good example, the Butterfly technique.

For more on how to teach that way, tap here!

Full citation:
Tierney, A. & Kraus, N. (2015). Evidence for multiple rhythmic skills. PLoS ONE. DOI: 10.1371/journal.pone.0136645