Thursday, February 4, 2016

You CAN teach old dogs new pronunciation!!!

Can't resist this one . . . A new study by Wallis, Virányi, Müller, Serisier, Huber, and Range, University of Vienna, on how age affects learning in (pet) dogs--border collies, to be precise. What they found, according to the ScienceDaily summary, was that with older dogs:
  • they learn more slowly and exhibit lower cognitive flexibility
  • their logical reasoning improves with age
  • their long-term memory for touchscreen stimuli (emphasis mine) is not affected by age
Now, the research was looking at memory, using touchscreen technology as the medium. Turns out, in part, what they may have actually "discovered" is the effect of touch on old dog memory. The report of the study does not mention that possible "confounding" variable or other studies using different media. Really . . .

For the last 40 years or so, one of my main interests has been the fossilized pronunciation of "old dogs"--always trying to figure out how to undo seemingly intractable errors in pronunciation. That led me to gesture, and then, a decade ago, to touch and embodiment theory. One of the most consistent "findings" of my (haptic-integrated) clinical work has been that, for the fossilized at least, gesture + touch is a remarkably effective antidote. In other words, "more mature" learners can change and remember new pronunciation (in English, for the most part) if you anchor it with . . . (ready?) TOUCH! Like the touch screen in the "old dogs" research.

That model has been extended to all learners with v1.0, v2.0, v3.0 and (soon) v4.0 of the EHIEP approach (Essential Haptic-integrated English Pronunciation). For puppies and younger dogs there are, of course, all kinds of possibly effective ways to change pronunciation. But if you are working with "older dogs," at least, the haptic approach is . . . well . . . logical!

We don't use a touch screen yet, but we could, of course. We, instead, use hands touching each other or other strategic body locations, e.g., shoulders or brachioradialis muscles, on stressed syllables. This new research gives hope to the fossilized and those still seriously out of touch . . .

Wallis, L., Virányi, Z., Müller, C., Serisier, S., Huber, L., and Range, F. (2016). Aging effects on discrimination learning, logical reasoning and memory in pet dogs. AGE 38(1).

Wednesday, February 3, 2016

Gestured pronunciation instruction: Better online?

It is now well-established in several fields that "Students learn more when their teacher has learned to gesture effectively" (Alibali, Young, Crooks, Yeo, Wolfgram, Ledesma, Nathan, Breckinridge-Church and Knuth, 2013). In pronunciation work, use of "live" models is typically limited to either "talking heads," often zeroing in on the mouth, or a recording of an instructor presenting something resembling a typical lesson, with explanation and practice. If you have never spent time experiencing what is now out there from the learner's perspective, stop for a bit and join us when you have. Most of it is mind-numbing, at best.

Although there is no research that I am aware of focusing on the specific contribution of video to pronunciation instruction, the assumption seems to be simply that the "better" the production quality, the more effective the instruction. There is a rapidly growing market for web-based, visually compelling teaching of pronunciation.

One of the obvious problems with video-based instruction, especially the more visually captivating, ironically, is the potential for viewers to drop back into "TV-trance mode," absorbing but not doing much processing or demonstrating meaningful engagement. (There is also a very serious issue with the visual modality overpowering the auditory and kinaesthetic.) In pronunciation work, where re-education of the body is central, not enthusiastically joining "the dance" is a deal breaker . . . One key contribution of gesture to instruction is to create stronger engagement and enhancement of moment-by-moment attention.

A 2014 study, The effect of gestured instruction on the learning of physical causality problems, by Carlson, Jacobs, Perry and Breckinridge-Church, demonstrates how systematic use of gesture by instructors on video can significantly improve learning of another "physical" process. Subjects who viewed the "gesture-articulated" instructor, rather than just the spoken presentation, did better on the post-test. This study is particularly relevant in that it deals with gesture enabling cognition of a very "tactile" concept: manipulating gear movement and direction.

In haptic pronunciation teaching, as unpacked in several earlier posts, it is apparently the case that not only is gesture with video more effective, but gesture + video + touch is better still. The basic reasons are that (a) touch makes gesture more systematic, (b) touch gives gesture more impact, whether performed by the learner or just observed, and (c) training learners in haptic-anchored gesture without video is, at least initially, simply too far outside the comfort zone and "haptic intelligence" of many, if not most, instructors. (See Research References page.)

I came up with this system over a decade ago and still use videos (of myself) when introducing students to the basic gestural inventory, or pedagogical movement patterns (PMP). I'm just so much better online . . . (and you will be, too!)

References:
Alibali, M., Young, A., Crooks, N., Yeo, A., Wolfgram, M., Ledesma, I., Nathan, M., Breckinridge-Church, R. and Knuth, E. (2013). Students learn more when their teacher has learned to gesture effectively. Gesture 13(2), 210-233.
Carlson, C., Jacobs, S., Perry, M. and Breckinridge-Church, R. (2014). The effect of gestured instruction on the learning of physical causality problems. Gesture 14(1), 26-45.


Saturday, January 23, 2016

Внимание! Highly emotional L2 pronunciation teaching! (Ah . . . forget it!)

Every language has at least one expression that gets its message across better than almost any other language can, emotionally and phonaesthetically. In Russian, for me at least, one is "Внимание!" ("Vnimanie!"--Attention!) Said with the right emotional "zing," it can "grab" the attention like no expression I have ever experienced.

Optimal holding and systematic management of learner attention and emotion is the foundation of haptic pronunciation work. (See earlier post.) It is often assumed, however, that the more emotion involved in language teaching or learning, the better words and meanings are remembered. Turns out, not surprisingly, that is really not the case.

Research by Schirmer, Chen, Ching, Tan, Ryan and Hong (2012), summarized by ScienceDaily, investigating the impact of emotion in the spoken voice on memory for words and meanings, confirms what common sense tells us: strong emotion sometimes "clouds" and sometimes "enhances" understanding and memory. In that study, subjects' brain activity was imaged as they saw and heard spoken words carrying varying degrees and kinds of emotion.

In one condition, " . . . participants recognized (the actual) words better when they had previously heard them in the neutral (relatively unemotional) tone compared with the sad tone." However, expressions spoken with more emotion captured subjects' attention better and were recognized more quickly later. In addition, women were better at recognizing emotionally loaded words than men. In effect, emotion seemed to enhance memory for meaning but downgrade recall of specific words. The brain mapping confirmed the differential processing of the emotion-loaded targets. That makes sense: emotion is more a discourse function, relating to context and the story.

In the context of language learning, this research might suggest that emotion in the voice would enhance listening comprehension, for example--but perhaps not pronunciation, or even memory for specific vocabulary. That has always been one of the "conundrums" of using drama or highly "gesticular" routines in language teaching: they do seem to improve general expressiveness, confidence, rhythm and intonation, but not pronunciation of individual words or even memory for them. It is not that attention isn't focused on the target, but that the emotion involved simply directs attention elsewhere in the brain.

So what is the bottom line here? Apparently this: drawing learners' attention to pronunciation with various emotional overlays and highlights may be fun, stimulating and a good change of pace (and still worth doing, of course, for other reasons), but in the long run . . . not all that memorable (unlike this post, of course!) That does not mean that the sterile language lab of old or the web-based "drilling machines" are the answer, but that pronunciation teaching must generally be embedded in authentic communication, where emotion and attention to form occur naturally and systematically--like in your classroom?

Citation:
Springer Science+Business Media. (2012, December 11). Emotion in voices helps capture listener's attention, but in the long run the words are not remembered as accurately. ScienceDaily. Retrieved January 22, 2016 from www.sciencedaily.com/releases/2012/12/121211112742.htm

Saturday, January 16, 2016

Can't stand teaching pronunciation? You should reconsider!

When you work with pronunciation, how often do you have students on their feet? In both general education and business, the benefits of "thinking on your feet" (literally) are well-established. (I'm doing this blogpost, as usual, standing in the kitchen next to the coffee maker!) A new study by Mehta, Shortz and Benden of Texas A&M University, summarized by ScienceDaily, seems to establish for the first time the specific "neuro-cognitive" basis of that effect.

Students were assigned, based on their preferences, to use standing desks during the experimental study. According to the authors, quoted by ScienceDaily:

"Test results indicated that continued use of standing desks was associated with significant improvements in executive function and working memory capabilities," Mehta said. "Changes in corresponding brain activation patterns were also observed."

 Wow! That almost deserves a standing ovation! On the blog in the past I've reported on a number of studies that demonstrate the cognitive benefits of exercise on learning and memory and the corresponding enhancement of attitude and motivation that getting students up and moving around produces.

In the classroom application of haptic pronunciation teaching (and STRONGLY recommended for haptic independent study), virtually ALL initial training in the core pedagogical movement patterns is done with students on their feet, typically mirroring the model on the LCD screen at the front of the room. (To preview those, go here.)

Even if your school is not set up with standing desks, you can at least get students on their feet occasionally, not just for pronunciation but for almost any in-class activity (as I'm sure many of you do already). One of my all-time favorites, in fact, is the "Talkaboutwalkabout!"

Full citation:
Texas A&M University. "New study indicates students' cognitive functioning improves when using standing desks." ScienceDaily, 14 January 2016.

Saturday, January 9, 2016

Time to switch back to English Only (in pronunciation teaching)?

There is one counter-argument to use of L1 in the L2 classroom that you don't hear that often today: that L2 pronunciation may be compromised in the process. When I was trained 4 decades ago, that was a given. It may be time for a slight "switch" in perspective.

Goldrick, Runnqvist and Costa (2014), summarized by PsychologicalScience.org, conducted an interesting study in which bilingual subjects switched back and forth between English and Spanish (their dominant language) nouns. Spanish consistently influenced their pronunciation of English consonants, but English did not affect Spanish consonants. The Spanish influence was not readily apparent when English terms were articulated consecutively.

The point of the study is that the additional processing load of switching itself--not just the differences in articulation of the L1 and L2 consonants--was contributing to the emergence of the more salient Spanish influence.

In an EFL class, where more of the speaking is in the L1, the "switching-processing" effect may be quite "pronounced." Even in an ESL class, where students themselves may be using the L1 privately, outside the flow of the class, the effect on L2 pronunciation could, likewise, be significant. In the structuralist, audio-lingual period, exclusion of the L1 from teaching was, indeed, a given.

Now if all the switching effect does is allow in a bit more "accent," then that may not be all that problematic, but that is not what the study seems to be implying: switching causes a generalized processing overload that probably affects much more than just pronunciation. It may be time we reexamine that effect, at least in terms of pronunciation teaching in integrated classroom instruction. The study deserves replication/extension to current methodology--and a closer look at L1 and L2 switching in your class as well?


Full citation:
Goldrick, M., Runnqvist, E., and Costa, A. (2014). Language switching makes pronunciation less nativelike. Psychological Science 25(4), 1031-1036. DOI: 10.1177/095679761352001

Friday, January 1, 2016

3D pronunciation instruction: Ignore the other 3 quintuplets for the moment!

For a fascinating look at where the field may be headed, a somewhat unlikely source, Ross's 2015 book 3D Cinema: Optical Illusions and Tactile Experience, provides a (phenomenal) look at how and why contemporary 3D special effects succeed in conveying the "sensation of touch." In other words, as is so strikingly done in the new Star Wars epic, the technology tricks your brain into thinking that you are not only there flying that star fighter but that you can feel the ride throughout your hands and body as well.

This effect is not just tied to current gimmicks, such as moving and vibrating theater seats, spray mist blown on you, or various odors and aromas being piped in, although it can be. Your mirror neurons respond more as if it is you who is doing the flying, as if you are (literally) "in touch" with the actor. The neurological interconnectedness between the senses (or modalities) provides the bridge to a greater and greater sense of the real, or at least a very "close encounter."

How does the experience in a good 3D movie compare to your best multi-sensory events or teachable moments in the classroom, focusing on pronunciation? 

It is easy to see, in principle, the potential for language teaching: creating one vivid teachable moment after another, "Wowing!" the brain of the learner with multi-sensory, multi-modal experience. As noted in earlier blogposts on haptic cinema, based in part on Marks (2002), that concept--"the more multi-sensory, the better": stimulate more of the learner's (whole) brain and virtually anything is teachable--is implicit in much of education and entertainment.

Although the earlier euphoria has moderated, one reason it can still sound so convincing is our common experience of remembering the minutest detail from a deeply moving or captivating event or presentation. We have all had the experience of being present at a poetry reading or great speech where it was as if all our senses were alive, on overdrive. We could almost taste the peaches; we could almost smell the gunpowder.

Part of the point of 3D cinema is that it becomes SO engaging that our tactile awareness is also heightened enormously. As that happens, the associated connections to other modalities are "fired" as well. We experience the event more and more holistically. How that integration happens can probably be described informally as something like: audio-visual-cognitive-affective-kinaesthetic-tactile-olfactory and "6th sense!" experienced simultaneously.

At that point, the brain is apparently multitasking at such high speed that everything is perceived as "there" all at once. And that is the key notion. It would seem to imply that if all senses are strongly activated and recording "data," then what came in on each sensory circuit will later be equally retrievable. Not necessarily. Still, as extensive research and countless commercially available systems have long established, for acquisition of vocabulary, pragmatics, reading skills and aural comprehension, the possibilities of rich multi-sensory instruction seem limitless at this point.

Media can provide memorable context and secondary support, but why that often does not work as well for learning of some other skills, including pronunciation, is still something of a mystery. (Caveat emptor: I am just completing a month-long "tour of duty" with seven young grandchildren . . . ) In essence, our sensory modalities are not unlike infant octuplets, competing for our attention and storage space. Although it is "possible" to attend to a few at once, it is simply not efficient. Best case, you can do maybe two at a time, one on each knee.

The analogy is more than apt. In a truly "3D" lesson, consistent with Ross (2015), whether f2f or in media, the five primary "senses" of pronunciation instruction (visual, auditory, kinaesthetic, tactile and meta-cognitive) can be nearly equally competitive--each vividly present in the lesson--overwhelmingly so. Tactile/kinaesthetic can be unusually prominent and accessible, in part, as noted in earlier blogposts, because it serves to "bind together" the other senses. In that context, consciously attending to any two or three simultaneously is feasible.

So how can we exploit such a vivid, holistically experienced, 3D-like milieu, where movement and touch figure more prominently? I thought you'd never ask! Because of the essentially physical, somatic experience of pronunciation--and this is critical, from our experience and field testing--two of the three MUST be kinaesthetic and tactile: a basic principle of haptic pronunciation teaching. (Take your pick of the other three!)

Consider "haptic" simply an essential "add-on" to your current basic three (visual, auditory and meta-cognitive), and "do haptic" along with one or two of the other three. The standard haptic line of march (a rough sketch in code follows the list):

A. Visual-Meta-cognitive (very brief explanation of what, plus symbol, or key word/phrase)
B. Haptic-Meta-cognitive (movement and touch with spoken symbol name or key word/phrase, typically 3x)
C. Haptic-Auditory (movement and touch, plus basic sound, if the target is a vowel or consonant temporarily in isolation, or target word/phrase, typically 3x)
D. Haptic-Visual-Auditory (movement and touch, plus contextualized word or phrase, spoken with strong resonance, typically 3x)
E. Some type of written note made for further reference or practice
F. (Outside of class practice, for a fixed period of up to 2 weeks follows much the same pattern.)
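
For readers who think in code, here is a minimal sketch of that sequence, in Python. The step labels and repetition counts come straight from the list above; the data structure, the function and the sample target are hypothetical illustrations, not part of the EHIEP materials themselves.

from dataclasses import dataclass

@dataclass
class Step:
    modalities: tuple   # channels engaged at this step
    action: str         # what instructor and learner do
    repetitions: int    # typical number of repetitions

# Steps A-E above; step F (outside-of-class practice, for a fixed
# period of up to 2 weeks) repeats much the same pattern.
SEQUENCE = [
    Step(("visual", "meta-cognitive"),
         "very brief explanation, plus symbol or key word/phrase", 1),
    Step(("haptic", "meta-cognitive"),
         "movement and touch, with spoken symbol name or key word/phrase", 3),
    Step(("haptic", "auditory"),
         "movement and touch, plus basic sound or target word/phrase", 3),
    Step(("haptic", "visual", "auditory"),
         "movement and touch, plus contextualized word or phrase, "
         "spoken with strong resonance", 3),
    Step(("meta-cognitive",),
         "written note made for further reference or practice", 1),
]

def anchor(target: str) -> None:
    """Walk one target sound, word or phrase through the sequence."""
    for step in SEQUENCE:
        for _ in range(step.repetitions):
            # Each repetition aims at ~3 seconds of complete
            # (whole body/mind) attention on the target.
            print(f"[{' + '.join(step.modalities)}] {step.action}: {target}")

anchor("forty-FOUR")  # hypothetical target phrase with designated stress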

Try to capture the learner's complete (whole body/mind) attention for just 3 seconds per repetition--if possible! Not only can that temporarily let you pull apart the various dimensions of the phonemic target for attention, but it can also serve to create a much more engaging (near 3D) holistic experience out of a potentially "senseless" presentation in the first place--with "haptic" in the mix from the outset.

Happy New Year!

Keep in touch.

Citation:
Ross, M. (2015). 3D Cinema: Optical Illusions and Tactile Experiences. London: Springer. ISBN: 978-1-349-47833-0 (print), 978-1-137-37857-6 (online).

Sunday, December 20, 2015

Lost in space: Why phoneme vowel charts may inhibit learning of pronunciation

In a recent workshop I inadvertently suggested that the relative distances between adjacent English vowels on various standard charts, such as the IPA matrix or those used in pronunciation teaching, were probably not all that important. Rather than "stand by" that comment, I need to "distance myself" from it! Here's why.

Several posts on the blog, including a recent one, have dealt with the basic question of to what extent visual stimuli can undermine learning of sound, movement and touch (the basic stuff of the haptic approach to pronunciation teaching). I went back to Doeller and Burgess (2008), "Distinct error-correcting and incidental learning of location relative to landmarks and boundaries" (full citation below), one of the key pieces of research/theory that our haptic work has been based on.

In essence, that study demonstrated that we have two parallel systems for learning locations, in two different parts of the brain: one works from landmarks in the visual (or experiential) field, the other from the boundaries of the field. Furthermore, boundaries tend to override landmarks in navigating. (For instance, when finding your way in the dark, your first instinct is to feel your way along the wall, touching what is there, if possible, not to steer by landmarks or objects in the field in front of you, whose relative location may be much less fixed in your experience.)

Most importantly for us, boundaries tend to be learned incidentally; landmarks, associatively. In other words, location relative to boundaries is more like a map, where a point is identified first by where it is relative to the boundary, not by the other points within the map itself. Landmarks, conversely, are learned relative to each other, not in relation to the boundary of the field, which may be irrelevant anyway, or not conceptually present.

So what does that imply for teaching English vowels? 
  • When still actively working on improving pronunciation, learners generally access the vowels in memory through a picture or image of a matrix with the vowels placed in it. (Having asked learners for decades how they "get to" vowels, the consistent answer is something like: "I look at the vowel chart in my mind.")
  • The relative position of those vowels, especially adjacent vowels, is almost certainly tied more to the boundaries of the matrix, the sides and intersecting lines, not the relative auditory and articulatory qualities of the sounds themselves. 
  • The impact of visual schema and processing over auditory and haptic is such that, at least for many learners, the chart is at least not doing much to facilitate access to the articulatory and somatic features of the phonemes, themselves. (I realize that is an empirical question that cries out for a controlled study!)
  • The phonemic system of a language is based fundamentally on relative distances between phonemes. The brain generally perceives phonemic differences as binary, e.g., it is either 'u' or 'U', or 'p' or 'b', although actual sound produced may be exceedingly close to the conceptual "boundary" separating them. 
  • Haptic work basically backgrounds visual schema and visual prominence, attempting to promote a stronger association between the sounds, themselves, and the "distance" between them, in part by locating them in the visual field immediately in front of the learner, using gesture, movement and touch, so that the learner experiences the relative phonemic "differences" as distinctly as possible.
  • We still do some initial orientation to the vowel system using a clock image with the vowels imposed on it, to establish the technique of using vowel numbers for correction and feedback, but we try to get away from that as soon as possible, since that visual schema, too, gives the impression that the vowels are somehow "equidistant" from each other--and, of course, according to Doeller and Burgess (2008), probably more readily associated with the boundary of the clock than with each other.
 (Based on excerpt from Basic Haptic Pronunciation, v4.0, forthcoming, Spring, 2016.)

Doeller, C. and Burgess, N. (2008). Distinct error-correcting and incidental learning of location relative to landmarks and boundaries. Retrieved December 19, 2015 from http://www.pnas.org/content/105/15/5909.long.


Friday, December 18, 2015

On developing excellent pronunciation and gesture--according to John Wesley, 1770.

Have just rediscovered Wesley's delightful classic "Directions Concerning Pronunciation and Gesture," a short pamphlet published in 1770. The style that Wesley was promoting was to become something of a hallmark of the Wesleyan movement: strong, persuasive public speaking. Although I highly recommend reading the entire piece, here are some of Wesley's (slightly paraphrased) "rules," well worth heeding, most of which are as relevant today as they were then.

 Pronunciation
  • Study the art of speaking betimes and practice it as often as possible.
  • Be governed in speaking by reason, rather than example, and take special care as to whom you imitate.
  • Develop a clear, strong voice that will fill the place wherein you speak.
  • To do that, read or speak something aloud every morning for at least 30 minutes.
  • Take care not to strain your voice at first; start low and raise it by degrees to a height.
  • If you falter in your speech, read something in private daily, and pronounce every word and syllable so distinctly that they may have all their full sound and proportion . . . (in that way) you may learn to pronounce them more fluently at your leisure.
  • Should you tend to mumble, do as Demosthenes, who cured himself of this defect by repeating orations every day with pebbles in his mouth.
  • To avoid all kinds of unnatural tones of voice, endeavor to speak in public just as you do in common conversation.
  • Labour to avoid the odious custom of spitting and coughing while speaking.
Gesture
  • There should be nothing in the dispositions and motions of your body to offend the eyes of the spectators.
  • Use a large looking glass as Demosthenes (again) did; learn to avoid all disagreeable and "unhandsome" gestures.
  • Have a skillful and faithful friend to observe all your motions and to inform you which are proper and which are not.
  • Use the right hand most, and when you use the left let it only be to accompany the other.
  • Seldom stretch out your hand sideways, more than half a foot from the trunk of your body.
  • . . . remember that while you are actually speaking you are not to be studying any other motions, but use those that naturally arise from the subject of your discourse.
  • And when you observe an eminent speaker, observe with utmost attention what conformity there is between his action and utterance and these rules. (You may afterwards imitate him at home 'till you have made his graces your own.)
 Most of the "gesture" guidelines and several of those for pronunciation are employed explicitly in public speaking training--and in haptic pronunciation teaching. Even some of the more colorful ones are still worth mentioning to students in encouraging effective speaking of all sorts. 

Monday, December 14, 2015

Can't see teaching (or learning) pronunciation? Good idea!

A common strategy of many learners when attempting to "get" the sound of a word is to close their eyes. Does that work for you? My guess is that those are highly visual learners who can be more easily distracted. Being more auditory-kinesthetic and somewhat color "insensitive" myself, I'm instead more vulnerable to random background sounds, movement or vibration. Research by Molloy et al. (2015), summarized by Science Daily (full citation below) helps to explain why that happens.

In a study of what they term "inattentional deafness," using MEG (magnetoencephalography), the researchers were able to identify in the brain both the place and the point at which auditory and visual processing in effect "compete" for prominence. As has been reported more informally in several earlier posts, visual consistently trumps auditory, which accounts for the common life-ending experience of having been oblivious to the sound of screeching tires while crossing the street fixated on a smartphone screen . . . The same applies, by the way, to haptic perception as well--except in some cases where movement, touch and auditory team up to override visual.

The classic "audio-lingual" method of language and pronunciation teaching, which made extensive use of repetition and drill, relied on a wide range of visual aids and color schemas, often with the rationale of maintaining learner attention. Even the sterile, visual isolation of the language lab's individual booth may have been especially advantageous for some--but obviously not for everybody!

What that research "points to" (pardon the visual-kinesthetic metaphor) is more systematic control of attention (or inattention) to the visual field in teaching and learning pronunciation. Computer-mediated applications go to great lengths to manage attention but, ironically, forcing the learner's eyes to focus on words and images, no matter how engaging, may, according to this research, also function to negate or at least lessen attention to the sounds and pronunciation. Hence the intuitive response of many learners to shut their eyes when trying to capture or memorize sound. (There is, in fact, an "old" reading instruction system called the "Look up, say" method.)

The same underlying, temporary "inattentional deafness" also probably applies to the use of color associated with phonemes--or even the IPA system of symbols in representing phonemes. Although such visual systems do illustrate important relationships between visual schemas and sound that help learners understand the inventory of phonemes and their connection to letters and words in general, in the actual process of anchoring and committing pronunciation to memory they may in fact diminish the brain's ability to efficiently and effectively encode the sound and the movement used to create it.

The haptic (pronunciation teaching) answer is to focus more on movement, touch and sound, integrating those modalities with visual. The conscious focus is on gesture terminating in touch, accompanied by articulating the target word, sound or phrase simultaneously with resonant voice. In many sets of procedures (what we term protocols), learners are instructed to either close their eyes or focus intently on a point in the visual field as the sound, word or phrase to be committed to memory is spoken aloud.

The key, however, may be just how you manage those modalities, depending on your immediate objectives. If it is phonics, then connecting letters/graphemes to sounds with visual schemas makes perfect sense. If it is, on the other hand, anchoring or encoding pronunciation (and possibly recall as well), the guiding principle seems to be that sound is best heard (and experienced somatically, in the body) . . . but (to the extent possible) not seen!

See what I mean? (You heard it here!)

Full citation:
Molloy, K., Griffiths, T., Chait, M., and Lavie, N. (2015). Inattentional deafness: visual load leads to time-specific suppression of auditory evoked responses. Journal of Neuroscience 35(49), 16046-16054.

Monday, November 30, 2015

The Music of Pronunciation (and language) Teaching

Like many pronunciation and "speaking" specialists, I have long believed that in some way systematic use of music should be "in play" at all times in class. I suspect most in the field feel the same. Up until recently there has not appeared to be much of an academically credible way to justify that or investigate the potential connection to language teaching more empirically.

A recent 2015 study, Music Congruity Effects on Product Memory, Perception, and Choice, by North, Sheridan and Areni, published in the Journal of Retailing (DOI below), suggests some interesting possibilities. Quoting the ScienceDirect.com summary, the study basically demonstrated that:
  • Ethnic music (e.g., Chinese, Indian) increased the recall of menu items from the same country.
  • Ethnic music increased the likelihood of choosing menu items from the same country.
  • Classical music increased willingness to pay for products related to social identity.
  • Country music increased willingness to pay for utilitarian products.
So, what may that mean for our work, or explain what we have seen in our classrooms?
  • (Recall) For example, we might predict that using English music of some kind with prominent vowels, consonants, intonation and rhythm patterns would enhance memory for them.
  • (Perception) Having listened to "English" music should enable learners to better perceive or recognize appropriate pronunciation models or patterns of English. I suspect that most language teachers believe that intuitively, having seen the indirect effects in how students' engagement with the music of the culture "works."
  • (Milieu) I, like many, have used classical music for "selling" and relaxing and creating ambiance for decades. There is research from several fields supporting that. Only recently have I been attempting to tie it into specific phonological structures or sounds, especially the expressive, emotional and relational side of work in intonation. 
  • (Function) I frequently use country-like music or rap for working on functional areas, warm ups, rhythm patterns, and specific vowel contrasts.
I am currently experimenting more with different rhythmic, stylistic and genre-based varieties of music (specifically, for the new v4.0 version of the haptic pronunciation teaching system, EHIEP - Essential Haptic-integrated English Pronunciation). Over the years I have used music in many ways, from general background or mood setting to highly rhythmic tunes tied directly to the patterns being practiced. I just knew it worked . . .

The "Music congruity" study begins to show in yet another way just how music affects both associative memory and perception, conveying in very real terms broad connections to culture and context. More importantly, however, it gives us more justification for creating a much richer and more "memorable" classroom experience.

If you use music, use more. If not, why not? 

North, Sheridan and Areni (2015). Music congruity effects on product memory, perception, and choice. Journal of Retailing, in press. doi:10.1016/j.jretai.2015.06.001

Sunday, November 29, 2015

Keeping the pain in pronunciation teaching (but working it out with synchronized movement and dance)

Three staples of pronunciation work--choral repetition, drill and oral reading--have been making something of a comeback, just waiting for studies like this one to surface. (Or, to confirm what any experienced practitioner could tell you without a controlled study in the lab.) In essence, the key idea is: choral, doing it together, in sync.

A 2015 study, Synchrony and exertion during dance independently raise pain threshold and encourage social bonding, by Tarr, Launay, Cohen and Dunbar, found " . . . significant independent positive effects on pain threshold (a proxy for endorphin activation) and in-group bonding. This suggests that dance which involves both exertive and synchronized movement may be an effective group bonding activity." (Full disclosure here.) The dance treatment used was a type of synchronized dancing at 130 beats per minute, which does sound relatively "exertive"--perhaps not a perfect parallel to the use of synchronized gesture and body movement in language teaching. It is, I think, still close enough, especially when you review the extensive literature review presented in the article. (And besides, the subjects in the study were high school students, who obviously have energy to "burn!")

One of the fascinating "paradoxes" of pronunciation instruction is the way use of gesture and movement can be both energizing and distracting. Appropriate choral speaking activities using synchronized gesture or body movement may work to exploit the benefits of prescribed movement, without the downsides, the "pain", including just the personal or cultural preferences related to the appropriateness of  moving one's body in public. (See several earlier posts on that topic.)

One of the major shifts in pronunciation teaching--and probably one reason for the concurrent lack of both interest in and effectiveness of current methodology--has been the move to "personalized" pronunciation work with computers and handheld devices, as putative substitutes for "synchronized" learning in a class . . . of people, with bodies to move with. In essence, we have in many respects disembodied pronunciation teaching, disconnecting it from both social experience and integrated (and often relatively hard, "exertive") learning.

In v4.0 of the EHIEP system, most of the basic training is done using designed pedagogical movement patterns, along with simple, line-dancing-like steps. (There is also the option of doing the practice patterns without accompaniment, not to a fixed rhythm, although the work is still done in complete synchrony between instructor and student.) In most cases the "step pattern" is just a basic side-to-side movement with periodic shifts in orientation and direction, done in the 48 to 60 beats per minute range. (A demonstration video will be available later this month, and the entire system early next spring.)

One of our most successful workshops along these lines was titled "So you think you can dance your way to better pronunciation!" Turns out, you can, even if that only means that all the bodies in the class are synchronized "naturally" as they mirror each other's movement, the result of their mirror neurons locking into highly engaged f2f communication in general.

Turns out the "pain" is essential to the process, both the physical and the social "discomfort," since responding to it and exploiting it also enables powerful, multi-sensory learning. Or as Garth Brooks put it: "I could have missed the pain, but I'd have had to miss the dance."

Full citation:
Tarr, B., Launay, J., Cohen, E., and Dunbar, R. (2015). Synchrony and exertion during dance independently raise pain threshold and encourage social bonding. Biology Letters, 28 October 2015. DOI: 10.1098/rsbl.2015.0767

Thursday, November 26, 2015

Drawing on the haptic side of the brain (in edutainment and pronunciation teaching)

How is your current "edutainmental quality of experience" (E-QoE)? That concept, defined as the degree of excitement, enjoyment and "natural feel" (of multimedia applications), comes from Hamam, Eid and El Saddik of the DISCOVER Lab, University of Ottawa, in a nice 2013 report, "Effect of kinaesthetic and tactile haptic feedback on the quality of experience of edutainment applications." (Full citation below.) E-QoE (pronounced "E-quo," I'd guess) is a great concept. We need to come up with a reliable way of measuring it in our research, something akin to that in Hamam et al. (2013).


In that study, a gaming application configured both with and without haptic or kinaesthetic features (computer-mediated movement and touch in various combinations, in this case a haptic stylus)--as opposed to just visual or auditory engagement, employing only eyes, ears and hands--was examined for relative E-QoE. Not surprisingly, the haptic-enabled configuration rated significantly higher in E-QoE, as indicated in subject self-reports.

I am often asked how "haptic" contributes to pronunciation teaching and what is "haptic" about EHIEP. This piece is not a bad, albeit indirect, Q.E.D. (quod erat demonstrandum)--one of my favorite Latin acronyms learned in high school math! (EHIEP uses movement and touch for anchoring sound patterns but not computer-mediated, guided movement--at least for the time being!)

The potential problems with use of gesture in instruction, the topic of several earlier posts, tend to be (a) inconsistent patterns in the visual field, (b) perception by many instructors and students as being out of their personal and cultural comfort zones, and (c) over-exuberant, random and uncontrolled gesture use in teaching generally, often vaguely related to attempts to motivate or "loosen up" learners--or, more legitimately, to just have fun. EHIEP succeeds in overcoming most of the potential "downside" of body-assisted teaching (BAT).

In a forthcoming 2016 article on the function of gesture in pronunciation teaching, the EHIEP (Essential Haptic-integrated English Pronunciation) method is somewhat inaccurately described as just a "kinaesthetic" system for teaching pronunciation using gesture, a common misconception. EHIEP does, indeed, use gesture (pedagogical movement patterns) to teach sound patterns, but the key innovation is the use of touch to make the application of gesture in teaching controlled, systematic and more effective in providing modeling and feedback--and, obviously, to enhance E-QoE--very much in line with Hamam et al. (2013).

The gaming industry has been on to haptic engagement for decades; edutainment is coming on board as well. Now if we can just do the same with something as unexciting, un-enjoyable and "unnatural" as most pronunciation instruction. We have, in fact . . .

Keep in touch!

Citation:

Hamam, A., Eid, M., and El Saddik, A. (2013). Effect of kinaesthetic and tactile haptic feedback on the quality of experience of edutainment applications. Multimedia Tools and Applications 67(2), 455-472.

Friday, November 20, 2015

Good looking, intelligible English pronunciation: Better seen (than just) heard

One of the less obvious shortcomings of virtually all empirical research on second language pronunciation intelligibility is that it is generally done using only audio recordings of learner speech, where the judges cannot see the faces of the subjects. In addition, the more prominent studies were done either in laboratory settings or in specially designed pronunciation modules or courses.

In a fascinating, but common-sense, 2014 study by Kawase, Hannah and Wang, it was found that being able to see the lip configuration of the subjects as they produced the consonant 'r', for example, had a significant impact on how the perceived intelligibility of the word was rated. (Full citation below.) From a teaching perspective, providing visual support or schemas for pronunciation work is a given. Many methods, especially those available on the web, rely strongly on learners mirroring visual models, many of them dynamic and very "colorful." Likewise, many, perhaps most, f2f pronunciation teachers are very attentive to lip configuration, their own or that of video models, in the classroom.

What is intriguing to me is the contribution of lip configuration and general appearance to f2f intelligibility. There are literally hundreds of studies that have established the impact of facial appearance on perceived speaker credibility and desirability. So why are there none that I can find on perceived intelligibility based on judges' viewing of video recordings, as opposed to just audio? In general, the rationale is to isolate speech, not allowing the broader communicative abilities of the subjects to "contaminate" the study. That makes real sense on a theoretical level, bypassing racial, ethnic and "cosmetic" differences, but almost none on a practical, personal level.

There are an infinite number of ways to "fake" a consonant or vowel, coming off quite intelligibly while at the same time doing something very different from what a native speaker would do. So why shouldn't there be an established criterion for how the mouth and face look as you speak, in addition to how the sounds come out? Turns out that there is, in some sense. In f2f interviews, being influenced by the way the mouth and eyes are "moving" is inescapable.

Should we be attending more to holistic pronunciation, that is, what the learner both looks and sounds like as they speak? Indeed. There are a number of methods today that have learners working more from visual models and video self-recordings. That is, I believe, the future of pronunciation teaching: software systems that provide formative feedback on both motion and sound. Some of that is now available in speech pathology and rehabilitation.

There is more to this pronunciation work than what doesn't meet the eye! The key, however, is not just visual or video models, but principled "lip service", focused intervention by the instructor (or software system) to assist the learner in intelligibly "mouthing" the words as well.

This gives new meaning to the idea of "good looking" instruction!

Full citation:
Kawase, S., Hannah, B., and Wang, Y. (2014). The influence of visual speech information on the intelligibility of English consonants produced by non-native speakers. Journal of the Acoustical Society of America 136(3), 1352. doi: 10.1121/1.4892770

Sunday, November 15, 2015

Emphatic prosody: Oral reading rides again! (in language teaching)

Two friends have related to me how they conclude interviews. One (a) asks applicants "Napoleon's final question" (which he would supposedly pose to potential officers for his army): "Are you lucky?" The other (b) has them do a brief but challenging oral reading. The first says (a) provides most of what he needs to know about an applicant's character. The second says (b) is the best indicator of their potential as a radio broadcaster--or as a language teacher. I occasionally use both, especially in considering candidates for (haptic) pronunciation teaching.

One of the "standard" practices of radio broadcasters (and, of course, actors) on their way to expertise (which some claim takes around 10,000 hours), I'm told, is to consistently practice out loud what is to be read on air or performed. Have done a number of posts over the years on "read aloud" techniques in general reading instruction with children and in language teaching, including the Lectio Divina tradition. Research continues to affirm the importance of oral work in developing both reading fluency and comprehension.

Recently "discovered" a very helpful 2010 paper by Erekson, coming out of research in reading, entitled Prosody and Interpretation, in which he examines the distinction between syntactic prosody (functioning at the phrasal level) and emphatic prosody used for interpretation (at the discourse level). One of the interesting connections that Erekson examines is that between standard indices of reading fluency and expressiveness, specifically control of emphatic prosody. In other words, getting students to read expressively has myriad benefits. Research from a number of perspectives supports that general position: on the use of "expressive oral reading" (Patel and McNab, 2011); "reading aloud with kids" (De Lay, 2012); "automated assessment of fluency" (Mostow and Duong, 2009); and "fluency and subvocalization" (Ferguson, Nielson and Anderson, 2014).

The key distinction here is expressiveness at the structural as opposed to the discourse level. It is one thing to get learners to imitate prosody from an annotated script (like we do in haptic work--see below) and quite another to get them to mirror expressiveness in a drama, whether reading from a script without structural cues, as in Reader's Theatre, or impromptu.

Oral reading figures (or figured) prominently in many teaching methods. The EHIEP (Essential Haptic-integrated English Pronunciation) system provides contextualized practice in the form of short dialogues in which learners use pedagogical movement patterns (PMPs), gestural patterns accompanying each phrase that culminate with the hands touching on designated stressed syllables. That is the most important feature of assigned pronunciation homework. Although that is, of course, primarily structural prosody (in the Lectio Divina tradition), we see consistent evidence that oral performance leads to enhanced interpretative expressiveness.

I suspect that we are going to see a strong return to systematic oral reading in language teaching as interest in pragmatic and discourse competence increases. So, if expressiveness is such an important key to not only fluency but interpretation in general, then how can you do a better job of fostering that in students?

Ready?

Read out loud, expressively: "Read out loud expressively and extensively!" 

Tuesday, November 10, 2015

Alexander Guiora - Requiescat in pace

Last month the field of language teaching and the language sciences lost a great friend, colleague, researcher and theorist: Alexander Guiora, Professor Emeritus, University of Michigan. To those of us in English language teaching, his early work on the concepts of empathy, "language ego" and second language identity, the famous "alcohol" study and others, was foundational in keeping mind and the psychological self foregrounded in the field. As Executive Editor of the journal Language Learning, he was instrumental in elevating it to the place it holds today: the standard for research publication by which all others are measured.

Working with him, doing research as a doctoral student was a unique experience. His research group, composed of faculty and graduate students from several disciplines over the years, met every Friday morning. There was always a project underway or on the drawing boards. Several important, seminal publications resulted. Shonny was an extraordinary man. I recently shared the following with his family:


I think the great lesson we learned from him early on was how to be brutally honest--and yet still love and respect our colleagues unconditionally. All of us, recalling when we were newbie grad students, "cherish" memories of being jumped all over for making a really stupid mistake--which we would surely never commit again! And then, minutes later, he could just as well say something genuinely complimentary about an idea or phrasing in a piece that we were responsible for. Talk about cultivating and enhancing "language researcher ego"! He taught us to think and argue persuasively from valid research, and how not to take criticism of our work personally. Few of us did not develop with him a lasting passion for collaborative research.



Thursday, October 22, 2015

We have met the enemy (of pronunciation teaching in TESOL), and he is us!

Clker.com
Am often reminded of that great quip from the political cartoon Pogo, by Walt Kelly, embellished in the title of this post. In workshops we often encounter the following three misconceptions about pronunciation teaching, based vaguely and incorrectly on "research" in the field. Recently, in the comments of one reviewer of a proposal for a workshop on teaching consonants for the 2016 TESOL convention--which was rejected, by the way--all three showed up together! Here they are, with my responses in italics:

Currency/Importance/Appropriateness 
"Most learners have access to websites that model phonemes, such as Rachel’s English and Sounds of Speech by the University of Iowa."

Really? "Most" learners? What planet is that on? Billions of learners don't have web access, including the preponderance of those in settlement programs here in Vancouver. And even those that do still need competent instruction, not only on how to use such sites effectively but on how to find them in the first place. Furthermore, those sites are strongly visual-auditory and EAP-biased, better suited to what we term "EAP-types" (English for the academically privileged). For the kinaesthetic or less literate learner, those web resources are generally of little value. There are half a dozen other reasons why that perspective is excessively "linguini-centric."

Theory, Practice and Research Basis
"There has been much research, which has shown the central importance of the peak vowel in a stressed syllable. The focus on consonant articulation is less important."

That represents an "uninformed" consensus from more than a decade ago. Any number of studies have since established the critical importance of selected consonants for the intelligibility of learners from specific L1s. Think: final consonants in English for some Vietnamese dialects, or for some Spanish L1 speakers of English.

Support for Practices, Conclusions, and/or Recommendations
"The article made a nice specific connection between haptic activities, and acquisition of consonant sounds. However, there was only one source."

Good grief. The workshop was proposed as a practical, hands-on session for teachers, presenting techniques for dealing with specific consonants. (The one reference is a published conference paper linked off the University of Iowa website.) Have heard similar reports from other classroom practitioners, such as myself, who had proposals rejected: only "researcher-certified" proposals welcome. So much for our earlier enthusiasm in TESOL for teacher empowerment . . .