Showing posts with label gesture-synchronized speech. Show all posts

Sunday, July 19, 2020

Fixing your eyes on better pronunciation--or before it!

Early in the development of haptic pronunciation teaching, we borrowed a number of techniques from Observed Experiential Integration (OEI) therapy, developed by Rick Bradshaw and colleagues about 20 years ago. OEI has proved particularly effective in the treatment of PTSD. One of its basic techniques is eye tracking: therapists carefully control the eye movements of patients, in some cases stopping at points in the visual field to "massage" them through various loops and depth-of-field tracking.

We discovered that attempting to control students' eye movement--having them visually follow the track of the gestures across the visual field being used to anchor sounds during pronunciation work--could be really counterproductive: although memory for sounds seemed better, holding attention for such extended lengths of time wore learners down. In some cases, students even became slightly dizzy or disoriented after only a few minutes. (And, in retrospect, we were WAY out of our league . . . )

Consequently, attention shifted to visual focus on only the terminal point of the gestural movement--where the stressed syllable of the word or phrase was located and where the hands touched. We have been using that protocol for about a decade.

Now comes a fascinating study by Badde et al., "Oculomotor freezing reflects tactile temporal expectation and aids tactile perception," summarized by ScienceDaily.com, that helps refine our understanding of the relationship between eye movement and touch in focusing attention. In essence, the research demonstrated that stopping or holding eye movement just prior to when a subject was to touch a targeted object significantly enhanced the intensity of the tactile sensation. Or, the converse: random eye movement prior to touch tended to diffuse or undermine the impact of touch. That helps explain something . . .

The rationale for haptic pronunciation teaching is, essentially, that the strategic use of touch both manages gesture and far more effectively focuses the placement of stressed syllables in the words accompanying the gesture in gesture-synchronized speech. In almost all cases, the eyes focus in on the hand about to be touched just prior to what we term the TAG (touch-activated ganglia), where touch literally "brings together" or assembles the sound, body movement, vocal resonance, and the graphic visual schema and meaning of the word or phoneme itself.

In other words, the momentary freezing of eye movement an instant before the touch event should greatly intensify the resulting impact and later recall produced by the pedagogical strategy. We knew it worked; we just didn't really understand why. Now we do.
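To make the timing concrete, here is a minimal sketch (in Python, purely illustrative: the drill words, cue wording, and intervals are our own assumptions, not an empirically derived protocol) of a drill pacer that cues the learner to freeze their gaze on the target hand just before the touch event:

    import time

    # Hypothetical haptic drill pacer: cue an oculomotor "freeze" just before
    # the touch event (the TAG). All timings below are illustrative guesses.
    DRILL_ITEMS = ["baNAna", "comPUter", "PHOtograph"]  # stress shown in caps
    GAZE_LEAD_IN = 0.5   # seconds of gaze freeze before the touch cue
    ITEM_PAUSE = 2.0     # pause between drill items

    def run_drill(items):
        for word in items:
            print(f"\nPrepare: {word}")
            time.sleep(ITEM_PAUSE)
            print("FIX your eyes on the target hand . . .")  # freeze eye movement
            time.sleep(GAZE_LEAD_IN)
            print("TOUCH on the stressed syllable!")         # the touch event
            time.sleep(ITEM_PAUSE)

    if __name__ == "__main__":
        run_drill(DRILL_ITEMS)

The point of the lead-in interval is simply to separate the gaze freeze from the touch, in the spirit of the Badde et al. finding.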

Put your current pronunciation system on hold for a bit . . . and get (at least a bit) haptic!

Original source:
Badde, S., Myers, C. F., Yuval-Greenberg, S., & Carrasco, M. (2020). Oculomotor freezing reflects tactile temporal expectation and aids tactile perception. Nature Communications, 11(1). doi:10.1038/s41467-020-17160-1

Tuesday, May 26, 2020

The sound of gesture: Ending of gesture use in language (and pronunciation) teaching

Quick reminder: Only one week to sign up for the next haptic pronunciation teaching webinars!

Sometimes getting a rise (as in rising pitch) out of students is the answer . . . This is one of those studies that you read where a number of miscellaneous pieces of a puzzle momentarily seem to come together for you. The research, by Pouw and colleagues at the Donders Institute, "Acoustic information about upper limb movement in voicing," summarized by Neurosciencenews.com, is, well . . . useful.

In essence, what they "found" was that at or around the terminal point of a gesture, where the movement stops, the pitch of the voice goes up slightly (for a number of physiological reasons). Subjects, with eyes closed, could still in many cases identify the gesture being used, based on the parameters of the pitch change that accompanied the nonsense words. The summary, however, is what is fun and actually helpful.

From the summary:

"These findings go against the assumption that gestures basically only serve to depict or point out something. “It contributes to the understanding that there is a closer relationship between spoken language and gestures. Hand gestures may have been created to support the voice, to emphasize words, for example.”

Although the way the conclusion is framed might suggest that the researchers missed roughly three decades of extensive research on the function of gesture, from both theoretical and pedagogical perspectives, it certainly works for me--and for all of us who work with haptic pronunciation teaching. That describes, at least in part, what we do: ". . . Hand gestures . . . created to support the voice, to emphasize words, for example." Now we have even more science to back us up! (Go take a look at the demonstration videos on www.actonhaptic.com, if you haven't before.)
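For the curious, here is a minimal sketch of how one might look for that terminal-point pitch rise in a recording of one's own gesture-synchronized speech. It is an illustration only, not the researchers' method: the file name and apex timestamp are hypothetical, and librosa's pyin pitch tracker stands in for whatever acoustic analysis the study actually used.

    import numpy as np
    import librosa

    # Hypothetical inputs: a recording plus a hand-annotated timestamp (seconds)
    # for the terminal point (apex) of the gesture.
    AUDIO_PATH = "gesture_utterance.wav"  # hypothetical file
    APEX_TIME = 1.8                       # hypothetical terminal-point timestamp
    WINDOW = 0.15                         # seconds on either side of the apex

    y, sr = librosa.load(AUDIO_PATH, sr=None)
    f0, voiced_flag, _ = librosa.pyin(
        y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C6"), sr=sr
    )
    times = librosa.times_like(f0, sr=sr)

    def mean_f0(t_start, t_end):
        """Mean F0 over voiced frames in the interval [t_start, t_end]."""
        mask = (times >= t_start) & (times <= t_end) & voiced_flag
        return np.nanmean(f0[mask]) if mask.any() else float("nan")

    before = mean_f0(APEX_TIME - WINDOW, APEX_TIME)
    after = mean_f0(APEX_TIME, APEX_TIME + WINDOW)
    print(f"Mean F0 before apex: {before:.1f} Hz; after: {after:.1f} Hz")
    print("Pitch rise at terminal point?", bool(after > before))

If the post-apex mean is reliably higher across recordings, you are seeing a rough version of the effect the study describes.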

What can I say? I'll just stop right there. Anything more would be but an empty gesture . . .

Source:
Pouw, W., Paxton, A., Harrison, S. J., & Dixon, J. A. (2020). Acoustic information about upper limb movement in voicing. PNAS. doi:10.1073/pnas.2004163117

Wednesday, February 14, 2018

Ferreting out good pronunciation: 25% in the eye of the hearer!

Something of an "eye-opening" study: Integration of Visual Information in Auditory Cortex Promotes Auditory Scene Analysis through Multisensory Binding, by Town, Wood, Jones, Maddox, Lee, and Bizley of University College London, published in Neuron. One of the implications of the study:

"Looking at someone when they're speaking doesn't just help us hear because of our ability to recognise lip movements – we've shown it's beneficial at a lower level than that, as the timing of the movements aligned with the timing of the sounds tells our auditory neurons which sounds to represent more strongly. If you're trying to pick someone's voice out of background noise, that could be really helpful," They go on to suggest that someone with hearing difficulties have their eyes tested as well.

I say "implications" because the research was actually carried out on ferrets, examining how sound and light combinations were processed by their auditory neurons in their auditory cortices. (We'll take their word that the ferret's wiring and ours are sufficiently alike there. . . )

The implications for language and pronunciation teaching are interesting, namely: strategic visual attention to the source of speech models and to participants in conversation may make a significant impact on comprehension and on learning how to articulate select sounds. In general, materials designers get it when it comes to creating vivid, even moving, models. What is missing, however, is consistent, systematic, intentional manipulation of eye movement and fixation in the process. (There have been methods that dabbled in attempts at such explicit control, e.g., "Suggestopedia"?)

In haptic pronunciation teaching we generally control visual attention with gesture-synchronized speech which highlights stressed elements in speech, and something analogous with individual vowels and consonants. How much are your students really paying attention, visually? How much of your listening comprehension instruction is audio only, as opposed to video sourced? See what I mean?

Look. You can do better pronunciation work.


Citation (open access):
Town, Wood, Jones, Maddox, Lee, & Bizley. Integration of Visual Information in Auditory Cortex Promotes Auditory Scene Analysis through Multisensory Binding. Neuron.

Saturday, December 23, 2017

Vive l'efference! Better pronunciation using your Mind's Ear!

"Efference" . . . our favorite new term and technique: to imagine saying something before you actually say it out loud, creating an "efferent copy" that the brain then uses in efficiently recognizing what is heard or what is said.  Research by Whitford, Jack, Pearson, Griffiths, Luque, Harris, Spencer, and Pelley of University of New South Wales, Neurophysiological evidence of efference copies to inner speech, summarized by ScienceDaily.com, explored the neurological underpinnings of efferent copies, having subjects imagine saying a word before it was heard (or said.)

The difference in the amount of processing required for subsequent occurrences following the efference copies, as observed by fMRI-like technology, was striking. The idea is that this is one way the brain efficiently deals with speech recognition and variance. By (unconsciously) having "heard" the target, or an idealized version of it, just previously in the "mind's ear," so to speak, we have more processing power available to work on other things . . .

Inner speech has been studied and employed extensively in second language research and practice (e.g., Shigematsu, 2010, dissertation: Second language inner voice and identity) and in other disciplines. To date, however, there is no published research that I'm aware of on the direct application of efference in our field.

The haptic application of that general idea is to "imagine" saying the word or phrase, synchronized with a specifically designed pedagogical gesture, before articulating it. In some cases, especially where the learner is highly visual, that seems to be helpful, but we have done no systematic work on it. The relationship with the effectiveness of video modeling may be very relevant as well. Here is a quick thought/talk problem to demonstrate how it works:

Imagine yourself speaking a pronunciation-problematic word in one of your other languages before trying to say it out loud. Do NOT subvocalize or move your mouth muscles. (Add a gesture for more punch!) How'd it work?

Imagine your pronunciation work getting better while you are at it!




Sunday, October 8, 2017

The shibboleth of great pronunciation teaching: Body sync!

If there is a sine qua non of contemporary pronunciation teaching, in addition to the great story of the first recorded pronunciation test in history that we often use in teacher training, it is the use of mirroring (moving along with a spoken model on audio or video). If you are not familiar with the practice of mirroring, here are a few links to get you started: Meyers (PDF), Meyers (video), and Jones.

There are decades of practice and several studies showing that it works: it seems to help improve suprasegmentals, attitudes, and listening comprehension, among other things. There has always been a question, however, as to how and why. A new study by Morillon and Baillet of McGill University, reported by ScienceDaily.com, not only suggests what is going on but also (I think) points to how to better work with a range of techniques related to mirroring in the classroom.

The study looked at the relationship between the motor and speech perception centers of the brain. What it revealed was that by getting subjects to move (some part of) their bodies to the rhythm of what they were listening to, their ability to predict what sound would come next was enhanced substantially. Quoting from the ScienceDaily summary:

"One striking aspect of this discovery is that timed brain motor signaling anticipated the incoming tones of the target melody, even when participants remained completely still. Hand tapping to the beat of interest further improved performance, confirming the important role of motor activity in the accuracy of auditory perception."

The researchers go on to note that a good analogy is the experience of being at a very noisy cocktail party and trying to focus in on the speech rhythm of someone you are listening to, in order to better understand what they are saying. (As one whose hearing is not what it used to be, due in part to age and tinnitus, that strategy is one I'm sure I employ frequently.) You can do that, I assume, either by watching body or facial movement or by just syncing to the rhythm of what you can hear.

As both Meyers and Jones note, with the development of visual/auditory technology and the availability of appropriate models on the web and in commercial materials, the feasibility of any student having the opportunity and tools to work with mirroring today has improved dramatically. Synchronized body movement is the basis of haptic pronunciation teaching. We have not done any systematic study of the subsequent impact of that training and practice on speech perception, but students often report that silently mirroring a video model helps them understand better. (Well, actually, we tell them that will happen!)
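If you want to experiment with rhythm-synced tapping yourself, here is a minimal sketch for pulling rough tap-cue timestamps out of a recorded model passage. It is only an illustration: the file name is hypothetical, and librosa's beat tracker is designed for music, so on speech its output is a loose pulse estimate rather than true stress timing.

    import librosa

    # Hypothetical input: a recorded model passage for mirroring practice.
    AUDIO_PATH = "model_passage.wav"  # hypothetical file

    y, sr = librosa.load(AUDIO_PATH, sr=None)

    # Beat tracking is tuned for music; on speech it yields only rough pulse
    # estimates, which is adequate for generating tap-along cues.
    tempo, beat_frames = librosa.beat.beat_track(y=y, sr=sr)
    beat_times = librosa.frames_to_time(beat_frames, sr=sr)

    print(f"Estimated tempo: {float(tempo):.0f} BPM")
    print("Tap cues (s):", [round(float(t), 2) for t in beat_times])

Playing those timestamps back as clicks, or just having students tap along to them, loosely approximates the "hand tapping to the beat of interest" condition in the study.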

If you are new to mirrored body syncing in pronunciation teaching or in listening comprehension work, you should try it, or at least dance along with us for a bit.

Source:
McGill University. (2017, October 5). Predicting when a sound will occur relies on the brain's motor system: Research shows how the brain's motor signals sharpen our ability to decipher complex sound flows. ScienceDaily. Retrieved October 6, 2017 from www.sciencedaily.com/releases/2017/10/171005141732.htm

Saturday, October 15, 2016

(Really) great body-enhanced pronunciation teaching

If you are interested in using gesture more effectively in your teaching, a new 2016 study by Nguyen, A micro-analysis of embodiments and speech in the pronunciation instruction of one ESL teacher, is well worth reading. The study is, by design, wisely focused more on what the instructor does with her voice and body during instruction than on student learning, uptake, or in-class engagement.

The literature review establishes reasonably well the connection between the gestures described in the study and enhanced student learning of language and pronunciation. I can hardly imagine a better model of integrated gestural use in pronunciation teaching . . . The instructor is a superb performer, as are many who love teaching pronunciation. (Full disclosure: from the photos in the article I recognize the instructor, a master teacher with decades of experience in the field teaching speaking and pronunciation.)

From my own decades of work with gesture, one of the most consistent predictors of effective use of gesture in teaching is how comfortable the instructor feels "dancing" in front of the students and getting them to move along with her. The research on body image, identity, and embodiment is unequivocal on that: to move others, literally and figuratively, you must be comfortable with your own body and its representation in public.

Knowing this instructor, I do not need to see the video data to understand how her personal presence could command learner attention and (sympathetic, non-conscious) body movement, or her ability to establish and maintain rapport in the classroom. Likewise, I have not the slightest doubt that the students' experience and learning in that milieu are excellent, if not extraordinary.

The report is a fascinating read, illustrating the use of various gestures and techniques, including body synchronization with rhythm and stress, and beat gestures associated with stress patterning. If you can "move" like that model, you've got it. When it comes to this kind of instruction, however, the "klutzes" are clearly in the majority, probably for a number of reasons.

The one popular technique described, using the stretching of rubber bands to identify stressed or lengthened vowels, is often effective--at least for presenting the concept. It is marginally haptic, in fact, using both movement and some tactile anchoring in the process (the feeling of the rubber band pressing differentially on the insides of the thumbs). In teacher training I sometimes use that technique to visually illustrate what happens to stressed vowels, or to those occurring before voiced consonants in general. There is no study that I am aware of, however, that demonstrates carryover of "rubber banding" to changes in spontaneous speech, or even better memory for the specific stressed syllables in the words presented in class. I'd be surprised to find one, in fact.

In part the reason for that, again well established in research on touch, is that the brain is not very good at remembering degrees of pressure. Likewise, clapping on all the syllables of a word, or tapping on a desk just a bit harder on the stressed syllable, should not, in principle, be all that effective. That observation was, in fact, one of the early motivations for developing the haptic pronunciation teaching system. By contrast, isolated touch, usually at different locations on the body, seems to work much better to create differentiated memory for stress assignment. (All haptic techniques are based on that assumption.)

I myself taught like the model in the research for decades, using primarily visual-kinesthetic modeling and some student body engagement to teach pronunciation. The problem was trying to train new teachers to do that effectively. For a while I tried turning trainees into (somewhat) flamboyant performers like myself. I gave up on that project about 15 years ago and began figuring out how to use gesture effectively even if you, yourself, are not all that comfortable with doing it--a functional . . . klutz.

The key to effective gesture work is ultimately that the learner's body must be brought to move, both in response to the instructor's presentation and in independent practice, perhaps as homework. (Lessac's dictum: Train the body first!) Great performers accomplish that naturally, at least in presenting the concepts. The haptic video teaching system is there for those who are nearly totally averse to drawing attention to their bodies up front, but, in general, managed gesture is very doable. There are a number of (competing) systems today that do that. See the new haptic pronunciation teaching certificate, if interested in the most "moving and touching" approach.

Citation:
Nguyen, M.-H. (2016). A micro-analysis of embodiments and speech in the pronunciation instruction of one ESL teacher. Issues in Applied Linguistics. Retrieved from http://escholarship.org/uc/item/993425h1

Tuesday, August 9, 2016

Haptic dance, Aikido and the future of language (and pronunciation) teaching

For an intriguing glance at what future language (or pronunciation) teaching may "look" like, check out the following "haptic dances" by media artist Daniel Landau: Exploratory Dance 1.1 and Motor Imagery Dance. Computer-mediated mirroring will be key: not just culturally appropriate body movement and gestures, for example, but gesture-synchronized speech as well (not unlike what we see in haptic pronunciation teaching, not surprisingly!)

Another piece, or theoretical model, of what that process will involve is evident in an article in Gesture focusing on the full-body, dialogic "dance" between opponents in Aikido, "The coordination of moves in Aikido interaction." Lefebvre's 2016 framework, developed by examining the interplay involved, was created with the goal of better characterizing the way entire bodies communicate with each other: the intricate synchrony of moves and counters that characterizes all "conversation."

Aikido embodies the moves of one's opponent, in part by almost subliminally synchronizing the body to the motion coming at you. Bouts are often "won" by simply redirecting or escorting one's opponent to the floor or out of the ring. (That was, by the way, my wife's basic approach to dealing with first graders: you cannot possibly just block them or stop them, but you can almost always deftly redirect their energy and motion more to your purposes!)

What those two systems provide us with, in part, is the beginnings of a framework for designing methodologies that (literally) embody language models, including technology that "manages" articulation as well. There have been, for quite some time, haptic systems that assist patients with various articulatory conditions, guiding the vocal apparatus in producing more "normal" speech patterns.

Embodied, computer-mediated language learning, something analogous to the Aikido experience, will provide learners with a way to (safely and completely) give themselves over to the "dance" as they are guided to speak and move with models, and ultimately be able to adopt and use the energies, words, and moves of the L2 themselves--faster and more efficiently.

This is one dance you'll not want to miss! In the meantime, of course, you might prepare by doing some Aikido--and Haptic Pronunciation Teaching!

Full citations:
Daniel Landau - http://www.daniel-landau.com/about
Lefebvre, A. (2016). The coordination of moves in Aikido interaction. Gesture, 15(2), 123-155.