Tuesday, May 26, 2020

The sound of gesture: Ending of gesture use in language (and pronunciation) teaching

Quick reminder:  Only one week to sign up for the next haptic pronunciation teaching webinars! 

Sometimes getting a rise (rising pitch) out of students is the answer . . . This is one of those studies where a number of miscellaneous pieces of a puzzle momentarily seem to come together for you. The research, by Pouw and colleagues at the Donders Institute, “Acoustic information about upper limb movement in voicing”, summarized by Neurosciencenews.com, is, well . . . useful.

In essence, what they "found" was that at or around the terminal point of a gesture, where the movement stops, the pitch of the voice goes up slightly (for a number of physiological reasons). Subjects, with eyes closed, could still in many cases identify the gesture being used, based on the parameters of the pitch change that accompanied the nonsense words. The summary, however, is what is fun and actually helpful.

From the summary:

"These findings go against the assumption that gestures basically only serve to depict or point out something. It contributes to the understanding that there is a closer relationship between spoken language and gestures. Hand gestures may have been created to support the voice, to emphasize words, for example."

Although the way the conclusion is framed might suggest that the researchers missed roughly three decades of extensive research on the function of gesture, from both theoretical and pedagogical perspectives, it certainly works for me--and for all of us who work with haptic pronunciation teaching. That describes, at least in part, what we do: ". . . Hand gestures . . . created to support the voice, to emphasize words, for example." Now we have even more science to back us up! (Go take a look at the demonstration videos on www.actonhaptic.com, if you haven't before.)

What can I say? I'll just stop right there. Anything more would be but an empty gesture . . .

Source:
“Acoustic information about upper limb movement in voicing”, by Wim Pouw, Alexandra Paxton, Steven J. Harrison, and James A. Dixon. PNAS. doi:10.1073/pnas.2004163117

Monday, May 18, 2020

Cognitive Restructuring of Pronunci-o-phobia - (and Alexa-phobia): Hear, hear! (Just don't peek!)

Caveat emptor: If you are emotionally co-dependent on Alexa, you might want to "ALEXA, STOP ME!" at this point. We love you, but you are lost . . .

A new study by "a team of researchers at Penn State" (summarized by ScienceDaily.com), Anxious about public speaking? Your smart speaker could help, explored the idea of using ALEXA to help you "cognitively restructure" your public speaking anxiety. Actually, what they did was to compare two different ALEXAs, a more social one with a less social one, in talking you through/out of some of your pre-speech public speaking anxiety. (Fasten your seat belt . . . ) Subjects who engaged with the former felt less stressed at the prospect of giving a speech. From the researchers' summary:

"People are not simply anthropomorphizing the machine, but are responding to increased sociability by feeling a sense of closeness with the machine, which is associated with lowered speech anxiety . . . Alexa is one of those things that lives in our homes . . . As such, it occupies a somewhat intimate space in our lives. It's often a conversation partner, so why not use it for other things rather than just answering factual questions?"

Houston, we have a problem. Several, in fact. For instance, if ALEXA can do that, imagine what a real person online, audio only, could accomplish! Forget Zoom and SKYPE! I'd predict that that alone may account for some, if not a great deal, of the reduction in anxiety. In that condition, a real person might be exponentially more effective . . . worth checking on, I'd think. In addition, from the brief report we get no indication as to what ALEXA actually said, only that "she" was more socially engaging in one condition than the other.

What it does suggest, however, is that we should be able to use the same general strategy in dealing with the well-researched anxiety of instructors and students toward pronunciation work. The impact of a person facing you as you try to modify your pronunciation is considerable. Many learners literally have to close their eyes to repeat a phrase with a different articulation--or at least defocus their eyes momentarily. That is an especially critical dimension of haptic and general gesture techniques in pronunciation teaching.

This idea is explored in Webinar II of the upcoming Haptic Teaching Webinars I and II, June 5th and 6th. Please join us! (Contact info@actonhaptic.com to reserve your place!)

And if you'd like to continue this discussion, give me a call . . . Keep in Touch!

Source:
Penn State. (2020, April 25). Anxious about public speaking? Your smart speaker could help. ScienceDaily. Retrieved May 18, 2020 from www.sciencedaily.com/releases/2020/04/200425094114.htm

Saturday, May 2, 2020

Killing pronunciation 12: Memory for new pronunciation: Better heard (or felt) but not seen!

Another in our series of practices that undermine effective pronunciation instruction!

(Maybe) bad news from visual neuroscience: You may have to dump those IPA charts, multi-colored vowel charts, technicolor x-rays of the inside of the mouth, dancing avatars--and even haptic vowel clocks! Well . . . actually, it may be better to think of those visual gadgets as something you use briefly in introducing sounds, for example, but then dispose of or conceptually background as quickly as possible.

A new study by Davis et al. at the University of Connecticut, Making It Harder to “See” Meaning: The More You See Something, the More Its Conceptual Representation Is Susceptible to Visual Interference, summarized by Neurosciencenews.com, suggests that visual schemas of vowel sounds, for example, could be counterproductive--unless, of course, you close your eyes . . . but then you can't see the chart in front of you.

Subjects were basically confronted with a task in which they had to try to recall a visual image, physical sensation, or sound while being presented with visual activity or images in their immediate visual field. The visual "clutter" interfered substantially with their ability to recall the other visual "object" or image, but it did not impact their recall of other sensory "images" (auditory, tactile, or kinesthetic), such as non-visual concepts like volume, heat, or energy.

We have had blogposts in the past that looked at research showing how difficult it is to "change the channel": if a student is mispronouncing a sound, many times just trying to repeat the correct sound instead, without introducing a new sensory or movement set to accompany the new sound, is not effective. In other words, an "object" in one sensory modality is difficult to simply "replace"; you must work around it, in effect, attaching other sensory information to it (cf. multi-modal or multi-sensory instruction).

So, according to the research, what is the problem with a vowel chart? Basically this: the target sound may be primarily accessed through the visual image, depending on the learner's cognitive preferences. I only "know" or suspect that from years of tutoring and asking students to "talk aloud" through their strategies for remembering the pronunciation of new words. Overwhelmingly, they go by way of the orthographic representation, the "letter" itself, or its place in a vowel chart or listing of some kind. (Check that out yourself with your students.)

So . . . what's the problem? If your "trail of bread crumbs" back to a new sound in memory is through a visual image of some kind, then any clutter in your visual field that is the least bit distracting as you try to recall the sound will make you much less efficient, to put it mildly. That doesn't mean you can't teach using charts, etc., but you'd better be engaging more of the multisensory system when you do, or your learners' access to those sounds may be very inefficient, at best--or else downgrade the charts' importance in your method accordingly.

In our haptic work we have known for a decade that our learners are very susceptible to being distracted by things going on in their visual field that pull their attention away from experiencing the body movement and "vibrations" in targeted parts of their bodies. Good to see that "new-ol' science" is catching up with us!

I've got a feeling Davis et al are on to something there! I've also got a feeling that there are a few of you out there who may "see" some issues here that you are going to have to respond to!!!