Showing posts with label aural comprehension.

Wednesday, November 14, 2018

When "clear speech" is not clear . . . or meaningful, but still instructive.

Clker.com
Once in a while you stumble on a study that seems, at least at first, to fly in the face of contemporary theory and methodology. This one does: "How clear speech equates to clear memory: Researchers find that a speaker's clearly articulated style can improve a listener's memory of what was said," by researchers Keerstock and Smiljanic of the University of Texas at Austin.

Actually, the title, when read correctly, does get at the reality behind oral comprehension work: the type of "clear speech" used in the study SHOULD, in effect, "clear" memory, that is, leave nothing much of substance or meaning to be recalled later. The results seem to confirm that, in fact.

Let me summarize it for you so you don't have to read it yourself. There is an (ironically) useful piece to the study, albeit not what the researchers intended. They head in the right direction initially but land someplace else:
  • Subjects, natives and nonnatives, heard 6 sets of 12 sentences read either in " . . . 'clear' speech, in which the speaker talked slowly, articulating with great precision," or in "a more casual and speedily delivered 'conversational' manner." (Can't wait to see what controls they had in place in terms of every variable related to content and delivery!)
  • After hearing the 12 sentences they were given some "clues" for each sentence and then asked to write down verbatim the rest of the words in each sentence. (Since no data or protocols are provided, we must assume that the sentences were of reasonable length and vocabulary level, and as a group were probably not thematically related.) 
  • Everybody remembered more words in the "clear speech" condition. (Did the native or nonnative speakers understand the meaning better? Are the results based just on how many words were recalled? Hard to tell from the brief description of the study. A sketch of what that kind of word counting amounts to follows below.)
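
Since "how many words were recalled" appears to be the metric, here is roughly what verbatim recall scoring amounts to. A minimal sketch in Python, assuming (since no protocols are provided) that a response simply earns credit for each original word it reproduces exactly; the sentences, scoring rule, and function name here are hypothetical, not from the study:

# Minimal sketch of verbatim cued-recall scoring (assumed, not the study's actual protocol).
def recall_score(original, response):
    """Proportion of the original sentence's words found verbatim in the response."""
    original_words = original.lower().split()
    response_words = set(response.lower().split())
    hits = sum(1 for word in original_words if word in response_words)
    return hits / len(original_words)

# Hypothetical responses to one sentence heard in each condition:
heard = "the weary traveler finally reached the village"
print(recall_score(heard, "the weary traveler finally reached the village"))  # 1.0
print(recall_score(heard, "the traveler reached a village"))                  # ~0.71

Note that a metric like that counts surface forms only; it says nothing about whether the meaning was retained--which is exactly the question raised above.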
Their conclusion (from the ScienceDaily.com summary):

"That appears to be an efficient way of conveying information, not only because we can hear the words better but also because we can retain them better."

Wow. I don't even know where to begin on that . . . so I won't, but if you are not up to speed on current thinking in L2 aural comprehension work, check out Conti's blog on that topic. I will just note that the practice of doing a precise word-by-word oral reading--and then doing the same PASSAGE of, say, 200 words or more a second time in a highly expressive frame of voice and mind--has long standing in both public speaking and "Lectio Divina" traditions. It is a proven technique, a way both to prepare for an expressive oral reading and to dig into the meaning of the text. In haptic work, that practice is fundamental as well.

But the methodology of this study has to be one of the best ways to "clear memory" of meaning and motivation imaginable!

So . . . try . . . that . . . out . . . with . . . your . . . class . . . tomorrow . . . morning . . . and . . . see . . . how . . . it . . . works! And report back.

KIT

Don't forget to sign up for the upcoming Haptic Pronunciation Training Webinars!!!


Source: 
https://www.sciencedaily.com/releases/2018/11/181105200736.htm

Wednesday, February 14, 2018

Ferreting out good pronunciation: 25% in the eye of the hearer!

Clker.com
Something of an "eye-opening" study: Integration of Visual Information in Auditory Cortex Promotes Auditory Scene Analysis through Multisensory Binding, by Town, Wood, Jones, Maddox, Lee, and Bizley of University College London, published in Neuron. One of the implications of the study:

"Looking at someone when they're speaking doesn't just help us hear because of our ability to recognise lip movements – we've shown it's beneficial at a lower level than that, as the timing of the movements aligned with the timing of the sounds tells our auditory neurons which sounds to represent more strongly. If you're trying to pick someone's voice out of background noise, that could be really helpful," They go on to suggest that someone with hearing difficulties have their eyes tested as well.

I say "implications" because the research was actually carried out on ferrets, examining how sound and light combinations were processed by their auditory neurons in their auditory cortices. (We'll take their word that the ferret's wiring and ours are sufficiently alike there. . . )

The implications for language and pronunciation teaching are interesting, namely: strategic visual attention to the source of speech models and to participants in conversation may make a significant impact on comprehension and on learning how to articulate select sounds. In general, materials designers get it when it comes to creating vivid, even moving models. What is missing, however, is consistent, systematic, intentional manipulation of eye movement and fixation in the process. (There have been methods that dabbled in attempts at such explicit control--"Suggestopedia," for example.)

In haptic pronunciation teaching we generally control visual attention with gesture-synchronized speech, which highlights stressed elements in speech, and with something analogous for individual vowels and consonants. How much are your students really paying attention, visually? How much of your listening comprehension instruction is audio only, as opposed to video sourced? See what I mean?

Look. You can do better pronunciation work.


Citation (open access): Town, Wood, Jones, Maddox, Lee, and Bizley, Integration of Visual Information in Auditory Cortex Promotes Auditory Scene Analysis through Multisensory Binding, Neuron.

Sunday, November 12, 2017

OMG! Hand2hand combat in the classroom: Facing problems in (pronunciation) teaching

OMG! (other-managed gesture) is fundamental to effective, systematic use of gesture in any classroom, especially pronunciation teaching. And exactly how you "face" that issue may be critical. Two fascinating new studies may suggest how.

As a Sumo fan, haptician (practitioner of haptic pronunciation teaching) and veteran, I have always favored "H2H" (hand2hand combat) as a metaphor for ongoing interaction in the (pronunciation) classroom. Research by Mojtahedi, Fu and Santello of Arizona State University - Tempe highlights an important variable in such engagement, evident in the title: On the role of physical interaction on performance of object manipulation by dyads.

Clker.com
Two of their key findings: (a) subjects whose solo performance on the "physical" task was initially relatively low benefited from H2H training in dyads, while those coming in with higher skill did not; and (b) for those who did benefit, standing side-by-side to enable the dyadic work was superior to working F2F (face-to-face). The "assistive" task was manipulating a horseshoe-like object in space, following varied instructions, either together or separately--best done by "coming alongside" the other person.

Granted, there is a difference between two people holding on to a piece of metal and guiding it around together, cooperatively--and an instructor being mirrored in gesturing by students across the room, synchronized with speaking words and phrases. Research in mirror neurons in the brain, however, would suggest that the difference is far less than one might think. In a very real sense, if you are paying close attention, watching something being done is experienced and managed in the brain very much like doing it yourself.

Now hold that thought for a minute while we go on to the next, related study, How spatial navigation correlates with language, by Vukovic and Shtyrov at the HSE Centre for Cognition and Decision Making. In this study, subjects were first identified as to whether they were more "egocentric" or "allocentric" in their ability to grasp the perspective of another person, somewhat independent of their own position in space or time. (A concept somewhat analogous to field dependence/independence.)

What they discovered was that subjects who were (spatially) allocentric were also better at understanding oral instructions that required differing responses, depending on whether the subject pronoun of the description was 1st person singular or 3rd person. And, more importantly, the same areas of the brain were "lighting up"--that is, processing the problem--for both language and spatial navigation.

Now juxtapose that with the finding of the other research, which demonstrated that side-by-side (SxS), rather than face-to-face (F2F), "help" on the H2H task was more effective. F2F assistive engagement requires, in part, the transposing of the movement of the person facing you to the opposite side of your body, an operation that we discovered a decade ago in haptic pronunciation teaching to be exceedingly difficult for some instructors and students.

So what we have is a complex of factors affecting success in gesture work: (probably inherited) ego- or allocentric tendencies, which impact how well one can accommodate a model moving in front of you (taking on the same handedness, as opposed to mirror image), and the fact that some less skillful learners are assisted more effectively by a partner SxS rather than F2F.

In other words, both studies seem to be getting at the same underlying variable or issue for us: why some gestural work works and some doesn't. This is potentially an important finding for haptic pronunciation teaching or just use of gesture in teaching in general, one that should impact our "standing" in the classroom, where we locate ourselves relative to learners when we manage or conduct gesture.

Sometimes facing your problem is not the answer!


Sources:

Mojtahedi K, Fu Q and Santello M (2017) On the Role of Physical Interaction on Performance of Object Manipulation by Dyads. Front. Hum. Neurosci. 11:533. doi: 10.3389/fnhum.2017.00533

Nikola Vukovic et al, Cortical networks for reference-frame processing are shared by language and spatial navigation systems, NeuroImage (2017). DOI: 10.1016/j.neuroimage.2017.08.041

Saturday, August 30, 2014

Improve L2 pronunciation-- with or without lifting a finger!

Clip art: Clker
Listen to this! (You may even want to sit down before you do!) A new study by Mooney and colleagues at Duke University, summarized by Science Daily, shows how movement can affect listening. Here's the summary:

"When we want to listen carefully to someone, the first thing we do is stop talking. The second thing we do is stop moving altogether. The interplay between movement and hearing has a counterpart deep in the brain. A new study used optogenetics to reveal exactly how the motor cortex, which controls movement, can tweak the volume control in the auditory cortex, which interprets sound."

Now, granted, the study was done on mice who probably have some other stuff going on down there in their motor cortices as well. Nonetheless, the striking insight into the underlying relationship between movement and volume control on our auditory input circuits is enough to give us (an encouraging) "pause . . . " in two senses:

First, learning new pronunciation begins with aural comprehension, being able to "hear" the sound distinctions. We have played with the idea of having learners gesture along with instructor models while listening. The study suggests that may not be as effective as we thought, or at least the conditions that we set up have to be more sensitive to "volume" and ambient static. You can see the implications for aural comprehension work in general as well. 

Second, during early speaking production in haptic pronunciation instruction, being able to temporarily suppress auditory input (coming in through the ears) is seen as essential. Following Lessac and many others in speech and voice training, what we are after initially is focus on vocal resonance in the upper body and kinaesthetic awareness of the gestural patterns, what we call "pedagogical movement patterns" or PMPs.


We do that, in part, to dampen (i.e., turn down the volume on) how the learner's production is perceived initially--filtered through the L1 or personal interlanguage--trying to focus instead on the core of the sound(s): approximations, not absolute accuracy. Some estimates of our awareness of our own voice suggest that it is less than 25% auditory, that is, coming in through the air to our ears, with the rest being body-based, or somatic.

What we hear should be moving, not what we hear with, apparently!

Science Daily citation: Duke University. "Stop and listen: Study shows how movement affects hearing." ScienceDaily, 27 August 2014.

Thursday, November 28, 2013

Giving aural comprehension "a hand"-- in haptic pronunciation training

A common question we get is something to the effect of "How do the pedagogical gestures (PMPs--movement across the visual field terminating in touch on a stressed element of a word) work?" 2012 research by Turkeltaub and colleagues at Georgetown University, reported by Science Daily, suggests how that happens. In that study it was demonstrated that what you are doing with your hands may affect what you hear, or at least how quickly you hear it.

In essence, subjects were instructed to respond by touching a button, with either their right or left hand, when they detected a sound heavily embedded in background noise. Right-handed responses were better at detecting fast-changing sounds; left-handed, better at slow-changing ones. According to Turkeltaub, " . . . the left hemisphere likes rapidly changing sounds, such as consonants, and the right hemisphere likes slowly changing sounds, such as syllables or intonation . . . " Well, maybe . . .

The study at least further establishes the potential connection between haptic work and L2 sound change. In this case, when the learner performs a PMP, mirroring the model and listening to the model of the target sound--without overt speaking--anchoring should be enhanced and more efficient. Part of the reason for that, as reported in several previous posts, is that "fast" sounds tend to be in the right visual field (attached to the left hemisphere) and "slower" sounds, in the left.

AMPISys, Inc. 
In the EHIEP protocol for intonation, for example, the intonation contour or tone group begins in the left visual field, with the left hand moving to the right until it touches the right hand on the stressed syllable or focus word. (See the Intonation PMP demonstration linked off an earlier post.) In the vowel protocols, similar PMPs are involved, with the visual display reflecting the "fast and slow" phonaesthetic quality of the vowels. (See earlier post on that as well.)

Keep in touch! (v2.0 will be released next week!)

Friday, April 12, 2013

Face it . . . Your pronunciation could look better!

According to research by Lander and Caper at the University of Lancaster, a little more lipstick and work on your speech style may be in order. (Watched yourself on video lately as you ask a student to "look at my mouth" while you provide a model?) Their study demonstrated unequivocally that, if your listeners can see you, their ability to understand you can be enhanced considerably with a little tweaking. One feature that made words more easily understood, not surprisingly, was backing off from conversational style toward more declarative articulation, especially in the presence of potentially disruptive background noise. In addition, although other movement of facial muscles does play a supporting role or is synchronized with mouth and lip movement, it was the mouth that primarily carried the functional load.

Clip art: Clker
This is a particularly interesting problem in haptic work, in part because the eyes of the student are naturally drawn to the hand and arm movements. Consequently, you must be a bit more conscientious about how you articulate a model word, for example, as you do the corresponding pedagogical movement pattern, to be sure that students can read your lip patterning as well. Record some of your work, turn off the sound, and spend a little time trying to figure out what you were saying . . .

Obviously nothing to just "pay lip service to!" 


Citation: Investigating the impact of lip visibility and talking style on speechreading performance - http://dx.doi.org/10.1016/j.specom.2013.01.003

Tuesday, March 12, 2013

Vigilance decrement during pronunciation work?

Clip art: Clker
I knew there had to be a scientific term for why students occasionally lose interest in pronunciation work . . . and a cure! The term comes up in yet another study that discovered that gum chewing can be good for things "cognitive." In this case, in the study by Morgan, Johnson and Miles of Cardiff University, summarized by Science Daily, it was found that "Gummies" were able to persist longer on an audio recognition task than the "Gum-less." The Gum-less started out stronger but were overtaken and passed by the Gummies near the end. And the reason that the Gummies did better? They were more immune to "vigilance decrement" during the task. I have yet to read a cogent explanation as to WHY gum works the way it does. (If you know of that research, please link it here.)

Because of surgery a few years ago that cut out a saliva gland, I have to chew gum to function effectively. I had never done gum before and very much dislike it now, but I do have some "haptic" felt sense of what they are talking about, how it combats "vigilance decrementia." It at least gives me something to do during interminable harangues at faculty meetings.

My guess, however, is that it has something to do with keeping the wiring that goes from the brain to the articulatory equipment energized, in effect working in the opposite direction, very much like haptic technology drives feedback back to the brain through the hands. Not sure I'm in for having students do gum during work that is basically oral production-oriented, but next time your class has to just sit and do nothing but listen, give it a try. "Gum up the work a bit, eh!"


Journal reference (compliments of Science Daily): Kate Morgan, Andrew J. Johnson and Christopher Miles. Chewing gum moderates the vigilance decrement. British Journal of Psychology, 8 March 2013.

Saturday, February 16, 2013

Sing first, listen later: Noticing new or different sounds in L2 pronunciation learning

Here's one for all of us who make extensive use of singing in class. (Here is yet another case where experienced practitioners know from experience that it works but have been just waiting for research to catch up and tell them why!) Research by McLachlan, Marco, Light and Wilson at the Melbourne School of Psychological Sciences, summarized, as usual, by Science Daily, notes the following:
Clip art: Clker
 "What we found was that people needed to be familiar with sounds created by combinations of notes before they could hear the individual notes. If they couldn't find the notes they found the sound dissonant or unpleasant . . . This finding overturns centuries of theories that physical properties of the ear determine what we find appealing."
 
In other words, at some very basic level, appreciation of a style of music is learned. The "notes" in the study had to be first encountered in relation to others in the system before they could be identified or appreciated. Singing in language instruction--and probably, to a lesser degree, listening comprehension techniques with pronunciation--certainly serves that function. This is an important study, one with very interesting potential ramifications for our work. I will try to get the full research report and report back . . .

Notice: Here is my annual apology for sometimes using less than reliable or politically neutral secondary sources, such as Science Daily or The New York Times, or research abstracts from studies that receive public support but are published in journals you can't access without being a member of "The Guild"--or can't afford at $32 per article (or wouldn't pay for on ethical grounds even if you did have the spare change lying around): Sorry about that. (There. Done.)


Saturday, October 29, 2011

Disembodied Pronunciation done well--by Rosetta Stone

Although there appears to be no readily accessible published research on the efficacy of the widely promoted language teaching program Rosetta Stone, if the testimonials on the website are to be believed, it certainly works for at least some learners. Having reviewed the Korean and ESL programs, I am struck by how well it does a little--a good business model.

Image: Rosetta stone.com
For the visual-auditory, literate learner (see previous posts on the myth of learning styles, however!) for whom pronunciation will not be much of an issue, it offers reasonably good, low-cost, individualized access to at least functional vocabulary and structure. In fact, that it does not do active pronunciation instruction (other than repetition with some feedback) may almost be a plus, as opposed to presenting pronunciation even more "disembodied," as is the case with many current computer-based systems.

A haptic interface could certainly be developed to use with it. I suggested that when I talked to one of the designers a couple of years ago. Basically, he confidently informed me that, according to Krashen and most experts in the field, comprehensible input and aural comprehension were generally sufficient for developing acceptable pronunciation--and for selling the product. Well . . . duh. He was at least half right. And besides, recall that the pronunciation of the original Rosetta Stone took over three decades to figure out . . .