Showing posts with label auditory. Show all posts

Wednesday, February 14, 2018

Ferreting out good pronunciation: 25% in the eye of the hearer!

Something of an "eye opening" study: Integration of Visual Information in Auditory Cortex Promotes Auditory Scene Analysis through Multisensory Binding, by Town, Wood, Jones, Maddox, Lee, and Bizley of University College London, published in Neuron. One of the implications of the study:

"Looking at someone when they're speaking doesn't just help us hear because of our ability to recognise lip movements – we've shown it's beneficial at a lower level than that, as the timing of the movements aligned with the timing of the sounds tells our auditory neurons which sounds to represent more strongly. If you're trying to pick someone's voice out of background noise, that could be really helpful," They go on to suggest that someone with hearing difficulties have their eyes tested as well.

I say "implications" because the research was actually carried out on ferrets, examining how sound and light combinations were processed by their auditory neurons in their auditory cortices. (We'll take their word that the ferret's wiring and ours are sufficiently alike there. . . )

The implications for language and pronunciation teaching are interesting, namely: strategic visual attention to the source of speech models and participants in conversation may make a significant impact on comprehension and on learning how to articulate select sounds. In general, materials designers get it when it comes to creating vivid, even moving models. What is missing, however, is consistent, systematic, intentional manipulation of eye movement and fixation in the process. (A few methods, such as Suggestopedia, have dabbled in attempts at such explicit control.)

In haptic pronunciation teaching we generally control visual attention with gesture-synchronized speech which highlights stressed elements in speech, and something analogous with individual vowels and consonants. How much are your students really paying attention, visually? How much of your listening comprehension instruction is audio only, as opposed to video sourced? See what I mean?

Look. You can do better pronunciation work.


Citation: (Open access)









Saturday, October 14, 2017

Empathy for strangers: better heard and not seen? (and other teachable moments)

The technique of closing one's eyes to concentrate has both everyday sense and empirical research support. For many, it is common practice in pronunciation and listening comprehension instruction. Several studies of the practice under various conditions have been reported here in the past. A nice 2017 study by Kraus of Yale University, Voice-only communication enhances empathic accuracy, examines the effect from several perspectives.
What the research establishes is that perception of the emotion encoded in the voice of a stranger is more accurately determined with eyes closed, as opposed to just looking at the video or watching the video with sound on. (Note: The researcher concedes in the conclusion that the same effect might not be as pronounced were one listening to the voice of someone we are familiar or intimate with, or were the same experiments to be carried out in some culture other than "North American".) In the study there is no unpacking of just which features of the strangers' speech are being attended to, whether linguistic or paralinguistic, the focus being:
 . . . paradoxically that understanding others’ mental states and emotions relies less on the amount of information provided, and more on the extent that people attend to the information being vocalized in interactions with others.
The targeted effect is statistically significant, well established. The question is, to paraphrase Gregory Bateson, does this "difference that makes a difference" make a difference--especially to language and pronunciation teaching?
How can we use that insight pedagogically? First, of course, is the question of how MUCH better the closed-eyes condition will be in the classroom, and even if it is better initially, whether it will hold up with repeated listening to the voice sample or conversation. Second, in real life, when do we employ that strategy, either on purpose or by accident? Third, there was a time when radio or audio drama was a staple of popular media and instruction. In our contemporary visual media culture, as reflected in the previous blog post, the appeal of video/multimedia sources is near irresistible. But, maybe still worth resisting?
Especially with certain learners and classes, in classrooms where multi-sensory distraction is a real problem, I have over the years worked successfully with explicit control of visual/auditory attention in teaching listening comprehension and pronunciation. (It is prescribed in certain phases of haptic pronunciation teaching.) My sense is that the "stranger" study is actually tapping into comprehension of new material or ideas, not simply new people/relationships and emotion. Stranger things have happened, eh!
If this is a new concept to you in your teaching, close your eyes and visualize just how you could employ it next week. Start with little bits, for example when you have a spot in a passage of a listening exercise that is expressively very complex or intense. For many, it will be an eye opening experience, I promise!

Source:
Kraus, M. (2017). Voice-only communication enhances empathic accuracy. American Psychologist, 72(6), 344-654.



Sunday, August 20, 2017

Good listening (and pronunciation teaching) is in the EYE of the beholder (not just the ear)!

Here is some research well worth gazing at and listening to by Pomper and Chait of University College London: The impact of visual gaze direction on auditory object tracking, summarized by Neurosciencenews.com:

In the study, subjects "sat facing three loudspeakers arranged in front of them in a darkened, soundproof room. They were instructed to follow sounds from one of the loudspeakers while ignoring sounds from the other two loudspeakers . . . instructed to look away from the attended loudspeaker" in an aural comprehension task. What they found was that ". . . participants' reaction times were slower when they were instructed to look away from the attended loudspeaker . . . this was also accompanied by an increase in oscillatory neural activity . . ."

Look . . . I realize that the connection to (haptic) pronunciation teaching may not be immediately obvious, but it is potentially significant. For example, we know from several research studies (e.g., Molloy et al. 2015) that visual tends to override or "trump" audio--in "head to head" competition in the brain. In addition, auditory generally trumps kinesthetic, but the two together may override visual in some contexts. Touch seems to be able to complement the strength or impact of the other three or serve to unite or integrate them in various ways. (See the two or three dozen earlier blog posts on those and related issues.)

In this study, you have three competing auditory sources with the eyes tracking to one as opposed to the others. Being done in a dark room probably helped to mitigate the effect of other possible visual distraction. It is not uncommon at all for a student to choose to close her eyes when listening or to look away from a speaker (a person, not an audio loudspeaker as in the study). So this is not about simply paying attention visually. It has more to do with the eyes either being focused or NOT.

Had the researchers asked subjects to gaze at their navels--or any other specific object--the results might have been very different. In my view the study's validity is questionable on those grounds alone, but it is still interesting in that subjects' gaze was fixed at all. Likewise, there should have been a control group that did the same protocols with the lights on, etc. In effect, to tell subjects to look away was equivalent to having them try to ignore the target sound and attend to it at the same time. No wonder there was " . . . an increase in oscillatory neural activity"! Really!

In other words, the EYEs have it--the ability to radically focus attention, in this case to sound, but to images as well. That is, in effect, the basis of most hypnosis and good public speaking, and well-established in brain research. In haptic pronunciation teaching, the pedagogical movement patterns by the instructor alone should capture the eyes of the students temporarily, linking back to earlier student experience or orientation to those patterns. 

So try this: Have students fix their eyes on something reasonable or relevant, like a picture, or neutral, like an area on the wall in front of them--and not look away--during a listening task. Their eyes should not wander, at least not much. Don't do it for a very long period of time, maybe 30 seconds max at the start. You should explain this research to them so they understand why you are doing it. (As often as I hammer popular "Near-ol'-science", this is one case where I think the general findings of the research are useful and help to explain a very common sense experience.)

 I have been using some form of this technique for years; it is basic to haptic work except we do not specifically call attention to the eye tracking since the gestural work naturally accomplishes that to some degree. (If you have, too, let us know!)

This is particularly effective if you work in a teaching environment that has a lot of ambient noise in the background. You can also, of course, add music or white noise to help cancel out competing noise or maybe even turn down the lights, too, as in the research. See what I mean?

Good listening to you!

References:
UCL (2017, July 5). Gaze Direction Affects Sound Sensitivity. NeuroscienceNews. Retrieved July 5, 2017 from http://neurosciencenews.com/sound-sensitivity-gaze-direction-7029/
Molloy, K., Griffiths, T., Chait, M., and Lavie, N. (2015). Inattentional Deafness: Visual Load Leads to Time-Specific Suppression of Auditory Evoked Responses. Journal of Neuroscience, 35(49), 16046. DOI: 10.1523/JNEUROSCI.2931-15.2015





Sunday, August 28, 2016

Great pronunciation teaching? (The "eyes" have it!)

Attention! Внимание!

Seeing the connection between two new studies--one on the use of gesture by trial lawyers in concluding arguments, and one on how a "visual nudge" can seriously disrupt our ability to describe recalled visual properties of common objects--and their relevance to pronunciation teaching may seem a bit of a stretch, but the implications for instruction, especially systematic use of gesture in the classroom, are fascinating.

The bottom line: what the eyes are doing during pronunciation work can be critical, at least to efficient learning. I have done dozens of posts over the years on the role or impact of visual modality on pronunciation work; this adds a new perspective.

The first, by Edmiston and Lupyan of the University of Wisconsin-Madison, Visual interference disrupts visual knowledge, summarized by ScienceDaily:

"Many people, when they try to remember what someone or something looks like, stare off into space or onto a blank wall," says Lupyan. "These results provide a hint of why we might do this: By minimizing irrelevant visual information, we free our perceptual system to help us remember."

The "why" was essentially that visual distraction during recall (and conversely in learning, we assume), could undermine ability to describe visual properties of even common well-known objects, such as the color of a flower. That is a striking finding, countering the prevailing wisdom that such properties are stored in the brain more abstractly, not so closely tied to objects themselves in recall.

Study #2: Matoesian and Gilbert of the University of Illinois at Chicago, in an article published in Gesture entitled, Multifunctionality of hand gestures and material conduct during closing argument. The research looked at the potential contribution of gesture to the essential message and impact of the concluding argument to the jury. Not surprisingly, it was evident that the jury's visual attention to the "performance" could easily be decisive in whether the attorney's position came across as credible and persuasive. From the abstract:

This work demonstrates the role of multi-modal and material action in concert with speech and how an attorney employs hand movements, material objects, and speech to reinforce significant points of evidence for the jury. More theoretically, we demonstrate how beat gestures and material objects synchronize with speech to not only accentuate rhythm and foreground points of evidential significance but, at certain moments, invoke semantic imagery as well. 

The last point is key. Combine that insight with the "nudge" study. It doesn't take much to interfere with "getting" new visual/auditory/kinesthetic/tactile input. The dominance of visual over the other modalities is well established, especially when it comes to haptic (movement plus touch). These two studies add an important piece: that random VISUAL input, itself, can seriously interfere with targeted visual constructs or imagery as well. In other words, what your students LOOK at and how effective their attention is during pronunciation work can make a difference--an enormous difference, as we have discovered in haptic pronunciation teaching.

Whether learners are attempting to connect the new sound to the script in the book or on the board, or are attempting to use a visually created or recalled script (which we often initiate in instruction), or are mirroring or coordinating their body movement/gesture with the pronunciation of a text of some size, the "main" effect is still there: what is at that time in their visual field in front of them, or in the visual space created in their brain, may strongly dictate how well things are integrated--and recalled later. (For a time I experimented with various systems of eye-tracking control myself, but could not figure out how to develop that effectively--and safely. Emerging technologies, however, offer us a new "look" at that methodology in several fields today.)

So, how do we appropriately manage "the eyes" in pronunciation instruction? Gestural work helps to some extent, but it requires more than that. I suspect that virtual reality pronunciation teaching systems will solve more of the problem. In the meantime, just as a point of departure and in the spirit of the earlier, relatively far out "suggestion-based" teaching methods, such as Suggestopedia, assume that you are responsible for everything that goes on during a pronunciation intervention (or interdiction, as we call it) in the classroom. (See even my 1997 "suggestions" in that regard as well!)

Now I mean . . . everything, which may even include temporarily suspending extreme notions of learner autonomy and metacognitive engagement . . .

See what I mean?

Sources: 
Matoesian, G. and Gilbert, K. (2016). Multifunctionality of hand gestures and material conduct during closing argument. Gesture, 15(1), 79-114.
Edmiston, P. and Lupyan, G. (2017). Visual interference disrupts visual knowledge. Journal of Memory and Language, 92, 281. DOI: 10.1016/j.jml.2016.07.002

Friday, January 1, 2016

3D pronunciation instruction: Ignore the other 3 quintuplets for the moment!

For a fascinating look at what the field may feel like, turn to a somewhat unlikely source: a 2015 book, 3D Cinema: Optical Illusions and Tactile Experience, by Ross, which provides a (phenomenal) look at how and why contemporary 3D special effects succeed in conveying the "sensation of touch". In other words, as is so strikingly done in the new Star Wars epic, the technology tricks your brain into thinking that you are not only there flying that star fighter but that you can feel the ride throughout your hands and body as well.

This effect is not just tied in to current gimmicks, such as moving and vibrating theater seats or spray mist blown on you, or various odors and aromas being piped in, although it can be. Your mirror neurons respond more as if it is you who is doing the flying, that you are (literally) "in touch" with the actor. The neurological interconnectedness between the senses (or modalities) provides the bridge to a greater and greater sense of the real, or at least a very "close encounter."

How does the experience in a good 3D movie compare to your best multi-sensory events or teachable moments in the classroom, focusing on pronunciation? 

It is easy to see, in principle, the potential for language teaching: creating one vivid teachable moment after another, "Wowing!" the brain of the learner with multi-sensory, multi-modal experience. As noted in earlier blogposts on haptic cinema, based in part on Marks (2002), that concept--"the more multi-sensory, the better," the idea that by stimulating more of the learner's (whole) brain virtually anything is teachable--is implicit in much of education and entertainment.

Although earlier euphoria has moderated, one reason it can still sound so convincing is our common experience of remembering the minutest detail from a deeply moving or captivating event or presentation. We all have had the experience of being present at a poetry reading or great speech where it was as if all our senses were alive, on overdrive. We could almost taste the peaches; we could almost smell the gun powder.

Part of the point of 3D cinema is that it becomes SO engaging that our tactile awareness is also heightened enormously. As that happens, the associated connections to other modalities are "fired" as well. We experience the event more and more holistically. How that integration happens exactly can probably be described informally as something like: audio-visual-cognitive-affective-kinaesthetic-tactile-olfactory and "6th sense!" experienced simultaneously.

At that point, apparently, the brain is multitasking at such high speed that everything is perceived as "there" all at once. And that is the key notion. That would seem to imply that if all senses are strongly activated and recording "data", then what came in on each sensory circuit will later still be equally retrievable. Not necessarily. As extensive research and countless commercially available systems have long established, for acquisition of vocabulary, pragmatics, reading skills and aural comprehension, the possibilities of rich multi-sensory instruction seem limitless at this point.

Media can provide memorable context and secondary support, but why that often does not work as well for learning some other skills, including pronunciation, is still something of a mystery. (Caveat emptor: I am just completing a month-long "tour of duty" with seven young grandchildren . . . ) In essence, our sensory modalities are not unlike infant octuplets, competing for our attention and storage space. Although it is "possible" to attend to a few at once, it is simply not efficient. Best case, you can do maybe two at a time, one on each knee.

The analogy is more than apt. In a truly "3D" lesson, consistent with Ross (2015), whether f2f or in media, the 5 primary "senses" of pronunciation instruction (visual, auditory, kinaesthetic, tactile and meta-cognitive) are all nearly equally competitive, that is, vividly present in the lesson, overwhelmingly so. Tactile/kinaesthetic can be unusually prominent, accessible, in part, as noted in earlier blogposts, because it serves to "bind together" the other senses. In that context, consciously attending to any two or three simultaneously is feasible.

So how can we exploit such a vivid, holistically experienced, 3D-like milieu, where movement and touch figure in more prominently? I thought you'd never ask! Because of the essentially physical, somatic experience of pronunciation--and this is critical, from our experience and field testing--two of the three MUST be kinaesthetic and tactile--a basic principle of haptic pronunciation teaching. (Take your pick of the other three!)

Consider "haptic" simply an essential "add on" to your current basic three (visual, auditory and meta-cognitive), and "do haptic" along with one or two of the other three. The standard haptic line-of march:

A. Visual-Meta-cognitive (very brief explanation of what, plus symbol, or key word/phrase)
B. Haptic-Meta-cognitive (movement and touch with spoken symbol name or key word/phrase, typically 3x)
C. Haptic-Auditory (movement and touch, plus basic sound, if the target is a vowel or consonant temporarily in isolation, or target word/phrase, typically 3x)
D. Haptic-Visual-Auditory (movement and touch, plus contextualized word or phrase, spoken with strong resonance, typically 3x)
E. Some type of written note made for further reference or practice
F. (Outside of class practice, for a fixed period of up to 2 weeks follows much the same pattern.)

Try to capture the learner's complete (whole body/mind) attention for just 3 seconds per repetition--if possible! Not only can that temporarily let you pull apart the various dimensions of the phonemic target for attention, but it can also serve to create a much more engaging (near 3D) holistic experience out of a potentially "senseless" presentation in the first place--with "haptic" in the mix from the outset.

Happy New Year!

Keep in touch.

Citation:
Ross, M. (2015). 3D Cinema: Optical Illusions and Tactile Experiences. London: Springer. ISBN: 978-1-349-47833-0 (Print), 978-1-137-37857-6 (Online)



Monday, December 14, 2015

Can't see teaching (or learning) pronunciation? Good idea!

A common strategy of many learners when attempting to "get" the sound of a word is to close their eyes. Does that work for you? My guess is that those are highly visual learners who can be more easily distracted. Being more auditory-kinesthetic and somewhat color "insensitive" myself, I'm instead more vulnerable to random background sounds, movement or vibration. Research by Molloy et al. (2015), summarized by Science Daily (full citation below) helps to explain why that happens.

In a study of what they term "inattentional deafness," using MEG (magnetoencephalography), the researchers were able to identify in the brain both the place and point at which auditory and visual processing in effect "compete" for prominence. As has been reported more informally in several earlier posts, visual consistently trumps auditory, which accounts for the common life-ending experience of  having been  oblivious to the sound of screeching tires while crossing the street fixated on a smartphone screen . . . The same applies, by the way, for haptic perception as well--except in some cases where movement, touch, and auditory team up to override visual. 

The classic "audio-lingual" method of language and pronunciation teaching, which made extensive use of repetition and drill, relied on a wide range of visual aids and color schemas, often with the rationale of maintaining learner attention. Even the sterile, visual isolation of the language lab's individual booth may have been especially advantageous for some--but obviously not for everybody!

What that research "points to" (pardon the visual-kinesthetic metaphor) is more systematic control of attention (or inattention) to the visual field in teaching and learning pronunciation. Computer-mediated applications go to great lengths to manage attention but, ironically, forcing the learner's eyes to focus or concentrate on words and images, no matter how engaging, may, according to this research, also function to negate or at least lessen attention to the sounds and pronunciation. Hence, the intuitive response of many learners to shut their eyes when trying to capture or memorize sound. (There is, in fact, an "old" reading instruction system called the "Look up, say" method.)

The same underlying, temporary "inattentional deafness" also probably applies to the use of color associated with phonemes--or even the IPA system of symbols in representing phonemes. Although such visual systems do illustrate important relationships between visual schemas and sound that help learners understand the inventory of phonemes and their connection to letters and words in general, in the actual process of anchoring and committing pronunciation to memory they may in fact diminish the brain's ability to efficiently and effectively encode the sound and the movement used to create it.

The haptic (pronunciation teaching) answer is to focus more on movement, touch and sound, integrating those modalities with visual. The conscious focus is on gesture terminating in touch, accompanied by articulating the target word, sound or phrase simultaneously with resonant voice. In many sets of procedures (what we term protocols), learners are instructed to either close their eyes or focus intently on a point in the visual field as the sound, word or phrase to be committed to memory is spoken aloud.

The key, however, may be just how you manage those modalities, depending on your immediate objectives. If it is phonics, then connecting letters/graphemes to sounds with visual schemas makes perfect sense. If it is, on the other hand, anchoring or encoding pronunciation (and possibly recall as well), the guiding principle seems to be that sound is best heard (and experienced somatically, in the body) . . . but (to the extent possible) not seen!

See what I mean? (You heard it here!)

Full citation:
Molloy, K., Griffiths, T., Chait, M., and Lavie, N. (2015). Inattentional Deafness: Visual Load Leads to Time-Specific Suppression of Auditory Evoked Responses. Journal of Neuroscience, 35(49), 16046. DOI: 10.1523/JNEUROSCI.2931-15.2015

Saturday, February 7, 2015

Why haptic (pronunciation) teaching and learning should be superior!

Wow. How about this "multi-sensory" conclusion from Max-Planck-Gesellschaft researchers Mayer, Yildiz, Macedonia, and von Kriegstein, Visual and motor cortices differentially support the translation of foreign language words (full citation below), summarized by ScienceDaily:

"The motor system in the brain appears to be especially important: When someone not only hears vocabulary in a foreign language, but expresses it using gestures, they will be more likely to remember it. Also helpful, although to a slightly lesser extent, is learning with images that correspond to the word. Learning methods that involve several senses, and in particular those that use gestures, are therefore superior to those based only on listening or reading."

The basic "tools" of haptic pronunciation teaching, what we call "pedagogical movement patterns," are defined as follows:

As a word or phrase is visualized (visual) and spoken with resonant voice, a gesture moving across the visual field is performed which culminates in the hands touching on the stressed syllable of the word or phrase (cognitive/linguistic), as the sound of the word is experienced as articulatory muscle movement in the upper body and as vibrations in the body emanating from the vocal cords and (to some degree) sound waves returning to the ears (auditory).

And what bonds that all together? A 2009 study by Fredembach et al. demonstrated just how haptic anchoring--and the PMP--should work: in relative terms, the major contribution of touch may generally be exploratory, assembling multi-sensory experiences. The key is to do as much as possible to ensure that learners keep as many senses in play during "teachable moments" when new word-sound complexes are being encountered and learned.

Make sense? Keep in touch!

Citations:
Fredembach, B., Boisferon, A., & Gentaz, E. (2009). Learning of arbitrary association between visual and auditory novel stimuli in adults: The "Bond Effect" of haptic exploration. PLoS ONE, 4(3), 13-20.
Max-Planck-Gesellschaft. (2015, February 5). Learning with all the senses: Movement, images facilitate vocabulary learning. ScienceDaily. Retrieved February 7, 2015 from www.sciencedaily.com/releases/2015/02/150205123109.htm

Sunday, November 4, 2012

Anchoring pronunciation: Do you see what you are saying?


You can, in fact--if you are pronouncing a sound, word or phrase using EHIEP-like pedagogical movement patterns, PMPs (gestures across the visual field terminating in some form of touch by both hands). Not only CAN you, according to research by Xi and colleagues at Northwestern University, summarized by Science Daily, but your eyes strongly interpret for you the "feeling of how it happens." The visual "character" of the dynamic gesture (its positioning, fluidity, distance from the eyes and texture on contact with the other hand) may well override the actual tactile feedback from your hands and the proprioceptive "coordinates" of movement from your arms.

In the study, subjects were simultaneously presented with video clips that slightly contradicted what their hands and arms were doing. It was clearly demonstrated that even though subjects were also instructed to ignore the video and concentrate on the actual positioning, movement and related information about touch and weight coming from the hands, the "eyes have it." What they were seeing reinterpreted the other incoming sensory data.

As noted in earlier posts, visual can often override other modalities. What is "new" here, and what contributes to our understanding of how and why haptic integration works, is that the subjects' perception of the EHIEP sound-touch-movement "event" would appear to be strongly influenced by the style or flair or precision and consistency of the PMP. That has been one of the key problems in creating the video models: insufficient clarity and consistency in the execution of PMPs (by me!)

This is both good news and bad news. Good, in that the PMP is, indeed, potentially a very powerful anchor--and that the visual "feel" of each can contribute substantially to anchoring effectiveness. Bad, in that for maximal effectiveness the video/visual model needs to be exceedingly precise and consistent. (I have explored the use of avatars instead of me, but there are even bigger potential issues there.) Preparing/getting in shape now to do a new set of videos after the holidays, based on this and similar research. Can't wait to see what those feel like!

Sunday, September 25, 2011

Disembodied anchoring strategies in pronunciation work

There are good sources of recommendations on how to integrate pronunciation into classroom instruction. Here is a nice 2004 piece by Levis and Grant which covers the basic options. (I recommend reading it before continuing if you are not familiar with that general framework.) Note that after identifying those aspects of pronunciation that should be attended to and setting up teaching contexts, they identify several (mostly visual, aural/auditory, cognitive/noticing) anchoring strategies:

(a) pointing out errors or processes
(b) providing formal rules
(c) oral repetition/practice
(d) writing something down on the board or in notes
(e) student discussion or analysis
(f) impromptu oral comments linking current issue to earlier work

A further assumption is that the effects of relevant context, meaningful practice, communicative "cash value" and student initiative will do the rest. You'd think it would . . .

Previous blogposts here have explored in great detail why adding haptic-based anchoring is potentially so much more effective than traditional approaches alone, which, for the most part, either (1) stop short of guiding the learner to efficient "storage" options for new sounds (with explanations, demands to "notice", or assumptions that uptake is the learner's responsibility, not the instructor's), or (2) simply attempt to drill the changes into submission. In subsequent posts we will consider how to "hapticulate" or embody some of the strategies described by Levis and Grant.

Saturday, August 27, 2011

Haptic preferences of 5-12th graders (and adult learning style plasticity in pronunciation teaching)

This summary study found that "average" 5th-12th graders in the US, Hong Kong and Japan had a relatively balanced learning style profile, with a slight preference for haptic (37%), with auditory at 34% and visual at only 27%. Those results appear to contrast substantially with the "typical" adult learner, who tends to be biased in favor of visual, with auditory second and haptic a distant third. From that perspective, our goal should be to assist adult learners in developing a more balanced, multiple-modality-based learning style profile, more like the one they had in school. I am not sure about the applicability of the research, but I certainly like the results. On the face of it, however, that looks like an almost ideal mindset for pronunciation change, a good target for our research and instruction.

Tuesday, August 16, 2011

Mirroring, Tracking and Listening

M, T and L are basic tools of pronunciation teaching. It has been assumed for some time that tracking, that is having a learner speak along with a simple audio recording, is something of an overt form of what naturally goes on in the body in listening. There was earlier research that seemed to suggest that the vocal apparatus (mouth, vocal cords, etc.) moved along with the incoming speech at a subliminal level.
Turns out, according to this research by Menenti of the University of Glasgow, Hagoort of Radboud University, and Gierhan and Segaert of the Max Planck Institute, summarized by Science Daily, that general listening (without seeing the speaker "live," visually) does not necessarily involve such sympathetic "vibrations." In other words, the felt sense of listening in some contexts can be decidedly non-somatic or divorced from embodied attention.

That does not mean that tracking is not still a useful technique for assisting learners with the intonation of the language, but clearly the neuro-physiological rationale may be suspect. This raises several interesting questions related to the complex inter-relationships underlying listening, speaking and pronunciation skills--and how to teach them, especially in adults. The evidence that mirroring, on the other hand, engages the body is unequivocal. That certainly speaks to HICP/EHIEP--and to any pronunciation teaching practitioner who is listening . . .

Sunday, August 14, 2011

Remembering: Just close your eyes to block out that distracting sound in the background or pronunciation

Here is (an abstract only of) a study by Perfect, Andrade and Eagan (2011) which demonstrates that under certain conditions, closing the eyes effectively blocks out background/environmental auditory clutter (in this case). The technique seemed to enhance, or at least protect, both visual and auditory recall. Earlier posts have reported on research related to the impact of eye closure (or purely haptic, nonvisual engagement) on encoding, showing a parallel effect. It is as if once the eyes have done their part in establishing an object's properties or pattern, in some contexts or stages it may be better to carefully limit further, potentially distracting gaze.

Most disciplines deal with directed eye engagement in some form, especially in expert practice and performance. That can be done any number of ways, from full closure to fixed positions in the visual field. (I'm sure you have students who, themselves, use similar strategies at times.) Regardless, the implication for pronunciation instruction is intriguing: To get or recall the optimal auditory felt sense of a sound, simply cut out external auditory interference. That appears to be relatively easy . . . you can do it with your eyes closed!

Friday, August 5, 2011

Haptic coordination thresholds and caveats

This 2001 research by Kelso, Fink, DeLaplain and Carson may suggest an explanation for a frequent observation: in haptic-based work there seems to be a threshold of some sort. Once a learner puts it all together, the coordination of the visual, auditory, kinesthetic and tactile sensory input with the linguistic concept in focus, the learning process seems to go into overdrive, become much more efficient. Until that happens, however, it can be very frustrating for some.

One conclusion of the study was that haptic engagement, if not well synchronized with the other sensory inputs, has the effect of seriously compromising the other elements, especially visual and auditory. (We know, for example, from other studies that visual can in some contexts easily cancel out auditory or haptic--but not combined auditory and haptic.) That may explain the sometimes contradictory results of studies looking at the benefits of kinesthetic strategies in instruction.

This also has important implications for haptic-based instructional task sequencing and scaffolding (or any type of multi-sensory teaching for that matter). To paraphrase "The Duke": "You ain't got a thing, if you don't [until you] got that swing."

Friday, July 22, 2011

Pronunciation modalities: out of sight--but IN mind!

In this 2009 study of modality dominance, by Hecht and Reiner, when visual is paired one-on-one with either haptic or auditory competing stimuli, visual consistently overpowers either of the two. When the three are presented simultaneously, however, the dominance of visual disappears. That may explain why having some learners focus on a visual schema (such as the orthography) while articulating or practicing a new sound may not turn out to be very efficient--or doing a kinesthetic "dance" of some kind to practice a rhythm pattern (without speaking at the same time) while looking at something in the visual field, may not work all that well either for some learners.

The presence of eye engagement may override or nullify information in the competing modality. In HICP, where all three modalities are usually engaged, the "distracting" influence of sight is at least lessened. In fact, the tri-modality "nexus" should only better facilitate the integration of the graphic word, the felt (haptic) sense of producing it and the internal (auditory) bone resonance and vibrations. Although a substantial amount of pronunciation learning may be better accomplished with eyes closed, tri-modal (haptic, visual and auditory) techniques probably come in a close second. We will "see" in forthcoming research!

Friday, June 17, 2011

Keeping listening in the picture . . . or out of it!

Several posts have addressed the question of the relationship between learning modalities in general learning and pronunciation teaching. What this important 2010 study by Lavie and Macdonald of the Institute of Cognitive Neuroscience at UCL, reported by Science Daily, demonstrates is that in some contexts visual input appears to trump auditory input. In other words, being engaged visually in a task may limit ability to hear critical information.

We know from experience that some highly visual learners may find learning pronunciation especially difficult. This helps to explain why. From whatever source, even stunning visual aids or computer displays, "visual interference" with learning new sounds may be significant. The implication for EHIEP instruction is that haptic and auditory input, key components of multiple-modality instruction--along with a modest amount of video on the side, perhaps--make for the best overall learning format. Get the picture . . . or the sound . . . take your pick!

Sunday, June 12, 2011

The felt sense of a new or "replacement" vowel: Y-buzz and beyond

The first phase of EHIEP training involves haptically anchoring the vowels of English. Even if the learner "has" a vowel already in his or her repertoire, it is essential that a new and more focused, conscious awareness of the somatic qualities of the vowel be established to facilitate later change and monitoring of spontaneous speaking.

That concept is based on Lessac's notion of the "Y-buzz" sensation. Here is a 2007 study by Barrichelo and Behlau that looked at the perceptual salience of that highly resonant sound/sensation, as opposed to "normal" production by subjects of the acoustically similar [i] sound (as in the word "me," for example). The unique, therapeutically created Y-buzz vowel felt sense is the model for our work. The learner's ability to produce the Y-buzz is almost entirely body-based, not auditory. In that way, the learner can produce it without having to "go through" the possibly "defective" [i] vowel in his or her current interlanguage phonology. (See earlier post on "changing the channel.")

Need to put a little more "buzz" in your teaching?