Showing posts with label multiple modality. Show all posts

Monday, March 26, 2018

What you see is what you forget: pronunciation feedback perturbations

Tigger warning:* This blogpost concerns disturbing images (perturbations) during pronunciation work.

In some sense, almost all pronunciation teaching involves some type of imitation and repetition of a model. A key variable in that process is always feedback on our own speech: how well it conforms to the model presented, whether that model comes to us through the air or via technology such as headsets, in addition to the movement and resonance we feel in our vocal apparatus and the bone structure of the head and upper body. Likewise, choral repetition is probably the most common technique, used universally. There are, of course, countless reasons why it may or may not work, among them distraction or lack of attention.

Clker.com
We generally do not, however, take all that seriously what is going on in the visual field in front of the learner during repetition of L2 sounds and words. Perhaps we should. A recent study by Liu et al., Auditory-Motor Control of Vocal Production during Divided Attention: Behavioral and ERP Correlates, showed that differing amounts of random light flashes in the visual field affected subjects' ability to adjust the pitch of their voice to the model presented for imitation. The research was done in Chinese, with native Mandarin speakers attempting to match the tone patterns of words presented to them, along with the "light show". They were instructed to reproduce the models they heard as accurately as possible.

What was surprising was the degree to which visual distraction (perturbation) seemed to directly impact subjects' ability to adjust their vocal production pitch in attempting to match the changing tone of the models they were to imitate. In other words, visual distraction was (cross-modally) affecting perception of change and/or subsequent ability to reproduce it. The key seems to be the multi-modal nature of working memory itself. From the conclusion: "Considering the involvement of working memory in divided attention for the storage and maintenance of multiple sensory information  . . .  our findings may reflect the contribution of working memory to auditory-vocal integration during divided attention."

The research was, of course, not looking at pronunciation teaching, but the management of attention and the visual field is central to haptic instruction, in part because touch, movement and sound are so easily overridden by visual stimuli or distraction. Next time you do a little repetition or imitation work, figure out some way to ensure that working memory perturbation by what is around learners is kept to a minimum. You'll SEE the difference. Guaranteed.

Citation:
Liu Y, Fan H, Li J, Jones JA, Liu P, Zhang B and Liu H (2018) Auditory-Motor Control of Vocal Production during Divided Attention: Behavioral and ERP Correlates. Front. Neurosci. 12:113. doi: 10.3389/fnins.2018.00113

*The term "Tigger warning" is used on this blog to indicate potentially mild or nonexistent emotional disruption that can easily be overrated. 

Saturday, February 4, 2017

Killing Pronunciation 2: "Over and under-learning"

You may have seen a recent report on this research on "overlearning": Overlearning hyperstabilizes a skill by rapidly making neurochemical processing inhibitory-dominant, by Shibata, Sasaki, Bang, Walsh, Machizawa, Tamaki, Chang and Watanabe of Brown University. (There is a pretty readable summary on Medicalxpress.com.) According to the abstract: "Overlearning in humans abruptly changes neurochemical processing, to hyperstabilize and protect trained perceptual learning from subsequent new learning."

Wow. Some useful terms there for you: Neurochemical processing . . . hyperstabilize  . . . inhibitory-dominant . . . 

Clker.com
Basically, the researchers examined the effect of overlearning of a visual mapping procedure on retention under one of three conditions: (a) another new learning procedure was introduced immediately, (b) a time period (3 hours) was inserted before the next procedure, or (c) the first procedure was carried out with overlearning (operationalized as going over the correct set of moves again and again), followed by a second new procedure.

In essence, both (b) and (c) resulted in better recall later. In other words, you can protect new learning by putting some space between that and the next piece of training--especially if the two procedures have some potential overlap of some kind, or . . . by hammering it in, so to speak.
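For readers who think in pseudocode, the three conditions and the reported outcome pattern can be sketched as a toy Python model. This is entirely illustrative: the function, field names and the 3-hour threshold are my own rendering of the summary above, not anything from Shibata et al.

```python
# Toy sketch (not from Shibata et al.): three training schedules and the
# retention pattern described above. Names and values are illustrative only.

def retention(schedule):
    """Return 'protected' or 'overwritten' for a two-task training schedule.

    schedule: dict with keys
      overlearned -- was task 1 practiced past mastery? (condition c)
      gap_hours   -- hours between task 1 and task 2    (condition b)
    """
    if schedule["overlearned"] or schedule["gap_hours"] >= 3:
        return "protected"    # task 1 recall survives the second training
    return "overwritten"      # immediate similar training degrades task 1

conditions = {
    "a: immediate new task": {"overlearned": False, "gap_hours": 0},
    "b: 3-hour gap":         {"overlearned": False, "gap_hours": 3},
    "c: overlearning first": {"overlearned": True,  "gap_hours": 0},
}

for name, sched in conditions.items():
    print(f"{name:24s} -> {retention(sched)}")
```

Only conditions (b) and (c) come out "protected", mirroring the better later recall reported for those two groups.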

Shibata et al. suggest that the findings probably apply to a wide range of learning contexts, while conceding that the focus on visual modality also limits applicability. More research needed, of course. But what might that imply for pronunciation teaching? A few things:
  • Some kinds of drill may work as well as we know they do (especially if anchored with gesture-plus-touch!).
  • Research has long established that just "pointing out" or simple recasting (repeating back the correct pronunciation without further comment) is rarely effective.
  • As reported in the previous blogpost, the role of visual stimuli and distraction in moderating integration of other modalities can be problematic, at best. That is to say, the applicability of this "visual" study to embodied pronunciation may be marginal.
  • The concept of "spacing" various procedures in pronunciation training does make sense. The behaviorists had that one figured out 60 or 70 years ago. (In fact, this possible additional empirical validation of overlearning must put a bit of a smile on the face of any "hyper-senior" researchers of the period still with us.)
  • Good trainers in virtually all physical disciplines know and practice this idea. Again, as developed in several previous blogposts, the idea of partitioning off learning has always been central to hypnosis, allowing the unconscious mind a role in the party. How you do that can vary enormously, simple waiting time being one way.
Two possible takeaways here: (a) However you accomplish it, pronunciation learning, being the highly modality-integrated process that it is, requires uncompromised attention, some kind of processing space around it, and "full-body" armor, or should at least be followed by them. (b) If that is not an integral part of your method, don't be surprised if little sticks or is "uptaken"!

*With apologies, of course, to Bill O'Reilly for the use of his "killing" meme, as in his recent books on well known figures of the past, e.g., Killing Jesus, Killing Lincoln, Killing Kennedy. At least a couple of future posts will use the same "killer" title hook.

Source:
Nature Neuroscience (2017) doi:10.1038/nn.4490
  • To cement learning quickly, overlearning should help, but beware: it may interfere with similar learning that follows immediately.
  • Without overlearning, don't try to learn something similar in rapid succession; there is a risk that the second bout of learning will undermine the first.
  • If you have enough time, you can learn two tasks without interference by leaving a few hours between the two trainings.


  • Read more at: https://medicalxpress.com/news/2017-01-overlearning.html#jCp








    Friday, January 1, 2016

    3D pronunciation instruction: Ignore the other 3 quintuplets for the moment!

    Clker.com
    For a fascinating look at what the field may feel like, a somewhat unlikely source, Ross's 2015 book, 3D Cinema: Optical Illusions and Tactile Experience, provides a (phenomenal) look at how and why contemporary 3D special effects succeed in conveying the "sensation of touch". In other words, as is so strikingly done in the new Star Wars epic, the technology tricks your brain into thinking not only that you are there flying that star fighter but that you can feel the ride throughout your hands and body as well.

    This effect is not just tied to current gimmicks, such as moving and vibrating theater seats, spray mist blown on you, or various odors and aromas being piped in, although it can be. Your mirror neurons respond more as if it is you who is doing the flying, as if you are (literally) "in touch" with the actor. The neurological interconnectedness between the senses (or modalities) provides the bridge to a greater and greater sense of the real, or at least a very "close encounter."

    How does the experience in a good 3D movie compare to your best multi-sensory events or teachable moments in the classroom, focusing on pronunciation? 

    It is easy to see, in principle, the potential for language teaching: creating one vivid teachable moment after another, "Wowing!" the brain of the learner with multi-sensory, multi-modal experience. As noted in earlier blogposts on haptic cinema, based in part on Marks (2002), the concept that "the more multi-sensory, the better", that by stimulating more of the learner's (whole) brain virtually anything is teachable, is implicit in much of education and entertainment.

    Although earlier euphoria has moderated, one reason it can still sound so convincing is our common experience of remembering the minutest detail from a deeply moving or captivating event or presentation. We all have had the experience of being present at a poetry reading or great speech where it was as if all our senses were alive, on overdrive. We could almost taste the peaches; we could almost smell the gun powder.

    Part of the point of 3D cinema is that it becomes SO engaging that our tactile awareness is also heightened enormously. As that happens, the associated connections to other modalities are "fired" as well. We experience the event more and more holistically. How that integration happens exactly can probably be described informally as something like: audio-visual-cognitive-affective-kinaesthetic-tactile-olfactory and "6th sense!" experienced simultaneously.

    At that point, apparently, the brain is multitasking at such high speed that everything is perceived as "there" all at once. And that is the key notion. It would seem to imply that if all senses are strongly activated and recording "data", then what came in on each sensory circuit will later be equally retrievable. Not necessarily. As extensive research and countless commercially available systems have long established for acquisition of vocabulary, pragmatics, reading skills and aural comprehension, the possibilities of rich multi-sensory instruction seem limitless at this point.

    Media can provide memorable context and secondary support, but why that often does not work as well for learning of some other skills, including pronunciation, is still something of a mystery. (Caveat emptor: I am just completing a month-long "tour of duty" with seven young grandchildren . . . ) In essence, our sensory modalities are not unlike infant octuplets, competing for our attention and storage space. Although it is "possible" to attend to a few at once, it is simply not efficient. Best case, you can do maybe two at a time, one on each knee.

    The analogy is more than apt. In a truly "3D" lesson, consistent with Ross (2015), whether f2f or in media, the five primary "senses" of pronunciation instruction (visual, auditory, kinaesthetic, tactile and meta-cognitive) are nearly equally competitive, that is, vividly, even overwhelmingly, present in the lesson. Tactile/kinaesthetic can be unusually prominent and accessible, in part, as noted in earlier blogposts, because it serves to "bind together" the other senses. In that context, consciously attending to any two or three simultaneously is feasible.

    So how can we exploit such a vivid, holistically experienced, 3D-like milieu, where movement and touch figure more prominently? I thought you'd never ask! Because of the essentially physical, somatic experience of pronunciation--and this is critical, from our experience and field testing--two of the three MUST be kinaesthetic and tactile: a basic principle of haptic pronunciation teaching. (Take your pick of the other three!)

    Consider "haptic" simply an essential "add-on" to your current basic three (visual, auditory and meta-cognitive), and "do haptic" along with one or two of the other three. The standard haptic line of march:

    A. Visual-Meta-cognitive (very brief explanation of what, plus symbol, or key word/phrase)
    B. Haptic-metacognitive (movement and touch with spoken symbol name or key word/phrase, typically 3x)
    C. Haptic-auditory (movement and touch, plus basic sound, if the target is a vowel or consonant temporarily in isolation, or target word/phrase, typically 3x)
    D. Haptic-Visual-Auditory (movement and touch, plus contextualized word or phrase, spoken with strong resonance, typically 3x)
    E. Some type of written note made for further reference or practice
    F. (Outside of class practice, for a fixed period of up to 2 weeks follows much the same pattern.)

    Try to capture the learner's complete (whole body/mind) attention for just 3 seconds per repetition--if possible! Not only can that temporarily let you pull apart the various dimensions of the phonemic target for attention, but it can also serve to create a much more engaging (near 3D) holistic experience out of a potentially "senseless" presentation in the first place--with "haptic" in the mix from the outset.

    Happy New Year!

    Keep in touch.

    Citation:
    Ross, M. (2015). 3D Cinema: Optical Illusions and Tactile Experiences. London: Springer, ISBN: 978-1-349-47833-0 (Print) 978-1-137-37857-6 (Online)



    Friday, August 22, 2014

    Providing pronunciation teaching with signs (and wonders!) and a hand!

    More fascinating research on the role of gesture in learning from Goldin-Meadow at the University of Chicago, summarized by Science Daily. The research in part looked at "homesigning," that is, sign systems created by children not introduced to the standard signing system of the language or culture. One conclusion of the study:
    ". . . gesture cannot aid learners simply by providing a second modality. Rather, gesture adds imagery to the categorical distinctions that form the core of both spoken and sign languages."

    That research also sheds light on how the pedagogical movement patterns (PMPs) of haptic pronunciation teaching work as well. (Several of the gestural patterns closely resemble signs used in American Sign Language, and early development of the system was, in fact, informed and inspired by ASL.)

    One of the more interesting parallels is the fact that ASL signs of high emotional intensity more often tend to terminate in touch--as do all PMPs. A second is that the PMPs of EHIEP (Essential haptic-integrated English Pronunciation), for the most part, present vivid visual pictures that are learned and recalled easily. If you'd like to learn more, just join us next month in Costa Rica!


    Citation: University of Chicago. "Hand gestures improve learning in both signers, speakers." ScienceDaily, 19 August 2014.

    Wednesday, December 5, 2012

    Effortless learning of the IPA vowel "matrix" of English?

    Image: Wikipedia
    Could be, according to 2011 research by Watanabe at ATR Laboratories in Kyoto and colleagues at Boston University, as summarized by Science Daily--using fMRI technology in the form of neurofeedback tied to carefully scaffolded visual images. Mirroring what appears to go on in real time, in the experiment it was evident that " . . . pictures gradually build up inside a person's brain, appearing first as lines, edges, shapes, colors and motion in early visual areas. The brain then fills in greater detail to make a red ball appear as a red ball, for example."

    This is an intriguing idea, something of a "bellwether" of things to come in the field, using fMRI-based technology joined with multiple-modality features to facilitate acquisition of components of complex behavioral patterns. The application of that approach to articulatory training alone, assembling a sound, in effect, one parameter at a time, just the way it is done by expert practitioners--should be relatively straightforward.

    Clip art: Clker
    The EHIEP vowel matrix resembles the standard IPA matrix on the right, except that it is positioned in mirror image and includes only the vowels of English. In training learners to work within it, we do a strikingly similar build up to that identified in the study, lines < edges < shapes < motion (which is different for each vowel.) Each quadrant is then given a colour that corresponds to something of the phonaesthetic quality of the vowels positioned there. Once the "matrix" is kinaesthetically presented and practiced, it is then gradually, haptically anchored as the vowels are presented and practiced using distinct pedagogical movement patterns terminating in some form of "Guy or Girl touch" for each as the sound is articulated.
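To make the scaffolding order concrete, here is a toy sketch of the build-up described above (lines, then edges, shapes and motion, then colour-coded quadrants). The quadrant colours and vowel groupings below are invented placeholders, not EHIEP's actual scheme.

```python
# Toy sketch of the scaffolded build-up described above.
# Colours and vowel groupings are illustrative placeholders only.
BUILD_UP = ["lines", "edges", "shapes", "motion"]

QUADRANTS = {  # placeholder colours and example English vowels
    "high front": {"colour": "green",  "vowels": ["i", "ɪ"]},
    "high back":  {"colour": "blue",   "vowels": ["u", "ʊ"]},
    "low front":  {"colour": "yellow", "vowels": ["æ", "ɛ"]},
    "low back":   {"colour": "red",    "vowels": ["ɑ", "ɔ"]},
}

def presentation_order():
    """First the visual scaffolding layers, then the colour-coded quadrants."""
    steps = [f"present {layer}" for layer in BUILD_UP]
    for name, quad in QUADRANTS.items():
        steps.append(
            f"anchor {name} quadrant in {quad['colour']}: {', '.join(quad['vowels'])}"
        )
    return steps

for step in presentation_order():
    print(step)
```

The point of the ordering is the same as in the study: the matrix is built up layer by layer before any individual vowel is haptically anchored.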

    Out of the box? Not for long, my friends!


    Sunday, November 25, 2012

    Play it again, HIRREM! (A musical tone approach to balanced pronunciation learning?)

    Clip art: Clker
    With apologies to Humphrey Bogart, one of the basic "learning" assumptions in most training systems is that some degree of balance between relevant areas of the brain, whether left~right, top~bottom or front~back (or all of those), is optimal. How that is to be achieved is the question, of course. As blogged earlier on several occasions, brain research (e.g., as in neurotherapy) is now beginning to offer alternatives, or at least complements, to cognitive and physical exercises or disciplines: brain frequency "adjustment."

    In a new study by Tegeler and colleagues at Wake Forest University (summarized by Science Daily), musical tones were mirrored back to the brains of subjects to achieve a more balanced overall brain frequency profile--which appeared to successfully lessen insomnia, at least for a month or so. Tegeler does note that " . . . the changes observed with HIRREM could be due to a placebo effect. In addition, because HIRREM therapy involves social interaction and relaxation, there may be other non-specific mechanisms for improvement, in addition to the tonal mirroring."

    Now granted, this specific technology may not directly impact a learner's ability to learn new or repaired sounds--or even "HIRREM" better, but it is clearly on the right track. (Nothing to lose sleep over if you can't spring for the 30k to get you a " . . . high-resolution, relational, resonance-based, electroencephalic mirroring or, as it's commercially known, Brainwave Optimization™ . . . " set up!) But multiple-modality and balanced "all-brain" engagement is the key to pronunciation change. It's coming. Keep in touch. 

    Sunday, January 22, 2012

    Keeping pace with PACE (with HICP)

    Wow! How about a program that claims to be able to enhance all of the following in your child (excerpted from the PACE website):
    • Auditory Processing: to process sounds. Helps one hear the difference, order, and number of sounds in words faster; basic skill needed to learn to read and spell; helps with speech defects.
    • Auditory Discrimination: to hear differences in sounds such as loudness, pitch, duration, and phoneme.
    • Auditory Segmenting: to break apart words into separate sounds.
    • Auditory Blending: to blend individual sounds to form words.
    • Auditory Analysis: to determine the number, sequence, and which sounds are within a word.
    • Auditory-Visual Association: to be able to link a sound with an image.
    • Comprehension: to understand words and concepts.
    • Divided Attention: to attend to and handle two or more tasks at one time such as taking notes while listening and carrying totals while adding the next column. Required for handling tasks quickly or tasks with complexity.
    • Logic and Reasoning: to reason, plan, and think.
    • Long-Term Memory: to retrieve past information.
    • Math Computations: to do math calculations such as adding, subtracting, multiplying, and dividing.
    • Processing Speed: the speed at which the brain processes information. Makes reading faster and less tiring; makes one more aware of his or her surrounding environment; helps with sports such as basketball, football, and soccer and with activities such as driving.
    • Saccadic Fixation: to move the eyes accurately and quickly from one point to another.
    • Selective Attention: to stay on task even when distraction is present.
    • Sensory-Motor Integration: to have the sensory skills work well with the motor skills — such as with eye-hand coordination.
    • Sequential Processing: to process chunks of information that are received one after another.
    • Simultaneous Processing: to process chunks of information that are received all at once.
    • Sustained Attention: to be able to stay on task.
    • Visual Processing: to process and make use of visual images. Helps one create mental pictures faster and more vividly; helps one understand and “see” word math problems and read maps; improves reading comprehension skills.
    • Visual Discrimination: to see differences in size, colour, shape, distance, and orientation of objects.
    • Visual Manipulation: to flip, rotate, move, change colour, etc. of objects and images in one’s mind.
    • Visualization: to create mental images or pictures.
    • Visual Span: helps one see more and wider in a single look. Improves side vision. Enables faster reading and better, faster decisions in sports.
    • Working Memory: to retain information while processing or using it.
    This is a long-established local private school. My guess is that they can probably do most of that, too! What is fascinating is that HICP/EHIEP work should explicitly attend to many of those (italicized) as well. Good multiple modality teaching and learning is like that . . . like this!

    Thursday, October 20, 2011

    "Sensational" pronunciation teaching? Chances are about 50/50.

    Clip art: Clker
    Here is a brief summary of an article by Killingsworth and Gilbert published in the journal, Science. It includes this interesting quote from the article: “The ability to think about what isn’t happening is a significant cognitive achievement, but one that comes at an emotional cost.” The data revealed that most of us are "in the present moment" emotionally, only about 50% of the time, at best. The concept of "mindfulness," sensing as much as possible the "felt sense" of our body (e.g., heart rate, muscle tension, breathing) and learning to function within it--not trying to escape it or cool it down consciously-- is applied extensively in many fields today.

    One of the great advantages of multiple modality instruction is that it provides a means of (at least momentarily) capturing the full attention of the mind to the task at hand. Haptic techniques, engaging the body as they do, if done with correct form and perhaps some eye tracking, are "mindful" or mind-filling in the best (felt) sense. It is not that clear explanations, discussion, insight, planning and disembodied drilling related to a learner's pronunciation are not helpful; they are, of course--but they can also easily interfere with efficient anchoring of sound change.

    In other words, stop thinking about pronunciation and how difficult, time-consuming and anxiety-producing it can be. Just do it (haptically)!

    Sunday, September 25, 2011

    Selecting a "sound" haptic anchor

    So if you were trying to find a good anchor, you'd go to a "Pro," right? Having looked to fishing for paradigms and metaphors a couple of times before, I'll try one more. Here are the Bass Pro Shop's criteria for a good anchor:

    Clip art: Clker
    (1) Strong craftsmanship
    (2) Can be set and re-set quickly and easily under all conditions
    (3) Good holding power (holds well in all types of bottom: weed, rock, sand, mud)
    (4) Can be stored easily (on deck) -- compact
    (5) Can be retrieved easily
    (6) Can be released easily and effortlessly from the bottom
    I'm sure you can quickly extrapolate the first four parameters to haptic anchoring of pronunciation.

    The 5th and 6th focus on two additional features worth elaborating. Ease of "retrieval" translates to how readily and effectively awareness of the "stored" new sound is triggered later, during conversation or listening. Probably the most important experiential benchmark in haptic-based change is when the learner becomes aware, after the fact, of either correct usage or the lingering mispronunciation. That is often experienced primarily as a body sensation, not a visual or "self-talk" auditory signal that would interfere with communication or relationships!

    The final parameter, releasing the anchor, is also important. Haptic anchoring tends to fade quickly--unless practiced and re-experienced frequently--which works out just right for fast, short-term change. So, if your pronunciation teaching seems adrift or doesn't seem to be "catching" lately, don't throw it overboard . . . just get some better (haptic) anchors.

    Monday, August 29, 2011

    Plastic Brain . . . Pronunciation Change

    Clip art: Clker
    One of the most striking findings of recent research, such as this 2002 study on neuroplasticity in motor learning by Ungerleider, Doyon and Karni, is not just how the brain works but its inherent plasticity in many respects: its ability to reorganize and relearn, or learn in other ways if necessary. One obvious implication is that just because students have individual preferences for particular learning styles does not mean they cannot, in many cases rather easily, switch to other styles or develop better use of secondary preferences. The danger of cognitive style or learning style categories is that they are categorical. Once we "know" what we are, that's it. (In fact, research suggests that once you know your style, especially based on some simpleminded 5-minute questionnaire, you become even more so--one of the basic assumptions of hypnotherapy, of course.)

    Bottom line here: even the "adult brain" (and this is especially good news for learners of my generation and beyond) is capable of enormous flexibility and re-generation. So forget all that nonsense that you have heard about having to alter your teaching style to fit those of your students: retrain them instead! Well, actually, you should be constantly training everybody, yourself included, in multiple modality learning. Get HIP(oeces), eh!

    Saturday, August 27, 2011

    Haptic preferences of 5-12th graders (and adult learning style plasticity in pronunciation teaching)

    Clip art: Clker
    This summary study found that "average" 5-12th graders in the US, Hong Kong and Japan had a relatively balanced learning style profile, with a slight preference for haptic (37%), with auditory at 34% and visual at only 27%. Those results appear to contrast substantially with the "typical" adult learner who tends to be biased in favor of visual with auditory second and haptic a distant third. From that perspective, our goal should be to assist adult learners in developing a more balanced, multiple-modality-based learning style profile more like they had in school. Not sure about the applicability of the research but I certainly like the results. On the face of it, however, that looks like an almost ideal mindset for pronunciation change, a good target for our research and instruction.

    Friday, August 19, 2011

    Could Krashen's "Monitor Model" have been 25% correct?

    Clip art: Clker
    Here is an article that begins with a quote from Krashen (1982) stating his initial articulation of the "Monitor Model," arguing, among other things, that attention to form or correction in L2 acquisition is, at best, not effective or productive. Following on from recent posts, you can see how he had captured a critical dimension of the process but was tossing out even the possibility of any directed, modality-mediated monitoring of spontaneous speaking (that is, modulating attention appropriately to learner cognitive style profile among the four senses or modalities), as we have been exploring for some time now. I'm sure I am not the first to suggest that "Krashen's Error" was that he was just slightly "out of touch" . . .

    Wednesday, July 27, 2011

    Touching sound to teach it

    image credit: yvonbonenfant.com
    When contemporary vocal artists, in this case Yvon Bonenfant, create in multiple senses, the interplay between sound and touch often becomes the focal "Zonenübergang," the crossover. As has been evident in posts related to haptic interfaces and, more recently, deafblind communication, our touch metaphors, e.g., "How touching!", connect much more than simple mental concepts. Bonenfant describes the engagement of touch and sound in his work as best experienced as a silk-like "membrane" between us, where the sound passes through into tactile meaning and understanding almost unimpeded. That is a remarkable characterization of what we are after in linking pronunciation with the felt sense of producing it.

    Friday, June 17, 2011

    Keeping listening in the picture . . . or out of it!

    Clip art: Clker
    Several posts have addressed the question of the relationship between learning modalities in general learning and pronunciation teaching. What this important 2010 study by Lavie and Macdonald of the Institute of Cognitive Neuroscience at UCL, reported by Science Daily, demonstrates is that in some contexts visual input appears to trump auditory input. In other words, being engaged visually in a task may limit ability to hear critical information.

    We know from experience that some highly visual learners may find learning pronunciation especially difficult. This helps to explain why. From whatever source, even stunning visual aids or computer displays, "visual interference" with learning new sounds may be significant. The implication for EHIEP instruction is that haptic and auditory input, key components of multiple modality instruction (along with a modest amount of video on the side, perhaps), is the best overall learning format. Get the picture . . . or the sound . . . take your pick!

    Thursday, June 2, 2011

    Quod erat demonstrandum: Why pronunciation teaching fails

    Clip art: Clker
    University of Wisconsin researcher Alibali is quoted in the linked summary by Science Daily as saying, "Body movements are one of the resources we bring to cognitive processes." From our perspective, it might be better framed: "Cognitive processes are one of the resources we bring to learning pronunciation, multi-modally." What a nice example of the obvious "cognitive" bias prevalent in this field today, such that the body is still thought of principally as an "add-on" or afterthought in understanding human functioning and designing instruction. Some estimates are that the body figures into most popular models of cognitive functioning at well below the 20% level.

    The researchers speculate that it might even be a good idea to consider suppressing body engagement to stimulate other forms of disembodied learning. They need not bother . . . We have ample evidence in contemporary pronunciation teaching as to what happens when that is the common practice.

    (Hat tip to Charles Adamson, founder and guiding spirit of the Japan NLP association for this link to the study summarized at Sciencedaily.com. He has been the source of several Science Daily summaries that I have also linked here in connection with a relevant post.)