Showing posts with label multi-sensory. Show all posts

Saturday, October 1, 2022

What comes next in pronunciation teaching! (Why being in touch is so important!)

An intriguing new study by researchers at the University of East Anglia, Aix-Marseille University, and Maastricht University, summarized by Neurosciencenews.com as "How the Sounds We Hear Help Us Predict How Things Feel." (The title and actual empirical findings are to be revealed later; there is no link to the study itself, other than a note that it will appear in Cerebral Cortex.)

I am, nonetheless, delighted to take their word for it since I LOVE the conclusions and find them "touching!" Apparently they have uncovered yet another "new" type of connection between sound and touch or tactile processing. The key finding from the summary:  

“ . . . research shows that parts of our brains, which were thought to only respond when we touch objects, are also involved when we listen to specific sounds associated with touching objects. (Italics, mine.) This supports the idea that a key role of these brain areas is to predict what we might experience next, from whatever sensory stream is currently available.”

Across this unique, recently discovered circuit, for example, when we hear a sound, like that of a single consonant, the brain in principle simultaneously connects it with the physical sensations associated with articulating it (touch, vocal resonance, the micro-movements involved in producing it). If the focus is a word, on the other hand, we assume that multiple, analogous circuits come into play, linking to other dimensions. But the "touch" circuit has those unique properties.

So what might that mean in the classroom, especially for pronunciation teaching and its effectiveness? (I'll get to haptic pronunciation later, of course!) For one thing (NO SURPRISE HERE!), a sound may be associated with the somatic (body) sensations in the vocal tract but not necessarily with the concept, the phoneme, the phonological complex/nexus, or the graphemic representation itself. It is as if the sound points at the body, not the "brain" as a whole.
 
On the other "hand," any number of other words could have virtually identical "points of impact" on the body, associated with the same vowel "sound." The same may apply to a word articulated simultaneously with a gesture, or to any experience associated with a sound, whether heard or self-generated. That circuit connects the auditory image to at least the "body," but not necessarily to one concept.

Then what is the "workaround" for bringing together the multisensory event termed a "word"--assuming, for example, that it has been learned truly "multi-sensorially," that is, with as many senses as possible (or at least a "quorum level") as vividly or intensely engaged as possible?

In a sense, the "answer" is in the question: consistent, rich multisensory engagement. There are an almost infinite number of ways to accomplish that, of course, but haptic pronunciation teaching, based on touch-anchored, speech-synchronized gesture, attempts to do that systematically. In principle, any sound, word or sound process can be experienced as a nexus involving:
  • the physical sensation of articulating the sound/process
  • the auditory features of the sound (acoustic)
  • a concept (in the case of a word or, in some cases, patterns of pitch movement)
  • a gesture that involves hands touching each other or the body, in some manner that mimics either the nature of the sensations involved in articulation or the "shape" of the concept itself, such as hands rising on a rising pitch or intonation, or hands positioned high in the visual field to represent a "high" vowel.
According to the study, the use of haptic, touch-anchored gesture should strengthen considerably the connection between the concept associated with the gesture and the sound by "pointing" to the body-sensations involved in articulating the sound.
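For illustration only, that four-part nexus can be sketched as a simple record. The field names and example values below are my own hypothetical labels, not part of any published haptic system:

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical sketch of the four-part "nexus" listed above.
# Field names and example values are illustrative assumptions only.
@dataclass
class SoundNexus:
    articulation: str        # physical sensation of articulating the sound/process
    acoustic: str            # auditory features of the sound
    concept: Optional[str]   # concept (for a word or a pitch-movement pattern)
    gesture: str             # touch-anchored gesture mimicking articulation or "shape"

# A bare sound may point at the body without linking to any one concept:
schwa = SoundNexus(
    articulation="relaxed, mid-central tongue position",
    acoustic="short, unstressed mid-central vowel",
    concept=None,
    gesture="light tap of one hand on the other at the center of the visual field",
)
```

The point of the sketch is simply that the "concept" slot can be empty while the body-based slots are always filled.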

 And, of course, from our perspective, KINETIK (method) is what is coming next! 

 Source: https://neurosciencenews.com/auditory-tactile-processing-21279/


Friday, November 27, 2020

Motivation to do Pronunciation work: Smell-binding study!

Rats! Well . . . actually . . . mice that are motivated to (voluntarily) exercise more are genetically set up, or have developed, to have a better, more discriminating vomeronasal glandular structure. Is that big, or what? Check out the Neuroscience News summary of this as-yet unpublished study by Haga-Yamanaka, Garland and colleagues at UC-Riverside, forthcoming in PLOS ONE: "Exercise Motivation Could Be Linked to Certain Smells." I LOVE the researchers' potential application of the research:

“It’s not inconceivable that someday we might be able to isolate the chemicals and use them like air fresheners in gyms to make people even more motivated to exercise,” Garland said. “In other words: spray, sniff, and squat.”

Being a runner myself, I especially like the study, since it uses mice that are what they term "high runners!" Admittedly, it is a bit of a stretch to jump from the study to the gym and then to the ELT/pronunciation classroom, but the reality of how smell affects performance is well established in several disciplines--and probably in your classroom as well!

Decades ago, a colleague who specialized in olfactory therapies and consulted in the corporate world on creating good-smelling work spaces, etc., sold me on the idea of using a scent generator in my pronunciation teaching. It required mixing two or three oils to get students in the mood to do whatever I wanted them to do--better. Back then it seemed to be effective, but there was little research to back it up, and it was before we were forced to work in "scent-free" and other things-free spaces.

What is interesting about the study for our work is the connection between persistence in physical exercise and heightened general sensory awareness, and the way smell, in this case, is enhanced. My guess is that touch, foundational in haptic pronunciation teaching, is keyed in similar ways. Gradually, as students practice consistently with the gross and fine motor gestural patterns--what we call pedagogical movement patterns--their differential use of touch increases. (An earlier post identifies over two dozen "-emic" types of touch in the system.) In other words, touch becomes more and more powerful and effective in anchoring sound change and memory for it.

That insight is central to the new haptic pronunciation teaching system, Acton Haptic Pronunciation Complement--Rhythm First, which will be rolled out early in 2021. (For preliminary details on that, check out the refurbished Acton Haptic website! )



Saturday, March 14, 2020

Pronunciation in the eyes of the beholder: What you see is what you get!

This post deserves a "close" read. Although it applies new research to exploring the basics of haptic pronunciation teaching specifically, the complex functioning of the visual field itself, and of eye movement in teaching and learning in general, is not well understood or appreciated.

For over a decade we have "known" that there appears to be an optimal position in the visual field in front of the learner for the "vowel clock" or compass used in the basic introduction to the (English) vowel system in haptic pronunciation teaching. Assuming:
  • The compass/clock below is on the equivalent of an 8.5 x 11 inch piece of paper
  • About .5 meters straight ahead of you
  • With the center at eye level--or the equivalent relative size on the board, wall or projector,
  • Such that if the head does not move,
  • The eyes will be forced at times to move close to the edges of the visual field
  • To lock on or anchor the position of each vowel (some vowels could, of course, be positioned in the center of the visual field, such as schwa or some diphthongs.)
  • Add to that simultaneous gestural patterns concluding in touch at each of those points in the visual field (www.actonhaptic.com/videos)
Something like this:

  • (North/Northwest) 11. [uw] "moo"; 10. [ʊ] "cook"
  • (Northeast) 1. [iy] "me"; 2. [ɪ] "chicken"
  • (West) 9. [ow] "mow"; 8. [ɔ] "salt"
  • (center, at eye level)
  • (East) 3. [ey] "may"; 4. [ɛ] "best"
  • (Southwest) 7. [ʌ] "love"
  • (Southeast) 5. [æ] "fat"
  • (South) 6. [a] "hot/water"

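As a rough sketch, the clock could also be written down as a lookup table. The dictionary below just restates the layout above; the coordinate helper and its geometry are my own assumptions for illustration, not part of the published system:

```python
import math

# The haptic "vowel clock": clock position -> (symbol, example word, compass label).
# Data restates the compass/clock layout above.
VOWEL_CLOCK = {
    11: ("uw", "moo", "north/northwest"),
    10: ("ʊ", "cook", "north/northwest"),
    1:  ("iy", "me", "northeast"),
    2:  ("ɪ", "chicken", "northeast"),
    9:  ("ow", "mow", "west"),
    8:  ("ɔ", "salt", "west"),
    3:  ("ey", "may", "east"),
    4:  ("ɛ", "best", "east"),
    7:  ("ʌ", "love", "southwest"),
    5:  ("æ", "fat", "southeast"),
    6:  ("a", "hot/water", "south"),
}

def clock_to_xy(position, radius=1.0):
    """Convert a clock position (1-12) to (x, y) relative to eye level at (0, 0).
    12 o'clock points straight up; positions advance clockwise.
    (An assumed geometry, purely for illustration.)"""
    angle = math.radians(90 - position * 30)  # 30 degrees per clock hour
    return (radius * math.cos(angle), radius * math.sin(angle))

x, y = clock_to_xy(3)  # 3 o'clock: directly to the right ("east") of eye level
```

Nothing in the sketch is essential; it simply makes explicit that each vowel "owns" a distinct, repeatable location in the visual field.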
Likewise, we were well aware of previous research by Bradshaw, et al. (2016), for example, on the function of eye movement and position in the visual field related to memory formation and recall. A new study, "Eye movements support behavioral pattern completion," by Wynn, Ryan, and Buchsbaum of Baycrest's Rotman Research Institute, summarized by Neurosciencenews.com, seems (at least to me) to unpack more of the mechanism underlying that highly "proxemic" feature.

Subjects were introduced to a set of pictures of objects positioned uniquely on a video screen. In phase two, they were presented with sets of objects containing both the original and new objects, in various conditions, and tasked with indicating whether they had seen each object before. What they discovered was that in trying to decide whether the image was new or not, subjects' eye patterning tended to reflect the original position in the visual field where it was introduced. In other words, the memory was accessed through the eye movement pattern, not "simply" the explicit features of the objects, themselves. (It is a bit more complicated than that, but I think that is close enough . . . )

The study is not claiming that the eyes are "simply" replaying some pattern from an initial encounter with an image, but that the overt actions of the eyes in recall are based on some type of storage or processing patterning. The same would apply to any input, even a sound heard or a sensation with the eyes closed. Where the eyes "land" could reflect any number of internal processing phenomena, but the point is that a specific memory entails a processing "trail" evident in, or reflected by, observable eye movements--at least some of the time!

To use the haptic system as an example: in gesturing through the matrix above, not only is there a unique gestural pattern for each vowel--if the visual display is positioned "close enough" that the eyes must also move in distinctive patterns across the visual field--you also have a potentially powerful process, or heuristic, for encoding and recalling sound/visual/kinesthetic/tactile complexes.

So . . . how do your students "see" the features of L2 pronunciation? Looking at a little chart on their smartphone or on a handout or at an LCD screen across the room will still entail eye movement, but of what and to what effect? What environmental "stimulants" are the sounds and images being encoded with and how will they be accessed later? (See previous blogpost on "Good looking" pronunciation teaching.)

There has to be a way, using my earlier training in hypnosis, for example, to get at learner eye movement patterning as they attempt to pronounce a problematic sound. Would love to compare "haptic" and "non-haptic-trained" learners. Again, our informal observation with vowels, for instance, has been that students may use either or both the gestural or eye patterning of the compass in accessing sounds they "experienced" there.  Let me see if I can get that study past the human subjects review committee . . .

Keep in touch! v5.0 will be on screen soon!

Source: Neurosciencenews.com (April 4, 2020), "Our eye movements help us retrieve memories"


Saturday, December 22, 2018

The feeling before it happens: Anticipated touch and executive function--in (haptic) pronunciation teaching

Tigger warning*: This post is (about) touching!

Another in our continuing, but much "anticipated", series of reasons why haptic pronunciation teaching works or not, based on studies that at first glance (or just before) may appear to be totally unrelated to pronunciation work.

Fascinating piece of research by Weiss, Meltzoff, and Marshall, of the University of Washington's Institute for Learning and Brain Sciences and Temple University, entitled "Neural measures of anticipatory bodily attention in children: Relations with executive function," summarized by ScienceDaily.com. In that study they looked at what goes on in the (child's) brain prior to an anticipated touch of something. What they observed (from the ScienceDaily.com summary) is that:

"Inside the brain, the act of anticipating is an exercise in focus, a neural preparation that conveys important visual, auditory or tactile information about what's to come  . . . in children's brains when they anticipate a touch to the hand, [this process] . . . relates this brain activity to the executive functions the child demonstrates on other mental tasks. [in other words] The ability to anticipate, researchers found, also indicates an ability to focus."

Why is that important? It suggests that those areas of the brain responsible for "executive" functions, such as attention, focus and planning, engage much earlier in the process of perception than is generally understood. For the child or adult who does not have the general, multi-sensory ability to focus effectively, the consequences can be far reaching.

In haptic pronunciation work, for example, we have encountered what appeared to be a whole range of random effects that can occur in the visual, auditory, tactile and conceptual worlds of the learner that may interfere with paying quality attention to pronunciation and memory. In some sense we have had it backwards.

What the study implies is that executive function mediates all sensory experience, since we must efficiently anticipate what is to come--to the extent that any individual "simply" may or may not be able to attend long enough, or deeply enough, to "get" enough of the target of instruction. The brain is set up to avoid unnecessary surprise at all costs. The more accurate the anticipation, of course, the better.

If the conclusions of the study are on the right track--that the "problem" is as much or more in executive function--then how can that (executive functioning) be enhanced systematically, as opposed to just attempting to limit random "input" and distraction surrounding the learner? We'll return to that question in subsequent blog posts, but one obvious answer is through the development of highly disciplined practice regimens and careful, principled planning.

Sound rather like something of a return to more method- or instructor-centered instruction, as opposed to this passing era of overemphasis on learner autonomy and personal responsibility for managing learning? That's right. One of the great "cop outs" of contemporary instruction has been to pass off blame for failure on the learner, her genes and her motivation. That will soon be over, thankfully.

I can't wait . . .



Citation:
University of Washington. (2018, December 12). Attention, please! Anticipation of touch takes focus, executive skills. ScienceDaily. Retrieved December 21, 2018 from www.sciencedaily.com/releases/2018/12/181212093302.htm.

*Used on this blog to alert readers to the fact that the post contains reference to feelings and possibly "paper tigers" (cf., Tigger of Winnie the Pooh)


Monday, July 16, 2018

"A word in the hand is worth two in the ear!" (On the relationship between touch and audition in pronunciation teaching)

Just got back from a couple of weeks in China. Always good to reconnect with some of the roots of things haptic, especially Chinese traditional medicine and its acupressure and acupuncture systems. About 30 years ago I was introduced to the concept of "qi" and the notion of the "energy healing" arts. Not surprisingly, the hands play a prominent part, in that a number of key acupressure points are located there, especially at the center of the hands, the palms. In fact, one of the most important acupressure points, Lao Gong (Pericardium-8), associated with "the place of labor," is there at the center of the palm. (To find it, make a gentle pointing fist and note where your ring finger touches the palm.)

In haptic pronunciation teaching, most of the sounds are anchored using touch and movement: movement, sound and touch intersect on stressed elements of words, phrases or sentences, as the fingers of one hand touch the center of the palm of the other, using any of several types of touch, e.g., tapping, scraping, or slight pressure--up to intense, extended pressure.

In pronunciation teaching, and especially when focusing on vowel and consonant articulation, awareness and direction of touch, as with the various articulators in the mouth or throat area, may or may not figure prominently in pedagogy. Generally the latter, unfortunately . . .

A fascinating new study by Yau of Baylor College of Medicine, reported by ResearchFeatures.com, has in some sense "uncovered" more of the basic interdependence of hearing and touch. In part that is because both senses are managed, or mediated, in roughly the same area of the brain. The most striking suggestion, however, is that the same degree of "supramodality" probably applies across all the senses as we think of them today.

In other words, evidence of a touch-hearing supramodality confirms again that the same interrelationship probably exists among all senses, including (as in haptic work) kinesthetic-visual-audio-tactile. One of the early discoveries about the function of touch in perception (and any number of studies since) has been that it serves to "unite" the senses, functioning in a more exploratory capacity, and often only temporarily at that (Fredembach et al., 2009; Lagarde and Kelso, 2006). Turns out, touch does more than that!

When instructors, especially those with adult students, refer to "multi-sensory" teaching, they are typically referring to visual-auditory (and maybe some kinesthetic) engagement only, not the use of systematic touch. With the Yau research we understand more about how the senses naturally connect, even without our interference or design. Also, however, we see (and feel) here the capability of touch, for example, to affect learning of sound--and vice versa.

Those with any degree of synesthesia, where senses are actually experienced through some other modality, have been into this from birth. We are beginning to catch up and see the potential application of that perspective. The possibilities for any number of disciplines, from rehabilitation to pronunciation instruction, are fascinating.

To not go "supramodal" now would, of course, be . . . senseless.  More on the specific application of Yau's research to enhancing pronunciation instruction in general, and haptic work specifically, will follow in subsequent posts.

Keep in touch!













Wednesday, February 14, 2018

Ferreting out good pronunciation: 25% in the eye of the hearer!

Something of an "eye-opening" study, "Integration of Visual Information in Auditory Cortex Promotes Auditory Scene Analysis through Multisensory Binding," by Town, Wood, Jones, Maddox, Lee, and Bizley of University College London, published in Neuron. One of the implications of the study:

"Looking at someone when they're speaking doesn't just help us hear because of our ability to recognise lip movements – we've shown it's beneficial at a lower level than that, as the timing of the movements aligned with the timing of the sounds tells our auditory neurons which sounds to represent more strongly. If you're trying to pick someone's voice out of background noise, that could be really helpful." They go on to suggest that someone with hearing difficulties have their eyes tested as well.

I say "implications" because the research was actually carried out on ferrets, examining how sound and light combinations were processed by their auditory neurons in their auditory cortices. (We'll take their word that the ferret's wiring and ours are sufficiently alike there. . . )

The implications for language and pronunciation teaching are interesting, namely: strategic visual attention to the source of speech models and participants in conversation may make a significant impact on comprehension and learning how to articulate select sounds. In general, materials designers get it when it comes to creating vivid, even moving models. What is missing, however, is consistent, systematic, intentional manipulation of eye movement and fixation in the process. (There have been methods that dabbled in attempts at such explicit control, e.g., "Suggestopedia"?)

In haptic pronunciation teaching we generally control visual attention with gesture-synchronized speech which highlights stressed elements in speech, and something analogous with individual vowels and consonants. How much are your students really paying attention, visually? How much of your listening comprehension instruction is audio only, as opposed to video sourced? See what I mean?

Look. You can do better pronunciation work.


Citation: (Open access)









Thursday, July 27, 2017

Killing pronunciation 7: Talking learners (and instructors) out of pronunciation change

How do you persuade students to work on their pronunciation--or sell them on it, especially pronunciation-related homework? If you are using the more "distal" senses, such as sight and/or sound, according to a new study by Elder, Schlosser, Poor, and Xu of Brigham Young University, summarized by Science Daily, you may not have the right approach. If, on the other "hand," your method evokes a more "proximal" sense experience (such as movement, touch and/or taste), you are probably on the right track. (I'm sure you can see where this is headed!)

The BYU study dealt with the impact of advertising: what type of pitch and/or sensory imagery seems to get you to make a commitment to buy sooner rather than later. The actual journal title, So Close I Can Almost Sense It: The Interplay between Sensory Imagery and Psychological Distance, describes the research well. What they found, not surprisingly, is that imagery connecting to or evoking a "felt" somatic response from the body, in effect, draws you in faster and more effectively.

That does not mean that you DO something physical, only that the imagery--on a screen, in this case--may get the customer or learner's brain to respond AS IF actual touch or taste were involved, generating a very real feeling- or taste-related memory. That mirroring effect, in part mediated by "mirror neurons" in the brain, is well established in brain research. To the brain, under most circumstances, the distinction between how we feel when we observe and when we do can be minimal. Our metaphors are more than metaphors, in other words.

Some of the variability here may have to do with our personal instructional style in bringing learners' attention to, in this case, what they need to do outside of class. How do you do that? A list somewhere in the syllabus? An oral announcement? Something written on the board? A brief oral run through of what is to be done? A brief rehearsal w/students of what is to be done? What is very important here is not the actual classroom activity but the imagery that it evokes. And the key to that is what prior schema the classroom event is linking back to--and how, in the moment, it is delivered and experienced.

Pronunciation instruction done right is both an exceedingly physical and a meta-cognitive process. What haptic work attempts to do is achieve that balance consistently. There are other ways to do that, of course, but most student textbooks, for example, either don't or can't, in part because the activities are presented and taught in a purely linear fashion. Haptic is ALWAYS simultaneous--sound, movement, and cognitive (haptic) engagement--in effect, communicating more intentionally with learners about pronunciation change in and with somatic (body-based) imagery.

Still not sold? Try rereading the blog in the hot tub or on an exercise ball . . .

Full citation from ScienceDaily.com:
Brigham Young University. (2017, June 28). Now or later: How taste and sound affect when you buy: The way ads play on our senses influences the timing of our purchases. ScienceDaily. Retrieved July 23, 2017 from www.sciencedaily.com/releases/2017/06/170628095858.htm




Saturday, January 28, 2017

Killing pronunciation improvement: better heard (and felt) but not seen!

Fascinating study, "Visual Distractors Disrupt Audiovisual Integration Regardless of Stimulus Complexity," by Gibney et al. of the Department of Neuroscience, Oberlin College.

Tigger warning: This is a thick, technical read, but the conclusions of the study have potentially important implications for pronunciation teaching, especially attempts to enhance uptake of new and corrected sounds or patterns that rely on effective integration of sounds, images, movement and vocal resonance. 

In essence, what the research examined was, as the title suggests, how distractions in the visual field affected subjects' attention and their ability to learn and recall audio-visual stimuli (images on a computer screen accompanied by sounds). What was striking (again, as evident in the title) was that no matter how complex the task of associating the targeted sound with the visual image or object in focus, with even the slightest distraction created on the screen, e.g., an object briefly appearing in a corner, the subject's ability to integrate and recall the complex target later . . . was compromised.

The implications for pronunciation teaching? Not surprisingly, attention is critical in integrating sensory information. We know that, of course. What is more interesting is the idea that any visual distraction whatsoever that occurs while sound, movement and visual imagery (such as the orthography or phonetic representation of a word or phrase) are being "integrated" may seriously undermine the process. In other words, visual attention and eye tracking during the process may have a dramatic impact. That is a "variable" that can, in principle, be managed in the classroom, although most do not consider visual distraction to be potentially that disruptive of pronunciation instruction. But it certainly can be.

We discovered early on in haptic pronunciation work--where not only sound, visual imagery, movement and vocal resonance are involved, but touch as well--that visual distraction can seriously derail the process. This research suggests that the same effect may be a significant impediment in general pronunciation work as well, especially oral work, in some contexts.

The sterile, featureless language laboratory booth of old may have had more going for it than we thought! In early haptic work we experimented with controlling eye tracking. Perhaps it is time we revisited that idea. It certainly deserves our undivided attention.

Original research article: Front. Integr. Neurosci., 20 January 2017 | https://doi.org/10.3389/fnint.2017.00001

Tuesday, September 20, 2016

What (a window into the brain of) the mouse can teach us about learning pronunciation

Trigger warning: If you are especially attached to your mouse, you may want to skip over the third, italicized paragraph below . . . 

Fascinating research by Funamizu, Kuhn and Doya of Okinawa Institute of Science and Technology Graduate University, "Neural substrate of dynamic Bayesian inference in the cerebral cortex", originally published in Nature Neuroscience, summarized by Science Daily as, "Finding your way around in an uncertain world". (Full citation below.)

Basically, the study looked at how the (mouse's) brain uses movement of the mouse's body in creating meaning and thought. Reading the research methodology is not for the faint of heart. Here is a piece of the Science Daily summary describing it:

The team performed surgeries in which a small hole was made in the skulls of mice and a glass cover slip was implanted onto each of their brains over the parietal cortex. Additionally, a small metal headplate was attached in order to keep the head still under a microscope. The cover slip acted as a window through which researchers could record the activities of hundreds of neurons using a calcium-sensitive fluorescent protein that was specifically expressed in neurons in the cerebral cortex . . . The research team built a virtual reality system in which a mouse can be made to believe it was walking around freely, but in reality, it was fixed under a microscope. This system included an air-floated Styrofoam ball on which the mouse can walk and a sound system that can emit sounds to simulate movement towards or past a sound source.(ScienceDaily, September 16, 2016).

Got that? They then observed how the mice "navigate" the virtual space under different conditions, including almost complete reliance on body movement, rather than with access to any visual or auditory stimulus. The surprising finding (at least to me) was the extent to which kinesthetic memory or engagement took over, directing the mice to the "reward." There is much more to the work, of course, but this "window" into the functioning of the cerebral cortex is really consistent with a wide range of studies that point to "body-based" meaning creation and control.

So, what is the possible relevance of that to pronunciation teaching? (I never thought you'd ask!) Our work in haptic pronunciation teaching, for example, is based on the assumption, in effect, that "gesture comes first" (before sound and visual phonemes/graphemes) in instruction. (Based on Lessac's principle of "Train the body first" in voice and stage movement work.) For the most part today, pronunciation methodologists and theorists still see the role of gesture in teaching as being secondary, at best, an optional "reinforcer" of word-sound associations or a vehicle for "loosening up" learners and their bodies and emotional states-- or even just having fun!

What the "mice" study suggests is that sound, movement and vision are more integrated and interdependent in the brain than we generally acknowledge--or at least that movement is more central to meaning creation and retrieval. There are a number of body and movement-based theories that support that observation. In other words, the use of gesture in instruction deserves much more attention than it is currently getting. Much more than just a gesture . . .

Citation:
Okinawa Institute of Science and Technology Graduate University - OIST. "Finding your way around in an uncertain world." ScienceDaily. ScienceDaily, 19 September 2016. 

Friday, January 1, 2016

3D pronunciation instruction: Ignore the other 3 quintuplets for the moment!

For a fascinating look at what the field may feel like--from a somewhat unlikely source--a 2015 book, 3D Cinema: Optical illusions and tactile experience, by Ross, provides a (phenomenal) look at how and why contemporary 3D special effects succeed in conveying the "sensation of touch." In other words, as is so strikingly done in the new Star Wars epic, the technology tricks your brain into thinking that you are not only there flying that star fighter, but that you can feel the ride throughout your hands and body as well.

This effect is not just tied to current gimmicks, such as moving and vibrating theater seats, spray mist blown on you, or various odors and aromas being piped in, although it can be. Your mirror neurons respond more as if it is you who is doing the flying, that you are (literally) "in touch" with the actor. The neurological interconnectedness between the senses (or modalities) provides the bridge to a greater and greater sense of the real, or at least a very "close encounter."

How does the experience in a good 3D movie compare to your best multi-sensory events or teachable moments in the classroom, focusing on pronunciation? 

It is easy to see, in principle, the potential for language teaching: creating one vivid teachable moment after another, "Wowing!" the brain of the learner with multi-sensory, multi-modal experience. As noted in earlier blogposts on haptic cinema, based in part on Marks (2002), that concept--"the more multi-sensory, the better," the idea that by just stimulating more of the learner's (whole) brain virtually anything is teachable--is implicit in much of education and entertainment.

Although earlier euphoria has moderated, one reason it can still sound so convincing is our common experience of remembering the minutest detail of a deeply moving or captivating event or presentation. We all have had the experience of being present at a poetry reading or great speech where it was as if all our senses were alive, on overdrive. We could almost taste the peaches; we could almost smell the gunpowder.

Part of the point of 3D cinema is that it becomes SO engaging that our tactile awareness is also heightened enormously. As that happens, the associated connections to other modalities are "fired" as well. We experience the event more and more holistically. How that integration happens exactly can probably be described informally as something like: audio-visual-cognitive-affective-kinaesthetic-tactile-olfactory and "6th sense!" experienced simultaneously.

At that point, apparently, the brain is multitasking at such high speed that everything is perceived as "there" all at once. And that is the key notion. That would seem to imply that if all senses are strongly activated and recording "data," then what came in on each sensory circuit will later be equally retrievable. Not necessarily. As extensive research and countless commercially available systems have long established, for acquisition of vocabulary, pragmatics, reading skills and aural comprehension, the possibilities of rich multi-sensory instruction seem limitless at this point.

Media can provide memorable context and secondary support, but why that often does not work as well for learning some other skills, including pronunciation, is still something of a mystery. (Caveat emptor: I am just completing a month-long "tour of duty" with seven young grandchildren . . . ) In essence, our sensory modalities are not unlike infant octuplets competing for our attention and storage space. Although it is "possible" to attend to a few at once, it is simply not efficient. Best case, you can do maybe two at a time, one on each knee.

The analogy is more than apt. In a truly "3D" lesson, consistent with Ross (2015), whether f2f or in media, the 5 primary "senses" of pronunciation instruction (visual, auditory, kinaesthetic, tactile and meta-cognitive) are nearly equally competitive--that is, vividly, even overwhelmingly, present in the lesson. Tactile/kinaesthetic can be unusually prominent and accessible, in part, as noted in earlier blogposts, because it serves to "bind together" the other senses. In that context, consciously attending to any two or three simultaneously is feasible.

So how can we exploit such a vivid, holistically experienced, 3D-like milieu, where movement and touch figure more prominently? I thought you'd never ask! Because of the essentially physical, somatic experience of pronunciation--and this is critical, from our experience and field testing--two of the three MUST be kinaesthetic and tactile: a basic principle of haptic pronunciation teaching. (Take your pick of the other three!)

Consider "haptic" simply an essential "add-on" to your current basic three (visual, auditory and meta-cognitive), and "do haptic" along with one or two of the other three. The standard haptic line of march:

A. Visual-Meta-cognitive (very brief explanation of what, plus symbol, or key word/phrase)
B. Haptic-Meta-cognitive (movement and touch with spoken symbol name or key word/phrase, typically 3x)
C. Haptic-Auditory (movement and touch, plus basic sound, if the target is a vowel or consonant temporarily in isolation, or target word/phrase, typically 3x)
D. Haptic-Visual-Auditory (movement and touch, plus contextualized word or phrase, spoken with strong resonance, typically 3x)
E. Some type of written note made for further reference or practice
F. (Outside-of-class practice, for a fixed period of up to 2 weeks, follows much the same pattern.)
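For readers who think procedurally, the line of march above can be sketched as data. This is only an illustrative model of the A-F sequence and its typical repetition counts; all identifiers here (the list name, the `plan` function, the modality labels) are my own shorthand, not part of any EHIEP/AHEPS materials.

```python
# Illustrative sketch of the haptic "line of march" above.
# Step labels and repetition counts follow the A-F list;
# all identifiers are hypothetical, for illustration only.

HAPTIC_LINE_OF_MARCH = [
    # (step, modalities engaged, typical repetitions)
    ("A", ["visual", "meta-cognitive"], 1),      # brief explanation, symbol or key word
    ("B", ["haptic", "meta-cognitive"], 3),      # movement + touch with spoken symbol name
    ("C", ["haptic", "auditory"], 3),            # movement + touch, plus basic sound or word
    ("D", ["haptic", "visual", "auditory"], 3),  # contextualized word/phrase, strong resonance
    ("E", ["visual"], 1),                        # written note for later reference
    ("F", ["haptic", "auditory"], 3),            # outside-of-class practice, same pattern
]

def plan(target: str) -> list[str]:
    """Expand the line of march for one target sound or word into
    a flat list of practice actions, one entry per repetition."""
    actions = []
    for step, modalities, reps in HAPTIC_LINE_OF_MARCH:
        for r in range(1, reps + 1):
            actions.append(f"{step}{r}: {target} [{'+'.join(modalities)}]")
    return actions

# Example: a lesson targeting the vowel in "heed"
for action in plan("heed"):
    print(action)
```

Laid out this way, one target works through 14 brief, anchored repetitions across the six steps--a reminder of how compact each "teachable moment" has to be.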

Try to capture the learner's complete (whole body/mind) attention for just 3 seconds per repetition--if possible! Not only can that temporarily let you pull apart the various dimensions of the phonemic target for attention, but it can also serve to create a much more engaging (near 3D) holistic experience out of a potentially "senseless" presentation in the first place--with "haptic" in the mix from the outset.

Happy New Year!

Keep in touch.

Citation:
Ross, M. (2015). 3D Cinema: Optical Illusions and Tactile Experiences. London: Springer, ISBN: 978-1-349-47833-0 (Print) 978-1-137-37857-6 (Online)



Wednesday, October 21, 2015

8 ways to teach English rhythm to EVERYbody but no BODY!

Here's one for your "kitchen sink" file (a research study that throws almost every imaginable technique at a problem--and succeeds) . . . well, sort of. In Kinoshita (2015), over a four-week course, JSL students were taught Japanese rhythm using seven different, relatively standard procedures. If you are new to rhythm work, check it out.

Those included: rhythmic marking (marking rhythm groups with a pencil and then tracing them with the fingers), clapping (hands), pattern grouping (identifying the type of rhythm pattern for known vocabulary), metronome haiku (listening to and reading haiku to a metronome), auditory beat (reading grouped text out loud), acoustic analysis (using Praat), and shadowing (attempting to read or speak along with an audio recording or live person). Impressive! They worked with each one for over an hour.

Not surprisingly, their rhythm improved. It is not entirely clear what else may have contributed to that effect, including other instruction and out-of-class experience, since there was no control group, but the students liked the work and identified their favorite procedure, which apparently aligned with their self-identified cognitive/learning style. After that many hours of rhythm work, though, it had to be a bit difficult for learners to assess which technique they "liked" best, let alone which actually worked best for them individually.

Of particular interest here are the first two techniques, marking rhythm and tracing along with a finger, and clapping hands--both of which are identified as "kinaesthetic" by Kinoshita. (The other techniques are noted as combinations of auditory and visual.) They are, indeed, movement- and touch-based. The first at least involves moving a finger along a line. The second, clapping hands, could, in principle, involve more of the body than just the hands, but it also might not, of course.
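Kinoshita's modality classification, as described above, can be summarized compactly. The dictionary below is my own illustrative arrangement (the structure and labels are mine); only the grouping--the first two procedures as kinaesthetic, the rest as auditory-visual--comes from the study's descriptions:

```python
# Kinoshita's (2015) seven rhythm procedures, grouped by the modality
# labels described above. The dictionary is illustrative only; just the
# classifications themselves come from the study's summary.

TECHNIQUES = {
    "rhythmic marking":  "kinaesthetic",     # mark rhythm groups, trace with fingers
    "clapping":          "kinaesthetic",     # clap hands
    "pattern grouping":  "auditory-visual",
    "metronome haiku":   "auditory-visual",
    "auditory beat":     "auditory-visual",
    "acoustic analysis": "auditory-visual",  # using Praat
    "shadowing":         "auditory-visual",
}

kinaesthetic = [t for t, m in TECHNIQUES.items() if m == "kinaesthetic"]
print(kinaesthetic)  # the two movement- and touch-based procedures
```

Seen this way, only two of the seven procedures engage the body at all, and neither, as argued below, reaches the "haptic" threshold.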

Neither technique, at least on the face of it, meets our basic "haptic" threshold--involving more full-body engagement and distinctly anchoring stressed vowels. By that I mean that including touch in the process does not, in principle, help to anchor (better remember) the internal structure of the targeted rhythm groups--in fact it may serve to help cancel out memory for different levels of stress, length and volume of adjacent syllables. (There have been several blogposts dealing with this topic, one recently and the first, back in 2012 that focused on how haptic "events" are encoded or remembered.)

In essence, the haptic "brain" area(s) are not all that good at remembering different levels of pressure applied to the same point on the body. In other words, it is more challenging, for example, to remember which syllable in a clapped or traced rhythm group was prominent. (The number of syllables involved may be another matter.) So, to the extent that rhythm cannot or should not be divorced from word and phrasal stress, Kinoshita's two procedures probably are not contributing much variance to the final "progress" demonstrated.

That is not to say that more holistic, "full body" techniques such as "jazz chants," poetry, songs or dance--such as those promoted by Chan in her paper in the same conference proceedings (Pronunciation Workout)--are not useful, fun, engaging and motivating, or that they do not serve functions other than acquisition of the rhythm of an L2.

A basic assumption of haptic work is that systematic body engagement, involving the whole person, especially from the neck down, is essential to efficient instruction and learning. (Train the body first! - Lessac) v4.0 will include extensive use of "pedagogical dance steps" and practice of most pedagogical movement patterns (gesture plus touch) to rhythmic percussion loops.

As always, if you are looking for a near perfect "haptic" procedure for teaching English rhythm, where differentiated movement and touch contribute substantially to the process, I'd, of course, recommend beginning with the AHEPS v3.0 Butterfly technique--at least as a replacement for hand clapping. And for most of the others as well, as a matter of fact!


Full citation:
Kinoshita, N. (2015). Learner preference and the learning of Japanese rhythm. In J. Levis, R. Mohammed, M. Qian, & Z. Zhou (Eds.), Proceedings of the 6th Pronunciation in Second Language Learning and Teaching Conference (ISSN 2380-9566), Santa Barbara, CA (pp. 49-62). Ames, IA: Iowa State University.

Wednesday, October 7, 2015

Great memory for words? They're probably out of their heads!

Perhaps the greatest achievement of neuroscience to date has been to repeatedly (and empirically) confirm common sense. That is certainly the case with teaching or training. Here's a nice one.

Clipart: Clker.com
For a number of reasons, the potential benefit of speaking a word or words out loud and in public when you are trying to memorize or encode it--rather than just repeating it "in your head"--is not well understood in language teaching. For many instructors and theorists, the possible negative effects on the learner of speaking in front of others and getting "unsettling" feedback far outweigh the benefits. (There is, of course, a great deal of research--and centuries of practice--supporting the practice of repeating words out loud in private.)

In what appears to be a relatively elegant and revealing (and also common-sense-confirming) study, Lafleur and Boucher of the University of Montreal, as summarized by ScienceDaily (full citation below), explored which condition produces the best subsequent memory for words: (a) saying a word to yourself "in your head," (b) saying it to yourself in your head while moving your lips, (c) saying it out loud to yourself, and (d) saying it out loud in the presence of another person. The last condition was substantially the best; (a) was the weakest.

The researchers do speculate as to why that should be the case. (ScienceDaily.com quoting the original study):

"The production of one or more sensory aspects allows for more efficient recall of the verbal element. But the added effect of talking to someone shows that in addition to the sensorimotor aspects related to verbal expression, the brain refers to the multisensory information associated with the communication episode," Boucher explained. "The result is that the information is better retained in memory."


The potential contribution of interpersonal communication as context information to memory for words or experiences is not surprising. How to use that effectively and "safely" in teaching is the question. One way, of course, is to ensure that the classroom setting is both as supportive and nonthreatening as possible. Add to that a social experience with others that also helps to anchor the memory better.

Haptic pronunciation teaching is based on the idea that instructor-student and student-student communication about pronunciation must be both engaging and efficient--and resonantly and richly spoken out loud. (Using systematic gesture does a great deal to make that work. See v4.0 later this month for more on that.)

I look forward to hearing how that happens in your class or your personal language development. If that thread gets going, I'll create a separate page for it. 

Keep in touch!

Citation:
University of Montreal. "Repeating aloud to another person boosts recall." ScienceDaily. ScienceDaily, 6 October 2015.

Saturday, February 7, 2015

Why haptic (pronunciation) teaching and learning should be superior!

Wow. How about this "multi-sensory" conclusion from Max-Planck-Gesellschaft researchers Mayer, Yildiz, Macedonia, and von Kriegstein, Visual and motor cortices differentially support the translation of foreign language words (full citation below), summarized by ScienceDaily (boldface added for emphasis):

"The motor system in the brain appears to be especially important: When someone not only hears vocabulary in a foreign language, but expresses it using gestures, they will be more likely to remember it. Also helpful, although to a slightly lesser extent, is learning with images that correspond to the word. Learning methods that involve several senses, and in particular those that use gestures, are therefore superior to those based only on listening or reading."

The basic "tools" of haptic pronunciation teaching, what we call "pedagogical movement patterns," are defined as follows:

As a word or phrase is visualized (visual) and spoken with resonant voice, a gesture moving across the visual field is performed, culminating in the hands touching on the stressed syllable of the word or phrase (cognitive/linguistic), as the sound of the word is experienced as articulatory muscle movement in the upper body and as vibrations emanating from the vocal cords and (to some degree) sound waves returning to the ears (auditory).
Clip art: Clker.com

And what bonds that all together? A 2009 study by Fredembach et al. demonstrated just how haptic anchoring--and the PMP--should work: in relative terms, the major contribution of touch may generally be the exploration and assembly of multi-sensory experiences. The key is to do as much as possible to ensure that learners keep as many senses in play during "teachable moments," when new word-sound complexes are being encountered and learned.

Make sense? Keep in touch!

Citations:
Fredembach, B., Boisferon, A. & Gentaz, E. (2009) Learning of arbitrary association between visual and auditory novel stimuli in adults: The “Bond Effect” of haptic exploration. PLoS ONE, 2009, 4(3), 13-20.
Max-Planck-Gesellschaft. (2015, February 5). Learning with all the senses: Movement, images facilitate vocabulary learning. ScienceDaily. Retrieved February 7, 2015 from www.sciencedaily.com/releases/2015/02/150205123109.htm

Monday, January 5, 2015

Revenge of the multi-taskers: Distracted during motor (or pronunciation) learning or practice? No problem!

This is the second in a series of posts on creating and managing effective language or pronunciation practice, (analogically) based on Glyde's guitar practice framework. (See earlier post.) His principle #5 was common-sensical: failing to avoid distraction.
Clip art: Clker.com

Earlier posts have looked at the interplay between haptic (movement and touch) and visual and auditory modalities. One general finding of research has been that visual stimuli or input tend to override auditory and haptic. In part for that reason, we have worked to restrict extraneous visual auditory distraction during haptic pronunciation work. In therapy, on the contrary, many times distraction is used quite strategically to draw the patient's attention away from a problematic experience or emotion.

Now comes a fascinating study by Song and Bédard of Brown University (summarized by ScienceDaily; see full citation below) demonstrating that visual distraction during motor learning may not be problematic after all. As long as subjects experienced relatively similar distraction on the recall task, the fact that they had been systematically distracted during the learning task seemed to have little or no effect. Furthermore, when "distracted" subjects were later tested in a "non-distracting" condition, they did not perform as well as fellow subjects tested in the "distracted" condition.

In other words, the visual context of motor learning was not a factor in recall--as long as it was reasonably consistent with the original learning milieu.

So, what does all that mean for effective pronunciation practice? Quite a bit, perhaps. Context, from many perspectives, is critical. Establishing linguistic context has been a given for decades; managing the classroom environment (or the homework practice venue) so that new or changed sounds are recalled in a "relatively similar setting" to the one in which they were learned is another question.

One of the principles of haptic pronunciation teaching is to use systematic gesture + touch across the visual field to anchor sound change--maintaining as much of learner attention as possible for at least 3 seconds. In practice, the same pedagogical movement patterns (PMP) are used--and, according to learners, even in spontaneous later recall of new material the PMPs often figure prominently in visual/auditory recall as well.

So, to paraphrase Glyde's 5th principle: Avoid inconsistent distraction (in pronunciation teaching), at least in those more motor-based work or phases. Or better yet, embrace it!

Citation:
Brown University. (2014, December 9). Distraction, if consistent, does not hinder learning. ScienceDaily. Retrieved December 18, 2014 from www.sciencedaily.com/releases/2014/12/141209120141.htm




Saturday, November 29, 2014

Seeing before believing: key to (haptic) pronunciation teaching

We call it "haptic pronunciation teaching." That is actually shorthand for something like:
simultaneous haptic-integration of visual-auditory-kinesthetic-tactile modalities in anchoring pronunciation

Almost every functional system for teaching pronunciation includes graphics or videos of some kind, even if it is just a black and white line drawing of the mouth. Some substitute extensive written or verbal explanation for visual models. Our basic approach has been to use touch to link the senses, but often without too much concern for the precise order in which learner attention is directed through the various sources of information on the sound.

A fascinating new study on sensory sequencing in dance instruction by Blitz of Bielefeld University, reported in ScienceDaily (see complete reference below), suggests that real-time sequencing in training learners in the use of pedagogical movement patterns (gesture plus touch) is probably much more critical than we have assumed. That is especially relevant to how we (hapticians) maintain attention in the process. In other words, in the classroom, in what order do we introduce and train learners in the parameters of sounds and sound processes? That is, of course, equally relevant to all teaching!

NOTE: Please accept for the moment the parallel between dance instruction and our haptic work, that is, training learners to experience, through gesture/touch and placement in the visual field, L2 or L1 sounds associated with targeted words. Also, allow me to sidestep the question of whether dancers are, by nature, probably a bit "hyper-kinesthetic!"

The study discovered that first viewing a dance sequence without verbal explanation or instruction--and then hearing or reading instructions--was significantly more effective than the converse for long-term memory of the sequence. Both visual and "cognitive" sources were present, but the order was the critical variable. The subjects were apparently free to repeat both the visual and verbal inputs a limited number of times, but not to "mix" their ordering.

In other words, insight drawn from what had already been experienced was far more effective than a verbal, cognitive schema used to set up and productively exploit the visual experience or model to come. For us, the pedagogical implications are relatively clear, something like: (1) observation (video clip), then (2) brief verbal explanation, then (3) experiential training in doing the gestural pattern, then (4) practice, along with (5) focused explanation of the context of the targeted sound.

How might that perspective impact your (pronunciation) teaching?

AMPISys, Inc.

See what I mean?






Full reference: Bielefeld University. "Best sensory experience for learning a dance sequence." ScienceDaily. ScienceDaily, 7 November 2014.

Wednesday, June 4, 2014

Visual "Socailization" and visual pronunciation teaching methods

In a recent interview, Robert Thomson, chief executive officer of News Corp, commented on the far-reaching impact of "visual socialization" on today's media and news organizations. One observation was that we are only beginning to understand the new, overwhelming dominance of visual learning and what it means for both social connectedness and education. To get a feel for what visual connectedness and "Socail media" may be like, watch the "Socail Cave" video by Tiazzoldi, or even check it out on Pinterest.
Photo credit: Moses Lam

Well . . . yes, there may be a bit of random "dys-graphia" involved there, but the two pieces together do underscore Thomson's point: the all-consuming influence of visual media. I may just adopt that acronym: SOCAIL? (So, Over-the-top visual-Cognitive pronunciation teaching really Ain't It, Lads?)

It is easy to underestimate the impact on our work. There are several methods or companies that appear to be more explicitly visual, such as "EyeSpeakEnglish.com." How well the new "visually socialized" generations of learners (VSLs) can learn pronunciation--can connect sound and movement to their primary learning modality, visual imagery--is, of course, the question. In general, research and practice up to this point suggest that visual dominance simply overrides not only the auditory but the tactile as well. (See--literally--dozens of previous blog posts here on that topic!)

My guess is that many highly visual pronunciation teaching methods (those without strong compensatory auditory and movement components by design) are anachronisms at best, created before the emergence of new media and VSLs, overcompensating for the earlier attraction that "colourful" or engaging visual images held for those who had not experienced them previously.

The antidote? (And I could provide anecdotes ad infinitum, of course.) Haptic. Keep in touch. 

Sunday, March 9, 2014

Getting pronunciation off your chest . . .

Photo: Vimeo via Telegraph.com
This one is too good to pass up. (Hat tip to Brian Teaman!) Leave it to the MIT Media Lab (and Heibeck, Hope and Legault), as reported by Kinder at the Telegraph, to come up with a vest that will allow you to feel the emotional states of the characters in a book--what they call "Sensory Fiction." Such "haptic vests" have been around for quite some time, but this one is more closely tied to a narrative that can serve "pedagogical" purposes. With the vest on, you experience something of what the character is feeling through a combination of temperature and pressure changes.

All you need for our work is to plug in the audio track and stick on a few mini-speakers around the upper body to make it a great tool for getting the "felt sense" of a sound. Deliver that with a great voice with rich resonance (George Clooney?), especially in a text with a bit more emotional zip than your average pronunciation book. (No great challenge there, of course!) Finally, connect it up to the EHIEP pedagogical movement patterns (gestures + touch) and you have the perfect "Haptic Friction."

Got to get me one of those!

Keep in touch!

Saturday, May 11, 2013

Paying attention to touch in pronunciation teaching. (No applause, please!)

Clip art: Clker
The most frequent question we get at workshops is: "How does haptic work, anyway?" This 2011 study by Blankenberg of Charité Universitätsmedizin Berlin (summarized by ScienceDaily) was instrumental in helping me understand how using touch and movement, synchronized with speech, can function to enable both encoding in memory and subsequent recall. The key, it turns out, is something analogous to Gendlin's notion of "felt sense": both touch and conscious attention to the haptic "event" are essential to effective pedagogical or therapeutic intervention. According to the research, access to haptic or tactile memory can happen at any of several levels, from conscious to unconscious.

For example, having touched the table as you say a stressed syllable of a word may help you remember both the word and the stressed syllable in it later in spontaneous speaking. It might not--but consciously recalling the sensation of the event when you touched the table should increase your chances considerably. In other words, touch may not automatically activate memory of the "nexus" of the word but consciously focusing on the tactile dimension of the event may.

Earlier posts and the linked research studies have examined why clapping hands on every syllable of a word, with a stronger clap on the stressed syllable to anchor stress, may not work: all those "touches" are preserved almost as equals in memory, at least for a time. Unless something more is done to mark the stressed one (visual, auditory, kinaesthetic or some combination), the contribution of touch, best case, can be a wash; worst case, it compromises the focus of the gesture.

In other words, as we have seen in many different studies, touch acts as the "exploratory glue" that helps bind the senses together, creating the multiple modality experience we call "haptic anchoring." So why call it "haptic anchoring" then? Just to better bring it to your attention--whatever haptic pronunciation target that you happen to touch upon . . . 

Tuesday, May 7, 2013

Haptic cinema and EHIEP-tic pronunciation training

My discovery of "haptic cinema," and of that approach to experiential entertainment and teaching, about 6 years ago was a game changer. The integration of the senses, especially the place of perceived texture in that medium, became the phenomenological model for "haptic-integrated clinical pronunciation," and still is. Here is a great example, "Haptic cinema: a sensory interface to the city." It is about 11 minutes long. Put on some earphones, sit someplace where you'll have no visual distractions, and experience it.

Clip art: Clker
That is what it should feel like--the felt sense of haptic anchoring in EHIEP instruction--when the learner articulates a sound or word with rich vocal resonance as the hands move across the visual field (with some degree of eye tracking) and touch on the stressed vowel, possibly followed by a short continued movement completing an intonation "denouement."

To prepare for watching it, you might go outside and hug a tree first . . .