Showing posts with label proprioceptive.

Sunday, November 4, 2012

Anchoring pronunciation: Do you see what you are saying?


Clip art: Clker
You can, in fact--if you are pronouncing a sound, word, or phrase using EHIEP-like pedagogical movement patterns, or PMPs (gestures across the visual field terminating in some form of touch by both hands). Not only CAN you, according to research by Xi and colleagues at Northwestern University, summarized by Science Daily, but your eyes strongly interpret for you the "feeling of how it happens." The visual "character" of the dynamic gesture (its positioning, fluidity, distance from the eyes, and texture on contact with the other hand) may well override the actual tactile feedback from your hands and the proprioceptive "coordinates" of movement from your arms.

In the study, subjects were simultaneously presented with video clips that slightly contradicted what their hands and arms were doing. It was clearly demonstrated that even though subjects were instructed to ignore the video and concentrate on the actual positioning, movement, and related information about touch and weight coming from the hands, the "eyes have it." What they were seeing reinterpreted the other incoming sensory data.

As noted in earlier posts, the visual modality can often override the others. What is "new" here, and what contributes to our understanding of how and why haptic integration works, is that the subjects' perception of the EHIEP sound-touch-movement "event" would appear to be strongly influenced by the style, flair, precision, and consistency of the PMP. That has been one of the key problems in creating the video models: insufficient clarity and consistency in the execution of the PMPs (by me!)

This is both good news and bad news. Good, in that the PMP is indeed a potentially very powerful anchor--and the visual "feel" of each can contribute substantially to anchoring effectiveness. Bad, in that for maximal effectiveness the video/visual model needs to be exceedingly precise and consistent. (I have explored the use of avatars instead of me, but there are even bigger potential issues there.) I am preparing/getting in shape now to do a new set of videos after the holidays, based on this and similar research. Can't wait to see what those feel like!

Tuesday, July 17, 2012

Touching tactile tactics for tapping new pronunciation?


Clip art: Clker
Previous posts have alluded to the fact that students working with haptic-integrated pronunciation change often report beginning to "listen" with their bodies, as if they have recorded a word or phrase by "moving" with it or mirroring what was said. (Recent research on mirror neurons, of course, strongly supports that observation.) Two fascinating studies summarized by Science Daily address the underlying mechanisms that may be involved.

One was conducted by researchers at Yale, in which subjects were trained, using a robotic device attached to their jaws, to pronounce new sounds. As they did, they became substantially better at hearing those sounds as well, the researchers noting that " . . . Learning to talk also changes the way speech sounds are heard. . . " Wow. The other, by a team at the University of British Columbia, basically "confused" subjects into thinking that what they heard were aspirated consonants (when they actually heard voiced, unaspirated consonants) by gently hitting them in the back of the neck with a small burst of air on targeted sounds. (That's right. Got to try that sometime!) The first was a bit more kinaesthetic than tactile; the second, decidedly more tactile. In both cases, the haptic or tactile "anchoring" dramatically affected perception of sounds.

That is also the intent of the haptic-integrated protocols of the EHIEP system. The idea is to train learners to haptically anchor new sounds or patterns, what we call "MAMs" (more appropriate models, using movement and touch along with articulating the sound), at places in the visual field that are as proprioceptively, visually, and perceptually distinct as possible from the learner's "inaccurate" or less appropriate current version of the sound. The summary of the latter study begins with this great line: "Humans use their whole bodies, not just their ears, to understand speech . . . " Really.

Saturday, July 16, 2011

Insight from the blind

Clip art: Clker
In this 2000 paper, Design of Haptic and Tactile Interfaces for Blind Users, Christian makes an important observation: " . . . At a high level, an interface is a collection of objects and operations one can perform on those objects. The visual representation of an interface on the monitor is only one interpretation. The idea is that when affording the blind access to an interface, one should not convey the visual representation, but rather the interface itself. . . .  by translating the semantic level of the interface, one can convey the same constructs that are available to sighted users."
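To make the quoted idea a bit more concrete, here is a minimal sketch (in Python, with entirely hypothetical names--nothing here is taken from Christian's paper) of an interface defined at the semantic level, as objects and operations, which can then be presented either visually or non-visually without changing the constructs themselves.

```python
# Hypothetical illustration of "translating the semantic level of the interface":
# the interface is the set of actions; visual or tactile output is one rendering.
from dataclasses import dataclass
from typing import Callable


@dataclass
class Action:
    name: str                      # semantic label, e.g. "play model sound"
    perform: Callable[[], None]    # the operation itself


class SemanticInterface:
    """Objects and the operations one can perform on them."""
    def __init__(self) -> None:
        self.actions: list[Action] = []

    def add(self, action: Action) -> None:
        self.actions.append(action)


def render_visually(ui: SemanticInterface) -> None:
    # One interpretation: draw each action as a button (stubbed as print).
    for a in ui.actions:
        print(f"[button] {a.name}")


def render_nonvisually(ui: SemanticInterface) -> None:
    # Another interpretation of the SAME constructs: map each action
    # to a distinct spoken/tactile cue (stubbed as print).
    for i, a in enumerate(ui.actions):
        print(f"[tactile cue {i}] {a.name}")


if __name__ == "__main__":
    ui = SemanticInterface()
    ui.add(Action("play model sound", lambda: None))
    ui.add(Action("record my attempt", lambda: None))
    render_visually(ui)      # sighted presentation
    render_nonvisually(ui)   # non-visual presentation of the same interface
```

The point of the sketch is only that the two renderings share one underlying interface; which sense carries it is, as Christian says, just one interpretation.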

In other words, our use of the visual field is, more accurately, use of the "proprioceptive" field, which involves much more than just sight. We have all observed learners who, in attempting to focus on a sound, will close their eyes to enhance their concentration. It turns out that in a haptically anchored system (such as EHIEP), for most learners, doing a full protocol (a set of sounds or sound patterns) or a single sound with eyes closed significantly intensifies concentration--and almost certainly retention. Compared to "simply" saying a word "blind," the addition of the haptic anchor (movement terminating in touch of both hands in the visual field) creates an extraordinarily "vivid" experience.

Although I have not systematically explored the application of this concept to all protocols, the idea of extensively blocking the visual modality is an intriguing possibility. It seems to work surprisingly well in most contexts. Try it. You are in for a "blinding" revelation . . .