Showing posts with label somatic grounding.

Tuesday, January 22, 2019

Differences in pronunciation: Better felt than seen or heard?

This feels like a "bigger" study, maybe even a new movement! (Speaking of new "movements," be sure to sign up for the February haptic webinars by the end of the month!)

There are any number of studies in various fields exploring the impact of racial, age, or ethnic "physical presence" (what you look like) on perception of accent or intelligibility. In effect, what you see is what you "get"! Visual input will often override audio input, that is, what the learner actually sounds like. Actually, that may be a good thing at times . . .

Haptic pronunciation teaching and similar movement-based methods use visual signalling techniques, such as gesture, to communicate with learners about the status of sounds, words and phrases. Exactly how that works has always been a question.

Research by Collegio, Nah, Scotti and Shomstein of George Washington University, summarized by NeuroscienceNews.com as "Attention scales according to inferred real-world object size," points to something of the underlying mechanism involved: perception of relative object size. The study compared subjects' reaction or processing times when they attempted to judge the relative size of objects (as opposed to the size of the images of the objects presented on the screen). What the researchers discovered is that, regardless of the size of the images on the screen, the objects that were in reality larger consistently occupied more processing time or attention.

In other words, in "deciding" whether an object is bigger than an adjacent object in the visual field, the brain accesses a spatial model or template of the object, not just the size of the visual image itself. A key element of that process is the longer processing time tied to the actual size of the object.

How does this relate to gesture-based pronunciation teaching? Potentially, in a couple of ways. If students have "simply" seen the gestures provided by instructors (e.g., Chan, 2018) and, in effect, have just been commanded to make some kind of adjustment, that is one thing. The gesture is, in essence, a mnemonic, a symbol, similar to a grapheme or a letter. The same applies to superficial signalling systems such as color, numbers or finger contortions.

If, on the other hand, the learner has been initially trained in using or experiencing the sign itself, as in sign language, there is a different embodied referent or mapping: one of experienced physical action across space.

In haptic work, adjacent sounds in the conceptual and visual field are first embodied experientially. Students are briefly trained in using three different gesture types, of distinctive lengths and speeds, accompanied by three distinctive types of touch. In initial instruction, students do exercises in which they physically experience combinations of those different parameters as they say the sounds.

For example, the contrastive gestural patterns (done as the sound is articulated) for [I], [i], [i:], and [iy] are progressively longer and more complex (see linked video models):
a. Lax vowels, e.g., [I] ("it") - The middle finger of the left hand quickly and lightly taps the palm of the right hand.
b. Tense vowels, e.g., [i] ("happy") - The left and right hands touch lightly, fingertips to fingertips, momentarily.
c. Vowel before a voiced consonant, e.g., [i:] ("dean") - The left hand pushes the right hand, palms touching, firmly 5 centimeters to the right.
d. Tense vowel plus off-glide, e.g., [iy] ("see") - The fingernails of the left hand drag across the palm of the right hand and, staying in contact, slide up about 10 centimeters and pause.

The same principle applies to most sets of contrastive structures and processes, such as intonation, rhythm and consonants. See what I mean about why embodied gesture for signalling pronunciation differences is much more effective? If not, go here, do a few haptic pedagogical movement patterns (PMPs) just to get the feel of them, and then reconsider!





Saturday, February 7, 2015

Why haptic (pronunciation) teaching and learning should be superior!

Wow. How about this "multi-sensory" conclusion from Max-Planck-Gesellschaft researchers Mayer, Yildiz, Macedonia, and von Kriegstein, "Visual and motor cortices differentially support the translation of foreign language words" (full citation below), summarized by ScienceDaily:

"The motor system in the brain appears to be especially important: When someone not only hears vocabulary in a foreign language, but expresses it using gestures, they will be more likely to remember it. Also helpful, although to a slightly lesser extent, is learning with images that correspond to the word. Learning methods that involve several senses, and in particular those that use gestures, are therefore superior to those based only on listening or reading."

The basic "tools" of haptic pronunciation teaching, what we call "pedagogical movement patterns," are defined as follows:

As a word or phrase is visualized (visual) and spoken with resonant voice, a gesture moving across the visual field is performed, culminating in the hands touching on the stressed syllable of the word or phrase (cognitive/linguistic), as the sound of the word is experienced as articulatory muscle movement in the upper body and as vibrations in the body emanating from the vocal cords and (to some degree) sound waves returning to the ears (auditory).

And what bonds all that together? A 2009 study by Fredembach et al. demonstrated just how haptic anchoring, and the PMP, should work: in relative terms, the major contribution of touch may generally be the exploration and assembling of multi-sensory experiences. The key is to do as much as possible to ensure that learners keep as many senses in play during "teachable moments," when new word-sound complexes are being encountered and learned.

Make sense? Keep in touch!

Citations:
Fredembach, B., Boisferon, A., & Gentaz, E. (2009). Learning of arbitrary association between visual and auditory novel stimuli in adults: The "Bond Effect" of haptic exploration. PLoS ONE, 4(3), 13-20.
Max-Planck-Gesellschaft. (2015, February 5). Learning with all the senses: Movement, images facilitate vocabulary learning. ScienceDaily. Retrieved February 7, 2015 from www.sciencedaily.com/releases/2015/02/150205123109.htm

Wednesday, November 21, 2012

Pronunciation & body & media fit

If you have been reading the blog occasionally, you are aware of the basis of the EHIEP model: (a) initial pronunciation teaching and (b) practice outsourced to video, with subsequent (c) integrated use in the classroom, (d) strong haptic engagement (movement and touch) and (e) somatic or body awareness and training. For the latter piece, body monitoring, maybe what we need is something like the "BodyMedia FIT" system. I love the company's come-on line: "Your body talks. We listen." Wish I had the spare change to buy one of those arm bands, just for fun. The research on the effectiveness of the technology, using web-based systems, is interesting. "Body training," in general, is biofeedback of one kind or another. This type of technology could easily be adapted to provide constant feedback on the quality of movement, relaxation, energy expenditure and body resonance. For much less money and hassle, with a modicum of self-discipline and persistence, learners can experience the same kind of integrated experience of speaking and pronunciation change with us. The future, however, is with technology such as this linked to CAPT (see previous post) and haptic cinema. But if you have difficulty consistently managing your "current classroom body image" and its caloric correlates, consider "arming yourself" with such a band.

Monday, July 30, 2012

Smile your pronunciation frustrations (and anchor) away!

In a couple of 2009 studies by Foroni and Semin of Utrecht University, summarized by Science Daily, it was demonstrated that " . . . merely seeing a smile (or a frown, for that matter) will activate the muscles in our face that make that expression, even if we are unaware of it." In addition, seeing or reading a word such as 'smile' or 'frown' influenced the subjects' ratings of the images that followed; e.g., "smile" would result in a more positive rating; "frown," a more negative one. Although details of the experiments are sketchy in the summary, the researchers obviously controlled the subjects' attention carefully, eliminating visual and auditory distractions as much as possible. In that setting, the intensity of the somatic response in the muscles of the face was optimized, engendering something of the corresponding emotion.

The parallel to haptic anchoring, or the anchoring of any relationship between the felt sense of the pronunciation of a word and its meaning and orthographic representation, is striking, on a couple of levels. Just the set of words used in setting up change and practice lists of sound complexes impacts both the effectiveness of specific anchoring and the overall anchoring "environment" as it happens. No wonder learners so enjoy practicing voiceless grooved sibilants ('s'), once told to just "Smile when you say that!" We knew that. The researchers conclude that " . . . language is not merely symbolic, but also somatic . . ." and " . . . these experiments provide an important bridge between research on the neurobiological basis of language and related behavioral research." Really, ya think? If that isn't enough to make you smile (that there is finally empirical evidence of that "bridge"), I don't know what is . . .