Showing posts with label focus. Show all posts

Thursday, March 3, 2022

More than just a gesture: Non-referential gesture in children's conversation and (haptic pronunciation) instruction

Interesting study, summarized on Technologynetwork.com, pointing the way to potentially broader, more systematic application of gesture in instruction: "Children use non-referential gestures in narrative speech to mark discourse elements which update common ground," by Rohrer, Florit-Pons, Vilà-Giménez and Prieto of Pompeu Fabra University and the University of Girona. What they were looking at was the use of "non-referential" gesture by children ages 6 to 9. Specifically, those gestures were neither iconic (representing an object or image) nor deictic ("pointing" in the direction of a referent), but were synchronized with the rhythm or stress patterning to mark information structure in narrative discourse. For example (from the paper):

"A non-referential gesture would be to simply move the hands up and down rhythmically or raise the eyebrows and move the head. These movements do not express the specific meaning of the verbal content. They are often made by politicians during their speeches to emphasize important points."

These gestural discourse markers can have many functions but, in essence, the speakers are using the body to focus the listener's attention in some way. In KINETIK haptic pronunciation teaching, in principle, a gesture can be mapped onto any rhythm group or phrase, providing structure (that is, indication of word grouping), emphasis, expressiveness, greater clarity, or additional multisensory connectedness to enhance memory.

Nice piece, I think! ("And a little child shall lead them . . . ") Will be reporting on this research both at TESOL Arabia next week and the TESOL Convention, March 23rd! Join us then if you can!

Keep in touch!

Bill


Reference: Rohrer PL, Florit-Pons J, Vilà-Giménez I, Prieto P. Children use non-referential gestures in narrative speech to mark discourse elements which update common ground. Front. Psychol. 2022;12. doi: 10.3389/fpsyg.2021.661339


Wednesday, February 16, 2022

"Parsing your words!" A key skill for teaching English rhythm!

A few (4) decades ago, in my first TESL course as an undergrad, we worked with a sentence something like the following, where the "point" was to show students that, in principle, any word in a sentence could be the location of the primary sentence stress, depending on the context and what had preceded it in the conversation or story:

                    My friend and I drove to the party in a rented, blue Ford station wagon. 

In our practicum, one of the assignments was, in fact, to have students repeat the sentence any number of times, even up to 15 in that case, with any word as the focal or contrastive stress location. (You may have done something similar.) What that accomplished, in addition to massive confusion, is still not clear! In the unmarked condition, where that sentence somehow begins the conversation, one basic parsing would probably be:

My friend and I / drove to the party / in a rented, / blue Ford station wagon.  

Clker.com

To the native or near-native speaker, that unmarked parsing is probably the consensus, and relatively easy to land on. Not so, generally, for the nonnative, however, in part because the decision as to where to parse the text relies on grammatical and discourse competence, not simply on how it "feels" to say it. (In fact, I have found many native-speaking teacher trainees to be even less successful at producing the unmarked version of the text. They have been generally highly auditory and weak on grammar!)

Once the "story" and previous preconceptions or events kick in, the stress could shift in any number of ways. There are some rules for guessing at the unmarked, of course, but they are not very helpful, such as:

  • Stress tends to fall:
    • on content words
    • to the right
    • on nouns, verbs, adjectives and adverbs, but generally not on prepositions, articles or pronouns
    • on constituents that contrast with what is expected, given concepts introduced earlier in the narrative (context- or previous-events-based)
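For the programmatically inclined, the rough heuristics above can be sketched as a toy script. Everything here is my own illustrative assumption, not part of any published rule set: the function-word list is a tiny sample, and real stress placement depends on context in ways no word list can capture.

```python
# Toy illustration of the "unmarked stress" heuristics above: within each
# rhythm group, primary stress tends to fall on the rightmost content word.
# The function-word set is a small, illustrative sample only.
FUNCTION_WORDS = {"my", "and", "i", "to", "the", "in", "a", "of", "on"}

def primary_stress(rhythm_group):
    """Return the rightmost content word of a rhythm group."""
    words = [w.strip(",.").lower() for w in rhythm_group.split()]
    content = [w for w in words if w not in FUNCTION_WORDS]
    # Fall back to the last word if the group is all function words.
    return content[-1] if content else words[-1]

groups = ["My friend and I", "drove to the party",
          "in a rented,", "blue Ford station wagon"]
for g in groups:
    print(g, "->", primary_stress(g))
```

Run on the sample sentence's parse, the sketch lands on "friend," "party," "rented," and "wagon," which matches the unmarked reading reasonably well, but shift the context and the heuristic falls apart immediately, which is rather the point of the post.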

So, how does someone whose L1 is not English learn to parse texts for students, as required by the "Rhythm First" protocol of the KINETIK method, where you parse the text and identify the primary-stressed word in each parse (or rhythm) group? Good question. One way is to take the KINETIK Method Instructor Certificate Course (KMICC), where each week you work on various short texts, learning how to effectively parse to the intrinsic rhythm of the written (or spoken) text. At the conclusion of the 10-week course, participants are very good at parsing texts into what we call "embodied oral readings" (EORs), the key building block of the haptic, KINETIK instructional system.

Quick check: which of these parsings sounds right to you?

That sounds like / a very good tool / to have on hand

or

That sounds / like a very good tool / to have on hand?

or

That sounds like a very good tool / to have on hand?

Want to find out? Join us: actonhaptic.com/kinetik or email me directly at: wracton@gmail.com

Tuesday, May 26, 2020

The sound of gesture: Ending of gesture use in language (and pronunciation) teaching

Quick reminder:  Only one week to sign up for the next haptic pronunciation teaching webinars! 

Sometimes getting a "rise" (rising pitch) out of students is the answer . . . This is one of those studies that you read where a number of miscellaneous pieces of a puzzle momentarily seem to come together for you. The research, by Pouw and colleagues at the Donders Institute, "Acoustic information about upper limb movement in voicing", summarized by Neurosciencenews.com, is, well . . . useful.

In essence, what they "found" was that at or around the terminal point of a gesture, where the movement stops, the pitch of the voice goes up slightly (for a number of physiological reasons). Subjects, with eyes closed, could still in many cases identify the gesture being used, based on parameters of the pitch change that accompanied the nonsense words. The summary is what is fun and actually helpful, however.

From the summary:

"These findings go against the assumption that gestures basically only serve to depict or point out something. “It contributes to the understanding that there is a closer relationship between spoken language and gestures. Hand gestures may have been created to support the voice, to emphasize words, for example.”

Although the way the conclusion is framed might suggest that the researchers may have missed roughly three decades of extensive research on the function of gesture, from theoretical and pedagogical perspectives, it certainly works for me--and all of us who work with haptic pronunciation teaching. That describes, at least in part, what we do: "  . . . Hand gestures . . . created to support the voice, to emphasize words, for example.” Now we have even more science to back us up! (Go take a look at the demonstration videos on www.actonhaptic.com, if you haven't before.) 

What can I say? I'll just stop right there. Anything more would just be but an empty gesture . . .

Source:
“Acoustic information about upper limb movement in voicing”, by Wim Pouw, Alexandra Paxton, Steven J. Harrison, and James A. Dixon. PNAS. doi:10.1073/pnas.2004163117

Saturday, December 22, 2018

The feeling before it happens: Anticipated touch and executive function--in (haptic) pronunciation teaching

Tigger warning*: This post is (about) touching!

Another in our continuing, but much "anticipated", series of reasons why haptic pronunciation teaching works or not, based on studies that at first glance (or just before) may appear to be totally unrelated to pronunciation work.

Fascinating piece of research by Weiss, Meltzoff, and Marshall of the University of Washington's Institute for Learning and Brain Sciences and Temple University, entitled "Neural measures of anticipatory bodily attention in children: Relations with executive function", summarized by ScienceDaily.com. In that study they looked at what goes on in the (child's) brain prior to an anticipated touch of something. What they observed (from the ScienceDaily.com summary) is that:

"Inside the brain, the act of anticipating is an exercise in focus, a neural preparation that conveys important visual, auditory or tactile information about what's to come  . . . in children's brains when they anticipate a touch to the hand, [this process] . . . relates this brain activity to the executive functions the child demonstrates on other mental tasks. [in other words] The ability to anticipate, researchers found, also indicates an ability to focus."

Why is that important? It suggests that those areas of the brain responsible for "executive" functions, such as attention, focus and planning, engage much earlier in the process of perception than is generally understood. For the child or adult who does not have the general, multi-sensory ability to focus effectively, the consequences can be far reaching.

In haptic pronunciation work, for example, we have encountered what appeared to be a whole range of random effects that can occur in the visual, auditory, tactile and conceptual worlds of the learner that may interfere with paying quality attention to pronunciation and memory. In some sense we have had it backwards.

What the study implies is that executive function mediates all sensory experience as we must efficiently anticipate what is to come--to the extent that any individual "simply" may or may not be able to attend long enough or deeply enough to "get" enough of the target of instruction. The brain is set up to avoid unnecessary surprise at all costs. The better and more accurate the anticipation, of course, the better.

If the conclusions of the study are on the right track, that the "problem" is as much or more in executive function, then how can executive functioning be enhanced systematically, as opposed to just attempting to limit random "input" and distraction surrounding the learner? We'll return to that question in subsequent blog posts, but one obvious answer is through the development of highly disciplined practice regimens and careful, principled planning.

Sound rather like something of a return to more method- or instructor-centered instruction, as opposed to this passing era of overemphasis on learner autonomy and personal responsibility for managing learning? That's right. One of the great "cop outs" of contemporary instruction has been to pass off blame for failure on the learner, her genes and her motivation. That will soon be over, thankfully.

I can't wait . . .



Citation:
University of Washington. (2018, December 12). Attention, please! Anticipation of touch takes focus, executive skills. ScienceDaily. Retrieved December 21, 2018 from www.sciencedaily.com/releases/2018/12/181212093302.htm.

*Used on this blog to alert readers to the fact that the post contains reference to feelings and possibly "paper tigers" (cf., Tigger of Winnie the Pooh)


Saturday, March 3, 2018

Attention! The "Hocus focus" effect on learning and teaching

Clker.com
"We live in such an age of chatter and distraction. Everything is a challenge for the ears and eyes" (Rebecca Pidgeon)  "The internet is a big distraction." (Ray Bradbury)


There is a great deal of research examining the apparent advantage that children have in language learning, especially pronunciation. Gradually, a broad research base is also accumulating on another continuum: young vs. "mature" adult learning in the digital age. Intriguing piece by Nir Eyal posted at one of my favorite occasional light reads, Businessinsider.com, entitled "Your ability to focus has probably peaked: here's how to stay sharp."

The piece is based in part on The Distracted Mind: Ancient Brains in a High-Tech World by Gazzaley and Rosen. One of the striking findings of the research reported, other than the fact that your ability to focus intently apparently peaks at age 20, is that there is actually no significant difference in focusing ability between those in their 20s and someone in their 70s. What is dramatically different, however, is one's susceptibility to distraction. Just like the magician's "hocus pocus" use of distraction, in a very real sense, it is our ability to not be distracted that may be key, not our ability to simply focus our attention however intently on an object or idea. It is a distinction that does make a difference.

The two processes, focusing and avoiding distraction, derive from different areas of the brain. As we age, or in some neurological conditions emerging from other causes such as injury or trauma, it may become more and more difficult to keep the information and perceptions constantly being generated from intruding on our thinking. Our executive functions become less effective. Sound familiar?

In examining the effect of distraction on subjects of all ages during focused memory tasks, a visual field filled with various photos of people or familiar objects was significantly more distracting than closed eyes (which were only slightly better, in fact), while a plain visual field of one color, with no pattern, was the most enabling condition for the focus task. In other words, clutter trumps focus, especially over time. Older subjects were significantly more distracted in all three conditions, but they, too, were better able to focus in the last, least cluttered visual field.

Some interesting implications for teaching there--and validation of our intuitions as well, of course. Probably the most important is that explicit management of not just attention of the learner, but sources of distraction, not just in class but outside as well, may reap substantial benefits. This new research helps to further justify broader interventions and more attention on the part of instructors to a whole range of learning condition issues. In principle, anything that distracts can be credibly "adjusted", especially where fine distinctions or complex concepts are the "focus" of instruction.

In haptic pronunciation work, where the felt sense of what the body is doing should almost always be a prominent part of the learner's awareness, the assumption has been that one function of that process is to better manage attention and visual distraction. If you know of a study that empirically establishes or examines the effect of gesture on attention during vocal production, please let us know!

The question: Is the choice of paying attention or not a basic "student right?" If it isn't, how can you further enhance your effectiveness by better "stick handling" all sources of distraction in your work . . . including your desktop(s) and the space around you at this moment?

For a potentially productive distraction this week, take a fresh look at what your class feels like and "looks like" . . . without the usual "Hocus focus!"

Thursday, January 4, 2018

Touching pronunciation teaching: a haptic Pas de trois

Wikipedia.org
For you ballet buffs this should "touch home" . . . The traditional "Pas de trois" in ballet typically involves 3 dancers who move through 5 phases: Introduction, 3 variations, each done by at least one dancer, and then a coda of some kind with all dancing.

A recent article by Lamothe in the UK Guardian, "Let's touch: why physical connection between human beings matters," reminded us of some of the earliest work we did in haptic pronunciation teaching, which involved students working together in pairs, "conducted" by the instructor, in effect "touching" each other on focus words or stressed syllables in various ways, on various body parts.

In today's highly "touch sensitive" milieu, any kind of interpersonal touching is potentially problematic, especially "cross-gender" or "cross-power plane", but there still is an important place for it, as Lamothe argues persuasively. Maybe even in pronunciation teaching!

Here is one example from haptic pronunciation teaching. Everything in the method can be done using intra-personal and interpersonal touch, but this one is relatively easy to "see" without a video to demonstrate the interpersonal version of it:
  • Students stand face to face about a foot apart. The instructor demonstrates a word or phrase, tapping her right shoulder (with her left hand) on stressed syllables and her left elbow (with her right hand) on unstressed syllables--the "Butterfly" technique.
As teacher and students then repeat the word or phrase together:
  • One student lightly taps the other on the outside of the right shoulder on stressed syllables (using her left hand).
  • The other student lightly taps the outside of the first student's left elbow on unstressed syllables (using her right hand).
Note: Depending on the socio-cultural context and the general attire of the class, having all students use some kind of hand "disinfectant" may be in order! Likewise, pairing of students obviously requires knowing well both the students individually and the interpersonal dynamics of the class. Consider competition among pairs or teams using the same technique.

If you do have the class and context for it, try a bit of it, for instance on a few short idioms. It takes a little getting used to, but the impact of touch in this relatively simple exercise format--and the close paralinguistic "communication"-- can be very dramatic and . . . touching.

Keep in touch!

Sunday, August 20, 2017

Good listening (and pronunciation teaching) is in the EYE of the beholder (not just the ear)!

clker.com
Here is some research well worth gazing at and listening to by Pomper and Chait of University College London: The impact of visual gaze direction on auditory object tracking, summarized by Neurosciencenews.com:

In the study, subjects "sat facing three loudspeakers arranged in front of them in a darkened, soundproof room. They were instructed to follow sounds from one of the loudspeakers while ignoring sounds from the other two loudspeakers. . . . instructed to look away from the attended loudspeaker" in an aural comprehension task. What they found was that " . . . participants’ reaction times were slower when they were instructed to look away from the attended loudspeaker . . . this was also accompanied by an increase in oscillatory neural activity . . . "

Look . . . I realize that the connection to (haptic) pronunciation teaching may not be immediately obvious, but it is potentially significant. For example, we know from several research studies (e.g., Molloy et al. 2015) that visual input tends to override or "trump" audio in "head to head" competition in the brain. In addition, auditory generally trumps kinesthetic, but the two together may override visual in some contexts. Touch seems to be able to complement the strength or impact of the other three, or serve to unite or integrate them in various ways. (See the two or three dozen earlier blog posts on those and related issues.)

In this study, you have three competing auditory sources with the eyes tracking to one as opposed to the others. Being done in a dark room probably helped to mitigate the effect of other possible visual distraction. It is not uncommon at all for a student to choose to close her eyes when listening, or to look away from a speaker (a person, not an audio loudspeaker as in the study). So this is not about simply paying attention visually. It has more to do with the eyes either being focused or NOT.

Had the researchers asked subjects to gaze at their navels--or any other specific object--the results might have been very different. In my view, the study's validity is questionable on those grounds alone, but it is still interesting that subjects' gaze was fixed at all. Likewise, there should have been a control group that did the same protocols with the lights on, etc. In effect, telling subjects to look away was equivalent to having them try to ignore the target sound and attend to it at the same time. No wonder there was " . . . an increase in oscillatory neural activity"! Really!

In other words, the EYEs have it--the ability to radically focus attention, in this case to sound, but to images as well. That is, in effect, the basis of most hypnosis and good public speaking, and well-established in brain research. In haptic pronunciation teaching, the pedagogical movement patterns by the instructor alone should capture the eyes of the students temporarily, linking back to earlier student experience or orientation to those patterns. 

So try this: Have students fix their eyes on something reasonable or relevant, like a picture, or neutral, like an area on the wall in front of them--and not look away--during a listening task. Their eyes should not wander, at least not much. Don't do it for very long, maybe 30 seconds max at the start. You should explain this research to them so they understand why you are doing it. (As often as I hammer popular "Near-ol'-science", this is one case where I think the general findings of the research are useful and help to explain a very common sense experience.)

I have been using some form of this technique for years; it is basic to haptic work, except that we do not specifically call attention to the eye tracking, since the gestural work naturally accomplishes that to some degree. (If you have, too, let us know!)

This is particularly effective if you work in a teaching environment that has a lot of ambient noise in the background. You can also, of course, add music or white noise to help cancel out competing noise or maybe even turn down the lights, too, as in the research. See what I mean?

Good listening to you!

References:
UCL (2017, July 5). Gaze direction affects sound sensitivity. NeuroscienceNews. Retrieved July 5, 2017 from http://neurosciencenews.com/sound-sensitivity-gaze-direction-7029/
Molloy, K., Griffiths, T. D., Chait, M., & Lavie, N. (2015). Inattentional deafness: Visual load leads to time-specific suppression of auditory evoked responses. Journal of Neuroscience, 35(49), 16046. DOI: 10.1523/JNEUROSCI.2931-15.2015

Wednesday, June 29, 2016

Temporary Mind-FILL-ness in (pronunciation) teaching: Weil's 4-7-8 technique

A few months ago I sat through a good presentation on a technique for "fixing" the English rhythm of adult Japanese learners--in relatively big classes. At the time I was very interested in research on the role of attention in learning. Later, over coffee I asked the presenter something to the effect of "How do you know that the students were paying attention?" (I had earlier taught for over a decade in a seemingly very similar context in Japan, myself.) His response was: "Good question . . . Almost everybody was looking at me and more than half of the lips were moving at the appropriate time . . . "

How do you establish, maintain and manage attention in your teaching? (Anybody looking for a great MA or PhD topic, take note!) Based on my recent survey of the research literature, I'm preparing a conference proposal on the subject now. This is a follow-up to the earlier post on how pronunciation should be taught "separately", in effect partitioned off from the lesson of the day and the distractions of the room and surroundings.

One problem with efficient attention management is often in the transitions between activities, or just the initial set up. Some tasks require learners to be very much "up"; others, decidedly "down" and relaxed.

The popularity of Mindfulness training today speaks to the relevance of managing attention in class and the potential benefits from many perspectives. Most of the basic techniques of Haptic Pronunciation Teaching are designed to require or at least strongly encourage at least momentary whole body engagement in learning and correcting articulation of sound in various ways. I have experimented with a number of Mindfulness-based techniques to, in effect, short-circuit mental multitasking and get learners (sort of) calmed down and ready to go . . .

Powerful, effective stuff, but it is not something that most teachers can just pick up and begin using in their classes without at least a few hours of training themselves, especially in how to "talk" it through with students and monitor "compliance" (manage attention). I'd recommend it, nonetheless.

I recently "rediscovered" an amazing focus technique, suggested by Dr. Andrew Weil (hat tip: this month's issue of Men's Health magazine!), that works to create very effective boundaries without requiring any special training to administer. One of the best I have ever used. Simple. "Mechanical" (not overly cognitive or "hypnosis"-like) and quick. Takes a maximum of 90 seconds. Anybody can do it, even without having seen it done:

A. Breathe in with mouth closed for a slow count of 4
B. Hold the breath for a slow count of 7
C. Blow out through the mouth softly for a slow count of 8

Do that four times. It basically lowers the heart rate and helps one focus. It may take two or three repetitions for 4-7-8 to reach full effectiveness, but it gets there quickly, almost without fail. You can use 4-7-8 two or three times per class period. If you don't have a warm up that gets everybody on board consistently, try this one. I'd especially recommend it before and after pronunciation mini-lessons.
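For anyone who wants to pace or script the cycle, here is a minimal sketch of the 4-7-8 schedule. Treating each "count" as one beat of fixed length is my own simplification for illustration; Weil's counts are simply slow counts, not clocked seconds.

```python
# Minimal sketch of the 4-7-8 breathing cycle described above.
# One (phase, count) pair per step; four cycles by default, as in the post.
PHASES = [("inhale (mouth closed)", 4), ("hold", 7), ("exhale (softly)", 8)]

def schedule(cycles=4):
    """Return the full (phase, count) sequence for the given number of cycles."""
    return [(phase, count) for _ in range(cycles) for phase, count in PHASES]

for phase, count in schedule(cycles=1):
    print(f"{phase}: slow count of {count}")
```

Each cycle is 19 counts in all, so four cycles at roughly a second per count lands close to the "maximum of 90 seconds" mentioned above.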

Pronunciation, and especially haptic techniques, are very sensitive to distraction, especially excessive conscious analysis and commentary. 4-7-8 is not necessarily the answer, but it will at least temporarily get everybody's attention. After that . . . you're on!




Monday, January 19, 2015

When a nod is nod enough: Coconut Cheeseburger

Clip art:
Clker.com
On the way to the TESOL convention in 1987, at the Greenville-Spartanburg airport, there was also a “mass” of tanned, wacky (hungover?) college students on their way back from spring break in Florida. Next to where I was sitting was a group of about a dozen who were laughing uproariously.

The story went that one of the young women had intended to order a coke and a cheeseburger at a restaurant, but was served, instead, a Coconut Cheeseburger. As the recipient of the exotic sandwich continued to deny having ordered it, another insisted that he had, in fact, observed her do just that.

What was fascinating was that both were using energetic upper torso nods with simultaneous "thigh slaps"—which created and emphasized either one or two tone/rhythm groups: (“I’d like a Coke / and a Cheeseburger.”) or what he said she said: (“I’d like a Coconut Cheeseburger.”)

It was easy to “see" how in a noisy restaurant--where there was, apparently, a coconut cheeseburger on the menu--that the waiter could get it wrong. Had she used one obvious upper torso nod or two? (Nod, if you guessed right, that the protagonist was a male, English major, almost certainly a significant other of the recipient of the burger--or trying desperately to become one!)

It would take me another two decades to figure out how to make that principle work systematically--in haptic pronunciation teaching.

Wednesday, November 5, 2014

Workshop on stressing and de-stressing unstressed vowels: the haptic “thumb-flick” technique

On the 22nd of November at the local BCTEAL regional conference, I'll be doing a new haptic workshop on unstressed vowels, with Aihua Liu of the Harbin Institute of Technology and Jean Jeon, a graduate student here at Trinity Western University. You can see an introduction to the technique here.

Summary:
Clip art:
Clker.com
This participatory, experiential session presents a haptic (gesture + touch) procedure for helping learners produce and better “hear” unstressed vowels in English. In essence, as words are articulated, learners touch hands at specific points in the visual field on stressed vowels and “flick their thumbs” on the unstressed vowels.

Proposal:
Working with unstressed vowels in English is often neglected. The problem is often “solved” by avoiding the issue entirely or by emphasizing suprasegmentals (rhythm, stress and intonation), which, research suggests, do indeed help to determine the prominence of unstressed syllables to some extent. In addition, there may be some limited, indirect attention to unstressed vowels in oral practice of reduced forms, especially in fixed phrases (e.g., “salt ‘n’ pepper”) and idioms.

Research has recently demonstrated that disproportionate attention to suprasegmentals (rhythm, stress and intonation) without a balanced, production-oriented treatment of key segmentals (vowels and consonants) may be very counter-productive, undermining intelligibility substantially. That is especially the case with learners whose L1 is Vietnamese, for example.

This technique helps to address that issue by facilitating more appropriate, controlled focus on the vowel quality in unstressed syllables.  It involves the use of two types of pedagogical gestures, one that adds additional attention to the stressed vowel of the word and a second that helps learners to better approximate the target sound and maintain the basic syllabic structure of the word.

The session is experiential and highly participatory. Participants are provided materials and links to Youtube.com videos demonstrating the technique.

Wednesday, November 28, 2012

Aiming at good pronunciation: on the Q(E)T

Clip art: Clker
Clip art: Clker
Always looking for ways to enhance haptic anchoring, I came across some interesting new research by Wood and Wilson of Exeter University using Quiet Eye Training (QET), a well-established technique for helping one (especially professional athletes under pressure) aim at, or focus attention on, a target. The training assists the shooter in putting distraction out of mind. (Some studies report even more generalized impact on everyday cognitive functioning and sense of control as well.)

This is potentially a good fit with other attention management strategies in the EHIEP approach. Early on in the development of the system we experimented with some eye-tracking techniques similar to those used in OEI but discovered that they were a little too "high octane" for general pronunciation work. (In working with "fossilized" individuals I still use some of those regularly, however.) Since QET does not require instructor presence when the shot is taken, it may be possible to use it in some form. Will figure out how to adapt QET training, how to better enable learners to anchor what they do on the q.t. and get back to you. 

Thursday, December 22, 2011

Monkey see and monkey do: efficient multi-tasking in pronunciation work

Clip art: Clker
Here is one of those research reports that inevitably evokes the same somewhat exasperated reaction from me (and I expect from most of you, as well). Ready?  It has been discovered that we--well some of our purported "cousins," at least-- are wired to multitask! Think of it . . . you can, for example, now watch TV and read a book at the same time or run on a treadmill without worry that you are going against your very nature or doing irreparable harm to your equipment.

It is an important study, reportedly one of the first to establish that empirically. The trick apparently is just how closely related the two tasks are. If they are sufficiently distinct, either in terms of intra-modality contrast (like two pictures) or inter-modality (like singing and knitting), go to it! Any number of previous posts have looked at the interplay among visual and auditory and haptic modalities, coming to much the same conclusion: that we can, under the right circumstances attend quite well to both haptic and auditory (and in controlled contexts, visual) simultaneously.

HICP/EHIEP is based on the idea of continuous, simultaneous engagement of multiple modalities (what we often refer to with the acronym "CHI", for continuous haptic integration, haptic having the primary function of anchoring and integrating). In other words, doing pedagogical movement patterning and seeing (tracking those movements of the hands across the visual field) and speaking at the same time should be a piece of cake. If not, we may just have too much time on our hands--or not enough. Certainly nothing to HICP at!

Friday, November 11, 2011

Not aware of an effective HICP technique? Good!


Clip art:
Clker
We often use the terms "attention" and "awareness" interchangeably, in informal conversation or in describing what is going on at any moment in the instructional process. I have used the acronym AFAPAI (Attention-Focus-Anchoring-Practice-Awareness-Integration), pronounced "half-a-pie," for some time. (See earlier posts on the HICP learning model.) That "half" models the process of sound change; the other half is what is being learned: sounds, words, processes and patterns.

"Unaware" of the research linked above, I had apparently gotten close to one theory of how those two concepts, awareness and attention are related. In essence, what the study by Watanabe, Cheng,  Murayama, Ueno, Asamizuya, Tanaka, and Logothetis. summarized by Science Daily, demonstrated was that, in principle (neurophysiologically, at least), it is possible to pay attention without being aware, and vice versa. So what does that mean for classroom instruction? Simply this: If learners are just "aware" of what is being presented, nothing may "stick" later; focused/undivided attention is required, which, in effect, limits general awareness, especially of the visual field but, apparently of all modalities as well. In other words, complete, at least momentary attention is required for maximal impact.

In the six-step HICP process (AFAPAI), note where awareness comes into play: after regular practice, generally in conversation, as both "old" and "repaired or new" forms are brought to awareness in a manner that seems almost accidental or incidental, not purposefully attended to. (See also posts on post-hypnotic suggestion and related strategies.) That, in turn, should help move the integration process along. If you have been paying attention, that should be exciting stuff. If not, you are at least now aware of the research. After all, even AFAPAI is better than (just) noticing!

Tuesday, September 27, 2011

The Krieger method of accent reduction

(Note: What follows should in NO way be construed as an endorsement of this linked Youtube video!)

Clip art:
Clker
I just posted this comment on another discussion board: "His claims are, of course, outlandish. But watch that video a couple of times very carefully. He has stumbled onto an essentially kinesthetic, "ballistic" technique that, for some learners, will enhance their intelligibility--if all they need is more stress contrast and more processing time for their listeners. It is used in many public speaking courses, in fact."

I worked with something like that about 20 years ago. I still use it occasionally when I have a learner who needs a very quick fix--and maybe just needs to slow down and kick back. Everybody has a piece of the puzzle. In the EHIEP system we do sometimes use a haptic version of the "Krieger thrust" to effect more integration--but never up front!

The music of rise-fall prosodic triggers

Clip art: Clker
Listen to a native-speaking English language instructor repeat the citation form of a word (in isolation) at the front of the class. Almost invariably, he or she will use what is termed a "rise-fall" intonation pattern, peaking on the primary-stressed syllable. This 2009 summary by Hsu on Livescience.com of a research report by Janata at the University of California, Davis, explains why, in recalling and vocalizing a word, the "music or melody" of the word, its intonation or tonal pattern, should help it "come back."

Haptic research suggests that EHIEP-like "haptic anchoring" of that rise-fall contour (a pedagogical movement pattern across the visual field which includes some type of hand touching on the primary-stressed syllable) should enhance encoding and recall. I have yet to do a controlled, empirical test of that prediction with such prosodic triggers, but it has been standard practice for some time to have learners use a rise-fall PMP as one step in working with new or changed sounds or words in homework. They consistently report that it helps them remember the felt sense of a word, either (1) in trying to access the pronunciation directly or, (2) more importantly, in noticing after the fact a mispronounced or changed target in conversation (as explored in a recent post).

Try it. Add a little prosodic or haptic riff to your citation forms. Stay tuned.

Thursday, September 1, 2011

Bottom-up pronunciation teaching: "Touchinami"

Clip art: Clker
Here is a 1997 article by Chela-Flores that was influential in forming my understanding of the place of rhythm in pronunciation instruction. Essentially, the position was that pronunciation instruction should be based on rhythm groups, with all other elements seen as fitting within and taught within that structure. Lessons are rhythm-centered; the felt sense of a word has a clear rhythmic identity, etc.

Now, take that concept and add on top of each rhythm group an "intonation group," as characterized nicely by Celik--and a "haptic anchor," as developed here on this blog--and you have what, in EHIEP work, we call a "touch-i-nami" (from Japanese: "touch wave"), a basic pedagogical tool: a rhythm group with a chunk of intonation "on top," well grounded haptically in memory. Bottoms up!

Sunday, August 28, 2011

HAPTICULATE! (Learning new or changed pronunciation efficiently)

Clip art: Clker
I like that term . . . Among voice coaches, the asymmetrical relationship between "bone conduction" (perception of one's own voice experienced through the bones of the face) and "air conduction" (awareness based on input via the auditory nerve from the ears) is generally a given. Estimates range from 80/20 to 60/40. Thus, in training programs, the internal "felt sense" of the voice is understood as primary. (This abstract of a study looks at the varying frequency ranges involved.)

Assuming that observation is essentially correct, or at least useful--and drawing on research cited in several recent posts on the relative strength of different modalities in speech production and comprehension--here are the fundamentals, from a HICPR perspective, of how to manage your attention (or that of your students) to learn a new or corrected sound with optimal efficiency. In brief, there are four basic components (the function each fulfills has been elaborated in previous blog posts):

A. Breathe in through the nose, then breathe out through the mouth as the word or phrase is articulated, accompanied by specific modality management with haptic anchoring (see B and C, below).
B. Focus strongly on the felt sense in your personal Vowel Resonance Center (a point, typically, in the bones of the face between the eyes or thereabouts, where bone-sound conduction is experienced most intensely or, for some, at a point in the throat or chest when speaking). The breathing procedure in A helps to create and maintain that focus.
C. Manage the visual field (Visual Field Management). Do that either by focusing on a fixed point in front of you, tracking hand movements with eyes or closing your eyes--or some combination.
D. Perform 2 or 3 "pedagogical movement patterns" (basically sign-language-like movements/gestures through the visual field, terminating with both hands touching on the key, stressed syllable--haptic anchoring) as the target word or expression is . . . well . . . hapticulated!

Friday, August 26, 2011

To breathe or not to breathe during pronunciation practice

Clip art: Clker
In most basic strength and flexibility training, some kind of systematic control of breathing is practiced. My experience has been primarily with running, weight training and yoga, where there is a general consensus that "nose breathing," at least when inhaling, is recommended. Here is a brief summary of some of the potential health benefits. (There is extensive, well established research also on the effects of breathing in yoga systems.)

I have been exploring the use of controlled breathing in HICP/EHIEP work for some time now. The idea is to breathe in through the nose before haptic anchoring of a sound or word, then to exhale through the mouth, with the anchor, as the sound or word is articulated (hapticulated, as we say!). There are several potential benefits (in addition to the biochemical changes evident in the research), including: improved pacing of exercises, enhanced "felt sense" of and concentration on the target sound, improved posture encouraged by conscious nasal inhaling, improved aspiration on aspirated consonants--and, perhaps most strikingly, a general sense of well-being that remains for some time after practice. (Research seems to indicate that that feeling is probably the result of greater oxygen absorption.)

So, if your pronunciation work seems to be sucking all the oxygen and enthusiasm out of the room . . . such controlled, embodied systematic "inspiration" (and expiration) could well be a real "breath of fresh air!"

Friday, July 22, 2011

Pronunciation modalities: out of sight--but IN mind!

Clip art: Clker
In this 2009 study of modality dominance by Hecht and Reiner, when visual stimuli are paired one-on-one with either haptic or auditory competing stimuli, visual consistently overpowers the other modality. When the three are presented simultaneously, however, the dominance of visual disappears. That may explain why having some learners focus on a visual schema (such as the orthography) while articulating or practicing a new sound may not turn out to be very efficient--or why doing a kinesthetic "dance" of some kind to practice a rhythm pattern (without speaking at the same time) while looking at something in the visual field may not work all that well either for some learners.

The presence of eye engagement may override or nullify information in the competing modality. In HICP, where all three modalities are usually engaged, the "distracting" influence of sight is at least lessened. In fact, the tri-modality "hexus" should only better  facilitate the integration of the graphic word, the felt (haptic) sense of producing it and the internal (auditory) bone- resonance and vibrations. Although a substantial amount of pronunciation learning may be better accomplished with eyes closed, tri-modal (haptic, visual and auditory) techniques probably come in a close second. We will "see" in forthcoming research!