
Sunday, March 10, 2024

What to do for falling student confidence!

One of the joys of teaching is all those times when you stumble on a wonderful technique . . . almost by accident, when the lesson that you designed goes way beyond your objectives for it. The research literature is filled with reports of classroom procedures that inspire or develop confidence (cf. Cadiz-Gabejan, 2021) . . . but not this one in this field.


For you to be able to do this technique with your students tomorrow, I need to give you brief primers on haptic pronunciation teaching (HPT) and Observed Experiential Integration (OEI) therapy.

HPT, basically, uses gesture and touch to enhance memory and expressiveness, generally by having a gesture terminate, with the hands touching, on a stressed syllable. The gesture can have several functions, marking rhythmic or intonation patterns, or specific vowels or consonants. (For examples of some of the movement, tone and touch techniques, go to: www.actonhaptic.com/HaPT.)

One of the techniques, used to create the deep falling tone at the end of a conversational turn, for example, has the learner move one hand from in front of the eyes down to about the level of the solar plexus, with the eyes following. The voice also falls as low as possible, in some speakers creating the "creaky" voice quality. One of the students, in working with the practice dialogs, "discovered" that she felt more and more confident using that move . . . beyond the exercises. Her general demeanor and speaking "presence" made that evident as well from that point on.

I had seen a somewhat analogous technique about 20 years ago, in observing psychologists working with Observed Experiential Integration (OEI) therapy. The patient basically followed the hand movement of the clinician across the visual field, terminating at about the same place, sometimes along with the clinician's voice, sometimes their own, but the effect was the same: a sense of calm and confidence. That location in the visual field, down and to the right, seemed to act as an anchor for a sense of, at least temporarily, closing down, calm or resting.

Many systems use similar anchoring for a myriad of purposes. In this case, we were working with a basic sentence-final falling tone--one that just keeps falling until it "hits bottom." I have been using it for the last two years in various ways, such as with short passages or conversational gambits, with pretty striking results. Here is a short video clip from the KINETIK training video series. Give it a try and let me know how it works in your class (as I'm CONFIDENT that it will!)

v7.0 will be available sometime later this spring or early summer. 

Keep in touch!

Bill


Monday, November 27, 2023

Better pronunciation at your fingertips!

New favorite terms: viscoelastic and deformation. Recent research by Saal, Birznieks and Johansson, "Memory at your fingertips: how viscoelasticity affects tactile neuron signaling," helps explain the power of touch, especially as it relates to interpretation of intensity (from several perspectives) and memory--in haptic pronunciation teaching (HaPT)--and elsewhere.

Just heard of a great technique from a friend, a professional vocal artist and instructor. While attending a clinic held by a renowned opera singer and instructor, she was required to sing a brief piece, in part to demonstrate her professional "voice" to the seminar. The mentor, although apparently impressed with what she had heard, could see (and hear) that there was much more there. She asked my friend to sing some of the piece again, but this time to engage her fingers on the table as if she were playing the piano, accompanying herself. The result was . . . astounding . . . her expressiveness, engagement, projection of the piece and her persona were almost overpowering, even for the other members of the seminar. How did that work? (Watch the hands of a great opera singer sometime!)

In the Saal et al. study, in essence, what they found was that the "history" of previous touch on a location of the skin, described as a "deformation," carried a great deal of information for interpreting current touch, and that past touch was generally as perceptually salient as the current tactile event, as critical to the brain's being able to interpret it accurately. In other words, memory for touch is highly complex and dynamic in sensing whether a current impact event has "the same meaning" or a different one--and in what way.

In principle, in haptic pronunciation work, any sound or sound pattern can be anchored with movement and touch, the touch landing on the stressed syllable of a word, or word of a phrase or clause. As developed in an earlier post, there are about a dozen types of touch in the system, with each location on the hands or upper body in the visual field the target for one or more touch types--and sounds. What the Saal et al. study clarifies is how, for example, three vowel sounds in HaPT such as [i], [I] and [iy], which are located in the same place in the visual field (as high, front vowels), can still have very different somatic (feeling-based) identities based on distinct types of touch. (See demonstrations.)

  • [i] is performed as a brief hold of the hands as the vowel is articulated. 
  • [I] is performed as a quick, sharp tap touch, as the vowel is articulated.
  • [iy] involves 2 motions, an initial glancing scratch of the fingernails of the right hand up across the palm of the left hand as the core vowel [i] is articulated, followed by the right hand fingers gliding to the top of the fingers of the left hand and stopping there as the [y] offglide is articulated. 
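The vowel-to-touch pairing above is, in effect, a small lookup table. Here is a minimal, illustrative sketch of that idea (the names and structure are mine, not part of the HaPT materials): same visual-field location, distinguished only by touch type.

```python
# Illustrative sketch (not official HaPT notation): three high front vowels
# share one location in the visual field but differ in the touch that anchors them.
TOUCH_TYPES = {
    "i":  "hold",           # brief hold of the hands during articulation
    "I":  "tap",            # quick, sharp tap touch
    "iy": "scratch+glide",  # glancing scratch across the palm, then glide to the fingertips
}

def distinguished_by_touch(a, b):
    """True if two vowels at the same visual-field location differ in touch type."""
    return TOUCH_TYPES[a] != TOUCH_TYPES[b]
```

So, for example, `distinguished_by_touch("i", "I")` is true even though both vowels occupy the same spot in the visual field.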

In the same way, the potentially "tactemic" finger touch points around the upper body and visual field provide strong, memorable anchors for varied sounds, words and sound patterns or processes. The tactile memory and touch differentiation in the hands is striking. If you'd like to learn more about the KINETIK system, we'd be happy to "give you a hand," of course!

Source: Saal, H. P., Birznieks, I., & Johansson, R. S. (2023). Memory at your fingertips: how viscoelasticity affects tactile neuron signaling. eLife 12:RP89616. https://doi.org/10.7554/eLife.89616.1

Saturday, October 1, 2022

What comes next in pronunciation teaching! (Why being in touch is so important!)

An intriguing new study by researchers at the University of East Anglia, Aix-Marseille University and Maastricht University, summarized by Neurosciencenews.com as "How the Sounds We Hear Help Us Predict How Things Feel" (title and actual empirical findings to be revealed later, with no link to the actual study itself, other than a note that it will appear in Cerebral Cortex).

I am, nonetheless, delighted to take their word for it since I LOVE the conclusions and find them "touching!" Apparently they have uncovered yet another "new" type of connection between sound and touch or tactile processing. The key finding from the summary:  

“ . . . research shows that parts of our brains, which were thought to only respond when we touch objects, are also involved when we listen to specific sounds associated with touching objects. (Italics, mine.) This supports the idea that a key role of these brain areas is to predict what we might experience next, from whatever sensory stream is currently available.”

Across this unique, recently discovered circuit, for example, when we hear a sound, like that of a single consonant, the brain in principle simultaneously connects it with the physical sensations (touch, vocal resonance, micro-movements involved in producing it) associated with articulating it. If the focus is a word, on the other hand, we assume that other multiple, analogous circuits come into play that link to other dimensions. But the "touch" circuit has those unique properties. 

So what might that mean in the classroom, especially for pronunciation and effectiveness? (I'll get to haptic pronunciation later, of course!) For one thing, (NO SURPRISE HERE!) a sound may be associated with the somatic (body) sensations in the vocal tract but not necessarily with the concept or phoneme, the phonological complex/nexus, or the graphemic representation itself. It is as if the sound points at the body, not the "brain" as a whole. 
 
On the other "hand," any number of other words could have virtually identical "points of impact" on the body, associated with the same vowel "sound." The same may apply to a word articulated simultaneously with a gesture, or any experience associated with a sound, one heard or self-generated. That circuit connects the auditory image to at least the "body," but not necessarily to one concept. 

Then what is the "workaround" for bringing together the multisensory event termed a "word," assuming that it has been learned truly "multi-sensorially," that is, with as many senses as possible, or at least a "quorum level" of them, as vividly or intensely engaged as possible? 

In a sense, the "answer" is in the question: consistent, rich multisensory engagement. There are an almost infinite number of ways to accomplish that, of course, but haptic pronunciation teaching, based on touch-anchored speech-synchronized gesture attempts to do that, systematically. In principle, any sound, word or sound process can be experienced as a nexus involving: 
  • the physical sensation of articulating the sound/process
  • the auditory features of the sound (acoustic)
  • a concept (in the case of a word or, in some cases, patterns of pitch movement)
  • a gesture that involves hands touching with each other or the body, in some manner that mimics either the nature of the sensations involved in articulation or the "shape" of the concept itself, such as hands rising on a rising pitch or intonation, or hands positioned high in the visual field to represent a "high" vowel.  
According to the study, the use of haptic, touch-anchored gesture should strengthen considerably the connection between the concept associated with the gesture and the sound by "pointing" to the body-sensations involved in articulating the sound.

 And, of course, from our perspective, KINETIK (method) is what is coming next! 

 Source: https://neurosciencenews.com/auditory-tactile-processing-21279/


Sunday, December 27, 2020

New "NewBees'" Haptic Pronunciation course!

Want to teach pronunciation but have no training and no time in class to do it even if you knew how? 

We have a great new course for you: Acton Haptic Pronunciation: Content Complement System (AHP-CCS). 

It has been created so that you can use haptic pronunciation techniques (gesture controlled by touch) to:

  • Improve memory for content you are teaching (in speaking, listening, reading, grammar, vocabulary, stories, concepts, etc.)
  • Improve expressiveness, emphasis, and intelligibility
  • Improve impact of modeling, feedback and correction
  • Improve class engagement on Zoom
  • Provide a way to work with pronunciation (on the spot) in any type of class
Specifics: 
  • (Ideally) You study with another person who teaches the same type of student 
  • 12 week course/4 modules/12 lessons. 
  • The first ones begin on 3/25, and others can start anytime after that, whenever there is a minimum of two students who want to do the course. 
  • 60 minutes of practice on your own per week 
  • 30 minutes of homework (on your own or with your friend) per week
  • a 45 minute Zoom session each week, the two of you (usually on Saturday) working with a "Haptician" who also has experience teaching students of that age and level 
  • Haptician: trained by Bill Acton in Haptic Pronunciation Teaching (HaPT)
  • Cost: 
    • 1 person ($1600 CAD each) - not recommended, but possible. 
    • 2 people together ($800 CAD each or $200 per module) - best plan, especially if you are friends! 
    • 3 people together ($600 CAD each or $150 per module) - OK if you are working together!  
    • 4 people together ($400 CAD each or $100 per module) 
    • (Locals.com subscription, $5 CAD monthly, also required to take an AHP-CCS course)

Designed for those 

  • with little or no previous training in phonetics or pronunciation teaching
  • who are teaching content classes or language classes
  • who are teaching students of any age or proficiency
  • who have a colleague or friend that they can do the class with (if not, maybe we can find one for you!) 
  • who have two or three hours a week for the course
  • who would like to be part of a community of people who love teaching pronunciation and other things!
  • who are on a tight budget!
More details: 
  • Weekly Zoom sessions focus on how to use the pedagogical movement patterns (PMPs) of the lesson in your class
  •  Both you and your friend should ideally be teaching or have taught the same kind of students if at all possible
  • Certificate awarded after completion of the last Module!
  • All materials furnished
  • Basic training materials are designed to be used with students of any age and proficiency level, in class or out of class. 
Courses begin on 3/25/2021

For more information: Contact info@actonhaptic.com and go to actonhaptic@Locals.com

Friday, November 27, 2020

Motivation to do Pronunciation work: Smell-binding study!

Rats! Well . . . actually . . . mice who are motivated to (voluntarily) exercise more are genetically set up or developed to have better, more discriminating vomeronasal glandular structure. Is that big, or what? Check out the Neuroscience News summary of this unpublished study by Haga-Yamanaka, Garland and colleagues at UC-Riverside, forthcoming in PLOS ONE, "Exercise Motivation Could Be Linked to Certain Smells." I LOVE the researchers' potential application of the research: 

“It’s not inconceivable that someday we might be able to isolate the chemicals and use them like air fresheners in gyms to make people even more motivated to exercise,” Garland said. “In other words: spray, sniff, and squat.”

Being a runner myself, I especially like the study, since it uses mice who are what they term "high runners!" Admittedly, it is a bit of a stretch to jump to the gym and then to the ELT/pronunciation classroom from the study, but the reality of how smell affects performance is well established in several disciplines--and probably in your classroom as well! 

Decades ago, a colleague who specialized in olfactory therapies and was a consultant in the corporate world on creating good-smelling work spaces, etc., sold me on the idea of using a scent generator in my pronunciation teaching. It required mixing two or three oils to get students in the mood to do whatever I wanted them to do . . . better. Back then it seemed to be effective, but there was little research to back it up, and it was before we were forced to work in "scent-free" and other things-free spaces.

What is interesting about the study for our work is the connection between persistence in physical exercise and heightened general sensory awareness, and the way smell in this case is enhanced. My guess is that touch, foundational in haptic pronunciation teaching, is keyed in similar ways. Gradually, as students practice consistently with the gross and fine motor gestural patterns, what we call pedagogical movement patterns, their differential use of touch increases. (An earlier post identifies over two dozen "-emic" types of touch in the system.) In other words, touch becomes more and more powerful/effective in anchoring sound change and memory for it. 

That insight is central to the new haptic pronunciation teaching system, Acton Haptic Pronunciation Complement--Rhythm First, which will be rolled out early in 2021. (For preliminary details on that, check out the refurbished Acton Haptic website! )



Sunday, July 19, 2020

Fixing your eyes on better pronunciation--or before it!

Early on in the development of haptic pronunciation teaching, we began by borrowing a number of techniques from Observed Experiential Integration therapy, developed by Rick Bradshaw and colleagues about 20 years ago. OEI has proved to be particularly effective in the treatment of PTSD. In OEI, one of the basic techniques is the use of eye tracking; that is, therapists carefully control the eye movements of patients, in some cases stopping at places in the visual field to "massage" points through various loops and depth-of-field tracking.

We discovered, in attempting to control students' eye movement (having them follow with their eyes the track of the gestures across the visual field being used to anchor sounds during pronunciation work), that although memory for sounds seemed better, holding attention for such extended lengths of time could be really counterproductive. In some cases, students even became slightly dizzy or disoriented after only a few minutes. (And, in retrospect, we were WAY out of our league . . . )

Consequently, attention shifted to visual focus on only the terminal point in the gestural movement where the stressed syllable of the word or phrase was located, where the hands touched. We have been using that protocol for about a decade.

Now comes a fascinating study by Badde et al., "Oculomotor freezing reflects tactile temporal expectation and aids tactile perception," summarized by ScienceDaily.com, that helps refine our understanding of the relationship between eye movement and touch in focusing attention. In essence, what the research demonstrated was that by stopping or holding eye movement just prior to when a subject was to touch a targeted object, the intensity of the tactile sensation was significantly enhanced. Or, the converse: random eye movement prior to touch tended to diffuse or undermine the impact of touch. That helps explain something . . .

The rationale for haptic pronunciation teaching is, essentially, that the strategic use of touch both successfully manages gesture and much more effectively focuses the placement of stressed syllables in words accompanying the gesture in gesture-synchronized speech. In almost all cases, the eyes focus in on the hand about to be touched--just prior to what we term the TAG (touch-activated ganglia), where touch literally "brings together" or assembles the sound, body movement, vocal resonance, and the graphic visual schema and meaning of the word or phoneme itself.

In other words, the momentary freezing of eye movement an instant before the touch event should greatly intensify the resulting impact and later recall produced by the pedagogical strategy. We knew it worked, just didn't really understand why. Now we do.

Put your current pronunciation system on hold for a bit . . . and get (at least a bit) haptic!

Original source:
Stephanie Badde, Caroline F. Myers, Shlomit Yuval-Greenberg, Marisa Carrasco. Oculomotor freezing reflects tactile temporal expectation and aids tactile perception. Nature Communications, 2020; 11 (1) DOI: 10.1038/s41467-020-17160-1

Saturday, May 2, 2020

Killing pronunciation 12: Memory for new pronunciation: Better heard (or felt) but not seen!

Another in our series of practices that undermine effective pronunciation instruction!

(Maybe) bad news from visual neuroscience: You may have to dump those IPA charts, multi-colored vowel charts, technicolor x-rays of the inside of the mouth, dancing avatars--and even haptic vowel clocks! Well . . . actually, it may be better to think of those visual gadgets as something you use briefly in introducing sounds, for example, but then dispose of them or conceptually background them as quickly as possible.

A new study by Davis et al. at the University of Connecticut, "Making It Harder to 'See' Meaning: The More You See Something, the More Its Conceptual Representation Is Susceptible to Visual Interference," summarized by Neurosciencenews.com, suggests that visual schemas of vowel sounds, for example, could be counterproductive--unless, of course, you close your eyes . . . but then you can't see the chart in front of you, of course. 

Subjects were basically confronted with a task where they had to try and recall a visual image, physical sensation or sound while being presented with visual activity or images in their immediate visual field. The visual "clutter" interfered substantially with their ability to recall the other visual "object" or image, but it did not impact their recall of other sensory "images" (auditory, tactile or kinesthetic), such as non-visual concepts like volume, heat, or energy, etc.

We have had blogposts in the past that looked at research where it was discovered that it is difficult to "change the channel," such that if a student is mispronouncing a sound, many times just trying to repeat the correct sound instead, without introducing a new sensory or movement set to accompany the new sound, is not effective. In other words, an "object" in one sensory modality is difficult to simply "replace"; you must work around it, in effect attaching other sensory information to it (cf. multi-modal or multi-sensory instruction).

So, according to the research, what is the problem with a vowel chart? Basically this: the target sound may be primarily accessed through the visual image, depending on the learner's cognitive preferences. I only "know" or suspect that from years of tutoring and asking students to "talk" me aloud through their strategies for remembering the pronunciation of new words. It is overwhelmingly by way of the orthographic representation, the "letter" itself, or its place in a vowel chart or listing of some kind. (Check that out yourself with your students.)

So . . . what's the problem? If your "trail of bread crumbs" back to a new sound in memory is through a visual image of some kind, then if you have any clutter in your visual field that is the least bit distracting as you try to recall the sound, you are going to be much less efficient, to put it mildly. That doesn't mean you can't teach using charts, etc., but you'd better be engaging more of the multisensory system when you do, or your learners' access to those sounds may be very inefficient, at best--or downgrade their importance in your method appropriately. 

In our haptic work we have known for a decade that our learners are very susceptible to being distracted by things going on in their visual field that pull their attention away from experiencing the body movement and "vibrations" in targeted parts of their bodies. Good to see "new-ol' science" is catching up with us!

I've got a feeling Davis et al are on to something there! I've also got a feeling that there are a few of you out there who may "see" some issues here that you are going to have to respond to!!!




Saturday, March 14, 2020

Pronunciation in the eyes of the beholder: What you see is what you get!

This post deserves a "close" read. Although it applies new research to exploring basics of haptic pronunciation teaching specifically, the complex functioning of the visual field, itself, and eye movement in teaching and learning, in general, is not well understood or appreciated.

For over a decade we have "known" that there appears to be an optimal position in the visual field in
front of the learner for the "vowel clock" or compass in basic introduction in haptic pronunciation teaching to the (English) vowel system. Assuming:
  • The compass/clock below is on the equivalent of an 8.5 x 11 inch piece of paper
  • About .5 meters straight ahead of you 
  • With the center at eye level--or of equivalent relative size on the board, wall or projector, 
  • Such that, if the head does not move, 
  • The eyes will be forced at times to move close to the edges of the visual field 
  • To lock on or anchor the position of each vowel (some vowels could, of course, be positioned in the center of the visual field, such as schwa or some diphthongs.) 
  • Add to that simultaneous gestural patterns concluding in touch at each of those points in the visual field (www.actonhaptic.com/videos) 
Something like this:

  • (Northwest/North, upper left): 11. [uw] "moo"; 10. [ʊ] "cook"
  • (Northeast, upper right): 1. [iy] "me"; 2. [I] "chicken"
  • (West, left): 9. [ow] "mow"; 8. [Ɔ] "salt"
  • (eye level, center)
  • (East, right): 3. [ey] "may"; 4. [ɛ] "best"
  • (Southwest, lower left): 7. [ʌ] "love"
  • (Southeast, lower right): 5. [ae] "fat"
  • (South, bottom): 6. [a] "hot/water"






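The clock/compass arrangement of vowels amounts to a simple lookup table. Here is a minimal, illustrative sketch of that structure (the names are mine, not part of the KINETIK materials), e.g. for generating practice lists by region of the visual field:

```python
# Illustrative sketch of the haptic "vowel clock": clock position ->
# (vowel symbol as written above, example word, compass region of the visual field).
VOWEL_CLOCK = {
    1:  ("iy", "me",        "Northeast"),
    2:  ("I",  "chicken",   "Northeast"),
    3:  ("ey", "may",       "East"),
    4:  ("ɛ",  "best",      "East"),
    5:  ("ae", "fat",       "Southeast"),
    6:  ("a",  "hot/water", "South"),
    7:  ("ʌ",  "love",      "Southwest"),
    8:  ("Ɔ",  "salt",      "West"),
    9:  ("ow", "mow",       "West"),
    10: ("ʊ",  "cook",      "Northwest/North"),
    11: ("uw", "moo",       "Northwest/North"),
}

def example_words(region):
    """Example words anchored in a given compass region of the visual field."""
    return [word for _, word, r in VOWEL_CLOCK.values() if r == region]
```

For instance, `example_words("East")` collects the practice words for the right-hand side of the field, in clock order.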
Likewise, we were well aware of previous research by Bradshaw et al. (2016), for example, on the function of eye movement and position in the visual field related to memory formation and recall. A new study, "Eye movements support behavioral pattern completion," by Wynn, Ryan, and Buchsbaum of Baycrest's Rotman Research Institute, summarized by Neurosciencenews.com, seems (at least to me) to unpack more of the mechanism underlying that highly "proxemic" feature.

Subjects were introduced to a set of pictures of objects positioned uniquely on a video screen. In phase two, they were presented with sets of objects containing both the original and new objects, in various conditions, and tasked with indicating whether they had seen each object before. What they discovered was that in trying to decide whether the image was new or not, subjects' eye patterning tended to reflect the original position in the visual field where it was introduced. In other words, the memory was accessed through the eye movement pattern, not "simply" the explicit features of the objects, themselves. (It is a bit more complicated than that, but I think that is close enough . . . )

The study is not claiming that the eyes are "simply" using some pattern reflecting an initial encounter with an image, but that the overt actions of the eyes in recall is based on some type of storage or processing patterning. The same would apply to any input, even a sound heard or sensation with the eyes closed, etc. Where the eyes "land" could reflect any number of internal processing phenomena, but the point is that a specific memory entails a processing "trail" evident in or reflected by observable eye movements--at least some of the time!

To use the haptic system as an example . . . in gesturing through the matrix above, not only is there a unique gestural pattern for each vowel; if the visual display is positioned "close enough" so that the eyes must also move in distinctive patterns across the visual field, you also have a potentially powerful process or heuristic for encoding and recalling sound/visual/kinesthetic/tactile complexes.

So . . . how do your students "see" the features of L2 pronunciation? Looking at a little chart on their smartphone or on a handout or at an LCD screen across the room will still entail eye movement, but of what and to what effect? What environmental "stimulants" are the sounds and images being encoded with and how will they be accessed later? (See previous blogpost on "Good looking" pronunciation teaching.)

There has to be a way, using my earlier training in hypnosis, for example, to get at learner eye movement patterning as they attempt to pronounce a problematic sound. I would love to compare "haptic" and "non-haptic-trained" learners. Again, our informal observation with vowels, for instance, has been that students may use either or both the gestural or eye patterning of the compass in accessing sounds they "experienced" there. Let me see if I can get that study past the human subjects review committee . . .

Keep in touch! v5.0 will be on screen soon!

Source: Neurosciencenews.com (April 4, 2020). Our eye movements help us retrieve memories.


Wednesday, February 19, 2020

RHYTHM FIRST (new) pronunciation teaching technique: Haptic Side Step!

Full disclosure: the following post includes explicit dance and intrapersonal touching, something of a follow-up to two recent posts:
What is new here is the active, simultaneous use of feet, literally and figuratively. The idea is that much of the basics of English pronunciation and practice can (and should) be taught to the beat of the rhythmic feet of the text being spoken. The tempo will vary but the “dance step” is essentially the same.
  • All text used at the beginning should be staged/indicated on paper, or expressed or broken up into rhythmic feet (groups of 1~9 syllables in this system, although in the classical sense a "foot" is usually limited to 3 or 4 syllables). For example: 
    • The stressed syllable / in the word or phrase / should, in general, / be highlighted / (underlined or boldfaced / for example.) 
  • The body moves gently from side to side, to the rhythm of the designated rhythmic feet, using what we call a "haptic side step," where the forefoot comes down on the stressed element. 
  • See short video of me "DEMONSPLAINING" how the basic procedure works in a clip from a recent presentation at UBC. (It is especially clear in the second part of the 15 minute video.) Password: HaPT-Demo3
  • As noted in the video, in haptic pronunciation work the upper body may also be simultaneously executing various touch-based pedagogical (gesture) movement patterns related to a targeted pronunciation feature, such as a vowel sound or key word, a rhythm or intonation pattern, etc.  
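The foot-marking convention in the example above (slashes delimiting feet, with stressed elements highlighted) is easy to work with programmatically; here is a minimal, illustrative sketch (the function name is mine), e.g. for preparing staged texts:

```python
# Illustrative sketch: split a slash-delimited "rhythmic feet" marking,
# like the example above, into a list of feet ("/" separates feet).
def parse_feet(marked_text):
    """Return the rhythmic feet of a slash-marked text, whitespace trimmed."""
    return [foot.strip() for foot in marked_text.split("/") if foot.strip()]

feet = parse_feet(
    "The stressed syllable / in the word or phrase / should, in general, / be highlighted"
)
```

Each foot can then be displayed on its own beat of the side step, one stressed element per foot.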
The "side step" has been developed over the last five years as an optional feature of more advanced accent modification work. The rest of the full, full-body version of the haptic system, Haptic Pronunciation Teaching, v5.0: RHYTHM FIRST! will be rolled out later this fall.

In the meantime, try some form of that basic technique in class with any simple dialogue, word list, or even spontaneous chat (as I do on the video) and, as usual, report back!

The technique will be featured at the next webinar, March 27th and 28th. (Contact: info@actonhaptic.com for further information.)

Caveat emptor: This looks easy.

Wednesday, June 5, 2019

Anchoring L2 pragmatics (language use and context) with touch and prosody

New article just published with Burri and Baker, Proposing a haptic approach to facilitating L2 learners’ pragmatic competence. This piece is relatively practical, focusing on use of haptic pronunciation teaching pedagogy for enhancing instruction and memory recall of the stuff of pragmatics: conversational conventions, politeness, indirectness, presupposition, implicature, irony . . . . humor!

It is based on three . . . well . . . presuppositions. First is that it is often really difficult to remember meaning and words that occur in only very narrowly defined situations. Second, one of the key functions of pronunciation is helping to anchor expressions and their contexts in memory. Third, touch-moderated gesture (as in haptic pronunciation teaching) is a better way to do that.

 This is also a pretty good introduction to haptic pronunciation teaching. Of course, if you want more (and you will!), join us in our next webinars July 12th and 13th!  For reservations for the webinar: info@actonhaptic.com 

Saturday, March 30, 2019

Under (or between) cover pronunciation teaching: CHIP

Here is an instructive tale, describing a situation that may actually be becoming even more common, ironically, as textbooks "improve" and demands on teachers to do more and more "bookkeeping"--as opposed to teaching--increase.

Heard recently from a reliable source at a well-paying language school where there is (a) an unbelievably detailed curriculum, right down to near minute-by-minute classroom instructions and draconian oversight, (b) all books provided, no teacher choice or adaptation allowed, and (c) at least three core, nonnegotiable methodological principles: no grammar, no vocabulary and no pronunciation. (There are virtually none of those in the lesson plans.) The curriculum, although basically English for Academic Purposes, is essentially extensive reading, free conversation and writing-centered. And when they say "no pronunciation" . . . they mean it!

Now, granted that is a little extreme, even for a profitable North American sweat shop, but around the rest of the world, it isn't at all. The root cause may be different, of course, but the result is the same: Teach the book or you are gone!

So . . . if you were teaching there and you believed that pronunciation work is essential (to both intelligibility and, well . . . encoding and memory recall), what would you do and not lose your job in the process? Seriously, if you have an effective workaround where you teach it (anonymously, of course), comment on this post and tell us. I have my grad students working on it, too, and will report back after they finish their research papers.

Not surprisingly, we have one answer: Covert Haptic-Integrated Pronunciation, or CHIP. It works like this: systematically map onto any language used in the classroom some kind of gesture or body-synchronized movement. In the covert version, you can't talk about pronunciation or explain too much without giving away the game, but if it is apparently spontaneous and done consistently, there are ways.

In the "regular" version of Haptic Pronunciation Teaching (HaPT-Eng), v5.0:

(a) We begin with some kind of very brief mini-lesson (~5 minutes) where learners are introduced to a sound (or sound process) and then briefly embody/practice it, accompanied by specifically designed pedagogical gestures. That is just to introduce mind and body to the "embodied pronunciation schema" (EPS).

(b) Next, either by design or when an obvious opportunity or need comes up in the lesson plan, the gestural set is mapped onto language being learned or practiced. That may or may not involve a little explicit, verbal explanation or reminder, pointing back to the EPS mini-module. The "learning," in a very real sense, happens here, with embodied practice, in what we call "initial interdictions" or IIDs, pronounced: I-Ds.

(c) From then on, anytime pronunciation feedback, modelling or correction will be advantageous, the gestural mapping is used, without accompanying explanation or focus, in "subsequent interdictions" or SIDs, pronounced: sids.

(d) Ideally, best case, pronunciation that is "body-lighted" in class is then automatically or routinely assigned to homework practice, using the same gestural complex. In other words, speaking out loud with accompanying gesture.

In one way or another, however, the key is still EPS, the initial, embodied understanding of how (and with what) to change pronunciation, consistently, over time. The general model is termed: EPS*AIC (embodied pronunciation schema, applied in the integrated classroom).
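For readers who like to see process laid out as structure, the EPS*AIC flow above can be sketched as a tiny data model. To be clear, the names here (Phase, EPS_AIC, next_phase) are my own illustrative inventions, not part of HaPT-Eng itself:

```python
# Illustrative sketch of the EPS*AIC lesson flow described above.
# All names are hypothetical; this is not HaPT-Eng software.

from dataclasses import dataclass

@dataclass
class Phase:
    code: str   # short label used in the post
    name: str
    focus: str

EPS_AIC = [
    Phase("EPS", "Embodied pronunciation schema",
          "~5-minute mini-lesson: sound introduced with pedagogical gesture"),
    Phase("IID", "Initial interdiction",
          "gestural set mapped onto language being learned or practiced"),
    Phase("SID", "Subsequent interdiction",
          "gesture-only feedback/correction, no accompanying explanation"),
    Phase("HW",  "Homework practice",
          "speak out loud with the same gestural complex"),
]

def next_phase(current: str) -> str:
    """Return the phase that follows `current` (homework just repeats)."""
    codes = [p.code for p in EPS_AIC]
    i = codes.index(current)
    return codes[min(i + 1, len(codes) - 1)]
```

The point of the sketch is simply that the flow is linear and one-way: everything downstream of the EPS mini-lesson presupposes it.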

The covert version, not recorded in lesson plans or done when hostile observers are in the room, begins with a basic IID, done with as little verbal rationale as possible, and is followed up with SIDs, whenever. For most learners, just mapping on gesture, either modeling it with no comment or having them do it with the instructor, is good, especially with kids. That relationship is, of course, something of the core of empathetic communication in all cultures and face-to-face interaction. (See forthcoming blogpost on that!)

Good, I-D, eh? Tell us how you teach pronunciation successfully, covertly. 

And . . . remember to sign up for next Haptic Webinar, May 17th and 18th (email: info@actonhaptic.com)


Tuesday, January 22, 2019

Differences in pronunciation: Better felt than seen or heard?

This feels like a "bigger" study, maybe even a new movement! (Speaking of new "movements", be sure to sign on for the February haptic webinars by the end of the month!)

There are any number of studies in various fields exploring the impact of racial, age or ethnic "physical presence" (what you look like) on perception of accent or intelligibility. In effect, what you see is what you "get!" Visual will often override audio--what the learner actually sounds like. Actually, that may be a good thing at times . . .

Haptic pronunciation teaching and similar movement-based methods use visual-signalling techniques, such as gesture, to communicate with learners concerning status of sounds, words and phrases. Exactly how that works has always been a question.

Research by Collegio, Nah, Scotti and Shomstein of George Washington University, summarized by Neurosciencenews.com as "Attention scales according to inferred real-world object size," points to something of the underlying mechanism involved: perception of relative object size. The study compared subjects' reaction or processing time when attempting to identify the relative size of objects (as opposed to the size of the image of the object presented on the screen). What they discovered is that, regardless of the size of the images on the screen, the objects that were in reality larger consistently occupied more processing time or attention.

In other words, the brain accesses a spatial model or template of the object, not just the size of the visual image itself in "deciding" if it is bigger than an adjacent object in the visual field. A key element of that process is the longer processing time tied to the actual size of the object.

How does this relate to gesture-based pronunciation teaching? In a couple of ways, potentially. If students have "simply" seen the gestures provided by instructors (e.g., Chan, 2018) and, for example, in effect have just been commanded to make some kind of adjustment, that is one thing. The gesture is, in essence, a mnemonic, a symbol, similar to a grapheme, a letter. The same applies to such superficial signalling systems as color, numbers or finger contortions.

If, on the other hand, the learner has been initially trained in using or experiencing the sign, itself, as in sign language, there is a different embodied referent or mapping, one of experienced physical action across space.

In haptic work, adjacent sounds in the conceptual and visual field are first embodied experientially. Students are briefly trained in using three different gesture types, distinctive lengths and speeds, accompanied by three distinctive types of touch. In initial instruction, students do exercises where they experience physically combinations of those different parameters as they say the sounds, etc.

For example, the contrastive gestural patterns (done as the sound is articulated) for [I], [i], [i:], and [iy] are progressively longer and more complex (see linked video models):
a. Lax vowels, e.g., [I] ("it") - Middle finger of the left hand quickly and lightly taps the palm of the right hand.
b. Tense vowels, e.g., [i] ("happy") - Left and right hands touch lightly with fingertips, momentarily.
c. Vowel before voiced consonant, e.g., [i:] ("dean") - Left hand pushes right hand, with palms touching, firmly 5 centimeters to the right.
d. Tense vowel plus off-glide, e.g., [iy] ("see") - Fingernails of the left hand drag across the palm of the right hand and, staying in contact, then slide up about 10 centimeters and pause.
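If it helps to see the progression as a lookup table, here is a sketch of the four patterns. The descriptions paraphrase the list above; the table and function names are my own hypothetical shorthand, not anything from the HaPT-Eng materials:

```python
# Illustrative lookup table for the four contrastive gesture patterns above.
# Keys use the post's vowel notation; names are hypothetical.

GESTURE_FOR_VOWEL = {
    "I":  ("lax",
           "middle finger of left hand quickly, lightly taps right palm"),
    "i":  ("tense",
           "left and right hands touch lightly with fingertips, momentarily"),
    "i:": ("pre-voiced-consonant",
           "left hand pushes right hand, palms touching, 5 cm to the right"),
    "iy": ("tense + off-glide",
           "fingernails drag across right palm, slide up ~10 cm, pause"),
}

def describe_gesture(vowel: str) -> str:
    """Render one row of the table as a practice prompt."""
    kind, movement = GESTURE_FOR_VOWEL[vowel]
    return f"[{vowel}] ({kind}): {movement}"
```

Note how each successive entry adds contact time and distance: that increasing physical "weight" is the embodied analogue of the increasing vowel length.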

The same principle applies to most sets of contrastive structures and processes, such as intonation, rhythm and consonants. See what I mean, why embodied gesture for signalling pronunciation differences is much more effective? If not, go here, do a few haptic pedagogical movement patterns (PMPs) just to get the feel of them and then reconsider!

Sunday, November 11, 2018

Beyond gesture: when visual-auditory-kinesthetic is not enough in pronunciation teaching!

Haptic engagement (adding touch to gesture) in pronunciation teaching began in 2005, in response to a number of potential problems of using "simple" gesture in the classroom:
  • Inconsistency of results! 
    • Sometimes gesture seems to work well in learning and recalling pronunciation:
      • As a motivator or generator of enthusiasm and releasing inhibitions, it can be terrific . . . sometimes!
      • But sometimes not, depending on a number of factors. Research on efficacy of general gesture use in teaching has been consistently inconclusive, at best. Part of the reason for that, of course, is that the phonological distinctions, themselves, may be perceptually relatively ambiguous as well, such as that between [i], as in "seat" and [I] as in "sit" in English for learners of many other L1s.
  • Some individuals and cultures are more "gesticular" than others.
    •  Some of us are just better performers and more comfortable with having others mirror our movement in public. We found the teachers in Costa Rica to be some of the most naturally "haptic" in that regard!
    • Some of us are just not wired for it. In a few cases, the ambidextrous or the highly visually eidetic (those with photographic visual memory) may find this kind of teaching unsettling, to put it mildly. (But with careful control and use of pre-recorded video models, most can successfully work with the haptic system.)
  • Teacher training
    • It has turned out, not surprisingly, to be exceedingly difficult to train teachers to use a common set of pedagogical gestures, especially when training is done online and not f2f. Our haptic pronunciation training here on campus has been very successful, but it goes on for 12 weeks, 2 or 3 hours per week. (But see note and links at the bottom for a new option next month!)
The intuitive "solution" turned out to be relatively straightforward:
  • Anchor gesture with touch on stressed syllables or prominent words in phrases and sentences.
  • Create instructional videos (of me, for the most part) that did the teaching, instead of requiring individual teachers to do it themselves, at least initially. 
The underlying reason or research justification for why that approach "worked" only emerged gradually . . . and recently, in fact! A new study by Fairhurst, M., Travers, E., Hayward, V. and Deroy, O. of Ludwig-Maximilians-Universität München, Confidence is higher in touch than in vision in cases of perceptual ambiguity, provides a striking piece of the puzzle.

In the experiment, subjects basically had to judge the relative length of two sticks. When the difference was more obvious, they relied solely on vision. When the difference was visually very close or ambiguous, however, they turned to touch to determine which was longer--even though the actual difference in length was insignificant. In other words, with touch their judgments were significantly more confident. In effect, "Seeing, as the expression goes, may be believing, but feeling is truth."

The main effect addresses the problem of movement and gesture being potentially difficult to locate consistently in the visual field of the learner and instructor. Although a pattern itself may look "the same" when performed at different locations in front of the learner, it may well not be recognized or remembered as such. (That has always been our experience.)  Unless you apply the magic . . . touch!

Touch linked to gestural patterns, such as those for tense vowels with off-glides, where the touch occurs on the stressed vowel in a word or phrase, not only consolidates the voice and hand/arm movement and helps identify more consistent locations for the pedagogical gestures, but also gives learners confidence in finding them in the first place. That is especially the case where two sounds or patterns are both conceptually and phonologically in very close proximity, such as the space/distinction between [iy] and [ey] in English in this demonstration video from haptic pronunciation training, version 2.0.

Now I realize this may all be a bit hard to "grasp" at first, but after you have had just a "touch" of haptic work, it makes perfect sense!

Need to know more and be trained in Haptic Pronunciation Training? Go here and then sign up here!

Source:
Confidence is higher in touch than in vision in cases of perceptual ambiguity, Scientific Reports, volume 8, Article number: 15604 (2018)



Sunday, August 26, 2018

It's not what you learn but where: how visual context matters

If you have seen this recent research study, Retinal-specific category learning, by Rosedahl, Eckstein and Ashby of UC-Santa Barbara (summarized by Science Daily), I have a few questions for you. (If not, read it at eye level or, better, just above, holding whatever it is on accordingly.)
  • Where did that happen (Where was your body; in what posture did it happen)?
  • What media (paper, computer, etc.) did it happen on?
  • What was your general emotional state when that happened? 
  • What else were you doing while you internally processed the story? (Were you taking notes, staring out the train window, watching TV . . . ?)
  • Where in your visual field did you read it? If it was an audio source, what were you looking at as you listened to it?
Research in neuroscience and elsewhere has demonstrated that any of those conditions may significantly impact perception and learning. Rosedahl et al. (2018) focuses on the last condition: position in the visual field. What they demonstrated was that what is learned in one consistent or typical place in the visual field tends not to be recognized as well if it appears later somewhere else in the visual field, or at least on the opposing side.

In the study, when subjects were trained to recognize classes of objects with one eye, with the other eye covered, they were not as good at recognizing the same objects with the other eye. In other words, just the position in the visual field appeared to make a difference. The summary in Science Daily does not describe the study in much detail. For example, had the direction of the protocol training been from left to right, that is, learning the category with the left eye (in right-eye-dominant learners), I'd predict that the effect would be less pronounced than in the opposite direction, based on extensive research on the relative differential sensitivity of the left- and right-side visual fields. Likewise, I'd predict that you could find the same main effect just by comparing objects high in the visual field with those lower, at the peripheries. But the conclusion is fascinating, nonetheless.

The relevance to research and teaching in pronunciation is striking (or eye-opening?) . . . If you want learners to remember sound-schema associations, do what you can to not just provide them with a visual schema in a box on paper, such as a (colored?) chart on a page, but consider creating the categories or anchoring points in the active, dynamic three-dimensional space in front of them. That could be a relatively big space on the wall or closer in, right in front of them, in their personal visual space.

One possibility, which I have played with occasionally, is giving students a big piece of paper with the vowels of English displayed around the periphery so that the different vowels are actually anchored more prominently with one eye or the other or "noticeably" higher or lower in the visual field--and having them hold it very close to their faces as they learn some of the vowels. The problem there, of course, is that they can't see anything else! (Before giving up, I tried using transparent overhead projector slides, too, but that was not much better, for other reasons.) 

In haptic pronunciation work, of course, that means using hands and arms in gesture and touch to create a clock-like visual schema about 12 inches away from the body, such that sounds can, in effect, be consistently sketched across designated trajectories or anchored to one specific point in space. For example, we have used in the past something called the "vowel clock," where the IPA vowels of English are mapped on, with the high front tense vowel [i] at one o'clock and the mid back tense vowel [o] at nine o'clock. Something like that.
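For the geometrically inclined, the clock-position idea reduces to a trivial angle calculation. Only the two positions mentioned above ([i] at one o'clock, [o] at nine o'clock) come from the actual vowel clock; the helper name is hypothetical:

```python
# Sketch of the "vowel clock" positions mentioned above. Only [i] and [o]
# are specified in the post; everything else here is illustrative.

VOWEL_CLOCK = {"i": 1, "o": 9}  # hour positions on the clock face

def clock_angle_deg(hour: int) -> float:
    """Angle of a clock position, measured clockwise from 12 o'clock."""
    return (hour % 12) * 30.0  # each hour mark spans 30 degrees
```

The usefulness of the metaphor is exactly this: each vowel gets a stable, reproducible direction in the visual field, not just a cell in a chart.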

In v5.0 of Haptic Pronunciation Training-English (HaPT-Eng), the clock is replaced by a more effective compass-like visual-kinesthetic schema of sorts, where the hands-arms-gesture creates the position in space and touch of various kinds embodies the different vowel qualities of the sounds that are located on that azimuth or trajectory in the visual field. (Check that out in the fall!)

In "regular" pronunciation or speech teaching those sorts of things go on ad hoc all the time, of course, such as when we point with gesture, or verbally point at something in the immediate vicinity, hoping to briefly draw learners' attention. Conceptually, we create those spaces constantly and often very creatively. Rosedahl et al. (2018) demonstrates that there is potentially much more in what (literally) meets the eye.

Source:
University of California - Santa Barbara. (2018, August 15). Category learning influenced by where an object is in our field of vision. ScienceDaily. Retrieved August 23, 2018 from www.sciencedaily.com/releases/2018/08/180815124006.htm


Saturday, April 14, 2018

Out of touch and "pointless" gesture use in (pronunciation) teaching

Two recently published, interesting papers illustrate potential problems and pleasures of gesture use in (pronunciation) teaching. Both authors, unfortunately, misrepresent or implicate haptic pronunciation training.

Note: In Haptic Pronunciation Training-English (HaPT-Eng) there is NO interpersonal touch, whatsoever. A learner's hands may touch each other, or the learner holds something, such as a ball or pencil, that functions as an extension of the hand. Touch typically serves to control and standardize gesture--and integrate the senses--while amplifying the focus on stressed syllables in words or phrases.

This from Chan (2018): Embodied Pronunciation Learning: Research and Practice in special issue of the CATESOL journal on research-based pronunciation teaching:

"In discussing the use of tactile communication or haptic interventions, they (Hişmanoglu and Hişmanoglu, 2008) advise language teachers to be careful. They cite a number of researchers who distinguish high-contact, touch-oriented societies (e.g., Filipino, Latin American, Turkish) from societies that are low contact and not touch oriented (e.g., Chinese, Japanese, Korean); the former may perceive the teacher’s haptic behavior (emphasis mine) as normal while the latter may perceive it as abnormal and uncomfortable. They also point out that in Islamic cultures, touching between people (emphasis mine) of the same gender is approved, but touching between genders is not allowed. Thus, while integrating embodied pronunciation methods into instruction, teachers need to remain constantly aware of the individuals, the classroom dynamics, and the attitudes students express toward these activities."

What Chan means by the "teacher's haptic behavior" is not defined. (She most probably means simply touching--tactile, not "haptic" in the technical sense as in robotics, for example, or as we use it in HaPT-Eng, that is: gesture synchronized with speech and anchored with intra-personal touch that provides feedback to the learner.) For example, to emphasize word stress in HaPT-Eng, in a technique called the "Rhythm Fight Club", the teacher/learner may squeeze a ball on a stressed syllable, as the arm punches forward, as in boxing.

Again: There is absolutely no "interpersonal touch" or tactile or haptic communication, body-to-body, utilized in HaPT-Eng . . . it certainly could be, of course--acknowledging the precautions noted by Chan.

A second study, Shadowing for pronunciation development: Haptic-shadowing and IPA-shadowing, by Hamada, has a related problem with the definition of "haptic". In this nice study, subjects "shadowed" a model, that is, attempted to repeat what they heard (while viewing a script) simultaneously, along with the model. (It is a great technique, one used extensively in the field.) The IPA group had been trained in some "light" phonetic analysis of the texts before attempting the shadowing. The "haptic" group was trained in what was said (inaccurately) to be the Rhythm Fight Club. There was a slight main effect, nonetheless, the haptic group being a bit more comprehensible.

The version of the RFC used was not haptic; it was only kinesthetic (there was no touch involved), just using the punching gesture itself to anchor/emphasize designated stressed syllables in the model sentences. The kinesthetic (touchless) version of the RFC has been used in other studies with even less success! It was not designed to be used without something for the hand to squeeze on the stressed element of the word or sentence, which is what makes it haptic. In that form, the gesture use can easily become erratic and out of control--best case! That is one of the main--and fully justified--reasons for avoidance of gesture work by many practitioners, and it is also the central focus of HaPT-Eng: controlled, systematic use of gesture in anchoring prominence in language instruction.

But a slight tweak of the title of the Hamada piece from "haptic" to "kinesthetic", of course, would do the trick.

The good news: using just kinesthetic gesture (movement w/o touch anchoring), the main effect was discernible. The moderately "bad" news: it was not haptic--which (I am absolutely convinced) would have made the study much more significant--let alone more memorable, touching and moving . . .

Keep in touch! v5.0 of HaPT-Eng will be available later this summer!

Thursday, February 8, 2018

The feeling of how it happens: haptic cognition in (pronunciation) teaching

Am often asked how "haptic" (movement + touch) can enhance teaching, especially pronunciation teaching. A neat new study by Shaikh, Magana, Neri, Escobar-Castillejos, Noguez and Benes, Undergraduate students’ conceptual interpretation and perceptions of haptic-enabled learning experiences, is "instructive". Specifically, the study,

 " . . . explores the potential of haptic technologies in supporting conceptual understanding of difficult concepts in science, specifically concepts related to electricity and magnetism."

Now aside from the fact that work with (haptic) pronunciation teaching should certainly feel at times both "electric and magnetic", the research illustrates how haptic technology, in this case a joy-stick-like device, can help students more effectively figure out some basic, fundamental concepts. In essence, the students were able to "feel" the effect of current changes and magnetic attraction as various forces and variables were explored. The response from students to the experience was very positive, especially in terms of affirmation of understanding the key ideas involved.

The real importance of the study, however, is that haptic engagement is not seen as simply "reinforcing" something taught visually or auditorily. It is basic to the pedagogical process. In other words, experiencing the effect of electricity and magnetic attraction as the concepts are presented results in (what appears to be) a more effective and efficient lesson. It is experiential learning at its best, where what is acquired is more fully integrated cognition, where the physical "input" is critical to understanding, or may, in fact, precede more "frontal" conscious analysis and access to memory. (Reminiscent, of course, of Damasio's 2000 book: The Feeling of What Happens: Body and Emotion in the Making of Consciousness. Required reading!)

An analogous process is evident in haptic pronunciation instruction or any approach that systematically uses gesture or rich body awareness. The key is for that awareness, of movement and vibration or resonance, to at critical junctures PRECEDE explanation, modeling, reflection and analysis, not simply to accompany speech or visual display. (Train the body first! - Lessac)

We are doing a workshop in May that will deal with discourse intonation and orientation (the phonological processes that span sentence and conversational turn boundaries). We'll be training participants in a number of pedagogical gestures that later will accompany the speech in that bridging. To see what some of those used for expressiveness look (and feel) like, go here!

KIT

Source: http://educationaltechnologyjournal.springeropen.com/articles/10.1186/s41239-017-0053-2

Monday, January 29, 2018

Anxious about your (pronunciation) teaching? You’d better act fast!



Probably the most consistent finding in research on pronunciation teaching, from instructors and students alike, is that it can be . . . stressful and anxiety-producing. And compounding that is often the additional pressure of providing feedback or correction. A common response, of course, is just not to bother with pronunciation at all. One coping strategy often recommended is to provide "post hoc" feedback, that is, after the learner or activity is finished, where you refer back to errors in as low-key and supportive a manner as possible. (As explored in previous posts, you might also toss in some deep nasal breathing, mindfulness or holding of hot tea/coffee cups at the same time, of course.) Check that . . .

A new study by Zhang et al., Slow Is Also Fast: Feedback Delay Affects Anxiety and Outcome Evaluation, published in Frontiers in Human Neuroscience, adds an interesting perspective to the problem. What they found, in essence, was that:

  • Learners who tended toward high anxiety responded better to immediate positive feedback than to such feedback postposed, or provided later. The same type of learners also perceived the overall outcomes of the training as lower if the feedback was provided later.
  • Learners who tended toward low anxiety responded equally well to immediate or delayed feedback and judged the training as effective in either condition. There was also a trend toward making better use of feedback as well.
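As a toy decision rule, the two findings boil down to something like the following. This is my own hypothetical framing of the results, not anything proposed by the study:

```python
# Hypothetical sketch of the feedback-timing implication of the study:
# high-anxiety learners benefit from immediate positive feedback;
# low-anxiety learners do fine with either timing.

def feedback_timing(anxiety: str) -> str:
    """Suggest feedback timing for a learner's anxiety tendency."""
    if anxiety == "high":
        return "immediate"
    return "immediate or delayed"
```

In other words, if you cannot tell who in the room is anxiety-prone, immediate feedback is the safer default.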
Just why that might be the case is not explored in depth, but it obviously has something to do with being able to hold the experience in long-term memory more effectively, or with less clutter or emotional interference.

So, if that is more generally the case, it presents us with a real conundrum on how to consistently provide feedback in pronunciation teaching, or any teaching for that matter. Few would say that generating anxiety, other than in the short term, as in getting "up" for tests or so-called healthy motivation in competition, is good for learning. If pronunciation work itself makes everybody more anxious, then it would seem that we should at least focus more on immediate feedback and correction or positive reinforcement. Waiting longer apparently just further handicaps those more prone to anxiety. How about doing nothing?


This certainly makes sense of the seemingly contradictory results of research in pronunciation teaching, showing instructors biased toward less feedback and correction but students consistently wanting more.

How do you provide relatively anxiety-free, immediate feedback in your class, especially if your preference is for delayed feedback? Do you? In haptic work, the regular warm-up preceding pronunciation work is seen as critical to that process. (But we use a great deal of immediate, ongoing feedback.) Other instructors manage to set up a more general nonthreatening, supportive, open and accommodating classroom milieu and "safe spaces". Others seem to effectively use the anonymity of whole-class responses and predictable drill-like activities, especially in oral output practice.


Anxiety management or avoidance? Would, of course, appreciate your thoughts and best practices on this . . . as soon as possible!


Citation: Zhang X, Lei Y, Yin H, Li P and Li H (2018) Slow Is Also Fast: Feedback Delay Affects Anxiety and Outcome Evaluation. Front. Hum. Neurosci. 12:20. doi: 10.3389/fnhum.2018.00020