Tuesday, March 24, 2020

Recipe for curing (Chinese) distaste for pronunciation teaching

Having trouble selling your students on pronunciation, developing an "appetite" for it? Research by Madzharov, "Self-Control and Touch: When Does Direct Versus Indirect Touch Increase Hedonic Evaluations and Consumption of Food," summarized by ScienceDirect, suggests that you may just need to give at least the more self-controlled among them a "hands-on" taste of it to get them to buy in. To quote the abstract:

"The present paper presents four studies that explore how sampling and eating food by touching it directly with hands affects hedonic evaluations and consumption volume."

What they found, however, was that only the high-self-control, disciplined consumers perceived the food to be better tasting and were disposed to eat more of it. For the other subjects (like me, maybe!), adding touch did not appear to enhance either the taste of or the appetite for the food samples in the study. Why that should be the case was not clear, other than the possibility that in the less self-controlled consumers the executive control centers of the brain were already offline, given the direct, unfettered attraction of FOOD!

A few years ago, we had a visiting scholar from China here with us for a year. It took almost the entire time for her to get me to understand how to get Chinese students to buy in to (haptic) pronunciation teaching specifically, but, in general, to more integrated, communicative pronunciation work. My "mistake" had been trying to convince relatively high-control consumers of pronunciation teaching, in this case, to first be more like me: less high-control and more experiential as learners.

It has always been a problem for some, not just Chinese students, to buy into highly gesture-based instruction. But touch was another thing entirely. Most any student can "get" how touch can enhance learning and memory, and can be coaxed into trying some of the gestural, kinesthetic techniques, probably for several reasons: (1) the functions of touch in the haptic system are to carefully control gesture use and (2) to intensify the connection between the gesture and the lexical or phonological target, the word or sound process; (3) it was much easier to present the general, popular research on the contribution of touch to experience and learning; and (4) the concept of somehow getting a learner to work in their least dominant modality, a basic construct in hypnosis, for example, can be the most effective or powerful.

The assumption here is that the metacognitively self-controlled are less likely to be influenced by immediate feelings or impressions, but once that "barrier" is bridged, as touch does so effectively, the relatively novel sensual experience for them has greater impact. Think: men and the power of perfume . . .

In other words, focusing initially on the touch that concluded every gesture made the difference. I have been doing that ever since. Students are much more receptive to trying the gestural techniques once they feel that they have sufficient understanding . . . and then, once they have tried it, focusing more on touch than on gesture, they are "hooked," more able and amenable to sense the power of embodiment in learning pronunciation from then on.

If you have a taste for pronunciation work with Chinese students, what is your recipe?

Keep in touch . . .

Original Source:
Madzharov, A. (2019). Self-Control and Touch: When Does Direct Versus Indirect Touch Increase Hedonic Evaluations and Consumption of Food. Journal of Retailing, 95(4), 170-185. https://doi.org/10.1016/j.jretai.2019.10.009


Thursday, March 19, 2020

Love it or leave it: 2nd language body, voice, pronunciation and identity

Recall (if you can) the first time you were required to listen to or maybe analyze a recording of your voice. Surprising? Pleasing? Disgusting? Depressing? There are various estimates of how much of your awareness of your voice is based on what it "feels" like to you, rather than on your ears, but it is somewhere around 80 percent or so. It turns out your awareness of what your body looks like is similar.

A new study by Neyret, Bellido Rivas, Navarro and Slater, of the Experimental Virtual Environments (EVENT) Lab, University of Barcelona, "Which Body Would You Like to Have? The Impact of Embodied Perspective on Body Perception and Body Evaluation in Immersive Virtual Reality," as summarized by Neuroscience News, found that our simple gut feelings about how (un)attractive our body shape or image is are generally more negative than when we are able to view it more dispassionately or objectively, "from a distance," as it were. Surprise. Using virtual reality technology, subjects were presented with different body types and sizes, among them one that is precisely, to an external observer, the subject's own body shape. Subjects rated their "virtual body" shape more favorably than in their earlier, pre-experiment self-ratings, which had been presented in something analogous to a questionnaire format.

In psychotherapy, the basic principle of "distancing" from emotional grounding is fundamental; there are all sorts of ways to accomplish that, such as visualizing yourself watching yourself do something disconcerting or threatening to you. It is the "step back" metaphor that the brain takes very seriously, if done right.

In this case, when visualizing the shape of your body (or, by extension, your voice as part of the body), you'll see it at least a little more favorably than when you describe it based on how it "feels" internally. That is the reason "body shaming" can work so effectively in some cases, or, in pronunciation work, "accent shaming."

So, how can we use the insights from the research? First, systematic work by learners in critically listening to their own voice should pay off, at least in producing some sense of resignation or even "liking," so that the ear is not automatically tuned to react or avert. (I'm sure there is research on that someplace, but for the life of me I can't find it! Please help out with a good reference on that, if you can!) Is this some long-overdue partial vindication of the seemingly interminable hours spent in the language lab? Could be, in some cases.

Second, once a learner is able to "view" their L2 voice/identity relative to some ideal more dispassionately, it should be easier to work with it and make accommodations. That is one of the central assumptions of the "Lessac method" of voice development, which I have been relying on for over 30 years. It also calls into question the idea that aiming toward an ideal, native-speaker accent is necessarily a mistake. You have to "see" yourself relative to it as more of an outsider, not just from your solar plexus out . . . through your flabby abs, et al. My approach to accent reduction always begins there, before we get to changing anything. Call it voice and body "re-sensitization."

See what I mean? If not, have somebody you don't know read this post to you again at Starbucks . . .

Original Source:
Neyret, S., Bellido Rivas, A. I., Navarro, X., & Slater, M. (2020). Which Body Would You Like to Have? The Impact of Embodied Perspective on Body Perception and Body Evaluation in Immersive Virtual Reality. Frontiers in Robotics and AI. https://doi.org/10.3389/frobt.2020.00031

Saturday, March 14, 2020

Pronunciation in the eyes of the beholder: What you see is what you get!

This post deserves a "close" read. Although it applies new research specifically to exploring the basics of haptic pronunciation teaching, the complex functioning of the visual field itself, and of eye movement in teaching and learning in general, is not well understood or appreciated.

For over a decade we have "known" that there appears to be an optimal position in the visual field in front of the learner for the "vowel clock" or compass used, in haptic pronunciation teaching, for the basic introduction to the (English) vowel system. Assuming:
  • The compass/clock below is on the equivalent of an 8.5 x 11 inch piece of paper,
  • About .5 meters straight ahead of you,
  • With the center at eye level (or of equivalent relative size on the board, wall, or projector),
  • Such that, if the head does not move,
  • The eyes will be forced at times to move close to the edges of the visual field
  • To lock on or anchor the position of each vowel (some vowels could, of course, be positioned in the center of the visual field, such as schwa or some diphthongs),
  • Add to that simultaneous gestural patterns concluding in touch at each of those points in the visual field (www.actonhaptic.com/videos).
Something like this (clock positions radiating out from the center of the visual field, which sits at eye level):
  • 11:00 and 10:00 (Northwest/North): [uw] "moo" and [ʊ] "cook"
  • 1:00 and 2:00 (Northeast): [iy] "me" and [I] "chicken"
  • 9:00 and 8:00 (West, at eye level): [ow] "mow" and [Ɔ] "salt"
  • 3:00 and 4:00 (East, at eye level): [ey] "may" and [ɛ] "best"
  • 7:00 (Southwest): [ʌ] "love"
  • 5:00 (Southeast): [ae] "fat"
  • 6:00 (South): [a] "hot/water"
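For readers who think in code, the clock layout above can be sketched as a simple lookup table plus a bit of trigonometry. This is purely an illustrative sketch under my own assumptions: the names (`VOWEL_CLOCK`, `clock_to_xy`) and the grouping of the compass labels are mine, not part of the haptic system itself.

```python
import math

# Hypothetical sketch: the vowel clock/compass as a mapping from clock hour
# to (vowel symbol, example word, compass region), as laid out above.
VOWEL_CLOCK = {
    11: ("uw", "moo", "Northwest/North"),
    10: ("ʊ", "cook", "Northwest/North"),
    1:  ("iy", "me", "Northeast"),
    2:  ("I", "chicken", "Northeast"),
    9:  ("ow", "mow", "West"),
    8:  ("Ɔ", "salt", "West"),
    3:  ("ey", "may", "East"),
    4:  ("ɛ", "best", "East"),
    7:  ("ʌ", "love", "Southwest"),
    5:  ("ae", "fat", "Southeast"),
    6:  ("a", "hot/water", "South"),
}

def clock_to_xy(hour, radius=1.0):
    """Convert a clock hour (12 at the top, running clockwise) to an
    (x, y) offset from the center of the display, at eye level."""
    angle = math.radians(90 - hour * 30)  # 12 o'clock maps to 90 degrees
    return (radius * math.cos(angle), radius * math.sin(angle))
```

For example, `clock_to_xy(3)` lands at the far right of the field (East, eye level), where [ey] "may" sits in the layout above.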
Likewise, we were well aware of previous research, by Bradshaw et al. (2016), for example, on the function of eye movement and position in the visual field in memory formation and recall. A new study, "Eye movements support behavioral pattern completion," by Wynn, Ryan, and Buchsbaum of Baycrest's Rotman Research Institute, summarized by Neurosciencenews.com, seems (at least to me) to unpack more of the mechanism underlying that highly "proxemic" feature.

Subjects were introduced to a set of pictures of objects positioned uniquely on a video screen. In phase two, they were presented with sets of objects containing both the original and new objects, in various conditions, and tasked with indicating whether they had seen each object before. What they discovered was that in trying to decide whether the image was new or not, subjects' eye patterning tended to reflect the original position in the visual field where it was introduced. In other words, the memory was accessed through the eye movement pattern, not "simply" the explicit features of the objects, themselves. (It is a bit more complicated than that, but I think that is close enough . . . )

The study is not claiming that the eyes are "simply" replaying some pattern from an initial encounter with an image, but that the overt actions of the eyes in recall are based on some type of storage or processing patterning. The same would apply to any input, even a sound heard or a sensation with the eyes closed, etc. Where the eyes "land" could reflect any number of internal processing phenomena, but the point is that a specific memory entails a processing "trail" evident in, or reflected by, observable eye movements--at least some of the time!

To use the haptic system as an example: in gesturing through the matrix above, not only is there a unique gestural pattern for each vowel but, if the visual display is positioned "close enough" that the eyes must also move in distinctive patterns across the visual field, you also have a potentially powerful process or heuristic for encoding and recalling sound/visual/kinesthetic/tactile complexes.

So . . . how do your students "see" the features of L2 pronunciation? Looking at a little chart on their smartphone or on a handout or at an LCD screen across the room will still entail eye movement, but of what and to what effect? What environmental "stimulants" are the sounds and images being encoded with and how will they be accessed later? (See previous blogpost on "Good looking" pronunciation teaching.)

There has to be a way, using my earlier training in hypnosis, for example, to get at learners' eye movement patterning as they attempt to pronounce a problematic sound. I would love to compare "haptic-" and "non-haptic-trained" learners. Again, our informal observation with vowels, for instance, has been that students may use either or both of the gestural and eye patterning of the compass in accessing sounds they "experienced" there. Let me see if I can get that study past the human subjects review committee . . .

Keep in touch! v5.0 will be on screen soon!

Source: Neurosciencenews.com (April 4, 2020), "Our eye movements help us retrieve memories."


Sunday, March 8, 2020

Becoming a great (haptic), "good looking" pronunciation teacher: Modeling

If you are in Vancouver, British Columbia, next month, join us at the joint 2020 BCTEAL and Image Conference. Always a great get-together.

If you haven't done a video of yourself teaching in the last couple of years, you might do that before you read the rest of this post. Better still, record yourself doing pronunciation or conversation work where you, up front, are providing at least some of the pronunciation models. (I have a rubric for that for my grad students. If you'd like a copy, email me.)

I'll be doing a new workshop, "Modeling and correcting pronunciation in and out of class," based on the idea that as an instructor, really of any kind, but especially one doing (haptic) pronunciation work, your dynamic pedagogical body image (DPBI; e.g., Iverson, 2012), your visual model, your physical presence, movement, and gesture in the classroom are worth considering carefully, from several perspectives. How you dress, your pronunciation and accent, the coordination of your speech with your overall body movement in providing models of language, and your general postural presentation all have meaning. When, as in haptic pronunciation work, you are asking students to synchronize some of their speech and gesture with yours, what is in front of them visually can obviously contribute to or detract from instructional effectiveness.

In haptic work, in principle, all aspects of pronunciation can be represented, portrayed, or embodied using gesture and body movement. From that perspective, then, just modeling a word, phrase, clause, or passage involves choreography: demonstrating not only the sound but also the gestural complex that represents it. (To see examples of the earlier v4.5 version of the haptic system, check out the models on the website.)

The same goes for in-class correction, required homework on a form attended to in class, or self-correction by the student. The instructor may present the more appropriate form first, choreographed, and then have the student or students "do" the targeted piece of language/text together (never "repeat after me," always "let's do that together"). All key, necessary pronunciation work is to be embedded, practiced, and synchronized with gesture for at least a week or so as homework, to ensure some degree of anchoring in memory and spontaneous speaking, or at least in aural comprehension.

For most kinds of instruction, what you look like and how you move can be pretty much irrelevant--one of the reasons I love online teaching! For some kinds, however, it does matter, even if that means just cutting down on "clutter" in the visual field up front.

v5.0 will be out before long. This is, nonetheless, a good first step: continually taking a "good look" at the dynamic model you are providing up front for your students, and for yourself.