Sunday, August 28, 2016

Great pronunciation teaching? (The "eyes" have it!)

Photo credit: Clker.com
Attention! Внимание! ("Attention!" in Russian)

Connecting two new studies--one on trial lawyers' use of gesture in closing arguments, the other on how a "visual nudge" can seriously disrupt our ability to describe recalled visual properties of common objects--to pronunciation teaching may seem a bit of a stretch, but the implications for instruction, especially for systematic use of gesture in the classroom, are fascinating.

The bottom line: what the eyes are doing during pronunciation work can be critical, at least to efficient learning. I have done dozens of posts over the years on the role and impact of the visual modality in pronunciation work; this one adds a new perspective.

The first, by Edmiston and Lupyan of the University of Wisconsin-Madison, Visual interference disrupts visual knowledge, is summarized by ScienceDaily:

"Many people, when they try to remember what someone or something looks like, stare off into space or onto a blank wall," says Lupyan. "These results provide a hint of why we might do this: By minimizing irrelevant visual information, we free our perceptual system to help us remember."

The "why" was essentially that visual distraction during recall (and conversely in learning, we assume), could undermine ability to describe visual properties of even common well-known objects, such as the color of a flower. That is a striking finding, countering the prevailing wisdom that such properties are stored in the brain more abstractly, not so closely tied to objects themselves in recall.

Study #2, by Matoesian and Gilbert of the University of Illinois at Chicago, is an article published in Gesture entitled "Multifunctionality of hand gestures and material conduct during closing argument." The research looked at the potential contribution of gesture to the essential message and impact of the closing argument to the jury. Not surprisingly, it was evident that the jury's visual attention to the "performance" could easily be decisive in whether the attorney's position came across as credible and persuasive. From the abstract:

This work demonstrates the role of multi-modal and material action in concert with speech and how an attorney employs hand movements, material objects, and speech to reinforce significant points of evidence for the jury. More theoretically, we demonstrate how beat gestures and material objects synchronize with speech to not only accentuate rhythm and foreground points of evidential significance but, at certain moments, invoke semantic imagery as well. 

The last point is key. Combine that insight with the "Nudge" study: it doesn't take much to interfere with "getting" new visual/auditory/kinesthetic/tactile input. The dominance of the visual over the other modalities is well established, especially when it comes to haptic (movement plus touch) work. These two studies add an important piece: random VISUAL input, itself, can seriously interfere with targeted visual constructs or imagery as well. In other words, what your students LOOK at, and how effective their attention is during pronunciation work, can make a difference--an enormous difference, as we have discovered in haptic pronunciation teaching.

Whether learners are attempting to connect a new sound to the script in the book or on the board, are attempting to use a visually created or recalled script (which we often initiate in instruction), or are mirroring or coordinating their body movement/gesture with the pronunciation of a text of some size, the "main" effect is still there: what is in their visual field in front of them at that moment, or in the created visual space in their brain, may strongly dictate how well things are integrated--and recalled later. (For a time I experimented with various systems of eye-tracking control myself, but could not figure out how to develop that effectively--and safely. Emerging technologies, however, offer us a new "look" at that methodology in several fields today.)

So, how do we appropriately manage "the eyes" in pronunciation instruction? Gestural work helps to some extent, but it takes more than that. I suspect that virtual reality pronunciation teaching systems will solve more of the problem. In the meantime, just as a point of departure, and in the spirit of earlier, relatively far-out "suggestion-based" teaching methods such as Suggestopedia, assume that you are responsible for everything that goes on during a pronunciation intervention (or interdiction, as we call it) in the classroom. (See even my 1997 "suggestions" in that regard as well!)

Now I mean . . . everything, which may even include temporarily suspending extreme notions of learner autonomy and metacognitive engagement . . .

See what I mean?

Sources: 
Matoesian, G. and Gilbert, K. (2016). Multifunctionality of hand gestures and material conduct during closing argument. Gesture, 15(1), 79–114.
Edmiston, P. and Lupyan, G. (2017). Visual interference disrupts visual knowledge. Journal of Memory and Language, 92, 281. DOI: 10.1016/j.jml.2016.07.002

Sunday, August 14, 2016

Conducting a great (pronunciation) class--according to Hazlewood!

If you haven't seen this phenomenal 2011 TED talk by Charles Hazlewood--and you plan to be an even better (haptic pronunciation) teacher--this is definitely REQUIRED VIEWING! Hazlewood, one of the world's premier orchestra conductors, beautifully demonstrates both the "gestural" art of conducting and the central role of trust in the relationship between the conductor and the musicians. (The finale, from Haydn, alone is worth watching the talk for.)

Photo credit: Hazlewood.com
The parallel to what we do (or what we could do) is striking. One "problem" with pronunciation teaching is that it demands both serious risk-taking on the part of the learner and the ability of the instructor to "conduct" the class in an atmosphere of genuine trust, with strong musical overtones of rhythm and melody. Hazlewood's depiction of the "degrees of freedom" between the conductor and the members of the orchestra is a fine analogy to what is foundational to any "great class".

Enjoy!


Saturday, August 13, 2016

Haptic Pronunciation Teaching Certificate Course (v4.1) Preview - FREE!

HaPT-E v4.1
v4.1 includes several new "touching" features. The Preview includes:
  • Modules 1 and 2 (of the 10-module system)
  • 1 month of free access to the course videos
  • The Instructor's Manual
  • A Skype consultation upon completion of the Preview
  • The option to purchase the full course at a reduced price (until January 2017)
For more detail on the system itself, check out the Certificate Description, including the Certificate Forum. v5.0 is now scheduled to be available in January or February 2017.

If you purchase v4.1, you will automatically be given access to the new v5.0 beta videos as they come online for field testing this fall.

To get the preview, contact: info@actonhaptic.com


Tuesday, August 9, 2016

Haptic dance, Aikido and the future of language (and pronunciation) teaching

Photo credit: Clker.com
For an intriguing glance at what future language (or pronunciation) teaching may "look" like, check out the following "haptic dances" by media artist Daniel Landau: Exploratory Dance 1.1 and Motor Imagery Dance. Computer-mediated mirroring will be key: not just culturally appropriate body movement and gesture, but gesture-synchronized speech as well (not unlike what we see in haptic pronunciation teaching, not surprisingly!).

Another piece of the theoretical model of what that process will involve is evident in an article in Gesture on the full-body, dialogic "dance" between opponents in Aikido, "The coordination of moves in Aikido interaction." Lefebvre's 2016 framework was developed by examining that interplay, with the goal of better characterizing how entire bodies communicate with each other--the intricate synchrony of moves and counters that characterizes all "conversation".

Aikido works by embodying the moves of your opponent, in part by almost subliminally synchronizing your body to the motion coming at you. Bouts are often "won" by simply redirecting or escorting one's opponent to the floor or out of the ring. (That was, by the way, my wife's basic approach to dealing with first graders: you cannot possibly just block them or stop them, but you can almost always deftly redirect their energy and motion more to your purposes!)

What those two systems provide us with, in part, is the beginning of a framework for designing methodologies that (literally) embody language models, including technology that "manages" articulation as well. Haptic systems that assist patients with various articulatory conditions, guiding the vocal apparatus toward more "normal" speech patterns, have been around for quite some time.

Embodied, computer-mediated language learning, something analogous to the Aikido experience, will provide learners with a way to (safely and completely) give themselves over to the "dance" as they are guided to speak and move with models, and ultimately to adopt and use the energies, words and moves of the L2 themselves--faster and more efficiently.

This is one dance you'll not want to miss! In the meantime, of course, you might prepare by doing some Aikido--and Haptic Pronunciation Teaching!

Full citations:
Daniel Landau - http://www.daniel-landau.com/about
Lefebvre, A. (2016). The coordination of moves in Aikido interaction. Gesture, 15(2), 123–155.