Showing posts with label multiple modalities. Show all posts

Monday, December 14, 2015

Can't see teaching (or learning) pronunciation? Good idea!

Clker.com
A common strategy of many learners when attempting to "get" the sound of a word is to close their eyes. Does that work for you? My guess is that those are highly visual learners who can be more easily distracted. Being more auditory-kinesthetic and somewhat color "insensitive" myself, I'm instead more vulnerable to random background sounds, movement or vibration. Research by Molloy et al. (2015), summarized by Science Daily (full citation below) helps to explain why that happens.

In a study of what they term "inattentional deafness," using MEG (magnetoencephalography), the researchers were able to identify in the brain both the place and the point at which auditory and visual processing in effect "compete" for prominence. As has been reported more informally in several earlier posts, visual consistently trumps auditory, which accounts for the common life-ending experience of having been oblivious to the sound of screeching tires while crossing the street fixated on a smartphone screen . . . The same applies, by the way, to haptic perception as well--except in some cases where movement, touch, and auditory team up to override visual.

The classic "audio-lingual" method of language and pronunciation teaching, which made extensive use of repetition and drill, relied on a wide range of visual aids and color schemas, often with the rationale of maintaining learner attention. Even the sterile, visual isolation of the language lab's individual booth may have been especially advantageous for some--but obviously not for everybody!

What that research "points to" (pardon the visual-kinesthetic metaphor) is more systematic control of attention (or inattention) to the visual field in teaching and learning pronunciation. Computer-mediated applications go to great lengths to manage attention, but, ironically, forcing the learner's eyes to focus or concentrate on words and images, no matter how engaging, may, according to this research, also negate or at least lessen attention to the sounds and pronunciation. Hence the intuitive response of many learners to shut their eyes when trying to capture or memorize sound. (There is, in fact, an "old" reading instruction system called the "Look up, say" method.)

The same underlying, temporary "inattentional deafness" also probably applies to the use of color associated with phonemes--or even the IPA system of symbols in representing phonemes. Although such visual systems do illustrate important relationships between visual schemas and sound that help learners understand the inventory of phonemes and their connection to letters and words in general, in the actual process of anchoring and committing pronunciation to memory, they may in fact diminish the brain's ability to efficiently and effectively encode the sound and the movement used to create it.

The haptic (pronunciation teaching) answer is to focus more on movement, touch and sound, integrating those modalities with visual. The conscious focus is on gesture terminating in touch, accompanied by articulating the target word, sound or phrase simultaneously with resonant voice. In many sets of procedures (what we term "protocols"), learners are instructed either to close their eyes or to focus intently on a point in the visual field as the sound, word or phrase to be committed to memory is spoken aloud.

The key, however, may be just how you manage those modalities, depending on your immediate objectives. If the objective is phonics, then connecting letters/graphemes to sounds with visual schemas makes perfect sense. If it is, on the other hand, anchoring or encoding pronunciation (and possibly recall as well), the guiding principle seems to be that sound is best heard (and experienced somatically, in the body) . . . but, to the extent possible, not seen!

See what I mean? (You heard it here!)

Full citation:
Molloy, K., Griffiths, T. D., Chait, M., & Lavie, N. (2015). Inattentional deafness: Visual load leads to time-specific suppression of auditory evoked responses. Journal of Neuroscience, 35(49), 16046-16054.

Thursday, November 26, 2015

Drawing on the haptic side of the brain (in edutainment and pronunciation teaching)

ClipArt: Clker.com
How is your current "edutainmental quality of experience" (E-QoE)? The term, defined as the degree of excitement, enjoyment and "natural feel" of multimedia applications, comes from Hamam, Eid and El Saddik of the DISCOVER Lab, University of Ottawa, in a nice 2013 report, "Effect of kinaesthetic and tactile haptic feedback on the quality of experience of edutainment applications." (Full citation below.) E-QoE (pronounced "E-quo," I'd guess) is a great concept. We need to come up with a reliable way of measuring it in our research, something akin to that in Hamam et al. (2013).


In that study, a gaming application was configured both with and without haptic or kinaesthetic features (computer-mediated movement and touch in various combinations, in this case a haptic stylus)--as opposed to just visual or auditory engagement, employing only eyes, ears and hands--and examined for relative E-QoE. Not surprisingly, the haptic configuration was rated significantly higher in E-QoE, as indicated in subject self-reports.

I am often asked how "haptic" contributes to pronunciation teaching and what is "haptic" about EHIEP. This piece is not a bad, albeit indirect, Q.E.D. (quod erat demonstrandum)--one of my favorite Latin acronyms learned in high school math! (EHIEP uses movement and touch for anchoring sound patterns but not computer-mediated, guided movement--at least for the time being!)

The potential problems with use of gesture in instruction, the topic of several earlier posts, tend to be (a) inconsistent patterns in the visual field, (b) being perceived by many instructors and students as out of their personal and cultural comfort zones, and (c) over-exuberant, random and uncontrolled gesture use in general in teaching, often vaguely related to attempts to motivate or "loosen up" learners--or, more legitimately, to just have fun. EHIEP succeeds in overcoming most of the potential "downside" of Body-Assisted Teaching (BAT).

In a forthcoming 2016 article on the function of gesture in pronunciation teaching, the EHIEP (Essential, Haptic-integrated English Pronunciation) method is somewhat inaccurately described as just a "kinaesthetic" system for teaching pronunciation using gesture, a common misconception. EHIEP does, indeed, use gesture (pedagogical movement patterns) to teach sound patterns, but the key innovation is the use of touch to make application of gesture in teaching controlled, systematic and more effective in providing modeling and feedback--and obviously to enhance E-QoE--very much in line with Hamam et al. (2013).

The gaming industry has been on to haptic engagement for decades; edutainment is coming on board as well. Now if we can just do the same with something as unexciting, un-enjoyable and "unnatural" as most pronunciation instruction. We have, in fact . . .

Keep in touch!

Citation:

Hamam, A., Eid, M., & El Saddik, A. (2013). Effect of kinaesthetic and tactile haptic feedback on the quality of experience of edutainment applications. Multimedia Tools and Applications, 67(2), 455-472.

Sunday, July 28, 2013

Dealing with problem pronunciation? Gesticulate!

Clip art: Clker
This from a Science Daily summary of new research by Miller and O'Neil of San Francisco State University on the role of gesture in problem solving variability in young children. The basic finding was that the more "gesticular" children were better at solving problems. Furthermore:

"There is a growing body of research that suggests gesturing may play a significant role in the processes that people use to solve a problem or achieve a goal. These processes include holding information in memory, keeping the brain from choosing a course too quickly and being flexible in adding new or different information to handle a task."

So, how does that relate to haptic pronunciation teaching? 

  • Holding information in memory (by means of haptic anchoring, using movement and touch on stressed syllables and words)
  • Keeping the brain from choosing a course too quickly (managing attention, haptically, with gesture and touch, at least 3 seconds at a time!)
  • Being flexible in adding new or different information to handle a task (enabling learners to work with multiple modalities in pronunciation work simultaneously, i.e., auditory, visual, kinaesthetic, tactile, etc. )

And you have a problem with that? Good!

Thursday, December 22, 2011

Monkey see and monkey do: efficient multi-tasking in pronunciation work

Clip art: Clker
Here is one of those research reports that inevitably evokes the same somewhat exasperated reaction from me (and I expect from most of you, as well). Ready? It has been discovered that we--well, some of our purported "cousins," at least--are wired to multitask! Think of it . . . you can, for example, now watch TV and read a book at the same time, or run on a treadmill, without worry that you are going against your very nature or doing irreparable harm to your equipment.

It is an important study, reportedly one of the first to establish that empirically. The trick apparently is just how closely related the two tasks are. If they are sufficiently distinct, either in terms of intra-modality contrast (like two pictures) or inter-modality (like singing and knitting), go to it! Any number of previous posts have looked at the interplay among visual, auditory and haptic modalities, coming to much the same conclusion: that we can, under the right circumstances, attend quite well to both haptic and auditory (and, in controlled contexts, visual) simultaneously.

HICP/EHIEP is based on the idea of continuous, simultaneous engagement of multiple modalities (what we often refer to with the acronym "CHI"--for continuous haptic integration, haptic having the primary function of anchoring and integrating). In other words, doing pedagogical movement patterning and seeing (tracking those movements of the hands across the visual field) and speaking at the same time should be a piece of cake. If not, we may just have too much time on our hands--or not enough. Certainly nothing to HICP at!

Friday, November 18, 2011

Haptic technology for pronunciation teaching "on hand!"

Hat tip to Matt McLean for pointing me to these two URLs on the development of OmniTouch/Kinect technology. As you can see in this YouTube video, embodying some of the EHIEP techniques is technically feasible now. In fact, even without the computer interface, the basic embodiment strategy of touching a spot on the other hand or arm (or wherever) is, from a HICP perspective, probably as effective in anchoring as the visual interface used there.

Actually, even if you just project the image on the wall or the desk and use that target as the visual field, it would still create a potentially workable, basic haptic anchoring template. It would not be as effective or portable or engaging as what we have already--using just the visual field of the learner and bilateral haptic anchoring--but it would certainly be a touch better than anything else available today!

Saturday, August 27, 2011

Why "haptic-integrated" pronunciation method? Really?

Clip art: Clker
I am frequently asked why I continue to insist on using the phrase “haptic-integrated pronunciation” as the focus of HICP/EHIEP. Much of what passes for pronunciation instruction today is still (at best) like a good YouTube video: (a) an explanation, followed by (b) classroom practice—some of it very well done, by the way, but generally conducted as decontextualized exercises—and then (c) . . . nothing . . . the learner is from that point on entrusted with the responsibility of figuring out how to practice outside of class, or is assumed to subconsciously integrate the new pronunciation without further attention or guidance.

The EHIEP model attempts to “supercharge” both the classroom and out-of-classroom experience by helping to integrate pronunciation teaching more effectively, in two senses. First, after initial brief training sessions (9 or 10 thirty-minute modules, led by the instructor or video-based, spread out over about 12 weeks), attention to pronunciation from then on occurs within the context of “regular” speaking and listening tasks, integrated as the need or opportunity arises for increased intelligibility or accuracy. Second, learners experience, in class and out of class (in regular, prescribed homework), consistent, multi-sensory/modality learning of sounds and words that should greatly facilitate integrating those elements into their spontaneous speaking and listening. I had the basic idea back in 1984, but could never quite figure out how to get consistent integration and anchoring. About thirty years later, I was introduced to haptic research.

Thursday, August 11, 2011

(With Child L2 pronunciation learning) Haptic or not too haptic?

Clip art: Clker
In this study by Gori et al. (2008) we get a glimpse into why children up to the age of 8-10 are more haptic in some contexts. For example, in determining relative size they will tend to rely on relative touch and motion; in figuring out orientation in space, more on vision. Beyond that age, the modalities are gradually more and more integrated. In working with L2 pronunciation of children with the EHIEP protocols it is striking (but from this research not surprising) how quickly they are able to "get" and mirror accurately the pedagogical movement patterns across the visual field and beyond. And, at the same time, they learn the haptic anchoring of sounds and recall them without the teacher even mentioning what is going on.

Later, as adults, the senses are capable of working together or in opposition, depending on a number of factors. To the old canard, "learn like a child," we can here see why it can be so difficult to get into that state--and how we may be able to construct frameworks that do allow occasional access to more uni-modal, direct learning when necessary.

Seen from that perspective, haptic is not just for kids.