Clipart: Clker
One of the early inspirations for the EHIEP system, as reported earlier, was my work with deaf students at the university who communicated in American Sign Language (ASL). Sign gestures, along with upper body and facial expressiveness, have been researched extensively; I have referred to those studies several times in trying to better understand how ASL behaviors can be effectively exploited in our pronunciation work. I recently found this 2009 study on "visual intonation" in Israeli Sign Language, which demonstrates vividly the role of systematic movement of the facial muscles (e.g., raised eyebrows, squints, knitted brows) in conveying prosody, especially as it accompanies pitch change. The analogous pedagogical movement patterns (PMPs) of EHIEP are designed primarily to help the learner get the felt sense of intonation: learners' hands and arms move around the visual field to anchor pitch (H, M and L) and pitch change (fall-rise, rise-fall, etc.). However, once a student is able to "repeat" a PMP along with the instructor in articulating a target word or phrase, the "visual intonation" portrayal also serves to communicate to both the instructor and the student something of the accuracy of the pitch or pitch change. In other words, the instructor also uses the visual PMPs in class, both to assess intonation and to provide feedback on it. Just as in the ISL study, spatial gesture serves as more than just an analogy for pitch and pitch change--it is potentially a very powerful physical anchor.
See what I mean?