Clip art: Clker
Mirroring, having learners move along with a model, is a common technique in pronunciation teaching, especially at more advanced levels, such as this example by Goodwin at UCLA. There are any number of applications of the concept, for various purposes. In EHIEP work, mirroring figures prominently from the beginning. As noted in previous posts, some highly visual learners find imprecise modelling by the speaker being mirrored very disconcerting. For example, one pedagogical movement pattern (PMP) involves moving the left hand across the visual field in an ascending motion as a rising intonation contour is spoken. For some time we have been looking at the possibility of using avatars that would perform perfectly precise PMPs in new versions of the haptic videos, to compensate for the fact that a human model (namely me, on the current videos) cannot possibly be consistent enough to satisfy the few most radically visual learners.
Research by Thomaz at Georgia State University seems to suggest that the only way to make that work with robotic models would be to build "human-like" variability of motion into the repetitions of PMPs in training. In other words, the slight differences in the track of the gestural patterns are essential to creating a sufficiently engaging model, one that effectively keeps subjects' attention. Rats. Better go back to figuring out both how to be more "humanly" precise in modelling PMPs and how to develop techniques that will help the "visually-challenged" loosen up a bit. Figuring out exactly what acceptable deviations from ideal PMPs are is, in principle, doable, of course. Just a matter of studio time and field testing. Keep in touch.
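For readers who like to see the idea made concrete, here is a minimal sketch of what "building in human-like variability" might look like computationally: an ideal rising-intonation PMP represented as a hand trajectory, with small, smoothed random deviations added to each repetition so no two passes are identical. Everything here is hypothetical illustration (function names, jitter values, the trajectory shape); it is not the method used in the EHIEP videos or in Thomaz's research.

```python
import numpy as np

def ideal_pmp(n_points: int = 50) -> np.ndarray:
    """Hypothetical 'ideal' path: hand sweeps across the visual field while rising."""
    t = np.linspace(0.0, 1.0, n_points)
    x = t           # horizontal sweep, normalized 0..1
    y = t ** 1.5    # accelerating rise, loosely mirroring a rising contour
    return np.column_stack([x, y])

def humanized_repetition(path: np.ndarray, jitter: float = 0.02,
                         rng: np.random.Generator | None = None) -> np.ndarray:
    """Return one repetition with slight, smooth deviations from the ideal track."""
    rng = rng or np.random.default_rng()
    noise = rng.normal(0.0, jitter, size=path.shape)
    # Smooth the noise so deviations look like natural drift rather than tremor.
    kernel = np.ones(5) / 5.0
    smooth = np.apply_along_axis(
        lambda col: np.convolve(col, kernel, mode="same"), 0, noise)
    return path + smooth

if __name__ == "__main__":
    base = ideal_pmp()
    # Each repetition tracks the ideal contour but differs slightly from the others.
    for i in range(1, 4):
        rep = humanized_repetition(base)
        print(f"repetition {i}: max deviation from ideal = {np.abs(rep - base).max():.3f}")
```

The design point the sketch illustrates is simply that the variability is bounded and smooth: the avatar still lands on the same contour every time, but the path there is never mechanically identical.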