|Clip art: Clker|
The same goes for HICP work (which will one day also be done solely in VR). Learners at times mirror the pedagogical movement patterns (PMPs) of instructors, and instructors are able to "monitor" individual learner pronunciation or group haptic practice visually--and then signal back appropriately. As strange as this may sound, providing feedback by means of haptically anchored PMPs generally seems more efficient (for several reasons) than "correcting" or adjusting the production of the sound itself by "simply" eliciting a repetition, etc. (See earlier posts on how that is done.)
That, of course, is an empirically verifiable claim, one we will test further in the near future. So listen carefully and haptically--and give your local avatar's pronunciation a hand.