In the early 90s, a paintball game designer in Japan told me that my kinaesthetic work was a natural for virtual reality. Several times since, I have explored that idea, including developing an avatar in Second Life and, more recently, creating an avatar in my image to perform on video for me. (I have done half a dozen posts over the last three years playing with that idea.) How the brain functions and how the learner learns in VR is a fascinating area of research that is just beginning to develop.
A 2013 study by Dodds, Mohler and Bülthoff of the Max Planck Institute for Biological Cybernetics, as reported in Science Daily, found that " . . . the best performance was obtained when both avatars were able to move according to the motions of their owner . . . the body language of the listener impacted success at the task, providing evidence of the need for nonverbal feedback from listening partners . . . with virtual reality technology we have learned that body gestures from both the speaker and listener contribute to the successful communication of the meaning of words."
The mirroring, synchrony and ongoing feedback of haptic-integrated pronunciation work are key to effective anchoring of sounds and words as well, whether done "live" in class or in response to the haptic video of AH-EPS. (In the classroom, with the students dancing along with the videos, the instructor, as observer, is charged with responding in various ways to nonverbal and verbal feedback, such as misaligned pedagogical movement patterns, "incorrect" articulation, or questions from students.) What the research suggests is that listener body movement not only continuously informs the speaker and helps mediate what comes next, but that movement tied to the meanings of the words contributes significantly, apparently even more so than in "live" lectures.
There are any number of possible reasons for that effect, of course, but "moving" past the mesmerizing, immobilizing impact of video viewing appears critical to VR training (and HICP!). KIT
Clip art: Clker