
Thursday, July 14, 2016

Why your avatar (could/will) make a better pronunciation teacher than you are!

Clip art: Clker.com
Since the emergence of Second Life in 2003, I have been fascinated with the prospect of avatars teaching language. At the time, for technical reasons, I could not get my avatars to respond quickly enough with good audio to do much, so I gave up. (From recent reviews, it appears that most of those issues, including monitoring of offensive content, have been resolved, and I may give it another look.)

A 2016 study of avatars teaching math to kids by Cook, Friedman, Duggan, Cui and Popescu provides an interesting perspective. The focus of the study was to isolate the effect of gesture, independent of facial expression, body motion and other features of the presenter's persona. As the researchers note in the abstract, one of the problems with identifying the impact of gesture is that it is "known to co-vary with other non-verbal behaviors, including eye gaze and prosody along with face, lip, and body movements . . . "

The avatars were presented against a fixed background such that only the hand movement varied. (The voice used and the various graphic figures remained constant.) The effect was "pronounced": the subjects who viewed the gesturing avatar not only learned the concepts more successfully but were also later able to apply the material better. (That is not really surprising, since a number of studies have established that students simply learn better when teachers gesture more.) But avatars bring something more to the party--or less!

In principle, how much of pronunciation could an avatar teach (either with or without gesture assist)? Probably most of it. (And I predict that that day is not far off.) One reason for that, mentioned above by Cook et al., is the fact that gesture tends to co-vary with other "non-verbal behaviors" such as . . . prosody? (Prosody is nonverbal? Really?) Effective gesture use in instruction often depends critically on the learners' attention being "locked" on the cuing or anchoring motion; the gesture, in turn, is also strongly associated with a sound or process.

As reported in several previous posts, loss of attention or distraction is a most important variable in haptic (gesture plus touch) pronunciation teaching as well. The video models that we use now are for the most part black and white, with a black background and no subtitles on screen, designed to focus learner attention primarily on the movement and positioning of my hands, not on the model's face or body. Adding color, extraneous movement, or additional graphics will always pull at least some learners away from the focus of the lesson embodied in the pedagogical gestures. (Research on competition between visual, auditory, and kinaesthetic or haptic modalities has demonstrated consistently that visual displays almost always trump the others, even in combination.)

For gesture-based pronunciation instruction, or other kinds of instruction for that matter, interactive "thinking" and responding avatars offer real promise. The technology has been around for over a decade, in fact. Advantages of avatars include:
  • Individualized, more affordable computer-based instruction 
  • Systematic application of gesture in instruction, especially providing consistent placement of gesture in the visual field.
  • More effective attention management, neutralizing potential visual distractions
  • Emotionally "comfortable" instruction for a wider range of learner personalities
  • Avoids unconscious transmission of:
    • Instructor "bad day" images and attitudes
    • Typical "hyperactive" pronunciation teacher behavior
    • Overreactions, positive or negative, to student miscues or "victories"
    • Instructor bias toward "teacher's pets" or gaze avoidance in eye-contact patterning during instruction
Time to reactivate my avatar. Will upload a demo later this summer.

 Cook, S. W., Friedman, H. S., Duggan, K. A., Cui, J. and Popescu, V. (2016), Hand Gesture and Mathematics Learning: Lessons From an Avatar. Cognitive Science. doi: 10.1111/cogs.12344


Monday, May 6, 2013

The sound of gesture: kinaesthetic listening during "haptic video" pronunciation instruction

In the early 90s a paintball game designer in Japan told me that my kinaesthetic work was a natural for virtual reality. Several times since, I have explored that idea, including developing an avatar in Second Life and, more recently, creating an avatar in my image to perform on video for me. (Have done half a dozen posts over the last three years playing with that idea.) How the brain functions and how learners learn in VR is a fascinating area of research that is just beginning to develop.

Clip art: Clker
In a 2013 study by Dodds, Mohler and Bülthoff of the Max Planck Institute for Biological Cybernetics, reported in Science Daily, " . . . the best performance was obtained when both avatars were able to move according to the motions of their owner . . . the body language of the listener impacted success at the task, providing evidence of the need for nonverbal feedback from listening partners . . . with virtual reality technology we have learned that body gestures from both the speaker and listener contribute to the successful communication of the meaning of words."

The mirroring, synchrony and ongoing feedback of haptic-integrated pronunciation work are key to effective anchoring of sounds and words as well, whether done "live" in class or in response to the haptic video of AH-EPS. (In the classroom, with the students dancing along with the videos, the instructor, as observer, is charged with responding in various ways to nonverbal and verbal feedback, such as misaligned pedagogical movement patterns, "incorrect" articulation, or questions from students.) What the research suggests is that listener body movement not only continuously informs the speaker and helps mediate what comes next, but that movement tied to the meanings of the words contributes significantly, apparently even more so than in "live" lectures.

There are any number of possible reasons for that effect, of course, but "moving" past the mesmerizing, immobilizing impact of video viewing appears critical to VR training (and HICP!). KIT




Saturday, December 10, 2011

Pedagogical movement patterns and emotional avatars

Clip art: Clker
Imagine having Neytiri from the movie Avatar show up to sub for you in your HICP class on the day that the lesson plan calls for work on intonation and discourse markers of emotion. Sound pretty far out? Maybe not. In a 2002 UC Berkeley dissertation by Barrientos, a model is developed for providing avatars with a relatively simple but adequate (for avatars) gesture + emotion repertoire. In fact, I am beginning to think that avatars could probably do a better job of teaching some pedagogical movement patterns than could a live instructor at the head of the class, for several reasons.

First: consistent precision of movement pattern, in terms of size, position in the visual field, and speed. Second: with slight facial adjustment and vocal expression, the avatar can present most basic emotions along with the pattern and words--free of personal agenda, high-fashion outfit of the day or other distraction, allowing the learner to focus on and either repeat or mirror the PMP and the emotion conveyed--not the gesticulating bozo up front. (There is a great deal of research in the psychotherapeutic literature on the interaction between therapist and client in face-to-face "instruction.")

Even when doing EHIEP work "live" ourselves, we have learned, through review of haptic and psychotherapeutic research and classroom experience, that the key to efficient HICP instruction is to assume a slightly robotic "persona" at times. (Note the EHIEP-bot logo in the upper right hand corner of the blog.) Any extraneous visual distraction can (literally) kill haptic anchoring. So watch yourself! (Preferably on video, many times.) Your students are . . .
