Showing posts with label: virtual reality

Thursday, March 19, 2020

Love it or leave it: 2nd language body, voice, pronunciation and identity

Clker.com
Recall (if you can) the first time you were required to listen to or maybe analyze a recording of your voice. Surprising? Pleasing? Disgusting? Depressing? Estimates vary as to how much of your awareness of your voice is based on what it "feels" like to you rather than on what your ears hear, but it is somewhere around 80%. It turns out that your awareness of what your body looks like is similar.

A new study by Neyret, Bellido Rivas, Navarro and Slater of the Experimental Virtual Environments (EVENT) Lab, University of Barcelona, "Which Body Would You Like to Have? The Impact of Embodied Perspective on Body Perception and Body Evaluation in Immersive Virtual Reality," as summarized by Neuroscience News, found that our simple gut feelings about how (un)attractive our body shape or image is are generally more negative than our judgments when we are able to view it more dispassionately or objectively, "from a distance," as it were. Surprise. Using virtual reality technology, subjects were presented with different body types and sizes, among them one that matched precisely, to an external observer, the subject's own body shape. Subjects rated their "virtual body" shape more favorably than they had in their earlier pre-experiment self-ratings, collected in something analogous to a questionnaire format.

In psychotherapy, the basic principle of "distancing" from emotional grounding is fundamental; there are all sorts of ways to accomplish that, such as visualizing yourself watching yourself doing something disconcerting or threatening to you. It is the "step back" metaphor, which the brain takes very seriously if done right.

In this case, when visualizing the shape of your body (or, by extension, your voice, as part of the body), you'll see it at least a little more favorably than when you describe it based on how it "feels" internally--the reason "body shaming" can work so effectively in some cases, or, in pronunciation work, "accent shaming."

So, how can we use the insights from the research? First, systematic work by learners in critically listening to their own voice should pay off, at least in producing some sense of resignation or even "liking," so that the ear is not automatically tuned to react or turn away. (I'm sure there is research on that someplace but, for the life of me, I can't find it! Please help out with a good reference on that, if you can!) Is this some long overdue partial vindication of the seemingly interminable hours spent in the language lab? Could be, in some cases.

Second, once a learner is able to "view" their L2 voice/identity relative to some ideal more dispassionately, it should be easier to work with it and make accommodations. That is one of the central assumptions of the "Lessac method" of voice development, which I have been relying on for over 30 years. It also calls into question the idea that aiming toward an ideal, native-speaker accent is necessarily a mistake. You have to "see" yourself relative to it as more of an outsider, not just from your solar plexus out . . . through your flabby abs, et al. . . . My approach to accent reduction always begins there, before we get to changing anything. Call it: voice and body "re-sensitization."

See what I mean? If not, have somebody you don't know read this post to you again at Starbucks . . .

Original Source:
Solène Neyret, Anna I. Bellido Rivas, Xavi Navarro and Mel Slater. "Which Body Would You Like to Have? The Impact of Embodied Perspective on Body Perception and Body Evaluation in Immersive Virtual Reality." Frontiers in Robotics and AI, 2020. doi: 10.3389/frobt.2020.00031.

Tuesday, September 20, 2016

What (a window into the brain of) the mouse can teach us about learning pronunciation

Clker.com
Trigger warning: If you are especially attached to your mouse, you may want to skip over the third, italicized paragraph below . . . 

Fascinating research by Funamizu, Kuhn and Doya of the Okinawa Institute of Science and Technology Graduate University, "Neural substrate of dynamic Bayesian inference in the cerebral cortex," was originally published in Nature Neuroscience and summarized by Science Daily as "Finding your way around in an uncertain world." (Full citation below.)

Basically, the study looked at how the mouse's brain uses movement of the body in creating meaning and thought. Reading the research methodology is not for the faint of heart. Here is a piece of the Science Daily summary describing it:

The team performed surgeries in which a small hole was made in the skulls of mice and a glass cover slip was implanted onto each of their brains over the parietal cortex. Additionally, a small metal headplate was attached in order to keep the head still under a microscope. The cover slip acted as a window through which researchers could record the activities of hundreds of neurons using a calcium-sensitive fluorescent protein that was specifically expressed in neurons in the cerebral cortex . . . The research team built a virtual reality system in which a mouse can be made to believe it was walking around freely, but in reality, it was fixed under a microscope. This system included an air-floated Styrofoam ball on which the mouse can walk and a sound system that can emit sounds to simulate movement towards or past a sound source. (ScienceDaily, September 16, 2016).

Got that? They then observed how the mice "navigate" the virtual space under different conditions, including almost complete reliance on body movement, rather than with access to any visual or auditory stimulus. The surprising finding (at least to me) was the extent to which kinesthetic memory or engagement took over, directing the mice to the "reward." There is much more to the work, of course, but this "window" into the functioning of the cerebral cortex is really consistent with a wide range of studies that point to "body-based" meaning creation and control.
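To put the phrase "dynamic Bayesian inference" in more concrete terms, here is a minimal sketch--not the model used in the study, and with invented numbers--of the general idea: an estimate of position is carried forward from self-movement alone (path integration), with uncertainty growing, and is occasionally corrected by a noisy sound cue, in simple one-dimensional Kalman-filter form.

```python
# Minimal, hypothetical sketch of "dynamic Bayesian inference" during navigation:
# predict position from the animal's own movement (path integration), then
# correct it with a sparse, noisy auditory cue. Illustration only; not the
# model reported by Funamizu et al.

def predict(pos, var, step, motion_noise=0.05):
    """Dead-reckoning step: move by `step`; uncertainty grows."""
    return pos + step, var + motion_noise

def update(pos, var, observed, obs_noise=0.5):
    """Bayesian correction from a noisy sensory observation."""
    gain = var / (var + obs_noise)           # how much to trust the cue
    new_pos = pos + gain * (observed - pos)  # blend belief and observation
    new_var = (1 - gain) * var               # uncertainty shrinks after the cue
    return new_pos, new_var

if __name__ == "__main__":
    pos, var = 0.0, 1.0
    steps = [0.2, 0.2, 0.2, 0.2, 0.2]        # the animal's own movements (invented)
    sound_cues = {2: 0.7, 4: 1.1}            # sparse auditory "observations" (invented)
    for t, step in enumerate(steps):
        pos, var = predict(pos, var, step)
        if t in sound_cues:
            pos, var = update(pos, var, sound_cues[t])
        print(f"t={t}  estimated position={pos:.2f}  uncertainty={var:.2f}")
```

The point of the toy example is simply that movement alone can carry the estimate a long way; the occasional external cue just keeps it honest.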

So, what is the possible relevance of that to pronunciation teaching? (I thought you'd never ask!) Our work in haptic pronunciation teaching, for example, is based on the assumption, in effect, that "gesture comes first" (before sound and visual phonemes/graphemes) in instruction. (This is based on Lessac's principle of "Train the body first" in voice and stage movement work.) For the most part today, pronunciation methodologists and theorists still see the role of gesture in teaching as secondary at best: an optional "reinforcer" of word-sound associations, or a vehicle for "loosening up" learners and their bodies and emotional states--or even just having fun!

What the "mice" study suggests is that sound, movement and vision are more integrated and interdependent in the brain than we generally acknowledge--or at least that movement is more central to meaning creation and retrieval. There are a number of body and movement-based theories that support that observation. In other words, the use of gesture in instruction deserves much more attention than it is currently getting. Much more than just a gesture . . .

Citation:
Okinawa Institute of Science and Technology Graduate University - OIST. "Finding your way around in an uncertain world." ScienceDaily. ScienceDaily, 19 September 2016. 

Saturday, August 1, 2015

How YOU elocute is how I elocute: Collaborative haptic motor skill (and pronunciation) learning

For a glimpse into the future of instruction, have a look at Chellali, Dumas and Milleville-Pennel (2010), "A Haptic Communication Paradigm For Collaborative Motor Skills Learning." Their WYFIWIF (What you feel is what I feel) model illustrates nicely just what haptic technology is: in essence, a computer-mediated interface that guides movement, with pressure translated through some kind of device such as a glove. In the study, subjects were guided to better performance on a focused manual task, moving a needle, by a haptic-assisted instructor. Not surprisingly, the control group, which received only visual or verbal guidance, did not perform as well.

Another example of haptic communication, as defined in WYFIWIF, might be an instructor first leading a learner through a gesture pattern with haptic technology and then continuing to provide haptic guidance as the learner attempts to practice and master the pattern. The researchers note that in a virtual environment, as in haptics-assisted surgery or training, " . . . haptic communication is combined (more and more with complementary) visual and verbal communication in order to help an expert to transfer his knowledge to a novice operator."
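For readers curious about what that computer-mediated guidance amounts to in practice, here is a minimal, hypothetical sketch--not the WYFIWIF implementation itself--of the standard idea: the device applies a gentle, spring-like corrective force to the trainee's hand, proportional to its deviation from the expert's recorded trajectory.

```python
# Hypothetical sketch of haptic guidance (not the WYFIWIF system itself):
# the device pushes the trainee's hand back toward the expert's position
# with a spring-like force proportional to the deviation, capped so that
# guidance nudges rather than drags.

from dataclasses import dataclass

@dataclass
class Vec2:
    x: float
    y: float

def guidance_force(trainee: Vec2, expert: Vec2, stiffness: float = 2.0,
                   max_force: float = 1.5) -> Vec2:
    """Spring-like corrective force toward the expert's current position."""
    fx = stiffness * (expert.x - trainee.x)
    fy = stiffness * (expert.y - trainee.y)
    mag = (fx ** 2 + fy ** 2) ** 0.5
    if mag > max_force:                      # clamp: keep the guidance gentle
        fx, fy = fx * max_force / mag, fy * max_force / mag
    return Vec2(fx, fy)

# Example: the trainee's hand lags behind the expert's path.
print(guidance_force(Vec2(0.0, 0.0), Vec2(0.3, 0.1)))
```

The cap on force magnitude reflects the usual design choice in such systems: the learner should still feel like the one doing the moving.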

Although the haptic application in our pronunciation work does not involve haptic technology--relying instead on hands touching on target or stressed sounds, following the visual and spoken guidance of an instructor or peer--the parallel is striking. It is the collaborative haptic-embodied task (instructor and learner engaged in a tightly linked, synchronous, communicative, embodied "dance") that greatly enables and facilitates learning.

In the conclusion of the study, there is a truly striking recommendation for further research: the impact on haptic communication of the "verbal communications between the instructor and the learner." We have over a decade of experience--and a few dozen blogposts--with that! Now, "needle-less" to say, if we can just get our hands in some of those gloves . . .

Full citation:
Amine Chellali, Cédric Dumas, and Isabelle Milleville-Pennel. "WYFIWIF: A Haptic Communication Paradigm For Collaborative Motor Skills Learning." IADIS Web Virtual Reality and Three-Dimensional Worlds 2010, July 2010, Freiburg, Germany. IADIS, pp. 301-308, 2010.

Monday, May 6, 2013

The sound of gesture: kinaesthetic listening during "haptic video" pronunciation instruction

In the early 90s a paintball game designer in Japan told me that my kinaesthetic work was a natural for virtual reality. Several times since, I have explored that idea, including developing an avatar in Second Life and, more recently, creating an avatar in my image to perform on video for me. (I have done half a dozen posts over the last three years playing with that idea.) How the brain functions, and how learners learn, in VR is a fascinating area of research that is just beginning to develop.

Clip art: Clker
In a 2013 study by Dodds, Mohler and Bülthoff of the Max Planck Institute for Biological Cybernetics, reported in Science Daily, " . . . the best performance was obtained when both avatars were able to move according to the motions of their owner . . . the body language of the listener impacted success at the task, providing evidence of the need for nonverbal feedback from listening partners . . . with virtual reality technology we have learned that body gestures from both the speaker and listener contribute to the successful communication of the meaning of words."

The mirroring, synchrony and ongoing feedback of haptic-integrated pronunciation work are key to effective anchoring of sounds and words as well, whether done "live" in class or in response to the haptic video of AH-EPS. (In the classroom, with the students dancing along with the videos, the instructor, as observer, is charged with responding in various ways to nonverbal and verbal feedback, such as mis-aligned pedagogical movement patterns, "incorrect" articulation, or questions from students.) What the research suggests is that listener body movement not only continuously informs the speaker and helps mediate what comes next, but that movement tied to the meanings of the words contributes significantly--apparently even more so than in "live" lectures.

There are any number of possible reasons for that effect, of course, but "moving" past the mesmerizing, immobilizing impact of video viewing appears critical to VR training (and HICP!). KIT




Thursday, February 2, 2012

The (haptic) handwriting on the (virtual) wall for pronunciation instruction

"MENE, MENE, TEKEL, PARSIN!" (Possible pronunciation: Many, many tech'cle person!
At the TESOL convention in March, I'll be giving a talk in a symposium on integration of pronunciation teaching. The title will be something like "Post-pronunciation, pronunciation instruction." I will argue three points: (a) the movement toward integration of pronunciation teaching into all skill areas signals the end of what we do as we do it; (b) those who can operate comfortably in virtual technology are going to take over; and (c) the emergence in the last decade of haptics technology, haptic engagement in pedagogy, and haptic video and cinema, among others, nonetheless offers exciting possibilities! (Rough translation and extrapolation of the Babylonian above: It's about over, gang. The field has had near enough of our disembodied, insiders'-club attitude. Our best tricks are about to be passed out to tech- and haptic-savvy newbies.) Could be worse . . . we could be in a lion's den . . . or Philadelphia . . .

Sunday, January 8, 2012

Gee! Video games and getting a feel for how we will learn

Clip art: Clker
Here is a great article about the work of James Gee and a 20-minute PBS video of him talking about the kind of learning that video gaming may offer for education. The future of pronunciation instruction lies in a similar "embodiment" in video game-like virtual reality. Gee's musings should be required viewing for anyone in the field today who plans on sticking around much longer! Note, especially, how he focuses on being able to "grab" the learner and keep him or her in the game. Haptics is seen by many as central to the future of that kind of gaming.

Pronunciation work, especially, can be very difficult to design from that perspective in the classroom, let alone online. Not that HICP/EHIEP has all the answers either, but it "moves" in that direction and should focus more on the part of the process where engagement is key to integration. It is easy to imagine a game of international intrigue where pronunciation or intelligibility on key phrases, for example, would be required to advance in the game (see the sketch below). Gee! (To quote Sherlock Holmes) The game is afoot!
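Purely as an illustration of that game mechanic--the scorer, phrases and threshold below are all invented--a "pronunciation gate" might be as simple as this: crudely approximate intelligibility by the similarity between what a recognizer heard and the target phrase, and only open the next level when the score clears a threshold.

```python
# Hypothetical "pronunciation gate" for a game level: the player advances only
# when a key phrase is judged intelligible enough. Intelligibility is crudely
# approximated here by string similarity between the recognized transcript and
# the target phrase; a real game would plug in a proper ASR- or rater-based score.

from difflib import SequenceMatcher

def intelligibility(transcript: str, target: str) -> float:
    """Return a 0-1 similarity between the recognized phrase and the target."""
    return SequenceMatcher(None, transcript.lower(), target.lower()).ratio()

def can_advance(transcript: str, target: str, threshold: float = 0.8) -> bool:
    """Gate the next level on an intelligible rendering of the key phrase."""
    return intelligibility(transcript, target) >= threshold

print(can_advance("the documents are in the embassy safe",
                  "The documents are in the embassy safe."))   # True
print(can_advance("ze dokument in embassy",
                  "The documents are in the embassy safe."))   # False
```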

Tuesday, December 20, 2011

Why multimedia teaching of aerobics (and pronunciation, I'm sure!) is more effective!

Clip art: Clker
There are times when you read an abstract and like the conclusion so much that you are actually afraid to read the article! The work reported by Li and Sun (2008) appears to suggest that a virtual reality or video-based version of EHIEP (Essential Haptic-integrated English Pronunciation) could be much more effective than face-to-face instruction. Wow! There are some issues of language in the abstract, so I may be misinterpreting the conclusions . . . But judge for yourself:

"The results indicated: (1) the multi-media teaching for sports aerobics, which takes the students' study as the center, pays great attention to the learning environment design is helpful in making the student to establish the correct technical movement concept, and then raise the utilization rate of effective time in class, and increase the teaching information capacity the grades, (2) the sports aerobics received in the experiment group are better than those of the students in the opposite one, and (3) the multi-media teaching has its unique superiority in theoretical knowledge and the technical skill instruction aspects of sports aerobics compared to conventional teaching methods."

And there you have it . . . 

Saturday, November 5, 2011

Listening (for pronunciation improvement) with your hands

Clip art: Clker
In this Science Daily summary, the research of Dodds, Mohler, and Bülthoff demonstrates the impact of both speaker and listener gesture in a virtual reality setting. As the two avatars (represented by humans in VR suits) "conversed," the gestures of the listener contributed substantially to the effectiveness of the communication, apparently providing feedback and showing the need for further elaboration or clarification.

The same goes for HICP work (which will one day also be done solely in VR). Learners at times mirror the pedagogical movement patterns (PMPs) of instructors, and the instructors are able to "monitor" individual learner pronunciation or group haptic practice visually--and then signal back appropriately. As strange as this may sound, providing feedback by means of haptically anchored PMPs generally seems more efficient (for several reasons) than "correcting" or adjusting the production of the sound itself by "simply" eliciting a repetition, etc. (See earlier posts on how that is done.)

That, of course, is an empirically verifiable claim which we will test further in the near future.  So listen carefully and haptically--and give your local avatar's pronunciation a hand.

Wednesday, October 12, 2011

A Wii sample of haptic anchoring of rhythm

Clip art: Clker
To see how easy it would be to bring haptic anchoring into video and virtual reality, check out any of the Wii teasers. The touch function in this case is carried out by the hand-held controllers. In the current version little active haptic feedback is provided directly (it is primarily visual), but some other systems already have such response capability. Likewise, the controllers could be set to require a squeeze, a button push, or a move across the visual field on a stressed syllable (see the sketch below). The EHIEP "Fight club" protocol (linked here in the "Hollywood" version) uses a very Wii-compatible pedagogical movement pattern, just with boxing gloves on the attacking hands--and the targets, the opponent's abs--practicing the 16 basic rhythmic feet of English. (The usual disclaimer: No animals or graduate students were harmed in the production of this YouTube video . . . )
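As a rough illustration of that idea--the stress timings and tolerance below are invented for the example, not taken from any Wii software--checking whether a controller press lands on a stressed syllable could look something like this:

```python
# Hypothetical sketch of "button push on the stressed syllable": compare the
# timestamp of a controller press against the known stress timings of a model
# utterance. Timings and tolerance are invented for illustration only.

def on_the_beat(press_time: float, stress_times: list[float],
                tolerance: float = 0.15) -> bool:
    """True if the press falls within `tolerance` seconds of a stressed syllable."""
    return any(abs(press_time - t) <= tolerance for t in stress_times)

# Model sentence: "He BOUGHT a new CAR on TUESday."
stress_times = [0.35, 1.10, 1.75]   # seconds into the recording (invented)
for press in [0.40, 0.90, 1.80]:
    print(press, on_the_beat(press, stress_times))
# 0.4 -> True (near 0.35), 0.9 -> False, 1.8 -> True (near 1.75)
```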

Tuesday, August 16, 2011

(Haptic) Pronunciation Rehabilitation

Clip art: Clker
Here is an interesting paper outlining a virtual-reality approach to using haptic rehabilitation technology with stroke victims. The parallels to some aspects of haptic-integrated pronunciation work, especially in dealing with fossilized pronunciation, are striking: (a) focus on "daily" actions, (b) exploit the visual field as a 3D structure--not just two-dimensional (vertical and horizontal), and (c) use haptic guidance and anchoring. Changing fossilized (cf. Acton 1984) pronunciation requires a somewhat different approach, where the targets must, at least initially, be words and phrases with a high likelihood of daily active or receptive use by the learner. (Often you have to simply "ferret out" every word with problematic sounds, one by one!)

Following Lessac, only then can language bits practiced in (relative) isolation as "homework" begin to integrate into spontaneous speaking. The three-dimensional space not only allows consistent haptic anchoring of language bits but also provides for registering emotional and expressive intensity, key elements in working with seemingly intractable mispronunciations. From that perspective, the term "rehabilitating (fossilized) pronunciation" has a nice ring to it. Now if we can just apply that principle to contemporary pronunciation teaching in general . . .

Monday, August 8, 2011

Haptic feedback: On the other hand . . .

Clip art: Clker
Here is an abstract (only) of a 2005 exploratory project by Kohli and Whitton of the University of North Carolina at Chapel Hill that looked at the feasibility of using the "non-dominant" hand to provide feedback to the dominant hand in virtual reality. (Typically, only the dominant hand, or at least one hand, is engaged.) This is the first study I have found that seems to suggest that research is catching up with EHIEP design! What it means, essentially, is that the concept of both hands touching in the visual field to enhance multiple-modality learning appears very consistent with current VR technology. (EHIEP pedagogical movement patterns all involve both hands touching in the visual field on a stressed syllable.) Only a matter of time before EHIEP goes VR? Perhaps. Still a great deal of work to do, however, before it can be "handed over!" But let's give them a hand, regardless!

Sunday, July 24, 2011

Haptic Cow!

Clip art: Clker
I am asked repeatedly how to apply haptic thinking and technology to more detailed articulatory work with vowels and consonants, much like what a professional speech therapist does. We may have the answer here in an extraordinary invention by Baillie. The "haptic cow," of course, has been designed for training in veterinary medicine. Trainees can (virtually and haptically) put their hand "inside" the cow to develop a required skill set that feels just like the real thing, such as delivering a calf.

Imagine our L2 learner doing the same sort of thing--except from a different perspective and virtual point of entry, of course! Being able to explore the inside of a "living," drooling, pronouncing  mouth with both hands as it does diphthong after glorious diphthong . . . "How now, brown cow?"

Friday, July 1, 2011

Field independence in haptic pronunciation instruction

As reported in this article by Hecht and Reiner, field dependence/independence cognitive style may have an impact on how readily one is able to "get" the felt sense of a haptically anchored object in virtual reality, or through haptic video as well. In HICP terms, that would suggest that the field-independent learner should be better able to focus on and recall targeted objects (sounds, words or processes)--without getting too engaged or distracted by any one modality involved or feature of the visual field--bringing as much information and cognitive integration to the event as possible.

Clip art: Clker
There is a fascinating interplay involved here. The "danger" of haptic-based or other "physical" techniques is that the learner may be so engaged with the somatic experience that the learning objective or structure in focus is lost or at least not well connected. Field independence suggests the possibility of better cognitive/noncognitive balance in the experience. On the face of it, that does seem to explain why some learners (although not many) find haptic work less effective or efficient. For example, they may be able to remember the pedagogical movement pattern (PMP) associated with a vowel but not the pronunciation. Likewise, a learner's over-enthusiastic, dramatic or emotional response in anchoring a targeted expression, not uncommon in field-dependent individuals, may actually be counter-productive, resulting in relatively poor, limited access and recall later.

Effective multiple-modality learning requires that the information from all the senses brought to the problem "at hand" be represented appropriately and optimally. EHIEP protocols work only to the extent that instructors and students maintain control and maximal attention in the process. Working with body movement, there is always the possibility of things getting a bit "out of hand," but that should be avoided to the extent possible--especially for the more field-dependent and hyperactive among us.