
Sunday, April 1, 2018

Blogpost #1000! - Gender discrimination in L2 listening and teaching!

How appropriate that the 1000th post on this blog is on the lighter side--but still with a useful "in-sight!"

Ever wonder why girls are better language learners than boys? A new study, Explicit Performance in Girls and Implicit Processing in Boys: A Simultaneous fNIRS–ERP Study on Second Language Syntactic Learning in Young Adolescents, by Sugiura, Hata, Matsuba-Kurita, Uga, Tsuzuki, Dan, Hagiwara, and Homae of Tokyo Metropolitan University, summarized by ScienceDaily.com, has recently demonstrated that, at least in listening to an L2:
  • Middle school boys tend to rely more on their left pre-frontal cortex, that part of the brain that is more visual, analytic and rule-oriented--and is connected more to the left hemisphere of the brain and right visual field. 
  • Middle school girls, on the other hand, tend to use the right area at the back of the brain that is more holistic, meaning and relation-based--and that is connected to the right hemisphere and left visual field.
Now granted, the subjects were early adolescents. That could well mean that within a year or two their general ability to "absorb" language holistically will begin to degrade even further, adding to the boys' handicap. (Could the effect impact girls more than boys instead? Not really.)

Research on what is processed better in the left, as opposed to right, visual field (the right, as opposed to left, brain hemisphere) was referenced recently in a fun piece on Neurosciencemarketing.com by Tim David: How a Strange Fact About Eyeballs Could Change Your Whole Marketing Plan: What public speakers accidentally know about neuroanatomy. It finally provided an explanation for the long-established principle in show business that you go "stage left" (into the right visual field of the audience) if you want to get a laugh, and you go stage right if you want tears and emotion. (If you don't believe that is true, try both perspectives in class a few times.)

(Most of us) boys really don't have a chance, at least not in terms of contemporary language teaching methodology! Not only does de-emphasis on form or structure in instruction give girls an unfair advantage, moving away from boys' preferred processing style, but where are left-brained (generally right-handed) instructors more likely to gesture and direct their gaze? You got it--right into the girls' preferred left visual fields. And that is NOT funny!

So, lighten your cognitions up a bit, move more stage left,
and cater a little more to the boys' need for rules and reasons, eh!



Sunday, August 20, 2017

Good listening (and pronunciation teaching) is in the EYE of the beholder (not just the ear)!

Here is some research, by Pomper and Chait of University College London, well worth gazing at and listening to: The impact of visual gaze direction on auditory object tracking, summarized by Neurosciencenews.com:

In the study, subjects "sat facing three loudspeakers arranged in front of them in a darkened, soundproof room. They were instructed to follow sounds from one of the loudspeakers while ignoring sounds from the other two loudspeakers . . . instructed to look away from the attended loudspeaker" in an aural comprehension task. What they found was that " . . . participants’ reaction times were slower when they were instructed to look away from the attended loudspeaker . . . this was also accompanied by an increase in oscillatory neural activity . . ."

Look . . . I realize that the connection to (haptic) pronunciation teaching may not be immediately obvious, but it is potentially significant. For example, we know from several research studies (e.g., Molloy et al., 2015) that visual input tends to override or "trump" audio in "head-to-head" competition in the brain. In addition, auditory generally trumps kinesthetic, but the two together may override visual in some contexts. Touch seems to be able to complement the strength or impact of the other three, or serve to unite or integrate them in various ways. (See the two or three dozen earlier blog posts on those and related issues.)

In this study, you have three competing auditory sources, with the eyes tracking to one as opposed to the others. Being done in a dark room probably helped to mitigate the effect of other possible visual distraction. It is not uncommon at all for a student to choose to close her eyes when listening, or to look away from a speaker (a person, not an audio loudspeaker as in the study). So this is not about simply paying attention visually. It has more to do with the eyes either being focused or NOT.

Had the researchers asked subjects to gaze at their navels--or any other specific object--the results might have been very different. In my view the study is not valid on those grounds alone, but it is still interesting in that subjects' gaze was fixed at all. Likewise, there should have been a control group that did the same protocols with the lights on, etc. In effect, to tell subjects to look away was equivalent to having them try to ignore the target sound and attend to it at the same time. No wonder there was " . . . an increase in oscillatory neural activity"! Really!

In other words, the EYEs have it--the ability to radically focus attention, in this case to sound, but to images as well. That is, in effect, the basis of most hypnosis and good public speaking, and well-established in brain research. In haptic pronunciation teaching, the pedagogical movement patterns by the instructor alone should capture the eyes of the students temporarily, linking back to earlier student experience or orientation to those patterns. 

So try this: Have students fix their eyes on something reasonable or relevant, like a picture, or neutral, like an area on the wall in front of them--and not look away--during a listening task. Their eyes should not wander, at least not much. Don't do it for very long, maybe 30 seconds max at the start. Explain this research to them so they understand why you are doing it. (As often as I hammer popular "Near-ol'-science", this is one case where I think the general findings of the research are useful and help to explain a very common sense experience.)

I have been using some form of this technique for years; it is basic to haptic work, except that we do not specifically call attention to the eye tracking, since the gestural work naturally accomplishes that to some degree. (If you have been, too, let us know!)

This is particularly effective if you work in a teaching environment that has a lot of ambient noise in the background. You can also, of course, add music or white noise to help cancel out competing noise or maybe even turn down the lights, too, as in the research. See what I mean?

Good listening to you!

References:
UCL (2017, July 5). Gaze Direction Affects Sound Sensitivity. NeuroscienceNews. Retrieved July 5, 2017 from http://neurosciencenews.com/sound-sensitivity-gaze-direction-7029/
Molloy, K., Griffiths, T. D., Chait, M., & Lavie, N. (2015). Inattentional Deafness: Visual Load Leads to Time-Specific Suppression of Auditory Evoked Responses. Journal of Neuroscience, 35(49), 16046-16054. DOI: 10.1523/JNEUROSCI.2931-15.2015





Wednesday, May 25, 2016

Haptic Pronunciation Teaching and Applied Phonology Course, August 1st~26th in Vancouver, BC!

If you are in the Vancouver area in August, join us at Trinity Western University for the Ling 611 Applied Phonology course (3 graduate credits), part of the MATESOL--or for just the Haptic Pronunciation Teaching component of that course. (Housing available.)

Ling 611 meets on campus, 9:00~12:00, Tuesday through Friday, August 2nd ~ 18th. Mondays are "reading days". On Fridays, students in teams submit a brief research report on the week's work. During the 4th week of the course, students write an individual research paper in consultation with the instructors and take the final certification test in haptic pronunciation teaching.

HaPT-E Certification Course
General syllabus:
  • Week 1 - Learning and teaching pronunciation
  • Week 2 - Teaching listening and pronunciation
  • Week 3 - Teaching speaking and pronunciation
The topics of the three hours of each morning are roughly as follows:
  • Hour 1 - Haptic pronunciation teaching
  • Hour 2 - Phonetic analysis of learner data
  • Hour 3 - Theory and methodology
Options: (If interested, contact me at TWU: william.acton@twu.ca)
  • Take the graduate course for credit (about $2400) or as an auditor (less than half price). You have to apply for that and have some prerequisite background in either case. 
  • Do just the Haptic Pronunciation part. That means 12 hours of class, plus about 12 hours of homework, which includes 2 tests. If you pass the tests, you get a certificate in HPT. (Cost will be about $500, which includes materials and certificate. You'll also be free to sit in on the other two hours of Ling 611 if you have time.) A limited number of places is available for that option.
Keep in touch!

Bill

Saturday, August 30, 2014

Improve L2 pronunciation--with or without lifting a finger!

Listen to this! (You may even want to sit down before you do!) A new study by Mooney and colleagues at Duke University, summarized by Science Daily, shows how movement can affect listening. Here's the summary:

"When we want to listen carefully to someone, the first thing we do is stop talking. The second thing we do is stop moving altogether. The interplay between movement and hearing has a counterpart deep in the brain. A new study used optogenetics to reveal exactly how the motor cortex, which controls movement, can tweak the volume control in the auditory cortex, which interprets sound."

Now, granted, the study was done on mice who probably have some other stuff going on down there in their motor cortices as well. Nonetheless, the striking insight into the underlying relationship between movement and volume control on our auditory input circuits is enough to give us (an encouraging) "pause . . . " in two senses:

First, learning new pronunciation begins with aural comprehension, being able to "hear" the sound distinctions. We have played with the idea of having learners gesture along with instructor models while listening. The study suggests that may not be as effective as we thought, or at least the conditions that we set up have to be more sensitive to "volume" and ambient static. You can see the implications for aural comprehension work in general as well. 

Second, during early speaking production in haptic pronunciation instruction, being able to temporarily suppress auditory input (coming in through the ears) is seen as essential. Following Lessac and many others in speech and voice training, what we are after initially is focus on vocal resonance in the upper body and kinaesthetic awareness of the gestural patterns, what we call "pedagogical movement patterns" or PMPs.


We do that, in part, to dampen (i.e., turn down the volume on) how the learner's production is perceived initially, filtered through the L1 or personal interlanguage versions, trying to focus instead on the core of the sound(s)--approximations, not absolute accuracy. Some estimates of our awareness of our own voice suggest that it is less than 25% auditory, that is, coming in through the air to our ears, the rest being body-based, or somatic.

What we hear should be moving--not what we hear with, apparently!

ScienceDaily citation: Duke University. "Stop and listen: Study shows how movement affects hearing." ScienceDaily, 27 August 2014.

Monday, March 31, 2014

TESOL 2014: Why didn't they mention THIS?

As evident in the previous post, it was a good conference for Hapticians and friends. If you work at it and go to a conference with focus, that'll always be the case. A few more post-Portland thoughts:

  • The 50/50 rule held. Half of the presentations you attend are good. Half of those involve something that you can take back to your school or classroom. (The other half you can still learn from!)
  • Of the roughly 2 dozen refereed presentations related to speaking, listening and pronunciation, a little more than half a dozen provided practical training and techniques. Three of those were haptic; the rest were research-based. (There were another couple dozen or so unrefereed publishers' sessions pitching books, software and materials.)
  • The three haptic presentations (general workshop, intonation workshop and "fight club" demonstration) were not only packed, but fun. We have to do much more of that.
  • The reaction to our haptic work was better than in the past, in part because we are getting better at presenting it. We are better now at scaffolding in the "body" training so that few in the audience cannot keep up. (It has taken us a long time to get that right.)
  • Haptic work is highly relational. At a conference, when you are trying to connect with your audience, that is great. In the classroom, using the haptic video system (AH-EPS) may be a better strategy, depending on your level of training in pronunciation teaching and the nature of the crowd in front of you. (See several earlier posts on that!)
  • The word, haptic, is finally getting out. That has been our primary objective for the last two years. It is apparently spreading a little better "horizontally" than "vertically" . . . After our workshop, one of the participants came up to me very much excited about what she had just experienced. She began by commenting that the day before she had been to two workshops on pronunciation by "experts" in the field. Then (using emphatic gesture) she said:

 "Why didn't they mention THIS!!!"

Good question.






Monday, May 6, 2013

The sound of gesture: kinaesthetic listening during "haptic video" pronunciation instruction

In the early 90s a paintball game designer in Japan told me that my kinaesthetic work was a natural for virtual reality. Several times since, I have explored that idea, including developing an avatar in Second Life and, more recently, creating an avatar in my image to perform on video for me. (I have done half a dozen posts over the last three years playing with that idea.) How the brain functions and the learner learns in VR is a fascinating area of research that is just beginning to develop.

In a 2013 study by Dodds, Mohler and Bülthoff of the Max Planck Institute for Biological Cybernetics, reported in Science Daily, " . . . the best performance was obtained when both avatars were able to move according to the motions of their owner . . . the body language of the listener impacted success at the task, providing evidence of the need for nonverbal feedback from listening partners . . . with virtual reality technology we have learned that body gestures from both the speaker and listener contribute to the successful communication of the meaning of words."

The mirroring, synchrony and ongoing feedback of haptic-integrated pronunciation work are key to effective anchoring of sounds and words as well, whether done "live" in class or in response to the haptic video of AH-EPS. (In the classroom, with the students dancing along with the videos, the instructor, as observer, is charged with responding in various ways to nonverbal and verbal feedback, such as mis-aligned pedagogical movement patterns, "incorrect" articulation, or questions from students.) What the research suggests is that listener body movement not only continuously informs the speaker and helps mediate what comes next, but that movement tied to the meanings of the words contributes significantly, apparently even more so than in "live" lectures.

There are any number of possible reasons for that effect, of course, but "moving" past the mesmerizing, immobilizing impact of video viewing appears critical to VR training (and HICP!). KIT




Sunday, April 7, 2013

Perfect pronunciation?

There are any number of ways to do that, of course. ("Per'fect" it, that is!) One place I would generally not begin with beginners or upper beginners, however, would be at the Merriam-Webster's Learner's Dictionary and its "Perfect Pronunciation Exercises." As noted in earlier posts, one place I might start, however--one of my favorites, one very compatible with AH-EPS--is at English Accent Coach. The "difference" between the two sites is instructive. One begins with listening to words in sentence context; the other begins with the sounds themselves, leads learners through a series of exercises (and games) and then extends to the sounds in words, etc. The EAC model is a good one, whether you use that specific site or do the same within your listening/pronunciation syllabus/curriculum. Most of the traditional pronunciation packages did something similar but did not have the technology available to make it fast and efficient, as does EAC.

Now, EAC will probably not entirely agree with me that embodying the vowels of English first gives learners a much better "touch" for them before they play the game there, but give Professor Thomson a break . . . he's listening.

So, once you finish Module 3 in AH-EPS, send your students over to EAC for some very fine, fine tuning. 

Saturday, November 3, 2012

Treasuring listening: near-ear training for pronunciation work

Good TED talk by Julian Treasure. For enhanced interpersonal listening he ends with the acronym RASA (Receive, Appreciate, Summarize, Ask), your basic attending skills--and even world peace! What is worth "listening to," however, is how he gets there, what he terms "savouring," "mixing" and "listening positioning." In essence, "savouring" is focusing for a couple of minutes either on one sound in your environment or on silence; "mixing" is focusing briefly on the sounds in your environment, one after another, for maybe half a minute each; "positioning" is the process of intentionally listening with a purpose or conceptual "filter" in mind (for example, listening very consciously, whether empathetically, critically or sympathetically).

Now I'm not quite sure how you do the third (positioning) in our work, but the first two forms of auditory attention management, savouring and mixing, are intriguing. They appear to be apt, applicable analogs for what is involved in "training the body first" to attend to the felt sense of movement and somatic resonance (good vibrations in the vocal tract and upper body). I have not systematically worked with pre-pre-listening such as that described by Treasure, but it sounds like a perfect fit. First chance I get, I'll "embody" some of it in an upcoming EHIEP session and report back. Hear, hear!

Friday, January 6, 2012

Grow Staged (Haptic-integrated) Self-directed (language) learners


Here is a Slideshare presentation of Grow's (1991) Staged Self-Directed Learner model. The four learner stages (dependent, interested, involved and self-directed) are matched with instructor roles (authority or coach, motivator or guide, facilitator, and consultant or delegator). The EHIEP system focuses especially on the first two stages in order to enable the latter two, which are more a function of the complete language study and experience, not just pronunciation, per se. For any number of reasons, the nine basic modules of EHIEP are tightly controlled and monitored. At the conclusion of that program, at Grow's Stage Three, the learner should be well trained, with a set of learning and anchoring strategies that are appropriate for both individual and classroom work.

As noted in a couple of earlier blogposts, one of the "shibboleths" or critical benchmarks of effective HICP work is what we call, for lack of a better term, full-body listening. Learners consistently report that they are much better at being able to listen and "play back" what they hear--not just the words but the expressiveness involved--through their bodies. Some say it is as if the recording goes on in the chest as well as the ears. Good (haptically-integrated) pronunciation and listening are GROW'n--not just bored into being that way!

Saturday, November 5, 2011

Listening (for pronunciation improvement) with your hands

In this Science Daily summary, the research of Dodds, Mohler, and Bülthoff demonstrates the impact of both speaker and listener gesture in a virtual reality setting. As the two avatars (represented by humans in VR suits) "conversed," gestures of the listener contributed substantially to the effectiveness of the communication, apparently providing feedback and showing the need for further elaboration or clarification.

The same goes for HICP work (which will one day also be done solely in VR). Learners mirror the pedagogical movement patterns (PMPs) of instructors at times, and the instructors are able to "monitor" individual learner pronunciation or group haptic practice visually--and then signal back appropriately. As strange as this may sound, providing feedback by means of haptically anchored PMPs generally seems more efficient (for several reasons) than "correcting" or adjusting the production of the sound itself by "simply" eliciting a repetition, etc. (See earlier posts on how that is done.)

That, of course, is an empirically verifiable claim which we will test further in the near future. So listen carefully and haptically--and give your local avatar's pronunciation a hand.

Wednesday, October 5, 2011

Cooperative Attending Skills Training for ESL students and haptic feedback

I was not aware that this article on "attending skill" training by Corrine Cope and myself, from 1999, was still accessible. (The linked ERIC version is a relatively poor quality pdf, but still readable.) It provides what I think is still an excellent framework for creating very focused, peer-monitored group conversation where students can work on integrating new and corrected sounds, words, phrases or strategies into their spontaneous speech. I have used some version of attending skill training in virtually every ESL/EFL class I have taught (of any size and level) and I recommend it highly. In addition to assisting students in becoming simply better listeners, it provides them with a (relatively) stress-free and supportive setting where they can experiment with new language and where peers can actually be of real value in helping them do that.

Two "haptic" applications:  (1) Learners are relaxed to the point in speaking that they have a much better chance of staying tuned in to the "felt sense" of their voice, and, consequently are more likely to detect (unobtrusively) haptic anchored-errors or changes, and (2) when peers observe a problem with a targeted element of pronunciation in one of the speakers, they, or the instructor can provide appropriate "haptic feedback," that is (possibly) saying the word or phrase using a haptically anchored corrected version or request that the speaker try to provide it in the debriefing session.

It can be clinical pronunciation work at its best--in part probably because attending skill training was developed in counseling psychology in the first place. It can also change the way you "attend to" integrating sound change in the classroom.

Tuesday, August 16, 2011

Mirroring, Tracking and Listening

M, T and L are basic tools of pronunciation teaching. It has been assumed for some time that tracking, that is, having a learner speak along with a simple audio recording, is something of an overt form of what naturally goes on in the body in listening. There was earlier research that seemed to suggest that the vocal apparatus (mouth, vocal cords, etc.) moved along with the incoming speech at a subliminal level.
Turns out, according to this research by Menenti of the University of Glasgow, Hagoort of Radboud University, and Gierhan and Segaert of the Max Planck Institute, summarized by Science Daily, that general listening (without seeing the speaker "live," visually) does not necessarily involve such sympathetic "vibrations." In other words, the felt sense of listening in some contexts can be decidedly non-somatic, or divorced from embodied attention.

That does not mean that tracking is not still a useful technique for assisting learners with the intonation of the language, but clearly the neuro-physiological rationale may be suspect. This raises several interesting questions related to the complex inter-relationships underlying listening, speaking and pronunciation skills--and how to teach them, especially to adults. The evidence that mirroring, on the other hand, engages the body is unequivocal. That certainly speaks to HICP/EHIEP--and to any pronunciation teaching practitioner who is listening . . .

Wednesday, August 3, 2011

Kinesthetic empathy and haptic listening

Here is the first of two very cool videos from a neuroscience/dance project and conference: "From Mirror neurons to Kinesthetic Empathy." (The sound quality is problematic in places.) Dance-related research in kinesthetic empathy explores, in part, how the observer of dance "moves along" with the dancer--and how that experience can be utilized and enhanced.

One frequent observation by EHIEP learners is that near the end of the program their listening skills have improved in a somewhat unexpected manner. Specifically, they have become better at remembering what is said and how it is said, and at repeating what they have heard (often using EHIEP pedagogical movement patterns). The "felt sense" of that experience seems to be very much body-based, non-cognitive, as if the whole body is recording the conversation. Although we have for some time been terming that "kinesthetic listening," we have not yet developed the advanced listening comprehension protocol systematically. We should soon. Hapticempathy?

Friday, June 17, 2011

Keeping listening in the picture . . . or out of it!

Several posts have addressed the question of the relationship between learning modalities in general learning and pronunciation teaching. What this important 2010 study by Lavie and Macdonald of the Institute of Cognitive Neuroscience at UCL, reported by Science Daily, demonstrates is that in some contexts visual input appears to trump auditory input. In other words, being engaged visually in a task may limit ability to hear critical information.

We know from experience that some highly visual learners may find learning pronunciation especially difficult. This helps to explain why. From whatever source, even stunning visual aids or computer displays, "visual interference" with learning new sounds may be significant. The implication for EHIEP instruction is that haptic plus auditory input--key components of multiple-modality instruction, along with perhaps a modest amount of video on the side--is the best overall learning format. Get the picture . . . or the sound . . . take your pick!