Showing posts with label perception. Show all posts

Tuesday, January 21, 2025

Guilt by reason of "accentedness" (and what should be done about it!)

Interesting study out of the UK: “Stereotyped accent judgements in forensic contexts: listener perceptions of social traits and types of behaviour” by Alice Paver et al., summarized by Neuroscience News as "Do accents influence guilt perceptions?" (I might also add that accents influence getting work!) It raises so many issues that I'd recommend you read the full article yourself. The summary is not sufficient but is certainly provocative! Here is the Neuroscience News summary:

"Researchers analyzed responses from 180 participants who rated voices from 10 UK accents on social traits and likelihood of certain behaviors, including crimes. . . ." Leaving aside some obvious potential shortcomings of the design--some of which are acknowledged by the researchers, such as using male speakers only and a design that sets up the focus on "bias" before hearing the samples--the conclusions are . . . striking, to say the least:

"Accents influence perceptions of guilt, with those judged as “lower status” being considered more likely to commit crimes."

Now assuming that the results hold up later with

  • An acceptable definition of what constitutes an accent,
  • Replication involving the other gender(s),
  • Possibly a different general elicitation format, and that
  • The working-class dialects do come with features that could undermine credibility or "hiring potential"--an intuition strongly confirmed in research over the decades . . .

What should our approach be in the classroom when working with students who come to us with "working class" dialects and are aiming at white-collar careers, for example?

First, one of the other possibly relevant findings was that nonnative accents tended to be seen as more trustworthy than the native-speaker accents in the samples. Although it was not reported which nonnative accents carried that "advantage," that sounds like good news for those who'd rather not get into accent work in the first place. Maybe. The distinction between "accent" and "pronunciation" that I give students is something like:

  • If, when speaking slowly, your listeners have trouble understanding you, you need pronunciation work. Basic rhythm, stress and intonation instruction is key at that level.
  • If, when speaking quickly and maybe under some stress, your listeners have trouble understanding you, you need accent work. That requires attention to better, more accurate production of key professional terms and dialect features, plus pacing and voice quality settings. (It may even include breath, posture and self-monitoring training.)

So, if your students come to you having "absorbed" the features of a less prestigious, disadvantageous dialect and they are preparing for job interviews, and if you can't help them at the accent level, you may need accent-work training yourself . . . or you may be doing it already and not know it! If you do need to upgrade your accent work toolkit, join us for the next haptic course next month!

Keep in touch!

Bill

Credit:
Clker.com





Original Research: Open access.
“Stereotyped accent judgements in forensic contexts: listener perceptions of social traits and types of behaviour” by Alice Paver et al., Frontiers in Communication

Wednesday, March 27, 2024

How do you "get" the rhythm of a new language? Can you?

Clker.com
This is something of a follow-up to a 3/10/24 blog post (All you need is rhythm . . .). Turns out, not surprisingly, that natural "rhythmic sense" may give you an advantage in acquiring the pronunciation of a language . . . at least Norwegian! Interesting finding in a new study, Replication of population-level differences in auditory-motor synchronization ability in a Norwegian-speaking population, by Sjuls, Vulchanova and Assaneo of the Norwegian University of Science and Technology (summarized in Neuroscience News as: Can rhythm sense predict language skills?).

The research found "pronounced" differences in the subjects of the study in terms of how quickly they could lock on to (or sync their body with) the rhythm of speech samples. Earlier research by the same team had established the general correlation between rhythmic sense and pronunciation accuracy. This study extends those findings considerably, implying that language learning more broadly considered may hang on perception of rhythm. The nexus of connections of rhythmic processing in the brain and grammatical structure has long been recognized and investigated. 

Of course, to quote my favorite Bertrand Russell quip, "a difference that doesn't make a difference . . . doesn't make a difference": the critical thresholds on the rhythm perception continuum were not investigated, but the existence of such barriers or facilitation points seems obvious. Any experienced language instructor who works with speaking in almost any context "knows" learners who fit both ends of the scale. The question is: what can be done for the naturally "rhythmically challenged?"

A number of studies have demonstrated the benefit of an early focus on rhythm in acquiring an L2, but the direct connection to the underlying process involved has never been clear. In other words, the implication is that working with rhythm just for rhythm's sake, for the FUN of it--not directly tied to the structure of the text in the lesson or to specific words or lexical constructions--may still be highly beneficial. So get out your guitar, raps and books of poetry . . . just for the embodied experience of "getting" the rhythm of the L2. (You knew that!) You now have Neuroscience's permission! Go for it! (And come join us who do embodied rhythm the haptic pronunciation teaching way, of course!)


Source: Sjuls, G.S., Vulchanova, M.D. & Assaneo, M.F. Replication of population-level differences in auditory-motor synchronization ability in a Norwegian-speaking population. Commun Psychol 1, 47 (2023). https://doi.org/10.1038/s44271-023-00049-2

Thursday, July 21, 2022

Giving the nod to good pronunciation teaching: the "Coconut Cheeseburger" effect

Many in the field "look down on" using gesture and body movement extensively in pronunciation teaching; some of that is deserved, of course. But a new study adds an interesting new twist: upper torso "nodding" (at least in English), often observable when a native speaker is speaking rhythmically or stressing words in speech. (Note: This is a bit of a stretch--literally, of the neck--but hang with me. My "discovery" of the upper torso nod early on was simply a game changer.)

In a study by Fumiaki Sato of Toyohashi University of Technology and colleagues (summarized by Neurosciencenews.com) titled Backward and forward neck tilt affects perceptual bias when interpreting ambiguous figures, subjects were shown three-dimensional cubes in their visual field, where they had to either look up to focus on a cube or look down to identify which of two or three others they were looking at. Basically, when nodding their heads down slightly, they were able to identify the cube more quickly than when they were looking up at it. (Moving to the left or right did not evoke an analogous difference in perception.) Fascinating study . . . The researchers' discussion focuses on the role of that postural adjustment in affecting perception, without speculating further as to the implications of the finding. Allow me!

In 1987, on my way to a convention, I observed two strikingly different upper torso nods associated with the words "Coconut Cheeseburger." (For the full story, see the blog post on it from 2015.) One person, trying to explain why his friend had mistakenly received a 'coconut cheeseburger,' claimed that what had been said was "coconut cheeseburger," using one torso nod, culminating on 'cheese.' The other person argued that what she had actually said was "Coke and a cheeseburger," using two torso nods, one on 'Coke' and one on 'cheeseburger.' You see the problem. Said with one torso nod--given that there was a sandwich of that description at the time in the Florida Keys--the misunderstanding is . . . well . . . understandable.

In haptic pronunciation teaching--and perhaps all teaching in English in some sense--that basic pendulum-like motion of body rhythm in speaking is fundamental, reflecting the muscles of the upper and lower chest, and the diaphragm, coming together to expel air up and out through the vocal cords. At the "bottom" of each nod is where, according to the research, there should be at least some greater clarity and focus. If you "think" about it, that downward motion of the upper torso can have meaning in interpersonal communication from several perspectives.

Some of it, of course, is just visual marking of stress assignment, similar to the "baton" gesture. It can also, however, signify other concepts externally, such as resignation or confidence or, depending on the speed of the gesture, varying degrees of engagement or energy. Regular, uncluttered rhythmic torso nods can imply semantic coherence in the speaker--that what is being said is thoroughly integrated at that point in time. Any highly accomplished public speaker generally has near total control and expressive use of upper torso "nodding" as well.

In haptic work, almost every one of the three dozen or so designed gestures may be accompanied by an upper torso nod, depending on whether the stretch of speech is being articulated in "pieces" for some pedagogical purpose or fluently, approaching natural speech. In effect, the torso, not the head and arms, is where the "action" is. How's yours?

See what I mean? If not, set up a video camera off to your left or right as you teach. Note when your speech is generally synced with your upper torso nods, and when it is not. If it is, well . . . take a bow! Then join us at www.actonhaptic.com!

And, of course, keep in touch!

Bill

Source: https://www.nature.com/articles/s41598-022-10985-4


Thursday, January 27, 2022

BIG news about Haptic Pronunciation Teaching!

Size DOES matter, it turns out, according to research by Masarwa, Kreichman and Gilaie-Dotan of Bar Ilan University and University College London, summarized by NeuroscienceNews.com as "In visual memory, size matters." One of the key features of Haptic Pronunciation Teaching (HaPT) is the use of relatively large, sweeping gestures across the visual field in front of the class to represent the sounds and patterns of the language. (Students typically do the gestures along with the instructor.) We have known for a couple of decades that that "larger than life" visual representation of sounds in communicating with the class is highly effective.

Now we have a little more evidence as to just why. In the study, simply put, under various experimental conditions, it was demonstrated that the larger image was remembered better. The researchers' conclusion:

"Our study indicates that physical stimulus dimensions (as the size of an image) influence memory, and this may have significant implications to learning, aging, development, etc."

Fascinating study, linked below. In other words, our method is "bigger" than your method. There is actually much MORE to the story, of course! Go here to find out!


And, of course, keep in touch!

Sunday, July 19, 2020

Fixing your eyes on better pronunciation--or before it!

Early on in the development of haptic pronunciation teaching, we began by borrowing a number of techniques from Observed Experiential Integration (OEI) therapy, developed by Rick Bradshaw and colleagues about 20 years ago. OEI has proved particularly effective in the treatment of PTSD. One of OEI's basic techniques is eye tracking; that is, therapists carefully control the eye movements of patients, in some cases stopping at places in the visual field to "massage" points through various loops and depth-of-field tracking.
Clker.com

We discovered, in attempting to control students' eye movement--having them follow with their eyes the track of the gestures across the visual field being used to anchor sounds during pronunciation work--that although memory for sounds seemed better, holding attention for such extended lengths of time could be really counterproductive. In some cases, students even became slightly dizzy or disoriented after only a few minutes. (And, in retrospect, we were WAY out of our league . . .)

Consequently, attention shifted to visual focus on only the terminal point in the gestural movement where the stressed syllable of the word or phrase was located, where the hands touched. We have been using that protocol for about a decade.

Now comes a fascinating study by Badde et al., "Oculomotor freezing reflects tactile temporal expectation and aids tactile perception," summarized by ScienceDaily.com, that helps refine our understanding of the relationship between eye movement and touch in focusing attention. In essence, what the research demonstrated was that stopping or holding eye movement just prior to when a subject was to touch a targeted object significantly enhanced the intensity of the tactile sensation. Or, the converse: random eye movement prior to touch tended to diffuse or undermine the impact of touch. That helps explain something . . .

The rationale for haptic pronunciation teaching is, essentially, that the strategic use of touch both successfully manages gesture and much more effectively focuses the placement of stressed syllables in words accompanying the gesture in gesture-synchronized speech. In almost all cases, the eyes focus in on the hand about to be touched, just prior to what we term the TAG (touch-activated ganglia), where touch literally "brings together" or assembles the sound, body movement, vocal resonance, and the graphic visual schema and meaning of the word or phoneme itself.

In other words, the momentary freezing of eye movement an instant before the touch event should greatly intensify the resulting impact and later recall produced by the pedagogical strategy. We knew it worked, just didn't really understand why. Now we do.

Put your current pronunciation system on hold for a bit . . . and get (at least a bit) haptic!

Original source:
Stephanie Badde, Caroline F. Myers, Shlomit Yuval-Greenberg, Marisa Carrasco. Oculomotor freezing reflects tactile temporal expectation and aids tactile perception. Nature Communications, 2020; 11 (1) DOI: 10.1038/s41467-020-17160-1

Wednesday, June 24, 2020

Getting a feel for pronunciation: What our pupils can tell us!

Clker.com
What do you do with your eyes when you are struggling to understand something that you are listening to? (Quick: Write that down.) Now some of that, of course, depends on your personal wiring, but this new study, “Asymmetrical characteristics of emotional responses to pictures and sounds: Evidence from pupillometry” by Nakakoga, Higashi, Muramatsu, Nakauchi and Minami of Toyohashi University of Technology, as reported by Neuroscience News, sheds some new "light" on how the emotions may influence our ongoing perception and learning. Using eye tracking and emotion-measuring technology, a striking pattern emerges.

From the summary (boldface, mine):
"It suggests that visual perception elicits emotions in all attentional states, whereas auditory perception elicits emotions only when attention is paid to sounds, thus showing the differences in the relationships between attentional states and emotions in response to visual and auditory stimuli."

So, what does that imply for the pronunciation teacher? Several things, including the importance of what is going on in the visual field of learners when they are attempting to learn or change sounds. It has been long established that the process of learning pronunciation is especially susceptible to emotion. It can be an extraordinarily stressful experience for some learners. Even when there are no obvious stressors present, techniques such as relaxation or warm ups have been shown to facilitate learning of various aspects of pronunciation.

Consequently, any emotional trigger in the visual field of the learner can have either a "pronounced" positive or negative impact, regardless of what the instructor is attempting to direct the learners' attention to. If, on the other hand, learners' attention is focused narrowly on auditory input, you have a better chance of managing emotional impact FOR GOOD--provided you can successfully manage or restrict whatever is going on in the learner's visual field that could be counterproductive emotionally. (Think: Hypnosis 101 . . . or a good warm up . . . or a mesmerizing lecture!)

That doesn’t mean we teach pronunciation with our eyes closed . . . when it comes to the potential impact of the visual field on our work. Quite the contrary! How does the “front” of the room (or the scenes on screen) feel to your pupils? Can you enhance that?

To learn more about one good (haptic) way to do that, join us at the next webinars!

Original Research: Open access
“Asymmetrical characteristics of emotional responses to pictures and sounds: Evidence from pupillometry” by Nakakoga, S., Higashi, H., Muramatsu, J., Nakauchi, S., and Minami, T.
PLOS ONE doi:10.1371/journal.pone.0230775

Thursday, June 4, 2020

CPR for Pronunciation homework and teaching . . . that works!

Clker.com
Excellent study by Martin, "Pronunciation Can Be Acquired Outside the Classroom: Design and Assessment of Homework-Based Training," a real MUST READ if you are serious about pronunciation teaching, demonstrating that at least one kind of (computer-mediated) homework system is not only effective but may work as well as classroom-only instruction.

The basic process in the homework phase was what is termed iCPR: computer-based, intelligibility-focused cued pronunciation reading. Learners are provided with explicit instruction and explanation, and then both perceptual and production training and practice, with feedback in the perceptual phase/practice only.

The study involved adult learners of German, extending over 10 weeks, with the equivalent of about 30 minutes of instruction either in class or out of class. The in-class lessons seemed to closely mimic the process and time allocation of the homework. From a number of perspectives, both treatments showed equally significant improvement and student satisfaction. Methodologically, the project seems tight, although the use of the term "homework" is probably a little misleading today, when the learner never really "leaves" the web in some form during the day except to sleep . . .

In corresponding with the researcher, my only question was: How (on earth) did you get the students to DO their homework? Surely it had something to do with the "sell" up front, the allocation of grade points (easily accounted for in the computer-mediated system) and (probably) early student awareness, to some degree, of the program's efficacy. So . . . it looks well conceived, a highly detailed blueprint for how to set up a similar system.

Setting aside for the moment the question of just how readily the process can be adopted and adapted, what this shows is that Martin has given us another intriguing picture of the future of pronunciation teaching: pronunciation work handled outside of in-class instruction.

To paraphrase Lincoln Steffens: "I have seen the future (of pronunciation teaching) and it works. [remark after visiting the Soviet Union in 1919]” or maybe even Marshall McLuhan: "If it works, it's obsolete." . . . The field is changing fast. Pronounced change, to put it mildly!

Source: 
The Modern Language Journal (2020). DOI: 10.1111/modl.12638. National Federation of Modern Language Teachers Associations.

Tuesday, January 22, 2019

Differences in pronunciation: Better felt than seen or heard?

clker.com
This feels like a "bigger" study, maybe even a new movement! (Speaking of new "movements", be sure to sign on for the February haptic webinars by the end of the month!)

There are any number of studies in various fields exploring the impact of racial, age or ethnic "physical presence" (what you look like) on perception of accent or intelligibility. In effect, what you see is what you "get": the visual will often override the audio--what the learner actually sounds like. Actually, that may be a good thing at times . . .

Haptic pronunciation teaching and similar movement-based methods use visual-signalling techniques, such as gesture, to communicate with learners concerning status of sounds, words and phrases. Exactly how that works has always been a question.

Research by Collegio, Nah, Scotti and Shomstein of George Washington University, summarized by Neurosciencenews.com as “Attention scales according to inferred real-world object size," points to something of the underlying mechanism involved: perception of relative object size. The study compared subjects' reaction or processing time when attempting to identify the relative size of objects (as opposed to the size of the image of the object presented on the screen). What they discovered is that, regardless of the size of the images on the screen, the objects that were in reality larger consistently occupied more processing time or attention.

In other words, the brain accesses a spatial model or template of the object, not just the size of the visual image itself in "deciding" if it is bigger than an adjacent object in the visual field. A key element of that process is the longer processing time tied to the actual size of the object.

How does this relate to gesture-based pronunciation teaching? In a couple of ways, potentially. If students have "simply" seen the gestures provided by instructors (e.g., Chan, 2018) and, for example, in effect have just been commanded to make some kind of adjustment, that is one thing. The gesture is, in essence, a mnemonic, a symbol, similar to a grapheme, a letter. The same applies to such superficial signalling systems as color, numbers or finger contortions.

If, on the other hand, the learner has been initially trained in using or experiencing the sign, itself, as in sign language, there is a different embodied referent or mapping, one of experienced physical action across space.

In haptic work, adjacent sounds in the conceptual and visual field are first embodied experientially. Students are briefly trained in using three different gesture types, of distinctive lengths and speeds, accompanied by three distinctive types of touch. In initial instruction, students do exercises where they physically experience combinations of those different parameters as they say the sounds, etc.

For example, the contrastive gestural patterns (done as the sound is articulated) for [I], [i], [i:] and [iy] are progressively longer and more complex (see linked video models):
a. Lax vowels, e.g., [I] ('it') - The middle finger of the left hand quickly and lightly taps the palm of the right hand.
b. Tense vowels, e.g., [i] ('happy') - The left and right hands touch lightly, fingertips in contact momentarily.
c. Vowel before a voiced consonant, e.g., [i:] ('dean') - The left hand pushes the right hand, palms touching, firmly 5 centimeters to the right.
d. Tense vowel plus off-glide, e.g., [iy] ('see') - The fingernails of the left hand drag across the palm of the right hand and, staying in contact, slide up about 10 centimeters and pause.

The same principle applies to most sets of contrastive structures and processes, such as intonation, rhythm and consonants. See what I mean--why embodied gesture for signalling pronunciation differences is much more effective? If not, go here, do a few haptic pedagogical movement patterns (PMPs) just to get the feel of them, and then reconsider!





Monday, September 18, 2017

Killing Pronunciation 9: Reappraising negative attitudes toward pronunciation

Clker.com
Maybe the most consistent finding of research on pronunciation teaching is that (at least among instructors who have yet to recover from structuralism, "communicative language teaching" or cognitive phonology) there are a lot of negatives associated with it (e.g., Baker, 2015, and many others). My approach has always been to stay calm and train teachers in how to do pronunciation well, figuring that success will eventually get them past all the noise out there.

I may have to reappraise that line of march, especially with my Chinese students. Maybe I could do more to attack those negative feelings and perceptions directly. But how?

New research by Wu, Guo, Tang, Shi, and Luo reported in Role of Creativity in the Effectiveness of Cognitive Reappraisal suggests a way to do just that: a little instructor-directed and controlled creativity, something I suspect that only a team from the Beijing Key Laboratory of Learning and Cognition, The Collaborative Innovation Center for Capital Education Development, Department of Psychology, Capital Normal University, Beijing, China and the Key Laboratory of Mental Health, Institute of Psychology, Chinese Academy of Sciences, Beijing, China--could possibly pull off!

In essence, they confirmed that subjects recognized creativity as a potentially powerful antidote to negative emotions, something that has been established empirically for some time. What was fascinating, however, was that subjects' negative feelings about the targeted video scenes could only be substantially "affected" by being led through creative exercises. In other words, they couldn't get past the negatives by doing something creative on their own, without help. Wow.

Instructor-conducted / creativity-driven / negative attitudes toward pronunciation teaching / repair/reappraisal (INPRR, pronounced: In-P-RR). What a concept! Well, actually, much of what passes for creativity training is instructor-centered, designed not to provide you with the tools but to guide you in thinking outside the box so you know what it feels like when it happens. I was really into that for a couple of decades in pronunciation teacher training, in fact. There are still those in the field, like Marsha Chan, who do that well--the "there are all kinds of really creative, fun things you can do when teaching pronunciation" shtick. Working with kids, that plays well; with adults, on the whole, I have always thought it is at best counterproductive. (The reasons for that have been developed on the blog extensively.)

However, I may have it wrong. Rather than training teacher trainees in creative techniques to use in the classroom, I should be doing creative activities with them that address their underlying negative feelings (fear, self-doubt, etc.) directly. Some suggestions follow, most of which I have seen over the years at conferences or on the web. I'll get things started with a few that are research-based (and reported on the blog recently), and then you help by adding your best INPRR to the list:
  • Have them list all those negative pronunciation-induced emotions on the top of cookies or in chocolate and eat them.
  • Lead them in doing your basic OEI switching technique to defuse the emotion if it is really strong. (Done with only one student at a time, in private, however.)
  • Have them talk about themselves fearing pronunciation in the 3rd person (See Gollum Speak)
  • Lead them in coming up with a list of all the ways they might overcome such emotions, and then have selected students read each one out expressively and dramatically in their heaviest L1 accent. (I like that one!)
  • Have them share their negative feelings toward pronunciation with each other in pairs while holding a hot beverage. That one is incredibly powerful.
  • Then have them report back to the class in pantomime, having the rest of the class guess what it is. 
  • Stand up in front of the class and begin verbally listing the unrealistic fears your students may have about pronunciation, or those that they may have now but will be "gone" by the end of the course. Also have on the board a list of epithets appropriate for shouting down goofy ideas, which the students produce after you state each fear, possibly accompanied by gesture.
  • Come to class dressed as Sigmund Freud or your neighborhood therapist. Sit in a comfortable chair and answer their questions chewing on a pipe, suggesting hilariously funny solutions to their fears. (I sat in on one of those in Japan that was priceless and exceedingly effective, I think.)
  • Have a "Love me, love my accent" day in class where students intentionally speak with a stereotypically heavy accent. (I have seen that recommended a number of times.)
Your turn! I'll award a set of the v4.5 AHEPS DVDs to the contributor of the best one!

Source: 
Retrieved September 18, 2017 from http://journal.frontiersin.org/article/10.3389/fpsyg.2017.01598/full


Tuesday, November 8, 2016

The "myth-ing" link in (pronunciation) teaching: Haptic cognition

Nice piece from The Guardian Teacher Network, Four neuro-myths still prevalent in schools, debunked, by Bradley Busch (@Inner_Drive). Now granted, The Guardian is not your average refereed, first-line journal, but the sources and research cited in the readable piece are credible. Just in case you need a little more information to help your colleague finally abandon any of them, check it out. The four myths are:
  • Learning styles are important in teaching and instruction
  • We use just 10% of our brains.
  • Right vs left brain is a relevant distinction in understanding learning and designing instruction
  • Playing "brain" games makes you smarter and should have a more prominent place in instruction
So, if those popular "teacher cognitions" are lacking in empirical support, especially the first and third, how should that affect the design of instruction? (Notwithstanding that the second and fourth just seem so "right" at times in the classroom!)

One helpful framework, cited by Busch (and this blog earlier), is Goswami (2008), which argues that learners learn best, in general, when taught using a multi-sensory, multiple-modality approach. From that perspective, for example, when teaching a sound, process or vocabulary word, as many senses as possible must be brought to the party, either simultaneously or in close proximity:
  • Auditory (sound)
  • Visual (imagery)
  • Kinesthetic (muscle movement and memory)
  • Tactile/cutaneous (surface skin touch)
  • General (somatic) sensation of vocal resonance throughout the head and upper body. 
  • In addition, the potential impact of that is conditioned by the degree of meta-cognitive engagement (conscious awareness on the part of the learner of all that sensory input, plus existing schemas, such as rules, experience and connections to related sounds and language bits and processes). 
How to best do that consistently is the question. The concept of "haptic cognition" (Gentaz and Rossetti, in press) suggests why haptic awareness can function to bring together all those modalities in learning. From the conclusion:

"Taken together, this suggests that the links between perception and cognition may depend on the perceptual modality: visual perception is discontinuous with cognition whereas haptic perception is continuous with cognition." (Emphasis, mine.)

In other words, visual schema, such as charts, colors and even text itself, may actually work against integration of sound, resonance, movement and meaning in pronunciation teaching. Research from a number of fields has established the potentially problematic nature of visual modality overriding auditory, in effect disconnecting sound from meaning. On the contrary, the haptic modality generally serves to unite sensory input, connecting more readily with cognition based in sound, resonance and meaning. 

Another myth, then--that visual explanatory schemas (images and text) are a good approach to pronunciation teaching in textbooks and media, as opposed to active experience of sound, movement and awareness of resonance, plus some visual support--needs serious reexamination. What Gentaz and Rossetti are asserting (or confirming) is that visual imagery may not always effectively contribute to conscious, critical, cognitive integration and awareness in learning--the ultimate goal of all media advertising!

In other words, pronunciation instruction should be centered more on comprehensive haptic cognition. If you are not sure just how that happens . . . ask your local haptician!

(Coincidentally, the name of our company is: Acton Multiple-Modality Pronunciation Instruction Systems, AMPISys, inc.!)




Monday, November 30, 2015

The Music of Pronunciation (and language) Teaching

Like many pronunciation and "speaking" specialists, I have long believed that in some way systematic use of music should be "in play" at all times in class. I suspect most in the field feel the same. Up until recently there has not appeared to be much of an academically credible way to justify that or investigate the potential connection to language teaching more empirically.

A 2015 study, Music Congruity Effects on Product Memory, Perception, and Choice, by North, Sheridan and Areni, published in the Journal of Retailing (DOI below), suggests some interesting possibilities. Quoting the ScienceDirect.com summary, the study basically demonstrated that:
  • Ethnic music (e.g., Chinese, Indian) increased the recall of menu items from the same country.
  • Ethnic music increased the likelihood of choosing menu items from the same country.
  • Classical music increased willingness to pay for products related to social identity.
  • Country music increased willingness to pay for utilitarian products.
So, what may that mean for our work, or explain what we have seen in our classrooms?
  • (Recall) For example, we might predict that using English music of some kind with prominent vowels, consonants, intonation and rhythm patterns would enhance memory for them.
  • (Perception) Having listened to "English" music should enable learners to better perceive or recognize appropriate pronunciation models or patterns of English. I suspect that most language teachers believe that intuitively, having seen the indirect effects of how students' engagement with the music of the culture "works". 
  • (Milieu) I, like many, have used classical music for "selling" and relaxing and creating ambiance for decades. There is research from several fields supporting that. Only recently have I been attempting to tie it into specific phonological structures or sounds, especially the expressive, emotional and relational side of work in intonation. 
  • (Function) I frequently use country-like music or rap for working on functional areas, warm ups, rhythm patterns, and specific vowel contrasts.
I am currently experimenting more with different rhythmic, stylistic and genre-based varieties of music (specifically, in the new v4.0 version of the haptic pronunciation teaching system, EHIEP - Essential Haptic-integrated English Pronunciation). Over the years I have used music in ways ranging from general background or mood setting to highly rhythmic tunes tied directly to the patterns being practiced. I just knew it worked . . .

The "Music congruity" study begins to show in yet another way just how music affects both associative memory and perception, conveying in very real terms broad connections to culture and context. More importantly, however, it gives us more justification for creating a much richer and more "memorable" classroom experience.

If you use music, use more. If not, why not? 

Full citation: North, Sheridan, & Areni (2015, in press). Music congruity effects on product memory, perception, and choice. Journal of Retailing. doi:10.1016/j.jretai.2015.06.001

Tuesday, July 14, 2015

/i/ or /ɪ/: Perception to Production

Nice piece of research by Lee and Lyster (Lee and Lyster, 2015 - full citation below) demonstrating the impact of feedback on "instructed" L2 speech perception. (Hat tip to Michael Burri for pointing me at it!) In a simulated-classroom setting, L1 Korean students significantly improved their ability to perceive the distinction between /i/ and /ɪ/ in English. The full article is worth the read. Just a couple of caveats before we talk about what that might mean for teaching in the classroom:
  • The title is a bit deceptive, as the authors note: " . . . our use of simulated classrooms in this study begs the question as to whether such intense instruction would be feasible in a regular classroom curriculum and whether the results would be similar."
  • The tasks are, indeed, excellent and well controlled--but give almost any competent pronunciation teacher 6+ hours of classroom time with a homogeneous group to work on just that single contrast and see what happens. (I may try to do that, in fact!) 
That does not diminish the importance of the study. The point is that with focused instruction, perception of vowel contrast can be radically improved--and by implication, production also. The question is, how can we begin to approximate that effect in the classroom? (If you are a regular reader of the blog, I'm sure you can see what is coming!)

Photo credit: Anna Shaw (EHIEP v4.0 logo)
Dealing with that /i/-/ɪ/ distinction in North American English (as opposed to British or Australian) is one of the most straightforward and effective features of the EHIEP (Essential Haptic-integrated English Pronunciation) system. Rather than taking about 5 hours to set things up (and in Lee and Lyster, 2015, there is no long-term follow-up on the effect of the study), the EHIEP method, were it to focus only on that contrastive pair, would in toto run less than 1 hour initially and then be integrated into general classroom instruction from there on. 

Without going into all the details here (detailed in AHEPS v3.0 and, coming this fall, v4.0; check out the free demos: lax/rough vowels and tense/double vowels, and/or our 2012 conference write-up, citation below), the procedure is basically:
  • Introduce the EHIEP lax and tense vowel pedagogical movement patterns, either with the video (about 15 minutes each) or do it in person.
  • Practice just those two vowels in word lists and in context in class: about 30 minutes
  • Begin providing both modelling and corrective, in-context feedback in class regularly.
  • Watch how the contrast shows up in students' spontaneous production.
I realize that sounds far too simple and obvious to be effective. Great classroom techniques are often like that! We now have over a decade of experience using that basic procedure. Given Lee and Lyster (2015), a classroom-based study using the EHIEP framework and integrating some of those tasks, especially the Bingo and card sort techniques, seems very possible. Before we get to that, try it yourself and let us know. 

Full citations:

Acton, W., Baker, A., Burri, M., & Teaman, B. (2013). Preliminaries to haptic-integrated pronunciation instruction. In J. Levis & K. LeVelle (Eds.), Proceedings of the 4th Pronunciation in Second Language Learning and Teaching Conference, Aug. 2012 (pp. 234-244). Ames, IA: Iowa State University.

Lee, A. H., & Lyster, R. (2015). The effects of corrective feedback on instructed L2 speech perception. Studies in Second Language Acquisition, 38. doi: http://dx.doi.org/10.1017/S0272263115000194.

Monday, April 6, 2015

Power Posing as (but) feelings of confidence?

Clip art: Clker.com
There was a well-publicized study and TED talk in 2010 by Cuddy of Harvard Business School demonstrating that "power posing" (striking and briefly holding a confident pose) actually made you feel more confident, and that the effect showed up in changed behavior and blood chemistry. Those findings certainly resonated with our consistent observations as to the impact of embodied, haptic pronunciation teaching.

But now comes a new study by Ranehill and colleagues at the University of Zurich calling the earlier research into question (summarized by ScienceDaily.com; see full citation below), which comes to this conclusion:
"This indicates that the main influence of power poses is the fact that subjects realize that the [sic] feel more self-confident. We find no proof, however, that this has any effect on their behavior or their physiology." (Emphasis mine!) Feelings of confidence but no other observable effects? Really?

On the face of it, the new study does seem a fair replication, except possibly for this: subjects in the first study were students at Harvard Business School; subjects in the second: " . . . 102 men and 98 women, most of them students from Zurich . . . " (Emphasis mine.)

Need I pose the question?

Probably not!

Full citation:
University of Zurich. (2015, April 1). Poses of power are less powerful than we thought. ScienceDaily. Retrieved April 6, 2015 from www.sciencedaily.com/releases/2015/04/150401084325.htm

Sunday, February 17, 2013

Perceived distance between or difference in accents


Clip art: Clker
Stick that in Google and you get over 9,700,000 hits. There is apparently some interest in that question. My dissertation (1979) on perceived social distance paralleled recently published research on perceived emotional distance inherent in personal relationships. What Frost of Columbia University and colleagues found in looking at "millions" of blog entries, according to Science Daily, was that what mattered was not whether an individual perceived themselves to be particularly close to or distant from some other person or entity, but whether that distance was seen as close to or far from what they considered "ideal", for whatever reason.

In my research it appeared that to be a good language learner (at least in the earlier stages of acculturation) it was not all that important how close or distant you perceived yourself to be from the L2 culture--as long as you saw yourself as about the same distance from your L1 culture. In other words, the ideal model of the more successful learners seemed to be a kind of equi-distance. Whether very close or quite "far" did not appear to be a factor.

So, what does that mean for our work in accent and pronunciation? Something like this: The targets learners have in their heads for the L2 sounds may not be the key factor at all--assuming that they have them. How they feel about that distance, however, is another matter. Do you have an adequate system for engaging either or both? The key notion is that you probably can't do the latter without the former being relatively well established.

One of the major innovations of haptic-integrated clinical pronunciation as realized in the AH-EPS system is the use of not just sound or color or key words or an IPA vowel chart for establishing L2 pronunciation targets, but, in addition, well-established locations in the visual field anchored by touch. Those locations, alone, assist learners in managing the effect and affect of the "distance" between L1 and L2 sound targets. Keep in touch. 

Sunday, December 23, 2012

Sound discrimination training: perceived "phon-haptic" distance

Clip art: Clker
Ask any Japanese EFL student how they managed to perceive and later produce the distinction between [i] and [ɪ] or [u] and [ʊ] in English and they'll probably tell you that it was difficult . . . or impossible. The same goes, of course, for L1/L2 phoneme mismatches for most learners, at least initially. The problem, of course, is the "competition" between phonetic or articulatory distance, that is, how physically different two sounds are to produce, and phonemic categorical distance. If the brain "decides" that two sounds represent the same phoneme, regardless of how different it "feels" to produce them--case closed. At least that is what most research suggests. A 2004 study by Gerrits and Schouten of Utrecht University (linked here at the University of Rochester) suggests that the task used in the discrimination process can significantly impact perception of phonemic categories.

In plain English, what does that mean? Basically this: the method you use to assist learners in hearing or producing a phonemic distinction in their L2 can, itself, affect whether they get it or not. Really? Well, maybe . . . So how do you usually do that? Do a class listening discrimination task of some kind? Give them an audio to listen to? Show them line drawings and have them repeat after you? Sit down with the learner and use a Starbucks coffee stirrer to get their articulators realigned?

As described in earlier blogposts, the EHIEP approach is to establish points in the visual field where the hands touch as the sound is articulated, what we term "phon-haptically." Those points, or nodes, are strategically placed so that distinctions such as those above are experienced as both physically distant from each other and somatically distinct in the texture or type of touch involved (tapping, pushing, scratching, brushing, twisting, etc.). The touch-type is chosen to "imitate" the felt sense of producing the vowel in the vocal tract in some way, if only metaphorically. Does it work? Try it and let us know. Keep in touch. 

Friday, November 25, 2011

Doing is believing

Clip art: Clker
This study from Lee and Noppeney of the Max Planck Institute, summarized by Science Daily, demonstrated that if you can do something (in this case, play the piano) you have a better chance of being able to see whether it is done correctly or with correct timing.

And how does this apply to HICP work? Relatively simply . . . By practicing HICP pedagogical movement patterns (PMPs)--for example, while saying a vowel sound, completing a movement across the visual field that ends in a touch of the other hand--the learner becomes better able to "uptake" guidance or visual corrective input from the instructor or other students in the form of PMPs as well. Most HICP error correction, as well as initial presentation, is done haptically, providing a clean visual model and then requiring haptic "repetition," or mirroring, of the adjusted form. Or, to paraphrase the old saw: Monkey do; monkey see--and integrate it more efficiently into spontaneous speaking.