Showing posts with label acquisition. Show all posts

Thursday, October 22, 2015

We have met the enemy (of pronunciation teaching in TESOL), and he is us!

I am often reminded of that great quip from Walt Kelly's comic strip Pogo, embellished in the title of this post. In workshops we often encounter the following three misconceptions about pronunciation teaching, based vaguely and incorrectly on "research" in the field. Recently, in the comments of one reviewer of a proposal for a workshop on teaching consonants for the 2016 TESOL convention--which was rejected, by the way--all three showed up together! Here they are, with my responses in italics:

Currency/Importance/Appropriateness 
"Most learners have access to websites that model phonemes, such as Rachel’s English and Sounds of Speech by the University of Iowa."

Really? "Most" learners? What planet is that on? Billions of learners don't have web access, including the preponderance of those in settlement programs here in Vancouver. And even those that do still need competent instruction on not only to use them effectively, but find them in the first place. Furthermore, those sites are strongly visual-auditory and EAP biased, better suited to what we term "EAP-types" (English for the academically privileged). For the kinaesthetic or less literate learner, those web resources are generally of little value. There are half a dozen other reasons why that perspective is excessively "linguini-centric."

Theory, Practice and Research Basis
"There has been much research, which has shown the central importance of the peak vowel in a stressed syllable. The focus on consonant articulation is less important."

That represents an "uninformed" consensus from more than a decade ago. Any number of studies have since established the critical importance of selected consonants for the intelligibility of learners from specific L1 backgrounds. Think: final consonants in English for some Vietnamese dialects or some Spanish L1 speakers of English.

Support for Practices, Conclusions, and/or Recommendations
"The article made a nice specific connection between haptic activities, and acquisition of consonant sounds. However, there was only one source."

Good grief. The workshop was proposed as a practical, hands-on session for teachers, presenting techniques for dealing with specific consonants. (The one reference is a published conference paper linked off the University of Iowa website.) I have heard similar reports from other classroom practitioners, like myself, who had proposals rejected: only "researcher-certified" proposals welcome. So much for our earlier enthusiasm in TESOL for teacher empowerment . . .

Sunday, December 14, 2014

Out of sight--but well-filed and managed (English) pronunciation change

A key feature of haptic pronunciation teaching is homework and practice management. In other words, once a new or improved sound or word has been introduced and anchored in class or someplace, it must be worked on by the learner consistently and systematically. (We typically tell students that it takes a couple of weeks to accomplish that.)

Research by Storm and Stone of UCSD, summarized by Science Daily (see full citation below), suggests that the process can be improved significantly by the learner employing an optimal filing system, one that allows "offloading" of work done but with clear pathways back for future reference. What they found was that as long as subjects had confidence that the material learned remained accessible, their ability to go on to learn new material was significantly better. If not, performance was equal to that of the control group.

Just for fun, ask your students to show you their pronunciation notes sometime . . .

There are several language learning-specific apps on the market for learning vocabulary, etc., but our experience is that almost any word processor with a companion filing system will work. As long as students are trained in how to practice, how much and when--and how to file it--the "savings" should be substantial! Nothing complicated, just hierarchically organized folders with "memorable" names!
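As a sketch of what "nothing complicated" might look like (the names and layout here are invented for illustration, not part of any HICP toolset), a few lines of Python can stand in for the whole scheme: one folder per category, one per item, one dated note per practice session.

```python
from datetime import date
from pathlib import Path

def file_practice_note(base_dir, category, item, note):
    """Save a dated pronunciation practice note under
    base_dir/category/item/, creating folders as needed.
    The folder names double as the 'memorable' retrieval path."""
    folder = Path(base_dir) / category / item
    folder.mkdir(parents=True, exist_ok=True)
    note_file = folder / f"{date.today().isoformat()}.txt"
    note_file.write_text(note, encoding="utf-8")
    return note_file
```

A note on final consonant clusters, for example, would land in something like pron_notes/consonants/final-clusters/2015-10-22.txt--easy to offload, with a clear pathway back.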

A forthcoming blogpost will detail some of the alternatives that we have found productive. In the meantime, keep in touch--and clean up all that useless clutter on your hard drive!

Full citation:
Association for Psychological Science. (2014, December 10). Saving old information can boost memory for new information. ScienceDaily. Retrieved December 14, 2014 from www.sciencedaily.com/releases/2014/12/141210080740.htm

Monday, September 8, 2014

More than a gesture: When to use gesture in L2 teaching

Should you still need more convincing as to the value and contribution of gesture in L2 learning and instruction, the September 2014 issue of The Modern Language Journal (98) has two excellent, complementary articles that you should read: one by Dahl and Ludvigsen on the effect of gesture on listening comprehension, and a second, by Morett, on gesture as a "cognitive aid" during speaking production and communication. (See full references below.)

The first study examines how observing gesture complements comprehension; the second then demonstrates how actually producing the gesture as you learn and then communicate with a new L2 term in the early stages of the process results in more effective acquisition, retention and recall. 

The learner populations involved are quite different, as are the research methodologies, but the two studies together contribute substantially to our understanding of how and when gesture works. (You'll have to access them through your library online or shell out the usual 5-6 Venti Caramel Frap equivalents for each, of course--but it may be worth it in this case.) There is also an earlier (free, accessible online) 2012 paper by Morett, Gibbs and MacWhinney, The Role of Gesture in Second Language Learning: Communication, Acquisition, & Retention, that lays out the theoretical background for the new study as well.

One striking (but not surprising) finding of the Morett study is that using a gesture while speaking and communicating results in better acquisition than just observing the gesture being used by someone else. The other study examines the conditions under which seeing gesture performed functions best. 

AH-EPS v3.0
The bottom line: Systematic incorporation of gesture in (at least initial) L2 learning is again shown to be exceedingly effective. It must be carefully timed and linked to meaning, but the results of both studies are very persuasive. Another good example of that, of course, is AH-EPS v3.0 Bees and Butterflies - Serious fun! (Which rolls out this month, in fact!) 


Full references:
Dahl, T. and Ludvigsen, S. (2014). How I See What You're Saying: The Role of Gestures in Native and Foreign Language Listening Comprehension. The Modern Language Journal, 98(3), 813–833.
Morett, L. (2014). When Hands Speak Louder Than Words: The Role of Gesture in the Communication, Encoding, and Recall of Words in a Novel Second Language. The Modern Language Journal, 98(3), 834–853.





Sunday, August 10, 2014

Blocking poor (and improved) pronunciation with Mindfulness

Mindfulness is big. It is described in a number of ways, according to Wikipedia:

"Mindfulness is a way of paying attention that originated in Eastern meditation practices."
"Paying attention in a particular way: on purpose, in the present moment, and non-judgmentally"
"Bringing one’s complete attention to the present experience on a moment-to-moment basis"


In earlier blogposts, I have focused on the possible benefits of M-training to our work. I may have been missing something . . . A provocative 2013 study by Howard and Stillman of Georgetown University (summarized by Science Daily) concludes that:

 " . . . mindfulness may help prevent formation of automatic habits -- which is done through implicit learning -- because a mindful person is aware of what they are doing." 

And in addition:

"The researchers found that people reporting low on the mindfulness scale tended to learn more -- their reaction times were quicker in targeting events that occurred more often within a context of preceding events than those that occurred less often."

The study is, of course, more complex, and the tasks involved may not be all that analogous to what we do in pronunciation teaching. Nonetheless, the striking preliminary finding--that conscious, meta-cognitive attention to the ongoing learning process may, in fact, work counter to some types of "implicit," or body-based, learning--is indeed very germane. So, when it comes to pronunciation work tasks such as repetition, pattern recognition, drill--and even haptic anchoring--to paraphrase Nike's classic slogan, perhaps the secret is to: Just do it!

At least something to be "mindful" of . . .

Thursday, March 20, 2014

Situated, epistemologically "HIP," pronunciation teaching!

Hat tip to fellow Haptician, Angelina VanDyke of Simon Fraser University, for this great quote from Brown, Collins and Duguid (1998): 

"A theory of situated cognition suggests that activity and perception are important and epistemologically prior at a non-conceptual level - to conceptualization, and that it is on them that more attention needs to be focused. An epistemology that begins with activity and perception, which are first and foremost embedded in the world, may simply bypass the classical problem of reference-of mediating conceptual representations." (Brown, Collins and Duguid (1998) Situated Cognitions and the Culture of Learning, pp. 28, 29.)

Is that not us (HIP - Haptic-integrated Pronunciation)? Trying to bypass the "hyper-cognition" and "talk about" that often represent themselves as sufficient or legitimate, effective pronunciation instruction can be a challenge.

It's the old (live) chicken and egg (head) conundrum. By the time you finish your explanation (no matter how elegant, engaging and worthy of noticing it be), it is probably too late. 

Enough said . . . 

Thursday, February 28, 2013

Improving pronunciation in your sleep?

For any number of reasons, I have always advised students to do their regular pronunciation practice in the morning. I may have to rethink that. A study comparing adults and children in developing explicit knowledge of the structure and sequencing of a complex motor task (pushing up to 16 buttons in the right order) demonstrated that in both adults and children, but especially in children, that knowledge emerges much faster and more consistently after a night's sleep.

As reported in Science Daily--and what I could get from looking over a pdf of the tables in the $32 article in the journal Nature Neuroscience--the study by Wilhelm of the University of Tübingen and colleagues demonstrates convincingly that sleep after motor training significantly enhances not only facility in doing the motor sequence task later but also development of an explicit, conscious understanding of the patterning involved. That kids are better than adults is no surprise, but the additional finding--that a night's sleep, as opposed to an intervening day of normal activities, was significantly better at facilitating development of a conscious understanding of the underlying patterning--is big. (No hint of that patterning was provided during the motor training.)

The interplay in pronunciation work between providing explicit rules for sound change and doing various kinds of implicit oral practice is central to the process. Especially in HICP work, where motor routines are associated with the targeted sounds and linguistic structures, this research has interesting implications, to be sure. Bottom line: At least in some phases of haptic pronunciation work, the time of day when practice is done may make a difference. Will work on that concept and get back to you. Something to sleep on . . . 

Thursday, December 27, 2012

The pitch for teaching prosody first

There are numerous examples of methods where intonation is taught first in pronunciation work, or shortly thereafter, using techniques such as "reverse accent mimicry," computer-assisted verbal tracking, or imitating actors without attending to the meanings of words. Anecdotally, they all seem to work. From a research perspective, intonation or pitch change has been employed extensively in exploring neuroplasticity, the ability of the brain to "learn" and adapt. For most learners, mimicking simple pitch contours in English is not that difficult. If you examine student course books, what you find is that they all include pitch contour work, but where it occurs and how much is done seems completely random.

A new study by Sober and Brainard of UCSF (summarized by Science Daily) of how songbirds correct their singing draws an interesting conclusion: the birds fix the little mistakes and ignore the big ones. The Bengalese finches provide us with an intriguing clue as to how to organize L2 pronunciation work as well: begin with the easy stuff--not the messy articulatory problems or complex phoneme contrasts or conflicts. The arguments for establishing prosody (intonation, rhythm and stress) first are compelling at one level (theoretically), but from the perspective of measuring tangible progress, it is still difficult at best to demonstrate what has been learned, given the tools we have available today.
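Just to make that finch-style triage concrete, here is a minimal Python sketch (the error scores and threshold are invented for illustration; nothing like this exists in the EHIEP materials): given rough "distance from target" scores for a set of pronunciation targets, work on the small deviations first and set the big ones aside.

```python
def prioritize_targets(error_scores, threshold=0.3):
    """Finch-style triage: keep only targets whose deviation from
    the model is small (<= threshold), ordered easiest first.
    error_scores maps each target to a 0-1 deviation score
    (0 = on target, 1 = far off)."""
    small = {t: e for t, e in error_scores.items() if e <= threshold}
    return sorted(small, key=small.get)

# Hypothetical diagnostic scores for one learner
scores = {
    "pitch fall on focus words": 0.10,
    "final consonant clusters": 0.80,   # a big one: defer it
    "rising yes/no question contour": 0.25,
}
```

With those (made-up) numbers, prioritize_targets(scores) would return the two small prosody items and leave the messy cluster work for later.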

Children clearly learn prosody first. (In the EHIEP system intonation is now in module four but I am considering introducing it earlier, in part based on this research.) Practically speaking, doing early prosody work is relatively straightforward and not costly. You can do it for a song, in fact.  

Thursday, November 15, 2012

FLASH! Conscious suppression of pronunciation work!

Continuous Flash Suppression (CFS) technology could well be in the future of pronunciation teaching, based on research by Hassin, Sklar, Goldstein, Levy, Mandel and Maril at Hebrew University, as reported in Science Daily. In CFS, " . . . one eye is exposed to a series of rapidly changing images, while the other is simultaneously exposed to a constant image. The rapid changes in the one eye dominate consciousness, so that the image presented to the other eye is not experienced consciously." What they discovered was that the material not experienced consciously was still processed and responded to non-consciously in various ways.

Their conclusion: " . . . humans can perform complex, rule-based operations unconsciously, contrary to existing models of consciousness and the unconscious." Avoiding conscious interference with pronunciation change is big. Now that may sound like a candidate for your "Well . . . duh!" file (a finding that is not only common sense but probably not worth the grant money blown on coming up with it). Two important developments here, however:

  • First, so much of what happens between instruction and spontaneous performance in pronunciation work is unconscious--or at least not the subject of research today. Even the focus in HICPR on the "clinical" is still a relative "outlier" in this field, although not in some related disciplines. We should be able to study that more systematically. 
  • Second, all methodologists assign a great deal of the work to the "dark side," whether they make that explicit (consciously) or not, some more than others, such as Lozanov . . . or Acton! We need to stop suppressing the use of several great techniques that have been proven by experience to work the subconscious effectively.

Would love to get ahold of some of that CFS technology and try it out with haptic anchoring of academic word list vocabulary in time for TESOL in Dallas. Just imagine the impact of a pedagogical movement pattern accompanying the "constant" image of the acronym "CFS." Hard to suppress the excitement already . . .   

Wednesday, November 14, 2012

Pronunciation change readiness: Meditate amygdala affect collar? Better pronunciation should "faller!"


This one is a bit of a stretch . . . stick with me. The impact of affect and emotion on pronunciation, both acquisition and production, is reasonably well understood--but how to manage it is not. One of the principles or assumptions has been that management of emotion should go on simultaneously with instruction, that a learner's affective state (relatively out of consciousness) tends to be pretty fragile and easily disrupted. (That certainly seems to be the case with one's "haptic state," at least. A number of studies have been reported on the blog pointing to the importance of attention management during haptic work.)

In new research by Desbordes and colleagues at Boston University (summarized by Science Daily) on the lasting impact of meditation training, it has been demonstrated that the effect of meditation on amygdala responsiveness--through two types of standard meditation work--may persist for some time, the "physical" changes to the brain being clearly evident in increased mass and activity, or lack thereof, in the targeted area.

What that means for us, in principle, is that some kind of brain "training" (or maybe an analogous neuro-therapeutic treatment) could have real promise for enhancing pronunciation change. The key here is that what is done (a) impacts general emotional responsiveness, and (b) may well be unrelated to what is considered "normal" classroom instruction, as long as it assists the learner in achieving a more "amiable" (and less hyper-reactive) amygdala. Now if that immediately strikes you as utter nonsense . . . you, yourself, may be a good candidate for a little mindful "amygdala tune-up"!

Thursday, February 23, 2012

Vowel reduction and word stress: one word at a time

Especially in pre-academic ESL/EFL instruction, the common strategy is to devote considerable instructional time to rules and patterns of word stress assignment, and attention to principles of vowel reduction. As noted in earlier posts, some of that emphasis is due to the fact that many acquire pronunciation through reading--not speaking and listening--and need good strategies for figuring out word stress and vowel reduction, especially with technical terms. With the advent of good audio sources for pronunciation, at least some of the need for essentially "phonic" decoding has been lessened.

Flege and Bohn's 1989 research, linked above, came to the striking conclusion that " . . . L2 learners acquire [word] stress placement and vowel reduction in English on a word-by-word basis." In that study, the quality of the vowels in the words being learned appeared to be impervious to alteration or enhancement once the word had been assigned meaning and use conditions. If that is indeed the case in general--and it has certainly been my experience in working with stress and vowel reduction in conversational language, especially with intermediate-level learners and above--then the key to developing accurate pronunciation, at least at the segment level, seems to be experiencing and anchoring the felt sense of a word as a whole, not as simply a token of a pattern or process.

Now the pattern of stress assignment or vowel reduction involved may generalize to other new words in some manner, but basically, those aspects of the word that are learned as part of the initial, overall configuration--which includes any number of factors in addition to those that are sound-related--are highly resistant to change later. In other words, the brain of the learner apparently grasps whatever pronunciation of the new word is immediately available or possible--and then doesn't look back much or continue trying to approximate the L2 target much further, if at all.
In a word, we do appear to learn the pronunciation of the language . . .  one word at a time. 

Tuesday, October 18, 2011

From vowel color to vocabulary recall

Common sense and marketers' and advertisers' collective wisdom suggest that color does have meaning, some of it culturally determined. As noted in the previous post, HICP assumes that the visual field also "contains" quadrants that have different emotional or experiential sensitivities. In very general terms, we associate basic colors with each quadrant: Northeast=yellow, Southeast=red, Northwest=green and Southwest=blue. Depending on where in the quadrant, in the articulatory "chart" (a mirror image of the standard IPA chart), a vowel is located, its intensity or hue may be increased or diminished accordingly. 2006 research by Spence, Wong, Rusan, and Rastegar of the University of Toronto makes a fascinating point as to when the color association must be made for maximum effectiveness.
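The quadrant/color association above is simple enough to state as data. Here is an illustrative Python sketch (the intensity scaling is my own assumption for illustration, not a specification of the HICP scheme):

```python
# Visual-field quadrants and their associated base colors,
# as described in the post above.
QUADRANT_COLORS = {"NE": "yellow", "SE": "red", "NW": "green", "SW": "blue"}

def vowel_color(quadrant, intensity):
    """Return the base color for a quadrant plus a clamped 0-1
    intensity, standing in for hue adjustment by the vowel's
    position within the articulatory chart."""
    if quadrant not in QUADRANT_COLORS:
        raise ValueError(f"unknown quadrant: {quadrant}")
    return QUADRANT_COLORS[quadrant], max(0.0, min(1.0, intensity))
```

So a vowel far out in the Northeast quadrant might come out as ("yellow", 1.0), while one near the center of the chart would be a paler ("yellow", 0.2).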

In that study, various color conditions of natural scenes are used in different timings. Essentially what they discovered was that for best recall, color had to be very focused and associated with basic features or figures of the picture immediately, and not just the overall scene. One implication for our work is that color may work best in conjunction with haptic anchoring if it is introduced "from the beginning" of the process and (probably) limited to the vowel or syllable only and not the entire word, as is the usual practice with color/vowel pedagogical practices. Remember that, next time you need to make your vowels and vocabulary work more memorable, eh! 

Monday, September 19, 2011

Thresholds in learning pronunciation

Previous posts have considered thresholds in several disciplines, including recent looks at learning to juggle and the use of hypnotic suggestion in facilitating pronunciation change. In Lessac's work there is a similar point in the 12-step process where the student has arrived at a level where a quantum leap has been achieved (the ability to perform "the call") and the voice has a new quality about it that does not follow directly from the work that has preceded it. The same experience is frequent in the development of skill with musical instruments and complex athletic skills.

Here is a study of, of all things, "critical evaluation of information resources" by upper-division undergraduates. Those who were seen as having crossed the threshold into their chosen professions, in some recognizable sense, were able to " . . . establish the authority, quality and credibility of [discipline-specific] information sources"--a remarkable, if somewhat mystical, experience.

Pronunciation change often happens as abruptly, with analogous parameters. The "authority" of a sound or word is best thought of as its place in the system or in those words where it occurs; the "quality" of the sound, its resonant and articulatory features; the "credibility," both the felt sense (haptic anchoring) and the confidence attributed to the changed sound or word. To the learner, a new pronunciation should, for the most part, just "show up," be a pleasant surprise, not be consciously integrated into spontaneous speech most of the time. For the instructor, the designer, the process and protocols must be transparent and managed. We haven't crossed that threshold yet, but we are closer.

Saturday, September 17, 2011

Learning and coaching new pronunciation and juggling

There are several "methods" for learning to juggle, just as there are many methods for helping a learner change pronunciation. Linked above is a mathematician's method that presents a fascinating parallel with the HICP/EHIEP haptic-integrated line-of-march. (Also check out Tim Murphey's sometime!) The seven-step process is very much "felt sense" based but moves systematically from conscious focus on individual movements to "automatic" performance. It involves three phases:
(1) anchoring the basic moves
(2) instructor and learner working together to integrate the moves
(3) learner "solos!"

In PHASE ONE, the learner works on the felt sense of tossing (i) one ball in one hand, (ii) one ball going back and forth between hands, and then (iii) a second ball, introduced in the hand that will catch the other ball and tossed away just before the "main" ball arrives. The haptic parallel is basically anchoring the essential movements of the target sound without attempting to coordinate them. (There are rarely more than three critical parameters.)

In PHASE TWO, the learner begins to combine features as the instructor/coach responds when needed to achieve accurate individual movements. Next, learner and coach juggle/do the sound together. In the process, the learner's attention is directed away not only from environmental distractions but also from focus on the mechanics of each parameter, which is becoming more automatic and non-conscious. The instructor/student dance does much to enable that integration.

In PHASE THREE, learners "juggle" the new sound on their own. I have not seen a better model (or metaphor) for changing pronunciation. So should you learn to juggle first or simply "juggle" your teaching? It's probably a toss up . . .

Monday, May 30, 2011

Is it "the drill" or "the thrill" in pronunciation learning?

Apparently it is the latter--when it comes to efficient learning in young children. In a remarkable study by Medina, Trueswell, and Gleitman of the University of Pennsylvania and Snedeker of Harvard University, reviewed by Science Digest (a frequent source of good links for us), it appears that new words and concepts are best learned in insightful "hot" events (i.e., teachable moments), rather than through repeated, gradual associations building up over time. Asher, in the 1960s, came to a similar conclusion in studying the effect of learning a command on the first attempt, as opposed to "getting it" gradually. He found that the faster a command was learned, the more accurately it was recalled.

That is also the essential assumption of HICP work: haptic anchoring of sounds, words and related processes (done correctly) should be consistently so vivid, engaging and attention-grabbing that what is learned is learned quicker and deeper. Turns out that the rap on mechanical, mind-numbing drill in pronunciation teaching--as not being the most cost-efficient way to learn--is closer to the truth than we had realized. "Keep in touch" with this line of research; it is very likely just the beginning of a revolution in how we think about integrated, experiential learning.