
Monday, March 15, 2021

Killing Pronunciation 15: Feelings . . . nothing more than feelings?

Finally, (what seems to me) a fascinating glimpse, or at least a different perspective, into why it can be so difficult for many language learners to remember how sounds are pronounced in their second language. Fascinating study, by Fandakova, Johnson, and Ghetti of UC-Davis: Distinct neural mechanisms underlie subjective and objective recollection and guide memory-based decision making (summarized by ScienceDaily as "Making decisions based on how we feel about memories, not accuracy"). Now I'm not sure that SD summary is entirely accurate, but it is close . . .

Exploring the brain circuits involved in recalling past events, what emerged, in essence, was the "fact" that one circuit is more responsible for something resembling data (e.g., who, what, where, when); the other, for the emotion or "feeling" associated with the event. What the research demonstrated was that recall was overwhelmingly triggered through the affective/subjective wiring, not the objective circuit(s). In other words, in some very general sense, our access to memory is substantially more emotion-based than visual/objective and data-oriented.

So, other than the fact that there may be some potential gender bias there . . . how does that relate to learning the sound system of a language effectively? Ask yourself: How do you and your students feel about learning pronunciation? Does that answer the question? For many it does. If affect or feeling is that critical to good recall, then pronunciation learned may be especially vulnerable to being inaccessible in varying degrees. 

Now the "feeling" of pronunciation could come from at least three primary sources: the affective climate of the class where it is studied; the relative engagement or appeal of the instruction to the individual, or the satisfaction it entails; or the somatic, physical sensations of what it is, mechanically, to perform or articulate the sound.

I, myself, was trained in pronunciation teaching by an amazing speech therapist and early leader in the field of TESOL. What I learned, which most pronunciation teaching does not take seriously enough or does not really focus on at all, is how to help the learner get the richest possible somatic experience (mostly tactile and kinaesthetic) of how the sound or pattern feels when it is articulated. Part of that, of course, is the metalanguage used in talking about it and, to some extent, the procedures and practice routines themselves.

In other words, without a good sense of "the feeling of how it happens" (Damasio, 1999), often it just doesn't happen or at least is not anchored adequately to be remembered or recalled efficiently. There are any number of methods or systems for establishing that critical link between the sound and the feeling of the sound, not just its conceptual, visual, auditory and orthographic features. Of course, we FEEL that haptic pronunciation teaching, founded on gesture and touch, has "got that," and more. If your pronunciation work just doesn't feel right . . . get into touch . . . with us, or your local speech therapist! 

Sources: (Cited in ScienceDaily summary)
University of California - Davis. (2021, March 10). Making decisions based on how we feel about memories, not accuracy. ScienceDaily. Retrieved March 14, 2021 from www.sciencedaily.com/releases/2021/03/210310150347.htm
Fandakova, Y., Johnson, E. G., & Ghetti, S. (2021). Distinct neural mechanisms underlie subjective and objective recollection and guide memory-based decision making. eLife, 10.


Sunday, November 1, 2020

Managing distraction in (haptic pronunciation) teaching: to block or to hype . . . or both!

New study by Udakis et al., Interneuron-specific plasticity at parvalbumin and somatostatin inhibitory synapses onto CA1 pyramidal neurons shapes hippocampal output, characterized by ScienceDaily as " . . . a breakthrough in understanding how memories can be so distinct and long-lasting without getting muddled up." Normally, I wouldn't take a shot at connecting research in basic neuroscience to haptic pronunciation teaching, but this one, describing the basic mechanisms by which some memories get stored so that they are recalled vividly later, points to a couple of principles that should underlie all instruction, not just haptic pronunciation teaching.

In essence, what were identified were two key "circuits": one that basically intensified the event, and another that served to block out distraction, or, put another way, functions to inhibit other "learning" that might cover over or undermine an experience. One interesting implication of that model is that the brain, in some sense, is "intentionally" managing distraction. Now the conditions that have to be in play for an experience to be "protected" are, of course, myriad, but the concept that highly systematic attention to distraction, not just increasing excitement or emotional engagement in a "teachable moment," is critical is worth considering.


In the comment on the earlier post on distraction, the observation was made that, at least in one program, distraction was not seen as having any relevance in instruction whatsoever. My guess is that that is the case in many systems as well. In our haptic pronunciation teaching workshops, one of the questions we must explore is how teachers explicitly and intentionally deal with in-class distractions of all kinds, but especially extraneous kinetic (movement in the room), visual (elements in the visual field of learners), auditory (any noise coming in from outside or being generated in the room), olfactory (odors), airborne (pollution, etc.), temperature fluctuations, and furniture comfort and distribution.

Any one of those can seriously undermine instruction, of course. In haptic work, which is based on systematic control of movement and gesture and utilization of the visual field, you can see how any distraction, in addition to just naturally "wandering" students' minds, can undermine the process. Consequently, we attend to ALL of them in our initial assessment of the classroom setting that learners are about to enter.

Just the use of gesture and movement synchronized with speaking will capture the attention of learners, at least temporarily mediating the surrounding potential distractions, but the idea is that in addition to learners being "captivated" by the lesson content, activities and instructor delivery, attention to or control of select environmental features may be extraordinarily important. Assuming you cannot control everything at once, I'd suggest you use our basic heuristic: adjust at least one or two intentionally each class, without letting learners know what you are up to. Then maybe do some kind of warm up, maybe not like this one of mine, but you get the idea!


Source: 

University of Bristol. (2020, September 8). Research unravels what makes memories so detailed and enduring. ScienceDaily. Retrieved November 1, 2020 from www.sciencedaily.com/releases/2020/09/200908131139.htm

Sunday, July 19, 2020

Fixing your eyes on better pronunciation--or before it!

Early on in the development of haptic pronunciation teaching, we began by borrowing a number of techniques from Observed Experiential Integration therapy, developed by Rick Bradshaw and colleagues about 20 years ago. OEI has proved to be particularly effective in the treatment of PTSD.  In OEI one of the basic techniques is the use of eye tracking, that is therapists carefully control the eye movements of patients, in some cases stopping at places in the visual field to "massage" points through various loops and depth of field tracking.

We discovered that when we attempted to control students' eye movement, having them follow with their eyes the track of the gestures across the visual field being used to anchor sounds during pronunciation work, memory for sounds seemed better, but holding attention for such extended lengths of time could be really counterproductive. In some cases, students even became slightly dizzy or disoriented after only a few minutes. (And, in retrospect, we were WAY out of our league . . . )

Consequently, attention shifted to visual focus on only the terminal point in the gestural movement where the stressed syllable of the word or phrase was located, where the hands touched. We have been using that protocol for about a decade.

Now comes a fascinating study by Badde et al., "Oculomotor freezing reflects tactile temporal expectation and aids tactile perception," summarized by ScienceDaily.com, that helps refine our understanding of the relationship between eye movement and touch in focusing attention. In essence, what the research demonstrated was that by stopping or holding eye movement just prior to when a subject was to touch a targeted object, the intensity of the tactile sensation was significantly enhanced. Or, the converse: random eye movement prior to touch tended to diffuse or undermine the impact of touch. That helps explain something . . .

The rationale for haptic pronunciation teaching is, essentially, that the strategic use of touch both successfully manages gesture and much more effectively focuses the placement of stressed syllables in words accompanying the gesture in gesture-synchronized speech. In almost all cases, the eyes focus in on the hand about to be touched, just prior to what we term the TAG (touch-activated ganglia), where touch literally "brings together" or assembles the sound, body movement, vocal resonance, and the graphic/visual schema and meaning of the word or phoneme, itself.

In other words, the momentary freezing of eye movement an instant before the touch event should greatly intensify the resulting impact and later recall produced by the pedagogical strategy. We knew it worked, just didn't really understand why. Now we do.

Put your current pronunciation system on hold for bit . . . and get (at least a bit) haptic!

Original source:
Stephanie Badde, Caroline F. Myers, Shlomit Yuval-Greenberg, Marisa Carrasco. Oculomotor freezing reflects tactile temporal expectation and aids tactile perception. Nature Communications, 2020; 11 (1) DOI: 10.1038/s41467-020-17160-1

Saturday, March 14, 2020

Pronunciation in the eyes of the beholder: What you see is what you get!

This post deserves a "close" read. Although it applies new research to exploring basics of haptic pronunciation teaching specifically, the complex functioning of the visual field, itself, and eye movement in teaching and learning, in general, is not well understood or appreciated.

For over a decade we have "known" that there appears to be an optimal position in the visual field in front of the learner for the "vowel clock" or compass used in the basic introduction to the (English) vowel system in haptic pronunciation teaching. Assuming:
  • The compass/clock below is on the equivalent of an 8.5 x 11 inch piece of paper,
  • About .5 meters straight ahead of you,
  • With the center at eye level (or of equivalent relative size on the board, wall or projector),
  • Such that, if the head does not move,
  • The eyes will be forced at times to move close to the edges of the visual field
  • To lock on or anchor the position of each vowel (some vowels could, of course, be positioned in the center of the visual field, such as schwa or some diphthongs.)
  • Add to that simultaneous gestural patterns concluding in touch at each of those points in the visual field (www.actonhaptic.com/videos)
Something like this:

  • 1. [iy] "me" (Northeast)
  • 2. [ɪ] "chicken" (Northeast)
  • 3. [ey] "may" (East, at eye level)
  • 4. [ɛ] "best" (East, at eye level)
  • 5. [ae] "fat" (Southeast)
  • 6. [a] "hot/water" (South)
  • 7. [ʌ] "love" (Southwest)
  • 8. [ɔ] "salt" (West, at eye level)
  • 9. [ow] "mow" (West, at eye level)
  • 10. [ʊ] "cook" (Northwest)
  • 11. [uw] "moo" (North)

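For readers who like a compact view, the clock/compass can be sketched as a simple lookup table in Python. The clock numbers, vowels, key words and compass directions come from the chart above; the structure itself and the function name are my own illustration, not anything from the haptic teaching materials:

```python
# Illustrative sketch only: the vowel "clock"/compass as a lookup table.
# clock position -> (vowel, key word, compass direction in the visual field)
VOWEL_CLOCK = {
    1:  ("iy", "me",        "Northeast"),
    2:  ("ɪ",  "chicken",   "Northeast"),
    3:  ("ey", "may",       "East"),
    4:  ("ɛ",  "best",      "East"),
    5:  ("ae", "fat",       "Southeast"),
    6:  ("a",  "hot/water", "South"),
    7:  ("ʌ",  "love",      "Southwest"),
    8:  ("ɔ",  "salt",      "West"),
    9:  ("ow", "mow",       "West"),
    10: ("ʊ",  "cook",      "Northwest"),
    11: ("uw", "moo",       "North"),
}

def position_of(vowel: str) -> str:
    """Return the compass direction where a vowel is anchored."""
    for _, (v, _word, compass) in VOWEL_CLOCK.items():
        if v == vowel:
            return compass
    raise KeyError(vowel)
```

Nothing hinges on the code, of course; the point is just that each vowel is anchored at a distinct, fixed location in the visual field.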
Likewise, we were well aware of previous research by Bradshaw, et al. (2016), for example, on the function of eye movement and position in the visual field related to memory formation and recall. A new study, "Eye movements support behavioral pattern completion," by Wynn, Ryan, and Buchsbaum of Baycrest's Rotman Research Institute, summarized by Neurosciencenews.com, seems (at least to me) to unpack more of the mechanism underlying that highly "proxemic" feature.

Subjects were introduced to a set of pictures of objects positioned uniquely on a video screen. In phase two, they were presented with sets of objects containing both the original and new objects, in various conditions, and tasked with indicating whether they had seen each object before. What they discovered was that in trying to decide whether the image was new or not, subjects' eye patterning tended to reflect the original position in the visual field where it was introduced. In other words, the memory was accessed through the eye movement pattern, not "simply" the explicit features of the objects, themselves. (It is a bit more complicated than that, but I think that is close enough . . . )

The study is not claiming that the eyes are "simply" using some pattern reflecting an initial encounter with an image, but that the overt actions of the eyes in recall are based on some type of storage or processing patterning. The same would apply to any input, even a sound heard or a sensation with the eyes closed, etc. Where the eyes "land" could reflect any number of internal processing phenomena, but the point is that a specific memory entails a processing "trail" evident in, or reflected by, observable eye movements--at least some of the time!

To use the haptic system as an example: in gesturing through the matrix above, not only is there a unique gestural pattern for each vowel; if the visual display is positioned "close enough" so that the eyes must also move in distinctive patterns across the visual field, you also have a potentially powerful process or heuristic for encoding and recalling sound/visual/kinesthetic/tactile complexes.

So . . . how do your students "see" the features of L2 pronunciation? Looking at a little chart on their smartphone or on a handout or at an LCD screen across the room will still entail eye movement, but of what and to what effect? What environmental "stimulants" are the sounds and images being encoded with and how will they be accessed later? (See previous blogpost on "Good looking" pronunciation teaching.)

There has to be a way, using my earlier training in hypnosis, for example, to get at learner eye movement patterning as they attempt to pronounce a problematic sound. Would love to compare "haptic" and "non-haptic-trained" learners. Again, our informal observation with vowels, for instance, has been that students may use either or both the gestural or eye patterning of the compass in accessing sounds they "experienced" there.  Let me see if I can get that study past the human subjects review committee . . .

Keep in touch! v5.0 will be on screen soon!

Source: Neurosciencenews.com (April 4, 2020). Our eye movements help us retrieve memories.


Tuesday, August 13, 2019

Why rhythm comes first in pronunciation teaching (Haptic Pronunciation Teaching Tip 63 or so!)

Rhythm, stress and intonation. There are, of course, phonaesthetic explanations as to why we list those concepts in that order, including the relative "weight" landing at the right end and the intrinsic qualities of the vowels and consonants themselves. Try saying those three out loud in different orders. Give native speakers three nonsense words of similar syllable structure and they'll typically prefer hearing the 3-syllable word last. The same applies to compound nouns and many other collocations.

I did a quick survey of a few popular pronunciation student books, checking for order of presentation and practice of those three processes, independent of the treatment of vowels and consonants. Some did introduce the processes earlier or later, but in terms of actual oral practice there was/is general agreement, at least in the relationship between stress and rhythm: work on stress comes first.

Lado and Fries (1954)         S - I - R
Prator and Robinett (1972)  S - R - I
Bowen, D. (1975)                I - S - R
Dauer, R. (1993)                  S - R - I
Miller, S. (2000)                 *S - R - I
Gilbert, J. (2012)                  S - R - I
Grant, L. (2017)                   S - R - I

Haptic pronunciation teaching (v5.0)  R - S - I

Miller (2000) probably comes closest to the Rhythm-then-Stress-then-Intonation model, even though the subtitle of the book is: Intonation, sounds (including word stress) and rhythm, echoing Bowen (1975). I taught with Bowen 1975 for several years and loved it. (Still do, in fact!) As in Lado and Fries (1954), the earlier introduction of intonation patterns always made sense, in part because we were often working from a structural perspective, with smaller clauses or sentences, as we "built up" from the bottom.

When it comes to guidance from methodologists on setting up repetition and practice of words and expressions, however, in most cases the attention initially is almost exclusively on the stressed syllable, not the rhythmic structure or tonal expression. One effect of that is possibly to "train" learners in a global rhythm that is very much analytic, yet random . . . the way anyone's processing and speech would be when the focus is just on stress and not on the overall flow and fluency of the discourse.

The new haptic pronunciation teaching system (v5.0 - available in Fall 2019) is close to Miller (2000) in approach, beginning with rhythm and then going to stress and intonation.

So, why not begin with rhythm, add the stressed syllable(s) and then the tone pattern for that thought or rhythm group? Many do, if only implicitly or inductively, using songs, poetry or verbal games initially.  More importantly, however, even at the level of requesting a simple repetition of a sentence, approaching it from an ordered perspective of R - S - I is a powerful heuristic, one basic to haptic pronunciation teaching. For example:

"He worked all day on the report."

Before learners actually say the expression or word out loud, here is how it works. We use the terms: Parse, Focus, Move --- DO! (PFMD!)
  • First, identify the rhythm grouping: (for example) He worked all day on the report. 
  • Second, identify stress assignment: (for example) He worked all DAY / on the rePORT (capitals = sentence stress)
  • Third, identify the intonation (pitch movement or non movement): Rising slightly on 'day'; falling on 'port' (with louder volume indicating sentence stress.)
  • Then (if you are doing haptic), as you say the sentence, add some type of pedagogical movement pattern/gesture (PMP) on the two stressed syllables. There are several ways that can be done, synchronizing the gesture with stressed vowels, phrasal rhythm patterns or pitch movement on the stressed vowels (intonation).
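The fixed R - S - I ordering of those steps can be sketched as data. This is a minimal illustration only; the function name, field names and layout are invented for the sketch and are not part of HaPT-Eng:

```python
# Minimal sketch of the R - S - I annotation order: rhythm grouping first,
# then stress assignment, then intonation. Illustrative names only.
def annotate_rsi(rhythm_groups, stresses, tones):
    """Build one annotation record per rhythm group, in R - S - I order."""
    annotation = []
    for group, stress, tone in zip(rhythm_groups, stresses, tones):
        annotation.append({"group": group, "stress": stress, "tone": tone})
    return annotation

# "He worked all day on the report."
plan = annotate_rsi(
    rhythm_groups=["He worked all day", "on the report"],  # Parse (R)
    stresses=["day", "port"],                              # Focus (S)
    tones=["rising slightly", "falling"],                  # Intonation (I)
)
```

The point of the sketch is simply that each rhythm group carries its own stress and tone assignment, decided in that order, before anything is said out loud.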
Our experience (in HaPT-Eng) has been that, both in terms of immediate verbal performance and memory recall for text, the order in which learners' attention is directed to the three prosodic components of the sentence, along with the accompanying pedagogical gesture, may be critical: R - S - I. And why is that? In part, it is probably because it uses gesture and touch to integrate or knit together the three features consistently.

Try that tomorrow. It'll change the way you and your students look at (and are moved by) both oral expressiveness and pronunciation.

And if you like that technique, you'll LOVE the next basic haptic pronunciation teaching webinar (hapticanar) on October 12th!






Tuesday, April 23, 2019

50+ ways to touch on and remember better pronunciation

Fascinating study by Hutmacher and Kuhbandner of the University of Regensburg (summarized by ScienceDaily.com) that helps us better understand the possibilities and potential of haptic engagement in integrated learning and recall: Long-term memory for haptically explored objects: fidelity, durability, incidental encoding, and cross-modal transfer.

In that study, blindfolded and non-blindfolded subjects were asked to consider the texture, weight and size of 168 everyday objects--by handling them. The first group was told to memorize the objects, since they would be tested later (post-test accuracy of 94%). The second group was instructed just to evaluate each item on its aesthetic qualities, without further clarification as to what that meant.

In the follow-up tests a week later, subjects (blindfolded) were given half the items, accompanied by similar items varying in only one parameter (texture, weight or size). Both groups demonstrated remarkable ability to distinguish the targeted objects (79% and 73%, respectively). The point of the study was to explore both the extent of information recall in the purely haptic condition, as opposed to the visual-haptic experience, and the relative impact across modalities.

The parallel to haptic pronunciation work is striking: identifying differences in sounds or sound patterns that are, in reality, very similar and initially difficult to both perceive and produce for the learner--based to some extent on both touch and touch plus conscious visual appreciation of the objects. 

Haptic pronunciation teaching, not surprisingly, involves extensive use of about a half dozen types of touch. If we count based on technique/type x location, there are something like 400+ actual instances of the hands touching, in various ways, various other "body parts." The ability to discriminate between types of touch appears to be the key--a valuable feature of all pronunciation teaching, but especially haptic work.

It works something like this. The targeted sound, a vowel, for example, is associated with:
  • a position in the visual field 
  • a position of one hand at that point in the visual field (at an azimuth on the compass)
  • a trajectory of the other hand from in front of the larynx (voice box) to touching the other hand that varies in terms of speed and course (straight or curved) 
  • some type of touch (See description of touch types below.) That is part of the information encoded with the sound which should contribute to production and recall. 
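Those four association parameters amount to a single record per target sound. As a sketch only: the field names below, and the example values for the low central 'a' (taken loosely from the compass above, with the touch type chosen arbitrarily from the list later in this post), are my own illustrative guesses, not specifications from the method:

```python
from dataclasses import dataclass

# Illustrative only: one way to encode the four parameters a haptic "move"
# associates with a target vowel. Field names are invented for this sketch.
@dataclass
class HapticAnchor:
    vowel: str            # the target sound, e.g. "a"
    visual_position: str  # position in the visual field (compass point)
    azimuth_deg: int      # azimuth on the compass for the anchoring hand
    trajectory: str       # course of the moving hand: "straight" or "curved"
    speed: str            # relative speed of the trajectory
    touch_type: str       # type of touch at the moment of contact

# Hypothetical example: the low central 'a' anchored at South (assumed 180°).
low_central_a = HapticAnchor(
    vowel="a",
    visual_position="South",
    azimuth_deg=180,
    trajectory="straight",
    speed="moderate",
    touch_type="hold (full hands touch; no movement)",
)
```

However it is written down, the key claim is that all of this information is encoded together with the sound itself.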
The idea, the fundamental principle of haptic pronunciation work, is that learners can more accurately recall the sound while performing the haptic "move" that accompanies it. (Research on gesture-enabled recall is compelling and extensive in several perceptual domains.) In fact, to be most effective, when corrective feedback is provided, generally the learner first sees the instructor perform the gestural move, termed a "pedagogical movement pattern," without the sound, before performing the "haptic complex" of sound plus movement and touch themselves.

Representative haptic (variable touch-plus-gesture) gesture types and visual properties involved:
  • light tap of finger tips in middle of palm 
  • hold (full hands touch; no movement) 
  • finger tips touch: then push in one direction 
  • open hand moves/rolls around fist 
  • finger nails scratch across palms 
  • light touch of ball in hand 
  • strong squeeze of ball in hand 
  • middle fingers slide from finger tips to heel of other hand 
  • finger tips tap deltoid muscle 
  • finger tips tap brachioradialis above elbows
  • feet contact with floor, either to syllable stress or heels raise on rising pitch
  • hands to various points of contact on the face, collar bones, abs, etc.
  • tongue, teeth, lips touched by wooden stick or hands to mark points of articulation
To see demonstrations of those haptic pedagogical movement patterns (PMPs) and learn more about haptic pronunciation teaching, join us at the next webinars on May 17th and 18th. For reservations: info@actonhaptic.com.

Source:
Association for Psychological Science. (2018, November 27). Touch can produce detailed, lasting memories. ScienceDaily. Retrieved April 22, 2019 from www.sciencedaily.com/releases/2018/11/181127092532.htm

Wednesday, November 14, 2018

When "clear speech" is not clear . . . or meaningful, but still instructive.

Once in a while you stumble on a study that seems, at least at first, to fly in the face of contemporary theory and methodology. This one does: "How clear speech equates to clear memory: Researchers find that a speaker's clearly articulated style can improve a listener's memory of what was said," by researchers Keerstock and Smiljanic of the University of Texas at Austin.

Actually, the title, when read correctly, does get at the reality behind oral comprehension work: the type of "clear speech" used in the study SHOULD result in "clear" memory--that is, nothing much of substance or meaning being recalled later. The results seem to confirm that, in fact.

Let me summarize it for you so you don't have to read it yourself. There is an (ironically) useful piece to the study, albeit not what the researchers intended. They head in the right direction initially but land someplace else:
  • Subjects, natives and nonnatives, heard 6 sets of 12 sentences read either in "clear" speech, in which the speaker talked slowly, articulating with great precision, or in a more casual and speedily delivered "conversational" manner. (Can't wait to see what controls they had in place in terms of every variable related to content and delivery!)
  • After hearing the 12 sentences they were given some "clues" for each sentence and then asked to write down verbatim the rest of the words in each sentence. (Since no data or protocols are provided, we must assume that the sentences were of reasonable length and vocabulary level, and as a group were probably not thematically related.) 
  • Everybody remembered more words in the "clear speech" condition. (Did the native or nonnative speakers understand the meaning better? Are the results based just on how many words were recalled? Hard to tell from the brief description of the study.)
Their conclusion (from the ScienceDaily.com summary):

"That appears to be an efficient way of conveying information, not only because we can hear the words better but also because we can retain them better."

Wow. I don't even know where to begin on that . . . so I won't, but if you are not up to speed on current thinking in L2 aural comprehension work, check out Conti's blog on that topic. I will just note that the practice of doing a precise word-by-word oral reading--and then doing the same PASSAGE of, say, 200 words or more a second time in a highly expressive frame of voice and mind--has long standing in both public speaking and "Lectio Divina" traditions. It is a proven technique, a way both to prepare for an expressive oral reading and to dig into the meaning of the text. In haptic work, that practice is fundamental as well.

But the methodology of this study has to be one of the best ways to "clear memory" of meaning and motivation imaginable!

So . . . try . . . that . . . out . . . with . . . your . . . class . . . tomorrow . . . morning . . . and . . . see . . . how . . . it . . . works! And report back.

KIT

Don't forget to sign up for the upcoming Haptic Pronunciation Training Webinars!!!


Source: 
https://www.sciencedaily.com/releases/2018/11/181105200736.htm

Friday, December 15, 2017

Object fusion in (pronunciation) teaching for better uptake and recall!

Your students sometimes can't remember what you so ingeniously tried to teach them? A new study by D'Angelo, Noly-Gandon, Kacollja, Barense, and Ryan at the Rotman Research Institute in Ontario, "Breaking down unitization: Is the whole greater than the sum of its parts?" (reported by Neurosciencenews.com), suggests an "ingenious" template for helping at least some things "click and stick" better. What you need for starters:
  • 2 objects (real or imagined) (to be fused together)
  • an action linking or involving them, which fuses them
  • a potentially tangible, desirable consequence of that fusion
The example from the research of the "fusing" protocol was to visualize sticking an umbrella in the key hole of your front door to remind yourself to take your umbrella so you won't get soaking wet on the way to work tomorrow. Subjects who used that protocol, rather than just motion or action/consequence, were better at recalling the future task. Full disclosure here: the subjects were adults, age 61 to 88. Being near dead center in the middle of that distribution, myself, it certainly caught my attention! I have been using that strategy for the last two weeks or so with amazing results . . . or at least memories!

So, how might that work in pronunciation teaching? Here's an example:

Consonant: th - (voiceless)
Objects: upper teeth, lower teeth, tongue
Fusion: tongue tip positioned between teeth as air blows out (action)
Consequence: better pronunciation of the th sound

Haptic pronunciation adds to the con-fusion

Vowel (low, central 'a'), done haptically (gesture + touch)
Objects: hands touch at waist level, as vowel is articulated, with jaw and tongue lowered in mouth, with strong, focused awareness of vocal resonance in the larynx and bones of the face.
Fusion: tongue and hand movement, sound, vocal resonance and touch
Consequence: better pronunciation of the 'a' sound

Key concept: It is not much of a stretch to say that our sense of touch is really our "fusion" sense, in that it serves as a nexus-agent for the others (Fredembach et al., 2009; Lagarde and Kelso, 2006). Much like the created image of the umbrella in the key hole evokes a memorable "embodied" event, probably even engaged with our tactile processing center(s), the haptic pedagogical movement pattern (PMP) should work in a similar manner, either in actual physical practice or visualized.

One very effective technique, in fact, is to have learners visualize the PMP (gesture+sound+touch) without activating the voice. (Actually, when you visualize a PMP it is virtually impossible to NOT experience it, centered in your larynx or voice box.)

If this is all difficult for you to visualize or remember, try first imagining yourself whacking your forehead with your iPhone and shouting "Eureka!"

Citation:
Baycrest Center for Geriatric Care (2017, August 11). Imagining an Action-Consequence Relationship Can Boost Memory. NeuroscienceNews. Retrieved August 11, 2017 from http://neurosciencenews.com/Imagining an Action-Consequence Relationship Can Boost Memory/

Saturday, February 4, 2017

Killing Pronunciation 2: "Over and under-learning"

You may have seen a report on this research on "overlearning" recently, Overlearning hyperstabilizes a skill by rapidly making neurochemical processing inhibitory-dominant, by Shibata, Sasaki, Bang, Walsh, Machizawa, Tamaki, Chang and Watanabe of Brown University. (There is a pretty readable summary on Medicalexpress.com.) According to the abstract: "Overlearning in humans abruptly changes neurochemical processing, to hyperstabilize and protect trained perceptual learning from subsequent new learning."

Wow. Some useful terms there for you: Neurochemical processing . . . hyperstabilize  . . . inhibitory-dominant . . . 

Basically, researchers examined the effect of overlearning of a visual mapping procedure on retention in one of three conditions: (a) another new learning procedure was introduced immediately, (b) a time period (3 hours) was inserted before the next procedure, or (c) the first procedure was carried out with overlearning (operationalized as going over the correct set of moves yet again and again), followed by a second new procedure.

In essence, both (b) and (c) resulted in better recall later. In other words, you can protect new learning by putting some space between that and the next piece of training--especially if the two procedures have some potential overlap of some kind, or . . . by hammering it in, so to speak.

Shibata et al. suggest that the findings probably apply to a wide range of learning contexts, while conceding that the focus on visual modality also limits applicability. More research needed, of course. But what might that imply for pronunciation teaching? A few things:
  • Some kinds of drill may work as well as we know they do--especially if anchored with gesture-plus-touch!
  • Research has long established that just "pointing out" or simple recasting (repeating back the correct pronunciation without further comment) is rarely effective. 
  • As was reported in the previous blogpost, the role of visual stimuli and distraction in moderating integration of other modalities can be problematic, at best. That is to say, the applicability of this "visual" study to embodied pronunciation may be marginal. 
  • The concept of "spacing" various procedures in pronunciation training does make sense. The behaviorists had that one figured out 60 or 70 years ago. (In fact, this possible additional empirical validation of overlearning must put a bit of a smile on the face of any "hyper-senior" researchers of the period still with us.)
  • Good trainers in virtually all physical disciplines know and practice this idea. Again, as developed in several previous blogposts, the idea of partitioning off learning has always been central to hypnosis, allowing the unconscious mind a role in the party. How you do that can vary enormously, simple waiting time being one way. 
Two possible takeaways here: (a) However you accomplish it, pronunciation learning, being the highly modality-integrated process that it is, requires--or should be followed by--uncompromised attention, some kind of processing space around it, and "full-body" armor. (b) If that is not an integral part of your method, don't be surprised if little sticks or is "uptaken"!
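The spacing principle in these takeaways can be sketched in a few lines of code. This is only an illustrative scheduler, not anything from the study itself; the drill names are mine, and the 3-hour gap simply mirrors the interval used in the study's condition (b).

```python
from datetime import datetime, timedelta

MIN_GAP = timedelta(hours=3)  # spacing interval borrowed from condition (b)

def schedule(tasks, start, overlearn=False):
    """Return (task, start_time) pairs for a practice session.

    Without overlearning, a protective gap is inserted between consecutive
    (potentially similar) tasks so the second doesn't undermine the first;
    with overlearning, back-to-back training is assumed to be safe.
    """
    plan, t = [], start
    for task in tasks:
        plan.append((task, t))
        t += timedelta(0) if overlearn else MIN_GAP
    return plan

# Two similar vowel-contrast drills (hypothetical targets):
plan = schedule(["/i/-/I/ contrast", "/e/-/ae/ contrast"],
                datetime(2017, 2, 4, 9, 0))
```

Here the second drill would not begin until noon; pass `overlearn=True` and the two can run back to back.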

If you have enough time, you can learn two tasks without interference by leaving a few hours between the two trainings

Read more at: https://medicalxpress.com/news/2017-01-overlearning.html#jCp
*With apologies, of course, to Bill O'Reilly for the use of his "killing" meme, as in his recent books on well known figures of the past, e.g., Killing Jesus, Killing Lincoln, Killing Kennedy. At least a couple of future posts will use the same "killer" title hook.

Source:
Nature Neuroscience (2017). doi:10.1038/nn.4490

  • To cement learning quickly, overlearning should help, but beware: it might interfere with similar learning that follows immediately.
  • Without overlearning, don't try to learn something similar in rapid succession; there is a risk that the second bout of learning will undermine the first.

    Saturday, January 28, 2017

    Killing pronunciation improvement: better heard (and felt) but not seen!

    Clker.com
    Fascinating study, Visual Distractors Disrupt Audiovisual Integration Regardless of Stimulus Complexity, by Gibney, et al. Department of Neuroscience, Oberlin College.

    Trigger warning: This is a thick, technical read, but the conclusions of the study have potentially important implications for pronunciation teaching, especially attempts to enhance uptake of new and corrected sounds or patterns that rely on effective integration of sounds, images, movement and vocal resonance. 

    In essence, the research examined, as the title suggests, how distractions in the visual field affected subjects' attention and their ability to learn and recall audio-visual stimuli (images on a computer screen accompanied by sounds). What was striking (again, as evident in the title) was that no matter how complex the task of associating the targeted sound with the visual image or object in focus, even the slightest distraction created on the screen, e.g., an object briefly appearing in a corner, compromised the subject's ability to integrate and recall the complex target later.

    The implications for pronunciation teaching?  Not surprisingly, attention is critical in integrating sensory information. We know that, of course. What is more interesting is the idea that any visual distraction whatsoever that occurs when sound, movement and visual imagery (such as the orthography or phonetic representation of a word or phrase) are being "integrated" may seriously  undermine the process. In other words, visual attention and eye tracking during the process may have dramatic impact. That is a "variable" that can, in principle, be managed in the classroom, although most do not consider visual distraction to be potentially that disruptive of pronunciation instruction. But it certainly can be.

    We discovered early on that in haptic pronunciation work, where not only sound, visual imagery, movement and vocal resonance are involved--but touch as well--visual distraction can seriously derail the process. This research suggests that the same effect may be present in general pronunciation work as well, especially oral work, and a significant impediment in some contexts. 

    The sterile, featureless language laboratory booth of old may have had more going for it than we thought! In early haptic work we experimented with controlling eye tracking. Perhaps it is time we revisited that idea. It certainly deserves our undivided attention.

    Original research article: Front. Integr. Neurosci., 20 January 2017 | https://doi.org/10.3389/fnint.2017.00001

    Sunday, August 28, 2016

    Great pronunciation teaching? (The "eyes" have it!)

    Clker.com
    Attention! Внимание!

    Seeing the connection between two new studies--one on the use of gesture by trial lawyers in concluding arguments and one on how a "visual nudge" can seriously disrupt our ability to describe recalled visual properties of common objects--and, by extension, their connection to pronunciation teaching may seem a bit of a stretch, but the implications for instruction, especially systematic use of gesture in the classroom, are fascinating.

    The bottom line: what the eyes are doing during pronunciation work can be critical, at least to efficient learning. I have done dozens of posts over the years on the role or impact of visual modality in pronunciation work; this adds a new perspective. 

    The first, by Edmiston and Lupyan of the University of Wisconsin-Madison, Visual interference disrupts visual knowledge, as summarized by ScienceDaily:

    "Many people, when they try to remember what someone or something looks like, stare off into space or onto a blank wall," says Lupyan. "These results provide a hint of why we might do this: By minimizing irrelevant visual information, we free our perceptual system to help us remember."

    The "why" was essentially that visual distraction during recall (and conversely in learning, we assume), could undermine ability to describe visual properties of even common well-known objects, such as the color of a flower. That is a striking finding, countering the prevailing wisdom that such properties are stored in the brain more abstractly, not so closely tied to objects themselves in recall.

    Study #2: Matoesian and Gilbert of the University of Illinois at Chicago, in an article published in Gesture entitled, Multifunctionality of hand gestures and material conduct during closing argument. The research looked at the potential contribution of gesture to the essential message and impact of the concluding argument to the jury. Not surprisingly, it was evident that the jury's visual attention to the "performance" could easily be decisive in whether the attorney's position came across as credible and persuasive. From the abstract:

    This work demonstrates the role of multi-modal and material action in concert with speech and how an attorney employs hand movements, material objects, and speech to reinforce significant points of evidence for the jury. More theoretically, we demonstrate how beat gestures and material objects synchronize with speech to not only accentuate rhythm and foreground points of evidential significance but, at certain moments, invoke semantic imagery as well. 

    The last point is key. Combine that insight with the "nudge" study. It doesn't take much to interfere with "getting" new visual/auditory/kinesthetic/tactile input. The dominance of visual over the other modalities is well established, especially when it comes to haptic (movement plus touch). These two studies add an important piece: random VISUAL input, itself, can seriously interfere with targeted visual constructs or imagery as well. In other words, what your students LOOK at, and how effective their attention is during pronunciation work, can make a difference--an enormous difference, as we have discovered in haptic pronunciation teaching.

    Whether learners are attempting to connect the new sound to the script in the book or on the board, are attempting to use a visually created or recalled script (which we often initiate in instruction), or are mirroring or coordinating their body movement/gesture with the pronunciation of a text of some size, the "main" effect is still there: what is at that time in the visual field in front of them, or in the created visual space in their brain, may strongly dictate how well things are integrated--and recalled later. (For a time I experimented with various systems of eye-tracking control myself, but could not figure out how to develop that effectively--and safely; emerging technologies, however, offer us a new "look" at that methodology in several fields today.)

    So, how do we appropriately manage "the eyes" in pronunciation instruction? Gestural work helps to some extent, but it requires more than that. I suspect that virtual reality pronunciation teaching systems will solve more of the problem. In the meantime, just as a point of departure and in the spirit of the earlier, relatively far out "suggestion-based" teaching methods, such as Suggestopedia, assume that you are responsible for everything that goes on during a pronunciation intervention (or interdiction, as we call it) in the classroom. (See even my 1997 "suggestions" in that regard as well!)

    Now I mean . . . everything, which may even include temporarily suspending extreme notions of learner autonomy and metacognitive engagement . . .

    See what I mean?

    Sources: 
    Matoesian, G. and Gilbert, K. (2016). Multifunctionality of hand gestures and material conduct during closing argument. Gesture, 15(1), 79–114.
    Edmiston, P. and Lupyan, G. (2017). Visual interference disrupts visual knowledge. Journal of Memory and Language, 92: 281. DOI: 10.1016/j.jml.2016.07.002

    Monday, December 14, 2015

    Can't see teaching (or learning) pronunciation? Good idea!

    Clker.com
    A common strategy of many learners when attempting to "get" the sound of a word is to close their eyes. Does that work for you? My guess is that those are highly visual learners who can be more easily distracted. Being more auditory-kinesthetic and somewhat color "insensitive" myself, I'm instead more vulnerable to random background sounds, movement or vibration. Research by Molloy et al. (2015), summarized by Science Daily (full citation below) helps to explain why that happens.

    In a study of what they term "inattentional deafness," using MEG (magnetoencephalography), the researchers were able to identify in the brain both the place and point at which auditory and visual processing in effect "compete" for prominence. As has been reported more informally in several earlier posts, visual consistently trumps auditory, which accounts for the common life-ending experience of  having been  oblivious to the sound of screeching tires while crossing the street fixated on a smartphone screen . . . The same applies, by the way, for haptic perception as well--except in some cases where movement, touch, and auditory team up to override visual. 

    The classic "audio-lingual" method of language and pronunciation teaching, which made extensive use of repetition and drill, relied on a wide range of visual aids and color schemas, often with the rationale of maintaining learner attention. Even the sterile, visual isolation of the language lab's individual booth may have been especially advantageous for some--but obviously not for everybody!

    What that research "points to" (pardon the visual-kinesthetic metaphor) is more systematic control of attention (or inattention) to the visual field in teaching and learning pronunciation. Computer-mediated applications go to great lengths to manage attention but, ironically, forcing the learner's eyes to focus or concentrate on words and images, no matter how engaging, may, according to this research, also function to negate or at least lessen attention to the sounds and pronunciation. Hence, the intuitive response of many learners to shut their eyes when trying to capture or memorize sound. (There is, in fact, an "old" reading instruction system called the "Look up, say" method.)

    The same underlying, temporary "inattention deafness" also probably applies to the use of color associated with phonemes--or even the IPA system of symbols in representing phonemes. Although such visual systems do illustrate important relationships between visual schemas and sound that help learners understand the inventory of phonemes and their connection to letters and words in general, in the actual process of anchoring and committing pronunciation to memory, they may in fact diminish the brain's ability to efficiently and effectively encode the sound and movement used to create it.

    The haptic (pronunciation teaching) answer is to focus more on movement, touch and sound, integrating those modalities with visual. The conscious focus is on gesture terminating in touch, accompanied by articulating the target word, sound or phrase simultaneously with resonant voice. In many sets of procedures (what we term protocols), learners are instructed to either close their eyes or focus intently on a point in the visual field as the sound, word or phrase to be committed to memory is spoken aloud.

    The key, however, may be just how you manage those modalities, depending on your immediate objectives. If it is phonics, then connecting letters/graphemes to sounds with visual schemas makes perfect sense. If it is, on the other hand, anchoring or encoding pronunciation (and possibly recall as well), the guiding principle seems to be that sound should be best heard (and experienced somatically, in the body) . . . but (to the extent possible) not seen!

    See what I mean? (You heard it here!)

    Full citation:
    Molloy, K., Griffiths, T., Chait, M., and Lavie, N. (2015). Inattentional Deafness: Visual Load Leads to Time-Specific Suppression of Auditory Evoked Responses. Journal of Neuroscience, 35(49): 16046–16054.

    Wednesday, October 7, 2015

    Great memory for words? They're probably out of their heads!

    Perhaps the greatest achievement of neuroscience to date has been to repeatedly (and empirically) confirm common sense. That is certainly the case with teaching or training. Here's a nice one.

    Clipart: Clker.com
    For a number of reasons, the potential benefit of speaking a word or words out loud and in public when you are trying to memorize or encode it--rather than just repeating it "in your head"--is not well understood in language teaching. For many instructors and theorists, the possible negative effects on the learner of speaking in front of others and getting "unsettling" feedback far outweigh the benefits. (There is, of course, a great deal of research--and centuries of practice--supporting the practice of repeating words out loud in private practice.)

    In what appears to be a relatively elegant and revealing (and also common-sense-confirming) study, Lafleur and Boucher of the University of Montreal, summarized by ScienceDaily (full citation below), explored the conditions under which subsequent memory for words is best: (a) saying the word to yourself "in your head", (b) saying it to yourself in your head while moving your lips, (c) saying it to yourself as you speak it out loud, and (d) saying the word out loud in the presence of another person. The last condition was substantially the best; (a) was the weakest.

    The researchers do speculate as to why that should be the case. (ScienceDaily.com quoting the original study):

    "The production of one or more sensory aspects allows for more efficient recall of the verbal element. But the added effect of talking to someone shows that in addition to the sensorimotor aspects related to verbal expression, the brain refers to the multisensory information associated with the communication episode," Boucher explained. "The result is that the information is better retained in memory."


    The potential contribution of interpersonal communication as context information to memory for words or experiences is not surprising. How to use that effectively and "safely" in teaching is the question. One way, of course, is to ensure that the classroom setting is both as supportive and nonthreatening as possible. Add to that a social experience with others that also helps to anchor the memory better.

    Haptic pronunciation teaching is based on the idea that instructor-student and student-student communication about pronunciation must be both engaging and efficient--and resonantly and richly spoken out loud. (Using systematic gesture does a great deal to make that work. See v4.0 later this month for more on that.)

    I look forward to hearing how that happens in your class or your personal language development. If that thread gets going, I'll create a separate page for it. 

    Keep in touch!

    Citation:
    University of Montreal. "Repeating aloud to another person boosts recall." ScienceDaily, 6 October 2015.

    Tuesday, January 20, 2015

    Don't look now: Recall, rapport and (haptic) pronunciation teaching

    Nice study by Nash, Nash, Morris and Smith (summarized by Science Daily), titled "Does rapport-building boost the eyewitness eyeclosure effect in closed questioning?" (See full citation below.) 
    Photo credit:
    Clker.com

    Many appear to close their eyes to help remember. Research in several fields has looked at the impact of eye closure. One of the persistent puzzles has been why there should be such variability in subject response, whether in hypnosis or, as in this study, witness recall of events. One hypothesis has been that rapport with the interviewer or researcher is critical. Here is the bottom line from the authors: 

    "It is clear from our research that closing the eyes and building rapport help with witness recall . . . Although closing your eyes to remember seems to work whether or not rapport has been built beforehand, our results show that building rapport makes witnesses more at ease with closing their eyes. That in itself is vital if we are to encourage witnesses to use this helpful technique during interviews."

    I have for decades (more or less randomly) asked students to use eye closure when trying to "anchor" or recall pronunciation. By anchoring I mean using a gesture culminating in touch of both hands on the stressed syllable of a word or phrase. When doing that, in general, the eyes tend to follow the hands, to some degree controlling attention in the visual field. There are, occasionally, learners who seem to be better at anchoring with eyes closed as well. (I have worked with a few blind students and, once guided through the gestures of the system, they do at least as well as the sighted, if not better.) 

    Another aspect of the process that I have not always attended to well is what I'd call "daily rapport," something closer to what is used in the study, working quickly to get relaxed, comfortable attention in the class before getting back to the heavy lifting.

    Going to begin taking a second look at eye closure during directed recall in our work and the requisite level of rapport to enhance it. 

    An "eye opening" piece of research, eh! 

    Full citation:
    University of Surrey. (2015, January 16). Closing your eyes boosts memory recall, new study finds. ScienceDaily. Retrieved January 20, 2015 from www.sciencedaily.com/releases/2015/01/150116085606.htm

    Tuesday, July 22, 2014

    Stop using excessive repetition in pronunciation teaching! (Especially if your student almost gets it right the first time!)

     "Words, words, words." (Hamlet)

    There is probably no topic more controversial in pronunciation teaching than the role of repetition in learning and change. Key in "repetition in pronunciation teaching" into Google and you get about 1,000,000 hits. Educated opinion ranges from "use only sparingly and strategically, if at all" to highly sophisticated routines with multiple repetitions.

    Applicability of repetition of language forms varies greatly, in differing forms and with different learner populations. The operating principle may, in fact, be--to paraphrase an old pop song--neither "too much repetition" nor "not quite enough."

    The former injunction, to use repetition sparingly in at least some contexts, is seemingly supported by a 2014 study by Reagh and Yassa of the University of California-Irvine (summarized by Science Daily) in which repeated viewing of pictures seemed to " . . . increase factual recall but actually hindered subjects' ability to reject similar "imposter" pictures. This suggests that the details of those memories may have been shaken loose by repetition." Their model, Competitive Trace Theory, also is said to postulate that " . . . details of a memory become more subjective the more they're recalled and can compete with bits of other similar memories."

    Now granted, that study focused only on repeated viewing of pictures, rather than oral (or haptic) repetition. What that does at least in part explain, however, is why repetition may not only be ineffective at times but possibly counterproductive, downgrading even further the memory of the target sound, word or phrase. In cases where there is a competing or "dangerously similar" L1 or L2 sound, word or phrase in the neighbourhood, either phonologically or semantically, the effect may be significant.

    (Recall that Asher's 1970s pre-Total Physical Response research was, in part, based on the concept that the fewer the number of repetitions when a word is learned for the first time, the better the chances of it being remembered.)

    There are any number of approaches to effective repetition in pronunciation teaching, depending on what is being learned and when. If just articulation of a specific sound is the purpose, multiple, rapid repetition may be in order. If, on the other hand, the pronunciation of new or "repaired" vocabulary is the goal, then the effect alluded to by Reagh and Yassa may be in operation: the "uniqueness" of the target being hammered off or dulled.

    In EHIEP work we generally try to limit the number of repetitions of words or short phrases to 3x, and even then requiring as much intense "full body" engagement as possible, accompanied by haptic anchoring--movement and touch on a stressed syllable.
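    A rough sketch of that repetition cap, for the programmatically inclined: the function below generates at most three anchored repetitions per target, each pairing the spoken form with a gesture terminating in touch on the stressed syllable. The names and step wording here are mine, not from the EHIEP/AHEPS materials.

```python
REP_LIMIT = 3  # EHIEP-style cap on repetitions per target

def anchored_reps(target, stressed_syllable):
    """Yield at most REP_LIMIT anchored-repetition steps for one target."""
    for i in range(1, REP_LIMIT + 1):
        yield (i, f"say '{target}' with full-body engagement; "
                  f"touch hands on stressed syllable '{stressed_syllable}'")

steps = list(anchored_reps("banana", "na"))  # never more than 3 steps
```

    The point of the cap, per the Reagh and Yassa discussion above, is that repetitions beyond a few may dull the "uniqueness" of the target rather than strengthen it.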

    Coming soon!
    AHEPS v3.0 Bee & Butterfly
    (Artist: Anna Shaw)
    Repetition, like all aspects of instructional design, must be intentional, meaningful and developmentally appropriate. Working 1x1, as in tutoring, that is more manageable. At the class level or during independent study, however, it is another question entirely.

    Just ask Zig Ziglar: “Repetition is the mother of learning, the father of action, which makes it the architect of accomplishment.” 




    Wednesday, October 24, 2012

    Sound-grapheme nexus: why 'haptic' works!

    Credit: jp41.com
    The research on why haptic integration in pronunciation work should facilitate encoding and recall is substantial. A good example is the study of learning sounds related to a set of Japanese characters, by Gentaz and colleagues at the Université de Savoie, summarized by Science Daily. Their conclusion: "When visual stimuli can be explored both visually and by touch, adults learn arbitrary associations between auditory and visual stimuli more efficiently." The same team had earlier done similar research with children as beginning readers. Earlier posts have also examined the intervening variables that may compromise that effectiveness, such as other visual or auditory clutter, imprecise haptic anchoring, and certain types of repeated touch which in effect cancel out earlier anchoring. Haptic integration in EHIEP work is, of course, not a "no-brainer," but it is a very powerful, "hand-eye" tool!