Showing posts with label distraction. Show all posts

Tuesday, February 28, 2023

Using gesture and movement to avoid "Pop Outs" in (pronunciation) teaching!

I like this study. One of the biggest obstacles in effective teaching (of anything) is sudden distraction, when what should have "popped in" easily in a lesson . . . doesn't . . . because of what just "popped out or up." Interesting piece of research by Klink et al. on visual distraction--and a potential strategy for dealing with it--summarized by Neurosciencenews.com as Trained Brains Rapidly Suppress Visual Distractions. The title of the original study, published in PNAS: Inversion of pop-out for a distracting feature dimension in monkey visual cortex. (Ignore that term "monkey" in the original there!)

In essence, the "subjects" were trained as follows (from the summary):

"The researchers trained monkeys to play a video game in which they searched for a unique shape among multiple items, while a uniquely colored item tried to distract them. As soon as the monkeys found the unique shape, they made an eye movement to it to indicate their choice. After some training, monkeys became very good at this game and almost never made eye movements to the distractor."

So what is a potential application of that "discovery" in teaching? What visual distractions are your students subject to in the classroom? On a task by task basis, how do you maintain student attention to the focus of the activity? 

For example, in haptic pronunciation teaching, instructor and students do a great deal of repeating words, phrases, sentences and dialogues together (not repeating after) while using speech-synchronized gestures continuously. In this choreographed technique, using what we call "movement, tone and touch techniques" (MT3s), it is essential that instructor and student gesturing be constantly synchronized, throughout. You can "SEE" just how disruptive a visual distraction in the room, in the visual fields of students, could be. 

On the flip side, however, you can also "SEE" how MT3 training, itself--or even typical gesture use in teaching or communication, whether designed or impromptu, can, in principle, serve to enhance general visual attention in the classroom. 

How free of distraction or immune to it is the visual field in your classroom? Can you manage it better, more "movingly?" 






Source: Klink, P., Teeuwen, R., Lorteije, J., & Roelfsema, P. (2023). Inversion of pop-out for a distracting feature dimension in monkey visual cortex. PNAS, February 22, 2023. https://doi.org/10.1073/pnas.2210839120

Sunday, November 1, 2020

Managing distraction in (haptic pronunciation) teaching: to block or to hype . . . or both!

New study by Udakis et al., Interneuron-specific plasticity at parvalbumin and somatostatin inhibitory synapses onto CA1 pyramidal neurons shapes hippocampal output, characterized by Science Daily as " . . . a breakthrough in understanding how memories can be so distinct and long-lasting without getting muddled up." Normally, I wouldn't take a shot at connecting research in basic neuroscience to haptic pronunciation teaching, but this one, describing the basic mechanisms by which some memories get stored so that they are recalled vividly later, points to a couple of principles that should underlie all instruction, not just haptic pronunciation teaching. 

In essence, what were identified are two key "circuits": one that intensifies the event, and another that serves to block out distraction--or, put another way, functions to inhibit other "learning" that might cover over or undermine an experience. One interesting implication of that model is that the brain, in some sense, is "intentionally" managing distraction. Now the conditions that have to be in play for an experience to be "protected" are, of course, myriad, but the concept that highly systematic attention to distraction--not just increased excitement or emotional engagement in a "teachable moment"--is critical is worth considering. 

Clker.com

In the comment on the earlier post on distraction, the observation was made that, at least in one program, distraction was not seen as having any relevance to instruction whatsoever. My guess is that that is the case in many programs as well. In our haptic pronunciation teaching workshops, one of the questions we must explore is how teachers explicitly and intentionally deal with in-class distractions of all kinds, but especially extraneous kinetic (movement in the room), visual (elements in the visual field of learners), auditory (any noise coming in from outside or being generated in the room), olfactory (odors), airborne (pollution, etc.), temperature fluctuations, and furniture comfort and distribution. 

Any one of those can seriously undermine instruction, of course. In haptic work, which is based on systematic control of movement and gesture and utilization of the visual field, you can see how any distraction, in addition to just naturally "wandering student minds," can undermine the process. Consequently, we attend to ALL of them in our initial assessment of the classroom setting that learners are about to enter. 

Just the use of gesture and movement synchronized with speaking will capture the attention of learners, at least temporarily mediating the surrounding potential distractions, but the idea is that in addition to learners being "captivated" by the lesson content, activities and instructor delivery, attention to or control of select environmental features may be extraordinarily important. Assuming you cannot control everything at once, I'd suggest you use our basic heuristic: adjust at least one or two intentionally each class--without letting learners know what you are up to. Then maybe do some kind of warm up, maybe not like this one of mine, but you get the idea!


Source: 

University of Bristol. (2020, September 8). Research unravels what makes memories so detailed and enduring. ScienceDaily. Retrieved November 1, 2020 from www.sciencedaily.com/releases/2020/09/200908131139.htm

Sunday, October 18, 2020

Good, or at least less "distracting" distraction in (pronunciation) teaching

Now here is some "different" research from the Journal of Food Science Education and the journal Perception that you may have missed (summarized by Science Daily). The first, by Schmidt of the University of Illinois at Urbana-Champaign, is titled Distracted learning: Big problem and golden opportunity; the second, by Hipp, Olsen and Gerhardstein, is Mind-Craft: Exploring the Effect of Digital Visual Experience on Changes to Orientation Sensitivity in Visual Contour Perception.

In pronunciation teaching, and especially so in haptic work, distraction can be lethal, depending on which modality it is coming through! Dealing with it is always high priority. We manage distraction and attention several ways, but principally with gesture, touch and management of the visual field. 

Schmidt's report reviews research on sources of potential distraction evident in the multitasking world of today and then considers a number of potentially effective measures for addressing them. Hipp et al. examine an intriguing phenomenon in which the brain/eyes are seen adapting in surprising ways to the visual digital milieu, especially the shifting among different environments that we engage in today. Taken together, the two studies seem to suggest that, probably for a number of reasons, distraction is emerging as a much more complex and variable phenomenon in the experience of those who have "grown up" in that milieu than we often assume. 

In other words, the impact of disruptive elements on learning and teaching--and consequently the potential effectiveness of mediation procedures--needs to be reconsidered. Listed below, paraphrased and reorganized into three categories, are the recommendations from Schmidt's study: 

Pre-Conditions:
  • Removing extraneous devices from workspaces
  • Incorporating movement into classroom activities
  • Promoting and implementing active learning
  • Using a work-reward system
Classroom protocols: 
  • Alternating intensive periods of focused work with preplanned bursts of pleasure
  • Developing course content on topics of students' choosing 
  • Having them teach it to other students
Cognitive and meta-cognitive:
  • Encouraging development of internal locus of control
  • Fostering a work-hard, play-hard mindset
  • Encouraging setting of goals related to academic performance 
Nothing there is, in itself, surprising, of course. But taken together, and reconsidered as a fuller set of strategies that may, in combination, work to moderate distraction--as a more primary/preliminary target of instruction with today's learners and their evolving attentional systems--the set is worth "attending to!" 

Bottom line: The impact of both distraction and of those mediation strategies on "native media-ites"--those who have grown up in computer-mediated experience (and devices), probably those now in their mid-to-late 20s or younger--may be evolving or emerging in new forms. In other words, multitasking for those learners is apparently becoming experientially and phenomenologically different than it is for earlier "pre-media" generations: they seem to be adapting in ways such that they can be both less . . . distracted and, consequently, more amenable to pedagogical mediation. 

In a subsequent post, I'll continue this thread exploring specific mediations that apply to (haptic) pronunciation teaching. 

Sources: 

Shelly J. Schmidt. Distracted learning: Big problem and golden opportunity. Journal of Food Science Education, 2020; 19 (4): 278 DOI: 10.1111/1541-4329.12206

University of Illinois at Urbana-Champaign, News Bureau. (2020, October 14). Distracted learning a big problem, golden opportunity for educators, students. ScienceDaily. Retrieved October 17, 2020 from www.sciencedaily.com/releases/2020/10/201014140932.htm

D. Hipp, S. Olsen, P. Gerhardstein. Mind-Craft: Exploring the Effect of Digital Visual Experience on Changes to Orientation Sensitivity in Visual Contour Perception. Perception, 2020; 030100662095098 DOI: 10.1177/0301006620950989

Binghamton University. (2020, September 30). Screen time can change visual perception -- and that's not necessarily bad. ScienceDaily. Retrieved October 17, 2020 from www.sciencedaily.com/releases/2020/09/200930144422.htm

Saturday, May 2, 2020

Killing pronunciation 12: Memory for new pronunciation: Better heard (or felt) but not seen!

Another in our series of practices that undermine effective pronunciation instruction!

(Maybe) bad news from visual neuroscience: You may have to dump those IPA charts, multi-colored vowel charts, Technicolor X-rays of the inside of the mouth, dancing avatars--and even haptic vowel clocks! Well . . . actually, it may be better to think of those visual gadgets as something you use briefly in introducing sounds, for example, but then dispose of them or conceptually background them as quickly as possible.

New study by Davis et al. at the University of Connecticut, Making It Harder to "See" Meaning: The More You See Something, the More Its Conceptual Representation Is Susceptible to Visual Interference, summarized by Neurosciencenews.com, suggests that visual schemas of vowel sounds, for example, could be counterproductive--unless, of course, you close your eyes . . . but then you can't see the chart in front of you. 

Subjects were basically confronted with a task in which they had to try to recall a visual image, physical sensation or sound while being presented with visual activity or images in their immediate visual field. The visual "clutter" interfered substantially with their ability to recall the other visual "object" or image, but it did not impact their recall of "images" in other sensory modalities (auditory, tactile or kinesthetic), such as non-visual concepts like volume, heat or energy.

We have had blogposts in the past that looked at research where it was discovered that it is difficult to "change the channel": if a student is mispronouncing a sound, many times just trying to repeat the correct sound instead, without introducing a new sensory or movement set to accompany the new sound, is not effective. In other words, an "object" in one sensory modality is difficult to just "replace"; you must work around it, in effect, attaching other sensory information to it (cf. multi-modal or multi-sensory instruction).

So, according to the research, what is the problem with a vowel chart? Basically this: the target sound may be primarily accessed through the visual image, depending on the learner's cognitive preferences. I only "know" or suspect that from years of tutoring and asking students to "talk aloud" through their strategies for remembering the pronunciation of new words. It is overwhelmingly by way of the orthographic representation, the "letter" itself, or its place in a vowel chart or listing of some kind. (Check that out yourself with your students.)

So . . . what's the problem? If your "trail of bread crumbs" back to a new sound in memory is through a visual image of some kind, then if you have any clutter in your visual field that is the least bit distracting as you try to recall the sound, you are going to be much less efficient, to put it mildly. That doesn't mean you can't teach using charts, etc., but you'd better be engaging more of the multisensory system when you do, or your learners' access to those sounds may be very inefficient at best--or downgrade the charts' importance in your method appropriately. 

In our haptic work we have known for a decade that our learners are very susceptible to being distracted by things going on in their visual field that pull their attention away from experiencing the body movement and "vibrations" in targeted parts of their bodies. Good to see "new-ol' science" is catching up with us!

I've got a feeling Davis et al are on to something there! I've also got a feeling that there are a few of you out there who may "see" some issues here that you are going to have to respond to!!!




Sunday, August 12, 2018

Feeling distracted, distant or drained by pronunciation work? Don't be downcast; blame your smartphone!

It all makes sense now. I knew there had to be more (or less) going on when students are not thoroughly engaged or seemingly not attentive during pronunciation teaching, mine, especially. Two new studies, taken together, provide a depressing picture of what we are up against, but also suggest something of an antidote as well.

Tigger warning: This may be perceived as slightly more fun than (new/old) science. 

The first, summarized by ScienceDaily.com, is Dealing with digital distraction: Being ever-connected comes at a cost, studies find, by Dwyer and Dunn of The University of British Columbia. From the summary:

"Our digital lives may be making us more distracted, distant and drained . . . Results showed that people reported feeling more distracted during face-to-face interactions if they had used their smartphone compared with face-to-face interactions where they had not used their smartphone. The students also said they felt less enjoyment and interest in their interaction if they had been on their phone."

What is most interesting or relevant about the studies reported, and the related literature review, is the focus on the impact of smartphone use prior to what should be quite meaningful f2f interaction--either dinner or what should have been a more intimate conversation--THE ESSENCE OF EFFECTIVE PRONUNCIATION AND OTHER FORMS OF INSTRUCTION! Somehow the digital "appetizer" made the meal and the interpersonal interaction . . . well . . . considerably less appetizing.

Why should that be the case? The research on the multiple ways in which digital life can be depersonalizing and disconnecting is extensive and persuasive, but there is maybe something more "at hand" here.

A second study--which caught my eye as I was websurfing on the iPhone in the drive-through lane at Starbucks--dealt with what seem to be similar effects produced by "bad" posture, specifically studying something with head bowed, as opposed to doing the same with the text at eye level, with optimal posture: "Do better in math: How your body posture may change stereotype threat response" by Peper, Harvey, Mason, and Lin of San Francisco State University, summarized in NeuroscienceNews.com. 

Subjects did better and felt better if they sat upright and relaxed, as opposed to looking down at the study materials, a posture which, according to the authors, " . . . is a defensive posture that can trigger old negative associations."

So, add up the effect of those two studies and what do you get? Lousy posture AND digital, draining distraction. Not only do my students use smartphones WITH HEAD BOWED up until the moment class starts, but I even have them do more of it in class! 

Sit up and take note, eh!

Citations:
American Psychological Association. (2018, August 10). Dealing with digital distraction: Being ever-connected comes at a cost, studies find. ScienceDaily. Retrieved August 12, 2018 from www.sciencedaily.com/releases/2018/08/180810161553.htm

San Francisco State University (2018, August 4). Math + Good Posture = Better Scores. NeuroscienceNews. Retrieved August 4, 2018 from http://neurosciencenews.com/math-score-posture-9656/


Saturday, July 28, 2018

Mesmerizing teaching (and pronunciation teachers)


The topics of attention salience and unconscious learning have come up any number of times over the history of this blog, beginning with one of my favorites on that subject back in 2011, on Milton Erickson. In part because of the power of media today and the "discoveries" by neuroscience that we do, indeed, learn on many levels, some out of our immediate awareness, there is renewed interest in the topics--even from Starbucks!

A fascinating new book (to me at least) by Ogden, Credulity: A Cultural History of US Mesmerism, summarized by Neuroscience News, explores the history of "Mesmerism" and a bit about its contemporary manifestations. (QED . . . if you were not aware that it is still with us!) Ogden is most interested in understanding the abiding attraction of purposeful manipulation or management of unconscious communication, attention and learning. One fascinating observation, from the Neuroscience News summary, is:

" . . . that one person’s power of suggestion over another enables the possibility of creating a kind of collaborative or improvisational performance, even unintentionally without people setting it up on purpose."

Get that? ". . . collaborative or improvisational performance . . ." created "unintentionally." Are you aware that you promote or do any of that in your classroom? If you are, great; if not, great--but is that not also an interesting characterization of the basis of interaction in the language teaching classroom, especially where the focus is modeling, corrective feedback and metacognitive work in pragmatics and usage? In other words, suggestion is at the very heart of instructor-student engagement in some dimensions of the pedagogical process. Unconscious learning and relational affinities were for some time contained in Chomsky's infamous "black box," but are now the subject of extensive research in neuroscience and elsewhere.

And there are, of course, any number of factors that may affect what goes on "below decks," as it were. Turns out there is (not surprisingly) even a well-established gender dimension or bias to unconscious learning as well. Ya think? A 2015 study by Ziori and Dienes, summarized by Frontiers in Psychology.org, highlights a critical feature of that cognitive process, keyed or confounded by the variable of "attentional salience."

In that study, "Facial beauty affects implicit and explicit learning of men and women differently," the conscious and unconscious learning of men was significantly downgraded when the task involved analyzing language associated with the picture of a beautiful woman. Women, on the other hand, actually did BETTER in that phase of the study. The beautiful face did not distract them in the least; it seemed, in fact, to further concentrate their cognitive processing of the linguistic puzzle.

Now, exactly why that is the case the researchers can only speculate. For example, it may be that men are programmed to tend to see a beautiful woman initially more as "physically of interest," whereas women may see or sense first a competitor, which actually sharpens their processing of the problem at hand. It was very evident, however, that what is termed "incentive salience" had a strong impact, or at least siphoned off cognitive processing resources . . . for the boys.

There are many dimensions of what we do in instruction that are loaded with "incentive salience": fun or stimulating stuff that we suppose will, in essence, attract attention or stimulate learners to at least wake up so we can do something productive. Pronunciation instruction is filled with such gimmicks and populated by a disproportionate number of former cheerleaders and "dramatic personae." The combination of unconscious connectivity and "beautiful" techniques may actually work against us.

In haptic work we figured out about a decade ago that not only how you look but also what you wear can impact the effectiveness of mirroring of instructor gesture in class. The fact that I am old and bald may account for the fact that students find me easier to follow than some of my younger associates? Take heart, my friends: the assumed evolutionary advantage of "beautiful people" may not only be waning, but may actually be working against them, in the pronunciation classroom at least! 



Thursday, May 24, 2018

Paying attention to paying attention! Or else . . . !

Two very accessible, useful blogposts--primers by Mike Hobbis, PhD student in neuroscience @UCL--on attention in teaching are worth a read: one on why there should be more research on attention in the classroom, and a second, which I like a lot, on attention as an effect, not just a cause.

Hobbis' basic point is that attention should be more the "center of attention" in methodology and research today than it is. Why it isn't is a really good question. In part, there are just so many other things to "attend to" . . .

I was really struck by the fact that I, too, still tend to use attention more as a cause than an effect, meaning: if students are not paying attention in some form, my lesson plan or structure cannot possibly be at fault; it is probably the continuous "laptopping" during class or lack of sleep on their parts. The research on the impact of multitasking at the keyboard in school, across a whole range of subjects and tasks, is extensive . . . and inconclusive--except in teaching pronunciation, where, as far as I can determine, there is none. (If you know of some, PLEASE post the link here!)

There is, of course, a great deal of research on paying attention to pronunciation per se, from various perspectives, such as Counselman 2015, on "forcing" students to pay attention to their pronunciation and its variance from a model. But the extent to which variable attention alone contributes to the overall main effect is not pulled out in any study that I have been able to find.

Now I am not quite at Counselman's level of "forcing" attention, either by totally captivating instruction or by capturing the attention and holding it hostage along the way, but Hobbis makes a very good point in the two blogposts: attention must work in both directions, if not simultaneously then at least systematically. In haptic pronunciation work--or most pronunciation teaching, for that matter--the extensive use of gesture alone should function at both levels. The same applies to any movement-enhanced methodology such as TPR (Total Physical Response) or mind-body interplay, as in Mindfulness training. The question, of course, is how mindful and intentional our methodology is.

There has been a resurgence of attention to attention in the last decade in a number of sub-disciplines in neuroscience as well. Have you been paying attention--either to the research or in your classroom? If so, share that w/us, too! (The next blogpost will focus on the range of attention-driven, neuroscience-grounded best practice classroom techniques.) Join that conversation. You have our attention!




Monday, March 26, 2018

What you see is what you forget: pronunciation feedback perturbations

Tigger warning*: This blogpost concerns disturbing images, perturbations, during pronunciation work.

In some sense, almost all pronunciation teaching involves some type of imitation and repetition of a model. A key variable in that process is always feedback on our own speech, how well it conforms to the model presented, whether coming to us through the air or perhaps via technology, such as headsets--in addition to the movement and resonance we feel in our vocal apparatus and bone structure in the head and upper body.  Likewise, choral repetition is probably the most common technique, used universally. There are, of course, an infinite number of reasons why it may or may not work, among them, of course, distraction or lack of attention.

We generally do not, however, take all that seriously what is going on in the visual field in front of the learner while engaged in repetition of L2 sounds and words. Perhaps we should. In a recent study by Liu et al., Auditory-Motor Control of Vocal Production during Divided Attention: Behavioral and ERP Correlates, it was shown that differing amounts of random light flashes in the visual field affected the ability of learners to adjust the pitch of their voice to the model being presented for imitation. The research was done in Chinese, with native Mandarin speakers attempting to adjust the tone patterns of words presented to them, along with the "light show." They were instructed to produce the models they heard as accurately as possible.

What was surprising was the degree to which visual distraction (perturbation) seemed to directly impact subjects' ability to adjust their vocal production pitch in attempting to match the changing tone of the models they were to imitate. In other words, visual distraction was (cross-modally) affecting perception of change and/or subsequent ability to reproduce it. The key seems to be the multi-modal nature of working memory itself. From the conclusion: "Considering the involvement of working memory in divided attention for the storage and maintenance of multiple sensory information  . . .  our findings may reflect the contribution of working memory to auditory-vocal integration during divided attention."

The research was, of course, not looking at pronunciation teaching, but the concept of management of attention and the visual field is central to haptic instruction, in part because touch, movement and sound are so easily overridden by visual stimuli or distraction. Next time you do a little repetition or imitation work, figure out some way to ensure that working-memory perturbation by what is around learners is kept to a minimum. You'll SEE the difference. Guaranteed.

Citation:
Liu Y, Fan H, Li J, Jones JA, Liu P, Zhang B and Liu H (2018) Auditory-Motor Control of Vocal Production during Divided Attention: Behavioral and ERP Correlates. Front. Neurosci. 12:113. doi: 10.3389/fnins.2018.00113

*The term "Tigger warning" is used on this blog to indicate potentially mild or nonexistent emotional disruption that can easily be overrated. 

Saturday, March 3, 2018

Attention! The "Hocus focus" effect on learning and teaching

"We live in such an age of chatter and distraction. Everything is a challenge for the ears and eyes" (Rebecca Pidgeon)  "The internet is a big distraction." (Ray Bradbury)


There is a great deal of research examining the apparent advantage that children appear to have in language learning, especially pronunciation. A broad research base is also gradually accumulating on another continuum: that of young vs. "mature" adult learning in the digital age. Intriguing piece by Nir Eyal, posted at one of my favorite occasional light reads, Businessinsider.com, entitled Your ability to focus has probably peaked: here's how to stay sharp.

The piece is based in part on The Distracted Mind: Ancient Brains in a High-Tech World by Gazzaley and Rosen. One of the striking findings of the research reported, other than the fact that your ability to focus intently apparently peaks at age 20, is that there is actually no significant difference in focusing ability between those in their 20s and someone in their 70s. What is dramatically different, however, is one's susceptibility to distraction. Just like the magician's "hocus pocus" use of distraction, in a very real sense, it is our ability to not be distracted that may be key, not our ability to simply focus our attention however intently on an object or idea. It is a distinction that does make a difference.

The two processes, focusing and avoiding distraction, derive from different areas of the brain. As we age, or in some neurological conditions emerging from other causes such as injury or trauma, it may get more and more difficult to keep incoming information or perceptions from intruding on our thinking. Our executive functions become less effectual. Sound familiar? 

In examining the effect of distraction on subjects of all ages as they focused on remembering targeted material, being confronted with a visual field filled with various photos of people or familiar objects, for example, was significantly more distracting than closing one's eyes (which was itself only slightly better); a plain visual field of one color, with no pattern, was the most enabling condition for the focus task. In other words, clutter trumps focus, especially with time. Older subjects were significantly more distracted in all three conditions, but were still able to focus better in the last, least cluttered visual field.

Some interesting implications for teaching there--and validation of our intuitions as well, of course. Probably the most important is that explicit management of not just attention of the learner, but sources of distraction, not just in class but outside as well, may reap substantial benefits. This new research helps to further justify broader interventions and more attention on the part of instructors to a whole range of learning condition issues. In principle, anything that distracts can be credibly "adjusted", especially where fine distinctions or complex concepts are the "focus" of instruction.

In haptic pronunciation work, where the felt sense of what the body is doing should almost always be a prominent part of the learner's awareness, the assumption has been that one function of that process is to better manage attention and visual distraction. If you know of a study that empirically establishes or examines the effect of gesture on attention during vocal production, please let us know!

The question: Is the choice of paying attention or not a basic "student right?" If it isn't, how can you further enhance your effectiveness by better "stick handling" all sources of distraction in your work . . . including your desktop(s) and the space around you at this moment?

For a potentially productive distraction this week, take a fresh look at what your class feels like and "looks like" . . . without the usual "Hocus focus!"










Tuesday, June 27, 2017

Distracting new research: Try some "strategic attention" or a millennial!

This could be either a sign of things to come or at least a pleasant "distraction."

I have done literally dozens of blogposts here (out of the roughly 1,000) that involve in some way the concept of "attention". Likewise, in our decade or so of experience with haptic pronunciation teaching, capturing the learner's attention--for at least 3 seconds--has been shown to be critical. Any number of factors may serve to seriously distract the student and undermine the process.


Now comes a study, Selectively Distracted: Divided Attention and Memory for Important Information, by Middlebrooks, Kerr, and Castel of UCLA, summarized by Sciencedaily.com, suggesting that background distraction can be overcome . . . by "strategic attention", characterized this way:

"The ability to prioritize high-value information during study was consistently immune to the effects of divided attention, regardless of the extent of the distractions that participants faced . . . the current results intimate that divided attention did not incapacitate metacognitive mechanisms in either of the current experiments leaving participants capable of judging their memory capacity, performance, and methods by which they might compensate for additional demands on attention (p. 32)"

Subjects were subjected to various distractions while learning sets of words, such as music playing and having to attend to random numbers during the treatment phase. Although overall performance/recall was not quite as high among the "distracted", they were later equally successful in recalling key words in the post-tests.

This sounds like partial validation of the "hyper-cognitivist's" position that just by pointing out (pronunciation) errors or pointing to key (phonological) features in texts, for example, learners may "uptake" such focused input and effectively make use of that information later. Could be. We have all witnessed such potentially "teachable moments", but with so many other things going on in the classroom environment, what are the chances, really?

According to the study, it all comes down to what has been prioritized by the learner, the instructor and the context. Wow. But wait . . . just who were the subjects? Any chance that they were just more naturally adept at dealing with distraction? 192 paid undergraduates, probably in introductory psychology courses, the usual guinea pigs in such studies. Interestingly, the researchers do not comment on the young millennials' social media competence.

Any number of other recent studies have observed, seemingly to the contrary, that the "hyper-media generation" is in some respects less capable of keeping their eye on the ball. (Even the NBA has gotten the message, planning to shorten games!) Surprise . . . 

The good news: Perhaps upcoming generations are in fact becoming more "immune" to distraction in learning and studying, especially in certain e-contexts.  If so, that has intriguing implications for instructional design and tolerance for random iPhone use in class.

The bad news: Wonky studies like this one can easily distract us (or at least me!) from the more important work of creating classrooms where our priority, our attention is focused totally on effective teaching and learning. Just thought that I should point that out . . .

Citations:
Association for Psychological Science. (2017, June 21). Strategic studying limits the costs of divided attention. ScienceDaily. Retrieved June 26, 2017 from www.sciencedaily.com/releases/2017/06/170621082442.htm

Tuesday, May 9, 2017

Killing pronunciation 6: Eliminating distraction (and episodic memory) with gesture!

Have wondered for years why at times even the most ingenious use of gesture itself may not enhance memory for a sound or word. I assumed that it had something to do with what the learner was paying attention to at the time but had never seen any study that seemed to unpack that problem all that well. We know, for example, that visual distraction can effectively all but cancel out the impact of a haptic (movement + touch) stimulus or haptic-anchored gesture. But why doesn't gesture generally just reinforce whatever is the focus of instruction or repetition? Turns out that it may be our Achilles Heel. Here's a clue.


A fascinating study by Laurent, Ensslin and Mari-Beffa (2015) entitled, An action to an object does not improve its episodic encoding, but removes distraction, illustrates the potentially double-edged nature of gesture. Without getting into the somewhat complex but innovative research design, what they discovered is that gesture accompanying focus on an object did not enhance episodic memory for the object and the context or surroundings, but did strongly curtail distraction, evident in the diminished memory for other elements of the event. (Think of episodic memory as basically potential recall of emotional setting plus the 5 "W"s: who, what, where, why and when of a happening.)

In other words, gesture accompanying a phrase, for example, should at least cut back on distracting features of the moment or context . . . but, other than that, it may not be adding much to the mix. It may be actually working against you.

At first glance, that may appear, at least to some extent, to undermine the use of gesture in teaching. It does, in fact. Haptic pronunciation teaching, which uses gesture anchored by touch on stressed elements, is based on the principle that gesture not carefully controlled and focused with touch is "a wash" . . . it may or may not work. Overenthusiastic gesture use, for example, may not only turn off many students, compounded by cultural differences, but can, in effect, be so distracting in itself that the language focus is lost entirely.

It took me a couple of decades of working with kinesthetic pronunciation teaching techniques to figure that out. That insight came basically in the form of wildly divergent reports and feedback on gesture effectiveness by classroom teachers. Pronunciation teachers are generally by nature more "gesticular", often highly energetic and "moving" speakers. Perhaps you have to be in many contexts just to motivate students and maintain their attention, but it can, indeed, be our Achilles Heel. Is it yours? 

If so, get in touch (either with us or your local yoga, Alexander Technique, Lessac practitioner or Tai Chi shop!)

Source:
Laurent, X., Ensslin, A., and Mari-Beffa, P. (2015). An action to an object does not improve its episodic encoding, but removes distraction. Journal of Experimental Psychology: Human Perception and Performance 44(1), 244.


Saturday, January 28, 2017

Killing pronunciation improvement: better heard (and felt) but not seen!

Fascinating study, Visual Distractors Disrupt Audiovisual Integration Regardless of Stimulus Complexity, by Gibney et al. of the Department of Neuroscience, Oberlin College.

Tigger warning: This is a thick, technical read, but the conclusions of the study have potentially important implications for pronunciation teaching, especially attempts to enhance uptake of new and corrected sounds or patterns that rely on effective integration of sounds, images, movement and vocal resonance. 

In essence, what the research examined was, as the title suggests, how distractions in the visual field affected subjects' attention and ability to learn and recall audio-visual stimuli (images on a computer screen accompanied by sounds). What was striking (again, as evident in the title) was that no matter how complex the task of associating the targeted sound with the visual image or object in focus, with even the slightest distraction created on the screen, e.g., an object briefly appearing in a corner, the subjects' ability to integrate and recall the complex target later . . . was compromised.

The implications for pronunciation teaching? Not surprisingly, attention is critical in integrating sensory information. We know that, of course. What is more interesting is the idea that any visual distraction whatsoever that occurs when sound, movement and visual imagery (such as the orthography or phonetic representation of a word or phrase) are being "integrated" may seriously undermine the process. In other words, visual attention and eye tracking during the process may have a dramatic impact. That is a "variable" that can, in principle, be managed in the classroom, although most do not consider visual distraction to be potentially that disruptive of pronunciation instruction. But it certainly can be.

We discovered early on that in haptic pronunciation work, where not only sound, visual imagery, movement and vocal resonance are involved--but touch as well--visual distraction can seriously derail the process. This research suggests that the same effect may be a significant impediment in general pronunciation work as well, especially oral work, in some contexts.

The sterile, featureless language laboratory booth of old may have had more going for it than we thought! In early haptic work we experimented with controlling eye tracking. Perhaps it is time we revisited that idea. It certainly deserves our undivided attention.

Original research article: Front. Integr. Neurosci., 20 January 2017 | https://doi.org/10.3389/fnint.2017.00001

Saturday, September 10, 2016

Remembering new pronunciation (or anything) . . . in a flash!

Here is another for your "So THAT's why it works" file, from neuroscience. (Hat tip: Robert Murphy.)

The phenomenon, explored by Morris and researchers at Edinburgh and reported by Neuroscience News, is called flashbulb memory. (See full citation below.) Working with mice, they found, basically, that a vivid, striking event can cause the release of dopamine by the locus coeruleus, which, in turn, " . . . carries dopamine to the hippocampus . . . ", affecting how effectively memories are stored.

So, if you (and your mouse) are about to learn something new--or just did, it will be remembered more efficiently if it is "bookended" by a "flashbulb" event. Talk about counter-intuitive! I have done dozens of posts over the years on how attention figures into learning. (In our haptic work, for example, we often note that we need the attention of the learner for only 3 seconds to anchor a new sound.) In the NeuroscienceNews summary it is noted that "Our research suggests that a skillful teacher may be able to take advantage of these little surprises to help pupils learn and remember." Really? How so? They don't speculate--for good reason. How might you adopt that insight?

My first thought was to go find one of those camera flash attachments and try it out next week. But wait. There may be more to this, more than just dopamine.

About 35 years ago, I was very much interested in clinical hypnosis, in part as a way to better understand unconscious communication and learning in the classroom. One basic feature of some models of trance work was that you had to be very careful to distract the learner (or client) immediately after a significant suggestion had been provided or "uploaded".

The explanation was that that would keep the conscious mind of the learner from deconstructing and dismissing or undermining the suggestion or metaphor, not letting it be absorbed in toto, in effect. That could be accomplished in any number of ways, such as switching topics abruptly, showing a picture or doing something more physical or kinaesthetic, such as standing up or a gesture of some kind.

In other words, the principle of selectively partitioning off classroom experience makes sense. Rather than thinking in terms of always integrating the entire class period and lesson so that learners are metacognitively "on top of it all", so that they constantly know why they are learning what and consciously (metaphorically) attempting to file everything away for later use, think: switch-flash-divert-surprise.

I knew that my distinct tendency toward ADHD-like excessive multi-tasking was really a good thing! If you have a good "Flash dance" technique that you can share w/us, please do!

Keep in touch!

Full citation:
University of Edinburgh. (2016, September 8). How New Experiences Boost Memory Formation. NeuroscienceNews. Retrieved September 8, 2016 from http://neurosciencenews.com/experience-memory-neuroscience-4991/

Sunday, August 28, 2016

Great pronunciation teaching? (The "eyes" have it!)

Attention! Внимание!

Seeing the connection between two new studies--one on the use of gesture by trial lawyers in closing arguments and one on how a "visual nudge" can seriously disrupt our ability to describe recalled visual properties of common objects--and, by extension, pronunciation teaching, may seem a bit of a stretch, but the implications for instruction, especially systematic use of gesture in the classroom, are fascinating.

The bottom line: what the eyes are doing during pronunciation work can be critical, at least to efficient learning. Have done dozens of posts over the years on the role or impact of visual modality on pronunciation work; this adds a new perspective.

The first, by Edmiston and Lupyan of the University of Wisconsin-Madison, Visual interference disrupts visual knowledge, summarized by ScienceDaily:

"Many people, when they try to remember what someone or something looks like, stare off into space or onto a blank wall," says Lupyan. "These results provide a hint of why we might do this: By minimizing irrelevant visual information, we free our perceptual system to help us remember."

The "why" was essentially that visual distraction during recall (and conversely in learning, we assume), could undermine ability to describe visual properties of even common well-known objects, such as the color of a flower. That is a striking finding, countering the prevailing wisdom that such properties are stored in the brain more abstractly, not so closely tied to objects themselves in recall.

Study #2: Matoesian and Gilbert of the University of Illinois at Chicago, in an article published in Gesture entitled, Multifunctionality of hand gestures and material conduct during closing argument. The research looked at the potential contribution of gesture to the essential message and impact of the concluding argument to the jury. Not surprisingly, it was evident that the jury's visual attention to the "performance" could easily be decisive in whether the attorney's position came across as credible and persuasive. From the abstract:

This work demonstrates the role of multi-modal and material action in concert with speech and how an attorney employs hand movements, material objects, and speech to reinforce significant points of evidence for the jury. More theoretically, we demonstrate how beat gestures and material objects synchronize with speech to not only accentuate rhythm and foreground points of evidential significance but, at certain moments, invoke semantic imagery as well. 

The last point is key. Combine that insight with the "nudge" study. It doesn't take much to interfere with "getting" new visual/auditory/kinesthetic/tactile input. The dominance of visual over the other modalities is well established, especially when it comes to haptic (movement plus touch). These two studies add an important piece: random VISUAL input, itself, can seriously interfere with targeted visual constructs or imagery as well. In other words, what your students LOOK at and how effective their attention is during pronunciation work can make a difference--an enormous difference, as we have discovered in haptic pronunciation teaching.

Whether learners are attempting to connect the new sound to the script in the book or on the board, or are attempting to use a visually created or recalled script (which we often initiate in instruction), or are mirroring or coordinating their body movement/gesture with the pronunciation of a text of some size, the "main" effect is still there: what is at that time in their visual field in front of them, or in the created visual space in their brain, may strongly dictate how well things are integrated--and recalled later. (For a time I experimented with various systems of eye tracking control, myself, but could not figure out how to develop that effectively--and safely; emerging technologies offer us a new "look" at that methodology in several fields today.)

So, how do we appropriately manage "the eyes" in pronunciation instruction? Gestural work helps to some extent, but it requires more than that. I suspect that virtual reality pronunciation teaching systems will solve more of the problem. In the meantime, just as a point of departure and in the spirit of the earlier, relatively far out "suggestion-based" teaching methods, such as Suggestopedia, assume that you are responsible for everything that goes on during a pronunciation intervention (or interdiction, as we call it) in the classroom. (See even my 1997 "suggestions" in that regard as well!)

Now I mean . . . everything, which may even include temporarily suspending extreme notions of learner autonomy and metacognitive engagement . . .

See what I mean?

Sources: 
Matoesian, G. and Gilbert, K. (2016). Multifunctionality of hand gestures and material conduct during closing argument. Gesture 15(1), 79–114.
Edmiston, P. and Lupyan, G. (2017). Visual interference disrupts visual knowledge. Journal of Memory and Language 92, 281. DOI: 10.1016/j.jml.2016.07.002

Thursday, July 14, 2016

Why your avatar (could/will) make a better pronunciation teacher than you are!

Since the emergence of Second Life in 2003, I have been fascinated with the prospect of avatars teaching language. At the time, for technical reasons, I could not get my avatars to respond quickly enough with good audio to do much and gave up. (From recent reviews, it appears that most of those issues, including monitoring of offensive content, have been resolved and I may give it another look.)

A 2016 study of avatars teaching math to kids by Cook, Friedman, Duggan, Cui and Popescu provides an interesting perspective. The focus of the study was to attempt to isolate the effect of gesture, independent of facial expression, body motion and other features of the presenter's persona. As the researchers note, one of the problems with identifying the impact of gesture (from the abstract) is that it is "known to co-vary with other non-verbal behaviors, including eye gaze and prosody along with face, lip, and body movements . . . "

The avatars were presented against a fixed background such that only the hand movement varied. (The voice used and various graphic figures remained constant.) The effect was "pronounced". The subjects who viewed the gesturing avatar not only learned the concepts more successfully but were also later able to apply the material better. (That is not really surprising, since a number of studies have established that students just learn better when teachers gesture more.) But avatars bring something more to the party--or less!

In principle, how much of pronunciation could an avatar teach (either with or without gesture assist)? Probably most of it. (And I predict that that day is not far off.) One reason for that, mentioned above by Cook et al., is the fact that gesture tends to co-vary with other "non-verbal behaviors" such as . . . prosody? (Prosody is nonverbal? Really?) The basis of effective gesture use in instruction often depends critically on the learners' attention being "locked" on the cuing or anchoring motion; the gesture, in turn, is also strongly associated with a sound or process.

As reported in several previous posts, loss of attention or distraction is a most important variable in haptic (gesture plus touch) pronunciation teaching as well. The video models that we use now are for the most part black and white, with a black background and no subtitles on screen, designed to focus learner attention on the movement and positioning basically of my hands, not the model's face or body. Addition of color, extraneous movement, or additional graphics will always pull at least some learners away from the focus of the lesson embodied in the pedagogical gestures. (Research on competition between visual, auditory and kinaesthetic or haptic modalities has demonstrated consistently that visual displays almost always trump the others, even in combination.)

For gesture-based pronunciation, or other kinds of instruction for that matter, interactive "thinking" and responding avatars offer real promise. The technology has been around for over a decade, in fact. Advantages of avatars include:
  • Individualized, more affordable computer-based instruction 
  • Systematic application of gesture in instruction, especially providing consistent placement of gesture in the visual field
  • More effective attention management, neutralizing potential visual distractions
  • Emotionally "comfortable" instruction for a wider range of learner personalities
  • Avoids unconscious transmission of:
    • Instructor "bad day" images and attitudes
    • Typical "hyperactive" pronunciation teacher behavior
    • Overreactions, positive or negative, to student miscues or "victories"
    • Instructor bias toward "teacher's pets" or gaze avoidance in eye contact patterning during instruction
Time to reactivate my Avatar. Will upload a demo later this summer.

 Cook, S. W., Friedman, H. S., Duggan, K. A., Cui, J. and Popescu, V. (2016), Hand Gesture and Mathematics Learning: Lessons From an Avatar. Cognitive Science. doi: 10.1111/cogs.12344


Monday, December 14, 2015

Can't see teaching (or learning) pronunciation? Good idea!

A common strategy of many learners when attempting to "get" the sound of a word is to close their eyes. Does that work for you? My guess is that those are highly visual learners who can be more easily distracted. Being more auditory-kinesthetic and somewhat color "insensitive" myself, I'm instead more vulnerable to random background sounds, movement or vibration. Research by Molloy et al. (2015), summarized by Science Daily (full citation below) helps to explain why that happens.

In a study of what they term "inattentional deafness," using MEG (magnetoencephalography), the researchers were able to identify in the brain both the place and point at which auditory and visual processing in effect "compete" for prominence. As has been reported more informally in several earlier posts, visual consistently trumps auditory, which accounts for the common life-ending experience of having been oblivious to the sound of screeching tires while crossing the street fixated on a smartphone screen . . . The same applies, by the way, to haptic perception as well--except in some cases where movement, touch, and auditory team up to override visual.

The classic "audio-lingual" method of language and pronunciation teaching, which made extensive use of repetition and drill, relied on a wide range of visual aids and color schemas, often with the rationale of maintaining learner attention. Even the sterile, visual isolation of the language lab's individual booth may have been especially advantageous for some--but obviously not for everybody!

What that research "points to" (pardon the visual-kinesthetic metaphor) is more systematic control of attention (or inattention) to the visual field in teaching and learning pronunciation. Computer-mediated applications go to great lengths to manage attention but, ironically, forcing the learner's eyes to focus or concentrate on words and images, no matter how engaging, may, according to this research, also function to negate or at least lessen attention to the sounds and pronunciation. Hence, the intuitive response of many learners to shut their eyes when trying to capture or memorize sound. (There is, in fact, an "old" reading instruction system called the "Look up, say" method.)

The same underlying, temporary "inattentional deafness" also probably applies to the use of color associated with phonemes--or even the IPA system of symbols in representing phonemes. Although such visual systems do illustrate important relationships between visual schemas and sound that help learners understand the inventory of phonemes and their connection to letters and words in general, in the actual process of anchoring and committing pronunciation to memory they may in fact diminish the brain's ability to efficiently and effectively encode the sound and the movement used to create it.

The haptic (pronunciation teaching) answer is to focus more on movement, touch and sound, integrating those modalities with visual. The conscious focus is on gesture terminating in touch, accompanied by articulating the target word, sound or phrase simultaneously with resonant voice. In many sets of procedures (what we term protocols) learners are instructed to either close their eyes or focus intently on a point in the visual field as the sound, word or phrase to be committed to memory is spoken aloud.

The key, however, may be just how you manage those modalities, depending on your immediate objectives. If it is phonics, then connecting letters/graphemes to sounds with visual schemas makes perfect sense. If it is, on the other hand, anchoring or encoding pronunciation (and possibly recall as well), the guiding principle seems to be that sound is best heard (and experienced somatically, in the body) . . . but (to the extent possible) not seen!

See what I mean? (You heard it here!)

Full citation:
Molloy, K., Griffiths, T., Chait, M., and Lavie, N. 2015. Inattentional Deafness: Visual Load Leads to Time-Specific Suppression of Auditory Evoked Responses. Journal of Neuroscience 35 (49): 16-46.

Tuesday, July 21, 2015

Back to the future of pronunciation teaching (and the "Goldfish" standard for attention management)

You apparently have a bit more than 8 seconds to read this post. So you may want to just scroll down to the conclusion and start there . . .

Capturing and holding attention, if only for a few seconds, is the key to effective change in pronunciation work, especially for "mechanical" adjustments--and most other things in life. In earlier blog posts, the "gold standard" or sine qua non of haptic pronunciation work has been seen to be about 3 seconds. In other words, for a learner to adequately experience the totality of a new sound or word--physically, auditorily, visually and conceptually, connecting things together--before moving on to practice, or at least noticing or any chance at "uptake", takes complete, undivided attention for at least that long or longer.

Even that is often an unrealistic requirement with all the other potential distractions in the classroom or visual field. Research on the effectiveness of instructors' recasting of learner utterances, for example (Loewen and Philip, 2006), suggests that most of the time that strategy is relatively ineffective. One critical variable is always the quality or intentionality of learner attention, both in terms of the function the instructor is attempting to carry out and general learner receptivity.

Recall that Microsoft claims that our collective attention span, in part due to the impact of technology, has now dropped to about 8 seconds, just below that of the goldfish. (The UK Telegraph report is much more entertaining than that from the techies.)

A new study by Moher, Anderson and Song of Brown University, summarized by ScienceDaily.com, adds a fascinating piece to the puzzle and may suggest how to begin to maintain attention better in class. What they discovered in an experimental study was that their subjects were, in effect, better able to "block" obvious distractions than more subtle ones. Backgrounded images in the visual field had more effect on subsequent action than did foregrounded, more striking elements, which appeared to be easier for the brain to manage or ignore. They seem to have "discovered" one possible path into the mind for subliminal stimuli, evading first-line conceptual or perceptual defences.

What is the obvious "subtle, unobtrusive, yet potent" application to pronunciation teaching? If you don't have "full body, mind and visual field" attention, there is no telling what is interfering with anchoring of sound change in the brain and subsequent total or partial recall.

Early on in EHIEP (Essential Haptic-integrated English Pronunciation) work I experimented extensively with controlling eye movement, in part to maintain concentration and attention, based primarily on the research underlying the therapeutic model of "Observed experiential integration" developed by Bradshaw and Cook (2011). (See citation below.) The effect was dramatic in working with individuals, but applying those techniques to the classroom proved impractical. In part because the haptic pedagogical system was just developing, I backed off from eye patterning techniques in pronunciation work in 2009.

Based on Moher et al.'s research, however, it is perhaps time to give directed eye movement management a "second look" in our work, going back to what I believe is the (haptic) future of pronunciation instruction, especially in virtual, computer-mediated applications.

Will report back on an in-progress exploratory study with one learner using some eye movement management later this summer. Not surprisingly, I am already "seeing" some promising results, attending to features of the teaching session that I would normally not have noticed!

Full citations:

Brown University. "Surprise: Subtle distractors may divert action more than overt ones." ScienceDaily. ScienceDaily, 16 July 2015, www.sciencedaily.com/releases/2015/07/150716123831.htm. (Jeff Moher, Brian A. Anderson , Joo-Hyun Song. Dissociable Effects of Salience on Attention and Goal-Directed Action. Current Biology, 2015 DOI: 10.1016/j.cub.2015.06.029)

Bradshaw, R. A., Cook, A., McDonald, M. J. (2011). Observed experiential integration (OEI): Discovery and development of a new set of trauma therapy techniques. Journal of Psychotherapy Integration, 21(2), 104-171.

Loewen, S., and Philip, J. (2006). Recasts in the adult English L2 classroom: Characteristics, explicitness, and effectiveness. The Modern Language Journal, 90, 536-556.

Saturday, January 10, 2015

Mastering new movement (and pronunciation!): Follow through, follow up or foul up?

Mastery learning has gotten an undeservedly bad rap in many areas of education--but not for those of us engaged in the "somatic" or bodily arts, where systematic control of movement in training is critical. In athletic or music training it is a given; in contemporary pronunciation work and elsewhere it is a decidedly mixed bag. Articulatory work with learners, for example, can be incredibly difficult. What level of mastery of a sound, for example, is adequate in a given context? More importantly, how can you get there?

A new study by Howard, Wolpert and Franklin (summarized by ScienceDaily; see complete reference below) looked at the function of follow-through in learning new movement. Subjects were trained in a new hand movement (grasping and turning a handle of sorts) and a "path" for the hand to take to a resting state after the targeted movement execution.

What they discovered was that the more inconsistent the movement on the follow-through path, the more the mastery of the targeted movement was compromised: " . . . this research suggests that this variability . . . reduces the speed of learning of the skill that is being practiced . . . "

Keep in mind that this is training in movement, although the parallel to learning in general seems striking. There are analogous practices in various disciplines. In hypnotherapy, for example, what immediately follows the focused training will always be some kind of dissociative technique to "protect" what has been anchored from distraction and conscious "doubt" or negation.

Following up on a recent post on "distraction", after reading the study I did a quick review of the pedagogical movement patterns (movement, or controlled gesture, plus touch on the stressed vowel) that we use in haptic pronunciation teaching. About half have a prescribed follow-through back to a resting posture or state. Interestingly, the ones that do NOT tend to be the more problematic. Definitely requires follow-up on my part!

How well or consistently do you "conduct" the physical side of your teaching, especially pronunciation?

Full citation:
Ian S. Howard, Daniel M. Wolpert, David W. Franklin. The Value of the Follow-Through Derives from Motor Learning Depending on Future Actions. Current Biology, 2015 DOI: 10.1016/j.cub.2014.12.037

Monday, January 5, 2015

Revenge of the multi-taskers: Distracted during motor (or pronunciation) learning or practice? No problem!

This is the second in a series of posts on creating and managing effective language or pronunciation practice, (analogically) based on Glyde's guitar practice framework. (See earlier post.) His principle #5 was common-sensical: failing to avoid distraction.

Earlier posts have looked at the interplay among the haptic (movement and touch), visual, and auditory modalities. One general finding of research has been that visual stimuli or input tend to override auditory and haptic. In part for that reason, we have worked to restrict extraneous visual and auditory distraction during haptic pronunciation work. In therapy, by contrast, distraction is often used quite strategically to draw the patient's attention away from a problematic experience or emotion.

Now comes a fascinating study by Song and Bedard of Brown University (summarized by Science Daily; see full citation below) demonstrating that visual distraction during motor learning may not be problematic after all. As long as subjects experienced relatively similar distraction on the recall task, the fact that they had been systematically distracted during the learning task seemed to have little or no effect. Furthermore, when the "distracted" subjects were later tested in a "non-distracting" condition, they did not perform as well as their "distracted" fellow subjects.

In other words, the visual context of motor learning was not a factor in recall--as long as it was reasonably consistent with the original learning milieu.

So, what does all that mean for effective pronunciation practice? Quite a bit, perhaps. Context, from many perspectives, is critical. Establishing linguistic context has been a given for decades; managing the classroom environment (or the homework practice venue) so that new or changed sounds are recalled in a "relatively similar setting" to the one in which they were learned is another matter.

One of the principles of haptic pronunciation teaching is to use systematic gesture + touch across the visual field to anchor sound change--maintaining as much of learner attention as possible for at least 3 seconds. In practice, the same pedagogical movement patterns (PMP) are used--and, according to learners, even in spontaneous later recall of new material the PMPs often figure prominently in visual/auditory recall as well.

So, to paraphrase Glyde's 5th principle: avoid inconsistent distraction (in pronunciation teaching), at least in the more motor-based phases of the work. Or better yet, embrace it!

Citation:
Brown University. (2014, December 9). Distraction, if consistent, does not hinder learning. ScienceDaily. Retrieved December 18, 2014 from www.sciencedaily.com/releases/2014/12/141209120141.htm




Sunday, November 17, 2013

Pay attention to pronunciation!

As reported in earlier posts, no matter how terrific our pronunciation teaching is, if a learner isn't paying attention or is distracted, chances are not much uptake will happen--especially when haptic anchoring is involved. No surprise there. A new study by Lavie and colleagues of the UCL Institute of Cognitive Neuroscience, focusing on "inattentional blindness" and entitled "How Memory Load Leaves Us 'Blind' to New Visual Information," just reported at Science Daily, sheds new "light" on exactly how visual attention serves learning.

In essence, when subjects were required to momentarily attend to an event or object in the visual field and remember it, their ability to respond to new events or distractions occurring immediately afterward was curtailed significantly. (The basic stuff of hypnosis, stage magicians and texting while driving, of course!)

What is of particular interest here is that, whereas a visual image one is focusing on can strongly exclude competing distractions, in haptic-integrated pronunciation instruction that exclusionary effect works in our favor. It helps explain the potential effectiveness of the pedagogical movement patterns of EHIEP and AH-EPS:

  • Carefully designed gestures across the visual field 
  • Performed while saying a word, sound or phrase 
  • With a highly resonant voice, and
  • Terminating in some kind of touch on a stressed vowel, what we term "haptic anchoring." 
It also explains why insightful and potentially priceless comments from instructors, when they come in too-close proximity to vivid and striking pronunciation-related "visual events" . . . may not stick or get "uptaken"!

See what we mean?