Showing posts with label attention. Show all posts

Monday, January 17, 2022

Improved pronunciation "in the blink of any eye!"

How important is general, not directly task-based body movement, especially the lack of it, to learning pronunciation, to creativity, or to learning in general? In haptic pronunciation teaching, learners are encouraged or required to move almost constantly, primarily through speech-synchronized gesture, but also through "Mindfulness-like" practices that monitor the state of the muscles and posture of the body, along with breathing patterns.

But what about the impact on learning when students' bodies are held more in check, with restricted motor engagement? A new study by Murali and Händel of Julius-Maximilians-Universität Würzburg, Motor restrictions impair divergent thinking during walking and during sitting, summarized by ScienceDaily.com, not only affirms our intuitions about the central role of embodiment in thought and learning, but suggests something more: even while seated, a little movement appears to go a long way in maintaining creativity and attention. (What a shocker, eh? Hope you were sitting down when you read that!)

Clker.com
The actual protocols of the research, which involved measurement of eye "blinking" responses as indices of degree of engagement, are not described in the summary, but the title of the original piece is interesting. To quote from the summary of the study: "Our research shows that it is not movement per se that helps us to think more flexibly," says neuroscientist Dr Barbara Händel from Julius-Maximilians-Universität Würzburg (JMU) in Bavaria, Germany. "Instead, the freedom to make self-determined movements (emphasis mine) is responsible for it."

In other words, messing with the body's incredible range of what appear to be random movements, apparently unassociated with the task consciously in focus, may have dramatic consequences. An extreme analogy might be talking with friends who are somewhere on the autism or ADHD spectrums. Their body and eye movements may seem to suggest that they are not paying sufficient attention when in fact that is not the case at all.

Now I am not saying that "thinking more flexibly" at any moment in instructional time is necessarily a good thing, of course, but the principle of allowing the body to also think and create on its own on an ongoing basis, in some sense "non- or extra-verbally," if you will, certainly is. On behalf of all elementary school boys on the planet who have had to sit in/through years of class to learn with girls when we should, instead, have been outside learning with our hands and whole bodies, I can only say, AMEN!

Think about it. While you were reading this blogpost, what "else" was your body doing? If you can't remember . . . Q.E.D. (quod erat demonstrandum)

Keep in touch!

Bill

Original source: 

Supriya Murali, Barbara Händel. Motor restrictions impair divergent thinking during walking and during sitting. Psychological Research, 2022; DOI: 10.1007/s00426-021-01636-w

Wednesday, June 24, 2020

Getting a feel for pronunciation: What our pupils can tell us!

Clker.com
What do you do with your eyes when you are struggling to understand something that you are listening to? (Quick: Write that down.) Now some of that, of course, depends on your personal wiring, but this new study, "Asymmetrical characteristics of emotional responses to pictures and sounds: Evidence from pupillometry," by Nakakoga, Higashi, Muramatsu, Nakauchi, and Minami of Toyohashi University of Technology, as reported in neuroscience.com, sheds some new "light" on how the emotions may exert influence on our ongoing perception and learning. Using eye tracking and emotion-measuring technology, the researchers found a striking pattern.

From the summary (boldface, mine):
"It suggests that visual perception elicits emotions in all attentional states, whereas auditory perception elicits emotions only when attention is paid to sounds, thus showing the differences in the relationships between attentional states and emotions in response to visual and auditory stimuli."

So, what does that imply for the pronunciation teacher? Several things, including the importance of what is going on in the visual field of learners when they are attempting to learn or change sounds. It has long been established that the process of learning pronunciation is especially susceptible to emotion. It can be an extraordinarily stressful experience for some learners. Even when there are no obvious stressors present, techniques such as relaxation or warm-ups have been shown to facilitate learning of various aspects of pronunciation.

Consequently, any emotional trigger in the visual field of the learner can have a "pronounced" positive or negative impact, regardless of what the instructor is attempting to direct the learners' attention to. If, on the other hand, learners' attention is focused narrowly on auditory input, you have a better chance of managing emotional impact FOR GOOD, provided you can successfully manage or restrict whatever is going on in the visual field of the learner that could be counterproductive emotionally. (Think: Hypnosis 101 . . . or a good warm up . . . or a mesmerizing lecture!)

That doesn't mean we teach pronunciation with our eyes closed . . . when it comes to the potential impact of the visual field on our work. Quite the contrary! How does the "front" of the room (or the scenes on screen) feel to your pupils? Can you enhance that?

To learn more about one good (haptic) way to do that, join us at the next webinars!

Original Research: Open access
"Asymmetrical characteristics of emotional responses to pictures and sounds: Evidence from pupillometry" by Nakakoga, S., Higashi, H., Muramatsu, J., Nakauchi, S., and Minami, T.
PLOS ONE doi:10.1371/journal.pone.0230775

Saturday, May 2, 2020

Killing pronunciation 12: Memory for new pronunciation: Better heard (or felt) but not seen!

Another in our series of practices that undermine effective pronunciation instruction!
Clker.com

(Maybe) bad news from visual neuroscience: You may have to dump those IPA charts, multi-colored vowel charts, technicolor x-rays of the inside of the mouth, dancing avatars--and even haptic vowel clocks! Well . . . actually, it may be better to think of those visual gadgets as something you use briefly in introducing sounds, for example, but then dispose of them or conceptually background them as quickly as possible.

A new study by Davis et al. at the University of Connecticut, Making It Harder to "See" Meaning: The More You See Something, the More Its Conceptual Representation Is Susceptible to Visual Interference, summarized by Neurosciencenews.com, suggests that visual schemas of vowel sounds, for example, could be counterproductive--unless, of course, you close your eyes . . . but then you can't see the chart in front of you.

Subjects were basically confronted with a task where they had to try to recall a visual image, physical sensation, or sound while being presented with visual activity or images in their immediate visual field. The visual "clutter" interfered substantially with their ability to recall the other visual "object" or image, but it did not impact their recall of representations in other sensory modalities (auditory, tactile or kinesthetic), such as non-visual concepts like volume, heat, or energy.

We have had blogposts in the past that looked at research showing that it is difficult to "change the channel": if a student is mispronouncing a sound, many times just trying to repeat the correct sound instead, without introducing a new sensory or movement set to accompany the new sound, is not effective. In other words, an "object" in one sensory modality is difficult to simply "replace"; you must work around it, in effect, attaching other sensory information to it (cf. multi-modal or multi-sensory instruction).

So, according to the research, what is the problem with a vowel chart? Basically this: the target sound may be primarily accessed through the visual image, depending on the learner's cognitive preferences. I only "know" or suspect that from years of tutoring and asking students to "talk me aloud" through their strategies for remembering the pronunciation of new words. Overwhelmingly, access is by way of the orthographic representation, the "letter" itself, or its place in a vowel chart or listing of some kind. (Check that out yourself with your students.)

So . . . what's the problem? If your "trail of bread crumbs" back to a new sound in memory is through a visual image of some kind, then any clutter in your visual field that is the least bit distracting as you try to recall the sound is going to make you much less efficient, to put it mildly. That doesn't mean you can't teach using charts, etc., but you'd better be engaging more of the multisensory system when you do, or downgrade their importance in your method appropriately; otherwise your learners' access to those sounds may be very inefficient, at best.

In our haptic work we have known for a decade that our learners are very susceptible to being distracted by things going on in their visual field that pull their attention away from experiencing the body movement and "vibrations" in targeted parts of their bodies. Good to see "new-ol' science" is catching up with us!

I've got a feeling Davis et al are on to something there! I've also got a feeling that there are a few of you out there who may "see" some issues here that you are going to have to respond to!!!




Tuesday, January 22, 2019

Differences in pronunciation: Better felt than seen or heard?

clker.com
This feels like a "bigger" study, maybe even a new movement! (Speaking of new "movements", be sure to sign on for the February haptic webinars by the end of the month!)

There are any number of studies in various fields exploring the impact of racial, age or ethnic "physical presence" (what you look like) on perception of accent or intelligibility. In effect, what you see is what you "get!" The visual will often override the audio--what the learner actually sounds like. Actually, that may be a good thing at times . . .

Haptic pronunciation teaching and similar movement-based methods use visual-signalling techniques, such as gesture, to communicate with learners concerning the status of sounds, words and phrases. Exactly how that works has always been a question.

Research by Collegio, Nah, Scotti and Shomstein of George Washington University, "Attention scales according to inferred real-world object size", summarized by Neurosciencenews.com, points to something of the underlying mechanism involved: perception of relative object size. The study compared subjects' reaction or processing time when attempting to identify the relative size of objects (as opposed to the size of the image of the object presented on the screen). What they discovered is that, regardless of the size of the images on the screen, the objects that were in reality larger consistently occupied more processing time or attention.

In other words, the brain accesses a spatial model or template of the object, not just the size of the visual image itself in "deciding" if it is bigger than an adjacent object in the visual field. A key element of that process is the longer processing time tied to the actual size of the object.

How does this relate to gesture-based pronunciation teaching? In a couple of ways, potentially. If students have "simply" seen the gestures provided by instructors (e.g., Chan, 2018) and, for example, in effect have just been commanded to make some kind of adjustment, that is one thing. The gesture is, in essence, a mnemonic, a symbol, similar to a grapheme, a letter. The same applies to such superficial signalling systems as color, numbers or finger contortions.

If, on the other hand, the learner has been initially trained in using or experiencing the sign, itself, as in sign language, there is a different embodied referent or mapping, one of experienced physical action across space.

In haptic work, adjacent sounds in the conceptual and visual field are first embodied experientially. Students are briefly trained in using three different gesture types, with distinctive lengths and speeds, accompanied by three distinctive types of touch. In initial instruction, students do exercises where they physically experience combinations of those different parameters as they say the sounds, etc.

For example, the contrastive gestural patterns (done as the sound is articulated) for [I], [i], [i:], and [iy] are progressively longer and more complex (see linked video models):
a. Lax vowels, e.g., [I] ("it") - Middle finger of the left hand quickly and lightly taps the palm of the right hand.
b. Tense vowels, e.g., [i] ("happy") - Left and right hands touch lightly with fingertips momentarily.
c. Vowel before voiced consonant, e.g., [i:] ("dean") - Left hand pushes right hand, with palms touching, firmly 5 centimeters to the right.
d. Tense vowel plus off-glide, e.g., [iy] ("see") - Fingernails of the left hand drag across the palm of the right hand and, staying in contact, then slide up about 10 centimeters and pause.

The same principle applies to most sets of contrastive structures and processes, such as intonation, rhythm and consonants. See what I mean--why embodied gesture for signalling pronunciation differences is much more effective? If not, go here, do a few haptic pedagogical movement patterns (PMPs) just to get the feel of them, and then reconsider!





Saturday, December 22, 2018

The feeling before it happens: Anticipated touch and executive function--in (haptic) pronunciation teaching

Tigger warning*: This post is (about) touching!

Another in our continuing, but much "anticipated", series of reasons why haptic pronunciation teaching works or not, based on studies that at first glance (or just before) may appear to be totally unrelated to pronunciation work.

Fascinating piece of research by Weiss, Meltzoff, and Marshall of the University of Washington's Institute for Learning and Brain Sciences, and Temple University, entitled "Neural measures of anticipatory bodily attention in children: Relations with executive function", summarized by ScienceDaily.com. In that study they looked at what goes on in the (child's) brain prior to an anticipated touch of something. What they observed (from the ScienceDaily.com summary) is that:

"Inside the brain, the act of anticipating is an exercise in focus, a neural preparation that conveys important visual, auditory or tactile information about what's to come  . . . in children's brains when they anticipate a touch to the hand, [this process] . . . relates this brain activity to the executive functions the child demonstrates on other mental tasks. [in other words] The ability to anticipate, researchers found, also indicates an ability to focus."

Why is that important? It suggests that those areas of the brain responsible for "executive" functions, such as attention, focus and planning, engage much earlier in the process of perception than is generally understood. For the child or adult who does not have the general, multi-sensory ability to focus effectively, the consequences can be far reaching.

In haptic pronunciation work, for example, we have encountered what appeared to be a whole range of random effects that can occur in the visual, auditory, tactile and conceptual worlds of the learner that may interfere with paying quality attention to pronunciation and memory. In some sense we have had it backwards.

What the study implies is that executive function mediates all sensory experience, because we must efficiently anticipate what is to come--to the extent that any individual "simply" may or may not be able to attend long enough or deeply enough to "get" enough of the target of instruction. The brain is set up to avoid unnecessary surprise at all costs. The better and more accurate the anticipation, of course, the better.

If the conclusions of the study are on the right track--that the "problem" is as much or more in executive function--then how can that (executive functioning) be enhanced systematically, as opposed to just attempting to limit random "input" and distraction surrounding the learner? We'll return to that question in subsequent blog posts, but one obvious answer is through development of highly disciplined practice regimens and careful, principled planning.

Sound rather like something of a return to more method- or instructor-centered instruction, as opposed to this passing era of overemphasis on learner autonomy and personal responsibility for managing learning? That's right. One of the great "cop outs" of contemporary instruction has been to pass off blame for failure on the learner, her genes and her motivation. That will soon be over, thankfully.

I can't wait . . .



Citation:
University of Washington. (2018, December 12). Attention, please! Anticipation of touch takes focus, executive skills. ScienceDaily. Retrieved December 21, 2018 from www.sciencedaily.com/releases/2018/12/181212093302.htm.

*Used on this blog to alert readers to the fact that the post contains reference to feelings and possibly "paper tigers" (cf., Tigger of Winnie the Pooh)


Sunday, August 12, 2018

Feeling distracted, distant or drained by pronunciation work? Don't be downcast; blame your smartphone!

clker.com
It all makes sense now. I knew there had to be more (or less) going on when students are not thoroughly engaged or seemingly not attentive during pronunciation teaching, mine, especially. Two new studies, taken together, provide a depressing picture of what we are up against, but also suggest something of an antidote as well.

Tigger warning: This may be perceived as slightly more fun than (new/old) science. 

The first, summarized by ScienceDaily.com, is Dealing with digital distraction: Being ever-connected comes at a cost, studies find, by Dwyer and Dunn of The University of British Columbia. From the summary:

"Our digital lives may be making us more distracted, distant and drained . . . Results showed that people reported feeling more distracted during face-to-face interactions if they had used their smartphone compared with face-to-face interactions where they had not used their smartphone. The students also said they felt less enjoyment and interest in their interaction if they had been on their phone."

What is most interesting or relevant about the studies reported, and the related literature review, is the focus on the impact of digital smartphone use prior to what should be quite meaningful f2f interaction--either dinner or what should have been a more intimate conversation--THE ESSENCE OF EFFECTIVE PRONUNCIATION AND OTHER FORMS OF INSTRUCTION! Somehow the digital "appetizer" made the meal and interpersonal interaction . . . well . . . considerably less appetizing.

Why should that be the case? The research on the multiple ways in which digital life can be depersonalizing and disconnecting is extensive and persuasive, but there is maybe something more "at hand" here.

A second study--which caught my eye as I was websurfing on the iPhone in the drive-through lane at Starbucks--dealt with what seem to be similar effects produced by "bad" posture, specifically, studying something with the head bowed, as opposed to doing the same with the text at eye level, with optimal posture: "Do better in math: How your body posture may change stereotype threat response" by Peper, Harvey, Mason, and Lin of San Francisco State University, summarized in NeuroscienceNews.com.

Subjects did better and felt better if they sat upright and relaxed, as opposed to looking down at the study materials, a posture which, according to the authors, " . . . is a defensive posture that can trigger old negative associations."

So, add up the effect of those two studies and what do you get? Lousy posture AND digital, draining distraction. Not only do my students use smartphones WITH HEAD BOWED up until the moment class starts, but I even have them do more of it in class! 

Sit up and take note, eh!

Citations:
American Psychological Association. (2018, August 10). Dealing with digital distraction: Being ever-connected comes at a cost, studies find. ScienceDaily. Retrieved August 12, 2018 from www.sciencedaily.com/releases/2018/08/180810161553.htm

San Francisco State University (2018, August 4). Math + Good Posture = Better Scores. NeuroscienceNews. Retrieved August 4, 2018 from http://neurosciencenews.com/math-score-posture-9656/


Saturday, July 28, 2018

Mesmerizing teaching (and pronunciation teachers)


clker.com
The topics of attention salience and unconscious learning have come up any number of times over the course of the history of the blog, beginning with one of my favorites on that subject back in 2011 on Milton Erickson. In part because of the power of media today and the "discoveries" by neuroscience that we do, indeed, learn on many levels, some out of our immediate awareness, there is renewed interest in the topics--even from Starbucks!

A fascinating new book (to me at least) by Ogden, Credulity: A Cultural History of US Mesmerism, summarized by Neuroscience News, explores the history of "Mesmerism" and a bit about its contemporary manifestations. (Q.E.D. . . . if you were not aware that it is still with us!) Ogden is most interested in understanding the abiding attraction of purposeful manipulation or management of unconscious communication, attention and learning. One fascinating observation, from the Neuroscience News summary, is:

" . . . that one person’s power of suggestion over another enables the possibility of creating a kind of collaborative or improvisational performance, even unintentionally without people setting it up on purpose."

Get that? ". . . collaborative or improvisational performance . . ." created "unintentionally." Are you aware that you promote that or do any of that in your classroom? If you are, great; if not, great, but is that not also an interesting characterization of the basis of interaction in the language teaching classroom, especially where the focus is modeling, corrective feedback and metacognitive work in pragmatics and usage? In other words, suggestion is at the very heart of instructor-student engagement in some dimensions of the pedagogical process. Unconscious learning and relational affinities were for some time contained in Chomsky's infamous "black box," but are now the subject of extensive research in neuroscience and elsewhere.

And there are, of course, any number of factors that may affect what goes on "below decks," as it were. Turns out there is (not surprisingly) even a well-established gender dimension or bias to unconscious learning as well. Ya think? A 2015 study by Ziori and Dienes, summarized by Frontiers in Psychology.org, highlights a critical feature of that cognitive process keyed or confounded by the variable of "attentional salience."

In that study, "Facial beauty affects implicit and explicit learning of men and women differently", the conscious and unconscious learning of men was significantly downgraded when the task involved analyzing language associated with the picture of a beautiful woman. Women, on the other hand, actually did BETTER in that phase of the study. The beautiful face did not distract them in the least; it seemed, in fact, to further concentrate their cognitive processing of the linguistic puzzle.

Now exactly why that is the case the researchers can only speculate. For example, it may be that men are programmed to tend to see a beautiful woman initially as more "physically of interest", whereas women may see or sense first a competitor, which actually sharpens their processing of the problem at hand. It was very evident, however, that what is termed "incentive salience" had a strong impact, or at least siphoned off cognitive processing resources . . . for the boys.

There are many dimensions of what we do in instruction that are loaded with "incentive salience", fun or stimulating stuff that we suppose will in essence attract attention or stimulate learners to at least wake up so we can do something productive. Pronunciation instruction is filled with such gimmicks and populated by a disproportionate number of former cheerleaders and "dramatic personae." The combination of unconscious connectivity and "beautiful" techniques may actually work against us.

In haptic work we figured out about a decade ago that not only how you look but what you wear can impact effectiveness of mirroring of instructor gesture in class. The fact that I am old and bald may account for the fact that students find me easier to follow than some of my younger associates? Take heart, my friends, the assumed evolutionary advantage of "beautiful people" may not only be waning, but actually be working against them in the pronunciation classroom at least! 



Thursday, May 24, 2018

Paying attention to paying attention! Or else . . . !

Two very accessible, useful blogposts--primers by Mike Hobbis, PhD student in neuroscience @UCL, on attention in teaching--are worth a read: one on why there should be more research on attention in the classroom, and a second, which I like a lot, on attention as an effect, not just a cause.

Clker.com
Hobbis' basic point is that attention should be more the "center of attention" in methodology and research today than it is. Why it isn't is a really good question. In part, there are just so many other things to "attend to" . . .

I was really struck by the fact that I, too, still tend to use attention more as a cause, not an effect, meaning: if students are not paying attention in some form, my lesson plan or structure cannot possibly be at fault; it is probably the continuous "laptopping" during the class or lack of sleep on their parts. The research on the impact of multitasking at the keyboard in school on a whole range of subjects and tasks, for example, is extensive . . . and inconclusive--except in teaching pronunciation, where, as far as I can determine, there is none. (If you know of some, PLEASE post the link here!)

There is, of course, a great deal of research on paying attention to pronunciation from various perspectives, per se, such as Counselman 2015, on "forcing" students to pay attention to their pronunciation and variance from a model. But, the extent to which variable attention alone contributes to the overall main effect is not pulled out in any study that I have been able to find.

Now I am not quite to Counselman's level of "forcing" attention, either by totally captivating instruction or by capturing the attention and holding it hostage along the way, but Hobbis makes a very good point in the two blogposts that attention must go in both directions, if not simultaneously then at least systematically. In haptic pronunciation work--or most pronunciation teaching for that matter--the extensive use of gesture alone should function at both levels. The same applies to any movement-enhanced methodology such as TPR (Total Physical Response) or mind-body interplay, as in Mindfulness training. The question, of course, is how mindful and intentional in methodology we are.

There has been a resurgence of attention to attention in the last decade in a number of sub-disciplines in neuroscience as well. Have you been paying attention--either to the research or in your classroom? If so, share that w/us, too! (The next blogpost will focus on the range of attention-driven, neuroscience-grounded best practice classroom techniques.) Join that conversation. You have our attention!




Sunday, April 29, 2018

Mission unpronouncable: When there's no method to the madness . . .

Clker.com (the kitchen sink)
Caveat Emptor: I am (a) a near-fanatical exerciser, (b) a language teaching method/ologist with about 50 years in the field, and (c) a compulsive researcher, and (d) this post is maybe a little "retro." You'd think that the (b) and (c) skill sets would naturally combine to make me a near world-class athlete. In my dreams, maybe . . .

For years, when asked how to get started exercising like I do, my standard response has been:
  • Pick your grandparents well.
  • Get a trainer or sign up for a class -- Don't do it on your own. 
  • Follow the method.
  • Be disciplined and consistent.
  • Run the long race: a life of better fitness. 
Should have taken my own advice. I (mistakenly) thought that I was perfectly capable of creating my own system to run fast, based on research and my understanding of how methods and the body work. My self-assembled and constructed "method" has always been reasonably good for staying fit and strong . . .

I typically don't have time for classes, am genetically averse to following other people's methods and figured that I was smart enough to research my way to excellence. Not quite. I had fallen prey to a common version of the electronic post-modernist's "Descartes' Error": I think, therefore I am . . . able to do this myself, with a little "Google shopping".

So, I presented my "method," a full report on what I had done the preceding two weeks, to my new coach. In retrospect, it had everything but the kitchen sink in it. She was kind, to put it mildly. When I first explained my essentially ad hoc method, her reaction was (in essence):

"Hmm . . . Nice collection of tools . . . but where is your method? Aren't you a teacher?"

Turns out that I had many near-appropriate techniques and procedures, but they were either in the wrong order or done without the correct form or amount of weights or repetitions. In other words, great ideas, but a weak or counterproductive system.

So, how's your (pronunciation) method? Tried describing it lately? Could you? (Ask my grad students how easy that is!) When it comes to pronunciation, I think I know how to do that and help others in many different contexts construct their own, unique systems, but when it came to competitive running, turns out that I really didn't have a clue, plan . . . or effective method.

I have one (plan+coach) now, one that applies as much (or more) to fast running as it does to effective pronunciation teaching or any instruction for that matter. Some features of "our" new method:
  • Reasonable and really achievable goals that will reveal incremental progress.
  • Progress is not always immediate and perceptible, but it becomes evident "on schedule" according to the method/ologist! (Good methods "future pace", spell out what should happen and when.)
  • Near perfect form as a target is essential, if only in terms of simplicity of focus, but combined with the ongoing assessment and assistance of a "guide," gradual approximation is the gold standard.
  •  Having a model, in my case, Bill Rogers, Olympic marathoner perhaps, or a native speaker in teaching, is OK as long as the goal is the good form of the model, the process, not the ultimate outcome.
  • Regular, prescribed practice, coupled with systematic feedback, probably from a person at this point in time, is the soul of method. "Overdoing" it is as counterproductive as "under-doing" it.
  • Lessons and homework are rationally and explicitly scaffolded, building across time, for the most part at the direction of the method/ologist. That can't be "neo-behaviorist" in nature, but the framework has to be there in some cognitive-behavioral-neurophysiological form, where focus of attention is engineered in carefully.
  • Unstructured, random meta-cognitive analysis of the method (not the data) undermines results, but near absolute concentration on movement and intensity, moment by moment, is the sine qua non of it all. 
  • Meta-communication (planning, monitoring) of the process should be highly interactive, of course, but generally more controlled by the method/ologist than the learner--flexible enough to adjust to learners and contexts, but only when the brain/mind is allowed such "out of body" experience. 
To the extent that pronunciation is a more somatic/physical process, does that not suggest why efficient pronunciation work can be elusive? If you are in a program where there is a pronunciation class that meets some or most of those criteria--and where the other instructors in the program can support and follow up to some extent on what is done there--things work.

If not, if it is mostly just up to you, what do you do? Well, you pick some strategic targets, like stress, intonation and high functional load consonants for your students. In addition, you selectively use some of the features above, many of which apply to all instruction, not just pronunciation, and hope for the best.

Method rides again, but this time as a comprehensive body-mind system that is more and more feasible and achievable, e.g., Murphy's new book, but still potentially time consuming, expensive and maddening if you have to go it alone. 

Of course, if you don't have the time or resources to do relatively minimal pronunciation work, you can still probably find an expert-book-website to send yourself and students to for basics. There are many. Of course, I'd suggest one in particular . . .







Monday, March 26, 2018

What you see is what you forget: pronunciation feedback perturbations

Tigger warning*: This blogpost concerns disturbing images, perturbations, during pronunciation work.

In some sense, almost all pronunciation teaching involves some type of imitation and repetition of a model. A key variable in that process is always feedback on our own speech, how well it conforms to the model presented, whether coming to us through the air or perhaps via technology, such as headsets--in addition to the movement and resonance we feel in our vocal apparatus and bone structure in the head and upper body.  Likewise, choral repetition is probably the most common technique, used universally. There are, of course, an infinite number of reasons why it may or may not work, among them, of course, distraction or lack of attention.

Clker.com
We generally, however, do not take all that seriously what is going on in the visual field in front of the learner while engaged in repetition of L2 sounds and words. Perhaps we should. In a recent study by Liu et al, Auditory-Motor Control of Vocal Production during Divided Attention: Behavioral and ERP Correlates, it was shown that differing amounts of random light flashes in the visual field affected the ability of subjects to adjust the pitch of their voice to the model being presented for imitation. The research was done in Chinese, with native Mandarin speakers attempting to adjust the tone patterns of words presented to them, along with the "light show". They were instructed to produce the models they heard as accurately as possible.

What was surprising was the degree to which visual distraction (perturbation) seemed to directly impact subjects' ability to adjust their vocal production pitch in attempting to match the changing tone of the models they were to imitate. In other words, visual distraction was (cross-modally) affecting perception of change and/or subsequent ability to reproduce it. The key seems to be the multi-modal nature of working memory itself. From the conclusion: "Considering the involvement of working memory in divided attention for the storage and maintenance of multiple sensory information  . . .  our findings may reflect the contribution of working memory to auditory-vocal integration during divided attention."

The research was, of course, not looking at pronunciation teaching, but the concept of management of attention and the visual field is central to haptic instruction, in part because touch, movement and sound are so easily overridden by visual stimuli or distraction. Next time you do a little repetition or imitation work, figure out some way to ensure that working memory perturbation by what is around learners is kept to a minimum. You'll SEE the difference. Guaranteed.

Citation:
Liu Y, Fan H, Li J, Jones JA, Liu P, Zhang B and Liu H (2018) Auditory-Motor Control of Vocal Production during Divided Attention: Behavioral and ERP Correlates. Front. Neurosci. 12:113. doi: 10.3389/fnins.2018.00113

*The term "Tigger warning" is used on this blog to indicate potentially mild or nonexistent emotional disruption that can easily be overrated. 

Saturday, March 3, 2018

Attention! The "Hocus focus" effect on learning and teaching

Clker.com
"We live in such an age of chatter and distraction. Everything is a challenge for the ears and eyes" (Rebecca Pidgeon)  "The internet is a big distraction." (Ray Bradbury)


There is a great deal of research examining the apparent advantage that children appear to have in language learning, especially pronunciation. Gradually, there is also accumulating a broad research base on another continuum, that of young vs. "mature" adult learning in the digital age. Intriguing piece by Nir Eyal, posted at one of my favorite, occasional light reads, Businessinsider.com, entitled, Your ability to focus has probably peaked: here's how to stay sharp.

The piece is based in part on The Distracted Mind: Ancient Brains in a High-Tech World by Gazzaley and Rosen. One of the striking findings of the research reported, other than the fact that your ability to focus intently apparently peaks at age 20, is that there is actually no significant difference in focusing ability between those in their 20s and someone in their 70s. What is dramatically different, however, is one's susceptibility to distraction. Just like the magician's "hocus pocus" use of distraction, in a very real sense, it is our ability to not be distracted that may be key, not our ability to simply focus our attention however intently on an object or idea. It is a distinction that does make a difference.

The two processes, focusing and avoiding distraction, derive from different areas of the brain. As we age, or in some neurological conditions emerging from other causes such as injury or trauma, it may get more and more difficult to keep extraneous information or perception from intruding on our thinking. Our executive functions become less effectual. Sound familiar? 

In examining the effect of distraction on subjects of all ages as they focused on remembering targeted material, being confronted with a visual field filled with various photos of people or familiar objects, for example, was significantly more distracting than closing one's eyes (which was only slightly better, in fact), which in turn was worse than being faced with a plain visual field of one color, with no pattern--the most enabling visual field for the focus task. In other words, clutter trumps focus, especially with time. Older subjects were significantly more distracted in all three conditions, but were still better able to focus in the latter, less cluttered visual field.

Some interesting implications for teaching there--and validation of our intuitions as well, of course. Probably the most important is that explicit management of not just attention of the learner, but sources of distraction, not just in class but outside as well, may reap substantial benefits. This new research helps to further justify broader interventions and more attention on the part of instructors to a whole range of learning condition issues. In principle, anything that distracts can be credibly "adjusted", especially where fine distinctions or complex concepts are the "focus" of instruction.

In haptic pronunciation work, where the felt sense of what the body is doing should almost always be a prominent part of the learner's awareness, the assumption has been that one function of that process is to better manage attention and visual distraction. If you know of a study that empirically establishes or examines the effect of gesture on attention during vocal production, please let us know!

The question: Is the choice of paying attention or not a basic "student right?" If it isn't, how can you further enhance your effectiveness by better "stick handling" all sources of distraction in your work . . . including your desktop(s) and the space around you at this moment?

For a potentially productive distraction this week, take a fresh look at what your class feels like and "looks like" . . . without the usual "Hocus focus!"










Wednesday, February 14, 2018

Ferreting out good pronunciation: 25% in the eye of the hearer!

Clker.com
Something of an "eye opening" study, Integration of Visual Information in Auditory Cortex Promotes Auditory Scene Analysis through Multisensory Binding, by Town, Wood, Jones, Maddox, Lee, and Bizley of University College London, published in Neuron. One of the implications of the study:

"Looking at someone when they're speaking doesn't just help us hear because of our ability to recognise lip movements – we've shown it's beneficial at a lower level than that, as the timing of the movements aligned with the timing of the sounds tells our auditory neurons which sounds to represent more strongly. If you're trying to pick someone's voice out of background noise, that could be really helpful," They go on to suggest that someone with hearing difficulties have their eyes tested as well.

I say "implications" because the research was actually carried out on ferrets, examining how sound and light combinations were processed by their auditory neurons in their auditory cortices. (We'll take their word that the ferret's wiring and ours are sufficiently alike there. . . )

The implications for language and pronunciation teaching are interesting, namely: strategic visual attention to the source of speech models and participants in conversation may make a significant impact on comprehension and learning how to articulate select sounds. In general, materials designers get it when it comes to creating vivid, even moving models. What is missing, however, is consistent, systematic, intentional manipulation of eye movement and fixation in the process. (There have been methods that dabbled in attempts at such explicit control, e.g., "Suggestopedia"?)

In haptic pronunciation teaching we generally control visual attention with gesture-synchronized speech which highlights stressed elements in speech, and something analogous with individual vowels and consonants. How much are your students really paying attention, visually? How much of your listening comprehension instruction is audio only, as opposed to video sourced? See what I mean?

Look. You can do better pronunciation work.


Citation: (Open access)









Friday, December 15, 2017

Object fusion in (pronunciation) teaching for better uptake and recall!

Your students sometimes can't remember what you so ingeniously tried to teach them? A new study by D’Angelo, Noly-Gandon, Kacollja, Barense, and Ryan at the Rotman Research Institute in Ontario, "Breaking down unitization: Is the whole greater than the sum of its parts?" (reported by Neurosciencenews.com), suggests an "ingenious" template for helping at least some things "click and stick" better. What you need for starters:
  • 2 objects (real or imagined) (to be fused together)
  • an action linking or involving them, which fuses them
  • a potentially tangible, desirable consequence of that fusion
Clker.com
The example from the research of the "fusing" protocol was to visualize sticking an umbrella in the key hole of your front door to remind yourself to take your umbrella so you won't get soaking wet on the way to work tomorrow. Subjects who used that protocol, rather than just motion or action/consequence, were better at recalling the future task. Full disclosure here: the subjects were adults, age 61 to 88. Being near dead center in the middle of that distribution, myself, it certainly caught my attention! I have been using that strategy for the last two weeks or so with amazing results . . . or at least memories!

So, how might that work in pronunciation teaching? Here's an example:

Consonant: th (voiceless)
Objects: upper teeth, lower teeth, tongue
Fusion: tongue tip positioned between the teeth as air blows out (action)
Consequence: better pronunciation of the th sound

Haptic pronunciation adds to the con-fusion

Vowel (low, central 'a'), done haptically (gesture + touch)
Objects: hands touch at waist level, as vowel is articulated, with jaw and tongue lowered in mouth, with strong, focused awareness of vocal resonance in the larynx and bones of the face.
Fusion: tongue and hand movement, sound, vocal resonance and touch
Consequence: better pronunciation of the 'a' sound

Key concept: It is not much of a stretch to say that our sense of touch is really our "fusion" sense, in that it serves as a nexus-agent for the others (Fredembach et al., 2009; Lagarde and Kelso, 2006). Much like the created image of the umbrella in the keyhole evokes a memorable "embodied" event, probably even engaged with our tactile processing center(s), the haptic pedagogical movement pattern (PMP) should work in similar manner, either in actual physical practice or visualized.

One very effective technique, in fact, is to have learners visualize the PMP (gesture+sound+touch) without activating the voice. (Actually, when you visualize a PMP it is virtually impossible to NOT experience it, centered in your larynx or voice box.)

If this is all difficult for you to visualize or remember, try first imagining yourself whacking your forehead with your iPhone and shouting "Eureka!"

Citation:
Baycrest Center for Geriatric Care (2017, August 11). Imagining an Action-Consequence Relationship Can Boost Memory. NeuroscienceNews. Retrieved August 11, 2017 from http://neurosciencenews.com/Imagining an Action-Consequence Relationship Can Boost Memory/

Saturday, October 14, 2017

Empathy for strangers: better heard and not seen? (and other teachable moments)

The technique of closing one's eyes to concentrate has both everyday sense and empirical research support. For many, it is common practice in pronunciation and listening comprehension instruction. Several studies of the practice under various conditions have been reported here in the past. A nice 2017 study by Kraus of Yale University, Voice-only communication enhances empathic accuracy, examines the effect from several perspectives.
😑
What the research establishes is that perception of the emotion encoded in the voice of a stranger is more accurately determined with eyes closed, as opposed to just looking at the video or watching the video with sound on. (Note: The researcher concedes in the conclusion that the same effect might not be as pronounced were we listening to the voice of someone we are familiar or intimate with, or were the same experiments to be carried out in some culture other than "North American".) In the study there is no unpacking of just which features of the strangers' speech are being attended to, whether linguistic or paralinguistic, the focus being:
 . . . paradoxically that understanding others’ mental states and emotions relies less on the amount of information provided, and more on the extent that people attend to the information being vocalized in interactions with others.
😑
The targeted effect is statistically significant, well established. The question is, to paraphrase the philosopher Bertrand Russell, does this "difference that makes a difference" make a difference--especially to language and pronunciation teaching?
😑
How can we use that insight pedagogically? First, of course, is the question of how MUCH better the closed-eyes condition will be in the classroom and, even if it is better initially, whether it will hold up with repeated listening to the voice sample or conversation. Second, in real life, when do we employ that strategy, either on purpose or by accident? Third, there was a time when radio or audio drama was a staple of popular media and instruction. In our contemporary visual media culture, as reflected in the previous blog post, the appeal of video/multimedia sources is near irresistible. But, maybe still worth resisting?
😑
Especially with certain learners and classes, in classrooms where multi-sensory distraction is a real problem, I have over the years worked successfully with explicit control of visual/auditory attention in teaching listening comprehension and pronunciation. (It is prescribed in certain phases of haptic pronunciation teaching.) My sense is that the "stranger" study is actually tapping into comprehension of new material or ideas, not simply new people/relationships and emotion. Stranger things have happened, eh!
😑
If this is a new concept to you in your teaching, close your eyes and visualize just how you could employ it next week. Start with little bits, for example when you have a spot in a passage of a listening exercise that is expressively very complex or intense. For many, it will be an eye opening experience, I promise!
😑

Source:
Kraus, M. (2017). Voice-only communication enhances empathic accuracy. American Psychologist, 72(6), 644-654.