Sunday, August 12, 2018

Feeling distracted, distant or drained by pronunciation work? Don't be downcast; blame your smartphone!

It all makes sense now. I knew there had to be more (or less) going on when students are not thoroughly engaged or seemingly not attentive during pronunciation teaching--mine, especially. Two new studies, taken together, provide a depressing picture of what we are up against, but also suggest something of an antidote.

Tigger warning: This may be perceived as slightly more fun than (new/old) science. 

The first, summarized by ScienceDaily.com, is Dealing with digital distraction: Being ever-connected comes at a cost, studies find, by Dwyer and Dunn of The University of British Columbia. From the summary:

"Our digital lives may be making us more distracted, distant and drained . . . Results showed that people reported feeling more distracted during face-to-face interactions if they had used their smartphone compared with face-to-face interactions where they had not used their smartphone. The students also said they felt less enjoyment and interest in their interaction if they had been on their phone."

What is most interesting or relevant about the studies reported, and the related literature review, is the focus on the impact of smartphone use just prior to what should be quite meaningful f2f interaction--either dinner or a more intimate conversation--THE ESSENCE OF EFFECTIVE PRONUNCIATION AND OTHER FORMS OF INSTRUCTION! Somehow the digital "appetizer" made the meal and interpersonal interaction . . . well . . . considerably less appetizing.

Why should that be the case? The research on the multiple ways in which digital life can be depersonalizing and disconnecting is extensive and persuasive, but there is maybe something more "at hand" here.

A second study--which caught my eye as I was websurfing on the iPhone in the drive-through lane at Starbucks--dealt with what seem to be similar effects produced by "bad" posture, specifically studying something with head bowed, as opposed to doing the same with the text at eye level, with optimal posture: "Do better in math: How your body posture may change stereotype threat response" by Peper, Harvey, Mason, and Lin of San Francisco State University, summarized on NeuroscienceNews.com.

Subjects did better and felt better if they sat upright and relaxed, as opposed to looking down at the study materials, a posture which, according to the authors, "is a defensive posture that can trigger old negative associations."

So, add up the effect of those two studies and what do you get? Lousy posture AND digital, draining distraction. Not only do my students use smartphones WITH HEAD BOWED up until the moment class starts, but I even have them do more of it in class! 

Sit up and take note, eh!

Citations:
American Psychological Association. (2018, August 10). Dealing with digital distraction: Being ever-connected comes at a cost, studies find. ScienceDaily. Retrieved August 12, 2018 from www.sciencedaily.com/releases/2018/08/180810161553.htm

San Francisco State University (2018, August 4). Math + Good Posture = Better Scores. NeuroscienceNews. Retrieved August 4, 2018 from http://neurosciencenews.com/math-score-posture-9656/


Saturday, July 28, 2018

Mesmerizing teaching (and pronunciation teachers)


The topics of  attention salience and unconscious learning have come up any number of times over the course of the history of the blog, beginning with one of my favorites on that subject back in 2011 on Milton Erickson. In part because of the power of media today and the "discoveries" by neuroscience that we do, indeed, learn on many levels, some out of our immediate awareness, there is renewed interest in the topics--even from Starbucks!

A fascinating new book (to me at least) by Ogden, Credulity: A Cultural History of US Mesmerism, summarized by Neuroscience News, explores the history of "Mesmerism" and a bit about its contemporary manifestations. (QED . . . if you were not aware that it is still with us!) Ogden is most interested in understanding the abiding attraction of purposeful manipulation or management of unconscious communication, attention and learning. One fascinating observation, from the Neuroscience News summary, is:

" . . . that one person’s power of suggestion over another enables the possibility of creating a kind of collaborative or improvisational performance, even unintentionally without people setting it up on purpose."

Get that? ". . . collaborative or improvisational performance . . ." created "unintentionally." Are you aware that you promote that or do any of that in your classroom? If you are, great; if not, great, but is that not also an interesting characterization of the basis of interaction in the language teaching classroom, especially where the focus is modeling, corrective feedback and metacognitive work in pragmatics and usage? In other words, suggestion is at the very heart of instructor-student engagement in some dimensions of the pedagogical process. Unconscious learning and relational affinities were for some time contained in Chomsky's infamous "black box," but are now the subject of extensive research in neuroscience and elsewhere.

And there are, of course, any number of factors that may affect what goes on "below decks," as it were. Turns out there is (not surprisingly) even a well-established gender dimension or bias to unconscious learning as well. Ya think? A 2015 study by Ziori and Dienes, published in Frontiers in Psychology, highlights a critical feature of that cognitive process keyed or confounded by the variable of "attentional salience."

In that study, "Facial beauty affects implicit and explicit learning of men and women differently", the conscious and unconscious learning of men was significantly downgraded when the task involved analyzing language associated with the picture of a beautiful woman. Women, on the other hand, actually did BETTER in that phase of the study. The beautiful face did not distract them in the least; it seemed, in fact, to further concentrate their cognitive processing of the linguistic puzzle.

Exactly why that is the case, the researchers can only speculate. For example, it may be that men are programmed to see a beautiful woman initially as more "physically of interest," whereas women may see or sense first a competitor, which actually sharpens their processing of the problem at hand. It was very evident, however, that what is termed "incentive salience" had a strong impact, or at least siphoned off cognitive processing resources . . . for the boys.

There are many dimensions of what we do in instruction that are loaded with "incentive salience": fun or stimulating stuff that we suppose will, in essence, attract attention or stimulate learners to at least wake up so we can do something productive. Pronunciation instruction is filled with such gimmicks and populated by a disproportionate number of former cheerleaders and "dramatic personae." The combination of unconscious connectivity and "beautiful" techniques may actually work against us.

In haptic work we figured out about a decade ago that not only how you look but also what you wear can impact the effectiveness of mirroring of instructor gesture in class. The fact that I am old and bald may account for why students find me easier to follow than some of my younger associates. Take heart, my friends: the assumed evolutionary advantage of "beautiful people" may not only be waning, but may actually be working against them, in the pronunciation classroom at least!



Monday, July 16, 2018

"A word in the hand is worth two in the ear!" (On the relationship between touch and audition in pronunciation teaching)

Just got back from a couple of weeks in China. Always good to reconnect with some of the roots of things haptic, especially Chinese traditional medicine and acupressure and acupuncture systems. About 30 years ago I was introduced to the concept of "qi" and the notion of the "energy healing" arts. Not surprisingly, the hands play a prominent part, in that a number of key acupressure points are located there, especially at the center of the hands, the palms. In fact, one of the most important acupressure points, Lao Gong (Pericardium-8), one associated with "the place of labor," is there at the center of the palm. (To find it, make a gentle pointing fist and note where your ring finger touches the palm.)

In haptic pronunciation teaching, most of the sounds are anchored using touch and movement: movement, sound and touch intersect on stressed elements of words, phrases or sentences, as the fingers of one hand touch the center of the palm of the other, using any of several types of touch, e.g., tapping, scraping, or anything from slight pressure up to intense, extended pressure.

In pronunciation teaching, and especially when focusing on vowel and consonant articulation, awareness and direction of touch, as with the various articulators in the mouth or throat area, may or may not figure prominently in pedagogy. Generally, the latter, unfortunately . . .

A fascinating new study by Yau of Baylor College of Medicine, reported by ResearchFeatures.com, has, in some sense, "uncovered" more of the basic interdependence of hearing and touch. In part that is because both senses are managed or mediated in something of the same area of the brain. The most striking finding, however, is that the same degree of "supramodality" probably applies across all the senses as we think of them today.

In other words, evidence of a touch-hearing supramodality confirms again that the same interrelationship probably exists among all senses, including (as in haptic work) kinesthetic-visual-audio-tactile. One of the early discoveries about the function of touch in perception (and any number of studies since) has been that it serves to "unite" the senses, functioning in a more exploratory capacity, and often temporarily at that (Fredembach et al., 2009; Lagarde and Kelso, 2006). Turns out, touch does more than that!

When instructors, especially those with adult students, refer to "multi-sensory" teaching, they are typically referring to visual-auditory (and maybe some kinesthetic) engagement only, not the use of systematic touch. With the Yau research we understand more about how the senses naturally connect, even without our interference or design. Also, however, we see (and feel) here the capability of touch, for example, to affect learning of sound--and vice versa.

Those with any degree of synesthesia, where senses are actually experienced through some other modality, have been into this from birth. We are beginning to catch up and see the potential application of that perspective. The possibilities for any number of disciplines, from rehabilitation to pronunciation instruction, are fascinating.

To not go "supramodal" now would, of course, be . . . senseless.  More on the specific application of Yau's research to enhancing pronunciation instruction in general, and haptic work specifically, will follow in subsequent posts.

Keep in touch!

Thursday, May 24, 2018

Paying attention to paying attention! Or else . . . !

Two very accessible, useful blogposts by Mike Hobbis, PhD student in neuroscience @UCL, serve as primers on attention in teaching and are worth a read: one on why there should be more research on attention in the classroom, and a second, which I like a lot, on attention as an effect, not just a cause.

Hobbis' basic point is that attention should be more the "center of attention" in methodology and research today than it is. Why it isn't is a really good question. In part, there are just so many other things to "attend to" . . .

I was really struck by the fact that I, too, still tend to use attention more as a cause, not an effect, meaning: if students are not paying attention in some form, my lesson plan or structure cannot possibly be at fault; it is probably the continuous "laptopping" during the class or lack of sleep on their parts. The research on the impact of multitasking at the keyboard in school on a whole range of subjects and tasks, for example, is extensive . . . and inconclusive--except in teaching pronunciation, where, as far as I can determine, there is none. (If you know of some PLEASE post the link here!)

There is, of course, a great deal of research on paying attention to pronunciation per se, from various perspectives, such as Counselman (2015) on "forcing" students to pay attention to their pronunciation and its variance from a model. But the extent to which variable attention alone contributes to the overall main effect is not pulled out in any study that I have been able to find.

Now I am not quite at Counselman's level of "forcing" attention, either by totally captivating instruction or by capturing the attention and holding it hostage along the way, but Hobbis makes a very good point in the two blogposts: attention must go in both directions, if not simultaneously, then at least systematically. In haptic pronunciation work--or most pronunciation teaching, for that matter--the extensive use of gesture alone should function at both levels. The same applies to any movement-enhanced methodology, such as TPR (Total Physical Response), or mind-body interplay, as in Mindfulness training. The question, of course, is how mindful and intentional our methodology is.

There has been a resurgence of attention to attention in the last decade in a number of sub-disciplines in neuroscience as well. Have you been paying attention--either to the research or in your classroom? If so, share that w/us, too! (The next blogpost will focus on the range of attention-driven, neuroscience-grounded best practice classroom techniques.) Join that conversation. You have our attention!




Thursday, May 10, 2018

I like the way you move there! (Why haptic pronunciation teaching is so attractive!)

Do you like your students? Really? If you do, can they tell? If you don't, do they know? Do you like teaching pronunciation? Does it show?

If your answer to any of those 6 questions is "I don't know . . . ," A meta-analytic investigation of the relation between interpersonal attraction and enacted behavior, by Montoya, Kershaw and Prosser, summarized by Neurosciencenews.com, may be of interest. What they did is look at a bunch of studies, done on "hundreds" of cultures, trying to find universally recognized human behaviors that signal attraction (e.g., I like you!). Those nonverbal behaviors that (they claim) are universal are:
  • Smiling
  • Eye contact
  • Proximity (getting close in space)
  • Laughter
Now, of course, how those behaviors are actually conveyed in different cultures may be quite different, but it is a fascinating claim. The summary goes on: 

"Other behaviors showed no evidence of being related to liking, including when someone flips their hair, lifts their eyebrows, uses gestures, tilts their head, primps their clothes, maintains open body posture or leans in." (Some of those at least intuitively seem to be related to attraction, at least in North American or Northern European cultures.)
Another of the most striking findings (to me, at least) is that mimicking (or mirroring) and head nods were only associated with attraction in English. In other words, if your nonverbal messaging or expectations of students in the classroom relies to any extent on mirroring (of you or of your mirroring of them) or head nodding--and for the native-English-speaking instructor it certainly will to some degree--there can be a very real affective mismatch.

Any native English speaker who has taught in Japan, for instance, can easily have their perception of audience engagement scrambled initially, when those in the audience sit (apparently) very still, with less body movement or mirroring, and nod heads for reasons other than just understanding or attraction. 

The intriguing implication of that research, in terms of haptic pronunciation teaching and training, is that both head nods and mirroring figure very prominently in the teaching methodology, in effect making it perhaps even more "English-centric" than we had imagined. In most instances of modeling or correction of pronunciation, for example, a student is "invited" to synchronize his or her upper body movement with the instructor or other students, as they repeat the targeted word, phrase or clause together. Likewise, upper torso movement in English and in the haptic system accompanies or drives head nodding, often referred to as upper torso nods, in fact.

In other words, the basic pedagogical process of haptic pronunciation work is, itself, "attractive," involving nonverbal "synchronization" of head and body in ways that enable acquisition of English, at least. The only other language that we have done some work in to date is Spanish, but its "body language" is, of course, closely related to that of English.

Even if you are not entirely "attracted" to haptic yet, this research certainly lends more support for the use of mirroring in English language instruction, especially pronunciation. (Nod if you agree!)



Source:
"A meta-analytic investigation of the relation between interpersonal attraction and enacted behavior" by Montoya, R. Matthew; Kershaw, Christine; & Prosser, Julie L., in Psychological Bulletin. Published May 8, 2018. doi:10.1037/bul0000148


Sunday, April 29, 2018

Mission unpronounceable: When there's no method to the madness . . .

Caveat Emptor: I am (a) a near-fanatical exerciser, (b) a language teaching method/ologist with about 50 years in the field, and (c) a compulsive researcher, and (d) this post is maybe a little "retro." You'd think that the (b) and (c) skill sets would naturally combine to make me a near world-class athlete. In my dreams, maybe . . .

For years, when asked how to get started exercising like I do, my standard response has been:
  • Pick your grandparents well.
  • Get a trainer or sign up for a class -- Don't do it on your own. 
  • Follow the method.
  • Be disciplined and consistent.
  • Run the long race: a life of better fitness. 
Should have taken my own advice. I (mistakenly) thought that I was perfectly capable of creating my own system to run fast, based on research and my understanding of how methods and the body work. My self-assembled and constructed "method" has always been reasonably good for staying fit and strong . . .

I typically don't have time for classes, am genetically averse to following other people's methods and figured that I am smart enough to research my way to excellence. Not quite. I had fallen prey to a common version of the electronic post-modernist's "Descartes' Error": (I think, therefore I am) able to do this myself, with a little "Google shopping."

So, I presented my "method," a full report on what I had done the preceding two weeks, to my new coach. In retrospect, it had everything but the kitchen sink in it. She was kind, to put it mildly. When I first explained my essentially ad hoc method, her reaction was (in essence):

"Hmm . . . Nice collection of tools . . . but where is your method? Aren't you a teacher?"

Turns out that I had many near-appropriate techniques and procedures, but they were either in the wrong order or done without the correct form, amount of weight or number of repetitions. In other words, great ideas, but a weak or counterproductive system.

So, how’s your (pronunciation) method?  Tried describing it lately? Could you? (Ask my grad students how easy that is!) When it comes to pronunciation, I think I know how to do that and help others in many different contexts construct their own, unique systems, but when it came to competitive running, turns out that I really didn’t have a clue, plan . . . or effective method.

I have one (plan+coach) now, one that applies as much (or more) to fast running as it does to effective pronunciation teaching or any instruction for that matter. Some features  of "our" new method:
  • Reasonable and really achievable goals that will reveal incremental progress.
  • Progress is not always immediate and perceptible, but it becomes evident "on schedule" according to the method/ologist! (Good methods "future pace", spell out what should happen and when.)
  • Near perfect form as a target is essential, if only in terms of simplicity of focus, but combined with the ongoing assessment and assistance of a "guide," gradual approximation is the gold standard.
  •  Having a model, in my case, Bill Rogers, Olympic marathoner perhaps, or a native speaker in teaching, is OK as long as the goal is the good form of the model, the process, not the ultimate outcome.
  • Regular, prescribed practice, coupled with systematic feedback, probably from a person at this point in time, is the soul of method. "Overdoing" it is as counterproductive as "under-doing" it.
  • Lessons and homework are rationally and explicitly scaffolded, building across time, for the most part at the direction of the method/ologist. That can't be "neo-behaviorist" in nature, but the framework has to be there in some cognitive-behavioral-neurophysiological form, where focus of attention is engineered in carefully.
  • Unstructured, random meta-cognitive analysis of the method (not the data) undermines results, but near absolute concentration on movement and intensity,  moment by moment, is the sine qua non of it all. 
  • Meta-communication (planning, monitoring) of the process should be highly interactive, of course, but generally more controlled by the method/ologist than the learner--flexible enough to adjust to learners and contexts, but only when the brain/mind is allowed such "out of body" experience.
To the extent that pronunciation is a more somatic/physical process, does that not suggest why efficient pronunciation work can be elusive? If you are in a program where there is a pronunciation class that meets some or most of those criteria--and where the other instructors in the program can support and follow up to some extent on what is done there--things work.

If not, if it is mostly just up to you, what do you do? Well, you pick some strategic targets, like stress, intonation and high functional load consonants for your students. In addition, you selectively use some of the features above, many of which apply to all instruction, not just pronunciation, and hope for the best.

Method rides again, but this time as a comprehensive body-mind system that is more and more feasible and achievable, e.g., Murphy's new book, but still potentially time consuming, expensive and maddening if you have to go it alone. 

Of course, if you don't have the time or resources to do relatively minimal pronunciation work, you can still probably find an expert-book-website to send yourself and students to for basics. There are many. Of course, I'd suggest one in particular . . .







Saturday, April 14, 2018

Out of touch and "pointless" gesture use in (pronunciation) teaching

Two recently published, interesting papers illustrate potential problems and pleasures with gesture use in (pronunciation) teaching. Both authors, unfortunately, implicate or misrepresent haptic pronunciation training.

Note: In Haptic Pronunciation Training-English (HaPT-Eng) there is NO interpersonal touch, whatsoever. A learner's hands may touch each other, or the learner may hold something, such as a ball or pencil, that functions as an extension of the hand. Touch typically serves to control and standardize gesture--and integrate the senses--while amplifying the focus on stressed syllables in words or phrases.

This from Chan (2018), Embodied Pronunciation Learning: Research and Practice, in a special issue of The CATESOL Journal on research-based pronunciation teaching:

"In discussing the use of tactile communication or haptic interventions, they (Hişmanoglu and Hişmanoglu, 2008) advise language teachers to be careful. They cite a number of researchers who distinguish high-contact, touch-oriented societies (e.g., Filipino, Latin American, Turkish) from societies that are low contact and not touch oriented (e.g., Chinese, Japanese, Korean); the former may perceive the teacher’s haptic behavior (emphasis mine)as normal while the latter may perceive it as abnormal and uncomfortable. They also point out that in Islamic cultures, touching between people (emphasis mine) of the same gender is approved, but touching between genders is not allowed. Thus, while integrating embodied pronunciation methods into instruction, teachers need to remain constantly aware of the individuals, the classroom dynamics, and the attitudes students express toward these activities."

What Chan means by the "teacher's haptic behavior" is not defined. (She most probably means simply touching--tactile, not "haptic" in the technical sense, as in robotics, for example, or as we use it in HaPT-Eng, that is: gesture synchronized with speech and anchored with intra-personal touch that provides feedback to the learner.) For example, to emphasize word stress in HaPT-Eng, in a technique called the "Rhythm Fight Club," the teacher/learner may squeeze a ball on a stressed syllable, as the arm punches forward, as in boxing.

Again: There is absolutely no "interpersonal touch" or tactile or haptic communication, body-to-body, utilized in HaPT-Eng . . . It certainly could be, of course--acknowledging the precautions noted by Chan.

A second study, Shadowing for pronunciation development: Haptic-shadowing and IPA-shadowing, by Hamada, has a related problem with the definition of "haptic". In that nice study, subjects "shadowed" a model, that is, attempted to repeat what they heard (while viewing a script) simultaneously, along with the model. (It is a great technique, one used extensively in the field.) The IPA group had been trained in some "light" phonetic analysis of the texts before attempting the shadowing. The "haptic" group was trained in what was said (inaccurately) to be the Rhythm Fight Club. There was a slight main effect, nonetheless, the haptic group being a bit more comprehensible.

The version of the RFC used was not haptic; it was only kinesthetic (there was no touch involved), just using the punching gesture itself to anchor/emphasize designated stressed syllables in the model sentences. The kinesthetic (touchless) version of the RFC has been used in other studies with even less success! It was not designed to be used without something for the hand to squeeze on the stressed element of the word or sentence, which is what makes it haptic. In that form, the gesture use can easily become erratic and out of control--best case! That is one of the main--and fully justified--reasons for avoidance of gesture work by many practitioners, and it is also the central focus of HaPT-Eng: controlled, systematic use of gesture in anchoring prominence in language instruction.

But a slight tweak of the title of the Hamada piece from "haptic" to "kinesthetic", of course, would do the trick.

The good news: using just kinesthetic gesture (movement w/o touch anchoring), the main effect was discernible. The moderately "bad" news: it was not haptic--which (I am absolutely convinced) would have made the study much more significant--let alone more memorable, touching and moving . . .

Keep in touch! v5.0 of HaPT-Eng will be available later this summer!








Sunday, April 1, 2018

Blogpost #1000! - Gender discrimination in L2 listening and teaching!

How appropriate that the 1000th post on this blog is on the lighter side--but still with a useful "in-sight!"

Ever wonder why girls are better language learners than boys? A new study, Explicit Performance in Girls and Implicit Processing in Boys: A Simultaneous fNIRS–ERP Study on Second Language Syntactic Learning in Young Adolescents  by Sugiura, Hata, Matsuba-Kurita, Uga, Tsuzuki, Dan, Hagiwara, and Homae at Tokyo Metropolitan University, summarized by ScienceDaily.com, has recently demonstrated that, at least in listening to an L2:
  • Middle school boys tend to rely more on their left pre-frontal cortex, that part of the brain that is more visual, analytic and rule-oriented--and is connected more to the left hemisphere of the brain and right visual field. 
  • Middle school girls, on the other hand, tend to use the right area at the back of the brain that is more holistic, meaning- and relation-based--that is connected to the right hemisphere and left visual field.
Now granted, the subjects were pre-adolescent. That could well mean that within a year or two their general ability to "absorb" language holistically will begin to degrade even further, adding to the boys' handicap. (Although there is still the remote possibility that the effect would impact girls more than boys? Not really.)

Research on what is processed better in the left, as opposed to the right, visual field (the right, as opposed to the left, brain hemisphere) was referenced recently in a fun piece on Neurosciencemarketing.com, How a Strange Fact About Eyeballs Could Change Your Whole Marketing Plan: What public speakers accidentally know about neuroanatomy, by Tim David. It finally provided an explanation for the long-established principle in show business that you go "stage left" (into the right visual field of the audience) if you want to get a laugh, and you go stage right if you want tears and emotion. (If you don't believe that is true, try both perspectives in class a few times.)

(Most of us) boys really don't have a chance, at least not in terms of contemporary language teaching methodology either! Not only does de-emphasis on form or structure in instruction give girls an unfair advantage, moving away from boys' preferred processing style, but where are left-brained (generally right-handed) instructors more likely to gesture and direct their gaze? You got it--right into the girls' preferred left visual fields. And that is NOT funny!

So, lighten your cognitions up a bit, move more stage left, and cater a little more to the boys' need for rules and reasons, eh!



Monday, March 26, 2018

Haptic Pronunciation Teaching Workshops in Japan!

We are now scheduling workshops in Japan between June 19th and 26th. If your school would like to host a half-day Haptic Pronunciation Teaching workshop, let us know as soon as possible. We have just those 7 days open.
  • The workshops can be morning or afternoon, and can involve up to 200 participants. 
  • A nice venue with moveable chairs (no tables) and good sound is all that is required. Materials, including access to web-based video models of all techniques presented, are provided. A video recording of the workshop is also OK. 
  • There are 4 different workshops available. One for experienced teachers, one for teachers-in-training, one for teachers with little or no background in pronunciation teaching, and one for high school age learners and older.
  • Cost for the workshops begins at $500 CAD (40,000 yen), depending on audience size.
  • If interested, contact us by comment here or at: info@actonhaptic.com! (If your school is in some other country, we will be available for another "tour" Spring, 2019!)





What you see is what you forget: pronunciation feedback perturbations

Tigger warning*: This blogpost concerns disturbing images, perturbations, during pronunciation work.

In some sense, almost all pronunciation teaching involves some type of imitation and repetition of a model. A key variable in that process is always feedback on our own speech: how well it conforms to the model presented, whether coming to us through the air or perhaps via technology, such as headsets--in addition to the movement and resonance we feel in our vocal apparatus and bone structure in the head and upper body. Likewise, choral repetition is probably the most common technique, used universally. There are, of course, an infinite number of reasons why it may or may not work, among them distraction or lack of attention.

We generally, however, do not take all that seriously what is going on in the visual field in front of the learner while engaged in repetition of L2 sounds and words. Perhaps we should. In a recent study by Liu et al., Auditory-Motor Control of Vocal Production during Divided Attention: Behavioral and ERP Correlates, it was shown that differing amounts of random light flashes in the visual field affected the ability of learners to adjust the pitch of their voice to the model being presented for imitation. The research was done in Chinese, with native Mandarin speakers attempting to adjust the tone patterns of words presented to them, along with the "light show". They were instructed to produce the models they heard as accurately as possible.

What was surprising was the degree to which visual distraction (perturbation) seemed to directly impact subjects' ability to adjust their vocal production pitch in attempting to match the changing tone of the models they were to imitate. In other words, visual distraction was (cross-modally) affecting perception of change and/or subsequent ability to reproduce it. The key seems to be the multi-modal nature of working memory itself. From the conclusion: "Considering the involvement of working memory in divided attention for the storage and maintenance of multiple sensory information  . . .  our findings may reflect the contribution of working memory to auditory-vocal integration during divided attention."

The research was, of course, not looking at pronunciation teaching, but the concept of management of attention and the visual field is central to haptic instruction, in part because touch, movement and sound are so easily overridden by visual stimuli or distraction. Next time you do a little repetition or imitation work, figure out some way to ensure that working memory perturbation by what is around learners is kept to a minimum. You'll SEE the difference. Guaranteed.

Citation:
Liu Y, Fan H, Li J, Jones JA, Liu P, Zhang B and Liu H (2018) Auditory-Motor Control of Vocal Production during Divided Attention: Behavioral and ERP Correlates. Front. Neurosci. 12:113. doi: 10.3389/fnins.2018.00113

*The term "Tigger warning" is used on this blog to indicate potentially mild or nonexistent emotional disruption that can easily be overrated. 

Wednesday, March 14, 2018

Teaching English L2 advanced conversation (with hand2hand prosodic and paralinguistic "comeback")

We'll be doing a new workshop, "Pronunciation across the 'spaces' between sentences and speakers," at the 2018 BCTEAL Conference here in Vancouver in May. Here is the summary:


This workshop introduces a set of haptic (movement + touch) based techniques for working with English discourse-level prosodic and paralinguistic bridges between participants in conversation, including key, volume and pace. Some familiarity with the teaching of L2 prosodics (basically: rhythm, stress, juncture and intonation) is recommended.

The framework is based to some extent on Prosodic Orientation in English Conversation, by Szczepek-Reed, and on new features of v5.0 of the haptic pronunciation teaching system, Essential Haptic-integrated English Pronunciation (EHIEP), available by August 2018. The innovation is the use of several pedagogical movement patterns (PMPs) that help learners attend to the matches and mismatches of prosodics and paralanguage between participants in conversation that create and maintain coherence and . . . empathy across conversational turns.

For a quick glimpse of just the basic prosodic PMPs, see the demo of the AH-EPS ExIT (Expressiveness) from EHIEP v2.0.

The session is only 45 minutes long, so it will just be an experiential overview or tour of the set of speech-synchronized-gesture-and-touch techniques. The video, along with handouts, will be linked here in late May.

Join us!





Saturday, March 3, 2018

Attention! The "Hocus focus" effect on learning and teaching

"We live in such an age of chatter and distraction. Everything is a challenge for the ears and eyes" (Rebecca Pidgeon)  "The internet is a big distraction." (Ray Bradbury)


There is a great deal of research examining the apparent advantage that children appear to have in language learning, especially pronunciation. Gradually, there is also accumulating a broad research base on another continuum, that of young vs. "mature" adult learning in the digital age. An intriguing piece by Nir Eyal, posted at one of my favorite occasional light reads, Businessinsider.com, is entitled Your ability to focus has probably peaked: here's how to stay sharp.

The piece is based in part on The Distracted Mind: Ancient Brains in a High-Tech World by Gazzaley and Rosen. One of the striking findings of the research reported, other than the fact that your ability to focus intently apparently peaks at age 20, is that there is actually no significant difference in focusing ability between those in their 20s and someone in their 70s. What is dramatically different, however, is one's susceptibility to distraction. Just like the magician's "hocus pocus" use of distraction, in a very real sense, it is our ability to not be distracted that may be key, not our ability to simply focus our attention however intently on an object or idea. It is a distinction that does make a difference.

The two processes, focusing and avoiding distraction, derive from different areas of the brain. As we age, or in some neurological conditions emerging from other causes such as injury or trauma, it may get more and more difficult to keep information or perception being generated around us from intruding on our thinking. Our executive functions become less effectual. Sound familiar?

In examining the effect of distraction on subjects of all ages as they focused on remembering targeted material, being confronted with a visual field filled with various photos of people or familiar objects, for example, was significantly more distracting than closing one's eyes (which was only slightly better, in fact), which in turn was worse than being faced with a plain visual field of one color, with no pattern--the most enabling visual field for the focus task. In other words, clutter trumps focus, especially with time. Older subjects were significantly more distracted in all three conditions, but were still better able to focus in the latter, less cluttered visual field.

Some interesting implications for teaching there--and validation of our intuitions as well, of course. Probably the most important is that explicit management not just of the learner's attention, but of sources of distraction, not just in class but outside as well, may reap substantial benefits. This new research helps to further justify broader interventions and more attention on the part of instructors to a whole range of learning-condition issues. In principle, anything that distracts can be credibly "adjusted", especially where fine distinctions or complex concepts are the "focus" of instruction.

In haptic pronunciation work, where the felt sense of what the body is doing should almost always be a prominent part of the learner's awareness, the assumption has been that one function of that process is to better manage attention and visual distraction. If you know of a study that empirically establishes or examines the effect of gesture on attention during vocal production, please let us know!

The question: Is the choice of paying attention or not a basic "student right?" If it isn't, how can you further enhance your effectiveness by better "stick handling" all sources of distraction in your work . . . including your desktop(s) and the space around you at this moment?

For a potentially productive distraction this week, take a fresh look at what your class feels like and "looks like" . . . without the usual "Hocus focus!"


Friday, February 23, 2018

How watching curling can make you a better teacher!

Tigger alert: This post contains application of insights from curling and business sales to teaching, certainly nothing to be Pooh-Poohed. 

The piece by Dooley on Forbes.com, How watching curling helps you sell better, explores the potential effects of ongoing attention to sales, brushing away obstacles, influencing the course of "the rock." Most importantly, however, it emphasizes the idea of constantly examining and influencing the behavior of your customers (your students).

It sounds at first like that analogy flies in the face of empowering the learner and encouraging learner autonomy, let alone smacking of questionable manipulation . . . Not quite. It speaks more to instructor responsibility for doing as much as possible to facilitate the process, but especially to the whole range of "influencing" behaviors that neuroscience is "rediscovering" for us, many of them less explicit and only marginally out of learner awareness, such as room milieu, pacing, voice characteristics, timing and even . . . homework or engagement with the language outside of class.

Marketers, wedded to the new neuroscience (or pseudo-science) consultants, are way out ahead of us in some respects, far behind in others. What are some major "rocks" that you might better outmaneuver with astute, consistent micro-moves, staying ahead, brushing aside obstacles? One book you might consider "curling up with, with a grain of salt" is Dooley's Brainfluence: 100 Ways to Persuade and Convince Consumers with Neuromarketing.


Wednesday, February 14, 2018

Ferreting out good pronunciation: 25% in the eye of the hearer!

Something of an "eye opening" study, Integration of Visual Information in Auditory Cortex Promotes Auditory Scene Analysis through Multisensory Binding, by Town, Wood, Jones, Maddox, Lee, and Bizley of University College London, published on Neuron. One of the implications of the study:

"Looking at someone when they're speaking doesn't just help us hear because of our ability to recognise lip movements – we've shown it's beneficial at a lower level than that, as the timing of the movements aligned with the timing of the sounds tells our auditory neurons which sounds to represent more strongly. If you're trying to pick someone's voice out of background noise, that could be really helpful," They go on to suggest that someone with hearing difficulties have their eyes tested as well.

I say "implications" because the research was actually carried out on ferrets, examining how sound and light combinations were processed by their auditory neurons in their auditory cortices. (We'll take their word that the ferret's wiring and ours are sufficiently alike there. . . )

The implications for language and pronunciation teaching are interesting, namely: strategic visual attention to the source of speech models and participants in conversation may make a significant impact on comprehension and learning how to articulate select sounds. In general, materials designers get it when it comes to creating vivid, even moving models. What is missing, however, is consistent, systematic, intentional manipulation of eye movement and fixation in the process. (There have been methods that dabbled in attempts at such explicit control, e.g., "Suggestopedia"?)

In haptic pronunciation teaching we generally control visual attention with gesture-synchronized speech, which highlights stressed elements, and with something analogous for individual vowels and consonants. How much are your students really paying attention, visually? How much of your listening comprehension instruction is audio only, as opposed to video sourced? See what I mean?

Look. You can do better pronunciation work.


Citation: (Open access)

Thursday, February 8, 2018

The feeling of how it happens: haptic cognition in (pronunciation) teaching

Am often asked the question as to how "haptic" (movement+touch) can enhance teaching, especially pronunciation teaching. A neat new study by Shaikh, Magana, Neri, Escobar-Castillejos, Noguez and Benes, Undergraduate students’ conceptual interpretation and perceptions of haptic-enabled learning experiences, is "instructive". Specifically, the study,

 " . . . explores the potential of haptic technologies in supporting conceptual understanding of difficult concepts in science, specifically concepts related to electricity and magnetism."

Now, aside from the fact that work with (haptic) pronunciation teaching should certainly feel at times both "electric and magnetic", the research illustrates how haptic technology, in this case a joystick-like device, can help students more effectively figure out some fundamental concepts. In essence, the students were able to "feel" the effect of current changes and magnetic attraction as various forces and variables were explored. The response from students to the experience was very positive, especially in terms of affirmation of understanding the key ideas involved.

The real importance of the study, however, is that haptic engagement is not seen as simply "reinforcing" something taught visually or auditorily. It is basic to the pedagogical process. In other words, experiencing the effect of electricity and magnetic attraction as the concepts are presented results in (what appears to be) a more effective and efficient lesson. It is experiential learning at its best, where what is acquired is more fully integrated cognition, where the physical "input" is critical to understanding, or may, in fact, precede more "frontal" conscious analysis and access to memory. (Reminiscent, of course, of Damasio's 2000 book, The Feeling of What Happens: Body and Emotion in the Making of Consciousness. Required reading!)

An analogous process is evident in haptic pronunciation instruction or any approach that systematically uses gesture or rich body awareness. The key is for that awareness, of movement and vibration or resonance, to at critical junctures PRECEDE explanation, modeling, reflection and analysis, not simply to accompany speech or visual display. (Train the body first! - Lessac)

We are doing a workshop in May that will deal with discourse intonation and orientation (the phonological processes that span sentence and conversational turn boundaries). We'll be training participants in a number of pedagogical gestures that later will accompany the speech in that bridging. To see what some of those used for expressiveness look (and feel) like, go here!

KIT






Citation: http://educationaltechnologyjournal.springeropen.com/articles/10.1186/s41239-017-0053-2

Monday, January 29, 2018

Anxious about your (pronunciation) teaching? You’d better act fast!

Probably the most consistent finding in research on pronunciation teaching, from instructors and students alike, is that it can be . . . stressful and anxiety-producing. And compounding that is often the additional pressure of providing feedback or correction. A common response, of course, is just to not bother with pronunciation at all. One coping strategy often recommended is to provide "post hoc" feedback, that is, after the learner or activity is finished, where you refer back to errors in as low-key and supportive a manner as possible. (As explored in previous posts, you might also toss in some deep nasal breathing, mindfulness or holding of hot tea/coffee cups at the same time, of course.) Check that . . .

A new study by Zhang, Lei, Yin, Li and Li (2018), Slow Is Also Fast: Feedback Delay Affects Anxiety and Outcome Evaluation, published in Frontiers in Human Neuroscience, adds an interesting perspective to the problem. What they found, in essence, was that:
  • Learners who tended toward high anxiety responded better to immediate positive feedback than to such feedback postponed, or provided later. The same type of learners also perceived the overall outcomes of the training as lower when the feedback was provided later.
  • Learners who tended toward low anxiety responded equally well to immediate or delayed feedback and judged the training as effective in either condition. There was also a trend toward making better use of feedback.
  •  Just why that might be the case is not explored in depth but it obviously has something to do with being able to hold the experience in long term memory more effectively, or with less clutter or emotional interference.
So, if that is more generally the case, it presents us with a real conundrum on how to consistently provide feedback in pronunciation teaching, or any teaching for that matter. Few would say that generating anxiousness, other than in the short term, as in getting "up" for tests or so-called healthy motivation in competition, is good for learning. If pronunciation work itself makes everybody more anxious, then it would seem that we should at least focus on more immediate feedback and correction or positive reinforcement. Waiting longer apparently just further handicaps those more prone to anxiety. How about doing nothing?

This certainly makes sense of the seemingly contradictory results of research in pronunciation teaching showing instructors biased toward less feedback and correction but students consistently wanting more.

How do you provide relatively anxiety-free, immediate feedback in your class, especially if your preference is for delayed feedback? Do you? In haptic work, the regular warm up preceding pronunciation work is seen as critical to that process. (But we use a great deal of immediate, ongoing feedback.) Other instructors manage to set up a more generally nonthreatening, supportive, open and accommodating classroom milieu and "safe spaces." Others seem to effectively use the anonymity of whole-class responses and predictable drill-like activities, especially in oral output practice.

Anxiety management or avoidance? Would, of course, appreciate your thoughts and best practice on this . . . as soon as possible!



Citation: Zhang X, Lei Y, Yin H, Li P and Li H (2018) Slow Is Also Fast: Feedback Delay Affects Anxiety and Outcome Evaluation. Front. Hum. Neurosci. 12:20. doi: 10.3389/fnhum.2018.00020

Sunday, January 21, 2018

An "after thought" no longer: Embodied cognition, pronunciation instruction and warm ups!

If your pronunciation work is less than memorable or engaging, you may be missing a simple but critical step: warming up the body . . . and mind (cf., recent posts on using Mindfulness or Lessac training for that purpose.) Here's why.

A recent, readable piece by Cardona, Embodied Cognition: A Challenging Road for Clinical Neuropsychology, presents a framework that parallels most contemporary models of pronunciation instruction. (Recall the name of this blog: Haptic-integrated CLINICAL pronunciation research!) The basic problem is not that the body is not adequately included or applied in therapy or instruction, but that it generally "comes last" in the process, often just to reinforce what has been "taught", at best.

That linear model has a long history, according to Cardona, in part due to "the convergence of the localizationist approaches and computational models of information processing adopted by CN (clinical neuropsychology)". His "good news" is that research in neuroscience and embodied cognition has (finally) begun to establish more of the role of the body, relative to both thought and perception--one of parity, contributing bidirectionally to the process--as opposed to contemporary "disembodied and localization connectivist" approaches. (He might as well be talking about pronunciation teaching there.)

"Recently, embodied cognition (EC) has put the sensory-motor system on the stage of human cognitive neuroscience . . .  EC proposes that the brain systems underlying perception and action are integrated with cognition in bidirectional pathways  . . , highlighting their connection with bodily  . . . and emotional  . . .  experiences, leading to research programs aimed at demonstrating the influence of action on perception . . . and high-level cognition  . . . "  (Cardona, 2017) (The ellipted sections represent research citations in the original.) 

Pick up almost any pronunciation teaching text today and observe the order in which pronunciation features are presented and  taught. I did that recently, reviewing over two dozen recent student and methods books. Almost without exception the order was something like the following:
  • perception (by focused listening) 
  • explanation/cognition (by instructor), 
  • possible mechanical adjustment(s), which may or may not include engagement of more of body than just the head (i.e., gesture), and then 
  • oral practice of various kinds, including some communicative pair or group work 
There were occasional recommendations regarding warm ups in the instructor's notes but nothing systematic or specific as to what that should entail or how to do it. 

The relationship between perception, cognition and body action there is very much like what Cardona describes as endemic to clinical neuropsychology: the body is not adequately understood as influencing how the sound is perceived or its essential identity as a physical experience. Instead, the targeted sound or phoneme is encountered first as a linguistic construct or constructed visual image.

No wonder an intervention in class may not be efficient or remembered . . .

So, short of becoming a "haptician" (one who teaches pronunciation beginning with body movement and awareness)--an excellent idea, by the way--how do you at least partially overcome the disembodiment and localization that can seriously undermine your work? A good first step is to just consistently do a good warm up before attending to pronunciation, a basic principle of haptic work, such as this one, which activates a wide range of muscles, sound mechanisms and the mind.

One of the best ways to understand just how warm ups work in embodying the learning process is this IADMS piece on warming up before dance practice. No matter how you teach pronunciation, just kicking off your sessions with a well-designed warmup, engaging the body and mind first, will always produce better results. It may take three or four times to get it established with your students, but the long term impact will be striking. Guaranteed . . . or your memory back!



Thursday, January 4, 2018

Touching pronunciation teaching: a haptic Pas de trois

For you ballet buffs this should "touch home" . . . The traditional "Pas de trois" in ballet typically involves 3 dancers who move through 5 phases: an introduction, 3 variations (each done by at least one dancer), and then a coda of some kind with all dancing.

A recent article by Lamothe in the UK Guardian, Let's touch: why physical connection between human beings matters, reminded us of some of the earliest work we did in haptic pronunciation teaching that involved students working together in pairs, "conducted" by the instructor, in effect "touching" each other on focus words or stressed syllables in various ways, on various body parts.

In today's highly "touch sensitive" milieu, any kind of interpersonal touching is potentially problematic, especially "cross-gender" or "cross-power plane", but there still is an important place for it, as Lamothe argues persuasively. Maybe even in pronunciation teaching!

Here is one example from haptic pronunciation teaching. Everything in the method can be done using intra-personal and interpersonal touch, but this one is relatively easy to "see" without a video to demonstrate the interpersonal version of it:
  • Students stand face to face about a foot apart. Instructor demonstrates a word or phrase, tapping her right shoulder (with left hand) on stressed syllables and left elbow (with right hand) on unstressed syllables--the "Butterfly technique"
As teacher and students then repeat the word or phrase together,
  • One student will lightly tap the other on the outside of the her right shoulder on stressed syllables (using her left hand).
  • The other student will lightly tap the outside of the other student's left elbow on unstressed syllables (using her right hand). 
Note: Depending on the socio-cultural context and on the general attire of the class, having all students use some kind of hand "disinfectant" may be in order! Likewise, pairing of students obviously requires knowing both them individually and the interpersonal dynamics of the class well. Consider competition among pairs or teams using the same technique.

If you do have the class and context for it, try a bit of it, for instance on a few short idioms. It takes a little getting used to, but the impact of touch in this relatively simple exercise format--and the close paralinguistic "communication"--can be very dramatic and . . . touching.

Keep in touch!

Saturday, December 23, 2017

Vive l'efference! Better pronunciation using your Mind's Ear!

"Efference" . . . our favorite new term and technique: to imagine saying something before you actually say it out loud, creating an "efferent copy" that the brain then uses in efficiently recognizing what is heard or what is said.  Research by Whitford, Jack, Pearson, Griffiths, Luque, Harris, Spencer, and Pelley of University of New South Wales, Neurophysiological evidence of efference copies to inner speech, summarized by ScienceDaily.com, explored the neurological underpinnings of efferent copies, having subjects imagine saying a word before it was heard (or said.)
Clker.com

The difference in the amount of processing required for subsequent occurrences following the efference copies, as observed neurophysiologically, was striking. The idea is that this is one way the brain deals efficiently with speech recognition and variance. By (unconsciously) having "heard" the target, or an idealized version of it, just previously in the "mind's ear," so to speak, we have more processing power available for other things . . .

Inner speech has been studied and employed extensively in second language research and practice (e.g., Shigematsu, 2010, dissertation: Second language inner voice and identity) and in other disciplines. There is no published research on the direct application of efference in our field to date that I'm aware of.

The haptic application of that general idea is to "imagine" saying the word or phrase, synchronized with a specifically designed pedagogical gesture, before articulating it. In some cases, especially where the learner is highly visual, that seems to be helpful, but we have done no systematic work on it. The relationship to research on the effectiveness of video modeling may be very relevant as well. Here is a quick thought/talk problem for you to demonstrate how it works:

Imagine yourself speaking a pronunciation-problematic word in one of your other languages before trying to say it out loud. Do NOT subvocalize or move your mouth muscles. (Add a gesture for more punch!) How'd it work?
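
If it helps to see the routine laid out, here is a minimal, purely illustrative pacing sketch of that "imagine first, then say it aloud" drill in Python. The timings and the word list are my own assumptions for demonstration, not anything specified in the cited research or in haptic practice.

```python
# Illustrative self-paced "efference" drill: silently imagine the word, then say it aloud.
import time

def efference_drill(words, imagine_seconds=3, speak_seconds=2):
    for word in words:
        print(f"IMAGINE saying: '{word}'  (no subvocalizing, no mouth movement)")
        time.sleep(imagine_seconds)   # silent rehearsal in the "mind's ear"
        print(f"Now SAY it aloud: '{word}'")
        time.sleep(speak_seconds)     # spoken articulation
        print("---")

if __name__ == "__main__":
    efference_drill(["thorough", "rural", "squirrel"])  # word list is illustrative only
```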

Imagine your pronunciation work getting better while you are at it!




Friday, December 15, 2017

Object fusion in (pronunciation) teaching for better uptake and recall!

Your students sometimes can't remember what you so ingeniously tried to teach them? A new study by D'Angelo, Noly-Gandon, Kacollja, Barense, and Ryan at the Rotman Research Institute in Ontario, "Breaking down unitization: Is the whole greater than the sum of its parts?" (reported by Neurosciencenews.com), suggests an "ingenious" template for helping at least some things "click and stick" better. What you need for starters:
  • 2 objects (real or imagined) (to be fused together)
  • an action linking or involving them, which fuses them
  • a potentially tangible, desirable consequence of that fusion
Clker.com
The example of the "fusing" protocol from the research was to visualize sticking an umbrella in the keyhole of your front door to remind yourself to take your umbrella so you won't get soaking wet on the way to work tomorrow. Subjects who used that protocol, rather than just motion or action/consequence alone, were better at recalling the future task. Full disclosure here: the subjects were adults, aged 61 to 88. Being near dead center in that distribution myself, it certainly caught my attention! I have been using that strategy for the last two weeks or so with amazing results . . . or at least memories!

So, how might that work in pronunciation teaching? Here are two examples (a small schematic sketch of the template follows them):

Consonant: 'th' (voiceless)
Objects: upper teeth, lower teeth, tongue
Fusion (action): tongue tip positioned between the teeth as air blows out
Consequence: better pronunciation of the 'th' sound

Haptic pronunciation adds to the con-fusion

Vowel: low, central 'a', done haptically (gesture + touch)
Objects: hands touch at waist level, as vowel is articulated, with jaw and tongue lowered in mouth, with strong, focused awareness of vocal resonance in the larynx and bones of the face.
Fusion: tongue and hand movement, sound, vocal resonance and touch
Consequence: better pronunciation of the 'a' sound
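
If it is useful to keep a small catalogue of such exercises, here is the schematic sketch mentioned above: the objects/action/consequence template expressed in Python and pre-loaded with the two examples. The class and field names are my own assumptions for illustration, not anything prescribed by the study or by the haptic method.

```python
# Illustrative data structure for the objects/action/consequence "fusion" template.
from dataclasses import dataclass
from typing import List

@dataclass
class FusionExercise:
    target: str            # sound being taught
    objects: List[str]     # the "objects" to be fused
    action: str            # the action that fuses them
    consequence: str       # the tangible, desirable outcome

    def describe(self) -> str:
        return (f"{self.target}: fuse {', '.join(self.objects)} "
                f"by {self.action}; payoff: {self.consequence}")

voiceless_th = FusionExercise(
    target="voiceless 'th'",
    objects=["upper teeth", "lower teeth", "tongue tip"],
    action="placing the tongue tip between the teeth as air blows out",
    consequence="better pronunciation of the 'th' sound",
)

low_central_a = FusionExercise(
    target="low, central 'a' (haptic)",
    objects=["hands touching at waist level", "lowered jaw and tongue", "vocal resonance"],
    action="articulating the vowel as the hands touch, attending to resonance",
    consequence="better pronunciation of the 'a' sound",
)

for exercise in (voiceless_th, low_central_a):
    print(exercise.describe())
```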

Key concept: It is not much of a stretch to say that our sense of touch is really our "fusion" sense, in that it serves as a nexus-agent for the others (Fredembach et al., 2009; Lagarde and Kelso, 2006). Much like the created image of the umbrella in the keyhole evokes a memorable "embodied" event, probably even engaging our tactile processing center(s), the haptic pedagogical movement pattern (PMP) should work in a similar manner, either in actual physical practice or visualized.

One very effective technique, in fact, is to have learners visualize the PMP (gesture + sound + touch) without activating the voice. (Actually, when you visualize a PMP, it is virtually impossible NOT to experience it, centered in your larynx or voice box.)

If this is all difficult for you to visualize or remember, try first imagining yourself whacking your forehead with your iPhone and shouting "Eureka!"

Citation:
Baycrest Center for Geriatric Care (2017, August 11). Imagining an Action-Consequence Relationship Can Boost Memory. NeuroscienceNews. Retrieved August 11, 2017 from http://neurosciencenews.com/Imagining an Action-Consequence Relationship Can Boost Memory/

Wednesday, December 6, 2017

OLOA! Pronunciation Teaching Lagniappe!

Clker.com
When the "oral reading baby" was for a time tossed out with the structuralist reading and pronunciation teaching "bath", a valuable resource was temporarily mislaid. New research by Forrin and MacLeod of the University of Waterloo confirms what common sense tells us: that reading a text aloud, or even verbalizing something that you need to remember (get ready!), may actually help. Really? In that study the "production effect" was quite significant. From the Science Daily summary:

"The study tested four methods for learning written information, including reading silently, hearing someone else read, listening to a recording of oneself reading, and reading aloud in real time. Results from tests with 95 participants showed that the production effect of reading information aloud to yourself resulted in the best remembering . . . And we know that regular exercise and movement are also strong building blocks for a good memory."

There have been any number of blogposts here advocating the use of oral reading in pronunciation teaching, but this is one argument that I had not encountered, or was not all that interested in, in part because I had an aunt who read and thought aloud constantly and very "irritatingly"! (And who, it appears not incidentally, had a phenomenal memory for detail.) You may well have an aunt or associate who uses the same often socially dysfunctional memory heuristic.

One often unrecognized source of lagniappe (bonus) from attention to pronunciation, especially in the form of oral reading in class or as personalized homework, is this production effect, the actual focus of the study: any number of actions or physical movements may contribute to memory for language material. The text being verbalized still has to be "meaningful" in some sense, according to the study. In haptic work we use the acronym OLOA (out loud oral anchoring): targeted elements of speech accompanied by gesture and touch. 

That can happen any time in instruction, of course, but the precise conditions for it to be effective are interesting and worth exploring. One of the procedures I have frequently set up in teaching observations is analyzing the extent and quality of OLOA (in Samoan: one's labor, skill or possessions!). See if you can remember to use more of that intentionally next week in class and observe what happens. (If not, try a little OLOA on this blogpost!)

Citation:
University of Waterloo. (2017, December 1). Reading information aloud to yourself improves memory of materials. ScienceDaily. Retrieved December 6, 2017 from www.sciencedaily.com/releases/2017/12/171201090940.htm