Sunday, December 27, 2020

New "Newbees" Haptic Pronunciation course!

Want to teach pronunciation but have no training and no time in class to do it even if you knew how? 

We have a great new course for you: Acton Haptic Pronunciation: Content Complement System (AHP-CCS). 

It has been created so that you can use haptic pronunciation techniques (gesture controlled by touch) to:

  • Improve memory for content you are teaching (in speaking, listening, reading, grammar, vocabulary, stories, concepts, etc.)
  • Improve expressiveness, emphasis, and intelligibility
  • Improve impact of modeling, feedback and correction
  • Improve class engagement on Zoom
  • Provide a way to work with pronunciation (on the spot) in any type of class
Specifics: 
  • (Ideally) You study with another person who teaches the same type of student 
  • 12-week course / 4 modules / 12 lessons.
  • The first courses begin on 3/25/2021; others can start anytime after that, whenever a minimum of two students want to do the course.
  • 60 minutes of practice on your own per week 
  • 30 minutes of homework (on your own or with your friend) per week
  • A 45-minute Zoom session each week (usually on Saturday), the two of you working with a "Haptician" who also has experience teaching students of that age and level
  • Haptician: trained by Bill Acton in Haptic Pronunciation Teaching (HaPT)
  • Cost (per-module breakdown sketched just after this list): 
    • 1 person ($1600 CAD each) - not recommended, but possible. 
    • 2 people together ($800 CAD each or $200 per module) - best plan, especially if you are friends! 
    • 3 people together ($600 CAD each or $150 per module) - OK if you are working together!  
    • 4 people together ($400 CAD each or $100 per module) 
    • (Locals.com subscription, $5 CAD monthly, also required to take an AHP-CCS course)
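For the record, the per-module figures are just each per-person fee divided across the four modules. Here is a minimal sketch of that arithmetic (illustration only; the fee figures come from the price list above, everything else is assumed):

```python
# Minimal sketch (illustration only): per-person and per-module AHP-CCS cost by group size.
MODULES = 4  # the course runs 4 modules over 12 weeks

# Total course fee per person, in CAD, by group size (figures from the price list above).
fee_per_person_cad = {1: 1600, 2: 800, 3: 600, 4: 400}

for group_size, fee in fee_per_person_cad.items():
    per_module = fee // MODULES
    print(f"{group_size} enrolled: ${fee} CAD each, or ${per_module} CAD per module")
```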

Designed for those 

  • with little or no previous training in phonetics or pronunciation teaching
  • who are teaching content classes or language classes
  • who are teaching students of any age or proficiency
  • who have a colleague or friend that they can do the class with (if not, maybe we can find one for you!)
  • who have two or three hours a week for the course
  • who would like to be part of a community of people who love teaching pronunciation and other things!
  • who are on a tight budget!
More details: 
  • Weekly Zoom sessions focus on how to use the pedagogical movement patterns (PMPs) of the lesson in your class
  • Both you and your friend should ideally be teaching, or have taught, the same kind of students
  • Certificate awarded after completion of the last Module!
  • All materials furnished
  • Basic training materials are designed to be used with students of any age and proficiency level, in class or out of class. 
Courses begin on 3/25/2021

For more information: Contact info@actonhaptic.com and go to actonhaptic@Locals.com

Wednesday, December 23, 2020

Killing Pronunciation 14: One tip at a time (or better still . . . "pho-nunciation")

Nice new book just out by Mark Hancock, 50 Tips for Teaching Pronunciation, the latest addition to Scott Thornbury's "Tips" series. Currently it is only available in hard copy, but you can preview it online. Other than the fact that it has the "wrong" vowel system (British), it is very cool. 

It is, however, also a perfect candidate for the 14th in our "Killing" series. In that spirit, it might also be characterized, to paraphrase the 'death by a thousand cuts' notion, as: Death (of pronunciation teaching) by a thousand tips.

Hancock's book is a pretty comprehensive, self-guided short course in teaching pronunciation. (I have it listed as recommended reading in my graduate applied phonology course.) The title is misleading, however. It is not just a random set of techniques; it is a relatively systematic set of principles, "tips," if you will. It is actually, read front to back, a pronunciation teaching method. 

It represents the state of the art in the field today: Go big or go home . . . either you invest a considerable amount of time in training to bring pronunciation teaching into your classroom, so you can integrate it or teach a free-standing class, or you avoid it entirely or use a few relatively ineffective techniques here and there and call it a day. In truth, there is very little middle ground left, given curriculum priorities in most teaching institutions, especially K-12, that allow precious little space, if any, for attention to pronunciation.  

So . . . Hancock's book is on the right track: it adds up to a method. (Since we are supposed to be all "post-method" now, Hancock probably didn't dare mention that, but I can, of course!) And the reason I do is that Haptic Pronunciation Teaching (HaPT) is also a coherent method, one best learned from front to back, but the differences are:

  • Although you can "do" our course yourself and take it to the classroom, you don't have to. You can just stream the lessons to your students, let me do the initial teaching, and do the follow-up yourself. 
  • 50 Tips is designed so that you can do it on your own. The HaPT system almost has to be learned "in community." Actually, you go through the course with two or three other newbees, guided by an experienced "Haptician," somebody who is certified in HaPT and is available to help out and "test" you at each benchmark. 
  • 50 Tips is great for coming up with quick mini-lessons, integrating in pronunciation here and there, and getting a basic background in pronunciation teaching. HaPT can be used the same (old-fashioned) way, but it is really aimed at using pronunciation (or what we call "phonunciation") to enhance memory for regular course content, expressiveness, emphasis and (surprise!) pronunciation intelligibility. 
  • The new HaPT method, coming out next month,  Acton Haptic Pronunciation: Content Complement System (CCS, for short), focuses on "phonunciation," not pronunciation. You can use it any time you are working with content, a story, a dialogue, a word list, a song, a set of instructions. Basically, you embed HaPT techniques (gestures anchored by touch) in almost anything to enhance it and make it more memorable. 
  • CCS has been created for those with no background in pronunciation teaching and (typically) no time during the week to do it effectively. 
  • Keep in touch for more announcements. It will roll out first here and then actually go live on Locals. Go join up now and be part of the Acton Haptic Pronunciation Community when it happens! 


Saturday, December 5, 2020

Out of sight! . . . Speechless! . . . Hands-on teaching of the "grammar" of phonology and pronunciation

The study, “Feeling Phonology: The Conventionalization of Phonology in Protactile Communities in the United States," by Edwards of Saint Louis University and Brentari of the University of Chicago, could be something of a game changer for us in haptic pronunciation teaching. (It will be published in Language shortly, but we'll assume that the tantalizing neuroscience summary is correct for the time being!)  From the summary: 

In order to uncover the emergence of new grammatical structure in protactile language, pairs of DeafBlind research participants were asked to describe three objects to one another: a lollipop, a jack (the kind children use to play the game ‘jacks’) and a complex wooden toy with movable arms, magnets, and magnetized pieces.  . . . They found that the early stages of the conventionalization of protactile phonology involve assigning specific grammatical roles to the hands (and arms) of Signer 1 (the conveyer of information) and Signer 2 (the receiver of information). It is the clear and consistent articulatory forms used by each of the four hands that launches the grammar in this case and allows for the rapid exchange of information.

Let me try to translate: Signer 1, using only touch, is passing on a "description" of each object to Signer 2. The four hands involved quickly assume their respective "grammatical functions" in conveying the critical information about the objects. That level of detail is not unpacked in the summary, but we can assume that that is referring to functions such as agent, object, action (verb-like), conjunction (joining), descriptor (adjectival, adverbial), etc. 

In effect, the functionality of the hands and arms in haptic pronunciation work is quite analogous: the instructor's hands, for example, move through the visual field with speech-synchronized gesture, depicting the embodied nature of a phrase or word such as "I'm speechless!"--which is simultaneously mirrored by the student in receiving that information. 

For example, one hand/arm may trace out the path of an intonation contour, whereas the other hand serves as the "landing point" for the moving hand on the stressed element in the phrase. Given the general structure of English grammar, that landing point is also generally the place where the sound system and new information intersect. (New information tends to be near the end of a phrase or sentence.) 

Although sight and sound are involved, the fundamental "vehicle" for the engagement is the movement of the hands and arms, culminating in the hands touching in various ways on the stressed syllable in the phrase or word--mirrored and modulated also by the mirror neurons in the brains of both participants. Each part of the process or mechanism has its own basic function or purpose in conveying the information. Add to that the notion that every pedagogical gesture used can be performed at differing speeds, pitches or volumes, and the roles of the instructor's hands and arms, and those of the students, can take on a wide range of subtle meanings and responsibilities. 

Cannot wait to "lay my hands on" that article!

Keep in touch!

Friday, December 4, 2020

Killing pronunciation 13: Mastering mastery learning, teddy bears and other nonsense!

Good news for those who still believe that their students just need lots and lots of exposure to the language in meaningful contexts--and that their brains miraculously keep track of situations filled with incomplete, seemingly random bits of data that eventually result in the emergence in the mind of words and structures--without the requirement to master one word at a time or "get" a grammar structure the first time they encounter it. In fact, in many contexts, mastery learning, seen from this perspective, may have just the opposite effect: destruction of the delicate, potentially associative links of words and actions across situations. 

It is analogous to reading an engaging blog post that has all kinds of interesting "facts" or observations but that doesn't appear to make any sense, at least at the moment. Read on, Dear Reader . . . 

Interesting study, "Learning vocabulary and grammar from cross-situational statistics," by Rebuschat, Monaghan, and Schoetensack, in the prestigious journal Cognition, no less, reported by Neurosciencenews.com. Their conclusion: 

“We have discovered that the chicken-and-egg problem of learning language can be solved just by hearing lots of language and applying some very simple but very powerful learning to this. Our brains are clearly geared up to keep track of these links between words and the world. We know that infants already have the same power to their learning as adults, and we are confident that young children acquire language using the same types of learning as the adults in our study.”

And what was that type of learning that was evident in the subjects of the study? In essence, they "learned" an artificial language created for the experiment (simply) by looking at a picture of an action or a scene while listening to it being talked about. Just that. With repeated iterations, the subjects gradually made sense of what they had heard, in terms of being able to associate words with images or concepts and being able to identify the basics of the underlying grammar or syntax of the language. 

The researchers "associate" that innate ability with how babies learn language, where words like "teddy bear" and all the other meaningless babble around them begin to connect across situations, where the same combinations of sounds keep showing up, etc. The fascinating finding . . . or claim . . . is that the brain has enormous capacity for holding the information inherent in situations somewhat in "limbo" for a time, without requiring instant, meaningful connection to what was encountered earlier--much more so than current language learning theory generally credits it with. 

The key to the study, however, is that the depicted action and associated objects that the subjects were observing, as the babble poured in, were, themselves, meaningful in some broad sense, so that the sound complex was associated with the situation, not the abstracted concept or word, per se. The "very powerful learning" being referred to is what they term "cross-situational statistical learning." What a perfect metaphor. Recall your first statistics course: the initial flooding of your brain with totally disembodied nonsense that could only be applied meaningfully after multiple passes and luck.   

This is (potentially) big, implying as it does more of a theoretical basis for immersion-based language learning and other less deductive practice. For us in pronunciation work, it suggests that more highly intentional focus of learner attention on both sound and context is critical. It is all too common practice to teach pronunciation without regard to the learner encountering the target of instruction in a meaningful, memorable context or story. (If you are looking for a way to better anchor pronunciation to context--and the body--we have more good news for you! Check out the recent IATEFL PronSIG haptic pronunciation teaching webinar.)

If that doesn't make sense now . . .  it will later, eh! 

Friday, November 27, 2020

Motivation to do Pronunciation work: Smell-binding study!

Rats! Well . . . actually . . . mice that are motivated to (voluntarily) exercise more are genetically set up or developed to have better, more discriminating vomeronasal glandular structure. Is that big, or what? Check out the Neuroscience News summary of this unpublished study by Haga-Yamanaka, Garland and colleagues at UC-Riverside, forthcoming in PLOS ONE: Exercise Motivation Could Be Linked to Certain Smells. I LOVE the researchers' potential application of the research: 

“It’s not inconceivable that someday we might be able to isolate the chemicals and use them like air fresheners in gyms to make people even more motivated to exercise,” Garland said. “In other words: spray, sniff, and squat.”

Being a runner myself, I especially like the study, since it uses mice that are what they term "high runners!" Admittedly, it is a bit of a stretch to jump from the study to the gym and then to the ELT/pronunciation classroom, but the reality of how smell affects performance is well established in several disciplines--and probably in your classroom as well! 

Decades ago, a colleague who specialized in olfactory therapies and was a consultant in the corporate world on creating good-smelling work spaces, etc., sold me on the idea of using a scent generator in my pronunciation teaching. It required mixing two or three oils to get students in the mood to do whatever I wanted them to do, better. Back then it seemed to be effective, but there was little research to back it up, and it was before we were forced to work in "scent-free" and other things-free spaces.

What is interesting about the study for our work is the connection between persistence in physical exercise and heightened general sensory awareness, and the way smell, in this case, is enhanced. My guess is that touch, foundational in haptic pronunciation teaching, is keyed in similar ways. Gradually, as students practice consistently with the gross and fine motor gestural patterns, what we call pedagogical movement patterns, their differential use of touch increases. (An earlier post identifies over two dozen "-emic" types of touch in the system.) In other words, touch becomes more and more powerful/effective in anchoring sound change and memory for it. 

That insight is central to the new haptic pronunciation teaching system, Acton Haptic Pronunciation Complement--Rhythm First, which will be rolled out early in 2021. (For preliminary details on that, check out the refurbished Acton Haptic website!)



Tuesday, November 17, 2020

Zoom(h)aptic: Haptic Pronunciation Teaching online

Is keeping in touch with your students, doing pronunciation online, a problem? We've been working with what we call "haptic videos" for over a decade. Basically, by that we mean using video models that learners move along with, in the process using gestures that are mediated and regulated by touch. (The touch usually occurs in the path of a speech-synchronized gesture, where the stressed syllable in the word or phrase is articulated.) 

Just read a fun piece by Powers and Parisi on Techcrunch.com (hat tip to haptician Skye Playsted), The hype, haplessness and hope of haptics in the COVID-19 era. I'll focus a bit on the latter! What they get to is a number of haptic technologies, some of which at least promise to help us "touch" during COVID without passing anything on, such as virtual bank "touch" screens that feel to your fingers like you are actually touching the buttons when, in fact, you aren't. They also mention the sort of thing we have been following here for years, such as haptic prosthetics, full-body suits and vests, and gaming consoles. 

What we have discovered in doing haptic pronunciation teaching online for the last few years is that having learners "dance" along with us haptically, with extensive use of gesture and touch as they repeat or speak spontaneously from various perspectives, really does "connect" us. Because the gesture complexes (pedagogical movement patterns - PMPs) are very easy to teach and conduct on Zoom, for instance, everybody (or every body) should get the sense of greater participation and what we term "haptic presence." 

Years of research on mirror neurons have demonstrated that if you are paying careful attention to the motions of another, your brain is experiencing much of what is happening as if you, yourself, were the source of the action. What that means is that after students have been introduced to the gestural patterns--by doing them along with select phrases--when they see them again, they should (and generally do, in our experience) resonate with them. In informal experiments where we ask students NOT to move along with us, they report that their bodies generally cannot help but move along to some degree. (That is a doctoral degree research project for any haptician who is interested!!!)

So . . . pack up your mirror neurons, go over to www.actonhaptic.com and look at the demonstration videos. And, while you are at it, check out our latest webinar with IATEFL on HaPT! After you do, come back and we'll sign you up for some HaPT training. Right now we are still in v4.5, but v5.0, "ActonHaptic Pronunciation Complement," will be rolling out later this fall!


Sunday, November 1, 2020

Managing distraction in (haptic pronunciation) teaching: to block or to hype . . . or both!

New study by Udakis et al., Interneuron-specific plasticity at parvalbumin and somatostatin inhibitory synapses onto CA1 pyramidal neurons shapes hippocampal output, characterized by Science Daily as " . . . a breakthrough in understanding how memories can be so distinct and long-lasting without getting muddled up." Normally, I wouldn't take a shot at connecting research in basic neuroscience to haptic pronunciation teaching, but this one, describing the basic mechanisms by which some memories get stored so that they are recalled vividly later, points to a couple of principles that should underlie all instruction, not just haptic pronunciation teaching. 

In essence, what were identified are two key "circuits": one that basically intensified the event and another that served to block out distraction--put another way, one that functions to inhibit other "learning" that might cover over or undermine an experience. One interesting implication of that model is that the brain, in some sense, is "intentionally" managing distraction. Now, the conditions that have to be in play for an experience to be "protected" are, of course, myriad, but the concept that highly systematic attention to distraction, not just increasing excitement or emotional engagement in a "teachable moment," is critical is worth considering. 


In the comment on the earlier post on distraction, the observation was made that, at least in one program, distraction was not seen as having any relevance in instruction whatsoever. My guess is that that is the case in many systems as well. In our haptic pronunciation teaching workshops, one of the questions we must explore is how teachers explicitly and intentionally deal with in-class distractions of all kinds, but especially extraneous kinetic (movement in the room), visual (elements in the visual field of learners), auditory (any noise coming in from outside or being generated in the room), olfactory (odors), airborne (pollution, etc.), temperature fluctuations, and furniture comfort and distribution. 

Any one of those can seriously undermine instruction, of course. In haptic work, which is based on systematic control of movement and gesture and utilization of the visual field, you can see how any distraction, in addition to students' naturally wandering minds, can undermine the process. Consequently, we attend to ALL of them in our initial assessment of the classroom setting that learners are about to enter. 

Just the use of gesture and movement synchronized with speaking will capture the attention of learners, at least temporarily mediating the surrounding potential distractions, but the idea is that in addition to learners being "captivated" by the lesson content, activities and instructor delivery, attention to or control of select environmental features may be extraordinarily important. Assuming you cannot control everything at once, I'd suggest you use our basic heuristic: adjust . . . at least one or two intentionally . . . each class--without letting learners know what you are up to. Then maybe do some kind of warm up, maybe not like this one of mine, but you get the idea!


Source: 

University of Bristol. (2020, September 8). Research unravels what makes memories so detailed and enduring. ScienceDaily. Retrieved November 1, 2020 from www.sciencedaily.com/releases/2020/09/200908131139.htm

Tuesday, October 27, 2020

The "Marshmallow effect" in (haptic pronunciation) teaching

Following up on the previous blogpost on "distracting from distractions," here is a "delicious" study by Heyman, of the University of California San Diego, and colleagues, summarized by Science Daily, that attempts to refine the classic "Marshmallow effect" studies, where children are bribed with marshmallows to see how long they will wait to eat them. Basically, they are told that if they can just hold off a bit, they'll get more marshmallows. Those that do turn out to be more successful later in life, maybe more disciplined, etc. 

In this study, the added variable was that the 3- and 4-year-olds in different groups were told (a) their teacher would find out how much time they waited, (b) their friends would find out, or (c) in the control group, no such instruction was provided. As you can guess, the first two groups waited longer; the first, more than twice as long as the second. The researchers' conclusion as to what is actually motivating the kids: they say " . . .  findings suggest that the desire to impress others is strong and can motivate human behavior starting at a very young age." Well, maybe, in the case of (b), but in (a), given that the research was done in China . . . could there be other cultural factors involved, such as fear of teacher reaction or discipline? Maybe . . . 

In haptic pronunciation teaching, as in many different teaching contexts, instructors pay very close attention to students' time on task, both in and out of class. A basic HaPT protocol is that students have to report weekly, in some detail, on their out-of-class practice, including how much time they spent on the assigned task and various levels of evaluation of how "it" went. Here, too, however, there is the same "Marshmallow" question . . . Those that do consistently report seem to do much better; those that don't, don't. But there is no obvious way to assign simple cause and effect there. Maybe it is just that the disciplined do better, including at providing good reports on time allocation, etc. 

I have been unable to find a decent piece of research that parallels what we do in the "ActonHaptic" version of HaPT with time management and reporting. (If you know of one, please pass that on!) But the general effect always seems to be more focused, less distracted work/study. I do something similar in some of my grad courses, in fact, where at least the monitoring effect--that they have to report to me regularly--always seems to "work." I do have data from final course evaluations that confirms that consistently. 

So . . . try applying that idea to your course. As you do, take careful notes on how much time you spend on what, and when, and how it seemed to work. Then report back to me . . . or else!

Source: 

Association for Psychological Science. (2020, September 10). Children will wait to impress others -- another twist on the classic marshmallow test. ScienceDaily. Retrieved October 26, 2020 from www.sciencedaily.com/releases/2020/09/200910110826.htm

Sunday, October 18, 2020

Good, or at least less "distracting" distraction in (pronunciation) teaching

Now here is some "different" research from the Journal of Food Science Education and the journal Perception that you may have missed (summarized by Science Daily). The first, by Schmidt of the University of Illinois at Urbana-Champaign, is titled Distracted learning: Big problem and golden opportunity; the second, by Hipp, Olsen and Gerhardstein, Mind-Craft: Exploring the Effect of Digital Visual Experience on Changes to Orientation Sensitivity in Visual Contour Perception.

In pronunciation teaching, and especially so in haptic work, distraction can be lethal, depending on which modality it is coming through! Dealing with it is always high priority. We manage distraction and attention several ways, but principally with gesture, touch and management of the visual field. 

Schmidt's report reviews research on sources of potential distraction evident in the multitasking world of today and then considers a number of potentially effective measures for addressing them. Hipp et al. examine an intriguing phenomenon where the brain/eyes are seen adapting in surprising ways to the visual digital milieu, especially the shifting among different environments that we are engaged in today. Taken together, the two studies seem to suggest that, probably for a number of reasons, distraction is emerging as a much more complex and variable phenomenon in the experience of those who have "grown up" in that milieu than we often assume. 

In other words, the impact of disruptive elements on learning and teaching--and consequently the potential effectiveness of mediation procedures--needs to be reconsidered. Listed below, paraphrased and reorganized into three categories, is the set of recommendations from Schmidt's study: 

Pre-Conditions:
  • Removing extraneous devices from workspaces
  • Incorporating movement into classroom activities
  • Promoting and implementing active learning
  • Using a work-reward system
Classroom protocols: 
  • Alternating intensive periods of focused work with preplanned bursts of pleasure
  • Developing course content on topics of students' choosing 
  • Having them teach it to other students
Cognitive and meta-cognitive:
  • Encouraging development of internal locus of control
  • Fostering a work-hard, play-hard mindset
  • Encouraging setting of goals related to academic performance 
Nothing there is, in itself, surprising, of course, but taken together, or reconsidered as a fuller set of strategies that may, in combination, work to moderate distraction--as a more primary/preliminary target of instruction with today's learners and their evolving attentional systems--it is worth "attending to!" 

Bottom line: The impact of both distraction and of those mediation strategies on "native media-ites," those who have grown up in computer-mediated experience (and devices), probably those now in their mid-to-late 20s or somewhat earlier, may be evolving or emerging in new forms. In other words, multitasking for those learners is apparently becoming experientially and phenomenologically different than it is for earlier "pre-media" generations: they seem to be adapting in ways such that they can be both less . . . distracted and, consequently, more amenable to pedagogical mediation. 

In a subsequent post, I'll continue this thread exploring specific mediations that apply to (haptic) pronunciation teaching. 

Sources: 

Shelly J. Schmidt. Distracted learning: Big problem and golden opportunity. Journal of Food Science Education, 2020; 19 (4): 278 DOI: 10.1111/1541-4329.12206

University of Illinois at Urbana-Champaign, News Bureau. (2020, October 14). Distracted learning a big problem, golden opportunity for educators, students. ScienceDaily. Retrieved October 17, 2020 from www.sciencedaily.com/releases/2020/10/201014140932.htm

D. Hipp, S. Olsen, P. Gerhardstein. Mind-Craft: Exploring the Effect of Digital Visual Experience on Changes to Orientation Sensitivity in Visual Contour Perception. Perception, 2020; 030100662095098 DOI: 10.1177/0301006620950989

Binghamton University. (2020, September 30). Screen time can change visual perception -- and that's not necessarily bad. ScienceDaily. Retrieved October 17, 2020 from www.sciencedaily.com/releases/2020/09/200930144422.htm

Tuesday, September 29, 2020

(New) v5.0 Haptic Pronunciation Teaching as "Metanique": any text, story, class or time

If you are new to haptic pronunciation, here is a quick history. (If not, drop down to after the bullets!) To understand the importance of this new development, Haptic as Metanique, a little background is helpful:

  • 1970s - I was trained in pronunciation teaching, especially from a speech pathology, highly tactile and kinesthetic perspective
  • 1980s - Extensive work in accent enhancement involving both kinesthetic and psychological models
  • 1990s - Large class teaching of pronunciation in Japan and research on uses of gesture in pronunciation teaching
  • 2000 ~ 2020 - Development of haptic pronunciation teaching, inspired in part by work in psychotherapy for PTSD, especially use of the visual field and touch.
  • 2020 - v5.0 Haptic pronunciation teaching as "metanique" (a system of procedures where attention to pronunciation can be mapped onto any meaning- or narrative-based classroom teaching text or technique).

Haptic (Pronunciation Teaching as) Metanique is, in effect, a series of complementary overlays to any L2 instruction that can be applied in any class any time that any learners (all ages and contexts) are engaged in meaningful texts or interpersonal communication practice. 

We use the Butterfly as our symbol of metanique in general: a gesture complex that, in a sense, floats above or lands on any word, phrase, clause or sentence, embodying it. The Butterfly pedagogical movement pattern has been central to the haptic system from the outset. (See a demonstration of the early Butterfly and other PMPs from v1.0.) Here is an example of how metaniques, in this case the Butterfly and the intonation PMP, Touchinamis, might be applied to presentation of a model dialogue to embody lexical items (words), the rhythm patterning or the intonation contours:

X is Y / and Z, / but A, / who is from B, / is very much C, / to be sure.

Butterfly (rhythm):        ooO | oO | oO | oooO | ooooO | ooO

Touchinamis (intonation):  --/ | -/ | -/ \ | ---/ \ | ----\ | --/ \
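For readers who think in code, here is a minimal sketch, purely my own illustration (the function names, syllable counts and chunk data are assumptions, not part of the AHP materials), of how the two notation lines above could be generated from each chunk's syllable count, stressed-syllable position, and final pitch movement:

```python
# Minimal sketch (illustrative only): Butterfly rhythm strings and rough Touchinamis
# contour strings for each chunk of the model dialogue above.

def butterfly(syllables: int, stressed: int) -> str:
    """Rhythm notation: 'o' for each unstressed syllable, 'O' on the stressed one."""
    return "".join("O" if i == stressed else "o" for i in range(syllables))

def touchinamis(stressed: int, movement: str) -> str:
    """Rough intonation notation: '-' for each syllable before the stress,
    then the pitch movement ('/' rise, '\\' fall, '/ \\' rise-fall) on the stress."""
    return "-" * stressed + movement

# (text, syllable count, index of stressed syllable, pitch movement) -- assumed values
chunks = [
    ("X is Y",         3, 2, "/"),
    ("and Z",          2, 1, "/"),
    ("but A",          2, 1, "/ \\"),
    ("who is from B",  4, 3, "/ \\"),
    ("is very much C", 5, 4, "\\"),
    ("to be sure",     3, 2, "/ \\"),
]

for text, n, stress, move in chunks:
    print(f"{text:15} {butterfly(n, stress):8} {touchinamis(stress, move)}")
```

Running it simply reproduces the Butterfly and Touchinamis strings shown above, chunk by chunk.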

The concept is that anything that is the focus of instruction--as long as it is embedded in a vivid context or narrative where some complementary attention to form would fit in relatively seamlessly, without disrupting comprehension or production--can be "metaniqued!"

For more on metaniquing and v5.0, join us at the upcoming webinar in November (or possibly the upcoming webinar on 10/2, if you hurry and register at info@actonhaptic.com)!





Monday, September 28, 2020

Believing in pronunciation teaching -- at least at the beginning!

Have believed for . . . a long time . . . that early pronunciation instruction and learning is not only a higher calling, but in some sense qualitatively different from later language acquisition. Once some "quorum level" of sounds and patterns is acquired, it is a different process, or at least a different teaching problem. Hence, we see the often confused debates as to what degree pronunciation work is "physical" or more "conscious/cognitive." I believe two recently published studies help unpack the dichotomy or paradox. 

A new study, Implicit pattern learning predicts individual differences in belief in God in the United States and Afghanistan, by Weinberger et al., has interesting, albeit somewhat indirect, implications for pronunciation teaching. ScienceDaily describes the focus of the study, quoting the researchers:  

"This is not a study about whether God exists, this is a study about why and how brains come to believe in gods. Our hypothesis is that people whose brains are good at subconsciously discerning patterns in their environment {emphasis, mine}may ascribe those patterns to the hand of a higher power," 

In a relatively straightforward design, the research "correlated" relative ability to unconsciously identify language and symbolic patterns with stronger, fundamentalist religious belief in the two cultures/faith traditions, Christianity and Islam. Subjects more adept at pattern recognition tended toward stronger belief. (There are not just a few potential cross-cultural and methodological issues with the research, but I really like the conclusion!) 

And then there is this study on early versus later learning of Mandarin by Qi and colleagues at the University of Delaware, Learning language: New insights into how brain functions. Their conclusion, focusing on brain function, summarized in Science Daily:

"The left hemisphere showed a substantial increase of activation later in the learning process -- the right hemisphere in the most successful learners was most active in the early, sound-recognition stage. . . "

Now granted, learning Mandarin may require a little more right hemisphere than English, as has been shown in previous studies, but the basic concept, pattern recognition, a more specialized function of the right hemisphere, is a key feature of early or initial learning of sounds. The researchers also note that more right hemisphere engagement was key to eventual success in the language as well. Implicit pattern recognition . . . not explicit, left-hemisphere-like processing. 

There are no studies that I am aware of which correlate fundamentalist religious beliefs with acquisition of L2 sound systems, but the connection between more right-hemisphere-based unconscious or inductive learning and early pronunciation teaching and learning is striking. That suggests that more experiential techniques and procedures, even drill, when carried out in ways that allow the brain time and input to "intuit" or acquire the somatic patterning involved, are essential to efficient instruction. So how do we do that well? 

Better pray about that . . . but will get right back to you!

Bill


Sources: 

University of Delaware. (2019, May 8). Learning language: New insights into how brain functions. ScienceDaily. Retrieved September 18, 2020 from www.sciencedaily.com/releases/2019/05/190508093716.htm

Adam B. Weinberger, Natalie M. Gallagher, Zachary J. Warren, Gwendolyn A. English, Fathali M. Moghaddam, Adam E. Green. Implicit pattern learning predicts individual differences in belief in God in the United States and Afghanistan. Nature Communications, 2020; 11: 4503 DOI: 10.1038/s41467-020-18362-3




Tuesday, September 8, 2020

Next Haptic Pronunciation Teaching (Free!) Webinars!

We (the MATESOL at Trinity Western University) are doing two FREE introductory webinars on haptic pronunciation teaching: Friday, October 2nd and Saturday, November 14th, 2020. The webinars run from 7:30 to 9:00 p.m. PST. Contact william.acton@twu.ca for more information and reservations. (Places limited!) There are at least two reasons we are offering these: 

First, "haptic" is the only way to teach pronunciation (at least in our modest opinion!) 

Second, every spring, beginning in mid-January, we offer an online, 3-credit graduate course, Ling 611 - Applied Phonology. Roughly one quarter of that course is "Haptic Pronunciation Teaching." 

For more detail on the webinars, the noncredit haptic course and the grad course, go here! 

You can apply to take either the regular course (for about $2200 CAD, as a special student) or the noncredit haptic stream by itself (for about $500--it comes with a certificate).

You do need some prerequisite work to do Ling 611, for example, some background in phonetics, linguistics and pronunciation teaching. (Check with me if you have a question on that.) No prereqs required for the haptic stream, however. The grad course runs 14 weeks; the haptic certificate, 12. The grad course takes about 8~10 hours a week; the certificate, about 3. 

Ling 611 or the certificate course can also be hosted at your school or program, done for groups or individually.  

See you next month!

Bill

william.acton@twu.ca




Tuesday, August 4, 2020

(New) Acton Haptic Accent Enhancement for International Professionals

For the last 5 or 6 years I have been working with a "new" accent enhancement system, based on haptic pronunciation teaching face-to-face, on campus, with select international graduate students and professionals. With COVID, beginning early this spring, I began working on a new online version of that individualized course. It is all one-on-one (or possibly one-on-two) with weekly, 45-minute sessions on Zoom or SKYPE. 

I have been doing accent work since about 1975 or so. My first paper on it was published in 1984. (If you'd like a free copy of that, let me know and I'll send you one.) Our 2013 article gives you a pretty good picture of what the program is about. Would love to work with you if you have the "wiring" and time. If interested, check out the AHAE program page. (It is still a work in progress, but it will give you a pretty good idea of what it is about.) 

Bill

Sunday, July 19, 2020

Fixing your eyes on better pronunciation--or before it!

Early on in the development of haptic pronunciation teaching, we began by borrowing a number of techniques from Observed Experiential Integration therapy, developed by Rick Bradshaw and colleagues about 20 years ago. OEI has proved to be particularly effective in the treatment of PTSD. In OEI, one of the basic techniques is the use of eye tracking; that is, therapists carefully control the eye movements of patients, in some cases stopping at places in the visual field to "massage" points through various loops and depth-of-field tracking.

We discovered, in attempting to control students' eye movement--having them follow with their eyes the track of the gestures across the visual field being used to anchor sounds during pronunciation work--that although memory for sounds seemed better, holding attention for such extended lengths of time could be really counterproductive. In some cases, students even became slightly dizzy or disoriented after only a few minutes. (And, in retrospect, we were WAY out of our league . . . )

Consequently, attention shifted to visual focus on only the terminal point in the gestural movement where the stressed syllable of the word or phrase was located, where the hands touched. We have been using that protocol for about a decade.

Now comes a fascinating study by Badde et al., "Oculomotor freezing reflects tactile temporal expectation and aids tactile perception," summarized by ScienceDaily.com, that helps refine our understanding of the relationship between eye movement and touch in focusing attention. In essence, what the research demonstrated was that by stopping or holding eye movement just prior to when a subject was to touch a targeted object, the intensity of the tactile sensation was significantly enhanced. Or, the converse: random eye movement prior to touch tended to diffuse or undermine the impact of touch. That helps explain something . . .

The rationale for haptic pronunciation teaching is, essentially, that the strategic use of touch both successfully manages gesture and much more effectively focuses the placement of stressed syllables in words accompanying the gesture in gesture-synchronized speech. In almost all cases, the eyes focus in on the hand about to be touched--just prior to what we term the TAG (touch-activated ganglia), where touch literally "brings together" or assembles the sound, body movement, vocal resonance and the graphic visual schema and meaning of the word or phoneme, itself.

In other words, the momentary freezing of eye movement an instant before the touch event should greatly intensify the resulting impact and later recall produced by the pedagogical strategy. We knew it worked, just didn't really understand why. Now we do.

Put your current pronunciation system on hold for a bit . . . and get (at least a bit) haptic!

Original source:
Stephanie Badde, Caroline F. Myers, Shlomit Yuval-Greenberg, Marisa Carrasco. Oculomotor freezing reflects tactile temporal expectation and aids tactile perception. Nature Communications, 2020; 11 (1) DOI: 10.1038/s41467-020-17160-1

Sunday, June 28, 2020

Haptic pronunciation teaching (un)masked!

A student just asked the question: How can I teach pronunciation in a mask? Where he is, already back in the classroom, he and most of his students are wearing masks. It can be difficult enough when you can't see your students' faces, let alone when they can't see yours! The end of pronunciation teaching as we know it? No, not at all. Here's how . . .

In 2014, I was in the Middle East doing teacher training workshops. I was scheduled to do one at a women's college. NEVER occurred to me that the (150) students might be wearing burqas . . . which almost all of them were, covered, head to foot. One of the most successful and well received sessions I have ever done. (See the blogpost on that for more detail as to how it happened and my thoughts as to why it seemed to go so well!) 

With the exception of most consonants and a few features of vowels, most everything else of real importance in pronunciation work can be done in a mask . . . haptically. By that I mean taught "from scratch," except where the learner has relatively little idea of where things in the vocal tract have to go and touch to come up with a vowel or consonant sound.

Suprasegmentals (rhythm, stress and intonation) done in masks are a piece of cake, in fact, maybe even preferable in some cases. If you haven't already, go to www.actonhaptic.com and watch the demo videos. Even for vowels, you can do correction and feedback in a mask effectively, as long as the learner has the basic physical routine stored "in there" somewhere that can be recalled.

Doing a new demonstration shortly of more ideas on effective "masked" pronunciation as part of the upcoming webinars, July 24th and 25th. Contact info@actonhaptic.com for reservations.

Wednesday, June 24, 2020

Getting a feel for pronunciation: What our pupils can tell us!

What do you do with your eyes when you are struggling to understand something that you are listening to? (Quick: Write that down.) Now some of that, of course, depends on your personal wiring, but this new study “Asymmetrical characteristics of emotional responses to pictures and sounds: Evidence from pupillometry” by Nakakoga, Higashi, Muramatsu, Nakauchi, and Minami of Toyohashi University of Technology, as reported in neuroscience.com, sheds some new "light" on how the emotions may exert influence on our ongoing perception and learning. Using eye tracking and emotion measuring technology, a striking pattern emerges.

From the summary (boldface, mine):
"It suggests that visual perception elicits emotions in all attentional states, whereas auditory perception elicits emotions only when attention is paid to sounds, thus showing the differences in the relationships between attentional states and emotions in response to visual and auditory stimuli."

So, what does that imply for the pronunciation teacher? Several things, including the importance of what is going on in the visual field of learners when they are attempting to learn or change sounds. It has been long established that the process of learning pronunciation is especially susceptible to emotion. It can be an extraordinarily stressful experience for some learners. Even when there are no obvious stressors present, techniques such as relaxation or warm ups have been shown to facilitate learning of various aspects of pronunciation.

Consequently, any emotional trigger in the visual field of the learner can have either a "pronounced" positive or negative impact, regardless of what the instructor is attempting to direct the learners' attention to. If, on the other hand, learners' attention is focused narrowly on auditory input and its emotional impact, you have a better chance of managing that impact FOR GOOD, provided you can successfully manage or restrict whatever is going on in the learner's visual field that could be counterproductive emotionally. (Think: Hypnosis 101 . . . or a good warm up . . . or a mesmerizing lecture!)

That doesn’t mean we teach pronunciation with our eyes closed . . . when  it comes to the potential impact of the visual field on our work. Quite the contrary! How does the “front” of the room (or the scenes on screen) feel to your pupils? Can you enhance that? 

To learn more about one good (haptic) way to do that, join us at the next webinars!

Original Research: Open access
“Asymmetrical characteristics of emotional responses to pictures and sounds: Evidence from pupillometry” by Nakakoga, S., Higashi, H., Muramatsu, J., Nakauchi, S., and Minami, T.
PLOS ONE doi:10.1371/journal.pone.0230775

Thursday, June 4, 2020

CPR for Pronunciation homework and teaching . . . that works!

Excellent study by Martin, "Pronunciation Can Be Acquired Outside the Classroom: Design and Assessment of Homework-Based Training," a real MUST READ for you if you are serious about pronunciation teaching, demonstrating that at least one kind of (computer-mediated)  homework system is not only effective, but may work as well as classroom-only instruction. 

The basic process in the homework phase was what is termed iCPR: computer-based, intelligibility-focused cued pronunciation reading. Learners are provided with explicit instruction and explanation, and then both perceptual and production training and practice, with feedback in the perceptual phase/practice only. 

The study involved adult learners of German, extending over 10 weeks, with the equivalent of about 30 minutes of instruction either in class or out of class. The in-class lessons seemed to closely mimic the process and time allocation of the homework. From a number of perspectives, both treatments showed equally significant improvement and student satisfaction. Methodologically, the project seems tight, although the use of the term "homework" is probably a little misleading today, when the learner never really "leaves" the web in some form during the day except for sleep . . . 

In corresponding with the researcher, my only question was: How (on earth) did you get the students to DO their homework? Surely it  had something to do with the "sell" up front, the allocation of grade points (easily accounted for in the computer-mediated system) and (probably) early student awareness to some degree of the program's efficacy. So . . . it looks well conceived, a highly detailed blueprint of how to set up a similar system. 

Setting aside the question of just how readily the process can be adopted and adapted for the moment, what this shows or means is that Martin has given us another intriguing picture of the future of pronunciation teaching: pronunciation work handled outside of in-class instruction. 

To paraphrase Lincoln Steffens: "I have seen the future (of pronunciation teaching) and it works. [remark after visiting the Soviet Union in 1919]” or maybe even Marshall McLuhan: "If it works, it's obsolete." . . . The field is changing fast. Pronounced change, to put it mildly!

Source: 
Martin (2020). Pronunciation Can Be Acquired Outside the Classroom: Design and Assessment of Homework-Based Training. The Modern Language Journal. DOI: 10.1111/modl.12638

Tuesday, May 26, 2020

The sound of gesture: Ending of gesture use in language (and pronunciation) teaching

Quick reminder:  Only one week to sign up for the next haptic pronunciation teaching webinars! 

Sometimes getting a rise(-ing pitch) out of students is the answer . . . This is one of those studies that you read where a number of miscellaneous pieces of a puzzle momentarily seem to come together for you. The research, by Pouw and colleagues at the Donders Institute, "Acoustic information about upper limb movement in voicing," summarized by Neurosciencenews.com, is, well . . . useful.

In essence, what they "found" was that at or around the terminal point of a gesture, where the movement stops, the pitch of the voice goes up slightly (for a number of physiological reasons). Subjects, with eyes closed, could still in many cases identify the gesture being used, based on parameters of the pitch change that accompanied the nonsense words. The summary is what is fun and actually helpful, however.

From the summary:

"These findings go against the assumption that gestures basically only serve to depict or point out something. “It contributes to the understanding that there is a closer relationship between spoken language and gestures. Hand gestures may have been created to support the voice, to emphasize words, for example.”

Although the way the conclusion is framed might suggest that the researchers may have missed roughly three decades of extensive research on the function of gesture, from theoretical and pedagogical perspectives, it certainly works for me--and all of us who work with haptic pronunciation teaching. That describes, at least in part, what we do: "  . . . Hand gestures . . . created to support the voice, to emphasize words, for example.” Now we have even more science to back us up! (Go take a look at the demonstration videos on www.actonhaptic.com, if you haven't before.) 

What can I say? I'll just stop right there. Anything more would be but an empty gesture . . .

Source:
“Acoustic information about upper limb movement in voicing”. by Wim Pouw, Alexandra Paxton, Steven J. Harrison, and James A. Dixon. PNAS doi:10.1073/pnas.2004163117

Monday, May 18, 2020

Cognitive Restructuring of Pronunci-o-phobia - (and Alexa-phobia): Hear, hear! (Just don't peek!)

Caveat emptor: If you are emotionally co-dependent on Alexa, you might want to "ALEXA, STOP ME!" at this point. We love you, but you are lost . . .

New study by "a team of researchers at Penn State" (summarized by ScienceDaily.com) explored the idea of using ALEXA to help you "cognitively restructure" your public speaking anxiety: Anxious about public speaking? Your smart speaker could help. Actually, what they did was compare two different ALEXAs, a more social one with a less social one, in talking you through/out of some of your public speaking, pre-speech anxiety. (Fasten your seat belt . . . ) Subjects who engaged with the former felt less stressed at the prospect of giving a speech. From the summary, quoting the researchers:

"People are not simply anthropomorphizing the machine, but are responding to increased sociability by feeling a sense of closeness with the machine, which is associated with lowered speech anxiety . . . Alexa is one of those things that lives in our homes, . . As such, it occupies a somewhat intimate space in our lives. It's often a conversation partner, so why not use it for other things rather than just answering factual questions?"

Houston, we have a problem. Several, in fact. For instance, if ALEXA can do that, imagine what a real person online, just audio only, could accomplish! Forget Zoom and SKYPE! I'd predict that that may even account for some, if not a great deal, of the reduction in anxiety alone. In that condition, a real person might be exponentially more effective . . . worth checking on, I'd think. In addition, from the brief report we get no indication as to what ALEXA actually said, only that "she" was more socially engaging in one condition than in the other. 

What it does suggest, however, is that we should be able to use the same general strategy in dealing with the well-researched anxiety on the part of instructors and students toward pronunciation work. The impact of a person facing you as you try to modify your pronunciation is important. Many learners literally have to close their eyes to repeat a phrase with a different articulation--or at least dis-focus their eyes momentarily. That is an especially critical dimension of haptic and general gesture techniques in pronunciation teaching. 

This idea is explored in Webinar II of the upcoming Haptic Teaching Webinars I and II, June 5th and 6th. Please join us! (Contact info@actonhaptic.com to reserve your place!) 

And if you'd like to continue this discussion, give me a call . . . Keep in Touch!

Source:
Penn State. (2020, April 25). Anxious about public speaking? Your smart speaker could help. ScienceDaily. Retrieved May 18, 2020 from www.sciencedaily.com/releases/2020/04/200425094114.htm

Saturday, May 2, 2020

Killing pronunciation 12: Memory for new pronunciation: Better heard (or felt) but not seen!

Another in our series of practices that undermine effective pronunciation instruction!

(Maybe) bad news from visual neuroscience: You may have to dump those IPA charts, multi-colored vowel charts, technicolor x-rays of the inside of the mouth, dancing avatars--and even haptic vowel clocks! Well . . . actually, it may be better to think of those visual gadgets as something you use briefly in introducing sounds, for example, but then dispose of them or conceptually background them as quickly as possible.

New study by Davis et al. at the University of Connecticut, Making It Harder to “See” Meaning: The More You See Something, the More Its Conceptual Representation Is Susceptible to Visual Interference, summarized by Neurosciencenews.com, suggests that visual schemas of vowel sounds, for example, could be counterproductive--unless, of course, you close your eyes . . . but then you can't see the chart in front of you. 

Subjects were basically confronted with a task where they had to try to recall a visual image, physical sensation or sound while being presented with visual activity or images in their immediate visual field. The visual "clutter" interfered substantially with their ability to recall the other visual "object" or image, but it did not impact their recall of other sensory "image" (auditory, tactile or kinesthetic) representations, such as non-visual concepts like volume, heat, or energy, etc.

We have had blogposts in the past that looked at research which discovered that it is more difficult to "change the channel": if a student is mispronouncing a sound, many times just trying to repeat the correct sound instead, without introducing a new sensory or movement-set to accompany the new sound, is not effective. In other words, an "object" in one sensory modality is difficult to just "replace"; you must work around it, in effect attaching other sensory information to it (cf. multi-modal or multi-sensory instruction).

So, according to the research, what is the problem with a vowel chart? Basically this: the target sound may be primarily accessed through the visual image, depending on the learner's cognitive preferences. I only "know" or suspect that from years of tutoring and asking students to "talk aloud," walking me through their strategies for remembering the pronunciation of new words. It is overwhelmingly by way of the orthographic representation, the "letter" itself, or its place in a vowel chart or listing of some kind. (Check that out yourself with your students.)

So . . . what's the problem? If your "trail of bread crumbs" back to a new sound in memory is through a visual image of some kind, then any clutter in your visual field that is the least bit distracting as you try to recall the sound is going to make you much less efficient, to put it mildly. That doesn't mean you can't teach using charts, etc., but you'd better be engaging more of the multisensory system when you do, or your learners' access to those sounds may be very inefficient, at best--or else downgrade the charts' importance in your method appropriately. 

In our haptic work we have known for a decade that our learners are very susceptible to being distracted by things going on in their visual field that pull their attention away from experiencing the body movement and "vibrations" in targeted parts of their bodies. Good to see "new-ol' science" is catching up with us!

I've got a feeling Davis et al. are on to something there! I've also got a feeling that there are a few of you out there who may "see" some issues here that you are going to have to respond to!!!




Wednesday, April 15, 2020

What do you expect? (A "Tsough" question for pronunciation teaching!)

Intriguing title of a recent piece/summary on ScienceDaily.com: "Flaw in Rubber Hand Illusion raise tsough questions for psychology" (a real double threat: not only a spelling miscue, but a grammar issue as well). Do those two little "glitches" affect your expectations as to what is in the article? Unavoidably, eh . . . and that is too bad. The research by Lush of the University of Sussex being summarized is potentially paradigm-shaking (original title): Demand Characteristics Confound the Rubber Hand Illusion.
From the summary: 

"The Rubber Hand Illusion, where synchronous brush strokes on a participant's concealed hand and a visible fake hand can give the impression of illusory sensations of touch and of ownership of the fake hand, has been cited in more than 5,000 articles since it was first documented more than 20 years ago."

What that appeared to establish early on is that the brain is in some sense "hard wired" to transfer sensation throughout the body, as a function of consciousness. The problem, according to Lush, and as demonstrated in the study, is that the results from experiments exploring that effect may be hopelessly biased by what are termed the "demand characteristics" of the study, in effect (hypnotic-like) suggestion as to what the researcher expects to find and what the subjects will experience. 

In other words, subjects will do their best to exhibit the effect being elicited. In Lush's study, subjects' expectations for how they would respond to the "rubber hand," having read the original introductory protocols, were strikingly biased in favor of experiencing the "ghost sensations" in the rubber hand. 

Since in haptic pronunciation teaching the hands play a central role in linking sound, gesture and concepts, we clearly have a "pony in this race" as well.

A couple of decades ago, in a piece on the role of suggestion in language teaching in the JALT Language Teacher, I cited a paragraph from a (then) popular student pronunciation book (bold-face, mine):

"Acquiring good pronunciation is the most difficult part of learning a new language. As you improve your articulation you have to learn to listen and imitate all over again. As with any activity you wish to do well, you have to practice, practice, practice, and then practice some more . Remember that you cannot accomplish good pronunciation overnight; improvement takes time. Some students may find it more difficult than others and will need more time than others to improve" (Orion, 1989, pp. xxiii-iv).

I went on to note: "In those . . . words and phrases . . . can you not hear echoes of that famous line above the door in Dante's Inferno, 'Abandon hope, all ye who enter here'?"

This relates back to two blog posts ago on "pronunciation preambles," that is, the way instructors set up work in pronunciation. Human beings, at least most of them, are highly suggestible. They have to be, in order to pick up subtle cues in their environment quickly and efficiently. Pronunciation teaching, and pronunciation in general, has gotten a bad rap, some of it deservedly so, of course, but how it is presented to learners, consciously and subconsciously, makes an enormous difference in outcome.

A "slight of hand" in the truest sense. What are you suggesting?

Source: 
University of Sussex. (2020, April 10). Flaw in Rubber Hand Illusion raise tsough questions for psychology. ScienceDaily. Retrieved April 15, 2020 from www.sciencedaily.com/releases/2020/04/200410162432.htm

Friday, April 10, 2020

Haptic Pronunciation Teaching Webinars!

The first new, v5.0 "double webinar" is set to go: October 2nd and November 21st, 19:30-21:00, Pacific Standard Time. Reserve your place now. (No deposit required.) Fee: 40 CAD

The webinars are highly experiential and participatory. You'll need:
  • a hands-free setup
  • a display, preferably a TV screen, laptop, or iPad of some kind, but a handheld with a BIG screen is OK, too
  • the screen positioned at eye level
  • a wireless headset or no headset at all (best), though a headset with a long cord is adequate, since you have to stand up and "dance" on several occasions!
The 75-minute recorded sessions are followed by a 15-minute Q and A.
Enrolment is limited to 50 participants in each webinar. There may be some time-zone restrictions, depending on early registration. Reserve your place now at: william.acton@twu.ca

Webinar topics 
  • Introduction to Haptic Pronunciation Teaching
  • Dictionary use for pronunciation
  • North American English vowels
  • Syllables and phrase grouping
  • Intonation 
  • Haptic homework
  • Select consonants
  • Fluency and linking
  • Conversation rhythm and pausing
  • Advanced intonation and secondary stress
  • Classroom correction, feedback integration techniques
Webinars can also be offered exclusively for a single English teaching organization, as can "on the ground," f2f one-day workshops. (Contact: info@actonhaptic.com for information on group packages.)
-----------------------------------------------------------------------
The noncredit haptic pronunciation course meets in a weekly 1-hour webinar and includes about two hours of practice following each session. Course completion requires passing a certification test, which includes a video test. 
-----------------------------------------------------------------------
The graduate course, Ling 611 - Applied phonology, is a 3-credit online seminar. It is composed of three relatively equal streams: (a) haptic pronunciation teaching, essentially the same as the noncredit course; (b) phonological analysis of learner data; and (c) theory and methods of applied linguistics, with a focus on speaking, listening, and pronunciation. There is a combination of synchronous and asynchronous meetings and assignments. 
-----------------------------------------------------------------------


Monday, April 6, 2020

The "story" of pronunciation teaching: Engaging Preambles

One of the potential advantages of having taught pronunciation for a few years (in my case, almost 50) is that you have on hand a near-endless supply of "success stories" from former students, ways to introduce and (hopefully) motivate yourself and your students at the "drop of a hat," no matter what you are teaching.

I was reminded of that recently after viewing a plenary by one of the great storytellers in our field, Mario Rinvolucri. Although he does not talk about the use of stories as "preambles" in instruction per se in that talk or in this nice piece on TeachingEnglish.org, I'm sure he'd concur with their value as such. Several other studies of storytelling in the field cover a wide range of classroom possibilities, but none that I have been able to find examines the "preamble" function.

My introduction to this function of storytelling was the work of Milton Erickson, back in the 1980s. (One of my all-time favorite books on that was Erickson's classic "My voice will go with you.") Here is an example of one of Erickson's stories done by Bill O'Hanlon. (The audio of the originals, with Erickson actually telling the stories, is available but less accessible.)

I'll begin with one of my favorite personal "pronunciation preambles." Please add one of yours. Let's see where this story takes us!

Better pronunciation: overnight!

I did a 1-hour workshop at a Korean university for about 400 undergraduates. The objective was to improve the rhythm of their spoken English . . . overnight. All of them had conversation classes the next morning. (Important note: only one of the roughly six conversation teachers came to the workshop, although all were invited.) I trained the students to act as if they were boxing as they spoke along, first with easy dialogues on the screen and then, before we finished, with simple roleplays in pairs. It got a little chaotic, as you can imagine, but they loved it! And just before I concluded the workshop, I gave them a "secret mission" . . . The next morning, in their speaking classes, they were to use the same feeling in their upper bodies--without punching the air as in boxing--as they spoke in class, WITHOUT LETTING ON TO THEIR TEACHERS THAT ANYTHING WAS DIFFERENT. I heard some amazing stories back. In the classes that pulled it off, the teachers were stunned by the difference in the rhythm and energy . . . and even the playfulness evident in the speaking of the class.

Never fails. To see the basic technique, go here and check out the RFC demo.

Give us your best Pronunciation Preamble!


Tuesday, March 24, 2020

Recipe for curing (Chinese) distaste for pronunciation teaching

Have trouble selling your students on pronunciation, developing an "appetite" for it? Research by Madzharov, Self-Control and Touch: When Does Direct Versus Indirect Touch Increase Hedonic Evaluations and Consumption of Food, summarized by Science Direct, suggests that you may just need to give at least the more self-controlled among them a "hands-on" taste of it to get them to buy in. To quote the abstract:

"The present paper presents four studies that explore how sampling and eating food by touching it directly with hands affects hedonic evaluations and consumption volume."

What they found, however, was that only the high self-control, disciplined consumers perceived the food to be better tasting and were disposed to eat more of it. For the other subjects (like me, maybe!), adding touch did not appear to enhance either taste or appetite for the food samples in the study. Why that should be the case was not clear, other than the possibility that, in the less self-controlled consumers, the executive control centers of the brain were already offline in the face of the direct, unfettered attraction of FOOD!

A few years ago, we had a visiting scholar from China here with us for a year. It took almost the entire time for her to get me to understand how to get Chinese students to buy in to (haptic) pronunciation teaching specifically and, more generally, to more integrated, communicative pronunciation work. My "mistake" had been trying to convince relatively high-control consumers of pronunciation teaching to first be more like me: less high-control and more experiential as learners.

It has always been a problem for some students, not just Chinese students, to buy into highly gesture-based instruction. But touch was another thing entirely. Most any student can "get" how touch can enhance learning and memory--and be coaxed into trying some of the gestural, kinaesthetic techniques. That is probably for several reasons, one being that the functions of touch in the haptic system are to (1) carefully control gesture use and (2) intensify the connection between the gesture and the lexical or phonological target, the word or sound process. It was also (3) much easier to present the general, popular research on the contribution of touch to experience and learning, and (4) the concept of getting a learner to work in their least dominant modality, a basic construct in hypnosis, for example, can be the most effective or powerful approach.

The assumption here is that the metacognitively self-controlled are less likely to be influenced by immediate feelings or impressions, but once that "barrier" is bridged, as touch does so effectively, the relatively novel sensual experience for them has greater impact. Think: men and the power of perfume . . .

In other words, focusing initially on the touch that concludes every gesture made a difference. I have been doing that ever since. Students are much more receptive to trying the gestural techniques once they feel that they have sufficient understanding . . . and then, once they have tried it, focusing more on touch than on gesture . . . they are "hooked," more able and amenable to sensing the power of embodiment in learning pronunciation from then on.

If you have a taste for pronunciation work with Chinese students, what is your recipe?

Keep in touch . . .

Original Source:
Madzharov, A. (2019). Self-Control and Touch: When Does Direct Versus Indirect Touch Increase Hedonic Evaluations and Consumption of Food. Journal of Retailing, 95(4), 170-185. https://doi.org/10.1016/j.jretai.2019.10.009


Thursday, March 19, 2020

Love it or leave it: 2nd language body, voice, pronunciation and identity

Recall (if you can) the first time you were required to listen to or maybe analyze a recording of your voice. Surprising? Pleasing? Disgusting? Depressing? There are various estimates as to how much of your awareness of your voice is based on what it "feels" like to you, rather than on your ears, but it is somewhere around 80% or so. It turns out your awareness of what your body looks like is similar.

A new study by Neyret, Bellido Rivas, Navarro and Slater, of the Experimental Virtual Environments (EVENT) Lab, University of Barcelona, “Which Body Would You Like to Have? The Impact of Embodied Perspective on Body Perception and Body Evaluation in Immersive Virtual Reality,” as summarized by Neuroscience News, found that our simple gut feelings about how (un)attractive our body shape or image is are generally more negative than our assessment when we are able to view it more dispassionately or objectively, "from a distance," as it were. Surprise. Using virtual reality technology, subjects were presented with different body types and sizes, among them one that is precisely, to an external observer, what the subject's body shape is. Subjects rated their "virtual body" shape more favorably than in their earlier, pre-experiment self-ratings, presented in something analogous to a questionnaire format.

In psychotherapy, the basic principle of "distancing" from emotional grounding is fundamental; there are all sorts of ways to accomplish that, such as visualizing yourself watching yourself do something disconcerting or threatening to you. It is the "step back" metaphor, which the brain takes very seriously if done right.

In this case, when visualizing the shape of your body (or your voice, by extension, as part of the body), you'll see it at least a little more favorably than when you describe it based on how it "feels" internally, which is the reason "body shaming" can work so effectively in some cases, or, in pronunciation work, "accent shaming."

So, how can we use the insights from the research? First, systematic work by learners in critically listening to their voice should pay off, at least in some sense of resignation or even "like," so that the ear is not automatically tuned to react or avert. (I'm sure there is research on that someplace but, for the life of me, I can't find it! Please help out with a good reference on that, if you can!) Is this some long-overdue partial vindication of the seemingly interminable hours spent in the language lab? Could be, in some cases.

Second, once a learner is able to "view" their L2 voice/identity relative to some ideal more dispassionately, it should be easier to work with it and make accommodations. That is one of the central assumptions of the "Lessac method" of voice development, which I have been relying on for over 30 years. It also calls into question the idea that aiming toward an ideal, native-speaker accent is necessarily a mistake. You have to "see" yourself relative to it as more of an outsider, not just from your solar plexus out . . . through your flabby abs, et al. . . . My approach to accent reduction always begins there, before we get to changing anything. Call it voice and body "re-sensitization."

See what I mean? If not, have somebody you don't know read this post to you again at Starbucks . . .

Original Source:
Neyret, S., Bellido Rivas, A. I., Navarro, X. and Slater, M. (2020). “Which Body Would You Like to Have? The Impact of Embodied Perspective on Body Perception and Body Evaluation in Immersive Virtual Reality.” Frontiers in Robotics and AI. doi:10.3389/frobt.2020.00031.