Showing posts with label embodied cognition.

Sunday, September 18, 2022

Killing (Pronunciation) Learning 16*: Move (with) it or Lose it!

Fascinating new research--with intriguing implications: "Hand constraint reduces brain activity and affects the speed of verbal responses on semantic tasks," by Onishi, Tobita and Makioka of Osaka Metropolitan University, a study that gives the metaphor "sit on your hands" neuroscientific validation . . . almost!

In the study, subjects sat at computers and had to make judgments as to the relative size of different objects on the screen. In one condition, subjects viewing objects that entailed the use of the hands, such as a broom, were not allowed to move their hands as they responded. That significantly slowed brain processing and verbal response, compared to responding to objects, such as a house, that do not involve such direct hand engagement in learning or use, where the restraint on hand movement had no discernible effect.

From the perspective of embodied cognition theory that makes sense: in principle, all learning . . . and thought is inextricably bound together with the entire body in multiple dimensions. Some of that interconnectedness derives from when something is learned; some, from the primal notion that all experience is embodied, that is, grounded in what the body is doing, whether in saving to memory or in memory access.

Assuming that general principle holds--and I am absolutely convinced that it does, from about 50 years in the field of pronunciation teaching--how does that impact our understanding of the function of body movement in the classroom? For one, requiring students to sit nearly motionless, especially in language learning classes, let alone elementary school classrooms, is a killer, best case. Just being able to move around a little, keeping loose and responding easily and with all your body (and being), means something, literally. That is something we all know intuitively, of course, but what the study shows is that at some level a body constraint is a "thought" constraint as well.

In (haptic) pronunciation teaching, virtually all basic instruction is based on gesture-synchronized speech, where all speech production can be accompanied by gesture, and body awareness of constant motion and synchrony between body and speech rhythm develop throughout the process. The hands and arms play prominently in the method. For more on that: www.actonhaptic.com

Do a video of your class (any class) sometime. Is it moving? It should be . . . 

*This is number 16 in the series of blogposts highlighting factors or variables that can seriously interfere with learning and teaching pronunciation. 

Source:
Onishi, S., Tobita, K. & Makioka, S. Hand constraint reduces brain activity and affects the speed of verbal responses on semantic tasks. Sci Rep 12, 13545 (2022). https://doi.org/10.1038/s41598-022-17702-1

Tuesday, July 5, 2022

Embodied (and great) learning of pronunciation: Exploring Arthur Lessac!

Once in a while, we should go back to the source, to what inspired us to be in this field, just to understand better where we are at the moment. Two months ago, I recommended to you a new book, Movement Matters: How Embodied Cognition Informs Teaching and Learning, edited by Macrine, S. & Fugate, J., which represents some extraordinary progress in getting the body systematically back into instruction. Lessac had it figured out over 75 years ago.

His work is not widely known outside of the fields of speech and drama, in part because it is so "body-centered," requiring students to learn to explore themselves, their place in the world--and their voice--through something similar to what we now know as "mindfulness," but also, in the persona of an actor, to inhabit any number of other agents . . . or even musical instruments and animals, let alone metaphor upon metaphor. In other words, in theater, he had found a path back to fully engaged--and joyful--use of the body and voice.

What is so evident in Macrine & Fugate (2022) is that embodiment is key, but how you get there may vary widely . . . and neuroscience has explored a myriad of ways in which that can happen effectively, many of which seem straight out of Lessac's work.

From my perspective, in terms of a complete system, an accompanying, experiential guide to embodied "learning (through constant) exploration" (as he would characterize it), his two classics, Body Wisdom and The Use and Training of the Human Voice, are almost without peer.

Of course, to follow Lessac through the system, or through the courses available through the Lessac Institute, takes time--maybe six months or so before you get there, to where you and your body have become wonderfully "reintegrated," as you were when you were a child. To the post-modern mind, from the "outside," it appears as though you have simply given yourself over to the whims of the body, but in fact, what has happened is that you and your body are just communicating together as a team.

But getting there generally requires going back to square one: exploring the experience of speaking and moving again, setting aside temporarily the layer upon layer of words and experience that determine what we are allowed to sense and understand. To Lessac, it was all about "exploration," being perpetually in that state of discovery, with the body as the "territory" and the mind as the map being constantly created out of experience--not the reverse.

In other words, to quote Lessac, train the body first. KINETIK does that. Join us this fall. (www.actonhaptic.com) or email me directly: wracton@gmail.com for custom programs, etc. 

It's good to be back. More on the KINETIK project, "KINETIK (embodied speaking and teaching) Method" soon! 

Saturday, May 14, 2022

Required reading! (New book on Embodied Cognition in Teaching and Learning)

Put this one on your list:  Movement Matters: How Embodied Cognition Informs Teaching and Learning, edited by Macrine, S. & Fugate, J., MIT Press DOI: https://doi.org/10.7551/mitpress/13593.001.0001

From the promo: "Experts translate {at least some of} the latest findings on embodied cognition from neuroscience, psychology, and cognitive science to inform teaching and learning pedagogy." (Braces, mine!) There are "only" 18 chapters, 330 pages, and the topics covered are not exhaustive, of course, but several, including the opening section on theories of embodied cognition are well worth a careful read. That is especially the case since it is FREE, open access!

In addition to the excellent concluding section, my favorite chapter thus far, one that connects very directly to the KINETIK Method and haptic pronunciation teaching, is "Embodied Classroom Activities for Vocabulary Acquisition," by Gomez, L. and Glenberg, A. DOI: https://doi.org/10.7551/mitpress/13593.003.0011

Enjoy! Embody it all! 

Bill

Saturday, June 5, 2021

KINETIK (Pronunciation Teaching) Method: Embodied cognition-centered, the way kids learn . . . math!

One of the most intriguing parallels to haptic pronunciation teaching is with embodied math instruction with children. In a 2021 piece in Frontiers in Psychology, Integrating Embodied Cognition and Information Processing: A Combined Model of the Role of Gesture in Children's Mathematical Environments, Berman and Ramani of the University of Maryland propose a comprehensive model that also applies in very interesting ways to the new KINETIK Method. Beginning from an embodied cognition perspective (that is, the learning experience must be understood as anchored in both the body and the "outside" milieu, the social context), it connects more explicitly the critical role and function played by hands-on methodology in that problem-based context to math concept learning.

That framework, especially its account of the several ways in which hand engagement defines what an object is, how it relates to other objects, and the focus of the task "at hand," provides a useful way of interpreting the place of the various components of the gesture- and touch-based (haptic) techniques of the KINETIK Method.

To see more about just how that framework connects to classroom instruction in pronunciation with children and adults, join us at the weekly haptic webinars (Hapticanars) beginning on June 8th! For more information on the (free) Hapticanars and to sign up, go to www.actonhaptic.com.

Source: https://www.frontiersin.org/articles/10.3389/fpsyg.2021.650286/full


Wednesday, June 24, 2020

Getting a feel for pronunciation: What our pupils can tell us!

What do you do with your eyes when you are struggling to understand something that you are listening to? (Quick: Write that down.) Now some of that, of course, depends on your personal wiring, but this new study “Asymmetrical characteristics of emotional responses to pictures and sounds: Evidence from pupillometry” by Nakakoga, Higashi, Muramatsu, Nakauchi, and Minami of Toyohashi University of Technology, as reported in neuroscience.com, sheds some new "light" on how the emotions may exert influence on our ongoing perception and learning. Using eye-tracking and emotion-measuring technology, the researchers found a striking pattern.

From the summary (boldface, mine):
"It suggests that visual perception elicits emotions in all attentional states, whereas auditory perception elicits emotions only when attention is paid to sounds, thus showing the differences in the relationships between attentional states and emotions in response to visual and auditory stimuli."

So, what does that imply for the pronunciation teacher? Several things, including the importance of what is going on in the visual field of learners when they are attempting to learn or change sounds. It has been long established that the process of learning pronunciation is especially susceptible to emotion. It can be an extraordinarily stressful experience for some learners. Even when there are no obvious stressors present, techniques such as relaxation or warm ups have been shown to facilitate learning of various aspects of pronunciation.

Consequently, any emotional trigger in the visual field of the learner can have a "pronounced" positive or negative impact, regardless of what the instructor is attempting to direct the learners' attention to. If, on the other hand, learners' attention is focused narrowly on auditory input, you have a better chance of managing the emotional impact FOR GOOD if you can successfully manage or restrict anything going on in the learner's visual field that could be counterproductive emotionally. (Think: Hypnosis 101 . . . or a good warm up . . . or a mesmerizing lecture!)

That doesn’t mean we teach pronunciation with our eyes closed . . . when it comes to the potential impact of the visual field on our work. Quite the contrary! How does the “front” of the room (or the scenes on screen) feel to your pupils? Can you enhance that?

To learn more about one good (haptic) way to do that, join us at the next webinars!

Original Research: Open access
“Asymmetrical characteristics of emotional responses to pictures and sounds: Evidence from pupillometry,” by Nakakoga, S., Higashi, H., Muramatsu, J., Nakauchi, S., and Minami, T.
PLOS ONE doi:10.1371/journal.pone.0230775

Saturday, March 14, 2020

Pronunciation in the eyes of the beholder: What you see is what you get!

This post deserves a "close" read. Although it applies new research to exploring basics of haptic pronunciation teaching specifically, the complex functioning of the visual field, itself, and eye movement in teaching and learning, in general, is not well understood or appreciated.

For over a decade we have "known" that there appears to be an optimal position in the visual field in front of the learner for the "vowel clock" or compass used in the basic introduction to the (English) vowel system in haptic pronunciation teaching. Assuming:
  • The compass/clock below is on the equivalent of an 8.5 x 11 inch piece of paper
  • About .5 meters straight ahead of your eyes,
  • With the center at eye level--or equivalent relative size on the board or wall or projector, 
  • Such that if the head does not move, 
  • The eyes will be forced at times to move close to the edges of the visual field 
  • To lock on or anchor the position of each vowel (some vowels could, of course, be positioned in the center of the visual field, such as schwa or some diphthongs.)
  • Add to that simultaneous gestural patterns concluding in touch at each of those points in the visual field (www.actonhaptic.com/videos) 
Something like this:

  • (Northwest): 11. [uw] "moo"; 10. [ʊ] "cook"
  • (North): top center of the chart
  • (Northeast): 1. [iy] "me"; 2. [I] "chicken"
  • (West): 9. [ow] "mow"; 8. [Ɔ] "salt"
  • (eye level): center of the chart
  • (East): 3. [ey] "may"; 4. [ɛ] "best"
  • (Southwest): 7. [ʌ] "love"
  • (South): 6. [a] "hot/water"
  • (Southeast): 5. [ae] "fat"
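
For readers who like to check the geometry, here is a minimal Python sketch; it simply restates the clock above as a dictionary and computes the visual angle implied by the assumptions listed earlier (8.5 x 11 inch sheet, 0.5 meters away, centered at eye level). The arrangement of the code is mine and purely illustrative.

import math

# The haptic vowel compass, restated from the chart above:
# clock position -> (vowel symbol, keyword, compass direction).
VOWEL_CLOCK = {
    1:  ("iy", "me",        "Northeast"),
    2:  ("I",  "chicken",   "Northeast"),
    3:  ("ey", "may",       "East"),
    4:  ("ɛ",  "best",      "East"),
    5:  ("ae", "fat",       "Southeast"),
    6:  ("a",  "hot/water", "South"),
    7:  ("ʌ",  "love",      "Southwest"),
    8:  ("Ɔ",  "salt",      "West"),
    9:  ("ow", "mow",       "West"),
    10: ("ʊ",  "cook",      "Northwest"),
    11: ("uw", "moo",       "Northwest"),
}

def visual_angle_deg(extent_m: float, distance_m: float) -> float:
    """Total visual angle (degrees) subtended by a flat extent viewed head-on."""
    return 2 * math.degrees(math.atan((extent_m / 2) / distance_m))

# Assumptions from the list above: 8.5 x 11 inch sheet, 0.5 m straight ahead.
WIDTH_M = 8.5 * 0.0254
HEIGHT_M = 11 * 0.0254
DISTANCE_M = 0.5

if __name__ == "__main__":
    print(f"Sheet spans about {visual_angle_deg(WIDTH_M, DISTANCE_M):.0f} x "
          f"{visual_angle_deg(HEIGHT_M, DISTANCE_M):.0f} degrees of visual angle.")
    for position, (vowel, keyword, compass) in sorted(VOWEL_CLOCK.items()):
        print(f"{position:>2} o'clock ({compass}): [{vowel}] as in '{keyword}'")

Run as written, that works out to roughly 24 by 31 degrees of visual angle, so with the head held still the eyes do have to make sizable movements to anchor the outer vowels.
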
Likewise, we were well aware of previous research by Bradshaw et al. (2016), for example, on the function of eye movement and position in the visual field related to memory formation and recall. A new study, “Eye movements support behavioral pattern completion,” by Wynn, Ryan, and Buchsbaum of Baycrest’s Rotman Research Institute, summarized by Neurosciencenews.com, seems (at least to me) to unpack more of the mechanism underlying that highly "proxemic" feature.

Subjects were introduced to a set of pictures of objects positioned uniquely on a video screen. In phase two, they were presented with sets of objects containing both the original and new objects, in various conditions, and tasked with indicating whether they had seen each object before. What they discovered was that in trying to decide whether the image was new or not, subjects' eye patterning tended to reflect the original position in the visual field where it was introduced. In other words, the memory was accessed through the eye movement pattern, not "simply" the explicit features of the objects, themselves. (It is a bit more complicated than that, but I think that is close enough . . . )

The study is not claiming that the eyes are "simply" using some pattern reflecting an initial encounter with an image, but that the overt actions of the eyes in recall are based on some type of storage or processing patterning. The same would apply to any input, even a sound heard or sensation with the eyes closed, etc. Where the eyes "land" could reflect any number of internal processing phenomena, but the point is that a specific memory entails a processing "trail" evident in or reflected by observable eye movements--at least some of the time!

To use the haptic system as an example: in gesturing through the matrix above, not only is there a unique gestural pattern for each vowel but--if the visual display is positioned "close enough" that the eyes must also move in distinctive patterns across the visual field--you also have a potentially powerful process or heuristic for encoding and recalling sound/visual/kinesthetic/tactile complexes.

So . . . how do your students "see" the features of L2 pronunciation? Looking at a little chart on their smartphone or on a handout or at an LCD screen across the room will still entail eye movement, but of what and to what effect? What environmental "stimulants" are the sounds and images being encoded with and how will they be accessed later? (See previous blogpost on "Good looking" pronunciation teaching.)

There has to be a way, using my earlier training in hypnosis, for example, to get at learner eye movement patterning as they attempt to pronounce a problematic sound. I would love to compare "haptic" and "non-haptic-trained" learners. Again, our informal observation with vowels, for instance, has been that students may use the gestural patterning of the compass, the eye patterning, or both in accessing sounds they "experienced" there. Let me see if I can get that study past the human subjects review committee . . .

Keep in touch! v5.0 will be on screen soon!

Source: Neurosciencenews.com (April 4, 2020), "Our eye movements help us retrieve memories."


Monday, July 1, 2019

Grasping (and reaching for) pronunciation together improves memory!

There are countless studies demonstrating how under certain conditions repeating a word out loud enhances memory for it (e.g., Sciencedaily.com/Boucher, 2016), including a couple of earlier blogpost summaries here and here also associating that process with use of movement, touch and gesture.

A new study by Rizzi, Coban and Tan of the University of Basel, "Excitatory rubral cells encode the acquisition of novel complex motor tasks," summarized by Sciencedaily.com, exploring the connection between fine motor engagement, such as reaching for and grasping objects, and enhanced brain plasticity (learning), adds another fascinating piece to that puzzle. (It is almost worth reading the original article just to have the term "excitatory rubral cells" as part of your active vocabulary . . . )

Why is this of such interest to haptic pronunciation teaching (HaPT)--literally, and language teaching in general, figuratively? At least three reasons. HaPT involves:
1. Synchronized movement between student and instructor or student and student.
2. Repetition of words, phrases or clauses in coordination w/#1
3. Use of gesture anchored by touch on stressed vowels in the words, phrases or clauses of #2, where one hand either grasps or taps the other hand in various ways. (To see demonstrations of some of those combinations, go check them out here.)

The study itself is perhaps something of a reach . . . in that Tan et al. are studying the effect in mouse brains, looking at the impact of fine motor learning on increased plasticity. (If those neuroscientists think the parallel between rodent brain plasticity and ours is worthy of research and publication, who am I to disagree?) See if you can "grasp" the concept from the ScienceDaily summary:

"The red nucleus, which, over the years, has received little attention in brain research, plays an important role in fine motor coordination. Here the brain learns new fine motor skills for grasping and stores what it has learned."

What this study adds for us is, to quote the authors, the potential impact of novel complex motor tasks on plasticity--in other words, learning new patterning and relationships. In the HaPT-English system today there are over 300 novel complex motor tasks, that is, combinations of gesture + touch associated with unique positions in the visual field or on the upper body. They are "novel" in the sense that the gesture complexes have been designed to be as distinct as possible from gestures associated with natural languages and cultural systems.

In fact, over the years probably 50 or 60 potential "pedagogical movement patterns" (PMPs) have been proposed and dropped due to possible parallel signalling of other meanings and significance in one culture or another. In that sense, then, the sound-motor-touch complexes, or PMPs, should be both novel to the learner and physically and interpersonally engaging.
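
To make the idea of a sound-motor-touch complex concrete, here is a minimal, purely hypothetical Python sketch of how one such PMP might be represented as data; the field names and the sample entry are mine, for illustration only, and are not taken from the published HaPT-English inventory.

from dataclasses import dataclass

@dataclass(frozen=True)
class PedagogicalMovementPattern:
    """One hypothetical sound-motor-touch complex (PMP): gesture + touch + position."""
    target_sound: str  # the sound or sound pattern being anchored
    gesture: str       # the arm/hand movement pattern
    touch: str         # how the gesture terminates in touch
    position: str      # location in the visual field or on the upper body

# Illustrative entry only -- the movement and touch details are invented.
example = PedagogicalMovementPattern(
    target_sound="[iy] as in 'me'",
    gesture="hand sweeps up and outward toward 1 o'clock",
    touch="fingertips of both hands touch at the end of the sweep",
    position="upper right of the visual field (Northeast)",
)

print(example)

The point of such a structure is simply that each complex pairs one target sound with one controlled gesture, one touch "anchor" and one position, which is what allows an inventory of several hundred distinct, non-overlapping patterns.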

This same principle applies to use of gesture in teaching and learning as well, of course. Consistent use of movement and gesture in instruction appears to promote more general brain plasticity than often assumed. So, even if you consider systematic body work useful just to keep things "loose" and flexible, you may have had it right all along.

Start a new movement today!
BPTRRCE! (Better pronunciation through rubral red cell excitation!)
And don't forget to join us for the next bi-monthly Webinar, what we call "Hapticanar" on July 17th and 18th! (For reservations, contact:
info@actonhaptic.com)

Original source:
Giorgio Rizzi, Mustafa Coban, Kelly R. Tan. Excitatory rubral cells encode the acquisition of novel complex motor tasks. Nature Communications, 2019; 10 (1) DOI: 10.1038/s41467-019-10223-y



Saturday, April 14, 2018

Out of touch and "pointless" gesture use in (pronunciation) teaching

Two recently published, interesting papers illustrate potential problems and pleasures with gesture use in (pronunciation) teaching. The authors of both, unfortunately, implicate or misrepresent haptic pronunciation training.

Note: In Haptic Pronunciation Training-English (HaPT-Eng) there is NO interpersonal touch, whatsoever. A learner's hands may touch each other, or the learner may hold something, such as a ball or pencil, that functions as an extension of the hand. Touch typically serves to control and standardize gesture--and integrate the senses--while amplifying the focus on stressed syllables in words or phrases.

This from Chan (2018), Embodied Pronunciation Learning: Research and Practice, in a special issue of the CATESOL Journal on research-based pronunciation teaching:

"In discussing the use of tactile communication or haptic interventions, they (Hişmanoglu and Hişmanoglu, 2008) advise language teachers to be careful. They cite a number of researchers who distinguish high-contact, touch-oriented societies (e.g., Filipino, Latin American, Turkish) from societies that are low contact and not touch oriented (e.g., Chinese, Japanese, Korean); the former may perceive the teacher’s haptic behavior (emphasis mine)as normal while the latter may perceive it as abnormal and uncomfortable. They also point out that in Islamic cultures, touching between people (emphasis mine) of the same gender is approved, but touching between genders is not allowed. Thus, while integrating embodied pronunciation methods into instruction, teachers need to remain constantly aware of the individuals, the classroom dynamics, and the attitudes students express toward these activities."

What Chan means by the "teacher's haptic behavior" is not defined. (She most probably means simply touching--tactile, not "haptic" in the technical sense as in robotics, for example, or as we use it in HaPT-Eng, that is: gesture synchronized with speech and anchored with intra-personal touch that provides feedback to the learner.) For example, to emphasize word stress in HaPT-Eng, in a technique called the "Rhythm Fight Club," the teacher/learner may squeeze a ball on a stressed syllable as the arm punches forward, as in boxing.
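
As a rough illustration of the timing involved, here is a small, purely hypothetical Python sketch (the stress-marking convention and function name are mine, not part of the HaPT-Eng materials) pairing each syllable of a stress-annotated phrase with the action a learner would perform in the Rhythm Fight Club:

def rhythm_fight_club_cues(phrase: str):
    """Pair each syllable with an action cue.

    Syllables are separated by '-' within a word; a leading '*' marks the
    stressed syllable, where the ball squeeze and forward punch land.
    """
    cues = []
    for word in phrase.split():
        for syllable in word.split("-"):
            if syllable.startswith("*"):
                cues.append((syllable.lstrip("*"), "squeeze the ball + punch forward"))
            else:
                cues.append((syllable, "keep the arm moving; no touch"))
    return cues

# Example: stress on the second syllable of "attention".
for syllable, action in rhythm_fight_club_cues("a-*TTEN-tion"):
    print(f"{syllable:>6}: {action}")

The touch (the squeeze) lands only on the stressed syllable; everything else stays in continuous, relaxed motion, which is the point of anchoring gesture with touch rather than gesturing freely.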

Again: There is absolutely no "interpersonal touch" or tactile or haptic communication, body-to-body, utilized in  HaPT-Eng . . . it certainly could be, of course--acknowledging the precautions noted by Chan. 

A second study, Shadowing for pronunciation development: Haptic-shadowing and IPA-shadowing, by Hamada, has a related problem with the definition of "haptic". In this nice study, subjects "shadowed" a model, that is, attempted to repeat what they heard (while viewing a script) simultaneously, along with the model. (It is a great technique, one used extensively in the field.) The IPA group had been trained in some "light" phonetic analysis of the texts before attempting the shadowing. The "haptic" group was trained in what was said (inaccurately) to be the Rhythm Fight Club. There was nonetheless a slight main effect, the haptic group being a bit more comprehensible.

The version of the RFC used was not haptic; it was only kinesthetic (there was no touch involved), just using the punching gesture, itself, to anchor/emphasize designated stressed syllables in the model sentences. The kinesthetic (touchless) version of the RFC has been used in other studies with even less success! It was not designed to be used without something for the hand to squeeze on the stressed element of the word or sentence, which is what makes it haptic. In that form, the gesture use can easily become erratic and out of control--best case! That is one of the main--and fully justified--reasons for avoidance of gesture work by many practitioners, and it is the central focus of HaPT-Eng: controlled, systematic use of gesture in anchoring prominence in language instruction.

But a slight tweak of the title of the Hamada piece from "haptic" to "kinesthetic", of course, would do the trick.

The good news: using just kinesthetic gesture (movement w/o touch anchoring), the main effect was discernable. The moderately "bad" news: it was not haptic--which (I am absolutely convinced) would have made the study much more significant--let alone more memorable, touching and moving . . .

Keep in touch! v5.0 of HaPT-Eng will be available later this summer!








Sunday, January 21, 2018

An "after thought" no longer: Embodied cognition, pronunciation instruction and warm ups!

If your pronunciation work is less than memorable or engaging, you may be missing a simple but critical step: warming up the body . . . and mind (cf., recent posts on using Mindfulness or Lessac training for that purpose.) Here's why.

A recent, readable piece by Cardona, Embodied Cognition: A Challenging Road for Clinical Neuropsychology, presents a framework that parallels most contemporary models of pronunciation instruction. (Recall the name of this blog: Haptic-integrated CLINICAL pronunciation research!) The basic problem is not that the body is not adequately included or applied in therapy or instruction, but that it generally "comes last" in the process, often just to reinforce what has been "taught," at best.

That linear model has a long history, according to Cardona, in part due to "the convergence of the localizationist approaches and computational models of information processing adopted by CN (clinical neuropsychology)". His "good news" is that research in neuroscience and embodied cognition has (finally) begun to establish more of the role of the body, relative to both thought and perception--one of parity, contributing bidirectionally to the process--as opposed to contemporary "disembodied and localization connectivist" approaches. (He might as well be talking about pronunciation teaching there.)

"Recently, embodied cognition (EC) has put the sensory-motor system on the stage of human cognitive neuroscience . . .  EC proposes that the brain systems underlying perception and action are integrated with cognition in bidirectional pathways  . . , highlighting their connection with bodily  . . . and emotional  . . .  experiences, leading to research programs aimed at demonstrating the influence of action on perception . . . and high-level cognition  . . . "  (Cardona, 2017) (The ellipted sections represent research citations in the original.) 

Pick up almost any pronunciation teaching text today and observe the order in which pronunciation features are presented and taught. I did that recently, reviewing over two dozen recent student and methods books. Almost without exception the order was something like the following:
  • perception (by focused listening) 
  • explanation/cognition (by instructor), 
  • possible mechanical adjustment(s), which may or may not include engagement of more of the body than just the head (i.e., gesture), and then 
  • oral practice of various kinds, including some communicative pair or group work 
There were occasional recommendations regarding warm ups in the instructor's notes but nothing systematic or specific as to what that should entail or how to do it. 

The relationship between perception, cognition and body action there is very much like what Cardona describes as endemic to clinical neuropsychology: the body is not adequately understood as influencing how the sound is perceived or its essential identity as a physical experience. Instead, the targeted sound or phoneme is encountered first as a linguistic construct or constructed visual image.

No wonder an intervention in class may not be efficient or remembered . . .

So, short of becoming a "haptician" (one who teaches pronunciation beginning with body movement and awareness)--an excellent idea, by the way--how do you at least partially overcome the disembodiment and localization that can seriously undermine your work? A good first step is to just consistently do a good warm up before attending to pronunciation, a basic principle of haptic work, such as this one, which activates a wide range of muscles, sound mechanisms and mind.

One of the best ways to understand just how warm ups work in embodying the learning process is this IADMS piece on warming up before dance practice. No matter how you teach pronunciation, just kicking off your sessions with a well-designed warmup, engaging the body and mind first, will always produce better results. It may take three or four times to get it established with your students, but the long term impact will be striking. Guaranteed . . . or your memory back!



Thursday, April 13, 2017

The elephant in the room: Body awareness in language (and pronunciation) teaching

In the previous post, I mentioned that we are considering proposing a colloquium at the next TESOL convention (in Chicago, in March, 2018) with the title of something like: Embodiment and the body in TESOL. That could bring together a wide range of researchers and practitioners, in addition to hapticians!

Now comes this neat little study of body awareness in elephants: Elephants know when their bodies are obstacles to success in a novel transfer task, by Dale and Plotnik of the University of Cambridge, summarized by NeuroScience News. Basically, they demonstrated that elephants are very much tuned into the impact that their bodies have on their immediate environment. In the study, subjects were posed with a problem such that they could not pass on a baton with a cord attached to the mat they were standing on--without getting off the mat first.

To the apparent surprise of the researchers, that was a piece of cake for the elephants.

Body awareness is getting more attention lately, for example in discussions of body image by scholars and "body shaming," even at #Starbucks . . .

But now for the "elephant in the room" that will be the topic of the colloquium: To paraphrase the title of the study: Researchers (and some instructors) don't know when (or how) their bodies are obstacles to effective pronunciation teaching. Not to pull the mat out from under current teaching methodology, of course, but the point of this blog for the last 7 years has been just that: systematic work with the body is ultimately the key to pronunciation teaching.

That almost certainly means the integration of "full body" methodology in computer-mediated or virtual reality environments. The technology is available to do that now, used primarily at this point in gaming, rehabilitation and the military.

So what do we mean by "the body"? Essentially, what is termed "embodied cognition": cognition that is based in some condition or movement of our physical experience. It can be gesture, posture or "regular" motion or movement in learning, but it can also relate to anything about the physical environment of the classroom, or the genders, identities or perceived body images of participants.

50 years ahead of his time, Arthur Lessac put it so well in 1967: Train the body first! Join us in Chicago (hopefully) next spring in passing on that baton! Something noBODY should miss!

Citation: University of Cambridge “Elephant’s “Body Awareness” Adds to Increasing Evidence of Their Intelligence.” NeuroscienceNews. NeuroscienceNews, 12 April 2017.

Friday, May 8, 2015

Been there, done that: One-shot (pronunciation) teaching and learning!

When or how does pronunciation work STICK--quickly?

Here is a fascinating new, seemingly counter-intuitive study on what people do with some types of new information they encounter, by Lee, O’Doherty, and Shimojo of CALTECH: Neural Computations Mediating One-Shot Learning in the Human Brain. It is summarized by ScienceDaily.com (full citation below) under a title I like: Switching on one-shot learning in the brain. Essentially what they found was:

"Many have assumed that the novelty of a stimulus would be the main factor driving one-shot learning, but our computational model showed that causal uncertainty was more important . . . If you are uncertain, or lack evidence, about whether a particular outcome was caused by a preceding event, you are more likely to quickly associate them together."

For example, if a learner immediately associates or links a pronunciation correction back to some (probably conscious, cognitive) aspect of previous instruction, the brain may just switch off the "one-shot" learning circuits and activate "been there, done that" processing instead. In other words, taking the "time," even if involuntarily, to connect back mentally to a previous schema or visual image can actually inhibit "quick" learning. Any number of studies over the decades in several fields have established the concept that in some contexts, the faster something is learned, the better. (That was, in fact, the motivation behind the early development of Total Physical Response teaching.)

So when might quick or "one-shot" learning happen? My two favourite questions for speaking/listening/pronunciation classroom teachers are: (a) How (if at all) do you follow up in class after you present and (maybe) practice some aspect of pronunciation? (b) How (if at all) do you do spontaneous correction of pronunciation in class?

 . . . I'll wait a minute while you answer those questions, yourself . . . The general answer, in one form or another, is: Not much, if at all. Frequent reasons for that: (a) Don't know how. (b) Don't have time. (c) Not necessary, as long as I do a first rate job of presenting and practice in class and (d) Learners are pretty much responsible once I have done "c"!

Bottom line: One of the reasons that gesture works--and that haptic works even better by adding systematic touch--is that to some degree it bypasses conscious cognitive "cause and effect" processing. (Asher described that more or less metaphorically as bypassing the left hemisphere in favour of the right, which was earlier said to be much more holistic, less consciously analytic, etc. As a shorthand, I'm ok with that, but in reality it is a gross oversimplification and probably creates more problems than it solves today.)

I'm not saying that we should do away with formal instruction in pronunciation, including books, explanation, drill and contextual practice in class--just adding another "quick change channel."

Using EHIEP (Essential Haptic-Integrated English Pronunciation) pedagogical movement patterns (PMPs, gestures anchored by touch and associated with a sound or sound pattern) generally will not interrupt the flow of conversation or narrative as a correction is performed. It is, in effect, operating on another channel, more outside of language awareness, not disrupting speaking and thought as much. That assumes that learners have been introduced earlier to the kinaesthetic patterning of the PMP; haptic "signalling" during classroom instruction or during homework can be exceedingly effective and seamless with respect to the course of the lesson and to other modalities.

In some sense, mindless drill doesn't engage the cognitive side of the house either--but it also can easily deaden all the senses instead if not done very carefully with as much somatic engagement as possible. (A very good example of doing drill well, however, is Kjellin's approach which I often use when anchoring a specific sound articulation.)

Haptic pronunciation teaching--Give it a shot! (A perfect place to start is here, of course!)

Full citation:
California Institute of Technology. "Switching on one-shot learning in the brain." ScienceDaily, 28 April 2015.

Thursday, January 29, 2015

A new angle on (kinaesthetic geometry or haptic pronunciation) teaching

"Embodied cognition" is, or should be, the point of departure for pronunciation teaching--and for elementary math-geometry, according to a "moving" study by Smith, King, and Hoyte, University of Vermont (Summarized by Science Daily). "Learning angles through movement: Critical actions for developing understanding in an embodied activity." (Full citation below.)

Here is one researcher's take on embodied cognition: ". . . the brain alone does not generate behavior, but that it actually works in concert with physical movements and other environmental and neural processes such as perception, action and emotion."

In the study, elementary school-age subjects who formed geometric shapes or angles with their bodies " . . . made significant gains in the understanding of angles and angle measurements . . . while interacting with a Kinect for Windows mathematics program." 

The function of body movement (and gesture) in learning has been established and understood in many disciplines or fields of research. This study adds a more direct connection to abstract concepts, not just communicative intentions or emotions. In pronunciation teaching there are several dozen "concepts" that can be used pedagogically (such as symbols for vowels); all, or at least most, of them can be represented in visual schemas or (in haptic work) in pedagogical movement patterns (gesture plus touch on a focal element in the word or phrase).

What is also nice about this study is that to create those angles with the body requires a requisite degree of accuracy and dimensionality--kinaesthetically for the learner and visually (for feedback) for the instructor. That is also the key to haptic pronunciation work--and what makes it particularly effective: precision of body position and gesture in the visual field. (One of the chief criticisms of gestural work, in general, is the inconsistent presentation of patterns in the visual field and variability of emotional expressiveness.)

The future of pronunciation teaching lies in such embodied technology.  May be time to connect with Kinect . . . 

Citation:
University of Vermont. (2015, January 26). Students master math through movement using Kinect for Windows. ScienceDaily. Retrieved January 28, 2015 from www.sciencedaily.com/releases/2015/01/150126135210.htm

Wednesday, November 13, 2013

Embodied cognitive complexity--with haptic-integrated pronunciation!

I'm doing a plenary at the BCTEAL regional conference next week. Here is the abstract:


"This interactional presentation focuses on three of the most influential ideas in research in the field today: e-learning, embodiment and cognitive complexity. Taken together, the three help us address the question: How can students effectively acquire a second language--and especially pronunciation and high level cognitive functions--when more and more of their learning experience is mediated through computers?"

The point of my talk will be the power of haptic anchoring (as a form of embodiment), both in developing technologies such as the iPhone and in representing and teaching very complex concepts--even pronunciation! Those two perspectives are converging rapidly today, especially when it comes to dealing with today's media-immersed and media-integrated learners. Ironically, embodied methodologies, with their explicit training and control of the body and management of its immediate physical milieu, provide both great promise and great cause for "a sober second look," as Canadians often remark.

I'll spend more time on the former but will return to the latter here in a later post. If you'd like to initiate that discussion now, feel free! (Note: Unfortunately, I have had to switch to moderating all comments on this blog. If you do propose a comment, I'll review it quickly. Promise!) 

Friday, May 17, 2013

In search of a "touch" for pronunciation teaching

Scott Thornbury, of the New School, recently gave a plenary at TESOL-Spain that at least had a great title: The Human Touch: How we learn with our bodies. (His blog, An A-Z of ELT, is a good read; one of his 2010 posts on embodied cognition I have linked to earlier.) From the abstract, it is clear that the "touch" in "human touch" is the more general, metaphorical use of the word, although the tactile dimension will certainly figure into his comments, particularly as developments in this area have begun linking more and more to the neurophysicality of touch (See earlier blog on the texture of touch in haptic pronunciation work, for example.) Hopefully we can get access to the text or video of the plenary. Thornbury is always a "moving" speaker.

In HICP work the application of touch, within the larger notion of embodied cognition, is in connecting vocal resonance with some type of pedagogical gesture, what we call pedagogical movement patterns. For some time I had been puzzled as to why there wasn't more--or much of any--research on the use of touch in teaching, distinct from movement and gesture in general.

What I have only recently discovered, in preliminary "re-reviews" of some seminal gestural research, is that touch, as a component of gesture, is often reported almost as an aside or simple descriptor in studies of gesture-synchronized learning or vocal production. In other words, some gestures involve touch; some do not. (One of the early influences on the development of HICP was the observation that, in American Sign Language (ASL), signs that carry high emotional loading predominantly tend to involve touch.)

In other words, interesting "data" on the effect of touch within gestural systems seems to be there, buried in earlier research. As far as I can tell, it has for the most part just not been isolated and examined as a relevant variable in learning or expression. My current research reanalyzing earlier language-teaching related gestural studies already shows promise. (More on that in subsequent blogposts and other publications, I'm sure!)  If you know of published research that unpacks that role of touch, please link it here! In the meantime, KIT!


Friday, November 2, 2012

Minimal pairs booed! Bad, Bud?


Say it ain't so! And if so, so? Using minimal pairs in reading (and, by extension, pronunciation) instruction to teach phonic rules has been the "go to" technique for generations. Now a new study by Apfelbaum, McMurray and Hazeltine at the University of Illinois suggests that phonic rules are learned much more efficiently when encountered with "variability," to quote the researchers--who are quoted in Science Daily:

"During the study, one group of students learned using lists of words with a small, less variable set of consonants, such as maid, mad, paid, and pad. This is close to traditional phonics instruction, which uses similar words to help illustrate the rules and, presumably, simplify the problem for learners. A second group of students learned using a list of words that was more variable, such as bait, sad, hair, and gap, but which embodied (italics, mine) the same rules."

EMBODIED! See that? Maybe that is why it worked--or maybe not? Caveat emptor: They used a commercially available system called Access Code, which has been around for some time, to provide the treatment for the study.

This is going to take some time to process, of course . . . At a minimum, I will first have to try it out in several different contexts and compare. There!

Thursday, November 1, 2012

Pronunciation improvement: analyze or empathize?

Just not at the same time, according to new research on the interplay between analytic and emotional processing in the brain (summarized by Science Daily) by Jack and colleagues at Case Western Reserve. One of the conclusions: "Empathetic and analytic thinking are, at least to some extent, mutually exclusive in the brain." It turns out that both types of processing occur in the same "channel," in the same neurological network, so to speak. (An earlier post, The change-the-channel fallacy, addressed some similar questions in relation to basic pronunciation change, and why, for example, oral repetition as a strategy to correct an "incorrect" articulation may not be effective in many cases.) That also explains, in part, how meta-cognitive activity (analysis, monitoring, reflection, planning) can compete with embodiment (affect, movement, felt-sense of articulation and vocal resonance) for the attention of the learner. It's sort of analogous to just not having enough "bandwidth" to handle all the messaging.

Or it would be something like trying to listen to Fraser and Dornyei simultaneously . . . Fraser in your right ear; Dornyei, in your left--which would be a terrific idea for a symposium, by the way. (Dornyei's new website is a gold mine of free downloads, by the way--as is Fraser's.)