Showing posts with label left-right hemisphere specialization. Show all posts

Monday, September 28, 2020

Believing in pronunciation teaching -- at least at the beginning!

Have believed for . . . a long time . . . that early pronunciation instruction and learning is not only a higher calling, but in some sense qualitatively different from later language acquisition. Once some "quorum level" of sounds and patterns is acquired, it becomes a different process, or at least a different teaching problem. Hence the often confused debates as to what degree pronunciation work is "physical" or more "conscious/cognitive." I believe two recently published studies help unpack that dichotomy, or paradox.

A new study, Implicit pattern learning predicts individual differences in belief in God in the United States and Afghanistan, by Weinberger et al., has interesting, albeit somewhat indirect, implications for pronunciation teaching. ScienceDaily describes the focus of the study, quoting the researchers:

"This is not a study about whether God exists, this is a study about why and how brains come to believe in gods. Our hypothesis is that people whose brains are good at subconsciously discerning patterns in their environment {emphasis, mine}may ascribe those patterns to the hand of a higher power," 

In a relatively straightforward design, the research "correlated" relative ability to unconsciously identify linguistic and symbolic patterns with stronger, more fundamentalist religious belief in two cultures/faith traditions, Christianity and Islam. Subjects more adept at pattern recognition tended toward stronger belief. (There are more than a few potential cross-cultural and methodological issues with the research, but I really like the conclusion!)

And then there is this study on early versus later learning of Mandarin by Qi and colleagues at the University of Delaware, Learning language: New insights into how brain functions. Their conclusion, focusing on brain function, is summarized in ScienceDaily:

"The left hemisphere showed a substantial increase of activation later in the learning process -- the right hemisphere in the most successful learners was most active in the early, sound-recognition stage. . . "

Now granted, learning Mandarin may engage the right hemisphere a little more than learning English does, as previous studies have shown, but the basic concept holds: pattern recognition, a more specialized function of the right hemisphere, is a key feature of early or initial learning of sounds. The researchers also note that greater right hemisphere engagement was key to eventual success in the language. Implicit pattern recognition . . . not explicit, left-hemisphere-like processing.

There are no studies that I am aware of which correlate fundamentalist religious belief with acquisition of L2 sound systems, but the connection between more right-hemisphere-based, unconscious or inductive learning and early pronunciation teaching and learning is striking. It suggests that more experiential techniques and procedures, even drill, when carried out in ways that allow the brain the time and input to "intuit" or acquire the somatic patterning involved, are essential to efficient instruction. So how do we do that well?

Better pray about that . . . but will get right back to you!

Bill


Sources: 

University of Delaware. (2019, May 8). Learning language: New insights into how brain functions. ScienceDaily. Retrieved September 18, 2020 from www.sciencedaily.com/releases/2019/05/190508093716.htm

Adam B. Weinberger, Natalie M. Gallagher, Zachary J. Warren, Gwendolyn A. English, Fathali M. Moghaddam, Adam E. Green. Implicit pattern learning predicts individual differences in belief in God in the United States and Afghanistan. Nature Communications, 2020; 11: 4503 DOI: 10.1038/s41467-020-18362-3




Tuesday, November 8, 2016

The "myth-ing" link in (pronunciation) teaching: Haptic cognition

Nice piece from The Guardian Teacher Network, Four neuro-myths still prevalent in schools, debunked, by Bradley Busch (@Inner_Drive). Now granted, The Guardian is not your average refereed, first-line journal, but the sources and research cited in this readable piece are credible. Just in case you need a little more information to help a colleague finally abandon any of them, check it out. The four myths are:
(Image: Haptic Wolverine, 2016)
  • Learning styles are important in teaching and instruction.
  • We use just 10% of our brains.
  • Right vs. left brain is a relevant distinction in understanding learning and designing instruction.
  • Playing "brain" games makes you smarter and should have a more prominent place in instruction.
So, if those popular "teacher cognitions" are lacking in empirical support, especially the first and third, how should that affect the design of instruction? (Notwithstanding the fact that the second and fourth just seem so "right" at times in the classroom!)

One helpful framework, cited by Busch (and earlier on this blog), is Goswami (2008), which argues that learners learn best, in general, when taught using a multi-sensory, multiple-modality approach. From that perspective, for example, when teaching a sound, a process or a vocabulary word, as many senses as possible should be brought to the party, either simultaneously or in close proximity:
  • Auditory (sound)
  • Visual (imagery)
  • Kinesthetic (muscle movement and memory)
  • Tactile/cutaneous (surface skin touch)
  • General (somatic) sensation of vocal resonance throughout the head and upper body. 
  • In addition, the potential impact of that input is conditioned by the degree of meta-cognitive engagement (conscious awareness on the part of the learner of all that sensory input, plus existing schemas such as rules, experience and connections to related sounds, language bits and processes).
How to best do that consistently is the question. The concept of "haptic cognition" (Gentaz and Rossetti, in press) suggests how haptic awareness can function to bring together all those modalities in learning. From the conclusion:

"Taken together, this suggests that the links between perception and cognition may depend on the perceptual modality: visual perception is discontinuous with cognition whereas haptic perception is continuous with cognition." (Emphasis, mine.)

In other words, visual schemas, such as charts, colors and even text itself, may actually work against the integration of sound, resonance, movement and meaning in pronunciation teaching. Research in a number of fields has established the potentially problematic nature of the visual modality overriding the auditory, in effect disconnecting sound from meaning. The haptic modality, by contrast, generally serves to unite sensory input, connecting more readily with cognition based in sound, resonance and meaning.

Another myth, then, needs serious reexamination: that visual explanatory schemas (images and text) in textbooks and media are a good approach to pronunciation teaching, as opposed to active experience of sound, movement and awareness of resonance, with some visual support. What Gentaz and Rossetti are asserting (or confirming) is that visual imagery may not always contribute effectively to conscious, critical, cognitive integration and awareness in learning--the ultimate goal of all media advertising!

In other words, pronunciation instruction should be centered more on comprehensive haptic cognition. If you are not sure just how that happens . . . ask your local haptician!

(Coincidentally, the name of our company is: Acton Multiple-Modality Pronunciation Instruction Systems, AMPISys, inc.!)




Thursday, November 28, 2013

Giving aural comprehension "a hand"-- in haptic pronunciation training

A common question we get is something to the effect of "How do the pedagogical gestures (PMPs - movement across the visual field terminating in touch on a stressed element of a word) work?" 2012 research by Turkeltaub and colleagues at Georgetown University, reported by ScienceDaily, suggests how that happens. In that study it was demonstrated that what you are doing with your hands may affect what you hear, or at least how quickly you hear it.

In essence, subjects were instructed to respond by pressing a button with either their right or left hand when they detected a heavily embedded background sound. Right-handed responses were better at detecting fast-changing sounds; left-handed responses, better at slow-changing sounds. According to Turkeltaub, " . . . the left hemisphere likes rapidly changing sounds, such as consonants, and the right hemisphere likes slowly changing sounds, such as syllables or intonation . . . " Well, maybe . . .

The study at least further establishes the potential connection between haptic work and L2 sound change. In this case, when the learner performs a PMP, mirroring the model and listening to the model of the target sound--without overt speaking--anchoring should be enhanced and more efficient. Part of the reason for that, as reported in several previous posts, is that "fast" sounds tend to be in the right visual field (attached to the left hemisphere) and "slower" sounds, in the left.

In the EHIEP protocol for intonation, for example, the intonation contour or tone group begins in the left visual field with the left hand moving to the right until it touches the right hand on the stressed syllable or focus word. (See the Intonation PMP demonstration linked off an earlier post.) In the vowel protocols, similar PMPs are involved, and the visual display reflects the "fast and slow" phonaesthetic quality of the vowels. (See an earlier post on that as well.)

Keep in touch! (v2.0 will be released next week!)

Saturday, June 30, 2012

The (left-to-) right way to teach and anchor pronunciation


Clipart: Clker
Earlier posts have examined aspects of the visual field. In general terms, for at least English speakers, the right side comes off as (a) somewhat brighter, (b) more energetic, (c) more analytic, (d) more change-oriented . . . and, it turns out, according to this study of soccer referees, (e) a bit more positive (or less "foul"?)--when an action is seen as moving left to right, rather than in the opposite direction. If a potential foul was incurred by a player moving right to left, rather than left to right, in the visual field of the referee, it was statistically significantly more likely to be called. According to the ScienceShots research summary, this phenomenon is established in other fields as well and is actively exploited, for example, by cartoonists.

Of course, some of the basis for that has to do with the fact that each side of the visual field is (roughly speaking) "controlled by" the opposing hemisphere of the brain. The research and popular understanding of "left" vs. "right" brain functioning corresponds to many of those differing characteristics of the visual field as well.

The fact that most of the EHIEP pedagogical movement patterns go from left to right and terminate in the right visual field is, however, post-theoretical. By that I mean that the practice developed initially through classroom experience, not from the neurophysiological evidence that has come to light since. The "positive" bias goes consistently in the student's direction. In the visual field of the instructor observing students doing PMPs, on the contrary, all the motion appears to go . . . right to left. I'm going to explore this. In the meantime, just check your mood before class begins (see previous post!) and go easy on "yellow" carding of pronunciation and sloppy PMPing, eh!

Saturday, April 21, 2012

Better (looking) intonation with just the wave of a hand

Clip art: Clker
A new study, summarized by ScienceDaily, explores why the left side of the face seems more attractive than the right. One explanation offered is that the left side is more emotionally expressive, since the right hemisphere, which controls the left side of the face, is also more closely associated with emotion. Most thespian logos seem to concur. (Of course, the same does not hold in many cultures for the hands or the respective sides of the body.)

Earlier posts on the "aesthetics of the visual field," for instance this one or that one, have looked at what it may mean to position or anchor a word or intonation contour in various quadrants of the visual field. There are certainly well documented differences between left and right and upper and "downer." In various studies, the left~right dimension has been characterized with terms such as: cool~hot, soft~rough, stability~change, passive~active, holistic~particulate, analogue~digital, etc. Granted, those are very "rough" generalizations relating to the corresponding brain processing centers.

Here is the relevance to HICP work. Intonation contours are performed by the left hand, beginning in the left visual field and then moving over to the right visual field to touch the right hand as the prominent syllable, word or discourse element is articulated (anchored). We have known for some time that the quality, or fluidity and "grace," of the left hand in tracing out the intonation contour of a phrase or clause was a factor, but this brings the issue into focus. The character of the pedagogical movement pattern with that hand does much to set up or mirror the emotional and affective mood of the utterance, before the key information is foregrounded.

What is also intriguing is that for the observer, the left-to-right gesture is read in the right eye, which has been shown in many studies--for most people--to be the more emotionally reactive or intense. Express with the left; read and foreground with the right.
Wave if you get it . . . (with your left hand, of course!)