Sunday, December 20, 2015

Lost in space: Why phoneme vowel charts may inhibit learning of pronunciation

In a recent workshop I inadvertently suggested that the relative distances between adjacent English vowels on various standard charts, such as the IPA matrix or those used in pronunciation teaching, were probably not all that important. Rather than "stand by" that comment, I need to "distance myself" from it! Here's why.

Several posts on the blog, including a recent one, have dealt with the basic question of to what extent visual stimuli can potentially undermine learning of sound, movement and touch (the basic stuff of the haptic approach to pronunciation teaching). I went back to Doeller and Burgess (2008), "Distinct error-correcting and incidental learning of location relative to landmarks and boundaries" (full citation below), one of the key pieces of research and theory that our haptic work has been based on.

In essence, that study demonstrated that we have two parallel systems for learning locations, in two different parts of the brain: one works from landmarks in the visual (or experiential) field, the other from the boundaries of the field. Furthermore, boundaries tend to override landmarks in navigating. (For instance, when finding your way in the dark, your first instinct is to go along the wall, touching what is there if possible, not to steer by landmarks or objects in the field in front of you, whose relative locations may be much less fixed in your experience.)

Most importantly for us, boundaries tend to be learned incidentally; landmarks, associatively. In other words, location relative to boundaries works more like a map: the exact point is identified first by where it is relative to the boundary, not to the other points within the map itself. Landmarks, conversely, tend to be learned relative to each other, not in relation to the boundary of the field, which may be irrelevant anyway, or not even conceptually present.
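For readers who like a concrete analogy, here is a minimal sketch of that distinction in Python. The field, the landmarks and all the numbers are invented for illustration; they are not from the study itself.

```python
from dataclasses import dataclass

@dataclass
class Point:
    x: float
    y: float

FIELD_W, FIELD_H = 10.0, 10.0            # rectangular "field" with fixed walls
LANDMARKS = [Point(2, 3), Point(7, 8)]   # movable objects within the field

def boundary_code(p):
    """Map-like encoding: distances from each wall (the boundary)."""
    return {"left": p.x, "right": FIELD_W - p.x,
            "bottom": p.y, "top": FIELD_H - p.y}

def landmark_code(p):
    """Associative encoding: offsets to the other objects; boundary ignored."""
    return [(lm.x - p.x, lm.y - p.y) for lm in LANDMARKS]

loc = Point(4, 5)
print(boundary_code(loc))   # stable as long as the walls stay put
print(landmark_code(loc))   # changes whenever the landmarks move
```

The point of the toy example: the boundary code stays valid no matter what happens to the other objects in the field, which is roughly why boundary-based location learning can be incidental, while landmark-based learning depends on the landmarks' relation to each other.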

So what does that imply for teaching English vowels? 
  • Learners still actively working on improving their pronunciation generally access the vowels in memory through a picture or image of a matrix with the vowels placed in it. (Having asked learners for decades how they "get to" vowels, the consistent answer is something like: "I look at the vowel chart in my mind.")
  • The relative position of those vowels, especially adjacent vowels, is almost certainly tied more to the boundaries of the matrix, its sides and intersecting lines, than to the relative auditory and articulatory qualities of the sounds themselves.
  • The impact of visual schema and processing over auditory and haptic is such that, at least for many learners, the chart does little to facilitate access to the articulatory and somatic features of the phonemes themselves. (I realize that is an empirical question that cries out for a controlled study!)
  • The phonemic system of a language is based fundamentally on relative distances between phonemes. The brain generally perceives phonemic differences as binary, e.g., a sound is either 'u' or 'U', either 'p' or 'b', even though the actual sound produced may be exceedingly close to the conceptual "boundary" separating them. (A toy sketch of this appears at the end of the post.)
  • Haptic work basically backgrounds visual schema and visual prominence, attempting to promote a stronger association between the sounds themselves and the "distance" between them, in part by locating them in the visual field immediately in front of the learner, using gesture, movement and touch, so that the learner experiences the relative phonemic "differences" as distinctly as possible.
  • We still do some initial orientation to the vowel system using a clock image with the vowels imposed on it, to establish the technique of using vowel numbers for correction and feedback, but we try to get away from that as soon as possible, since that visual schema, too, gives the impression that the vowels are somehow "equidistant" from each other--and, according to Doeller and Burgess (2008), the vowels are probably more readily associated with the boundary of the clock than with each other.
 (Based on excerpt from Basic Haptic Pronunciation, v4.0, forthcoming, Spring, 2016.)
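To make the "binary perception" point above concrete, here is a minimal sketch in Python. The acoustic cue (F1, roughly vowel height), the vowel pair, and the 400 Hz category boundary are all hypothetical, chosen only to illustrate how a continuous signal gets forced into discrete phonemic categories.

```python
# Toy model of categorical ("binary") phoneme perception. The cue and
# the boundary value below are invented for illustration, not measured data.

BOUNDARY_F1 = 400.0   # hypothetical F1 boundary between /u/ and /U/

def perceive(f1_hz):
    """The listener reports a category, never the raw acoustic value."""
    return "u" if f1_hz < BOUNDARY_F1 else "U"

# Productions only 4 Hz apart, straddling the boundary, are heard
# as DIFFERENT phonemes ...
print(perceive(398.0), perceive(402.0))   # -> u U
# ... while productions 80 Hz apart on the same side are heard as
# the SAME phoneme.
print(perceive(300.0), perceive(380.0))   # -> u u
```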

Doeller, C. and Burgess, N. (2008). Distinct error-correcting and incidental learning of location relative to landmarks and boundaries. Proceedings of the National Academy of Sciences 105(15): 5909-5914. Retrieved December 19, 2015, from http://www.pnas.org/content/105/15/5909.long


Friday, December 18, 2015

On developing excellent pronunciation and gesture--according to John Wesley, 1770.

Have just rediscovered Wesley's delightful classic "Directions Concerning Pronunciation and Gesture", a short pamphlet published in 1770. The style that Wesley was promoting was to become something of a hallmark of the Wesleyan movement: strong, persuasive public speaking. Although I highly recommend reading the entire piece, here are some of Wesley's (slightly paraphrased) "rules", well worth heeding, most of which are as relevant today as they were then.

 Pronunciation
  • Study the art of speaking betimes and practice it as often as possible.
  • Be governed in speaking by reason, rather than example, and take special care as to whom you imitate.
  • Develop a clear, strong voice that will fill the place wherein you speak.
  • To do that, read or speak something aloud every morning for at least 30 minutes.
  • Take care not to strain your voice at first; start low and raise it by degrees to a height.
  • If you falter in your speech, read something in private daily, and pronounce every word and syllable so distinctly that they may have all their full sound and proportion . . . (in that way) you may learn to pronounce them more fluently at your leisure.
  • Should you tend to mumble, do as Demosthenes, who cured himself of this defect by repeating orations every day with pebbles in his mouth.
  • To avoid all kinds of unnatural tones of voice, endeavor to speak in public just as you do in common conversation.
  • Labour to avoid the odious custom of spitting and coughing while speaking.
Gesture
  • There should be nothing in the dispositions and motions of your body to offend the eyes of the spectators.
  • Use a large looking glass as Demosthenes (again) did; learn to avoid all disagreeable and "unhandsome" gestures.
  • Have a skillful and faithful friend to observe all your motions and to inform you which are proper and which are not.
  • Use the right hand most, and when you use the left let it only be to accompany the other.
  • Seldom stretch out your hand sideways, more than half a foot from the trunk of your body.
  •  . . . remember while you are actually speaking you are not to be studying any other motions, but use those that naturally arise from the subject of your discourse.
  • And when you observe an eminent speaker, observe with utmost attention what conformity there is between his action and utterance and these rules. (You may afterwards imitate him at home 'till you have made his graces your own.)
 Most of the "gesture" guidelines and several of those for pronunciation are employed explicitly in public speaking training--and in haptic pronunciation teaching. Even some of the more colorful ones are still worth mentioning to students when encouraging effective speaking of all sorts.



Monday, December 14, 2015

Can't see teaching (or learning) pronunciation? Good idea!

A common strategy of many learners attempting to "get" the sound of a word is to close their eyes. Does that work for you? My guess is that learners who do so are highly visual and thus more easily distracted by what they see. Being more auditory-kinesthetic and somewhat color "insensitive" myself, I'm instead more vulnerable to random background sounds, movement or vibration. Research by Molloy et al. (2015), summarized by Science Daily (full citation below), helps to explain why that happens.

In a study of what they term "inattentional deafness," using MEG (magnetoencephalography), the researchers were able to identify both the place in the brain and the point at which auditory and visual processing in effect "compete" for prominence. As has been reported more informally in several earlier posts, visual consistently trumps auditory, which accounts for the common life-ending experience of having been oblivious to the sound of screeching tires while crossing the street fixated on a smartphone screen . . . The same applies, by the way, to haptic perception as well--except in some cases where movement, touch, and auditory team up to override visual.

The classic "audio-lingual" method of language and pronunciation teaching, which made extensive use of repetition and drill, relied on a wide range of visual aids and color schemas, often with the rationale of maintaining learner attention. Even the sterile, visual isolation of the language lab's individual booth may have been especially advantageous for some--but obviously not for everybody!

What that research "points to" (pardon the visual-kinesthetic metaphor) is more systematic control of attention (or inattention) to the visual field in teaching and learning pronunciation. Computer-mediated applications go to great lengths to manage attention, but, ironically, forcing the learner's eyes to focus or concentrate on words and images, no matter how engaging, may, according to this research, also function to negate or at least lessen attention to the sounds and pronunciation. Hence the intuitive response of many learners to shut their eyes when trying to capture or memorize sound. (There is, in fact, an "old" reading instruction system called the "Look up, say" method.)

The same underlying, temporary "inattentional deafness" also probably applies to the use of color associated with phonemes--or even the IPA system of symbols in representing phonemes. Although such visual systems do illustrate important relationships between visual schemas and sound, helping learners understand the inventory of phonemes and their connection to letters and words in general, in the actual process of anchoring and committing pronunciation to memory they may in fact diminish the brain's ability to efficiently and effectively encode the sound and the movement used to create it.

The haptic (pronunciation teaching) answer is to focus more on movement, touch and sound, integrating those modalities with visual. The conscious focus is on gesture terminating in touch, accompanied by articulating the target word, sound or phrase simultaneously with resonant voice. In many sets of procedures (what we term protocols) learners are instructed either to close their eyes or to focus intently on a point in the visual field as the sound, word or phrase to be committed to memory is spoken aloud.

The key, however, may be just how you manage those modalities, depending on your immediate objectives. If the objective is phonics, then connecting letters/graphemes to sounds with visual schemas makes perfect sense. If it is, on the other hand, anchoring or encoding pronunciation (and possibly recall as well), the guiding principle seems to be that sound is best heard (and experienced somatically, in the body) . . . but, to the extent possible, not seen!

See what I mean? (You heard it here!)

Full citation:
Molloy, K., Griffiths, T., Chait, M., and Lavie, N. (2015). Inattentional deafness: visual load leads to time-specific suppression of auditory evoked responses. Journal of Neuroscience 35(49): 16046-16054.