Showing posts with label modalities. Show all posts

Friday, September 29, 2017

The "Magpie Effect" in pronunciation teaching: what you see is (not necessarily) what you get!

Wow! I knew there had to be a term for flashy, beautiful visual aids in pronunciation teaching, and education in general, that may (best case) contribute very little, if anything, to the process: the Magpie Effect.


One of the fundamental assumptions of materials design is that visual salience--what stands out due to design, color, placement, etc.--is key to uptake. In practice, the claims often go far beyond that. Brighter colors, striking photos and engaging layouts are the stuff of advertising and marketing, and marketing research has long established the potential impact of all of those, in addition to the seduction of the other senses.

A new study by Henderson and Hayes of UC Davis, “Meaning-based guidance of attention in scenes as revealed by meaning maps,” as reported by NeuroscienceNews.com, provides a striking alternative view into how visual processing and visual attention work. Quoting from the summary:

"Saliency is relatively easy to measure. You can map the amount of saliency in different areas of a picture by measuring relative contrast or brightness, for example. Henderson called this the “magpie theory”: our attention is drawn to bright and shiny objects. “It becomes obvious, though, that it can’t be right,” he said; otherwise we would constantly be distracted."

What the Henderson and Hayes (2017) research suggests is that what we attend to in the visual field in front of us has more to do with the mental schema or map we bring to the experience than with the "bright and shiny" objects there. Of course, that does not exclude being at least momentarily distracted by those features, or, more importantly, having such visual "clutter" undermine the connection to the learner's body or somatic experience of a sound or expression.

There have been literally dozens of blog posts here exploring the basic "competition" between visual and auditory modalities. Hint: visual almost always trumps audio or haptic, except when audio and haptic team up in some sense--as in haptic pronunciation teaching! The question is, if the impact of glitz and graphics may be a wash, or random at best, what do optimal "maps" in pronunciation teaching "look" like to the learner? The problem, in part, is in the way the question is stated, the visual metaphor itself: look.

Whenever I get stuck on a question of modalities in learning, I go back to Lessac (1967): train the body first. Anchor sound in body movement and vocal resonance, and then use that mapping in connecting up words and speaking patterns in general. (If you are into mindfulness training, you get this!) The reference for the learner is always what it FEELS like in the entire body to pronounce a word or phrase, not its visual/graphic representation or the cognitive rationale or procedural protocol for doing it!

So, how can we describe the right map in pronunciation teaching? Gendlin's (1981) concept of "felt sense" probably captures it best: a combination of movement, touch and resonance generated by the sound, combined with cognitive insight/understanding of the process and place of the sound in the phonology of the L2. But always IN THAT ORDER, with that sense of priorities. The key is to be able, in effect, to "rate" or scale the intensity and boundaries of a sensation in the body--still a highly cognitive, conscious process. From there, the sensation can be recalled or moderated, or even associated to other concepts or symbols.

In other words, in pronunciation instruction the body is the territory; designated locations, measured sensations and movements across it are the map that must be in place before words and meanings are efficiently attached or reattached. Setting up the map still requires . . . serious drill and practice. Once done, feel free to channel your "inner Magpie": glitz, color, song and dance!

Original source:
Henderson, J. M. and Hayes, T. R. (2017). “Meaning-based guidance of attention in scenes as revealed by meaning maps.” Nature Human Behaviour. Published online September 25, 2017. doi:10.1038/s41562-017-0208-0

Sunday, November 13, 2016

(New) Haptic cognition-based pronunciation teaching workshop at 2016 TESL Ontario Conference

If you are coming to the 2016 TESL Ontario Conference later this month (November 24 and 25 in Toronto) please join us for the Haptic Pronunciation Teaching Workshop, on Thursday, 3:45 to 4:45. This will introduce the new "haptic cognition" framework for (amazingly) more efficient and integrated pronunciation modeling and correction that we have been developing for the last year or so. (See previous post on the applicability of a haptic cognition-based model to pronunciation teaching in general.)
HaPT-E, v4.0

Haptic cognition defined: 
  • The felt sense of pronunciation change (Gendlin, 1996) – somatic (body) awareness and conscious, meta-cognitive processing 
  • Change activated consciously and initially through body movement pattern use (Lessac, 1967) 
  • Haptic (movement+touch) uniting, integrating and “prioritizing” of modalities in anchoring and recall (Minogue, 2006)
Modalities of the model:
  • Meta-cognitive (rules, schemas, explanations, conscious association of sound or form to other sounds or forms)
  • Auditory (sound patterns presented or recalled) 
  • Haptic
    • Kinesthetic (movement patterns experienced/performed or mirrored by the body, gesture, motion patterns)
    •  Cutaneous (differential skin touch: pressure, texture, temperature)
  • Vocal resonance (vibrations throughout upper body, neck and head)
  • Visual (visual schema presented or recalled: graphemes, charts, colors, modeling, demonstrations) 
General instructional principles:
  • Get to "haptic" as soon as possible in modeling and correcting.
  • Use precise pedagogical movement patterns (PMPs), including tracking and speed in the visual field.
  • Ensure as much cutaneous anchoring as possible.
  • Go “light” on visual; avoid overly “gripping” visual schema during haptic engagement.
  • Use as much vocal resonance as possible.
  • Repeat as few times as possible.
  • Ensure that homework/follow-up is feasible, clear—and done (including post hoc reporting of work, results and incidental/related learnings).
  • Use haptic PMPs first in correction/recall prompting, before providing an oral, spoken model.
The elaborated, audio-embedded Powerpoint from the workshop will be available later this month.

KIT







Tuesday, November 8, 2016

The "myth-ing" link in (pronunciation) teaching: Haptic cognition

Nice piece from The Guardian Teacher Network, Four neuro-myths still prevalent in schools, debunked, by Bradley Busch (@Inner_Drive). Now granted, The Guardian is not your average refereed, first-line journal, but the sources and research cited in the readable piece are credible. Just in case you need a little more information to help your colleague finally abandon any of them, check it out. The four myths are:
  • Learning styles are important in teaching and instruction
  • We use just 10% of our brains
  • Right vs left brain is a relevant distinction in understanding learning and designing instruction
  • Playing "brain" games makes you smarter and should have a more prominent place in instruction
So, if those popular "teacher cognitions" are lacking in empirical support, especially the first and third, how should that affect the design of instruction? (Notwithstanding that the second and fourth can just seem so "right" at times in the classroom!)

One helpful framework, cited by Busch (and this blog earlier) is Goswami (2008), which argues that learners learn best, in general, when taught using a multi-sensory, multiple-modality approach. From that perspective, for example, when teaching a sound or process or vocabulary word, as many senses as possible must be brought to the party, either simultaneously or in close proximity:
  • Auditory (sound)
  • Visual (imagery)
  • Kinesthetic (muscle movement and memory)
  • Tactile/cutaneous (surface skin touch)
  • General (somatic) sensation of vocal resonance throughout the head and upper body. 
  • In addition, the potential impact of that is conditioned by the degree of meta-cognitive engagement (conscious awareness on the part of the learner of all that sensory input, plus existing schemas, such as rules, experience and connections to related sounds and language bits and processes). 
How to best do that consistently is the question. The concept of "haptic cognition" (Gentaz and Rossetti, in press) suggests why haptic awareness can function to bring together all those modalities in learning. From the conclusion:

"Taken together, this suggests that the links between perception and cognition may depend on the perceptual modality: visual perception is discontinuous with cognition whereas haptic perception is continuous with cognition." (Emphasis, mine.)

In other words, visual schema, such as charts, colors and even text itself, may actually work against integration of sound, resonance, movement and meaning in pronunciation teaching. Research from a number of fields has established the potentially problematic nature of the visual modality overriding the auditory, in effect disconnecting sound from meaning. By contrast, the haptic modality generally serves to unite sensory input, connecting more readily with cognition based in sound, resonance and meaning.

Another myth, then--that visual explanatory schemas (images and text) are a good approach to pronunciation teaching in textbooks and media, as opposed to active experience of sound, movement and awareness of resonance, plus some visual support--needs serious reexamination. What Gentaz and Rossetti are asserting (or confirming) is that visual imagery may not always effectively contribute to conscious, critical, cognitive integration and awareness in learning--the ultimate goal of all media advertising!

In other words, pronunciation instruction should be centered more on comprehensive haptic cognition. If you are not sure just how that happens . . . ask your local haptician!

(Coincidentally, the name of our company is: Acton Multiple-Modality Pronunciation Instruction Systems, AMPISys, Inc.!)




Friday, June 17, 2011

Keeping listening in the picture . . . or out of it!

Several posts have addressed the question of the relationship between learning modalities in general learning and pronunciation teaching. What this important 2010 study by Lavie and Macdonald of the Institute of Cognitive Neuroscience at UCL, reported by Science Daily, demonstrates is that in some contexts visual input appears to trump auditory input. In other words, being engaged visually in a task may limit the ability to hear critical information.

We know from experience that some highly visual learners may find learning pronunciation especially difficult. This helps to explain why. From whatever source, even stunning visual aids or computer displays, "visual interference" with learning new sounds may be significant. The implication for EHIEP instruction is that haptic and auditory input, key components of multiple-modality instruction--along with a modest amount of video on the side, perhaps--make up the best overall learning format. Get the picture . . . or the sound . . . take your pick!

Thursday, June 2, 2011

How to engage the haptic-o-phobes and the kinesthetically challenged

From this 2011 research by Yang (University of Wisconsin-Milwaukee), Ringberg (Copenhagen Business School), Mao (University of Central Florida), and Peracchio (University of Wisconsin-Milwaukee), it appears that the secret is to check first as to whether the learner is creative enough to love haptic in the first place. If not, forget it. If so, however, it appears that the sufficiently creative are much more open to working in their non-dominant cognitive styles or modalities. HICP assumes that training in non-dominant modalities is critical for the learner, in many cases explicitly avoiding the learner's primary "cluttered" channel(s) or cognitive style.

So perhaps, we need to do some more creativity training earlier on to engage the dull, unimaginative stragglers, before we ask them to hyper-gesticulate in public . . . or possibly forward them on to a program that better fits their personalities? That should not be too difficult, eh.