Credit: Clker.com
One of the fundamental assumptions of materials design is that visual salience--what stands out due to design, color, placement, etc.--is key to uptake. In practice, the claims made for it go much further than that: brighter colors, striking photos and engaging layouts are the stuff of advertising and marketing. Marketing research has long established the potential impact of all of those, in addition to the seduction of the other senses.
A new study by Henderson and Hayes of UC Davis, "Meaning-based guidance of attention in scenes as revealed by meaning maps," as reported by NeuroscienceNews.com, provides a striking alternative view of how visual processing and visual attention work. Quoting from the summary:
"Saliency is relatively easy to measure. You can map the amount of saliency in different areas of a picture by measuring relative contrast or brightness, for example. Henderson called this the “magpie theory” our attention is drawn to bright and shiny objects.“It becomes obvious, though, that it can’t be right,” he said, otherwise we would constantly be distracted."
What the Henderson and Hayes (2017) research suggests is that what we attend to in the visual field in front of us has more to do with the mental schema or map we bring to the experience than with the "bright and shiny" objects there. Of course, that does not exclude being at least momentarily distracted by those features, or, more importantly, having visual "clutter" undermine the connection to the learner's body or somatic experience of a sound or expression.
There have been literally dozens of blog posts here exploring the basic "competition" between visual and auditory modalities. Hint: visual almost always trumps audio or haptic, except when audio and haptic team up in some sense--as in haptic pronunciation teaching! The question is: if the impact of glitz and graphics is a wash, or random at best, what do optimal "maps" in pronunciation teaching "look" like to the learner? The problem, in part, lies in the way the question is stated--in the visual metaphor itself: look.
Whenever I get stuck on a question of modalities in learning, I go back to Lessac (1967): Train the body first. Anchor sound in body movement and vocal resonance, and then use that mapping in connecting up words and speaking patterns in general. (If you are into mindfulness training, you get this!) The reference point for the learner is always what it FEELS like in the entire body to pronounce a word or phrase, not its visual/graphic representation, cognitive rationale, or procedural protocol for doing it!
So, how can we describe the right map in pronunciation teaching? Gendlin's (1981) concept of "felt sense" probably captures it best: a combination of movement, touch and resonance generated by the sound, combined with cognitive insight into and understanding of the process and place of the sound in the phonology of the L2. But always IN THAT ORDER, with that sense of priorities. The key is to be able, in effect, to "rate" or scale the intensity and boundaries of a sensation in the body--still a highly cognitive, conscious process. From there, the sensation can be recalled or moderated, or even associated with other concepts or symbols.
In other words, in pronunciation instruction the body is the territory; designated locations, measured sensations and movements across it are the map that must be in place before words and meanings are efficiently attached or reattached. Setting up the map still requires . . . serious drill and practice. Once done, feel free to channel your "inner magpie": glitz, color, song and dance!
Original source:
Henderson, J. M., & Hayes, T. R. (2017). "Meaning-based guidance of attention in scenes as revealed by meaning maps." Nature Human Behaviour. Published online September 25, 2017. doi:10.1038/s41562-017-0208-0