Sunday, August 26, 2018

It's not what you learn but where: how visual context matters

If you have seen the recent study "Retinal-specific category learning" by Rosedahl, Eckstein and Ashby of UC Santa Barbara (summarized by Science Daily), I have a few questions for you. (If not, read it at eye level or, better, just above eye level, holding whatever it is on accordingly.)
  • Where did it happen (where was your body; in what posture)?
  • What medium (paper, computer, etc.) did it happen on?
  • What was your general emotional state when that happened? 
  • What else were you doing while you internally processed the story? (Were you taking notes, staring out the train window, watching TV . . . ?)
  • Where in your visual field did you read it? If it was an audio source, what were you looking at as you listened to it?
Research in neuroscience and elsewhere has demonstrated that any of those conditions may significantly affect perception and learning. Rosedahl et al. (2018) focus on the last condition: position in the visual field. What they demonstrated is that what is learned in one consistent or typical place in the visual field tends not to be recognized as well if it later appears somewhere else in the visual field, or at least on the opposing side.

In the study, when subjects were trained to recognize classes of objects with one eye, the other eye covered, they were not as good at recognizing the same objects with the other eye. In other words, position in the visual field alone appeared to make a difference. The summary in Science Daily does not describe the study in much detail. For example, had the training protocol run from left to right, that is, learning the category with the left eye (in right-eye-dominant learners), I'd predict that the effect would be less pronounced than in the opposite direction, based on extensive research on the differential sensitivity of the left and right visual fields. Likewise, I'd predict that you could find the same main effect just by comparing objects high in the visual field with those lower, at the periphery. But the conclusion is fascinating, nonetheless.

The relevance to research and teaching in pronunciation is striking (or eye opening?) . . . If you want learners to remember sound-schema associations, do not just provide them with a visual schema in a box on paper, such as a (colored?) chart on a page; consider also creating the categories or anchoring points in the active, dynamic, three-dimensional space in front of them. That could be a relatively big space on the wall, or closer in, right in front of them, in their personal visual space.

One possibility, which I have played with occasionally, is giving students a big piece of paper with the vowels of English displayed around the periphery, so that the different vowels are anchored more prominently with one eye or the other, or noticeably higher or lower in the visual field, and having them hold it very close to their faces as they learn some of the vowels. The problem there, of course, is that they can't see anything else! (Before giving up, I tried using transparent overhead projector slides, too, but that was not much better, for other reasons.)

In haptic pronunciation work, of course, that means using hands and arms in gesture and touch to create a clock-like visual schema about 12 inches away from the body, such that sounds can, in effect, be consistently sketched across designated trajectories or anchored to one specific point in space. For example, we have used in the past something called the "vowel clock," where the IPA vowels of English are mapped onto a clock face, with the high front tense vowel [i] at one o'clock and the mid-back tense vowel [o] at nine o'clock. Something like that.
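If it helps to see the geometry of that schema spelled out, here is a minimal Python sketch of the clock-to-space mapping. Only the [i]-at-one-o'clock and [o]-at-nine-o'clock anchors come from the description above; the 12-inch radius, the coordinate convention, and the names (VOWEL_CLOCK_HOURS, clock_to_xy) are illustrative assumptions, not part of the actual haptic protocol.

```python
import math

# Clock-hour anchors for vowels, per the "vowel clock" described above.
# Only [i] (one o'clock) and [o] (nine o'clock) are stated in the post;
# a full schema would place the remaining IPA vowels the same way.
VOWEL_CLOCK_HOURS = {
    "i": 1,  # high front tense vowel, one o'clock
    "o": 9,  # mid-back tense vowel, nine o'clock
}

# Assumed radius of the gesture space in front of the body, in inches
# (the post suggests roughly 12 inches from the body).
RADIUS_INCHES = 12.0

def clock_to_xy(hour: float, radius: float = RADIUS_INCHES) -> tuple[float, float]:
    """Convert a clock-hour position to (x, y) coordinates in the
    learner's visual field: 12 o'clock is straight up, positive x
    is to the learner's right."""
    angle_rad = math.radians(90.0 - hour * 30.0)  # 30 degrees per hour
    return (radius * math.cos(angle_rad), radius * math.sin(angle_rad))

if __name__ == "__main__":
    for vowel, hour in VOWEL_CLOCK_HOURS.items():
        x, y = clock_to_xy(hour)
        print(f"[{vowel}] at {hour} o'clock -> x={x:+.1f} in, y={y:+.1f} in")
```

Running it places [i] in the upper right of the visual field and [o] straight out to the left, which is exactly the kind of consistent spatial anchoring the study suggests matters.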

In v5.0 of Haptic Pronunciation Training-English (HaPT-Eng), the clock is replaced by a more effective, compass-like visual-kinesthetic schema, where hand and arm gestures create the position in space, and touch of various kinds embodies the different vowel qualities of the sounds located on that azimuth or trajectory in the visual field. (Check that out in the fall!)

In "regular" pronunciation or speech teaching those sorts of things go on ad hoc all the time, of course, such as when we point with gesture or verbally point at something in the immediate vicinity, hoping to briefly draw learners' attention. Conceptually, we create those spaces constantly and often very creatively. Rosendahl et al (2018) demonstrates that there is much more potentially in what (literally) meets the eye. 

Source:
University of California - Santa Barbara. (2018, August 15). Category learning influenced by where an object is in our field of vision. ScienceDaily. Retrieved August 23, 2018 from www.sciencedaily.com/releases/2018/08/180815124006.htm


2 comments:

  1. Was just reminded that there are any number of different (more or less empirical) frameworks or models about the emotional intensity or conceptual biases of the various quadrants or locations in the visual field, depending on handedness and eye dominance, e.g., hot/cold, internal/external, modality specific (auditory, visual, kinesthetic). Add that to the mix!

  2. Here is a good primer if you are not up on current research related to eye tracking in language studies.
    http://www.cambridge.org/us/academic/subjects/languages-linguistics/applied-linguistics-and-second-language-acquisition/eye-tracking-guide-applied-linguistics-research?format=PB
