Sunday, August 26, 2018

It's not what you learn but where: how visual context matters

 If you have seen the recent research study, Retinal-specific category learning, by Rosedahl, Eckstein and Ashby of UC-Santa Barbara (summarized by Science Daily), I have a few questions for you. (If not, read it at eye level or, better, just above eye level, holding whatever you are reading it on accordingly.)
  • Where did that happen? (Where was your body, and in what posture?)
  • What medium (paper, computer, etc.) did you read it on?
  • What was your general emotional state at the time?
  • What else were you doing while you internally processed the story? (Were you taking notes, staring out the train window, watching TV . . . ?)
  • Where in your visual field did you read it? If it was an audio source, what were you looking at as you listened to it?
Research in neuroscience and elsewhere has demonstrated that any of those conditions may significantly impact perception and learning. Rosedahl et al. (2018) focus on the last condition: position in the visual field. What they demonstrated was that something learned in one consistent or typical place in the visual field tends not to be recognized as well when it later appears somewhere else in the visual field, or at least on the opposite side.

In the study, subjects who were trained to recognize classes of objects with one eye, the other eye covered, were not as good at recognizing the same objects with the untrained eye. In other words, position in the visual field alone appeared to make a difference. The summary in Science Daily does not describe the study in much detail. For example, if the training protocol had run from left to right, that is, learning the category with the left eye (for right-eye-dominant learners), I'd predict that the effect would be less pronounced than in the opposite direction, based on extensive research on the differential sensitivity of the left and right visual fields. Likewise, I'd predict that you could find the same main effect just by comparing objects high in the visual field with those lower, toward the periphery. But the conclusion is fascinating, nonetheless.

The relevance to research and teaching in pronunciation is striking (or eye opening?) . . . If you want learners to remember sound-schema associations, do not just provide them with a visual schema in a box on paper, such as a (colored?) chart on a page; consider creating the categories or anchoring points in the active, dynamic three-dimensional space in front of them. That could be a relatively big space on the wall or closer in, right in front of them, in their personal visual space.

One possibility, which I have played with occasionally, is giving students a big piece of paper with the vowels of English displayed around the periphery, so that the different vowels are anchored more prominently with one eye or the other, or "noticeably" higher or lower in the visual field--and having them hold it very close to their faces as they learn some of the vowels. The problem there, of course, is that they can't see anything else! (Before giving up, I tried using transparent overhead projector slides, too, but that was not much better, for other reasons.)

In haptic pronunciation work, of course, that means using hands and arms in gesture and touch to create a clock-like visual schema about 12 inches away from the body, such that sounds can, in effect, be consistently sketched along designated trajectories or anchored to one specific point in space. For example, we have in the past used something called the "vowel clock," where the IPA vowels of English are mapped onto a clock face, with the high front tense vowel [i] at one o'clock and the mid back tense vowel [o] at nine o'clock. Something like that.
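For readers who like to tinker, here is a minimal sketch (in Python) of that kind of clock schema. Only the positions of [i] at one o'clock and [o] at nine o'clock come from the description above; everything else (the names, the radius, how the rest of the vowels would be assigned) is a placeholder of my own, not the actual HaPT-Eng layout.

    import math

    # Vowel clock: IPA vowel symbols mapped to clock hours.
    # Only [i] at 1 o'clock and [o] at 9 o'clock come from the post;
    # a full schema would assign the remaining vowels to other hours.
    VOWEL_CLOCK = {
        "i": 1,  # high front tense vowel, one o'clock
        "o": 9,  # mid back tense vowel, nine o'clock
    }

    def clock_position(hour, radius=1.0):
        """(x, y) of a clock hour on a circle, with 12 o'clock pointing up."""
        angle = math.radians(90 - (hour % 12) * 30)  # 30 degrees per hour, clockwise
        return (radius * math.cos(angle), radius * math.sin(angle))

    for vowel, hour in VOWEL_CLOCK.items():
        x, y = clock_position(hour)
        print(f"[{vowel}] at {hour} o'clock -> x={x:+.2f}, y={y:+.2f}")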

In v5.0 of Haptic Pronunciation Training-English (HaPT-Eng), the clock is replaced by a more effective, compass-like visual-kinesthetic schema, where hands, arms and gesture create the position in space, and touch of various kinds embodies the different vowel qualities of the sounds located on that azimuth or trajectory in the visual field. (Check that out in the fall!)
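If you think of the same layout in compass terms, each clock hour corresponds to an azimuth measured clockwise from 12 o'clock (north). Continuing the hypothetical sketch above:

    def clock_to_azimuth(hour):
        """Compass azimuth in degrees, measured clockwise from 12 o'clock (north)."""
        return (hour % 12) * 30.0

    print(clock_to_azimuth(1))  # [i] at 1 o'clock -> 30.0 degrees
    print(clock_to_azimuth(9))  # [o] at 9 o'clock -> 270.0 degrees

Again, only those two example positions come from the clock description; how v5.0 actually assigns vowels to azimuths is something to check out in the fall.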

In "regular" pronunciation or speech teaching those sorts of things go on ad hoc all the time, of course, such as when we point with gesture or verbally point at something in the immediate vicinity, hoping to briefly draw learners' attention. Conceptually, we create those spaces constantly and often very creatively. Rosendahl et al (2018) demonstrates that there is much more potentially in what (literally) meets the eye. 

Source:
University of California - Santa Barbara. (2018, August 15). Category learning influenced by where an object is in our field of vision. ScienceDaily. Retrieved August 23, 2018 from www.sciencedaily.com/releases/2018/08/180815124006.htm


Sunday, August 12, 2018

Feeling distracted, distant or drained by pronunciation work? Don't be downcast; blame your smartphone!

It all makes sense now. I knew there had to be more (or less) going on when students are not thoroughly engaged, or seem inattentive, during pronunciation teaching, mine especially. Two new studies, taken together, provide a depressing picture of what we are up against, but also suggest something of an antidote as well.

Tigger warning: This may be perceived as slightly more fun than (new/old) science. 

The first, summarized by ScienceDaily.com, is "Dealing with digital distraction: Being ever-connected comes at a cost, studies find," by Dwyer and Dunn of the University of British Columbia. From the summary:

"Our digital lives may be making us more distracted, distant and drained . . . Results showed that people reported feeling more distracted during face-to-face interactions if they had used their smartphone compared with face-to-face interactions where they had not used their smartphone. The students also said they felt less enjoyment and interest in their interaction if they had been on their phone."

What is most interesting or relevant about the studies reported, and the related literature review, is the focus on the impact of smartphone use just prior to what should be quite meaningful f2f interaction--either dinner or a more intimate conversation--THE ESSENCE OF EFFECTIVE PRONUNCIATION AND OTHER FORMS OF INSTRUCTION! Somehow the digital "appetizer" made the meal and the interpersonal interaction . . . well . . . considerably less appetizing.

Why should that be the case? The research on the multiple ways in which digital life can be depersonalizing and disconnecting is extensive and persuasive, but there may be something more "at hand" here.

A second study--which caught my eye as I was websurfing on my iPhone in the drive-through lane at Starbucks--dealt with what seem to be similar effects produced by "bad" posture: specifically, studying something with the head bowed, as opposed to studying the same material at eye level, with optimal posture. The study is "Do better in math: How your body posture may change stereotype threat response," by Peper, Harvey, Mason, and Lin of San Francisco State University, summarized in NeuroscienceNews.com.

Subjects did better and felt better if they sat upright and relaxed, as opposed to looking down at the study materials, which, according to the authors, "is a defensive posture that can trigger old negative associations."

So, add up the effects of those two studies and what do you get? Lousy posture AND digital, draining distraction. Not only do my students use smartphones WITH HEADS BOWED up until the moment class starts, but I even have them do more of it in class!

Sit up and take note, eh!

Citations:
American Psychological Association. (2018, August 10). Dealing with digital distraction: Being ever-connected comes at a cost, studies find. ScienceDaily. Retrieved August 12, 2018 from www.sciencedaily.com/releases/2018/08/180810161553.htm

San Francisco State University. (2018, August 4). Math + Good Posture = Better Scores. NeuroscienceNews. Retrieved August 4, 2018 from http://neurosciencenews.com/math-score-posture-9656/