For over a decade we have "known" that there appears to be an optimal position in the visual field in front of the learner for the "vowel clock" or compass used in haptic pronunciation teaching for the basic introduction to the (English) vowel system. Assuming:
- The compass/clock below is on the equivalent of an 8.5 x 11 inch piece of paper
- About .5 meters straight ahead of you,
- With the center at eye level (or at an equivalent relative size on the board, wall, or projector),
- Such that if the head does not move,
- The eyes will be forced at times to move close to the edges of the visual field
- To lock on or anchor the position of each vowel (some vowels could, of course, be positioned in the center of the visual field, such as schwa or some diphthongs).
- Add to that simultaneous gestural patterns concluding in touch at each of those points in the visual field (www.actonhaptic.com/videos).
The vowel compass (numbered positions, with compass directions):

1. [iy] “me” (Northeast)
2. [I] “chicken” (Northeast)
3. [ey] “may” (East)
4. [ɛ] “best” (East)
5. [ae] “fat” (Southeast)
6. [a] “hot/water” (South)
7. [ʌ] “love” (Southwest)
8. [Ɔ] “salt” (West)
9. [ow] “mow” (West)
10. [ʊ] “cook” (Northwest)
11. [uw] “moo” (Northwest)
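Just to put rough numbers on that geometry, here is a minimal Python sketch (mine, purely illustrative, not part of the haptic system) estimating how far the eyes must rotate, head held still, to fixate the page edges and each compass position above. The bearings follow the compass; the 0.12 m radius for the touch/anchor points is my assumption.

```python
# Minimal sketch (illustrative only): with an 8.5 x 11 inch display centered
# at eye level, 0.5 m straight ahead, estimate the eye rotation needed (head
# held still) to fixate the page edges and each vowel's compass position.
# The 0.12 m anchor radius is an assumption, not part of the original system.
import math

PAPER_W, PAPER_H = 8.5 * 0.0254, 11 * 0.0254   # page size in meters
DISTANCE = 0.5                                  # viewing distance in meters
RADIUS = 0.12                                   # assumed anchor radius (m)

# Compass bearings in degrees clockwise from North (up), per the compass above.
VOWELS = {
    1: ("[iy] 'me'", 45),   2: ("[I] 'chicken'", 45),
    3: ("[ey] 'may'", 90),  4: ("[ɛ] 'best'", 90),
    5: ("[ae] 'fat'", 135), 6: ("[a] 'hot/water'", 180),
    7: ("[ʌ] 'love'", 225), 8: ("[Ɔ] 'salt'", 270),
    9: ("[ow] 'mow'", 270), 10: ("[ʊ] 'cook'", 315),
    11: ("[uw] 'moo'", 315),
}

def rotation_deg(offset_m: float) -> float:
    """Eye rotation from straight ahead to a point offset_m off-center."""
    return math.degrees(math.atan2(offset_m, DISTANCE))

print(f"page edge, horizontal: {rotation_deg(PAPER_W / 2):.1f} deg")
print(f"page edge, vertical:   {rotation_deg(PAPER_H / 2):.1f} deg")

for n, (label, bearing) in sorted(VOWELS.items()):
    x = RADIUS * math.sin(math.radians(bearing))   # rightward offset (m)
    y = RADIUS * math.cos(math.radians(bearing))   # upward offset (m)
    print(f"{n:2d}. {label:16s} gaze ~({rotation_deg(x):+5.1f}, "
          f"{rotation_deg(y):+5.1f}) deg (horizontal, vertical)")
```

At half a meter, the page edges come out roughly 12 to 16 degrees off the line of sight, which is why, as noted above, the eyes are forced to move noticeably if the head stays still.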
Likewise, we were well aware of previous research by Bradshaw et al. (2016), for example, on the function of eye movement and position in the visual field related to memory formation and recall. A new study, "Eye movements support behavioral pattern completion," by Wynn, Ryan, and Buchsbaum of Baycrest's Rotman Research Institute, summarized by Neurosciencenews.com, seems (at least to me) to unpack more of the mechanism underlying that highly "proxemic" feature.
Subjects were introduced to a set of pictures of objects, each positioned uniquely on a video screen. In phase two, they were presented with sets containing both the original objects and new ones, in various conditions, and tasked with indicating whether they had seen each object before. What the researchers discovered was that, in trying to decide whether an image was new, subjects' eye patterning tended to reflect the original position in the visual field where that image had been introduced. In other words, the memory was accessed through the eye movement pattern, not "simply" the explicit features of the objects themselves. (It is a bit more complicated than that, but I think that is close enough . . . )
The study is not claiming that the eyes are "simply" replaying some pattern reflecting an initial encounter with an image, but that the overt actions of the eyes in recall are based on some type of storage or processing patterning. The same would apply to any input, even a sound heard or a sensation experienced with the eyes closed. Where the eyes "land" could reflect any number of internal processing phenomena, but the point is that a specific memory entails a processing "trail" evident in, or reflected by, observable eye movements--at least some of the time!
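To make that idea concrete, here is a toy sketch of how such "gaze reinstatement" can be quantified. It is my illustration, not the analysis Wynn, Ryan, and Buchsbaum actually used: bin fixations into a coarse grid and correlate the encoding and retrieval histograms, so a high score means the eyes at recall revisit the regions visited at encoding. The screen size, grid size, and sample data are all made up.

```python
# Toy illustration (my sketch, not the published analysis): score how
# strongly a retrieval gaze pattern "reinstates" the encoding pattern by
# binning fixations into a coarse grid and correlating the two histograms.
from collections import Counter
from math import sqrt

GRID = 4  # bin the screen into a 4 x 4 grid (arbitrary choice)

def gaze_histogram(fixations, width=1920, height=1080):
    """Count fixations per grid cell; fixations are (x, y) pixel pairs."""
    counts = Counter()
    for x, y in fixations:
        col = min(int(x / width * GRID), GRID - 1)
        row = min(int(y / height * GRID), GRID - 1)
        counts[(row, col)] += 1
    return counts

def reinstatement_score(encoding, retrieval):
    """Pearson-style correlation between the two fixation histograms."""
    h1, h2 = gaze_histogram(encoding), gaze_histogram(retrieval)
    cells = [(r, c) for r in range(GRID) for c in range(GRID)]
    a = [h1[cell] for cell in cells]
    b = [h2[cell] for cell in cells]
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sqrt(sum((x - ma) ** 2 for x in a))
    sb = sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb) if sa and sb else 0.0

# Hypothetical data: an object studied in the upper left; at test, the
# eyes drift back toward that region even though the display is different.
study = [(200, 150), (230, 180), (210, 160)]
test_same = [(220, 170), (190, 140)]
test_other = [(1600, 900), (1550, 880)]
print(reinstatement_score(study, test_same))    # ~1.0: same region revisited
print(reinstatement_score(study, test_other))   # slightly negative: no overlap
```

Real analyses track fixation sequences with proper statistical controls, but the intuition is the same: the similarity between where the eyes went then and where they go now carries memory information.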
To use the haptic system as an example: in gesturing through the matrix above, not only is there a unique gestural pattern for each vowel; if the visual display is positioned "close enough" that the eyes must also move in distinctive patterns across the visual field, you also have a potentially powerful process or heuristic for encoding and recalling sound/visual/kinesthetic/tactile complexes.
So . . . how do your students "see" the features of L2 pronunciation? Looking at a little chart on their smartphone, on a handout, or on an LCD screen across the room will still entail eye movement, but of what, and to what effect? What environmental "stimulants" are the sounds and images being encoded with, and how will they be accessed later? (See the previous blogpost on "Good looking" pronunciation teaching.)
There has to be a way, using my earlier training in hypnosis, for example, to get at learners' eye movement patterning as they attempt to pronounce a problematic sound. I would love to compare "haptic" and "non-haptic-trained" learners. Again, our informal observation with vowels, for instance, has been that students may use either or both of the gestural and eye patterning of the compass in accessing sounds they "experienced" there. Let me see if I can get that study past the human subjects review committee . . .
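If that study ever gets past the committee, the scoring could start as simply as the following sketch (again mine, purely hypothetical): given a fixation recorded while a learner attempts a particular vowel, check which compass anchor the gaze lands nearest and whether it matches the target, reusing the assumed 0.12 m anchor radius from the earlier sketch.

```python
# Hypothetical scoring sketch for the proposed comparison: which compass
# anchor does a learner's gaze land nearest while attempting a vowel?
# Anchor positions are (x, y) offsets in meters from the display center,
# matching the compass layout above; the 0.12 m radius is an assumption.
import math

RADIUS = 0.12
BEARINGS = {  # vowel position number -> compass bearing (deg from North)
    1: 45, 2: 45, 3: 90, 4: 90, 5: 135, 6: 180,
    7: 225, 8: 270, 9: 270, 10: 315, 11: 315,
}
TARGETS = {
    n: (RADIUS * math.sin(math.radians(b)), RADIUS * math.cos(math.radians(b)))
    for n, b in BEARINGS.items()
}

def nearest_position(gaze_xy):
    """Return the compass position number closest to the gaze point."""
    return min(TARGETS, key=lambda n: math.dist(gaze_xy, TARGETS[n]))

# Hypothetical trial: a learner attempting vowel 5 ([ae] "fat") fixates
# down and to the right, near the Southeast anchor.
print(nearest_position((0.07, -0.07)))   # -> 5
```

By hypothesis, a haptic-trained learner should land near the target anchor more often than a non-haptic-trained one.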
Keep in touch! v5.0 will be on screen soon!
Source: Neurosciencenews.com (April 4, 2020), "Our eye movements help us retrieve memories."
Was just reminded that in the haptic system it may be the gestural patterns alone, not the accompanying eye movements following the hand and arm movements in the visual field, that are contributing to memory enhancement and embodied anchoring of the sounds and sound patterns. Could be, indeed. For the most part, learners' eyes seem to follow the hands, even when their eyes are closed. We can study that! The point is that the "eyes come first," in effect reflecting internal processing and memory recall, not just immediate experience.