Showing posts with label visual field.

Tuesday, February 28, 2023

Using gesture and movement to avoid "Pop Outs" in (pronunciation) teaching!

I like this study. One of the biggest obstacles to effective teaching (of anything) is sudden distraction, when what should have "popped in" easily in a lesson . . . doesn't . . . because of what just "popped out or up." There is an interesting piece of research by Klink et al. on visual distraction--and a potential strategy for dealing with it--summarized by Neurosciencenews.com as "Trained Brains Rapidly Suppress Visual Distractions." The title of the original study, published in PNAS: "Inversion of pop-out for a distracting feature dimension in monkey visual cortex." (Ignore that term "monkey" in the original there!)

In essence, the "subjects" were trained as follows (from the summary):

"The researchers trained monkeys to play a video game in which they searched for a unique shape among multiple items, while a uniquely colored item tried to distract them. As soon as the monkeys found the unique shape, they made an eye movement to it to indicate their choice. After some training, monkeys became very good at this game and almost never made eye movements to the distractor."

So what is a potential application of that "discovery" in teaching? What visual distractions are your students subject to in the classroom? On a task by task basis, how do you maintain student attention to the focus of the activity? 

For example, in haptic pronunciation teaching, instructor and students do a great deal of repeating words, phrases, sentences and dialogues together (not repeating after) while using speech-synchronized gestures continuously. In this choreographed technique, built on what we call "movement, tone and touch techniques" (MT3s), it is essential that instructor and student gesturing be constantly synchronized, throughout. You can "SEE" just how disruptive a visual distraction in the room could be in the visual fields of students. 

On the flip side, however, you can also "SEE" how MT3 training, itself--or even typical gesture use in teaching or communication, whether designed or impromptu, can, in principle, serve to enhance general visual attention in the classroom. 

How free of distraction or immune to it is the visual field in your classroom? Can you manage it better, more "movingly?" 

Source: Klink, P., Teeuwen, R., Lorteije, J., and Roelfsema, P. (2023). Inversion of pop-out for a distracting feature dimension in monkey visual cortex. PNAS, February 22, 2023. https://doi.org/10.1073/pnas.2210839120

Thursday, January 27, 2022

BIG news about Haptic Pronunciation Teaching!

Size DOES matter, it turns out, according to research by Masarwa, Kreichman, and Gilaie-Dotan of Bar Ilan University and University College London, summarized by NeuroscienceNews.com as "In visual memory, size matters." One of the key features of Haptic Pronunciation Teaching (HaPT) is the use of relatively large, sweeping gestures across the visual field in front of the class to represent sounds and patterns of the language (with students typically doing them along with the instructor). We have known for a couple of decades that that "larger than life" visual representation of the sounds in communicating with the class is highly effective. 

Now we have a little more evidence as to just why. In the study, simply put, under various experimental conditions, it was demonstrated that the larger image was remembered better. The researchers' conclusion:

" Our study indicates that physical stimulus dimensions (as the size of an image) influence memory, and this may have significant implications to learning, aging, development, etc."

Fascinating study, linked below. In other words, our method is "bigger" than your method. There is actually much MORE to the story, of course! Go here to find out!


and, of course, keep in touch!

Wednesday, June 24, 2020

Getting a feel for pronunciation: What our pupils can tell us!

What do you do with your eyes when you are struggling to understand something that you are listening to? (Quick: Write that down.) Some of that, of course, depends on your personal wiring, but this new study, "Asymmetrical characteristics of emotional responses to pictures and sounds: Evidence from pupillometry," by Nakakoga, Higashi, Muramatsu, Nakauchi, and Minami of Toyohashi University of Technology, as reported by Neurosciencenews.com, sheds some new "light" on how the emotions may exert influence on our ongoing perception and learning. Using eye tracking and emotion-measuring technology, the researchers uncovered a striking pattern.

From the summary (boldface, mine):
"It suggests that visual perception elicits emotions in all attentional states, whereas auditory perception elicits emotions only when attention is paid to sounds, thus showing the differences in the relationships between attentional states and emotions in response to visual and auditory stimuli."

So, what does that imply for the pronunciation teacher? Several things, including the importance of what is going on in the visual field of learners when they are attempting to learn or change sounds. It has long been established that the process of learning pronunciation is especially susceptible to emotion. It can be an extraordinarily stressful experience for some learners. Even when there are no obvious stressors present, techniques such as relaxation or warm ups have been shown to facilitate learning of various aspects of pronunciation.

Consequently, any emotional trigger in the visual field of the learner can have a "pronounced" impact, positive or negative, regardless of what the instructor is attempting to direct the learners' attention to. If, on the other hand, learners' attention is focused narrowly on auditory input, you have a better chance of managing the emotional impact FOR GOOD, provided you can successfully manage or restrict whatever is going on in the visual field of the learner that could be counterproductive emotionally. (Think: Hypnosis 101 . . . or a good warm up . . . or a mesmerizing lecture!)

That doesn’t mean we teach pronunciation with our eyes closed . . . when  it comes to the potential impact of the visual field on our work. Quite the contrary! How does the “front” of the room (or the scenes on screen) feel to your pupils? Can you enhance that? 

To learn more about one good (haptic) way to do that, join us at the next webinars!

Original research (open access): Nakakoga, S., Higashi, H., Muramatsu, J., Nakauchi, S., and Minami, T. "Asymmetrical characteristics of emotional responses to pictures and sounds: Evidence from pupillometry." PLOS ONE. doi:10.1371/journal.pone.0230775

Saturday, March 14, 2020

Pronunciation in the eyes of the beholder: What you see is what you get!

This post deserves a "close" read. Although it applies new research to exploring basics of haptic pronunciation teaching specifically, the complex functioning of the visual field, itself, and eye movement in teaching and learning, in general, is not well understood or appreciated.

For over a decade we have "known" that there appears to be an optimal position in the visual field in front of the learner for the "vowel clock" or compass used in the basic introduction to the (English) vowel system in haptic pronunciation teaching. Assuming:
  • The compass/clock below is on the equivalent of an 8.5 x 11 inch piece of paper,
  • Held about .5 meters straight ahead of your face,
  • With the center at eye level (or at an equivalent relative size on the board, wall or projector),
  • Such that, if the head does not move,
  • The eyes will be forced at times to move close to the edges of the visual field
  • To lock on or anchor the position of each vowel (some vowels could, of course, be positioned in the center of the visual field, such as schwa or some diphthongs),
  • And add to that simultaneous gestural patterns concluding in touch at each of those points in the visual field (www.actonhaptic.com/videos).
Something like this:

  • Northeast: 1. [iy] "me"; 2. [I] "chicken"
  • East: 3. [ey] "may"; 4. [ɛ] "best"
  • Southeast: 5. [ae] "fat"
  • South: 6. [a] "hot/water"
  • Southwest: 7. [ʌ] "love"
  • West: 8. [Ɔ] "salt"; 9. [ow] "mow"
  • North/Northwest: 10. [ʊ] "cook"; 11. [uw] "moo"

(The numbers are clock positions; the West-East axis, and the center of the compass, sit at eye level.)

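If it helps to see that layout as data: here is a toy Python rendering of the chart as a lookup table. The clock positions, symbols and key words are copied from the chart above; the dictionary, function and names are purely my own illustration, not part of any actual EHIEP/HaPT-Eng materials.

```python
# Toy lookup table for the vowel compass/clock above. Illustrative only;
# not actual EHIEP/HaPT-Eng software.
VOWEL_COMPASS = {
    "Northeast":       [(1, "iy", "me"), (2, "I", "chicken")],
    "East":            [(3, "ey", "may"), (4, "ɛ", "best")],
    "Southeast":       [(5, "ae", "fat")],
    "South":           [(6, "a", "hot/water")],
    "Southwest":       [(7, "ʌ", "love")],
    "West":            [(8, "Ɔ", "salt"), (9, "ow", "mow")],
    "North/Northwest": [(10, "ʊ", "cook"), (11, "uw", "moo")],
}

def position_of(vowel: str) -> str:
    """Report the clock position and compass point for a vowel symbol."""
    for compass, cells in VOWEL_COMPASS.items():
        for hour, symbol, key_word in cells:
            if symbol == vowel:
                return f"[{symbol}] ('{key_word}') sits at {hour} o'clock, {compass}"
    return f"[{vowel}] is not on the compass (it may sit at the center, like schwa)"

print(position_of("ey"))  # [ey] ('may') sits at 3 o'clock, East
```
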
Likewise, we were well aware of previous research, by Bradshaw et al. (2016), for example, on the function of eye movement and position in the visual field related to memory formation and recall. A new study, "Eye movements support behavioral pattern completion," by Wynn, Ryan, and Buchsbaum of Baycrest's Rotman Research Institute, summarized by Neurosciencenews.com, seems (at least to me) to unpack more of the mechanism underlying that highly "proxemic" feature.

Subjects were introduced to a set of pictures of objects positioned uniquely on a video screen. In phase two, they were presented with sets of objects containing both the original and new objects, in various conditions, and tasked with indicating whether they had seen each object before. What they discovered was that in trying to decide whether the image was new or not, subjects' eye patterning tended to reflect the original position in the visual field where it was introduced. In other words, the memory was accessed through the eye movement pattern, not "simply" the explicit features of the objects, themselves. (It is a bit more complicated than that, but I think that is close enough . . . )

The study is not claiming that the eyes are "simply" using some pattern reflecting an initial encounter with an image, but that the overt actions of the eyes in recall are based on some type of storage or processing patterning. The same would apply to any input, even a sound heard or a sensation with the eyes closed, etc. Where the eyes "land" could reflect any number of internal processing phenomena, but the point is that a specific memory entails a processing "trail" evident in or reflected by observable eye movements--at least some of the time!

To use the haptic system as an example: in gesturing through the matrix above, not only is there a unique gestural pattern for each vowel; if the visual display is positioned "close enough" that the eyes must also move in distinctive patterns across the visual field, you also have a potentially powerful process or heuristic for encoding and recalling sound/visual/kinesthetic/tactile complexes.

So . . . how do your students "see" the features of L2 pronunciation? Looking at a little chart on their smartphone or on a handout or at an LCD screen across the room will still entail eye movement, but of what and to what effect? What environmental "stimulants" are the sounds and images being encoded with and how will they be accessed later? (See previous blogpost on "Good looking" pronunciation teaching.)

There has to be a way, using my earlier training in hypnosis, for example, to get at learner eye movement patterning as they attempt to pronounce a problematic sound. Would love to compare "haptic" and "non-haptic-trained" learners. Again, our informal observation with vowels, for instance, has been that students may use either or both the gestural or eye patterning of the compass in accessing sounds they "experienced" there.  Let me see if I can get that study past the human subjects review committee . . .

Keep in touch! v5.0 will be on screen soon!

Source: Neurosciencenews.com (April 4, 2020). Our eye movements help us retrieve memories.


Tuesday, January 22, 2019

Differences in pronunciation: Better felt than seen or heard?

This feels like a "bigger" study, maybe even a new movement! (Speaking of new "movements", be sure to sign on for the February haptic webinars by the end of the month!)

There are any number of studies in various fields exploring the impact of racial, age or ethnic "physical presence" (what you look like) on perception of accent or intelligibility. In effect, what you see is what you "get": the visual will often override the audio, what the learner actually sounds like. Actually, that may be a good thing at times . . .

Haptic pronunciation teaching and similar movement-based methods use visual-signalling techniques, such as gesture, to communicate with learners concerning status of sounds, words and phrases. Exactly how that works has always been a question.

Research by Collegio, Nah, Scotti and Shomstein of George Washington University, summarized by Neurosciencenews.com as "Attention scales according to inferred real-world object size," points to something of the underlying mechanism involved: perception of relative object size. The study compared subjects' reaction or processing time when attempting to identify the relative size of objects (as opposed to the size of the image of the object presented on the screen). What they discovered is that, regardless of the size of the images on the screen, the objects that were in reality larger consistently occupied more processing time or attention.

In other words, the brain accesses a spatial model or template of the object, not just the size of the visual image itself in "deciding" if it is bigger than an adjacent object in the visual field. A key element of that process is the longer processing time tied to the actual size of the object.

How does this relate to gesture-based pronunciation teaching? In a couple of ways, potentially. If students have "simply" seen the gestures provided by instructors (e.g., Chan, 2018) and, in effect, have just been commanded to make some kind of adjustment, that is one thing. The gesture is, in essence, a mnemonic, a symbol, similar to a grapheme, a letter. The same applies to such superficial signalling systems as color, numbers or finger contortions.

If, on the other hand, the learner has been initially trained in using or experiencing the sign, itself, as in sign language, there is a different embodied referent or mapping, one of experienced physical action across space.

In haptic work, adjacent sounds in the conceptual and visual field are first embodied experientially. Students are briefly trained in using three different gesture types, distinctive lengths and speeds, accompanied by three distinctive types of touch. In initial instruction, students do exercises where they experience physically combinations of those different parameters as they say the sounds, etc.

For example, the contrastive gestural patterns (done as the sound is articulated) for [I], [i], [i:], and [iy] are progressively longer and more complex (see the linked video models, and the sketch after this list):
a. Lax vowels, e.g., [I] ("it") - The middle finger of the left hand quickly and lightly taps the palm of the right hand.
b. Tense vowels, e.g., [i] ("happy") - The left and right hands touch lightly, fingertips together, momentarily.
c. Vowel before a voiced consonant, e.g., [i:] ("dean") - The left hand pushes the right hand, palms touching, firmly 5 centimeters to the right.
d. Tense vowel plus off-glide, e.g., [iy] ("see") - The fingernails of the left hand drag across the palm of the right hand and, staying in contact, slide up about 10 centimeters and pause.
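For the structurally minded, those four contrasts amount to a small parameter table: a vowel category, a gesture, and the touch that anchors it. Here is a hedged Python sketch of that table; the class, field names and wording are my own shorthand for the list above, not actual EHIEP/AH-EPS code.

```python
# Illustrative parameter table for the four contrasts listed above.
# Names and fields are invented for this sketch.
from dataclasses import dataclass

@dataclass
class PMP:
    """One 'pedagogical movement pattern' for a vowel contrast."""
    category: str  # vowel type being contrasted
    example: str   # key word from the list above
    gesture: str   # what the hands do, in brief
    touch: str     # the anchoring touch that concludes the gesture

PATTERNS = [
    PMP("a. lax vowel [I]", "it", "middle finger taps right palm", "quick, light tap"),
    PMP("b. tense vowel [i]", "happy", "fingertips of both hands meet", "light, momentary touch"),
    PMP("c. vowel + voiced consonant [i:]", "dean", "palms push ~5 cm right", "firm, sustained contact"),
    PMP("d. tense vowel + off-glide [iy]", "see", "nails drag across palm, slide up ~10 cm", "dragging contact, then pause"),
]

# Gesture length and complexity increase down the list, mirroring the
# progressively longer and more complex vowels.
for p in PATTERNS:
    print(f"{p.category:34} ('{p.example}'): {p.gesture}; touch = {p.touch}")
```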

The same principle applies to most sets of contrastive structures and processes, such as intonation, rhythm and consonants. See what I mean--why embodied gesture for signalling pronunciation differences is so much more effective? If not, go here, do a few haptic pedagogical movement patterns (PMPs) just to get the feel of them, and then reconsider!

Sunday, August 26, 2018

It's not what you learn but where: how visual context matters

If you have seen the recent research study "Retinal-specific category learning" by Rosedahl, Eckstein and Ashby of UC-Santa Barbara (summarized by Science Daily), I have a few questions for you (if not, read it at eye level or, better, just above it, holding whatever you are reading it on accordingly):
  • Where did that happen (Where was your body; in what posture did it happen)?
  • What media (paper, computer, etc.) did it happen on?
  • What was your general emotional state when that happened? 
  • What else were you doing while you internally processed the story? (Were you taking notes, staring out the train window, watching TV . . . ?)
  • Where in your visual field did you read it? If it was an audio source, what were you looking at as you listened to it?
Research in neuroscience and elsewhere has demonstrated that any of those conditions may significantly impact perception and learning. Rosedahl et al. (2018) focuses on the last condition: position in the visual field. What they demonstrated was that what is learned in one consistent or typical place in the visual field tends not to be recognized as well if it appears later somewhere else in the visual field, or at least on the opposing side. 

In the study, when subjects were trained to recognize classes of objects with one eye, with the other eye covered, they were not as good at recognizing the same objects with the other eye. In other words, just the position in the visual field appeared to make a difference. The summary in Science Daily does not describe the study in much detail. For example, had the training run from left to right--that is, learning the category with the left eye (in right-eye-dominant learners)--I'd predict that the effect would be less pronounced than in the opposite direction, based on extensive research on the relative differential sensitivity of the left- and right-side visual fields. Likewise, I'd predict that you could find the same main effect just by comparing objects high in the visual field with those lower, at the peripheries. But the conclusion is fascinating, nonetheless.

The relevance to research and teaching in pronunciation is striking (or eye opening?) . . . If you want learners to remember sound-schema associations, do what you can to not just provide them with a visual schema in a box on paper, such as a (colored?) chart on a page, but consider creating the categories or anchoring points in the active, dynamic three-dimensional space in front of them. That could be a relatively big space on the wall or closer in, right in front of them, in their personal visual space.

One possibility, which I have played with occasionally, is giving students a big piece of paper with the vowels of English displayed around the periphery so that the different vowels are actually anchored more prominently with one eye or the other or "noticeably" higher or lower in the visual field--and having them hold it very close to their faces as they learn some of the vowels. The problem there, of course, is that they can't see anything else! (Before giving up, I tried using transparent overhead projector slides, too, but that was not much better, for other reasons.) 

In haptic pronunciation work, of course, that means using hands and arms in gesture and touch to create a clock-like visual schema about 12 inches away from the body, such that sounds can, in effect, be consistently sketched across designated trajectories or be anchored to one specific point in space. For example, we have used in the past something called the "vowel clock," where the IPA vowels of English are mapped on, with the high front tense vowel [i] at one o'clock and the mid back tense vowel [o] at 9 o'clock. Something like that.

In v5.0 of Haptic Pronunciation Training-English (HaPT-Eng), the clock is replaced by a more effective compass-like visual-kinesthetic schema of sorts, where the hands-arms-gesture creates the position in space and touch of various kinds embodies the different vowel qualities of the sounds that are located on that azimuth or trajectory in the visual field. (Check that out in the fall!)

In "regular" pronunciation or speech teaching those sorts of things go on ad hoc all the time, of course, such as when we point with gesture or verbally point at something in the immediate vicinity, hoping to briefly draw learners' attention. Conceptually, we create those spaces constantly and often very creatively. Rosendahl et al (2018) demonstrates that there is much more potentially in what (literally) meets the eye. 

Source:
University of California - Santa Barbara. (2018, August 15). Category learning influenced by where an object is in our field of vision. ScienceDaily. Retrieved August 23, 2018 from www.sciencedaily.com/releases/2018/08/180815124006.htm


Monday, March 26, 2018

What you see is what you forget: pronunciation feedback perturbations

Tigger warning:* This blogpost concerns disturbing images, perturbations, during pronunciation work.

In some sense, almost all pronunciation teaching involves some type of imitation and repetition of a model. A key variable in that process is always feedback on our own speech, how well it conforms to the model presented, whether coming to us through the air or perhaps via technology, such as headsets--in addition to the movement and resonance we feel in our vocal apparatus and bone structure in the head and upper body.  Likewise, choral repetition is probably the most common technique, used universally. There are, of course, an infinite number of reasons why it may or may not work, among them, of course, distraction or lack of attention.

We generally do not, however, take all that seriously what is going on in the visual field in front of the learner during repetition of L2 sounds and words. Perhaps we should. In a recent study by Liu et al., "Auditory-Motor Control of Vocal Production during Divided Attention: Behavioral and ERP Correlates," it was shown that differing amounts of random light flashes in the visual field affected the ability of learners to adjust the pitch of their voice to the model being presented for imitation. The research was done in Chinese, with native Mandarin speakers attempting to adjust the tone patterns of words presented to them, along with the "light show." They were instructed to produce the models they heard as accurately as possible.

What was surprising was the degree to which visual distraction (perturbation) seemed to directly impact subjects' ability to adjust their vocal production pitch in attempting to match the changing tone of the models they were to imitate. In other words, visual distraction was (cross-modally) affecting perception of change and/or subsequent ability to reproduce it. The key seems to be the multi-modal nature of working memory itself. From the conclusion: "Considering the involvement of working memory in divided attention for the storage and maintenance of multiple sensory information  . . .  our findings may reflect the contribution of working memory to auditory-vocal integration during divided attention."

The research was, of course, not looking at pronunciation teaching, but the concept of management of attention and the visual field is central to haptic instruction, in part because touch, movement and sound are so easily overridden by visual stimuli or distraction. Next time you do a little repetition or imitation work, figure out some way to ensure that working-memory perturbation by what is around learners is kept to a minimum. You'll SEE the difference. Guaranteed.

Citation:
Liu Y, Fan H, Li J, Jones JA, Liu P, Zhang B and Liu H (2018) Auditory-Motor Control of Vocal Production during Divided Attention: Behavioral and ERP Correlates. Front. Neurosci. 12:113. doi: 10.3389/fnins.2018.00113

*The term "Tigger warning" is used on this blog to indicate potentially mild or nonexistent emotional disruption that can easily be overrated. 

Saturday, March 3, 2018

Attention! The "Hocus focus" effect on learning and teaching

"We live in such an age of chatter and distraction. Everything is a challenge for the ears and eyes" (Rebecca Pidgeon)  "The internet is a big distraction." (Ray Bradbury)


There is a great deal of research examining the apparent advantage that children appear to have in language learning, especially pronunciation. Gradually, a broad research base is also accumulating on another continuum, that of young vs. "mature" adult learning in the digital age. There is an intriguing piece by Nir Eyal posted at one of my favorite occasional light reads, Businessinsider.com, entitled "Your ability to focus has probably peaked: Here's how to stay sharp."

The piece is based in part on The Distracted Mind: Ancient Brains in a High-Tech World by Gazzaley and Rosen. One of the striking findings of the research reported, other than the fact that your ability to focus intently apparently peaks at age 20, is that there is actually no significant difference in focusing ability between those in their 20s and someone in their 70s. What is dramatically different, however, is one's susceptibility to distraction. Just like the magician's "hocus pocus" use of distraction, in a very real sense, it is our ability to not be distracted that may be key, not our ability to simply focus our attention however intently on an object or idea. It is a distinction that does make a difference.

The two processes, focusing and avoiding distraction, derive from different areas of the brain. As we age, or in some neurological conditions emerging from other causes such as injury or trauma, it may get more and more difficult to keep information or perception being generated elsewhere from intruding on our thinking. Our executive functions become less effectual. Sound familiar? 

In examining the effect of distraction on subjects of all ages who were focusing to remember targeted material, being confronted with a visual field filled with various photos of people or familiar objects, for example, was significantly more distracting than closing one's eyes (which was, in fact, only slightly better), while a plain visual field of one color, with no pattern, was the most enabling visual field for the focus task. In other words, clutter trumps focus, especially with time. Older subjects were significantly more distracted in all three conditions, but they, too, were better able to focus in the last, less cluttered visual field.

Some interesting implications for teaching there--and validation of our intuitions as well, of course. Probably the most important is that explicit management of not just attention of the learner, but sources of distraction, not just in class but outside as well, may reap substantial benefits. This new research helps to further justify broader interventions and more attention on the part of instructors to a whole range of learning condition issues. In principle, anything that distracts can be credibly "adjusted", especially where fine distinctions or complex concepts are the "focus" of instruction.

In haptic pronunciation work, where the felt sense of what the body is doing should almost always be a prominent part of the learner's awareness, the assumption has been that one function of that process is to better manage attention and visual distraction. If you know of a study that empirically establishes or examines the effect of gesture on attention during vocal production, please let us know!

The question: Is the choice of paying attention or not a basic "student right?" If it isn't, how can you further enhance your effectiveness by better "stick handling" all sources of distraction in your work . . . including your desktop(s) and the space around you at this moment?

For a potentially productive distraction this week, take a fresh look at what your class feels like and "looks like" . . . without the usual "Hocus focus!"

Saturday, June 11, 2016

Gesticulate your way to better pronunciation teaching?

If you have never seen Howard Keel do "Gesticulate" from the 1953 musical, Kismet--especially if you are an aspiring "Haptician"--it is a must. I'm going to kick off an upcoming half-day Haptic Pronunciation Teaching workshop with it, September 30 at the BC TEAL Interior Regional Conference at Thompson Rivers University, here in British Columbia!

In haptic pronunciation teaching the focus is first on hand position and movement across the visual field, not on what the arm, head, voice and torso are doing. The idea is that the hand in some sense becomes the "conductor" of what the rest of  the body is doing. It is, of course, far more than just "gesticulating" but Keel's performance does certainly make the point!

Enjoy! And if you are in the Kamloops area at the end of September, please join us!

Tuesday, July 21, 2015

Back to the future of pronunciation teaching (and the "Goldfish" standard for attention management)

You apparently have a bit more than 8 seconds to read this post. So you may want to just scroll down to the conclusion and start there . . .

Capturing and holding attention, if only for a few seconds, is the key to effective change in pronunciation work, especially for "mechanical" adjustments--and most other things in life. In earlier blog posts, the "gold standard" or sine qua non of haptic pronunciation work has been seen to be about 3 seconds. In other words, for a learner to adequately experience the totality of a new sound or word--physically, auditorily, visually and conceptually, connecting things together--before moving on to practice, or at least noticing, or any chance at "uptake," takes complete, undivided attention for at least that long.

Even that is often an unrealistic requirement, given all the other potential distractions in the classroom or visual field. Research on the effectiveness of instructors' recasting of learner utterances, for example (Loewen and Philip, 2006), suggests that most of the time that strategy is relatively ineffective. One critical variable is always the quality or intentionality of learner attention, both in terms of the function the instructor is attempting to carry out and general learner receptivity.

Recall that Microsoft claims that our collective attention span, in part due to the impact of technology, has now dropped to about 8 seconds, just below that of the goldfish. (The UK Telegraph report is much more entertaining than that from the techies.)

A new study by Moher, Anderson and Song of Brown University, summarized by ScienceDaily.com, adds a fascinating piece to the puzzle and may suggest how to begin to maintain attention better in class. What they discovered in an experimental study was that their subjects were, in effect, better able to "block" obvious distractions than more subtle ones. Backgrounded images in the visual field had more effect on subsequent action than did foregrounded, more striking elements, which appeared to be easier for the brain to manage or ignore. They seem to have "discovered" one possible path into the mind for subliminal stimuli, evading first-line conceptual or perceptual defences.

What is the obvious "subtle, unobtrusive, yet potent" application to pronunciation teaching? If you don't have "full body, mind and visual field" attention, there is no telling what is interfering with anchoring of sound change in the brain and subsequent total or partial recall.

Early on in EHIEP (Essential Haptic-integrated English Pronunciation) work I experimented extensively with controlling eye movement, in part to maintain concentration and attention, based primarily on the research underlying the therapeutic model of Observed Experiential Integration (see citation below) developed by Bradshaw and Cook (2011). The effect was dramatic in working with individuals, but applying those techniques to the classroom proved impractical, at least at the time. In part because the haptic pedagogical system was still developing, I backed off from eye-patterning techniques in pronunciation work in 2009.

Based on Moher et al's research, however, it is perhaps time to again give directed eye movement management a "second look" in our work, going back to what I believe is the (haptic) future of pronunciation instruction, especially in virtual, computer-mediated applications.

Will report back on an in-progress exploratory study with one learner using some eye movement management later this summer. Not surprisingly, I am already "seeing" some promising results, attending to features of the teaching session that I would normally not have noticed!

Full citations:

Brown University. "Surprise: Subtle distractors may divert action more than overt ones." ScienceDaily. ScienceDaily, 16 July 2015, www.sciencedaily.com/releases/2015/07/150716123831.htm. (Jeff Moher, Brian A. Anderson , Joo-Hyun Song. Dissociable Effects of Salience on Attention and Goal-Directed Action. Current Biology, 2015 DOI: 10.1016/j.cub.2015.06.029)

Bradshaw, R. A., Cook, A., McDonald, M. J. (2011). Observed experiential integration (OEI): Discovery and development of a new set of trauma therapy techniques. Journal of Psychotherapy Integration, 21(2), 104-171.

Loewen, S., and Philip, J. (2006). Recasts in the adult English L2 classroom: Characteristics, explicitness, and effectiveness. The Modern Language Journal, 90, 536-556.

Saturday, June 28, 2014

Conducing feelings and emotions with vowels!

How's this for the opening line of a new Science Daily summary of 2014 research by Rummer and Grice, entitled "Mood is linked to vowel type: The role of articulatory movements": "Ground-breaking experiments have been conduced (sic) to uncover the links between language and emotions." (Love that possible typo, "conduced," by the way--maybe something of a portmanteau of conduct and conduce? It actually unpacks the study quite well: to "conduce" means to "lead to a particular result." Science can be like that, eh!)

Basically, what they discovered was that if you have subjects do something like bite on a pencil (so that they come up with a smile, of sorts) or just keep repeating the high front vowel /i/, which has that articulatory setting, while they watch a cartoon, they tend to see things as more amusing. If, on the other hand, you have them stick the end of that pencil in their mouth so that they develop an extreme pucker, or keep repeating the vowel /o/, they tend to see things as less amusing.

So? It has been known for decades that vowels do have phonaesthetic qualities. (See several previous blog posts.) The question has always been . . . but why? The conclusion: because of what the facial muscles are doing while the vowel is articulated, especially as it relates to non-lexical (non-word) emotional utterances. Could be, but they should have also tossed in some controls--some other vowels, too--such as having subjects use a low front unrounded vowel such as /ae/, as in "Bad!", or a high front rounded vowel such as /ü/, as in "Uber," the web-based taxi service, or a high back unrounded vowel. 

As much as I like the haptic pencil technique, which I use myself occasionally (using coffee stirrers, however) for anchoring lip position with those vowels and others, there is obviously more going on here, such as the phonaesthetic qualities of the visual field. Also consider the fact that the researchers appear to be ethnically German, perhaps seriously compromising their ability to even perceive "amusing" in the first place, conducing them into that interpretation of the results. 
 
Nonetheless, an interesting and possibly useful study for us, more than mere "lip" service, to be sure. 

Thursday, February 27, 2014

New Colour Vowel Clock for haptic pronunciation teaching!

We have just revised the AH-EPS v2.0 vowel clock. I say "we" because Karen redesigned the clock to include all of the key words and symbols. I added a new colour overlay to her design that is, I think, a little more compatible with the general phonaesthetic qualities of the visual field. (See the earlier post related to the colour issue with the popular Color Vowel Chart.) Kudos to Karen. Will have various v3.0 sizes available on the website, too.

Keep in touch!


Monday, December 23, 2013

(Haptic pronunciation) movement training: mouse to mouth?

A new study by Kording of Northwestern University and the Rehabilitation Institute of Chicago, summarized by Science Daily, purports to show that " . . . computer use not only changes our lifestyle but also fundamentally affects the neural representation of our movements . . . " Really? The research compared the "movement generalizability" of heavy computer users with that of those who were not. Those proficient "mousers" were, not surprisingly, able to learn new mouse patterns more quickly.

What is of particular interest to haptic pronunciation teaching, however, was that after about two weeks of specifically designed mouse-based computer game playing, the former "non-mousers" had, in effect, caught up. Their brains and hands had achieved what appeared to be the same "broad movement generalization" capability. This helps explain a key phase or problem in haptic pronunciation learning--and suggests something of a solution. 

For some learners, being able to follow along with the pedagogical movement patterns (hand and arm movements across the visual field accompanied by speaking a word or phrase, concluding in hands touching on a stressed syllable) used by instructors can be initially difficult. In our experience it may take up to a month for them to be able to begin easily generalizing a movement pattern of a vowel, for example, in practicing pronunciation of new words.

There are any number of studies reported here earlier considering why that may be the case, from pedagogical to psycho-social to neurological. The concept of training learners to be better at learning movement first, in a low key, maybe even "fun" set of procedures, however, is intriguing. Whatever the cause, if "simple" movement training, rather than more radical intervention--or giving up in despair, can enhance haptic pronunciation learning and teaching up front, that is indeed big. 

Will try designing some kind of analogous "Mini-Mouse Module," or perhaps just require a few minutes of iPhone game work before or during class regularly to keep everybody up to speed!

 Keep in touch!

Sunday, November 17, 2013

Pay attention to pronunciation!

As reported in earlier posts, no matter how terrific our attempt at pronunciation teaching is, if a learner isn't paying attention or is distracted, chances are not much uptake will happen--especially when haptic anchoring is involved. No surprise there. A new study by Lavie and colleagues of the UCL Institute of Cognitive Neuroscience, focusing on "inattentional blindness" and entitled "How Memory Load Leaves Us 'Blind' to New Visual Information," just reported at Science Daily, sheds new "light" on exactly how visual attention serves learning.

In essence, when subjects were required to momentarily attend to an event or object in the visual field and remember it, their ability to respond to new events or distractions occurring immediately afterward was curtailed significantly. (The basic stuff of hypnosis, stage magicians and texting while driving, of course!)

What is of particular interest here is that, whereas the visual image that one is attempting to focus on can strongly exclude other competing distractions, that effect works precisely the other way around in haptic-integrated pronunciation instruction. It helps explain the potential effectiveness of pedagogical movement patterns of EHIEP and AH-EPS:

  • Carefully designed gestures across the visual field 
  • Performed while saying a word, sound or phrase 
  • With a highly resonant voice, and
  • Terminating in some kind of touch on a stressed vowel, what we term "haptic anchoring." 
It also explains why insightful and potentially priceless comments from instructors coming in too close proximity to vivid and striking pronunciation-related "visual events" . . . may not stick or get "uptaken!" 

See what we mean? 



Monday, October 28, 2013

Introduction to haptics and some possible applications

If you are new to the idea of haptics and "haptic," here is a neat 6-minute TEDYouth 2012 talk by Kuchenbecker of the University of Pennsylvania. (Hat tip to Karen Rauser.) Our work in haptic-integrated pronunciation teaching is something of the flip side of this. Whereas Kuchenbecker's work digitizes touch and movement to accompany video, we create the haptic felt sense of sound (through awareness of vocal resonance, upper-body movement and touch) to accompany the positioning of the hands and arms in the visual field. I have been working on the outlines of a TED talk proposal myself for next year. Keep in touch!

Thursday, October 3, 2013

The "touch-ture" of haptic pronunciation teaching

A new study by researchers from the Laboratoire de psychologie et neurocognition (LPNC) (CNRS/Université Pierre Mendès France/Savoie University), in collaboration with Geneva University's Faculté de psychologie et des sciences de l'éducation and Les Doigts Qui Rêvent (Dreaming Fingers) in Talant (Côte-d'Or, France), reported by Science Daily, demonstrated the positive impact of variable texture on image comprehension in blind children. In essence, by providing materials with different, distinctive surface textures for the hands to survey, subjects were able to learn and recall more effectively. Research has long established that the blind develop superior touch-based senses that serve to replace vision--often in the same areas of the visual cortex that the sighted use.

The same principle should also apply to the application of touch and movement in our work. In the EHIEP (Essential haptic-integrated English pronunciation) approach, there are "roughly" a dozen distinct types of touch, each having its own texture. In principle, the "touch-tures" are related to the phonaesthetic and somatic qualities of the sound or sound process. For example:

For lax, or short, vowels (such as I, ae, a, ə, ʊ), the "touch-ture" is a light tap of both hands.
For tense vowels plus off-glide (such as iy, ey, ay, ow, uw), the "touch-ture" is a brushing motion of one hand across the other as the first part of the vowel is pronounced. The moving hand then continues on to a location in the visual field associated with either glide, w or y.
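To make the pairing concrete, here is a toy Python mapping of those two "touch-ture" classes to their vowel sets. The dictionary and helper function are illustrative assumptions based only on the two examples above; the full EHIEP inventory of roughly a dozen touch types is not represented.

```python
# Toy mapping of two "touch-ture" classes to vowel sets, per the
# examples above. Illustrative only; not actual EHIEP materials.
TOUCH_TURES = {
    "lax/short vowel": {
        "vowels": {"I", "ae", "a", "ə", "ʊ"},
        "touch": "light tap of both hands",
    },
    "tense vowel + off-glide": {
        "vowels": {"iy", "ey", "ay", "ow", "uw"},
        "touch": "brush of one hand across the other, then movement "
                 "toward the glide's (w or y) location in the visual field",
    },
}

def touch_for(vowel: str) -> str:
    """Look up the 'touch-ture' for a pedagogical vowel symbol."""
    for label, spec in TOUCH_TURES.items():
        if vowel in spec["vowels"]:
            return f"{vowel}: {label} -> {spec['touch']}"
    return f"{vowel}: no touch-ture listed in this sketch"

print(touch_for("iy"))
print(touch_for("ə"))
```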

We often have learners close their eyes or use eye tracking as they execute various pedagogical movement patterns across the visual field in presenting or correcting pronunciation. More focused attention to the "felt sense" or "touch-ture" of the hands in the process and the attendant vocal resonance has always been understood to be very important. Here is more evidence why. Keep in touch. 

Monday, September 16, 2013

Famous "Alcohol/L2 pronunciation study" mystery solved: Here's (NOT) looking at you, kid!

If you have done some formal study of second language pronunciation teaching and learning, you have almost certainly run across the 1972 "alcohol" study done by Guiora and colleagues. Explanations as to exactly why drinking about an ounce and a half of alcohol seemed to improve subjects' ability to imitate an audio recording of Thai sentences have run from Guiora's theoretical construct of "enhanced ego permeability" to simply "muscle relaxation" (Brown 2006 and elsewhere). If you have followed this blog at all, you are aware of the critical importance of limiting visual field distraction to the effectiveness of haptic pronunciation teaching techniques. (That observation is backed up by any number of studies in general "haptic" learning that demonstrate how the visual modality consistently overrides auditory and tactile engagement.)

In Guiora's study, subjects sat facing an experimenter who operated the tape recorder. I have long wondered what would have happened had the imitation phase been done in a lab, rather than face to face. (In a 1980 attempt to replicate the alcohol study--in which I was on the research team--the attractive "social presence" of one of the (female) experimenters appeared to demonstrate the added impact of a face on the effect.)

A new study by Gorka, Fitzgerald, King, and Phan at the University of Illinois at Chicago College of Medicine, reported by Science Daily as "Alcohol attenuates amygdala-frontal cortex connectivity during processing social signals in heavy social drinkers," suggests another, related explanation for the improved performance of subjects on the imitation task: desensitization to "threatening" features in the visual field in front of them. In the current study, "heavy social drinkers," given an appropriately sized drink, were significantly slower in reacting to pictures of "threatening" facial expressions. The bottom line: the alcohol served to somewhat weaken the connection between the (emotion-related) amygdala and the pre-frontal cortex.

There are many ways to functionally do the same thing in pronunciation instruction, restricting the emotional/social/visual impact on learners' attention. The field (pronunciation teaching) has figured out how to deal with the social and emotional milieu reasonably well but generally does not focus on the potentially disruptive effect of what is going on, on an ongoing basis, in the visual field. In our work, that is essential--a given. SEE what I mean?

Apologies to Bogart for the take off on his famous line from Casablanca in the post title.

Sunday, May 26, 2013

Motion "IQ" and haptic pronunciation teaching

A few decades back, the distinction between field independence and field dependence was investigated extensively in this field and others. (Several previous blogposts report often seemingly contradictory findings.) A new study reported by Science Daily, by Melnick, Harrison, Park, Bennetto and Tadin at the University of Rochester, on the relationship between general intelligence and the ability to suppress some types of background motion in the visual field adds a new "wrinkle."

They found a striking correlation between the ability to screen out small, moving background "clutter" and score on a standard IQ test. Even more surprisingly, according to the authors, the high-"IQ" subjects were correspondingly much worse at detecting large background shifts in the visual field itself. (According to the article, you can even test your own motion "IQ" with this YouTube video!)

Translation/relevance: Whether you are high or low IQ, being able to function in a non-distracting visual field makes you functionally more intelligent! (Being sensitive to movement of the larger field has been indirectly related to general empathy and relational awareness, which is also a good idea in language learning.)

In haptic pronunciation teaching, that principle is paramount. In the classroom, when doing haptic work  (movement-plus-touch) related to sound learning and change, visual distraction must be limited as much as possible. Anything that pulls the eyes and attention away from the pedagogical movement pattern can potentially "kill" or greatly limit the effectiveness of the haptic anchor in associating the gesture with the sound.

The design, format and background of the AH-EPS haptic video system are centered on that same concept: (a) black background, (b) clean, uncluttered movement, and (c) careful management of placement in the visual field. (For example, it is important to stay as close as possible to the center of the visual field for general ease of maintaining attention and control.)

Just a little background for you there . . . and I mean a LITTLE!


Tuesday, April 2, 2013

Gestures "count" in pronunciation teaching!

A new study by Fenn and Duffy of Michigan State University and Cook of the University of Iowa, summarized by Science Daily, demonstrates that a teacher's use of gesture--at least in 4th grade math--results in better learning for students. (Other research has detected the same tendency in one-on-one tutoring as well.) In the study, the focus was an algebra equation. The "gesture" group saw an instructor gesture with one hand, mirror image, to the side of the equation being talked about as it happened. The control group was just "talked to."

They (not surprisingly) offer no explanation as to what may have been behind the striking difference in post treatment testing between the groups, but they do offer three near breath-taking observations ". . . Gesturing can be a very beneficial tool that is completely free and easily employed in classrooms . . . I think it can have long-lasting effects . . . Teachers in the United States tend to use gestures less than teachers in other countries."

The study used "deictic" gestures (pointing at something physically present or conceptual). It is still an interesting piece of evidence. (They could, of course, have tested the main effect by having another group that did not see a gesturing instructor but were, instead, provided with left or right pointing graphic arrows superimposed on the screen.) Just thought I'd point that out . . .

In AH-EPS all pedagogical movement patterns involve deictic anchoring in the visual field as well. That  has to count for something, eh?

Thursday, January 24, 2013

Synesthesia alert: No magnetic letters on your refrigerator!

Especially if you have toddlers in the house! Well, not really. This study, by Witthoft and Winawer of Stanford University, summarized by Science Daily, reports on what may well be a rather spurious, or at least indirect, correlation between the development of synesthesia and the presence on our refrigerators of those cute plastic colored letters with magnets for young children to play with. What they found was that synesthetes, when given lists of colorless numbers and letters, tend to pick the same colors as those refrigerator magnet letters, whereas non-synesthetes' responses are pretty much random. How could that be? They don't really say, stopping short of suggesting that there is some direct relationship between the synesthesia and those letters being on the refrigerator during child development. Hmmm. I just posted the following on an NLP discussion list:

"Interesting. Go to the website and take the test. When you do, before you respond to the query for your read on the "color" of the number or letter, say the number or letter out loud slowly, like a kid might. Note the overall felt sense of that articulation, where it lands in your head and vocal tract… and then pick your vowel. Better yet, look away from the grapheme when you do that. I can almost get to the synesthesia threshold that way . . . The research design neatly ignores controlling for how subjects get to making a decision, what cognitive and experiential process they lead with. (It is apparently done as a web-based survey only.) I am very suspicious of any direct link to childhood letters. That the letters happen to have been assigned those colors in the first place by the initial designers is probably more where it all leads."

So what does that have to do with haptic-integrated pronunciation work? Everything. The phonaesthetic and somatic felt-sense qualities of vowels, both in visual and articulatory terms, are well researched across several disciplines. Where the vowels are placed in the visual field in EHIEP, and how the vowel sounds are presented and identified (or mis-identified) with letters in phonic characterizations, as in the "Refrigerator" study, does make a difference. (See earlier posts on the pedagogical application of vowel color, such as this one.) Keep in touch.