Sunday, July 31, 2011

Haptic Drilling!

Clip art: Clker
Well . . . the relevance to our work of visual-haptic systems for "drilling down into" data of various kinds, reported in a paper by Liu and Laycock of the University of East Anglia (in what journal or publication, or when, I know not from the URL!), may not be that evident at first, but the underlying principle is the same. The haptic controller arm provides feedback to the user on the surface, density and other features of the virtual object.

I am often asked how a learner with a disability, one that leaves him or her the use of only one hand, could work in the system. It would appear that EHIEP-type pronunciation teaching protocols could rather easily be "embodied" in that system, using just one hand. Even a system that used only eye tracking to adjust position in the virtual-visual field (as used in many of today's weapons systems) could probably accomplish the same thing. Now THAT would be "thrilling drill!" (See earlier post, "Is it the thrill or the drill?")

Saturday, July 30, 2011

Gut (surgeons') Reactions to haptic feedback and support

clip art: Clker
Although the link is only to an abstract (I couldn't afford to pay the gut-wrenching fee to read the full article . . . ), the stated conclusion is striking: " . . . on average subjects performed 36% faster and 97% more accurately with haptic feedback than without . . . " Surgeons performed a "transfer task" in a surgery training simulator (perhaps a transplant procedure?). The more experienced surgeons, especially, appear to have benefited greatly from having their hands strategically guided by the computer, receiving simulated haptic (tactile and kinesthetic) feedback on aspects such as surface condition and pressure.

In the earlier "Haptic Cow" post I did, in fact, suggest (somewhat tongue-in-cheek, I must admit) something quite similar--having learners haptically explore and experience the inside of a virtual, speaking mouth. Wow! I had a gut feeling at the time that I was on to something . . . QED.

Friday, July 29, 2011

Tactile interpersonal communication framework

clip art: Clker
In many cultures, interpersonal physical contact in public among "normal" people is prohibited to varying degrees. Licensed professionals such as speech or massage therapists--or athletes--are a different matter, of course. Here is a list of suggestions for using touch extensively in working with deafblind children. Given recent blog posts on the metaphorical and neurophysiological relatedness of touch and sound, it is not too difficult to "visualize the felt sense" of doing many aspects of the EHIEP system in pairs, where the haptic anchoring would be accomplished by touching the hand or arm of another person, rather than your own.

I have over the years used discrete amounts of touch in that manner with students in class. Culturally, my role as instructor in that public setting gives me a bit more license to do so. Actual student-student contact, however, is another question entirely. Looking at the range of techniques in the deafblind guide, though, it is obvious that by drawing on conventions of public dance or professional tactile behavior (a handshake, for example) it should be quite possible to develop interpersonal touch techniques (on prominent words, for instance) that greatly enhance both the effectiveness of anchoring (and subsequent recall) and expressiveness. Yet another example of the deafblind leading the sighted!

Wednesday, July 27, 2011

Touching sound to teach it

image credit: yvonbonenfant.com
When contemporary vocal artists, in this case Yvon Bonenfant, create across multiple senses, the interplay between sound and touch often becomes the focal "Zonenübergang," the crossover. As has been evident in posts related to haptic interfaces and, more recently, deafblind communication, our touch metaphors, e.g., "How touching!", connect much more than simple mental concepts. Bonenfant describes the engagement of touch and sound in his work as best experienced as a silk-like "membrane" between us, through which the sound passes into tactile meaning and understanding almost unimpeded. That is a remarkable characterization of what we are after in linking pronunciation with the felt sense of producing it.

Haptices and Haptemes

clip art: Clker
Here is an announcement of a 2010 workshop, which I am sorry I missed, on "Social Haptic Communication." Although the brief description only defines "haptices" as "touch messages" and "haptemes" as elements of touch, especially as applied to art experiences on the body . . . I like the "felt sense" of the terms and may adopt both for use in some aspects of HICP/EHIEP work. The use of those concepts seems to have originated with, or at least been promoted by, Dr Riitta Lahtinen of the Ear Foundation. (Lahtinen's book, Haptices and Haptemes, sounds fascinating.) The relationship between the skin-on-skin haptic communication "language" used by those who work with the deafblind and our haptically anchored pronunciation methods is intriguing. It is clear from recent research that the brain does not recognize much of a difference between the two kinds of communication. Welcome to the "Hapt-Team!"

Monday, July 25, 2011

Engaging haptic hormone?

The hormone oxytocin has been associated with a range of social functions such as trust, facial recognition, massage and lactation. In this 2008 study by Gordon, Zagoory-Sharon, Leckman, and Feldman, summarized by Science Daily, it was also found to be associated with parenting styles, in both fathers and mothers, that involve richer communication and engaging touch with infants.

Clip art: Clker
The concept of "haptic visuality" was proposed by media/art critic Marks, that the eyes, in some multi-literacy, multimedia environments, such as haptic cinema, become very much "tactile-like," interpreting experience more as through the skin, potentially bypassing higher cognitive critical functioning and filters. What that means for development of haptic interfaces is that in a vivid multisensory "nexus" or event, the brain becomes much more holistic in interpreting what is in front of it, in effect interpreting the experience as a "felt" whole, not allowing critical analysis or deconstructing to enter into the process much. This is, of course, also the "Holy Grail" of advertising, movie makers and marketing.

A shot of oxytocin before your next haptic pronunciation lesson? Touching . . .

Sunday, July 24, 2011

Haptic Cow!

clip art: Clker
I am asked repeatedly how to apply haptic thinking and technology to more detailed articulatory work with vowels and consonants, much like what a professional speech therapist does. We may have the answer here, in an extraordinary invention by Baillie. The "haptic cow," of course, was designed for training in veterinary medicine. Trainees can (virtually and haptically) put a hand "inside" the cow to develop a required skill set, such as delivering a calf, in a way that feels just like the real thing.

Imagine our L2 learner doing the same sort of thing--except from a different perspective and virtual point of entry, of course! Being able to explore the inside of a "living," drooling, pronouncing mouth with both hands as it does diphthong after glorious diphthong . . . "How now, brown cow?"

Saturday, July 23, 2011

The inner game of pronunciation learning

Clip art: Clker
The "ancient" Taoists and Lessac had it right, when it came to rehabilitation--among other things. The central concept was to focus on managing what was going on in the body and begin the healing/learning process there, being far less concerned with external appearance and incoming sounds.

In the last two years or so, as you can see from the nearly 700 blog posts, I have been reporting on research into learning that is less aural (listening-based) and less visual (relying on sight as the lead system in learning new sounds). In a very real sense, some of the key insights have come from research and practice in rehabilitation methodologies developed for the blind (haptic computer interfaces), the deaf (sign and haptic systems) and the physically disabled (especially tactile-enabled prosthetics and robotics). Much of the genius of Lessac was his ability to interpret to the Western mind (and body) how to learn from the inside out. We may yet be able to rehabilitate pronunciation teaching!

Friday, July 22, 2011

Pronunciation modalities: out of sight--but IN mind!

clip art: Clker
In this 2009 study of modality dominance by Hecht and Reiner, when visual stimuli were paired one-on-one with competing haptic or auditory stimuli, visual consistently overpowered the other. When the three were presented simultaneously, however, the dominance of visual disappeared. That may explain why having some learners focus on a visual schema (such as the orthography) while articulating or practicing a new sound may not turn out to be very efficient. Likewise, doing a kinesthetic "dance" of some kind to practice a rhythm pattern (without speaking at the same time) while looking at something in the visual field may not work all that well for some learners either.

The presence of eye engagement may override or nullify information in a competing modality. In HICP, where all three modalities are usually engaged, the "distracting" influence of sight is at least lessened. In fact, the tri-modality "hexus" should better facilitate the integration of the graphic word, the felt (haptic) sense of producing it and the internal (auditory) bone resonance and vibrations. Although a substantial amount of pronunciation learning may be better accomplished with eyes closed, tri-modal (haptic, visual and auditory) techniques probably come in a close second. We will "see" in forthcoming research!

A touch for pronunciation learning? Hyper- and Hypo-haptics

Clip art: Clker
Autotelic, high need-for-touch subjects in this 2007 study by Krishna and Morrin were better at distinguishing "diagnostic" from "non-diagnostic" touch in relation to food attractiveness. In other words, if the "feel" of a product related to its essence or identity, for example the softness of a banana, the "autotels" were better at recognizing it. Likewise, they were better at ignoring non-taste-relevant features, such as the feel of the label. Non-autotels were less able to pick up diagnostic features--but were more likely to be taken in by non-diagnostic ones. (You can "see" the implications for marketing in grocery stores!)

The research includes a very informal questionnaire for self-identifying one's degree of haptic need-for-touch, which I could easily amplify and use in research. We constantly see the range of "autotelicity" in our students. If we can identify it by degree, perhaps we can, as the research suggests, develop ways to better orient students at either end of the continuum to their use of hapticity in learning the system. That will certainly help in selling the "product!"
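To make the idea concrete, here is a minimal sketch of how such a self-report questionnaire might be scored to place a learner on the autotelic/non-autotelic continuum. The items, the 1-7 Likert scale and the cutoff are my own invented assumptions for illustration, not taken from the Krishna and Morrin instrument.

```python
# Hypothetical scoring of a need-for-touch (NFT) self-report scale.
# Items, scale (1-7) and threshold are invented for illustration only.

def score_nft(responses, threshold=4.0):
    """Average Likert responses (1-7) and classify the respondent.

    responses: list of ints, one per questionnaire item.
    Returns (label, mean), where label is "autotelic" or "non-autotelic".
    """
    if not responses:
        raise ValueError("no responses to score")
    mean = sum(responses) / len(responses)
    label = "autotelic" if mean >= threshold else "non-autotelic"
    return label, mean

# Example: a learner who agrees strongly with most touch items
label, mean = score_nft([6, 7, 5, 6, 7])
print(label, round(mean, 1))  # autotelic 6.2
```

A real instrument would of course need validated items and norms; the point is only that "degree of autotelicity" is easy to quantify once a questionnaire exists.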

Thursday, July 21, 2011

V-Braille meets Kinect in Hapticland

Image credit: www.pmos.org.uk
To get an idea of how a haptic-based, computer-mediated, near-virtual-reality system for pronunciation teaching might work and look, imagine a Kinect interface with a V-Braille setup, where you wear gloves with sensors covering the tips of your middle fingers (like a thimble) and quarter-size sensors on both the palm and the back of each hand. As explored in earlier posts, using the HICP/EHIEP framework, learning new pronunciation should be so easy and efficient that you could learn it "with your eyes closed"--and probably should!
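Purely speculatively, the glove just described might be represented in software along these lines; every class, field and sensor name below is an invented assumption, not part of any actual V-Braille or Kinect API.

```python
# Speculative sketch of the sensor glove described above.
# All names and thresholds are invented for illustration.

from dataclasses import dataclass, field

@dataclass
class Sensor:
    location: str          # e.g. "middle-fingertip", "palm", "back-of-hand"
    pressure: float = 0.0  # normalized contact pressure, 0.0-1.0

@dataclass
class HapticGlove:
    hand: str  # "left" or "right"
    sensors: list = field(default_factory=lambda: [
        Sensor("middle-fingertip"),
        Sensor("palm"),
        Sensor("back-of-hand"),
    ])

    def contact(self):
        """Return the locations currently registering a touch."""
        return [s.location for s in self.sensors if s.pressure > 0.1]

left = HapticGlove("left")
left.sensors[1].pressure = 0.8  # e.g. palms meet on a stressed syllable
print(left.contact())  # ['palm']
```

In a full system, each contact event would presumably be timestamped and matched against the pedagogical movement pattern being practiced.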

Why the Mikado image? I am listening to it right now. What better model of integration (of fun and politics)?

Saturday, July 16, 2011

Insight from the blind

Clip art: Clker
In this 2000 paper, Design of Haptic and Tactile Interfaces for Blind Users, Christian makes an important observation: " . . . At a high level, an interface is a collection of objects and operations one can perform on those objects. The visual representation of an interface on the monitor is only one interpretation. The idea is that when affording the blind access to an interface, one should not convey the visual representation, but rather the interface itself. . . .  by translating the semantic level of the interface, one can convey the same constructs that are available to sighted users."

In other words, our use of the visual field is, more accurately, use of the "proprioceptive" field, which involves much more than just sight. We have all observed learners who, in attempting to focus on a sound, will close their eyes to enhance their concentration. It turns out that in a haptically anchored system (such as EHIEP), for most learners, doing a full protocol (a set of sounds or sound patterns) or a single sound with eyes closed significantly intensifies concentration--and almost certainly retention. Compared to "simply" saying a word "blind," the addition of the haptic anchor (movement terminating in touch of both hands in the visual field) creates an extraordinarily "vivid" experience.

Although I have not systematically explored the application of this concept to all protocols, the idea of extensively blocking the visual modality is an intriguing possibility. It seems to work surprisingly well in most contexts. Try it. You are in for a "blinding" revelation . . .

Wednesday, July 13, 2011

Meaning through doing

Clip art: Clker
In this fascinating paper by Kilbourn and Isaksson (2007), the case is made for the place of some types or phases of noncognitive learning through haptic exploration. As we consider the wide range of phenomena related to learning the sound system of a language, it becomes very evident that some aspects are "more" modality-specific than others. For example, learning the orthography is obviously a more visual task--although it assumes some degree of felt sense of the vowels and consonants. But for the "rest" of the language, in Kilbourn and Isaksson's framework, even for the preponderance of early vocabulary learning, the experiential, haptic anchors are not just ancillary "add-ons" to the process but are fundamental to acquiring meaning.

Virtually all of pronunciation is closely tied to kinaesthetic and tactile sensory networks. It should come as no surprise, then, that it can also be learned efficiently, haptically. This framework suggests that at least for some set of learners a primarily haptic and less cognitive/visual presentation approach will be best. Based on our experience, that seems to apply to most learners, particularly in terms of learning basic vowels, stress, rhythm and intonation. Success may just depend upon how long one stays in touch . . .

Saturday, July 9, 2011

How sound touches us

There have been many relatively recent studies of synaesthetic metaphor (combining senses, e.g., a moving speech, a sharp flavor) in various disciplines. In a 1996 study by Day, it was shown, for example, that in English literature the dominant synaesthetic metaphor tends to be "sound-touch," as in a "piercing scream." In music metalanguage, audio-tactile metaphors related to pitch, loudness and other qualities are so pervasive that it would be nearly inconceivable to speak of musical sound otherwise (e.g., G-sharp or G-flat).

The cognitive-affective-visual-auditory-kinesthetic-tactile "hexus" (see earlier "hexus" post) that is a word or sound in language can be committed to memory or accessed in any number of ways, but that close affinity between touch and sound, both metaphorically and neuro-physiologically, is especially relevant in teaching pronunciation.

The EHIEP system, with about a dozen distinct touch types tied to pedagogical movement patterns that anchor L2 sounds and sound structures, is certainly a sound, touching step in the right direction . . .

Wednesday, July 6, 2011

Just do it! . . . haptically

Adding touch to movement, tactile to kinesthetic, has proven to be very powerful, especially in getting learners to anchor sounds or sound processes consistently in the same location in the visual field. According to this 2011 research by Smith and colleagues at the Harvard School of Engineering and Applied Sciences, summarized by Science Daily, achieving that kind of precision, which is very important to efficient haptic work, is best accomplished by " . . . continually adjusting the goals of practice movements so that systematic differences (errors) between these movements and the intended motion can be reduced . . ."

clip art: Clker
Furthermore, one of the implications is " . . . a new approach to neurological rehabilitation: one that continually adjusts the goals of practice movements so that systematic differences (errors) between these movements and the intended motion can be reduced." In other words, the learner's attention must be constantly redirected to better positioning or touch or resonance or general form of a haptically embodied sound, providing a very rich type of "motion-referenced learning."

In terms of EHIEP work, that means, for example, focusing on a different parameter of a pedagogical movement pattern--its positioning, speed, or the intensity of contact between hands, etc.--rather than on "simple" repetition beyond a few iterations, in anchoring a new or "corrected" sound. The articulation involved may not change perceptibly, but the practice will continue to be experienced as progressive motor learning, in the sense of the Harvard study.
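The error-reduction principle from the Harvard study can be sketched as a simple loop: each repetition compares the produced movement to the target and nudges the next attempt toward it. The numbers and the correction gain below are invented for illustration; this is the general idea, not the study's actual model.

```python
# Minimal sketch of error-referenced practice: on every repetition the
# systematic difference (error) between the produced movement and the
# intended motion is measured and partially corrected. Values invented.

def practice(target, produced, gain=0.5, iterations=5):
    """Iteratively reduce the error between produced and target positions."""
    history = []
    for _ in range(iterations):
        error = target - produced   # systematic difference from the goal
        produced += gain * error    # adjust the next practice movement
        history.append(round(produced, 3))
    return history

# Target hand position (arbitrary units) vs. an initial attempt:
print(practice(target=10.0, produced=6.0))
# each repetition halves the remaining error: [8.0, 9.0, 9.5, 9.75, 9.875]
```

Varying which parameter plays the role of "target" (position, speed, contact intensity) is what keeps repetition feeling like progressive motor learning rather than mechanical drill.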

 It is actually quite easy--once you stop thinking about it . . . and just do it. 

Saturday, July 2, 2011

Getting Rhythm--haptically!

Clip art: Clker
Any pronunciation teaching method text will recommend kinaesthetic techniques for practicing rhythm, such as clapping hands or rhythmic dance-like movements. How productive use of rhythm is acquired, however, remains a mystery. In L1 learning, inability to perceive rhythm, for example, may be a contributing factor in some types of autism and dyslexia. As reported in this study, at least drum rhythm can be learned haptically, in the absence of either visual or auditory input. 

In almost all HICP work, rhythm and rhythm groups are haptically anchored, accompanying, for example, primary focus on vowels, stressed syllables and intonation contours (or tone groups). In fact, EHIEP should be seen as very much "rhythm-centered," experienced by the learner in conversation-like rhythm groups of syllables, often accompanied by music during practice.


Friday, July 1, 2011

Field independence in haptic pronunciation instruction

As reported in this article by Hecht and Reiner, field dependence/independence cognitive style may have an impact on how readily one is able to "get" the felt sense of a haptically anchored object in virtual reality or through haptic video as well. In HICP terms, that would suggest that the field independent learner should be better able to focus on and recall targeted objects (sounds, words or processes)--without getting too engaged or distracted by any one modality involved or feature of the visual field--bringing as much information and cognitive integration to the event as possible.

Clip art: Clker
There is a fascinating interplay involved here. The "danger" of haptic-based or other "physical" techniques is that the learner may be so engaged with the somatic experience that the learning objective or structure in focus is lost or at least not well connected. Field independence suggests the possibility of better cognitive/noncognitive balance in the experience. On the face of it, that does seem to explain why some learners (although not many) find haptic work less effective or efficient. For example, they may be able to remember the pedagogical movement pattern (PMP) associated with a vowel but not the pronunciation. Likewise, a learner's over-enthusiastic, dramatic or emotional response in anchoring a targeted expression, not uncommon in field-dependent individuals, may actually be counter-productive, resulting in relatively poor, limited access and recall later.

Effective multiple-modality learning requires that information from all the senses brought to the problem "at hand" be represented appropriately and optimally. EHIEP protocols work only to the extent that instructors and students maintain control and maximal attention in the process. Working with body movement, there is always the possibility of things getting a bit "out of hand," but that should be avoided to the extent possible--especially for the more field-dependent and hyperactive among us.