Showing posts with label modality. Show all posts

Wednesday, June 4, 2014

Visual "Socailization" and visual pronunciation teaching methods

In a recent interview, Robert Thomson, chief executive officer of News Corp, commented on the far-reaching impact of "visual socialization" on today's media and news organizations. One observation was that we are only beginning to understand the new, overwhelming dominance of visual learning and what it means for both social connectedness and education. To get a feel for what visual connectedness and "Socail media" may be like, watch this "Socail Cave" video by Tiazzoldi, or check it out on Pinterest.
Photo credit: Moses Lam

Well . . . yes, there may be a bit of random "dys-graphia" involved there, but the two pieces together do underscore Thomson's point: the all-consuming influence of visual media. I may just adopt that acronym: SOCAIL? (So, Over-the-top visual-Cognitive pronunciation teaching really Ain't It, Lads?)

It is easy to underestimate the impact on our work. Several methods or companies appear to be more explicitly visual, such as "EyeSpeakEnglish.com." How well the new "visually socialized" generations of learners (VSLs) can learn pronunciation--can connect sound and movement to their primary learning modality, visual imagery--is, of course, the question. In general, research and practice up to this point suggest that visual dominance simply overrides not only the auditory modality but the tactile as well. (See--literally--dozens of previous blog posts here on that topic!)

My guess is that many highly visual pronunciation teaching methods (those that do not, by design, include strong compensatory auditory and movement components) are anachronisms at best, created before the emergence of new media and VSLs, overcompensating for the earlier appeal of "colourful" or engaging visual images to learners who had not experienced them before.

The antidote? (And I could provide anecdotes ad infinitum, of course.) Haptic. Keep in touch. 

Monday, November 26, 2012

Physical vs social domains in pronunciation work

Ever wonder why students may not be able to use a new piece of pronunciation in pair work, in controlled conversation, or on their way out the door? Forthcoming research (already!) published in NeuroImage by Jack, Dawson, Begany, Leckie, Barry, Ciccia and Snyder, "fMRI reveals reciprocal inhibition between social and physical cognitive domains" (in the brain), suggests part of the answer: "Regardless of presentation modality, we observed clear evidence of reciprocal suppression: social tasks deactivated regions associated with mechanical reasoning and mechanical tasks deactivated regions associated with social reasoning."
Clip art: Clker
The implications of that for integration of pronunciation work, both in the lesson and in the brain of the learner, are worth an "uninhibited" reexamination. For one, perhaps insight, explanation, meaningful conversations, "lite drills" and metacognitive encouragement are not enough for efficient "uptake" to occur. Likewise, decontextualized "body drills" that focus primarily on the mechanics of articulation are not going to automatically bridge the "domain gap" either--in the classroom or on the street. Optimal learning in both domains must go on either simultaneously or in some kind of intricate dance that achieves both outcomes. Haptic integration is one answer to that, where the "channels" of communication and change are not quite in as direct competition. The only problem is often just overcoming the inhibitions of the "haptically challenged."

Thursday, April 5, 2012

Rhythm and prominence: The place of effective haptic anchoring

Clip art: Clker
I have been aware of the "Interactive Metronome" program, linked above, for several years. For a number of conditions it has been used very successfully. (The promo says that there are over 20,000 certified trainers!) It uses a combination of audio input of a rhythmic beep, along with hand clapping and feet tapping for steps, for both diagnostic and therapeutic purposes. I was asked if it fit within the HICP framework of haptic procedures. Yes and no.

On the one hand (no pun intended), it does involve movement and touch. In general, however, I am not a proponent of hand clapping and repetitive foot tapping for anything other than just getting the "feel" of the rhythm of speaking. I do not recommend tapping out or clapping out the syllables of a word or phrase, for example, as the primary technique for anchoring prominence (word or phrasal stress). One reason for that relates to research reported earlier on the blog on the nature of tactile and kinaesthetic memory. Tactile memory, relative to auditory and visual memory, for example, tends to be more easily "overwritten," or sensitive to cross-modal competition. In other words, another anchor or distraction in the same "vicinity" will, in effect, be more likely to "erase," downgrade or get confounded with the earlier one.

What that implies is that a pedagogical movement pattern (PMP) that anchors stressed syllables haptically in one area of the body and unstressed syllables in another, for instance, should be more effective than simple, repetitive clapping of hands or tapping of feet, which tries to anchor, or "write in," both stressed and unstressed syllables at the same location. The rhythmic practice versions of four EHIEP protocols do just that, using a regular rhythmic "tempo," not all that different from a metronome. (I sometimes do use a metronome with learners who do not have much of a felt sense of rhythm--of any kind!) The body location combinations are: deltoid--elbow, hand touch--outside hip, index finger tap--center palm, or fingernails--center palm, in various positions in the visual field. No excessive applause or foot stomping to "get" attention for prominence needed--nor all that effective, either!

Wednesday, January 4, 2012

Haptic anchoring and anchor redux: better felt and not seen for optimal conflux

Research by Lécuyer on what is known as "pseudo-haptic feedback" dramatically demonstrates the potential dominance of the visual modality over the haptic. When provided with contradictory feedback, such as seeing a distorted image of what we are touching, the brain will favor the visual image, especially in determining the size or shape of the object in view. (On questions of texture or other material properties, the balance may swing in the other direction.)

HICP/EHIEP "haptic integration" attempts to consistently shift perception toward the "material properties" of a sound, away from its orthographic image, which, in turn, may be associated with inaccurate or underdeveloped pronunciation. So attention to the conflux of the visual shape of the word and its auditory properties must be secondary--as noted in several other posts based on other disciplines, e.g., Lessac. What that should accomplish is not only more efficient encoding and anchoring of new sounds but also more effective "haptic monitoring" during spontaneous speech.

One of the most common reports from learners is the "return" of the clear, momentary felt sense of a sound being worked on, either as it comes to be pronounced more accurately or while it is still being used inaccurately--what we call "anchor redux." Those events, which are established and anticipated in the mind of the learner through several aspects of the system (what is termed "future pacing" in hypnotic work), are one of the basic benchmarks of HICP. Should you not see my point at this point . . . I'm sure you'll get a feel for it later . . .

Thursday, December 22, 2011

Monkey see and monkey do: efficient multi-tasking in pronunciation work

Clip art: Clker
Here is one of those research reports that inevitably evokes the same somewhat exasperated reaction from me (and, I expect, from most of you as well). Ready? It has been discovered that we--well, some of our purported "cousins," at least--are wired to multitask! Think of it . . . you can, for example, now watch TV and read a book at the same time, or run on a treadmill, without worrying that you are going against your very nature or doing irreparable harm to your equipment.

It is an important study, reportedly one of the first to establish that empirically. The trick, apparently, is just how closely related the two tasks are. If they are sufficiently distinct, either in terms of intra-modality contrast (like two pictures) or inter-modality contrast (like singing and knitting), go to it! Any number of previous posts have looked at the interplay among the visual, auditory and haptic modalities, coming to much the same conclusion: that we can, under the right circumstances, attend quite well to both haptic and auditory (and, in controlled contexts, visual) input simultaneously.

HICP/EHIEP is based on the idea of continuous, simultaneous engagement of multiple modalities (what we often refer to with the acronym "CHI," for continuous haptic integration, haptic having the primary function of anchoring and integrating). In other words, doing pedagogical movement patterning and seeing (tracking those movements of the hands across the visual field) and speaking at the same time should be a piece of cake. If not, we may just have too much time on our hands--or not enough. Certainly nothing to HICP at!

Sunday, August 21, 2011

The Myth of Learning Styles

Here is a "must-read" on the concept of "learning styles" from Change Magazine, 2010. It begins with an interesting claim: "There is no credible evidence that learning styles exist." 



Although Riener and Willingham focus on the validity of the idea at the college level, their basic claim--that ability, student background and content (including the media in which the instruction is packaged) are far more relevant to instruction than the potential impact of individual learning style (visual, auditory, kinesthetic, etc.)--is simply dead on. (This is one of those pieces of research that you discover--one you almost agree with too much--that forces you to smile for the rest of the day!) Enjoy!

Friday, July 1, 2011

Field independence in haptic pronunciation instruction

As reported in this article by Hecht and Reiner, field dependence/independence cognitive style may have an impact on how readily one is able to "get" the felt sense of a haptically anchored object in virtual reality or through haptic video as well. In HICP terms, that would suggest that the field independent learner should be better able to focus on and recall targeted objects (sounds, words or processes)--without getting too engaged or distracted by any one modality involved or feature of the visual field--bringing as much information and cognitive integration to the event as possible.

Clip art: Clker
There is a fascinating interplay involved here. The "danger" of haptic-based or other "physical" techniques is that the learner may be so engaged with the somatic experience that the learning objective or structure in focus is lost or at least not well connected. Field independence suggests the possibility of better cognitive/noncognitive balance in the experience. On the face of it, that does seem to explain why some learners (although not many) find haptic work less effective or efficient. For example, they may be able to remember the pedagogical movement pattern (PMP) associated with a vowel but not the pronunciation. Likewise, a learner's over-enthusiastic, dramatic or emotional response in anchoring a targeted expression, not uncommon in field-dependent individuals, may actually be counter-productive, resulting in relatively poor, limited access and recall later.

Effective multiple-modality learning requires that information from all the senses brought to bear on the problem "at hand" be represented appropriately and optimally. EHIEP protocols work only to the extent that instructors and students maintain control and maximal attention in the process. Working with body movement, there is always the possibility of things getting a bit "out of hand," but that should be avoided to the extent possible--especially for the more field-dependent and hyperactive among us.

Friday, November 19, 2010

The "Change the Channel Fallacy"

Several years ago a colleague who worked in addiction counseling explained to me why many therapeutic approaches to addiction fail: they try to change the channel. By that he meant that the method may "simply" try to convince the brain to go for "good" pleasure rather than bad--like switching drugs--or to exchange good thoughts for bad. That is why successful treatments are multi-modal, using some channel other than the one the addiction is lodged in to eventually change the behavior or the problematic channel. In my view, that is one of the key reasons that pronunciation work can fail as well: it (simply) tries to substitute sounds in the auditory track. That is why, in many cases, cognitive and meta-cognitive "treatments" or instructional focus can be successful--depending on the modality preferences of the learner. That is also why haptic engagement seems to work.

Both kinesthetic (movement) and tactile (touch) channels are, for most, secondary channels that can be trained without directly confronting the primary processing channels, whether visual or auditory. Ironically, and somewhat counterintuitively, for a highly visual learner a strong auditory training focus may work, or vice versa. One of the principles of Neurolinguistic Programming (NLP) which I have found most helpful is: for maximum effect, try to anchor a sensation in a learner's secondary or tertiary channel, not the primary one. How you figure out what that means in a big class is a matter for another post, but do you see/hear/feel/are you moved or touched by the idea?