Showing posts with label context. Show all posts

Wednesday, December 21, 2022

Context, AI voice technology, haiku and (pronunciation) teaching

Two studies recently published and summarized by ScienceDaily.com together illustrate how critical context is to understanding and to (pronunciation) teaching. One is on the impact of voice technology; the other, on AI-assisted or AI-created haiku.

In the first, How voice technology influences what we reveal about ourselves, by Melzner, Bonezzi, and Meyvis, published originally in the Journal of Marketing Research, customers, not surprisingly, were found to reveal more about themselves, directly or indirectly, when speaking to automated systems than when interacting by keyboard with text messaging. Some of that "revelation" is actually carried in paralinguistic features (voice characteristics such as pitch and pacing) and in background sounds. In other words . . . context.

The haiku study, Basho in the machine: Humans find attributes of beauty and discomfort in algorithmic haiku, by Ueda of the Kyoto University Institute for the Future of Human and Society, found, basically, that subjects rated haiku created by humans collaborating with AI higher, in general, than haiku created by AI or humans alone, but that if they suspected the involvement of AI in the process, the ratings went down. (Exactly how subjects were prompted to check for that is not indicated, but just the suggestion of possible AI "meddling" had to have a pervasive effect, potentially skewing the perceptions and expectations of the subjects and undermining the validity of the study . . . )

(Full disclosure: having lived in Japan for a decade, I came away with a great appreciation for haiku, such that it gradually became my preferred genre for both reading pleasure and poetic expression.)

How does this all tie together with pronunciation teaching? In both studies, context is critical. In fact, haiku only works when the context is either provided in advance, often explained in great detail, OR when the reader has grown up in . . . well . . . Japan, where the form is encountered from infancy. Great haiku as an art form generally recreates context (or possible contexts) in the mind of the devotee/reader. In both cases, comprehension is grounded in context, not in the form itself.

So it is with pronunciation teaching as well. It is possible to work with pronunciation out of explicit context and have the sounds or patterns later available in spontaneous language use, but the treatment has to be almost "haiku-like" in salience for the learner. On the other hand, as in the AI voice study, the immediate context of, let's say, attention to a consonant is encoded along with the targeted sound: the voice characteristics, including stress and disruptive performance markers of student and model, the room ambience, and the neurology and biomes of learners and instructors--all of it either enabling or disabling recall later.

In other words, unless context, in several senses, is working for you or being proactively generated in pronunciation work, the odds are not with you. For that reason, in part, the KINETIK method and approaches like it are designed to consistently embed pronunciation in regular, good course content or in personally memorable, engaging narrative of some kind, where the chances of the focus of the work being remembered are better--or at least less cluttered.

O AI, eh? Aye!
O the story is long but . . . 
The tail is longer





Sources: 

American Marketing Association. (2022, November 30). How voice technology influences what we reveal about ourselves. ScienceDaily. Retrieved December 20, 2022 from www.sciencedaily.com/releases/2022/11/221130114606.htm

Kyoto University. (2022, December 2). Basho in the machine: Humans find attributes of beauty and discomfort in algorithmic haiku. ScienceDaily. Retrieved December 20, 2022 from www.sciencedaily.com/releases/2022/12/221201122920.htm

Sunday, August 26, 2018

It's not what you learn but where: how visual context matters

 If you have seen the recent research study, Retinal-specific category learning, by Rosedahl, Eckstein and Ashby of UC-Santa Barbara (summarized by Science Daily), I have a few questions for you. (If not, read it at eye level or, better, just above, holding whatever you are reading on accordingly.)
  • Where did that happen (Where was your body; in what posture did it happen)?
  • What media (paper, computer, etc.) did it happen on?
  • What was your general emotional state when that happened? 
  • What else were you doing while you internally processed the study? (Were you taking notes, staring out the train window, watching TV . . . ?)
  • Where in your visual field did you read it? If it was an audio source, what were you looking at as you listened to it?
Research in neuroscience and elsewhere has demonstrated that any of those conditions may significantly impact perception and learning. Rosedahl et al. (2018) focus on the last condition: position in the visual field. What they demonstrated was that what is learned in one consistent or typical place in the visual field tends not to be recognized as well if it later appears somewhere else in the visual field, or at least on the opposing side.

In the study, when subjects were trained to recognize classes of objects with one eye, the other eye covered, they were not as good at recognizing the same objects with the other eye. In other words, just the position in the visual field appeared to make a difference. The summary in Science Daily does not describe the study in much detail. For example, had the direction of the training protocol been from left to right--that is, learning the category with the left eye (in right-eye-dominant learners)--I'd predict that the effect would be less pronounced than in the opposite direction, based on extensive research on the relative differential sensitivity of the left- and right-side visual fields. Likewise, I'd predict that you could find the same main effect just by comparing objects high in the visual field with those lower, at the peripheries. But the conclusion is fascinating, nonetheless.

The relevance to research and teaching in pronunciation is striking (or eye-opening?) . . . If you want learners to remember sound-schema associations, do what you can to not just provide them with a visual schema in a box on paper, such as a (colored?) chart on a page, but consider creating the categories or anchoring points in the active, dynamic, three-dimensional space in front of them. That could be a relatively big space on the wall or closer in, right in front of them, in their personal visual space.

One possibility, which I have played with occasionally, is giving students a big piece of paper with the vowels of English displayed around the periphery--so that the different vowels are anchored more prominently with one eye or the other, or "noticeably" higher or lower in the visual field--and having them hold it very close to their faces as they learn some of the vowels. The problem there, of course, is that they can't see anything else! (Before giving up, I tried using transparent overhead projector slides, too, but that was not much better, for other reasons.)

In haptic pronunciation work, of course, that means using hands and arms in gesture and touch to create a clock-like visual schema about 12 inches away from the body, such that sounds can be, in effect, consistently sketched across designated trajectories or anchored to one specific point in space. For example, we have used in the past something called the "vowel clock," where the IPA vowels of English are mapped on, with the high front tense vowel [i] at one o'clock and the mid back tense vowel [o] at nine o'clock. Something like that.

In v5.0 of Haptic Pronunciation Training-English (HaPT-Eng), the clock is replaced by a more effective compass-like visual-kinesthetic schema of sorts, where the hands-arms-gesture creates the position in space and touch of various kinds embodies the different vowel qualities of the sounds that are located on that azimuth or trajectory in the visual field. (Check that out in the fall!)

In "regular" pronunciation or speech teaching those sorts of things go on ad hoc all the time, of course, such as when we point with gesture or verbally point at something in the immediate vicinity, hoping to briefly draw learners' attention. Conceptually, we create those spaces constantly and often very creatively. Rosendahl et al (2018) demonstrates that there is much more potentially in what (literally) meets the eye. 

Source:
University of California - Santa Barbara. (2018, August 15). Category learning influenced by where an object is in our field of vision. ScienceDaily. Retrieved August 23, 2018 from www.sciencedaily.com/releases/2018/08/180815124006.htm


Tuesday, March 1, 2016

"Classless" pronunciation teaching and "miscue-aggression"?

Attended a delightful, engaging, stimulating and very well presented workshop on teaching pronunciation last week--by a charismatic, former drama teacher who had been teaching a twice-weekly pronunciation course for college ESL students for well over a decade. After the session, in the hall, one of the less experienced participants remarked: "Phenomenal presentation . . . but I couldn't possibly use any of those techniques in my class!" No kidding. Why not?

One of the most "striking" techniques demonstrated had the teacher or a student comically hit a student over the head with an artificial daisy whenever he or she made a pronunciation miscue. The presenter remarked, in fact, that in all her years of teaching pronunciation she had never had a student complain about being corrected. And, after just an hour in the presence of that presenter, I don't doubt that . . . for a minute.

Two reasons most of what was presented was pretty much inapplicable to most of us in the audience. First, rapport. The presenter was one of those gifted teachers who almost instantly creates a safe and yet wildly creative milieu where learners will engage in extraordinary risk taking and not feel threatened in the least. Second, and related, many of the techniques demonstrated required that kind of "wide open" classroom setting to work effectively and, especially, efficiently in the first place.

The point: so often what can be done in a dedicated pronunciation class or language lab, with all its relational and situational constraints and social contracts, cannot be done in an integrated classroom setting where pronunciation is taught or attended to only piecemeal or occasionally or on a more impromptu basis. As research has demonstrated convincingly, instructors and students alike do not generally feel comfortable with much of how pronunciation is taught today. With good reason.

The affective and emotional context of pronunciation teaching is critical, even more so than for many other aspects of language teaching. In a dedicated "dramatic" class, strange things may work well; in an integrated "classless" setting, the rules and consequences can be very different. The "takeaway" from the dramatic, engaging workshop: Very little . . .

John Rassias (1925-2015) where are you when we need you?




Tuesday, February 3, 2015

Context rehabilitation in (or as a substitute for) pronunciation and accent work

Part of the system I wrote about in 1984 (Acton 1984) included the almost tongue-in-cheek notion of "context rehabilitation." (See the recent, relatively accurate 2014 outline of that article by Polinedrio and Colon.) The idea was to very proactively train students in how to influence the attitudes of their supervisors and co-workers regarding their improving comprehensibility--while at the same time, of course, making substantive, noticeable changes in intelligibility as soon as possible in the program. Some of that came from the early work of Rubin (1975) and others, and from work on attending skills, e.g., Acton and Cope (1999).

A recent, very informative review of research on the effectiveness of pronunciation instruction by Thomson and Derwing (2014) concludes with this interesting and revealing comment:


"In immigrant situations, native speakers of the L2 can be helped to become better listeners as well (Derwing et al. 2002; Kang and Rubin 2012) . . .  Communication is a two-way street, thus L2 speakers’ interlocutors sometimes need support in building confidence that they have the skills to interact with L2 accented individuals." 

Other than the near-comma-splice, I love that word "support" in that final statement. It may well be that educational campaigns and lawsuits to change societal attitudes toward accents will, indeed, in the long run be the most cost-efficient and effective approach to improving intercultural communication--and to making much pronunciation instruction less (or ir-)relevant . . .

For a much fuller exploration of that and related themes, get a copy of a great-looking new book, Social dynamics in second language accent (2014), edited by Levis and Moyer--VERY EXPENSIVE at $176 CAD in hardcover, and I can't find it in paperback yet! (My library doesn't have it yet, but most of the chapters seem to be obvious continuations of each author's best stuff.)

Keep in touch. 


Tuesday, May 6, 2014

Gesture to teach L2 vocabulary (and pronunciation) by!

Required reading: a new 2014 article by Macedonia (University of Linz) and Klimesch (University of Salzburg), published in Mind, Brain and Education 8(2): 74-86, entitled "Long-term Effects of Gestures on Memory for Foreign Language Words Trained in the Classroom." In essence, what the study revealed (or confirmed) was--as the title declares--that systematic use of gesture, especially dramatic and iconic gesture, enhanced long-term memory for vocabulary. The comprehensive literature review on the function of gesture in learning and memory alone makes the piece worth reading.

Although from the description of the treatment in the experiment it is not entirely clear just how many of the gestures involved touch, those that were used were reported to be generally dramatic and/or iconic (representing an object by tracing its shape in the air). Words learned with accompanying gesture were remembered better, even four months out, at follow-up.

And the fascinating aspect of that research for our haptic work is that the terms were learned generally in short phrases or as single words in isolation, outside of any context such as a story, conversation or other narrative. Our upcoming workshop at TESL Canada this weekend in Regina focuses on just that: haptic (gesture + touch) anchoring of relatively out-of-context terms taken from the Academic Word List. Good to have a little more empirical evidence for the efficacy of gestural anchoring with us as we do!

Keep in touch!

Saturday, December 1, 2012

The body language of pronunciation teaching: Karaoke Affect

One of the potential "turn offs" for some instructors and students in buying into the gestural and somatic basis of pronunciation work is . . . how "goofy" it looks (with apologies to Goofy, of course). And some of it does, unquestionably. If you need to get to "goofy," you have to ramp up the wilder gesticulation gradually--what we call "Karaoke Affect." As long as you establish the context carefully and set up good conceptual partitions, most students will come along with you . . . to goofy and beyond.

But to someone who is not in the typical pronunciation teaching box, or just passing by with no clue what the class is about, what do the typical gestural classroom techniques communicate: (a) clapping hands, (b) snapping fingers, (c) stretching rubber bands, (d) humming with a kazoo, (e) thumping on the desk, (f) stamping feet, (g) waving hands in the air to imitate intonation, (h) tracing lines on worksheets with fingers, (i) stepping up and down with sentence stress, (j) popping candy in the mouth on certain vowels, (k) throwing bean bags on stressed words--let alone the dozens of mouth machinations done for teaching specific vowel and consonant articulation?

According to recent research by Aviezer of Hebrew University, Trope of New York University and Todorov of Princeton University, summarized by Science Daily, it is the body that accurately communicates feelings (at least), not the face and mouth. In the study, subjects were much better at determining emotional state by focusing on movement and gesture than by looking from the neck up.

Situating and contextualizing those "bizarre" behaviours and what they communicate requires a coherent system to use them in. As we have seen in research across dozens of blog posts, it can go either way. (The EHIEP "way" is a good start, of course!) So, climb in your Karaoke Affect Box, affect your best Eliza Doolittle, and . . . Show me!

Tuesday, September 20, 2011

Marking the territory: What wolves can teach us about integrated anchoring and learning of L2 pronunciation

There are many who report using haptic anchoring in teaching pronunciation--techniques such as clapping hands, stomping feet, tapping on the desk or pulling at rubber bands, coordinated with stressed words or rhythm groups in speaking. Such "marking of the territory" does certainly help to reinforce the goal of the activity, but it is not "haptic-integrated," in the sense that we use the term here.

In the 2003 study of the marking behavior of wolves in Poland, by Zub, Theuerkauf, Jędrzejewski, Jędrzejewska, Schmidt and Kowalczyk, an elaborate system was revealed such that each incidence of territorial marking could only be interpreted by considering three parameters simultaneously: (a) significance, (b) variability, and (c) relationship to other marking(s). In other words, the efficacy and meaning of the "mark" was dependent entirely upon its relative place in an integrated, multi-dimensional map of the territory.

In the same sense, haptic anchoring of new pronunciation contributes effectively only when it is thoroughly integrated into speaking, listening, reading or writing tasks. If it is experienced outside of meaningful discourse, narrative and task sequencing--as traditional, isolated pronunciation exercises inserted in the midst of a lesson--then no matter how vividly or dramatically the haptic anchoring is executed, chances are it will be no more distinctive or memorable, in principle, than when the wolves themselves employ the same marking "technology" for other, more mundane functions in the Bialowieza woods . . .

Friday, September 16, 2011

The case for slightly boring pronunciation classes

It is not easy to come up with a reason to justify mind-numbing, repetitive, decontextualized pronunciation drill, but maybe here is one . . . This 2001 study by Martin at the University of Missouri, summarized by a UK Daily Mail reporter, "discovered" that if you are having too good a time, it may affect your ability to remember "data." As the summary of the research notes, that could be caused by several factors, but one connection to haptic work is what others refer to as the Hansel and Gretel effect: context is encoded along with the stuff to be remembered. If the trace back to a word is only "bread crumbs," chances are you won't get there; if the path back involves getting into a party mood first, that can be almost as bewildering later, it turns out.


The trick, of course, is to create an event or experience that is anchored as efficiently as possible (with "noticing" in overdrive), with attention limited to the target, not the visual or emotional setting--or even unduly "thick" memories of past events. The immediate linguistic context of a sound or word, however, must also remain as a permanent part of the package. To do that for most learners requires momentary, conscious control of mind, brain, body and immediate surroundings: from our perspective, a "haptic-integrated" felt sense that is both highly energized and relaxed at the same time. In such a heightened state of awareness it is, of course, relatively easy to stay "in touch"-- and nearly impossible to be bored--or miss the party.