Friday, April 22, 2011
Common sense tells us that if we engage our visual, auditory, kinesthetic, and tactile neural hardware simultaneously, we should be able to learn more effectively. The study linked above by Kelly & Avraamides demonstrates something of how that works: in effect, information from all modalities tends to be "stored" in a common location in the brain (though it is also stored at other neural sites as well). The implication for our work is that a fixed location in the visual field strongly associated with a sound or sound process should be readily recalled from any of five directions, having been learned by (a) simultaneously locking the eyes and/or proprioceptive nervous system on the point (b) where the hands have just arrived and are (c) momentarily touching, as the sound is (d) articulated and (e) heard, to some degree, through the ears. Pedagogically, that simply means efficiency. In other words, learners of any cognitive style, functioning in any skill area, should be better able to recall and begin using what they have anchored, what they have haptically integrated, later.