SAS for scientists and scholars

It is one thing to discover an intervention method that works, but quite another to explain why and how it works. A famous example of this is Aspirin, one of the most widely used medicines in the world. Since antiquity it has been known that certain plant extracts help to reduce headaches, pain and fever. Hippocrates (around 400 BC), the father of modern medicine, described how the bark and leaves of the willow tree could be used to make a powder with these properties. It was not until the mid-1800s that this natural Aspirin was reproduced in laboratories, and by the early 1900s Aspirin had become a household name in medicine. Research did not uncover the basic mechanisms behind the effectiveness of Aspirin until the 1960s, and even today it continues to be researched further. The use of the active ingredients of Aspirin thus predates their scientific understanding by several millennia.

In this paper I will outline the scientific foundations underlying the SAS neuro-sensory activation method, in the full knowledge that this is a work in progress. Further research into the fundamental neuroscience and the efficacy and effectiveness of the methodology is still required, and SAS is actively pursuing this in conjunction with a number of educational and academic research establishments. Not unlike Aspirin fifty years ago, we have observed the effectiveness of the method and are now researching the basic mechanisms underlying the intervention technique. The assumptions underlying the method, however, are based on solid scientific knowledge and existing validated research, as outlined below.

Can the brain change?

The proposed SAS method will only be of value if the brain can change in response to external stimuli through the senses. Until the late 20th century this was held to be impossible, but more recent research into brain plasticity points towards the brain's remarkable ability to change:

The brain, as the source of human behavior, is by design molded by environmental changes and pressures, physiologic modifications, and experiences. This is the mechanism for learning and for growth and development — changes in the input of any neural system, or in the targets or demands of its efferent connections, lead to system reorganization that might be demonstrable at the level of behavior, anatomy, and physiology and down to the cellular and molecular levels.

Therefore, plasticity is not an occasional state of the nervous system; instead, it is the normal ongoing state of the nervous system throughout the life span. A full, coherent account of any sensory or cognitive theory has to build into its framework the fact that the nervous system, and particularly the brain, undergoes continuous changes in response to modifications in its input afferents and output targets. Implicit to the commonly held notion of plasticity is the concept that there is a definable starting point after which one may be able to record and measure change. In fact, there is no such beginning point because any event falls upon a moving target, i.e., a brain undergoing constant change triggered by previous events or resulting from intrinsic remodeling activity. We should not therefore conceive of the brain as a stationary object capable of activating a cascade of changes that we call plasticity, nor as an orderly stream of events driven by plasticity. Instead we should think of the nervous system as a continuously changing structure of which plasticity is an integral property and the obligatory consequence of each sensory input, motor act, association, reward signal, action plan, or awareness. In this framework, notions such as psychological processes as distinct from organic-based functions or dysfunctions cease to be informative. Behavior will lead to changes in brain circuitry, just as changes in brain circuitry will lead to behavioral modifications. (Pascual-Leone et al, 2005) [1]

The brain thus continuously changes its structure as a consequence of sensory and motor input. Another study has shown that permanent, long-term changes can occur as a result of repeated cognitive and sensory input.

Structural MRIs of the brains of humans with extensive navigation experience, licensed London taxi drivers, were analyzed and compared with those of control subjects who did not drive taxis. The posterior hippocampi of taxi drivers were significantly larger relative to those of control subjects.

Hippocampal volume correlated with the amount of time spent as a taxi driver.

It seems that there is a capacity for local plastic change in the structure of the healthy adult human brain in response to environmental demands. (Maguire et al, 2000) [2]

Brain plasticity is thus not limited to functional connections but can actually result in permanent physiological changes. Nor is it limited to younger subjects; it can also take place in older adults.

Intervention for (C)APD [(Central) Auditory Processing Disorders] has received much attention recently due to advances in neuroscience demonstrating the key role of auditory plasticity in producing behavioral change through intensive training. With the documented potential of a variety of auditory training procedures to enhance auditory processes, the opportunity now exists to change the brain, and in turn, the individual’s auditory behavior through a variety of multidisciplinary approaches that target specific auditory deficits. Customizing therapy to meet the client’s profile (e.g., age, cognition, language, intellectual capacity, comorbid conditions) and functional deficits typically involves a combination of bottom-up and top-down approaches. (American Academy of Audiology, 2010) [3]

Auditory training may be able to change the way the brain processes or reacts to incoming information.

A way into the brain.

In order to activate the brain, and preferably each hemisphere separately, we need a way into the brain. As we are not considering invasive surgical, chemical or transcranial magnetic stimulation techniques, the main entry points will be those offered to us by the sensory systems. The five most promising sensory modes are the visual, auditory, tactile, vestibular and proprioceptive systems. The olfactory (smell) and gustatory (taste) systems can be used in a limited way in multi-sensory teaching environments, but are difficult to control for intensive, rapidly changing stimulation, and are not considered here.

There is a wide range of intervention techniques based on movement and touch, using the tactile, vestibular and proprioceptive systems as input channels. The advantage of most of these methods is the minimal requirement for specialised equipment and easy application in most settings, including as home programmes. The disadvantage is that they require the cooperation, and a certain level of ability, of the client, and that effectiveness is often only achieved after several months of daily exercises. The duration of the programmes often dictates that the client executes the exercises at home, which can lead to premature cessation of the programme. At SAS Centres we often complement the SAS auditory activation method with a range of movement-based intervention techniques.

The visual and auditory systems are the preferred entry points in traditional education, using cognitive-based tasks. This undoubtedly will remain the main method for teaching. However, in cases where developmental milestones are not reached at an appropriate age, where learning achievement is lagging behind, or where daily life is affected by a lack of development in cognition, social skills, or emotional or behavioural maturity, a different approach may be required.
When traditional methods do not achieve the required results, a less cognitive and more direct sensory approach may be appropriate.

Of all sensory input, the visual system accounts for about 90% of the information flow into the brain. It is therefore a prime candidate for a sensory-based intervention technique. There are several non-cognitive methods that aim to influence the brain through the visual modality. If we wish to activate individual brain hemispheres, however, the stimulus needs to be presented to each visual field separately, and this requires either knowledge of where the focus of visual attention is at any time or the cooperation and attention of the client. This complication is due to the layout of the pathways of the optic nerves. At the optic chiasm the information coming from both eyes splits according to the visual field: the corresponding halves of the field of view (right and left) are sent to the left and right hemispheres, respectively. Thus the right side of the primary visual cortex deals with the left half of the field of view from both eyes, and similarly for the left hemisphere. A small region in the centre of the field of view is processed by both hemispheres. It is therefore currently difficult to reliably activate each hemisphere separately, although new eye-tracking technology offers opportunities for the future.

The auditory system is another prime candidate when considering sensory activation of the brain through one of the senses. It is easy to gain separate access to each ear through the use of close-fitting, ear-covering headphones. Once sound has been converted to neural signals in the inner ear, however, the situation becomes more complex, as described by Weihing and Musiek (2007) [4]:

In the central auditory nervous system, two main pathways extend from the periphery to the auditory cortex. The stronger of these two pathways consists of the contralateral connections, which connect the left periphery to the right hemisphere and the right periphery to the left hemisphere. However, there also exists weaker ipsilateral connections which connect, for instance, the left periphery to the left hemisphere (Pickles, 1982) [5]. As animal models have shown, the ipsilateral connections may be weaker, in part, because there are more contralateral connections in the central nervous system. (Rosenzweig, 1951) [6]; (Tunturi, 1946) [7]

Utilization of these two pathways depends on the mode of stimulation. When a stimulus is presented monotically both the contralateral and ipsilateral pathways are used to bring the neural signal to the cerebrum. For instance, if "hot dog" is presented to the right ear, the ipsilateral connections will bring the signal to the right hemisphere, whereas the contralateral connections will bring the signal to the left hemisphere. The situation changes, however, when stimuli are presented dichotically at equal sensation levels. The contralateral connections will still carry the signal, but the ipsilateral connections will now be suppressed to some degree (Hall & Goldstein, 1968) [8]; (Rosenzweig, 1951) [6]. This means that under dichotic conditions, the pathways contributing to auditory processing are mainly the stronger contralateral connections.

By using carefully designed dichotic (both-ear) signals, it is possible to reach each hemisphere separately, with only a limited amount of ipsilateral (same-side) stimulation. If required, however, it is also possible to strengthen the ipsilateral pathways by adjusting the amplitude and temporal (timing) properties of the signal.

A great advantage of using the auditory system to reach the brain is that auditory processing takes place 24 hours a day, awake or asleep, whether the listener is paying attention or not. This allows a methodology to be developed that will suit almost all clients, irrespective of their abilities, attention or cooperation. Another key advantage is that it can reach the speech and language centres in the brain, which play such an important role in the production of speech, one of the most important developmental milestones in an individual's growth.

It is for the above reasons that SAS currently specialises in methods that use the auditory system as the main entry point into the brain.
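As a concrete illustration (a minimal sketch, not SAS's actual signal processing; the function name and parameters are hypothetical), a dichotic pair can be generated as one tone per ear with independently adjustable amplitude and timing:

```python
import math

def dichotic_pair(freq_hz, amp_left, amp_right, delay_right_ms=0.0,
                  duration=1.0, sample_rate=44100):
    """Generate one sine tone per ear, with an independent amplitude per
    channel and an optional onset delay on the right ear.

    Interaural amplitude and timing differences are the two signal
    properties mentioned above for shifting the balance between
    contralateral and ipsilateral stimulation.
    """
    n = int(duration * sample_rate)
    delay = int(delay_right_ms / 1000.0 * sample_rate)
    left = [amp_left * math.sin(2 * math.pi * freq_hz * t / sample_rate)
            for t in range(n)]
    right = [0.0 if t < delay else
             amp_right * math.sin(2 * math.pi * freq_hz * (t - delay) / sample_rate)
             for t in range(n)]
    return left, right

# One second of audio: a louder tone to the left ear and a quieter,
# slightly delayed copy to the right ear.
left, right = dichotic_pair(440, amp_left=1.0, amp_right=0.6, delay_right_ms=0.5)
```

In practice the two channel lists would be interleaved and written out as a stereo PCM stream; the per-channel gain and delay are the adjustable properties referred to above.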

The role of inter-hemispheric communication and synchronisation.

The role that the main fibre tract between the two brain hemispheres, the corpus callosum, plays in the communication and synchronisation of the diverse functions of the brain is an area of intense research. There is, however, a growing body of evidence that links poor functioning of the transfer function of the corpus callosum to a range of learning difficulties. Sensory processing, understanding, memory, creativity and reading ability have all been linked to various forms of inter-hemispheric deficiency.
As a whole, these findings indicate that integrated white matter tracts underlie creativity. These pathways involve the association cortices and the corpus callosum, which connect information in distant brain regions and underlie diverse cognitive functions that support creativity. Thus, our results are congruent with the ideas that creativity is associated with the integration of conceptually distant ideas held in different brain domains and architectures and that creativity is supported by diverse high-level cognitive functions, particularly those of the frontal lobe. (Takeuchi et al, 2010) [9]

Although inter-hemispheric interaction via the callosum is most often conceived as a mechanism for transferring sensory information and coordinating processing between the hemispheres, it will be argued here that the callosum also plays an important role in attentional processing. (Banich, 1998) [10]

In the current experiment, we investigate whether IHI (Inter-Hemispheric Interaction) increases attentional capacity outside the visual system by manipulating the selection demands of an auditory temporal pattern-matching task. We find that IHI expands attentional capacity in the auditory system. This suggests that the benefits of requiring IHI derive from a functional increase in attentional capacity rather than the organization of a specific sensory modality. (Scalf et al, 2009) [11]

In the present study, we focused on three of the deficits which have been thought to accompany and to a certain extent, to explain dyslexia: an abnormal pattern of hemispheric asymmetry, abnormal hemispheric communication, and abnormal motor control. (Velay, 2002) [12]

Spectral and coherence characteristics of the EEG photic driving show different aspects of latent abnormal interhemispheric asymmetry in autistics: the right hemisphere "hyporeactivity" and potential "hyperconnectivity" of likely compensatory nature in the left hemisphere. (Lazarev et al, 2010) [13]

We report new evidence showing that they also manifest deficits in interhemispheric integration of information, probably reflecting a corpus callosum dysfunction. Their performance during the time-limited trials was abnormal, showing that interhemispheric communication was inadequate. We report a new set of cognitive deficits compatible with a dysfunction of another major structure, the corpus callosum (CC), whose principal function is to allow the exchange of information between the hemispheres. Results reported herein indicate that Alzheimer's patients show an interhemispheric disconnection syndrome similar in nature to that demonstrated by split-brain subjects, i.e., patients whose CC was sectioned to alleviate intractable epilepsy. (Lakmache et al, 1998) [14]

SAS uses point sound sources moving from one ear to the other to induce inter-hemispheric communication signals through the corpus callosum.
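A moving point source of this kind can be sketched with a standard constant-power pan law. This is an illustration of the general audio technique only, not SAS's actual processing:

```python
import math

def pan_gains(position):
    """Constant-power pan law: 0.0 = fully left ear, 1.0 = fully right ear.

    left_gain**2 + right_gain**2 == 1 at every position, so the perceived
    loudness stays constant while the source moves between the ears.
    """
    angle = position * math.pi / 2
    return math.cos(angle), math.sin(angle)

def pan_sweep(mono_samples, sample_rate=44100, sweep_seconds=2.0):
    """Move a mono signal smoothly from the left ear to the right ear."""
    left, right = [], []
    for i, sample in enumerate(mono_samples):
        position = min(i / (sweep_seconds * sample_rate), 1.0)
        gain_l, gain_r = pan_gains(position)
        left.append(gain_l * sample)
        right.append(gain_r * sample)
    return left, right
```

Sweeping the pan position back and forth presents the source alternately to each ear, producing the kind of left-to-right movement described above.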

Using music as an activation signal.

Using music as an activation signal may seem a logical choice, but it is also backed up by recent research, which offers important insights into music-induced neuroplasticity, relevant to brain development and neurorehabilitation (Amagdei et al, 2010) [15]. The emotional impact of music can also assist in maintaining attention and lengthening the concentration span. The structure of music can assist in strengthening the segmentation function of the brain, important in coping with sensory input, as outlined by Sridharan et al, (2007) [16]:

Event segmentation is fundamental to object identification and feature extraction. The real world typically presents our sensory systems with a continuous stream of undifferentiated information. In order to make sense of this information, the brain needs to segment or chunk the incoming stimulus stream into meaningful units; it accomplishes this by extracting information about beginnings, endings, and event boundaries from the input.

Music is innate to all human cultures, and there is evidence suggesting that the ability to appreciate music can develop even without explicit training (Trehub, 2003) [17]; hence, music is considered an ecologically valid auditory stimulus. Like speech, music is hierarchically organized (Cooper & Meyer, 1960) [18]; (Lehrdahl & Jackendoff, 1983) [19]; perceptual event boundaries in music exist at several well-defined hierarchical levels and time scales, including discrete tones, rhythmic motifs, phrases, and movements.

Adjacent movements within a single work are generally delimited by a number of different cues: changes in tempo (gradual slowing), tonality (changes in the tonic or key center), rhythm, pitch, timbre, contour, and boundary silences (gradual drop in intensity). While each movement may last from several to ten or more minutes, transitions between movements take place over the time scale of a few seconds. Movement transitions are perceptually salient event boundaries that demarcate such long time-scale structural changes, partitioning a large-scale musical composition into thematically coherent subsections.

Studying such segmentation processes in music may be a useful window into similar processes in other domains, such as spoken and signed language, visual perception, and tactile perception.

In many of its programmes SAS uses classical music as the sound source prior to filtering and processing.

The language and speech centres in the brain.

The development of language understanding and speech production are key developmental milestones, and delays in these areas will have a great impact on the abilities of a child.

Modern medical imaging techniques show that a number of areas of the brain are involved in language and speech processing. The left hemisphere is dominant in 98 percent of right-handed people, while a high level of left dominance is also observed in left-handed people. However, the right hemisphere plays an important role in prosody: the rhythm, stress and intonation of speech.

Structural asymmetries in the supratemporal plane of the human brain are often cited as the anatomical basis for the lateralization of language predominantly to the left hemisphere. However, similar asymmetries are found for structures mediating earlier events in the auditory processing stream, suggesting that functional lateralization may occur even at the level of primary auditory cortex. We tested this hypothesis using functional magnetic resonance imaging to evaluate human auditory cortex responses to monaurally presented tones. Relative to silence, tones presented separately to either ear produced greater activation in left than right Heschl’s gyrus, the location of primary auditory cortex. (Devlin et al, 2003) [20]

Hemisphere-specific auditory stimulation may be a way to activate the language centres in the brain and may be able to subdue the non-dominant hemisphere.

The results suggest that in girls higher prenatal testosterone exposure facilitates left hemisphere language processing, whereas in boys it reduces the information transfer via the corpus callosum. (Lust et al, 2010) [21]

Gender specific programme design may be required to customise the intervention to meet the client’s profile.

The importance of ear dominance.

Most people know whether they are right- or left-handed, as only a small number are ambidextrous (using the right and left hand equally). Ear dominance, however, is not as easy to observe, nor is it widely known that ear dominance can have a substantial effect on speech and language development.

Right ear advantage for language processing may be caused by several interacting factors. The left hemisphere, especially for right-handed individuals, is specialised in language processing. Kimura postulated that auditory input delivered to the left ear, which is sent along the ipsilateral auditory pathways, is suppressed by the information coming from the right ear. Input to the left ear, which first reaches the contralateral right hemisphere, must be transferred via the corpus callosum to the left hemisphere where the language processing areas are located. The transfer of linguistic information from the right hemisphere to the left hemisphere results in a slight delay in processing. No such transfer delay is found for the right ear, thereby favouring the right ear for speech processing. (Kimura, 1961) [22]

Right ear preference can also affect communication strategies and behaviour:

According to the authors, taken together, these results confirm a right ear/left hemisphere advantage for verbal communication and distinctive specialization of the two halves of the brain for approach and avoidance behavior. (Tommasi & Marzoli, 2009) [23]

Ear dominance may also play a part in speech impediments such as stuttering / stammering:

There is evidence of differences in linguistic processing between people who stutter and people who do not stutter (Ward, 2006) [24]. Brain scans of adults who stutter have found greater activation in the right hemisphere, which is associated with emotion, than in the left hemisphere, which is associated with speech. In addition, reduced activation in the left auditory cortex has been observed. (Gordon, 2002) [25]; (Guitar, 2005) [26]

Through the use of temporal processing, phase shifting, intensity and movement control it is possible to direct the attention of the listener to one particular ear, which can result in longer term altered habits of ear preference.

Frequency discrimination ability linked to intelligence and learning difficulties.

Our ability to differentiate between sounds with different frequencies (tones) may seem to be a rather technical issue without much practical use in daily life, unless you are a musician, of course. However, there is a growing body of evidence that links this frequency discrimination ability with learning ability and intelligence.

The present study would suggest that frequency discrimination ability may be related to intelligence. (Langille, 2008) [27]

On a very practical level, improvements in frequency discrimination can help with conditions such as developmental dyslexia.

Developmental dyslexics reportedly discriminate auditory frequency poorly. (France et al, 2002) [28]

The standard SAS programmes include elements that are designed to strengthen frequency discrimination, while at SAS Centres we provide specialist training sessions that specifically target this ability.

Brainwaves that relate to our 'state of being'.

Brainwaves in humans were discovered through the application of EEG (electroencephalography) measurements nearly one hundred years ago. It was soon realised that certain frequency bands were associated with typical states of being, although recent research indicates that these distinctions are not as clear-cut as previously believed. The main brainwave frequency bands are:

Delta (under 4 Hz). Related to the deepest stages of N3 slow-wave sleep. Delta waves show a lateralisation, with right-hemisphere dominance during sleep (Mistlberger et al, 1987) [29]. Disrupted delta-wave activity is associated with attention deficit disorder (ADD) and attention deficit hyperactivity disorder (ADHD) (Clarke et al, 2001) [30].

Theta (4–7 Hz). Associated with drowsy, meditative or sleeping states. Research indicates that the Theta rhythm is involved in spatial learning and navigation (Buzsáki, 2005) [31].

Functional and topographical differences between processing of spoken nouns which were remembered or which were forgotten were shown by means of EEG coherence analysis. Later recalled nouns were related with increased neuronal synchronization (= cooperation) between anterior and posterior brain regions regardless of presented word category (either concrete or abstract nouns). However, theta coherence exhibited topographical differences during encoding of concrete and abstract nouns whereby former were related with higher short-range (mainly intrahemispheric), later with higher long-range (mainly interhemispheric) coherence. Thus, theta synchronization possibly is a general phenomenon always occurring if task demand increases and more efficient information processing is required. Measurement of EEG coherence yields new information about the neuronal interaction of involved brain regions during memory encoding of different word classes. (Weiss et al, 2000)[32]

Alpha (8–12 Hz). Related to relaxed wakefulness and REM (rapid eye movement) sleep. Alpha brainwaves increase when the eyes are closed.

Beta (13–30 Hz). Associated with normal waking consciousness: active, busy or anxious thinking and active concentration.

Gamma (over 30 Hz). Involved in cognitive processing and inter-hemispheric synchronisation.

Certain conditions, such as attention deficit hyperactivity disorder (ADHD), are known to display unusual relationships between these brainwave frequency bands.
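The band boundaries given in this section can be summarised in a small classification helper. This sketch uses exactly the cut-off values listed above; in practice, band edges vary between authors:

```python
def brainwave_band(freq_hz):
    """Map a frequency in Hz to the band names used above."""
    if freq_hz < 4:
        return "Delta"    # deepest stages of slow-wave sleep
    if 4 <= freq_hz <= 7:
        return "Theta"    # drowsy, meditative or sleeping states
    if 8 <= freq_hz <= 12:
        return "Alpha"    # relaxed wakefulness, REM sleep
    if 13 <= freq_hz <= 30:
        return "Beta"     # normal waking consciousness
    if freq_hz > 30:
        return "Gamma"    # cognitive processing, synchronisation
    return "unclassified"  # falls in a gap between the published edges
```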

Adolescent unmedicated ADHD males and age- and sex-matched normal control subjects were examined simultaneously using EEG and EDA measures in a resting eyes-open condition. ADHD adolescents showed increased absolute and relative Theta and Alpha1 activity, reduced relative Beta activity, reduced skin conductance level (SCL) and a reduced number of non-specific skin conductance responses (NS.SCRs) compared with the control subjects. Our findings indicate the continuation of increased slow wave activity in ADHD adolescents and the presence of a state of autonomic hypoarousal in this clinical group. (Lazzaro et al, 1999) [33]

Hemispheric-synchronised sounds, or binaural frequency differentials as used in the SAS programmes, can have unexpected influence on the body and mind, as shown in a double-blind randomised trial in the U.K. in 1999:

The possible antinociceptive effect of hemispheric-synchronised sounds, classical music and blank tape was investigated in patients undergoing surgery under general anaesthesia. The study was performed on 76 patients, ASA 1 or 2, aged 18–75 years, using a double-blind randomised design. Patients to whom hemispheric-synchronised sounds were played under general anaesthesia required significantly less fentanyl compared with patients listening to classical music or blank tape (mean values: 28 µg, 124 µg and 126 µg, respectively) (p < 0.001). This difference remained significant when regression analysis was used to control for the effects of age and sex. (Kliempt et al, 1999) [34]

An earlier study found that brainwave entrainment impacted on learning achievement:

This preliminary data suggests that use of AVS (AudioVisual Stimulator) entrainment to challenge and stimulate the brain appears to result in improved functioning on intelligence tests, achievement tests, and behavior as rated by parents and teachers. Results suggest significant improvement following this training and that longer training time results in greater improvement. (Carter & Russell, 1981) [35]

SAS uses sophisticated Binaural Frequency Differentials (BFD) in most programmes, designed to gently move the brainwave activity of the listener to the desired state. This may be relaxation, down from Beta to Alpha or Theta, or, for clients suffering from hyperactive behaviour, up from Theta and Alpha1 to Beta. Gamma waves are used extensively with the aim of activating inter-hemispheric synchronisation. Breathing-rate entrainment is interwoven with the BFD programmes to either relax or activate the body.
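The core of a binaural frequency differential can be sketched as two pure tones whose frequencies differ by the target brainwave rate. This is a minimal illustration, not the SAS implementation, and the parameter values are examples only:

```python
import math

def binaural_differential(carrier_hz, differential_hz,
                          duration=1.0, sample_rate=44100):
    """Left ear receives the carrier tone; right ear receives the carrier
    plus the differential. The listener perceives a beat at
    differential_hz, the rate intended to entrain brainwave activity."""
    n = int(duration * sample_rate)
    left = [math.sin(2 * math.pi * carrier_hz * t / sample_rate)
            for t in range(n)]
    right = [math.sin(2 * math.pi * (carrier_hz + differential_hz)
                      * t / sample_rate)
             for t in range(n)]
    return left, right

# Example: a 200 Hz carrier with a 10 Hz differential, i.e. a perceived
# beat frequency in the Alpha band.
left, right = binaural_differential(200, 10)
```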

Influencing the psyche through the use of therapeutic language.

Therapeutic language is used extensively in psychotherapy to help clients see situations from a different perspective and broaden their options and choices in life.

Therapeutic influence, far from being ineffable and based on therapists' impenetrable charisma, derives from concrete actions and an interactional approach that can be described and used in the psychotherapeutic training of university students. (Blanchet et al, 2005) [36]

Many psychotherapeutic approaches make use of metaphors to re-frame the client's view on their current life situation. Traditional fairy tales also make extensive use of metaphors and the use of metaphoric stories can be an effective way of communicating with children.

Our results show that all children preferred metaphors to literal instructions. Our findings also suggest that internalizing symptoms and higher levels of cognitive functioning are related to greater compliance with metaphors. (Heffner et al, 2003) [37]

Traditionally, psychotherapy is conducted on a one-to-one basis between the client and the psychotherapist, but pre-recorded stories using metaphors and therapeutic language, tailored to the age, condition and general environment of the client, can be used with the aim of dispelling fears of, for instance, social interactions in the playground, or of boosting confidence and feelings of achievement and self-worth.

In addition to music and language based sessions, the SAS method also makes use of client-appropriate pre-recorded Therapeutic Language Programmes (TLP).

Application of the SAS neuro-sensory activation method.

The application of the SAS neuro-sensory activation method is simple. Clients are required to listen to the selected pre-recorded programmes for one to one and a half hours each day, completing a minimum of 18 hours of listening within two to three weeks. After the first five consecutive days a break of one or two days is allowed. Full-sized, over-ear headphones of good quality are used and the volume is kept low, typically around 70 dBA. The client is not required to pay specific attention to the programmes, although many prefer to do so, especially for the story-based language programmes.

Components of the SAS neuro-sensory activation programmes.

The SAS neuro-sensory activation method uses a wide range of techniques to promote change, based on the scientific principles outlined above. Programmes fall into three main categories: Music, Language and Therapeutic Language Programmes (TLP). Programmes within each category can contain specific Binaural Frequency Differential (BFD) components. Most programmes are graduated, starting at a mild level of activation, gradually increasing to maximum activation before returning to the mild starting level again. A range of activation levels is available to suit the needs of the client. Various breathing and activation/relaxation levels can accommodate clients of different ages and allow application at different times of the day.

Summary.

The SAS neuro-sensory activation method has been used by thousands of clients in a range of settings: in one-to-one application at clinic-based SAS Centres, in group settings at schools and hospitals, and as home programmes by private clients. Clients are requested to provide post-programme feedback covering 27 different areas of ability and behaviour; the aggregated results of this feedback are published on the SAS website. The SAS organisation is actively pursuing academic research into the method, and results can be found on the same website.

The aim of this paper has been to add to this the scientific foundations underlying the SAS neuro-sensory activation method. Both the application and the scientific underpinning of the method are under constant review and are regularly updated. In this spirit we need to remember the wisdom of Socrates: "The only true wisdom is in knowing you know nothing."

Steven Michaëlis, London, March 2013.


[1] Pascual-Leone, A., Amedi, A., Fregni, F., Merabet, L.B. (2005). The Plastic Human Brain Cortex. Annual Review of Neuroscience, Volume 28: 377-401.
[2] Maguire, E.A., Gadian, D.G., Johnsrude, I.S., Good, C.D., Ashburner, J., Frackowiak, R.S.J., Frith, C.D. (2000). Navigation-related structural change in the hippocampi of taxi drivers. PNAS, April 11, 2000, vol. 97, no. 8, 4398-4403.
[3] American Academy of Audiology (2010). Guidelines for the Diagnosis, Treatment and Management of Children and Adults with Central Auditory Processing Disorder. American Academy of Audiology Clinical Practice Guidelines, page 3, 8/24/2010.
[4] Weihing, J.A., Musiek, F.E. (2007). Dichotic Interaural Intensity Difference (DIID) Training. In: Auditory Processing Disorders: assessment, management, and treatment. Plural Publishing, 2007, 284-285.
[5] Pickles, J.O. (1982). An introduction to the physiology of hearing. London: Academic Press.
[6] Rosenzweig, M. (1951). Representations of two ears at the auditory cortex. American Journal of Physiology, 167, 147-158.
[7] Tunturi, A. (1946). A study of the pathway from the medial geniculate body to the acoustic cortex in the dog. American Journal of Physiology, 147, 311-319.
[8] Hall, J. & Goldstein, M. (1968). Representations of binaural stimuli by single units in primary auditory cortex of unanesthetized cats. Journal of the Acoustical Society of America, 43, 456-561.
[9] Takeuchi, H., Taki, Y., Sassa, Y., Hashizume, H., Sekiguchi, A., Fukushima, A., Kawashima, R. (2010). White matter structures associated with creativity: evidence from diffusion tensor imaging. Neuroimage, 2010 May 15; 51(1): 11-8. Epub 2010 Feb 17.
[10] Banich, M.T. (1998). The Missing Link: The Role of Interhemispheric Interaction in Attentional Processing. Brain and Cognition, 36, 128-157.
[11] Scalf, P.E., Banich, M.T., Erickson, A.B. (2009). Interhemispheric interaction expands attentional capacity in an auditory selective attention task. Exp Brain Res, 2009 Apr; 194(2): 317-22.
[12] Velay, J.L., Daffaure, V., Giraud, K., Habib, M. (2002). Interhemispheric sensorimotor integration in pointing movements: a study on dyslexic adults. Neuropsychologia, 2002; 40(7): 827-34.
[13] Lazarev, V.V., Pontes, A., Mitrofanov, A.A., deAzevedo, L.C. (2010). Interhemispheric asymmetry in EEG photic driving coherence in childhood autism. Clin Neurophysiol, 2010 Feb; 121(2): 145-52.
[14] Lakmache, Y., Lassonde, M., Gauthier, S., Frigon, J., Lepore, F. (1998). Interhemispheric disconnection syndrome in Alzheimer’s disease. Proc Natl Acad Sci U S A, 1998 July 21; 95(15): 9042-9046.
[15] Amagdei, A., Balteş, F.R., Avram, J., Miu, A.C. (2010). Perinatal exposure to music protects spatial memory against callosal lesions. Int J Dev Neurosci, 2010 Feb; 28(1): 105-9. Epub 2009 Sep 6.
[16] Sridharan, D., Levitin, D.J., Chafe, C.H., Berger, J., Menon, V. (2007). Music and the Brain. Neuron, 55, 521-532, August 2, 2007.
[17] Trehub, S.E. (2003). The developmental origins of musicality. Nat. Neurosci, 7, 669-673.
[18] Cooper, G.W., and Meyer, L.B. (1960). The Rhythmic Structure of Music. University of Chicago Press.
[19] Lerdahl, F., and Jackendoff, R. (1983). A Generative Theory of Tonal Music. Cambridge, MA: MIT Press.
[20] Devlin, J.T., Raley, J., Tunbridge, E., Lanary, K., Floyer-Lea, A., Narain, C., Cohen, I., Behrens, T., Jezzard, P., Matthews, P.M., Moore, D.R. (2003). Functional Asymmetry for Auditory Processing in Human Primary Auditory Cortex. The Journal of Neuroscience, December 17, 2003, 23(37): 11516-11522.
[21] Lust, J.M., Geuze, R.H., Van de Beek, C., Cohen-Kettenis, P.T., Groothuis, A.G., Bouma, A. (2010). Sex specific effect of prenatal testosterone on language lateralization in children. Neuropsychologia, 2010 Jan; 48(2): 536-540.
[22] Kimura, D. (1961). Cerebral dominance and the perception of visual stimuli. Canadian Journal of Psychology, 15(3), 166-177.
[23] Tommasi, L., Marzoli, D. (2009). New research demonstrates humans' right ear preference for listening. 23 June 2009, Springer Science + Business Media.
[24] Ward, D. (2006). Stuttering and Cluttering: Frameworks for understanding treatment. Hove and New York City: Psychology Press.
[25] Gordon, N. (2002). Stuttering: incidence and causes. Developmental Medicine and Child Neurology, 44(4): 278-81.
[26] Guitar, B. (2005). Stuttering: An Integrated Approach to Its Nature and Treatment. San Diego: Lippincott Williams & Wilkins.
[27] Langille, K. (2008). Frequency Discrimination, the Mismatch Negativity ERP, and Cognitive Abilities. Thesis, April 2008, The Department of Psychology, St. Thomas University, Fredericton, Canada.
[28] France, S.J., Rosner, B.S., Hansen, P.C., Calvin, C., Talcott, J.B., Richardson, A.J., Stein, J.F. (2002). Auditory frequency discrimination in adult developmental dyslexics. Perception & Psychophysics, 2002, 64(2), 169-179.
[29] Mistlberger, R.E., Bergmann, B.M., Rechtschaffen, A. (1987). Relationships among wake episode lengths, contiguous sleep episode lengths, and electroencephalographic delta waves in rats with suprachiasmatic nuclei lesions. Sleep, 10(1), 12-24.
[30] Clarke, A.R., Barry, R.J., McCarthy, R., Selikowitz, M. (2001). EEG-defined subtypes of children with attention-deficit/hyperactivity disorder. Clinical Neurophysiology, 1 November 2001, volume 112, issue 11, 2098-2105.
[31] Buzsáki, G. (2005). Theta rhythm of navigation: link between path integration and landmark navigation, episodic and semantic memory. Hippocampus, 15(7): 827-40.
[32] Weiss, S., Müller, H.M., Rappelsberger, P. (2000). Theta synchronization predicts efficient memory encoding of concrete and abstract nouns. Neuroreport, 3 August 2000, volume 11, issue 11, 2357-2361.
[33] Lazzaro, I., Gordon, E., Li, W., Lim, C.L., Plahn, M., Whitmont, S., Clarke, S., Barry, R.J., Dosen, A., Meares, R. (1999). Simultaneous EEG and EDA measures in adolescent attention deficit hyperactivity disorder. Int J Psychophysiol, 1999 Nov; 34(2): 123-34.
[34] Kliempt, P., Ruta, D., Ogston, S., Landeck, A., Martay, K. (1999). Hemispheric-synchronisation during anaesthesia: a double-blind randomised trial using audiotapes for intra-operative nociception control. Anaesthesia, 1999, 54, 769-773.
[35] Carter, J.L., Russell, H.L. (1981). A Pilot Investigation of Auditory and Visual Entrainment of Brain Wave Activity in Learning Disabled Boys. Paper presented at the Annual International Convention of The Council for Exceptional Children (59th, New York, April 1981, Session A-3).
[36] Blanchet, A., Batt, M., Trognon, A., Masse, L. (2005). The hidden structure of interaction: from neurons to culture patterns. Amsterdam: IOS Press, 2005.
[37] Heffner, M., Greco, L.A., Eifert, G.H. (2003). Pretend You Are a Turtle: Children's Responses to Metaphorical versus Literal Relaxation Instructions. Child & Family Behavior Therapy, Volume 25, Issue 1, 2003.
